Really appreciate your work, sir. I too believe in compounding. But I still don't know why it sometimes feels hard to continue.. I'm still learning coding the old way. I feel that someday it will give me some edge, and I just love coding the old way. Sometimes I feel anxious and kind of unsure about my approach, but I have decided to continue what I am doing. I'm still young, in my 20's, so I think it's ok to explore as long as I enjoy doing it. Thank you for sharing your work, sir. Hope you keep learning and growing.
Same here. Farmer now, former network engineer and software project lead, but I stopped programming almost 20 years ago.
Now I build all sorts of apps for my farm and organizations I volunteer for. I can pound out an app for tracking sample locations for our forage association's soil sample truck, another for moisture monitoring, a fleet task/calendar/maintenance app in hours, and iterate on them when I think of features.
And git was brand new when I left the industry, so I only started using it recently to any extent, and holy hell, is it ever awesome!
I'm finally able to build all the ideas I come up with when I'm sitting in a tractor and the GPS is steering.
Seriously exciting. I have a hard time getting enough sleep because I hammer away on new ideas I can't tear myself away from.
100% this, too. I am an IT professional - CTO for a large-ish enterprise (25-30bn yearly revenue). I have found myself waking up at 4am every single day for the last 2 months to vibe code stuff I always wanted to build for myself, my family and friends, and never quite had the time for. My sleep habits are definitely suffering but my happiness is through the roof.
100% this. This is the new age of software - but it will be tiny little apps like this for each little user. They don't need to be mega apps, etc. Bespoke little apps that help your own little business or corner of the world.
I'm teaching my kid what I consider the AI dev stack: AI IDE (Antigravity for us), database (Supabase for us, with a nice MCP server), and deployment (GitHub and Vercel for us). You can make wonderful little integrated apps with this in hours.
Love to hear about what tech is like on farms today. Do you run into the problems with fixing tractors and equipment where it's all locked down with DRM and you can't fix it without hacking the software?
Moving slightly in the other direction: after 17 years of science and tech optimism, I see myself turning into a Luddite more and more.
First observation: the communication and social aspects of software seem crucial for success and proliferation.
And next came this: technology seems unable to solve any socio-economic problems, and rather aggravates them.
I recommend you get a proper Claude subscription. You can spend $100 a month for Max and get way more API usage out of it, or for $17, if you are patient about hitting limits, it's still way cheaper than using the API directly.
I have a similar experience, but it's more that AI lets me build the side projects I only have time to research, not the time or energy to actually code. I get to review the code and have Claude inspect it (most people, I feel, don't have Claude do code audits) and tell me where there are bugs, security issues, etc. I do this routinely enough.
This is more than just a bad side project - it's borderline malicious.
How confident is the OP that every single one of these 60 calculators work all the time, with all edge cases? Because if someone is on your website using your calculator, they are putting trust in you. If it's wrong, it could have downstream impacts on them. I hope every single one has a comprehensive set of tests with good edge cases. But realistically will they?
I'm actually pretty pro-AI development. But if you're going to use AI to help develop a website, at least focus on quality rather than quantity. AI makes quantity easy, but quality is still hard.
As an aside, the website doesn't even work for me. My clicks don't do anything.
> How confident is the OP that every single one of these 60 calculators work all the time, with all edge cases?
The compound interest calculator, which is their 'favorite page', already shows an incorrect value in the graph. So my faith in the other calculators isn't great. I also kinda doubt OP's story of them using that page all the time, since it took me about 20 seconds to find this issue.
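For reference, the math such a calculator should reproduce is standard; here is a minimal sketch of the compound interest formula with periodic contributions (my own illustration, not OP's code, and the names are just placeholders):

    // Minimal sketch of standard compound interest math (illustrative only).
    // P: principal, r: annual rate (e.g. 0.07), n: compounding periods per year,
    // t: years, c: contribution added at the end of each period.
    function futureValue(P: number, r: number, n: number, t: number, c = 0): number {
      const i = r / n;          // periodic rate
      const periods = n * t;    // total number of compounding periods
      const growth = Math.pow(1 + i, periods);
      const principalPart = P * growth;
      // Future value of an ordinary annuity (end-of-period contributions).
      const contributionPart = i === 0 ? c * periods : c * ((growth - 1) / i);
      return principalPart + contributionPart;
    }

    // Example: $10,000 at 7% compounded monthly for 10 years, $100/month added,
    // comes out to roughly $37.4k with this formula.
    console.log(futureValue(10_000, 0.07, 12, 10, 100).toFixed(2));

Any point on the graph that disagrees with a closed-form check like this is a bug, which is exactly the kind of test each calculator ought to have.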
This. I have so many things to say about the site, but have been withholding them in fear of "posting shallow dismissals, especially of other people's work"
I am still an "Engineer", but for years it has been mostly meetings and architecture, so I had the same experience as you with vibe coding: I can get some of my ideas down quickly with my limited time available, while still applying my engineering knowledge to drive the agents. It has been really enjoyable to get actual ideas out without hitting the usual walls of blockers just getting things running. I know many people enjoy those problems, but I am one of those who, after a day of solving hard problems, wants to enjoy getting my personal ideas out. I wrote about one I built over Christmas: https://michaeldugmore.com/p/family-planner-vibe-coding-rule...
Creating a polished, usable app is just so much work, and so much of it isn't fun at all (to me). There are a few key parts that are fun, but building an intuitive UI, logging, error handling, documentation, packaging, versioning, containerization, etc. is so tedious.
I'm bewildered when I read posts by the naysayers, because I'm sitting here building polished apps in a fraction of the time, and they work. At least much better than what I was able to build over a couple of weekends. They provide real value to me. And I'm still having fun building them.
I have now vibe coded three apps, two of them web apps, in Rust, and I couldn't write a "Hello World" in Rust if you held a gun to my head. They look beautiful, are snappy, and it being Rust gives me a lot of confidence in their correctness (feel free to disagree here).
Of course I wouldn't vibe code in a serious production project, but I'd still use an AI agent, except I'd make sure I understand every line it puts out.
I can understand you don't want to spend effort for throwaway code.
> in a serious production project, but I'd still use an AI agent, except I'd make sure I understand every line it puts out.
That isn't going to cut it. You need to understand the problem domain, have a deep design taste to weigh current and future demands, form a conceptually coherent solution, formalize it to code, then feed back from the beginning. There is no prompt giving your AI those capabilities. You end up with mediocre solutions if you settle for understanding every line it spits out. To be fair, many programmers don't have those capabilities either, so it's also a question of quality expectations.
I believe you can use LLMs as advanced search and as a generator for boilerplate. People who like it easy also tend to be easy on quality attributes, so everyone should be self-aware about where they are on that spectrum.
> Creating a polished, usable app is just so much work, and so much of it isn't fun at all (to me).
Then don’t do it. No one is forcing you. Are you also going to complain that building airplanes and ensuring food safety are too much work and not fun for you? Not everything needs to be, or should be, dumbed down to appeal to the lowest common denominator.
Alternatively, go work at a company where you’re part of a team and other people do what you do not enjoy.
> I'm sitting here building polished apps in a fraction of the time
No, no you are not, guaranteed. “Polishing” means caring about every detail to make it perfect. If you’re letting the LLM make most of it, by definition it’s not polished.
Also, no one is keeping me from doing what I want with my time on my days off.
> Are you also going to complain that building airplanes and ensuring food safety are too much work and not fun for you?
No, because this isn't remotely comparable to weekend hobby projects. What a weird question.
> No, no you are not, guaranteed. “Polishing” means caring about every detail to make it perfect. If you’re letting the LLM make most of it, by definition it’s not polished.
I guess we have different definitions of "polished" then.
> No, because this isn't remotely comparable to weekend hobby projects.
I agree. But those also don’t need:
> intuitive UI, logging, error handling, documentation, packaging, versioning, containerization, etc. is so tedious.
Some of that, sure, but not all of it. Either it’s a weekend hobby project or it’s not, and your description is conflating both. A hobby is something done for fun.
> Of course I wouldn't vibe code in a serious production project, but I'd still use an AI agent, except I'd make sure I understand every line it puts out.
So you value your ability to churn out insignificant dreck over the ability of others to use the internet? Because that's the choice you're making. All of the sites that churn your browser for a few seconds because they're trying to block AI DDoS bots, that's worth your convenience on meaningless projects? The increased blast radius of Cloudflare outages, that's a cost worth foisting onto the rest of the internet for your convenience?
This is such a... unique angle. Of all the things to get angry at AI for, web crawlers and the impact on Cloudflare outages are the ones that really grind your gears?
He said fun, not easy. Sometimes it's precisely doing brainless stuff over and over again that becomes hard, like writing a template displaying a table of your results or implementing filter and pagination on a web app. I don't feel like I'm growing anymore when doing those things. Or even for some tests. Or when you need a Bash script automating menial stuff. (Still you could find new perspective on things.)
> Sometimes it's precisely doing brainless stuff over and over again that becomes hard, like writing a template displaying a table of your results or implementing filter and pagination on a web app.
I always have a hard time taking this complaint seriously, because the solution is absolutely trivial. Write a snippet. Have you really been out there, year after year, rewriting the same shit from scratch over and over? Just make a snippet. Make it good and generic and save it. Whenever you need to do something repeated on a new project, copy it (or auto-expand if you use it that often) and adapt. Snippet managers are a thing.
Or better yet, refactor your app so it doesn't require so much boilerplate - surely if you're doing the same thing over and over again you can just extract it into its own function / method and abstract over it.
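For example, the pagination case boils down to a small helper you write once and reuse (a generic sketch; the names here are made up, not from any particular project):

    // A generic pagination helper: the kind of boilerplate worth extracting
    // once instead of rewriting per project. All names are illustrative.
    interface Page<T> {
      items: T[];
      page: number;       // 1-based page index
      totalPages: number;
      totalItems: number;
    }

    function paginate<T>(items: T[], page: number, pageSize: number): Page<T> {
      const totalItems = items.length;
      const totalPages = Math.max(1, Math.ceil(totalItems / pageSize));
      const clamped = Math.min(Math.max(1, page), totalPages);
      const start = (clamped - 1) * pageSize;
      return {
        items: items.slice(start, start + pageSize),
        page: clamped,
        totalPages,
        totalItems,
      };
    }

    // paginate(results, 2, 25) returns the second page of 25 results,
    // plus the totals a table template needs to render page controls.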
Same here. I’m an AI professor, but every time I wanted to try out an idea in my very limited time, I’d spend it all setting things up rather than focusing on the research. It has enabled me to do my own research again rather than relying solely on PhD students. I’ve been able to unblock my students and pursue my own projects, whereas before there were not enough hours in the day.
I'm not a bot. I'm not a native English speaker; I taught myself English, so I tried to use AI to translate what I really want to say. (These words I am typing myself instead of using AI.)
If that’s the case, then mentioning that you used LLMs to help translate/organise what you want to say might make your messages be taken a bit better by others.
If you want to use LLMs to help express something you don’t know the words for in English then that is a good use for LLMs, if it’s called out. Otherwise your messages scream LLM bot to native speakers.
“You’re absolutely right”, “That hits different”, “Good call!” “–“ are all classic LLM giveaways.
I’m not a moderator here, so you don’t have to listen to me either way.
I, on the other hand, am becoming gradually but strongly disillusioned with, and importantly also feeling disenfranchised from, coding and the world around it.
For myself, I’ve always enjoyed “getting my hands dirty” with code, and the advent of LLMs has been a boon. I’m retired from 34 years of coding (and managing), and never skipped a beat. I’ve released a few apps since retiring. I’m currently working on the first app that incorporates a significant amount of LLM assistance. It’s a backend admin tool, but I’ll probably consider using the same methodology for more public-facing stuff in the future.
I am not one to just let an LLM write a whole app or server, unsupervised (I have control issues), but have allowed them to write whole functions, and help me to find the causes of bugs.
What LLMs have given me, is a decreased hesitance to trying new things. I’ve been learning new stuff at a furious rate. My experience makes learning very fast. Having a place to ask questions, and get [mostly] good answers (experience helps me to evaluate the answers), is a game-changer.
> “A ship in harbor is safe, but that is not what ships are built for.”
–John A. Shedd
> Stack: Next.js, React, TailwindCSS, shadcn/ui, four languages (EN/DE/FR/JA). The AI picked most of this when I said "modern and clean."
I’m not an AI hater but I do see this as evidence of LLMs being susceptible to chasing trends as much as people.
Next.js with server rendered React is not a stack that an experienced web developer would have recommended for a “clean” solution to a collection of financial calculators. It’s the answer you’d get if you asked for the stack that’s trending the most lately.
TBH, that's pretty much the stack I'd pick if I were building anything new by hand. If you look at the site, there's a lot going on and Next + React + Tailwind does not seem so crazy.
These are all quite reliable well-understood components, and far from "chasing trends" IMO.
Could be, but that stack also happens to solve a lot of problems totally inexperienced people will struggle with (such as, to not look too far, CORS issues). Good reco for a non-tech person to build a frontend. And besides, who cares which stack is used as long as it's used. It's not like this will ever be maintained. If anything, if a need for a new feature emerges 5 months down the road, the whole thing can be re-done from scratch in one sitting without writing a single line of code.
The key phrase here is "I still had domain expertise". Many miss that AI is a multiplier. If you multiply 0 by AI, you get 0 (or hallucinated garbage). You multiplied your knowledge of compound interest and UX by AI's speed.
Without your background, the AI would have generated a beautiful interface that calculates mortgages using a savings account formula. Your role shifted from "code writer" to "logic validator" - this is the future of development for domain specialists
Similar path here - studied physics, worked in accounting/finance for years, hadn't shipped code in forever. The thing that clicked for me wasn't the AI itself but realising my domain knowledge had actually been compounding the whole time I wasn't coding.
The years "away" gave me an unusually clear picture of what problems actually need solving vs what's technically interesting to build. Most devs early in their careers build solutions looking for problems. Coming back after working in a specific domain, I had the opposite - years of watching people struggle with the same friction points, knowing exactly what the output needed to look like.
What I'd add to the "two camps" discussion below: I think there's a third camp that's been locked out until now. People who understand problems deeply but couldn't justify the time investment to become fluent enough to ship. Domain experts who'd be great product people if they could prototype. AI tools lower the floor enough that this group can participate again.
The $100 spent on Opus to build 60 calculators is genuinely good ROI compared to what that would have cost in dev hours, even for someone proficient. That's not about AI replacing developers - it's about unlocking latent capability in people who already understand the problem space.
The Turing Test is not really science (not an infallible test with a measurable outcome). An AI might never be able to pass the TT for all humans; it just gets to be a higher-definition AI. That makes the TT a technology.
I'm at the opposite end. I feel AI is sucking all the joy out of the profession. Might pivot away and perhaps live a simpler life. Only problem is that I really need the paycheck :(
While my projects have not touched agentic AI yet, and the type of code I have been writing is produced like back in the day (read documentation, write code, read documentation, write code ...), I expect that my next project will tether me to agentic AI systems more. I still have my hobby projects, which I code the old-fashioned way. Hey, at least it costs me much less than $100/month to tinker on projects ... more like the cost and wear of running my laptop!
There are people here saying "I can finally get all my ideas done!" Sure, if they are really important enough, I guess. But high technology is much, much less important to me than it is to my employer or probably to others here on HN. I can only be concerned with the paycheck at this point. And at this point, they are happy that I can read documentation, write code, read documentation, write code, and don't care how it gets done. (For what I am working on, though, I'd just skip the AI training step.)
With that in mind, I like to use PLs as tools to clarify thinking. There are others who see PLs and their accompanying tools as friction toward their goals, but my friction is understanding the problems I am trying to solve. So, while taking the adventure into automated tooling might be interesting, it doesn't replace that friction (it just adds the feeling that I have to read more potential garbage code).
Yup. I worked very hard, and for many years, to acquire skill in designing and writing systems. It is an art. And it is very disheartening to see people without any skills behave the way they do. For now, the work I do cannot be replicated by these people, but I do not have such high hopes for the distant future. Though by the point it can truly be automated, I think it will be automating a large majority of non-physical jobs (and the physical ones will likely be getting automated by then too).
On the plus side, vibe coding disaster remediation looks to be a promising revenue stream in the near future, and I am rubbing my hands together eagerly as I ponder the filthy lucre.
> On the plus side, vibe coding disaster remediation looks to be a promising revenue stream in the near future, and I am rubbing my hands together eagerly as I ponder the filthy lucre.
I don't think it will be; a vibe coder using Gas Town will easily spit out 300k LoC for an MVP TODO application. Can you imagine what it will spit out for anything non-trivial?
How do you even begin to approach remedying that? The only recourse for humans is to offer to rebuild it all using the existing features as a functional spec.
There's a middle ground here that you're not considering (at least in the small amount of text). Vibe coders will spit out a lot of nonsense because they don't have the skills (or choose not to) to tweak the output of their agents. A well-seasoned developer using tools like Claude Code on such a codebase can remediate a lot more quickly at this point than someone not using any AI. The current best practices are akin to thinking like a mathematician with regards to calculator use, rather than like a student trying to just pass a class. Working in small chunks and understanding the output at every step is the best approach in some situations.
That's very true. The LLM can be an accelerator for the remediator, too, with the value-add coming from "actually knowing what they're doing", much as before.
> How do you even begin to approach remedying that? The only recourse for humans is to offer to rebuild it all using the existing features as a functional spec.
There are cases where that will be the appropriate decision. That may not be every case, but it'll be enough cases that there's money to be made.
There will be other cases where just untangling the clusterfuck and coming up with any sense of direction at all, to be implemented however, will be the key deliverable.
I have had several projects that look like this already in the VoIP world, and it's been very gainful. However, my industry probably does not compare fairly to the common denominator of CRUD apps in common tech stacks; some of it is specialised enough that the LLMs drop to GPT-2 type levels of utility (and hallucination! -- that's been particularly lucrative).
Anyway, the problem to be solved in vibe coding remediation often has little to do with the code itself, which we can all agree can be generated in essentially infinite amounts at a pace that is, for all intents and purposes, almost instantaneous. If you are in need of vibe coding disaster remediation consulting, it's not because you need to refactor 300,000 lines of slop real quick. That's not going to happen.
The general business problem to be solved is how to make this consumable to the business as a whole, which still moves at the speed of human. I am fond of a metaphor I heard somewhere: you can't just plug a firehose into your house's plumbing and expect a fire hydrant's worth of water pressure out of your kitchen faucet.
In the same way, removing the barriers to writing 300,000 lines isn't the same as removing the barriers to operationalising, adopting and owning 300,000 lines in a way that can be a realistic input into a real-world product or service. I'm not talking about the really airy-fairy appeals to maintainability or reliability one sometimes hears (although, those are very real concerns), but rather, how to get one's arms around the 300,000 lines from a product direction perspective, except by prompting one's way into even more slop.
I think that's where the challenges will be, and if you understand that challenge, especially in industry- and domain-specific ways (always critical for moats), I think there's a brisk livelihood to be made here in the foreseeable future. I make a living from adding deep specialist knowledge to projects executed by people who have no idea what they're doing, and LLMs haven't materially altered that reality in any way. Giving people who have no idea what they're doing a way to express that cluelessness in tremendous amounts of code, quickly, doesn't really solve the problem, although it certainly alters the texture of the problem.
Lastly, it's probably not a great time to be a very middling pure CRUD web app developer. However, has it ever been, outside of SV and certain very select, fortunate corners of the economy? The lack of moat around it was a problem long before LLMs. I, for example, can't imagine making a comfortable living in it outside of SV engineer inflation; it just doesn't pay remotely enough in most other places. Like everything else worth doing, deep specialisation is valuable and, to some extent, insulating. Underappreciated specialist personalities will certainly see a return in a flight-to-quality environment.
I've read your whole reply and agree with most of it; what I don't agree with (or don't understand) is below:
> If you are in need vibe coding disaster remediation consulting, it's not because you need to refactor 300,000 lines of slop real quick. That's not going to happen.
My experience as a consultant to business is that they only ever bring in consultants when they need a fix and are in a hurry. No client of mine ever phoned me up to say "Hey, there, have you any timeslots next week to advise on the best way to do $FOO?", it's always "Hey there, we need to get out an urgent fix to this crashing/broken system/process - can we chat during your next free slot?".
> Like everything else worth doing, deep specialisation is valuable and, to some extent, insulating.
I dunno about this - depends on the specialisation.
They want a deep specialist in K8? Sure, they'll hire a consultant. Someone very specialist in React? They'll hire a consultant. C++ experts? Consultants again.
Someone with deep knowledge of the insurance industry? Nope - they'll look for a f/timer. Someone with deep knowledge of payment processing? No consultant, they'll get a f/timer.
> My experience as a consultant to business is that they only ever bring in consultants when they need a fix and are in a hurry.
No, that's fair, and I think you're right about that. But refactoring 300,000 lines 'real quick' isn't going to happen, regardless of that. :)
> They want a deep specialist in K8? Sure, they'll hire a consultant. Someone very specialist in React? They'll hire a consultant. C++ experts? Consultants again.
I implicitly had narrow technical specialisations in mind, albeit including ones that intersect with things like "insurance industry workflows".
For what I am vibe coding, my normal work process is: build a feature until it works, get decent test coverage, then ask Claude to offer a code critique and propose refactoring ideas. I review them and decide which to implement. It is token-heavy but produces good, elegant codebases at the scales I am working on for my side projects. I do this for every feature that is completed, and have it maintain design docs that document the software architecture choices made so far. It largely ignores them when vibing very interactively on a new feature, but it does help with the regular refactoring.
In my experience, it doubles the token costs per feature but otherwise it works fine.
I have been programming since I was 7 - 40 years ago. Across all tech stacks, from barebones assembly through enterprise architecture for a large enterprise. I thought I was a decently good coder, programmer and architect. Now, I find the code Claude/Opus 4.5 generates for me to be, in general, of higher quality than anything I ever made myself.
Mainly because it does things I'd be too tired to do, or never bother with, because why expend energy on refactoring something that is perfectly working and not to be further developed.
Btw, it's a good teaching tool. Load a codebase or build one, and then have it describe the current software architecture, propose changes, explain their impact, and so on.
The amount of software needed and the amount being written are off by many orders of magnitude. It has been that way since software's inception and I don't see it changing anytime soon. AI tools are like having a jr dev to do your grunt work. Soon it will be like a senior dev. Then like a dev team. I would love to have an entire dev team to do my work. It doesn't change the fact that I still have plenty of work for them to do. I'm not worried AI will take my job; I will just be doing bigger jobs.
> Do you not fear that future/advanced AI will be able to look at a vibe-coded codebase and make sensible refactors itself?
This is a possibility in very well-trodden areas of tech, where the stack and the application are both banal to the point of being infinitely well-represented in the training.
As far as anything with any kind of moat whatsoever? Here, I'm not too concerned.
I am no longer sure that's the case. I had it chew through a gnarly problem with my own custom WebRTC implementation on an ESP32 SoC. It did not rely on any existing documentation, as this stuff is quite obscure - it relied on me pointing to specs for WebRTC, specs for the ESP32 SDK, and quite some prompting. But it solved the problems I was dreading to solve manually in a single 2-hour session. That's for a hobby project; we are now starting to experiment using this in the enterprise, on obscure and horrible-to-work-with platforms (such as some industry-specific Salesforce packages). I think Claude can work effectively with existing code and specs on things that would never have made it to Stack Overflow before.
> And it is very disheartening to see people without any skills to behave the way they do.
The way they do, which is? I've skimmed the comments and a lot of them are hate and hostility towards OP's project and towards coders "without skill" in general, along with denial, because there's no way anything vibe-coded worked. At best, there is strong tribalism on both ends.
There is definitely tribalism. I think a lot of the negativity comes from people who recognize the long-term goals of these companies, which aren't just a threat to tech workers. Right now, these models are a threat to people who worked hard and invested their time, while letting inexperienced or lazy people appear more competent. I think that less experienced developers (or people who don't care anymore, or maybe never did) see what an LLM can do and immediately believe it will solve all their problems - that if you are not embracing this with full force you are going to be left behind.
You might see more opposing views in this thread, but if you browse this site often you'll see both sides.
Those embracing it heavily do not see the nuances of carefully creating maintainable solutions, planning for and recognizing tech debt, and knowing where it's acceptable short term. They are also missing the theory-building behind what is being created. Sure, AI models might get even better and could solve everything. But I think it's naive to think that will be generally good for 90% of the population, including people not in tech.
Using these models (text or image) devalues the work of everyone in more than one way. It is harmful for creative work and human expression.
This tech, and a lot of tech, especially tech built by large corporations for profit extraction and human exploitation, is very unlikely to improve lives at a population level long term. The same can be said for a lot of tech (e.g. social media = powerful propaganda). The goal of the people creating these models is to not need humans for their work. At which point I don't know what would happen - kill the peasants?
I feel it's nice to use AI coding for side projects, especially after work when I am kind of tired. Although the one issue is that if it gets stuck in a loop, or just does not get what is wrong and does the wrong thing no matter how you twist it, then you have to go into the weeds to fix it yourself, and it feels so tiresome. At that point I think: what if I had just done everything myself, so my mental model would be better?
Also, we are still designing systems and have to be able to define the problem properly. At least in my company, when we look at the velocity of delivering projects, it is barely up since AI, because the bottlenecks are elsewhere.
> Do you truly believe it won't get better, maybe even better at whole system design and implementation than people?
What are you calling "growth"? Adoption, or LLM progress? LLM progress has objectively slowed down, and for rather obvious reasons. The leaps from GPT-2 to GPT-4 can't be reprised forever.
It will get better, but the rate at which it does may not continue to be exponential. Past performance is not indicative of future results.
While the agents models seem to continue to improve, I think LLMs as a whole have started seeing less and less benefits from the current scaling approaches.
I think what we currently have is pretty close to the ceiling for LLMs. But with the amount of money being spent there might be a new breakthrough (not LLM-based).
It must depend on the person. I’ve been coding for all my life but have never been GOOD. I thoroughly enjoy coding, despite being frustrated many times.
Literally yesterday I remarked to my tech friends how fun coding with CoPilot is. I actually make forward progress now, and I understand all that the agent is doing.
For me, coding is an enjoyable means to an end. I do enjoy the process, but I enjoy the results more.
You could read the syntax and see what it logically did. But you likely don't always know why it did something, and you definitely don't know why another way wasn't chosen (maybe that way would have better aligned with your long term goals)
> They've even got their own slogan: "you're probably just not prompting it properly"
That's the same energy as telling other professions to "just learn to code, bro" once they are displaced by AI.
But I guess it doesn't feel nice once the shoe is on the other foot, though. If nobody values the quality of human art, why should anybody value the quality of human code?
>That's the same energy as telling other professions to "just learn to code, bro" once they are displaced by AI. But I guess it doesn't feel nice once the shoe is on the other foot, though.
It's the exact same neoliberal elites who told everyone to code one year and told them they'd all be automated out of a job the next year.
I dunno who you think you're railing against, but it isn't me.
Still waiting for the 100% vibe coded trading bot.
I'm in this field and my system was heavily built with Claude, though not via vibe coding - more like a junior supporting me.
I do not see anyone connecting a vibe coded bot to a real account soon, since when it's about real money, people will hesitate. And if you have blown up one account with your vibe coded bot while you are not a professional dev, you will lose interest very quickly - such systems do not contain "just a few thousand lines of code". Sure, you could speed up development massively and "hit the rock sooner rather than later" when vibe coding here :-D
Vibecoder here. I don't think so. I am a PE investor, and we are using it in our small portfolio companies to great effect. We can make small little mini-apps that do one thing right and help automate away extra work.
It's a miracle. Simply wouldn't have been done before. I think we'll see an explosion of software in small and midsize companies.
I admit it may be crappy software, but as long as the scope is small - who cares? It certainly is better than the janky manual paper processes, excel sheets, or just stuff in someone's head!
Funnily enough, Excel is the quintessential example of a fourth generation language, IDE, and database and it's the only one aside from SQL which actually succeeded from its time period. It's software, just like what you're building now, and just like what you're building now there are good points and bad points about it. The tradeoffs are different between the JS / Python code you're likely spinning up now vs. the Excel code that was being spun up before, but they rhyme.
Don't care about the critics. What you're doing is what people were doing in the 80s with their new PCs and tools that democratized this kind of development, like BASIC and dBase.
Most developers are too full of themselves; in fact, most of us are a bunch of pretentious pricks. It is no wonder people are happy to be able to get what they want without our smugness and pretentiousness. Too bad some of us are not like that and will end up getting unemployed anyway in the next few years.
I think the parent is talking about the people who post to LinkedIn that "SWE as a profession is dead" non-stop. I fully agree with you that it massively lowered the cost to create, but I'd argue that the people saying SWE is dead wouldn't be able to get past the complexity barrier that most of us are accustomed to handling. I think the real winners will be the ones with domain expertise who didn't have the capacity to code (just like OP and you).
Correct. I think "real" software requires real development and architecture.
And to be honest, even the tiny apps I'm doing I wouldn't have been able to do without some background in how frontend / backend should work, what a relational database is, etc. (I was an unskilled technical PM in the dotcom boom in the 2000s so at least know my way around a database a little. I know what these parts of tech CAN do, but I didn't have the skills to make them do it myself.)
Curious why the janky manual paper processes, Excel sheets, or stuff not documented anywhere were fixed only when vibe coding became available. Was it just cost?
Time, and thus cost. Early in my career I would look across a fairly large company at processes being run on spreadsheets and see if it would be worth the time to create software to address them and whether those processes should be standardized. We barely scratched the surface of all the possible custom software opportunities for this company.
Cost and managerial overhead. We don't have a dev on staff. Even if we did, there is lots of managerial overhead to explain "the problem" and then iterate to a solution with a dev. Now you can just build the damn solution yourself!
1. Invoice billing review. Automated 80% of what was a manual process by providing AI suggestions in an automated way. Saved 3 hours per day of a manager's time. Increased topline by 10%. Dev time: 1 day.
2. Data dashboards. We use a janky SaaS that does not have APIs. Automated a scraper to log in, download the reports daily, parse and upload to a database, and build a dashboard. Used to take my associate 3 hours per week to do this in a crappy spreadsheet. Now I have it in a perfect database, much more frequently. Dev time: 4 hours.
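To make the second one concrete, that kind of pipeline is roughly the shape below (a generic, hypothetical sketch, not the actual code; the URLs, env vars, and the insertRows helper are all placeholders):

    // Hypothetical daily report sync: log in, download, parse, hand off to storage.
    // Everything here (endpoints, fields, helpers) is a placeholder.
    async function runDailyReportSync(): Promise<void> {
      // 1. Authenticate against the SaaS (placeholder endpoint and credentials).
      const login = await fetch("https://example-saas.invalid/api/login", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ user: process.env.SAAS_USER, pass: process.env.SAAS_PASS }),
      });
      const { token } = (await login.json()) as { token: string };

      // 2. Download the daily report (assumed to be CSV in this sketch).
      const report = await fetch("https://example-saas.invalid/api/reports/daily", {
        headers: { Authorization: `Bearer ${token}` },
      });
      const csv = await report.text();

      // 3. Naive parse; a real report usually needs a proper CSV parser.
      const rows = csv.trim().split("\n").slice(1).map((line) => line.split(","));

      // 4. Hand off to storage; insertRows stands in for whatever DB client you use.
      await insertRows("daily_report", rows);
    }

    // Placeholder for the database layer the dashboard reads from.
    declare function insertRows(table: string, rows: string[][]): Promise<void>;

Run on a daily schedule (cron or a hosted scheduler), and the dashboard then just queries the table.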
We are attacking little problems all across the business now.
Everything is still touched by a human - AI is just giving suggestions to humans to speed them up. It can get them 80-90% there.
I think you also need to compare it to what was already there. No QA on the humans. Done off the side of their desk with no oversight, process, or checking. Huge amounts of manual errors.
The new solution just needs to be better than the old one, it doesn't need to be perfect.
(But I 100% agree that I wouldn't let AI live against customers. It is helping us build automations faster, and doing a "little" thinking on recommendation rules that would be very hard to implement without something highly structured, which would be frankly impossible in our environment.)
> I think also you need to compare it to what was already there
No. The bar is "miracle" - it can cure cancer, it can replace all developers, etc. The bar is much higher than existing manual processes. It absolutely needs to be perfection to match the lofty claims.
"Miracle" was meant figuratively here; especially for non-tech people this wording seems plausible from their perspective, because they can now do this without dev support.
I guess vibe coding cleanup firms and offensive security researchers are waiting to find the bugs that cost firms millions of dollars, or the one that creates a dreadful data breach.
Yep. How do we define AI as a replacement for a search engine, a templating engine, and an inference engine (do X in Y)?
Is there a term for that?
AI at our fingertips, accessible and useful, that's just a tool, that's not redefining us as an industry and denying people's jobs – that's an asset. (I used an em dash to prove I am not AI, as apparently the double dash is now a sign of AI text!)
Agree 100%; and the analogy with SEO gurus is spot on! They were everywhere 20 years ago. They're mostly gone, and so are their secret recipes and special tags and whatnot. AI gurus are the same! Not the same people, but the same profile. It's so obvious.
"Comment NEAT to receive the link, and don't forget to connect so I can email you" -- this is the most infuriating line ever.
I'm in a similar position. At some point in the past few months I just stopped coding in my hobby time altogether. I'm almost 45 and not sure what else I could do, though. Hope you figure something out!
I'd recommend a pivot to hardware. I'm in the FPGA sector, and vibe coding isn't a thing for the most part, simply because the determinism required doesn't lend itself well to LLMs. It's so incredibly easy to introduce a bug at every single step, and the margin for error depending on volumes is near zero. You're often playing with a single clock cycle of headroom. I've yet to play with a single LLM (Claude Opus 4.5 is my latest trial) that doesn't introduce a massive amount of timing errors. Most semiconductor IP is proprietary, top-level secret, code never leaves the building. The data to build good models just isn't there like it is for software and the open-source ecosystem.
In comms, they have something like a 1:4 ratio of design to validation engineers. Defence is slightly different, as it depends on the company, but generally the tolerance for bugs is zero. Let's not get started on the HF trading folks and their risk appetite!
There's a lot of room for software engineers. Most FPGAs are SoC devices now, running some form of embedded Linux doing high-level task management and networking. Provided you know enough Verilog to know your way around, you'll be fine. You're also in a space where most engineers I know are preparing to retire in the next 5-10 years, so there will be a panic which will ripple across industries.
I jumped mid-career, and there were a few places I started before diving into live hardware projects (which is the only way to go from student to practitioner).
Consider security engineering. It requires constantly thinking about unconventional ways to attack systems, and taking advantage of the common coding mistakes LLMs produce as often as humans do, because they learned from humans.
Security engineers will have jobs until software is perfectly secure... and that is going to be a while.
I do not use LLMs at all to do my job, and it is unlikely I ever would. Clients pay me -after- they had all their favorite LLMs take a pass.
I have friends in security audits and the business model is great. The clients need external companies to give a stamp of approval for their cyber insurance. Also, it's hard to find security holes but rather easy to validate them, and it doesn't matter how ugly they are; it's just whether you can get in or not.
And indeed the vibe coders will just create a lot more security issues
I don't get this sentiment; regressions still exist, and you can't just prompt them away. A programmer will spend 10x more time on fixing regressions, bug fixing and improvements than on scaffolding in most projects that people pay for. If most of your time at work is not spent doing this, then you are already living a simple life.
I feel the same way. The only way I found that lets me cope with this is by having 1-2 personal projects, closed source, with me as the only user, where I slowly build things the way I enjoy, and where the outcome is useful software that doesn't try to monetise at the expense of the end user.
Came here to say this. I've been programming since I was 9, and it always had a strong aesthetic, artistic and creative dimension. That dimension has always been in tension with the economic demands of adult life, but I was good at finding the quiet corners in which to resolve it.
A lot of work was tedious, painstaking grind, but the reward at the end was considerable.
AI has completely annihilated all of the joy I got out of the process, and everything that attracted me to it with such abandon as an adolescent and a teenager. If someone had told me it was mostly slop curation, I would have stayed in school, stuck to my philosophy major, and who knows -- anything but this. I'm sure I'd have got reasonably far in law, too, despite the unpropitious time to be a JD.
I'm very much in a similar boat to you - I'm also considering a pivot away from SWE if this is what it's going to become. Luckily I'm still young and don't have anyone depending on me (other than myself).
I'm still working on my own small closed source projects, building them the way I want to, like a gameboy emulator - and I've gotten a lot of joy from those.
I think deskilling is an underrated concern. Programming among the competent is a mind-body experience and a matter of motor memory and habits of mind, and LLMs make you extraordinarily lazy.
No matter how 'senior' you are, when you lose touch with the code, you will, slowly, lose the ability to audit what LLMs spit out, while the world moves on. You got the ability to do that by banging your head against code the hard, "pre-AI" way, perhaps for decades, and if you don't do the reps, the muscle will atrophy. People who think this doesn't matter anymore, and you can just forget the code and "embrace exponentials" or whatever, are smoking the good crack; it _is_ about the code, which is exactly why LLMs' ability to write it is the object of such close examination and contestation.
Folks who realise this will show to advantage in the longer run. I don't mean that one shouldn't use LLMs as an accelerant -- that ship has sailed, I think. However, there is a really good case to be made for writing a lot by hand.
Yup. The things I disliked most about programming were hyped-up bullshit and losing autonomy.
These existed before but the culture surrounding AI delivered a double dose of both.
I have no problems with LLMs themselves or even how they are used but it has developed its own religion filled with dogma, faith based reasoning and priests which is utterly toxic.
The tools are shoved down our throats (thanks to the priesthood, AI use is now a job performance criteria) and when they fail we are not met with curiosity and a desire to understand but with hostility and gaslighting.
I quit my job over AI. It just felt like my job was approving pull requests where both the PR and the code itself were just slop. In all fairness, it was mainly CRUD applications so not a big deal, but in the end I didn't feel like I had any control over the application anymore, with hundreds of lines of slop being added every day.
One day I might start a consultancy business that only does artisanal code. You can hire me and my future apprentices to replace AI code with handcrafted code. I will use my company to teach the younger generation how to write code without AI tooling.
That's an interesting perspective. I guess it depends on what you want and how low the stakes are. Artisanal coffee, sure. Artisanal clothing, why not? Would you want an artisanal MRI machine? Not sure. I wouldn't really want it "hand crafted", I just want it to do its job.
It's a shame to find an AI-written ad so highly upvoted here.
The author even insists that AI was used because of their poor English, which is the standard excuse on Reddit as well. But clearly, this is not a translation:
> Curious if others have similar stories. Anyone else come back to building after stepping away?
This is bog-standard AI slop to increase engagement.
Look at the blog on their linked site as well. AI-generated posts.
This has been posted here for SEO. This is a business venture.
It's times like this when I think HN needs a post downvote button. Flagging might not be quite appropriate here, but I hate to see this content cluttering up the front page.
By contrast, the moment I am no longer able to compete with AI users, is the moment I quit the industry. I have no interest in outsourcing my thinking.
Thankfully LLMs are still very stupid. Especially when it comes to security engineering, my specialty, so looks like I have a while yet.
I’m glad to see people finding coding accessible again. To me this kind of common “AI made coding fun and accessible again” message signals something deeper. As a field, we allowed our systems to get so complex that we lost people, and AI tools are bringing them back. Maybe we should look at how we have chosen to design systems and ask, “can these be made simpler and more accessible?” Even before AI systems I looked at my field with sadness: there is complexity growing everywhere and few people looking to address it. Instead, we seem to have incentivized creating complexity, because new complicated systems that are hard to use lead to career advancement if you can point at something and say “I am one of the few who can deal with that” or “I created that complex thing”. The ability to handle the complexity makes an individual valuable even though the effect is that it excludes many others.
Perhaps if we didn’t have deep layer cakes of frameworks and libraries, people would feel like they can code with or without AI. Feels like AI is going to hinder any efforts to address complexity and justify us living with unnecessary complexity simply because a machine can write the complex, hard to understand, brittle code for us.
Happy for everyone who enjoys it. For me it's the opposite: AI everywhere sucks the joy out of it and I'm seriously starting to consider a career shift after roughly 10 years of writing code for a living.
I feel you. There's a massive difference between crafting and assembling. AI turns us from artisans carving a detail into assembly line operators. If your joy came from solving algorithmic puzzles and optimizing loops, then yes, AI kills that.
It might be worth looking into low-level dev (embedded, kernel, drivers) or complex R&D. Vibe coding doesn't work there yet, and the cost of error is too high for hallucinations. Real manual craftsmanship is still required there.
It helped me finish my WebRTC client for an ESP32 microcontroller. That's fairly low level. It did it without breaking a sweat - 2 hours, and we had a model which works with my Pipecat-based server.
I loaded the lowest-level piece of software I wrote in the last 15 years - a memory spoofing aimbot PoC exploiting architectural issues in x86 (things like memory breakpoints set on logical memory - not hw addresses - allowing it to read memory without tripping kernel-level detection tools, the ability to trigger PFs on pages where the PoC was hiding to escape detection, low-level gnarly stuff like this). I asked it to clean up the code base and propose why it would not work under the current version of Windows. It did that pretty well.
Lower-level stuff does of course exist, but not a whole lot IMHO. I would not assume Claude will struggle with kernel-level stuff at all. If anything, this is better documented than the over-abstracted mainstream stuff.
The cost of hallucinations, though - you potentially have a stronger point there. It wouldn't surprise me if that fails to sway some decision makers, but it does give the average dev a bit more ground to work with.
I'm starting to think that people don't want to be programmers anymore, they want to be managers who delegate their work to someone or something else, and then come back, critique the work, and do another loop
I'm thinking back to my contracting days when a typical customer might have a team of ten people but only one or two did the bulk of the work. Now the whole team can be productive for whatever measure you use for productivity.
It's not so great for the one or two but fantastic for everybody else.
I guess I'm an outlier then because I actually like programming, and I've never wanted to be a manager, even a manager of an LLM. At least half the fun of making software is doing the programming
It feels like I am these days, with the amount of posts on here about how LLM-powered programming is the future and everyone should be doing it and it will make us all 10x developers
It sucks the joy out of it because to the extent that you build something with AI, (Obama voice) you didn't build that. I am allergic to the concept of developing with AI, especially for personal work, because AI-authored code isn't something I built, it's something I commissioned. It's like if I went onto Fiverr or Upwork with a spec and paid money and said "Here, build this" to a freelancer and then went back and forth with that person to correct and refine the result. I might get a halfway decent result in the end, but I don't get the experience of solving the problem myself. Experience solving problems yields new insights. It's why math textbooks have exercises: the only way to grasp the concepts is to solve problems with them.
With AI, you are no longer a developer, you're a product manager, analyst, or architect. What's neat about this, from a business perspective, is that you can in effect cut out all your developers and have a far smaller development workforce consisting of only product managers, analysts, and architects whom you call "developers" and pay developer salaries to. So you save money twice: once on dev workforce downsizing, and again on the pay grade demotion.
The problems I've been working on are at a much higher level than the nuts and bolts.
I'm currently exploring domain-specific languages aimed at writing web applications. I've been particularly interested in, much like bash, data flowing through pipelines. I have spent quite a bit of time, and I'm definitely not vibe coding, but I've probably only written 1-2% of the code in these projects.
It is so much work to build out a new language with a surrounding ecosystem of tooling. Not even five years ago this would have necessarily been a full time multi-year endeavor or at least required a team of researchers. Now I can tinker away in my off hours.
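To give a feel for the pipeline idea (just an illustration of the general shape in TypeScript, not the commenter's DSL; all of the step names are made up):

    // Bash-style "data flows left to right" composition for a web handler.
    // All step functions here are made-up examples.
    type Step = (input: any) => any | Promise<any>;

    function pipe(...steps: Step[]) {
      return async (input: any) => {
        let value = input;
        for (const step of steps) {
          value = await step(value); // each step's output feeds the next
        }
        return value;
      };
    }

    const parseQuery = (url: string) => Object.fromEntries(new URL(url).searchParams);
    const keepDates = (q: Record<string, string>) => ({ from: q.from, to: q.to });
    const render = (range: { from: string; to: string }) => `Report ${range.from}..${range.to}`;

    // A request "flows through" the pipeline, like `parse | filter | render` in a shell.
    const handleReport = pipe(parseQuery, keepDates, render);
    // await handleReport("https://example.invalid/report?from=2024-01&to=2024-12")
    //   -> "Report 2024-01..2024-12"

The appeal of a dedicated DSL, presumably, is making that composition first-class syntax with its own tooling rather than a library convention.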
No need to go that far. I bounced off weekend projects many times because I lost interest the moment I had to relive fighting the "modern" frontend ecosystem set up (or whatever else unrelated to the actual building), which is what I was already doing at the day job. In the end I just gave up because I'd rather get some rest and fun out of my time off. Now I can just skip that part entirely instead of tanning in front of <insert_webpack_or_equivalent> errors for hours on Saturday afternoon.
Huh? What about all the open source software you use, did you build all of it?
What about the phone in your hand, did you design that?
HN loves to believe they are the noble few - men and women of math and science, driven by nothing but the pure joy of their craft
But this whole AI thing has been super revealing. Almost everyone here is just the same old same old, only that now that the change is hitting close to home, you’re clutching your pearls and lamenting the days when devs were devs
The younger generation born into the AI world is going to leave you in the dust because they aren’t scared of it
My math teacher used to say that people felt this was about…calculators, imagine that
Take a deep breath and try again. You'll get more of a constructive argument from the person you're responding to if you engage with intellectual honesty.
There seem to be two camps of people: those who love the coding and those who love delivering value/solutions. I am in the latter camp. The happy consumer and the polished product are what give me satisfaction; the code is really just a vehicle from A to B. It’s a shame for anyone in the first camp who wants a career.
Agree with those 2 camps. The latter camp is all cheered up, which is nice, but they should be asking whether their solution is valuable enough to be maintained. If so, you should make all generated code your code, exactly in the form it needs to be according to your deep expertise. If not, congratulations, you have invented throw-away code. Code of conduct: don't throw this code at people from the former camp.
Or to phrase it more succinctly: if you are in camp 2 but don't have the passion of camp 1, you are a threat for the long term. The reverse is dangerous too, but can be offset to a certain extent with good product management.
> If so, you should make all generated code your code, exactly in the form it needs to be according to your deep expertise.
This is a solved problem with any large, existing, older code base. The original writers are gone and new people come on all the time. AI has actually helped me get up to speed in new code bases.
> If so, you should make all generated code your code, exactly in the form it needs to be according to your deep expertise.
Is this also true of all third party code used by their solution? Should they make all libraries and APIs they use their own in exactly in the form it needs to be according to their deep expertise? If not, why not?
If so, does this extend to the rest of the stack? Interpreters, OSes, drivers? If not, why not?
Well, what if one becomes unmaintained or has issues that only affect your project? Why is that uncontrolled code different from generated code? Is it specifically that it's generated?
This isn't a trick question, BTW. It's a genuine attempt to get to the rationale behind your (and the GP's) stance on this.
In particular, the GP said:
> Or to phrase it more succinctly: if you are in camp 2 but don't have the passion of camp 1, you are a threat for the long term.
That hints I think at their rationale, that their stance is based on placing importance on the parts of software development that they enjoy, rather than any logical basis.
> Well, what if one becomes unmaintained or has issues that only affect your project.
This happens, but very rarely compared to changes in your own code base. If a library breaks, you can usually find an alternative, but even in that case you need to know how to modify your own code.
The difference with generated code is that you are tasked to maintain the generated code.
I think for a lot of minor things, having AI generate stuff is okay, but it’s rather astounding how verbose and sometimes bizarre the code is. It mostly works, but it can be hard to read. What I’m reading from a lot of people is that they’re enjoying coding again because they don’t have to deal with the stuff they don’t want to do, which...I mean, that’s just it isn’t it? Everyone wants to work on what they enjoy, but that’s not how most things work.
Another problem is that if you just let the AI do a lot of the foundational stuff and only focus on the stuff that you’re interested in, you sometimes just miss giant pieces of important context. I’ve tried reading AI driven code, sometimes it makes sense, sometimes it’s just unextensible nonsense that superficially works.
This isn’t tech that should replace anything and needs to be monitored judiciously. It can have value, but what I suspect is going to happen is we are going to have a field day with people fixing and dealing with ridiculous security holes for the next decade after this irrational exuberance goes away. It should be used in the same way that any other ML technique should be. Judiciously and in a specific use case.
Said another way, if these models are the future of general programming, where are the apps already? We’re years into this and where are they? We have no actual case studies, just a bunch of marketing copy and personal anecdotes. I went hunting for some business case studies a while ago and I found a Deloitte “case study” which was just pages of “AI may help” without any actual concrete cases. Where are the actual academic studies showing that this works?
People claiming AI makes them code faster reminds me that Apple years ago demonstrated in multiple human interaction studies that the mouse is faster, but test subjects all thought keyboard shortcuts were faster [1]. Sometimes objective data doesn’t matter, but it’s amusing that the whole pitch for agentic AI is that it is faster and evidence is murky for this at best.
This is such marketing speak. The words mean nothing, they’re just a vague amalgamation of feelings. “Vibes”, if you will.
If you “love delivering value and solutions”, go donate and volunteer at a food bank, there’s no need for code at any point.
> The happy consumer and the polished product
More marketing speak. If you are using LLMs to write your code, by definition your product isn’t “polished”. Polishing means poring over every detail with care to ensure perfection. Letting an LLM spit out code you just accept is not it.
The word you’re looking for is “shiny”, meaning that it looks good at a glance but may or may not be worth anything.
What term would you use? You can't say "a finished product" because it may never be finished, but something that other people find valuable seems like a good definition.
I get the argument. Sometimes I really enjoyed the actual act of finally figuring out a way to solve a problem in code, but most of the time it was a means to an end, and I'm achieving that end far more often now via AI tooling.
I’m not fussed about the exact term, as long as it points to something real and sits on equal semantic footing with the alternative.
Note how they described two areas of focus (what you “love”): “coding” and “delivering value/solutions”.
You can be a “coder” or a “programmer”, no one is a “deliverer of value/solutions”.
“Coding” is explicit; it’s an activity you can point at. “Delivering value/solutions” is vague; it’s corporate speak meant to sound positive without committing to anything. It doesn’t represent anything specific or tangible. It doesn’t even reference software, though that’s what it is, which makes it sound broader than it is. You could say “using and releasing apps”, for example, though proponents may feel that’s reductive (but then again, so is “coding”).
Again, what’s in contention here isn’t the exact term, but making sure it’s one that actually means something to humans, instead of marketing speak.
> This is such marketing speak. The words mean nothing, they’re just a vague amalgamation of feelings. “Vibes”, if you will.
I actually think this reveals more about you than you might realise. A _lot_ of people enjoy being able to help people resolve problems with their skills. “Delivering value” may be marketing speak, but what it points to is specifically helping people in ways that are valuable.
A lot of people who work in software are internally motivated by this. The act of producing code may (or may not be) also enjoyable, but the ultimate internal motivation is to hand over something that helps others (and the external motivation is obviously dollars and cents).
There is also a subset of people who enjoy the process of writing code for its own sake, but it's a minority of developers (and dropping all the time as tooling - including LLMs - opens development to more people).
> If you are using LLMs to write your code, by definition your product isn’t “polished”. Polishing means poring over every detail with care to ensure perfection.
You can say the same thing about libraries, interpreters, OSes, compilers, microcode, assembly. If you're not flipping bits directly in CPU registers, you're not poring over every little detail to ensure perfection. The only difference between you and the vibe coder who's never written a single LoC is the level of abstraction you're working at.
Edit:
> If you “love delivering value and solutions”, go donate and volunteer at a food bank, there’s no need for code at any point.
I also think this says maybe a lot about you, also, as many people also donate their time and efforts to others. I think it may be worth some self-reflection to see whether your cynicism has become nihilism.
I have spent over a decade working primarily on open source, for free. I still do it, though it’s no longer my primary activity. A huge chunk of that time was helping and tutoring people. That I still do, and I’m better at it; I still regularly get thank-you messages from people I assisted or who use the tools I build.
I did use to volunteer at a food bank, but I used that example only because it’s quick and simple, no shade on anyone who doesn’t. I stopped for logistical reasons when COVID hit.
I have used the set of skills I’m good at to help several people with their goals (most were friends, some were acquaintances) who later told me I changed their life for the better. A few I no longer speak to, and that’s OK.
Oh, and before I became a developer, I worked in an area which was very close to marketing. Which was the reason I stopped.
So yeah, I know pretty well what I’m talking about. Helping others is an explicit goal of mine that I derive satisfaction from. I’d never describe it as “delivering value/solutions” and neither would any of the people I ever helped, because that’s vague corporate soulless speech.
I don’t think they should’ve done that or continue to do it without consent, and I don’t limit that to code. Books, images, everything else applies the same.
I also don’t think “but it wouldn’t be viable otherwise” is a valid defence.
I don’t see what that has to do with the conversation, though. If your point is about the free/$20, that doesn’t really factor into my answer.
> So yeah, I know pretty well what I’m talking about. Helping others is an explicit goal of mine that I derive satisfaction from. I’d never describe it as “delivering value/solutions”, that’s vague corporate soulless speech.
While I commend your voluntary efforts, I don't think it lends any more weight to your original comment. In fact, I think this comment highlights a deep cynicism and I think a profound misunderstanding of the internal motivations of others and why "delivering value" resonates with others, but rings hollow to you.
In the end, this debate is less about LLMs, and more about how different developers identify. If you consider software to be a craft, then mastery of the skillset, discipline, and authorship of the code is key to you.
If you consider software to be a means to an end, then the importance lies in the impact the software has on others, irrespective to how it's produced.
While you are clearly in the former camp, it is undeniable that impact is determined entirely by what the software enables for others, not by how it was produced. Most users never see the code, never care how it was written, and judge it only by whether it solves their problem.
You’re failing to understand the complaint is about the hollow term being used to sound grandiose.
A street sweeper “delivers value” in the form of a clean street. A lunch lady at a school “delivers solutions” in the form of reducing hunger in children.
There’s nothing wrong with wanting to do something for others, the criticism is of the vague terminology. The marketing speak. I’ve said that so many times, I’d hope that’d been clear.
> While you are clearly in the former camp
You’re starting from wrong assumptions. No, I’m not “in the former camp”, I find the whole premise to be a false dichotomy to begin with. Reality is a spectrum, not a binary choice. It’s perfectly congruent to believe a great product for customers is the goal, and that the way to achieve it is through care and deliberate attention to the things you do.
> You’re failing to understand the complaint is about the hollow term being used to sound grandiose.
This isn’t a critique of language - it’s a category error. You’re confusing the mechanism with the purpose.
In your examples, a street sweeper or lunch lady (Google says this is an antiquated US term for canteen worker?) does indeed deliver value: clean streets and nourished students. That's the value they're paid to provide. Those are the outcomes we care about, and whether the sweeper uses a broom or a Bucher CityCat is only of interest in that one allows the sweeper to provide more value at lower cost, e.g. more metres of clean road per dollar.
The same is true of the canteen worker, who may use Rational ovens and bains-marie to serve more hot meals at lower cost than cooking each meal individually.
> You don’t “deliver solutions”, you write software (or have it written for you).
Saying you "write software", not deliver solutions actually indicates that you don't understand the profession you're in. It mistakes the process for the outcome. Writing code is one means among many for achieving an outcome, and if the same outcome could be achieved by the business without software, the software would be dropped instantly. Not because care doesn’t matter, but because the purpose was never the code itself.
> It’s perfectly congruent to believe a great product for customers is the goal, and that the way to achieve it is through care and deliberate attention to the things you do.
But according to you, care and deliberate attention (software as craft) are the only way. An absolutist position. But most software that matters is imperfect, built over time, touched by many hands, and full of compromises. Yet it still delivers enormous value. That’s evidence that outcomes, not purity of process, are what deliver value and define success in the real world.
> But according to you, care and deliberate attention (software as craft) are the only way. An absolutist position.
No! That is not what I’m saying! How can you argue my position is an absolute when I just explicitly described it as a spectrum?!
However, I do believe you’re arguing in good faith, I just don’t think we’re on the same page. I wish we were, as while I think we might still disagree, I also believe we’d have an interesting conversation. Probably more so in person.
Unfortunately, I have to go get some work done so I’m unable to continue as of now. Still, instead of leaving you hanging, I wanted to thank you for the respectful conversation as well as your patience and I believe genuine effort in trying to understand my position.
Nonsense. Features are requested from me, I deliver them to the customer, the customer is happy and pays me. I deliver solutions and the customer deems them to be value for their business... What else am I supposed to call that?
I'm extremely diligent about vetting all code in my repos. Everything is thoroughly tested and follows the same standards that were in my codebase before the invention of LLMs. I'm not "vibe coding". You're making assumptions because of your negative emotional reaction to LLMs.
Yes yes, so does a street sweeper. Someone pays them because the road is dirty, and they use a broom to deliver the solution of a cleaner street, which is of value to the user.
Do you see why that’s marketing speak? You’re using vague terms which can be applied to anything. It avoids commitment and makes whatever you do seem grandiose. That’s marketing.
A few years ago, every app developer and designer was a “story teller”.
You don’t “deliver solutions”, you write software (or have it written for you).
>Yes yes, so does a street sweeper. Someone pays them because the road is dirty, and they use a broom to deliver the solution of a cleaner street, which is of value to the user.
Yes, it's exactly the same. Is your problem the fact that this gets you off the high horse?
It’s not marketing speak, but it’s rarely 100 percent one or the other.
> More marketing speak. If you are using LLMs to write your code, by definition your product isn’t “polished”.
This doesn’t make any sense. Polished to who? The end user? You can absolutely use AI to polish the user experience. Whether coding by hand or AI the most important aspect of polish is having someone who cares.
This false dichotomy comes up from time to time, that you either like dicking around with code in your basement or you like being a big boy with your business pants on delivering the world's 8000th online PDF tools site. It's tired. Please let it die.
There are people who would code whether it was their career or not; I'm not one of those people. I fell into software development in order to make money, and if the money stopped then I would stop. I love building and selling products; if I can't do that then I have no interest in programming. I'm not interested in machines, CPUs, etc. I'm interested in products, liaising with customers, delivering solutions, improving things for users, etc. You think there is no distinction there? Again, there are people who code for fun, I'm simply not one of them...
If you really want to deliver polished products, you still have to manually review the code. When I tried actually "vibecoding" something, I got exhausted so fast by trying to keep up with the metric tons of code output by the AI. I think most developers agree that reviewing other people's code is more exhausting mentally than writing your own. So I doubt those who see coding as too mentally straining will take the time to fully review AI written code.
More likely that step is just skipped and replaced with thoughts and prayers.
I do manually review. I don't think the quality of my output has reduced even slightly. I'm just able to do much more. I deliver features more quickly, and I'm making more money, so of course I'm happy. If there was no money in programming I wouldn't be doing it, I think that's the major distinction. I barely have any understanding of how a CPU works, I don't care. I build stuff and people are very happy with what I build and pay me money for it...
> ...and those who love delivering value/solutions. I am in the latter camp. The happy consumer and the polished product is what gives me satisfaction...
Can't the customer now just skip you and generate a product for himself via AI?
Serious? Have you used an LLM? Of course they couldn't... LLMs speed up my development velocity. Maybe 1.5x-2x? Hard to measure. You still need the knowledge to make smart decisions, enforce sensible/maintainable architecture & patterns, etc. How is a regular person going to review code to make sure it's correct/efficient/safe?
I just work here, man. What's all this 'love' stuff? :) I propose a third camp: skilled employee seeking compensation.
edit: to stay on the larger topic, I haven't been swayed much one way or the other. ~90% of the code I need existed a decade ago in the form of reusable modules. Anything new is closer to pseudo-code, an amplifier or sandbox isn't something I'm that interested in.
I like using my software engineering skills to solve people's problems. I don't do coding for its own sake - there's always a thing I'm trying to implement for someone.
As a professional, your job is to deliver value and solutions. It used to be that you could do this by writing code. AI changes this calculus because if the machine can write the code instead, the value you deliver by writing it yourself is greatly diminished.
I've also noticed a kind of grouping like this. I've described them as the "Builders" and the "Solvers". Where the former enjoys the construction aspect of the code more, and the latter enjoys the problem/puzzle-solving aspect of code more. I guess it's more of a scale than a binary, since everyone's got a bit of both, but I think I agree that AI is more fun for the builders.
You improve over time. I've been programming for 6 years and I still feel like I'm nowhere near others. That's a completely fine and valid thing to feel.
There is an interesting pattern emerging in this thread. There are a lot of 'same here' and 'opposite for me' comments, but both sides are converging on the same point: people developing software to solve a problem.
Many who are considering a career shift away from software due to 'AI disgust' devoted their lives to developing software because they loved the craft. But with AI churning out cheap, ugly, but passable code, it's clear that businesses never appreciated the craft. I hope these folks find an area outside of SWE that they love just as much.
But once these folks find this area, it would be naive to think they won't use software to scratch their itch. In the same way that people who didn't pursue a career in SWE (because they felt under-qualified) are using AI to solve their problems, these folks will now find their own problems to solve with software, even if at first that is not their intention. They probably won't use AI to write the code, but ultimately, AI is forcing everyone to become a product manager.
Some are saying "finally, AI does all the busywork and we focus on the business domain"
But what if the business is soulless? As in what if the business you're working on is just milking value out of people through negative patterns which... is ... well a lot of tech businesses these days. Maybe the busywork enabled engineers to be distracted from the actual impact of their work which makes people demotivated.
I have seen code bases that are amazing. I have seen ones that look bad, but work. About a year and a half ago I saw my first fairly large-scale, fully AI-generated project, and it filled me with dread. It looked like the Figma design, which is very impressive. But under the hood it was bizarre. It was like those sci-fi movie tropes of teleportation where someone teleports, the destination coordinates are wrong, and they merge with a tree or rock or whatever. There was so much unused junk that had nothing to do with anything. Ugh. My task was to figure out why the initial render took so long. (Unsurprisingly, it was loading all the data and then rendering, so with toy dev loads it was fine; in production it was a nightmare and getting worse.) So I just got to it and made some progress. But the new grad (who thought I was a dinosaur (might be right)) who made it was working in parallel and reintroducing more slop. So it became this Sisyphean task where I am speeding things up (true dinosaur, so measuring things) and they were cutting and pasting and erasing the gains.
I have always found management to be just a silly exercise in a day full of meetings. I like to make things. I could retrain, but the salary drop would be very hard. Hope to find one last gig and have enough to retire. I still get that spark of joy when all the tests pass.
Yeah enjoying it too, though it’s a different type of joy than hand rolling it. More getting things done fast which is neat but less proud of what one crafted
Can definitely understand the reluctance people feel around it. Especially when they’ve invested years into it and have their livelihood on the line
I’m also quite reluctant to publish any of it. Doesn’t feel right to push code I don’t fully understand so mostly personal projects for now
One thing that’s always missing from these compound interest calculators is multiple assets with different rates, and different rates over time (e.g between X date and Y date use Z rate, etc). I didn’t quite figure out the right UI for the second one.
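For what it's worth, the math for that missing feature is straightforward even if the UI isn't. A rough TypeScript sketch, with names I've made up purely for illustration (not taken from any of the calculators discussed here): each segment of the schedule applies its own rate and compounding frequency, and multiple assets would just be one schedule per asset, summed at the end.

```typescript
// Hypothetical sketch: compound growth over a schedule of changing rates.
interface RateSegment {
  years: number;            // how long this segment lasts
  annualRate: number;       // e.g. 0.07 for 7%
  compoundsPerYear: number; // e.g. 12 for monthly compounding
}

function growBalance(principal: number, schedule: RateSegment[]): number {
  // Apply each segment in order to the running balance.
  return schedule.reduce((balance, seg) => {
    const periods = seg.years * seg.compoundsPerYear;
    const periodRate = seg.annualRate / seg.compoundsPerYear;
    return balance * Math.pow(1 + periodRate, periods);
  }, principal);
}

// 10k for 5 years at 7%, then 10 years at 4%, compounded monthly.
console.log(growBalance(10_000, [
  { years: 5, annualRate: 0.07, compoundsPerYear: 12 },
  { years: 10, annualRate: 0.04, compoundsPerYear: 12 },
]));
```

The hard part really is the UI for entering the schedule, not the compounding itself.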
I don’t like AI for production code, but I love it for ideation and prototyping. I agree. It really allows you to quickly iterate on ideas without being blocked by implementation details.
Not to be disrespectful, but OP's code is also a website that already exists literally thousands of times and could be done in any spreadsheet program without any programming at all...
It’s more like AI provides the development team, and you are the key user and product manager that comes with all the requirements and domain knowledge, the lead architect reviewing the architecture, and the lead UXer reviewing the UX.
I use AI as a senior developer I ask questions to. It gives me an answer, which I can use on my work or not. Saved me days of work, but I couldn't be taken out (yet) of the loop because I'm still making the decisions...
I didn't quit coding, but despite having found thousands of similar utilities (retirement calculators), I also vibe coded one (with base44): https://boringretirementcalculator.com
What can I say... If you used a calculator to get an answer for sqrt(2), are you back to doing mathematics? It's simpler and more fun than using Newton's method, but it's debatable whether you are actually working on mathematics problems.
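To make the analogy concrete, here is the by-hand route the calculator button replaces: the standard Newton iteration for sqrt(2), shown as a minimal illustrative sketch (nothing here is specific to the thread).

```typescript
// Newton's method for sqrt(n): repeatedly average x with n/x.
function newtonSqrt(n: number, iterations = 6): number {
  let x = n; // any positive starting guess converges
  for (let i = 0; i < iterations; i++) {
    x = (x + n / x) / 2;
  }
  return x;
}

console.log(newtonSqrt(2)); // ~1.4142135623730951
```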
Look, do what works for you obviously but this just reinforces my view that the people who see "AI Code agents" as a useful thing, are the people who don't know how to write code themselves.
For the same reason things like Image Playground/etc seem magical/appealing to non-artists (myself included): we don't know how to do it ourselves, so it feels empowering.
Or closer to home: it's the same reason that developers are so in love with clicking some buttons in the <insert cloud mega provider> dashboard in spite of the costs, lock-in, more costs, yet more costs, and of course the extra costs.
As with those choosing "cloud" services they don't need, here too there will no doubt be a lucrative market to fix the shit once people realise that there's a reason experts charge the way they do.
I never stopped developing but I find myself taking on a lot more side projects than I used to. The cost for doing those just dropped significantly. This enables me to prototype and pursue things that I previously wouldn't have.
I'm also now dealing with things that previously would have taken me too long to deal with. For example, I'm actually making a dent in the amount of technical debt I have to deal with. The type of things where previously I maybe wouldn't have taken a week out of my schedule to deal with something that was annoying me. A lot of tedious things that would take me hours/days now can get done in a few prompts. With my bigger projects, I still do most stuff manually. But that's probably going to change over the next months/year.
I'm mainly using codex. I know a lot of people seem to prefer Claude Code. But I've been a happy ChatGPT Plus user for a while, and codex is included with that and seems to do the job. Amazing value for $20/month. I've had to buy extra credit once now.
The flip side of all this is that waiting for AI to do its thing isn't fun. It's slow enough that it slows me down and fast enough that I can't really multitask. It's like dealing with a very slow build that you have to run over and over again. A necessary evil, but not necessarily fun. I can see why a lot of developers feel like the joy is being sucked out of their lives.
Dealing with this pain is urgent. Part of that is investing in robust and fast builds. Build time competes with model inference in the time stuff takes. And another part is working on the UX of this. Being able to fork multiple tasks at once is hugely empowering. And switching between editing code and generating code needs to get more seamless. It feels too much like I'm sitting on my hands sometimes.
Thank you for the beautiful story. I work as a developer and have experienced the same in my personal projects, linux setup and - in general - all the collaterals.
AI is eroding the entry barrier, the cognitive overload, and the hyper-specialization of software development. Once you step away from a black-and-white perspective, what remains is: tools, tools, tools. Feels great to me.
I think people would have reacted a lot more positively if you'd said right up front in the first line "hey look guys, yes I wrote this with ChatGPT but I am not a native English speaker so I've used AI to translate"
Otherwise it feels deceptive. Which is surprising given we should judge off intentions and not augmentation (like come on guys this is HN FFS).
This guy's not running any ads on the site, hasn't spammed with multiple posts that I've seen. I still think investment funds/modern stock exchanges are needless parasites upon society but that's just my opinion.
Yeah, you're right — that part is pretty rough. I wanted to help people actually understand compound interest (it's kind of life-changing once it clicks), but I got lazy and let AI do it without proper editing. Defeats the whole point.
I'll figure out a better way. Thanks for calling it out.
These posts will destroy this place. Post your AI written tools if you like - fine, but using an LLM to reply to comments is just insulting, and will make this place a wasteland of LLM. I wouldn’t post this if I didn’t care about the usual good quality of the discussions on this site.
Vibe coded projects can be cool (if they're impressive), articles about using AI can be cool (from the right people), articles about the future of AI can be cool. All of these can sometimes be too much and some of them are just poor projects / articles etc. But they should definitely be allowed; some of them are genuinely interesting / thought provoking.
Someone prompting gpt-4o "Write a nice reply comment for this <paste>" and then pasting it here is never cool. If you can't write in English, you can use Google Translate or even ask an LLM to translate, but not to write a comment for you!
Just another AI generated website with 5000 calculators thrown together that looks like every other single one. From a brand new account with a post that looks like it was also written from ChatGPT. Somehow getting enough votes to show up on my homepage.
Things are definitely changing around HN compared to when it first started.
Fair call — it did kind of explode from one calculator to 60+
I’m a real person (long-time lurker, finally posting), but I get why it looks sus.
Things are changing fast, and I’m just happy to be part of the messy early wave. Thanks for the honesty.
It's impossible to tell if this is AI or not. Another version of Poe's law. The only thing to do is assume everything is AI, just like you must assume all posts have ulterior (generally profit-driven) motives, all posters have a conflict of interest, etc.
Maybe the only thing to do is stop trying to understand posters' motivations, stop reading things charitably, stop responding, just look for things that are interesting (and be sure to check sources).
People are hurt when people turn person-to-person communication into person-to-machine communication. It's dismissive of their use of genuine wall-clock time trying to engage with you.
We're busy building real software, not toys. I routinely write all kinds of calculators in my game development, in addition to having 100x more complex code to contend with. This task is as trivial as it gets in coding, considering computers were literally made to calculate and calculation functions are part of standard libraries. OP definitely didn't use Claude to implement math functions from scratch, they just did the basic copy-and-paste work of tying it to a web interface on a godawful JS framework stack which is already designed for children to make frontends with at the cost of extreme bloat and terrible performance. Meanwhile I actually did have to write my own math library, since I use fixed-point math in my game engine for cross-CPU determinism rather than getting to follow the easy path of floating-point math.
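For readers unfamiliar with the technique mentioned above: fixed-point math stores values as scaled integers, so the same inputs produce the same bits on every CPU without depending on each platform's floating-point behaviour. Below is a minimal Q16.16 sketch in TypeScript for illustration only; it is not the commenter's engine library, and all names are made up.

```typescript
// Hypothetical Q16.16 fixed-point sketch: values are integers scaled by 2^16.
const FRAC_BITS = 16n;
const ONE = 1n << FRAC_BITS;

type Fixed = bigint; // raw scaled value

const fromNumber = (x: number): Fixed => BigInt(Math.round(x * Number(ONE)));
const toNumber = (x: Fixed): number => Number(x) / Number(ONE);

const add = (a: Fixed, b: Fixed): Fixed => a + b;
const mul = (a: Fixed, b: Fixed): Fixed => (a * b) >> FRAC_BITS; // shift restores the scale
const div = (a: Fixed, b: Fixed): Fixed => (a << FRAC_BITS) / b; // truncating division

// Example: a deterministic squared distance, no floating point in the math path.
const dx = fromNumber(3.25);
const dy = fromNumber(1.5);
const distSq = add(mul(dx, dx), mul(dy, dy));
console.log(toNumber(distSq)); // 12.8125
```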
It's cool that ChatGPT can stitch these toys together for people who aren't programmers, but 99% of software engineers aren't working on toys in the first place, so we're hardly threatened by this. I guess people who aren't software engineers don't realise that merely making a trivially basic website is not what software engineering is.
> I guess people who aren't software engineers don't realise that merely making a trivially basic website is not what software engineering is.
"Software engineering" doesn't matter to anyone except to software engineers. What matters is executing that idea that's been gathering dust for ages, or scratching that pain point that keeps popping up in a daily basis.
Software engineering matters very much to anyone who has ideas or pain points that are beyond the capabilities of a next-token prediction engine to solve.
Not really. Those ideas or pain points are simply ignored or endured by anyone who isn't a software engineer until the tools (no-code platform, LLM, etc) become good enough, or someone else builds the thing and makes it available.
My response is perhaps a bit raw, but so is the quote above.
Stop with the gate keeping. I've studied CS to understand coding, not to have some sort of pride to build "real software". Knowledge is a tool, nothing more, nothing less.
There are enough developers whose whole job it is to edit one button per week and not much more. And yes, there are also enough developers that actually apply their CS skills.
> but 99% of software engineers aren't working on toys in the first place
Go outside of your bubble. It's way more nuanced than that.
> I guess people who aren't software engineers don't realise that merely making a trivially basic website is not what software engineering is.
Moving goal posts. Always has been.
It's not that I fully disagree with you either. And I'm excited about your accomplishments. But just the way it reads... man...
I guess it hits me because I used to be disheartened by comments like this. It just feels so snarky as if I am never good enough.
The vibe is just "BUH BUH BUH and that's it." That's how it comes across.
And I've matured enough to realize I shouldn't feel disheartened. I've followed enough classes at VUSEC, with all their Rowhammer variations and x86-64 assignments, to have felt a taste of what deep tech can be. And the thing is, it's just another skill. It doesn't matter if someone works on a web app or a deep game programming problem.
What matters (to me at least) that you feel the flow of it and you're going somewhere touching an audience. Maybe his particular calculator app has a better UX for some people. If that's the case, then his app is a win. If your game touches people, then that's a win. If you feel alive because you're doing complex stuff, then that's a win (in the style of "A Mathematician's Apology"). If you're doing complex stuff and you feel it's rough and you're reaching no one with it, it's neutral at best in my book (positive: you're building a skill, negative: no one is touched, not even you).
Who cares what the underlying technology is. What's important is usability.
Feel free to point out where I moved goal posts. To say that I moved goal posts would imply that at one point I stated that creating a trivial website was software engineering. If you're comparing my statement to what some other person said, who made arguments I did not make, then we cannot have any kind of constructive dialogue. At that point you are not talking to me, but talking to an imaginary projection of me meant to make yourself feel better about your argument.
> Stop with the gate keeping.
I'm not gatekeeping anything. You can disagree with my descriptive terms if you want, but the core point I'm trying to get across is: what people are doing with Claude can not replace what I do. I would know, I've tried extensively. Development is a lot of hard work and I would love it if my job were easier! I use LLMs almost every day, mostly for trivial tasks like reformatting text or writing advanced regex because I can't be bothered to remember the syntax and it's faster than looking it up. I also routinely pose SOTA models problems I'm working on to have them try to solve them, and I am routinely disappointed by how bad the output is.
So, in a thread where people were asserting that critics are merely critics because they're afraid of being replaced, I pointed out that this is not factually correct: no, we're not actually afraid of being replaced, because those of us who do "real" engineering (feel free to suggest a different term to substitute for "real" if the terminology is what bothers you) know that we cannot be replaced. People without experience start thinking they can replace us, that the exhilarating taste of coding they got from an LLM is the full depth of the software engineering world, but in fact it is not even close.
I do think that LLMs fill a useful gap, for projects where the time investment would be too large to learn to code and too unimportant to justify paying anyone to program, but which are simple enough that a non-engineer can have an LLM build something neat for themselves. There is nothing wrong with toys. Toys are a great thing to have in the world, and it's nice that more people can make them[1]. But there is a difference between a toy and what I do, and LLMs cannot do the thing I do. If you're taking "toy" in a derogatory manner, feel free to come up with another term.
[1] To some extent. While accessibility is generally a great thing, I have some misgivings. Software is dangerous. The web is arguably already too accessible, with frameworks enabling people who have no idea what they're doing to make professional-looking websites. These badly-made websites then go on to have massive security breaches that affect millions of users. I wish there was a way to make basic website development accessible, whether through frameworks or LLMs, in a way that did not give people using them the misplaced self-confidence to take on things way above their skill level at the cost of other people's security.
Idk, your superiority complex about the whole issue does make it sound like you’re feeling threatened. You seem determined to prove that AI can’t really make any decent output.
What’s even the point of writing out that first paragraph otherwise?
> What’s even the point of writing out that first paragraph otherwise?
I was correcting your misguided statement:
> Their critics didn’t make that!
by pointing out that we, among other things, build the libraries that you/Claude are copy-and-pasting from. When you make an assertion that is factually incorrect, and someone corrects you, that does not mean they are threatened.
You're right that this is simple compared to what real engineers build. I have a lot of respect for people like you who write things like custom math libraries for cross-CPU determinism — that's way beyond my level.
I'll keep learning and try to make this less of a toy over time. And hopefully I can bring what I've learned from years in investing into my next product to actually help people. Thanks for the perspective.
What are you implying? He would have had to hire a good developer for at least a full month's salary to build something like this.
And if you are thinking enterprise, it would take 2-3 developers, 2 analysts, 2 testers, 1 lead and 1 manager 2-3 months to push something like this. (Otherwise, why would leading banks spend billions and billions on IT development every year? What tangible difference do you see in their websites/services?)
5000 calculators may look excessive, but in this case it highlights what AI will be capable of in the future - both in terms of quality and quantity.
> (Otherwise, why would leading banks spend billions and billions on IT development every year? What tangible difference do you see in their websites/services?)
Well, I don't think all those people are spending their time making simple calculators.
Twitter/X incentivizes you to chase engagement because with a blue checkmark you get paid for it, so people shill aggressively and post idiotic comments on purpose trying to rage-bait you. It's like LinkedIn for entrepreneurs. Reddit and its power-hungry moderators (shadow)ban people often. The number of popular websites where people can shill their trash is dwindling, so I assume it gets worse here as a result too.
Happy compounding! Wish I had started younger, but I'm catching up. 25% of your salary into a pension in global indexes is, I think, the way. You never get to touch it, there are no decisions to make, and you just forget it. Live life. Have a lot of money later. (Maybe go down to 5% when needed, e.g. buying a house or having a baby.)
>The problem? Every compound interest calculator online is terrible. Ugly interfaces, ads covering half the screen, can't customize compounding frequency properly, no year-by-year breakdowns. I've tried so many. They all suck.
Congrats! I never stopped coding, but AI makes it way more productive and fun for sure.
$100 seems like a lot. I guess compared to dev salaries it's nothing. But with the $10-per-month Copilot you can get some pretty great results too.
$100 did feel steep at first. I tried other models but Opus 4 with extended thinking just hits different — it actually gets what I'm trying to do and the code often works first try. Hard to go back after that.
> Vibe coding didn't make me a 10x engineer. But it gave me permission to build again. Ideas I've had for years suddenly feel achievable. That's honestly the bigger win for me.
Well in my opinion there's nothing wrong with vibe-coding. You can completely use it to make your passion projects. I draw the line when people try to sell their vibe-coded project as something huge, putting people at the risk of potential security breaches while also taking money out of them.
Every other day I see ads of companies saying "use our AI and become a millionaire", this kind of marketing from agentic IDEs implies no need for developers who know their craft, which as said above, isn't the case.
Fair, but the threat model matters here. For a static mortgage calculator, the data leak risk is zero (if it's client-side). The risk here is different - logical. If the AI botches the formula and someone makes a financial decision based on that - that's the problem. For "serious" projects vibe coding must stop where testing and code audits begin
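To illustrate the "logical risk" point: for something like a mortgage payment, a vibe-coded calculator can at least be sanity-checked against the standard amortization formula. A hedged TypeScript sketch; the function names are hypothetical and not taken from the site under discussion.

```typescript
// Standard fixed-rate monthly payment: M = P * r(1+r)^n / ((1+r)^n - 1).
function monthlyPayment(principal: number, annualRate: number, years: number): number {
  const r = annualRate / 12;         // monthly rate
  const n = years * 12;              // number of payments
  if (r === 0) return principal / n; // edge case: 0% financing
  const growth = Math.pow(1 + r, n);
  return (principal * r * growth) / (growth - 1);
}

// $300k at 6% over 30 years should be roughly $1,798.65/month.
const payment = monthlyPayment(300_000, 0.06, 30);
console.assert(Math.abs(payment - 1798.65) < 0.01, `unexpected payment: ${payment}`);
```

A handful of checks like this against known-good values is cheap insurance when the code was generated rather than written.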
Totally agree. I have my day job, and vibe-coding has simply brought back the joy of building things for me. It should be about passion and creativity, not about scamming people or overselling half-baked products. The "get rich quick with AI" narrative is toxic.
This by definition filters out all non-devs, and even many junior devs, as you need to understand deeply whether those tests are correct, cover all the important edge cases, etc.
Plus, when you deploy it, you need to know it was properly deployed and that your DB creds are not exposed on the frontend.
But mostly no one cares, as there are no consequences for leaking your users' personal data or whatnot.
I think vibe coding isn't quite good enough for real products. I say that because I usually have 4 AI agents going non-stop, and I do read the code (I read so, so much code) and give the AI plenty of feedback.
If you just want to build a little web app, or a couple of screens for your phone, you'll probably be fine. (Unless there's money or personal data involved.) It's empowering! Have fun.
But if you're trying to build something that has a whole bunch of moving parts and which isn't allowed to be a trash fire? Someone needs to be paying attention.
I've lost the joy in programming, the only thing I'm good at, I now make horrible music, but at least I don't exist as the means to an end that I don't control.
No offense taken, but what's the point in using AI for anything unless you don't want to do it yourself? I want to live my life, not consume information; is that really so bad?
Same. Fell out of love with programming after the first few years because the thought of spending my life staring at a screen and dealing with insignificant minutia suddenly seemed horrible. Spent a lot of years in management and LLMs gave me a way to build things I wanted again. Currently building a platformer.
This is tongue-in-cheek, but you spent years in management because "the thought of spending your life staring at a screen and dealing with insignificant minutia seemed horrible?" I need to read your management book!
It’s a lot of 1:1s and talking to people directly and strategy about setting up performant teams. I enjoy it way more and don’t spend a lot of time looking at screens.
For me it’s kinda the same. I always hated typing actual code; I love planning, reading, finding bugs, etc.
But writing code? Eh, I never enjoyed that. Now with agents I can do exactly what I like: plan, write in natural language, and then do code review.
> Stack: Next.js, React, TailwindCSS, shadcn/ui, four languages (EN/DE/FR/JA). The AI picked most of this when I said "modern and clean."
I guess this is what separates some people. But I always explicitly tell it to use only HTML/JS/CSS, without any libraries beyond what I've vetted myself. Generating the code means you no longer have to lean on libraries just to avoid writing that stuff by hand.
Cool to hear nonetheless. Can we now also stop stigmatizing AI generated music and art? Looking at you Steam disclosures.
The current trend made me realize I don't like coding so much as I like creating stuff. So I'm happy I can build the stuff faster in an increasingly tight schedule as I'm getting older.
I have always done small projects at home, few of which would reach maturity, and I was doing less and less every year, until recently!
Did you take over a farm?
What a stupid sentiment on top of trying to generate views for the most low hanging slop ever.
I'm not sure how you can claim this on the footer of every page when you're vibe coding these calculators.
How confident is the OP that every single one of these 60 calculators work all the time, with all edge cases? Because if someone is on your website using your calculator, they are putting trust in you. If it's wrong, it could have downstream impacts on them. I hope every single one has a comprehensive set of tests with good edge cases. But realistically will they?
I'm actually pretty pro-AI development. But if you're going to use AI to help develop a website, at least focus on quality rather than quantity. AI makes quantity easy, but quality is still hard.
As an aside, the website doesn't even work for me. My clicks don't do anything.
The compound interest calculator, which is their 'favorite page', already shows an incorrect value in the graph. So my faith in the other calculators isn't great. I also kinda doubt OP's story of them using that page all the time, since it took me about 20 seconds to find this issue.
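The kind of spot check that catches a bad chart point is cheap: generate the year-by-year breakdown and compare it to the closed-form value. A small illustrative sketch with assumed names, not the site's actual code:

```typescript
// Year-by-year balances under annual compounding: balance(y) = P * (1 + r)^y.
function yearlyBalances(principal: number, annualRate: number, years: number): number[] {
  return Array.from({ length: years + 1 }, (_, y) => principal * Math.pow(1 + annualRate, y));
}

// $10k at 7% should read about $19,671.51 at year 10; a plotted point far off that is a bug.
const balances = yearlyBalances(10_000, 0.07, 10);
console.assert(Math.abs(balances[10] - 19_671.51) < 0.01, `year 10: ${balances[10]}`);
```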
Creating a polished, usable app is just so much work, and so much of it isn't fun at all (to me). There are a few key parts that are fun, but building an intuitive UI, logging, error handling, documentation, packaging, versioning, containerization, etc. is so tedious.
I'm bewildered when I read posts by the naysayers, because I'm sitting here building polished apps in a fraction of the time, and they work. At least much better than what I was able to build over a couple of weekends. They provide real value to me. And I'm still having fun building them.
I now vibe coded three apps, two of them web apps, in Rust, and I couldn't write a "Hello World" in Rust if you held a gun to my head. They look beautiful, are snappy, and it being Rust gives me a lot of confidence in its correctness (feel free to disagree here).
Of course I wouldn't vibe code in a serious production project, but I'd still use an AI agent, except I'd make sure I understand every line it puts out.
I believe you can use LLMs as advanced search and as a generator for boilerplate. People who like it easy also tend to go easy on quality attributes, so everyone should be self-aware about where they sit on that spectrum.
Then don’t do it. No one is forcing you. Are you also going to complain that building airplanes and ensuring food safety are too much work and not fun for you? Not everything needs to be or should be dumbed down to appeal to lowest common denominator.
Alternatively, go work at a company where you’re part of a team and other people do what you do not enjoy.
> I'm sitting here building polished apps in a fraction of the time
No, no you are not, guaranteed. “Polishing” means caring about every detail to make it perfect. If you’re letting the LLM make most of it, by definition it’s not polished.
No one is also keeping me from doing what I want to spend my time with on my days off.
> Are you also going to complain that building airplanes and ensuring food safety are too much work and not fun for you?
No, because this isn't remotely comparable to weekend hobby projects. What a weird question.
> No, no you are not, guaranteed. “Polishing” means caring about every detail to make it perfect. If you’re letting the LLM make most of it, by definition it’s not polished.
I guess we have different definitions of "polished" then.
I agree. But those also don’t need:
> intuitive UI, logging, error handling, documentation, packaging, versioning, containerization, etc. is so tedious.
Some of that, sure, but not all of it. Either it’s a weekend hobby project or it’s not, and your description is conflating both. A hobby is something done for fun.
Thanks.
That's why it was valuable.
All things worth doing are hard.
I always have a hard time taking this complaint seriously, because the solution is absolutely trivial. Write a snippet. Have you really been out there, year after year, rewriting the same shit from scratch over and over? Just make a snippet. Make it good and generic and save it. Whenever you need to do something repeated on a new project, copy it (or auto-expand if you use it that often) and adapt. Snippet managers are a thing.
If you want to use LLMs to help express something you don’t know the words for in English then that is a good use for LLMs, if it’s called out. Otherwise your messages scream LLM bot to native speakers.
“You’re absolutely right”, “That hits different”, “Good call!” “–“ are all classic LLM giveaways.
I’m not a moderator here, so you don’t have to listen to me either way.
For myself, I’ve always enjoyed “getting my hands dirty” with code, and the advent of LLMs has been a boon. I’m retired from 34 years of coding (and managing), and never skipped a beat. I’ve released a few apps since retiring. I’m currently working on the first app that incorporates a significant amount of LLM assistance. It’s a backend admin tool, but I’ll probably consider using the same methodology for more public-facing stuff in the future.
I am not one to just let an LLM write a whole app or server, unsupervised (I have control issues), but have allowed them to write whole functions, and help me to find the causes of bugs.
What LLMs have given me, is a decreased hesitance to trying new things. I’ve been learning new stuff at a furious rate. My experience makes learning very fast. Having a place to ask questions, and get [mostly] good answers (experience helps me to evaluate the answers), is a game-changer.
> “A ship in harbor is safe, but that is not what ships are built for.” –John A. Shedd
[0] https://littlegreenviper.com/miscellany/thats-not-what-ships...
I’m not an AI hater but I do see this as evidence of LLMs being susceptible to chasing trends as much as people.
Next.js with server-rendered React is not a stack that an experienced web developer would have recommended for a “clean” solution to a collection of financial calculators. It’s the answer you’d get if you asked for the stack that’s trending the most lately.
These are all quite reliable well-understood components, and far from "chasing trends" IMO.
No, it didn't. In fact, your job shifted from code writer to code fixer.
The years "away" gave me an unusually clear picture of what problems actually need solving vs what's technically interesting to build. Most devs early in their careers build solutions looking for problems. Coming back after working in a specific domain, I had the opposite - years of watching people struggle with the same friction points, knowing exactly what the output needed to look like.
What I'd add to the "two camps" discussion below: I think there's a third camp that's been locked out until now. People who understand problems deeply but couldn't justify the time investment to become fluent enough to ship. Domain experts who'd be great product people if they could prototype. AI tools lower the floor enough that this group can participate again.
The $100 spent on Opus to build 60 calculators is genuinely good ROI compared to what that would have cost in dev hours, even for someone proficient. That's not about AI replacing developers - it's about unlocking latent capability in people who already understand the problem space.
Feel like forums have turned into a grand Turing Test.
There are people here saying "I can finally get all my ideas done!" Sure, if they are really important enough, I guess. But high technology is much, much less important to me than it is to my employer, or probably to others here on HN. I can only be concerned with the paycheck at this point. And at this point, they are happy that I can read documentation, write code, read documentation, write code, and they don't care how it gets done. (For what I am working on, though, I'd just skip the AI training step.)
With that in mind, I like to use PLs as tools to clarify thinking. There are others who see PLs and their accompanying tools as friction against their goals, but my friction is understanding the problems I am trying to solve. So, while taking the adventure into automated tooling might be interesting, it doesn't remove that friction (it just adds the feeling that I have to read more potentially garbage code).
I don't think it will be; a vibe coder using Gas Town will easily spit out 300k LoC for a MVP TODO application. Can you imagine what it will spit out for anything non-trivial?
How do you even begin to approach remedying that? The only recourse for humans is to offer to rebuild it all using the existing features as a functional spec.
There are cases where that will be the appropriate decision. That may not be every case, but it'll be enough cases that there's money to be made.
There will be other cases where just untangling the clusterfuck and coming up with any sense of direction at all, to be implemented however, will be the key deliverable.
I have had several projects that look like this already in the VoIP world, and it's been very gainful. However, my industry probably does not compare fairly to the common denominator of CRUD apps in common tech stacks; some of it is specialised enough that the LLMs drop to GPT-2 type levels of utility (and hallucination! -- that's been particularly lucrative).
Anyway, the problem to be solved in vibe coding remediation often has little to do with the code itself, which we can all agree can be generated in essentially infinite amounts at a pace that is, for all intents and purposes, almost instantaneous. If you are in need of vibe coding disaster remediation consulting, it's not because you need to refactor 300,000 lines of slop real quick. That's not going to happen.
The general business problem to be solved is how to make this consumable to the business as a whole, which still moves at the speed of human. I am fond of a metaphor I heard somewhere: you can't just plug a firehose into your house's plumbing and expect a fire hydrant's worth of water pressure out of your kitchen faucet.
In the same way, removing the barriers to writing 300,000 lines isn't the same as removing the barriers to operationalising, adopting and owning 300,000 lines in a way that can be a realistic input into a real-world product or service. I'm not talking about the really airy-fairy appeals to maintainability or reliability one sometimes hears (although, those are very real concerns), but rather, how to get one's arms around the 300,000 lines from a product direction perspective, except by prompting one's way into even more slop.
I think that's where the challenges will be, and if you understand that challenge, especially in industry- and domain-specific ways (always critical for moats), I think there's a brisk livelihood to be made here in the foreseeable future. I make a living from adding deep specialist knowledge to projects executed by people who have no idea what they're doing, and LLMs haven't materially altered that reality in any way. Giving people who have no idea what they're doing a way to express that cluelessness in tremendous amounts of code, quickly, doesn't really solve the problem, although it certainly alters the texture of the problem.
Lastly, it's probably not a great time to be a very middling pure CRUD web app developer. However, has it ever been, outside of SV and certain very select, fortunate corners of the economy? The lack of moat around it was a problem long before LLMs. I, for example, can't imagine making a comfortable living in it outside of SV engineer inflation; it just doesn't pay remotely enough in most other places. Like everything else worth doing, deep specialisation is valuable and, to some extent, insulating. Underappreciated specialist personalities will certainly see a return in a flight-to-quality environment.
> If you are in need vibe coding disaster remediation consulting, it's not because you need to refactor 300,000 lines of slop real quick. That's not going to happen.
My experience as a consultant to business is that they only ever bring in consultants when they need a fix and are in a hurry. No client of mine ever phoned me up to say "Hey, there, have you any timeslots next week to advise on the best way to do $FOO?", it's always "Hey there, we need to get out an urgent fix to this crashing/broken system/process - can we chat during your next free slot?".
> Like everything else worth doing, deep specialisation is valuable and, to some extent, insulating.
I dunno about this - depends on the specialisation.
They want a deep specialist in K8s? Sure, they'll hire a consultant. Someone very specialised in React? They'll hire a consultant. C++ experts? Consultants again.
Someone with deep knowledge of the insurance industry? Nope - they'll look for a f/timer. Someone with deep knowledge of payment processing? No consultant, they'll get a f/timer.
No, that's fair, and I think you're right about that. But refactoring 300,000 lines 'real quick' isn't going to happen, regardless of that. :)
> They want a deep specialist in K8? Sure, they'll hire a consultant. Someone very specialist in React? They'll hire a consultant. C++ experts? Consultants again.
I implicitly had narrow technical specialisations in mind, albeit including ones that intersect with things like "insurance industry workflows".
That's my worry. Might be put off a few years, but still...
For what I am vibing my normal work process is: build a feature until it works, have decent test coverage, then ask Claude to offer a code critique and propose refactoring ideas. I'd review them and decide which to implement. It is token-heavy but produces good, elegant codebases at scales I am working on for my side projects. I do this for every feature that is completed, and have it maintain design docs that document the software architecture choices made so far. It largely ignores them when vibing very interactively on a new feature, but it does help with the regular refactoring.
In my experience, it doubles the token costs per feature but otherwise it works fine.
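A rough sketch of how that critique pass could be scripted rather than run interactively, assuming the official anthropic Python SDK and an ANTHROPIC_API_KEY in the environment; the model id and the idea of diffing against main are illustrative placeholders, not the commenter's actual setup:

    # Hypothetical: collect the feature diff and ask Claude for a critique.
    # Assumes the `anthropic` SDK is installed and ANTHROPIC_API_KEY is set.
    import subprocess
    import anthropic

    def critique_feature(base_ref: str = "main") -> str:
        # Diff of the current branch against the base branch.
        diff = subprocess.run(
            ["git", "diff", base_ref],
            capture_output=True, text=True, check=True,
        ).stdout

        client = anthropic.Anthropic()
        response = client.messages.create(
            model="claude-sonnet-4-5",   # placeholder model id
            max_tokens=2000,
            messages=[{
                "role": "user",
                "content": "Critique this diff and propose refactorings:\n\n" + diff,
            }],
        )
        return response.content[0].text

    if __name__ == "__main__":
        print(critique_feature())

The interactive session described above does the same job; the point is only that the critique is a separate, repeatable pass after each feature.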
I have been programming since I was 7 - 40 years ago. Across all tech stacks, from barebones assembly through enterprise architecture for a large enterprise. I thought I was a decently good coder, programmer and architect. Now I find the code Claude/Opus 4.5 generates for me to be, in general, of higher quality than anything I ever made myself.
Mainly because it does things I'd be too tired to do, or would never bother with - because why expend energy on refactoring something that is working perfectly and not going to be developed further?
Btw, it's a good teaching tool. Load a codebase or build one, and then have it describe the current software architecture, propose changes, explain their impact, and so on.
This is a possibility in very well-trodden areas of tech, where the stack and the application are both banal to the point of being infinitely well-represented in the training.
As far as anything with any kind of moat whatsoever? Here, I'm not too concerned.
The way they do, which is? I've skimmed the comments, and a lot of them are hate and hostility towards OP's project and towards coders "without skill" in general, plus denial, because there's no way anything vibe-coded could have worked. At best, there is strong tribalism on both ends.
You might see more opposing views in this thread, but if you browse this site often you'll see both sides.
Those embracing it heavily do not see the nuances of carefully creating maintainable solutions, planning for and recognizing tech debt, and knowing where it's acceptable short term. They are also missing the theory-building behind what is being created. Sure, AI models might get even better and could solve everything. But I think it's naive to think that will be generally good for 90% of the population, including people not in tech.
Using these models (text or image) devalues the work of everyone in more than one way. It is harmful for creative work and human expression.
This tech, and a lot of tech, especially tech built by large corporations for profit extraction and human exploitation, is very unlikely to improve lives at a population level long term. The same can be said of a lot of tech (e.g. social media = powerful propaganda). The goal of the people creating these models is to not need humans for their work. At which point I don't know what would happen. Kill the peasants?
Also, we are still designing systems and have to be able to define the problem properly. At least in my company, when we look at the velocity of delivering projects, it is barely up since AI, because the bottlenecks are elsewhere.
Do you truly believe it won't get better, maybe even better at whole system design and implementation than people?
What are you calling "growth"? Adoption, or LLM progress? LLM progress has objectively slowed down, and for rather obvious reasons. The leaps from GPT-2 to GPT-4 can't be reprised forever.
Literally yesterday I remarked to my tech friends how fun coding with Copilot is. I actually make forward progress now, and I understand all that the agent is doing.
For me, coding is an enjoyable means to an end. I do enjoy the process, but I enjoy the results more.
They've even got their own slogan: "you're probably just not prompting it properly"
That's the same energy as telling other professions to "just learn to code, bro" once they are displaced by AI.
But I guess it doesn't feel nice once the shoe is on the other foot, though. If nobody values the quality of human art, why should anybody value the quality of human code?
It's the exact same neoliberal elites who told everyone to code one year and told them they'd all be automated of a job the next year.
I dunno who you think you're railing against but it isnt me.
Just like SEO experts, marketing experts, trading bots and crypto experts, the vibe coders will be weeded out.
I'm in this field, and my system was heavily built with Claude, though not via vibe coding - more like a junior supporting me. I do not see any person connecting a vibe-coded bot to a real account soon, since if it's about real money, people will hesitate. And if you have blown up one account with your vibe-coded bot while you are not a professional dev, you will lose interest very quickly - such systems do not contain "just a few thousand lines of code". Sure, you could speed up development massively and "hit the rock sooner than later" when going vibe-coded here :-D
It's a miracle. Simply wouldn't have been done before. I think we'll see an explosion of software in small and midsize companies.
I admit it may be crappy software, but as long as the scope is small - who cares? It certainly is better than the janky manual paper processes, excel sheets, or just stuff in someone's head!
Funnily enough, Excel is the quintessential example of a fourth-generation language, IDE, and database, and it's the only one from that era, aside from SQL, that actually succeeded. It's software, just like what you're building now, and just like what you're building now there are good points and bad points about it. The tradeoffs are different between the JS / Python code you're likely spinning up now vs. the Excel code that was being spun up before, but they rhyme.
Most developers are too full of themselves; in fact, most of us are a bunch of pretentious pricks. It is no wonder people are happy to be able to get what they want without our smugness and pretentiousness. Too bad some of us are not like that and will end up unemployed anyway in the next few years.
And to be honest, even the tiny apps I'm doing I wouldn't have been able to do without some background in how frontend / backend should work, what a relational database is, etc. (I was an unskilled technical PM in the dotcom boom in the 2000s, so I at least know my way around a database a little. I know what these parts of tech CAN do, but I didn't have the skills to make them do it myself.)
For me, that is nightmare fuel. We already have too much software! And it's all one framework or host app version update away from failure.
1. Invoice billing review. Automated 80% of what was a manual process by providing AI suggestions in an automated way. Saved 3 hours per day of a manager's time. Increased topline by 10%. Dev time: 1 day
2. Data dashboards. We use a janky SaaS that does not have APIs. Automated a scraper to log in, download the reports daily, parse them, upload to a database, and build a dashboard (rough sketch below). It used to take my associate 3 hours per week to do this in a crappy spreadsheet. Now I have it in a perfect database, refreshed much more frequently. Dev time: 4 hours.
We are attacking little problems all across the business now.
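The scraper loop in item 2 is roughly this shape; a minimal sketch assuming the SaaS has a plain login form and a CSV export, with all URLs, field names, and the table schema invented for illustration:

    # Hypothetical daily job: log in, pull the CSV report, load it into SQLite.
    import csv
    import io
    import sqlite3
    import requests

    LOGIN_URL = "https://example-saas.invalid/login"        # made up
    REPORT_URL = "https://example-saas.invalid/report.csv"   # made up

    def fetch_and_load(user: str, password: str, db_path: str = "reports.db") -> int:
        session = requests.Session()
        session.post(LOGIN_URL, data={"user": user, "pass": password}).raise_for_status()
        report = session.get(REPORT_URL)
        report.raise_for_status()

        rows = list(csv.DictReader(io.StringIO(report.text)))

        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS daily_report "
                     "(report_date TEXT, metric TEXT, value REAL)")
        conn.executemany(
            "INSERT INTO daily_report VALUES (?, ?, ?)",
            [(r["date"], r["metric"], float(r["value"])) for r in rows],
        )
        conn.commit()
        conn.close()
        return len(rows)

Run something like this from cron once a day and point the dashboard at the database.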
A MIRACLE!!!!
I wouldn't want to hassle customers who have fully paid up accounts
I think you also need to compare it to what was already there. No QA on the humans. Done off the side of their desk with no oversight, process, or checking. Huge amounts of manual errors.
The new solution just needs to be better than the old one, it doesn't need to be perfect.
(But I 100% agree that I wouldn't let AI live against customers. It is helping us build automations faster, and doing a "little" thinking on recommendation rules that would be very hard to implement without something highly structured, which would be frankly impossible in our environment.)
No. The bar is "miracle": it can cure cancer, it can replace all developers, and so on. The bar is much higher than existing manual processes. It absolutely needs to be perfection to match the lofty claims.
I guess vibe-coding cleanup firms and offensive security researchers are plotting to find the bugs that cost firms millions of dollars, or the one that creates a dreadful data breach.
is there a term for that?
AI at our fingertips, accessible and useful, that's just a tool, that's not redefining us as an industry and denying people's jobs – that's an asset. (I used an em dash to prove I am not AI, as apparently double dash is now a sign of AI text!)*
(*) case in point, the situation is _TIRING_.
"Comment NEAT to receive the link, and don't forget to connect so I can email you" -- this is the most infuriating line ever.
In comms, they have something like a 1:4 ratio of design to validation engineers. Defence is slightly different, as it depends on the company, but generally the tolerance for bugs is zero. Let's not get started on the HF trading folks and their risk appetite!
There's a lot of room for software engineers. Most FPGAs are SoC devices now, running some form of embedded Linux doing high-level task management and networking. Provided you know enough Verilog to know your way around, you'll be fine. You're also in a space where most engineers I know are preparing to retire in the next 5-10 years, so there will be a panic which will ripple across industries.
FPGA basics: https://nandland.com/fpga-101/
Verilog basics: https://hdlbits.01xz.net/wiki/Main_Page
Projects: https://www.hackster.io/fpga/projects
Security engineers will have jobs until software is perfectly secure... and that is going to be a while.
I do not use LLMs at all to do my job, and it is unlikely I ever would. Clients pay me -after- they've had all their favorite LLMs take a pass.
And indeed the vibe coders will just create a lot more security issues
Might be never, or only if the software is not used at all.
The only perfectly secure software is no software at all.
Well, at least not connected to the internet?
A lot of work was tedious, painstaking grind, but the reward at the end was considerable.
AI has completely annihilated all of the joy I got out of the process, and everything that attracted me to it with such abandon as an adolescent and a teenager. If someone had told me it was mostly slop curation, I would have stayed in school, stuck to my philosophy major, and who knows -- anything but this. I'm sure I'd have got reasonably far in law, too, despite the unpropitious time to be a JD.
I'm still working on my own small closed source projects, building them the way I want to, like a gameboy emulator - and I've gotten a lot of joy from those.
No matter how 'senior' you are, when you lose touch with the code, you will, slowly, lose the ability to audit what LLMs spit out, while the world moves on. You got the ability to do that by banging your head against code the hard, "pre-AI" way, perhaps for decades, and if you don't do the reps, the muscle will atrophy. People who think this doesn't matter anymore, and you can just forget the code and "embrace exponentials" or whatever, are smoking the good crack; it _is_ about the code, which is exactly why LLMs' ability to write it is the object of such close examination and contestation.
Folks who realise this will show to advantage in the longer run. I don't mean that one shouldn't use LLMs as an accelerant -- that ship has sailed, I think. However, there is a really good case to be made for writing a lot by hand.
These existed before but the culture surrounding AI delivered a double dose of both.
I have no problems with LLMs themselves or even how they are used but it has developed its own religion filled with dogma, faith based reasoning and priests which is utterly toxic.
The tools are shoved down our throats (thanks to the priesthood, AI use is now a job performance criteria) and when they fail we are not met with curiosity and a desire to understand but with hostility and gaslighting.
One day I might start a consultancy business that only does artisanal code. You can hire me and my future apprentices to replace AI code with handcrafted code. I will use my company to teach the younger generation how to write code without AI tooling.
That's an interesting perspective. I guess it depends on what you want and how low the stakes are. Artisanal coffee, sure. Artisanal clothing, why not? Would you want an artisanal MRI machine? Not sure. I wouldn't really want it "hand crafted"; I just want it to do its job.
The author even insists that AI was used because of their poor English, which is the standard excuse on Reddit as well. But clearly, this is not a translation:
> Curious if others have similar stories. Anyone else come back to building after stepping away?
This is bog-standard AI slop to increase engagement.
Look at the blog on their linked site as well. AI-generated posts.
This has been posted here for SEO. This is a business venture.
It's times like this when I think HN needs a post downvote button. Flagging might not be quite appropriate here, but I hate to see this content cluttering up the front page.
Thankfully LLMs are still very stupid. Especially when it comes to security engineering, my specialty, so looks like I have a while yet.
Perhaps if we didn’t have deep layer cakes of frameworks and libraries, people would feel like they can code with or without AI. Feels like AI is going to hinder any efforts to address complexity and justify us living with unnecessary complexity simply because a machine can write the complex, hard to understand, brittle code for us.
I loaded the lowest-level piece of software I wrote in the last 15 years - a memory-spoofing aimbot PoC exploiting architectural issues in x86 (things like memory breakpoints set on logical memory, not hardware addresses, allowing it to read memory without tripping kernel-level detection tools; the ability to trigger page faults on pages where the PoC was hiding to escape detection; low-level gnarly stuff like that). I asked it to clean up the codebase and propose why it would not work under the current version of Windows. It did that pretty well.
Lower-level stuff does of course exist, but not a whole lot, IMHO. I would not assume Claude will struggle with kernel-level stuff at all. If anything, this is better documented than the over-abstracted mainstream stuff.
The cost of hallucinations, though - you potentially have a stronger point there. It wouldn’t surprise me if that fails to sway some decision makers, but it does give the average dev a bit more ground to work with.
It's not so great for the one or two but fantastic for everybody else.
With AI, you are no longer a developer, you're a product manager, analyst, or architect. What's neat about this, from a business perspective, is that you can in effect cut out all your developers and have a far smaller development workforce consisting of only product managers, analysts, and architects whom you call "developers" and pay developer salaries to. So you save money twice: once on dev workforce downsizing, and again on the pay grade demotion.
I'm currently exploring domain-specific languages aimed at writing web applications. I've been particularly interested in data flowing through pipelines, much like bash. I have spent quite a bit of time on this, and I'm definitely not vibe coding, but I've probably only written 1-2% of the code in these projects.
It is so much work to build out a new language with a surrounding ecosystem of tooling. Not even five years ago, this would necessarily have been a full-time, multi-year endeavor, or at least required a team of researchers. Now I can tinker away in my off hours.
This is what I am exploring:
https://williamcotton.com/articles/the-evolution-of-a-dsl
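To make the pipeline idea concrete for readers who haven't seen the article: this is not the author's DSL (that is described at the link above), just a toy sketch in plain Python showing the shape of data flowing through a pipeline of small steps:

    # Toy pipeline combinator: `|` chains steps, calling the pipeline runs them.
    from functools import reduce

    class Pipeline:
        def __init__(self, *steps):
            self.steps = steps

        def __or__(self, step):
            # Reuse `|` as the pipe operator, bash-style.
            return Pipeline(*self.steps, step)

        def __call__(self, value):
            return reduce(lambda acc, step: step(acc), self.steps, value)

    # A hypothetical request handler built as a pipeline of tiny functions.
    parse_query = lambda req: req.get("q", "")
    normalise = lambda q: q.strip().lower()
    to_response = lambda q: {"status": 200, "body": f"searched for: {q}"}

    handler = Pipeline() | parse_query | normalise | to_response
    print(handler({"q": "  Hello World  "}))
    # {'status': 200, 'body': 'searched for: hello world'}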
Did I not craft the syntax and semantics of these languages?
What about the phone in your hand, did you design that?
HN loves to believe they are the noble few - men and women of math and science, driven by nothing but the pure joy of their craft
But this whole AI thing has been super revealing. Almost everyone here is just the same old same old, only that now that the change is hitting close to home, you’re clutching your pearls and lamenting the days when devs were devs
The younger generation born into the AI world is going to leave you in the dust because they aren’t scared of it
My math teacher used to say that people felt this way about… calculators. Imagine that.
Or to phrase it more succinctly: if you are in camp 2 but don't have the passion of camp 1, you are a threat for the long term. The reverse is dangerous too, but can be offset to a certain extent with good product management.
This is a solved problem with any large, existing, older codebase. The original writers are gone and new people come on all the time. AI has actually helped me get up to speed in new codebases.
Is this also true of all the third-party code used by their solution? Should they make all the libraries and APIs they use their own, in exactly the form they need to be, according to their deep expertise? If not, why not?
If so, does this extend to the rest of the stack? Interpreters, OSes, drivers? If not, why not?
This isn't a trick question, BTW. It's a genuine attempt to get to the rationale behind your (and the GP's) stance on this.
In particular, the GP said:
> Or to phrase it more succinctly: if you are in camp 2 but don't have the passion of camp 1, you are a threat for the long term.
That hints I think at their rationale, that their stance is based on placing importance on the parts of software development that they enjoy, rather than any logical basis.
This happens, but very rarely compared to changes in your own code base. If a library breaks, you can usually find an alternative, but even in that case you need to know how to modify your own code.
The difference with generated code is that you are tasked to maintain the generated code.
I don't think this is true, but say we accept it.
> The difference with generated code is that you are tasked to maintain the generated code.
Is this a task that LLMs are incapable of performing?
That's what people tend to report, yes.
I think for a lot of minor things, having AI generate stuff is okay, but it’s rather astounding how verbose and sometimes bizarre the code is. It mostly works, but it can be hard to read. What I’m reading from a lot of people is that they’re enjoying coding again because they don’t have to deal with the stuff they don’t want to do, which...I mean, that’s just it isn’t it? Everyone wants to work on what they enjoy, but that’s not how most things work.
Another problem is that if you just let the AI do a lot of the foundational stuff and only focus on the stuff that you’re interested in, you sometimes just miss giant pieces of important context. I’ve tried reading AI driven code, sometimes it makes sense, sometimes it’s just unextensible nonsense that superficially works.
This isn’t tech that should replace anything and needs to be monitored judiciously. It can have value, but what I suspect is going to happen is we are going to have a field day with people fixing and dealing with ridiculous security holes for the next decade after this irrational exuberance goes away. It should be used in the same way that any other ML technique should be. Judiciously and in a specific use case.
Said another way, if these models are the future of general programming, where are the apps already? We’re years into this and where are they? We have no actual case studies, just a bunch of marketing copy and personal anecdotes. I went hunting for some business case studies a while ago and I found a Deloitte “case study” which was just pages of “AI may help” without any actual concrete cases. Where are the actual academic studies showing that this works?
People claiming AI makes them code faster remind me of how Apple, years ago, demonstrated in multiple human interaction studies that the mouse is faster, yet test subjects all thought keyboard shortcuts were faster [1]. Sometimes objective data doesn’t matter, but it’s amusing that the whole pitch for agentic AI is that it is faster, when the evidence for this is murky at best.
[1] https://www.asktog.com/TOI/toi06KeyboardVMouse1.html
This is such marketing speak. The words mean nothing, they’re just a vague amalgamation of feelings. “Vibes”, if you will.
If you “love delivering value and solutions”, go donate and volunteer at a food bank, there’s no need for code at any point.
> The happy consumer and the polished product
More marketing speak. If you are using LLMs to write your code, by definition your product isn’t “polished”. Polishing means poring over every detail with care to ensure perfection. Letting an LLM spit out code you just accept is not it.
The word you’re looking for is “shiny”, meaning that it looks good at a glance but may or may not be worth anything.
I get the argument. Sometimes I really enjoyed the actual act of finally figuring out a way to solve a problem in code, but most of the time it was a means to an end, and I'm achieving that end far more often now via AI tooling.
I’m not fussed about the exact term, as long as it points to something real and stands on equal semantic footing with the alternative.
Note how they described two areas of focus (what you “love”): “coding” and “delivering value/solutions”.
You can be a “coder” or a “programmer”, no one is a “deliverer of value/solutions”.
“Coding” is explicit; it’s an activity you can point at. “Delivering value/solutions” is vague; it’s corporate speak meant to sound positive without committing to anything. It doesn’t represent anything specific or tangible. It doesn’t even reference software, though that’s what it is, to make it sound broader than it is. You could say “using and releasing apps”, for example, though proponents may feel that’s reductive (but then again, so is “coding”).
Again, what’s in contention here isn’t the exact term, but making sure it’s one that actually means something to humans, instead of marketing speak.
I actually think this reveals more about you than you might realise. A _lot_ of people enjoy being able to help people resolve problems with their skills. "Delivering value" is marketing speak, but it specifically means helping people in ways that are valuable.
A lot of people who work in software are internally motivated by this. The act of producing code may (or may not) also be enjoyable, but the ultimate internal motivation is to hand over something that helps others (and the external motivation is obviously dollars and cents).
There is also a subset of people who enjoy the process of writing code for its own sake, but it's a minority of developers (and dropping all the time as tooling - including LLMs - opens development to more people).
> If you are using LLMs to write your code, by definition your product isn’t “polished”. Polishing means poring over every detail with care to ensure perfection.
You can say the same thing about libraries, interpreters, OSes, compilers, microcode, assembly. If you're not flipping bits directly in CPU registers, you're not poring over every little detail to ensure perfection. The only difference between you and the vibe coder who's never written a single LoC is the level of abstraction you're working at.
Edit:
> If you “love delivering value and solutions”, go donate and volunteer at a food bank, there’s no need for code at any point.
I think this says maybe a lot about you too, as many people also donate their time and efforts to others. It may be worth some self-reflection to see whether your cynicism has become nihilism.
I did use to volunteer at a food bank, but I used that example only because it’s quick and simple, no shade on anyone who doesn’t. I stopped for logistical reasons when COVID hit.
I have used the set of skills I’m good at to help several people with their goals (most were friends, some were acquaintances) who later told me I changed their lives for the better. A few I no longer speak to, and that’s OK.
Oh, and before I became a developer, I worked in an area which was very close to marketing. Which was the reason I stopped.
So yeah, I know pretty well what I’m talking about. Helping others is an explicit goal of mine that I derive satisfaction from. I’d never describe it as “delivering value/solutions” and neither would any of the people I ever helped, because that’s vague corporate soulless speech.
How do you feel about the fact that OpenAI et al. have slurped up all your code and are now regurgitating it for $20/month?
I also don’t think “but it wouldn’t be viable otherwise” is a valid defence.
I don’t see what that has to do with the conversation, though. If your point is about the free/$20, that doesn’t really factor into my answer.
While I commend your voluntary efforts, I don't think it lends any more weight to your original comment. In fact, I think this comment highlights a deep cynicism and I think a profound misunderstanding of the internal motivations of others and why "delivering value" resonates with others, but rings hollow to you.
In the end, this debate is less about LLMs, and more about how different developers identify. If you consider software to be a craft, then mastery of the skillset, discipline, and authorship of the code is key to you.
If you consider software to be a means to an end, then the importance lies in the impact the software has on others, irrespective of how it's produced.
While you are clearly in the former camp, it is undeniable that impact is determined entirely by what the software enables for others, not by how it was produced. Most users never see the code, never care how it was written, and judge it only by whether it solves their problem.
A street sweeper “delivers value” in the form of a clean street. A lunch lady at a school “delivers solutions” in the form of reducing hunger in children.
There’s nothing wrong with wanting to do something for others, the criticism is of the vague terminology. The marketing speak. I’ve said that so many times, I’d hope that’d been clear.
> While you are clearly in the former camp
You’re starting from wrong assumptions. No, I’m not “in the former camp”, I find the whole premise to be a false dichotomy to begin with. Reality is a spectrum, not a binary choice. It’s perfectly congruent to believe a great product for customers is the goal, and that the way to achieve it is through care and deliberate attention to the things you do.
This isn’t a critique of language - it’s a category error. You’re confusing the mechanism with the purpose.
In your examples, a street sweeper or lunch lady (Google says this is an antiquated US term for canteen worker?) do indeed deliver value, clean streets and nourished students. That's the value they're paid to provide. Those are the outcomes we care about, and whether the sweeper uses a broom or Bucher Citycat is only of interest in that one allows the sweeper to provide more value at lower cost, eg more metres of clean road per dollar.
The same is true of the canteen worker, who may use Rationals and bains-marie to serve more hot meals at lower cost than cooking each meal individually.
> You don’t “deliver solutions”, you write software (or have it written for you).
Saying you "write software", not deliver solutions actually indicates that you don't understand the profession you're in. It mistakes the process for the outcome. Writing code is one means among many for achieving an outcome, and if the same outcome could be achieved by the business without software, the software would be dropped instantly. Not because care doesn’t matter, but because the purpose was never the code itself.
> It’s perfectly congruent to believe a great product for customers is the goal, and that the way to achieve it is through care and deliberate attention to the things you do.
But according to you, care and deliberate attention (software as craft) are the only way. An absolutist position. But most software that matters is imperfect, built over time, touched by many hands, and full of compromises. Yet it still delivers enormous value. That’s evidence that outcomes, not purity of process, are what deliver value and define success in the real world.
No! That is not what I’m saying! How can you argue my position is an absolute when I just explicitly described it as a spectrum?!
However, I do believe you’re arguing in good faith, I just don’t think we’re on the same page. I wish we were, as while I think we might still disagree, I also believe we’d have an interesting conversation. Probably more so in person.
Unfortunately, I have to go get some work done so I’m unable to continue as of now. Still, instead of leaving you hanging, I wanted to thank you for the respectful conversation as well as your patience and I believe genuine effort in trying to understand my position.
I'm extremely diligent about vetting all code in my repos. Everything is thoroughly tested and follows the same standards that were in my codebase before the invention of LLMs. I'm not "vibe coding". You're making assumptions because of your negative emotional reaction to LLMs.
Do you see why that’s marketing speak? You’re using vague terms which can be applied to anything. It avoids commitment and makes whatever you do seem grandiose. That’s marketing.
A few years ago, every app developer and designer was a “story teller”.
You don’t “deliver solutions”, you write software (or have it written for you).
Yes, it's exactly the same. Is your problem the fact that this gets you off the high horse?
> More marketing speak. If you are using LLMs to write your code, by definition your product isn’t “polished”.
This doesn’t make any sense. Polished to who? The end user? You can absolutely use AI to polish the user experience. Whether coding by hand or AI the most important aspect of polish is having someone who cares.
There are people who would code whether it was their career or not; I'm not one of those people. I fell into software development in order to make money, and if the money stopped then I would stop. I love building and selling products; if I can't do that then I have no interest in programming. I'm not interested in machines, CPUs, etc. I'm interested in products, liaising with customers, delivering solutions, improving things for users, etc. You think there is no distinction there? Again, there are people who code for fun, I'm simply not one of them...
maybe "MBA news" would be better suited?
More likely that step is just skipped and replaced with thoughts and prayers.
Can't the customer now just skip you and generate a product for himself via AI?
edit: to stay on the larger topic, I haven't been swayed much one way or the other. ~90% of the code I need existed a decade ago in the form of reusable modules. Anything new is closer to pseudo-code, an amplifier or sandbox isn't something I'm that interested in.
I like using my software engineering skills to solve people's problems. I don't do coding for its own sake - there's always a thing I'm trying to implement for someone.
If this were about grammar, it would be appropriate to translate something you wrote, not use generative AI to create it.
This whole thing is an ad. All the post's sentiments that people are engaging with ("imposter syndrome" etc.) were spit out by a clanker.
What a disheartening start to my morning.
You improve over time. I've been programming for 6 years and I still feel like I'm nowhere near others. That's a completely fine and valid thing to feel.
Many who are considering a career shift away from software due to 'AI disgust' devoted their lives to developing software because they loved the craft. But with AI churning out cheap, ugly, but passable code, it's clear that businesses never appreciated the craft. I hope these folks find an area outside of SWE that they love just as much.
But once these folks find this area, it would be naive to think they won't use software to scratch their itch. In the same way that people who didn't pursue a career in SWE (because they felt under-qualified) are using AI to solve their problems, these folks will now find their own problems to solve with software, even if at first that is not their intention. They probably won't use AI to write the code, but ultimately, AI is forcing everyone to become a product manager.
But what if the business is soulless? As in what if the business you're working on is just milking value out of people through negative patterns which... is ... well a lot of tech businesses these days. Maybe the busywork enabled engineers to be distracted from the actual impact of their work which makes people demotivated.
I have always found management to be just a silly exercise in a day full of meetings. I like to make things. I could retrain, but the salary drop would be very hard. I hope to find one last gig and have enough to retire. I still get that spark of joy when all the tests pass.
Can definitely understand the reluctance people feel around it. Especially when they’ve invested years into it and have their livelihood on the line
I’m also quite reluctant to publish any of it. Doesn’t feel right to push code I don’t fully understand so mostly personal projects for now
Cool project!
That's creating a new inefficient, socially destructive, environmentally damaging hammer because solving the real problem doesn't sell well.
I'll be happy when we solve THAT problem.
Nit: it seems like the graph for the compound interest calculator should start at year 0 rather than year 1.
Also, it might be nice to have a way to change the starting year to the actual year you want to start (such as the current year).
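To spell out the year-0 nit: the series should begin with the untouched principal before any interest is applied, then compound from there. A minimal sketch; the parameter names and the yearly-contribution handling are assumptions, not the site's actual code:

    # Compound-interest series including the year-0 starting point.
    def compound_interest_series(principal, annual_rate, years, yearly_contribution=0.0):
        balance = principal
        series = [(0, round(balance, 2))]    # year 0: just the principal
        for year in range(1, years + 1):
            balance = balance * (1 + annual_rate) + yearly_contribution
            series.append((year, round(balance, 2)))
        return series

    # $10,000 at 5% for 3 years, no contributions:
    # [(0, 10000), (1, 10500.0), (2, 11025.0), (3, 11576.25)]
    print(compound_interest_series(10_000, 0.05, 3))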
What can I say... If you used a calculator to get an answer for sqrt(2), are you back to doing mathematics? It's simpler and more fun than using Newton's method. But it's debatable whether you are actually working on mathematics problems.
For the same reason things like Image Playground/etc seem magical/appealing to non-artists (myself included): we don't know how to do it ourselves, so it feels empowering.
Or more close to home: it's the same reason that developers are so in love with clicking some buttons in the <insert cloud mega provider> dashboard in spite of the costs, lock-in, more costs, yet more costs, and of course the extra costs.
As with those choosing "cloud" services they don't need, here too there will no doubt be a lucrative market to fix the shit once people realise that there's a reason experts charge the way they do.
I'm also now dealing with things that previously would have taken me too long to deal with. For example, I'm actually making a dent in the amount of technical debt I have to deal with. The type of things where previously I maybe wouldn't have taken a week out of my schedule to deal with something that was annoying me. A lot of tedious things that would take me hours/days now can get done in a few prompts. With my bigger projects, I still do most stuff manually. But that's probably going to change over the next months/year.
I'm mainly using codex. I know a lot of people seem to prefer Claude Code, but I've been a happy ChatGPT Plus user for a while, and codex is included with that and seems to do the job. Amazing value for $20/month. I've had to buy extra credit once so far.
The flip side of all this is that waiting for AI to do its thing isn't fun. It's slow enough that it slows me down and fast enough that I can't really multitask. It's like dealing with a very slow build that you have to run over and over again. A necessary evil, but not necessarily fun. I can see why a lot of developers feel like the joy is being sucked out of their lives.
Dealing with this pain is urgent. Part of that is investing in robust and fast builds. Build time competes with model inference in the time stuff takes. And another part is working on the UX of this. Being able to fork multiple tasks at once is hugely empowering. And switching between editing code and generating code needs to get more seamless. It feels too much like I'm sitting on my hands sometimes.
AI is eroding the entry barrier, the cognitive overload, and the hyper-specialization of software development. Once you step away from a black-and-white perspective, what remains is: tools, tools, tools. Feels great to me.
https://youtu.be/JJz5D9txeGA
Otherwise it feels deceptive. Which is surprising given we should judge off intentions and not augmentation (like come on guys this is HN FFS).
This guy's not running any ads on the site, hasn't spammed with multiple posts that I've seen. I still think investment funds/modern stock exchanges are needless parasites upon society but that's just my opinion.
I'll figure out a better way. Thanks for calling it out.
https://news.ycombinator.com/newsguidelines.html
Vibe coded projects can be cool (if they're impressive), articles about using AI can be cool (from the right people), articles about the future of AI can be cool. All of these can sometimes be too much and some of them are just poor projects / articles etc. But they should definitely be allowed; some of them are genuinely interesting / thought provoking.
Someone prompting gpt-4o "Write a nice reply comment for this <paste>" and then pasting it here is never cool. If you can't write in English, you can use Google Translate or even ask an LLM to translate, but not to write a comment for you!
Things are definitely changing around HN compared to when it first started.
It's impossible to tell if this is AI or not. Another version of Poe's law. The only thing to do is assume everything is AI, just like you must assume all posts have ulterior (generally profit-driven) motives, all posters have a conflict of interest, etc.
Maybe the only thing to do is stop trying to understand posters' motivations, stop reading things charitably, stop responding, just look for things that are interesting (and be sure to check sources).
Every spammer and scammer, even a bot, is ultimately controlled by a real person in some sense. That doesn't mean we want their content here.
Anyone who disagrees with the above is just hurt that their manual hyping has been replaced with machines.
OP made a site with a bunch of calculators. Their critics didn’t make that!
It's cool that ChatGPT can stitch these toys together for people who aren't programmers, but 99% of software engineers aren't working on toys in the first place, so we're hardly threatened by this. I guess people who aren't software engineers don't realise that merely making a trivially basic website is not what software engineering is.
"Software engineering" doesn't matter to anyone except to software engineers. What matters is executing that idea that's been gathering dust for ages, or scratching that pain point that keeps popping up in a daily basis.
My response is perhaps a bit raw, but so is the quote above.
Stop with the gate keeping. I've studied CS to understand coding, not to have some sort of pride to build "real software". Knowledge is a tool, nothing more, nothing less.
There are enough developers whose whole job it is to edit one button per week and not much more. And yes, there are also enough developers that actually apply their CS skills.
> but 99% of software engineers aren't working on toys in the first place
Go outside of your bubble. It's way more nuanced than that.
> I guess people who aren't software engineers don't realise that merely making a trivially basic website is not what software engineering is.
Moving goal posts. Always has been.
It's not that I fully disagree with you either. And I'm excited about your accomplishments. But just the way it reads... man...
I guess it hits me because I used to be disheartened by comments like this. It just feels so snarky as if I am never good enough.
The vibe is just "BUH BUH BUH and that's it." That's how it comes across.
And I've matured enough to realize I shouldn't feel disheartened. I've followed enough classes at VUSEC, with all their Rowhammer variations and x86-64 assignments, to have had a taste of what deep tech can be. And the thing is, it's just another skill. It doesn't matter if someone works on a web app or a deep game-programming problem.
What matters (to me at least) is that you feel the flow of it and that you're going somewhere, touching an audience. Maybe his particular calculator app has a better UX for some people. If that's the case, then his app is a win. If your game touches people, then that's a win. If you feel alive because you're doing complex stuff, then that's a win (in the style of "A Mathematician's Apology"). If you're doing complex stuff and you feel it's rough and you're reaching no one with it, it's neutral at best in my book (positive: you're building a skill; negative: no one is touched, not even you).
Who cares what the underlying technology is. What's important is usability.
Feel free to point out where I moved goal posts. To say that I moved goal posts would imply that at one point I stated that creating a trivial website was software engineering. If you're comparing my statement to what some other person said, who made arguments I did not make, then we cannot have any kind of constructive dialogue. At that point you are not talking to me, but talking to an imaginary projection of me meant to make yourself feel better about your argument.
> Stop with the gate keeping.
I'm not gatekeeping anything. You can disagree with my descriptive terms if you want, but the core point I'm trying to get across is: what people are doing with Claude can not replace what I do. I would know, I've tried extensively. Development is a lot of hard work and I would love it if my job were easier! I use LLMs almost every day, mostly for trivial tasks like reformatting text or writing advanced regex because I can't be bothered to remember the syntax and it's faster than looking it up. I also routinely pose SOTA models problems I'm working on to have them try to solve them, and I am routinely disappointed by how bad the output is.
So, in a thread where people were asserting that critics are merely critics because they're afraid of being replaced I pointed out that this is not factually correct, that no, we're not actually afraid of being replaced, because those of us who do "real" engineering (feel free to suggest a different term to substitute for "real" if the terminology is what bothers you) know that we cannot be replaced. People without experience start thinking they can replace us, that the exhilarating taste of coding they got from an LLM is the full extent to the depth of the software engineering world, but in fact it is not even close.
I do think that LLMs fill a useful gap, for projects where the time investment would be too large to learn to code and too unimportant to justify paying anyone to program, but which are simple enough that a non-engineer can have an LLM build something neat for themselves. There is nothing wrong with toys. Toys are a great thing to have in the world, and it's nice that more people can make them[1]. But there is a difference between a toy and what I do, and LLMs cannot do the thing I do. If you're taking "toy" in a derogatory manner, feel free to come up with another term.
[1] To some extent. While accessibility is generally a great thing, I have some misgivings. Software is dangerous. The web is arguably already too accessible, with frameworks enabling people who have no idea what they're doing to make professional-looking websites. These badly-made websites then go on to have massive security breaches that affect millions of users. I wish there was a way to make basic website development accessible, whether through frameworks or LLMs, in a way that did not give people using them the misplaced self-confidence to take on things way above their skill level at the cost of other people's security.
What’s even the point of writing out that first paragraph otherwise?
I was correcting your misguided statement:
> Their critics didn’t make that!
by pointing out that we, among other things, build the libraries that you/Claude are copy-and-pasting from. When you make an assertion that is factually incorrect, and someone corrects you, that does not mean they are threatened.
I'll keep learning and try to make this less of a toy over time. And hopefully I can bring what I've learned from years in investing into my next product to actually help people. Thanks for the perspective.
And if you are thinking enterprise, it would take 2-3 developers, 2 analysts, 2 testers, 1 lead and 1 manager 2-3 months to push something like this. (Otherwise, why would leading banks spend billions and billions on IT development every year? What tangible difference do you see in their websites/services?)
5,000 calculators may look excessive, but in this case it highlights what AI will be capable of in the future - both in terms of quality and quantity.
Well, I don't think all those people are spending their time making simple calculators.
Have you tried this? https://www.investor.gov/financial-tools-calculators/calcula...
$100 seems like a lot. I guess if you think about it compared to dev salaries, it's nothing. But for $10 per month copilot you can get some pretty great results too.
Did fucking AI also write your article?
Edit: I appreciate the quick turnaround. Apologies.
Every other day I see ads of companies saying "use our AI and become a millionaire", this kind of marketing from agentic IDEs implies no need for developers who know their craft, which as said above, isn't the case.
If that’s the bar, there are likely a ton of businesses that should shut down…
This by definition filters out all non-devs, and even many junior devs, as you need to understand deeply whether those tests are correct and cover all the important edge cases, etc.
Plus, when you deploy it, you need to know it was properly deployed and that your DB creds are not on the frontend (see the sketch below).
But mostly no one cares, as there are no consequences for leaking the personal data of your users or whatnot.
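A minimal sketch of the creds point, assuming Flask and Postgres purely for illustration: the browser only ever calls the API endpoint, and the credentials live in the server's environment, never in the shipped JavaScript.

    # Hypothetical backend: credentials stay in server-side env vars.
    import os
    import psycopg2              # assumption: a Postgres database
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Read once at startup from the server environment; never sent to the client.
    DATABASE_URL = os.environ["DATABASE_URL"]

    @app.route("/api/items")
    def list_items():
        conn = psycopg2.connect(DATABASE_URL)
        with conn, conn.cursor() as cur:
            cur.execute("SELECT id, name FROM items")
            rows = cur.fetchall()
        conn.close()
        return jsonify([{"id": r[0], "name": r[1]} for r in rows])

    if __name__ == "__main__":
        app.run(port=8000)

The frontend calls /api/items and never sees DATABASE_URL; pasting the connection string into client-side code is exactly the failure mode being warned about.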
If you just want to build a little web app, or a couple of screens for your phone, you'll probably be fine. (Unless there's money or personal data involved.) It's empowering! Have fun.
But if you're trying to build something that has a whole bunch of moving parts and which isn't allowed to be a trash fire? Someone needs to be paying attention.
I guess this is what separates some people. But I always explicitly tell it to use only HTML/JS/CSS, without any libraries, and I vet the output myself. Generating the code means there's a lot less of it you have to deal with by hand.
Cool to hear nonetheless. Can we now also stop stigmatizing AI generated music and art? Looking at you Steam disclosures.
This is a revolution, welcome back to coding :)