While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.
Some examples:
- https://news.ycombinator.com/item?id=46164360
- https://news.ycombinator.com/item?id=46200460
- https://news.ycombinator.com/item?id=46080064
Personally, I'm on HN for the human conversation, and large blocks of LLM-generated text just get in the way of reading real text from real humans (presumably, at least).
What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other current guidelines)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments? Or something else completely?
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
(This is a broader restriction than the one you're looking for).
It's important to understand that not all of the rules of HN are on the Guidelines page. We're a common law system; think of the Guidelines as something akin to a constitution. Dan and Tom's moderation comments form the "judicial precedent" of the site; you'll find things in there like "no Internet psychiatric diagnosis" and "not owing $publicfigure anything but owing this community more" and "no nationalist flamewar" and "no hijacking other people's Show HN threads to promote your own thing". None of those are on the Guidelines page either, but they're definitely in the guidelines here.
One comment stands out to me:
> Whether to add it to the formal guidelines (https://news.ycombinator.com/newsguidelines.html) is a different question, of course. I'm reluctant to do that, partly because it arguably follows from what's there, partly because this is still a pretty fuzzy area that is rapidly evolving, and partly because the community is already handling this issue pretty well.
I guess I'm raising this question because it feels slightly off that people can't really know about this unwritten rule until they break it, or until they see someone else break it and get told why. It's true that the community seems to handle it with downvotes, but it might not be clear why something gets downvoted; people can't see the intent. It also seems like an inefficient way of communicating community norms: telling users about them only once they've broken them.
Being upfront with what rules and norms to follow, like the guidelines already do for most things, feels more honest and welcoming for others to join in on discussions.
* First, the guidelines get too large, and then nobody reads them all, which makes the guideline document less useful. Better to keep the guidelines page reduced to a core of things, especially if those things can be extrapolated to most of the rest of the rules you care about (or most of them, plus a bunch of stuff that doesn't come up often enough to need space on that page).
* Second, whatever you write in the guidelines, people will be inclined to lawyer over and bicker about. Writing a guideline implies, at least for some people, that every word is carefully considered and that there's something final about the specific word choices. "Technically correct is the best kind of correct" for a lot of nerds like us.
Perhaps "generated comments" is trending towards a point where it earns a spot in the official guidelines. It sure comes up a lot. The flip side though is that we leave a lot of "enforcement" of the guidelines up to the community, and we have a pretty big problem with commenters randomly accusing people of LLM-authoring things, even when they're clearly (because spelling errors and whatnot) human-authored.
Anyways: like I said, this is pretty well-settled process on HN. I used to spend a lot of time pushing Dan to add things to the guidelines; ultimately, I think the approach they've landed on is better than the one you (and, once, I) favored.
God I hate this. And it cuts across society, not just on HN or among nerds. Something like 10% of the people I interact with seem to just rules-lawyer their way through life as a default. "Well... as you can see in the rule book, section 5, paragraph 3, the rule clearly says we shouldn't do X, but it doesn't say we mustn't do X, so I'm clearly allowed to do X." Insufferable.
*Not applicable to Nations or States, or any place where people aren't free to come and go as they please.
dang: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
tomhow: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Where does that saying come from? I keep seeing it in a lot of different contexts but it somehow feels off to me in a way I can't really explain.
If it helps, you can think of it as saying more about possible disagreeing opinions than about the specific opinion expressed. "This answer is right, and the people who disagree are 'objectively' wrong."
It took me some time to catch on to this. It can certainly be jarring or obnoxious, though sometimes it can be helpful to say "yo people, you're treating this like a subjective opinion, but there are objective reasons to conclude X."
https://news.ycombinator.com/item?id=46209137
Edit: Rereading the comments, I agree (heheh) with your analysis. I hadn't considered saying "I agree", because I didn't feel I was expressing an opinion, but a fact, like 1+1=2. The comment stated that the mods in fact disallow those comments and provided proof, so I didn't consider it an opinion.
I really like this conversation by the way. I'm actively trying to become a better writer (by doing copywork of my favourite writers), and no other forum on Earth has this sort of conversation in such an interesting, nuanced way.
Ultimately, I put it because:
- It was the most directly informative comment on the thread;
- It had been downvoted (greyed out) to the very bottom of the thread; and
- I wanted to express my support before making a fairly orthogonal comment without whiplashing everyone.
The whiplashing concern is the problem I run into most generally. It can be hard to reply to someone with a somewhat related idea without making it seem like you're contradicting them, particularly if they're being dogpiled with downvotes or comments. I'd love to hear other ways to go about this; I'm always trying to improve my communication.
Just that this forum uses a common-law-style system - and even added "think of X as...", making clear that this refers to an analogy, not to official state law.
> no hijacking other people's Show HN threads to promote your own thing
This seems to happen a lot, so apparently not a very well enforced rule.
The pre-LLM equivalent would be: "I googled this, and here's what the first result says," and copying the text without providing any additional commentary.
Everyone should be free to read, interpret and formulate their comments however they'd like.
But if a person outsources their entire thinking to an LLM/AI, they don't have anything to contribute to the conversation themselves.
And if the HN community wanted pure LLM/AI comments, they'd introduce such bots in the threads.
As I see it, downvoting is an expression of the community's posture; rules are an expression of the "space's" posture. It's up to the space to determine whether something is relevant enough to include in the rules.
And again, as I see it, the community should also have a way to at least suggest modifications to the rules.
I agree with you that "People who can't take a hint aren't going to read the rules". But as they say: "Ignorance of the law does not exempt one from compliance."
I tend to dislike these types of posts, but a properly designed and functioning vote mechanism should take care of it.
If not, it is the voting mechanism that should be tuned - not new rules.
Can't find the link right now (because why would I save a thread like that...), but I've more than once seen situations where people get defensive on behalf of others who post AI slop comments. Both times it was people at YC companies with a personal interest in AI. Both times it looked like a person defending sockpuppets.
A lot of the guidelines are about avoiding comments that aren’t interesting. A copy/paste from an LLM isn’t interesting.
1. If I wanted to run a web search, I would have done so.
2. People behave as if they believe AI results are authoritative, which they are not.
On the other hand, a ban could result in a technical violation in a conversation about AI responses where providing examples of those responses is entirely appropriate.
I feel like we're having a larger conversation here, one where we are watching etiquette evolve in realtime. This is analogous to "Should we ban people from wearing bluetooth headsets in the coffee shop?" in the 00s: people are demonstrating a new behavior that is disrupting social norms but the actual violation is really that the person looks like a dork. To that end, I'd probably be more for public shaming, potentially a clear "we aren't banning it but please don't be an AI goober and don't just regurgitate AI output", more than I would support a ban.
Except it's "...and here is the first result it gave me, I didn't bother looking further".
Web search has the same issue. If you don't validate it, you wind up in the same problem.
The social norm has always been that you write comments on the internet for yourself, not others. Nothing really changes if you now find enjoyment in adding AI output to your work. Whatever floats your boat, as they say.
It seems a lot like code. You can "vibe code" your way into an ungodly mess, but those who used to enjoy the craft of writing high quality code before LLMs arrived still seem to insist on high quality code even if an LLM is helping produce it now. It is highly likely that internet comments are no different. Those who value quality will continue to. Those who want garbage will produce it, AI or not.
Much more likely is seeing the user base shift over time towards users that don't care about quality. Many a forum has seen that happen long before LLMs were a thing, and it is likely to happen to forums again in the future. But the comments aren't written for you (except your own, of course) anyway, so... It is not rational to want to control what others are writing for themselves. But you can be responsible for writing for yourself what you want to see!
Sure, the motivation for many people to write comments is to satisfy themselves. The contents of those comments should not be purely self-satisfying, though.
Reddit was originally just one guy with 100s of accounts. The epitome of writing for oneself.
> upvotes were intended to be used for comments that contributed to the discussion.
Intent is established by he who acts, not he who observes. It fundamentally cannot be any other way. The intent of an upvote is down to whatever he who pressed the button intended. That was the case from the conception of said feature, and will always remain the case. Attempting to project what you might have intended, had you been the one who acted, onto another party is illogical.
> The contents of those comments should not be purely self-satisfying, though.
Unless, perhaps, you are receiving a commission with detailed requirements, there is really no way to know what someone else will find satisfying. All you can do is write for yourself. If someone else also finds enjoyment in what you created, wonderful, but if not, who cares? That's their problem. And if you did receive a commission to write for another, well, you'd expect payment. Who among us is being paid to write comments?
But what if the AI is used to build up a(n otherwise) genuine human response, like: 'Perhaps the reason behind this is such-and-such, (a quick google)|($AI) suggests that indeed it is common for blah to be blah, so...'
Same logic still applies. If I gave a shit what it "thought" or suggests, I'd prompt the $AI in question, not HN users.
That said, I'm not against a monthly (or whatever regular periodic interval that the community agrees on) thread that discusses the subject, akin to "megathreads" on reddit. Like interesting prompts, or interesting results or cataloguing changes over time etc etc.
It's one of those things that can be useful to discuss in aggregate, but separated out into individual posts it just feels like low-effort spam to farm upvotes/karma on the back of the flavor of the month. Much in the same way that there's definitely value in the "Who's Hiring/Trying to get hired" monthly threads, but that value/interest drops precipitously if each comment within them were its own individual submission.
I'm not so sure they actually believe the results are authoritative, I think they're being lazy and hoping you will believe it.
While true, many times people don't want to do this because they are lazy. If they had just opened up ChatGPT instead, they could have gotten their answer instantly. It results in a waste of everyone's time.
If you asked someone how to make French fries and they replied with a map-pin-drop on the nearest McDonald's, would you feel satisfied with the answer?
We should at least consider that maybe they asked how to make French fries because they actually want to learn how to make them themselves. I'll admit the XY problem is real, and people sometimes fail to ask for what they actually want, but we should, as a rule, give them the benefit of the doubt instead of just assuming that we're smarter than them.
This might be a case of just different standards for communication here. One person might want the absolute facts and assumes everyone posting should do their due diligence to verify everything they say, but others are okay with just shooting the shit (to varying degrees).
Great, now we've wasted time and material resources for a possibly wrong and hallucinated answer. What part of this is beneficial to anyone?
Frankly, it's a skill thing.
You know how some people could hardly find the backs of their own hands if they googled them?
And then there's people (like eg. experienced wikipedians doing research) who have google-fu and can find accurate information about the weirdest things in the amount of time it takes you to tie your shoes and get your hat on.
Now watch how someone like THAT uses ChatGPT (or some better LLM). It's very different from just prompting with a question. Often it involves delegating search tasks to the LLM (and opening 5 Google tabs alongside). And they get really interesting results!
Ideally we would require people who ask questions to say what they've researched so far, and where they got stuck. Then low-effort LLM or search engine result pages wouldn't be such a reasonable answer.
To introspect a bit, I think the rote regurgitation aspect is the lesser component. It's just rude in a conventional way that isn't as threatening. It's the implied truth/authority of the Great Oracular Machine which feels more dangerous and disgusting.
It’s clumsy and has the opposite result most of the time, but people still do it for all manner of trends.
> 1. If I wanted to run a web search, I would have done so
Not everyone has access to the latest Pro models. If the AI has something to add to the discussion and a user runs it for me, I think it has some value.
> 2. People behave as if they believe AI results are authoritative, which they are not
AI is not authoritative in 2025. We don’t know what will happen in 2026. We are at the initial transition stage for a new technology. Both the capabilities of AI and people’s opinions will change rapidly.
Any strict rule/ban would be very premature and shortsighted at this point.
That said, I've also grown exceedingly tired of everyone saying, "I see an em dash, therefore that comment must have come from AI!"
I happen to like em dashes. They're easy to type on macOS, and they're useful in helping me express what I'm thinking—even if I might be using them incorrectly.
I do wish people wouldn’t do it when it doesn’t add to the conversation but I would advocate for collective embarrassment over a ham-fisted regex.
In a discussion of RISC-V and whether it can beat ARM, someone just posting "ChatGPT says X" adds absolutely nothing to the discussion but noise.
"I googled this" is only helpful when the statistic or fact they looked up was correct and well-sourced. When it's a reddit comment, you derail into a new argument about strength of sources.
The LLM skips a step, and gets you right to the "unusable source" argument.
Still, I will maintain that someone actually doing the legwork, even via a search engine and a reasonable evaluation of a few sources, is often making quite a valuable contribution. Sometimes even if it is done to discredit someone else.
We can't stop AI comments, but we can encourage good behavior/disclosure. I also think brevity should still be rewarded, AI or not.
This may actually be a good thing because it'd force them to put some thought into dissecting the comment from AI instead of just pasting it in wholesale. Depending on how well they try to disguise it, of course.
At this point, I make value judgments when folks use AI for their writing, and will continue to do so.
The one exception for me though is when non-native English speakers want to participate in an English language discussion. LLMs produce by far the most natural sounding translations nowadays, but they imbue that "AI style" onto their output. I'm not sure what the solution here is because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.
If it was just a translation, then that adds no value.
I mean, we're probably not talking about someone not knowing English at all; that wouldn't make sense. But I'm German, and I would probably write in German.
I would often enough tell some LLM to clean up my writing (not on HN, sorry, I'm too lazy for HN).
You post in your own language, and the site builds a translation for everyone, but they can also see your original etc.
I think building it as a forum feature rather than a browser feature is maybe worthwhile.
We heavily use connected translating apps and it feels really great. It would be such a massive pita to copy every message somewhere outside, having to translate it and then back.
Now, discussions usually follow the sun, and when someone not speaking, say, Portuguese wants to join in, they usually use English (sometimes German or Dutch), and just join.
We know it's not perfect but it works. Without the embedded translation? It absolutely wouldn't.
I also used a Telegram channel with a similar setup pretty heavily, but it was even better, with transparent auto-translation.
When I search for something in my native tongue it is almost always because I want the perspective of people living in my country having experience with X. Now the results are riddled with reddit posts that are from all over the world with crappy translation instead.
1. An automatic translation feature.
2. Being able to submit an "original language" version of a post in case the translation is bad/unavailable, or someone can read the original for more nuance.
The only problem I see with #2 involves malicious usage, where the author is out to deliberately sow confusion/outrage or trying to evade moderation by presenting fundamentally different messages.
It should be an intentional place you choose, and probably niche, not generic in topic like Reddit.
I'm also open to the thought that it's a terrible idea.
I also suspect that automatically translating a forum would tend to attract a far worse ratio of high-effort to low-effort contributions than simply accepting posts in a specific language. For example, I'd expect programmers who don't speak any english to have on average a far lower skill level than those who know at least basic english.
Just use a spell checker and that's it; you don't need LLMs to translate for you if your goal is learning the language.
It's common enough that it must be a literal translation difference between German and English.
I'm fine with reading slightly incorrect English from a non-native speaker. I'd rather see that than an LLM interpretation.
Some AI translation is so good now that I do think it might be a better option. If they try to write in English and mess up, the information is just lost, there's nothing I can do to recover the real meaning.
The solution is to use a translator rather than a hallucinatory text generator. Google Translate is exceptionally good at maintaining naturalness when you put a multi-sentence/multi-paragraph block through it -- if you're fluent in another language, try it out!
Caveat: The remaining thing to watch out for is that some LLMs are not -by default- prompted to translate accurately due to (indeed) hallucination and summarization tendencies.
* Check a given LLM with language-pairs you are familiar with before you commit to using one in situations you are less familiar with.
* Always proofread if you are at all able to!
Ultimately you should be responsible for your own posts.
(while AFAICT Google hasn't explicitly said so, it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT)
The objective of that model, however, is quite different to that of an LLM.
It occasionally messes up, but not by hallucinating, usually grammar salad because what I put into it was somewhat ambiguous. It’s also terrible with genders in Romance languages, but then that is a nightmare for humans too.
Palmada palmada bot.
The big difference? I could easily prompt the LLM with “i’d like to translate the following into language X. For context this is a reply to their email on topic Y, and Z is a female.”
Doing even a tiny bit of prompting will easily get you better results than google translate. Some languages have words with multiple meanings and the context of the sentence/topic is crucial. So is gender in many languages! You can’t provide any hints like that to google translate, especially if you are starting with an un-gendered language like English.
I do still use Google Translate though, when my phone is offline or when translating very long text. LLMs perform poorly with larger context windows.
However, now I prefer to write directly in English and consider whatever grammar/orthographic errors I have as part of my writing style. I hate having to rewrite the LLM output to add myself back into the text.
I've written blog articles using HTML and asked LLMs to change certain HTML structure, and they ALSO tried to change the wording.
If a user doesn't speak a language well, they won't know whether their meanings were altered.
I don't think it is likely to catch on, though, outside of culturally multilingual environments.
It can if the platform has built-in translation with an appropriate disclosure! For instance on Twitter or Mastodon.
https://blog.thms.uk/2023/02/mastodon-translation-options
https://jampauchoa.substack.com/p/writing-with-ai-without-th...
TL;DR: Ask for a line edit, "Line edit this Slack message / HN comment." It goes beyond fixing grammar (because it improves flow) without killing your meaning or adding AI-isms.
When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care nor respect you enough to bother confirming that that answer is correct."
It's a huge asterisk to avoid stating something as a fact, but indicates something that could/should be explored further.
(This would be nonsense if they sent me an email or wrote an issue up this way or something, but in an ad-hoc conversation it makes sense to me)
I think this is different than on HN or other message boards; it's not really used by people here to hedge. If they don't actually personally believe something to be the case (or have a question to ask), why are they posting anyway? No value there.
Every time this happens to me at work one of two things happens:
1) I know a bit about the topic, and they're proudly regurgitating an LLM about an aspect of the topic we didn't discuss last time. They think they're telling me something I don't know, while in reality they're exposing how haphazard their LLM use was.
2) I don't know about the topic, so I have to judge the usefulness of what they say based on all the times that person did scenario Number 1.
I have a less cynical take. These are casual replies, and being forthright about AI usage should be encouraged in such circumstances. It's a cue for you to take it with a grain of salt. By discouraging this you are encouraging the opposite: for people to mask their AI usage and pretend they are experts or did extensive research on their own.
If you wish to dismiss replies that admit AI usage you are free to do so. But you lose that freedom when people start to hide the origins of their information out of peer pressure or shame.
Well now you're putting words in my mouth.
If you make it against the rules to cite AI in your replies then you end up with people masking their AI usage, and you'll never again be able to encourage them to do the legwork themselves.
I want to hear your thoughts, based on your unique experience, not the AI's, which is an average of the experiences in the data it ingested. The things that are unique will not surface because they aren't seen enough times.
Your value is not in copy-pasting. It's in your experience.
If you agree with it after seeing it, but wouldn't have thought to write it yourself, what reason is there to believe you wouldn't have found some other, contradictory AI output just as agreeable? Since one of the big objections to AI output is that it uncritically agrees with nonsense from the user, sycophancy-squared is even more objectionable. It's worth taking the effort to avoid falling into this trap.
I find the second paragraph contradictory - either you fear that I would agree with random stuff that the AI writes, or you believe that the sycophant AI is writing what I believe. I like to think that I can recognise good arguments, but if I am wrong here - then why would you prefer my writing over an LLM-generated one?
> I like to think that I can recognise good arguments, but if I am wrong here - then why would you prefer my writing over an LLM-generated one?
Because the AI will happily argue either side of a debate, in both cases the meaningful/useful/reliable information in the post is constrained by the limits of _your_ knowledge. The LLM-based one will merely be longer.
Can you think of a time when you asked AI to support your point, and upon reviewing its argument, decided it was unconvincing after all and changed your mind?
Generally, if your point holds up to polishing under Kimi's pressure, by all means post it on HN, I'd say.
Other LLMs do tend to be more gentle with you, but if you ask them to be critical or to steelman the opposing view, they can be powerful tools for actually understanding where someone else is coming from.
Try this: Ask an LLM to read the view of the person you're answering, and ask it to steelman their arguments. Now think to see if your point is still defensible, or what kinds of sources or data you'd need to bolster it.
Because I'm interested in hearing your voice, your thoughts, as you express them, for the same reason I prefer eating real fruit, grown on a tree, to sucking high-fructose fruit goo squeezed fresh from a tube.
"I asked an $LLM and it said" is very different than "in my opinion".
Your opinion may be supported by any sources you want as long as it's a genuine opinion (yours), presumably something you can defend as it's your opinion.
edit 1: The sincerest form of flattery
edit 2: To be fair, Claude Opus 4.5 seems to encourage people to be nicer to each other if you let them.
When someone says: "Source?", is that kinda the same thing?
Like, I'm just going to google the thing the person is asking for, same as they can.
Should asking for sources be banned too?
Personally, I think not. HN is better, I feel, when people can challenge the assertions of others and ask for the proof, even though that proof is easy enough to find for all parties.
But: Just because it's easy doesn't mean you're allowed to be lazy. You need to check all the sources, not just the ones that happen to agree with your view. Sometimes the ones that disagree are more interesting! And at least you can have a bit of drama yelling at your screen at how dumb they obviously are. Formulating why they are dumb, now there's the challenge - and the intellectual honesty.
But yeah, using LLMs to help with actually doing the research? Totally a thing.
IMO, HN commenters used to at least police themselves more and provide sources in their comments when making claims. It was what used to separate HN and Reddit for me when it came to response quality.
But yes it is rude to just respond "source?" unless they are making some wild batshit claims.
Yes, comments of this nature are bad, annoying, and should be downvoted as they have minimal original thought, take minimal effort, and are often directly inaccurate. I'd still rather they have a disclaimer to make it easier to identify them!
Further, entire articles submitted to HN are clearly written by an LLM yet get over a hundred upvotes before people notice whether there's a disclaimer or not. These do not get caught quickly, and someone clicking on the link will likely generate ad revenue that incentivizes people to continue doing it.
LLM comments without a disclaimer should be avoided, and submitted articles written by an LLM should be flagged ASAP to avoid abuse, since by the time someone clicks the link it's too late.
It's not worth polluting human-only spaces, particularly top tier ones like HN, with generated content--even when it's accurate.
Luckily I've not found a lot of that here. What I do find has usually been downvoted plenty.
Maybe we could have a new flag option, which became visible to everyone with enough "AI" votes so you could skip reading it.
But now people are vomiting chatgpt responses instead of linking to chatgpt.
https://news.ycombinator.com/item?id=46204895
when it had only two comments. One of them was the Gemini summary, which had already been massively downvoted. I couldn't make heads or tails of the paper posted, and probably neither could 99% of other HNers. I was extremely happy to see a short AI summary. I was on my phone and it's not easy to paste a PDF into an LLM.
When something highly technical is posted to HN that most people don't have the background to interpret, a summary can be extremely valuable, and almost nobody is posting human-written summaries together with their links.
If I ask someone a question in the comments, yes it seems rude for someone to paste back an LLM answer. But for something dense and technical, an LLM summary of the post can be extremely helpful. Often just as helpful as the https://archive.today... links that are frequently the top comment.
I don't think this is a good example personally.
[1] https://arxiv.org/abs/2504.00025
The story was being upvoted and on the front page, but with no substantive comments, clearly because nobody understood what the significance of the paper was supposed to be.
I mean, HN comments are wrong all the time too. But if an LLM summary can at least start the conversation, I'm not really worried if its summary isn't 100% faithful.
But I'm not usually reading the comments to learn, it's just entertainment (=distraction). And similar to images or videos, I find human-created content more entertaining.
One thing to make such posts more palatable could be if the poster added some contribution of their own. In particular, they could state whether the AI summary is accurate according to their understanding.
If I'm looking for entertainment, HN is not exactly my first stop... :P
The point of asking on a public forum is to get socially relatable human answers.
Most often I see these answers under posts like "what's the longest river on Earth" or "is Bogota the capital of Venezuela?"
Like. Seriously. It often takes MORE time to post this sort of lazy question than to actually look it up. Literally paste the question into $search_engine and get 10 of the same answers on the first page.
Actually, sometimes telling a person like this "just Google it" is beneficial in two ways: it helps the poster develop/train their own search skills, and it may gently nudge someone else into trying that approach first, too. At the same time it slows the rise of extremely low-effort/low-quality posts.
But sure, sometimes you get the other kind. Very rarely.
Only that, I’m not the one who posted the original question, I DID google (well DDG) it, and the results led me to someone asking the same question as me, but it only had that one useless reply
I think it's a very valid question to ask the AI: "which coding language is most suitable for you to use and why", or other similar questions.
You could reply with "Hey you could ask [particular LLM] because it had some good points when I asked it" but I don't care to see LLM output regurgitated on HN ever.
The top story on here for 2 days has been “Show HN: Gemini Pro 3 hallucinates the HN front page 10 years from now”
I could have typed that into an LLM myself too.
Before that it was “llm tries to recreate Space Jam website”
The argument is that the information it generated is just noise, and not valuable to the conversation thread at all.
This is neither the mechanism nor the goal of human communication, not even on the internet.
I'm not sure a full ban is possible, but LLM-written comments should at least be strongly discouraged.
https://news.ycombinator.com/item?id=36735275
Just curious if chatGPT is actually formally banned on HN?
Hacker News <hn@ycombinator.com>, Sat, Jul 15, 2023, 4:12 PM:
Yes, they're banned. I don't know about "formally" because that word can mean different things and a lot of the practice of HN is informal. But we've definitely never allowed bots or generated comments. Here are some old posts referring to that.
dang
https://news.ycombinator.com/item?id=35984470 (May 2023)
https://news.ycombinator.com/item?id=35869698 (May 2023)
https://news.ycombinator.com/item?id=35210503 (March 2023)
https://news.ycombinator.com/item?id=35206303 (March 2023)
https://news.ycombinator.com/item?id=33950747 (Dec 2022)
https://news.ycombinator.com/item?id=33911426 (Dec 2022)
https://news.ycombinator.com/item?id=32571890 (Aug 2022)
https://news.ycombinator.com/item?id=27558392 (June 2021)
https://news.ycombinator.com/item?id=26693590 (April 2021)
https://news.ycombinator.com/item?id=22744611 (April 2020)
https://news.ycombinator.com/item?id=22427782 (Feb 2020)
https://news.ycombinator.com/item?id=21774797 (Dec 2019)
https://news.ycombinator.com/item?id=19325914 (March 2019)
(Edit: oh, it's not 2024 anymore. How time flies!)
A tip of the hat for this performance art
(written by a human with help from https://aiphrasefinder.com/common-chatgpt-phrases/)
I do find it more helpful when people specify why they think something was AI-generated. Especially since people are often wrong (fwict).
For example, some people seem to be irritated by jokes, and being able to ignore "+5 funny" comments might be something they want.
Strong agree.
If you can make an actually reliable AI detector, stop wasting time posting comments on forums and just monetize it to make yourself rich.
If you can't, accept that you can't, and stop wasting everyone else's time with your unvalidated guesses about whether something is AI or not.
The least valuable lowest signal comments are "this feels like AI." Worse, they never raise the quality of the discussion about the article.
It's "does anyone else hate those scroll bars" and "this site shouldn't require JavaScript" for a new generation.
Also, I'm pretty sure most people can spot blogspam full of glaringly obvious cliche AI patterns without being able to create a high reliability AI detector. To set that as the threshold for commentary on whether an article might have been generated is akin to arguing that people shouldn't question the accuracy of a claim unless they've built an oracle or cracked lie detection.
In one recent case (the slop article about adenosine signalling) a commenter had a link to the original paper that the slop was engagement-farming about. I found that comment very helpful.
https://news.ycombinator.com/item?id=45652349
GPT has ruined my enjoyment of using em dashes, for instance.
Source?
You might be surprised by the tremendous amount of value in AI posts. AI considers context that NI doesn't.
Whether prompt engineering is a skill is perhaps a different topic. I just found this meta statistic in this thread interesting to observe
What did we used to call it? Google-fu?
I mainly work with .NET, and I did a search for the term "prompt engineering" on one of the biggest websites for job adverts in my country. Out of almost 800 offers, only 9 mention the term "prompt engineer". Changing that to "AI" produces around 200 results, but many of these are typical throwaway lines like "our company uses the newest AI tools" that don't mean anything.
Maybe it's different in other regions or tech stacks, but so far I am not seeing anything that makes me feel I need to take any of this seriously.
Obviously cut&pasting the raw output of a google search or pubmed search or etc would be silly. Same goes for AI generated summaries and such. But references you find this way can certainly be useful.
And using spelling checkers, grammar checkers, style checkers, translation tools or etc (old fashioned or new AI-enhanced) should be ok if used wisely.
Small exception if the user is actually talking about AI, and quoting some AI output to illustrate their point, in which case the AI output should be a very small section of the post as a whole.
If you just say, "here is what the LLM said", and that turns out to be nonsense, you can say something like, "I was just passing along the LLM response, not my own opinion."
But if you take the LLM response and present it as your own, at least there is slightly more ownership over the opinion.
This is kind of splitting hairs but hopefully it makes people actually read the response themselves before posting it.
"People are responsible for the comments that they post no matter how they wrote them. If you use tools (AI or otherwise) to help you make a comment, that responsibility does not go away"
People will still do it, but now they're doing it intentionally in a context where they know it's against the guidelines, which is a whole different situation. Staying up late to argue the point (and thus add noise) is obviously not going to work.
I'd prefer the guideline to allow machine translation, though, even when done with a chatbot. If you are using a chatbot intentionally with the purpose of translating your thoughts, that's a very different comment than spewing out the output from a prompt about the topic. There's some gray area where they fuzz together, but in my experience they're still very different. (Even though the translated ones set off all the alarm bells in terms of style, formatting, and phrasing.)
Though this scenario is unlikely to have actually happened, I'd equate this with someone asking me what I thought about something, and me walking them over to a book on the shelf to show them what that author thought. It's just an aggregated and watered-down average of all the books.
I’d rather hear it filtered through a brain, be it a good answer or bad.
Sure, I'll occasionally ask an LLM about something if the info is easy to verify after, but I wouldn't like comments here that were just copy-pastes of the Google search results page either.
Saying “ChatGPT told me …” is a fast track to getting your input dismissed on our team. That phrasing shifts accountability from you to the AI. If we really wanted advice straight from the model, we wouldn’t need a human in the loop - we’d ask it ourselves.
The intent isn't to shift accountability, it's to brainstorm. A shitty idea gets shot down quickly, whereas a good idea gets implemented.
Edit: sentence
1) Borderline. Potentially provides some benefit to the thread for readers who also don't have the time or expertise to read an 83-page paper. Although it would require someone to acknowledge and agree that the summary is sound.
2) Acceptable. Dude got Grok to make some cool visuals that otherwise wouldn't exist. I don't see what the issue is with something like this.
3) Borderline. Same as 1, mostly.
The more I think about this, the less bothered I am by it. If the problem were someone jumping into a conversation they know nothing about, and giving an opinion that is actually just the output of an LLM, I'd agree. But all the examples you provided are transformative in some way. Either summarizing and simplifying a long article or paper, or creating art.
What makes HN valuable to me is the opposite impulse: people trying to understand things for themselves, from the community, and in the process maybe discovering new ideas which LLMs can't supply. Because LLMs don't know what they don't know. LLMs think they know (that's why they predict the next sentence), but we know they don't know, and still some people think they know.
Sometimes, I wish, there is no thinking tax: as a reminder of why this place exists & also to reward people who are still curious and thinkers
I WANT HN TO REMAIN A PLACE WHERE HUMAN CONVERSATIONS STILL HAPPEN.
I think sometimes it's fine to source additional information from an LLM if it helps advance the discussion. For example, if I'm confused about some topic, I might explore various AI responses and look at the source links they provide. If any of the links seem compelling I'll note how I found the link through an LLM and explain how it relates to the discussion.
The HN guidelines haven't yet been updated but perhaps if enough people send an email to the moderators, they'll do it.
But this is a text-only forum, and text (to a degree, all digital content) has become compromised. Intent and message are not attributable to real-life experience or effort. For the moment I have accepted the additional overhead.
As with most, I have a habit of estimating the validity of expertise in comments, and experiential biases, but that is becoming untenable.
Perhaps there will soon be transformer features that produce prompts adequate to the task of reproducing the thought behind each thread, so their actual value, informational complexity, humor, and salience, may be compared?
Though many obviously human commenters are actually inferior to answers from "let me ChatGPT that for you."
I have had healthy suspicions for a while now.
You can make the same argument for AI output as well, but to be clear, I'm referring to the case of someone bringing up a low quality source as the answer.
Not sure how easy that would actually be to moderate, of course.
I have not encountered a single instance of this ever since I've started using HN (and can't find one using the site search either) whereas the "I asked ChatGPT" zombie answers are rampant.
I feel like the HN guidelines could take inspiration from how Oxide uses LLMs. (https://rfd.shared.oxide.computer/rfd/0576). Specifically the part where using LLMs to write comments violates the implicit social contract that the writer should put more care and effort and time into it than the reader. The reader reads it because they assume this is something a person has put more time into than they need to. LLMs break that social contract.
Of course, if it’s banned maybe people just stop admitting it.
"A guideline to refrain" seems better. Basically, this should be only slightly more tolerated than "let me google for you" replies: maybe not actively harmful, but rude. But, anyway, let's not be overly pretentious: who even reads all these guidelines (or rules for that matter)? Also, it is quite apparent, that the audience of HN is on average much less technical and "nerdy" than it was, say, 10 years ago, so, I guess, expect these answers to continue for quite some time and just deal with it.
I can't locate them, but I'm sure they exist...
Unfortunately, I can't find them. It's a shame. Everybody should read them.
If anything, it has been quite customary to supply references for important facts, thus letting readers explore further and interpret the facts for themselves.
With AI in the mix, the references become even more important, in view of hallucinations and fact poisoning.
Otherwise, it's a forum. Voting, flagging, ignoring are the usual tools.
My brain-based LLM would like you to know there’s a set of standard guidelines for contribution linked on the footer of this page.
In many threads, those comments can be just as annoying and distracting as the ones being replied to.
I say this as someone who, to my recollection, has never had anyone reply to me with a rule correction -- but I've seen so many of them over the years, and I feel like we would fill up the screen even more with a rule like this.
Yes, if you wanted to ask an LLM, you'd do so; but someone else asks the LLM a specific question and generates an answer that's specific to their question. And that might add value to the discussion.
I have a coworker who does this somewhat often and... I always just feel like saying well that is great but what do you think? What is your opinion?
At the very least the copy-paster should read what the LLM says, interpret it, fact-check it, then write their own response.
Then write their own response using an AI to improve the quality of the response? The implication here is that an AI user is going to do some research, when using the AI was their research. To do the "fact check" as you suggest would mean doing actual work, and clearly that's not something the user is up for, as indicated by their use of the AI.
So, to me, your suggestion is fantasy-level thinking.
https://distantprovince.by/posts/its-rude-to-show-ai-output-...
I don't recall any instances where I've run into the problem here, maybe because I tend to arrive to threads as a result of them being popular (listed on Google News) which means I'm only going to read the top 10-50 posts. I read human responses for a bit before deciding if I should continue reading, and that's the same system I use for LLMs because sometimes I can't tell just by the formatting; if it's good, it's good - if it's bad, it's bad -- I don't care if a chicken with syphilis wrote it.
This should be restated: Should people stop admitting to AI usage out of shame, and start pretending to be actual experts or doing research on their own when they really aren't?
Be careful what you wish for.
Saying "I asked AI" usually falls into the former category, unless the discussion is specifically about analyzing AI-generated responses.
People already post plenty of non-substantive comments regardless of whether AI is involved, so the focus should be on whether the remark contributes any meaningful value to the discourse, not on the tools used to prepare it.
So no, I don't think forbidding anything helps. Let things fall where they should, otherwise.
Someone below mentions using it for translation and I think that's OK.
Idea: Prevent LLM copy/pasting by preempting it. Google and other search engines display LLM summaries after you enter your query, and that's frequently annoying.
So imagine the same on an HN post. In a clearly delineated and collapsible box underneath or beside the post. It is also annoying, but it also removes the incentive to run the question through an LLM and post the output, because it was already done.
Some people will know how to use it in good taste, others will try to abuse it in bad taste.
It might not be universally agreed which is which in every case.
Copying and pasting from ChatGPT is no more contributing to discussion than it would be if you pasted the question into Google and submitted the result.
Everyone here knows how to look up an answer in Google. Everyone here knows how to look up an answer in ChatGPT.
If anyone wanted a Google result or a ChatGPT result, they would have just done that.
But I don't know that we need any sort of official ban against them. This community is pretty good about downvoting unhelpful comments, and there is a whole spectrum of unhelpful comments that have nothing to do with genAI. It seems impractical to overtly list them all.
Like horoscopes, only they're not actually that bad so roll a D20 and on a set of numbers known only to the DM (and varying with domain and task length) you get a textbook answer and on the rest you get convincing nonsense.
This nails it. This is the fundamental problem with using AI material. You are outsourcing thinking in a way where the response is likely to look very correct without any actual logic or connection to truth.
I think just downvoting by committed users is enough. What matters is the content and how valuable it seems to readers. There is no need to do any gate keeping by the guidelines on this matter. That’s my opinion.
Please don’t pollute responses with made-up machine generated time-wasting bits here…!!!
https://niclas-nilsson.com/blog/40f2-empty-sharing
If you didn’t think it, and you didn’t write it, it doesn’t belong here.
I think making it a “rule” just encourages people to use AI and not acknowledge its use.
If they are, people will still do it, but skip the admission that it's from AI
There are far too many replies in this thread saying to drop the ban hammer for this to be taken seriously as Hacker News. What has happened to this audience?
I've been seeing more and more of these on the front page lately.
If the discussion itself is about AI then what it produces is obviously relevant. If it's about something else, nobody needs you to copy and paste for them.
I am a human and more than half of what I write here is rejected.
I say bring on the AI. We are full of gatekeeping assholes, but we definitely have never cared if you have a heart (literally and figuratively).
I feel like the LLM equivalent here sort of demonstrates the exact opposite (I don't know enough about this topic to even doubt the accuracy of the machine...)
"I googled that for you" can also be done from a position of ignorance too.
This is just a new thing that new cultural norms are developing from.
Since that isn't likely to happen, perhaps the community can develop a browser extension that calls attention to or suppresses such accounts.
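To make that concrete, here is a minimal sketch (in TypeScript) of what such an extension's content script might look like. The span.commtext selector and the phrase list are assumptions for illustration only, not anything HN or an existing extension defines:

    // Sketch of an extension content script: dim HN comments that disclose LLM output.
    // Assumptions (not anything HN documents): comment bodies live in span.commtext,
    // and the phrase list below is purely illustrative.
    const DISCLOSURE_PHRASES: string[] = [
      "i asked chatgpt",
      "chatgpt said",
      "here is what gemini said",
      "according to claude",
    ];

    function looksLikeDisclosedAI(text: string): boolean {
      const lower = text.toLowerCase();
      return DISCLOSURE_PHRASES.some((phrase) => lower.includes(phrase));
    }

    for (const body of Array.from(document.querySelectorAll<HTMLElement>("span.commtext"))) {
      if (looksLikeDisclosedAI(body.innerText)) {
        body.style.opacity = "0.4"; // call attention to it rather than remove it
        body.title = "Flagged: appears to quote LLM output";
      }
    }

Suppressing whole accounts rather than individual comments would need a bit of stored state (say, a local list of usernames), but even phrase-level dimming like this covers the "here's what $LLM said" pattern being complained about here.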
Agreed. It's hard enough dealing with the endless stream of LLM marketing stories; please let's at least try to keep the comments a little free of this "I asked..." marketing spam.
Also, heaven forbid, AI can be right. I realize this is a shocker to many here. But AI has its uses, especially in easy cases.
2.) Posting an AI response has as much value as posting a random Reddit comment.
3.) AI has value where you are able to factually verify it. If someone asks a question, they do not know the answer and are unable to validate the AI.
Maybe that’s part of tracing your reasoning or crediting sources: “this got me curious about sand jar art, Gemini said Samuel Clemens was an important figure, I don’t know whether that’s historically true but it did lead me to his very cool body of work [0] which seems relevant here.”
Maybe it’s “I think [x]. The LLM said it in a particularly elegant way: [y]”
And of course meta-discussion seems fine: “ChatGPT with the new Foo module says [x], which is a clear improvement over before, when it said [y]”
There’s the laziness factor and also the credibility factor. LLM slop speaks in the voice of god, and it’s especially frustrating when people post its words without the clues we use to gauge credibility. To me those include the model, the prompt, any customizations, prior rounds in context, and any citations (real or hallucinated) the LLM includes. In that sense I wonder if it makes sense to normalize linking to the full session transcript if you’re going to cite an LLM.
[0] https://americanart.si.edu/blog/andrew-clemens-sand-art
I asked Perplexity, and Perplexity said: ""Your metaphysical intuition is very much in line with live debates: once “small pebbles” are arranged into agents that talk, coordinate, and co-shape our world, there is a strong philosophical case that they should be brought inside our moral and political conversations rather than excluded by fiat.""
AI-LLM replies break all of these things. AI-LLM replies must be declared as such, for certain IMHO. It seems desirable to have off-page links for (inevitable) lengthy reply content.
This is an existential change for online communications. Many smart people here have predicted it and acted on it already. It is certainly trending hard for the foreseeable future.
Doing this will lead to people using AI without mentioning it, making it even harder to tell which content is of human origin.
2026 is a great year to watch out for typos. Typos are real humans.
(A) Ridicule the AI for giving a dumb answer.
(B) Point out how obvious something is.
Also, if you forbid people to tell you they consulted AI, they will just not say that.
Then we can just filter it at the browser level.
In fact why don't we have glyphs for it? Like special quote characters.
I'm not sure making a rule would be helpful though, as I think people would ignore it and just not label the source of their comment. I'd like to be wrong about that.
(source: ChatGPT)
Someone once put it as, "sharing your LLM conversations with others is as interesting to them as narrating the details of your dreams", which I find eerily accurate.
We are here in this human space in the pursuit of learning, edification, debate, and (hopefully) truth.
There is a qualitative difference between the unreliability of pseudonymous humans here vs the unreliability of LLM output.
And it is the same qualitative difference that makes it interesting to have some random poster share their (potentially incorrect) factual understanding, and uninteresting if the same person said "look, I have no idea, but in a dream last night it seemed to me that..."
Is the content of the comment counter-productive? Downvote it.
I could see cases where large walls of text that are generally useless should be downvoted or even removed. AI or not. But, the first example
> faced with 74 pages of text outside my domain expertise, I asked Gemini for a summary. Assuming you've read the original, does this summary track well?
to be frank, is a service to all HN readers. Yes it is possible that a few of us would benefit from sitting down with a nice cup of coffee, putting on some ambient music and taking in 74 pages of... whatever this is. But, faced with far more interesting and useful content than I could possibly consume all day every day, having a summary to inform my time investment is of great value to me. Even If It Is Imperfect
No one is stopping you from doing that for yourself. There is no need to copypasta it.
(The nerd joke btw is that both of my replies so far started with "No", since you didn't specify whitespace after the two characters o:) )
Should we allow 'let me google that for you' responses?
It’s the HN equivalent to “@grok is this true?”, but worse
For instance, what's wrong with the following: "Here's an interesting point about foo topic. Here's another interesting point about bar topic; I learned of this through use of Gemini. Here's another interesting point about baz topic."
Is this banned also? I'm only sharing it because I feel that I've vetted whatever I learned and find it worth sharing regardless of the source.
Plus, if you ban it, people will just remove the "AI said" part and post it as-is without reading it, and now you're engaging with an AI without even the courtesy of knowing. That seems even worse.
What AI regurgitates about a topic is often more interesting and fact/data-based than the emotionally driven human pessimists spewing constant cynicism on HN, so in fact I much prefer having more rational AI responses added in as context within a conversation.
Longer ...
I am here for the interesting conversations and polite debate.
In principle I have no issue either with citing AI responses in much the same way we cite any other source, or with individuals prompting AIs to generate interesting responses on their behalf. When done well, I believe it can improve discourse.
Practically, though, we know that the volume of content AIs can generate tends to overwhelm human-based moderation and review systems. I like the signal-to-noise ratio as it is; so from my POV I'd be in favour of a cautious approach, with a temporary guideline against its usage until we are sure we have the moderation tools to preserve that quality.
The question was something like: “how reliable is the science behind misinformation.” And it said something like: “quality level is very poor and far below what justifies current public discourse.”
I ask for a specific article backing this up, and it’s saying “there isn’t any one article, I just analyzed the existing literature and it stinks.”
This matters quite a bit. X - formerly Twitter - is being fined for refusing to make its data available for misinformation research.
I’m trying to get it to give me a non-AI source, but it’s saying it doesn’t exist.
If this is true, it's pretty important and worth discussing. But it doesn't seem supportable outside the context of "my AI said."
IMO hiding such content is the job of an extension.
When I do "here's what chatgpt has to say" it's usually because I'm pretty confident of a thing, but I have no idea what the original source was, but I'm not going to invest much time in resurrecting the original trail back to where I first learned a thing. I'm not going to spend 60 minutes to properly source a HN comment, it's just not the level of discussion I'm willing to have though many of the community seem to require an academic level of investment.
Why introduce an unnecessary and ineffective regulation?
My objection to AI comments is not that they are AI per se, but that they are noise. If people are sneaky enough that they start making valuable AI comments, well, that is great.
People are seeing AI / LLMs everywhere — swinging at ghosts — and declaring that everyone is a bot recycling LLM output. While the "this is what AI says..." posts are obnoxious (and a parallel to the equally boorish lmgtfy nonsense), not far behind is the endless "this sounds like AI" cynical jeering. People need to display how world-weary and jaded they are, expressing their malcontent with the rise of AI.
And yes, I used an em dash above. I've always been a heavy user of the punctuation (being scatterbrained, with lots of parenthetical asides and little ability to self-edit), but suddenly now it makes my comments bot-like and AI-suspect.
I've been downvoted before for making this obvious, painfully true observation, but HNers, and people in general, are much less capable of sniffing out AI content than they think they are. Everyone has confirmation-biased themselves into thinking they've got a unique gift, when really they are no better than rolling dice.
Tbh the kind of comments discussed in this topic shouldn't be completely banned. As someone else said, they have a place, for example when comparing LLM output or how various prompts produce different hallucinations.
But most of them are just reputation chasing by posting a summary of something that is usually below the level of HN discussion.
When "sounds AI generated" is in the eye of the beholder, this is an utterly worthless differentiation. I mean, it's actually a rather ironic comment given that I just pointed out that people are hilariously bad at determining if something is AI generated, and at this point people making such declarations are usually announcing their own ignorance, or alternately they're pathetically trying to prejudice other readers.
People now simply declare opinions they disagree with as "AI", in the same way that people think people with contrary positions can't possibly be real and must be bots, NPCs, shills, and so on. It's all incredibly boring.
Just like those StackOverflow answers - before "AI" - that came in 30 seconds on any question and just regurgitated, in a "helpful"-sounding way, the first tutorial the poster could find that looked even remotely related to the question.
"Content" where the target is to trick someone into an upvote instead of actually caring about the discussion.
If it’s a low effort copy pasta post I think downvotes are sufficient unless it starts to obliterate the signal vs noise ratio on the site.
Of course I prefer to read the thoughts of an actual human on here, but I don't think a forbiddance in the guidelines makes sense. Eventually the guidelines would get so long and tedious that no one would pay attention to them and they'd stop working altogether.
(did I include the non-word forbiddance to emphasize the point that a human––not a robot––wrote this comment? Yes, yes I did.)
HN is not actually a democracy. The rules are not voted on. They are set by the people who own and run HN.
Please tell me what you think those people think of this question.
So I think maybe the guidelines should say something like:
HN readers appreciate research in comments that brings information relevant to the post. The best way to make such a comment is to find the information, summarize it in your own words, explain why it's relevant to the post, and link to the source if necessary. Adding "$AI said" or "Google said" generally makes your post worse.
---------
Also I asked ChatGPT and it said:
Short Answer
HN shouldn’t outright ban those comments, but it should culturally discourage them, the same way it discourages low-effort regurgitation, sensationalism, or unearned certainty. HN works when people bring their own insight, not when they paste the output of a stochastic parrot.
A rule probably isn’t needed. A norm is.
It's the same as "this" or "wut" but much longer.
If you're posting that and ANALYZING the output, that's different. That could be useful. You added something there.
I am blown away by LLMs - now using ChatGPT to help me write Python scripts in seconds or minutes that used to take me hours or weeks.
Yet, when I ask a question or wish to discuss something on here, I do it because I want input from another meatbag in the Hacker News collective.
I don’t want some corporate BS.
Low effort LLM crap is bad.
Flame bait uncurious mob pile-ons (this thread) are also bad.
Use the downvote button.
with features:
- ability to hide AI-labeled replies (by default; a rough client-side sketch follows this list)
- assign lower weight when appropriate
- if a user is suspected to be AI-generated, retroactively label all their replies as "suspected AI"
- in addition to downvote/upvote, an "I think this is AI" counter
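For the first item, here's a rough client-side sketch of how the hiding could work (hypothetical userscript-style TypeScript; the .commtext/.comtr selectors and the trigger phrases are assumptions about HN's markup and typical wording, not anything the site provides):

    // Hypothetical sketch: hide comments that announce pasted LLM output.
    // Assumes HN's markup: .commtext holds the comment body, .comtr is the comment row.
    const markers = [
      /chatgpt said/i,
      /i asked (chatgpt|gemini|claude)/i,
      /here'?s what .+ (said|has to say)/i,
    ];

    document.querySelectorAll('.commtext').forEach((body) => {
      const text = body.textContent ?? '';
      if (markers.some((re) => re.test(text))) {
        // Hide the whole comment row, not just the text node.
        const row = body.closest('.comtr') as HTMLElement | null;
        if (row) row.style.display = 'none';
      }
    });

The other items (lower weighting, retroactive labeling, an "I think this is AI" counter) need server-side data, so those would have to be site features rather than an extension.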
And if two people can get two opposite results by giving the same prompt which asks a very specific question to the same model, it looks like bunk anyway. LLMs don't care if they are correct.
Why is that relevant to GP's point?
I can't speak for anyone else, but I come to HN to discuss stuff with other humans. If I wanted an LLM's (it's not AI, it's a predictive text algorithm) regurgitations, I can generate those myself and don't need "helpful" HNers to do it for me unasked.
When I come here I want to have a discussion with other sentient beings, not the gestalt of training data regurgitated by a bot.
Perhaps that makes me old-fashioned and/or bigoted against interacting with large language models, but that's what I want.
In discussion, I want to know what other sentient beings think, not an aggregation of text tokens based on their probability of being used in a particular sequence, determined by the data fed to the model.
The former can be (but may well not be) a creative, intellectual act by a sentient being. The latter will never be so, as it's an aggregation of existing data/information as a sequence of tokens cobbled together based on the frequency with which such tokens are used in a particular order in the model's corpus.
That's not to say that LLM are useless. They are not. But their place is not in "curious conversation," IMNSHO.
If someone thinks an "I asked $AI, and it said" comment is bad, then they can downvote it.
As an aside, at times it may be insightful or curious to see what an AI actually says...
The Case for a New HN Guideline on AI-Generated Content
This is a timely discussion. While AI is an invaluable tool, the issue isn't using AI—it's using it to replace genuine engagement, leading to "low-signal" contributions.
The Problem with Unfiltered AI Replies
Instead of an outright ban, which punishes useful use cases, a new guideline should focus on human value-add and presentation. The spirit of the guideline should be: If you use an LLM, your contribution must be more than the LLM's output.
Ultimately, community downvotes already function to filter low-effort posts, but a clear guideline would efficiently communicate the shared norm: AI is a tool for the human conversation, not a replacement for it.
I actually kind of find it surprising that this post and the top comments saying "yes" even exist, because I think the answer should be so firmly "no", but I'll explain what I like to post elsewhere using AI (edit: and some reasons why I think LLM output is useful):
1. A unique human made prompt
2. AI output, designated as "AI says:". This saves you tokens and time copying and pasting over to get the output yourself, and it's really just to give you more info that you could argue for or against in the conversation (adds a lot of "value" to consider to the conversation).
3. Usually I do some manual skimming and trimming of the AI output to make sure it's saying something I'd like to share; just like I don't purely "vibe code" but usually kind of skim output to make sure it's not doing something "extremely bad". The "AI says:" disclaimer makes clear that I may have missed something, but usually there's useful information in the output that is probably better or less time consuming than doing lots of manual research. It's literally like citing Wikipedia or a web search and encouraging you to cross-check the info if it sounds questionable, but the info is good enough most of the time such that it seems valuable to share it.
Other points:
A. The AI-generated answers are just so good... it feels akin to people here not using AI to program (while I see a lot of posts saying otherwise, that people have had a lot of positive experiences using AI to program). It's really the same kind of idea. I think the key is in "unique prompts"; that's the human element in the discussion. Essentially I am sharing "tweets" (microblogs) and then AI-generated essays about the topic (so maybe I have a different perspective on why I think this is totally acceptable, as you can always just scroll past AI output if it's labeled as such?). Maybe it makes more sense in context to me? Even for this post, you could have asked an AI "what are the pros and cons of allowing people to use LLM output to make comments" (a unique human prompt to add to the conversation) and then pasted the AI output for people to consider the pros and cons of allowing such comments, and I'd anticipate doing this would generate a "pretty good essay to read".
B. This is kind of like in schools: AI is probably going to force them to adapt somehow, because you could just add to a prompt "respond in such a way as to be less detectable to a human" or something like that. At some point it's impossible to tell if someone is "cheating" in school or posting LLM output in the comments here. But you don't need to despair, because what's ultimately important in forum comments is that the information is useful, and if LLM output is useful then it will be upvoted. (In other concerning news related to this, I'm pretty sure they're working on how to generate forum posts and comments without humans being involved at all!)
So I guess for me the conversation is more how to handle LLM output and maybe for people to learn how to comment or post with AI assistance (much like people are learning to code with AI assistance), rather than to totally ban it (which to me seems very counter-productive).
edit: (100% human post btw!)
Thank you for your attention to this matter.
Edit: I'm happy to add two related categories to that too - telling someone to "ask ChatGPT" or "Google it" is a similar-level offense.
> 1. Existing guidelines already handle low-value content. If an AI reply is shallow or off-topic, it gets downvoted or flagged.
>
> 2. Transparency is good. Explicitly citing an AI is better than users passing off its output as their own, which a ban might encourage.
>
> 3. The community can self-regulate. We don't need a new rule for every type of low-effort content.
>
> The issue is low effort, not the tool used. Let downvotes handle it.
For obvious(?) reasons I won't point to some recent comments that I suspect, but they were kind and gentle in the way that Opus 4.5 can be at times; encouraging humans to be good with each other.
I think the rules should be similar to bot rules I saw on wikipedia. It ought to be ok to USE an AI in the process of making a comment, but the comment needs to be 'owned' by the human/the account posting it.
Eg. if it's a helpful comment, it should be upvoted. If it's not helpful, downvoted; and with a little luck people will be encouraged/discouraged from using AI in inappropriate ways.
"I asked gemini, and gemini said..." is probably the wrong format, if it's otherwise (un)useful, just vote it accordingly?
Obligatory xkcd https://xkcd.com/810/
This is new territory, you don't ban it, you adapt with it.
I find myself downvoting (flagging) them when I see them as submissions, and I can't think of any examples where they were good submission content; but for comments? There's enough discussion where the AI is the subject itself and therefore it's genuinely relevant what the AI says.
Then there's stuff like this, which I'd not seen myself before seeing your question, but I'd say asking people here whether an AI-generated TLDR of a 74 (75?) page PDF is correct is a perfectly valid and sensible use: https://news.ycombinator.com/item?id=46164360
"Banning" the comment syntax would merely ban the form of notification. People are going to look stuff up with an LLM. It's 2025; that's what we do instead of search these days. Just like we used to comment "Well Google says..." or "According to Alta Vista..."
Proscribing quoting an LLM is a losing proposition. Commenters will just omit disclosure.
I'd lean toward officially ignoring it, or alternatively ask that disclosure take on less conversational form. For example, use quote syntax and cite the LLM. e.g.:
> Blah blah slop slop slop
-- ChatGippity
I for one would love to have summary executions for anyone who says that Hello-Fellow-Kids cringe pushed on us by middle-aged squares: "vibe"
1. Paid marketing (tech stacks, political hackery, Rust evangelism)
2. Some sociopath talking his own book
3. Someone who spouts off about things he doesn't know about (see: this post's author)
The internet of real people died decades ago and we can only wander in the polished megalithic ruins of that enlightened age.
But most of the time it's like they were bothered that I asked, and they just copy-paste what an AI said.
Pretty easy. Just add their name to my “GFY” list and move on in my life.