This is one of those studies that presents evidence confirming what many people already know. The majority of the bad content comes from a small number of very toxic and very active users (and bots). This creates the illusion that a large number of people overall are toxic, and only those who are in deep already recognize the truth.
It is also why moderation is so effective. You only have to ban a small number of bad actors to create a rather nice online space.
And of course, this is why for-profit platforms are loath to properly moderate. A social network that bans the trolls would be like a casino banning the whales or a bar banning the alcoholics.
It also explains why large platforms can be so toxic. If there were a sport with 1000 players, you would need 100 referees, not 1. At scale, all you can really do is implement algorithmic solutions, which are much coarser and can be seriously frustrating for good-faith creators (e.g. YouTube demonetization).
Arbitrators are good! They can be unfair or get things wrong, but they are absolutely essential. It boggles my mind how we decided we needed to re-learn human governance from scratch when it comes to the internet. Obviously the rules will be different, but arbitrators are practically universal in human institutions.
The stakes are much lower on social media. If a referee makes a bad call, I might lose the game, so it's worth paying for sufficient and competent officials. But when I see offensive content on social media I just block it and move on with no harm done. As a user, the value of increased governance is virtually zero.
> But when I see offensive content on social media I just block it and move on with no harm done.
You may be in a minority here. Most people, when they see harmful content, react to it. And that reaction is perceived as engagement, which further perpetuates and strengthens the signal.
Any site with UGC should include posting frequency next to posters' names, each time they appear on a page. If a post is someone's 500th for that day, it provides a lot of valuable context.
Ratio of posts to replies, average message length, and average message energy (negative, combative, inflammatory, etc.) provide decent signal and would be nice to see too. Most trolls fall into distinct patterns across those.
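A minimal sketch of how those per-author signals might be computed (the record layout and field names here are made up for the example, and the "energy" score is left as a stub since it would need a sentiment or toxicity model):

    from collections import defaultdict
    from datetime import date

    # Hypothetical post records: (author, day, text, parent_id);
    # parent_id is None for top-level posts and set for replies.
    posts = [
        ("alice", date(2024, 5, 1), "Interesting study.", None),
        ("troll42", date(2024, 5, 1), "You're all idiots.", 17),
    ]

    def author_signals(posts):
        """Aggregate per-author signals that tend to separate trolls from regulars."""
        acc = defaultdict(lambda: {"posts": 0, "replies": 0, "chars": 0, "days": set()})
        for author, day, text, parent in posts:
            a = acc[author]
            a["posts" if parent is None else "replies"] += 1
            a["chars"] += len(text)
            a["days"].add(day)
        signals = {}
        for author, a in acc.items():
            total = a["posts"] + a["replies"]
            signals[author] = {
                "per_day": total / len(a["days"]),               # posting frequency
                "post_reply_ratio": a["posts"] / max(a["replies"], 1),
                "avg_length": a["chars"] / total,                # average message length
                # "energy" would come from a toxicity classifier, omitted here
            }
        return signals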
I’m of the belief that HN would benefit from showing a user’s upvotes and downvotes, and perhaps even the posts they occurred within. Also, limit downvotes per day, or at least make them cost karma points. There is definitely an “uneven and subjective” distribution of downvotes, and it would be healthy to add some transparency.
One of the best things platforms started doing is showing the account's country of origin. Telegram started doing this this year, using the user's phone number country code, when someone cold DMs you. When I see a random DM from my country, I respond. When I see it's from Nigeria, Russia, the USA, etc., I ignore it.
It's almost 100% effective at highlighting scammers and bots. IMO all social media should show a little flag next to usernames showing where the comment is coming from.
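Rendering the badge itself is the easy part. A toy sketch, assuming an ISO country code has already been derived from the phone prefix (the tiny lookup table is a stub; a real implementation would pull it from something like libphonenumber's metadata):

    # Stub mapping from dialing prefix to ISO 3166-1 alpha-2 code.
    PREFIX_TO_ISO = {"+234": "NG", "+7": "RU", "+1": "US"}

    def country_flag(iso_code: str) -> str:
        # Shift each ASCII letter into the Unicode regional-indicator block,
        # which is how flag emoji are encoded.
        return "".join(chr(ord(c) - ord("A") + 0x1F1E6) for c in iso_code.upper())

    print(country_flag(PREFIX_TO_ISO["+234"]), "random_dm_sender")  # 🇳🇬 random_dm_sender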
Yes, but as soon as scammers find their current methods ineffective, they will switch to VPNs and find a way to get "in-country" phone numbers.
There is a fundamental problem with large scale anonymous (non-verified) online interaction. Particularly in a system where engagement is valued. Even verified isn't much better if it's large scale and you push for engagement.
There are always outliers in the world. In their community they are well known as outliers, and most communities don't have anyone that extreme.
Online every outlier is now your neighbor. And to others that "normalizes" outlier behaviors. It pushes everyone to the poles. Either encouraged by more extreme versions of people like them, or repelled by more extreme versions of people they oppose.
And that's before you get to the intentional propaganda.
In-country phone numbers are quite hard to get, since they have to be activated with an ID. Sure, scammers could start using stolen IDs, but that's already a barrier to entry. And you are limited in how many phone numbers you can register this way.
Presumably, with further tie-ins to government services, one would be able to view all the phone numbers registered in their name, spot fraud, and deactivate the numbers they don't own.
It is very much like crime in general. The vast majority of crimes committed each year are by a tiny minority of people. Criminals often have a rap sheet as long as your arm, while a huge percentage of the population has never had a run-in with the law except for a few traffic or parking tickets.
While crime is definitely a major problem, especially in big cities, it only takes a few news stories to convince some people that almost everyone is out to get them.
> this is why for-profit platforms are loath to properly moderate
They measure the wrong things. Instead of measuring intangibles like project outcomes or user sentiment, they measure engagement by time spent on site. It's the Howard Stern show problem on a "hyper scale."
> A social network
Given your points we should probably just properly call them "anti-social networks."
"Last week, the Yale Youth Poll released its fall survey, which found that “younger voters are more likely to hold antisemitic views than older voters.” When asked to choose whether Jews have had a positive, neutral, or negative impact on the United States, just 8 percent of respondents said “negative.” But among 18-to-22-year-olds, that number was 18 percent. Twenty-seven percent of 18-to-22-year-olds strongly or somewhat agreed that “Jews in the United States have too much power,” compared with 16 percent overall and just 11 percent of those over 65."
It's easy to get exposed to extreme content on instagram, X, YT and elsewhere. Incendiary content leads to more engagement. The algorithms ain't alright.
One danger is that the perceived volume of toxic people does actually create large numbers of genuinely toxic people. For example, when mainstream influencers or politicians endorse racist views, even indirectly, it can give others permission to start saying the same things. Then that causes the other side to go further to an extreme on their side. And so on.
> And of course, this is why for-profit platforms are loath to properly moderate. A social network that bans the trolls would be like a casino banning the whales or a bar banning the alcoholics.
How so? It's not like Facebook charges you to post there.
Furthermore, this does well to illustrate how that handful of trolls is eroding away the mutual trust that makes modern civilization function. People start to get the impression that everybody is awful and act accordingly. If allowed to continue to spiral, the consequences will be dire.
Bad content being shoved in our face is a symptom of the real problem, which is bad mechanics. Solutions that reform the mechanics (e.g. require a chronological feed instead of boosting by likes) are going to be more effective, less divisive, neutral by design, and politically/legally easier to implement.
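The mechanical difference is small enough to fit in a few lines. A toy sketch in Python, not any platform's actual ranking code:

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        created_at: float  # unix timestamp
        likes: int

    def engagement_feed(posts: list[Post]) -> list[Post]:
        # Boosted feed: whatever drew the most reactions floats to the top,
        # which structurally rewards inflammatory content.
        return sorted(posts, key=lambda p: p.likes, reverse=True)

    def chronological_feed(posts: list[Post]) -> list[Post]:
        # The mechanics reform: newest first, no reward for outrage.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)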
Isn't it still an accurate perception of moral decline? Even if it's only 3% posting the misinfo and toxic content, it's still 47% that are sharing it, commenting on it, and interacting positively with it. This gives what is, in my opinion, the correct perception that there is moral decline.
You have to eliminate this counterpoint for the evidence to fully support your perception: people share stuff on social media because it's easier and actively encouraged.
Here's the counterpoint to that though: people share stuff on social media not just because it's easy, but because of the egocentric idea that "if I like this, I matter to the world." The egocentrism (and your so-called moral decline) started way earlier than that, though; it goes back to the 1990s, when talk shows became the dominant morning and afternoon programming on TV. Modern social media is simply Jerry Springer on steroids.
It doesn't indicate moral decay at all. It just confirms what we already know about the human psyche being vulnerable to certain types of content and interactions. The species has always had a natural inclination towards gossip, drama, intrigue, outrage, etc.
It only looks like "decline" because we didn't used to give random people looking to exploit those weaknesses a stage, spotlight, and megaphone.
This is intentional: make people think there's nothing online except harmful content, and propose a regulatory solution, which creates a barrier to entry. It's "meta" trying to stop any insurgent network.
It’s also Meta overstating the power of influence. Why would they do that? Because it’s good marketing for them to sell a story around how their services running ads can be used for highly effective mass influence.
Yes, like the "Cambridge Analytica scandal." The "scandal" was people using their ad marketplace tools and a 100-line Python script to "hack" a presidential election. Also, the "Russian Election Interference" aka someone doing a $50k ad buy and "hacking" the presidential election.
This study seems to be playing with what toxicity means.
Does the 43% cited at the top of the piece match the same criteria they use when digging deeper in the study?
Their specific definition of toxicity is in the supplementary material, and honestly I don't think it matches the spectrum of what people perceive as toxic in general:
> The study looked at how many of these Reddit accounts posted toxic comments. These were mostly comments containing insults, identity-based attacks, profanity, threats, or sexual harassment.
That's basically very direct, ad hominem comments. And the example cited:
> DONT CUT AWAY FROM THE GAME YOU FUCKING FUCK FUCKS!
Also, why judge Reddit on toxicity but not on fake news or any other social trait people care about? I'm not sure what the valuable takeaway from this study is: that only 3% of Reddit users will straight-up insult you?
Abstract: "Americans can become more cynical about the state of society when they see harmful behavior online. Three studies of the American public (n = 1,090) revealed that they consistently and substantially overestimated how many social media users contribute to harmful behavior online. On average, they believed that 43% of all Reddit users have posted severely toxic comments and that 47% of all Facebook users have shared false news online. In reality, platform-level data shows that most of these forms of harmful content are produced by small but highly active groups of users (3–7%). This misperception was robust to different thresholds of harmful content classification. An experiment revealed that overestimating the proportion of social media users who post harmful content makes people feel more negative emotion, perceive the United States to be in greater moral decline, and cultivate distorted perceptions of what others want to see on social media. However, these effects can be mitigated through a targeted educational intervention that corrects this misperception. Together, our findings highlight a mechanism that helps explain how people's perceptions and interactions with social media may undermine social cohesion."
Ahhhh. So maybe it's the platforms and their algorithms promoting harmful content for attention that are to blame? And how many of the platforms want to even admit the content they are pushing is "harmful"? Seems like two elephant-sized sources of error.
The premise of this study is a bit misguided, imho. I have absolutely no idea how many people _post_ harmful content. But we have a lot of data that suggests a _lot_ of people consume harmful content.
Most users don't post much of anything at all on most social media platforms.
Saying it's "algorithms" trivializes the problem. Even on reasonable platforms, trolls often get more upvotes, reshares, and replies. The users are actively trying to promote the bad stuff as well as the good stuff.
Open YouTube in a fresh browser profile behind a VPN. More than 90% of the recommended videos in the sidebar are right-wing trash: COVID conspiracies, nut jobs spouting Kremlin nonsense, alt-right shows.
Baseline is in the end anti-democracy and anti-truth. And Google is heavily pushing for that. The same for Twitter. They are not stupid, if they know you and they think they should push you in a more subtle way then they aren't going to bombard you with Tucker Carlson. Don't ever think the tech oligarchy is "neutral". Just a platform, yeah right.
> Baseline is in the end anti-democracy and anti-truth. And Google is heavily pushing for that.
Google et al. do not give a hoot about being “left” or “right”; they only care about profit. Zuck tattooed a rainbow flag while Biden was President and is currently a macho-man crusader. If YouTube could make money from videos about peace and prosperity, that’s what you’d see behind the VPN. Since no one watches that shit, you get Tucker.
via their preferred business model: no competition, no market. Their best bet is the right. And since the right's agenda is objectively antithetical to the people's interest, they need to create smokescreens with bullshit. That is exactly what is being pushed here.
They are happy with "left" politicians as long as they buy into the false narratives, and as long as they are willing to play along with their monopolist playbooks.
Fooled in what way? I haven’t used YouTube or any social media since 2019-ish. The last time I saw anything on YouTube was probably 2018-ish (other than my kid showing me volleyball highlights :) )
Well you just posted, telling someone else what Zuck's political interests might be, based upon what even you described as meaningless performative behavior.
If more people were obligated to undergo KYC to get posting rights, fewer people would be able to credibly claim to be other than they are.
If more channels were subject to moderation, and moderators incurred penalties for their failures, channels would be significantly more circumspect in what they permitted to be said.
KYC should work both ways. If a social media network needs to know my real name and address, I should know the real name and address of everyone running the social media network.
It's a good idea for it to work both ways in case doxxing happens. I'm against KYC by any business from a security perspective: my PII shouldn't be available to criminals.
Any basic network theory will help you understand that it's not about how many people post; it's about their reach and correlation with viewership across the overall graph.
"A few bad apples spoil the whole bunch" is illustrated to an extreme in any nodal graph or community.
So it's more about how much toxic content is pushed, not how much is produced. At an extreme a node can be connected to 100% of other nodes and be the only toxic node, yet also make the entire system toxic.
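That extreme case, with made-up numbers, in a few lines of Python:

    n = 100                            # users in the network
    toxic = {0}                        # a single toxic account...
    followers = {0: set(range(1, n))}  # ...that everyone else follows

    producing = len(toxic) / n             # share of users producing toxicity
    exposed = len(followers[0]) / (n - 1)  # share of other users who see it
    print(f"{producing:.0%} produce it; {exposed:.0%} of everyone else see it")
    # -> 1% produce it; 100% of everyone else see it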
> When US-Americans go on social media, how many of their fellow citizens do they expect to post harmful content?
Just because an American citizen sees something posted on social media in English, it doesn't mean that it was a fellow American citizen who posted it. There are many other major and minor Anglophone countries, and English is probably the most widely spoken second language in the history of humanity. Not to mention that even if someone does live in America and speak English and post online, they are not necessarily a US citizen.
We hold (or I do at least) certain stereotypes of what type of person they must be, but I'm sure I'm wrong and it'd be lovely to know how wrong I am.
"Last week, the Yale Youth Poll released its fall survey, which found that “younger voters are more likely to hold antisemitic views than older voters.” When asked to choose whether Jews have had a positive, neutral, or negative impact on the United States, just 8 percent of respondents said “negative.” But among 18-to-22-year-olds, that number was 18 percent. Twenty-seven percent of 18-to-22-year-olds strongly or somewhat agreed that “Jews in the United States have too much power,” compared with 16 percent overall and just 11 percent of those over 65."
It's easy to get exposed to extreme content on instagram, X, YT and elsewhere. Incendiary content leads to more engagement. The algorithms ain't alright.
How so? It's not like Facebook charges you to post there.
Here's the counterpoint to that though: people share stuff on social media not just because it's easy, but because of the egocentric idea that "if I like this, I matter to the world." The egocentricism (and your so-called moral decline) started way earlier than that, though-it goes back to the 1990s when talk shows became the dominant morning and afternoon program in the TV days. Modern social media is simply Jerry Springer on sterioids.
It only looks like "decline" because we didn't used to give random people looking to exploit those weaknesses a stage, spotlight, and megaphone.
Is the 43% cited at the top of the piece matching the same criteria they use for digging deeper in the study ?
Their specific definition of toxicity is in the supplementary material, and honestly I don't think it matches the spectrum of what people perceive as toxic in general:
> The study looked at how many of these Reddit accounts posted toxic comments. These were mostly comments containing insults, identity-based attacks, profanity, threats, or sexual harassment.
That's basically very direct, ad hominem comments. and example cited:
> DONT CUT AWAY FROM THE GAME YOU FUCKING FUCK FUCKS!
Also why judge Reddit on toxicity but not fake news or any other social trait peolple care about ? I'm not sure what's the valuable takeaway from this study, only 3% of reddit users will straight insult you ?
Most users don't post much of anything at all on most social media platforms.
Rage = engage
Funny how you say this but insist you're not the one being fooled right now!
I was always intrigued by Twitter. After the novelty wears off, who the hell wants to spend hours every day tweeting?
Free speech reductionists: Not interested.