
Banning hate communities does work though [0]. I assume the results are similar for Twitter and Facebook. "Hateful" Reddit or FB communities don't allow "free speech" either: the moderators will ban people who go against the grain. There is no free exchange of ideas or dispute, because if you go against the grain you'll just get banned from that community (but not the platform). As such, platforms either allow the hateful communities to exist in their 'safe space silo' or they de-platform them.

[0] http://comp.social.gatech.edu/papers/cscw18-chand-hate.pdf



I'm being censored on Reddit myself.

I'm not really sure why, exactly. I suspect it has to do with a NSFW tag on my account (that I didn't put there). I'm afraid to turn it off because of the warning next to that button: it doesn't explain what "swearing" is exactly, and if I turn it off and write "fuck" I might get banned.

My account provides a lot of help to other people, it's highly valued by the community, I have loads of karma, I've paid for quite a bit of server time by the rewards I've received.

But you can't search on my username, it won't show up (not even if you have NSFW visibility turned on).

I'm not about politics, or covid or any hate stuff. I'm just there to try and help people with depression, anxiety and other issues. And I'm being censored for some reason.

So whatever they're doing, it's guaranteed to overreach, it's counterproductive, and who knows how much damage it does to society as a whole.

It's just that no one knows the severity of the damage the censorship does.


> I'm just there to try and help people with depression, anxiety and other issues.

Then it's most likely because those topics aren't "advertiser friendly." Youtube did something similar during what was called the "adpocalypse." Advertisers don't want their products to be associated with the things you talk about, and Reddit cares more about selling ads than helping you help people.


Servers and bandwidth are not free. People maintaining them need to eat too.


That doesn't mean that people should be restricted to only discussing things that don't piss off the advertisers.

Fuck advertising.


>That doesn't mean that people should be restricted to only discussing things that don't piss off the advertisers. Fuck advertising.

They're not. It's easier and cheaper than ever to host content online or even host it at your home.


Computation, bandwidth and (open source) software are so cheap these days that they approach free for many people (but not all). It's just a matter of coming up with a functioning model.

Ad support is just one such model, and not a very new one at that. Remember, you made this comment on a server that was provided to you for free without the need for ad support.


> Remember, you made this comment on a server that was provided to you for free without the need for ad support.

HN has ads, just sneaky ones that blend in with user-submitted links on the front page ($YCStartup Is Hiring posts, Launch HNs). But on the positive side, they don't seem to rely on tracking to target any narrower audience than the whole HN userbase.

https://github.com/minimaxir/hacker-news-undocumented#percei...


Comes down to your definition of advertisement.

It's not without self interest. But that was kind of my point. So I'd say you and I are on the same page, I just used a different way to describe it and if that wasn't clear, that's probably because of how I wrote it. But thanks for giving me your feedback.


Reddit-like software is open source, what's stopping anyone like you then? Talk is cheap.

>Ad support is just one such model, and not a very new one at that. Remember, you made this comment on a server that was provided to you for free without the need for ad support.

There are job ads on HN, plus HN is even more highly moderated than Reddit. So it's a very bad example.


>Reddit-like software is open source, what's stopping anyone like you then? Talk is cheap.

That source code hasn't been updated in years.

>There are job ads on HN, plus HN is even more highly moderated than Reddit. So it's a very bad example.

If you add meaning to the words of others that is not there, then indeed, the examples are bad.

To bring up the moderation here, you're supposed to be nice and interpret the words of others positively. Your reply is bordering on malicious:

>Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.


>That source code hasn't been updated in years.

Oh, so it does cost effort and money to hire people to make a site like Reddit? I thought you said it's easy and cheap and approaches free. So how does Reddit recoup that money?

Anyway, I said Reddit-like, not Reddit. E.g. https://github.com/libertysoft3/saidit

Old Reddit code is still pretty good.

>If you add meaning to the words of others that is not there, then indeed, the examples are bad.

>To bring up the moderation here, you're supposed to be nice and interpret the words of others positively. Your reply is bordering on malicious:

My point was that HN runs on ads and the financial goodwill of YC. If there's hate speech and misinformation on here then it would reflect badly on YC and YC backed companies. So we run into the exact same problem with Reddit and Youtube servers running with ad companies that don't want ads on a site with hate speech and misinformation.

You could set up or donate to such a site to prevent these influences, since you seem to think servers, bandwidth and maintenance are cheap. I think only talk is cheap; prove me wrong. What are your reasons for not creating such a site, since you feel so strongly about censorship?


It's not free, it's just not something you compensate with cash. Let's not pretend that HN does not derive value from the people using it; that's a key component of the business.


"Approaches free" does not mean "free" in this context and that HN derives that value in other ways than advertisement was the point I was making.


Reddit’s search is incredibly (purposefully?) broken. It won’t give you nsfw results for anything. You have to go to old.reddit.com and use that search system. It has a checkbox for “include nsfw results” that is missing from the new UI. Check that and see if it helps you find yourself.


I don't really need to find myself, you know. I know where I am. The problem is more that others can't find me and don't even know that I'm missing. Maybe you're right about the reason, but even that's irrelevant.

It's scary. I happen to know this was done to me. How many times have you been made to disappear without knowing you were scrubbed, without being given any notice?

This isn't my first account on hackernews (not my second one either, though with one exception they are all in good standing).

But on my very first account, I was happily participating here for years, until someone kindly told me I had been shadowbanned. I don't know how much of my voice was censored exactly, and I don't know why.

So the issue isn't reddit. And apparently it's not really about me. The problem is exactly what this article is about, private censorship. Web3 is supposed to address issues like this (though, I don't know how much of that is handwaving and make believe). As soon as I can, I'm going to move there.

I just can't trust these corporate entities, they don't operate in good faith, aren't transparent and avoid accountability for their actions. The utopia that was promised to me has failed and is in full decay, the signs are everywhere. It's time to move on.

It's sad that a "normal user" like me ended up with those beliefs.

I just wanted to be left alone and allowed to voice my opinions, I don't want to have to deal with censorship, I don't want to have to tell others how I've been impacted by it, to make them aware of what's happening behind their backs. Instead, here I am having a discussion like this with others.

Am I even allowed to say this? Will this be wiped as well? I don't know. I wish this was all a joke. What kind of dystopian future is this? It's ridiculous beyond belief.


I don't think I've ever seen an account with an NSFW tag/flair. Posts? Yeah. Subreddits? Yup. Possibly a flair for your user on a particular subreddit added by a mod for some reason? Sure. But not accounts. Can you link to this tag?


Oh yeah. Accounts have NSFW tags these days. Also impacts chat functionality. You'll get warnings there. They call it "profile" but your profile is 100% of your account activity.

https://www.reddit.com/r/help/comments/pfpw4y/any_way_to_rem...

This link has pictures in it, showing you the NSFW tag:

https://www.reddit.com/r/techsupport/comments/8qhay9/why_is_...


It seems that there are a fair number of sex workers on reddit whose profiles are tagged NSFW. Presumably, they would view the tag as an asset.

(I don't have any experience with these accounts myself. But I've stumbled across more than one in mainstream subreddits, and my curiosity was piqued. Only pointing this out to note that this comment reflects the totality of my knowledge on the subject.)


Define 'works'. This subject is an easy one in which to conflate goals. The goals of the platform are different than the goals of the public, lawmakers, etc. which is what the article is about.

The goal of the platform is to pull in more users. Having less abusive language and fewer abusive users is a means to that end, as is appeasing general public outcries for more moderation.

The goal of the public and lawmakers is, hopefully, more effective and meaningful discourse.

The paper you link measures the former outcome, not the latter.

Edit: Just to comment on the outcome you are referring to though, I don't believe the bans actually led to more meaningful and effective discourse on Reddit. I don't even think they led to more civil discourse. You can click on literally any politically sensitive topic on the front page and you'll find that almost all the top comments both lean in a specific direction (diversity of discourse is terrible) and deride anyone who disagrees. Maybe they use the 'F' word less, and no one is using the 'N' word anymore, but it is very obvious that it has become a strongly homogenized platform that doesn't welcome nuanced opinions.


"More effective discourse" is probably far too aspirational a goal for lawmakers/government. It's also probably too subjective.

Less violence, on the other hand, is a clearer goal that aligns with existing laws against violence, so it has a firmer basis, in line with existing restrictions on speech. It also gives you a non-partisan measuring stick.

Private platforms, on the other hand, will have more flexibility in trying to maintain, say, discussion quality as a goal (see HN's stated moderation goals vs Reddit's). Or disinformation, or language/profanity, or whatever else a private moderator may choose.

The government telling those private parties that they can't set their own standards, though, seems like a particularly terrible direction to go.


It's no more subjective than 'less violence' provided the interested parties define measures for success. I mean, if the government can set goals for employment, then surely they can set them for civic engagement and disinformation.

Totally agree with your last two points.


"violence"


I suspect that for reddit specifically, homogenous opinion and lack of meaningful discourse has more to do with the voting/karma system and its inability to scale well.

Nuance gets crowded out because posts and comments presenting polarized opinions are much more likely to get acted on by readers than their more nuanced counterparts are. Posting and voting becomes more about the dopamine hit from the gained karma and sense of being correct than it is about discussion.

This is why while it still isn't perfect, one can often see higher quality discussions in smaller niche subreddits where there's no critical mass to push dominant opinions. In those communities, posting extreme comments just makes one look like an attention seeking troll.

So in my view, moderation or lack thereof is almost orthogonal to an online community's propensity for quality discourse. That seems more related to the designs of and incentives given by the software these communities run on.


I agree with most of what you've said. I really only disagree with the last bit, as it ignores the network effects of moderation, and the fact that moderation applied by topic is necessarily polarizing along the lines of that topic.

I didn't intend to prove "bans cause homogenization", but rather cast doubt on "bans achieve the stated goal".


Sure it worked at reducing hate speech on reddit, but is that the right objective? It's not implausible to think that the resentment from such local measures could cause hate to increase in a more global sense. Kind of like entropy in thermodynamics: local work can reduce local entropy, but at the expense of causing a global increase in entropy.

If the real objective should be a global decrease in hate, then maybe local suppression/exile might not be the right mechanism.


I suppose to the extent these channels of communication recruit and radicalise, if they are shut down they will not be able to recruit and radicalise. The act of shutting them down might infuriate regular users of these channels, but they’re already radicalised anyway.


What you end up with is recruitment. If the user remains on Reddit or Twitter they're exposed to the gamut of human thought. However extreme they may be, they're still attached, there may exist some sort of analogue for a cleavage furrow, but nonetheless the cell remains. It's only at the point where you've so alienated and shorn their attachment to the whole that they become a fully independent entity, and that they become truly radicalized. And having observed this, I can say with sincerity that they move deeper into the domain of extremity.


The point at which someone is “recruited” is subjective. Either way, cutting out new recruits is a great solution.


There is little evidence that radicalization happens online [1]. It seems to require in-person contact to really take root.

[1] https://www.rand.org/randeurope/research/projects/internet-a...


This “study” has been proven to be obviously and demonstrably incorrect in the years since it was published. The authors should retract.


source?


Have you read the conclusions of the study? Do they comport with events of the past 5 years? Not at all


If you are going to claim a study is bunk, then link another study proving it. Empty statements like that prove nothing. That's not science.


This study was not really science. They interviewed a few people and then applied their own qualitative analysis to conclude that people cannot be self radicalized fully online. Since they published the study, there have been several major terrorist attacks perpetrated by individuals fully radicalized online.


I see. Your source is moral panic in the news and journalists' psychic ability to determine that dangerous radicalism is on the rise without ever bothering with any sort of data collection. Who could argue with that?


I promise you from personal experience, radicalization does happen online too.


I'd like to recommend this podcast about online radicalization, produced by the NYTimes: https://www.nytimes.com/column/rabbit-hole


That research was published by a libertarian think tank in 2013. Since then, there have been numerous examples of lone wolf terrorist attacks where the perpetrators appeared to have no offline contacts with extremist groups. See: New Zealand shooting, Pittsburgh shooting, etc.


There were also lone wolf mass shootings before the internet, so what does this prove?


That radicalisation does not exclusively happen face to face and that it can be conducted on line, by newsletter, by book, by carrier pigeon.


If ISIS can recruit in person by talking to vulnerable people in the right mosques, surely other extremists have backup channels as well.


But surely fewer channels means fewer recruits? Ultimately it's about what kind of content these platforms want to publish though.


It might also mean "higher quality recruits".


And that’s okay. Fewer “higher quality” recruits are easier to go after individually.


History says that well-organized movements, even if smaller, have higher chances of achieving their goals.

I can recommend this article about inefficiency of online movements:

https://www.theatlantic.com/technology/archive/2019/05/in-pe...


You can say the same about the forces that work against the criminals, except they’re well organized and have larger numbers.

Your link doesn’t take into account whether the chilling effect is what leads to ephemeral online hate movements.


The more someone has to rely on backup channels, and then on backups of the backup channel, the less likely they are to enroll new members.


As I said elsewhere in this thread (and linked to an article on similar topic), success isn't measured by sheer numbers of lukewarm sympathisers alone.

It is not as if George Washington or the Civil Rights Movement recruited masses of followers online. They started with relatively small but cohesive units. Yes, it took them longer than it takes today to assemble a flash mob. But the efficiency measured by capability was higher than today.

For example, if I were a German cop, I would be much more concerned about mostly-offline groups such as the Reichsburger or the Grey Wolves (Turkish nationalists). They linger around for decades and are hard to penetrate, unlike whatever wild crowd you can put together on Facebook. That Facebook group will look scary, but it won't have much pull after 5 years, because it competes with a ton of other Facebook groups for attention.


Even so, it makes very little sense to provide a platform to these hardcore organizations to continue to recruit from. The dismantling of the old KKK in the United States is a great success story to work off of.


Person to person doesn't scale as well as the internet.


Again: yes, it does not scale as well. But scale to what? Silicon Valley is generally obsessed with quick growth without thinking about long term impact. But politics is generally long term.

It is easy to put together a temporary, unstable mob of people in virtual reality. But real political impact of such mobs tends to be superficial.

Impactful organizations such as, well, the Taliban, sacrifice rapid growth and substitute it with higher-quality personal connections.


The hate is in part generated by a continuous bubbled feedback loop. Cutting out the source(s) usually doesn’t lead to a redirection and rather has a chilling effect.


A plausible hypothesis, but needs data.


Pretty much how it played out with r/fatpeoplehate. Anecdotal yes, but there are quite a few more examples on Reddit alone.


You have evidence that fat person hate has decreased globally beyond reddit?


The goal is to reduce network effects.

As poetic as free speech is, the 1700s were a different time. Hard to SWAT a rando a town away let alone a country. Now a nation state can instigate localized terrorism.

Thomas Jefferson understood laws and constitutions must change in step with human discovery.

Scalia wrote laws are not open ended, and up for public romantic interpretation.

To paraphrase both: many Founders wrote of the need for free speech in official political proceedings, never considering the privilege extending to the public at large. In public, people were welcome to get a beating if they preferred to violate local order.

Humans are biological animals first. Gentle by training.


If you read the actual research provided by the OP, it shows much more than that. The measures did not only crowd out hate speech: the study followed individual users and noted that they moderated and changed their behavior once they moved to other communities after the more extremist ones were banned.

That implies there's a positive effect even on the individual level, not just a silencing effect. Which seems completely in line with, say, busting a cult in real life.


I can say that my local subreddit is one of the most heavily policed, with the result that during the covid pandemic a large fraction of the casual posters were banned because they couldn't keep track of what new rule was introduced on what day.

A lot of people moved over to telegram channels because you could actually ask questions without getting banned for concern trolling, misinformation or incitement - e.g. asking where protests were happening which is what got me banned finally. Ironically I was asking where they were happening so I could avoid them.

The result of that policing is that I now have a telegram account and regularly scan a dozen right wing channels so I know if I can buy groceries without getting tear gassed.

If this is what winning looks like for the left we don't need help in losing.


> http://comp.social.gatech.edu/papers/cscw18-chand-hate.pdf

This paper is trash. On defining hate speech, "we focus on textual content that is distinctively characteristic of these forums", so yes, banning specific subreddits resulted in less of that content on Reddit overall. Did it make Reddit a friendlier place? Probably not, as tensions have never been higher.


> Did it make Reddit a more friendly place? Probably not as tensions have never been higher.

Did allowing hate speech make 8chan/Voat friendlier? There are plenty of unfriendly communities that lack overt racism/fatphobia, but I can't think of any friendly ones that do.


Yes. People weren't trying to censor each other so they were less pissed off.


8chan was a sacrificial site to distract the most fervent internet crusaders. It worked pretty well; I believe it still exists in some form, but it is out of people's minds.


Also important to note that Reddit doesn't ban hate. They ban unpopular opinions. If you want to see hate, go to r/politics.


Yup. These platforms are all perfectly 100% fine with hatred as long as it's the right kind.


> Banning hate communities does work though [0]

That's like saying banning smoking in the park reduces smoking because nobody's smoking in the park any more.


These things do help. Most members of these communities get radicalized by virtue of other content they’re already browsing. If you remove the communities and force individuals to leap from Facebook to some no-name forum, they’re far less likely to engage.

This phenomenon is mirrored elsewhere as well, where small interventions can lead to big impacts: see suicide rates in Britain after carbon monoxide was removed from household gas. Initial friction (or "means reduction" in that research) can drive change.


I'm not convinced we should think it's acceptable for you or I to decide who does and does not get exposed to information that will 'radicalize' them because you and I may have wildly different opinions of what constitutes a radical opinion. Humans sometimes develop stupid opinions, and that is OK. They're allowed to do that. They aren't allowed to act on those opinions in a way that constitutes a crime, which is why we go through the trouble of defining crimes in the first place. 'Precrime' thought policing seems like a pretty dangerous road to go down and one that is full of unintended consequences.


I define a radical group as a collection that consistently peddles false material for its own ends, with an overly provoking tilt that makes it easy to go viral. Some specifics from the past year would be vaccine misinformation, prejudice that incites racial violence in Myanmar, and organizing an occupation of the Capitol in the US. Shouldn't we put a stop to these communities if they sway people to start peddling this information themselves?

At least in the US, free speech is most free within public forums. But even then we already define some speech as too dangerous if it's known to lead to poor outcomes. You can't yell fire in a crowded room, not because yelling fire is inherently a crime, but because it's going to incite people to detrimental ends.

Plus social networks don't have a constitutional obligation to be town squares where free speech can spread unfettered. You have to draw a line somewhere. And as the people who write the algorithms that can amplify or bury content on these networks, I think it _is_ our obligation to at least set parameters on what constitutes good/healthy content interactions on these social platforms and what doesn't. The ML algorithms have to optimize over some loss function.


Not that I disagree with you, but "can't yell fire in a crowded room" is slightly misconstrued, as those aren't the original words from the U.S. Supreme Court case. [0]

Additionally, the idea of 'clear and present danger' has been modified in the 100 years since that court case. The Supreme Court has since stated: "The government cannot punish inflammatory speech unless that speech is 'directed to inciting or producing imminent lawless action and is likely to incite or produce such action'". The definition has changed and depends on the situation, on whether some action is imminent or "at some ambiguous future date".[1]

[0] https://en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_the... [1] https://en.wikipedia.org/wiki/Imminent_lawless_action


I mean location based smoking bans definitely do reduce smoking rates.


A lot fewer people smoke now in the US than several decades ago.

So let's do it. Let's isolate and restrict the hateful, the angry, the irrational EXACTLY like we did smoking.


This seems like a decent idea until someone in power decides that you are hateful, angry, or irrational.


You could use the same reasoning like

> Making bad things illegal seems like a decent idea until they make something you like doing illegal

Therefore just have no laws, I guess?

Sometimes an imperfect, abusable system is better than no system. In fact, I'd wager that's usually the case.


I think it already has been abused, the law has nothing to do with it.

Just don't venture to places where that is not the case and we can agree. We have different requirements for interesting platforms and should keep it separated for the most part.


> Therefore just have no laws, I guess?

No. But have a system where some single powerful person can't just make up laws on a whim.


Well, they generally can't: laws are written and passed by a legislature.

You could make this sort of argument for executive orders, though.


Maybe, but it does make the park a more pleasant space for everyone else to spend time in.


To be clear, the study found that platforms banning topics succeeds in removing those topics from the platform - not necessarily from society as a whole. The study did not conclude that banning hateful communities off of reddit actually made those users less hateful or curbed the spread of hateful content online. If each of those banned users subsequently posted twice as much hateful content on a different platform, that still comports with the conclusions of that study.


There's at least one major reason to be skeptical of that result, and they mention it in their limitations section.

What they did was create a lexicon of hate speech terms from two large subs that were banned. They then counted the frequency of those terms in other subs after the ban. They found that usage of those terms dropped substantially, and concluded that the bans were effective at reducing hate speech.

If you're familiar with the dynamics of these sorts of subs, the problem with this approach should be fairly obvious. These subs tend to develop their own set of specific terms/memes (hate-related or otherwise). It may be the case that the bans were effective at reducing hate speech across the whole site, but it's also possible that the same people are still posting the same stuff coded differently. This study is far from the final word on the matter.
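The measurement being criticized can be sketched in a few lines. This is a hypothetical, simplified version of the lexicon-frequency approach; the lexicon terms and comments below are placeholders, not the paper's actual data:

```python
# Stand-in lexicon; the paper mined its terms from the banned subs' comments.
LEXICON = {"term_a", "term_b", "term_c"}

def lexicon_rate(comments):
    """Fraction of whitespace tokens across `comments` found in LEXICON."""
    hits = total = 0
    for text in comments:
        tokens = text.lower().split()
        total += len(tokens)
        hits += sum(1 for tok in tokens if tok in LEXICON)
    return hits / total if total else 0.0

# Compare a migrating user's rate before vs. after the ban.
before = ["term_a is great", "nothing to see here"]           # 1 hit / 7 tokens
after = ["totally normal comment", "another normal comment"]  # 0 hits
print(lexicon_rate(before), lexicon_rate(after))
```

The objection above falls straight out of this sketch: any recoded term absent from `LEXICON` scores zero, so a community that merely swaps vocabulary looks reformed.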


Yes, reforming users who already want to engage in hate speech is not the goal; isolation is. Look at pre-internet times: certain forms of radicalization were far less common than they are today because they weren't very present in in-person communities (while those that WERE geographically concentrated, like racism in certain American communities, still spread in those places).

When the status quo is bad, "maybe that won't work" is not an effective argument against taking action anyway if you don't have a better idea. If we don't take action, we already know we're going to (continue to) have a bad result!


The exact same is true for political communities that aren't declared as a hate communities. So no, banning hate does not work. It does work if you define work as this study did, which has problems to say the least.

Today Reddit is even more hateful than before. That is a subjective measure, but 5-10 years ago you at least had standards: for the most part, people did not wish death on those they disagreed with. There were exceptions; today it is very common.

I would argue the study is wrong.


They're measuring success based on referencing the community they just banned people from, aren't they?


Then r/politics should be banned but it's one of the biggest subreddits instead.


Neither of your assertions is related to what is being discussed. Nuclear bombs "work" and bad people would do horrible things with them, but that says nothing about whether we should use them.


If all "legitimate" platforms ban hate speech, then users wanting to engage in hate speech will all go to some platform that radically allows all speech with disproportionately this undesirable speech. They will intermingle disproportionately with those spreading sexual abuse images, drugs, insurgent propaganda and instructional material, and other undesirable material. Facebook likely makes the problem worse by forcing these "hate speech" and "disinformation users" to be completely surrounded by people with repulsive content, instead of having their repulsive content critiqued and shamed by other users.

Having people with bad, hateful ideas out in the open I would argue is preferable to concentrating all these bad thoughts together with people that will reinforce that it is normal.


>"Hateful" Reddit or FB communities also don't allow "free speech". The moderators will ban people who go against the grain.

Which is fine I think, but why have Reddit or FB do the censoring (aside from things that are outright illegal). I don't much care if a bunch of Nazis or tankies are busy planning world domination on Reddit while sharing recipes. Why do you care?


> Why do you care?

Because zero moderation beyond criminal content is 8chan, which arguably inspired mass shootings.

https://www.washingtonpost.com/technology/2019/08/04/three-m...


And when you ban misinformation on your platform, that "anti-vaxxer" instead goes to 8chan, which by your argument might inspire them to shoot people up.


Given the size of 8chan and the size of the communities banned from Reddit, we know this to be untrue.


But you want Reddit to literally adopt 8chan's moderation policy, meaning that Reddit would become the place that inspires mass shooters instead of 8chan (which, by the way, is no longer a place that inspires mass shootings, since it was killed by Cloudflare after the last one and replaced with the impotent and unpopular 8kun).


>Reddit now will become the place that inspires mass shooters instead of 8chan

I can guarantee that there are plenty of evil doings on Reddit and Facebook.

One argument, and a more honest one, that people can make is that (a) social media is toxic and (b) it should be made illegal generally. Bingo bango, no mass shootings I guess.


>But you want Reddit to literally adopt 8chan's moderation policy,

I want reddit to adopt the public square's policy of allowing any content that isn't illegal, which also happens to be pretty much synonymous with 8chan's policy.

Do you consider the town square (which has the policy of allowing content that is not illegal) a center of inspiration for mass shootings? Could the fact that the public square is not viewed as a place of inspiration for mass shootings have anything to do with the integration of many ideas, and the fact that someone bringing bad ideas might actually be challenged in an environment where they are exposed to the general ideas of the community rather than an echo chamber of fellow nazis or whatever?

The nazi hall may have the same moderation policy as the town square, but that doesn't mean I expect the same inspirations to come out of the nazi hall. The issue with the nazi hall is the powder keg full of people reinforcing bad ideas, whereas a nazi in a more "normal" place like the public square might have some chance of being shamed or convinced their anti-social ideas are undesirable (despite the nazi hall and the public square having the same moderation policy). I don't want to shove more people into the nazi hall by banning them from the public square (especially when they're only being banned from the square because they have unconventional views on vaccines).

---------------

In the censor's world, the people with undesirable ideas in the public square are kicked into 8chan where, instead of their ideas being challenged, they all end up in a self-reinforcing chamber. The proportion of people wanting a mass shooting may be tenfold that in the public square, leading to more concentrated exposure, including for people who were originally just anti-vax or whatever. And the people running the public square turn around and say "see, 8chan allows any ideas, and that's what happens when you do that!"


  "Do you consider the town square a centre of inspiration for mass shootings"
Before the internet, yes, definitely. Maybe not mass shootings specifically because that seems to be a recent fashion trend after Columbine, but violent extremism in general. How do you think Hitler managed to secure over 40 percent of the democratic vote in the early 1930s? How did Osama Bin Laden recruit extremists who were willing to put a bomb into the WTC basement? Propaganda, speech.

This idea that unfettered speech in the public town square, even if it isn't directly inciting violence, can't lead to pathological outcomes just doesn't hold up.

This isn't even an argument for government censorship. It's merely me recognizing that these type of outcomes can come about.

Nowadays almost all extremist speech is online, because that's where there is distribution and anonymity, so the analogy breaks down.

  "where they are exposed to the general ideas of the community rather than an echo chamber"
This isn't a bad argument, but you have to balance it off with the knowledge that ideas are highly, highly contagious. On balance, I think giving such ideas distribution to a billion eyes is far more harmful than pushing a fringe into echo chambers which already existed before social media censorship began anyway (such as the Stormfront forum).

Moreover you have to recognize that these isolated echo chambers would naturally self-segregate on Reddit if given free rein, and so in practice you haven't changed anything aside from giving these ideas more distribution. It's not like /r/88 or whatever would be interacting with the rest of Reddit thus helping their members deradicalize.


I appreciate your honesty in believing the public square is a center of inspiration for mass shootings.

I believe quite the opposite. It has been a place for the public to plan self defense, both to organize themselves in defense from natural disaster, hostile forces, wildfires, and anyone who seeks to do them harm. It is a place for the public to engage in the marketplace of ideas and inspirations, which ultimately leads to the saving of lives, prosperity, security, and bonding of the populace. Harmful ideas can be shamed and those espousing bad ideas have a chance of learning the holes in their ideas. The mass shooter espousing violent ideas in the public square is as likely to have alerted his neighbor to be alert for any evidence of crime, as he is to convince the general populace of his nutjob ideas.

I don't buy your hypothesis that Hitler came to power because of free speech, and quite frankly it is laughable to think banning Hitler from Reddit (were it to exist in his day) would have had any effect whatsoever. You seem quite ignorant of the factors precipitating Nazism, including the economic situation of Germany at that time. It's also worth noting that Hitler was quick to stifle certain speech that went against his ideas, meaning he found free speech at odds with, or even dangerous to, Nazism.

---------

>How did Osama Bin Laden recruit extremists who were willing to put a bomb into the WTC basement? Propaganda, speech.

Bin Laden attempted to blow up the WTC basement with bombs, not free speech. Bin Laden lived in Muslim nations with more limited speech regulations than Reddit.

>Moreover you have to recognize that these isolated echo chambers would naturally self-segregate on Reddit if given free reign, and so in practice you haven't changed anything aside from giving these ideas more distribution. It's not like /r/88 or whatever would be interacting with the rest of Reddit thus helping their members deradicalize.

Some may, some may not. I've stopped using reddit because I was banned because I simply said things like I didn't believe forcefully shutting down a restaurant is an appropriate way to deal with coronavirus. Now maybe that is a very wrong and bad idea, but I'm willing to debate with others on it and learn their perspectives. Instead these communities said fuck you, you're banned, and now you have to go to some echo-chamber where everyone agrees with it. I'm not interested in an echo chamber, I'm interested in engaging with others so my bad ideas can be brought to light and shown to be bad, or my good ideas can be integrated. Your argument sounds more like one against having subreddits.


Hitler convinced almost half the country to vote for him because of speech that drummed up resentments stemming from the Versailles Treaty and the depression, channeling and anthropomorphizing those resentments towards Jews, the Lügenpresse, the military establishment, and so on. So you've missed my point, which is that town square offline speech can directly cause pathological outcomes when it is weaponized by bad faith actors.

The belief that sunlight is the best disinfectant is nothing more than empty sloganeering and it flies in the face of everything we know about social contagion and the willingness of humans to be led astray by tribal hatred.

Town square offline speech didn't lead specifically to mass shootings historically only because this particular medium of terrorism is a modern fashion trend, so it follows that it's a phenomenon that's going to be motivated online more than offline in the modern context.


And your argument is that if the venues hosting Hitler's speeches had Reddit's moderation policies then Hitler would not have been elected?


You're trying to draw analogies between modern technology and the old town square. You should stop doing that because instant distribution to a billion people isn't the same thing as a speech to a thousand.

I provided examples of speech in the old town square leading to pathological outcomes, but we are in a very different regime now and analogizing too much isn't helpful.


So who should decide what moderation policies we have for the public? The general populace, who as you say would elect literally Hitler? The government itself, of which Hitler was once a part and which used these very moderation mechanisms to suppress the Jews? The tyranny of a minority of special moderators, like the censor committee a nominally communist state might have? We allow Nazi speech to exist precisely because we don't want the government or the tyranny of the majority or minority choosing what political speech is allowed, such as outlawing speech that doesn't promote Nazism.

>You're trying to draw analogies between modern technology and the old town square.

No I'm trying to find out how you want to apply moderation strategies to "reduce the likelihood" (my apologies if I misquoted your deleted comment) of democratic election of those who some censors decide have the wrong political views or speech.

>You should stop doing that because instant distribution to a billion people isn't the same thing as a speech to a thousand.

Are you also one of those that thinks the first amendment doesn't apply to the internet because the founders never imagined something that distributes so much faster than the printing press could exist? I know this is a straw man but I can't help but think this is where this is leading.

>And your argument is that if the venues hosting Hitler's speeches had Reddit's moderation policies then Hitler would not have been elected?

The fact that you didn't answer this question (well, you did, but you deleted it) really is a damning answer in itself.


> So who should decide what moderation policies we have for the public?

There's three possibilities:

(1) No moderation at all, beyond what's illegal.

(2) Private voluntary self-regulation.

(3) Government censorship.

In my opinion, (2) is the lesser evil, which isn't to say that it doesn't have its own pitfalls. (1) is infeasible due to the 8chan experience, and our understanding of social contagion and human tribalism. (3) has a much bigger slippery slope risk.

> The fact that you didn't answer this question

I deleted my answer because these analogies are too tenuous. You're trying to compare modern social media with how information spread 90 years ago. How can I map "Reddit's moderation policies" onto 1920s beer halls and Der Sturmer and newspapers? You can't do it. We're in a new regime and we need to reason about this new regime from first principles.


We're in agreement, although I might add that (2) is essentially the same as the censorship policy in the Weimar Republic under which Hitler was elected, where public censorship was nominally and constitutionally illegal [1] (except in narrow circumstances, such as anti-Semitic expression) and any censorship was essentially relegated to private and/or voluntary regulation.

> How can I map "Reddit's moderation policies" onto 1920s beer halls and Der Sturmer and newspapers?

The same way the first amendment is applied to both beer halls and the internet. There's not a single rule in Reddit's content policy that cannot be applied to a beer hall [0]. If you fail to find a way to apply these rules you're either not putting in any effort or you're a lot dumber than you sound (methinks the former).

Given that what you advocate for is virtually identical to that under the Weimar Republic, I assert your chosen policies would have little to no effect on the election of Hitler.

[0] https://www.redditinc.com/policies/content-policy

[1] Ritzheimer, Kara L (2016). 'Trash,' Censorship, and National Identity in Early Twentieth-Century Germany. Cambridge University Press.


I know that the Weimar Republic had anti-semitic censorship laws. I made almost the same argument that you are making just 2 months ago:

https://news.ycombinator.com/item?id=27865484

"My understanding is that pre-Nazi Germany had hate speech laws, and it didn't seem to work there?"

I abandoned my views on this question for a few reasons:

- The Weimar Republic laws either weren't effective at preventing distribution or they weren't actually enforced. The continued circulation of Der Sturmer is evidence of this. The judiciary was known to be heavily biased in favor of the far-right, where less than 10% of far-right political killers were convicted and the majority of far-left political killers were convicted.

- Online censorship is far less likely to create martyrs than the visual/emotional imagery of imprisoning people.

- Online censorship is far more effective at preventing distribution.

- Failing to censor online leads to automatic mass-distribution due to the consolidation of eyeballs in a small number of venues. Failing to censor offline does not. There is less scale to be had offline.

- Online censorship that we're talking about is private and voluntary. It is not in the same category as government censorship as far as downside risk is concerned.

> Given that what you advocate for is virtually identical

It is not "virtually identical". As I've said, the context is extremely different. You can't draw an analogy as much as you keep trying.


>I know that the Weimar Republic had censorship laws. I made almost the same argument that you are making just 2 months ago:

What? The anti-Semitic expression crime thing is a fact, not an argument (I am against censorship laws!). I was honestly completely knocked cold that you came to the conclusion I was making your argument. The takeaway isn't that hate speech laws work, it's that they don't. I'm pro hate speech and anti-censorship. I don't like it, but I'm pro allowing it. I'm making your counter-argument. In fact you seem to be listing many of the reasons why hate speech laws don't prevent Nazism.

>I abandoned my views on this question for a few reasons:

I'm surprised you chose not to learn from your responses and realize the folly of restricting "hate speech." It doesn't seem you abandoned anything; it seems you doubled down.

>It is not "virtually identical". As I've said, the context is extremely different. You can't draw an analogy as much as you keep trying.

You can bury your head in the sand if you like, but no matter how hard you try to convince yourself they aren't virtually identical, they still are. What you advocate is extremely similar, and you are oblivious and disconnected from the reality of the similarity between our current censorship laws and those of the Weimar Republic. I'm not drawing an analogy, I'm saying you are literally advocating for the policy of the Weimar Republic under which Hitler was elected, with only the slightest of differences (their laws were ever so slightly more restrictive due to some spottily enforced hate speech laws). The Weimar Republic's policy was literally free speech, minus some poorly enforced hate speech laws, plus private and/or voluntary censorship, which is your option (2). In fact your precise option (2) was free speech plus private/voluntary regulation, and you admitted that Weimar's hate speech laws were essentially useless.

The internet is just another medium of communication. That's it. You said yourself Hitler reached over half of voters with his speech. That's probably greater voter penetration than even what Reddit reaches. You make some arguments for why hate speech laws weren't very effective against those speeches, but then you think they will be even more effective against a medium with lower voter reach than the speeches you say reached more than half of voters.

>Having said that, it's true that for some people no amount of reasoning or persuasion will work

Some people are their own soothsayer. Have fun in your censored future insulated from reality and the opinion of others, left to the discretion of whatever "private" entity believes is allowed truths.


  "I was honestly completely knocked cold that you came to the conclusion I was making your argument. The takeaway isn't that hate speech laws work, it's that they don't."
You've misunderstood. I was previously arguing that they don't work, not that they do work. Read the old post of mine that I linked.

  "What you advocate is extremely similar and you are oblivious and disconnected to the reality of the similarity between our current censorship laws and those of the Weimar Republic."
I outlined the reasons why these are different situations which you haven't addressed in your reply.


Zero moderation beyond criminal content is/was basically Usenet alt groups. Somehow the world survived all those years.

Listen, I understand the need to cordon off the wrongthink people so that they can't communicate with each other, I just don't agree with it.


> Banning hate communities does work though ...

The sentence is incomplete. Banning X communities does work to achieve the goal of people not talking about X. I don't think that your linked study is really necessary, the Chinese cultural revolution worked really well (to achieve the goal of "preserving Chinese communism") [0]. Imagine if 30 years ago large digital monopolies banned what was considered unmentionable back then. I doubt gay marriage would have been legalised in America. All of the progress that we have in making marijuana more legal would have been terminated by the companies wanting to prevent people advocating illegal drug usage.

[0] https://en.wikipedia.org/wiki/Cultural_Revolution


Homosexuality in general was effectively banned by dominant platforms in the US for quite some time! It was a BIG DEAL to people when gay characters started becoming more common in mass media.

But note the order it happened. The privately-set standards moved with the times much faster than the government ones did!

Writers/tv execs/etc heard both the pro-equality arguments as well as the anti-homosexuality arguments and made their own choices as they were persuaded to. Many states, on the other hand, never legalized gay marriage before the court decision overruled their laws.

So that seems to show that we should empower private parties to have control over what their platform shows, over either the government or just the loudest mobs (there were MANY protests/boycott threats/etc from religious groups over this). The market gives this the advantage over the government here: the private publisher can test what sells, and over time is going to be increasingly forced to move with societal changes, while the government is much more likely to be captive to small-but-loud constituencies (especially in a gerrymandered world).


> the Chinese cultural revolution worked really well

Given that the lineage in power now (the Dengists) were imprisoned under Mao during the cultural revolution, and only after Mao's death were able to perform a coup, arrest the rest of the leaders of the cultural revolution, and let the Dengists out of jail, I'm not sure that's the case.


Chinese censorship is backed by the threat of arbitrary imprisonment or violence. In that case, it's not really the banning of the topic that's working, it's policing for compliance and de facto criminalization of defiance.


>Banning X communities does work to achieve the goal of people not talking about X.

Honestly I'm not sure that is even accurate; I would imagine it does the opposite. People are drawn to what's not allowed — it's even one of the morals of the Adam and Eve story. A pretty old but still relevant idea about human nature.

Here's an interesting story about Goldberger who defended the Neo Nazis in Skokie.

https://www.aclu.org/issues/free-speech/rights-protesters/sk...


From a former Communist country: whatever was banned (jokes about the Party or the Soviet Union, various conspiracies or whatever correct-but-undesirable information out there, such as the Chernobyl accident in the first days), spread like wildfire by "whispering channels".

People are really drawn to forbidden fruit.


You need to understand that the people who are saying we must ban misinformation are the party apparatchiks, trying to argue in good faith with them is pointless. The only reason to engage is to show the silent majority that they aren't crazy for disagreeing with those in power.


> Banning hate communities does work though

According to a single study. In general I don't think it actually does. Making it harder to find is sufficient, banning it outright just proves their point and pulls additional moderates to their cause.



