Banning hate communities does work though [0]. I assume the results are similar for Twitter and Facebook. "Hateful" Reddit or FB communities also don't allow "free speech". The moderators will ban people who go against the grain. There is no free exchange of ideas or debate: go against the grain and you'll simply be banned from that community (but not the platform). As such, the platforms either allow the hateful communities to exist in their 'safe space silo' or they de-platform them.
I'm not really sure why, exactly. I suspect it has to do with an NSFW tag on my account (that I didn't put there) that I'm afraid to turn off: the warning next to that button doesn't explain what "swearing" is exactly, and if I turn the tag off and write "fuck" I might get banned.
My account provides a lot of help to other people, it's highly valued by the community, I have loads of karma, and I've paid for quite a bit of server time through the awards I've received.
But you can't search on my username, it won't show up (not even if you have NSFW visibility turned on).
I'm not about politics, or covid or any hate stuff. I'm just there to try and help people with depression, anxiety and other issues. And I'm being censored for some reason.
So whatever they're doing, it's guaranteed to overreach, it's counterproductive, and who knows how much damage it does to society as a whole.
It's just that no one knows the severity of the damage the censorship does.
> I'm just there to try and help people with depression, anxiety and other issues.
Then it's most likely because those topics aren't "advertiser friendly." Youtube did something similar during what was called the "adpocalypse." Advertisers don't want their products to be associated with the things you talk about, and Reddit cares more about selling ads than helping you help people.
Computation, bandwidth and (open source) software is so cheap these days that it approaches free for many people (but not all). It's just a matter of coming up with a functioning model.
Ad supported is just one such model, and it's not a very new one at that. Remember, you made this comment on a server that was provided to you for free without the need of ad support.
> Remember, you made this comment on a server that was provided to you for free without the need of ad support.
HN has ads, just sneaky ones that blend in with user-submitted links on the front page ($YCStartup is Hiring, Launch HNs). On the positive side, they don't seem to rely on tracking to target any audience narrower than the whole HN userbase.
It's not without self interest. But that was kind of my point. So I'd say you and I are on the same page, I just used a different way to describe it and if that wasn't clear, that's probably because of how I wrote it. But thanks for giving me your feedback.
Reddit-like software is open source, what's stopping anyone like you then? Talk is cheap.
>Ad supported is just one such model, and it's not a very new one at that. Remember, you made this comment on a server that was provided to you for free without the need of ad support.
There are job ads on HN, plus HN is even more highly moderated than Reddit. So it's a very bad example.
>Reddit-like software is open source, what's stopping anyone like you then? Talk is cheap.
That source code hasn't been updated in years.
>There are job ads on HN, plus HN is even more highly moderated than Reddit. So it's a very bad example.
If you add meaning to the words of others that is not there, then indeed, the examples are bad.
To bring up the moderation here, you're supposed to be nice and interpret the words of others positively. Your reply is bordering on malicious:
>Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.
Oh, so it does cost effort and money to hire people to make a site like Reddit? I thought you said it was easy and cheap and approached free. So how does Reddit recoup that money?
>If you add meaning to the words of others that is not there, then indeed, the examples are bad.
>To bring up the moderation here, you're supposed to be nice and interpret the words of others positively. Your reply is bordering on malicious:
My point was that HN runs on ads and the financial goodwill of YC. If there's hate speech and misinformation on here then it would reflect badly on YC and YC backed companies. So we run into the exact same problem with Reddit and Youtube servers running with ad companies that don't want ads on a site with hate speech and misinformation.
You could set up or donate to such a site to prevent these influences, since you seem to think servers, bandwidth and maintenance are cheap. I think only talk is cheap; prove me wrong. What are your reasons for not creating such a site, since you feel so strongly about censorship?
It's not free, it's just not something you compensate with cash. Let's not pretend that HN does not derive value from the people using it; that's a key component of the business.
Reddit’s search is incredibly (purposefully?) broken. It won’t give you nsfw results for anything. You have to go to old.Reddit.com and use that search system. It has a checkbox for “include nsfw results” that is missing from the new ui. Check that and see if it helps you find yourself.
I don't really need to find myself, you know. I know where I am. The problem is more that others can't find me and don't even know that I'm missing. Maybe you're right about the reason, but even that's irrelevant.
It's scary. I happen to know this was done to me. How many times have others made you disappear, scrubbed without notice and without you ever knowing?
This isn't my first account on hackernews (not my second one either, though with one exception they are all in good standing).
But on my very first account I was happily participating here for years, until someone kindly told me I had been shadowbanned. I don't know how much of my voice was censored exactly, and I don't know why.
So the issue isn't reddit. And apparently it's not really about me. The problem is exactly what this article is about, private censorship. Web3 is supposed to address issues like this (though, I don't know how much of that is handwaving and make believe). As soon as I can, I'm going to move there.
I just can't trust these corporate entities, they don't operate in good faith, aren't transparent and avoid accountability for their actions. The utopia that was promised to me has failed and is in full decay, the signs are everywhere. It's time to move on.
It's sad that a "normal user" like me ended up with those beliefs.
I just wanted to be left alone and allowed to voice my opinions. I don't want to have to deal with censorship, or to have to tell others how I've been impacted by it to make them aware of what's happening behind their backs. Instead, here I am having a discussion like this with others.
Am I even allowed to say this? Will this be wiped as well? I don't know. I wish this was all a joke. What kind of dystopian future is this? It's ridiculous beyond belief.
I don't think I've ever seen an account with an NSFW tag/flair. Posts? Yeah. Subreddits? Yup. Possibly a flair for your user on a particular subreddit added by a mod for some reason? Sure. But not accounts. Can you link to this tag?
Oh yeah. Accounts have NSFW tags these days. Also impacts chat functionality. You'll get warnings there. They call it "profile" but your profile is 100% of your account activity.
It seems that there are a fair number of sex workers on reddit whose profiles are tagged NSFW. Presumably, they would view the tag as an asset.
(I don't have any experience with these accounts myself. But I've stumbled across more than one in mainstream subreddits, and my curiosity was piqued. Only pointing this out to note that this comment reflects the totality of my knowledge on the subject.)
Define 'works'. This subject is an easy one in which to conflate goals. The goals of the platform are different from the goals of the public, lawmakers, etc., which is what the article is about.
The goal of the platform is to pull in more users. Having less abusive language and users is a means to that end, as is appeasing general public outcries for more moderation.
The goal of the public and lawmakers is, hopefully, more effective and meaningful discourse.
The paper you link measures the former outcome, not the latter.
Edit: Just to comment on the outcome you are referring to though, I don't believe the bans actually led to more meaningful and effective discourse on Reddit. I don't even think they led to more civil discourse. You can click on literally any politically sensitive topic on the front page and you'll find that almost all the top comments both lean in a specific direction (diversity of discourse is terrible) and deride anyone who disagrees. Maybe they use the 'F' word less, and no one is using the 'N' word anymore, but it is very obvious that it has become a strongly homogenized platform that doesn't welcome nuanced opinions.
"More effective discourse" is probably far too aspirational a goal for lawmakers/government. It's also probably too subjective.
Less violence, on the other hand, is a clearer goal that aligns with existing laws against violence, so it has a firmer basis, in line with existing restrictions on speech. It also gives you a non-partisan measuring stick.
Private platforms, on the other hand, will have more flexibility in trying to maintain, say, discussion quality as a goal (see HN's stated moderation goals vs Reddit's). Or disinformation, or language/profanity, or whatever else a private moderator may choose.
The government telling those private parties that they can't set their own standards, though, seems like a particularly terrible direction to go.
It's no more subjective than 'less violence' provided the interested parties define measures for success. I mean, if the government can set goals for employment, then surely they can set them for civic engagement and disinformation.
I suspect that for reddit specifically, homogenous opinion and lack of meaningful discourse has more to do with the voting/karma system and its inability to scale well.
Nuance gets crowded out because posts and comments presenting polarized opinions are much more likely to get acted on by readers than their more nuanced counterparts are. Posting and voting becomes more about the dopamine hit from the gained karma and sense of being correct than it is about discussion.
This is why, while it still isn't perfect, one can often see higher-quality discussions in smaller niche subreddits where there's no critical mass to push dominant opinions. In those communities, posting extreme comments just makes one look like an attention-seeking troll.
So in my view, moderation or lack thereof is almost orthogonal to an online community's propensity for quality discourse. That seems more related to the designs of and incentives given by the software these communities run on.
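To illustrate the mechanism, here's a toy simulation of my own (not from any study; all numbers are made up) in which readers' likelihood of voting scales with how polarized a post is:

    # Toy model: 100 posts with random "polarization"; readers are more
    # likely to vote on polarized posts, and the front page sorts by score.
    import random

    random.seed(0)
    posts = [{"polarization": random.random(), "score": 0} for _ in range(100)]

    for _ in range(10_000):  # simulated reader interactions
        post = random.choice(posts)
        # Probability of voting grows with how provocative the post is.
        if random.random() < 0.1 + 0.9 * post["polarization"]:
            post["score"] += 1

    top = sorted(posts, key=lambda p: p["score"], reverse=True)[:10]
    print("mean polarization, top 10:", sum(p["polarization"] for p in top) / 10)
    print("mean polarization, all:", sum(p["polarization"] for p in posts) / 100)

Even in this crude model the top of the page ends up noticeably more polarized than the pool of posts as a whole, with no moderation anywhere in the loop.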
I agree with most of what you've said. I really only disagree with the last bit, as it ignores the network effects of moderation, and the fact that moderation applied by topic is necessarily polarizing along the lines of that topic.
I didn't intend to prove "bans cause homogenization", but rather cast doubt on "bans achieve the stated goal".
Sure it worked at reducing hate speech on reddit, but is that the right objective? It's not implausible to think that the resentment from such local measures could cause hate to increase in a more global sense. Kind of like entropy in thermodynamics: local work can reduce local entropy, but at the expense of causing a global increase in entropy.
If the real objective should be a global decrease in hate, then maybe local suppression/exile might not be the right mechanism.
I suppose to the extent these channels of communication recruit and radicalise, if they are shut down they will not be able to recruit and radicalise. The act of shutting them down might infuriate regular users of these channels, but they’re already radicalised anyway.
What you end up with is recruitment. If the user remains on Reddit or Twitter they're exposed to the gamut of human thought. However extreme they may be, they're still attached; there may exist some sort of analogue of a cleavage furrow, but nonetheless the cell remains. It's only at the point where you've so alienated them and shorn their attachment to the whole that they become a fully independent entity, and that they become truly radicalized. And having observed this, I can say with sincerity that they move deeper into the domain of extremity.
This study was not really science. They interviewed a few people and then applied their own qualitative analysis to conclude that people cannot be self radicalized fully online. Since they published the study, there have been several major terrorist attacks perpetrated by individuals fully radicalized online.
I see. Your source is moral panicking in the news and journalists' psychic ability to determine that dangerous radicalism is on the rise without ever bothering with any sort of data collection. Who could argue with that?
That research was published by a libertarian think tank in 2013. Since then, there have been numerous examples of lone wolf terrorist attacks where the perpetrators appeared to have no offline contacts with extremist groups. See: New Zealand shooting, Pittsburgh shooting, etc.
As I said elsewhere in this thread (and linked to an article on similar topic), success isn't measured by sheer numbers of lukewarm sympathisers alone.
It is not as if George Washington or the Civil Rights Movement recruited masses of followers online. They started with relatively small but cohesive units. Yes, it took them longer than it takes today to assemble a flash mob. But the efficiency measured by capability was higher than today.
For an example, if I were a German cop, I would be much more concerned about the mostly-offline groups such as the Reichsburger or the Grey Wolves (Turkish nationalists). They linger around for decades and are hard to penetrate, unlike whatever wild crowd you can put together on Facebook. That Facebook group will look scary, but it won't have much pull after 5 years, because it competes with a ton of other Facebook groups for attention.
Even so, it makes very little sense to provide a platform to these hardcore organizations to continue to recruit from. The dismantling of the old KKK in the United States is a great success story to work off of.
Again: yes, it does not scale as well. But scale to what? Silicon Valley is generally obsessed with quick growth without thinking about long term impact. But politics is generally long term.
It is easy to put together a temporary, unstable mob of people in virtual reality. But real political impact of such mobs tends to be superficial.
Impactful organizations such as, well, the Taliban, sacrifice rapid growth and substitute it with higher-quality personal connections.
The hate is in part generated by a continuous bubbled feedback loop. Cutting out the source(s) usually doesn’t lead to a redirection and rather has a chilling effect.
As poetic as free speech is, the 1700s were a different time. It was hard to SWAT a rando a town away, let alone a country away. Now a nation state can instigate localized terrorism.
Thomas Jefferson understood laws and constitutions must change in step with human discovery.
Scalia wrote laws are not open ended, and up for public romantic interpretation.
To paraphrase both: many Founders wrote of the need for free speech in official political proceedings, never considering the privilege extending to the public. In public, people were welcome to get a beating if they preferred to violate local order.
Humans are biological animals first, gentle only by training.
If you read the actual research provided by the OP, it shows way more than that. The measures did not only crowd out hate speech; the study followed individual users and noted that they moderated and changed their behavior once they moved to other communities after the more extremist ones were banned. They individually adapted.
That implies that there's a positive effect even on the individual level, not just a silencing effect. Which seems completely in line with, say, busting a cult in real life.
I can say that the local subreddit is one of the most heavily policed, with the result that during the covid pandemic a large fraction of the casual posters were banned because they couldn't keep track of which new rule was introduced on which day.
A lot of people moved over to telegram channels because you could actually ask questions without getting banned for concern trolling, misinformation or incitement - e.g. asking where protests were happening which is what got me banned finally. Ironically I was asking where they were happening so I could avoid them.
The result of that policing is that I now have a telegram account and regularly scan a dozen right wing channels so I know if I can buy groceries without getting tear gassed.
If this is what winning looks like for the left we don't need help in losing.
This paper is trash. On defining hate speech, "we focus on textual content that is distinctively characteristic of these forums", so, yes, banning specific subreddits resulted in less of that content on Reddit overall. Did it make Reddit a more friendly place? Probably not, as tensions have never been higher.
> Did it make Reddit a more friendly place? Probably not, as tensions have never been higher.
Did allowing hate speech make 8chan/Voat friendlier? There are plenty of unfriendly communities that lack overt racism/fatphobia, but I can't think of any friendly ones that do.
8chan was a sacrificial site to distract the most fervent internet crusaders. It worked pretty well; I believe it still exists in some form, but it is out of people's minds.
These things do help. Most members of these communities get radicalized by virtue of other content they’re already browsing. If you remove the communities and force individuals to leap from Facebook to some no-name forum, they’re far less likely to engage.
This phenomenon is mirrored elsewhere as well, where small interventions can lead to big impacts: see the drop in suicide rates in Britain after carbon monoxide was removed from household gas. Initial friction (or "means reduction", in that research) can drive change.
I'm not convinced we should think it's acceptable for you or I to decide who does and does not get exposed to information that will 'radicalize' them because you and I may have wildly different opinions of what constitutes a radical opinion. Humans sometimes develop stupid opinions, and that is OK. They're allowed to do that. They aren't allowed to act on those opinions in a way that constitutes a crime, which is why we go through the trouble of defining crimes in the first place. 'Precrime' thought policing seems like a pretty dangerous road to go down and one that is full of unintended consequences.
I define a radical group as some collection that seems to be consistently peddling false material for its own ends, with an overly provoking tilt that makes it easy to go viral. Some specifics from the past year would be vaccine misinformation, prejudice that incites racial violence in Myanmar, and organizing an occupation of the Capitol in the US. Shouldn't we put a stop to these communities if they sway people to start peddling this information themselves?
At least in the US, free speech is most free within public forums. But even then we already define some speech as too dangerous if it's known to lead to poor outcomes. You can't yell fire in a crowded room, not because yelling fire is inherently a crime, but because it's going to incite people to detrimental ends.
Plus social networks don't have a constitutional obligation to be town squares where free speech can spread unfettered. You have to draw a line somewhere. And as the people who write the algorithms that can amplify or bury content on these networks, I think it _is_ our obligation to at least set parameters on what constitutes good/healthy content interactions on these social platforms and what doesn't. The ML algorithms have to optimize over some loss function.
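To make that last point concrete, here's a purely hypothetical sketch; every name and weight below is invented, just to illustrate what "optimizing over a loss function" with an explicit health parameter could look like:

    # Hypothetical ranking score: trade raw engagement off against a
    # moderation signal. risk_weight = 0 is a pure engagement optimizer;
    # raising it buries content the (assumed) risk classifier flags.
    def rank_score(predicted_engagement: float,
                   predicted_policy_risk: float,
                   risk_weight: float = 2.0) -> float:
        return predicted_engagement - risk_weight * predicted_policy_risk

The point is only that the trade-off is a parameter somebody chooses; there is no neutral setting.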
Not that I disagree with you, but "can't yell fire in a crowded room" is slightly misconstrued, as those aren't the original words from the U.S. Supreme Court case. [0]
Additionally, the idea of 'clear and present danger' has been modified in the 100 years since that court case. The Supreme Court has since stated:
"The government cannot punish inflammatory speech unless that speech is 'directed to inciting or producing imminent lawless action and is likely to incite or produce such action'". The definition has changed and depends on the situation, i.e. whether some action is imminent or "at some ambiguous future date". [1]
I think it already has been abused, the law has nothing to do with it.
Just don't venture to places where that is not the case and we can agree. We have different requirements for interesting platforms and should keep them separate for the most part.
To be clear, the study found that platforms banning topics succeeds in removing those topics from the platform - not necessarily from society as a whole. The study did not conclude that banning hateful communities off of reddit actually made those users less hateful or curbed the spread of hateful content online. If each of those banned users subsequently posted twice as much hateful content on a different platform, that still comports with the conclusions of that study.
there's at least one major reason to be skeptical of that result, and they mention it in their limitations section.
what they did was create a lexicon of hate speech terms from two large subs that were banned. they then counted the frequency of those terms in other subs after the ban. they found that usage of those terms dropped substantially, and concluded that the bans were effective at reducing hate speech.
if you're familiar with the dynamics of these sorts of subs, the problem with this approach should be fairly obvious. these subs tend to develop their own set of specific terms/memes (hate-related or otherwise). it may be the case that the bans were effective at reducing hate speech across the whole site. but it's also possible that the same people are still posting the same stuff coded differently. this study is far from the final word on the matter.
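to make that concrete, here's a rough sketch of the measurement the paper describes (my reconstruction, not the authors' code; `lexicon` and `comments` are placeholders):

    # rough reconstruction of the paper's measurement, not the authors' code.
    # lexicon  = terms derived from the two banned subreddits;
    # comments = iterable of comment texts from some other subreddit.
    import re

    def hate_term_rate(comments, lexicon):
        """lexicon-term frequency per 1,000 words."""
        hits = total = 0
        for text in comments:
            words = re.findall(r"[a-z']+", text.lower())
            total += len(words)
            hits += sum(w in lexicon for w in words)
        return 1000.0 * hits / max(total, 1)

the fixed lexicon is exactly the weak point: comparing hate_term_rate(pre_ban, lexicon) with hate_term_rate(post_ban, lexicon), a drop only proves less use of those exact terms, not less hate speech if the same content gets re-coded with new slang.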
Yes, reforming users who already want to engage in hate speech is not the goal; isolation is. Look at pre-internet times: certain forms of radicalization were far less common than they are today because they weren't very present in the in-person community (while those that WERE geographically concentrated, like racism in certain American communities, still spread in those places).
When the status quo is bad, "maybe that won't work" is not an effective argument against taking action anyway if you don't have a better idea. If we don't take action, we already know we're going to (continue to) have a bad result!
The exact same is true for political communities that aren't declared hate communities. So no, banning hate does not work. It does work if you define "work" as this study did, which has problems to say the least.
Today Reddit is even more hateful than before, although that is a subjective measure; 5-10 years ago there were at least standards against wishing death on people you disagree with, for the most part. There were exceptions, but today it is very common.
Neither of your assertions is related to what is being discussed. Nuclear bombs "work" and bad people would do horrible things with them, but that doesn't say anything about whether we should use them.
If all "legitimate" platforms ban hate speech, then users wanting to engage in hate speech will all go to some platform that radically allows all speech with disproportionately this undesirable speech. They will intermingle disproportionately with those spreading sexual abuse images, drugs, insurgent propaganda and instructional material, and other undesirable material. Facebook likely makes the problem worse by forcing these "hate speech" and "disinformation users" to be completely surrounded by people with repulsive content, instead of having their repulsive content critiqued and shamed by other users.
Having people with bad, hateful ideas out in the open is, I would argue, preferable to concentrating all those bad thoughts together among people who will reinforce that they are normal.
>"Hateful" Reddit or FB communities also don't allow "free speech". The moderators will ban people who go against the grain.
Which is fine I think, but why have Reddit or FB do the censoring (aside from things that are outright illegal). I don't much care if a bunch of Nazis or tankies are busy planning world domination on Reddit while sharing recipes. Why do you care?
And when you ban misinformation on your platform, that "anti-vaxxer" instead goes to 8chan, which by your argument might inspire them to shoot people up.
But you want Reddit to literally adopt 8chan's moderation policy, meaning that Reddit now will become the place that inspires mass shooters instead of 8chan (which, by the way, is no longer a place that inspires mass shootings, since it was killed by Cloudflare after the last one and replaced with the impotent and unpopular 8kun).
>Reddit now will become the place that inspires mass shooters instead of 8chan
I can guarantee that there are plenty of evil doings on Reddit and Facebook.
One argument, and a more honest one, that people can make is that (a) social media is toxic and (b) it should be made illegal generally. Bingo bango, no mass shootings I guess.
>But you want Reddit to literally adopt 8chan's moderation policy,
I want reddit to adopt the public square's policy of allowing any content that isn't illegal, which also happens to be pretty much synonymous with 8chan's policy.
Do you consider the town square (which has the policy of allowing content that is not illegal) a center of inspiration for mass shootings? Could the fact that the public square is not viewed as a place of inspiration for mass shootings have anything to do with the integration of many ideas, and the fact that someone bringing bad ideas might actually be challenged in an environment where they are exposed to the general ideas of the community rather than an echo chamber of fellow nazis or whatever?
The nazi hall may have the same moderation policy as the town square; that doesn't mean I expect the same inspirations to come out of the nazi hall. The issue with the nazi hall is the powder keg of people reinforcing bad ideas, whereas a nazi in a more "normal" place like the public square might have some chance of being shamed or convinced that their anti-social ideas are undesirable (despite the nazi hall and public square having the same moderation policy). I don't want to shove more people into the nazi hall by banning them from the public square (especially when they're only being banned from the square because they have unconventional views on vaccines).
---------------
In the censor's world, the people with undesirable ideas in the public square are kicked into 8chan where instead of their ideas being challenged they all end up in a self reinforcing chamber. The proportional amount of people wanting a mass shooting may be tenfold that in the public square, leading to more compressed exposure including by other people who were originally just anti-vax or whatever.
And the people running the public square turn around and say "see, 8chan allows any ideas, and that's what happens when you do that!"
"Do you consider the town square a centre of inspiration for mass shootings"
Before the internet, yes, definitely. Maybe not mass shootings specifically, because that seems to be a recent fashion trend after Columbine, but violent extremism in general. How do you think Hitler managed to secure over 40 percent of the democratic vote in the early 1930s? How did Osama Bin Laden recruit extremists who were willing to put a bomb into the WTC basement? Propaganda, speech.
This idea that unfettered speech in the public town square, even if it isn't directly inciting violence, can't lead to pathological outcomes just doesn't hold up.
This isn't even an argument for government censorship. It's merely me recognizing that these type of outcomes can come about.
Nowadays almost all extremist speech is online, because that's where there is distribution and anonymity, so the analogy breaks down.
"where they are exposed to the general ideas of the community rather than an echo chamber"
This isn't a bad argument, but you have to balance it off with the knowledge that ideas are highly, highly contagious. On balance, I think giving such ideas distribution to a billion eyes is far more harmful than pushing a fringe into echo chambers which already existed before social media censorship began anyway (such as the Stormfront forum).
Moreover you have to recognize that these isolated echo chambers would naturally self-segregate on Reddit if given free rein, and so in practice you haven't changed anything aside from giving these ideas more distribution. It's not like /r/88 or whatever would be interacting with the rest of Reddit thus helping their members deradicalize.
I appreciate your honesty in believing the public square is a center of inspiration for mass shootings.
I believe quite the opposite. It has been a place for the public to plan self defense, both to organize themselves in defense from natural disaster, hostile forces, wildfires, and anyone who seeks to do them harm. It is a place for the public to engage in the marketplace of ideas and inspirations, which ultimately leads to the saving of lives, prosperity, security, and bonding of the populace. Harmful ideas can be shamed and those espousing bad ideas have a chance of learning the holes in their ideas. The mass shooter espousing violent ideas in the public square is as likely to have alerted his neighbor to be alert for any evidence of crime, as he is to convince the general populace of his nutjob ideas.
I don't buy your hypothesis that Hitler came to power because of free speech, and quite frankly it is laughable to think banning Hitler from Reddit (were it to exist in his day) would have had any effect whatsoever. You seem quite ignorant of the factors precipitating Nazism, including the economic situation of Germany at that time. It's also worth noting that Hitler was quick to stifle certain speech that went against his ideas, meaning he found free speech at odds with, or even dangerous to, Nazism.
---------
>How did Osama Bin Laden recruit extremists who were willing to put a bomb into the WTC basement? Propaganda, speech.
Bin Laden attempted to blow up the WTC basement with bombs, not free speech. Bin Laden lived in Muslim nations with more limited speech regulations than Reddit.
>Moreover you have to recognize that these isolated echo chambers would naturally self-segregate on Reddit if given free rein, and so in practice you haven't changed anything aside from giving these ideas more distribution. It's not like /r/88 or whatever would be interacting with the rest of Reddit thus helping their members deradicalize.
Some may, some may not. I've stopped using reddit because I was banned because I simply said things like I didn't believe forcefully shutting down a restaurant is an appropriate way to deal with coronavirus. Now maybe that is a very wrong and bad idea, but I'm willing to debate with others on it and learn their perspectives. Instead these communities said fuck you, you're banned, and now you have to go to some echo-chamber where everyone agrees with it. I'm not interested in an echo chamber, I'm interested in engaging with others so my bad ideas can be brought to light and shown to be bad, or my good ideas can be integrated. Your argument sounds more like one against having subreddits.
Hitler convinced almost half the country to vote for him with speech that drummed up resentments stemming from the Versailles Treaty and the depression, channeling and anthropomorphizing those resentments toward Jews, the Lügenpresse, the military establishment, and so on. So you've missed my point, which is that town square offline speech can directly cause pathological outcomes when it is weaponized by bad faith actors.
The belief that sunlight is the best disinfectant is nothing more than empty sloganeering and it flies in the face of everything we know about social contagion and the willingness of humans to be led astray by tribal hatred.
Town square offline speech didn't lead specifically to mass shootings historically only because this particular medium of terrorism is a modern fashion trend, so it follows that it's a phenomenon that's going to be motivated online more than offline in the modern context.
You're trying to draw analogies between modern technology and the old town square. You should stop doing that because instant distribution to a billion people isn't the same thing as a speech to a thousand.
I provided examples of speech in the old town square leading to pathological outcomes, but we are in a very different regime now and analogizing too much isn't helpful.
So who should decide what moderation policies we have for the public? The general populace, who as you say would elect literally Hitler? The government itself, of which Hitler was once a part and which used these very moderation mechanisms to suppress the Jews? The tyranny of a minority of special moderators, as a nominally communist censor committee might have? We allow Nazi speech to exist precisely because we don't want the government or the tyranny of the majority or minority choosing what political speech is allowed, such as outlawing speech that doesn't promote Nazism.
>You're trying to draw analogies between modern technology and the old town square.
No I'm trying to find out how you want to apply moderation strategies to "reduce the likelihood" (my apologies if I misquoted your deleted comment) of democratic election of those who some censors decide have the wrong political views or speech.
>You should stop doing that because instant distribution to a billion people isn't the same thing as a speech to a thousand.
Are you also one of those that thinks the first amendment doesn't apply to the internet because the founders never imagined something that distributes so much faster than the printing press could exist? I know this is a straw man but I can't help but think this is where this is leading.
>And your argument is that if the venues hosting Hitler's speeches had Reddit's moderation policies then Hitler would not have been elected?
The fact that you didn't answer this question (well, you did, but you deleted it) really is a damning answer in itself.
> So who should decide what moderation policies we have for the public?
There are three possibilities:
(1) No moderation at all, beyond what's illegal.
(2) Private voluntary self-regulation.
(3) Government censorship.
In my opinion, (2) is the lesser evil, which isn't to say that it doesn't have its own pitfalls. (1) is infeasible due to the 8chan experience, and our understanding of social contagion and human tribalism. (3) has a much bigger slippery slope risk.
> The fact that you didn't answer this question
I deleted my answer because these analogies are too tenuous. You're trying to compare modern social media with how information spread 90 years ago. How can I map "Reddit's moderation policies" onto 1920s beer halls and Der Sturmer and newspapers? You can't do it. We're in a new regime and we need to reason about this new regime from first principles.
We're in agreement, although I might add that (2) is essentially the same as the censorship policy in the Weimar Republic under which Hitler was elected, where public censorship was nominally and constitutionally illegal [1] (except in narrow circumstances, such as anti-Semitic expression) and any censorship was essentially relegated to private and/or voluntary regulation.
> How can I map "Reddit's moderation policies" onto 1920s beer halls and Der Sturmer and newspapers?
The same way the first amendment is applied to both beer halls and the internet. There's not a single rule in Reddit's content policy that cannot be applied to a beer hall [0]. If you fail to find a way to apply these rules you're either not putting in any effort or you're a lot dumber than you sound (methinks the former).
Given that what you advocate for is virtually identical to the policy under the Weimar Republic, I assert your chosen policies would have had little to no effect on the election of Hitler.
"My understanding is that pre-Nazi Germany had hate speech laws, and it didn't seem to work there?"
I abandoned my views on this question for a few reasons:
- The Weimar Republic laws either weren't effective at preventing distribution or weren't actually enforced. The continued circulation of Der Sturmer is evidence of this. The judiciary was known to be heavily biased in favor of the far right: less than 10% of far-right political killers were convicted, while the majority of far-left political killers were convicted.
- Online censorship is far less likely to create martyrs than the visual/emotional imagery of imprisoning people.
- Online censorship is far more effective at preventing distribution.
- Failing to censor online leads to automatic mass-distribution due to the consolidation of eyeballs in a small number of venues. Failing to censor offline does not. There is less scale to be had offline.
- Online censorship that we're talking about is private and voluntary. It is not in the same category as government censorship as far as downside risk is concerned.
> Given that what you advocate for is virtually identical
It is not "virtually identical". As I've said, the context is extremely different. You can't draw an analogy as much as you keep trying.
>I know that the Weimar Republic had censorship laws. I made almost the same argument that you are making just 2 months ago:
What? The anti-Semitic expression crime thing is a fact, not an argument (I am against censorship laws!). I was honestly completely knocked cold that you came to the conclusion I was making your argument. The takeaway isn't that hate speech laws work, it's that they don't. I'm pro hate speech and anti-censorship. I don't like it, but I'm pro allowing it. I'm making your counter argument. In fact you seem to be listing many of the reasons why hate speech laws don't prevent Nazism.
>I abandoned my views on this question for a few reasons:
I'm surprised you chose not to learn from your responses and realize the folly of restricting "hate speech." It doesn't seem you abandoned anything; it seems you doubled down.
>It is not "virtually identical". As I've said, the context is extremely different. You can't draw an analogy as much as you keep trying.
You can bury your head in the sand if you like, but no matter how hard you try to think they aren't virtually identical, they still will be. What you advocate is extremely similar, and you are oblivious to and disconnected from the reality of the similarity between our current censorship laws and those of the Weimar Republic. I'm not drawing an analogy; I'm saying you are literally advocating for the policy of the Weimar Republic under which Hitler was elected, with only the slightest of differences (their laws were ever so slightly more restrictive due to some spottily enforced hate speech laws). The Weimar Republic's policy was literally free speech, sans some poorly enforced hate speech laws, plus private and/or voluntary censorship, which is your option (2). In fact your precise option (2) is free speech plus private/voluntary regulation, and you admitted that Weimar's hate speech laws were essentially useless.
The internet is just another medium of communication. That's it. You said yourself Hitler reached almost half of voters with his speech. That's probably greater voter penetration than even what Reddit reaches. You make some arguments for why hate speech laws weren't very effective against those speeches, but then you think they will be even more effective against something with even lower voter distribution than the speeches that, you say, went out to almost half of voters.
>Having said that, it's true that for some people no amount of reasoning or persuasion will work
Some people are their own soothsayer. Have fun in your censored future, insulated from reality and the opinions of others, left to the discretion of whatever "private" entity decides which truths are allowed.
"I was honestly completely knocked cold fthat you came to the conclusion I was making your argument. The takeaway isn't that hate speech laws work, it's that they don't."
You've misunderstood. I was previously arguing that they don't work, not that they do work. Read the old post of mine that I linked.
"What you advocate is extremely similar and you are oblivious and disconnected to the reality of the similarity between our current censorship laws and those of the Weimar Republic."
I outlined the reasons why these are different situations which you haven't addressed in your reply.
The sentence is incomplete. Banning X communities does work to achieve the goal of people not talking about X. I don't think that your linked study is really necessary, the Chinese cultural revolution worked really well (to achieve the goal of "preserving Chinese communism") [0]. Imagine if 30 years ago large digital monopolies banned what was considered unmentionable back then. I doubt gay marriage would have been legalised in America. All of the progress that we have in making marijuana more legal would have been terminated by the companies wanting to prevent people advocating illegal drug usage.
Homosexuality in general was effectively banned by dominant platforms in the US for quite some time! It was a BIG DEAL to people when gay characters started becoming more common in mass media.
But note the order it happened. The privately-set standards moved with the times much faster than the government ones did!
Writers/tv execs/etc heard both the pro-equality arguments as well as the anti-homosexuality arguments and made their own choices as they were persuaded to. Many states, on the other hand, never legalized gay marriage before the court decision overruled their laws.
So that seems to show that we should empower private parties to have control over what their platform shows, over either the government or just the loudest mobs (there were MANY protests/boycott threats/etc from religious groups over this). The market gives private parties the advantage over the government here: the private publisher can test what sells, and over time is going to be increasingly forced to move with societal changes, while the government is much more likely to be captive to small-but-loud constituencies (especially in a gerrymandered world).
> the Chinese cultural revolution worked really well
Given that the lineage in power now (the Dengists) were imprisoned under Mao during the cultural revolution, and only after Mao's death were able to perform a coup, arrest the rest of the leaders of the cultural revolution, and let the Dengists out of jail, I'm not sure that's the case.
Chinese censorship is backed by the threat of arbitrary imprisonment or violence. In that case, it's not really the banning of the topic that's working, it's policing for compliance and de facto criminalization of defiance.
>Banning X communities does work to achieve the goal of people not talking about X.
Honestly, I'm not sure that is even accurate; I would imagine it does the opposite. People are drawn to what's not allowed; it's even one of the morals of the Adam and Eve story. A pretty old but still relevant idea of human nature.
Here's an interesting story about Goldberger who defended the Neo Nazis in Skokie.
From a former Communist country: whatever was banned (jokes about the Party or the Soviet Union, various conspiracies or whatever correct-but-undesirable information out there, such as the Chernobyl accident in the first days), spread like wildfire by "whispering channels".
You need to understand that the people who are saying we must ban misinformation are the party apparatchiks, trying to argue in good faith with them is pointless. The only reason to engage is to show the silent majority that they aren't crazy for disagreeing with those in power.
According to a single study. In general I don't think it actually does. Making it harder to find is sufficient, banning it outright just proves their point and pulls additional moderates to their cause.
Some of the comments in this thread seem intentionally bad-faith or ignorant to how much hate and abuse actually exists on these platforms. The Internet fucking sucks. I stopped using social media because I'm trans and I don't feel like there's a place for me online anymore. No matter where I go, including large platforms like Reddit and Twitter, I'm inevitably subjected to someone expressing their grievances about trans people or the LGBTQ community. There's a part of me that wants to reply and give my perspective, but it's like I can't even have a voice online without the fear of people belittling and harassing me, sending me abusive messages, trying to doxx me, telling me to kill myself or scouring through my profile to try to find whatever they can add to their "cringe compilation" or Kiwifarms thread about how degenerate and disgusting trans people are. My mental health is more important than participating in the shitfest that is online discourse so I just avoid it. I post on Hacker News and a few other places where people are generally respectful, but other than that I've given up on having conversations with strangers online.
I'm an artist and a software dev, I have a lot that I want to share with the world but I don't think I'll ever get the chance to. This world is cruel and these online platforms and social media algorithms amplify that to the point where it feels like the only way to win the game is to not play. Personally, I don't feel one way or the other about online censorship at this point. I think social media has ironically ushered in a culture of anti-social behavior and maybe it's time to move on to something else.
FWIW r/actuallesbians is extremely supportive towards trans women. (I don't know subreddits for trans men) But even then, abusive PMs are still a problem.
LGBT has become political, not to the preference of a lot of its members. The flag has become a political instrument for signaling. Abused, I guess.
Most of the people I see making LGBT “political” are people outside of our community who feel the need to constantly tell trans people how they feel about us - our pronouns, what gendered bathroom we should use, who we should be able to compete against athletically, whether we were lying as kids, whether or not they would date us. Like ugh, stop. Then you have the daily /r/Cringetopia or /r/HolUp thread that makes it to the front page when the only point of the thread is for grown ass men to take the piss out of some random gay teen on TikTok or no-follower account talking about pronouns on Twitter. What do we have? Selfie threads on /r/lgbt? Threads about how to come out to your parents on /r/asktransgender? Are you serious right now?
The harassment doesn’t even stop when these trolls and communities are banned. They will literally mass create fake accounts with hateful usernames and follow trans users in an attempt to harass them via the notification feed [0]. It’s scary when this happens to you and I think it’s the reason why you mostly see activist types from the trans community on these platforms, because they’re the only people who are willing to put up with this.
That is one side, but the more relevant one is that you see the flag used to virtue signal. Even arms manufacturers fly it to signal their alleged values. The consequences for LGBT people in nations not really too friendly with Western countries are obvious, and you need to account for reactionaries and maybe think beyond your bottom line for governmental contracts.
The irony of private platforms censoring hate groups (for their own PR I might add; let's not forget that YouTube's algorithm famously favors alt-right content because weirdly enough algorithms that favor "engagement" tend to end up favoring lowest common denominator populist reactionary politics) is that it 1). Legitimizes the claims of groups that otherwise might sound like delusional paranoid conspiracy theorists, and 2). Forces these groups to adopt decentralized alternatives that make them even harder to do anything about. Tech companies engaging in performative content moderation is really just creating selection pressure that will ultimately create more resilient, technically competent hate groups that will also have an increasingly legitimate case to make for being persecuted by powerful organizations.
It might, but that's a bit like saying "fixing security holes will just make hackers resort to trickier tricks to crack systems."
Quite possibly. And we want to put the burden of labor on them to do so, instead of operating in a world where the rule is "Don't use the internet; hackers will steal all your data."
2020-2021 has shown me that most people happen to be 'fair weather fans' of civil rights. Everyone claims to love free speech but when suppression of speech opposing their own worldview comes into play, they suddenly dither and fall back on other rationalizations to justify the censorship.
Well, it's not free speech being violated when it's 'hate speech'.
Well, it's not free speech being violated when it's 'misinformation'.
Well, it's not free speech being violated when it's a private company doing the censorship. Et cetera. I'm sure you've seen your own examples of this.
It's pretty disheartening, but enlightening nonetheless. I have a much better understanding of historical moral panics and cessions of freedoms. Whereas I used to wonder how some societies ever gave into such pressures, I now realize it's not that hard to persuade the average citizen into accepting such things.
What free speech advocates ignore here is that even they generally draw a line somewhere. Death threats, defamation, pedophilia, sharing bomb making materials etc are usually accepted by everyone to not be acceptable.
'But those are different' is usually the argument here. But why though? Because they cause harm? Doesn't inciting racial hatred cause harm?
Once we stop arguing over the issue being black/white and instead discuss _where exactly_ we draw the line, then I think we are finally having a far more honest discussion. Whether some speech is illegal or not shouldn't be the deciding factor in whether someone (or some company) needs to platform that view. That's leaving it to governments to decide what is ok and what isn't; instead, leave it to society to choose not to propagate hateful and distasteful messages.
I also find it frustrating that some armchair psychologists have decided that people like me with more nuanced views simply want to repress and censor things that we disagree with which is not what it's about at all.
>I also find it frustrating that some armchair psychologists have decided that people like me with more nuanced views simply want to repress and censor things that we disagree with which is not what it's about at all.
Well, you've got to draw the line somewhere, right? /s
But seriously, I think one can be a free speech advocate without being a free speech absolutist, and still believe that we are heading in the wrong direction. In fact I think it's wrong to think of 'where to draw the line' at all, because what is acceptable discourse is not, and should not be, thought of as static.
I think a much better question is how 'the line' is shifting, as the amount of things we can't talk about is both a lagging indicator of institutional health (because being able to have uncomfortable conversations is a sign of emotional maturity), and a leading one (because public discourse is necessary to solving problems we don't understand or would rather not acknowledge).
US law defines the line as speech that incites imminent lawless action. The speech must not only encourage such action; the action must be imminent and likely. This is, in practice, a pretty good line.
If you think that private companies should censor much more heavily, then you obviously don't really believe in free speech. There's no obvious reason why it's more acceptable because all the dominant communication platforms censor speech vs the government doing it.
When all your friends talk on platforms like Twitter and Facebook, you cannot act like censorship through these platforms does not negatively affect the content of discourse available to the public. Especially in times of a forced lockdown, such as we were recently subjected to with the "quarantine".
> What free speech advocates ignore here is that even they generally draw a line somewhere. Death threats, defamation, pedophilia, sharing bomb making materials etc are usually accepted by everyone to not be acceptable.
Assuming we're talking about the abstract concept of free speech (as opposed to 1A, which dictates what the US government is allowed to censor), the "line" is around expression of ideas. A death threat isn't "expressing an idea", it's coercing someone with violence. Similarly "harassment" which you didn't mention, falls out of bounds of speech because it violates another's right of association (you have the right to speak, but you can't force me to listen). Pedophilia obviously isn't an expression of an idea, although pedophilic advocacy, while repugnant, is still in bounds of free speech by definition.
Whether platforms are obliged to adhere to a "free speech" standard is a different question. Personally, I think so much of our speech is flowing through a handful of these large platforms that they are the de facto public square, and should be regulated accordingly or broken up. Even if they could articulate a clear moderation policy and enforce it fairly, simply having that much power to determine who sees what content for so many citizens is concerning. Even if you're a liberal or progressive and thus largely enjoy the alleged bias in platforms' moderation policies/enforcement, recall that in 2016-2017 we were virtually certain that Russia was manipulating Twitter to influence the US presidential election--if you believe Russia can indirectly influence our elections via Twitter, then it necessarily follows that Twitter can directly influence our elections and surely that's too much power to give to a corporation.
> 'But those are different' is usually the argument here. But why though? Because they cause harm? Doesn't inciting racial hatred cause harm?
I think the issue is that many don't trust platforms to enforce their own policies consistently. On Twitter for example, one gets the impression that it's okay to incite racial hatred toward whites, Asians, Jews, and even "ideologically diverse" blacks/etc, which is to say that the policy is neutral but the enforcement is biased--and that biased enforcement constitutes harm. Of course, a racist might respond "Good, we should punish whites, Asians, and Jews for their race because historically other whites, Asians, and Jews have enjoyed various advantages because of their race", but presumably the goal is to minimize racism.
Defining free speech is easy; adjudicating it is sometimes hard. In this case, a threat requires the intent to compel (note that compulsion != persuasion). Determining whether "I wonder if the world would be a better place if this man did not exist" is intended to compel or not is harder.
But most free speech absolutists will be pretty content if we get to a point where the thrust of the free speech debate concerns itself with outlier cases like this one (rather than "is it 'hate speech' to criticize woke excesses?" or "to use a Chinese word that sounds vaguely like an English racial slur?").
> But most free speech absolutists will be pretty content if we get to a point where the thrust of the free speech debate concerns itself with outlier cases like this one (rather than "is it 'hate speech' to criticize woke excesses?" or "to use a Chinese word that sounds vaguely like an English racial slur?").
In your absolutist world how do you stop the trolls on the current incarnation of social media from flooding the medium with references to these outlier cases until it triggers censorship?
When those same trolls continue pentesting the medium until they trigger censorship on less direct references, you're going to be left with examples functionally equivalent to the ones you are comparing to above.
Don't move the moderation line and you won't have that slippery slope problem. At the boundary, use judgment. The law has the same problem, and yet we still have tremendous speech liberties.
To be clear, I think moderating small communities is fine, but planet-scale social networks are de facto public squares.
I agree with this line of reasoning, but then we have to ask the question: are platforms obligated to host all speech that is not illegal (in whatever jurisdiction they reside in)? Should they be obligated? If I as a private citizen decide to create a platform to host discourse, do I have the freedom to decide what is permitted there?
If the answers to the followup questions are "yes", then we arrive at the status quo, where most of the big platforms heavily censor content. This means that if you want to say something that'd be censored, you have the right to do so, but you don't have the means. I suppose you can walk out into the street and say it to the people there, but there's a certain lack of reach in that approach :)
So there's also a practical aspect, where you may have full freedom of speech by law, and yet in reality you have no freedom because nobody will give you the possibility to actually communicate what you want to say. You may try to build your own platform, but then you run into second order problems where you'll find that no service provider will want to host your servers.
At some point, you may hit a barrier where you have no monetary means to build all the infrastructure necessary to be able to provide a truly free speech shelter.
I believe that common carrier status applies to platforms, not just the infrastructure of wires. These platforms could not exist without the large privilege given to them of immunity for the illegal content they serve up.
The problem is that these platforms have it both ways. They can censor entire political parties, yet play dumb and cry immunity ("we're just a platform") when literal illegal content makes its way to their public hosting.
Once they censor based on content, they should lose that immunity and be considered a publisher of that content. They're no longer a passthrough; they're now actively working to manipulate opinions.
I hear this argument quite a bit, and I wonder if you have ever tried to use a discussion group that has been overrun with spam? Because that’s what you will get.
The reason why platforms are expected to host even speech they don't agree with is the same reason why some bakers are forced to make cakes for homosexual weddings. If you compel the latter, you should also compel the former, and vice versa.
> 'But those are different' is usually the argument here. But why though?
Because we already agree they're different, so they form a Schelling point. We don't have to argue about them, and so there's no slippery slope.
However, I'd say even these examples are not as clear as they seem:
Death threats: I can see the need to punish credible death threats (and any credible threats of violence), but that does not actually imply a need to censor the death threats. Allow them to exist on the platform, but punish the threatener.
Defamation: This seems like a pointless holdover from honor culture. If we didn't have laws against it, people would simply demand evidence more often when hearing someone defame someone else. I don't see why the government needs to certify that my statement about someone else is true. Let my reputation do that.
Pedophilia: Have any children been saved by child pornography laws? It's possible banning and deleting it only encourages them to make more. And again, you can punish the creator of the pornography without also having to censor the content. The two are separate.
Sharing bomb making materials: This is the only example you gave where the actual information itself is dangerous. I support this being banned for sure. (You could say defamation is dangerous, but I'd say it's only harmful when the government is certifying our speech. Take away defamation laws, and defamation itself becomes less harmful, because we won't believe what others say at face value.)
> instead discuss _where exactly_ we draw the line then I think we are finally having a far more honest discussion
There doesn't have to be an agreement; on the contrary, there cannot be an agreement, because that depends on the community. But what we currently see is people pushing into established communities, wanting their own lines realized there. So I see no advantage in discussing this topic.
These lines are subjective, even by culture, country, region, age, gender or whatever. There is no universal line.
Your animosity towards free speech advocates already tells me that we probably don't share a common line at all.
Although I don't fundamentally disagree with your point, I don't think you're framing the issue fairly. It's not a matter of most people being enemies of free speech, it's just that this is an inherently difficult problem. Everyone draws the line differently on what's acceptable and what isn't, and every platform is trying to foster a different kind of community.
Intent. If your speech is intended to compel someone (compulsion != persuasion), then it's a threat. Of course, accurately assessing intent is difficult because threats are often implied rather than explicit (precisely because the one issuing the threat wants to avoid the consequences of issuing a threat).
You put forth the idea in your parent comment that there is a simple bright-line test -- apparently textual -- for deciding whether a remark is an innocuous opinion or a credible threat.
So -- bearing in mind that the speaker's henchmen said they interpreted it as a command -- which side of the line does "Will no one rid me of this troublesome priest?" fall on?
I don't think people are enemies, adversaries, or even detractors of free speech, but rather, they won't actually defend the kinds of speech that the ideal of free speech is meant to protect. Especially when the censorship happens to be affecting their partisan opposite. I do, though, recognize the difficulty in allowing some things and not others depending on the forum.
Please excuse the shortcoming I have in explaining this, but I have to fall back on "I know it when I see it" when it comes to what counts as violations of the principle of free speech vs content moderation. In the current zeitgeist though, I absolutely see this as censorship rather than content moderation. This is because I absolutely sense partisan motivations for content take-downs and topic-wide bans on such large platforms as YouTube, Twitter, Reddit, etc.
I wonder if we would be better served with a NPR-like publicly funded platform for video hosting that puts a lot more resources into content moderation. The private platforms get away with the bare minimum by throwing black box AI at the problem which leads to problems like the anti-vax censorship chilling effects etc. There should be an easily reached human in the loop with transparent decisions, and levels of appeal which is much more expensive. Kinda like the court system with the jury-of-your-peers litmus test.
If the state's going to host it, then their bar should be whether it's legal or not. If they want to put scary warnings around certain content or lock some of it behind age restriction, sure, fine. But if they are going to use public funds to host a government run publicly available video sharing platform, they should be very cautious about removing any content that doesn't violate actual laws. Free speech and all that, if anyone still remembers the concept.
Yes, exactly. And deciding legality should be done better than current automatic moderations that punish innocent content without a working appeals process.
Requiring that content be sourced from actual humans in good faith, and identifying people violating terms of service by spamming with puppet bots, would be a good start. If the service is being operated by a national government, then you could require posters to prove residence or citizenship. But would anyone really want to use such a service? Could it even be produced and operated efficiently? Once you put strict legal requirements on the operating entity, it will get very slow, expensive, and user-unfriendly.
Er, politics in the period of 1949-1985 was not, generally, “more boring” than 1985 to present. The last few years maybe have achieved the undesirable level of not-boring that was generally the case through most of the 1950s, 1960s, and much of the 1970s, but certainly overall the post-Fairness Doctrine period was more boring than the Fairness Doctrine period. (The Neoliberal Consensus is probably a bigger factor than the FD on that, though; it’s pretty hard to attribute much of anything about the overall tenor of politics to the FD.)
Specifics depend on the country. In the USA, over the air broadcast is restricted in content on the premise that broadcast spectrum is a limited public resource, and that you don't have much choice with what you see when you tune in. That argument gets pretty weak with point-to-point networks with nearly unlimited bandwidth, as I see it. An analogy might be the difference between ads with nudity on billboards (I believe that can be prohibited in the USA?) and ads with nudity in a print magazine going to a subscribership expecting such ads (protected by the 1A, including for mailing through the USPS).
Public libraries are perhaps another source of analogy. My local library system has some of the most vile and objectionable works ever printed on the shelves due to popular demand. Many public libraries in Canada and the USA are quite absolute about that with regards to free expression. For example: https://www.cbc.ca/news/canada/ottawa/porn-library-ottawa-po... "Library patrons allowed to surf porn, Ottawa mom discovers"
Sadly, NPR is a bad example because they are hardly publicly funded and their content is pretty biased (I'm a moderate liberal and NPR definitely feels like it's left of me, even if they're typically more civil than other media outlets).
NPR censored coverage of the Hunter Biden laptop claiming it was not newsworthy. All platforms with any moderation will be censorious platforms by definition. You can always tweak the degree of censorship though with moderation.
>I have a much better understanding of historical moral panics and cessions of freedoms. Whereas I used to wonder how some societies ever gave into such pressures, I now realize it's not that hard to persuade the average citizen into accepting such things.
Me too. I've always wondered what drives people into the arms of fascists and other such overtly unpleasant belief systems, and the simple answer is fear. Fear for yourself, your family, and your future will apparently cause people to abandon all sorts of supposedly cherished beliefs.
This is why I think here in the UK, SAGE's SPI-B group using behavioural psychology to increase the perception of personal threat as part of the anti-coronavirus measures was such a dangerous and short-sighted policy. Using fear might be a convenient way to get people to do what you want for a while, but fear also drives people into the welcoming arms of all kinds of nasty ideologies. That cat's out of the bag now too; I suspect using fear to "nudge" the public into doing what the government of the day wants will become a much more common feature of liberal democracies in the future. We've done a deal with the devil and he always collects in the end.
>Fear for yourself, your family, and your future will apparently cause people to abandon all sorts of supposedly cherished beliefs.
And we (the US) must realize we've been put under a constant state of fear by news media and advertisers for the last 30+ years. We don't even realize that we aren't in a "neutral" mental state, because we've been constantly bombarded by fear mongering. The constant fear is "normal" here. It wasn't always like this. This is why "for the children" is so effective at infringing on our rights: we are afraid for the wellbeing of our kids, more so than for ourselves. It's nefarious to use our fear for our children to pass controversial legislation. I mean stop to think about how evil that is.
What would our country look like if we weren't constantly being programmed to be afraid?
>It's nefarious to use our fear for our children to pass controversial legislation. I mean stop to think about how evil that is.
It's absolutely contemptible, yet as Western societies we spend so much time criticising our neighbours and blaming them for our problems rather than criticising the people amping up the fear on us. If I could change just one thing about society, I'd introduce some sort of "immune system" against those who try and use fear to manipulate people; in my opinion, the correct response to fearmongering is contempt towards those responsible for it.
>What would our country look like if we weren't constantly being programmed to be afraid?
I'm not American, but here in the UK we face exactly the same issue. I think both of our countries would be unrecognisable and probably a lot better than they are today. How much avoidable inequality exists because the fear of Russian-style Bolshevism harmed moderate left-wing policies in the 20th century? How much avoidable authoritarianism would we have if the War on Terror hadn't itself become an intense source of domestic fear in the 21st? Fear is the fountain from which all tyranny and bigotry springs in my opinion.
It's not just politics that would be affected, every aspect of society would be changed for the better I think. Maybe that's the form a modern Enlightenment would take, an active rejection of fear and promotion of courage and tolerance for dissent in its place.
The last few years have shown me how many people are willing to believe utterly stupid things, and how easy it is to make people turn against each other. I knew this in the abstract, but watching it happen in real time is something else.
The human mind (myself included) has some serious bugs, and "we" as a society - with help from technology - are getting better and better at exploiting these bugs at scale. I don't think censorship is a solution, but I don't know what IS a solution.
Incidentally, what happened to FUD (Fear, Uncertainty, Doubt)? I miss that term, and I think it's much more descriptive of what we're seeing these days than the more vague "misinformation".
News companies have known for centuries that Fear Uncertainty and Doubt are profitable. They've shrouded the FUD behind claims of professionalism and legitimacy.
The internet has made it instantly and continuously accessible, and just as dangerously, largely fabricated. Even news about things that actually happened can have its comments astroturfed by bad faith arguments or straight up lies.
Censorship doesn't solve the FUD; that will never go away while there's a profit motive (IE, increase clicks).
Censorship can't distinguish between truth and lies, that's a problem journalism used to solve when it was profitable.
Censorship does solve brainwashing. Is the tradeoff worth it? Hard to say.
All I know is all platforms legally need to censor illegal content, so it becomes a hammer looking for a nail.
> The last few years have shown me how many people are willing to believe utterly stupid things, and how easy it is to make people turn against each other. I knew this in the abstract, but watching it happen in real time is something else.
If anyone wants some good examples of this, go to Reddit and read the posts in /r/HermanCainAward that are marked "Awarded". No need to read the comments there--they are often rather mean. Just take a look at the submissions themselves.
For those not familiar with /r/HermanCainAward, the typical submission is a gallery of screenshots of someone's social media posts, usually full of memes about why they are not masking/distancing/getting vaccinated and invariably ending with them getting COVID, asking for prayers, and then someone else announcing that the person has died and often asking for donations to help their widow and/or children get by (because apparently the kind of person who feels that they should get all their COVID advice from stupid memes and conspiracy theories is also the kind of person who doesn't believe in life insurance...).
> The human mind (myself included) has some serious bugs, and "we" as a society - with help from technology - are getting better and better at exploiting these bugs at scale. I don't think censorship is a solution, but I don't know what IS a solution.
This too is illustrated nicely on /r/HermanCainAward. Before all this I would have thought that if I needed to convince a lot of people to make the kind of mistakes that the HCA winners do I would need to carefully craft an individual plan for each one of them. I would have never guessed that just making a dozen or so memes would be enough.
So because the Constitution defines some rather extreme circumstances where Habeas Corpus is allowed to be suspended, the Constitution is "fair weather" when it comes to civil rights?
Yes? I understand what courts do; my only point was that setting out two extreme situations where a certain right may be suspended doesn't constitute "fair weather" in my eyes.
I'm super tired of people conflating civil rights + first amendment protections with the idea that speech anywhere, by anyone, on any platform, deserves to be protected.
The goal of the first amendment is to protect the citizens of America from laws created by congress / government in limiting speech. That is nowhere near what we are talking about when we're talking about _any_ speech conducted on a private platform.
In fact, it's interesting to me that the argument has recently been spun around such that some politicians are claiming that social platforms are violating their first amendment rights by blocking or banning. This has nothing to do with the intent or language of the first amendment.
In my mind, you can be the most fervent civil rights advocate and still believe that Twitter/Facebook/etc can ban anyone they want for any reason. Even more so if you believe in free enterprise and the rights of a business to act in the way that they best see fit.
I understand that platform bans have more implications and repercussions than I'm outlining here in simple terms but still the conflation is frustrating to me.
IMO, if the government isn't using the extraordinary powers granted to them (search, seizure, arrests, fines, imprisonment, etc.) against its citizens strictly for the words they say or write, "free speech" hasn't been violated and it would be incorrect to use the term in that context.
I think even a government hosted forum could filter content without violating even the spirit of "free speech", and I'm pretty sure it wouldn't legally be considered a violation of the first amendment, even if that filtering reached blatant censorship levels.
Censorship, government or private, is a separate concern from freedom of speech, and using one to make arguments about the other doesn't make any sense to me.
> IMO, if the government isn't using the extraordinary powers granted to them (search, seizure, arrests, fines, imprisonment, etc.) against its citizens strictly for the words they say or write, "free speech" hasn't been violated and it would be incorrect to use the term in that context.
"Free speech" is overloaded. There's the abstract concept of "free speech" and then there's the first amendment which specifically limits what the US government is allowed to censor.
I know there's technically a distinction, the abstract concept is what I was alluding to when I mentioned the "spirit" of free speech, but I'm not convinced it's useful to make a distinction between the spirit and the law anyway. The concept of government and law is just as abstract.
I think it's more accurate to say the first amendment specifically limits _how_ the US government is allowed to censor speech i.e. they can't use their extraordinary powers to do it. It doesn't say they can't use the same tools and tactics everyone else legally uses to minimize the visibility and impact of certain speech.
> I'm not convinced it's useful to make a distinction between the spirit and the law anyway. The concept of government and law is just as abstract.
My use of "abstract" refers to the fact that the first amendment applies only to the US government, and consequently it's more concrete or restricted than the broader "spirit". Any particular "free speech" law is just one concrete instance of the more abstract concept of "free speech". The utility of the distinction is proved by the fact that we can talk about "free speech" laws across countries, but also in that we are currently having a debate about whether large social media networks should adhere to a free speech ethos.
> I think it's more accurate to say the first amendment specifically limits _how_ the US government is allowed to censor speech i.e. they can't use their extraordinary powers to do it. It doesn't say they can't use the same tools and tactics everyone else legally uses to minimize the visibility and impact of certain speech.
I'm not sure what distinction you're making. Specifically, I'm not sure what "tactics everyone else legally uses to minimize the visibility and impact of certain speech"--most people don't minimize the visibility and impact of speech. The few who do are generally private platforms, and the US government specifically can't use these same tools. For example, the government can't legally give a platform to a Muslim group but deny that platform to a Jewish group. Additionally, it can't use its extraordinary powers to e.g. order a takedown of an offensive website, but it can't use "ordinary" powers to deny platforms to certain groups either (e.g., Neo-nazis/Antifa can protest in the streets as well as moderate liberals).
I think most people try to minimize visibility and impact of speech they don't agree with in whatever little part of the world they have influence over by e.g. verbalizing disagreement and counter arguments; not inviting people they strongly disagree with to things; not hiring them; not recommending their business; not letting them get near their kids; down voting; 1 star reviews; suggesting others do the same; embellishing stories to paint them in a more negative light. People in government positions can do all that legally, maybe with a few extra things to tip-toe around, but not much.
Protests aren't really a government provided platform, they're more a natural gathering of people that the government is explicitly forbidden from breaking up by using their extraordinary powers. Somehow, they are allowed to get in the way by requiring permits, but worse, they can and do use tactics to escalate protests into something where they can legally use those powers. Individuals can use similar tactics, e.g. instigate someone to take a swing at them so they can claim self-defense.
Even though I'm the one that brought up government hosted platforms for open speech, now that I think about it, the only example I can really think of is maybe town hall meetings. Knowing what a mess those can be, maybe that's why the government has been smart enough to not start hosting open online forums.
> People in government positions can do all that legally, maybe with a few extra things to tip-toe around, but not much.
I don't see any problem with this? They're exercising their own free speech rights as private citizens, which is entirely compatible with free speech principles and the first amendment.
> Protests aren't really a government provided platform, they're more a natural gathering of people that the government is explicitly forbidden from breaking up by using their extraordinary powers.
"Public spaces" are a government provided platform. Similarly, if a public university allows certain groups to chalk messages on the sidewalk, they're legally obliged to let other groups chalk messages as well. Arguing that one or both of these aren't a "platform" is purely semantics.
> 2020-2021 has shown me that most people happen to be 'fair weather fans' of civil rights.
This is always how it's worked. The idea that protecting the rights of your enemies can be salutary relies on too many complex concepts for the majority of people to have any hope of grasping it: burning the commons, collective action, norm evolution, meta-level thinking[1], modeling counterfactual worlds (e.g. a future where your favored ideology is not dominant and is in need of the protections you are currently burning down).
The average person is nowhere near smart enough to be able to put these pieces together into a coherent worldview, let alone one that they find more convincing than "they're the enemy, crush them". The periods where liberalism has been resurgent are not ones where the masses are suddenly enlightened, but ones in which they either have little power or are pacified by unrelated conditions. This is not unlike the conditions in which dictatorships are stable, as the common thread is simply "the masses can't or don't care in detail about the fundamentals of the way they're governed". It's not a coincidence that the global illiberalism surge coincides with the rise of universal connectivity: On top of the social and economic upheaval that it induced, suddenly large amounts of people can coordinate epistemically, through hashtags and reshares, without making their way through distribution chokepoints controlled by elites.
[1] I couldn't think of a concise way to phrase this, but I'm referring to the tendency to claim that a big chunk of your beliefs/preferences are incontrovertible and fundamental tenets of society while others' are simply their beliefs and preferences.
Yea, I hear you, that's certainly possible. But I'm not reasoning top-down from liberalism's unpopularity, but rather bottom-up from the inability most people have to grasp the components I mentioned. Some of them are much simpler concepts than liberalism, and are applicable in many cases that people have more direct stakes in. And yet, in my experience, vanishingly few people are capable of grasping them to any reasonable degree.
We're drowning in anti-racist and anti-fascist material. I don't think it's particularly relevant: clearly it's not difficult to nominally support these beliefs while still being extremely illiberal.
That's what's so insidious about illiberalism. It can (and does) poison any ideology, even "good ones", because once a sufficient number of people are bought into it, it hardens into dogma and leads easily to "why should we protect those that disagree with our holy belief system?".
To use an example that we have no trouble recognizing as illiberal, from our modern perspective: Christianity on paper is a very liberal tradition, full of exhortations to love thy enemy and spread peace and love. I don't doubt that this deeply resonated with many early converts. But after a thousand years of spreading to the masses and ossifying into institutions, the medieval Church resembled every other illiberal institution: here's the dogma, and if you don't like it, well then you'll love my torture dungeon.
I had the same realization about secularism when a Catholic became head of the big secularist-leaning party in my then country of residence. People were sure interested in his private feelings about sinfulness, or said that they would never vote for a party with a religious leader. The party angled itself as secularist, but that particular kerfuffle revealed it to be a highly contingent value of the membership (and of people generally).
It's good to know what people really care about, and what beliefs are negotiable.
I'm confused... isn't a secularist party in favor of not involving personal religious beliefs in civic issues? In that light, what's wrong with someone with personal religious beliefs participating? Others made his religion important, not him, from what you describe.
I'm sorry, but: wouldn't it be important to make sure the person actually wants secular values in politics when parts of their religion don't think the same way?
I mean, if he personally thinks folks shouldn't be using birth control or that same-sex marriage is bad (as the Catholic Church does), isn't it important to ask how they deal with this? Does it come out in votes, or does the person think that they should live up to a stricter moral code than what law dictates?
If you don't find this stuff out, you might wind up in a position where the law reflects religious values rather than broader secular ones. You don't have to have an official religion for this to happen, merely enough religious folks in office that vote with their religion.
Hey, I think your comment is interesting and raises a number of valid points, and I basically agree with you. I spent about 40 minutes drafting and redrafting more detailed replies, but everything I could think of saying read like kicking off a very rote/standard internet discussion about politics and religion, and I wouldn't wish that upon either of us.
Exactly, it's like the unfortunately common belief that religious people can't be scientists. Let their actions speak for themselves; if they are unreasonably biased, it will show.
Right, that was the surprise. It should be what you said, but in practice it became the tribe-of-choice for people who wanted not just religion out of government but also religious people.
Yeah, most people tend to get squidgy about abstracts like "free speech" when the crazies come out of the woodwork and start calling for treason, mass deaths, and so on.
What changed in 2020? Do you not remember the Dixie Chicks getting blacklisted for speaking against George Bush, Janet Jackson being deplatformed from record labels for showing a nipple on television, the LA DNC in 2000 cordoning off protesters into "free speech zones" 4 blocks away from the convention, gangsta rap and heavy metal being censored in the 80s and 90s, pushes for civil rights being met with dogs and firehoses in the 60s, Lenny Bruce being imprisoned for stand up comedy, interracial marriage being illegal, teachers being fired for being gay, American citizens having all of their property confiscated because they were ethnically Japanese? We had anti-sedition laws passed within 22 years of the Constitution being ratified.
Most of these things that aren't a matter of private freedom of association for platform owners tend to eventually get struck down because the Supreme Court can somewhat reliably be counted on to eventually respect the Constitution, but the general public and elected politicians have never supported free speech or free anything. I would say almost the exact opposite of what you said. People only claim to love free speech when something they want to say is unpopular or suppressed. Almost nobody just supports it generally. The ACLU used to be pretty reliable about this, i.e. defending the KKK and Nazis, but I'm not even really sure where they stand any more.
The average citizen doesn't give a crap about any abstract ideals at all. They just want to live their lives and possibly raise a family in an atmosphere not dissonant with their own cultural traditions and beliefs. Allowing people with other traditions and beliefs to spread those via public advocacy, art, or any other means that may lead to them being mainstream or even dominant is antithetical to that.
Let's be precise; the left started doing it. We are literally discussing how big parts of the political left in the US are moving away from the liberal ideals and philosophy that it once embraced and championed.
There should be a term for this, when conservatives in USA have so little exposure to actual leftists that they confuse MSNBC, the Democratic Party, and censorious anklebiters with "the Left". Here's a hint: actual leftists aren't cheering on the persecution of Assange, like everyone on MSNBC is.
No true Leftist, hmm? Okay, I'll humor you, what should we call "MSNBC, the Democratic Party, and censorious anklebiters" (sic) and the rest of the broad coalition to the left of the American center, if not the left?
Aside from that, if you're suggesting I'm a conservative (no worries, it's a common mistake; it's tough being politically homeless), I'm a liberal unhappy with the broad coalition that used to be called the left. If that makes me a "conservative" in your eyes, well, there probably should be a term for that too.
As you admit here, in any nation in Europe (or any nation in Latin America whose government wasn't recently installed by CIA), the name for MSNBC and the Democratic Party would be "the Right". Even in USA, they're the ones banging the drums most loudly for global thermonuclear war.
Your dissatisfaction is that you aren't a pure liberal. If you were, you would go along happily with their efforts to conserve this mess they've created. You just have an unfortunate fixation on the Bill of Rights or whatever. Keep in mind that Madison owned slaves when he wrote that, just as he had owned slaves when writing and campaigning for the Constitution the excesses of which the Bill of Rights pretended to control.
The term exists and it is called the Overton Window. It has been an intentional tactic and it is happening across several political spectra. For example, being against illegal immigration now gets you labelled anti-immigration. Wanting the police to be held accountable for their actions is now considered leftist instead of normal. Being anti-vax is considered a personal choice worthy of consideration instead of a fringe/lunatic view.
We could get rid of illegal immigration instantly just by changing immigration law. But I’m guessing that folks that say they are only against illegal immigration wouldn’t be happy with that outcome either. So I wonder what it is that they are really against?
you're going to continue to be downvoted but it's the truth. when Jon Stewart was going against the grain of the widely-accepted political narrative of the Bush administration, it was easy to agree with him calling out bullshit. when the political pendulum swung the other way, however...
the lesson here should be that we're not really much different from our ancestors of 5-50k years ago and because of that, concentrations of power (and by extension, influence) are inherently dangerous. leaders with too much power will inevitably make mistakes, and keeping institutions limited and focused means the harms from those mistakes are limited in scope.
in government, that means extending federalism: smaller governing bodies loosely federated (primarily for mutual protection and interrelational fairness). in business, it means truly competitive (and fair) markets with diverse participants, not oligarchical ones (like these platforms).
100% of everything you said I agree with. We're in a downright recession on freedoms.
"I now realize it's not that hard to persuade the average citizen into accepting such things."
Yes, and worse, I feel it's not even that people are all too willing to go along with these things from on high; they want it, they propagate it. I'm starting to see the same behavior and mentality I imagine was common in East Germany, where citizens policed other citizens. It's not a good direction we're headed in at all.
Yep. Every western country I know of except America has only a token gesture of human rights, and those are only rights that happened to match the culture at the time, with no longer-term vision at all.
New Zealand, for example has its Bill of Rights Act which says no forced medical procedures. Somebody was recently fired for refusing a Covid vaccine and went to court arguing that right. The judge said, yea, there is that right, but also the government can revoke it whenever they feel like for any reason they want, and they have, so no luck.
Discrimination based on race or sex? Certainly not! Oh, except for hiring domestic workers, selecting flatmates, decency, safety, sports, etc, etc. In other words, all the places where people were already doing discrimination.
The UN's ambitiously named "Universal Declaration of Human Rights" has 28 rights for people, and a 29th right of governments to deny any of those rights for reasons of morality, public order, or the general welfare of a democratic society. In other words, any reason whatsoever.
Remember freedom of internal travel? China used to be a human rights violator for restricting that. Now every country and its dog is doing the same. But this time it's "us" not "them", so it's all OK.
I have to be honest and say that my faith in the capacity of people to think in a free speech totally open environment was severely tested over the past 5-10 years. Things like Qanon, flat Earth, antivax hysteria, the meme-driven return of both "right" and "left" totalitarian ideologies from the early 20th century that should be completely discredited, and so on have made me wonder if most people simply can't handle exposure to unregulated content as they lack the ability to think critically. I've actually wondered if most people might not need to be protected from unregulated content in the same way that people need to be protected from exposure to lead, radon gas, etc.
The human brain simply didn't evolve in this environment. Throughout 99.9% of human history a person's ideas came from the tribe or neighborhood and came with the context of culture, social relationships, and physical body language cues. The brain did not evolve to process a context-free meme stream connected to an algorithm trying to keep you "engaged."
If only a few people had fallen for this nonsense that would be one thing, but I witnessed mass conversions of millions of people to ideas that are more absurd than the craziest conspiracy bullshit I read on Usenet in the 1990s. This is stuff the people who are crazy think is crazy.
It goes beyond shockingly bad ideas too. I've seen an alarming rise of discourse that resembles word salad spewed out by a primitive Markov chain text generator. It's terrifying. It almost looks like exposure to open discourse is causing brain damage in some susceptible fraction of the population. Some subset of people seem to have lost the ability to process or generate a coherent narrative or structured system of ideas. They just spew memes in random order. It's less coherent than classic new age babble or Orwellian "newspeak."
I still lean in the free speech direction pretty hard, but my faith is shaken to the core. I honestly and non-hyperbolically wonder if the right carefully crafted AI-powered social media boosted propaganda campaign couldn't convert double digit percentages of the population into homicidal maniacs or cause a mass suicide wave killing millions.
BTW this and not "Skynet" is the scary thing about AI. The most terrifying AI scenario I can think of is AI-assisted demagoguery or mass stochastic terrorism powered by big data. Think "adversarial attacks against the human neocortex."
Chris Hedges sometimes remarks that he finds people turn towards superstitious, religious, or conspiratorial world views when they find they have no control over their lives. I suspect that if the US somehow changed so that economic security were increased for most people, there would be much less unreason and general discourse wouldn't be so mean.
That's an interesting theory, although I doubt that 19th century US farmers, among the most self-sufficient people in history, were lacking in religion, superstition, or conspiracies.
They were self-sufficient in the sense that there was no one coming to help in case of emergency. That doesn't mean they ever felt secure. Entire families regularly died for one or more of dozens of causes: starvation, freezing weather, tornadoes, human sickness, livestock sickness, crop failure, drought, dangerous animals, Indian attack, crime, etc. Some of them might have pretended at a "control over their lives", but few modern Americans would trade places with them.
In fairness, it is trivial to point to any number of counter-examples throughout history if you just want to dismiss the observation. Hedges is a journalist and was referring to the changes he saw in the squeezed middle and lower classes through his career, and probably wasn't making a sweeping historic claim.
I mentioned it as contrast to the parent comment who simply concluded people are incapable of rational thought, and meant to suggest there are mediating influences.
I'm not and I'm willing to give it a chance. I'm just not seeing a strong correlation.
What I do see people doing when they lack control is to attempt to gain control. Put together a group, grab some of that sweet power through mass. The more ambitious ones fight their way up the power structure. Special bonus points if you can take over an existing seat of power.
> What I do see people doing when they lack control is to attempt to gain control.
One way in which people gain (the feeling of) control is to imagine the world differently (superstition, conspiracy, religion) in a way that makes them virtuous or special and others not (i.e. essentially Nietzsche's idea of ressentiment). If you haven't the power to take part in a real struggle, this is not so surprising to me.
Perhaps the temptation is to choose philosophies that have the appearance of power. Sticking pins in a voodoo doll of your boss, Mr. Scrooge for example.
Some acquaintances of mine did that at their job once and it worked a charm. The boss was permanently off on sick leave within a month. Maybe there's something to it.
You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
Farmers through all of history and to this day are absolutely never self-sufficient. They are at the mercy of 'the gods', aka the weather, at all times. One bad flood, one freeze, one week of ill-timed rain, and they are destitute. That's the perfect environment to breed superstition.
I think in general I believe that theory to be true, but that doesn't explain all the well-fed boomers who went full Qanon. They have retirement accounts, own their houses, a few nice vehicles out front, a boat, maybe a second property, etc.
These people ended up being Qanon's bread and butter. From the outside they seem in control of their lives, but maybe it's all a ruse.
My father said something interesting to me once (he's a boomer of course): "Most people in my generation have this sense somewhere deep inside that they failed, but they're not sure exactly what they failed at."
Only a small percentage of that generation were into the specific sort of counterculture Thompson was steeped in, but the whole cultural zeitgeist of the generation was wrapped up in various forms of either political or mystical idealism or a heavily marketed idea that scientific or technological progress would deliver giant miracles in short order. These people expected some kind of big, glorious future and they feel like it just didn't come.
Nice retirement accounts and second houses was not "the dream." The dream was something bigger, an "ultimate triumph" as Thompson calls it, and they feel like they never quite reached it.
Instead they got the 1970s when it all seemed to crash down in a wave of urban crime, drug hangovers, crazy cults, serial killers of the week, pollution, and stagflation. Keep on truckin'!
A good number of them are still looking for "it." Apocalyptic evangelical Christianity with its rapture tapped into this. The unbalanced Internet techno-utopianism of the 90s and Singularitarianism tapped into this. Qanon's language tapped into this.
I'm a young gen-Xer or old millennial depending on how you count it. As a rule my generation is a bit jaded on movements and idealism, partly as a reaction to what many of us see as the delusional thinking of our parents.
I hope to leave the world better than I found it, but I don't expect some kind of shining utopia to rise from the waves. If we manage to solve the big problems of our era like climate change, sustainability, and wealth inequality, their absence will merely reveal other big problems that were building under the surface. Life is beautiful and fascinating but it's also a perpetual struggle against entropy.
> I've actually wondered if most people might not need to be protected from unregulated content in the same way that people need to be protected from exposure to lead, radon gas, etc.
Yes, but that means deciding which content is harmful, and that's where we are now. Figuratively, you end up with lead-lickers coming out of the woodwork saying that their way of life is being stifled by regulation/moderation.
Chasing user "engagement" has been pushing conversations from the mundane middle toward the fringes. Thus, people make the understandable but hasty generalization[0] that what they're seeing is more common than average.
In the past, I think this drift was counteracted by codes of morality (whether internalized, reinforced by people you know, or promulgated by regulatory bodies) as well as the limited means of disseminating information (few newspaper editors/radio announcers/news anchors to many readers/listeners/viewers). Though I'm sure there were plenty of wild pamphlets spreading chaos in the centuries between Gutenberg and Zuckerberg.
Even though most of those morality codes are downright oppressive by today's standards, and the many-to-many distribution enabled by the Internet has many benefits, we haven't found a substitute, so there's a gap in our armor.
Side note: Believing conspiracies and yearning for totalitarianism are two different failures in thinking. I say that because only the latter had strong support in the 20th century—even earlier if you count monarchies. Someone supporting Flat Earthers isn't harming me (except by undermining science in general); someone supporting Stalin 2.0 is a direct threat to me.
>Yes, but that means deciding which content is harmful, and that's where we are now. Figuratively, you end up with lead-lickers coming out of the woodwork saying that their way of life is being stifled by regulation/moderation.
There might be something to be said for, instead of limiting speech, increasing the 'reporting requirements'. This isn't a fully formed position of mine, but rules along the lines of no anonymous speech[0] and stricter fraud rules[1][2] are imo compatible with free speech and its ideals while helping to manage fraud and propaganda.
[0] So if an AI wrote a blog spam post, it should be at minimum illegal to not have the AI in the byline, with e.g. a unique identifier.
[1] Say loudly and publicly that there is a pedophile ring in the basement of a pizzeria, with no evidence, go to jail.
[2] Not that such rules can't/haven't been abused before though.
Flat Earth folks these days mostly seem to just delight in being contrarian/trolls. I'm sure there's some unstable folks in there like any belief system/group though.
> How do we maintain the emphasis on free speech with our higher fragmented population who have WIDELY varying beliefs and little shared culture?
We cannot. Diversity, Freedom, Centralization. Pick two.
What has kinda worked in some countries is splitting up into different areas and letting them run their own affairs -- e.g. Swiss-style cantons.
But that type of arrangement is very fragile given the organizational advantages of centralization and countries like Switzerland are notable because of their exceptionality.
How can you support free speech but prevent a company from exerting that speech?
YouTube banning antivax content is speech.
You want the government to start mandating that a company can't take a stance on important issues of healthcare? Churches spend all day every day taking stances on abortion. You want the government to tell them they can't take a side?
> How can you support free speech but prevent a company from exerting that speech?
I can be deeply disappointed by youtube's moderation decisions without suggesting that the company be compelled to allow certain content. as an aside, I find it frustrating to see people constantly swapping between "free speech" as a legal concept and "free speech" as an abstract ideal in these threads. we talk past each other the same way every time the debate comes up. just because the law is written the way it is doesn't mean that's necessarily the way it should be. and even if we can't write the law "just right", we can still advocate for higher principles to be followed.
anyways, I generally agree with the "companies can manage their properties as they see fit" line of thought. but it becomes problematic when our public spaces are increasingly controlled by a small number of huge companies that mostly share the same politics. I'm not really sure what the solution is, but it sucks to watch it unfold.
> You want the government to start mandating that a company
There is already an established history of requiring certain large communication platforms to act a certain way.
They are called common carrier laws, and already apply to things like the telephone network.
Sure, they don't currently apply to other things, but the law could be updated, so that they do.
Philosophically, common carrier laws are uncontroversial, and already apply to major communication platforms, so you don't get to pretend like this is unprecedented.
I like the free market. If a store that also likes the free market decided to raise their prices significantly because they claim to be better than everyone else, then I will still think that is their right in the free market. However, since I value the free market so much, I won't buy from them. Similarly, if YouTube wants to exercise their right of freedom of expression to censor content, then I, as someone who values freedom of expression, will use them less. Unfortunately, while in the first example many people would behave like me and cause the store to lower their price, not that many people value freedom of expression enough for YouTube to care about losing those people.
I think discussions of the free market need to include scale. Scale absolutely matters when it comes to "voting with your wallet", or I suppose, in this case, your usage of a platform.
I think our notions on the merits of a free market, and indeed, the very understanding of a free market itself, come from a time before the network effect and the de-facto digital monopolies we see today.
> You want the government to start mandating that a company can't take a stance . . .
I, for one, want less monopolistic media so that the people can exert viewership pressure; they can get their media elsewhere and the ad money will follow. The content being stopped is not the only loss of people's voices happening here.
As Thiel says, (in whatever form it ultimately takes) the free market is a selector for monopolies. At peak capitalism, you still have startups competing with an increasingly low chance of success, excepting scandals... which are inevitable in large organizations.
The biggest companies are basically utilities and that will not change anytime soon. The market has resulted in this condition. The government has to play catchup, as usual.
Youtube is a monopoly, and monopolies should be limited in the same way the government is, and for the same reason. This also applies to groups of otherwise independent businesses that operate in concert.
That, and adding political beliefs to the list of protected classes, is what is necessary to start the US healing processes. Until there is no other option but to talk with the people you despise, neither side will start doing it.
The problem is that Youtube is big enough, and carries enough of the global conversation, that we think it should be a common carrier. (Think of the phone company back in the day. They didn't care if you were literally the Nazi Party of America, they carried your phone calls just like everybody else's.) People kind of think of Youtube that way, even though, legally, Youtube isn't playing by those rules.
But there's also this two-faced evaluation of Youtube. When Youtube blocks the other side, people say "private company, First Amendment, they can carry what they want". But when Youtube blocks their side, people at least feel the violation of the "common carrier" expectation, and get upset.
So maybe it's time for us as a society to decide: Has Youtube (and Facebook, and Twitter, and Google) gotten big enough and important enough that they should be regulated into some kind of "common carrier" status? Or do we want them to continue as they are?
> "How can you support free speech but prevent a company from exerting that speech?"
Corporations are not humans (regardless of the "corporate personhood" doctrine) and thus should not be entitled to the full rights of humans. Semi-monopolies like Youtube are especially not entitled to use their dominance to manipulate public opinion, given how easily it can be abused.
Ask yourself, if YouTube were pushing conspiracy content and suppressing pro-vaccination content instead would the parent poster and those like them still be saying what they are saying? Fair-weather friends indeed.
> "You want to the government to tell them they can't take a side?"
> Corporations are not humans (regardless of the "corporate personhood" doctrine) and thus should not be entitled to the full rights of humans.
True, but corporations are just collections of people with shared goals. Should groups of people lose "fundamental" rights when they organize?
> Fair-weather friends indeed.
Yes, I fully support the rights of platforms to do stupid things. Use rumble or whatever if you want. I'll mock those platforms, but I don't think the government should ban them.
> Semi-monopolies like Youtube are especially not entitled to use their dominance to manipulate public opinion, given how easily it can be abused.
So, you think that we should circumstantially limit constitutionally protected rights, for the greater good? Fair weather friends indeed.
> So, you think that we should circumstantially limit constitutionally protected rights
According to your logic, you should think that common carrier laws should be repealed entirely.
Think of the telephone company blocking certain political groups, or the only water company in town refusing to deliver water to certain people who say things that they don't like.
It is pretty similar, philosophically. Common carrier laws are pretty uncontroversial.
So it feels weird for you to be making these types of arguments when it is already established that there are major counterexamples.
So you'd have to either recognize the contradiction or admit that your position is at odds with other established and uncontroversial laws.
I didn't take any particular position; I pointed out the incongruence in the one they espoused.
I'm admittedly mixed on common carrier laws, I think that they are "impure" in a sense, but I also think the benefits are greater than the costs, even taking into account the potential theoretical erosion of our rights.
I absolutely agree that there are major counterexamples, and I'm overall fine with that, but I'll also freely admit that I'm not a free speech absolutist of any form.
> "So, you think that we should circumstantially limit constitutionally protected rights, for the greater good?"
Why, yes, I do. Ask yourself: what were the various civil rights victories and legislation for minorities, women, and LGBTQ people other than saying "We are circumscribing your rights, including ones formerly interpreted as constitutionally protected, so that these protected classes may be treated equally, for the greater good"? Sounds like you'd argue against that.
I mean, yes. I think that's fine. But that goes in both directions: if you're willing to sacrifice the speech of some for the speech of others, clearly either you aren't a free speech maximalist, or there is some inherent contradiction in what free speech is. Because if my free speech requires limiting yours, well...how do we decide whose speech is more important?
I think a modern formulation of freedom of speech, like Article 19 of the UN Declaration of Human Rights, is clearer about what freedom of speech is: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." Note that this formulation doesn't have to descend into some sort of free speech maximalist apocalypse; spam is unprotected because it's not something anyone wants to receive, incitements to imminent violence are violations of other human rights and laws, etc. so moderation is still possible. There is a difference between interfering with parties who are mutually voluntarily communicating and helping parties not receive communications they voluntarily have decided that they don't want to receive.
(I was about to say that free speech maximalists are a myth but I do recall seeing at least one person on HN advocate such a view.)
And I agree there's a tension between the rights of different individuals/entities but the same tension exists for other civil and human rights. The compromise used for other civil rights seems reasonable here: everyone should be treated evenhandedly and impartially and, the larger the corporation/organization, the greater the responsibility to do so.
> Note that this formulation doesn't have to descend into some sort of free speech maximalist apocalypse; spam is unprotected because it's not something anyone wants to receive, incitements to imminent violence are violations of other human rights and laws, etc. so moderation is still possible. There is a difference between interfering with parties who are mutually voluntarily communicating and helping parties not receive communications they voluntarily have decided that they don't want to receive.
Sure, I accept this (note, though, that the UDHR definition is incompatible with the US constitutional definition, but let's work with this one because I prefer it anyway).
You mention mutually voluntary communication. So the vital question is: is your posting a video on Youtube a voluntary communication with me? I don't see how it is. I can see this argument for email, but I don't see how you can claim it for Youtube without an apparent contradiction: even if you and I want to communicate via Youtube, Fred over there thinks the video you posted is spam/inappropriate and should be blocked. That is, if you need mutual consent from all parties, there will always be people to whom a video is shown who don't want to watch it. Therefore I don't think your framing works either way. Either you and I aren't the mutual parties to the communication, in which case Youtube can withdraw its consent, or everyone is party to the communication, in which case Youtube has a responsibility to block the content on behalf of those other people.
> everyone should be treated evenhandedly and impartially
Are you claiming that people aren't? Who is being treated differently? As far as I can tell, the discrimination, insofar as it exists, is based on an idea that isn't specifically called out elsewhere in the UDHR (such as religion or race, in Article 18 or 2).
I'm sorry, but I'm not sure I follow the reasoning here. If I posted the video (which falls under "imparting") and you intentionally searched for it and viewed it (which falls under "seeking and receiving"), that seems like voluntary communication between us, I would think. Fred may not want to see that video, but that's a separate interaction between myself and him; I would wholeheartedly support Fred being offered tools to prevent my hypothetical video from being shown or offered to him. But blocking/removing the video so nobody can see it, based on Fred's preference (or even the preferences of many like Fred, or YouTube's preferences): how is that fair to you, who wanted to see it, and how is that fair to me, who wanted my audience to find it if they looked for it? Fred might judge the video harmful, but that gives him no more standing to interfere than a stranger has to interfere with you purchasing a book from a bookstore because they deem it harmful.
Well but did I intentionally search for that video, or did I search for, say, content about vaccines?
If you're saying the video is unlisted/private and sent to people on a mailing list over email, that's one thing, but I think you're arguing that YT should act as a distributor as well, and there's no way to "impartially" order search results.
Nobody is forcing the searcher to view any of the content presented as search results and I think it's hard to make the case that mere exposure to unwanted search results is anything other than a minor irritation.
I think we would be in agreement that improvements in search and algorithmic identification of content would be a great thing so that people who don't want to see fringe content can be helped to avoid receiving it in their search results?
Let me rephrase: what, in your opinion, is wrong with Youtube's search algorithm making antivaxx content appear "last"?
This is "just" a choice about how they order search results, but is functionally equivalent to delisting the content. Is that okay? If not, what makes today's ordering "more okay", and broadly, how do we delineate between acceptable search algorithms, and unacceptable ones?
Disinformation is doing more harm to society than encrypted terrorist emails ever could. The "Disinformation vs. free speech" will be a defining balance of the next decade, even more fraught than "privacy vs. security" in cyberspace.
I'm not censoring someone because they want low taxes, high taxes, disbanding the department of education, or giving every poor person a new car.
There is a line to be drawn somewhere, where speech crosses from the expression of political thought and free thinking into willful (or the amplification of willful), factually incorrect statements generated by sophisticated trolls and adversarial nation-states.
My personal take: The Internet is a vital tool for free expression, and as such, a "floor" of free expression should be permitted. Ensure that DNS and ISP connectivity cannot be removed from someone based on the legal expression of thought. Those are infrastructure.
Youtube, Facebook, and other amplification platforms? In the short term, I don't see how we force them to host actively harmful content without recategorizing their role in society.
edit to respond to iosono88 (since HN is throttling my ability to rebut): I'll keep my response simple: I also don't like payment processors being used as control mechanisms for legal activities.
> In the short term, I don't see how we force them to host actively harmful content without recategorizing their role in society.
This recent flip from "Facebook et al aren't doing enough, put the screws on them" to "we can't force Facebook not to censor" is quite disingenuous. These companies and their founders are famously liberal, and were dragged kicking and screaming into ever more heavy-handed moderation, by both public opinion and veiled threats of regulation from politicians. There's plenty of statements on the record of eg Zuckerberg saying what was obvious to most of us: nobody, including Facebook, wants them in the position of deciding what is true and what is false.
Leaving aside whether content moderation is a good thing, let's not pretend that the situation here is that Facebook really wanted to become the arbiter of truth and misinformation and we can't stop them from being so.
They already were at that point, regardless of what people claim.
Facebook had the ability to paint whatever picture they wanted as truth by controlling what their users saw at what time. And they utilized it proudly long before COVID to increase engagement.
Sure, and long before Covid, that was a valid (and much-expressed) criticism of them, as well as a general criticism of using non-federated platforms.
But expanding explicitly into deciding what users are allowed to see and express to each other is a million times worse than the type of banal malevolence that arises from "show people what they like to see".
There's the fundamental difference! I love finding the "fundamental difference". The crux, the place where philosophies diverge, where the understandings break down.
I fundamentally disagree with the premise that "keeping harmful, intentional disinformation away from people" is worse than "letting people unknowingly subscribe to disinformation".
I would argue, perhaps, that there should be open policies on what topics are off-limits. That Facebook et al. should have to document to the public what "viewpoints" and disinformation they limit - and furthermore, that more of these content-display algorithms should be auditable, competitive secrecy be damned.
I wouldn't call hundreds or thousands of people dying due to disinformation-backed vaccine skepticism "banal", either.
> "keeping harmful, intentional disinformation away from people"
This is begging the question though. It's assuming that "harmful, intentional disinformation" is 1) well-defined and 2) always going to be determined by those with your best interests at heart. It relies on a blind faith in the fundamental and eternal moral purity of Facebook and other mega-corporations. I wholeheartedly disagree that they fit this mold.
This is true even if you turn your religious passion towards government institutions instead of Facebook. Do you similarly agree that criminal trials and due process are unnecessary? After all, the same pre-hoc confidence in your ability to categorize without rigor leads to "Why would we want criminals going free due to technicalities and lawyer's tricks?". I assume you don't agree with this statement, because in that context, you've internalized the idea that institutions are both corruptible and flawed even when uncorrupted, and it behooves us as a society to have some epistemic humility instead of pretending that Truth is carved onto clay tablets and handed down from God.
If you've paid any attention to the pandemic, you'd know that even a situation where government is in full control of defining "misinformation" can be consistently and significantly misleading. "Mask truther" used to mean someone who thought wearing masks was a good idea for preventing spread, discussing the lab leak hypothesis was "misinformation", the vaccines were "rushed and unsafe" until Trump was out of office, etc etc etc. It's hard to pick a topic where it wasn't trivial to front-run official advice by several months, repeatedly, over the entire pandemic.
It's a bit of a paradox: The very certainty and (illusory) Truth-with-a-capital-T that you take for granted is forged through a process of skepticism, second-guessing, and constant poking at beliefs. Hamstringing that process is like killing the golden goose to make more room for all the eggs you plan to have.
Your entire "if you've paid any attention" paragraph is questionable.
I never heard people get mocked at any point for wearing a mask during the pandemic, even in the beginning when the CDC said it wasn't necessary.
In my PERSONAL opinion, the "lab leak" theory was never misinformation, but uninformation: Discussion pushed forward by right-wing outlets to generate an enemy, a "they" to blame. They used it as a cudgel without evidence against Anthony Fauci, and they beat the shit out of Asians because of it. Most importantly, it was completely irrelevant to the extent of us debating it, when the focus was on containing a disease for which there was no cure or vaccine.
And while there was some public skepticism about the pace of the vaccine process, I likewise don't think there was a "switch-flip" of trust in it like you suggest. When it came time to take it, everyone who wasn't a vaccine skeptic already went to get it when they could, and clearly the Trump admin was in charge through the development of the vaccine.
---
There is also a difference between someone posting "I do not trust the government not to be incompetent, or not to run a mass trial on people" (though I think those people are nuts re: the vaccine), and someone saying "I know that Bill Gates and Anthony Fauci put microchips into a bone-melting serum that will activate in a year!"
It's a huge, multi-faceted issue. In the end, the problem to TRY to solve in coming years will be sifting between legitimate skepticism and good-faith debate, and nation-states/Fox-News lies that intend to manipulate you into anger and division, and whether private entities have the obligation to allow harmful information across their channels.
You've added an addendum to protect DNS and ISPs, both of which have been used to censor citizens. Thoughts on pulling payment processing via a very generalized Operation Choke Point?
New Zealand has a Censorship Office and an official called the Chief Censor. The government has the power to declare that certain communications are illegal and prosecute people for making those communications. They recently, famously did this with footage from the Christchurch shootings as well as the shooter's manifesto.
Yet New Zealand consistently tops Cato Institute's freedom index.
Maybe, just maybe, an absolutist reading of free speech that favors spreading of hate and misinformation, as we have in the USA ever since Brandenburg v. Ohio in 1969[0] is not a necessary condition for freedom. Maybe restricting speech actually has a public benefit and can increase the freedom and well-being of individuals with no hateful or dishonest intent.
[0] Note that the defendant was a KKK member, so yes, the current bedrock court ruling for free speech in the USA was crafted specifically to protect hate speech.
>Yet New Zealand consistently tops Cato Institute's freedom index.
Well, I'm sure those who got arrested for sharing the Christchurch footage (footage that was, btw, widely available and shared worldwide) will be happy to hear that. They may be in jail, but at least the Cato Institute declared that they were still living in a country that tops its index. An index that hasn't even changed or taken into consideration the extreme drift towards authoritarianism we have seen in 2020.
If you can ban something for being hate speech, calling something hate speech becomes a weapon. I absolutely don't trust any of the people banning "hate speech" to not have double standards that make it easy to call anything politically disagreeing with them "hate speech".
>"Yet New Zealand consistently tops Cato Institute's freedom index."
Good for them, but that index is meaningless to me. Truthfully, I have no respect for arguments that twist acts of suppressing freedom into acts seen as enhancing or preserving freedom.
From a personal perspective, I don't want to have to wade through everyone else's "free expression", for the same reason that I don't want to have to wade through every company's advertisements. Experientially, I am less free when everyone can interrupt my brain space with what they want to shove at me. I want limited amounts of high-quality, interesting communication, not the noise of everybody's everything. (Think in terms of Shannon information theory.)
So I kind of see your point. Filtering out the garbage is no loss to me. It makes me freer rather than less free. And yet...
The failure mode of the New Zealand approach is to have someone who is partisan hold the office of Chief Censor. Worse, a dedicated partisan might see the value of holding that position, and might deliberately, systematically seek it, hiding their partisanship until they obtained it. Sooner or later, someone's going to at least try.
Ironic that this is from the EFF, who were happy to dogpile on RMS when he said some things that some people took offense to [1]. Just like private censorship, trying to cancel someone is not the best way to fight hate.
But I'm also wary of this idea that "well you just give the people tools to sort it out".
Mostly because:
1. It seems as unrealistic as expecting people to peruse their own source code... That's a huge pain in the ass for most folks, and they're just as likely to pick a bad filter or service to do it for them anyway.
2. Did that change anything about the situation where GoDaddy and Google refused to manage the domain registration for the Daily Stormer?
I don't think it did....
Aren't we back at the start with these suggestions?
> Anonymity and pseudonymity have played important roles throughout history, from secret ballots in ancient Greece to 18th century English literature and early American satire.
Sorry, I think anonymity is valuable in the world we find ourselves in, but to suggest it was prevalent more than a few decades ago is really stretching things. Yes, there are examples, but ordinary people just wouldn't do it. Now, having, say, an email address that is your actual name is a rarity even if you wanted one.
Private networks need censorship, or their userbase will implicitly censor people by harassing, intimidating and threatening them.
“Free speech” is an overused buzzword. Anyone can say or write whatever they want privately, and most can talk freely to their friends and family. Your voice will never reach most people unless it’s popular, because there are many voices and people have limited time and attention. There are plenty of free speech networks today but most people aren’t on them, because most people don’t want free speech.
Democracy is a system in which your party loses elections. And when they lose, do you want them dictating what you can and can’t say?
No one has a monopoly on the truth.
In fact, our greatest scientific discoveries (the closest thing we have to Truth) have been forged by “offensive speech”. The ability to offend actually helps minimize misinformation.
If debate is always a diversionary tactic and we should prioritize action by picking a side, why should we necessarily side with you? Are you always right?
Let's trust the billionaire execs at Google, Facebook, Amazon and Twitter to listen to the correct academics rather than responding to the incentives of capital. When faced with calls to ban pro-Palestinian rights activism on their platforms, they've never caved before
> Let's trust the billionaire execs at Google, Facebook, Amazon and Twitter to listen to the correct academics rather than responding to the incentives of capital
That's what we're doing now: it's a neoliberal "free market" and the acolytes of the Chicago School tell us that the incentives of capital will eventually lead to the best possible outcome because rational actors will make perfect choices with complete information.
Not if the Left actually builds arguments and movements to change minds and win political power. But instead many spend their time begging the rich white men at Facebook and Twitter to decide which political speech deserves to be hidden from millions of people.
The speech in your post is oppressing me right now. Cease and desist your verbal oppression or else I will use state-sponsored violence to end your oppressively free speech.
In the US? Yes it is. You realize you are quoting a 1919 Supreme Court case (the one that used the "fire in a theater" argument to make protesting against the draft illegal)... which was overturned 50 years ago, right?
Oh I'm sorry, I didn't realize I was talking to constitutional scholars. What law school did you go to? How long have you been a member of the SCOTUS bar? What law reviews have published your work and how many constitutional law cases have you argued?
I think you should review the HN guidelines, as it appears your comment is breaking several of them. If you want to make posts like this you should build your own HN.
That is a gross mischaracterization, although one that I would expect given the track record of this conversation. Your comments are filled with an unhealthy level of vitriol, and I think it would be a good idea to cool down for a while.
Straight out of the obfuscation and doubt handbook. Instead of addressing the issue, go straight for attacking the messenger and divert, deflect, and distract.
Stop peddling conspiracy theories and bad faith arguments.
Your "misinformation" is the notion that masks work, vaccines are not "100% effective" as originally claimed, and that the coronavirus strain causing covid-19 is man-made. You want to stifle political opinions that threaten your narrative, and you have twisted your words in such a way that you are framing state censorship as a form of liberation. Your advocacy for state controls on speech would be far more at home in a totalitarian regime such as the DPRK, and I suggest you pursue your ideals there.
Is there an easy copy pasta on exactly how bad the original judgment was? Because there really ought to be one whenever someone posts about fire in a theatre.
Your position is based on a provably fallacious argument. The posters in this thread have provided you with references, but instead you're doubling down and ignoring them.
The CIA wants big tech monopolies because we are in direct competition with China: mimetic desire to be the dominant great power. But the problem is monopolies suck. They stifle innovation and make the culture sluggish and less dynamic. Where do we go from here? Probably some new great power emerges that allows true freedom and fosters bottom-up innovation. I just wonder where that will come from.
It's not so simple as whether or not a platform allows content of a certain type from certain authors to be published. It's also about whether the platform is pushing that content to others, using automated tools which have been tuned to improve "engagement". That's the Facebook research which was suppressed until the Wall Street Journal published the leaks. Facebook apparently knew that changes they made were allowing posts that made people more angry to get surfaced more, because angry people were more likely to comment on the posts, thus improving "engagement". No matter if it caused disinformation to be spread, or made people more angry, or negatively impacted the mental health of teenagers. It improved "engagement" and thus $$$ to Facebook shareholders.
This is also why there may be a problem with the truism "the best way to counter bad speech is by allowing more speech". Well, what if the engagement algorithms cause the bad speech to get amplified 1000x more than the speech which has objectively verifiable truth claims? Free speech assumes that truth and lies would be treated more or less equally --- but that's not necessarily true on modern platforms.
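As a concrete illustration, here's a minimal hypothetical sketch in Python. The weights, field names, and numbers are all invented, not any platform's real system; the point is only that if comments and angry reactions count as strong "engagement" signals, outrage-bait reliably outranks calm accuracy without anyone ever "deciding" to promote lies.

    # A hypothetical engagement-weighted ranking, NOT any platform's
    # real system: comments and angry reactions count as strong
    # engagement signals, so the enraging falsehood wins the feed slot.
    def engagement_score(post, w_comment=5.0, w_angry=5.0, w_like=1.0):
        return (w_comment * post["comments"]
                + w_angry * post["angry_reactions"]
                + w_like * post["likes"])

    feed = [
        {"title": "Calm, accurate explainer",
         "comments": 10, "angry_reactions": 2, "likes": 300},    # score 360
        {"title": "Outrage-bait falsehood",
         "comments": 400, "angry_reactions": 900, "likes": 50},  # score 6550
    ]
    feed.sort(key=engagement_score, reverse=True)
    print([p["title"] for p in feed])
    # -> ['Outrage-bait falsehood', 'Calm, accurate explainer']

Neither post is evaluated for truth anywhere in that scoring; the amplification asymmetry falls out of the weights alone.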
So it's not only a question of whether people have the "right" to publish whatever they want on a platform. Sure, you can stand on a public street corner and rant and rave about whatever you want, including "stop the steal", or "the world is about to end". But you don't have the right to do that with an amplifier which causes your speech to blare out at 100 decibels. Similarly, platforms might want to _not_ amplify certain pieces of content that are killing people by spreading misinformation, or destroying democracy, or encouraging genocide. And that might very well be the best thing platforms can do.
But now we have the problem that content can be shared across platforms. So even if one platform keeps claims about vaccines causing swollen testicles from showing up on millions and millions of News Feeds --- what if that same information, posted on one platform, is shared on another platform which is much less scrupulous?
So for example, suppose platform Y decided not to amplify videos it judged scientifically incorrect, because it didn't want the moral stain of aiding and abetting people in killing themselves by not being vaccinated, or not allowing their children to be vaccinated. But another platform, platform F, which had done the research indicating this would happen but actively decided that $$$ was more important than silly things like ethics or not destroying democracy, might promote that content by linking to the videos posted on platform Y. Maybe the best thing platform Y could do would be to remove those videos, since even though it was no longer amplifying that information, the information was being amplified by another platform?