Although I don't fundamentally disagree with your point, I don't think you're framing the issue fairly. It's not that most people are enemies of free speech; it's that this is an inherently difficult problem. Everyone draws the line differently on what's acceptable and what isn't, and every platform is trying to foster a different kind of community.
Intent. If your speech is intended to compel someone (compulsion != persuasion), then it's a threat. Of course, accurately assessing intent is difficult, because threats are often implied rather than explicit (precisely because the person issuing the threat wants to avoid the consequences of making it explicit).
You put forth the idea in your parent comment that there is a simple bright-line test -- apparently textual -- for deciding whether a remark is an innocuous opinion or a credible threat.
So -- bearing in mind that the speaker's henchmen said they interpreted it as a command -- which side of the line does "Will no one rid me of this troublesome priest?" fall on?
I don't think people are enemies, adversaries, or even detractors of free speech; rather, they won't actually defend the kinds of speech that the ideal of free speech is meant to protect, especially when the censorship happens to affect their partisan opposites. I do, though, recognize the difficulty of allowing some things and not others depending on the forum.
Please excuse my shortcomings in explaining this, but I have to fall back on "I know it when I see it" when it comes to what counts as a violation of the principle of free speech vs content moderation. In the current zeitgeist, though, I absolutely see this as censorship rather than content moderation, because I sense partisan motivations behind content take-downs and topic-wide bans on such large platforms as YouTube, Twitter, Reddit, etc.
I wonder if we would be better served with an NPR-like publicly funded platform for video hosting that puts a lot more resources into content moderation. The private platforms get away with the bare minimum by throwing black-box AI at the problem, which leads to problems like the chilling effects of the anti-vax censorship, etc. There should be an easily reached human in the loop, transparent decisions, and levels of appeal, all of which is much more expensive. Kinda like the court system, with its jury-of-your-peers litmus test.
If the state's going to host it, then their bar should be whether it's legal or not. If they want to put scary warnings around certain content or lock some of it behind age restriction, sure, fine. But if they are going to use public funds to host a government run publicly available video sharing platform, they should be very cautious about removing any content that doesn't violate actual laws. Free speech and all that, if anyone still remembers the concept.
Yes, exactly. And deciding legality should be done better than today's automated moderation, which punishes innocent content without a working appeals process.
Requiring that content be sourced from actual humans acting in good faith, and identifying people who violate the terms of service by spamming with puppet bots, would be a good start. If the service is operated by a national government, you could require posters to prove residence or citizenship. But would anyone really want to use such a service? Could it even be produced or operated efficiently? Once you put strict legal requirements on the operating entity, it will get very slow, expensive, and user-unfriendly.
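To make the human-in-the-loop and levels-of-appeal idea from a few comments up a bit more concrete, here is a minimal Python sketch of what a transparent decision record with escalating appeals might look like. Everything in it (the Ruling categories, the field names, the appeal() helper) is a hypothetical illustration of the proposal in this thread, not any real platform's API.

    # Hypothetical sketch: a transparent moderation decision with a human
    # reviewer on record and escalating levels of appeal. All names are
    # illustrative assumptions, not a real system.
    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional


    class Ruling(Enum):
        ALLOWED = "allowed"
        AGE_RESTRICTED = "age_restricted"     # warnings / age gates instead of removal
        REMOVED_ILLEGAL = "removed_illegal"   # the only removal ground under an "is it legal?" bar


    @dataclass
    class ModerationDecision:
        content_id: str
        ruling: Ruling
        cited_statute: Optional[str]   # transparency: which law, not which internal policy
        reviewer_id: str               # a named, reachable human, not a black-box model
        rationale: str
        appeal_level: int = 0          # 0 = first review, 1 = review panel, 2 = "jury of your peers"


    def appeal(decision: ModerationDecision, new_reviewer_id: str,
               new_ruling: Ruling, rationale: str) -> ModerationDecision:
        """Escalate one level; each level re-decides the case with a human on record."""
        return ModerationDecision(
            content_id=decision.content_id,
            ruling=new_ruling,
            cited_statute=decision.cited_statute,
            reviewer_id=new_reviewer_id,
            rationale=rationale,
            appeal_level=decision.appeal_level + 1,
        )

The expensive part, as noted above, is exactly the reviewer_id and rationale fields: every decision and every appeal requires a reachable human and a stated reason, which is what today's automated pipelines are built to avoid.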
Er, politics in the period of 1949-1985 was not, generally, “more boring” than 1985 to present. The last few years maybe have achieved the undesirable level of not-boring that was generally the case through most of the 1950s, 1960s, and much of the 1970s, but certainly overall the post-Fairness Doctrine period was more boring than the Fairness Doctrine period. (The Neoliberal Consensus is probably a bigger factor than the FD on that, though; it’s pretty hard to attribute much of anything about the overall tenor of politics to the FD.)
Specifics depend on the country. In the USA, over the air broadcast is restricted in content on the premise that broadcast spectrum is a limited public resource, and that you don't have much choice with what you see when you tune in. That argument gets pretty weak with point-to-point networks with nearly unlimited bandwidth, as I see it. An analogy might be the difference between ads with nudity on billboards (I believe that can be prohibited in the USA?) and ads with nudity in a print magazine going to a subscribership expecting such ads (protected by the 1A, including for mailing through the USPS).
Public libraries are perhaps another source of analogy. My local library system has some of the most vile and objectionable works ever printed on the shelves due to popular demand. Many public libraries in Canada and the USA are quite absolute about that with regards to free expression. For example: https://www.cbc.ca/news/canada/ottawa/porn-library-ottawa-po... "Library patrons allowed to surf porn, Ottawa mom discovers"
Sadly, NPR is a bad example because they are hardly publicly funded and their content is pretty biased (I'm a moderate liberal and NPR definitely feels like it's left of me, even if they're typically more civil than other media outlets).
NPR censored coverage of the Hunter Biden laptop claiming it was not newsworthy. All platforms with any moderation will be censorious platforms by definition. You can always tweak the degree of censorship though with moderation.