> I've actually wondered if most people might not need to be protected from unregulated content in the same way that people need to be protected from exposure to lead, radon gas, etc.
Yes, but that means deciding which content is harmful, and that's where we are now. Figuratively, you end up with lead-lickers coming out of the woodwork saying that their way of life is being stifled by regulation/moderation.
Chasing user "engagement" has been pushing conversations from the mundane middle toward the fringes. Thus, people make an understandable but hasty generalization[0] that what they're seeing is more common than it actually is.
[0]: https://en.wikipedia.org/wiki/Faulty_generalization#Inductiv...
In the past, I think this drift was counteracted by codes of morality (whether internalized, reinforced by people you know, or promulgated by regulatory bodies) as well as the limited means of disseminating information (few newspaper editors/radio announcers/news anchors to many readers/listeners/viewers). Though I'm sure there were plenty of wild pamphlets spreading chaos in the centuries between Gutenberg and Zuckerberg.
Even though most of those morality codes are downright oppressive by today's standards, and the many-to-many distribution enabled by the Internet has many benefits, we haven't found a substitute, so there's a gap in our armor.
Side note: Believing conspiracies and yearning for totalitarianism are two different failures in thinking. I say that because only the latter had strong support in the 20th century (even earlier, if you count monarchies). Someone supporting Flat Earthers isn't harming me (except by undermining science in general); someone supporting Stalin 2.0 is an indirect but real threat to me.
> Yes, but that means deciding which content is harmful, and that's where we are now. Figuratively, you end up with lead-lickers coming out of the woodwork saying that their way of life is being stifled by regulation/moderation.
There might be something to be said for increasing 'reporting requirements' instead of limiting speech. This isn't a fully formed position of mine, but rules along the lines of no anonymous speech[0] and stricter fraud rules[1][2] are imo compatible with free speech and its ideals while helping to manage fraud and propaganda.
[0] So if an AI wrote a blog spam post, it should at minimum be illegal not to credit the AI in the byline, with e.g. a unique identifier (a rough sketch of what such a byline could look like follows these footnotes).
[1] Say loudly and publicly that there is a pedophile ring in the basement of a pizzeria, with no evidence, go to jail.
[2] Not that such rules can't/haven't been abused before though.
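A minimal sketch, in Python, of what such a machine-readable byline could look like. The JSON field names and the hash-based identifier scheme are my own illustration of the idea in [0], not any existing disclosure standard:

    # Hypothetical AI-byline disclosure record. Field names and the
    # hash-based identifier are illustrative assumptions, not a standard.
    import hashlib
    import json
    from datetime import datetime, timezone

    def make_ai_byline(model_name: str, model_version: str, post_text: str) -> dict:
        """Build a disclosure record tying a post to the model that wrote it."""
        # Hashing the post text yields a unique, verifiable identifier.
        content_id = hashlib.sha256(post_text.encode("utf-8")).hexdigest()[:16]
        return {
            "author_type": "ai",
            "model": f"{model_name}/{model_version}",
            "content_id": content_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        }

    post = "Ten unbelievable facts about lead paint..."
    print(json.dumps(make_ai_byline("ExampleLM", "1.0", post), indent=2))

Publishing a record like this alongside each generated post would make omission auditable, which is the point of a reporting requirement as opposed to a speech limit.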