Hacker News

It's not as simple as whether or not a platform allows content of a certain type, from certain authors, to be published. It's also about whether the platform is pushing that content to others using automated tools tuned to maximize "engagement". That's some of the Facebook research which was suppressed until the Wall Street Journal published the leaks. Facebook apparently knew that changes it made were surfacing posts that made people angrier, because angry people were more likely to comment on those posts, thus improving "engagement". It didn't matter if that spread disinformation, made people more angry, or harmed the mental health of teenagers. It improved "engagement" and thus $$$ for Facebook shareholders.
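The mechanism is easy to see in a toy sketch. This is my own illustration, not Facebook's actual ranking formula: if predicted engagement drives ranking, and comments are weighted heavily (the kind of tuning the leaked research described), outrage-provoking posts that draw lots of comments will outrank calmer, more accurate ones. All weights and field names here are hypothetical.

```python
def engagement_score(post):
    # Hypothetical weights: comments count far more than likes,
    # because commenting is the strongest "engagement" signal.
    return post["likes"] * 1 + post["shares"] * 5 + post["comments"] * 30

posts = [
    {"id": "calm-factual", "likes": 200, "shares": 10, "comments": 5},
    {"id": "outrage-bait", "likes": 50, "shares": 40, "comments": 90},
]

# Rank the feed by predicted engagement, highest first.
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # outrage-bait ranks first
```

The calm post has far more likes, but the angry post's comment count dominates the score (2950 vs. 400 under these made-up weights), so the optimizer pushes it to the top regardless of accuracy.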

This is also why there may be a problem with the truism "the best way to counter bad speech is more speech". What if the engagement algorithms cause the bad speech to be amplified 1000x more than speech making objectively verifiable truth claims? Free speech theory assumes that truth and lies are treated more or less equally --- but that's not necessarily true on modern platforms.

So it's not only a question of whether people have the "right" to publish whatever they want on a platform. Sure, you can stand on a public street corner and rant and rave about whatever you want, including "stop the steal", or "the world is about to end". But you don't have the right to do that with an amplifier which causes your speech to blare out at 100 decibels. Similarly, platforms might want to _not_ amplify certain pieces of content that are killing people by spreading misinformation, or destroying democracy, or encouraging genocide. And that might very well be the best thing platforms can do.

But now we have the problem that content can be shared across platforms. Even if one platform keeps claims about vaccines causing swollen testicles from showing up on millions and millions of News Feeds --- what if that same content, posted on one platform, gets shared on another platform which is much less scrupulous?

So for example, suppose platform Y decided not to amplify videos it judged scientifically incorrect, because it didn't want the moral stain of aiding and abetting people in killing themselves by not being vaccinated, or not allowing their children to be vaccinated. But another platform, platform F --- one which had done the research indicating this would happen, but actively decided that $$$ was more important than silly things like ethics or preserving democracy --- might promote content linking to those videos on platform Y. Maybe the best thing platform Y could do would be to remove those videos from its platform entirely, since even though Y was no longer amplifying that content, it was being amplified by another platform?


