The headline seems to be a rather extreme (and willful) misrepresentation of the facts.
As the article points out, framing matters in these decisions, and I see absolutely nothing wrong with that. A video showing the horrors of war, presented as an argument against war, is simply different from a video of violence with an accompanying message promoting that violence.
One case in point is the Kim Phuc photo of a naked, crying girl fleeing a napalm attack in Vietnam. Facebook's deletion of that photo was widely criticized, as it should have been.
At least my experience using Facebook also speaks against the article's thesis: I cannot remember ever being shown any gratuitous violence. If they are trying to drive up clicks with shock value, they are doing a mightily bad job of it.
There’s a lot to criticize about Facebook, but this just doesn’t seem like it.
Some images become iconic and come to symbolize a greater issue, as that photo did. But I think that's different from calling any gratuitous, real graphic violence acceptable just because it's contextualized "correctly".
While big media is imperfect and has serious problems of its own, it provides a nuanced filter with regard to things like self-harm, suicide, genital mutilation, etc. I'd prefer that very personal, very dark themes not be run through a lowest-common-denominator filter, since many people lack judiciousness and good judgment when it comes to these issues. Is an image informative, or is it promoting something further? Exploitative, or journalistic?
> A video of a toddler being beaten by an adult would prompt hundreds of calls by Facebook users for its removal. But as the documentary reveals, a video showing just that is being used by CPL to train moderators on the type of content that should remain on the platform.
> In this case, the video was left to circulate online for six years after being flagged by online child abuse campaigners.
> The decision to ignore or delete extreme content appears to rely on how it’s being positioned. In one instance, the undercover moderator asks the advice of his colleague regarding a video that shows two underage girls, one of them being beaten by the other. “Unless it has a condemning caption, it’s a delete,” says the colleague, referring to the text accompanying the video.
> In other words, if the caption were promoting the violence or poking fun, it would come down. But in this case, since the caption condemns the fighting, the moderator is told to leave it up with a “mark as disturbing” warning. As the colleague notes, “If you start censoring too much, then people lose interest in the platform. It’s all about making money at the end of the day.” (emphasis mine)
> The undercover reporter inquires about a far-right page that touts anti-Muslim and anti-immigrant content. He is told that these pages, though they have exceeded Facebook’s “allowed content” violations, remain active and are “shielded,” preventing the CPL moderators from deleting them. “Obviously, they have a lot of followers, so they’re generating a lot of revenue for Facebook,” says one moderator. (emphasis mine)
That's all the substantive material from the article. I find it noteworthy that at least two of the damning quotes come from co-moderators, not from any sort of official material. Whether that reflects an intended culture on Facebook's part is up to the reader to infer.
"If you start censoring too much, then people lose interest in the platform."
It's interesting to hear them say that. I usually see moderators/administrators on the internet characterized as quick to wield the ban hammer or purge anything that could harm their brand. I rarely see this other side depicted, where mods/admins intentionally hold back because purging too much would make the place no longer cool. It's good to know people consider that a concern, though in Facebook's case maybe they're taking the sentiment a little too far.
The title is "How does Facebook moderate its extreme content?", which seems more accurate than "Undercover video suggests Facebook wants extreme and disturbing content."
The title provided by the OP is intentionally misleading and is taken from a quote by Roger McNamee.
There are other platforms out there (LiveLeak and WorldStarHipHop, for example) where it takes a minute or less to reach extreme content. If Facebook really wanted what the OP suggests, extreme content to drive ad revenue through engagement, as Roger McNamee assumes, then that's exactly the first video one would see when logging in to Facebook. The first video I see is always some shallow inspirational video by an "influencer" or a BuzzFeed video on cooking.
Dang, I'd rather the original title at least be used, so that anyone reading the article can reach their own conclusions and then bring whatever fire and brimstone is called for.
> if the caption were promoting the violence or poking fun, it would come down. But in this case, since the caption condemns the fighting, the moderator is told to leave it up with a “mark as disturbing” warning
This sounds exactly right. How the heck are we supposed to confront and fight society's ills if any video highlighting them is immediately deleted?
Maybe if people were more aware of the shocking problems people around the world are facing, they would be more inclined to act on them. Out of sight, out of mind.
'The film also uncovers how some Facebook pages that promote hate speech are left up and running. The undercover reporter inquires about a far-right page that touts anti-Muslim and anti-immigrant content. He is told that these pages, though they have exceeded Facebook’s “allowed content” violations, remain active and are “shielded,” preventing the CPL moderators from deleting them. “Obviously, they have a lot of followers, so they’re generating a lot of revenue for Facebook,” says one moderator.'
Content moderation is difficult, and there are many legitimate and arguable edge cases.
The existence of these edge cases and the difficulty of moderation do not negate or excuse FB's overall behavior, which, as several examples in the article show, goes far beyond them.
Roger McNamee, an early investor in Facebook: “This is essentially the crack cocaine of their product. It’s the really extreme, really dangerous form of content that attracts the most highly engaged people on the platform. So they want as much extreme content as they can get.”
This is the same reason that FB made a top-level decision to leave up a video known to have been maliciously edited to make the subject look drunk or impaired -- the subject being the person second in line for the Presidency of the United States. Never mind that this effectively amplifies & legitimizes false propaganda on one of the world's largest publishing platforms.
This is the same reason that FB ignored massive interference in the US election by Cambridge Analytica, Russia, etc. The content was surprising and increased engagement time; never mind that much of it was false, micro-targeted at people's specifically tested fears, and corrupted the electoral process.
Anything that people cannot turn their eyes away from will do, simply to extend "engagement" time and rack up counts of adverts displayed. It is the ruthless pursuit of attention in the attention economy.
The only restraint is the limit of how grotesque the content can be before people turn away, and that boundary is apparently being continually pushed forward by exposure.
Of course, they justify it all on various grounds: 'freedom of expression', 'let the viewers decide', 'we're not editors', etc., ad nauseam.
To be clear:
This is beyond malicious; this is knowingly (or at best willfully ignorantly) poisoning society for their own profit.
Not that this is unusual; corporations have often poisoned the water, air, & land for their own profit. The difference is that this happens at the scale & scope of modern technology.
If we don't want the tech industry to soon be seen in the same light as the old smokestack or cigarette industries, this needs to be curbed.
> The decision to ignore or delete extreme content appears to rely on how it’s being positioned. In one instance, the undercover moderator asks the advice of his colleague regarding a video that shows two underage girls, one of them being beaten by the other. “Unless it has a condemning caption, it’s a delete,” says the colleague, referring to the text accompanying the video.
I am distrustful of "undercover" investigations in general now, mainly due to the activities of James O'Keefe and Project Veritas. It's very, very easy to cut a "bad guy edit," especially when people don't know they're the subject of an interview.
> In one instance, the undercover moderator asks the advice of his colleague regarding a video that shows two underage girls, one of them being beaten by the other. “Unless it has a condemning caption, it’s a delete,” says the colleague, referring to the text accompanying the video.
> In other words, if the caption were promoting the violence or poking fun, it would come down. But in this case, since the caption condemns the fighting, the moderator is told to leave it up with a “mark as disturbing” warning.
This seems like a reasonable balance to strike. Context is extremely important; a video of Nazis marching down an American street can mean two very different things depending on who is posting it, and why.
> [Early Facebook investor Roger] McNamee describes how the platform’s business model actually relies on extreme content, as it keeps users on the platform longer, feeding them more ads and increasing revenue. “This is essentially the crack cocaine of their product. It’s the really extreme, really dangerous form of content that attracts the most highly engaged people on the platform,” he says.
This gets at a more salient underlying issue: Facebook, like pretty much every other app on your phone, makes money from your attention, and these apps are designed to be addictive. This is true of Facebook notifications, Reddit karma, games ... the art and science of triggering compulsive behavior has been weaponized by advertisers and is being used against us.
The fact that some people's compulsive behavior is triggered by fake videos of Nancy Pelosi doesn't matter to Facebook. There is no political angle there, only a financial one.