A lot of attention gets paid to sexual abuse content, as it should be, but I feel far less is paid to all sorts of other abhorrent content: gore, violence (physical and mental), terrorism, death, destruction, shock content, etc. It's not just NSFW but also NSFL content that I think is under-discussed at times. Seeing the worst output of humanity non-stop can (and _will_) completely break you mentally if consumed in sufficient amounts, because at the end of the day we are human. It's unfortunate the world is a messed-up place, and maybe it will stay that way as long as humans are around.
This is the exact type of content that I think AI is so crucial for detecting, and it's a bit sad that we so often hear about all the bad things you can generate with AI and so little about all the good. If we need to train models on this type of content to make it easier to detect and remove, to prevent mentally scarring people, then personally I'm all for it -- regardless of the "it can also generate that bad content" cost.
The issue is less about detection and more about the sheer volume of content. Even if your false positive and false negative rates are low, the absolute numbers are huge. Which is why everyone ends up hiring more and more people as content volume increases.
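To make the scale problem concrete, here is a back-of-the-envelope sketch. Every number in it is invented for illustration (not a real platform's figures), but the base-rate effect it shows is the point: even "low" error rates multiplied by huge volume produce enormous absolute counts.

```python
# Illustrative only: assumed volumes and error rates, not real platform data.
daily_posts = 500_000_000           # posts reviewed per day (assumption)
bad_rate = 0.001                    # fraction that actually violates policy (assumption)
false_positive_rate = 0.01          # benign posts wrongly flagged (assumption)
false_negative_rate = 0.05          # violating posts missed (assumption)

bad = daily_posts * bad_rate
benign = daily_posts - bad

wrongly_flagged = benign * false_positive_rate   # innocent posts flagged per day
missed = bad * false_negative_rate               # violating posts slipping through

print(f"{wrongly_flagged:,.0f} benign posts flagged, {missed:,.0f} violations missed, per day")
# → 4,995,000 benign posts flagged, 25,000 violations missed, per day
```

With these made-up rates, a 99%-accurate filter still mis-flags millions of innocent posts a day, and every one of those errors is a candidate for human review.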
What has changed over the years is that the cost to spam the entire world with a zillion messages a second is tending to zero.
While this is an achievement, it's also a disaster, because it's akin to everyone having an almost-free high-power broadcast transmitter sitting on their roof.
We regulate and auction radio spectrum and don't allow everyone on the planet to broadcast simultaneously on whatever frequency they want, because the noise would make the signal drop to zero. But on the internet it's allowed, and it's almost free.
>> What has changed over the years is that the cost to spam the entire world with a zillion messages a second is tending to zero.
One solution to that is whitelisting. But we don't want 3rd party censorship, so a federated system where we rate sources and content ourselves seems like a good idea. Unfortunately kids won't filter anything - they are drawn to the unusual and forbidden.
How is whitelisting going to work? You can’t whitelist Instagram photos because that would prevent people from posting new ones. You also can’t whitelist users because bad actors just hack their accounts. All of the disallowed content I’ve come across (pornography on Facebook or Instagram) was posted by an obviously hacked account.
If you add an extra checkmark people have to get to be able to post then hackers will just target those people.
The person you are responding to is trying to obtain a 100% success rate on filtering, and explicitly discusses the Facebook model as insufficient because the people you follow might suddenly start posting porn (due to their account being hacked).
I think the real problem here is that you seem to have a zero-tolerance policy on bad content: yes, people's accounts get hacked... but that sucks for them so much more than for you, and yet you seem concerned that your eyes were forced to see some porn?
And look, I get that we are talking about children as the viewers, not you... but children simply shouldn't be on the Internet browsing random stuff in the first place!! The whitelist can then be all-encompassing, as it should be their parents showing them stuff.
It turns out this is actually a very hard problem which is why centralized social media companies get it wrong so often (and why that might still be preferable to the alternatives).
I was looking at rotten.com when I was like 14. I could almost handle the person gore but the animal gore affected me and still does if I see it today.
Basically baking in a reputation system into network protocols, to deter bad actors… not sure how viable a solution it is, just fun to think how far back people were thinking about solutions to these problems.
Related to your point - Meta got sued by the human team employed to detect gore, violence and death. They fired the entire human content team at their biggest content moderation site in Nairobi, Kenya after the team fought to unionize.
Here's the story - https://techcrunch.com/2022/03/30/meta-and-sama-face-legal-a...
I think big tech companies are pushing towards AI to avoid similar court cases. I can't imagine how *much less stressful* it is for employers who avoid legal tussles with their subcontractors. In this case, Meta bore the brunt of the underhanded dealings of their contractor, Samasource!
Large tech companies employ subcontractors specifically to take flak for this kind of thing. They save money, but more importantly deflect criticism by allowing subcontractors to engage in employment practices that break their own policies. Your sympathy should never default to a large, wealthy company, but to the exploited, usually underpaid, employees and 'contractors' forced to do unpalatable things out of the eye of regulation.
There's an important distinction between what Meta is doing here and other types of low wage work such as clothes manufacturing. We have to consider where the work lies on the spectrum of voluntary vs coercive. The problem with Meta's gore content moderation job is that the people signing up for it didn't know what they were getting themselves into. Or if they did, they weren't aware of the long term health consequences (but Meta surely were). So it's closer to coercive than voluntary. Meta were abusing the information asymmetry. If Meta gave each new hire a dossier outlining how bad the health impacts will be before the contract is signed, then I wouldn't have a problem with it, because anyone who signs up knows what they're getting into. But Meta doesn't do that. They obscure this for new hires in order to exploit the information asymmetry.
To provide a more expanded horizon: watching it was a hobby of a peculiar group of kids in my middle school (and earlier). They teased each other over who could be the most cool and unfazed (eating a sandwich while watching, for example).
One of them is a paramedic now. I imagine surgeons might go through similar character development.
It's funny how often we get asked "what's the worst call you've ever been on?" which, when you think about it, is a strange question to ask. Do you go around to other people in your life on social occasions to ask them to relive unpleasant events for your curiosity?
However, that being said:
It's generally assumed that it's the most gruesome call. The most unpleasant trauma. That's not the case. When it comes down to it, we're all just tissue and blood. We stop bleeding, put (some) things back where they belong, and try to replace what has been lost (fluids), etc.
The worst calls are generally the tragic, the emotional abuse. Child sexual assault. Elder neglect. The horrible accidents that are no-one's fault, but will rack someone with guilt for the rest of their lives.
Those ones add up. I'd say those are the reasons for most EMS and Fire PTSD.
There's a good documentary about the rebirth of Detroit, which was for a long time the number one city for arson in the world, called "Burn" (incidentally financed/championed by Denis Leary)...
One of the firefighters in that documentary says, sitting, emotionally exhausted and eyes shallow from a long night, or career:
> I wish my head could forget what my eyes have seen.
A friend of mine (in a career not remotely connected) still likes to watch them, often before going to bed of all times!
I'm unsure how much of it is curiosity or whatever psychological conditions the rest of the comments were suggesting. Regardless, I would not consider it an ideal approach to build your stomach for the medical field.
The "what doesn't kill you makes you stronger" thing is ironically an example of black-or-white thinking which psychologically works to prevent people from dealing with the complexity and nuance of emotion.
The best counterexample is freezing damage. Each time makes you weaker.
I believe both participating in war and being a victim of a crime make you mentally weaker.
Many veterans seem to be on the brink of collapse, and those people were probably better suited mentally than the average man for enduring war, since they self-selected for it. Or maybe less naive people would fare better?
'Positive dismissal' seems intuitively useful, and I do appreciate the nuance of weakening emotions that can be converted into productive assets with this kind of thinking.
It isn't useful, it is used to avoid going through a healthy emotional arc.
It is reflective of a society that sees emotions that don't feel productive as "bad" and makes excuses or invents falsely positive reasons to dismiss them.
The classic example is the British attitude of the "stiff upper lip".
These narratives result in lower happiness, less compassion, and generally poorer mental health.
Don't let a society that encoded outdated views on emotional health dictate that you shouldn't freely allow emotions to complete their cycle and be expressed.
I truly can't express how bad perpetuating this kind of thinking is on an individual and societal level.
I've been online since the late eighties and over time I've seen some nasty NSFL stuff. I am absolutely certain this has shaped my world view to some degree. Like some other GenX-er said: "Internet is our Vietnam".
> I am absolutely certain this has shaped my world view to some degree
On the other hand many people, if I had to guess I'd say the vast majority, have not seen any of that stuff on the internet. I have, but because I actively went looking for it. Likewise if I ask around amongst friends the only ones who saw the real gore all admitted having actively searched for it. In other words: even before we saw it there might have been something different in our world view already.
It depends how terminally online you are, too. Even without actively looking for it, you'll eventually see it. Probably less likely these days than on the wild-west internet of ~y2k. But visiting enough forums and going to enough personal pages at the time eventually put you in front of some NSFL content. Or at least a link that you didn't actively look for but now have the option of clicking, purely out of curiosity.
That's sort of the point though: we do that, but whereas around y2k forums were the online communities, I don't exactly have the impression that is still the case. And it's not like you're going to see much NSFL content on, say, TikTok.
> In other words: even before we saw it there might have been something different in our world view already.
Agreed. I was brought up in a troubled home and spent a lot (if not all) of my childhood/early teens on 4chan's /b/ board as a result (mid-late 2000s, early 10s). Looking back at it, it seems like it was THE place that brought together all the damaged individuals.
The stuff I saw there desensitized me to basically every disgusting/horrid/tragic thing I had to witness later on in life, other than the losses of loved ones in and of themselves.
Agreed, there's a stark difference between seeing someone getting decapitated on screen and then closing the tab, versus seeing it IRL and being scared of being the next one in line, or realizing that the person standing next to you is the one doing it.
This is referring to the "GenX" who watched this of his own free will, which is also not comparable to what the people in this article have to deal with.
I agree that it's the ideal use case for the AI, it's just the issue there is that it's an area where you can't afford any false positives or negatives. All hell will break loose if sexual abuse content or footage of terrorism/cartel incidents ends up on random people's Facebook timelines, and another kind of hell will break loose if innocent people see their accounts nuked and their details reported to authorities because the system thought something they posted was illegal content.
We've already had both things happen in recent times, and it's been a legal and PR nightmare for affected companies. So while using AI would be great to avoid harm to any human moderators, the system would have to be perfect (or nearly perfect) in order to avoid any side effects that would cause serious harm to innocent people and the company/platform in question.
So long as we aren't using the AI to punish the posters. See the Google incident last year with the guy losing his entire account because of telemedicine.
I'm okay with false positives. You took down my video of a sphynx cat because your AI flagged it as NSFW? Fine. I can't think of a scenario where random people occasionally having posts blocked because of incorrect flags is a real risk. The risk comes when they then use that to ban and delete accounts. If they just lock the account in a non-destructive way, i.e. block new posts or shadow-ban, and reverse it on appeal, that's acceptable.
This is of course excluding the use of such tools in a non-good faith way.
Sadly you're probably not wrong here. But the consequences for that sort of attitude when it comes to this particular field are extremely high, and I dread to think of the outcomes in said cases.
I think it's under-discussed because platforms regulate that pretty well, if only because advertisers would absolutely hate being associated with any of that. Meanwhile, sexuality is much more ephemeral and easier to slip in.
>This is the exact type of content that I think AI is so crucial for detecting, and it's a bit sad that we so often hear about all the bad things you can generate with AI and so little about all the good
Most companies care about making money, so there's a lot more focus on how AI can streamline workflows and replace labor, and not as much business interest in using it as an auto-moderator.
> the bad things you can generate with AI and so little about all the good
Generative AI and detection/classification are very different. The bad news of the last few months are almost exclusively about generative AI. Your good example is a detection AI.
> Seeing the worst output of humanity non-stop can (and _will_) completely break you mentally if consumed at a sufficient amount, as we are at the end of the day humans.
At the end of the day, we evolved to endure much worse than sitting at a computer and watching stuff happen, and our ancestors have.
If anything, being too coddled and sheltered is a way to produce way more harm. Even being raped is more traumatic when you're socially trained to expect to be traumatized.
We are using AI and GenAI to get better at detection, training and policy.
However, there can never exist a tool that correctly detects all harmful content. You would need the intent behind a submission to get to that level of accuracy.
Everything else is: AI filters -> Human review.
Your offenders come in two types: bad actors and adversaries. Both evolve to find ways around your detection processes.
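The "AI filters -> Human review" split can be sketched as a simple threshold triage: the model acts automatically only at high confidence, and the uncertain middle band goes to a person. The thresholds and the function here are illustrative assumptions, not any real system's values.

```python
# Sketch of a two-stage moderation pipeline (illustrative thresholds, not a real system).
def triage(score: float, auto_remove: float = 0.95, auto_allow: float = 0.05) -> str:
    """Route a model's 'harmful' probability score to an action."""
    if score >= auto_remove:
        return "remove"          # high confidence it violates policy: act automatically
    if score <= auto_allow:
        return "allow"           # high confidence it's benign: let it through
    return "human_review"        # gray zone: a person makes the call

print(triage(0.99), triage(0.02), triage(0.6))
# → remove allow human_review
```

The gray zone is exactly where adversaries aim: they probe until their content scores in the "allow" band, which is why the thresholds and the underlying model need continuous retraining.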
This is not where the mainstream is heading. Content like that is illegal to store in some jurisdictions, handling it invokes GDPR issues, no one in their right mind would touch it with a long stick. LAION case is a good recent example.
This could be a reasonable business for a company in the countries where it is not illegal to moderate content like that.
Eh, it’s all a matter of perspective. We get used to animal gore with no mental side effects. If violence is normalized in your society and not directed at you personally (as in, for instance, ancient Maya/Inca societies), it’s not going to bother you all that much.
Do we get used to animal gore, though? Do you know why there are legal time limits on how long you can work at an abattoir? It's because anyone who works there for more than ~3 years tends to become a serial killer. We're used to seeing meat because it's out of the context of a slaughtered animal, in the same way that a mortician is used to seeing corpses in caskets but would freak out if there was one in their shed. If people had to kill their own cows, I imagine vegetarianism would take off.
> People used to kill their own animals and weren't vegetarians.
That's not "gore".
Animal gore is not subsistence living. It's, with apologies for the graphicness, videos of people catching cats in cages and dousing them in gasoline in the cage and ...
Could you elaborate here? Spurred by you and the person you're replying to painting such different pictures of what sources are out there, I took a look myself. I didn't find it so trivial to find sources backing up the person's claims.
> Most places do not have "time limits".
I couldn't find anything about slaughterhouses having lifelong limits for employees and would love a link or search term to use because it's an interesting concept to me.
> I find no evidence of abattoir workers becoming more likely to be serial killers.
I also could not find evidence for this. Here is a meta analysis of many studies, none of which mention slaughterhouse workers being more likely to be serial killers. Some studies show higher crime rates in towns with slaughterhouses, but not for violent crimes. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10009492/
I agree with everything you said. This is horrific, and no person should have to go through seeing all that disgusting material for my benefit of using the Internet. It’s really something that pushes me further and further away from using the Internet. You can put it on par with driving an electric car, knowing that the lithium was mined by nine- and ten-year-olds somewhere in Africa.
I’m not a religious man, but I think I’m becoming one, mostly because I feel the appearance of all this stuff correlates with the decline of religion. If I only knew which one to choose that didn’t have as much baggage behind it as the Internet does, I would surely be a devout follower.
I’ve heard it said that the existence of objective evil is only explained by the existence of objective good. And yeah… some of what happens in the world just has to be a step beyond subjective evil. Please stay curious :)
“All this stuff” correlates with a lot of other things too. And is happening all over the world, including places where religion is in rude health indeed.
> Seeing the worst output of humanity non-stop can (and _will_) completely break you mentally if consumed at a sufficient amount, as we are at the end of the day humans.
What is the source for this seemingly widely held belief?
Bad images on a screen aren’t inherently harmful to many people.
I grew up (10+ hours a day for a decade) on the edgy, uncensored internet, and I don’t think it broke me or harmed me in any way.
“Growing up on the edgy internet” isn’t the same as a work week of seeing non-stop images of CSAM or beheadings or shootings. Even if you were on /b/ all day everyday in 2007, the two aren’t remotely comparable.
I was in a Chatroulette one day that did a number on me. 10 seconds, mostly for me and my neighbor to parse what we were seeing. Once we did we immediately exited and shut the laptop and just sat there, completely mindfucked at what we saw. This was maybe like 2010 and I still think about it, it was horrid. If I was subjected to content like that for 40 hours a week I’d either be an alcoholic or checking myself into the psych ward.
Obviously, I can't study the whole database and give percentages, but in practice most “images of CSAM” are either semi-professional or amateur photos of naked kids and teens in studio or home environments, most likely with an ever-growing flood of selfies recently, sometimes with sexual acts. I know we've all been going in a strange direction, but a naked body is still not a “traumatizing experience”. A regular person seeing that content for weeks only gets bored, and probably realizes that regular adult exploitation in entertainment is hardly any different (both meanings of “adult” here), which is actually a good development.
However, you got used to cheap thrills coming from stories about “horrors out there” (whether digitally far away in some “darknet”, or geographically, in some “third world country”), and your imagination has already drawn you a picture of countless maniacs recreating slasher movie scenes (“life imitates art”?). It is simply logistically impossible to create that much, not to mention that criminals are not in a hurry to film themselves.
As for “shootings” (an American term for a media spectacle in which the murders are just seeds), many of those have been casually televised for entertainment purposes. I recall some high-profile “shooting” even involved a president demonstrating how all good citizens should consume that content in front of TV screens.
It's actually worse. If you're watching something terrible like gore, there's still some distance between you and the thing that you're watching
OTOH, if you're getting emotionally abused in a place that's already full of damaged people (and if you're seeing all of their pathological coping strategies first hand), that has more potential for long term harm
We can measure objectively the damage ethanol does to the tissues in the body, both short and long term, and the impairment it causes to judgement. We have studied its use both intermittently and chronic, and its effects are well-understood.
Can we say the same about bad images on a screen?
All of this “if you look at enough of them you become broken” seems like folk “wisdom” to me. I don’t want to be the guy holding up the “[citation needed]” sign, but I am truly skeptical of this claim.
Just because it’s gross and unpleasant to look at doesn’t mean that lots of exposure causes permanent damage.
It's funny how many people like you have to pop on and do the opposite of virtue signalling (vice signalling?) to try and fail to prove a point.
The 'edgy, uncensored' internet you know about isn't even close to what paramedics and people in the OP experience. I know people that did the same thing, they grew up on that side of the internet. One of them became a paramedic and had to quit after a few years because it was starting to severely break them.
But by all means, if you're the kinda person to go 'Nah, I wouldn't break' then you should give these jobs a shot. See how far you get.
There is no source because it's not true. People pretend they are horrified more than they really are because it's more "normal" and socially acceptable. They think more about what others will think about them than what they are actually seeing: "Will others be okay with the way I react to this?", etc.
Look up PTSD, the causes and the broad spectrum of people it affects without them even realizing what's happening, and the years of suffering it can cause.
PTSD is real. I once fled a forest fire. That's not what I'm talking about.
I'm referring to the assumption that looking at porn, or violent pictures will "absolutely break you". Which I don't believe is the case.
Maybe if the person was really into an orthodox conservative religion, I could see the possibility of an extreme stress response from seeing the pictures. But in that case it's their brainwashing, not the pictures, that caused the trauma.
Of course, but I think the average person is actually more emotionally resilient, easily desensitized, and capable of killing than the average American generally assumes. History has plenty of examples of this; modern times do as well. If filtering this content was decently well paid and had been an established job for 50 years, 99% of workers would be totally fine. Just clock in and clock out. No, it is not guaranteed to give every person PTSD if they see gore, which is what the person who started this comment chain claimed and what I was responding to.
I worked for a dating company for a while. The moderation team was mostly young people, predominantly women.
Fortunately, compared to what the people in the article deal with, it was relatively tame. But it still took its mental toll - having to look at dick pics all day is not healthy.
> having to look at dick pics all day is not healthy
Why? I mean, if you are just looking at nudity - even if the men are 'prepared for sex' - at the end of the day, it is just nudity. I've worked in nursing homes. Other folks work at hospitals. Lots of folks see nudity everyday. Lots of those folks see genitals daily, in a much more intimate way than a dick pic.
I don't really get what is so unhealthy about it. There is nothing inherently wrong with simple nudity in general, even when it is suggestive.
There's more nudity seen in one day by a T&S team than a whole hospital will have to deal with.
Finally, you curtail your argument by saying “simple nudity”. Trust and Safety exists because of context and action. These aren't tasteful nudes or art people are looking at.
I won't speculate too much, but the intent of a picture, the context, and the target make a difference. Sending a pic to someone not interested in them, or to a minor, has a vastly different connotation.
In general, sex isn't something to get upset about. A hard penis isn't traumatic. It is still simple nudity.
Heck, it isn't even universally linked to sex. I don't have a penis, but from what I understand, they do randomly become erect and cycle through this stage during sleep.
A penis, even in hand and suggestive, is still simple nudity. None of the people in the center have had it sent to them personally, and surely they know.
Harassment using pictures of yourself is different: That's harassment, and should be frowned upon. They are still using simple nudity to do it, and the nudity isn't the issue.
If I sent someone a picture of a raw steak via a dating app, it wouldn't "just be a picture of a steak". There's nothing inherently wrong with pictures of steak, but the context is important.
In the context of a dating site, sending someone a picture of a steak is inappropriate and creepy. Is the user trying to communicate that they see someone as a piece of meat to be used for pleasure? Is it a threat ("I'm going to cut you up")?
Likewise, there's nothing inherently wrong with a picture of a penis. But seeing first hand the aggression and intent to humiliate or victimize that's behind a dick pic is what's disturbing. See enough of that every day and you'd lose your faith in men. That's the problem for the moderators.
Making general statements is the starting point for policy principles. The application is nuanced. Defining simple nudity is, I would say, of low importance; I would be looking at types of harm using case data.
Spending time defining only the non-problem parts is a luxury. There are pages of queue to clean and no one to do it. I need a definition, plus examples and tests, that a team can actually use daily.
Furthermore - unsolicited sexual imagery IS traumatic to a swathe of people (survivors of abuse) and illegal in many cultures.
That moderators get affected by seeing such content should not be a surprise. The very least that happens to mods is desensitization. Paranoia, PTSD, and high-risk behaviors are common fates if you work in tough queues.
Surely if your job entails reviewing these images, then the images are no longer unsolicited, and are now just simple nudity as the poster states. I don't like warehouse work, and I do find it traumatic, but that's why I don't work in a warehouse.
I think that's a bit reductive. I suspect the constant stream of dicks, day in day out, in a dating app context, would cause a change in opinion in the moderators. It's the sort of thing that sneaks up on you without you quite realising it.
The pics they were looking at were not "simple nudity", and it is puzzling why you insist on framing them that way. People can and do communicate through pictures more than just the mere object depicted.
That being said, if it was a picture of a penis erect in sleep, the context of "probably not consensually taken" by itself makes it more disturbing. And seeing a lot of these can easily make you more paranoid about sleeping anywhere you can't lock the door.
> Harassment using pictures of yourself is different: That's harassment, and should be frowned upon. They are still using simple nudity to do it, and the nudity isn't the issue.
If you had to view that harassment secondhand every single day, wouldn't you feel a little disgusted after a while?
Sure, you can look at it and say "this isn't about me", but does that make the disgust go away?
>Why? I mean, if you are just looking at nudity - even if the men are 'prepared for sex' - at the end of the day, it is just nudity.
I think a big part of it is adequate training. Think of surgery: a trained surgeon seems able to cut someone open (to help, obviously) and go to bed fine that night, but if I ever had to do it, regardless of the outcome, I most likely wouldn't be able to sleep.
An adequately trained content moderator should be equipped with whatever emotional tools needed before they're expected to moderate their company's content safely.
I think it made a valid commentary about how silly the parent comment was.
You almost certainly just become completely desensitized to genitalia pretty quickly. There’s nothing innately terrible or scarring about human anatomy, the taboo is purely cultural. Some basic exposure and you get used to seeing them. Just becomes like seeing a foot or an arm.
I think that’s the point being made though a little more glibly.
That's your opinion; here's mine: I found it sort of funny, plus it actually raises an interesting point (definitely not useless, plus bonus points for such humor btw). I always wonder how gynaecologists, urologists, etc. mentally deal with this exact aspect. For instance: what is sex in the evening like if you've seen x different genitals during the day? How easy is it to separate that? Or is there no separation, perhaps?
I'd be surprised if more than 1% of people could get an erection in a urologist's office (unless their visit is due to priapism). You're likely visiting due to a painful or embarrassing problem with your equipment.
In this case I was actually wondering about the urologist, not the patients themselves. The latter is a situation I've been in myself so it's easier to relate to and reason about.
Honest question: do you have such job yourself? Or do you intimately know multiple people who do? Or does this come from research? Because if not, that would just sound like how you'd want it to be. Not necessarily what it is like in reality.
But in any case: yes, I know it's a different context, but that's not the question. Rather: is it really that easy to just separate context 100%, and at will?
I’m not GP, but I assume the point is that it’s part of the job. Coroners look at real dead people all day. EMTs see horrific injuries. Many jobs have unpleasant aspects, some horrible, but this isn’t novel or unique.
Nonsense. Looking at a part of human anatomy is scarring only if you're an incurable prude.
Would looking at human hands all day "take its mental toll"? Probably because it would be as boring as watching paint dry, and doing that day in day out is not good for one's sanity.
It's the intent to shock or humiliate behind the sending of a lewd picture that's disturbing, not the picture itself.
A urologist can look at penises all day and not care because there's no ill intent behind the display. On a dating site that's absolutely not the case.
Hands are completely different from genitals. There is a reason why exposing oneself in public is a crime. No one who didn't consent should be part of someone else's fetish. It's not simply about being a "prude", although there is nothing wrong with being that either. There are many women who are traumatized by getting sent dick photos; it's intrusive and disgusting, and the people who send them know it.
Completely? Yeah, I don't see it. Maybe because I'm German and there's very little fuss made about nudity over here*, which in turn makes me very confident it's just a cultural thing. Society taught you that there's something special about genitals, so you start believing it. Most people can probably overcome this through simple exposure or doctors would have a problem.
What's going to matter more is the setting. I'm guessing the pictures are supposed to be sexual, which might affect viewers more than the fact it's a picture of a penis.
> There are many women who are traumatized by getting sent dick photos.
I'm sorry, but if seeing a penis can traumatize someone, there's something wrong with them - and it's not the fact they've been sent a dick pic. It doesn't excuse others being intrusive or exploiting that vulnerability, but it's still a vulnerability that is way abnormal and should probably be addressed.
Overall I agree with you that it can affect someone, but I'd do so with more nuance than using words like "trauma".
*There's way more "extreme" examples than Germans, even ignoring our animal relatives.
I don't disagree that kind of thing being targeted at a person can be psychologically damaging, but not because there's anything specifically different about the genitals. It's all about the motive and meaning behind it.
I think the Anglosphere's over-sexualisation of any and all nudity makes it worse than it needs to be (I think my exposure to e.g. Germany's sauna culture on a few trips has really helped me form a healthier attitude to nudity in a dozen or two hours than the rest of my life over here, and that kind of thing being more common would probably help people be able to more easily shake it off), but I think the traumatic part is the stalker behaviour and being directly targeted by someone with a lewd motive. And it being done when the person sending it knows it's undesired, forcing them to look, is kind of almost rapey.
So for me, I'd find it creepy if somebody was targeting me with that, but I would have zero issues approaching it from "I'm here to filter out the dick pics so I'm going to see a lot of anatomy". This is very different to e.g. gore, where I actually do feel a bit physically sick if I know it's real, but if I randomly see nudity that's not aimed at me it's basically nothing.
While reading stories like these, one thing that depresses me the most is this - how little humanity/empathy these suits at corporations have. Sure, their primary (probably only) goal is to make money, but that doesn't mean they have to treat their employees/contractors like trash.
Costco has shown repeatedly that it is possible to build a good business, while treating employees/suppliers etc decently. If Costco can do it in retail business (where the margins aren't as lucrative as software business), why can't Meta?
This seems like two different questions being interwoven, asking both why high-turnover business models exist and why the management culture in those business models is the way it is.
I can't answer the first, but AFAICT the latter is just an evolutionary pressure of a sort: as more ruthless cultures are selected for, the successful individuals in those cultures become those who embody ruthlessness all the time instead of people who adjust their level of compassion up or down as the situation dictates.
Yup. Having an authority figure able to adjust compassion appropriately up or down can leave a failure mode where someone manipulates the authority figure into being more compassionate than appropriate, getting the manipulator benefits at the expense of the authority figure or organization.
In a very competitive situation where those expenses hurt competitiveness, eventually that can result in the authority figure/organization losing the competition - which removes that authority figure from authority (or destroys the entire organization) and hence the benefits to the manipulator.
Having an authority figure who is always ruthless and selfish doesn’t have that failure mode.
The failure mode in that situation is one where the authority figure is inappropriately ruthless and selfish to the point it is harming their effectiveness/destroying the organization. However, the system is able to handle that better - as long as the authority figure follows the codified rules and delivers results, that’s ’business as usual’, and if they don’t, there are penalties - up to removing the authority figure or ruining them. It tends to be more obvious than excess manipulators, as by their nature assholes leave more obvious marks.
And the authority figure's selfishness here helps prevent them from doing anything to an extent that can be documented.
In a brutal environment, a consistently ruthless and selfish authority figure (as long as they are effective in their goals) can be the easier and more stable option.
Near as I can tell, it’s why ‘dark triad’ traits are evolutionarily advantageous in the right mix.
Too much of them destroys what is needed to actually produce value and grow. Too little makes one susceptible to predators.
One thing I feel it's important to make clear is that the only reason these people seem to thrive is that corporate culture has chosen to believe social Darwinism to be true (despite all evidence and history showing how dangerous this ideology actually is).
Until we see a culture shift in corporate leadership, these types of people are going to continue to be rewarded and go on to mentor the next generation to do the same.
I don't understand why everyone is hating on this guy. He didn't say it in a "can you believe it?!!!" way. It was a matter of fact statement about his understanding of motivation of his employees. The tiktokers are satirizing an attitude which was never displayed.
"So much of what running a business is about is figuring out 'how do I connect with people?'" Shaich told BI. "What motivates them, and how do I help them decide to affiliate with what the mission of the enterprise is?" He said therapy is about better understanding ourselves, where we come from, and our own motivations. In turn, that can build our ability to form alliances and find commonalities with other people. "And that makes you a much more powerful manager," Shaich said. "Every one of us brings our own humanity to it. Similarly, I think it makes us more empathetic if we start by being empathetic to ourselves and learning and thinking about ourselves. We're able to be more empathetic, not just to our team members and the people we work with, but also to our customers."
I'm actually missing the point as well. It sounds to me like he is agreeing with the sentiment in those memes, that employees aren't motivated by making owners richer -- and you need to empathize with them to understand them.
Fame-hungry microbloggers are lying about an acceptable class of target to clickbait lazy users and ride an elevated engagement bandwagon? Say it ain't so!
>but that doesn't mean they have to treat their employees/contractors like trash.
Apparently, CEOs are disproportionately likely to be sociopaths (with some psychopathic tendencies as well). I've heard numbers from 15-25%.
>Or live in some kind of weird bubble - like this arsehole for example
The Panera founder? He seems to understand his place pretty well. He may be in a bubble (a bit too idealist about what he hopes for from what are just entry-level workers, likely with no growth prospects), but I can't really call him an ass for attempting to understand. Most others just don't seem to GAF.
Personally I think most C-levels are sociopaths. The good ones have learned how to fake empathy and compassion, but they will have no trouble switching that off and showing their true colors.
I say that because I have yet to reconcile how any good person could step up to run a business in a world where the sociopaths almost always win.
Good people cannot fake being evil, but evil can fake being good. Consequently, good people have an inherent disadvantage that will lose in the marketplace when faced with evil adversaries.
> I have yet to reconcile how any good person could step up to run a business in a world where the sociopaths almost always win.
Not-being-a-sociopath is a USP itself: users/customers of a product/service/system/platform that resists _enshittification_ will likely stay loyal, especially when the service offered relies on network effects; I've read this is why Craigslist is still around, for example. Similarly, Apple's tight control over their own platform (for better and for worse) keeps people (like myself) attached to iOS; after all, the iOS home screen doesn't show you ads for CandyCrush or Evernote when you get a brand new phone, but Windows 10 does. So both Apple and Craigslist are examples of when not being sociopathic results in a solid business, faithful customers, and a decent public reputation.
Of course, this doesn't apply when the bad eggs are able to simply buy-out any civic-minded competition (e.g. if Craigslist was publicly traded, they'd have been acquired by a major legacy publisher by now).
On a related note, I don't suppose you've heard of "B Corporation"? https://www.bcorporation.net - apparently companies are now advertising that they're not optimizing for shareholder value - it's something I've seen an increasing amount over the past year - though I'm not-yet-convinced it isn't something like a greenwashing campaign or a fig-leaf like BBB. We shall see...
Suits having, or at least acting on, humanity/empathy is illegal. The suit is an instrument of the shareholders to maximize their profit by any means necessary, and the person won't stay in the suit (or the company won't survive the competition) if they don't do this.
This is fake dogma that was invented 50 years ago by neoliberal economists and has never actually been a legal or even contractual truth. A company exists to do Company Things. Often, that is, "Make money," but it can also (should primarily) be, "Provide service/product and break even so I don't cease to exist." Ironically, championing "shareholder value" over all else leads to a paperclip apocalypse economy.
It is the law in most countries and enforced also by international treaties. And something that private capital market selects for automatically.
Edit: A summary quote for those not wanting to read the rather lengthy and thorough essay:
"Instead of recognizing that for-profit corporations will seek profit for their stockholders using all legal means available, we imbue these corporations with a personality and assume they are moral beings capable of being “better” in the long-run than the lowest common denominator. We act as if entities in which only capital has a vote will somehow be able to deny the stockholders their desires, when a choice has to be made between profit for those who control the board’s reelection prospects and positive outcomes for the employees and communities who do not."
No, it isn’t. Try looking for these alleged laws and you’ll see how badly you’ve been lied to. There’s a small industry pushing this idea because it exonerates a bunch of rich people from being accountable for their decisions, but that doesn’t make it true.
What you’ll actually find is that executives are expected to act with the best interests of the company in mind but are given broad discretion for what that means. This is because there’s no way to reliably predict the future and any non-trivial business decision requires a gamble on multiple conditions and the outcome is hard to judge over a non-trivial timeframe. You can replace an executive if you’re unhappy, but there’s no way you’re getting any money back unless you can show something like fraud or gross nepotism.
As a simple example, during the 90s and early 2000s some analysts said Apple should switch to Windows or license macOS to PC manufacturers. At the time, they had a fair argument that this would boost profits and if your fictitious law really existed, Jobs might not have been able to ignore them – and shareholders would have missed out on many billions in value which was created later. If you measured his performance over a single year, you’d have been disastrously wrong over a decade.
I'm not well versed in US corporate law, and few of us probably are given the complexity of common law systems. Leo E. Strine, Jr. I linked to definitely is:
"By so stating, I do not mean to imply that the corporate law requires directors to maximize short-term profits for stockholders. Rather, I simply indicate that the corporate law requires directors, as a matter of their duty of loyalty, to pursue a good faith strategy to maximize profits for the stockholders. The directors, of course, retain substantial discretion, outside the context of a change of control, to decide how best to achieve that goal and the appropriate time frame for delivering those returns."
In Finnish law I'm more familiar with it's very clear: "The purpose of a company is to make profit for its stockholders, unless otherwise prescribed in the company bylaws." (my translation)
My claim is that the idea that for-profits somehow have some benevolent interests is the main lie that keeps the populace from demanding more control over the corporations.
Again, that’s a suggested interpretation which is not legally binding and, you’ll note, it includes broad latitude for exactly the reasons I mentioned. It would be quite easy to find, for example, one person who says they’re maximizing profits by reducing labor costs as much as possible and another who thinks that a well-paid, stable workforce will be more productive and resilient to challenges long-term. Both of them can be completely serious, acting in good faith, and neither would have any fear of legal consequences.
It would help you to make your point if you could refer to the specific legal doctrine outlining exactly what management owes stockholders. It's called 'fiduciary' and, is broadly defined in the US by Delaware state law.
You probably didn't read the essay I linked? It shows through numerous court decisions that maximizing shareholder profit is the legal duty.
Yes, there's a lot of leeway for HOW to maximize the profit and on what timescale, but the purpose still has to be profit maximization. For example, a court ruled against Henry Ford raising workers' wages for the benefit of the workers and society.
I don't really understand why people struggle with this IMHO plain fact that for-profit is for-profit. Maybe it challenges the idea that "free market" will benefit us all?
Am I the only one wondering how any half-capable AI classifier would be unable to identify a literal beheading video or a man having s*x with a fish? And especially at a cost that would justify using manual labellers?
I get that these human agents are most likely providing labelled training data as opposed to doing actual filtering, but still, surely only on more edge-case footage where the model is not confident, not things that could very easily and confidently be classified as inappropriate.
How do you know that already isn't happening? When we're talking about FB, X, Youtube, we're talking about (tens of?) billions of multimedia posts a year. "Edge-case" will still be a massive amount.
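A quick back-of-the-envelope makes the point concrete. Every number below is an illustrative assumption, not a real platform figure, but it shows why even a tiny "unsure" residue on billions of posts still translates into an enormous amount of human review:

```python
# Back-of-the-envelope: why low error rates still leave huge human queues.
# Every number below is an illustrative assumption, not a real platform figure.
posts_per_year = 30_000_000_000   # assume ~30 billion multimedia posts/year
flag_rate = 0.01                  # assume the classifier flags 1% for review
uncertain_rate = 0.05             # assume the model is unsure on 5% of flags

flagged = posts_per_year * flag_rate
needs_human = flagged * uncertain_rate
per_moderator_per_day = 1_000     # assume one person reviews ~1,000 items/day

print(f"{flagged:,.0f} flagged, {needs_human:,.0f} need human review")
print(f"~{needs_human / per_moderator_per_day:,.0f} moderator-days per year")
```

With these made-up but plausible rates, the "edge cases" alone are tens of millions of items a year, which is exactly why headcount scales with content volume.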
A quick Google tells me about 1% of the population are psychopaths. There are 80 million people on the planet that have no empathy or conscience. And that's just one angle we could look at this from. The point is, there's a truckload of fucked up shit on the net and AI isn't that good yet.
> The point is, there's a truckload of fucked up shit on the net and AI isn't that good yet.
I would beg to differ. AI is definitely good enough to identify fucked up things. Feed GPT-4 even a subtly fcked up question, or fcked up image and it'll be onto you immediately
When I said edge cases, I wasn't arguing that there wouldn't be any edge cases. I'm saying the human agents were being shown videos that were not edge cases, i.e. sexual abuse, gore. If it's bad enough to traumatize a human being, it's also high-signal enough for a model to pick it up.
Yes, human agents should be used for labelling ACTUAL edge cases where a model might be confused, e.g. is this a nerf gun fight or a real gun fight, is this person nude or is it just their shirt... not a plain beheading video.
This sounds like early industrialization in England, the way they approach their workers. It's awful and ridiculous that Facebook doesn't want to pay them $2.20. Corporations really are a necessary evil. The depersonalization of people that came with these big structures is behind most of the problems. People usually are not psychos when dealing with people. Usually.
Corporations are not evil. They just don't care at all about e.g. human suffering or destruction of environment they cause to maximize return on capital. From human perspective they are amoral or chaotic neutral. Like Cthulhu.
An asteroid does not have the ability to care about those things, it’s a rock. It can’t choose to leave the earth alone in the name of empathy.
Corporations are made of humans, who, it’s been shown, care about other humans a lot of the time. I think the more we rely on these power structures for our own self preservation, the more we start to weaponize them against weaker forms (normal people) as a form of active defense.
I like the reference to Cthulhu, it really is like a higher order life form. Workers are the cells, infrastructure the body, and the c-suite is of course the ravenous mind. You as a human don’t care what happens to a cell in a Petri dish, the same way corporations don’t care about what happens to individual people. As a human, I find this evil, and wonder if there are perhaps other higher order life forms (economic systems) we can construct that aren’t as harmful to their own composition.
When acting on behalf of a corporation (e.g. as an employee), humans are quite limited in what they can choose. This is very commonly used to justify actions that even the human themselves may find immoral. E.g. "just doing my job", "have bills to pay", "just following orders", "if I wouldn't do it, somebody else would". Not only are the corporations psychopaths (i.e. amoral), they are structures that turn humans psychopaths when they are on the clock.
We seem to agree that structures like corporations are best seen as something that transcend the individuals that they comprise. And I think there are less harmful forms of organization, or at least what corporations can do should be limited by e.g. states and unions significantly more.
I avoid (and criticize) the use of "evil" because it usually muddies more than clarifies, and has weird metaphysical connotations. "Harmful" I think captures it better.
People are not free to make decisions in a corporation. If their decisions are against interests of the corporation (the stockholders) they are typically fired.
I don't believe it's very useful to categorize people or things as evil if they don't themselves think this is their motive.
The "evil" category often makes people think that the "evil" entities have some motives and logic of operation that "good" people just can't understand or relate to at all.
There was a sort of internally coherent logic and motivation for Nazi death camps (and to many similar horrors). I don't agree with it at all and I find the results of it extremely harmful and such logic must be vigorously resisted. And to vigorously resist it, it has to be understood.
However, I'm not at all sure that I would have resisted the death camps if I had formed my views in the same situation as Nazi-era Germans did. I could well be a Putinist in current Russia. And I probably wouldn't think I was evil.
In that case, no one is evil as they can subjectively justify every decision. I don't believe subjective justification makes something intrinsically evil like child abuse acceptable, tolerable, or ignorable.
Large corporate structures diffuse both responsibility and ability for coherent action.
Dealing with a corporation is like trying to communicate with some neurons in a brain. You might encounter helpful units, but the total sum of units operates at completely different levels.
C level execs deal with many issues directly, but inevitably have to delegate (to many levels) most of what they are responsible for. The ability of corporations to multitask in the thousands of threads (human tasks) comes with a price of lost coherence.
The structure of an organization becomes the overriding entity, one without any moral impulses, and highly insensitive to the details of myriads of problematic exceptional situations that individually seem very simple to solve.
But all the exception handling required to properly handle every problematic event would turn the organization to goo.
Especially talented & moral leaders can make a huge difference, bending the organization to prioritize a few important issues over others. But even they can’t close the gap overall.
And of course, many successful leaders we might think of as normal healthy people as individuals, are not the unusually talented moral leaders needed to deal with this complexity.
I used the term "necessary evil" deliberately because, as you imply, they are basically just tools.
"Necessary" because we need them, they are efficient way to drive progress and production using competition. Also using just profit as your main goal is very straightforward and therefore efficient.
"Evil" because it can allow you to dismiss any moral values and basically be completely
selfish without even noticing. You are just a small gear in the machine doing it's job. Even if it is doing some lousy
paperwork, hauling some cargo to the port and then finally dropping toxic waste into the ocean. Reason for this is that corporation is a tool for "atomization of work" when everything is blended into small tasks and people are completely detached from the consequences of their actions.
So we as human individuals profit from them greatly, but they can also harm us because "they" act as cynical sociopaths. By "they" I mean corporations. With the use of "necessary evil" I was being hyperbolic, but it is still a double-edged sword.
Personally, as someone active in Twitter's Birdwatch / Community Notes, I don't even want to imagine what the actual paid moderators get to see. What CN gets is a ton of propaganda (mostly from Russian-backed troll farms and their Western accomplices) and only a few bits of gore (most of that purported to be from the current I/P conflict, but actually from other, sometimes many years old wars and conflicts). And that's bad enough.
I imagine that moderating content online can take a real toll on one's mental health. Sort of like police who have to look through evidence of child porn day after day.
Police being bothered by CP seems minuscule in comparison to the emotional strain that comes with the daily domestic violence, physical neglect, and War of the Roses-style cases.
That makes me wonder about the ethics of perhaps having pedophiles and psychopaths put their “competitive advantage” to use in these respective occupations.
I am no longer permitted to post images to my Facebook feed. I am also experiencing glitches with Messenger. I am not sure why, but I can't expect Support to be of any help here.
Back in 2015, I participated in a genuine knighting ceremony, and it was re-enacted for photo opportunities. So there I was, on my knees receiving a sword to my shoulders from a knight in full regalia. Of course I eagerly posted the photo to Facebook and it was immediately rejected, I assume by AI only, because it was "male pornography". Yeah, I suppose I could see how the confusion arises there.
Unfortunately the false-positives of Trust & Safety can be a real hindrance to those of us who have genuine moments to share with others.
It seems everyone will ban you for anything with no recourse because they expect bad actors to just create new accounts every time. The answer to this is, ironically, and even officially in Google's case, that good actors should just create a new account as well if they get unfairly banned.
I would like to point out that this has been a typical recurring high profile reportage, reporting on exactly the same problem, for at least a decade.
ABSOLUTELY NOTHING has changed. Not even in Europe, where the case exploded in Germany in 2016.
You can translate this piece from the SZ from 2016 to have an idea.
As someone who reports on media and tech, I have to say: the way nothing changes with Meta and social networks even when their unethical and evil practices are exposed is one of the most frustrating aspects of my career.
Damn long-form journalism takes forever to get to the point, so my skimming sometimes misses something crucial.
With that disclaimer out of the way, (I think) it is distressing and damaging for young people to be doing this job. (And maybe the rest of the article goes on to say something like that.)
Still, in today's world it is a job that has to be done. If it is to be done, it should be done by older, more mature people with more life experience under their belts. And it should be revealed up front what the job is and how damaging it can be. I think you would have to be very spiritually grounded and take many breaks to refresh your spirit (mental health) to do this.
Only a brief comment: I imagine many (and specially the top) social media companies will be sued in the near future like Tobacco companies in the past [1]. It is just that regulations, laws, and politics are slow to adapt to an extremely dynamic world. Here I am not pointing to specific politics, politicians or ideology spectrum but the field of politics itself.
That’s probably the most horrifying flip side of that debate. There’s genuine physical abuse at a scale that would make every genocide, or any pandemic look tame, but it’s all dismissed because car companies represent a material amount of ad revenue.
But because the images of it (not the reality of losing a loved one, or the consequence of losing a parent) are gory, there’s nothing we can do to highlight we should care about more than ad budgets.
It seems to me the algorithm being followed where these moderators are introduced to this horrible content maximises the anguish it causes. There's no proper training in advance to fortify them against it and they're straight into the most horrible things from the first day on the job, which is inhumane.
Separating the wheat from the chaff of content is difficult and admirable work, and these people should be properly supported and rewarded for the risks they take from moderating it on our behalf. A lot of these people are also quite naive about the horrors of the world. Are they really the best ones to do this?
To everyone saying AI is the solution to this: you are right, but only kind of. Much of the work of modern AI depends on backend labelling and tagging of datasets by humans on services like Mechanical Turk.
So to train the model you have to get hundreds of thousands, or even millions of images of the kind you wish to filter. Then get thousands of humans to look at them and label them. These people will suffer just the same as those interviewed in this article.
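A minimal sketch of the human-in-the-loop setup described above (every name and threshold here is hypothetical, not any platform's actual system): the model auto-handles confident cases, queues uncertain ones for a person, and the human labels flow back into the training set.

```python
# Hypothetical human-in-the-loop moderation sketch. The classifier
# auto-handles confident cases; only uncertain items reach humans,
# whose labels become new training data for the next model version.

def moderate(item, model, human_queue, block_at=0.99, allow_at=0.01):
    """Route one item: 'blocked', 'allowed', or 'queued' for a human."""
    p_bad = model.predict(item)      # probability the item is abusive
    if p_bad >= block_at:
        return "blocked"             # confident enough to auto-remove
    if p_bad <= allow_at:
        return "allowed"             # confident enough to auto-approve
    human_queue.append(item)         # gray zone: a person must look at it
    return "queued"

def ingest_human_labels(human_queue, label_fn, training_set):
    """Human verdicts on gray-zone items become training examples."""
    while human_queue:
        item = human_queue.pop()
        training_set.append((item, label_fn(item)))  # label_fn is the human
```

The point of the comment survives in this sketch: everything appended to `human_queue` is, by construction, exactly the material a person must view, so the labelers absorb the worst of it.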
One of the few jobs which I'd just never do fullstop no matter how desperate.
Going to war seems to mess up a good % of people. This stuff seems to have an even higher rate (near 100%?).
>Like other sacked moderators, she confessed to a feeling of withdrawal at being deprived of the graphic content she had grown accustomed to.
Quite surprised about that part - hadn't heard that before in other articles about this topic. Maybe that's what drives the people who post all this crap in the first place
Young people suffering “Internet's worst horrors”? You mean the social network services?
Those exploitative weepy articles about content moderation are just multi-layered hypocrisy.
First of all, it is supposed that people are somehow entitled to “good experiences”, and the arbitrary (and moving) cultural boundary between “suitable” and “unsuitable” is as unquestionable as if it was God-given, when in fact, it's just a feature of an entertainment direct-to-screen service available to those from the luckier parts of the world/society. Others deal with the feces for them, as shown here.
Then there's the hypocrisy of the reader, who enjoys the thrills articles like those give, but also enjoys the service too much to stop, and leave the system of exploitation. Like, “It's so awful, so awful, but I need my daily dose of filtered cat pictures, so you're gonna get that sad dickpick spam in my stead. It's just the world we live in! The algorithm makes me continue doing that!”, etc.
Then there's the talk about values, correctness, and so on, but those decisions are not even personal in the first place. The “SFW” facade is not supported by some die-hard conservatives in power, it is just a business requirement. Say, breastfeeding is considered “problematic” not because of some “clash of cultures”, or “gender conflict”, or “religious opposition”, but because you can't use someone's body in that context to hold advertisements, as stated in contracts. Now go hide yourself in a ditch somewhere, don't spoil our pretty picture. Money happens to be religion here (what an original thought).
Some time ago, people naively believed in the future that eliminates travails and death, now we pretend really hard that the “future” is now, and build all kinds of media and social contraptions to make someone else do the “dirty” work (real or “emotional”).
>Then there's the hypocrisy of the reader, who enjoys the thrills articles like those give, but also enjoys the service too much to stop, and leave the system of exploitation. Like, “It's so awful, so awful, but I need my daily dose of filtered cat pictures, so you're gonna get that sad dickpick spam in my stead. It's just the world we live in! The algorithm makes me continue doing that!”, etc.
I get you're angry but if everyone who complains about how awful social media can be while they themselves participate in social media are hypocrites and therefore shouldn't speak up (because... ???), how do you honestly see society being able to address the problem?
Heck, even now, both you and I are participating in a limited form of social media, so aren't we both hypocrites for complaining about the harm, given your interpretation? I mean, sure, we aren't looking at "filtered cat pictures" (or maybe we are :P) but I don't see how what we're doing here is fundamentally different.
It depends. With law enforcement, probably. Another option is working for a cybersecurity firm with a focus on post-intrusion forensics, etc. Also, working in incident response for a large enough company will give you this experience too. Source: I held a role responsible for response and some forensics for years.
Yeah, I mostly meant law enforcement. I probably couldn't reskill at this point anyways. I had a couple security courses including a forensics course in grad school. But it seems most places want people to be experts and don't offer any training. Things like CISSP don't seem to have very good hands-on training.
There are some podcasts like the DFIR podcast that can get you started with some concepts. I agree about the CISSP: that's more of a management-level certification and focuses a lot on risk and other topics. It's a mile wide and an inch deep. If you're looking for other training, there are some SANS ($$$) classes, or you can look into general courses that will teach you to use things like FTK or others like Magnet Axiom, Encase, etc. If you're interested, I know some folks deeper into forensics than I am who can give me some more specific courses if you'd like; let me know.