Not to contradict your overall sentiment, but people often focus on intention with these issues. I think that's wrong.
Like, if someone treads dog poo into my house on their shoes, it doesn't really matter if they did it by mistake, or spent ages walking around town trying to find some dog poo to step in before coming to my house; the effect is that there is now dog poo on my carpet.
We need to be more dispassionate when discussing these issues, because otherwise threads like this descend into analysis of whether Zuckerberg/Bezos/whoever is a moral person. Which is a) probably unknowable and b) beside the point.
There is a problem here with a very big company that has more power than it knows how to handle, which can probably only be mitigated by breaking it up. That's all there is to it, really.
[edit] Just as an addendum, that's not to say that if the company has done something illegal, the people responsible shouldn't be prosecuted; they should.
There definitely is a difference between someone accidentally tracking dog poo into your house vs someone doing it intentionally; in the first case, you ask them to clean it up, and in the second case you break the friendship and/or seek criminal prosecution.
Likewise for big companies. If a company is acting badly, you need to figure out if it was intentional. In both cases you seek damages, but your approach to making sure it never happens again will be very different.
> There definitely is a difference between someone accidentally tracking dog poo into your house vs someone doing it intentionally; in the first case, you ask them to clean it up, and in the second case you break the friendship and/or seek criminal prosecution.
But the point is that this assumes you're able to tell the difference between the two cases; intent is often a really hard thing to prove. And discussions about whether Facebook acted in good faith when it did certain things often neglect the fact that they made a big mess everywhere, which needs sorting out regardless of their intent.
It is, and yet the legal system is busy making these sorts of determinations all the time. E.g., if you have professional insurance and caused some damage by mistake, that's covered; if you cause damage deliberately, it's not.
But if we can reduce that to a second-order effect we will be better off. For example, if you made the rule ('regulation') that "shoes shall not be worn in the house" then you can avert the situation and never need to try to determine motivation.
I think this is the basic good that regulations (thoughtful ones) serve. Because if Facebook makes money by being evil, then their competitors will be pressured to do the same in order to stay in business. But by leveling the playing field you can help prevent these monopolies from getting so big and powerful.
We can't rely on businesses to act "morally." That ship has sailed. We have to compel them to behave, not by social shaming but by making non-compliance painful and repeated non-compliance an existential threat.
It's hard to prove, but for some strange reason they always have the resources when it's time to come after the little guy, and never when it's the big guy, even though it's more obvious there.
A logical reason to care about intent is that malicious people will likely do it again and cannot be trusted in the future, whereas people who make a mistake will avoid repeating it.
For a corporation, "avoiding doing it again" means dedicating resources to preventing a repeat of that type of "mistake": audits, privacy reviews, etc.
The choice not to dedicate those resources up front was an intentional one.
Analogies break down due to the difference of scale between individuals and the biggest companies in the world.
If a company makes an "innocent" unintentional mistake, it can be attributed in part to their choosing not to put resources towards detecting and avoiding that kind of error
Getting too invested? In what, the society that surrounds each individual? I don't think that is possible.
There is a difference between empowerment ("Joe Public can't do much to change society") versus investing (literally putting money into local businesses, demanding better behavior, etc).
I have met a lot of my American peers who believe their individual liberty absolves them of responsibility to the society they live in, which is exactly backwards: their personal liberty is granted precisely because of the society they live in.
> Likewise for big companies. If a company is acting badly, you need to figure out if it was intentional.
I disagree that it's likewise for big companies. Corporations like that don't really have intentions; every intention is fundamentally about profit. Profit is in fact both its intention as well as its reward/punishment.
It's really the only way to properly "communicate" with an entity like that. A corporation is not a human being. And just like it's not useful to try to reason with (or attribute human-like intention to) a cat, it's basically futile to do so with a corporation.
Any time that human-like reasoning seems to apply with a corporation, it really only happened because the reasoning happens to align with its profit intentions.
Edit: this is less true for smaller companies, but for a multinational it's pretty much a given.
Also, there's something in the way that large companies are structured, that actual accountability (like you'd find with humans or small groups) seems to disappear and slip between the gaps of hierarchy. Lacking accountability, attributing intent becomes guesswork.
It seems to be intentional: a system for taking a user's password, scanning their contacts, and then uploading them requires purpose-built tools, which implies intent. Stepping in dog poo and tracking it into a house requires only shoes you already wear and walking you already do. It doesn't require any new intentions or tools to be built.
> if someone treads dog poo into my house on their shoes, it doesn't really matter if they did it by mistake, or spent ages walking around town trying to find some dog poo to step in before coming to my house; the effect is that there is now dog poo on my carpet
Yes, but if someone treads dog poo into your house every day for a year, their repeated claims that it was a mistake every time are not going to carry as much weight. With FB we're not talking about a single incident of treading dog poo; we're talking about a repeated pattern of getting dog poo all over the place.
>Like, if someone treads dog poo into my house on their shoes, it doesn't really matter if they did it by mistake, or spent ages walking around town trying to find some dog poo to step in before coming to my house; the effect is that there is now dog poo on my carpet.
Have to disagree with this; I am much more forgiving of accidental harm than intentional harm. And speaking of dogs, pretty sure my dog understands this as well, since there's barking if he thinks I did something on purpose but accidentally stepping on him doesn't elicit the same response.
Who was talking about forgiveness? I thought we were discussing ethics here. No amount of good intentions will make Zuckerberg's actions moral... at least the actions of which I'm aware.
We're talking about your feelings after a harmful event, and whether the perpetrator's intention matters. I'm using "willingness to forgive" as a measure of how much it matters.
Words like malice and evil require intent. If we don't want the conversation to be about intent, we should use words like amoral. Facebook isn't actively trying to harm their customers; they simply don't care about them. That is amoral, not evil.
That isn't evil. You wouldn't call a hurricane or a fire evil. You wouldn't call a person who kills someone in a no fault car accident evil. Evil isn't about results. It is about intent and motive.
You can call Facebook a number of things from amoral to negligent or even criminal, but once you start talking about evilness you have to start judging their intentions and motives.
> That isn't evil. You wouldn't call a hurricane or a fire evil. You wouldn't call a person who kills someone in a no fault car accident evil.
All of these things are no one's fault.
> Evil isn't about results. It is about intent and motive.
Exactly. Prioritizing profit at the cost of customers' well-being is a deliberate decision; if not, seeing that customers are harmed by your own, continued actions and doing nothing to change it is, to me, actively being evil. Your intent may not be exactly to harm, but you have no problem harming people to get there. There is no difference.
I think that is just too broad of a definition for evil. According to that, everyone who isn't carbon neutral would be evil. We are worsening climate change through our "own, continued actions and doing nothing to change it". And if we are all evil then evilness has no real meaning.
> According to that, everyone who isn't carbon neutral would be evil.
While I agree that it's useful to maintain some nuance in our perspectives, generally I think it's better to recognize and realign our actions, not definitions. It's the difference between accidentally hitting someone, and accidentally hitting someone and proceeding to run them over.
And to your example, I think we know most people aren't really aware of what is going on. Another group of people don't believe it at all, a combination of ignorance and poor government. We're all human; that means something.
Let's say Mark gets in a water-balloon fight, but with the balloons full of gasoline. Let's say this happens in a hospital. Mark is aware that his behavior is likely to result in great harm, many deaths. He doesn't _want_ people to die, it's not his goal; he just doesn't care that he's putting them in danger. He wants to have fun with his balloons, and that's all that matters to him.
Mark isn't trying to kill people. Mark _is_ evil, because he chooses to take actions that are likely to cause great harm.
No, but you would call it evil if someone repeatedly built rickety buildings in a hurricane zone or built structures out of dry wood and paper next to a forest at high risk for a forest fire, and then acted like they had no responsibility when the buildings repeatedly got destroyed by fires and hurricanes and people's personal property got lost forever or looted in the resulting disorder.
Evil and amoral lie on a spectrum (with things like "altruistic" and "noble" on the other end).
Trying to fit things into neat little boxes so you can apply words to them isn't particularly helpful. What Facebook is doing is some form of wrong. They aren't murdering children, but still, what they are doing is not good and they are doing it at a massive scale, so the harm is multiplied.
> which can probably only be mitigated by breaking it up.
I think about this a lot with regard to Big Tech. For some companies it looks easy and obvious (e.g., Amazon spins off AWS). For Facebook, is it really as obvious as "spin off Instagram"? I'm not convinced. Their power seems so ingrained in Facebook itself that it's not immediately obvious what splitting off Instagram would do. What would we actually want to accomplish?
Honestly I can't say if it's definitely the right course of action, although in the last few years there have been reports of millions of 16-35 year-old Americans leaving Facebook, only to move to Instagram. I do think the ability of these companies to buy their nearest rivals is an antitrust issue that needs addressing, for the health of this relatively new market, if nothing else.
AWS + Amazon doesn’t seem obvious to me; would there be much to gain from breaking up a conglomerate into several businesses, when those separate businesses are in entirely different verticals?
I suppose breaking off AWS might lower Amazon's ability to subsidize an unprofitable retail business in search of market share, though that is probably a moot point now that Amazon is raising its prices in search of profitability and is likely to stop being the de facto online shopping destination, since other online retailers are also standardizing on two-day shipping and easy returns (often easier, thanks to return labels provided in the box).
I think one thing missing in the responses here is that there is culpability even without intent. Even assuming there was no ill intent, the fact that so many of these things happened means they didn't have sufficient controls, or checks and balances, in their practices to prevent them. That lack of control is itself part of the problem, and something they could have fixed any number of times.
My 9-year-old son always rightfully claims that many of the harmful things he does were accidental. The problem is that he frequently leaves little margin for error in a lot of what he does. Riding his bike just a few feet behind his sister: of course you're going to run into her if she stops quickly. Stacking his bowl, cup, and silverware on his plate and then carrying it one-handed: of course they're going to spill.
Facebook's internal controls and practices are insufficient to manage their business. It doesn't matter if they didn't willfully intend to do all of the shit they did. They did intentionally create the controls, practices, and culture in place that allowed it to happen.
We do distinguish manslaughter and murder however. Someone dies either way, but in that case intent matters.
I think in most of these cases, intent should simply trigger an additional charge or penalty, leading to the imprisonment of executives.
If my company accidentally does something extremely stupid or negligent, or against the interests of my customers, fine me enough to ensure I create systems and oversight to attempt very hard not to.
If it turns out I was doing so intentionally, lock me up and throw away the key.
>Like, if someone treads dog poo into my house on their shoes, it doesn't really matter if they did it by mistake, or spent ages walking around town trying to find some dog poo to step in before coming to my house; the effect is that there is now dog poo on my carpet.
So, whether it's your own two year old or a malicious adult, you think it's wrong to respond differently because they both produced the same harm?
I think it's not wrong to treat an adult exactly like you would a child if he's acting like one. I interpret parent poster as trying to make a point about how we cannot tell anything about intent so we should judge on the action, solely.
> I think it's not wrong to treat an adult exactly like you would a child if he's acting like one.
So, are you saying that the malicious adult poo-tracker is being childish and should be treated with the leniency we afford to children?
> I interpret parent poster as trying to make a point about how we cannot tell anything about intent so we should judge on the action, solely.
Why on Earth do you think we can't deduce someone's intent? In the case I posited, you can know the intent of both the child (no malicious intent) and the malicious poo-tracker (malicious intent). In many other cases, you can also deduce intent from someone's behavior.