Hacker News

I am really trying to understand both sides of this. Focusing on the actual research, what about it do you think requires informed consent?

Do you understand how widespread this kind of research is? Literally everyone does this.

The act of publishing can't be the ethical breach -- just focus on the research, what do you think they did wrong there?



> Do you understand how widespread this kind of research is? Literally everyone does this.

One of the main objections I'm seeing from people (in my bubble) isn't that Facebook did this, but that Cornell, UCSF, and PNAS participated in it. Facebook can do this, and while it's unethical it's not illegal. Same goes for manipulative people in your everyday life (let me not tell you about the horrific human being of a girlfriend I once had). The point is that science, and the people who purport to carry it out, should be held to higher, more rigorous ethical standards. If those standards are not met, those people should be excluded from science and their findings ignored. They should not be given serious consideration in a journal such as PNAS. That is what is happening here as far as I can see, and while it's a bit dramatic, I think it is correct.

Also, if I may toss my personal interpretation of the research into this... ethics aside, the study is extremely weak, and I honestly don't see how it can be published in such a "good" journal. The effect size was < 0.0001. They hand-wavingly try to explain that this is still significant given the sample size. I'm personally not convinced, at all. It sounds like they needed a positive conclusion out of the study, so they came up with a reason for one. If this had landed on my desk for review, I would have rejected it on that alone.
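To make the significance-versus-effect-size point concrete, here is a minimal sketch. The numbers are illustrative, not the study's actual figures: at Facebook-scale sample sizes, even a tiny standardized difference between groups produces an overwhelming z-statistic.

```python
import math

def z_for_mean_diff(d, n_per_group):
    """Approximate two-sample z-statistic for a standardized
    mean difference d between two equal-size groups."""
    return d * math.sqrt(n_per_group / 2)

# Illustrative numbers only: ~345,000 users per arm and a
# standardized effect of 0.02.
z = z_for_mean_diff(0.02, 345_000)

# Two-sided p-value under the normal approximation.
p = math.erfc(abs(z) / math.sqrt(2))

print(f"z = {z:.1f}, p = {p:.1e}")
```

With samples that large, "statistically significant" tells you almost nothing about whether the effect is big enough to matter.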


OK, this is interesting. I don't think this is people turning their nose up at it, just because [insert endowing authority here] is somehow seen to have endorsed it. Apparently someone actually did something wrong here...

If it's the case that they should be held to a higher standard simply because it was academic research, that seems like a terribly inconsistent position to take. But if so, then FB walked right into it, and all we can do is shrug.


Ethics committees can and do give the all-clear to experiments that have a negative impact on people, as long as the experimental procedure is generally tight, anonymous, information is well-controlled with little scope for leakage or abuse, and with a potential for a result that is solid and informative enough to be worth the inconvenience or other negative impact.


Perhaps it could be summed up as "Attempting to negatively influence a visitor's mental state without their knowledge or consent IS NOT COOL."


The assertion that this study refuted was that too much positive bias in a filtered source of information causes negative sentiment. Reducing said positive bias had heretofore unknown effects. The attempt was to learn, not to negatively influence anyone's mental state.


Just like the Milgram experiment was designed to see whether people could be coerced into believing they were torturing and killing others when so ordered? It wasn't designed to cause mental distress, but to see what happened.

You see, that's why there are ethics committees at Universities.


So much advertising could be said to negatively influence a person's mental state. These are not simple questions with simple answers.


Wholly different thing.

Let's draw an analogy here. Let's say that Google decided to conduct an experiment on its Glass users, and that for a period of a week, Google decided to see if it could alter the mental state of its customers by using Glass to delete or diminish positive social interactions. Let's assume this research was conducted without any kind of consent or knowledge of the customers being experimented on. What's your immediate reaction to that? Still cool? Still harmless and just like advertising and A/B testing? Or, creepy and dangerous?

Like it or not, Facebook does have a special responsibility here. It is, quite literally, the lens through which people see their world.


Google Glass absolutely will have to carefully rank the content that is displayed on its interface. The algorithms for such ranking are surely ripe for R&D and competitive advantage, and as such, will be constantly evolving and being tested. If this Facebook test creeps you out, there is literally nothing about Google Glass which should NOT have you running for the exits.


onewaystreet did not say advertising was harmless; rather, the opposite. It's interesting that you think advertising is harmless, not creepy, and not dangerous.


Has someone told Fox News?


Would it be OK to just try to positively influence a visitor's mental state? What about negatively influencing your opinion about something? What about making you feel scared a particular event might happen to you? What about making you feel like you need a pick-me-up?

...What about enticing the maximum number of people to click on your buttons for the maximum amount of time, with no purpose but capturing their eyeballs?


Most of those are quite different things, and obviously so if you think about them.

"Do customers prefer the green button or the blue button" is harmless. Psychological testing is NOT harmless and NOT something that anyone with a website can or should do. That is medical research - and as such is strictly controlled precisely because of the real risks and genuine human cost that it can have if conducted inexpertly.


> "Do customers prefer the green button or the blue button" is harmless.

That is a psychological test.


Not one which is specifically designed to alter someone's emotional mental state. The difference is quite obvious, surely.


Actually, no. You are putting the cart before the horse. What actually happened is that they tweaked the ranking algorithm, and then measured a minuscule effect with a particular scoring algorithm (in this case, counting certain types of words used in subsequent posts).

So the nature of the scoring algorithm (counting emotional words) used to measure the impact of a change makes deploying the A/B test suddenly unethical?
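For what it's worth, the kind of scoring being described is roughly this: count what fraction of a post's words fall in an emotion lexicon. The mini-lexicons below are made up for illustration; the actual study used the much larger, proprietary LIWC dictionaries.

```python
# Toy emotion lexicons, for illustration only.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def emotion_score(post):
    """Return (positive_fraction, negative_fraction) of the
    post's words that appear in each lexicon."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    words = [w for w in words if w]
    if not words:
        return (0.0, 0.0)
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos / len(words), neg / len(words))

print(emotion_score("What a great, happy day!"))
```

The point being: the measurement itself is just word counting over posts, the same sort of metric any A/B test might log.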


Most advertising is specifically designed to alter someone's emotional mental state, and plenty of that is in a negative direction. Would you also outlaw advertising? Why should advertising get a free pass and not well-controlled psych testing? What about signs warning you not to infringe on [random local law] under threat of penalty? They create a sense of oppression. Should they be forbidden?



