> I want to illustrate the point that we intervene less, not more, when judgments about ourselves are involved.
Given "ourselves" is all of us here, a trustworthy way to illustrate that would be to expose the metadata around the story: which articles were flagged and removed, who flagged them, how much it mattered, etc. If not in real time, due to possible exploitation for ranking tweaks, then after an acceptable delay.
I also recommend implementing a "cost of suffering" flag type for articles that use negative emotional responses to spread their information, especially when it is dissonant and viral in nature. See https://www.youtube.com/watch?v=rE3j_RHkqJc for context.
This article seems reasonable, but here's one indication it's flirting with recursive irrational behavior:
> I have to admit that I found it a bit comforting that I wasn’t the only one who thought this all seemed a bit fishy.
Confirmation bias is still a bias. Arguing from bias prevents exploring other explanations, such as errors in HN code, activity periods (lunch, for example), and stories being shared on other aggregators where they elicit a negative response (a large company monitoring the comments may not agree with the post, and consensus there effectively moderates it down).
As a regular user, I have no interest in being witch hunted because I flagged a story. Votes and flags should be anonymous as far as other users are concerned.
It has to cost something to hide good content if you expect people to take the time to produce or submit quality content and engage with the community. Otherwise you'll end up with only the things no one cared much about.
This is highly irrational and speculative. If an entity flags all articles by another entity or group, it should be held accountable for actions that could be shown to be biased. Further, a "witch hunt" would be driven by irrational decision-making in the "hunter" aggregate, which is the point of exposing the metadata in the first place. Stopping recursive irrational thinking is the goal here.
This incorrectly indicates agreement based on consensus of the aggregate. I agree with your assertions. Moderators, as the infrastructure currently stands, serve an important role in shielding users from the truth of things.
> Given "ourselves" is all of us here, a trustworthy way to illustrate that would be to expose the metadata around the story: which articles were flagged and removed, who flagged them, how much it mattered, etc.
Publicly pointing out who flagged a story seems like a bad idea; I think the data should always be anonymized. It will lead down a bad path to ostracize people for specific votes or flags.
I do, however, think it would be interesting to see some data after the fact for some stories, say ones that reach 500 points or greater.
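The delayed, anonymized release described above could be sketched roughly like this. Everything here is hypothetical: the field names, the 500-point threshold from the comment, and the 30-day delay (an arbitrary stand-in for "an acceptable offset of time") are all assumptions, not anything HN actually implements.

```python
# Hypothetical sketch: publish flag metadata only for popular stories,
# only after a delay, and only with flagger identities hashed away.
import hashlib
from datetime import datetime, timedelta

POINT_THRESHOLD = 500                 # per the suggestion above
RELEASE_DELAY = timedelta(days=30)    # assumed "acceptable offset of time"

def anonymize(user_id, salt="rotating-secret"):
    # One-way hash so flagging patterns are visible without exposing
    # identities; a rotating salt prevents cross-release correlation.
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

def release_record(story, now):
    # Withhold metadata for unpopular stories and for anything recent
    # enough that publishing it could be exploited for ranking tweaks.
    if story["points"] < POINT_THRESHOLD:
        return None
    if now - story["flagged_at"] < RELEASE_DELAY:
        return None
    return {
        "story_id": story["id"],
        "points": story["points"],
        "flag_count": len(story["flaggers"]),
        "flaggers": [anonymize(u) for u in story["flaggers"]],
        "removed": story["removed"],
    }
```

This keeps the "how much it mattered" signal (flag counts, removal outcome) while addressing the witch-hunt concern raised elsewhere in the thread.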
Publicly pointing out who flagged a story seems like a bad idea.
Possibly better idea:
Moderators could take a look at unfairly flagged stories and silently adjust flag weights for flag-abusing users.
This is, of course, based on my unscientific hunch that the majority of interesting (IMO) stories flagged off the front page are removed by competitors (political or business-wise) rather than by moderators.
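The silent flag-weight adjustment could work something like the sketch below. This is purely illustrative: the class, the multiplicative penalty, and the floor value are all invented here, not a description of how HN's flagging actually works.

```python
# Hypothetical sketch of per-user flag weights, assuming moderators can
# review a story and mark individual flags on it as unfair.

class FlagWeights:
    def __init__(self, initial=1.0, penalty=0.5, floor=0.1):
        self.weights = {}        # user id -> current flag weight
        self.initial = initial   # weight for users with no history
        self.penalty = penalty   # multiplicative decay per unfair flag
        self.floor = floor       # weight never silently reaches zero

    def weight(self, user):
        return self.weights.get(user, self.initial)

    def record_unfair_flag(self, user):
        # A moderator judged this user's flag abusive: quietly reduce
        # the impact of their future flags.
        self.weights[user] = max(self.floor,
                                 self.weight(user) * self.penalty)

    def story_flag_score(self, flaggers):
        # A story's flag pressure is the sum of its flaggers' weights,
        # so serial abusers contribute less than good-faith users.
        return sum(self.weight(u) for u in flaggers)
```

For example, after one unfair flag a user's weight drops from 1.0 to 0.5, so a story flagged by that user plus one good-faith user scores 1.5 instead of 2.0. The floor keeps the system silent but not a shadowban.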