
I'm not seeing that in the actual proposed legal text (https://www.justice.gov/file/1319331/download), and I'm wondering whether I'm overlooking it or whether this is just posturing that they weren't able to write into the bill in a reasonable way.

The closest thing I see is subsection (d)(2), which says that the platform can be prosecuted (A) for a "specific instance of material or activity" (B) if it had "actual notice of that material's or activity's presence on the service," unless (C) they remove/block "the specific instance of material," report it to law enforcement, and "preserve evidence related to the material or activity for at least 1 year."

I believe the major commercial E2E platforms generally have the ability to recognize hashes of specific known-bad material (think, e.g., child sexual abuse material) and block it / alert the platform through a client-side filter, which I think would make it pretty easy to comply with these requirements.
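
For illustration only, here's a rough sketch of what that kind of client-side check could look like. This is my own sketch, not any platform's actual implementation: the names are made up, and real systems use perceptual hashes like PhotoDNA rather than exact SHA-256 matches.

    import hashlib

    # Hypothetical client-side filter: the platform ships a set of digests of
    # known illegal material, and the client checks attachments against it
    # before sending. The entry below is a placeholder, not a real hash.
    KNOWN_BAD_DIGESTS = {"0" * 64}

    def report_to_platform(digest: str) -> None:
        # Placeholder: notify the platform of the match (supporting the
        # report/preserve steps in (d)(2)(C)) without exposing any other
        # message contents.
        print(f"reporting match for digest {digest}")

    def encrypt_and_send(payload: bytes) -> None:
        # Placeholder for the normal E2E-encrypted send path.
        print(f"sending {len(payload)} encrypted bytes")

    def send_attachment(payload: bytes) -> None:
        digest = hashlib.sha256(payload).hexdigest()
        if digest in KNOWN_BAD_DIGESTS:
            # Block locally and alert the platform; the plaintext never has
            # to leave the client, so the E2E property is preserved.
            report_to_platform(digest)
            return
        encrypt_and_send(payload)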

Alternatively, it would be enough, I think, to remove and ban the accounts involved.

The only difficult part is that you need to "preserve evidence," but my understanding is that this phrasing doesn't generally compel you to create evidence where none existed. Privacy-focused platforms have for years avoided keeping logs that they do not want turned over to the government, and it's generally much more onerous for the government to compel you to start keeping logs than to get mad at you for deleting/purging logs you already collected.

So I don't think this actually imposes any requirements on design, or gets in the way of E2E or non-logging platforms. If you are informed of specific illegal content, you need to take action. But if you operate the service in such a way that you don't have "actual notice" or "evidence" of anything people send with it, I think that's still fine.

The other carve-outs don't seem to be relevant. (d)(4) might be, if you squint hard enough: it says the platform has to make itself able to receive notification of illegal content, and that a platform doesn't get immunity "if it designs or operates its service to avoid receiving actual notice of Federal criminal material on its service or the ability to comply with the requirements under Subsection (d)(2)(C)." I suppose you could argue that not keeping logs means you've designed your service in a way where you can't "preserve evidence," which would run afoul of this. But I don't think that's the right interpretation: if you're not creating unnecessary logs in the first place, and you keep the logs you do create for a year, you've preserved all the evidence that exists.

Am I being too optimistic here? (I do agree that the plaintext summary you quoted is very concerning.)


