Yes, but it also creates a straightforward basis for a TOS violation whenever an unlabelled AI video is detected, allowing YT to remove deceptive videos and ban bad actors without injecting more subjective editorial decisions into the review process.
It would be hard, for example, for a political party to claim that this policy selectively discriminates against their viewpoint.
Nope, not pointless: you remove bullshit excuses. "Everyone knows it's AI! I didn't mean to deceive anyone, honest!"
This is also why we have to do the insufferable corporate training on how not to do bribery, sexual harassment, etc. It's not that HR thinks you don't know; it's that HR knows that if they don't make everyone take the training, bad actors can successfully avoid or delay consequences by pretending not to have known. I think of these things like jury duty: a civic duty that is slightly obnoxious in and of itself but very important for the functioning of the overall system.
It gives YouTube the justification to remove videos that may not otherwise be technically rule-breaking. Though I do imagine proving that a video is AI-generated will quickly become functionally impossible.
Still, I believe you are wrong despite your statement ringing true. You are conflating different reasons why people may want to generate AI videos. The nefarious motive may be nothing more than profit ("cheaper than paying actors") as opposed to malice ("we want to defraud people"). There are all sorts of reasons why self-disclosure is not a bad start, including the fact that if it turns out you lied, you can be removed without it becoming a question of freedom of speech and so on.