It doesn't imply "control" but, if all these people define themselves as "Jewish", then they have something in common that distinguishes them from the general population. And unless "being Jewish" (as "being Christian", or "being Palestinian", or "being Russian") is an empty word, then "being Jewish" must have some predictive value about their beliefs, behaviours and choices. It is perfectly normal to imagine that their "being Jewish" will affect their choices more when they concern things that feel close to their identity: their religion, their culture, their ethnic group, their communities, their associations, their lobbies, and finally their Jewish State. Being over-represented in positions of power certainly allows them to exercise their free choices in directions that benefit the group and community they feel they belong to. It's that simple.
Makes me wonder if during training LLMs are asked to tell whether they've written something themselves or not. It should be quite easy: ask the LLM to produce many continuations of a prompt, mix them with many others produced by humans, and then ask the LLM to tell them apart. This should be possible by introspecting on the hidden layers and comparing with the provided continuation. I believe Anthropic has already demonstrated that models have partially developed this capability, but it should be straightforward, and useful, to train it further.
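The generate-mix-classify loop described above could be sketched roughly as follows. Note that `model_generate` and `model_judge` are hypothetical stand-ins for real model calls (there is no such public API); they are stubbed here just so the pipeline logic is runnable:

```python
import random

def model_generate(prompt: str) -> str:
    # Stand-in for sampling a continuation from the model under training.
    return prompt + " [model continuation]"

def model_judge(prompt: str, continuation: str) -> bool:
    # Stand-in for asking the model "did you write this continuation?".
    return "[model continuation]" in continuation

def build_eval_set(prompts, human_continuations):
    # Pair each prompt with one model-written and one human-written
    # continuation, label them, and shuffle the mixture.
    examples = []
    for p, h in zip(prompts, human_continuations):
        examples.append((p, model_generate(p), True))   # model-written
        examples.append((p, h, False))                  # human-written
    random.shuffle(examples)
    return examples

def self_recognition_accuracy(examples):
    # Fraction of continuations the model correctly attributes.
    correct = sum(model_judge(p, c) == label for p, c, label in examples)
    return correct / len(examples)

prompts = ["Once upon a time", "The theorem states"]
humans = ["Once upon a time there was a cat.",
          "The theorem states that there are infinitely many primes."]
acc = self_recognition_accuracy(build_eval_set(prompts, humans))
print(acc)  # 1.0 with these trivial stubs
```

With real model calls the accuracy would become a training signal rather than a fixed number; the stubs only illustrate how the mixed, labelled evaluation set would be constructed.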
Isn't that something different? If I prompt an LLM to identify the speaker, that's different from keeping track of speaker while processing a different prompt.
I can't understand how it is possible that, when such ceasefires are agreed, there isn't a designated third party who holds the signatures of both parties and can say, and prove, whether the ceasefire has been violated.
They say it, but can they prove it? Because everyone seems to be saying a different thing. Shouldn't Pakistan be able to say "this is the document, these are the terms, these are the signatures, case closed"?
I'm not saying they're lying, I'm just wondering why they don't seem to have definitive proof of what they say: they say one thing and the US "disagrees". Why can't either of the parties just show the fucking document?
On the slightly optimistic side, much more intelligence will be spent in countering these criminal uses than in enabling them. For each of the terrible inventions you mentioned, there are other inventions to counter them.
It's true, though, that the cyber-security skills put these models firmly in the "weapons" category. I can't imagine China and the other major powers not scrambling to get their own equivalent models asap and at any cost; it's almost existential at this point. So a proper arms race between superpowers has begun.
> the paradox is, the LLMs are only useful† if you're Schwartz, and you can't become Schwartz by using LLMs.
That you can't "become Schwartz" by using LLMs is an unproven assumption. Actually, it's a contradiction in the logic of the essay: if Bob managed to produce a valid output by using an LLM at all, then it means that he must have acquired precisely that supervision ability that the essay claims to be necessary.
Btw, note that in the thought experiment Bob isn't just delegating all the work to the LLM. He makes it summarise articles, extract important knowledge and clarify concepts. This is part of a process of learning, not being a passive consumer.
There's no contradiction. The point is that Bob is able to produce valid output using LLMs, but only while he himself is being supervised, and that he doesn't develop the skills to supervise himself independently in the future.
No, this is impossible unless Bob is simply presenting the LLM's output at each weekly meeting and feeding the tutor's feedback straight back into it. That would be a total of 10 minutes of work per week, and the tutor would notice straight away, if only from the lack of progress.
No, the article specifies that Bob actually works with the LLM rather than just delegating to it. He asks the agent to summarise, to explain, and to help with bug fixing. You could easily argue that Bob, having such an AI tutor available 24/7, can develop understanding much faster. He certainly won't waste his time on small details of Python syntax (and working with a "coding expert" will make his code much cleaner and more advanced).
This is the rub: Bob would not be promoted if he consistently provided unreliable LLM output. In order to get promoted, Bob needs to learn the skills that get reliable output out of an LLM. These may not be the same skills that Alice learns, but if the argument is that Schwartz's LLM output is valuable, why are we to assume Bob's path isn't towards Schwartz?
There are flowers that look & smell like female wasps well enough to fool male wasps into "mating" with them. But they don't fly off and lay wasp eggs afterwards.
But there is a distinction we can make between flowers and wasps. If there is no distinction we can make between Schwartz and non-Schwartz, then we are susceptible to the same problem with or without AI. And if there is a distinction, then we can use that distinction to test Bob, and make him learn from his test failures.
But the whole point is that there is a significant difference between Schwartz and non-Schwartz, that only turns up after they start working for real, producing new work rather than rehashing established material, and it takes years to detect. By that time, Bob's forty.
It isn't a sampling problem; it's a process problem. By perpetually raising the stakes and focusing on metrics (e.g. grades and publication counts for students, graduation rates for schools), we've created and fallen into a Poe's-law trap. Adding a new metric isn't likely to help.
What might help? Making the metrics harder to game (e.g. something like oral exams, early and often), making them more discerning (grade deflation), moving the wrong-track consequences earlier (start holding people back in grade school, make failing to graduate high school easier, make getting into college harder, etc.), and changing the cash-cow funding models to remove the perverse incentives.
You mistake his greed, and that of those who surround him, for competence. I don't think there was a plan to profit from the disaster; rather, they're so incompetent that they lack even the basic self-control to avoid publicly taking advantage of the mess they unwittingly caused, however bad and dangerous that might be.
I imagine Trump wanted to do some fun new things now that he is old and will soon die. It's not many who get to experience what it feels like to start a war and kill world leaders, and when you're gonna die soon anyway, why not?
I think this is the correct lens. He's a malignant narcissist on his way out, with absolutely nobody to stop him.
I'm genuinely worried that he secretly wants to go down in history as the crazy guy who set the oil fields on fire and dropped a nuke on Tehran or something.
Not sure what moves Trump; it could be any of that or more. What we all know is that Netanyahu and Kushner found this and used it to get what they wanted. This is not Trump's war: he's not the initiator and he doesn't have goals of his own (though at times he might believe he does). It actually contradicts what he campaigned on for years.