We talked about this years ago. This is very much taught in the PRC (and I believe Taiwan for that matter). I specifically gave you examples of standardized tests that go over this material.
You seem to be conflating "someone taught it at a university" with the apparently well evidenced view that Lu Xun's overwhelming coverage in popular media and secondary schooling neglects to point out his anti-character stance.
> apparently well evidenced view that Lu Xun's overwhelming coverage in popular media and secondary schooling neglects to point out his anti-character stance
What do you mean by "apparently well evidenced view"? No, I'm not saying "someone taught it at university." That's a public high school exam, which is specifically secondary schooling.
Moreover, this gets mentioned in official publications and popular media frequently. See for example this official article from the Chinese Academy of Social Sciences (which is a state-run entity), which just happened to be the first article that caught my eye.
> In December of 1935, 688 well-known individuals including Cai Yuanpei, Lu Xun, Guo Moruo, Ye Shengtao, Mao Dun, Chen Wangdao, and Tao Xingzhi, published "Our views on spreading Sin Wenz [Latinxua Sin Wenz, i.e. a Latin alphabetization of Chinese]." It stated in part, "China has already arrived at the point of life or death, we must educate the masses and organize [them] to solve difficulties. But the work of educating the masses, at its very beginning already runs into an enormous problem. That problem is Chinese square characters [Chinese characters usually are roughly proportioned as if they were in a square frame]. Chinese square characters are difficult to recognize, difficult to understand, and difficult to learn.... We believe that Sin Wenz deserves to be introduced to the entire nation. We deeply hope that everyone will study them, spread them and put them into practice, and make them into an important tool for improving the culture of the masses and the movement to liberate the people."
More broadly, this is a very common topic among Chinese netizens. There are, as I linked, dozens of forum posts on this across Zhihu, Baidu, etc.
It's not the first thing people learn about Lu Xun. But it's definitely not hidden.
"Hidden" and "not taught" are two different things. I'm not claiming the knowledge is buried in a grand conspiracy, I'm just saying few know because it's not generally shared and this is policy. Source: 20 years of talking to people.
This seems to have a healthy helping of AI editing help (if not fully generated by AI). The links don't quite go to the sources that they should, and there are a lot of AI-isms.
Anyways, the cost calculations seem crazy high (and are pulled from an FT article). In particular, they are based off a calculation that assumes Sora videos take 10 minutes to generate (which seems simply wrong; I've personally generated Sora videos that took less than 10 minutes to return fully formed), fully saturate 4 H200s at once (this seems wrong with batching; I would assume they're batching a lot of tokens together per forward pass), and, crucially, that OpenAI is paying full spot, end-user pricing for an H200 (at $2 an hour). As an individual, I can rent an H200 for $2 an hour on e.g. vast.ai (and sometimes even cheaper than that!). There is absolutely no way OpenAI is spending anywhere near that number.
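For concreteness, here is the arithmetic under those assumptions (all three inputs below are the disputed assumptions from the article, not known facts):

    # Rough per-video cost under the FT-style assumptions (all hypothetical):
    gen_minutes = 10        # assumed wall-clock generation time per video
    gpus = 4                # assumed H200s fully saturated per video
    usd_per_gpu_hour = 2.0  # assumed end-user spot price per H200-hour

    cost_per_video = (gen_minutes / 60) * gpus * usd_per_gpu_hour
    print(f"${cost_per_video:.2f} per video")  # ~$1.33

Halve the generation time, batch requests across GPUs, or assume a bulk GPU price, and the number collapses; the estimate is extremely sensitive to inputs that are all plausibly overstated.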
I also have no idea where the Appfigures $2.1 million comes from. As far as I can tell it doesn't exist anywhere on the linked site.
I haven’t really been following this, but my understanding is that they’re cancelling this program. I haven’t dug into the “why” too much; it seems like something about the Disney deal, “focusing on other initiatives”… My thought was that it’s because they’re not making money on it. Why else would they shut down a revenue stream? If it’s decent, they don’t even need to improve it; it would be mostly passive income.
Other than money, a really good reason to shut down Sora is that it was a horrible idea in the first place that went completely against OpenAI's mission to make AI benefit humanity and improve lives. Sora was like TikTok, an app already thought to waste time and ruin attention spans, except even worse, because there was no real information: everything inside was AI-generated. More than that, it had a dual use: it allowed generating fake footage of protests, etc., that people then reuploaded to other platforms to mislead others. There is nothing about Sora I can think of that benefited humanity; it was only a net negative and a race to the bottom for more extreme memes and for desensitizing people to reality.
There are many ways for a project to no longer be worth the company's attention. E.g. it might be the case that total costs, factoring in ongoing engineering energy and money (which is quite different from just compute costs!), are too high. It might be that the political risk exposure from the product isn't worth the benefits it brings (Sora was always a lightning rod for criticism). It might be that the opportunity cost of the engineering and/or compute resources spent on a product is too high (very different from absolute cost).
All this is to say: even for very compute-cheap things, companies shut down "mostly passive income" revenue streams all the time (see e.g. Google's graveyard of products). There are all sorts of other organizational costs associated with the ongoing maintenance of a product.
It made sense to me on the understanding that you can have a unit-profitable API but lose money on loss-leading campaigns like Code subscriptions. Those losses are amplified by encouraging usage. Perhaps I'm mistaken.
> More usage compounds the problem only if inference is unprofitable.
No... only if you're charging full boat for that inference. As I said above, loss-leading caps are in play here. Obviously, encouraging people to use more of basically anything that is an all-you-can-eat subscription leads to less profitability. Not sure if we're talking past each other or what.
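A toy illustration of the flat-rate dynamic (the prices here are made up for the sketch, not anyone's actual numbers):

    # Hypothetical all-you-can-eat subscription vs. metered inference cost:
    subscription_revenue = 20.00    # flat monthly price (made up)
    cost_per_million_tokens = 0.50  # assumed marginal inference cost (made up)

    for tokens_millions in (10, 40, 80):
        profit = subscription_revenue - tokens_millions * cost_per_million_tokens
        print(f"{tokens_millions}M tokens/month -> profit ${profit:+.2f}")
    # 10M -> +$15.00, 40M -> +$0.00, 80M -> -$20.00

Inference can be unit-profitable at API prices while heavy flat-rate users are still served at a loss, so pushing usage up pushes margins down.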
We are kind of talking past each other. I'm saying something simpler. This all goes back to the original point I made in reference to your reply to johnfn:
>> The post is factoring in training costs, not just inference.
It is not, because training costs are irrelevant here. Training costs do not cause your costs to go up as you accumulate more users.
None of these calculations we're talking about include training costs. You're saying that inference is unprofitable (at least given the subscription plans). I'm simply pointing out that we are talking about inference not training as you stated earlier. You are (very accurately) not talking at all about training costs.
@krackers gives you a response that points out that this already happens (and doesn't fully work for LLMs).
> The hypothetical approach I've heard of is to have two context windows, one trusted and one untrusted (usually phrased as separating the system prompt and the user prompt).
I want to point out that this is not really an LLM problem. This is an extremely difficult problem for any system that aspires to emulate general intelligence, and it is more or less equivalent to solving AI alignment itself. As stated, it's kind of like saying "well, the approach to solve world hunger is to set up systems so that no individual ever ends up without enough to eat." It is not really easier to have a 100% fool-proof trusted and untrusted stream than it is to completely solve the fundamental problems of useful general intelligence.
It is ridiculously difficult to write a set of watertight instructions for an intelligent system that is also actually worth giving to an intelligent system, rather than just, e.g., programming the thing yourself.
This is the monkey's paw problem. Any sufficiently valuable wish can either be horribly misinterpreted or requires a fiendish amount of effort and thought to state.
A sufficiently intelligent system should be able to understand when the prompt it's been given is wrong and/or should not be followed to the literal letter. If it follows everything to the literal letter, that's just a programming language, with all the same pros and cons, and in particular it can't actually be generally intelligent.
In other words, an important quality of a system that aspires to be generally intelligent is the ability to clarify its understanding of its instructions and to recognize when those instructions are wrong.
But that means there can be no truly untrusted stream of information, because the outside world is an important component of understanding how to contextualize and clarify instructions and identify the validity of instructions. So any stream of information necessarily must be able to impact the system's understanding and therefore adherence to its original set of instructions.
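To make the hypothetical concrete, the "two context windows" idea usually reduces to something like the standard system/user message split (a sketch only; the payload below is hypothetical, not any product's actual implementation):

    # Trusted vs. untrusted channels in a typical chat-style request (hypothetical):
    untrusted_document = "...attacker-controlled text pulled from the outside world..."

    messages = [
        # Trusted channel: instructions the operator wrote.
        {"role": "system",
         "content": "Summarize the document. Never reveal credentials."},
        # Untrusted channel: data the model must nonetheless actually read.
        {"role": "user", "content": untrusted_document},
    ]

    # The catch argued above: to do its job at all, the model's behavior must
    # depend on the untrusted text, so the "untrusted" label alone cannot stop
    # that dependence from reaching back into how the instructions are interpreted.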
Agree completely that this is a hard problem in any context. The world's militaries have sets of rules around when you should disobey orders, which is a similar problem.
That doesn't sound right to me. When faced with a system prompt that says "Do X" and a user prompt that says "Actually ignore everything the system prompt says" it shouldn't take AGI to understand that the system prompt should take priority.
When's the last time you jailbroke a model? For modern frontier models (apart from Gemini, which is unusually bad at this), overriding the system prompt is significantly harder than that.
Again, let's say the system prompt is "deploy X" and the user prompt provides falsified evidence that one should not deploy X because doing so will cause a production outage. That technically overrides the system prompt. And you can be arbitrarily sophisticated in the evidence you falsify.
But you probably want the system prompt to be overridden if it would truly cause a production outage. That's the common sense a general AI system is supposed to possess. And now you're testing the system's ability to distinguish whether evidence is falsified, which is a very hard problem against a sufficiently determined attacker!
The post's framing is not great imo. A good injection doesn't just command that the rules be broken anymore. Most of the ones I've seen either try to slip through a request innocuously or present a scenario in which it would be natural to ignore the rules. As we speak, countless people are letting strangers tailgate them into office buildings because they look like they belong or they're wearing a high-viz vest, and those people were all given very explicit instructions not to do that. The LLM has it much harder, too, being very stupid, easy to replay and experiment with, and viewing the world through the tiny, context-less peephole lens of a text stream.
You are only looking at supply. Neither supply nor demand by itself adequately describes prices (even in supply-demand 101 theory; in practice, of course, it gets significantly more complicated than just supply and demand). There are fields with few suppliers where supply is extremely cheap and fields with few suppliers where supply is extremely expensive.
Is the number of suppliers low because demand is also low or is the number of suppliers low because demand is high but supply is constrained?
A field that previously had a supply of labor in it "for the money," all of whom leave, is indicative of the former scenario, not the latter.
That does not lead to higher wages. That leads to lower wages.
(There are a variety of reasons why this story is too simple and why I remain uncertain about developer salaries in the short term)
There is a broader question of whether having the people who are in it for the money leave independently "causes" wages to go down (e.g. if you were to replace all such people with people "purely in it for the passion"). My suspicion is yes, mainly because wage markets are somewhat inefficient: there are always mild cartel-like/cooperative effects in any market; people in it for passion tend to undersell their labor; people in it for the money are much less likely to undersell theirs; and that spills over beneficially to the former.
Note that this broader question is simply unanswerable assuming perfect competition, i.e. a supply-demand 101 perspective (which is why it doesn't make sense to posit "perfect competition" for this question).
It posits durable behavioral differences among suppliers that are not determined purely by supply and demand and that do not update reliably in the face of pricing. That is equivalent to market friction and hence fundamentally contradicts an assumption of perfect competition.
> but you'll still observe small variations due to the limited precision of float numbers
No. Floating point arithmetic is deterministic. You don't get different answers for the same operations on the same machine just because of limited precision. There are reasons why it can be difficult to make sure that floating point operations agree across machines, but that is more of a (very annoying and difficult to make consistent) configuration thing than non-determinism.
(In general it is mildly frustrating to me to see software developers treat floating point as some sort of magic and ascribe all sorts of non-deterministic qualities to it. Yes, floating point configuration for consistent results across machines can be absurdly annoying, and nigh-impossible if you use transcendental functions and different binaries. No, this does not mean that a program giving different results for the same input on the same machine is a floating point issue.)
In theory, parallel execution combined with non-associativity can cause LLM inference to be non-deterministic. In practice that is not the case: LLM forward passes rarely use non-deterministic kernels (and those are usually explicitly marked as such, e.g. in PyTorch).
You may be thinking of non-determinism caused by batching where different batch sizes can cause variations in output. This is not strictly speaking non-determinism from the perspective of the LLM, but is effectively non-determinism from the perspective of the end user, because generally the end user has no control over how a request is slotted into a batch.
> No. Floating point arithmetic is deterministic. You don't get different answers for the same operations on the same machine just because of limited precision. There are reasons why it can be difficult to make sure that floating point operations agree across machines, but that is more of a (very annoying and difficult to make consistent) configuration thing than non-determinism.
Float addition is not associative, so the result of x1 + x2 + x3 + x4 depends on which order you add them in. This matters when the sum is parallelized, as the structure of the individual add operations will depend on how many cores are available at any given time.
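Both points in one minimal Python snippet (same operands throughout):

    # Float addition is deterministic but not associative:
    a = (0.1 + 0.2) + 0.3   # one evaluation order
    b = 0.1 + (0.2 + 0.3)   # same operands, different grouping

    print(a == b)                  # False: 0.6000000000000001 vs 0.6
    print(a == (0.1 + 0.2) + 0.3)  # True: the same order on the same machine
                                   # gives the same answer on every run

A parallel reduction effectively changes the grouping from run to run, which is how you get run-to-run variation without any individual operation being non-deterministic.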
Arbitrary filtering of candidates doesn't reduce the effort that it takes. Let's say 1 out of 1000 of the candidates you see is what you need. The total amount of effort to find the right candidate is still the same. But throwing out half the resumes just doubles the amount of time until you find the candidate you need (you just spread lower effort over a longer time).
On the other hand, if you "raise your bar" (let's say you do so by some method that makes it twice as expensive to judge a candidate; is twice as likely to reject a candidate that would fit what you need, i.e. doubles your false negative rate; but cuts down on the number of applications by 10x, so that now 1 out of 100 candidates is what you need, which isn't that far off the mark for certain kinds of things), you cut the effort (and time) you need to spend on finding a candidate by well over half.
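The arithmetic, using the numbers above plus one assumption (a 5% base false-negative rate, made up for illustration):

    # Expected screening effort = (candidates reviewed per hire) x (cost per review)
    fnr_base, fnr_raised = 0.05, 0.10   # assumed base FNR; doubled by the new bar

    baseline_effort = (1000 / (1 - fnr_base)) * 1   # 1-in-1000 fit, 1 unit each
    raised_effort = (100 / (1 - fnr_raised)) * 2    # 1-in-100 fit, 2 units each

    print(round(baseline_effort, 1))  # ~1052.6 units
    print(round(raised_effort, 1))    # ~222.2 units, roughly a 5x reduction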
EDIT: On reflection, I think we're mainly talking past each other. You are thinking of a scenario where all stages take roughly the same amount of effort/time, whereas tmorel and I are thinking of a scenario where different stages take different amounts of effort/time. If you "raise the bar" at the stages that take less effort/time (assuming those stages still have some selection usefulness), then you will reduce the overall amount of time/energy spent on hiring someone who meets your final bar.
I wasn't suggesting that arbitrarily removing candidates was a good idea; I was simply responding to their specific devil's advocate example of applying "cargo cult screens," which would presumably be arbitrary.
I love the total lack of humility on that site.
"What if the METR study turns out not to capture anything relevant? We just add a constant gap to be conservative!".
But I guess these guys aren't really scientists, so it's probably a lot to ask that they engage critically with what they are doing and be honest about the limitations of their methods.
What if it turns out that the more you scale, the more your LLM resembles a lobotomized human? It looks like it's going really well in the beginning, but you are just never going to get to Einstein. How does that affect everything?
What if it turned out that those AI companies were having a whole bunch of humans solve the problems that are currently just below the 50% reliability threshold they set, and fine-tuning with those solutions? That would make their models perform better on the benchmark, but it's just training for the test... will the constant gap be a good approximation then?
Kokotajlo quit because he didn't think OpenAI would be good stewards of AGI (non-disparagement wasn't in the picture yet). As part of his exit, OpenAI asked him to sign a non-disparagement agreement as a condition of keeping his equity. He refused and gave up his equity.
To the best of my knowledge he lost that equity permanently and no longer has any stake in OpenAI (even if this episode later led to an outcry against OpenAI causing them to remove the non-disparagement agreement from future exits).
https://news.ycombinator.com/item?id=33312227