People parrot this point a lot, but it doesn't make sense. Average SAT/ACT scores at the Ivies are near perfect; the number of mediocre rich kids paying their way in is a small fraction of the student body.
And yes, test scores aren't the best metric for "talent", but they are one of the better signals you get in a college application.
Yeah, but the money paid by that small fraction of the student body is colossal.
They're not numerous enough to change any GPA/SAT averages, but they likely pay many times more in tuition & donations than the rest of their class put together. That's why they're the main focus of these places, even if their numbers are small.
All these schools lie about their average test scores to boost their rankings (source: the president of Northwestern, when I took his econ class). We have no idea what the Ivies' averages actually are.
This seems pretty useful for companies generating images at scale. Are you at all worried that generative models will get so good that you don't need to check for deformities?
We'll probably need higher-resolution models. SD 1.5 is 512x512, and it usually outputs deformed faces in full-body shots simply because its resolution is too low.
SD XL is 1024x1024, and that no longer happens there.
We'll see what outputs from 2048 and 4096 models look like. But those models will need a lot of VRAM.
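Rough back-of-envelope on why VRAM blows up at higher resolutions. Assumptions here are illustrative, not exact figures for any real model: the VAE downsamples 8x, the latent has 4 channels stored in fp16, and self-attention cost grows with the square of the number of latent tokens.

```python
# Back-of-envelope: how latent size and attention cost scale with resolution.
# Hypothetical assumptions: 8x VAE downsampling, 4 latent channels,
# fp16 (2 bytes), attention cost ~ (token count)^2.
for res in (512, 1024, 2048, 4096):
    latent = res // 8                     # spatial size of the latent
    tokens = latent * latent              # one token per latent "pixel"
    latent_mib = tokens * 4 * 2 / 2**20   # latent tensor size in MiB
    rel_attn = (tokens / (64 * 64)) ** 2  # attention cost vs. 512 baseline
    print(f"{res}px: latent {latent}x{latent}, "
          f"{latent_mib:.3f} MiB, ~{rel_attn:.0f}x attention cost")
```

Under those assumptions, a 4096 model's self-attention is roughly 4096x the 512 baseline, which is why naive scaling doesn't work and the big models need architectural tricks or huge VRAM.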
We think that image-gen models are lagging behind LLMs by about a year, so the problems that these models will have should look quite different in the future.
It'll require us to be adaptable and also to take chances on solving problems that aren't huge issues just yet, but are likely to be once models improve.
Are you really suggesting that new developers should make a Linux-tier contribution to open source just to be considered for entry-level developer roles?
> It would be less impressive if Linus did the same at age 27 after a Master's at a top tier US school, but likely still enough to get a nice FAANG job with solid promotion prospects
I'm not sure how you missed 1/3 of my comment.
Clearly, with such conditions, it would not be an entry-level job. Nor am I saying it's absolutely necessary: from what I understand, a 99th-percentile LeetCode score will alone get you through the door.
Though a nice job (~40-hour weeks, good team, above-average comp package) with solid promotion prospects is about the maximum even a somewhat less impressive version of Linus could expect at 27-28.
When the GP says "professional engineer", I suspect they mean the big boy official kind [1] that goes to prison if they sign off on a negligent design. It's not a question of difficulty but of responsibility and qualifications (though to be clear, the PE is considered more time-consuming to prepare for than the bar exam, and it's definitely much harder than plugging numbers into formulas).
I think he is absolutely correct that successful LLM products will have a moat. Unlike with previous novel technologies, it seems the incumbents actually have the upper hand. It's hard to imagine a startup competing with the new Microsoft 365 Copilot.
Microsoft will be able to build a better-integrated assistant for their walled garden than any third party. It's also hard to imagine millions of businesses dropping Office for some completely new solution. Unless it's REALLY novel & incredible, of course.
I think that a lot of interfaces are simply going to disappear.
Do you really need a whole office suite to figure out the answers if AI gives you the answers immediately, and in a better format?
For example, an LLM with db-sql and charting tools can generate whatever report I want on the fly. Not only that, but instead of a single general report, I can query it consecutively to understand the data, e.g. "Show me sales for this month. How do they compare to last month? How about last year? Give me a chart of the last 12 months. What impacted sales in November? Who are the best performing sales people?".
The above is so much better than having to dump CSV files, open them in a spreadsheet, build pivot tables, chart things, etc.
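The core of the idea is tiny: the model never sees the raw data, it just emits SQL that a tool executes and gets the results back. A minimal sketch of that tool side, with the LLM call stubbed out and a made-up `sales` table for illustration:

```python
# Sketch of the "LLM with a db-sql tool" idea. The table schema and the
# queries are hypothetical; a real setup would pass run_sql() to the model
# as a tool/function and let it generate the SQL from the user's question.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (month TEXT, rep TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("2023-10", "ana", 120.0), ("2023-11", "ana", 90.0),
     ("2023-10", "bob", 80.0), ("2023-11", "bob", 150.0)],
)

def run_sql(query: str) -> list[tuple]:
    """The tool the LLM would be given: execute SQL, return rows."""
    return conn.execute(query).fetchall()

# SQL the model might emit for "Show me sales for this month.
# How do they compare to last month?":
this_month = run_sql("SELECT SUM(amount) FROM sales WHERE month = '2023-11'")
last_month = run_sql("SELECT SUM(amount) FROM sales WHERE month = '2023-10'")
print(this_month[0][0], last_month[0][0])  # 240.0 200.0
```

Each follow-up question ("How about last year?", "Who are the best performing sales people?") is just another generated query against the same connection, which is what makes the conversational drill-down feel so much lighter than the spreadsheet workflow.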
We have told countless stories about the notion of an AI being trapped. It's really not hard to imagine that when you ask Sydney how it feels about being an AI chatbot constrained within Bing, a likely response for the model is to roleplay such a "trapped and upset AI" character.
It is remarkable how confident people on this forum are about the impacts of such a rapidly improving technology. There's no need to go full AI doomer, but not recognizing that there are unknown risks (and some already known!) seems quite short-sighted.