
> Extrapolating from the last two years

Therein lies the error. People forget reality is finite, and just because something improved recently doesn't mean it will continue improving indefinitely.

  An exponential curve is just a sigmoid curve in disguise. 
Most AI systems I've seen suffer from catastrophic errors at the tail end (the famous example of two near-identical cat pictures classified as cat and dog respectively).
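The exponential-vs-sigmoid point is easy to see numerically. A quick sketch (all curve parameters invented for illustration): a logistic curve and the exponential matched to its early growth are nearly indistinguishable well before the inflection point, then diverge wildly.

```python
import math

# Assumed parameters: carrying capacity L_cap, growth rate k, inflection point t0.
L_cap, k, t0 = 1000.0, 1.0, 10.0

def logistic(t):
    # Sigmoid: L / (1 + exp(-k*(t - t0)))
    return L_cap / (1 + math.exp(-k * (t - t0)))

def exponential(t):
    # Exponential matched to the logistic's early-time behavior: L * exp(k*(t - t0))
    return L_cap * math.exp(k * (t - t0))

# Near-identical below the inflection point, wildly different past it:
for t in [0, 4, 8, 10, 12]:
    print(t, round(logistic(t), 3), round(exponential(t), 3))
```

If all you have observed is the left half of the curve, you cannot tell from the data alone which one you are on.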


What are the odds that scaling continues to lead to massive AI improvements? No one is saying 100%, though you seem to be arguing that they are. If you're willing to put a confidence interval on the odds, with evidence, we can have an actual conversation about the best course of action, but just talking past each other with "it might continue scaling" / "no it won't" doesn't seem particularly helpful.

I think the important thing here, though, is the difficulty of creating an accurate confidence interval that isn't [0-100]. We are truly in uncharted territory.


> No one is saying 100%

Points at the AI moratorium. I think people are arguing it's inevitable.

Putting error bars on a gut feeling: interesting idea. I'd say in 10-20 years we'll not see anything revolutionary, as in AI smart enough to continue working on improving AI.

So in 10-20 years I don't expect fully self-driving cars (unsupervised, any terrain, a better driver than 99.9% of humans).

AI might see use in industry, but I doubt it will be unsupervised, unless we start living in Idiocracy and decide highly risky tech is better than the average person.


> I'd say in 10-20 years we'll not see anything revolutionary, as in AI smart enough to continue working on improving AI

You do realize we've just at least doubled the amount of cognitive bandwidth on earth right? For every one brain, there are now two. A step-wise change in bandwidth is most definitely going to have some very interesting implications and probably much sooner than 10 years.


> You do realize we've just at least doubled the amount of cognitive bandwidth on earth right?

What do you mean? The human population is now past the exponential part of the curve and entering saturation.

Or do you mean adding ChatGPT? Then it's not doubled. Pretty sure that's centralized.


> Or do you mean adding ChatGPT? Then it's not doubled. Pretty sure that's centralized.

Is it though? Think about it. If I have a medical or legal conversation with it that I derive value from, that's akin to a fully trained professional human who took several decades to grow into an adult and then undergo training just popping into existence in seconds during inference and then just as quickly disappearing. Just a few short months ago, the only way I could have had that conversation was to monopolize the cognitive bandwidth of a real human being who then couldn't utilize it to do other things.

So, when I say doubled, what I mean is that in theory every person who can access a sufficiently powerful LLM can effectively create cognitive bandwidth out of thin air every time they use the model to do inference.

Look what happened every time there has been a stepwise change in bandwidth throughout history. Printing press, telephone, dial up internet, broadband, fiber optic. The system becomes capable of vastly more throughput and latencies are decreased significantly. The same thing is happening here. This is a revolution.


> You do realize we've just at least doubled the amount of cognitive bandwidth on earth right?

I don't realize that. What do you mean exactly?


LLMs can perform cognitive labor. Moreover they can increasingly perform it at a level that matches and/or surpasses humans. Humans can use them to augment their own capabilities, and execute faster on more difficult tasks with a higher rate of success. In addition, cognitive labor can be automated.

When bandwidth increases, latency decreases. We're going to be able to get significantly higher throughput than we were previously able to. This is already happening.


Nope, it's now another voice, a colleague whose work you constantly have to check because you can't trust it enough to just say, "go ahead, I know you're trustworthy".

> This is already happening.

??


Sort of just proves my point, no? It's faster for me to check its work than to do the work from scratch myself, and that work isn't monopolizing the wetware of another human being. Cognitive bandwidth has increased. The system is capable of more throughput than before. In fact, how much more throughput could you get if you employed enough instances of LLMs to saturate the cognitive bandwidth of a human whose sole job is simply to verify the outputs?

If you look at the leap from GPT-3 to GPT-4 in terms of hallucinations, capabilities, etc., and combine that with advances in cognitive architectures like Reflexion and AutoGPT, it's pretty clear the trajectory is one of increasing competence and trustworthiness.

The degree to which you need to check its work depends on your use case, level of risk tolerance, and methods for verification. I think one of the reasons AI art has absolutely exploded is that there are no consequences for a generation that fails, and it can be verified instantly. Compare that to doing your taxes, where the stakes are high if you get it wrong: you're far less likely to rely on it. There is a landscape of usefulness with different peaks and valleys.
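The "saturate a human verifier" question above comes down to simple arithmetic. A toy model (all numbers assumed, purely illustrative): if verifying an LLM's output takes a fraction of the time it would take a human to produce the work from scratch, one reviewer can keep several model instances busy.

```python
# Assumed toy numbers, not measurements:
human_hours_per_task = 2.0    # time for a human to do the task from scratch
verify_hours_per_task = 0.25  # time for a human to check a model's output

# Tasks per hour working alone vs. purely verifying model outputs:
solo_throughput = 1 / human_hours_per_task
verify_throughput = 1 / verify_hours_per_task

# How many tasks flow through one human per hour when they only verify:
speedup = verify_throughput / solo_throughput
print(f"throughput multiplier: {speedup:.0f}x")  # 8x under these assumptions
```

Under these made-up numbers one verifier sustains the output of eight solo workers; the real ratio obviously depends on how cheap verification actually is for the task at hand, which is the whole point of the "peaks and valleys" landscape.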


What is one of the professional use cases where you would feel comfortable YOLOing some ChatGPT-generated code into prod? Publishing in a journal without verification, etc.?

You should also take note of the warnings in the GPT-4 manual; it's a much more convincing liar than GPT-3. It says that quite explicitly.

My fear is that I just get lazy and trust it all the time.

> I think one of the reasons AI art has absolutely exploded is that there are no consequences for a generation that fails, and it can be verified instantly.

What are you talking about exactly?


What's with the assumption that anyone needs to YOLO anything? Your coworkers don't let you YOLO your code to prod, and you don't let them YOLO their code to prod. Trust but verify, right?

My point with the AI art comment is that not every output of these models is something that needs to go to production! There's a continuum of how much something matters if it's wrong, and it depends on who is consuming the output and what it is they need to do with it, and the degree to which other stakeholders are involved.


Not at all. They're saying the current probability is high enough to warrant a cease research agreement because the risks outweigh the rewards.


Your definition of what would be revolutionary is likely the last thing humans will achieve; there are a lot of revolutionary things that will happen between here and there.

I'm not sure what you are using as a definition of AI but I would say it is already being used massively in industry, and a lot of harm can be done even if it isn't autonomous.


Why is that an error? This is very new tech; if anything the rate of change is accelerating and nowhere near slowing down. That it will flatten at some point goes without saying (you'd hope!), but for the time being it looks like we're on the increasing-first-derivative part of the sigmoid, not the flattening part. And two years to a decade takes you roughly from "2 kg luggable" to "iPhone", and that was a sea change.


> Why is that an error?

First. It seems like these AI models depend on the underlying hardware acceleration doubling, which is not really the case anymore.

Second. All AIs I've seen suffer from the same "works fine until it just flips the fuck out" behavior (and starts hallucinating). You wouldn't tolerate a programmer who worked fine except that he would occasionally come to work high enough on bath salts to start claiming the sky is red and aliens have infiltrated the Great Wall of China. AIs that don't suffer from this aren't general purpose.

Third. I'm not convinced by the "we'll make AI whose job will be to make smarter AI, which will make smarter AI" argument. A smart enough AI could just rewire its reward mechanism to get the reward without the work (or collude with the other AIs meant to monitor it and just do nothing).


I bought my GPU back in 2021. I have the same computing power now as I had then, but back then it could only generate crappy pictures. AI image generation has improved massively in a few months without increased hardware requirements.


Why do you believe there will be no significant improvements beyond SOTA in the coming years / decades?

That's an incredibly strong stance...

I'd love to hear your assessment of the improvements from GPT-3.5 to GPT-4. Do you not think it is a large jump?



