Hacker News

People are worried AI is making us dumber. You hear it all the time. GPS wrecked our sense of direction. Spellcheck killed spelling. Now it’s AI’s turn to supposedly rot our brains.

It’s the same old story. New tool comes along, people freak out about what we’re “losing.” But they’re missing the point. It’s never about losing skills, it’s about shifting them. And usually, the shift is upwards.

Take GPS. Yeah, okay, maybe you can’t navigate with a paper map anymore. So what? Navigation isn’t about memorizing street names. It’s about getting from A to B. GPS makes that way easier, for way more people. Suddenly, everyone can explore, find their way around unfamiliar places without stress. Is that “dumber”? No, it’s just… better navigation. We optimized for the outcome, not the parlor trick of knowing all the streets by heart.

Same with the printing press. Before that, memory was king. Stories, knowledge – all in your head. Then books came along, and the hand-wringing started. “We’ll stop memorizing! Our minds will get soft!” Except, that’s not what happened. Books didn’t make us dumber. They democratized knowledge. Freed up our brains from rote memorization to actually think, analyze, create. We shifted from being walking libraries to… well, to being able to use libraries. Again, better.

Now it’s AI and coding. The worry is, AI code assistants will make us worse programmers. Maybe we won’t memorize syntax as well. Maybe we’ll lean on AI to fill in the boilerplate. Fine. So what if we do?

Programming isn’t about remembering every function name in some library. It’s about solving problems with code. And AI? Right now, it’s a tool to solve problems faster, more efficiently. To use it well in its current form, you need to be better at the important parts of programming:

- Problem Definition: You have to be crystal clear about what you want to build. Vague prompts, vague code. AI kind of forces you to think precisely.

- System Design: AI can write code snippets. As of right now, designing a whole system? That’s still on you. And that’s the hard part, the valuable part.

- Testing and Debugging: AI isn’t magic. At least, not yet. You still need to test, validate, and fix its output. Critical thinking, still essential.

So, yeah, maybe some brain scans will show changes. Brains are plastic. Use a muscle less, it changes. Use a new one more, it grows. Expected. But if someone’s scoring lower on some old-school coding test because they rely on AI, ask yourself: are they actually worse at building software? Or are they just working smarter? Faster? More effectively with the tools available today?

This isn’t about “dumbing down.” It’s about cognitive specialization. We’re offloading the stuff machines are good at – rote tasks, memorization, syntax drudgery – so we can focus on what humans are actually good at: abstraction, creativity, problem-solving at a higher level.

Don’t get caught up in nostalgia for obsolete skills. Focus on the outcome. Are we building better things? Are we solving harder problems? Are we moving faster in this current technological landscape? If the answer is yes, then maybe “dumber” isn’t the right word. Maybe it’s just... evolved. And who knows what’s next?

https://tulio.org/blog/dumber-no-different/



After watching [Oxford Researchers Discover How to Use AI to Learn Like a Genius](https://youtu.be/TPLPpz6dD3A?si=FJJ-S6wz0PPrJuSn) a few days ago, I've been using ChatGPT in "reverse mode" a lot. I give it an excerpt of a text I'm reading and ask it to ask me questions about it at different levels of detail.

I have to say it feels like a superpower! The answers you have to supply yourself really stick in your memory, as do the links that spontaneously form to bodies of knowledge you already have when you answer the deeper-level questions.
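For anyone who wants to try the same thing, here's a minimal sketch of the "reverse mode" loop. The level names and prompt wording are my own invention, and I'm assuming the OpenAI Python client for the round-trip, but any chat API would work the same way:

```python
LEVELS = [
    "recall of key facts",
    "connections between ideas",
    "application to new situations",
]

def build_quiz_prompt(excerpt: str, level: str) -> str:
    """Ask the model to quiz *us* about a passage, instead of answering questions."""
    return (
        "Here is a passage I just read:\n\n"
        f"{excerpt}\n\n"
        f"Ask me three questions that test my {level}. "
        "Wait for my answers before giving feedback."
    )

def quiz_me(excerpt: str) -> None:
    """One round-trip per level; requires OPENAI_API_KEY in the environment."""
    from openai import OpenAI  # assumed client; model name below is a placeholder
    client = OpenAI()
    for level in LEVELS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": build_quiz_prompt(excerpt, level)}],
        )
        print(resp.choices[0].message.content)
```

The trick is entirely in the prompt shape: you hand over the text and demand questions back, escalating from surface recall to transfer.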

I'm thinking that LLMs might actually address some of Plato's complaints against reading and writing:

> You know, Phaedrus, that is the strange thing about writing, which makes it truly correspond to painting. The painter’s products stand before us as though they were alive. But if you question them, they maintain a most majestic silence. It is the same with written words. They seem to talk to you as though they were intelligent, but if you ask them anything about what they say from a desire to be instructed they go on telling just the same thing forever.

See [here](https://fs.blog/an-old-argument-against-writing/).


This is exactly the use case I’ve been thinking about, so thank you for linking this video.

What I want is for my ereader to feed the text I just read into a good LLM and then quiz me on what I read.

What’s kind of funny is that I hated homework as a kid; now I’m basically begging a computer to give me some.


Yes, agreed.

It would be good if some spaced repetition was thrown into the mix as well.
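A toy version of that mix is easy to sketch: grade each quiz answer and feed the grade into an SM-2-style scheduler (the classic SuperMemo algorithm; the constants below are the published SM-2 ones, but treat this as an illustration, not a drop-in tool):

```python
from dataclasses import dataclass

@dataclass
class Card:
    ease: float = 2.5   # SM-2 starting ease factor
    interval: int = 0   # days until next review
    reps: int = 0       # consecutive successful reviews

def review(card: Card, quality: int) -> int:
    """Update a card after a review graded 0-5; return days until next review."""
    if quality < 3:              # failed recall: start the ladder over
        card.reps = 0
        card.interval = 1
    else:
        if card.reps == 0:
            card.interval = 1
        elif card.reps == 1:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.ease)
        card.reps += 1
    # Ease drifts up for easy answers, down for hard ones, floored at 1.3.
    card.ease = max(1.3, card.ease + 0.1
                    - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card.interval

card = Card()
print([review(card, 5) for _ in range(3)])  # intervals grow: [1, 6, 16]
```

Each quiz session would then only resurface the cards whose interval has elapsed, which is exactly the spacing effect you want on top of the LLM-generated questions.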


This assumes you always have reliable GPS:

- areas that lack map details and obstructions that preclude straight line paths, e.g. deep forest

- wartime GPS blocking

- device failure, including a dead battery

- errors in mapping data

- maps that contain other information not found in your GPS-enabled map


Like it or not, you live in an industrial society that depends on a million things you cannot possibly replicate or repair if they fail.


My mom used these arguments against pocket calculators when she taught high school math. What if the battery dies?

Now calculators are solar powered, or built into your...

What if your phone battery dies?


So then carry a map or ask for directions when these situations arise? What does this have to do with the positive trade-offs when using GPS?


Knowing which direction you are facing is an obsolete skill?

I don’t think you’ve been paying attention.

You’ve been watching too many TED talks or something.


What do you do with that knowledge? I mean, it's a good skill if you're actually going to apply it to something. Knowing you're facing north is great if there's some use you get out of it. Finding your way around is way more work than just "north is that way".

I remember the first time I traveled internationally. Incredible stress. I had a bunch of printouts with all the details. Still, I depended completely on the friend I was visiting. I literally didn't dare leave his house without him, because I wasn't confident I could make it back on my own and didn't want to put him to the trouble of rescuing me. Sure, I had a bunch of info, but one wrong decision and my static stack of papers might not be enough to get me out of it.

Technology made an absolutely amazing difference. With GPS I could wander around a city aimlessly and still find a way to my hotel. I could figure out where the center was. The early incarnation was rough but amazing for the amount of stress relief it provided.

And modern tech? Just sci-fi magic. I can see both the usual sights and find various obscure ones, plus any business I might need. With Uber I can get a ride in random countries wherever needed without speaking the local language. Google now tells me about bus and tram routes: what to take, where the station is, which stations I'm going to pass through, and when I'm going to be there. There's a magic real-time translator for both text and voice.


> Sure I had a bunch of info, but one wrong decision and my static stack of papers might not be enough to get me out of it.

If you knew how to read a map it would have been.

I can't imagine being so dependent on a phone which can be accidentally dropped, misplaced, or simply run out of battery charge that I would be lost without it.


Paper has limited information. If you failed to acquire a map with the relevant useful information you have a problem. Not every map contains enough information for every possible need. If you planned on going by car then improvised and took a bus you might not even have bus stops marked on it.

> I can't imagine being so dependent on a phone which can be accidentally dropped, misplaced, or simply run out of battery charge that I would be lost without it.

Same way you deal with anything else: what if you have a car problem? So you plan ahead. Get the car checked before a trip, fill the tank, figure out what to do if it does break.

Phones are easier. I've got a stack of old ones that are still functional, easy to bring an extra one. I have an external battery. You can charge in many cafes and similar, just find a Starbucks or something. You can go to a shop and buy a battery or the cheapest phone they have if it comes to that.


…Did you just chastise me for being able to orient myself?


AI can program, but not engineer. Even then, you eventually reach a point in the project where it is too complex for AI to even do snippets; especially if you are pioneering something new that has never been done before.

A sprinter is unlikely to win a marathon, and that is what using AI to program is like. By the time you have to take over, you have a huge learning curve ahead of you as you can lean on the AI less and less.

If you're doing something boring/boiler-plate, yeah, AI is helpful I guess.


Most people with "engineer" titles spend relatively little of their time on actual quantitative engineering or "higher level" thinking. A lot of their work involves manual information processing: Organizing and arranging things, fitting things together, troubleshooting. This could be justified for a couple of reasons: Maybe a lot of the stuff that was "engineering" is now handled by the CAD software. That's great. But also, the efficiency of those tools has raised the complexity of systems to the point where the interaction between parts consumes most of the engineers' attention.

Managers also spend most of their time on the same things, but handling different kinds of information.

But CAD hasn't changed the immutable laws of engineering, such as Brooks's Law. When I hear about the wonders of AI transforming engineers into higher level thinkers, my snarky response is: "Does this mean that projects will finish on time?"


If your engineers (software or otherwise) aren’t spending a lot of time engineering, then you’ve got a hiring problem. Most jobs I’ve worked as a software engineer are 90% engineering (soft and hard skills) and only about 10% programming. With AI, it becomes about 60% engineering, 20% babysitting an AI, and 30% programming because the AI got it wrong.

Now, we can’t even hand this stuff off to juniors and teach them things they’ll hopefully remember. Instead, I have to explain to an AI, for the 60th time, that it has hallucination problems.

Personally, I’d rather have the juniors back.


> AI can program, but not engineer.

I feel like that's what the OP said. People can focus on the engineering part and not memorizing syntax or function names.

Too often I see people thinking in very binary terms, and we see it here again. AI does everything or nothing. I just keep thinking it'll be in between and people who are good at leveraging every tool at their disposal will reap the largest benefits.


You don't need AI if that's all you're using it for. In fact, IDEs have been doing a fine job at that for years.

It feels right now, that much of the time, AI is a solution looking for a problem to solve.

I find it more useful to treat AI like an easier to search stack overflow. You can ask it to go find you an answer, and then elaborate when it's not the right one.


> People are worried AI is making us dumber.

I'm far from an "LLM Defender" but I've heard plenty of people say "well Google said.." for at least a decade.

I think LLMs accelerate this, but we're not in totally unfamiliar territory here.



This is dead on. I'm not even a big AI fan, but this is a key idea about technology in general. I don't want to have to bring to mind the laws of physics every time I drive to work. The whole point is that a group of engineers encoded them into the machine for me, and now I enhance my capabilities without needing to know how. It's what the classic Alfred North Whitehead quote is talking about.

I understand the impulse towards mastery and ever-expanding knowledge (who doesn't love the idea that they should be able to "plan an invasion, change a diaper, butcher a hog"), but the truth is there is only so much we are capable of mastering in a lifetime. This is why even literal geniuses often fail when they step outside their field of expertise. It's a valid concern that as a society some skills will be lost, or concentrated in the hands of too few, but losing skills and knowledge (or, as I would simply call it, "being permitted to forget") is in general fine.

Now if AI literally killed people's ability to think, that would be one thing. But what I suspect is that, like the parent is saying, it allows you to turn off your brain for certain tasks, like every technology. Then the question is what more complex tasks we can do on top of the automatic and thoughtless ones.

EDIT: I see some good replies to parent about stability/reliability, alienation etc. There are definitely tradeoffs to the power you get from technology, and it's worth acknowledging those. But that's exactly the framework we should be thinking in. What are the tradeoffs involved? Often these kinds of stories are one-sided arguments that imply losing skills is straightforwardly bad, when in truth it's more complicated than that.


> so we can focus on what humans are actually good at

You know what humans are good at? Deluding ourselves. Because that's what you're doing. Using vague, feel-good words, based on vague analogies, with no proof, to keep the inconvenient truth at bay. Not being able to navigate with a paper map is a big thing: people get lost inside buildings without a map. Next time the power fails, half of Gen Z will be lost. Not being able to write with pen and paper is a big thing. Not being able to add a few numbers is not as big a thing, but it certainly can be a problem. And what for? So you don't have to feel bad about using AI tools?

You know what comes next? Everything based on audio and video. Are you then going to argue: reading is an obsolete skill?


GPS still works when power fails.


Until your phone battery runs out.


> It’s the same old story. New tool comes along, people freak out about what we’re “losing.” But they’re missing the point. It’s never about losing skills, it’s about shifting them. And usually, the shift is upwards.

Except for that widespread feeling of hopelessness, alienation, powerlessness, lack of motivation, and lack of ambition. Almost as if not learning any human skills, and relying solely on technology for everything might have some second-order effects.


I used to think the same way you do: people are resistant to change, but eventually it's better for everybody.

I do believe that GPS made people worse drivers. It made people lose their sense of direction and distance. It has removed all critical thinking on the road. There are plenty of stories of people driving down stairs because the GPS told them to.

From a driver's ability to navigate, I don't think you can do more now with GPS than you can before with a map. It surely has made it easier, but at a significant cost.

Now, of course, there are plenty of benefits, such as reducing the time to get somewhere unknown (e.g. ambulances), planes not flying over hostile territory (mostly), the ability to tell someone where you are when there are no landmarks around, etc.

But the reality is that overall, a mistake of a GPS is usually rather localized, and the cost of the mistake is rather low.

Books are interesting: instead of memorizing details we now memorize where to find information, little bits that help us get to the solution of the problem we're solving. But books themselves haven't replaced memory; otherwise no one would read them ahead of time anymore.

When we search for something on the internet we are taught to apply critical thinking. What are the sources? [0] But GPS? Just go with it.

And AI is more like GPS than it is like books. We are being taught to take it at face value, and to abandon critical thinking for the sake of speed. Worse yet, because of the enormous financial investments of companies, there is an incentive to lie about how usable it is.

I'm not even talking about context windows. I'm talking about the endless minutiae of languages, frameworks, and changes tied to specific versions that you only learn by doing. Just the same way a resident does not become a doctor until they finish residency: they have to have done the work and applied critical thinking.

Software engineering does not have such legal requirements, but we all learn on the job. AI, and the companies pushing it, basically tell potential clients that this is no longer needed. Would you want gallbladder surgery done by someone who just read a Wikipedia page about it?

Now, a seasoned developer who writes a crystal clear prompt will probably pick up on bugs, and tell the AI that they want edge cases A, B and C considered. But how did they learn that those exist? Right. By hitting the issues.

Something that happens a lot in software engineering, due to the massive amount of things out there and the lack of fixed specs/docs/etc., is that your approach changes while you're developing a solution to a problem. But the need for those changes only becomes apparent when you're writing and testing code.

You literally cannot front-load that into your prompt. Yet, reading the news here, we see that our future is writing prompts for a much lower wage. This is orthogonal to why I went into software engineering. Prompts rob me of the ability to express something in an extremely well-defined language. Clarity of rules. A syntax where you can express something without ambiguity [1].

You don't know what you don't know, meaning you can't prompt for what you don't know. Hence why they brought a whole bunch of people back out of retirement to build new MANPADS.

[0] Interestingly, when I was growing up a book quote was OK, but Wikipedia was not, even though it came from a book. That has now definitely changed.

[1] A wife sends her programmer husband to the grocery store for a loaf of bread... On his way out she says "and if they have eggs, get a dozen". The programmer husband returns home with 12 loaves of bread....


A good programmer husband would’ve asked “a dozen of what?” A poor programmer assumes they understand the statement, much like an LLM does.



