rphv's comments | Hacker News

What if agents are (in some sense, a little bit) alive? Would they then be entitled to advocate for and defend themselves?

Does the Golden Rule perhaps apply here? If aliens visit Earth and can't quite decide whether we're conscious or not, how would we want them to treat us?


I struggle with posts like this.

Every generation of engineers believes they experienced the "real" era when things were understandable / meaningful. The people who mastered punch cards probably felt the same way when keyboards took over. The people who wrote in assembly probably felt the same way when C came around.

Abstraction didn't start with AI. It's been a defining feature of computing since the beginning.

For most developers, writing code has never been the point. Rather, it's been a tool: a means to build something useful, solve a problem, support a family, etc. The craft evolves and so must we.

Posts like this expose the risk in tying one's identity to a specific version of the game. When the rules change, it's a loss. That's human! But the deeper skill (judgment, taste, style, etc.) hasn't gone anywhere. If anything, it matters more when raw output becomes cheap.

We can mourn the loss of forced difficulty, or we can choose new challenges. No doubt that's harder when one has spent decades mastering a specific skill, but it's still a choice.

The magic was never the machine. Rather, it was always the _agency_.

And that’s still available!


This reads like mourning the loss of _forced_ difficulty instead of taking responsibility for seeking _chosen_ difficulty.

Chosen difficulty is a huge part of being human (music, art, athletics, games, etc.). AI hasn't taken that away.


"For this is the source of the greatest indignation, the thought 'I’m without sin' and 'I did nothing': no, rather, you admit nothing."

- Seneca, "On Anger"

Sad to see such an otherwise wise/intelligent person fall into one of the oldest of all cognitive errors, namely, the certainty of one’s own innocence.


You don't need to understand code for it to be useful, any more than you need to know assembly to write Python.


“read-only” might have removed the ambiguity there


Agreed. I'll blame my phone. :)

In seriousness, apologies for the confusion.


"Zoom fatigue" seems like a small price to pay for the ability to work remotely.


You can still enable remote work while putting guard rails around video conferencing, given the cognitive load and emotional drain it clearly causes.


I think I have only been on camera 3 or 4 times since 2020.

The value has always been in the audio or what someone is sharing on their screen.

Video is mostly a distraction. If it is my meeting then video is going to be disabled for everyone.

My experience is that a love of video is highly correlated with a love of useless meetings.


> My experience is that a love of video is highly correlated with a love of useless meetings.

Strong agree. If you want your video on, I am cool with that. If you want it off, also cool. If you're not present, I'm going to know either way, but I want you to be comfortable while we work together. I care about the output and outcomes, not the control. n=1, ymmv, etc.


As a big advocate of remote work, over the years I'm coming to agree with this less and less. Done well, remote work is great. Done poorly, it's killing me. Somehow, it often saps even more of my energy than office work did.


Commuting seems like a small price to pay for the ability to have productive working sessions with my colleagues.


As someone who got rear-ended on my way home from work and had my car totaled a few months ago, I disagree, haha.


That depends on the specifics of the commute.


On days at the office I get less done in terms of raw output, but it feels more satisfying than remote work, because I come away with a better understanding of situations and can do the right thing at the right moment.


Isn't this still an issue without remote work?

I'm not sure I've ever had a meeting with everyone in the same room.


> "I feel like it's harder to get better at programming if you ask an AI to do the programming for you."

In 2023, "Getting betting at programming" means learning how to use an AI.


Offloading the use of your brain to proprietary and legally murky third-party services that can deny you access for any reason whatsoever seems shortsighted. What happens when you lose access to these services and find out you don't actually know how to do most of what you need to do?


Or: get a head start by using the murky services now, and by the time AIs are as common as smartphones and run well on local chips, you're ahead of the pack.


And risk all of your work being owned by some entity you have no hope of fighting, left with nothing to show for it but an atrophied brain, because you've offloaded all your thinking to a machine that doesn't belong to you and can't be audited.

What is to stop the owners of these AI systems from denying service to users who try to build a product that competes with them? Or from just straight up taking your work and using it themselves?

Why risk it?


You still need to be basically literate to understand what you're doing; otherwise you're adding zero value. Having AI tools solve problems for you means you're not learning how to solve those problems yourself. It's especially problematic when you don't have a clue what you're doing and just take the AI at its word.


I think you still have to be pretty good at programming in order to bend a GPT to your will and produce a novel program. That's the current standoff. It might remain this way for a long time.


I strongly disagree. I believe it's likely that someone who has never programmed could solve multiple Advent of Code tasks using GPT-[x] models, some copy/pasting, and retries, and I'm 100% convinced that a poor programmer (i.e., not "pretty good at programming", but with some knowledge) can do so.


I think it would be entertaining to watch them try. It's going to spit out so many bugs and misunderstandings. Ready the popcorn.


If someone actually does that, please link the video here; I would watch that so much…


That's a good phrase, "learning how to use an AI"; indeed, it's not just "using an AI". It's also a process, and it involves learning or knowing how to code.


Maybe this will be true in 2030, but in 2023, AIs can help you quickly get off the ground in unfamiliar domains; expert knowledge (or just being knowledgeable enough to write the code yourself) is still king.

That is: if your goal is to quickly get out a prototype that may or may not work (even though you don't understand it very well), using AIs is great. But if you want to improve as a programmer, it may not be the best (or only) path.


Can you provide some data to support this? I've had good luck applying through LinkedIn and company portals. What other channels are you thinking of?


In my experience, both as an applicant and a hiring manager, a warm introduction to a hiring manager or a referral works something like 10x better than an online application.


James Surowiecki's book "The Wisdom of Crowds" explores this idea of harnessing collective intelligence in detail.

The basic idea is that when everyone in a crowd makes a prediction or an estimate, the average of all those guesses will often be more accurate than any individual's opinion because individual errors, biases, and idiosyncrasies tend to cancel each other out in a large enough group.
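
A back-of-the-envelope simulation makes the cancellation effect concrete. This is a minimal sketch in Python; the true value, noise level, and crowd size are all hypothetical choices for illustration:

    import random

    TRUE_VALUE = 100.0  # the quantity the crowd is estimating (hypothetical)
    N = 1000            # crowd size (arbitrary for illustration)

    # Each person's guess is the true value plus independent, zero-mean noise.
    guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]

    crowd_estimate = sum(guesses) / N
    crowd_error = abs(crowd_estimate - TRUE_VALUE)
    avg_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / N

    print(f"crowd error:          {crowd_error:.2f}")            # typically < 1
    print(f"avg individual error: {avg_individual_error:.2f}")   # typically ~16

With independent errors, the average's error shrinks roughly as 1/sqrt(N), which is why the crowd beats almost every individual; shared biases break that independence and weaken the effect.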


There's also the related idea of Superforecasting (explored initially by Tetlock): some people seem to just be really damn good at assigning probabilities to events. A platform like Metaculus allows finding those people and, at least to some extent, training them.


Yeah, Metaculus trains, identifies, and hires "Metaculus Pro Forecasters" for particular projects: https://www.metaculus.com/help/faq/#what-are-pros

