What if agents are (in some sense, a little bit) alive? Would they then be entitled to advocate for and defend themselves?
Does the Golden Rule perhaps apply here? If aliens visit Earth and can't quite decide whether we're conscious or not, how would we want them to treat us?
Every generation of engineers believes they experienced the "real" era when things were understandable / meaningful. The people who mastered punch cards probably felt the same way when keyboards took over. The people who wrote in assembly probably felt the same way when C came around.
Abstraction didn't start with AI. It's been a defining feature of computing since the beginning.
For most developers, writing code has never been the point. Rather, it's been a tool: a means to build something useful, solve a problem, support a family, etc. The craft evolves and so must we.
Posts like this expose the risk in tying one's identity to a specific version of the game. When the rules change, it feels like a loss. That's human! But the deeper skill (judgment, taste, style) hasn't gone anywhere. If anything, it matters more when raw output becomes cheap.
We can mourn the loss of forced difficulty, or we can choose new challenges. No doubt that's harder when one has spent decades mastering a specific skill, but it's still a choice.
The magic was never the machine. Rather, it was the _agency_.
"For this is the source of the greatest indignation, the thought 'I’m without sin' and 'I did nothing': no, rather, you admit nothing."
- Seneca, "On Anger"
Sad to see such an otherwise wise/intelligent person fall into one of the oldest of all cognitive errors, namely, the certainty of one’s own innocence.
You can still enable remote work while putting guard rails around video conferencing, given the cognitive load and emotional drain it clearly causes.
> My experience is that the people who love video is highly correlated with people who love useless meetings.
Strong agree. If you want your video on, I am cool with that. If you want it off, also cool. If you're not present, I'm going to know either way, but I want you to be comfortable while we work together. I care about the output and outcomes, not the control. n=1, ymmv, etc.
As a big advocate of remote work, I've come to agree with this less and less over the years. Done well, remote work is great. Done poorly, it's killing me. Somehow it often saps my energy even more than office work did.
On office days I get less done in terms of raw output, but it feels more satisfying than remote work, because I come away with a better understanding of situations and the ability to do the right thing at the right moment.
Offloading the use of your brain to proprietary and legally murky third-party services that can deny you access for any reason whatsoever seems shortsighted. What happens when you lose access to these services and find out you don't actually know how to do most of what you need to do?
And you risk all of your work being owned by some entity you have no hope of fighting, leaving you with nothing to show for it but an atrophied brain, because you've offloaded all your thinking to a machine that doesn't belong to you and can't be audited.
What is to stop the owners of these AI systems from denying service to users who try to build a product that competes with them? Or from just straight up taking your work and using it themselves?
You still need to be basically literate to understand what you're doing, otherwise you're adding zero value. Making AI tools solve problems for you means you're not learning how to solve those problems. It's especially problematic when you don't have a clue about what you're doing and you just take the AI at its word.
I think you still have to be pretty good at programming in order to bend a GPT to your will and produce a novel program. That's the current standoff. It might remain this way for a long time.
I strongly disagree. I believe it's likely that someone who has never programmed could solve multiple Advent of Code tasks using GPT-[x] models and some copy/pasting and retries, and I'm 100% convinced that a poor programmer (i.e. not "pretty good at programming", but with some knowledge) can do so.
That's a good phrase, "learning how to use an AI"; indeed, it's not just "using an AI". It's also a process, and it involves learning or knowing how to code.
Maybe this will be true in 2030, but in 2023 AIs can help you quickly get off the ground in unfamiliar domains, while expert knowledge (or just being knowledgeable enough to write code) is still king.
That is: if your goal is to quickly get out a prototype that may or may not work (even if you don't understand it very well), using AIs is great. But if you want to improve as a programmer, it may not be the best (or only) path.
Can you provide some data to support this? I’ve had good luck with applying through LinkedIn and company portals. What other channels are you thinking of?
In my experience, both as an applicant and a hiring manager, a warm introduction to a hiring manager or a referral works something like 10x better than an online application.
James Surowiecki's book "The Wisdom of Crowds" explores the idea of harnessing collective intelligence in detail.
The basic idea is that when everyone in a crowd makes a prediction or an estimate, the average of all those guesses will often be more accurate than any individual's opinion because individual errors, biases, and idiosyncrasies tend to cancel each other out in a large enough group.
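A minimal sketch of that cancelling-out effect (toy numbers and an assumed independent, symmetric error model, not anything taken from the book):

```python
import random

# Toy "wisdom of crowds" simulation: each person estimates a true quantity
# with independent noise; the crowd average tends to land much closer to the
# truth than a typical individual guess.
random.seed(42)
TRUE_VALUE = 1_000          # e.g. jellybeans in a jar (made-up number)
CROWD_SIZE = 500

# Each guess = truth scaled by individual error (here: +/- up to 40%)
guesses = [TRUE_VALUE * random.uniform(0.6, 1.4) for _ in range(CROWD_SIZE)]

crowd_average = sum(guesses) / len(guesses)
mean_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"crowd average:         {crowd_average:.0f}")
print(f"crowd average error:   {abs(crowd_average - TRUE_VALUE):.0f}")
print(f"mean individual error: {mean_individual_error:.0f}")
# Because the errors are uncorrelated and roughly symmetric, the crowd-average
# error comes out at a small fraction of the mean individual error.
```

The caveat, of course, is that the effect relies on the errors being independent; if the crowd shares the same bias, averaging won't wash it out.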
There's also the related idea of Superforecasting (explored initially by Tetlock): some people seem to just be really damn good at assigning probabilities to events. A platform like Metaculus allows finding those people and, at least to some extent, training them.