Right now, this standard recruiter question is on the side of the table that's often being especially penny-wise and pound-foolish...
A weird thing I'm seeing is early AI startups lowballing both salary and equity for AI startup jobs, compared to a few years ago for generic Web/app developer jobs.
You're in a narrow opportunity window of a massive investment gold rush. You probably got funding with a weak/nonexistent business model, and some mostly vibe-coded demo and handwavey partnership.
Now you need to hire a few good founding engineer types who can help get the startup through a series of harder milestones, with skillsets less clear than for generic Web/app development. If you can hire people as smart and dedicated as yourself, they'll probably do things that make a big positive difference, relative to what bottom of the barrel hires will do.
So why would you lowball these key early hires, at less than a new-grad starting salary, plus a pittance of ISOs that will be near-worthless even if you have a good exit?
Is it so that the founders and investors can have the maximum percentage of... something probably less valuable than what they'd get by attracting and aligning the right early hires? (Unless it's completely an investment scam, in which case genuine execution doesn't affect the exit value.)
I’ve also noticed this, and it causes real issues long term when you want to build the product. Suddenly management is surprised that your senior engineer with no relevant experience is taking a long time, and needs to bring in half a million dollars' worth of consultants to actually do the work. It stresses everyone else out, and then you end up with a lot of churn, a lot of burn, and very little internal knowledge to build on for the future.
1. Removes the pain of age verification, encouraging some people to stay in the proprietary walled garden when everyone would be better served by open platforms (and network effects).
2. Provides a pretext for more invasive age verification and identification, because "the privacy-respecting way is too easily circumvented".
3. Encourages people to run arbitrary code from a random Web site in connection with their accounts, which is bad practice, even if this one isn't malware and is fully secure.
Proving that something is possible doesn't mean encouraging it. This was a beautiful piece of reverse engineering that shows how hard it can be to verify personal data without invading privacy. I prefer this awareness to blind trust.
The code was released, therefore it is not arbitrary (problem #3). Should companies react with more invasive techniques (problem #2), users can always move to other platforms (problem #1).
>users can always move to other platforms (problem #1)
Until the cycle restarts again with new platforms.
Also, I am convinced that self-hosting, or getting a new platform (including a return to traditional forums) up and running, may well be bureaucratically harder at this point, given the case of lfgss' shutdown: https://news.ycombinator.com/item?id=42433044
This seems to suggest that unless a drop-in replacement is immediately available today, there is no utility in encouraging that growth.
There are multiple open-source tools that do everything Discord does. There are few-to-none that offer everything Discord does, and certainly none that are centralized, network-effect-capture-ready.
Short term:
* Small group chats with known friends: Signal, WhatsApp, IRC, Matrix
* Community chat: Zulip, Rocket.Chat
* Community voice: Mumble, TeamSpeak
* Video / screen sharing and voice chat: Zoom, BigBlueButton, Jitsi
If you want to host your own Stoat server, you will also need to recompile the apps to use your URL and distribute them to your friends, and they will not be compatible with any other server.
Yes, I learned in the Zulip promo discussion earlier this week that self-hosted push notification servers have to have certs compiled directly into the app. I can't tell if it's malice, indifference or incompetence to have that design; any answer is completely believable.
Is there an architectural opportunity to build a "Self-hosted push notification" app and business, where the push broker builds an app to deploy to play, then the self-hosted apps build trust with the broker. The broker app sends push notifications to the user device, which can inform them of the message sent and open arbitrary app windows?
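To sketch what I mean (all names here are hypothetical, not an existing product): self-hosted servers enroll with the broker once, the broker's single store-approved app registers device tokens, and the broker just fans opaque payloads out to the right devices. A minimal sketch of the broker side, assuming that architecture:

```python
# Hypothetical sketch of the proposed broker's fan-out logic:
# self-hosted servers send messages for a user, and the broker
# relays them to that user's registered device tokens.

class PushBroker:
    def __init__(self):
        # user_id -> set of device tokens registered by the broker's app
        self.devices = {}
        # server_id -> shared secret established when a self-hosted
        # server enrolled with the broker ("builds trust")
        self.server_keys = {}

    def enroll_server(self, server_id, key):
        self.server_keys[server_id] = key

    def register_device(self, user_id, token):
        self.devices.setdefault(user_id, set()).add(token)

    def relay(self, server_id, key, user_id, payload):
        # Reject messages from servers the broker doesn't trust.
        if self.server_keys.get(server_id) != key:
            raise PermissionError("unknown or mis-keyed server")
        # The broker never interprets the payload; the on-device app
        # decides what notification to show or which screen to open.
        return [(token, payload) for token in self.devices.get(user_id, ())]
```

This is roughly the shape that UnifiedPush-style distributors already take, for what it's worth, so the open question is more the business model than the architecture.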
None of those play in the same league as discord for hosting a community, and none of them look in a position to be there in the foreseeable future. It sucks but that's how it is.
This is how it always is, until suddenly one day it isn't. Linux didn't play in the same league as serious and commercial UNIX systems until one fateful day it killed them all dead forever.
My previous two startups used GitLab successfully. The smaller startup used paid-tier hosted by gitlab.com. The bigger startup (with strategic cutting-edge IP, and multinational security sensitivity) used the expensive on-prem enterprise GitLab.
(The latter startup, I spent some principal engineer political capital to move us to GitLab, after our software team was crippled by the Microsoft Azure-branded thing that non-software people had purchased by default. It helped that GitLab had a testimonial from Nvidia, since we were also in the AI hardware space.)
If you prefer to use fully open source, or have $0 budget, there's also Forgejo (forked from Gitea). I'm using it for my current one-person side-startup, and it's mostly as good as GitLab for Git, issues, boards, and wiki. The "scoped" issue labels, which I use heavily, are standard in Forgejo, but paid-tier in GitLab. I haven't yet exercised the CI features.
I just checked out Forgejo. I think I'll start with it; it looks clean and lightweight. For my homelab I don’t have very large requirements, so it might be a good starting point for me.
I'm happy for Guile to be getting more attention, but wouldn't write off Racket. A few quick thoughts...
* The recent Guile work on WASM is promising. (Note also Jens Axel Soegaard's recent work on WASM with a Racket-related compiler.)
* Racket's rehosting atop Chez seems like a good idea, and I'd guess that the Racket internals are now easier to work with than Guile's.
* Racket has done a lot of great work, and is a nice platform for people who can choose their tools without worrying about employability keywords for their resume. It made some missteps for broader adoption when it had a chance, and several of the most prominent industry practitioner contributors left.
* Racket still has the best metaprogramming facilities, AFAIK. But even more important than `syntax-parse` and `#lang`, one thing I'd really like from Guile and other Schemes is support for Racket's module system.
(I also wanted to play with Racket's module system for PL research compilers: having early compiler implementation for a new language first expand into Scheme code, and then later (with submodules) also do native/VM code generation, while keeping the option to still expand to Scheme code (for better development tools, or for when changing the language). For example, imagine targeting a microcontroller or a GPU.)
* Right now, any Scheme is for people who don't have to do techbro/brogrammer interviews. The field has been in a bad place for awhile, professionalism-wise, and the troubled economy (and post-ZIRP disruption of the familiar VC growth investment scams) and the push to "AI" robo-plagiarism (albeit with attendant new investment scams) are suddenly making the field worse for ICs.
> Any news content created using generative AI must also be reviewed by a human employee “with editorial control” before publication.
To emphasize this: it's important that the organization assume responsibility, just as they would with traditional human-generated 'content'.
What we don't want is for these disclaimers to be used like the disclaimers of tech companies deploying AI: to try to weasel out of responsibility.
"Oh no, it's 'AI', who could have ever foreseen the possibility that it would make stuff up, and lie about it confidently, with terrible effects. Aw, shucks: AI, what can ya do. We only designed and deployed this system, and are totally innocent of any behavior of the system."
Also don't turn this into a compliance theatre game, like we have with information security.
"We paid for these compliance products, and got our certifications, and have our processes, so who ever could have thought we'd be compromised."
(Other than anyone who knows anything about these systems, and knows that the stacks and implementation and processes are mostly a load of performative poo, chosen by people who really don't care about security.)
Hold the news orgs responsible for 'AI' use. The first time a news report wrongly defames someone, or gets someone killed, a good lawsuit should wipe out all their savings on staffing.
Like I’ve said a few times on HN, if you have 10 friends and ask them what they want to eat for dinner, and 6 say “let’s go to a Mexican restaurant” and the other four say “let’s kill Bob and eat him”, it still tells you a lot about your friend group. It tells you even more if the person advocating eating Bob is made the leader of your group and decides where you are going to eat dinner for the next four years.
Especially after you have already seen what your friend has already done for four years
Because it doesn’t matter if you, or even 60% of the population, don’t approve of what Trump is doing (including posting a racist meme showing the Obamas as apes yesterday); it still tells you about the country we live in.
> There are many things wrong with Canada. It has a go for bronze mentality, the smartest of us keep going to the US because there is not enough opportunity here, much of its public infrastructure is crumbling and the housing prices are frightful. The nation is very obviously sick.
Is anyone currently moving from Canada to the US?
If so, are they the "smartest", or do they simply have different priorities than a lot of equally smart people?
Virtually all of the top performers at my school left for the USA immediately after graduation.
I think somewhere between 70% and 90% of Waterloo CS graduates leave every year.
Turns out doubling or tripling your take home compensation is absolutely worth it.
You can buy a house instead of renting an apartment with roommates. You can afford to marry and have children. You can buy all the things the government would've provided you had it not been dysfunctional.
Plus, there are just more jobs in SWE in the USA. Many of my classmates graduating last year in June are still unemployed since you have to be exceptional to get a job here.
Pretty much anyone who can get TN1/H1B/L1B does, unless you were born wealthy, have an extreme sense of patriotism, or have a very strong attachment to family.
That's definitely true, though probably only limited to CS (and maybe the top 5% of people going to investment banking). The vast majority of the most intelligent people at any Canadian university will stay in Canada, and given the current political situation, I think that's probably only going to become more true.
Also, anecdotally, I would wager that schools with significant numbers of Americans (e.g., McGill) probably have more US students staying in Canada than vice versa at this point (with perhaps the exception of CS).
> Virtually all of the top performers at my school left for the USA immediately after graduation. [...] Turns out doubling or tripling your take home compensation is absolutely worth it. [...] You can buy a house instead of renting an apartment with roommates. You can afford to marry and have children. You can buy all the things the government would've provided you had it not been dysfunctional.
And how does the "dysfunction" of the current Canadian government compare to what is happening in the US, in your eyes?
> Plus, there are just more jobs in SWE in the USA.
There is the rational answer... for graduates in software.
I took issue with this too, but chose to interpret it charitably. It’s true that a lot of our most qualified people move to the US because of the money.
That questionable-sounding stunt by the media outlet wasn't comparable: Google/Alphabet knows much more about individuals than addresses, salary, and political donations.
Google/Alphabet knows quite a lot about your sentiments, what information you've seen, your relationships, who can get to you, who you can get to, your hopes and fears, your economic situation, your health conditions, assorted kompromat, your movements, etc.
Schmidt is actually from OG Internet circles where many people were aware of privacy issues, and who were vigilant against incursions.
But perhaps he had a different philosophical position. Or perhaps it was his job to downplay the risks. Or perhaps he was going to have enough money and power that he wasn't personally threatened by private info that would threaten the less-wealthy.
We might learn this year how well Google/Alphabet protects this treasure trove of surveillance-state data, when that matters most.
If it was his job to downplay the risks, then he absolutely deserved at least this.
Google, or any other US company, will not be defending your data or anyone else's. It's not only that they don't want to (which they don't), but that they simply can't.
You must comply with the law, and you do not currently want to piss off anyone at the top.
It was probably a decade ago, but I recall using something within Google that would tell you who they thought you were. It profiled me as a middle-aged Middle Eastern man or something like that, which was… way off.
If I were extremely cynical, I would suspect they might have intentionally falsified that response to make it seem like they were more naive than they actually were.
I suspect the more likely scenario is they don't actually care how accurate these nominal categorizations are. The information they're ultimately trying to extract is, given your history, how likely you are to click through a particular ad and engage in the way the advertiser wants (typically buying a product), and I would be surprised if the way they calculate that was human interpretable. In the Facebook incident where they were called out for intentionally targeting ads at young girls who were emotionally vulnerable, Facebook clarified that they were merely pointing out to customers that this data was available to Facebook, and that advertisers couldn't intentionally use it.[0] Of course, the result is the same, the culpability is just laundered through software, and nobody can prove it's happening. The winks and nudges from Facebook to its clients are all just marketing copy, they don't know whether these features are invisibly determined any more than we do. Similarly, your Google labels may be, to our eyes, entirely inaccurate, but the underlying data that populates them is going to be effective all the same.
I think it's their currently targeted ad demographic or whatever. It's probably a "meaningless" label to humans, but to the computer it makes more sense; he probably watches the same content / googles the same things as some random person who got that label originally, and then anyone else who matched it.
Male, lives in this region, has an income between X and X+40,000, and has used the following terms in chat or email, regardless of context, in the last 6 months: touchdown, home run, punt, etc.
The ad game is not about profiling you specifically; it's about how many people in a group are likely to click and convert to a sale. They're targeting 6 million people, not you specifically, and that's balanced against how much the people who want the ads are willing to pay.
Palantir, or Chinese social credit, etc., is targeting you specifically, and they don't care about costs if it means they can control the system, forever.
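As a toy illustration of that kind of rule (all field names and thresholds invented for the example), a cohort is just a cheap predicate run over millions of profiles, with no need for any one profile to be accurate:

```python
# Hypothetical cohort filter: the advertiser buys everyone matching a
# coarse rule, not any specific individual.
SPORTS_TERMS = {"touchdown", "home run", "punt"}

def in_cohort(profile):
    # Matches the example rule: male, in-region, income in a 40k band,
    # and any sports term used recently, regardless of context.
    return (
        profile.get("sex") == "male"
        and profile.get("region") == "this-region"
        and 40_000 <= profile.get("income", 0) < 80_000
        and bool(SPORTS_TERMS & set(profile.get("recent_terms", ())))
    )

profiles = [
    {"sex": "male", "region": "this-region", "income": 55_000,
     "recent_terms": ["punt", "mortgage"]},   # matches the rule
    {"sex": "male", "region": "elsewhere", "income": 55_000,
     "recent_terms": ["punt"]},               # wrong region, excluded
]
audience = [p for p in profiles if in_cohort(p)]
```

A false positive on you individually costs the advertiser almost nothing; the rule only has to be right on average across the group.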
The idea that Google’s lack of knowledge of you a decade ago is somehow related to what they know today is naive. Dangerously naive, I would say. Ad-targeting technology (= knowledge about you) is shockingly good now.
Color me unconvinced. Google can't even figure out what language I speak, even though I voluntarily provide them that information in several different ways. I can't understand half the ads they serve me.
Google doesn't choose what ad to show you. Google serves up a platter of details and auctions the ad placement off to the highest bidder.
That platter of details is not shown to you, the consumer.
What you are experiencing is that your ad profile isn't valuable to most bidders, i.e., you don't buy stuff as much as other people do, or your ad profile is somehow super attractive to stupid companies that suck at running ads and overpay for bad matches.
It is not evidence that Google knows nothing about you.
Google is pleased that you think they don't know you. It helps keep the pressure down when people mistake this system for "perfectly targeted ads". The system is designed to make Google money regardless of how good or bad their profile of you is.
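A minimal sketch of that mechanism (heavily simplified; real exchanges are far more elaborate): the seller shows bidders a profile, collects bids, and the highest bidder wins, classically paying the runner-up's price. Note that the seller's revenue depends only on what bidders *believe* the profile is worth, not on whether it's accurate:

```python
def run_auction(bids):
    """bids: dict of bidder -> bid amount.
    Second-price rules: highest bidder wins, pays the runner-up's bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    _, price = ranked[1]
    return winner, price

# The seller gets paid whether or not the underlying profile was
# accurate; a bad profile just attracts worse bids, not zero revenue.
winner, price = run_auction({"acme": 1.20, "globex": 0.90, "initech": 0.40})
# winner == "acme", price == 0.90
```

Which is exactly why bad ads landing on you aren't evidence of an empty profile, just of who happened to bid on it.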
It's not just the ads, though. Am I to think that YouTube helpfully replacing a video title (whose original text I understand) with a half-assed translation into a language that I don't speak is actually Alphabet playing 5D chess? If so, hats off to you, Google. I totally fell for it.
> Schmidt is actually from OG Internet circles where many people were aware of privacy issues, and who were vigilant against incursions.
> But perhaps he had a different philosophical position. Or perhaps it was his job to downplay the risks
I feel that as the consumer surveillance industry took off, everyone from those OG Internet circles was presented with a choice - stick with the individualist hacker spirit, or turncoat and build systems of corporate control. The people who chose power ended up incredibly rich, while the people who chose freedom got to watch the world burn while saying I told you so.
(There were also a lot of fence sitters in the middle who chose power but assuaged their own egos with slogans like "Don't be evil" and whatnot)
Yeah, being unaffected by social pressure when philosophizing about what is moral and liberating is strongly related to being unaffected by social pressure regarding personal hygiene and social norms, unfortunately. Still I'd rather have the weirdos, especially this one particular weirdo, than not! Stallman has blazed the trail for us slightly-more-socially-aware types to follow, while we look/act just a little more reasonable.
There's a popular video on YouTube of him eating skin peeled from his foot during a lecture at a college. Not AI, very old, repellent to normal people.
I'm a bit awestruck. Was there any discussion about it among your peers? We might be a generation or two apart, I saw that video when I was not yet an adult and it might have been literally part of my introduction to the person that is Richard Stallman. It definitely wasn't a good first impression.
Yes, I remember that period of conscious choice, and the fence-sitting or rationalizing.
The thing about "Don't Be Evil" at the time, is that (my impression was) everyone thought they knew what that meant, because it was a popular sentiment.
The OG Internet people I'm talking about aren't only the Levy-style hackers, with strong individualist bents, but there was also a lot of collectivism.
And the individualists and collectivists mostly cooperated, or at least coexisted.
And all were pretty universally united in their skepticism of MBAs (halfwits who only care about near-term money and personal incentives), Wall Street bros (evil, coming off of '80s greed-is-good pillaging), and politicians (in the old "their lips are moving" way, not like the modern threats).
Of course it wasn't just the OG people choosing. That period of choice coincided with an influx of people who previously would've gone to Wall Street, as well as a ton of non-ruthless people who would just adapt to what culture they were shown. The money then determined the culture.
Sorry, I didn't mean to write out the hacker collectivists. I said "individualist" because to me hacking is a pretty individualist activity, even if one's ultimate goal is to contribute to some kind of collective. Or maybe I just don't truly understand collectives, I don't know.
But yes, individualists and collectivists mostly cooperated and coexisted. I'd say this is because they were merely different takes on the same liberating ground truths. Or at least liberating-seeming perceptions of ground truths...
It wasn’t a stunt and there was nothing questionable about it. I’m amazed by how easily people shit all over journalists - it really has to end because it is precisely how truth dies.
Here’s a question - since you have such strong feelings did you write the editor of the piece for their explanation?
Having met him one time he seemed like just a really intense dude who embodied the chestnut “the CEO is the guy who walks in and says ‘I’m CEO’.” I dunno if there’s more to it than that.
PSA for any grad student in this situation: get a lawyer, ASAP, to protect your own career.
Universities care about money and reputation. Individuals at universities care about their careers.
With exceptions of some saintly individual faculty members, a university is like a big for-profit corporation, only with less accountability.
Faculty bring in money, are strongly linked to reputation (scandal news articles may even say the university name in headlines rather than the person's name), and faculty are hard to get rid of.
Students are completely disposable, there will always be undamaged replacements standing by, and turnover means that soon hardly anyone at the university will even have heard of the student or internal scandal.
Unless you're really lucky, the university's position will be to suppress the messenger.
But if you go in with a lawyer, the lawyer may help your whistleblowing to be taken more seriously, and may also help you negotiate a deal to save your career. (As one example of where help is needed: you need the university's/department's cooperation to switch advisors gracefully, with funding, even as the uni/dept is trying to minimize the number of people who know about the scandal.)
I found mistakes in the spreadsheet backing up 2 published articles (corporate governance). The (tenured Ivy) professor responded by paying me (after I’d graduated) to write a comprehensive working paper that relied on a fixed spreadsheet and rebutted the articles.