sbrother's comments | Hacker News

Seriously. I didn't even realize this was a wide issue, but I couldn't find a school enrolment email I was looking for this morning, and found it in the spam folder. The fact that I basically never have to do this is actually amazing.

I wonder about the difference in experience that different people have with Gmail's spam filter. In my case, the majority of emails that go to my Gmail spam folder are legitimate. I don't actually receive much spam, a single-digit number of emails per month (in the past 30 days, 2 emails), so any time I see anything in my spam folder I have to check it so I can rescue the email if it's legitimate.

This is my experience also. Closely guarded email, haven't received _any_ spam to it to date, but a large volume of false positives. This, among other reasons, actually led to my setting up my own email server again. Gmail is a great product if you don't know what you're doing or what you have available to you. It's like a McDonald's burger: not impressive, not good, not bad either, and certainly won't offend anyone while being accessible, but calling it good is a bit out of touch with what good looks like.

I kinda expect there are a lot of false positives that people just never notice because they've got also thousands of unread (non-spam) emails in their inbox and never check their spam to see if there's anything legitimate there.

They probably have a trillion emails with human labels, either from users directly applying them, or inferrable from actions like deleting.

With that much data, even a simple Bayesian classifier should work pretty much perfectly.
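With labeled data, the core of a Bayesian filter really does fit in a few lines. A minimal sketch (illustrative only; Gmail's actual pipeline is far more involved, and the pre-tokenized inputs here are an assumption):

```python
# Minimal naive-Bayes spam filter sketch (not Gmail's real system).
import math
from collections import Counter

def train(docs):
    """docs: list of (tokens, is_spam). Returns per-class word counts and doc counts."""
    counts = {True: Counter(), False: Counter()}
    totals = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        totals[label] += 1
    return counts, totals

def is_spam(tokens, counts, totals):
    vocab = len(set(counts[True]) | set(counts[False]))
    scores = {}
    for label in (True, False):
        n = sum(counts[label].values())
        # log prior + sum of log likelihoods, with add-one smoothing
        s = math.log(totals[label] / sum(totals.values()))
        for w in tokens:
            s += math.log((counts[label][w] + 1) / (n + vocab))
        scores[label] = s
    return scores[True] > scores[False]
```

With add-one smoothing, an unseen word doesn't zero out a class; at the scale of billions of labeled messages, the per-word estimates become extremely sharp.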


I've heard this, and I've even seen it in plenty of poorly performing businesses, but I've never actually seen it in a highly performing, profitable tech company. Other than at the new grad level, where it's treated as net-negative training while they learn how to build consensus and scope out work.

Not coincidentally, the places where I've seen this approach at work are the same places that have hired me as a consultant to bring in an effective team to build something high priority or fix a dumpster fire.


A lot of highly performing teams don't even use tickets.

Do any highly performing teams use tickets?

A fly-by-night charlatan successfully pushed ticketing into our organization in the past year and I would say it was a disaster. I only have the experience of one, but from that experience I am now not sure you can even build good software that way.

I originally hoped it was growing pains, but I see more and more fundamental flaws.


I’ve worked at one, but it required a PM who was ruthless about cutting scope and we focused on user stories after establishing a strong feedback pipeline, both technically through CI/CD/tests and with stakeholders. Looking back, that was the best team I’ve ever worked in. We split up to separate corners of the company once the project was delivered (12 month buildout of an alpha that was internally tested and then fleshed out).

Maybe I had greenfield glasses but I came in for the last 3 months and it was still humming.


How do you keep track of tasks that need to be done, of reported bugs and feature requests?

Previously? There was an understanding of the problem trying to be solved. The gaps left the pangs of "this isn't right".

Now I have no way to know where things stand. It's all disconnected and abstracted. The ticket may suggest that something is done, but if the customer isn't happy, it isn't actually. Worse, now we have people adding tickets without any intent to do the work themselves and there isn't a great way to determine if they're just making up random work, which is something that definitely happens sometimes, or if it truly reflects on what the customer needs.

You might say that isn't technically a problem with ticketing itself, and I would agree. The problems are really with what came with the ticketing. But what would you need tickets for other than to try and eliminate the customer from the picture? If you understand the problem alongside the customer, you know what needs to be done just as you know when you need to eat lunch. Do you create 'lunchtime' tickets for yourself? I've personally never found the need.


You must be working in projects with a relatively small number of “problems to be solved” at any given time, and with the problems having relatively low complexity. In general there’s no way to keep everything in your head and not organize and track things across the team. That doesn’t mean that a lot of communication doesn’t still have to happen within the team and with the customers. Tickets don’t replace communication. But you have to write down the results of the communication, and the progress on tasks and issues that may span weeks or months.

> In general there’s no way to keep everything in your head

I imagine everyone's capacity is different, but you wouldn't want anyone with a low capacity on your team, so that's moot. Frankly, there is no need to go beyond what you can keep in your head, unless your personal capacity is naturally limited I guess, because as soon as you progress in some way the world has changed and you have to reevaluate everything anyway, so there was no reason to worry about the stuff you can't focus on to begin with.


I find that the current way we do Scrum is way more waterfall-ish than what we had before. Managers just walked around and talked, and knew what each person was doing.

We traded properly working on problems for the Kafkaesque nightmare of modern development.


Thing is, Scrum isn't supposed to be something you do for long.

As you no doubt know, Agile is ultimately about eliminating managers from the picture, thinking that software is better developed when developers work with each other and the customer themselves without middlemen. Which, in hindsight, sounds a lot like my previous comment, funnily enough, although I didn't have Agile in mind when I wrote it.

Except in the real world, deciding on a whim one day that there will be no more managers would lead to chaos, so Scrum offered a "training wheels" method to facilitate the transition, defining practices that push developers into doing things they normally wouldn't have to do with a manager behind them. Once developers are comfortable and into a routine with the new normal, Scrum intends for you to move away from it.

The problem: What manager wants to give up their job? So there has always been an ongoing battle to bastardize it such that the manager retains relevance. The good news, if you can call it that, is that we as a community have finally wised up to it, and most now recognize it for what it is instead of allowing misappropriation of the "Agile" label. The bad news is that, while we're getting better at naming it, we're not getting better at dealing with it.


I don’t think people invested in Scrum believe it’s “temporary” or ever marketed it as such.

And agile teams are supposed to be self-managed but there’s nothing saying there should be no engineering managers. It sounds counter intuitive, but agile is about autonomy and lack of micro-management, not lack of leadership.

If anything, the one thing those two things reject is “product managers” in favor of “product owners”.


> I don’t think people invested in Scrum believe it’s “temporary” or ever marketed it as such.

It is officially marketed as such, but in the real world it is always the managers who introduce it into an organization to get ahead of the curve, allowing them to sour everyone on it before there is a natural movement to push managers out, so everyone's exposure to it is always in the bastardized form. Developers and reading the documentation don't exactly mix, so nobody ever goes back to read what it really says.

> And agile teams are supposed to be self-managed but there’s nothing saying there should be no engineering managers.

The Agile Manifesto is quite vague, I'll give you that, but the 12 Principles make it quite clear that they were thinking about partnerships. Management, of any kind, is at odds with that. It does not explicitly say "no engineering managers", but having engineering managers would violate the spirit of it.

> not lack of leadership.

Leadership and management are not the same thing. The nature of social dynamics does mean that leadership will emerge, but that does not imply some kind of defined role. The leader is not necessarily even the same person from one day to the next.

But that is the problem. One even recognized by the 12 Principles. Which is that you have to hire motivated developers to make that work. Many, perhaps even most, developers are not motivated. This is what that misguided ticketing scheme we spoke of earlier is trying to solve for, thinking that you can get away with hiring only one or two motivated people if they shove tickets down all the other unmotivated developers' throats, keeping on them until they are complete.

It is an interesting theory, but one I maintain is fundamentally flawed.


I've realized it's a different paradigm in (very loosely) the Kuhn sense. You wouldn't track tasks if you're fundamentally not even thinking of the work in terms of tasks! (You might still want a bug tracker to track reported bugs, but it's a bug tracker, not a work tracker.)

What you actually do is going to depend on the kind of project you're working on and the people you're working with. But it mostly boils down to just talking to people. You can get a lot done even at scale just by talking to people.


1. Uh, isn't 2000 like extremely fucking good?

2. I played a chess bot on Delta on easy and it was really bad, felt like random moves. I beat it trivially and I am actually bad at chess, ~1000 on chess.com. I wonder if this one is different?


Yeah, he just casually said he had an elo that high, as if that doesn't blow 90% of people out of the water.


Note that 2000 on lichess is probably weaker than 2000 on chess.com (or USCF or FIDE)


That's true, I'm 2050-2100 lichess, around 1800 on chess.com. Never played a rated tournament but played some rated players who were 1400-1500 rated USCF, and they were roughly my strength, maybe a bit better. Still the Delta bot, easy mode, was much, much better than me.


Casually just in the top 2-3 percent of chess players worldwide humble brag. I'm not that good at it, just a little bit!


I think it depends on the pool to which you're comparing. Being top 2% of all programmers is not so impressive if you include everyone who's ever taken an Intro class. Top 2% of people who do it for a living is much more significant.

I'm in a similar boat as the other posters (2050-2100 lichess, 1400 USCF). The median active rating for USCF is around 1200, and likely much higher if you don't include scholastic players, so if we compare against the OTB pool, “2000 lichess” is probably closer to top 50% than top 2%.


I mean, if you’re in the top 3 percent of anything, yes that’s pretty good, but not unbelievably so, especially in the field of chess. If for instance you randomly put together a classroom full of chess players, there’s decent odds one of them is better than top 3%. Two classrooms and it’s almost a certainty.

Put another way, looking at chess.com users, there are ~6 million people who would count as the top 3 percent. Difficult to achieve, yes, but if 6 million people can achieve it, it’s not really a “humble brag,” it’s just a statement.
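The classroom arithmetic checks out roughly, assuming independence and around 30 players per classroom:

```python
# Probability that at least one of n random players is in the top 3%.
p_top = 0.03
for rooms in (1, 2):
    n = 30 * rooms  # assuming 30 players per classroom
    p_any = 1 - (1 - p_top) ** n
    print(f"{rooms} classroom(s): {p_any:.0%}")
```

That works out to about 60% for one classroom and about 84% for two: "decent odds" and "likely", if not quite a certainty.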


It made me smile to hear “I’m only 97th percentile” isn’t a humblebrag. You may be employing an old saw of mine: you can make people* react however you want by leaning on either percentages or whole numbers when you shouldn’t.

* who don’t have strong numeracy and time to think


It's still significantly stronger than the average online chess player


I heard they were never intended to be the same, since the initial ratings for Lichess and chess.com are 1500 and 1200 respectively. So they should differ by about 300 rating points on average. That fits quite well with what the other commenter claims, actually.


I don’t think it would average out to a 300 elo difference simply based on the starting rating being 300 apart.

If everything else was the same, and people play enough games they will average out to the same elo.

The difference is caused by many factors. People don’t play enough games to sink to their real elo, the player pool is different, and you gain/lose fewer points per game with Lichess’s elo algorithm.


Elo is relative. There's no reason why a GM's Elo should be 2800 or 280 or 28000; it's all decided by the Elo of every other player. So even if the Elo gain/loss calculation and audience of Lichess and chess.com were exactly the same, I don't think they'd converge to the same Elo because of the different starting positions; instead they'd differ by the difference in starting position.

Also, I can't really prove it mathematically, but I'd guess the average Elo would also hover around the starting Elo, because I can't see why it would hover anywhere else: any Elo gained by one player is lost by another.
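The zero-sum intuition can be checked with the plain Elo update formula (a sketch; Lichess actually uses Glicko-2, which tracks rating deviations and is not strictly zero-sum):

```python
# Standard Elo update: the winner gains exactly what the loser pays,
# so the pool's average rating stays pinned at the starting value.
def elo_update(ra, rb, score_a, k=32):
    expected_a = 1 / (1 + 10 ** ((rb - ra) / 400))
    delta = k * (score_a - expected_a)
    return ra + delta, rb - delta

a, b = elo_update(1500, 1500, 1.0)  # equal players, a wins
print(a, b)  # 1516.0 1484.0
```

Since every update conserves the total, the mean rating of a closed pool never moves from wherever new accounts start it.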


On further thought yes, I think you're correct.

When I started playing I believe chess.com let you select whether you’re beginner, intermediate or advanced and your start elo was based on that. Could be wrong, and it could’ve changed since.


This was my experience on a long Delta flight, I don't remember if I picked easy or not but it was laughably bad. I took its lunch money for a game and then turned the screen off. I was mostly irritated by the horrible touch interface, it felt so laggy among other issues. (I don't have a ranking, I barely play these days and usually just in person, but my memory says around 1400 back in the yahoo chess days as a teen but it's probably closer to 1000 now.)


I wonder if it's different on different planes? I can easily beat my friend and he won a few games on a flight, I played on a different flight and got crushed for two hours straight. I'm probably 1400-ish


Can you independently set desktop wallpapers on the two screens? I know this seems nitpicky but it's literally impossible with Ubuntu/Gnome as far as I know; I have one vertical and one horizontal and have to just go with a solid color background to make that work.


Yes. It was actually more tedious to do the inverse when I wanted three screens to do rotating wallpapers from the same set of folders, as I had to set the list of folders three times.


Funny enough this sounds like my experience with ex-Amazon SWEs


> People usually use C++ or Julia. All of the fastest answers are in Julia

That's surprising to me and piques my interest. What sort of pipeline is this that's faster in Julia than C++? Does Julia automatically use something like SIMD or other array magic that C++ doesn't?


I use Rust instead of C++, but I also see my Julia code being faster than my Rust code.

In my view, it's not that Julia itself is faster than Rust - on the contrary, Rust as a language is faster than Julia. However, Julia's prototyping, iteration speed, benchmarking, profiling and observability are better. By the time I would have written the first working Rust version, I could have written it in Julia, profiled it, maybe changed part of the algorithm, and optimised it. Also, Julia makes heavier use of generics than Rust, which often leads to better code specialization.

There are some ways in which Julia produces better machine code than Rust, but they're usually not decisive, and there are more ways in which Rust produces better machine code than Julia. Also, the performance ceiling for Rust is higher because Rust allows you to do more advanced, low level optimisations than Julia.


This is pretty much it – when we had follow up interviews with the C++ devs, they had usually only had time to try one or two high-level approaches, and then do a bit of profiling & iteration. The Julia devs had time to try several approaches and do much more detailed profiling.


The main thing is just that Julia has a standard library that works with you rather than against you. The built in sort will use radix sort where appropriate and a highly optimized quicksort otherwise. You get built in matrices and higher dimensional arrays with optimized BLAS/LAPACK configured for you (and CSC+structured sparse matrices). You get complex and rational numbers, and a calling convention (pass by sharing) which is the fast one by default 90% of the time instead of being slow (copying) 90% of the time. You have a built in package manager that doesn't require special configuration, that also lets you install GPU libraries that make it trivial to run generic code on all sorts of accelerators.

Everything you can do in Julia you can do in C++, but lots of projects that would take a week in C++ can be done in an hour in Julia.


To be clear, the fastest theoretically possible C++ is probably faster than the fastest theoretically possible Julia. But the fastest C++ that Senior Data Engineer candidates would write in ~2 hours was slower than the fastest Julia (though still pretty fast! The benchmark for this problem was 10ms, the fastest C++ answer was 3ms, and the top two Julia answers were 2.3ms and 0.21ms)

The pipeline was pretty heavily focused on mathematical calculations – something like, given a large set of trading signals, calculate a bunch of stats for those signals. All the best Julia and C++ answers used SIMD.
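For flavor, the shape of such a task might look like the following (hypothetical signals and stats, not the actual benchmark; this plain-Python version is exactly the kind of inner loop that SIMD-friendly Julia or C++ would vectorize):

```python
import math

def signal_stats(signals):
    """Per-signal mean, standard deviation, and max drawdown
    for a list of float series (hypothetical stat selection)."""
    out = []
    for s in signals:
        n = len(s)
        mean = sum(s) / n
        var = sum((x - mean) ** 2 for x in s) / n
        peak, drawdown = s[0], 0.0
        for x in s:
            peak = max(peak, x)
            drawdown = max(drawdown, peak - x)
        out.append((mean, math.sqrt(var), drawdown))
    return out
```

Each stat is an independent reduction over contiguous floats, which is why the compilers' auto-vectorization matters so much here.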


I once built a music game that basically ran entirely on SVG. We hacked Musescore to attach the same UUID to both the note head in SVG and the MusicXML object in two different output modes, and then used that to synchronise the sheet music scrolling with a MIDI stream. If you're interested you can see it in action in our failed Kickstarter video from like eight years ago: https://www.youtube.com/watch?v=vgbB5Q4-dgY


How is their managed Kubernetes product nowadays? I've realized all I really use on GCP and AWS is managed Kubernetes and Postgres, and I feel like I must be overpaying particularly for GPU instances.


Fair, but then that functionality should be built into the flagging system. Obvious AI comments (worse, ones that are commercially driven) are a cancer that's breaking online discussion forums.


I think Slashdot still has the best moderating system. Being able to flag a comment as insightful, funny, offtopic, redundant, etc... adds a lot of information and gives more control to readers over the types, quantity, and quality of discussion they see.

For example, some people seem to be irritated by jokes and being able to ignore +5 funny comments might be something they want.
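A sketch of how such reader-side filtering might work (all tag names and weights here are assumptions for illustration, not Slashdot's actual implementation):

```python
# Hypothetical Slashdot-style tagged moderation with per-reader bias.
POSITIVE = {"insightful", "interesting", "informative", "funny"}
NEGATIVE = {"offtopic", "redundant", "troll", "flamebait"}

def effective_score(base, tags, reader_bias):
    """Comment score as seen by one reader.

    tags: {tag: moderation count}; reader_bias: per-tag adjustment,
    e.g. {"funny": -2} for a reader tired of jokes.
    """
    s = base
    for tag, count in tags.items():
        delta = 1 if tag in POSITIVE else -1
        s += count * (delta + reader_bias.get(tag, 0))
    return max(-1, min(5, s))  # clamp to a Slashdot-like [-1, 5] range
```

A reader who sets a negative bias on "funny" sinks +5 funny comments below their viewing threshold without affecting insightful ones.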


I strongly agree with this sentiment and I feel the same way.

The one exception for me though is when non-native English speakers want to participate in an English language discussion. LLMs produce by far the most natural sounding translations nowadays, but they imbue that "AI style" onto their output. I'm not sure what the solution here is because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.


If I want to participate in a conversation in a language I don't understand I use machine translation. I include a disclaimer that I've used machine translation & hope that gets translated. I also include the input to the machine translator, so that if someone who understands both languages happens to read it they might notice any problems.


You are adding your own comments and translating them; that's fine.

If it was just a translation, then that adds no value.


You are joking, right?

I mean, we're probably not talking about someone who doesn't know English at all, that wouldn't make sense, but I'm German and I'd probably write in German.

I would often enough tell some LLM to clean up my writing (not on HN, sorry, I'm too lazy for HN)


When I occasionally use MTL into a language I'm not fluent in, I say so. This makes the reader aware that there may be errors unknown to me that make the writing diverge from my intent.


I think a multi-language forum with AI translators is a cool idea.

You post in your own language, and the site builds a translation for everyone, but they can also see your original etc.

I think building it as a forum feature rather than a browser feature is maybe worthwhile.


You know that this is the most hated feature of Reddit? (Because the translations are shitty, so maybe that can be improved.)


OTOH I am participating in a wonderful discord server community, primarily Italians and Brazilians, with other nationalities sprinkled in.

We heavily use connected translating apps and it feels really great. It would be such a massive pita to copy every message somewhere outside, having to translate it and then back.

Now, discussions usually follow the sun, and when someone not speaking, say, Portuguese wants to join in, they usually use English (sometimes German or Dutch), and just join.

We know it's not perfect but it works. Without the embedded translation? It absolutely wouldn't.

I also used pretty heavily a telegram channel with similar setup, but it was even better, with transparent auto translation.


Reddit would be even worse if the translations were better; at least now you don't have to waste much time, because it hits you right in the face. Never ever translate something without asking first.

When I search for something in my native tongue it is almost always because I want the perspective of people living in my country having experience with X. Now the results are riddled with reddit posts that are from all over the world with crappy translation instead.


I think we should distinguish between the feature being good/hated:

1. An automatic translation feature.

2. Being able to submit an "original language" version of a post in case the translation is bad/unavailable, or someone can read the original for more nuance.

The only problem I see with #2 involves malicious usage, where the author is out to deliberately sow confusion/outrage or trying to evade moderation by presenting fundamentally different messages.


I didn't, but I don't think it would work well on an established English-only forum.

It should be an intentional place you choose, and probably niche, not generic in topic like Reddit.

I'm also open to the thought that it's a terrible idea.


I think the audience that would be interested in this is vanishingly small; there exist relatively few conversations online that would be meaningfully improved by this.

I also suspect that automatically translating a forum would tend to attract a far worse ratio of high-effort to low-effort contributions than simply accepting posts in a specific language. For example, I'd expect programmers who don't speak any English to have, on average, a far lower skill level than those who know at least basic English.


That's Twitter currently, in a way. I've seen and had short conversations in which each person speaks their own language and trusts the other to use the built-in translation feature.


Non-native English speaker here:

Just use a spell checker and that's it, you don't need LLMs to translate for you if your target is learning the language


Better yet, I'd rather read some unusual word choices from someone who's clearly put a lot of work into learning English than from a robot.


Indeed, this sort of “writing with an accent” can illuminate interesting aspects of both English and the speakers’ native language that I find fascinating.


Yeah, the German speakers I work with often say "Can you do this until [some deadline]?" When they mean "can you complete this by [some deadline]?"

It's common enough that it must be a literal translation difference between German and English.


100%! I will always give the benefit of the doubt when I see odd syntax/grammar (and do my best to provide helpful correction if it's off-base to the extent that it muddies your point), but hit me with a wordy, em-dash battered pile of gobbledygook and you might as well be spitting in my face.


Yep, it's a two-way learning street: you can learn new things from non-native speakers, and they can learn from you as well. Any kind of auto-translation removes this. (It's still important to have for non-fluent people though!)


Agreed, but if someone uses LLMs to help them write in English, that's very different from the "I asked $AI, and it said" pattern.


I honestly think that very few people here are completely non-conversant in English. For better or worse, it's the dominant language. Almost everyone who doesn't speak English natively learns it in school.

I'm fine with reading slightly incorrect English from a non-native speaker. I'd rather see that than an LLM interpretation.


...I'm not sure I agree. I sometimes have a lot of trouble understanding what non-English speakers are trying to say. I appreciate that they're doing their best, and as someone who can only speak English, I have the utmost respect anyone who knows multiple languages—but I just find it really hard.

Some AI translation is so good now that I do think it might be a better option. If they try to write in English and mess up, the information is just lost, there's nothing I can do to recover the real meaning.


> I'm not sure what the solution here

The solution is to use a translator rather than a hallucinatory text generator. Google Translate is exceptionally good at maintaining naturalness when you put a multi-sentence/multi-paragraph block through it -- if you're fluent in another language, try it out!


Google Translate used to be the best, but it's essentially outdated technology now, surpassed by even small open-weight multilingual LLMs.

Caveat: the remaining thing to watch out for is that some LLMs are not, by default, prompted to translate accurately, due to (indeed) hallucination and summarization tendencies.

* Check a given LLM with language pairs you are familiar with before you commit to using one in situations you are less familiar with.

* Always proof-read if you are at all able to!

Ultimately you should be responsible for your own posts.


I haven't had a reason to use Google Translate in years, so will ask: Have they opted to not use/roll out modern LLM translation capabilities in the Google Translate product?


As of right now, correct.


You are aware that insofar as AI chat apps are "hallucinatory text generator(s)", then so is Google Translate, right?

(while AFAICT Google hasn't explicitly said so, it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT)


> it's almost certainly also powered by an autoregressive transformer model, just like ChatGPT

The objective of that model, however, is quite different to that of an LLM.


I have seen Google Translate hallucinate exactly zero times over thousands of queries over the years. Meanwhile, LLMs emit garbage roughly 1/3 of the time, in my experience. Can you provide an example of Translate hallucinating something?


Agreed, and I use G translate daily to handle living in a country where 95% of the population doesn’t speak any language I do.

It occasionally messes up, but not by hallucinating, usually grammar salad because what I put into it was somewhat ambiguous. It’s also terrible with genders in Romance languages, but then that is a nightmare for humans too.

Palmada palmada bot.


Every single time it mistranslates something, that is a hallucination.


Google Translate hasn't moved to LLM-style translation yet, unfortunately


Hard disagree. Google Translate's performance is abysmal when dealing with Danish. In many cases its output is unusable. On the other hand, ChatGPT is excellent at it.


Google Translate doesn't hold a candle to LLMs at translating between even common languages.


IMO ChatGPT is a much better translator, especially if you're using one of their normal models like 5.1. I've used it many times with an obscure and difficult Slavic language that I'm fluent in, for example, and ChatGPT nailed it whereas Google Translate sounded less natural.

The big difference? I could easily prompt the LLM with “i’d like to translate the following into language X. For context this is a reply to their email on topic Y, and Z is a female.”

Doing even a tiny bit of prompting will easily get you better results than google translate. Some languages have words with multiple meanings and the context of the sentence/topic is crucial. So is gender in many languages! You can’t provide any hints like that to google translate, especially if you are starting with an un-gendered language like English.

I do still use Google Translate though, when my phone is offline or when translating very long text. LLMs perform poorly with larger context windows.


Maybe they should say "AI used for translation only". And maybe us English speakers who don't care what AI "thinks" should still be tolerant of it for translations.


I have found that prompting "translate my text to English, do not change anything else" works fine.

However, now I prefer to write directly in English and consider whatever grammar/orthographic errors I have as part of my writing style. I hate having to rewrite the LLM output to add myself back into the text.


As AIs get good enough, dealing with someone struggling with English will begin to feel like a breath of fresh air.


I think even when this is used they should include "(translated by LLM)" for transparency. When you use an intermediate layer there is always bias.

I've written blog articles using HTML and asked LLMs to change certain HTML structure, and it ALSO tried to change the wording.

If a user doesn't speak a language well, they won't know whether their meanings were altered.


one solution that appeals to me (and which i have myself used in online spaces where i don't speak the language) is to write in a language you can speak and let people translate it themselves however they wish

i don't think it is likely to catch on, though, outside of culturally multilingual environments


> i don't think it is likely to catch on, though, outside of culturally multilingual environments

It can if the platform has built in translation with an appropriate disclosure! for instance on Twitter or Mastodon.

https://blog.thms.uk/2023/02/mastodon-translation-options


I wrote about this recently. You need to prompt better if you don't want AI to flatten your original tone into corporate speak:

https://jampauchoa.substack.com/p/writing-with-ai-without-th...

TL;DR: Ask for a line edit, "Line edit this Slack message / HN comment." It goes beyond fixing grammar (because it improves flow) without killing your meaning or adding AI-isms.

