galangalalgol's comments | Hacker News

I think pointing to a single puppet master is reductive. Demography and geography predict essentially all of these changes. Protesting and civil disobedience can obviously tip matters, but the authoritarianism taking hold in the US has been a long time coming, just based on the centralization of federal power that started almost as soon as the ink was dry. Landlocked, resource-heavy states are going to tend authoritarian. Coastal, trade-based states will tend to go pluralist. Giant continent-spanning states need coordination and continuity, so they go authoritarian. The federated nature of the original US, the EU, and countries like Switzerland let those differing tendencies coexist. So once the US began centralizing power, it was only a matter of time.

The fix is only barely in the realm of the possible. US states have to be given back their power, and the federal government must be limited to its original remit. This will let coastal states tend to pluralism and resource-heavy and/or landlocked states tend to authoritarianism, and as long as money and feet are free to cross state borders, it will all work out. Ditching first past the post and mitigating gerrymandering would also obviously help.


> Ditching first past the post and mitigating gerrymandering would also obviously help.

Mitigating gerrymandering is a lost cause with first past the post because someone has to draw the lines and whoever is in the majority at the time is going to find a way to benefit themselves. It's especially hard because in a state which is e.g. 60% for one party, drawing the lines in a "normal" way can pretty easily result in a bunch of districts that are each 60% for that party (i.e. they get 100% of the seats with 60% of the votes), and getting it to not do that is the thing that could require a bunch of strange looking lines.

Whereas if you switch from first past the post to score voting, gerrymandering is basically irrelevant.

First past the post de facto disenfranchises the majority of the district, including members of both parties, whenever the split isn't almost exactly 50:50, because then the outcome is effectively a certainty even if significant numbers of voters change their minds. Everyone who supports the losing major party or any third party gains nothing from their vote, and everyone who supports the victorious major party in excess of what it needed to secure the district isn't moving the needle even a hair.

Whereas with score voting, you can have more than two viable candidates, and then hyper-partisans can't win in a district where 40% of the voters hate them because they'd lose to a member of their own party, or a now-viable third party candidate, who can appeal to voters on both sides. Changing the composition of the district changes which candidate wins even when the change doesn't put a different party in the majority, and with more than two viable parties there may not even be a "majority" party anymore.
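To make the tallying concrete, here's a toy score-voting count in Rust. The candidates, ballot counts, and scores are all hypothetical, invented just to illustrate the 60:40 district described above:

```rust
// Toy score-voting tally: each ballot scores every candidate 0-10,
// and the highest total score wins. All numbers here are made up.

fn score_winner(names: &[&str], ballots: &[Vec<u32>]) -> String {
    let mut totals = vec![0u32; names.len()];
    for b in ballots {
        for (i, &s) in b.iter().enumerate() {
            totals[i] += s;
        }
    }
    let best = (0..names.len()).max_by_key(|&i| totals[i]).unwrap();
    names[best].to_string()
}

fn main() {
    // A 60:40 district. The majority party's moderate isn't most voters'
    // first choice, but is broadly acceptable, and wins on total score.
    let names = ["A-extremist", "A-moderate", "B-candidate"];
    let mut ballots = Vec::new();
    for _ in 0..30 { ballots.push(vec![10, 7, 0]); } // A voters preferring the extremist
    for _ in 0..30 { ballots.push(vec![6, 10, 1]); } // A voters preferring the moderate
    for _ in 0..40 { ballots.push(vec![0, 5, 10]); } // B voters, okay with the moderate
    // Totals: 480 / 710 / 430
    println!("winner: {}", score_winner(&names, &ballots)); // A-moderate
}
```

Note how the hyper-partisan loses despite being 30% of voters' first choice, because the 40% who hate them can express that on the same ballot.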

The problem is someone got the Democrats to start promoting IRV, which is barely better than first past the post in many cases and actually worse (i.e. more partisan) in some pretty common ones. Which in turn got a lot of Republicans to start opposing all voting system reforms because they didn't like the results. Meanwhile they would both benefit from using score voting instead of FPTP or IRV. I mean seriously, does either party actually like this partisan hellscape?


With score voting, isn't the logical play to score your favorite the highest and zero out everyone else? Then it just devolves to FPTP.

No, to begin with the people who want to play stupid games would give the highest score to every candidate they approve of and the lowest score to everyone else, and then it devolves to approval voting, which is significantly better than FPTP -- and IRV.

And even doing that is people being too clever by half.

Imagine there are three candidates. The one you prefer is polling at a score of 6/10, another that you like almost as much is also polling at 6/10 and a third that you very much don't like is polling at 4/10. If you were voting honestly you'd give the first candidate 10/10, the second 8/10 and the third 1/10. So what should you do if you're voting strategically?

If you do the one that devolves to approval voting, you give the first two candidates 10/10. But that's pretty dumb: the third candidate was just barely in the race, and all you're doing is screwing yourself by giving your second choice a better chance against your first choice.

If you do the one that devolves to FPTP, you're really screwing yourself, because then you're putting the third candidate, which you hate, back in the running by tanking the chances of the second candidate you were pretty okay with. You're making it so that if your first choice doesn't win, you get your third choice, which is bad for you: the amount you wanted the first to win over the second is much smaller than the amount you wanted the second to win over the third, but you foolishly failed to express that even though the voting system allowed you to.

You can find some "proofs" that giving every candidate either a 1/10 or 10/10 is the optimal strategy, but the thing those proofs take as an assumption is that you know exactly how everyone else is going to vote, i.e. you have perfectly 100% accurate infallible polls. Which, of course, you don't.

And then think about what you have to do with that second candidate you'd like to give 8/10: Under that logic you're "required" to either give them 10/10 or 1/10. But you can't be sure if giving the second candidate a 1/10 will cause the first candidate to win or the third. Without knowing that, you can't know which one is actually better for you.

At which point the optimal strategy is to hedge by picking a number in the middle, and choose which one in proportion to how strongly you feel about each risk. But that's the same as voting according to your actual preferences! You end up giving the second candidate 8/10 because that's the measure of how much more you prefer that they defeat the third candidate than that they don't defeat the first.

The only real strategic choice here is to put some consideration of the polling into the weighting. If Hitler is on the ballot then you're definitely giving him the lowest score, but if he's only polling at 2/10 and you're pretty sure he's not going to win, you might want to give someone else you only moderately disfavor a 3/10 rather than 5/10 because you're not that worried about the probability of Hitler defeating them even if you're very worried about the consequences if it happened. But you still don't want to give them the same score as you give Hitler because you still want to hedge at least a little bit against even a small chance of something that bad.


Interesting, and what are your thoughts on the star variant? Also what is so bad about irv?

STAR is basically fine, it's a variant on score.

There are several problems with IRV, but the most obvious one is that it can often knock the moderate candidate out of the final round.

Suppose you have a district that goes 60% for one party. That party runs two candidates and the other party runs one. With IRV, one of the first party's candidates is the most likely to get knocked out, because they'll each average ~30% of the vote (half of 60%) while the other candidate gets the other 40%. But if the majority party then has a preference for their own extremist, it's their moderate that gets knocked out, and then in a district that goes 60% for that party, the extremist has a decent chance of getting in.

The same dynamic can also cause the minority party candidate to win. 51% of the majority party (i.e. 30.5% of the district's voters) prefer an extremist, but enough of the majority party is afraid of them that in combination with the 40% of the vote from the minority party, the minority party wins the run off. Even though the "winner" would have lost to the majority party's moderate using score voting regardless of whether the extremist was on the ballot and even in a two-candidate election using FPTP.
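A toy IRV simulation (hypothetical candidates and ballot counts, mirroring the scenario above) shows the moderate eliminated in the first round and the minority party winning the runoff, even though the moderate would beat them 60:40 head-to-head:

```rust
// Toy IRV: ballots are rankings (candidate indices, most preferred first).
// Each round, the candidate with the fewest first-place votes among
// survivors is eliminated, until someone holds a majority.

fn irv_winner(num_candidates: usize, ballots: &[Vec<usize>]) -> usize {
    let mut eliminated = vec![false; num_candidates];
    loop {
        // Count each ballot for its highest-ranked surviving candidate.
        let mut tally = vec![0usize; num_candidates];
        for b in ballots {
            if let Some(&c) = b.iter().find(|&&c| !eliminated[c]) {
                tally[c] += 1;
            }
        }
        let total: usize = tally.iter().sum();
        let survivors: Vec<usize> =
            (0..num_candidates).filter(|&c| !eliminated[c]).collect();
        let &leader = survivors.iter().max_by_key(|&&c| tally[c]).unwrap();
        if tally[leader] * 2 > total {
            return leader; // majority of remaining ballots
        }
        let &loser = survivors.iter().min_by_key(|&&c| tally[c]).unwrap();
        eliminated[loser] = true;
    }
}

fn main() {
    // 0 = majority-party extremist, 1 = majority-party moderate,
    // 2 = minority-party candidate. Made-up ballot counts.
    let mut ballots = Vec::new();
    for _ in 0..31 { ballots.push(vec![0, 1, 2]); } // extremist first
    for _ in 0..14 { ballots.push(vec![1, 0, 2]); } // moderate first, stay loyal
    for _ in 0..15 { ballots.push(vec![1, 2, 0]); } // moderate first, fear the extremist
    for _ in 0..40 { ballots.push(vec![2, 1, 0]); } // minority party
    // Round 1: 31 / 29 / 40 -> the moderate is eliminated first.
    // Round 2: 45 / 55 -> the minority party wins the runoff.
    println!("winner: candidate {}", irv_winner(3, &ballots)); // candidate 2
}
```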


> Demography and geography predict essentially all of these changes.

> The tendencies of landlocked resource heavy states are going to be authoritarian.

What are you basing this on? Where can I read more about this?


Montesquieu, Wittfogel, and Sachs are the old ones. Modern writers acknowledge geography isn't destiny, but it would definitely be fighting uphill for Russia to maintain democracy. A mobile middle class seems to be the real driver of democracy, and coastal trade is what created that in most modern democracies. Seems like maybe technology could change that. But big regions make mobility harder. If you have to move half a world away to reach different laws, the pressure to retain you is less, whereas a doctor in Hungary can pack up and take a train to find a government more to their liking. The shrinking of the middle class drives authoritarianism fairly reliably according to these sources. Sometimes the older ones call it the merchant class.

Interesting.

Compensating for the temperature will never be as accurate as actually controlling it (O is for ovenized). I keep reading about chip scale atomic clocks coming down in price but I've yet to see them as the oscillator in anything mass produced.

When 2G started being decommissioned ebay was suddenly flooded with super cheap rubidium frequency standards from parted out base stations.

I'd love to have some.

Also cheap OCXOs.

I have it on good authority they just used Unix time for everyone but put all the leap seconds in the tz table.

My understanding is that they started turning away from it, but have turned back in many states. We were told it was important that we delay teaching our child typing until they had finished learning cursive, because it had been discovered that teaching cursive developed something or other that I zoned out on while waiting to ask when that would be. Education has fads that don't line up well with peer-reviewed articles. For instance, current reading instruction is suboptimal for dyslexic students, while early 20th century instruction seems to have (not entirely intentionally) worked much better.

Edit: Apparently it has to do with dyslexia and executive functioning. California and Texas, amongst others, have now required it be resumed. So there is a roughly decade-long gap in cursive in the US, maybe a little less.


I try to avoid tokio in its entirety. There are some embedded use cases with embassy that make sense to me, but I have never needed to write something that benefited from more threads than I had cores to give it. I don't deny those use cases exist, I just don't run into them. I typically spend more time computing than on i/o but so many solid libraries have abandoned their non-async branches I still have to use it more often than I'd like. I get this is a bit of a whine, I could fork those branches if I cared that much. But complaining is easier.

I think the dream is executor-independence. You shouldn't really need to care what executor you or your library consumer is using, and the Rust auto traits are designed so that you can in theory be generic over it. There are a few speed bumps that still make that harder than it really should be though.

I'm not sure what you mean by ‘more threads than I had cores’, though. Unless you tell it otherwise, Tokio will default to one thread per core on the machine.


When you are compute bound, threads are just better. Async shines when you are I/O bound and need to wait on a lot of I/O concurrently. I'm usually compute bound, and I've never needed to wait on more I/O connections than I could handle with threads. Typically all the output and input IP addresses are known in advance and in the Helm chart. And countable on one hand.

Oh, right, sure. In Rust the async code and async executor are decoupled. So it's your _executor_ that decides how/whether tasks are mapped to threads and all that jazz.

Meanwhile the async _code_ itself is just a new(ish), lower-level way of writing code that lets you peek under an abstraction. Traditional ‘blocking’ I/O tries to pretend that I/O is an active, sequential process like a normal function call, and then the OS is responsible for providing that abstraction by in fact pausing your whole process until the async event you're waiting on occurs. That's a pretty nice high-level abstraction in a lot of cases, but sometimes you want to take advantage of those extra cycles. Async code is a bit more powerful and ‘closer to the metal’ in that it exposes to your code which operations are going to result in your code being suspended, and so gives you an opportunity to do something else while you wait.

Of course if you're not spending a lot of time doing I/O then the performance improvements probably aren't worth dropping the nice high-level abstraction — if you're barely doing I/O then it doesn't matter if it's not ‘really’ a function call! But even so async functions can provide a nice way of writing things that are kind of like function calls but might not return immediately. For example, request-response–style communication with other threads.
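For instance, that request-response pattern can be sketched with plain std channels and a worker thread; in async code the blocking `recv()` at the end would become an `.await` on something like a tokio oneshot instead. The worker and its doubling "computation" here are made up for illustration:

```rust
use std::sync::mpsc;
use std::thread;

// A request carries its input plus a channel on which to send the reply.
struct Request {
    input: u64,
    reply: mpsc::Sender<u64>,
}

// Spawn a worker thread that services requests until the sender is dropped.
fn spawn_worker() -> mpsc::Sender<Request> {
    let (tx, rx) = mpsc::channel::<Request>();
    thread::spawn(move || {
        for req in rx {
            // Pretend this is some expensive computation.
            let _ = req.reply.send(req.input * 2);
        }
    });
    tx
}

// Looks like a function call, but the result comes from another thread.
fn call(worker: &mpsc::Sender<Request>, input: u64) -> u64 {
    let (reply_tx, reply_rx) = mpsc::channel();
    worker.send(Request { input, reply: reply_tx }).unwrap();
    reply_rx.recv().unwrap() // in async code, this would be an .await
}

fn main() {
    let worker = spawn_worker();
    println!("{}", call(&worker, 21)); // prints 42
}
```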


I agree. Async makes sense for Embassy and WASM. I'm skeptical that it really ever makes sense for performance, even if it is technically faster in some extreme cases.

Very few red states in those 19...

The Western notion that there should exist a middle class whose well-being is independent of state-owned enterprises is destabilizing to the CCP, so retaliation seems likely.

I can't recall a single Firefox crash in at least a decade. What are people doing? I run uBlock Origin, nothing else. I do sometimes have Firefox mobile misbehave where it stops loading new pages and I have to restart it, but open pages work normally, as do all other operations, so not a crash exactly. Happens maybe once a month.

Edit: more context, I power cycle at least once a week on desktop and the version is typically a bit behind new. I also don't have more tabs open than will fit in the row. All these habits seem likely to decrease crashes.


We have 5 computers running Firefox. One computer has regular Firefox crashes. I've done some memory testing that didn't detect anything wrong.

I've tried all kinds of things software-wise but keep getting random crashes.

I wonder if I should do a longer memory test, maybe some CPU stress testing at the same time...


If you want to dig into it, you can post a bunch of that computer's crash reports (navigate to about:crashes) on bugzilla: https://bugzilla.mozilla.org/enter_bug.cgi?product=Firefox&c...

Or you can view several of them and see if there's a common pattern in the "Signature" field. Firefox really should only be regularly crashing if: (1) there's a real bug and something that triggers it, (2) you're running out of memory, or (3) you have faulty hardware.

I don't know what the odds of faulty hardware are for a randomly chosen user, but they're much higher for a randomly chosen user who is seeing regular crashes.


Yeah. Lately even if I OOM my system, Firefox doesn't crash so easily; individual tabs do.

For me, OOM effectively crashes my system 90% of the time, usually caused by Firefox (Chromium too), if a website goes out of control (rarely is it caused by too many pages open, as tab discarding takes care of that).

Does anyone else notice Claude is just plain better at reasoning? It may not just be post-training guardrails. It would not surprise me if it were something Anthropic couldn't simply disable, either from reinforcement or even training corpus curation. Of all the models, Claude is the only one that makes me wonder if they have figured out something beyond stochastic language generation and aren't telling anyone.

I have noticed this too. Despite the close benchmark results, Claude just works better. It knows when to push back; it has an "agency"... there is something there that I don't see with Gemini or OpenAI's best paid models.

What should that support look like? Maybe have a userdebug build already built and available? I don't include a root account on hardened container images for some of the same reasons they cite. So including it for everyone and creating a way to activate it is suboptimal for people who don't want that trade off. A parallel build pipeline seems the most reasonable to me?

Yeah, I would be fine with a different build stream. I do think it could be sufficiently secure in a single stream but it will always be increased attack surface so the safest option is to do separate builds.

I also don't include a root account in my container images, but you probably have a root account on the server that runs them in case you need to debug something. But you can probably also build and deploy a new container. At the end of the day you almost always want some last-resort way to access the data stored in case something goes very wrong. Whether that is for backups, "hostile" data export, or for other reasons, it is important to me.


I don't actually. Devs don't get root at my employer, even on a VM. I have rootless podman, and can be root in a container. Even our GitLab instances don't have any privileged runners. So Kaniko, etc.
