Idk. What are these programmers doing afterwards? Building more shoddy code? Perhaps it's a better idea to focus on what's necessary and not run from feature to feature at top speed. This might require some rethinking in the finance department, though.
I am curious to know if that 8.6x speedup is consistent.
I don't see many "fair" benchmarks about this, but I guess it is difficult to properly benchmark module compilation, as the results can depend heavily on the case.
If modules can reach that sort of speedup consistently, it's obviously great news.
It's obvious that TikTok is doing this intentionally while pretending it's a technical issue, so that people will blame the US government for forcing the sale of TikTok.
If you care to fight them directly, upload them using "Ellistein" or "Epsteen". But really you should delete the app, and find an alternative. Vote with your attention/wallet.
As something to compare to, I picked a random repository from what GitHub Explore showed, clicked on the first that looked like a desktop application (https://github.com/siyuan-note/siyuan/releases/tag/v3.5.4), and their Windows binary is currently 166MB for a "privacy-first, self-hosted, fully open source personal knowledge management software".
I'd claim 80MB for an entire game engine + editor for said engine is very good.
Worth noting: this is without export templates, which are ~800MB extra (200MB per platform, though it seems you can only download them all at once nowadays).
Engines like Unity and UE include those in the primary download already.
That is actually pretty amazing for a game engine. I'm not a game dev and I've only ever made some tiny games in Unity back in college but this makes me want to install Godot and try making games again.
The source of the problem is whether the rule of law and due process are respected.
Data collection is not the source of the problem because people give their data willingly
Do you think data collection is a problem in China, or do you think the government and rule of law is the problem?
Companies collecting data is not the true problem. Even when data collection is illegal, a corrupt government that doesn't respect the rule of law doesn't need data collection.
Yeah, this is exactly it. All the arguments kind of boil down to
"well, what if the government does illegal or evil stuff?"
It's very similar to arguments about the Second Amendment. But laws and rules shouldn't be structured around expecting a future moment where the government isn't serving the people. At that moment the rules already don't matter.
The Rights are not intended as preemptive. You don't have a right to free speech b/c otherwise maybe the government regulation of speech will get out of hand. You have it because it's espoused as a fundamental right. Same with separation of church and state. It's like "Well, maybe a future evil government will regulate the church poorly, so let's ban it completely". It's just seen as an area the government shouldn't delve into at all.
Collecting information about people doesn't really fit the same mold. It's not sensible to remove that function entirely. It's not a right. And it's not sensible to structure things with the expectation the future government will be evil
The rights weren't invented out of thin air but to address real issues that happened earlier. Yes, every government has been evil. Power corrupts. That's why constitutions exist: to address that problem.
Are we supposed to structure our society so we're safer in the case that the Chinese invade and use all our institutions against us? There is a risk-benefit tradeoff to make. Crippling society and institutions in preparation for a worst-case hypothetical future is not sensible. To get things done you operate from the standpoint that the democratic government is responsive to the desires of the people. The adversarial perspective is self-sabotaging.
I think the real problem is that the government is not structured in an accountable way and things like DOGE can happen. These things basically don't happen in other democracies. The Japanese don't all have assault rifles in their basement b/c they're waiting for the day the Diet is going to harvest their data to oppress them
It's harder to do social/human science because it's just easier to make mistakes that lead to bias. It's harder to make those mistakes in maths, physics, biology, medicine, astronomy, etc.
I often say that "hard sciences" have progressed much further than social/human sciences.
You get a replication crisis on the bleeding edge between replication being possible and impossible. There's never going to be a replication crisis in linear algebra, and there's never going to be one in theology; there definitely was a replication crisis in psych, and one in nutrition science is distinctly plausible and would be extremely good news for the field as it moves through that edge.
Leslie Lamport came up with a structured method to find errors in proofs. Testing it on a batch, he found most of them had errors. Peter Gutmann's paper on formal verification likewise showed many "proven" or "verified" works had errors that were spotted quickly upon informal review or testing. We've also seen important theories in math and physics change over time with new information.
With the above, I think we've empirically shown that we can't trust mathematicians more than any other humans. We should still rigorously verify their work with diverse, logical, and empirical methods. Also, build from the ground up on solid, highly vetted ideas. (Which linear algebra actually does.)
The other approach people are taking is foundational, machine-checked proof assistants. These use a vetted logic whose assistant produces a series of steps that can be checked by a tiny, highly-verified checker. They'll also often use a reliable formalism to check other formalisms. The people doing this have built everything from proof checkers to compilers to assembly languages to code extraction in those tools, so they are highly trustworthy.
But we still need people to look at the specs of all that to see if there are spec errors. There are fewer people who can vet the specs than can check the original English-and-code combos. So, are they more trustworthy? (Who knows, except when tested empirically on many programs or proofs, like CompCert was.)
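To make the "tiny, highly-verified checker" idea concrete, here's what a machine-checked statement looks like in Lean 4 (a toy example of mine, not drawn from any of the works mentioned above); the kernel re-checks the proof term, so trust reduces to the checker plus the statement itself:

```lean
-- Toy example: commutativity of addition on naturals, checked by Lean's kernel.
-- The statement is the spec; a human still has to agree it says what we meant.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This is also exactly where the spec-vetting worry bites: the proof is machine-checked, but nothing checks that the theorem statement matches the informal claim.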
A friend of mine was given an assignment in a masters-level CS class, which was to prove a lemma in some seminal paper (It was one of those "Proof follows a similar form to Lemma X" points).
This had been assigned many times previously. When my friend disproved the lemma, he asked the professor what he had done wrong. Turns out the lemma was in fact false, despite dozens of grad students having turned in "proofs" of the lemma already. The paper itself still stood, as a weaker form of the lemma was sufficient for its findings, but still very interesting.
I agree. Most of the time people think STEM is harder, but it is not. Yes, it is harder to understand some concepts, but in social sciences we don't even know what the correct concepts are. There hasn't been as much progress in the social sciences over the last few centuries as there has been in STEM.
I'm not sure if you're correct. In fact there has been a revolution in some areas of social science in the last two decades due to the availability of online behavioural data.
Yeah, there is also the work of primatologists, which challenges some of our beliefs about what we consider uniquely human (like politics). See Frans de Waal.
Yet I believe there hasn't been as much progress as in STEM. But it is just a belief at the end of the day. There might be a study about this out there.
Maybe one day that will change