You missed the "total wealth of U.S. billionaires". The billionaires own a tiny fraction of total US wealth. Most of the wealth is owned by people like me, who own a house and stocks in a retirement account.
However, billionaires don’t own a tiny part of US wealth; it’s more like 5%-10%. And the top 1% (and the grandparent was talking about rich people) own 1/3 of US wealth.
The point is that billionaire wealth is not that much compared to the government’s current spending, much less what you’d need to support large numbers of immigrants on welfare (as suggested by OP above).
The top 1% have a lot more, but the cutoff for that is $11 million, and that includes home equity, family farms, etc. The bulk of those people are retired professionals and small business owners. For example, 4% of 75-79 year olds are in the top 1% of wealth. These are rich people, but not the kind of rich that AOC is talking about taxing.
I’m a huge supporter of taxing upper-middle-class people, but we should just tax them instead of playing games about wealth. The top 5%, that is, people making above $260,000 a year, have a combined income of $5.6 trillion a year. They only pay $1.3 trillion in income taxes. Just double that.
Places full of single family houses with essentials being 30 minutes away don't tend to stay like that for long. They are great business opportunities for developers of supermarkets, malls and the like. You buy some cheap land, build some cheap commercial low-rises, and rake in cash as the tenants come flooding in.
There's not a single home in Summerlin, NV that's more than a 10-minute drive from a large supermarket. The majority of the houses are less than a 5-minute drive away.
Sorry, I misread. I was giving an example of "They are great business opportunities for developers of supermarkets, malls and the like.", not of an isolated neighborhood.
"Buy, borrow, die" is simply not a thing. It's a half-baked idea invented by academics, who don't realize that it's just not viable in practice. Billionaires do not actually utilize it[1].
Might be most devices by count, but certainly not by power consumption. EVs are the only major appliance that’s DC, and most people don’t even have them.
No, it’s mentioned because “Asian and Pacific Islander” is a specific, separate category for the government’s data collection purposes. It was created as a result of lobbying by Pacific Islanders. You can look up all of this.
I don’t know anything about the “Pacific Islander lobby” but it’s not relevant here. The sentence here is clearly just not using the Oxford comma. Only the summary for journalists uses this construction. The actual abstract does not.
This paper breaks out Asian and Pacific Islander as separate ethnic groups in their data. You can see it in this summary and in the slightly more detailed data if you click through to the paper on JAMA.
(Actually they break out “Non-Hispanic Asian” and “Non-Hispanic Native Hawaiian or Other Pacific Islander”. The need to call out “non-Hispanic” on every ethnicity seems weird.)
Why do you need production if you don't have consumption? I jest, but only partially.
I suppose we do things the way we do because taxing income is a lot easier to do progressively than taxing consumption.
You can't meter how many times someone has been out to eat or how many gallons of gas they have put into their car, but you can more easily track what their employer puts in their bank account.
You can progressively tax consumption by combining a high, non-progressive consumption tax with negative income tax rates. Something like: everyone gets a small UBI, plus extra income for every dollar earned.
For example, let's introduce a 35% consumption tax, along with a $1k/year UBI, an extra 30% on income between $0 and $30k, an additional 20% on income between $30k and $60k, 10% on income between $60k and $100k, and 0% on any income above that.
Then, if you make $30k, your gross take home pay is actually $30k + $1k + 30% * $30k = $40k, and if you make $200k, your gross take home pay is $200k + $1k + 30% * $30k + 20% * ($60k-$30k) + 10% * ($100k-$60k) = $220k.
At the same time, if you make $30k and spend all of it on consumption, you pay 35% * $40k = $14k in taxes, so your net take home pay after taxes is $40k - $14k = $26k. On the other hand, if you make $200k and consume all of it, you pay 35% * $220k = $77k in consumption tax, and your net take home pay is $220k - $77k = $143k. All very progressive.
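Here's a quick sketch of that arithmetic in TypeScript. The 35% rate, the $1k UBI, and the bracket boundaries are just the numbers from this example, not a real proposal:

```typescript
// Hypothetical scheme from the comment above: 35% consumption tax,
// $1k UBI, and negative income tax brackets of 30%/20%/10%.
const UBI = 1_000;
const CONSUMPTION_TAX = 0.35;
const BRACKETS: Array<[lo: number, hi: number, rate: number]> = [
  [0, 30_000, 0.30],
  [30_000, 60_000, 0.20],
  [60_000, 100_000, 0.10],
];

// Gross take-home: earned income plus UBI plus the bracket subsidies.
function grossTakeHome(income: number): number {
  const subsidy = BRACKETS.reduce(
    (sum, [lo, hi, rate]) => sum + rate * Math.max(0, Math.min(income, hi) - lo),
    0
  );
  return income + UBI + subsidy;
}

// Net take-home if the whole gross amount is spent on consumption
// (tax computed on the amount spent, as in the arithmetic above).
function netIfFullyConsumed(income: number): number {
  const gross = grossTakeHome(income);
  return gross - CONSUMPTION_TAX * gross;
}

console.log(grossTakeHome(30_000));       // 40000
console.log(netIfFullyConsumed(30_000));  // 26000
console.log(grossTakeHome(200_000));      // 220000
console.log(netIfFullyConsumed(200_000)); // 143000
```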
Now, the person making $200k is highly incentivised to avoid some of this tax, and instead of consuming all of it, he might only want to consume half of it, and invest the other half. This is great, because then the other half will (hopefully) get invested in a productive activity, so that in future there's even more production.
There is little point in inventing new protocols, given how low the overhead of UDP is. That's just 8 bytes per packet, and it enables going through NAT. Why come up with a new transport layer protocol, when you can just use UDP framing?
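For illustration, here's roughly what framing your own messages over UDP can look like in Node. The 8-byte header layout is made up for the example, not any real protocol:

```typescript
import * as dgram from "node:dgram";

// Made-up 8-byte application header: version, type, payload length,
// sequence number. Purely illustrative field layout.
function frame(payload: Buffer, seq: number): Buffer {
  const header = Buffer.alloc(8);
  header.writeUInt8(1, 0);                 // version
  header.writeUInt8(0, 1);                 // message type
  header.writeUInt16BE(payload.length, 2); // payload length
  header.writeUInt32BE(seq, 4);            // sequence number
  return Buffer.concat([header, payload]);
}

const socket = dgram.createSocket("udp4");
socket.send(frame(Buffer.from("hello"), 42), 9000, "127.0.0.1", () =>
  socket.close()
);
```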
Agreed. Building a custom protocol seems “hard” to many of the same folks who are fearlessly doing it on top of HTTP. The wild shenanigans I’ve seen with headers, query params and JSON make me laugh a little. Everything as text is _actually_ hard.
Part of the problem with UDP is the lack of good platforms and tooling, and of good examples as well. I’m trying to help with that, but it’s an uphill battle for sure.
I think the "problem" of sending data is a lot harder without some concept of payloads and signaling. HTTP just happens to be the way that people do that, but many RPC and messaging frameworks like ZeroMQ/nng, gRPC, Avro, Thrift, etc. work just fine. Plenty of tech companies use those internally.
Some of this is hurt by the fact that V8, Node’s runtime, has had first-class JSON parsing support built in, but no support for binary protocol parsing. So writing JavaScript to parse binary protocols is a lot slower than parsing JSON.
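For example, this is the kind of hand-rolled parsing you end up writing in plain JavaScript for even a small fixed binary header (the field layout here is invented for illustration), whereas JSON.parse drops straight into optimized native code:

```typescript
// Illustrative 8-byte header parsed with DataView; all fields are
// assumptions for the example, not a real wire format.
interface Header {
  version: number;
  type: number;
  length: number;
  seq: number;
}

function parseHeader(buf: ArrayBuffer): Header {
  const view = new DataView(buf);
  return {
    version: view.getUint8(0),
    type: view.getUint8(1),
    length: view.getUint16(2), // big-endian by default
    seq: view.getUint32(4),
  };
}
```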
Sure, you can reimplement multiplexing on the application level, but it just makes more sense to do it on the transport level, so that people don't have to do it in JavaScript.
It only takes a few thousand lines (easily less than 10k even with zero dependencies and no standard library) to implement QUIC.
Kernel management of transport protocols has zero actual benefit for latency or throughput given proper network stack design. Neither does hardware offload except for crypto offload. Claimed differences are just due to poor network stack design and poor protocol implementation.
Not fully standards compliant since I skipped some irrelevant details like bidirectional streams when I can just make a pair of unidirectional streams, but handles all of the core connection setup and transport logic. It is not actually that complicated. And just to get ahead of it, performance is perfectly comparable.
FWIW, quic-go, a fully-featured implementation in Go used by the Caddy web server, is 36k lines in total (28k SLoC), excluding tests. Not quite 10k, but closer to that than to your figure.
This is not the case with Starlink (and presumably Starlink) satellites. The ground stations use directional phased arrays. They can do it, because they keep good track of where each satellite is at any given moment, and do trajectory adjustments as needed.
Yes, groundstations are virtually always highly directional, except for, like, radio hams sometimes. (Even hams usually use yagis.) Possibly you didn't notice this, but I'm talking about the antennas on the satellites, which are the ones that could suffer interference (since they're the ones receiving the uplink frequencies we're discussing), not the groundstation antennas.
You always have to keep track of where each satellite is at any given moment.
What do you mean by "Starlink (and presumably Starlink)"?
To add to this, we know what objects interfere with our satellite contacts. We keep their orbital positions (as best as possible) in mind when scheduling satellite operations to avoid communication failures (partial or total) caused by their interference.
This is often learned after the fact. A contact will fail or go badly and then you can examine what was around it at the time. Over a series of failures the offending satellite will be identified.
Yeah, if you don’t know the name of the thing you’re looking for, you can spend weeks looking for it. If you just search for something generic like “eigenvalue bound estimate”, you’ll find thousands of papers and hundreds of textbooks, and it will take a substantial amount of time to decide whether each is actually relevant to what you’re looking for.