If you say "wait 1 day without using a calendar+locale" then the duration is unambiguously 86400s, but if you say "wait 1 day using a calendar+locale" or "wait until this time tomorrow" then the duration is ambiguous until you've incorporated rules like leap/DST. I think GP's point is that "wait 1 day" unambiguously defaults to the former, and you disagree, but perhaps it's a reasonable default.
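A minimal Python sketch of the two readings, using a US spring-forward date purely as an illustrative assumption (any DST zone works):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("America/New_York")
start = datetime(2024, 3, 9, 12, 0, tzinfo=tz)  # noon, the day before spring-forward

# "wait 1 day" as a stopwatch duration: exactly 86400 seconds later
stopwatch = (start.astimezone(timezone.utc) + timedelta(days=1)).astimezone(tz)

# "wait until this time tomorrow": same wall-clock time, calendar/DST-aware
same_time_tomorrow = start + timedelta(days=1)  # zoneinfo re-derives the offset

print(stopwatch.isoformat())           # 2024-03-10T13:00:00-04:00
print(same_time_tomorrow.isoformat())  # 2024-03-10T12:00:00-04:00
print((same_time_tomorrow - start).total_seconds())  # 82800.0 -> only 23 "stopwatch" hours
```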
Yep, this is exactly my point. Durations are abstract spans of "stopwatch time," they don't adhere to local times or anything else we use as humans to make time more useful to us. In that context there's no real ambiguity to using units like hours/days/weeks (but not months, etc.) because they have unambiguous durations.
Now you've got me wondering something: if a "stopwatch month" can't exist since everyone agrees that different months have different durations (and therefore you must select one like "the month of January" to know how long to run the stopwatch), isn't there an argument that a "stopwatch year" has the same need to select one since everyone agrees that different years have different day counts (unless we mean a solar year in seconds, not quantized to the nearest day, but that's probably a Bad Default)?
The collective human decision to make days-per-year vary (requiring leap rules to calculate days) seems similar to the collective human decision to make days-per-month vary (requiring month names to calculate days). So if we say a "stopwatch year" suffers the same fate as the "stopwatch month" then it's a slippery slope to saying the "stopwatch minute" is no different than a "stopwatch year" (requiring leap rules to calculate seconds) even if, for all practical purposes, it seems exempt.
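Putting rough numbers on the varying day counts (a tiny stdlib sketch, purely illustrative):

```python
import calendar

# Months differ in length, so a "stopwatch month" needs a specific month picked first...
print(calendar.monthrange(2024, 1)[1], calendar.monthrange(2024, 2)[1])  # 31 29

# ...and years differ too, so a "stopwatch year" has the same problem.
print(calendar.isleap(2023), calendar.isleap(2024))  # False True
print(365 * 86400, 366 * 86400)                      # 31536000 vs 31622400 seconds
```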
I guess this is why we make "second" the SI unit, and none of our human-convenience rules mess with the duration of a second. A leap second changes the duration of a minute (and above), and a leap year changes the duration of a month (and above). Which, oddly enough, reveals an inconsistency: we ought to say "leap day" instead of "leap year" if the unit being added is supposed to be the one that follows the word "leap", as it is for leap seconds.
If we set aside geopolitics and purely consider whether tightening the security of private networks is sensible whatsoever: are routers a substantially bigger threat than client devices such as the various IoT knickknacks (smart TVs, smart switches/outlets, smart appliances, etc.)? Controlling the NAT/firewall features is handy for opening ports and working around VLAN segmentation, but that isn't required for many scenarios; a compromised client device can often snoop on the rest of the network and exfiltrate what it discovers just fine even with an uncompromised router.
Even if the OS prevents it, Chrome could tell a Google server what device ID you've logged into Chrome with (if you logged into Chrome, of course), at which point the Gmail app could ask the Google server.
We would need to figure out a quantifiable metric for annoyance level. Municipal sound ordinances do tend to correctly utilize SPL(A) and SPL(C), with A-weighting being relevant for safety against ear injury (low frequencies have less influence) and C-weighting being relevant for annoyance level (low frequencies have more influence), but this isn't nearly enough. For example, ordinances carve out additional tolerance for burstiness, which makes sense for rare events like jackhammering but not for common events like routine plant operations. Sound with lots of harmonic content (think distortion) is more annoying than without. High frequencies can be worse if they reach you, but they're less likely to reach you (approaching a need for line-of-sight). It's complicated.
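To make the A-vs-C distinction concrete, here's a rough Python sketch of the standard IEC 61672 weighting curves (constants from memory, so treat the exact dB values as approximate):

```python
import math

def a_weight_db(f: float) -> float:
    """Approximate A-weighting gain in dB at frequency f (Hz)."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.0

def c_weight_db(f: float) -> float:
    """Approximate C-weighting gain in dB at frequency f (Hz)."""
    f2 = f * f
    rc = (12194.0**2 * f2) / ((f2 + 20.6**2) * (f2 + 12194.0**2))
    return 20 * math.log10(rc) + 0.06

for f in (50, 100, 1000, 4000):
    print(f, round(a_weight_db(f), 1), round(c_weight_db(f), 1))
# A-weighting knocks roughly 30 dB off a 50 Hz tone while C-weighting barely touches it,
# which is the low-frequency difference the ordinances lean on.
```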
Here's a free idea for someone to run with: just as Zillow has a neighborhood "walkability" score prospective buyers might look at, there could be various pollution scores, including sound and light, sourced from some kind of Flock-like (ew) network of capture devices. Some folks are into mounting things like personal weather stations on their property, so maybe a new generation of devices capturing this type of data (with local signature-based identification of sources, and triangulation when the same thing is heard in multiple places, etc.) wouldn't be too far-fetched.
Ideally, the revenue from the new customer would be enough to cover the upgrades, so long as the new customer makes an up-front commitment (from which loans can be written) that makes their risk (of having to pay for the upgrades even if they shut down much sooner than expected) about equal to what it would be if they built out their own off-grid system. And then they could sell to existing customers for slightly less than before, due to scale and an overall reduction of the peak-to-baseline ratio.
That's always been a weird one for me. If I might quote Gemini's summary since it seems accurate enough:
> Geographical/Historical: The Bosporus Strait in Turkey is historically considered the dividing line between Europe (West) and Asia (East).
> Prime Meridian: The 0° longitude line running through Greenwich, England, is used to technically separate the Eastern and Western Hemispheres.
> Cultural/Political: Cultural definitions are often more relevant, placing countries like Australia, New Zealand, and North America in the "West" due to historical ties, despite their geographic location.
I suppose you're leaning into the "Bosporus Strait" option more than the "Prime Meridian" option, given that the former would put most of Europe in the West while the latter would put most of it in the East.
Oddly enough, many Canadians use the word "American" to refer to Unitedstatesians, so presumably they'd use it to describe cuisine that same way (as in, poutine is Canadian but disco fries are American). This is extremely analogous to the Asia conversation, in that of course people know the term comes from the continental scale, but using that scale is less common, so it must be specifically invoked.
And then you've got Puerto Ricans, who are definitely US'ian but eat more like the non-US'ian Americans, so who knows what they would think of if you ask about American food, but it wouldn't surprise me if Contiguousunitedstatesian is the default (i.e., the same cuisine the Canadians would be referring to).
Probably because transferring accounts (typically done for better spamming, but in this case for adult access) is possible.
However, that makes me wonder what mechanism might "unverify" an account holder's age upon transfer. I suppose it's simply a need to re-verify (take a new photo) upon every login, but then folks could transfer the session cookie to avoid needing the new owner to perform a login (unless a new device ID/fingerprint makes the old cookie useless).
Since you don't have to verify every time you use the account, transfer of verified accounts will still be a "problem" though. It's just a CYA to be able to say "we verified this account owner."
But… you could transfer the account after age verification too. The only way to be sure is to ask for ID every time people use the website/application; then children will truly, finally be safe from the horrors of the Internet.
Yes, but you also said it's CYA, and it isn't sufficient CYA if only a former account owner, but not "this account owner," has been verified.
It's definitely CYA. A prohibition on transferring accounts is almost certainly in the TOS, so "we didn't know it was someone else using the account, that's against our TOS" will be the response.
This is cool except that the only ad for this I've come across so far was for analog summing. Remote or not, that concept (going out of one's way to theoretically have something more pleasing than digital summing) always smelled like a scam to me. Like ok, maybe a sample rate a hair above what Shannon/Nyquist demand can't do digital summing with all the right IM distortion of the missing supersonic content or whatever, but 192kHz ought to solve for that! So is it something else to be gained via analog summing?
They have 60+ rack units with little robot grabbers physically controlling the knobs.
Re analogue summing, yeah, it does next to nothing in reality. What you're missing, though, is that what people actually want from analogue summing isn't technically better sound but technically worse sound. Analogue gear might have a little harmonic distortion, a little crosstalk between channels, certain transformer characteristics, etc. that theoretically make it sound more glued together or warm. But ultimately summing is summing, and those differences vs. digital are very small (and won't always contribute positively either).
I'm not interested in analog summing myself, but I think you're missing the point. It's not about "better" summing. You want more euphonic summing. Analog audio processing often comes with artefacts that give the signal sent through it a more pleasing character, for whatever reason (phase shift, saturation, channel differences between left and right, transient modulation, slew rate, power sag, etc.).
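To make "euphonic" concrete: digital summing is literally just sample-wise addition, and the character people chase is whatever nonlinearity gets layered on top. A toy Python sketch (the tanh soft-clip is only a stand-in for analog-style saturation, not any particular circuit):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
a = 0.5 * np.sin(2 * np.pi * 110 * t)   # two arbitrary test tones
b = 0.5 * np.sin(2 * np.pi * 440 * t)

clean_sum = a + b                        # digital summing: just addition

drive = 1.5                              # stand-in for analog-style coloration
warm_sum = np.tanh(drive * clean_sum) / np.tanh(drive)  # soft saturation

# The residual is the "character": harmonic/IM content the clean sum doesn't have.
residual = warm_sum - clean_sum
print("peak added coloration:", float(np.max(np.abs(residual))))
```

Whether that coloration reads as "glued" or just as mud is exactly the subjective part.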
I personally think analog summing is a waste of time, because the differences are too subtle to be worth the investment in setting it up. But that's just my opinion. Some people are really into it (Eric Valentine comes to mind).
Just wanted to point out that in the context of audio equipment (both professional and audiophile) "sounds better" often means "sounds worse but more engaging". Just like a polaroid picture often evokes more emotions than a photo taken with a modern digital camera and a great lens.
On my Pixel 10 using Chrome, it says "Mic needed - refresh to allow" but refreshing doesn't change anything. It's possible that I did something years ago to prevent whatever permission popup might normally be offered?
Your browser might have microphone access set to "Deny" by default rather than "Ask". This happened to my friend. He changed the setting and it worked, but maybe there's a way to give a more helpful error in this scenario. Let me see.