That combined with the parent's post is, perhaps counterintuitively, somewhat concerning.
The proper technique for yielding to pedestrians wishing to cross is to start slowing down early, as if you were planning to stop before the crossing. That sends a clear signal to the pedestrian they're good to start crossing. Then you're free to speed back up. This is very comfortable for the pedestrian and the vehicle never needs to stop, so the slowdown is minimal.
That Waymos apparently don't act this way and seem to need to send an explicit signal to pedestrians sounds concerning to me, even if it's ultimately safe.
Waymo does slow down as it approaches stop signs (usually where crosswalks are) and it will slow down if there is a pedestrian entering the roadway (crosswalk or not) since it doesn't want to crash into them.
The explicit signal of a driver noticing you (eye contact) is replaced by the signal above the vehicle. Are you not equally concerned that pedestrians have to get an explicit signal from drivers who are legally required to yield or stop??
Ah, so this particular pedestrian crossing wasn't at a stop sign (we were at the big historic army place out by the Golden Gate bridge) so that might explain it.
I particularly enjoyed (hated) "... is now the _least RAM browser_ ...".
Reminds me of a childhood friend of mine who always said "it looks very 3D" when he meant "the graphics are good". Pissed me off back then, and apparently still does.
It's hard to say whether it's AI generated or just bad writing, but my eyes kept sliding off of it. The bolding for emphasis could be a sign it's LLM output though; whichever bot was popular for reddit posting a few months ago loved to randomly bold parts of its responses.
It splits revenue into 3 categories, "Productivity and Business Processes", "Intelligent Cloud", and "More Personal Computing", with Windows as one of several things in the 3rd group. How did you determine it's the fifth-largest revenue source?
Tariffs/inflation/everything have raised the unit cost to the point that they're probably sometimes close to running at a loss again on the latest-gen consoles.
One possibility I've seen raised is that slower GI movement -> slower alcohol uptake -> not getting as much of a "hit" from drinking, as the effects come on more slowly.
In my personal experience, I do still get the same hit from drinking–I feel a buzz almost immediately, same as before. Rather, I just don't feel the "urge". I've never been a heavy drinker, but I would occasionally crave a beer or two, particularly at the end of a work week. Also, drinking on a GLP1 (I've been on both Tirzepatide and Semaglutide) absolutely wrecks my GI tract for 24-48 hours. Usually with an onset of maybe 8 hours, I get horrible heartburn, moderate to severe nausea, and even mild diarrhea.
I don't think it's any one thing. People like different kinds of alcohol, for different reasons. For someone whose alcohol cravings are based on the sugar in their preferred alcoholic drink, it isn't surprising that a medication that lowers their desire to ingest sugar also lowers their desire to drink (their chosen sugary drink). Naturally this doesn't cover all alcohol drinkers, but it can't be none of them either.
It’s definitely not ccache, as they cover that under compiler wrappers. This works for Android because a good chunk of the tree is probably dead code for any single build (device drivers and whatnot). It’s unclear how they benchmark; they probably include checkout time of the codebase, which artificially inflates the cost of the build (you only check out once). It’s a virtual filesystem like what Facebook has open sourced, although they claim to also do build caching without needing a dedicated build system that’s aware of it, and that part feels very novel.
Re: including checkout, it’s extremely unlikely. Source: worked on Android for 7 years; a 2 hr build time tracks with post-checkout build time on a 128-core AMD machine. Checkout was O(hour), which would leave only an hour for the build if that were the case.
Obviously this is the best-case, hyper-optimized scenario and we were careful not to inflate the numbers.
The machine running SourceFS was a c4d-standard-16, and if I remember correctly, the results were very similar on an equivalent 8-vCPU setup.
As mentioned in the blog post, the results were 51 seconds for a full Android 16 checkout (repo init + repo sync) and ~15 minutes for a clean build (make) of the same codebase. Note that this run was mostly replay: over 99% of the build steps were served from cache.
Do you have any technical blog post on how the filesystem intercepts and caches build steps? This seems like a non-obvious development. The blog alludes to a sandbox step, which I’m assuming is for establishing the graph somehow, but it’s not obvious where the pitfalls are (e.g. if I install some system library, does this interception recognize when system libraries or tools have changed? What if the build description changes slightly? How does invalidation work?). Basically, it’s a bold claim to be able to deliver Blaze-like features without requiring any changes to the build system.
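For what it's worth, one way this kind of invalidation could work in principle (purely a sketch of the general content-addressed-cache idea, not a claim about how SourceFS actually does it) is to key each cached build step on a hash of everything the sandbox observed the step reading, including the tool binary itself:

```python
import hashlib

def cache_key(tool_bytes: bytes, cmdline: str, input_blobs: list[bytes]) -> str:
    """Hypothetical cache key for one build step: a hash over every input
    the step consumed. If any input changes (including the compiler binary
    or a system library), the key changes and the step reruns."""
    h = hashlib.sha256()
    h.update(tool_bytes)           # the tool/compiler binary contents
    h.update(cmdline.encode())     # the exact command line
    for blob in sorted(input_blobs):
        h.update(blob)             # every file the step read
    return h.hexdigest()

# Identical inputs -> identical key: the cached output can be replayed.
k1 = cache_key(b"gcc-13", "gcc -O2 -c foo.c", [b"int main(){}"])
k2 = cache_key(b"gcc-13", "gcc -O2 -c foo.c", [b"int main(){}"])
assert k1 == k2

# Upgrading a system tool changes an input -> cache miss, step reruns.
k3 = cache_key(b"gcc-14", "gcc -O2 -c foo.c", [b"int main(){}"])
assert k1 != k3
```

Under that model, an upgraded system library would invalidate exactly the steps that read it, but a step that reads anything nondeterministic (timestamps, the network) would poison the cache, which is presumably where the sandbox comes in.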