Hacker News | Groxx's comments

wouldn't this be solved by synchronously invalidating everything before computing anything? it seems like that's what the described system is doing tbh, since `setValue` does a depth-first traversal before returning. or is there a gap where that strategy fails you?
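A minimal sketch (in Go, with invented `node`/`setValue` names — not the actual implementation under discussion) of the two-phase strategy described here: `setValue` synchronously marks everything downstream dirty via a depth-first traversal before anything recomputes, and reads recompute lazily:

```go
package main

import "fmt"

// node is a toy cell in a reactive graph: either a source value or a
// computation over its dependencies. All names here are invented.
type node struct {
	dirty      bool
	value      int
	compute    func() int // nil for source nodes
	dependents []*node
}

// invalidate marks this node and everything downstream dirty, depth-first,
// before any recomputation happens.
func (n *node) invalidate() {
	if n.dirty {
		return
	}
	n.dirty = true
	for _, d := range n.dependents {
		d.invalidate()
	}
}

// get recomputes lazily; by the time anything reads a value, the whole
// affected subgraph is already known to be dirty.
func (n *node) get() int {
	if n.dirty && n.compute != nil {
		n.value = n.compute()
	}
	n.dirty = false
	return n.value
}

// setValue runs the synchronous depth-first invalidation pass first,
// then stores the new value.
func (n *node) setValue(v int) {
	n.invalidate()
	n.value = v
	n.dirty = false
}

func main() {
	a := &node{value: 1}
	b := &node{dirty: true}
	b.compute = func() int { return a.get() * 10 }
	a.dependents = append(a.dependents, b)

	fmt.Println(b.get()) // 10
	a.setValue(5)
	fmt.Println(b.get()) // 50
}
```

Because invalidation completes before any read, there's no window where a dependent computes against a mix of old and new values — which is the gap the question is probing for.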

yea, this is in javascript. it's inherently single-threaded in almost all contexts (the exception being things like node.js shared memory, where you're intentionally bypassing core semantics for performance and correctness is entirely on you)

Aside from being a bit small and having to be held close, phones are good proportions for reading. Computer screens have gotten wider and wider, and UIs bigger and bigger, and it eats into reading space pretty heavily. Especially if you don't have a high-density screen.

> Computers screens have gotten wider and wider, and UIs bigger and bigger

Sadly, most websites forcefully limit the width of the text. It's like they pretend our monitors are oriented to be tall rather than wide. Even HN has unnecessarily big margins. So unless I try to cram another window in my FHD monitor, I have ~50% or more completely wasted space. Margins should be 2-3 pixels wide, not 20-30% of the screen.


There are actual user studies to show that wider text is harder to read. https://baymard.com/blog/line-length-readability

The major difference is that in the era of print, it was pretty obvious where a wide multi-column layout could go, like on a newspaper page, but in a desktop browser the page is theoretically endless.


I can resize my window easily if I want shorter text. Or use ctrl-shift-m in Firefox. But I can't easily make the text longer without userscripts or addons.

> actual user studies to show that wider text is harder to read

That may apply to most people, but not to everyone.


afaict it applies to literally everyone. there's a variable "sweet spot" of course, but once you get out to "extremely wide" it's reliably worse for everyone, and there are LOADS of computer monitors that qualify for that label.

margins to control the width of large blocks of text have a ton of research in their favor, it's not just "more whitespace = more gooder" UI design madness. there's some of that of course, but there's a sane core underneath it all.


Solution: rotate your monitor 90 degrees, and inform your OS that you have done so. Now your monitor is 1080x1920. You'll actually be amazed how much more of a document fits on screen without sacrificing readability.

Preach. I have 4 monitors and one is a vertical 1440x2560. Massive productivity boost - terminals running claude code, reading docs, IDE panes, anything with lots of scrolling. Highly recommend it!

In addition to more space, having only one foreground application really reduces distractions and visual clutter. Also, for some reason I am comfortable using larger fonts on phones and tablets, which makes doing lots of reading easier than on my laptop.

> reduces distractions

Have you looked over the shoulder of somebody trying to "do" something on their phone recently?

If so you might have noticed the constant pings and notifications from dating apps, news sites, random games and cool-apps-that-you've-long-forgotten-but-still-have-location-and-background-services-turned-on.


That's where Reduce Interruptions on the iPhone (or Do Not Disturb) comes in handy.

That's not just interruptions. It's the notifications bar itself.

I noticed this only recently - I switched the default phone launcher to a scifi theme built on Total Launcher (there are legit personal research project reasons behind that, it's not just to look cool!) and after a few days (and a bunch of missed messages), I realized my life seemed suspiciously light on interruptions and random events. It took me a few more moments to pinpoint the reason: the theme hid the notification bar entirely. It was still there, ready to pull down and expand with a gesture or a button tap - but that top line with icons was not visible (and through a stroke of luck, I had misconfigured something in another experiment and had no notification indicators on the lock screen, either).

Not having notification indicators visible on any surface is really all it took - and conversely, this means that just having them there created the majority of the burden for me. I thought I successfully solved the distraction problem by silencing or eliminating ads and useless notifications, but now I know that even the important ones aren't really that important for the burden their very existence creates.


Android modes provide control over notification display.

Modes control which people and apps can trigger a sound/vibration, but also offer the option to hide the silenced notifications from the status bar, pull-down shade, and dots on app icons. I hide them from the status bar, but not the pull-down shade so that I can manually check if I want to, but don't see them at a glance.

I'm not a heavy user of this feature though; I mostly don't install apps that have spammy notifications.


Right. I'm saying that living for a week without any notification bar at all made me realize that even my usual well-curated notification bar is impacting me much more than I realized.

I imagine usage patterns vary greatly. For me, most of the time, I have it set to only allow messages from contacts, and I usually handle those immediately.

I mean, some, sure. but it's a choice, and not all choose to do that. and I've watched quite a few (of all ages) escape it when they realize how much it's harming their ability to do what they need to do.

This is the first time I've heard someone say a smartphone reduces distractions.

As a millennial boomer, I prefer my triple monitor setup and mechanical keyboard, not to mention network- and client-level content blockers, whenever I have to input more than a sentence.

I was at a conference last week, and I took notes in a fullscreened GNU Nano. Distractions, ADHD, etc. Did get some odd looks, but I couldn't imagine taking notes without an actual keyboard. I'm not an ultra fast typer, but I'm decent - I'd challenge any thumb typer on MonkeyType.


I don't have any social apps or games on my phone. Other than the web browser there's nothing to distract me. I find it so easy to get caught up in checking the news or email or the episode of that show I was watching on my laptop, but I don't do any of those things habitually on my phone or tablet or reader so that's my "distraction free" device.

That's only for reading though! For taking notes I go with a real keyboard or pencil and paper whenever I have the choice.


similar here, I'm gradually removing more and more things from my phone. at this point it's mostly just a couple actually-important apps, a web browser, and messaging apps (because it's clearly superior to whipping out a laptop for brief things). "social" outside messaging is in the web browser or not on the phone at all. if I want to focus I just turn on Do Not Disturb for an hour.

browsing is slowly reducing as time goes on too, as while it's convenient on my phone, it's rarely efficient. it doesn't take long at all before I'd rather pull out a laptop and finish more quickly.


Nothing about this requires an app. Just an ID.

Forcing the app is almost certainly for tracking purposes, and to justify the decision of whatever braindead higher-up decided it was a good idea - therefore it must be made to work.


Doesn't seem like those should be mutually exclusive, though the habits involved are quite opposing and I can definitely believe they're uncommon.

E.g. GC doesn't need to be precise. You could reserve CPU budget for GC, and only use that much at a time before yielding control. As long as you still free enough to not OOM, you're fine.
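A toy sketch of that budget idea (all names invented): an incremental mark phase that processes at most `budget` objects per step and then yields back to the mutator, rather than stopping the world until marking finishes:

```go
package main

import "fmt"

// obj is a toy heap object; refs stand in for pointers it holds.
type obj struct {
	marked bool
	refs   []*obj
}

// collector does incremental marking: each step processes at most `budget`
// objects from the work list, then yields, so mutator code keeps running.
type collector struct {
	work []*obj
}

func (c *collector) addRoot(o *obj) { c.work = append(c.work, o) }

// step marks up to budget objects; returns false when marking is done.
func (c *collector) step(budget int) bool {
	for budget > 0 && len(c.work) > 0 {
		o := c.work[len(c.work)-1]
		c.work = c.work[:len(c.work)-1]
		if o.marked {
			continue
		}
		o.marked = true
		c.work = append(c.work, o.refs...)
		budget--
	}
	return len(c.work) > 0
}

func main() {
	// small chain of objects: root -> a -> b
	b := &obj{}
	a := &obj{refs: []*obj{b}}
	root := &obj{refs: []*obj{a}}

	c := &collector{}
	c.addRoot(root)

	steps := 0
	for c.step(1) { // budget of one object per step
		steps++ // in a real system, mutator code runs here
	}
	fmt.Println(steps, b.marked) // 2 true
}
```

A real concurrent GC needs write barriers so the mutator can't hide objects from an in-progress mark, but the budget-then-yield shape is the point: as long as each cycle frees enough before memory runs out, imprecise pacing is fine.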


Ha! I've been thinking of this exact thing, and was curious how natural-looking the end result would be / how much you could compress the tokens by choosing less and less likely ones until it became obvious gibberish. I'm kinda surprised that it just sounds like normal slop at that density. Seems viable to use with "just" two bots chattering away at each other, and also occasionally sending meaningful packets.

In principle the output is arbitrarily natural-looking. The arithmetic coding procedure effectively turns your secret message into a stream of bits that is statistically indistinguishable from random, the same as you pull out of your PRNG in normal generation.

Yes, with a few gotchas, especially related to end handling. If the government extracts the hidden bits from possible stego-streams, and half of the ones they've encountered give an "unexpected end of input" error, but yours never give that error, they will know that your hidden bit streams likely contain some message.

You can avoid it by using a bijective arithmetic encoder, which by definition never encounters an "unexpected end of stream" error: every bit string decodes to some distinct message. That's the cool way.

The boring practical way is to just encrypt your bits.


yeah, encrypting the bits before embedding seems like a valid first step... even if someone suspects steganography, they still can't read it

Is there a difference there for a Pixel? I thought those bootloaders have always been unlockable (after carrier unlocking, which should be possible after the contract is paid off).

"Batteries included" means "ossification is guaranteed", yah. "stdlib is where code goes to die" is a fairly common phrase for a reason.

There's clearly merit to both sides, but personally I think a major underlying cause is that libraries are trusted. Obviously that doesn't match reality. We desperately need a permission system for libraries, it's far harder to sneak stuff in when doing so requires an "adds dangerous permission" change approval.


> "Batteries included" means "ossification is guaranteed", yah. "stdlib is where code goes to die" is a fairly common phrase for a reason.

Except I'd rather have ossified batteries that solve my problem, even if they're not as convenient as more modern alternatives, than not have them at all on a given platform.


Golang seems to do a good job of keeping the standard library up to date and clean

Largely, yes.

But also everyone sane avoids the built-in http client in any production setting because it has rather severe footguns and offers only complicated (and limited) control. It can't be fixed in-place due to its API design... and there is no replacement at this point. The closest we've gotten was adding some support for using a Context, with a rather obtuse API (which is now part of the footgunnery).

There's also a v2 of the json package because v1 is similarly full of footguns and lack of reasonable control. The list of quirks to maintain in v2's backport of v1's API in https://github.com/golang/go/issues/71497 (or a smaller overview here: https://go.dev/blog/jsonv2-exp) is quite large and generally very surprising to people. The good news here is that it actually is possible to upgrade v1 "in place" and share the code.

There's a rather large list of such things. And that's in a language that has been doing a relatively good job. In some languages you end up with Perl/Raku or Python 2/3 "it's nearly a different language and the ecosystem is split for many years" outcomes, but Go is nowhere near that.

Because this stuff is in the stdlib, it has taken several years to even discuss a concrete upgrade. For stuff that isn't, ecosystems generally shift rather quickly when a clearly-better library appears, in part because it's a (relatively) level playing field.


This looks like an ad for batteries included to me.

Libraries also don't get it right the first time, so they increment minor and major versions.

Then why is it not okay for built-in standard libraries to also version their functionality, just like Go did with JSON?

The benefits are worth it judging by how ubiquitous Go, Java and .NET are.

I'd rather leverage the billions in support paid by the likes of Google, Oracle and Microsoft to build libraries for me than rely on some random low-bus-factor person, prone to being hacked at any time due to bad security practices.

Setting up a large JavaScript or Rust project is like giving 300 random people on the internet permission to execute code on my machine. Unless I audit every library update (spoiler: no one does it because it's expensive).


Libraries don't get it right the first time, but there are often multiple competing libraries which allows more experimentation and finding the right abstraction faster.

Third-party libraries had been avoiding those json footguns (and significantly improving performance) for well over a decade before the stdlib got to it. Same with logging. And it's looking like it will be over two decades for even a slightly reasonable http client.

Stuff outside stdlib can, and almost always does, improve at an incomparably faster rate.


And I think the Go people seem to do a fairly good job of picking out the best and most universal ideas from these outside efforts and folding them in.

.NET's JSON and their Kestrel HTTP server beg to differ.

Their JSON even does cross-platform SIMD, and their Kestrel stack was top 10/20 on the TechEmpower benchmarks for a while without the ugly hacks other frameworks/libs use to get there.

stdlib is the science of good enough and sometimes it's far above good enough.


Rust is especially vulnerable, with Serde included in everything and maintained by one person.

For me, the v2 rewrites, as well as the "x" semi-official repos, are a major strength. They tell me there is a trustworthy team working on this stuff; obviously not everything will always be as great as you might want, but the floor is rising.

yea, I like the /x/ repos a fair bit. "first-party but unstable" is an extremely useful area to have, and many languages miss it by only having "first-party stable forever" and "third party". you need an experimentation ground to get good ideas and seek feedback, and keeping it as a completely normal library allows people/the ecosystem to choose versions the same way as any other library.

Another downside of a large stdlib is that it can be very confusing. It took me a while to figure out how unicode is supposed to work in Go, since you have to track down, across the APIs, what the right things to use are. Which is even more annoying because strings are fundamentally byte-oriented, and the unicode support is buried everywhere without being super explicit or discoverable.

I'm not sure I understand. Why would a standard library, a collection of what would otherwise be a bunch of independent libraries, bundled together, be more confusing than the same (or probably more) independent libraries published on their own?

100% to libraries having permissions. If I'm using some code to say compute a hash of a byte array, it should not have access to say the filesystem nor network.

"Low-profile split mechanical" is I think my ideal too. Though I really like cupped keywells, the Advantage2 definitely convinced me that it's more comfortable than flat.

tbh when typing both together, I just shift my hand and press with my index and middle fingers. otherwise it's my pinky.
