Hacker News | void-star's comments

It really shines for navigating history. <esc>/ searches history the same way as the editor search function

It’s strange. I have heard this from lots of others too. I think I am an anomaly here. I can’t live without shell vi mode

You're not alone; I heavily rely on vi mode and often struggle if I'm on someone else's machine and can't use it. I always wonder how you're supposed to work without it, but I never dare to ask.

`set -o vi` is quickly typed in anger...

It is an additional burden to switch to shell vi mode, since it is not the standard. Maybe you can put it in all of your bashrc files, but you will probably hear some swearing from the people logging into your machines :).

Same - shell vi mode is critical for intensive terminal sessions.

set -o vi

<esc> puts you into vi mode at the cli prompt with all the semantics of the editor.
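For anyone who wants this everywhere, a sketch of the usual config (file names assume bash/readline; the zsh line is its separate equivalent):

```shell
# ~/.bashrc — vi editing mode for bash only
set -o vi

# ~/.inputrc — vi mode for every readline-based program
# (bash, the python REPL, psql, ...)
set editing-mode vi

# ~/.zshrc — the zsh equivalent (zsh doesn't use readline)
bindkey -v
```

With the `.inputrc` setting, `<esc>/` history search works in any readline program, not just the shell.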

These carpal tunnel riddled hands can’t be bothered to reach for ctrl or alt let alone arrow keys.


If you aren't aware already, you can put 'setxkbmap -option ctrl:swapcaps' in one of your startup config files. An X startup file like ~/.xprofile is the usual spot; .bashrc works too, though it then reruns in every shell. That flips left CTRL and CAPS LOCK.

It’s almost like we need some deterministic set of instructions that can be fed to a machine and followed reliably? Like… I don’t know… a “programming language”?


I would say that's exactly not the solution, since the surface area is too large to hard code (which is somewhat the point of this). Evidence being, it's 2026 and there are exactly 0 robots that can do this simple task reliably in any kitchen you put them in.

You need something general and flexible, dare I say "intelligent", or you'll be babysitting the automation, slowly adding the thousand little corner cases that you find, to your hard coded decision tree.

This is also why every company with a home service robot that can do anything even remotely as complex as making a sandwich is doing it via teleoperation.


The advice that everyone seemed to agree on at least just a few months ago was to make sure _you_ write the comprehensive tests/specs and this is what I still would recommend doing to anyone asking. I guess even this may be falling out of fashion though…


Generate with carefully steered AI, sanity check carefully. For a big enough project writing actually comprehensive test coverage completely by hand could be months of work.

Even state-of-the-art AI models seem to have no taste, or sense of "hang on, what's even the point of this test?" I've seen them diligently write hundreds of completely pointless tests, and sometimes the reason they're pointless is some subtle thing that's hard to notice amongst all the legit-looking expect code.


There is no need to write tests manually. Just review the tests and make sure there is good coverage; if there isn't, ask the AI for additional tests and give it guidance.


Nope. Those are not the only answers I am seeing. I'm still curious though. 2x was nice because nobody really questioned it. Now that we have, there doesn't seem to be one "answer". This is a fun/interesting question that comes up every now and then, here and elsewhere :-) I suspect someone smarter than me about system tuning will have a much smarter and more nuanced answer than "just use 2x".


I thought the modern advice was you don't need it at all. No more spinning disks, so there's no speed gain from placing it on the faster outer tracks, and modern OSes manage memory in more advanced and dynamic ways. That's what I choose to believe anyway; I don't need any more hard choices when setting up Linux :)


The main downside to not having swap is that Linux may start discarding clean file-backed pages under memory pressure, whereas if you had swap available it could go after anonymous pages that are actually cold.

On a related note, your program code is very likely (mostly) clean file backed pages.

Of course, in the modern era of SSDs this isn't as big of a problem, but in the late days of running serious systems with the OS/programs on spinning rust, I regularly saw full-blown collapse this way: processes getting stuck for tens of seconds as every process on the system contended on a single disk, page faulting as it executed code.


I don't think that's correct. Having swap still allows you to page out rarely-used pages from RAM, letting that RAM be used for things that positively impact performance, like caching actually-used filesystem objects. Pages that are backed by disk (e.g. files) don't need swap, but anonymous memory that has e.g. only been touched once and then never even read afterwards should have a place to go as well. Also, without swap space you can only evict file-backed pages, instead of including anonymous memory in that choice.

For that reason, I always set up swap space.
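For what it's worth, setting one up is only a few commands on a modern system; the 4G below is an illustrative size, not a recommendation (and certainly not the old 2x rule):

```shell
# create and enable a 4 GiB swap file (size is a judgment call)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile   # swap files must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile

# make it survive reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```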

Nowadays, some systems also have compression in the virtual memory layer, i.e. rarely used pages get compressed in RAM to use up less space there, without necessarily being paged out (= written to swap). Note that I don't know much about modern virtual memory and how exactly compression interacts with paging out.
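If you want to check whether that compression layer is active on a given box, these are the standard sysfs locations for zswap (zram, the other common mechanism, shows up as an ordinary swap device instead):

```shell
# is zswap enabled, and with which compressor?
cat /sys/module/zswap/parameters/enabled
cat /sys/module/zswap/parameters/compressor

# zram, if present, appears as a /dev/zram* swap device here
swapon --show
```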


Every time I've run out of physical memory on Linux, I've had to just reboot the machine, being unable to issue any kind of commands through input devices. I don't know what it is, but Linux just doesn't seem to be able to deal with that situation cleanly.


The situation mentioned is not about running out of memory, but about being able to use memory more efficiently.

Running out of memory is a hard problem, because in some ways we still assume that computers are Turing machines with an infinite tape. (And in some ways, theoretically, we have to.) But it's not clear at all which memory to free up (by killing processes).

If you are lucky, there's one giant process with tens of GB of resident memory to kill to put your system back into a usable state, but that's not the only case.


Windows doesn't do that, though. If a process starts thrashing the performance goes to shit, but you can still operate the machine to kill it manually. Linux though? Utterly impossible. Usually even the desktop environment dies and I'm left with a blinking cursor.

What good is it to get marginally better performance under low memory pressure at the cost of having to reboot the machine under extremely high memory pressure?


In my experience the situations where you run into thrashing are rather rare nowadays. I personally wouldn't give up a good optimization for the rare worst case. (There's probably some knobs to turn as well, but I haven't had the need to figure that out.)


Try doing cargo build on a large Rust codebase with a matching number of CPU cores and GBs of RAM.
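If anyone hits this, capping build parallelism is the usual workaround; `-j` and the `[build] jobs` key are real Cargo options, and the value 4 is just an example:

```shell
# limit parallel compile jobs so N cores don't each demand gigabytes at once
cargo build -j 4

# or persistently, in .cargo/config.toml:
# [build]
# jobs = 4
```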


I believe that it's not very hard to intentionally get into that situation, but... if you notice it doesn't work, won't you just stop? (It's not that this would work without swap, after all; it would just OOM-kill without the thrashing pain.)


I don't intentionally configure crash-prone VMs. I have multiple concerns to juggle and can't always predict with certainty the best memory configuration. My point is that Linux should be able to deal with this situation without shitting the bed. It sucks to have some unsaved work in one window while another has decided that now would be a good time to turn the computer unusable. Like I said before, trading instability for marginal performance gains is foolish.


No argument there. I also always had the impression that Linux fails less gracefully than other systems.


That only helps if you don't have much free RAM. If you've got more free RAM than you need for cache (including disk cache), swap only slows things down. With RAM prices these days, though, getting enough RAM to avoid swap is not always worth it. IME on a desktop with 128GiB of RAM & Zswap I've never hit the backing store, but have gone over 64GiB a few times. I wouldn't want to have to pay to rebuild my desktop these days; 128GiB of ECC RAM was pricey enough in 2023!


It’s still beneficial, so that unused data pages are evicted in favor of more disk cache.


I'm the OP. I got myself into collecting falsehoods people believe about Linux swap and OOM[1]. There is an entry about this 2x rule in this collection, with my answer on how to select swap size.

My question on Retrocomputing.StackExchange is my attempt to add some historical background to this entry.

[1]:https://alexeydemidov.com/2025/05/15/falsehoods-people-and-L...


Why was this downvoted? I’m genuinely curious what the current recommendations for swap are too!

Edit: oh, and I don’t have an actual personal system with a swap configuration on it anymore to give my own answer either.


same, completely fair thing to ask.

people are too negative these days :|


LLMs may occasionally turn bad code into better code but letting them loose on “good” or even “good enough” code is not always likely to make it “better”.


Groan…


Public Linux rootkits have been around a very very long time. Nothing new here in that regard. Also Linux AV has been around almost as long…

This effort is more useful to up and coming defenders and security researchers than attackers by far.

