This seems like it could be a useful tool. At first read, there's nothing here that cpio, md5sum, *zip*, and mkfs in tandem couldn't do, but it simplifies the process.
The real question in my mind is if the "attic space" of learning a new tool and dragging it around everywhere is worth the convenience versus doing it The Old, Boring, Manual Way(TM). I suppose time will tell...
FSArchiver reduces cognitive load by guaranteeing that resulting archive is accurate and exact mirror of original filesystem. AFAIK, cpio does not support extended attributes, and one has to remember to call tar with specific flags to store xattrs.
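For reference, here's roughly what "remembering the right flags" looks like with GNU tar (a sketch, assuming GNU tar >= 1.27; paths and file names are invented for the example):

```shell
# Sketch assuming GNU tar >= 1.27 (BSD/busybox tar differ).
set -e
work=$(mktemp -d)
mkdir "$work/data"
echo hello > "$work/data/file.txt"

# Without --xattrs, GNU tar silently drops extended attributes:
tar --create --xattrs -f "$work/backup.tar" -C "$work" data

# Restoring them also requires the flag:
mkdir "$work/restore"
tar --extract --xattrs -f "$work/backup.tar" -C "$work/restore"
```

Forget `--xattrs` on either side and the metadata is gone without any error, which is exactly the kind of thing fsarchiver's "it just stores everything" default avoids.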
"by guaranteeing that resulting archive is accurate and exact mirror of original filesystem."
^^^ that is true enough
"reduces cognitive load"
^^^ that I have doubts about. I expect this would be highly variable between people, which leads to my question about the trade-off.
For those who are newcomers (say ten years or fewer) to the *nix world, it may well be an improvement. For those who have been using the standard-ish tools for 30+ years, learning yet-another-new-utility has a very high cost.
That dilemma would be irrelevant if the person taking backups is the same person doing the restoration (as will often be the case). My concern is along the lines of "what if this becomes a de facto standard?"
Unfortunately fsarchiver files aren't mountable. And mksquashfs isn't well suited to archiving, since it lacks some needed options. There is squashfs-tools-ng, though, which can convert a tar into a squashfs.
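For example, the conversion route looks roughly like this (a sketch, assuming squashfs-tools-ng's `tar2sqfs` is installed; file names invented):

```shell
# Sketch; skips cleanly if squashfs-tools-ng isn't installed.
command -v tar2sqfs >/dev/null || { echo "tar2sqfs not installed"; exit 0; }
set -e
work=$(mktemp -d); cd "$work"
mkdir data
echo hello > data/file.txt

# tar2sqfs reads a tar stream on stdin and writes a squashfs image:
tar -c data | tar2sqfs backup.sqfs

# The image is then loop-mountable read-only (as root):
#   mount -t squashfs -o loop backup.sqfs /mnt
```

So you get a mountable, compressed archive, which is the one thing fsarchiver files can't give you.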
But anyway, fsarchiver is my tool of choice for fs backups.
We all have our pet peeves, and this is one of mine, too. But it's surely orthogonal to issues of code reliability: someone can definitely write flawless English but terrible code, and the reverse is entirely believable, so it's not clear what the relevance is here.
Honestly, the biggest thing it tells me is that this is likely a one-person project… there was clearly no one proofreading or reviewing this text. I am always a bit wary of relying on single-person projects.
Seems like tar, with internal checksums (instead of external checksums or FEC). I'd just stick with tar and add some par2 blocks if you are worried about bitrot.
Basically, it takes a normal TAR file and creates external parity files that can be used to recover data in the case of bitrot. Something like this would have been great to have for a few data files I had to retrieve from tape that ended up with a bit flip somewhere in the network > disk > tape > disk process.
However, it doesn't look like the program is maintained anymore...
> DISCLAIMER: This project web space is not actively mantained and is presented here for archive purposes. However some project members still montior the project mailings lists if you have questions. (sic)
The last release was in 2004 (Wikipedia says it was active until 2015, but I don't see that).
PAR2 (libpar2/par2cmdline) continues to see active (if sporadic) maintenance. The most recent tagged release was 2020-02-09, but a handful of PRs have been merged since. [2]
PAR3 has a reference implementation and alpha spec (libpar3/par3cmdline) which is based around Blake3. [3]
OG PAR (libpar/parcmdline) is legit unmaintained; the last release was 21 years ago. [1]
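For anyone curious, the tar + par2 workflow mentioned above looks roughly like this (a sketch, assuming par2cmdline is installed; file names invented):

```shell
# Sketch; skips cleanly if par2cmdline isn't installed.
command -v par2 >/dev/null || { echo "par2 not installed"; exit 0; }
set -e
work=$(mktemp -d); cd "$work"
echo "important data" > data.bin
tar -cf backup.tar data.bin

# Add ~10% redundancy as external .par2 parity files:
par2 create -r10 backup.tar

# Later, to check for (and fix) bitrot:
par2 verify backup.tar.par2
# par2 repair backup.tar.par2   # reconstructs damaged blocks from parity
```

The redundancy level (`-r10` here) is the knob: more parity means bigger files but the ability to survive more flipped bits.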
Fsarchiver works better for me. With tar I have to manually specify a lot of exclusions when making full filesystem backups, at the very least the submountpoints. With fsarchiver you don't have to.
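To be fair, GNU tar does have a flag for the submountpoint case; a minimal sketch (invented paths, assumes GNU tar):

```shell
# Sketch assuming GNU tar: --one-file-system stops at mount-point
# boundaries, so /proc, /sys, or a separately mounted /home aren't
# swept into a root backup. fsarchiver skips submounts on its own.
set -e
work=$(mktemp -d)
mkdir "$work/data"
echo hi > "$work/data/file.txt"
tar --create --one-file-system -f "$work/backup.tar" -C "$work" data
```

But that's exactly the point above: it only helps if you remember it every time.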