Racing/Freestyle FPV Drones. It's a wonderful fusion of building them, electronics, software (betaflight is open source), cinematography, and just tinkering in general.
That's typical for foundations that operate along the lines of an endowment. The idea is that they want to be able to continue to sustain and grow their philanthropy forever, and they can't do that if they run out of money.
Same here. Running on a Pi 3 with Node-RED, both in Docker containers. Works perfectly and reliably with hundreds of entities, numerous automations, etc. Just make sure you have an official Pi AC adapter and boot off an SSD, not an SD card, otherwise you will have problems.
I'm running HACS in the container release right now, and it's worked for a long time. So long as you have your HA configuration folder mounted as a volume, it will persist across restarts/upgrades.
I'd go as far as to argue the (officially supported!) container release is the best way to get a production-quality install of HA - containers are a great way to package and release complex web apps like HA. Mine automatically updates itself every time a new container image is released, and has done so with no intervention from me for over a year. With the container lifecycle/config, you don't really need Supervisor mode either.
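For reference, a minimal sketch of that kind of setup - the host config path and the watchtower auto-update approach are my assumptions, not the only way to do it:

```shell
# Home Assistant container with the config folder mounted as a volume,
# so config survives restarts/upgrades (image name is the official one)
docker run -d \
  --name homeassistant \
  --restart unless-stopped \
  --network host \
  -v /opt/homeassistant/config:/config \
  ghcr.io/home-assistant/home-assistant:stable

# One way to get the hands-off updates described above: watchtower polls
# for new images and recreates the container when one is published
docker run -d \
  --name watchtower \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```

Because all state lives under the mounted `/config` volume, pulling a new image and recreating the container is effectively the whole upgrade story.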
To say it is crippled is nonsense, it is literally one of the two officially recommended install paths:
HACS works in a container, or is there specific functionality of it that doesn't work? I've been using it inside a Docker HA container on a Pi without issue.
S3 Glacier Deep Archive, $0.00099 per GB per month.
I have a ZFS-based NAS, and periodically do an incremental backup (zfs send) of the entire dataset, encrypt it with gpg, and pipe it straight up to S3 Deep Archive. Works like a charm.
The catch with S3 deep archive is if you want to get the data back... It's reliable, but you will pay quite a bit more. So as a last resort backup, it's perfect.
The very first time you do it, you will need to do a full backup (i.e. without the `-i <...>` option). Afterwards, subsequent backups can be done with `-i`, so only the incremental difference gets sent.
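A sketch of that full-then-incremental flow; pool, dataset, snapshot, gpg recipient, and bucket names are all placeholders:

```shell
# First time: full stream (no -i), encrypted and streamed straight to
# Deep Archive. --expected-size is needed for large streams piped from
# stdin so the CLI can size the multipart upload.
zfs snapshot tank/data@2024-01-01
zfs send tank/data@2024-01-01 \
  | gpg --encrypt --recipient backup@example.com \
  | aws s3 cp - s3://my-backups/tank_data/full_2024-01-01.zfs.gpg \
      --storage-class DEEP_ARCHIVE --expected-size 500000000000

# Later: incremental stream between the last backed-up snapshot and a
# new one, same pipeline
zfs snapshot tank/data@2024-02-01
zfs send -i tank/data@2024-01-01 tank/data@2024-02-01 \
  | gpg --encrypt --recipient backup@example.com \
  | aws s3 cp - s3://my-backups/tank_data/incr_2024-01-01_to_2024-02-01.zfs.gpg \
      --storage-class DEEP_ARCHIVE --expected-size 50000000000
```

Nothing ever touches local disk besides the snapshots themselves; the whole stream is pipe-to-pipe.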
I have a path/naming scheme for the .zfs.gpg files on S3 which includes the from/to snapshot names. This makes it possible to determine what the latest backed-up snapshot is (so the next send can be incremental against it). It's also needed when restoring, since the order of the streams matters.
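One possible version of such a scheme (my assumption, not necessarily the exact one described): encode the dataset and the from/to snapshot names into the object key, then parse the newest key to recover the snapshot to base the next incremental on.

```shell
# Build an S3 key like: <dataset with / replaced by _>/incr_<from>_to_<to>.zfs.gpg
make_key() {
  ds=$(printf '%s' "$1" | tr / _)
  printf '%s/incr_%s_to_%s.zfs.gpg\n' "$ds" "$2" "$3"
}

# Given the newest object key, recover the "to" snapshot name, so the
# next zfs send can use it as the -i base
latest_snapshot_from_key() {
  key=${1%.zfs.gpg}          # drop the suffix
  printf '%s\n' "${key##*_to_}"  # keep everything after the last "_to_"
}

make_key tank/data 2024-01-01 2024-02-01
# -> tank_data/incr_2024-01-01_to_2024-02-01.zfs.gpg
latest_snapshot_from_key tank_data/incr_2024-01-01_to_2024-02-01.zfs.gpg
# -> 2024-02-01
```

Listing the bucket prefix and sorting the keys then gives both the incremental base and the correct restore order for free.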
Ah gotcha, I haven't done full restore of my main dataset.
I've only verified with a smaller test dataset to validate the workflow on S3 Deep Archive (retrieval is $0.02/GB). I've done a full backup/restore with the zfs send/gpg/recv workflow successfully (to a non-AWS S3 destination), and have used S3 for quite a long time for work and personal use without issue, so personally I have high confidence in the entire workflow.
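The restore side, sketched with the same placeholder names. One Deep Archive wrinkle: objects must first be restored to regular S3 (which takes hours) before they can be downloaded.

```shell
# Step 1: ask S3 to thaw the object out of Deep Archive (Bulk tier is
# the cheapest; expect up to ~12-48 hours)
aws s3api restore-object \
  --bucket my-backups \
  --key tank_data/full_2024-01-01.zfs.gpg \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Bulk"}}'

# Step 2 (after the restore completes): stream back in the same order
# the streams were taken - full first, then each incremental in turn
aws s3 cp s3://my-backups/tank_data/full_2024-01-01.zfs.gpg - \
  | gpg --decrypt \
  | zfs receive tank/data-restored
```

Applying the incrementals is the same pipeline with the incremental keys, piped into `zfs receive` against the same target dataset.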
Nostalgia! This was the first (and only) Technic set I got as a kid. The most impressive part is the actual functional mechanical systems: all-wheel steering, all-wheel drive, differentials, a gearbox, suspension, etc.
VMware Tanzu is tightly coupling Kubernetes and modern app development workflows into the already widely used vSphere virtualization platform. There are multiple engineering roles, everything from k8s internals to networking, storage, virtualization, and everything in between. Looking for good all-around problem solvers; existing familiarity with k8s and Go is a plus, but not a prerequisite.
This is exactly what VMware Horizon VDI does to create desktop VMs very quickly. It's called Instant Clone (aka VM Fork), and has been around for several years now.
With that, you don't even need to do deduplication after the fact; it comes for free. When a new VM is forked, its memory and disk are copy-on-write from the source/parent VM.
> Unless they mean it'll instantly show a loading screen, I'll bet it doesn't.
You absolutely can. I am not sure what tech Microsoft is using, but a similar experience can be had with the VMware Horizon VDI product via something called Instant Clone, which is essentially forking an already booted-up desktop VM. On each ESXi hypervisor host, you have one of these parent/seed VMs booted up and ready in a frozen state; when a new desktop VM is needed, it's created from the seed/parent using copy-on-write memory/disk and runtime state in less than a second.