
I can understand the benefit of automating these things, but I think it would probably be better for people to set these things up manually first, at least to understand what each step is doing. Otherwise, people are trusting a rather core piece of infrastructure to a random Docker image found online.

Personally, I found that several aspects of this automation need tweaking:

* If you need IPv6 support, this config needs to be overhauled.

* The WireGuard config should have IPv6 addresses set to avoid potential leaks, even when IPv6 is disabled (see the sketch after this list).

* This setup would benefit from some DDNS mechanism, as most people do not have static IP addresses.

* Firefox is beginning to offer an HTTPS-only mode, so I would like to adjust lighttpd to work with that.

The list goes on.
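On the IPv6 leak point, here is a minimal sketch of a client-side config that routes v6 into the tunnel. The ULA prefix, addresses, endpoint, and keys are illustrative placeholders, not wirehole's actual values:

    [Interface]
    # give the client a v6 address alongside v4
    Address = 10.6.0.2/32, fd00:6::2/128
    PrivateKey = <client-private-key>
    DNS = 10.6.0.3

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    # ::/0 sends all IPv6 into the tunnel too, so v6 traffic
    # cannot leak out over the native interface
    AllowedIPs = 0.0.0.0/0, ::/0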



* I will update the IPv6 stuff

* Also the WireGuard config

* I will look into how I can allow the user to provide that information, as the IP is currently pulled from within the Docker container

* Noted on Firefox

Thanks for the detailed comment

EDIT:

Just provided instructions in the repo for how to configure DDNS: https://github.com/IAmStoxe/wirehole#configuring-for-dynamic...

Also modified it so only port 51820 is exposed, preventing any unintentional exposure.
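For reference, restricting the published ports in the compose file looks roughly like this (the service name here is illustrative, not the exact wirehole definition):

    services:
      wireguard:
        ports:
          # only the WireGuard UDP port is published to the host
          - "51820:51820/udp"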


Have you considered producing a patch for the FreedomBox folks? Getting it into Debian would make it easily available to lots of users.

https://salsa.debian.org/freedombox-team/freedombox/


I have not but that's only because I had never heard of it.

Will check it out


You can minimize a bit of the manual Docker installation by leveraging the convenience script that they maintain [0].

[0] https://get.docker.com/


I'd rather not pipe to bash from curl.


Interesting take, given that your project is mostly an installer whose quick start asks users to trust a compose file.


You think an unprivileged Docker container is more dangerous than a curl to bash with sudo inside? I beg to differ.

If you care to, please elaborate on why I would be better off piping to bash rather than installing per the documentation. I am honestly interested in your take.


Two things here, from my perspective...

First is that trusting my suggestion is no different than trusting the instructions in your repo. Both can equally harbor nefarious things. So, the question is: do I trust Docker's script over someone that was just on the front page of HN? Probably.

To address the technical side - there are a number of things that can be improved from a security point of view in your compose file:

1) Restrict any new privileges:

    security_opt:
      - no-new-privileges

2) Drop all capabilities, then selectively add back what is needed:

    cap_drop:
      - ALL
    cap_add:
      - NET_ADMIN

3) Limit CPU and memory:

    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M

4) Change the running users. It looks like all the containers are running as root, which is no different from the system user.

There are others, but... just because the container isn't explicitly 'privileged' doesn't mean it's operationally safe.
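For the curious, here is a rough sketch of how those four suggestions might land in a single compose service; the image, user IDs, and limits are illustrative assumptions, not wirehole's actual values:

    services:
      pihole:
        image: pihole/pihole:latest
        user: "1000:1000"           # 4) run as a non-root user
        security_opt:
          - no-new-privileges       # 1) block privilege escalation
        cap_drop:
          - ALL                     # 2) drop everything...
        cap_add:
          - NET_ADMIN               # ...then add back only what is needed
        deploy:
          resources:
            limits:                 # 3) cap CPU and memory
              cpus: '0.50'
              memory: 50M
            reservations:
              cpus: '0.25'
              memory: 20M

Note that Pi-hole in particular may genuinely need extra capabilities if it is serving DHCP, so cap_add is where the per-service judgment happens.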


I understand that it's better to have fewer dependencies. And I understand that a pipe-to-bash script can do more than a naive user would think. But the instructions at the top of get.docker.com no longer pipe directly to bash. You can inspect the downloaded shell script or compare hashes.
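For what it's worth, the download-then-inspect flow is only a few lines of shell:

    # fetch the script without executing it
    curl -fsSL https://get.docker.com -o get-docker.sh
    # read it before running anything
    less get-docker.sh
    sh get-docker.sh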


The problem is: where do we automate pulling the hash to verify against? I do not know of a source.
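One workaround, assuming you are willing to pin the hash of a copy you have reviewed yourself (the value below is a placeholder, not a published checksum):

    # pin the sha256 of a version you have personally audited
    EXPECTED_SHA256="<sha256-of-reviewed-script>"
    curl -fsSL https://get.docker.com -o get-docker.sh
    echo "${EXPECTED_SHA256}  get-docker.sh" | sha256sum -c - \
      && sh get-docker.sh

The obvious downside is that the pinned hash breaks whenever upstream updates the script, which is exactly the automation gap being described.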


I disagree that they should try setting these things up manually. Provided the automated installation is good enough, it's easier to pick apart a ready-made install to find out how it works than to climb the learning curve of troubleshooting pieces that don't quite work together yet while also learning each one's role in the overall stack.

I would have never gotten into email self-hosting if someone hadn't done the hard work of creating an all-in-one solution, because it would have taken more time than I would be willing to invest.


This software simply isn't a set-it-and-forget-it app like Microsoft Office, Skype, or other consumer software. It is more critical than a basic mail server because it is responsible for networking: if things like DHCP/DNS go down, access to the internet is at risk. For 99.9% of people out there, that is just going to be an unpleasant experience when things go wrong, and it could reasonably take down people's security systems, home automation, VOIP phones, etc.

I have seen Pi-hole alone go down because one of the gravity sources went down or because of a DHCP misconfiguration.

It involves setting up a VPN, and that by itself needs some monitoring/debugging to understand what to do when things inevitably go wrong, especially with regard to routing, setting up split VPNs, etc.

Speaking as someone who has all of this set up manually, there is a bunch of fine-tuning and fiddling that makes set-and-forget not particularly realistic. On my setup I have to run a custom Cloudflare script to keep DNS records updated. I need a custom lighttpd config to enable HTTPS with Let's Encrypt. My setup uses dnscrypt-proxy instead of Unbound (to enable ESNI + DoH for Firefox through my Pi-hole), and as such caching needs to be disabled. Just a bunch of random tweaks here and there that need to be thought out.

Take the example of simply adding multiple peers to WireGuard, something people would reasonably want to do with a Pi-hole/VPN. There are no instructions for how to do that with this project; a sketch of what it involves follows.
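For context, adding peers on the server side means appending one [Peer] block per client to wg0.conf and giving each client a unique tunnel address. A minimal sketch, with placeholder keys and an illustrative subnet:

    [Interface]
    Address = 10.6.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    # first client
    PublicKey = <client1-public-key>
    AllowedIPs = 10.6.0.2/32

    [Peer]
    # second client: a new keypair and the next free address
    PublicKey = <client2-public-key>
    AllowedIPs = 10.6.0.3/32

Each client then needs its own config with the matching private key, and the server needs a restart (or a 'wg syncconf') to pick up the change.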



