

  • Sometimes you need to understand the basics first. The points I listed are sysadmin 101. If you don’t understand these very basic concepts, there is no chance you will be able to keep any kind of server running, understand how it works, debug certificate problems, and so on. Once you’re comfortable with that? Sure, use something “simpler” (a.k.a. another abstraction layer); Caddy is nice. The same point was made in the past about Apache (“just use nginx, it’s simpler”). Meanwhile I still use Apache, but if needed I am able to configure any kind of web server, because it taught me the fundamentals.

    At some point we have to refuse the temptation to go the “easy” way when working with complex systems - IT and networking are complex. Just try the hard way first, read the docs, and only if it’s too complex/overwhelming/time-consuming go for a more “noob-friendly” solution (I mean, we’re on c/selfhosted; why not just buy a commercial NAS or use a hosted service instead? It’s easier). I use firewalld, but I learned the basics of iptables a while ago. I don’t build Apache from source when I need to upgrade, but I would know how to get 75% of the way there - the docs would teach me the rest.


  • By default, nginx serves the contents of the /var/www/html directory (a.k.a. the documentroot) regardless of which domain is used to access it. So you could build your static site using the tool of your choice (hugo, sphinx, jekyll, …), put your index.html and all other files directly under that directory, and access your server at http://ip_address to have your static site served like that.

    Step 2 is to automate the process of rebuilding your site and placing the files under the correct directory with the correct ownership and permissions. A basic shell script will do it.
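
    Something like this will do (a sketch, untested; the hugo command and the paths are assumptions to adapt to your generator and distro):

    #!/bin/sh
    # Rebuild the static site and deploy it to the nginx documentroot
    set -eu

    SRC=/home/me/mysite    # site sources (assumed path)
    DEST=/var/www/html     # nginx documentroot

    cd "$SRC"
    hugo                   # hugo builds into ./public by default
    rsync -a --delete public/ "$DEST"/
    chmod -R a+rX "$DEST"  # ensure the nginx user can read everything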

    Step 3 is to point your domain (a DNS record) at your server’s public IP address and forward public port 80 to your server’s port 80. From there you will be able to access the site from the internet at http://mydomain.org/.

    Step 4 is to configure nginx for proper virtualhost handling (that is, direct requests for mydomain.org to your site under the /var/www/html/ directory, and all other requests, like http://public_ip, to a default, blank virtualhost. You may as well use an empty /var/www/html for the default site and move your static site to a dedicated directory). This is not a strict requirement, but it will help in case you need to host multiple sites, it is the best practice, and it is a requirement for the following step.
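
    For example (a sketch assuming the Debian layout with sites-available/sites-enabled and a dedicated /var/www/mydomain.org directory; adjust paths to your distro):

    # /etc/nginx/sites-available/mydomain.org
    server {
        listen 80;
        server_name mydomain.org;
        root /var/www/mydomain.org;   # dedicated directory for the static site
        index index.html;
    }

    # Default catch-all: requests by IP or with an unknown Host header land here
    server {
        listen 80 default_server;
        server_name _;
        root /var/www/html;           # empty default documentroot
    }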

    Step 5 is to set up SSL/TLS certificates to serve your site at https://my_domain (HTTPS). Nowadays this is mostly done using an automatic certificate generation service such as Let’s Encrypt or any other ACME provider; certbot is the most well-known tool for this (but not necessarily the simplest).
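
    For example, with certbot’s nginx plugin (Debian package names; the plugin rewrites your virtualhost config for HTTPS and sets up automatic renewal):

    apt install certbot python3-certbot-nginx
    certbot --nginx -d mydomain.org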

    Step 6 is what you should have done at step 1: harden your server. Set up a firewall, fail2ban, SSH keys, and anything else you can find to make it harder for an attacker to gain write access to your server, or read access to places they shouldn’t be able to read.
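
    With firewalld, for instance, the minimal policy for this setup looks roughly like this (adjust the services to what you actually run):

    # Allow only SSH and web traffic; everything else stays blocked
    firewall-cmd --permanent --add-service=ssh
    firewall-cmd --permanent --add-service=http
    firewall-cmd --permanent --add-service=https
    firewall-cmd --reload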

    Step 7 is to destroy everything and do it again from scratch. You’ve documented or scripted all the steps, right?

    As for the question “how do I actually implement all this? Which config files, and what do I put in them?”, the answer is the same old one: RTFM. Yes, even the boring nginx docs, manpages and 1990s Linux stuff. Each step will bring its own challenges and teach you a few concepts, one at a time. Reading guides can still be a good start for a quick and dirty setup, and will at least show you what can be done. The first time you do this, it can take a few days or weeks. After a few months of practice you will be able to do all of it in less than 10 minutes.




    • step 1: use named volumes
    • step 2: stop your containers or just wait for them to crash/stop unnoticed for some reason
    • step 3: run docker system prune --all as one should do periodically to clean up the garbage Docker leaves on your system. Lose all your data (depending on your Docker version and flags - on older releases, or with the --volumes flag - this deletes even named volumes if they are not in use by a running container)
    • step 4: never use named or anonymous volumes again, use bind mounts

    The fact that you practically have to run docker system prune --all regularly to get rid of GBs of unused layers, test containers, etc., combined with the fact that it can delete explicitly named volumes, makes them too unsafe for my taste. Just use bind mounts.
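
    For reference, a bind mount in a compose file is just a host path on the left-hand side (a sketch; the image and paths are placeholders):

    # docker-compose.yml
    services:
      db:
        image: postgres:16
        volumes:
          # bind mount: the data lives in a plain host directory,
          # out of reach of docker system prune
          - ./data/postgres:/var/lib/postgresql/data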



  • allows my mail clients to connect via IMAP to view and search emails

    dovecot will be able to handle this part. This is what I use as a mail archive (once a year, I archive all mail from the previous year from various mailboxes to my self-hosted dovecot instance). I wrote this ansible role for it.

    downloads new emails via IMAP

    As others recommended, imapsync should be able to handle that part.
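
    A minimal invocation looks something like this (hosts, users and password files are placeholders; imapsync skips messages that already exist on the destination, so it is safe to run repeatedly, e.g. from cron):

    imapsync \
      --host1 imap.example.com    --user1 me@example.com --passfile1 /etc/imapsync/src.pass \
      --host2 archive.example.org --user2 me             --passfile2 /etc/imapsync/dst.pass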

    These tools are simple enough to install and manage (one package, one config file); Docker is not needed. If you really need them to fit into your Docker-based setup, build and maintain your own images.








  • You can definitely replace senders with correct mail addresses for relaying through SMTP servers that expect them (this is what I do):

    # /etc/msmtprc
    account default
    ...
    host smtp.gmail.com
    # Generate the envelope-from address automatically (user@domain)
    auto_from on
    auth on
    user myaddress
    password hunter2
    
    # Replace local recipients with addresses in the aliases file
    aliases /etc/aliases
    
    # /etc/aliases
    mailer-daemon: postmaster
    postmaster: root
    nobody: root
    hostmaster: root
    usenet: root
    news: root
    webmaster: root
    www: root
    ftp: root
    abuse: root
    noc: root
    security: root
    root: default
    www-data: root
    default: myaddress@gmail.com
    

    (the only thing I changed from the defaults in the aliases file is adding the last line)

    This makes it so that all/most system accounts likely to send mail are aliased to root, and root in turn is aliased to my email address (the one configured under host/user/password in msmtprc).

    Edit: I think it’s actually the auto_from option that interests you. Check the msmtp manpage.
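
    A quick way to test the whole chain, assuming the msmtp-mta package is installed so that msmtp provides the system sendmail interface:

    # "root" is rewritten to myaddress@gmail.com through the aliases file
    printf 'Subject: test\n\nhello from msmtp\n' | sendmail root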



  • Usually you would have a second DNS resolver configured in /etc/resolv.conf (or whatever name resolution config system you are using: resolvconf, systemd-resolved, etc.). The system will fall back to this resolver if the first one fails to respond (with glibc, fallback happens when a server times out or returns a server error, not on an NXDOMAIN answer; the exact order and fallback conditions vary depending on which system you use). This can be another dnsmasq instance, a public DNS resolver, your ISP’s resolver, etc. This allows at least basic DNS resolution to keep working until your dnsmasq instance comes back up.
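
    For example (a sketch; the addresses are placeholders for your dnsmasq host and a public fallback resolver):

    # /etc/resolv.conf
    nameserver 192.168.1.10      # local dnsmasq instance
    nameserver 9.9.9.9           # public fallback resolver
    # fail over faster than the defaults (5s timeout)
    options timeout:2 attempts:2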

    I would also add automatic monitoring for dnsmasq (check that the service/container is running, that a TCP connection to port 53 succeeds, or that DNS resolution works for a known domain, etc.).
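
    A trivial check along those lines, suitable for a cron job (the domain and server address are placeholders):

    #!/bin/sh
    # Alert by mail if the local dnsmasq stops answering queries
    dig +time=2 +tries=1 +short debian.org @192.168.1.10 > /dev/null || \
        echo "DNS resolution through dnsmasq failed" | mail -s "dnsmasq alert" root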



  • Not an answer, but still relevant: I actively avoid enabling unattended-upgrades for third-party repositories like Docker’s (or anything that is not an official Debian repository), because they don’t offer the same stability guarantees; I rely on other upgrade notification methods instead.
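
    On Debian this amounts to keeping only the official origins in the unattended-upgrades configuration, which is roughly what the defaults already do (a sketch of the relevant stanza):

    // /etc/apt/apt.conf.d/50unattended-upgrades
    Unattended-Upgrade::Origins-Pattern {
            "origin=Debian,codename=${distro_codename},label=Debian";
            "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
    };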

    how bad of an idea is this to run a DNS in docker and use it for the host and other containers?

    Personally, I would simply install dnsmasq directly on the host, because it is one apt install and a configuration file away. Keep it simple.
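
    Something like this (a minimal sketch; the listen address and upstream resolver are placeholders):

    apt install dnsmasq

    # /etc/dnsmasq.conf
    listen-address=127.0.0.1,192.168.1.10   # loopback + LAN address
    domain-needed                           # never forward plain hostnames
    bogus-priv                              # never forward reverse lookups for private ranges
    server=9.9.9.9                          # upstream resolver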