From 593fd242bfd4110d92b94511aedc250e27a3c664 Mon Sep 17 00:00:00 2001 From: anoma Date: Tue, 7 Jul 2015 12:37:42 +0100 Subject: [PATCH 01/20] Activate FAIL2BAN recidive jail Recidive can be thought of as FAIL2BAN checking itself. This setup will monitor the FAIL2BAN log and if 10 bans are seen within one day activate a week long ban and email the mail in a box admin that it has been applied . These bans survive FAIL2BAN service restarts so are much stronger which obviously means we need to be careful with them. Our current settings are relatively safe and definitely not easy to trigger by mistake e.g to activate a recidive IP jail by failed SSH logins a user would have to fail logging into SSH 6 times in 10 minutes, get banned, wait for the ban to expire and then repeat this process 9 further times within a 24 hour period. The default maxretry of 5 is much saner but that can be applied once users are happy with this jail. I have been running a stronger version of this for months and it does a very good job of ejecting persistent abusers. --- conf/fail2ban/jail.local | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/conf/fail2ban/jail.local b/conf/fail2ban/jail.local index 9ecb2095..f995f55a 100644 --- a/conf/fail2ban/jail.local +++ b/conf/fail2ban/jail.local @@ -13,3 +13,7 @@ enabled = true filter = dovecotimap findtime = 30 maxretry = 20 + +[recidive] +enabled = true +maxretry = 10 From 398a66dd4a45eb3f890334aa15b6c3a71a389194 Mon Sep 17 00:00:00 2001 From: Sheldon Rupp Date: Fri, 20 Nov 2015 20:46:28 +0100 Subject: [PATCH 02/20] Typo on 'weirdly' --- CHANGELOG.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 31726dc8..5fec01d8 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -12,7 +12,7 @@ Control panel: * Explanatory text for setting up secondary nameserver is added/fixed. * DNS checks now have a timeout in case a DNS server is not responding, so the checks don't stall indefinitely. -* Better messages if external DNS is used and, wierdly, custom secondary nameservers are set. +* Better messages if external DNS is used and, weirdly, custom secondary nameservers are set. System: From b32cb6229b5ca519f63603271078afc02169b4e8 Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Thu, 26 Nov 2015 14:20:59 +0000 Subject: [PATCH 03/20] install boto (py2) via the package manager, not pip (used by duplicity) --- setup/management.sh | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/setup/management.sh b/setup/management.sh index c4af7092..b69001a7 100755 --- a/setup/management.sh +++ b/setup/management.sh @@ -4,13 +4,14 @@ source setup/functions.sh echo "Installing Mail-in-a-Box system management daemon..." -# build-essential libssl-dev libffi-dev python3-dev: Required to pip install cryptography. -apt_install python3-flask links duplicity libyaml-dev python3-dnspython python3-dateutil \ - build-essential libssl-dev libffi-dev python3-dev python-pip -hide_output pip3 install --upgrade rtyaml "email_validator>=1.0.0" "idna>=2.0.0" "cryptography>=1.0.2" boto +# Switching python 2 boto to package manager's, not pypi's. +if [ -f /usr/local/lib/python2.7/dist-packages/boto/__init__.py ]; then hide_output pip uninstall -y boto; fi # duplicity uses python 2 so we need to use the python 2 package of boto -hide_output pip install --upgrade boto +# build-essential libssl-dev libffi-dev python3-dev: Required to pip install cryptography. 
+apt_install python3-flask links duplicity python-boto libyaml-dev python3-dnspython python3-dateutil \ + build-essential libssl-dev libffi-dev python3-dev python-pip +hide_output pip3 install --upgrade rtyaml "email_validator>=1.0.0" "idna>=2.0.0" "cryptography>=1.0.2" boto # email_validator is repeated in setup/questions.sh From 161d0961391d96f33fa6e877d28ebc2de9fcea5b Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Thu, 26 Nov 2015 14:34:07 +0000 Subject: [PATCH 04/20] add a way to dump backup status from the command line --- management/backup.py | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/management/backup.py b/management/backup.py index cd005010..aebfbbc7 100755 --- a/management/backup.py +++ b/management/backup.py @@ -437,6 +437,11 @@ if __name__ == "__main__": # are readable, and b) report if they are up to date. run_duplicity_verification() + elif sys.argv[-1] == "--status": + # Show backup status. + ret = backup_status(load_environment()) + print(rtyaml.dump(ret["backups"])) + else: # Perform a backup. Add --full to force a full backup rather than # possibly performing an incremental backup. From cf33be4596bf3fb784fbeaaea58a934b139ead0e Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Thu, 26 Nov 2015 14:47:49 +0000 Subject: [PATCH 05/20] fix boto 2 conflict on Google Compute Engine instances GCE installs some Python-2-only boto plugin that conflicts with boto running under Python 3. It gives a SyntaxError in /usr/share/google/boto/boto_plugins/compute_auth.py (https://github.com/GoogleCloudPlatform/compute-image-packages). Disabling boto's default configuration file prior to importing boto so that GCE's plugin is not loaded. See https://discourse.mailinabox.email/t/500-internal-server-error-for-admin/942. --- CHANGELOG.md | 1 + management/backup.py | 3 ++- management/daemon.py | 1 + management/utils.py | 8 ++++++++ 4 files changed, 12 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 5fec01d8..b7622478 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -22,6 +22,7 @@ System: * If ownCloud sends out email, it will use the box's administrative address now (admin@yourboxname). * Z-Push (Exchange/ActiveSync) logs now exclude warnings and are now rotated to save disk space. * Fix pip command that might have not installed all necessary Python packages. +* The control panel and backup would not work on Google Compute Engine because they install a conflicting boto package. 
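The fix, shown in the utils.py hunk below (fix_boto), relies on boto consulting the BOTO_CONFIG environment variable the first time it is imported: pointing it away from /etc/boto.cfg means GCE's Python-2-only plugins are never loaded. A minimal sketch of the same idea (the /etc/boto3.cfg path and the region listing are taken from the patch itself; everything else is illustrative):

    import os
    # Must be set before the first "import boto"; boto reads BOTO_CONFIG at import time.
    os.environ["BOTO_CONFIG"] = "/etc/boto3.cfg"
    import boto.s3
    # With GCE's Python-2-only plugin out of the way, this works under Python 3 again.
    print([(r.name, r.endpoint) for r in boto.s3.regions()])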
v0.14 (November 4, 2015) ------------------------ diff --git a/management/backup.py b/management/backup.py index aebfbbc7..63b8aa66 100755 --- a/management/backup.py +++ b/management/backup.py @@ -12,7 +12,7 @@ import os, os.path, shutil, glob, re, datetime import dateutil.parser, dateutil.relativedelta, dateutil.tz import rtyaml -from utils import exclusive_process, load_environment, shell, wait_for_service +from utils import exclusive_process, load_environment, shell, wait_for_service, fix_boto def backup_status(env): # Root folder @@ -326,6 +326,7 @@ def list_target_files(config): elif p.scheme == "s3": # match to a Region + fix_boto() # must call prior to importing boto import boto.s3 from boto.exception import BotoServerError for region in boto.s3.regions(): diff --git a/management/daemon.py b/management/daemon.py index 9b71b611..32bbcc5f 100755 --- a/management/daemon.py +++ b/management/daemon.py @@ -94,6 +94,7 @@ def index(): no_users_exist = (len(get_mail_users(env)) == 0) no_admins_exist = (len(get_admins(env)) == 0) + utils.fix_boto() # must call prior to importing boto import boto.s3 backup_s3_hosts = [(r.name, r.endpoint) for r in boto.s3.regions()] diff --git a/management/utils.py b/management/utils.py index 9e628fd1..b77b7482 100644 --- a/management/utils.py +++ b/management/utils.py @@ -245,6 +245,14 @@ def wait_for_service(port, public, env, timeout): return False time.sleep(min(timeout/4, 1)) +def fix_boto(): + # Google Compute Engine instances install some Python-2-only boto plugins that + # conflict with boto running under Python 3. Disable boto's default configuration + # file prior to importing boto so that GCE's plugin is not loaded: + import os + os.environ["BOTO_CONFIG"] = "/etc/boto3.cfg" + + if __name__ == "__main__": from dns_update import get_dns_domains from web_update import get_web_domains, get_default_www_redirects From c422543fdd52e13d9c0d93b621e153c697b85250 Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Sun, 29 Nov 2015 01:27:03 +0000 Subject: [PATCH 06/20] make the system SSL certificate a symlink so we never have to replace a certificate file, and flatten the directory structure of user-installed certificates --- management/web_update.py | 43 +++++++++++++++++++++------------------- setup/migrate.py | 26 ++++++++++++++++++++++++ setup/ssl.sh | 11 +++++++--- 3 files changed, 57 insertions(+), 23 deletions(-) diff --git a/management/web_update.py b/management/web_update.py index c9c93f60..18fd27f8 100644 --- a/management/web_update.py +++ b/management/web_update.py @@ -351,21 +351,18 @@ def install_cert(domain, ssl_cert, ssl_chain, env): return cert_status # Where to put it? - if domain == env['PRIMARY_HOSTNAME']: - ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem')) - else: - # Make a unique path for the certificate. - from status_checks import load_cert_chain, load_pem, get_certificate_domains - from cryptography.hazmat.primitives import hashes - from binascii import hexlify - cert = load_pem(load_cert_chain(fn)[0]) - all_domains, cn = get_certificate_domains(cert) - path = "%s-%s-%s" % ( - cn, # common name - cert.not_valid_after.date().isoformat().replace("-", ""), # expiration date - hexlify(cert.fingerprint(hashes.SHA256())).decode("ascii")[0:8], # fingerprint prefix - ) - ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', path, 'ssl_certificate.pem')) + # Make a unique path for the certificate. 
+ from status_checks import load_cert_chain, load_pem, get_certificate_domains + from cryptography.hazmat.primitives import hashes + from binascii import hexlify + cert = load_pem(load_cert_chain(fn)[0]) + all_domains, cn = get_certificate_domains(cert) + path = "%s-%s-%s.pem" % ( + cn, # common name + cert.not_valid_after.date().isoformat().replace("-", ""), # expiration date + hexlify(cert.fingerprint(hashes.SHA256())).decode("ascii")[0:8], # fingerprint prefix + ) + ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', path)) # Install the certificate. os.makedirs(os.path.dirname(ssl_certificate), exist_ok=True) @@ -373,17 +370,23 @@ def install_cert(domain, ssl_cert, ssl_chain, env): ret = ["OK"] - # When updating the cert for PRIMARY_HOSTNAME, also update DNS because it is - # used in the DANE TLSA record and restart postfix and dovecot which use - # that certificate. + # When updating the cert for PRIMARY_HOSTNAME, symlink it from the system + # certificate path, which is hard-coded for various purposes, and then + # update DNS (because of the DANE TLSA record), postfix, and dovecot, + # which all use the file. if domain == env['PRIMARY_HOSTNAME']: - ret.append( do_dns_update(env) ) + # Update symlink. + system_ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem')) + os.unlink(system_ssl_certificate) + os.symlink(ssl_certificate, system_ssl_certificate) + # Update DNS & restart postfix and dovecot so they pick up the new file. + ret.append( do_dns_update(env) ) shell('check_call', ["/usr/sbin/service", "postfix", "restart"]) shell('check_call', ["/usr/sbin/service", "dovecot", "restart"]) ret.append("mail services restarted") - # Kick nginx so it sees the cert. + # Update the web configuration so nginx picks up the new certificate file. ret.append( do_web_update(env) ) return "\n".join(ret) diff --git a/setup/migrate.py b/setup/migrate.py index 6acd0edc..45f748bc 100755 --- a/setup/migrate.py +++ b/setup/migrate.py @@ -111,6 +111,32 @@ def migration_9(env): db = os.path.join(env["STORAGE_ROOT"], 'mail/users.sqlite') shell("check_call", ["sqlite3", db, "ALTER TABLE aliases ADD permitted_senders TEXT"]) +def migration_10(env): + # Clean up the SSL certificates directory. + + # Move the primary certificate to a new name and then + # symlink it to the system certificate path. + import datetime + system_certificate = os.path.join(env["STORAGE_ROOT"], 'ssl/ssl_certificate.pem') + if not os.path.islink(system_certificate): # not already a symlink + new_path = os.path.join(env["STORAGE_ROOT"], 'ssl', env['PRIMARY_HOSTNAME'] + "-" + datetime.datetime.now().date().isoformat().replace("-", "") + ".pem") + print("Renamed", system_certificate, "to", new_path, "and created a symlink for the original location.") + shutil.move(system_certificate, new_path) + os.symlink(new_path, system_certificate) + + # Flatten the directory structure. For any directory + # that contains a single file named ssl_certificate.pem, + # move the file out and name it the same as the directory, + # and remove the directory. + for sslcert in glob.glob(os.path.join( env["STORAGE_ROOT"], 'ssl/*/ssl_certificate.pem' )): + d = os.path.dirname(sslcert) + if len(os.listdir(d)) == 1: + # This certificate is the only file in that directory. 
+ newname = os.path.join(env["STORAGE_ROOT"], 'ssl', os.path.basename(d) + '.pem') + if not os.path.exists(newname): + shutil.move(sslcert, newname) + os.rmdir(d) + def get_current_migration(): ver = 0 while True: diff --git a/setup/ssl.sh b/setup/ssl.sh index 5d6143f5..fa29a211 100755 --- a/setup/ssl.sh +++ b/setup/ssl.sh @@ -77,12 +77,17 @@ if [ ! -f $STORAGE_ROOT/ssl/ssl_certificate.pem ]; then -sha256 -subj "/C=$CSR_COUNTRY/ST=/L=/O=/CN=$PRIMARY_HOSTNAME" # Generate the self-signed certificate. + CERT=$STORAGE_ROOT/ssl/$PRIMARY_HOSTNAME-selfsigned-$(date --rfc-3339=date | sed s/-//g).pem hide_output \ openssl x509 -req -days 365 \ - -in $CSR -signkey $STORAGE_ROOT/ssl/ssl_private_key.pem -out $STORAGE_ROOT/ssl/ssl_certificate.pem + -in $CSR -signkey $STORAGE_ROOT/ssl/ssl_private_key.pem -out $CERT - # Delete the certificate signing request because it has no other purpose. - rm -f $CSR + # Delete the certificate signing request because it has no other purpose. + rm -f $CSR + + # Symlink the certificate into the system certificate path, so system services + # can find it. + ln -s $CERT $STORAGE_ROOT/ssl/ssl_certificate.pem fi # Generate some Diffie-Hellman cipher bits. From 766b98c4adbd98a2cbb125695fe0d261b1ee1957 Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Sun, 29 Nov 2015 13:59:22 +0000 Subject: [PATCH 07/20] refactor: move SSL-related management functions into a new module ssl_certificates.py --- management/daemon.py | 7 +- management/ssl_certificates.py | 382 +++++++++++++++++++++++++++++++++ management/status_checks.py | 180 +--------------- management/web_update.py | 211 +----------------- 4 files changed, 392 insertions(+), 388 deletions(-) create mode 100644 management/ssl_certificates.py diff --git a/management/daemon.py b/management/daemon.py index 32bbcc5f..bcb9633c 100755 --- a/management/daemon.py +++ b/management/daemon.py @@ -319,17 +319,20 @@ def dns_get_dump(): @app.route('/ssl/csr/', methods=['POST']) @authorized_personnel_only def ssl_get_csr(domain): - from web_update import create_csr + from ssl_certificates import create_csr ssl_private_key = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_private_key.pem')) return create_csr(domain, ssl_private_key, env) @app.route('/ssl/install', methods=['POST']) @authorized_personnel_only def ssl_install_cert(): - from web_update import install_cert + from web_update import get_web_domains, get_default_www_redirects + from ssl_certificates import install_cert domain = request.form.get('domain') ssl_cert = request.form.get('cert') ssl_chain = request.form.get('chain') + if domain not in get_web_domains(env) + get_default_www_redirects(env): + return "Invalid domain name." return install_cert(domain, ssl_cert, ssl_chain, env) # WEB diff --git a/management/ssl_certificates.py b/management/ssl_certificates.py new file mode 100644 index 00000000..f9b0855f --- /dev/null +++ b/management/ssl_certificates.py @@ -0,0 +1,382 @@ +# Utilities for installing and selecting SSL certificates. + +import os, os.path, re, shutil + +from utils import shell + +def get_ssl_certificates(env): + # Scan all of the installed SSL certificates and map every domain + # that the certificates are good for to the best certificate for + # the domain. + + from cryptography.hazmat.primitives.asymmetric.rsa import RSAPrivateKey + from cryptography.x509 import Certificate + + # The certificates are all stored here: + ssl_root = os.path.join(env["STORAGE_ROOT"], 'ssl') + + # List all of the files in the SSL directory and one level deep. 
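Concretely, after migration_10 above and the new per-certificate naming scheme, the directory scanned here ends up roughly like this (all names illustrative; STORAGE_ROOT is typically /home/user-data):

    /home/user-data/ssl/ssl_private_key.pem
    /home/user-data/ssl/ssl_certificate.pem -> /home/user-data/ssl/box.example.com-20151129.pem
    /home/user-data/ssl/box.example.com-20151129.pem
    /home/user-data/ssl/example.com-20160301-89abcdef.pem

The get_file_list helper below scans this directory, plus one level of subdirectories for anything the migration did not flatten.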
+ def get_file_list(): + for fn in os.listdir(ssl_root): + fn = os.path.join(ssl_root, fn) + if os.path.isfile(fn): + yield fn + elif os.path.isdir(fn): + for fn1 in os.listdir(fn): + fn1 = os.path.join(fn, fn1) + if os.path.isfile(fn1): + yield fn1 + + # Remember stuff. + private_keys = { } + certificates = [ ] + + # Scan each of the files to find private keys and certificates. + # We must load all of the private keys first before processing + # certificates so that we can check that we have a private key + # available before using a certificate. + for fn in get_file_list(): + try: + pem = load_pem(load_cert_chain(fn)[0]) + except ValueError: + # Not a valid PEM format for a PEM type we care about. + continue + + # Remember where we got this object. + pem._filename = fn + + # Is it a private key? + if isinstance(pem, RSAPrivateKey): + private_keys[pem.public_key().public_numbers()] = pem + + # Is it a certificate? + if isinstance(pem, Certificate): + certificates.append(pem) + + # Process the certificates. + domains = { } + for cert in certificates: + # What domains is this certificate good for? + cert_domains, primary_domain = get_certificate_domains(cert) + cert._primary_domain = primary_domain + + # Is there a private key file for this certificate? + private_key = private_keys.get(cert.public_key().public_numbers()) + if not private_key: + continue + cert._private_key = private_key + + # Add this cert to the list of certs usable for the domains. + for domain in cert_domains: + domains.setdefault(domain, []).append(cert) + + # Sort the certificates to prefer good ones. + import datetime + now = datetime.datetime.utcnow() + ret = { } + for domain, cert_list in domains.items(): + cert_list.sort(key = lambda cert : ( + # must be valid NOW + cert.not_valid_before <= now <= cert.not_valid_after, + + # prefer one that is not self-signed + cert.issuer != cert.subject, + + # prefer one with the expiration furthest into the future so + # that we can easily rotate to new certs as we get them + cert.not_valid_after, + + # in case a certificate is installed in multiple paths, + # prefer the... lexicographically last one? + cert._filename, + + ), reverse=True) + cert = cert_list.pop(0) + ret[domain] = { + "private-key": cert._private_key._filename, + "certificate": cert._filename, + "primary-domain": cert._primary_domain, + } + + return ret + +def get_domain_ssl_files(domain, ssl_certificates, env, allow_missing_cert=False): + # Get the default paths. + ssl_private_key = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_private_key.pem')) + ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem')) + + if domain == env['PRIMARY_HOSTNAME']: + # The primary domain must use the server certificate because + # it is hard-coded in some service configuration files. + return ssl_private_key, ssl_certificate, None + + wildcard_domain = re.sub("^[^\.]+", "*", domain) + + if domain in ssl_certificates: + cert_info = ssl_certificates[domain] + cert_type = "multi-domain" + elif wildcard_domain in ssl_certificates: + cert_info = ssl_certificates[wildcard_domain] + cert_type = "wildcard" + elif not allow_missing_cert: + # No certificate is available for this domain! Return default files. + ssl_via = "Using certificate for %s." % env['PRIMARY_HOSTNAME'] + return ssl_private_key, ssl_certificate, ssl_via + else: + # No certificate is available - and warn appropriately. 
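As a sketch of the data flow (domains and paths assumed, not taken from the patch), get_ssl_certificates() returns a mapping like

    # {
    #   "example.com": {
    #       "private-key":    "/home/user-data/ssl/ssl_private_key.pem",
    #       "certificate":    "/home/user-data/ssl/example.com-20160301-89abcdef.pem",
    #       "primary-domain": "example.com",
    #   },
    #   "www.example.com": { ... the same entry when the certificate also covers www ... },
    # }

and get_domain_ssl_files("www.example.com", ssl_certificates, env) then hands back that entry's key and certificate paths together with a hint such as "Using same multi-domain certificate as for example.com."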
+ return None + + # 'via' is a hint to the user about which certificate is in use for the domain + if cert_info['certificate'] == os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem'): + # Using the server certificate. + via = "Using same %s certificate as for %s." % (cert_type, env['PRIMARY_HOSTNAME']) + elif cert_info['primary-domain'] != domain and cert_info['primary-domain'] in ssl_certificates and cert_info == ssl_certificates[cert_info['primary-domain']]: + via = "Using same %s certificate as for %s." % (cert_type, cert_info['primary-domain']) + else: + via = None # don't show a hint - show expiration info instead + + return cert_info['private-key'], cert_info['certificate'], via + +def create_csr(domain, ssl_key, env): + return shell("check_output", [ + "openssl", "req", "-new", + "-key", ssl_key, + "-sha256", + "-subj", "/C=%s/ST=/L=/O=/CN=%s" % (env["CSR_COUNTRY"], domain)]) + +def install_cert(domain, ssl_cert, ssl_chain, env): + # Write the combined cert+chain to a temporary path and validate that it is OK. + # The certificate always goes above the chain. + import tempfile + fd, fn = tempfile.mkstemp('.pem') + os.write(fd, (ssl_cert + '\n' + ssl_chain).encode("ascii")) + os.close(fd) + + # Do validation on the certificate before installing it. + ssl_private_key = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_private_key.pem')) + cert_status, cert_status_details = check_certificate(domain, fn, ssl_private_key) + if cert_status != "OK": + if cert_status == "SELF-SIGNED": + cert_status = "This is a self-signed certificate. I can't install that." + os.unlink(fn) + if cert_status_details is not None: + cert_status += " " + cert_status_details + return cert_status + + # Where to put it? + # Make a unique path for the certificate. + from cryptography.hazmat.primitives import hashes + from binascii import hexlify + cert = load_pem(load_cert_chain(fn)[0]) + all_domains, cn = get_certificate_domains(cert) + path = "%s-%s-%s.pem" % ( + cn, # common name + cert.not_valid_after.date().isoformat().replace("-", ""), # expiration date + hexlify(cert.fingerprint(hashes.SHA256())).decode("ascii")[0:8], # fingerprint prefix + ) + ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', path)) + + # Install the certificate. + os.makedirs(os.path.dirname(ssl_certificate), exist_ok=True) + shutil.move(fn, ssl_certificate) + + ret = ["OK"] + + # When updating the cert for PRIMARY_HOSTNAME, symlink it from the system + # certificate path, which is hard-coded for various purposes, and then + # update DNS (because of the DANE TLSA record), postfix, and dovecot, + # which all use the file. + if domain == env['PRIMARY_HOSTNAME']: + # Update symlink. + system_ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem')) + os.unlink(system_ssl_certificate) + os.symlink(ssl_certificate, system_ssl_certificate) + + # Update DNS & restart postfix and dovecot so they pick up the new file. + from dns_update import do_dns_update + ret.append( do_dns_update(env) ) + shell('check_call', ["/usr/sbin/service", "postfix", "restart"]) + shell('check_call', ["/usr/sbin/service", "dovecot", "restart"]) + ret.append("mail services restarted") + + # Update the web configuration so nginx picks up the new certificate file. 
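install_cert() here is what backs the control panel's POST /ssl/install endpoint (see the daemon.py hunk above, which now also rejects unknown domains). A hypothetical invocation, assuming the management daemon is reverse-proxied at /admin as usual and authenticating as an admin user:

    curl -s --user 'admin@example.com:password' \
         --data-urlencode domain=example.com \
         --data-urlencode cert@example.com.crt \
         --data-urlencode chain@intermediates.pem \
         https://box.example.com/admin/ssl/install

curl's name@file form reads each PEM file and posts its contents URL-encoded as the cert and chain form fields, which is what ssl_install_cert() expects.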
+ from web_update import do_web_update + ret.append( do_web_update(env) ) + return "\n".join(ret) + + +def check_certificate(domain, ssl_certificate, ssl_private_key, warn_if_expiring_soon=True, rounded_time=False, just_check_domain=False): + # Check that the ssl_certificate & ssl_private_key files are good + # for the provided domain. + + from cryptography.hazmat.primitives.asymmetric.rsa import RSAPrivateKey + from cryptography.x509 import Certificate + + # The ssl_certificate file may contain a chain of certificates. We'll + # need to split that up before we can pass anything to openssl or + # parse them in Python. Parse it with the cryptography library. + try: + ssl_cert_chain = load_cert_chain(ssl_certificate) + cert = load_pem(ssl_cert_chain[0]) + if not isinstance(cert, Certificate): raise ValueError("This is not a certificate file.") + except ValueError as e: + return ("There is a problem with the certificate file: %s" % str(e), None) + + # First check that the domain name is one of the names allowed by + # the certificate. + if domain is not None: + certificate_names, cert_primary_name = get_certificate_domains(cert) + + # Check that the domain appears among the acceptable names, or a wildcard + # form of the domain name (which is a stricter check than the specs but + # should work in normal cases). + wildcard_domain = re.sub("^[^\.]+", "*", domain) + if domain not in certificate_names and wildcard_domain not in certificate_names: + return ("The certificate is for the wrong domain name. It is for %s." + % ", ".join(sorted(certificate_names)), None) + + # Second, check that the certificate matches the private key. + if ssl_private_key is not None: + try: + priv_key = load_pem(open(ssl_private_key, 'rb').read()) + except ValueError as e: + return ("The private key file %s is not a private key file: %s" % (ssl_private_key, str(e)), None) + + if not isinstance(priv_key, RSAPrivateKey): + return ("The private key file %s is not a private key file." % ssl_private_key, None) + + if priv_key.public_key().public_numbers() != cert.public_key().public_numbers(): + return ("The certificate does not correspond to the private key at %s." % ssl_private_key, None) + + # We could also use the openssl command line tool to get the modulus + # listed in each file. The output of each command below looks like "Modulus=XXXXX". + # $ openssl rsa -inform PEM -noout -modulus -in ssl_private_key + # $ openssl x509 -in ssl_certificate -noout -modulus + + # Third, check if the certificate is self-signed. Return a special flag string. + if cert.issuer == cert.subject: + return ("SELF-SIGNED", None) + + # When selecting which certificate to use for non-primary domains, we check if the primary + # certificate or a www-parent-domain certificate is good for the domain. There's no need + # to run extra checks beyond this point. + if just_check_domain: + return ("OK", None) + + # Check that the certificate hasn't expired. The datetimes returned by the + # certificate are 'naive' and in UTC. We need to get the current time in UTC. + import datetime + now = datetime.datetime.utcnow() + if not(cert.not_valid_before <= now <= cert.not_valid_after): + return ("The certificate has expired or is not yet valid. It is valid from %s to %s." % (cert.not_valid_before, cert.not_valid_after), None) + + # Next validate that the certificate is valid. 
This checks whether the certificate + # is self-signed, that the chain of trust makes sense, that it is signed by a CA + # that Ubuntu has installed on this machine's list of CAs, and I think that it hasn't + # expired. + + # The certificate chain has to be passed separately and is given via STDIN. + # This command returns a non-zero exit status in most cases, so trap errors. + retcode, verifyoutput = shell('check_output', [ + "openssl", + "verify", "-verbose", + "-purpose", "sslserver", "-policy_check",] + + ([] if len(ssl_cert_chain) == 1 else ["-untrusted", "/proc/self/fd/0"]) + + [ssl_certificate], + input=b"\n\n".join(ssl_cert_chain[1:]), + trap=True) + + if "self signed" in verifyoutput: + # Certificate is self-signed. Probably we detected this above. + return ("SELF-SIGNED", None) + + elif retcode != 0: + if "unable to get local issuer certificate" in verifyoutput: + return ("The certificate is missing an intermediate chain or the intermediate chain is incorrect or incomplete. (%s)" % verifyoutput, None) + + # There is some unknown problem. Return the `openssl verify` raw output. + return ("There is a problem with the SSL certificate.", verifyoutput.strip()) + + else: + # `openssl verify` returned a zero exit status so the cert is currently + # good. + + # But is it expiring soon? + cert_expiration_date = cert.not_valid_after + ndays = (cert_expiration_date-now).days + if not rounded_time or ndays < 7: + expiry_info = "The certificate expires in %d days on %s." % (ndays, cert_expiration_date.strftime("%x")) + elif ndays <= 14: + expiry_info = "The certificate expires in less than two weeks, on %s." % cert_expiration_date.strftime("%x") + elif ndays <= 31: + expiry_info = "The certificate expires in less than a month, on %s." % cert_expiration_date.strftime("%x") + else: + expiry_info = "The certificate expires on %s." % cert_expiration_date.strftime("%x") + + if ndays <= 31 and warn_if_expiring_soon: + return ("The certificate is expiring soon: " + expiry_info, None) + + # Return the special OK code. + return ("OK", expiry_info) + +def load_cert_chain(pemfile): + # A certificate .pem file may contain a chain of certificates. + # Load the file and split them apart. + re_pem = rb"(-+BEGIN (?:.+)-+[\r\n]+(?:[A-Za-z0-9+/=]{1,64}[\r\n]+)+-+END (?:.+)-+[\r\n]+)" + with open(pemfile, "rb") as f: + pem = f.read() + b"\n" # ensure trailing newline + pemblocks = re.findall(re_pem, pem) + if len(pemblocks) == 0: + raise ValueError("File does not contain valid PEM data.") + return pemblocks + +def load_pem(pem): + # Parse a "---BEGIN .... END---" PEM string and return a Python object for it + # using classes from the cryptography package. 
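The chain check in check_certificate() above reduces to a single openssl invocation; roughly, as a sketch with the leaf certificate in cert.pem and the intermediates concatenated into chain.pem:

    openssl verify -verbose -purpose sslserver -policy_check \
        -untrusted /proc/self/fd/0 cert.pem < chain.pem

Passing /proc/self/fd/0 as the -untrusted file lets the intermediates arrive on stdin without another temporary file; when the PEM contains no intermediates the -untrusted argument is omitted entirely.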
+ from cryptography.x509 import load_pem_x509_certificate + from cryptography.hazmat.primitives import serialization + from cryptography.hazmat.backends import default_backend + pem_type = re.match(b"-+BEGIN (.*?)-+[\r\n]", pem) + if pem_type is None: + raise ValueError("File is not a valid PEM-formatted file.") + pem_type = pem_type.group(1) + if pem_type in (b"RSA PRIVATE KEY", b"PRIVATE KEY"): + return serialization.load_pem_private_key(pem, password=None, backend=default_backend()) + if pem_type == b"CERTIFICATE": + return load_pem_x509_certificate(pem, default_backend()) + raise ValueError("Unsupported PEM object type: " + pem_type.decode("ascii", "replace")) + +def get_certificate_domains(cert): + from cryptography.x509 import DNSName, ExtensionNotFound, OID_COMMON_NAME, OID_SUBJECT_ALTERNATIVE_NAME + import idna + + names = set() + cn = None + + # The domain may be found in the Subject Common Name (CN). This comes back as an IDNA (ASCII) + # string, which is the format we store domains in - so good. + try: + cn = cert.subject.get_attributes_for_oid(OID_COMMON_NAME)[0].value + names.add(cn) + except IndexError: + # No common name? Certificate is probably generated incorrectly. + # But we'll let it error-out when it doesn't find the domain. + pass + + # ... or be one of the Subject Alternative Names. The cryptography library handily IDNA-decodes + # the names for us. We must encode back to ASCII, but wildcard certificates can't pass through + # IDNA encoding/decoding so we must special-case. See https://github.com/pyca/cryptography/pull/2071. + def idna_decode_dns_name(dns_name): + if dns_name.startswith("*."): + return "*." + idna.encode(dns_name[2:]).decode('ascii') + else: + return idna.encode(dns_name).decode('ascii') + + try: + sans = cert.extensions.get_extension_for_oid(OID_SUBJECT_ALTERNATIVE_NAME).value.get_values_for_type(DNSName) + for san in sans: + names.add(idna_decode_dns_name(san)) + except ExtensionNotFound: + pass + + return names, cn diff --git a/management/status_checks.py b/management/status_checks.py index 4b6947e0..b9b17f60 100755 --- a/management/status_checks.py +++ b/management/status_checks.py @@ -4,8 +4,6 @@ # SSL certificates have been signed, etc., and if not tells the user # what to do next. -__ALL__ = ['check_certificate'] - import sys, os, os.path, re, subprocess, datetime, multiprocessing.pool import dns.reversename, dns.resolver @@ -13,7 +11,8 @@ import dateutil.parser, dateutil.tz import idna from dns_update import get_dns_zones, build_tlsa_record, get_custom_dns_config, get_secondary_dns, get_custom_dns_record -from web_update import get_web_domains, get_default_www_redirects, get_ssl_certificates, get_domain_ssl_files, get_domains_with_a_records +from web_update import get_web_domains, get_default_www_redirects, get_domains_with_a_records +from ssl_certificates import get_ssl_certificates, get_domain_ssl_files, check_certificate from mailconfig import get_mail_domains, get_mail_aliases from utils import shell, sort_domains, load_env_vars_from_file, load_settings @@ -669,181 +668,6 @@ def check_ssl_cert(domain, rounded_time, ssl_certificates, env, output): output.print_line(cert_status_details) output.print_line("") -def check_certificate(domain, ssl_certificate, ssl_private_key, warn_if_expiring_soon=True, rounded_time=False, just_check_domain=False): - # Check that the ssl_certificate & ssl_private_key files are good - # for the provided domain. 
- - from cryptography.hazmat.primitives.asymmetric.rsa import RSAPrivateKey - from cryptography.x509 import Certificate - - # The ssl_certificate file may contain a chain of certificates. We'll - # need to split that up before we can pass anything to openssl or - # parse them in Python. Parse it with the cryptography library. - try: - ssl_cert_chain = load_cert_chain(ssl_certificate) - cert = load_pem(ssl_cert_chain[0]) - if not isinstance(cert, Certificate): raise ValueError("This is not a certificate file.") - except ValueError as e: - return ("There is a problem with the certificate file: %s" % str(e), None) - - # First check that the domain name is one of the names allowed by - # the certificate. - if domain is not None: - certificate_names, cert_primary_name = get_certificate_domains(cert) - - # Check that the domain appears among the acceptable names, or a wildcard - # form of the domain name (which is a stricter check than the specs but - # should work in normal cases). - wildcard_domain = re.sub("^[^\.]+", "*", domain) - if domain not in certificate_names and wildcard_domain not in certificate_names: - return ("The certificate is for the wrong domain name. It is for %s." - % ", ".join(sorted(certificate_names)), None) - - # Second, check that the certificate matches the private key. - if ssl_private_key is not None: - try: - priv_key = load_pem(open(ssl_private_key, 'rb').read()) - except ValueError as e: - return ("The private key file %s is not a private key file: %s" % (ssl_private_key, str(e)), None) - - if not isinstance(priv_key, RSAPrivateKey): - return ("The private key file %s is not a private key file." % ssl_private_key, None) - - if priv_key.public_key().public_numbers() != cert.public_key().public_numbers(): - return ("The certificate does not correspond to the private key at %s." % ssl_private_key, None) - - # We could also use the openssl command line tool to get the modulus - # listed in each file. The output of each command below looks like "Modulus=XXXXX". - # $ openssl rsa -inform PEM -noout -modulus -in ssl_private_key - # $ openssl x509 -in ssl_certificate -noout -modulus - - # Third, check if the certificate is self-signed. Return a special flag string. - if cert.issuer == cert.subject: - return ("SELF-SIGNED", None) - - # When selecting which certificate to use for non-primary domains, we check if the primary - # certificate or a www-parent-domain certificate is good for the domain. There's no need - # to run extra checks beyond this point. - if just_check_domain: - return ("OK", None) - - # Check that the certificate hasn't expired. The datetimes returned by the - # certificate are 'naive' and in UTC. We need to get the current time in UTC. - now = datetime.datetime.utcnow() - if not(cert.not_valid_before <= now <= cert.not_valid_after): - return ("The certificate has expired or is not yet valid. It is valid from %s to %s." % (cert.not_valid_before, cert.not_valid_after), None) - - # Next validate that the certificate is valid. This checks whether the certificate - # is self-signed, that the chain of trust makes sense, that it is signed by a CA - # that Ubuntu has installed on this machine's list of CAs, and I think that it hasn't - # expired. - - # The certificate chain has to be passed separately and is given via STDIN. - # This command returns a non-zero exit status in most cases, so trap errors. 
- retcode, verifyoutput = shell('check_output', [ - "openssl", - "verify", "-verbose", - "-purpose", "sslserver", "-policy_check",] - + ([] if len(ssl_cert_chain) == 1 else ["-untrusted", "/proc/self/fd/0"]) - + [ssl_certificate], - input=b"\n\n".join(ssl_cert_chain[1:]), - trap=True) - - if "self signed" in verifyoutput: - # Certificate is self-signed. Probably we detected this above. - return ("SELF-SIGNED", None) - - elif retcode != 0: - if "unable to get local issuer certificate" in verifyoutput: - return ("The certificate is missing an intermediate chain or the intermediate chain is incorrect or incomplete. (%s)" % verifyoutput, None) - - # There is some unknown problem. Return the `openssl verify` raw output. - return ("There is a problem with the SSL certificate.", verifyoutput.strip()) - - else: - # `openssl verify` returned a zero exit status so the cert is currently - # good. - - # But is it expiring soon? - cert_expiration_date = cert.not_valid_after - ndays = (cert_expiration_date-now).days - if not rounded_time or ndays < 7: - expiry_info = "The certificate expires in %d days on %s." % (ndays, cert_expiration_date.strftime("%x")) - elif ndays <= 14: - expiry_info = "The certificate expires in less than two weeks, on %s." % cert_expiration_date.strftime("%x") - elif ndays <= 31: - expiry_info = "The certificate expires in less than a month, on %s." % cert_expiration_date.strftime("%x") - else: - expiry_info = "The certificate expires on %s." % cert_expiration_date.strftime("%x") - - if ndays <= 31 and warn_if_expiring_soon: - return ("The certificate is expiring soon: " + expiry_info, None) - - # Return the special OK code. - return ("OK", expiry_info) - -def load_cert_chain(pemfile): - # A certificate .pem file may contain a chain of certificates. - # Load the file and split them apart. - re_pem = rb"(-+BEGIN (?:.+)-+[\r\n]+(?:[A-Za-z0-9+/=]{1,64}[\r\n]+)+-+END (?:.+)-+[\r\n]+)" - with open(pemfile, "rb") as f: - pem = f.read() + b"\n" # ensure trailing newline - pemblocks = re.findall(re_pem, pem) - if len(pemblocks) == 0: - raise ValueError("File does not contain valid PEM data.") - return pemblocks - -def load_pem(pem): - # Parse a "---BEGIN .... END---" PEM string and return a Python object for it - # using classes from the cryptography package. - from cryptography.x509 import load_pem_x509_certificate - from cryptography.hazmat.primitives import serialization - from cryptography.hazmat.backends import default_backend - pem_type = re.match(b"-+BEGIN (.*?)-+[\r\n]", pem) - if pem_type is None: - raise ValueError("File is not a valid PEM-formatted file.") - pem_type = pem_type.group(1) - if pem_type in (b"RSA PRIVATE KEY", b"PRIVATE KEY"): - return serialization.load_pem_private_key(pem, password=None, backend=default_backend()) - if pem_type == b"CERTIFICATE": - return load_pem_x509_certificate(pem, default_backend()) - raise ValueError("Unsupported PEM object type: " + pem_type.decode("ascii", "replace")) - -def get_certificate_domains(cert): - from cryptography.x509 import DNSName, ExtensionNotFound, OID_COMMON_NAME, OID_SUBJECT_ALTERNATIVE_NAME - import idna - - names = set() - cn = None - - # The domain may be found in the Subject Common Name (CN). This comes back as an IDNA (ASCII) - # string, which is the format we store domains in - so good. - try: - cn = cert.subject.get_attributes_for_oid(OID_COMMON_NAME)[0].value - names.add(cn) - except IndexError: - # No common name? Certificate is probably generated incorrectly. 
- # But we'll let it error-out when it doesn't find the domain. - pass - - # ... or be one of the Subject Alternative Names. The cryptography library handily IDNA-decodes - # the names for us. We must encode back to ASCII, but wildcard certificates can't pass through - # IDNA encoding/decoding so we must special-case. See https://github.com/pyca/cryptography/pull/2071. - def idna_decode_dns_name(dns_name): - if dns_name.startswith("*."): - return "*." + idna.encode(dns_name[2:]).decode('ascii') - else: - return idna.encode(dns_name).decode('ascii') - - try: - sans = cert.extensions.get_extension_for_oid(OID_SUBJECT_ALTERNATIVE_NAME).value.get_values_for_type(DNSName) - for san in sans: - names.add(idna_decode_dns_name(san)) - except ExtensionNotFound: - pass - - return names, cn - _apt_updates = None def list_apt_updates(apt_update=True): # See if we have this information cached recently. diff --git a/management/web_update.py b/management/web_update.py index 18fd27f8..92a56ff9 100644 --- a/management/web_update.py +++ b/management/web_update.py @@ -2,10 +2,11 @@ # domains for which a mail account has been set up. ######################################################################## -import os, os.path, shutil, re, tempfile, rtyaml +import os.path, re, rtyaml from mailconfig import get_mail_domains -from dns_update import get_custom_dns_config, do_dns_update, get_dns_zones +from dns_update import get_custom_dns_config, get_dns_zones +from ssl_certificates import get_ssl_certificates, get_domain_ssl_files, check_certificate from utils import shell, safe_domain_name, sort_domains def get_web_domains(env): @@ -185,217 +186,11 @@ def get_web_root(domain, env, test_exists=True): if os.path.exists(root) or not test_exists: break return root -def get_ssl_certificates(env): - # Scan all of the installed SSL certificates and map every domain - # that the certificates are good for to the best certificate for - # the domain. - - from cryptography.hazmat.primitives.asymmetric.rsa import RSAPrivateKey - from cryptography.x509 import Certificate - - # The certificates are all stored here: - ssl_root = os.path.join(env["STORAGE_ROOT"], 'ssl') - - # List all of the files in the SSL directory and one level deep. - def get_file_list(): - for fn in os.listdir(ssl_root): - fn = os.path.join(ssl_root, fn) - if os.path.isfile(fn): - yield fn - elif os.path.isdir(fn): - for fn1 in os.listdir(fn): - fn1 = os.path.join(fn, fn1) - if os.path.isfile(fn1): - yield fn1 - - # Remember stuff. - private_keys = { } - certificates = [ ] - - # Scan each of the files to find private keys and certificates. - # We must load all of the private keys first before processing - # certificates so that we can check that we have a private key - # available before using a certificate. - from status_checks import load_cert_chain, load_pem - for fn in get_file_list(): - try: - pem = load_pem(load_cert_chain(fn)[0]) - except ValueError: - # Not a valid PEM format for a PEM type we care about. - continue - - # Remember where we got this object. - pem._filename = fn - - # Is it a private key? - if isinstance(pem, RSAPrivateKey): - private_keys[pem.public_key().public_numbers()] = pem - - # Is it a certificate? - if isinstance(pem, Certificate): - certificates.append(pem) - - # Process the certificates. - domains = { } - from status_checks import get_certificate_domains - for cert in certificates: - # What domains is this certificate good for? 
- cert_domains, primary_domain = get_certificate_domains(cert) - cert._primary_domain = primary_domain - - # Is there a private key file for this certificate? - private_key = private_keys.get(cert.public_key().public_numbers()) - if not private_key: - continue - cert._private_key = private_key - - # Add this cert to the list of certs usable for the domains. - for domain in cert_domains: - domains.setdefault(domain, []).append(cert) - - # Sort the certificates to prefer good ones. - import datetime - now = datetime.datetime.utcnow() - ret = { } - for domain, cert_list in domains.items(): - cert_list.sort(key = lambda cert : ( - # must be valid NOW - cert.not_valid_before <= now <= cert.not_valid_after, - - # prefer one that is not self-signed - cert.issuer != cert.subject, - - # prefer one with the expiration furthest into the future so - # that we can easily rotate to new certs as we get them - cert.not_valid_after, - - # in case a certificate is installed in multiple paths, - # prefer the... lexicographically last one? - cert._filename, - - ), reverse=True) - cert = cert_list.pop(0) - ret[domain] = { - "private-key": cert._private_key._filename, - "certificate": cert._filename, - "primary-domain": cert._primary_domain, - } - - return ret - -def get_domain_ssl_files(domain, ssl_certificates, env, allow_missing_cert=False): - # Get the default paths. - ssl_private_key = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_private_key.pem')) - ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem')) - - if domain == env['PRIMARY_HOSTNAME']: - # The primary domain must use the server certificate because - # it is hard-coded in some service configuration files. - return ssl_private_key, ssl_certificate, None - - wildcard_domain = re.sub("^[^\.]+", "*", domain) - - if domain in ssl_certificates: - cert_info = ssl_certificates[domain] - cert_type = "multi-domain" - elif wildcard_domain in ssl_certificates: - cert_info = ssl_certificates[wildcard_domain] - cert_type = "wildcard" - elif not allow_missing_cert: - # No certificate is available for this domain! Return default files. - ssl_via = "Using certificate for %s." % env['PRIMARY_HOSTNAME'] - return ssl_private_key, ssl_certificate, ssl_via - else: - # No certificate is available - and warn appropriately. - return None - - # 'via' is a hint to the user about which certificate is in use for the domain - if cert_info['certificate'] == os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem'): - # Using the server certificate. - via = "Using same %s certificate as for %s." % (cert_type, env['PRIMARY_HOSTNAME']) - elif cert_info['primary-domain'] != domain and cert_info['primary-domain'] in ssl_certificates and cert_info == ssl_certificates[cert_info['primary-domain']]: - via = "Using same %s certificate as for %s." % (cert_type, cert_info['primary-domain']) - else: - via = None # don't show a hint - show expiration info instead - - return cert_info['private-key'], cert_info['certificate'], via - -def create_csr(domain, ssl_key, env): - return shell("check_output", [ - "openssl", "req", "-new", - "-key", ssl_key, - "-sha256", - "-subj", "/C=%s/ST=/L=/O=/CN=%s" % (env["CSR_COUNTRY"], domain)]) - -def install_cert(domain, ssl_cert, ssl_chain, env): - if domain not in get_web_domains(env) + get_default_www_redirects(env): - return "Invalid domain name." - - # Write the combined cert+chain to a temporary path and validate that it is OK. - # The certificate always goes above the chain. 
- import tempfile, os - fd, fn = tempfile.mkstemp('.pem') - os.write(fd, (ssl_cert + '\n' + ssl_chain).encode("ascii")) - os.close(fd) - - # Do validation on the certificate before installing it. - from status_checks import check_certificate - ssl_private_key = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_private_key.pem')) - cert_status, cert_status_details = check_certificate(domain, fn, ssl_private_key) - if cert_status != "OK": - if cert_status == "SELF-SIGNED": - cert_status = "This is a self-signed certificate. I can't install that." - os.unlink(fn) - if cert_status_details is not None: - cert_status += " " + cert_status_details - return cert_status - - # Where to put it? - # Make a unique path for the certificate. - from status_checks import load_cert_chain, load_pem, get_certificate_domains - from cryptography.hazmat.primitives import hashes - from binascii import hexlify - cert = load_pem(load_cert_chain(fn)[0]) - all_domains, cn = get_certificate_domains(cert) - path = "%s-%s-%s.pem" % ( - cn, # common name - cert.not_valid_after.date().isoformat().replace("-", ""), # expiration date - hexlify(cert.fingerprint(hashes.SHA256())).decode("ascii")[0:8], # fingerprint prefix - ) - ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', path)) - - # Install the certificate. - os.makedirs(os.path.dirname(ssl_certificate), exist_ok=True) - shutil.move(fn, ssl_certificate) - - ret = ["OK"] - - # When updating the cert for PRIMARY_HOSTNAME, symlink it from the system - # certificate path, which is hard-coded for various purposes, and then - # update DNS (because of the DANE TLSA record), postfix, and dovecot, - # which all use the file. - if domain == env['PRIMARY_HOSTNAME']: - # Update symlink. - system_ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem')) - os.unlink(system_ssl_certificate) - os.symlink(ssl_certificate, system_ssl_certificate) - - # Update DNS & restart postfix and dovecot so they pick up the new file. - ret.append( do_dns_update(env) ) - shell('check_call', ["/usr/sbin/service", "postfix", "restart"]) - shell('check_call', ["/usr/sbin/service", "dovecot", "restart"]) - ret.append("mail services restarted") - - # Update the web configuration so nginx picks up the new certificate file. 
- ret.append( do_web_update(env) ) - return "\n".join(ret) - def get_web_domains_info(env): has_root_proxy_or_redirect = get_web_domains_with_root_overrides(env) # for the SSL config panel, get cert status def check_cert(domain): - from status_checks import check_certificate ssl_certificates = get_ssl_certificates(env) x = get_domain_ssl_files(domain, ssl_certificates, env, allow_missing_cert=True) if x is None: return ("danger", "No Certificate Installed") From be9efe0273d9019e0c561c7f9996db11f45c17fc Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Sun, 29 Nov 2015 14:04:37 +0000 Subject: [PATCH 08/20] ensure malformed ssl certificate can't cause it to be written to an arbitrary path --- management/ssl_certificates.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/management/ssl_certificates.py b/management/ssl_certificates.py index f9b0855f..0365251c 100644 --- a/management/ssl_certificates.py +++ b/management/ssl_certificates.py @@ -2,7 +2,7 @@ import os, os.path, re, shutil -from utils import shell +from utils import shell, safe_domain_name def get_ssl_certificates(env): # Scan all of the installed SSL certificates and map every domain @@ -170,7 +170,7 @@ def install_cert(domain, ssl_cert, ssl_chain, env): cert = load_pem(load_cert_chain(fn)[0]) all_domains, cn = get_certificate_domains(cert) path = "%s-%s-%s.pem" % ( - cn, # common name + safe_domain_name(cn), # common name, which should be filename safe because it is IDNA-encoded, but in case of a malformed cert make sure it's ok to use as a filename cert.not_valid_after.date().isoformat().replace("-", ""), # expiration date hexlify(cert.fingerprint(hashes.SHA256())).decode("ascii")[0:8], # fingerprint prefix ) From 808522d89556eae16a190e36f649b079424bbb7c Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Sun, 29 Nov 2015 14:43:12 +0000 Subject: [PATCH 09/20] merge functions get_web_domains and get_default_www_redirects --- management/daemon.py | 4 +-- management/dns_update.py | 8 ++--- management/status_checks.py | 4 +-- management/utils.py | 6 ++-- management/web_update.py | 71 ++++++++++++++++++------------------- 5 files changed, 44 insertions(+), 49 deletions(-) diff --git a/management/daemon.py b/management/daemon.py index bcb9633c..4f56a767 100755 --- a/management/daemon.py +++ b/management/daemon.py @@ -326,12 +326,12 @@ def ssl_get_csr(domain): @app.route('/ssl/install', methods=['POST']) @authorized_personnel_only def ssl_install_cert(): - from web_update import get_web_domains, get_default_www_redirects + from web_update import get_web_domains from ssl_certificates import install_cert domain = request.form.get('domain') ssl_cert = request.form.get('cert') ssl_chain = request.form.get('chain') - if domain not in get_web_domains(env) + get_default_www_redirects(env): + if domain not in get_web_domains(env): return "Invalid domain name." return install_cert(domain, ssl_cert, ssl_chain, env) diff --git a/management/dns_update.py b/management/dns_update.py index 195f8bc1..1e0cba9f 100755 --- a/management/dns_update.py +++ b/management/dns_update.py @@ -57,8 +57,8 @@ def do_dns_update(env, force=False): # Custom records to add to zones. additional_records = list(get_custom_dns_config(env)) - from web_update import get_default_www_redirects - www_redirect_domains = get_default_www_redirects(env) + from web_update import get_web_domains + www_redirect_domains = set(get_web_domains(env)) - set(get_web_domains(env, include_www_redirects=False)) # Write zone files. 
os.makedirs('/etc/nsd/zones', exist_ok=True) @@ -907,8 +907,8 @@ def build_recommended_dns(env): domains = get_dns_domains(env) zonefiles = get_dns_zones(env) additional_records = list(get_custom_dns_config(env)) - from web_update import get_default_www_redirects - www_redirect_domains = get_default_www_redirects(env) + from web_update import get_web_domains + www_redirect_domains = set(get_web_domains(env)) - set(get_web_domains(env, include_www_redirects=False)) for domain, zonefile in zonefiles: records = build_zone(domain, domains, additional_records, www_redirect_domains, env) diff --git a/management/status_checks.py b/management/status_checks.py index b9b17f60..a8a24edf 100755 --- a/management/status_checks.py +++ b/management/status_checks.py @@ -11,7 +11,7 @@ import dateutil.parser, dateutil.tz import idna from dns_update import get_dns_zones, build_tlsa_record, get_custom_dns_config, get_secondary_dns, get_custom_dns_record -from web_update import get_web_domains, get_default_www_redirects, get_domains_with_a_records +from web_update import get_web_domains, get_domains_with_a_records from ssl_certificates import get_ssl_certificates, get_domain_ssl_files, check_certificate from mailconfig import get_mail_domains, get_mail_aliases @@ -240,7 +240,7 @@ def run_domain_checks(rounded_time, env, output, pool): dns_domains = set(dns_zonefiles) # Get the list of domains we serve HTTPS for. - web_domains = set(get_web_domains(env) + get_default_www_redirects(env)) + web_domains = set(get_web_domains(env)) domains_to_check = mail_domains | dns_domains | web_domains diff --git a/management/utils.py b/management/utils.py index b77b7482..d590abb5 100644 --- a/management/utils.py +++ b/management/utils.py @@ -254,10 +254,8 @@ def fix_boto(): if __name__ == "__main__": - from dns_update import get_dns_domains - from web_update import get_web_domains, get_default_www_redirects + from web_update import get_web_domains env = load_environment() - domains = get_dns_domains(env) | set(get_web_domains(env) + get_default_www_redirects(env)) - domains = sort_domains(domains, env) + domains = get_web_domains(env) for domain in domains: print(domain) diff --git a/management/web_update.py b/management/web_update.py index 92a56ff9..9757dc84 100644 --- a/management/web_update.py +++ b/management/web_update.py @@ -9,20 +9,29 @@ from dns_update import get_custom_dns_config, get_dns_zones from ssl_certificates import get_ssl_certificates, get_domain_ssl_files, check_certificate from utils import shell, safe_domain_name, sort_domains -def get_web_domains(env): - # What domains should we serve websites for? +def get_web_domains(env, include_www_redirects=True): + # What domains should we serve HTTP(S) for? domains = set() - # At the least it's the PRIMARY_HOSTNAME so we can serve webmail - # as well as Z-Push for Exchange ActiveSync. - domains.add(env['PRIMARY_HOSTNAME']) - - # Also serve web for all mail domains so that we might at least + # Serve web for all mail domains so that we might at least # provide auto-discover of email settings, and also a static website - # if the user wants to make one. These will require an SSL cert. + # if the user wants to make one. + domains |= get_mail_domains(env) + + if include_www_redirects: + # Add 'www.' subdomains that we want to provide default redirects + # to the main domain for. We'll add 'www.' to any DNS zones, i.e. + # the topmost of each domain we serve. + domains |= set('www.' 
+ zone for zone, zonefile in get_dns_zones(env)) + # ...Unless the domain has an A/AAAA record that maps it to a different # IP address than this box. Remove those domains from our list. - domains |= (get_mail_domains(env) - get_domains_with_a_records(env)) + domains -= get_domains_with_a_records(env) + + # Ensure the PRIMARY_HOSTNAME is in the list so we can serve webmail + # as well as Z-Push for Exchange ActiveSync. This can't be removed + # by a custom A/AAAA record and is never a 'www.' redirect. + domains.add(env['PRIMARY_HOSTNAME']) # Sort the list so the nginx conf gets written in a stable order. domains = sort_domains(domains, env) @@ -51,15 +60,6 @@ def get_web_domains_with_root_overrides(env): root_overrides[domain] = (type, value) return root_overrides - -def get_default_www_redirects(env): - # Returns a list of www subdomains that we want to provide default redirects - # for, i.e. any www's that aren't domains the user has actually configured - # to serve for real. Which would be unusual. - web_domains = set(get_web_domains(env)) - www_domains = set('www.' + zone for zone, zonefile in get_dns_zones(env)) - return sort_domains(www_domains - web_domains - get_domains_with_a_records(env), env) - def do_web_update(env): # Pre-load what SSL certificates we will use for each domain. ssl_certificates = get_ssl_certificates(env) @@ -78,16 +78,20 @@ def do_web_update(env): # Add configuration all other web domains. has_root_proxy_or_redirect = get_web_domains_with_root_overrides(env) + web_domains_not_redirect = get_web_domains(env, include_www_redirects=False) for domain in get_web_domains(env): - if domain == env['PRIMARY_HOSTNAME']: continue # handled above - if domain not in has_root_proxy_or_redirect: - nginx_conf += make_domain_config(domain, [template0, template1], ssl_certificates, env) + if domain == env['PRIMARY_HOSTNAME']: + # PRIMARY_HOSTNAME is handled above. + continue + if domain in web_domains_not_redirect: + # This is a regular domain. + if domain not in has_root_proxy_or_redirect: + nginx_conf += make_domain_config(domain, [template0, template1], ssl_certificates, env) + else: + nginx_conf += make_domain_config(domain, [template0], ssl_certificates, env) else: - nginx_conf += make_domain_config(domain, [template0], ssl_certificates, env) - - # Add default www redirects. - for domain in get_default_www_redirects(env): - nginx_conf += make_domain_config(domain, [template0, template3], ssl_certificates, env) + # Add default 'www.' redirect. + nginx_conf += make_domain_config(domain, [template0, template3], ssl_certificates, env) # Did the file change? If not, don't bother writing & restarting nginx. 
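To illustrate the merged behaviour (example domains assumed): with mail domains box.example.com and example.com, a single DNS zone example.com, and no custom A/AAAA records pointing elsewhere, the two calls cover these domains:

    get_web_domains(env)                              -> box.example.com, example.com, www.example.com
    get_web_domains(env, include_www_redirects=False) -> box.example.com, example.com

so callers such as dns_update.py and get_web_domains_info() recover the old get_default_www_redirects() list as the set difference, here just www.example.com.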
nginx_conf_fn = "/etc/nginx/conf.d/local.conf" @@ -187,7 +191,8 @@ def get_web_root(domain, env, test_exists=True): return root def get_web_domains_info(env): - has_root_proxy_or_redirect = get_web_domains_with_root_overrides(env) + www_redirects = set(get_web_domains(env)) - set(get_web_domains(env, include_www_redirects=False)) + has_root_proxy_or_redirect = set(get_web_domains_with_root_overrides(env)) # for the SSL config panel, get cert status def check_cert(domain): @@ -213,15 +218,7 @@ def get_web_domains_info(env): "root": get_web_root(domain, env), "custom_root": get_web_root(domain, env, test_exists=False), "ssl_certificate": check_cert(domain), - "static_enabled": domain not in has_root_proxy_or_redirect, + "static_enabled": domain not in (www_redirects | has_root_proxy_or_redirect), } for domain in get_web_domains(env) - ] + \ - [ - { - "domain": domain, - "ssl_certificate": check_cert(domain), - "static_enabled": False, - } - for domain in get_default_www_redirects(env) - ] + ] \ No newline at end of file From 7a93d219ef598d018676237c0eed7663123c4c07 Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Sun, 29 Nov 2015 14:59:35 +0000 Subject: [PATCH 10/20] some cleanup in dns_update.py --- management/dns_update.py | 102 +++++++++------------------------------ 1 file changed, 24 insertions(+), 78 deletions(-) diff --git a/management/dns_update.py b/management/dns_update.py index 1e0cba9f..1ec88607 100755 --- a/management/dns_update.py +++ b/management/dns_update.py @@ -51,21 +51,13 @@ def get_dns_zones(env): return zonefiles def do_dns_update(env, force=False): - # What domains (and their zone filenames) should we build? - domains = get_dns_domains(env) - zonefiles = get_dns_zones(env) - - # Custom records to add to zones. - additional_records = list(get_custom_dns_config(env)) - from web_update import get_web_domains - www_redirect_domains = set(get_web_domains(env)) - set(get_web_domains(env, include_www_redirects=False)) - # Write zone files. os.makedirs('/etc/nsd/zones', exist_ok=True) + zonefiles = [] updated_domains = [] - for i, (domain, zonefile) in enumerate(zonefiles): - # Build the records to put in the zone. - records = build_zone(domain, domains, additional_records, www_redirect_domains, env) + for (domain, zonefile, records) in build_zones(env): + # The final set of files will be signed. + zonefiles.append((domain, zonefile + ".signed")) # See if the zone has changed, and if so update the serial number # and write the zone file. @@ -73,14 +65,6 @@ def do_dns_update(env, force=False): # Zone was not updated. There were no changes. continue - # If this is a .justtesting.email domain, then post the update. - try: - justtestingdotemail(domain, records) - except: - # Hmm. Might be a network issue. If we stop now, will we end - # up in an inconsistent state? Let's just continue. - pass - # Mark that we just updated this domain. updated_domains.append(domain) @@ -95,14 +79,8 @@ def do_dns_update(env, force=False): # and return True so we get a chance to re-sign it. sign_zone(domain, zonefile, env) - # Now that all zones are signed (some might not have changed and so didn't - # just get signed now, but were before) update the zone filename so nsd.conf - # uses the signed file. - for i in range(len(zonefiles)): - zonefiles[i][1] += ".signed" - # Write the main nsd.conf file. 
- if write_nsd_conf(zonefiles, additional_records, env): + if write_nsd_conf(zonefiles, list(get_custom_dns_config(env)), env): # Make sure updated_domains contains *something* if we wrote an updated # nsd.conf so that we know to restart nsd. if len(updated_domains) == 0: @@ -112,8 +90,8 @@ def do_dns_update(env, force=False): if len(updated_domains) > 0: shell('check_call', ["/usr/sbin/service", "nsd", "restart"]) - # Write the OpenDKIM configuration tables. - if write_opendkim_tables(domains, env): + # Write the OpenDKIM configuration tables for all of the domains. + if write_opendkim_tables([domain for domain, zonefile in zonefiles], env): # Settings changed. Kick opendkim. shell('check_call', ["/usr/sbin/service", "opendkim", "restart"]) if len(updated_domains) == 0: @@ -132,6 +110,22 @@ def do_dns_update(env, force=False): ######################################################################## +def build_zones(env): + # What domains (and their zone filenames) should we build? + domains = get_dns_domains(env) + zonefiles = get_dns_zones(env) + + # Custom records to add to zones. + additional_records = list(get_custom_dns_config(env)) + from web_update import get_web_domains + www_redirect_domains = set(get_web_domains(env)) - set(get_web_domains(env, include_www_redirects=False)) + + # Build DNS records for each zone. + for domain, zonefile in zonefiles: + # Build the records to put in the zone. + records = build_zone(domain, domains, additional_records, www_redirect_domains, env) + yield (domain, zonefile, records) + def build_zone(domain, all_domains, additional_records, www_redirect_domains, env, is_zone=True): records = [] @@ -861,57 +855,9 @@ def get_custom_dns_record(custom_dns, qname, rtype): ######################################################################## -def justtestingdotemail(domain, records): - # If the domain is a subdomain of justtesting.email, which we own, - # automatically populate the zone where it is set up on dns4e.com. - # Ideally if dns4e.com supported NS records we would just have it - # delegate DNS to us, but instead we will populate the whole zone. - - import subprocess, json, urllib.parse - - if not domain.endswith(".justtesting.email"): - return - - for subdomain, querytype, value, explanation in records: - if querytype in ("NS",): continue - if subdomain in ("www", "ns1", "ns2"): continue # don't do unnecessary things - - if subdomain == None: - subdomain = domain - else: - subdomain = subdomain + "." + domain - - if querytype == "TXT": - # nsd requires parentheses around txt records with multiple parts, - # but DNS4E requires there be no parentheses; also it goes into - # nsd with a newline and a tab, which we replace with a space here - value = re.sub("^\s*\(\s*([\w\W]*)\)", r"\1", value) - value = re.sub("\s+", " ", value) - else: - continue - - print("Updating DNS for %s/%s..." 
% (subdomain, querytype)) - resp = json.loads(subprocess.check_output([ - "curl", - "-s", - "https://api.dns4e.com/v7/%s/%s" % (urllib.parse.quote(subdomain), querytype.lower()), - "--user", "2ddbd8e88ed1495fa0ec:A97TDJV26CVUJS6hqAs0CKnhj4HvjTM7MwAAg8xb", - "--data", "record=%s" % urllib.parse.quote(value), - ]).decode("utf8")) - print("\t...", resp.get("message", "?")) - -######################################################################## - def build_recommended_dns(env): ret = [] - domains = get_dns_domains(env) - zonefiles = get_dns_zones(env) - additional_records = list(get_custom_dns_config(env)) - from web_update import get_web_domains - www_redirect_domains = set(get_web_domains(env)) - set(get_web_domains(env, include_www_redirects=False)) - for domain, zonefile in zonefiles: - records = build_zone(domain, domains, additional_records, www_redirect_domains, env) - + for (domain, zonefile, records) in build_zones(env): # remove records that we don't dislay records = [r for r in records if r[3] is not False] From 5bbe9f9a049cf47f065c5069ea16ac744397d63c Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Mon, 7 Dec 2015 08:37:00 -0500 Subject: [PATCH 11/20] status checks: when ipv6 is enabled, check that services are accessible over ipv6 too --- CHANGELOG.md | 1 + management/status_checks.py | 77 ++++++++++++++++++++++--------------- 2 files changed, 47 insertions(+), 31 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index b7622478..fde29af0 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -10,6 +10,7 @@ Mail: Control panel: +* When IPv6 is enabled, check that system services are accessible over IPv6 too. * Explanatory text for setting up secondary nameserver is added/fixed. * DNS checks now have a timeout in case a DNS server is not responding, so the checks don't stall indefinitely. * Better messages if external DNS is used and, weirdly, custom secondary nameservers are set. diff --git a/management/status_checks.py b/management/status_checks.py index a8a24edf..68f4a57b 100755 --- a/management/status_checks.py +++ b/management/status_checks.py @@ -103,45 +103,60 @@ def check_service(i, service, env): # Skip check (no port, e.g. no sshd). return (i, None, None, None) - import socket output = BufferedOutput() running = False fatal = False - s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) - s.settimeout(1) - try: - try: - s.connect(( - "127.0.0.1" if not service["public"] else env['PUBLIC_IP'], - service["port"])) - running = True - except OSError as e1: - if service["public"] and service["port"] != 53: - # For public services (except DNS), try the private IP as a fallback. - s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM) - s1.settimeout(1) - try: - s1.connect(("127.0.0.1", service["port"])) - output.print_error("%s is running but is not publicly accessible at %s:%d (%s)." % (service['name'], env['PUBLIC_IP'], service['port'], str(e1))) - except: - raise e1 - finally: - s1.close() - else: - raise - except OSError as e: - output.print_error("%s is not running (%s; port %d)." % (service['name'], str(e), service['port'])) + # Helper function to make a connection to the service, since we try + # up to three ways (localhost, IPv4 address, IPv6 address). + def try_connect(ip): + # Connect to the given IP address on the service's port with a one-second timeout. 
+ import socket + s = socket.socket(socket.AF_INET if ":" not in ip else socket.AF_INET6, socket.SOCK_STREAM) + s.settimeout(1) + try: + s.connect((ip, service["port"])) + return True + except OSError as e: + # timed out or some other odd error + return False + finally: + s.close() + + if service["public"]: + # Service should be publicly accessible. + if try_connect(env["PUBLIC_IP"]): + # IPv4 ok. + if not env.get("PUBLIC_IPV6") or service.get("ipv6") is False or try_connect(env["PUBLIC_IPV6"]): + # No IPv6, or service isn't meant to run on IPv6, or IPv6 is good. + running = True + + # IPv4 ok but IPv6 failed. Try the PRIVATE_IPV6 address to see if the service is bound to the interface. + elif service["port"] != 53 and try_connect(env["PRIVATE_IPV6"]): + output.print_error("%s is running (and available over IPv4 and the local IPv6 address), but it is not publicly accessible at %s:%d." % (service['name'], env['PUBLIC_IP'], service['port'])) + else: + output.print_error("%s is running and available over IPv4 but is not accessible over IPv6 at %s port %d." % (service['name'], env['PUBLIC_IPV6'], service['port'])) + + # IPv4 failed. Try the private IP to see if the service is running but not accessible (except DNS because a different service runs on the private IP). + elif service["port"] != 53 and try_connect("127.0.0.1"): + output.print_error("%s is running but is not publicly accessible at %s:%d." % (service['name'], env['PUBLIC_IP'], service['port'])) + else: + output.print_error("%s is not running (port %d)." % (service['name'], service['port'])) # Why is nginx not running? - if service["port"] in (80, 443): + if not running and service["port"] in (80, 443): output.print_line(shell('check_output', ['nginx', '-t'], capture_stderr=True, trap=True)[1].strip()) - # Flag if local DNS is not running. - if service["port"] == 53 and service["public"] == False: - fatal = True - finally: - s.close() + else: + # Service should be running locally. + if try_connect("127.0.0.1"): + running = True + else: + output.print_error("%s is not running (port %d)." % (service['name'], service['port'])) + + # Flag if local DNS is not running. + if not running and service["port"] == 53 and service["public"] == False: + fatal = True return (i, running, fatal, output) From 20e11bbab329b2f02af508c82841d8e412136f9e Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Mon, 7 Dec 2015 08:45:59 -0500 Subject: [PATCH 12/20] fail2ban: whitelist our machine's public ip address so status checks dont cause bans of the machine itself --- conf/fail2ban/jail.local | 6 ++++++ setup/system.sh | 5 ++++- 2 files changed, 10 insertions(+), 1 deletion(-) diff --git a/conf/fail2ban/jail.local b/conf/fail2ban/jail.local index 682ae0d8..6e7b77e2 100644 --- a/conf/fail2ban/jail.local +++ b/conf/fail2ban/jail.local @@ -1,5 +1,11 @@ # Fail2Ban configuration file for Mail-in-a-Box +[DEFAULT] +# Whitelist our own IP addresses. 127.0.0.1/8 is the default. But our status checks +# ping services over the public interface so we should whitelist that address of +# ours too. The string is substituted during installation. 
+ignoreip = 127.0.0.1/8 PUBLIC_IP + # JAILS [ssh] diff --git a/setup/system.sh b/setup/system.sh index 125a4fe0..c00fcff4 100755 --- a/setup/system.sh +++ b/setup/system.sh @@ -1,3 +1,4 @@ +source /etc/mailinabox.conf source setup/functions.sh # load our functions # Basic System Configuration @@ -198,7 +199,9 @@ restart_service resolvconf # ### Fail2Ban Service # Configure the Fail2Ban installation to prevent dumb bruce-force attacks against dovecot, postfix and ssh -cp conf/fail2ban/jail.local /etc/fail2ban/jail.local +cat conf/fail2ban/jail.local \ + | sed "s/PUBLIC_IP/$PUBLIC_IP/g" \ + > /etc/fail2ban/jail.local cp conf/fail2ban/dovecotimap.conf /etc/fail2ban/filter.d/dovecotimap.conf restart_service fail2ban From fdad83a1bbb04a35e85b610820e8f724a8aa9a39 Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Mon, 7 Dec 2015 08:58:48 -0500 Subject: [PATCH 13/20] status checks: check IPv6 reverse DNS --- CHANGELOG.md | 2 +- management/status_checks.py | 19 ++++++++++++------- 2 files changed, 13 insertions(+), 8 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index fde29af0..2a862b88 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -10,7 +10,7 @@ Mail: Control panel: -* When IPv6 is enabled, check that system services are accessible over IPv6 too. +* When IPv6 is enabled, check that system services are accessible over IPv6 too and that reverse DNS is setup correctly for the IPv6 address. * Explanatory text for setting up secondary nameserver is added/fixed. * DNS checks now have a timeout in case a DNS server is not responding, so the checks don't stall indefinitely. * Better messages if external DNS is used and, weirdly, custom secondary nameservers are set. diff --git a/management/status_checks.py b/management/status_checks.py index 68f4a57b..6b49fd43 100755 --- a/management/status_checks.py +++ b/management/status_checks.py @@ -347,15 +347,20 @@ def check_primary_hostname_dns(domain, env, output, dns_domains, dns_zonefiles): issues listed here.""" % (env['PUBLIC_IP'], ip)) - # Check reverse DNS on the PRIMARY_HOSTNAME. Note that it might not be + # Check reverse DNS matches the PRIMARY_HOSTNAME. Note that it might not be # a DNS zone if it is a subdomain of another domain we have a zone for. - ipaddr_rev = dns.reversename.from_address(env['PUBLIC_IP']) - existing_rdns = query_dns(ipaddr_rev, "PTR") - if existing_rdns == domain: - output.print_ok("Reverse DNS is set correctly at ISP. [%s ↦ %s]" % (env['PUBLIC_IP'], env['PRIMARY_HOSTNAME'])) - else: + existing_rdns_v4 = query_dns(dns.reversename.from_address(env['PUBLIC_IP']), "PTR") + existing_rdns_v6 = query_dns(dns.reversename.from_address(env['PUBLIC_IPV6']), "PTR") if env.get("PUBLIC_IPV6") else None + if existing_rdns_v4 == domain and existing_rdns_v6 in (None, domain): + output.print_ok("Reverse DNS is set correctly at ISP. [%s ↦ %s]" % ( + env['PUBLIC_IP'] + (("/"+env['PUBLIC_IPV6']) if env.get("PUBLIC_IPV6") else ""), + env['PRIMARY_HOSTNAME'])) + elif existing_rdns_v4 == existing_rdns_v6 or existing_rdns_v6 is None: output.print_error("""Your box's reverse DNS is currently %s, but it should be %s. Your ISP or cloud provider will have instructions - on setting up reverse DNS for your box at %s.""" % (existing_rdns, domain, env['PUBLIC_IP']) ) + on setting up reverse DNS for your box.""" % (existing_rdns_v4, domain) ) + else: + output.print_error("""Your box's reverse DNS is currently %s (IPv4) and %s (IPv6), but it should be %s. 
Your ISP or cloud provider will have instructions + on setting up reverse DNS for your box.""" % (existing_rdns_v4, existing_rdns_v6, domain) ) # Check the TLSA record. tlsa_qname = "_25._tcp." + domain From c4f00626efb7a9d14dfc2222c0cd460a48b6b8dd Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Mon, 7 Dec 2015 09:08:00 -0500 Subject: [PATCH 14/20] status checks: check that PRIMARY_HOSTNAME's AAAA record is working --- CHANGELOG.md | 2 +- management/status_checks.py | 17 +++++++++-------- 2 files changed, 10 insertions(+), 9 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 2a862b88..67d24136 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -10,7 +10,7 @@ Mail: Control panel: -* When IPv6 is enabled, check that system services are accessible over IPv6 too and that reverse DNS is setup correctly for the IPv6 address. +* When IPv6 is enabled, check that system services are accessible over IPv6 too, that the box's hostname resolves over IPv6, and that reverse DNS is setup correctly for IPv6. * Explanatory text for setting up secondary nameserver is added/fixed. * DNS checks now have a timeout in case a DNS server is not responding, so the checks don't stall indefinitely. * Better messages if external DNS is used and, weirdly, custom secondary nameservers are set. diff --git a/management/status_checks.py b/management/status_checks.py index 6b49fd43..dd442bbc 100755 --- a/management/status_checks.py +++ b/management/status_checks.py @@ -316,6 +316,7 @@ def check_primary_hostname_dns(domain, env, output, dns_domains, dns_zonefiles): ip = query_dns(domain, "A") ns_ips = query_dns("ns1." + domain, "A") + '/' + query_dns("ns2." + domain, "A") + my_ips = env['PUBLIC_IP'] + ((" / "+env['PUBLIC_IPV6']) if env.get("PUBLIC_IPV6") else "") # Check that the ns1/ns2 hostnames resolve to A records. This information probably # comes from the TLD since the information is set at the registrar as glue records. @@ -338,23 +339,23 @@ def check_primary_hostname_dns(domain, env, output, dns_domains, dns_zonefiles): public DNS to update after a change.""" % (env['PRIMARY_HOSTNAME'], env['PRIMARY_HOSTNAME'], env['PUBLIC_IP'], ns_ips)) - # Check that PRIMARY_HOSTNAME resolves to PUBLIC_IP in public DNS. - if ip == env['PUBLIC_IP']: - output.print_ok("Domain resolves to box's IP address. [%s ↦ %s]" % (env['PRIMARY_HOSTNAME'], env['PUBLIC_IP'])) + # Check that PRIMARY_HOSTNAME resolves to PUBLIC_IP[V6] in public DNS. + ipv6 = query_dns(domain, "AAAA") if env.get("PUBLIC_IPV6") else None + if ip == env['PUBLIC_IP'] and ipv6 in (None, env['PUBLIC_IPV6']): + output.print_ok("Domain resolves to box's IP address. [%s ↦ %s]" % (env['PRIMARY_HOSTNAME'], my_ips)) else: output.print_error("""This domain must resolve to your box's IP address (%s) in public DNS but it currently resolves to %s. It may take several hours for public DNS to update after a change. This problem may result from other - issues listed here.""" - % (env['PUBLIC_IP'], ip)) + issues listed above.""" + % (my_ips, ip + ((" / " + ipv6) if ipv6 is not None else ""))) + # Check reverse DNS matches the PRIMARY_HOSTNAME. Note that it might not be # a DNS zone if it is a subdomain of another domain we have a zone for. existing_rdns_v4 = query_dns(dns.reversename.from_address(env['PUBLIC_IP']), "PTR") existing_rdns_v6 = query_dns(dns.reversename.from_address(env['PUBLIC_IPV6']), "PTR") if env.get("PUBLIC_IPV6") else None if existing_rdns_v4 == domain and existing_rdns_v6 in (None, domain): - output.print_ok("Reverse DNS is set correctly at ISP. 
[%s ↦ %s]" % ( - env['PUBLIC_IP'] + (("/"+env['PUBLIC_IPV6']) if env.get("PUBLIC_IPV6") else ""), - env['PRIMARY_HOSTNAME'])) + output.print_ok("Reverse DNS is set correctly at ISP. [%s ↦ %s]" % (my_ips, env['PRIMARY_HOSTNAME'])) elif existing_rdns_v4 == existing_rdns_v6 or existing_rdns_v6 is None: output.print_error("""Your box's reverse DNS is currently %s, but it should be %s. Your ISP or cloud provider will have instructions on setting up reverse DNS for your box.""" % (existing_rdns_v4, domain) ) From aedfe62bb07d304891873c3e0bbcd07ff14aef0d Mon Sep 17 00:00:00 2001 From: Ariejan de Vroom Date: Mon, 23 Nov 2015 15:12:33 +0100 Subject: [PATCH 15/20] Add alias for abuse@ --- CHANGELOG.md | 3 ++- management/mailconfig.py | 16 ++++++++++------ management/templates/aliases.html | 2 +- 3 files changed, 13 insertions(+), 8 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 5fec01d8..76409a21 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,6 +7,7 @@ Still In Development Mail: * Updated Roundcube to version 1.1.3. +* Auto-create RFC2142 aliases for abuse@. Control panel: @@ -392,7 +393,7 @@ v0.02 (September 21, 2014) * Better logic for determining when to take a full backup. * Reduce DNS TTL, not that it seems to really matter. * Add SSHFP DNS records. -* Add an API for setting custom DNS records +* Add an API for setting custom DNS records * Update to ownCloud 7.0.2. * Some things were broken if the machine had an IPv6 address. * Use a dialogs library to ask users questions during setup. diff --git a/management/mailconfig.py b/management/mailconfig.py index 17fa9499..d9ffdf65 100755 --- a/management/mailconfig.py +++ b/management/mailconfig.py @@ -77,7 +77,7 @@ def prettify_idn_email_address(email): def is_dcv_address(email): email = email.lower() - for localpart in ("admin", "administrator", "postmaster", "hostmaster", "webmaster"): + for localpart in ("admin", "administrator", "postmaster", "hostmaster", "webmaster", "abuse"): if email.startswith(localpart+"@") or email.startswith(localpart+"+"): return True return False @@ -520,17 +520,21 @@ def get_required_aliases(env): # email on that domain are the required aliases or a catch-all/domain-forwarder. real_mail_domains = get_mail_domains(env, filter_aliases = lambda alias : - not alias.startswith("postmaster@") and not alias.startswith("admin@") + not alias.startswith("postmaster@") + and not alias.startswith("admin@") + and not alias.startswith("abuse@") and not alias.startswith("@") ) - # Create postmaster@ and admin@ for all domains we serve mail on. - # postmaster@ is assumed to exist by our Postfix configuration. admin@ - # isn't anything, but it might save the user some trouble e.g. when + # Create postmaster@, admin@ and abuse@ for all domains we serve + # mail on. postmaster@ is assumed to exist by our Postfix configuration. + # admin@isn't anything, but it might save the user some trouble e.g. when # buying an SSL certificate. + # abuse@ is part of RFC2142: https://www.ietf.org/rfc/rfc2142.txt for domain in real_mail_domains: aliases.add("postmaster@" + domain) aliases.add("admin@" + domain) + aliases.add("abuse@" + domain) return aliases @@ -572,7 +576,7 @@ def kick(env, mail_result=None): # longer have any other email addresses for. 
for address, forwards_to, *_ in existing_alias_records: user, domain = address.split("@") - if user in ("postmaster", "admin") \ + if user in ("postmaster", "admin", "abuse") \ and address not in required_aliases \ and forwards_to == get_system_administrator(env): remove_mail_alias(address, env, do_kick=False) diff --git a/management/templates/aliases.html b/management/templates/aliases.html index 215e728c..dc916f95 100644 --- a/management/templates/aliases.html +++ b/management/templates/aliases.html @@ -86,7 +86,7 @@ -

-hostmaster@, postmaster@, and admin@ email addresses are required on some domains.
+hostmaster@, postmaster@, admin@ and abuse@ email addresses are required on some domains.

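For context, the effect of this patch on the generated aliases can be sketched in a few lines of stand-alone Python. This is a simplified approximation for illustration only, not the actual get_required_aliases() code, and the example domains are hypothetical:

    # Simplified approximation of the required-alias generation after this patch.
    # Not the real mailconfig.py logic; the example domains are hypothetical.
    def required_aliases(mail_domains):
        aliases = set()
        for domain in mail_domains:
            # postmaster@ is assumed to exist by the Postfix configuration,
            # admin@ is a convenience (e.g. when buying an SSL certificate),
            # and abuse@ is required by RFC 2142.
            for localpart in ("postmaster", "admin", "abuse"):
                aliases.add(localpart + "@" + domain)
        return aliases

    print(sorted(required_aliases({"example.com", "example.net"})))
    # ['abuse@example.com', 'abuse@example.net', 'admin@example.com', ...]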
From fe9ed3f70d6df5f6859bfadddac57c2b1e53d69b Mon Sep 17 00:00:00 2001 From: Scott Bronson Date: Mon, 7 Dec 2015 10:14:04 -0800 Subject: [PATCH 16/20] don't install bind9-host when setting hostname also remove an incorrect comment --- setup/questions.sh | 2 -- 1 file changed, 2 deletions(-) diff --git a/setup/questions.sh b/setup/questions.sh index 8b5fbb99..a02b9bd1 100644 --- a/setup/questions.sh +++ b/setup/questions.sh @@ -207,8 +207,6 @@ if [ "$PUBLIC_IPV6" = "auto" ]; then PUBLIC_IPV6=$(get_publicip_from_web_service 6 || get_default_privateip 6) fi if [ "$PRIMARY_HOSTNAME" = "auto" ]; then - # Use reverse DNS to get this machine's hostname. Install bind9-host early. - hide_output apt-get -y install bind9-host PRIMARY_HOSTNAME=$(get_default_hostname) elif [ "$PRIMARY_HOSTNAME" = "auto-easy" ]; then # Generate a probably-unique subdomain under our justtesting.email domain. From f8b4e3775dcae3fee60579b8400ddc089548d7fb Mon Sep 17 00:00:00 2001 From: Marius Date: Sat, 12 Dec 2015 13:35:12 +0100 Subject: [PATCH 17/20] Update mail-guide.html (POP3) --- management/templates/mail-guide.html | 1 + 1 file changed, 1 insertion(+) diff --git a/management/templates/mail-guide.html b/management/templates/mail-guide.html index edf9fefd..d5a0c2ae 100644 --- a/management/templates/mail-guide.html +++ b/management/templates/mail-guide.html @@ -37,6 +37,7 @@

In addition to setting up your email, you’ll also need to set up contacts and calendar synchronization separately.

+As an alternative to IMAP you can also use the POP protocol. For this you just need to pick POP as the protocol in your e-mail client, and use the port 995. The SMTP settings and usernames and passwords remain the same. However, we strongly recommend to use IMAP or ActiveSync.

Exchange/ActiveSync settings

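The POP settings documented above can be sanity-checked with a short Python snippet. The hostname and credentials below are placeholders, not values taken from the patch:

    # Connect with implicit TLS on port 995, as the guide describes.
    # Hostname and credentials are placeholders.
    import poplib

    conn = poplib.POP3_SSL("box.example.com", 995, timeout=10)
    conn.user("user@example.com")   # the username is the whole email address
    conn.pass_("password")
    print(conn.stat())              # (message count, mailbox size in bytes)
    conn.quit()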
From 6e6c9937246cec932691d71bdd8efc3eb2b69aa9 Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Sat, 12 Dec 2015 08:44:13 -0500 Subject: [PATCH 18/20] reword POP documentation, add to changelog/readme --- CHANGELOG.md | 1 + management/templates/mail-guide.html | 5 +++-- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 8a8e68d0..79314c84 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -15,6 +15,7 @@ Control panel: * Explanatory text for setting up secondary nameserver is added/fixed. * DNS checks now have a timeout in case a DNS server is not responding, so the checks don't stall indefinitely. * Better messages if external DNS is used and, weirdly, custom secondary nameservers are set. +* Add POP to the mail client settings documentation. System: diff --git a/management/templates/mail-guide.html b/management/templates/mail-guide.html index d5a0c2ae..906e7b2a 100644 --- a/management/templates/mail-guide.html +++ b/management/templates/mail-guide.html @@ -29,7 +29,7 @@ Protocol/Method IMAP Mail server {{hostname}} IMAP Port 993 - IMAP Security SSL + IMAP Security SSL or TLS SMTP Port 587 SMTP Security STARTTLS (“always” or “required”, if prompted) Username: Your whole email address. @@ -37,7 +37,8 @@

In addition to setting up your email, you’ll also need to set up contacts and calendar synchronization separately.

-As an alternative to IMAP you can also use the POP protocol. For this you just need to pick POP as the protocol in your e-mail client, and use the port 995. The SMTP settings and usernames and passwords remain the same. However, we strongly recommend to use IMAP or ActiveSync.
+As an alternative to IMAP you can also use the POP protocol: choose POP as the protocol, port 995, and SSL or TLS security in your mail client. The SMTP settings and usernames and passwords remain the same. However, we recommend you use IMAP instead.
+

Exchange/ActiveSync settings

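The IMAP settings from the same table (implicit SSL/TLS on port 993, username equal to the full email address) can be exercised the same way; again, the hostname and credentials are placeholders:

    # Implicit SSL/TLS on port 993, matching the "SSL or TLS" wording in the settings table.
    # Hostname and credentials are placeholders.
    import imaplib

    conn = imaplib.IMAP4_SSL("box.example.com", 993)
    conn.login("user@example.com", "password")
    print(conn.select("INBOX", readonly=True))  # e.g. ('OK', [b'42'])
    conn.logout()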
From 6336cc64528494974d75942f5d24a2c097ae6819 Mon Sep 17 00:00:00 2001 From: Scott Bronson Date: Tue, 22 Dec 2015 12:33:26 -0800 Subject: [PATCH 19/20] tiny tweaks to make the bash slightly more readable --- setup/mail-dovecot.sh | 9 +++++---- setup/system.sh | 9 +++------ 2 files changed, 8 insertions(+), 10 deletions(-) diff --git a/setup/mail-dovecot.sh b/setup/mail-dovecot.sh index 1cd0398b..978cba25 100755 --- a/setup/mail-dovecot.sh +++ b/setup/mail-dovecot.sh @@ -88,18 +88,19 @@ sed -i "s/#port = 110/port = 0/" /etc/dovecot/conf.d/10-master.conf # this are minimal. But for good measure, let's go to 4 minutes to halve the # bandwidth and number of times the device's networking might be woken up. # The risk is that if the connection is silent for too long it might be reset -# by a peer. See #129 and http://razor.occams.info/blog/2014/08/09/how-bad-is-imap-idle/. +# by a peer. See [#129](https://github.com/mail-in-a-box/mailinabox/issues/129) +# and [How bad is IMAP IDLE](http://razor.occams.info/blog/2014/08/09/how-bad-is-imap-idle/). tools/editconf.py /etc/dovecot/conf.d/20-imap.conf \ imap_idle_notify_interval="4 mins" -# Set POP3 UIDL -# UIDLs are used by POP3 clients to keep track of what messages they've downloaded. +# Set POP3 UIDL. +# UIDLs are used by POP3 clients to keep track of what messages they've downloaded. # For new POP3 servers, the easiest way to set up UIDLs is to use IMAP's UIDVALIDITY # and UID values, the default in Dovecot. tools/editconf.py /etc/dovecot/conf.d/20-pop3.conf \ pop3_uidl_format="%08Xu%08Xv" -# Full Text Search - Enable full text search of mail using dovecot's lucene plugin, +# Full Text Search - Enable full text search of mail using dovecot's lucene plugin, # which *we* package and distribute (dovecot-lucene package). tools/editconf.py /etc/dovecot/conf.d/10-mail.conf \ mail_plugins="\$mail_plugins fts fts_lucene" diff --git a/setup/system.sh b/setup/system.sh index c00fcff4..8f7b640b 100755 --- a/setup/system.sh +++ b/setup/system.sh @@ -12,12 +12,9 @@ source setup/functions.sh # load our functions # text search plugin for (and by) dovecot, which is not available in # Ubuntu currently. # -# Add that to the system's list of repositories using add-apt-repository. -# But add-apt-repository may not be installed. If it's not available, -# then install it. But we have to run apt-get update before we try to -# install anything so the package index is up to date. After adding the -# PPA, we have to run apt-get update *again* to load the PPA's index, -# so this must precede the apt-get update line below. +# So, first ensure add-apt-repository is installed, then use it to install +# the [mail-in-a-box ppa](https://launchpad.net/~mail-in-a-box/+archive/ubuntu/ppa). + if [ ! -f /usr/bin/add-apt-repository ]; then echo "Installing add-apt-repository..." From dbf47291097a12f11e83c9340d7a1d3d06e5dde7 Mon Sep 17 00:00:00 2001 From: Joshua Tauberer Date: Wed, 9 Dec 2015 13:29:58 +0000 Subject: [PATCH 20/20] add management/backup.py --restore --- CHANGELOG.md | 1 + management/backup.py | 17 +++++++++++++++++ 2 files changed, 18 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 79314c84..4651873d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -26,6 +26,7 @@ System: * Z-Push (Exchange/ActiveSync) logs now exclude warnings and are now rotated to save disk space. * Fix pip command that might have not installed all necessary Python packages. 
* The control panel and backup would not work on Google Compute Engine because they install a conflicting boto package. +* Added a new command `management/backup.py --restore` to restore files from a backup to a target directory (command line arguments are passed to `duplicity restore`). v0.14 (November 4, 2015) ------------------------ diff --git a/management/backup.py b/management/backup.py index 63b8aa66..b627e1d1 100755 --- a/management/backup.py +++ b/management/backup.py @@ -314,6 +314,18 @@ def run_duplicity_verification(): env["STORAGE_ROOT"], ], get_env(env)) +def run_duplicity_restore(args): + env = load_environment() + config = get_backup_config(env) + backup_cache_dir = os.path.join(env["STORAGE_ROOT"], 'backup', 'cache') + shell('check_call', [ + "/usr/bin/duplicity", + "restore", + "--archive-dir", backup_cache_dir, + config["target"], + ] + args, + get_env(env)) + def list_target_files(config): import urllib.parse try: @@ -443,6 +455,11 @@ if __name__ == "__main__": ret = backup_status(load_environment()) print(rtyaml.dump(ret["backups"])) + elif len(sys.argv) >= 2 and sys.argv[1] == "--restore": + # Run duplicity restore. Rest of command line passed as arguments + # to duplicity. The restore path should be specified. + run_duplicity_restore(sys.argv[2:]) + else: # Perform a backup. Add --full to force a full backup rather than # possibly performing an incremental backup.
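As a usage sketch, the new restore mode can be driven from the command line as management/backup.py --restore followed by duplicity arguments, or called directly from Python. The destination directory below is illustrative only, and duplicity will normally refuse to restore into a path that already exists unless --force is also given:

    # Minimal sketch: restore the entire most recent backup into /tmp/restored.
    # Assumes it is run as root from the management/ directory; the destination
    # path is illustrative. Any extra arguments are passed straight through to
    # `duplicity restore`.
    from backup import run_duplicity_restore

    run_duplicity_restore(["/tmp/restored"])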