mirror of https://github.com/mail-in-a-box/mailinabox.git
synced 2026-03-12 17:07:23 +01:00

Compare commits
44 Commits
f9ca440ce8
d880f088be
5cabfd591b
af80849857
7a191e67b8
4b2e48f2c0
eb545d7941
a2e6e81697
1b24e2cbaf
0843159fb4
b8e99c30a2
3d933c16d0
e785886447
23ecff04b8
a0bae5db5c
86368ed165
5e4c0ed825
ffa9dc5d67
43cb6c4995
36cb2ef41d
098e250cc4
3d5a35b184
87d3f2641d
c6c75c5a17
1ba44b02d4
6fd4cd85ca
6182347641
401b0526a3
2f24328608
8ea42847da
4ed23f44e6
178527dab1
f5c376dca8
239eac662c
4e18f66db6
77937df955
4db8efa0df
66c80bd16a
5895aeecd7
83ffc99b9c
85a9a1608c
2e693f7011
6f0220da4b
09a45b4397
CHANGELOG.md

@@ -1,10 +1,40 @@
 CHANGELOG
 =========
 
+v0.17 (February 25, 2016)
+-------------------------
+
+Mail:
+
+* Roundcube updated to version 1.1.4.
+* When there's a problem delivering an outgoing message, a new 'warning' bounce will come after 3 hours and the box will stop trying after 2 days (instead of 5).
+* On multi-homed machines, Postfix now binds to the right network interface when sending outbound mail so that SPF checks on the receiving end will pass.
+* Mail sent from addresses on subdomains of other domains hosted by this box would not be DKIM-signed and so would fail DMARC checks by recipients, since version v0.15.
+
+Control panel:
+
+* TLS certificate provisioning would crash if DNS propagation was in progress and a challenge failed; might have shown the wrong error when provisioning fails.
+* Backup times were displayed with the wrong time zone.
+* Thresholds for displaying messages when the system is running low on memory have been reduced from 30% to 20% for a warning and from 15% to 10% for an error.
+* Other minor fixes.
+
+System:
+
+* Backups to some AWS S3 regions broke in version 0.15 because we reverted the version of boto. That's now fixed.
+* On low-usage systems, don't hold backups for quite so long by taking a full backup more often.
+* Nightly status checks might fail on systems not configured with a default Unicode locale.
+* If domains need a TLS certificate and the user hasn't installed one yet using Let's Encrypt, the administrator would get a nightly email with weird interactive text asking them to agree to Let's Encrypt's ToS. Now just say that the provisioning can't be done automatically.
+* Reduce the number of background processes used by the management daemon to lower memory consumption.
+
+Setup:
+
+* The first screen now warns users not to install on a machine used for other things.
+
 v0.16 (January 30, 2016)
 ------------------------
 
-This update primarily adds automatica SSL (now "TLS") certificate provisioning from Let's Encrypt (https://letsencrypt.org/).
+This update primarily adds automatic SSL (now "TLS") certificate provisioning from Let's Encrypt (https://letsencrypt.org/).
+* The Sieve port is now open so tools like the Thunderbird Sieve program can be used to edit mail filters.
 
 Control Panel:
 
@@ -59,20 +59,20 @@ by me:
 $ curl -s https://keybase.io/joshdata/key.asc | gpg --import
 gpg: key C10BDD81: public key "Joshua Tauberer <jt@occams.info>" imported
 
-$ git verify-tag v0.16
+$ git verify-tag v0.17
 gpg: Signature made ..... using RSA key ID C10BDD81
 gpg: Good signature from "Joshua Tauberer <jt@occams.info>"
 gpg: WARNING: This key is not certified with a trusted signature!
 gpg: There is no indication that the signature belongs to the owner.
 Primary key fingerprint: 5F4C 0E73 13CC D744 693B 2AEA B920 41F4 C10B DD81
 
-You'll get a lot of warnings, but that's OK. Check that the primary key fingerprint matchs the
+You'll get a lot of warnings, but that's OK. Check that the primary key fingerprint matches the
 fingerprint in the key details at [https://keybase.io/joshdata](https://keybase.io/joshdata)
 and on my [personal homepage](https://razor.occams.info/). (Of course, if this repository has been compromised you can't trust these instructions.)
 
 Checkout the tag corresponding to the most recent release:
 
-$ git checkout v0.16
+$ git checkout v0.17
 
 Begin the installation.
 
@@ -27,9 +27,9 @@ EXEC_AS_USER=root
 
 # Ensure Python reads/writes files in UTF-8. If the machine
 # triggers some other locale in Python, like ASCII encoding,
-# Python may not be able to read/write files. Here and in
+# Python may not be able to read/write files. Set also
 # setup/start.sh (where the locale is also installed if not
-# already present).
+# already present) and management/daily_tasks.sh.
 export LANGUAGE=en_US.UTF-8
 export LC_ALL=en_US.UTF-8
 export LANG=en_US.UTF-8
@@ -42,10 +42,10 @@ def backup_status(env):
     # Get duplicity collection status and parse for a list of backups.
     def parse_line(line):
         keys = line.strip().split()
-        date = dateutil.parser.parse(keys[1])
+        date = dateutil.parser.parse(keys[1]).astimezone(dateutil.tz.tzlocal())
         return {
             "date": keys[1],
-            "date_str": date.strftime("%x %X"),
+            "date_str": date.strftime("%x %X") + " " + now.tzname(),
             "date_delta": reldate(date, now, "the future?"),
             "full": keys[0] == "full",
             "size": 0, # collection-status doesn't give us the size
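The change above converts duplicity's timestamp to the box's local time zone before formatting, so backup times display as local wall-clock times. A minimal sketch of the same conversion using only the standard library (the real code uses `dateutil`; the function name and the fixed CET zone here are illustrative assumptions):

```python
from datetime import datetime, timezone, timedelta

def format_backup_date(iso_utc: str, local_tz: timezone) -> str:
    # Parse the UTC timestamp the backup tool reports, then convert to
    # the local zone so the control panel shows wall-clock time.
    dt = datetime.fromisoformat(iso_utc).replace(tzinfo=timezone.utc)
    local = dt.astimezone(local_tz)
    return local.strftime("%x %X") + " " + (local.tzname() or "")

# A fixed UTC+1 zone keeps the output deterministic for the example.
tz = timezone(timedelta(hours=1), "CET")
print(format_backup_date("2016-02-25T03:00:00", tz))
```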
@@ -81,50 +81,66 @@ def backup_status(env):
     # This is relied on by should_force_full() and the next step.
     backups = sorted(backups.values(), key = lambda b : b["date"], reverse=True)
 
-    # Get the average size of incremental backups and the size of the
-    # most recent full backup.
+    # Get the average size of incremental backups, the size of the
+    # most recent full backup, and the date of the most recent
+    # backup and the most recent full backup.
     incremental_count = 0
     incremental_size = 0
+    first_date = None
     first_full_size = None
+    first_full_date = None
     for bak in backups:
+        if first_date is None:
+            first_date = dateutil.parser.parse(bak["date"])
         if bak["full"]:
             first_full_size = bak["size"]
+            first_full_date = dateutil.parser.parse(bak["date"])
             break
         incremental_count += 1
         incremental_size += bak["size"]
 
-    # Predict how many more increments until the next full backup,
-    # and add to that the time we hold onto backups, to predict
-    # how long the most recent full backup+increments will be held
-    # onto. Round up since the backup occurs on the night following
-    # when the threshold is met.
+    # When will the most recent backup be deleted? It won't be deleted if the next
+    # backup is incremental, because the increments rely on all past increments.
+    # So first guess how many more incremental backups will occur until the next
+    # full backup. That full backup frees up this one to be deleted. But, the backup
+    # must also be at least min_age_in_days old too.
     deleted_in = None
     if incremental_count > 0 and first_full_size is not None:
-        deleted_in = "approx. %d days" % round(config["min_age_in_days"] + (.5 * first_full_size - incremental_size) / (incremental_size/incremental_count) + .5)
+        # How many days until the next incremental backup? First, the part of
+        # the algorithm based on increment sizes:
+        est_days_to_next_full = (.5 * first_full_size - incremental_size) / (incremental_size/incremental_count)
+        est_time_of_next_full = first_date + datetime.timedelta(days=est_days_to_next_full)
 
-    # When will a backup be deleted?
+        # ...And then the part of the algorithm based on full backup age:
+        est_time_of_next_full = min(est_time_of_next_full, first_full_date + datetime.timedelta(days=config["min_age_in_days"]*10+1))
+
+        # It still can't be deleted until it's old enough.
+        est_deleted_on = max(est_time_of_next_full, first_date + datetime.timedelta(days=config["min_age_in_days"]))
+
+        deleted_in = "approx. %d days" % round((est_deleted_on-now).total_seconds()/60/60/24 + .5)
+
+    # When will a backup be deleted? Set the deleted_in field of each backup.
     saw_full = False
-    days_ago = now - datetime.timedelta(days=config["min_age_in_days"])
     for bak in backups:
         if deleted_in:
-            # Subsequent backups are deleted when the most recent increment
-            # in the chain would be deleted.
+            # The most recent increment in a chain and all of the previous backups
+            # it relies on are deleted at the same time.
             bak["deleted_in"] = deleted_in
         if bak["full"]:
-            # Reset when we get to a full backup. A new chain start next.
+            # Reset when we get to a full backup. A new chain start *next*.
             saw_full = True
             deleted_in = None
         elif saw_full and not deleted_in:
-            # Mark deleted_in only on the first increment after a full backup.
-            deleted_in = reldate(days_ago, dateutil.parser.parse(bak["date"]), "on next daily backup")
+            # We're now on backups prior to the most recent full backup. These are
+            # free to be deleted as soon as they are min_age_in_days old.
+            deleted_in = reldate(now, dateutil.parser.parse(bak["date"]) + datetime.timedelta(days=config["min_age_in_days"]), "on next daily backup")
             bak["deleted_in"] = deleted_in
 
     return {
-        "tz": now.tzname(),
         "backups": backups,
     }
 
-def should_force_full(env):
+def should_force_full(config, env):
     # Force a full backup when the total size of the increments
     # since the last full backup is greater than half the size
     # of that full backup.
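The new deletion estimate combines three dates: the projected next full backup based on how fast the increments grow, the age cap after which a full backup is forced anyway, and the minimum retention age. A standalone sketch with made-up sizes and dates (the helper name and all numbers are assumptions; the arithmetic mirrors the diff above):

```python
import datetime

def estimate_deleted_in(now, first_date, first_full_date, first_full_size,
                        incremental_size, incremental_count, min_age_in_days):
    # Part 1: project the next full backup from how fast increments grow.
    est_days_to_next_full = (.5 * first_full_size - incremental_size) / (incremental_size / incremental_count)
    est_time_of_next_full = first_date + datetime.timedelta(days=est_days_to_next_full)
    # Part 2: a full backup is forced anyway once the last one is
    # min_age_in_days*10+1 days old.
    est_time_of_next_full = min(est_time_of_next_full,
                                first_full_date + datetime.timedelta(days=min_age_in_days * 10 + 1))
    # The chain still can't be deleted until it is at least min_age_in_days old.
    est_deleted_on = max(est_time_of_next_full,
                         first_date + datetime.timedelta(days=min_age_in_days))
    return round((est_deleted_on - now).total_seconds() / 86400 + .5)

now = datetime.datetime(2016, 2, 25)
days = estimate_deleted_in(
    now=now,
    first_date=now - datetime.timedelta(days=1, hours=12),  # most recent backup (assumed)
    first_full_date=now - datetime.timedelta(days=10),      # most recent full backup (assumed)
    first_full_size=1000,    # size in MB, assumed
    incremental_size=200,    # total size of the increments, assumed
    incremental_count=10,
    min_age_in_days=3)
print(days)
```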
@@ -136,8 +152,14 @@ def should_force_full(env):
             inc_size += bak["size"]
         else:
             # ...until we reach the most recent full backup.
-            # Return if we should to a full backup.
-            return inc_size > .5*bak["size"]
+            # Return if we should to a full backup, which is based
+            # on the size of the increments relative to the full
+            # backup, as well as the age of the full backup.
+            if inc_size > .5*bak["size"]:
+                return True
+            if dateutil.parser.parse(bak["date"]) + datetime.timedelta(days=config["min_age_in_days"]*10+1) < datetime.datetime.now(dateutil.tz.tzlocal()):
+                return True
+            return False
     else:
         # If we got here there are no (full) backups, so make one.
         # (I love for/else blocks. Here it's just to show off.)
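should_force_full() now forces a full backup for either of two reasons: the increments have outgrown half the full backup, or the full backup is simply too old. A sketch of that decision with the backup-list walk stripped away (the function name and flat parameter list are a simplification, not the real signature):

```python
import datetime

def force_full(inc_size, full_size, full_date, min_age_in_days, now):
    # Condition 1: increments have outgrown half the full backup's size.
    if inc_size > .5 * full_size:
        return True
    # Condition 2: the full backup itself has aged out
    # (10x the retention window, plus a day).
    if full_date + datetime.timedelta(days=min_age_in_days * 10 + 1) < now:
        return True
    return False

now = datetime.datetime(2016, 2, 25)
print(force_full(600, 1000, now - datetime.timedelta(days=5), 3, now))   # large increments
print(force_full(100, 1000, now - datetime.timedelta(days=40), 3, now))  # stale full backup
print(force_full(100, 1000, now - datetime.timedelta(days=5), 3, now))   # neither
```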
@@ -216,7 +238,7 @@ def perform_backup(full_backup):
     # the increments since the most recent full backup are
     # large.
     try:
-        full_backup = full_backup or should_force_full(env)
+        full_backup = full_backup or should_force_full(config, env)
     except Exception as e:
         # This was the first call to duplicity, and there might
         # be an error already.
@@ -15,7 +15,7 @@ from mailconfig import get_mail_aliases, get_mail_aliases_ex, get_mail_domains,
 # live across http requests so we don't baloon the system with
 # processes.
 import multiprocessing.pool
-pool = multiprocessing.pool.Pool(processes=10)
+pool = multiprocessing.pool.Pool(processes=5)
 
 env = utils.load_environment()
 
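Halving the pool size trades request concurrency for memory, since each worker process carries its own copy of the interpreter. The pattern, sketched with a thread pool so the snippet runs anywhere (the daemon itself uses `multiprocessing.pool.Pool`, i.e. worker processes):

```python
import multiprocessing.pool

# One long-lived pool shared across requests instead of spawning workers
# per request; fewer workers means less resident memory.
pool = multiprocessing.pool.ThreadPool(processes=5)
results = pool.map(lambda n: n * n, range(10))
pool.close()
pool.join()
print(results)
```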
@@ -1,6 +1,14 @@
 #!/bin/bash
 # This script is run daily (at 3am each night).
 
+# Set character encoding flags to ensure that any non-ASCII
+# characters don't cause problems. See setup/start.sh and
+# the management daemon startup script.
+export LANGUAGE=en_US.UTF-8
+export LC_ALL=en_US.UTF-8
+export LANG=en_US.UTF-8
+export LC_TYPE=en_US.UTF-8
+
 # Take a backup.
 management/backup.py | management/email_administrator.py "Backup Status"
 
@@ -91,7 +91,7 @@ def do_dns_update(env, force=False):
         shell('check_call', ["/usr/sbin/service", "nsd", "restart"])
 
     # Write the OpenDKIM configuration tables for all of the domains.
-    if write_opendkim_tables([domain for domain, zonefile in zonefiles], env):
+    if write_opendkim_tables(get_mail_domains(env), env):
         # Settings changed. Kick opendkim.
         shell('check_call', ["/usr/sbin/service", "opendkim", "restart"])
     if len(updated_domains) == 0:
@@ -204,7 +204,7 @@ def get_certificates_to_provision(env, show_extended_problems=True, force_domains=None):
             domains_if_any.add(domain)
 
         # It's valid. Should we report its validness?
-        if show_extended_problems:
+        elif show_extended_problems:
             problems[domain] = "The certificate is valid for at least another 30 days --- no need to replace."
 
     # Warn the user about domains hosted elsewhere.
@@ -365,7 +365,7 @@ def provision_certificates(env, agree_to_tos_url=None, logger=None, show_extended_problems=True, force_domains=None):
             "message": "Something unexpected went wrong. It looks like your local Let's Encrypt account data is corrupted. There was a problem with the file " + e.account_file_path + ".",
         })
 
-    except (client.InvalidDomainName, client.NeedToTakeAction, acme.messages.Error, requests.exceptions.RequestException) as e:
+    except (client.InvalidDomainName, client.NeedToTakeAction, client.ChallengeFailed, acme.messages.Error, requests.exceptions.RequestException) as e:
         ret_item.update({
             "result": "error",
             "message": "Something unexpected went wrong: " + str(e),
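Adding `client.ChallengeFailed` to the tuple turns a failed ACME challenge into a reported error instead of an unhandled crash. The shape of that handler, sketched with stand-in exception classes (all names below are hypothetical, not the real library's API):

```python
class InvalidDomainName(Exception): pass
class NeedToTakeAction(Exception): pass
class ChallengeFailed(Exception): pass

# Catching the expected failure modes in one tuple keeps the provisioning
# loop alive and reports a structured error instead of crashing.
EXPECTED_ERRORS = (InvalidDomainName, NeedToTakeAction, ChallengeFailed)

def provision(step):
    ret_item = {}
    try:
        step()
        ret_item["result"] = "ok"
    except EXPECTED_ERRORS as e:
        ret_item.update({
            "result": "error",
            "message": "Something unexpected went wrong: " + str(e),
        })
    return ret_item

def failing():
    raise ChallengeFailed("DNS not propagated")

print(provision(failing))
```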
@@ -458,9 +458,14 @@ def provision_certificates_cmdline():
         if agree_to_tos_url is not None:
             continue
 
-        # Can't ask the user a question in this mode.
-        if headless in sys.argv:
-            print("Can't issue TLS certficate until user has agreed to Let's Encrypt TOS.")
+        # Can't ask the user a question in this mode. Warn the user that something
+        # needs to be done.
+        if headless:
+            print(", ".join(request["domains"]) + " need a new or renewed TLS certificate.")
+            print()
+            print("This box can't do that automatically for you until you agree to Let's Encrypt's")
+            print("Terms of Service agreement. Use the Mail-in-a-Box control panel to provision")
+            print("certificates for these domains.")
             sys.exit(1)
 
         print("""
@@ -513,7 +518,7 @@ Do you agree to the agreement? Type Y or N and press <ENTER>: """
     print("A TLS certificate was requested for: " + ", ".join(wait_domains) + ".")
     first = True
     while wait_until > datetime.datetime.now():
-        if "--headless" not in sys.argv or first:
+        if not headless or first:
             print ("We have to wait", int(round((wait_until - datetime.datetime.now()).total_seconds())), "seconds for the certificate to be issued...")
         time.sleep(10)
         first = False
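The next hunk lowers the free-memory thresholds announced in the changelog, mapping the free percentage onto three status levels. A sketch of the classification (helper name assumed):

```python
def memory_status(percent_free):
    # Mirrors the v0.17 thresholds: warn below 20% free, error below 10%.
    if percent_free >= 20:
        return "ok"
    elif percent_free >= 10:
        return "warning"
    return "error"

for pct in (35, 15, 5):
    print(pct, memory_status(pct))
```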
@@ -222,14 +222,14 @@ def check_free_memory(rounded_values, env, output):
     # Check free memory.
     percent_free = 100 - psutil.virtual_memory().percent
     memory_msg = "System memory is %s%% free." % str(round(percent_free))
-    if percent_free >= 30:
-        if rounded_values: memory_msg = "System free memory is at least 30%."
+    if percent_free >= 20:
+        if rounded_values: memory_msg = "System free memory is at least 20%."
         output.print_ok(memory_msg)
-    elif percent_free >= 15:
-        if rounded_values: memory_msg = "System free memory is below 30%."
+    elif percent_free >= 10:
+        if rounded_values: memory_msg = "System free memory is below 20%."
         output.print_warning(memory_msg)
     else:
-        if rounded_values: memory_msg = "System free memory is below 15%."
+        if rounded_values: memory_msg = "System free memory is below 10%."
         output.print_error(memory_msg)
 
 def run_network_checks(env, output):
@@ -464,7 +464,7 @@ def check_dns_zone(domain, env, output, dns_zonefiles):
     elif ip is None:
         output.print_error("Secondary nameserver %s is not configured to resolve this domain." % ns)
     else:
-        output.print_error("Secondary nameserver %s is not configured correctly. (It resolved this domain as %s. It should be %s.)" % (ns, ip, env['PUBLIC_IP']))
+        output.print_error("Secondary nameserver %s is not configured correctly. (It resolved this domain as %s. It should be %s.)" % (ns, ip, correct_ip))
 
 def check_dns_zone_suggestions(domain, env, output, dns_zonefiles, domains_with_a_records):
     # Warn if a custom DNS record is preventing this or the automatic www redirect from
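The version check in the next hunk fetches the published setup script and pulls the tag out of its `TAG=` line with a regex. The extraction step, sketched against an inline script body instead of the network fetch:

```python
import re

# Stand-in for the bytes that urllib.request.urlopen(...).read()
# would return for the published setup script.
script_body = b'#!/bin/bash\nif [ -z "$TAG" ]; then\n    TAG=v0.17\nfi\n'

def extract_tag(body: bytes) -> str:
    # Same idea as the status check: capture everything after 'TAG='
    # up to the end of that line.
    return re.search(b'TAG=(.*)', body).group(1).decode("utf8")

print(extract_tag(script_body))
```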
@@ -740,10 +740,10 @@ def what_version_is_this(env):
     return tag
 
 def get_latest_miab_version():
-    # This pings https://mailinabox.email/bootstrap.sh and extracts the tag named in
+    # This pings https://mailinabox.email/setup.sh and extracts the tag named in
     # the script to determine the current product version.
     import urllib.request
-    return re.search(b'TAG=(.*)', urllib.request.urlopen("https://mailinabox.email/bootstrap.sh?ping=1").read()).group(1).decode("utf8")
+    return re.search(b'TAG=(.*)', urllib.request.urlopen("https://mailinabox.email/setup.sh?ping=1").read()).group(1).decode("utf8")
 
 def check_miab_version(env, output):
     config = load_settings(env)
@@ -117,7 +117,7 @@ function do_login() {
     // Open the next panel the user wants to go to. Do this after the XHR response
     // is over so that we don't start a new XHR request while this one is finishing,
     // which confuses the loading indicator.
-    setTimeout(function() { show_panel(!switch_back_to_panel ? 'system_status' : switch_back_to_panel) }, 300);
+    setTimeout(function() { show_panel(!switch_back_to_panel || switch_back_to_panel == "login" ? 'system_status' : switch_back_to_panel) }, 300);
   }
 })
 }
@@ -250,7 +250,7 @@ function provision_tls_cert() {
   var now = new Date();
   n.append(b);
   function ready_to_finish() {
-    var remaining = r.seconds - Math.round((new Date() - now)/1000);
+    var remaining = Math.round(r.seconds - (new Date() - now)/1000);
     if (remaining > 0) {
       setTimeout(ready_to_finish, 1000);
       b.text("Finish (" + remaining + "...)")
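The countdown fix above moves where the rounding happens: rounding the elapsed time before subtracting can leave a fractional `remaining` when the server reports a fractional wait, while rounding the whole difference always yields whole seconds. The difference, illustrated in Python with assumed values:

```python
seconds = 5.7   # wait period reported by the server (assumed fractional)
elapsed = 1.3   # seconds elapsed since the countdown started (assumed)

# Old: subtract a rounded elapsed time; the result can keep a fraction.
old_remaining = seconds - round(elapsed)
# New: round the whole difference; the countdown is always whole seconds.
new_remaining = round(seconds - elapsed)

print(old_remaining, new_remaining)
```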
@@ -142,7 +142,7 @@ function show_system_backup() {
     var b = r.backups[i];
     var tr = $('<tr/>');
     if (b.full) tr.addClass("full-backup");
-    tr.append( $('<td/>').text(b.date_str + " " + r.tz) );
+    tr.append( $('<td/>').text(b.date_str) );
     tr.append( $('<td/>').text(b.date_delta + " ago") );
     tr.append( $('<td/>').text(b.full ? "full" : "increment") );
     tr.append( $('<td style="text-align: right"/>').text( nice_size(b.size)) );
@@ -2,12 +2,12 @@
 #########################################################
 # This script is intended to be run like this:
 #
-#   curl https://.../bootstrap.sh | sudo bash
+#   curl https://mailinabox.email/setup.sh | sudo bash
 #
 #########################################################
 
 if [ -z "$TAG" ]; then
-    TAG=v0.16
+    TAG=v0.17
 fi
 
 # Are we running as root?
@@ -39,7 +39,7 @@ fi
 # Create a new DKIM key. This creates mail.private and mail.txt
 # in $STORAGE_ROOT/mail/dkim. The former is the private key and
 # the latter is the suggested DNS TXT entry which we'll include
-# in our DNS setup. Note tha the files are named after the
+# in our DNS setup. Note that the files are named after the
 # 'selector' of the key, which we can change later on to support
 # key rotation.
 #
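The next hunk calls `tools/editconf.py` a second time to apply the new queue settings; that tool upserts `key=value` lines in a config file. A minimal hypothetical reimplementation of the upsert behavior (not the real tool), applied to the queue settings from the diff:

```python
def editconf(text: str, **settings) -> str:
    # Replace an existing "key=value" line if present, otherwise append one.
    lines = text.splitlines()
    for key, value in settings.items():
        for i, line in enumerate(lines):
            if line.split("=")[0].strip() == key:
                lines[i] = "%s=%s" % (key, value)
                break
        else:
            lines.append("%s=%s" % (key, value))
    return "\n".join(lines) + "\n"

main_cf = "inet_interfaces=all\nmaximal_queue_lifetime=5d\n"
print(editconf(main_cf, delay_warning_time="3h",
               maximal_queue_lifetime="2d", bounce_queue_lifetime="1d"))
```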
@@ -57,15 +57,26 @@ apt_install postfix postfix-pcre postgrey ca-certificates
 # Set some basic settings...
 #
 # * Have postfix listen on all network interfaces.
+# * Make outgoing connections on a particular interface (if multihomed) so that SPF passes on the receiving side.
 # * Set our name (the Debian default seems to be "localhost" but make it our hostname).
 # * Set the name of the local machine to localhost, which means xxx@localhost is delivered locally, although we don't use it.
 # * Set the SMTP banner (which must have the hostname first, then anything).
 tools/editconf.py /etc/postfix/main.cf \
     inet_interfaces=all \
+    smtp_bind_address=$PRIVATE_IP \
+    smtp_bind_address6=$PRIVATE_IPV6 \
     myhostname=$PRIMARY_HOSTNAME\
     smtpd_banner="\$myhostname ESMTP Hi, I'm a Mail-in-a-Box (Ubuntu/Postfix; see https://mailinabox.email/)" \
     mydestination=localhost
 
+# Tweak some queue settings:
+# * Inform users when their e-mail delivery is delayed more than 3 hours (default is not to warn).
+# * Stop trying to send an undeliverable e-mail after 2 days (instead of 5), and for bounce messages just try for 1 day.
+tools/editconf.py /etc/postfix/main.cf \
+    delay_warning_time=3h \
+    maximal_queue_lifetime=2d \
+    bounce_queue_lifetime=1d
+
 # ### Outgoing Mail
 
 # Enable the 'submission' port 587 smtpd server and tweak its settings.
@@ -4,19 +4,25 @@ source setup/functions.sh
|
|||||||
|
|
||||||
echo "Installing Mail-in-a-Box system management daemon..."
|
echo "Installing Mail-in-a-Box system management daemon..."
|
||||||
|
|
||||||
# Switching python 2 boto to package manager's, not pypi's.
|
# Install packages.
|
||||||
-if [ -f /usr/local/lib/python2.7/dist-packages/boto/__init__.py ]; then hide_output pip uninstall -y boto; fi
+# flask, yaml, dnspython, and dateutil are all for our Python 3 management daemon itself.
+# duplicity does backups. python-pip is so we can 'pip install boto' for Python 2, for duplicity, so it can do backups to AWS S3.
+apt_install python3-flask links duplicity libyaml-dev python3-dnspython python3-dateutil python-pip
 
-# duplicity uses python 2 so we need to use the python 2 package of boto
-# build-essential libssl-dev libffi-dev python3-dev: Required to pip install cryptography.
-apt_install python3-flask links duplicity python-boto libyaml-dev python3-dnspython python3-dateutil \
-	build-essential libssl-dev libffi-dev python3-dev python-pip
+# These are required to pip install cryptography.
+apt_install build-essential libssl-dev libffi-dev python3-dev
 
-# Install other Python packages. The first line is the packages that Josh maintains himself!
+# Install other Python 3 packages used by the management daemon.
+# The first line is the packages that Josh maintains himself!
+# NOTE: email_validator is repeated in setup/questions.sh, so please keep the versions synced.
 hide_output pip3 install --upgrade \
-	rtyaml "email_validator>=1.0.0" free_tls_certificates \
+	rtyaml "email_validator>=1.0.0" "free_tls_certificates>=0.1.3" \
 	"idna>=2.0.0" "cryptography>=1.0.2" boto psutil
-# email_validator is repeated in setup/questions.sh
+
+# duplicity uses python 2 so we need to get the python 2 package of boto to have backups to S3.
+# boto from the Ubuntu package manager is too out-of-date -- it doesn't support the newer
+# S3 api used in some regions, which breaks backups to those regions. See #627, #653.
+hide_output pip install --upgrade boto
 
 # Create a backup directory and a random key for encrypting backups.
 mkdir -p $STORAGE_ROOT/backup
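This hunk, like the rest of the setup scripts, wraps package installs in a `hide_output` helper that suppresses a command's output unless the command fails. A rough Python sketch of that pattern (the actual helper is a shell function in the repo; this is an illustration only, not the project's code):

```python
import subprocess
import sys

def hide_output(cmd):
    # Run cmd, discarding its output unless it exits non-zero, in which
    # case surface everything the command printed and raise an error.
    result = subprocess.run(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    if result.returncode != 0:
        sys.stderr.write(result.stdout.decode(errors="replace"))
        raise RuntimeError("command failed: %s" % " ".join(cmd))

# Quiet on success: the command's output is captured and dropped.
hide_output([sys.executable, "-c", "print('installing...')"])
```

The point of the pattern is a readable install log: successful `apt_install`/`pip3 install` steps stay silent, and only a failing step dumps its full output.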
@@ -18,7 +18,8 @@ if [ -z "$NONINTERACTIVE" ]; then
 	message_box "Mail-in-a-Box Installation" \
 		"Hello and thanks for deploying a Mail-in-a-Box!
 		\n\nI'm going to ask you a few questions.
-		\n\nTo change your answers later, just run 'sudo mailinabox' from the command line."
+		\n\nTo change your answers later, just run 'sudo mailinabox' from the command line.
+		\n\nNOTE: You should only install this on a brand new Ubuntu installation 100% dedicated to Mail-in-a-Box. Mail-in-a-Box will, for example, remove apache2."
 fi
 
 # The box needs a name.
@@ -10,8 +10,8 @@ source setup/preflight.sh
 
 # Ensure Python reads/writes files in UTF-8. If the machine
 # triggers some other locale in Python, like ASCII encoding,
-# Python may not be able to read/write files. Here and in
-# the management daemon startup script.
+# Python may not be able to read/write files. This is also
+# in the management daemon startup script and the cron script.
 
 if [ -z `locale -a | grep en_US.utf8` ]; then
 	# Generate locale if not exists
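The comment being reworded in this hunk describes a real failure mode: Python picks its default text-file encoding from the system locale, so a C/ASCII locale can make file reads and writes raise encoding errors. A small standard-library illustration of why the setup insists on en_US.UTF-8:

```python
import locale

# open() without an explicit encoding uses the locale's preferred
# encoding, so an ASCII locale poisons every implicit file read/write.
default_encoding = locale.getpreferredencoding(False)

# The same failure that would happen implicitly under an ASCII locale:
text = "Ünïcode test"
try:
    text.encode("ascii")
    ascii_can_hold_it = True
except UnicodeEncodeError:
    ascii_can_hold_it = False
# UTF-8, by contrast, round-trips any Python string.
```

Generating and exporting a UTF-8 locale before running any Python, as the script does, sidesteps this for the setup scripts, the management daemon, and the cron script alike.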
@@ -34,10 +34,10 @@ apt-get purge -qq -y roundcube* #NODOC
 # Install Roundcube from source if it is not already present or if it is out of date.
 # Combine the Roundcube version number with the commit hash of vacation_sieve to track
 # whether we have the latest version.
-VERSION=1.1.3
-HASH=4513227bd64eb8564f056817341b1dfe478e215e
+VERSION=1.1.4
+HASH=4883c8bb39fadf8af94ffb09ee426cba9f8ef2e3
 VACATION_SIEVE_VERSION=91ea6f52216390073d1f5b70b5f6bea0bfaee7e5
-PERSISTENT_LOGIN_VERSION=117fbd8f93b56b2bf72ad055193464803ef3bc36
+PERSISTENT_LOGIN_VERSION=1e9d724476a370ce917a2fcd5b3217b0c306c24e
 HTML5_NOTIFIER_VERSION=046eb388dd63b1ec77a3ee485757fc25ae9e684d
 UPDATE_KEY=$VERSION:$VACATION_SIEVE_VERSION:$PERSISTENT_LOGIN_VERSION:$HTML5_NOTIFIER_VERSION:a
 needs_update=0 #NODOC
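This hunk relies on a simple update-detection pattern: concatenate every component version into one UPDATE_KEY string, and reinstall only when the recorded key differs from the current one. A minimal Python sketch of that idea (the real check is shell in the setup script; the file name and function names here are illustrative):

```python
import os

def needs_update(update_key, version_file):
    # Reinstall when there is no recorded key yet, or when any
    # component version changed (which changes the combined key).
    if not os.path.exists(version_file):
        return True
    with open(version_file) as f:
        return f.read().strip() != update_key

def record_update(update_key, version_file):
    # After a successful install, remember the key we installed.
    with open(version_file, "w") as f:
        f.write(update_key)
```

Bumping any single version (Roundcube, a plugin, or the trailing `:a` cache-buster) changes the whole key, so one comparison covers every component.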
@@ -2,7 +2,8 @@
 #
 # This is a tool Josh uses on his box serving mailinabox.email to parse the nginx
 # access log to see how many people are installing Mail-in-a-Box each day, by
-# looking at accesses to the bootstrap.sh script.
+# looking at accesses to the bootstrap.sh script (which is currently at the URL
+# .../setup.sh).
 
 import re, glob, gzip, os.path, json
 import dateutil.parser
@@ -24,9 +25,10 @@ for fn in glob.glob("/var/log/nginx/access.log*"):
 	# Loop through the lines in the access log.
 	with f:
 		for line in f:
-			# Find lines that are GETs on /bootstrap.sh by either curl or wget.
+			# Find lines that are GETs on the bootstrap script by either curl or wget.
 			# (Note that we purposely skip ...?ping=1 requests which is the admin panel querying us for updates.)
-			m = re.match(rb"(?P<ip>\S+) - - \[(?P<date>.*?)\] \"GET /bootstrap.sh HTTP/.*\" 200 \d+ .* \"(?:curl|wget)", line, re.I)
+			# (Also, the URL changed in January 2016, but we'll accept both.)
+			m = re.match(rb"(?P<ip>\S+) - - \[(?P<date>.*?)\] \"GET /(bootstrap.sh|setup.sh) HTTP/.*\" 200 \d+ .* \"(?:curl|wget)", line, re.I)
 			if m:
 				date, time = m.group("date").decode("ascii").split(":", 1)
 				date = dateutil.parser.parse(date).date().isoformat()
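The updated regular expression accepts both the old /bootstrap.sh path and the newer /setup.sh path, while still rejecting `?ping=1` update checks, because a literal ` HTTP` must immediately follow the script name. A quick check of that behavior (the sample log lines are invented for illustration, in nginx's default combined format):

```python
import re

# The updated pattern from the hunk above.
pattern = rb"(?P<ip>\S+) - - \[(?P<date>.*?)\] \"GET /(bootstrap.sh|setup.sh) HTTP/.*\" 200 \d+ .* \"(?:curl|wget)"

# Hypothetical access-log lines: old URL, new URL, and an admin-panel ping.
old_url = b'1.2.3.4 - - [25/Feb/2016:12:00:00 +0000] "GET /bootstrap.sh HTTP/1.1" 200 1234 "-" "curl/7.35.0"'
new_url = b'1.2.3.4 - - [25/Feb/2016:12:00:00 +0000] "GET /setup.sh HTTP/1.1" 200 1234 "-" "wget/1.15"'
ping    = b'1.2.3.4 - - [25/Feb/2016:12:00:00 +0000] "GET /setup.sh?ping=1 HTTP/1.1" 200 1234 "-" "curl/7.35.0"'

# Both install URLs match; the ping line does not, because "?ping=1"
# sits between the script name and " HTTP".
matches = [bool(re.match(pattern, line, re.I)) for line in (old_url, new_url, ping)]
```

Note the pattern and log lines are bytes (`rb"..."`, `b'...'`) because the tool reads the access log in binary mode.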