Merge tag 'v0.53' of https://github.com/mail-in-a-box/mailinabox
v0.53 (April 12, 2021)
----------------------

Software updates:

* Upgraded Roundcube to version 1.4.11 addressing a security issue, and its desktop notifications plugin.
* Upgraded Z-Push (for Exchange/ActiveSync) to version 2.6.2.

Control panel:

* Backblaze B2 is now a supported backup protocol.
* Fixed an issue in the daily mail reports.
* Sort the Custom DNS by zone and qname, and add an option to go back to the old sort order (creation order).

Mail:

* Enable sending DMARC failure reports to senders that request them.

Setup:

* Fixed error when upgrading from Nextcloud 13.

commit 8993ebd3a8

CHANGELOG.md (55 lines changed)

@@ -1,6 +1,57 @@
CHANGELOG
=========

v0.53 (April 12, 2021)
----------------------

Software updates:

* Upgraded Roundcube to version 1.4.11 addressing a security issue, and its desktop notifications plugin.
* Upgraded Z-Push (for Exchange/ActiveSync) to version 2.6.2.

Control panel:

* Backblaze B2 is now a supported backup protocol.
* Fixed an issue in the daily mail reports.
* Sort the Custom DNS by zone and qname, and add an option to go back to the old sort order (creation order).

Mail:

* Enable sending DMARC failure reports to senders that request them.

Setup:

* Fixed error when upgrading from Nextcloud 13.

v0.52 (January 31, 2021)
------------------------

Software updates:

* Upgraded Roundcube to version 1.4.10.
* Upgraded Z-Push to 2.6.1.

Mail:

* Incoming emails with SPF/DKIM/DMARC failures now get a higher spam score, and these messages are more likely to appear in the junk folder, since they are often spam/phishing.
* Fixed the MTA-STS policy file's line endings.

Control panel:

* A new Download button in the control panel's External DNS page can be used to download the required DNS records in zonefile format.
* Fixed the problem where the control panel would report DNS entries as Not Set by increasing a bind query limit.
* Fixed a control panel startup bug on some systems.
* Improved an error message on a DNS lookup timeout.
* A typo was fixed.

DNS:

* The TTL for NS records has been increased to 1 day to comply with some registrar requirements.

System:

* Nextcloud's photos, dashboard, and activity apps are disabled since we only support contacts and calendar.

v0.51 (November 14, 2020)
-------------------------

@@ -13,7 +64,7 @@ Mail:

* The MTA-STS max_age value was increased to the normal one week.

-Control Panel:
+Control panel:

* Two-factor authentication can now be enabled for logins to the control panel. However, keep in mind that many online services (including domain name registrars, cloud server providers, and TLS certificate providers) may allow an attacker to take over your account or issue a fraudulent TLS certificate with only access to your email address, and this new two-factor authentication does not protect access to your inbox. It therefore remains very important that user accounts with administrative email addresses have strong passwords.
* TLS certificate expiry dates are now shown in ISO8601 format for clarity.

@@ -39,7 +90,7 @@ TLS:

* TLS certificates are now provisioned in groups by parent domain to limit easy domain enumeration and make provisioning more resilient to errors for particular domains.

-Control Panel:
+Control panel:

* The control panel API is now fully documented at https://mailinabox.email/api-docs.html.
* User passwords can now have spaces.
README.md (35 lines changed)

@@ -71,6 +71,14 @@ Issues

Changes
-------

### v0.53-quota-0.22-beta

* Update to v0.53 of Mail-in-a-Box

### v0.52-quota-0.22-beta

* Update to v0.52 of Mail-in-a-Box

### v0.51-quota-0.22-beta

* Update to v0.51 of Mail-in-a-Box

@@ -244,36 +252,18 @@ See the [setup guide](https://mailinabox.email/guide.html) for detailed, user-fr

For experts, start with a completely fresh (really, I mean it) Ubuntu 18.04 LTS 64-bit machine. On the machine...

-Clone this repository:
+Clone this repository and checkout the tag corresponding to the most recent release:

    $ git clone https://github.com/mail-in-a-box/mailinabox
    $ cd mailinabox

_Optional:_ Download Josh's PGP key and then verify that the sources were signed by him:

    $ curl -s https://keybase.io/joshdata/key.asc | gpg --import
    gpg: key C10BDD81: public key "Joshua Tauberer <jt@occams.info>" imported

    $ git verify-tag v0.51
    gpg: Signature made ..... using RSA key ID C10BDD81
    gpg: Good signature from "Joshua Tauberer <jt@occams.info>"
    gpg: WARNING: This key is not certified with a trusted signature!
    gpg: There is no indication that the signature belongs to the owner.
    Primary key fingerprint: 5F4C 0E73 13CC D744 693B 2AEA B920 41F4 C10B DD81

You'll get a lot of warnings, but that's OK. Check that the primary key fingerprint matches the fingerprint in the key details at [https://keybase.io/joshdata](https://keybase.io/joshdata) and on his [personal homepage](https://razor.occams.info/). (Of course, if this repository has been compromised you can't trust these instructions.)

Checkout the tag corresponding to the most recent release:

-    $ git checkout v0.51
+    $ git checkout v0.53

Begin the installation.

    $ sudo setup/start.sh

The installation will install, uninstall, and configure packages to turn the machine into a working, good mail server.

For help, DO NOT contact Josh directly --- I don't do tech support by email or tweet (no exceptions).

Post your question on the [discussion forum](https://discourse.mailinabox.email/) instead, where maintainers and Mail-in-a-Box users may be able to help you.

@@ -281,6 +271,7 @@ Post your question on the [discussion forum](https://discourse.mailinabox.email/

Note that while we want everything to "just work," we can't control the rest of the Internet. Other mail services might block or spam-filter email sent from your Mail-in-a-Box.
This is a challenge faced by everyone who runs their own mail server, with or without Mail-in-a-Box. See our discussion forum for tips about that.


Contributing and Development
----------------------------

@ -15,7 +15,7 @@ info:
|
|||
license:
|
||||
name: CC0 1.0 Universal
|
||||
url: https://creativecommons.org/publicdomain/zero/1.0/legalcode
|
||||
version: 0.47.0
|
||||
version: 0.51.0
|
||||
x-logo:
|
||||
url: https://mailinabox.email/static/logo.png
|
||||
altText: Mail-in-a-Box logo
|
||||
|
@ -743,6 +743,38 @@ paths:
|
|||
text/html:
|
||||
schema:
|
||||
type: string
|
||||
/dns/zonefile/{zone}:
|
||||
parameters:
|
||||
- in: path
|
||||
name: zone
|
||||
schema:
|
||||
$ref: '#/components/schemas/Hostname'
|
||||
required: true
|
||||
description: Hostname
|
||||
get:
|
||||
tags:
|
||||
- DNS
|
||||
summary: Get DNS zonefile
|
||||
description: Returns a DNS zone file for a hostname.
|
||||
operationId: getDnsZonefile
|
||||
x-codeSamples:
|
||||
- lang: curl
|
||||
source: |
|
||||
curl -X GET "https://{host}/admin/dns/zonefile/<zone>" \
|
||||
-u "<email>:<password>"
|
||||
responses:
|
||||
200:
|
||||
description: Successful operation
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: '#/components/schemas/DNSZonefileResponse'
|
||||
403:
|
||||
description: Forbidden
|
||||
content:
|
||||
text/html:
|
||||
schema:
|
||||
type: string
|
||||
/dns/update:
|
||||
post:
|
||||
tags:
|
||||
|
@ -1781,7 +1813,7 @@ components:
|
|||
text/plain:
|
||||
schema:
|
||||
type: string
|
||||
example: 1.2.3.4
|
||||
example: '1.2.3.4'
|
||||
description: The value of the DNS record.
|
||||
example: '1.2.3.4'
|
||||
schemas:
|
||||
|
@ -2050,6 +2082,8 @@ components:
|
|||
items:
|
||||
$ref: '#/components/schemas/Hostname'
|
||||
description: DNS zones response.
|
||||
DNSZonefileResponse:
|
||||
type: string
|
||||
DNSSecondaryNameserverResponse:
|
||||
type: object
|
||||
required:
|
||||
|
@ -2663,13 +2697,6 @@ components:
|
|||
type: string
|
||||
MfaEnableSuccessResponse:
|
||||
type: string
|
||||
MfaEnableBadRequestResponse:
|
||||
type: object
|
||||
required:
|
||||
- error
|
||||
properties:
|
||||
error:
|
||||
type: string
|
||||
MfaDisableRequest:
|
||||
type: object
|
||||
properties:
|
||||
|
|
|
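The new `/dns/zonefile/{zone}` endpoint documented above can also be called from a script. A minimal sketch using Python's `requests` (the hostname, zone, and login are placeholders; the daemon serves the zone file as plain text):

```python
import requests  # third-party HTTP client, assumed available

host = "box.example.com"   # placeholder: your box's hostname
zone = "example.com"       # placeholder: a zone hosted on the box

resp = requests.get(
    f"https://{host}/admin/dns/zonefile/{zone}",
    auth=("admin@example.com", "password"),  # control panel login, HTTP basic auth
)
resp.raise_for_status()
print(resp.text)  # the zone file in BIND zonefile format
```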
@@ -1,4 +1,4 @@
-version: STSv1
-mode: MODE
-mx: PRIMARY_HOSTNAME
-max_age: 604800
+version: STSv1
+mode: MODE
+mx: PRIMARY_HOSTNAME
+max_age: 604800

@@ -456,6 +456,23 @@ def list_target_files(config):
            raise ValueError(e.reason)

        return [(key.name[len(path):], key.size) for key in bucket.list(prefix=path)]
    elif target.scheme == 'b2':
        from b2sdk.v1 import InMemoryAccountInfo, B2Api
        from b2sdk.v1.exception import NonExistentBucket
        info = InMemoryAccountInfo()
        b2_api = B2Api(info)

        # Extract information from target
        b2_application_keyid = target.netloc[:target.netloc.index(':')]
        b2_application_key = target.netloc[target.netloc.index(':')+1:target.netloc.index('@')]
        b2_bucket = target.netloc[target.netloc.index('@')+1:]

        try:
            b2_api.authorize_account("production", b2_application_keyid, b2_application_key)
            bucket = b2_api.get_bucket_by_name(b2_bucket)
        except NonExistentBucket as e:
            raise ValueError("B2 Bucket does not exist. Please double check your information!")
        return [(key.file_name, key.size) for key, _ in bucket.ls()]

    else:
        raise ValueError(config["target"])

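The `b2://` target above packs the application key ID, application key, and bucket name into the URL's netloc, which the code slices apart by hand. A small stand-alone sketch of the same parse using only the standard library (the credentials below are made up):

```python
from urllib.parse import urlsplit

def parse_b2_target(target):
    # target looks like b2://<application_keyid>:<application_key>@<bucket>
    netloc = urlsplit(target).netloc
    keyid, rest = netloc.split(":", 1)
    key, bucket = rest.split("@", 1)
    return keyid, key, bucket

# Hypothetical values, for illustration only.
print(parse_b2_target("b2://000abc123:K000secretKey@my-miab-backups"))
# -> ('000abc123', 'K000secretKey', 'my-miab-backups')
```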
@ -1,3 +1,12 @@
|
|||
#!/usr/local/lib/mailinabox/env/bin/python3
|
||||
#
|
||||
# During development, you can start the Mail-in-a-Box control panel
|
||||
# by running this script, e.g.:
|
||||
#
|
||||
# service mailinabox stop # stop the system process
|
||||
# DEBUG=1 management/daemon.py
|
||||
# service mailinabox start # when done debugging, start it up again
|
||||
|
||||
import os, os.path, re, json, time
|
||||
import multiprocessing.pool, subprocess
|
||||
|
||||
|
@ -292,17 +301,50 @@ def dns_set_secondary_nameserver():
|
|||
@app.route('/dns/custom')
|
||||
@authorized_personnel_only
|
||||
def dns_get_records(qname=None, rtype=None):
|
||||
from dns_update import get_custom_dns_config
|
||||
return json_response([
|
||||
{
|
||||
"qname": r[0],
|
||||
"rtype": r[1],
|
||||
"value": r[2],
|
||||
}
|
||||
for r in get_custom_dns_config(env)
|
||||
if r[0] != "_secondary_nameserver"
|
||||
and (not qname or r[0] == qname)
|
||||
and (not rtype or r[1] == rtype) ])
|
||||
# Get the current set of custom DNS records.
|
||||
from dns_update import get_custom_dns_config, get_dns_zones
|
||||
records = get_custom_dns_config(env, only_real_records=True)
|
||||
|
||||
# Filter per the arguments for the more complex GET routes below.
|
||||
records = [r for r in records
|
||||
if (not qname or r[0] == qname)
|
||||
and (not rtype or r[1] == rtype) ]
|
||||
|
||||
# Make a better data structure.
|
||||
records = [
|
||||
{
|
||||
"qname": r[0],
|
||||
"rtype": r[1],
|
||||
"value": r[2],
|
||||
"sort-order": { },
|
||||
} for r in records ]
|
||||
|
||||
# To help with grouping by zone in qname sorting, label each record with which zone it is in.
|
||||
# There's an inconsistency in how we handle zones in get_dns_zones and in sort_domains, so
|
||||
# do this first before sorting the domains within the zones.
|
||||
zones = utils.sort_domains([z[0] for z in get_dns_zones(env)], env)
|
||||
for r in records:
|
||||
for z in zones:
|
||||
if r["qname"] == z or r["qname"].endswith("." + z):
|
||||
r["zone"] = z
|
||||
break
|
||||
|
||||
# Add sorting information. The 'created' order follows the order in the YAML file on disk,
|
||||
# which tracks the order entries were added in the control panel since we append to the end.
|
||||
# The 'qname' sort order sorts by our standard domain name sort (by zone then by qname),
|
||||
# then by rtype, and last by the original order in the YAML file (since sorting by value
|
||||
# may not make sense, unless we parse IP addresses, for example).
|
||||
for i, r in enumerate(records):
|
||||
r["sort-order"]["created"] = i
|
||||
domain_sort_order = utils.sort_domains([r["qname"] for r in records], env)
|
||||
for i, r in enumerate(sorted(records, key = lambda r : (
|
||||
zones.index(r["zone"]),
|
||||
domain_sort_order.index(r["qname"]),
|
||||
r["rtype"]))):
|
||||
r["sort-order"]["qname"] = i
|
||||
|
||||
# Return.
|
||||
return json_response(records)
|
||||
|
||||
@app.route('/dns/custom/<qname>', methods=['GET', 'POST', 'PUT', 'DELETE'])
|
||||
@app.route('/dns/custom/<qname>/<rtype>', methods=['GET', 'POST', 'PUT', 'DELETE'])
|
||||
|
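Each record returned by the GET `/dns/custom` route above now carries a `zone` label and a `sort-order` object with both orderings. A short client-side sketch of consuming that response (hostname and login are placeholders; HTTP basic auth as in the API docs):

```python
import requests  # assumed available

resp = requests.get("https://box.example.com/admin/dns/custom",
                    auth=("admin@example.com", "password"))
records = resp.json()
# Each entry looks roughly like:
# {"qname": "www.example.com", "rtype": "A", "value": "203.0.113.1",
#  "zone": "example.com", "sort-order": {"created": 0, "qname": 3}}
for r in sorted(records, key=lambda r: r["sort-order"]["qname"]):
    print(r.get("zone", ""), r["qname"], r["rtype"], r["value"])
```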
@ -362,6 +404,12 @@ def dns_get_dump():
|
|||
from dns_update import build_recommended_dns
|
||||
return json_response(build_recommended_dns(env))
|
||||
|
||||
@app.route('/dns/zonefile/<zone>')
|
||||
@authorized_personnel_only
|
||||
def dns_get_zonefile(zone):
|
||||
from dns_update import get_dns_zonefile
|
||||
return Response(get_dns_zonefile(zone, env), status=200, mimetype='text/plain')
|
||||
|
||||
# SSL
|
||||
|
||||
@app.route('/ssl/status')
|
||||
|
@ -719,7 +767,22 @@ def log_failed_login(request):
|
|||
# APP
|
||||
|
||||
if __name__ == '__main__':
|
||||
if "DEBUG" in os.environ: app.debug = True
|
||||
if "DEBUG" in os.environ:
|
||||
# Turn on Flask debugging.
|
||||
app.debug = True
|
||||
|
||||
# Use a stable-ish master API key so that login sessions don't restart on each run.
|
||||
# Use /etc/machine-id to seed the key with a stable secret, but add something
|
||||
# and hash it to prevent possibly exposing the machine id, using the time so that
|
||||
# the key is not valid indefinitely.
|
||||
import hashlib
|
||||
with open("/etc/machine-id") as f:
|
||||
api_key = f.read()
|
||||
api_key += "|" + str(int(time.time() / (60*60*2)))
|
||||
hasher = hashlib.sha1()
|
||||
hasher.update(api_key.encode("ascii"))
|
||||
auth_service.key = hasher.hexdigest()
|
||||
|
||||
if "APIKEY" in os.environ: auth_service.key = os.environ["APIKEY"]
|
||||
|
||||
if not app.debug:
|
||||
|
|
|
@ -470,14 +470,14 @@ def write_nsd_zone(domain, zonefile, records, env, force):
|
|||
|
||||
zone = """
|
||||
$ORIGIN {domain}.
|
||||
$TTL 1800 ; default time to live
|
||||
$TTL 86400 ; default time to live
|
||||
|
||||
@ IN SOA ns1.{primary_domain}. hostmaster.{primary_domain}. (
|
||||
__SERIAL__ ; serial number
|
||||
7200 ; Refresh (secondary nameserver update interval)
|
||||
1800 ; Retry (when refresh fails, how often to try again)
|
||||
86400 ; Retry (when refresh fails, how often to try again)
|
||||
1209600 ; Expire (when refresh fails, how long secondary nameserver will keep records around anyway)
|
||||
1800 ; Negative TTL (how long negative responses are cached)
|
||||
86400 ; Negative TTL (how long negative responses are cached)
|
||||
)
|
||||
"""
|
||||
|
||||
|
@ -564,6 +564,17 @@ $TTL 1800 ; default time to live
|
|||
|
||||
return True # file is updated
|
||||
|
||||
def get_dns_zonefile(zone, env):
|
||||
for domain, fn in get_dns_zones(env):
|
||||
if zone == domain:
|
||||
break
|
||||
else:
|
||||
raise ValueError("%s is not a domain name that corresponds to a zone." % zone)
|
||||
|
||||
nsd_zonefile = "/etc/nsd/zones/" + fn
|
||||
with open(nsd_zonefile, "r") as f:
|
||||
return f.read()
|
||||
|
||||
########################################################################
|
||||
|
||||
def write_nsd_conf(zonefiles, additional_records, env):
|
||||
|
@ -742,7 +753,7 @@ def write_opendkim_tables(domains, env):
|
|||
|
||||
########################################################################
|
||||
|
||||
def get_custom_dns_config(env):
|
||||
def get_custom_dns_config(env, only_real_records=False):
|
||||
try:
|
||||
custom_dns = rtyaml.load(open(os.path.join(env['STORAGE_ROOT'], 'dns/custom.yaml')))
|
||||
if not isinstance(custom_dns, dict): raise ValueError() # caught below
|
||||
|
@ -750,6 +761,8 @@ def get_custom_dns_config(env):
|
|||
return [ ]
|
||||
|
||||
for qname, value in custom_dns.items():
|
||||
if qname == "_secondary_nameserver" and only_real_records: continue # skip fake record
|
||||
|
||||
# Short form. Mapping a domain name to a string is short-hand
|
||||
# for creating A records.
|
||||
if isinstance(value, str):
|
||||
|
|
|
@ -44,9 +44,8 @@ TIME_DELTAS = OrderedDict([
|
|||
('today', datetime.datetime.now() - datetime.datetime.now().replace(hour=0, minute=0, second=0))
|
||||
])
|
||||
|
||||
# Start date > end date!
|
||||
START_DATE = datetime.datetime.now()
|
||||
END_DATE = None
|
||||
END_DATE = NOW = datetime.datetime.now()
|
||||
START_DATE = None
|
||||
|
||||
VERBOSE = False
|
||||
|
||||
|
@ -121,7 +120,7 @@ def scan_mail_log(env):
|
|||
pass
|
||||
|
||||
print("Scanning logs from {:%Y-%m-%d %H:%M:%S} to {:%Y-%m-%d %H:%M:%S}".format(
|
||||
END_DATE, START_DATE)
|
||||
START_DATE, END_DATE)
|
||||
)
|
||||
|
||||
# Scan the lines in the log files until the date goes out of range
|
||||
|
@ -253,7 +252,7 @@ def scan_mail_log(env):
|
|||
|
||||
if collector["postgrey"]:
|
||||
msg = "Greylisted Email {:%Y-%m-%d %H:%M:%S} and {:%Y-%m-%d %H:%M:%S}"
|
||||
print_header(msg.format(END_DATE, START_DATE))
|
||||
print_header(msg.format(START_DATE, END_DATE))
|
||||
|
||||
print(textwrap.fill(
|
||||
"The following mail was greylisted, meaning the emails were temporarily rejected. "
|
||||
|
@ -291,7 +290,7 @@ def scan_mail_log(env):
|
|||
|
||||
if collector["rejected"]:
|
||||
msg = "Blocked Email {:%Y-%m-%d %H:%M:%S} and {:%Y-%m-%d %H:%M:%S}"
|
||||
print_header(msg.format(END_DATE, START_DATE))
|
||||
print_header(msg.format(START_DATE, END_DATE))
|
||||
|
||||
data = OrderedDict(sorted(collector["rejected"].items(), key=email_sort))
|
||||
|
||||
|
@ -344,20 +343,20 @@ def scan_mail_log_line(line, collector):
|
|||
|
||||
# Replaced the dateutil parser with a less clever way of parsing that is roughly 4 times faster.
|
||||
# date = dateutil.parser.parse(date)
|
||||
|
||||
# date = datetime.datetime.strptime(date, '%b %d %H:%M:%S')
|
||||
# date = date.replace(START_DATE.year)
|
||||
|
||||
# strptime fails on Feb 29 if correct year is not provided. See https://bugs.python.org/issue26460
|
||||
date = datetime.datetime.strptime(str(START_DATE.year) + ' ' + date, '%Y %b %d %H:%M:%S')
|
||||
# print("date:", date)
|
||||
|
||||
# strptime fails on Feb 29 with ValueError: day is out of range for month if correct year is not provided.
|
||||
# See https://bugs.python.org/issue26460
|
||||
date = datetime.datetime.strptime(str(NOW.year) + ' ' + date, '%Y %b %d %H:%M:%S')
|
||||
# if log date in future, step back a year
|
||||
if date > NOW:
|
||||
date = date.replace(year = NOW.year - 1)
|
||||
#print("date:", date)
|
||||
|
||||
# Check if the found date is within the time span we are scanning
|
||||
# END_DATE < START_DATE
|
||||
if date > START_DATE:
|
||||
if date > END_DATE:
|
||||
# Don't process, and halt
|
||||
return False
|
||||
elif date < END_DATE:
|
||||
elif date < START_DATE:
|
||||
# Don't process, but continue
|
||||
return True
|
||||
|
||||
|
@ -606,7 +605,7 @@ def email_sort(email):
|
|||
|
||||
|
||||
def valid_date(string):
|
||||
""" Validate the given date string fetched from the --startdate argument """
|
||||
""" Validate the given date string fetched from the --enddate argument """
|
||||
try:
|
||||
date = dateutil.parser.parse(string)
|
||||
except ValueError:
|
||||
|
@ -820,12 +819,14 @@ if __name__ == "__main__":
|
|||
|
||||
parser.add_argument("-t", "--timespan", choices=TIME_DELTAS.keys(), default='today',
|
||||
metavar='<time span>',
|
||||
help="Time span to scan, going back from the start date. Possible values: "
|
||||
help="Time span to scan, going back from the end date. Possible values: "
|
||||
"{}. Defaults to 'today'.".format(", ".join(list(TIME_DELTAS.keys()))))
|
||||
parser.add_argument("-d", "--startdate", action="store", dest="startdate",
|
||||
type=valid_date, metavar='<start date>',
|
||||
help="Date and time to start scanning the log file from. If no date is "
|
||||
"provided, scanning will start from the current date and time.")
|
||||
# keep the --startdate arg for backward compatibility
|
||||
parser.add_argument("-d", "--enddate", "--startdate", action="store", dest="enddate",
|
||||
type=valid_date, metavar='<end date>',
|
||||
help="Date and time to end scanning the log file. If no date is "
|
||||
"provided, scanning will end at the current date and time. "
|
||||
"Alias --startdate is for compatibility.")
|
||||
parser.add_argument("-u", "--users", action="store", dest="users",
|
||||
metavar='<email1,email2,email...>',
|
||||
help="Comma separated list of (partial) email addresses to filter the "
|
||||
|
@ -837,13 +838,13 @@ if __name__ == "__main__":
|
|||
|
||||
args = parser.parse_args()
|
||||
|
||||
if args.startdate is not None:
|
||||
START_DATE = args.startdate
|
||||
if args.enddate is not None:
|
||||
END_DATE = args.enddate
|
||||
if args.timespan == 'today':
|
||||
args.timespan = 'day'
|
||||
print("Setting start date to {}".format(START_DATE))
|
||||
print("Setting end date to {}".format(END_DATE))
|
||||
|
||||
END_DATE = START_DATE - TIME_DELTAS[args.timespan]
|
||||
START_DATE = END_DATE - TIME_DELTAS[args.timespan]
|
||||
|
||||
VERBOSE = args.verbose
|
||||
|
||||
|
|
|
@@ -293,6 +293,8 @@ def run_network_checks(env, output):
    zen = query_dns(rev_ip4+'.zen.spamhaus.org', 'A', nxdomain=None)
    if zen is None:
        output.print_ok("IP address is not blacklisted by zen.spamhaus.org.")
    elif zen == "[timeout]":
        output.print_warning("Connection to zen.spamhaus.org timed out. We could not determine whether your server's IP address is blacklisted. Please try again later.")
    else:
        output.print_error("""The IP address of this machine %s is listed in the Spamhaus Block List (code %s),
            which may prevent recipients from receiving your email. See http://www.spamhaus.org/query/ip/%s."""

@@ -678,6 +680,8 @@ def check_mail_domain(domain, env, output):
    dbl = query_dns(domain+'.dbl.spamhaus.org', "A", nxdomain=None)
    if dbl is None:
        output.print_ok("Domain is not blacklisted by dbl.spamhaus.org.")
    elif dbl == "[timeout]":
        output.print_warning("Connection to dbl.spamhaus.org timed out. We could not determine whether the domain {} is blacklisted. Please try again later.".format(domain))
    else:
        output.print_error("""This domain is listed in the Spamhaus Domain Block List (code %s),
            which may prevent recipients from receiving your mail.
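The zen.spamhaus.org check above looks up an A record for the server's IPv4 address written octet-reversed (`rev_ip4`). A stand-alone sketch of forming that kind of DNSBL query (the address is a documentation IP, and using dnspython directly here is illustrative; the real check goes through the project's `query_dns` helper):

```python
import dns.exception
import dns.resolver  # dnspython, already a dependency of the management tools

ip = "203.0.113.25"                          # example address (TEST-NET-3)
rev_ip4 = ".".join(reversed(ip.split(".")))  # "25.113.0.203"
query = rev_ip4 + ".zen.spamhaus.org"
try:
    dns.resolver.resolve(query, "A")         # dnspython 2.x; 1.x uses .query()
    print("listed")                          # any answer means the IP is on the blocklist
except dns.resolver.NXDOMAIN:
    print("not listed")
except dns.exception.Timeout:
    print("timeout; listing status unknown")
```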
@@ -153,8 +153,8 @@ function show_aliases() {
        function(r) {
            $('#alias_table tbody').html("");
            for (var i = 0; i < r.length; i++) {
-               var hdr = $("<tr><td colspan='3'><h4/></td></tr>");
-               hdr.find('h4').text(r[i].domain);
+               var hdr = $("<tr><th colspan='4' style='background-color: #EEE'></th></tr>");
+               hdr.find('th').text(r[i].domain);
                $('#alias_table tbody').append(hdr);

                for (var k = 0; k < r[i].aliases.length; k++) {
|
@ -57,7 +57,13 @@
|
|||
</div>
|
||||
</form>
|
||||
|
||||
<table id="custom-dns-current" class="table" style="width: auto; display: none">
|
||||
<div style="text-align: right; font-size: 90%; margin-top: 1em;">
|
||||
sort by:
|
||||
<a href="#" onclick="window.miab_custom_dns_data_sort_order='qname'; show_current_custom_dns_update_after_sort(); return false;">domain name</a>
|
||||
|
|
||||
<a href="#" onclick="window.miab_custom_dns_data_sort_order='created'; show_current_custom_dns_update_after_sort(); return false;">created</a>
|
||||
</div>
|
||||
<table id="custom-dns-current" class="table" style="width: auto; display: none; margin-top: 0;">
|
||||
<thead>
|
||||
<th>Domain Name</th>
|
||||
<th>Record Type</th>
|
||||
|
@ -89,7 +95,7 @@
|
|||
<div class="form-group">
|
||||
<div class="col-sm-offset-1 col-sm-11">
|
||||
<p class="small">
|
||||
Multiple secondary servers can be separated with commas or spaces (i.e., <code>ns2.hostingcompany.com ns3.hostingcompany.com</code>).
|
||||
Multiple secondary servers can be separated with commas or spaces (i.e., <code>ns2.hostingcompany.com ns3.hostingcompany.com</code>).
|
||||
To enable zone transfers to additional servers without listing them as secondary nameservers, add an IP address or subnet using <code>xfr:10.20.30.40</code> or <code>xfr:10.0.0.0/8</code>.
|
||||
</p>
|
||||
<p id="secondarydns-clear-instructions" style="display: none" class="small">
|
||||
|
@ -192,36 +198,38 @@ function show_current_custom_dns() {
|
|||
$('#custom-dns-current').fadeIn();
|
||||
else
|
||||
$('#custom-dns-current').fadeOut();
|
||||
|
||||
var reverse_fqdn = function(el) {
|
||||
el.qname = el.qname.split('.').reverse().join('.');
|
||||
return el;
|
||||
}
|
||||
var sort = function(a, b) {
|
||||
if(a.qname === b.qname) {
|
||||
if(a.rtype === b.rtype) {
|
||||
return a.value > b.value ? 1 : -1;
|
||||
}
|
||||
return a.rtype > b.rtype ? 1 : -1;
|
||||
}
|
||||
return a.qname > b.qname ? 1 : -1;
|
||||
}
|
||||
window.miab_custom_dns_data = data;
|
||||
show_current_custom_dns_update_after_sort();
|
||||
});
|
||||
}
|
||||
|
||||
data = data.map(reverse_fqdn).sort(sort).map(reverse_fqdn);
|
||||
function show_current_custom_dns_update_after_sort() {
|
||||
var data = window.miab_custom_dns_data;
|
||||
var sort_key = window.miab_custom_dns_data_sort_order || "qname";
|
||||
|
||||
$('#custom-dns-current').find("tbody").text('');
|
||||
data.sort(function(a, b) { return a["sort-order"][sort_key] - b["sort-order"][sort_key] });
|
||||
|
||||
var tbody = $('#custom-dns-current').find("tbody");
|
||||
tbody.text('');
|
||||
var last_zone = null;
|
||||
for (var i = 0; i < data.length; i++) {
|
||||
if (sort_key == "qname" && data[i].zone != last_zone) {
|
||||
var r = $("<tr><th colspan=4 style='background-color: #EEE'></th></tr>");
|
||||
r.find("th").text(data[i].zone);
|
||||
tbody.append(r);
|
||||
last_zone = data[i].zone;
|
||||
}
|
||||
|
||||
var tr = $("<tr/>");
|
||||
$('#custom-dns-current').find("tbody").append(tr);
|
||||
tbody.append(tr);
|
||||
tr.attr('data-qname', data[i].qname);
|
||||
tr.attr('data-rtype', data[i].rtype);
|
||||
tr.attr('data-value', data[i].value);
|
||||
tr.append($('<td class="long"/>').text(data[i].qname));
|
||||
tr.append($('<td/>').text(data[i].rtype));
|
||||
tr.append($('<td class="long"/>').text(data[i].value));
|
||||
tr.append($('<td class="long" style="max-width: 40em"/>').text(data[i].value));
|
||||
tr.append($('<td>[<a href="#" onclick="return delete_custom_dns_record(this)">delete</a>]</td>'));
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
function delete_custom_dns_record(elem) {
|
||||
|
|
|
@ -42,6 +42,19 @@
|
|||
You may need to adopt this technique when adding DomainKeys. Use a tool like <code>named-checkzone</code> to validate your zone file.
|
||||
</p>
|
||||
|
||||
<h3>Download zonefile</h3>
|
||||
<p>You can download your zonefiles here or use the table of records below.</p>
|
||||
<form class="form-inline" role="form" onsubmit="do_download_zonefile(); return false;">
|
||||
<div class="form-group">
|
||||
<div class="form-group">
|
||||
<label for="downloadZonefile" class="control-label sr-only">Zone</label>
|
||||
<select id="downloadZonefile" class="form-control" style="width: auto"> </select>
|
||||
</div>
|
||||
<button type="submit" class="btn btn-primary">Download</button>
|
||||
</div>
|
||||
</form>
|
||||
|
||||
<h3>Records</h3>
|
||||
|
||||
<table id="external_dns_settings" class="table">
|
||||
<thead>
|
||||
|
@ -57,6 +70,18 @@
|
|||
|
||||
<script>
|
||||
function show_external_dns() {
|
||||
api(
|
||||
"/dns/zones",
|
||||
"GET",
|
||||
{ },
|
||||
function(data) {
|
||||
var zones = $('#downloadZonefile');
|
||||
zones.text('');
|
||||
for (var j = 0; j < data.length; j++) {
|
||||
zones.append($('<option/>').text(data[j]));
|
||||
}
|
||||
});
|
||||
|
||||
$('#external_dns_settings tbody').html("<tr><td colspan='2' class='text-muted'>Loading...</td></tr>")
|
||||
api(
|
||||
"/dns/dump",
|
||||
|
@ -84,4 +109,19 @@ function show_external_dns() {
|
|||
}
|
||||
})
|
||||
}
|
||||
|
||||
function do_download_zonefile() {
|
||||
var zone = $('#downloadZonefile').val();
|
||||
|
||||
api(
|
||||
"/dns/zonefile/"+ zone,
|
||||
"GET",
|
||||
{},
|
||||
function(data) {
|
||||
show_modal_error("Download Zonefile", $("<pre/>").text(data));
|
||||
},
|
||||
function(err) {
|
||||
show_modal_error("Download Zonefile (Error)", $("<pre/>").text(err));
|
||||
});
|
||||
}
|
||||
</script>
|
||||
|
|
|
@ -18,6 +18,7 @@
|
|||
<option value="local">{{hostname}}</option>
|
||||
<option value="rsync">rsync</option>
|
||||
<option value="s3">Amazon S3</option>
|
||||
<option value="b2">Backblaze B2</option>
|
||||
</select>
|
||||
</div>
|
||||
</div>
|
||||
|
@ -111,6 +112,31 @@
|
|||
<input type="text" class="form-control" rows="1" id="backup-target-pass">
|
||||
</div>
|
||||
</div>
|
||||
<!-- Backblaze -->
|
||||
<div class="form-group backup-target-b2">
|
||||
<div class="col-sm-10 col-sm-offset-2">
|
||||
<p>Backups are stored in a <a href="https://www.backblaze.com/" target="_blank" rel="noreferrer">Backblaze</a> B2 bucket. You must have a Backblaze account already.</p>
|
||||
<p>You MUST manually copy the encryption password from <tt class="backup-encpassword-file"></tt> to a safe and secure location. You will need this file to decrypt backup files. It is NOT stored in your Backblaze B2 bucket.</p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="form-group backup-target-b2">
|
||||
<label for="backup-target-b2-user" class="col-sm-2 control-label">B2 Application KeyID</label>
|
||||
<div class="col-sm-8">
|
||||
<input type="text" class="form-control" rows="1" id="backup-target-b2-user">
|
||||
</div>
|
||||
</div>
|
||||
<div class="form-group backup-target-b2">
|
||||
<label for="backup-target-b2-pass" class="col-sm-2 control-label">B2 Application Key</label>
|
||||
<div class="col-sm-8">
|
||||
<input type="text" class="form-control" rows="1" id="backup-target-b2-pass">
|
||||
</div>
|
||||
</div>
|
||||
<div class="form-group backup-target-b2">
|
||||
<label for="backup-target-b2-bucket" class="col-sm-2 control-label">B2 Bucket</label>
|
||||
<div class="col-sm-8">
|
||||
<input type="text" class="form-control" rows="1" id="backup-target-b2-bucket">
|
||||
</div>
|
||||
</div>
|
||||
<!-- Common -->
|
||||
<div class="form-group backup-target-local backup-target-rsync backup-target-s3">
|
||||
<label for="min-age" class="col-sm-2 control-label">Retention Days:</label>
|
||||
|
@ -144,7 +170,7 @@
|
|||
|
||||
function toggle_form() {
|
||||
var target_type = $("#backup-target-type").val();
|
||||
$(".backup-target-local, .backup-target-rsync, .backup-target-s3").hide();
|
||||
$(".backup-target-local, .backup-target-rsync, .backup-target-s3, .backup-target-b2").hide();
|
||||
$(".backup-target-" + target_type).show();
|
||||
|
||||
init_inputs(target_type);
|
||||
|
@ -215,7 +241,7 @@ function show_system_backup() {
|
|||
}
|
||||
|
||||
function show_custom_backup() {
|
||||
$(".backup-target-local, .backup-target-rsync, .backup-target-s3").hide();
|
||||
$(".backup-target-local, .backup-target-rsync, .backup-target-s3, .backup-target-b2").hide();
|
||||
api(
|
||||
"/system/backup/config",
|
||||
"GET",
|
||||
|
@ -245,6 +271,15 @@ function show_custom_backup() {
|
|||
var host = hostpath.shift();
|
||||
$("#backup-target-s3-host").val(host);
|
||||
$("#backup-target-s3-path").val(hostpath.join('/'));
|
||||
} else if (r.target.substring(0, 5) == "b2://") {
|
||||
$("#backup-target-type").val("b2");
|
||||
var targetPath = r.target.substring(5);
|
||||
var b2_application_keyid = targetPath.split(':')[0];
|
||||
var b2_applicationkey = targetPath.split(':')[1].split('@')[0];
|
||||
var b2_bucket = targetPath.split('@')[1];
|
||||
$("#backup-target-b2-user").val(b2_application_keyid);
|
||||
$("#backup-target-b2-pass").val(b2_applicationkey);
|
||||
$("#backup-target-b2-bucket").val(b2_bucket);
|
||||
}
|
||||
toggle_form()
|
||||
})
|
||||
|
@ -264,6 +299,11 @@ function set_custom_backup() {
|
|||
target = "rsync://" + $("#backup-target-rsync-user").val() + "@" + $("#backup-target-rsync-host").val()
|
||||
+ "/" + $("#backup-target-rsync-path").val();
|
||||
target_user = '';
|
||||
} else if (target_type == "b2") {
|
||||
target = 'b2://' + $('#backup-target-b2-user').val() + ':' + $('#backup-target-b2-pass').val()
|
||||
+ '@' + $('#backup-target-b2-bucket').val()
|
||||
target_user = '';
|
||||
target_pass = '';
|
||||
}
|
||||
|
||||
|
||||
|
@ -303,4 +343,4 @@ function init_inputs(target_type) {
|
|||
set_host($('#backup-target-s3-host-select').val());
|
||||
}
|
||||
}
|
||||
</script>
|
||||
</script>
|
|
@@ -1,7 +1,6 @@
<h2>Users</h2>

<style>
-#user_table h4 { margin: 1em 0 0 0; }
#user_table tr.account_inactive td.address { color: #888; text-decoration: line-through; }
#user_table .actions { margin-top: .33em; font-size: 95%; }
#user_table .account_inactive .if_active { display: none; }

@@ -36,7 +35,7 @@
<button type="submit" class="btn btn-primary">Add User</button>
</form>
<ul style="margin-top: 1em; padding-left: 1.5em; font-size: 90%;">
-<li>Passwords must be at least eight characters consisting of English lettters and numbers only. For best results, <a href="#" onclick="return generate_random_password()">generate a random password</a>.</li>
+<li>Passwords must be at least eight characters consisting of English letters and numbers only. For best results, <a href="#" onclick="return generate_random_password()">generate a random password</a>.</li>
<li>Use <a href="#" onclick="return show_panel('aliases')">aliases</a> to create email addresses that forward to existing accounts.</li>
<li>Administrators get access to this control panel.</li>
<li>User accounts cannot contain any international (non-ASCII) characters, but <a href="#" onclick="return show_panel('aliases');">aliases</a> can.</li>

@@ -183,8 +182,8 @@ function show_users() {
    function(r) {
        $('#user_table tbody').html("");
        for (var i = 0; i < r.length; i++) {
-           var hdr = $("<tr><td colspan='6'><h4/></td></tr>");
-           hdr.find('h4').text(r[i].domain);
+           var hdr = $("<tr><th colspan='6' style='background-color: #EEE'></th></tr>");
+           hdr.find('th').text(r[i].domain);
            $('#user_table tbody').append(hdr);

            for (var k = 0; k < r[i].users.length; k++) {
@@ -20,7 +20,7 @@ if [ -z "$TAG" ]; then
    # want to display in status checks.
    if [ "`lsb_release -d | sed 's/.*:\s*//' | sed 's/18\.04\.[0-9]/18.04/' `" == "Ubuntu 18.04 LTS" ]; then
        # This machine is running Ubuntu 18.04.
-       TAG=v0.51-quota-0.22-beta
+       TAG=v0.53-quota-0.22-beta

    elif [ "`lsb_release -d | sed 's/.*:\s*//' | sed 's/14\.04\.[0-9]/14.04/' `" == "Ubuntu 14.04 LTS" ]; then
        # This machine is running Ubuntu 14.04.

@@ -62,7 +62,40 @@ chmod go-rwx $STORAGE_ROOT/mail/dkim

tools/editconf.py /etc/opendmarc.conf -s \
    "Syslog=true" \
-   "Socket=inet:8893@[127.0.0.1]"
+   "Socket=inet:8893@[127.0.0.1]" \
+   "FailureReports=true"

# SPFIgnoreResults causes the filter to ignore any SPF results in the header
# of the message. This is useful if you want the filter to perform SPF checks
# itself, or because you don't trust the arriving header. This added header is
# used by spamassassin to evaluate the mail for spamminess.

tools/editconf.py /etc/opendmarc.conf -s \
    "SPFIgnoreResults=true"

# SPFSelfValidate causes the filter to perform a fallback SPF check itself
# when it can find no SPF results in the message header. If SPFIgnoreResults
# is also set, it never looks for SPF results in headers and always performs
# the SPF check itself when this is set. This added header is used by
# spamassassin to evaluate the mail for spamminess.

tools/editconf.py /etc/opendmarc.conf -s \
    "SPFSelfValidate=true"

# Enables generation of failure reports for sending domains that publish a
# "none" policy.

tools/editconf.py /etc/opendmarc.conf -s \
    "FailureReportsOnNone=true"

# AlwaysAddARHeader Adds an "Authentication-Results:" header field even to
# unsigned messages from domains with no "signs all" policy. The reported DKIM
# result will be "none" in such cases. Normally unsigned mail from non-strict
# domains does not cause the results header field to be added. This added header
# is used by spamassassin to evaluate the mail for spamminess.

tools/editconf.py /etc/opendkim.conf -s \
    "AlwaysAddARHeader=true"

# Add OpenDKIM and OpenDMARC as milters to postfix, which is how OpenDKIM
# intercepts outgoing mail to perform the signing (by adding a mail header)

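A note on the FailureReports switch enabled above: per the DMARC specification, failure (forensic) reports are only sent to addresses that a sending domain lists in the `ruf=` tag of its DMARC record, for example a TXT record at `_dmarc.example.org` with a value like `v=DMARC1; p=quarantine; ruf=mailto:dmarc-failures@example.org` (an illustrative record, not something this commit creates). That is what the changelog entry "Enable sending DMARC failure reports to senders that request them" refers to.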
@@ -18,11 +18,7 @@ while [ -d /usr/local/lib/python3.4/dist-packages/acme ]; do
    pip3 uninstall -y acme;
done

-# duplicity is used to make backups of user data. It uses boto
-# (via Python 2) to do backups to AWS S3. boto from the Ubuntu
-# package manager is too out-of-date -- it doesn't support the newer
-# S3 api used in some regions, which breaks backups to those regions.
-# See #627, #653.
+# duplicity is used to make backups of user data.
#
# virtualenv is used to isolate the Python 3 packages we
# install via pip from the system-installed packages.

@@ -30,7 +26,11 @@ done
# certbot installs EFF's certbot which we use to
# provision free TLS certificates.
apt_install duplicity python-pip virtualenv certbot
-hide_output pip2 install --upgrade boto
+# b2sdk is used for backblaze backups.
+# boto is used for amazon aws backups.
+# Both are installed outside the pipenv, so they can be used by duplicity
+hide_output pip3 install --upgrade b2sdk boto

# Create a virtualenv for the installation of Python 3 packages
# used by the management daemon.

@@ -50,8 +50,8 @@ hide_output $venv/bin/pip install --upgrade pip
hide_output $venv/bin/pip install --upgrade \
    rtyaml "email_validator>=1.0.0" "exclusiveprocess" \
    flask dnspython python-dateutil \
-   qrcode[pil] pyotp \
-   "idna>=2.0.0" "cryptography==2.2.2" boto psutil postfix-mta-sts-resolver
+   qrcode[pil] pyotp \
+   "idna>=2.0.0" "cryptography==2.2.2" boto psutil postfix-mta-sts-resolver b2sdk

# CONFIGURATION

@@ -90,6 +90,12 @@ rm -f /tmp/bootstrap.zip
# running after a reboot.
cat > $inst_dir/start <<EOF;
#!/bin/bash
# Set character encoding flags to ensure that any non-ASCII don't cause problems.
export LANGUAGE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_TYPE=en_US.UTF-8

source $venv/bin/activate
exec python `pwd`/management/daemon.py
EOF

@@ -24,8 +24,8 @@ InstallNextcloud() {
    hash_contacts=$4
    version_calendar=$5
    hash_calendar=$6
-   version_user_external=$7
-   hash_user_external=$8
+   version_user_external=${7:-}
+   hash_user_external=${8:-}

    echo
    echo "Upgrading to Nextcloud version $version"

@@ -311,6 +311,9 @@ hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable
sudo -u www-data php /usr/local/lib/owncloud/occ upgrade
if [ \( $? -ne 0 \) -a \( $? -ne 3 \) ]; then exit 1; fi

# Disable default apps that we don't support
sudo -u www-data php /usr/local/lib/owncloud/occ app:disable photos dashboard activity

# Set PHP FPM values to support large file uploads
# (semicolon is the comment character in this file, hashes produce deprecation warnings)
tools/editconf.py /etc/php/7.2/fpm/php.ini -c ';' \

@@ -67,6 +67,56 @@ tools/editconf.py /etc/spamassassin/local.cf -s \
    "add_header all Report"=_REPORT_ \
    "add_header all Score"=_SCORE_


# Authentication-Results SPF/Dmarc checks
# ---------------------------------------
# OpenDKIM and OpenDMARC are configured to validate and add "Authentication-Results: ..."
# headers by checking the sender's SPF & DMARC policies. Instead of blocking mail that fails
# these checks, we can use these headers to evaluate the mail as spam.
#
# Our custom rules are added to their own file so that an update to the deb package config
# does not remove our changes.
#
# We need to escape periods in $PRIMARY_HOSTNAME since the spamassassin config uses regex.

escapedprimaryhostname="${PRIMARY_HOSTNAME//./\\.}"

cat > /etc/spamassassin/miab_spf_dmarc.cf << EOF
# Evaluate DMARC Authentication-Results
header DMARC_PASS Authentication-Results =~ /$escapedprimaryhostname; dmarc=pass/
describe DMARC_PASS DMARC check passed
score DMARC_PASS -0.1

header DMARC_NONE Authentication-Results =~ /$escapedprimaryhostname; dmarc=none/
describe DMARC_NONE DMARC record not found
score DMARC_NONE 0.1

header DMARC_FAIL_NONE Authentication-Results =~ /$escapedprimaryhostname; dmarc=fail \(p=none/
describe DMARC_FAIL_NONE DMARC check failed (p=none)
score DMARC_FAIL_NONE 2.0

header DMARC_FAIL_QUARANTINE Authentication-Results =~ /$escapedprimaryhostname; dmarc=fail \(p=quarantine/
describe DMARC_FAIL_QUARANTINE DMARC check failed (p=quarantine)
score DMARC_FAIL_QUARANTINE 5.0

header DMARC_FAIL_REJECT Authentication-Results =~ /$escapedprimaryhostname; dmarc=fail \(p=reject/
describe DMARC_FAIL_REJECT DMARC check failed (p=reject)
score DMARC_FAIL_REJECT 10.0

# Evaluate SPF Authentication-Results
header SPF_PASS Authentication-Results =~ /$escapedprimaryhostname; spf=pass/
describe SPF_PASS SPF check passed
score SPF_PASS -0.1

header SPF_NONE Authentication-Results =~ /$escapedprimaryhostname; spf=none/
describe SPF_NONE SPF record not found
score SPF_NONE 2.0

header SPF_FAIL Authentication-Results =~ /$escapedprimaryhostname; spf=fail/
describe SPF_FAIL SPF check failed
score SPF_FAIL 5.0
EOF

# Bayesian learning
# -----------------
#
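The rules above score mail by matching the Authentication-Results header that OpenDKIM/OpenDMARC add. A quick illustration of what the DMARC_FAIL_QUARANTINE pattern matches (the hostname and the exact header value are made up; spamassassin applies the same regex internally):

```python
import re

primary_hostname = "box.example.com"       # placeholder
escaped = re.escape(primary_hostname)      # box\.example\.com

# Example Authentication-Results header, roughly as OpenDMARC might add it.
header = ("Authentication-Results: box.example.com; "
          "dmarc=fail (p=quarantine dis=none) header.from=example.org")

rule = re.compile(escaped + r"; dmarc=fail \(p=quarantine")
print(bool(rule.search(header)))  # True -> DMARC_FAIL_QUARANTINE adds 5.0 to the spam score
```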
@@ -93,6 +93,9 @@ hide_output add-apt-repository -y universe
# Install the certbot PPA.
hide_output add-apt-repository -y ppa:certbot/certbot

# Install the duplicity PPA.
hide_output add-apt-repository -y ppa:duplicity-team/duplicity-release-git

# ### Update Packages

# Update system packages to make sure we have the latest upstream versions

@@ -128,7 +131,7 @@ apt_get_quiet autoremove
# * openssh-client: provides ssh-keygen

echo Installing system packages...
-apt_install python3 python3-dev python3-pip \
+apt_install python3 python3-dev python3-pip python3-setuptools \
    netcat-openbsd wget curl git sudo coreutils bc \
    haveged pollinate openssh-client unzip \
    unattended-upgrades cron ntp fail2ban rsyslog

@@ -317,6 +320,9 @@ fi #NODOC
# name server, on IPV6.
# * The listen-on directive in named.conf.options restricts `bind9` to
#   binding to the loopback interface instead of all interfaces.
# * The max-recursion-queries directive increases the maximum number of iterative queries.
#   If more queries than specified are sent, bind9 returns SERVFAIL. After flushing the cache during system checks,
#   we ran into the limit thus we are increasing it from 75 (default value) to 100.
apt_install bind9
tools/editconf.py /etc/default/bind9 \
    "OPTIONS=\"-u bind -4\""

@@ -324,6 +330,10 @@ if ! grep -q "listen-on " /etc/bind/named.conf.options; then
    # Add a listen-on directive if it doesn't exist inside the options block.
    sed -i "s/^}/\n\tlisten-on { 127.0.0.1; };\n}/" /etc/bind/named.conf.options
fi
if ! grep -q "max-recursion-queries " /etc/bind/named.conf.options; then
    # Add a max-recursion-queries directive if it doesn't exist inside the options block.
    sed -i "s/^}/\n\tmax-recursion-queries 100;\n}/" /etc/bind/named.conf.options
fi

# First we'll disable systemd-resolved's management of resolv.conf and its stub server.
# Breaking the symlink to /run/systemd/resolve/stub-resolv.conf means

@@ -28,10 +28,11 @@ apt_install \
# Install Roundcube from source if it is not already present or if it is out of date.
# Combine the Roundcube version number with the commit hash of plugins to track
# whether we have the latest version of everything.
-VERSION=1.4.9
-HASH=df650f4d3eae9eaae2d5a5f06d68665691daf57d
-PERSISTENT_LOGIN_VERSION=6b3fc450cae23ccb2f393d0ef67aa319e877e435
-HTML5_NOTIFIER_VERSION=4b370e3cd60dabd2f428a26f45b677ad1b7118d5
+VERSION=1.4.11
+HASH=3877f0e70f29e7d0612155632e48c3db1e626be3
+PERSISTENT_LOGIN_VERSION=6b3fc450cae23ccb2f393d0ef67aa319e877e435 # version 5.2.0
+HTML5_NOTIFIER_VERSION=68d9ca194212e15b3c7225eb6085dbcf02fd13d7 # version 0.6.4+
CARDDAV_VERSION=3.0.3
CARDDAV_HASH=d1e3b0d851ffa2c6bd42bf0c04f70d0e1d0d78f8

@@ -22,8 +22,8 @@ apt_install \
phpenmod -v php imap

# Copy Z-Push into place.
-VERSION=2.5.2
-TARGETHASH=2dc3dbd791b96b0ba2638df0d3d1e03c7e1cbab2
+VERSION=2.6.2
+TARGETHASH=4b312d64227ef887b24d9cc8f0ae17519586f6e2
needs_update=0 #NODOC
if [ ! -f /usr/local/lib/z-push/version ]; then
    needs_update=1 #NODOC
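The VERSION and TARGETHASH pins above identify the exact release this setup script expects. If you want to check a downloaded archive against such a pin yourself, a SHA-1 digest comparison along these lines works (the file name is a placeholder, and whether a given pin is the SHA-1 of the archive is an assumption to verify against the setup scripts):

```python
import hashlib

def sha1_of(path, chunk_size=1 << 20):
    """Stream the file so large archives don't need to fit in memory."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder file name; expected value taken from TARGETHASH above.
print(sha1_of("/tmp/z-push.zip") == "4b312d64227ef887b24d9cc8f0ae17519586f6e2")
```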