
Merge remote-tracking branch 'origin/master' into configurablebackupfolder

KiekerJan 2022-04-24 16:01:04 +02:00
commit c1b7a9d4d2
65 changed files with 1391 additions and 610 deletions

.github/workflows/codeql-analysis.yml (new file, 71 lines)

@ -0,0 +1,71 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ master ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ master ]
schedule:
- cron: '43 20 * * 0'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
# Learn more:
# https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
steps:
- name: Checkout repository
uses: actions/checkout@v2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v1
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1


@ -1,6 +1,78 @@
CHANGELOG CHANGELOG
========= =========
Version 60 (date TBD)
---------------------
This is the first release for Ubuntu 22.04.
**Before upgrading**, you must **first upgrade your existing Ubuntu 18.04 box to Mail-in-a-Box v0.51** (or any later version of Mail-in-a-Box supporting Ubuntu 18.04), if you haven't already done so. That may not be possible after Ubuntu 18.04 reaches its end of life in April 2023, so please complete the upgrade well before then. (If you are not using Nextcloud's contacts or calendar, you can migrate to the latest version of Mail-in-a-Box from any previous version.)
For complete upgrade instructions, see:
LINK TBD
No features of Mail-in-a-Box have changed in this release, but with the newer version of Ubuntu the following software packages we use are updated:
* dovecot is upgraded to 2.3.16, postfix to 3.6.3, opendmarc to 1.4 (which adds ARC-Authentication-Results headers), and spampd to 2.53 (alleviating a mail delivery rate limiting bug).
* Nextcloud is upgraded to 23.0.0 with PHP updated from 7.2 to 8.0.
* certbot is upgraded to 1.21 (via the Ubuntu repository instead of a PPA).
* fail2ban is upgraded to 0.11.2.
* nginx is upgraded to 1.18.
* bind9 is replaced with unbound
In Development
--------------
Version 56 (January 19, 2022)
-----------------------------
Software updates:
* Roundcube updated to 1.5.2 (from 1.5.0), and the persistent_login and CardDAV (to 4.3.0 from 3.0.3) plugins are updated.
* Nextcloud updated to 20.0.14 (from 20.0.8), contacts to 4.0.7 (from 3.5.1), and calendar to 3.0.4 (from 2.2.0).
Setup:
* Fixed failed setup if a previous attempt failed while updating Nextcloud.
Control panel:
* Fixed a crash if a custom DNS entry is not under a zone managed by the box.
* Fix DNSSEC instructions typo.
Other:
* Set systemd journald log retention to 10 days (from no limit) to reduce disk usage.
* Fixed log processing for submission lines that have a sasl_sender or other extra information.
* Fix DNS secondary nameserver refresh failure retry period.
Version 55 (October 18, 2021)
-----------------------------
Mail:
* "SMTPUTF8" is now disabled in Postfix. Because Dovecot still does not support SMTPUTF8, incoming mail to internationalized addresses was bouncing. This fixes incoming mail to internationalized domains (which was probably working prior to v0.40), but it will prevent sending outbound mail to addresses with internationalized local-parts.
* Upgraded to Roundcube 1.5.
Control panel:
* The control panel menus are now hidden before login, and non-admins can now log in to access the mail and contacts/calendar instruction pages.
* The login form now disables browser autocomplete in the two-factor authentication code field.
* After logging in, the default page is now a fast-loading welcome page rather than the slow-loading system status checks page.
* The backup retention period option now displays for B2 backup targets.
* The DNSSEC DS record recommendations are cleaned up and now recommend changing records that use SHA1.
* The Munin monitoring pages no longer require a separate HTTP basic authentication login and can be used if two-factor authentication is turned on.
* Control panel logins are now tied to a session backend that allows true logouts (rather than an encrypted cookie).
* Failed logins no longer directly reveal whether the email address corresponds to a user account.
* Browser dark mode now inverts the color scheme.
Other:
* Fail2ban's IPv6 support is enabled.
* The mail log tool now doesn't crash if there are email addresses in log messages with invalid UTF-8 characters.
* Additional nsd.conf files can be placed in /etc/nsd.conf.d.
v0.54 (June 20, 2021) v0.54 (June 20, 2021)
--------------------- ---------------------


@ -20,9 +20,9 @@ _If you're seeing an error message about your *IP address being listed in the Sp
### Modifying your `hosts` file ### Modifying your `hosts` file
After a while, Mail-in-a-Box will be available at `192.168.50.4` (unless you changed that in your `Vagrantfile`). To be able to use the web-based bits, we recommend adding a hostname to your `hosts` file: After a while, Mail-in-a-Box will be available at `192.168.56.4` (unless you changed that in your `Vagrantfile`). To be able to use the web-based bits, we recommend adding a hostname to your `hosts` file:
$ echo "192.168.50.4 mailinabox.lan" | sudo tee -a /etc/hosts $ echo "192.168.56.4 mailinabox.lan" | sudo tee -a /etc/hosts
You should now be able to navigate to https://mailinabox.lan/admin using your browser. There should be an initial admin user with the name `me@mailinabox.lan` and the password `12345678`. You should now be able to navigate to https://mailinabox.lan/admin using your browser. There should be an initial admin user with the name `me@mailinabox.lan` and the password `12345678`.


@ -10,24 +10,35 @@ Functionality changes and additions
This applies geoip filtering on access to the admin panel of the box. Order of filtering: block continents that are not allowed, block countries that are not allowed, allow countries that are allowed (overriding continent filtering). Edit /etc/nginx/conf.d/10-geoblock.conf to configure. This applies geoip filtering on access to the admin panel of the box. Order of filtering: block continents that are not allowed, block countries that are not allowed, allow countries that are allowed (overriding continent filtering). Edit /etc/nginx/conf.d/10-geoblock.conf to configure.
* Add geoipblocking for ssh access * Add geoipblocking for ssh access
This applies geoip filtering for access to the ssh server. Edit /etc/geoiplookup.conf. All countries defined in this file are allowed. Works for alternate ssh ports. This applies geoip filtering for access to the ssh server. Edit /etc/geoiplookup.conf. All countries defined in this file are allowed. Works for alternate ssh ports.
This uses goiplookup from https://github.com/axllent/goiplookup
* Make fail2ban a more strict * Make fail2ban more strict
enable postfix filters, lengthen bantime and findtime enable postfix filters, lengthen bantime and findtime
* Add fail2ban jails for both above mentioned geoipblocking filters * Add fail2ban jails for both above mentioned geoipblocking filters
* Add fail2ban filters for web scanners and badbots * Add fail2ban filters for web scanners and badbots
* Add xapian full text searching to dovecot (from https://github.com/grosjo/fts-xapian) * Add xapian full text searching to dovecot (from https://github.com/grosjo/fts-xapian)
* Add rkhunter * Add rkhunter
* Configure domain names for which only www will be hosted. * Configure domain names for which only www will be hosted
Edit /etc/miabwwwdomains.conf to configure. The box will handle incoming traffic asking for these domain names. The DNS entries are entered in an external DNS provider! If you want this box to handle the DNS entries, simply add a mail alias. (existing functionality of the vanilla Mail-in-a-Box) Edit /etc/miabwwwdomains.conf to configure. The box will handle incoming traffic asking for these domain names. The DNS entries are entered in an external DNS provider! If you want this box to handle the DNS entries, simply add a mail alias. (existing functionality of the vanilla Mail-in-a-Box)
* Add some munin plugins * Add some munin plugins
* Update nextcloud to 20.0.8 * Update nextcloud to 22.2.3
And updated apps
* Add nextcloud notes app
* Update roundcube carddav plugin to 4.1.1 * Update roundcube carddav plugin to 4.1.1
* Use shorter TTL values in the DNS server. * Add roundcube context menu plugin
* Add roundcube two factor authentication plugin
* Use shorter TTL values in the DNS server
To be used before for example when changing IP addresses. Shortening TTL values will propagate changes faster. For reference, default TTL is 1 day, short TTL is 5 minutes. To use, edit file /etc/forceshortdnsttl and add a line for each domain for which shorter TTLs should be used. To use short TTLs for all known domains, add "forceshortdnsttl" To be used before for example when changing IP addresses. Shortening TTL values will propagate changes faster. For reference, default TTL is 1 day, short TTL is 5 minutes. To use, edit file /etc/forceshortdnsttl and add a line for each domain for which shorter TTLs should be used. To use short TTLs for all known domains, add "forceshortdnsttl"
* Use the box as a Hidden Master in the DNS system * Use the box as a Hidden Master in the DNS system
Thus only the secondary DNS servers are used as public DNS servers. When using a hidden master, no glue records are necessary at your domain hoster. To use, first setup secondary DNS servers via the Custom DNS administration page. At least two secondary servers should be set. When that functions, edit file /etc/usehiddenmasterdns and add a line for each domain for which Hidden Master should be used. To use Hidden Master for all known domains, add "usehiddenmasterdns". Thus only the secondary DNS servers are used as public DNS servers. When using a hidden master, no glue records are necessary at your domain hoster. To use, first setup secondary DNS servers via the Custom DNS administration page. At least two secondary servers should be set. When that functions, edit file /etc/usehiddenmasterdns and add a line for each domain for which Hidden Master should be used. To use Hidden Master for all known domains, add "usehiddenmasterdns".
* Daily ip blacklist check
Using check-dnsbl.py from https://github.com/gsauthof/utility
* Updated ssl security for web and email
Removed older cryptos following internet.nl recommendations
* Replace opendkim with dkimpy (https://launchpad.net/dkimpy-milter)
Added support for Ed25519 signing
* Replace bind9 with unbound DNS resolver
Bug fixes Bug fixes
* Munin routes are ignored for Multi Factor Authentication [see github issue](https://github.com/mail-in-a-box/mailinabox/issues/1865)
* Munin error report fixed [see github issue](https://github.com/mail-in-a-box/mailinabox/issues/1555) * Munin error report fixed [see github issue](https://github.com/mail-in-a-box/mailinabox/issues/1555)
* Correct nextcloud carddav url [see github issue](https://github.com/mail-in-a-box/mailinabox/issues/1918) * Correct nextcloud carddav url [see github issue](https://github.com/mail-in-a-box/mailinabox/issues/1918)
@ -40,8 +51,10 @@ Maintenance (personal)
* Remove nextcloud skeleton to save disk space * Remove nextcloud skeleton to save disk space
Fun Fun
* Add option to define ADMIN_IP_ADDRESS (currently only used to ignore fail2ban jails) * Add option to define ADMIN_IP_ADDRESS
Currently only used to ignore fail2ban jails
* Add dynamic dns tools in the tools directory. * Add dynamic dns tools in the tools directory
Can be used to control DNS entries on the Mail-in-a-Box to point to a machine with a non-fixed (e.g. residential) IP address; see the sketch below
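As an illustration of that dynamic DNS use case, here is a minimal, hypothetical updater sketch. The host, account, and credentials are placeholders, and the PUT /admin/dns/custom/&lt;qname&gt;/&lt;rtype&gt; endpoint is assumed to behave as in the upstream Mail-in-a-Box API; this is not the tool shipped in the tools directory.

#!/usr/bin/env python3
# Hypothetical dynamic-DNS updater sketch (placeholder host and credentials).
import requests

BOX = "https://box.example.com"
USER = "me@example.com"      # an admin account (placeholder)
PASSWORD = "secret"          # or an API key (placeholder)

def update_a_record(qname, ip):
    # PUT replaces any existing custom record of this type for the name
    # (assumed upstream API behavior).
    r = requests.put(f"{BOX}/admin/dns/custom/{qname}/A",
                     data=ip, auth=(USER, PASSWORD), timeout=30)
    r.raise_for_status()
    return r.text

if __name__ == "__main__":
    # Point home.example.com at this machine's current public address.
    current_ip = requests.get("https://api.ipify.org", timeout=30).text
    print(update_a_record("home.example.com", current_ip))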
Original mailinabox content starts here: Original mailinabox content starts here:
@ -70,7 +83,7 @@ Additionally, this project has a [Code of Conduct](CODE_OF_CONDUCT.md), which su
In The Box In The Box
---------- ----------
Mail-in-a-Box turns a fresh Ubuntu 20.04 or 18.04 LTS 64-bit machine into a working mail server by installing and configuring various components. Mail-in-a-Box turns a fresh Ubuntu 22.04 or 20.04 LTS 64-bit machine into a working mail server by installing and configuring various components.
It is a one-click email appliance. There are no user-configurable setup options. It "just works." It is a one-click email appliance. There are no user-configurable setup options. It "just works."
@ -89,6 +102,8 @@ It also includes system management tools:
* A control panel for adding/removing mail users, aliases, custom DNS records, configuring backups, etc. * A control panel for adding/removing mail users, aliases, custom DNS records, configuring backups, etc.
* An API for all of the actions on the control panel * An API for all of the actions on the control panel
Internationalized domain names are supported and configured easily (but SMTPUTF8 is not supported, unfortunately).
It also supports static website hosting since the box is serving HTTPS anyway. (To serve a website for your domains elsewhere, just add a custom DNS "A" record in your Mail-in-a-Box's control panel to point domains to another server.) It also supports static website hosting since the box is serving HTTPS anyway. (To serve a website for your domains elsewhere, just add a custom DNS "A" record in your Mail-in-a-Box's control panel to point domains to another server.)
For more information on how Mail-in-a-Box handles your privacy, see the [security details page](security.md). For more information on how Mail-in-a-Box handles your privacy, see the [security details page](security.md).
@ -99,13 +114,13 @@ Installation
See the [setup guide](https://mailinabox.email/guide.html) for detailed, user-friendly instructions. See the [setup guide](https://mailinabox.email/guide.html) for detailed, user-friendly instructions.
For experts, start with a completely fresh (really, I mean it) Ubuntu 18.04 LTS 64-bit machine. On the machine... For experts, start with a completely fresh (really, I mean it) Ubuntu 22.04 LTS 64-bit machine. On the machine...
Clone this repository and checkout the tag corresponding to the most recent release: Clone this repository and checkout the tag corresponding to the most recent release:
$ git clone https://github.com/mail-in-a-box/mailinabox $ git clone https://github.com/mail-in-a-box/mailinabox
$ cd mailinabox $ cd mailinabox
$ git checkout v0.54 $ git checkout v60
Begin the installation. Begin the installation.

Vagrantfile (4 lines changed)

@ -2,14 +2,14 @@
# vi: set ft=ruby : # vi: set ft=ruby :
Vagrant.configure("2") do |config| Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/focal64" config.vm.box = "ubuntu/jammy64"
# Network config: Since it's a mail server, the machine must be connected # Network config: Since it's a mail server, the machine must be connected
# to the public web. However, we currently don't want to expose SSH since # to the public web. However, we currently don't want to expose SSH since
# the machine's box will let anyone log into it. So instead we'll put the # the machine's box will let anyone log into it. So instead we'll put the
# machine on a private network. # machine on a private network.
config.vm.hostname = "mailinabox.lan" config.vm.hostname = "mailinabox.lan"
config.vm.network "private_network", ip: "192.168.50.4" config.vm.network "private_network", ip: "192.168.56.4"
config.vm.provision :shell, :inline => <<-SH config.vm.provision :shell, :inline => <<-SH
# Set environment variables so that the setup script does # Set environment variables so that the setup script does


@ -54,24 +54,24 @@ tags:
System operations, which include system status checks, new version checks System operations, which include system status checks, new version checks
and reboot status. and reboot status.
paths: paths:
/me: /login:
get: post:
tags: tags:
- User - User
summary: Get user information summary: Exchange a username and password for a session API key.
description: | description: |
Returns user information. Used for user authentication. Returns user information and a session API key.
Authenticate a user by supplying the auth token as a base64 encoded string in Authenticate a user by supplying the auth token as a base64 encoded string in
format `email:password` using basic authentication headers. format `email:password` using basic authentication headers.
If successful, a long-lived `api_key` is returned which can be used for subsequent If successful, a long-lived `api_key` is returned which can be used for subsequent
requests to the API. requests to the API in place of the password.
operationId: getMe operationId: login
x-codeSamples: x-codeSamples:
- lang: curl - lang: curl
source: | source: |
curl -X GET "https://{host}/admin/me" \ curl -X POST "https://{host}/admin/login" \
-u "<email>:<password>" -u "<email>:<password>"
responses: responses:
200: 200:
@ -92,6 +92,26 @@ paths:
privileges: privileges:
- admin - admin
status: ok status: ok
/logout:
post:
tags:
- User
summary: Invalidates a session API key.
description: |
Invalidates a session API key so that it cannot be used after this API call.
operationId: logout
x-codeSamples:
- lang: curl
source: |
curl -X POST "https://{host}/admin/logout" \
-u "<email>:<session_key>"
responses:
200:
description: Successful operation
content:
application/json:
schema:
$ref: '#/components/schemas/LogoutResponse'
/system/status: /system/status:
post: post:
tags: tags:
@ -1242,7 +1262,7 @@ paths:
$ref: '#/components/schemas/MailUserAddResponse' $ref: '#/components/schemas/MailUserAddResponse'
example: | example: |
mail user added mail user added
updated DNS: OpenDKIM configuration updated DNS: DKIM configuration
400: 400:
description: Bad request description: Bad request
content: content:
@ -1803,7 +1823,7 @@ components:
The `access-token` is comprised of the Base64 encoding of `username:password`. The `access-token` is comprised of the Base64 encoding of `username:password`.
The `username` is the mail user's email address, and `password` can either be the mail user's The `username` is the mail user's email address, and `password` can either be the mail user's
password, or the `api_key` returned from the `getMe` operation. password, or the `api_key` returned from the `login` operation.
When using `curl`, you can supply user credentials using the `-u` or `--user` parameter. When using `curl`, you can supply user credentials using the `-u` or `--user` parameter.
requestBodies: requestBodies:
@ -1843,7 +1863,7 @@ components:
type: string type: string
example: | example: |
mail user added mail user added
updated DNS: OpenDKIM configuration updated DNS: DKIM configuration
description: | description: |
Mail user add response. Mail user add response.
@ -2705,3 +2725,8 @@ components:
nullable: true nullable: true
MfaDisableSuccessResponse: MfaDisableSuccessResponse:
type: string type: string
LogoutResponse:
type: object
properties:
status:
type: string
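The login/logout flow added to the API specification above can be exercised with a few lines of Python. This is a sketch using the requests library; the host name and credentials are placeholders, and /admin/mail/users is assumed as an example authenticated endpoint (the same one used in the management daemon's curl example).

# Sketch of the session login/logout flow (placeholder host and credentials).
import requests

BOX = "https://box.example.com"

# Exchange email and password for a session API key.
login = requests.post(f"{BOX}/admin/login", auth=("me@example.com", "secret"), timeout=30)
login.raise_for_status()
session = login.json()
assert session.get("status") == "ok", session
api_key = session["api_key"]

# Use the session key in place of the password for later requests.
users = requests.get(f"{BOX}/admin/mail/users", auth=("me@example.com", api_key), timeout=30)
print(users.text)

# Invalidate the session key when finished.
requests.post(f"{BOX}/admin/logout", auth=("me@example.com", api_key), timeout=30)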

conf/dh4096.pem (new file, 13 lines)

@ -0,0 +1,13 @@
-----BEGIN DH PARAMETERS-----
MIICCAKCAgEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz
+8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a
87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7
YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi
7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD
ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3
7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32
nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZp4e
8W5vUsMWTfT7eTDp5OWIV7asfV9C1p9tGHdjzx1VA0AEh/VbpX4xzHpxNciG77Qx
iu1qHgEtnmgyqQdgCpGBMMRtx3j5ca0AOAkpmaMzy4t6Gh25PXFAADwqTs6p+Y0K
zAqCkc3OyX3Pjsm1Wn+IpGtNtahR9EGC4caKAH5eZV9q//////////8CAQI=
-----END DH PARAMETERS-----


@ -0,0 +1,12 @@
[INCLUDES]
before = common.conf
[Definition]
miab-errors=postfix/(submission/)?smtpd.*warning: hostname .* does not resolve to address <HOST>:.+
miab-normal=postfix/(submission/)?smtpd.*warning: hostname .* does not resolve to address <HOST>$
ignoreregex =
failregex = <miab-<mode>>
mode = normal


@ -0,0 +1,7 @@
[INCLUDES]
before = common.conf
[Definition]
failregex=postfix/submission/smtpd.*warning: non-SMTP command from.*\[<HOST>\].*HTTP.*$
ignoreregex =


@ -0,0 +1,6 @@
# Ban requests for non-existing or not-allowed resources
[Definition]
# regex for nginx error.log
failregex = ^.* \[error\] .*2: No such file or directory.*client: <HOST>.*$
ignoreregex = ^.*(robots.txt|favicon.ico).*$


@ -0,0 +1,6 @@
# Ban requests for non-existing or not-allowed resources
[Definition]
failregex = ^.* \[error\] .*2: No such file or directory.*client: <HOST>.*$
ignoreregex = ^.*(robots.txt|favicon.ico).*$


@ -97,7 +97,8 @@ failregex = ^<HOST> -.*(GET|POST|HEAD).*(/\.git/config)
^<HOST> -.*(GET|POST|HEAD).*(/examples/file-manager\.html) ^<HOST> -.*(GET|POST|HEAD).*(/examples/file-manager\.html)
^<HOST> -.*(GET|POST|HEAD).*(/getcfg\.php) ^<HOST> -.*(GET|POST|HEAD).*(/getcfg\.php)
^<HOST> -.*(GET|POST|HEAD).*(/get_password\.php) ^<HOST> -.*(GET|POST|HEAD).*(/get_password\.php)
^<HOST> -.*(GET|POST|HEAD).*(/\.git/info/) ^<HOST> -.*(GET|POST|HEAD).*(/\.git/info)
^<HOST> -.*(GET|POST|HEAD).*(/\.git/HEAD)
^<HOST> -.*(GET|POST|HEAD).*(/Hello\.World) ^<HOST> -.*(GET|POST|HEAD).*(/Hello\.World)
^<HOST> -.*(GET|POST|HEAD).*(/hndUnblock\.cgi) ^<HOST> -.*(GET|POST|HEAD).*(/hndUnblock\.cgi)
^<HOST> -.*(GET|POST|HEAD).*(/images/login9/login_33\.jpg) ^<HOST> -.*(GET|POST|HEAD).*(/images/login9/login_33\.jpg)
@ -231,7 +232,7 @@ failregex = ^<HOST> -.*(GET|POST|HEAD).*(/\.git/config)
^<HOST> -.*(GET|POST|HEAD).*(\x22sanitize) ^<HOST> -.*(GET|POST|HEAD).*(\x22sanitize)
^<HOST> -.*(GET|POST|HEAD).*(\x22SimplePie) ^<HOST> -.*(GET|POST|HEAD).*(\x22SimplePie)
^<HOST> -.*(GET|POST|HEAD).*(\x5C0disconnectHandlers) ^<HOST> -.*(GET|POST|HEAD).*(\x5C0disconnectHandlers)
^<HOST> -.*(GET).*(\.\./wp-config.php) ^<HOST> -.*(GET|POST|HEAD).*(\.\./wp-config.php)
ignoreregex = ignoreregex =


@ -0,0 +1,13 @@
# Block clients that generate too many requests for non-existing resources.
# Do not deploy if you host many websites on your box;
# any bad html link will trigger a false positive.
# This jail is meant to catch scanners that try many
# sites.
[badrequests]
enabled = true
port = http,https
filter = nginx-badrequests
logpath = /var/log/nginx/error.log
maxretry = 8
findtime = 15m
bantime = 15m


@ -0,0 +1,31 @@
# Typically non-SMTP commands. Ban quickly for access to postfix.
[miab-postfix-scanner]
enabled = true
port = smtp,465,587
filter = miab-postfix-scanner
logpath = /var/log/mail.log
maxretry = 2
findtime = 1d
bantime = 1h
# IP lookup of the hostname does not match. Go easy on the ban.
[miab-pf-rdnsfail]
enabled = true
port = smtp,465,587
mode = normal
filter = miab-postfix-rdnsfail
logpath = /var/log/mail.log
maxretry = 8
findtime = 12h
bantime = 30m
# IP lookup of the hostname does not match and an error is reported. Stricter ban.
[miab-pf-rdnsfail-e]
enabled = true
port = smtp,465,587
mode = errors
filter = miab-postfix-rdnsfail[mode=errors]
logpath = /var/log/mail.log
maxretry = 4
findtime = 1d
bantime = 1h


@ -1,8 +1,12 @@
# Block clients based on a list of specific requests
# The list contains applications that are not installed
# only scanners and bad parties will request them often,
# so blocking can be fast and long
[webexploits] [webexploits]
enabled = true enabled = true
port = http,https port = http,https
filter = webexploits filter = webexploits
logpath = /var/log/nginx/access.log logpath = /var/log/nginx/access.log
maxretry = 2 maxretry = 2
findtime = 240m findtime = 4h
bantime = 60m bantime = 4h


@ -5,7 +5,7 @@
# Whitelist our own IP addresses. 127.0.0.1/8 is the default. But our status checks # Whitelist our own IP addresses. 127.0.0.1/8 is the default. But our status checks
# ping services over the public interface so we should whitelist that address of # ping services over the public interface so we should whitelist that address of
# ours too. The string is substituted during installation. # ours too. The string is substituted during installation.
ignoreip = 127.0.0.1/8 ::1/128 PUBLIC_IP PUBLIC_IPV6 ADMIN_HOME_IP ADMIN_HOME_IPV6 ignoreip = 127.0.0.1/8 ::1/128 PUBLIC_IP PUBLIC_IPV6/64 ADMIN_HOME_IP ADMIN_HOME_IPV6
bantime = 15m bantime = 15m
findtime = 120m findtime = 120m
maxretry = 4 maxretry = 4
@ -69,7 +69,7 @@ findtime = 15m
enabled = true enabled = true
maxretry = 10 maxretry = 10
bantime = 2w bantime = 2w
findtime = 3d findtime = 7d
action = iptables-allports[name=recidive] action = iptables-allports[name=recidive]
# In the recidive section of jail.conf the action contains: # In the recidive section of jail.conf the action contains:
# #


@ -36,6 +36,8 @@
add_header X-Frame-Options "DENY"; add_header X-Frame-Options "DENY";
add_header X-Content-Type-Options nosniff; add_header X-Content-Type-Options nosniff;
add_header Content-Security-Policy "frame-ancestors 'none';"; add_header Content-Security-Policy "frame-ancestors 'none';";
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Referrer-Policy "strict-origin";
} }
# Nextcloud configuration. # Nextcloud configuration.


@ -2,7 +2,7 @@
# Note that these settings are repeated in the SMTP and IMAP configuration. # Note that these settings are repeated in the SMTP and IMAP configuration.
# ssl_protocols has moved to nginx.conf in bionic, check there for enabled protocols. # ssl_protocols has moved to nginx.conf in bionic, check there for enabled protocols.
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384; ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_dhparam STORAGE_ROOT/ssl/dh2048.pem; ssl_dhparam STORAGE_ROOT/ssl/dh4096.pem;
# as recommended by http://nginx.org/en/docs/http/configuring_https_servers.html # as recommended by http://nginx.org/en/docs/http/configuring_https_servers.html
ssl_session_cache shared:SSL:50m; ssl_session_cache shared:SSL:50m;

conf/unbound.conf (new file, 68 lines)

@ -0,0 +1,68 @@
server:
# the working directory.
directory: "/etc/unbound"
# run as the unbound user
username: unbound
verbosity: 0 # uncomment and increase to get more logging.
# logfile: "/var/log/unbound.log" # won't work due to apparmor
# use-syslog: no
# By default listen only to localhost
#interface: ::1
#interface: 127.0.0.1
port: 53
# Only allow localhost to use this Unbound instance.
access-control: 127.0.0.1/8 allow
access-control: ::1/128 allow
# Private IP ranges, which shall never be returned or forwarded as public DNS response.
private-address: 10.0.0.0/8
private-address: 172.16.0.0/12
private-address: 192.168.0.0/16
private-address: 169.254.0.0/16
private-address: fd00::/8
private-address: fe80::/10
# Functionality
do-ip4: yes
do-ip6: yes
do-udp: yes
do-tcp: yes
# Performance
num-threads: 2
cache-min-ttl: 300
cache-max-ttl: 86400
serve-expired: yes
neg-cache-size: 4M
msg-cache-size: 50m
rrset-cache-size: 100m
so-reuseport: yes
so-rcvbuf: 4m
so-sndbuf: 4m
# Privacy / hardening
# hide server info from clients
hide-identity: yes
hide-version: yes
harden-glue: yes
harden-dnssec-stripped: yes
harden-algo-downgrade: yes
harden-large-queries: yes
harden-short-bufsize: yes
rrset-roundrobin: yes
minimal-responses: yes
identity: "Server"
# Include possible white/blacklists
include: /etc/unbound/lists.d/*.conf
remote-control:
control-enable: yes
control-port: 953


@ -1,6 +1,7 @@
import base64, os, os.path, hmac, json import base64, os, os.path, hmac, json, secrets
from datetime import timedelta
from flask import make_response from expiringdict import ExpiringDict
import utils import utils
from mailconfig import get_mail_password, get_mail_user_privileges from mailconfig import get_mail_password, get_mail_user_privileges
@ -9,25 +10,18 @@ from mfa import get_hash_mfa_state, validate_auth_mfa
DEFAULT_KEY_PATH = '/var/lib/mailinabox/api.key' DEFAULT_KEY_PATH = '/var/lib/mailinabox/api.key'
DEFAULT_AUTH_REALM = 'Mail-in-a-Box Management Server' DEFAULT_AUTH_REALM = 'Mail-in-a-Box Management Server'
class KeyAuthService: class AuthService:
"""Generate an API key for authenticating clients
Clients must read the key from the key file and send the key with all HTTP
requests. The key is passed as the username field in the standard HTTP
Basic Auth header.
"""
def __init__(self): def __init__(self):
self.auth_realm = DEFAULT_AUTH_REALM self.auth_realm = DEFAULT_AUTH_REALM
self.key = self._generate_key()
self.key_path = DEFAULT_KEY_PATH self.key_path = DEFAULT_KEY_PATH
self.max_session_duration = timedelta(days=2)
def write_key(self): self.init_system_api_key()
"""Write key to file so authorized clients can get the key self.sessions = ExpiringDict(max_len=64, max_age_seconds=self.max_session_duration.total_seconds())
def init_system_api_key(self):
"""Write an API key to a local file so local processes can use the API"""
The key file is created with mode 0640 so that additional users can be
authorized to access the API by granting group/ACL read permissions on
the key file.
"""
def create_file_with_mode(path, mode): def create_file_with_mode(path, mode):
# Based on answer by A-B-B: http://stackoverflow.com/a/15015748 # Based on answer by A-B-B: http://stackoverflow.com/a/15015748
old_umask = os.umask(0) old_umask = os.umask(0)
@ -36,73 +30,92 @@ class KeyAuthService:
finally: finally:
os.umask(old_umask) os.umask(old_umask)
self.key = secrets.token_hex(32)
os.makedirs(os.path.dirname(self.key_path), exist_ok=True) os.makedirs(os.path.dirname(self.key_path), exist_ok=True)
with create_file_with_mode(self.key_path, 0o640) as key_file: with create_file_with_mode(self.key_path, 0o640) as key_file:
key_file.write(self.key + '\n') key_file.write(self.key + '\n')
def authenticate(self, request, env): def authenticate(self, request, env, login_only=False, logout=False):
"""Test if the client key passed in HTTP Authorization header matches the service key """Test if the HTTP Authorization header's username matches the system key, a session key,
or if the or username/password passed in the header matches an administrator user. or if the username/password passed in the header matches a local user.
Returns a tuple of the user's email address and list of user privileges (e.g. Returns a tuple of the user's email address and list of user privileges (e.g.
('my@email', []) or ('my@email', ['admin']); raises a ValueError on login failure. ('my@email', []) or ('my@email', ['admin']); raises a ValueError on login failure.
If the user used an API key, the user's email is returned as None.""" If the user used the system API key, the user's email is returned as None since
this key is not associated with a user."""
def parse_http_authorization_basic(header):
def decode(s): def decode(s):
return base64.b64decode(s.encode('ascii')).decode('ascii') return base64.b64decode(s.encode('ascii')).decode('ascii')
def parse_basic_auth(header):
if " " not in header: if " " not in header:
return None, None return None, None
scheme, credentials = header.split(maxsplit=1) scheme, credentials = header.split(maxsplit=1)
if scheme != 'Basic': if scheme != 'Basic':
return None, None return None, None
credentials = decode(credentials) credentials = decode(credentials)
if ":" not in credentials: if ":" not in credentials:
return None, None return None, None
username, password = credentials.split(':', maxsplit=1) username, password = credentials.split(':', maxsplit=1)
return username, password return username, password
header = request.headers.get('Authorization') username, password = parse_http_authorization_basic(request.headers.get('Authorization', ''))
if not header:
raise ValueError("No authorization header provided.")
username, password = parse_basic_auth(header)
if username in (None, ""): if username in (None, ""):
raise ValueError("Authorization header invalid.") raise ValueError("Authorization header invalid.")
elif username == self.key:
# The user passed the master API key which grants administrative privs. if username.strip() == "" and password.strip() == "":
raise ValueError("No email address, password, session key, or API key provided.")
# If user passed the system API key, grant administrative privs. This key
# is not associated with a user.
if username == self.key and not login_only:
return (None, ["admin"]) return (None, ["admin"])
# If the password corresponds with a session token for the user, grant access for that user.
if self.get_session(username, password, "login", env) and not login_only:
sessionid = password
session = self.sessions[sessionid]
if logout:
# Clear the session.
del self.sessions[sessionid]
else: else:
# The user is trying to log in with a username and either a password # Re-up the session so that it does not expire.
# (and possibly a MFA token) or a user-specific API key. self.sessions[sessionid] = session
return (username, self.check_user_auth(username, password, request, env))
# If no password was given, but a username was given, we're missing some information.
elif password.strip() == "":
raise ValueError("Enter a password.")
else:
# The user is trying to log in with a username and a password
# (and possibly a MFA token). On failure, an exception is raised.
self.check_user_auth(username, password, request, env)
# Get privileges for authorization. This call should never fail because by this
# point we know the email address is a valid user --- unless the user has been
# deleted after the session was granted. On error the call will return a tuple
# of an error message and an HTTP status code.
privs = get_mail_user_privileges(username, env)
if isinstance(privs, tuple): raise ValueError(privs[0])
# Return the authorization information.
return (username, privs)
def check_user_auth(self, email, pw, request, env): def check_user_auth(self, email, pw, request, env):
# Validate a user's login email address and password. If MFA is enabled, # Validate a user's login email address and password. If MFA is enabled,
# check the MFA token in the X-Auth-Token header. # check the MFA token in the X-Auth-Token header.
# #
# On success returns a list of privileges (e.g. [] or ['admin']). On login # On login failure, raises a ValueError with a login error message. On
# failure, raises a ValueError with a login error message. # success, nothing is returned.
# Sanity check.
if email == "" or pw == "":
raise ValueError("Enter an email address and password.")
# The password might be a user-specific API key. create_user_key raises
# a ValueError if the user does not exist.
if hmac.compare_digest(self.create_user_key(email, env), pw):
# OK.
pass
else:
# Get the hashed password of the user. Raise a ValueError if the
# email address does not correspond to a user.
pw_hash = get_mail_password(email, env)
# Authenticate. # Authenticate.
try: try:
# Get the hashed password of the user. Raise a ValueError if the
# email address does not correspond to a user. But wrap it in the
# same exception as if a password fails so we don't easily reveal
# if an email address is valid.
pw_hash = get_mail_password(email, env)
# Use 'doveadm pw' to check credentials. doveadm will return # Use 'doveadm pw' to check credentials. doveadm will return
# a non-zero exit status if the credentials are no good, # a non-zero exit status if the credentials are no good,
# and check_call will raise an exception in that case. # and check_call will raise an exception in that case.
@ -113,7 +126,7 @@ class KeyAuthService:
]) ])
except: except:
# Login failed. # Login failed.
raise ValueError("Invalid password.") raise ValueError("Incorrect email address or password.")
# If MFA is enabled, check that MFA passes. # If MFA is enabled, check that MFA passes.
status, hints = validate_auth_mfa(email, request, env) status, hints = validate_auth_mfa(email, request, env)
@ -121,38 +134,33 @@ class KeyAuthService:
# Login valid. Hints may have more info. # Login valid. Hints may have more info.
raise ValueError(",".join(hints)) raise ValueError(",".join(hints))
# Get privileges for authorization. This call should never fail because by this def create_user_password_state_token(self, email, env):
# point we know the email address is a valid user. But on error the call will # Create a token that changes if the user's password or MFA options change
# return a tuple of an error message and an HTTP status code. # so that sessions become invalid if any of that information changes.
privs = get_mail_user_privileges(email, env) msg = get_mail_password(email, env).encode("utf8")
if isinstance(privs, tuple): raise ValueError(privs[0])
# Return a list of privileges.
return privs
def create_user_key(self, email, env):
# Create a user API key, which is a shared secret that we can re-generate from
# static information in our database. The shared secret contains the user's
# email address, current hashed password, and current MFA state, so that the
# key becomes invalid if any of that information changes.
#
# Use an HMAC to generate the API key using our master API key as a key,
# which also means that the API key becomes invalid when our master API key
# changes --- i.e. when this process is restarted.
#
# Raises ValueError via get_mail_password if the user doesn't exist.
# Construct the HMAC message from the user's email address and current password.
msg = b"AUTH:" + email.encode("utf8") + b" " + get_mail_password(email, env).encode("utf8")
# Add to the message the current MFA state, which is a list of MFA information. # Add to the message the current MFA state, which is a list of MFA information.
# Turn it into a string stably. # Turn it into a string stably.
msg += b" " + json.dumps(get_hash_mfa_state(email, env), sort_keys=True).encode("utf8") msg += b" " + json.dumps(get_hash_mfa_state(email, env), sort_keys=True).encode("utf8")
# Make the HMAC. # Make a HMAC using the system API key as a hash key.
hash_key = self.key.encode('ascii') hash_key = self.key.encode('ascii')
return hmac.new(hash_key, msg, digestmod="sha256").hexdigest() return hmac.new(hash_key, msg, digestmod="sha256").hexdigest()
def _generate_key(self): def create_session_key(self, username, env, type=None):
raw_key = os.urandom(32) # Create a new session.
return base64.b64encode(raw_key).decode('ascii') token = secrets.token_hex(32)
self.sessions[token] = {
"email": username,
"password_token": self.create_user_password_state_token(username, env),
"type": type,
}
return token
def get_session(self, user_email, session_key, session_type, env):
if session_key not in self.sessions: return None
session = self.sessions[session_key]
if session_type == "login" and session["email"] != user_email: return None
if session["type"] != session_type: return None
if session["password_token"] != self.create_user_password_state_token(session["email"], env): return None
return session
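For orientation, the session mechanism above boils down to random server-side session keys held in an expiring dictionary, each bound to a token derived from the user's password hash and MFA state so that sessions become invalid when either changes. A standalone sketch of that idea follows; the names are illustrative only and not the management daemon's actual API.

# Standalone sketch of expiring, password-bound session keys (illustrative names).
import hmac, json, secrets
from expiringdict import ExpiringDict

SYSTEM_KEY = secrets.token_hex(32)                       # per-process secret
sessions = ExpiringDict(max_len=64, max_age_seconds=2 * 24 * 3600)

def password_state_token(password_hash, mfa_state):
    # Changes whenever the password hash or MFA settings change.
    msg = password_hash.encode() + b" " + json.dumps(mfa_state, sort_keys=True).encode()
    return hmac.new(SYSTEM_KEY.encode(), msg, digestmod="sha256").hexdigest()

def create_session(email, password_hash, mfa_state):
    token = secrets.token_hex(32)
    sessions[token] = {"email": email,
                       "password_token": password_state_token(password_hash, mfa_state)}
    return token

def check_session(email, token, password_hash, mfa_state):
    if token not in sessions:
        return False
    session = sessions[token]
    if session["email"] != email:
        return False
    return hmac.compare_digest(session["password_token"],
                               password_state_token(password_hash, mfa_state))

# A session is valid until the password changes (or it expires after two days).
t = create_session("me@example.com", "$6$examplehash", [])
print(check_session("me@example.com", t, "$6$examplehash", []))  # True
print(check_session("me@example.com", t, "$6$newhash", []))      # False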


@ -1,5 +1,8 @@
#!/usr/local/lib/mailinabox/env/bin/python3 #!/usr/local/lib/mailinabox/env/bin/python3
# #
# The API can be accessed on the command line, e.g. use `curl` like so:
# curl --user $(</var/lib/mailinabox/api.key): http://localhost:10222/mail/users
#
# During development, you can start the Mail-in-a-Box control panel # During development, you can start the Mail-in-a-Box control panel
# by running this script, e.g.: # by running this script, e.g.:
# #
@ -9,6 +12,7 @@
import os, os.path, re, json, time import os, os.path, re, json, time
import multiprocessing.pool, subprocess import multiprocessing.pool, subprocess
import logging
from functools import wraps from functools import wraps
@ -22,7 +26,7 @@ from mfa import get_public_mfa_state, provision_totp, validate_totp_secret, enab
env = utils.load_environment() env = utils.load_environment()
auth_service = auth.KeyAuthService() auth_service = auth.AuthService()
# We may deploy via a symbolic link, which confuses flask's template finding. # We may deploy via a symbolic link, which confuses flask's template finding.
me = __file__ me = __file__
@ -53,7 +57,9 @@ def authorized_personnel_only(viewfunc):
try: try:
email, privs = auth_service.authenticate(request, env) email, privs = auth_service.authenticate(request, env)
except ValueError as e: except ValueError as e:
# Write a line in the log recording the failed login # Write a line in the log recording the failed login, unless no authorization header
# was given which can happen on an initial request before a 403 response.
if "Authorization" in request.headers:
log_failed_login(request) log_failed_login(request)
# Authentication failed. # Authentication failed.
@ -131,11 +137,12 @@ def index():
csr_country_codes=csr_country_codes, csr_country_codes=csr_country_codes,
) )
@app.route('/me') # Create a session key by checking the username/password in the Authorization header.
def me(): @app.route('/login', methods=["POST"])
def login():
# Is the caller authorized? # Is the caller authorized?
try: try:
email, privs = auth_service.authenticate(request, env) email, privs = auth_service.authenticate(request, env, login_only=True)
except ValueError as e: except ValueError as e:
if "missing-totp-token" in str(e): if "missing-totp-token" in str(e):
return json_response({ return json_response({
@ -150,19 +157,29 @@ def me():
"reason": str(e), "reason": str(e),
}) })
# Return a new session for the user.
resp = { resp = {
"status": "ok", "status": "ok",
"email": email, "email": email,
"privileges": privs, "privileges": privs,
"api_key": auth_service.create_session_key(email, env, type='login'),
} }
# Is authorized as admin? Return an API key for future use. app.logger.info("New login session created for {}".format(email))
if "admin" in privs:
resp["api_key"] = auth_service.create_user_key(email, env)
# Return. # Return.
return json_response(resp) return json_response(resp)
@app.route('/logout', methods=["POST"])
def logout():
try:
email, _ = auth_service.authenticate(request, env, logout=True)
app.logger.info("{} logged out".format(email))
except ValueError as e:
pass
finally:
return json_response({ "status": "ok" })
# MAIL # MAIL
@app.route('/mail/users') @app.route('/mail/users')
@ -219,7 +236,7 @@ def mail_aliases():
if request.args.get("format", "") == "json": if request.args.get("format", "") == "json":
return json_response(get_mail_aliases_ex(env)) return json_response(get_mail_aliases_ex(env))
else: else:
return "".join(address+"\t"+receivers+"\t"+(senders or "")+"\n" for address, receivers, senders in get_mail_aliases(env)) return "".join(address+"\t"+receivers+"\t"+(senders or "")+"\n" for address, receivers, senders, auto in get_mail_aliases(env))
@app.route('/mail/aliases/add', methods=['POST']) @app.route('/mail/aliases/add', methods=['POST'])
@authorized_personnel_only @authorized_personnel_only
@ -257,6 +274,7 @@ def dns_update():
try: try:
return do_dns_update(env, force=request.form.get('force', '') == '1') return do_dns_update(env, force=request.form.get('force', '') == '1')
except Exception as e: except Exception as e:
logging.exception('dns update exc')
return (str(e), 500) return (str(e), 500)
@app.route('/dns/secondary-nameserver') @app.route('/dns/secondary-nameserver')
@ -314,7 +332,7 @@ def dns_get_records(qname=None, rtype=None):
r["sort-order"]["created"] = i r["sort-order"]["created"] = i
domain_sort_order = utils.sort_domains([r["qname"] for r in records], env) domain_sort_order = utils.sort_domains([r["qname"] for r in records], env)
for i, r in enumerate(sorted(records, key = lambda r : ( for i, r in enumerate(sorted(records, key = lambda r : (
zones.index(r["zone"]), zones.index(r["zone"]) if r.get("zone") else 0, # record is not within a zone managed by the box
domain_sort_order.index(r["qname"]), domain_sort_order.index(r["qname"]),
r["rtype"]))): r["rtype"]))):
r["sort-order"]["qname"] = i r["sort-order"]["qname"] = i
@ -512,10 +530,7 @@ def web_get_domains():
@authorized_personnel_only @authorized_personnel_only
def web_update(): def web_update():
from web_update import do_web_update from web_update import do_web_update
try:
return do_web_update(env) return do_web_update(env)
except Exception as e:
return (str(e), 500)
# System # System
@ -641,16 +656,42 @@ def privacy_status_set():
# MUNIN # MUNIN
@app.route('/munin/') @app.route('/munin/')
@app.route('/munin/<path:filename>')
@authorized_personnel_only @authorized_personnel_only
def munin(filename=""): def munin_start():
# Checks administrative access (@authorized_personnel_only) and then just proxies # Munin pages, static images, and dynamically generated images are served
# the request to static files. # outside of the AJAX API. We'll start with a 'start' API that sets a cookie
# that subsequent requests will read for authorization. (We don't use cookies
# for the API to avoid CSRF vulnerabilities.)
response = make_response("OK")
response.set_cookie("session", auth_service.create_session_key(request.user_email, env, type='cookie'),
max_age=60*30, secure=True, httponly=True, samesite="Strict") # 30 minute duration
return response
def check_request_cookie_for_admin_access():
session = auth_service.get_session(None, request.cookies.get("session", ""), "cookie", env)
if not session: return False
privs = get_mail_user_privileges(session["email"], env)
if not isinstance(privs, list): return False
if "admin" not in privs: return False
return True
def authorized_personnel_only_via_cookie(f):
@wraps(f)
def g(*args, **kwargs):
if not check_request_cookie_for_admin_access():
return Response("Unauthorized", status=403, mimetype='text/plain', headers={})
return f(*args, **kwargs)
return g
@app.route('/munin/<path:filename>')
@authorized_personnel_only_via_cookie
def munin_static_file(filename=""):
# Proxy the request to static files.
if filename == "": filename = "index.html" if filename == "": filename = "index.html"
return send_from_directory("/var/cache/munin/www", filename) return send_from_directory("/var/cache/munin/www", filename)
@app.route('/munin/cgi-graph/<path:filename>') @app.route('/munin/cgi-graph/<path:filename>')
@authorized_personnel_only @authorized_personnel_only_via_cookie
def munin_cgi(filename): def munin_cgi(filename):
""" Relay munin cgi dynazoom requests """ Relay munin cgi dynazoom requests
/usr/lib/munin/cgi/munin-cgi-graph is a perl cgi script in the munin package /usr/lib/munin/cgi/munin-cgi-graph is a perl cgi script in the munin package
@ -723,34 +764,21 @@ def log_failed_login(request):
# APP # APP
if __name__ == '__main__': if __name__ == '__main__':
logging_level = logging.DEBUG
if "DEBUG" in os.environ: if "DEBUG" in os.environ:
# Turn on Flask debugging. # Turn on Flask debugging.
app.debug = True app.debug = True
logging_level = logging.DEBUG
# Use a stable-ish master API key so that login sessions don't restart on each run.
# Use /etc/machine-id to seed the key with a stable secret, but add something
# and hash it to prevent possibly exposing the machine id, using the time so that
# the key is not valid indefinitely.
import hashlib
with open("/etc/machine-id") as f:
api_key = f.read()
api_key += "|" + str(int(time.time() / (60*60*2)))
hasher = hashlib.sha1()
hasher.update(api_key.encode("ascii"))
auth_service.key = hasher.hexdigest()
if "APIKEY" in os.environ: auth_service.key = os.environ["APIKEY"]
if not app.debug: if not app.debug:
app.logger.addHandler(utils.create_syslog_handler()) app.logger.addHandler(utils.create_syslog_handler())
# For testing on the command line, you can use `curl` like so: #app.logger.info('API key: ' + auth_service.key)
# curl --user $(</var/lib/mailinabox/api.key): http://localhost:10222/mail/users
auth_service.write_key()
# For testing in the browser, you can copy the API key that's output to the logging.basicConfig(level=logging_level, format='%(levelname)s:%(module)s.%(funcName)s %(message)s')
# debug console and enter that as the username logging.info('Logging level set to %s', logging.getLevelName(logging_level))
app.logger.info('API key: ' + auth_service.key)
# Start the application server. Listens on 127.0.0.1 (IPv4 only). # Start the application server. Listens on 127.0.0.1 (IPv4 only).
app.run(port=10222) app.run(port=10222)


@ -14,7 +14,7 @@ source /etc/mailinabox.conf
# On Mondays, i.e. once a week, send the administrator a report of total emails # On Mondays, i.e. once a week, send the administrator a report of total emails
# sent and received so the admin might notice server abuse. # sent and received so the admin might notice server abuse.
if [ `date "+%u"` -eq 1 ]; then if [ `date "+%u"` -eq 1 ]; then
management/mail_log.py -t week | management/email_administrator.py "Mail-in-a-Box Usage Report" management/mail_log.py -t week -r -s -l -g -b | management/email_administrator.py "Mail-in-a-Box Usage Report"
/usr/sbin/pflogsumm -u 5 -h 5 --problems_first /var/log/mail.log.1 | management/email_administrator.py "Postfix log analysis summary" /usr/sbin/pflogsumm -u 5 -h 5 --problems_first /var/log/mail.log.1 | management/email_administrator.py "Postfix log analysis summary"
fi fi


@ -8,6 +8,7 @@ import sys, os, os.path, urllib.parse, datetime, re, hashlib, base64
import ipaddress import ipaddress
import rtyaml import rtyaml
import dns.resolver import dns.resolver
import logging
from utils import shell, load_env_vars_from_file, safe_domain_name, sort_domains from utils import shell, load_env_vars_from_file, safe_domain_name, sort_domains
from ssl_certificates import get_ssl_certificates, check_certificate from ssl_certificates import get_ssl_certificates, check_certificate
@ -105,21 +106,22 @@ def do_dns_update(env, force=False):
if len(updated_domains) > 0: if len(updated_domains) > 0:
shell('check_call', ["/usr/sbin/service", "nsd", "restart"]) shell('check_call', ["/usr/sbin/service", "nsd", "restart"])
# Write the OpenDKIM configuration tables for all of the mail domains. # Write the DKIM configuration tables for all of the mail domains.
from mailconfig import get_mail_domains from mailconfig import get_mail_domains
if write_opendkim_tables(get_mail_domains(env), env):
# Settings changed. Kick opendkim. if write_dkim_tables(get_mail_domains(env), env):
shell('check_call', ["/usr/sbin/service", "opendkim", "restart"]) # Settings changed. Kick dkimpy.
shell('check_call', ["/usr/sbin/service", "dkimpy-milter", "restart"])
if len(updated_domains) == 0: if len(updated_domains) == 0:
# If this is the only thing that changed? # If this is the only thing that changed?
updated_domains.append("OpenDKIM configuration") updated_domains.append("DKIM configuration")
# Clear bind9's DNS cache so our own DNS resolver is up to date. # Clear unbound's DNS cache so our own DNS resolver is up to date.
# (ignore errors with trap=True) # (ignore errors with trap=True)
shell('check_call', ["/usr/sbin/rndc", "flush"], trap=True) shell('check_call', ["/usr/sbin/unbound-control", "flush_zone", "."], trap=True, capture_stdout=False)
if len(updated_domains) == 0: if len(updated_domains) == 0:
# if nothing was updated (except maybe OpenDKIM's files), don't show any output # if nothing was updated (except maybe DKIM's files), don't show any output
return "" return ""
else: else:
return "updated DNS: " + ",".join(updated_domains) + "\n" return "updated DNS: " + ",".join(updated_domains) + "\n"
@ -303,10 +305,18 @@ def build_zone(domain, domain_properties, additional_records, env, is_zone=True)
if not has_rec(None, "TXT", prefix="v=spf1 "): if not has_rec(None, "TXT", prefix="v=spf1 "):
records.append((None, "TXT", 'v=spf1 mx -all', "Recommended. Specifies that only the box is permitted to send @%s mail." % domain)) records.append((None, "TXT", 'v=spf1 mx -all', "Recommended. Specifies that only the box is permitted to send @%s mail." % domain))
# Append the DKIM TXT record to the zone as generated by OpenDKIM. # Append the DKIM TXT record to the zone as generated by DKIMpy.
# Skip if the user has set a DKIM record already. # Skip if the user has set a DKIM record already.
opendkim_record_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/mail.txt') dkim_record_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/box-rsa.dns')
with open(opendkim_record_file) as orf: with open(dkim_record_file) as orf:
m = re.match(r'(\S+)\s+IN\s+TXT\s+\( ((?:"[^"]+"\s+)+)\)', orf.read(), re.S)
val = "".join(re.findall(r'"([^"]+)"', m.group(2)))
if not has_rec(m.group(1), "TXT", prefix="v=DKIM1; "):
records.append((m.group(1), "TXT", val, "Recommended. Provides a way for recipients to verify that this machine sent @%s mail." % domain))
# Also add a ed25519 DKIM record
dkim_record_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/box-ed25519.dns')
with open(dkim_record_file) as orf:
m = re.match(r'(\S+)\s+IN\s+TXT\s+\( ((?:"[^"]+"\s+)+)\)', orf.read(), re.S) m = re.match(r'(\S+)\s+IN\s+TXT\s+\( ((?:"[^"]+"\s+)+)\)', orf.read(), re.S)
val = "".join(re.findall(r'"([^"]+)"', m.group(2))) val = "".join(re.findall(r'"([^"]+)"', m.group(2)))
if not has_rec(m.group(1), "TXT", prefix="v=DKIM1; "): if not has_rec(m.group(1), "TXT", prefix="v=DKIM1; "):
@ -501,7 +511,7 @@ def write_nsd_zone(domain, zonefile, records, env, force):
# @ the PRIMARY_HOSTNAME. Hopefully that's legit. # @ the PRIMARY_HOSTNAME. Hopefully that's legit.
# #
# For the refresh through TTL fields, a good reference is: # For the refresh through TTL fields, a good reference is:
# http://www.peerwisdom.org/2013/05/15/dns-understanding-the-soa-record/ # https://www.ripe.net/publications/docs/ripe-203
# Time To Refresh How long in seconds a nameserver should wait prior to checking for a Serial Number # Time To Refresh How long in seconds a nameserver should wait prior to checking for a Serial Number
# increase within the primary zone file. An increased Serial Number means a transfer is needed to sync # increase within the primary zone file. An increased Serial Number means a transfer is needed to sync
@ -670,7 +680,7 @@ def get_dns_zonefile(zone, env):
def write_nsd_conf(zonefiles, additional_records, env): def write_nsd_conf(zonefiles, additional_records, env):
# Write the list of zones to a configuration file. # Write the list of zones to a configuration file.
nsd_conf_file = "/etc/nsd/zones.conf" nsd_conf_file = "/etc/nsd/nsd.conf.d/zones.conf"
nsdconf = "" nsdconf = ""
# Append the zones. # Append the zones.
@ -817,14 +827,15 @@ def sign_zone(domain, zonefile, env):
######################################################################## ########################################################################
def write_opendkim_tables(domains, env): def write_dkim_tables(domains, env):
# Append a record to OpenDKIM's KeyTable and SigningTable for each domain # Append a record to DKIMpy's KeyTable and SigningTable for each domain
# that we send mail from (zones and all subdomains). # that we send mail from (zones and all subdomains).
opendkim_key_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/mail.private') dkim_rsa_key_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/box-rsa.key')
dkim_ed_key_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/box-ed25519.key')
if not os.path.exists(opendkim_key_file): if not os.path.exists(dkim_rsa_key_file) or not os.path.exists(dkim_ed_key_file):
# Looks like OpenDKIM is not installed. # Looks like DKIMpy is not installed.
return False return False
config = { config = {
@ -846,7 +857,12 @@ def write_opendkim_tables(domains, env):
# signing domain must match the sender's From: domain. # signing domain must match the sender's From: domain.
"KeyTable": "KeyTable":
"".join( "".join(
"{domain} {domain}:mail:{key_file}\n".format(domain=domain, key_file=opendkim_key_file) "{domain} {domain}:box-rsa:{key_file}\n".format(domain=domain, key_file=dkim_rsa_key_file)
for domain in domains
),
"KeyTableEd25519":
"".join(
"{domain} {domain}:box-ed25519:{key_file}\n".format(domain=domain, key_file=dkim_ed_key_file)
for domain in domains for domain in domains
), ),
} }
@ -854,18 +870,18 @@ def write_opendkim_tables(domains, env):
did_update = False did_update = False
for filename, content in config.items(): for filename, content in config.items():
# Don't write the file if it doesn't need an update. # Don't write the file if it doesn't need an update.
if os.path.exists("/etc/opendkim/" + filename): if os.path.exists("/etc/dkim/" + filename):
with open("/etc/opendkim/" + filename) as f: with open("/etc/dkim/" + filename) as f:
if f.read() == content: if f.read() == content:
continue continue
# The contents needs to change. # The contents needs to change.
with open("/etc/opendkim/" + filename, "w") as f: with open("/etc/dkim/" + filename, "w") as f:
f.write(content) f.write(content)
did_update = True did_update = True
# Return whether the files changed. If they didn't change, there's # Return whether the files changed. If they didn't change, there's
# no need to kick the opendkim process. # no need to kick the dkimpy process.
return did_update return did_update
######################################################################## ########################################################################
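The update loop above follows a write-only-if-changed pattern so the caller knows whether dkimpy needs to be restarted. A condensed sketch of that pattern (the helper name is illustrative):

    import os

    def write_if_changed(path, content):
        # Rewrite the file only when its contents differ, and report whether
        # anything changed so the caller can decide to restart the service.
        if os.path.exists(path):
            with open(path) as f:
                if f.read() == content:
                    return False
        with open(path, "w") as f:
            f.write(content)
        return True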
@ -1049,6 +1065,7 @@ def set_custom_dns_record(qname, rtype, value, action, env):
def get_secondary_dns(custom_dns, mode=None): def get_secondary_dns(custom_dns, mode=None):
resolver = dns.resolver.get_default_resolver() resolver = dns.resolver.get_default_resolver()
resolver.timeout = 10 resolver.timeout = 10
resolver.lifetime = 10
values = [] values = []
for qname, rtype, value in custom_dns: for qname, rtype, value in custom_dns:
@ -1066,10 +1083,17 @@ def get_secondary_dns(custom_dns, mode=None):
# doesn't. # doesn't.
if not hostname.startswith("xfr:"): if not hostname.startswith("xfr:"):
if mode == "xfr": if mode == "xfr":
response = dns.resolver.resolve(hostname+'.', "A", raise_on_no_answer=False) try:
response = resolver.resolve(hostname+'.', "A", raise_on_no_answer=False)
values.extend(map(str, response)) values.extend(map(str, response))
response = dns.resolver.resolve(hostname+'.', "AAAA", raise_on_no_answer=False) except dns.exception.DNSException:
logging.debug("Secondary dns A lookup exception %s", hostname)
try:
response = resolver.resolve(hostname+'.', "AAAA", raise_on_no_answer=False)
values.extend(map(str, response)) values.extend(map(str, response))
except dns.exception.DNSException:
logging.debug("Secondary dns AAAA lookup exception %s", hostname)
continue continue
values.append(hostname) values.append(hostname)
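For zone-transfer hosts, both address families are now queried and a failure in one family no longer aborts the other. A short sketch of that behavior, assuming dnspython 2.x (the function name is illustrative):

    import logging
    import dns.resolver, dns.exception

    def xfr_addresses(resolver, hostname):
        # Collect IPv4 and IPv6 addresses for a zone-transfer host, logging and
        # skipping whichever lookups fail instead of raising.
        values = []
        for rtype in ("A", "AAAA"):
            try:
                response = resolver.resolve(hostname + ".", rtype, raise_on_no_answer=False)
                values.extend(map(str, response))
            except dns.exception.DNSException:
                logging.debug("Secondary DNS %s lookup failed for %s", rtype, hostname)
        return values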
@ -1087,16 +1111,32 @@ def set_secondary_dns(hostnames, env):
# Validate that all hostnames are valid and that all zone-xfer IP addresses are valid. # Validate that all hostnames are valid and that all zone-xfer IP addresses are valid.
resolver = dns.resolver.get_default_resolver() resolver = dns.resolver.get_default_resolver()
resolver.timeout = 5 resolver.timeout = 5
resolver.lifetime = 5
for item in hostnames: for item in hostnames:
if not item.startswith("xfr:"): if not item.startswith("xfr:"):
# Resolve hostname. # Resolve hostname.
tries = 2
while tries > 0:
tries = tries - 1
try: try:
response = resolver.resolve(item, "A") response = resolver.resolve(item, "A")
tries = 0
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer): except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
logging.debug('Error on resolving ipv4 address, trying ipv6')
try: try:
response = resolver.query(item, "AAAA") response = resolver.resolve(item, "AAAA")
tries = 0
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer): except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
raise ValueError("Could not resolve the IP address of %s." % item) raise ValueError("Could not resolve the IP address of %s." % item)
except (dns.resolver.Timeout):
logging.debug('Timeout on resolving ipv6 address')
if tries < 1:
raise ValueError("Could not resolve the IP address of %s due to timeout." % item)
except (dns.resolver.Timeout):
logging.debug('Timeout on resolving ipv4 address')
if tries < 1:
raise ValueError("Could not resolve the IP address of %s due to timeout." % item)
else: else:
# Validate IP address. # Validate IP address.
try: try:
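The validation step wraps the lookups in a small bounded-retry loop so a single timeout does not immediately fail the configuration change. A condensed sketch of that retry-and-fallback logic, assuming dnspython 2.x (the helper name is illustrative and the flow is simplified relative to the code above):

    import logging
    import dns.resolver, dns.exception

    def resolve_with_retry(hostname, tries=2, timeout=5):
        resolver = dns.resolver.get_default_resolver()
        resolver.timeout = timeout    # per-nameserver timeout
        resolver.lifetime = timeout   # overall deadline per query
        while tries > 0:
            tries -= 1
            for rtype in ("A", "AAAA"):
                try:
                    return str(next(iter(resolver.resolve(hostname, rtype))))
                except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                    continue
                except dns.exception.Timeout:
                    logging.debug("Timeout resolving %s/%s", hostname, rtype)
            # all lookups failed this round; loop to retry if attempts remain
        raise ValueError("Could not resolve the IP address of %s." % hostname)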
@ -2,7 +2,7 @@
# Reads in STDIN. If the stream is not empty, mail it to the system administrator. # Reads in STDIN. If the stream is not empty, mail it to the system administrator.
import sys import sys, traceback
import html import html
import smtplib import smtplib
@ -29,6 +29,7 @@ try:
content = sys.stdin.read().strip() content = sys.stdin.read().strip()
except: except:
print("error occured while cleaning input text") print("error occured while cleaning input text")
traceback.print_exc()
sys.exit(1) sys.exit(1)
# If there's nothing coming in, just exit. # If there's nothing coming in, just exit.
@ -376,7 +376,7 @@ def scan_mail_log_line(line, collector):
if SCAN_BLOCKED: if SCAN_BLOCKED:
scan_postfix_smtpd_line(date, log, collector) scan_postfix_smtpd_line(date, log, collector)
elif service in ("postfix/qmgr", "postfix/pickup", "postfix/cleanup", "postfix/scache", elif service in ("postfix/qmgr", "postfix/pickup", "postfix/cleanup", "postfix/scache",
"spampd", "postfix/anvil", "postfix/master", "opendkim", "postfix/lmtp", "spampd", "postfix/anvil", "postfix/master", "dkimpy", "postfix/lmtp",
"postfix/tlsmgr", "anvil"): "postfix/tlsmgr", "anvil"):
# nothing to look at # nothing to look at
return True return True
@ -549,8 +549,9 @@ def scan_postfix_submission_line(date, log, collector):
""" """
# Match both the 'plain' and 'login' sasl methods, since both authentication methods are # Match both the 'plain' and 'login' sasl methods, since both authentication methods are
# allowed by Dovecot # allowed by Dovecot. Exclude trailing comma after the username when additional fields
m = re.match("([A-Z0-9]+): client=(\S+), sasl_method=(PLAIN|LOGIN), sasl_username=(\S+)", log) # follow after.
m = re.match("([A-Z0-9]+): client=(\S+), sasl_method=(PLAIN|LOGIN), sasl_username=(\S+)(?<!,)", log)
if m: if m:
_, client, method, user = m.groups() _, client, method, user = m.groups()
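The added negative lookbehind keeps a trailing comma out of the captured username when postfix logs additional fields after sasl_username. A small demonstration (the log line below is fabricated for illustration):

    import re

    SASL_RE = re.compile(r"([A-Z0-9]+): client=(\S+), sasl_method=(PLAIN|LOGIN), sasl_username=(\S+)(?<!,)")

    line = ("9C03A3F8A1: client=mail.example.net[203.0.113.5], "
            "sasl_method=PLAIN, sasl_username=alice@example.com, sasl_sender=<>")
    m = SASL_RE.match(line)
    print(m.group(4))  # prints alice@example.com (no trailing comma)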
@ -586,7 +587,7 @@ def scan_postfix_submission_line(date, log, collector):
def readline(filename): def readline(filename):
""" A generator that returns the lines of a file """ A generator that returns the lines of a file
""" """
with open(filename) as file: with open(filename, errors='replace') as file:
while True: while True:
line = file.readline() line = file.readline()
if not line: if not line:
@ -16,8 +16,8 @@ import idna
def validate_email(email, mode=None): def validate_email(email, mode=None):
# Checks that an email address is syntactically valid. Returns True/False. # Checks that an email address is syntactically valid. Returns True/False.
# Until Postfix supports SMTPUTF8, an email address may contain ASCII # An email address may contain ASCII characters only because Dovecot's
# characters only; IDNs must be IDNA-encoded. # authentication mechanism gets confused with other character encodings.
# #
# When mode=="user", we're checking that this can be a user account name. # When mode=="user", we're checking that this can be a user account name.
# Dovecot has tighter restrictions - letters, numbers, underscore, and # Dovecot has tighter restrictions - letters, numbers, underscore, and
@ -186,9 +186,9 @@ def get_admins(env):
return users return users
def get_mail_aliases(env): def get_mail_aliases(env):
# Returns a sorted list of tuples of (address, forward-tos, permitted-senders). # Returns a sorted list of tuples of (address, forward-tos, permitted-senders, auto).
c = open_database(env) c = open_database(env)
c.execute('SELECT source, destination, permitted_senders FROM aliases') c.execute('SELECT source, destination, permitted_senders, 0 as auto FROM aliases UNION SELECT source, destination, permitted_senders, 1 as auto FROM auto_aliases')
aliases = { row[0]: row for row in c.fetchall() } # make dict aliases = { row[0]: row for row in c.fetchall() } # make dict
# put in a canonical order: sort by domain, then by email address lexicographically # put in a canonical order: sort by domain, then by email address lexicographically
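The rewritten query merges manual and automatic aliases with a literal 0/1 column so downstream code can tell them apart. A minimal sketch of the same idea against an sqlite connection (the table layout is assumed from the queries in this diff):

    import sqlite3

    def get_all_aliases(conn):
        # The literal 0/1 "auto" column distinguishes hand-made aliases from the
        # auto-generated ones kept in the separate auto_aliases table.
        c = conn.cursor()
        c.execute("""SELECT source, destination, permitted_senders, 0 AS auto FROM aliases
                     UNION
                     SELECT source, destination, permitted_senders, 1 AS auto FROM auto_aliases""")
        return {row[0]: row for row in c.fetchall()}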
@ -208,7 +208,7 @@ def get_mail_aliases_ex(env):
# address_display: "name@domain.tld", # full Unicode # address_display: "name@domain.tld", # full Unicode
# forwards_to: ["user1@domain.com", "receiver-only1@domain.com", ...], # forwards_to: ["user1@domain.com", "receiver-only1@domain.com", ...],
# permitted_senders: ["user1@domain.com", "sender-only1@domain.com", ...] OR null, # permitted_senders: ["user1@domain.com", "sender-only1@domain.com", ...] OR null,
# required: True|False # auto: True|False
# }, # },
# ... # ...
# ] # ]
@ -216,12 +216,13 @@ def get_mail_aliases_ex(env):
# ... # ...
# ] # ]
required_aliases = get_required_aliases(env)
domains = {} domains = {}
for address, forwards_to, permitted_senders in get_mail_aliases(env): for address, forwards_to, permitted_senders, auto in get_mail_aliases(env):
# skip auto domain maps since these are not informative in the control panel's aliases list
if auto and address.startswith("@"): continue
# get alias info # get alias info
domain = get_domain(address) domain = get_domain(address)
required = (address in required_aliases)
# add to list # add to list
if not domain in domains: if not domain in domains:
@ -234,7 +235,7 @@ def get_mail_aliases_ex(env):
"address_display": prettify_idn_email_address(address), "address_display": prettify_idn_email_address(address),
"forwards_to": [prettify_idn_email_address(r.strip()) for r in forwards_to.split(",")], "forwards_to": [prettify_idn_email_address(r.strip()) for r in forwards_to.split(",")],
"permitted_senders": [prettify_idn_email_address(s.strip()) for s in permitted_senders.split(",")] if permitted_senders is not None else None, "permitted_senders": [prettify_idn_email_address(s.strip()) for s in permitted_senders.split(",")] if permitted_senders is not None else None,
"required": required, "auto": bool(auto),
}) })
# Sort domains. # Sort domains.
@ -242,7 +243,7 @@ def get_mail_aliases_ex(env):
# Sort aliases within each domain first by required-ness then lexicographically by address. # Sort aliases within each domain first by required-ness then lexicographically by address.
for domain in domains: for domain in domains:
domain["aliases"].sort(key = lambda alias : (alias["required"], alias["address"])) domain["aliases"].sort(key = lambda alias : (alias["auto"], alias["address"]))
return domains return domains
def get_domain(emailaddr, as_unicode=True): def get_domain(emailaddr, as_unicode=True):
@ -261,11 +262,12 @@ def get_domain(emailaddr, as_unicode=True):
def get_mail_domains(env, filter_aliases=lambda alias : True, users_only=False): def get_mail_domains(env, filter_aliases=lambda alias : True, users_only=False):
# Returns the domain names (IDNA-encoded) of all of the email addresses # Returns the domain names (IDNA-encoded) of all of the email addresses
# configured on the system. If users_only is True, only return domains # configured on the system. If users_only is True, only return domains
# with email addresses that correspond to user accounts. # with email addresses that correspond to user accounts. Exclude Unicode
# forms of domain names listed in the automatic aliases table.
domains = [] domains = []
domains.extend([get_domain(login, as_unicode=False) for login in get_mail_users(env)]) domains.extend([get_domain(login, as_unicode=False) for login in get_mail_users(env)])
if not users_only: if not users_only:
domains.extend([get_domain(address, as_unicode=False) for address, *_ in get_mail_aliases(env) if filter_aliases(address) ]) domains.extend([get_domain(address, as_unicode=False) for address, _, _, auto in get_mail_aliases(env) if filter_aliases(address) and not auto ])
return set(domains) return set(domains)
def add_mail_user(email, pw, privs, env): def add_mail_user(email, pw, privs, env):
@ -512,6 +514,13 @@ def remove_mail_alias(address, env, do_kick=True):
# Update things in case any domains are removed. # Update things in case any domains are removed.
return kick(env, "alias removed") return kick(env, "alias removed")
def add_auto_aliases(aliases, env):
conn, c = open_database(env, with_connection=True)
c.execute("DELETE FROM auto_aliases");
for source, destination in aliases.items():
c.execute("INSERT INTO auto_aliases (source, destination) VALUES (?, ?)", (source, destination))
conn.commit()
def get_system_administrator(env): def get_system_administrator(env):
return "administrator@" + env['PRIMARY_HOSTNAME'] return "administrator@" + env['PRIMARY_HOSTNAME']
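add_auto_aliases() above rebuilds the auto_aliases table wholesale rather than diffing it. A standalone sketch of that rebuild using the standard sqlite3 module (the table is assumed to already exist with source and destination columns):

    import sqlite3

    def replace_auto_aliases(db_path, aliases):
        # Clear and repopulate auto_aliases from a {source: destination} dict,
        # committing once so readers never see a half-written table.
        conn = sqlite3.connect(db_path)
        c = conn.cursor()
        c.execute("DELETE FROM auto_aliases")
        c.executemany("INSERT INTO auto_aliases (source, destination) VALUES (?, ?)",
                      list(aliases.items()))
        conn.commit()
        conn.close()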
@ -558,39 +567,34 @@ def kick(env, mail_result=None):
if mail_result is not None: if mail_result is not None:
results.append(mail_result + "\n") results.append(mail_result + "\n")
# Ensure every required alias exists. auto_aliases = { }
existing_users = get_mail_users(env) # Map required aliases to the administrator alias (which should be created manually).
existing_alias_records = get_mail_aliases(env)
existing_aliases = set(a for a, *_ in existing_alias_records) # just first entry in tuple
required_aliases = get_required_aliases(env)
def ensure_admin_alias_exists(address):
# If a user account exists with that address, we're good.
if address in existing_users:
return
# If the alias already exists, we're good.
if address in existing_aliases:
return
# Doesn't exist.
administrator = get_system_administrator(env) administrator = get_system_administrator(env)
if address == administrator: return # don't make an alias from the administrator to itself --- this alias must be created manually required_aliases = get_required_aliases(env)
add_mail_alias(address, administrator, "", env, do_kick=False) for alias in required_aliases:
if administrator not in existing_aliases: return # don't report the alias in output if the administrator alias isn't in yet -- this is a hack to supress confusing output on initial setup if alias == administrator: continue # don't make an alias from the administrator to itself --- this alias must be created manually
results.append("added alias %s (=> %s)\n" % (address, administrator)) auto_aliases[alias] = administrator
for address in required_aliases: # Add domain maps from Unicode forms of IDNA domains to the ASCII forms stored in the alias table.
ensure_admin_alias_exists(address) for domain in get_mail_domains(env):
try:
domain_unicode = idna.decode(domain.encode("ascii"))
if domain == domain_unicode: continue # not an IDNA/Unicode domain
auto_aliases["@" + domain_unicode] = "@" + domain
except (ValueError, UnicodeError, idna.IDNAError):
continue
# Remove auto-generated postmaster/admin on domains we no add_auto_aliases(auto_aliases, env)
# longer have any other email addresses for.
for address, forwards_to, *_ in existing_alias_records: # Remove auto-generated postmaster/admin/abuse alises from the main aliases table.
# They are now stored in the auto_aliases table.
for address, forwards_to, permitted_senders, auto in get_mail_aliases(env):
user, domain = address.split("@") user, domain = address.split("@")
if user in ("postmaster", "admin", "abuse") \ if user in ("postmaster", "admin", "abuse") \
and address not in required_aliases \ and address not in required_aliases \
and forwards_to == get_system_administrator(env): and forwards_to == get_system_administrator(env) \
and not auto:
remove_mail_alias(address, env, do_kick=False) remove_mail_alias(address, env, do_kick=False)
results.append("removed alias %s (was to %s; domain no longer used for email)\n" % (address, forwards_to)) results.append("removed alias %s (was to %s; domain no longer used for email)\n" % (address, forwards_to))
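Part of the new auto-alias generation maps the Unicode form of each IDNA domain to its ASCII form. A short sketch of that mapping using the idna package (the function name is illustrative):

    import idna

    def unicode_domain_map(ascii_domains):
        # Build {"@unicode-form": "@ascii-form"} catch-all maps for IDNA domains,
        # skipping domains that are plain ASCII or fail to decode.
        mapping = {}
        for domain in ascii_domains:
            try:
                domain_unicode = idna.decode(domain.encode("ascii"))
            except (ValueError, UnicodeError, idna.IDNAError):
                continue
            if domain_unicode != domain:
                mapping["@" + domain_unicode] = "@" + domain
        return mapping

    # unicode_domain_map(["xn--bcher-kva.example"]) -> {"@bücher.example": "@xn--bcher-kva.example"}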
@ -110,14 +110,6 @@ def validate_auth_mfa(email, request, env):
if len(mfa_state) == 0: if len(mfa_state) == 0:
return (True, []) return (True, [])
# munin routes are proxied by our control panel. We do not have
# full control over their routes so credentials are supplied via
# a basic HTTP authentication prompt.
# There is neither a way to input a mfa credential there nor can we pass
# the user_api_key from localStorage so mfa should be disabled for these routes.
if request.full_path.startswith("/munin"):
return (True, [])
# Try the enabled MFA modes. # Try the enabled MFA modes.
hints = set() hints = set()
for mfa_mode in mfa_state: for mfa_mode in mfa_state:
@ -12,6 +12,7 @@ import dateutil.parser, dateutil.tz
import idna import idna
import psutil import psutil
import postfix_mta_sts_resolver.resolver import postfix_mta_sts_resolver.resolver
import logging
from dns_update import get_dns_zones, build_tlsa_record, get_custom_dns_config, get_secondary_dns, get_custom_dns_records from dns_update import get_dns_zones, build_tlsa_record, get_custom_dns_config, get_secondary_dns, get_custom_dns_records
from web_update import get_web_domains, get_domains_with_a_records from web_update import get_web_domains, get_domains_with_a_records
@ -22,13 +23,12 @@ from utils import shell, sort_domains, load_env_vars_from_file, load_settings
def get_services(): def get_services():
return [ return [
{ "name": "Local DNS (bind9)", "port": 53, "public": False, }, { "name": "Local DNS (unbound)", "port": 53, "public": False, },
#{ "name": "NSD Control", "port": 8952, "public": False, }, { "name": "Local DNS Control (unbound)", "port": 953, "public": False, },
{ "name": "Local DNS Control (bind9/rndc)", "port": 953, "public": False, },
{ "name": "Dovecot LMTP LDA", "port": 10026, "public": False, }, { "name": "Dovecot LMTP LDA", "port": 10026, "public": False, },
{ "name": "Postgrey", "port": 10023, "public": False, }, { "name": "Postgrey", "port": 10023, "public": False, },
{ "name": "Spamassassin", "port": 10025, "public": False, }, { "name": "Spamassassin", "port": 10025, "public": False, },
{ "name": "OpenDKIM", "port": 8891, "public": False, }, { "name": "DKIMpy", "port": 8892, "public": False, },
{ "name": "OpenDMARC", "port": 8893, "public": False, }, { "name": "OpenDMARC", "port": 8893, "public": False, },
{ "name": "Mail-in-a-Box Management Daemon", "port": 10222, "public": False, }, { "name": "Mail-in-a-Box Management Daemon", "port": 10222, "public": False, },
{ "name": "SSH Login (ssh)", "port": get_ssh_port(), "public": True, }, { "name": "SSH Login (ssh)", "port": get_ssh_port(), "public": True, },
@ -49,15 +49,15 @@ def run_checks(rounded_values, env, output, pool, domains_to_check=None):
# check that services are running # check that services are running
if not run_services_checks(env, output, pool): if not run_services_checks(env, output, pool):
# If critical services are not running, stop. If bind9 isn't running, # If critical services are not running, stop. If unbound isn't running,
# all later DNS checks will timeout and that will take forever to # all later DNS checks will timeout and that will take forever to
# go through, and if running over the web will cause a fastcgi timeout. # go through, and if running over the web will cause a fastcgi timeout.
return return
# clear bind9's DNS cache so our DNS checks are up to date # clear unbound's DNS cache so our DNS checks are up to date
# (ignore errors; if bind9/rndc isn't running we'd already report # (ignore errors; if unbound isn't running we'd already report
# that in run_services checks.) # that in run_services checks.)
shell('check_call', ["/usr/sbin/rndc", "flush"], trap=True) shell('check_call', ["/usr/sbin/unbound-control", "flush_zone", "."], trap=True, capture_stdout=False)
run_system_checks(rounded_values, env, output) run_system_checks(rounded_values, env, output)
@ -73,6 +73,9 @@ def get_ssh_port():
except FileNotFoundError: except FileNotFoundError:
# sshd is not installed. That's ok. # sshd is not installed. That's ok.
return None return None
except subprocess.CalledProcessError:
# error while calling shell command
return None
returnNext = False returnNext = False
for e in output.split(): for e in output.split():
@ -293,7 +296,7 @@ def run_network_checks(env, output):
# by a spammer, or the user may be deploying on a residential network. We # by a spammer, or the user may be deploying on a residential network. We
# will not be able to reliably send mail in these cases. # will not be able to reliably send mail in these cases.
rev_ip4 = ".".join(reversed(env['PUBLIC_IP'].split('.'))) rev_ip4 = ".".join(reversed(env['PUBLIC_IP'].split('.')))
zen = query_dns(rev_ip4+'.zen.spamhaus.org', 'A', nxdomain=None) zen = query_dns(rev_ip4+'.zen.spamhaus.org', 'A', nxdomain=None, retry = False)
if zen is None: if zen is None:
output.print_ok("IP address is not blacklisted by zen.spamhaus.org.") output.print_ok("IP address is not blacklisted by zen.spamhaus.org.")
elif zen == "[timeout]": elif zen == "[timeout]":
@ -547,6 +550,9 @@ def check_dns_zone(domain, env, output, dns_zonefiles):
# Choose the first IP if nameserver returns multiple # Choose the first IP if nameserver returns multiple
ns_ip = ns_ips.split('; ')[0] ns_ip = ns_ips.split('; ')[0]
if ns_ip == '[Not Set]':
output.print_error("Secondary nameserver %s could not be resolved correctly. (dns result: %s used %s)" % (ns, ns_ips, ns_ip))
else:
# Now query it to see what it says about this domain. # Now query it to see what it says about this domain.
ip = query_dns(domain, "A", at=ns_ip, nxdomain=None) ip = query_dns(domain, "A", at=ns_ip, nxdomain=None)
if ip == correct_ip: if ip == correct_ip:
@ -626,14 +632,16 @@ def check_dnssec(domain, env, output, dns_zonefiles, is_checking_primary=False):
# #
# But it may not be preferred. Only algorithm 13 is preferred. Warn if any of the # But it may not be preferred. Only algorithm 13 is preferred. Warn if any of the
# matched zones uses a different algorithm. # matched zones uses a different algorithm.
if set(r[1] for r in matched_ds) == { '13' }: # all are alg 13 if set(r[1] for r in matched_ds) == { '13' } and set(r[2] for r in matched_ds) <= { '2', '4' }: # all are alg 13 and digest type 2 or 4
output.print_ok("DNSSEC 'DS' record is set correctly at registrar.") output.print_ok("DNSSEC 'DS' record is set correctly at registrar.")
return return
elif '13' in set(r[1] for r in matched_ds): # some but not all are alg 13 elif len([r for r in matched_ds if r[1] == '13' and r[2] in ( '2', '4' )]) > 0: # some but not all are alg 13
output.print_ok("DNSSEC 'DS' record is set correctly at registrar. (Records using algorithm other than ECDSAP256SHA256 should be removed.)") output.print_ok("DNSSEC 'DS' record is set correctly at registrar. (Records using algorithm other than ECDSAP256SHA256 and digest types other than SHA-256/384 should be removed.)")
return return
else: # no record uses alg 13 else: # no record uses alg 13
output.print_warning("DNSSEC 'DS' record set at registrar is valid but should be updated to ECDSAP256SHA256 (see below).") output.print_warning("""DNSSEC 'DS' record set at registrar is valid but should be updated to ECDSAP256SHA256 and SHA-256 (see below).
IMPORTANT: Do not delete existing DNSSEC 'DS' records for this domain until confirmation that the new DNSSEC 'DS' record
for this domain is valid.""")
else: else:
if is_checking_primary: if is_checking_primary:
output.print_error("""The DNSSEC 'DS' record for %s is incorrect. See further details below.""" % domain) output.print_error("""The DNSSEC 'DS' record for %s is incorrect. See further details below.""" % domain)
@ -644,7 +652,8 @@ def check_dnssec(domain, env, output, dns_zonefiles, is_checking_primary=False):
output.print_line("""Follow the instructions provided by your domain name registrar to set a DS record. output.print_line("""Follow the instructions provided by your domain name registrar to set a DS record.
Registrars support different sorts of DS records. Use the first option that works:""") Registrars support different sorts of DS records. Use the first option that works:""")
preferred_ds_order = [(7, 1), (7, 2), (8, 4), (13, 4), (8, 1), (8, 2), (13, 1), (13, 2)] # low to high preferred_ds_order = [(7, 2), (8, 4), (13, 4), (8, 2), (13, 2)] # low to high, see https://github.com/mail-in-a-box/mailinabox/issues/1998
def preferred_ds_order_func(ds_suggestion): def preferred_ds_order_func(ds_suggestion):
k = (int(ds_suggestion['alg']), int(ds_suggestion['digalg'])) k = (int(ds_suggestion['alg']), int(ds_suggestion['digalg']))
if k in preferred_ds_order: if k in preferred_ds_order:
@ -652,11 +661,12 @@ def check_dnssec(domain, env, output, dns_zonefiles, is_checking_primary=False):
return -1 # index before first item return -1 # index before first item
output.print_line("") output.print_line("")
for i, ds_suggestion in enumerate(sorted(expected_ds_records.values(), key=preferred_ds_order_func, reverse=True)): for i, ds_suggestion in enumerate(sorted(expected_ds_records.values(), key=preferred_ds_order_func, reverse=True)):
if preferred_ds_order_func(ds_suggestion) == -1: continue # don't offer record types that the RFC says we must not offer
output.print_line("") output.print_line("")
output.print_line("Option " + str(i+1) + ":") output.print_line("Option " + str(i+1) + ":")
output.print_line("----------") output.print_line("----------")
output.print_line("Key Tag: " + ds_suggestion['keytag']) output.print_line("Key Tag: " + ds_suggestion['keytag'])
output.print_line("Key Flags: KSK") output.print_line("Key Flags: KSK / 257")
output.print_line("Algorithm: %s / %s" % (ds_suggestion['alg'], ds_suggestion['alg_name'])) output.print_line("Algorithm: %s / %s" % (ds_suggestion['alg'], ds_suggestion['alg_name']))
output.print_line("Digest Type: %s / %s" % (ds_suggestion['digalg'], ds_suggestion['digalg_name'])) output.print_line("Digest Type: %s / %s" % (ds_suggestion['digalg'], ds_suggestion['digalg_name']))
output.print_line("Digest: " + ds_suggestion['digest']) output.print_line("Digest: " + ds_suggestion['digest'])
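The status check now accepts only a fixed list of (algorithm, digest type) pairs and sorts the suggestions it prints by preference. The same filtering and ordering, condensed:

    # Allowed (algorithm, digest type) pairs, least to most preferred; pairs not
    # listed here are never suggested to the user.
    PREFERRED_DS = [(7, 2), (8, 4), (13, 4), (8, 2), (13, 2)]

    def ds_rank(ds):
        key = (int(ds['alg']), int(ds['digalg']))
        return PREFERRED_DS.index(key) if key in PREFERRED_DS else -1

    def ds_suggestions_best_first(suggestions):
        # Most preferred first, with disallowed combinations dropped.
        return [ds for ds in sorted(suggestions, key=ds_rank, reverse=True) if ds_rank(ds) >= 0]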
@ -737,7 +747,7 @@ def check_mail_domain(domain, env, output):
# Stop if the domain is listed in the Spamhaus Domain Block List. # Stop if the domain is listed in the Spamhaus Domain Block List.
# The user might have chosen a domain that was previously in use by a spammer # The user might have chosen a domain that was previously in use by a spammer
# and will not be able to reliably send mail. # and will not be able to reliably send mail.
dbl = query_dns(domain+'.dbl.spamhaus.org', "A", nxdomain=None) dbl = query_dns(domain+'.dbl.spamhaus.org', "A", nxdomain=None, retry=False)
if dbl is None: if dbl is None:
output.print_ok("Domain is not blacklisted by dbl.spamhaus.org.") output.print_ok("Domain is not blacklisted by dbl.spamhaus.org.")
elif dbl == "[timeout]": elif dbl == "[timeout]":
@ -773,7 +783,7 @@ def check_web_domain(domain, rounded_time, ssl_certificates, env, output):
# website for also needs a signed certificate. # website for also needs a signed certificate.
check_ssl_cert(domain, rounded_time, ssl_certificates, env, output) check_ssl_cert(domain, rounded_time, ssl_certificates, env, output)
def query_dns(qname, rtype, nxdomain='[Not Set]', at=None, as_list=False): def query_dns(qname, rtype, nxdomain='[Not Set]', at=None, as_list=False, retry=True):
# Make the qname absolute by appending a period. Without this, dns.resolver.query # Make the qname absolute by appending a period. Without this, dns.resolver.query
# will fall back a failed lookup to a second query with this machine's hostname # will fall back a failed lookup to a second query with this machine's hostname
# appended. This has been causing some false-positive Spamhaus reports. The # appended. This has been causing some false-positive Spamhaus reports. The
@ -783,7 +793,7 @@ def query_dns(qname, rtype, nxdomain='[Not Set]', at=None, as_list=False):
qname += "." qname += "."
# Use the default nameservers (as defined by the system, which is our locally # Use the default nameservers (as defined by the system, which is our locally
# running bind server), or if the 'at' argument is specified, use that host # running unbound server), or if the 'at' argument is specified, use that host
# as the nameserver. # as the nameserver.
resolver = dns.resolver.get_default_resolver() resolver = dns.resolver.get_default_resolver()
if at: if at:
@ -792,15 +802,28 @@ def query_dns(qname, rtype, nxdomain='[Not Set]', at=None, as_list=False):
# Set a timeout so that a non-responsive server doesn't hold us back. # Set a timeout so that a non-responsive server doesn't hold us back.
resolver.timeout = 5 resolver.timeout = 5
resolver.lifetime = 5
if retry:
tries = 2
else:
tries = 1
# Do the query. # Do the query.
while tries > 0:
tries = tries - 1
try: try:
response = resolver.resolve(qname, rtype) response = resolver.resolve(qname, rtype, search=True)
tries = 0
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer): except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
# Host did not have an answer for this query; not sure what the # Host did not have an answer for this query; not sure what the
# difference is between the two exceptions. # difference is between the two exceptions.
logging.debug("No result for dns lookup %s, %s (%d)", qname, rtype, tries)
if tries < 1:
return nxdomain return nxdomain
except dns.exception.Timeout: except dns.exception.Timeout:
logging.debug("Timeout on dns lookup %s, %s (%d)", qname, rtype, tries)
if tries < 1:
return "[timeout]" return "[timeout]"
# Normalize IP addresses. IP address --- especially IPv6 addresses --- can # Normalize IP addresses. IP address --- especially IPv6 addresses --- can
@ -1,6 +1,6 @@
<style> <style>
#alias_table .actions > * { padding-right: 3px; } #alias_table .actions > * { padding-right: 3px; }
#alias_table .alias-required .remove { display: none } #alias_table .alias-auto .actions > * { display: none }
</style> </style>
<h2>Aliases</h2> <h2>Aliases</h2>
@ -163,7 +163,7 @@ function show_aliases() {
var n = $("#alias-template").clone(); var n = $("#alias-template").clone();
n.attr('id', ''); n.attr('id', '');
if (alias.required) n.addClass('alias-required'); if (alias.auto) n.addClass('alias-auto');
n.attr('data-address', alias.address_display); // this is decoded from IDNA, but will get re-coded to IDNA on the backend n.attr('data-address', alias.address_display); // this is decoded from IDNA, but will get re-coded to IDNA on the backend
n.find('td.address').text(alias.address_display) n.find('td.address').text(alias.address_display)
for (var j = 0; j < alias.forwards_to.length; j++) for (var j = 0; j < alias.forwards_to.length; j++)
@ -38,7 +38,7 @@
<p class="alert" role="alert"> <p class="alert" role="alert">
<span class="glyphicon glyphicon-info-sign"></span> <span class="glyphicon glyphicon-info-sign"></span>
You may encounter zone file errors when attempting to create a TXT record with a long string. You may encounter zone file errors when attempting to create a TXT record with a long string.
<a href="http://tools.ietf.org/html/rfc4408#section-3.1.3">RFC 4408</a> states a TXT record is allowed to contain multiple strings, and this technique can be used to construct records that would exceed the 255-byte maximum length. <a href="https://tools.ietf.org/html/rfc4408#section-3.1.3">RFC 4408</a> states a TXT record is allowed to contain multiple strings, and this technique can be used to construct records that would exceed the 255-byte maximum length.
You may need to adopt this technique when adding DomainKeys. Use a tool like <code>named-checkzone</code> to validate your zone file. You may need to adopt this technique when adding DomainKeys. Use a tool like <code>named-checkzone</code> to validate your zone file.
</p> </p>
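The note above refers to the multi-string form of TXT records: a value longer than 255 bytes must be split into several quoted strings that resolvers rejoin. A small illustration of producing that zone-file form (the helper is a sketch, not code from this repository):

    def to_multistring_txt(value, chunk=255):
        # Split a long TXT value (for example a DKIM public key) into quoted
        # strings of at most 255 bytes, the form RFC 4408 section 3.1.3 describes.
        parts = [value[i:i + chunk] for i in range(0, len(value), chunk)]
        return "( " + " ".join('"%s"' % p for p in parts) + " )"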
@ -62,6 +62,37 @@
ol li { ol li {
margin-bottom: 1em; margin-bottom: 1em;
} }
.if-logged-in { display: none; }
.if-logged-in-admin { display: none; }
/* The below only gets used if it is supported */
@media (prefers-color-scheme: dark) {
/* Invert invert lightness but not hue */
html {
filter: invert(100%) hue-rotate(180deg);
}
/* Set explicit background color (necessary for Firefox) */
html {
background-color: #111;
}
/* Override Boostrap theme here to give more contrast. The black turns to white by the filter. */
.form-control {
color: black !important;
}
/* Revert the invert for the navbar */
button, div.navbar {
filter: invert(100%) hue-rotate(180deg);
}
/* Revert the revert for the dropdowns */
ul.dropdown-menu {
filter: invert(100%) hue-rotate(180deg);
}
}
</style> </style>
<link rel="stylesheet" href="/admin/assets/bootstrap/css/bootstrap-theme.min.css"> <link rel="stylesheet" href="/admin/assets/bootstrap/css/bootstrap-theme.min.css">
</head> </head>
@ -83,7 +114,7 @@
</div> </div>
<div class="navbar-collapse collapse"> <div class="navbar-collapse collapse">
<ul class="nav navbar-nav"> <ul class="nav navbar-nav">
<li class="dropdown admin-links"> <li class="dropdown if-logged-in-admin">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">System <b class="caret"></b></a> <a href="#" class="dropdown-toggle" data-toggle="dropdown">System <b class="caret"></b></a>
<ul class="dropdown-menu"> <ul class="dropdown-menu">
<li><a href="#system_status" onclick="return show_panel(this);">Status Checks</a></li> <li><a href="#system_status" onclick="return show_panel(this);">Status Checks</a></li>
@ -93,31 +124,36 @@
<li class="dropdown-header">Advanced Pages</li> <li class="dropdown-header">Advanced Pages</li>
<li><a href="#custom_dns" onclick="return show_panel(this);">Custom DNS</a></li> <li><a href="#custom_dns" onclick="return show_panel(this);">Custom DNS</a></li>
<li><a href="#external_dns" onclick="return show_panel(this);">External DNS</a></li> <li><a href="#external_dns" onclick="return show_panel(this);">External DNS</a></li>
<li><a href="/admin/munin" target="_blank">Munin Monitoring</a></li> <li><a href="#munin" onclick="return show_panel(this);">Munin Monitoring</a></li>
</ul> </ul>
</li> </li>
<li class="dropdown"> <li><a href="#mail-guide" onclick="return show_panel(this);" class="if-logged-in-not-admin">Mail</a></li>
<li class="dropdown if-logged-in-admin">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Mail &amp; Users <b class="caret"></b></a> <a href="#" class="dropdown-toggle" data-toggle="dropdown">Mail &amp; Users <b class="caret"></b></a>
<ul class="dropdown-menu"> <ul class="dropdown-menu">
<li><a href="#mail-guide" onclick="return show_panel(this);">Instructions</a></li> <li><a href="#mail-guide" onclick="return show_panel(this);">Instructions</a></li>
<li class="admin-links"><a href="#users" onclick="return show_panel(this);">Users</a></li> <li><a href="#users" onclick="return show_panel(this);">Users</a></li>
<li class="admin-links"><a href="#aliases" onclick="return show_panel(this);">Aliases</a></li> <li><a href="#aliases" onclick="return show_panel(this);">Aliases</a></li>
<li class="divider admin-links"></li> <li class="divider"></li>
<li class="dropdown-header admin-links">Your Account</li> <li class="dropdown-header">Your Account</li>
<li class="admin-links"><a href="#mfa" onclick="return show_panel(this);">Two-Factor Authentication</a></li> <li><a href="#mfa" onclick="return show_panel(this);">Two-Factor Authentication</a></li>
</ul> </ul>
</li> </li>
<li><a href="#sync_guide" onclick="return show_panel(this);">Contacts/Calendar</a></li> <li><a href="#sync_guide" onclick="return show_panel(this);" class="if-logged-in">Contacts/Calendar</a></li>
<li class="admin-links"><a href="#web" onclick="return show_panel(this);">Web</a></li> <li><a href="#web" onclick="return show_panel(this);" class="if-logged-in-admin">Web</a></li>
</ul> </ul>
<ul class="admin-links nav navbar-nav navbar-right"> <ul class="nav navbar-nav navbar-right">
<li><a href="#" onclick="do_logout(); return false;" style="color: white">Log out</a></li> <li class="if-logged-in"><a href="#" onclick="do_logout(); return false;" style="color: white">Log out</a></li>
</ul> </ul>
</div><!--/.navbar-collapse --> </div><!--/.navbar-collapse -->
</div> </div>
</div> </div>
<div class="container"> <div class="container">
<div id="panel_welcome" class="admin_panel">
{% include "welcome.html" %}
</div>
<div id="panel_system_status" class="admin_panel"> <div id="panel_system_status" class="admin_panel">
{% include "system-status.html" %} {% include "system-status.html" %}
</div> </div>
@ -166,6 +202,10 @@
{% include "ssl.html" %} {% include "ssl.html" %}
</div> </div>
<div id="panel_munin" class="admin_panel">
{% include "munin.html" %}
</div>
<hr> <hr>
<footer> <footer>
@ -298,7 +338,7 @@ function ajax_with_indicator(options) {
return false; // handy when called from onclick return false; // handy when called from onclick
} }
var api_credentials = ["", ""]; var api_credentials = null;
function api(url, method, data, callback, callback_error, headers) { function api(url, method, data, callback, callback_error, headers) {
// from http://www.webtoolkit.info/javascript-base64.html // from http://www.webtoolkit.info/javascript-base64.html
function base64encode(input) { function base64encode(input) {
@ -346,9 +386,10 @@ function api(url, method, data, callback, callback_error, headers) {
// We don't store user credentials in a cookie to avoid the hassle of CSRF // We don't store user credentials in a cookie to avoid the hassle of CSRF
// attacks. The Authorization header only gets set in our AJAX calls triggered // attacks. The Authorization header only gets set in our AJAX calls triggered
// by user actions. // by user actions.
if (api_credentials)
xhr.setRequestHeader( xhr.setRequestHeader(
'Authorization', 'Authorization',
'Basic ' + base64encode(api_credentials[0] + ':' + api_credentials[1])); 'Basic ' + base64encode(api_credentials.username + ':' + api_credentials.session_key));
}, },
success: callback, success: callback,
error: callback_error || default_error, error: callback_error || default_error,
@ -367,12 +408,21 @@ var current_panel = null;
var switch_back_to_panel = null; var switch_back_to_panel = null;
function do_logout() { function do_logout() {
api_credentials = ["", ""]; // Clear the session from the backend.
api("/logout", "POST");
// Forget the token.
api_credentials = null;
if (typeof localStorage != 'undefined') if (typeof localStorage != 'undefined')
localStorage.removeItem("miab-cp-credentials"); localStorage.removeItem("miab-cp-credentials");
if (typeof sessionStorage != 'undefined') if (typeof sessionStorage != 'undefined')
sessionStorage.removeItem("miab-cp-credentials"); sessionStorage.removeItem("miab-cp-credentials");
// Return to the start.
show_panel('login'); show_panel('login');
// Reset menus.
show_hide_menus();
} }
function show_panel(panelid) { function show_panel(panelid) {
@ -395,21 +445,22 @@ function show_panel(panelid) {
$(function() { $(function() {
// Recall saved user credentials. // Recall saved user credentials.
try {
if (typeof sessionStorage != 'undefined' && sessionStorage.getItem("miab-cp-credentials")) if (typeof sessionStorage != 'undefined' && sessionStorage.getItem("miab-cp-credentials"))
api_credentials = sessionStorage.getItem("miab-cp-credentials").split(":"); api_credentials = JSON.parse(sessionStorage.getItem("miab-cp-credentials"));
else if (typeof localStorage != 'undefined' && localStorage.getItem("miab-cp-credentials")) else if (typeof localStorage != 'undefined' && localStorage.getItem("miab-cp-credentials"))
api_credentials = localStorage.getItem("miab-cp-credentials").split(":"); api_credentials = JSON.parse(localStorage.getItem("miab-cp-credentials"));
} catch (_) {
}
if (!api_credentials[0] && !api_credentials[1]) { // Toggle menu state.
$('.admin-links').hide() show_hide_menus();
}
else {
$('.admin-links').show()
}
// Recall what the user was last looking at. // Recall what the user was last looking at.
if (typeof localStorage != 'undefined' && localStorage.getItem("miab-cp-lastpanel")) { if (api_credentials != null && typeof localStorage != 'undefined' && localStorage.getItem("miab-cp-lastpanel")) {
show_panel(localStorage.getItem("miab-cp-lastpanel")); show_panel(localStorage.getItem("miab-cp-lastpanel"));
} else if (api_credentials != null) {
show_panel('welcome');
} else { } else {
show_panel('login'); show_panel('login');
} }
@ -64,7 +64,7 @@ sudo management/cli.py user make-admin me@{{hostname}}</pre>
<div class="form-group" id="loginOtp"> <div class="form-group" id="loginOtp">
<label for="loginOtpInput" class="col-sm-3 control-label">Code</label> <label for="loginOtpInput" class="col-sm-3 control-label">Code</label>
<div class="col-sm-9"> <div class="col-sm-9">
<input type="text" class="form-control" id="loginOtpInput" placeholder="6-digit code"> <input type="text" class="form-control" id="loginOtpInput" placeholder="6-digit code" autocomplete="off">
<div class="help-block" style="margin-top: 5px; font-size: 90%">Enter the six-digit code generated by your two factor authentication app.</div> <div class="help-block" style="margin-top: 5px; font-size: 90%">Enter the six-digit code generated by your two factor authentication app.</div>
</div> </div>
</div> </div>
@ -102,11 +102,11 @@ function do_login() {
} }
// Exchange the email address & password for an API key. // Exchange the email address & password for an API key.
api_credentials = [$('#loginEmail').val(), $('#loginPassword').val()] api_credentials = { username: $('#loginEmail').val(), session_key: $('#loginPassword').val() }
api( api(
"/me", "/login",
"GET", "POST",
{}, {},
function(response) { function(response) {
// This API call always succeeds. It returns a JSON object indicating // This API call always succeeds. It returns a JSON object indicating
@ -141,7 +141,9 @@ function do_login() {
// Login succeeded. // Login succeeded.
// Save the new credentials. // Save the new credentials.
api_credentials = [response.email, response.api_key]; api_credentials = { username: response.email,
session_key: response.api_key,
privileges: response.privileges };
// Try to wipe the username/password information. // Try to wipe the username/password information.
$('#loginEmail').val(''); $('#loginEmail').val('');
@ -152,18 +154,21 @@ function do_login() {
// Remember the credentials. // Remember the credentials.
if (typeof localStorage != 'undefined' && typeof sessionStorage != 'undefined') { if (typeof localStorage != 'undefined' && typeof sessionStorage != 'undefined') {
if ($('#loginRemember').val()) { if ($('#loginRemember').val()) {
localStorage.setItem("miab-cp-credentials", api_credentials.join(":")); localStorage.setItem("miab-cp-credentials", JSON.stringify(api_credentials));
sessionStorage.removeItem("miab-cp-credentials"); sessionStorage.removeItem("miab-cp-credentials");
} else { } else {
localStorage.removeItem("miab-cp-credentials"); localStorage.removeItem("miab-cp-credentials");
sessionStorage.setItem("miab-cp-credentials", api_credentials.join(":")); sessionStorage.setItem("miab-cp-credentials", JSON.stringify(api_credentials));
} }
} }
// Toggle menus.
show_hide_menus();
// Open the next panel the user wants to go to. Do this after the XHR response // Open the next panel the user wants to go to. Do this after the XHR response
// is over so that we don't start a new XHR request while this one is finishing, // is over so that we don't start a new XHR request while this one is finishing,
// which confuses the loading indicator. // which confuses the loading indicator.
setTimeout(function() { show_panel(!switch_back_to_panel || switch_back_to_panel == "login" ? 'system_status' : switch_back_to_panel) }, 300); setTimeout(function() { show_panel(!switch_back_to_panel || switch_back_to_panel == "login" ? 'welcome' : switch_back_to_panel) }, 300);
} }
}, },
undefined, undefined,
@ -183,4 +188,19 @@ function show_login() {
} }
}); });
} }
function show_hide_menus() {
var is_logged_in = (api_credentials != null);
var privs = api_credentials ? api_credentials.privileges : [];
$('.if-logged-in').toggle(is_logged_in);
$('.if-logged-in-admin, .if-logged-in-not-admin').toggle(false);
if (is_logged_in) {
$('.if-logged-in-not-admin').toggle(true);
privs.forEach(function(priv) {
$('.if-logged-in-' + priv).toggle(true);
$('.if-logged-in-not-' + priv).toggle(false);
});
}
$('.if-not-logged-in').toggle(!is_logged_in);
}
</script> </script>
@ -0,0 +1,20 @@
<h2>Munin Monitoring</h2>
<style>
</style>
<p>Opening munin in a new tab... You may need to allow pop-ups for this site.</p>
<script>
function show_munin() {
// Set the cookie.
api(
"/munin",
"GET",
{ },
function(r) {
// Redirect.
window.open("/admin/munin/index.html", "_blank");
});
}
</script>
@ -30,9 +30,9 @@
<table class="table"> <table class="table">
<thead><tr><th>For...</th> <th>Use...</th></tr></thead> <thead><tr><th>For...</th> <th>Use...</th></tr></thead>
<tr><td>Contacts and Calendar</td> <td><a href="https://play.google.com/store/apps/details?id=at.bitfire.davdroid">DAVdroid</a> ($3.69; free <a href="https://f-droid.org/packages/at.bitfire.davdroid/">here</a>)</td></tr> <tr><td>Contacts and Calendar</td> <td><a href="https://play.google.com/store/apps/details?id=at.bitfire.davdroid">DAVx⁵</a> ($5.99; free <a href="https://f-droid.org/packages/at.bitfire.davdroid/">here</a>)</td></tr>
<tr><td>Only Contacts</td> <td><a href="https://play.google.com/store/apps/details?id=org.dmfs.carddav.sync">CardDAV-Sync free beta</a> (free)</td></tr> <tr><td>Only Contacts</td> <td><a href="https://play.google.com/store/apps/details?id=org.dmfs.carddav.sync">CardDAV-Sync free</a> (free)</td></tr>
<tr><td>Only Calendar</td> <td><a href="https://play.google.com/store/apps/details?id=org.dmfs.caldav.lib">CalDAV-Sync</a> ($2.89)</td></tr> <tr><td>Only Calendar</td> <td><a href="https://play.google.com/store/apps/details?id=org.dmfs.caldav.lib">CalDAV-Sync</a> ($2.99)</td></tr>
</table> </table>
<p>Use the following settings:</p> <p>Use the following settings:</p>
@ -5,7 +5,7 @@
<h2>Backup Status</h2> <h2>Backup Status</h2>
<p>The box makes an incremental backup each night. By default the backup is stored on the machine itself, but you can also store in on S3-compatible services like Amazon Web Services (AWS).</p> <p>The box makes an incremental backup each night. By default the backup is stored on the machine itself, but you can also store it on S3-compatible services like Amazon Web Services (AWS).</p>
<h3>Configuration</h3> <h3>Configuration</h3>
@ -138,7 +138,7 @@
</div> </div>
</div> </div>
<!-- Common --> <!-- Common -->
<div class="form-group backup-target-local backup-target-rsync backup-target-s3"> <div class="form-group backup-target-local backup-target-rsync backup-target-s3 backup-target-b2">
<label for="min-age" class="col-sm-2 control-label">Retention Days:</label> <label for="min-age" class="col-sm-2 control-label">Retention Days:</label>
<div class="col-sm-8"> <div class="col-sm-8">
<input type="number" class="form-control" rows="1" id="min-age"> <input type="number" class="form-control" rows="1" id="min-age">
@ -203,7 +203,7 @@ function users_set_password(elem) {
var email = $(elem).parents('tr').attr('data-email'); var email = $(elem).parents('tr').attr('data-email');
var yourpw = ""; var yourpw = "";
if (api_credentials != null && email == api_credentials[0]) if (api_credentials != null && email == api_credentials.username)
yourpw = "<p class='text-danger'>If you change your own password, you will be logged out of this control panel and will need to log in again.</p>"; yourpw = "<p class='text-danger'>If you change your own password, you will be logged out of this control panel and will need to log in again.</p>";
show_modal_confirm( show_modal_confirm(
@ -232,7 +232,7 @@ function users_remove(elem) {
var email = $(elem).parents('tr').attr('data-email'); var email = $(elem).parents('tr').attr('data-email');
// can't remove yourself // can't remove yourself
if (api_credentials != null && email == api_credentials[0]) { if (api_credentials != null && email == api_credentials.username) {
show_modal_error("Archive User", "You cannot archive your own account."); show_modal_error("Archive User", "You cannot archive your own account.");
return; return;
} }
@ -264,7 +264,7 @@ function mod_priv(elem, add_remove) {
var priv = $(elem).parents('td').find('.name').text(); var priv = $(elem).parents('td').find('.name').text();
// can't remove your own admin access // can't remove your own admin access
if (priv == "admin" && add_remove == "remove" && api_credentials != null && email == api_credentials[0]) { if (priv == "admin" && add_remove == "remove" && api_credentials != null && email == api_credentials.username) {
show_modal_error("Modify Privileges", "You cannot remove the admin privilege from yourself."); show_modal_error("Modify Privileges", "You cannot remove the admin privilege from yourself.");
return; return;
} }
@ -0,0 +1,16 @@
<style>
.title {
margin: 1em;
text-align: center;
}
.subtitle {
margin: 2em;
text-align: center;
}
</style>
<h1 class="title">{{hostname}}</h1>
<p class="subtitle">Welcome to your Mail-in-a-Box control panel.</p>
@ -106,7 +106,7 @@ def sort_email_addresses(email_addresses, env):
ret.extend(sorted(email_addresses)) # whatever is left ret.extend(sorted(email_addresses)) # whatever is left
return ret return ret
def shell(method, cmd_args, env={}, capture_stderr=False, return_bytes=False, trap=False, input=None): def shell(method, cmd_args, env={}, capture_stdout=True, capture_stderr=False, return_bytes=False, trap=False, input=None):
# A safe way to execute processes. # A safe way to execute processes.
# Some processes like apt-get require being given a sane PATH. # Some processes like apt-get require being given a sane PATH.
import subprocess import subprocess
@ -116,6 +116,8 @@ def shell(method, cmd_args, env={}, capture_stderr=False, return_bytes=False, tr
'env': env, 'env': env,
'stderr': None if not capture_stderr else subprocess.STDOUT, 'stderr': None if not capture_stderr else subprocess.STDOUT,
} }
if not capture_stdout:
kwargs['stdout'] = subprocess.DEVNULL
if method == "check_output" and input is not None: if method == "check_output" and input is not None:
kwargs['input'] = input kwargs['input'] = input
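shell() gains a capture_stdout flag; when it is False the child's output is routed to /dev/null, which keeps chatty commands such as unbound-control quiet. The essential pattern, sketched on its own:

    import subprocess

    def run_quietly(cmd_args, capture_stdout=True):
        # With capture_stdout=False the child's stdout is discarded instead of
        # leaking into the calling process's output.
        kwargs = {}
        if not capture_stdout:
            kwargs['stdout'] = subprocess.DEVNULL
        return subprocess.check_call(cmd_args, **kwargs)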
@ -211,9 +211,14 @@ def make_domain_config(domain, templates, ssl_certificates, env):
# Add the HSTS header. # Add the HSTS header.
if hsts == "yes": if hsts == "yes":
nginx_conf_extra += "\tadd_header Strict-Transport-Security \"max-age=15768000\" always;\n" nginx_conf_extra += "\tadd_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" always;\n"
elif hsts == "preload": elif hsts == "preload":
nginx_conf_extra += "\tadd_header Strict-Transport-Security \"max-age=15768000; includeSubDomains; preload\" always;\n" nginx_conf_extra += "\tadd_header Strict-Transport-Security \"max-age=31536000; includeSubDomains; preload\" always;\n"
nginx_conf_extra += "\tadd_header X-Frame-Options \"SAMEORIGIN\" always;\n"
nginx_conf_extra += "\tadd_header X-Content-Type-Options nosniff;\n"
nginx_conf_extra += "\tadd_header Content-Security-Policy-Report-Only \"default-src 'self'; font-src *;img-src * data:; script-src *; style-src *;frame-ancestors 'self'\";\n"
nginx_conf_extra += "\tadd_header Referrer-Policy \"strict-origin\";\n"
# Add in any user customizations in the includes/ folder. # Add in any user customizations in the includes/ folder.
nginx_conf_custom_include = os.path.join(env["STORAGE_ROOT"], "www", safe_domain_name(domain) + ".conf") nginx_conf_custom_include = os.path.join(env["STORAGE_ROOT"], "www", safe_domain_name(domain) + ".conf")
@ -3,7 +3,12 @@ Mail-in-a-Box Security Guide
Mail-in-a-Box turns a fresh Ubuntu 18.04 LTS 64-bit machine into a mail server appliance by installing and configuring various components. Mail-in-a-Box turns a fresh Ubuntu 18.04 LTS 64-bit machine into a mail server appliance by installing and configuring various components.
This page documents the security features of Mail-in-a-Box. The term “box” is used below to mean a configured Mail-in-a-Box. This page documents the security posture of Mail-in-a-Box. The term “box” is used below to mean a configured Mail-in-a-Box.
Reporting Security Vulnerabilities
----------------------------------
Security vulnerabilities should be reported to the [project's maintainer](https://joshdata.me) via email.
Threat Model Threat Model
------------ ------------
@ -49,9 +54,7 @@ Additionally:
### Password Storage ### Password Storage
The passwords for mail users are stored on disk using the [SHA512-CRYPT](http://man7.org/linux/man-pages/man3/crypt.3.html) hashing scheme. ([source](management/mailconfig.py)) The passwords for mail users are stored on disk using the [SHA512-CRYPT](http://man7.org/linux/man-pages/man3/crypt.3.html) hashing scheme. ([source](management/mailconfig.py)) Password changes (as well as changes to control panel two-factor authentication settings) expire any control panel login sessions.
When using the web-based administrative control panel, after logging in an API key is placed in the browser's local storage (rather than, say, the user's actual password). The API key is an HMAC based on the user's email address and current password, and it is keyed by a secret known only to the control panel service. By resetting an administrator's password, any HMACs previously generated for that user will expire.
### Console access ### Console access
@ -65,12 +68,10 @@ If DNSSEC is enabled at the box's domain name's registrar, the SSHFP record that
`fail2ban` provides some protection from brute-force login attacks (repeated logins that guess account passwords) by blocking offending IP addresses at the network level. `fail2ban` provides some protection from brute-force login attacks (repeated logins that guess account passwords) by blocking offending IP addresses at the network level.
The following services are protected: SSH, IMAP (dovecot), SMTP submission (postfix), webmail (roundcube), Nextcloud/CalDAV/CardDAV (over HTTP), and the Mail-in-a-Box control panel & munin (over HTTP). The following services are protected: SSH, IMAP (dovecot), SMTP submission (postfix), webmail (roundcube), Nextcloud/CalDAV/CardDAV (over HTTP), and the Mail-in-a-Box control panel (over HTTP).
Some other services running on the box may be missing fail2ban filters. Some other services running on the box may be missing fail2ban filters.
`fail2ban` only blocks IPv4 addresses, however. If the box has a public IPv6 address, it is not protected from these attacks.
Outbound Mail Outbound Mail
------------- -------------
@ -15,12 +15,6 @@ sed -i "s/#\& stop/\& stop/g" /etc/rsyslog.d/20-ufw.conf
restart_service rsyslog restart_service rsyslog
# decrease time journal is stored
tools/editconf.py /etc/systemd/journald.conf MaxRetentionSec=2month
tools/editconf.py /etc/systemd/journald.conf MaxFileSec=1week
hide_output systemctl restart systemd-journald.service
# Create forward for root emails # Create forward for root emails
cat > /root/.forward << EOF; cat > /root/.forward << EOF;
administrator@$PRIMARY_HOSTNAME administrator@$PRIMARY_HOSTNAME
@ -6,39 +6,47 @@
# #
######################################################### #########################################################
GITSRC=kj
if [ -z "$TAG" ]; then if [ -z "$TAG" ]; then
# If a version to install isn't explicitly given as an environment # If a version to install isn't explicitly given as an environment
# variable, then install the latest version. But the latest version # variable, then install the latest version. But the latest version
# depends on the operating system. Existing Ubuntu 14.04 users need # depends on the machine's version of Ubuntu. Existing users need to
# to be able to upgrade to the latest version supporting Ubuntu 14.04, # be able to upgrade to the latest version available for that version
# in part because an upgrade is required before jumping to Ubuntu 18.04. # of Ubuntu to satisfy the migration requirements.
# New users on Ubuntu 18.04 need to get the latest version number too.
# #
# Also, the system status checks read this script for TAG = (without the # Also, the system status checks read this script for TAG = (without the
# space, but if we put it in a comment it would confuse the status checks!) # space, but if we put it in a comment it would confuse the status checks!)
# to get the latest version, so the first such line must be the one that we # to get the latest version, so the first such line must be the one that we
# want to display in status checks. # want to display in status checks.
if [ "`lsb_release -d | sed 's/.*:\s*//' | sed 's/20\.04\.[0-9]/20.04/' `" == "Ubuntu 20.04 LTS" ]; then #
# This machine is running Ubuntu 20.04. # Allow point-release versions of the major releases, e.g. 22.04.1 is OK.
TAG=v0.54 UBUNTU_VERSION=$( lsb_release -d | sed 's/.*:\s*//' | sed 's/\([0-9]*\.[0-9]*\)\.[0-9]/\1/' )"
if [ "$UBUNTU_VERSION" == "Ubuntu 22.04 LTS" ]; then
elif [ "$(lsb_release -d | sed 's/.*:\s*//' | sed 's/18\.04\.[0-9]/18.04/' )" == "Ubuntu 18.04 LTS" ]; then # This machine is running Ubuntu 22.04, which is supported by
# This machine is running Ubuntu 18.04. # Mail-in-a-Box versions 60 and later.
TAG=v0.54 TAG=v60
elif [ "$UBUNTU_VERSION" == "Ubuntu 20.04 LTS" ]; then
elif [ "$(lsb_release -d | sed 's/.*:\s*//' | sed 's/14\.04\.[0-9]/14.04/' )" == "Ubuntu 14.04 LTS" ]; then # This machine is running Ubuntu 20.04, which is supported by
# This machine is running Ubuntu 14.04. # Mail-in-a-Box versions 56 and later.
echo "You are installing the last version of Mail-in-a-Box that will" TAG=v56
echo "support Ubuntu 14.04. If this is a new installation of Mail-in-a-Box," elif [ "$UBUNTU_VERSION" == "Ubuntu 18.04 LTS" ]; then
echo "stop now and switch to a machine running Ubuntu 18.04. If you are" # This machine is running Ubuntu 18.04, which is supported by
echo "upgrading an existing Mail-in-a-Box --- great. After upgrading this" # Mail-in-a-Box versions 0.40 through 5x.
echo "box, please visit https://mailinabox.email for notes on how to upgrade" echo "Support is ending for Ubuntu 18.04."
echo "to Ubuntu 18.04." echo "Please immediately begin to migrate your information to"
echo "" echo "a new machine running Ubuntu 22.04. See:"
echo "https://mailinabox.email/maintenance.html#upgrade"
TAG=v56
GITSRC=miab
elif [ "$UBUNTU_VERSION" == "Ubuntu 14.04 LTS" ]; then
# This machine is running Ubuntu 14.04, which is supported by
# Mail-in-a-Box versions 1 through v0.30.
echo "Ubuntu 14.04 is no longer supported."
echo "The last version of Mail-in-a-Box supporting Ubuntu 14.04 will be installed."
TAG=v0.30 TAG=v0.30
else else
echo "This script must be run on a system running Ubuntu 20.04, 18.04 or 14.04." echo "This script may be used only on a machine running Ubuntu 14.04, 18.04, 20.04 or 22.04."
exit 1 exit 1
fi fi
fi fi
@ -59,11 +67,19 @@ if [ ! -d $HOME/mailinabox ]; then
fi fi
echo Downloading Mail-in-a-Box $TAG. . . echo Downloading Mail-in-a-Box $TAG. . .
if [ "$GITSRC" == "miab" ]; then
git clone \ git clone \
-b $TAG --depth 1 \ -b $TAG --depth 1 \
https://github.com/mail-in-a-box/mailinabox \ https://github.com/mail-in-a-box/mailinabox \
$HOME/mailinabox \ $HOME/mailinabox \
< /dev/null 2> /dev/null < /dev/null 2> /dev/null
else
git clone \
-b $TAG --depth 1 \
https://github.com/kiekerjan/mailinabox \
$HOME/mailinabox \
< /dev/null 2> /dev/null
fi
echo echo
fi fi
View File
@ -1,46 +1,45 @@
#!/bin/bash #!/bin/bash
# OpenDKIM # DKIM
# -------- # --------
# #
# OpenDKIM provides a service that puts a DKIM signature on outbound mail. # DKIMpy provides a service that puts a DKIM signature on outbound mail.
# #
# The DNS configuration for DKIM is done in the management daemon. # The DNS configuration for DKIM is done in the management daemon.
source setup/functions.sh # load our functions source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars source /etc/mailinabox.conf # load global vars
# Install DKIM... # Remove openDKIM if present
echo Installing OpenDKIM/OpenDMARC... apt-get purge -qq -y opendkim opendkim-tools
apt_install opendkim opendkim-tools opendmarc
# Install DKIMpy-Milter
echo Installing DKIMpy/OpenDMARC...
apt_install dkimpy-milter python3-dkim opendmarc
# Make sure configuration directories exist. # Make sure configuration directories exist.
mkdir -p /etc/opendkim; mkdir -p /etc/dkim;
mkdir -p $STORAGE_ROOT/mail/dkim mkdir -p $STORAGE_ROOT/mail/dkim
# Used in InternalHosts and ExternalIgnoreList configuration directives. # Used in InternalHosts and ExternalIgnoreList configuration directives.
# Not quite sure why. # Not quite sure why.
echo "127.0.0.1" > /etc/opendkim/TrustedHosts echo "127.0.0.1" > /etc/dkim/TrustedHosts
# We need to at least create these files, since we reference them later. # We need to at least create these files, since we reference them later.
# Otherwise, opendkim startup will fail touch /etc/dkim/KeyTable
touch /etc/opendkim/KeyTable touch /etc/dkim/SigningTable
touch /etc/opendkim/SigningTable
if grep -q "ExternalIgnoreList" /etc/opendkim.conf; then tools/editconf.py /etc/dkimpy-milter/dkimpy-milter.conf -s \
true # already done #NODOC "MacroList=daemon_name|ORIGINATING" \
else "MacroListVerify=daemon_name|VERIFYING" \
# Add various configuration options to the end of `opendkim.conf`. "Canonicalization=relaxed/simple" \
cat >> /etc/opendkim.conf << EOF; "MinimumKeyBits=1024" \
Canonicalization relaxed/simple "ExternalIgnoreList=refile:/etc/dkim/TrustedHosts" \
MinimumKeyBits 1024 "InternalHosts=refile:/etc/dkim/TrustedHosts" \
ExternalIgnoreList refile:/etc/opendkim/TrustedHosts "KeyTable=refile:/etc/dkim/KeyTable" \
InternalHosts refile:/etc/opendkim/TrustedHosts "KeyTableEd25519=refile:/etc/dkim/KeyTableEd25519" \
KeyTable refile:/etc/opendkim/KeyTable "SigningTable=refile:/etc/dkim/SigningTable" \
SigningTable refile:/etc/opendkim/SigningTable "Socket=inet:8892@127.0.0.1" \
Socket inet:8891@127.0.0.1 "RequireSafeKeys=false"
RequireSafeKeys false
EOF
fi
# Create a new DKIM key. This creates mail.private and mail.txt # Create a new DKIM key. This creates mail.private and mail.txt
# in $STORAGE_ROOT/mail/dkim. The former is the private key and # in $STORAGE_ROOT/mail/dkim. The former is the private key and
@ -48,16 +47,20 @@ fi
# in our DNS setup. Note that the files are named after the # in our DNS setup. Note that the files are named after the
# 'selector' of the key, which we can change later on to support # 'selector' of the key, which we can change later on to support
# key rotation. # key rotation.
# if [ ! -f "$STORAGE_ROOT/mail/dkim/box-rsa.key" ]; then
# A 1024-bit key is seen as a minimum standard by several providers # All defaults are supposed to be ok, default key for rsa is 2048 bit
# such as Google. But they and others use a 2048 bit key, so we'll dknewkey --ktype rsa $STORAGE_ROOT/mail/dkim/box-rsa
# do the same. Keys beyond 2048 bits may exceed DNS record limits. dknewkey --ktype ed25519 $STORAGE_ROOT/mail/dkim/box-ed25519
if [ ! -f "$STORAGE_ROOT/mail/dkim/mail.private" ]; then
opendkim-genkey -b 2048 -r -s mail -D $STORAGE_ROOT/mail/dkim # Force them into the format dns_update.py expects
sed -i 's/v=DKIM1;/box-rsa._domainkey IN TXT ( "v=DKIM1; s=email;/' $STORAGE_ROOT/mail/dkim/box-rsa.dns
echo '" )' >> $STORAGE_ROOT/mail/dkim/box-rsa.dns
sed -i 's/v=DKIM1;/box-ed25519._domainkey IN TXT ( "v=DKIM1; s=email;/' $STORAGE_ROOT/mail/dkim/box-ed25519.dns
echo '" )' >> $STORAGE_ROOT/mail/dkim/box-ed25519.dns
fi fi
# Ensure files are owned by the opendkim user and are private otherwise. # Ensure files are owned by the dkimpy-milter user and are private otherwise.
chown -R opendkim:opendkim $STORAGE_ROOT/mail/dkim chown -R dkimpy-milter:dkimpy-milter $STORAGE_ROOT/mail/dkim
chmod go-rwx $STORAGE_ROOT/mail/dkim chmod go-rwx $STORAGE_ROOT/mail/dkim
tools/editconf.py /etc/opendmarc.conf -s \ tools/editconf.py /etc/opendmarc.conf -s \
@ -94,31 +97,34 @@ tools/editconf.py /etc/opendmarc.conf -s \
# domains does not cause the results header field to be added. This added header # domains does not cause the results header field to be added. This added header
# is used by spamassassin to evaluate the mail for spamminess. # is used by spamassassin to evaluate the mail for spamminess.
tools/editconf.py /etc/opendkim.conf -s \ tools/editconf.py /etc/dkimpy-milter/dkimpy-milter.conf -s \
"AlwaysAddARHeader=true" "AlwaysAddARHeader=true"
# Add OpenDKIM and OpenDMARC as milters to postfix, which is how OpenDKIM # Add DKIMpy and OpenDMARC as milters to postfix, which is how DKIMpy
# intercepts outgoing mail to perform the signing (by adding a mail header) # intercepts outgoing mail to perform the signing (by adding a mail header)
# and how they both intercept incoming mail to add Authentication-Results # and how they both intercept incoming mail to add Authentication-Results
# headers. The order possibly/probably matters: OpenDMARC relies on the # headers. The order possibly/probably matters: OpenDMARC relies on the
# OpenDKIM Authentication-Results header already being present. # DKIM Authentication-Results header already being present.
# #
# Be careful. If we add other milters later, this needs to be concatenated # Be careful. If we add other milters later, this needs to be concatenated
# on the smtpd_milters line. # on the smtpd_milters line.
# #
# The OpenDMARC milter is skipped in the SMTP submission listener by # The OpenDMARC milter is skipped in the SMTP submission listener by
# configuring smtpd_milters there to only list the OpenDKIM milter # configuring smtpd_milters there to only list the DKIMpy milter
# (see mail-postfix.sh). # (see mail-postfix.sh).
tools/editconf.py /etc/postfix/main.cf \ tools/editconf.py /etc/postfix/main.cf \
"smtpd_milters=inet:127.0.0.1:8891 inet:127.0.0.1:8893"\ "smtpd_milters=inet:127.0.0.1:8892 inet:127.0.0.1:8893"\
non_smtpd_milters=\$smtpd_milters \ non_smtpd_milters=\$smtpd_milters \
milter_default_action=accept milter_default_action=accept
# We need to explicitly enable the opendmarc service, or it will not start # We need to explicitly enable the opendmarc service, or it will not start
hide_output systemctl enable opendmarc hide_output systemctl enable opendmarc
# There is a fault in the dkim code for Ubuntu 20.04, let's fix it. Not necessary for Ubuntu 21.04 or newer
sed -i 's/return b""\.join(r\.items\[0\]\.strings)/return b""\.join(list(r\.items)\[0\]\.strings)/' /usr/lib/python3/dist-packages/dkim/dnsplug.py
# Restart services. # Restart services.
restart_service opendkim restart_service dkimpy-milter
restart_service opendmarc restart_service opendmarc
restart_service postfix restart_service postfix
View File
@ -10,21 +10,15 @@
source setup/functions.sh # load our functions source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars source /etc/mailinabox.conf # load global vars
# Install the packages.
#
# * nsd: The non-recursive nameserver that publishes our DNS records.
# * ldnsutils: Helper utilities for signing DNSSEC zones.
# * openssh-client: Provides ssh-keyscan which we use to create SSHFP records.
echo "Installing nsd (DNS server)..." echo "Installing nsd (DNS server)..."
apt_install ldnsutils openssh-client
# Prepare nsd's configuration. # Prepare nsd's configuration.
# We configure nsd before installation as we only want it to bind to some addresses
# and it otherwise will have port / bind conflicts with unbound used as the local resolver
mkdir -p /var/run/nsd mkdir -p /var/run/nsd
mkdir -p /etc/nsd mkdir -p /etc/nsd
mkdir -p /etc/nsd/zones mkdir -p /etc/nsd/zones
touch /etc/nsd/zones.conf touch /etc/nsd/zones.conf
touch /etc/nsd/nsd.conf
cat > /etc/nsd/nsd.conf << EOF; cat > /etc/nsd/nsd.conf << EOF;
# Do not edit. Overwritten by Mail-in-a-Box setup. # Do not edit. Overwritten by Mail-in-a-Box setup.
@ -46,6 +40,22 @@ server:
EOF EOF
# Since we have unbound listening on localhost for locally-generated
# DNS queries that require a recursive nameserver, and the system
# might have other network interfaces for e.g. tunnelling, we have
# to be specific about the network interfaces that nsd binds to.
for ip in $PRIVATE_IP $PRIVATE_IPV6; do
echo " ip-address: $ip" >> /etc/nsd/nsd.conf;
done
# Create a directory for additional configuration directives, including
# the zones.conf file written out by our management daemon.
echo "include: /etc/nsd/nsd.conf.d/*.conf" >> /etc/nsd/nsd.conf;
# Remove the old location of zones.conf that we generate. It will
# now be stored in /etc/nsd/nsd.conf.d.
rm -f /etc/nsd/zones.conf
# Add log rotation # Add log rotation
cat > /etc/logrotate.d/nsd <<EOF; cat > /etc/logrotate.d/nsd <<EOF;
/var/log/nsd.log { /var/log/nsd.log {
@ -58,16 +68,6 @@ cat > /etc/logrotate.d/nsd <<EOF;
} }
EOF EOF
# Since we have bind9 listening on localhost for locally-generated
# DNS queries that require a recursive nameserver, and the system
# might have other network interfaces for e.g. tunnelling, we have
# to be specific about the network interfaces that nsd binds to.
for ip in $PRIVATE_IP $PRIVATE_IPV6; do
echo " ip-address: $ip" >> /etc/nsd/nsd.conf;
done
echo "include: /etc/nsd/zones.conf" >> /etc/nsd/nsd.conf;
# Add systemd override file to fix some permissions # Add systemd override file to fix some permissions
mkdir -p /etc/systemd/system/nsd.service.d/ mkdir -p /etc/systemd/system/nsd.service.d/
cat > /etc/systemd/system/nsd.service.d/nsd-permissions.conf << EOF cat > /etc/systemd/system/nsd.service.d/nsd-permissions.conf << EOF
@ -76,8 +76,12 @@ ReadWritePaths=/var/lib/nsd /etc/nsd /run /var/log /run/nsd
CapabilityBoundingSet=CAP_CHOWN CAP_IPC_LOCK CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_SYS_CHROOT CAP_NET_ADMIN CapabilityBoundingSet=CAP_CHOWN CAP_IPC_LOCK CAP_NET_BIND_SERVICE CAP_SETGID CAP_SETUID CAP_SYS_CHROOT CAP_NET_ADMIN
EOF EOF
# Attempting a late install of nsd (after configuration) # Install the packages.
apt_install nsd #
# * nsd: The non-recursive nameserver that publishes our DNS records.
# * ldnsutils: Helper utilities for signing DNSSEC zones.
# * openssh-client: Provides ssh-keyscan which we use to create SSHFP records.
apt_install nsd ldnsutils openssh-client
# Create DNSSEC signing keys. # Create DNSSEC signing keys.
View File
@ -76,7 +76,7 @@ restart_service dovecot
# and compare those to what actually exist in mailboxes.
# This removes mails from the index that have already been expunged and makes
# sure that the next doveadm index will index all the missing mails (if any).
-doveadm fts rescan -A
+hide_output doveadm fts rescan -A

# Adds unindexed files to the fts database
# * `-q`: Queues the indexing to be run by indexer process. (will background the indexing)
View File
@ -217,6 +217,7 @@ function git_clone {
rm -rf $TMPPATH $TARGETPATH
git clone -q $REPO $TMPPATH || exit 1
(cd $TMPPATH; git checkout -q $TREEISH;) || exit 1
+rm -rf $TMPPATH/.git
mv $TMPPATH/$SUBDIR $TARGETPATH
rm -rf $TMPPATH
}
View File
@ -78,16 +78,16 @@ tools/editconf.py /etc/dovecot/conf.d/10-auth.conf \
"auth_mechanisms=plain login" "auth_mechanisms=plain login"
# Enable SSL, specify the location of the SSL certificate and private key files. # Enable SSL, specify the location of the SSL certificate and private key files.
# Use Mozilla's "Intermediate" recommendations at https://ssl-config.mozilla.org/#server=dovecot&server-version=2.2.33&config=intermediate&openssl-version=1.1.1, # Use Mozilla's "Intermediate" recommendations at https://ssl-config.mozilla.org/#server=dovecot&server-version=2.3.7.2&config=intermediate&openssl-version=1.1.1,
# except that the current version of Dovecot does not have a TLSv1.3 setting, so we only use TLSv1.2. # except that the current version of Dovecot does not have a TLSv1.3 setting, so we only use TLSv1.2.
tools/editconf.py /etc/dovecot/conf.d/10-ssl.conf \ tools/editconf.py /etc/dovecot/conf.d/10-ssl.conf \
ssl=required \ ssl=required \
"ssl_cert=<$STORAGE_ROOT/ssl/ssl_certificate.pem" \ "ssl_cert=<$STORAGE_ROOT/ssl/ssl_certificate.pem" \
"ssl_key=<$STORAGE_ROOT/ssl/ssl_private_key.pem" \ "ssl_key=<$STORAGE_ROOT/ssl/ssl_private_key.pem" \
"ssl_protocols=TLSv1.2" \ "ssl_min_protocol=TLSv1.2" \
"ssl_cipher_list=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384" \ "ssl_cipher_list=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384" \
"ssl_prefer_server_ciphers=no" \ "ssl_prefer_server_ciphers=yes" \
"ssl_dh_parameters_length=2048" "ssl_dh=<$STORAGE_ROOT/ssl/dh4096.pem"
# Disable in-the-clear IMAP/POP because there is no reason for a user to transmit # Disable in-the-clear IMAP/POP because there is no reason for a user to transmit
# login credentials outside of an encrypted connection. Only the over-TLS versions # login credentials outside of an encrypted connection. Only the over-TLS versions
View File
@ -13,8 +13,8 @@
# destinations according to aliases, and passes email on to
# another service for local mail delivery.
#
-# The first hop in local mail delivery is to Spamassassin via
-# LMTP. Spamassassin then passes mail over to Dovecot for
+# The first hop in local mail delivery is to spampd via
+# LMTP. spampd then passes mail over to Dovecot for
# storage in the user's mailbox.
#
# Postfix also listens on ports 465/587 (SMTPS, SMTP+STARTTLS) for
@ -91,12 +91,14 @@ tools/editconf.py /etc/postfix/master.cf -s -w \
-o smtpd_tls_wrappermode=yes -o smtpd_tls_wrappermode=yes
-o smtpd_sasl_auth_enable=yes -o smtpd_sasl_auth_enable=yes
-o syslog_name=postfix/submission -o syslog_name=postfix/submission
-o smtpd_milters=inet:127.0.0.1:8891 -o smtpd_milters=inet:127.0.0.1:8892
-o milter_macro_daemon_name=ORIGINATING
-o cleanup_service_name=authclean" \ -o cleanup_service_name=authclean" \
"submission=inet n - - - - smtpd "submission=inet n - - - - smtpd
-o smtpd_sasl_auth_enable=yes -o smtpd_sasl_auth_enable=yes
-o syslog_name=postfix/submission -o syslog_name=postfix/submission
-o smtpd_milters=inet:127.0.0.1:8891 -o smtpd_milters=inet:127.0.0.1:8892
-o milter_macro_daemon_name=ORIGINATING
-o smtpd_tls_security_level=encrypt -o smtpd_tls_security_level=encrypt
-o cleanup_service_name=authclean" \ -o cleanup_service_name=authclean" \
"authclean=unix n - - - 0 cleanup "authclean=unix n - - - 0 cleanup
@ -122,16 +124,16 @@ sed -i "s/PUBLIC_IP/$PUBLIC_IP/" /etc/postfix/outgoing_mail_header_filters
# the world are very far behind and if we disable too much, they may not be able to use TLS and # the world are very far behind and if we disable too much, they may not be able to use TLS and
# won't fall back to cleartext. So we don't disable too much. smtpd_tls_exclude_ciphers applies to # won't fall back to cleartext. So we don't disable too much. smtpd_tls_exclude_ciphers applies to
# both port 25 and port 587, but because we override the cipher list for both, it probably isn't used. # both port 25 and port 587, but because we override the cipher list for both, it probably isn't used.
# Use Mozilla's "Old" recommendations at https://ssl-config.mozilla.org/#server=postfix&server-version=3.3.0&config=old&openssl-version=1.1.1 # Use Mozilla's "Old" recommendations at https://ssl-config.mozilla.org/#server=postfix&server-version=3.4.13&config=old&openssl-version=1.1.1
tools/editconf.py /etc/postfix/main.cf \ tools/editconf.py /etc/postfix/main.cf \
smtpd_tls_security_level=may\ smtpd_tls_security_level=may\
smtpd_tls_auth_only=yes \ smtpd_tls_auth_only=yes \
smtpd_tls_cert_file=$STORAGE_ROOT/ssl/ssl_certificate.pem \ smtpd_tls_cert_file=$STORAGE_ROOT/ssl/ssl_certificate.pem \
smtpd_tls_key_file=$STORAGE_ROOT/ssl/ssl_private_key.pem \ smtpd_tls_key_file=$STORAGE_ROOT/ssl/ssl_private_key.pem \
smtpd_tls_dh1024_param_file=$STORAGE_ROOT/ssl/dh2048.pem \ smtpd_tls_dh1024_param_file=$STORAGE_ROOT/ssl/dh4096.pem \
smtpd_tls_protocols="!SSLv2,!SSLv3,!TLSv1,!TLSv1.1" \ smtpd_tls_protocols="!SSLv2,!SSLv3,!TLSv1,!TLSv1.1" \
smtpd_tls_ciphers=medium \ smtpd_tls_ciphers=medium \
tls_medium_cipherlist=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA \ tls_medium_cipherlist=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384 \
smtpd_tls_exclude_ciphers="MD5, DES, ADH, RC4, PSD, SRP, 3DES, eNULL, aNULL" \ smtpd_tls_exclude_ciphers="MD5, DES, ADH, RC4, PSD, SRP, 3DES, eNULL, aNULL" \
tls_preempt_cipherlist=yes \ tls_preempt_cipherlist=yes \
smtpd_tls_received_header=yes smtpd_tls_received_header=yes
@ -203,16 +205,17 @@ tools/editconf.py /etc/postfix/main.cf \
# ### Incoming Mail # ### Incoming Mail
# Pass any incoming mail over to a local delivery agent. Spamassassin # Pass mail to spampd, which acts as the local delivery agent (LDA),
# will act as the LDA agent at first. It is listening on port 10025 # which then passes the mail over to the Dovecot LMTP server after.
# with LMTP. Spamassassin will pass the mail over to Dovecot after. # spampd runs on port 10025 by default.
# #
# In a basic setup we would pass mail directly to Dovecot by setting # In a basic setup we would pass mail directly to Dovecot by setting
# virtual_transport to `lmtp:unix:private/dovecot-lmtp`. # virtual_transport to `lmtp:unix:private/dovecot-lmtp`.
tools/editconf.py /etc/postfix/main.cf "virtual_transport=lmtp:[127.0.0.1]:10025" tools/editconf.py /etc/postfix/main.cf "virtual_transport=lmtp:[127.0.0.1]:10025"
# Because of a spampd bug, limit the number of recipients in each connection. # Clear the lmtp_destination_recipient_limit setting which in previous
# versions of Mail-in-a-Box was set to 1 because of a spampd bug.
# See https://github.com/mail-in-a-box/mailinabox/issues/1523. # See https://github.com/mail-in-a-box/mailinabox/issues/1523.
tools/editconf.py /etc/postfix/main.cf lmtp_destination_recipient_limit=1 tools/editconf.py /etc/postfix/main.cf -e lmtp_destination_recipient_limit=
# Who can send mail to us? Some basic filters. # Who can send mail to us? Some basic filters.
@ -241,10 +244,11 @@ tools/editconf.py /etc/postfix/main.cf \
# A lot of legit mail servers try to resend before 300 seconds.
# As a matter of fact RFC is not strict about retry timer so postfix and
# other MTA have their own intervals. To fix the problem of receiving
-# e-mails really latter, delay of greylisting has been set to
+# e-mails really later, delay of greylisting has been set to
# 180 seconds (default is 300 seconds).
+# Postgrey removes entries after 185 days of not being used.
tools/editconf.py /etc/default/postgrey \
-	POSTGREY_OPTS=\"'--inet=127.0.0.1:10023 --delay=180'\"
+	POSTGREY_OPTS=\"'--inet=127.0.0.1:10023 --delay=180 --max-age=185'\"
# We are going to setup a newer whitelist for postgrey, the version included in the distribution is old # We are going to setup a newer whitelist for postgrey, the version included in the distribution is old
View File
@ -23,6 +23,7 @@ if [ ! -f $db_path ]; then
echo "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, email TEXT NOT NULL UNIQUE, password TEXT NOT NULL, extra, privileges TEXT NOT NULL DEFAULT '');" | sqlite3 $db_path; echo "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, email TEXT NOT NULL UNIQUE, password TEXT NOT NULL, extra, privileges TEXT NOT NULL DEFAULT '');" | sqlite3 $db_path;
echo "CREATE TABLE aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE, destination TEXT NOT NULL, permitted_senders TEXT);" | sqlite3 $db_path; echo "CREATE TABLE aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE, destination TEXT NOT NULL, permitted_senders TEXT);" | sqlite3 $db_path;
echo "CREATE TABLE mfa (id INTEGER PRIMARY KEY AUTOINCREMENT, user_id INTEGER NOT NULL, type TEXT NOT NULL, secret TEXT NOT NULL, mru_token TEXT, label TEXT, FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE);" | sqlite3 $db_path; echo "CREATE TABLE mfa (id INTEGER PRIMARY KEY AUTOINCREMENT, user_id INTEGER NOT NULL, type TEXT NOT NULL, secret TEXT NOT NULL, mru_token TEXT, label TEXT, FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE);" | sqlite3 $db_path;
echo "CREATE TABLE auto_aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE, destination TEXT NOT NULL, permitted_senders TEXT);" | sqlite3 $db_path;
fi fi
# ### User Authentication # ### User Authentication
@ -100,8 +101,12 @@ EOF
# ### Destination Validation # ### Destination Validation
# Use a Sqlite3 database to check whether a destination email address exists,
-# and to perform any email alias rewrites in Postfix.
+# and to perform any email alias rewrites in Postfix. Additionally, we disable
+# SMTPUTF8 because Dovecot's LMTP server that delivers mail to inboxes does
+# not support it, and if a message is received with the SMTPUTF8 flag it will
+# bounce.
tools/editconf.py /etc/postfix/main.cf \
+	smtputf8_enable=no \
	virtual_mailbox_domains=sqlite:/etc/postfix/virtual-mailbox-domains.cf \
	virtual_mailbox_maps=sqlite:/etc/postfix/virtual-mailbox-maps.cf \
	virtual_alias_maps=sqlite:/etc/postfix/virtual-alias-maps.cf \
@ -110,7 +115,7 @@ tools/editconf.py /etc/postfix/main.cf \
# SQL statement to check if we handle incoming mail for a domain, either for users or aliases. # SQL statement to check if we handle incoming mail for a domain, either for users or aliases.
cat > /etc/postfix/virtual-mailbox-domains.cf << EOF;
dbpath=$db_path
-query = SELECT 1 FROM users WHERE email LIKE '%%@%s' UNION SELECT 1 FROM aliases WHERE source LIKE '%%@%s'
+query = SELECT 1 FROM users WHERE email LIKE '%%@%s' UNION SELECT 1 FROM aliases WHERE source LIKE '%%@%s' UNION SELECT 1 FROM auto_aliases WHERE source LIKE '%%@%s'
EOF

# SQL statement to check if we handle incoming mail for a user.

@ -145,7 +150,7 @@ EOF

# empty destination here so that other lower priority rules might match.
cat > /etc/postfix/virtual-alias-maps.cf << EOF;
dbpath=$db_path
-query = SELECT destination from (SELECT destination, 0 as priority FROM aliases WHERE source='%s' AND destination<>'' UNION SELECT email as destination, 1 as priority FROM users WHERE email='%s') ORDER BY priority LIMIT 1;
+query = SELECT destination from (SELECT destination, 0 as priority FROM aliases WHERE source='%s' AND destination<>'' UNION SELECT email as destination, 1 as priority FROM users WHERE email='%s' UNION SELECT destination, 2 as priority FROM auto_aliases WHERE source='%s' AND destination<>'') ORDER BY priority LIMIT 1;
EOF
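The priority trick in that query can be exercised on its own. Below is a minimal sqlite3 sketch, assuming the users/aliases/auto_aliases schema created earlier in this changeset; the addresses are made up, and `?` placeholders stand in for the `%s` lookup key that Postfix substitutes.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users   (id INTEGER PRIMARY KEY AUTOINCREMENT, email TEXT NOT NULL UNIQUE,
                      password TEXT NOT NULL, extra, privileges TEXT NOT NULL DEFAULT '');
CREATE TABLE aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE,
                      destination TEXT NOT NULL, permitted_senders TEXT);
CREATE TABLE auto_aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE,
                           destination TEXT NOT NULL, permitted_senders TEXT);
""")
db.execute("INSERT INTO users (email, password) VALUES ('user@example.com', 'x')")
db.execute("INSERT INTO aliases (source, destination) VALUES ('user@example.com', 'elsewhere@example.com')")

# Same shape as the virtual-alias-maps query: an explicit alias (priority 0) wins over
# the user's own mailbox (priority 1), which wins over an automatic alias (priority 2).
query = """
SELECT destination FROM (
  SELECT destination, 0 AS priority FROM aliases      WHERE source=? AND destination<>''
  UNION SELECT email, 1             FROM users        WHERE email=?
  UNION SELECT destination, 2       FROM auto_aliases WHERE source=? AND destination<>''
) ORDER BY priority LIMIT 1;
"""
addr = "user@example.com"
print(db.execute(query, (addr, addr, addr)).fetchone())  # ('elsewhere@example.com',)
```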
# Restart Services
View File
@ -25,7 +25,7 @@ done
# #
# certbot installs EFF's certbot which we use to # certbot installs EFF's certbot which we use to
# provision free TLS certificates. # provision free TLS certificates.
apt_install duplicity python3-pip virtualenv certbot apt_install duplicity python3-pip virtualenv certbot rsync
# b2sdk is used for backblaze backups. # b2sdk is used for backblaze backups.
# boto is used for amazon aws backups. # boto is used for amazon aws backups.
@ -49,7 +49,7 @@ hide_output $venv/bin/pip install --upgrade pip
# NOTE: email_validator is repeated in setup/questions.sh, so please keep the versions synced. # NOTE: email_validator is repeated in setup/questions.sh, so please keep the versions synced.
hide_output $venv/bin/pip install --upgrade \ hide_output $venv/bin/pip install --upgrade \
rtyaml "email_validator>=1.0.0" "exclusiveprocess" \ rtyaml "email_validator>=1.0.0" "exclusiveprocess" \
flask dnspython python-dateutil \ flask dnspython python-dateutil expiringdict \
qrcode[pil] pyotp \ qrcode[pil] pyotp \
"idna>=2.0.0" "cryptography==2.2.2" boto psutil postfix-mta-sts-resolver b2sdk "idna>=2.0.0" "cryptography==2.2.2" boto psutil postfix-mta-sts-resolver b2sdk
View File
@ -186,6 +186,11 @@ def migration_13(env):
db = os.path.join(env["STORAGE_ROOT"], 'mail/users.sqlite') db = os.path.join(env["STORAGE_ROOT"], 'mail/users.sqlite')
shell("check_call", ["sqlite3", db, "CREATE TABLE mfa (id INTEGER PRIMARY KEY AUTOINCREMENT, user_id INTEGER NOT NULL, type TEXT NOT NULL, secret TEXT NOT NULL, mru_token TEXT, label TEXT, FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE);"]) shell("check_call", ["sqlite3", db, "CREATE TABLE mfa (id INTEGER PRIMARY KEY AUTOINCREMENT, user_id INTEGER NOT NULL, type TEXT NOT NULL, secret TEXT NOT NULL, mru_token TEXT, label TEXT, FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE);"])
def migration_14(env):
# Add the "auto_aliases" table.
db = os.path.join(env["STORAGE_ROOT"], 'mail/users.sqlite')
shell("check_call", ["sqlite3", db, "CREATE TABLE auto_aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE, destination TEXT NOT NULL, permitted_senders TEXT);"])
########################################################### ###########################################################
def get_current_migration(): def get_current_migration():
View File
@ -9,6 +9,39 @@ source /etc/mailinabox.conf # load global vars
echo "Installing Nextcloud (contacts/calendar)..." echo "Installing Nextcloud (contacts/calendar)..."
# Nextcloud core and app (plugin) versions to install.
# With each version we store a hash to ensure we install what we expect.
# Nextcloud core
# --------------
# * See https://nextcloud.com/changelog for the latest version.
# * Check https://docs.nextcloud.com/server/latest/admin_manual/installation/system_requirements.html
# for whether it supports the version of PHP available on this machine.
# * Since Nextcloud only supports upgrades from consecutive major versions,
# we automatically install intermediate versions as needed.
# * The hash is the SHA1 hash of the ZIP package, which you can find by just running this script and
# copying it from the error message when it doesn't match what is below.
nextcloud_ver=23.0.2
nextcloud_hash=645cba42cab57029ebe29fb93906f58f7abea5f8
# Nextcloud apps
# --------------
# * Find the most recent tag that is compatible with the Nextcloud version above by
# consulting the <dependencies>...<nextcloud> node at:
# https://github.com/nextcloud-releases/contacts/blob/master/appinfo/info.xml
# https://github.com/nextcloud-releases/calendar/blob/master/appinfo/info.xml
# https://github.com/nextcloud/user_external/blob/master/appinfo/info.xml
# * The hash is the SHA1 hash of the ZIP package, which you can find by just running this script and
# copying it from the error message when it doesn't match what is below.
contacts_ver=4.0.8
contacts_hash=9f368bb2be98c5555b7118648f4cc9fa51e8cb30
calendar_ver=3.0.6
calendar_hash=ca49bb1ce23f20e10911e39055fd59d7f7a84c30
user_external_ver=3.0.0
user_external_hash=6e5afe7f36f398f864bfdce9cad72200e70322aa
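Each hash above is the SHA1 digest of the corresponding release archive, which the setup's wget_verify helper compares against what it downloads. Below is a minimal stand-alone sketch of the same check; the URL, hash, and function name are placeholders for illustration, not the project's helper.

```python
import hashlib
import sys
import urllib.request

def download_and_verify(url: str, expected_sha1: str, dest: str) -> None:
    # Download into memory and refuse to write the file if the SHA1 digest does not match.
    data = urllib.request.urlopen(url).read()
    actual = hashlib.sha1(data).hexdigest()
    if actual != expected_sha1:
        sys.exit(f"hash mismatch for {url}: expected {expected_sha1}, got {actual}")
    with open(dest, "wb") as f:
        f.write(data)

# Hypothetical example only; substitute the real release URL and its pinned hash.
# download_and_verify("https://example.com/nextcloud.zip", "0" * 40, "/tmp/nextcloud.zip")
```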
# Clear prior packages and install dependencies from apt.
apt-get purge -qq -y owncloud* # we used to use the package manager apt-get purge -qq -y owncloud* # we used to use the package manager
apt_install php php-fpm \ apt_install php php-fpm \
@ -16,6 +49,13 @@ apt_install php php-fpm \
php-dev php-gd php-xml php-mbstring php-zip php-apcu php-json \ php-dev php-gd php-xml php-mbstring php-zip php-apcu php-json \
php-intl php-imagick php-gmp php-bcmath php-intl php-imagick php-gmp php-bcmath
# Enable apc is required before installing nextcloud
tools/editconf.py /etc/php/$(php_version)/mods-available/apcu.ini -c ';' \
apc.enabled=1 \
apc.enable_cli=1
restart_service php$(php_version)-fpm
InstallNextcloud() { InstallNextcloud() {
version=$1 version=$1
@ -49,11 +89,11 @@ InstallNextcloud() {
# their github repositories. # their github repositories.
mkdir -p /usr/local/lib/owncloud/apps mkdir -p /usr/local/lib/owncloud/apps
wget_verify https://github.com/nextcloud/contacts/releases/download/v$version_contacts/contacts.tar.gz $hash_contacts /tmp/contacts.tgz wget_verify https://github.com/nextcloud-releases/contacts/releases/download/v$version_contacts/contacts-v$version_contacts.tar.gz $hash_contacts /tmp/contacts.tgz
tar xf /tmp/contacts.tgz -C /usr/local/lib/owncloud/apps/ tar xf /tmp/contacts.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/contacts.tgz rm /tmp/contacts.tgz
wget_verify https://github.com/nextcloud/calendar/releases/download/v$version_calendar/calendar.tar.gz $hash_calendar /tmp/calendar.tgz wget_verify https://github.com/nextcloud-releases/calendar/releases/download/v$version_calendar/calendar-v$version_calendar.tar.gz $hash_calendar /tmp/calendar.tgz
tar xf /tmp/calendar.tgz -C /usr/local/lib/owncloud/apps/ tar xf /tmp/calendar.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/calendar.tgz rm /tmp/calendar.tgz
@ -63,6 +103,9 @@ InstallNextcloud() {
wget_verify https://github.com/nextcloud/user_external/releases/download/v$version_user_external/user_external-$version_user_external.tar.gz $hash_user_external /tmp/user_external.tgz wget_verify https://github.com/nextcloud/user_external/releases/download/v$version_user_external/user_external-$version_user_external.tar.gz $hash_user_external /tmp/user_external.tgz
tar -xf /tmp/user_external.tgz -C /usr/local/lib/owncloud/apps/ tar -xf /tmp/user_external.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/user_external.tgz rm /tmp/user_external.tgz
# (Temporary?) workaround to get user_external working with Nextcloud 23 (see https://github.com/nextcloud/user_external/issues/186)
# sed -i "s/nextcloud min-version=\"21\" max-version=\"22\"/nextcloud min-version=\"21\" max-version=\"23\"/g" /usr/local/lib/owncloud/apps/user_external/appinfo/info.xml
fi fi
# Fix weird permissions. # Fix weird permissions.
@ -99,16 +142,6 @@ InstallNextcloud() {
fi fi
} }
# Nextcloud Version to install. Checks are done down below to step through intermediate versions.
nextcloud_ver=20.0.8
nextcloud_hash=372b0b4bb07c7984c04917aff86b280e68fbe761
contacts_ver=3.5.1
contacts_hash=d2ffbccd3ed89fa41da20a1dff149504c3b33b93
calendar_ver=2.2.0
calendar_hash=673ad72ca28adb8d0f209015ff2dca52ffad99af
user_external_ver=1.0.0
user_external_hash=3bf2609061d7214e7f0f69dd8883e55c4ec8f50a
# Current Nextcloud Version, #1623 # Current Nextcloud Version, #1623
# Checking /usr/local/lib/owncloud/version.php shows version of the Nextcloud application, not the DB # Checking /usr/local/lib/owncloud/version.php shows version of the Nextcloud application, not the DB
# $STORAGE_ROOT/owncloud is kept together even during a backup. It is better to rely on config.php than # $STORAGE_ROOT/owncloud is kept together even during a backup. It is better to rely on config.php than
@ -126,7 +159,7 @@ fi
# from the version currently installed, do the install/upgrade # from the version currently installed, do the install/upgrade
if [ ! -d /usr/local/lib/owncloud/ ] || [[ ! ${CURRENT_NEXTCLOUD_VER} =~ ^$nextcloud_ver ]]; then if [ ! -d /usr/local/lib/owncloud/ ] || [[ ! ${CURRENT_NEXTCLOUD_VER} =~ ^$nextcloud_ver ]]; then
# Stop php-fpm if running. If theyre not running (which happens on a previously failed install), dont bail. # Stop php-fpm if running. If they are not running (which happens on a previously failed install), dont bail.
service php$(php_version)-fpm stop &> /dev/null || /bin/true service php$(php_version)-fpm stop &> /dev/null || /bin/true
# Backup the existing ownCloud/Nextcloud. # Backup the existing ownCloud/Nextcloud.
@ -175,7 +208,8 @@ if [ ! -d /usr/local/lib/owncloud/ ] || [[ ! ${CURRENT_NEXTCLOUD_VER} =~ ^$nextc
CURRENT_NEXTCLOUD_VER="17.0.6" CURRENT_NEXTCLOUD_VER="17.0.6"
fi fi
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^17 ]]; then if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^17 ]]; then
echo "ALTER TABLE oc_flow_operations ADD COLUMN entity VARCHAR;" | sqlite3 $STORAGE_ROOT/owncloud/owncloud.db # Don't exit the install if this column already exists (see #2076)
(echo "ALTER TABLE oc_flow_operations ADD COLUMN entity VARCHAR;" | sqlite3 $STORAGE_ROOT/owncloud/owncloud.db 2>/dev/null) || true
InstallNextcloud 18.0.10 39c0021a8b8477c3f1733fddefacfa5ebf921c68 3.4.1 aee680a75e95f26d9285efd3c1e25cf7f3bfd27e 2.0.3 9d9717b29337613b72c74e9914c69b74b346c466 1.0.0 3bf2609061d7214e7f0f69dd8883e55c4ec8f50a InstallNextcloud 18.0.10 39c0021a8b8477c3f1733fddefacfa5ebf921c68 3.4.1 aee680a75e95f26d9285efd3c1e25cf7f3bfd27e 2.0.3 9d9717b29337613b72c74e9914c69b74b346c466 1.0.0 3bf2609061d7214e7f0f69dd8883e55c4ec8f50a
CURRENT_NEXTCLOUD_VER="18.0.10" CURRENT_NEXTCLOUD_VER="18.0.10"
fi fi
@ -183,12 +217,24 @@ if [ ! -d /usr/local/lib/owncloud/ ] || [[ ! ${CURRENT_NEXTCLOUD_VER} =~ ^$nextc
InstallNextcloud 19.0.4 01e98791ba12f4860d3d4047b9803f97a1b55c60 3.4.1 aee680a75e95f26d9285efd3c1e25cf7f3bfd27e 2.0.3 9d9717b29337613b72c74e9914c69b74b346c466 1.0.0 3bf2609061d7214e7f0f69dd8883e55c4ec8f50a InstallNextcloud 19.0.4 01e98791ba12f4860d3d4047b9803f97a1b55c60 3.4.1 aee680a75e95f26d9285efd3c1e25cf7f3bfd27e 2.0.3 9d9717b29337613b72c74e9914c69b74b346c466 1.0.0 3bf2609061d7214e7f0f69dd8883e55c4ec8f50a
CURRENT_NEXTCLOUD_VER="19.0.4" CURRENT_NEXTCLOUD_VER="19.0.4"
fi fi
fi if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^19 ]]; then
InstallNextcloud 20.0.14 92cac708915f51ee2afc1787fd845476fd090c81 4.0.0 f893ca57a543b260c9feeecbb5958c00b6998e18 2.2.2 923846d48afb5004a456b9079cf4b46d23b3ef3a 1.0.0 3bf2609061d7214e7f0f69dd8883e55c4ec8f50a
InstallNextcloud $nextcloud_ver $nextcloud_hash $contacts_ver $contacts_hash $calendar_ver $calendar_hash $user_external_ver $user_external_hash CURRENT_NEXTCLOUD_VER="20.0.14"
# Nextcloud 20 needs to have some optional columns added # Nextcloud 20 needs to have some optional columns added
sudo -u www-data php /usr/local/lib/owncloud/occ db:add-missing-columns sudo -u www-data php /usr/local/lib/owncloud/occ db:add-missing-columns
fi
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^20 ]]; then
InstallNextcloud 21.0.7 f5c7079c5b56ce1e301c6a27c0d975d608bb01c9 4.0.0 f893ca57a543b260c9feeecbb5958c00b6998e18 2.2.2 923846d48afb5004a456b9079cf4b46d23b3ef3a 1.0.0 3bf2609061d7214e7f0f69dd8883e55c4ec8f50a
CURRENT_NEXTCLOUD_VER="21.0.7"
fi
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^21 ]]; then
InstallNextcloud 22.2.3 58d2d897ba22a057aa03d29c762c5306211fefd2 4.0.7 8ab31d205408e4f12067d8a4daa3595d46b513e3 3.0.4 6fb1e998d307c53245faf1c37a96eb982bbee8ba 2.1.0 6e5afe7f36f398f864bfdce9cad72200e70322aa
CURRENT_NEXTCLOUD_VER="22.2.3"
fi
fi
InstallNextcloud $nextcloud_ver $nextcloud_hash $contacts_ver $contacts_hash $calendar_ver $calendar_hash $user_external_ver $user_external_hash
fi fi
# ### Configuring Nextcloud # ### Configuring Nextcloud
@ -214,7 +260,7 @@ if [ ! -f $STORAGE_ROOT/owncloud/owncloud.db ]; then
'overwrite.cli.url' => '/cloud', 'overwrite.cli.url' => '/cloud',
'user_backends' => array( 'user_backends' => array(
array( array(
'class' => 'OC_User_IMAP', 'class' => '\OCA\UserExternal\IMAP',
'arguments' => array( 'arguments' => array(
'127.0.0.1', 143, null '127.0.0.1', 143, null
), ),
@ -279,6 +325,8 @@ php <<EOF > $CONFIG_TEMP && mv $CONFIG_TEMP $STORAGE_ROOT/owncloud/config.php;
<?php <?php
include("$STORAGE_ROOT/owncloud/config.php"); include("$STORAGE_ROOT/owncloud/config.php");
\$CONFIG['config_is_read_only'] = true; # should prevent warnings from occ tool but doesn't
\$CONFIG['trusted_domains'] = array('$PRIMARY_HOSTNAME'); \$CONFIG['trusted_domains'] = array('$PRIMARY_HOSTNAME');
\$CONFIG['memcache.local'] = '\OC\Memcache\APCu'; \$CONFIG['memcache.local'] = '\OC\Memcache\APCu';
@ -318,12 +366,15 @@ if [ \( $? -ne 0 \) -a \( $? -ne 3 \) ]; then exit 1; fi
sudo -u www-data \ sudo -u www-data \
php /usr/local/lib/owncloud/occ app:disable photos dashboard activity \ php /usr/local/lib/owncloud/occ app:disable photos dashboard activity \
| (grep -v "No such app enabled" || /bin/true) | (grep -v "No such app enabled" || /bin/true)
# Install interesting apps
installed=$(sudo -u www-data php /usr/local/lib/owncloud/occ app:list | grep 'notes')
if [ -z "$installed" ]; then # Install interesting apps
sudo -u www-data php /usr/local/lib/owncloud/occ app:install notes (sudo -u www-data php /usr/local/lib/owncloud/occ app:install notes) || true
fi
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable notes
(sudo -u www-data php /usr/local/lib/owncloud/occ app:install twofactor_totp) || true
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable twofactor_totp
# upgrade apps # upgrade apps
sudo -u www-data php /usr/local/lib/owncloud/occ app:update --all sudo -u www-data php /usr/local/lib/owncloud/occ app:update --all
@ -348,12 +399,6 @@ tools/editconf.py /etc/php/$(php_version)/cli/conf.d/10-opcache.ini -c ';' \
opcache.save_comments=1 \ opcache.save_comments=1 \
opcache.revalidate_freq=1 opcache.revalidate_freq=1
# If apc is explicitly disabled we need to enable it
if grep -q apc.enabled=0 /etc/php/$(php_version)/mods-available/apcu.ini; then
tools/editconf.py /etc/php/$(php_version)/mods-available/apcu.ini -c ';' \
apc.enabled=1
fi
# Set up a cron job for Nextcloud. # Set up a cron job for Nextcloud.
cat > /etc/cron.d/mailinabox-nextcloud << EOF; cat > /etc/cron.d/mailinabox-nextcloud << EOF;
#!/bin/bash #!/bin/bash
View File
@ -7,12 +7,11 @@ if [[ $EUID -ne 0 ]]; then
exit 1 exit 1
fi fi
# Check that we are running on Debian GNU/Linux, or Ubuntu 20.04 # Check that we are running on Ubuntu 20.04 LTS or Ubuntu 22.04 LTS
OS=`lsb_release -d | sed 's/.*:\s*//'` if [ "$( lsb_release --id --short )" != "Ubuntu" ] || [ "$( lsb_release --release --short )" != "22.04" -a "$( lsb_release --release --short )" != "20.04" ]; then
if [ "$OS" != "Debian GNU/Linux 10 (buster)" -a "$(echo $OS | grep -o 'Ubuntu 20.04')" != "Ubuntu 20.04" ]; then echo "Mail-in-a-Box only supports being installed on Ubuntu 20.04 or 22.04, sorry. You are running:"
echo "Mail-in-a-Box only supports being installed on Debian 10 or Ubuntu 20.04 LTS, sorry. You are running:"
echo echo
lsb_release -d | sed 's/.*:\s*//' lsb_release --description --short
echo echo
echo "We can't write scripts that run on every possible setup, sorry." echo "We can't write scripts that run on every possible setup, sorry."
exit 1 exit 1
View File
@ -28,7 +28,7 @@ source /etc/mailinabox.conf # load global vars
if [ ! -f /usr/bin/openssl ] \ if [ ! -f /usr/bin/openssl ] \
|| [ ! -f $STORAGE_ROOT/ssl/ssl_private_key.pem ] \ || [ ! -f $STORAGE_ROOT/ssl/ssl_private_key.pem ] \
|| [ ! -f $STORAGE_ROOT/ssl/ssl_certificate.pem ] \ || [ ! -f $STORAGE_ROOT/ssl/ssl_certificate.pem ] \
|| [ ! -f $STORAGE_ROOT/ssl/dh2048.pem ]; then || [ ! -f $STORAGE_ROOT/ssl/dh4096.pem ]; then
echo "Creating initial SSL certificate and perfect forward secrecy Diffie-Hellman parameters..." echo "Creating initial SSL certificate and perfect forward secrecy Diffie-Hellman parameters..."
fi fi
@ -63,7 +63,7 @@ mkdir -p $STORAGE_ROOT/ssl
if [ ! -f $STORAGE_ROOT/ssl/ssl_private_key.pem ]; then if [ ! -f $STORAGE_ROOT/ssl/ssl_private_key.pem ]; then
# Set the umask so the key file is never world-readable. # Set the umask so the key file is never world-readable.
(umask 077; hide_output \ (umask 077; hide_output \
openssl genrsa -out $STORAGE_ROOT/ssl/ssl_private_key.pem 2048) openssl genrsa -out $STORAGE_ROOT/ssl/ssl_private_key.pem 4096)
fi fi
# Generate a self-signed SSL certificate because things like nginx, dovecot, # Generate a self-signed SSL certificate because things like nginx, dovecot,
@ -90,9 +90,7 @@ if [ ! -f $STORAGE_ROOT/ssl/ssl_certificate.pem ]; then
ln -s $CERT $STORAGE_ROOT/ssl/ssl_certificate.pem ln -s $CERT $STORAGE_ROOT/ssl/ssl_certificate.pem
fi fi
# Generate some Diffie-Hellman cipher bits. # We no longer generate Diffie-Hellman cipher bits. Following rfc7919 we use
# openssl's default bit length for this is 1024 bits, but we'll create # a predefined finite field group, in this case ffdhe4096 from
# 2048 bits of bits per the latest recommendations. # https://raw.githubusercontent.com/internetstandards/dhe_groups/master/ffdhe4096.pem
if [ ! -f $STORAGE_ROOT/ssl/dh2048.pem ]; then cp -f conf/dh4096.pem $STORAGE_ROOT/ssl/
openssl dhparam -out $STORAGE_ROOT/ssl/dh2048.pem 2048
fi
View File
@ -75,7 +75,26 @@ then
fi fi
fi fi
# Certbot doesn't require a PPA in Debian # ### Set log retention policy.
# Set the systemd journal log retention from infinite to 10 days,
# since over time the logs take up a large amount of space.
# (See https://discourse.mailinabox.email/t/journalctl-reclaim-space-on-small-mailinabox/6728/11.)
tools/editconf.py /etc/systemd/journald.conf MaxRetentionSec=10day
hide_output systemctl restart systemd-journald.service
# We install some non-standard Ubuntu packages maintained by other
# third-party providers. First ensure add-apt-repository is installed.
if [ ! -f /usr/bin/add-apt-repository ]; then
echo "Installing add-apt-repository..."
hide_output apt-get update
apt_install software-properties-common
fi
# Ensure the universe repository is enabled since some of our packages
# come from there and minimal Ubuntu installs may have it turned off.
hide_output add-apt-repository -y universe
# ### Update Packages # ### Update Packages
@ -230,6 +249,25 @@ APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::Verbose "0"; APT::Periodic::Verbose "0";
EOF EOF
# Adjust apt update and upgrade timers such that they're always before daily status
# checks and thus never report upgrades unless user intervention is necessary.
if [ ! -d /etc/systemd/system/apt-daily.timer.d ]; then
mkdir /etc/systemd/system/apt-daily.timer.d
fi
cat > /etc/systemd/system/apt-daily.timer.d/override.conf <<EOF;
[Timer]
RandomizedDelaySec=5h
EOF
if [ ! -d /etc/systemd/system/apt-daily-upgrade.timer.d ]; then
mkdir /etc/systemd/system/apt-daily-upgrade.timer.d
fi
cat > /etc/systemd/system/apt-daily-upgrade.timer.d/override.conf <<EOF;
[Timer]
OnCalendar=
OnCalendar=*-*-* 23:30
EOF
# ### Firewall # ### Firewall
# Various virtualized environments like Docker and some VPSs don't provide #NODOC # Various virtualized environments like Docker and some VPSs don't provide #NODOC
@ -291,54 +329,44 @@ fi #NODOC
# DNS server, which won't work for RBLs. So we really need a local recursive # DNS server, which won't work for RBLs. So we really need a local recursive
# nameserver. # nameserver.
# #
# We'll install `bind9`, which as packaged for Ubuntu, has DNSSEC enabled by default via "dnssec-validation auto". # We'll install unbound, which as packaged for Ubuntu, has DNSSEC enabled by default.
# We'll have it be bound to 127.0.0.1 so that it does not interfere with # We'll have it be bound to 127.0.0.1 so that it does not interfere with
# the public, recursive nameserver `nsd` bound to the public ethernet interfaces. # the public, recursive nameserver `nsd` bound to the public ethernet interfaces.
#
# About the settings: # remove bind9 in case it is still there
# apt-get purge -qq -y bind9 bind9-utils
# * Adding -4 to OPTIONS will have `bind9` not listen on IPv6 addresses
# so that we're sure there's no conflict with nsd, our public domain # Install unbound and dns utils (e.g. dig)
# name server, on IPV6. apt_install unbound python3-unbound bind9-dnsutils
# * The listen-on directive in named.conf.options restricts `bind9` to
# binding to the loopback interface instead of all interfaces. # Configure unbound
# * The max-recursion-queries directive increases the maximum number of iterative queries. cp -f conf/unbound.conf /etc/unbound/unbound.conf.d/miabunbound.conf
# If more queries than specified are sent, bind9 returns SERVFAIL. After flushing the cache during system checks,
# we ran into the limit thus we are increasing it from 75 (default value) to 100. if [ -d /etc/unbound/lists.d ]; then
apt_install bind9 mkdir /etc/unbound/lists.d
touch /etc/default/bind9
tools/editconf.py /etc/default/bind9 \
"OPTIONS=\"-u bind -4\""
if ! grep -q "listen-on " /etc/bind/named.conf.options; then
# Add a listen-on directive if it doesn't exist inside the options block.
sed -i "s/^}/\n\tlisten-on { 127.0.0.1; };\n}/" /etc/bind/named.conf.options
fi
if ! grep -q "listen-on-v6 " /etc/bind/named.conf.options; then
# Add a listen-on-v6 directive if it doesn't exist inside the options block.
sed -i "s/^}/\n\tlisten-on-v6 { ::1; };\n}/" /etc/bind/named.conf.options
else
# Modify the listen-on-v6 directive if it does exist
sed -i "s/listen-on-v6 { any; }/listen-on-v6 { ::1; }/" /etc/bind/named.conf.options
fi fi
if ! grep -q "max-recursion-queries " /etc/bind/named.conf.options; then systemctl restart unbound
# Add a max-recursion-queries directive if it doesn't exist inside the options block.
sed -i "s/^}/\n\tmax-recursion-queries 100;\n}/" /etc/bind/named.conf.options unbound-control -q status
# Only reset the local dns settings if unbound server is running, otherwise we'll
# end up with a system with an unusable internet connection
if [ $? -ne 0 ]; then
echo "Recursive DNS server not active"
exit 1
fi fi
# First we'll disable systemd-resolved's management of resolv.conf and its stub server. # Modify systemd settings
# Breaking the symlink to /run/systemd/resolve/stub-resolv.conf means
# systemd-resolved will read it for DNS servers to use. Put in 127.0.0.1,
# which is where bind9 will be running. Obviously don't do this before
# installing bind9 or else apt won't be able to resolve a server to
# download bind9 from.
rm -f /etc/resolv.conf rm -f /etc/resolv.conf
tools/editconf.py /etc/systemd/resolved.conf DNSStubListener=no tools/editconf.py /etc/systemd/resolved.conf \
DNS=127.0.0.1 \
DNSSEC=yes \
DNSStubListener=no
echo "nameserver 127.0.0.1" > /etc/resolv.conf echo "nameserver 127.0.0.1" > /etc/resolv.conf
# Restart the DNS services. # Restart the DNS services.
restart_service bind9
systemctl restart systemd-resolved systemctl restart systemd-resolved
# ### Fail2Ban Service # ### Fail2Ban Service
@ -346,16 +374,39 @@ systemctl restart systemd-resolved
# Configure the Fail2Ban installation to prevent dumb brute-force attacks against dovecot, postfix, ssh, etc.
rm -f /etc/fail2ban/jail.local # we used to use this file but don't anymore rm -f /etc/fail2ban/jail.local # we used to use this file but don't anymore
rm -f /etc/fail2ban/jail.d/defaults-debian.conf # removes default config so we can manage all of fail2ban rules in one config rm -f /etc/fail2ban/jail.d/defaults-debian.conf # removes default config so we can manage all of fail2ban rules in one config
if [ ! -z "$ADMIN_HOME_IPV6" ]; then
ADMIN_HOME_IPV6_FB="${ADMIN_HOME_IPV6}/64"
else
ADMIN_HOME_IPV6_FB=""
fi
cat conf/fail2ban/jails.conf \ cat conf/fail2ban/jails.conf \
| sed "s/PUBLIC_IPV6/$PUBLIC_IPV6/g" \ | sed "s/PUBLIC_IPV6/$PUBLIC_IPV6/g" \
| sed "s/PUBLIC_IP/$PUBLIC_IP/g" \ | sed "s/PUBLIC_IP/$PUBLIC_IP/g" \
| sed "s/ADMIN_HOME_IPV6/$ADMIN_HOME_IPV6/g" \ | sed "s/ADMIN_HOME_IPV6/$ADMIN_HOME_IPV6_FB/g" \
| sed "s/ADMIN_HOME_IP/$ADMIN_HOME_IP/g" \ | sed "s/ADMIN_HOME_IP/$ADMIN_HOME_IP/g" \
| sed "s#STORAGE_ROOT#$STORAGE_ROOT#" \ | sed "s#STORAGE_ROOT#$STORAGE_ROOT#" \
> /etc/fail2ban/jail.d/00-mailinabox.conf > /etc/fail2ban/jail.d/00-mailinabox.conf
cp -f conf/fail2ban/filter.d/* /etc/fail2ban/filter.d/ cp -f conf/fail2ban/filter.d/* /etc/fail2ban/filter.d/
cp -f conf/fail2ban/jail.d/* /etc/fail2ban/jail.d/ cp -f conf/fail2ban/jail.d/* /etc/fail2ban/jail.d/
# If SSH port is not default, add the not default to the ssh jail
if [ ! -z "$SSH_PORT" ]; then
# create backup copy
cp -f /etc/fail2ban/jail.conf /etc/fail2ban/jail.conf.miab_old
if [ "$SSH_PORT" != "22" ]; then
# Add alternative SSH port
sed -i "s/port[ ]\+=[ ]\+ssh$/port = ssh,$SSH_PORT/g" /etc/fail2ban/jail.conf
sed -i "s/port[ ]\+=[ ]\+ssh$/port = ssh,$SSH_PORT/g" /etc/fail2ban/jail.d/geoipblock.conf
else
# Set SSH port to default
sed -i "s/port[ ]\+=[ ]\+ssh/port = ssh/g" /etc/fail2ban/jail.conf
sed -i "s/port[ ]\+=[ ]\+ssh/port = ssh/g" /etc/fail2ban/jail.d/geoipblock.conf
fi
fi
# fail2ban should be able to look back far enough because we increased findtime of recidive jail # fail2ban should be able to look back far enough because we increased findtime of recidive jail
tools/editconf.py /etc/fail2ban/fail2ban.conf dbpurgeage=7d tools/editconf.py /etc/fail2ban/fail2ban.conf dbpurgeage=7d
View File
@ -28,16 +28,23 @@ apt_install \
# Install Roundcube from source if it is not already present or if it is out of date. # Install Roundcube from source if it is not already present or if it is out of date.
# Combine the Roundcube version number with the commit hash of plugins to track # Combine the Roundcube version number with the commit hash of plugins to track
# whether we have the latest version of everything. # whether we have the latest version of everything.
# For the latest versions, see:
VERSION=1.4.11 # https://github.com/roundcube/roundcubemail/releases
HASH=3877f0e70f29e7d0612155632e48c3db1e626be3 # https://github.com/mfreiholz/persistent_login/commits/master
PERSISTENT_LOGIN_VERSION=6b3fc450cae23ccb2f393d0ef67aa319e877e435 # version 5.2.0 # https://github.com/stremlau/html5_notifier/commits/master
# https://github.com/mstilkerich/rcmcarddav/releases
# The easiest way to get the package hashes is to run this script and get the hash from
# the error message.
VERSION=1.5.2
HASH=208ce4ca0be423cc0f7070ff59bd03588b4439bf
PERSISTENT_LOGIN_VERSION=59ca1b0d3a02cff5fa621c1ad581d15f9d642fe8
HTML5_NOTIFIER_VERSION=68d9ca194212e15b3c7225eb6085dbcf02fd13d7 # version 0.6.4+ HTML5_NOTIFIER_VERSION=68d9ca194212e15b3c7225eb6085dbcf02fd13d7 # version 0.6.4+
CARDDAV_VERSION=4.3.0
CARDDAV_HASH=4ad7df8843951062878b1375f77c614f68bc5c61
CONTEXT_MENU_VERSION=602a3812922fb8f71814eb3b8d91e9b7859aab7e # version 3.2.1
TWOFACT_COMMIT=06e21b0c03aeeb650ee4ad93538873185f776f8b # master @ 21-04-2022
CARDDAV_VERSION=4.1.1 UPDATE_KEY=$VERSION:$PERSISTENT_LOGIN_VERSION:$HTML5_NOTIFIER_VERSION:$CARDDAV_VERSION:$CONTEXT_MENU_VERSION:$TWOFACT_COMMIT
CARDDAV_HASH=87b73661b7799b2079c28324311eddb4241242bb
UPDATE_KEY=$VERSION:$PERSISTENT_LOGIN_VERSION:$HTML5_NOTIFIER_VERSION:$CARDDAV_VERSION
# paths that are often reused. # paths that are often reused.
RCM_DIR=/usr/local/lib/roundcubemail RCM_DIR=/usr/local/lib/roundcubemail
@ -76,7 +83,7 @@ if [ $needs_update == 1 ]; then
# install roundcube html5_notifier plugin # install roundcube html5_notifier plugin
git_clone https://github.com/kitist/html5_notifier.git $HTML5_NOTIFIER_VERSION '' ${RCM_PLUGIN_DIR}/html5_notifier git_clone https://github.com/kitist/html5_notifier.git $HTML5_NOTIFIER_VERSION '' ${RCM_PLUGIN_DIR}/html5_notifier
# download and verify the full release of the carddav plugin # download and verify the full release of the carddav plugin. Can't use git_clone because the repository does not include all dependencies
wget_verify \ wget_verify \
https://github.com/mstilkerich/rcmcarddav/releases/download/v${CARDDAV_VERSION}/carddav-v${CARDDAV_VERSION}.tar.gz \ https://github.com/mstilkerich/rcmcarddav/releases/download/v${CARDDAV_VERSION}/carddav-v${CARDDAV_VERSION}.tar.gz \
$CARDDAV_HASH \ $CARDDAV_HASH \
@ -86,6 +93,12 @@ if [ $needs_update == 1 ]; then
tar -C ${RCM_PLUGIN_DIR} --no-same-owner -zxf /tmp/carddav.tar.gz tar -C ${RCM_PLUGIN_DIR} --no-same-owner -zxf /tmp/carddav.tar.gz
rm -f /tmp/carddav.tar.gz rm -f /tmp/carddav.tar.gz
# install roundcube context menu plugin
git_clone https://github.com/johndoh/roundcube-contextmenu.git $CONTEXT_MENU_VERSION '' ${RCM_PLUGIN_DIR}/contextmenu
# install two factor totp authenticator
git_clone https://github.com/alexandregz/twofactor_gauthenticator.git $TWOFACT_COMMIT '' ${RCM_PLUGIN_DIR}/twofactor_gauthenticator
# record the version we've installed # record the version we've installed
echo $UPDATE_KEY > ${RCM_DIR}/version echo $UPDATE_KEY > ${RCM_DIR}/version
fi fi
@ -130,9 +143,10 @@ cat > $RCM_CONFIG <<EOF;
\$config['product_name'] = '$PRIMARY_HOSTNAME Webmail'; \$config['product_name'] = '$PRIMARY_HOSTNAME Webmail';
\$config['cipher_method'] = 'AES-256-CBC'; # persistent login cookie and potentially other things \$config['cipher_method'] = 'AES-256-CBC'; # persistent login cookie and potentially other things
\$config['des_key'] = '$SECRET_KEY'; # 37 characters -> ~256 bits for AES-256, see above \$config['des_key'] = '$SECRET_KEY'; # 37 characters -> ~256 bits for AES-256, see above
\$config['plugins'] = array('html5_notifier', 'archive', 'zipdownload', 'password', 'managesieve', 'jqueryui', 'persistent_login', 'carddav'); \$config['plugins'] = array('html5_notifier', 'archive', 'zipdownload', 'password', 'managesieve', 'jqueryui', 'persistent_login', 'carddav', 'markasjunk', 'contextmenu', 'twofactor_gauthenticator');
\$config['skin'] = 'elastic'; \$config['skin'] = 'elastic';
\$config['login_autocomplete'] = 2; \$config['login_autocomplete'] = 2;
\$config['login_username_filter'] = 'email';
\$config['password_charset'] = 'UTF-8'; \$config['password_charset'] = 'UTF-8';
\$config['junk_mbox'] = 'Spam'; \$config['junk_mbox'] = 'Spam';
?> ?>
@ -152,7 +166,7 @@ cat > ${RCM_PLUGIN_DIR}/carddav/config.inc.php <<EOF;
'active' => true, 'active' => true,
'readonly' => false, 'readonly' => false,
'refresh_time' => '02:00:00', 'refresh_time' => '02:00:00',
'fixed' => array('username','password'), 'fixed' => array('username'),
'preemptive_auth' => '1', 'preemptive_auth' => '1',
'hide' => false, 'hide' => false,
); );

View File

@ -232,7 +232,7 @@ if __name__ == "__main__":
run_test(managesieve_test, [], 20, 30, 4) run_test(managesieve_test, [], 20, 30, 4)
# Mail-in-a-Box control panel # Mail-in-a-Box control panel
run_test(http_test, ["/admin/me", 200], 20, 30, 1) run_test(http_test, ["/admin/login", 200], 20, 30, 1)
# Munin via the Mail-in-a-Box control panel # Munin via the Mail-in-a-Box control panel
run_test(http_test, ["/admin/munin/", 401], 20, 30, 1) run_test(http_test, ["/admin/munin/", 401], 20, 30, 1)

View File

@ -48,7 +48,7 @@ def test2(tests, server, description):
for qname, rtype, expected_answer in tests: for qname, rtype, expected_answer in tests:
# do the query and format the result as a string # do the query and format the result as a string
try: try:
response = dns.resolver.query(qname, rtype) response = dns.resolver.resolve(qname, rtype)
except dns.resolver.NoNameservers: except dns.resolver.NoNameservers:
# host did not have an answer for this query # host did not have an answer for this query
print("Could not connect to %s for DNS query." % server) print("Could not connect to %s for DNS query." % server)

View File

@ -48,7 +48,7 @@ server = smtplib.SMTP_SSL(host)
ipaddr = socket.gethostbyname(host) # IPv4 only! ipaddr = socket.gethostbyname(host) # IPv4 only!
reverse_ip = dns.reversename.from_address(ipaddr) # e.g. "1.0.0.127.in-addr.arpa." reverse_ip = dns.reversename.from_address(ipaddr) # e.g. "1.0.0.127.in-addr.arpa."
try: try:
reverse_dns = dns.resolver.query(reverse_ip, 'PTR')[0].target.to_text(omit_final_dot=True) # => hostname reverse_dns = dns.resolver.resolve(reverse_ip, 'PTR')[0].target.to_text(omit_final_dot=True) # => hostname
except dns.resolver.NXDOMAIN: except dns.resolver.NXDOMAIN:
print("Reverse DNS lookup failed for %s. SMTP EHLO name check skipped." % ipaddr) print("Reverse DNS lookup failed for %s. SMTP EHLO name check skipped." % ipaddr)
reverse_dns = None reverse_dns = None
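The same PTR check can be reproduced by hand with dig when debugging a failing EHLO-name comparison; the address below is a documentation placeholder, substitute the box's public IP:
dig +short -x 203.0.113.10   # prints the PTR target (with a trailing dot) if one exists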

View File

@ -1,6 +1,6 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
# From https://github.com/gsauthof/utility Thanks!
# 2016, Georg Sauthoff <mail@georg.so>, GPLv3+ # 2016, Georg Sauthoff <mail@georg.so>, GPLv3+
import argparse import argparse
@ -189,7 +189,7 @@ def get_addrs(dest, mx=True):
domains = [ dest ] domains = [ dest ]
if mx: if mx:
try: try:
r = dns.resolver.resolve(dest, 'mx') r = dns.resolver.resolve(dest, 'mx', search=True)
domains = [ answer.exchange for answer in r ] domains = [ answer.exchange for answer in r ]
log.debug('destination {} has MXs: {}' log.debug('destination {} has MXs: {}'
.format(dest, ', '.join([str(d) for d in domains]))) .format(dest, ', '.join([str(d) for d in domains])))
@ -199,7 +199,7 @@ def get_addrs(dest, mx=True):
for domain in domains: for domain in domains:
for t in ['a', 'aaaa']: for t in ['a', 'aaaa']:
try: try:
r = dns.resolver.resolve(domain, t) r = dns.resolver.resolve(domain, t, search=True)
except dns.resolver.NoAnswer: except dns.resolver.NoAnswer:
continue continue
xs = [ ( answer.address, domain ) for answer in r ] xs = [ ( answer.address, domain ) for answer in r ]
@ -216,12 +216,12 @@ def check_dnsbl(addr, bl):
rev = dns.reversename.from_address(addr) rev = dns.reversename.from_address(addr)
domain = str(rev.split(3)[0]) + '.' + bl domain = str(rev.split(3)[0]) + '.' + bl
try: try:
r = dns.resolver.resolve(domain, 'a') r = dns.resolver.resolve(domain, 'a', search=True)
except (dns.resolver.NXDOMAIN, dns.resolver.NoNameservers, dns.resolver.NoAnswer): except (dns.resolver.NXDOMAIN, dns.resolver.NoNameservers, dns.resolver.NoAnswer):
return 0 return 0
address = list(r)[0].address address = list(r)[0].address
try: try:
r = dns.resolver.resolve(domain, 'txt') r = dns.resolver.resolve(domain, 'txt', search=True)
txt = list(r)[0].to_text() txt = list(r)[0].to_text()
except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN): except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
txt = '' txt = ''
@ -237,7 +237,7 @@ def check_rdns(addrs):
log.debug('Check if there is a reverse DNS record that maps address {} to {}' log.debug('Check if there is a reverse DNS record that maps address {} to {}'
.format(addr, domain)) .format(addr, domain))
try: try:
r = dns.resolver.resolve(dns.reversename.from_address(addr), 'ptr') r = dns.resolver.resolve(dns.reversename.from_address(addr), 'ptr', search=True)
a = list(r)[0] a = list(r)[0]
target = str(a.target).lower() target = str(a.target).lower()
source = str(domain).lower() source = str(domain).lower()
@ -316,7 +316,7 @@ if __name__ == '__main__':
# #
## In[ ]: ## In[ ]:
# #
#r = dns.resolver.resolve(dns.reversename.from_address('89.238.75.224'), 'ptr') #r = dns.resolver.resolve(dns.reversename.from_address('89.238.75.224'), 'ptr', search=True)
#a = list(r)[0] #a = list(r)[0]
#a.target.to_text() #a.target.to_text()
# #
@ -360,7 +360,7 @@ if __name__ == '__main__':
## In[ ]: ## In[ ]:
# #
## as of 2016-11, listed ## as of 2016-11, listed
#r = dns.resolver.resolve('39.227.103.116.zen.spamhaus.org', 'txt') #r = dns.resolver.resolve('39.227.103.116.zen.spamhaus.org', 'txt', search=True)
#answer = list(r)[0] #answer = list(r)[0]
#answer.to_text() #answer.to_text()
# #
@ -388,7 +388,7 @@ if __name__ == '__main__':
# #
## In[ ]: ## In[ ]:
# #
#a = dns.resolver.resolve('georg.so', 'MX') #a = dns.resolver.resolve('georg.so', 'MX', search=True)
# #
# #
## In[ ]: ## In[ ]:
@ -404,7 +404,7 @@ if __name__ == '__main__':
## In[ ]: ## In[ ]:
# #
#[ x.exchange for x in a] #[ x.exchange for x in a]
#dns.resolver.resolve(list(a)[0].exchange, 'a') #dns.resolver.resolve(list(a)[0].exchange, 'a', search=True)
# #
# #
## In[ ]: ## In[ ]:
@ -416,14 +416,14 @@ if __name__ == '__main__':
## In[ ]: ## In[ ]:
# #
## should throw NoAnswer ## should throw NoAnswer
#a = dns.resolver.resolve('escher.lru.li', 'mx') #a = dns.resolver.resolve('escher.lru.li', 'mx', search=True)
##b = list(a) ##b = list(a)
#a #a
# #
# #
## In[ ]: ## In[ ]:
# #
#a = dns.resolver.resolve('georg.so', 'a') #a = dns.resolver.resolve('georg.so', 'a', search=True)
#b = list(a)[0] #b = list(a)[0]
#b.address #b.address
#dns.reversename.from_address(b.address) #dns.reversename.from_address(b.address)
@ -433,7 +433,7 @@ if __name__ == '__main__':
# #
## should throw NXDOMAIN ## should throw NXDOMAIN
#rs = str(r.split(3)[0]) #rs = str(r.split(3)[0])
#dns.resolver.resolve(rs + '.zen.spamhaus.org', 'A' ) #dns.resolver.resolve(rs + '.zen.spamhaus.org', 'A' , search=True)
# #
# #
## In[ ]: ## In[ ]:

33
tools/create_dns_blocklist.sh Executable file
View File

@ -0,0 +1,33 @@
#!/bin/bash
set -euo pipefail
# Download a select set of malware blocklists from The Firebog's "The Big Blocklist
# Collection" [0] and block access to the listed domains with Unbound by returning NXDOMAIN.
#
# [0]: https://firebog.net
(
# Malicious Lists
curl -sSf "https://raw.githubusercontent.com/DandelionSprout/adfilt/master/Alternate%20versions%20Anti-Malware%20List/AntiMalwareHosts.txt" ;
curl -sSf "https://osint.digitalside.it/Threat-Intel/lists/latestdomains.txt" ;
curl -sSf "https://s3.amazonaws.com/lists.disconnect.me/simple_malvertising.txt" ;
curl -sSf "https://v.firebog.net/hosts/Prigent-Crypto.txt" ;
curl -sSf "https://bitbucket.org/ethanr/dns-blacklists/raw/8575c9f96e5b4a1308f2f12394abd86d0927a4a0/bad_lists/Mandiant_APT1_Report_Appendix_D.txt" ;
curl -sSf "https://phishing.army/download/phishing_army_blocklist_extended.txt" ;
curl -sSf "https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-malware.txt" ;
curl -sSf "https://raw.githubusercontent.com/Spam404/lists/master/main-blacklist.txt" ;
curl -sSf "https://raw.githubusercontent.com/FadeMind/hosts.extras/master/add.Risk/hosts" ;
curl -sSf "https://urlhaus.abuse.ch/downloads/hostfile/" ;
# curl -sSf "https://v.firebog.net/hosts/Prigent-Malware.txt" ;
# curl -sSf "https://v.firebog.net/hosts/Shalla-mal.txt" ;
) |
cat | # Combine all lists into one
grep -v '#' | # Remove comment lines
grep -v '::' | # Remove the universal IPv6 address (::)
tr -d '\r' | # Normalize line endings by removing Windows carriage returns
sed -e 's/0\.0\.0\.0\s\{0,\}//g' | # Remove IP address from start of line
sed -e 's/127\.0\.0\.1\s\{0,\}//g' |
sed -e '/^$/d' | # Remove empty lines
sort -u | # Sort and remove duplicates
awk '{print "local-zone: " ""$1"" " always_nxdomain"}' # Convert to Unbound configuration

View File

@ -1 +1,2 @@
user = "<admin mail address @box>:<admin password>" USER_NAME="<admin mail address @box>"
USER_PASS="<admin password>"

View File

@ -15,7 +15,8 @@
#----- Contents of dyndns.cfg file below ------ #----- Contents of dyndns.cfg file below ------
#----- user credentials ----------------------- #----- user credentials -----------------------
#user = "admin@mydomain.com:MYADMINPASSWORD" #USER_NAME="admin@mydomain.com"
#USER_PASS="MYADMINPASSWORD"
#----- Contents of dyndns.domain below -------- #----- Contents of dyndns.domain below --------
#<miabdomain.tld> #<miabdomain.tld>
#------ Contents of dyndns.dynlist below ------ #------ Contents of dyndns.dynlist below ------
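# A hypothetical one-off setup of the three companion files described above
# (all values are placeholders; dyndns.dynlist holds one record name per line,
# as inferred from the loop below):
#   cat > dyndns.cfg <<'EOF'
#   USER_NAME="admin@mydomain.com"
#   USER_PASS="MYADMINPASSWORD"
#   EOF
#   echo "box.mydomain.com" > dyndns.domain
#   printf '%s\n' "home.mydomain.com" "office.mydomain.com" > dyndns.dynlist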
@ -35,6 +36,8 @@ CATCMD="/bin/cat"
OATHTOOLCMD="/usr/bin/oathtool" OATHTOOLCMD="/usr/bin/oathtool"
DYNDNSNAMELIST="$MYNAME.dynlist" DYNDNSNAMELIST="$MYNAME.dynlist"
IGNORESTR=";; connection timed out; no servers could be reached"
if [ ! -x $DIGCMD ]; then if [ ! -x $DIGCMD ]; then
echo "$MYNAME: dig command $DIGCMD not found. Check and fix please." echo "$MYNAME: dig command $DIGCMD not found. Check and fix please."
exit 99 exit 99
@ -66,24 +69,43 @@ if [ ! -f $DYNDNSNAMELIST ]; then
exit 99 exit 99
fi fi
source $CFGFILE
AUTHSTR="Authorization: Basic $(echo $USER_NAME:$USER_PASS | base64 -w 0)"
MYIP="`$DIGCMD +short myip.opendns.com @resolver1.opendns.com`" MYIP="`$DIGCMD +short myip.opendns.com @resolver1.opendns.com`"
if [ -z "MYIP" ]; then if [ -z "$MYIP" ]; then
MYIP="`$DIGCMD +short myip.opendns.com @resolver2.opendns.com`" MYIP="`$DIGCMD +short myip.opendns.com @resolver2.opendns.com`"
fi fi
if [ -z "MYIP" ]; then if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
if [ -z "$MYIP" ]; then
MYIP="`$DIGCMD +short myip.opendns.com @resolver3.opendns.com`" MYIP="`$DIGCMD +short myip.opendns.com @resolver3.opendns.com`"
fi fi
if [ -z "MYIP" ]; then if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
if [ -z "$MYIP" ]; then
MYIP="`$DIGCMD +short myip.opendns.com @resolver4.opendns.com`" MYIP="`$DIGCMD +short myip.opendns.com @resolver4.opendns.com`"
fi fi
if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
if [ -z "$MYIP" ]; then if [ -z "$MYIP" ]; then
MYIP=$($DIGCMD -4 +short TXT o-o.myaddr.l.google.com @ns1.google.com | tr -d '"') MYIP=$($DIGCMD -4 +short TXT o-o.myaddr.l.google.com @ns1.google.com | tr -d '"')
fi fi
if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
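# Why the IGNORESTR comparisons above are needed: dig prints its timeout
# diagnostic on stdout, where it would otherwise be captured as the "IP address".
# A hypothetical demonstration against a non-routable resolver (exact wording
# varies by dig version; IGNORESTR matches the classic message):
dig +short +time=1 +tries=1 myip.opendns.com @192.0.2.1 || true
# -> ;; connection timed out; no servers could be reached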
if [ ! -z "$MYIP" ]; then if [ ! -z "$MYIP" ]; then
for DYNDNSNAME in `$CATCMD $DYNDNSNAMELIST` for DYNDNSNAME in `$CATCMD $DYNDNSNAMELIST`
do do
@ -97,7 +119,7 @@ if [ ! -z "$MYIP" ]; then
else else
echo "$MYNAME: $DYNDNSNAME changed (previously: $PREVIP, now: $MYIP)" echo "$MYNAME: $DYNDNSNAME changed (previously: $PREVIP, now: $MYIP)"
STATUS="`$CURLCMD -X PUT -K $CFGFILE -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/A`" STATUS="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/A`"
case $STATUS in case $STATUS in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";; "OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";;
@ -116,7 +138,7 @@ if [ ! -z "$MYIP" ]; then
source $TOTPFILE source $TOTPFILE
TOTP="X-Auth-Token: $(oathtool --totp -b -d 6 $TOTP_KEY)" TOTP="X-Auth-Token: $(oathtool --totp -b -d 6 $TOTP_KEY)"
STATUST="`$CURLCMD -X PUT -K $CFGFILE -H "$TOTP" -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/A`" STATUST="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -H "$TOTP" -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/A`"
case $STATUST in case $STATUST in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeded but no update.";; "OK") echo "$MYNAME: mailinabox API returned OK, cmd succeded but no update.";;
@ -137,16 +159,28 @@ fi
# Now to do the same for ipv6 # Now to do the same for ipv6
MYIP="`$DIGCMD AAAA @resolver1.ipv6-sandbox.opendns.com myip.opendns.com +short -6`" MYIP="`$DIGCMD +short AAAA @resolver1.ipv6-sandbox.opendns.com myip.opendns.com -6`"
if [ -z "MYIP" ]; then if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP="`$DIGCMD AAAA @resolver2.ipv6-sandbox.opendns.com myip.opendns.com +short -6`" MYIP=""
fi
if [ -z "$MYIP" ]; then
MYIP="`$DIGCMD +short AAAA @resolver2.ipv6-sandbox.opendns.com myip.opendns.com -6`"
fi
if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi fi
if [ -z "$MYIP" ]; then if [ -z "$MYIP" ]; then
MYIP=$($DIGCMD -6 +short TXT o-o.myaddr.l.google.com @ns1.google.com | tr -d '"') MYIP=$($DIGCMD -6 +short TXT o-o.myaddr.l.google.com @ns1.google.com | tr -d '"')
fi fi
if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
if [ ! -z "$MYIP" ]; then if [ ! -z "$MYIP" ]; then
for DYNDNSNAME in `$CATCMD $DYNDNSNAMELIST` for DYNDNSNAME in `$CATCMD $DYNDNSNAMELIST`
do do
@ -160,7 +194,7 @@ if [ ! -z "$MYIP" ]; then
else else
echo "$MYNAME: $DYNDNSNAME changed (previously: $PREVIP, now: $MYIP)" echo "$MYNAME: $DYNDNSNAME changed (previously: $PREVIP, now: $MYIP)"
STATUS="`$CURLCMD -X PUT -K $CFGFILE -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/AAAA`" STATUS="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/AAAA`"
case $STATUS in case $STATUS in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";; "OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";;
@ -179,7 +213,7 @@ if [ ! -z "$MYIP" ]; then
source $TOTPFILE source $TOTPFILE
TOTP="X-Auth-Token: $(oathtool --totp -b -d 6 $TOTP_KEY)" TOTP="X-Auth-Token: $(oathtool --totp -b -d 6 $TOTP_KEY)"
STATUST="`$CURLCMD -X PUT -K $CFGFILE -H "$TOTP" -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/AAAA`" STATUST="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -H "$TOTP" -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/AAAA`"
case $STATUST in case $STATUST in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeded but no update.";; "OK") echo "$MYNAME: mailinabox API returned OK, cmd succeded but no update.";;

View File

@ -14,6 +14,10 @@
# #
# NAME VALUE # NAME VALUE
# #
# If the -e option is given and VALUE is empty, the setting is removed
# from the configuration file if it is set (i.e. existing occurrences
# are commented out and no new setting is added).
#
# If the -c option is given, then the supplied character becomes the comment character # If the -c option is given, then the supplied character becomes the comment character
# #
# If the -w option is given, then setting lines continue onto following # If the -w option is given, then setting lines continue onto following
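# A hypothetical invocation of the -e behaviour (file name for illustration only);
# an empty VALUE removes the setting by commenting out any existing occurrences,
# whereas without -e an empty value would simply be written out:
#
#   tools/editconf.py /etc/default/example.conf -e OBSOLETE_SETTING=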
@ -35,6 +39,7 @@ settings = sys.argv[2:]
delimiter = "=" delimiter = "="
delimiter_re = r"\s*=\s*" delimiter_re = r"\s*=\s*"
erase_setting = False
comment_char = "#" comment_char = "#"
folded_lines = False folded_lines = False
testing = False testing = False
@ -44,6 +49,9 @@ while settings[0][0] == "-" and settings[0] != "--":
# Space is the delimiter # Space is the delimiter
delimiter = " " delimiter = " "
delimiter_re = r"\s+" delimiter_re = r"\s+"
elif opt == "-e":
# Erase settings that have empty values.
erase_setting = True
elif opt == "-w": elif opt == "-w":
# Line folding is possible in this file. # Line folding is possible in this file.
folded_lines = True folded_lines = True
@ -81,7 +89,7 @@ while len(input_lines) > 0:
# See if this line is for any settings passed on the command line. # See if this line is for any settings passed on the command line.
for i in range(len(settings)): for i in range(len(settings)):
# Check that this line contain this setting from the command-line arguments. # Check if this line contains this setting from the command-line arguments.
name, val = settings[i].split("=", 1) name, val = settings[i].split("=", 1)
m = re.match( m = re.match(
"(\s*)" "(\s*)"
@ -91,8 +99,10 @@ while len(input_lines) > 0:
if not m: continue if not m: continue
indent, is_comment, existing_val = m.groups() indent, is_comment, existing_val = m.groups()
# If this is already the setting, do nothing. # If this is already the setting, keep it in the file, except:
if is_comment is None and existing_val == val: # * If we've already seen it before, then remove this duplicate line.
# * If val is empty and erase_setting is on, then comment it out.
if is_comment is None and existing_val == val and not (not val and erase_setting):
# It may be that we've already inserted this setting higher # It may be that we've already inserted this setting higher
# in the file so check for that first. # in the file so check for that first.
if i in found: break if i in found: break
@ -107,8 +117,9 @@ while len(input_lines) > 0:
# the line is already commented, pass it through # the line is already commented, pass it through
buf += line buf += line
# if this option oddly appears more than once, don't add the setting again # if this option is already set, don't add the setting again,
if i in found: # or if we're clearing the setting with -e, don't add it
if (i in found) or (not val and erase_setting):
break break
# add the new setting # add the new setting
@ -122,9 +133,10 @@ while len(input_lines) > 0:
# If did not match any setting names, pass this line through. # If did not match any setting names, pass this line through.
buf += line buf += line
# Put any settings we didn't see at the end of the file. # Put any settings we didn't see at the end of the file,
# except settings being cleared.
for i in range(len(settings)): for i in range(len(settings)):
if i not in found: if (i not in found) and not (not val and erase_setting):
name, val = settings[i].split("=", 1) name, val = settings[i].split("=", 1)
buf += name + delimiter + val + "\n" buf += name + delimiter + val + "\n"

View File

@ -1,6 +1,7 @@
#!/bin/bash #!/bin/bash
# #
# This script will restore the backup made during an installation # This script will restore the backup made during an installation
source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars source /etc/mailinabox.conf # load global vars
if [ -z "$1" ]; then if [ -z "$1" ]; then
@ -26,7 +27,7 @@ if [ ! -f $1/config.php ]; then
fi fi
echo "Restoring backup from $1" echo "Restoring backup from $1"
service php7.3-fpm stop service php$(php_version)-fpm stop
# remove the current ownCloud/Nextcloud installation # remove the current ownCloud/Nextcloud installation
rm -rf /usr/local/lib/owncloud/ rm -rf /usr/local/lib/owncloud/
@ -45,5 +46,5 @@ chown www-data.www-data $STORAGE_ROOT/owncloud/config.php
sudo -u www-data php /usr/local/lib/owncloud/occ maintenance:mode --off sudo -u www-data php /usr/local/lib/owncloud/occ maintenance:mode --off
service php7.3-fpm start service php$(php_version)-fpm start
echo "Done" echo "Done"