mirror of https://github.com/mail-in-a-box/mailinabox.git synced 2025-04-03 00:07:05 +00:00

Merge branch 'master' into roundcubesqlitemod

commit 61f9ad583d
Author: KiekerJan, 2023-02-06 14:34:03 +01:00, committed by GitHub
GPG Key ID: 4AEE18F83AFDEB23 (no known key found for this signature in database)
89 changed files with 3087 additions and 569 deletions

.github/workflows/codeql-analysis.yml (new file, +71)

@ -0,0 +1,71 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ master ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ master ]
schedule:
- cron: '43 20 * * 0'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
# Learn more:
# https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
steps:
- name: Checkout repository
uses: actions/checkout@v2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v1
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1

CHANGELOG.md

@ -1,6 +1,33 @@
CHANGELOG
=========

Version 61.1 (January 28, 2023)
-------------------------------

* Fixed rsync backups not working with the default port.
* Reverted "Improve error messages in the management tools when external command-line tools are run." because of the possibility of user secrets being included in error messages.
* Fix for TLS certificate SHA fingerprint not being displayed during setup.

Version 61 (January 21, 2023)
-----------------------------

System:

* fail2ban didn't start after setup.

Mail:

* Disable Roundcube password plugin since it was corrupting the user database.

Control panel:

* Fix changing existing backup settings when the rsync type is used.
* Allow setting a custom port for rsync backups.
* Fixes to DNS lookups during status checks when there are timeouts, enforce timeouts better.
* A new check is added to ensure fail2ban is running.
* Fixed a color.
* Improve error messages in the management tools when external command-line tools are run.

Version 60.1 (October 30, 2022)
-------------------------------

@ -23,12 +50,13 @@ No major features of Mail-in-a-Box have changed in this release, although some m

With the newer version of Ubuntu the following software packages we use are updated:

* dovecot is upgraded to 2.3.16, postfix to 3.6.4, opendmark to 1.4 (which adds ARC-Authentication-Results headers), and spampd to 2.53 (alleviating a mail delivery rate limiting bug).
-* Nextcloud is upgraded to 23.0.4 (contacts to 4.2.0, calendar to 3.5.0).
* Nextcloud is upgraded to 24.0.0
* Roundcube is upgraded to 1.6.0.
* certbot is upgraded to 1.21 (via the Ubuntu repository instead of a PPA).
* fail2ban is upgraded to 0.11.2.
* nginx is upgraded to 1.18.
-* PHP is upgraded from 7.2 to 8.0.
* PHP is upgraded from 7.2 to 8.1.
* bind9 is replaced with unbound

Also:

README.md

@ -1,3 +1,64 @@
Modifications are go
====================
This is not the original Mail-in-a-Box. See https://github.com/mail-in-a-box/mailinabox for the real deal! Many thanks to [@JoshData](https://github.com/JoshData) and other [contributors](https://github.com/mail-in-a-box/mailinabox/graphs/contributors).
I made a number of modifications to the original Mail-in-a-Box: some to fix bugs, some to ease maintenance of my personal installation, some to learn, and some to add functionality.
Functionality changes and additions
* Change installation target to Ubuntu 22.04.
* Add geoip blocking on the admin web console
This applies geoip filtering to access to the admin panel of the box. Order of filtering: block continents that are not allowed, then block countries that are not allowed, then allow countries that are explicitly allowed (overriding the continent filtering). Edit /etc/nginx/conf.d/10-geoblock.conf to configure.
* Add geoip blocking for ssh access
This applies geoip filtering to access to the ssh server. Edit /etc/geoiplookup.conf; all countries defined in this file are allowed. Works for alternate ssh ports.
This uses goiplookup from https://github.com/axllent/goiplookup
* Make fail2ban more strict
Enable postfix filters, lengthen bantime and findtime
* Add fail2ban jails for both of the geoip blocking filters mentioned above
* Add fail2ban filters for web scanners and badbots
* Add xapian full text searching to dovecot (from https://github.com/grosjo/fts-xapian)
* Add rkhunter
* Configure domain names for which only www will be hosted
Edit /etc/miabwwwdomains.conf to configure. The box will handle incoming traffic asking for these domain names, but their DNS entries are managed at an external DNS provider. If you want this box to handle the DNS entries as well, simply add a mail alias (existing functionality of the vanilla Mail-in-a-Box).
* Add some munin plugins
* Update Nextcloud to 24.0.0, along with updated apps
* Add nextcloud notes app
* Add roundcube context menu plugin
* Add roundcube two factor authentication plugin
* Use shorter TTL values in the DNS server
Useful, for example, shortly before changing IP addresses: shorter TTL values make changes propagate faster. For reference, the default TTL is 1 day and the short TTL is 5 minutes. To use, edit the file /etc/forceshortdnsttl and add a line for each domain for which shorter TTLs should be used (see the example after this list). To use short TTLs for all known domains, add a line "forceshortdnsttl".
* Use the box as a Hidden Master in the DNS system
Only the secondary DNS servers are then used as public DNS servers. When using a hidden master, no glue records are necessary at your domain registrar. To use, first set up secondary DNS servers via the Custom DNS administration page; at least two secondary servers should be set. When that works, edit the file /etc/usehiddenmasterdns and add a line for each domain for which Hidden Master should be used (see the example after this list). To use Hidden Master for all known domains, add a line "usehiddenmasterdns".
* Daily IP blacklist check
Using check-dnsbl.py from https://github.com/gsauthof/utility
* Updated TLS security for web and email
Removed older ciphers following internet.nl recommendations
* Replace opendkim with dkimpy (https://launchpad.net/dkimpy-milter)
Added support for Ed25519 signing
* Replace bind9 with unbound DNS resolver
* Make the backup target folder configurable
Set BACKUP_ROOT in /etc/mailinabox.conf to the backup target folder (the default is the same as STORAGE_ROOT); see the example after this list.
Bug fixes
* Munin error report fixed [see github issue](https://github.com/mail-in-a-box/mailinabox/issues/1555)
* Correct nextcloud carddav url [see github issue](https://github.com/mail-in-a-box/mailinabox/issues/1918)
Maintenance (personal)
* Automatically clean spam and trash folders after 120 days
* Removed Z-Push
* After a backup, restarting of services is moved to before the execution of the after-backup script. This enables mail delivery while the after-backup script runs.
* Add weekly pflogsumm log analysis
* Enable mail delivery to root, forwarded to administrator
* Remove nextcloud skeleton to save disk space
Fun
* Add option to define ADMIN_IP_ADDRESS
Currently only used to exempt that address from fail2ban jails
* Add dynamic DNS tools in the tools directory
These can be used to update DNS entries on the Mail-in-a-Box to point to a machine with a non-fixed (e.g. residential) IP address
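As a concrete illustration of the plain-text control files and settings mentioned in the list above (the domain names and backup path below are hypothetical examples, not shipped defaults):

# /etc/forceshortdnsttl — one domain per line; short (5 minute) TTLs are used for these domains.
# A line containing just "forceshortdnsttl" applies short TTLs to all known domains.
example.com
mail.example.com

# /etc/usehiddenmasterdns — one domain per line; these domains are served in Hidden Master mode.
# A line containing just "usehiddenmasterdns" applies it to all known domains.
example.com

# /etc/mailinabox.conf — optionally add BACKUP_ROOT to store backups outside STORAGE_ROOT.
BACKUP_ROOT=/mnt/backup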
Original mailinabox content starts here:
Mail-in-a-Box
=============

@ -60,7 +121,7 @@ Clone this repository and checkout the tag corresponding to the most recent release

$ git clone https://github.com/mail-in-a-box/mailinabox
$ cd mailinabox
-$ git checkout v60.1
$ git checkout v61.1

Begin the installation.

Vagrantfile (2 changed lines)

@ -19,7 +19,7 @@ Vagrant.configure("2") do |config|
export PUBLIC_IP=auto
export PUBLIC_IPV6=auto
export PRIMARY_HOSTNAME=auto
-#export SKIP_NETWORK_CHECKS=1
export SKIP_NETWORK_CHECKS=1

# Start the setup script.
cd /vagrant


@ -1262,7 +1262,7 @@ paths:
$ref: '#/components/schemas/MailUserAddResponse'
example: |
mail user added
-updated DNS: OpenDKIM configuration
updated DNS: DKIM configuration
400:
description: Bad request
content:
@ -1863,7 +1863,7 @@ components:
type: string
example: |
mail user added
-updated DNS: OpenDKIM configuration
updated DNS: DKIM configuration
description: |
Mail user add response.


@ -0,0 +1,5 @@
#!/bin/bash
#
doveadm expunge -A mailbox Trash savedbefore 120d
doveadm expunge -A mailbox Spam savedbefore 120d

conf/cron/miab_dovecot (new file, +2)

@ -0,0 +1,2 @@
#!/bin/bash
/usr/bin/doveadm fts optimize -A > /dev/null 2>&1

conf/dh4096.pem (new file, +13)

@ -0,0 +1,13 @@
-----BEGIN DH PARAMETERS-----
MIICCAKCAgEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz
+8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a
87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7
YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi
7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD
ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3
7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32
nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZp4e
8W5vUsMWTfT7eTDp5OWIV7asfV9C1p9tGHdjzx1VA0AEh/VbpX4xzHpxNciG77Qx
iu1qHgEtnmgyqQdgCpGBMMRtx3j5ca0AOAkpmaMzy4t6Gh25PXFAADwqTs6p+Y0K
zAqCkc3OyX3Pjsm1Wn+IpGtNtahR9EGC4caKAH5eZV9q//////////8CAQI=
-----END DH PARAMETERS-----


@ -0,0 +1,12 @@
[INCLUDES]
before = common.conf
[Definition]
miab-errors=postfix/(submission/)?smtpd.*warning: hostname .* does not resolve to address <HOST>:.+
miab-normal=postfix/(submission/)?smtpd.*warning: hostname .* does not resolve to address <HOST>$
ignoreregex =
failregex = <miab-<mode>>
mode = normal


@ -0,0 +1,7 @@
[INCLUDES]
before = common.conf
[Definition]
failregex=postfix/submission/smtpd.*warning: non-SMTP command from.*\[<HOST>\].*HTTP.*$
ignoreregex =


@ -0,0 +1,24 @@
# Fail2Ban configuration file
#
# Regexp to catch known spambots and software alike. Please verify
# that it is your intent to block IPs which were driven by
# above mentioned bots.
[Definition]
badbotscustom = EmailCollector|WebEMailExtrac|TrackBack/1\.02|sogou music spider|(?:Mozilla/\d+\.\d+ )?Jorgee
badbots = Atomic_Email_Hunter/4\.0|atSpider/1\.0|autoemailspider|bwh3_user_agent|China Local Browse 2\.6|ContactBot/0\.2|ContentSmartz|DataCha0s/2\.0|DBrowse 1\.4b|DBrowse 1\.4d|Demo Bot DOT 16b|Demo Bot Z 16b|DSurf15a 01|DSurf15a 71|DSurf15a 81|DSurf15a VA|EBrowse 1\.4b|Educate Search VxB|EmailSiphon|EmailSpider|EmailWolf 1\.00|ESurf15a 15|ExtractorPro|Franklin Locator 1\.8|FSurf15a 01|Full Web Bot 0416B|Full Web Bot 0516B|Full Web Bot 2816B|Guestbook Auto Submitter|Industry Program 1\.0\.x|ISC Systems iRc Search 2\.1|IUPUI Research Bot v 1\.9a|LARBIN-EXPERIMENTAL \(efp@gmx\.net\)|LetsCrawl\.com/1\.0 \+http\://letscrawl\.com/|Lincoln State Web Browser|LMQueueBot/0\.2|LWP\:\:Simple/5\.803|Mac Finder 1\.0\.xx|MFC Foundation Class Library 4\.0|Microsoft URL Control - 6\.00\.8xxx|Missauga Locate 1\.0\.0|Missigua Locator 1\.9|Missouri College Browse|Mizzu Labs 2\.2|Mo College 1\.9|MVAClient|Mozilla/2\.0 \(compatible; NEWT ActiveX; Win32\)|Mozilla/3\.0 \(compatible; Indy Library\)|Mozilla/3\.0 \(compatible; scan4mail \(advanced version\) http\://www\.peterspages\.net/?scan4mail\)|Mozilla/4\.0 \(compatible; Advanced Email Extractor v2\.xx\)|Mozilla/4\.0 \(compatible; Iplexx Spider/1\.0 http\://www\.iplexx\.at\)|Mozilla/4\.0 \(compatible; MSIE 5\.0; Windows NT; DigExt; DTS Agent|Mozilla/4\.0 efp@gmx\.net|Mozilla/5\.0 \(Version\: xxxx Type\:xx\)|NameOfAgent \(CMS Spider\)|NASA Search 1\.0|Nsauditor/1\.x|PBrowse 1\.4b|PEval 1\.4b|Poirot|Port Huron Labs|Production Bot 0116B|Production Bot 2016B|Production Bot DOT 3016B|Program Shareware 1\.0\.2|PSurf15a 11|PSurf15a 51|PSurf15a VA|psycheclone|RSurf15a 41|RSurf15a 51|RSurf15a 81|searchbot admin@google\.com|ShablastBot 1\.0|snap\.com beta crawler v0|Snapbot/1\.0|Snapbot/1\.0 \(Snap Shots&#44; \+http\://www\.snap\.com\)|sogou develop spider|Sogou Orion spider/3\.0\(\+http\://www\.sogou\.com/docs/help/webmasters\.htm#07\)|sogou spider|Sogou web spider/3\.0\(\+http\://www\.sogou\.com/docs/help/webmasters\.htm#07\)|sohu agent|SSurf15a 11 |TSurf15a 11|Under the Rainbow 2\.2|User-Agent\: Mozilla/4\.0 \(compatible; MSIE 6\.0; Windows NT 5\.1\)|VadixBot|WebVulnCrawl\.unknown/1\.0 libwww-perl/5\.803|Wells Search II|WEP Search 00
failregex = ^<HOST> -.*"(GET|POST|HEAD).*HTTP.*"(?:%(badbots)s|%(badbotscustom)s)"$
ignoreregex =
datepattern = ^[^\[]*\[({DATE})
{^LN-BEG}
# DEV Notes:
# List of bad bots fetched from http://www.user-agents.org
# Generated on Thu Nov 7 14:23:35 PST 2013 by files/gen_badbots.
#
# Author: Yaroslav Halchenko


@ -0,0 +1,6 @@
# Ban requests for non-existing or not-allowed resources
[Definition]
# regex for nginx error.log
failregex = ^.* \[error\] .*2: No such file or directory.*client: <HOST>.*$
ignoreregex = ^.*(robots.txt|favicon.ico).*$


@ -0,0 +1,12 @@
# Fail2Ban filter Mail-in-a-Box geo ip block
[INCLUDES]
before = common.conf
[Definition]
_daemon = mailinabox
failregex = .* - Geoip blocked <HOST>
ignoreregex =


@ -0,0 +1,6 @@
# Ban requests for non-existing or not-allowed resources
[Definition]
failregex = ^.* \[error\] .*2: No such file or directory.*client: <HOST>.*$
ignoreregex = ^.*(robots.txt|favicon.ico).*$


@ -0,0 +1,10 @@
# Fail2Ban filter sshd ip block according to https://www.axllent.org/docs/ssh-geoip/
[INCLUDES]
before = common.conf
[Definition]
failregex = .* DENY geoipblocked connection from <HOST>
ignoreregex =


@ -0,0 +1,238 @@
# Fail2Ban Web Exploits Filter
# Author & Copyright: Mitchell Krog - mitchellkrog@gmail.com
# REPO: https://github.com/mitchellkrogza/Fail2Ban.WebExploits
# V0.1.27
# Last Updated: Tue May 8 11:08:42 SAST 2018
[Definition]
failregex = ^<HOST> -.*(GET|POST|HEAD).*(/\.git/config)
^<HOST> -.*(GET|POST).*/administrator/index\.php.*500
^<HOST> -.*(GET|POST|HEAD).*(/:8880/)
^<HOST> -.*(GET|POST|HEAD).*(/1\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/addons/theme/stv1/_static/image/favicon\.ico)
^<HOST> -.*(GET|POST|HEAD).*(/addons/theme/stv1/_static/ts2/layout\.css)
^<HOST> -.*(GET|POST|HEAD).*(/addons/theme/stv2/_static/ts2/layout\.css)
^<HOST> -.*(GET|POST|HEAD).*(/Admin/Common/HelpLinks\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/admin-console)
^<HOST> -.*(GET|POST|HEAD).*(/admin/inc/xml\.xslt)
^<HOST> -.*(GET|POST|HEAD).*(/administrator/components/com_xcloner-backupandrestore/index2\.php)
# ^<HOST> -.*(GET|POST|HEAD).*(/administrator/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/administrator/manifests/files/joomla\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/admin/mysql2/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/admin/mysql/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/admin/phpMyAdmin/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/admin/pma/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/admin/PMA/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/admin/SouthidcEditor/ButtonImage/standard/componentmenu\.gif)
^<HOST> -.*(GET|POST|HEAD).*(/admin/SouthidcEditor/Dialog/dialog\.js)
^<HOST> -.*(GET|POST|HEAD).*(/admin/SouthidcEditor/ewebeditor\.asp)
^<HOST> -.*(GET|POST|HEAD).*(/API/DW/Dwplugin/SystemLabel/SiteConfig\.htm)
^<HOST> -.*(GET|POST|HEAD).*(/API/DW/Dwplugin/TemplateManage/login_site\.htm)
^<HOST> -.*(GET|POST|HEAD).*(/API/DW/Dwplugin/TemplateManage/manage_site\.htm)
^<HOST> -.*(GET|POST|HEAD).*(/API/DW/Dwplugin/TemplateManage/save_template\.htm)
^<HOST> -.*(GET|POST|HEAD).*(/API/DW/Dwplugin/ThirdPartyTags/SiteFactory\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/api/jsonws/invoke)
^<HOST> -.*(GET|POST|HEAD).*(/app/home/skins/default/style\.css)
^<HOST> -.*(GET|POST|HEAD).*(/app/js/source/wcmlib/WCMConstants\.js)
^<HOST> -.*(GET|POST|HEAD).*(/apple-app-site-association)
^<HOST> -.*(GET|POST|HEAD).*(/app/Tpl/fanwe_1/js/)
^<HOST> -.*(GET|POST|HEAD).*(/app/etc/local\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/Autodiscover/Autodiscover\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/_asterisk/)
^<HOST> -.*(GET|POST|HEAD).*(/backup\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/bencandy\.php)
^<HOST> -.*(GET|POST|HEAD).*(/blog/administrator/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/boaform/admin/formLogin)
^<HOST> -.*(GET|POST|HEAD).*(/cardamom\.html)
^<HOST> -.*(GET|POST|HEAD).*(/cgi-bin/php)
^<HOST> -.*(GET|POST|HEAD).*(/cgi-bin/php5)
^<HOST> -.*(GET|POST|HEAD).*(/cgi/common\.cgi)
^<HOST> -.*(GET|POST|HEAD).*(/CGI/Execute)
^<HOST> -.*(GET|POST|HEAD).*(/check\.proxyradar\.com/azenv\.php)
^<HOST> -.*(GET|POST|HEAD).*(/ckeditor/ckfinder/ckfinder\.html)
^<HOST> -.*(GET|POST|HEAD).*(/ckeditor/ckfinder/install\.txt)
^<HOST> -.*(GET|POST|HEAD).*(/ckfinder/ckfinder\.html)
^<HOST> -.*(GET|POST|HEAD).*(/ckfinder/install\.txt)
^<HOST> -.*(GET|POST|HEAD).*(/ckupload\.php)
^<HOST> -.*(GET|POST|HEAD).*(/claroline/phpMyAdmin/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/clases\.gone\.php)
^<HOST> -.*(GET|POST|HEAD).*(/cms/administrator)
^<HOST> -.*(GET|POST|HEAD).*(/command\.php)
^<HOST> -.*(GET|POST|HEAD).*(/components/com_adsmanager/js/fullnoconflict\.js)
^<HOST> -.*(GET|POST|HEAD).*(/components/com_b2jcontact/css/b2jcontact\.css)
^<HOST> -.*(GET|POST|HEAD).*(/components/com_b2jcontact/router\.php)
^<HOST> -.*(GET|POST|HEAD).*(/components/com_foxcontact/js/jtext\.js)
^<HOST> -.*(GET|POST|HEAD).*(/components/com_sexycontactform/assets/js/index\.html)
^<HOST> -.*(GET|POST|HEAD).*(/console/)
^<HOST> -.*(GET|POST|HEAD).*(/console/auth/reg_newuser\.jsp)
^<HOST> -.*(GET|POST|HEAD).*(/console/include/not_login\.htm)
^<HOST> -.*(GET|POST|HEAD).*(/console/js/CTRSRequestParam\.js)
^<HOST> -.*(GET|POST|HEAD).*(/console/js/CWCMDialogHead\.js)
^<HOST> -.*(GET|POST|HEAD).*(/customer/account/login/referer/)
^<HOST> -.*(GET|POST|HEAD).*(/currentsetting\.htm)
^<HOST> -.*(GET|POST|HEAD).*(/CuteSoft_Client/CuteEditor/Help/default\.htm)
^<HOST> -.*(GET|POST|HEAD).*(/CuteSoft_Client/CuteEditor/ImageEditor/listfiles\.aspx)
^<HOST> -.*(GET|POST|HEAD).*(/CuteSoft_Client/CuteEditor/Images/log\.gif)
^<HOST> -.*(GET|POST|HEAD).*(/data/admin/ver\.txt)
^<HOST> -.*(GET|POST|HEAD).*(/database\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/data\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/datacenter/downloadApp/showDownload\.do)
^<HOST> -.*(GET|POST|HEAD).*(/db/)
^<HOST> -.*(GET|POST|HEAD).*(/dbadmin/)
^<HOST> -.*(GET|POST|HEAD).*(/dbadmin/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/db_backup\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/dbdump\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/db\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/db/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/dump\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/deptWebsiteAction\.do)
^<HOST> -.*(GET|POST|HEAD).*(/eams/static/scripts/grade/course/input\.js)
^<HOST> -.*(GET|POST|HEAD).*(/editor/js/fckeditorcode_ie\.js)
^<HOST> -.*(GET|POST|HEAD).*(\.env\.dev\.local)
^<HOST> -.*(GET|POST|HEAD).*(/\.env\.development\.local)
^<HOST> -.*(GET|POST|HEAD).*(/\.env\.prod\.local)
^<HOST> -.*(GET|POST|HEAD).*(/\.env\.production\.local)
^<HOST> -.*(GET|POST|HEAD).*(/examples/file-manager\.html)
^<HOST> -.*(GET|POST|HEAD).*(/getcfg\.php)
^<HOST> -.*(GET|POST|HEAD).*(/get_password\.php)
^<HOST> -.*(GET|POST|HEAD).*(/\.git/info)
^<HOST> -.*(GET|POST|HEAD).*(/\.git/HEAD)
^<HOST> -.*(GET|POST|HEAD).*(/Hello\.World)
^<HOST> -.*(GET|POST|HEAD).*(/hndUnblock\.cgi)
^<HOST> -.*(GET|POST|HEAD).*(/images/login9/login_33\.jpg)
^<HOST> -.*(GET|POST|HEAD).*(/include/dialog/config\.php)
^<HOST> -.*(GET|POST|HEAD).*(/include/install_ocx\.aspx)
^<HOST> -.*(GET|POST|HEAD).*(/index\.action)
^<HOST> -.*(GET|POST|HEAD).*(/ip_js\.php)
^<HOST> -.*(GET|POST|HEAD).*(/issmall/)
^<HOST> -.*(GET|POST|HEAD).*(/jenkins/script)
^<HOST> -.*(GET|POST|HEAD).*(/jenkins/login)
^<HOST> -.*(GET|POST|HEAD).*(/jm-ajax/upload_file/)
^<HOST> -.*(GET|POST|HEAD).*(/jmx-console)
^<HOST> -.*(GET|POST|HEAD).*(/js/tools\.js)
^<HOST> -.*(GET|POST|HEAD).*(/letrokart.sql)
^<HOST> -.*(GET|POST|HEAD).*(/libraries/sfn\.php)
^<HOST> -.*(GET|POST|HEAD).*(/localhost\.sql)
^<HOST> -.*(GET|POST|HEAD).*(login\.destroy\.session)
^<HOST> -.*(GET|POST|HEAD).*(/login/Jeecms\.do)
^<HOST> -.*(GET|POST|HEAD).*(/logo_img\.php)
^<HOST> -.*(GET|POST|HEAD).*(/maintlogin\.jsp)
^<HOST> -.*(GET|POST|HEAD).*(/manager/html)
^<HOST> -.*(GET|POST|HEAD).*(/manager/status)
^<HOST> -.*(GET|POST|HEAD).*(/magmi/conf/magmi\.ini)
^<HOST> -.*(GET|POST|HEAD).*(/master/login\.aspx)
^<HOST> -.*(GET|POST|HEAD).*(/media/com_hikashop/js/hikashop\.js)
^<HOST> -.*(GET|POST|HEAD).*(/modules/attributewizardpro/config\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/modules/columnadverts/config\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/modules/fieldvmegamenu/config\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/modules/homepageadvertise2/config\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/modules/homepageadvertise/config\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/modules/mod_simplefileuploadv1\.3/elements/udd\.php)
^<HOST> -.*(GET|POST|HEAD).*(/modules/pk_flexmenu/config\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/modules/pk_vertflexmenu/config\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/modules/wdoptionpanel/config\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/msd)
^<HOST> -.*(GET|POST|HEAD).*(/msd1\.24\.4)
^<HOST> -.*(GET|POST|HEAD).*(/msd1\.24stable)
^<HOST> -.*(GET|POST|HEAD).*(mstshash=NCRACK_USER)
^<HOST> -.*(GET|POST|HEAD).*(/muieblackcat)
^<HOST> -.*(GET|POST|HEAD).*(/myadmin2/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/myadmin/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/myadmin/scripts/setup\.php)
^<HOST> -.*(GET|POST|HEAD).*(/MyAdmin/scripts/setup\.php)
^<HOST> -.*(GET|POST|HEAD).*(/mysql-admin/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/mysqladmin/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/mysqldumper)
^<HOST> -.*(GET|POST|HEAD).*(/mySqlDumper)
^<HOST> -.*(GET|POST|HEAD).*(/MySQLDumper)
^<HOST> -.*(GET|POST|HEAD).*(/mysqldump\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/mysql\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/phpadmin/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/phpma/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/phpMyadmin_bak/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/phpMyAdmin/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/phpMyAdmin/phpMyAdmin/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/phpMyAdmin/scripts/setup\.php)
^<HOST> -.*(GET|POST|HEAD).*(/plugins/anchor/anchor\.js)
^<HOST> -.*(GET|POST|HEAD).*(/plugins/filemanager/filemanager/js)
^<HOST> -.*(GET|POST|HEAD).*(/plus/download\.php)
^<HOST> -.*(GET|POST|HEAD).*(/plus/heightsearch\.php)
^<HOST> -.*(GET|POST|HEAD).*(/plus/rssmap\.html)
^<HOST> -.*(GET|POST|HEAD).*(/plus/sitemap\.html)
^<HOST> -.*(GET|POST|HEAD).*(/pma/)
^<HOST> -.*(GET|POST|HEAD).*(/PMA/)
^<HOST> -.*(GET|POST|HEAD).*(/PMA2/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/pma/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/PMA/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/pmamy2/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/pmamy/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/pma-old/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/pma/scripts/setup\.php)
^<HOST> -.*(GET|POST|HEAD).*(/pmd/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/privacy\.txt)
^<HOST> -.*(GET|POST|HEAD).*(/resources/style/images/login/btn\.png)
^<HOST> -.*(GET|POST|HEAD).*(/Scripts/jquery/maticsoft\.jquery\.min\.js)
^<HOST> -.*(GET|POST|HEAD).*(/script/valid_formdata\.js)
^<HOST> -.*(GET|POST|HEAD).*(/siteserver/login\.aspx)
^<HOST> -.*(GET|POST|HEAD).*(/siteserver/upgrade/default\.aspx)
^<HOST> -.*(GET|POST|HEAD).*(/site\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/sql\.sql)
^<HOST> -.*(GET|POST|HEAD).*(soap:Envelope)
^<HOST> -.*(GET|POST|HEAD).*(/solr/admin/info/system)
^<HOST> -.*(GET|POST|HEAD).*(/stalker_portal/c)
^<HOST> -.*(GET|POST|HEAD).*(/stalker_portal/server/adm/tv-channels/iptv-list-json)
^<HOST> -.*(GET|POST|HEAD).*(/stalker_portal/server/adm/users/users-list-json)
^<HOST> -.*(GET|POST|HEAD).*(/stssys\.htm)
^<HOST> -.*(GET|POST|HEAD).*(/sys\.cache\.php)
^<HOST> -.*(GET|POST|HEAD).*(/system/assets/jquery/jquery-2\.x\.min\.js)
^<HOST> -.*(GET|POST|HEAD).*(/system_api\.php)
^<HOST> -.*(GET|POST|HEAD).*(/template/1/bluewise/_files/jspxcms\.css)
^<HOST> -.*(GET|POST|HEAD).*(/templates/jsn_glass_pro/ext/hikashop/jsn_ext_hikashop\.css)
^<HOST> -.*(GET|POST|HEAD).*(/test_404_page/)
^<HOST> -.*(GET|POST|HEAD).*(/test_for_404/)
^<HOST> -.*(GET|POST|HEAD).*(/temp\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/translate\.sql)
^<HOST> -.*(GET|POST|HEAD).*(Test Wuz Here)
^<HOST> -.*(GET|POST|HEAD).*(/tmUnblock\.cgi)
^<HOST> -.*(GET|POST|HEAD).*(/tools/phpMyAdmin/index\.ph)
^<HOST> -.*(GET|POST|HEAD).*(/uc_server/control/admin/db\.php)
^<HOST> -.*(GET|POST|HEAD).*(/upload/bank-icons/)
^<HOST> -.*(GET|POST|HEAD).*(/UserCenter/css/admin/bgimg/admin_all_bg\.png)
^<HOST> -.*(GET|POST|HEAD).*(/\.user\.ini)
^<HOST> -.*(GET|POST|HEAD).*(\.bitcoin)
^<HOST> -.*(GET|POST|HEAD).*(wallet\.dat)
^<HOST> -.*(GET|POST|HEAD).*(bitcoin\.dat)
^<HOST> -.*(GET|POST|HEAD).*(/magento2/admin)
^<HOST> -.*(GET|POST|HEAD).*(/user/register?element_parents=account)
^<HOST> -.*(GET|POST|HEAD).*(/user/themes/antimatter/js/antimatter\.js)
^<HOST> -.*(GET|POST|HEAD).*(/user/themes/antimatter/js/modernizr\.custom\.71422\.js)
^<HOST> -.*(GET|POST|HEAD).*(/user/themes/antimatter/js/slidebars\.min\.js)
^<HOST> -.*(GET|POST|HEAD).*(/users\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/vendor/phpunit/phpunit)
^<HOST> -.*(GET|POST|HEAD).*(/w00tw00t)
^<HOST> -.*(GET|POST|HEAD).*(/webbuilder/script/locale/wb-lang-zh_CN\.js)
^<HOST> -.*(GET|POST|HEAD).*(/web-console)
^<HOST> -.*(GET|POST|HEAD).*(/webdav)
^<HOST> -.*(GET|POST|HEAD).*(/web/phpMyAdmin/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(/whir_system/login\.aspx)
^<HOST> -.*(GET|POST|HEAD).*(/whir_system/module/security/login\.aspx)
^<HOST> -.*(GET|POST|HEAD).*(/wls-wsat/CoordinatorPortType)
^<HOST> -.*(GET|POST|HEAD).*(/wpbase/url\.php)
^<HOST> -.*(GET|POST|HEAD).*(/wp-content/plugins/)
^<HOST> -.*(GET|POST|HEAD).*(/wp-content/uploads/dump\.sql)
^<HOST> -.*(GET|POST|HEAD).*(/wp-includes/wlwmanifest\.xml)
^<HOST> -.*(GET|POST|HEAD).*(/wp-login\.php)
^<HOST> -.*(GET|POST|HEAD).*(/www/phpMyAdmin/index\.php)
^<HOST> -.*(GET|POST|HEAD).*(\x00Cookie:)
^<HOST> -.*(GET|POST|HEAD).*(\x22cache_name_function)
^<HOST> -.*(GET|POST|HEAD).*(\x22JDatabaseDriverMysqli)
^<HOST> -.*(GET|POST|HEAD).*(\x22JSimplepieFactory)
^<HOST> -.*(GET|POST|HEAD).*(\x22sanitize)
^<HOST> -.*(GET|POST|HEAD).*(\x22SimplePie)
^<HOST> -.*(GET|POST|HEAD).*(\x5C0disconnectHandlers)
^<HOST> -.*(GET|POST|HEAD).*(\.\./wp-config.php)
ignoreregex =


@ -0,0 +1,13 @@
# Block clients that generate too many requests for non-existent resources.
# Do not deploy if you host many websites on your box:
# any bad HTML link will trigger a false positive.
# This jail is meant to catch scanners that try many sites.
[badrequests]
enabled = true
port = http,https
filter = nginx-badrequests
logpath = /var/log/nginx/error.log
maxretry = 8
findtime = 15m
bantime = 15m


@ -0,0 +1,17 @@
[geoipblocknginx]
enabled = true
port = http,https
filter = nginx-geoipblock
logpath = /var/log/nginx/geoipblock.log
maxretry = 1
findtime = 120m
bantime = 15m
[geoipblockssh]
enabled = true
port = ssh
filter = ssh-geoipblock
logpath = /var/log/syslog
maxretry = 1
findtime = 120m
bantime = 15m


@ -0,0 +1,9 @@
[nginx-badbots]
enabled = true
port = http,https
filter = nginx-badbots
logpath = /var/log/nginx/access.log
maxretry = 2
[nginx-http-auth]
enabled = true


@ -0,0 +1,44 @@
# Typically non-SMTP commands. Block quickly on access to postfix.
[miab-postfix-scanner]
enabled = true
port = smtp,465,587
filter = miab-postfix-scanner
logpath = /var/log/mail.log
maxretry = 2
findtime = 1d
bantime = 1h
# IP lookup of the hostname does not match. Ban leniently.
[miab-pf-rdnsfail]
enabled = true
port = smtp,465,587
mode = normal
filter = miab-postfix-rdnsfail
logpath = /var/log/mail.log
maxretry = 8
findtime = 12h
bantime = 30m
# IP lookup of the hostname does not match, with errors. Block more strictly.
[miab-pf-rdnsfail-e]
enabled = true
port = smtp,465,587
mode = errors
filter = miab-postfix-rdnsfail[mode=errors]
logpath = /var/log/mail.log
maxretry = 4
findtime = 2d
bantime = 2h
# Aggressive filter against DDoS etc.
[postfix-aggressive]
enabled = true
mode = aggressive
filter = postfix[mode=aggressive]
port = smtp,465,submission
logpath = %(postfix_log)s
backend = %(postfix_backend)s
maxretry = 100
findtime = 15m
bantime = 1h


@ -0,0 +1,12 @@
# Block clients based on a list of specific requests.
# The list contains applications that are not installed here;
# only scanners and bad actors will request them repeatedly,
# so blocking can be fast and long.
[webexploits]
enabled = true
port = http,https
filter = webexploits
logpath = /var/log/nginx/access.log
maxretry = 2
findtime = 4h
bantime = 4h


@ -5,13 +5,16 @@
# Whitelist our own IP addresses. 127.0.0.1/8 is the default. But our status checks
# ping services over the public interface so we should whitelist that address of
# ours too. The string is substituted during installation.
-ignoreip = 127.0.0.1/8 PUBLIC_IP ::1 PUBLIC_IPV6
ignoreip = 127.0.0.1/8 ::1/128 PUBLIC_IP PUBLIC_IPV6/64 ADMIN_HOME_IP ADMIN_HOME_IPV6
bantime = 15m
findtime = 120m
maxretry = 4

[dovecot]
enabled = true
filter = dovecotimap
logpath = /var/log/mail.log
-findtime = 30
findtime = 2m
maxretry = 20

[miab-management]
@ -20,7 +23,7 @@ filter = miab-management-daemon
port = http,https
logpath = /var/log/syslog
maxretry = 20
-findtime = 30
findtime = 15m

[miab-munin]
enabled = true
@ -28,15 +31,15 @@ port = http,https
filter = miab-munin
logpath = /var/log/nginx/access.log
maxretry = 20
-findtime = 30
findtime = 15m

[miab-owncloud]
enabled = true
port = http,https
filter = miab-owncloud
-logpath = STORAGE_ROOT/owncloud/nextcloud.log
logpath = /var/log/nextcloud.log
maxretry = 20
-findtime = 120
findtime = 15m

[miab-postfix465]
enabled = true
@ -52,7 +55,7 @@ port = 587
filter = miab-postfix-submission
logpath = /var/log/mail.log
maxretry = 20
-findtime = 30
findtime = 2m

[miab-roundcube]
enabled = true
@ -60,11 +63,13 @@ port = http,https
filter = miab-roundcube
logpath = /var/log/roundcubemail/errors.log
maxretry = 20
-findtime = 30
findtime = 15m

[recidive]
enabled = true
maxretry = 10
bantime = 2w
findtime = 7d
action = iptables-allports[name=recidive]
# In the recidive section of jail.conf the action contains:
#
@ -79,8 +84,17 @@ action = iptables-allports[name=recidive]
[postfix-sasl]
enabled = true
findtime = 7d

[postfix]
enabled = true

# postfix rbl also found by postfix jail, but postfix-rbl is more aggressive (maxretry = 1)
[postfix-rbl]
enabled = true

[sshd]
enabled = true
-maxretry = 7
maxretry = 4
bantime = 3600
mode = aggressive

conf/geoiplookup.conf (new file, +3)

@ -0,0 +1,3 @@
# UPPERCASE space-separated country codes to ACCEPT
# See e.g. https://dev.maxmind.com/geoip/legacy/codes/iso3166/ for allowable codes
ALLOW_COUNTRIES=""

conf/logrotate/mailinabox (new file, +12)

@ -0,0 +1,12 @@
/var/log/roundcubemail/errors.log
/var/log/roundcubemail/sendmail.log
/var/log/nextcloud.log
{
rotate 4
weekly
missingok
notifempty
compress
delaycompress
sharedscripts
}


@ -49,26 +49,6 @@
client_max_body_size 128M;
}

-# Z-Push (Microsoft Exchange ActiveSync)
-location /Microsoft-Server-ActiveSync {
-include /etc/nginx/fastcgi_params;
-fastcgi_param SCRIPT_FILENAME /usr/local/lib/z-push/index.php;
-fastcgi_param PHP_VALUE "include_path=.:/usr/share/php:/usr/share/pear:/usr/share/awl/inc";
-fastcgi_read_timeout 630;
-fastcgi_pass php-fpm;
-# Outgoing mail also goes through this endpoint, so increase the maximum
-# file upload limit to match the corresponding Postfix limit.
-client_max_body_size 128M;
-}
-location ~* ^/autodiscover/autodiscover.xml$ {
-include fastcgi_params;
-fastcgi_param SCRIPT_FILENAME /usr/local/lib/z-push/autodiscover/autodiscover.php;
-fastcgi_param PHP_VALUE "include_path=.:/usr/share/php:/usr/share/pear:/usr/share/awl/inc";
-fastcgi_pass php-fpm;
-}

# ADDITIONAL DIRECTIVES HERE
# Disable viewing dotfiles (.htaccess, .svn, .git, etc.)


@ -7,11 +7,37 @@
rewrite ^/admin$ /admin/;
rewrite ^/admin/munin$ /admin/munin/ redirect;
location /admin/ {
# By default not blocked
set $block_test 1;
# block the continents
if ($allowed_continent = no) {
set $block_test 0;
}
# in addition, block the countries
if ($denied_country = no) {
set $block_test 0;
}
# allow some countries
if ($allowed_country = yes) {
set $block_test 1;
}
# if 0, then blocked
if ($block_test = 0) {
access_log /var/log/nginx/geoipblock.log geoipblock;
return 444;
}
proxy_pass http://127.0.0.1:10222/;
proxy_set_header X-Forwarded-For $remote_addr;
add_header X-Frame-Options "DENY";
add_header X-Content-Type-Options nosniff;
add_header Content-Security-Policy "frame-ancestors 'none';";
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Referrer-Policy "strict-origin";
}
# Nextcloud configuration. # Nextcloud configuration.


@ -2,7 +2,7 @@
# Note that these settings are repeated in the SMTP and IMAP configuration.
# ssl_protocols has moved to nginx.conf in bionic, check there for enabled protocols.
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
-ssl_dhparam STORAGE_ROOT/ssl/dh2048.pem;
ssl_dhparam STORAGE_ROOT/ssl/dh4096.pem;

# as recommended by http://nginx.org/en/docs/http/configuring_https_servers.html
ssl_session_cache shared:SSL:50m;


@ -7,6 +7,5 @@
## your own --- please do not ask for help from us.

upstream php-fpm {
-server unix:/var/run/php/php8.0-fpm.sock;
server unix:/var/run/php/php{{phpver}}-fpm.sock;
}


@ -0,0 +1,28 @@
# Expose this directory as static files.
root $ROOT;
index index.html index.htm;
location = /robots.txt {
log_not_found off;
access_log off;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
# ADDITIONAL DIRECTIVES HERE
# Disable viewing dotfiles (.htaccess, .svn, .git, etc.)
# This block is placed at the end. Nginx's precedence rules means this block
# takes precedence over all non-regex matches and only regex matches that
# come after it (i.e. none of those, since this is the last one.) That means
# we're blocking dotfiles in the static hosted sites but not the FastCGI-
# handled locations for Nextcloud (which serves user-uploaded files that might
# have this pattern, see #414) or some of the other services.
location ~ /\.(ht|svn|git|hg|bzr) {
log_not_found off;
access_log off;
deny all;
}


@ -0,0 +1,22 @@
# GeoIP databases
geoip_country /usr/share/GeoIP/GeoIP.dat;
geoip_city /usr/share/GeoIP/GeoIPCity.dat;
# map the list of denied countries
# see e.g. https://dev.maxmind.com/geoip/legacy/codes/iso3166/ for allowable
# countries
map $geoip_country_code $denied_country {
default yes;
}
# map the list of allowed countries
map $geoip_country_code $allowed_country {
default no;
}
# map the continents to allow
map $geoip_city_continent_code $allowed_continent {
default yes;
}
log_format geoipblock '[$time_local] - Geoip blocked $remote_addr';
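A sketch of how these maps might be filled in, matching the tests in the /admin/ location block shown earlier (all codes below are placeholders, not shipped defaults): a country mapped to "no" in $denied_country is blocked, a continent mapped to "no" in $allowed_continent is blocked, and a country mapped to "yes" in $allowed_country stays allowed even when its continent is blocked.

map $geoip_country_code $denied_country {
    default yes;
    AA no;     # placeholder country code: requests geolocated here are blocked
}
map $geoip_country_code $allowed_country {
    default no;
    BB yes;    # placeholder country code: allowed even if its continent is blocked
}
map $geoip_city_continent_code $allowed_continent {
    default yes;
    CC no;     # placeholder: replace with a continent code (AF, AN, AS, EU, NA, OC, SA) to block it
}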


@ -0,0 +1,4 @@
:syslogtag, startswith, "Nextcloud" -/var/log/nextcloud.log
# Stop logging
& stop

conf/unbound.conf (new file, +68)

@ -0,0 +1,68 @@
server:
# the working directory.
directory: "/etc/unbound"
# run as the unbound user
username: unbound
verbosity: 0 # increase to get more logging.
# logfile: "/var/log/unbound.log" # won't work due to apparmor
# use-syslog: no
# By default listen only to localhost
#interface: ::1
#interface: 127.0.0.1
port: 53
# Only allow localhost to use this Unbound instance.
access-control: 127.0.0.1/8 allow
access-control: ::1/128 allow
# Private IP ranges, which shall never be returned or forwarded as public DNS response.
private-address: 10.0.0.0/8
private-address: 172.16.0.0/12
private-address: 192.168.0.0/16
private-address: 169.254.0.0/16
private-address: fd00::/8
private-address: fe80::/10
# Functionality
do-ip4: yes
do-ip6: yes
do-udp: yes
do-tcp: yes
# Performance
num-threads: 2
cache-min-ttl: 300
cache-max-ttl: 86400
serve-expired: yes
neg-cache-size: 4M
msg-cache-size: 50m
rrset-cache-size: 100m
so-reuseport: yes
so-rcvbuf: 4m
so-sndbuf: 4m
# Privacy / hardening
# hide server info from clients
hide-identity: yes
hide-version: yes
harden-glue: yes
harden-dnssec-stripped: yes
harden-algo-downgrade: yes
harden-large-queries: yes
harden-short-bufsize: yes
rrset-roundrobin: yes
minimal-responses: yes
identity: "Server"
# Include possible white/blacklists
include: /etc/unbound/lists.d/*.conf
remote-control:
control-enable: yes
control-port: 953
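As an illustration of the include directive above, a drop-in file such as /etc/unbound/lists.d/blocklist.conf (the file name and domains are hypothetical) would carry server-clause directives, for example local-zone entries that refuse resolution of unwanted names:

# /etc/unbound/lists.d/blocklist.conf (hypothetical example)
# Included inside the server: clause, so only server-level directives belong here.
local-zone: "ads.example.org." refuse
local-zone: "tracker.example.net." refuse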


@ -1,21 +1,24 @@
#!/usr/local/lib/mailinabox/env/bin/python
-# This script performs a backup of all user data:
# This script performs a backup of all user data stored under STORAGE_ROOT:
# 1) System services are stopped.
-# 2) STORAGE_ROOT/backup/before-backup is executed if it exists.
# 2) BACKUP_ROOT/backup/before-backup is executed if it exists.
# 3) An incremental encrypted backup is made using duplicity.
# 4) The stopped services are restarted.
-# 5) STORAGE_ROOT/backup/after-backup is executed if it exists.
# 5) BACKUP_ROOT/backup/after-backup is executed if it exists.
#
# By default BACKUP_ROOT is equal to STORAGE_ROOT. If the variable BACKUP_ROOT is defined in /etc/mailinabox.conf and
# the referenced folder exists, this new target is used instead to store the backups.
import os, os.path, shutil, glob, re, datetime, sys
import dateutil.parser, dateutil.relativedelta, dateutil.tz
import rtyaml
from exclusiveprocess import Lock
-from utils import load_environment, shell, wait_for_service
from utils import load_environment, shell, wait_for_service, get_php_version
def backup_status(env):
-# If backups are dissbled, return no status.
# If backups are disabled, return no status.
config = get_backup_config(env)
if config["target"] == "off":
return { }
@ -25,7 +28,7 @@ def backup_status(env):
backups = { }
now = datetime.datetime.now(dateutil.tz.tzlocal())
-backup_root = os.path.join(env["STORAGE_ROOT"], 'backup')
backup_root = get_backup_root(env)
backup_cache_dir = os.path.join(backup_root, 'cache')
def reldate(date, ref, clip):
@ -183,7 +186,7 @@ def get_passphrase(env):
# that line is long enough to be a reasonable passphrase. It
# only needs to be 43 base64-characters to match AES256's key
# length of 32 bytes.
-backup_root = os.path.join(env["STORAGE_ROOT"], 'backup')
backup_root = get_backup_root(env)
with open(os.path.join(backup_root, 'secret_key.txt')) as f:
passphrase = f.readline().strip()
if len(passphrase) < 43: raise Exception("secret_key.txt's first line is too short!")
@ -213,9 +216,21 @@ def get_duplicity_additional_args(env):
config = get_backup_config(env)
if get_target_type(config) == 'rsync':
# Extract a port number for the ssh transport. Duplicity accepts the
# optional port number syntax in the target, but it doesn't appear to act
# on it, so we set the ssh port explicitly via the duplicity options.
from urllib.parse import urlsplit
try:
port = urlsplit(config["target"]).port
except ValueError:
port = 22
if port is None:
port = 22
return [
-"--ssh-options= -i /root/.ssh/id_rsa_miab",
f"--ssh-options= -i /root/.ssh/id_rsa_miab -p {port}",
-"--rsync-options= -e \"/usr/bin/ssh -oStrictHostKeyChecking=no -oBatchMode=yes -p 22 -i /root/.ssh/id_rsa_miab\"",
f"--rsync-options= -e \"/usr/bin/ssh -oStrictHostKeyChecking=no -oBatchMode=yes -p {port} -i /root/.ssh/id_rsa_miab\"",
]
elif get_target_type(config) == 's3':
# See note about hostname in get_duplicity_target_url.
@ -243,13 +258,14 @@ def get_target_type(config):
def perform_backup(full_backup):
env = load_environment()
php_fpm = f"php{get_php_version()}-fpm"
# Create an global exclusive lock so that the backup script
-# cannot be run more than one.
# cannot be run more than once.
Lock(die=True).forever()
config = get_backup_config(env)
-backup_root = os.path.join(env["STORAGE_ROOT"], 'backup')
backup_root = get_backup_root(env)
backup_cache_dir = os.path.join(backup_root, 'cache')
backup_dir = os.path.join(backup_root, 'encrypted')
@ -278,7 +294,7 @@ def perform_backup(full_backup):
if quit:
sys.exit(code)
-service_command("php8.0-fpm", "stop", quit=True)
service_command(php_fpm, "stop", quit=True)
service_command("postfix", "stop", quit=True)
service_command("dovecot", "stop", quit=True)
service_command("postgrey", "stop", quit=True)
@ -289,7 +305,7 @@ def perform_backup(full_backup):
pre_script = os.path.join(backup_root, 'before-backup')
if os.path.exists(pre_script):
shell('check_call',
-['su', env['STORAGE_USER'], '-c', pre_script, config["target"]],
['su', env['STORAGE_USER'], '--login', '-c', pre_script, config["target"]],
env=env)
# Run a backup of STORAGE_ROOT (but excluding the backups themselves!).
@ -314,7 +330,7 @@ def perform_backup(full_backup):
service_command("postgrey", "start", quit=False) service_command("postgrey", "start", quit=False)
service_command("dovecot", "start", quit=False) service_command("dovecot", "start", quit=False)
service_command("postfix", "start", quit=False) service_command("postfix", "start", quit=False)
service_command("php8.0-fpm", "start", quit=False) service_command(php_fpm, "start", quit=False)
# Remove old backups. This deletes all backup data no longer needed # Remove old backups. This deletes all backup data no longer needed
# from more than 3 days ago. # from more than 3 days ago.
@ -344,30 +360,30 @@ def perform_backup(full_backup):
] + get_duplicity_additional_args(env),
get_duplicity_env_vars(env))
-# Change ownership of backups to the user-data user, so that the after-bcakup
# Change ownership of backups to the user-data user, so that the after-backup
# script can access them.
if get_target_type(config) == 'file':
shell('check_call', ["/bin/chown", "-R", env["STORAGE_USER"], backup_dir])
-# Execute a post-backup script that does the copying to a remote server.
-# Run as the STORAGE_USER user, not as root. Pass our settings in
-# environment variables so the script has access to STORAGE_ROOT.
-post_script = os.path.join(backup_root, 'after-backup')
-if os.path.exists(post_script):
-shell('check_call',
-['su', env['STORAGE_USER'], '-c', post_script, config["target"]],
-env=env)
# Our nightly cron job executes system status checks immediately after this
# backup. Since it checks that dovecot and postfix are running, block for a
# bit (maximum of 10 seconds each) to give each a chance to finish restarting
# before the status checks might catch them down. See #381.
wait_for_service(25, True, env, 10)
wait_for_service(993, True, env, 10)
# Execute a post-backup script that does the copying to a remote server.
# Run as the STORAGE_USER user, not as root. Pass our settings in
# environment variables so the script has access to STORAGE_ROOT.
post_script = os.path.join(backup_root, 'after-backup')
if os.path.exists(post_script):
shell('check_call',
['su', env['STORAGE_USER'], '--login', '-c', post_script, config["target"]],
env=env, trap=True)
def run_duplicity_verification():
env = load_environment()
-backup_root = os.path.join(env["STORAGE_ROOT"], 'backup')
backup_root = get_backup_root(env)
config = get_backup_config(env)
backup_cache_dir = os.path.join(backup_root, 'cache')
@ -385,7 +401,8 @@ def run_duplicity_verification():
def run_duplicity_restore(args):
env = load_environment()
config = get_backup_config(env)
-backup_cache_dir = os.path.join(env["STORAGE_ROOT"], 'backup', 'cache')
backup_root = get_backup_root(env)
backup_cache_dir = os.path.join(backup_root, 'cache')
shell('check_call', [
"/usr/bin/duplicity",
"restore",
@ -408,6 +425,17 @@ def list_target_files(config):
rsync_fn_size_re = re.compile(r'.* ([^ ]*) [^ ]* [^ ]* (.*)')
rsync_target = '{host}:{path}'
# Strip off any trailing port specifier because it's not valid in rsync's
# DEST syntax. Explicitly set the port number for the ssh transport.
user_host, *_ = target.netloc.rsplit(':', 1)
try:
port = target.port
except ValueError:
port = 22
if port is None:
port = 22
target_path = target.path
if not target_path.endswith('/'):
target_path = target_path + '/'
@ -416,11 +444,11 @@ def list_target_files(config):
rsync_command = [ 'rsync',
'-e',
-'/usr/bin/ssh -i /root/.ssh/id_rsa_miab -oStrictHostKeyChecking=no -oBatchMode=yes',
f'/usr/bin/ssh -i /root/.ssh/id_rsa_miab -oStrictHostKeyChecking=no -oBatchMode=yes -p {port}',
'--list-only',
'-r',
rsync_target.format(
-host=target.netloc,
host=user_host,
path=target_path)
]
@ -454,7 +482,7 @@ def list_target_files(config):
# separate bucket from path in target
bucket = target.path[1:].split('/')[0]
path = '/'.join(target.path[1:].split('/')[1:]) + '/'
# If no prefix is specified, set the path to '', otherwise boto won't list the files
if path == '/':
path = ''
@ -521,7 +549,7 @@ def backup_set_custom(env, target, target_user, target_pass, min_age):
return "OK" return "OK"
def get_backup_config(env, for_save=False, for_ui=False): def get_backup_config(env, for_save=False, for_ui=False):
backup_root = os.path.join(env["STORAGE_ROOT"], 'backup') backup_root = get_backup_root(env)
# Defaults. # Defaults.
config = { config = {
@ -531,7 +559,8 @@ def get_backup_config(env, for_save=False, for_ui=False):
# Merge in anything written to custom.yaml. # Merge in anything written to custom.yaml.
try: try:
custom_config = rtyaml.load(open(os.path.join(backup_root, 'custom.yaml'))) with open(os.path.join(backup_root, 'custom.yaml'), 'r') as f:
custom_config = rtyaml.load(f)
if not isinstance(custom_config, dict): raise ValueError() # caught below if not isinstance(custom_config, dict): raise ValueError() # caught below
config.update(custom_config) config.update(custom_config)
except: except:
@ -556,15 +585,33 @@ def get_backup_config(env, for_save=False, for_ui=False):
config["target"] = "file://" + config["file_target_directory"] config["target"] = "file://" + config["file_target_directory"]
ssh_pub_key = os.path.join('/root', '.ssh', 'id_rsa_miab.pub') ssh_pub_key = os.path.join('/root', '.ssh', 'id_rsa_miab.pub')
if os.path.exists(ssh_pub_key): if os.path.exists(ssh_pub_key):
config["ssh_pub_key"] = open(ssh_pub_key, 'r').read() with open(ssh_pub_key, 'r') as f:
config["ssh_pub_key"] = f.read()
return config return config
def write_backup_config(env, newconfig): def write_backup_config(env, newconfig):
backup_root = os.path.join(env["STORAGE_ROOT"], 'backup') backup_root = get_backup_root(env)
with open(os.path.join(backup_root, 'custom.yaml'), "w") as f: with open(os.path.join(backup_root, 'custom.yaml'), "w") as f:
f.write(rtyaml.dump(newconfig)) f.write(rtyaml.dump(newconfig))
def get_backup_root(env):
# Define environment variable used to store backup path
backup_root_env = "BACKUP_ROOT"
# Read STORAGE_ROOT
backup_root = env["STORAGE_ROOT"]
# If BACKUP_ROOT exists, overwrite backup_root variable
if backup_root_env in env:
tmp = env[backup_root_env]
if tmp and os.path.isdir(tmp):
backup_root = tmp
backup_root = os.path.join(backup_root, 'backup')
return backup_root
if __name__ == "__main__": if __name__ == "__main__":
import sys import sys
if sys.argv[-1] == "--verify": if sys.argv[-1] == "--verify":


@ -47,7 +47,8 @@ def read_password():
return first
def setup_key_auth(mgmt_uri):
-key = open('/var/lib/mailinabox/api.key').read().strip()
with open('/var/lib/mailinabox/api.key', 'r') as f:
key = f.read().strip()
auth_handler = urllib.request.HTTPBasicAuthHandler()
auth_handler.add_password(


@ -12,6 +12,7 @@
import os, os.path, re, json, time
import multiprocessing.pool, subprocess
import logging
from functools import wraps
@ -273,6 +274,7 @@ def dns_update():
try:
return do_dns_update(env, force=request.form.get('force', '') == '1')
except Exception as e:
logging.exception('dns update exc')
return (str(e), 500)
@app.route('/dns/secondary-nameserver')
@ -764,14 +766,21 @@ def log_failed_login(request):
# APP
if __name__ == '__main__':
logging_level = logging.DEBUG
if "DEBUG" in os.environ:
# Turn on Flask debugging.
app.debug = True
logging_level = logging.DEBUG
if not app.debug:
app.logger.addHandler(utils.create_syslog_handler())
#app.logger.info('API key: ' + auth_service.key)
logging.basicConfig(level=logging_level, format='MiaB %(levelname)s:%(module)s.%(funcName)s %(message)s')
logging.info('Logging level set to %s', logging.getLevelName(logging_level))
# Start the application server. Listens on 127.0.0.1 (IPv4 only).
app.run(port=10222)


@ -9,10 +9,14 @@ export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_TYPE=en_US.UTF-8
source /etc/mailinabox.conf
# On Mondays, i.e. once a week, send the administrator a report of total emails
# sent and received so the admin might notice server abuse.
if [ `date "+%u"` -eq 1 ]; then
-management/mail_log.py -t week | management/email_administrator.py "Mail-in-a-Box Usage Report"
management/mail_log.py -t week -r -s -l -g -b | management/email_administrator.py "Mail-in-a-Box Usage Report"
/usr/sbin/pflogsumm -u 5 -h 5 --problems_first /var/log/mail.log.1 | management/email_administrator.py "Postfix log analysis summary"
fi
# Take a backup.
@ -23,3 +27,6 @@ management/ssl_certificates.py -q 2>&1 | management/email_administrator.py "TLS
# Run status checks and email the administrator if anything changed.
management/status_checks.py --show-changes 2>&1 | management/email_administrator.py "Status Checks Change Notice"
# Check blacklists
tools/check-dnsbl.py $PUBLIC_IP $PUBLIC_IPV6 2>&1 | management/email_administrator.py "Blacklist Check Result"


@@ -8,6 +8,7 @@ import sys, os, os.path, urllib.parse, datetime, re, hashlib, base64
import ipaddress
import rtyaml
import dns.resolver
+import logging

from utils import shell, load_env_vars_from_file, safe_domain_name, sort_domains
from ssl_certificates import get_ssl_certificates, check_certificate

@@ -24,9 +25,14 @@ def get_dns_domains(env):
    # lead to infinite recursion here) and ensure PRIMARY_HOSTNAME is in the list.
    from mailconfig import get_mail_domains
    from web_update import get_web_domains
+    from wwwconfig import get_www_domains
    domains = set()
    domains |= set(get_mail_domains(env))
    domains |= set(get_web_domains(env, include_www_redirects=False))
+    # www_domains are hosted here, but DNS is pointed to our box from somewhere else.
+    # DNS is thus not hosted by us for these domains.
+    domains -= set(get_www_domains(set()))
    domains.add(env['PRIMARY_HOSTNAME'])
    return domains

@@ -109,21 +115,22 @@ def do_dns_update(env, force=False):
    except:
        shell('check_call', ["/usr/sbin/service", "nsd", "restart"])

-    # Write the OpenDKIM configuration tables for all of the mail domains.
+    # Write the DKIM configuration tables for all of the mail domains.
    from mailconfig import get_mail_domains
-    if write_opendkim_tables(get_mail_domains(env), env):
-        # Settings changed. Kick opendkim.
-        shell('check_call', ["/usr/sbin/service", "opendkim", "restart"])
+    if write_dkim_tables(get_mail_domains(env), env):
+        # Settings changed. Kick dkimpy.
+        shell('check_call', ["/usr/sbin/service", "dkimpy-milter", "restart"])
        if len(updated_domains) == 0:
            # If this is the only thing that changed?
-            updated_domains.append("OpenDKIM configuration")
+            updated_domains.append("DKIM configuration")

-    # Clear bind9's DNS cache so our own DNS resolver is up to date.
+    # Clear unbound's DNS cache so our own DNS resolver is up to date.
    # (ignore errors with trap=True)
-    shell('check_call', ["/usr/sbin/rndc", "flush"], trap=True)
+    shell('check_call', ["/usr/sbin/unbound-control", "flush_zone", ".", "-q"], trap=True)

    if len(updated_domains) == 0:
-        # if nothing was updated (except maybe OpenDKIM's files), don't show any output
+        # if nothing was updated (except maybe DKIM's files), don't show any output
        return ""
    else:
        return "updated DNS: " + ",".join(updated_domains) + "\n"

@@ -187,17 +194,29 @@ def build_zone(domain, domain_properties, additional_records, env, is_zone=True)
    # 'False' in the tuple indicates these records would not be used if the zone
    # is managed outside of the box.
    if is_zone:
-        # Obligatory NS record to ns1.PRIMARY_HOSTNAME.
-        records.append((None, "NS", "ns1.%s." % env["PRIMARY_HOSTNAME"], False))
-
-        # NS record to ns2.PRIMARY_HOSTNAME or whatever the user overrides.
+        # Define ns2.PRIMARY_HOSTNAME or whatever the user overrides.
        # User may provide one or more additional nameservers
-        secondary_ns_list = get_secondary_dns(additional_records, mode="NS") \
-            or ["ns2." + env["PRIMARY_HOSTNAME"]]
+        secondary_ns_list = get_secondary_dns(additional_records, mode="NS")
+
+        # Need at least two nameservers in the secondary dns list
+        useHiddenMaster = False
+        if os.path.exists("/etc/usehiddenmasterdns") and len(secondary_ns_list) > 1:
+            with open("/etc/usehiddenmasterdns") as f:
+                for line in f:
+                    if line.strip() == domain or line.strip() == "usehiddenmasterdns":
+                        useHiddenMaster = True
+                        break
+
+        if not useHiddenMaster:
+            # Obligatory definition of ns1.PRIMARY_HOSTNAME.
+            records.append((None, "NS", "ns1.%s." % env["PRIMARY_HOSTNAME"], False))
+
+        if len(secondary_ns_list) == 0:
+            secondary_ns_list = ["ns2." + env["PRIMARY_HOSTNAME"]]
+
        for secondary_ns in secondary_ns_list:
            records.append((None, "NS", secondary_ns+'.', False))

    # In PRIMARY_HOSTNAME...
    if domain == env["PRIMARY_HOSTNAME"]:
        # Set the A/AAAA records. Do this early for the PRIMARY_HOSTNAME so that the user cannot override them
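To illustrate the trigger file consulted above: a hedged restatement (the domain name is hypothetical) of how /etc/usehiddenmasterdns is interpreted. Each line names either a single zone or the literal word usehiddenmasterdns to enable hidden master everywhere; the patch additionally requires at least two entries in the secondary NS list before the file is consulted.

# Hypothetical file contents of /etc/usehiddenmasterdns:
#   example.com
#   usehiddenmasterdns    (this line alone would enable it for every zone)
def wants_hidden_master(domain, path="/etc/usehiddenmasterdns"):
    try:
        with open(path) as f:
            return any(line.strip() in (domain, "usehiddenmasterdns") for line in f)
    except FileNotFoundError:
        return False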
@@ -295,10 +314,18 @@ def build_zone(domain, domain_properties, additional_records, env, is_zone=True)
    if not has_rec(None, "TXT", prefix="v=spf1 "):
        records.append((None, "TXT", 'v=spf1 mx -all', "Recommended. Specifies that only the box is permitted to send @%s mail." % domain))

-    # Append the DKIM TXT record to the zone as generated by OpenDKIM.
+    # Append the DKIM TXT record to the zone as generated by DKIMpy.
    # Skip if the user has set a DKIM record already.
-    opendkim_record_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/mail.txt')
-    with open(opendkim_record_file) as orf:
+    dkim_record_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/box-rsa.dns')
+    with open(dkim_record_file) as orf:
+        m = re.match(r'(\S+)\s+IN\s+TXT\s+\( ((?:"[^"]+"\s+)+)\)', orf.read(), re.S)
+        val = "".join(re.findall(r'"([^"]+)"', m.group(2)))
+        if not has_rec(m.group(1), "TXT", prefix="v=DKIM1; "):
+            records.append((m.group(1), "TXT", val, "Recommended. Provides a way for recipients to verify that this machine sent @%s mail." % domain))
+
+    # Also add a ed25519 DKIM record
+    dkim_record_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/box-ed25519.dns')
+    with open(dkim_record_file) as orf:
        m = re.match(r'(\S+)\s+IN\s+TXT\s+\( ((?:"[^"]+"\s+)+)\)', orf.read(), re.S)
        val = "".join(re.findall(r'"([^"]+)"', m.group(2)))
        if not has_rec(m.group(1), "TXT", prefix="v=DKIM1; "):

@@ -494,26 +521,75 @@ def write_nsd_zone(domain, zonefile, records, env, force):
    #
    # For the refresh through TTL fields, a good reference is:
    # https://www.ripe.net/publications/docs/ripe-203
+    #
+    # Time To Refresh  How long in seconds a nameserver should wait prior to checking for a Serial Number
+    # increase within the primary zone file. An increased Serial Number means a transfer is needed to sync
+    # your records. Only applies to zones using secondary DNS.
+    # Time To Retry  How long in seconds a nameserver should wait prior to retrying to update a zone after
+    # a failed attempt. Only applies to zones using secondary DNS.
+    # Time To Expire  How long in seconds a nameserver should wait prior to considering data from a secondary
+    # zone invalid and stop answering queries for that zone. Only applies to zones using secondary DNS.
+    # Minimum TTL  How long in seconds that a nameserver or resolver should cache a negative response.
+    #
+    # To make use of hidden master initialize the DNS to be used as secondary DNS. Then change the following
+    # in the zone file:
+    # - Name the secondary DNS server as primary DNS in the SOA record
+    # - Do not add NS records for the Mail-in-a-Box server
    #
    # A hash of the available DNSSEC keys are added in a comment so that when
    # the keys change we force a re-generation of the zone which triggers
    # re-signing it.

    zone = """
$ORIGIN {domain}.
-$TTL 86400 ; default time to live
+$TTL {defttl} ; default time to live

-@ IN SOA ns1.{primary_domain}. hostmaster.{primary_domain}. (
+@ IN SOA {primary_dns}. hostmaster.{primary_domain}. (
           __SERIAL__ ; serial number
-           7200 ; Refresh (secondary nameserver update interval)
-           3600 ; Retry (when refresh fails, how often to try again, should be lower than the refresh)
-           1209600 ; Expire (when refresh fails, how long secondary nameserver will keep records around anyway)
-           86400 ; Negative TTL (how long negative responses are cached)
+           {refresh} ; Refresh (secondary nameserver update interval)
+           {retry} ; Retry (when refresh fails, how often to try again)
+           {expire} ; Expire (when refresh fails, how long secondary nameserver will keep records around anyway)
+           {negttl} ; Negative TTL (how long negative responses are cached)
        )
"""

+    # Default ttl values, following recomendations from zonemaster.iis.se
+    p_defttl = "1d"
+    p_refresh = "4h"
+    p_retry = "1h"
+    p_expire = "14d"
+    p_negttl = "12h"
+
+    # Shorten dns ttl if file exists. Use before moving domains, changing secondary dns servers etc
+    if os.path.exists("/etc/forceshortdnsttl"):
+        with open("/etc/forceshortdnsttl") as f:
+            for line in f:
+                if line.strip() == domain or line.strip() == "forceshortdnsttl":
+                    # Override the ttl values
+                    p_defttl = "5m"
+                    p_refresh = "30m"
+                    p_retry = "5m"
+                    p_expire = "1d"
+                    p_negttl = "5m"
+                    break
+
+    primary_dns = "ns1." + env["PRIMARY_HOSTNAME"]
+    # Obtain the secondary nameserver list
+    additional_records = list(get_custom_dns_config(env))
+    secondary_ns_list = get_secondary_dns(additional_records, mode="NS")
+    # Using hidden master for a domain if it is configured
+    if os.path.exists("/etc/usehiddenmasterdns") and len(secondary_ns_list) > 1:
+        with open("/etc/usehiddenmasterdns") as f:
+            for line in f:
+                if line.strip() == domain or line.strip() == "usehiddenmasterdns":
+                    primary_dns = secondary_ns_list[0]
+                    break
+
    # Replace replacement strings.
-    zone = zone.format(domain=domain, primary_domain=env["PRIMARY_HOSTNAME"])
+    zone = zone.format(domain=domain, primary_dns=primary_dns, primary_domain=env["PRIMARY_HOSTNAME"], defttl=p_defttl,
+        refresh=p_refresh, retry=p_retry, expire=p_expire, negttl=p_negttl)

    # Add records.
    for subdomain, querytype, value, explanation in records:
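As a rough illustration (hypothetical names, not part of the patch, assuming the zone template above is in scope), this is what the .format() call yields for the zone header with the default TTL values:

# Minimal sketch: rendering the zone header for example.com with PRIMARY_HOSTNAME box.example.com
# and no hidden-master or short-TTL override in effect.
header = zone.format(domain="example.com", primary_dns="ns1.box.example.com",
    primary_domain="box.example.com", defttl="1d",
    refresh="4h", retry="1h", expire="14d", negttl="12h")
# header now starts with "$ORIGIN example.com." and an SOA whose refresh/retry/expire/negative-TTL
# fields read 4h, 1h, 14d and 12h; the __SERIAL__ placeholder is filled in separately.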
@@ -760,14 +836,15 @@ def sign_zone(domain, zonefile, env):
########################################################################

-def write_opendkim_tables(domains, env):
-    # Append a record to OpenDKIM's KeyTable and SigningTable for each domain
+def write_dkim_tables(domains, env):
+    # Append a record to DKIMpy's KeyTable and SigningTable for each domain
    # that we send mail from (zones and all subdomains).

-    opendkim_key_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/mail.private')
+    dkim_rsa_key_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/box-rsa.key')
+    dkim_ed_key_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/box-ed25519.key')

-    if not os.path.exists(opendkim_key_file):
-        # Looks like OpenDKIM is not installed.
+    if not os.path.exists(dkim_rsa_key_file) or not os.path.exists(dkim_ed_key_file):
+        # Looks like DKIMpy is not installed.
        return False

    config = {

@@ -789,7 +866,12 @@ def write_opendkim_tables(domains, env):
        # signing domain must match the sender's From: domain.
        "KeyTable":
            "".join(
-                "{domain} {domain}:mail:{key_file}\n".format(domain=domain, key_file=opendkim_key_file)
+                "{domain} {domain}:box-rsa:{key_file}\n".format(domain=domain, key_file=dkim_rsa_key_file)
+                for domain in domains
+            ),
+        "KeyTableEd25519":
+            "".join(
+                "{domain} {domain}:box-ed25519:{key_file}\n".format(domain=domain, key_file=dkim_ed_key_file)
                for domain in domains
            ),
    }
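For a sense of what these two tables end up containing, a hedged sketch (example.com and the STORAGE_ROOT value are hypothetical):

# One line per domain in each table, e.g. with STORAGE_ROOT=/home/user-data:
#   KeyTable:         example.com example.com:box-rsa:/home/user-data/mail/dkim/box-rsa.key
#   KeyTableEd25519:  example.com example.com:box-ed25519:/home/user-data/mail/dkim/box-ed25519.key
print("{domain} {domain}:box-rsa:{key_file}".format(
    domain="example.com", key_file="/home/user-data/mail/dkim/box-rsa.key"))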
@@ -797,25 +879,26 @@ def write_opendkim_tables(domains, env):
    did_update = False
    for filename, content in config.items():
        # Don't write the file if it doesn't need an update.
-        if os.path.exists("/etc/opendkim/" + filename):
-            with open("/etc/opendkim/" + filename) as f:
+        if os.path.exists("/etc/dkim/" + filename):
+            with open("/etc/dkim/" + filename) as f:
                if f.read() == content:
                    continue

        # The contents needs to change.
-        with open("/etc/opendkim/" + filename, "w") as f:
+        with open("/etc/dkim/" + filename, "w") as f:
            f.write(content)
        did_update = True

    # Return whether the files changed. If they didn't change, there's
-    # no need to kick the opendkim process.
+    # no need to kick the dkimpy process.
    return did_update

########################################################################

def get_custom_dns_config(env, only_real_records=False):
    try:
-        custom_dns = rtyaml.load(open(os.path.join(env['STORAGE_ROOT'], 'dns/custom.yaml')))
+        with open(os.path.join(env['STORAGE_ROOT'], 'dns/custom.yaml'), 'r') as f:
+            custom_dns = rtyaml.load(f)
        if not isinstance(custom_dns, dict): raise ValueError() # caught below
    except:
        return [ ]

@@ -992,6 +1075,7 @@ def set_custom_dns_record(qname, rtype, value, action, env):
def get_secondary_dns(custom_dns, mode=None):
    resolver = dns.resolver.get_default_resolver()
    resolver.timeout = 10
+    resolver.lifetime = 10

    values = []
    for qname, rtype, value in custom_dns:

@@ -1009,10 +1093,17 @@ def get_secondary_dns(custom_dns, mode=None):
        # doesn't.
        if not hostname.startswith("xfr:"):
            if mode == "xfr":
-                response = dns.resolver.resolve(hostname+'.', "A", raise_on_no_answer=False)
-                values.extend(map(str, response))
-                response = dns.resolver.resolve(hostname+'.', "AAAA", raise_on_no_answer=False)
-                values.extend(map(str, response))
+                try:
+                    response = resolver.resolve(hostname+'.', "A", raise_on_no_answer=False)
+                    values.extend(map(str, response))
+                except dns.exception.DNSException:
+                    logging.debug("Secondary dns A lookup exception %s", hostname)
+                try:
+                    response = resolver.resolve(hostname+'.', "AAAA", raise_on_no_answer=False)
+                    values.extend(map(str, response))
+                except dns.exception.DNSException:
+                    logging.debug("Secondary dns AAAA lookup exception %s", hostname)
                continue
            values.append(hostname)

@@ -1030,16 +1121,33 @@ def set_secondary_dns(hostnames, env):
    # Validate that all hostnames are valid and that all zone-xfer IP addresses are valid.
    resolver = dns.resolver.get_default_resolver()
    resolver.timeout = 5
+    resolver.lifetime = 5
    for item in hostnames:
        if not item.startswith("xfr:"):
            # Resolve hostname.
-            try:
-                response = resolver.resolve(item, "A")
-            except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
-                try:
-                    response = resolver.resolve(item, "AAAA")
-                except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
-                    raise ValueError("Could not resolve the IP address of %s." % item)
+            tries = 2
+            while tries > 0:
+                tries = tries - 1
+                try:
+                    response = resolver.resolve(item, "A")
+                    tries = 0
+                except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
+                    logging.debug('Error on resolving ipv4 address, trying ipv6')
+                    try:
+                        response = resolver.resolve(item, "AAAA")
+                        tries = 0
+                    except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
+                        raise ValueError("Could not resolve the IP address of %s." % item)
+                    except (dns.resolver.Timeout):
+                        logging.debug('Timeout on resolving ipv6 address')
+                        if tries < 1:
+                            raise ValueError("Could not resolve the IP address of %s due to timeout." % item)
+                except (dns.resolver.Timeout):
+                    logging.debug('Timeout on resolving ipv4 address')
+                    if tries < 1:
+                        raise ValueError("Could not resolve the IP address of %s due to timeout." % item)
        else:
            # Validate IP address.
            try:

@@ -1071,7 +1179,7 @@ def get_custom_dns_records(custom_dns, qname, rtype):
def build_recommended_dns(env):
    ret = []
    for (domain, zonefile, records) in build_zones(env):
-        # remove records that we don't dislay
+        # remove records that we don't display
        records = [r for r in records if r[3] is not False]

        # put Required at the top, then Recommended, then everythiing else


@@ -2,7 +2,7 @@
# Reads in STDIN. If the stream is not empty, mail it to the system administrator.

-import sys
+import sys, traceback

import html
import smtplib

@@ -25,7 +25,12 @@ subject = sys.argv[1]
admin_addr = "administrator@" + env['PRIMARY_HOSTNAME']

# Read in STDIN.
-content = sys.stdin.read().strip()
+try:
+    content = sys.stdin.read().strip()
+except:
+    print("error occured while cleaning input text")
+    traceback.print_exc()
+    sys.exit(1)

# If there's nothing coming in, just exit.
if content == "":


@@ -73,7 +73,8 @@ def scan_files(collector):
            continue
        elif fn[-3:] == '.gz':
            tmp_file = tempfile.NamedTemporaryFile()
-            shutil.copyfileobj(gzip.open(fn), tmp_file)
+            with gzip.open(fn, 'rb') as f:
+                shutil.copyfileobj(f, tmp_file)

        if VERBOSE:
            print("Processing file", fn, "...")

@@ -376,7 +377,7 @@ def scan_mail_log_line(line, collector):
        if SCAN_BLOCKED:
            scan_postfix_smtpd_line(date, log, collector)
    elif service in ("postfix/qmgr", "postfix/pickup", "postfix/cleanup", "postfix/scache",
-        "spampd", "postfix/anvil", "postfix/master", "opendkim", "postfix/lmtp",
+        "spampd", "postfix/anvil", "postfix/master", "dkimpy", "postfix/lmtp",
        "postfix/tlsmgr", "anvil"):
        # nothing to look at
        return True


@@ -533,6 +533,9 @@ def get_required_aliases(env):
    # The hostmaster alias is exposed in the DNS SOA for each zone.
    aliases.add("hostmaster@" + env['PRIMARY_HOSTNAME'])

+    # Setup root alias
+    aliases.add("root@" + env['PRIMARY_HOSTNAME'])
+
    # Get a list of domains we serve mail for, except ones for which the only
    # email on that domain are the required aliases or a catch-all/domain-forwarder.

@@ -566,7 +569,7 @@ def kick(env, mail_result=None):
    auto_aliases = { }

-    # Mape required aliases to the administrator alias (which should be created manually).
+    # Map required aliases to the administrator alias (which should be created manually).
    administrator = get_system_administrator(env)
    required_aliases = get_required_aliases(env)
    for alias in required_aliases:


@@ -343,6 +343,8 @@ def provision_certificates(env, limit_domains):
        "certonly",
        #"-v", # just enough to see ACME errors
        "--non-interactive", # will fail if user hasn't registered during Mail-in-a-Box setup
+        "--agree-tos", # Automatically agrees to Let's Encrypt TOS
+        "--register-unsafely-without-email", # The daemon takes care of renewals
        "-d", ",".join(domain_list), # first will be main domain

@@ -535,7 +537,8 @@ def check_certificate(domain, ssl_certificate, ssl_private_key, warn_if_expiring
    # Second, check that the certificate matches the private key.
    if ssl_private_key is not None:
        try:
-            priv_key = load_pem(open(ssl_private_key, 'rb').read())
+            with open(ssl_private_key, 'rb') as f:
+                priv_key = load_pem(f.read())
        except ValueError as e:
            return ("The private key file %s is not a private key file: %s" % (ssl_private_key, str(e)), None)


@@ -12,6 +12,7 @@ import dateutil.parser, dateutil.tz
import idna
import psutil
import postfix_mta_sts_resolver.resolver
+import logging

from dns_update import get_dns_zones, build_tlsa_record, get_custom_dns_config, get_secondary_dns, get_custom_dns_records
from web_update import get_web_domains, get_domains_with_a_records

@@ -19,16 +20,16 @@ from ssl_certificates import get_ssl_certificates, get_domain_ssl_files, check_c
from mailconfig import get_mail_domains, get_mail_aliases
from utils import shell, sort_domains, load_env_vars_from_file, load_settings
+from backup import get_backup_root

def get_services():
    return [
-        { "name": "Local DNS (bind9)", "port": 53, "public": False, },
-        #{ "name": "NSD Control", "port": 8952, "public": False, },
-        { "name": "Local DNS Control (bind9/rndc)", "port": 953, "public": False, },
+        { "name": "Local DNS (unbound)", "port": 53, "public": False, },
+        { "name": "Local DNS Control (unbound)", "port": 953, "public": False, },
        { "name": "Dovecot LMTP LDA", "port": 10026, "public": False, },
        { "name": "Postgrey", "port": 10023, "public": False, },
        { "name": "Spamassassin", "port": 10025, "public": False, },
-        { "name": "OpenDKIM", "port": 8891, "public": False, },
+        { "name": "DKIMpy", "port": 8892, "public": False, },
        { "name": "OpenDMARC", "port": 8893, "public": False, },
        { "name": "Mail-in-a-Box Management Daemon", "port": 10222, "public": False, },
        { "name": "SSH Login (ssh)", "port": get_ssh_port(), "public": True, },

@@ -49,15 +50,15 @@ def run_checks(rounded_values, env, output, pool, domains_to_check=None):
    # check that services are running
    if not run_services_checks(env, output, pool):
-        # If critical services are not running, stop. If bind9 isn't running,
+        # If critical services are not running, stop. If unbound isn't running,
        # all later DNS checks will timeout and that will take forever to
        # go through, and if running over the web will cause a fastcgi timeout.
        return

-    # clear bind9's DNS cache so our DNS checks are up to date
-    # (ignore errors; if bind9/rndc isn't running we'd already report
+    # clear unbound's DNS cache so our DNS checks are up to date
+    # (ignore errors; if unbound isn't running we'd already report
    # that in run_services checks.)
-    shell('check_call', ["/usr/sbin/rndc", "flush"], trap=True)
+    shell('check_call', ["/usr/sbin/unbound-control", "flush_zone", ".", "-q"], trap=True)

    run_system_checks(rounded_values, env, output)

@@ -73,6 +74,9 @@ def get_ssh_port():
    except FileNotFoundError:
        # sshd is not installed. That's ok.
        return None
+    except subprocess.CalledProcessError:
+        # error while calling shell command
+        return None

    returnNext = False
    for e in output.split():

@@ -95,6 +99,12 @@ def run_services_checks(env, output, pool):
        fatal = fatal or fatal2
        output2.playback(output)

+    # Check fail2ban.
+    code, ret = shell('check_output', ["fail2ban-client", "status"], capture_stderr=True, trap=True)
+    if code != 0:
+        output.print_error("fail2ban is not running.")
+        all_running = False
+
    if all_running:
        output.print_ok("All system services are running.")

@@ -142,6 +152,8 @@ def check_service(i, service, env):
        # IPv4 failed. Try the private IP to see if the service is running but not accessible (except DNS because a different service runs on the private IP).
        elif service["port"] != 53 and try_connect("127.0.0.1"):
            output.print_error("%s is running but is not publicly accessible at %s:%d." % (service['name'], env['PUBLIC_IP'], service['port']))
+        elif try_connect(env["PUBLIC_IPV6"]):
+            output.print_warning("%s is only running on ipv6 (port %d)." % (service['name'], service['port']))
        else:
            output.print_error("%s is not running (port %d)." % (service['name'], service['port']))

@@ -207,7 +219,8 @@ def check_ssh_password(env, output):
    # the configuration file.
    if not os.path.exists("/etc/ssh/sshd_config"):
        return
-    sshd = open("/etc/ssh/sshd_config").read()
+    with open("/etc/ssh/sshd_config", "r") as f:
+        sshd = f.read()
    if re.search("\nPasswordAuthentication\s+yes", sshd) \
        or not re.search("\nPasswordAuthentication\s+no", sshd):
        output.print_error("""The SSH server on this machine permits password-based login. A more secure

@@ -256,7 +269,7 @@ def check_free_disk_space(rounded_values, env, output):
    # Check that there's only one duplicity cache. If there's more than one,
    # it's probably no longer in use, and we can recommend clearing the cache
    # to save space. The cache directory may not exist yet, which is OK.
-    backup_cache_path = os.path.join(env['STORAGE_ROOT'], 'backup/cache')
+    backup_cache_path = os.path.join(get_backup_root(env), 'cache')
    try:
        backup_cache_count = len(os.listdir(backup_cache_path))
    except:

@@ -303,11 +316,13 @@ def run_network_checks(env, output):
    # by a spammer, or the user may be deploying on a residential network. We
    # will not be able to reliably send mail in these cases.
    rev_ip4 = ".".join(reversed(env['PUBLIC_IP'].split('.')))
-    zen = query_dns(rev_ip4+'.zen.spamhaus.org', 'A', nxdomain=None)
+    zen = query_dns(rev_ip4+'.zen.spamhaus.org', 'A', nxdomain=None, retry = False)
    if zen is None:
        output.print_ok("IP address is not blacklisted by zen.spamhaus.org.")
    elif zen == "[timeout]":
        output.print_warning("Connection to zen.spamhaus.org timed out. We could not determine whether your server's IP address is blacklisted. Please try again later.")
+    elif zen == "[Not Set]":
+        output.print_warning("Could not connect to zen.spamhaus.org. We could not determine whether your server's IP address is blacklisted. Please try again later.")
    else:
        output.print_error("""The IP address of this machine %s is listed in the Spamhaus Block List (code %s),
            which may prevent recipients from receiving your email. See http://www.spamhaus.org/query/ip/%s."""

@@ -332,9 +347,9 @@ def run_domain_checks(rounded_time, env, output, pool, domains_to_check=None):
    domains_to_check = [
        d for d in domains_to_check
        if not (
            d.split(".", 1)[0] in ("www", "autoconfig", "autodiscover", "mta-sts")
            and len(d.split(".", 1)) == 2
            and d.split(".", 1)[1] in domains_to_check
        )
    ]

@@ -517,7 +532,17 @@ def check_dns_zone(domain, env, output, dns_zonefiles):
    secondary_ns = custom_secondary_ns or ["ns2." + env['PRIMARY_HOSTNAME']]

    existing_ns = query_dns(domain, "NS")
    correct_ns = "; ".join(sorted(["ns1." + env['PRIMARY_HOSTNAME']] + secondary_ns))
+
+    # Take hidden master dns into account, the mail-in-a-box is not known as nameserver in that case
+    if os.path.exists("/etc/usehiddenmasterdns") and len(secondary_ns) > 1:
+        with open("/etc/usehiddenmasterdns") as f:
+            for line in f:
+                if line.strip() == domain or line.strip() == "usehiddenmasterdns":
+                    correct_ns = "; ".join(sorted(secondary_ns))
+                    break
+
    ip = query_dns(domain, "A")

    probably_external_dns = False

@@ -541,7 +566,7 @@ def check_dns_zone(domain, env, output, dns_zonefiles):
    for ns in custom_secondary_ns:
        # We must first resolve the nameserver to an IP address so we can query it.
        ns_ips = query_dns(ns, "A")
-        if not ns_ips:
+        if not ns_ips or ns_ips in {'[Not Set]', '[timeout]'}:
            output.print_error("Secondary nameserver %s is not valid (it doesn't resolve to an IP address)." % ns)
            continue
        # Choose the first IP if nameserver returns multiple

@@ -592,18 +617,19 @@ def check_dnssec(domain, env, output, dns_zonefiles, is_checking_primary=False):
    # record that we suggest using is for the KSK (and that's how the DS records were generated).
    # We'll also give the nice name for the key algorithm.
    dnssec_keys = load_env_vars_from_file(os.path.join(env['STORAGE_ROOT'], 'dns/dnssec/%s.conf' % alg_name_map[ds_alg]))
-    dnsssec_pubkey = open(os.path.join(env['STORAGE_ROOT'], 'dns/dnssec/' + dnssec_keys['KSK'] + '.key')).read().split("\t")[3].split(" ")[3]
+    with open(os.path.join(env['STORAGE_ROOT'], 'dns/dnssec/' + dnssec_keys['KSK'] + '.key'), 'r') as f:
+        dnsssec_pubkey = f.read().split("\t")[3].split(" ")[3]

    expected_ds_records[ (ds_keytag, ds_alg, ds_digalg, ds_digest) ] = {
        "record": rr_ds,
        "keytag": ds_keytag,
        "alg": ds_alg,
        "alg_name": alg_name_map[ds_alg],
        "digalg": ds_digalg,
        "digalg_name": digalg_name_map[ds_digalg],
        "digest": ds_digest,
        "pubkey": dnsssec_pubkey,
    }

    # Query public DNS for the DS record at the registrar.
    ds = query_dns(domain, "DS", nxdomain=None, as_list=True)

@@ -739,11 +765,13 @@ def check_mail_domain(domain, env, output):
    # Stop if the domain is listed in the Spamhaus Domain Block List.
    # The user might have chosen a domain that was previously in use by a spammer
    # and will not be able to reliably send mail.
-    dbl = query_dns(domain+'.dbl.spamhaus.org', "A", nxdomain=None)
+    dbl = query_dns(domain+'.dbl.spamhaus.org', "A", nxdomain=None, retry=False)
    if dbl is None:
        output.print_ok("Domain is not blacklisted by dbl.spamhaus.org.")
    elif dbl == "[timeout]":
        output.print_warning("Connection to dbl.spamhaus.org timed out. We could not determine whether the domain {} is blacklisted. Please try again later.".format(domain))
+    elif dbl == "[Not Set]":
+        output.print_warning("Could not connect to dbl.spamhaus.org. We could not determine whether the domain {} is blacklisted. Please try again later.".format(domain))
    else:
        output.print_error("""This domain is listed in the Spamhaus Domain Block List (code %s),
            which may prevent recipients from receiving your mail.

@@ -775,7 +803,7 @@ def check_web_domain(domain, rounded_time, ssl_certificates, env, output):
    # website for also needs a signed certificate.
    check_ssl_cert(domain, rounded_time, ssl_certificates, env, output)

-def query_dns(qname, rtype, nxdomain='[Not Set]', at=None, as_list=False):
+def query_dns(qname, rtype, nxdomain='[Not Set]', at=None, as_list=False, retry=True):
    # Make the qname absolute by appending a period. Without this, dns.resolver.query
    # will fall back a failed lookup to a second query with this machine's hostname
    # appended. This has been causing some false-positive Spamhaus reports. The

@@ -785,25 +813,42 @@ def query_dns(qname, rtype, nxdomain='[Not Set]', at=None, as_list=False):
        qname += "."

    # Use the default nameservers (as defined by the system, which is our locally
-    # running bind server), or if the 'at' argument is specified, use that host
+    # running unbound server), or if the 'at' argument is specified, use that host
    # as the nameserver.
    resolver = dns.resolver.get_default_resolver()
-    if at:
+
+    # Make sure at is not a string that cannot be used as a nameserver
+    if at and at not in {'[Not set]', '[timeout]'}:
        resolver = dns.resolver.Resolver()
        resolver.nameservers = [at]

    # Set a timeout so that a non-responsive server doesn't hold us back.
    resolver.timeout = 5
+    # The number of seconds to spend trying to get an answer to the question. If the
+    # lifetime expires a dns.exception.Timeout exception will be raised.
+    resolver.lifetime = 5
+
+    if retry:
+        tries = 2
+    else:
+        tries = 1

    # Do the query.
-    try:
-        response = resolver.resolve(qname, rtype)
-    except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
-        # Host did not have an answer for this query; not sure what the
-        # difference is between the two exceptions.
-        return nxdomain
-    except dns.exception.Timeout:
-        return "[timeout]"
+    while tries > 0:
+        tries = tries - 1
+        try:
+            response = resolver.resolve(qname, rtype, search=True)
+            tries = 0
+        except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
+            # Host did not have an answer for this query; not sure what the
+            # difference is between the two exceptions.
+            logging.debug("No result for dns lookup %s, %s (%d)", qname, rtype, tries)
+            if tries < 1:
+                return nxdomain
+        except dns.exception.Timeout:
+            logging.debug("Timeout on dns lookup %s, %s (%d)", qname, rtype, tries)
+            if tries < 1:
+                return "[timeout]"

    # Normalize IP addresses. IP address --- especially IPv6 addresses --- can
    # be expressed in equivalent string forms. Canonicalize the form before
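A brief usage sketch (hypothetical lookups, assuming query_dns above is in scope): retry defaults to True, so callers that would rather fail fast, such as the Spamhaus checks earlier in this file, pass retry=False.

ns_ip = query_dns("ns1.example.com", "A")          # retried a second time on no answer, NXDOMAIN or timeout
listed = query_dns("2.0.0.127.zen.spamhaus.org", "A",
    nxdomain=None, retry=False)                    # single attempt; returns None, "[timeout]" or the answer string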
@@ -899,19 +944,19 @@ def what_version_is_this(env):
    # Git may not be installed and Mail-in-a-Box may not have been cloned from github,
    # so this function may raise all sorts of exceptions.
    miab_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-    tag = shell("check_output", ["/usr/bin/git", "describe", "--abbrev=0"], env={"GIT_DIR": os.path.join(miab_dir, '.git')}).strip()
+    tag = shell("check_output", ["/usr/bin/git", "describe", "--tags", "--abbrev=0"], env={"GIT_DIR": os.path.join(miab_dir, '.git')}).strip()
    return tag

def get_latest_miab_version():
    # This pings https://mailinabox.email/setup.sh and extracts the tag named in
    # the script to determine the current product version.
    from urllib.request import urlopen, HTTPError, URLError
    from socket import timeout
    try:
        return re.search(b'TAG=(.*)', urlopen("https://mailinabox.email/setup.sh?ping=1", timeout=5).read()).group(1).decode("utf8")
    except (HTTPError, URLError, timeout):
        return None

def check_miab_version(env, output):
    config = load_settings(env)

@@ -922,17 +967,23 @@ def check_miab_version(env, output):
        this_ver = "Unknown"

    if config.get("privacy", True):
-        output.print_warning("You are running version Mail-in-a-Box %s. Mail-in-a-Box version check disabled by privacy setting." % this_ver)
+        output.print_warning("You are running version Mail-in-a-Box %s Kiekerjan Edition. Mail-in-a-Box version check disabled by privacy setting." % this_ver)
    else:
        latest_ver = get_latest_miab_version()
-        if this_ver == latest_ver:
-            output.print_ok("Mail-in-a-Box is up to date. You are running version %s." % this_ver)
-        elif latest_ver is None:
-            output.print_error("Latest Mail-in-a-Box version could not be determined. You are running version %s." % this_ver)
-        else:
-            output.print_error("A new version of Mail-in-a-Box is available. You are running version %s. The latest version is %s. For upgrade instructions, see https://mailinabox.email. "
-                % (this_ver, latest_ver))
+
+        if this_ver[-6:] == "-20.04":
+            this_ver_tag = this_ver[:-6]
+        elif this_ver[-3:] == "-kj":
+            this_ver_tag = this_ver[:-3]
+        else:
+            this_ver_tag = this_ver
+
+        if this_ver_tag == latest_ver:
+            output.print_ok("Mail-in-a-Box is up to date. You are running version %s Kiekerjan Edition." % this_ver)
+        elif latest_ver is None:
+            output.print_error("Latest Mail-in-a-Box version could not be determined. You are running version %s Kiekerjan Edition." % this_ver)
+        else:
+            output.print_error("A new upstream version of Mail-in-a-Box is available. You are running version %s Kiekerjan Edition. The latest version is %s. " % (this_ver, latest_ver))

def run_and_output_changes(env, pool):
    import json

@@ -947,7 +998,8 @@ def run_and_output_changes(env, pool):
    # Load previously saved status checks.
    cache_fn = "/var/cache/mailinabox/status_checks.json"
    if os.path.exists(cache_fn):
-        prev = json.load(open(cache_fn))
+        with open(cache_fn, 'r') as f:
+            prev = json.load(f)

    # Group the serial output into categories by the headings.
    def group_by_heading(lines):


@@ -72,11 +72,6 @@
html {
    filter: invert(100%) hue-rotate(180deg);
}

-/* Set explicit background color (necessary for Firefox) */
-html {
-    background-color: #111;
-}
-
/* Override Boostrap theme here to give more contrast. The black turns to white by the filter. */
.form-control {


@@ -45,6 +45,10 @@
    <label for="backup-target-rsync-host" class="col-sm-2 control-label">Hostname</label>
    <div class="col-sm-8">
        <input type="text" placeholder="hostname.local" class="form-control" rows="1" id="backup-target-rsync-host">
+        <div class="small" style="margin-top: 2px">
+            The hostname at your rsync provider, e.g. <tt>da2327.rsync.net</tt>. Optionally includes a colon
+            and the provider's non-standard ssh port number, e.g. <tt>u215843.your-storagebox.de:23</tt>.
+        </div>
    </div>
</div>
<div class="form-group backup-target-rsync">

@@ -259,12 +263,11 @@ function show_custom_backup() {
    } else if (r.target == "off") {
        $("#backup-target-type").val("off");
    } else if (r.target.substring(0, 8) == "rsync://") {
-        $("#backup-target-type").val("rsync");
-        var path = r.target.substring(8).split('//');
-        var host_parts = path.shift().split('@');
-        $("#backup-target-rsync-user").val(host_parts[0]);
-        $("#backup-target-rsync-host").val(host_parts[1]);
-        $("#backup-target-rsync-path").val('/'+path[0]);
+        const spec = url_split(r.target);
+        $("#backup-target-type").val(spec.scheme);
+        $("#backup-target-rsync-user").val(spec.user);
+        $("#backup-target-rsync-host").val(spec.host);
+        $("#backup-target-rsync-path").val(spec.path);
    } else if (r.target.substring(0, 5) == "s3://") {
        $("#backup-target-type").val("s3");
        var hostpath = r.target.substring(5).split('/');

@@ -344,4 +347,31 @@ function init_inputs(target_type) {
        set_host($('#backup-target-s3-host-select').val());
    }
}

+// Return a two-element array of the substring preceding and the substring following
+// the first occurence of separator in string. Return [undefined, string] if the
+// separator does not appear in string.
+const split1_rest = (string, separator) => {
+    const index = string.indexOf(separator);
+    return (index >= 0) ? [string.substring(0, index), string.substring(index + separator.length)] : [undefined, string];
+};
+
+// Note: The manifest JS URL class does not work in some security-conscious
+// settings, e.g. Brave browser, so we roll our own that handles only what we need.
+//
+// Use greedy separator parsing to get parts of a MIAB backup target url.
+// Note: path will not include a leading forward slash '/'
+const url_split = url => {
+    const [ scheme, scheme_rest ] = split1_rest(url, '://');
+    const [ user, user_rest ] = split1_rest(scheme_rest, '@');
+    const [ host, path ] = split1_rest(user_rest, '/');
+    return {
+        scheme,
+        user,
+        host,
+        path,
+    }
+};
+
</script>


@@ -10,13 +10,13 @@
    border-top: none;
    padding-top: 0;
}
-#system-checks .status-error td {
+#system-checks .status-error td, .summary-error {
    color: #733;
}
-#system-checks .status-warning td {
+#system-checks .status-warning td, .summary-warning {
    color: #770;
}
-#system-checks .status-ok td {
+#system-checks .status-ok td, .summary-ok {
    color: #040;
}
#system-checks div.extra {

@@ -52,6 +52,9 @@
</div> <!-- /col -->
<div class="col-md-pull-3 col-md-8">

+    <div id="system-checks-summary">
+    </div>
+
    <table id="system-checks" class="table" style="max-width: 60em">
        <thead>
        </thead>

@@ -64,6 +67,9 @@
<script>
function show_system_status() {
+    const summary = $('#system-checks-summary');
+    summary.html("");
+
    $('#system-checks tbody').html("<tr><td colspan='2' class='text-muted'>Loading...</td></tr>")

    api(

@@ -93,6 +99,12 @@ function show_system_status() {
    { },
    function(r) {
        $('#system-checks tbody').html("");

+        const ok_symbol = "✓";
+        const error_symbol = "✖";
+        const warning_symbol = "?";
+
+        let count_by_status = { ok: 0, error: 0, warning: 0 };
+
        for (var i = 0; i < r.length; i++) {
            var n = $("<tr><td class='status'/><td class='message'><p style='margin: 0'/><div class='extra'/><a class='showhide' href='#'/></tr>");
            if (i == 0) n.addClass('first')

@@ -100,9 +112,12 @@ function show_system_status() {
                n.addClass(r[i].type)
            else
                n.addClass("status-" + r[i].type)
-            if (r[i].type == "ok") n.find('td.status').text("✓")
-            if (r[i].type == "error") n.find('td.status').text("✖")
-            if (r[i].type == "warning") n.find('td.status').text("?")
+
+            if (r[i].type == "ok") n.find('td.status').text(ok_symbol);
+            if (r[i].type == "error") n.find('td.status').text(error_symbol);
+            if (r[i].type == "warning") n.find('td.status').text(warning_symbol);
+            count_by_status[r[i].type]++;
+
            n.find('td.message p').text(r[i].text)
            $('#system-checks tbody').append(n);

@@ -122,8 +137,17 @@ function show_system_status() {
                n.find('> td.message > div').append(m);
            }
        }
-    })
+
+        // Summary counts
+        summary.html("Summary: ");
+        if (count_by_status['error'] + count_by_status['warning'] == 0) {
+            summary.append($('<span class="summary-ok"/>').text(`All ${count_by_status['ok']} ${ok_symbol} OK`));
+        } else {
+            summary.append($('<span class="summary-ok"/>').text(`${count_by_status['ok']} ${ok_symbol} OK, `));
+            summary.append($('<span class="summary-error"/>').text(`${count_by_status['error']} ${error_symbol} Error, `));
+            summary.append($('<span class="summary-warning"/>').text(`${count_by_status['warning']} ${warning_symbol} Warning`));
+        }
+    })
}

var current_privacy_setting = null;


@@ -14,7 +14,9 @@ def load_env_vars_from_file(fn):
    # Load settings from a KEY=VALUE file.
    import collections
    env = collections.OrderedDict()
-    for line in open(fn): env.setdefault(*line.strip().split("=", 1))
+    with open(fn, 'r') as f:
+        for line in f:
+            env.setdefault(*line.strip().split("=", 1))
    return env

def save_environment(env):

@@ -34,7 +36,8 @@ def load_settings(env):
    import rtyaml
    fn = os.path.join(env['STORAGE_ROOT'], 'settings.yaml')
    try:
-        config = rtyaml.load(open(fn, "r"))
+        with open(fn, "r") as f:
+            config = rtyaml.load(f)
        if not isinstance(config, dict): raise ValueError() # caught below
        return config
    except:

@@ -175,6 +178,10 @@ def wait_for_service(port, public, env, timeout):
            return False
        time.sleep(min(timeout/4, 1))

+def get_php_version():
+    # Gets the version of PHP installed in the system.
+    return shell("check_output", ["/usr/bin/php", "-v"])[4:7]
+
if __name__ == "__main__":
    from web_update import get_web_domains
    env = load_environment()
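A short usage note on the get_php_version helper added above (sketch; the exact php -v banner varies by system, and shell is the wrapper from this module): the [4:7] slice simply picks the major.minor characters that follow the literal "PHP " prefix.

out = shell("check_output", ["/usr/bin/php", "-v"])  # e.g. "PHP 8.0.28 (cli) (built: ...)"
print(out[4:7])                                      # -> "8.0"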


@@ -7,7 +7,8 @@ import os.path, re, rtyaml
from mailconfig import get_mail_domains
from dns_update import get_custom_dns_config, get_dns_zones
from ssl_certificates import get_ssl_certificates, get_domain_ssl_files, check_certificate
-from utils import shell, safe_domain_name, sort_domains
+from utils import shell, safe_domain_name, sort_domains, get_php_version
+from wwwconfig import get_www_domains

def get_web_domains(env, include_www_redirects=True, include_auto=True, exclude_dns_elsewhere=True):
    # What domains should we serve HTTP(S) for?

@@ -18,11 +19,15 @@ def get_web_domains(env, include_www_redirects=True, include_auto=True, exclude_
    # if the user wants to make one.
    domains |= get_mail_domains(env)

+    # Add domains for which we only serve www
+    domains |= get_www_domains(domains)
+
    if include_www_redirects and include_auto:
        # Add 'www.' subdomains that we want to provide default redirects
        # to the main domain for. We'll add 'www.' to any DNS zones, i.e.
        # the topmost of each domain we serve.
        domains |= set('www.' + zone for zone, zonefile in get_dns_zones(env))
+        domains |= set('www.' + wwwdomain for wwwdomain in get_www_domains(get_mail_domains(env)))

    if include_auto:
        # Add Autoconfiguration domains for domains that there are user accounts at:

@@ -63,7 +68,8 @@ def get_web_domains_with_root_overrides(env):
    root_overrides = { }
    nginx_conf_custom_fn = os.path.join(env["STORAGE_ROOT"], "www/custom.yaml")
    if os.path.exists(nginx_conf_custom_fn):
-        custom_settings = rtyaml.load(open(nginx_conf_custom_fn))
+        with open(nginx_conf_custom_fn, 'r') as f:
+            custom_settings = rtyaml.load(f)
        for domain, settings in custom_settings.items():
            for type, value in [('redirect', settings.get('redirects', {}).get('/')),
                ('proxy', settings.get('proxies', {}).get('/'))]:

@@ -75,14 +81,21 @@ def do_web_update(env):
    # Pre-load what SSL certificates we will use for each domain.
    ssl_certificates = get_ssl_certificates(env)

+    # Helper for reading config files and templates
+    def read_conf(conf_fn):
+        with open(os.path.join(os.path.dirname(__file__), "../conf", conf_fn), "r") as f:
+            return f.read()
+
    # Build an nginx configuration file.
-    nginx_conf = open(os.path.join(os.path.dirname(__file__), "../conf/nginx-top.conf")).read()
+    nginx_conf = read_conf("nginx-top.conf")
+    nginx_conf = re.sub("{{phpver}}", get_php_version(), nginx_conf)

    # Load the templates.
-    template0 = open(os.path.join(os.path.dirname(__file__), "../conf/nginx.conf")).read()
-    template1 = open(os.path.join(os.path.dirname(__file__), "../conf/nginx-alldomains.conf")).read()
-    template2 = open(os.path.join(os.path.dirname(__file__), "../conf/nginx-primaryonly.conf")).read()
+    template0 = read_conf("nginx.conf")
+    template1 = read_conf("nginx-alldomains.conf")
+    template2 = read_conf("nginx-primaryonly.conf")
    template3 = "\trewrite ^(.*) https://$REDIRECT_DOMAIN$1 permanent;\n"
+    template4 = read_conf("nginx-webonlydomains.conf")

    # Add the PRIMARY_HOST configuration first so it becomes nginx's default server.
    nginx_conf += make_domain_config(env['PRIMARY_HOSTNAME'], [template0, template1, template2], ssl_certificates, env)
@ -90,6 +103,8 @@ def do_web_update(env):
# Add configuration all other web domains. # Add configuration all other web domains.
has_root_proxy_or_redirect = get_web_domains_with_root_overrides(env) has_root_proxy_or_redirect = get_web_domains_with_root_overrides(env)
web_domains_not_redirect = get_web_domains(env, include_www_redirects=False) web_domains_not_redirect = get_web_domains(env, include_www_redirects=False)
web_only_domains = get_www_domains(get_mail_domains(env))
for domain in get_web_domains(env): for domain in get_web_domains(env):
if domain == env['PRIMARY_HOSTNAME']: if domain == env['PRIMARY_HOSTNAME']:
# PRIMARY_HOSTNAME is handled above. # PRIMARY_HOSTNAME is handled above.
@ -97,7 +112,10 @@ def do_web_update(env):
if domain in web_domains_not_redirect: if domain in web_domains_not_redirect:
# This is a regular domain. # This is a regular domain.
if domain not in has_root_proxy_or_redirect: if domain not in has_root_proxy_or_redirect:
nginx_conf += make_domain_config(domain, [template0, template1], ssl_certificates, env) if domain in web_only_domains:
nginx_conf += make_domain_config(domain, [template0, template4], ssl_certificates, env)
else:
nginx_conf += make_domain_config(domain, [template0, template1], ssl_certificates, env)
else: else:
nginx_conf += make_domain_config(domain, [template0], ssl_certificates, env) nginx_conf += make_domain_config(domain, [template0], ssl_certificates, env)
else: else:
@ -141,11 +159,8 @@ def make_domain_config(domain, templates, ssl_certificates, env):
def hashfile(filepath): def hashfile(filepath):
import hashlib import hashlib
sha1 = hashlib.sha1() sha1 = hashlib.sha1()
f = open(filepath, 'rb') with open(filepath, 'rb') as f:
try:
sha1.update(f.read()) sha1.update(f.read())
finally:
f.close()
return sha1.hexdigest() return sha1.hexdigest()
nginx_conf_extra += "\t# ssl files sha1: %s / %s\n" % (hashfile(tls_cert["private-key"]), hashfile(tls_cert["certificate"])) nginx_conf_extra += "\t# ssl files sha1: %s / %s\n" % (hashfile(tls_cert["private-key"]), hashfile(tls_cert["certificate"]))
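These fingerprint comments exist so that renewing a key or certificate changes the generated nginx configuration, which is what triggers a rewrite and reload when the new text is compared with what is already on disk. A minimal chunked variant of the same helper, shown only as a sketch (the path below is hypothetical):

import hashlib

def hashfile(filepath):
    # Stream the file in 64 KiB chunks rather than reading it all at once.
    sha1 = hashlib.sha1()
    with open(filepath, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            sha1.update(chunk)
    return sha1.hexdigest()

# print(hashfile("/home/user-data/ssl/ssl_certificate.pem"))  # hypothetical path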
@ -153,7 +168,8 @@ def make_domain_config(domain, templates, ssl_certificates, env):
hsts = "yes" hsts = "yes"
nginx_conf_custom_fn = os.path.join(env["STORAGE_ROOT"], "www/custom.yaml") nginx_conf_custom_fn = os.path.join(env["STORAGE_ROOT"], "www/custom.yaml")
if os.path.exists(nginx_conf_custom_fn): if os.path.exists(nginx_conf_custom_fn):
yaml = rtyaml.load(open(nginx_conf_custom_fn)) with open(nginx_conf_custom_fn, 'r') as f:
yaml = rtyaml.load(f)
if domain in yaml: if domain in yaml:
yaml = yaml[domain] yaml = yaml[domain]
@ -199,9 +215,14 @@ def make_domain_config(domain, templates, ssl_certificates, env):
# Add the HSTS header. # Add the HSTS header.
if hsts == "yes": if hsts == "yes":
nginx_conf_extra += "\tadd_header Strict-Transport-Security \"max-age=15768000\" always;\n" nginx_conf_extra += "\tadd_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" always;\n"
elif hsts == "preload": elif hsts == "preload":
nginx_conf_extra += "\tadd_header Strict-Transport-Security \"max-age=15768000; includeSubDomains; preload\" always;\n" nginx_conf_extra += "\tadd_header Strict-Transport-Security \"max-age=31536000; includeSubDomains; preload\" always;\n"
nginx_conf_extra += "\tadd_header X-Frame-Options \"SAMEORIGIN\" always;\n"
nginx_conf_extra += "\tadd_header X-Content-Type-Options nosniff;\n"
nginx_conf_extra += "\tadd_header Content-Security-Policy-Report-Only \"default-src 'self'; font-src *;img-src * data:; script-src *; style-src *;frame-ancestors 'self'\";\n"
nginx_conf_extra += "\tadd_header Referrer-Policy \"strict-origin\";\n"
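Alongside the one-year HSTS max-age, the hunk above adds X-Frame-Options, X-Content-Type-Options, a report-only Content-Security-Policy and a Referrer-Policy header. A quick spot check that the headers are actually served once nginx has been reloaded; this is only a sketch and the hostname is hypothetical:

import http.client

conn = http.client.HTTPSConnection("box.example.com", timeout=10)
conn.request("HEAD", "/")
resp = conn.getresponse()
for name in ("Strict-Transport-Security", "X-Frame-Options",
             "X-Content-Type-Options", "Content-Security-Policy-Report-Only",
             "Referrer-Policy"):
    print(name, "=", resp.getheader(name))
conn.close()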
# Add in any user customizations in the includes/ folder. # Add in any user customizations in the includes/ folder.
nginx_conf_custom_include = os.path.join(env["STORAGE_ROOT"], "www", safe_domain_name(domain) + ".conf") nginx_conf_custom_include = os.path.join(env["STORAGE_ROOT"], "www", safe_domain_name(domain) + ".conf")
@ -232,6 +253,10 @@ def get_web_root(domain, env, test_exists=True):
if os.path.exists(root) or not test_exists: break if os.path.exists(root) or not test_exists: break
return root return root
def is_default_web_root(domain, env):
root = os.path.join(env["STORAGE_ROOT"], "www", safe_domain_name(domain))
return not os.path.exists(root)
def get_web_domains_info(env): def get_web_domains_info(env):
www_redirects = set(get_web_domains(env)) - set(get_web_domains(env, include_www_redirects=False)) www_redirects = set(get_web_domains(env)) - set(get_web_domains(env, include_www_redirects=False))
has_root_proxy_or_redirect = set(get_web_domains_with_root_overrides(env)) has_root_proxy_or_redirect = set(get_web_domains_with_root_overrides(env))
View File
@ -1,7 +1,11 @@
from daemon import app from daemon import app
import auth, utils import auth, utils, logging
app.logger.addHandler(utils.create_syslog_handler()) app.logger.addHandler(utils.create_syslog_handler())
logging_level = logging.DEBUG
logging.basicConfig(level=logging_level, format='MiaB %(levelname)s:%(module)s.%(funcName)s %(message)s')
logging.info('Logging level set to %s', logging.getLevelName(logging_level))
if __name__ == "__main__": if __name__ == "__main__":
app.run(port=10222) app.run(port=10222)
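With the logging.basicConfig call above, every management module that logs through the standard logging module inherits the MiaB format and the DEBUG threshold. A small sketch of what a module-level logger then produces (module and function names depend on where the call lives):

import logging

log = logging.getLogger(__name__)

def handle_request():
    # With the format configured above, this is emitted as something like:
    #   MiaB INFO:wsgi.handle_request processing request
    log.info("processing request")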
management/wwwconfig.py Normal file
@ -0,0 +1,34 @@
import os.path, idna, sys, collections
def get_www_domains(domains_to_skip):
# Returns the (IDNA-encoded) domain names of all domains that this system is
# configured to serve websites for (web-only domains).
domains = []
try:
# Read the www domains, one per line, from the configuration file
with open("/etc/miabwwwdomains.conf") as file_in:
for line in file_in:
# Valid domain check; future extension: use the validators module
# Only one dot allowed
if line.count('.') == 1:
www_domain = get_domain(line, as_unicode=False)
if www_domain not in domains_to_skip:
domains.append(www_domain)
except:
# ignore failures (e.g. the configuration file does not exist)
pass
return set(domains)
def get_domain(domaintxt, as_unicode=True):
ret = domaintxt.rstrip()
if as_unicode:
try:
ret = idna.decode(ret.encode('ascii'))
except (ValueError, UnicodeError, idna.IDNAError):
pass
return ret
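To make the filter above concrete, here is a small sketch (not part of the module) of how the one-dot rule and the skip set interact, using an in-memory sample instead of /etc/miabwwwdomains.conf:

sample_lines = ["example.com\n", "xn--bcher-kva.ch\n", "mail.example.com\n"]
mail_domains = {"example.com"}           # domains the box already serves mail for

selected = set()
for line in sample_lines:
    if line.count('.') == 1:             # only bare "domain.tld" entries qualify
        d = line.rstrip()
        if d not in mail_domains:
            selected.add(d)

# example.com is skipped (already a mail domain), mail.example.com has two dots,
# so only the IDNA-encoded xn--bcher-kva.ch remains:
print(selected)                          # {'xn--bcher-kva.ch'}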
View File
@ -1,7 +1,7 @@
Mail-in-a-Box Security Guide Mail-in-a-Box Security Guide
============================ ============================
Mail-in-a-Box turns a fresh Ubuntu 18.04 LTS 64-bit machine into a mail server appliance by installing and configuring various components. Mail-in-a-Box turns a fresh Ubuntu 22.04 LTS 64-bit machine into a mail server appliance by installing and configuring various components.
This page documents the security posture of Mail-in-a-Box. The term “box” is used below to mean a configured Mail-in-a-Box. This page documents the security posture of Mail-in-a-Box. The term “box” is used below to mean a configured Mail-in-a-Box.
setup/additionals.sh Normal file
@ -0,0 +1,54 @@
source /etc/mailinabox.conf
source setup/functions.sh
# Add additional packages
apt_install pflogsumm rkhunter
# Cleanup old spam and trash email
hide_output install -m 755 conf/cron/miab_clean_mail /etc/cron.weekly/
# Reduce logs by not logging mail output in syslog
sed -i "s/\*\.\*;auth,authpriv.none.*\-\/var\/log\/syslog/\*\.\*;mail,auth,authpriv.none \-\/var\/log\/syslog/g" /etc/rsyslog.d/50-default.conf
# Reduce logs by only logging ufw in ufw.log
sed -i "s/#\& stop/\& stop/g" /etc/rsyslog.d/20-ufw.conf
# Add nextcloud logging
hide_output install -m 644 conf/rsyslog/20-nextcloud.conf /etc/rsyslog.d/
restart_service rsyslog
# Create forward for root emails
cat > /root/.forward << EOF;
administrator@$PRIMARY_HOSTNAME
EOF
# Adapt rkhunter cron job to reduce log file production
sed -i "s/--cronjob --report-warnings-only --appendlog/--cronjob --report-warnings-only --no-verbose-logging --appendlog/g" /etc/cron.daily/rkhunter
# Install fake mail script
if [ ! -f /usr/local/bin/mail ]; then
hide_output install -m 755 tools/fake_mail /usr/local/bin
mv -f /usr/local/bin/fake_mail /usr/local/bin/mail
fi
# Adapt rkhunter configuration
tools/editconf.py /etc/rkhunter.conf \
UPDATE_MIRRORS=1 \
MIRRORS_MODE=0 \
WEB_CMD='""' \
APPEND_LOG=1 \
ALLOWHIDDENDIR=/etc/.java
# Check presence of whitelist
if ! grep -Fxq "SCRIPTWHITELIST=/usr/local/bin/mail" /etc/rkhunter.conf > /dev/null; then
echo "SCRIPTWHITELIST=/usr/local/bin/mail" >> /etc/rkhunter.conf
fi
tools/editconf.py /etc/default/rkhunter \
CRON_DAILY_RUN='"true"' \
CRON_DB_UPDATE='"true"' \
APT_AUTOGEN='"true"'
# Should be last, update expected output
rkhunter --propupd
View File
@ -6,6 +6,8 @@
# #
######################################################### #########################################################
GITSRC=kj
if [ -z "$TAG" ]; then if [ -z "$TAG" ]; then
# If a version to install isn't explicitly given as an environment # If a version to install isn't explicitly given as an environment
# variable, then install the latest version. But the latest version # variable, then install the latest version. But the latest version
@ -19,11 +21,11 @@ if [ -z "$TAG" ]; then
# want to display in status checks. # want to display in status checks.
# #
# Allow point-release versions of the major releases, e.g. 22.04.1 is OK. # Allow point-release versions of the major releases, e.g. 22.04.1 is OK.
UBUNTU_VERSION=$( lsb_release -d | sed 's/.*:\s*//' | sed 's/\([0-9]*\.[0-9]*\)\.[0-9]/\1/' ) UBUNTU_VERSION=$( lsb_release -d | sed 's/.*:\s*//' | sed 's/\([0-9]*\.[0-9]*\)\.[0-9]/\1/' )
if [ "$UBUNTU_VERSION" == "Ubuntu 22.04 LTS" ]; then if [ "$UBUNTU_VERSION" == "Ubuntu 22.04 LTS" ]; then
# This machine is running Ubuntu 22.04, which is supported by # This machine is running Ubuntu 22.04, which is supported by
# Mail-in-a-Box versions 60 and later. # Mail-in-a-Box versions 60 and later.
TAG=v60.1 TAG=v61.1
elif [ "$UBUNTU_VERSION" == "Ubuntu 18.04 LTS" ]; then elif [ "$UBUNTU_VERSION" == "Ubuntu 18.04 LTS" ]; then
# This machine is running Ubuntu 18.04, which is supported by # This machine is running Ubuntu 18.04, which is supported by
# Mail-in-a-Box versions 0.40 through 5x. # Mail-in-a-Box versions 0.40 through 5x.
@ -32,6 +34,7 @@ if [ -z "$TAG" ]; then
echo "a new machine running Ubuntu 22.04. See:" echo "a new machine running Ubuntu 22.04. See:"
echo "https://mailinabox.email/maintenance.html#upgrade" echo "https://mailinabox.email/maintenance.html#upgrade"
TAG=v57a TAG=v57a
GITSRC=miab
elif [ "$UBUNTU_VERSION" == "Ubuntu 14.04 LTS" ]; then elif [ "$UBUNTU_VERSION" == "Ubuntu 14.04 LTS" ]; then
# This machine is running Ubuntu 14.04, which is supported by # This machine is running Ubuntu 14.04, which is supported by
# Mail-in-a-Box versions 1 through v0.30. # Mail-in-a-Box versions 1 through v0.30.
@ -60,12 +63,20 @@ if [ ! -d $HOME/mailinabox ]; then
fi fi
echo Downloading Mail-in-a-Box $TAG. . . echo Downloading Mail-in-a-Box $TAG. . .
git clone \ if [ "$GITSRC" == "miab" ]; then
-b $TAG --depth 1 \ git clone \
https://github.com/mail-in-a-box/mailinabox \ -b $TAG --depth 1 \
$HOME/mailinabox \ https://github.com/mail-in-a-box/mailinabox \
< /dev/null 2> /dev/null $HOME/mailinabox \
< /dev/null 2> /dev/null
else
git clone \
-b $TAG --depth 1 \
https://github.com/kiekerjan/mailinabox \
$HOME/mailinabox \
< /dev/null 2> /dev/null
fi
echo echo
fi fi
@ -73,7 +84,7 @@ fi
cd $HOME/mailinabox cd $HOME/mailinabox
# Update it. # Update it.
if [ "$TAG" != $(git describe) ]; then if [ "$TAG" != $(git describe --tags) ]; then
echo Updating Mail-in-a-Box to $TAG . . . echo Updating Mail-in-a-Box to $TAG . . .
git fetch --depth 1 --force --prune origin tag $TAG git fetch --depth 1 --force --prune origin tag $TAG
if ! git checkout -q $TAG; then if ! git checkout -q $TAG; then
View File
@ -1,46 +1,43 @@
#!/bin/bash #!/bin/bash
# OpenDKIM # DKIM
# -------- # --------
# #
# OpenDKIM provides a service that puts a DKIM signature on outbound mail. # DKIMpy provides a service that puts a DKIM signature on outbound mail.
# #
# The DNS configuration for DKIM is done in the management daemon. # The DNS configuration for DKIM is done in the management daemon.
source setup/functions.sh # load our functions source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars source /etc/mailinabox.conf # load global vars
# Install DKIM... # Remove openDKIM if present
echo Installing OpenDKIM/OpenDMARC... apt-get purge -qq -y opendkim opendkim-tools
apt_install opendkim opendkim-tools opendmarc
# Install DKIMpy-Milter
echo Installing DKIMpy/OpenDMARC...
apt_install dkimpy-milter python3-dkim opendmarc
# Make sure configuration directories exist. # Make sure configuration directories exist.
mkdir -p /etc/opendkim; mkdir -p /etc/dkim;
mkdir -p $STORAGE_ROOT/mail/dkim mkdir -p $STORAGE_ROOT/mail/dkim
# Used in InternalHosts and ExternalIgnoreList configuration directives. # Used in InternalHosts and ExternalIgnoreList configuration directives.
# Not quite sure why. # Not quite sure why.
echo "127.0.0.1" > /etc/opendkim/TrustedHosts echo "127.0.0.1" > /etc/dkim/TrustedHosts
# We need to at least create these files, since we reference them later. # We need to at least create these files, since we reference them later.
# Otherwise, opendkim startup will fail touch /etc/dkim/KeyTable
touch /etc/opendkim/KeyTable touch /etc/dkim/SigningTable
touch /etc/opendkim/SigningTable
if grep -q "ExternalIgnoreList" /etc/opendkim.conf; then tools/editconf.py /etc/dkimpy-milter/dkimpy-milter.conf -s \
true # already done #NODOC "MacroList=daemon_name|ORIGINATING" \
else "MacroListVerify=daemon_name|VERIFYING" \
# Add various configuration options to the end of `opendkim.conf`. "Canonicalization=relaxed/simple" \
cat >> /etc/opendkim.conf << EOF; "MinimumKeyBits=1024" \
Canonicalization relaxed/simple "InternalHosts=refile:/etc/dkim/TrustedHosts" \
MinimumKeyBits 1024 "KeyTable=refile:/etc/dkim/KeyTable" \
ExternalIgnoreList refile:/etc/opendkim/TrustedHosts "KeyTableEd25519=refile:/etc/dkim/KeyTableEd25519" \
InternalHosts refile:/etc/opendkim/TrustedHosts "SigningTable=refile:/etc/dkim/SigningTable" \
KeyTable refile:/etc/opendkim/KeyTable "Socket=inet:8892@127.0.0.1"
SigningTable refile:/etc/opendkim/SigningTable
Socket inet:8891@127.0.0.1
RequireSafeKeys false
EOF
fi
# Create a new DKIM key. This creates mail.private and mail.txt # Create a new DKIM key. This creates mail.private and mail.txt
# in $STORAGE_ROOT/mail/dkim. The former is the private key and # in $STORAGE_ROOT/mail/dkim. The former is the private key and
@ -48,16 +45,20 @@ fi
# in our DNS setup. Note that the files are named after the # in our DNS setup. Note that the files are named after the
# 'selector' of the key, which we can change later on to support # 'selector' of the key, which we can change later on to support
# key rotation. # key rotation.
# if [ ! -f "$STORAGE_ROOT/mail/dkim/box-rsa.key" ]; then
# A 1024-bit key is seen as a minimum standard by several providers # All defaults are supposed to be ok, default key for rsa is 2048 bit
# such as Google. But they and others use a 2048 bit key, so we'll dknewkey --ktype rsa $STORAGE_ROOT/mail/dkim/box-rsa
# do the same. Keys beyond 2048 bits may exceed DNS record limits. dknewkey --ktype ed25519 $STORAGE_ROOT/mail/dkim/box-ed25519
if [ ! -f "$STORAGE_ROOT/mail/dkim/mail.private" ]; then
opendkim-genkey -b 2048 -r -s mail -D $STORAGE_ROOT/mail/dkim # Force them into the format dns_update.py expects
sed -i 's/v=DKIM1;/box-rsa._domainkey IN TXT ( "v=DKIM1; s=email;/' $STORAGE_ROOT/mail/dkim/box-rsa.dns
echo '" )' >> $STORAGE_ROOT/mail/dkim/box-rsa.dns
sed -i 's/v=DKIM1;/box-ed25519._domainkey IN TXT ( "v=DKIM1; s=email;/' $STORAGE_ROOT/mail/dkim/box-ed25519.dns
echo '" )' >> $STORAGE_ROOT/mail/dkim/box-ed25519.dns
fi fi
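The sed/echo lines above rewrite dknewkey's output into the one-line, BIND-style TXT records that dns_update.py splices into each zone. For illustration (key material abbreviated, exact tags depend on dknewkey), box-rsa.dns ends up looking roughly like:

box-rsa._domainkey IN TXT ( "v=DKIM1; s=email; k=rsa; p=MIIBIjANBg..." )

and the record for a signing domain is published at selector._domainkey.domain:

selector = "box-rsa"
domain = "example.com"                    # hypothetical signing domain
print(f"{selector}._domainkey.{domain}")  # -> box-rsa._domainkey.example.com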
# Ensure files are owned by the opendkim user and are private otherwise. # Ensure files are owned by the dkimpy-milter user and are private otherwise.
chown -R opendkim:opendkim $STORAGE_ROOT/mail/dkim chown -R dkimpy-milter:dkimpy-milter $STORAGE_ROOT/mail/dkim
chmod go-rwx $STORAGE_ROOT/mail/dkim chmod go-rwx $STORAGE_ROOT/mail/dkim
tools/editconf.py /etc/opendmarc.conf -s \ tools/editconf.py /etc/opendmarc.conf -s \
@ -88,29 +89,20 @@ tools/editconf.py /etc/opendmarc.conf -s \
tools/editconf.py /etc/opendmarc.conf -s \ tools/editconf.py /etc/opendmarc.conf -s \
"FailureReportsOnNone=true" "FailureReportsOnNone=true"
# AlwaysAddARHeader Adds an "Authentication-Results:" header field even to # Add DKIMpy and OpenDMARC as milters to postfix, which is how DKIMpy
# unsigned messages from domains with no "signs all" policy. The reported DKIM
# result will be "none" in such cases. Normally unsigned mail from non-strict
# domains does not cause the results header field to be added. This added header
# is used by spamassassin to evaluate the mail for spamminess.
tools/editconf.py /etc/opendkim.conf -s \
"AlwaysAddARHeader=true"
# Add OpenDKIM and OpenDMARC as milters to postfix, which is how OpenDKIM
# intercepts outgoing mail to perform the signing (by adding a mail header) # intercepts outgoing mail to perform the signing (by adding a mail header)
# and how they both intercept incoming mail to add Authentication-Results # and how they both intercept incoming mail to add Authentication-Results
# headers. The order possibly/probably matters: OpenDMARC relies on the # headers. The order possibly/probably matters: OpenDMARC relies on the
# OpenDKIM Authentication-Results header already being present. # DKIM Authentication-Results header already being present.
# #
# Be careful. If we add other milters later, this needs to be concatenated # Be careful. If we add other milters later, this needs to be concatenated
# on the smtpd_milters line. # on the smtpd_milters line.
# #
# The OpenDMARC milter is skipped in the SMTP submission listener by # The OpenDMARC milter is skipped in the SMTP submission listener by
# configuring smtpd_milters there to only list the OpenDKIM milter # configuring smtpd_milters there to only list the DKIMpy milter
# (see mail-postfix.sh). # (see mail-postfix.sh).
tools/editconf.py /etc/postfix/main.cf \ tools/editconf.py /etc/postfix/main.cf \
"smtpd_milters=inet:127.0.0.1:8891 inet:127.0.0.1:8893"\ "smtpd_milters=inet:127.0.0.1:8892 inet:127.0.0.1:8893"\
non_smtpd_milters=\$smtpd_milters \ non_smtpd_milters=\$smtpd_milters \
milter_default_action=accept milter_default_action=accept
@ -118,7 +110,7 @@ tools/editconf.py /etc/postfix/main.cf \
hide_output systemctl enable opendmarc hide_output systemctl enable opendmarc
# Restart services. # Restart services.
restart_service opendkim restart_service dkimpy-milter
restart_service opendmarc restart_service opendmarc
restart_service postfix restart_service postfix
View File
@ -12,7 +12,7 @@ source /etc/mailinabox.conf # load global vars
# Prepare nsd's configuration. # Prepare nsd's configuration.
# We configure nsd before installation as we only want it to bind to some addresses # We configure nsd before installation as we only want it to bind to some addresses
# and it otherwise will have port / bind conflicts with bind9 used as the local resolver # and it otherwise will have port / bind conflicts with unbound used as the local resolver
mkdir -p /var/run/nsd mkdir -p /var/run/nsd
mkdir -p /etc/nsd mkdir -p /etc/nsd
mkdir -p /etc/nsd/zones mkdir -p /etc/nsd/zones
@ -38,7 +38,7 @@ server:
EOF EOF
# Since we have bind9 listening on localhost for locally-generated # Since we have unbound listening on localhost for locally-generated
# DNS queries that require a recursive nameserver, and the system # DNS queries that require a recursive nameserver, and the system
# might have other network interfaces for e.g. tunnelling, we have # might have other network interfaces for e.g. tunnelling, we have
# to be specific about the network interfaces that nsd binds to. # to be specific about the network interfaces that nsd binds to.
setup/dovecot-fts-xapian.sh Executable file
@ -0,0 +1,84 @@
#!/bin/bash
#
# IMAP search with xapian
# --------------------------------
#
# By default dovecot uses its own Squat search index that has awful performance
# on large mailboxes and is obsolete. Dovecot 2.1+ has support for using Lucene
# internally but this didn't make it into the Ubuntu packages. Solr uses too
# much memory. Same goes for elasticsearch. fts xapian might be a good match
# for mail-in-a-box. See https://github.com/grosjo/fts-xapian
source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars
# Install packages and basic configuration
# ---------------------------------------
echo "Installing fts-xapian..."
apt_install dovecot-fts-xapian
# Update the dovecot plugin configuration
#
# Break-imap-search makes search work the way users expect, rather than the way
# the IMAP specification expects.
tools/editconf.py /etc/dovecot/conf.d/10-mail.conf \
mail_plugins="fts fts_xapian" \
mail_home="$STORAGE_ROOT/mail/homes/%d/%n"
# Install cronjobs to keep FTS up to date.
hide_output install -m 755 conf/cron/miab_dovecot /etc/cron.daily/
# Install files
if [ ! -f /usr/lib/dovecot/decode2text.sh ]; then
cp -f /usr/share/doc/dovecot-core/examples/decode2text.sh /usr/lib/dovecot
fi
# Create configuration file
cat > /etc/dovecot/conf.d/90-plugin-fts.conf << EOF;
plugin {
plugin = fts fts_xapian
fts = xapian
fts_xapian = partial=3 full=20 verbose=0
fts_autoindex = yes
fts_enforced = yes
fts_autoindex_exclude = \Trash
fts_autoindex_exclude2 = \Junk
fts_autoindex_exclude3 = \Spam
fts_decoder = decode2text
}
service indexer-worker {
vsz_limit = 2G
}
service decode2text {
executable = script /usr/lib/dovecot/decode2text.sh
user = dovecot
unix_listener decode2text {
mode = 0666
}
}
EOF
restart_service dovecot
# Kick off building the index
# Per doveadm-fts manpage: Scan what mails exist in the full text search index
# and compare those to what actually exist in mailboxes.
# This removes mails from the index that have already been expunged and makes
# sure that the next doveadm index will index all the missing mails (if any).
hide_output doveadm fts rescan -A
# Adds unindexed files to the fts database
# * `-q`: Queues the indexing to be run by indexer process. (will background the indexing)
# * `-A`: All users
# * `'*'`: All folders
doveadm index -A -q \*
View File
@ -4,8 +4,6 @@
# -o pipefail: don't ignore errors in the non-last command in a pipeline # -o pipefail: don't ignore errors in the non-last command in a pipeline
set -euo pipefail set -euo pipefail
PHP_VER=8.0
function hide_output { function hide_output {
# This function hides the output of a command unless the command fails # This function hides the output of a command unless the command fails
# and returns a non-zero exit code. # and returns a non-zero exit code.
@ -219,6 +217,13 @@ function git_clone {
rm -rf $TMPPATH $TARGETPATH rm -rf $TMPPATH $TARGETPATH
git clone -q $REPO $TMPPATH || exit 1 git clone -q $REPO $TMPPATH || exit 1
(cd $TMPPATH; git checkout -q $TREEISH;) || exit 1 (cd $TMPPATH; git checkout -q $TREEISH;) || exit 1
rm -rf $TMPPATH/.git
mv $TMPPATH/$SUBDIR $TARGETPATH mv $TMPPATH/$SUBDIR $TARGETPATH
rm -rf $TMPPATH rm -rf $TMPPATH
} }
function php_version {
php --version | head -n 1 | cut -d " " -f 2 | cut -c 1-3
}
PHP_VER=$(php_version)
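php_version derives PHP_VER from the first line of `php --version` (e.g. "PHP 8.1.2-1ubuntu2 (cli) ..."), keeping the first three characters of the version field, i.e. "8.1". A rough Python equivalent, shown only to make the parsing explicit; note that `cut -c 1-3` assumes single-digit major and minor numbers:

import re, subprocess

out = subprocess.run(["php", "--version"], capture_output=True, text=True).stdout
m = re.match(r"PHP (\d+\.\d+)", out)      # first line: "PHP 8.1.2-..." -> "8.1"
print(m.group(1) if m else "unknown")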
setup/geoipfilter.sh Normal file
@ -0,0 +1,41 @@
#!/bin/bash
CONFIG_FILE=/etc/geoiplookup.conf
GEOIPLOOKUP=/usr/local/bin/goiplookup
# Check existence of configuration
if [ -f "$CONFIG_FILE" ]; then
source $CONFIG_FILE
# Check required variable exists and is non-empty
if [ -z "$ALLOW_COUNTRIES" ]; then
echo "variable ALLOW_COUNTRIES is not set or empty. No countries are blocked."
exit 0
fi
else
echo "Configuration $CONFIG_FILE does not exist. No countries are blocked."
exit 0
fi
# Check existence of binary
if [ ! -x "$GEOIPLOOKUP" ]; then
echo "Geoip lookup binary $GEOIPLOOKUP does not exist. No countries are blocked."
exit 0
fi
if [ $# -ne 1 -a $# -ne 2 ]; then
echo "Usage: `basename $0` <ip>" 1>&2
exit 0 # return true in case of config issue
fi
COUNTRY=`$GEOIPLOOKUP $1 | awk -F ": " '{ print $2 }' | awk -F "," '{ print $1 }' | head -n 1`
[[ $COUNTRY = "IP Address not found" || $ALLOW_COUNTRIES =~ $COUNTRY ]] && RESPONSE="ALLOW" || RESPONSE="DENY"
logger "$RESPONSE geoipblocked connection from $1 ($COUNTRY) $2"
if [ $RESPONSE = "ALLOW" ]
then
exit 0
else
exit 1
fi
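The decision above fails open: an address goiplookup cannot resolve ("IP Address not found") is allowed, and a country is allowed when its code appears anywhere in ALLOW_COUNTRIES (the bash =~ is a containment test). A compact sketch of the same logic with hypothetical values:

allow_countries = "NL DE US"                   # hypothetical ALLOW_COUNTRIES value
country = "FR"                                 # what goiplookup reported for the client

allowed = (country == "IP Address not found"   # fail open on unknown addresses
           or country in allow_countries)      # substring test, like the bash =~
print("ALLOW" if allowed else "DENY")          # -> DENY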
setup/geoiptoolssetup.sh Normal file
@ -0,0 +1,104 @@
#!/bin/bash
source setup/functions.sh
echo Installing geoip packages...
# geo ip filtering of ssh entries, based on https://www.axllent.org/docs/ssh-geoip/#disqus_thread
# Install geo ip lookup tool
gunzip -c tools/goiplookup.gz > /usr/local/bin/goiplookup
chmod +x /usr/local/bin/goiplookup
# Update only if GeoLite2-Country.mmdb is missing or older than 2 months, so we do not hit the server too often
if [[ ! -d /usr/share/GeoIP || ! -f /usr/share/GeoIP/GeoLite2-Country.mmdb || $(find "/usr/share/GeoIP/GeoLite2-Country.mmdb" -mtime +60 -print) ]]; then
echo updating goiplookup database
goiplookup db-update
else
echo skipping goiplookup database update
fi
# Install geo ip filter script
cp -f setup/geoipfilter.sh /usr/local/bin/
chmod +x /usr/local/bin/geoipfilter.sh
# Install only if not yet exists, to keep user config
if [ ! -f /etc/geoiplookup.conf ]; then
cp -f conf/geoiplookup.conf /etc/
fi
# Add sshd entries for hosts.deny and hosts.allow
if grep -Fxq "sshd: ALL" /etc/hosts.deny
then
echo hosts.deny already configured
else
sed -i '/sshd: /d' /etc/hosts.deny
echo "sshd: ALL" >> /etc/hosts.deny
fi
if grep -Fxq "sshd: ALL: aclexec /usr/local/bin/geoipfilter.sh %a %s" /etc/hosts.allow
then
echo hosts.allow already configured
else
# Make sure all sshd lines are removed
sed -i '/sshd: /d' /etc/hosts.allow
echo "sshd: ALL: aclexec /usr/local/bin/geoipfilter.sh %a %s" >> /etc/hosts.allow
fi
# geo ip filtering of nginx access log, based on
# https://guides.wp-bullet.com/blocking-country-and-continent-with-nginx-geoip-on-ubuntu-18-04/
## Install geo ip lookup files
# Update only if GeoIP.dat is missing or older than 2 months, so we do not hit the server too often
if [[ ! -d /usr/share/GeoIP || ! -f /usr/share/GeoIP/GeoIP.dat || $(find "/usr/share/GeoIP/GeoIP.dat" -mtime +60 -print) ]]; then
echo updating GeoIP database
# Move old file away if it exists
if [ -f "/usr/share/GeoIP/GeoIP.dat" ]; then
mv -f /usr/share/GeoIP/GeoIP.dat /usr/share/GeoIP/GeoIP.dat.bak
fi
hide_output wget -P /usr/share/GeoIP/ https://dl.miyuru.lk/geoip/maxmind/country/maxmind.dat.gz
if [ -f "/usr/share/GeoIP/maxmind.dat.gz" ]; then
gunzip -c /usr/share/GeoIP/maxmind.dat.gz > /usr/share/GeoIP/GeoIP.dat
rm -f /usr/share/GeoIP/maxmind.dat.gz
else
echo Did not correctly download maxmind geoip country database
fi
# If new file is not created, move the old file back
if [ ! -f "/usr/share/GeoIP/GeoIP.dat" ]; then
echo GeoIP.dat was not created
if [ -f "/usr/share/GeoIP/GeoIP.dat.bak" ]; then
mv /usr/share/GeoIP/GeoIP.dat.bak /usr/share/GeoIP/GeoIP.dat
fi
fi
# Move old file away if it exists
if [ -f "/usr/share/GeoIP/GeoIPCity.dat" ]; then
mv -f /usr/share/GeoIP/GeoIPCity.dat /usr/share/GeoIP/GeoIPCity.dat.bak
fi
hide_output wget -P /usr/share/GeoIP/ https://dl.miyuru.lk/geoip/maxmind/city/maxmind.dat.gz
if [ -f "/usr/share/GeoIP/maxmind.dat.gz" ]; then
gunzip -c /usr/share/GeoIP/maxmind.dat.gz > /usr/share/GeoIP/GeoIPCity.dat
rm -f /usr/share/GeoIP/maxmind.dat.gz
else
echo Did not correctly download maxmind geoip city database
fi
# If new file is not created, move the old file back
if [ ! -f "/usr/share/GeoIP/GeoIPCity.dat" ]; then
echo GeoIPCity.dat was not created
if [ -f "/usr/share/GeoIP/GeoIPCity.dat.bak" ]; then
mv /usr/share/GeoIP/GeoIPCity.dat.bak /usr/share/GeoIP/GeoIPCity.dat
fi
fi
else
echo skipping GeoIP database update
fi
View File
@ -78,7 +78,7 @@ tools/editconf.py /etc/dovecot/conf.d/10-auth.conf \
"auth_mechanisms=plain login" "auth_mechanisms=plain login"
# Enable SSL, specify the location of the SSL certificate and private key files. # Enable SSL, specify the location of the SSL certificate and private key files.
# Use Mozilla's "Intermediate" recommendations at https://ssl-config.mozilla.org/#server=dovecot&server-version=2.2.33&config=intermediate&openssl-version=1.1.1, # Use Mozilla's "Intermediate" recommendations at https://ssl-config.mozilla.org/#server=dovecot&server-version=2.3.7.2&config=intermediate&openssl-version=1.1.1,
# except that the current version of Dovecot does not have a TLSv1.3 setting, so we only use TLSv1.2. # except that the current version of Dovecot does not have a TLSv1.3 setting, so we only use TLSv1.2.
tools/editconf.py /etc/dovecot/conf.d/10-ssl.conf \ tools/editconf.py /etc/dovecot/conf.d/10-ssl.conf \
ssl=required \ ssl=required \
@ -86,9 +86,8 @@ tools/editconf.py /etc/dovecot/conf.d/10-ssl.conf \
"ssl_key=<$STORAGE_ROOT/ssl/ssl_private_key.pem" \ "ssl_key=<$STORAGE_ROOT/ssl/ssl_private_key.pem" \
"ssl_min_protocol=TLSv1.2" \ "ssl_min_protocol=TLSv1.2" \
"ssl_cipher_list=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384" \ "ssl_cipher_list=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384" \
"ssl_prefer_server_ciphers=no" \ "ssl_prefer_server_ciphers=yes" \
"ssl_dh_parameters_length=2048" \ "ssl_dh=<$STORAGE_ROOT/ssl/dh4096.pem"
"ssl_dh=<$STORAGE_ROOT/ssl/dh2048.pem"
# Disable in-the-clear IMAP/POP because there is no reason for a user to transmit # Disable in-the-clear IMAP/POP because there is no reason for a user to transmit
# login credentials outside of an encrypted connection. Only the over-TLS versions # login credentials outside of an encrypted connection. Only the over-TLS versions
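With ssl_min_protocol=TLSv1.2, the 4096-bit DH parameters above, and cleartext IMAP/POP disabled, IMAPS should only ever negotiate TLS 1.2 or 1.3. A hedged spot check from another machine (the hostname is hypothetical and it assumes the box already has a valid certificate):

import socket, ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("box.example.com", 993), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname="box.example.com") as tls:
        print(tls.version())              # expect 'TLSv1.2' or 'TLSv1.3'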
@ -202,13 +201,15 @@ chmod -R o-rwx /etc/dovecot
# Ensure mailbox files have a directory that exists and are owned by the mail user. # Ensure mailbox files have a directory that exists and are owned by the mail user.
mkdir -p $STORAGE_ROOT/mail/mailboxes mkdir -p $STORAGE_ROOT/mail/mailboxes
chown -R mail.mail $STORAGE_ROOT/mail/mailboxes mkdir -p $STORAGE_ROOT/mail/homes
chown -R mail:mail $STORAGE_ROOT/mail/mailboxes
chown -R mail:mail $STORAGE_ROOT/mail/homes
# Same for the sieve scripts. # Same for the sieve scripts.
mkdir -p $STORAGE_ROOT/mail/sieve mkdir -p $STORAGE_ROOT/mail/sieve
mkdir -p $STORAGE_ROOT/mail/sieve/global_before mkdir -p $STORAGE_ROOT/mail/sieve/global_before
mkdir -p $STORAGE_ROOT/mail/sieve/global_after mkdir -p $STORAGE_ROOT/mail/sieve/global_after
chown -R mail.mail $STORAGE_ROOT/mail/sieve chown -R mail:mail $STORAGE_ROOT/mail/sieve
# Allow the IMAP/POP ports in the firewall. # Allow the IMAP/POP ports in the firewall.
ufw_allow imaps ufw_allow imaps
View File
@ -91,12 +91,14 @@ tools/editconf.py /etc/postfix/master.cf -s -w \
-o smtpd_tls_wrappermode=yes -o smtpd_tls_wrappermode=yes
-o smtpd_sasl_auth_enable=yes -o smtpd_sasl_auth_enable=yes
-o syslog_name=postfix/submission -o syslog_name=postfix/submission
-o smtpd_milters=inet:127.0.0.1:8891 -o smtpd_milters=inet:127.0.0.1:8892
-o milter_macro_daemon_name=ORIGINATING
-o cleanup_service_name=authclean" \ -o cleanup_service_name=authclean" \
"submission=inet n - - - - smtpd "submission=inet n - - - - smtpd
-o smtpd_sasl_auth_enable=yes -o smtpd_sasl_auth_enable=yes
-o syslog_name=postfix/submission -o syslog_name=postfix/submission
-o smtpd_milters=inet:127.0.0.1:8891 -o smtpd_milters=inet:127.0.0.1:8892
-o milter_macro_daemon_name=ORIGINATING
-o smtpd_tls_security_level=encrypt -o smtpd_tls_security_level=encrypt
-o cleanup_service_name=authclean" \ -o cleanup_service_name=authclean" \
"authclean=unix n - - - 0 cleanup "authclean=unix n - - - 0 cleanup
@ -122,18 +124,18 @@ sed -i "s/PUBLIC_IP/$PUBLIC_IP/" /etc/postfix/outgoing_mail_header_filters
# the world are very far behind and if we disable too much, they may not be able to use TLS and # the world are very far behind and if we disable too much, they may not be able to use TLS and
# won't fall back to cleartext. So we don't disable too much. smtpd_tls_exclude_ciphers applies to # won't fall back to cleartext. So we don't disable too much. smtpd_tls_exclude_ciphers applies to
# both port 25 and port 587, but because we override the cipher list for both, it probably isn't used. # both port 25 and port 587, but because we override the cipher list for both, it probably isn't used.
# Use Mozilla's "Old" recommendations at https://ssl-config.mozilla.org/#server=postfix&server-version=3.3.0&config=old&openssl-version=1.1.1 # Use Mozilla's "Old" recommendations at https://ssl-config.mozilla.org/#server=postfix&server-version=3.4.13&config=old&openssl-version=1.1.1
tools/editconf.py /etc/postfix/main.cf \ tools/editconf.py /etc/postfix/main.cf \
smtpd_tls_security_level=may\ smtpd_tls_security_level=may\
smtpd_tls_auth_only=yes \ smtpd_tls_auth_only=yes \
smtpd_tls_cert_file=$STORAGE_ROOT/ssl/ssl_certificate.pem \ smtpd_tls_cert_file=$STORAGE_ROOT/ssl/ssl_certificate.pem \
smtpd_tls_key_file=$STORAGE_ROOT/ssl/ssl_private_key.pem \ smtpd_tls_key_file=$STORAGE_ROOT/ssl/ssl_private_key.pem \
smtpd_tls_dh1024_param_file=$STORAGE_ROOT/ssl/dh2048.pem \ smtpd_tls_dh1024_param_file=$STORAGE_ROOT/ssl/dh4096.pem \
smtpd_tls_protocols="!SSLv2,!SSLv3" \ smtpd_tls_protocols="!SSLv2,!SSLv3,!TLSv1,!TLSv1.1" \
smtpd_tls_ciphers=medium \ smtpd_tls_ciphers=medium \
tls_medium_cipherlist=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA \ tls_medium_cipherlist=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384 \
smtpd_tls_exclude_ciphers=aNULL,RC4 \ smtpd_tls_exclude_ciphers="MD5, DES, ADH, RC4, PSD, SRP, 3DES, eNULL, aNULL" \
tls_preempt_cipherlist=no \ tls_preempt_cipherlist=yes \
smtpd_tls_received_header=yes smtpd_tls_received_header=yes
# For ports 465/587 (via the 'mandatory' settings): # For ports 465/587 (via the 'mandatory' settings):
@ -143,7 +145,15 @@ tools/editconf.py /etc/postfix/main.cf \
smtpd_tls_mandatory_protocols="!SSLv2,!SSLv3,!TLSv1,!TLSv1.1" \ smtpd_tls_mandatory_protocols="!SSLv2,!SSLv3,!TLSv1,!TLSv1.1" \
smtpd_tls_mandatory_ciphers=high \ smtpd_tls_mandatory_ciphers=high \
tls_high_cipherlist=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384 \ tls_high_cipherlist=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384 \
smtpd_tls_mandatory_exclude_ciphers=aNULL,DES,3DES,MD5,DES+MD5,RC4 smtpd_tls_mandatory_exclude_ciphers="MD5, DES, ADH, RC4, PSD, SRP, 3DES, eNULL, aNULL"
# Add block_root_external to block mail send to root@PRIMARY_HOSTNAME. This mail address is only supposed to be used for local
# mail delivery (cron etc)
cat > /etc/postfix/block_root_external << EOF;
root@$PRIMARY_HOSTNAME REJECT
EOF
postmap /etc/postfix/block_root_external
# Prevent non-authenticated users from sending mail that requires being # Prevent non-authenticated users from sending mail that requires being
# relayed elsewhere. We don't want to be an "open relay". On outbound # relayed elsewhere. We don't want to be an "open relay". On outbound
@ -152,9 +162,10 @@ tools/editconf.py /etc/postfix/main.cf \
# * `permit_sasl_authenticated`: Authenticated users (i.e. on port 465/587). # * `permit_sasl_authenticated`: Authenticated users (i.e. on port 465/587).
# * `permit_mynetworks`: Mail that originates locally. # * `permit_mynetworks`: Mail that originates locally.
# * `reject_unauth_destination`: No one else. (Permits mail whose destination is local and rejects other mail.) # * `reject_unauth_destination`: No one else. (Permits mail whose destination is local and rejects other mail.)
# * `block_root_external`: Block mail addressed at root@PRIMARY_HOSTNAME. Root mail is only to receive mails locally send to root.
# permit_mynetworks will allow delivery of mail for root originating locally.
tools/editconf.py /etc/postfix/main.cf \ tools/editconf.py /etc/postfix/main.cf \
smtpd_relay_restrictions=permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination smtpd_relay_restrictions=permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination,hash:/etc/postfix/block_root_external
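As the comments above note, permit_mynetworks is evaluated before the access map, so locally generated root mail (cron, monitoring) is still delivered while outside senders of mail to root@PRIMARY_HOSTNAME are rejected. One way to confirm the compiled map itself is to query it with postmap -q, which prints REJECT when the key matches; a sketch with a hypothetical hostname:

import subprocess

result = subprocess.run(
    ["postmap", "-q", "root@box.example.com", "hash:/etc/postfix/block_root_external"],
    capture_output=True, text=True)
print(result.stdout.strip())              # -> REJECT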
# ### DANE # ### DANE
@ -181,9 +192,10 @@ tools/editconf.py /etc/postfix/main.cf \
# even if we don't know if it's to the right party, than to not encrypt at all. Instead we'll # even if we don't know if it's to the right party, than to not encrypt at all. Instead we'll
# now see notices about trusted certs. The CA file is provided by the package `ca-certificates`. # now see notices about trusted certs. The CA file is provided by the package `ca-certificates`.
tools/editconf.py /etc/postfix/main.cf \ tools/editconf.py /etc/postfix/main.cf \
smtp_tls_protocols=\!SSLv2,\!SSLv3 \ smtp_tls_protocols="!SSLv2,!SSLv3,!TLSv1,!TLSv1.1" \
smtp_tls_ciphers=medium \ smtp_tls_ciphers=medium \
smtp_tls_exclude_ciphers=aNULL,RC4 \ smtp_tls_exclude_ciphers="MD5, DES, ADH, RC4, PSD, SRP, 3DES, eNULL, aNULL" \
smtp_tls_mandatory_exclude_ciphers="MD5, DES, ADH, RC4, PSD, SRP, 3DES, eNULL, aNULL" \
smtp_tls_security_level=dane \ smtp_tls_security_level=dane \
smtp_dns_support_level=dnssec \ smtp_dns_support_level=dnssec \
smtp_tls_mandatory_protocols="!SSLv2,!SSLv3,!TLSv1,!TLSv1.1" \ smtp_tls_mandatory_protocols="!SSLv2,!SSLv3,!TLSv1,!TLSv1.1" \
@ -232,7 +244,7 @@ tools/editconf.py /etc/postfix/main.cf \
# A lot of legit mail servers try to resend before 300 seconds. # A lot of legit mail servers try to resend before 300 seconds.
# As a matter of fact RFC is not strict about retry timer so postfix and # As a matter of fact RFC is not strict about retry timer so postfix and
# other MTA have their own intervals. To fix the problem of receiving # other MTA have their own intervals. To fix the problem of receiving
# e-mails really latter, delay of greylisting has been set to # e-mails really later, delay of greylisting has been set to
# 180 seconds (default is 300 seconds). We will move the postgrey database # 180 seconds (default is 300 seconds). We will move the postgrey database
# under $STORAGE_ROOT. This prevents a "warming up" that would have occured # under $STORAGE_ROOT. This prevents a "warming up" that would have occured
# previously with a migrated or reinstalled OS. We will specify this new path # previously with a migrated or reinstalled OS. We will specify this new path
@ -242,8 +254,9 @@ tools/editconf.py /etc/postfix/main.cf \
# (luckily $STORAGE_ROOT does not currently work with spaces), or it needs to be a # (luckily $STORAGE_ROOT does not currently work with spaces), or it needs to be a
# symlink without spaces that can point to a folder with spaces). We'll just assume # symlink without spaces that can point to a folder with spaces). We'll just assume
# $STORAGE_ROOT won't have spaces to simplify things. # $STORAGE_ROOT won't have spaces to simplify things.
# Postgrey removes entries after 185 days of not being used.
tools/editconf.py /etc/default/postgrey \ tools/editconf.py /etc/default/postgrey \
POSTGREY_OPTS=\""--inet=127.0.0.1:10023 --delay=180 --dbdir=$STORAGE_ROOT/mail/postgrey/db"\" POSTGREY_OPTS=\""--inet=127.0.0.1:10023 --delay=180 --max-age=185 --dbdir=$STORAGE_ROOT/mail/postgrey/db"\"
# If the $STORAGE_ROOT/mail/postgrey is empty, copy the postgrey database over from the old location # If the $STORAGE_ROOT/mail/postgrey is empty, copy the postgrey database over from the old location
View File
@ -51,7 +51,7 @@ driver = sqlite
connect = $db_path connect = $db_path
default_pass_scheme = SHA512-CRYPT default_pass_scheme = SHA512-CRYPT
password_query = SELECT email as user, password FROM users WHERE email='%u'; password_query = SELECT email as user, password FROM users WHERE email='%u';
user_query = SELECT email AS user, "mail" as uid, "mail" as gid, "$STORAGE_ROOT/mail/mailboxes/%d/%n" as home FROM users WHERE email='%u'; user_query = SELECT email AS user, "mail" as uid, "mail" as gid, "$STORAGE_ROOT/mail/homes/%d/%n" as home FROM users WHERE email='%u';
iterate_query = SELECT email AS user FROM users; iterate_query = SELECT email AS user FROM users;
EOF EOF
chmod 0600 /etc/dovecot/dovecot-sql.conf.ext # per Dovecot instructions chmod 0600 /etc/dovecot/dovecot-sql.conf.ext # per Dovecot instructions
View File
@ -7,12 +7,15 @@ source /etc/mailinabox.conf # load global vars
# install Munin # install Munin
echo "Installing Munin (system monitoring)..." echo "Installing Munin (system monitoring)..."
apt_install munin munin-node libcgi-fast-perl apt_install munin munin-node libcgi-fast-perl munin-plugins-extra
# libcgi-fast-perl is needed by /usr/lib/munin/cgi/munin-cgi-graph # libcgi-fast-perl is needed by /usr/lib/munin/cgi/munin-cgi-graph
mkdir -p $STORAGE_ROOT/munin
chown munin:munin $STORAGE_ROOT/munin
# edit config # edit config
cat > /etc/munin/munin.conf <<EOF; cat > /etc/munin/munin.conf <<EOF;
dbdir /var/lib/munin dbdir $STORAGE_ROOT/munin
htmldir /var/cache/munin/www htmldir /var/cache/munin/www
logdir /var/log/munin logdir /var/log/munin
rundir /var/run/munin rundir /var/run/munin
@ -23,19 +26,20 @@ includedir /etc/munin/munin-conf.d
# path dynazoom uses for requests # path dynazoom uses for requests
cgiurl_graph /admin/munin/cgi-graph cgiurl_graph /admin/munin/cgi-graph
# send alerts to the following address
contact.admin.command mail -s "Munin notification \${var:host}" administrator@$PRIMARY_HOSTNAME
contact.admin.always_send warning critical
# a simple host tree # a simple host tree
[$PRIMARY_HOSTNAME] [$PRIMARY_HOSTNAME]
address 127.0.0.1 address 127.0.0.1
# send alerts to the following address
contacts admin contacts admin
contact.admin.command mail -s "Munin notification \${var:host}" administrator@$PRIMARY_HOSTNAME
contact.admin.always_send warning critical
EOF EOF
# The Debian installer touches these files and chowns them to www-data:adm for use with spawn-fcgi # The Debian installer touches these files and chowns them to www-data:adm for use with spawn-fcgi
chown munin. /var/log/munin/munin-cgi-html.log chown munin /var/log/munin/munin-cgi-html.log
chown munin. /var/log/munin/munin-cgi-graph.log chown munin /var/log/munin/munin-cgi-graph.log
# ensure munin-node knows the name of this machine # ensure munin-node knows the name of this machine
# and reduce logging level to warning # and reduce logging level to warning
@ -70,6 +74,23 @@ hide_output systemctl daemon-reload
hide_output systemctl unmask munin.service hide_output systemctl unmask munin.service
hide_output systemctl enable munin.service hide_output systemctl enable munin.service
# Some more munin plugins
if [ -f /usr/share/munin/plugins/postfix_mailstats ] && [ ! -h /etc/munin/plugins/postfix_mailstats ]; then
ln -fs /usr/share/munin/plugins/postfix_mailstats /etc/munin/plugins/
fi
if [ -f /usr/share/munin/plugins/spamstats ] && [ ! -h /etc/munin/plugins/spamstats ]; then
ln -fs /usr/share/munin/plugins/spamstats /etc/munin/plugins/
fi
if [ -f /usr/share/munin/plugins/df_abs ] && [ ! -h /etc/munin/plugins/df_abs ]; then
ln -fs /usr/share/munin/plugins/df_abs /etc/munin/plugins/
fi
if [ -f /usr/share/munin/plugins/fail2ban ] && [ ! -h /etc/munin/plugins/fail2ban ]; then
ln -fs /usr/share/munin/plugins/fail2ban /etc/munin/plugins/
fi
# Restart services. # Restart services.
restart_service munin restart_service munin
restart_service munin-node restart_service munin-node
View File
@ -21,38 +21,38 @@ echo "Installing Nextcloud (contacts/calendar)..."
# we automatically install intermediate versions as needed. # we automatically install intermediate versions as needed.
# * The hash is the SHA1 hash of the ZIP package, which you can find by just running this script and # * The hash is the SHA1 hash of the ZIP package, which you can find by just running this script and
# copying it from the error message when it doesn't match what is below. # copying it from the error message when it doesn't match what is below.
nextcloud_ver=23.0.10 nextcloud_ver=24.0.9
nextcloud_hash=8831c7862e39460fbb789bacac8729fab0ba02dd nextcloud_hash=e7e7e580f95772c4e390e3b656129282b3967a16
# Nextcloud apps # Nextcloud apps
# -------------- # --------------
# * Find the most recent tag that is compatible with the Nextcloud version above by # * Find the most recent tag that is compatible with the Nextcloud version above by
# consulting the <dependencies>...<nextcloud> node at: # consulting the <dependencies>...<nextcloud> node at:
# https://github.com/nextcloud-releases/contacts/blob/main/appinfo/info.xml # https://github.com/nextcloud-releases/contacts/blob/master/appinfo/info.xml
# https://github.com/nextcloud-releases/calendar/blob/main/appinfo/info.xml # https://github.com/nextcloud-releases/calendar/blob/master/appinfo/info.xml
# https://github.com/nextcloud/user_external/blob/master/appinfo/info.xml # https://github.com/nextcloud/user_external/blob/master/appinfo/info.xml
# * The hash is the SHA1 hash of the ZIP package, which you can find by just running this script and # * The hash is the SHA1 hash of the ZIP package, which you can find by just running this script and
# copying it from the error message when it doesn't match what is below. # copying it from the error message when it doesn't match what is below.
contacts_ver=4.2.2 contacts_ver=4.2.2
contacts_hash=ca13d608ed8955aa374cb4f31b6026b57ef88887 contacts_hash=ca13d608ed8955aa374cb4f31b6026b57ef88887
calendar_ver=3.5.1 calendar_ver=3.5.5
calendar_hash=c8136a3deb872a3ef73ce1155b58f3ab27ec7110 calendar_hash=8505abcf7b3ab2f32d7ca1593b545e577cbeedb4
user_external_ver=3.0.0 user_external_ver=3.1.0
user_external_hash=0df781b261f55bbde73d8c92da3f99397000972f user_external_hash=399fe1150b28a69aaf5bfcad3227e85706604a44
# Clear prior packages and install dependencies from apt. # Clear prior packages and install dependencies from apt.
apt-get purge -qq -y owncloud* # we used to use the package manager apt-get purge -qq -y owncloud* # we used to use the package manager
apt_install curl php${PHP_VER} php${PHP_VER}-fpm \ apt_install php php-fpm \
php${PHP_VER}-cli php${PHP_VER}-sqlite3 php${PHP_VER}-gd php${PHP_VER}-imap php${PHP_VER}-curl \ php-cli php-sqlite3 php-gd php-imap php-curl php-pear curl \
php${PHP_VER}-dev php${PHP_VER}-gd php${PHP_VER}-xml php${PHP_VER}-mbstring php${PHP_VER}-zip php${PHP_VER}-apcu \ php-dev php-xml php-mbstring php-zip php-apcu php-json \
php${PHP_VER}-intl php${PHP_VER}-imagick php${PHP_VER}-gmp php${PHP_VER}-bcmath php-intl php-imagick php-gmp php-bcmath
# Enable APC before Nextcloud tools are run. # Enable APC before Nextcloud tools are run.
tools/editconf.py /etc/php/$PHP_VER/mods-available/apcu.ini -c ';' \ tools/editconf.py /etc/php/$PHP_VER/mods-available/apcu.ini -c ';' \
apc.enabled=1 \ apc.enabled=1 \
apc.enable_cli=1 apc.enable_cli=1
InstallNextcloud() { InstallNextcloud() {
@ -69,8 +69,8 @@ InstallNextcloud() {
echo "Upgrading to Nextcloud version $version" echo "Upgrading to Nextcloud version $version"
echo echo
# Download and verify # Download and verify
wget_verify https://download.nextcloud.com/server/releases/nextcloud-$version.zip $hash /tmp/nextcloud.zip wget_verify https://download.nextcloud.com/server/releases/nextcloud-$version.zip $hash /tmp/nextcloud.zip
# Remove the current owncloud/Nextcloud # Remove the current owncloud/Nextcloud
rm -rf /usr/local/lib/owncloud rm -rf /usr/local/lib/owncloud
@ -80,6 +80,9 @@ InstallNextcloud() {
mv /usr/local/lib/nextcloud /usr/local/lib/owncloud mv /usr/local/lib/nextcloud /usr/local/lib/owncloud
rm -f /tmp/nextcloud.zip rm -f /tmp/nextcloud.zip
# Empty the skeleton dir to save some space for each new user
rm -rf /usr/local/lib/owncloud/core/skeleton/*
# The two apps we actually want are not in Nextcloud core. Download the releases from # The two apps we actually want are not in Nextcloud core. Download the releases from
# their github repositories. # their github repositories.
mkdir -p /usr/local/lib/owncloud/apps mkdir -p /usr/local/lib/owncloud/apps
@ -95,7 +98,7 @@ InstallNextcloud() {
# Starting with Nextcloud 15, the app user_external is no longer included in Nextcloud core, # Starting with Nextcloud 15, the app user_external is no longer included in Nextcloud core,
# we will install from their github repository. # we will install from their github repository.
if [ -n "$version_user_external" ]; then if [ -n "$version_user_external" ]; then
wget_verify https://github.com/nextcloud-releases/user_external/releases/download/v$version_user_external/user_external-v$version_user_external.tar.gz $hash_user_external /tmp/user_external.tgz wget_verify https://github.com/nextcloud/user_external/archive/refs/tags/v$version_user_external.tar.gz $hash_user_external /tmp/user_external.tgz
tar -xf /tmp/user_external.tgz -C /usr/local/lib/owncloud/apps/ tar -xf /tmp/user_external.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/user_external.tgz rm /tmp/user_external.tgz
fi fi
@ -110,27 +113,27 @@ InstallNextcloud() {
# Make sure permissions are correct or the upgrade step won't run. # Make sure permissions are correct or the upgrade step won't run.
# $STORAGE_ROOT/owncloud may not yet exist, so use -f to suppress # $STORAGE_ROOT/owncloud may not yet exist, so use -f to suppress
# that error. # that error.
chown -f -R www-data.www-data $STORAGE_ROOT/owncloud /usr/local/lib/owncloud || /bin/true chown -f -R www-data:www-data $STORAGE_ROOT/owncloud /usr/local/lib/owncloud || /bin/true
# If this isn't a new installation, immediately run the upgrade script. # If this isn't a new installation, immediately run the upgrade script.
# Then check for success (0=ok and 3=no upgrade needed, both are success). # Then check for success (0=ok and 3=no upgrade needed, both are success).
if [ -e $STORAGE_ROOT/owncloud/owncloud.db ]; then if [ -e $STORAGE_ROOT/owncloud/owncloud.db ]; then
# ownCloud 8.1.1 broke upgrades. It may fail on the first attempt, but # ownCloud 8.1.1 broke upgrades. It may fail on the first attempt, but
# that can be OK. # that can be OK.
sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/occ upgrade sudo -u www-data php /usr/local/lib/owncloud/occ upgrade
if [ \( $? -ne 0 \) -a \( $? -ne 3 \) ]; then if [ \( $? -ne 0 \) -a \( $? -ne 3 \) ]; then
echo "Trying ownCloud upgrade again to work around ownCloud upgrade bug..." echo "Trying ownCloud upgrade again to work around ownCloud upgrade bug..."
sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/occ upgrade sudo -u www-data php /usr/local/lib/owncloud/occ upgrade
if [ \( $? -ne 0 \) -a \( $? -ne 3 \) ]; then exit 1; fi if [ \( $? -ne 0 \) -a \( $? -ne 3 \) ]; then exit 1; fi
sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/occ maintenance:mode --off sudo -u www-data php /usr/local/lib/owncloud/occ maintenance:mode --off
echo "...which seemed to work." echo "...which seemed to work."
fi fi
# Add missing indices. NextCloud didn't include this in the normal upgrade because it might take some time. # Add missing indices. NextCloud didn't include this in the normal upgrade because it might take some time.
sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/occ db:add-missing-indices sudo -u www-data php /usr/local/lib/owncloud/occ db:add-missing-indices
# Run conversion to BigInt identifiers, this process may take some time on large tables. # Run conversion to BigInt identifiers, this process may take some time on large tables.
sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/occ db:convert-filecache-bigint --no-interaction sudo -u www-data php /usr/local/lib/owncloud/occ db:convert-filecache-bigint --no-interaction
fi fi
} }
@ -141,8 +144,26 @@ InstallNextcloud() {
# application version than the database. # application version than the database.
# If config.php exists, get version number, otherwise CURRENT_NEXTCLOUD_VER is empty. # If config.php exists, get version number, otherwise CURRENT_NEXTCLOUD_VER is empty.
#
# Config unlocking, power-mailinabox#86
# If a configuration file already exists, remove the "readonly" tag before starting the upgrade. This is
# necessary (otherwise upgrades will fail).
#
# The lock will be re-applied further down the line when it's safe to do so.
CONFIG_TEMP=$(/bin/mktemp)
if [ -f "$STORAGE_ROOT/owncloud/config.php" ]; then if [ -f "$STORAGE_ROOT/owncloud/config.php" ]; then
CURRENT_NEXTCLOUD_VER=$(php$PHP_VER -r "include(\"$STORAGE_ROOT/owncloud/config.php\"); echo(\$CONFIG['version']);") CURRENT_NEXTCLOUD_VER=$(php -r "include(\"$STORAGE_ROOT/owncloud/config.php\"); echo(\$CONFIG['version']);")
# Unlock configuration directory for upgrades
php <<EOF > $CONFIG_TEMP && mv $CONFIG_TEMP $STORAGE_ROOT/owncloud/config.php;
<?php
include("$STORAGE_ROOT/owncloud/config.php");
\$CONFIG['config_is_read_only'] = false;
echo "<?php\n\\\$CONFIG = ";
var_export(\$CONFIG);
echo ";";
?>
EOF
else else
CURRENT_NEXTCLOUD_VER="" CURRENT_NEXTCLOUD_VER=""
fi fi
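The heredoc above re-reads config.php through PHP, flips config_is_read_only to false, and var_export()s the array back out; because the output goes to a mktemp file that is only mv'd into place on success, a failed rewrite never leaves a truncated config.php behind. The same write-then-rename pattern in Python terms, as a sketch with no Mail-in-a-Box specifics:

import shutil, tempfile

def rewrite_atomically(path, transform):
    # Read, transform, write to a temp file, then move the temp file over the original.
    with open(path) as f:
        text = f.read()
    with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
        tmp.write(transform(text))
    shutil.move(tmp.name, path)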
@ -184,15 +205,71 @@ if [ ! -d /usr/local/lib/owncloud/ ] || [[ ! ${CURRENT_NEXTCLOUD_VER} =~ ^$nextc
return 0 return 0
fi fi
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^20 ]]; then if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^20 ]]; then
InstallNextcloud 21.0.7 f5c7079c5b56ce1e301c6a27c0d975d608bb01c9 4.0.7 45e7cf4bfe99cd8d03625cf9e5a1bb2e90549136 3.0.4 d0284b68135777ec9ca713c307216165b294d0fe # Version 20 is the latest version from the 18.04 version of miab. To upgrade to version 21, install php8.0. This is
# not supported by Nextcloud 20, but that does not matter, as the InstallNextcloud function only runs the version 21 code.
# Install the ppa
add-apt-repository --yes ppa:ondrej/php
# Prevent installation of old packages
apt-mark hold php7.0-apcu php7.1-apcu php7.2-apcu php7.3-apcu php7.4-apcu
# Install older php version
apt_install php8.0 php8.0-fpm php8.0-apcu php8.0-cli php8.0-sqlite3 php8.0-gd php8.0-imap \
php8.0-curl php8.0-dev php8.0-xml php8.0-mbstring php8.0-zip
# set older php version as default
update-alternatives --set php /usr/bin/php8.0
tools/editconf.py /etc/php/$(php_version)/mods-available/apcu.ini -c ';' \
apc.enabled=1 \
apc.enable_cli=1
# Install nextcloud, this also updates user_external to 2.1.0
InstallNextcloud 21.0.7 f5c7079c5b56ce1e301c6a27c0d975d608bb01c9 4.0.7 45e7cf4bfe99cd8d03625cf9e5a1bb2e90549136 3.0.4 d0284b68135777ec9ca713c307216165b294d0fe 2.1.0 41d4c57371bd085d68421b52ab232092d7dfc882
CURRENT_NEXTCLOUD_VER="21.0.7" CURRENT_NEXTCLOUD_VER="21.0.7"
fi fi
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^21 ]]; then if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^21 ]]; then
InstallNextcloud 22.2.6 9d39741f051a8da42ff7df46ceef2653a1dc70d9 4.1.0 697f6b4a664e928d72414ea2731cb2c9d1dc3077 3.2.2 ce4030ab57f523f33d5396c6a81396d440756f5f 3.0.0 0df781b261f55bbde73d8c92da3f99397000972f InstallNextcloud 22.2.3 58d2d897ba22a057aa03d29c762c5306211fefd2 4.0.7 45e7cf4bfe99cd8d03625cf9e5a1bb2e90549136 3.0.4 d0284b68135777ec9ca713c307216165b294d0fe 2.1.0 41d4c57371bd085d68421b52ab232092d7dfc882
CURRENT_NEXTCLOUD_VER="22.2.6" CURRENT_NEXTCLOUD_VER="22.2.3"
fi
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^22 ]]; then
InstallNextcloud 23.0.2 645cba42cab57029ebe29fb93906f58f7abea5f8 4.0.8 fc626ec02732da13a4c600baae64ab40557afdca 3.0.6 e40d919b4b7988b46671a78cb32a43d8c7cba332 3.0.0 9e7aaf7288032bd463c480bc368ff91869122950
CURRENT_NEXTCLOUD_VER="23.0.2"
# Remove older php version
update-alternatives --auto php
apt-get purge -qq -y php8.0 php8.0-fpm php8.0-apcu php8.0-cli php8.0-sqlite3 php8.0-gd \
php8.0-imap php8.0-curl php8.0-dev php8.0-xml php8.0-mbstring php8.0-zip \
php8.0-common php8.0-opcache php8.0-readline
# Remove the ppa
add-apt-repository --yes --remove ppa:ondrej/php
		fi
	fi
# nextcloud version - supported php versions
# 20 - 7.2, 7.3, 7.4
# 21 - 7.3, 7.4, 8.0
# 22 - 7.3, 7.4, 8.0
# 23 - 7.3, 7.4, 8.0
# 24 - 7.4, 8.0, 8.1
#
# ubuntu 18.04 has php 7.2
# ubuntu 22.04 has php 8.1
#
	# user_external 2.1.0 supports version 21-22
	# user_external 3.0.0 supports version 22-24
#
# upgrade path
# - install ppa: sudo add-apt-repository ppa:ondrej/php
# - upgrade php to version 8.0 (nextcloud will no longer function)
# - upgrade nextcloud to 21 and user_external to 2.1.0
# - upgrade nextcloud to 22
# - upgrade nextcloud to 23 and user_external to 3.0.0
# - upgrade nextcloud to 24
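Before re-running setup on an existing box it can be useful to see where the installation currently sits on this upgrade path. A minimal read-only check, using the same config.php lookup as the version detection at the top of this script (assumes STORAGE_ROOT is set, e.g. /home/user-data):

	# Print the currently installed Nextcloud version and the active PHP CLI version.
	php -r "include('$STORAGE_ROOT/owncloud/config.php'); echo \$CONFIG['version'] . PHP_EOL;"
	php --version | head -n 1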
	InstallNextcloud $nextcloud_ver $nextcloud_hash $contacts_ver $contacts_hash $calendar_ver $calendar_hash $user_external_ver $user_external_hash
fi

@@ -220,9 +297,9 @@ if [ ! -f $STORAGE_ROOT/owncloud/owncloud.db ]; then
'user_backends' => array( 'user_backends' => array(
array( array(
'class' => '\OCA\UserExternal\IMAP', 'class' => '\OCA\UserExternal\IMAP',
'arguments' => array( 'arguments' => array(
'127.0.0.1', 143, null, null, false, false '127.0.0.1', 143, null, null, false, false
), ),
), ),
), ),
'memcache.local' => '\OC\Memcache\APCu', 'memcache.local' => '\OC\Memcache\APCu',
@ -259,12 +336,12 @@ EOF
EOF EOF
# Set permissions # Set permissions
chown -R www-data.www-data $STORAGE_ROOT/owncloud /usr/local/lib/owncloud chown -R www-data:www-data $STORAGE_ROOT/owncloud /usr/local/lib/owncloud
# Execute Nextcloud's setup step, which creates the Nextcloud sqlite database. # Execute Nextcloud's setup step, which creates the Nextcloud sqlite database.
# It also wipes it if it exists. And it updates config.php with database # It also wipes it if it exists. And it updates config.php with database
# settings and deletes the autoconfig.php file. # settings and deletes the autoconfig.php file.
(cd /usr/local/lib/owncloud; sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/index.php;) (cd /usr/local/lib/owncloud; sudo -u www-data php /usr/local/lib/owncloud/index.php;)
fi fi
# Update config.php. # Update config.php.
@ -279,8 +356,7 @@ fi
# the correct domain name if the domain is being change from the previous setup. # the correct domain name if the domain is being change from the previous setup.
# Use PHP to read the settings file, modify it, and write out the new settings array. # Use PHP to read the settings file, modify it, and write out the new settings array.
TIMEZONE=$(cat /etc/timezone) TIMEZONE=$(cat /etc/timezone)
CONFIG_TEMP=$(/bin/mktemp) php <<EOF > $CONFIG_TEMP && mv $CONFIG_TEMP $STORAGE_ROOT/owncloud/config.php;
php$PHP_VER <<EOF > $CONFIG_TEMP && mv $CONFIG_TEMP $STORAGE_ROOT/owncloud/config.php;
<?php <?php
include("$STORAGE_ROOT/owncloud/config.php"); include("$STORAGE_ROOT/owncloud/config.php");
@ -294,6 +370,8 @@ include("$STORAGE_ROOT/owncloud/config.php");
\$CONFIG['logtimezone'] = '$TIMEZONE'; \$CONFIG['logtimezone'] = '$TIMEZONE';
\$CONFIG['logdateformat'] = 'Y-m-d H:i:s'; \$CONFIG['logdateformat'] = 'Y-m-d H:i:s';
\$CONFIG['log_type'] = 'syslog';
\$CONFIG['syslog_tag'] = 'Nextcloud';
\$CONFIG['mail_domain'] = '$PRIMARY_HOSTNAME'; \$CONFIG['mail_domain'] = '$PRIMARY_HOSTNAME';
@ -311,28 +389,41 @@ var_export(\$CONFIG);
echo ";"; echo ";";
?> ?>
EOF EOF
chown www-data.www-data $STORAGE_ROOT/owncloud/config.php
chown www-data:www-data $STORAGE_ROOT/owncloud/config.php
# Enable/disable apps. Note that this must be done after the Nextcloud setup. # Enable/disable apps. Note that this must be done after the Nextcloud setup.
# The firstrunwizard gave Josh all sorts of problems, so disabling that. # The firstrunwizard gave Josh all sorts of problems, so disabling that.
# user_external is what allows Nextcloud to use IMAP for login. The contacts # user_external is what allows Nextcloud to use IMAP for login. The contacts
# and calendar apps are the extensions we really care about here. # and calendar apps are the extensions we really care about here.
hide_output sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/console.php app:disable firstrunwizard hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:disable firstrunwizard
hide_output sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/console.php app:enable user_external hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable user_external
hide_output sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/console.php app:enable contacts hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable contacts
hide_output sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/console.php app:enable calendar hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable calendar
# When upgrading, run the upgrade script again now that apps are enabled. It seems like # When upgrading, run the upgrade script again now that apps are enabled. It seems like
# the first upgrade at the top won't work because apps may be disabled during upgrade? # the first upgrade at the top won't work because apps may be disabled during upgrade?
# Check for success (0=ok, 3=no upgrade needed). # Check for success (0=ok, 3=no upgrade needed).
sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/occ upgrade sudo -u www-data php /usr/local/lib/owncloud/occ upgrade
if [ \( $? -ne 0 \) -a \( $? -ne 3 \) ]; then exit 1; fi if [ \( $? -ne 0 \) -a \( $? -ne 3 \) ]; then exit 1; fi
# Disable default apps that we don't support # Disable default apps that we don't support
sudo -u www-data \ sudo -u www-data \
php$PHP_VER /usr/local/lib/owncloud/occ app:disable photos dashboard activity \ php /usr/local/lib/owncloud/occ app:disable photos dashboard activity \
| (grep -v "No such app enabled" || /bin/true) | (grep -v "No such app enabled" || /bin/true)
# Install interesting apps
(sudo -u www-data php /usr/local/lib/owncloud/occ app:install notes) || true
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable notes
(sudo -u www-data php /usr/local/lib/owncloud/occ app:install twofactor_totp) || true
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable twofactor_totp
# upgrade apps
sudo -u www-data php /usr/local/lib/owncloud/occ app:update --all
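To confirm what the installs and updates above actually produced, occ can list every app and its enabled/disabled state (read-only check):

	# List installed Nextcloud apps and whether they are enabled.
	sudo -u www-data php /usr/local/lib/owncloud/occ app:list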
# Set PHP FPM values to support large file uploads # Set PHP FPM values to support large file uploads
# (semicolon is the comment character in this file, hashes produce deprecation warnings) # (semicolon is the comment character in this file, hashes produce deprecation warnings)
tools/editconf.py /etc/php/$PHP_VER/fpm/php.ini -c ';' \ tools/editconf.py /etc/php/$PHP_VER/fpm/php.ini -c ';' \
@ -363,7 +454,7 @@ sqlite3 $STORAGE_ROOT/owncloud/owncloud.db "UPDATE oc_users_external SET backend
cat > /etc/cron.d/mailinabox-nextcloud << EOF; cat > /etc/cron.d/mailinabox-nextcloud << EOF;
#!/bin/bash #!/bin/bash
# Mail-in-a-Box # Mail-in-a-Box
*/5 * * * * root sudo -u www-data php$PHP_VER -f /usr/local/lib/owncloud/cron.php */5 * * * * root sudo -u www-data php -f /usr/local/lib/owncloud/cron.php
EOF EOF
chmod +x /etc/cron.d/mailinabox-nextcloud chmod +x /etc/cron.d/mailinabox-nextcloud

@ -7,7 +7,7 @@ if [[ $EUID -ne 0 ]]; then
exit 1 exit 1
fi fi
-# Check that we are running on Ubuntu 20.04 LTS (or 20.04.xx).
+# Check that we are running on Ubuntu 22.04 LTS (or 22.04.xx).
if [ "$( lsb_release --id --short )" != "Ubuntu" ] || [ "$( lsb_release --release --short )" != "22.04" ]; then if [ "$( lsb_release --id --short )" != "Ubuntu" ] || [ "$( lsb_release --release --short )" != "22.04" ]; then
echo "Mail-in-a-Box only supports being installed on Ubuntu 22.04, sorry. You are running:" echo "Mail-in-a-Box only supports being installed on Ubuntu 22.04, sorry. You are running:"
echo echo

@ -9,7 +9,7 @@ if [ -z "${NONINTERACTIVE:-}" ]; then
if [ ! -f /usr/bin/dialog ] || [ ! -f /usr/bin/python3 ] || [ ! -f /usr/bin/pip3 ]; then if [ ! -f /usr/bin/dialog ] || [ ! -f /usr/bin/python3 ] || [ ! -f /usr/bin/pip3 ]; then
echo Installing packages needed for setup... echo Installing packages needed for setup...
apt-get -q -q update apt-get -q -q update
apt_get_quiet install dialog python3 python3-pip || exit 1 apt_get_quiet install dialog file python3 python3-pip || exit 1
fi fi
# Installing email_validator is repeated in setup/management.sh, but in setup/management.sh # Installing email_validator is repeated in setup/management.sh, but in setup/management.sh
@ -119,6 +119,24 @@ if [ -z "${PUBLIC_IP:-}" ]; then
fi fi
fi fi
if [ -z "${ADMIN_HOME_IP:-}" ]; then
if [ -z "${DEFAULT_ADMIN_HOME_IP:-}" ]; then
input_box "Admin Home IP Address" \
"Enter the public IP address of the admin home, as given to you by your ISP.
This will be used to prevent banning of the administrator IP address.
\n\nAdmin Home IP address:" \
"" \
ADMIN_HOME_IP
else
ADMIN_HOME_IP=$DEFAULT_ADMIN_HOME_IP
fi
fi
if [ -z "${ADMIN_HOME_IP:-}" ]; then
ADMIN_HOME_IP=""
fi
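For a non-interactive run this prompt can be skipped by pre-seeding the DEFAULT_ADMIN_HOME_IP variable read above. A minimal sketch, assuming setup is launched from a git checkout; the address shown is a placeholder:

	# Pre-seed the admin home IP so the dialog above never appears.
	export NONINTERACTIVE=1
	export DEFAULT_ADMIN_HOME_IP=203.0.113.10   # placeholder; use your own static IP
	sudo -E setup/start.sh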
# Same for IPv6. But it's optional. Also, if it looks like the system # Same for IPv6. But it's optional. Also, if it looks like the system
# doesn't have an IPv6, don't ask for one. # doesn't have an IPv6, don't ask for one.
if [ -z "${PUBLIC_IPV6:-}" ]; then if [ -z "${PUBLIC_IPV6:-}" ]; then
@ -193,6 +211,11 @@ if [ -z "${STORAGE_ROOT:-}" ]; then
STORAGE_ROOT=$([[ -z "${DEFAULT_STORAGE_ROOT:-}" ]] && echo "/home/$STORAGE_USER" || echo "$DEFAULT_STORAGE_ROOT") STORAGE_ROOT=$([[ -z "${DEFAULT_STORAGE_ROOT:-}" ]] && echo "/home/$STORAGE_USER" || echo "$DEFAULT_STORAGE_ROOT")
fi fi
# Set BACKUP_ROOT to default (empty) value, unless we've already got it from a previous run.
if [ -z "${BACKUP_ROOT:-}" ]; then
BACKUP_ROOT=""
fi
# Show the configuration, since the user may have not entered it manually. # Show the configuration, since the user may have not entered it manually.
echo echo
echo "Primary Hostname: $PRIMARY_HOSTNAME" echo "Primary Hostname: $PRIMARY_HOSTNAME"
@ -206,7 +229,10 @@ fi
if [ "$PRIVATE_IPV6" != "$PUBLIC_IPV6" ]; then if [ "$PRIVATE_IPV6" != "$PUBLIC_IPV6" ]; then
echo "Private IPv6 Address: $PRIVATE_IPV6" echo "Private IPv6 Address: $PRIVATE_IPV6"
fi fi
if [ -n "$ADMIN_HOME_IP" ]; then
echo "Admin Home IP Address: $ADMIN_HOME_IP"
fi
if [ -f /usr/bin/git ] && [ -d .git ]; then if [ -f /usr/bin/git ] && [ -d .git ]; then
echo "Mail-in-a-Box Version: " $(git describe) echo "Mail-in-a-Box Version: " $(git describe --tags)
fi fi
echo echo

@ -195,3 +195,4 @@ chmod 770 $STORAGE_ROOT/mail/spamassassin
restart_service spampd restart_service spampd
restart_service dovecot restart_service dovecot
systemctl enable spamassassin.service

@ -28,7 +28,7 @@ source /etc/mailinabox.conf # load global vars
if [ ! -f /usr/bin/openssl ] \ if [ ! -f /usr/bin/openssl ] \
|| [ ! -f $STORAGE_ROOT/ssl/ssl_private_key.pem ] \ || [ ! -f $STORAGE_ROOT/ssl/ssl_private_key.pem ] \
|| [ ! -f $STORAGE_ROOT/ssl/ssl_certificate.pem ] \ || [ ! -f $STORAGE_ROOT/ssl/ssl_certificate.pem ] \
|| [ ! -f $STORAGE_ROOT/ssl/dh2048.pem ]; then || [ ! -f $STORAGE_ROOT/ssl/dh4096.pem ]; then
echo "Creating initial SSL certificate and perfect forward secrecy Diffie-Hellman parameters..." echo "Creating initial SSL certificate and perfect forward secrecy Diffie-Hellman parameters..."
fi fi
@ -40,6 +40,9 @@ apt_install openssl
mkdir -p $STORAGE_ROOT/ssl mkdir -p $STORAGE_ROOT/ssl
# make directory readable
chmod 755 $STORAGE_ROOT/ssl
# Generate a new private key. # Generate a new private key.
# #
# The key is only as good as the entropy available to openssl so that it # The key is only as good as the entropy available to openssl so that it
@ -63,7 +66,7 @@ mkdir -p $STORAGE_ROOT/ssl
if [ ! -f $STORAGE_ROOT/ssl/ssl_private_key.pem ]; then if [ ! -f $STORAGE_ROOT/ssl/ssl_private_key.pem ]; then
# Set the umask so the key file is never world-readable. # Set the umask so the key file is never world-readable.
(umask 077; hide_output \ (umask 077; hide_output \
openssl genrsa -out $STORAGE_ROOT/ssl/ssl_private_key.pem 2048) openssl genrsa -out $STORAGE_ROOT/ssl/ssl_private_key.pem 4096)
fi fi
# Generate a self-signed SSL certificate because things like nginx, dovecot, # Generate a self-signed SSL certificate because things like nginx, dovecot,
@ -90,9 +93,7 @@ if [ ! -f $STORAGE_ROOT/ssl/ssl_certificate.pem ]; then
ln -s $CERT $STORAGE_ROOT/ssl/ssl_certificate.pem ln -s $CERT $STORAGE_ROOT/ssl/ssl_certificate.pem
fi fi
-# Generate some Diffie-Hellman cipher bits.
-# openssl's default bit length for this is 1024 bits, but we'll create
-# 2048 bits of bits per the latest recommendations.
-if [ ! -f $STORAGE_ROOT/ssl/dh2048.pem ]; then
-	openssl dhparam -out $STORAGE_ROOT/ssl/dh2048.pem 2048
-fi
+# We no longer generate Diffie-Hellman cipher bits. Following rfc7919 we use
+# a predefined finite field group, in this case ffdhe4096 from
+# https://raw.githubusercontent.com/internetstandards/dhe_groups/master/ffdhe4096.pem
+cp -f conf/dh4096.pem $STORAGE_ROOT/ssl/

@ -14,9 +14,14 @@ source setup/preflight.sh
# Python may not be able to read/write files. This is also # Python may not be able to read/write files. This is also
# in the management daemon startup script and the cron script. # in the management daemon startup script and the cron script.
# Make sure we have locales at all (some images are THAT minimal)
apt_get_quiet install locales
if ! locale -a | grep en_US.utf8 > /dev/null; then
	echo "Generating locales..."
	# Generate locale if not exists
-	hide_output locale-gen en_US.UTF-8
+	echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen
+	hide_output locale-gen
fi
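A quick way to confirm the locale is available after this step, which is the same test the condition above performs:

	# Should print en_US.utf8 once the locale has been generated.
	locale -a | grep -i en_US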
export LANGUAGE=en_US.UTF-8 export LANGUAGE=en_US.UTF-8
@ -53,8 +58,8 @@ chmod +x /usr/local/bin/mailinabox
# Ask the user for the PRIMARY_HOSTNAME, PUBLIC_IP, and PUBLIC_IPV6, # Ask the user for the PRIMARY_HOSTNAME, PUBLIC_IP, and PUBLIC_IPV6,
# if values have not already been set in environment variables. When running # if values have not already been set in environment variables. When running
# non-interactively, be sure to set values for all! Also sets STORAGE_USER and # non-interactively, be sure to set values for all! Also sets STORAGE_USER,
# STORAGE_ROOT. # STORAGE_ROOT and BACKUP_ROOT.
source setup/questions.sh source setup/questions.sh
# Run some network checks to make sure setup on this machine makes sense. # Run some network checks to make sure setup on this machine makes sense.
@ -85,7 +90,7 @@ f=$STORAGE_ROOT
while [[ $f != / ]]; do chmod a+rx "$f"; f=$(dirname "$f"); done; while [[ $f != / ]]; do chmod a+rx "$f"; f=$(dirname "$f"); done;
if [ ! -f $STORAGE_ROOT/mailinabox.version ]; then if [ ! -f $STORAGE_ROOT/mailinabox.version ]; then
setup/migrate.py --current > $STORAGE_ROOT/mailinabox.version setup/migrate.py --current > $STORAGE_ROOT/mailinabox.version
chown $STORAGE_USER.$STORAGE_USER $STORAGE_ROOT/mailinabox.version chown $STORAGE_USER:$STORAGE_USER $STORAGE_ROOT/mailinabox.version
fi fi
# Save the global options in /etc/mailinabox.conf so that standalone # Save the global options in /etc/mailinabox.conf so that standalone
@ -95,29 +100,34 @@ fi
cat > /etc/mailinabox.conf << EOF; cat > /etc/mailinabox.conf << EOF;
STORAGE_USER=$STORAGE_USER STORAGE_USER=$STORAGE_USER
STORAGE_ROOT=$STORAGE_ROOT STORAGE_ROOT=$STORAGE_ROOT
BACKUP_ROOT=$BACKUP_ROOT
PRIMARY_HOSTNAME=$PRIMARY_HOSTNAME PRIMARY_HOSTNAME=$PRIMARY_HOSTNAME
PUBLIC_IP=$PUBLIC_IP PUBLIC_IP=$PUBLIC_IP
PUBLIC_IPV6=$PUBLIC_IPV6 PUBLIC_IPV6=$PUBLIC_IPV6
PRIVATE_IP=$PRIVATE_IP PRIVATE_IP=$PRIVATE_IP
PRIVATE_IPV6=$PRIVATE_IPV6 PRIVATE_IPV6=$PRIVATE_IPV6
MTA_STS_MODE=${DEFAULT_MTA_STS_MODE:-enforce} MTA_STS_MODE=${DEFAULT_MTA_STS_MODE:-enforce}
ADMIN_HOME_IP=$ADMIN_HOME_IP
ADMIN_HOME_IPV6=
EOF EOF
# Start service configuration. # Start service configuration.
source setup/system.sh source setup/system.sh
source setup/geoiptoolssetup.sh
source setup/ssl.sh source setup/ssl.sh
source setup/dns.sh source setup/dns.sh
source setup/mail-postfix.sh source setup/mail-postfix.sh
source setup/mail-dovecot.sh source setup/mail-dovecot.sh
source setup/mail-users.sh source setup/mail-users.sh
source setup/dovecot-fts-xapian.sh
source setup/dkim.sh source setup/dkim.sh
source setup/spamassassin.sh source setup/spamassassin.sh
source setup/web.sh source setup/web.sh
source setup/webmail.sh source setup/webmail.sh
source setup/nextcloud.sh source setup/nextcloud.sh
source setup/zpush.sh
source setup/management.sh source setup/management.sh
source setup/munin.sh source setup/munin.sh
source setup/additionals.sh
# Wait for the management daemon to start... # Wait for the management daemon to start...
until nc -z -w 4 127.0.0.1 10222 until nc -z -w 4 127.0.0.1 10222
@ -167,7 +177,7 @@ if management/status_checks.py --check-primary-hostname; then
echo "If you have a DNS problem put the box's IP address in the URL" echo "If you have a DNS problem put the box's IP address in the URL"
echo "(https://$PUBLIC_IP/admin) but then check the TLS fingerprint:" echo "(https://$PUBLIC_IP/admin) but then check the TLS fingerprint:"
openssl x509 -in $STORAGE_ROOT/ssl/ssl_certificate.pem -noout -fingerprint -sha256\ openssl x509 -in $STORAGE_ROOT/ssl/ssl_certificate.pem -noout -fingerprint -sha256\
| sed "s/SHA256 Fingerprint=//" | sed "s/SHA256 Fingerprint=//i"
else else
echo https://$PUBLIC_IP/admin echo https://$PUBLIC_IP/admin
echo echo
@ -175,7 +185,7 @@ else
echo the certificate fingerprint matches: echo the certificate fingerprint matches:
echo echo
openssl x509 -in $STORAGE_ROOT/ssl/ssl_certificate.pem -noout -fingerprint -sha256\ openssl x509 -in $STORAGE_ROOT/ssl/ssl_certificate.pem -noout -fingerprint -sha256\
| sed "s/SHA256 Fingerprint=//" | sed "s/SHA256 Fingerprint=//i"
echo echo
echo Then you can confirm the security exception and continue. echo Then you can confirm the security exception and continue.
echo echo

@ -82,6 +82,8 @@ fi
# (See https://discourse.mailinabox.email/t/journalctl-reclaim-space-on-small-mailinabox/6728/11.) # (See https://discourse.mailinabox.email/t/journalctl-reclaim-space-on-small-mailinabox/6728/11.)
tools/editconf.py /etc/systemd/journald.conf MaxRetentionSec=10day tools/editconf.py /etc/systemd/journald.conf MaxRetentionSec=10day
hide_output systemctl restart systemd-journald.service
# ### Add PPAs. # ### Add PPAs.
# We install some non-standard Ubuntu packages maintained by other # We install some non-standard Ubuntu packages maintained by other
@ -99,11 +101,6 @@ hide_output add-apt-repository -y universe
# Install the duplicity PPA. # Install the duplicity PPA.
hide_output add-apt-repository -y ppa:duplicity-team/duplicity-release-git hide_output add-apt-repository -y ppa:duplicity-team/duplicity-release-git
# Stock PHP is now 8.1, but we're transitioning through 8.0 because
# of Nextcloud.
hide_output add-apt-repository --y ppa:ondrej/php
# ### Update Packages # ### Update Packages
# Update system packages to make sure we have the latest upstream versions # Update system packages to make sure we have the latest upstream versions
@ -124,6 +121,9 @@ apt_get_quiet autoremove
# Install basic utilities. # Install basic utilities.
# #
# * haveged: Provides extra entropy to /dev/random so it doesn't stall
# when generating random numbers for private keys (e.g. during
# ldns-keygen).
# * unattended-upgrades: Apt tool to install security updates automatically. # * unattended-upgrades: Apt tool to install security updates automatically.
# * cron: Runs background processes periodically. # * cron: Runs background processes periodically.
# * ntp: keeps the system time correct # * ntp: keeps the system time correct
@ -137,9 +137,9 @@ apt_get_quiet autoremove
echo Installing system packages... echo Installing system packages...
apt_install python3 python3-dev python3-pip python3-setuptools \ apt_install python3 python3-dev python3-pip python3-setuptools \
netcat-openbsd wget curl git sudo coreutils bc file \ netcat-openbsd wget curl git sudo coreutils bc \
pollinate openssh-client unzip \ haveged pollinate openssh-client unzip \
unattended-upgrades cron ntp fail2ban rsyslog unattended-upgrades cron ntp fail2ban rsyslog file
# ### Suppress Upgrade Prompts # ### Suppress Upgrade Prompts
# When Ubuntu 20 comes out, we don't want users to be prompted to upgrade, # When Ubuntu 20 comes out, we don't want users to be prompted to upgrade,
@ -254,6 +254,21 @@ APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::Verbose "0"; APT::Periodic::Verbose "0";
EOF EOF
# Adjust apt update and upgrade timers such that they're always before daily status
# checks and thus never report upgrades unless user intervention is necessary.
mkdir -p /etc/systemd/system/apt-daily.timer.d
cat > /etc/systemd/system/apt-daily.timer.d/override.conf <<EOF;
[Timer]
RandomizedDelaySec=5h
EOF
mkdir -p /etc/systemd/system/apt-daily-upgrade.timer.d
cat > /etc/systemd/system/apt-daily-upgrade.timer.d/override.conf <<EOF;
[Timer]
OnCalendar=
OnCalendar=*-*-* 23:30
EOF
# ### Firewall # ### Firewall
# Various virtualized environments like Docker and some VPSs don't provide #NODOC # Various virtualized environments like Docker and some VPSs don't provide #NODOC
@ -263,20 +278,21 @@ if [ -z "${DISABLE_FIREWALL:-}" ]; then
# Install `ufw` which provides a simple firewall configuration. # Install `ufw` which provides a simple firewall configuration.
apt_install ufw apt_install ufw
# Allow incoming connections to SSH.
ufw_limit ssh;
# ssh might be running on an alternate port. Use sshd -T to dump sshd's #NODOC # ssh might be running on an alternate port. Use sshd -T to dump sshd's #NODOC
# settings, find the port it is supposedly running on, and open that port #NODOC # settings, find the port it is supposedly running on, and open that port #NODOC
# too. #NODOC # too. #NODOC
SSH_PORT=$(sshd -T 2>/dev/null | grep "^port " | sed "s/port //") #NODOC SSH_PORT=$(sshd -T 2>/dev/null | grep "^port " | sed "s/port //") #NODOC
if [ ! -z "$SSH_PORT" ]; then if [ ! -z "$SSH_PORT" ]; then
if [ "$SSH_PORT" != "22" ]; then if [ "$SSH_PORT" != "22" ]; then
echo Opening alternate SSH port $SSH_PORT. #NODOC
echo Opening alternate SSH port $SSH_PORT. #NODOC ufw_limit $SSH_PORT #NODOC
ufw_limit $SSH_PORT #NODOC else
# Allow incoming connections to SSH.
fi ufw_limit ssh;
fi
else
# Allow incoming connections to SSH.
ufw_limit ssh;
fi fi
ufw --force enable; ufw --force enable;
@ -314,45 +330,42 @@ fi #NODOC
# DNS server, which won't work for RBLs. So we really need a local recursive # DNS server, which won't work for RBLs. So we really need a local recursive
# nameserver. # nameserver.
# #
-# We'll install `bind9`, which as packaged for Ubuntu, has DNSSEC enabled by default via "dnssec-validation auto".
+# We'll install unbound, which as packaged for Ubuntu, has DNSSEC enabled by default.
# We'll have it be bound to 127.0.0.1 so that it does not interfere with
# the public, authoritative nameserver `nsd` bound to the public ethernet interfaces.
#
# About the settings: # remove bind9 in case it is still there
# apt-get purge -qq -y bind9 bind9-utils
# * Adding -4 to OPTIONS will have `bind9` not listen on IPv6 addresses
# so that we're sure there's no conflict with nsd, our public domain # Install unbound and dns utils (e.g. dig)
# name server, on IPV6. apt_install unbound python3-unbound bind9-dnsutils
# * The listen-on directive in named.conf.options restricts `bind9` to
# binding to the loopback interface instead of all interfaces. # Configure unbound
# * The max-recursion-queries directive increases the maximum number of iterative queries. cp -f conf/unbound.conf /etc/unbound/unbound.conf.d/miabunbound.conf
# If more queries than specified are sent, bind9 returns SERVFAIL. After flushing the cache during system checks,
# we ran into the limit thus we are increasing it from 75 (default value) to 100. mkdir -p /etc/unbound/lists.d
apt_install bind9
tools/editconf.py /etc/default/named \ systemctl restart unbound
"OPTIONS=\"-u bind -4\""
if ! grep -q "listen-on " /etc/bind/named.conf.options; then unbound-control -q status
# Add a listen-on directive if it doesn't exist inside the options block.
sed -i "s/^}/\n\tlisten-on { 127.0.0.1; };\n}/" /etc/bind/named.conf.options # Only reset the local dns settings if unbound server is running, otherwise we'll
fi # up with a system with an unusable internet connection
if ! grep -q "max-recursion-queries " /etc/bind/named.conf.options; then if [ $? -ne 0 ]; then
# Add a max-recursion-queries directive if it doesn't exist inside the options block. echo "Recursive DNS server not active"
sed -i "s/^}/\n\tmax-recursion-queries 100;\n}/" /etc/bind/named.conf.options exit 1
fi fi
# First we'll disable systemd-resolved's management of resolv.conf and its stub server. # Modify systemd settings
# Breaking the symlink to /run/systemd/resolve/stub-resolv.conf means
# systemd-resolved will read it for DNS servers to use. Put in 127.0.0.1,
# which is where bind9 will be running. Obviously don't do this before
# installing bind9 or else apt won't be able to resolve a server to
# download bind9 from.
rm -f /etc/resolv.conf
-tools/editconf.py /etc/systemd/resolved.conf DNSStubListener=no
+tools/editconf.py /etc/systemd/resolved.conf \
+	DNS=127.0.0.1 \
+	DNSSEC=yes \
+	DNSStubListener=no
echo "nameserver 127.0.0.1" > /etc/resolv.conf

# Restart the DNS services.
-restart_service bind9
systemctl restart systemd-resolved
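Once unbound is in place and resolv.conf points at 127.0.0.1, two read-only queries confirm that recursion and DNSSEC validation work. This is a sketch; any DNSSEC-signed zone will do for the second check, and the 'ad' flag in the header indicates validated data:

	# Resolve through the local recursive nameserver.
	dig +short @127.0.0.1 example.com
	# DNSSEC validation check: the flags line should include 'ad'.
	dig +dnssec @127.0.0.1 example.com | grep flags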
# ### Fail2Ban Service # ### Fail2Ban Service
@ -360,12 +373,41 @@ systemctl restart systemd-resolved
# Configure the Fail2Ban installation to prevent dumb brute-force attacks against dovecot, postfix, ssh, etc.
rm -f /etc/fail2ban/jail.local # we used to use this file but don't anymore rm -f /etc/fail2ban/jail.local # we used to use this file but don't anymore
rm -f /etc/fail2ban/jail.d/defaults-debian.conf # removes default config so we can manage all of fail2ban rules in one config rm -f /etc/fail2ban/jail.d/defaults-debian.conf # removes default config so we can manage all of fail2ban rules in one config
if [ ! -z "$ADMIN_HOME_IPV6" ]; then
ADMIN_HOME_IPV6_FB="${ADMIN_HOME_IPV6}/64"
else
ADMIN_HOME_IPV6_FB=""
fi
cat conf/fail2ban/jails.conf \ cat conf/fail2ban/jails.conf \
| sed "s/PUBLIC_IPV6/$PUBLIC_IPV6/g" \ | sed "s/PUBLIC_IPV6/$PUBLIC_IPV6/g" \
| sed "s/PUBLIC_IP/$PUBLIC_IP/g" \ | sed "s/PUBLIC_IP/$PUBLIC_IP/g" \
| sed "s/ADMIN_HOME_IPV6/$ADMIN_HOME_IPV6_FB/g" \
| sed "s/ADMIN_HOME_IP/$ADMIN_HOME_IP/g" \
| sed "s#STORAGE_ROOT#$STORAGE_ROOT#" \ | sed "s#STORAGE_ROOT#$STORAGE_ROOT#" \
> /etc/fail2ban/jail.d/mailinabox.conf > /etc/fail2ban/jail.d/00-mailinabox.conf
cp -f conf/fail2ban/filter.d/* /etc/fail2ban/filter.d/ cp -f conf/fail2ban/filter.d/* /etc/fail2ban/filter.d/
cp -f conf/fail2ban/jail.d/* /etc/fail2ban/jail.d/
# If SSH port is not default, add the not default to the ssh jail
if [ ! -z "$SSH_PORT" ]; then
# create backup copy
cp -f /etc/fail2ban/jail.conf /etc/fail2ban/jail.conf.miab_old
if [ "$SSH_PORT" != "22" ]; then
# Add alternative SSH port
sed -i "s/port[ ]\+=[ ]\+ssh$/port = ssh,$SSH_PORT/g" /etc/fail2ban/jail.conf
sed -i "s/port[ ]\+=[ ]\+ssh$/port = ssh,$SSH_PORT/g" /etc/fail2ban/jail.d/geoipblock.conf
else
# Set SSH port to default
sed -i "s/port[ ]\+=[ ]\+ssh/port = ssh/g" /etc/fail2ban/jail.conf
sed -i "s/port[ ]\+=[ ]\+ssh/port = ssh/g" /etc/fail2ban/jail.d/geoipblock.conf
fi
fi
# fail2ban should be able to look back far enough because we increased findtime of recidive jail
tools/editconf.py /etc/fail2ban/fail2ban.conf dbpurgeage=7d
# On first installation, the log files that the jails look at don't all exist. # On first installation, the log files that the jails look at don't all exist.
# e.g., The roundcube error log isn't normally created until someone logs into # e.g., The roundcube error log isn't normally created until someone logs into
@ -373,3 +415,8 @@ cp -f conf/fail2ban/filter.d/* /etc/fail2ban/filter.d/
# scripts will ensure the files exist and then fail2ban is given another # scripts will ensure the files exist and then fail2ban is given another
# restart at the very end of setup. # restart at the very end of setup.
restart_service fail2ban restart_service fail2ban
systemctl enable fail2ban
# Create a logrotate entry
cp -f conf/logrotate/mailinabox /etc/logrotate.d/
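After setup finishes, fail2ban's client can confirm that the jails from 00-mailinabox.conf loaded and that the admin home IP landed in the ignore list. Read-only checks; the jail name in the second command is an example, substitute one reported by the first:

	# List the jails fail2ban actually loaded.
	fail2ban-client status
	# Show the ignored addresses for one jail.
	fail2ban-client get sshd ignoreip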

@ -19,7 +19,7 @@ fi
echo "Installing Nginx (web server)..." echo "Installing Nginx (web server)..."
apt_install nginx php${PHP_VER}-cli php${PHP_VER}-fpm idn2 apt_install nginx php-cli php-fpm idn2 libnginx-mod-http-geoip
rm -f /etc/nginx/sites-enabled/default rm -f /etc/nginx/sites-enabled/default
@ -53,6 +53,12 @@ tools/editconf.py /etc/php/$PHP_VER/fpm/php.ini -c ';' \
tools/editconf.py /etc/php/$PHP_VER/fpm/php.ini -c ';' \ tools/editconf.py /etc/php/$PHP_VER/fpm/php.ini -c ';' \
default_charset="UTF-8" default_charset="UTF-8"
# Set higher timeout since fts searches with Roundcube may take longer
# than the default 60 seconds. We will also match Roundcube's timeout to the
# same value
tools/editconf.py /etc/php/$(php_version)/fpm/php.ini -c ';' \
default_socket_timeout=180
# Configure the path environment for php-fpm # Configure the path environment for php-fpm
tools/editconf.py /etc/php/$PHP_VER/fpm/pool.d/www.conf -c ';' \ tools/editconf.py /etc/php/$PHP_VER/fpm/pool.d/www.conf -c ';' \
env[PATH]=/usr/local/bin:/usr/bin:/bin \ env[PATH]=/usr/local/bin:/usr/bin:/bin \
@ -145,6 +151,15 @@ if [ ! -f $STORAGE_ROOT/www/default/index.html ]; then
fi fi
chown -R $STORAGE_USER $STORAGE_ROOT/www chown -R $STORAGE_USER $STORAGE_ROOT/www
# Copy geoblock config file, but only if it does not exist to keep user config
if [ ! -f /etc/nginx/conf.d/10-geoblock.conf ]; then
cp -f conf/nginx/conf.d/10-geoblock.conf /etc/nginx/conf.d/
fi
# touch logfiles that might not exist
touch /var/log/nginx/geoipblock.log
chown www-data /var/log/nginx/geoipblock.log
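Because the geoblock include is only copied when it does not already exist, it is worth validating the combined nginx configuration before relying on it (read-only check):

	# Make sure nginx accepts the configuration with the geoblock include in place.
	nginx -t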
# Start services. # Start services.
restart_service nginx restart_service nginx
restart_service php$PHP_VER-fpm restart_service php$PHP_VER-fpm

setup/webmail.sh Executable file → Normal file
@ -22,9 +22,9 @@ source /etc/mailinabox.conf # load global vars
echo "Installing Roundcube (webmail)..." echo "Installing Roundcube (webmail)..."
apt_install \ apt_install \
dbconfig-common \ dbconfig-common \
php${PHP_VER}-cli php${PHP_VER}-sqlite3 php${PHP_VER}-intl php${PHP_VER}-common php${PHP_VER}-curl php${PHP_VER}-imap \ php-cli php-sqlite3 php-intl php-json php-common php-curl php-imap \
php${PHP_VER}-gd php${PHP_VER}-pspell php${PHP_VER}-mbstring libjs-jquery libjs-jquery-mousewheel libmagic1 \ php-gd php-pspell libjs-jquery libjs-jquery-mousewheel libmagic1 php-mbstring \
sqlite3 sqlite3
# Install Roundcube from source if it is not already present or if it is out of date. # Install Roundcube from source if it is not already present or if it is out of date.
# Combine the Roundcube version number with the commit hash of plugins to track # Combine the Roundcube version number with the commit hash of plugins to track
@ -34,16 +34,20 @@ apt_install \
# https://github.com/mfreiholz/persistent_login/commits/master # https://github.com/mfreiholz/persistent_login/commits/master
# https://github.com/stremlau/html5_notifier/commits/master # https://github.com/stremlau/html5_notifier/commits/master
# https://github.com/mstilkerich/rcmcarddav/releases # https://github.com/mstilkerich/rcmcarddav/releases
# https://github.com/johndoh/roundcube-contextmenu
# https://github.com/alexandregz/twofactor_gauthenticator
# The easiest way to get the package hashes is to run this script and get the hash from # The easiest way to get the package hashes is to run this script and get the hash from
# the error message. # the error message.
-VERSION=1.6.0
+VERSION=1.6.1
-HASH=fd84b4fac74419bb73e7a3bcae1978d5589c52de
+HASH=0e1c771ab83ea03bde1fd0be6ab5d09e60b4f293
PERSISTENT_LOGIN_VERSION=bde7b6840c7d91de627ea14e81cf4133cbb3c07a # version 5.2
HTML5_NOTIFIER_VERSION=68d9ca194212e15b3c7225eb6085dbcf02fd13d7 # version 0.6.4+
-CARDDAV_VERSION=4.4.3
+CARDDAV_VERSION=4.4.6
-CARDDAV_HASH=74f8ba7aee33e78beb9de07f7f44b81f6071b644
+CARDDAV_HASH=82c5428f7086a09c9a77576d8887d65bb24a1da4
+CONTEXT_MENU_VERSION=dd13a92a9d8910cce7b2234f45a0b2158214956c # version 3.3.1
+TWOFACT_COMMIT=06e21b0c03aeeb650ee4ad93538873185f776f8b # master @ 21-04-2022
-UPDATE_KEY=$VERSION:$PERSISTENT_LOGIN_VERSION:$HTML5_NOTIFIER_VERSION:$CARDDAV_VERSION
+UPDATE_KEY=$VERSION:$PERSISTENT_LOGIN_VERSION:$HTML5_NOTIFIER_VERSION:$CARDDAV_VERSION:$CONTEXT_MENU_VERSION:$TWOFACT_COMMIT
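The comment above suggests reading the expected hash from the script's error message; an equivalent way is to download the release tarball once and hash it yourself. Illustrative only: the URL follows the GitHub release-asset pattern also used for the carddav plugin below, and the exact asset name is an assumption:

	# Compute the SHA-1 that goes into HASH for a given Roundcube release.
	wget -q https://github.com/roundcube/roundcubemail/releases/download/1.6.1/roundcubemail-1.6.1-complete.tar.gz -O /tmp/roundcube.tar.gz
	sha1sum /tmp/roundcube.tar.gz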
# paths that are often reused. # paths that are often reused.
RCM_DIR=/usr/local/lib/roundcubemail RCM_DIR=/usr/local/lib/roundcubemail
@ -82,16 +86,22 @@ if [ $needs_update == 1 ]; then
# install roundcube html5_notifier plugin # install roundcube html5_notifier plugin
git_clone https://github.com/kitist/html5_notifier.git $HTML5_NOTIFIER_VERSION '' ${RCM_PLUGIN_DIR}/html5_notifier git_clone https://github.com/kitist/html5_notifier.git $HTML5_NOTIFIER_VERSION '' ${RCM_PLUGIN_DIR}/html5_notifier
# download and verify the full release of the carddav plugin # download and verify the full release of the carddav plugin. Can't use git_clone because repository does not include all dependencies
wget_verify \ wget_verify \
https://github.com/mstilkerich/rcmcarddav/releases/download/v${CARDDAV_VERSION}/carddav-v${CARDDAV_VERSION}.tar.gz \ https://github.com/mstilkerich/rcmcarddav/releases/download/v${CARDDAV_VERSION}/carddav-v${CARDDAV_VERSION}.tar.gz \
$CARDDAV_HASH \ $CARDDAV_HASH \
/tmp/carddav.tar.gz /tmp/carddav.tar.gz
# unzip and cleanup # unzip and cleanup
tar -C ${RCM_PLUGIN_DIR} -zxf /tmp/carddav.tar.gz tar -C ${RCM_PLUGIN_DIR} --no-same-owner -zxf /tmp/carddav.tar.gz
rm -f /tmp/carddav.tar.gz rm -f /tmp/carddav.tar.gz
# install roundcube context menu plugin
git_clone https://github.com/johndoh/roundcube-contextmenu.git $CONTEXT_MENU_VERSION '' ${RCM_PLUGIN_DIR}/contextmenu
# install two factor totp authenticator
git_clone https://github.com/alexandregz/twofactor_gauthenticator.git $TWOFACT_COMMIT '' ${RCM_PLUGIN_DIR}/twofactor_gauthenticator
# record the version we've installed # record the version we've installed
echo $UPDATE_KEY > ${RCM_DIR}/version echo $UPDATE_KEY > ${RCM_DIR}/version
fi fi
@ -123,7 +133,7 @@ cat > $RCM_CONFIG <<EOF;
'verify_peer_name' => false, 'verify_peer_name' => false,
), ),
); );
\$config['imap_timeout'] = 15; \$config['imap_timeout'] = 180;
\$config['smtp_host'] = 'tls://127.0.0.1'; \$config['smtp_host'] = 'tls://127.0.0.1';
\$config['smtp_conn_options'] = array( \$config['smtp_conn_options'] = array(
'ssl' => array( 'ssl' => array(
@ -135,7 +145,7 @@ cat > $RCM_CONFIG <<EOF;
\$config['product_name'] = '$PRIMARY_HOSTNAME Webmail'; \$config['product_name'] = '$PRIMARY_HOSTNAME Webmail';
\$config['cipher_method'] = 'AES-256-CBC'; # persistent login cookie and potentially other things \$config['cipher_method'] = 'AES-256-CBC'; # persistent login cookie and potentially other things
\$config['des_key'] = '$SECRET_KEY'; # 37 characters -> ~256 bits for AES-256, see above \$config['des_key'] = '$SECRET_KEY'; # 37 characters -> ~256 bits for AES-256, see above
\$config['plugins'] = array('html5_notifier', 'archive', 'zipdownload', 'password', 'managesieve', 'jqueryui', 'persistent_login', 'carddav'); \$config['plugins'] = array('html5_notifier', 'archive', 'zipdownload', 'password', 'managesieve', 'jqueryui', 'persistent_login', 'carddav', 'markasjunk', 'contextmenu', 'twofactor_gauthenticator');
\$config['skin'] = 'elastic'; \$config['skin'] = 'elastic';
\$config['login_autocomplete'] = 2; \$config['login_autocomplete'] = 2;
\$config['login_username_filter'] = 'email'; \$config['login_username_filter'] = 'email';
@ -152,7 +162,7 @@ EOF
cat > ${RCM_PLUGIN_DIR}/carddav/config.inc.php <<EOF; cat > ${RCM_PLUGIN_DIR}/carddav/config.inc.php <<EOF;
<?php <?php
/* Do not edit. Written by Mail-in-a-Box. Regenerated on updates. */ /* Do not edit. Written by Mail-in-a-Box. Regenerated on updates. */
\$prefs['_GLOBAL']['hide_preferences'] = true; \$prefs['_GLOBAL']['hide_preferences'] = false;
\$prefs['_GLOBAL']['suppress_version_warning'] = true; \$prefs['_GLOBAL']['suppress_version_warning'] = true;
\$prefs['ownCloud'] = array( \$prefs['ownCloud'] = array(
'name' => 'ownCloud', 'name' => 'ownCloud',
@ -161,8 +171,8 @@ cat > ${RCM_PLUGIN_DIR}/carddav/config.inc.php <<EOF;
'url' => 'https://${PRIMARY_HOSTNAME}/cloud/remote.php/dav/addressbooks/users/%u/contacts/', 'url' => 'https://${PRIMARY_HOSTNAME}/cloud/remote.php/dav/addressbooks/users/%u/contacts/',
'active' => true, 'active' => true,
'readonly' => false, 'readonly' => false,
'refresh_time' => '02:00:00', 'refresh_time' => '00:30:00',
'fixed' => array('username','password'), 'fixed' => array('username'),
'preemptive_auth' => '1', 'preemptive_auth' => '1',
'hide' => false, 'hide' => false,
); );
@ -171,7 +181,7 @@ EOF
# Create writable directories. # Create writable directories.
mkdir -p /var/log/roundcubemail /var/tmp/roundcubemail $STORAGE_ROOT/mail/roundcube mkdir -p /var/log/roundcubemail /var/tmp/roundcubemail $STORAGE_ROOT/mail/roundcube
chown -R www-data.www-data /var/log/roundcubemail /var/tmp/roundcubemail $STORAGE_ROOT/mail/roundcube chown -R www-data:www-data /var/log/roundcubemail /var/tmp/roundcubemail $STORAGE_ROOT/mail/roundcube
# Ensure the log file monitored by fail2ban exists, or else fail2ban can't start. # Ensure the log file monitored by fail2ban exists, or else fail2ban can't start.
sudo -u www-data touch /var/log/roundcubemail/errors.log sudo -u www-data touch /var/log/roundcubemail/errors.log
@ -195,18 +205,18 @@ usermod -a -G dovecot www-data
# set permissions so that PHP can use users.sqlite # set permissions so that PHP can use users.sqlite
# could use dovecot instead of www-data, but not sure it matters # could use dovecot instead of www-data, but not sure it matters
chown root.www-data $STORAGE_ROOT/mail chown root:www-data $STORAGE_ROOT/mail
chmod 775 $STORAGE_ROOT/mail chmod 775 $STORAGE_ROOT/mail
chown root.www-data $STORAGE_ROOT/mail/users.sqlite chown root:www-data $STORAGE_ROOT/mail/users.sqlite
chmod 664 $STORAGE_ROOT/mail/users.sqlite chmod 664 $STORAGE_ROOT/mail/users.sqlite
# Fix Carddav permissions: # Fix Carddav permissions:
chown -f -R root.www-data ${RCM_PLUGIN_DIR}/carddav chown -f -R root:www-data ${RCM_PLUGIN_DIR}/carddav
# root.www-data need all permissions, others only read # root:www-data need all permissions, others only read
chmod -R 774 ${RCM_PLUGIN_DIR}/carddav chmod -R 774 ${RCM_PLUGIN_DIR}/carddav
# Run Roundcube database migration script (database is created if it does not exist) # Run Roundcube database migration script (database is created if it does not exist)
php$PHP_VER ${RCM_DIR}/bin/updatedb.sh --dir ${RCM_DIR}/SQL --package roundcube ${RCM_DIR}/bin/updatedb.sh --dir ${RCM_DIR}/SQL --package roundcube
chown www-data:www-data $STORAGE_ROOT/mail/roundcube/roundcube.sqlite chown www-data:www-data $STORAGE_ROOT/mail/roundcube/roundcube.sqlite
chmod 664 $STORAGE_ROOT/mail/roundcube/roundcube.sqlite chmod 664 $STORAGE_ROOT/mail/roundcube/roundcube.sqlite
@ -221,5 +231,5 @@ sed -i.miabold 's/^[^#]\+.\+PRAGMA journal_mode = WAL.\+$/#&/' \
sqlite3 $STORAGE_ROOT/mail/roundcube/roundcube.sqlite 'PRAGMA journal_mode=WAL;' sqlite3 $STORAGE_ROOT/mail/roundcube/roundcube.sqlite 'PRAGMA journal_mode=WAL;'
# Enable PHP modules. # Enable PHP modules.
phpenmod -v $PHP_VER imap phpenmod -v php imap
restart_service php$PHP_VER-fpm restart_service php$PHP_VER-fpm

@ -1,107 +0,0 @@
#!/bin/bash
#
# Z-Push: The Microsoft Exchange protocol server
# ----------------------------------------------
#
# Mostly for use on iOS which doesn't support IMAP IDLE.
#
# Although Ubuntu ships Z-Push (as d-push) it has a dependency on Apache
# so we won't install it that way.
#
# Thanks to http://frontender.ch/publikationen/push-mail-server-using-nginx-and-z-push.html.
source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars
# Prereqs.
echo "Installing Z-Push (Exchange/ActiveSync server)..."
apt_install \
php${PHP_VER}-soap php${PHP_VER}-imap libawl-php php$PHP_VER-xml
phpenmod -v $PHP_VER imap
# Copy Z-Push into place.
VERSION=2.6.2
TARGETHASH=f0e8091a8030e5b851f5ba1f9f0e1a05b8762d80
needs_update=0 #NODOC
if [ ! -f /usr/local/lib/z-push/version ]; then
needs_update=1 #NODOC
elif [[ $VERSION != $(cat /usr/local/lib/z-push/version) ]]; then
# checks if the version
needs_update=1 #NODOC
fi
if [ $needs_update == 1 ]; then
# Download
wget_verify "https://github.com/Z-Hub/Z-Push/archive/refs/tags/$VERSION.zip" $TARGETHASH /tmp/z-push.zip
# Extract into place.
rm -rf /usr/local/lib/z-push /tmp/z-push
unzip -q /tmp/z-push.zip -d /tmp/z-push
mv /tmp/z-push/*/src /usr/local/lib/z-push
rm -rf /tmp/z-push.zip /tmp/z-push
rm -f /usr/sbin/z-push-{admin,top}
echo $VERSION > /usr/local/lib/z-push/version
fi
# Configure default config.
sed -i "s^define('TIMEZONE', .*^define('TIMEZONE', '$(cat /etc/timezone)');^" /usr/local/lib/z-push/config.php
sed -i "s/define('BACKEND_PROVIDER', .*/define('BACKEND_PROVIDER', 'BackendCombined');/" /usr/local/lib/z-push/config.php
sed -i "s/define('USE_FULLEMAIL_FOR_LOGIN', .*/define('USE_FULLEMAIL_FOR_LOGIN', true);/" /usr/local/lib/z-push/config.php
sed -i "s/define('LOG_MEMORY_PROFILER', .*/define('LOG_MEMORY_PROFILER', false);/" /usr/local/lib/z-push/config.php
sed -i "s/define('BUG68532FIXED', .*/define('BUG68532FIXED', false);/" /usr/local/lib/z-push/config.php
sed -i "s/define('LOGLEVEL', .*/define('LOGLEVEL', LOGLEVEL_ERROR);/" /usr/local/lib/z-push/config.php
# Configure BACKEND
rm -f /usr/local/lib/z-push/backend/combined/config.php
cp conf/zpush/backend_combined.php /usr/local/lib/z-push/backend/combined/config.php
# Configure IMAP
rm -f /usr/local/lib/z-push/backend/imap/config.php
cp conf/zpush/backend_imap.php /usr/local/lib/z-push/backend/imap/config.php
sed -i "s%STORAGE_ROOT%$STORAGE_ROOT%" /usr/local/lib/z-push/backend/imap/config.php
# Configure CardDav
rm -f /usr/local/lib/z-push/backend/carddav/config.php
cp conf/zpush/backend_carddav.php /usr/local/lib/z-push/backend/carddav/config.php
# Configure CalDav
rm -f /usr/local/lib/z-push/backend/caldav/config.php
cp conf/zpush/backend_caldav.php /usr/local/lib/z-push/backend/caldav/config.php
# Configure Autodiscover
rm -f /usr/local/lib/z-push/autodiscover/config.php
cp conf/zpush/autodiscover_config.php /usr/local/lib/z-push/autodiscover/config.php
sed -i "s/PRIMARY_HOSTNAME/$PRIMARY_HOSTNAME/" /usr/local/lib/z-push/autodiscover/config.php
sed -i "s^define('TIMEZONE', .*^define('TIMEZONE', '$(cat /etc/timezone)');^" /usr/local/lib/z-push/autodiscover/config.php
# Some directories it will use.
mkdir -p /var/log/z-push
mkdir -p /var/lib/z-push
chmod 750 /var/log/z-push
chmod 750 /var/lib/z-push
chown www-data:www-data /var/log/z-push
chown www-data:www-data /var/lib/z-push
# Add log rotation
cat > /etc/logrotate.d/z-push <<EOF;
/var/log/z-push/*.log {
weekly
missingok
rotate 52
compress
delaycompress
notifempty
}
EOF
# Restart service.
restart_service php$PHP_VER-fpm
# Fix states after upgrade
hide_output php$PHP_VER /usr/local/lib/z-push/z-push-admin.php -a fixstates

tools/check-dnsbl.py Executable file
@ -0,0 +1,443 @@
#!/usr/bin/env python3
# From https://github.com/gsauthof/utility Thanks!
# 2016, Georg Sauthoff <mail@georg.so>, GPLv3+
import argparse
import csv
# require dnspython >= 1.15
# because of: https://github.com/rthalley/dnspython/issues/206
import dns.resolver
import dns.reversename
import logging
import re
import sys
import time
default_blacklists = [
('zen.spamhaus.org' , 'Spamhaus SBL, XBL and PBL' ),
('dnsbl.sorbs.net' , 'SORBS aggregated' ),
('safe.dnsbl.sorbs.net' , "'safe' subset of SORBS aggregated"),
('ix.dnsbl.manitu.net' , 'Heise iX NiX Spam' ),
('truncate.gbudb.net' , 'Exclusively Spam/Malware' ),
('dnsbl-1.uceprotect.net' , 'Trapserver Cluster' ),
('cbl.abuseat.org' , 'Net of traps' ),
('dnsbl.cobion.com' , 'used in IBM products' ),
('psbl.surriel.com' , 'passive list, easy to unlist' ),
('db.wpbl.info' , 'Weighted private' ),
('bl.spamcop.net' , 'Based on spamcop users' ),
('dyna.spamrats.com' , 'Dynamic IP addresses' ),
('spam.spamrats.com' , 'Manual submissions' ),
('auth.spamrats.com' , 'Suspicious authentications' ),
('dnsbl.inps.de' , 'automated and reported' ),
('bl.blocklist.de' , 'fail2ban reports etc.' ),
('all.s5h.net' , 'traps' ),
('rbl.realtimeblacklist.com' , 'lists ip ranges' ),
('b.barracudacentral.org' , 'traps' ),
('hostkarma.junkemailfilter.com', 'Autodetected Virus Senders' ),
('ubl.unsubscore.com' , 'Collected Opt-Out Addresses' ),
('0spam.fusionzero.com' , 'Spam Trap' ),
('bl.nordspam.com' , 'NordSpam IP addresses' ),
('rbl.nordspam.com' , 'NordSpam Domain list ' ),
('combined.mail.abusix.zone' , 'Abusix aggregated' ),
('black.dnsbl.brukalai.lt' , 'Brukalai.lt junk mail' ),
('light.dnsbl.brukalai.lt' , 'Brukalai.lt abuse' ),
]
# blacklists disabled by default because they return mostly garbage
garbage_blacklists = [
# The spfbl.net operator doesn't publish clear criteria that lead to a
# blacklisting.
# When an IP address is blacklisted the operator can't name a specific
# reason for the blacklisting. The blacklisting details page just names
# overly generic reasons like:
# 'This IP was flagged due to misconfiguration of the e-mail service or
# the suspicion that there is no MTA at it.'
# When contacting the operator's support, they can't back up such
# claims.
# There are additions of IP addresses to the spfbl.net blacklist that
# have a properly configured MTA running and that aren't listed in any
# other blacklist. Likely, those additions are caused by a bug in the
# spfbl.net update process. But their support is uninterested in
# improving that process. Instead they want to externalize maintenance
# work by asking listed parties to waste some time on their manual
# delisting process.
# Suspiciously, you can even whitelist your listed address via
# transferring $ 1.50 via PayPal. Go figure.
# Thus, the value of querying this blacklist is utterly low as
# you get false-positive results, very likely.
('dnsbl.spfbl.net' , 'Reputation Database' ),
]
# See also:
# https://en.wikipedia.org/wiki/DNSBL
# https://tools.ietf.org/html/rfc5782
# https://en.wikipedia.org/wiki/Comparison_of_DNS_blacklists
# some lists provide detailed stats, i.e. the actual listed addresses
# useful for testing
log_format = '%(asctime)s - %(levelname)-8s - %(message)s [%(name)s]'
log_date_format = '%Y-%m-%d %H:%M:%S'
## Simple Setup
# Note that the basicConfig() call is a NOP in Jupyter
# because Jupyter calls it before
logging.basicConfig(format=log_format, datefmt=log_date_format, level=logging.WARNING)
log = logging.getLogger(__name__)
def mk_arg_parser():
p = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description = 'Check if mailservers are in any blacklist (DNSBL)',
epilog='''Don't panic if a server is listed in some blacklist.
See also https://en.wikipedia.org/wiki/Comparison_of_DNS_blacklists for the
mechanics and policies of the different lists.
2016, Georg Sauthoff <mail@georg.so>, GPLv3+''')
p.add_argument('dests', metavar='DESTINATION', nargs='+',
help = 'servers, a MX lookup is done if it is a domain')
p.add_argument('--bl', action='append', default=[],
help='add another blacklist')
p.add_argument('--bl-file', help='read more DNSBL from a CSV file')
p.add_argument('--clear', action='store_true',
help='clear default list of DNSBL')
# https://blog.cloudflare.com/dns-resolver-1-1-1-1/
p.add_argument('--cloudflare', action='store_true',
help="use Cloudflare's public DNS nameservers")
p.add_argument('--debug', action='store_true',
help='print debug log messages')
# cf. https://en.wikipedia.org/wiki/Google_Public_DNS
p.add_argument('--google', action='store_true',
help="use Google's public DNS nameservers")
p.add_argument('--rev', action='store_true', default=True,
help='check reverse DNS record for each domain (default: on)')
p.add_argument('--mx', action='store_true', default=True,
help='try to follow MX entries')
p.add_argument('--no-mx', dest='mx', action='store_false',
help='ignore any MX records')
p.add_argument('--no-rev', action='store_false', dest='rev',
help='disable reverse DNS checking')
p.add_argument('--ns', action='append', default=[],
help='use one or more alternate nameservers')
# cf. https://en.wikipedia.org/wiki/OpenDNS
p.add_argument('--opendns', action='store_true',
help="use Cisco's public DNS nameservers")
# cf. https://quad9.net/faq/
p.add_argument('--quad9', action='store_true',
help="use Quad9's public DNS nameservers (i.e. the filtering ones)")
p.add_argument('--retries', type=int, default=5,
help='Number of retries if request times out (default: 5)')
p.add_argument('--with-garbage', action='store_true',
help=('also include low-quality blacklists that are maintained'
' by clueless operators and thus easily return false-positives'))
return p
def parse_args(*a):
p = mk_arg_parser()
args = p.parse_args(*a)
args.bls = default_blacklists
if args.clear:
args.bls = []
for bl in args.bl:
args.bls.append((bl, ''))
if args.bl_file:
args.bls = args.bls + read_csv_bl(args.bl_file)
if args.with_garbage:
args.bls.extend(garbage_blacklists)
if args.google:
args.ns = args.ns + ['8.8.8.8', '2001:4860:4860::8888', '8.8.4.4', '2001:4860:4860::8844']
if args.opendns:
args.ns = args.ns + ['208.67.222.222', '2620:0:ccc::2', '208.67.220.220', '2620:0:ccd::2']
if args.cloudflare:
args.ns += ['1.1.1.1', '2606:4700:4700::1111', '1.0.0.1', '2606:4700:4700::1001']
if args.quad9:
args.ns += ['9.9.9.9', '2620:fe::fe', '149.112.112.112', '2620:fe::9']
if args.ns:
dns.resolver.default_resolver = dns.resolver.Resolver(configure=False)
dns.resolver.default_resolver.nameservers = args.ns
if args.debug:
l = logging.getLogger() # root logger
l.setLevel(logging.DEBUG)
return args
def read_csv_bl(filename):
with open(filename, newline='') as f:
reader = csv.reader(f)
xs = [ row for row in reader
if len(row) > 0 and not row[0].startswith('#') ]
return xs
v4_ex = re.compile('^[.0-9]+$')
v6_ex = re.compile('^[:0-9a-fA-F]+$')
def get_addrs(dest, mx=True):
if v4_ex.match(dest) or v6_ex.match(dest):
return [ (dest, None) ]
domains = [ dest ]
if mx:
try:
r = dns.resolver.resolve(dest, 'mx', search=True)
domains = [ answer.exchange for answer in r ]
log.debug('destination {} has MXs: {}'
.format(dest, ', '.join([str(d) for d in domains])))
except dns.resolver.NoAnswer:
pass
addrs = []
for domain in domains:
for t in ['a', 'aaaa']:
try:
r = dns.resolver.resolve(domain, t, search=True)
except dns.resolver.NoAnswer:
continue
xs = [ ( answer.address, domain ) for answer in r ]
addrs = addrs + xs
log.debug('domain {} has addresses: {}'
.format(domain, ', '.join([x[0] for x in xs])))
if not addrs:
raise ValueError("There isn't any a/aaaa DNS record for {}".format(domain))
return addrs
def check_dnsbl(addr, bl):
rev = dns.reversename.from_address(addr)
domain = str(rev.split(3)[0]) + '.' + bl
try:
r = dns.resolver.resolve(domain, 'a', search=True)
except (dns.resolver.NXDOMAIN, dns.resolver.NoNameservers, dns.resolver.NoAnswer):
return 0
address = list(r)[0].address
try:
r = dns.resolver.resolve(domain, 'txt', search=True)
txt = list(r)[0].to_text()
except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
txt = ''
log.error('OMG, {} is listed in DNSBL {}: {} ({})'.format(
addr, bl, address, txt))
return 1
def check_rdns(addrs):
errs = 0
for (addr, domain) in addrs:
log.debug('Check if there is a reverse DNS record that maps address {} to {}'
.format(addr, domain))
try:
r = dns.resolver.resolve(dns.reversename.from_address(addr), 'ptr', search=True)
a = list(r)[0]
target = str(a.target).lower()
source = str(domain).lower()
log.debug('Reverse DNS record for {} points to {}'.format(addr, target))
if domain and source + '.' != target and source != target:
log.error('domain {} resolves to {}, but the reverse record resolves to {}'.
format(domain, addr, target))
errs = errs + 1
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
log.error('There is no reverse DNS record for {}'.format(addr))
errs = errs + 1
return errs
def run(args):
log.debug('Checking {} DNS blacklists'.format(args.bls.__len__()))
errs = 0
for dest in args.dests:
addrs = get_addrs(dest, mx=args.mx)
if args.rev:
errs = errs + check_rdns(addrs)
old_errs = errs
ls = [ ( (x[0], x[1], y) for x in addrs for y in args.bls) ]
i = 0
while ls:
ms = []
for addr, domain, bl in ls[0]:
log.debug('Checking if address {} (via {}) is listed in {} ({})'
.format(addr, dest, bl[0], bl[1]))
try:
errs = errs + check_dnsbl(addr, bl[0])
except dns.exception.Timeout as e:
m = 'Resolving {}/{} in {} timed out: {}'.format(
addr, domain, bl[0], e)
if i >= args.retries:
log.warn(m)
else:
log.debug(m)
ms.append( (addr, domain, bl) )
ls.pop(0)
if ms and i + 1 < args.retries:
ls.append(ms)
log.debug('({}) Retrying {} timed-out entries'.format(i, len(ms)))
time.sleep(23+i*23)
i = i + 1
if old_errs < errs:
log.error('{} is listed in {} blacklists'.format(dest, errs - old_errs))
return 0 if errs == 0 else 1
def main(*a):
args = parse_args(*a)
return run(args)
if __name__ == '__main__':
if 'IPython' in sys.modules:
# do something different when running inside a Jupyter notebook
pass
else:
sys.exit(main())
##### Scratch area:
#
#
## In[ ]:
#
#check_rdns([('89.238.75.224', 'georg.so')])
#
#
## In[ ]:
#
#r = dns.resolver.resolve(dns.reversename.from_address('89.238.75.224'), 'ptr', search=True)
#a = list(r)[0]
#a.target.to_text()
#
#
## In[ ]:
#
#tr = dns.resolver.default_resolver
#
#
## In[ ]:
#
#dns.resolver.default_resolver = dns.resolver.Resolver(configure=False)
## some DNSBLs might block public DNS servers (because of the volume) such that
## false-negatives are generated with them
## e.g. Google's Public DNS
#dns.resolver.default_resolver.nameservers = ['8.8.8.8', '2001:4860:4860::8888', '8.8.4.4', '2001:4860:4860::8844']
#
#
## In[ ]:
#
#dns.resolver.default_resolver = dns.resolver.Resolver(configure=False)
## OpenDNS
#dns.resolver.default_resolver.nameservers = ['208.67.222.222', '2620:0:ccc::2', '208.67.220.220', '2620:0:ccd::2']
#
#
## In[ ]:
#
#tr.nameservers
#
#
## In[ ]:
#
#dns.resolver.default_resolver = tr
#
#
## In[ ]:
#
#dns.__version__
#
#
## In[ ]:
#
## as of 2016-11, listed
#r = dns.resolver.resolve('39.227.103.116.zen.spamhaus.org', 'txt', search=True)
#answer = list(r)[0]
#answer.to_text()
#
#
## In[ ]:
#
#check_dnsbl('116.103.227.39', 'zen.spamhaus.org')
#
#
## In[ ]:
#
## as of 2016-11, not listed
#check_dnsbl('217.146.132.159', 'zen.spamhaus.org')
#
#
## In[ ]:
#
#get_addrs('georg.so')
#
#
## In[ ]:
#
#parse_args(['georg.so'])
#
#
## In[ ]:
#
#a = dns.resolver.resolve('georg.so', 'MX', search=True)
#
#
## In[ ]:
#
#print(dns.resolver.Resolver.query.__doc__)
#
#
## In[ ]:
#
#[ str(x.exchange) for x in a ]
#
#
## In[ ]:
#
#[ x.exchange for x in a]
#dns.resolver.resolve(list(a)[0].exchange, 'a', search=True)
#
#
## In[ ]:
#
#r = dns.reversename.from_address('89.238.75.224')
#str(r.split(3)[0])
#
#
## In[ ]:
#
## should throw NoAnswer
#a = dns.resolver.resolve('escher.lru.li', 'mx', search=True)
##b = list(a)
#a
#
#
## In[ ]:
#
#a = dns.resolver.resolve('georg.so', 'a', search=True)
#b = list(a)[0]
#b.address
#dns.reversename.from_address(b.address)
#
#
## In[ ]:
#
## should throw NXDOMAIN
#rs = str(r.split(3)[0])
#dns.resolver.resolve(rs + '.zen.spamhaus.org', 'A' , search=True)
#
#
## In[ ]:
#
#s = dns.reversename.from_address('2a00:1828:2000:164::12')
#str(s.split(3)[0])
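A typical invocation of the script above, using only flags defined in mk_arg_parser (the domain is a placeholder; MX records of each destination are followed by default, and dnspython >= 1.15 must be installed as noted at the top):

	# Check a domain's mail hosts against the default DNSBLs via Quad9, retrying timeouts a few times.
	tools/check-dnsbl.py --quad9 --retries 3 example.com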

tools/create_dns_blocklist.sh Executable file
@ -0,0 +1,47 @@
#!/bin/bash
set -euo pipefail
# Download select set of malware blocklists from The Firebog's "The Big Blocklist
# Collection" [0] and block access to them with Unbound by returning NXDOMAIN.
#
# Usage:
# # create the blocklist
# create_dns_blocklist.sh > ~/blocklist.conf
# sudo mv ~/blocklist.conf /etc/unbound/lists.d
#
# # check list contains valid syntax. If not valid, remove blocklist.conf,
# # otherwise unbound will not work
# sudo unbound-checkconf
# > unbound-checkconf: no errors in /etc/unbound/unbound.conf
#
# # reload unbound configuration
# sudo unbound-control reload
#
#
# [0]: https://firebog.net
(
# Malicious Lists
curl -sSf "https://raw.githubusercontent.com/DandelionSprout/adfilt/master/Alternate%20versions%20Anti-Malware%20List/AntiMalwareHosts.txt" ;
curl -sSf "https://osint.digitalside.it/Threat-Intel/lists/latestdomains.txt" ;
curl -sSf "https://s3.amazonaws.com/lists.disconnect.me/simple_malvertising.txt" ;
curl -sSf "https://v.firebog.net/hosts/Prigent-Crypto.txt" ;
curl -sSf "https://bitbucket.org/ethanr/dns-blacklists/raw/8575c9f96e5b4a1308f2f12394abd86d0927a4a0/bad_lists/Mandiant_APT1_Report_Appendix_D.txt" ;
curl -sSf "https://phishing.army/download/phishing_army_blocklist_extended.txt" ;
curl -sSf "https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-malware.txt" ;
curl -sSf "https://raw.githubusercontent.com/Spam404/lists/master/main-blacklist.txt" ;
curl -sSf "https://raw.githubusercontent.com/FadeMind/hosts.extras/master/add.Risk/hosts" ;
curl -sSf "https://urlhaus.abuse.ch/downloads/hostfile/" ;
# curl -sSf "https://v.firebog.net/hosts/Prigent-Malware.txt" ;
# curl -sSf "https://v.firebog.net/hosts/Shalla-mal.txt" ;
) |
cat | # Combine all lists into one
grep -v '#' | # Remove comment lines
grep -v '::' | # Remove universal ipv6 address
tr -d '\r' | # Normalize line endings by removing Windows carriage returns
sed -e 's/0\.0\.0\.0\s\{0,\}//g' | # Remove ip address from start of line
sed -e 's/127\.0\.0\.1\s\{0,\}//g' |
sed -e '/^$/d' | # Remove empty lines
sort -u | # Sort and remove duplicates
awk '{print "local-zone: " ""$1"" " always_nxdomain"}' # Convert to Unbound configuration
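For illustration, every hostname that survives the pipeline becomes one Unbound local-zone entry; the domains below are placeholders rather than entries taken from the real lists, and the install commands simply repeat the steps from the header comment:

    # hypothetical excerpt of a generated blocklist.conf:
    #   local-zone: malware-tracker.example always_nxdomain
    #   local-zone: phishing-site.example always_nxdomain
    ./create_dns_blocklist.sh > ~/blocklist.conf
    sudo mv ~/blocklist.conf /etc/unbound/lists.d/blocklist.conf
    sudo unbound-checkconf && sudo unbound-control reload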

2
tools/dyndns/cronjob.sh Normal file
View File

@ -0,0 +1,2 @@
#!/bin/sh
cd /opt/dyndns && ./dyndns.sh >> /var/log/dyndns.log 2>/dev/null

2
tools/dyndns/dyndns.cfg Normal file
View File

@ -0,0 +1,2 @@
USER_NAME="<admin mail address @box>"
USER_PASS="<admin password>"

1
tools/dyndns/dyndns.domain Normal file
View File

@ -0,0 +1 @@
<miabdomain>.<tld>

3
tools/dyndns/dyndns.dynlist Normal file
View File

@ -0,0 +1,3 @@
vpn.<miabdomain>.<tld>
nas.<miabdomain>.<tld>

235
tools/dyndns/dyndns.sh Executable file
View File

@ -0,0 +1,235 @@
#!/bin/bash
# based on dm-dyndns v1.0, dmurphy@dmurphy.com
# Shell script to provide dynamic DNS to a mail-in-the-box platform.
# Requirements:
# dig installed
# curl installed
# oathtool installed if totp is to be used
# OpenDNS myip service availability (myip.opendns.com)
# Mailinabox host (see https://mailinabox.email)
# Mailinabox admin username/password in the CFGFILE below
# one-line file of the format (curl cfg file):
# user = "username:password"
# Dynamic DNS name to be set
# DYNDNSNAMELIST file contains one hostname per line that needs to be set to this IP.
#----- Contents of dyndns.cfg file below ------
#----- user credentials -----------------------
#USER_NAME="admin@mydomain.com"
#USER_PASS="MYADMINPASSWORD"
#----- Contents of dyndns.domain below --------
#<miabdomain.tld>
#------ Contents of dyndns.dynlist below ------
#vpn.mydomain.com
#nas.mydomain.com
#------ Contents of dyndns.totp ---------------
#- only needed in case of TOTP authentication -
#TOTP_KEY=ABCDEFGABCFEXXXXXXXX
MYNAME="dyndns"
CFGFILE="$MYNAME.cfg"
TOTPFILE="$MYNAME.totp"
DOMFILE="$MYNAME.domain"
DIGCMD="/usr/bin/dig"
CURLCMD="/usr/bin/curl"
CATCMD="/bin/cat"
OATHTOOLCMD="/usr/bin/oathtool"
DYNDNSNAMELIST="$MYNAME.dynlist"
IGNORESTR=";; connection timed out; no servers could be reached"
if [ ! -x $DIGCMD ]; then
echo "$MYNAME: dig command $DIGCMD not found. Check and fix please."
exit 99
fi
if [ ! -x $CURLCMD ]; then
echo "$MYNAME: curl command $CURLCMD not found. Check and fix please."
exit 99
fi
if [ ! -x $CATCMD ]; then
echo "$MYNAME: cat command $CATCMD not found. Check and fix please."
exit 99
fi
DOMAIN=$(cat $DOMFILE)
MIABHOST="box.$DOMAIN"
noww="$(date +"%F %T")"
echo "$noww: running dynamic dns update for $DOMAIN"
if [ ! -f $CFGFILE ]; then
echo "$MYNAME: $CFGFILE not found. Check and fix please."
exit 99
fi
if [ ! -f $DYNDNSNAMELIST ]; then
echo "$MYNAME: $DYNDNSNAMELIST not found. Check and fix please."
exit 99
fi
source $CFGFILE
AUTHSTR="Authorization: Basic $(echo $USER_NAME:$USER_PASS | base64 -w 0)"
MYIP="`$DIGCMD +short myip.opendns.com @resolver1.opendns.com`"
if [ -z "$MYIP" ]; then
MYIP="`$DIGCMD +short myip.opendns.com @resolver2.opendns.com`"
fi
if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
if [ -z "$MYIP" ]; then
MYIP="`$DIGCMD +short myip.opendns.com @resolver3.opendns.com`"
fi
if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
if [ -z "$MYIP" ]; then
MYIP="`$DIGCMD +short myip.opendns.com @resolver4.opendns.com`"
fi
if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
if [ -z "$MYIP" ]; then
MYIP=$($DIGCMD -4 +short TXT o-o.myaddr.l.google.com @ns1.google.com | tr -d '"')
fi
if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
if [ ! -z "$MYIP" ]; then
for DYNDNSNAME in `$CATCMD $DYNDNSNAMELIST`
do
PREVIP="`$DIGCMD A +short $DYNDNSNAME @$MIABHOST`"
if [ -z "$PREVIP" ]; then
echo "$MYNAME: dig output was blank."
fi
if [ "x$PREVIP" = "x$MYIP" ]; then
echo "$MYNAME: $DYNDNSNAME ipv4 hasn't changed."
else
echo "$MYNAME: $DYNDNSNAME changed (previously: $PREVIP, now: $MYIP)"
STATUS="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/A`"
case $STATUS in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";;
"updated DNS: $DOMAIN") echo "$MYNAME: mailinabox API updated $DYNDNSNAME ipv4 OK.";;
"invalid-totp-token"|"missing-totp-token") echo "$MYNAME: invalid TOTP token. Retrying with TOTP token"
if [ ! -x $OATHTOOLCMD ]; then
echo "$MYNAME: oathtool command $OATHTOOLCMD not found. Check and fix please."
exit 99
fi
if [ ! -f $TOTPFILE ]; then
echo "$MYNAME: $TOTPFILE not found. Check and fix please."
exit 99
fi
source $TOTPFILE
TOTP="X-Auth-Token: $(oathtool --totp -b -d 6 $TOTP_KEY)"
STATUST="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -H "$TOTP" -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/A`"
case $STATUST in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";;
"updated DNS: $DOMAIN") echo "$MYNAME: mailinabox API updated $DYNDNSNAME ipv4 OK.";;
"invalid-totp-token") echo "$MYNAME: invalid TOTP token.";;
*) echo "$MYNAME: other status from mailinabox API. Please check: $STATUST (2)";;
esac
;;
*) echo "$MYNAME: other status from mailinabox API. Please check: $STATUS (1)";;
esac
fi
done
else
echo "$MYNAME: No ipv4 address found. Check myaddr.google and myip.opendns.com services."
exit 99
fi
# Now to do the same for ipv6
MYIP="`$DIGCMD +short AAAA @resolver1.ipv6-sandbox.opendns.com myip.opendns.com -6`"
if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
if [ -z "$MYIP" ]; then
MYIP="`$DIGCMD +short AAAA @resolver2.ipv6-sandbox.opendns.com myip.opendns.com -6`"
fi
if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
if [ -z "$MYIP" ]; then
MYIP=$($DIGCMD -6 +short TXT o-o.myaddr.l.google.com @ns1.google.com | tr -d '"')
fi
if [ "$MYIP" = "$IGNORESTR" ]; then
MYIP=""
fi
if [ ! -z "$MYIP" ]; then
for DYNDNSNAME in `$CATCMD $DYNDNSNAMELIST`
do
PREVIP="`$DIGCMD AAAA +short $DYNDNSNAME @$MIABHOST`"
if [ -z "$PREVIP" ]; then
echo "$MYNAME: dig output was blank."
fi
if [ "x$PREVIP" = "x$MYIP" ]; then
echo "$MYNAME: $DYNDNSNAME ipv6 hasn't changed."
else
echo "$MYNAME: $DYNDNSNAME changed (previously: $PREVIP, now: $MYIP)"
STATUS="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/AAAA`"
case $STATUS in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";;
"updated DNS: $DOMAIN") echo "$MYNAME: mailinabox API updated $DYNDNSNAME ipv6 OK.";;
"invalid-totp-token"|"missing-totp-token") echo "$MYNAME: invalid TOTP token. Retrying with TOTP token"
if [ ! -x $OATHTOOLCMD ]; then
echo "$MYNAME: oathtool command $OATHTOOLCMD not found. Check and fix please."
exit 99
fi
if [ ! -f $TOTPFILE ]; then
echo "$MYNAME: $TOTPFILE not found. Check and fix please."
exit 99
fi
source $TOTPFILE
TOTP="X-Auth-Token: $(oathtool --totp -b -d 6 $TOTP_KEY)"
STATUST="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -H "$TOTP" -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/AAAA`"
case $STATUST in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";;
"updated DNS: $DOMAIN") echo "$MYNAME: mailinabox API updated $DYNDNSNAME ipv6 OK.";;
"invalid-totp-token") echo "$MYNAME: invalid TOTP token.";;
*) echo "$MYNAME: other status from mailinabox API. Please check: $STATUST (2)";;
esac
;;
*) echo "$MYNAME: other status from mailinabox API. Please check: $STATUS (1)";;
esac
fi
done
else
echo "$MYNAME: No ipv6 address found. Check myaddr.google and myip.opendns.com services."
exit 99
fi
exit 0
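For reference, each update the loops above perform is a single authenticated PUT against the Mail-in-a-Box custom DNS API; the host names, credentials and address below are placeholders:

    # manual equivalent of one update (A record); use .../AAAA for ipv6
    curl -s -X PUT -u "admin@example.com:ADMINPASSWORD" -d "203.0.113.5" \
        https://box.example.com/admin/dns/custom/vpn.example.com/A
    # the script expects replies such as "OK" (nothing to change) or "updated DNS: example.com"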

1
tools/dyndns/dyndns.totp Normal file
View File

@ -0,0 +1 @@
TOTP_KEY=<TOTP_KEY>

223
tools/dyndns/dyndns2.sh Executable file
View File

@ -0,0 +1,223 @@
#!/bin/bash
# based on dm-dyndns v1.0, dmurphy@dmurphy.com
# Shell script to provide dynamic DNS to a mail-in-the-box platform.
# Requirements:
# curl installed
# oathtool installed if totp is to be used
# OpenDNS myip service availability (myip.opendns.com)
# Mailinabox host (see https://mailinabox.email)
# Mailinabox admin username/password in the CFGFILE below
# one-line file of the format (curl cfg file):
# user = "username:password"
# Dynamic DNS name to be set
# DYNDNSNAMELIST file contains one hostname per line that needs to be set to this IP.
#----- Contents of dyndns.cfg file below ------
#----- user credentials -----------------------
#USER_NAME="admin@mydomain.com"
#USER_PASS="MYADMINPASSWORD"
#----- Contents of dyndns.domain below --------
#<miabdomain.tld>
#------ Contents of dyndns.dynlist below ------
#vpn.mydomain.com
#nas.mydomain.com
#------ Contents of dyndns.totp ---------------
#- only needed in case of TOTP authentication -
#TOTP_KEY=ABCDEFGABCFEXXXXXXXX
MYNAME="dyndns"
CFGFILE="$MYNAME.cfg"
TOTPFILE="$MYNAME.totp"
DOMFILE="$MYNAME.domain"
CURLCMD="/usr/bin/curl"
DIGCMD="/usr/bin/dig"
CATCMD="/bin/cat"
OATHTOOLCMD="/usr/bin/oathtool"
DYNDNSNAMELIST="$MYNAME.dynlist"
IGNORESTR=";; connection timed out; no servers could be reached"
if [ ! -x $CURLCMD ]; then
echo "$MYNAME: curl command $CURLCMD not found. Check and fix please."
exit 99
fi
if [ ! -x $DIGCMD ]; then
echo "$MYNAME: dig command $DIGCMD not found. Check and fix please."
exit 99
fi
if [ ! -x $CATCMD ]; then
echo "$MYNAME: cat command $CATCMD not found. Check and fix please."
exit 99
fi
DOMAIN=$(cat $DOMFILE)
MIABHOST="box.$DOMAIN"
noww="$(date +"%F %T")"
echo "$noww: running dynamic dns update for $DOMAIN"
if [ ! -f $CFGFILE ]; then
echo "$MYNAME: $CFGFILE not found. Check and fix please."
exit 99
fi
if [ ! -f $DYNDNSNAMELIST ]; then
echo "$MYNAME: $DYNDNSNAMELIST not found. Check and fix please."
exit 99
fi
source $CFGFILE
AUTHSTR="Authorization: Basic $(echo $USER_NAME:$USER_PASS | base64 -w 0)"
# Test an IP address for validity:
# Usage:
# valid_ipv4 IP_ADDRESS
# if [[ $? -eq 0 ]]; then echo good; else echo bad; fi
# OR
# if valid_ipv4 IP_ADDRESS; then echo good; else echo bad; fi
#
function valid_ipv4()
{
local ip=$1
local stat=1
if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
OIFS=$IFS
IFS='.'
ip=($ip)
IFS=$OIFS
[[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
&& ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
stat=$?
fi
return $stat
}
MYIP="`$CURLCMD -4 -s icanhazip.com`"
if ! valid_ipv4 "${MYIP}"; then
MYIP="`$CURLCMD -4 -s api64.ipify.org`"
fi
if valid_ipv4 "${MYIP}"; then
for DYNDNSNAME in `$CATCMD $DYNDNSNAMELIST`
do
PREVIP="`$DIGCMD A +short $DYNDNSNAME @$MIABHOST`"
if [ -z "$PREVIP" ]; then
echo "$MYNAME: dig output was blank."
fi
if [ "x$PREVIP" == "x$MYIP" ]; then
echo "$MYNAME: $DYNDNSNAME ipv4 hasn't changed."
else
echo "$MYNAME: $DYNDNSNAME changed (previously: $PREVIP, now: $MYIP)"
STATUS="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/A`"
case $STATUS in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";;
"updated DNS: $DOMAIN") echo "$MYNAME: mailinabox API updated $DYNDNSNAME ipv4 OK.";;
"invalid-totp-token"|"missing-totp-token") echo "$MYNAME: invalid TOTP token. Retrying with TOTP token"
if [ ! -x $OATHTOOLCMD ]; then
echo "$MYNAME: oathtool command $OATHTOOLCMD not found. Check and fix please."
exit 99
fi
if [ ! -f $TOTPFILE ]; then
echo "$MYNAME: $TOTPFILE not found. Check and fix please."
exit 99
fi
source $TOTPFILE
TOTP="X-Auth-Token: $(oathtool --totp -b -d 6 $TOTP_KEY)"
STATUST="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -H "$TOTP" -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/A`"
case $STATUST in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";;
"updated DNS: $DOMAIN") echo "$MYNAME: mailinabox API updated $DYNDNSNAME ipv4 OK.";;
"invalid-totp-token") echo "$MYNAME: invalid TOTP token.";;
*) echo "$MYNAME: other status from mailinabox API. Please check: $STATUST (2)";;
esac
;;
*) echo "$MYNAME: other status from mailinabox API. Please check: $STATUS (1)";;
esac
fi
done
else
echo "$MYNAME: No ipv4 address found."
fi
# Now to do the same for ipv6
function valid_ipv6()
{
local IP_ADDR=$1
local stat=1
if python3 -c "import ipaddress; ipaddress.IPv6Network('${IP_ADDR}')" 2>/dev/null; then
stat=0
fi
return $stat
}
MYIP="`$CURLCMD -6 -s icanhazip.com`"
if ! valid_ipv6 "${MYIP}"; then
MYIP="`$CURLCMD -6 -s api64.ipify.org`"
fi
if valid_ipv6 "${MYIP}"; then
for DYNDNSNAME in `$CATCMD $DYNDNSNAMELIST`
do
PREVIP="`$DIGCMD AAAA +short $DYNDNSNAME @$MIABHOST`"
if [ -z "$PREVIP" ]; then
echo "$MYNAME: dig output was blank."
fi
if [ "x$PREVIP" = "x$MYIP" ]; then
echo "$MYNAME: $DYNDNSNAME ipv6 hasn't changed."
else
echo "$MYNAME: $DYNDNSNAME changed (previously: $PREVIP, now: $MYIP)"
STATUS="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/AAAA`"
case $STATUS in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";;
"updated DNS: $DOMAIN") echo "$MYNAME: mailinabox API updated $DYNDNSNAME ipv6 OK.";;
"invalid-totp-token"|"missing-totp-token") echo "$MYNAME: invalid TOTP token. Retrying with TOTP token"
if [ ! -x $OATHTOOLCMD ]; then
echo "$MYNAME: oathtool command $OATHTOOLCMD not found. Check and fix please."
exit 99
fi
if [ ! -f $TOTPFILE ]; then
echo "$MYNAME: $TOTPFILE not found. Check and fix please."
exit 99
fi
source $TOTPFILE
TOTP="X-Auth-Token: $(oathtool --totp -b -d 6 $TOTP_KEY)"
STATUST="`$CURLCMD -X PUT -u $USER_NAME:$USER_PASS -H "$TOTP" -s -d $MYIP https://$MIABHOST/admin/dns/custom/$DYNDNSNAME/AAAA`"
case $STATUST in
"OK") echo "$MYNAME: mailinabox API returned OK, cmd succeeded but no update.";;
"updated DNS: $DOMAIN") echo "$MYNAME: mailinabox API updated $DYNDNSNAME ipv6 OK.";;
"invalid-totp-token") echo "$MYNAME: invalid TOTP token.";;
*) echo "$MYNAME: other status from mailinabox API. Please check: $STATUST (2)";;
esac
;;
*) echo "$MYNAME: other status from mailinabox API. Please check: $STATUS (1)";;
esac
fi
done
else
echo "$MYNAME: No ipv6 address found."
exit 99
fi
exit 0
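The two validators are written to be tested on their exit status, as the usage comment above shows; a minimal sanity check with made-up addresses (run in a shell where the functions have been defined, e.g. by pasting them in):

    valid_ipv4 192.0.2.10  && echo good || echo bad   # prints "good"
    valid_ipv4 192.0.2.999 && echo good || echo bad   # prints "bad" (octet > 255)
    valid_ipv6 2001:db8::1 && echo good || echo bad   # prints "good"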

18
tools/dyndns/readme.txt Normal file
View File

@ -0,0 +1,18 @@
Files:
- dyndns.sh
  Dynamic DNS main script. There should be no need to edit it.
- dyndns.domain
  Fill with the top-level domain of your MIAB box.
- dyndns.dynlist
  Fill with the subdomains whose dynamic DNS records should be updated, one per line.
- dyndns.totp
  Fill with the TOTP key. It can be found in the MIAB sqlite database.
- dyndns.cfg
  Fill with the admin user name and password.
- cronjob.sh
  Cron job file. Edit where needed.
How to use:
- Put dyndns.sh, dyndns.domain, dyndns.dynlist, dyndns.totp and dyndns.cfg in a folder on your target system, e.g. /opt/dyndns.
- Put cronjob.sh in a cron folder, e.g. /etc/cron.daily, as shown in the sketch below.
- Edit the files appropriately.
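A minimal install sketch matching the steps above; the paths are the examples from this readme and the permission bits are an assumption, so adjust to your system:

    sudo mkdir -p /opt/dyndns
    sudo cp dyndns.sh dyndns.domain dyndns.dynlist dyndns.totp dyndns.cfg /opt/dyndns/
    sudo chmod 600 /opt/dyndns/dyndns.cfg /opt/dyndns/dyndns.totp   # these hold credentials
    sudo install -m 755 cronjob.sh /etc/cron.daily/dyndns           # run-parts skips file names containing dots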

View File

@ -76,7 +76,8 @@ for setting in settings:
 found = set()
 buf = ""
-input_lines = list(open(filename))
+with open(filename, "r") as f:
+	input_lines = list(f)
 while len(input_lines) > 0:
 	line = input_lines.pop(0)

22
tools/fake_mail Normal file
View File

@ -0,0 +1,22 @@
#!/bin/bash
# Save the command-line arguments passed to this script
# so that they can be translated into a sendmail invocation
if read -t 0; then
message=`cat`
fi
script="$0"
for arg in "$@"; do
if [ "$lastarg" == "-s" ]; then
subject="$arg"
fi
if [[ $arg =~ [[:space:]] ]]; then
arg=\"$arg\"
fi
lastarg="$arg"
done
# send message using sendmail
echo "Subject: $subject
$message" | sendmail -F "`hostname -f`" "$lastarg"
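A hedged usage example, assuming the script is installed somewhere on the PATH as a stand-in for the mail command; the subject, body and address are placeholders. The body is read from stdin, -s sets the subject, and the last argument is treated as the recipient:

    echo "disk usage report for $(hostname)" | fake_mail -s "Daily report" admin@example.com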

BIN
tools/goiplookup.gz Normal file

Binary file not shown.

View File

@ -1,6 +1,7 @@
 #!/bin/bash
 #
 # This script will restore the backup made during an installation
+source setup/functions.sh # load our functions
 source /etc/mailinabox.conf # load global vars
 if [ -z "$1" ]; then
@ -26,7 +27,7 @@ if [ ! -f $1/config.php ]; then
 fi
 echo "Restoring backup from $1"
-service php8.0-fpm stop
+service php$(php_version)-fpm stop
 # remove the current ownCloud/Nextcloud installation
 rm -rf /usr/local/lib/owncloud/
@ -40,10 +41,10 @@ cp "$1/owncloud.db" $STORAGE_ROOT/owncloud/
 cp "$1/config.php" $STORAGE_ROOT/owncloud/
 ln -sf $STORAGE_ROOT/owncloud/config.php /usr/local/lib/owncloud/config/config.php
-chown -f -R www-data.www-data $STORAGE_ROOT/owncloud /usr/local/lib/owncloud
-chown www-data.www-data $STORAGE_ROOT/owncloud/config.php
+chown -f -R www-data:www-data $STORAGE_ROOT/owncloud /usr/local/lib/owncloud
+chown www-data:www-data $STORAGE_ROOT/owncloud/config.php
-sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/occ maintenance:mode --off
+sudo -u www-data php /usr/local/lib/owncloud/occ maintenance:mode --off
-service php8.0-fpm start
+service php$(php_version)-fpm start
 echo "Done"

View File

@ -8,7 +8,7 @@
 source /etc/mailinabox.conf # load global vars
-ADMIN=$(./mail.py user admins | head -n 1)
+ADMIN=$(./management/cli.py user admins | head -n 1)
 test -z "$1" || ADMIN=$1
 echo I am going to unlock admin features for $ADMIN.
@ -20,4 +20,4 @@ echo
 echo Press enter to continue.
 read
-sudo -u www-data php$PHP_VER /usr/local/lib/owncloud/occ group:adduser admin $ADMIN && echo Done.
+sudo -u www-data php /usr/local/lib/owncloud/occ group:adduser admin $ADMIN && echo Done.

View File

@ -17,13 +17,8 @@ accesses = set()
 # Scan the current and rotated access logs.
 for fn in glob.glob("/var/log/nginx/access.log*"):
-	# Gunzip if necessary.
-	if fn.endswith(".gz"):
-		f = gzip.open(fn)
-	else:
-		f = open(fn, "rb")
 	# Loop through the lines in the access log.
-	with f:
+	with (gzip.open if fn.endswith(".gz") else open)(fn, "rb") as f:
 		for line in f:
 			# Find lines that are GETs on the bootstrap script by either curl or wget.
 			# (Note that we purposely skip ...?ping=1 requests which is the admin panel querying us for updates.)
@ -43,7 +38,8 @@ for date, ip in accesses:
 # Since logs are rotated, store the statistics permanently in a JSON file.
 # Load in the stats from an existing file.
 if os.path.exists(outfn):
-	existing_data = json.load(open(outfn))
+	with open(outfn, "r") as f:
+		existing_data = json.load(f)
 	for date, count in existing_data:
 		if date not in by_date:
 			by_date[date] = count

View File

@ -124,13 +124,14 @@ def generate_documentation():
 	""")
 	parser = Source.parser()
-	for line in open("setup/start.sh"):
-		try:
-			fn = parser.parse_string(line).filename()
-		except:
-			continue
-		if fn in ("setup/start.sh", "setup/preflight.sh", "setup/questions.sh", "setup/firstuser.sh", "setup/management.sh"):
-			continue
-		import sys
-		print(fn, file=sys.stderr)
+	with open("setup/start.sh", "r") as start_file:
+		for line in start_file:
+			try:
+				fn = parser.parse_string(line).filename()
+			except:
+				continue
+			if fn in ("setup/start.sh", "setup/preflight.sh", "setup/questions.sh", "setup/firstuser.sh", "setup/management.sh"):
+				continue
+			import sys
+			print(fn, file=sys.stderr)
@ -401,7 +402,8 @@ class BashScript(Grammar):
 	@staticmethod
 	def parse(fn):
 		if fn in ("setup/functions.sh", "/etc/mailinabox.conf"): return ""
-		string = open(fn).read()
+		with open(fn, "r") as f:
+			string = f.read()
 		# tokenize
 		string = re.sub(".* #NODOC\n", "", string)