Compare commits


697 Commits
v0.19a ... main

Author SHA1 Message Date
Joshua Tauberer a332be6a7b
Fixed bugs found by the ShellCheck linter (#1457) 2024-04-03 09:25:32 -04:00
Teal Dulcet c7faccf1fa Fixed SC2244: Prefer explicit -n to check non-empty string. 2024-04-03 09:22:50 -04:00
Teal Dulcet ec497efa69 Quote echo commands to preserve whitespace. 2024-04-03 09:22:50 -04:00
Teal Dulcet 55a8be4aa9 Removed unnecessary bc commands. 2024-04-03 09:22:50 -04:00
Teal Dulcet 3399b25084 Replaced the pwd command with Bash's $PWD variable. 2024-04-03 09:22:50 -04:00
Teal Dulcet 2afd0451c1 Fixed SC2007: Use $((..)) instead of deprecated $[..]. 2024-04-03 09:22:50 -04:00
Teal Dulcet 27cf11d8ec Fixed SC2005: Useless echo. 2024-04-03 09:22:50 -04:00
Teal Dulcet 44d9f6eebd Fixed SC2236: Use -n instead of ! -z. 2024-04-03 09:22:50 -04:00
Teal Dulcet 4b7d4ba0a6 Fixed SC2166: Prefer [ p ] && [ q ] as [ p -a q ] is not well defined. 2024-04-03 09:22:50 -04:00
Teal Dulcet 67bcaea71e Fixed SC2091: Remove surrounding $() to avoid executing output. 2024-04-03 09:22:50 -04:00
Teal Dulcet bdf4155bed Fixed SC2046: Quote to prevent word splitting. 2024-04-03 09:21:34 -04:00
Teal Dulcet f1888f2043 Fixed SC2148: Add a shebang. 2024-04-03 09:21:34 -04:00
Teal Dulcet 33559bb844 Fixed SC2164: Use 'cd ... || exit' in case cd fails. 2024-04-03 09:21:34 -04:00
Teal Dulcet 30c4681e80 Fixed SC2086: Double quote to prevent globbing and word splitting. 2024-04-03 09:20:20 -04:00
Teal Dulcet 133bae1300 Fixed SC2006: Use $(...) notation instead of legacy backticks `...`. 2024-04-03 05:17:25 -07:00
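
The ShellCheck fixes above each follow the linter's standard suggestion for the named rule. As a rough illustration (not the actual Mail-in-a-Box diff), the SC2006, SC2086, and SC2236/SC2244 rules amount to rewrites like this:

```
# Hypothetical before/after snippets for the ShellCheck rules named above.

# SC2006: legacy backticks -> $(...)
hostname=`hostname -f`                     # before
hostname=$(hostname -f)                    # after

# SC2086: double-quote expansions to prevent globbing and word splitting
rm -f $tmpfile                             # before
rm -f "$tmpfile"                           # after

# SC2236 / SC2244: prefer an explicit -n test on a quoted string
if [ ! -z $value ]; then echo set; fi      # before
if [ -n "$value" ]; then echo set; fi      # after
```
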
Joshua Tauberer 830c83daa1 v68 2024-04-01 10:55:52 -04:00
Joshua Tauberer 7382c18e8f CHANGELOG entries 2024-04-01 10:54:21 -04:00
Joshua Tauberer fa72e015ee Update SMTP Smuggling protection to the 'long-term fix'
* Revert "Guard against SMTP smuggling", commit faf23f150c, by restoring the setting to its default.
* Revert "[security] SMTP smuggling: update short term fix (#2346)", commmit e931e103fe, by restoring the setting to its default.
* Set smtpd_forbid_bare_newline=normalize.
2024-03-23 13:15:32 -04:00
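
The long-term fix is a single Postfix parameter. A minimal sketch of applying it by hand (the box configures Postfix through its setup scripts; these commands are only illustrative):

```
# Enable the long-term SMTP smuggling protection and reload Postfix.
postconf -e 'smtpd_forbid_bare_newline=normalize'
systemctl reload postfix
```
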
KiekerJan 1a239c55bb
More robust reading of sshd configuration (#2330)
Use sshd -T instead of directly reading the configuration files
2024-03-23 11:16:40 -04:00
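
`sshd -T` prints the effective, fully resolved server configuration, which is more robust than parsing /etc/ssh/sshd_config (and any included files) directly. A small sketch of the kind of check this enables (the exact keys the status checks inspect are not shown here):

```
# Dump the effective sshd configuration (requires root) and read a few settings.
# Keys are printed lowercased, one "key value" pair per line.
sshd -T | grep -E '^(port|passwordauthentication|permitrootlogin) '
```
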
Gio 9b450469eb
Mail guide: OS X -> macOS (#2306) 2024-03-23 09:04:43 -04:00
jvolkenant 163b1a297e
Silence "wal" output on setup using hide_output (#2368) 2024-03-23 08:49:24 -04:00
Joshua Tauberer 18b8f9ab4b Revert "Allow customizations to Roundcube settings to persist between updates by including a configuration override file, if it exists (#2333)"
This reverts commit 1b8cdeb644.

It didn't execute. I should have tried it first.
2024-03-10 08:25:34 -04:00
KiekerJan 0b1d92388a
Take spamhaus return codes into account in status check and postfix config (#2332) 2024-03-10 08:09:36 -04:00
Crag-Monkey 1b8cdeb644
Allow customizations to Roundcube settings to persist between updates by including a configuration override file, if it exists (#2333) 2024-03-10 08:02:16 -04:00
Bastian Bittorf 1053340124
setup/preflight.sh: fix some minor shellcheck complaints (#2342)
This file passes shellcheck now without errors.
This partially fixes #1457 - the former errors were:

$ shellcheck setup/preflight.sh

In setup/preflight.sh line 1:
^-- SC2148 (error): Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.

In setup/preflight.sh line 29:
if [ $TOTAL_PHYSICAL_MEM -lt 490000 ]; then
     ^-----------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
if [ "$TOTAL_PHYSICAL_MEM" -lt 490000 ]; then

In setup/preflight.sh line 31:
	TOTAL_PHYSICAL_MEM=$(expr \( \( $TOTAL_PHYSICAL_MEM \* 1024 \) / 1000 \) / 1000)
                             ^--^ SC2003 (style): expr is antiquated. Consider rewriting this using $((..)), ${} or [[ ]].
                                        ^-----------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
	TOTAL_PHYSICAL_MEM=$(expr \( \( "$TOTAL_PHYSICAL_MEM" \* 1024 \) / 1000 \) / 1000)

In setup/preflight.sh line 38:
if [ $TOTAL_PHYSICAL_MEM -lt 750000 ]; then
     ^-----------------^ SC2086 (info): Double quote to prevent globbing and word splitting.

Did you mean:
if [ "$TOTAL_PHYSICAL_MEM" -lt 750000 ]; then

For more information:
  https://www.shellcheck.net/wiki/SC2148 -- Tips depend on target shell and y...
  https://www.shellcheck.net/wiki/SC2086 -- Double quote to prevent globbing ...
  https://www.shellcheck.net/wiki/SC2003 -- expr is antiquated. Consider rewr...
2024-03-10 08:01:13 -04:00
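
For reference, the flagged lines end up roughly like the following once the quoting and arithmetic suggestions are applied (a sketch, not the exact committed diff; how TOTAL_PHYSICAL_MEM is obtained here is only a stand-in):

```
#!/bin/bash
# SC2148: declare the shell with a shebang (above).
TOTAL_PHYSICAL_MEM=$(grep -oP '^MemTotal:\s*\K[0-9]+' /proc/meminfo)   # in KiB (stand-in)

# SC2086: quote the expansion in the test.
if [ "$TOTAL_PHYSICAL_MEM" -lt 490000 ]; then
	# SC2003 + SC2086: arithmetic expansion instead of the antiquated expr.
	TOTAL_PHYSICAL_MEM=$(( TOTAL_PHYSICAL_MEM * 1024 / 1000 / 1000 ))
	echo "Insufficient memory: ${TOTAL_PHYSICAL_MEM} MB"
fi
```
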
Joshua Tauberer 315d2cf691
Fixed errors found by the Ruff Python linter (#2343) 2024-03-10 07:57:19 -04:00
Teal Dulcet dbc2b5eee0 Fixed ISC003 (explicit-string-concatenation): Explicitly concatenated string should be implicitly concatenated 2024-03-10 07:56:49 -04:00
Teal Dulcet 775a4223de Fixed F821 (undefined-name): Undefined name `e` 2024-03-10 07:56:49 -04:00
Teal Dulcet 618c466b84 Fixed SIM114 (if-with-same-arms): Combine `if` branches using logical `or` operator 2024-03-10 07:56:49 -04:00
Teal Dulcet a32354fd91 Fixed PLR5501 (collapsible-else-if): Use `elif` instead of `else` then `if`, to reduce indentation 2024-03-10 07:56:49 -04:00
Teal Dulcet 1d79f9bb2b Fixed PERF401 (manual-list-comprehension): Use a list comprehension to create a transformed list 2024-03-10 07:56:49 -04:00
Teal Dulcet cacf6d2006 Fixed E721 (type-comparison): Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks 2024-03-10 07:56:49 -04:00
Teal Dulcet f0377dd59e Fixed SIM105 (suppressible-exception) 2024-03-10 07:56:49 -04:00
Teal Dulcet 6a47133e3f Fixed F811 (redefined-while-unused): Redefinition of unused `sys` from line 10 2024-03-10 07:56:49 -04:00
Teal Dulcet 7f456d8e8b Fixed ISC002 (multi-line-implicit-string-concatenation): Implicitly concatenated string literals over multiple lines 2024-03-10 07:56:49 -04:00
Teal Dulcet e466b9bb53 Fixed RUF005 (collection-literal-concatenation) 2024-03-10 07:56:49 -04:00
Teal Dulcet 0e9193651d Fixed PLW1514 (unspecified-encoding): `open` in text mode without explicit `encoding` argument 2024-03-10 07:56:49 -04:00
Teal Dulcet a02b59d4e4 Fixed F401 (unused-import): `socket.timeout` imported but unused 2024-03-10 07:56:49 -04:00
Teal Dulcet 15bddcbc39 Fixed RUF010 (explicit-f-string-type-conversion): Use explicit conversion flag 2024-03-10 07:56:49 -04:00
Teal Dulcet c719fce40a Fixed UP032 (f-string): Use f-string instead of `format` call 2024-03-10 07:56:49 -04:00
Teal Dulcet 3111cf56de Fixed EM102 (f-string-in-exception): Exception must not use an f-string literal, assign to variable first 2024-03-10 07:56:49 -04:00
Teal Dulcet 6508d47da1 Fixed C405 (unnecessary-literal-set): Unnecessary `list` literal (rewrite as a `set` literal) 2024-03-10 07:56:49 -04:00
Teal Dulcet 9b961b7ba0 Fixed UP024 (os-error-alias): Replace aliased errors with `OSError` 2024-03-10 07:56:49 -04:00
Teal Dulcet b13cef9b1d Fixed PIE790 (unnecessary-placeholder): Unnecessary `pass` statement 2024-03-10 07:56:49 -04:00
Teal Dulcet 8b9d3ec094 Fixed W292 (missing-newline-at-end-of-file): No newline at end of file 2024-03-10 07:56:49 -04:00
Teal Dulcet d1d3d08d70 Fixed B006 (mutable-argument-default): Do not use mutable data structures for argument defaults 2024-03-10 07:56:49 -04:00
Teal Dulcet 922c59ddaf Fixed SIM212 (if-expr-with-twisted-arms): Use `with_lines if with_lines else []` instead of `[] if not with_lines else with_lines` 2024-03-10 07:56:49 -04:00
Teal Dulcet 20a99c0ab8 Fixed UP041 (timeout-error-alias): Replace aliased errors with `TimeoutError` 2024-03-10 07:56:49 -04:00
Teal Dulcet 54af4725f9 Fixed C404 (unnecessary-list-comprehension-dict): Unnecessary `list` comprehension (rewrite as a `dict` comprehension) 2024-03-10 07:56:49 -04:00
Teal Dulcet fd4fcdaf53 Fixed E712 (true-false-comparison): Comparison to `False` should be `cond is False` or `if not cond:` 2024-03-10 07:56:49 -04:00
Teal Dulcet d661d623dc Fixed RUF017 (quadratic-list-summation): Avoid quadratic list summation 2024-03-10 07:56:49 -04:00
Teal Dulcet f621789298 Fixed SIM118 (in-dict-keys): Use `key in dict` instead of `key in dict.keys()` 2024-03-10 07:56:49 -04:00
Teal Dulcet ec32e1d578 Fixed E703 (useless-semicolon): Statement ends with an unnecessary semicolon 2024-03-10 07:56:49 -04:00
Teal Dulcet 57dcd4bb51 Fixed E713 (not-in-test): Test for membership should be `not in` 2024-03-10 07:56:49 -04:00
Teal Dulcet 845393b6e0 Fixed RET503 (implicit-return): Missing explicit `return` at the end of function able to return non-`None` value 2024-03-10 07:56:49 -04:00
Teal Dulcet c585c1ecf6 Fixed W291 (trailing-whitespace): Trailing whitespace 2024-03-10 07:56:49 -04:00
Teal Dulcet e0e6f1081b Fixed C414 (unnecessary-double-cast-or-process): Unnecessary `list` call within `sorted()` 2024-03-10 07:56:49 -04:00
Teal Dulcet 4999ed7b1c Fixed Q003 (avoidable-escaped-quote): Change outer quotes to avoid escaping inner quotes 2024-03-10 07:54:51 -04:00
Teal Dulcet ca8f06d590 Fixed PLR1711 (useless-return): Useless `return` statement at end of function 2024-03-10 07:54:51 -04:00
Teal Dulcet 57d05c1ab2 Fixed B007 (unused-loop-control-variable) 2024-03-10 07:54:51 -04:00
Teal Dulcet c953e5784d Fixed C401 (unnecessary-generator-set): Unnecessary generator (rewrite as a `set` comprehension) 2024-03-10 07:54:51 -04:00
Teal Dulcet 81a4da0181 Fixed SIM110 (reimplemented-builtin) 2024-03-10 07:54:51 -04:00
Teal Dulcet 99d3929f99 Fixed E711 (none-comparison) 2024-03-10 07:54:51 -04:00
Teal Dulcet 541f31b1ba Fixed FURB113 (repeated-append) 2024-03-10 07:54:51 -04:00
Teal Dulcet e8d1c037cb Fixed SIM102 (collapsible-if): Use a single `if` statement instead of nested `if` statements 2024-03-10 07:54:51 -04:00
Teal Dulcet 67b9d0b279 Fixed PLW0108 (unnecessary-lambda): Lambda may be unnecessary; consider inlining inner function 2024-03-10 07:54:51 -04:00
Teal Dulcet 3d72c32b1d Fixed W605 (invalid-escape-sequence) 2024-03-10 07:54:51 -04:00
Teal Dulcet 14a5613dc8 Fixed UP031 (printf-string-formatting): Use format specifiers instead of percent format 2024-03-10 07:54:51 -04:00
Teal Dulcet 64540fbb44 Fixed UP034 (extraneous-parentheses): Avoid extraneous parentheses 2024-03-10 07:54:51 -04:00
Teal Dulcet eefc0514b2 Fixed UP030 (format-literals): Use implicit references for positional format fields 2024-03-10 07:54:51 -04:00
Teal Dulcet fba92de051 Fixed SIM108 (if-else-block-instead-of-if-exp) 2024-03-10 07:54:51 -04:00
Teal Dulcet 51dc7615f7 Fixed RSE102 (unnecessary-paren-on-raise-exception): Unnecessary parentheses on raised exception 2024-03-10 07:54:51 -04:00
Teal Dulcet 13b38cc04d Fixed F841 (unused-variable) 2024-03-10 07:54:51 -04:00
Teal Dulcet 2b426851f9 Fixed UP032 (f-string): Use f-string instead of `format` call 2024-03-10 07:54:51 -04:00
Teal Dulcet b7f70b17ac Fixed RET504 (unnecessary-assign) 2024-03-10 07:54:51 -04:00
Teal Dulcet 6bfd1e5140 Fixed W293 (blank-line-with-whitespace): Blank line contains whitespace 2024-03-10 07:54:51 -04:00
Teal Dulcet 555ecc1ebb Fixed PIE810 (multiple-starts-ends-with): Call `startswith` once with a `tuple` 2024-03-10 07:54:51 -04:00
Teal Dulcet dd61844ced Fixed EM101 (raw-string-in-exception): Exception must not use a string literal, assign to variable first 2024-03-10 07:54:51 -04:00
Teal Dulcet 49124cc9ca Fixed PLR6201 (literal-membership): Use a `set` literal when testing for membership 2024-03-10 07:54:51 -04:00
Teal Dulcet cb922ec286 Fixed UP015 (redundant-open-modes): Unnecessary open mode parameters 2024-03-10 07:54:49 -04:00
Teal Dulcet 0ee64f2fe8 Fixed F401 (unused-import) 2024-03-10 07:54:21 -04:00
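
Each of the Ruff fixes above corresponds to a rule code. For anyone reproducing the cleanup, a hypothetical invocation (rule selection and paths are examples, not the project's actual Ruff configuration) looks like:

```
# Install the linter and list violations for a few of the rule families above.
pip install ruff
ruff check --select F,E,W,UP,SIM,PIE,RET,RUF,PERF,PLR,EM,ISC management/ tools/

# Many of these rules have automatic fixes:
ruff check --fix --select UP032,C405,SIM118 management/ tools/
```
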
KiekerJan 785c337fb3
Make reading of previous status check result more robust (#2347) 2024-03-10 07:27:04 -04:00
KiekerJan 293d56c781
Update javascript libraries used by control panel (#2351) 2024-03-10 07:26:33 -04:00
KiekerJan 040d0cbb7c
Update roundcube to 1.6.6 (#2360) 2024-03-10 07:24:29 -04:00
Michael Heuberger 111468efb9
Bump Nextcloud to v26.0.12 (#2310)
Also
- bumps calendar and contacts apps
- reformats some comments (line-breaking)
- adds extra comments for the next developer
2024-03-10 07:22:51 -04:00
John James Jacoby 4ad679da47
Issue-2354: Silence "wal" output on setup (#2356)
Silence "wal" output from RoundCube Sqlite customization, inside of webmail.sh.

Co-authored-by: solomon-s-b

Fixes #2354.
2024-03-10 07:16:03 -04:00
solomon-s-b 84919fefa4
Fix miab-munin.conf filter not capturing HTTP/2.0 (#2359) 2024-03-10 07:15:25 -04:00
KiekerJan e931e103fe
[security] SMTP smuggling: update short term fix (#2346)
Update short term fix according to postfix advisory at https://www.postfix.org/smtp-smuggling.html.
2024-01-10 09:34:06 -05:00
Joshua Tauberer 7646095b94 v67 2023-12-22 08:56:43 -05:00
Joshua Tauberer faf23f150c Guard against SMTP smuggling
This short-term workaround is recommended at https://www.postfix.org/smtp-smuggling.html:

    smtpd_data_restrictions=reject_unauth_pipelining
2023-12-22 08:54:15 -05:00
Joshua Tauberer 8e4e9add78 Version 66 2023-12-17 16:31:18 -05:00
KiekerJan fa8c7ddef5
Upgrade roundcube to 1.6.5 (#2329) 2023-12-04 09:23:36 -05:00
bilogic 6d6ce25e03
Allow specifying another repo to install from in bootstrap.sh (#2334) 2023-12-04 09:22:54 -05:00
Joshua Tauberer 371f5bc1b2 Fix virtualenv creation reported in #2335 2023-11-28 07:25:50 -05:00
Joshua Tauberer 0314554207 Version 65 2023-10-27 06:02:22 -04:00
matidau 46d55f7866
Update zpush.sh to version 2.7.1 (#2315)
Updating to the latest release; bug fixes, no new features.
2023-10-26 09:04:13 -04:00
KiekerJan 2bbc317873
Update Roundcube to 1.6.4 (#2317) 2023-10-26 09:03:29 -04:00
clpo13 28f929dc13
Fix typo in system-backup.html: Amazone -> Amazon (#2311) 2023-10-10 13:22:19 -04:00
Joshua Tauberer e419b62034 Version 64 2023-09-02 19:46:24 -04:00
Joshua Tauberer a966913963
Fix command line arguments for duplicity 2.1 (#2301) 2023-09-02 15:54:16 -04:00
Joshua Tauberer 08defb12be Add a new backup.py command to print the duplicity command to the console to help debugging 2023-09-02 07:49:41 -04:00
Jeff Volkenant 7be687e601 Move source and target positional arguments to the end, required for Duplicity 2.1.0
(Modified by JT.)
2023-09-02 07:28:48 -04:00
Aaron Ten Clay 62efe985f1
Disable OpenDMARC sending reports (#2299)
OpenDMARC report messages, while potentially useful for peer operators of mail servers, are abusable and should not be enabled by default. This change prioritizes the safety of the Box's reputation.
2023-09-02 07:10:04 -04:00
Alex df44056bae
Fix checksums in nextcloud.sh (#2293) 2023-09-02 07:07:12 -04:00
Dmytro Kyrychuk 3148c621d2
Fix issue with slash (/) characters in B2 Application Key (#2281)
Urlencode B2 Application Key when saving configuration, urldecode it
back when reading. Duplicity accepts urlencoded target directly, no
decoding is necessary when backup is performed.

Resolve #1964
2023-09-02 07:03:24 -04:00
Michael Heuberger 81866de229
Amend --always option to all git describe commands (#2275) 2023-09-02 06:59:39 -04:00
matidau 674ce92e92
Fix z-push-admin broken in v60 (#2263)
Update zpush.sh to create two sbin bash scripts for z-push-admin and z-push-top using PHP_VER.
2023-09-02 06:55:15 -04:00
Darren Sanders c034b0f789 Fix how the value is being passed for the gpg-options parameter
Duplicity v2.1.0 backups are failing with the error:
"... --gpg-options expected one argument".

The issue is that duplicity v2.1.0 began using the argparse Python
library and the parse_known_args function. This function
interprets the argument being passed, "--cipher-algo=AES256",
as an argument name (because of the leading '-') and not as an
argument value. Because of that it exits with an error and
reports that the --gpg-options arg is missing its value.

Adding an extra set of quotes around this string causes
parse_known_args to interpret the string as an argument
value.
2023-08-30 16:34:17 -07:00
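
The fix amounts to an extra layer of quoting so that argparse in duplicity 2.1.0 treats the option string as a value rather than a new flag. A minimal sketch (the surrounding duplicity arguments are placeholders, not the box's full backup command):

```
# Before: argparse sees "--cipher-algo=AES256" as a separate (unknown) option
# and reports that --gpg-options expected one argument.
#   duplicity full --gpg-options "--cipher-algo=AES256" /tmp/source file:///tmp/backup-target

# After: the inner quotes keep the string attached to --gpg-options as its value.
duplicity full --gpg-options "'--cipher-algo=AES256'" /tmp/source file:///tmp/backup-target
```
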
Joshua Tauberer cd45d08409 Version 63 2023-07-29 12:11:29 -04:00
Michael Heuberger 98628622c7
Bump Nextcloud to v25.0.7 (#2268)
Also
- bumps calendar and contacts apps
- adds extra migration steps between these versions
- adds cron job for Calendar updates
- rotates nextcloud log file after upgrading
- adds primary key indices migrations
- adjusts configs slightly
- adds more well-known entries in nginx to improve service discovery
- reformats some comments (line-breaking)
2023-06-16 11:49:55 -04:00
Joshua Tauberer 8b19d15735 Version 62 2023-05-20 08:57:32 -04:00
matidau 93380b243f
Update zpush.sh to version 2.7.0 (#2236) 2023-05-13 10:27:42 -04:00
Joshua Tauberer fb0a3b0489
Restore Roundcube's password reset tool by removing `PRAGMA journal_mode = WAL` from Roundcube source (#2199) 2023-05-13 10:26:41 -04:00
Joshua Tauberer 3bc9d07aeb Roundcube 1.6.1 2023-05-13 07:00:54 -04:00
Joshua Tauberer 51ed030917 Allow setting the S3 region name in backup settings to pass to duplicity
It's stuffed inside the username portion of the target URL. We already mangle the target before passing it to duplicity so there wasn't a need for a new field.

Fixes the issue raised in #2200, #2216.
2023-05-13 07:00:29 -04:00
Joshua Tauberer e828d63a85 Allow secondary DNS xfr: items to be hostnames that are resolved to IP addresses when generating the nsd configuration 2023-05-13 07:00:10 -04:00
Joshua Tauberer 0ee0784bde Changelog entries 2023-05-13 06:59:49 -04:00
Peter Tóth 6d43d24552
Improve control panel panel switching behaviour by using the URL fragment (#2252) 2023-05-13 06:49:34 -04:00
Peter Tóth 963fb9f2e6
email_administrator.py: fix report formatting (#2249) 2023-05-13 06:40:31 -04:00
KiekerJan c9584148a0
Fix issue where sshkeygen fails when ipv6 is disabled (#2248) 2023-05-13 06:39:46 -04:00
Tomas P 9a33f9c5ff
Fix dynazoom due to change in handling su (#2247)
It seems that in Ubuntu 22.04 the behavior of su changed, making - (alias for -l, --login) mutually exclusive with --preserve-environment, which is required for passing environment variables for CGI to work for dynazoom in munin. Dropping - fixes the issue.
2023-05-13 06:38:00 -04:00
Michael Heuberger 95530affbf
Bump Nextcloud to v23.0.12 and its apps (#2244) 2023-05-13 06:37:24 -04:00
Hugh Secker-Walker f72be0be7c
feat(rsync-backup-ui): Add a Copy button to put public key on clipboard in rsync UI (#2227) 2023-05-13 06:36:31 -04:00
KiekerJan 8aa98b25b5 Update configuration of Roundcube password plugin for Roundcube 1.6 2023-05-13 06:22:28 -04:00
KiekerJan 3c15081673 Remove journal PRAGMA from Roundcube source which broke the database for postfix
See #2185.
2023-05-13 06:20:13 -04:00
Joshua Tauberer 01d8e9f3b4 Revert "Disable Roundcube password plugin since it was corrupting the user database (#2198)"
This reverts commit 1587248762.

See subsequent commits.
2023-05-13 06:20:13 -04:00
Adam Elaoumari 88260bb610
Fixed year in changelog (#2241)
Fixed year of version 61.1 (2022 -> 2023)
2023-03-08 10:29:02 -05:00
Joshua Tauberer 6f94412204 v61.1 2023-01-28 11:25:21 -05:00
Joshua Tauberer c77d1697a7 Revert "Improve error messages in the management tools when external command-line tools are run"
Command line arguments have user secrets in some cases which should not be included in error messages.

This reverts commit 26709a3c1d.

Reported by AK.
2023-01-28 11:24:38 -05:00
Hugh Secker-Walker 31bbef3401
chore(setup): Make sed fingerprint patterns in start.sh be case insensitive (#2201) 2023-01-28 11:12:40 -05:00
Hugh Secker-Walker 7af713592a
feat(status page): Add summary of ok/error/warning counts (#2204)
* feat(status page): Add summary of ok/error/warning counts

* simplify a bit

---------

Co-authored-by: Hugh Secker-Walker <hsw+miac@hodain.net>
Co-authored-by: Joshua Tauberer <jt@occams.info>
2023-01-28 11:11:17 -05:00
Hugh Secker-Walker 4408cb1fba
fix(rsync-backup): Provide default port 22 for rsync usage in backup.py (#2226)
Co-authored-by: Hugh Secker-Walker <hsw+miac@hodain.net>
2023-01-28 11:04:46 -05:00
Joshua Tauberer 5e3e4a2161 v61 2023-01-21 08:20:48 -05:00
Joshua Tauberer 61d1ea1ea7 Changelog entries 2023-01-15 10:17:10 -05:00
Joshua Tauberer b3743a31e9 Add a status checks check that fail2ban is running using fail2ban-client 2023-01-15 10:17:10 -05:00
Joshua Tauberer 26709a3c1d Improve error messages in the management tools when external command-line tools are run 2023-01-15 10:17:10 -05:00
jcm-shove-it 20ec6c2080
Updated security.md to reflect the support of ubuntu 22.04 (#2219) 2023-01-15 10:05:36 -05:00
Steven Conaway 7a79153afe
Remove old darkmode background color (#2218)
Removing this old background color solves the problem of the bottom of short pages (like `/admin`'s login page) being white. The background was being set to black, which would be inverted, so it'd appear white. Since the `filter:` css has [~97% support](https://caniuse.com/?search=filter), I think that this change should be made. Tested on latest versions of Chrome (mac and iOS), Firefox, and Safari (mac and iOS).
2023-01-15 10:05:13 -05:00
Hugh Secker-Walker a2565227f2
feat(rsync-port): Add support for non-standard ssh port for rsync backup (#2208) 2023-01-15 10:03:05 -05:00
Hugh Secker-Walker 02b34ce699
fix(backup-display): Fix parsing of rsync target in system-backup.html, fixes #2206 (#2207) 2023-01-15 10:01:07 -05:00
Hugh Secker-Walker 820a39b865
chore(python open): Refactor open and gzip.open to use context manager (#2203)
Co-authored-by: Hugh Secker-Walker <hsw+miac@hodain.net>
2023-01-15 08:28:43 -05:00
Hugh Secker-Walker 57047d96e9
chore(setup): Update obsolete chown group syntax (#2202)
Co-authored-by: Hugh Secker-Walker <hsw+miac@hodain.net>
2023-01-15 08:25:36 -05:00
KiekerJan 1587248762
Disable Roundcube password plugin since it was corrupting the user database (#2198) 2023-01-15 08:22:43 -05:00
KiekerJan 0fc5105da5
Fixes to DNS lookups during status checks when there are timeouts, enforce timeouts better (#2191)
* add dns query handling changes

* replace exception pass with error message

* simplify dns exception catching

* Add not set case to blacklist lookup result handling
2023-01-15 08:20:08 -05:00
KiekerJan c29593b5ef
explicitly enable fail2ban which didn't start (#2190) 2023-01-15 08:10:04 -05:00
Joshua Tauberer 3314c4f7de v60.1 2022-10-30 08:18:13 -04:00
Joshua Tauberer 1f60236985 Upgrade Nextcloud to 23.0.4 (contacts to 4.2.0, calendar to 3.5.0)
This fixes the monthly view calendar items being in random order.
2022-10-30 08:16:54 -04:00
alento-group 32c68874c5
Fix NSD not restarting (#2182)
A previous commit (0a970f4bb2) broke nsd restarting. This fixes that change by reverting it.

Josh added: Use nsd-control with reconfig and reload if they succeed and only fall back to restarting nsd if they fail

Co-authored-by: Joshua Tauberer <jt@occams.info>
2022-10-30 08:16:03 -04:00
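
The added behavior described above (try nsd-control first, restart only as a fallback) can be sketched as:

```
# Try a graceful reconfig/reload via nsd-control; fall back to a full restart
# only if either step fails. (Illustrative; not the exact setup-script code.)
if ! (nsd-control reconfig && nsd-control reload); then
	systemctl restart nsd
fi
```
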
Joshua Tauberer 286a4bd9e7 Remove stray quote in bootstrap.sh
Reported at https://discourse.mailinabox.email/t/version-60-for-ubuntu-22-04-is-released/9558/4.
2022-10-12 06:11:02 -04:00
Joshua Tauberer ddf8e857fd
Support Ubuntu 22.04 Jammy Jellyfish (#2083) 2022-10-11 21:18:34 -04:00
Joshua Tauberer 4d5ff0210b Version 60 2022-10-11 21:14:31 -04:00
Joshua Tauberer 89cd9fb611 Increase gunicorn's worker timeout since some /admin commands take a long time 2022-10-08 08:23:48 -04:00
Joshua Tauberer 22a6270657 Remove old setup step to uninstall acme library 2022-10-08 08:23:48 -04:00
Joshua Tauberer 0a970f4bb2 Use nsd-control to refresh nsd after zone files are rewritten rather than 'service nsd restart'
I am not sure if this was the problem but nsd didn't serve updated zonefiles on my box and 'service nsd restart' must have been used, so maybe it doesn't reload zones.
2022-10-08 07:24:57 -04:00
Joshua Tauberer 9b111e2493 Update to Nextcloud 23.0.8 (contacts 4.2.0, calendar 3.5.0) 2022-10-08 07:23:21 -04:00
jvolkenant b8feb77ef4
Move postgrey database under $STORAGE_ROOT (#2077) 2022-09-24 13:17:55 -04:00
Joshua Tauberer 3c44604316 Install 'file' package
The command is used in mailinabox-postgrey-whitelist. Reported missing (on systems that don't install it by default) in #2083.
2022-09-24 10:10:50 -04:00
Steve Hay 1e1a054686
BUGFIX: Correctly handle the multiprocessing for run_checks in the management daemon (#2163)
See discussion here: #2083

Co-authored-by: Steve Hay <hay.steve@gmail.com>
2022-09-24 09:56:27 -04:00
kiekerjan d584a41e60
Update Roundcube to 1.6.0 (#2153) 2022-09-17 09:20:20 -04:00
downtownallday 56074ae035 Tighten roundcube session config (#2138)
Merges #2138.
2022-09-17 09:09:00 -04:00
downtownallday 30631b0fc5 Fix undefined variable 'val' in tools/editconf.py (#2137)
Merges #2137.
2022-09-17 09:09:00 -04:00
Steve Hay 84da4e6000 Update dovecot to use same DH parameters file as the other services
Originally from #2157.
2022-09-17 09:07:54 -04:00
Joshua Tauberer 58ded74181 Restore the backup S3 host select box if an S3 target has been set
Also remove unnecessary import added in 7cda439c. Was a mistake from edits during PR review.
2022-09-17 09:07:54 -04:00
Steve Hay 3fd2e3efa9
Replace Flask built-in WSGI server with gunicorn (#2158) 2022-09-17 08:03:16 -04:00
Steve Hay 7cda439c80
Port boto to boto3 and fix asyncio issue in the management daemon (#2156)
Co-authored-by: Steve Hay <hay.steve@gmail.com>
2022-09-17 07:57:12 -04:00
Joshua Tauberer 91fc74b408 Setup fixes for Ubuntu 22.04
Nextcloud:
* The Nextcloud user_external 1.0.0 package for Nextcloud 21.0.7 isn't available from Nextcloud's releases page, but it's not needed in an intermediate upgrade step (hopefully), so we can skip it.
* Nextcloud upgrade steps should not be elifs because multiple intermediate upgrades may be needed.
* Continue if the user_external backend migration fails. Maybe it's not necessary. It gives a scary error message though.
* Remove a line that removes an old file that hasn't been in use since 2019 and the expectation is that Ubuntu 22.04 installations are on fresh machines.

Backups:
* For duplicity, we now need boto3 for AWS.
2022-09-03 07:50:36 -04:00
Sudheesh Singanamalla d7244ed920
Fixes #2149 Append ; in policy strings for DMARC settings (#2151)
Signed-off-by: Sudheesh Singanamalla <sudheesh@cloudflare.com>
2022-08-19 13:23:42 -04:00
David Duque e0c0b5053c Upgrade Nextcloud External User Backend to v3.0.0
Co-Authored-By: Joshua Tauberer <jt@occams.info>
2022-07-28 14:42:51 -04:00
Joshua Tauberer 268b31685d Ensure STORAGE_ROOT has a+rx permission since processes run by different system users need to access files within it 2022-07-28 14:42:51 -04:00
Joshua Tauberer ab71abbc7c Update to latest cryptography Python package, add missing source at top of management.sh so it can run standalone (needs STORAGE_ROOT) 2022-07-28 14:42:51 -04:00
Joshua Tauberer 87e6df9e28 Fix roundcube dependency missing imap and unneeded ldap 2022-07-28 14:42:51 -04:00
Felix Matouschek 558f2db31f system.sh: Remove no longer needed haveged (#2090)
Starting from kernel 5.6, haveged is obsolete. Therefore remove it in
Ubuntu 22.04.

See https://github.com/jirka-h/haveged/issues/57
2022-07-28 14:42:51 -04:00
Joshua Tauberer c23dd701f0 Start changelog and instructions updates for version 60 supporting Ubuntu 22.04
To scan for updated apt packages in Ubuntu 22.04, I ran on Ubuntu 18.04 and 22.04 and compared the output:

```
for package in openssl openssh-client haveged pollinate fail2ban ufw bind9 nsd ldnsutils nginx dovecot-core postfix opendkim opendkim-tools opendmarc postgrey spampd razor pyzor dovecot-antispam sqlite3 duplicity certbot munin munin-node php python3; do
  echo -n "$package ";
  dpkg-query --showformat='${Version}' --show $package;
  echo
done
```
2022-07-28 14:42:51 -04:00
Joshua Tauberer 0a7b9d5089 Update dovecot, spampd settings for Ubuntu 22.04
* dovecot's ssl_protocols became ssl_min_protocol in 2.3
* spampd fixed a bug so we can remove lmtp_destination_recipient_limit=1 in postfix
2022-07-28 14:34:45 -04:00
Joshua Tauberer 1eddf9a220 Upgrade to Nextcloud 23.0.4
The first version supporting PHP 8.0 is Nextcloud 21. Therefore we can add migrations only to Nextcloud 21 forward, and so we only support migrating from Nextcloud 20 (Mail-in-a-Box versions v0.51+). Migration steps through Nextcloud 21 and 22 are added.

Also:

* Fix PHP APCu settings to be before Nextcloud tools are run.
2022-07-28 14:34:45 -04:00
Joshua Tauberer 78d71498fa Upgrade from PHP 7.2 to 8.0 for Ubuntu 22.04
* Add the PHP PPA.
* Specify the version when invoking the php CLI.
* Specify the version in package names.
* Update paths to 8.0 (using a variable in the setup scripts).
* Update z-push's php-xsl dependency to php8.0-xml.
* php-json is now built-into PHP.

Although PHP 8.1 is the stock version in Ubuntu 22.04, it's not supported by Nextcloud yet, and it likely will never be supported by the version of Nextcloud that succeeds the last version of Nextcloud supporting PHP 7.2. Since we have to install that next version so that an upgrade is permitted, skipping to PHP 8.1 may not be easily possible.
2022-07-28 14:02:46 -04:00
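
The PHP upgrade bullets above boil down to pinning a version everywhere PHP is referenced. A rough sketch, assuming the PPA is the widely used ondrej/php archive and that the setup scripts expose the version as PHP_VER (the variable name appears in the zpush.sh commit above):

```
# Add a PHP packaging PPA (assumed to be ppa:ondrej/php) and install a pinned version.
PHP_VER=8.0
add-apt-repository -y ppa:ondrej/php
apt-get update
apt-get install -y "php${PHP_VER}-cli" "php${PHP_VER}-fpm" "php${PHP_VER}-xml"

# Invoke the CLI with an explicit version rather than the distribution default.
"php${PHP_VER}" -v
# Configuration paths are versioned too, e.g. /etc/php/8.0/...
ls "/etc/php/${PHP_VER}/"
```
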
Joshua Tauberer b41a0ad80e Drop some hacks that we needed for Ubuntu 18.04
* certbot's PPA is no longer needed because a recent version is now included in the Ubuntu repository.
* Un-pin b2sdk (reverts 69d8fdef99 and d829d74048).
* Revert boto+s3 workaround for duplicity (partial revert of 99474b348f).
* Revert old "fix boto 2 conflict on Google Compute Engine instances" (cf33be4596) which is probably no longer needed.
2022-07-28 14:02:46 -04:00
Rauno Moisto 78569e9a88 Fix DeprecationWarning in dnspython query vs resolve method
The resolve method disables resolving relative names by default. This change probably makes a7710e90 unnecessary. @JoshData added some additional changes from query to resolve.
2022-07-28 14:02:46 -04:00
Daniel Mabbett 8cb360fe36 Configure nsd listening interfaces before installing nsd so that it does not interfere with bind9 2022-07-28 14:02:46 -04:00
Joshua Tauberer f534a530d4 Update and drop some package and file names for Ubuntu 22.04
* Fix path to bind9 startup options file in Ubuntu 22.04.
* tinymce has not been a Roundcube requirement recently and is no longer a package in Ubuntu 22.04
* Upgrade Vagrant box to Ubuntu 22.04
2022-07-28 14:02:46 -04:00
Joshua Tauberer 2abcafd670 Update Ubuntu version checks from 18.04 to 22.04 2022-07-28 14:02:44 -04:00
Joshua Tauberer 3c3d62ac27 Version 57a 2022-06-19 08:58:09 -04:00
Joshua Tauberer d829d74048 Pin b2sdk to version 1.14.1 in the virtualenv also
We install b2sdk in two places: once globally for duplicity (see
9d8fdef9915127f016eb6424322a149cdff25d7 for #2125) and once in
a virtualenv used by our control panel. The latter wasn't pinned
when the former was, but it should be, to fix new Python compatibility
issues.

Anyone who updated Python packages recently (so anyone who upgraded
Mail-in-a-Box) started encountering these issues.

Fixes #2131.

See https://discourse.mailinabox.email/t/backblaze-b2-backup-not-working-since-v57/9231.
2022-06-18 13:15:59 -04:00
Joshua Tauberer 2aca421415 Version 57 2022-06-12 08:18:42 -04:00
Joshua Tauberer 99474b348f Update backup to be compatible with duplicity 0.8.23
We were using duplicity 0.8.21-ppa202111091602~ubuntu1 from the duplicity PPA probably until June 5, which is when my box automatically updated to 0.8.23-ppa202205151528~ubuntu18.04.1. Starting with that version, two changes broke backups:

* The default s3 backend was changed to boto3. But boto3 depends on the AWS SDK which does not support Ubuntu 18.04, so we can't install it. Instead, we map s3: backup target URLs to the boto+s3 scheme which tells duplicity to use legacy boto. This should be reverted when we can switch to boto3.
* Contrary to the documentation, the s3 target no longer accepts a S3 hostname in the URL. It now reads the bucket from the hostname part of the URL. So we now drop the hostname from our target URL before passing it to duplicity and we pass the endpoint URL in a separate command-line argument. (The boto backend was dropped from duplicity's "uses_netloc" in 74d4cf44b1 (f5a07610d36bd242c3e5b98f8348879a468b866a_37_34), but other changes may be related.)

The change of target URL (due to both changes) seems to also cause duplicity to store cached data in a different directory within $STORAGE_ROOT/backup/cache, so on the next backup it will re-download cached manifest/signature files. Since the cache directory will still hold the prior data which is no longer needed, it might be a good idea to clear out the cache directory to save space. A system status checks message is added about that.

Fixes #2123
2022-06-12 08:17:48 -04:00
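
A sketch of the URL mangling described above (hostnames and bucket are placeholders; the endpoint is handed to duplicity separately rather than in the target URL):

```
# Configured target includes the S3 endpoint host:
TARGET="s3://s3.example-host.com/my-bucket/box.example.com"

# Drop the hostname and switch to the legacy boto scheme for duplicity 0.8.23:
ENDPOINT="https://$(echo "$TARGET" | cut -d/ -f3)"
DUP_TARGET="boto+s3://$(echo "$TARGET" | cut -d/ -f4-)"

echo "$DUP_TARGET"   # boto+s3://my-bucket/box.example.com
echo "$ENDPOINT"     # https://s3.example-host.com (passed to duplicity as its own argument)
```
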
Joshua Tauberer 8bebaf6a48 Simplify duplicity command line by omitting rsync options if the backup target type is not rsync 2022-06-11 15:12:31 -04:00
jbandholz 9004bb6e8e
Add IPV6 addresses to fail2ban ignoreip (#2069)
Update jails.conf to include the IPv6 localhost and external IP in the ignoreip line. Update system.sh to include the IPv6 address in the replacement. See mail-in-a-box#2066 for details.
2022-06-05 09:40:54 -04:00
m-picc 69d8fdef99
Specify b2sdk version 1.14.1 (#2125)
Pin the b2sdk version to 1.14.1 to resolve an exception that occurs when attempting to use Backblaze backups. See https://github.com/mail-in-a-box/mailinabox/issues/2124 for details.
2022-06-05 09:24:32 -04:00
Austin Ewens eeee712cf3
Switched to using tags over releases for NextCloud contacts/calendar (#2105)
See [mailinabox issue #2088](https://github.com/mail-in-a-box/mailinabox/issues/2088). This also updates the commit hashes for anyone updating from NextCloud version 17 (as shown in the related issue), since a different hash is used for tags vs. releases.

This was tested and verified to work on a setup previously running v0.44 and then updating to the latest version (v56).
2022-05-04 17:09:53 -04:00
Joshua Tauberer 8f42d97b54
Merge pull request #2109 from lamberete/main 2022-05-04 17:08:48 -04:00
lamberete 6e40c69cb5
Error message was using IPv4 instead of the failing IPv6.
One of the error messages around IPv6 was using the IPv4 address in its output, making the error message confusing.
2022-03-26 13:50:24 +01:00
lamberete c0e54f87d7
Sorting ds records on report.
When building the part of the report about the current DS records found, they are added in the same order as they were received when calling query_dns(), which can differ from run to run. This was causing the difflib.SequenceMatcher() method to find the same line removed and added one line later, and a Status Checks Change Notice email to be sent with the same line added and removed when there were actually no real changes.
2022-03-26 13:45:49 +01:00
Joshua Tauberer 3a7de051ee Version 56 (January 19, 2022) 2022-01-19 16:59:34 -05:00
Darek Kowalski f11cb04a72
Update Vagrant private IP address, fix issue #2062 (#2064) 2022-01-08 18:29:23 -05:00
Joshua Tauberer cb564a130a Fix DNS secondary nameserver refresh failure retry period
Fixes #1979
2022-01-08 09:38:41 -05:00
Joshua Tauberer d1d6318862 Set systemd journald log retention to 10 days (from no limit) to reduce disk usage 2022-01-08 09:11:48 -05:00
Joshua Tauberer 34b7a02f4f Update Roundcube to 1.5.2 2022-01-08 09:00:12 -05:00
Joshua Tauberer a312acc3bc Update to Nextcloud 20.0.8 and update apps 2022-01-08 09:00:12 -05:00
Joshua Tauberer aab1ec691c CHANGELOG entries 2022-01-08 07:46:24 -05:00
Erik Hennig 520caf6557
fix: typo in system backup template (#2081) 2022-01-02 08:11:41 -05:00
jvolkenant c92fd02262
Don't die if column already exists on Nextcloud 18 upgrade (#2078) 2021-12-25 10:17:34 -05:00
Arno Hautala a85c429a85
regex change to exclude comma from sasl_username (#2074)
as proposed in #2071 by @jvolkenant
2021-12-19 08:33:59 -05:00
Ilnahro 50a5cb90bc
Include rsync to the installed basic packages (#2067)
Some VPS providers strip this package from their Ubuntu 18.04 VM images. This will help avoid errors.
2021-11-30 19:50:01 -05:00
steadfasterX aac878dce5
fix: key flag id for KSK, fix format (#2063)
as mentioned (https://github.com/mail-in-a-box/mailinabox/pull/2033#issuecomment-976365087) KSK is 257, not 256
2021-11-23 11:06:17 -05:00
jvolkenant 58b0323b36
Update persistent_login for Roundcube 1.5 (#2055) 2021-11-04 18:59:10 -04:00
kiekerjan 646f971d8b
Update mailinabox.yml (#2054)
The examples for login and logout use GET instead of POST. GET gives me an error when using it, while POST seems to work.
2021-10-31 12:49:26 -04:00
Felix Spöttel 86067be142
fix(docs): set a schema for /logout responses (#2051)
* this remedies an OpenAPI syntax violation resulting in a redoc-cli crash
2021-10-27 12:27:54 -04:00
Joshua Tauberer c67ff241c4
Updates to security.md 2021-10-23 08:57:05 -04:00
Joshua Tauberer 7b4cd443bf
How to report security issues 2021-10-22 18:49:16 -04:00
Joshua Tauberer 34017548d5 Don't crash if a custom DNS entry is not under a zone managed by the box, fixes #1961 2021-10-22 18:39:53 -04:00
Joshua Tauberer 65861c68b7 Version 55 2021-10-18 20:40:51 -04:00
Joshua Tauberer 71a7a3e201 Upgrade to Roundcube 1.5 2021-10-18 20:40:51 -04:00
Richard Willis 1c3bca53bb
Fix broken link in external-dns.html (#2045) 2021-10-18 07:36:48 -04:00
ukfhVp0zms b643cb3478
Update calendar/contacts android app info (#2044)
DAVdroid has been renamed to DAVx⁵ and price increased from $3.69 to $5.99.
CardDAV-Sync free is no longer in beta.
CalDAV-Sync price increased from $2.89 to $2.99.
2021-10-13 19:09:05 -04:00
Joshua Tauberer 113b7bd827 Disable SMTPUTF8 in Postfix because Dovecot LMTP doesn't support it and bounces messages that require SMTPUTF8
By not advertising SMTPUTF8 support at the start, senders may opt to transmit recipient internationalized domain names in IDNA form instead, which will be deliverable.

Incoming mail with internationalized domains was probably working prior to our move to Ubuntu 18.04 when postfix's SMTPUTF8 support became enabled by default.

The previous commit is retained because Mail-in-a-Box users might prefer to keep SMTPUTF8 on for outbound mail, if they are not using internationalized domains for email, in which case the previous commit fixes the 'relay access denied' error even if the emails aren't deliverable.
2021-09-24 08:11:36 -04:00
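
Disabling SMTPUTF8 advertisement in Postfix is a one-parameter change; a minimal sketch, assuming the standard smtputf8_enable knob is the one used here:

```
# Stop advertising SMTPUTF8 in the EHLO response so senders fall back to
# IDNA (ASCII) recipient domains that Dovecot LMTP can deliver.
postconf -e 'smtputf8_enable=no'
systemctl reload postfix
```
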
Joshua Tauberer 3e19f85fad Add domain maps from Unicode forms of internationalized domains to their ASCII forms
When an email is received by Postfix using SMTPUTF8 and the recipient domain is a Unicode internationalized domain, it was failing to be delivered (bouncing with 'relay access denied') because our users and aliases tables only store ASCII (IDNA) forms of internationalized domains. In this commit, domain maps are added to the auto_aliases table from the Unicode form of each mail domain to its IDNA form, if those forms are different. The Postfix domains query is updated to look at the auto_aliases table now as well, since it is the only table with Unicode forms of the mail domains.

However, mail delivery is still not working since the Dovecot LMTP server does not support SMTPUTF8, and mail still bounces but with an error that SMTPUTF8 is not supported.
2021-09-24 08:11:36 -04:00
Joshua Tauberer 11e84d0d40 Move automatically generated aliases to a separate database table
They really should never have been conflated with the user-provided aliases.

Update the postfix alias map to query the automatically generated aliases with lowest priority.
2021-09-24 08:11:36 -04:00
Joshua Tauberer 79966e36e3 Set a cookie for /admin/munin pages to grant access to Munin reports
The /admin/munin routes used the same Authorization: header logic as the other API routes, but they are browsed directly in the browser because they are handled as static pages or as a proxy to a CGI script.

This required users to enter their email username/password for HTTP basic authentication in the standard browser auth prompt, which wasn't ideal (and may leak the password in browser storage). It also stopped working when MFA was enabled for user accounts.

A token is now set in a cookie when visiting /admin/munin which is then checked in the routes that proxy the Munin pages. The cookie's lifetime is kept limited to limit the opportunity for any unknown CSRF attacks via the Munin CGI script.
2021-09-24 08:11:36 -04:00
Joshua Tauberer 66b15d42a5 CHANGELOG entries 2021-09-24 08:11:36 -04:00
drpixie df46e1311b
Include NSD config files from /etc/nsd/nsd.conf.d/*.conf (#2035)
And write MIAB dns zone config into /etc/nsd/nsd.conf.d/zones.conf. Delete lingering old zones.conf file.

Co-authored-by: Joshua Tauberer <jt@occams.info>
2021-09-24 08:07:40 -04:00
Elsie Hupp 353084ce67
Use "smart invert" for dark mode (#2038)
* Use "smart invert" for dark mode

Signed-off-by: Elsie Hupp <9206310+elsiehupp@users.noreply.github.com>

* Add more contrast to form controls

Co-authored-by: Joshua Tauberer <jt@occams.info>
2021-09-19 09:53:03 -04:00
mailinabox-contributor 91079ab934
add numeric flag value to DNSSEC DS status message (#2033)
Some registrars (e.g. Porkbun) accept Key Data when creating a DS RR,
but accept only a numeric flags value to indicate the key type (256 for KSK, 257 for ZSK).

https://datatracker.ietf.org/doc/html/rfc5910#section-4.3
2021-09-10 16:12:41 -04:00
Joshua Tauberer e5909a6287 Allow non-admin login to the control panel and show/hide menu items depending on the login state
* When logged out, no menu items are shown.
* When logged in, Log Out is shown.
* When logged in as an admin, the remaining menu items are also shown.
* When logged in as a non-admin, the mail and contacts/calendar instruction pages are shown.

Fixes #1987
2021-09-06 09:23:58 -04:00
Joshua Tauberer 26932ecb10 Add a 'welcome' panel to the control panel and make it the default page instead of the status checks which take too long to load
Fixes #2014
2021-09-06 09:23:58 -04:00
Joshua Tauberer e884c4774f Replace HMAC-based session API keys with tokens stored in memory in the daemon process
Since the session cache clears keys after a period of time, this fixes #1821.

Based on https://github.com/mail-in-a-box/mailinabox/pull/2012, and so:

Co-Authored-By: NewbieOrange <NewbieOrange@users.noreply.github.com>

Also fixes #2029 by not revealing through the login failure error message whether a user exists or not.
2021-09-06 09:23:58 -04:00
Joshua Tauberer 53ec0f39cb Use 'secrets' to generate the system API key and remove some debugging-related code
* Rename the 'master' API key to be called the 'system' API key
* Generate the key using the Python secrets module which is meant for this
* Remove some debugging helper code which will be obsoleted by the upcoming changes for session keys
2021-09-06 09:23:58 -04:00
Joshua Tauberer 700188c443 Roundcube 1.5 RC 2021-09-06 09:23:58 -04:00
David Duque ba80d9e72d
Show backup retention period form when configuring B2 backups (#2024) 2021-08-23 06:25:41 -04:00
Joshua Tauberer a71a58e816
Re-order DS record algorithms by digest type and revise warning message (#2002) 2021-08-22 14:45:56 -04:00
Joshua Tauberer 67b5711c68 Recommend that DS records be updated to not use SHA1 and exclude MUST NOT methods (SHA1) and the unlikely option RSASHA1-NSEC3-SHA1 (7) + SHA-384 (4) from the DS record suggestions 2021-08-22 14:43:46 -04:00
myfirstnameispaul 20ccda8710 Re-order DS record algorithms by digest type and revise warning message.
Note that 7, 4 is printed last in the status checks page but does not appear in the file, and I couldn't figure out why.
2021-08-22 14:29:36 -04:00
NewbieOrange 0ba841c7b6
fail2ban now supports ipv6 (#2015)
Since fail2ban 0.10.0, ipv6 support has been added. The current Ubuntu 18.04 repository has fail2ban 0.10.2, which does have ipv6 protection.
2021-08-22 14:13:58 -04:00
lamkin daad122236
Ignore bad encoding in email addresses when parsing maillog files (#2017)
The local/domain parts of an email address should be standard ASCII or
UTF-8. Some email addresses contain extended ASCII, leading to
decode failure by the UTF-8 codec (and thus failure of the
Usage-Report script)

This change allows maillog parsing to continue over lines
containing such addresses
2021-08-16 11:46:32 -04:00
NewbieOrange 21ad26e452
Disable auto-complete for 2FA code in the control panel login form (#2013) 2021-07-28 16:39:40 -04:00
Joshua Tauberer 4cb46ea465 v0.54 2021-06-20 15:50:04 -04:00
Joshua Tauberer 35fa3fe891 Changelog entries 2021-05-15 16:50:19 -04:00
Joshua Tauberer d510c8ae2a Enable and recommend port 465 for mail submission instead of port 587 (fixes #1849)
Port 465 with "implicit" (i.e. always-on) TLS is a more secure approach than port 587 with explicit TLS (i.e. optional, and only on after STARTTLS). Although we reject credentials sent on port 587 without STARTTLS, by that point the credentials have already been transmitted.
2021-05-15 16:42:14 -04:00
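
The difference is easy to see with openssl's client: port 465 negotiates TLS immediately, while port 587 starts in plaintext and upgrades via STARTTLS. Illustrative commands (the hostname is a placeholder):

```
# Implicit TLS: the connection is encrypted before any SMTP command is sent.
openssl s_client -connect box.example.com:465 -quiet </dev/null

# Explicit TLS: the session begins in cleartext and upgrades with STARTTLS,
# so anything sent before the upgrade (e.g. credentials) would be exposed.
openssl s_client -starttls smtp -connect box.example.com:587 -quiet </dev/null
```
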
Joshua Tauberer e283a12047 Add null SPF, DMARC, and MX records for automatically generated autoconfig, autodiscover, and mta-sts subdomains; add null MX records for custom A-record subdomains
All A/AAAA-resolvable domains that don't send or receive mail should have these null records.

This simplifies the handling of domains a bit by handling automatically generated subdomains more like other domains.
2021-05-15 16:42:14 -04:00
Joshua Tauberer e421addf1c Pre-load domain purposes when building DNS zonefiles rather than querying mail domains at each subdomain 2021-05-09 08:16:07 -04:00
Joshua Tauberer 354a774989 Remove a debug line added in 8cda58fb 2021-05-09 07:34:44 -04:00
Joshua Tauberer aaa81ec879 Fix indentation issue in bc4ae51c2d 2021-05-08 09:06:18 -04:00
Joshua Tauberer dbd6dae5ce Fix exit status issue caused by 69fc2fdd 2021-05-08 09:02:48 -04:00
John @ S4 d4c5872547
Make clear that non-AWS S3 backups are supported (#1947)
Just a few wording changes to show that it is possible to make S3 backups to other services than AWS - prompted by a thread on MIAB discourse.
2021-05-08 08:32:58 -04:00
Thomas Urban 3701e05d92
Rewrite envelope from address in sieve forwards (#1949)
Fixes #1946.
2021-05-08 08:30:53 -04:00
Hala Alajlan bc4ae51c2d
Handle query dns timeout unhandled error (#1950)
Co-authored-by: hala alajlan <halalajlan@gmail.com>
2021-05-08 08:26:40 -04:00
Jawad Seddar 12aaebfc54
`custom.yaml`: add support for X-Frame-Options header and proxy_redirect off (#1954) 2021-05-08 08:25:33 -04:00
jvolkenant 49813534bd
Updated Nextcloud to 20.0.8, contacts to 3.5.1, calendar to 2.2.0 (#1960) 2021-05-08 08:24:04 -04:00
jvolkenant 16e81e1439
Fix to allow for non forced "enforce" MTA_STS_MODE (#1970) 2021-05-08 08:18:49 -04:00
Joshua Tauberer b7b67e31b7 Merged point release branch for v0.53a
Changed the Z-Push download URL.
2021-05-08 08:14:39 -04:00
Joshua Tauberer 2e7f2835e7 v0.53a 2021-05-08 08:13:37 -04:00
Joshua Tauberer 8a5f9f464a Download Z-Push from alternate site
The old server has been down for a few days.

Solution from https://discourse.mailinabox.email/t/temporary-fix-for-failed-wget-o-tmp-z-push-zip-https-stash-z-hub-io/8028. Fixes #1974.
2021-05-08 07:59:53 -04:00
Joshua Tauberer 69fc2fdd3a Hide spurious Nextcloud setup output 2021-05-03 19:41:00 -04:00
Joshua Tauberer 9b07d86bf7 Use $(...) notation instead of legacy backtick notation for embedded shell commands
shellcheck reported

    SC2006: Use $(...) notation instead of legacy backticked `...`.

Fixed by applying shellcheck's diff output as a patch.
2021-05-03 19:28:23 -04:00
Joshua Tauberer ae3feebd80 Fix warnings reported by shellcheck
* SC2068: Double quote array expansions to avoid re-splitting elements.
* SC2186: tempfile is deprecated. Use mktemp instead.
* SC2124: Assigning an array to a string! Assign as array, or use * instead of @ to concatenate.
* SC2102: Ranges can only match single chars (mentioned due to duplicates).
* SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.
2021-05-03 19:25:09 -04:00
Joshua Tauberer 2c295bcafd Upgrade the Roundcube persistent login cookie encryption to AES-256-CBC and increase the key length accordingly
This change will force everyone to be logged out of Roundcube since the encryption key and cipher won't match anyone's already-set cookie, but this happens anyway after every Mail-in-a-Box update since we generate a new key each time already.

Fixes #1968.
2021-04-23 17:04:56 -04:00
Joshua Tauberer 8cda58fb22 Speed up status checks a bit by removing a redundant check if the PRIMARY_HOSTNAME certificate is signed and valid 2021-04-12 19:42:12 -04:00
Joshua Tauberer 178c587654 Migrate to the ECDSAP256SHA256 (13) DNSSEC algorithm
* Stop generating RSASHA1-NSEC3-SHA1 keys on new installs since it is no longer recommended, but preserve the key on existing installs so that we continue to sign zones with existing keys to retain the chain of trust with existing DS records.
* Start generating ECDSAP256SHA256 keys during setup, the current best practice (in addition to RSASHA256 which is also ok). See https://www.iana.org/assignments/dns-sec-alg-numbers/dns-sec-alg-numbers.xhtml#dns-sec-alg-numbers-1 and https://www.cloudflare.com/dns/dnssec/ecdsa-and-dnssec/.
* Sign zones using all available keys rather than choosing just one based on the TLD to enable rotation/migration to the new key and to give the user some options since not every registrar/TLD supports every algorithm.
* Allow a user to drop a key from signing specific domains using DOMAINS= in our key configuration file. Signing the zones with extraneous keys may increase the size of DNS responses, which isn't ideal, although I don't know if this is a problem in practice. (Although a user can delete the RSASHA1-NSEC3-SHA1 key file, the other keys will be re-generated on upgrade.)
* When generating zonefiles, add a hash of all of the DNSSEC signing keys so that when the keys change the zone is definitely regenerated and re-signed.
* In status checks, if DNSSEC is not active (or not valid), offer to use all of the keys that have been generated (for RSASHA1-NSEC3-SHA1 on existing installs, RSASHA256, and now ECDSAP256SHA256) with all digest types, since not all registrars support everything, but list them in an order that guides users to the best practice.
* In status checks, if the deployed DS record doesn't use a ECDSAP256SHA256 key, prompt the user to update their DS record.
* In status checks, if multiple DS records are set, only fail if none are valid. If some use ECDSAP256SHA256 and some don't, remind the user to delete the DS records that don't.
* Don't fail if the DS record uses the SHA384 digest (by pre-generating a DS record with that digest type) but don't recommend it because it is not in the IANA mandatory list yet (https://www.iana.org/assignments/ds-rr-types/ds-rr-types.xhtml).

See #1953
2021-04-12 19:42:12 -04:00
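
Mail-in-a-Box's DNS stack uses ldnsutils (see the package list further down), so generating an ECDSAP256SHA256 key and its DS record looks roughly like the following; a sketch under that assumption, not the setup scripts' exact commands:

```
# Generate an ECDSAP256SHA256 (algorithm 13) key-signing key for a zone.
ldns-keygen -a ECDSAP256SHA256 -k example.com

# Produce the corresponding DS record (SHA-256 digest) to give to the registrar.
ldns-key2ds -n -2 Kexample.com.+013+*.key
```
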
Joshua Tauberer 34569d24a9 v0.53 2021-04-11 12:45:37 -04:00
Joshua Tauberer 6653dbb2e2 Sort the Custom DNS by zone and qname, and add an option to go back to the old sort order (creation order)
Update the zone grouping style on the users and aliases page to match.

Fixes #1927
2021-02-28 09:40:32 -05:00
Joshua Tauberer 5fc1162355 Other CHANGELOG entries 2021-02-28 08:22:30 -05:00
Paul a839602cba
Enable sending DMARC failure reports (#1929)
Configures opendmarc to send failure reports for domains that request them, including when p=none.

The emails are sent with the package default sender of package name and user@hostname: OpenDMARC Filter <opendmarc@box.example.com>

Note I have been running this for several months with a configuration I did not include in the PR that has reports BCC'd to me (FailureReportsBcc postmaster@example.com). Very low load for my personal server: rarely more than a dozen emails sent out per day.

I am not familiar with editing scripts, so apologies in advance and please feel free to correct me.
2021-02-28 08:21:15 -05:00
Joshua Tauberer f21a41dc84 Merge #1932, with some edits 2021-02-28 08:16:50 -05:00
davDevOps 055ac07663 Update roundcube to 1.4.11
roundcube Bug Fixes:

Fix for Cross-Site Scripting (XSS) via HTML messages with malicious CSS content
General Improvements from roundcube's Issue Tracker
2021-02-28 08:14:17 -05:00
davDevOps c7b295f403 Update zpush to 2.6.2 2021-02-28 08:05:40 -05:00
Joshua Tauberer d36a2cc938 Enable Backblaze B2 backups
This reverts commit b1d703a5e7 and adds python3-setuptools per the first version of #1899 which fixes an installation error for the b2sdk Python package.
2021-02-28 08:04:14 -05:00
jeremitu 82ca54df96
Fixed #1894 log date over year change, START_DATE < END_DATE now. (#1905)
* Fixed #1894 log date over year change, START_DATE < END_DATE now.

* Corrected mail_log.py argument help and message.

Co-authored-by: Jarek <jarek@box.jurasz.de>
2021-02-28 07:59:26 -05:00
jvolkenant af62e7a99b
Fixes unbound variable when upgrading from Nextcloud 13 (#1913) 2021-02-06 16:49:43 -05:00
Joshua Tauberer 90d63fd208 v0.52 2021-01-31 08:48:14 -05:00
Joshua Tauberer e81963e585 Remove the instructions for checking that release tags are signed by me since I am not going to do that anymore 2021-01-31 08:47:59 -05:00
Joshua Tauberer b1d703a5e7 Disable Backblaze B2 backups until #1899 is resolved 2021-01-31 08:33:56 -05:00
Felix Spöttel e3d98b781e
Warn when connection to Spamhaus times out (#1817) 2021-01-28 18:22:43 -05:00
jvolkenant 50d50ba653
Update zpush to 2.6.1 (#1908) 2021-01-28 18:20:19 -05:00
Josh Brown 879467d358
Fix typo in users.html (#1895)
lettters -> letters
fixes #1888
2021-01-05 21:12:01 -05:00
Nicolas North 8025c41ee4
Bump TTL for NS records from 1800 (30 min) to 86400 (1 day) as some registries require this (#1892)
Co-authored-by: Nicolas North [norðurljósahviða] <nz@tillverka.xyz>
2021-01-03 17:57:54 -05:00
Josh Brown 7a5d729a53
Fix misspelling (#1893)
Change Blackblaze to Backblaze. Include B2 as the integration name.
2021-01-03 17:54:31 -05:00
jcm-shove-it e2f9cd845a
Update roundcube to 1.4.10 (#1891) 2020-12-28 08:11:33 -05:00
Joshua Tauberer e26cf4512c Update CHANGELOG 2020-12-25 17:28:34 -05:00
jvolkenant c7280055a8
Implement SPF/DMARC checks, add spam weight to those mails (#1836) 2020-12-25 17:22:24 -05:00
Hilko 003e8b7bb1
Adjust max-recursion-queries to fix alternating rdns status (#1876) 2020-12-25 17:19:16 -05:00
Hilko 3422cc61ce
Include en_US.UTF-8 locale in daemon startup (#1883)
Fixes #1881.
2020-12-19 19:11:58 -05:00
Hilko 8664afa997
Implement Backblaze for Backup (#1812)
* Installing b2sdk for b2 support
* Added Duplicity PPA so the most recent version is used
* Implemented list_target_files for b2
* Implemented b2 in frontend
* removed python2 boto package
2020-11-26 07:13:31 -05:00
Joshua Tauberer 82229ce04b Document how to start the control panel from the command line and in debugging use a stable API key 2020-11-26 07:11:49 -05:00
Richard Willis f66e609d3f
Api spec cleanup (#1869)
* Fix indentation

* Add parameter definition and remove unused model

* Update version

* Quote example string
2020-11-26 06:56:04 -05:00
Victor b85b86e6de
Add download zonefile button to external DNS page (#1853)
Co-authored-by: Joshua Tauberer <jt@occams.info>
2020-11-16 06:03:41 -05:00
Joshua Tauberer 7fd35bbd11 Disable default Nextcloud apps that we don't support
Contacts and calendar are the only supported apps in Mail-in-a-Box.

Files can't be disabled.

Fixes #1864
2020-11-15 17:17:58 -05:00
gumida 7ce41e3865
Changed mta-sts.txt end of line from LF to CRLF per RFC 8461 (#1863) 2020-11-15 07:54:34 -05:00
Joshua Tauberer 92221f9efb v0.51 2020-11-14 10:05:20 -05:00
Joshua Tauberer 0bd3977cde CHANGELOG updates 2020-10-31 10:36:40 -04:00
Joshua Tauberer 6a979f4f52
Add TOTP two-factor authentication to admin panel login (#1814)
* add user interface for managing 2fa

* update user schema with 2fa columns

* implement two factor check during login

* Use pyotp for validating TOTP codes

* also implements resynchronisation support via `pyotp`'s `valid_window` option

* Update API route naming, update setup page

* Rename /two-factor-auth/ => /2fa/
* Nest totp routes under /2fa/totp/
* Update ids and methods in panel to allow for different setup types

* Autofocus otp input when logging in, update layout

* Extract TOTPStrategy class to totp.py

* this decouples `TOTP` validation and storage logic from `auth` and moves it to `totp`
* reduce `pyotp.validate#valid_window` from `2` to `1`

* Update OpenApi docs, rename /2fa/ => /mfa/

* Decouple totp from users table by moving to totp_credentials table

* this allows implementation of other mfa schemes in the future (webauthn)
* also makes key management easier and enforces one totp credentials per user on db-level

* Add sqlite migration

* Rename internal validate_two_factor_secret => validate_two_factor_secret

* conn.close() if mru_token update can't .commit()

* Address review feedback, thanks @hija

* Use hmac.compare_digest() to compare mru_token

* Safeguard against empty mru_token column

* hmac.compare_digest() expects arguments of type string, make sure we don't pass None
 * Currently, this cannot happen but we might not want to store `mru_token` during setup

* Do not log failed login attempts for MissingToken errors

* Due to the way that the /login UI works, this persists at least one failed login each time a user logs into the admin panel. This in turn triggers fail2ban at some point.

* Add TOTP secret to user_key hash

thanks @downtownallday
* this invalidates all user_keys after TOTP status is changed for user
* after changing TOTP state, a login is required
* due to the forced login, we can't and don't need to store the code used for setup in `mru_code`

* Typo

* Reorganize the MFA backend methods

* Reorganize MFA front-end and add label column

* Fix handling of bad input when enabling mfa

* Update openAPI docs

* Remove unique key constraint on foreign key user_id in mfa table

* Don't expose mru_token and secret for enabled mfas over HTTP

* Only update mru_token for matched mfa row

* Exclude mru_token in user key hash

* Rename tools/mail.py to management/cli.py

* Add MFA list/disable to the management CLI so admins can restore access if MFA device is lost

Co-authored-by: Joshua Tauberer <jt@occams.info>
2020-10-31 10:27:38 -04:00
Joshua Tauberer 545e7a52e4 Add MFA list/disable to the management CLI so admins can restore access if MFA device is lost 2020-10-31 10:23:43 -04:00
David Duque 48c233ebe5
Update Roundcube to version 1.4.9 (#1830) 2020-10-31 10:01:14 -04:00
Michael Kroes 9a588de754
Upgrade Nextcloud to version 20.0.1 (#1848) 2020-10-31 09:58:26 -04:00
Joshua Tauberer ac9ecc3bd3 Rename tools/mail.py to management/cli.py 2020-10-29 15:41:54 -04:00
David Duque 8b166f3041
Display certificate expiry dates in ISO format (#1841) 2020-10-16 16:22:36 -04:00
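For context on the change above: rendering the expiry timestamp as an ISO 8601 date avoids locale-dependent ambiguity. A minimal sketch of the idea (illustrative only, not the panel's actual code):

```python
from datetime import datetime, timezone

# Hypothetical expiry value as it might be obtained from a parsed certificate.
expires_at = datetime(2020, 12, 31, 23, 59, 59, tzinfo=timezone.utc)

print(expires_at.strftime("%x"))       # locale-dependent, e.g. "12/31/20"
print(expires_at.date().isoformat())   # unambiguous ISO 8601: "2020-12-31"
```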
Joshua Tauberer 5509420637 s/Days/Retention Days/ on the backup settings page 2020-10-15 14:11:43 -04:00
Felix Spöttel 7d6c7b6610
Increase mta-sts max_age to one week (#1829)
This aligns the policy with the example policy found in the spec,
see https://tools.ietf.org/html/rfc8461#section-3.2
2020-10-02 21:27:21 -04:00
Felix Spöttel 1f0e493b8c Exclude mru_token in user key hash 2020-09-30 12:34:26 +02:00
Felix Spöttel ada2167d08 Only update mru_token for matched mfa row 2020-09-29 20:05:58 +02:00
Felix Spöttel be5032ffbe Don't expose mru_token and secret for enabled mfas over HTTP 2020-09-29 19:46:02 +02:00
Felix Spöttel 00b3a3b0a9 Remove unique key constraint on foreign key user_id in mfa table 2020-09-29 19:39:40 +02:00
Felix Spöttel 6d82c0035a Update openAPI docs 2020-09-28 21:27:24 +02:00
Felix Spöttel 4dced10a3f Fix handling of bad input when enabling mfa 2020-09-28 21:06:59 +02:00
Joshua Tauberer b80f225691 Reorganize MFA front-end and add label column 2020-09-27 08:31:23 -04:00
0pis 7f0f28f8e3
Use tabs instead of spaces in nginx conf (#1827)
* conf/nginx-primaryonly.conf: Use tabs instead of spaces
* management/web_update.py: Includes the tabs so they display with the correct indentation when added to the local.conf

Co-authored-by: 0pis <0pis>
2020-09-27 07:13:33 -04:00
Joshua Tauberer a8ea456b49 Reorganize the MFA backend methods 2020-09-26 09:58:25 -04:00
Joshua Tauberer 03bff5292b v0.50
v0.50 (September 25, 2020)
--------------------------

Setup:

* When upgrading from versions before v0.40, setup will now warn that ownCloud/Nextcloud data cannot be migrated rather than failing the installation.

Mail:

* An MTA-STS policy for incoming mail is now published (in DNS and over HTTPS) when the primary hostname and email address domain both have a signed TLS certificate installed, allowing senders to know that an encrypted connection should be enforced.
* The per-IP connection limit to the IMAP server has been doubled to allow more devices to connect at once, especially with multiple users behind a NAT.

DNS:

* autoconfig and autodiscover subdomains and CalDAV/CardDAV SRV records are no longer generated for domains that don't have user accounts since they are unnecessary.
* IPv6 addresses can now be specified for secondary DNS nameservers in the control panel.

TLS:

* TLS certificates are now provisioned in groups by parent domain to limit easy domain enumeration and make provisioning more resilient to errors for particular domains.

Control Panel:

* The control panel API is now fully documented at https://mailinabox.email/api-docs.html.
* User passwords can now have spaces.
* Status checks for automatic subdomains have been moved into the section for the parent domain.
* Typo fixed.

Web:

* The default web page served on fresh installations now adds the `noindex` meta tag.
* The HSTS header is revised to also be sent on non-success responses.
2020-09-25 07:43:30 -04:00
Joshua Tauberer e891a9a3f3 Update CHANGELOG 2020-09-21 15:59:38 -04:00
Joshua Tauberer 51aedcf6c3 Drop the MTA-STS TLSRPT record unless set explicitly 2020-09-21 15:57:17 -04:00
b-k 853008ddcc
Be more forgiving of people who missed the train on upgrading NextCloud (#1813)
Co-authored-by: B <ben@klemens.org>
2020-09-21 15:45:58 -04:00
Felix Spöttel 7d6427904f Typo 2020-09-12 16:38:44 +02:00
Felix Spöttel dcb93d071c Add TOTP secret to user_key hash
thanks @downtownallday
* this invalidates all user_keys after TOTP status is changed for user
* after changing TOTP state, a login is required
* due to the forced login, we can't and don't need to store the code used for setup in `mru_code`
2020-09-12 16:34:06 +02:00
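The idea in the commit above is to mix the TOTP secrets into the digest that user keys are derived from, so enabling, disabling, or resetting TOTP changes the key and forces a fresh login. A rough sketch with hypothetical names, not the project's actual derivation:

```python
import hashlib

def user_api_key(master_key: bytes, email: str, password_hash: str, totp_secrets):
    # Any change to the set of TOTP secrets changes the digest,
    # invalidating previously issued keys for this user.
    h = hashlib.sha256()
    h.update(master_key)
    for part in (email, password_hash, *sorted(totp_secrets)):
        h.update(part.encode("utf-8"))
    return h.hexdigest()
```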
Felix Spöttel 2ea97f0643 Do not log failed login attempts for MissingToken errors
* Due to the way that the /login UI works, this persists at least one failed login each time a user logs into the admin panel. This in turn triggers fail2ban at some point.
2020-09-06 13:08:44 +02:00
Felix Spöttel 4791c2fc62 Safeguard against empty mru_token column
* hmac.compare_digest() expects arguments of type string, make sure we don't pass None
 * Currently, this cannot happen but we might not want to store `mru_token` during setup
2020-09-06 13:03:54 +02:00
Felix Spöttel 49c333221a Use hmac.compare_digest() to compare mru_token 2020-09-06 12:54:45 +02:00
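The two commits above boil down to the pattern below: guard against a missing stored token, then compare in constant time with hmac.compare_digest() (a sketch of the pattern, not the exact code):

```python
import hmac

def mru_token_matches(stored_token, presented_token):
    # compare_digest() requires two str/bytes values, so never pass None,
    # and treat an empty/unset stored token as "no match".
    if not stored_token or not presented_token:
        return False
    return hmac.compare_digest(stored_token, presented_token)
```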
Felix Spöttel 481a333dc0 Address review feedback, thanks @hija 2020-09-04 20:28:15 +02:00
Felix Spöttel b0df35eba0 conn.close() if mru_token update can't .commit() 2020-09-03 20:39:03 +02:00
Felix Spöttel 08ae3d2b7f Rename internal validate_two_factor_secret => validate_two_factor_secret 2020-09-03 19:48:54 +02:00
Felix Spöttel 7c4eb0fb70 Add sqlite migration 2020-09-03 19:39:29 +02:00
Felix Spöttel ee01eae55e Decouple totp from users table by moving to totp_credentials table
* this allows implementation of other mfa schemes in the future (webauthn)
* also makes key management easier and enforces one totp credential per user at the db level
2020-09-03 19:07:21 +02:00
Felix Spöttel 89b301afc7 Update OpenApi docs, rename /2fa/ => /mfa/ 2020-09-03 13:54:28 +02:00
Felix Spöttel ce70f44c58 Extract TOTPStrategy class to totp.py
* this decouples `TOTP` validation and storage logic from `auth` and moves it to `totp`
* reduce `pyotp.validate#valid_window` from `2` to `1`
2020-09-03 11:19:19 +02:00
Felix Spöttel 6594e19a1f Autofocus otp input when logging in, update layout 2020-09-02 20:30:08 +02:00
Felix Spöttel 8597646a12 Update API route naming, update setup page
* Rename /two-factor-auth/ => /2fa/
* Nest totp routes under /2fa/totp/
* Update ids and methods in panel to allow for different setup types
2020-09-02 19:41:06 +02:00
Felix Spöttel f205c48564 Use pyotp for validating TOTP codes
* also implements resynchronisation support via `pyotp`'s `valid_window` option
2020-09-02 19:12:15 +02:00
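For reference, pyotp's valid_window parameter is what provides the resynchronisation mentioned above: a value of 1 also accepts codes from the adjacent 30-second steps, tolerating small clock drift between server and phone. A minimal example:

```python
import pyotp

secret = pyotp.random_base32()   # stored per user when TOTP is set up
totp = pyotp.TOTP(secret)

code = totp.now()
print(totp.verify(code, valid_window=1))   # True; also matches the neighboring time steps
```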
Felix Spöttel 3c3683429b implement two factor check during login 2020-09-02 17:23:32 +02:00
Felix Spöttel a7a66929aa add user interface for managing 2fa
* update user schema with 2fa columns
2020-09-02 16:48:23 +02:00
Joshua Tauberer 0d72566c99 Merge v0.48 point release branch 2020-08-26 14:11:56 -04:00
Joshua Tauberer 62db58eaaf v0.48 2020-08-26 14:11:01 -04:00
Joshua Tauberer 891de8d6c3 Upgrade Roundcube to 1.4.8
Merges #1809
2020-08-26 14:10:04 -04:00
Richard Willis 62b9b1f15f
Add OpenAPI HTTP spec (#1804) 2020-08-22 15:44:19 -04:00
David Duque 94da7bb088
status_checks.py: Properly terminate the process pools (#1795)
* Only spawn a thread pool when strictly needed

For --check-primary-hostname, the pool is not used.
When exiting, the other processes are left alive and will hang.

* Acquire pools with the 'with' statement
2020-08-09 11:42:39 -04:00
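Acquiring the pool with a `with` statement (the second bullet above) guarantees the worker processes are terminated when the block exits instead of being left to hang; a minimal illustration:

```python
from multiprocessing import Pool

def check(domain):
    return domain, "OK"   # stand-in for a per-domain status check

if __name__ == "__main__":
    domains = ["example.com", "example.net"]
    # Leaving the block terminates the workers, so the program can exit cleanly.
    with Pool(processes=4) as pool:
        for domain, status in pool.imap_unordered(check, domains):
            print(domain, status)
```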
Joshua Tauberer 65983b8ac7 Merge v0.47 point release branch 2020-07-29 10:27:06 -04:00
hija 56d0289ed9 v0.47 2020-07-29 10:24:56 -04:00
Marcus Bointon f253c40012 [backport] Add rate limiting of SSH in the firewall (#1770)
See #1767. Backport of cfc8fb484c.
2020-07-29 10:24:23 -04:00
Joshua Tauberer 4bbe4af377 Update CHANGELOG 2020-07-29 10:23:02 -04:00
Hilko 2c34a6df2b Update roundcube to 1.4.7 2020-07-29 10:15:12 -04:00
Hilko 1098e2b48e
Add noindex to www_default meta tags (#1791) 2020-07-29 10:03:33 -04:00
Richard Willis c50170b816
Update "Remove Alias" modal title (#1800) 2020-07-29 10:01:20 -04:00
Marcus Bointon cd518e6820
Raise Dovecot per user connection limit (#1799) 2020-07-27 06:37:52 -04:00
David Duque 967409b157
Drop requirement for passwords to have no spaces (#1789) 2020-07-16 07:23:11 -04:00
David Duque 1b2711fc42
Add 'always' modifier to the HSTS add_header directive (#1790)
This will make it so that the HSTS header is sent regardless of the request status code (until this point it would only be sent if "the response code equals 200, 201, 206, 301, 302, 303, 307, or 308", according to http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header)
2020-07-16 07:21:14 -04:00
David Duque e6102eacfb
AXFR Transfers (for secondary DNS servers): Allow IPv6 addresses (#1787) 2020-07-08 18:26:47 -04:00
Joshua Tauberer 6fd3195275 Fix MTA-STS policy id so it does not have invalid characters, fixes #1779 2020-06-12 13:09:11 -04:00
Joshua Tauberer 224242dfde Merge v0.46 point release branch 2020-06-11 12:25:49 -04:00
Joshua Tauberer 049bfb6f7f v0.46 2020-06-11 12:23:18 -04:00
Joshua Tauberer 12d60d102b Update Roundcube to 1.4.6
Fixes #1776
2020-06-11 12:21:17 -04:00
Joshua Tauberer 9db2fc7f05 In web proxies, add X-{Forwarded-{Host,Proto},Real-IP} and 'proxy_set_header Host' when there is a flag
Merges #1432, more or less.
2020-06-11 12:20:17 -04:00
Joshua Tauberer e03a6541ce Don't make autoconfig/autodiscover subdomains and SRV records when the parent domain has no user accounts
These subdomains/records are for automatic configuration of mail clients, but if there are no user accounts on a domain, there is no need to publish a DNS record, provision a TLS certificate, or create an nginx server config block.
2020-06-11 12:20:17 -04:00
Faye Duxovni 41642f2f59 [backport] Fix roundcube error log file path in setup script (#1775) 2020-06-11 12:16:53 -04:00
Vasek Sraier df9bb263dc
daily_tasks.sh: redirect stderr to stdout (#1768)
When the management commands fail, they can print something to the standard error output.
The administrator would never notice, because it wouldn't be sent to them with the usual emails.
Fixes #1763
2020-06-07 09:56:45 -04:00
Faye Duxovni 339c330b4f
Fix roundcube error log file path in setup script (#1775) 2020-06-07 09:50:04 -04:00
Marcus Bointon cfc8fb484c
Add rate limiting of SSH in the firewall (#1770)
See #1767.
2020-06-07 09:47:51 -04:00
Joshua Tauberer bc1be9d70a readme fixes 2020-05-30 08:15:31 -04:00
Joshua Tauberer 3a4b8da8fd More for MTA-STS for incoming mail
* Create the mta_sts A/AAAA records even if there is no valid TLS certificate because we can't get a TLS certificate if we don't set up the domains.
* Make the policy id in the TXT record stable by using a hash of the policy file so that the DNS record doesn't change every day, which means no nightly notification and also it allows for longer caching by sending MTAs.
2020-05-30 08:04:09 -04:00
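The second bullet above works because the `id` in the `_mta-sts` TXT record only needs to change when the policy text changes, so deriving it from a hash of the policy file keeps the record stable. A sketch of the idea (the exact hash and truncation here are illustrative):

```python
import hashlib

policy = (
    "version: STSv1\n"
    "mode: enforce\n"
    "mx: mail.example.com\n"
    "max_age: 604800\n"
)

policy_id = hashlib.sha1(policy.encode("ascii")).hexdigest()[:20]
print(f'_mta-sts.example.com. TXT "v=STSv1; id={policy_id}"')
```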
Joshua Tauberer 37dad9d4bb Provision certificates from Let's Encrypt grouped by DNS zone
Folks didn't want certificates exposing all of the domains hosted by the server (although this can already be found on the internet).

Additionally, if one domain fails (usually because of a misconfiguration), it would be nice if not everything fails. So grouping them helps with that.

Fixes #690.
2020-05-29 15:38:18 -04:00
Joshua Tauberer b805f8695e Move status checks for www, autoconfig, autodiscover, and mta-sts to within the section for the parent domain
Since we're checking the MTA-STS policy, there's no need to check that the domain resolves etc. directly.
2020-05-29 15:38:13 -04:00
Joshua Tauberer 10bedad3a3 MTA-STS tweaks, add status check using postfix-mta-sts-resolver, change to enforce 2020-05-29 15:36:52 -04:00
A. Schippers afc9f9686a
Publish MTA-STS policy for incoming mail (#1731)
Co-authored-by: Daniel Mabbett <triumph_2500@hotmail.com>
2020-05-29 15:30:07 -04:00
Joshua Tauberer 7de8fc9bc0 v0.45 2020-05-16 06:45:23 -04:00
yeuna92 c87b62b8c2
Fix path to Roundcube error log in fail2ban jails.conf (#1761) 2020-05-11 08:59:42 -04:00
clonejo 8fe33da85d Run nightly tasks on a random minute after 03:00 to avoid overload (#1754)
- The MIAB version check regularly fails at 03:00, presumably because a
  large portion of installations is checking mailinabox.email at the same
  time.
- At installation time, the nightly task is configured to run at a
  random minute after 03:00, but before 04:00.
- Users might expect the nightly tasks to be over at a certain time and
  run their own custom tasks afterwards. This could thus interfere with
  custom backup routines.
- This breaks reproducibility of the installation process.
- Users might also be surprised by the nightly task time changing after
  updating MIAB.
2020-05-10 19:54:45 -04:00
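The change above amounts to picking the minute once, at setup time, and baking it into the cron schedule so all boxes don't hit mailinabox.email at 03:00 sharp; something along these lines (the script path is a placeholder):

```python
import random

minute = random.randint(0, 59)   # chosen once at install time, then fixed
cron_line = f"{minute} 3 * * * root /path/to/management/daily_tasks.sh"
print(cron_line)   # e.g. "37 3 * * * root /path/to/management/daily_tasks.sh"
```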
Joshua Tauberer c202a5cbc6 Changelog entries 2020-05-10 19:46:25 -04:00
Joshua Tauberer 1353949e42 Upgrade Roundcube to 1.4.4, Nextcloud to 17.0.6, Z-Push to 2.5.2 2020-05-10 19:44:12 -04:00
Joshua Tauberer c19f8c9ee6 Change Mozilla autoconfig useGlobalPreferredServer property to false
Fixes #1736.
2020-05-10 19:29:01 -04:00
Michael Becker 40b21c466d
Typo fix in users.html (#1748) 2020-04-13 22:10:52 -04:00
Stefan f52749b403
Better return codes after errors in the setup scripts (#1741) 2020-04-11 14:18:44 -04:00
Sumit d67e09f334
Allow adding nginx aliases in www/custom.yaml (#1742)
With this, nginx will keep proxying requests and serving static content
instead of passing this responsibility to the proxied server.

Without this, one needs to run an additional server to serve static
content at the proxied URL.
2020-04-11 14:17:46 -04:00
Daniel Davis e224fc6656
Delete unused function apt_add_repository_to_unattended_upgrades (#1721)
The function apt_add_repository_to_unattended_upgrades is defined
but never called anywhere. It appears that automatic apt updates
are handled in system.sh where the file /etc/apt/apt.conf.d/02periodic
is created. The last call was removed in bbfa01f33a.

Co-authored-by: ddavis32 <dan@nthdegreesoftware.com>
2020-03-08 09:49:39 -04:00
Joshua Tauberer 5e47677f7a
Merge mail log script fixes for UTF-8 issue and Feb 29 issue (#1734) 2020-03-08 09:37:43 -04:00
Jarek Jurasz db9637ce4f Fix Feb 29 issue #1733 2020-03-03 20:59:28 +01:00
Jarek Jurasz f908bc364e mail_log.py reading forward #1593 2020-03-03 20:56:30 +01:00
Joshua Tauberer 30c2c60f59 v0.44 2020-02-15 07:15:09 -05:00
Joshua Tauberer ab5ce01bdd Some changelog entries 2020-01-22 03:36:02 -05:00
Joshua Tauberer ddadb6c28a Roundcube 1.4.2 2020-01-22 03:25:53 -05:00
Joshua Tauberer 23be1031b8 Remove security.md's information about port 25 which is out of date 2020-01-22 03:25:30 -05:00
Michael Kroes faee29ba8b Bump Nextcloud to 17.0.2 (#1702) 2020-01-22 03:06:17 -05:00
E.M. Makat b86bf07d57 Fix spelling of 'guarantee' (#1703) 2020-01-22 02:58:40 -05:00
jvolkenant e6294049bc Update Roundcube persistent_login plugin (#1712) 2020-01-22 02:58:04 -05:00
Joshua Tauberer 30885bcc8a Downgrade TLS settings for port 25, partially reverting f53b18ebb9
Port 25 now is aligned with Mozilla's "Old" recommendations at https://ssl-config.mozilla.org/#server=postfix&server-version=3.3.0&config=old&openssl-version=1.1.1.

See #1705
2020-01-20 14:52:23 -05:00
Bart a67f90593d Replace dead link with archive.org link (#1698) 2019-12-19 18:33:36 -05:00
Joshua Tauberer 385340da46 install openssh-client which provides ssh-keygen and is not present on desktop Ubuntu by default 2019-12-12 11:27:39 -05:00
jvolkenant 0271e549bb Fix typo in InstallNextcloud calls (#1693) 2019-12-10 19:01:09 -05:00
Joshua Tauberer f53b18ebb9 Upgrade TLS settings 2019-12-01 17:49:36 -05:00
Joshua Tauberer 8567a9b719 Fix upgrade issue broken by 802e7a1f4d 2019-12-01 17:44:12 -05:00
Vasek Sraier ad9d732608 OpenDKIM canonicalization changed to relaxed for mail headers (#1620)
Because Mailman reformats headers it breaks DKIM signatures. SPF also does
not apply in mailing lists. This together causes DMARC to fail and mark the
email as invalid. This fixes DKIM signatures for Mailman-based mailing lists
and makes sure the DMARC test passes.
2019-12-01 16:24:38 -05:00
jvolkenant aa15670dc2 Fixed multiple commented add_header entries in /etc/spamassassin/local.cf (#1641) 2019-12-01 16:23:02 -05:00
jvolkenant 81176c8e4b Fix to prevent multiple commented entries in dovecot conf (#1642) 2019-12-01 16:22:17 -05:00
Carl Reinke 960b5d5bbd Don't use ifquery to check interface state since it is no longer installed (#1689) 2019-12-01 16:21:38 -05:00
Carl Reinke 802e7a1f4d Copy systemd service files before linking to avoid issue with order of mounting filesystems (#1688) 2019-12-01 16:15:04 -05:00
Michael Kroes 52c68c6510 Implement Nextcloud php-fpm recommended performance tuning settings (#1679) 2019-12-01 16:13:33 -05:00
Michael Kroes 54b1ee9a3d Nextcloud 17 (#1676) 2019-12-01 16:11:00 -05:00
Francesco Montanari 6e3dee8b3b Upgrade RoundCube to 1.4.1 and set the default skin to elastic (#1673)
* Upgrade RoundCube to 1.4.0 and set the default skin to elastic
* Install php-ldap extension
* Remove smtp parameters that are now the default
2019-12-01 16:10:04 -05:00
Matthias Hähnel cd62fd9826 Update usage hint in backup.py (#1662)
Removed the explicit call to the system Python, because the file has a shebang pointing to the Mail-in-a-Box-shipped Python.
For me, the system Python complained that it was missing some modules.
2019-11-23 08:04:22 -05:00
Michael Kroes 91638c7fe0 Removed the postgrey option that specifies which whitelist file to use. This allows the usage of a .local version (#1675) 2019-11-23 07:58:29 -05:00
Michael Kroes ff8170d5ab Align nextcloud cron job with recommended settings (#1680) 2019-11-23 07:51:22 -05:00
Joshua Tauberer f6f75f6fab Don't fail when resolving zone transfer IP addresses since a nameserver may not have an IPv6 address 2019-11-19 09:57:33 -05:00
Edwin Schaap 2f54f39f31 If xfr is subnet, do not create "notify" entry (#1672) 2019-11-10 11:58:22 -05:00
Victor fa792f664e Use correct setting for .editorconfig indent_style (#1670) 2019-11-03 13:31:29 -05:00
Joshua Tauberer b50dfb7f93 changelog entries 2019-11-02 15:57:14 -04:00
Dan Jensen cde4e0caca Change SSL notification email subject (#1653)
Previously the notification email sent when a box's SSL certificate
is automatically updated said, "Error Provisioning TLS Certificate"
even when there was no error. This changes the subject line to "TLS
Certificate Provisioning Results", which is more accurate.
2019-11-02 15:29:05 -04:00
jvolkenant df80b9fc71 Allow user_external for Nextcloud 16 (and eventually 17) (#1655) 2019-11-02 15:28:36 -04:00
notEvil 7558ffd4f3 Allow dns zone transfer from IPv6 (#1643) 2019-10-28 06:31:50 -04:00
Victor 50e9e8af30 Sort custom dns table based on fqdn, rtype, and value (#1651) 2019-10-28 06:29:40 -04:00
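Sorting by the (fqdn, rtype, value) triple, as in the commit above, is a one-liner in Python; a small illustration with made-up records:

```python
records = [
    ("mail.example.com", "A", "203.0.113.4"),
    ("example.com", "TXT", "v=spf1 mx -all"),
    ("example.com", "A", "203.0.113.4"),
]

# Tuples compare element by element, so this orders by fqdn, then rtype, then value.
for fqdn, rtype, value in sorted(records):
    print(fqdn, rtype, value)
```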
jvolkenant ed02e2106b Update zpush to 2.5.1 (#1654) 2019-10-28 06:27:54 -04:00
Jeff Volkenant 24a567c3be Fix mailinabox-postgrey-whitelist cron job return code for file over 28 days
Merges #1639
2019-10-05 16:27:21 -04:00
Brendan Hide 70f05e9d52 Ensure the universe repository is enabled
A minimal Ubuntu server installation might not have universe enabled by
default. By adding it, we ensure we can install packages only available
in universe, such as python3-pip

Merges #1650.
2019-10-05 16:14:12 -04:00
Michael Kroes 889118aeb6 Upgraded Nextcloud to 16.0.5 (#1648)
* Upgraded Nextcloud to 16.0.5

* Improved Nextcloud upgrade detection
2019-10-05 16:12:00 -04:00
Joshua Tauberer a70ba94b0c add autoconfig domains before subtracting domains with overridden A records so that a custom DNS record can be used to suppress TLS certificate generation for those domains if needed 2019-09-10 07:11:16 -04:00
Joshua Tauberer 9e29564f48 v0.43 2019-09-01 07:43:47 -04:00
Joshua Tauberer 5aeced5c2e add a test for fail2ban monitoring managesieve 2019-08-31 09:15:41 -04:00
Joshua Tauberer 46f64e0e0a fail2ban should watch for managesieve logins too, fixes #1622 2019-08-31 09:04:17 -04:00
Joshua Tauberer 4971b63501 changelog entries 2019-08-31 08:52:32 -04:00
Joshua Tauberer 3ff9817325 document the xfr: CIDR notation, fix spaces vs tabs and syntax error, broken by c7377e602d, #1616 2019-08-31 08:50:44 -04:00
jvolkenant d6becddbe5 Change Nextcloud upgrade logic to look at STORAGE_ROOT's config.php version vs /usr/local's version.php version (#1632)
* Download and verify Nextcloud download before deleting old install directory
* Changed install logic to look at config.php and not version.php for database version number. When restoring from a backup, config.php in STORAGE_ROOT will hold the Nextcloud version that corresponds to the user's database and version.php in /usr/local won't even exist, so we were missing Nextcloud migration steps. In other cases they should be the same.
2019-08-31 08:50:36 -04:00
Michael Kroes 1d6793d124 Update the Postgrey whitelist to a newer version monthly (#1611)
Automatically update the Postgrey whitelist to a newer version once a month.
2019-08-31 08:38:41 -04:00
Kim Schulz c7377e602d make it possible to use subnet addresses for axfr (#1616)
It is sometimes necessary to set axfr to more than just one IP address. This can be done with multiple xfr: entries in the secondary DNS input, but adding an entire subnet segment (xxx.xxx.xxx.0/yy) did not work.
With this patch it is now possible to use a subnet as input for xfr the same way as an IP address.
2019-08-31 08:00:18 -04:00
Snacho 08021ea19f Fix an issue when Secondary NS has multiple A records (#1633)
If a custom secondary NS server has multiple A records, status_checks.py will fail with a timeout and the Web UI won't load.
2019-08-31 07:58:12 -04:00
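Handling a round-robin nameserver means enumerating every address behind the hostname and checking each one with its own timeout, rather than assuming a single record. A sketch using the standard library, not the project's own resolver code:

```python
import socket

def nameserver_addresses(hostname):
    # getaddrinfo returns every A/AAAA record for the name, not just the first one.
    infos = socket.getaddrinfo(hostname, 53, proto=socket.IPPROTO_UDP)
    return sorted({info[4][0] for info in infos})

for address in nameserver_addresses("ns2.example.com"):
    print(address)   # each address would then be probed separately
```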
cmharper 295d481603 Upgraded roundcube to 1.3.10 (#1634) 2019-08-31 07:55:38 -04:00
captainwasabi c4cb828f65 Fix rsync backup options string: extraneous single quotes causing problems (#1629)
The resulting command had nested single quotes, which don't work.

I think this fixes all/most of the issues in #1627. I am getting a full backup, then the next time it's run I get an incremental. Running from the CLI with --status looks good, --verify looks good, and --list looks good.
2019-08-13 05:57:05 -04:00
captainwasabi 0657f9e875 add proper check for DNS error in list_target_files (#1625)
The elif needed to check whether the string was in the output of the shell command. As it was, the conditional was just the string, which always evaluates to true and was therefore giving a misleading error message.
2019-08-13 05:47:11 -04:00
Joshua Tauberer e37768ca86 v0.42b 2019-08-03 11:49:32 -04:00
jvolkenant bea5eb0dda Add interim upgrade step from Nextcloud 13 -> 14 (#1605) 2019-07-12 06:41:16 -04:00
jvolkenant fd5b11823c Add AAAA records for autodiscover & autoconfig (#1606) 2019-07-10 06:28:37 -04:00
Joshua Tauberer 5fc1944f04 pull v0.42, go back to v0.41 2019-07-05 11:56:54 -04:00
Joshua Tauberer 39fd4ce16c v0.42 2019-07-04 21:34:55 -04:00
Joshua Tauberer c0f4d5479f changelog updates 2019-06-16 11:40:40 -04:00
jvolkenant 193763f8f0 Update to Nextcloud 15.0.8, Contacts to 3.1.1, and Calendar to 1.6.5 (#1577)
* Update to Nextcloud 15.0.7, Contacts to 3.1.1, and Calendar to 1.6.5
* Enabled localhost-only insecure IMAP login for localhost Nextcloud auth
* Add package php-imagick and BigInt conversion
* added support for /cloud/oc[sm]-provider/ endpoint
2019-06-16 11:10:52 -04:00
jvolkenant 79759ea5a3 Upgrade Z-Push to 2.5.0 (#1581) 2019-06-16 11:07:45 -04:00
jvolkenant 6e5ceab0f8 hide virtualenv output (#1578) 2019-05-15 11:59:32 -07:00
jvolkenant c6fa0d23df check that munin-cron is not running (via cron) when it is run in setup, fixes #660 (#1579) 2019-05-15 11:58:40 -07:00
cmharper 85e59245fd hide 'RTNETLINK answers: Network is unreachable' error message during setup if IPv6 is not available (#1576) 2019-05-15 11:57:06 -07:00
jvolkenant 4232a1205c fix dovecot message about SSLv2 not supported by OpenSSL (#1580) 2019-05-15 11:46:52 -07:00
Michael Heuberger 0d4c693792 Add missing login form method to keep LastPass happy (#1565) 2019-05-12 05:10:34 -07:00
Pascal Garber 77b2246010 Backup Amazon S3: Added support for custom endpoints (#1427) 2019-05-12 05:09:30 -07:00
jvolkenant aff80ac58c Autodiscovery fix for additional hosted email domains, Fixes #941 (#1467) 2019-05-09 10:13:23 -07:00
just4t 25fec63a03 RAM limit to 502 MB to meet EC2 & Vultr 512 MB instances (#1560)
As told here: https://github.com/mail-in-a-box/mailinabox/pull/1534
2019-04-14 16:33:50 -04:00
dexbleeker 9b46637aff Update Roundcube to version 1.3.9 (#1546) 2019-04-14 14:19:21 -04:00
mbraem fb25013334 user privileges is a set (#1551)
fixes #1540
2019-04-14 14:17:43 -04:00
Joshua Tauberer dd7a2aa8a6 v0.41 2019-02-26 18:17:50 -05:00
Joshua Tauberer 149552f79b systemctl link should use -f to avoid an error if a system service already exists with that name but points to a different file
https://discourse.mailinabox.email/t/new-error-failed-systemctl-link-conf-mailinabox-service/4626/2
2019-02-26 18:16:26 -05:00
Joshua Tauberer adddd95e38 add lmtp_destination_recipient_limit=1 to work around spampd bug, see #1523 2019-02-25 13:20:57 -05:00
Ryan Stubbs bad38840d8 Fix typo on alias edit page (#1520) 2019-02-11 20:14:56 -05:00
Yoann Colin 10050aa601 Upgrade to NextCloud 14 (#1504)
* Upgraded Nextcloud from 13.0.6 to 14.0.6.
* Upgraded Contacts from 2.1.5 to 2.1.8.
* Upgraded Calendar from 1.6.1 to 1.6.4.
* Cleanup unsupported version upgrades: Since an upgrade to v0.30 is mandatory before moving upward, I removed the checks for Nextcloud prior version 12.
* Fix the storage root path.
* Add missing indices. Thx @yodax for your feedback.
2019-02-08 21:24:03 -05:00
jvolkenant c60e3dc842 fail2ban ssh/ssh-ddos and sasl are now sshd and postfix-sasl (fixes #1453, merges #1454)
* fail2ban ssh/ssh-ddos and sasl are now sshd and postfix-sasl

* specified custom datepattern for miab-owncloud.conf
2019-01-18 09:40:51 -05:00
Joshua Tauberer c7659d9053 v0.40 2019-01-12 08:24:15 -05:00
Joshua Tauberer cd3fb1b487 fix bootstrap.sh to not confuse the status checks about the latest version 2019-01-09 09:03:43 -05:00
Joshua Tauberer 29e77d25fc merge branch 'ubuntu_bionic' 2019-01-09 08:53:10 -05:00
Joshua Tauberer e56c55efe8 write changelog summary for the Ubuntu 18.04 upgrade 2019-01-09 08:52:51 -05:00
Joshua Tauberer 8e0d9b9f21 update list of tls ciphers supported 2019-01-09 08:52:51 -05:00
Joshua Tauberer 6e60b47cb5 update bootstrap.sh script to detect the operating system and choose a different version tag depending on whether the box is running Ubuntu 14.04 or Ubuntu 18.04 2019-01-09 08:52:51 -05:00
Joshua Tauberer a3add03706 Merge branch 'master' into ubuntu_bionic 2019-01-09 07:00:44 -05:00
Joshua Tauberer 7b592b1e99 v0.30 - the last Ubuntu 14.04 release 2019-01-09 06:31:56 -05:00
Joshua Tauberer a67aa4cfd4 changelog 2019-01-09 06:17:27 -05:00
Dean Perry 31b743b164 Fix some more $DEFAULT_PUBLIC_IP issues (#1494) 2018-12-26 15:39:47 -05:00
jvolkenant 71f1c92b9e bash strict mode fixes (#1482) 2018-12-13 20:30:05 -05:00
EliterScripts e80a1dd4b7 fix DEFAULT_PUBLIC_IP unbound variable error (#1488)
This will fix this error while installing:
setup/questions.sh: line 95: DEFAULT_PUBLIC_IP: unbound variable
2018-12-13 20:28:21 -05:00
jvolkenant b7e9a90005 roundcube: upgrade carddav plugin to 3.0.3 & updated migrate.py (#1479)
* roundcube:  upgrade carddav plugin to 3.0.3 & updated migrate.py

* Check for db first and clear sessions to force re-login
2018-12-03 15:33:36 -05:00
Joshua Tauberer 0d4565e71d merge master branch 2018-12-02 18:19:15 -05:00
Joshua Tauberer 703a9376ef fix /etc /usr permissions for Scaleway, see #1438 2018-12-02 18:16:40 -05:00
Joshua Tauberer b3b798adf2 changelog entries 2018-12-02 18:03:17 -05:00
Joshua Tauberer bd54b41041 add missing rsyslog to apt install line
see #1438
2018-12-02 18:02:00 -05:00
Joshua Tauberer a211ad422b add a note on the aliases page that aliases should not be used to forward to outside domains
fixes #1198
2018-12-02 18:02:00 -05:00
Joshua Tauberer ef28a1defd show the Mail-in-a-Box version in the system status checks even when the new-version check is disabled
fixes #922
2018-12-02 18:02:00 -05:00
Joshua Tauberer c5c413b447 remove user account mailbox size from the control panel because it takes way too long to compute on very large mailboxes
fixes #531
2018-12-02 18:02:00 -05:00
Joshua Tauberer d2beb3919b document password character limitation
fixes #407
2018-12-02 18:02:00 -05:00
Achilleas Pipinellis a7dded8182 Add a logfile entry to the NSD conf file (#1434)
Having a log file can help with debugging when something goes wrong but
NSD doesn't fail outright or MiaB doesn't notify you.

See
https://discourse.mailinabox.email/t/dns-email-domain-becomes-inaccessible-every-few-hours/3770
2018-12-02 18:00:16 -05:00
jeff-h 000363492e Improve greylisting explanation. (#1447)
Hopefully this improves the accuracy of the greylisting description.
2018-12-02 17:58:26 -05:00
jeff-h 5be74dec6e Improve postgrey logging (#1448)
We can't presume the redelivery timeframe of the sending server. However, we do know the blacklist timeframe within which we will reject a redelivery.
2018-12-02 17:57:37 -05:00
Joshua Tauberer 9ddca42c91 add 'nameserver' to resolv.conf, fixes #1450 2018-11-30 10:46:54 -05:00
Joshua Tauberer ff6d8fc672 remove the ppa directory since we're no longer supporting a PPA for Ubuntu 18.04 2018-11-30 10:46:54 -05:00
Joshua Tauberer 870b82637a fix some wrong variable names, fixes #1353 2018-11-30 10:46:54 -05:00
Joshua Tauberer dc6458623d add a note on the aliases page that aliases should not be used to forward to outside domains
fixes #1198
2018-11-30 10:46:54 -05:00
Joshua Tauberer 60f9c9e3b7 show the Mail-in-a-Box version in the system status checks even when the new-version check is disabled
fixes #922
2018-11-30 10:46:54 -05:00
Joshua Tauberer e5e0c64395 turn on bash strict mode to better catch setup errors
fixes #893
2018-11-30 10:46:54 -05:00
Joshua Tauberer aa52f52d02 disable SMTP AUTH on port 25 to stop it accidentally being used for submission
fixes #830
2018-11-30 10:46:54 -05:00
Joshua Tauberer b05b06c74a remove user account mailbox size from the control panel because it takes way too long to compute on very large mailboxes
fixes #531
2018-11-30 10:46:54 -05:00
Joshua Tauberer 7f8f4518e3 document password character limitation
fixes #407
2018-11-30 10:46:54 -05:00
Joshua Tauberer 86e2cfb6c8 remove old duplicity migration code from 2015, see 42322455 2018-11-30 10:46:54 -05:00
Holger Just 0335595e7e Update Roundcube to version 1.3.8 (#1475)
https://github.com/roundcube/roundcubemail/releases/tag/1.3.8
2018-11-25 10:40:21 -05:00
jvolkenant 8d5670068a fixes nginx warning about duplicate ssl configuration (#1460) 2018-10-25 15:18:21 -04:00
jvolkenant c9b3d88108 Fixes #1437 - package python-virtualenv is now called just virtualenv (#1452) 2018-10-24 17:20:48 -04:00
Joshua Tauberer 16f38042ec v0.29 released, closes #1440 2018-10-24 16:12:25 -04:00
Joshua Tauberer 2f494e9a1c CHANGELOG fixes/updates 2018-10-24 16:09:59 -04:00
Joshua Tauberer f739662392 duplicity started creating signature files with invalid filenames, fixes #1431 2018-10-13 16:16:30 -04:00
Michael Kroes 6eb9055275 Upgrade NextCloud to 13.06 (#1436) 2018-10-09 07:09:54 -04:00
Joshua Tauberer 3dbd6c994a update bind9 configuration 2018-10-03 14:28:43 -04:00
Joshua Tauberer bc4bdca752 update reference to Ubuntu 14.04 to 18.04 in README.md and security.md and drop mentions of our custom packages that we no longer maintain 2018-10-03 13:00:15 -04:00
Joshua Tauberer bbfa01f33a update to PHP 7.2
* drop the ondrej/php PPA since PHP 7.x is available directly from Ubuntu 18.04
* install PHP 7.2, which is just the "php" package in Ubuntu 18.04
* some package names changed, some unnecessary packages are no longer provided
* update paths
2018-10-03 13:00:15 -04:00
Joshua Tauberer f6a641ad23 remove some cleanup steps that are no longer needed since we aren't supporting upgrades of existing machines and, even if we did, we aren't supporting upgrades from really old versions of Mail-in-a-Box 2018-10-03 13:00:15 -04:00
Joshua Tauberer 51972fd129 fix some comments 2018-10-03 13:00:15 -04:00
Joshua Tauberer bb43a2127c turn the x64/i686 architecture check into a warning since I'm not sure if we have any architecture requirements anymore, beyond what Ubuntu supports 2018-10-03 13:00:15 -04:00
Christopher A. DeFlumeri d96613b8fe minimal changeset to get things working on 18.04
@joshdata squashed pull request #1398, removed some comments, and added these notes:

* The old init.d script for the management daemon is replaced with a systemd service.
* A systemd service configuration is added to configure permissions for munin on startup.
* nginx SSL settings are updated because nginx's options and defaults have changed, and we now enable http2.
* Automatic SSHFP record generation is updated to know that 22 is the default SSH daemon port, since it is no longer explicit in sshd_config.
* The dovecot-lucene package is dropped because the Mail-in-a-Box PPA where we built the package has not been updated for Ubuntu 18.04.
* The stock postgrey package is installed instead of the one from our PPA (which we no longer support), which loses the automatic whitelisting of DNSWL.org-whitelisted senders.
* Drop memcached and the status check for memcached, which we used to use with ownCloud long ago but are no longer installing.
* Other minor changes.
2018-10-03 13:00:06 -04:00
Joshua Tauberer 504a9b0abc certbot uses a new directory path for API v02 accounts and we should check that before creating a new account or else we'll try to create a new account on each setup run (which certbot just fails on) 2018-09-03 13:07:24 -04:00
Joshua Tauberer 842fbb3d72 auto-agree to Let's Encrypt's terms of service during setup
fixes #1409

This reverts commit 82844ca651 ("make certbot auto-agree to TOS if NONINTERACTIVE=1 env var is set (#1399)") and instead *always* auto-agree. If we don't auto-agree, certbot asks the user interactively, but our "curl | bash" setup line does not permit interactive prompts, so certbot failed to register and all certificate things were broken until the command was re-run interactively.
2018-09-03 13:06:34 -04:00
Joshua Tauberer a5d5a073c7 update Z-Push to 2.4.4
Starting with 2.4, Z-Push no longer provides tarballs on their download server. The only options are getting the code from their git repository or using one of their distribution packages. Their Ubuntu 18.04 packages don't seem to actually work in Ubuntu 18.04, so thinking ahead that's currently a bad choice. In 78d1c9be6e we switched from doing a git clone to using wget on their downloads server because of a problem with something related to stash.z-hub.io's SSL certificate. But wget also seems to work on their source code repository, so we can use that.
2018-09-02 11:29:44 -04:00
Joshua Tauberer d4b122ee94 update to Nextcloud 13.0.5 2018-08-24 11:11:52 -04:00
Joshua Tauberer 052a1f3b26 update to Roundcube 1.3.7 2018-08-24 10:47:22 -04:00
Joshua Tauberer 180b054dbc small code cleanup testing if the utf8 locale is installed 2018-08-24 09:49:08 -04:00
Joshua Tauberer cb162da5fe
Merge pull request #1412 from hlxnd/pr
Use ISO 8601 on backups table dates, fixes #1397
2018-08-05 15:16:05 -04:00
hlxnd de9c556ad7 Add missing PHP end tag 2018-08-05 15:27:35 +02:00
hlxnd f420294819 Use ISO 8601 on backups table dates. 2018-08-05 15:26:45 +02:00
Joshua Tauberer 738e0a6e17 v0.28 released, closes #1405 2018-07-30 11:14:38 -04:00
Pascal Garber e0d46d1eb5 Use Nextcloud’s occ command to unlock the admin (#1406) 2018-07-25 15:37:09 -04:00
Joshua Tauberer 7f37abca05 add php7.0-curl to webmail.sh
see 7ee91f6ae6
see #1268
closes #1259
2018-07-22 09:19:36 -04:00
Joshua Tauberer 2f467556bd new ssl cert provisioning broke if a domain doesn't yet have a cert, fixes #1392 2018-07-19 11:40:49 -04:00
Joshua Tauberer 15583ec10d updated CHANGELOG 2018-07-19 11:27:37 -04:00
Nils Norman Haukås 78d1c9be6e failing z-push installation: replace git clone with wget_verify
git clone (which uses curl underneath) was failing. Curiously, the same
git clone command would work on my macOS host machine.

From the screenshot it looks like curl was somehow not able to negotiate
the connection. Might have been a missing CA certificate for Comodo, but
I was not able to determine if that was the issue.

fixes #1393
closes #1387
closes #1400
2018-07-19 11:25:57 -04:00
dev9 b0b5d8e792 Fix .mobileconfig so CalDAV calendar works on Mac OS X (#1402)
The previous CalDAVPrincipalURL "/cloud/remote.php/caldav/calendars/" causes an error in OS X.

See: https://discourse.mailinabox.email/t/caldav-with-macos-10-12-2-does-not-work/1649 and other similar issues.

The correct CalDAVPrincipalURL is discussed at https://discourse.mailinabox.email/t/caldav-with-macos-10-12-2-does-not-work/1649, but it turns out you can just leave the key/value out completely and OS X/iOS are able to auto-discover the correct URL.
2018-07-19 11:17:38 -04:00
Nils 82844ca651 make certbot auto-agree to TOS if NONINTERACTIVE=1 env var is set (#1399) 2018-07-15 11:24:15 -04:00
Joshua Tauberer 2a72c800f6 replace free_tls_certificates with certbot 2018-06-29 16:46:21 -04:00
Joshua Tauberer 8be23d5ef6 ssl_certificates: reuse query_dns function in status_checks and simplify calls by calling normalize_ip within query_dns 2018-06-29 16:46:21 -04:00
Joshua Tauberer f9a0e39cc9 cryptography is now distributed as a wheel and no longer needs system development packages to be installed or pip/setuptools workarounds 2018-06-29 16:46:21 -04:00
Joshua Tauberer 0c0a079354 v0.27 2018-06-14 07:49:20 -04:00
Joshua Tauberer 42e86610ba changelog entry 2018-05-12 09:43:41 -04:00
yeah 7c62f4b8e9 Update Roundcube to 1.3.6 (#1376) 2018-04-17 11:54:24 -04:00
Joshua Tauberer 1eba7b0616 send the mail_log.py report to the box admin every Monday 2018-02-25 11:55:06 -05:00
Joshua Tauberer 9c7820f422 mail_log.py: include sent mail in the logins report in a new smtp column 2018-02-24 09:24:15 -05:00
Joshua Tauberer 87ec4e9f82 mail_log.py: refactor the dovecot login collector 2018-02-24 09:24:14 -05:00
Joshua Tauberer 08becf7fa3 the hidden feature for proxying web requests now sets X-Forwarded-For 2018-02-24 09:24:14 -05:00
Joshua Tauberer 5eb4a53de1 remove old tools/update-subresource-integrity.py script which isn't used now that we download all admin page remote assets during setup 2018-02-24 09:24:14 -05:00
Joshua Tauberer 598ade3f7a changelog entry 2018-02-24 09:24:09 -05:00
xetorixik 8f399df5bb Update Roundcube to 1.3.4 and Z-push to 2.3.9 (#1354) 2018-02-21 08:22:57 -05:00
Joshua Tauberer ae73dc5d30 v0.26c 2018-02-13 10:46:02 -05:00
Joshua Tauberer c409b2efd0 CHANGELOG entries 2018-02-13 10:44:07 -05:00
Joshua Tauberer 6961840c0e wrap wget in hide_output so that wget errors are shown
Our wget_verify function uses wget to download a file and then check
the file's hash. If wget fails, i.e. because of a 404 or other HTTP
or network error, we exited setup without displaying any output because
normally there are no errors and -q keeps the setup output clean.

Wrapping wget with our hide_output function, and dropping -q, captures
wget's output, and shows it and exits setup only if wget fails.

see #1297
2018-02-13 10:38:10 -05:00
yeah 6162a9637c Add some development instructions to CONTRIBUTING.md (#1348) 2018-02-05 08:41:19 -05:00
Jan Schulz-Hofen 47c968e71b Upgrade Nextcloud from 12.0.3 to 12.0.5 2018-02-04 10:13:30 -05:00
Jan Schulz-Hofen ed3e2aa712 Use new .tar.bz2 source files for ownCloud and fix upgrade paths 2018-02-04 10:13:30 -05:00
NatCC fe597da7aa Update users.html (#1345)
Passwords must be eight characters long; when passwords are changed via the users page the dialog states that passwords need to be at least four characters but only eight or more are acceptable.
2018-02-03 17:49:11 -05:00
Joshua Tauberer 61e9888a85 Don't try to generate a CSR in the control panel until both the domain and country are selected
Fixes #1338.

See 0e9680fda63c33ace3f34ca7126617fb0efe8ffc, a52c56e571.
2018-01-28 09:08:24 -05:00
Joshua Tauberer 35fed8606e only spawn one process for the management daemon
In 0088fb4553 I changed the management daemon's startup
script from a symlink to a Python script to a bash script that activated the new virtualenv
and then launched Python. As a result, the init.d script that starts the daemon would
write the pid of bash to the pidfile, and when trying to kill it, it would kill bash but
not the Python process.

Using exec to start Python fixes this problem by making the Python process have the pid
that the init.d script knows about.

fixes #1339
2018-01-28 09:08:19 -05:00
Joshua Tauberer ef6f121491 when generating a CSR in the control panel, don't set empty attributes
Same as in a52c56e571.

Fixes #1338.
2018-01-28 09:07:54 -05:00
Joshua Tauberer ec3aab0eaa v0.26b 2018-01-25 09:27:17 -05:00
Joshua Tauberer 8c69b9e261 update CHANGELOG 2018-01-25 09:23:04 -05:00
Joshua Tauberer e7150e3bc6 pin acme to v0.20, which is the last version compatible with free_tls_certificates
free_tls_certificates uses acme.jose, which in acme v0.21 was moved to a new Python package.

See #1328
2018-01-20 11:23:45 -05:00
Joshua Tauberer 8d6d84d87f run mailconfig.py's email address validator outside of the virtualenv during questions.sh
We don't have the virtualenv this early in setup.

Broken by 0088fb4553.

Fixes #1326.

See https://discourse.mailinabox.email/t/that-is-not-a-valid-email-error-during-mailinabox-installation/2793.
2018-01-20 10:59:37 -05:00
barrybingo a6a1cc7ae0 Reduce munin-node log level to warning (#1330) 2018-01-19 12:00:44 -05:00
Joshua Tauberer b5c0736d27 release v0.26 2018-01-18 17:10:23 -05:00
Joshua Tauberer 8ee7de6ff3 no need to do a second apt-get update after 'installing' the PHP7 PPA if the PPA was already installed 2018-01-15 13:28:18 -05:00
Joshua Tauberer 0088fb4553 install Python 3 packages in a virtualenv
The cryptography package has created all sorts of installation trouble over the last few years, probably because of mismatches between OS-installed packages and pip-installed packages. Using a virtualenv for all Python packages used by the management daemon should make sure everything is consistent.

See #1298, see #1264.
2018-01-15 13:27:04 -05:00
Joshua Tauberer b2d103145f remove php5 packages from webmail.sh
The PHP5 packages have a dependency on (apache2 or php5-cgi or php5-fpm), and since removing php5-fpm apache2 started getting installed during setup, which caused a conflict with nginx of course.

These packages don't seem to be needed by Roundcube or Nextcloud --- Roundcube includes the ones it needs.

see #1264, #1298
2018-01-15 11:29:12 -05:00
Joshua Tauberer fc9e279cec partial revert of 441bd350, accidentally uncommented something 2018-01-15 10:33:05 -05:00
yeah 257983d559 Fix typo in CHANGELOG.md (#1312) 2017-12-25 17:46:31 -05:00
Joshua Tauberer e924459140 revert f25801e/#1233 - use Mozilla intermediate ciphers for IMAP/POP not modern ciphers
fixes #1300
2017-12-24 14:41:41 -05:00
Joshua Tauberer 441bd35053 update CHANGELOG 2017-12-23 18:01:41 -05:00
Michael Kroes a0e603a3c6 Change z-push to use the git repository instead of the tar ball (#1305) 2017-12-23 17:51:18 -05:00
sam-banks 88604074d6 Bugfix for free command (#1278)
A quick fix - there's no "o" option for free.
2017-12-18 08:21:28 -05:00
yeah d43111eb48 Add X-Spam-Score header to checked mail (#1292)
To enable users to do custom spam filtering based on score, it's helpful to render the actual spam score as a float in a separate header rather than as part of X-Spam-Status where it only appears in a comma separated list.
2017-12-18 08:17:47 -05:00
Jim Bailey 6729588d8c Changed temp_dir to /var/temp/roundcube to avoid loss on reboot. (#1302) 2017-12-18 08:12:45 -05:00
Joshua Tauberer 5f14eca67f merge v0.25 security release 2017-11-15 11:27:30 -05:00
Joshua Tauberer 8944cd7980 v0.25 2017-11-15 11:27:00 -05:00
yeah 2bbbc9dfa3 Update Roundcube to protect against CVE-2017-16651
See https://roundcube.net/news/2017/11/08/security-updates-1.3.3-1.2.7-and-1.1.10.

merges #1287
2017-11-15 11:14:21 -05:00
John Olten 544f155948 Add support for DNS wildcard [merges #1281] 2017-11-15 11:10:59 -05:00
Joshua Tauberer f080eabb3a run apt-get autoremove after updating system packages
Old kernels can build up and some packages may not be needed anymore.

See https://discourse.mailinabox.email/t/storage-space-decreasing/2525/5.
2017-11-15 11:05:43 -05:00
Jānis (Yannis) 7bf377eed1 use RSASHA256 for .lv domains DNSSEC (#1277) 2017-10-31 18:01:47 -04:00
Nicolas North cd554cf480 document the "local" alias pointing to this box in Custom DNS (#1261) 2017-10-20 17:20:21 -04:00
Michael Kroes e5448405ae add php7.0-mbstring to webmail.sh (#1268) 2017-10-15 07:53:01 -04:00
Tristan Hill a7eff8fb35 turn off apt verbose in unattended upgrades (#1255) 2017-10-06 08:16:40 -04:00
Fabian Bucher 341aa8695a update F-Droid DAVdroid link (#1253)
the information about the invalid link comes from here -> https://discourse.mailinabox.email/t/admin-sync-guide-contacts-and-calendar-davdroid-3-69-free-here/2528
2017-10-04 17:47:15 -04:00
Joshua Tauberer 5efdd72f41 update TLS test to record changes in the ciphers we offer on the open ports 2017-10-03 12:01:10 -04:00
Joshua Tauberer f25801e88d Merge #1233 - Limit Dovecot ciphers to the Mozilla modern set 2017-10-03 11:55:16 -04:00
Joshua Tauberer cc7be13098 update nginx cipher list to Mozilla's current intermediate ciphers and update HSTS header to be six months
* The Mozilla recommendations must have been updated in the last few years.
* The HSTS header must have >=6 months to get an A+ at ssllabs.com/ssltest.
2017-10-03 11:47:32 -04:00
Joshua Tauberer 2556e3fbc2 HSTS header does not belong here, will result in multiple headers 2017-10-03 11:38:15 -04:00
Joshua Tauberer 00898b2ff5 v0.24 2017-10-03 10:49:04 -04:00
Joshua Tauberer 35b8a149d8 fix dns regex: underscores are allowed in domain names even though they are not allowed in hostnames 2017-09-22 12:31:49 -04:00
Joshua Tauberer d0423afd18 Nextcloud install shouldn't fail if php-fpm isn't already running 2017-09-22 11:10:48 -04:00
Joshua Tauberer edf42df835 update Roundcube (1.3.1), persistent login plugin, Z-Push (2.3.8), and Nextcloud (12.0.3) 2017-09-22 11:10:40 -04:00
Joshua Tauberer 734745a4a6 Nextcloud 12.0.2, fix Nextcloud 12 upgrades seeing the wrong version
Nextcloud 12 adds a new OC_VersionCanBeUpgradedFrom field to /usr/local/lib/owncloud/version.php which lists
prior NC/OC version numbers, which confuses our check for what the installed version is. Make our regex more strict.

merges #1238
2017-09-01 07:58:07 -04:00
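One way to make such a check stricter is to anchor the match on the exact variable name so that the new OC_VersionCanBeUpgradedFrom entries cannot be picked up by accident; a hypothetical sketch (the version.php excerpt below is illustrative, not the project's actual regex):

```python
import re

version_php = """<?php
$OC_Version = array(12,0,2,0);
$OC_VersionString = '12.0.2';
$OC_VersionCanBeUpgradedFrom = array('11.0.5', '12.0.1');
"""

# Anchoring on $OC_VersionString avoids matching other version-like strings in the file.
match = re.search(r"^\$OC_VersionString\s*=\s*'([\d.]+)'\s*;", version_php, re.MULTILINE)
print(match.group(1))   # 12.0.2
```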
dofl dbebaba8b9 switch PHP's process manager to on demand
merges #1216
2017-08-30 13:39:25 -04:00
Joshua Tauberer cb765dfe2a changelog entries 2017-08-30 13:11:58 -04:00
Lloyd Smart 81258e2189 Implement upstream issue #1228 for stronger dh parameters in Dovecot. (#1232) 2017-08-30 13:04:22 -04:00
Lloyd Smart 4dd4b4232a Limited ciphers to the Mozilla modern set from https://mozilla.github.io/server-side-tls/ssl-config-generator/ as requested in issue #1228. 2017-08-29 15:02:58 +01:00
Marius Blüm 48ff664ee9 Remove the ? from "Log out" (#1231)
Signed-off-by: Marius Blüm <marius@lineone.io>
2017-08-23 19:46:45 -04:00
Michael Kroes a52c56e571 only set the CN field when generating initial CSR to prevent issues with the php7 ppa version of openssl (#1223)
OpenSSL 1.1.0f now validates the other subject fields and rejects the empty string (for the country?) because it isn't two characters.
2017-07-30 08:11:39 -04:00
Jon Hermansen 6ace97e482 update PPA build URL for postgrey 1.35. Fixes #1211 (#1212) 2017-07-21 15:13:57 -04:00
Git Repository 19a928e4ec [Issue #1159] Remove any +tag name in email alias before checking privileges (#1181)
* [Issue #1159] Remove any +tag name in email alias before checking privileges

* Move privileged email check after the conversion to unicode so only IDNA serves as input
2017-07-21 11:10:16 -04:00
Michael Kroes 78f2fe213e Secondary name server could not be set (#1209) 2017-07-21 08:20:37 -04:00
Michael Kroes a16855ecf0 Backup script should now stop php7.0-fpm instead of php5-fpm (#1206) 2017-07-17 09:45:40 -04:00
yodax d773140502 Update to Nextcloud 12 using PHP7
* Install PHP7 via a PPA, enable unattended upgrades for the PPA, and switch all of our PHP configuration to the PHP7 install.
* Keep installing PHP5 for ownCloud/Nextcloud packages because we need it to possibly run transitional updates to ownCloud/Nextcloud versions less than 12. But replace PHP5 packages with PHP7 packages elsewhere.
* Update to Nextcloud 12 which requires PHP7, with a transitional upgrade to Nextcloud 11.0.3.
* Disable TLS cert validation by Roundcube when connecting to localhost IMAP and SMTP. Validation became the default in PHP7 but we don't necessarily have a (non-self-)signed certificate and it definitely isn't valid for the IP address 127.0.0.1.

Merges #1140
2017-07-14 06:48:22 -04:00
Michael Kroes 2c324d0bc9 web_domains should also normalize ipv6 addresses (#1201) 2017-07-13 07:16:12 -04:00
Joshua Tauberer 2bd6cc4d6b update to Z-Push 2.3.7 2017-07-10 18:01:21 -04:00
Joshua Tauberer b11157e0b6 updated to Roundcube 1.3, but unfortunately dropping the vacation plugin
Switched to the -complete download which has vendored assets. See https://github.com/mail-in-a-box/mailinabox/pull/1140.
2017-07-10 17:31:59 -04:00
François Deppierraz 46ba62b7b1 Add support for NS records in custom domains (#1177) 2017-06-11 07:56:30 -04:00
Joshua Tauberer 4c36d6e6c9 release v0.23a 2017-05-31 07:42:18 -04:00
Michael Kroes e49c99890b fetch whole bootstrap - fixes missing icons in admin (#1185) 2017-05-31 07:36:17 -04:00
Joshua Tauberer a13fd90347 v0.23 2017-05-30 06:50:42 -04:00
Git Repository 18f1689f45 changed the location we store the web-assets for the admin pages to /usr/local/mailinabox (#1179) 2017-05-23 19:22:53 -04:00
Git Repository 8234a5a9f4 download jQuery and Bootstrap during setup and serve locally so that we don't rely on a CDN which is blocked in some parts of the world (#1167) (#1171) 2017-05-08 07:25:16 -04:00
Michael Kroes 1d9f9ea617 Fix two typos in setup/owncloud.sh regarding the setting of the hostname (#1172) 2017-05-08 07:23:59 -04:00
Michael Kroes fbb38c3881 Add changelog for custom dns CAA records (#1173) 2017-05-08 07:23:12 -04:00
Git Repository 2caddb41eb #1161 Move the config line for mail_domain to always reset the PRIMARY_HOST (#1163) 2017-05-06 08:18:50 -04:00
Michael Kroes d2b7204319 Add support for adding a custom "CAA" DNS record (#1155) 2017-04-30 08:58:00 -04:00
Michael Kroes 68ebca8a15 Update Z-Push to 2.3.6 (#1166) 2017-04-30 07:24:36 -04:00
Joshua Tauberer 9c9dcdbf0a update README to link to http://z-push.org/ now that we are on the main line 2017-04-24 17:34:53 -04:00
Joshua Tauberer 0c4c2e51bb bump to Nextcloud 10.0.5 2017-04-24 17:31:54 -04:00
Joshua Tauberer 828512b95a changelog entries 2017-04-17 07:51:01 -04:00
Joshua Tauberer add985ce5d letencrypt now supports idna, remove the check/block 2017-04-17 07:45:08 -04:00
Michael Kroes 416dbebf45 update z-push to 2.3.5 on the upstream repository z-push.org (#1153) 2017-04-17 07:42:44 -04:00
Git Repository 2a046a22f4 changed roundcube theme to 'larry' (#1138)
Updated the setup file to use roundcube's 'larry' theme as the default.
2017-04-17 07:29:50 -04:00
yodax b66f12dd4c Fix rsync backup. The path was not append properly 2017-04-17 07:25:47 -04:00
yodax 6e04eb490f Add check to prevent division by zero during backup status 2017-04-17 07:25:47 -04:00
Michael Kroes cd39c2b53f Merge pull request #1151 from phol/master
Corrected typo in setup/dns.sh
2017-04-10 18:52:38 +02:00
Pieter 5da168466d Corrected typo in setup/dns.sh 2017-04-10 18:37:09 +02:00
Joas Schilling a5f39784dd remove nginx error pages for nextcloud (#1141)
They are known to cause troubles, for more information see
https://github.com/nextcloud/server/issues/3847
2017-04-04 07:42:50 -04:00
Michael Kroes a072730fb8 Wrap normalize_ip in try..except (#1139)
closes #1134
2017-04-03 16:53:53 -04:00
Joshua Tauberer 00c61dbcdd changelog entry for migration to Nextcloud 2017-04-02 07:53:56 -04:00
Joshua Tauberer 10bf40250b merge #1121 - migration from ownCloud to Nextcloud
branch 'nextcloud' of https://github.com/yeah/mailinabox
2017-04-02 07:47:31 -04:00
Joshua Tauberer 453091f1fb v0.22 released 2017-04-02 07:34:14 -04:00
Jan Schulz-Hofen 48e0f39179 Rename ownCloud to Nextcloud in safe places
e.g. code comments and user-facing prompts/outputs which can be safely changed without risking breaking anything
2017-04-02 11:19:21 +02:00
Jan Schulz-Hofen bb641cdfba Move from ownCloud to Nextcloud 2017-03-28 11:16:04 +07:00
Joshua Tauberer 255a65ac98 suppress rmcarddav's php version check
Since it says "RCMCardDAV requires at least PHP 5.6.18. Older versions might work", let's hope for the best.

Also hiding its preferences panel in settings since if it doesn't work, we don't want folks using it for anything but connecting to ownCloud contacts.
2017-03-27 08:18:05 -04:00
yeah c7badb80d1 Set default user password length to 8 in non-interactive setups (#1123)
To comply with #1098 and avoid failed setups while testing with Vagrant
2017-03-26 13:23:34 -04:00
Joshua Tauberer 653cb7ce10 roundcube 1.2.4, persistent login plugin 2017-03-26 09:50:00 -04:00
Joshua Tauberer d7d8964afc changelog entries 2017-03-26 09:31:35 -04:00
yeah 6c3696a54a Upgrade ownCloud to 9.1.4 to address security vulnerabilities, refs #1111 (#1120)
* Move variable assignment up and do not use call arguments directly

* Upgrade ownCloud to latest patch release 9.1.4

also move owncloud hash to its own variable
2017-03-26 09:20:27 -04:00
Rinze de Laat 9c9cae2096 Added an alternative mail log scanning script for use from the command line (and monitoring, at a later stage)
merges #970
2017-03-26 09:13:35 -04:00
Théo Segonds 423f1907d0 Fix zpush compatibility list link (#1076) 2017-03-26 09:09:00 -04:00
Sean Watson 86621392f6 support SSHFP records for custom domains (#1114) 2017-03-09 09:05:52 -05:00
Sean Watson 368b9c50d0 add DSA and ED25519 SSHFP records if those keys are present (#1078) 2017-03-01 08:02:41 -05:00
Jan Schulz-Hofen 3830facf78 set dovecot vsz_limit to 1/3 of available memory (#1096)
The `default_vsz_limit` is the maximum amount of virtual memory that can be allocated. It should be set *reasonably high* to avoid allocation issues with larger mailboxes. We're setting it to 1/3 of the total available memory (physical mem + swap) to be sure.

See here for discussion:
- https://www.dovecot.org/list/dovecot/2012-August/137569.html
- https://www.dovecot.org/list/dovecot/2011-December/132455.html
2017-03-01 07:59:48 -05:00
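The arithmetic behind that setting is simply one third of physical memory plus swap. The project's setup scripts are shell, so this Python version is only an illustrative sketch of the calculation:

```python
def total_mem_kib():
    totals = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            totals[key] = int(value.strip().split()[0])   # MemTotal/SwapTotal are in KiB
    return totals["MemTotal"] + totals["SwapTotal"]

vsz_limit_mib = total_mem_kib() // 3 // 1024
print(f"default_vsz_limit = {vsz_limit_mib}M")   # e.g. "default_vsz_limit = 682M"
```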
Manuel d4baac2363 at the end of setup show SHA256 tls cert hash instead of SHA1 hash (#1108) 2017-03-01 07:57:03 -05:00
NatCC f88c907a29 Update jails.conf - SSH fail2ban jail (#1105)
SSH fail2ban jail is not enabled by default and so the jail does not load.
2017-02-21 09:32:28 -05:00
Ian Beringer 89222d519a Fix date delta display for deltas greater than 1 year (#1099) 2017-02-15 18:24:32 -05:00
Dominik Murzynowski 36bef2ee16 Change password min-length to 8 characters (#1098) 2017-02-14 14:24:59 -05:00
Norman S f6b20a810f Enforce pip to use python 2.7 for boto (#1093) 2017-02-10 09:44:40 -05:00
Norman S f2ff14100e Change password min-length to four characters (#1094)
in order to correlate with the management interface.
2017-02-10 09:43:11 -05:00
Joshua Tauberer 2c86fa3755 merge v0.21c hot fix release 2017-02-01 11:26:32 -05:00
Joshua Tauberer 3c05fc94ff v0.21c 2017-02-01 11:01:11 -05:00
Joshua Tauberer 2e00530944 upgrade acme package 2017-02-01 11:01:11 -05:00
Joshua Tauberer 32d6728dc9 fix pip breaking due to setuptools/pip/cryptography problem
pip<6.1 + setuptools>=34 have a problem with packages that
try to update setuptools during installation, like cryptography.
See https://github.com/pypa/pip/issues/4253. The Ubuntu 14.04
package versions are pip 1.5.4 and setuptools 3.3. When we
install cryptography under those versions, it tries to update
setuptools to version 34, which became available about 10 days
ago, and then pip gets permanently broken with errors like
"ImportError: No module named 'packaging'".

The easiest work-around on systems that aren't already broken is
to upgrade pip and setuptools individually before we install any
package that tries to update setuptools.

Also try to detect a broken system and forcibly remove setuptools
first before trying to install/upgrade pip.

fixes #1080, fixes #1081, fixes #1086
see #1083
see https://discourse.mailinabox.email/t/error-with-pip-and-python/1880
see https://discourse.mailinabox.email/t/error-installing-mib/1875
2017-02-01 10:29:28 -05:00
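A hedged sketch of that work-around using standard pip/apt commands (the exact packages and ordering in the project's setup scripts may differ):

    # Upgrade pip and setuptools themselves before installing anything
    # (like cryptography) that tries to update setuptools during install.
    pip install --upgrade pip
    pip install --upgrade setuptools
    # On a box that is already broken, remove the conflicting setuptools first
    # (illustrative recovery step, not necessarily the exact command used):
    # apt-get purge -y python-setuptools && pip install setuptools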
wsteitz a3c71fe14f move unzip installation from owncloud to system setup (#1077) 2017-01-22 10:37:54 -05:00
Joshua Tauberer a24977a96e normalize_ip for ipv6 still not correct, was broken if box has no IPv6 address 2017-01-18 07:51:59 -05:00
Joshua Tauberer e694f57673 changelog entries 2017-01-15 11:23:59 -05:00
Joshua Tauberer cd59de6314 update roundcube to 1.2.3 2017-01-15 11:17:17 -05:00
Joshua Tauberer a081d04082 move the custom exclusive process code from utils.py into a new python package named exclusiveprocess 2017-01-15 11:02:23 -05:00
Bill Cromie 09577816f8 adds optional vagrant-cachier if you have the plugin installed (#1028) 2017-01-15 10:47:36 -05:00
Bill Cromie 2647febbf5 cardav plugin for roundcube (#1029) 2017-01-15 10:46:33 -05:00
guyzmo bd0635728c added editorconfig setup (#1037) 2017-01-15 10:44:13 -05:00
Jonathan Chun 584cfe42c4 compare IPv6 addresses correctly with normalization (#1052) 2017-01-15 10:41:12 -05:00
Michael Kroes 41601a592f Improve error handling when doing update checks (#1065)
* Added an error message to handle exceptions when the setup script is trying to determine the latest Miab version
2017-01-15 10:35:33 -05:00
Bill Cromie 18c253eeda adding a fully qualified domain name for the hostname and ignoring the .vagrant dir (#1027) 2016-12-20 16:32:06 -05:00
guyzmo 34d58fb720 Fix/rsync issues (#1036)
* Fixed issue with relative path for rsync relative names

Actually using the parsed URL `path` part, instead of doing a lousy split().
Renamed the `p` variable into something more sensible (`target`).

Fixes: #1019

* Added more verbose error messages upon rsync failures

fixes #1033

* Added command to test file listing
2016-12-17 09:29:48 -05:00
Joshua Tauberer 99d0afd650 secondary nameserver check fails if domain has custom DNS (round-robin) multiple A records
fixes #834
2016-12-07 07:02:52 -05:00
Joshua Tauberer cd717ec94e nightly TLS certificate provisioning should omit warnings about domains it can't provision for 2016-12-07 07:02:52 -05:00
Joshua Tauberer 0b7f477b96 v0.21b
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJYRevlAAoJELkgQfTBC92BqS8IALs7LHQAujNvFYnAN5OqJV0y
 3RW5rB1Y0EROWH3YDZBEk6tmZ4+rtF3U6q2aHxiJ0BwsJW9KlO8rlqVUFRdEW+US
 BZVhBhBrZLBxmnkS8JF5rCA+xcga1iIL7uSALAikpJK3PxtmMgjEvt9jc2e1VXDC
 isHMSCHCLzBVx29RFfWf+3LmSkmg5UWMCNjxKLLJcalV/hIRg+zFT7zdJdaKnI7Z
 cj56o+j75sLm+4ftM+n05Q9iTw5yGg2Qqx7bJCDpWEGJClkeuy9n//X5a63XlWbk
 IIj3YMAUZj+5wZ7zneN6A9MUMp24OLOztQRWzn7XJR18gSFkdg7RZgWA0eERC1g=
 =tkCm
 -----END PGP SIGNATURE-----

merge hot-fix release v0.21b
2016-12-05 17:36:32 -05:00
Joshua Tauberer ab2367e98a v0.21b 2016-12-05 17:36:11 -05:00
Corey Hinshaw 384c3b5e3d Change ownership of roundcube DB after running migrations (#1024)
* Fix #1023 by changing ownership of roundcube DB after running migrations

* Set mode of roundcube sqlite database during setup
2016-12-05 17:32:31 -05:00
Corey Hinshaw d91368c478 Change ownership of roundcube DB after running migrations (#1024)
* Fix #1023 by changing ownership of roundcube DB after running migrations

* Set mode of roundcube sqlite database during setup
2016-12-05 17:31:20 -05:00
wsteitz 61105b1ec3 remove all references to justtesting.email (#1003) (#1005) 2016-11-30 12:55:18 -05:00
Leo Koppelkamm b6f90e10c1 Allow larger messages to be checked by SpamAssassin (#1006)
Additionally, add the spam report headers to all emails, in order to make it easier to debug false negatives.
2016-11-30 12:55:03 -05:00
Michael Kroes 3af5e55035 Upgrade to ownCloud 9.1.2 (#1010)
* Update owncloud to 9.1.2

* Upgrade to ownCloud 9.1.2 from 9.1.1 would fail because the guid of 9.1.1 matched with the regex for the version of 8.x
2016-11-30 12:54:27 -05:00
Joshua Tauberer e03b071e8b missed changelog header 2016-11-30 12:50:38 -05:00
Joshua Tauberer df93d82d0f v0.21 released 2016-11-30 12:42:24 -05:00
Christian Koptein 59913a5e4c Added Hacker-News Reference Nov 2016 (#1014) 2016-11-28 07:24:57 -05:00
Michael Kroes c3605f6211 Check if update-manager release-upgrades configuration file is present before editing (#996) 2016-11-13 17:36:33 -05:00
Joshua Tauberer 96b3a29800 rsync backup broke other things 2016-11-12 09:59:06 -05:00
Joshua Tauberer abb6a1a070 changelog entries 2016-11-12 09:34:52 -05:00
guyzmo 041b5f883f Support for rsync+ssh backup target (#678)
* Added support for backup to a remote server using rsync

* updated web interface to get data from user
* added way to list files from server

It’s not using the “username” field of the yaml configuration
file to minimise the number of patches needed. So the username
is actually stored within the rsync URL.

Signed-off-by: Bernard `Guyzmo` Pratz <guyzmo+github@m0g.net>

* Added ssh key generation upon installation for root user.

Signed-off-by: Bernard `Guyzmo` Pratz <guyzmo+github@m0g.net>

* Removed stale blank lines, and fixed typo

Signed-off-by: Bernard `Guyzmo` Pratz <guyzmo+github@m0g.net>

* fix backup-location lines, by switching it from id to class

* Various web UI fixes

- fixed user field being shadowed;
- fixed settings reading comparison;
- fixed forgotten min-age field.

Signed-off-by: Bernard `Guyzmo` Pratz <guyzmo+github@m0g.net>

* Added SSH Public Key shown on the web interface UI

Signed-off-by: Bernard `Guyzmo` Pratz <guyzmo+github@m0g.net>

* trailing spaces.

Signed-off-by: Bernard `Guyzmo` Pratz <guyzmo+github@m0g.net>

* fixed the extraneous environment

Signed-off-by: Bernard `Guyzmo` Pratz <guyzmo+github@m0g.net>

* Updated key setup

- made key lower in bits, but stronger (using -a option),
- made ssh-keygen run in background using nohup,
- added independent key file, as id_rsa_miab,
- added ssh-options to all duplicity calls to use the id_rsa_miab keyfile,
- changed path to the public key display

Signed-off-by: Bernard `Guyzmo` Pratz <guyzmo+github@m0g.net>

* added rsync options for ssh identity support

Signed-off-by: Bernard `Guyzmo` Pratz <guyzmo+github@m0g.net>

* removed strict host checking for all backup operations

Signed-off-by: Bernard `Guyzmo` Pratz <guyzmo+github@m0g.net>

* Remove nohup from ssh-keygen so errors aren't hidden. Also only generate a key if none exists yet

* Add trailing slash when checking a remote backup. Also check if we actually can read the remote size

* Factorisation of the repeated rsync/ssh options

cf https://github.com/mail-in-a-box/mailinabox/pull/678#discussion_r81478919

* Updated message SSH key creation

https://github.com/mail-in-a-box/mailinabox/pull/678#discussion_r81478886
2016-11-12 09:28:55 -05:00
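The key-setup steps listed in the commit above could look roughly like the following; the id_rsa_miab path comes from the commit message itself, while the specific flags and remote paths here are only an illustration of the approach (dedicated identity file, ssh-keygen's -a rounds, relaxed host-key checking):

    # Generate a dedicated backup identity only if one does not exist yet.
    if [ ! -f /root/.ssh/id_rsa_miab ]; then
        ssh-keygen -t rsa -b 2048 -a 100 -f /root/.ssh/id_rsa_miab -N '' -q
    fi
    # Reuse the same SSH options for both duplicity and rsync.
    SSH_OPTS="-i /root/.ssh/id_rsa_miab -oStrictHostKeyChecking=no"
    rsync -a -e "ssh $SSH_OPTS" /home/user-data/backup/encrypted/ user@backuphost:backups/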
yodax 3b78a8d9d6 If ufw isn't installed on the machine the status checks shouldn't fail 2016-11-12 09:25:34 -05:00
Scott Bronson 6ea1a06a12 suppress Ubuntu's upgrade prompts (#992)
On every login we're notified:

  New release '16.04.1 LTS' available.
  Run 'do-release-upgrade' to upgrade to it.

Disable this so that an eager yet inattentive admin
doesn't accidentally follow these instructions.
2016-11-08 21:41:02 -05:00
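One plausible way to silence that prompt, guarding for the case where the file is absent (the path is the standard update-manager location; whether setup edits it exactly this way is not shown here):

    # Tell update-manager not to advertise new LTS releases at login,
    # but only touch the configuration file if it exists.
    if [ -f /etc/update-manager/release-upgrades ]; then
        sed -i 's/^Prompt=.*/Prompt=never/' /etc/update-manager/release-upgrades
    fi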
Michael Kroes 2b00478b8b Check if apc is disabled during ownCloud setup, if so enable it (#983) 2016-10-24 07:59:34 -04:00
Michael Kroes 155bcfc654 Add ownCloud 9.1.1 to the changelog (#984) 2016-10-21 10:12:46 -04:00
Tristan Hill 4b07a6aa8f disable nested checker checks (#972)
fixes #967
2016-10-18 14:15:33 -04:00
Michael Kroes 2151d81453 update to ownCloud 9.1.1 (with intermediate upgrades) (#894)
[this is a squashed merge from-]

* Install owncloud 9.1 and provide an upgrade path from 8.2. This also disables memcached and goes with apc. The upgrade fails with memcached.

* Remove php apc setting

* Add dav migrations for each user

* Add some comments to the code

* When upgrading owncloud from 8.2.3 to 9.1.0 the backup of 8.2.3 was overwritten when going from 9.0 to 9.1

* Add upgrade path from 8.1.1. Only do an upgrade check if owncloud was previously installed.

* Stop php5-fpm before owncloud upgrade to prevent database locks

* Fix fail2ban tests for owncloud 9

* When upgrading owncloud copy the database to the user-data/owncloud-backup directory

* Remove unneeded unzip directives during owncloud extraction. Directory is removed beforehand so a normal extraction is fine

* Improve backup of owncloud installation and provide a post installation restore script. Update the owncloud version number to 9.1.1. Update the calendar and contacts apps to the latest versions

* Separate the ownCloud upgrades visually in the console output.
2016-10-18 06:04:13 -04:00
Michael Kroes fd6226187a lower memory requirements to 512MB, display a warning if system memory is below 768MB. (#952) 2016-10-15 15:41:25 -04:00
rxcomm bbe27df413 SSHFP record creation should scan nonstandard SSH port if necessary (#974)
* sshfp records from nonstandard ports

If port 22 is not open, dns_update.py will not create SSHFP records
because it only scans port 22 for keys. This commit modifies
dns_update.py to parse the sshd_config file for open ports, and
then obtains keys from one of them (even if port 22 is not open).

* modified test of s per JoshData request

* edit CHANGELOG per JoshData

* fix typo
2016-10-15 15:36:13 -04:00
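As a rough illustration of the approach (ssh-keyscan can emit SSHFP records directly; reading the first Port directive as shown is a simplification of what dns_update.py actually does):

    # Read the first Port directive from sshd_config (default to 22),
    # then ask ssh-keyscan to print SSHFP records for that port.
    port=$(awk '/^Port /{print $2; exit}' /etc/ssh/sshd_config)
    ssh-keyscan -p "${port:-22}" -D localhost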
Michael Kroes a658abc95f Fix status checks for ufw when the system doesn't support iptables (#961) 2016-10-08 14:35:19 -04:00
Joshua Tauberer 9331dbc519 merge z-push-from-name #940 2016-10-08 14:32:57 -04:00
Steve Gregg 8b5eba21c0 Correct typo of "PRIORITY" in the template (#965) 2016-10-05 18:43:50 -04:00
yodax da5497cd1c Update changelog entries 2016-09-28 08:37:24 +02:00
Michael Kroes a27ec68467 Merge pull request #951 from MariusBluem/remove-certificate-providers
Remove Certificate Providers / Fix #950
2016-09-28 08:33:11 +02:00
Marius Blüm 3ac4b8aca8
Remove Certificate Providers / Fix #950
Signed-off-by: Marius Blüm <marius@lineone.io>
2016-09-27 15:06:50 +02:00
Michael Kroes 02feeafe6a change bayes_file_mode to world writable (merges #931)
fixes #534, again, hopefully
2016-09-23 15:14:21 -04:00
Marius Blüm 5f0376bfbf Fix typo in alias-page, fixes #943 (merges #949)
Signed-off-by: Marius Blüm <marius@lineone.io>
2016-09-23 15:11:37 -04:00
Joshua Tauberer 4e4fe90fc7 v0.20 2016-09-23 07:49:13 -04:00
Joshua Tauberer 3cd5a6eee7 changelog entries 2016-09-23 07:46:01 -04:00
Joshua Tauberer c26bc841a2 more for dnspython exception with IPv6 addresses
fixes #945, corrects prev commit (#947) in case of multiple AAAA records, adds changelog
2016-09-23 07:41:24 -04:00
Mathis Hoffmann 163daea41c dnspython exception with IPv6 addresses
see #945, merges #947
2016-09-23 07:35:53 -04:00
Corey Hinshaw d8316119eb Use Roundcube identities to populate Z-Push From name 2016-09-19 11:10:44 -04:00
Scott Bronson 102b2d46ab typo fix: seconday -> secondary (#939) 2016-09-18 08:10:49 -04:00
Joshua Tauberer 58541c467f merge #936 - fix wonky free disk space messages - from cmsirbu/master
fix status_checks.py free disk space reporting, fixes #932
2016-09-16 07:31:57 -04:00
cs@twoflower 00bd23eb04 fix status_checks.py free disk space reporting #932 2016-09-15 17:01:21 +01:00
Joshua Tauberer d73d1c6900 changelog typos 2016-08-24 07:47:55 -04:00
Joshua Tauberer fc0abd5b4d confirm that fail2ban is protecting pop3s, closes #629 2016-08-22 19:18:23 -04:00
Joshua Tauberer 27b4edfc76 v0.19b
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJXuHvJAAoJELkgQfTBC92B2IsIAJl+tQkkVp5cu4zuSLOpHj73
 LFGGCrGTSMwuyNbnklkLmLIfRxlmNfHNfQqHYhxJQq7JVLuDRJS2rTJnSWGg4PuE
 vyrjOEFNNqFp9cy00j6NMUUcJa4kte4cvMg3Sonz7JkVwS3fxp7hSgZknYOjlLvh
 R/FmrqVhpDtTZRtMjcQaCtCTWUEETYFLsJZ2iZkIlpGhoxPGEhKZquNrT0s3qrNv
 Rwf6O3i9RIS/bOu2lWI+ymdStPVJnn+deRTBWPpsxXdNC/NG9+gWiqGgRnjTBbMO
 uzH1hYct+J6TWeNpesECfMMjTOZ+T7yrRJc1s9ThuLokyAlo9yf4E5YFziZ0hi4=
 =JxNp
 -----END PGP SIGNATURE-----

merge v0.19b hot fix release
2016-08-20 11:50:26 -04:00
Joshua Tauberer ba75ff7820 v0.19b 2016-08-20 11:48:08 -04:00
Joshua Tauberer a14b17794b simplify how munin-cgi-graph is called to reduce the attack surface area
Seems like if REQUEST_METHOD is set to GET, then we can drop two redundant ways the query string is given. munin-cgi-graph itself reads only the environment variables, but its calls to Perl's CGI::param will look at the command line if REQUEST_METHOD is not set; otherwise it uses environment variables, the way CGI normally works.

Since this is all behind admin auth anyway, there isn't a public vulnerability. #914 was opened without comment, which led me to notice the redundancy and worry about a vulnerability, before I realized this is admin-only anyway.

The vulnerability was created by 6d6f3ea391.

See #914.

This is the v0.19b hotfix commit.
2016-08-20 11:47:44 -04:00
Joshua Tauberer 35a360ef0b simplify how munin-cgi-graph is called to reduce the attack surface area
Seems like if REQUEST_METHOD is set to GET, then we can drop two redundant ways the query string is given. munin-cgi-graph itself reads only the environment variables, but its calls to Perl's CGI::param will look at the command line if REQUEST_METHOD is not set; otherwise it uses environment variables, the way CGI normally works.

Since this is all behind admin auth anyway, there isn't a public vulnerability. #914 was opened without comment, which led me to notice the redundancy and worry about a vulnerability, before I realized this is admin-only anyway.
2016-08-19 12:42:43 -04:00
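In other words, the graph script can be driven purely through CGI-style environment variables, roughly like this (the path is where Ubuntu typically installs munin-cgi-graph, and the PATH_INFO/QUERY_STRING values are placeholders):

    # Drive munin-cgi-graph through CGI-style environment variables only,
    # passing nothing on the command line.
    export REQUEST_METHOD=GET
    export PATH_INFO="/box.example.com/load-day.png"   # placeholder graph path
    export QUERY_STRING=""                              # placeholder query
    /usr/lib/munin/cgi/munin-cgi-graph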
Joshua Tauberer 86457e5bc4 merge: fail2ban broke, released v0.19a 2016-08-18 08:39:31 -04:00
Joshua Tauberer 8cf2e468bd [merge #900] Adding a Code of Conduct
Merge pull request #900 from mail-in-a-box/code_of_conduct
2016-08-15 20:10:37 -04:00
Joshua Tauberer 440a545010 some improvements suggested by the community 2016-08-15 20:09:05 -04:00
Marius Blüm 942bcfc7c5 Update Bootstrap to 3.3.7 (#909)
Signed-off-by: Marius Blüm <marius@lineone.io>
2016-08-15 18:06:12 -04:00
ReadmeCritic 4f2d16a31d Update README URLs based on HTTP redirects (#908) 2016-08-15 11:07:09 -04:00
Joshua Tauberer e9368de462 [merge #902] Upgrade ownCloud from 8.2.3 to 8.2.7
Merge https://github.com/mar1u5/mailinabox

fixes #901
2016-08-13 17:36:08 -04:00
Marius Blüm 6f165d0aeb
Update Changelog
Signed-off-by: Marius Blüm <marius@lineone.io>
2016-08-09 00:58:10 +02:00
Marius Blüm 6c22c0533e
Upgrade ownCloud from 8.2.3 to 8.2.7
Signed-off-by: Marius Blüm <marius@lineone.io>
2016-08-09 00:53:15 +02:00
Joshua Tauberer d38b732b0a add a Code of Conduct 2016-08-08 08:19:42 -04:00
106 changed files with 9675 additions and 3756 deletions

.editorconfig Normal file

@ -0,0 +1,30 @@
# EditorConfig helps developers define and maintain consistent
# coding styles between different editors and IDEs
# editorconfig.org
root = true
[*]
indent_style = space
indent_size = 4
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
[Makefile]
indent_style = tab
indent_size = 4
[Vagrantfile]
indent_size = 2
[*.rb]
indent_size = 2
[*.py]
indent_style = tab
[*.js]
indent_size = 2

.gitignore vendored

@ -4,3 +4,6 @@ management/__pycache__/
tools/__pycache__/
externals/
.env
.vagrant
api/docs/api-docs.html
*.code-workspace

CHANGELOG.md

@ -1,6 +1,785 @@
CHANGELOG
=========
Version 68 (April 1, 2024)
--------------------------
Package updates:
* Roundcube updated to version 1.6.6.
* Nextcloud is updated to version 26.0.12.
Mail:
* Updated postfix's configuration to the long-term fix for guarding against SMTP smuggling (https://www.postfix.org/smtp-smuggling.html).
Control Panel:
* Improved reporting of Spamhaus response codes.
* Improved detection of SSH port.
* Fixed an error if last saved status check results were corrupted.
* Other minor fixes.
Other:
* fail2ban is updated to see "HTTP/2.0" requests to munin also.
* Internal improvements to the code to make it more reliable and readable.
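The long-term fix referenced in the Mail item above is a single Postfix setting documented at the linked postfix.org page; applied by hand it would look roughly like this (a sketch, assuming a stock Postfix install):

    # Long-term SMTP smuggling protection per https://www.postfix.org/smtp-smuggling.html
    postconf -e smtpd_forbid_bare_newline=normalize
    systemctl reload postfix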
Version 67 (December 22, 2023)
------------------------------
* Guard against a newly published vulnerability called SMTP Smuggling. See https://sec-consult.com/blog/detail/smtp-smuggling-spoofing-e-mails-worldwide/.
Version 66 (December 17, 2023)
------------------------------
* Some users reported an error installing Mail-in-a-Box related to the virtualenv command. This is hopefully fixed.
* Roundcube is updated to 1.6.5 fixing a security vulnerability.
* For Mail-in-a-Box developers, a new setup variable is added to pull the source code from a different repository.
Version 65 (October 27, 2023)
-----------------------------
* Roundcube updated to 1.6.4 fixing a security vulnerability.
* zpush.sh updated to version 2.7.1.
* Fixed a typo in the control panel.
Version 64 (September 2, 2023)
------------------------------
* Fixed broken installation when upgrading from Mail-in-a-Box version 56 (Nextcloud 22) and earlier because of an upstream packaging issue.
* Fixed backups to work with the latest duplicity package which was not backwards compatible.
* Fixed setting B2 as a backup target with a slash in the application key.
* Turned off OpenDMARC diagnostic reports sent in response to incoming mail.
* Fixed some crashes when using an unreleased version of Mail-in-a-Box.
* Added z-push administration scripts.
Version 63 (July 27, 2023)
--------------------------
* Nextcloud updated to 25.0.7.
Version 62 (May 20, 2023)
-------------------------
Package updates:
* Nextcloud updated to 23.0.12 (and its apps also updated).
* Roundcube updated to 1.6.1.
* Z-Push to 2.7.0, which has compatibility for Ubuntu 22.04, so it works again.
Mail:
* Roundcube's password change page is now working again.
Control panel:
* Allow setting the backup location's S3 region name for non-AWS S3-compatible backup hosts.
* Control panel pages can be opened in a new tab/window and bookmarked and browser history navigation now works.
* Add a Copy button to put the rsync backup public key on clipboard.
* Allow secondary DNS xfr: items added in the control panel to be hostnames too.
* Fixed issue where ssh-keygen fails when IPv6 is disabled.
* Fixed issue opening munin reports.
* Fixed report formatting in status emails sent to the administrator.
Version 61.1 (January 28, 2023)
-------------------------------
* Fixed rsync backups not working with the default port.
* Reverted "Improve error messages in the management tools when external command-line tools are run." because of the possibility of user secrets being included in error messages.
* Fix for TLS certificate SHA fingerprint not being displayed during setup.
Version 61 (January 21, 2023)
-----------------------------
System:
* fail2ban didn't start after setup.
Mail:
* Disable Roundcube password plugin since it was corrupting the user database.
Control panel:
* Fix changing existing backup settings when the rsync type is used.
* Allow setting a custom port for rsync backups.
* Fixes to DNS lookups during status checks when there are timeouts, enforce timeouts better.
* A new check is added to ensure fail2ban is running.
* Fixed a color.
* Improve error messages in the management tools when external command-line tools are run.
Version 60.1 (October 30, 2022)
-------------------------------
* A setup issue where the DNS server nsd isn't running at the end of setup is (hopefully) fixed.
* Nextcloud is updated to 23.0.10 (contacts to 4.2.2, calendar to 3.5.1).
Version 60 (October 11, 2022)
-----------------------------
This is the first release for Ubuntu 22.04.
**Before upgrading**, you must **first upgrade your existing Ubuntu 18.04 box to Mail-in-a-Box v0.51 or later**, if you haven't already done so. That may not be possible after Ubuntu 18.04 reaches its end of life in April 2023, so please complete the upgrade well before then. (If you are not using Nextcloud's contacts or calendar, you can migrate to the latest version of Mail-in-a-Box from any previous version.)
For complete upgrade instructions, see:
https://discourse.mailinabox.email/t/version-60-for-ubuntu-22-04-is-about-to-be-released/9558
No major features of Mail-in-a-Box have changed in this release, although some minor fixes were made.
With the newer version of Ubuntu the following software packages we use are updated:
* dovecot is upgraded to 2.3.16, postfix to 3.6.4, opendmarc to 1.4 (which adds ARC-Authentication-Results headers), and spampd to 2.53 (alleviating a mail delivery rate limiting bug).
* Nextcloud is upgraded to 23.0.4 (contacts to 4.2.0, calendar to 3.5.0).
* Roundcube is upgraded to 1.6.0.
* certbot is upgraded to 1.21 (via the Ubuntu repository instead of a PPA).
* fail2ban is upgraded to 0.11.2.
* nginx is upgraded to 1.18.
* PHP is upgraded from 7.2 to 8.0.
Also:
* Roundcube's login session cookie was tightened. Existing sessions may require a manual logout.
* Moved Postgrey's database under $STORAGE_ROOT.
Version 57a (June 19, 2022)
---------------------------
* The Backblaze backups fix posted in Version 57 was incomplete. It's now fixed.
Version 57 (June 12, 2022)
--------------------------
Setup:
* Fixed issue upgrading from Mail-in-a-Box v0.40-v0.50 because of a changed URL that Nextcloud is downloaded from.
Backups:
* Fixed S3 backups which broke with duplicity 0.8.23.
* Fixed Backblaze backups which broke with latest b2sdk package by rolling back its version.
Control panel:
* Fixed spurious changes in system status checks messages by sorting DNSSEC DS records.
* Fixed fail2ban lockout over IPv6 from excessive loads of the system status checks.
* Fixed an incorrect IPv6 system status check message.
Version 56 (January 19, 2022)
-----------------------------
Software updates:
* Roundcube updated to 1.5.2 (from 1.5.0), and the persistent_login and CardDAV (to 4.3.0 from 3.0.3) plugins are updated.
* Nextcloud updated to 20.0.14 (from 20.0.8), contacts to 4.0.7 (from 3.5.1), and calendar to 3.0.4 (from 2.2.0).
Setup:
* Fixed failed setup if a previous attempt failed while updating Nextcloud.
Control panel:
* Fixed a crash if a custom DNS entry is not under a zone managed by the box.
* Fix DNSSEC instructions typo.
Other:
* Set systemd journald log retention to 10 days (from no limit) to reduce disk usage.
* Fixed log processing for submission lines that have a sasl_sender or other extra information.
* Fix DNS secondary nameserver refresh failure retry period.
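A minimal sketch of the journald retention change mentioned in the Other section of Version 56 above, assuming a standard systemd layout (the drop-in file name is illustrative):

    # Cap systemd journal retention at 10 days to reduce disk usage.
    mkdir -p /etc/systemd/journald.conf.d
    printf '[Journal]\nMaxRetentionSec=10day\n' > /etc/systemd/journald.conf.d/retention.conf
    systemctl restart systemd-journald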
Version 55 (October 18, 2021)
-----------------------------
Mail:
* "SMTPUTF8" is now disabled in Postfix. Because Dovecot still does not support SMTPUTF8, incoming mail to internationalized addresses was bouncing. This fixes incoming mail to internationalized domains (which was probably working prior to v0.40), but it will prevent sending outbound mail to addresses with internationalized local-parts.
* Upgraded to Roundcube 1.5.
Control panel:
* The control panel menus are now hidden before login, but now non-admins can log in to access the mail and contacts/calendar instruction pages.
* The login form now disables browser autocomplete in the two-factor authentication code field.
* After logging in, the default page is now a fast-loading welcome page rather than the slow-loading system status checks page.
* The backup retention period option now displays for B2 backup targets.
* The DNSSEC DS record recommendations are cleaned up and now recommend changing records that use SHA1.
* The Munin monitoring pages no longer require a separate HTTP basic authentication login and can be used if two-factor authentication is turned on.
* Control panel logins are now tied to a session backend that allows true logouts (rather than an encrypted cookie).
* Failed logins no longer directly reveal whether the email address corresponds to a user account.
* Browser dark mode now inverts the color scheme.
Other:
* Fail2ban's IPv6 support is enabled.
* The mail log tool now doesn't crash if there are email addresses in log messages with invalid UTF-8 characters.
* Additional nsd.conf files can be placed in /etc/nsd.conf.d.
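The last item above means extra NSD configuration can be dropped in alongside the generated config. A hypothetical example, using the /etc/nsd.conf.d path named in that entry:

    # Hypothetical drop-in: adjust NSD's verbosity without touching the
    # generated configuration; the file name is arbitrary.
    mkdir -p /etc/nsd.conf.d
    cat > /etc/nsd.conf.d/99-local.conf <<'EOF'
    server:
        verbosity: 2
    EOF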
v0.54 (June 20, 2021)
---------------------
Mail:
* Forwarded mail using mail filter rules (in Roundcube; "sieve" rules) stopped re-writing the envelope address at some point, causing forwarded mail to often be marked as spam by the final recipient. These forwards will now re-write the envelope as the Mail-in-a-Box user receiving the mail to comply with SPF/DMARC rules.
* Sending mail is now possible on port 465 with the "SSL" or "TLS" option in mail clients, and this is now the recommended setting. Port 587 with STARTTLS remains available but should be avoided when configuring new mail clients.
* Roundcube's login cookie is updated to use a new encryption algorithm (AES-256-CBC instead of DES-EDE-CBC).
DNS:
* The ECDSAP256SHA256 DNSSEC algorithm is now available. If a DS record is set for any of your domain names that have DNS hosted on your box, you will be prompted by status checks to update the DS record at your convenience.
* Null MX records are added for domains that do not serve mail.
Contacts/calendar:
* Updated Nextcloud to 20.0.8, contacts to 3.5.1, calendar to 2.2.0 (#1960).
Control panel:
* Fixed a crash in the status checks.
* Small wording improvements.
Setup:
* Minor improvements to the setup scripts.
v0.53a (May 8, 2021)
--------------------
The download URL for Z-Push has been revised because the old URL stopped working.
v0.53 (April 12, 2021)
----------------------
Software updates:
* Upgraded Roundcube to version 1.4.11 addressing a security issue, and its desktop notifications plugin.
* Upgraded Z-Push (for Exchange/ActiveSync) to version 2.6.2.
Control panel:
* Backblaze B2 is now a supported backup protocol.
* Fixed an issue in the daily mail reports.
* Sort the Custom DNS by zone and qname, and add an option to go back to the old sort order (creation order).
Mail:
* Enable sending DMARC failure reports to senders that request them.
Setup:
* Fixed error when upgrading from Nextcloud 13.
v0.52 (January 31, 2021)
------------------------
Software updates:
* Upgraded Roundcube to version 1.4.10.
* Upgraded Z-Push to 2.6.1.
Mail:
* Incoming emails with SPF/DKIM/DMARC failures now get a higher spam score, and these messages are more likely to appear in the junk folder, since they are often spam/phishing.
* Fixed the MTA-STS policy file's line endings.
Control panel:
* A new Download button in the control panel's External DNS page can be used to download the required DNS records in zonefile format.
* Fixed the problem when the control panel would report DNS entries as Not Set by increasing a bind query limit.
* Fixed a control panel startup bug on some systems.
* Improved an error message on a DNS lookup timeout.
* A typo was fixed.
DNS:
* The TTL for NS records has been increased to 1 day to comply with some registrar requirements.
System:
* Nextcloud's photos, dashboard, and activity apps are disabled since we only support contacts and calendar.
v0.51 (November 14, 2020)
-------------------------
Software updates:
* Upgraded Nextcloud from 17.0.6 to 20.0.1 (with Contacts from 3.3.0 to 3.4.1 and Calendar from 2.0.3 to 2.1.2)
* Upgraded Roundcube to version 1.4.9.
Mail:
* The MTA-STS max_age value was increased to the normal one week.
Control panel:
* Two-factor authentication can now be enabled for logins to the control panel. However, keep in mind that many online services (including domain name registrars, cloud server providers, and TLS certificate providers) may allow an attacker to take over your account or issue a fraudulent TLS certificate with only access to your email address, and this new two-factor authentication does not protect access to your inbox. It therefore remains very important that user accounts with administrative email addresses have strong passwords.
* TLS certificate expiry dates are now shown in ISO8601 format for clarity.
v0.50 (September 25, 2020)
--------------------------
Setup:
* When upgrading from versions before v0.40, setup will now warn that ownCloud/Nextcloud data cannot be migrated rather than failing the installation.
Mail:
* An MTA-STS policy for incoming mail is now published (in DNS and over HTTPS) when the primary hostname and email address domain both have a signed TLS certificate installed, allowing senders to know that an encrypted connection should be enforced.
* The per-IP connection limit to the IMAP server has been doubled to allow more devices to connect at once, especially with multiple users behind a NAT.
DNS:
* autoconfig and autodiscover subdomains and CalDAV/CardDAV SRV records are no longer generated for domains that don't have user accounts since they are unnecessary.
* IPv6 addresses can now be specified for secondary DNS nameservers in the control panel.
TLS:
* TLS certificates are now provisioned in groups by parent domain to limit easy domain enumeration and make provisioning more resilient to errors for particular domains.
Control panel:
* The control panel API is now fully documented at https://mailinabox.email/api-docs.html.
* User passwords can now have spaces.
* Status checks for automatic subdomains have been moved into the section for the parent domain.
* Typo fixed.
Web:
* The default web page served on fresh installations now adds the `noindex` meta tag.
* The HSTS header is revised to also be sent on non-success responses.
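For context on the MTA-STS item in v0.50's Mail section above: the mechanism (RFC 8461) pairs a DNS TXT record with a small policy file served over HTTPS. A generic sketch, with example.com standing in for a real domain and paths chosen only for illustration:

    # DNS side: a TXT record advertising that a policy exists, e.g.
    #   _mta-sts.example.com.  IN TXT  "v=STSv1; id=20200925T000000"
    # HTTPS side: the policy file served from the mta-sts subdomain.
    mkdir -p /var/www/mta-sts/.well-known
    cat > /var/www/mta-sts/.well-known/mta-sts.txt <<'EOF'
    version: STSv1
    mode: enforce
    mx: box.example.com
    max_age: 604800
    EOF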
v0.48 (August 26, 2020)
-----------------------
Security fixes:
* Roundcube is updated to version 1.4.8 fixing additional cross-site scripting (XSS) vulnerabilities.
v0.47 (July 29, 2020)
---------------------
Security fixes:
* Roundcube is updated to version 1.4.7 fixing a cross-site scripting (XSS) vulnerability with HTML messages with malicious svg/namespace (CVE-2020-15562) (https://roundcube.net/news/2020/07/05/security-updates-1.4.7-1.3.14-and-1.2.11).
* SSH connections are now rate-limited at the firewall level (in addition to fail2ban).
v0.46 (June 11, 2020)
---------------------
Security fixes:
* Roundcube is updated to version 1.4.6 (https://roundcube.net/news/2020/06/02/security-updates-1.4.5-and-1.3.12).
v0.45 (May 16, 2020)
--------------------
Security fixes:
* Fix missing brute force login protection for Roundcube logins.
Software updates:
* Upgraded Roundcube from 1.4.2 to 1.4.4.
* Upgraded Nextcloud from 17.0.2 to 17.0.6 (with Contacts from 3.1.6 to 3.3.0 and Calendar from 1.7.1 to v2.0.3)
* Upgraded Z-Push to 2.5.2.
System:
* Nightly backups now occur on a random minute in the 3am hour (in the system time zone). The minute is chosen during Mail-in-a-Box installation/upgrade and remains the same until the next upgrade.
* Fix for mail log statistics report on leap days.
* Fix Mozilla autoconfig useGlobalPreferredServer setting.
Web:
* Add a new hidden feature to set nginx alias in www/custom.yaml.
Setup:
* Improved error handling.
v0.44 (February 15, 2020)
-------------------------
System:
* TLS settings have been upgraded following Mozilla's recommendations for servers. TLS1.2 and 1.3 are now the only supported protocols for web, IMAP, and SMTP (submission).
* Fixed an issue starting services when Mail-in-a-Box isn't on the root filesystem.
* Changed some performance options affecting Roundcube and Nextcloud.
Software updates:
* Upgraded Nextcloud from 15.0.8 to 17.0.2 (with Contacts from 3.1.1 to 3.1.6 and Calendar from 1.6.5 to 1.7.1)
* Upgraded Z-Push to 2.5.1.
* Upgraded Roundcube from 1.3.10 to 1.4.2 and changed the default skin (theme) to Elastic.
Control panel:
* The Custom DNS list of records is now sorted.
* The emails that report TLS provisioning results now has a less scary subject line.
Mail:
* The updated whitelist for greylisting was being fetched every day instead of monthly.
* OpenDKIM signing has been changed to 'relaxed' mode so that some old mail lists that forward mail can do so.
DNS:
* Automatic autoconfig.* subdomains can now be suppressed with custom DNS records.
* DNS zone transfer now works with IPv6 addresses.
Setup:
* An Ubuntu package source was missing on systems where it defaults off.
v0.43 (September 1, 2019)
-------------------------
Security fixes:
* A security issue was discovered in rsync backups. If you have enabled rsync backups, the file `id_rsa_miab` may have been copied to your backup destination. This file can be used to access your backup destination. If the file was copied to your backup destination, we recommend that you delete the file on your backup destination, delete `/root/.ssh/id_rsa_miab` on your Mail-in-a-Box, then re-run Mail-in-a-Box setup, and re-configure your SSH public key at your backup destination according to the instructions in the Mail-in-a-Box control panel.
* Brute force attack prevention was missing for the managesieve service.
Setup:
* Nextcloud was not upgraded properly after restoring Mail-in-a-Box from a backup from v0.40 or earlier.
Mail:
* Upgraded Roundcube to 1.3.10.
* Fetch an updated whitelist for greylisting on a monthly basis to reduce the number of delayed incoming emails.
Control panel:
* When using secondary DNS, it is now possible to specify a subnet range with the `xfr:` option.
* Fixed an issue when the secondary DNS option is used and the secondary DNS hostname resolves to multiple IP addresses.
* Fix a bug in how a backup configuration error is shown.
v0.42b (August 3, 2019)
-----------------------
Changes:
* Decreased the minimum supported RAM to 502 MB.
* Improved mail client autoconfiguration.
* Added support for S3-compatible backup services besides Amazon S3.
* Fixed the control panel login page to let LastPass save passwords.
* Fixed an error in the user privileges API.
* Silenced some spurious messages.
Software updates:
* Upgraded Roundcube from 1.3.8 to 1.3.9.
* Upgraded Nextcloud from 14.0.6 to 15.0.8 (with Contacts from 2.1.8 to 3.1.1 and Calendar from 1.6.4 to 1.6.5).
* Upgraded Z-Push from 2.4.4 to 2.5.0.
Note that v0.42 (July 4, 2019) was pulled shortly after it was released to fix a Nextcloud upgrade issue.
v0.41 (February 26, 2019)
-------------------------
System:
* Missing brute force login attack prevention (fail2ban) filters which stopped working on Ubuntu 18.04 were added back.
* Upgrades would fail if Mail-in-a-Box moved to a different directory in `systemctl link`.
Mail:
* Incoming messages addressed to more than one local user were rejected because of a bug in spampd packaged by Ubuntu 18.04. A workaround was added.
Contacts/Calendar:
* Upgraded Nextcloud from 13.0.6 to 14.0.6.
* Upgraded Contacts from 2.1.5 to 2.1.8.
* Upgraded Calendar from 1.6.1 to 1.6.4.
v0.40 (January 12, 2019)
------------------------
This is the first release for Ubuntu 18.04. This version and versions going forward can **only** be installed on Ubuntu 18.04; however, upgrades of existing Ubuntu 14.04 boxes to the latest version supporting Ubuntu 14.04 (v0.30) continue to work as normal.
When **upgrading**, you **must first upgrade your existing Ubuntu 14.04 Mail-in-a-Box box** to the latest release supporting Ubuntu 14.04 --- that's v0.30 --- before you migrate to Ubuntu 18.04. If you are running an older version of Mail-in-a-Box which has an old version of ownCloud or Nextcloud, you will *not* be able to upgrade your data because older versions of ownCloud and Nextcloud that are required to perform the upgrade *cannot* be run on Ubuntu 18.04. To upgrade from Ubuntu 14.04 to Ubuntu 18.04, you **must create a fresh Ubuntu 18.04 machine** before installing this version. In-place upgrades of servers are not supported. Since Ubuntu's support for Ubuntu 14.04 has almost ended, everyone is encouraged to create a new Ubuntu 18.04 machine and migrate to it.
For complete upgrade instructions, see:
https://discourse.mailinabox.email/t/mail-in-a-box-version-v0-40-and-moving-to-ubuntu-18-04/4289
The changelog for this release follows.
Setup:
* Mail-in-a-Box now targets Ubuntu 18.04 LTS, which will have support from Ubuntu through 2022.
* Some of the system packages updated by virtue of using Ubuntu 18.04 include postfix (2.11=>3.3), nsd (4.0=>4.1), nginx (1.4=>1.14), PHP (7.0=>7.2), Python (3.4=>3.6), fail2ban (0.8=>0.10), Duplicity (0.6=>0.7).
* [Unofficial Bash Strict Mode](http://redsymbol.net/articles/unofficial-bash-strict-mode/) is turned on for setup, which might catch previously uncaught issues during setup.
Mail:
* IMAP server-side full text search is no longer supported because we were using a custom-built `dovecot-lucene` package that we are no longer maintaining.
* Sending email is now disabled on port 25 --- you must log in to port 587 to send email, per the long-standing mail instructions.
* Greylisting may delay more emails from new senders. We were using a custom-built postgrey package previously that whitelisted sending domains in dnswl.org, but we are no longer maintaining that package.
v0.30 (January 9, 2019)
-----------------------
Setup:
* Update to Roundcube 1.3.8 and the CardDAV plugin to 3.0.3.
* Add missing rsyslog package to install line since some OS images don't have it installed by default.
* A log file for nsd was added.
Control Panel:
* The users page now documents that passwords should only have ASCII characters to prevent character encoding mismatches between clients and the server.
* The users page no longer shows user mailbox sizes because this was extremely slow for very large mailboxes.
* The Mail-in-a-Box version is now shown in the system status checks even when the new-version check is disabled.
* The aliases page now warns that aliases should not be used to forward mail off of the box. Mail filters within Roundcube are better for that.
* The explanation of greylisting has been improved.
v0.29 (October 25, 2018)
------------------------
* Starting with v0.28, TLS certificate provisioning wouldn't work on new boxes until the mailinabox setup command was run a second time because of a problem with the non-interactive setup.
* Update to Nextcloud 13.0.6.
* Update to Roundcube 1.3.7.
* Update to Z-Push 2.4.4.
* Backup dates listed in the control panel now use an internationalized format.
v0.28 (July 30, 2018)
---------------------
System:
* We now use EFF's `certbot` to provision TLS certificates (from Let's Encrypt) instead of our home-grown ACME library.
Contacts/Calendar:
* Fix for Mac OS X autoconfig of the calendar.
Setup:
* Installing Z-Push broke because of what looks like a change or problem in their git server HTTPS certificate. That's fixed.
v0.27 (June 14, 2018)
---------------------
Mail:
* A report of box activity, including sent/received mail totals and logins by user, is now emailed to the box's administrator user each week.
* Update Roundcube to version 1.3.6 and Z-Push to version 2.3.9.
Control Panel:
* The undocumented feature for proxying web requests to another server now sets X-Forwarded-For.
v0.26c (February 13, 2018)
--------------------------
Setup:
* Upgrades from v0.21c (February 1, 2017) or earlier were broken because the intermediate versions of ownCloud used in setup were no longer available from ownCloud.
* Some download errors had no output --- there is more output on error now.
Control Panel:
* The background service for the control panel was not restarting on updates, leaving the old version running. This was broken in v0.26 and is now fixed.
* Installing your own TLS/SSL certificate had been broken since v0.24 because the new version of openssl became stricter about CSR generation parameters.
* Fixed password length help text.
Contacts/Calendar:
* Upgraded Nextcloud from 12.0.3 to 12.0.5.
v0.26b (January 25, 2018)
-------------------------
* Fix new installations which broke at the step of asking for the user's desired email address, which was broken by v0.26's changes related to the control panel.
* Fix the provisioning of TLS certificates by pinning a Python package we rely on (acme) to an earlier version because our code isn't yet compatible with its current version.
* Reduce munin's log_level from debug to warning to prevent massive log files.
v0.26 (January 18, 2018)
------------------------
Security:
* HTTPS, IMAP, and POP's TLS settings have been updated to Mozilla's intermediate cipher list recommendation. Some extremely old devices that use less secure TLS ciphers may no longer be able to connect to IMAP/POP.
* Updated web HSTS header to use longer six month duration.
Mail:
* Adding attachments in Roundcube broke after the last update for some users after rebooting because a temporary directory was deleted on reboot. The temporary directory is now moved from /tmp to /var so that it is persistent.
* `X-Spam-Score` header is added to incoming mail.
Control panel:
* RSASHA256 is now used for DNSSEC for .lv domains.
* Some documentation/links improvements.
Installer:
* We now run `apt-get autoremove` at the start of setup to clear out old packages, especially old kernels that take up a lot of space. On the first run, this step may take a long time.
* We now fetch Z-Push from its tagged git repository, fixing an installation problem.
* Some old PHP5 packages are removed from setup, fixing an installation bug where Apache would get installed.
* Python 3 packages for the control panel are now installed using a virtualenv to prevent installation errors due to conflicts in the cryptography/openssl packages between OS-installed packages and pip-installed packages.
v0.25 (November 15, 2017)
-------------------------
This update is a security update addressing [CVE-2017-16651, a vulnerability in Roundcube webmail that allows logged-in users to access files on the local filesystem](https://roundcube.net/news/2017/11/08/security-updates-1.3.3-1.2.7-and-1.1.10).
Mail:
* Update to Roundcube 1.3.3.
Control Panel:
* Allow custom DNS records to be set for DNS wildcard subdomains (i.e. `*`).
v0.24 (October 3, 2017)
-----------------------
System:
* Install PHP7 via a PPA. Switch to the on-demand process manager.
Mail:
* Updated to [Roundcube 1.3.1](https://roundcube.net/news/2017/06/26/roundcube-webmail-1.3.0-released), but unfortunately dropping the Vacation plugin because it has not been supported by its author and is not compatible with Roundcube 1.3, and updated the persistent login plugin.
* Updated to [Z-Push 2.3.8](http://download.z-push.org/final/2.3/z-push-2.3.8.txt).
* Dovecot now uses stronger 2048 bit DH params for better forward secrecy.
Nextcloud:
* Nextcloud updated to 12.0.3, using PHP7.
Control Panel:
* Nameserver (NS) records can now be set on custom domains.
* Fix an erroneous status check error due to IPv6 address formatting.
* Aliases for administrative addresses can now be set to send mail to +tag administrative addresses.
v0.23a (May 31, 2017)
---------------------
Corrects a problem in the new way third-party assets are downloaded during setup for the control panel, since v0.23.
v0.23 (May 30, 2017)
--------------------
Mail:
* The default theme for Roundcube was changed to the nicer Larry theme.
* Exchange/ActiveSync support has been replaced with z-push 2.3.6 from z-push.org (rather than z-push-contrib).
ownCloud (now Nextcloud):
* ownCloud is replaced with Nextcloud 10.0.5.
* Fixed an error in Owncloud/Nextcloud setup not updating domain when changing hostname.
Control Panel/Management:
* Fix an error in the control panel showing rsync backup status.
* Fix an error in the control panel related to IPv6 addresses.
* TLS certificates for internationalized domain names can now be provisioned from Let's Encrypt automatically.
* Third-party assets used in the control panel (jQuery/Bootstrap) are now downloaded during setup and served from the box rather than from a CDN.
DNS:
* Add support for custom CAA records.
v0.22 (April 2, 2017)
---------------------
Mail:
* The CardDAV plugin has been added to Roundcube so that your ownCloud contacts are available in webmail.
* Upgraded to Roundcube 1.2.4 and updated the persistent login plugin.
* Allow larger messages to be checked by SpamAssassin.
* Dovecot's vsz memory limit has been increased proportional to system memory.
* Newly set user passwords must be at least eight characters.
ownCloud:
* Upgraded to ownCloud 9.1.4.
Control Panel/Management:
* The status checks page crashed when the mailinabox.email website was down - that's fixed.
* Made nightly re-provisioning of TLS certificates less noisy.
* Fixed bugs in rsync backup method and in the list of recent backups.
* Fixed incorrect status checks errors about IPv6 addresses.
* Fixed incorrect status checks errors for secondary nameservers if round-robin custom A records are set.
* The management mail_log.py tool has been rewritten.
DNS:
* Added support for DSA, ED25519, and custom SSHFP records.
System:
* The SSH fail2ban jail was not activated.
Installation:
* At the end of installation, the SHA256 -- rather than SHA1 -- hash of the system's TLS certificate is shown.
v0.21c (February 1, 2017)
-------------------------
Installations and upgrades started failing about 10 days ago with the error "ImportError: No module named 'packaging'" after an upstream package (Python's setuptools) was updated by its maintainers. The updated package conflicted with Ubuntu 14.04's version of another package (Python's pip). This update upgrades both packages to remove the conflict.
If you already encountered the error during installation or upgrade of Mail-in-a-Box, this update may not correct the problem on your existing system. See https://discourse.mailinabox.email/t/v0-21c-release-fixes-python-package-installation-issue/1881 for help if the problem persists after upgrading to this version of Mail-in-a-Box.
v0.21b (December 4, 2016)
-------------------------
This update corrects a first-time installation issue introduced in v0.21 caused by the new Exchange/ActiveSync feature.
v0.21 (November 30, 2016)
-------------------------
This version updates ownCloud, which may include security fixes, and makes some other smaller improvements.
Mail:
* Header privacy filters were improperly running on the contents of forwarded email --- that's fixed.
* We have another go at fixing a long-standing issue with training the spam filter (because of a file permissions issue).
* Exchange/ActiveSync will now use your display name set in Roundcube in the From: line of outgoing email.
ownCloud:
* Updated ownCloud to version 9.1.1.
Control panel:
* Backups can now be made using rsync-over-ssh!
* Status checks failed if the system doesn't support iptables or doesn't have ufw installed.
* Added support for SSHFP records when sshd listens on non-standard ports.
* Recommendations for TLS certificate providers were removed now that everyone mostly uses Let's Encrypt.
System:
* Ubuntu's "Upgrade to 16.04" notice is suppressed since you should not do that.
* Lowered memory requirements to 512MB, display a warning if system memory is below 768MB.
v0.20 (September 23, 2016)
--------------------------
ownCloud:
* Updated ownCloud to 8.2.7.
Control Panel:
* Fixed a crash that occurs when there are IPv6 DNS records due to a bug in dnspython 1.14.0.
* Improved the wonky low disk space check.
v0.19b (August 20, 2016)
------------------------
This update corrects a security issue introduced in v0.18.
* A remote code execution vulnerability is corrected in how the munin system monitoring graphs are generated for the control panel. The vulnerability involves an administrative user visiting a carefully crafted URL.
v0.19a (August 18, 2016)
------------------------
@ -48,7 +827,7 @@ v0.18 (May 15, 2016)
ownCloud:
* Updated to ownCloud to 8.2.3
* Updated to ownCloud to 8.2.3
Mail:
@ -134,7 +913,6 @@ v0.16 (January 30, 2016)
------------------------
This update primarily adds automatic SSL (now "TLS") certificate provisioning from Let's Encrypt (https://letsencrypt.org/).
* The Sieve port is now open so tools like the Thunderbird Sieve program can be used to edit mail filters.
Control Panel:
@ -573,4 +1351,4 @@ v0.02 (September 21, 2014)
v0.01 (August 19, 2014)
-----------------------
First release.
First versioned release after a year of unversioned development.

CODE_OF_CONDUCT.md Normal file

@ -0,0 +1,48 @@
# Mail-in-a-Box Code of Conduct
Mail-in-a-Box is an open source community project about working, as a group, to empower ourselves and others to have control over our own digital communications. Just as we hope to increase technological diversity on the Internet through decentralization, we also believe that diverse viewpoints and voices among our community members foster innovation and creative solutions to the challenges we face.
We are committed to providing a safe, welcoming, and harassment-free space for collaboration, for everyone, without regard to age, disability, economic situation, ethnicity, gender identity and expression, language fluency, level of knowledge or experience, nationality, personal appearance, race, religion, sexual identity and orientation, or any other attribute. Community comes first. This policy supersedes all other project goals.
The maintainers of Mail-in-a-Box share the dual responsibility of leading by example and enforcing these policies as necessary to maintain an open and welcoming environment. All community members should be excellent to each other.
## Scope
This Code of Conduct applies to all places where Mail-in-a-Box community activity is occurring, including on GitHub, in discussion forums, on Slack, on social media, and in real life. The Code of Conduct applies not only on websites/at events run by the Mail-in-a-Box community (e.g. our GitHub organization, our Slack team) but also at any other location where the Mail-in-a-Box community is present (e.g. in issues of other GitHub organizations where Mail-in-a-Box community members are discussing problems related to Mail-in-a-Box, or real-life professional conferences), or whenever a Mail-in-a-Box community member is representing Mail-in-a-Box to the public at large or acting on behalf of Mail-in-a-Box.
This code does not apply to activity on a server running Mail-in-a-Box software, unless your server is hosting a service for the Mail-in-a-Box community at large.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Showing empathy towards other community members
* Making room for new and quieter voices
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory/unwelcome comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Aggressive and micro-aggressive behavior, such as unconstructive criticism, providing corrections that do not improve the conversation (sometimes referred to as "well actually"s), repeatedly interrupting or talking over someone else, feigning surprise at someone's lack of knowledge or awareness about a topic, or subtle prejudice (for example, comments like "That's so easy my grandmother could do it.", which is prejudicial toward grandmothers).
* Other conduct which could reasonably be considered inappropriate in a professional setting
* Retaliating against anyone who reports a violation of this code.
We will not tolerate harassment. Harassment is any unwelcome or hostile behavior towards another person for any reason. This includes, but is not limited to, offensive verbal comments related to personal characteristics or choices, sexual images or comments, deliberate intimidation, bullying, stalking, following, harassing photography or recording, sustained disruption of discussion or events, nonconsensual publication of private comments, inappropriate physical contact, or unwelcome sexual attention. Conduct need not be intentional to be harassment.
## Enforcement
We will remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not consistent with this Code of Conduct. We may ban, temporarily or permanently, any contributor for violating this code, when appropriate.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project lead, [Joshua Tauberer](https://razor.occams.info/). All reports will be treated confidentially, impartially, consistently, and swiftly.
Because the need for confidentiality for all parties involved in an enforcement action outweighs the goals of openness, limited information will be shared with the Mail-in-a-Box community regarding enforcement actions that have taken place.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant, version 1.4](http://contributor-covenant.org/version/1/4) and the code of conduct of [Code for DC](http://codefordc.org/resources/codeofconduct.html).

CONTRIBUTING.md

@ -1,3 +1,50 @@
# Contributing
Mail-in-a-Box is an open source project. Your contributions and pull requests are welcome.
## Development
To start developing Mail-in-a-Box, [clone the repository](https://github.com/mail-in-a-box/mailinabox) and familiarize yourself with the code.
$ git clone https://github.com/mail-in-a-box/mailinabox
### Vagrant and VirtualBox
We recommend you use [Vagrant](https://www.vagrantup.com/intro/getting-started/install.html) and [VirtualBox](https://www.virtualbox.org/wiki/Downloads) for development. Please install them first.
With Vagrant set up, the following should boot up Mail-in-a-Box inside a virtual machine:
$ vagrant up --provision
_If you're seeing an error message about your *IP address being listed in the Spamhaus Block List*, simply uncomment the `export SKIP_NETWORK_CHECKS=1` line in `Vagrantfile`. It's normal: you're probably using a dynamic IP address assigned by your Internet provider, and they're almost all listed._
### Modifying your `hosts` file
After a while, Mail-in-a-Box will be available at `192.168.56.4` (unless you changed that in your `Vagrantfile`). To be able to use the web-based bits, we recommend adding a hostname to your `hosts` file:
$ echo "192.168.56.4 mailinabox.lan" | sudo tee -a /etc/hosts
You should now be able to navigate to https://mailinabox.lan/admin using your browser. There should be an initial admin user with the name `me@mailinabox.lan` and the password `12345678`.
### Making changes
Your working copy of Mail-in-a-Box will be mounted inside your VM at `/vagrant`. Any change you make locally will appear inside your VM automatically.
Running `vagrant up --provision` again will repeat the installation with your modifications.
Alternatively, you can also ssh into the VM using:
$ vagrant ssh
Once inside the VM, you can re-run individual parts of the setup like in this example:
vm$ cd /vagrant
vm$ sudo setup/owncloud.sh # replace with script you'd like to re-run
### Tests
Mail-in-a-Box needs more tests. If you're still looking for a way to help out, writing and contributing tests would be a great start!
## Public domain
This project is in the public domain. Copyright and related rights in the work worldwide are waived through the [CC0 1.0 Universal public domain dedication][CC0]. See the LICENSE file in this directory.
@ -5,3 +52,7 @@ This project is in the public domain. Copyright and related rights in the work w
All contributions to this project must be released under the same CC0 waiver. By submitting a pull request or patch, you are agreeing to comply with this waiver of copyright interest.
[CC0]: http://creativecommons.org/publicdomain/zero/1.0/
## Code of Conduct
This project has a [Code of Conduct](CODE_OF_CONDUCT.md). Please review it when joining our community.

README.md

@ -9,91 +9,92 @@ Mail-in-a-Box helps individuals take back control of their email by defining a o
* * *
I am trying to:
Our goals are to:
* Make deploying a good mail server easy.
* Promote [decentralization](http://redecentralize.org/), innovation, and privacy on the web.
* Have automated, auditable, and [idempotent](http://sharknet.us/2014/02/01/automated-configuration-management-challenges-with-idempotency/) configuration.
* Have automated, auditable, and [idempotent](https://web.archive.org/web/20190518072631/https://sharknet.us/2014/02/01/automated-configuration-management-challenges-with-idempotency/) configuration.
* **Not** make a totally unhackable, NSA-proof server.
* **Not** make something customizable by power users.
This setup is what has been powering my own personal email since September 2013.
Additionally, this project has a [Code of Conduct](CODE_OF_CONDUCT.md), which supersedes the goals above. Please review it when joining our community.
The Box
-------
Mail-in-a-Box turns a fresh Ubuntu 14.04 LTS 64-bit machine into a working mail server by installing and configuring various components.
In The Box
----------
It is a one-click email appliance. There are no user-configurable setup options. It "just works".
Mail-in-a-Box turns a fresh Ubuntu 22.04 LTS 64-bit machine into a working mail server by installing and configuring various components.
It is a one-click email appliance. There are no user-configurable setup options. It "just works."
The components installed are:
* SMTP ([postfix](http://www.postfix.org/)), IMAP ([dovecot](http://dovecot.org/)), CardDAV/CalDAV ([ownCloud](http://owncloud.org/)), Exchange ActiveSync ([z-push](https://github.com/fmbiete/Z-Push-contrib))
* Webmail ([Roundcube](http://roundcube.net/)), static website hosting ([nginx](http://nginx.org/))
* Spam filtering ([spamassassin](https://spamassassin.apache.org/)), greylisting ([postgrey](http://postgrey.schweikert.ch/))
* DNS ([nsd4](http://www.nlnetlabs.nl/projects/nsd/)) with [SPF](https://en.wikipedia.org/wiki/Sender_Policy_Framework), DKIM ([OpenDKIM](http://www.opendkim.org/)), [DMARC](https://en.wikipedia.org/wiki/DMARC), [DNSSEC](https://en.wikipedia.org/wiki/DNSSEC), [DANE TLSA](https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Named_Entities), and [SSHFP](https://tools.ietf.org/html/rfc4255) records automatically set
* Backups ([duplicity](http://duplicity.nongnu.org/)), firewall ([ufw](https://launchpad.net/ufw)), intrusion protection ([fail2ban](http://www.fail2ban.org/wiki/index.php/Main_Page)), system monitoring ([munin](http://munin-monitoring.org/))
* SMTP ([postfix](http://www.postfix.org/)), IMAP ([Dovecot](http://dovecot.org/)), CardDAV/CalDAV ([Nextcloud](https://nextcloud.com/)), and Exchange ActiveSync ([z-push](http://z-push.org/)) servers
* Webmail ([Roundcube](http://roundcube.net/)), mail filter rules (thanks to Roundcube and Dovecot), and email client autoconfig settings (served by [nginx](http://nginx.org/))
* Spam filtering ([spamassassin](https://spamassassin.apache.org/)) and greylisting ([postgrey](http://postgrey.schweikert.ch/))
* DNS ([nsd4](https://www.nlnetlabs.nl/projects/nsd/)) with [SPF](https://en.wikipedia.org/wiki/Sender_Policy_Framework), DKIM ([OpenDKIM](http://www.opendkim.org/)), [DMARC](https://en.wikipedia.org/wiki/DMARC), [DNSSEC](https://en.wikipedia.org/wiki/DNSSEC), [DANE TLSA](https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Named_Entities), [MTA-STS](https://tools.ietf.org/html/rfc8461), and [SSHFP](https://tools.ietf.org/html/rfc4255) policy records automatically set
* TLS certificates are automatically provisioned using [Let's Encrypt](https://letsencrypt.org/) for protecting https and all of the other services on the box
* Backups ([duplicity](http://duplicity.nongnu.org/)), firewall ([ufw](https://launchpad.net/ufw)), intrusion protection ([fail2ban](http://www.fail2ban.org/wiki/index.php/Main_Page)), and basic system monitoring ([munin](http://munin-monitoring.org/))
It also includes:
It also includes system management tools:
* A control panel and API for adding/removing mail users, aliases, custom DNS records, etc. and detailed system monitoring.
* Our own builds of postgrey (adding better whitelisting) and dovecot-lucene (faster search for mail) distributed via the [Mail-in-a-Box PPA](https://launchpad.net/~mail-in-a-box/+archive/ubuntu/ppa) on Launchpad.
* Comprehensive health monitoring that checks each day that services are running, ports are open, TLS certificates are valid, and DNS records are correct
* A control panel for adding/removing mail users, aliases, custom DNS records, configuring backups, etc.
* An API for all of the actions on the control panel
Internationalized domain names are supported and configured easily (but SMTPUTF8 is not supported, unfortunately).
It also supports static website hosting since the box is serving HTTPS anyway. (To serve a website for your domains elsewhere, just add a custom DNS "A" record in your Mail-in-a-Box's control panel to point domains to another server.)
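As a rough illustration of that control panel workflow, custom DNS records can also be managed through the box's HTTP API. The endpoint shape, hostnames, and credentials below are illustrative assumptions, not taken verbatim from the API spec:
# Assumed endpoint: PUT /admin/dns/custom/<qname>/<rtype> with the record value as the request body.
$ curl -X PUT -d "203.0.113.10" --user "me@example.com:PASSWORD" "https://box.example.com/admin/dns/custom/www.example.com/A"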
For more information on how Mail-in-a-Box handles your privacy, see the [security details page](security.md).
Installation
------------
See the [setup guide](https://mailinabox.email/guide.html) for detailed, user-friendly instructions.
For experts, start with a completely fresh (really, I mean it) Ubuntu 14.04 LTS 64-bit machine. On the machine...
For experts, start with a completely fresh (really, I mean it) Ubuntu 22.04 LTS 64-bit machine. On the machine...
Clone this repository:
Clone this repository and checkout the tag corresponding to the most recent release:
$ git clone https://github.com/mail-in-a-box/mailinabox
$ cd mailinabox
_Optional:_ Download my PGP key and then verify that the sources were signed
by me:
$ curl -s https://keybase.io/joshdata/key.asc | gpg --import
gpg: key C10BDD81: public key "Joshua Tauberer <jt@occams.info>" imported
$ git verify-tag v0.19a
gpg: Signature made ..... using RSA key ID C10BDD81
gpg: Good signature from "Joshua Tauberer <jt@occams.info>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 5F4C 0E73 13CC D744 693B 2AEA B920 41F4 C10B DD81
You'll get a lot of warnings, but that's OK. Check that the primary key fingerprint matches the
fingerprint in the key details at [https://keybase.io/joshdata](https://keybase.io/joshdata)
and on my [personal homepage](https://razor.occams.info/). (Of course, if this repository has been compromised you can't trust these instructions.)
Checkout the tag corresponding to the most recent release:
$ git checkout v0.19a
$ git checkout v68
Begin the installation.
$ sudo setup/start.sh
For help, DO NOT contact me directly --- I don't do tech support by email or tweet (no exceptions).
The installation will install, uninstall, and configure packages to turn the machine into a working, good mail server.
For help, DO NOT contact Josh directly --- I don't do tech support by email or tweet (no exceptions).
Post your question on the [discussion forum](https://discourse.mailinabox.email/) instead, where maintainers and Mail-in-a-Box users may be able to help you.
Note that while we want everything to "just work," we can't control the rest of the Internet. Other mail services might block or spam-filter email sent from your Mail-in-a-Box.
This is a challenge faced by everyone who runs their own mail server, with or without Mail-in-a-Box. See our discussion forum for tips about that.
Contributing and Development
----------------------------
Mail-in-a-Box is an open source project. Your contributions and pull requests are welcome. See [CONTRIBUTING](CONTRIBUTING.md) to get started.
Post your question on the [discussion forum](https://discourse.mailinabox.email/) instead, where I and other Mail-in-a-Box users may be able to help you.
The Acknowledgements
--------------------
This project was inspired in part by the ["NSA-proof your email in 2 hours"](http://sealedabstract.com/code/nsa-proof-your-e-mail-in-2-hours/) blog post by Drew Crawford, [Sovereign](https://github.com/al3x/sovereign) by Alex Payne, and conversations with <a href="http://twitter.com/shevski" target="_blank">@shevski</a>, <a href="https://github.com/konklone" target="_blank">@konklone</a>, and <a href="https://github.com/gregelin" target="_blank">@GregElin</a>.
This project was inspired in part by the ["NSA-proof your email in 2 hours"](http://sealedabstract.com/code/nsa-proof-your-e-mail-in-2-hours/) blog post by Drew Crawford, [Sovereign](https://github.com/sovereign/sovereign) by Alex Payne, and conversations with <a href="https://twitter.com/shevski" target="_blank">@shevski</a>, <a href="https://github.com/konklone" target="_blank">@konklone</a>, and <a href="https://github.com/gregelin" target="_blank">@GregElin</a>.
Mail-in-a-Box is similar to [iRedMail](http://www.iredmail.org/) and [Modoboa](https://github.com/tonioo/modoboa).
The History
-----------
* In 2007 I wrote a relatively popular Mozilla Thunderbird extension that added client-side SPF and DKIM checks to mail to warn users about possible phishing: [add-on page](https://addons.mozilla.org/en-us/thunderbird/addon/sender-verification-anti-phish/), [source](https://github.com/JoshData/thunderbird-spf).
* In August 2013 I began Mail-in-a-Box by combining my own mail server configuration with the setup in ["NSA-proof your email in 2 hours"](http://sealedabstract.com/code/nsa-proof-your-e-mail-in-2-hours/) and making the setup steps reproducible with bash scripts.
* Mail-in-a-Box was a semifinalist in the 2014 [Knight News Challenge](https://www.newschallenge.org/challenge/2014/submissions/mail-in-a-box), but it was not selected as a winner.
* Mail-in-a-Box hit the front page of Hacker News in [April](https://news.ycombinator.com/item?id=7634514) 2014, [September](https://news.ycombinator.com/item?id=8276171) 2014, and [May](https://news.ycombinator.com/item?id=9624267) 2015.
* Mail-in-a-Box hit the front page of Hacker News in [April](https://news.ycombinator.com/item?id=7634514) 2014, [September](https://news.ycombinator.com/item?id=8276171) 2014, [May](https://news.ycombinator.com/item?id=9624267) 2015, and [November](https://news.ycombinator.com/item?id=13050500) 2016.
* FastCompany mentioned Mail-in-a-Box in a [roundup of privacy projects](http://www.fastcompany.com/3047645/your-own-private-cloud) on June 26, 2015.

17
Vagrantfile vendored
View File

@ -2,26 +2,23 @@
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu14.04"
config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
config.vm.box = "ubuntu/jammy64"
# Network config: Since it's a mail server, the machine must be connected
# to the public web. However, we currently don't want to expose SSH since
# the machine's box will let anyone log into it. So instead we'll put the
# machine on a private network.
config.vm.hostname = "mailinabox"
config.vm.network "private_network", ip: "192.168.50.4"
config.vm.hostname = "mailinabox.lan"
config.vm.network "private_network", ip: "192.168.56.4"
config.vm.provision :shell, :inline => <<-SH
# Set environment variables so that the setup script does
# not ask any questions during provisioning. We'll let the
# machine figure out its own public IP and it'll take a
# subdomain on our justtesting.email domain so we can get
# started quickly.
# Set environment variables so that the setup script does
# not ask any questions during provisioning. We'll let the
# machine figure out its own public IP.
export NONINTERACTIVE=1
export PUBLIC_IP=auto
export PUBLIC_IPV6=auto
export PRIMARY_HOSTNAME=auto-easy
export PRIMARY_HOSTNAME=auto
#export SKIP_NETWORK_CHECKS=1
# Start the setup script.

23
api/docs/generate-docs.sh Executable file
View File

@ -0,0 +1,23 @@
#!/usr/bin/env sh
# Requirements:
# - Node.js
# - redoc-cli (`npm install redoc-cli -g`)
redoc-cli bundle ../mailinabox.yml \
-t template.hbs \
-o api-docs.html \
--templateOptions.metaDescription="Mail-in-a-Box HTTP API" \
--title="Mail-in-a-Box HTTP API" \
--options.expandSingleSchemaField \
--options.hideSingleRequestSampleTab \
--options.jsonSampleExpandLevel=10 \
--options.hideDownloadButton \
--options.theme.logo.maxHeight=180px \
--options.theme.logo.maxWidth=180px \
--options.theme.colors.primary.main="#C52" \
--options.theme.typography.fontSize=16px \
--options.theme.typography.fontFamily="Raleway, sans-serif" \
--options.theme.typography.headings.fontFamily="Ubuntu, Arial, sans-serif" \
--options.theme.typography.code.fontSize=15px \
--options.theme.typography.code.fontFamily='"Source Code Pro", monospace'
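A minimal sketch of running this script locally, assuming Node.js is already available (the output filename comes from the -o flag above):
$ npm install -g redoc-cli
$ cd api/docs
$ ./generate-docs.sh   # writes api-docs.html into the current directory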

31
api/docs/template.hbs Normal file
View File

@ -0,0 +1,31 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf8" />
<title>{{title}}</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="{{templateOptions.metaDescription}}" />
<link rel="icon" type="image/png" href="https://mailinabox.email/static/logo_small.png">
<link rel="apple-touch-icon" type="image/png" href="https://mailinabox.email/static/logo_small.png">
<link href="https://fonts.googleapis.com/css?family=Raleway:400,700" rel="stylesheet" />
<link href="https://fonts.googleapis.com/css?family=Ubuntu:300" rel="stylesheet" />
<link href="https://fonts.googleapis.com/css?family=Source+Code+Pro:500" rel="stylesheet" />
<style>
body {
margin: 0;
padding: 0;
}
h1 {
color: #000 !important;
}
</style>
{{{redocHead}}}
</head>
<body>
{{{redocHTML}}}
</body>
</html>

2732
api/mailinabox.yml Normal file

File diff suppressed because it is too large

View File

@ -1,4 +1,4 @@
# Fail2Ban filter Dovecot authentication and pop3/imap server
# Fail2Ban filter Dovecot authentication and pop3/imap/managesieve server
# For Mail-in-a-Box
[INCLUDES]
@ -9,7 +9,7 @@ before = common.conf
_daemon = (auth|dovecot(-auth)?|auth-worker)
failregex = ^%(__prefix_line)s(pop3|imap)-login: (Info: )?(Aborted login|Disconnected)(: Inactivity)? \(((no auth attempts|auth failed, \d+ attempts)( in \d+ secs)?|tried to use (disabled|disallowed) \S+ auth)\):( user=<\S*>,)?( method=\S+,)? rip=<HOST>, lip=(\d{1,3}\.){3}\d{1,3}(, TLS( handshaking)?(: Disconnected)?)?(, session=<\S+>)?\s*$
failregex = ^%(__prefix_line)s(pop3|imap|managesieve)-login: (Info: )?(Aborted login|Disconnected)(: Inactivity)? \(((no auth attempts|auth failed, \d+ attempts)( in \d+ secs)?|tried to use (disabled|disallowed) \S+ auth)\):( user=<\S*>,)?( method=\S+,)? rip=<HOST>, lip=(\d{1,3}\.){3}\d{1,3}(, TLS( handshaking)?(: Disconnected)?)?(, session=<\S+>)?\s*$
ignoreregex =

View File

@ -3,5 +3,5 @@
before = common.conf
[Definition]
failregex=<HOST> - .*GET /admin/munin/.* HTTP/1.1\" 401.*
failregex=<HOST> - .*GET /admin/munin/.* HTTP/\d+\.\d+\" 401.*
ignoreregex =

View File

@ -3,5 +3,6 @@
before = common.conf
[Definition]
datepattern = %%Y-%%m-%%d %%H:%%M:%%S
failregex=Login failed: .*Remote IP: '<HOST>[\)']
ignoreregex =

View File

@ -5,7 +5,7 @@
# Whitelist our own IP addresses. 127.0.0.1/8 is the default. But our status checks
# ping services over the public interface so we should whitelist that address of
# ours too. The string is substituted during installation.
ignoreip = 127.0.0.1/8 PUBLIC_IP
ignoreip = 127.0.0.1/8 PUBLIC_IP ::1 PUBLIC_IPV6
[dovecot]
enabled = true
@ -34,10 +34,18 @@ findtime = 30
enabled = true
port = http,https
filter = miab-owncloud
logpath = STORAGE_ROOT/owncloud/owncloud.log
logpath = STORAGE_ROOT/owncloud/nextcloud.log
maxretry = 20
findtime = 120
[miab-postfix465]
enabled = true
port = 465
filter = miab-postfix-submission
logpath = /var/log/mail.log
maxretry = 20
findtime = 30
[miab-postfix587]
enabled = true
port = 587
@ -50,7 +58,7 @@ findtime = 30
enabled = true
port = http,https
filter = miab-roundcube
logpath = /var/log/roundcubemail/errors
logpath = /var/log/roundcubemail/errors.log
maxretry = 20
findtime = 30
@ -69,12 +77,10 @@ action = iptables-allports[name=recidive]
# So the notification is omitted. This will prevent messages appearing in the mail.log that mail
# can't be delivered to fail2ban@$HOSTNAME.
[sasl]
[postfix-sasl]
enabled = true
[ssh]
[sshd]
enabled = true
maxretry = 7
bantime = 3600
[ssh-ddos]
enabled = true
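To spot-check jails like these after setup, fail2ban's standard CLI reports per-jail status; the jail names below come from this file:
$ sudo fail2ban-client status                     # lists all active jails
$ sudo fail2ban-client status miab-postfix465     # shows the filter, log path, and currently banned IPs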

View File

@ -18,8 +18,6 @@
<string>PRIMARY_HOSTNAME</string>
<key>CalDAVPort</key>
<real>443</real>
<key>CalDAVPrincipalURL</key>
<string>/cloud/remote.php/caldav/calendars/</string>
<key>CalDAVUseSSL</key>
<true/>
<key>PayloadDescription</key>
@ -55,7 +53,7 @@
<key>OutgoingMailServerHostName</key>
<string>PRIMARY_HOSTNAME</string>
<key>OutgoingMailServerPortNumber</key>
<integer>587</integer>
<integer>465</integer>
<key>OutgoingMailServerUseSSL</key>
<true/>
<key>OutgoingPasswordSameAsIncomingPassword</key>

11
conf/mailinabox.service Normal file
View File

@ -0,0 +1,11 @@
[Unit]
Description=Mail-in-a-Box System Management Service
After=multi-user.target
[Service]
Type=idle
IgnoreSIGPIPE=False
ExecStart=/usr/local/lib/mailinabox/start
[Install]
WantedBy=multi-user.target
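Activating a unit like this is normally handled by the setup scripts; for reference, a sketch of the equivalent manual steps:
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now mailinabox.service
$ systemctl status mailinabox.service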

View File

@ -1,135 +0,0 @@
#! /bin/sh
### BEGIN INIT INFO
# Provides: mailinabox
# Required-Start: $all
# Required-Stop: $all
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start and stop the Mail-in-a-Box management daemon.
# Description: Start and stop the Mail-in-a-Box management daemon.
### END INIT INFO
# Adapted from http://blog.codefront.net/2007/06/11/nginx-php-and-a-php-fastcgi-daemon-init-script/
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Mail-in-a-Box Management Daemon"
NAME=mailinabox
DAEMON=/usr/local/bin/mailinabox-daemon
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Set defaults.
START=yes
EXEC_AS_USER=root
# Ensure Python reads/writes files in UTF-8. If the machine
# triggers some other locale in Python, like ASCII encoding,
# Python may not be able to read/write files. Set also
# setup/start.sh (where the locale is also installed if not
# already present) and management/daily_tasks.sh.
export LANGUAGE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_TYPE=en_US.UTF-8
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions
# If the daemon is not enabled, give the user a warning and then exit,
# unless we are stopping the daemon
if [ "$START" != "yes" -a "$1" != "stop" ]; then
log_warning_msg "To enable $NAME, edit /etc/default/$NAME and set START=yes"
exit 0
fi
# Process configuration
#export ...
DAEMON_ARGS=""
do_start()
{
# Return
# 0 if daemon has been started
# 1 if daemon was already running
# 2 if daemon could not be started
start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
|| return 1
start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON \
--background --make-pidfile --chuid $EXEC_AS_USER --startas $DAEMON -- \
$DAEMON_ARGS \
|| return 2
}
do_stop()
{
# Return
# 0 if daemon has been stopped
# 1 if daemon was already stopped
# 2 if daemon could not be stopped
# other if a failure occurred
start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE > /dev/null # --name $DAEMON
RETVAL="$?"
[ "$RETVAL" = 2 ] && return 2
# Wait for children to finish too if this is a daemon that forks
# and if the daemon is only ever run from this initscript.
# If the above conditions are not satisfied then add some other code
# that waits for the process to drop all resources that could be
# needed by services started subsequently. A last resort is to
# sleep for some time.
start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
[ "$?" = 2 ] && return 2
# Many daemons don't delete their pidfiles when they exit.
rm -f $PIDFILE
return "$RETVAL"
}
case "$1" in
start)
[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
do_start
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
stop)
[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
do_stop
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
restart|force-reload)
log_daemon_msg "Restarting $DESC" "$NAME"
do_stop
case "$?" in
0|1)
do_start
case "$?" in
0) log_end_msg 0 ;;
1) log_end_msg 1 ;; # Old process is still running
*) log_end_msg 1 ;; # Failed to start
esac
;;
*)
# Failed to stop
log_end_msg 1
;;
esac
;;
*)
echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2
exit 3
;;
esac

View File

@ -16,12 +16,12 @@
<outgoingServer type="smtp">
<hostname>PRIMARY_HOSTNAME</hostname>
<port>587</port>
<socketType>STARTTLS</socketType>
<port>465</port>
<socketType>SSL</socketType>
<username>%EMAILADDRESS%</username>
<authentication>password-cleartext</authentication>
<addThisServer>true</addThisServer>
<useGlobalPreferredServer>true</useGlobalPreferredServer>
<useGlobalPreferredServer>false</useGlobalPreferredServer>
</outgoingServer>
<documentation url="https://PRIMARY_HOSTNAME/">

4
conf/mta-sts.txt Normal file
View File

@ -0,0 +1,4 @@
version: STSv1
mode: MODE
mx: PRIMARY_HOSTNAME
max_age: 604800
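For context, senders only honor an MTA-STS policy like this if the domain also publishes a matching _mta-sts TXT record and serves the policy over HTTPS on the mta-sts subdomain (RFC 8461). A rough way to check both from another machine, assuming the box hosts example.com:
$ dig +short TXT _mta-sts.example.com                            # expect something like "v=STSv1; id=..."
$ curl -s https://mta-sts.example.com/.well-known/mta-sts.txt    # expect the rendered policy above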

10
conf/munin.service Normal file
View File

@ -0,0 +1,10 @@
[Unit]
Description=Munin System Monitoring Startup Script
After=multi-user.target
[Service]
Type=idle
ExecStart=/usr/local/lib/mailinabox/munin_start.sh
[Install]
WantedBy=multi-user.target

View File

@ -18,6 +18,12 @@
location = /.well-known/autoconfig/mail/config-v1.1.xml {
alias /var/lib/mailinabox/mozilla-autoconfig.xml;
}
location = /mail/config-v1.1.xml {
alias /var/lib/mailinabox/mozilla-autoconfig.xml;
}
location = /.well-known/mta-sts.txt {
alias /var/lib/mailinabox/mta-sts.txt;
}
# Roundcube Webmail configuration.
rewrite ^/mail$ /mail/ redirect;
@ -70,7 +76,7 @@
# takes precedence over all non-regex matches and only regex matches that
# come after it (i.e. none of those, since this is the last one.) That means
# we're blocking dotfiles in the static hosted sites but not the FastCGI-
# handled locations for ownCloud (which serves user-uploaded files that might
# handled locations for Nextcloud (which serves user-uploaded files that might
# have this pattern, see #414) or some of the other services.
location ~ /\.(ht|svn|git|hg|bzr) {
log_not_found off;

View File

@ -1,6 +1,9 @@
# Control Panel
# Proxy /admin to our Python based control panel daemon. It is
# listening on IPv4 only so use an IP address and not 'localhost'.
location /admin/assets {
alias /usr/local/lib/mailinabox/vendor/assets;
}
rewrite ^/admin$ /admin/;
rewrite ^/admin/munin$ /admin/munin/ redirect;
location /admin/ {
@ -9,22 +12,30 @@
add_header X-Frame-Options "DENY";
add_header X-Content-Type-Options nosniff;
add_header Content-Security-Policy "frame-ancestors 'none';";
add_header Strict-Transport-Security max-age=31536000;
}
# ownCloud configuration.
# Nextcloud configuration.
rewrite ^/cloud$ /cloud/ redirect;
rewrite ^/cloud/$ /cloud/index.php;
rewrite ^/cloud/(contacts|calendar|files)$ /cloud/index.php/apps/$1/ redirect;
rewrite ^(/cloud/core/doc/[^\/]+/)$ $1/index.html;
rewrite ^(/cloud/oc[sm]-provider)/$ $1/index.php redirect;
location /cloud/ {
alias /usr/local/lib/owncloud/;
location ~ ^/cloud/(build|tests|config|lib|3rdparty|templates|data|README)/ {
deny all;
}
location ~ ^/cloud/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ ^/cloud/(build|tests|config|lib|3rdparty|templates|data|README)/ {
deny all;
}
location ~ ^/cloud/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
# Enable paths for service and cloud federation discovery
# Resolves warning in Nextcloud Settings panel
location ~ ^/cloud/(oc[sm]-provider)?/([^/]+\.php)$ {
index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME /usr/local/lib/owncloud/$1/$2;
fastcgi_pass php-fpm;
}
}
location ~ ^(/cloud)((?:/ocs)?/[^/]+\.php)(/.*)?$ {
# note: ~ has precedence over a regular location block
@ -41,13 +52,11 @@
fastcgi_param MOD_X_ACCEL_REDIRECT_PREFIX /owncloud-xaccel;
fastcgi_read_timeout 630;
fastcgi_pass php-fpm;
error_page 403 /cloud/core/templates/403.php;
error_page 404 /cloud/core/templates/404.php;
client_max_body_size 1G;
fastcgi_buffers 64 4K;
}
location ^~ /owncloud-xaccel/ {
# This directory is for MOD_X_ACCEL_REDIRECT_ENABLED. ownCloud sends the full file
# This directory is for MOD_X_ACCEL_REDIRECT_ENABLED. Nextcloud sends the full file
# path on disk as a subdirectory under this virtual path.
# We must only allow 'internal' redirects within nginx so that the filesystem
# is not exposed to the world.
@ -64,4 +73,9 @@
rewrite ^/.well-known/carddav /cloud/remote.php/carddav/ redirect;
rewrite ^/.well-known/caldav /cloud/remote.php/caldav/ redirect;
# This addresses those service discovery issues mentioned in:
# https://docs.nextcloud.com/server/23/admin_manual/issues/general_troubleshooting.html#service-discovery
rewrite ^/.well-known/webfinger /cloud/index.php/.well-known/webfinger redirect;
rewrite ^/.well-known/nodeinfo /cloud/index.php/.well-known/nodeinfo redirect;
# ADDITIONAL DIRECTIVES HERE

View File

@ -1,76 +1,20 @@
# from: https://gist.github.com/konklone/6532544
###################################################################################
# We track the Mozilla "intermediate" compatibility TLS recommendations.
# Note that these settings are repeated in the SMTP and IMAP configuration.
# ssl_protocols has moved to nginx.conf in bionic, check there for enabled protocols.
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_dhparam STORAGE_ROOT/ssl/dh2048.pem;
# Basically the nginx configuration I use at konklone.com.
# I check it using https://www.ssllabs.com/ssltest/analyze.html?d=konklone.com
#
# To provide feedback, please tweet at @konklone or email eric@konklone.com.
# Comments on gists don't notify the author.
#
# Thanks to WubTheCaptain (https://wubthecaptain.eu) for his help and ciphersuites.
# Thanks to Ilya Grigorik (https://www.igvita.com) for constant inspiration.
# Path to certificate and private key.
# The .crt may omit the root CA cert, if it's a standard CA that ships with clients.
#ssl_certificate /path/to/unified.crt;
#ssl_certificate_key /path/to/my-private-decrypted.key;
# Tell browsers to require SSL (warning: difficult to change your mind)
# Handled by the management daemon because we can toggle this version or a
# preload version.
#add_header Strict-Transport-Security max-age=31536000;
# Prefer certain ciphersuites, to enforce Forward Secrecy and avoid known vulnerabilities.
#
# Forces forward secrecy in all browsers and clients that can use TLS,
# but with a small exception (DES-CBC3-SHA) for IE8/XP users.
#
# Reference client: https://www.ssllabs.com/ssltest/analyze.html
ssl_prefer_server_ciphers on;
ssl_ciphers 'kEECDH+ECDSA+AES128 kEECDH+ECDSA+AES256 kEECDH+AES128 kEECDH+AES256 kEDH+AES128 kEDH+AES256 DES-CBC3-SHA +SHA !aNULL !eNULL !LOW !MD5 !EXP !DSS !PSK !SRP !kECDH !CAMELLIA !RC4 !SEED';
# Cut out (the old, broken) SSLv3 entirely.
# This **excludes IE6 users** and (apparently) Yandexbot.
# Just comment out if you need to support IE6, bless your soul.
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
# Turn on session resumption, using a 10 min cache shared across nginx processes,
# as recommended by http://nginx.org/en/docs/http/configuring_https_servers.html
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
#keepalive_timeout 70; # in Ubuntu 14.04/nginx 1.4.6 the default is 65, so plenty good
ssl_session_cache shared:SSL:50m;
ssl_session_timeout 1d;
# Buffer size of 1400 bytes fits in one MTU.
# nginx 1.5.9+ ONLY
#ssl_buffer_size 1400;
ssl_buffer_size 1400;
# SPDY header compression (0 for none, 9 for slow/heavy compression). Preferred is 6.
#
# BUT: header compression is flawed and vulnerable in SPDY versions 1 - 3.
# Disable with 0, until using a version of nginx with SPDY 4.
spdy_headers_comp 0;
# Now let's really get fancy, and pre-generate a 2048 bit random parameter
# for DH elliptic curves. If not created and specified, default is only 1024 bits.
#
# Generated by OpenSSL with the following command:
# openssl dhparam -outform pem -out dhparam2048.pem 2048
#
# Note: raising the bits to 2048 excludes Java 6 clients. Comment out if a problem.
ssl_dhparam STORAGE_ROOT/ssl/dh2048.pem;
# OCSP stapling - means nginx will poll the CA for signed OCSP responses,
# and send them to clients so clients don't make their own OCSP calls.
# http://en.wikipedia.org/wiki/OCSP_stapling
#
# while the ssl_certificate above may omit the root cert if the CA is trusted,
# ssl_trusted_certificate below must point to a chain of **all** certs
# in the trust path - (your cert, intermediary certs, root cert)
#
# 8.8.8.8 and 8.8.4.4 below are Google's public IPv4 DNS servers.
# nginx will use them to talk to the CA.
ssl_stapling on;
ssl_stapling_verify on;
resolver 127.0.0.1 valid=86400;
resolver_timeout 10;
# h/t https://gist.github.com/konklone/6532544
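The DH parameters file referenced by ssl_dhparam above is created during setup; for reference, a file like it can be generated manually with OpenSSL (the path assumes the default STORAGE_ROOT of /home/user-data):
$ openssl dhparam -out /home/user-data/ssl/dh2048.pem 2048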

View File

@ -7,6 +7,6 @@
## your own --- please do not ask for help from us.
upstream php-fpm {
server unix:/var/run/php5-fpm.sock;
server unix:/var/run/php/php8.0-fpm.sock;
}

View File

@ -25,14 +25,14 @@ server {
# This path must be served over HTTP for ACME domain validation.
# We map this to a special path where our TLS cert provisioning
# tool knows to store challenge response files.
alias $STORAGE_ROOT/ssl/lets_encrypt/acme_challenges/;
alias $STORAGE_ROOT/ssl/lets_encrypt/webroot/.well-known/acme-challenge/;
}
}
# The secure HTTPS server.
server {
listen 443 ssl;
listen [::]:443 ssl;
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name $HOSTNAME;

View File

@ -1,6 +1,7 @@
<html>
<head>
<title>this is a mail-in-a-box</title>
<meta name="robots" content="noindex">
</head>
<body>
<h1>this is a mail-in-a-box</h1>

View File

@ -5,11 +5,12 @@
* Descr : Autodiscover configuration file
************************************************/
define('TIMEZONE', '');
// Defines the base path on the server
define('BASE_PATH', dirname($_SERVER['SCRIPT_FILENAME']). '/');
// The Z-Push server location for the autodiscover response
define('SERVERURL', 'https://PRIMARY_HOSTNAME/Microsoft-Server-ActiveSync');
define('ZPUSH_HOST', 'PRIMARY_HOSTNAME');
define('USE_FULLEMAIL_FOR_LOGIN', true);
@ -18,6 +19,7 @@ define('LOGFILE', LOGFILEDIR . 'autodiscover.log');
define('LOGERRORFILE', LOGFILEDIR . 'autodiscover-error.log');
define('LOGLEVEL', LOGLEVEL_INFO);
define('LOGUSERLEVEL', LOGLEVEL);
$specialLogUsers = array();
// the backend data provider
define('BACKEND_PROVIDER', 'BackendCombined');

View File

@ -17,7 +17,7 @@ define('CARDDAV_CONTACTS_FOLDER_NAME', '%u Addressbook');
define('CARDDAV_SUPPORTS_SYNC', false);
// If the CardDAV server supports the FN attribute for searches
// DAViCal supports it, but SabreDav, Owncloud and SOGo don't
// DAViCal supports it, but SabreDav, Nextcloud and SOGo don't
// Setting this to true will search by FN. If false will search by sn, givenName and email
// It's safe to leave it as false
define('CARDDAV_SUPPORTS_FN_SEARCH', false);

View File

@ -8,7 +8,7 @@
define('IMAP_SERVER', '127.0.0.1');
define('IMAP_PORT', 993);
define('IMAP_OPTIONS', '/ssl/norsh/novalidate-cert');
define('IMAP_DEFAULTFROM', '');
define('IMAP_DEFAULTFROM', 'sql');
define('SYSTEM_MIME_TYPES_MAPPING', '/etc/mime.types');
define('IMAP_AUTOSEEN_ON_DELETE', false);
@ -23,15 +23,19 @@ define('IMAP_FOLDER_TRASH', 'TRASH');
define('IMAP_FOLDER_SPAM', 'SPAM');
define('IMAP_FOLDER_ARCHIVE', 'ARCHIVE');
define('IMAP_INLINE_FORWARD', true);
define('IMAP_EXCLUDED_FOLDERS', '');
// not used
define('IMAP_FROM_SQL_DSN', '');
define('IMAP_FROM_SQL_DSN', 'sqlite:STORAGE_ROOT/mail/roundcube/roundcube.sqlite');
define('IMAP_FROM_SQL_USER', '');
define('IMAP_FROM_SQL_PASSWORD', '');
define('IMAP_FROM_SQL_OPTIONS', serialize(array(PDO::ATTR_PERSISTENT => true)));
define('IMAP_FROM_SQL_QUERY', "select first_name, last_name, mail_address from users where mail_address = '#username@#domain'");
define('IMAP_FROM_SQL_FIELDS', serialize(array('first_name', 'last_name', 'mail_address')));
define('IMAP_FROM_SQL_FROM', '#first_name #last_name <#mail_address>');
define('IMAP_FROM_SQL_QUERY', "SELECT name, email FROM identities i INNER JOIN users u ON i.user_id = u.user_id WHERE u.username = '#username' AND i.standard = 1 AND i.del = 0 AND i.name <> ''");
define('IMAP_FROM_SQL_FIELDS', serialize(array('name', 'email')));
define('IMAP_FROM_SQL_FROM', '#name <#email>');
define('IMAP_FROM_SQL_FULLNAME', '#name');
// not used
define('IMAP_FROM_LDAP_SERVER', '');
define('IMAP_FROM_LDAP_SERVER_PORT', '389');
define('IMAP_FROM_LDAP_USER', 'cn=zpush,ou=servers,dc=zpush,dc=org');
@ -40,12 +44,14 @@ define('IMAP_FROM_LDAP_BASE', 'dc=zpush,dc=org');
define('IMAP_FROM_LDAP_QUERY', '(mail=#username@#domain)');
define('IMAP_FROM_LDAP_FIELDS', serialize(array('givenname', 'sn', 'mail')));
define('IMAP_FROM_LDAP_FROM', '#givenname #sn <#mail>');
define('IMAP_FROM_LDAP_FULLNAME', '#givenname #sn');
define('IMAP_SMTP_METHOD', 'sendmail');
global $imap_smtp_params;
$imap_smtp_params = array('host' => 'ssl://127.0.0.1', 'port' => 587, 'auth' => true, 'username' => 'imap_username', 'password' => 'imap_password');
$imap_smtp_params = array('host' => 'ssl://127.0.0.1', 'port' => 465, 'auth' => true, 'username' => 'imap_username', 'password' => 'imap_password');
define('MAIL_MIMEPART_CRLF', "\r\n");
define('IMAP_MEETING_USE_CALDAV', true);
?>
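The ssl://127.0.0.1 port 465 submission path configured above can be spot-checked from the box itself. A rough check, not an official test:
$ openssl s_client -quiet -connect 127.0.0.1:465 </dev/null   # expect a TLS handshake followed by Postfix's 220 banner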

View File

@ -1,138 +1,158 @@
import base64, os, os.path, hmac
import base64, hmac, json, secrets
from datetime import timedelta
from flask import make_response
from expiringdict import ExpiringDict
import utils
from mailconfig import get_mail_password, get_mail_user_privileges
from mfa import get_hash_mfa_state, validate_auth_mfa
DEFAULT_KEY_PATH = '/var/lib/mailinabox/api.key'
DEFAULT_AUTH_REALM = 'Mail-in-a-Box Management Server'
class KeyAuthService:
"""Generate an API key for authenticating clients
Clients must read the key from the key file and send the key with all HTTP
requests. The key is passed as the username field in the standard HTTP
Basic Auth header.
"""
class AuthService:
def __init__(self):
self.auth_realm = DEFAULT_AUTH_REALM
self.key = self._generate_key()
self.key_path = DEFAULT_KEY_PATH
self.max_session_duration = timedelta(days=2)
def write_key(self):
"""Write key to file so authorized clients can get the key
self.init_system_api_key()
self.sessions = ExpiringDict(max_len=64, max_age_seconds=self.max_session_duration.total_seconds())
The key file is created with mode 0640 so that additional users can be
authorized to access the API by granting group/ACL read permissions on
the key file.
"""
def create_file_with_mode(path, mode):
# Based on answer by A-B-B: http://stackoverflow.com/a/15015748
old_umask = os.umask(0)
try:
return os.fdopen(os.open(path, os.O_WRONLY | os.O_CREAT, mode), 'w')
finally:
os.umask(old_umask)
def init_system_api_key(self):
"""Write an API key to a local file so local processes can use the API"""
os.makedirs(os.path.dirname(self.key_path), exist_ok=True)
with open(self.key_path, encoding='utf-8') as file:
self.key = file.read()
with create_file_with_mode(self.key_path, 0o640) as key_file:
key_file.write(self.key + '\n')
def authenticate(self, request, env):
"""Test if the client key passed in HTTP Authorization header matches the service key
or if the or username/password passed in the header matches an administrator user.
def authenticate(self, request, env, login_only=False, logout=False):
"""Test if the HTTP Authorization header's username matches the system key, a session key,
or if the username/password passed in the header matches a local user.
Returns a tuple of the user's email address and list of user privileges (e.g.
('my@email', []) or ('my@email', ['admin']); raises a ValueError on login failure.
If the user used an API key, the user's email is returned as None."""
If the user used the system API key, the user's email is returned as None since
this key is not associated with a user."""
def decode(s):
return base64.b64decode(s.encode('ascii')).decode('ascii')
def parse_basic_auth(header):
def parse_http_authorization_basic(header):
def decode(s):
return base64.b64decode(s.encode('ascii')).decode('ascii')
if " " not in header:
return None, None
scheme, credentials = header.split(maxsplit=1)
if scheme != 'Basic':
return None, None
credentials = decode(credentials)
if ":" not in credentials:
return None, None
username, password = credentials.split(':', maxsplit=1)
return username, password
header = request.headers.get('Authorization')
if not header:
raise ValueError("No authorization header provided.")
username, password = parse_http_authorization_basic(request.headers.get('Authorization', ''))
if username in {None, ""}:
msg = "Authorization header invalid."
raise ValueError(msg)
username, password = parse_basic_auth(header)
if username.strip() == "" and password.strip() == "":
msg = "No email address, password, session key, or API key provided."
raise ValueError(msg)
if username in (None, ""):
raise ValueError("Authorization header invalid.")
elif username == self.key:
# The user passed the API key which grants administrative privs.
# If user passed the system API key, grant administrative privs. This key
# is not associated with a user.
if username == self.key and not login_only:
return (None, ["admin"])
# If the password corresponds with a session token for the user, grant access for that user.
if self.get_session(username, password, "login", env) and not login_only:
sessionid = password
session = self.sessions[sessionid]
if logout:
# Clear the session.
del self.sessions[sessionid]
else:
# Re-up the session so that it does not expire.
self.sessions[sessionid] = session
# If no password was given, but a username was given, we're missing some information.
elif password.strip() == "":
msg = "Enter a password."
raise ValueError(msg)
else:
# The user is trying to log in with a username and user-specific
# API key or password. Raises or returns privs.
return (username, self.get_user_credentials(username, password, env))
def get_user_credentials(self, email, pw, env):
# Validate a user's credentials. On success returns a list of
# privileges (e.g. [] or ['admin']). On failure raises a ValueError
# with a login error message.
# Sanity check.
if email == "" or pw == "":
raise ValueError("Enter an email address and password.")
# The password might be a user-specific API key. create_user_key raises
# a ValueError if the user does not exist.
if hmac.compare_digest(self.create_user_key(email, env), pw):
# OK.
pass
else:
# Get the hashed password of the user. Raise a ValueError if the
# email address does not correspond to a user.
pw_hash = get_mail_password(email, env)
# Authenticate.
try:
# Use 'doveadm pw' to check credentials. doveadm will return
# a non-zero exit status if the credentials are no good,
# and check_call will raise an exception in that case.
utils.shell('check_call', [
"/usr/bin/doveadm", "pw",
"-p", pw,
"-t", pw_hash,
])
except:
# Login failed.
raise ValueError("Invalid password.")
# The user is trying to log in with a username and a password
# (and possibly a MFA token). On failure, an exception is raised.
self.check_user_auth(username, password, request, env)
# Get privileges for authorization. This call should never fail because by this
# point we know the email address is a valid user. But on error the call will
# return a tuple of an error message and an HTTP status code.
privs = get_mail_user_privileges(email, env)
# point we know the email address is a valid user --- unless the user has been
# deleted after the session was granted. On error the call will return a tuple
# of an error message and an HTTP status code.
privs = get_mail_user_privileges(username, env)
if isinstance(privs, tuple): raise ValueError(privs[0])
# Return a list of privileges.
return privs
# Return the authorization information.
return (username, privs)
def create_user_key(self, email, env):
# Store an HMAC with the client. The hashed message of the HMAC will be the user's
# email address & hashed password and the key will be the master API key. The user of
# course has their own email address and password. We assume they do not have the master
# API key (unless they are trusted anyway). The HMAC proves that they authenticated
# with us in some other way to get the HMAC. Including the password means that when
# a user's password is reset, the HMAC changes and they will correctly need to log
# in to the control panel again. This method raises a ValueError if the user does
# not exist, due to get_mail_password.
msg = b"AUTH:" + email.encode("utf8") + b" " + get_mail_password(email, env).encode("utf8")
return hmac.new(self.key.encode('ascii'), msg, digestmod="sha256").hexdigest()
def check_user_auth(self, email, pw, request, env):
# Validate a user's login email address and password. If MFA is enabled,
# check the MFA token in the X-Auth-Token header.
#
# On login failure, raises a ValueError with a login error message. On
# success, nothing is returned.
def _generate_key(self):
raw_key = os.urandom(32)
return base64.b64encode(raw_key).decode('ascii')
# Authenticate.
try:
# Get the hashed password of the user. Raise a ValueError if the
# email address does not correspond to a user. But wrap it in the
# same exception as if a password fails so we don't easily reveal
# if an email address is valid.
pw_hash = get_mail_password(email, env)
# Use 'doveadm pw' to check credentials. doveadm will return
# a non-zero exit status if the credentials are no good,
# and check_call will raise an exception in that case.
utils.shell('check_call', [
"/usr/bin/doveadm", "pw",
"-p", pw,
"-t", pw_hash,
])
except:
# Login failed.
msg = "Incorrect email address or password."
raise ValueError(msg)
# If MFA is enabled, check that MFA passes.
status, hints = validate_auth_mfa(email, request, env)
if not status:
# Login valid. Hints may have more info.
raise ValueError(",".join(hints))
def create_user_password_state_token(self, email, env):
# Create a token that changes if the user's password or MFA options change
# so that sessions become invalid if any of that information changes.
msg = get_mail_password(email, env).encode("utf8")
# Add to the message the current MFA state, which is a list of MFA information.
# Turn it into a string stably.
msg += b" " + json.dumps(get_hash_mfa_state(email, env), sort_keys=True).encode("utf8")
# Make a HMAC using the system API key as a hash key.
hash_key = self.key.encode('ascii')
return hmac.new(hash_key, msg, digestmod="sha256").hexdigest()
def create_session_key(self, username, env, type=None):
# Create a new session.
token = secrets.token_hex(32)
self.sessions[token] = {
"email": username,
"password_token": self.create_user_password_state_token(username, env),
"type": type,
}
return token
def get_session(self, user_email, session_key, session_type, env):
if session_key not in self.sessions: return None
session = self.sessions[session_key]
if session_type == "login" and session["email"] != user_email: return None
if session["type"] != session_type: return None
if session["password_token"] != self.create_user_password_state_token(session["email"], env): return None
return session
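As a usage sketch for the system API key handled above: local processes read the key from the path in DEFAULT_KEY_PATH and send it as the HTTP Basic Auth username with an empty password. The hostname and endpoint below are placeholders:
$ curl -s --user "$(sudo cat /var/lib/mailinabox/api.key):" "https://box.example.com/admin/mail/users?format=json"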

View File

@ -1,4 +1,4 @@
#!/usr/bin/python3
#!/usr/local/lib/mailinabox/env/bin/python
# This script performs a backup of all user data:
# 1) System services are stopped.
@ -7,32 +7,32 @@
# 4) The stopped services are restarted.
# 5) STORAGE_ROOT/backup/after-backup is executed if it exists.
import os, os.path, shutil, glob, re, datetime, sys
import os, os.path, re, datetime, sys
import dateutil.parser, dateutil.relativedelta, dateutil.tz
import rtyaml
from exclusiveprocess import Lock
from utils import exclusive_process, load_environment, shell, wait_for_service, fix_boto
from utils import load_environment, shell, wait_for_service
def backup_status(env):
# Root folder
backup_root = os.path.join(env["STORAGE_ROOT"], 'backup')
# What is the current status of backups?
# Query duplicity to get a list of all backups.
# Use the number of volumes to estimate the size.
# If backups are disabled, return no status.
config = get_backup_config(env)
now = datetime.datetime.now(dateutil.tz.tzlocal())
# Are backups disabled?
if config["target"] == "off":
return { }
# Query duplicity to get a list of all full and incremental
# backups available.
backups = { }
now = datetime.datetime.now(dateutil.tz.tzlocal())
backup_root = os.path.join(env["STORAGE_ROOT"], 'backup')
backup_cache_dir = os.path.join(backup_root, 'cache')
def reldate(date, ref, clip):
if ref < date: return clip
rd = dateutil.relativedelta.relativedelta(ref, date)
if rd.years > 1: return "%d years, %d months" % (rd.years, rd.months)
if rd.years == 1: return "%d year, %d months" % (rd.years, rd.months)
if rd.months > 1: return "%d months, %d days" % (rd.months, rd.days)
if rd.months == 1: return "%d month, %d days" % (rd.months, rd.days)
if rd.days >= 7: return "%d days" % rd.days
@ -46,37 +46,47 @@ def backup_status(env):
date = dateutil.parser.parse(keys[1]).astimezone(dateutil.tz.tzlocal())
return {
"date": keys[1],
"date_str": date.strftime("%x %X") + " " + now.tzname(),
"date_str": date.strftime("%Y-%m-%d %X") + " " + now.tzname(),
"date_delta": reldate(date, now, "the future?"),
"full": keys[0] == "full",
"size": 0, # collection-status doesn't give us the size
"volumes": keys[2], # number of archive volumes for this backup (not really helpful)
"volumes": int(keys[2]), # number of archive volumes for this backup (not really helpful)
}
code, collection_status = shell('check_output', [
"/usr/bin/duplicity",
"collection-status",
"--archive-dir", backup_cache_dir,
"--gpg-options", "--cipher-algo=AES256",
"--gpg-options", "'--cipher-algo=AES256'",
"--log-fd", "1",
config["target"],
*get_duplicity_additional_args(env),
get_duplicity_target_url(config)
],
get_env(env),
get_duplicity_env_vars(env),
trap=True)
if code != 0:
# Command failed. This is likely due to an improperly configured remote
# destination for the backups or the last backup job terminated unexpectedly.
raise Exception("Something is wrong with the backup: " + collection_status)
for line in collection_status.split('\n'):
if line.startswith(" full") or line.startswith(" inc"):
if line.startswith((" full", " inc")):
backup = parse_line(line)
backups[backup["date"]] = backup
# Look at the target to get the sizes of each of the backups. There is more than one file per backup.
# Look at the target directly to get the sizes of each of the backups. There is more than one file per backup.
# Starting with duplicity in Ubuntu 18.04, "signatures" files have dates in their
# filenames that are a few seconds off the backup date and so don't line up
# with the list of backups we have. Track unmatched files so we know how much other
# space is used for those.
unmatched_file_size = 0
for fn, size in list_target_files(config):
m = re.match(r"duplicity-(full|full-signatures|(inc|new-signatures)\.(?P<incbase>\d+T\d+Z)\.to)\.(?P<date>\d+T\d+Z)\.", fn)
if not m: continue # not a part of a current backup chain
key = m.group("date")
backups[key]["size"] += size
if key in backups:
backups[key]["size"] += size
else:
unmatched_file_size += size
# Ensure the rows are sorted reverse chronologically.
# This is relied on by should_force_full() and the next step.
@ -106,7 +116,7 @@ def backup_status(env):
# full backup. That full backup frees up this one to be deleted. But, the backup
# must also be at least min_age_in_days old too.
deleted_in = None
if incremental_count > 0 and first_full_size is not None:
if incremental_count > 0 and incremental_size > 0 and first_full_size is not None:
# How many days until the next incremental backup? First, the part of
# the algorithm based on increment sizes:
est_days_to_next_full = (.5 * first_full_size - incremental_size) / (incremental_size/incremental_count)
@ -139,6 +149,7 @@ def backup_status(env):
return {
"backups": backups,
"unmatched_file_size": unmatched_file_size,
}
def should_force_full(config, env):
@ -174,66 +185,96 @@ def get_passphrase(env):
# only needs to be 43 base64-characters to match AES256's key
# length of 32 bytes.
backup_root = os.path.join(env["STORAGE_ROOT"], 'backup')
with open(os.path.join(backup_root, 'secret_key.txt')) as f:
with open(os.path.join(backup_root, 'secret_key.txt'), encoding="utf-8") as f:
passphrase = f.readline().strip()
if len(passphrase) < 43: raise Exception("secret_key.txt's first line is too short!")
return passphrase
def get_env(env):
def get_duplicity_target_url(config):
target = config["target"]
if get_target_type(config) == "s3":
from urllib.parse import urlsplit, urlunsplit
target = list(urlsplit(target))
# Although we store the S3 hostname in the target URL,
# duplicity no longer accepts it in the target URL. The hostname in
# the target URL must be the bucket name. The hostname is passed
# via get_duplicity_additional_args. Move the first part of the
# path (the bucket name) into the hostname URL component, and leave
# the rest for the path. (The S3 region name is also stored in the
# hostname part of the URL, in the username portion, which we also
# have to drop here).
target[1], target[2] = target[2].lstrip('/').split('/', 1)
target = urlunsplit(target)
return target
def get_duplicity_additional_args(env):
config = get_backup_config(env)
if get_target_type(config) == 'rsync':
# Extract a port number for the ssh transport. Duplicity accepts the
# optional port number syntax in the target, but it doesn't appear to act
# on it, so we set the ssh port explicitly via the duplicity options.
from urllib.parse import urlsplit
try:
port = urlsplit(config["target"]).port
except ValueError:
port = 22
if port is None:
port = 22
return [
f"--ssh-options='-i /root/.ssh/id_rsa_miab -p {port}'",
f"--rsync-options='-e \"/usr/bin/ssh -oStrictHostKeyChecking=no -oBatchMode=yes -p {port} -i /root/.ssh/id_rsa_miab\"'",
]
elif get_target_type(config) == 's3':
# See note about hostname in get_duplicity_target_url.
# The region name, which is required by some non-AWS endpoints,
# is saved inside the username portion of the URL.
from urllib.parse import urlsplit, urlunsplit
target = urlsplit(config["target"])
endpoint_url = urlunsplit(("https", target.hostname, '', '', ''))
args = ["--s3-endpoint-url", endpoint_url]
if target.username: # region name is stuffed here
args += ["--s3-region-name", target.username]
return args
return []
def get_duplicity_env_vars(env):
config = get_backup_config(env)
env = { "PASSPHRASE" : get_passphrase(env) }
if get_target_type(config) == 's3':
env["AWS_ACCESS_KEY_ID"] = config["target_user"]
env["AWS_SECRET_ACCESS_KEY"] = config["target_pass"]
return env
def get_target_type(config):
protocol = config["target"].split(":")[0]
return protocol
return config["target"].split(":")[0]
def perform_backup(full_backup):
env = load_environment()
exclusive_process("backup")
# Create a global exclusive lock so that the backup script
# cannot be run more than once.
Lock(die=True).forever()
config = get_backup_config(env)
backup_root = os.path.join(env["STORAGE_ROOT"], 'backup')
backup_cache_dir = os.path.join(backup_root, 'cache')
backup_dir = os.path.join(backup_root, 'encrypted')
# Are backups dissbled?
# Are backups disabled?
if config["target"] == "off":
return
# In an older version of this script, duplicity was called
# such that it did not encrypt the backups it created (in
# backup/duplicity), and instead openssl was called separately
# after each backup run, creating AES256 encrypted copies of
# each file created by duplicity in backup/encrypted.
#
# We detect the transition by the presence of backup/duplicity
# and handle it by 'dupliception': we move all the old *un*encrypted
# duplicity files up out of the backup/duplicity directory (as
# backup/ is excluded from duplicity runs) in order that it is
# included in the next run, and we delete backup/encrypted (which
# duplicity will output files directly to, post-transition).
old_backup_dir = os.path.join(backup_root, 'duplicity')
migrated_unencrypted_backup_dir = os.path.join(env["STORAGE_ROOT"], "migrated_unencrypted_backup")
if os.path.isdir(old_backup_dir):
# Move the old unencrypted files to a new location outside of
# the backup root so they get included in the next (new) backup.
# Then we'll delete them. Also so that they do not get in the
# way of duplicity doing a full backup on the first run after
# we take care of this.
shutil.move(old_backup_dir, migrated_unencrypted_backup_dir)
# The backup_dir (backup/encrypted) now has a new purpose.
# Clear it out.
shutil.rmtree(backup_dir)
# On the first run, always do a full backup. Incremental
# will fail. Otherwise do a full backup when the size of
# the increments since the most recent full backup are
@ -255,9 +296,10 @@ def perform_backup(full_backup):
if quit:
sys.exit(code)
service_command("php5-fpm", "stop", quit=True)
service_command("php8.0-fpm", "stop", quit=True)
service_command("postfix", "stop", quit=True)
service_command("dovecot", "stop", quit=True)
service_command("postgrey", "stop", quit=True)
# Execute a pre-backup script that copies files outside the homedir.
# Run as the STORAGE_USER user, not as root. Pass our settings in
@ -279,21 +321,19 @@ def perform_backup(full_backup):
"--archive-dir", backup_cache_dir,
"--exclude", backup_root,
"--volsize", "250",
"--gpg-options", "--cipher-algo=AES256",
"--gpg-options", "'--cipher-algo=AES256'",
"--allow-source-mismatch",
*get_duplicity_additional_args(env),
env["STORAGE_ROOT"],
config["target"],
"--allow-source-mismatch"
get_duplicity_target_url(config),
],
get_env(env))
get_duplicity_env_vars(env))
finally:
# Start services again.
service_command("postgrey", "start", quit=False)
service_command("dovecot", "start", quit=False)
service_command("postfix", "start", quit=False)
service_command("php5-fpm", "start", quit=False)
# Once the migrated backup is included in a new backup, it can be deleted.
if os.path.isdir(migrated_unencrypted_backup_dir):
shutil.rmtree(migrated_unencrypted_backup_dir)
service_command("php8.0-fpm", "start", quit=False)
# Remove old backups. This deletes all backup data no longer needed
# from more than 3 days ago.
@ -304,9 +344,10 @@ def perform_backup(full_backup):
"--verbosity", "error",
"--archive-dir", backup_cache_dir,
"--force",
config["target"]
*get_duplicity_additional_args(env),
get_duplicity_target_url(config)
],
get_env(env))
get_duplicity_env_vars(env))
# From duplicity's manual:
# "This should only be necessary after a duplicity session fails or is
@ -319,9 +360,10 @@ def perform_backup(full_backup):
"--verbosity", "error",
"--archive-dir", backup_cache_dir,
"--force",
config["target"]
*get_duplicity_additional_args(env),
get_duplicity_target_url(config)
],
get_env(env))
get_duplicity_env_vars(env))
# Change ownership of backups to the user-data user, so that the after-backup
# script can access them.
@ -357,9 +399,10 @@ def run_duplicity_verification():
"--compare-data",
"--archive-dir", backup_cache_dir,
"--exclude", backup_root,
config["target"],
*get_duplicity_additional_args(env),
get_duplicity_target_url(config),
env["STORAGE_ROOT"],
], get_env(env))
], get_duplicity_env_vars(env))
def run_duplicity_restore(args):
env = load_environment()
@ -369,55 +412,133 @@ def run_duplicity_restore(args):
"/usr/bin/duplicity",
"restore",
"--archive-dir", backup_cache_dir,
config["target"],
] + args,
get_env(env))
*get_duplicity_additional_args(env),
get_duplicity_target_url(config),
*args],
get_duplicity_env_vars(env))
def print_duplicity_command():
import shlex
env = load_environment()
config = get_backup_config(env)
backup_cache_dir = os.path.join(env["STORAGE_ROOT"], 'backup', 'cache')
for k, v in get_duplicity_env_vars(env).items():
print(f"export {k}={shlex.quote(v)}")
print("duplicity", "{command}", shlex.join([
"--archive-dir", backup_cache_dir,
*get_duplicity_additional_args(env),
get_duplicity_target_url(config)
]))
def list_target_files(config):
import urllib.parse
try:
p = urllib.parse.urlparse(config["target"])
target = urllib.parse.urlparse(config["target"])
except ValueError:
return "invalid target"
if p.scheme == "file":
return [(fn, os.path.getsize(os.path.join(p.path, fn))) for fn in os.listdir(p.path)]
if target.scheme == "file":
return [(fn, os.path.getsize(os.path.join(target.path, fn))) for fn in os.listdir(target.path)]
elif p.scheme == "s3":
# match to a Region
fix_boto() # must call prior to importing boto
import boto.s3
from boto.exception import BotoServerError
for region in boto.s3.regions():
if region.endpoint == p.hostname:
break
elif target.scheme == "rsync":
rsync_fn_size_re = re.compile(r'.* ([^ ]*) [^ ]* [^ ]* (.*)')
rsync_target = '{host}:{path}'
# Strip off any trailing port specifier because it's not valid in rsync's
# DEST syntax. Explicitly set the port number for the ssh transport.
user_host, *_ = target.netloc.rsplit(':', 1)
try:
port = target.port
except ValueError:
port = 22
if port is None:
port = 22
target_path = target.path
if not target_path.endswith('/'):
target_path = target_path + '/'
if target_path.startswith('/'):
target_path = target_path[1:]
rsync_command = [ 'rsync',
'-e',
f'/usr/bin/ssh -i /root/.ssh/id_rsa_miab -oStrictHostKeyChecking=no -oBatchMode=yes -p {port}',
'--list-only',
'-r',
rsync_target.format(
host=user_host,
path=target_path)
]
code, listing = shell('check_output', rsync_command, trap=True, capture_stderr=True)
if code == 0:
ret = []
for l in listing.split('\n'):
match = rsync_fn_size_re.match(l)
if match:
ret.append( (match.groups()[1], int(match.groups()[0].replace(',',''))) )
return ret
else:
raise ValueError("Invalid S3 region/host.")
if 'Permission denied (publickey).' in listing:
reason = "Invalid user or check you correctly copied the SSH key."
elif 'No such file or directory' in listing:
reason = f"Provided path {target_path} is invalid."
elif 'Network is unreachable' in listing:
reason = f"The IP address {target.hostname} is unreachable."
elif 'Could not resolve hostname' in listing:
reason = f"The hostname {target.hostname} cannot be resolved."
else:
reason = ("Unknown error."
"Please check running 'management/backup.py --verify'"
"from mailinabox sources to debug the issue.")
msg = f"Connection to rsync host failed: {reason}"
raise ValueError(msg)
bucket = p.path[1:].split('/')[0]
path = '/'.join(p.path[1:].split('/')[1:]) + '/'
elif target.scheme == "s3":
import boto3.s3
from botocore.exceptions import ClientError
# separate bucket from path in target
bucket = target.path[1:].split('/')[0]
path = '/'.join(target.path[1:].split('/')[1:]) + '/'
# If no prefix is specified, set the path to '', otherwise boto won't list the files
if path == '/':
path = ''
if bucket == "":
raise ValueError("Enter an S3 bucket name.")
msg = "Enter an S3 bucket name."
raise ValueError(msg)
# connect to the region & bucket
try:
conn = region.connect(aws_access_key_id=config["target_user"], aws_secret_access_key=config["target_pass"])
bucket = conn.get_bucket(bucket)
except BotoServerError as e:
if e.status == 403:
raise ValueError("Invalid S3 access key or secret access key.")
elif e.status == 404:
raise ValueError("Invalid S3 bucket name.")
elif e.status == 301:
raise ValueError("Incorrect region for this bucket.")
raise ValueError(e.reason)
s3 = boto3.client('s3', \
endpoint_url=f'https://{target.hostname}', \
aws_access_key_id=config['target_user'], \
aws_secret_access_key=config['target_pass'])
bucket_objects = s3.list_objects_v2(Bucket=bucket, Prefix=path)['Contents']
backup_list = [(key['Key'][len(path):], key['Size']) for key in bucket_objects]
except ClientError as e:
raise ValueError(e)
return backup_list
elif target.scheme == 'b2':
from b2sdk.v1 import InMemoryAccountInfo, B2Api
from b2sdk.v1.exception import NonExistentBucket
info = InMemoryAccountInfo()
b2_api = B2Api(info)
return [(key.name[len(path):], key.size) for key in bucket.list(prefix=path)]
# Extract information from target
b2_application_keyid = target.netloc[:target.netloc.index(':')]
b2_application_key = urllib.parse.unquote(target.netloc[target.netloc.index(':')+1:target.netloc.index('@')])
b2_bucket = target.netloc[target.netloc.index('@')+1:]
try:
b2_api.authorize_account("production", b2_application_keyid, b2_application_key)
bucket = b2_api.get_bucket_by_name(b2_bucket)
except NonExistentBucket:
msg = "B2 Bucket does not exist. Please double check your information!"
raise ValueError(msg)
return [(key.file_name, key.size) for key, _ in bucket.ls()]
else:
raise ValueError(config["target"])
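Taken together, the branches above accept config["target"] URLs of the following shapes; this is a hedged summary, and the hostnames, paths, and credentials are illustrative placeholders, not defaults:

# Shapes of config["target"] handled by list_target_files(); values are placeholders.
example_targets = [
    "file:///home/user-data/backup/encrypted",               # local directory
    "rsync://username@backup.example.net/path/to/backups/",  # rsync over SSH, port 22 unless given
    "s3://s3.us-east-1.amazonaws.com/bucket-name/prefix",    # S3-compatible endpoint / bucket / prefix
    "b2://applicationKeyId:applicationKey@bucket-name",      # Backblaze B2
]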
@ -425,7 +546,7 @@ def list_target_files(config):
def backup_set_custom(env, target, target_user, target_pass, min_age):
config = get_backup_config(env, for_save=True)
# min_age must be an int
if isinstance(min_age, str):
min_age = int(min_age)
@ -437,17 +558,17 @@ def backup_set_custom(env, target, target_user, target_pass, min_age):
# Validate.
try:
if config["target"] not in ("off", "local"):
if config["target"] not in {"off", "local"}:
# these aren't supported by the following function, which expects a full url in the target key,
# which is what is there except when loading the config prior to saving
list_target_files(config)
except ValueError as e:
return str(e)
write_backup_config(env, config)
return "OK"
def get_backup_config(env, for_save=False, for_ui=False):
backup_root = os.path.join(env["STORAGE_ROOT"], 'backup')
@ -459,8 +580,9 @@ def get_backup_config(env, for_save=False, for_ui=False):
# Merge in anything written to custom.yaml.
try:
custom_config = rtyaml.load(open(os.path.join(backup_root, 'custom.yaml')))
if not isinstance(custom_config, dict): raise ValueError() # caught below
with open(os.path.join(backup_root, 'custom.yaml'), encoding="utf-8") as f:
custom_config = rtyaml.load(f)
if not isinstance(custom_config, dict): raise ValueError # caught below
config.update(custom_config)
except:
pass
@ -482,31 +604,43 @@ def get_backup_config(env, for_save=False, for_ui=False):
if config["target"] == "local":
# Expand to the full URL.
config["target"] = "file://" + config["file_target_directory"]
ssh_pub_key = os.path.join('/root', '.ssh', 'id_rsa_miab.pub')
if os.path.exists(ssh_pub_key):
with open(ssh_pub_key, encoding="utf-8") as f:
config["ssh_pub_key"] = f.read()
return config
def write_backup_config(env, newconfig):
backup_root = os.path.join(env["STORAGE_ROOT"], 'backup')
with open(os.path.join(backup_root, 'custom.yaml'), "w") as f:
with open(os.path.join(backup_root, 'custom.yaml'), "w", encoding="utf-8") as f:
f.write(rtyaml.dump(newconfig))
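The custom.yaml written here is plain YAML using the same keys the functions above read back. A hypothetical example follows; 'min_age_in_days' is an assumed key name for backup_set_custom()'s min_age value, since the stored name is not visible in this excerpt:

min_age_in_days: 3
target: rsync://username@backup.example.net/path/to/backups/
target_user: ''
target_pass: ''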
if __name__ == "__main__":
import sys
if sys.argv[-1] == "--verify":
# Run duplicity's verification command to check a) the backup files
# are readable, and b) report if they are up to date.
run_duplicity_verification()
elif sys.argv[-1] == "--list":
# List the saved backup files.
for fn, size in list_target_files(get_backup_config(load_environment())):
print(f"{fn}\t{size}")
elif sys.argv[-1] == "--status":
# Show backup status.
ret = backup_status(load_environment())
print(rtyaml.dump(ret["backups"]))
print("Storage for unmatched files:", ret["unmatched_file_size"])
elif len(sys.argv) >= 2 and sys.argv[1] == "--restore":
# Run duplicity restore. Rest of command line passed as arguments
# to duplicity. The restore path should be specified.
run_duplicity_restore(sys.argv[2:])
elif sys.argv[-1] == "--duplicity-command":
print_duplicity_command()
else:
# Perform a backup. Add --full to force a full backup rather than
# possibly performing an incremental backup.

management/cli.py Executable file

@ -0,0 +1,144 @@
#!/usr/bin/python3
#
# This is a command-line script for calling management APIs
# on the Mail-in-a-Box control panel backend. The script
# reads /var/lib/mailinabox/api.key for the backend's
# root API key. This file is readable only by root, so this
# tool can only be used as root.
import sys, getpass, urllib.request, urllib.error, urllib.parse, json, csv
import contextlib
def mgmt(cmd, data=None, is_json=False):
# The base URL for the management daemon. (Listens on IPv4 only.)
mgmt_uri = 'http://127.0.0.1:10222'
setup_key_auth(mgmt_uri)
req = urllib.request.Request(mgmt_uri + cmd, urllib.parse.urlencode(data).encode("utf8") if data else None)
try:
response = urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
if e.code == 401:
with contextlib.suppress(Exception):
print(e.read().decode("utf8"))
print("The management daemon refused access. The API key file may be out of sync. Try 'service mailinabox restart'.", file=sys.stderr)
elif hasattr(e, 'read'):
print(e.read().decode('utf8'), file=sys.stderr)
else:
print(e, file=sys.stderr)
sys.exit(1)
resp = response.read().decode('utf8')
if is_json: resp = json.loads(resp)
return resp
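For orientation, the two kinds of calls made through this helper later in the script look like this (the address and password are placeholders):

users = mgmt("/mail/users?format=json", is_json=True)                        # GET, parsed as JSON
print(mgmt("/mail/users/add", { "email": "user@example.com", "password": "secret-passphrase" }))  # POST form data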
def read_password():
while True:
first = getpass.getpass('password: ')
if len(first) < 8:
print("Passwords must be at least eight characters.")
continue
second = getpass.getpass(' (again): ')
if first != second:
print("Passwords not the same. Try again.")
continue
break
return first
def setup_key_auth(mgmt_uri):
with open('/var/lib/mailinabox/api.key', encoding='utf-8') as f:
key = f.read().strip()
auth_handler = urllib.request.HTTPBasicAuthHandler()
auth_handler.add_password(
realm='Mail-in-a-Box Management Server',
uri=mgmt_uri,
user=key,
passwd='')
opener = urllib.request.build_opener(auth_handler)
urllib.request.install_opener(opener)
if len(sys.argv) < 2:
print("""Usage:
{cli} user (lists users)
{cli} user add user@domain.com [password]
{cli} user password user@domain.com [password]
{cli} user remove user@domain.com
{cli} user make-admin user@domain.com
{cli} user remove-admin user@domain.com
{cli} user admins (lists admins)
{cli} user mfa show user@domain.com (shows MFA devices for user, if any)
{cli} user mfa disable user@domain.com [id] (disables MFA for user)
{cli} alias (lists aliases)
{cli} alias add incoming.name@domain.com sent.to@other.domain.com
{cli} alias add incoming.name@domain.com 'sent.to@other.domain.com, multiple.people@other.domain.com'
{cli} alias remove incoming.name@domain.com
Removing a mail user does not delete their mail folders on disk. It only prevents IMAP/SMTP login.
""".format(
cli="management/cli.py"
))
elif sys.argv[1] == "user" and len(sys.argv) == 2:
# Dump a list of users, one per line. Mark admins with an asterisk.
users = mgmt("/mail/users?format=json", is_json=True)
for domain in users:
for user in domain["users"]:
if user['status'] == 'inactive': continue
print(user['email'], end='')
if "admin" in user['privileges']:
print("*", end='')
print()
elif sys.argv[1] == "user" and sys.argv[2] in {"add", "password"}:
if len(sys.argv) < 5:
email = input('email: ') if len(sys.argv) < 4 else sys.argv[3]
pw = read_password()
else:
email, pw = sys.argv[3:5]
if sys.argv[2] == "add":
print(mgmt("/mail/users/add", { "email": email, "password": pw }))
elif sys.argv[2] == "password":
print(mgmt("/mail/users/password", { "email": email, "password": pw }))
elif sys.argv[1] == "user" and sys.argv[2] == "remove" and len(sys.argv) == 4:
print(mgmt("/mail/users/remove", { "email": sys.argv[3] }))
elif sys.argv[1] == "user" and sys.argv[2] in {"make-admin", "remove-admin"} and len(sys.argv) == 4:
action = 'add' if sys.argv[2] == 'make-admin' else 'remove'
print(mgmt("/mail/users/privileges/" + action, { "email": sys.argv[3], "privilege": "admin" }))
elif sys.argv[1] == "user" and sys.argv[2] == "admins":
# Dump a list of admin users.
users = mgmt("/mail/users?format=json", is_json=True)
for domain in users:
for user in domain["users"]:
if "admin" in user['privileges']:
print(user['email'])
elif sys.argv[1] == "user" and len(sys.argv) == 5 and sys.argv[2:4] == ["mfa", "show"]:
# Show MFA status for a user.
status = mgmt("/mfa/status", { "user": sys.argv[4] }, is_json=True)
W = csv.writer(sys.stdout)
W.writerow(["id", "type", "label"])
for mfa in status["enabled_mfa"]:
W.writerow([mfa["id"], mfa["type"], mfa["label"]])
elif sys.argv[1] == "user" and len(sys.argv) in {5, 6} and sys.argv[2:4] == ["mfa", "disable"]:
# Disable MFA (all or a particular device) for a user.
print(mgmt("/mfa/disable", { "user": sys.argv[4], "mfa-id": sys.argv[5] if len(sys.argv) == 6 else None }))
elif sys.argv[1] == "alias" and len(sys.argv) == 2:
print(mgmt("/mail/aliases"))
elif sys.argv[1] == "alias" and sys.argv[2] == "add" and len(sys.argv) == 5:
print(mgmt("/mail/aliases/add", { "address": sys.argv[3], "forwards_to": sys.argv[4] }))
elif sys.argv[1] == "alias" and sys.argv[2] == "remove" and len(sys.argv) == 4:
print(mgmt("/mail/aliases/remove", { "address": sys.argv[3] }))
else:
print("Invalid command-line arguments.")
sys.exit(1)


@ -1,31 +1,41 @@
#!/usr/bin/python3
#!/usr/local/lib/mailinabox/env/bin/python3
#
# The API can be accessed on the command line, e.g. use `curl` like so:
# curl --user $(</var/lib/mailinabox/api.key): http://localhost:10222/mail/users
#
# During development, you can start the Mail-in-a-Box control panel
# by running this script, e.g.:
#
# service mailinabox stop # stop the system process
# DEBUG=1 management/daemon.py
# service mailinabox start # when done debugging, start it up again
import os, os.path, re, json, time
import subprocess
import multiprocessing.pool
from functools import wraps
from flask import Flask, request, render_template, abort, Response, send_from_directory, make_response
from flask import Flask, request, render_template, Response, send_from_directory, make_response
import auth, utils, multiprocessing.pool
import auth, utils
from mailconfig import get_mail_users, get_mail_users_ex, get_admins, add_mail_user, set_mail_password, remove_mail_user
from mailconfig import get_mail_user_privileges, add_remove_mail_user_privilege
from mailconfig import get_mail_aliases, get_mail_aliases_ex, get_mail_domains, add_mail_alias, remove_mail_alias
from mfa import get_public_mfa_state, provision_totp, validate_totp_secret, enable_mfa, disable_mfa
import contextlib
env = utils.load_environment()
auth_service = auth.KeyAuthService()
auth_service = auth.AuthService()
# We may deploy via a symbolic link, which confuses flask's template finding.
me = __file__
try:
with contextlib.suppress(OSError):
me = os.readlink(__file__)
except OSError:
pass
# for generating CSRs we need a list of country codes
csr_country_codes = []
with open(os.path.join(os.path.dirname(me), "csr_country_codes.tsv")) as f:
with open(os.path.join(os.path.dirname(me), "csr_country_codes.tsv"), encoding="utf-8") as f:
for line in f:
if line.strip() == "" or line.startswith("#"): continue
code, name = line.strip().split("\t")[0:2]
@ -37,29 +47,39 @@ app = Flask(__name__, template_folder=os.path.abspath(os.path.join(os.path.dirna
def authorized_personnel_only(viewfunc):
@wraps(viewfunc)
def newview(*args, **kwargs):
# Authenticate the passed credentials, which is either the API key or a username:password pair.
# Authenticate the passed credentials, which is either the API key or a username:password pair
# and an optional X-Auth-Token token.
error = None
privs = []
try:
email, privs = auth_service.authenticate(request, env)
except ValueError as e:
# Authentication failed.
privs = []
error = "Incorrect username or password"
# Write a line in the log recording the failed login, unless no authorization header
# was given which can happen on an initial request before a 403 response.
if "Authorization" in request.headers:
log_failed_login(request)
# Write a line in the log recording the failed login
log_failed_login(request)
# Authentication failed.
error = str(e)
# Authorized to access an API view?
if "admin" in privs:
# Store the email address of the logged in user so it can be accessed
# from the API methods that affect the calling user.
request.user_email = email
request.user_privs = privs
# Call view func.
return viewfunc(*args, **kwargs)
elif not error:
if not error:
error = "You are not an administrator."
# Not authorized. Return a 401 (send auth) and a prompt to authorize by default.
status = 401
headers = {
'WWW-Authenticate': 'Basic realm="{0}"'.format(auth_service.auth_realm),
'WWW-Authenticate': f'Basic realm="{auth_service.auth_realm}"',
'X-Reason': error,
}
@ -69,7 +89,7 @@ def authorized_personnel_only(viewfunc):
status = 403
headers = None
if request.headers.get('Accept') in (None, "", "*/*"):
if request.headers.get('Accept') in {None, "", "*/*"}:
# Return plain text output.
return Response(error+"\n", status=status, mimetype='text/plain', headers=headers)
else:
@ -85,8 +105,8 @@ def authorized_personnel_only(viewfunc):
def unauthorized(error):
return auth_service.make_unauthorized_response()
def json_response(data):
return Response(json.dumps(data, indent=2, sort_keys=True)+'\n', status=200, mimetype='application/json')
def json_response(data, status=200):
return Response(json.dumps(data, indent=2, sort_keys=True)+'\n', status=status, mimetype='application/json')
###################################
@ -100,9 +120,9 @@ def index():
no_users_exist = (len(get_mail_users(env)) == 0)
no_admins_exist = (len(get_admins(env)) == 0)
utils.fix_boto() # must call prior to importing boto
import boto.s3
backup_s3_hosts = [(r.name, r.endpoint) for r in boto.s3.regions()]
import boto3.s3
backup_s3_hosts = [(r, f"s3.{r}.amazonaws.com") for r in boto3.session.Session().get_available_regions('s3')]
return render_template('index.html',
hostname=env['PRIMARY_HOSTNAME'],
@ -115,40 +135,56 @@ def index():
csr_country_codes=csr_country_codes,
)
@app.route('/me')
def me():
# Create a session key by checking the username/password in the Authorization header.
@app.route('/login', methods=["POST"])
def login():
# Is the caller authorized?
try:
email, privs = auth_service.authenticate(request, env)
email, privs = auth_service.authenticate(request, env, login_only=True)
except ValueError as e:
# Log the failed login
log_failed_login(request)
return json_response({
"status": "invalid",
"reason": "Incorrect username or password",
if "missing-totp-token" in str(e):
return json_response({
"status": "missing-totp-token",
"reason": str(e),
})
else:
# Log the failed login
log_failed_login(request)
return json_response({
"status": "invalid",
"reason": str(e),
})
# Return a new session for the user.
resp = {
"status": "ok",
"email": email,
"privileges": privs,
"api_key": auth_service.create_session_key(email, env, type='login'),
}
# Is authorized as admin? Return an API key for future use.
if "admin" in privs:
resp["api_key"] = auth_service.create_user_key(email, env)
app.logger.info(f"New login session created for {email}")
# Return.
return json_response(resp)
@app.route('/logout', methods=["POST"])
def logout():
try:
email, _ = auth_service.authenticate(request, env, logout=True)
app.logger.info(f"{email} logged out")
except ValueError:
pass
finally:
return json_response({ "status": "ok" })
# MAIL
@app.route('/mail/users')
@authorized_personnel_only
def mail_users():
if request.args.get("format", "") == "json":
return json_response(get_mail_users_ex(env, with_archived=True, with_slow_info=True))
return json_response(get_mail_users_ex(env, with_archived=True))
else:
return "".join(x+"\n" for x in get_mail_users(env))
@ -198,7 +234,7 @@ def mail_aliases():
if request.args.get("format", "") == "json":
return json_response(get_mail_aliases_ex(env))
else:
return "".join(address+"\t"+receivers+"\t"+(senders or "")+"\n" for address, receivers, senders in get_mail_aliases(env))
return "".join(address+"\t"+receivers+"\t"+(senders or "")+"\n" for address, receivers, senders, auto in get_mail_aliases(env))
@app.route('/mail/aliases/add', methods=['POST'])
@authorized_personnel_only
@ -256,17 +292,50 @@ def dns_set_secondary_nameserver():
@app.route('/dns/custom')
@authorized_personnel_only
def dns_get_records(qname=None, rtype=None):
from dns_update import get_custom_dns_config
return json_response([
{
"qname": r[0],
"rtype": r[1],
"value": r[2],
}
for r in get_custom_dns_config(env)
if r[0] != "_secondary_nameserver"
and (not qname or r[0] == qname)
and (not rtype or r[1] == rtype) ])
# Get the current set of custom DNS records.
from dns_update import get_custom_dns_config, get_dns_zones
records = get_custom_dns_config(env, only_real_records=True)
# Filter per the arguments for the more complex GET routes below.
records = [r for r in records
if (not qname or r[0] == qname)
and (not rtype or r[1] == rtype) ]
# Make a better data structure.
records = [
{
"qname": r[0],
"rtype": r[1],
"value": r[2],
"sort-order": { },
} for r in records ]
# To help with grouping by zone in qname sorting, label each record with which zone it is in.
# There's an inconsistency in how we handle zones in get_dns_zones and in sort_domains, so
# do this first before sorting the domains within the zones.
zones = utils.sort_domains([z[0] for z in get_dns_zones(env)], env)
for r in records:
for z in zones:
if r["qname"] == z or r["qname"].endswith("." + z):
r["zone"] = z
break
# Add sorting information. The 'created' order follows the order in the YAML file on disk,
# which tracks the order entries were added in the control panel since we append to the end.
# The 'qname' sort order sorts by our standard domain name sort (by zone then by qname),
# then by rtype, and last by the original order in the YAML file (since sorting by value
# may not make sense, unless we parse IP addresses, for example).
for i, r in enumerate(records):
r["sort-order"]["created"] = i
domain_sort_order = utils.sort_domains([r["qname"] for r in records], env)
for i, r in enumerate(sorted(records, key = lambda r : (
zones.index(r["zone"]) if r.get("zone") else 0, # record is not within a zone managed by the box
domain_sort_order.index(r["qname"]),
r["rtype"]))):
r["sort-order"]["qname"] = i
# Return.
return json_response(records)
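Each element of the JSON array returned here now has the shape below (illustrative values):

{
    "qname": "www.example.com",
    "rtype": "A",
    "value": "203.0.113.1",
    "zone": "example.com",                      # present only when the qname falls inside a zone this box serves
    "sort-order": { "created": 0, "qname": 3 }
}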
@app.route('/dns/custom/<qname>', methods=['GET', 'POST', 'PUT', 'DELETE'])
@app.route('/dns/custom/<qname>/<rtype>', methods=['GET', 'POST', 'PUT', 'DELETE'])
@ -285,9 +354,9 @@ def dns_set_record(qname, rtype="A"):
# Get the existing records matching the qname and rtype.
return dns_get_records(qname, rtype)
elif request.method in ("POST", "PUT"):
elif request.method in {"POST", "PUT"}:
# There is a default value for A/AAAA records.
if rtype in ("A", "AAAA") and value == "":
if rtype in {"A", "AAAA"} and value == "":
value = request.environ.get("HTTP_X_FORWARDED_FOR") # normally REMOTE_ADDR but we're behind nginx as a reverse proxy
# Cannot add empty records.
@ -326,6 +395,12 @@ def dns_get_dump():
from dns_update import build_recommended_dns
return json_response(build_recommended_dns(env))
@app.route('/dns/zonefile/<zone>')
@authorized_personnel_only
def dns_get_zonefile(zone):
from dns_update import get_dns_zonefile
return Response(get_dns_zonefile(zone, env), status=200, mimetype='text/plain')
# SSL
@app.route('/ssl/status')
@ -335,11 +410,16 @@ def ssl_get_status():
from web_update import get_web_domains_info, get_web_domains
# What domains can we provision certificates for? What unexpected problems do we have?
provision, cant_provision = get_certificates_to_provision(env, show_extended_problems=False)
provision, cant_provision = get_certificates_to_provision(env, show_valid_certs=False)
# What's the current status of TLS certificates on all of the domain?
domains_status = get_web_domains_info(env)
domains_status = [{ "domain": d["domain"], "status": d["ssl_certificate"][0], "text": d["ssl_certificate"][1] } for d in domains_status ]
domains_status = [
{
"domain": d["domain"],
"status": d["ssl_certificate"][0],
"text": d["ssl_certificate"][1] + (" " + cant_provision[d["domain"]] if d["domain"] in cant_provision else "")
} for d in domains_status ]
# Warn the user about domain names not hosted here because of other settings.
for domain in set(get_web_domains(env, exclude_dns_elsewhere=False)) - set(get_web_domains(env)):
@ -351,7 +431,6 @@ def ssl_get_status():
return json_response({
"can_provision": utils.sort_domains(provision, env),
"cant_provision": [{ "domain": domain, "problem": cant_provision[domain] } for domain in utils.sort_domains(cant_provision, env) ],
"status": domains_status,
})
@ -378,12 +457,63 @@ def ssl_install_cert():
@authorized_personnel_only
def ssl_provision_certs():
from ssl_certificates import provision_certificates
agree_to_tos_url = request.form.get('agree_to_tos_url')
status = provision_certificates(env,
agree_to_tos_url=agree_to_tos_url,
jsonable=True)
return json_response(status)
requests = provision_certificates(env, limit_domains=None)
return json_response({ "requests": requests })
# multi-factor auth
@app.route('/mfa/status', methods=['POST'])
@authorized_personnel_only
def mfa_get_status():
# Anyone accessing this route is an admin, and we permit them to
# see the MFA status for any user if they submit a 'user' form
# field. But we don't include provisioning info since a user can
# only provision for themselves.
email = request.form.get('user', request.user_email) # user field if given, otherwise the user making the request
try:
resp = {
"enabled_mfa": get_public_mfa_state(email, env)
}
if email == request.user_email:
resp.update({
"new_mfa": {
"totp": provision_totp(email, env)
}
})
except ValueError as e:
return (str(e), 400)
return json_response(resp)
@app.route('/mfa/totp/enable', methods=['POST'])
@authorized_personnel_only
def totp_post_enable():
secret = request.form.get('secret')
token = request.form.get('token')
label = request.form.get('label')
if not isinstance(token, str):
return ("Bad Input", 400)
try:
validate_totp_secret(secret)
enable_mfa(request.user_email, "totp", secret, token, label, env)
except ValueError as e:
return (str(e), 400)
return "OK"
@app.route('/mfa/disable', methods=['POST'])
@authorized_personnel_only
def totp_post_disable():
# Anyone accessing this route is an admin, and we permit them to
# disable the MFA status for any user if they submit a 'user' form
# field.
email = request.form.get('user', request.user_email) # user field if given, otherwise the user making the request
try:
result = disable_mfa(email, request.form.get('mfa-id') or None, env) # convert empty string to None
except ValueError as e:
return (str(e), 400)
if result: # success
return "OK"
else: # error
return ("Invalid user or MFA id.", 400)
# WEB
@ -438,9 +568,10 @@ def system_status():
self.items[-1]["extra"].append({ "text": message, "monospace": monospace })
output = WebOutput()
# Create a temporary pool of processes for the status checks
pool = multiprocessing.pool.Pool(processes=5)
run_checks(False, env, output, pool)
pool.terminate()
with multiprocessing.pool.Pool(processes=5) as pool:
run_checks(False, env, output, pool)
pool.close()
pool.join()
return json_response(output.items)
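A side note on the with-block above: Pool.__exit__ only calls terminate(), so the explicit close() and join() are what guarantee all checks finish before the pool is torn down. A condensed illustration:

from multiprocessing.pool import Pool

with Pool(processes=5) as pool:
    results = pool.map(len, ["a", "bb", "ccc"])   # stand-in for run_checks(...)
    pool.close()                                  # no more tasks will be submitted
    pool.join()                                   # wait for workers; __exit__ would otherwise terminate() them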
@app.route('/system/updates')
@ -448,8 +579,7 @@ def system_status():
def show_updates():
from status_checks import list_apt_updates
return "".join(
"%s (%s)\n"
% (p["package"], p["version"])
"{} ({})\n".format(p["package"], p["version"])
for p in list_apt_updates())
@app.route('/system/update-packages', methods=["POST"])
@ -524,16 +654,42 @@ def privacy_status_set():
# MUNIN
@app.route('/munin/')
@app.route('/munin/<path:filename>')
@authorized_personnel_only
def munin(filename=""):
# Checks administrative access (@authorized_personnel_only) and then just proxies
# the request to static files.
def munin_start():
# Munin pages, static images, and dynamically generated images are served
# outside of the AJAX API. We'll start with a 'start' API that sets a cookie
# that subsequent requests will read for authorization. (We don't use cookies
# for the API to avoid CSRF vulnerabilities.)
response = make_response("OK")
response.set_cookie("session", auth_service.create_session_key(request.user_email, env, type='cookie'),
max_age=60*30, secure=True, httponly=True, samesite="Strict") # 30 minute duration
return response
def check_request_cookie_for_admin_access():
session = auth_service.get_session(None, request.cookies.get("session", ""), "cookie", env)
if not session: return False
privs = get_mail_user_privileges(session["email"], env)
if not isinstance(privs, list): return False
if "admin" not in privs: return False
return True
def authorized_personnel_only_via_cookie(f):
@wraps(f)
def g(*args, **kwargs):
if not check_request_cookie_for_admin_access():
return Response("Unauthorized", status=403, mimetype='text/plain', headers={})
return f(*args, **kwargs)
return g
@app.route('/munin/<path:filename>')
@authorized_personnel_only_via_cookie
def munin_static_file(filename=""):
# Proxy the request to static files.
if filename == "": filename = "index.html"
return send_from_directory("/var/cache/munin/www", filename)
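One plausible way to drive these routes outside the control panel, sketched under assumptions: the URL is a placeholder for wherever the daemon is reachable, and the root API key is read the way cli.py reads it (the panel itself authenticates the 'start' request with the logged-in admin's credentials instead):

import base64, http.cookiejar, urllib.request

with open("/var/lib/mailinabox/api.key") as f:
    key = f.read().strip()
auth = "Basic " + base64.b64encode((key + ":").encode()).decode("ascii")
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# munin_start() responds with the short-lived 'session' cookie...
start = urllib.request.Request("https://box.example.com/admin/munin/", headers={"Authorization": auth})
opener.open(start)
# ...which authorized_personnel_only_via_cookie then checks for the static-file and CGI routes.
page = opener.open("https://box.example.com/admin/munin/index.html").read()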
@app.route('/munin/cgi-graph/<path:filename>')
@authorized_personnel_only
@authorized_personnel_only_via_cookie
def munin_cgi(filename):
""" Relay munin cgi dynazoom requests
/usr/lib/munin/cgi/munin-cgi-graph is a perl cgi script in the munin package
@ -541,10 +697,9 @@ def munin_cgi(filename):
headers based on parameters in the requesting URL. All output is written
to stdout which munin_cgi splits into response headers and binary response
data.
munin-cgi-graph reads environment variables as well as passed input to determine
munin-cgi-graph reads environment variables to determine
what it should do. It expects a path to be in the env-var PATH_INFO, and a
querystring to be in the env-var QUERY_STRING as well as passed as input to the
command.
querystring to be in the env-var QUERY_STRING.
munin-cgi-graph has several failure modes. Some write HTTP Status headers and
others return nonzero exit codes.
Situating munin_cgi between the user-agent and munin-cgi-graph enables keeping
@ -552,7 +707,7 @@ def munin_cgi(filename):
support infrastructure like spawn-fcgi.
"""
COMMAND = 'su - munin --preserve-environment --shell=/bin/bash -c /usr/lib/munin/cgi/munin-cgi-graph "%s"'
COMMAND = 'su munin --preserve-environment --shell=/bin/bash -c /usr/lib/munin/cgi/munin-cgi-graph'
# su changes user, we use the munin user here
# --preserve-environment retains the environment, which is where Popen's `env` data is
# --shell=/bin/bash ensures the shell used is bash
@ -564,19 +719,17 @@ def munin_cgi(filename):
query_str = request.query_string.decode("utf-8", 'ignore')
env = {'PATH_INFO': '/%s/' % filename, 'QUERY_STRING': query_str}
cmd = COMMAND % query_str
env = {'PATH_INFO': '/%s/' % filename, 'REQUEST_METHOD': 'GET', 'QUERY_STRING': query_str}
code, binout = utils.shell('check_output',
cmd.split(' ', 5),
# Using a maxsplit of 5 keeps the last 2 arguments together
input=query_str.encode('UTF-8'),
COMMAND.split(" ", 5),
# Using a maxsplit of 5 keeps the last arguments together
env=env,
return_bytes=True,
trap=True)
if code != 0:
# nonzero returncode indicates error
app.logger.error("munin_cgi: munin-cgi-graph returned nonzero exit code, %s", process.returncode)
app.logger.error("munin_cgi: munin-cgi-graph returned nonzero exit code, %s", code)
return ("error processing graph image", 500)
# /usr/lib/munin/cgi/munin-cgi-graph returns both headers and binary png when successful.
@ -596,32 +749,24 @@ def log_failed_login(request):
# During setup we call the management interface directly to determine the user
# status. So we can't always use X-Forwarded-For because during setup that header
# will not be present.
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr
ip = request.headers.getlist("X-Forwarded-For")[0] if request.headers.getlist("X-Forwarded-For") else request.remote_addr
# We need to add a timestamp to the log message, otherwise /dev/log will eat the "duplicate"
# message.
app.logger.warning( "Mail-in-a-Box Management Daemon: Failed login attempt from ip %s - timestamp %s" % (ip, time.time()))
app.logger.warning( f"Mail-in-a-Box Management Daemon: Failed login attempt from ip {ip} - timestamp {time.time()}")
# APP
if __name__ == '__main__':
if "DEBUG" in os.environ: app.debug = True
if "APIKEY" in os.environ: auth_service.key = os.environ["APIKEY"]
if "DEBUG" in os.environ:
# Turn on Flask debugging.
app.debug = True
if not app.debug:
app.logger.addHandler(utils.create_syslog_handler())
# For testing on the command line, you can use `curl` like so:
# curl --user $(</var/lib/mailinabox/api.key): http://localhost:10222/mail/users
auth_service.write_key()
# For testing in the browser, you can copy the API key that's output to the
# debug console and enter that as the username
app.logger.info('API key: ' + auth_service.key)
#app.logger.info('API key: ' + auth_service.key)
# Start the application server. Listens on 127.0.0.1 (IPv4 only).
app.run(port=10222)


@ -9,11 +9,17 @@ export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_TYPE=en_US.UTF-8
# On Mondays, i.e. once a week, send the administrator a report of total emails
# sent and received so the admin might notice server abuse.
if [ "$(date "+%u")" -eq 1 ]; then
management/mail_log.py -t week | management/email_administrator.py "Mail-in-a-Box Usage Report"
fi
# Take a backup.
management/backup.py | management/email_administrator.py "Backup Status"
management/backup.py 2>&1 | management/email_administrator.py "Backup Status"
# Provision any new certificates for new domains or domains with expiring certificates.
management/ssl_certificates.py --headless | management/email_administrator.py "Error Provisioning TLS Certificate"
management/ssl_certificates.py -q 2>&1 | management/email_administrator.py "TLS Certificate Provisioning Result"
# Run status checks and email the administrator if anything changed.
management/status_checks.py --show-changes | management/email_administrator.py "Status Checks Change Notice"
management/status_checks.py --show-changes 2>&1 | management/email_administrator.py "Status Checks Change Notice"


@ -1,22 +1,33 @@
#!/usr/bin/python3
#!/usr/local/lib/mailinabox/env/bin/python
# Creates DNS zone files for all of the domains of all of the mail users
# and mail aliases and restarts nsd.
########################################################################
import sys, os, os.path, urllib.parse, datetime, re, hashlib, base64
import sys, os, os.path, datetime, re, hashlib, base64
import ipaddress
import rtyaml
import dns.resolver
from mailconfig import get_mail_domains
from utils import shell, load_env_vars_from_file, safe_domain_name, sort_domains
from utils import shell, load_env_vars_from_file, safe_domain_name, sort_domains, get_ssh_port
from ssl_certificates import get_ssl_certificates, check_certificate
import contextlib
# From https://stackoverflow.com/questions/3026957/how-to-validate-a-domain-name-using-regex-php/16491074#16491074
# This regular expression matches domain names according to RFCs; it also accepts FQDNs with a leading dot,
# underscores, as well as asterisks, which are allowed in domain names but not hostnames (i.e. allowed in
# DNS but not in URLs), which are common in certain record types like for DKIM.
DOMAIN_RE = r"^(?!\-)(?:[*][.])?(?:[a-zA-Z\d\-_]{0,62}[a-zA-Z\d_]\.){1,126}(?!\d+)[a-zA-Z\d_]{1,63}(\.?)$"
def get_dns_domains(env):
# Add all domain names in use by email users and mail aliases and ensure
# PRIMARY_HOSTNAME is in the list.
# Add all domain names in use by email users and mail aliases, any
# domains we serve web for (except www redirects because that would
# lead to infinite recursion here) and ensure PRIMARY_HOSTNAME is in the list.
from mailconfig import get_mail_domains
from web_update import get_web_domains
domains = set()
domains |= get_mail_domains(env)
domains |= set(get_mail_domains(env))
domains |= set(get_web_domains(env, include_www_redirects=False))
domains.add(env['PRIMARY_HOSTNAME'])
return domains
@ -28,7 +39,7 @@ def get_dns_zones(env):
# Exclude domains that are subdomains of other domains we know. Proceed
# by looking at shorter domains first.
zone_domains = set()
for domain in sorted(domains, key=lambda d : len(d)):
for domain in sorted(domains, key=len):
for d in zone_domains:
if domain.endswith("." + d):
# We found a parent domain already in the list.
@ -38,9 +49,7 @@ def get_dns_zones(env):
zone_domains.add(domain)
# Make a nice and safe filename for each domain.
zonefiles = []
for domain in zone_domains:
zonefiles.append([domain, safe_domain_name(domain) + ".txt"])
zonefiles = [[domain, safe_domain_name(domain) + ".txt"] for domain in zone_domains]
# Sort the list so that the order is nice and so that nsd.conf has a
# stable order so we don't rewrite the file & restart the service
@ -86,11 +95,21 @@ def do_dns_update(env, force=False):
if len(updated_domains) == 0:
updated_domains.append("DNS configuration")
# Kick nsd if anything changed.
# Tell nsd to reload changed zone files.
if len(updated_domains) > 0:
shell('check_call', ["/usr/sbin/service", "nsd", "restart"])
# 'reconfig' is needed if there are added or removed zones, but
# it may not reload existing zones, so we call 'reload' too. If
# nsd isn't running, nsd-control fails, so in that case revert
# to restarting nsd to make sure it is running. Restarting nsd
# should also refresh everything.
try:
shell('check_call', ["/usr/sbin/nsd-control", "reconfig"])
shell('check_call', ["/usr/sbin/nsd-control", "reload"])
except:
shell('check_call', ["/usr/sbin/service", "nsd", "restart"])
# Write the OpenDKIM configuration tables for all of the domains.
# Write the OpenDKIM configuration tables for all of the mail domains.
from mailconfig import get_mail_domains
if write_opendkim_tables(get_mail_domains(env), env):
# Settings changed. Kick opendkim.
shell('check_call', ["/usr/sbin/service", "opendkim", "restart"])
@ -115,18 +134,47 @@ def build_zones(env):
domains = get_dns_domains(env)
zonefiles = get_dns_zones(env)
# Custom records to add to zones.
additional_records = list(get_custom_dns_config(env))
# Create a dictionary of domains to a set of attributes for each
# domain, such as whether there are mail users at the domain.
from mailconfig import get_mail_domains
from web_update import get_web_domains
www_redirect_domains = set(get_web_domains(env)) - set(get_web_domains(env, include_www_redirects=False))
mail_domains = set(get_mail_domains(env))
mail_user_domains = set(get_mail_domains(env, users_only=True)) # i.e. will log in for mail, Nextcloud
web_domains = set(get_web_domains(env))
auto_domains = web_domains - set(get_web_domains(env, include_auto=False))
domains |= auto_domains # www redirects not included in the initial list, see above
# Add ns1/ns2+PRIMARY_HOSTNAME which must also have A/AAAA records
# when the box is acting as authoritative DNS server for its domains.
for ns in ("ns1", "ns2"):
d = ns + "." + env["PRIMARY_HOSTNAME"]
domains.add(d)
auto_domains.add(d)
domains = {
domain: {
"user": domain in mail_user_domains,
"mail": domain in mail_domains,
"web": domain in web_domains,
"auto": domain in auto_domains,
}
for domain in domains
}
# For MTA-STS, we'll need to check if the PRIMARY_HOSTNAME certificate is
# signed and valid. Check that now rather than repeatedly for each domain.
domains[env["PRIMARY_HOSTNAME"]]["certificate-is-valid"] = is_domain_cert_signed_and_valid(env["PRIMARY_HOSTNAME"], env)
# Load custom records to add to zones.
additional_records = list(get_custom_dns_config(env))
# Build DNS records for each zone.
for domain, zonefile in zonefiles:
# Build the records to put in the zone.
records = build_zone(domain, domains, additional_records, www_redirect_domains, env)
records = build_zone(domain, domains, additional_records, env)
yield (domain, zonefile, records)
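With a PRIMARY_HOSTNAME of box.example.com, the mapping built above would look roughly like this (illustrative values):

domains = {
    "box.example.com":     {"user": True,  "mail": True,  "web": True,  "auto": False, "certificate-is-valid": True},
    "example.com":         {"user": True,  "mail": True,  "web": True,  "auto": False},
    "www.example.com":     {"user": False, "mail": False, "web": True,  "auto": True},
    "ns1.box.example.com": {"user": False, "mail": False, "web": False, "auto": True},
}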
def build_zone(domain, all_domains, additional_records, www_redirect_domains, env, is_zone=True):
def build_zone(domain, domain_properties, additional_records, env, is_zone=True):
records = []
# For top-level zones, define the authoritative name servers.
@ -138,28 +186,18 @@ def build_zone(domain, all_domains, additional_records, www_redirect_domains, en
# 'False' in the tuple indicates these records would not be used if the zone
# is managed outside of the box.
if is_zone:
# Obligatory definition of ns1.PRIMARY_HOSTNAME.
# Obligatory NS record to ns1.PRIMARY_HOSTNAME.
records.append((None, "NS", "ns1.%s." % env["PRIMARY_HOSTNAME"], False))
# Define ns2.PRIMARY_HOSTNAME or whatever the user overrides.
# NS record to ns2.PRIMARY_HOSTNAME or whatever the user overrides.
# User may provide one or more additional nameservers
secondary_ns_list = get_secondary_dns(additional_records, mode="NS") \
or ["ns2." + env["PRIMARY_HOSTNAME"]]
for secondary_ns in secondary_ns_list:
records.append((None, "NS", secondary_ns+'.', False))
or ["ns2." + env["PRIMARY_HOSTNAME"]]
records.extend((None, "NS", secondary_ns+'.', False) for secondary_ns in secondary_ns_list)
# In PRIMARY_HOSTNAME...
if domain == env["PRIMARY_HOSTNAME"]:
# Define ns1 and ns2.
# 'False' in the tuple indicates these records would not be used if the zone
# is managed outside of the box.
records.append(("ns1", "A", env["PUBLIC_IP"], False))
records.append(("ns2", "A", env["PUBLIC_IP"], False))
if env.get('PUBLIC_IPV6'):
records.append(("ns1", "AAAA", env["PUBLIC_IPV6"], False))
records.append(("ns2", "AAAA", env["PUBLIC_IPV6"], False))
# Set the A/AAAA records. Do this early for the PRIMARY_HOSTNAME so that the user cannot override them
# and we can provide different explanatory text.
records.append((None, "A", env["PUBLIC_IP"], "Required. Sets the IP address of the box."))
@ -172,28 +210,25 @@ def build_zone(domain, all_domains, additional_records, www_redirect_domains, en
records.append(("_443._tcp", "TLSA", build_tlsa_record(env), "Optional. When DNSSEC is enabled, provides out-of-band HTTPS certificate validation for a few web clients that support it."))
# Add SSHFP records to help SSH key validation. One per available SSH key on this system.
for value in build_sshfp_records():
records.append((None, "SSHFP", value, "Optional. Provides an out-of-band method for verifying an SSH key before connecting. Use 'VerifyHostKeyDNS yes' (or 'VerifyHostKeyDNS ask') when connecting with ssh."))
records.extend((None, "SSHFP", value, "Optional. Provides an out-of-band method for verifying an SSH key before connecting. Use 'VerifyHostKeyDNS yes' (or 'VerifyHostKeyDNS ask') when connecting with ssh.") for value in build_sshfp_records())
# Add DNS records for any subdomains of this domain. We should not have a zone for
# both a domain and one of its subdomains.
subdomains = [d for d in all_domains if d.endswith("." + domain)]
for subdomain in subdomains:
subdomain_qname = subdomain[0:-len("." + domain)]
subzone = build_zone(subdomain, [], additional_records, www_redirect_domains, env, is_zone=False)
for child_qname, child_rtype, child_value, child_explanation in subzone:
if child_qname == None:
child_qname = subdomain_qname
else:
child_qname += "." + subdomain_qname
records.append((child_qname, child_rtype, child_value, child_explanation))
if is_zone: # don't recurse when we're just loading data for a subdomain
subdomains = [d for d in domain_properties if d.endswith("." + domain)]
for subdomain in subdomains:
subdomain_qname = subdomain[0:-len("." + domain)]
subzone = build_zone(subdomain, domain_properties, additional_records, env, is_zone=False)
for child_qname, child_rtype, child_value, child_explanation in subzone:
if child_qname is None:
child_qname = subdomain_qname
else:
child_qname += "." + subdomain_qname
records.append((child_qname, child_rtype, child_value, child_explanation))
has_rec_base = list(records) # clone current state
def has_rec(qname, rtype, prefix=None):
for rec in has_rec_base:
if rec[0] == qname and rec[1] == rtype and (prefix is None or rec[2].startswith(prefix)):
return True
return False
return any(rec[0] == qname and rec[1] == rtype and (prefix is None or rec[2].startswith(prefix)) for rec in has_rec_base)
# The user may set other records that don't conflict with our settings.
# Don't put any TXT records above this line, or it'll prevent any custom TXT records.
@ -213,21 +248,23 @@ def build_zone(domain, all_domains, additional_records, www_redirect_domains, en
continue
records.append((qname, rtype, value, "(Set by user.)"))
# Add defaults if not overridden by the user's custom settings (and not otherwise configured).
# Add A/AAAA defaults if not overridden by the user's custom settings (and not otherwise configured).
# Any CNAME or A record on the qname overrides A and AAAA. But when we set the default A record,
# we should not cause the default AAAA record to be skipped because it thinks a custom A record
# was set. So set has_rec_base to a clone of the current set of DNS settings, and don't update
# during this process.
has_rec_base = list(records)
a_expl = "Required. May have a different value. Sets the IP address that %s resolves to for web hosting and other services besides mail. The A record must be present but its value does not affect mail delivery." % domain
if domain_properties[domain]["auto"]:
if domain.startswith(("ns1.", "ns2.")): a_expl = False # omit from 'External DNS' page since this only applies if box is its own DNS server
if domain.startswith("www."): a_expl = "Optional. Sets the IP address that %s resolves to so that the box can provide a redirect to the parent domain." % domain
if domain.startswith("mta-sts."): a_expl = "Optional. MTA-STS Policy Host serving /.well-known/mta-sts.txt."
if domain.startswith("autoconfig."): a_expl = "Provides email configuration autodiscovery support for Thunderbird Autoconfig."
if domain.startswith("autodiscover."): a_expl = "Provides email configuration autodiscovery support for Z-Push ActiveSync Autodiscover."
defaults = [
(None, "A", env["PUBLIC_IP"], "Required. May have a different value. Sets the IP address that %s resolves to for web hosting and other services besides mail. The A record must be present but its value does not affect mail delivery." % domain),
(None, "A", env["PUBLIC_IP"], a_expl),
(None, "AAAA", env.get('PUBLIC_IPV6'), "Optional. Sets the IPv6 address that %s resolves to, e.g. for web hosting. (It is not necessary for receiving mail on this domain.)" % domain),
]
if "www." + domain in www_redirect_domains:
defaults += [
("www", "A", env["PUBLIC_IP"], "Optional. Sets the IP address that www.%s resolves to so that the box can provide a redirect to the parent domain." % domain),
("www", "AAAA", env.get('PUBLIC_IPV6'), "Optional. Sets the IPv6 address that www.%s resolves to so that the box can provide a redirect to the parent domain." % domain),
]
for qname, rtype, value, explanation in defaults:
if value is None or value.strip() == "": continue # skip IPV6 if not set
if not is_zone and qname == "www": continue # don't create any default 'www' subdomains on what are themselves subdomains
@ -241,52 +278,111 @@ def build_zone(domain, all_domains, additional_records, www_redirect_domains, en
# Don't pin the list of records that has_rec checks against anymore.
has_rec_base = records
# The MX record says where email for the domain should be delivered: Here!
if not has_rec(None, "MX", prefix="10 "):
records.append((None, "MX", "10 %s." % env["PRIMARY_HOSTNAME"], "Required. Specifies the hostname (and priority) of the machine that handles @%s mail." % domain))
if domain_properties[domain]["mail"]:
# The MX record says where email for the domain should be delivered: Here!
if not has_rec(None, "MX", prefix="10 "):
records.append((None, "MX", "10 %s." % env["PRIMARY_HOSTNAME"], "Required. Specifies the hostname (and priority) of the machine that handles @%s mail." % domain))
# SPF record: Permit the box ('mx', see above) to send mail on behalf of
# the domain, and no one else.
# Skip if the user has set a custom SPF record.
if not has_rec(None, "TXT", prefix="v=spf1 "):
records.append((None, "TXT", 'v=spf1 mx -all', "Recommended. Specifies that only the box is permitted to send @%s mail." % domain))
# SPF record: Permit the box ('mx', see above) to send mail on behalf of
# the domain, and no one else.
# Skip if the user has set a custom SPF record.
if not has_rec(None, "TXT", prefix="v=spf1 "):
records.append((None, "TXT", 'v=spf1 mx -all', "Recommended. Specifies that only the box is permitted to send @%s mail." % domain))
# Append the DKIM TXT record to the zone as generated by OpenDKIM.
# Skip if the user has set a DKIM record already.
opendkim_record_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/mail.txt')
with open(opendkim_record_file) as orf:
m = re.match(r'(\S+)\s+IN\s+TXT\s+\( ((?:"[^"]+"\s+)+)\)', orf.read(), re.S)
val = "".join(re.findall(r'"([^"]+)"', m.group(2)))
if not has_rec(m.group(1), "TXT", prefix="v=DKIM1; "):
records.append((m.group(1), "TXT", val, "Recommended. Provides a way for recipients to verify that this machine sent @%s mail." % domain))
# Append the DKIM TXT record to the zone as generated by OpenDKIM.
# Skip if the user has set a DKIM record already.
opendkim_record_file = os.path.join(env['STORAGE_ROOT'], 'mail/dkim/mail.txt')
with open(opendkim_record_file, encoding="utf-8") as orf:
m = re.match(r'(\S+)\s+IN\s+TXT\s+\( ((?:"[^"]+"\s+)+)\)', orf.read(), re.S)
val = "".join(re.findall(r'"([^"]+)"', m.group(2)))
if not has_rec(m.group(1), "TXT", prefix="v=DKIM1; "):
records.append((m.group(1), "TXT", val, "Recommended. Provides a way for recipients to verify that this machine sent @%s mail." % domain))
# Append a DMARC record.
# Skip if the user has set a DMARC record already.
if not has_rec("_dmarc", "TXT", prefix="v=DMARC1; "):
records.append(("_dmarc", "TXT", 'v=DMARC1; p=quarantine', "Recommended. Specifies that mail that does not originate from the box but claims to be from @%s or which does not have a valid DKIM signature is suspect and should be quarantined by the recipient's mail system." % domain))
# Append a DMARC record.
# Skip if the user has set a DMARC record already.
if not has_rec("_dmarc", "TXT", prefix="v=DMARC1; "):
records.append(("_dmarc", "TXT", 'v=DMARC1; p=quarantine;', "Recommended. Specifies that mail that does not originate from the box but claims to be from @%s or which does not have a valid DKIM signature is suspect and should be quarantined by the recipient's mail system." % domain))
# For any subdomain with an A record but no SPF or DMARC record, add strict policy records.
all_resolvable_qnames = set(r[0] for r in records if r[1] in ("A", "AAAA"))
for qname in all_resolvable_qnames:
if not has_rec(qname, "TXT", prefix="v=spf1 "):
records.append((qname, "TXT", 'v=spf1 -all', "Recommended. Prevents use of this domain name for outbound mail by specifying that no servers are valid sources for mail from @%s. If you do send email from this domain name you should either override this record such that the SPF rule does allow the originating server, or, take the recommended approach and have the box handle mail for this domain (simply add any receiving alias at this domain name to make this machine treat the domain name as one of its mail domains)." % (qname + "." + domain)))
dmarc_qname = "_dmarc" + ("" if qname is None else "." + qname)
if not has_rec(dmarc_qname, "TXT", prefix="v=DMARC1; "):
records.append((dmarc_qname, "TXT", 'v=DMARC1; p=reject', "Recommended. Prevents use of this domain name for outbound mail by specifying that the SPF rule should be honoured for mail from @%s." % (qname + "." + domain)))
if domain_properties[domain]["user"]:
# Add CardDAV/CalDAV SRV records on the non-primary hostname that points to the primary hostname
# for autoconfiguration of mail clients (so only domains hosting user accounts need it).
# The SRV record format is priority (0, whatever), weight (0, whatever), port, service provider hostname (w/ trailing dot).
if domain != env["PRIMARY_HOSTNAME"]:
for dav in ("card", "cal"):
qname = "_" + dav + "davs._tcp"
if not has_rec(qname, "SRV"):
records.append((qname, "SRV", "0 0 443 " + env["PRIMARY_HOSTNAME"] + ".", "Recommended. Specifies the hostname of the server that handles CardDAV/CalDAV services for email addresses on this domain."))
# Add CardDAV/CalDAV SRV records on the non-primary hostname that points to the primary hostname.
# The SRV record format is priority (0, whatever), weight (0, whatever), port, service provider hostname (w/ trailing dot).
if domain != env["PRIMARY_HOSTNAME"]:
for dav in ("card", "cal"):
qname = "_" + dav + "davs._tcp"
if not has_rec(qname, "SRV"):
records.append((qname, "SRV", "0 0 443 " + env["PRIMARY_HOSTNAME"] + ".", "Recommended. Specifies the hostname of the server that handles CardDAV/CalDAV services for email addresses on this domain."))
# If this is a domain name that there are email addresses configured for, i.e. "something@"
# this domain name, then the domain name is a MTA-STS (https://tools.ietf.org/html/rfc8461)
# Policy Domain.
#
# A "_mta-sts" TXT record signals the presence of a MTA-STS policy. The id field helps clients
# cache the policy. It should be stable so we don't update DNS unnecessarily but change when
# the policy changes. It must be at most 32 letters and numbers, so we compute a hash of the
# policy file.
#
# The policy itself is served at the "mta-sts" (no underscore) subdomain over HTTPS. Therefore
# the TLS certificate used by Postfix for STARTTLS must be a valid certificate for the MX
# domain name (PRIMARY_HOSTNAME) *and* the TLS certificate used by nginx for HTTPS on the mta-sts
# subdomain must be valid certificate for that domain. Do not set an MTA-STS policy if either
# certificate in use is not valid (e.g. because it is self-signed and a valid certificate has not
# yet been provisioned). Since we cannot provision a certificate without A/AAAA records, we
# always set them (by including them in the www domains) --- only the TXT records depend on there
# being valid certificates.
mta_sts_records = [ ]
if domain_properties[domain]["mail"] \
and domain_properties[env["PRIMARY_HOSTNAME"]]["certificate-is-valid"] \
and is_domain_cert_signed_and_valid("mta-sts." + domain, env):
# Compute an up-to-32-character hash of the policy file. We'll take a SHA-1 hash of the policy
# file (20 bytes) and encode it as base-64 (28 bytes, using alphanumeric alternate characters
# instead of '+' and '/' which are not allowed in an MTA-STS policy id) but then just take its
# first 20 characters, which is more than sufficient to change whenever the policy file changes
# (and ensures any '=' padding at the end of the base64 encoding is dropped).
with open("/var/lib/mailinabox/mta-sts.txt", "rb") as f:
mta_sts_policy_id = base64.b64encode(hashlib.sha1(f.read()).digest(), altchars=b"AA").decode("ascii")[0:20]
mta_sts_records.extend([
("_mta-sts", "TXT", "v=STSv1; id=" + mta_sts_policy_id, "Optional. Part of the MTA-STS policy for incoming mail. If set, a MTA-STS policy must also be published.")
])
# Enable SMTP TLS reporting (https://tools.ietf.org/html/rfc8460) if the user has set a config option.
# Skip the rules below if the user has set a custom _smtp._tls record.
if env.get("MTA_STS_TLSRPT_RUA") and not has_rec("_smtp._tls", "TXT", prefix="v=TLSRPTv1;"):
mta_sts_records.append(("_smtp._tls", "TXT", "v=TLSRPTv1; rua=" + env["MTA_STS_TLSRPT_RUA"], "Optional. Enables MTA-STS reporting."))
for qname, rtype, value, explanation in mta_sts_records:
if not has_rec(qname, rtype):
records.append((qname, rtype, value, explanation))
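For context, the /var/lib/mailinabox/mta-sts.txt file hashed above is an RFC 8461 policy file; its contents look roughly like this (hypothetical values; the real file is generated elsewhere in the codebase):

version: STSv1
mode: enforce
mx: box.example.com
max_age: 604800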
# Add no-mail-here records for any qname that has an A or AAAA record
# but no MX record. This would include domain itself if domain is a
# non-mail domain and also may include qnames from custom DNS records.
# Do this once at the end of generating a zone.
if is_zone:
qnames_with_a = {qname for (qname, rtype, value, explanation) in records if rtype in {"A", "AAAA"}}
qnames_with_mx = {qname for (qname, rtype, value, explanation) in records if rtype == "MX"}
for qname in qnames_with_a - qnames_with_mx:
# Mark this domain as not sending mail with hard-fail SPF and DMARC records.
d = (qname+"." if qname else "") + domain
if not has_rec(qname, "TXT", prefix="v=spf1 "):
records.append((qname, "TXT", 'v=spf1 -all', "Recommended. Prevents use of this domain name for outbound mail by specifying that no servers are valid sources for mail from @%s. If you do send email from this domain name you should either override this record such that the SPF rule does allow the originating server, or, take the recommended approach and have the box handle mail for this domain (simply add any receiving alias at this domain name to make this machine treat the domain name as one of its mail domains)." % d))
if not has_rec("_dmarc" + ("."+qname if qname else ""), "TXT", prefix="v=DMARC1; "):
records.append(("_dmarc" + ("."+qname if qname else ""), "TXT", 'v=DMARC1; p=reject;', "Recommended. Prevents use of this domain name for outbound mail by specifying that the SPF rule should be honoured for mail from @%s." % d))
# And with a null MX record (https://explained-from-first-principles.com/email/#null-mx-record)
if not has_rec(qname, "MX"):
records.append((qname, "MX", '0 .', "Recommended. Prevents use of this domain name for incoming mail."))
# Sort the records. The None records *must* go first in the nsd zone file. Otherwise it doesn't matter.
records.sort(key = lambda rec : list(reversed(rec[0].split(".")) if rec[0] is not None else ""))
return records
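A quick check of that sort key shows why records with a None qname come out first:

sort_key = lambda qname: list(reversed(qname.split(".")) if qname is not None else "")
print(sort_key(None))          # []                -> empty list sorts before everything else
print(sort_key("www"))         # ['www']
print(sort_key("_dmarc.www"))  # ['www', '_dmarc'] -> grouped with the other records under 'www'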
def is_domain_cert_signed_and_valid(domain, env):
cert = get_ssl_certificates(env).get(domain)
if not cert: return False # no certificate provisioned
cert_status = check_certificate(domain, cert['certificate'], cert['private-key'])
return cert_status[0] == 'OK'
########################################################################
def build_tlsa_record(env):
@ -342,17 +438,29 @@ def build_sshfp_records():
"ssh-rsa": 1,
"ssh-dss": 2,
"ecdsa-sha2-nistp256": 3,
"ssh-ed25519": 4,
}
# Get our local fingerprints by running ssh-keyscan. The output looks
# like the known_hosts file: hostname, keytype, fingerprint. The order
# of the output is arbitrary, so sort it to prevent spurious updates
# to the zone file (that trigger bumping the serial number).
keys = shell("check_output", ["ssh-keyscan", "localhost"])
for key in sorted(keys.split("\n")):
# to the zone file (that trigger bumping the serial number). However,
# if SSH has been configured to listen on a nonstandard port, we must
# specify that port to ssh-keyscan.
port = get_ssh_port()
# If nothing returned, SSH is probably not installed.
if not port:
return
keys = shell("check_output", ["ssh-keyscan", "-4", "-t", "rsa,dsa,ecdsa,ed25519", "-p", str(port), "localhost"])
keys = sorted(keys.split("\n"))
for key in keys:
if key.strip() == "" or key[0] == "#": continue
try:
host, keytype, pubkey = key.split(" ")
_host, keytype, pubkey = key.split(" ")
yield "%d %d ( %s )" % (
algorithm_number[keytype],
2, # specifies we are using SHA-256 on next line
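For reference, a SHA-256 SSHFP value can be reproduced from one ssh-keyscan line like this (minimal sketch; the key material is randomly generated stand-in data):
import base64, hashlib, os
pubkey_b64 = base64.b64encode(os.urandom(51)).decode()      # stand-in for the base64 key field
line = "localhost ssh-ed25519 " + pubkey_b64
_host, keytype, pubkey = line.split(" ")
fingerprint = hashlib.sha256(base64.b64decode(pubkey)).hexdigest()
print("4 2 ( %s )" % fingerprint)                           # 4 = ssh-ed25519 (table above), 2 = SHA-256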
@ -369,24 +477,27 @@ def write_nsd_zone(domain, zonefile, records, env, force):
# On the $ORIGIN line, there's typically a ';' comment at the end explaining
# what the $ORIGIN line does. Any further data after the domain confuses
# ldns-signzone, however. It used to say '; default zone domain'.
#
# The SOA contact address for all of the domains on this system is hostmaster
# @ the PRIMARY_HOSTNAME. Hopefully that's legit.
#
# For the refresh through TTL fields, a good reference is:
# http://www.peerwisdom.org/2013/05/15/dns-understanding-the-soa-record/
# https://www.ripe.net/publications/docs/ripe-203
#
# A hash of the available DNSSEC keys is added in a comment so that when
# the keys change we force a re-generation of the zone which triggers
# re-signing it.
zone = """
$ORIGIN {domain}.
$TTL 1800 ; default time to live
$TTL 86400 ; default time to live
@ IN SOA ns1.{primary_domain}. hostmaster.{primary_domain}. (
__SERIAL__ ; serial number
7200 ; Refresh (secondary nameserver update interval)
1800 ; Retry (when refresh fails, how often to try again)
3600 ; Retry (when refresh fails, how often to try again, should be lower than the refresh)
1209600 ; Expire (when refresh fails, how long secondary nameserver will keep records around anyway)
1800 ; Negative TTL (how long negative responses are cached)
86400 ; Negative TTL (how long negative responses are cached)
)
"""
@ -394,7 +505,7 @@ $TTL 1800 ; default time to live
zone = zone.format(domain=domain, primary_domain=env["PRIMARY_HOSTNAME"])
# Add records.
for subdomain, querytype, value, explanation in records:
for subdomain, querytype, value, _explanation in records:
if subdomain:
zone += subdomain
zone += "\tIN\t" + querytype + "\t"
@ -411,6 +522,9 @@ $TTL 1800 ; default time to live
value = v2
zone += value + "\n"
# Append a stable hash of DNSSEC signing keys in a comment.
zone += f"\n; DNSSEC signing keys hash: {hash_dnssec_keys(domain, env)}\n"
# DNSSEC requires re-signing a zone periodically. That requires
# bumping the serial number even if no other records have changed.
# We don't see the DNSSEC records yet, so we have to figure out
@ -425,7 +539,7 @@ $TTL 1800 ; default time to live
# We've signed the domain. Check if we are close to the expiration
# time of the signature. If so, we'll force a bump of the serial
# number so we can re-sign it.
with open(zonefile + ".signed") as f:
with open(zonefile + ".signed", encoding="utf-8") as f:
signed_zone = f.read()
expiration_times = re.findall(r"\sRRSIG\s+SOA\s+\d+\s+\d+\s\d+\s+(\d{14})", signed_zone)
if len(expiration_times) == 0:
@ -444,7 +558,7 @@ $TTL 1800 ; default time to live
if os.path.exists(zonefile):
# If the zone already exists, is different, and has a later serial number,
# increment the number.
with open(zonefile) as f:
with open(zonefile, encoding="utf-8") as f:
existing_zone = f.read()
m = re.search(r"(\d+)\s*;\s*serial number", existing_zone)
if m:
@ -468,92 +582,129 @@ $TTL 1800 ; default time to live
zone = zone.replace("__SERIAL__", serial)
# Write the zone file.
with open(zonefile, "w") as f:
with open(zonefile, "w", encoding="utf-8") as f:
f.write(zone)
return True # file is updated
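The RRSIG expiration times captured by the regular expression above are plain YYYYMMDDHHMMSS strings; a minimal sketch of how such a timestamp can be compared against the clock (the value and the three-day threshold are illustrative, not the box's actual setting):
import datetime
expiration_time = "20240115123000"                                    # hypothetical captured value
expires = datetime.datetime.strptime(expiration_time, "%Y%m%d%H%M%S")
if expires - datetime.datetime.utcnow() < datetime.timedelta(days=3):
    pass  # close to expiration: force a serial bump so the zone gets re-signed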
def get_dns_zonefile(zone, env):
for domain, fn in get_dns_zones(env):
if zone == domain:
break
else:
raise ValueError("%s is not a domain name that corresponds to a zone." % zone)
nsd_zonefile = "/etc/nsd/zones/" + fn
with open(nsd_zonefile, encoding="utf-8") as f:
return f.read()
########################################################################
def write_nsd_conf(zonefiles, additional_records, env):
# Write the list of zones to a configuration file.
nsd_conf_file = "/etc/nsd/zones.conf"
nsd_conf_file = "/etc/nsd/nsd.conf.d/zones.conf"
nsdconf = ""
# Append the zones.
for domain, zonefile in zonefiles:
nsdconf += """
nsdconf += f"""
zone:
name: %s
zonefile: %s
""" % (domain, zonefile)
name: {domain}
zonefile: {zonefile}
"""
# If custom secondary nameservers have been set, allow zone transfers
# and notifies to them.
# and, for entries that are not subnets, NOTIFY messages to them.
for ipaddr in get_secondary_dns(additional_records, mode="xfr"):
nsdconf += "\n\tnotify: %s NOKEY\n\tprovide-xfr: %s NOKEY\n" % (ipaddr, ipaddr)
if "/" not in ipaddr:
nsdconf += "\n\tnotify: %s NOKEY" % (ipaddr)
nsdconf += "\n\tprovide-xfr: %s NOKEY\n" % (ipaddr)
# Check if the file is changing. If it isn't changing,
# return False to flag that no change was made.
if os.path.exists(nsd_conf_file):
with open(nsd_conf_file) as f:
with open(nsd_conf_file, encoding="utf-8") as f:
if f.read() == nsdconf:
return False
# Write out new contents and return True to signal that
# configuration changed.
with open(nsd_conf_file, "w") as f:
with open(nsd_conf_file, "w", encoding="utf-8") as f:
f.write(nsdconf)
return True
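With one hosted zone and one xfr-enabled secondary address, the generated zones.conf ends up looking roughly like this (domain, filename, and IP are hypothetical):
# zone:
# 	name: example.com
# 	zonefile: example.com.txt
#
# 	notify: 203.0.113.5 NOKEY
# 	provide-xfr: 203.0.113.5 NOKEY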
########################################################################
def dnssec_choose_algo(domain, env):
if '.' in domain and domain.rsplit('.')[-1] in \
("email", "guide", "fund", "be"):
# At GoDaddy, RSASHA256 is the only algorithm supported
# for .email and .guide.
# A variety of algorithms are supported for .fund. This
# is preferred.
# Gandi tells me that .be does not support RSASHA1-NSEC3-SHA1
return "RSASHA256"
def find_dnssec_signing_keys(domain, env):
# For each key that we generated (one per algorithm)...
d = os.path.join(env['STORAGE_ROOT'], 'dns/dnssec')
keyconfs = [f for f in os.listdir(d) if f.endswith(".conf")]
for keyconf in keyconfs:
# Load the file holding the KSK and ZSK key filenames.
keyconf_fn = os.path.join(d, keyconf)
keyinfo = load_env_vars_from_file(keyconf_fn)
# For any domain we were able to sign before, don't change the algorithm
# on existing users. We'll probably want to migrate to SHA256 later.
return "RSASHA1-NSEC3-SHA1"
# Skip this key if the conf file has a setting named DOMAINS,
# holding a comma-separated list of domain names, and if this
# domain is not in the list. This allows easily disabling a
# key by setting "DOMAINS=" or "DOMAINS=none", other than
# deleting the key's .conf file, which might result in the key
# being regenerated next upgrade. Keys should be disabled if
# they are not needed to reduce the DNSSEC query response size.
if "DOMAINS" in keyinfo and domain not in [dd.strip() for dd in keyinfo["DOMAINS"].split(",")]:
continue
for keytype in ("KSK", "ZSK"):
yield keytype, keyinfo[keytype]
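An example of what one of these .conf files might contain (the key tags and the DOMAINS value are hypothetical; the filenames keep the _domain_ placeholder that sign_zone later substitutes):
# KSK=K_domain_.+013+12345
# ZSK=K_domain_.+013+54321
# DOMAINS=example.com,example.net      # optional: restrict (or disable) this key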
def hash_dnssec_keys(domain, env):
# Create a stable (by sorting the items) hash of all of the private keys
# that will be used to sign this domain.
keydata = []
for keytype, keyfn in sorted(find_dnssec_signing_keys(domain, env)):
oldkeyfn = os.path.join(env['STORAGE_ROOT'], 'dns/dnssec', keyfn + ".private")
keydata.extend((keytype, keyfn))
with open(oldkeyfn, encoding="utf-8") as fr:
keydata.append( fr.read() )
keydata = "".join(keydata).encode("utf8")
return hashlib.sha1(keydata).hexdigest()
def sign_zone(domain, zonefile, env):
algo = dnssec_choose_algo(domain, env)
dnssec_keys = load_env_vars_from_file(os.path.join(env['STORAGE_ROOT'], 'dns/dnssec/%s.conf' % algo))
# Sign the zone with all of the keys that were generated during
# setup so that the user can choose which to use in their DS record at
# their registrar, and also to support migration to newer algorithms.
# In order to use the same keys for all domains, we have to generate
# a new .key file with a DNSSEC record for the specific domain. We
# can reuse the same key, but it won't validate without a DNSSEC
# record specifically for the domain.
# In order to use the key files generated at setup, which are for
# the placeholder domain _domain_, we have to re-write the files and place
# the actual domain name in them, so that ldns-signzone works.
#
# Copy the .key and .private files to /tmp to patch them up.
#
# Use os.umask and open().write() to securely create a copy that only
# we (root) can read.
files_to_kill = []
for key in ("KSK", "ZSK"):
if dnssec_keys.get(key, "").strip() == "": raise Exception("DNSSEC is not properly set up.")
oldkeyfn = os.path.join(env['STORAGE_ROOT'], 'dns/dnssec/' + dnssec_keys[key])
newkeyfn = '/tmp/' + dnssec_keys[key].replace("_domain_", domain)
dnssec_keys[key] = newkeyfn
# Patch each key, storing the patched version in /tmp for now.
# Each key has a .key and .private file. Collect a list of filenames
# for all of the keys (and separately just the key-signing keys).
all_keys = []
ksk_keys = []
for keytype, keyfn in find_dnssec_signing_keys(domain, env):
newkeyfn = '/tmp/' + keyfn.replace("_domain_", domain)
for ext in (".private", ".key"):
if not os.path.exists(oldkeyfn + ext): raise Exception("DNSSEC is not properly set up.")
with open(oldkeyfn + ext, "r") as fr:
# Copy the .key and .private files to /tmp to patch them up.
#
# Use os.umask and open().write() to securely create a copy that only
# we (root) can read.
oldkeyfn = os.path.join(env['STORAGE_ROOT'], 'dns/dnssec', keyfn + ext)
with open(oldkeyfn, encoding="utf-8") as fr:
keydata = fr.read()
keydata = keydata.replace("_domain_", domain) # trick ldns-signkey into letting our generic key be used by this zone
fn = newkeyfn + ext
keydata = keydata.replace("_domain_", domain)
prev_umask = os.umask(0o77) # ensure written file is not world-readable
try:
with open(fn, "w") as fw:
with open(newkeyfn + ext, "w", encoding="utf-8") as fw:
fw.write(keydata)
finally:
os.umask(prev_umask) # other files we write should be world-readable
files_to_kill.append(fn)
# Put the patched key filename base (without extension) into the list of keys we'll sign with.
all_keys.append(newkeyfn)
if keytype == "KSK": ksk_keys.append(newkeyfn)
# Do the signing.
expiry_date = (datetime.datetime.now() + datetime.timedelta(days=30)).strftime("%Y%m%d")
@ -566,32 +717,34 @@ def sign_zone(domain, zonefile, env):
# zonefile to sign
"/etc/nsd/zones/" + zonefile,
]
# keys to sign with (order doesn't matter -- it'll figure it out)
dnssec_keys["KSK"],
dnssec_keys["ZSK"],
])
+ all_keys
)
# Create a DS record based on the patched-up key files. The DS record is specific to the
# zone being signed, so we can't use the .ds files generated when we created the keys.
# The DS record points to the KSK only. Write this next to the zone file so we can
# get it later to give to the user with instructions on what to do with it.
#
# We want to be able to validate DS records too, but multiple forms may be valid depending
# on the digest type. So we'll write all (both) valid records. Only one DS record should
# actually be deployed. Preferably the first.
with open("/etc/nsd/zones/" + zonefile + ".ds", "w") as f:
for digest_type in ('2', '1'):
rr_ds = shell('check_output', ["/usr/bin/ldns-key2ds",
"-n", # output to stdout
"-" + digest_type, # 1=SHA1, 2=SHA256
dnssec_keys["KSK"] + ".key"
])
f.write(rr_ds)
# Generate a DS record for each key. There are also several possible hash algorithms that may
# be used, so we'll pre-generate all for each key. One DS record per line. Only one
# needs to actually be deployed at the registrar. We'll select the preferred one
# in the status checks.
with open("/etc/nsd/zones/" + zonefile + ".ds", "w", encoding="utf-8") as f:
for key in ksk_keys:
for digest_type in ('1', '2', '4'):
rr_ds = shell('check_output', ["/usr/bin/ldns-key2ds",
"-n", # output to stdout
"-" + digest_type, # 1=SHA1, 2=SHA256, 4=SHA384
key + ".key"
])
f.write(rr_ds)
# Remove our temporary file.
for fn in files_to_kill:
os.unlink(fn)
# Remove the temporary patched key files.
for fn in all_keys:
os.unlink(fn + ".private")
os.unlink(fn + ".key")
########################################################################
@ -615,7 +768,7 @@ def write_opendkim_tables(domains, env):
# So we must have a separate KeyTable entry for each domain.
"SigningTable":
"".join(
"*@{domain} {domain}\n".format(domain=domain)
f"*@{domain} {domain}\n"
for domain in domains
),
@ -624,7 +777,7 @@ def write_opendkim_tables(domains, env):
# signing domain must match the sender's From: domain.
"KeyTable":
"".join(
"{domain} {domain}:mail:{key_file}\n".format(domain=domain, key_file=opendkim_key_file)
f"{domain} {domain}:mail:{opendkim_key_file}\n"
for domain in domains
),
}
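For a hypothetical domain, the lines these tables end up containing look like this (the DKIM key path is an assumption based on the opendkim_key_file variable above):
# SigningTable:  *@example.com example.com
# KeyTable:      example.com example.com:mail:/home/user-data/mail/dkim/mail.private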
@ -633,12 +786,12 @@ def write_opendkim_tables(domains, env):
for filename, content in config.items():
# Don't write the file if it doesn't need an update.
if os.path.exists("/etc/opendkim/" + filename):
with open("/etc/opendkim/" + filename) as f:
with open("/etc/opendkim/" + filename, encoding="utf-8") as f:
if f.read() == content:
continue
# The contents needs to change.
with open("/etc/opendkim/" + filename, "w") as f:
with open("/etc/opendkim/" + filename, "w", encoding="utf-8") as f:
f.write(content)
did_update = True
@ -648,14 +801,17 @@ def write_opendkim_tables(domains, env):
########################################################################
def get_custom_dns_config(env):
def get_custom_dns_config(env, only_real_records=False):
try:
custom_dns = rtyaml.load(open(os.path.join(env['STORAGE_ROOT'], 'dns/custom.yaml')))
if not isinstance(custom_dns, dict): raise ValueError() # caught below
with open(os.path.join(env['STORAGE_ROOT'], 'dns/custom.yaml'), encoding="utf-8") as f:
custom_dns = rtyaml.load(f)
if not isinstance(custom_dns, dict): raise ValueError # caught below
except:
return [ ]
for qname, value in custom_dns.items():
if qname == "_secondary_nameserver" and only_real_records: continue # skip fake record
# Short form. Mapping a domain name to a string is short-hand
# for creating A records.
if isinstance(value, str):
@ -667,7 +823,7 @@ def get_custom_dns_config(env):
# No other type of data is allowed.
else:
raise ValueError()
raise ValueError
for rtype, value2 in values:
if isinstance(value2, str):
@ -677,7 +833,7 @@ def get_custom_dns_config(env):
yield (qname, rtype, value3)
# No other type of data is allowed.
else:
raise ValueError()
raise ValueError
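For context, a minimal sketch of the custom.yaml shapes this generator accepts (all names and addresses are hypothetical):
# example.com: 203.0.113.5                # short form: a bare string becomes an A record
# www.example.com:
#   CNAME: example.com.
#   TXT: "v=spf1 -all"
# _secondary_nameserver: ns2.example.net  # "fake" record, skipped when only_real_records=True
#
# which the generator yields as tuples such as ("example.com", "A", "203.0.113.5").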
def filter_custom_records(domain, custom_dns_iter):
for qname, rtype, value in custom_dns_iter:
@ -693,10 +849,7 @@ def filter_custom_records(domain, custom_dns_iter):
# our short form (None => domain, or a relative QNAME) if
# domain is not None.
if domain is not None:
if qname == domain:
qname = None
else:
qname = qname[0:len(qname)-len("." + domain)]
qname = None if qname == domain else qname[0:len(qname) - len("." + domain)]
yield (qname, rtype, value)
@ -732,12 +885,12 @@ def write_custom_dns_config(config, env):
# Write.
config_yaml = rtyaml.dump(dns)
with open(os.path.join(env['STORAGE_ROOT'], 'dns/custom.yaml'), "w") as f:
with open(os.path.join(env['STORAGE_ROOT'], 'dns/custom.yaml'), "w", encoding="utf-8") as f:
f.write(config_yaml)
def set_custom_dns_record(qname, rtype, value, action, env):
# validate qname
for zone, fn in get_dns_zones(env):
for zone, _fn in get_dns_zones(env):
# It must match a zone apex or be a subdomain of a zone
# that we are otherwise hosting.
if qname == zone or qname.endswith("."+zone):
@ -750,12 +903,28 @@ def set_custom_dns_record(qname, rtype, value, action, env):
# validate rtype
rtype = rtype.upper()
if value is not None and qname != "_secondary_nameserver":
if rtype in ("A", "AAAA"):
if not re.search(DOMAIN_RE, qname):
msg = "Invalid name."
raise ValueError(msg)
if rtype in {"A", "AAAA"}:
if value != "local": # "local" is a special flag for us
v = ipaddress.ip_address(value) # raises a ValueError if there's a problem
if rtype == "A" and not isinstance(v, ipaddress.IPv4Address): raise ValueError("That's an IPv6 address.")
if rtype == "AAAA" and not isinstance(v, ipaddress.IPv6Address): raise ValueError("That's an IPv4 address.")
elif rtype in ("CNAME", "TXT", "SRV", "MX"):
elif rtype in {"CNAME", "NS"}:
if rtype == "NS" and qname == zone:
msg = "NS records can only be set for subdomains."
raise ValueError(msg)
# ensure value has a trailing dot
if not value.endswith("."):
value = value + "."
if not re.search(DOMAIN_RE, value):
msg = "Invalid value."
raise ValueError(msg)
elif rtype in {"CNAME", "TXT", "SRV", "MX", "SSHFP", "CAA"}:
# anything goes
pass
else:
@ -788,7 +957,7 @@ def set_custom_dns_record(qname, rtype, value, action, env):
# Drop this record.
made_change = True
continue
if value == None and (_qname, _rtype) == (qname, rtype):
if value is None and (_qname, _rtype) == (qname, rtype):
# Drop all qname-rtype records.
made_change = True
continue
@ -798,7 +967,7 @@ def set_custom_dns_record(qname, rtype, value, action, env):
# Preserve this record.
newconfig.append((_qname, _rtype, _value))
if action in ("add", "set") and needs_add and value is not None:
if action in {"add", "set"} and needs_add and value is not None:
newconfig.append((qname, rtype, value))
made_change = True
@ -812,31 +981,45 @@ def set_custom_dns_record(qname, rtype, value, action, env):
def get_secondary_dns(custom_dns, mode=None):
resolver = dns.resolver.get_default_resolver()
resolver.timeout = 10
resolver.lifetime = 10
values = []
for qname, rtype, value in custom_dns:
for qname, _rtype, value in custom_dns:
if qname != '_secondary_nameserver': continue
for hostname in value.split(" "):
hostname = hostname.strip()
if mode == None:
if mode is None:
# Just return the setting.
values.append(hostname)
continue
# This is a hostname. Before including in zone xfr lines,
# resolve to an IP address. Otherwise just return the hostname.
if not hostname.startswith("xfr:"):
if mode == "xfr":
response = dns.resolver.query(hostname+'.', "A")
hostname = str(response[0])
values.append(hostname)
# If the entry starts with "xfr:" only include it in the zone transfer settings.
if hostname.startswith("xfr:"):
if mode != "xfr": continue
hostname = hostname[4:]
# This is a zone-xfer-only IP address. Do not return if
# we're querying for NS record hostnames. Only return if
# we're querying for zone xfer IP addresses - return the
# IP address.
elif mode == "xfr":
values.append(hostname[4:])
# If it is a hostname, resolve it to an IP address before including
# it in the zone xfr lines.
# It may not resolve to IPv6, so don't throw an exception if it
# doesn't. Skip the entry if there is a DNS error.
if mode == "xfr":
try:
ipaddress.ip_interface(hostname) # test if it's an IP address or CIDR notation
values.append(hostname)
except ValueError:
try:
response = dns.resolver.resolve(hostname+'.', "A", raise_on_no_answer=False)
values.extend(map(str, response))
except dns.exception.DNSException:
pass
try:
response = dns.resolver.resolve(hostname+'.', "AAAA", raise_on_no_answer=False)
values.extend(map(str, response))
except dns.exception.DNSException:
pass
else:
values.append(hostname)
return values
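To illustrate the two modes, a hypothetical setting of "ns2.example.net xfr:203.0.113.0/24" would produce:
#   mode=None  -> ["ns2.example.net", "xfr:203.0.113.0/24"]   (the raw setting, for display)
#   mode="xfr" -> ["203.0.113.0/24", "198.51.100.7"]          (the subnet plus whatever A/AAAA addresses
#                                                               ns2.example.net resolves to)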
@ -845,20 +1028,27 @@ def set_secondary_dns(hostnames, env):
# Validate that all hostnames are valid and that all zone-xfer IP addresses are valid.
resolver = dns.resolver.get_default_resolver()
resolver.timeout = 5
resolver.lifetime = 5
for item in hostnames:
if not item.startswith("xfr:"):
# Resolve hostname.
try:
response = resolver.query(item, "A")
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
raise ValueError("Could not resolve the IP address of %s." % item)
resolver.resolve(item, "A")
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.Timeout):
try:
resolver.resolve(item, "AAAA")
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.Timeout):
raise ValueError("Could not resolve the IP address of %s." % item)
else:
# Validate IP address.
try:
v = ipaddress.ip_address(item[4:]) # raises a ValueError if there's a problem
if not isinstance(v, ipaddress.IPv4Address): raise ValueError("That's an IPv6 address.")
if "/" in item[4:]:
ipaddress.ip_network(item[4:]) # raises a ValueError if there's a problem
else:
ipaddress.ip_address(item[4:]) # raises a ValueError if there's a problem
except ValueError:
raise ValueError("'%s' is not an IPv4 address." % item[4:])
raise ValueError("'%s' is not an IPv4 or IPv6 address or subnet." % item[4:])
# Set.
set_custom_dns_record("_secondary_nameserver", "A", " ".join(hostnames), "set", env)
@ -870,18 +1060,17 @@ def set_secondary_dns(hostnames, env):
return do_dns_update(env)
def get_custom_dns_record(custom_dns, qname, rtype):
def get_custom_dns_records(custom_dns, qname, rtype):
for qname1, rtype1, value in custom_dns:
if qname1 == qname and rtype1 == rtype:
return value
return None
yield value
########################################################################
def build_recommended_dns(env):
ret = []
for (domain, zonefile, records) in build_zones(env):
# remove records that we don't dislay
for (domain, _zonefile, records) in build_zones(env):
# remove records that we don't display
records = [r for r in records if r[3] is not False]
# put Required at the top, then Recommended, then everything else
@ -889,10 +1078,7 @@ def build_recommended_dns(env):
# expand qnames
for i in range(len(records)):
if records[i][0] == None:
qname = domain
else:
qname = records[i][0] + "." + domain
qname = domain if records[i][0] is None else records[i][0] + "." + domain
records[i] = {
"qname": qname,
@ -911,7 +1097,7 @@ if __name__ == "__main__":
if sys.argv[-1] == "--lint":
write_custom_dns_config(get_custom_dns_config(env), env)
else:
for zone, records in build_recommended_dns(env):
for _zone, records in build_recommended_dns(env):
for record in records:
print("; " + record['explanation'])
print(record['qname'], record['rtype'], record['value'], sep="\t")


@ -1,11 +1,17 @@
#!/usr/bin/python3
#!/usr/local/lib/mailinabox/env/bin/python
# Reads in STDIN. If the stream is not empty, mail it to the system administrator.
import sys
import html
import smtplib
from email.message import Message
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
# In Python 3.6:
#from email.message import Message
from utils import load_environment
@ -23,14 +29,26 @@ content = sys.stdin.read().strip()
# If there's nothing coming in, just exit.
if content == "":
sys.exit(0)
sys.exit(0)
# create MIME message
msg = Message()
msg['From'] = "\"%s\" <%s>" % (env['PRIMARY_HOSTNAME'], admin_addr)
msg = MIMEMultipart('alternative')
# In Python 3.6:
#msg = Message()
msg['From'] = '"{}" <{}>'.format(env['PRIMARY_HOSTNAME'], admin_addr)
msg['To'] = admin_addr
msg['Subject'] = "[%s] %s" % (env['PRIMARY_HOSTNAME'], subject)
msg.set_payload(content, "UTF-8")
msg['Subject'] = "[{}] {}".format(env['PRIMARY_HOSTNAME'], subject)
content_html = f'<html><body><pre style="overflow-x: scroll; white-space: pre;">{html.escape(content)}</pre></body></html>'
msg.attach(MIMEText(content, 'plain'))
msg.attach(MIMEText(content_html, 'html'))
# In Python 3.6:
#msg.set_content(content)
#msg.add_alternative(content_html, "html")
# send
smtpclient = smtplib.SMTP('127.0.0.1', 25)

File diff suppressed because it is too large.


@ -1,14 +1,23 @@
#!/usr/bin/python3
#!/usr/local/lib/mailinabox/env/bin/python
import subprocess, shutil, os, sqlite3, re
# NOTE:
# This script is run both using the system-wide Python 3
# interpreter (/usr/bin/python3) as well as through the
# virtualenv (/usr/local/lib/mailinabox/env). So only
# import packages at the top level of this script that
# are installed in *both* contexts. We use the system-wide
# Python 3 in setup/questions.sh to validate the email
# address entered by the user.
import os, sqlite3, re
import utils
from email_validator import validate_email as validate_email_, EmailNotValidError
import idna
def validate_email(email, mode=None):
# Checks that an email address is syntactically valid. Returns True/False.
# Until Postfix supports SMTPUTF8, an email address may contain ASCII
# characters only; IDNs must be IDNA-encoded.
# An email address may contain ASCII characters only because Dovecot's
# authentication mechanism gets confused with other character encodings.
#
# When mode=="user", we're checking that this can be a user account name.
# Dovecot has tighter restrictions - letters, numbers, underscore, and
@ -77,10 +86,7 @@ def prettify_idn_email_address(email):
def is_dcv_address(email):
email = email.lower()
for localpart in ("admin", "administrator", "postmaster", "hostmaster", "webmaster", "abuse"):
if email.startswith(localpart+"@") or email.startswith(localpart+"+"):
return True
return False
return any(email.startswith((localpart + "@", localpart + "+")) for localpart in ("admin", "administrator", "postmaster", "hostmaster", "webmaster", "abuse"))
def open_database(env, with_connection=False):
conn = sqlite3.connect(env["STORAGE_ROOT"] + "/mail/users.sqlite")
@ -96,7 +102,7 @@ def get_mail_users(env):
users = [ row[0] for row in c.fetchall() ]
return utils.sort_email_addresses(users, env)
def get_mail_users_ex(env, with_archived=False, with_slow_info=False):
def get_mail_users_ex(env, with_archived=False):
# Returns a complex data structure of all user accounts, optionally
# including archived (status="inactive") accounts.
#
@ -130,9 +136,6 @@ def get_mail_users_ex(env, with_archived=False, with_slow_info=False):
}
users.append(user)
if with_slow_info:
user["mailbox_size"] = utils.du(os.path.join(env['STORAGE_ROOT'], 'mail/mailboxes', *reversed(email.split("@"))))
# Add in archived accounts.
if with_archived:
root = os.path.join(env['STORAGE_ROOT'], 'mail/mailboxes')
@ -144,13 +147,11 @@ def get_mail_users_ex(env, with_archived=False, with_slow_info=False):
if email in active_accounts: continue
user = {
"email": email,
"privileges": "",
"privileges": [],
"status": "inactive",
"mailbox": mbox,
}
users.append(user)
if with_slow_info:
user["mailbox_size"] = utils.du(mbox)
# Group by domain.
domains = { }
@ -182,14 +183,13 @@ def get_admins(env):
return users
def get_mail_aliases(env):
# Returns a sorted list of tuples of (address, forward-tos, permitted-senders).
# Returns a sorted list of tuples of (address, forward-tos, permitted-senders, auto).
c = open_database(env)
c.execute('SELECT source, destination, permitted_senders FROM aliases')
c.execute('SELECT source, destination, permitted_senders, 0 as auto FROM aliases UNION SELECT source, destination, permitted_senders, 1 as auto FROM auto_aliases')
aliases = { row[0]: row for row in c.fetchall() } # make dict
# put in a canonical order: sort by domain, then by email address lexicographically
aliases = [ aliases[address] for address in utils.sort_email_addresses(aliases.keys(), env) ]
return aliases
return [ aliases[address] for address in utils.sort_email_addresses(aliases.keys(), env) ]
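Each returned tuple now carries the auto flag as its fourth element; a hypothetical row:
# ("postmaster@example.com", "administrator@example.com", None, 1)
#   address                    forward-tos                  permitted-senders   auto (1 = from auto_aliases)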
def get_mail_aliases_ex(env):
# Returns a complex data structure of all mail aliases, similar
@ -204,7 +204,7 @@ def get_mail_aliases_ex(env):
# address_display: "name@domain.tld", # full Unicode
# forwards_to: ["user1@domain.com", "receiver-only1@domain.com", ...],
# permitted_senders: ["user1@domain.com", "sender-only1@domain.com", ...] OR null,
# required: True|False
# auto: True|False
# },
# ...
# ]
@ -212,15 +212,16 @@ def get_mail_aliases_ex(env):
# ...
# ]
required_aliases = get_required_aliases(env)
domains = {}
for address, forwards_to, permitted_senders in get_mail_aliases(env):
for address, forwards_to, permitted_senders, auto in get_mail_aliases(env):
# skip auto domain maps since these are not informative in the control panel's aliases list
if auto and address.startswith("@"): continue
# get alias info
domain = get_domain(address)
required = (address in required_aliases)
# add to list
if not domain in domains:
if domain not in domains:
domains[domain] = {
"domain": domain,
"aliases": [],
@ -230,7 +231,7 @@ def get_mail_aliases_ex(env):
"address_display": prettify_idn_email_address(address),
"forwards_to": [prettify_idn_email_address(r.strip()) for r in forwards_to.split(",")],
"permitted_senders": [prettify_idn_email_address(s.strip()) for s in permitted_senders.split(",")] if permitted_senders is not None else None,
"required": required,
"auto": bool(auto),
})
# Sort domains.
@ -238,7 +239,7 @@ def get_mail_aliases_ex(env):
# Sort aliases within each domain first by required-ness then lexicographically by address.
for domain in domains:
domain["aliases"].sort(key = lambda alias : (alias["required"], alias["address"]))
domain["aliases"].sort(key = lambda alias : (alias["auto"], alias["address"]))
return domains
def get_domain(emailaddr, as_unicode=True):
@ -254,13 +255,16 @@ def get_domain(emailaddr, as_unicode=True):
pass
return ret
def get_mail_domains(env, filter_aliases=lambda alias : True):
def get_mail_domains(env, filter_aliases=lambda alias : True, users_only=False):
# Returns the domain names (IDNA-encoded) of all of the email addresses
# configured on the system.
return set(
[get_domain(login, as_unicode=False) for login in get_mail_users(env)]
+ [get_domain(address, as_unicode=False) for address, *_ in get_mail_aliases(env) if filter_aliases(address) ]
)
# configured on the system. If users_only is True, only return domains
# with email addresses that correspond to user accounts. Exclude Unicode
# forms of domain names listed in the automatic aliases table.
domains = []
domains.extend([get_domain(login, as_unicode=False) for login in get_mail_users(env)])
if not users_only:
domains.extend([get_domain(address, as_unicode=False) for address, _, _, auto in get_mail_aliases(env) if filter_aliases(address) and not auto ])
return set(domains)
def add_mail_user(email, pw, privs, env):
# validate email
@ -435,9 +439,11 @@ def add_mail_alias(address, forwards_to, permitted_senders, env, update_if_exist
email = email.strip()
if email == "": continue
email = sanitize_idn_email_address(email) # Unicode => IDNA
# Strip any +tag from email alias and check privileges
privileged_email = re.sub(r"(?=\+)[^@]*(?=@)",'',email)
if not validate_email(email):
return ("Invalid receiver email address (%s)." % email, 400)
if is_dcv_source and not is_dcv_address(email) and "admin" not in get_mail_user_privileges(email, env, empty_on_error=True):
if is_dcv_source and not is_dcv_address(email) and "admin" not in get_mail_user_privileges(privileged_email, env, empty_on_error=True):
# Make domain control validation hijacking a little harder to mess up by
# requiring aliases for email addresses typically used in DCV to forward
# only to accounts that are administrators on this system.
@ -467,10 +473,7 @@ def add_mail_alias(address, forwards_to, permitted_senders, env, update_if_exist
forwards_to = ",".join(validated_forwards_to)
if len(validated_permitted_senders) == 0:
permitted_senders = None
else:
permitted_senders = ",".join(validated_permitted_senders)
permitted_senders = None if len(validated_permitted_senders) == 0 else ",".join(validated_permitted_senders)
conn, c = open_database(env, with_connection=True)
try:
@ -488,6 +491,7 @@ def add_mail_alias(address, forwards_to, permitted_senders, env, update_if_exist
if do_kick:
# Update things in case any new domains are added.
return kick(env, return_status)
return None
def remove_mail_alias(address, env, do_kick=True):
# convert Unicode domain to IDNA
@ -503,6 +507,14 @@ def remove_mail_alias(address, env, do_kick=True):
if do_kick:
# Update things in case any domains are removed.
return kick(env, "alias removed")
return None
def add_auto_aliases(aliases, env):
conn, c = open_database(env, with_connection=True)
c.execute("DELETE FROM auto_aliases")
for source, destination in aliases.items():
c.execute("INSERT INTO auto_aliases (source, destination) VALUES (?, ?)", (source, destination))
conn.commit()
def get_system_administrator(env):
return "administrator@" + env['PRIMARY_HOSTNAME']
@ -547,41 +559,36 @@ def kick(env, mail_result=None):
if mail_result is not None:
results.append(mail_result + "\n")
# Ensure every required alias exists.
auto_aliases = { }
existing_users = get_mail_users(env)
existing_alias_records = get_mail_aliases(env)
existing_aliases = set(a for a, *_ in existing_alias_records) # just first entry in tuple
# Map required aliases to the administrator alias (which should be created manually).
administrator = get_system_administrator(env)
required_aliases = get_required_aliases(env)
for alias in required_aliases:
if alias == administrator: continue # don't make an alias from the administrator to itself --- this alias must be created manually
auto_aliases[alias] = administrator
def ensure_admin_alias_exists(address):
# If a user account exists with that address, we're good.
if address in existing_users:
return
# Add domain maps from Unicode forms of IDNA domains to the ASCII forms stored in the alias table.
for domain in get_mail_domains(env):
try:
domain_unicode = idna.decode(domain.encode("ascii"))
if domain == domain_unicode: continue # not an IDNA/Unicode domain
auto_aliases["@" + domain_unicode] = "@" + domain
except (ValueError, UnicodeError, idna.IDNAError):
continue
# If the alias already exists, we're good.
if address in existing_aliases:
return
add_auto_aliases(auto_aliases, env)
# Doesn't exist.
administrator = get_system_administrator(env)
if address == administrator: return # don't make an alias from the administrator to itself --- this alias must be created manually
add_mail_alias(address, administrator, "", env, do_kick=False)
if administrator not in existing_aliases: return # don't report the alias in output if the administrator alias isn't in yet -- this is a hack to suppress confusing output on initial setup
results.append("added alias %s (=> %s)\n" % (address, administrator))
for address in required_aliases:
ensure_admin_alias_exists(address)
# Remove auto-generated postmaster/admin on domains we no
# longer have any other email addresses for.
for address, forwards_to, *_ in existing_alias_records:
# Remove auto-generated postmaster/admin/abuse aliases from the main aliases table.
# They are now stored in the auto_aliases table.
for address, forwards_to, _permitted_senders, auto in get_mail_aliases(env):
user, domain = address.split("@")
if user in ("postmaster", "admin", "abuse") \
if user in {"postmaster", "admin", "abuse"} \
and address not in required_aliases \
and forwards_to == get_system_administrator(env):
and forwards_to == get_system_administrator(env) \
and not auto:
remove_mail_alias(address, env, do_kick=False)
results.append("removed alias %s (was to %s; domain no longer used for email)\n" % (address, forwards_to))
results.append(f"removed alias {address} (was to {forwards_to}; domain no longer used for email)\n")
# Update DNS and nginx in case any domains are added/removed.
@ -596,12 +603,11 @@ def kick(env, mail_result=None):
def validate_password(pw):
# validate password
if pw.strip() == "":
raise ValueError("No password provided.")
if re.search(r"[\s]", pw):
raise ValueError("Passwords cannot contain spaces.")
if len(pw) < 4:
raise ValueError("Passwords must be at least four characters.")
msg = "No password provided."
raise ValueError(msg)
if len(pw) < 8:
msg = "Passwords must be at least eight characters."
raise ValueError(msg)
if __name__ == "__main__":
import sys

management/mfa.py (new file, 145 lines)

@ -0,0 +1,145 @@
import base64
import hmac
import io
import os
import pyotp
import qrcode
from mailconfig import open_database
def get_user_id(email, c):
c.execute('SELECT id FROM users WHERE email=?', (email,))
r = c.fetchone()
if not r: raise ValueError("User does not exist.")
return r[0]
def get_mfa_state(email, env):
c = open_database(env)
c.execute('SELECT id, type, secret, mru_token, label FROM mfa WHERE user_id=?', (get_user_id(email, c),))
return [
{ "id": r[0], "type": r[1], "secret": r[2], "mru_token": r[3], "label": r[4] }
for r in c.fetchall()
]
def get_public_mfa_state(email, env):
mfa_state = get_mfa_state(email, env)
return [
{ "id": s["id"], "type": s["type"], "label": s["label"] }
for s in mfa_state
]
def get_hash_mfa_state(email, env):
mfa_state = get_mfa_state(email, env)
return [
{ "id": s["id"], "type": s["type"], "secret": s["secret"] }
for s in mfa_state
]
def enable_mfa(email, type, secret, token, label, env):
if type == "totp":
validate_totp_secret(secret)
# Sanity check with the provided current token.
totp = pyotp.TOTP(secret)
if not totp.verify(token, valid_window=1):
msg = "Invalid token."
raise ValueError(msg)
else:
msg = "Invalid MFA type."
raise ValueError(msg)
conn, c = open_database(env, with_connection=True)
c.execute('INSERT INTO mfa (user_id, type, secret, label) VALUES (?, ?, ?, ?)', (get_user_id(email, c), type, secret, label))
conn.commit()
def set_mru_token(email, mfa_id, token, env):
conn, c = open_database(env, with_connection=True)
c.execute('UPDATE mfa SET mru_token=? WHERE user_id=? AND id=?', (token, get_user_id(email, c), mfa_id))
conn.commit()
def disable_mfa(email, mfa_id, env):
conn, c = open_database(env, with_connection=True)
if mfa_id is None:
# Disable all MFA for a user.
c.execute('DELETE FROM mfa WHERE user_id=?', (get_user_id(email, c),))
else:
# Disable a particular MFA mode for a user.
c.execute('DELETE FROM mfa WHERE user_id=? AND id=?', (get_user_id(email, c), mfa_id))
conn.commit()
return c.rowcount > 0
def validate_totp_secret(secret):
if not isinstance(secret, str) or secret.strip() == "":
msg = "No secret provided."
raise ValueError(msg)
if len(secret) != 32:
msg = "Secret should be a 32 characters base32 string"
raise ValueError(msg)
def provision_totp(email, env):
# Make a new secret.
secret = base64.b32encode(os.urandom(20)).decode('utf-8')
validate_totp_secret(secret) # sanity check
# Make a URI that we encode within a QR code.
uri = pyotp.TOTP(secret).provisioning_uri(
name=email,
issuer_name=env["PRIMARY_HOSTNAME"] + " Mail-in-a-Box Control Panel"
)
# Generate a QR code as a base64-encoded PNG image.
qr = qrcode.make(uri)
byte_arr = io.BytesIO()
qr.save(byte_arr, format='PNG')
png_b64 = base64.b64encode(byte_arr.getvalue()).decode('utf-8')
return {
"type": "totp",
"secret": secret,
"qr_code_base64": png_b64
}
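A minimal sketch of the same TOTP round trip using pyotp directly (the email address and hostname are made up):
import base64, os
import pyotp

secret = base64.b32encode(os.urandom(20)).decode('utf-8')     # same shape as provision_totp()
totp = pyotp.TOTP(secret)
uri = totp.provisioning_uri(name="user@example.com",
	issuer_name="box.example.com Mail-in-a-Box Control Panel")
token = totp.now()                                            # what the authenticator app would show
assert totp.verify(token, valid_window=1)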
def validate_auth_mfa(email, request, env):
# Validates that a login request satisfies any MFA modes
# that have been enabled for the user's account. Returns
# a tuple (status, [hints]). status is True for a successful
# MFA login, False for a missing token. If status is False,
# hints is an array of codes that indicate what the user
# can try. Possible codes are:
# "missing-totp-token"
# "invalid-totp-token"
mfa_state = get_mfa_state(email, env)
# If no MFA modes are added, return True.
if len(mfa_state) == 0:
return (True, [])
# Try the enabled MFA modes.
hints = set()
for mfa_mode in mfa_state:
if mfa_mode["type"] == "totp":
# Check that a token is present in the X-Auth-Token header.
# If not, give a hint that one can be supplied.
token = request.headers.get('x-auth-token')
if not token:
hints.add("missing-totp-token")
continue
# Check for a replay attack.
if hmac.compare_digest(token, mfa_mode['mru_token'] or ""):
# A reused (most recently used) token is treated as invalid; skip this MFA mode.
hints.add("invalid-totp-token")
continue
# Check the token.
totp = pyotp.TOTP(mfa_mode["secret"])
if not totp.verify(token, valid_window=1):
hints.add("invalid-totp-token")
continue
# On success, record the token to prevent a replay attack.
set_mru_token(email, mfa_mode['id'], token, env)
return (True, [])
# On a failed login, indicate failure and any hints for what the user can do instead.
return (False, list(hints))

management/munin_start.sh (new executable file, 2 lines)

@ -0,0 +1,2 @@
#!/bin/bash
mkdir -p /var/run/munin && chown munin /var/run/munin


@ -1,11 +1,11 @@
#!/usr/bin/python3
#!/usr/local/lib/mailinabox/env/bin/python
# Utilities for installing and selecting SSL certificates.
import os, os.path, re, shutil
import os, os.path, re, shutil, subprocess, tempfile
from utils import shell, safe_domain_name, sort_domains
import idna
import functools
import operator
# SELECTING SSL CERTIFICATES FOR USE IN WEB
@ -25,6 +25,16 @@ def get_ssl_certificates(env):
if not os.path.exists(ssl_root):
return
for fn in os.listdir(ssl_root):
if fn == 'ssl_certificate.pem':
# This is always a symbolic link
# to the certificate to use for
# PRIMARY_HOSTNAME. Don't let it
# be eligible for use because we
# could end up creating a symlink
# to itself --- we want to find
# the cert that it should be a
# symlink to.
continue
fn = os.path.join(ssl_root, fn)
if os.path.isfile(fn):
yield fn
@ -49,32 +59,34 @@ def get_ssl_certificates(env):
# Not a valid PEM format for a PEM type we care about.
continue
# Remember where we got this object.
pem._filename = fn
# Is it a private key?
if isinstance(pem, RSAPrivateKey):
private_keys[pem.public_key().public_numbers()] = pem
private_keys[pem.public_key().public_numbers()] = { "filename": fn, "key": pem }
# Is it a certificate?
if isinstance(pem, Certificate):
certificates.append(pem)
certificates.append({ "filename": fn, "cert": pem })
# Process the certificates.
domains = { }
for cert in certificates:
# What domains is this certificate good for?
cert_domains, primary_domain = get_certificate_domains(cert)
cert._primary_domain = primary_domain
cert_domains, primary_domain = get_certificate_domains(cert["cert"])
cert["primary_domain"] = primary_domain
# Is there a private key file for this certificate?
private_key = private_keys.get(cert.public_key().public_numbers())
private_key = private_keys.get(cert["cert"].public_key().public_numbers())
if not private_key:
continue
cert._private_key = private_key
cert["private_key"] = private_key
# Add this cert to the list of certs usable for the domains.
for domain in cert_domains:
# The primary hostname can only use a certificate mapped
# to the system private key.
if domain == env['PRIMARY_HOSTNAME'] and cert["private_key"]["filename"] != os.path.join(env['STORAGE_ROOT'], 'ssl', 'ssl_private_key.pem'):
continue
domains.setdefault(domain, []).append(cert)
# Sort the certificates to prefer good ones.
@ -82,12 +94,13 @@ def get_ssl_certificates(env):
now = datetime.datetime.utcnow()
ret = { }
for domain, cert_list in domains.items():
#for c in cert_list: print(domain, c.not_valid_before, c.not_valid_after, "("+str(now)+")", c.issuer, c.subject, c._filename)
cert_list.sort(key = lambda cert : (
# must be valid NOW
cert.not_valid_before <= now <= cert.not_valid_after,
cert["cert"].not_valid_before <= now <= cert["cert"].not_valid_after,
# prefer one that is not self-signed
cert.issuer != cert.subject,
cert["cert"].issuer != cert["cert"].subject,
###########################################################
# The above lines ensure that valid certificates are chosen
@ -97,7 +110,7 @@ def get_ssl_certificates(env):
# prefer one with the expiration furthest into the future so
# that we can easily rotate to new certs as we get them
cert.not_valid_after,
cert["cert"].not_valid_after,
###########################################################
# We always choose the certificate that is good for the
@ -112,36 +125,37 @@ def get_ssl_certificates(env):
# in case a certificate is installed in multiple paths,
# prefer the... lexicographically last one?
cert._filename,
cert["filename"],
), reverse=True)
cert = cert_list.pop(0)
ret[domain] = {
"private-key": cert._private_key._filename,
"certificate": cert._filename,
"primary-domain": cert._primary_domain,
"certificate_object": cert,
"private-key": cert["private_key"]["filename"],
"certificate": cert["filename"],
"primary-domain": cert["primary_domain"],
"certificate_object": cert["cert"],
}
return ret
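A minimal usage sketch of the returned mapping (the domain is hypothetical):
# certs = get_ssl_certificates(env)
# best = certs.get("mail.example.com")
# if best:
#     print(best["certificate"], best["private-key"], best["primary-domain"])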
def get_domain_ssl_files(domain, ssl_certificates, env, allow_missing_cert=False, raw=False):
# Get the system certificate info.
ssl_private_key = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_private_key.pem'))
ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem'))
system_certificate = {
"private-key": ssl_private_key,
"certificate": ssl_certificate,
"primary-domain": env['PRIMARY_HOSTNAME'],
"certificate_object": load_pem(load_cert_chain(ssl_certificate)[0]),
}
def get_domain_ssl_files(domain, ssl_certificates, env, allow_missing_cert=False, use_main_cert=True):
if use_main_cert or not allow_missing_cert:
# Get the system certificate info.
ssl_private_key = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_private_key.pem'))
ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem'))
system_certificate = {
"private-key": ssl_private_key,
"certificate": ssl_certificate,
"primary-domain": env['PRIMARY_HOSTNAME'],
"certificate_object": load_pem(load_cert_chain(ssl_certificate)[0]),
}
if domain == env['PRIMARY_HOSTNAME']:
if use_main_cert and domain == env['PRIMARY_HOSTNAME']:
# The primary domain must use the server certificate because
# it is hard-coded in some service configuration files.
return system_certificate
wildcard_domain = re.sub("^[^\.]+", "*", domain)
wildcard_domain = re.sub(r"^[^\.]+", "*", domain)
if domain in ssl_certificates:
return ssl_certificates[domain]
elif wildcard_domain in ssl_certificates:
@ -156,127 +170,123 @@ def get_domain_ssl_files(domain, ssl_certificates, env, allow_missing_cert=False
# PROVISIONING CERTIFICATES FROM LETSENCRYPT
def get_certificates_to_provision(env, show_extended_problems=True, force_domains=None):
# Get a set of domain names that we should now provision certificates
# for. Provision if a domain name has no valid certificate or if any
# certificate is expiring in 14 days. If provisioning anything, also
# provision certificates expiring within 30 days. The period between
# 14 and 30 days allows us to consolidate domains into multi-domain
# certificates for domains expiring around the same time.
def get_certificates_to_provision(env, limit_domains=None, show_valid_certs=True):
# Get a set of domain names that we can provision certificates for
# using certbot. We start with domains that the box is serving web
# for and subtract:
# * domains not in limit_domains if limit_domains is not empty
# * domains with custom "A" records, i.e. they are hosted elsewhere
# * domains with actual "A" records that point elsewhere (misconfiguration)
# * domains that already have certificates that will be valid for a while
from web_update import get_web_domains
from status_checks import query_dns, normalize_ip
import datetime
now = datetime.datetime.utcnow()
existing_certs = get_ssl_certificates(env)
# Get domains with missing & expiring certificates.
certs = get_ssl_certificates(env)
domains = set()
domains_if_any = set()
problems = { }
for domain in get_web_domains(env):
# If the user really wants a cert for certain domains, include it.
if force_domains:
if force_domains == "ALL" or (isinstance(force_domains, list) and domain in force_domains):
domains.add(domain)
plausible_web_domains = get_web_domains(env, exclude_dns_elsewhere=False)
actual_web_domains = get_web_domains(env)
domains_to_provision = set()
domains_cant_provision = { }
for domain in plausible_web_domains:
# Skip domains that the user doesn't want to provision now.
if limit_domains and domain not in limit_domains:
continue
# Include this domain if its certificate is missing, self-signed, or expiring soon.
try:
cert = get_domain_ssl_files(domain, certs, env, allow_missing_cert=True)
except FileNotFoundError as e:
# system certificate is not present
problems[domain] = "Error: " + str(e)
continue
if cert is None:
# No valid certificate available.
domains.add(domain)
# Check that there isn't an explicit A/AAAA record.
if domain not in actual_web_domains:
domains_cant_provision[domain] = "The domain has a custom DNS A/AAAA record that points the domain elsewhere, so there is no point to installing a TLS certificate here and we could not automatically provision one anyway because provisioning requires access to the website (which isn't here)."
# Check that the DNS resolves to here.
else:
cert = cert["certificate_object"]
if cert.issuer == cert.subject:
# This is self-signed. Get a real one.
domains.add(domain)
# Valid certificate today, but is it expiring soon?
elif cert.not_valid_after-now < datetime.timedelta(days=14):
domains.add(domain)
elif cert.not_valid_after-now < datetime.timedelta(days=30):
domains_if_any.add(domain)
# It's valid. Should we report its validness?
elif show_extended_problems:
problems[domain] = "The certificate is valid for at least another 30 days --- no need to replace."
# Does the domain resolve to this machine in public DNS? If not,
# we can't do domain control validation. If IPv6 is configured,
# make sure both IPv4 and IPv6 are correct because we don't know
# how Let's Encrypt will connect.
bad_dns = []
for rtype, value in [("A", env["PUBLIC_IP"]), ("AAAA", env.get("PUBLIC_IPV6"))]:
if not value: continue # IPv6 is not configured
response = query_dns(domain, rtype)
if response != normalize_ip(value):
bad_dns.append(f"{response} ({rtype})")
# Warn the user about domains hosted elsewhere.
if not force_domains and show_extended_problems:
for domain in set(get_web_domains(env, exclude_dns_elsewhere=False)) - set(get_web_domains(env)):
problems[domain] = "The domain's DNS is pointed elsewhere, so there is no point to installing a TLS certificate here and we could not automatically provision one anyway because provisioning requires access to the website (which isn't here)."
if bad_dns:
domains_cant_provision[domain] = "The domain name does not resolve to this machine: " \
+ (", ".join(bad_dns)) \
+ "."
# Filter out domains that we can't provision a certificate for.
def can_provision_for_domain(domain):
# Let's Encrypt doesn't yet support IDNA domains.
# We store domains in IDNA (ASCII). To see if this domain is IDNA,
# we'll see if its IDNA-decoded form is different.
if idna.decode(domain.encode("ascii")) != domain:
problems[domain] = "Let's Encrypt does not yet support provisioning certificates for internationalized domains."
return False
else:
# DNS is all good.
# Does the domain resolve to this machine in public DNS? If not,
# we can't do domain control validation. If IPv6 is configured,
# make sure both IPv4 and IPv6 are correct because we don't know
# how Let's Encrypt will connect.
import dns.resolver
for rtype, value in [("A", env["PUBLIC_IP"]), ("AAAA", env.get("PUBLIC_IPV6"))]:
if not value: continue # IPv6 is not configured
try:
# Must make the qname absolute to prevent a fall-back lookup with a
# search domain appended, by adding a period to the end.
response = dns.resolver.query(domain + ".", rtype)
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer) as e:
problems[domain] = "DNS isn't configured properly for this domain: DNS resolution failed (%s: %s)." % (rtype, str(e) or repr(e)) # NoAnswer's str is empty
return False
except Exception as e:
problems[domain] = "DNS isn't configured properly for this domain: DNS lookup had an error: %s." % str(e)
return False
if len(response) != 1 or str(response[0]) != value:
problems[domain] = "Domain control validation cannot be performed for this domain because DNS points the domain to another machine (%s %s)." % (rtype, ", ".join(str(r) for r in response))
return False
# Check for a good existing cert.
existing_cert = get_domain_ssl_files(domain, existing_certs, env, use_main_cert=False, allow_missing_cert=True)
if existing_cert:
existing_cert_check = check_certificate(domain, existing_cert['certificate'], existing_cert['private-key'],
warn_if_expiring_soon=14)
if existing_cert_check[0] == "OK":
if show_valid_certs:
domains_cant_provision[domain] = "The domain has a valid certificate already. ({} Certificate: {}, private key {})".format(
existing_cert_check[1],
existing_cert['certificate'],
existing_cert['private-key'])
continue
return True
domains_to_provision.add(domain)
domains = set(filter(can_provision_for_domain, domains))
# If there are any domains we definitely will provision for, add in
# additional domains to do at this time.
if len(domains) > 0:
domains |= set(filter(can_provision_for_domain, domains_if_any))
return (domains, problems)
def provision_certificates(env, agree_to_tos_url=None, logger=None, show_extended_problems=True, force_domains=None, jsonable=False):
import requests.exceptions
import acme.messages
from free_tls_certificates import client
return (domains_to_provision, domains_cant_provision)
def provision_certificates(env, limit_domains):
# What domains should we provision certificates for? And what
# errors prevent provisioning for other domains.
domains, problems = get_certificates_to_provision(env, force_domains=force_domains, show_extended_problems=show_extended_problems)
domains, domains_cant_provision = get_certificates_to_provision(env, limit_domains=limit_domains)
# Exit fast if there is nothing to do.
if len(domains) == 0:
return {
"requests": [],
"problems": problems,
}
# Build a list of what happened on each domain or domain-set.
ret = []
for domain, error in domains_cant_provision.items():
ret.append({
"domains": [domain],
"log": [error],
"result": "skipped",
})
# Break into groups of up to 100 certificates at a time, which is Let's Encrypt's
# limit for a single certificate. We'll sort to put related domains together.
domains = sort_domains(domains, env)
certs = []
while len(domains) > 0:
certs.append( domains[0:100] )
domains = domains[100:]
# Break into groups by DNS zone: Group every domain with its parent domain, if
# its parent domain is in the list of domains to request a certificate for.
# Start with the zones so that if the zone doesn't need a certificate itself,
# its children will still be grouped together. Sort the provision domains to
# put parents ahead of children.
# Since Let's Encrypt requests are limited to 100 domains at a time,
# we'll create a list of lists of domains where the inner lists have
# at most 100 items. By sorting we also get the DNS zone domain as the first
# entry in each list (unless we overflow beyond 100) which ends up as the
# primary domain listed in each certificate.
from dns_update import get_dns_zones
certs = { }
for zone, _zonefile in get_dns_zones(env):
certs[zone] = [[]]
for domain in sort_domains(domains, env):
# Does the domain end with any domain we've seen so far?
for parent in certs:
if domain.endswith("." + parent):
# Add this to the parent's list of domains.
# Start a new group if the list already has
# 100 items.
if len(certs[parent][-1]) == 100:
certs[parent].append([])
certs[parent][-1].append(domain)
break
else:
# This domain is not a child of any domain we've seen yet, so
# start a new group. This shouldn't happen since every zone
# was already added.
certs[domain] = [[domain]]
# Flatten to a list of lists of domains (from a mapping). Remove empty
# lists (zones with no domains that need certs).
certs = functools.reduce(operator.iadd, certs.values(), [])
certs = [_ for _ in certs if len(_) > 0]
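The two lines above flatten the {zone: [[domains, ...], ...]} mapping into a list of non-empty groups; the same operation on toy data (hypothetical domains):
import functools, operator
groups = {"example.com": [["example.com", "www.example.com"]], "example.net": [[]]}
flat = functools.reduce(operator.iadd, groups.values(), [])   # [['example.com', 'www.example.com'], []]
flat = [g for g in flat if len(g) > 0]                        # [['example.com', 'www.example.com']]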
# Prepare to provision.
@ -285,268 +295,125 @@ def provision_certificates(env, agree_to_tos_url=None, logger=None, show_extende
if not os.path.exists(account_path):
os.mkdir(account_path)
# Where should we put ACME challenge files. This is mapped to /.well-known/acme_challenge
# by the nginx configuration.
challenges_path = os.path.join(account_path, 'acme_challenges')
if not os.path.exists(challenges_path):
os.mkdir(challenges_path)
# Read in the private key that we use for all TLS certificates. We'll need that
# to generate a CSR (done by free_tls_certificates).
with open(os.path.join(env['STORAGE_ROOT'], 'ssl/ssl_private_key.pem'), 'rb') as f:
private_key = f.read()
# Provision certificates.
ret = []
for domain_list in certs:
# For return.
ret_item = {
ret.append({
"domains": domain_list,
"log": [],
}
ret.append(ret_item)
# Logging for free_tls_certificates.
def my_logger(message):
if logger: logger(message)
ret_item["log"].append(message)
# Attempt to provision a certificate.
})
try:
try:
cert = client.issue_certificate(
domain_list,
account_path,
agree_to_tos_url=agree_to_tos_url,
private_key=private_key,
logger=my_logger)
# Create a CSR file for our master private key so that certbot
# uses our private key.
key_file = os.path.join(env['STORAGE_ROOT'], 'ssl', 'ssl_private_key.pem')
with tempfile.NamedTemporaryFile() as csr_file:
# We could use openssl, but certbot requires
# that the CN domain and SAN domains match
# the domain list passed to certbot, and adding
# SAN domains with openssl req is ridiculously complicated.
# subprocess.check_output([
# "openssl", "req", "-new",
# "-key", key_file,
# "-out", csr_file.name,
# "-subj", "/CN=" + domain_list[0],
# "-sha256" ])
from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.serialization import Encoding
from cryptography.hazmat.primitives import hashes
from cryptography.x509.oid import NameOID
builder = x509.CertificateSigningRequestBuilder()
builder = builder.subject_name(x509.Name([ x509.NameAttribute(NameOID.COMMON_NAME, domain_list[0]) ]))
builder = builder.add_extension(x509.BasicConstraints(ca=False, path_length=None), critical=True)
builder = builder.add_extension(x509.SubjectAlternativeName(
[x509.DNSName(d) for d in domain_list]
), critical=False)
request = builder.sign(load_pem(load_cert_chain(key_file)[0]), hashes.SHA256(), default_backend())
with open(csr_file.name, "wb") as f:
f.write(request.public_bytes(Encoding.PEM))
except client.NeedToTakeAction as e:
# Write out the ACME challenge files.
for action in e.actions:
if isinstance(action, client.NeedToInstallFile):
fn = os.path.join(challenges_path, action.file_name)
with open(fn, 'w') as f:
f.write(action.contents)
else:
raise ValueError(str(action))
# Provision, writing to a temporary file.
webroot = os.path.join(account_path, 'webroot')
os.makedirs(webroot, exist_ok=True)
with tempfile.TemporaryDirectory() as d:
cert_file = os.path.join(d, 'cert_and_chain.pem')
print("Provisioning TLS certificates for " + ", ".join(domain_list) + ".")
certbotret = subprocess.check_output([
"certbot",
"certonly",
#"-v", # just enough to see ACME errors
"--non-interactive", # will fail if user hasn't registered during Mail-in-a-Box setup
# Try to provision now that the challenge files are installed.
"-d", ",".join(domain_list), # first will be main domain
cert = client.issue_certificate(
domain_list,
account_path,
private_key=private_key,
logger=my_logger)
"--csr", csr_file.name, # use our private key; unfortunately this doesn't work with auto-renew so we need to save cert manually
"--cert-path", os.path.join(d, 'cert'), # we only use the full chain
"--chain-path", os.path.join(d, 'chain'), # we only use the full chain
"--fullchain-path", cert_file,
except client.NeedToAgreeToTOS as e:
# The user must agree to the Let's Encrypt terms of service agreement
# before any further action can be taken.
ret_item.update({
"result": "agree-to-tos",
"url": e.url,
})
"--webroot", "--webroot-path", webroot,
except client.WaitABit as e:
# We need to hold on for a bit before querying again to see if we can
# acquire a provisioned certificate.
import time, datetime
ret_item.update({
"result": "wait",
"until": e.until_when if not jsonable else e.until_when.isoformat(),
"seconds": (e.until_when - datetime.datetime.now()).total_seconds()
})
"--config-dir", account_path,
#"--staging",
], stderr=subprocess.STDOUT).decode("utf8")
install_cert_copy_file(cert_file, env)
except client.AccountDataIsCorrupt as e:
# This is an extremely rare condition.
ret_item.update({
"result": "error",
"message": "Something unexpected went wrong. It looks like your local Let's Encrypt account data is corrupted. There was a problem with the file " + e.account_file_path + ".",
})
ret[-1]["log"].append(certbotret)
ret[-1]["result"] = "installed"
except subprocess.CalledProcessError as e:
ret[-1]["log"].append(e.output.decode("utf8"))
ret[-1]["result"] = "error"
except Exception as e:
ret[-1]["log"].append(str(e))
ret[-1]["result"] = "error"
except (client.InvalidDomainName, client.NeedToTakeAction, client.ChallengeFailed, client.RateLimited, acme.messages.Error, requests.exceptions.RequestException) as e:
ret_item.update({
"result": "error",
"message": "Something unexpected went wrong: " + str(e),
})
else:
# A certificate was issued.
install_status = install_cert(domain_list[0], cert['cert'].decode("ascii"), b"\n".join(cert['chain']).decode("ascii"), env, raw=True)
# str indicates the certificate was not installed.
if isinstance(install_status, str):
ret_item.update({
"result": "error",
"message": "Something unexpected was wrong with the provisioned certificate: " + install_status,
})
else:
# A list indicates success and what happened next.
ret_item["log"].extend(install_status)
ret_item.update({
"result": "installed",
})
# Run post-install steps.
ret.extend(post_install_func(env))
# Return what happened with each certificate request.
return {
"requests": ret,
"problems": problems,
}
return ret
def provision_certificates_cmdline():
import sys
from utils import load_environment, exclusive_process
from exclusiveprocess import Lock
exclusive_process("update_tls_certificates")
from utils import load_environment
Lock(die=True).forever()
env = load_environment()
verbose = False
headless = False
force_domains = None
show_extended_problems = True
args = list(sys.argv)
args.pop(0) # program name
if args and args[0] == "-v":
verbose = True
args.pop(0)
if args and args[0] == "q":
show_extended_problems = False
args.pop(0)
if args and args[0] == "--headless":
headless = True
args.pop(0)
if args and args[0] == "--force":
force_domains = "ALL"
args.pop(0)
else:
force_domains = args
quiet = False
domains = []
agree_to_tos_url = None
while True:
# Run the provisioning script. This installs certificates. If there are
# a very large number of domains on this box, it issues separate
# certificates for groups of domains. We have to check the result for
# each group.
def my_logger(message):
if verbose:
print(">", message)
status = provision_certificates(env, agree_to_tos_url=agree_to_tos_url, logger=my_logger, force_domains=force_domains, show_extended_problems=show_extended_problems)
agree_to_tos_url = None # reset to prevent infinite looping
for arg in sys.argv[1:]:
if arg == "-q":
quiet = True
else:
domains.append(arg)
if not status["requests"]:
# No domains need certificates.
if not headless or verbose:
if len(status["problems"]) == 0:
print("No domains hosted on this box need a new TLS certificate at this time.")
elif len(status["problems"]) > 0:
print("No TLS certificates could be provisoned at this time:")
print()
for domain in sort_domains(status["problems"], env):
print("%s: %s" % (domain, status["problems"][domain]))
# Go.
status = provision_certificates(env, limit_domains=domains)
sys.exit(0)
# What happened?
wait_until = None
wait_domains = []
for request in status["requests"]:
if request["result"] == "agree-to-tos":
# We may have asked already in a previous iteration.
if agree_to_tos_url is not None:
continue
# Can't ask the user a question in this mode. Warn the user that something
# needs to be done.
if headless:
print(", ".join(request["domains"]) + " need a new or renewed TLS certificate.")
print()
print("This box can't do that automatically for you until you agree to Let's Encrypt's")
print("Terms of Service agreement. Use the Mail-in-a-Box control panel to provision")
print("certificates for these domains.")
sys.exit(1)
print("""
I'm going to provision a TLS certificate (formerly called an SSL certificate)
for you from Let's Encrypt (letsencrypt.org).
TLS certificates are cryptographic keys that ensure communication between
you and this box is secure when getting and sending mail and visiting
websites hosted on this box. Let's Encrypt is a free provider of TLS
certificates.
Please open this document in your web browser:
%s
It is Let's Encrypt's terms of service agreement. If you agree, I can
provision that TLS certificate. If you don't agree, you will have an
opportunity to install your own TLS certificate from the Mail-in-a-Box
control panel.
Do you agree to the agreement? Type Y or N and press <ENTER>: """
% request["url"], end='', flush=True)
if sys.stdin.readline().strip().upper() != "Y":
print("\nYou didn't agree. Quitting.")
sys.exit(1)
# Okay, indicate agreement on next iteration.
agree_to_tos_url = request["url"]
if request["result"] == "wait":
# Must wait. We'll record until when. The wait occurs below.
if wait_until is None:
wait_until = request["until"]
else:
wait_until = max(wait_until, request["until"])
wait_domains += request["domains"]
if request["result"] == "error":
print(", ".join(request["domains"]) + ":")
print(request["message"])
if request["result"] == "installed":
print("A TLS certificate was successfully installed for " + ", ".join(request["domains"]) + ".")
if wait_until:
# Wait, then loop.
import time, datetime
# Show what happened.
for request in status:
if isinstance(request, str):
print(request)
else:
if quiet and request['result'] == 'skipped':
continue
print(request['result'] + ":", ", ".join(request['domains']) + ":")
for line in request["log"]:
print(line)
print()
print("A TLS certificate was requested for: " + ", ".join(wait_domains) + ".")
first = True
while wait_until > datetime.datetime.now():
if not headless or first:
print ("We have to wait", int(round((wait_until - datetime.datetime.now()).total_seconds())), "seconds for the certificate to be issued...")
time.sleep(10)
first = False
continue # Loop!
if agree_to_tos_url:
# The user agrees to the TOS. Loop to try again by agreeing.
continue # Loop!
# Unless we were instructed to wait, or we just agreed to the TOS,
# we're done for now.
break
# And finally show the domains with problems.
if len(status["problems"]) > 0:
print("TLS certificates could not be provisoned for:")
for domain in sort_domains(status["problems"], env):
print("%s: %s" % (domain, status["problems"][domain]))
# INSTALLING A NEW CERTIFICATE FROM THE CONTROL PANEL
def create_csr(domain, ssl_key, country_code, env):
return shell("check_output", [
"openssl", "req", "-new",
"-key", ssl_key,
"-sha256",
"-subj", "/C=%s/ST=/L=/O=/CN=%s" % (country_code, domain)])
"openssl", "req", "-new",
"-key", ssl_key,
"-sha256",
"-subj", f"/C={country_code}/CN={domain}"])
def install_cert(domain, ssl_cert, ssl_chain, env, raw=False):
# Write the combined cert+chain to a temporary path and validate that it is OK.
@ -567,13 +434,23 @@ def install_cert(domain, ssl_cert, ssl_chain, env, raw=False):
cert_status += " " + cert_status_details
return cert_status
# Copy certificate into ssl directory.
install_cert_copy_file(fn, env)
# Run post-install steps.
ret = post_install_func(env)
if raw: return ret
return "\n".join(ret)
def install_cert_copy_file(fn, env):
# Where to put it?
# Make a unique path for the certificate.
from cryptography.hazmat.primitives import hashes
from binascii import hexlify
cert = load_pem(load_cert_chain(fn)[0])
all_domains, cn = get_certificate_domains(cert)
path = "%s-%s-%s.pem" % (
_all_domains, cn = get_certificate_domains(cert)
path = "{}-{}-{}.pem".format(
safe_domain_name(cn), # common name, which should be filename safe because it is IDNA-encoded, but in case of a malformed cert make sure it's ok to use as a filename
cert.not_valid_after.date().isoformat().replace("-", ""), # expiration date
hexlify(cert.fingerprint(hashes.SHA256())).decode("ascii")[0:8], # fingerprint prefix
@ -584,14 +461,26 @@ def install_cert(domain, ssl_cert, ssl_chain, env, raw=False):
os.makedirs(os.path.dirname(ssl_certificate), exist_ok=True)
shutil.move(fn, ssl_certificate)
ret = ["OK"]
# When updating the cert for PRIMARY_HOSTNAME, symlink it from the system
def post_install_func(env):
ret = []
# Get the certificate to use for PRIMARY_HOSTNAME.
ssl_certificates = get_ssl_certificates(env)
cert = get_domain_ssl_files(env['PRIMARY_HOSTNAME'], ssl_certificates, env, use_main_cert=False)
if not cert:
# Ruh-row, we don't have any certificate usable
# for the primary hostname.
ret.append("there is no valid certificate for " + env['PRIMARY_HOSTNAME'])
# Symlink the best cert for PRIMARY_HOSTNAME to the system
# certificate path, which is hard-coded for various purposes, and then
# restart postfix and dovecot.
if domain == env['PRIMARY_HOSTNAME']:
system_ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem'))
if cert and os.readlink(system_ssl_certificate) != cert['certificate']:
# Update symlink.
system_ssl_certificate = os.path.join(os.path.join(env["STORAGE_ROOT"], 'ssl', 'ssl_certificate.pem'))
ret.append("updating primary certificate")
ssl_certificate = cert['certificate']
os.unlink(system_ssl_certificate)
os.symlink(ssl_certificate, system_ssl_certificate)
@ -607,12 +496,12 @@ def install_cert(domain, ssl_cert, ssl_chain, env, raw=False):
# Update the web configuration so nginx picks up the new certificate file.
from web_update import do_web_update
ret.append( do_web_update(env) )
if raw: return ret
return "\n".join(ret)
return ret
# VALIDATION OF CERTIFICATES
def check_certificate(domain, ssl_certificate, ssl_private_key, warn_if_expiring_soon=True, rounded_time=False, just_check_domain=False):
def check_certificate(domain, ssl_certificate, ssl_private_key, warn_if_expiring_soon=10, rounded_time=False, just_check_domain=False):
# Check that the ssl_certificate & ssl_private_key files are good
# for the provided domain.
@ -632,12 +521,12 @@ def check_certificate(domain, ssl_certificate, ssl_private_key, warn_if_expiring
# First check that the domain name is one of the names allowed by
# the certificate.
if domain is not None:
certificate_names, cert_primary_name = get_certificate_domains(cert)
certificate_names, _cert_primary_name = get_certificate_domains(cert)
# Check that the domain appears among the acceptable names, or a wildcard
# form of the domain name (which is a stricter check than the specs but
# should work in normal cases).
wildcard_domain = re.sub("^[^\.]+", "*", domain)
wildcard_domain = re.sub(r"^[^\.]+", "*", domain)
if domain not in certificate_names and wildcard_domain not in certificate_names:
return ("The certificate is for the wrong domain name. It is for %s."
% ", ".join(sorted(certificate_names)), None)
@ -645,9 +534,10 @@ def check_certificate(domain, ssl_certificate, ssl_private_key, warn_if_expiring
# Second, check that the certificate matches the private key.
if ssl_private_key is not None:
try:
priv_key = load_pem(open(ssl_private_key, 'rb').read())
with open(ssl_private_key, 'rb') as f:
priv_key = load_pem(f.read())
except ValueError as e:
return ("The private key file %s is not a private key file: %s" % (ssl_private_key, str(e)), None)
return (f"The private key file {ssl_private_key} is not a private key file: {e!s}", None)
if not isinstance(priv_key, RSAPrivateKey):
return ("The private key file %s is not a private key file." % ssl_private_key, None)
@ -675,7 +565,7 @@ def check_certificate(domain, ssl_certificate, ssl_private_key, warn_if_expiring
import datetime
now = datetime.datetime.utcnow()
if not(cert.not_valid_before <= now <= cert.not_valid_after):
return ("The certificate has expired or is not yet valid. It is valid from %s to %s." % (cert.not_valid_before, cert.not_valid_after), None)
return (f"The certificate has expired or is not yet valid. It is valid from {cert.not_valid_before} to {cert.not_valid_after}.", None)
# Next validate that the certificate is valid. This checks whether the certificate
# is self-signed, that the chain of trust makes sense, that it is signed by a CA
@ -713,12 +603,12 @@ def check_certificate(domain, ssl_certificate, ssl_private_key, warn_if_expiring
ndays = (cert_expiration_date-now).days
if not rounded_time or ndays <= 10:
# Yikes better renew soon!
expiry_info = "The certificate expires in %d days on %s." % (ndays, cert_expiration_date.strftime("%x"))
expiry_info = "The certificate expires in %d days on %s." % (ndays, cert_expiration_date.date().isoformat())
else:
# We'll renew it with Lets Encrypt.
expiry_info = "The certificate expires on %s." % cert_expiration_date.strftime("%x")
expiry_info = "The certificate expires on %s." % cert_expiration_date.date().isoformat()
if ndays <= 10 and warn_if_expiring_soon:
if warn_if_expiring_soon and ndays <= warn_if_expiring_soon:
# Warn on day 10 to give 4 days for us to automatically renew the
# certificate, which occurs on day 14.
return ("The certificate is expiring soon: " + expiry_info, None)
@ -734,7 +624,8 @@ def load_cert_chain(pemfile):
pem = f.read() + b"\n" # ensure trailing newline
pemblocks = re.findall(re_pem, pem)
if len(pemblocks) == 0:
raise ValueError("File does not contain valid PEM data.")
msg = "File does not contain valid PEM data."
raise ValueError(msg)
return pemblocks
def load_pem(pem):
@ -745,9 +636,10 @@ def load_pem(pem):
from cryptography.hazmat.backends import default_backend
pem_type = re.match(b"-+BEGIN (.*?)-+[\r\n]", pem)
if pem_type is None:
raise ValueError("File is not a valid PEM-formatted file.")
msg = "File is not a valid PEM-formatted file."
raise ValueError(msg)
pem_type = pem_type.group(1)
if pem_type in (b"RSA PRIVATE KEY", b"PRIVATE KEY"):
if pem_type in {b"RSA PRIVATE KEY", b"PRIVATE KEY"}:
return serialization.load_pem_private_key(pem, password=None, backend=default_backend())
if pem_type == b"CERTIFICATE":
return load_pem_x509_certificate(pem, default_backend())


@ -1,22 +1,23 @@
#!/usr/bin/python3
#!/usr/local/lib/mailinabox/env/bin/python
#
# Checks that the upstream DNS has been set correctly and that
# TLS certificates have been signed, etc., and if not tells the user
# what to do next.
import sys, os, os.path, re, subprocess, datetime, multiprocessing.pool
import sys, os, os.path, re, datetime, multiprocessing.pool
import asyncio
import dns.reversename, dns.resolver
import dateutil.parser, dateutil.tz
import idna
import psutil
import postfix_mta_sts_resolver.resolver
from dns_update import get_dns_zones, build_tlsa_record, get_custom_dns_config, get_secondary_dns, get_custom_dns_record
from dns_update import get_dns_zones, build_tlsa_record, get_custom_dns_config, get_secondary_dns, get_custom_dns_records
from web_update import get_web_domains, get_domains_with_a_records
from ssl_certificates import get_ssl_certificates, get_domain_ssl_files, check_certificate
from mailconfig import get_mail_domains, get_mail_aliases
from utils import shell, sort_domains, load_env_vars_from_file, load_settings
from utils import shell, sort_domains, load_env_vars_from_file, load_settings, get_ssh_port, get_ssh_config_value
def get_services():
return [
@ -28,11 +29,11 @@ def get_services():
{ "name": "Spamassassin", "port": 10025, "public": False, },
{ "name": "OpenDKIM", "port": 8891, "public": False, },
{ "name": "OpenDMARC", "port": 8893, "public": False, },
{ "name": "Memcached", "port": 11211, "public": False, },
{ "name": "Mail-in-a-Box Management Daemon", "port": 10222, "public": False, },
{ "name": "SSH Login (ssh)", "port": get_ssh_port(), "public": True, },
{ "name": "Public DNS (nsd4)", "port": 53, "public": True, },
{ "name": "Incoming Mail (SMTP/postfix)", "port": 25, "public": True, },
{ "name": "Outgoing Mail (SMTP 465/postfix)", "port": 465, "public": True, },
{ "name": "Outgoing Mail (SMTP 587/postfix)", "port": 587, "public": True, },
#{ "name": "Postfix/master", "port": 10587, "public": True, },
{ "name": "IMAPS (dovecot)", "port": 993, "public": True, },
@ -41,7 +42,7 @@ def get_services():
{ "name": "HTTPS Web (nginx)", "port": 443, "public": True, },
]
def run_checks(rounded_values, env, output, pool):
def run_checks(rounded_values, env, output, pool, domains_to_check=None):
# run systems checks
output.add_heading("System")
@ -62,37 +63,25 @@ def run_checks(rounded_values, env, output, pool):
# perform other checks asynchronously
run_network_checks(env, output)
run_domain_checks(rounded_values, env, output, pool)
def get_ssh_port():
# Returns ssh port
try:
output = shell('check_output', ['sshd', '-T'])
except FileNotFoundError:
# sshd is not installed. That's ok.
return None
returnNext = False
for e in output.split():
if returnNext:
return int(e)
if e == "port":
returnNext = True
# Did not find port!
return None
run_domain_checks(rounded_values, env, output, pool, domains_to_check=domains_to_check)
def run_services_checks(env, output, pool):
# Check that system services are running.
all_running = True
fatal = False
ret = pool.starmap(check_service, ((i, service, env) for i, service in enumerate(get_services())), chunksize=1)
for i, running, fatal2, output2 in sorted(ret):
for _i, running, fatal2, output2 in sorted(ret):
if output2 is None: continue # skip check (e.g. no port was set, e.g. no sshd)
all_running = all_running and running
fatal = fatal or fatal2
output2.playback(output)
# Check fail2ban.
code, ret = shell('check_output', ["fail2ban-client", "status"], capture_stderr=True, trap=True)
if code != 0:
output.print_error("fail2ban is not running.")
all_running = False
if all_running:
output.print_ok("All system services are running.")
@ -117,7 +106,7 @@ def check_service(i, service, env):
try:
s.connect((ip, service["port"]))
return True
except OSError as e:
except OSError:
# timed out or some other odd error
return False
finally:
@ -133,7 +122,7 @@ def check_service(i, service, env):
# IPv4 ok but IPv6 failed. Try the PRIVATE_IPV6 address to see if the service is bound to the interface.
elif service["port"] != 53 and try_connect(env["PRIVATE_IPV6"]):
output.print_error("%s is running (and available over IPv4 and the local IPv6 address), but it is not publicly accessible at %s:%d." % (service['name'], env['PUBLIC_IP'], service['port']))
output.print_error("%s is running (and available over IPv4 and the local IPv6 address), but it is not publicly accessible at %s:%d." % (service['name'], env['PUBLIC_IPV6'], service['port']))
else:
output.print_error("%s is running and available over IPv4 but is not accessible over IPv6 at %s port %d." % (service['name'], env['PUBLIC_IPV6'], service['port']))
@ -144,18 +133,17 @@ def check_service(i, service, env):
output.print_error("%s is not running (port %d)." % (service['name'], service['port']))
# Why is nginx not running?
if not running and service["port"] in (80, 443):
if not running and service["port"] in {80, 443}:
output.print_line(shell('check_output', ['nginx', '-t'], capture_stderr=True, trap=True)[1].strip())
# Service should be running locally.
elif try_connect("127.0.0.1"):
running = True
else:
# Service should be running locally.
if try_connect("127.0.0.1"):
running = True
else:
output.print_error("%s is not running (port %d)." % (service['name'], service['port']))
output.print_error("%s is not running (port %d)." % (service['name'], service['port']))
# Flag if local DNS is not running.
if not running and service["port"] == 53 and service["public"] == False:
if not running and service["port"] == 53 and service["public"] is False:
fatal = True
return (i, running, fatal, output)
@ -169,14 +157,25 @@ def run_system_checks(rounded_values, env, output):
check_free_memory(rounded_values, env, output)
def check_ufw(env, output):
ufw = shell('check_output', ['ufw', 'status']).splitlines()
if not os.path.isfile('/usr/sbin/ufw'):
output.print_warning("""The ufw program was not installed. If your system is able to run iptables, rerun the setup.""")
return
code, ufw = shell('check_output', ['ufw', 'status'], trap=True)
if code != 0:
# The command failed; it's safe to say the firewall is disabled
output.print_warning("""The firewall is not working on this machine. An error was received
while trying to check the firewall. To investigate, run 'sudo ufw status'.""")
return
ufw = ufw.splitlines()
if ufw[0] == "Status: active":
not_allowed_ports = 0
for service in get_services():
if service["public"] and not is_port_allowed(ufw, service["port"]):
not_allowed_ports += 1
output.print_error("Port %s (%s) should be allowed in the firewall, please re-run the setup." % (service["port"], service["name"]))
output.print_error("Port {} ({}) should be allowed in the firewall, please re-run the setup.".format(service["port"], service["name"]))
if not_allowed_ports == 0:
output.print_ok("Firewall is active.")
@ -189,20 +188,15 @@ def is_port_allowed(ufw, port):
return any(re.match(str(port) +"[/ \t].*", item) for item in ufw)
def check_ssh_password(env, output):
# Check that SSH login with password is disabled. The openssh-server
# package may not be installed so check that before trying to access
# the configuration file.
if not os.path.exists("/etc/ssh/sshd_config"):
return
sshd = open("/etc/ssh/sshd_config").read()
if re.search("\nPasswordAuthentication\s+yes", sshd) \
or not re.search("\nPasswordAuthentication\s+no", sshd):
output.print_error("""The SSH server on this machine permits password-based login. A more secure
way to log in is using a public key. Add your SSH public key to $HOME/.ssh/authorized_keys, check
that you can log in without a password, set the option 'PasswordAuthentication no' in
/etc/ssh/sshd_config, and then restart the openssh via 'sudo service ssh restart'.""")
else:
output.print_ok("SSH disallows password-based login.")
config_value = get_ssh_config_value("passwordauthentication")
if config_value:
if config_value == "no":
output.print_ok("SSH disallows password-based login.")
else:
output.print_error("""The SSH server on this machine permits password-based login. A more secure
way to log in is using a public key. Add your SSH public key to $HOME/.ssh/authorized_keys, check
that you can log in without a password, set the option 'PasswordAuthentication no' in
/etc/ssh/sshd_config, and then restart the openssh via 'sudo service ssh restart'.""")
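get_ssh_config_value is imported from utils above but its definition is outside this diff; the following is a hedged sketch of how such a helper could read the effective configuration with `sshd -T`, modeled on the parsing that the removed get_ssh_port function used (the real helper in utils.py may differ):
import subprocess

def get_ssh_config_value(parameter_name):
    # `sshd -T` prints the effective configuration, one lowercase
    # "keyword value" pair per line.
    try:
        output = subprocess.check_output(["sshd", "-T"]).decode("utf-8")
    except FileNotFoundError:
        # sshd is not installed. That's ok.
        return None
    for line in output.splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0] == parameter_name.lower():
            return parts[1]
    return None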
def is_reboot_needed_due_to_package_installation():
return os.path.exists("/var/run/reboot-required")
@ -217,7 +211,7 @@ def check_software_updates(env, output):
else:
output.print_error("There are %d software packages that can be updated." % len(pkgs))
for p in pkgs:
output.print_line("%s (%s)" % (p["package"], p["version"]))
output.print_line("{} ({})".format(p["package"], p["version"]))
def check_system_aliases(env, output):
# Check that the administrator alias exists since that's where all
@ -229,17 +223,28 @@ def check_free_disk_space(rounded_values, env, output):
st = os.statvfs(env['STORAGE_ROOT'])
bytes_total = st.f_blocks * st.f_frsize
bytes_free = st.f_bavail * st.f_frsize
if not rounded_values:
disk_msg = "The disk has %s GB space remaining." % str(round(bytes_free/1024.0/1024.0/1024.0*10.0)/10)
else:
disk_msg = "The disk has less than %s%% space left." % str(round(bytes_free/bytes_total/10 + .5)*10)
disk_msg = "The disk has %.2f GB space remaining." % (bytes_free/1024.0/1024.0/1024.0)
if bytes_free > .3 * bytes_total:
if rounded_values: disk_msg = "The disk has more than 30% free space."
output.print_ok(disk_msg)
elif bytes_free > .15 * bytes_total:
if rounded_values: disk_msg = "The disk has less than 30% free space."
output.print_warning(disk_msg)
else:
if rounded_values: disk_msg = "The disk has less than 15% free space."
output.print_error(disk_msg)
# Check that there's only one duplicity cache. If there's more than one,
# it's probably no longer in use, and we can recommend clearing the cache
# to save space. The cache directory may not exist yet, which is OK.
backup_cache_path = os.path.join(env['STORAGE_ROOT'], 'backup/cache')
try:
backup_cache_count = len(os.listdir(backup_cache_path))
except:
backup_cache_count = 0
if backup_cache_count > 1:
output.print_warning(f"The backup cache directory {backup_cache_path} has more than one backup target cache. Consider clearing this directory to save disk space.")
def check_free_memory(rounded_values, env, output):
# Check free memory.
percent_free = 100 - psutil.virtual_memory().percent
@ -264,7 +269,7 @@ def run_network_checks(env, output):
# Stop if we cannot make an outbound connection on port 25. Many residential
# networks block outbound port 25 to prevent their network from sending spam.
# See if we can reach one of Google's MTAs with a 5-second timeout.
code, ret = shell("check_call", ["/bin/nc", "-z", "-w5", "aspmx.l.google.com", "25"], trap=True)
_code, ret = shell("check_call", ["/bin/nc", "-z", "-w5", "aspmx.l.google.com", "25"], trap=True)
if ret == 0:
output.print_ok("Outbound mail (SMTP port 25) is not blocked.")
else:
@ -277,16 +282,28 @@ def run_network_checks(env, output):
# The user might have ended up on an IP address that was previously in use
# by a spammer, or the user may be deploying on a residential network. We
# will not be able to reliably send mail in these cases.
# See https://www.spamhaus.org/news/article/807/using-our-public-mirrors-check-your-return-codes-now for
# information on Spamhaus return codes.
rev_ip4 = ".".join(reversed(env['PUBLIC_IP'].split('.')))
zen = query_dns(rev_ip4+'.zen.spamhaus.org', 'A', nxdomain=None)
if zen is None:
output.print_ok("IP address is not blacklisted by zen.spamhaus.org.")
elif zen == "[timeout]":
output.print_warning("Connection to zen.spamhaus.org timed out. Could not determine whether this box's IP address is blacklisted. Please try again later.")
elif zen == "[Not Set]":
output.print_warning("Could not connect to zen.spamhaus.org. Could not determine whether this box's IP address is blacklisted. Please try again later.")
elif zen == "127.255.255.252":
output.print_warning("Incorrect spamhaus query: %s. Could not determine whether this box's IP address is blacklisted." % (rev_ip4+'.zen.spamhaus.org'))
elif zen == "127.255.255.254":
output.print_warning("Mail-in-a-Box is configured to use a public DNS server. This is not supported by spamhaus. Could not determine whether this box's IP address is blacklisted.")
elif zen == "127.255.255.255":
output.print_warning("Too many queries have been performed on the spamhaus server. Could not determine whether this box's IP address is blacklisted.")
else:
output.print_error("""The IP address of this machine %s is listed in the Spamhaus Block List (code %s),
which may prevent recipients from receiving your email. See http://www.spamhaus.org/query/ip/%s."""
% (env['PUBLIC_IP'], zen, env['PUBLIC_IP']))
output.print_error("""The IP address of this machine {} is listed in the Spamhaus Block List (code {}),
which may prevent recipients from receiving your email. See http://www.spamhaus.org/query/ip/{}.""".format(env['PUBLIC_IP'], zen, env['PUBLIC_IP']))
def run_domain_checks(rounded_time, env, output, pool):
def run_domain_checks(rounded_time, env, output, pool, domains_to_check=None):
# Get the list of domains we handle mail for.
mail_domains = get_mail_domains(env)
@ -297,7 +314,19 @@ def run_domain_checks(rounded_time, env, output, pool):
# Get the list of domains we serve HTTPS for.
web_domains = set(get_web_domains(env))
domains_to_check = mail_domains | dns_domains | web_domains
if domains_to_check is None:
domains_to_check = mail_domains | dns_domains | web_domains
# Remove "www", "autoconfig", "autodiscover", and "mta-sts" subdomains, which we group with their parent,
# if their parent is in the domains to check list.
domains_to_check = [
d for d in domains_to_check
if not (
d.split(".", 1)[0] in {"www", "autoconfig", "autodiscover", "mta-sts"}
and len(d.split(".", 1)) == 2
and d.split(".", 1)[1] in domains_to_check
)
]
# Get the list of domains that we don't serve web for because of a custom CNAME/A record.
domains_with_a_records = get_domains_with_a_records(env)
@ -317,6 +346,11 @@ def run_domain_checks(rounded_time, env, output, pool):
def run_domain_checks_on_domain(domain, rounded_time, env, dns_domains, dns_zonefiles, mail_domains, web_domains, domains_with_a_records):
output = BufferedOutput()
# When running inside Flask, the worker threads don't get a thread pool automatically.
# Also this method is called in a forked worker pool, so creating a new loop is probably
# a good idea.
asyncio.set_event_loop(asyncio.new_event_loop())
# we'd move this up, but this returns non-pickleable values
ssl_certificates = get_ssl_certificates(env)
@ -344,16 +378,35 @@ def run_domain_checks_on_domain(domain, rounded_time, env, dns_domains, dns_zone
if domain in dns_domains:
check_dns_zone_suggestions(domain, env, output, dns_zonefiles, domains_with_a_records)
# Check auto-configured subdomains. See run_domain_checks.
# Skip mta-sts because we check the policy directly.
for label in ("www", "autoconfig", "autodiscover"):
subdomain = label + "." + domain
if subdomain in web_domains or subdomain in mail_domains:
# Run checks.
subdomain_output = run_domain_checks_on_domain(subdomain, rounded_time, env, dns_domains, dns_zonefiles, mail_domains, web_domains, domains_with_a_records)
# Prepend the domain name to the start of each check line, and then add to the
# checks for this domain.
for attr, args, kwargs in subdomain_output[1].buf:
if attr == "add_heading":
# Drop the heading, but use its text as the subdomain name in
# each line since it is in Unicode form.
subdomain = args[0]
continue
if len(args) == 1 and isinstance(args[0], str):
args = [ subdomain + ": " + args[0] ]
getattr(output, attr)(*args, **kwargs)
return (domain, output)
def check_primary_hostname_dns(domain, env, output, dns_domains, dns_zonefiles):
# If a DS record is set on the zone containing this domain, check DNSSEC now.
has_dnssec = False
for zone in dns_domains:
if zone == domain or domain.endswith("." + zone):
if query_dns(zone, "DS", nxdomain=None) is not None:
has_dnssec = True
check_dnssec(zone, env, output, dns_zonefiles, is_checking_primary=True)
if (zone == domain or domain.endswith("." + zone)) and query_dns(zone, "DS", nxdomain=None) is not None:
has_dnssec = True
check_dnssec(zone, env, output, dns_zonefiles, is_checking_primary=True)
ip = query_dns(domain, "A")
ns_ips = query_dns("ns1." + domain, "A") + '/' + query_dns("ns2." + domain, "A")
@ -365,44 +418,41 @@ def check_primary_hostname_dns(domain, env, output, dns_domains, dns_zonefiles):
# the nameserver, are reporting the right info --- but if the glue is incorrect this
# will probably fail.
if ns_ips == env['PUBLIC_IP'] + '/' + env['PUBLIC_IP']:
output.print_ok("Nameserver glue records are correct at registrar. [ns1/ns2.%s%s]" % (env['PRIMARY_HOSTNAME'], env['PUBLIC_IP']))
output.print_ok("Nameserver glue records are correct at registrar. [ns1/ns2.{}{}]".format(env['PRIMARY_HOSTNAME'], env['PUBLIC_IP']))
elif ip == env['PUBLIC_IP']:
# The NS records are not what we expect, but the domain resolves correctly, so
# the user may have set up external DNS. List this discrepancy as a warning.
output.print_warning("""Nameserver glue records (ns1.%s and ns2.%s) should be configured at your domain name
registrar as having the IP address of this box (%s). They currently report addresses of %s. If you have set up External DNS, this may be OK."""
% (env['PRIMARY_HOSTNAME'], env['PRIMARY_HOSTNAME'], env['PUBLIC_IP'], ns_ips))
output.print_warning("""Nameserver glue records (ns1.{} and ns2.{}) should be configured at your domain name
registrar as having the IP address of this box ({}). They currently report addresses of {}. If you have set up External DNS, this may be OK.""".format(env['PRIMARY_HOSTNAME'], env['PRIMARY_HOSTNAME'], env['PUBLIC_IP'], ns_ips))
else:
output.print_error("""Nameserver glue records are incorrect. The ns1.%s and ns2.%s nameservers must be configured at your domain name
registrar as having the IP address %s. They currently report addresses of %s. It may take several hours for
public DNS to update after a change."""
% (env['PRIMARY_HOSTNAME'], env['PRIMARY_HOSTNAME'], env['PUBLIC_IP'], ns_ips))
output.print_error("""Nameserver glue records are incorrect. The ns1.{} and ns2.{} nameservers must be configured at your domain name
registrar as having the IP address {}. They currently report addresses of {}. It may take several hours for
public DNS to update after a change.""".format(env['PRIMARY_HOSTNAME'], env['PRIMARY_HOSTNAME'], env['PUBLIC_IP'], ns_ips))
# Check that PRIMARY_HOSTNAME resolves to PUBLIC_IP[V6] in public DNS.
ipv6 = query_dns(domain, "AAAA") if env.get("PUBLIC_IPV6") else None
if ip == env['PUBLIC_IP'] and ipv6 in (None, env['PUBLIC_IPV6']):
output.print_ok("Domain resolves to box's IP address. [%s%s]" % (env['PRIMARY_HOSTNAME'], my_ips))
if ip == env['PUBLIC_IP'] and not (ipv6 and env['PUBLIC_IPV6'] and ipv6 != normalize_ip(env['PUBLIC_IPV6'])):
output.print_ok("Domain resolves to box's IP address. [{}{}]".format(env['PRIMARY_HOSTNAME'], my_ips))
else:
output.print_error("""This domain must resolve to your box's IP address (%s) in public DNS but it currently resolves
to %s. It may take several hours for public DNS to update after a change. This problem may result from other
issues listed above."""
% (my_ips, ip + ((" / " + ipv6) if ipv6 is not None else "")))
output.print_error("""This domain must resolve to this box's IP address ({}) in public DNS but it currently resolves
to {}. It may take several hours for public DNS to update after a change. This problem may result from other
issues listed above.""".format(my_ips, ip + ((" / " + ipv6) if ipv6 is not None else "")))
# Check reverse DNS matches the PRIMARY_HOSTNAME. Note that it might not be
# a DNS zone if it is a subdomain of another domain we have a zone for.
existing_rdns_v4 = query_dns(dns.reversename.from_address(env['PUBLIC_IP']), "PTR")
existing_rdns_v6 = query_dns(dns.reversename.from_address(env['PUBLIC_IPV6']), "PTR") if env.get("PUBLIC_IPV6") else None
if existing_rdns_v4 == domain and existing_rdns_v6 in (None, domain):
output.print_ok("Reverse DNS is set correctly at ISP. [%s%s]" % (my_ips, env['PRIMARY_HOSTNAME']))
if existing_rdns_v4 == domain and existing_rdns_v6 in {None, domain}:
output.print_ok("Reverse DNS is set correctly at ISP. [{}{}]".format(my_ips, env['PRIMARY_HOSTNAME']))
elif existing_rdns_v4 == existing_rdns_v6 or existing_rdns_v6 is None:
output.print_error("""Your box's reverse DNS is currently %s, but it should be %s. Your ISP or cloud provider will have instructions
on setting up reverse DNS for your box.""" % (existing_rdns_v4, domain) )
output.print_error(f"""This box's reverse DNS is currently {existing_rdns_v4}, but it should be {domain}. Your ISP or cloud provider will have instructions
on setting up reverse DNS for this box.""" )
else:
output.print_error("""Your box's reverse DNS is currently %s (IPv4) and %s (IPv6), but it should be %s. Your ISP or cloud provider will have instructions
on setting up reverse DNS for your box.""" % (existing_rdns_v4, existing_rdns_v6, domain) )
output.print_error(f"""This box's reverse DNS is currently {existing_rdns_v4} (IPv4) and {existing_rdns_v6} (IPv6), but it should be {domain}. Your ISP or cloud provider will have instructions
on setting up reverse DNS for this box.""" )
# Check the TLSA record.
tlsa_qname = "_25._tcp." + domain
@ -416,18 +466,17 @@ def check_primary_hostname_dns(domain, env, output, dns_domains, dns_zonefiles):
# since TLSA shouldn't be used without DNSSEC.
output.print_warning("""The DANE TLSA record for incoming mail is not set. This is optional.""")
else:
output.print_error("""The DANE TLSA record for incoming mail (%s) is not correct. It is '%s' but it should be '%s'.
It may take several hours for public DNS to update after a change."""
% (tlsa_qname, tlsa25, tlsa25_expected))
output.print_error(f"""The DANE TLSA record for incoming mail ({tlsa_qname}) is not correct. It is '{tlsa25}' but it should be '{tlsa25_expected}'.
It may take several hours for public DNS to update after a change.""")
# Check that the hostmaster@ email address exists.
check_alias_exists("Hostmaster contact address", "hostmaster@" + domain, env, output)
def check_alias_exists(alias_name, alias, env, output):
mail_aliases = dict([(address, receivers) for address, receivers, *_ in get_mail_aliases(env)])
mail_aliases = {address: receivers for address, receivers, *_ in get_mail_aliases(env)}
if alias in mail_aliases:
if mail_aliases[alias]:
output.print_ok("%s exists as a mail alias. [%s%s]" % (alias_name, alias, mail_aliases[alias]))
output.print_ok(f"{alias_name} exists as a mail alias. [{alias}{mail_aliases[alias]}]")
else:
output.print_error("""You must set the destination of the mail alias for %s to direct email to you or another administrator.""" % alias)
else:
@ -448,12 +497,12 @@ def check_dns_zone(domain, env, output, dns_zonefiles):
# half working.)
custom_dns_records = list(get_custom_dns_config(env)) # generator => list so we can reuse it
correct_ip = get_custom_dns_record(custom_dns_records, domain, "A") or env['PUBLIC_IP']
correct_ip = "; ".join(sorted(get_custom_dns_records(custom_dns_records, domain, "A"))) or env['PUBLIC_IP']
custom_secondary_ns = get_secondary_dns(custom_dns_records, mode="NS")
secondary_ns = custom_secondary_ns or ["ns2." + env['PRIMARY_HOSTNAME']]
existing_ns = query_dns(domain, "NS")
correct_ns = "; ".join(sorted(["ns1." + env['PRIMARY_HOSTNAME']] + secondary_ns))
correct_ns = "; ".join(sorted(["ns1." + env["PRIMARY_HOSTNAME"], *secondary_ns]))
ip = query_dns(domain, "A")
probably_external_dns = False
@ -462,24 +511,24 @@ def check_dns_zone(domain, env, output, dns_zonefiles):
output.print_ok("Nameservers are set correctly at registrar. [%s]" % correct_ns)
elif ip == correct_ip:
# The domain resolves correctly, so maybe the user is using External DNS.
output.print_warning("""The nameservers set on this domain at your domain name registrar should be %s. They are currently %s.
If you are using External DNS, this may be OK."""
% (correct_ns, existing_ns) )
output.print_warning(f"""The nameservers set on this domain at your domain name registrar should be {correct_ns}. They are currently {existing_ns}.
If you are using External DNS, this may be OK.""" )
probably_external_dns = True
else:
output.print_error("""The nameservers set on this domain are incorrect. They are currently %s. Use your domain name registrar's
control panel to set the nameservers to %s."""
% (existing_ns, correct_ns) )
output.print_error(f"""The nameservers set on this domain are incorrect. They are currently {existing_ns}. Use your domain name registrar's
control panel to set the nameservers to {correct_ns}.""" )
# Check that each custom secondary nameserver resolves the IP address.
if custom_secondary_ns and not probably_external_dns:
for ns in custom_secondary_ns:
# We must first resolve the nameserver to an IP address so we can query it.
ns_ip = query_dns(ns, "A")
if not ns_ip:
ns_ips = query_dns(ns, "A")
if not ns_ips or ns_ips in {'[Not Set]', '[timeout]'}:
output.print_error("Secondary nameserver %s is not valid (it doesn't resolve to an IP address)." % ns)
continue
# Choose the first IP if nameserver returns multiple
ns_ip = ns_ips.split('; ')[0]
# Now query it to see what it says about this domain.
ip = query_dns(domain, "A", at=ns_ip, nxdomain=None)
@ -488,7 +537,7 @@ def check_dns_zone(domain, env, output, dns_zonefiles):
elif ip is None:
output.print_error("Secondary nameserver %s is not configured to resolve this domain." % ns)
else:
output.print_error("Secondary nameserver %s is not configured correctly. (It resolved this domain as %s. It should be %s.)" % (ns, ip, correct_ip))
output.print_error(f"Secondary nameserver {ns} is not configured correctly. (It resolved this domain as {ip}. It should be {correct_ip}.)")
def check_dns_zone_suggestions(domain, env, output, dns_zonefiles, domains_with_a_records):
# Warn if a custom DNS record is preventing this or the automatic www redirect from
@ -505,61 +554,108 @@ def check_dns_zone_suggestions(domain, env, output, dns_zonefiles, domains_with_
def check_dnssec(domain, env, output, dns_zonefiles, is_checking_primary=False):
# See if the domain has a DS record set at the registrar. The DS record may have
# several forms. We have to be prepared to check for any valid record. We've
# pre-generated all of the valid digests --- read them in.
# See if the domain has a DS record set at the registrar. The DS record must
# match one of the keys that we've used to sign the zone. It may use one of
# several hashing algorithms. We've pre-generated all possible valid DS
# records, although some will be preferred.
alg_name_map = { '7': 'RSASHA1-NSEC3-SHA1', '8': 'RSASHA256', '13': 'ECDSAP256SHA256' }
digalg_name_map = { '1': 'SHA-1', '2': 'SHA-256', '4': 'SHA-384' }
# Read in the pre-generated DS records
expected_ds_records = { }
ds_file = '/etc/nsd/zones/' + dns_zonefiles[domain] + '.ds'
if not os.path.exists(ds_file): return # Domain is in our database but DNS has not yet been updated.
ds_correct = open(ds_file).read().strip().split("\n")
digests = { }
for rr_ds in ds_correct:
ds_keytag, ds_alg, ds_digalg, ds_digest = rr_ds.split("\t")[4].split(" ")
digests[ds_digalg] = ds_digest
with open(ds_file, encoding="utf-8") as f:
for rr_ds in f:
rr_ds = rr_ds.rstrip()
ds_keytag, ds_alg, ds_digalg, ds_digest = rr_ds.split("\t")[4].split(" ")
# Some registrars may want the public key so they can compute the digest. The DS
# record that we suggest using is for the KSK (and that's how the DS records were generated).
alg_name_map = { '7': 'RSASHA1-NSEC3-SHA1', '8': 'RSASHA256' }
dnssec_keys = load_env_vars_from_file(os.path.join(env['STORAGE_ROOT'], 'dns/dnssec/%s.conf' % alg_name_map[ds_alg]))
dnsssec_pubkey = open(os.path.join(env['STORAGE_ROOT'], 'dns/dnssec/' + dnssec_keys['KSK'] + '.key')).read().split("\t")[3].split(" ")[3]
# Some registrars may want the public key so they can compute the digest. The DS
# record that we suggest using is for the KSK (and that's how the DS records were generated).
# We'll also give the nice name for the key algorithm.
dnssec_keys = load_env_vars_from_file(os.path.join(env['STORAGE_ROOT'], 'dns/dnssec/%s.conf' % alg_name_map[ds_alg]))
with open(os.path.join(env['STORAGE_ROOT'], 'dns/dnssec/' + dnssec_keys['KSK'] + '.key'), encoding="utf-8") as f:
dnsssec_pubkey = f.read().split("\t")[3].split(" ")[3]
expected_ds_records[ (ds_keytag, ds_alg, ds_digalg, ds_digest) ] = {
"record": rr_ds,
"keytag": ds_keytag,
"alg": ds_alg,
"alg_name": alg_name_map[ds_alg],
"digalg": ds_digalg,
"digalg_name": digalg_name_map[ds_digalg],
"digest": ds_digest,
"pubkey": dnsssec_pubkey,
}
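# For illustration only (hypothetical values): each line of the .ds file is a
# standard DS resource record, roughly of the form
#   example.com.    3600    IN    DS    12345 13 2 0123456789abcdef...
# (tab-separated), so the field at index 4 holds "keytag algorithm digest-type
# digest", which is what the split() calls above pull apart.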
# Query public DNS for the DS record at the registrar.
ds = query_dns(domain, "DS", nxdomain=None)
ds_looks_valid = ds and len(ds.split(" ")) == 4
if ds_looks_valid: ds = ds.split(" ")
if ds_looks_valid and ds[0] == ds_keytag and ds[1] == ds_alg and ds[3] == digests.get(ds[2]):
if is_checking_primary: return
output.print_ok("DNSSEC 'DS' record is set correctly at registrar.")
ds = query_dns(domain, "DS", nxdomain=None, as_list=True)
if ds is None or isinstance(ds, str): ds = []
# There may be more that one record, so we get the result as a list.
# Filter out records that don't look valid, just in case, and split
# each record on spaces.
ds = [tuple(str(rr).split(" ")) for rr in ds if len(str(rr).split(" ")) == 4]
if len(ds) == 0:
output.print_warning("""This domain's DNSSEC DS record is not set. The DS record is optional. The DS record activates DNSSEC. See below for instructions.""")
else:
if ds == None:
if is_checking_primary: return
output.print_warning("""This domain's DNSSEC DS record is not set. The DS record is optional. The DS record activates DNSSEC.
To set a DS record, you must follow the instructions provided by your domain name registrar and provide to them this information:""")
matched_ds = set(ds) & set(expected_ds_records)
if matched_ds:
# At least one DS record matches one that corresponds with one of the ways we signed
# the zone, so it is valid.
#
# But it may not be preferred. Only algorithm 13 is preferred. Warn if any of the
# matched zones uses a different algorithm.
if {r[1] for r in matched_ds} == { '13' } and {r[2] for r in matched_ds} <= { '2', '4' }: # all are alg 13 and digest type 2 or 4
output.print_ok("DNSSEC 'DS' record is set correctly at registrar.")
return
elif len([r for r in matched_ds if r[1] == '13' and r[2] in { '2', '4' }]) > 0: # some but not all are alg 13
output.print_ok("DNSSEC 'DS' record is set correctly at registrar. (Records using algorithm other than ECDSAP256SHA256 and digest types other than SHA-256/384 should be removed.)")
return
else: # no record uses alg 13
output.print_warning("""DNSSEC 'DS' record set at registrar is valid but should be updated to ECDSAP256SHA256 and SHA-256 (see below).
IMPORTANT: Do not delete existing DNSSEC 'DS' records for this domain until confirmation that the new DNSSEC 'DS' record
for this domain is valid.""")
else:
if is_checking_primary:
output.print_error("""The DNSSEC 'DS' record for %s is incorrect. See further details below.""" % domain)
return
output.print_error("""This domain's DNSSEC DS record is incorrect. The chain of trust is broken between the public DNS system
and this machine's DNS server. It may take several hours for public DNS to update after a change. If you did not recently
make a change, you must resolve this immediately by following the instructions provided by your domain name registrar and
provide to them this information:""")
make a change, you must resolve this immediately (see below).""")
output.print_line("""Follow the instructions provided by your domain name registrar to set a DS record.
Registrars support different sorts of DS records. Use the first option that works:""")
preferred_ds_order = [(7, 2), (8, 4), (13, 4), (8, 2), (13, 2)] # low to high, see https://github.com/mail-in-a-box/mailinabox/issues/1998
def preferred_ds_order_func(ds_suggestion):
k = (int(ds_suggestion['alg']), int(ds_suggestion['digalg']))
if k in preferred_ds_order:
return preferred_ds_order.index(k)
return -1 # index before first item
output.print_line("")
for i, ds_suggestion in enumerate(sorted(expected_ds_records.values(), key=preferred_ds_order_func, reverse=True)):
if preferred_ds_order_func(ds_suggestion) == -1: continue # don't offer record types that the RFC says we must not offer
output.print_line("")
output.print_line("Key Tag: " + ds_keytag + ("" if not ds_looks_valid or ds[0] == ds_keytag else " (Got '%s')" % ds[0]))
output.print_line("Key Flags: KSK")
output.print_line(
("Algorithm: %s / %s" % (ds_alg, alg_name_map[ds_alg]))
+ ("" if not ds_looks_valid or ds[1] == ds_alg else " (Got '%s')" % ds[1]))
# see http://www.iana.org/assignments/dns-sec-alg-numbers/dns-sec-alg-numbers.xhtml
output.print_line("Digest Type: 2 / SHA-256")
# http://www.ietf.org/assignments/ds-rr-types/ds-rr-types.xml
output.print_line("Digest: " + digests['2'])
if ds_looks_valid and ds[3] != digests.get(ds[2]):
output.print_line("(Got digest type %s and digest %s which do not match.)" % (ds[2], ds[3]))
output.print_line("Option " + str(i+1) + ":")
output.print_line("----------")
output.print_line("Key Tag: " + ds_suggestion['keytag'])
output.print_line("Key Flags: KSK / 257")
output.print_line("Algorithm: {} / {}".format(ds_suggestion['alg'], ds_suggestion['alg_name']))
output.print_line("Digest Type: {} / {}".format(ds_suggestion['digalg'], ds_suggestion['digalg_name']))
output.print_line("Digest: " + ds_suggestion['digest'])
output.print_line("Public Key: ")
output.print_line(dnsssec_pubkey, monospace=True)
output.print_line(ds_suggestion['pubkey'], monospace=True)
output.print_line("")
output.print_line("Bulk/Record Format:")
output.print_line("" + ds_correct[0])
output.print_line(ds_suggestion['record'], monospace=True)
if len(ds) > 0:
output.print_line("")
output.print_line("The DS record is currently set to:")
for rr in sorted(ds):
output.print_line("Key Tag: {}, Algorithm: {}, Digest Type: {}, Digest: {}".format(*rr))
def check_mail_domain(domain, env, output):
# Check the MX record.
@ -567,19 +663,19 @@ def check_mail_domain(domain, env, output):
recommended_mx = "10 " + env['PRIMARY_HOSTNAME']
mx = query_dns(domain, "MX", nxdomain=None)
if mx is None:
if mx is None or mx == "[timeout]":
mxhost = None
else:
# query_dns returns a semicolon-delimited list
# of priority-host pairs.
mxhost = mx.split('; ')[0].split(' ')[1]
if mxhost == None:
if mxhost is None:
# A missing MX record is okay on the primary hostname because
# the primary hostname's A record (the MX fallback) is... itself,
# which is what we want the MX to be.
if domain == env['PRIMARY_HOSTNAME']:
output.print_ok("Domain's email is directed to this domain. [%s has no MX record, which is ok]" % (domain,))
output.print_ok(f"Domain's email is directed to this domain. [{domain} has no MX record, which is ok]")
# And a missing MX record is okay on other domains if the A record
# matches the A record of the PRIMARY_HOSTNAME. Actually this will
@ -587,22 +683,35 @@ def check_mail_domain(domain, env, output):
else:
domain_a = query_dns(domain, "A", nxdomain=None)
primary_a = query_dns(env['PRIMARY_HOSTNAME'], "A", nxdomain=None)
if domain_a != None and domain_a == primary_a:
output.print_ok("Domain's email is directed to this domain. [%s has no MX record but its A record is OK]" % (domain,))
if domain_a is not None and domain_a == primary_a:
output.print_ok(f"Domain's email is directed to this domain. [{domain} has no MX record but its A record is OK]")
else:
output.print_error("""This domain's DNS MX record is not set. It should be '%s'. Mail will not
output.print_error(f"""This domain's DNS MX record is not set. It should be '{recommended_mx}'. Mail will not
be delivered to this box. It may take several hours for public DNS to update after a
change. This problem may result from other issues listed here.""" % (recommended_mx,))
change. This problem may result from other issues listed here.""")
elif mxhost == env['PRIMARY_HOSTNAME']:
good_news = "Domain's email is directed to this domain. [%s%s]" % (domain, mx)
good_news = f"Domain's email is directed to this domain. [{domain}{mx}]"
if mx != recommended_mx:
good_news += " This configuration is non-standard. The recommended configuration is '%s'." % (recommended_mx,)
good_news += f" This configuration is non-standard. The recommended configuration is '{recommended_mx}'."
output.print_ok(good_news)
# Check MTA-STS policy.
loop = asyncio.new_event_loop()
sts_resolver = postfix_mta_sts_resolver.resolver.STSResolver(loop=loop)
valid, policy = loop.run_until_complete(sts_resolver.resolve(domain))
if valid == postfix_mta_sts_resolver.resolver.STSFetchResult.VALID:
if policy[1].get("mx") == [env['PRIMARY_HOSTNAME']] and policy[1].get("mode") == "enforce": # policy[0] is the policyid
output.print_ok("MTA-STS policy is present.")
else:
output.print_error(f"MTA-STS policy is present but has unexpected settings. [{policy[1]}]")
else:
output.print_error(f"MTA-STS policy is missing: {valid}")
else:
output.print_error("""This domain's DNS MX record is incorrect. It is currently set to '%s' but should be '%s'. Mail will not
output.print_error(f"""This domain's DNS MX record is incorrect. It is currently set to '{mx}' but should be '{recommended_mx}'. Mail will not
be delivered to this box. It may take several hours for public DNS to update after a change. This problem may result from
other issues listed here.""" % (mx, recommended_mx))
other issues listed here.""")
# Check that the postmaster@ email address exists. Not required if the domain has a
# catch-all address or domain alias.
@ -612,13 +721,26 @@ def check_mail_domain(domain, env, output):
# Stop if the domain is listed in the Spamhaus Domain Block List.
# The user might have chosen a domain that was previously in use by a spammer
# and will not be able to reliably send mail.
# See https://www.spamhaus.org/news/article/807/using-our-public-mirrors-check-your-return-codes-now for
# information on Spamhaus return codes.
dbl = query_dns(domain+'.dbl.spamhaus.org', "A", nxdomain=None)
if dbl is None:
output.print_ok("Domain is not blacklisted by dbl.spamhaus.org.")
elif dbl == "[timeout]":
output.print_warning(f"Connection to dbl.spamhaus.org timed out. Could not determine whether the domain {domain} is blacklisted. Please try again later.")
elif dbl == "[Not Set]":
output.print_warning(f"Could not connect to dbl.spamhaus.org. Could not determine whether the domain {domain} is blacklisted. Please try again later.")
elif dbl == "127.255.255.252":
output.print_warning("Incorrect spamhaus query: %s. Could not determine whether the domain %s is blacklisted." % (domain+'.dbl.spamhaus.org', domain))
elif dbl == "127.255.255.254":
output.print_warning("Mail-in-a-Box is configured to use a public DNS server. This is not supported by spamhaus. Could not determine whether the domain {} is blacklisted.".format(domain))
elif dbl == "127.255.255.255":
output.print_warning("Too many queries have been performed on the spamhaus server. Could not determine whether the domain {} is blacklisted.".format(domain))
else:
output.print_error("""This domain is listed in the Spamhaus Domain Block List (code %s),
output.print_error(f"""This domain is listed in the Spamhaus Domain Block List (code {dbl}),
which may prevent recipients from receiving your mail.
See http://www.spamhaus.org/dbl/ and http://www.spamhaus.org/query/domain/%s.""" % (dbl, domain))
See http://www.spamhaus.org/dbl/ and http://www.spamhaus.org/query/domain/{domain}.""")
def check_web_domain(domain, rounded_time, ssl_certificates, env, output):
# See if the domain's A record resolves to our PUBLIC_IP. This is already checked
@ -629,16 +751,16 @@ def check_web_domain(domain, rounded_time, ssl_certificates, env, output):
for (rtype, expected) in (("A", env['PUBLIC_IP']), ("AAAA", env.get('PUBLIC_IPV6'))):
if not expected: continue # IPv6 is not configured
value = query_dns(domain, rtype)
if value == expected:
if value == normalize_ip(expected):
ok_values.append(value)
else:
output.print_error("""This domain should resolve to your box's IP address (%s %s) if you would like the box to serve
webmail or a website on this domain. The domain currently resolves to %s in public DNS. It may take several hours for
public DNS to update after a change. This problem may result from other issues listed here.""" % (rtype, expected, value))
output.print_error(f"""This domain should resolve to this box's IP address ({rtype} {expected}) if you would like the box to serve
webmail or a website on this domain. The domain currently resolves to {value} in public DNS. It may take several hours for
public DNS to update after a change. This problem may result from other issues listed here.""")
return
# If both A and AAAA are correct...
output.print_ok("Domain resolves to this box's IP address. [%s%s]" % (domain, '; '.join(ok_values)))
output.print_ok("Domain resolves to this box's IP address. [{}{}]".format(domain, '; '.join(ok_values)))
# We need a TLS certificate for PRIMARY_HOSTNAME because that's where the
@ -646,7 +768,7 @@ def check_web_domain(domain, rounded_time, ssl_certificates, env, output):
# website for also needs a signed certificate.
check_ssl_cert(domain, rounded_time, ssl_certificates, env, output)
def query_dns(qname, rtype, nxdomain='[Not Set]', at=None):
def query_dns(qname, rtype, nxdomain='[Not Set]', at=None, as_list=False):
# Make the qname absolute by appending a period. Without this, dns.resolver.query
# will fall back a failed lookup to a second query with this machine's hostname
# appended. This has been causing some false-positive Spamhaus reports. The
@ -659,16 +781,21 @@ def query_dns(qname, rtype, nxdomain='[Not Set]', at=None):
# running bind server), or if the 'at' argument is specified, use that host
# as the nameserver.
resolver = dns.resolver.get_default_resolver()
if at:
# Skip the 'at' override if it is a placeholder string ('[Not Set]' or '[timeout]')
# that cannot be used as a nameserver.
if at and at not in {'[Not Set]', '[timeout]'}:
resolver = dns.resolver.Resolver()
resolver.nameservers = [at]
# Set a timeout so that a non-responsive server doesn't hold us back.
resolver.timeout = 5
# The number of seconds to spend trying to get an answer to the question. If the
# lifetime expires a dns.exception.Timeout exception will be raised.
resolver.lifetime = 5
# Do the query.
try:
response = resolver.query(qname, rtype)
response = resolver.resolve(qname, rtype)
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
# Host did not have an answer for this query; not sure what the
# difference is between the two exceptions.
@ -676,6 +803,16 @@ def query_dns(qname, rtype, nxdomain='[Not Set]', at=None):
except dns.exception.Timeout:
return "[timeout]"
# Normalize IP addresses. IP addresses --- especially IPv6 addresses --- can
# be expressed in equivalent string forms. Canonicalize the form before
# returning them. The caller should normalize any IP addresses the result
# of this method is compared with.
if rtype in {"A", "AAAA"}:
response = [normalize_ip(str(r)) for r in response]
if as_list:
return response
# There may be multiple answers; concatenate the response. Remove trailing
# periods from responses since that's how qnames are encoded in DNS but is
# confusing for us. The order of the answers doesn't matter, so sort so we
@ -686,7 +823,7 @@ def check_ssl_cert(domain, rounded_time, ssl_certificates, env, output):
# Check that TLS certificate is signed.
# Skip the check if the A record is not pointed here.
if query_dns(domain, "A", None) not in (env['PUBLIC_IP'], None): return
if query_dns(domain, "A", None) not in {env['PUBLIC_IP'], None}: return
# Where is the certificate file stored?
tls_cert = get_domain_ssl_files(domain, ssl_certificates, env, allow_missing_cert=True)
@ -756,37 +893,41 @@ def list_apt_updates(apt_update=True):
return pkgs
def what_version_is_this(env):
# This function runs `git describe --abbrev=0` on the Mail-in-a-Box installation directory.
# This function runs `git describe --always --abbrev=0` on the Mail-in-a-Box installation directory.
# Git may not be installed and Mail-in-a-Box may not have been cloned from github,
# so this function may raise all sorts of exceptions.
miab_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
tag = shell("check_output", ["/usr/bin/git", "describe", "--abbrev=0"], env={"GIT_DIR": os.path.join(miab_dir, '.git')}).strip()
return tag
return shell("check_output", ["/usr/bin/git", "describe", "--always", "--abbrev=0"], env={"GIT_DIR": os.path.join(miab_dir, '.git')}).strip()
def get_latest_miab_version():
# This pings https://mailinabox.email/setup.sh and extracts the tag named in
# the script to determine the current product version.
import urllib.request
return re.search(b'TAG=(.*)', urllib.request.urlopen("https://mailinabox.email/setup.sh?ping=1").read()).group(1).decode("utf8")
from urllib.request import urlopen, HTTPError, URLError
try:
return re.search(b'TAG=(.*)', urlopen("https://mailinabox.email/setup.sh?ping=1", timeout=5).read()).group(1).decode("utf8")
except (TimeoutError, HTTPError, URLError):
return None
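# (The downloaded setup script is expected to contain a line of the form "TAG=v68",
# where the version shown is only an example; the regular expression above extracts
# whatever follows "TAG=".)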
def check_miab_version(env, output):
config = load_settings(env)
if config.get("privacy", True):
output.print_warning("Mail-in-a-Box version check disabled by privacy setting.")
else:
try:
this_ver = what_version_is_this(env)
except:
this_ver = "Unknown"
try:
this_ver = what_version_is_this(env)
except:
this_ver = "Unknown"
if config.get("privacy", True):
output.print_warning("You are running version Mail-in-a-Box %s. Mail-in-a-Box version check disabled by privacy setting." % this_ver)
else:
latest_ver = get_latest_miab_version()
if this_ver == latest_ver:
output.print_ok("Mail-in-a-Box is up to date. You are running version %s." % this_ver)
elif latest_ver is None:
output.print_error("Latest Mail-in-a-Box version could not be determined. You are running version %s." % this_ver)
else:
output.print_error("A new version of Mail-in-a-Box is available. You are running version %s. The latest version is %s. For upgrade instructions, see https://mailinabox.email. "
% (this_ver, latest_ver))
output.print_error(f"A new version of Mail-in-a-Box is available. You are running version {this_ver}. The latest version is {latest_ver}. For upgrade instructions, see https://mailinabox.email. ")
def run_and_output_changes(env, pool):
import json
@ -801,7 +942,11 @@ def run_and_output_changes(env, pool):
# Load previously saved status checks.
cache_fn = "/var/cache/mailinabox/status_checks.json"
if os.path.exists(cache_fn):
prev = json.load(open(cache_fn))
with open(cache_fn, encoding="utf-8") as f:
try:
prev = json.load(f)
except json.JSONDecodeError:
prev = []
# Group the serial output into categories by the headings.
def group_by_heading(lines):
@ -836,14 +981,14 @@ def run_and_output_changes(env, pool):
out.add_heading(category + " -- Previously:")
elif op == "delete":
out.add_heading(category + " -- Removed")
if op in ("replace", "delete"):
if op in {"replace", "delete"}:
BufferedOutput(with_lines=prev_lines[i1:i2]).playback(out)
if op == "replace":
out.add_heading(category + " -- Currently:")
elif op == "insert":
out.add_heading(category + " -- Added")
if op in ("replace", "insert"):
if op in {"replace", "insert"}:
BufferedOutput(with_lines=cur_lines[j1:j2]).playback(out)
for category, prev_lines in prev_status.items():
@ -853,9 +998,19 @@ def run_and_output_changes(env, pool):
# Store the current status checks output for next time.
os.makedirs(os.path.dirname(cache_fn), exist_ok=True)
with open(cache_fn, "w") as f:
with open(cache_fn, "w", encoding="utf-8") as f:
json.dump(cur.buf, f, indent=True)
def normalize_ip(ip):
# Use the ipaddress module to normalize IPv6 notation so that we match
# IPv6 addresses written in different but equivalent representations,
# per RFC 5952.
import ipaddress
try:
return str(ipaddress.ip_address(ip))
except:
return ip
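# A quick illustration of the normalization above (sample addresses are arbitrary):
#
#     >>> normalize_ip("2001:0DB8:0000:0000:0000:0000:0000:0001")
#     '2001:db8::1'
#     >>> normalize_ip("[timeout]")   # non-addresses are returned unchanged
#     '[timeout]'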
class FileOutput:
def __init__(self, buf, width):
self.buf = buf
@ -877,8 +1032,8 @@ class FileOutput:
def print_block(self, message, first_line=" "):
print(first_line, end='', file=self.buf)
message = re.sub("\n\s*", " ", message)
words = re.split("(\s+)", message)
message = re.sub("\n\\s*", " ", message)
words = re.split(r"(\s+)", message)
linelen = 0
for w in words:
if self.width and (linelen + len(w) > self.width-1-len(first_line)):
@ -897,7 +1052,7 @@ class FileOutput:
class ConsoleOutput(FileOutput):
def __init__(self):
self.buf = sys.stdout
# Do nice line-wrapping according to the size of the terminal.
# The 'stty' program queries standard input for terminal information.
if sys.stdin.isatty():
@ -917,9 +1072,9 @@ class ConsoleOutput(FileOutput):
class BufferedOutput:
# Record all of the instance method calls so we can play them back later.
def __init__(self, with_lines=None):
self.buf = [] if not with_lines else with_lines
self.buf = with_lines if with_lines else []
def __getattr__(self, attr):
if attr not in ("add_heading", "print_ok", "print_error", "print_warning", "print_block", "print_line"):
if attr not in {"add_heading", "print_ok", "print_error", "print_warning", "print_block", "print_line"}:
raise AttributeError
# Return a function that just records the call & arguments to our buffer.
def w(*args, **kwargs):
@ -934,13 +1089,14 @@ if __name__ == "__main__":
from utils import load_environment
env = load_environment()
pool = multiprocessing.pool.Pool(processes=10)
if len(sys.argv) == 1:
run_checks(False, env, ConsoleOutput(), pool)
with multiprocessing.pool.Pool(processes=10) as pool:
run_checks(False, env, ConsoleOutput(), pool)
elif sys.argv[1] == "--show-changes":
run_and_output_changes(env, pool)
with multiprocessing.pool.Pool(processes=10) as pool:
run_and_output_changes(env, pool)
elif sys.argv[1] == "--check-primary-hostname":
# See if the primary hostname appears resolvable and has a signed certificate.
@ -958,3 +1114,7 @@ if __name__ == "__main__":
elif sys.argv[1] == "--version":
print(what_version_is_this(env))
elif sys.argv[1] == "--only":
with multiprocessing.pool.Pool(processes=10) as pool:
run_checks(False, env, ConsoleOutput(), pool, domains_to_check=sys.argv[2:])


@ -1,13 +1,13 @@
<style>
#alias_table .actions > * { padding-right: 3px; }
#alias_table .alias-required .remove { display: none }
#alias_table .alias-auto .actions > * { display: none }
</style>
<h2>Aliases</h2>
<h3>Add a mail alias</h3>
<p>Aliases are email forwarders. An alias can forward email to a <a href="#" onclick="return show_panel('users')">mail user</a> or to any email address.</p>
<p>Aliases are email forwarders. An alias can forward email to a <a href="#users">mail user</a> or to any email address.</p>
<p>To use an alias or any address besides your own login username in outbound mail, the sending user must be included as a permitted sender for the alias.</p>
@ -39,8 +39,9 @@
<label for="addaliasForwardsTo" class="col-sm-1 control-label">Forwards To</label>
<div class="col-sm-10">
<textarea class="form-control" rows="3" id="addaliasForwardsTo"></textarea>
<div style="margin-top: 3px; padding-left: 3px; font-size: 90%" class="text-muted">
<span class="domainalias">Enter just the part of an email address starting with the @-sign.</span>
<div style="margin-top: 3px; padding-left: 3px; font-size: 90%">
<span class="domainalias text-muted">Enter just the part of an email address starting with the @-sign.</span>
<span class="text-danger">Only forward mail to addresses handled by this Mail-in-a-Box, since mail forwarded by aliases to other domains may be rejected or filtered by the receiver. To forward mail to other domains, create a mail user and then log into webmail for the user and create a filter rule to forward mail.</span>
</div>
</div>
</div>
@ -50,7 +51,7 @@
<div class="radio">
<label>
<input id="addaliasForwardsToNotAdvanced" name="addaliasForwardsToDivToggle" type="radio" checked onclick="$('#addaliasForwardsToDiv').toggle(false)">
Any mail user listed in the Fowards To box can send mail claiming to be from <span class="regularalias">the alias address</span><span class="catchall domainalias">any address on the alias domain</span>.
Any mail user listed in the Forwards To box can send mail claiming to be from <span class="regularalias">the alias address</span><span class="catchall domainalias">any address on the alias domain</span>.
</label>
</div>
<div class="radio">
@ -123,7 +124,7 @@
<table class="table" style="margin-top: .5em">
<thead><th>Verb</th> <th>Action</th><th></th></thead>
<tr><td>GET</td><td><i>(none)</i></td> <td>Returns a list of existing mail aliases. Adding <code>?format=json</code> to the URL will give JSON-encoded results.</td></tr>
<tr><td>POST</td><td>/add</td> <td>Adds a new mail alias. Required POST-body parameters are <code>address</code> and <code>forward_to</code>.</td></tr>
<tr><td>POST</td><td>/add</td> <td>Adds a new mail alias. Required POST-body parameters are <code>address</code> and <code>forwards_to</code>.</td></tr>
<tr><td>POST</td><td>/remove</td> <td>Removes a mail alias. Required POST-body parameter is <code>address</code>.</td></tr>
</table>
@ -135,7 +136,7 @@
curl -X GET https://{{hostname}}/admin/mail/aliases?format=json
# Adds a new alias
curl -X POST -d "address=new_alias@mydomain.com" -d "forward_to=my_email@mydomain.com" https://{{hostname}}/admin/mail/aliases/add
curl -X POST -d "address=new_alias@mydomain.com" -d "forwards_to=my_email@mydomain.com" https://{{hostname}}/admin/mail/aliases/add
# Removes an alias
curl -X POST -d "address=new_alias@mydomain.com" https://{{hostname}}/admin/mail/aliases/remove
@ -152,8 +153,8 @@ function show_aliases() {
function(r) {
$('#alias_table tbody').html("");
for (var i = 0; i < r.length; i++) {
var hdr = $("<tr><td colspan='3'><h4/></td></tr>");
hdr.find('h4').text(r[i].domain);
var hdr = $("<tr><th colspan='4' style='background-color: #EEE'></th></tr>");
hdr.find('th').text(r[i].domain);
$('#alias_table tbody').append(hdr);
for (var k = 0; k < r[i].aliases.length; k++) {
@ -162,7 +163,7 @@ function show_aliases() {
var n = $("#alias-template").clone();
n.attr('id', '');
if (alias.required) n.addClass('alias-required');
if (alias.auto) n.addClass('alias-auto');
n.attr('data-address', alias.address_display); // this is decoded from IDNA, but will get re-coded to IDNA on the backend
n.find('td.address').text(alias.address_display)
for (var j = 0; j < alias.forwards_to.length; j++)
@ -287,7 +288,7 @@ function aliases_remove(elem) {
},
function(r) {
// Responses are multiple lines of pre-formatted text.
show_modal_error("Remove User", $("<pre/>").text(r));
show_modal_error("Remove Alias", $("<pre/>").text(r));
show_aliases();
});
});


@ -31,12 +31,15 @@
<label for="customdnsType" class="col-sm-1 control-label">Type</label>
<div class="col-sm-10">
<select id="customdnsType" class="form-control" style="max-width: 400px" onchange="show_customdns_rtype_hint()">
<option value="A" data-hint="Enter an IPv4 address (i.e. a dotted quad, such as 123.456.789.012).">A (IPv4 address)</option>
<option value="AAAA" data-hint="Enter an IPv6 address.">AAAA (IPv6 address)</option>
<option value="A" data-hint="Enter an IPv4 address (i.e. a dotted quad, such as 123.456.789.012). The 'local' alias sets the record to this box's public IPv4 address.">A (IPv4 address)</option>
<option value="AAAA" data-hint="Enter an IPv6 address. The 'local' alias sets the record to this box's public IPv6 address.">AAAA (IPv6 address)</option>
<option value="CAA" data-hint="Enter a CA that can issue certificates for this domain in the form of FLAG TAG VALUE. (0 issuewild &quot;letsencrypt.org&quot;)">CAA (Certificate Authority Authorization)</option>
<option value="CNAME" data-hint="Enter another domain name followed by a period at the end (e.g. mypage.github.io.).">CNAME (DNS forwarding)</option>
<option value="TXT" data-hint="Enter arbitrary text.">TXT (text record)</option>
<option value="MX" data-hint="Enter record in the form of PRIORIY DOMAIN., including trailing period (e.g. 20 mx.example.com.).">MX (mail exchanger)</option>
<option value="SRV" data-hint="Enter record in the form of PRIORIY WEIGHT PORT TARGET., including trailing period (e.g. 10 10 5060 sip.example.com.).">SRV (service record)</option>
<option value="MX" data-hint="Enter record in the form of PRIORITY DOMAIN., including trailing period (e.g. 20 mx.example.com.).">MX (mail exchanger)</option>
<option value="SRV" data-hint="Enter record in the form of PRIORITY WEIGHT PORT TARGET., including trailing period (e.g. 10 10 5060 sip.example.com.).">SRV (service record)</option>
<option value="SSHFP" data-hint="Enter record in the form of ALGORITHM TYPE FINGERPRINT.">SSHFP (SSH fingerprint record)</option>
<option value="NS" data-hint="Enter a hostname to which this subdomain should be delegated to">NS (DNS subdomain delegation)</option>
</select>
</div>
</div>
@ -54,7 +57,13 @@
</div>
</form>
<table id="custom-dns-current" class="table" style="width: auto; display: none">
<div style="text-align: right; font-size; 90%; margin-top: 1em;">
sort by:
<a href="#" onclick="window.miab_custom_dns_data_sort_order='qname'; show_current_custom_dns_update_after_sort(); return false;">domain name</a>
|
<a href="#" onclick="window.miab_custom_dns_data_sort_order='created'; show_current_custom_dns_update_after_sort(); return false;">created</a>
</div>
<table id="custom-dns-current" class="table" style="width: auto; display: none; margin-top: 0;">
<thead>
<th>Domain Name</th>
<th>Record Type</th>
@ -68,8 +77,8 @@
<h3>Using a secondary nameserver</h3>
<p>If your TLD requires you to have two separate nameservers, you can either set up <a href="#" onclick="return show_panel('external_dns')">external DNS</a> and ignore the DNS server on this box entirely, or use the DNS server on this box but add a secondary (aka &ldquo;slave&rdquo;) nameserver.</p>
<p>If you choose to use a seconday nameserver, you must find a seconday nameserver service provider. Your domain name registrar or virtual cloud provider may provide this service for you. Once you set up the seconday nameserver service, enter the hostname (not the IP address) of <em>their</em> secondary nameserver in the box below.</p>
<p>If your TLD requires you to have two separate nameservers, you can either set up <a href="#external_dns">external DNS</a> and ignore the DNS server on this box entirely, or use the DNS server on this box but add a secondary (aka &ldquo;slave&rdquo;) nameserver.</p>
<p>If you choose to use a secondary nameserver, you must find a secondary nameserver service provider. Your domain name registrar or virtual cloud provider may provide this service for you. Once you set up the secondary nameserver service, enter the hostname (not the IP address) of <em>their</em> secondary nameserver in the box below.</p>
<form class="form-horizontal" role="form" onsubmit="do_set_secondary_dns(); return false;">
<div class="form-group">
@ -86,8 +95,8 @@
<div class="form-group">
<div class="col-sm-offset-1 col-sm-11">
<p class="small">
Multiple secondary servers can be separated with commas or spaces (i.e., <code>ns2.hostingcompany.com ns3.hostingcompany.com</code>).
To enable zone transfers to additional servers without listing them as secondary nameservers, add <code>xfr:IPADDRESS</code>.
Multiple secondary servers can be separated with commas or spaces (i.e., <code>ns2.hostingcompany.com ns3.hostingcompany.com</code>).
To enable zone transfers to additional servers without listing them as secondary nameservers, prefix a hostname, IP address, or subnet with <code>xfr:</code>, e.g. <code>xfr:10.20.30.40</code> or <code>xfr:10.0.0.0/8</code>.
</p>
<p id="secondarydns-clear-instructions" style="display: none" class="small">
Clear the input field above and click Update to use this machine itself as secondary DNS, which is the default/normal setup.
@ -124,7 +133,7 @@
<tr><td>email</td> <td>The email address of any administrative user here.</td></tr>
<tr><td>password</td> <td>That user&rsquo;s password.</td></tr>
<tr><td>qname</td> <td>The fully qualified domain name for the record you are trying to set. It must be one of the domain names or a subdomain of one of the domain names hosted on this box. (Add mail users or aliases to add new domains.)</td></tr>
<tr><td>rtype</td> <td>The resource type. Defaults to <code>A</code> if omitted. Possible values: <code>A</code> (an IPv4 address), <code>AAAA</code> (an IPv6 address), <code>TXT</code> (a text string), <code>CNAME</code> (an alias, which is a fully qualified domain name &mdash; don&rsquo;t forget the final period), <code>MX</code>, or <code>SRV</code>.</td></tr>
<tr><td>rtype</td> <td>The resource type. Defaults to <code>A</code> if omitted. Possible values: <code>A</code> (an IPv4 address), <code>AAAA</code> (an IPv6 address), <code>TXT</code> (a text string), <code>CNAME</code> (an alias, which is a fully qualified domain name &mdash; don&rsquo;t forget the final period), <code>MX</code>, <code>SRV</code>, <code>SSHFP</code>, <code>CAA</code> or <code>NS</code>.</td></tr>
<tr><td>value</td> <td>For PUT, POST, and DELETE, the record&rsquo;s value. If the <code>rtype</code> is <code>A</code> or <code>AAAA</code> and <code>value</code> is empty or omitted, the IPv4 or IPv6 address of the remote host is used (be sure to use the <code>-4</code> or <code>-6</code> options to curl). This is handy for dynamic DNS!</td></tr>
</table>
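For reference, here is a minimal sketch of driving this API from Python's standard library, matching the parameters described above and using HTTP basic authentication. The hostname box.example.com, the account me@mydomain.com, and the record values are placeholders, not part of this box's configuration:

# Illustrative sketch only -- hostname, account, and record values are examples.
import base64
import urllib.request

def set_custom_record(qname, rtype, value, email, password, box="box.example.com"):
    # PUT the record value as the raw request body (the custom DNS API takes raw POST/PUT bodies).
    req = urllib.request.Request(
        url=f"https://{box}/admin/dns/custom/{qname}/{rtype}",
        data=value.encode("ascii"),
        method="PUT")
    credentials = base64.b64encode(f"{email}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + credentials)
    with urllib.request.urlopen(req) as response:
        return response.read().decode()

# e.g. set_custom_record("www.mydomain.com", "A", "203.0.113.1", "me@mydomain.com", "password")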
@ -189,20 +198,38 @@ function show_current_custom_dns() {
$('#custom-dns-current').fadeIn();
else
$('#custom-dns-current').fadeOut();
window.miab_custom_dns_data = data;
show_current_custom_dns_update_after_sort();
});
}
$('#custom-dns-current').find("tbody").text('');
function show_current_custom_dns_update_after_sort() {
var data = window.miab_custom_dns_data;
var sort_key = window.miab_custom_dns_data_sort_order || "qname";
data.sort(function(a, b) { return a["sort-order"][sort_key] - b["sort-order"][sort_key] });
var tbody = $('#custom-dns-current').find("tbody");
tbody.text('');
var last_zone = null;
for (var i = 0; i < data.length; i++) {
if (sort_key == "qname" && data[i].zone != last_zone) {
var r = $("<tr><th colspan=4 style='background-color: #EEE'></th></tr>");
r.find("th").text(data[i].zone);
tbody.append(r);
last_zone = data[i].zone;
}
var tr = $("<tr/>");
$('#custom-dns-current').find("tbody").append(tr);
tbody.append(tr);
tr.attr('data-qname', data[i].qname);
tr.attr('data-rtype', data[i].rtype);
tr.attr('data-value', data[i].value);
tr.append($('<td class="long"/>').text(data[i].qname));
tr.append($('<td/>').text(data[i].rtype));
tr.append($('<td class="long"/>').text(data[i].value));
tr.append($('<td class="long" style="max-width: 40em"/>').text(data[i].value));
tr.append($('<td>[<a href="#" onclick="return delete_custom_dns_record(this)">delete</a>]</td>'));
}
});
}
function delete_custom_dns_record(elem) {


@ -38,10 +38,23 @@
<p class="alert" role="alert">
<span class="glyphicon glyphicon-info-sign"></span>
You may encounter zone file errors when attempting to create a TXT record with a long string.
<a href="http://tools.ietf.org/html/rfc4408#section-3.1.3">RFC 4408</a> states a TXT record is allowed to contain multiple strings, and this technique can be used to construct records that would exceed the 255-byte maximum length.
<a href="https://tools.ietf.org/html/rfc4408#section-3.1.3">RFC 4408</a> states a TXT record is allowed to contain multiple strings, and this technique can be used to construct records that would exceed the 255-byte maximum length.
You may need to adopt this technique when adding DomainKeys. Use a tool like <code>named-checkzone</code> to validate your zone file.
</p>
<h3>Download zonefile</h3>
<p>You can download your zonefiles here or use the table of records below.</p>
<form class="form-inline" role="form" onsubmit="do_download_zonefile(); return false;">
<div class="form-group">
<div class="form-group">
<label for="downloadZonefile" class="control-label sr-only">Zone</label>
<select id="downloadZonefile" class="form-control" style="width: auto"> </select>
</div>
<button type="submit" class="btn btn-primary">Download</button>
</div>
</form>
<h3>Records</h3>
<table id="external_dns_settings" class="table">
<thead>
@ -57,6 +70,18 @@
<script>
function show_external_dns() {
api(
"/dns/zones",
"GET",
{ },
function(data) {
var zones = $('#downloadZonefile');
zones.text('');
for (var j = 0; j < data.length; j++) {
zones.append($('<option/>').text(data[j]));
}
});
$('#external_dns_settings tbody').html("<tr><td colspan='2' class='text-muted'>Loading...</td></tr>")
api(
"/dns/dump",
@ -84,4 +109,19 @@ function show_external_dns() {
}
})
}
function do_download_zonefile() {
var zone = $('#downloadZonefile').val();
api(
"/dns/zonefile/"+ zone,
"GET",
{},
function(data) {
show_modal_error("Download Zonefile", $("<pre/>").text(data));
},
function(err) {
show_modal_error("Download Zonefile (Error)", $("<pre/>").text(err));
});
}
</script>


@ -9,11 +9,11 @@
<meta name="robots" content="noindex, nofollow">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" crossorigin="anonymous">
<link rel="stylesheet" href="/admin/assets/bootstrap/css/bootstrap.min.css">
<style>
body {
overflow-y: scroll;
padding-bottom: 20px;
body {
overflow-y: scroll;
padding-bottom: 20px;
}
p {
@ -36,20 +36,20 @@
margin-bottom: 13px;
margin-top: 30px;
}
.panel-heading h3 {
border: none;
padding: 0;
margin: 0;
}
.panel-heading h3 {
border: none;
padding: 0;
margin: 0;
}
h4 {
font-size: 110%;
margin-bottom: 13px;
margin-top: 18px;
}
h4:first-child {
margin-top: 6px;
}
h4:first-child {
margin-top: 6px;
}
.admin_panel {
display: none;
@ -59,11 +59,37 @@
margin: 1.5em 0;
}
ol li {
margin-bottom: 1em;
}
ol li {
margin-bottom: 1em;
}
.if-logged-in { display: none; }
.if-logged-in-admin { display: none; }
/* The below only gets used if it is supported */
@media (prefers-color-scheme: dark) {
/* Invert lightness but not hue */
html {
filter: invert(100%) hue-rotate(180deg);
}
/* Override the Bootstrap theme here to give more contrast. The black is turned to white by the filter. */
.form-control {
color: black !important;
}
/* Revert the invert for the navbar */
button, div.navbar {
filter: invert(100%) hue-rotate(180deg);
}
/* Revert the revert for the dropdowns */
ul.dropdown-menu {
filter: invert(100%) hue-rotate(180deg);
}
}
</style>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap-theme.min.css" integrity="sha384-fLW2N01lMqjakBkx3l/M9EahuwpSfeNvV63J5ezn3uZzapT0u7EYsXMjQV+0En5r" crossorigin="anonymous">
<link rel="stylesheet" href="/admin/assets/bootstrap/css/bootstrap-theme.min.css">
</head>
<body>
@ -83,38 +109,46 @@
</div>
<div class="navbar-collapse collapse">
<ul class="nav navbar-nav">
<li class="dropdown">
<li class="dropdown if-logged-in-admin">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">System <b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="#system_status" onclick="return show_panel(this);">Status Checks</a></li>
<li><a href="#tls" onclick="return show_panel(this);">TLS (SSL) Certificates</a></li>
<li><a href="#system_backup" onclick="return show_panel(this);">Backup Status</a></li>
<li><a href="#system_status">Status Checks</a></li>
<li><a href="#tls">TLS (SSL) Certificates</a></li>
<li><a href="#system_backup">Backup Status</a></li>
<li class="divider"></li>
<li class="dropdown-header">Advanced Pages</li>
<li><a href="#custom_dns" onclick="return show_panel(this);">Custom DNS</a></li>
<li><a href="#external_dns" onclick="return show_panel(this);">External DNS</a></li>
<li><a href="/admin/munin" target="_blank">Munin Monitoring</a></li>
<li><a href="#custom_dns">Custom DNS</a></li>
<li><a href="#external_dns">External DNS</a></li>
<li><a href="#munin">Munin Monitoring</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Mail <b class="caret"></b></a>
<li><a href="#mail-guide" class="if-logged-in-not-admin">Mail</a></li>
<li class="dropdown if-logged-in-admin">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">Mail &amp; Users <b class="caret"></b></a>
<ul class="dropdown-menu">
<li><a href="#mail-guide" onclick="return show_panel(this);">Instructions</a></li>
<li><a href="#users" onclick="return show_panel(this);">Users</a></li>
<li><a href="#aliases" onclick="return show_panel(this);">Aliases</a></li>
<li><a href="#mail-guide">Instructions</a></li>
<li><a href="#users">Users</a></li>
<li><a href="#aliases">Aliases</a></li>
<li class="divider"></li>
<li class="dropdown-header">Your Account</li>
<li><a href="#mfa">Two-Factor Authentication</a></li>
</ul>
</li>
<li><a href="#sync_guide" onclick="return show_panel(this);">Contacts/Calendar</a></li>
<li><a href="#web" onclick="return show_panel(this);">Web</a></li>
<li><a href="#sync_guide" class="if-logged-in">Contacts/Calendar</a></li>
<li><a href="#web" class="if-logged-in-admin">Web</a></li>
</ul>
<ul class="nav navbar-nav navbar-right">
<li><a href="#" onclick="do_logout(); return false;" style="color: white">Log out?</a></li>
<li class="if-logged-in"><a href="#" onclick="do_logout(); return false;" style="color: white">Log out</a></li>
</ul>
</div><!--/.navbar-collapse -->
</div>
</div>
<div class="container">
<div id="panel_welcome" class="admin_panel">
{% include "welcome.html" %}
</div>
<div id="panel_system_status" class="admin_panel">
{% include "system-status.html" %}
</div>
@ -131,6 +165,10 @@
{% include "custom-dns.html" %}
</div>
<div id="panel_mfa" class="admin_panel">
{% include "mfa.html" %}
</div>
<div id="panel_login" class="admin_panel">
{% include "login.html" %}
</div>
@ -159,6 +197,10 @@
{% include "ssl.html" %}
</div>
<div id="panel_munin" class="admin_panel">
{% include "munin.html" %}
</div>
<hr>
<footer>
@ -191,8 +233,8 @@
</div>
</div>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js" integrity="sha256-rsPUGdUPBXgalvIj4YKJrrUlmLXbOb6Cp7cdxn1qeUc=" crossorigin="anonymous"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js" integrity="sha384-0mSbJDEHialfmuBBQP6A4Qrprq5OVfW37PRR3j5ELqxss1yVqOtnepnHVP9aJ7xS" crossorigin="anonymous"></script>
<script src="/admin/assets/jquery.min.js"></script>
<script src="/admin/assets/bootstrap/js/bootstrap.min.js"></script>
<script>
var global_modal_state = null;
@ -218,7 +260,7 @@ $(function() {
if (global_modal_state == null) global_modal_state = 1; // cancel if the user hit ESC or clicked outside of the modal
if (global_modal_funcs && global_modal_funcs[global_modal_state])
global_modal_funcs[global_modal_state]();
})
})
})
function show_modal_error(title, message, callback) {
@ -281,7 +323,7 @@ function ajax_with_indicator(options) {
};
options.error = function(jqxhr) {
hide_loading_indicator();
if (!old_error)
if (!old_error)
show_modal_error("Error", "Something went wrong, sorry.")
else
old_error(jqxhr.responseText, jqxhr);
@ -291,8 +333,8 @@ function ajax_with_indicator(options) {
return false; // handy when called from onclick
}
var api_credentials = ["", ""];
function api(url, method, data, callback, callback_error) {
var api_credentials = null;
function api(url, method, data, callback, callback_error, headers) {
// from http://www.webtoolkit.info/javascript-base64.html
function base64encode(input) {
_keyStr = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
@ -330,7 +372,7 @@ function api(url, method, data, callback, callback_error) {
method: method,
cache: false,
data: data,
headers: headers,
// the custom DNS api sends raw POST/PUT bodies --- prevent URL-encoding
processData: typeof data != "string",
mimeType: typeof data == "string" ? "text/plain; charset=ascii" : null,
@ -339,9 +381,10 @@ function api(url, method, data, callback, callback_error) {
// We don't store user credentials in a cookie to avoid the hassle of CSRF
// attacks. The Authorization header only gets set in our AJAX calls triggered
// by user actions.
xhr.setRequestHeader(
'Authorization',
'Basic ' + base64encode(api_credentials[0] + ':' + api_credentials[1]));
if (api_credentials)
xhr.setRequestHeader(
'Authorization',
'Basic ' + base64encode(api_credentials.username + ':' + api_credentials.session_key));
},
success: callback,
error: callback_error || default_error,
@ -358,34 +401,64 @@ function api(url, method, data, callback, callback_error) {
var current_panel = null;
var switch_back_to_panel = null;
function do_logout() {
// Clear the session from the backend.
api("/logout", "POST");
// Forget the token.
api_credentials = null;
if (typeof localStorage != 'undefined')
localStorage.removeItem("miab-cp-credentials");
if (typeof sessionStorage != 'undefined')
sessionStorage.removeItem("miab-cp-credentials");
// Return to the start.
show_panel('login');
// Reset menus.
show_hide_menus();
}
function show_panel(panelid) {
if (panelid.getAttribute)
if (panelid.getAttribute) {
// we might be passed an HTMLElement <a>.
panelid = panelid.getAttribute('href').substring(1);
}
$('.admin_panel').hide();
$('#panel_' + panelid).show();
if (typeof localStorage != 'undefined')
localStorage.setItem("miab-cp-lastpanel", panelid);
if (window["show_" + panelid])
window["show_" + panelid]();
current_panel = panelid;
switch_back_to_panel = null;
return false; // when called from onclick, cancel navigation
}
window.onhashchange = function() {
var panelid = window.location.hash.substring(1);
show_panel(panelid);
};
$(function() {
// Recall saved user credentials.
if (typeof sessionStorage != 'undefined' && sessionStorage.getItem("miab-cp-credentials"))
api_credentials = sessionStorage.getItem("miab-cp-credentials").split(":");
else if (typeof localStorage != 'undefined' && localStorage.getItem("miab-cp-credentials"))
api_credentials = localStorage.getItem("miab-cp-credentials").split(":");
try {
if (typeof sessionStorage != 'undefined' && sessionStorage.getItem("miab-cp-credentials"))
api_credentials = JSON.parse(sessionStorage.getItem("miab-cp-credentials"));
else if (typeof localStorage != 'undefined' && localStorage.getItem("miab-cp-credentials"))
api_credentials = JSON.parse(localStorage.getItem("miab-cp-credentials"));
} catch (_) {
}
// Toggle menu state.
show_hide_menus();
// Recall what the user was last looking at.
if (typeof localStorage != 'undefined' && localStorage.getItem("miab-cp-lastpanel")) {
show_panel(localStorage.getItem("miab-cp-lastpanel"));
if (api_credentials != null && window.location.hash) {
var panelid = window.location.hash.substring(1);
show_panel(panelid);
} else if (api_credentials != null) {
show_panel('welcome');
} else {
show_panel('login');
}


@ -1,4 +1,29 @@
<h1 style="margin: 1em; text-align: center">{{hostname}}</h1>
<style>
.title {
margin: 1em;
text-align: center;
}
.subtitle {
margin: 2em;
text-align: center;
}
.login {
margin: 0 auto;
max-width: 32em;
}
.login #loginOtp {
display: none;
}
#loginForm.is-twofactor #loginOtp {
display: block
}
</style>
<h1 class="title">{{hostname}}</h1>
{% if no_users_exist or no_admins_exist %}
<div class="row">
@ -7,23 +32,23 @@
<p class="text-danger">There are no users on this system! To make an administrative user,
log into this machine using SSH (like when you first set it up) and run:</p>
<pre>cd mailinabox
sudo tools/mail.py user add me@{{hostname}}
sudo tools/mail.py user make-admin me@{{hostname}}</pre>
sudo management/cli.py user add me@{{hostname}}
sudo management/cli.py user make-admin me@{{hostname}}</pre>
{% else %}
<p class="text-danger">There are no administrative users on this system! To make an administrative user,
log into this machine using SSH (like when you first set it up) and run:</p>
<pre>cd mailinabox
sudo tools/mail.py user make-admin me@{{hostname}}</pre>
sudo management/cli.py user make-admin me@{{hostname}}</pre>
{% endif %}
<hr>
</div>
</div>
</div>
{% endif %}
<p style="margin: 2em; text-align: center;">Log in here for your Mail-in-a-Box control panel.</p>
<p class="subtitle">Log in here for your Mail-in-a-Box control panel.</p>
<div style="margin: 0 auto; max-width: 32em;">
<form class="form-horizontal" role="form" onsubmit="do_login(); return false;">
<div class="login">
<form id="loginForm" class="form-horizontal" role="form" onsubmit="do_login(); return false;" method="get">
<div class="form-group">
<label for="inputEmail3" class="col-sm-3 control-label">Email</label>
<div class="col-sm-9">
@ -36,6 +61,13 @@ sudo tools/mail.py user make-admin me@{{hostname}}</pre>
<input name="password" type="password" class="form-control" id="loginPassword" placeholder="Password">
</div>
</div>
<div class="form-group" id="loginOtp">
<label for="loginOtpInput" class="col-sm-3 control-label">Code</label>
<div class="col-sm-9">
<input type="text" class="form-control" id="loginOtpInput" placeholder="6-digit code" autocomplete="off">
<div class="help-block" style="margin-top: 5px; font-size: 90%">Enter the six-digit code generated by your two factor authentication app.</div>
</div>
</div>
<div class="form-group">
<div class="col-sm-offset-3 col-sm-9">
<div class="checkbox">
@ -53,15 +85,15 @@ sudo tools/mail.py user make-admin me@{{hostname}}</pre>
</form>
</div>
<script>
function do_login() {
if ($('#loginEmail').val() == "") {
show_modal_error("Login Failed", "Enter your email address.", function() {
$('#loginEmail').focus();
$('#loginEmail').focus();
});
return false;
}
if ($('#loginPassword').val() == "") {
show_modal_error("Login Failed", "Enter your email password.", function() {
$('#loginPassword').focus();
@ -70,22 +102,34 @@ function do_login() {
}
// Exchange the email address & password for an API key.
api_credentials = [$('#loginEmail').val(), $('#loginPassword').val()]
api_credentials = { username: $('#loginEmail').val(), session_key: $('#loginPassword').val() }
api(
"/me",
"GET",
{ },
function(response){
"/login",
"POST",
{},
function(response) {
// This API call always succeeds. It returns a JSON object indicating
// whether the request was authenticated or not.
if (response.status != "ok") {
// Show why the login failed.
show_modal_error("Login Failed", response.reason)
if (response.status != 'ok') {
if (response.status === 'missing-totp-token' || (response.status === 'invalid' && response.reason == 'invalid-totp-token')) {
$('#loginForm').addClass('is-twofactor');
if (response.reason === "invalid-totp-token") {
show_modal_error("Login Failed", "Incorrect two factor authentication token.");
} else {
setTimeout(() => {
$('#loginOtpInput').focus();
});
}
} else {
$('#loginForm').removeClass('is-twofactor');
// Reset any saved credentials.
do_logout();
// Show why the login failed.
show_modal_error("Login Failed", response.reason)
// Reset any saved credentials.
do_logout();
}
} else if (!("api_key" in response)) {
// Login succeeded but user might not be authorized!
show_modal_error("Login Failed", "You are not an administrator on this system.")
@ -97,41 +141,56 @@ function do_login() {
// Login succeeded.
// Save the new credentials.
api_credentials = [response.email, response.api_key];
api_credentials = { username: response.email,
session_key: response.api_key,
privileges: response.privileges };
// Try to wipe the username/password information.
$('#loginEmail').val('');
$('#loginPassword').val('');
$('#loginOtpInput').val('');
$('#loginForm').removeClass('is-twofactor');
// Remember the credentials.
if (typeof localStorage != 'undefined' && typeof sessionStorage != 'undefined') {
if ($('#loginRemember').val()) {
localStorage.setItem("miab-cp-credentials", api_credentials.join(":"));
localStorage.setItem("miab-cp-credentials", JSON.stringify(api_credentials));
sessionStorage.removeItem("miab-cp-credentials");
} else {
localStorage.removeItem("miab-cp-credentials");
sessionStorage.setItem("miab-cp-credentials", api_credentials.join(":"));
sessionStorage.setItem("miab-cp-credentials", JSON.stringify(api_credentials));
}
}
// Toggle menus.
show_hide_menus();
// Open the next panel the user wants to go to. Do this after the XHR response
// is over so that we don't start a new XHR request while this one is finishing,
// which confuses the loading indicator.
setTimeout(function() { show_panel(!switch_back_to_panel || switch_back_to_panel == "login" ? 'system_status' : switch_back_to_panel) }, 300);
}
})
}
setTimeout(function() {
if (window.location.hash) {
var panelid = window.location.hash.substring(1);
show_panel(panelid);
} else {
show_panel(
!switch_back_to_panel || switch_back_to_panel == "login"
? 'welcome'
: switch_back_to_panel)
}
}, 300);
function do_logout() {
api_credentials = ["", ""];
if (typeof localStorage != 'undefined')
localStorage.removeItem("miab-cp-credentials");
if (typeof sessionStorage != 'undefined')
sessionStorage.removeItem("miab-cp-credentials");
show_panel('login');
}
},
undefined,
{
'x-auth-token': $('#loginOtpInput').val()
});
}
function show_login() {
$('#loginForm').removeClass('is-twofactor');
$('#loginOtpInput').val('');
$('#loginEmail,#loginPassword').each(function() {
var input = $(this);
if (!$.trim(input.val())) {
@ -140,4 +199,19 @@ function show_login() {
}
});
}
function show_hide_menus() {
var is_logged_in = (api_credentials != null);
var privs = api_credentials ? api_credentials.privileges : [];
$('.if-logged-in').toggle(is_logged_in);
$('.if-logged-in-admin, .if-logged-in-not-admin').toggle(false);
if (is_logged_in) {
$('.if-logged-in-not-admin').toggle(true);
privs.forEach(function(priv) {
$('.if-logged-in-' + priv).toggle(true);
$('.if-logged-in-not-' + priv).toggle(false);
});
}
$('.if-not-logged-in').toggle(!is_logged_in);
}
</script>


@ -16,7 +16,7 @@
<h4>Automatic configuration</h4>
<p>iOS and OS X only: Open <a style="font-weight: bold" href="https://{{hostname}}/mailinabox.mobileconfig">this configuration link</a> on your iOS device or on your Mac desktop to easily set up mail (IMAP/SMTP), Contacts, and Calendar. Your username is your whole email address.</p>
<p>iOS and macOS only: Open <a style="font-weight: bold" href="https://{{hostname}}/mailinabox.mobileconfig">this configuration link</a> on your iOS device or on your Mac desktop to easily set up mail (IMAP/SMTP), Contacts, and Calendar. Your username is your whole email address.</p>
<h4>Manual configuration</h4>
@ -30,19 +30,19 @@
<tr><th>Mail server</th> <td>{{hostname}}</td>
<tr><th>IMAP Port</th> <td>993</td></tr>
<tr><th>IMAP Security</th> <td>SSL or TLS</td></tr>
<tr><th>SMTP Port</th> <td>587</td></tr>
<tr><th>SMTP Security</td> <td>STARTTLS <small>(&ldquo;always&rdquo; or &ldquo;required&rdquo;, if prompted)</small></td></tr>
<tr><th>SMTP Port</th> <td>465</td></tr>
<tr><th>SMTP Security</td> <td>SSL or TLS</td></tr>
<tr><th>Username:</th> <td>Your whole email address.</td></tr>
<tr><th>Password:</th> <td>Your mail password.</td></tr>
</table>
<p>In addition to setting up your email, you&rsquo;ll also need to set up <a href="#sync_guide" onclick="return show_panel(this);">contacts and calendar synchronization</a> separately.</p>
<p>In addition to setting up your email, you&rsquo;ll also need to set up <a href="#sync_guide">contacts and calendar synchronization</a> separately.</p>
<p>As an alternative to IMAP you can also use the POP protocol: choose POP as the protocol, port 995, and SSL or TLS security in your mail client. The SMTP settings and usernames and passwords remain the same. However, we recommend you use IMAP instead.</p>
<h4>Exchange/ActiveSync settings</h4>
<p>On iOS devices, devices on this <a href="http://z-push.org/compatibility/">compatibility list</a>, or using Outlook 2007 or later on Windows 7 and later, you may set up your mail as an Exchange or ActiveSync server. However, we&rsquo;ve found this to be more buggy than using IMAP as described above. If you encounter any problems, please use the manual settings above.</p>
<p>On iOS devices, devices on this <a href="https://wiki.z-hub.io/display/ZP/Compatibility">compatibility list</a>, or using Outlook 2007 or later on Windows 7 and later, you may set up your mail as an Exchange or ActiveSync server. However, we&rsquo;ve found this to be more buggy than using IMAP as described above. If you encounter any problems, please use the manual settings above.</p>
<table class="table">
<tr><th>Server</th> <td>{{hostname}}</td></tr>
@ -59,7 +59,7 @@
</div>
<div class="panel-body">
<h4>Greylisting</h4>
<p>Your box using a technique called greylisting to cut down on spam. Greylisting works by delaying mail from people you haven&rsquo;t received mail from before for up to about 10 minutes. The vast majority of spam gets tricked by this. If you are waiting for an email from someone new, such as if you are registering on a new website and are waiting for an email confirmation, please give it up to 10-15 minutes to arrive.</p>
<p>Your box uses a technique called greylisting to cut down on spam. Greylisting works by initially rejecting mail from people you haven&rsquo;t received mail from before. Legitimate mail servers will attempt redelivery shortly afterwards, but the vast majority of spam gets tricked by this. If you are waiting for an email from someone new, such as when you register on a new website and are waiting for an email confirmation, please be aware that there will be a delay of at least 3 minutes, depending on how soon the remote server attempts redelivery.</p>
<h4>+tag addresses</h4>
<p>Every incoming email address also receives mail for <code>+tag</code> addresses. If your email address is <code>you@yourdomain.com</code>, you&rsquo;ll also automatically get mail sent to <code>you+anythinghere@yourdomain.com</code>. Use this as a fast way to segment incoming mail for your own filtering rules without having to create aliases in this control panel.</p>


@ -0,0 +1,242 @@
<style>
.twofactor #totp-setup,
.twofactor #disable-2fa,
.twofactor #output-2fa {
display: none;
}
.twofactor.loaded .loading-indicator {
display: none;
}
.twofactor.disabled #disable-2fa,
.twofactor.enabled #totp-setup {
display: none;
}
.twofactor.disabled #totp-setup,
.twofactor.enabled #disable-2fa {
display: block;
}
.twofactor #totp-setup-qr img {
display: block;
width: 256px;
max-width: 100%;
height: auto;
}
.twofactor #output-2fa.visible {
display: block;
}
</style>
<h2>Two-Factor Authentication</h2>
<p>When two-factor authentication is enabled, you will be prompted to enter a six-digit code from an
authenticator app (usually on your phone) when you log into this control panel.</p>
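For background, the six-digit codes are standard TOTP one-time passwords. Below is a minimal sketch of how such a code is generated and verified, assuming the third-party pyotp package; it is illustrative only and not necessarily the control panel's internal implementation:

# Sketch of the TOTP scheme behind these codes (assumes the pyotp package; placeholder values).
import pyotp

secret = pyotp.random_base32()     # the shared secret shown and encoded in the QR code
totp = pyotp.TOTP(secret)

code = totp.now()                  # the six-digit code the authenticator app displays
print(code, totp.verify(code))     # the server checks the submitted code the same way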
<div class="panel panel-danger">
<div class="panel-heading">
Enabling two-factor authentication does not protect access to your email
</div>
<div class="panel-body">
Enabling two-factor authentication on this page only limits access to this control panel. Remember that most websites allow you to
reset your password by checking your email, so anyone with access to your email can typically take over
your other accounts. Additionally, if your email address or any alias that forwards to your email
address is a typical domain control validation address (e.g. admin@, administrator@, postmaster@, hostmaster@,
webmaster@, abuse@), extra care should be taken to protect the account. <strong>Always use a strong password,
and ensure every administrator account for this control panel does the same.</strong>
</div>
</div>
<div class="twofactor">
<div class="loading-indicator">Loading...</div>
<form id="totp-setup">
<h3>Setup Instructions</h3>
<div class="form-group">
<p>1. Install <a href="https://freeotp.github.io/">FreeOTP</a> or <a href="https://www.pcworld.com/article/3225913/what-is-two-factor-authentication-and-which-2fa-apps-are-best.html">any
other two-factor authentication app</a> that supports TOTP.</p>
</div>
<div class="form-group">
<p style="margin-bottom: 0">2. Scan the QR code in the app or directly enter the secret into the app:</p>
<div id="totp-setup-qr"></div>
</div>
<div class="form-group">
<label for="otp-label" style="font-weight: normal">3. Optionally, give your device a label so that you can remember what device you set it up on:</label>
<input type="text" id="totp-setup-label" class="form-control" placeholder="my phone" />
</div>
<div class="form-group">
<label for="otp" style="font-weight: normal">4. Use the app to generate your first six-digit code and enter it here:</label>
<input type="text" id="totp-setup-token" class="form-control" placeholder="6-digit code" />
</div>
<input type="hidden" id="totp-setup-secret" />
<div class="form-group">
<p>When you click Enable Two-Factor Authentication, you will be logged out of the control panel and will have to log in
again, now using your two-factor authentication app.</p>
<button id="totp-setup-submit" disabled type="submit" class="btn">Enable Two-Factor Authentication</button>
</div>
</form>
<form id="disable-2fa">
<div class="form-group">
<p>Two-factor authentication is active for your account<span id="mfa-device-label"></span>.</p>
<p>You will have to log into the admin panel again after disabling two-factor authentication.</p>
</div>
<div class="form-group">
<button type="submit" class="btn btn-danger">Disable Two-Factor Authentication</button>
</div>
</form>
<div id="output-2fa" class="panel panel-danger">
<div class="panel-body"></div>
</div>
</div>
<script>
var el = {
disableForm: document.getElementById('disable-2fa'),
output: document.getElementById('output-2fa'),
totpSetupForm: document.getElementById('totp-setup'),
totpSetupToken: document.getElementById('totp-setup-token'),
totpSetupSecret: document.getElementById('totp-setup-secret'),
totpSetupLabel: document.getElementById('totp-setup-label'),
totpQr: document.getElementById('totp-setup-qr'),
totpSetupSubmit: document.querySelector('#totp-setup-submit'),
wrapper: document.querySelector('.twofactor')
}
function update_setup_disabled(evt) {
var val = evt.target.value.trim();
if (
typeof val !== 'string' ||
typeof el.totpSetupSecret.value !== 'string' ||
val.length !== 6 ||
el.totpSetupSecret.value.length !== 32 ||
!(/^\+?\d+$/.test(val))
) {
el.totpSetupSubmit.setAttribute('disabled', '');
} else {
el.totpSetupSubmit.removeAttribute('disabled');
}
}
function render_totp_setup(provisioned_totp) {
var img = document.createElement('img');
img.src = "data:image/png;base64," + provisioned_totp.qr_code_base64;
var code = document.createElement('div');
code.innerHTML = `Secret: ${provisioned_totp.secret}`;
el.totpQr.appendChild(img);
el.totpQr.appendChild(code);
el.totpSetupToken.addEventListener('input', update_setup_disabled);
el.totpSetupForm.addEventListener('submit', do_enable_totp);
el.totpSetupSecret.setAttribute('value', provisioned_totp.secret);
el.wrapper.classList.add('disabled');
}
function render_disable(mfa) {
el.disableForm.addEventListener('submit', do_disable);
el.wrapper.classList.add('enabled');
if (mfa.label)
$("#mfa-device-label").text(" on device '" + mfa.label + "'");
}
function hide_error() {
el.output.querySelector('.panel-body').innerHTML = '';
el.output.classList.remove('visible');
}
function render_error(msg) {
el.output.querySelector('.panel-body').innerHTML = msg;
el.output.classList.add('visible');
}
function reset_view() {
el.wrapper.classList.remove('loaded', 'disabled', 'enabled');
el.disableForm.removeEventListener('submit', do_disable);
hide_error();
el.totpSetupForm.reset();
el.totpSetupForm.removeEventListener('submit', do_enable_totp);
el.totpSetupSecret.setAttribute('value', '');
el.totpSetupToken.removeEventListener('input', update_setup_disabled);
el.totpSetupSubmit.setAttribute('disabled', '');
el.totpQr.innerHTML = '';
}
function show_mfa() {
reset_view();
api(
'/mfa/status',
'POST',
{},
function(res) {
el.wrapper.classList.add('loaded');
var has_mfa = false;
res.enabled_mfa.forEach(function(mfa) {
if (mfa.type == "totp") {
render_disable(mfa);
has_mfa = true;
}
});
if (!has_mfa)
render_totp_setup(res.new_mfa.totp);
}
);
}
function do_disable(evt) {
evt.preventDefault();
hide_error();
api(
'/mfa/disable',
'POST',
{ type: 'totp' },
function() {
do_logout();
}
);
return false;
}
function do_enable_totp(evt) {
evt.preventDefault();
hide_error();
api(
'/mfa/totp/enable',
'POST',
{
token: $(el.totpSetupToken).val(),
secret: $(el.totpSetupSecret).val(),
label: $(el.totpSetupLabel).val()
},
function(res) { do_logout(); },
function(res) { render_error(res); }
);
return false;
}
</script>


@ -0,0 +1,20 @@
<h2>Munin Monitoring</h2>
<style>
</style>
<p>Opening munin in a new tab... You may need to allow pop-ups for this site.</p>
<script>
function show_munin() {
// Set the cookie.
api(
"/munin",
"GET",
{ },
function(r) {
// Redirect.
window.open("/admin/munin/index.html", "_blank");
});
}
</script>

View File

@ -8,7 +8,7 @@
<p>You need a TLS certificate for this box&rsquo;s hostname ({{hostname}}) and every other domain name and subdomain that this box is hosting a website for (see the list below).</p>
<div id="ssl_provision">
<h3>Provision a certificate</h3>
<h3>Provision certificates</h3>
<div id="ssl_provision_p" style="display: none; margin-top: 1.5em">
<button onclick='return provision_tls_cert();' class='btn btn-primary' style="float: left; margin: 0 1.5em 1em 0;">Provision</button>
@ -19,21 +19,6 @@
<div class="clearfix"> </div>
<div id="ssl_provision_result"></div>
<div id="ssl_provision_problems_div" style="display: none;">
<p style="margin-bottom: .5em;">Certificates cannot be automatically provisioned for:</p>
<table id="ssl_provision_problems" style="margin-top: 0;" class="table">
<thead>
<tr>
<th>Domain</th>
<th>Problem</th>
</tr>
</thead>
<tbody>
</tbody>
</table>
<p>Use the <em>Install Certificate</em> button below for these domains.</p>
</div>
</div>
<h3>Certificate status</h3>
@ -55,7 +40,7 @@
<h3 id="ssl_install_header">Install certificate</h3>
<p>There are many other places where you can get a free or cheap certificate. If you don't want to use our automatic Let's Encrypt integration, you can give <a href="https://www.namecheap.com/security/ssl-certificates/domain-validation.aspx">Namecheap&rsquo;s $9 certificate</a>, <a href="https://www.startssl.com/">StartSSL&rsquo;s free express lane</a>, <a href="https://buy.wosign.com/free/">WoSign&rsquo;s free TLS</a></a> or any other certificate provider a try.</p>
<p>If you don't want to use our automatic Let's Encrypt integration, you can give any other certificate provider a try. You can generate the needed CSR below.</p>
<p>Which domain are you getting a certificate for?</p>
@ -103,23 +88,11 @@ function show_tls(keep_provisioning_shown) {
// provisioning status
if (!keep_provisioning_shown)
$('#ssl_provision').toggle(res.can_provision.length + res.cant_provision.length > 0)
$('#ssl_provision').toggle(res.can_provision.length > 0)
$('#ssl_provision_p').toggle(res.can_provision.length > 0);
if (res.can_provision.length > 0)
$('#ssl_provision_p span').text(res.can_provision.join(", "));
$('#ssl_provision_problems_div').toggle(res.cant_provision.length > 0);
$('#ssl_provision_problems tbody').text("");
for (var i = 0; i < res.cant_provision.length; i++) {
var domain = res.cant_provision[i];
var row = $("<tr><th class='domain'><a href=''></a></th><td class='status'></td></tr>");
$('#ssl_provision_problems tbody').append(row);
row.attr('data-domain', domain.domain);
row.find('.domain a').text(domain.domain);
row.find('.domain a').attr('href', 'https://' + domain.domain);
row.find('.status').text(domain.problem);
}
// certificate status
var domains = res.status;
@ -159,7 +132,11 @@ function ssl_install(elem) {
}
function show_csr() {
// Can't show a CSR until both inputs are entered.
if ($('#ssldomain').val() == "") return;
if ($('#sslcc').val() == "") return;
// Scroll to it and fetch.
$('#csr_info').slideDown();
$('#ssl_csr').text('Loading...');
api(
@ -192,20 +169,15 @@ function install_cert() {
});
}
var agree_to_tos_url_prompt = null;
var agree_to_tos_url = null;
function provision_tls_cert() {
// Automatically provision any certs.
$('#ssl_provision_p .btn').attr('disabled', '1'); // prevent double-clicks
api(
"/ssl/provision",
"POST",
{
agree_to_tos_url: agree_to_tos_url
},
{ },
function(status) {
// Clear last attempt.
agree_to_tos_url = null;
$('#ssl_provision_result').text("");
may_reenable_provision_button = true;
@ -221,60 +193,40 @@ function provision_tls_cert() {
for (var i = 0; i < status.requests.length; i++) {
var r = status.requests[i];
if (r.result == "skipped") {
// not interested --- this domain wasn't in the table
// to begin with
continue;
}
// create an HTML block to display the results of this request
var n = $("<div><h4/><p/></div>");
$('#ssl_provision_result').append(n);
// plain log line
if (typeof r === "string") {
n.find("p").text(r);
continue;
}
// show a header only to disambiguate request blocks
if (status.requests.length > 0)
n.find("h4").text(r.domains.join(", "));
if (r.result == "agree-to-tos") {
// user needs to agree to Let's Encrypt's TOS
agree_to_tos_url_prompt = r.url;
$('#ssl_provision_p .btn').attr('disabled', '1');
n.find("p").html("Please open and review <a href='" + r.url + "' target='_blank'>Let's Encrypt's terms of service agreement</a>. You must agree to their terms for a certificate to be automatically provisioned from them.");
n.append($('<button onclick="agree_to_tos_url = agree_to_tos_url_prompt; return provision_tls_cert();" class="btn btn-success" style="margin-left: 2em">Agree &amp; Try Again</button>'));
// don't re-enable the Provision button -- user must use the Agree button
may_reenable_provision_button = false;
} else if (r.result == "error") {
if (r.result == "error") {
n.find("p").addClass("text-danger").text(r.message);
} else if (r.result == "wait") {
// Show a button that counts down to zero, at which point it becomes enabled.
n.find("p").text("A certificate is now in the process of being provisioned, but it takes some time. Please wait until the Finish button is enabled, and then click it to acquire the certificate.");
var b = $('<button onclick="return provision_tls_cert();" class="btn btn-success" style="margin-left: 2em">Finish</button>');
b.attr("disabled", "1");
var now = new Date();
n.append(b);
function ready_to_finish() {
var remaining = Math.round(r.seconds - (new Date() - now)/1000);
if (remaining > 0) {
setTimeout(ready_to_finish, 1000);
b.text("Finish (" + remaining + "...)")
} else {
b.text("Finish (ready)")
b.removeAttr("disabled");
}
}
ready_to_finish();
// don't re-enable the Provision button -- user must use the Retry button when it becomes enabled
may_reenable_provision_button = false;
} else if (r.result == "installed") {
n.find("p").addClass("text-success").text("The TLS certificate was provisioned and installed.");
setTimeout("show_tls(true)", 1); // update main table of certificate statuses, call with arg keep_provisioning_shown true so that we don't clear what we just outputted
}
// display the detailed log info in case of problems
var trace = $("<div class='small text-muted' style='margin-top: 1.5em'>Log:</div>");
n.append(trace);
for (var j = 0; j < r.log.length; j++)
trace.append($("<div/>").text(r.log[j]));
}
if (may_reenable_provision_button)

View File

@ -17,22 +17,22 @@
<tr><th>Calendar</td> <td><a href="https://{{hostname}}/cloud/calendar">https://{{hostname}}/cloud/calendar</a></td></tr>
</table>
<p>Log in settings are the same as with <a href="#mail-guide" onclick="return show_panel(this);">mail</a>: your
<p>Log in settings are the same as with <a href="#mail-guide">mail</a>: your
complete email address and your mail password.</p>
</div>
<div class="col-sm-6">
<h4>On your mobile device</h4>
<p>If you set up your <a href="#mail-guide" onclick="return show_panel(this);">mail</a> using Exchange/ActiveSync,
<p>If you set up your <a href="#mail-guide">mail</a> using Exchange/ActiveSync,
your contacts and calendar may already appear on your device.</p>
<p>Otherwise, here are some apps that can synchronize your contacts and calendar to your Android phone.</p>
<table class="table">
<thead><tr><th>For...</th> <th>Use...</th></tr></thead>
<tr><td>Contacts and Calendar</td> <td><a href="https://play.google.com/store/apps/details?id=at.bitfire.davdroid">DAVdroid</a> ($3.69; free <a href="https://f-droid.org/repository/browse/?fdfilter=dav&fdid=at.bitfire.davdroid">here</a>)</td></tr>
<tr><td>Only Contacts</td> <td><a href="https://play.google.com/store/apps/details?id=org.dmfs.carddav.sync">CardDAV-Sync free beta</a> (free)</td></tr>
<tr><td>Only Calendar</td> <td><a href="https://play.google.com/store/apps/details?id=org.dmfs.caldav.lib">CalDAV-Sync</a> ($2.89)</td></tr>
<tr><td>Contacts and Calendar</td> <td><a href="https://play.google.com/store/apps/details?id=at.bitfire.davdroid">DAVx⁵</a> ($5.99; free <a href="https://f-droid.org/packages/at.bitfire.davdroid/">here</a>)</td></tr>
<tr><td>Only Contacts</td> <td><a href="https://play.google.com/store/apps/details?id=org.dmfs.carddav.sync">CardDAV-Sync free</a> (free)</td></tr>
<tr><td>Only Calendar</td> <td><a href="https://play.google.com/store/apps/details?id=org.dmfs.caldav.lib">CalDAV-Sync</a> ($2.99)</td></tr>
</table>
<p>Use the following settings:</p>

View File

@ -5,7 +5,7 @@
<h2>Backup Status</h2>
<p>The box makes an incremental backup each night. By default the backup is stored on the machine itself, but you can also have it stored on Amazon S3.</p>
<p>The box makes an incremental backup each night. You can store the backup on any Amazon Web Services S3-compatible service, or other options.</p>
<h3>Configuration</h3>
@ -16,36 +16,101 @@
<select class="form-control" rows="1" id="backup-target-type" onchange="toggle_form()">
<option value="off">Nowhere (Disable Backups)</option>
<option value="local">{{hostname}}</option>
<option value="s3">Amazon S3</option>
<option value="rsync">rsync</option>
<option value="s3">S3 (Amazon or compatible) </option>
<option value="b2">Backblaze B2</option>
</select>
</div>
</div>
<!-- LOCAL BACKUP -->
<div class="form-group backup-target-local">
<div class="col-sm-10 col-sm-offset-2">
<p>Backups are stored on this machine&rsquo;s own hard disk. You are responsible for periodically using SFTP (FTP over SSH) to copy the backup files from <tt id="backup-location"></tt> to a safe location. These files are encrypted, so they are safe to store anywhere.</p>
<p>Backups are stored on this machine&rsquo;s own hard disk. You are responsible for periodically using SFTP (FTP over SSH) to copy the backup files from <tt class="backup-location"></tt> to a safe location. These files are encrypted, so they are safe to store anywhere.</p>
<p>Separately copy the encryption password from <tt class="backup-encpassword-file"></tt> to a safe and secure location. You will need this file to decrypt backup files.</p>
</div>
</div>
<!-- RSYNC BACKUP -->
<div class="form-group backup-target-rsync">
<div class="col-sm-10 col-sm-offset-2">
<p>Backups synced to a remote machine using rsync over SSH, with local
copies in <tt class="backup-location"></tt>. These files are encrypted, so
they are safe to store anywhere.</p> <p>Separately copy the encryption
password from <tt class="backup-encpassword-file"></tt> to a safe and
secure location. You will need this file to decrypt backup files.</p>
</div>
</div>
<div class="form-group backup-target-rsync">
<label for="backup-target-rsync-host" class="col-sm-2 control-label">Hostname</label>
<div class="col-sm-8">
<input type="text" placeholder="hostname.local" class="form-control" rows="1" id="backup-target-rsync-host">
<div class="small" style="margin-top: 2px">
The hostname at your rsync provider, e.g. <tt>da2327.rsync.net</tt>. Optionally includes a colon
and the provider's non-standard ssh port number, e.g. <tt>u215843.your-storagebox.de:23</tt>.
</div>
</div>
</div>
<div class="form-group backup-target-rsync">
<label for="backup-target-rsync-path" class="col-sm-2 control-label">Path</label>
<div class="col-sm-8">
<input type="text" placeholder="/backups/{{hostname}}" class="form-control" rows="1" id="backup-target-rsync-path">
</div>
</div>
<div class="form-group backup-target-rsync">
<label for="backup-target-rsync-user" class="col-sm-2 control-label">Username</label>
<div class="col-sm-8">
<input type="text" class="form-control" rows="1" id="backup-target-rsync-user">
</div>
</div>
<div class="form-group backup-target-rsync">
<label for="ssh-pub-key" class="col-sm-2 control-label">Public SSH Key</label>
<div class="col-sm-8">
<input type="text" class="form-control" rows="1" id="ssh-pub-key" readonly>
<div class="small" style="margin-top: 2px">
Copy the Public SSH Key above, and paste it into the <tt>~/.ssh/authorized_keys</tt>
file of the target user on the backup server specified above. That way you'll enable secure,
passwordless authentication from your Mail-in-a-Box server to your backup server.
</div>
</div>
<div id="copy_pub_key_div" class="col-sm">
<button type="button" class="btn btn-small" onclick="copy_pub_key_to_clipboard()">Copy</button>
</div>
</div>
<!-- S3 BACKUP -->
<div class="form-group backup-target-s3">
<div class="col-sm-10 col-sm-offset-2">
<p>Backups are stored in an Amazon Web Services S3 bucket. You must have an AWS account already.</p>
<p>You MUST manually copy the encryption password from <tt class="backup-encpassword-file"></tt> to a safe and secure location. You will need this file to decrypt backup files. It is NOT stored in your Amazon S3 bucket.</p>
<p>Backups are stored in an S3-compatible bucket. You must have an AWS or other S3 service account already.</p>
<p>You MUST manually copy the encryption password from <tt class="backup-encpassword-file"></tt> to a safe and secure location. You will need this file to decrypt backup files. It is <b>NOT</b> stored in your S3 bucket.</p>
</div>
</div>
<div class="form-group backup-target-s3">
<label for="backup-target-s3-host" class="col-sm-2 control-label">S3 Region</label>
<label for="backup-target-s3-host-select" class="col-sm-2 control-label">S3 Region</label>
<div class="col-sm-8">
<select class="form-control" rows="1" id="backup-target-s3-host">
<select class="form-control" rows="1" id="backup-target-s3-host-select">
{% for name, host in backup_s3_hosts %}
<option value="{{host}}">{{name}}</option>
{% endfor %}
<option value="other">Other (non AWS)</option>
</select>
</div>
</div>
<div class="form-group backup-target-s3">
<label for="backup-target-s3-path" class="col-sm-2 control-label">S3 Path</label>
<label for="backup-target-s3-host" class="col-sm-2 control-label">S3 Host / Endpoint</label>
<div class="col-sm-8">
<input type="text" placeholder="your-bucket-name/backup-directory" class="form-control" rows="1" id="backup-target-s3-path">
<input type="text" placeholder="https://s3.backuphost.com" class="form-control" rows="1" id="backup-target-s3-host">
</div>
</div>
<div class="form-group backup-target-s3">
<label for="backup-target-s3-region-name" class="col-sm-2 control-label">S3 Region Name <span style="font-weight: normal">(if required)</span></label>
<div class="col-sm-8">
<input type="text" placeholder="region.name" class="form-control" rows="1" id="backup-target-s3-region-name">
</div>
</div>
<div class="form-group backup-target-s3">
<label for="backup-target-s3-path" class="col-sm-2 control-label">S3 Bucket &amp; Path</label>
<div class="col-sm-8">
<input type="text" placeholder="bucket-name/backup-directory" class="form-control" rows="1" id="backup-target-s3-path">
</div>
</div>
<div class="form-group backup-target-s3">
@ -60,11 +125,37 @@
<input type="text" class="form-control" rows="1" id="backup-target-pass">
</div>
</div>
<div class="form-group backup-target-local backup-target-s3">
<label for="min-age" class="col-sm-2 control-label">Days:</label>
<!-- Backblaze -->
<div class="form-group backup-target-b2">
<div class="col-sm-10 col-sm-offset-2">
<p>Backups are stored in a <a href="https://www.backblaze.com/" target="_blank" rel="noreferrer">Backblaze</a> B2 bucket. You must have a Backblaze account already.</p>
<p>You MUST manually copy the encryption password from <tt class="backup-encpassword-file"></tt> to a safe and secure location. You will need this file to decrypt backup files. It is NOT stored in your Backblaze B2 bucket.</p>
</div>
</div>
<div class="form-group backup-target-b2">
<label for="backup-target-b2-user" class="col-sm-2 control-label">B2 Application KeyID</label>
<div class="col-sm-8">
<input type="text" class="form-control" rows="1" id="backup-target-b2-user">
</div>
</div>
<div class="form-group backup-target-b2">
<label for="backup-target-b2-pass" class="col-sm-2 control-label">B2 Application Key</label>
<div class="col-sm-8">
<input type="text" class="form-control" rows="1" id="backup-target-b2-pass">
</div>
</div>
<div class="form-group backup-target-b2">
<label for="backup-target-b2-bucket" class="col-sm-2 control-label">B2 Bucket</label>
<div class="col-sm-8">
<input type="text" class="form-control" rows="1" id="backup-target-b2-bucket">
</div>
</div>
<!-- Common -->
<div class="form-group backup-target-local backup-target-rsync backup-target-s3 backup-target-b2">
<label for="min-age" class="col-sm-2 control-label">Retention Days:</label>
<div class="col-sm-8">
<input type="number" class="form-control" rows="1" id="min-age">
<div class="small" style="margin-top: 2px">This is the <i>minimum</i> number of days backup data is kept for. The box makes an incremental backup, so backup data is often kept much longer. An incremental backup file that is less than this number of days old requires that all previous increments back to the most recent full backup, plus that full backup, remain available.</div>
<div class="small" style="margin-top: 2px">This is the minimum time backup data is kept for. The box makes an incremental backup most nights, which requires that previous backups back to the most recent full backup be preserved, so backup data is often kept much longer than this setting. Full backups are made periodically when the incremental backup data size exceeds a limit.</div>
</div>
</div>
<div class="form-group">
@ -92,8 +183,10 @@
function toggle_form() {
var target_type = $("#backup-target-type").val();
$(".backup-target-local, .backup-target-s3").hide();
$(".backup-target-local, .backup-target-rsync, .backup-target-s3, .backup-target-b2").hide();
$(".backup-target-" + target_type).show();
init_inputs(target_type);
}
function nice_size(bytes) {
@ -114,7 +207,7 @@ function nice_size(bytes) {
function show_system_backup() {
show_custom_backup()
$('#backup-status tbody').html("<tr><td colspan='2' class='text-muted'>Loading...</td></tr>")
api(
"/system/backup/status",
@ -155,33 +248,52 @@ function show_system_backup() {
total_disk_size += b.size;
}
total_disk_size += r.unmatched_file_size;
$('#backup-total-size').text(nice_size(total_disk_size));
})
}
function show_custom_backup() {
$(".backup-target-local, .backup-target-s3").hide();
$(".backup-target-local, .backup-target-rsync, .backup-target-s3, .backup-target-b2").hide();
api(
"/system/backup/config",
"GET",
{ },
function(r) {
$("#backup-target-user").val(r.target_user);
$("#backup-target-pass").val(r.target_pass);
$("#min-age").val(r.min_age_in_days);
$(".backup-location").text(r.file_target_directory);
$(".backup-encpassword-file").text(r.enc_pw_file);
$("#ssh-pub-key").val(r.ssh_pub_key);
if (r.target == "file://" + r.file_target_directory) {
$("#backup-target-type").val("local");
} else if (r.target == "off") {
$("#backup-target-type").val("off");
} else if (r.target.substring(0, 8) == "rsync://") {
const spec = url_split(r.target);
$("#backup-target-type").val(spec.scheme);
$("#backup-target-rsync-user").val(spec.user);
$("#backup-target-rsync-host").val(spec.host);
$("#backup-target-rsync-path").val(spec.path);
} else if (r.target.substring(0, 5) == "s3://") {
const spec = url_split(r.target);
$("#backup-target-type").val("s3");
var hostpath = r.target.substring(5).split('/');
var host = hostpath.shift();
$("#backup-target-s3-host").val(host);
$("#backup-target-s3-path").val(hostpath.join('/'));
$("#backup-target-s3-host-select").val(spec.host);
$("#backup-target-s3-host").val(spec.host);
$("#backup-target-s3-region-name").val(spec.user); // stuffing the region name in the username
$("#backup-target-s3-path").val(spec.path);
} else if (r.target.substring(0, 5) == "b2://") {
$("#backup-target-type").val("b2");
var targetPath = r.target.substring(5);
var b2_application_keyid = targetPath.split(':')[0];
var b2_applicationkey = targetPath.split(':')[1].split('@')[0];
var b2_bucket = targetPath.split('@')[1];
$("#backup-target-b2-user").val(b2_application_keyid);
$("#backup-target-b2-pass").val(decodeURIComponent(b2_applicationkey));
$("#backup-target-b2-bucket").val(b2_bucket);
}
$("#backup-target-user").val(r.target_user);
$("#backup-target-pass").val(r.target_pass);
$("#min-age").val(r.min_age_in_days);
$('#backup-location').text(r.file_target_directory);
$('.backup-encpassword-file').text(r.enc_pw_file);
toggle_form()
})
}
@ -190,12 +302,26 @@ function set_custom_backup() {
var target_type = $("#backup-target-type").val();
var target_user = $("#backup-target-user").val();
var target_pass = $("#backup-target-pass").val();
var target;
if (target_type == "local" || target_type == "off")
target = target_type;
else if (target_type == "s3")
target = "s3://" + $("#backup-target-s3-host").val() + "/" + $("#backup-target-s3-path").val();
target = "s3://"
+ ($("#backup-target-s3-region-name").val() ? ($("#backup-target-s3-region-name").val() + "@") : "")
+ $("#backup-target-s3-host").val()
+ "/" + $("#backup-target-s3-path").val();
else if (target_type == "rsync") {
target = "rsync://" + $("#backup-target-rsync-user").val() + "@" + $("#backup-target-rsync-host").val()
+ "/" + $("#backup-target-rsync-path").val();
target_user = '';
} else if (target_type == "b2") {
target = 'b2://' + $('#backup-target-b2-user').val() + ':' + encodeURIComponent($('#backup-target-b2-pass').val())
+ '@' + $('#backup-target-b2-bucket').val()
target_user = '';
target_pass = '';
}
var min_age = $("#min-age").val();
api(
@ -217,4 +343,58 @@ function set_custom_backup() {
});
return false;
}
function init_inputs(target_type) {
function set_host(host) {
if(host !== 'other') {
$("#backup-target-s3-host").val(host);
} else {
$("#backup-target-s3-host").val('');
}
}
if (target_type == "s3") {
$('#backup-target-s3-host-select').off('change').on('change', function() {
set_host($('#backup-target-s3-host-select').val());
});
set_host($('#backup-target-s3-host-select').val());
}
}
// Return a two-element array of the substring preceding and the substring following
// the first occurrence of separator in string. Return [undefined, string] if the
// separator does not appear in string.
const split1_rest = (string, separator) => {
const index = string.indexOf(separator);
return (index >= 0) ? [string.substring(0, index), string.substring(index + separator.length)] : [undefined, string];
};
// Note: The built-in JS URL class does not work in some security-conscious
// settings, e.g. Brave browser, so we roll our own that handles only what we need.
//
// Use greedy separator parsing to get parts of a MIAB backup target url.
// Note: path will not include a leading forward slash '/'
const url_split = url => {
const [ scheme, scheme_rest ] = split1_rest(url, '://');
const [ user, user_rest ] = split1_rest(scheme_rest, '@');
const [ host, path ] = split1_rest(user_rest, '/');
return {
scheme,
user,
host,
path,
}
};
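For illustration, here is a minimal Python sketch of the same greedy parsing that split1_rest and url_split perform above, applied to hypothetical backup targets of the kinds the panel assembles. The hostnames, bucket names, keys and paths are made up; note that the b2:// form is parsed in show_custom_backup by splitting on ':' and '@' rather than through url_split.

# Minimal sketch mirroring the JS helpers above; all example values are hypothetical.
def split1_rest(s, sep):
    # Return (part before sep, part after sep); (None, s) if sep is absent.
    i = s.find(sep)
    return (None, s) if i < 0 else (s[:i], s[i + len(sep):])

def url_split(url):
    scheme, rest = split1_rest(url, "://")
    user, rest = split1_rest(rest, "@")
    host, path = split1_rest(rest, "/")
    return {"scheme": scheme, "user": user, "host": host, "path": path}

# rsync target: rsync://USER@HOST[:PORT]/PATH
print(url_split("rsync://backupuser@hostname.local/backups/box.example.com"))

# s3 target: s3://[REGION@]ENDPOINT/BUCKET/PREFIX -- the region rides in the
# userinfo slot, which is why show_custom_backup reads it from spec.user
print(url_split("s3://us-east-1@s3.amazonaws.com/bucket-name/backup-directory"))

# b2 target: b2://KEYID:URLENCODED_APPKEY@BUCKET (split on ':' and '@' instead;
# the panel URL-decodes the application key after splitting)
keyid, rest = "b2://keyid:app%2Fkey@bucket-name"[len("b2://"):].split(":", 1)
appkey, bucket = rest.split("@", 1)
print(keyid, appkey, bucket)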
// Hide Copy button if not in a modern clipboard-supporting environment.
// Using document API because jQuery is not necessarily available in this script scope.
if (!(navigator && navigator.clipboard && navigator.clipboard.writeText)) {
document.getElementById('copy_pub_key_div').hidden = true;
}
function copy_pub_key_to_clipboard() {
const ssh_pub_key = $("#ssh-pub-key").val();
navigator.clipboard.writeText(ssh_pub_key);
}
</script>

View File

@ -10,13 +10,13 @@
border-top: none;
padding-top: 0;
}
#system-checks .status-error td {
#system-checks .status-error td, .summary-error {
color: #733;
}
#system-checks .status-warning td {
#system-checks .status-warning td, .summary-warning {
color: #770;
}
#system-checks .status-ok td {
#system-checks .status-ok td, .summary-ok {
color: #040;
}
#system-checks div.extra {
@ -52,6 +52,9 @@
</div> <!-- /col -->
<div class="col-md-pull-3 col-md-8">
<div id="system-checks-summary">
</div>
<table id="system-checks" class="table" style="max-width: 60em">
<thead>
</thead>
@ -64,6 +67,9 @@
<script>
function show_system_status() {
const summary = $('#system-checks-summary');
summary.html("");
$('#system-checks tbody').html("<tr><td colspan='2' class='text-muted'>Loading...</td></tr>")
api(
@ -93,6 +99,12 @@ function show_system_status() {
{ },
function(r) {
$('#system-checks tbody').html("");
const ok_symbol = "✓";
const error_symbol = "✖";
const warning_symbol = "?";
let count_by_status = { ok: 0, error: 0, warning: 0 };
for (var i = 0; i < r.length; i++) {
var n = $("<tr><td class='status'/><td class='message'><p style='margin: 0'/><div class='extra'/><a class='showhide' href='#'/></tr>");
if (i == 0) n.addClass('first')
@ -100,9 +112,12 @@ function show_system_status() {
n.addClass(r[i].type)
else
n.addClass("status-" + r[i].type)
if (r[i].type == "ok") n.find('td.status').text("✓")
if (r[i].type == "error") n.find('td.status').text("✖")
if (r[i].type == "warning") n.find('td.status').text("?")
if (r[i].type == "ok") n.find('td.status').text(ok_symbol);
if (r[i].type == "error") n.find('td.status').text(error_symbol);
if (r[i].type == "warning") n.find('td.status').text(warning_symbol);
count_by_status[r[i].type]++;
n.find('td.message p').text(r[i].text)
$('#system-checks tbody').append(n);
@ -122,8 +137,17 @@ function show_system_status() {
n.find('> td.message > div').append(m);
}
}
})
// Summary counts
summary.html("Summary: ");
if (count_by_status['error'] + count_by_status['warning'] == 0) {
summary.append($('<span class="summary-ok"/>').text(`All ${count_by_status['ok']} ${ok_symbol} OK`));
} else {
summary.append($('<span class="summary-ok"/>').text(`${count_by_status['ok']} ${ok_symbol} OK, `));
summary.append($('<span class="summary-error"/>').text(`${count_by_status['error']} ${error_symbol} Error, `));
summary.append($('<span class="summary-warning"/>').text(`${count_by_status['warning']} ${warning_symbol} Warning`));
}
})
}
var current_privacy_setting = null;

View File

@ -1,7 +1,6 @@
<h2>Users</h2>
<style>
#user_table h4 { margin: 1em 0 0 0; }
#user_table tr.account_inactive td.address { color: #888; text-decoration: line-through; }
#user_table .actions { margin-top: .33em; font-size: 95%; }
#user_table .account_inactive .if_active { display: none; }
@ -31,10 +30,10 @@
<button type="submit" class="btn btn-primary">Add User</button>
</form>
<ul style="margin-top: 1em; padding-left: 1.5em; font-size: 90%;">
<li>Passwords must be at least four characters and may not contain spaces. For best results, <a href="#" onclick="return generate_random_password()">generate a random password</a>.</li>
<li>Use <a href="#" onclick="return show_panel('aliases')">aliases</a> to create email addresses that forward to existing accounts.</li>
<li>Passwords must be at least eight characters consisting of English letters and numbers only. For best results, <a href="#" onclick="return generate_random_password()">generate a random password</a>.</li>
<li>Use <a href="#aliases">aliases</a> to create email addresses that forward to existing accounts.</li>
<li>Administrators get access to this control panel.</li>
<li>User accounts cannot contain any international (non-ASCII) characters, but <a href="#" onclick="return show_panel('aliases');">aliases</a> can.</li>
<li>User accounts cannot contain any international (non-ASCII) characters, but <a href="#aliases">aliases</a> can.</li>
</ul>
<h3>Existing mail users</h3>
@ -43,7 +42,6 @@
<tr>
<th width="50%">Email Address</th>
<th>Actions</th>
<th>Mailbox Size</th>
</tr>
</thead>
<tbody>
@ -73,8 +71,6 @@
archive account
</a>
</td>
<td class='mailboxsize'>
</td>
</tr>
<tr id="user-extra-template" class="if_inactive">
<td colspan="3" style="border: 0; padding-top: 0">
@ -102,7 +98,7 @@
<thead><th>Verb</th> <th>Action</th><th></th></thead>
<tr><td>GET</td><td><i>(none)</i></td> <td>Returns a list of existing mail users. Adding <code>?format=json</code> to the URL will give JSON-encoded results.</td></tr>
<tr><td>POST</td><td>/add</td> <td>Adds a new mail user. Required POST-body parameters are <code>email</code> and <code>password</code>.</td></tr>
<tr><td>POST</td><td>/remove</td> <td>Removes a mail user. Required POST-by parameter is <code>email</code>.</td></tr>
<tr><td>POST</td><td>/remove</td> <td>Removes a mail user. Required POST-body parameter is <code>email</code>.</td></tr>
<tr><td>POST</td><td>/privileges/add</td> <td>Used to make a mail user an admin. Required POST-body parameters are <code>email</code> and <code>privilege=admin</code>.</td></tr>
<tr><td>POST</td><td>/privileges/remove</td> <td>Used to remove the admin privilege from a mail user. Required POST-body parameter is <code>email</code>.</td></tr>
</table>
@ -137,8 +133,8 @@ function show_users() {
function(r) {
$('#user_table tbody').html("");
for (var i = 0; i < r.length; i++) {
var hdr = $("<tr><td colspan='3'><h4/></td></tr>");
hdr.find('h4').text(r[i].domain);
var hdr = $("<tr><th colspan='2' style='background-color: #EEE'></th></tr>");
hdr.find('th').text(r[i].domain);
$('#user_table tbody').append(hdr);
for (var k = 0; k < r[i].users.length; k++) {
@ -156,7 +152,6 @@ function show_users() {
n.attr('data-email', user.email);
n.find('.address').text(user.email)
n.find('.mailboxsize').text(nice_size(user.mailbox_size))
n2.find('.restore_info tt').text(user.mailbox);
if (user.status == 'inactive') continue;
@ -208,12 +203,12 @@ function users_set_password(elem) {
var email = $(elem).parents('tr').attr('data-email');
var yourpw = "";
if (api_credentials != null && email == api_credentials[0])
if (api_credentials != null && email == api_credentials.username)
yourpw = "<p class='text-danger'>If you change your own password, you will be logged out of this control panel and will need to log in again.</p>";
show_modal_confirm(
"Set Password",
$("<p>Set a new password for <b>" + email + "</b>?</p> <p><label for='users_set_password_pw' style='display: block; font-weight: normal'>New Password:</label><input type='password' id='users_set_password_pw'></p><p><small>Passwords must be at least four characters and may not contain spaces.</small>" + yourpw + "</p>"),
$("<p>Set a new password for <b>" + email + "</b>?</p> <p><label for='users_set_password_pw' style='display: block; font-weight: normal'>New Password:</label><input type='password' id='users_set_password_pw'></p><p><small>Passwords must be at least eight characters and may not contain spaces.</small>" + yourpw + "</p>"),
"Set Password",
function() {
api(
@ -237,7 +232,7 @@ function users_remove(elem) {
var email = $(elem).parents('tr').attr('data-email');
// can't remove yourself
if (api_credentials != null && email == api_credentials[0]) {
if (api_credentials != null && email == api_credentials.username) {
show_modal_error("Archive User", "You cannot archive your own account.");
return;
}
@ -269,7 +264,7 @@ function mod_priv(elem, add_remove) {
var priv = $(elem).parents('td').find('.name').text();
// can't remove your own admin access
if (priv == "admin" && add_remove == "remove" && api_credentials != null && email == api_credentials[0]) {
if (priv == "admin" && add_remove == "remove" && api_credentials != null && email == api_credentials.username) {
show_modal_error("Modify Privileges", "You cannot remove the admin privilege from yourself.");
return;
}
@ -296,7 +291,7 @@ function mod_priv(elem, add_remove) {
function generate_random_password() {
var pw = "";
var charset = "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz23456789"; // confusable characters skipped
for (var i = 0; i < 10; i++)
for (var i = 0; i < 12; i++)
pw += charset.charAt(Math.floor(Math.random() * charset.length));
show_modal_error("Random Password", "<p>Here, try this:</p> <p><code style='font-size: 110%'>" + pw + "</code></pr");
return false; // cancel click

View File

@ -10,7 +10,7 @@
<p>You can replace the default website with your own HTML pages and other static files. This control panel won&rsquo;t help you design a website, but once you have <tt>.html</tt> files you can upload them following these instructions:</p>
<ol>
<li>Ensure that any domains you are publishing a website for have no problems on the <a href="#system_status" onclick="return show_panel(this);">Status Checks</a> page.</li>
<li>Ensure that any domains you are publishing a website for have no problems on the <a href="#system_status">Status Checks</a> page.</li>
<li>On your personal computer, install an SSH file transfer program such as <a href="https://filezilla-project.org/">FileZilla</a> or <a href="http://linuxcommand.org/man_pages/scp1.html">scp</a>.</li>
@ -32,7 +32,7 @@
</tbody>
</table>
<p>To add a domain to this table, create a dummy <a href="#users" onclick="return show_panel(this);">mail user</a> or <a href="#aliases" onclick="return show_panel(this);">alias</a> on the domain first and see the <a href="https://mailinabox.email/guide.html#domain-name-configuration">setup guide</a> for adding nameserver records to the new domain at your registrar (but <i>not</i> glue records).</p>
<p>To add a domain to this table, create a dummy <a href="#users">mail user</a> or <a href="#aliases">alias</a> on the domain first and see the <a href="https://mailinabox.email/guide.html#domain-name-configuration">setup guide</a> for adding nameserver records to the new domain at your registrar (but <i>not</i> glue records).</p>
</ol>

View File

@ -0,0 +1,16 @@
<style>
.title {
margin: 1em;
text-align: center;
}
.subtitle {
margin: 2em;
text-align: center;
}
</style>
<h1 class="title">{{hostname}}</h1>
<p class="subtitle">Welcome to your Mail-in-a-Box control panel.</p>

View File

@ -14,28 +14,31 @@ def load_env_vars_from_file(fn):
# Load settings from a KEY=VALUE file.
import collections
env = collections.OrderedDict()
for line in open(fn): env.setdefault(*line.strip().split("=", 1))
with open(fn, encoding="utf-8") as f:
for line in f:
env.setdefault(*line.strip().split("=", 1))
return env
def save_environment(env):
with open("/etc/mailinabox.conf", "w") as f:
with open("/etc/mailinabox.conf", "w", encoding="utf-8") as f:
for k, v in env.items():
f.write("%s=%s\n" % (k, v))
f.write(f"{k}={v}\n")
# THE SETTINGS FILE AT STORAGE_ROOT/settings.yaml.
def write_settings(config, env):
import rtyaml
fn = os.path.join(env['STORAGE_ROOT'], 'settings.yaml')
with open(fn, "w") as f:
with open(fn, "w", encoding="utf-8") as f:
f.write(rtyaml.dump(config))
def load_settings(env):
import rtyaml
fn = os.path.join(env['STORAGE_ROOT'], 'settings.yaml')
try:
config = rtyaml.load(open(fn, "r"))
if not isinstance(config, dict): raise ValueError() # caught below
with open(fn, encoding="utf-8") as f:
config = rtyaml.load(f)
if not isinstance(config, dict): raise ValueError # caught below
return config
except:
return { }
@ -56,7 +59,7 @@ def sort_domains(domain_names, env):
# from shortest to longest since zones are always shorter than their
# subdomains.
zones = { }
for domain in sorted(domain_names, key=lambda d : len(d)):
for domain in sorted(domain_names, key=len):
for z in zones.values():
if domain.endswith("." + z):
# We found a parent domain already in the list.
@ -78,7 +81,7 @@ def sort_domains(domain_names, env):
))
# Now sort the domain names that fall within each zone.
domain_names = sorted(domain_names,
return sorted(domain_names,
key = lambda d : (
# First by zone.
zone_domains.index(zones[d]),
@ -92,95 +95,26 @@ def sort_domains(domain_names, env):
# Then in right-to-left lexicographic order of the .-separated parts of the name.
list(reversed(d.split("."))),
))
return domain_names
def sort_email_addresses(email_addresses, env):
email_addresses = set(email_addresses)
domains = set(email.split("@", 1)[1] for email in email_addresses if "@" in email)
domains = {email.split("@", 1)[1] for email in email_addresses if "@" in email}
ret = []
for domain in sort_domains(domains, env):
domain_emails = set(email for email in email_addresses if email.endswith("@" + domain))
domain_emails = {email for email in email_addresses if email.endswith("@" + domain)}
ret.extend(sorted(domain_emails))
email_addresses -= domain_emails
ret.extend(sorted(email_addresses)) # whatever is left
return ret
def exclusive_process(name):
# Ensure that a process named `name` does not execute multiple
# times concurrently.
import os, sys, atexit
pidfile = '/var/run/mailinabox-%s.pid' % name
mypid = os.getpid()
# Attempt to get a lock on ourself so that the concurrency check
# itself is not executed in parallel.
with open(__file__, 'r+') as flock:
# Try to get a lock. This blocks until a lock is acquired. The
# lock is held until the flock file is closed at the end of the
# with block.
os.lockf(flock.fileno(), os.F_LOCK, 0)
# While we have a lock, look at the pid file. First attempt
# to write our pid to a pidfile if no file already exists there.
try:
with open(pidfile, 'x') as f:
# Successfully opened a new file. Since the file is new
# there is no concurrent process. Write our pid.
f.write(str(mypid))
atexit.register(clear_my_pid, pidfile)
return
except FileExistsError:
# The pid file already exists, but it may contain a stale
# pid of a terminated process.
with open(pidfile, 'r+') as f:
# Read the pid in the file.
existing_pid = None
try:
existing_pid = int(f.read().strip())
except ValueError:
pass # No valid integer in the file.
# Check if the pid in it is valid.
if existing_pid:
if is_pid_valid(existing_pid):
print("Another %s is already running (pid %d)." % (name, existing_pid), file=sys.stderr)
sys.exit(1)
# Write our pid.
f.seek(0)
f.write(str(mypid))
f.truncate()
atexit.register(clear_my_pid, pidfile)
def clear_my_pid(pidfile):
import os
os.unlink(pidfile)
def is_pid_valid(pid):
"""Checks whether a pid is a valid process ID of a currently running process."""
# adapted from http://stackoverflow.com/questions/568271/how-to-check-if-there-exists-a-process-with-a-given-pid
import os, errno
if pid <= 0: raise ValueError('Invalid PID.')
try:
os.kill(pid, 0)
except OSError as err:
if err.errno == errno.ESRCH: # No such process
return False
elif err.errno == errno.EPERM: # Not permitted to send signal
return True
else: # EINVAL
raise
else:
return True
def shell(method, cmd_args, env={}, capture_stderr=False, return_bytes=False, trap=False, input=None):
def shell(method, cmd_args, env=None, capture_stderr=False, return_bytes=False, trap=False, input=None):
# A safe way to execute processes.
# Some processes like apt-get require being given a sane PATH.
import subprocess
if env is None:
env = {}
env.update({ "PATH": "/sbin:/bin:/usr/sbin:/usr/bin" })
kwargs = {
'env': env,
@ -216,7 +150,7 @@ def du(path):
# soft and hard links.
total_size = 0
seen = set()
for dirpath, dirnames, filenames in os.walk(path):
for dirpath, _dirnames, filenames in os.walk(path):
for f in filenames:
fp = os.path.join(dirpath, f)
try:
@ -245,13 +179,33 @@ def wait_for_service(port, public, env, timeout):
return False
time.sleep(min(timeout/4, 1))
def fix_boto():
# Google Compute Engine instances install some Python-2-only boto plugins that
# conflict with boto running under Python 3. Disable boto's default configuration
# file prior to importing boto so that GCE's plugin is not loaded:
import os
os.environ["BOTO_CONFIG"] = "/etc/boto3.cfg"
def get_ssh_port():
port_value = get_ssh_config_value("port")
if port_value:
return int(port_value)
return None
def get_ssh_config_value(parameter_name):
# Returns ssh configuration value for the provided parameter
try:
output = shell('check_output', ['sshd', '-T'])
except FileNotFoundError:
# sshd is not installed. That's ok.
return None
except subprocess.CalledProcessError:
# error while calling shell command
return None
for line in output.split("\n"):
if " " not in line: continue # there's a blank line at the end
key, values = line.split(" ", 1)
if key == parameter_name:
return values # space-delimited if there are multiple values
# Did not find the parameter!
return None
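To make the parsing concrete, here is a minimal sketch fed with a few hand-written lines in the lowercase "keyword value" format that sshd -T prints; the values are hypothetical, not taken from a real server.

sample_output = "port 22\npasswordauthentication no\nlistenaddress 0.0.0.0:22\n"

def get_value(parameter_name, output=sample_output):
    for line in output.split("\n"):
        if " " not in line: continue  # skips the trailing blank line
        key, values = line.split(" ", 1)
        if key == parameter_name:
            return values  # space-delimited if there are multiple values
    return None

assert get_value("port") == "22"        # get_ssh_port() then returns int("22") == 22
assert get_value("ciphers") is None     # parameter not present in the sample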
if __name__ == "__main__":
from web_update import get_web_domains

View File

@ -9,7 +9,7 @@ from dns_update import get_custom_dns_config, get_dns_zones
from ssl_certificates import get_ssl_certificates, get_domain_ssl_files, check_certificate
from utils import shell, safe_domain_name, sort_domains
def get_web_domains(env, include_www_redirects=True, exclude_dns_elsewhere=True):
def get_web_domains(env, include_www_redirects=True, include_auto=True, exclude_dns_elsewhere=True):
# What domains should we serve HTTP(S) for?
domains = set()
@ -18,12 +18,22 @@ def get_web_domains(env, include_www_redirects=True, exclude_dns_elsewhere=True)
# if the user wants to make one.
domains |= get_mail_domains(env)
if include_www_redirects:
if include_www_redirects and include_auto:
# Add 'www.' subdomains that we want to provide default redirects
# to the main domain for. We'll add 'www.' to any DNS zones, i.e.
# the topmost of each domain we serve.
domains |= set('www.' + zone for zone, zonefile in get_dns_zones(env))
domains |= {'www.' + zone for zone, zonefile in get_dns_zones(env)}
if include_auto:
# Add autoconfiguration domains for domains that have user accounts:
# 'autoconfig.' for Mozilla Thunderbird auto setup.
# 'autodiscover.' for ActiveSync autodiscovery (Z-Push).
domains |= {'autoconfig.' + maildomain for maildomain in get_mail_domains(env, users_only=True)}
domains |= {'autodiscover.' + maildomain for maildomain in get_mail_domains(env, users_only=True)}
# 'mta-sts.' for MTA-STS support for all domains that have email addresses.
domains |= {'mta-sts.' + maildomain for maildomain in get_mail_domains(env)}
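As a rough illustration of what this block contributes, suppose example.com and lists.example.com are hypothetical mail domains and only the first has user accounts (standing in for the results of get_mail_domains):

mail_domains_with_users = {"example.com"}                  # hypothetical
mail_domains = {"example.com", "lists.example.com"}        # hypothetical
domains = set()
domains |= {"autoconfig." + d for d in mail_domains_with_users}    # Thunderbird auto setup
domains |= {"autodiscover." + d for d in mail_domains_with_users}  # ActiveSync/Z-Push
domains |= {"mta-sts." + d for d in mail_domains}                  # MTA-STS policy hosts
assert "mta-sts.lists.example.com" in domains
assert "autoconfig.lists.example.com" not in domains  # no user accounts on that domain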
if exclude_dns_elsewhere:
# ...Unless the domain has an A/AAAA record that maps it to a different
# IP address than this box. Remove those domains from our list.
@ -35,15 +45,14 @@ def get_web_domains(env, include_www_redirects=True, exclude_dns_elsewhere=True)
domains.add(env['PRIMARY_HOSTNAME'])
# Sort the list so the nginx conf gets written in a stable order.
domains = sort_domains(domains, env)
return sort_domains(domains, env)
return domains
def get_domains_with_a_records(env):
domains = set()
dns = get_custom_dns_config(env)
for domain, rtype, value in dns:
if rtype == "CNAME" or (rtype in ("A", "AAAA") and value not in ("local", env['PUBLIC_IP'])):
if rtype == "CNAME" or (rtype in {"A", "AAAA"} and value not in {"local", env['PUBLIC_IP']}):
domains.add(domain)
return domains
@ -53,7 +62,8 @@ def get_web_domains_with_root_overrides(env):
root_overrides = { }
nginx_conf_custom_fn = os.path.join(env["STORAGE_ROOT"], "www/custom.yaml")
if os.path.exists(nginx_conf_custom_fn):
custom_settings = rtyaml.load(open(nginx_conf_custom_fn))
with open(nginx_conf_custom_fn, encoding='utf-8') as f:
custom_settings = rtyaml.load(f)
for domain, settings in custom_settings.items():
for type, value in [('redirect', settings.get('redirects', {}).get('/')),
('proxy', settings.get('proxies', {}).get('/'))]:
@ -65,13 +75,18 @@ def do_web_update(env):
# Pre-load what SSL certificates we will use for each domain.
ssl_certificates = get_ssl_certificates(env)
# Helper for reading config files and templates
def read_conf(conf_fn):
with open(os.path.join(os.path.dirname(__file__), "../conf", conf_fn), encoding='utf-8') as f:
return f.read()
# Build an nginx configuration file.
nginx_conf = open(os.path.join(os.path.dirname(__file__), "../conf/nginx-top.conf")).read()
nginx_conf = read_conf("nginx-top.conf")
# Load the templates.
template0 = open(os.path.join(os.path.dirname(__file__), "../conf/nginx.conf")).read()
template1 = open(os.path.join(os.path.dirname(__file__), "../conf/nginx-alldomains.conf")).read()
template2 = open(os.path.join(os.path.dirname(__file__), "../conf/nginx-primaryonly.conf")).read()
template0 = read_conf("nginx.conf")
template1 = read_conf("nginx-alldomains.conf")
template2 = read_conf("nginx-primaryonly.conf")
template3 = "\trewrite ^(.*) https://$REDIRECT_DOMAIN$1 permanent;\n"
# Add the PRIMARY_HOST configuration first so it becomes nginx's default server.
@ -97,12 +112,12 @@ def do_web_update(env):
# Did the file change? If not, don't bother writing & restarting nginx.
nginx_conf_fn = "/etc/nginx/conf.d/local.conf"
if os.path.exists(nginx_conf_fn):
with open(nginx_conf_fn) as f:
with open(nginx_conf_fn, encoding='utf-8') as f:
if f.read() == nginx_conf:
return ""
# Save the file.
with open(nginx_conf_fn, "w") as f:
with open(nginx_conf_fn, "w", encoding='utf-8') as f:
f.write(nginx_conf)
# Kick nginx. Since this might be called from the web admin
@ -131,36 +146,65 @@ def make_domain_config(domain, templates, ssl_certificates, env):
def hashfile(filepath):
import hashlib
sha1 = hashlib.sha1()
f = open(filepath, 'rb')
try:
with open(filepath, 'rb') as f:
sha1.update(f.read())
finally:
f.close()
return sha1.hexdigest()
nginx_conf_extra += "# ssl files sha1: %s / %s\n" % (hashfile(tls_cert["private-key"]), hashfile(tls_cert["certificate"]))
nginx_conf_extra += "\t# ssl files sha1: {} / {}\n".format(hashfile(tls_cert["private-key"]), hashfile(tls_cert["certificate"]))
# Add in any user customizations in YAML format.
hsts = "yes"
nginx_conf_custom_fn = os.path.join(env["STORAGE_ROOT"], "www/custom.yaml")
if os.path.exists(nginx_conf_custom_fn):
yaml = rtyaml.load(open(nginx_conf_custom_fn))
with open(nginx_conf_custom_fn, encoding='utf-8') as f:
yaml = rtyaml.load(f)
if domain in yaml:
yaml = yaml[domain]
# any proxy or redirect here?
for path, url in yaml.get("proxies", {}).items():
nginx_conf_extra += "\tlocation %s {\n\t\tproxy_pass %s;\n\t}\n" % (path, url)
# Parse some flags in the fragment of the URL.
pass_http_host_header = False
proxy_redirect_off = False
frame_options_header_sameorigin = False
m = re.search("#(.*)$", url)
if m:
for flag in m.group(1).split(","):
if flag == "pass-http-host":
pass_http_host_header = True
elif flag == "no-proxy-redirect":
proxy_redirect_off = True
elif flag == "frame-options-sameorigin":
frame_options_header_sameorigin = True
url = re.sub("#(.*)$", "", url)
nginx_conf_extra += "\tlocation %s {" % path
nginx_conf_extra += "\n\t\tproxy_pass %s;" % url
if proxy_redirect_off:
nginx_conf_extra += "\n\t\tproxy_redirect off;"
if pass_http_host_header:
nginx_conf_extra += "\n\t\tproxy_set_header Host $http_host;"
if frame_options_header_sameorigin:
nginx_conf_extra += "\n\t\tproxy_set_header X-Frame-Options SAMEORIGIN;"
nginx_conf_extra += "\n\t\tproxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;"
nginx_conf_extra += "\n\t\tproxy_set_header X-Forwarded-Host $http_host;"
nginx_conf_extra += "\n\t\tproxy_set_header X-Forwarded-Proto $scheme;"
nginx_conf_extra += "\n\t\tproxy_set_header X-Real-IP $remote_addr;"
nginx_conf_extra += "\n\t}\n"
for path, alias in yaml.get("aliases", {}).items():
nginx_conf_extra += "\tlocation %s {" % path
nginx_conf_extra += "\n\t\talias %s;" % alias
nginx_conf_extra += "\n\t}\n"
for path, url in yaml.get("redirects", {}).items():
nginx_conf_extra += "\trewrite %s %s permanent;\n" % (path, url)
nginx_conf_extra += f"\trewrite {path} {url} permanent;\n"
# override the HSTS directive type
hsts = yaml.get("hsts", hsts)
# Add the HSTS header.
if hsts == "yes":
nginx_conf_extra += "add_header Strict-Transport-Security max-age=31536000;\n"
nginx_conf_extra += '\tadd_header Strict-Transport-Security "max-age=15768000" always;\n'
elif hsts == "preload":
nginx_conf_extra += "add_header Strict-Transport-Security \"max-age=10886400; includeSubDomains; preload\";\n"
nginx_conf_extra += '\tadd_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;\n'
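For reference, here is a hypothetical STORAGE_ROOT/www/custom.yaml that exercises the branches above, written as the Python structure rtyaml.load() would return for it; the domain, paths and URLs are purely illustrative.

custom_yaml = {
    "example.com": {
        "proxies": {
            # flags after '#' map to the options parsed above
            "/app": "http://127.0.0.1:8080#pass-http-host,no-proxy-redirect,frame-options-sameorigin",
        },
        "aliases": {
            "/downloads": "/home/user-data/www/downloads/",
        },
        "redirects": {
            "/old-page": "/new-page",
        },
        # "yes" (the default), "preload", or any other value to omit the header
        "hsts": "preload",
    },
}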
# Add in any user customizations in the includes/ folder.
nginx_conf_custom_include = os.path.join(env["STORAGE_ROOT"], "www", safe_domain_name(domain) + ".conf")
@ -171,7 +215,7 @@ def make_domain_config(domain, templates, ssl_certificates, env):
# Combine the pieces. Iteratively place each template into the "# ADDITIONAL DIRECTIVES HERE" placeholder
# of the previous template.
nginx_conf = "# ADDITIONAL DIRECTIVES HERE\n"
for t in templates + [nginx_conf_extra]:
for t in [*templates, nginx_conf_extra]:
nginx_conf = re.sub("[ \t]*# ADDITIONAL DIRECTIVES HERE *\n", t, nginx_conf)
# Replace substitution strings in the template & return.
@ -180,9 +224,8 @@ def make_domain_config(domain, templates, ssl_certificates, env):
nginx_conf = nginx_conf.replace("$ROOT", root)
nginx_conf = nginx_conf.replace("$SSL_KEY", tls_cert["private-key"])
nginx_conf = nginx_conf.replace("$SSL_CERTIFICATE", tls_cert["certificate"])
nginx_conf = nginx_conf.replace("$REDIRECT_DOMAIN", re.sub(r"^www\.", "", domain)) # for default www redirects to parent domain
return nginx_conf.replace("$REDIRECT_DOMAIN", re.sub(r"^www\.", "", domain)) # for default www redirects to parent domain
return nginx_conf
def get_web_root(domain, env, test_exists=True):
# Try STORAGE_ROOT/web/domain_name if it exists, but fall back to STORAGE_ROOT/web/default.
@ -198,8 +241,11 @@ def get_web_domains_info(env):
# for the SSL config panel, get cert status
def check_cert(domain):
tls_cert = get_domain_ssl_files(domain, ssl_certificates, env, allow_missing_cert=True)
if tls_cert is None: return ("danger", "No Certificate Installed")
try:
tls_cert = get_domain_ssl_files(domain, ssl_certificates, env, allow_missing_cert=True)
except OSError: # PRIMARY_HOSTNAME cert is missing
tls_cert = None
if tls_cert is None: return ("danger", "No certificate installed.")
cert_status, cert_status_details = check_certificate(domain, tls_cert["certificate"], tls_cert["private-key"])
if cert_status == "OK":
return ("success", "Signed & valid. " + cert_status_details)
@ -218,3 +264,4 @@ def get_web_domains_info(env):
}
for domain in get_web_domains(env)
]

management/wsgi.py Normal file
View File

@ -0,0 +1,7 @@
from daemon import app
import utils
app.logger.addHandler(utils.create_syslog_handler())
if __name__ == "__main__":
app.run(port=10222)

View File

@ -1,62 +0,0 @@
POSTGREY_VERSION=1.35-1+miab1
DOVECOT_VERSION=2.2.9-1ubuntu2.1+miab1
all: clean build_postgrey build_dovecot_lucene
clean:
# Clean.
rm -rf /tmp/build
mkdir -p /tmp/build
build_postgrey: clean
# Download the latest Debian postgrey package. It is ahead of Ubuntu,
# and we might as well jump ahead.
git clone git://git.debian.org/git/collab-maint/postgrey.git /tmp/build/postgrey
# Download the corresponding upstream package.
wget -O /tmp/build/postgrey_1.35.orig.tar.gz http://postgrey.schweikert.ch/pub/postgrey-1.35.tar.gz
# Add our source patch to the debian packaging listing.
cp postgrey_sources.diff /tmp/build/postgrey/debian/patches/mailinabox
# Patch the packaging to give it a new version.
patch -p1 -d /tmp/build/postgrey < postgrey.diff
# Build the source package.
(cd /tmp/build/postgrey; dpkg-buildpackage -S -us -uc -nc)
# Sign the packages.
debsign /tmp/build/postgrey_$(POSTGREY_VERSION)_source.changes
# Upload to PPA.
dput ppa:mail-in-a-box/ppa /tmp/build/postgrey_$(POSTGREY_VERSION)_source.changes
# Clear the intermediate files.
rm -rf /tmp/build/postgrey
# TESTING BINARY PACKAGE
#sudo apt-get build-dep -y postgrey
#(cd /tmp/build/postgrey; dpkg-buildpackage -us -uc -nc)
build_dovecot_lucene: clean
# Get the upstream source.
(cd /tmp/build; apt-get source dovecot)
# Patch it so that we build dovecot-lucene (and nothing else).
patch -p1 -d /tmp/build/dovecot-2.2.9 < dovecot_lucene.diff
# Build the source package.
(cd /tmp/build/dovecot-2.2.9; dpkg-buildpackage -S -us -uc -nc)
# Sign the packages.
debsign /tmp/build/dovecot_$(DOVECOT_VERSION)_source.changes
# Upload it.
dput ppa:mail-in-a-box/ppa /tmp/build/dovecot_$(DOVECOT_VERSION)_source.changes
# TESTING BINARY PACKAGE
# Install build dependencies and build dependencies we've added in our patch,
# and then build the binary package.
#sudo apt-get build-dep -y dovecot
#sudo apt-get install libclucene-dev liblzma-dev libexttextcat-dev libstemmer-dev
#(cd /tmp/build/dovecot-2.2.9; dpkg-buildpackage -us -uc -nc)

View File

@ -1,40 +0,0 @@
ppa instructions
================
Mail-in-a-Box maintains a Launchpad.net PPA ([Mail-in-a-Box PPA](https://launchpad.net/~mail-in-a-box/+archive/ubuntu/ppa)) for additional deb's that we want to have installed on systems.
Packages
--------
* postgrey, a fork of [postgrey](http://postgrey.schweikert.ch/) based on the [latest Debian package](http://git.debian.org/?p=collab-maint/postgrey.git), with a modification to whitelist senders that are whitelisted by [dnswl.org](https://www.dnswl.org/) (i.e. don't greylist mail from known good senders).
* dovecot-lucene, [dovecot's lucene full text search plugin](http://wiki2.dovecot.org/Plugins/FTS/Lucene), which isn't built by Ubuntu's dovecot package maintainer unfortunately.
Building
--------
To rebuild the packages in the PPA, you'll need to be @JoshData.
First:
* You should have an account on Launchpad.net.
* Your account should have your GPG key set (to the fingerprint of a GPG key on your system matching the identity at the top of the debian/changelog files).
* You should have write permission to the PPA.
To build:
# Start a clean VM.
vagrant up
# Put your signing keys (on the host machine) into the VM (so it can sign the debs).
gpg --export-secret-keys | vagrant ssh -- gpg --import
# Build & upload to launchpad.
vagrant ssh -- "cd /vagrant && make"
Mail-in-a-Box adds our PPA during setup, but if you need to do that yourself for testing:
apt-add-repository ppa:mail-in-a-box/ppa
apt-get update
apt-get install postgrey dovecot-lucene

ppa/Vagrantfile vendored
View File

@ -1,12 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu14.04"
config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
config.vm.provision :shell, :inline => <<-SH
sudo apt-get update
sudo apt-get install -y git dpkg-dev devscripts dput
SH
end

View File

@ -1,319 +0,0 @@
--- a/debian/control
+++ b/debian/control
@@ -1,210 +1,23 @@
Source: dovecot
Section: mail
Priority: optional
-Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
-XSBC-Original-Maintainer: Dovecot Maintainers <jaldhar-dovecot@debian.org>
-Uploaders: Jaldhar H. Vyas <jaldhar@debian.org>, Fabio Tranchitella <kobold@debian.org>, Joel Johnson <mrjoel@lixil.net>, Marco Nenciarini <mnencia@debian.org>
-Build-Depends: debhelper (>= 7.2.3~), dpkg-dev (>= 1.16.1), pkg-config, libssl-dev, libpam0g-dev, libldap2-dev, libpq-dev, libmysqlclient-dev, libsqlite3-dev, libsasl2-dev, zlib1g-dev, libkrb5-dev, drac-dev (>= 1.12-5), libbz2-dev, libdb-dev, libcurl4-gnutls-dev, libexpat-dev, libwrap0-dev, dh-systemd, po-debconf, lsb-release, hardening-wrapper, dh-autoreconf, autotools-dev
+Maintainer: Joshua Tauberer <jt@occams.info>
+XSBC-Original-Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
+Build-Depends: debhelper (>= 7.2.3~), dpkg-dev (>= 1.16.1), pkg-config, libssl-dev, libpam0g-dev, libldap2-dev, libpq-dev, libmysqlclient-dev, libsqlite3-dev, libsasl2-dev, zlib1g-dev, libkrb5-dev, drac-dev (>= 1.12-5), libbz2-dev, libdb-dev, libcurl4-gnutls-dev, libexpat-dev, libwrap0-dev, dh-systemd, po-debconf, lsb-release, libclucene-dev (>= 2.3), liblzma-dev, libexttextcat-dev, libstemmer-dev, hardening-wrapper, dh-autoreconf, autotools-dev
Standards-Version: 3.9.4
Homepage: http://dovecot.org/
-Vcs-Git: git://git.debian.org/git/collab-maint/dovecot.git
-Vcs-Browser: http://git.debian.org/?p=collab-maint/dovecot.git
+Vcs-Git: https://github.com/mail-in-a-box/mailinabox
+Vcs-Browser: https://github.com/mail-in-a-box/mailinabox
-Package: dovecot-core
+Package: dovecot-lucene
Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, libpam-runtime (>= 0.76-13.1), openssl, adduser, ucf (>= 2.0020), ssl-cert (>= 1.0-11ubuntu1), lsb-base (>= 3.2-12ubuntu3)
-Suggests: ntp, dovecot-gssapi, dovecot-sieve, dovecot-pgsql, dovecot-mysql, dovecot-sqlite, dovecot-ldap, dovecot-imapd, dovecot-pop3d, dovecot-lmtpd, dovecot-managesieved, dovecot-solr, ufw
-Recommends: ntpdate
-Provides: dovecot-common
-Replaces: dovecot-common (<< 1:2.0.14-2~), mailavenger (<< 0.8.1-4)
-Breaks: dovecot-common (<< 1:2.0.14-2~), mailavenger (<< 0.8.1-4)
-Description: secure POP3/IMAP server - core files
+Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (>= 1:2.2.9-1ubuntu2.1)
+Description: secure POP3/IMAP server - Lucene support
Dovecot is a mail server whose major goals are security and extreme
reliability. It tries very hard to handle all error conditions and verify
that all data is valid, making it nearly impossible to crash. It supports
mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
fast, extensible, and portable.
.
- This package contains the Dovecot main server and its command line utility.
-
-Package: dovecot-dev
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version})
-Replaces: dovecot-common (<< 1:2.0.14-2~)
-Breaks: dovecot-common (<< 1:2.0.14-2~)
-Description: secure POP3/IMAP server - header files
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package contains header files needed to compile plugins for the Dovecot
- mail server.
-
-Package: dovecot-imapd
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version}), ucf (>= 2.0020)
-Provides: imap-server
-Description: secure POP3/IMAP server - IMAP daemon
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package contains the Dovecot IMAP server.
-
-Package: dovecot-pop3d
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version}), ucf (>= 2.0020)
-Provides: pop3-server
-Description: secure POP3/IMAP server - POP3 daemon
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package contains the Dovecot POP3 server.
-
-Package: dovecot-lmtpd
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version}), ucf (>= 2.0020)
-Replaces: dovecot-common (<< 1:2.0.14-2~)
-Breaks: dovecot-common (<< 1:2.0.14-2~)
-Description: secure POP3/IMAP server - LMTP server
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package contains the Dovecot LMTP server.
-
-Package: dovecot-managesieved
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version}), dovecot-sieve (= ${binary:Version}), ucf (>= 2.0020)
-Replaces: dovecot-common (<< 1:2.0.14-2~)
-Breaks: dovecot-common (<< 1:2.0.14-2~)
-Description: secure POP3/IMAP server - ManageSieve server
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package contains the Dovecot ManageSieve server.
-
-Package: dovecot-pgsql
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version})
-Description: secure POP3/IMAP server - PostgreSQL support
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package provides PostgreSQL support for Dovecot.
-
-Package: dovecot-mysql
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version})
-Description: secure POP3/IMAP server - MySQL support
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package provides MySQL support for Dovecot.
-
-Package: dovecot-sqlite
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version})
-Description: secure POP3/IMAP server - SQLite support
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package provides SQLite support for Dovecot.
-
-Package: dovecot-ldap
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version}), ucf (>= 2.0020)
-Description: secure POP3/IMAP server - LDAP support
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package provides LDAP support for Dovecot.
-
-Package: dovecot-gssapi
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version})
-Description: secure POP3/IMAP server - GSSAPI support
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package provides GSSAPI authentication support for Dovecot.
-
-Package: dovecot-sieve
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version}), ucf (>= 2.0020)
-Description: secure POP3/IMAP server - Sieve filters support
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package provides Sieve filters support for Dovecot.
-
-Package: dovecot-solr
-Architecture: any
-Depends: ${shlibs:Depends}, ${misc:Depends}, dovecot-core (= ${binary:Version})
-Description: secure POP3/IMAP server - Solr support
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package provides Solr full text search support for Dovecot.
-
-Package: dovecot-dbg
-Section: debug
-Priority: extra
-Architecture: any
-Depends: ${misc:Depends}, dovecot-core (= ${binary:Version})
-Description: secure POP3/IMAP server - debug symbols
- Dovecot is a mail server whose major goals are security and extreme
- reliability. It tries very hard to handle all error conditions and verify
- that all data is valid, making it nearly impossible to crash. It supports
- mbox/Maildir and its own dbox/mdbox formats, and should also be pretty
- fast, extensible, and portable.
- .
- This package contains debug symbols for Dovecot.
-
-Package: mail-stack-delivery
-Architecture: all
-Depends: dovecot-core, dovecot-imapd, dovecot-pop3d, dovecot-managesieved,
- postfix, ${misc:Depends}
-Replaces: dovecot-postfix (<< 1:1.2.12-0ubuntu1~)
-Description: mail server delivery agent stack provided by Ubuntu server team
- Ubuntu's mail stack provides fully operational delivery with
- safe defaults and additional options. Out of the box it supports IMAP,
- POP3 and SMTP services with SASL authentication and Maildir as default
- storage engine.
- .
- This package contains configuration files for dovecot.
- .
- This package modifies postfix's configuration to integrate with dovecot
+ This package provides Lucene full text search support for Dovecot. It has been modified by Mail-in-a-Box
+ to supply a dovecot-lucene package compatible with the official Ubuntu trusty dovecot-core.
diff --git a/debian/dovecot-lucene.links b/debian/dovecot-lucene.links
new file mode 100644
index 0000000..6ffcbeb
--- /dev/null
+++ b/debian/dovecot-lucene.links
@@ -0,0 +1 @@
+/usr/share/bug/dovecot-core /usr/share/bug/dovecot-lucene
diff --git a/debian/dovecot-lucene.lintian-overrides b/debian/dovecot-lucene.lintian-overrides
new file mode 100644
index 0000000..60d90fd
--- /dev/null
+++ b/debian/dovecot-lucene.lintian-overrides
@@ -0,0 +1,2 @@
+dovecot-lucene: hardening-no-fortify-functions usr/lib/dovecot/modules/lib21_fts_lucene_plugin.so
+
diff --git a/debian/dovecot-lucene.substvars b/debian/dovecot-lucene.substvars
new file mode 100644
index 0000000..ed54f36
--- /dev/null
+++ b/debian/dovecot-lucene.substvars
@@ -0,0 +1,2 @@
+shlibs:Depends=libc6 (>= 2.4), libclucene-core1 (>= 2.3.3.4), libgcc1 (>= 1:4.1.1), libstdc++6 (>= 4.1.1), libstemmer0d (>= 0+svn527)
+misc:Depends=
diff --git a/debian/dovecot-lucene.triggers b/debian/dovecot-lucene.triggers
new file mode 100644
index 0000000..3d933a5
--- /dev/null
+++ b/debian/dovecot-lucene.triggers
@@ -0,0 +1 @@
+activate register-dovecot-plugin
--- a/debian/rules
+++ b/debian/rules
@@ -40,6 +40,7 @@
--with-solr \
--with-ioloop=best \
--with-libwrap \
+ --with-lucene \
--host=$(DEB_HOST_GNU_TYPE) \
--build=$(DEB_BUILD_GNU_TYPE) \
--prefix=/usr \
@@ -95,6 +96,10 @@
dh_testroot
dh_clean -k
dh_installdirs
+ mkdir -p $(CURDIR)/debian/dovecot-lucene/usr/lib/dovecot/modules
+ mv $(CURDIR)/src/plugins/fts-lucene/.libs/* $(CURDIR)/debian/dovecot-lucene/usr/lib/dovecot/modules/
+
+rest_disabled_by_miab:
$(MAKE) install DESTDIR=$(CURDIR)/debian/dovecot-core
$(MAKE) -C $(PIGEONHOLE_DIR) install DESTDIR=$(CURDIR)/debian/dovecot-core
rm `find $(CURDIR)/debian -name '*.la'`
@@ -209,7 +214,7 @@
dh_installdocs -a
dh_installexamples -a
dh_installpam -a
- mv $(CURDIR)/debian/dovecot-core/etc/pam.d/dovecot-core $(CURDIR)/debian/dovecot-core/etc/pam.d/dovecot
+ # mv $(CURDIR)/debian/dovecot-core/etc/pam.d/dovecot-core $(CURDIR)/debian/dovecot-core/etc/pam.d/dovecot
dh_systemd_enable
dh_installinit -pdovecot-core --name=dovecot
dh_systemd_start
@@ -220,10 +225,10 @@
dh_lintian -a
dh_installchangelogs -a ChangeLog
dh_link -a
- dh_strip -a --dbg-package=dovecot-dbg
+ #dh_strip -a --dbg-package=dovecot-dbg
dh_compress -a
dh_fixperms -a
- chmod 0700 debian/dovecot-core/etc/dovecot/private
+ #chmod 0700 debian/dovecot-core/etc/dovecot/private
dh_makeshlibs -a -n
dh_installdeb -a
dh_shlibdeps -a
--- a/debian/changelog
+++ a/debian/changelog
@@ -1,3 +1,9 @@
+dovecot (1:2.2.9-1ubuntu2.1+miab1) trusty; urgency=low
+
+ * Changed to just build dovecot-lucene for Mail-in-a-box PPA
+
+ -- Joshua Tauberer <jt@occams.info> Sat, 14 May 2015 16:13:00 -0400
+
dovecot (1:2.2.9-1ubuntu2.1) trusty-security; urgency=medium
* SECURITY UPDATE: denial of service via SSL connection exhaustion
--- a/debian/copyright 2014-03-07 07:26:37.000000000 -0500
+++ b/debian/copyright 2015-05-23 18:17:42.668005535 -0400
@@ -1,3 +1,7 @@
+This package is a fork by Mail-in-a-box (https://mailinabox.email). Original
+copyright statement follows:
+----------------------------------------------------------------------------
+
This package was debianized by Jaldhar H. Vyas <jaldhar@debian.org> on
Tue, 3 Dec 2002 01:10:07 -0500.

View File

@ -1,80 +0,0 @@
diff --git a/debian/NEWS b/debian/NEWS
index dd09744..de7b640 100644
--- a/debian/NEWS
+++ b/debian/NEWS
@@ -1,3 +1,9 @@
+postgrey (1.35-1+miab1)
+
+ Added DNSWL.org whitelisting.
+
+ -- Joshua Tauberer <jt@occams.info> Mon May 18 18:58:40 EDT 2015
+
postgrey (1.32-1) unstable; urgency=low
Postgrey is now listening to port 10023 and not 60000. The latter was an
diff --git a/debian/changelog b/debian/changelog
index 1058e15..e5e3557 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+postgrey (1.35-1+miab1) trusty; urgency=low
+
+ * Added DNSWL.org whitelisting.
+
+ -- Joshua Tauberer <jt@occams.info> Mon, 18 May 2015 21:58:40 +0000
+
postgrey (1.35-1) unstable; urgency=low
* New upstream release (Closes: 756486)
diff --git a/debian/control b/debian/control
index ce12ba6..0a82855 100644
--- a/debian/control
+++ b/debian/control
@@ -1,14 +1,11 @@
Source: postgrey
Section: mail
Priority: optional
-Maintainer: Antonio Radici <antonio@debian.org>
-Uploaders: Jon Daley <jondaley-guest@alioth.debian.org>
+Maintainer: Joshua Tauberer <jt@occams.info>
Build-Depends: debhelper (>= 7), quilt
Build-Depends-Indep: po-debconf
Standards-Version: 3.9.6
Homepage: http://postgrey.schweikert.ch/
-Vcs-Browser: http://git.debian.org/?p=collab-maint/postgrey.git
-Vcs-Git: git://git.debian.org/git/collab-maint/postgrey.git
Package: postgrey
Architecture: all
@@ -25,3 +22,6 @@ Description: greylisting implementation for Postfix
.
While Postgrey is designed for use with Postfix, it can also be used
with Exim.
+ .
+ This version has been modified by Mail-in-a-Box to whitelist senders
+ in the DNSWL.org list. See https://mailinabox.email.
diff --git a/debian/copyright b/debian/copyright
index 3cbe377..bf09b89 100644
--- a/debian/copyright
+++ b/debian/copyright
@@ -1,6 +1,10 @@
+This package is a fork by Mail-in-a-Box (https://mailinabox.email). Original
+copyright statement follows:
+----------------------------------------------------------------------------
+
This Debian package was prepared by Adrian von Bidder <cmot@debian.org> in
July 2004, then the package was adopted by Antonio Radici <antonio@dyne.org>
-in Sept 2009
+in Sept 2009.
It was downloaded from http://postgrey.schweikert.ch/
diff --git a/debian/patches/series b/debian/patches/series
index f4c5e31..3cd62b8 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -1,3 +1,3 @@
imported-upstream-diff
disable-transaction-logic
-
+mailinabox

View File

@ -1,100 +0,0 @@
Description: whitelist whatever dnswl.org whitelists
.
postgrey (1.35-1+miab1) unstable; urgency=low
.
* Added DNSWL.org whitelisting.
Author: Joshua Tauberer <jt@occams.info>
--- postgrey-1.35.orig/README
+++ postgrey-1.35/README
@@ -13,7 +13,7 @@ Requirements
- BerkeleyDB (Perl Module)
- Berkeley DB >= 4.1 (Library)
- Digest::SHA (Perl Module, only for --privacy option)
-
+- Net::DNS (Perl Module)
Documentation
-------------
--- postgrey-1.35.orig/postgrey
+++ postgrey-1.35/postgrey
@@ -18,6 +18,7 @@ use Fcntl ':flock'; # import LOCK_* cons
use Sys::Hostname;
use Sys::Syslog; # used only to find out which version we use
use POSIX qw(strftime setlocale LC_ALL);
+use Net::DNS; # for DNSWL.org whitelisting
use vars qw(@ISA);
@ISA = qw(Net::Server::Multiplex);
@@ -26,6 +27,8 @@ my $VERSION = '1.35';
my $DEFAULT_DBDIR = '/var/lib/postgrey';
my $CONFIG_DIR = '/etc/postgrey';
+my $dns_resolver = Net::DNS::Resolver->new;
+
sub cidr_parse($)
{
defined $_[0] or return undef;
@@ -48,6 +51,36 @@ sub cidr_match($$$)
return ($addr & $mask) == $net;
}
+sub reverseDottedQuad {
+ # This is the sub _chkValidPublicIP from Net::DNSBL by PJ Goodwin
+ # at http://www.the42.net/net-dnsbl.
+ my ($quad) = @_;
+ if ($quad =~ /^(\d+)\.(\d+)\.(\d+)\.(\d+)$/) {
+ my ($ip1,$ip2,$ip3,$ip4) = ($1, $2, $3, $4);
+ if (
+ $ip1 == 10 || #10.0.0.0/8 (10/8)
+ ($ip1 == 172 && $ip2 >= 16 && $ip2 <= 31) || #172.16.0.0/12 (172.16/12)
+ ($ip1 == 192 && $ip2 == 168) || #192.168.0.0/16 (192.168/16)
+ $quad eq '127.0.0.1' # localhost
+ ) {
+ # toss the RFC1918 specified privates
+ return undef;
+ } elsif (
+ ($ip1 <= 1 || $ip1 > 254) ||
+ ($ip2 < 0 || $ip2 > 255) ||
+ ($ip3 < 0 || $ip3 > 255) ||
+ ($ip4 < 0 || $ip4 > 255)
+ ) {
+ #invalid oct, toss it;
+ return undef;
+ }
+ my $revquad = $ip4 . "." . $ip3 . "." . $ip2 . "." . $ip1;
+ return $revquad;
+ } else { # invalid quad
+ return undef;
+ }
+}
+
sub read_clients_whitelists($)
{
my ($self) = @_;
@@ -361,6 +394,25 @@ sub smtpd_access_policy($$)
}
}
+ # whitelist clients in dnswl.org
+ my $revip = reverseDottedQuad($attr->{client_address});
+ if ($revip) { # valid IP / plausibly in DNSWL
+ my $answer = $dns_resolver->send($revip . '.list.dnswl.org');
+ if ($answer && scalar($answer->answer) > 0) {
+ my @rrs = $answer->answer;
+ if ($rrs[0]->type eq 'A' && $rrs[0]->address ne '127.0.0.255') {
+ # Address appears in DNSWL. (127.0.0.255 means we were rate-limited.)
+ my $code = $rrs[0]->address;
+ if ($code =~ /^127.0.(\d+)\.([0-3])$/) {
+ my %dnswltrust = (0 => 'legitimate', 1 => 'occasional spam', 2 => 'rare spam', 3 => 'highly unlikely to send spam');
+ $code = $2 . '/' . $dnswltrust{$2};
+ }
+ $self->mylog_action($attr, 'pass', 'client whitelisted by dnswl.org (' . $code . ')');
+ return 'DUNNO';
+ }
+ }
+ }
+
# auto whitelist clients (see below for explanation)
my ($cawl_db, $cawl_key, $cawl_count, $cawl_last);
if($self->{postgrey}{awl_clients}) {

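For reference, the DNS lookup this patch performs can be reproduced by hand. A minimal sketch using dig; 192.0.2.1 is a placeholder client IP, reversed to 1.2.0.192 before being prepended to list.dnswl.org, exactly as reverseDottedQuad does above:

# Reproduce the whitelist lookup manually (illustrative only, not part of the patch).
dig +short 1.2.0.192.list.dnswl.org A
# A reply of the form 127.0.<category>.<trust> (trust 0-3) means the client is whitelisted;
# 127.0.0.255 means the query was rate-limited and is ignored by the patch.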
View File

@ -1,9 +1,14 @@
Mail-in-a-Box Security Guide
============================
Mail-in-a-Box turns a fresh Ubuntu 14.04 LTS 64-bit machine into a mail server appliance by installing and configuring various components.
Mail-in-a-Box turns a fresh Ubuntu 22.04 LTS 64-bit machine into a mail server appliance by installing and configuring various components.
This page documents the security features of Mail-in-a-Box. The term “box” is used below to mean a configured Mail-in-a-Box.
This page documents the security posture of Mail-in-a-Box. The term “box” is used below to mean a configured Mail-in-a-Box.
Reporting Security Vulnerabilities
----------------------------------
Security vulnerabilities should be reported to the [project's maintainer](https://joshdata.me) via email.
Threat Model
------------
@ -32,34 +37,24 @@ The box's administrator and its (non-administrative) mail users must sometimes c
These services are protected by [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security):
* SMTP Submission (port 587). Mail users submit outbound mail through SMTP with STARTTLS on port 587.
* SMTP Submission (ports 465/587). Mail users submit outbound mail through SMTP with TLS (port 465) or STARTTLS (port 587).
* IMAP/POP (ports 993, 995). Mail users check for incoming mail through IMAP or POP over TLS.
* HTTPS (port 443). Webmail, the Exchange/ActiveSync protocol, the administrative control panel, and any static hosted websites are accessed over HTTPS.
The services all follow these rules:
* TLS certificates are generated with 2048-bit RSA keys and SHA-256 fingerprints. The box provides a self-signed certificate by default. The [setup guide](https://mailinabox.email/guide.html) explains how to verify the certificate fingerprint on first login. Users are encouraged to replace the certificate with a proper CA-signed one. ([source](setup/ssl.sh))
* Only TLSv1, TLSv1.1 and TLSv1.2 are offered (the older SSL protocols are not offered).
* Export-grade ciphers, the anonymous DH/ECDH algorithms (aNULL), and clear-text ciphers (eNULL) are not offered.
* The minimum cipher key length offered is 112 bits. The maximum is 256 bits. Diffie-Hellman ciphers use a 2048-bit key for forward secrecy.
* Only TLSv1.2+ are offered (the older SSL protocols are not offered).
* We track the [Mozilla Intermediate Ciphers Recommendation](https://wiki.mozilla.org/Security/Server_Side_TLS), balancing security with supporting a wide range of mail clients. Diffie-Hellman ciphers use a 2048-bit key for forward secrecy. For more details, see the [output of SSLyze for these ports](tests/tls_results.txt).
Additionally:
* SMTP Submission (port 587) will not accept user credentials without STARTTLS (true also of SMTP on port 25 in case of client misconfiguration), and the submission port won't accept mail without encryption. The minimum cipher key length is 128 bits. (The box is of course configured not to be an open relay. User credentials are required to send outbound mail.) ([source](setup/mail-postfix.sh))
* SMTP Submission on port 587 will not accept user credentials without STARTTLS (true also of SMTP on port 25 in case of client misconfiguration), and the submission port won't accept mail without encryption. The minimum cipher key length is 128 bits. (The box is of course configured not to be an open relay. User credentials are required to send outbound mail.) ([source](setup/mail-postfix.sh))
* HTTPS (port 443): The HTTPS Strict Transport Security header is set. A redirect from HTTP to HTTPS is offered. The [Qualys SSL Labs test](https://www.ssllabs.com/ssltest) should report an A+ grade. ([source 1](conf/nginx-ssl.conf), [source 2](conf/nginx.conf))
For more details, see the [output of SSLyze for these ports](tests/tls_results.txt).
The cipher and protocol selection are chosen to support the following clients:
* For HTTPS: Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8, Android 2.3, Java 7.
* For other protocols: TBD.
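As an illustrative check (not part of the guide itself), the protocols and certificate offered on the submission ports can be inspected from any machine with openssl; box.example.com is a placeholder hostname:

# Port 587 uses STARTTLS, so ask openssl to upgrade the connection first.
openssl s_client -connect box.example.com:587 -starttls smtp -brief < /dev/null
# Port 465 uses implicit TLS, so no STARTTLS step is needed.
openssl s_client -connect box.example.com:465 -brief < /dev/null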
### Password Storage
The passwords for mail users are stored on disk using the [SHA512-CRYPT](http://man7.org/linux/man-pages/man3/crypt.3.html) hashing scheme. ([source](management/mailconfig.py))
When using the web-based administrative control panel, after logging in an API key is placed in the browser's local storage (rather than, say, the user's actual password). The API key is an HMAC based on the user's email address and current password, and it is keyed by a secret known only to the control panel service. By resetting an administrator's password, any HMACs previously generated for that user will expire.
The passwords for mail users are stored on disk using the [SHA512-CRYPT](http://man7.org/linux/man-pages/man3/crypt.3.html) hashing scheme. ([source](management/mailconfig.py)) Password changes (as well as changes to control panel two-factor authentication settings) expire any control panel login sessions.
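For illustration, a hash in the same scheme can be generated by hand with Dovecot's doveadm tool (a sketch, assuming it is run on the box):

# Prompt for a password and print a SHA512-CRYPT hash like those stored for mail users.
doveadm pw -s SHA512-CRYPT
# Output has the form {SHA512-CRYPT}$6$<salt>$<hash>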
### Console access
@ -73,12 +68,10 @@ If DNSSEC is enabled at the box's domain name's registrar, the SSHFP record that
`fail2ban` provides some protection from brute-force login attacks (repeated logins that guess account passwords) by blocking offending IP addresses at the network level.
The following services are protected: SSH, IMAP (dovecot), SMTP submission (postfix), webmail (roundcube), ownCloud/CalDAV/CardDAV (over HTTP), and the Mail-in-a-Box control panel & munin (over HTTP).
The following services are protected: SSH, IMAP (dovecot), SMTP submission (postfix), webmail (roundcube), Nextcloud/CalDAV/CardDAV (over HTTP), and the Mail-in-a-Box control panel (over HTTP).
Some other services running on the box may be missing fail2ban filters.
`fail2ban` only blocks IPv4 addresses, however. If the box has a public IPv6 address, it is not protected from these attacks.
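The active jails and currently banned addresses can be listed on the box itself; a minimal sketch assuming shell access as root:

# Show which fail2ban jails are enabled.
fail2ban-client status
# Show bans for one jail, e.g. the SSH jail (jail names may differ by configuration).
fail2ban-client status sshd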
Outbound Mail
-------------
@ -102,16 +95,20 @@ Domain policy records allow recipient MTAs to detect when the _domain_ part of o
### User Policy
While domain policy records prevent other servers from sending mail with a "From:" header that matches a domain hosted on the box (see above), those policy records do not guarnatee that the user portion of the sender email address matches the actual sender. In enterprise environments where the box may host the mail of untrusted users, it is important to guard against users impersonating other users.
While domain policy records prevent other servers from sending mail with a "From:" header that matches a domain hosted on the box (see above), those policy records do not guarantee that the user portion of the sender email address matches the actual sender. In enterprise environments where the box may host the mail of untrusted users, it is important to guard against users impersonating other users.
The box restricts the envelope sender address (also called the return path or MAIL FROM address --- this is different from the "From:" header) that users may put into outbound mail. The envelope sender address must be either their own email address (their SMTP login username) or any alias that they are listed as a permitted sender of. (There is currently no restriction on the contents of the "From:" header.)
Incoming Mail
-------------
### Encryption
### Encryption Settings
As discussed above, there is no way to require on-the-wire encryption of mail. When the box receives an incoming email (SMTP on port 25), it offers encryption (STARTTLS) but cannot require that senders use it because some senders may not support STARTTLS at all and other senders may support STARTTLS but not with the latest protocols/ciphers. To give senders the best chance at making use of encryption, the box offers protocols back to TLSv1 and ciphers with key lengths as low as 112 bits. Modern clients (senders) will make use of the 256-bit ciphers and Diffie-Hellman ciphers with a 2048-bit key for perfect forward secrecy, however. ([source](setup/mail-postfix.sh))
As with outbound email, there is no way to require on-the-wire encryption of incoming mail from all senders. When the box receives an incoming email (SMTP on port 25), it offers encryption (STARTTLS) but cannot require that senders use it because some senders may not support STARTTLS at all and other senders may support STARTTLS but not with the latest protocols/ciphers. To give senders the best chance at making use of encryption, the box offers protocols back to TLSv1 and ciphers with key lengths as low as 112 bits. Modern clients (senders) will make use of the 256-bit ciphers and Diffie-Hellman ciphers with a 2048-bit key for perfect forward secrecy, however. ([source](setup/mail-postfix.sh))
### MTA-STS
The box publishes a SMTP MTA Strict Transport Security ([SMTP MTA-STS](https://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol#SMTP_MTA_Strict_Transport_Security)) policy (via DNS and HTTPS) in "enforce" mode. Senders that support MTA-STS will use a secure SMTP connection. (MTA-STS tells senders to connect and expect a signed TLS certificate for the "MX" domain without permitting a fallback to an unencrypted connection.)
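For context, an MTA-STS deployment is a DNS TXT record plus a small policy file served over HTTPS; the sketch below uses example.com and placeholder values rather than anything taken from the box's actual configuration:

# The TXT record advertises that a policy exists (the id changes whenever the policy changes).
dig +short _mta-sts.example.com TXT
# "v=STSv1; id=20240101000000"
# The policy itself is fetched over HTTPS by sending MTAs:
curl -s https://mta-sts.example.com/.well-known/mta-sts.txt
# version: STSv1
# mode: enforce
# mx: box.example.com
# max_age: 604800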
### DANE

View File

@ -7,44 +7,82 @@
#########################################################
if [ -z "$TAG" ]; then
TAG=v0.19a
# If a version to install isn't explicitly given as an environment
# variable, then install the latest version. But the latest version
# depends on the machine's version of Ubuntu. Existing users need to
# be able to upgrade to the latest version available for that version
# of Ubuntu to satisfy the migration requirements.
#
# Also, the system status checks read this script for TAG = (without the
# space, but if we put it in a comment it would confuse the status checks!)
# to get the latest version, so the first such line must be the one that we
# want to display in status checks.
#
# Allow point-release versions of the major releases, e.g. 22.04.1 is OK.
UBUNTU_VERSION=$( lsb_release -d | sed 's/.*:\s*//' | sed 's/\([0-9]*\.[0-9]*\)\.[0-9]/\1/' )
if [ "$UBUNTU_VERSION" == "Ubuntu 22.04 LTS" ]; then
# This machine is running Ubuntu 22.04, which is supported by
# Mail-in-a-Box versions 60 and later.
TAG=v68
elif [ "$UBUNTU_VERSION" == "Ubuntu 18.04 LTS" ]; then
# This machine is running Ubuntu 18.04, which is supported by
# Mail-in-a-Box versions 0.40 through 5x.
echo "Support is ending for Ubuntu 18.04."
echo "Please immediately begin to migrate your data to"
echo "a new machine running Ubuntu 22.04. See:"
echo "https://mailinabox.email/maintenance.html#upgrade"
TAG=v57a
elif [ "$UBUNTU_VERSION" == "Ubuntu 14.04 LTS" ]; then
# This machine is running Ubuntu 14.04, which is supported by
# Mail-in-a-Box versions 1 through v0.30.
echo "Ubuntu 14.04 is no longer supported."
echo "The last version of Mail-in-a-Box supporting Ubuntu 14.04 will be installed."
TAG=v0.30
else
echo "This script may be used only on a machine running Ubuntu 14.04, 18.04, or 22.04."
exit 1
fi
fi
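The comment above about status checks reading this script for the first TAG= line can be illustrated with a rough sketch (the URL is a placeholder, not taken from this diff):

# Rough illustration of reading the latest version tag from the published bootstrap script.
curl -s https://mailinabox.email/setup.sh | grep -m 1 'TAG=v' | sed 's/.*TAG=//'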
# Are we running as root?
if [[ $EUID -ne 0 ]]; then
echo "This script must be run as root. Did you leave out sudo?"
exit
exit 1
fi
# Clone the Mail-in-a-Box repository if it doesn't exist.
if [ ! -d $HOME/mailinabox ]; then
if [ ! -d "$HOME/mailinabox" ]; then
if [ ! -f /usr/bin/git ]; then
echo Installing git . . .
echo "Installing git . . ."
apt-get -q -q update
DEBIAN_FRONTEND=noninteractive apt-get -q -q install -y git < /dev/null
echo
fi
echo Downloading Mail-in-a-Box $TAG. . .
if [ "$SOURCE" == "" ]; then
SOURCE=https://github.com/mail-in-a-box/mailinabox
fi
echo "Downloading Mail-in-a-Box $TAG. . ."
git clone \
-b $TAG --depth 1 \
https://github.com/mail-in-a-box/mailinabox \
$HOME/mailinabox \
-b "$TAG" --depth 1 \
"$SOURCE" \
"$HOME/mailinabox" \
< /dev/null 2> /dev/null
echo
fi
# Change directory to it.
cd $HOME/mailinabox
cd "$HOME/mailinabox" || exit
# Update it.
if [ "$TAG" != `git describe` ]; then
echo Updating Mail-in-a-Box to $TAG . . .
git fetch --depth 1 --force --prune origin tag $TAG
if ! git checkout -q $TAG; then
echo "Update failed. Did you modify something in `pwd`?"
exit
if [ "$TAG" != "$(git describe --always)" ]; then
echo "Updating Mail-in-a-Box to $TAG . . ."
git fetch --depth 1 --force --prune origin tag "$TAG"
if ! git checkout -q "$TAG"; then
echo "Update failed. Did you modify something in $PWD?"
exit 1
fi
echo
fi

View File

@ -10,22 +10,28 @@ source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars
# Install DKIM...
echo Installing OpenDKIM/OpenDMARC...
echo "Installing OpenDKIM/OpenDMARC..."
apt_install opendkim opendkim-tools opendmarc
# Make sure configuration directories exist.
mkdir -p /etc/opendkim;
mkdir -p $STORAGE_ROOT/mail/dkim
mkdir -p "$STORAGE_ROOT/mail/dkim"
# Used in InternalHosts and ExternalIgnoreList configuration directives.
# Not quite sure why.
echo "127.0.0.1" > /etc/opendkim/TrustedHosts
# We need to at least create these files, since we reference them later.
# Otherwise, opendkim startup will fail
touch /etc/opendkim/KeyTable
touch /etc/opendkim/SigningTable
if grep -q "ExternalIgnoreList" /etc/opendkim.conf; then
true # already done #NODOC
else
# Add various configuration options to the end of `opendkim.conf`.
cat >> /etc/opendkim.conf << EOF;
Canonicalization relaxed/simple
MinimumKeyBits 1024
ExternalIgnoreList refile:/etc/opendkim/TrustedHosts
InternalHosts refile:/etc/opendkim/TrustedHosts
@ -47,16 +53,49 @@ fi
# such as Google. But they and others use a 2048 bit key, so we'll
# do the same. Keys beyond 2048 bits may exceed DNS record limits.
if [ ! -f "$STORAGE_ROOT/mail/dkim/mail.private" ]; then
opendkim-genkey -b 2048 -r -s mail -D $STORAGE_ROOT/mail/dkim
opendkim-genkey -b 2048 -r -s mail -D "$STORAGE_ROOT/mail/dkim"
fi
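opendkim-genkey also writes the matching DNS TXT record next to the private key; a quick, illustrative way to see the record contents for the "mail" selector:

# The TXT record for the selector is written alongside the private key.
cat "$STORAGE_ROOT/mail/dkim/mail.txt"
# mail._domainkey IN TXT ( "v=DKIM1; k=rsa; s=email; p=..." )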
# Ensure files are owned by the opendkim user and are private otherwise.
chown -R opendkim:opendkim $STORAGE_ROOT/mail/dkim
chmod go-rwx $STORAGE_ROOT/mail/dkim
chown -R opendkim:opendkim "$STORAGE_ROOT/mail/dkim"
chmod go-rwx "$STORAGE_ROOT/mail/dkim"
tools/editconf.py /etc/opendmarc.conf -s \
"Syslog=true" \
"Socket=inet:8893@[127.0.0.1]"
"Socket=inet:8893@[127.0.0.1]" \
"FailureReports=false"
# SPFIgnoreResults causes the filter to ignore any SPF results in the header
# of the message. This is useful if you want the filter to perform SPF checks
# itself, or because you don't trust the arriving header. This added header is
# used by spamassassin to evaluate the mail for spamminess.
tools/editconf.py /etc/opendmarc.conf -s \
"SPFIgnoreResults=true"
# SPFSelfValidate causes the filter to perform a fallback SPF check itself
# when it can find no SPF results in the message header. If SPFIgnoreResults
# is also set, it never looks for SPF results in headers and always performs
# the SPF check itself when this is set. This added header is used by
# spamassassin to evaluate the mail for spamminess.
tools/editconf.py /etc/opendmarc.conf -s \
"SPFSelfValidate=true"
# Disables generation of failure reports for sending domains that publish a
# "none" policy.
tools/editconf.py /etc/opendmarc.conf -s \
"FailureReportsOnNone=false"
# AlwaysAddARHeader Adds an "Authentication-Results:" header field even to
# unsigned messages from domains with no "signs all" policy. The reported DKIM
# result will be "none" in such cases. Normally unsigned mail from non-strict
# domains does not cause the results header field to be added. This added header
# is used by spamassassin to evaluate the mail for spamminess.
tools/editconf.py /etc/opendkim.conf -s \
"AlwaysAddARHeader=true"
# Add OpenDKIM and OpenDMARC as milters to postfix, which is how OpenDKIM
# intercepts outgoing mail to perform the signing (by adding a mail header)
@ -75,6 +114,9 @@ tools/editconf.py /etc/postfix/main.cf \
non_smtpd_milters=\$smtpd_milters \
milter_default_action=accept
# We need to explicitly enable the opendmarc service, or it will not start
hide_output systemctl enable opendmarc
# Restart services.
restart_service opendkim
restart_service opendmarc

View File

@ -10,22 +10,19 @@
source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars
# Install the packages.
#
# * nsd: The non-recursive nameserver that publishes our DNS records.
# * ldnsutils: Helper utilities for signing DNSSEC zones.
# * openssh-client: Provides ssh-keyscan which we use to create SSHFP records.
echo "Installing nsd (DNS server)..."
apt_install nsd ldnsutils openssh-client
# Prepare nsd's configuration.
# We configure nsd before installation as we only want it to bind to some addresses
# and it otherwise will have port / bind conflicts with bind9 used as the local resolver
mkdir -p /var/run/nsd
mkdir -p /etc/nsd
mkdir -p /etc/nsd/zones
touch /etc/nsd/zones.conf
cat > /etc/nsd/nsd.conf << EOF;
# No not edit. Overwritten by Mail-in-a-Box setup.
# Do not edit. Overwritten by Mail-in-a-Box setup.
server:
hide-version: yes
logfile: "/var/log/nsd.log"
# identify the server (CH TXT ID.SERVER entry).
identity: ""
@ -49,33 +46,47 @@ for ip in $PRIVATE_IP $PRIVATE_IPV6; do
echo " ip-address: $ip" >> /etc/nsd/nsd.conf;
done
echo "include: /etc/nsd/zones.conf" >> /etc/nsd/nsd.conf;
# Create a directory for additional configuration directives, including
# the zones.conf file written out by our management daemon.
echo "include: /etc/nsd/nsd.conf.d/*.conf" >> /etc/nsd/nsd.conf;
# Remove the old location of zones.conf that we generate. It will
# now be stored in /etc/nsd/nsd.conf.d.
rm -f /etc/nsd/zones.conf
# Add log rotation
cat > /etc/logrotate.d/nsd <<EOF;
/var/log/nsd.log {
weekly
missingok
rotate 12
compress
delaycompress
notifempty
}
EOF
# Install the packages.
#
# * nsd: The non-recursive nameserver that publishes our DNS records.
# * ldnsutils: Helper utilities for signing DNSSEC zones.
# * openssh-client: Provides ssh-keyscan which we use to create SSHFP records.
echo "Installing nsd (DNS server)..."
apt_install nsd ldnsutils openssh-client
# Create DNSSEC signing keys.
mkdir -p "$STORAGE_ROOT/dns/dnssec";
# TLDs don't all support the same algorithms, so we'll generate keys using a few
# different algorithms. RSASHA1-NSEC3-SHA1 was possibly the first widely used
# algorithm that supported NSEC3, which is a security best practice. However TLDs
# will probably be moving away from it to a a SHA256-based algorithm.
#
# Supports `RSASHA1-NSEC3-SHA1` (didn't test with `RSASHA256`):
#
# * .info
# * .me
#
# Requires `RSASHA256`
#
# * .email
# * .guide
#
# Supports `RSASHA256` (and defaulting to this)
#
# * .fund
# TLDs, registrars, and validating nameservers don't all support the same algorithms,
# so we'll generate keys using a few different algorithms so that dns_update.py can
# choose which algorithm to use when generating the zonefiles. See #1953 for recent
# discussion. Files for previously used algorithms (i.e. RSASHA1-NSEC3-SHA1) may still
# be in the output directory, and we'll continue to support signing zones with them
# so that trust isn't broken with deployed DS records, but we won't generate those
# keys on new systems.
FIRST=1 #NODOC
for algo in RSASHA1-NSEC3-SHA1 RSASHA256; do
for algo in RSASHA256 ECDSAP256SHA256; do
if [ ! -f "$STORAGE_ROOT/dns/dnssec/$algo.conf" ]; then
if [ $FIRST == 1 ]; then
echo "Generating DNSSEC signing keys..."
@ -84,7 +95,7 @@ if [ ! -f "$STORAGE_ROOT/dns/dnssec/$algo.conf" ]; then
# Create the Key-Signing Key (KSK) (with `-k`) which is the so-called
# Secure Entry Point. The domain name we provide ("_domain_") doesn't
# matter -- we'll use the same keys for all our domains.
# matter -- we'll use the same keys for all our domains.
#
# `ldns-keygen` outputs the new key's filename to stdout, which
# we're capturing into the `KSK` variable.
@ -92,17 +103,22 @@ if [ ! -f "$STORAGE_ROOT/dns/dnssec/$algo.conf" ]; then
# ldns-keygen uses /dev/random for generating random numbers by default.
# This is slow and unnecessary if we ensure /dev/urandom is seeded properly,
# so we use /dev/urandom. See system.sh for an explanation. See #596, #115.
KSK=$(umask 077; cd $STORAGE_ROOT/dns/dnssec; ldns-keygen -r /dev/urandom -a $algo -b 2048 -k _domain_);
# (This previously used -b 2048 but it's unclear if this setting makes sense
# for non-RSA keys, so it's removed. The RSA-based keys are not recommended
# anymore anyway.)
KSK=$(umask 077; cd "$STORAGE_ROOT/dns/dnssec"; ldns-keygen -r /dev/urandom -a $algo -k _domain_);
# Now create a Zone-Signing Key (ZSK) which is expected to be
# rotated more often than a KSK, although we have no plans to
# rotate it (and doing so would be difficult to do without
# disturbing DNS availability.) Omit `-k` and use a shorter key length.
ZSK=$(umask 077; cd $STORAGE_ROOT/dns/dnssec; ldns-keygen -r /dev/urandom -a $algo -b 1024 _domain_);
# disturbing DNS availability.) Omit `-k`.
# (This previously used -b 1024 but it's unclear if this setting makes sense
# for non-RSA keys, so it's removed.)
ZSK=$(umask 077; cd "$STORAGE_ROOT/dns/dnssec"; ldns-keygen -r /dev/urandom -a $algo _domain_);
# These generate two sets of files like:
#
# * `K_domain_.+007+08882.ds`: DS record normally provided to domain name registrar (but it's actually invalid with `_domain_`)
# * `K_domain_.+007+08882.ds`: DS record normally provided to domain name registrar (but it's actually invalid with `_domain_` so we don't use this file)
# * `K_domain_.+007+08882.key`: public key
# * `K_domain_.+007+08882.private`: private key (secret!)
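For illustration, the DS record corresponding to a KSK can be derived from its public key with ldns-key2ds (also in ldnsutils); a sketch against the placeholder _domain_ key generated above:

# Print a SHA-256 DS record for the key-signing key to stdout (illustrative only).
(cd "$STORAGE_ROOT/dns/dnssec"; ldns-key2ds -n -2 "$KSK.key")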
@ -110,7 +126,7 @@ if [ ! -f "$STORAGE_ROOT/dns/dnssec/$algo.conf" ]; then
# options. So we'll store the names of the files we just generated.
# We might have multiple keys down the road. This will identify
# what keys are the current keys.
cat > $STORAGE_ROOT/dns/dnssec/$algo.conf << EOF;
cat > "$STORAGE_ROOT/dns/dnssec/$algo.conf" << EOF;
KSK=$KSK
ZSK=$ZSK
EOF
@ -126,7 +142,7 @@ cat > /etc/cron.daily/mailinabox-dnssec << EOF;
#!/bin/bash
# Mail-in-a-Box
# Re-sign any DNS zones with DNSSEC because the signatures expire periodically.
`pwd`/tools/dns_update
$PWD/tools/dns_update
EOF
chmod +x /etc/cron.daily/mailinabox-dnssec

View File

@ -1,16 +1,17 @@
#!/bin/bash
# If there aren't any mail users yet, create one.
if [ -z "`tools/mail.py user`" ]; then
# The outut of "tools/mail.py user" is a list of mail users. If there
if [ -z "$(management/cli.py user)" ]; then
# The output of "management/cli.py user" is a list of mail users. If there
# aren't any yet, it'll be empty.
# If we didn't ask for an email address at the start, do so now.
if [ -z "$EMAIL_ADDR" ]; then
if [ -z "${EMAIL_ADDR:-}" ]; then
# In an interactive shell, ask the user for an email address.
if [ -z "$NONINTERACTIVE" ]; then
if [ -z "${NONINTERACTIVE:-}" ]; then
input_box "Mail Account" \
"Let's create your first mail account.
\n\nWhat email address do you want?" \
me@`get_default_hostname` \
"me@$(get_default_hostname)" \
EMAIL_ADDR
if [ -z "$EMAIL_ADDR" ]; then
@ -22,7 +23,7 @@ if [ -z "`tools/mail.py user`" ]; then
input_box "Mail Account" \
"That's not a valid email address.
\n\nWhat email address do you want?" \
$EMAIL_ADDR \
"$EMAIL_ADDR" \
EMAIL_ADDR
if [ -z "$EMAIL_ADDR" ]; then
# user hit ESC/cancel
@ -35,7 +36,7 @@ if [ -z "`tools/mail.py user`" ]; then
else
# Use me@PRIMARY_HOSTNAME
EMAIL_ADDR=me@$PRIMARY_HOSTNAME
EMAIL_PW=1234
EMAIL_PW=12345678
echo
echo "Creating a new administrative mail account for $EMAIL_ADDR with password $EMAIL_PW."
echo
@ -47,11 +48,11 @@ if [ -z "`tools/mail.py user`" ]; then
fi
# Create the user's mail account. This will ask for a password if none was given above.
tools/mail.py user add $EMAIL_ADDR $EMAIL_PW
management/cli.py user add "$EMAIL_ADDR" "${EMAIL_PW:-}"
# Make it an admin.
hide_output tools/mail.py user make-admin $EMAIL_ADDR
hide_output management/cli.py user make-admin "$EMAIL_ADDR"
# Create an alias to which we'll direct all automatically-created administrative aliases.
tools/mail.py alias add administrator@$PRIMARY_HOSTNAME $EMAIL_ADDR > /dev/null
fi
management/cli.py alias add "administrator@$PRIMARY_HOSTNAME" "$EMAIL_ADDR" > /dev/null
fi

View File

@ -1,27 +1,39 @@
#!/bin/bash
# Turn on "strict mode." See http://redsymbol.net/articles/unofficial-bash-strict-mode/.
# -e: exit if any command unexpectedly fails.
# -u: exit if we have a variable typo.
# -o pipefail: don't ignore errors in the non-last command in a pipeline
set -euo pipefail
PHP_VER=8.0
function hide_output {
# This function hides the output of a command unless the command fails
# and returns a non-zero exit code.
# Get a temporary file.
OUTPUT=$(tempfile)
OUTPUT=$(mktemp)
# Execute command, redirecting stderr/stdout to the temporary file.
$@ &> $OUTPUT
# Execute command, redirecting stderr/stdout to the temporary file. Since we
# check the return code ourselves, disable 'set -e' temporarily.
set +e
"$@" &> "$OUTPUT"
E=$?
set -e
# If the command failed, show the output that was captured in the temporary file.
E=$?
if [ $E != 0 ]; then
# Something failed.
echo
echo FAILED: $@
echo "FAILED: $*"
echo -----------------------------------------
cat $OUTPUT
cat "$OUTPUT"
echo -----------------------------------------
exit $E
fi
# Remove temporary file.
rm -f $OUTPUT
rm -f "$OUTPUT"
}
function apt_get_quiet {
@ -44,17 +56,16 @@ function apt_install {
# install' for all of the packages. Calling `dpkg` on each package is slow,
# and doesn't affect what we actually do, except in the messages, so let's
# not do that anymore.
PACKAGES=$@
apt_get_quiet install $PACKAGES
apt_get_quiet install "$@"
}
function get_default_hostname {
# Guess the machine's hostname. It should be a fully qualified
# domain name suitable for DNS. None of these calls may provide
# the right value, but it's the best guess we can make.
set -- $(hostname --fqdn 2>/dev/null ||
set -- "$(hostname --fqdn 2>/dev/null ||
hostname --all-fqdns 2>/dev/null ||
hostname 2>/dev/null)
hostname 2>/dev/null)"
printf '%s\n' "$1" # return this value
}
@ -66,7 +77,7 @@ function get_publicip_from_web_service {
#
# Pass '4' or '6' as an argument to this function to specify
# what type of address to get (IPv4, IPv6).
curl -$1 --fail --silent --max-time 15 icanhazip.com 2>/dev/null
curl -"$1" --fail --silent --max-time 15 icanhazip.com 2>/dev/null || /bin/true
}
function get_default_privateip {
@ -109,31 +120,37 @@ function get_default_privateip {
if [ "$1" == "6" ]; then target=2001:4860:4860::8888; fi
# Get the route information.
route=$(ip -$1 -o route get $target | grep -v unreachable)
route=$(ip -"$1" -o route get $target 2>/dev/null | grep -v unreachable)
# Parse the address out of the route information.
address=$(echo $route | sed "s/.* src \([^ ]*\).*/\1/")
address=$(echo "$route" | sed "s/.* src \([^ ]*\).*/\1/")
if [[ "$1" == "6" && $address == fe80:* ]]; then
# For IPv6 link-local addresses, parse the interface out
# of the route information and append it with a '%'.
interface=$(echo $route | sed "s/.* dev \([^ ]*\).*/\1/")
interface=$(echo "$route" | sed "s/.* dev \([^ ]*\).*/\1/")
address=$address%$interface
fi
echo $address
echo "$address"
}
function ufw_allow {
if [ -z "$DISABLE_FIREWALL" ]; then
if [ -z "${DISABLE_FIREWALL:-}" ]; then
# ufw has completely unhelpful output
ufw allow $1 > /dev/null;
ufw allow "$1" > /dev/null;
fi
}
function ufw_limit {
if [ -z "${DISABLE_FIREWALL:-}" ]; then
# ufw has completely unhelpful output
ufw limit "$1" > /dev/null;
fi
}
function restart_service {
hide_output service $1 restart
hide_output service "$1" restart
}
## Dialog Functions ##
@ -145,10 +162,13 @@ function input_box {
# input_box "title" "prompt" "defaultvalue" VARIABLE
# The user's input will be stored in the variable VARIABLE.
# The exit code from dialog will be stored in VARIABLE_EXITCODE.
# Temporarily turn off 'set -e' because we need the dialog return code.
declare -n result=$4
declare -n result_code=$4_EXITCODE
set +e
result=$(dialog --stdout --title "$1" --inputbox "$2" 0 0 "$3")
result_code=$?
set -e
}
function input_menu {
@ -158,8 +178,10 @@ function input_menu {
declare -n result=$4
declare -n result_code=$4_EXITCODE
local IFS=^$'\n'
result=$(dialog --stdout --title "$1" --menu "$2" 0 0 0 $3)
set +e
result=$(dialog --stdout --title "$1" --menu "$2" 0 0 0 "$3")
result_code=$?
set -e
}
function wget_verify {
@ -169,17 +191,17 @@ function wget_verify {
HASH=$2
DEST=$3
CHECKSUM="$HASH $DEST"
rm -f $DEST
wget -q -O $DEST $URL || exit 1
rm -f "$DEST"
hide_output wget -O "$DEST" "$URL"
if ! echo "$CHECKSUM" | sha1sum --check --strict > /dev/null; then
echo "------------------------------------------------------------"
echo "Download of $URL did not match expected checksum."
echo "Found:"
sha1sum $DEST
sha1sum "$DEST"
echo
echo "Expected:"
echo "$CHECKSUM"
rm -f $DEST
rm -f "$DEST"
exit 1
fi
}
@ -195,9 +217,9 @@ function git_clone {
SUBDIR=$3
TARGETPATH=$4
TMPPATH=/tmp/git-clone-$$
rm -rf $TMPPATH $TARGETPATH
git clone -q $REPO $TMPPATH || exit 1
(cd $TMPPATH; git checkout -q $TREEISH;) || exit 1
mv $TMPPATH/$SUBDIR $TARGETPATH
rm -rf $TMPPATH "$TARGETPATH"
git clone -q "$REPO" $TMPPATH || exit 1
(cd $TMPPATH; git checkout -q "$TREEISH";) || exit 1
mv $TMPPATH/"$SUBDIR" "$TARGETPATH"
rm -rf $TMPPATH
}

View File

@ -26,7 +26,7 @@ source /etc/mailinabox.conf # load global vars
echo "Installing Dovecot (IMAP server)..."
apt_install \
dovecot-core dovecot-imapd dovecot-pop3d dovecot-lmtpd dovecot-sqlite sqlite3 \
dovecot-sieve dovecot-managesieved dovecot-lucene
dovecot-sieve dovecot-managesieved
# The `dovecot-imapd`, `dovecot-pop3d`, and `dovecot-lmtpd` packages automatically
# enable IMAP, POP and LMTP protocols.
@ -37,8 +37,16 @@ apt_install \
# of active IMAP connections (at, say, 5 open connections per user that
# would be 20 users). Set it to 250 times the number of cores this
# machine has, so on a two-core machine that's 500 processes/100 users).
# The `default_vsz_limit` is the maximum amount of virtual memory that
# can be allocated. It should be set *reasonably high* to avoid allocation
# issues with larger mailboxes. We're setting it to 1/3 of the total
# available memory (physical mem + swap) to be sure.
# See here for discussion:
# - https://www.dovecot.org/list/dovecot/2012-August/137569.html
# - https://www.dovecot.org/list/dovecot/2011-December/132455.html
tools/editconf.py /etc/dovecot/conf.d/10-master.conf \
default_process_limit=$(echo "`nproc` * 250" | bc) \
default_process_limit="$(($(nproc) * 250))" \
default_vsz_limit="$(($(free -tm | tail -1 | awk '{print $2}') / 3))M" \
log_path=/var/log/mail.log
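The effective values can be read back after setup with doveconf; a small sketch for verification:

# Confirm the limits Dovecot is actually running with.
doveconf default_process_limit
doveconf default_vsz_limit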
# The inotify `max_user_instances` default is 128, which constrains
@ -53,7 +61,7 @@ tools/editconf.py /etc/sysctl.conf \
# username part of the user's email address. We'll ensure that no bad domains or email addresses
# are created within the management daemon.
tools/editconf.py /etc/dovecot/conf.d/10-mail.conf \
mail_location=maildir:$STORAGE_ROOT/mail/mailboxes/%d/%n \
mail_location="maildir:$STORAGE_ROOT/mail/mailboxes/%d/%n" \
mail_privileged_group=mail \
first_valid_uid=0
@ -70,13 +78,17 @@ tools/editconf.py /etc/dovecot/conf.d/10-auth.conf \
"auth_mechanisms=plain login"
# Enable SSL, specify the location of the SSL certificate and private key files.
# Disable obsolete SSL protocols and allow only good ciphers per http://baldric.net/2013/12/07/tls-ciphers-in-postfix-and-dovecot/.
# Use Mozilla's "Intermediate" recommendations at https://ssl-config.mozilla.org/#server=dovecot&server-version=2.2.33&config=intermediate&openssl-version=1.1.1,
# except that the current version of Dovecot does not have a TLSv1.3 setting, so we only use TLSv1.2.
tools/editconf.py /etc/dovecot/conf.d/10-ssl.conf \
ssl=required \
"ssl_cert=<$STORAGE_ROOT/ssl/ssl_certificate.pem" \
"ssl_key=<$STORAGE_ROOT/ssl/ssl_private_key.pem" \
"ssl_protocols=!SSLv3 !SSLv2" \
"ssl_cipher_list=TLSv1+HIGH !SSLv2 !RC4 !aNULL !eNULL !3DES @STRENGTH"
"ssl_min_protocol=TLSv1.2" \
"ssl_cipher_list=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384" \
"ssl_prefer_server_ciphers=no" \
"ssl_dh_parameters_length=2048" \
"ssl_dh=<$STORAGE_ROOT/ssl/dh2048.pem"
# Disable in-the-clear IMAP/POP because there is no reason for a user to transmit
# login credentials outside of an encrypted connection. Only the over-TLS versions
@ -101,17 +113,6 @@ tools/editconf.py /etc/dovecot/conf.d/20-imap.conf \
tools/editconf.py /etc/dovecot/conf.d/20-pop3.conf \
pop3_uidl_format="%08Xu%08Xv"
# Full Text Search - Enable full text search of mail using dovecot's lucene plugin,
# which *we* package and distribute (dovecot-lucene package).
tools/editconf.py /etc/dovecot/conf.d/10-mail.conf \
mail_plugins="\$mail_plugins fts fts_lucene"
cat > /etc/dovecot/conf.d/90-plugin-fts.conf << EOF;
plugin {
fts = lucene
fts_lucene = whitespace_chars=@.
}
EOF
# ### LDA (LMTP)
# Enable Dovecot's LDA service with the LMTP protocol. It will listen
@ -135,15 +136,23 @@ service lmtp {
}
}
# Enable imap-login on localhost to allow the user_external plugin
# for Nextcloud to do imap authentication. (See #1577)
service imap-login {
inet_listener imap {
address = 127.0.0.1
port = 143
}
}
protocol imap {
mail_max_userip_connections = 20
mail_max_userip_connections = 40
}
EOF
# Setting a `postmaster_address` is required or LMTP won't start. An alias
# will be created automatically by our management daemon.
tools/editconf.py /etc/dovecot/conf.d/15-lda.conf \
postmaster_address=postmaster@$PRIMARY_HOSTNAME
"postmaster_address=postmaster@$PRIMARY_HOSTNAME"
# ### Sieve
@ -175,6 +184,7 @@ plugin {
sieve_after = $STORAGE_ROOT/mail/sieve/global_after
sieve = $STORAGE_ROOT/mail/sieve/%d/%n.sieve
sieve_dir = $STORAGE_ROOT/mail/sieve/%d/%n
sieve_redirect_envelope_from = recipient
}
EOF
@ -191,14 +201,14 @@ chown -R mail:dovecot /etc/dovecot
chmod -R o-rwx /etc/dovecot
# Ensure mailbox files have a directory that exists and are owned by the mail user.
mkdir -p $STORAGE_ROOT/mail/mailboxes
chown -R mail.mail $STORAGE_ROOT/mail/mailboxes
mkdir -p "$STORAGE_ROOT/mail/mailboxes"
chown -R mail:mail "$STORAGE_ROOT/mail/mailboxes"
# Same for the sieve scripts.
mkdir -p $STORAGE_ROOT/mail/sieve
mkdir -p $STORAGE_ROOT/mail/sieve/global_before
mkdir -p $STORAGE_ROOT/mail/sieve/global_after
chown -R mail.mail $STORAGE_ROOT/mail/sieve
mkdir -p "$STORAGE_ROOT/mail/sieve"
mkdir -p "$STORAGE_ROOT/mail/sieve/global_before"
mkdir -p "$STORAGE_ROOT/mail/sieve/global_after"
chown -R mail:mail "$STORAGE_ROOT/mail/sieve"
# Allow the IMAP/POP ports in the firewall.
ufw_allow imaps

View File

@ -13,11 +13,11 @@
# destinations according to aliases, and passes email on to
# another service for local mail delivery.
#
# The first hop in local mail delivery is to Spamassassin via
# LMTP. Spamassassin then passes mail over to Dovecot for
# The first hop in local mail delivery is to spampd via
# LMTP. spampd then passes mail over to Dovecot for
# storage in the user's mailbox.
#
# Postfix also listens on port 587 (SMTP+STARTLS) for
# Postfix also listens on ports 465/587 (SMTPS, SMTP+STARTTLS) for
# connections from users who can authenticate and then sends
# their email out to the outside world. Postfix queries Dovecot
# to authenticate users.
@ -41,16 +41,8 @@ source /etc/mailinabox.conf # load global vars
# always will.
# * `ca-certificates`: A trust store used to squelch postfix warnings about
# untrusted opportunistically-encrypted connections.
#
# postgrey is going to come in via the Mail-in-a-Box PPA, which publishes
# a modified version of postgrey that lets senders whitelisted by dnswl.org
# pass through without being greylisted. So please note [dnswl's license terms](https://www.dnswl.org/?page_id=9):
# > Every user with more than 100000 queries per day on the public nameserver
# > infrastructure and every commercial vendor of dnswl.org data (eg through
# > anti-spam solutions) must register with dnswl.org and purchase a subscription.
echo "Installing Postfix (SMTP server)..."
apt_install postfix postfix-pcre postgrey ca-certificates
apt_install postfix postfix-sqlite postfix-pcre postgrey ca-certificates
# ### Basic Settings
@ -63,9 +55,9 @@ apt_install postfix postfix-pcre postgrey ca-certificates
# * Set the SMTP banner (which must have the hostname first, then anything).
tools/editconf.py /etc/postfix/main.cf \
inet_interfaces=all \
smtp_bind_address=$PRIVATE_IP \
smtp_bind_address6=$PRIVATE_IPV6 \
myhostname=$PRIMARY_HOSTNAME\
smtp_bind_address="$PRIVATE_IP" \
smtp_bind_address6="$PRIVATE_IPV6" \
myhostname="$PRIMARY_HOSTNAME"\
smtpd_banner="\$myhostname ESMTP Hi, I'm a Mail-in-a-Box (Ubuntu/Postfix; see https://mailinabox.email/)" \
mydestination=localhost
@ -77,61 +69,99 @@ tools/editconf.py /etc/postfix/main.cf \
maximal_queue_lifetime=2d \
bounce_queue_lifetime=1d
# Guard against SMTP smuggling
# This "long-term" fix is recommended at https://www.postfix.org/smtp-smuggling.html.
# This became supported in a backported fix in package version 3.6.4-1ubuntu1.3. It is
# unnecessary in Postfix 3.9+ where this is the default. The "short-term" workarounds
# that we previously had are reverted to postfix defaults (though smtpd_discard_ehlo_keywords
# was never included in a released version of Mail-in-a-Box).
tools/editconf.py /etc/postfix/main.cf -e \
smtpd_data_restrictions= \
smtpd_discard_ehlo_keywords=
tools/editconf.py /etc/postfix/main.cf \
smtpd_forbid_bare_newline=normalize
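A quick way to confirm the resulting setting once Postfix reloads (a sketch using postconf):

# Should print: smtpd_forbid_bare_newline = normalize
postconf smtpd_forbid_bare_newline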
# ### Outgoing Mail
# Enable the 'submission' port 587 smtpd server and tweak its settings.
# Enable the 'submission' ports 465 and 587 and tweak their settings.
#
# * Enable authentication. It's disabled globally so that it is disabled on port 25,
# so we need to explicitly enable it here.
# * Do not add the OpenDMARC Authentication-Results header. That should only be added
# on incoming mail. Omit the OpenDMARC milter by re-setting smtpd_milters to the
# OpenDKIM milter only. See dkim.sh.
# * Even though we don't allow auth over non-TLS connections (smtpd_tls_auth_only below, and without auth the client can't
# send outbound mail), don't allow non-TLS mail submission on this port anyway to prevent accidental misconfiguration.
# * Require the best ciphers for incoming connections per http://baldric.net/2013/12/07/tls-ciphers-in-postfix-and-dovecot/.
# By putting this setting here we leave opportunistic TLS on incoming mail at default cipher settings (any cipher is better than none).
# Setting smtpd_tls_security_level=encrypt also triggers the use of the 'mandatory' settings below (but this is ignored with smtpd_tls_wrappermode=yes.)
# * Give it a different name in syslog to distinguish it from the port 25 smtpd server.
# * Add a new cleanup service specific to the submission service ('authclean')
# that filters out privacy-sensitive headers on mail being sent out by
# authenticated users.
# authenticated users. By default Postfix also applies this to attached
# emails but we turn this off by setting nested_header_checks empty.
tools/editconf.py /etc/postfix/master.cf -s -w \
"smtps=inet n - - - - smtpd
-o smtpd_tls_wrappermode=yes
-o smtpd_sasl_auth_enable=yes
-o syslog_name=postfix/submission
-o smtpd_milters=inet:127.0.0.1:8891
-o cleanup_service_name=authclean" \
"submission=inet n - - - - smtpd
-o smtpd_sasl_auth_enable=yes
-o syslog_name=postfix/submission
-o smtpd_milters=inet:127.0.0.1:8891
-o smtpd_tls_security_level=encrypt
-o smtpd_tls_ciphers=high -o smtpd_tls_exclude_ciphers=aNULL,DES,3DES,MD5,DES+MD5,RC4 -o smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3
-o cleanup_service_name=authclean" \
"authclean=unix n - - - 0 cleanup
-o header_checks=pcre:/etc/postfix/outgoing_mail_header_filters"
-o header_checks=pcre:/etc/postfix/outgoing_mail_header_filters
-o nested_header_checks="
# Install the `outgoing_mail_header_filters` file required by the new 'authclean' service.
cp conf/postfix_outgoing_mail_header_filters /etc/postfix/outgoing_mail_header_filters
# Modify the `outgoing_mail_header_filters` file to use the local machine name and ip
# Modify the `outgoing_mail_header_filters` file to use the local machine name and ip
# on the first received header line. This may help reduce the spam score of email by
# removing the 127.0.0.1 reference.
sed -i "s/PRIMARY_HOSTNAME/$PRIMARY_HOSTNAME/" /etc/postfix/outgoing_mail_header_filters
sed -i "s/PUBLIC_IP/$PUBLIC_IP/" /etc/postfix/outgoing_mail_header_filters
# Enable TLS on these and all other connections (i.e. ports 25 *and* 587) and
# require TLS before a user is allowed to authenticate. This also makes
# opportunistic TLS available on *incoming* mail.
# Set stronger DH parameters, which via openssl tend to default to 1024 bits
# (see ssl.sh).
# Enable TLS on incoming connections. It is not required on port 25, allowing for opportunistic
# encryption. On ports 465 and 587 it is mandatory (see above). Shared and non-shared settings are
# given here. Shared settings include:
# * Require TLS before a user is allowed to authenticate.
# * Set the path to the server TLS certificate and 2048-bit DH parameters for old DH ciphers.
# For port 25 only:
# * Disable extremely old versions of TLS and extremely unsafe ciphers, but some mail servers out in
# the world are very far behind and if we disable too much, they may not be able to use TLS and
# won't fall back to cleartext. So we don't disable too much. smtpd_tls_exclude_ciphers applies to
# both port 25 and port 587, but because we override the cipher list for both, it probably isn't used.
# Use Mozilla's "Old" recommendations at https://ssl-config.mozilla.org/#server=postfix&server-version=3.3.0&config=old&openssl-version=1.1.1
tools/editconf.py /etc/postfix/main.cf \
smtpd_tls_security_level=may\
smtpd_tls_auth_only=yes \
smtpd_tls_cert_file=$STORAGE_ROOT/ssl/ssl_certificate.pem \
smtpd_tls_key_file=$STORAGE_ROOT/ssl/ssl_private_key.pem \
smtpd_tls_dh1024_param_file=$STORAGE_ROOT/ssl/dh2048.pem \
smtpd_tls_protocols=\!SSLv2,\!SSLv3 \
smtpd_tls_cert_file="$STORAGE_ROOT/ssl/ssl_certificate.pem" \
smtpd_tls_key_file="$STORAGE_ROOT/ssl/ssl_private_key.pem" \
smtpd_tls_dh1024_param_file="$STORAGE_ROOT/ssl/dh2048.pem" \
smtpd_tls_protocols="!SSLv2,!SSLv3" \
smtpd_tls_ciphers=medium \
tls_medium_cipherlist=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA \
smtpd_tls_exclude_ciphers=aNULL,RC4 \
tls_preempt_cipherlist=no \
smtpd_tls_received_header=yes
# For ports 465/587 (via the 'mandatory' settings):
# * Use Mozilla's "Intermediate" TLS recommendations from https://ssl-config.mozilla.org/#server=postfix&server-version=3.3.0&config=intermediate&openssl-version=1.1.1
# using and overriding the "high" cipher list so we don't conflict with the more permissive settings for port 25.
tools/editconf.py /etc/postfix/main.cf \
smtpd_tls_mandatory_protocols="!SSLv2,!SSLv3,!TLSv1,!TLSv1.1" \
smtpd_tls_mandatory_ciphers=high \
tls_high_cipherlist=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384 \
smtpd_tls_mandatory_exclude_ciphers=aNULL,DES,3DES,MD5,DES+MD5,RC4
# Prevent non-authenticated users from sending mail that requires being
# relayed elsewhere. We don't want to be an "open relay". On outbound
# mail, require one of:
#
# * `permit_sasl_authenticated`: Authenticated users (i.e. on port 587).
# * `permit_sasl_authenticated`: Authenticated users (i.e. on port 465/587).
# * `permit_mynetworks`: Mail that originates locally.
# * `reject_unauth_destination`: No one else. (Permits mail whose destination is local and rejects other mail.)
tools/editconf.py /etc/postfix/main.cf \
@ -146,13 +176,17 @@ tools/editconf.py /etc/postfix/main.cf \
# offers it, otherwise it will transmit the message in the clear. Postfix will
# accept whatever SSL certificate the remote end provides. Opportunistic TLS
# protects against passive eavesdropping (but not man-in-the-middle attacks).
# Since we'd rather have poor encryption than none at all, we use Mozilla's
# "Old" recommendations at https://ssl-config.mozilla.org/#server=postfix&server-version=3.3.0&config=old&openssl-version=1.1.1
# for opportunistic encryption but "Intermediate" recommendations when DANE
# is used (see next and above). The cipher lists are set above.
# DANE takes this a step further:
#
# Postfix queries DNS for the TLSA record on the destination MX host. If no TLSA records are found,
# then opportunistic TLS is used. Otherwise the server certificate must match the TLSA records
# or else the mail bounces. TLSA also requires DNSSEC on the MX host. Postfix doesn't do DNSSEC
# itself but assumes the system's nameserver does and reports DNSSEC status. Thus this also
# relies on our local bind9 server being present and `smtp_dns_support_level=dnssec`.
# relies on our local DNS server (see system.sh) and `smtp_dns_support_level=dnssec`.
#
# The `smtp_tls_CAfile` is superfluous, but it eliminates warnings in the logs about untrusted certs,
# which we don't care about seeing because Postfix is doing opportunistic TLS anyway. Better to encrypt,
@ -160,24 +194,29 @@ tools/editconf.py /etc/postfix/main.cf \
# now see notices about trusted certs. The CA file is provided by the package `ca-certificates`.
tools/editconf.py /etc/postfix/main.cf \
smtp_tls_protocols=\!SSLv2,\!SSLv3 \
smtp_tls_mandatory_protocols=\!SSLv2,\!SSLv3 \
smtp_tls_ciphers=medium \
smtp_tls_exclude_ciphers=aNULL,RC4 \
smtp_tls_security_level=dane \
smtp_dns_support_level=dnssec \
smtp_tls_mandatory_protocols="!SSLv2,!SSLv3,!TLSv1,!TLSv1.1" \
smtp_tls_mandatory_ciphers=high \
smtp_tls_CAfile=/etc/ssl/certs/ca-certificates.crt \
smtp_tls_loglevel=2
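The DANE behavior described above can be observed by hand; a minimal sketch (the domain and MX host are hypothetical, and posttls-finger ships with Postfix):
# Does the destination MX publish TLSA records? If so, Postfix's "dane" level applies.
dig +short _25._tcp.mx.example.com TLSA
# Ask Postfix to probe the destination and report the verified security level.
posttls-finger -c -l dane example.com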
# ### Incoming Mail
# Pass any incoming mail over to a local delivery agent. Spamassassin
# will act as the LDA agent at first. It is listening on port 10025
# with LMTP. Spamassassin will pass the mail over to Dovecot after.
# Pass mail to spampd, which acts as the local delivery agent (LDA),
# which then passes the mail over to the Dovecot LMTP server after.
# spampd runs on port 10025 by default.
#
# In a basic setup we would pass mail directly to Dovecot by setting
# virtual_transport to `lmtp:unix:private/dovecot-lmtp`.
#
tools/editconf.py /etc/postfix/main.cf virtual_transport=lmtp:[127.0.0.1]:10025
tools/editconf.py /etc/postfix/main.cf "virtual_transport=lmtp:[127.0.0.1]:10025"
# Clear the lmtp_destination_recipient_limit setting which in previous
# versions of Mail-in-a-Box was set to 1 because of a spampd bug.
# See https://github.com/mail-in-a-box/mailinabox/issues/1523.
tools/editconf.py /etc/postfix/main.cf -e lmtp_destination_recipient_limit=
# Who can send mail to us? Some basic filters.
#
@ -191,14 +230,15 @@ tools/editconf.py /etc/postfix/main.cf virtual_transport=lmtp:[127.0.0.1]:10025
# * `reject_unlisted_recipient`: Although Postfix will reject mail to unknown recipients, it's nicer to reject such mail ahead of greylisting rather than after.
# * `check_policy_service`: Apply greylisting using postgrey.
#
# Note the spamhaus rbl return codes are taken into account as advised here: https://docs.spamhaus.com/datasets/docs/source/40-real-world-usage/PublicMirrors/MTAs/020-Postfix.html
# Notes: #NODOC
# permit_dnswl_client can pass through mail from whitelisted IP addresses, which would be good to put before greylisting #NODOC
# so these IPs get mail delivered quickly. But when an IP is not listed in the permit_dnswl_client list (i.e. it is not #NODOC
# whitelisted) then postfix does a DEFER_IF_REJECT, which results in all "unknown user" sorts of messages turning into #NODOC
# "450 4.7.1 Client host rejected: Service unavailable". This is a retry code, so the mail doesn't properly bounce. #NODOC
tools/editconf.py /etc/postfix/main.cf \
smtpd_sender_restrictions="reject_non_fqdn_sender,reject_unknown_sender_domain,reject_authenticated_sender_login_mismatch,reject_rhsbl_sender dbl.spamhaus.org" \
smtpd_recipient_restrictions=permit_sasl_authenticated,permit_mynetworks,"reject_rbl_client zen.spamhaus.org",reject_unlisted_recipient,"check_policy_service inet:127.0.0.1:10023"
smtpd_sender_restrictions="reject_non_fqdn_sender,reject_unknown_sender_domain,reject_authenticated_sender_login_mismatch,reject_rhsbl_sender dbl.spamhaus.org=127.0.1.[2..99]" \
smtpd_recipient_restrictions="permit_sasl_authenticated,permit_mynetworks,reject_rbl_client zen.spamhaus.org=127.0.0.[2..11],reject_unlisted_recipient,check_policy_service inet:127.0.0.1:10023"
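The Spamhaus return-code ranges referenced above can be checked by hand; as an illustration, the standard DNSBL test entry 127.0.0.2 (queried in reversed form, and assuming your resolver is not blocked by Spamhaus) answers with codes inside the 127.0.0.[2..11] range:
host 2.0.0.127.zen.spamhaus.org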
# Postfix connects to Postgrey on the 127.0.0.1 interface specifically. Ensure that
# Postgrey listens on the same interface (and not IPv6, for instance).
@ -206,9 +246,57 @@ tools/editconf.py /etc/postfix/main.cf \
# As a matter of fact the RFC is not strict about the retry timer, so Postfix and
# other MTAs use their own intervals. To avoid legitimate mail arriving much
# later than necessary, the greylisting delay has been set to
# 180 seconds (default is 300 seconds).
# 180 seconds (default is 300 seconds). We will move the postgrey database
# under $STORAGE_ROOT. This prevents a "warming up" period that would have occurred
# previously with a migrated or reinstalled OS. We will specify this new path
# with the --dbdir=... option. Arguments within POSTGREY_OPTS can not have spaces,
# including dbdir. This is due to the way the init script sources the
# /etc/default/postgrey file. --dbdir=... either needs to be a path without spaces
# (luckily $STORAGE_ROOT does not currently work with spaces), or it needs to be a
# symlink without spaces that can point to a folder with spaces. We'll just assume
# $STORAGE_ROOT won't have spaces to simplify things.
tools/editconf.py /etc/default/postgrey \
POSTGREY_OPTS=\"'--inet=127.0.0.1:10023 --delay=180'\"
POSTGREY_OPTS=\""--inet=127.0.0.1:10023 --delay=180 --dbdir=$STORAGE_ROOT/mail/postgrey/db"\"
# If $STORAGE_ROOT/mail/postgrey/db does not exist yet, move the postgrey database over from the old location
if [ ! -d "$STORAGE_ROOT/mail/postgrey/db" ]; then
# Stop the service
service postgrey stop
# Ensure the new paths for postgrey db exists
mkdir -p "$STORAGE_ROOT/mail/postgrey/db"
# Move over database files
mv /var/lib/postgrey/* "$STORAGE_ROOT/mail/postgrey/db/" || true
fi
# Ensure permissions are set
chown -R postgrey:postgrey "$STORAGE_ROOT/mail/postgrey/"
chmod 700 "$STORAGE_ROOT/mail/postgrey/"{,db}
# We are going to set up a newer whitelist for postgrey; the version included in the distribution is old
cat > /etc/cron.daily/mailinabox-postgrey-whitelist << EOF;
#!/bin/bash
# Mail-in-a-Box
# check we have a postgrey_whitelist_clients file and that it is not older than 28 days
if [ ! -f /etc/postgrey/whitelist_clients ] || find /etc/postgrey/whitelist_clients -mtime +28 | grep -q '.' ; then
# ok, we need to update the file, so let's try to fetch it
if curl https://postgrey.schweikert.ch/pub/postgrey_whitelist_clients --output /tmp/postgrey_whitelist_clients -sS --fail > /dev/null 2>&1 ; then
# if fetching hasn't failed yet then check it is a plain text file
# curl manual states that --fail sometimes still produces output
# this final check will at least check the output is not html
# before moving it into place
if [ "\$(file -b --mime-type /tmp/postgrey_whitelist_clients)" == "text/plain" ]; then
mv /tmp/postgrey_whitelist_clients /etc/postgrey/whitelist_clients
service postgrey restart
else
rm /tmp/postgrey_whitelist_clients
fi
fi
fi
EOF
chmod +x /etc/cron.daily/mailinabox-postgrey-whitelist
/etc/cron.daily/mailinabox-postgrey-whitelist
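Once postgrey has been (re)started with the new options, a quick way to confirm it is listening on the expected socket and using the relocated database (illustrative only, not part of the setup script):
ss -ltnp | grep ':10023'
ls "$STORAGE_ROOT/mail/postgrey/db"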
# Increase the message size limit from 10MB to 128MB.
# The same limit is specified in nginx.conf for mail submitted via webmail and Z-Push.
@ -218,6 +306,7 @@ tools/editconf.py /etc/postfix/main.cf \
# Allow the two SMTP ports in the firewall.
ufw_allow smtp
ufw_allow smtps
ufw_allow submission
# Restart services

View File

@ -18,10 +18,12 @@ source /etc/mailinabox.conf # load global vars
db_path=$STORAGE_ROOT/mail/users.sqlite
# Create an empty database if it doesn't yet exist.
if [ ! -f $db_path ]; then
echo Creating new user database: $db_path;
echo "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, email TEXT NOT NULL UNIQUE, password TEXT NOT NULL, extra, privileges TEXT NOT NULL DEFAULT '');" | sqlite3 $db_path;
echo "CREATE TABLE aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE, destination TEXT NOT NULL, permitted_senders TEXT);" | sqlite3 $db_path;
if [ ! -f "$db_path" ]; then
echo "Creating new user database: $db_path";
echo "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, email TEXT NOT NULL UNIQUE, password TEXT NOT NULL, extra, privileges TEXT NOT NULL DEFAULT '');" | sqlite3 "$db_path";
echo "CREATE TABLE aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE, destination TEXT NOT NULL, permitted_senders TEXT);" | sqlite3 "$db_path";
echo "CREATE TABLE mfa (id INTEGER PRIMARY KEY AUTOINCREMENT, user_id INTEGER NOT NULL, type TEXT NOT NULL, secret TEXT NOT NULL, mru_token TEXT, label TEXT, FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE);" | sqlite3 "$db_path";
echo "CREATE TABLE auto_aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE, destination TEXT NOT NULL, permitted_senders TEXT);" | sqlite3 "$db_path";
fi
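For an optional sanity check of the resulting schema (illustrative, not part of the setup script):
sqlite3 "$db_path" ".tables"
sqlite3 "$db_path" ".schema users"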
# ### User Authentication
@ -65,11 +67,15 @@ service auth {
}
EOF
# And have Postfix use that service.
# And have Postfix use that service. We *disable* it here
# so that authentication is not permitted on port 25 (which
# does not run DKIM on relayed mail, so outbound mail isn't
# correct, see #830), but we enable it specifically for the
# submission port.
tools/editconf.py /etc/postfix/main.cf \
smtpd_sasl_type=dovecot \
smtpd_sasl_path=private/auth \
smtpd_sasl_auth_enable=yes
smtpd_sasl_auth_enable=no
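Authentication is expected to be re-enabled only on the submission service through a master.cf override elsewhere in this script (assumed, not shown in this hunk); a quick way to inspect that service's overrides, if it is defined, is:
postconf -Mf submission/inet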
# ### Sender Validation
@ -95,8 +101,12 @@ EOF
# ### Destination Validation
# Use a Sqlite3 database to check whether a destination email address exists,
# and to perform any email alias rewrites in Postfix.
# and to perform any email alias rewrites in Postfix. Additionally, we disable
# SMTPUTF8 because Dovecot's LMTP server that delivers mail to inboxes does
# not support it, and if a message is received with the SMTPUTF8 flag it will
# bounce.
tools/editconf.py /etc/postfix/main.cf \
smtputf8_enable=no \
virtual_mailbox_domains=sqlite:/etc/postfix/virtual-mailbox-domains.cf \
virtual_mailbox_maps=sqlite:/etc/postfix/virtual-mailbox-maps.cf \
virtual_alias_maps=sqlite:/etc/postfix/virtual-alias-maps.cf \
@ -105,7 +115,7 @@ tools/editconf.py /etc/postfix/main.cf \
# SQL statement to check if we handle incoming mail for a domain, either for users or aliases.
cat > /etc/postfix/virtual-mailbox-domains.cf << EOF;
dbpath=$db_path
query = SELECT 1 FROM users WHERE email LIKE '%%@%s' UNION SELECT 1 FROM aliases WHERE source LIKE '%%@%s'
query = SELECT 1 FROM users WHERE email LIKE '%%@%s' UNION SELECT 1 FROM aliases WHERE source LIKE '%%@%s' UNION SELECT 1 FROM auto_aliases WHERE source LIKE '%%@%s'
EOF
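Postfix resolves this map through its sqlite dictionary; assuming sqlite map support is installed, the same lookup can be exercised with postmap (example.com is a hypothetical domain):
postmap -q example.com sqlite:/etc/postfix/virtual-mailbox-domains.cf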
# SQL statement to check if we handle incoming mail for a user.
@ -140,7 +150,7 @@ EOF
# empty destination here so that other lower priority rules might match.
cat > /etc/postfix/virtual-alias-maps.cf << EOF;
dbpath=$db_path
query = SELECT destination from (SELECT destination, 0 as priority FROM aliases WHERE source='%s' AND destination<>'' UNION SELECT email as destination, 1 as priority FROM users WHERE email='%s') ORDER BY priority LIMIT 1;
query = SELECT destination from (SELECT destination, 0 as priority FROM aliases WHERE source='%s' AND destination<>'' UNION SELECT email as destination, 1 as priority FROM users WHERE email='%s' UNION SELECT destination, 2 as priority FROM auto_aliases WHERE source='%s' AND destination<>'') ORDER BY priority LIMIT 1;
EOF
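The ORDER BY priority above means an explicit alias (priority 0) wins over a user's own mailbox (priority 1), which in turn wins over an auto-generated alias (priority 2). An illustrative lookup (the address is hypothetical):
postmap -q user@example.com sqlite:/etc/postfix/virtual-alias-maps.cf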
# Restart Services

View File

@ -1,56 +1,122 @@
#!/bin/bash
source setup/functions.sh
source /etc/mailinabox.conf # load global vars
echo "Installing Mail-in-a-Box system management daemon..."
# Install packages.
# flask, yaml, dnspython, and dateutil are all for our Python 3 management daemon itself.
# duplicity does backups. python-pip is so we can 'pip install boto' for Python 2, for duplicity, so it can do backups to AWS S3.
apt_install python3-flask links duplicity libyaml-dev python3-dnspython python3-dateutil python-pip
# DEPENDENCIES
# These are required to pip install cryptography.
apt_install build-essential libssl-dev libffi-dev python3-dev
# duplicity is used to make backups of user data.
#
# virtualenv is used to isolate the Python 3 packages we
# install via pip from the system-installed packages.
#
# certbot installs EFF's certbot which we use to
# provision free TLS certificates.
apt_install duplicity python3-pip virtualenv certbot rsync
# b2sdk is used for backblaze backups.
# boto3 is used for amazon aws backups.
# Both are installed outside the pipenv, so they can be used by duplicity
hide_output pip3 install --upgrade b2sdk boto3
# Create a virtualenv for the installation of Python 3 packages
# used by the management daemon.
inst_dir=/usr/local/lib/mailinabox
mkdir -p $inst_dir
venv=$inst_dir/env
if [ ! -d $venv ]; then
# A bug specific to Ubuntu 22.04 and Python 3.10 requires
# forcing a virtualenv directory layout option (see #2335
# and https://github.com/pypa/virtualenv/pull/2415). In
# our issue, reportedly installing python3-distutils didn't
# fix the problem.
export DEB_PYTHON_INSTALL_LAYOUT='deb'
hide_output virtualenv -ppython3 $venv
fi
# Upgrade pip because the Ubuntu-packaged version is out of date.
hide_output $venv/bin/pip install --upgrade pip
# Install other Python 3 packages used by the management daemon.
# The first line is the packages that Josh maintains himself!
# NOTE: email_validator is repeated in setup/questions.sh, so please keep the versions synced.
hide_output pip3 install --upgrade \
rtyaml "email_validator>=1.0.0" "free_tls_certificates>=0.1.3" \
"idna>=2.0.0" "cryptography>=1.0.2" boto psutil
hide_output $venv/bin/pip install --upgrade \
rtyaml "email_validator>=1.0.0" "exclusiveprocess" \
flask dnspython python-dateutil expiringdict gunicorn \
qrcode[pil] pyotp \
"idna>=2.0.0" "cryptography==37.0.2" psutil postfix-mta-sts-resolver \
b2sdk boto3
# duplicity uses python 2 so we need to get the python 2 package of boto to have backups to S3.
# boto from the Ubuntu package manager is too out-of-date -- it doesn't support the newer
# S3 api used in some regions, which breaks backups to those regions. See #627, #653.
hide_output pip install --upgrade boto
# CONFIGURATION
# Create a backup directory and a random key for encrypting backups.
mkdir -p $STORAGE_ROOT/backup
if [ ! -f $STORAGE_ROOT/backup/secret_key.txt ]; then
$(umask 077; openssl rand -base64 2048 > $STORAGE_ROOT/backup/secret_key.txt)
mkdir -p "$STORAGE_ROOT/backup"
if [ ! -f "$STORAGE_ROOT/backup/secret_key.txt" ]; then
(umask 077; openssl rand -base64 2048 > "$STORAGE_ROOT/backup/secret_key.txt")
fi
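The key is generated only once; an optional check that it exists with owner-only permissions (illustrative, not part of the setup script):
stat -c '%a %U %s' "$STORAGE_ROOT/backup/secret_key.txt"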
# Link the management server daemon into a well known location.
rm -f /usr/local/bin/mailinabox-daemon
ln -s `pwd`/management/daemon.py /usr/local/bin/mailinabox-daemon
# Download jQuery and Bootstrap local files
# Make sure we have the directory to save to.
assets_dir=$inst_dir/vendor/assets
rm -rf $assets_dir
mkdir -p $assets_dir
# jQuery CDN URL
jquery_version=2.2.4
jquery_url=https://code.jquery.com
# Get jQuery
wget_verify $jquery_url/jquery-$jquery_version.min.js 69bb69e25ca7d5ef0935317584e6153f3fd9a88c $assets_dir/jquery.min.js
# Bootstrap CDN URL
bootstrap_version=3.4.1
bootstrap_url=https://github.com/twbs/bootstrap/releases/download/v$bootstrap_version/bootstrap-$bootstrap_version-dist.zip
# Get Bootstrap
wget_verify $bootstrap_url 0bb64c67c2552014d48ab4db81c2e8c01781f580 /tmp/bootstrap.zip
unzip -q /tmp/bootstrap.zip -d $assets_dir
mv $assets_dir/bootstrap-$bootstrap_version-dist $assets_dir/bootstrap
rm -f /tmp/bootstrap.zip
# Create an init script to start the management daemon and keep it
# running after a reboot.
rm -f /etc/init.d/mailinabox
ln -s $(pwd)/conf/management-initscript /etc/init.d/mailinabox
hide_output update-rc.d mailinabox defaults
# Set a long timeout since some commands take a while to run, matching
# the timeout we set for PHP (fastcgi_read_timeout in the nginx confs).
# Note: Authentication currently breaks with more than 1 gunicorn worker.
cat > $inst_dir/start <<EOF;
#!/bin/bash
# Set character encoding flags to ensure that any non-ASCII characters don't cause problems.
export LANGUAGE=en_US.UTF-8
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_TYPE=en_US.UTF-8
# Remove old files we no longer use.
rm -f /etc/cron.daily/mailinabox-backup
rm -f /etc/cron.daily/mailinabox-statuschecks
mkdir -p /var/lib/mailinabox
tr -cd '[:xdigit:]' < /dev/urandom | head -c 32 > /var/lib/mailinabox/api.key
chmod 640 /var/lib/mailinabox/api.key
source $venv/bin/activate
export PYTHONPATH=$PWD/management
exec gunicorn -b localhost:10222 -w 1 --timeout 630 wsgi:app
EOF
chmod +x $inst_dir/start
cp --remove-destination conf/mailinabox.service /lib/systemd/system/mailinabox.service # target was previously a symlink so remove it first
hide_output systemctl link -f /lib/systemd/system/mailinabox.service
hide_output systemctl daemon-reload
hide_output systemctl enable mailinabox.service
# Perform nightly tasks at 3am in system time: take a backup, run
# status checks and email the administrator any changes.
minute=$((RANDOM % 60)) # avoid overloading mailinabox.email
cat > /etc/cron.d/mailinabox-nightly << EOF;
# Mail-in-a-Box --- Do not edit / will be overwritten on update.
# Run nightly tasks: backup, status checks.
0 3 * * * root (cd `pwd` && management/daily_tasks.sh)
$minute 3 * * * root (cd $PWD && management/daily_tasks.sh)
EOF
# Start the management server.

View File

@ -9,6 +9,7 @@ import sys, os, os.path, glob, re, shutil
sys.path.insert(0, 'management')
from utils import load_environment, save_environment, shell
import contextlib
def migration_1(env):
# Re-arrange where we store SSL certificates. There was a typo also.
@ -31,10 +32,8 @@ def migration_1(env):
move_file(sslfn, domain_name, file_type)
# Move the old domains directory if it is now empty.
try:
with contextlib.suppress(Exception):
os.rmdir(os.path.join( env["STORAGE_ROOT"], 'ssl/domains'))
except:
pass
def migration_2(env):
# Delete the .dovecot_sieve script everywhere. This was formerly a copy of our spam -> Spam
@ -137,6 +136,62 @@ def migration_10(env):
shutil.move(sslcert, newname)
os.rmdir(d)
def migration_11(env):
# Archive the old Let's Encrypt account directory managed by free_tls_certificates
# because we'll use that path now for the directory managed by certbot.
try:
old_path = os.path.join(env["STORAGE_ROOT"], 'ssl', 'lets_encrypt')
new_path = os.path.join(env["STORAGE_ROOT"], 'ssl', 'lets_encrypt-old')
shutil.move(old_path, new_path)
except:
# meh
pass
def migration_12(env):
# Upgrading to Carddav Roundcube plugin to version 3+, it requires the carddav_*
# tables to be dropped.
# Checking that the roundcube database already exists.
if os.path.exists(os.path.join(env["STORAGE_ROOT"], "mail/roundcube/roundcube.sqlite")):
import sqlite3
conn = sqlite3.connect(os.path.join(env["STORAGE_ROOT"], "mail/roundcube/roundcube.sqlite"))
c = conn.cursor()
# Get a list of all the tables that begin with 'carddav_'
c.execute("SELECT name FROM sqlite_master WHERE type = ? AND name LIKE ?", ('table', 'carddav_%'))
carddav_tables = c.fetchall()
# If there were tables that begin with 'carddav_', drop them
if carddav_tables:
for table in carddav_tables:
try:
table = table[0]
c = conn.cursor()
dropcmd = "DROP TABLE %s" % table
c.execute(dropcmd)
except:
print("Failed to drop table", table)
# Save.
conn.commit()
conn.close()
# Delete all sessions, requiring users to log in again to recreate carddav_*
# databases
conn = sqlite3.connect(os.path.join(env["STORAGE_ROOT"], "mail/roundcube/roundcube.sqlite"))
c = conn.cursor()
c.execute("delete from session;")
conn.commit()
conn.close()
def migration_13(env):
# Add the "mfa" table for configuring MFA for login to the control panel.
db = os.path.join(env["STORAGE_ROOT"], 'mail/users.sqlite')
shell("check_call", ["sqlite3", db, "CREATE TABLE mfa (id INTEGER PRIMARY KEY AUTOINCREMENT, user_id INTEGER NOT NULL, type TEXT NOT NULL, secret TEXT NOT NULL, mru_token TEXT, label TEXT, FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE);"])
def migration_14(env):
# Add the "auto_aliases" table.
db = os.path.join(env["STORAGE_ROOT"], 'mail/users.sqlite')
shell("check_call", ["sqlite3", db, "CREATE TABLE auto_aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE, destination TEXT NOT NULL, permitted_senders TEXT);"])
###########################################################
def get_current_migration():
ver = 0
while True:
@ -156,8 +211,8 @@ def run_migrations():
migration_id_file = os.path.join(env['STORAGE_ROOT'], 'mailinabox.version')
migration_id = None
if os.path.exists(migration_id_file):
with open(migration_id_file) as f:
migration_id = f.read().strip();
with open(migration_id_file, encoding='utf-8') as f:
migration_id = f.read().strip()
if migration_id is None:
# Load the legacy location of the migration ID. We'll drop support
@ -166,7 +221,7 @@ def run_migrations():
if migration_id is None:
print()
print("%s file doesn't exists. Skipping migration..." % (migration_id_file,))
print(f"{migration_id_file} file doesn't exists. Skipping migration...")
return
ourver = int(migration_id)
@ -197,7 +252,7 @@ def run_migrations():
# Write out our current version now. Do this sooner rather than later
# in case of any problems.
with open(migration_id_file, "w") as f:
with open(migration_id_file, "w", encoding='utf-8') as f:
f.write(str(ourver) + "\n")
# Delete the legacy location of this field.

View File

@ -29,20 +29,22 @@ address 127.0.0.1
# send alerts to the following address
contacts admin
contact.admin.command mail -s "Munin notification ${var:host}" administrator@$PRIMARY_HOSTNAME
contact.admin.command mail -s "Munin notification \${var:host}" administrator@$PRIMARY_HOSTNAME
contact.admin.always_send warning critical
EOF
# The Debian installer touches these files and chowns them to www-data:adm for use with spawn-fcgi
chown munin. /var/log/munin/munin-cgi-html.log
chown munin. /var/log/munin/munin-cgi-graph.log
chown munin /var/log/munin/munin-cgi-html.log
chown munin /var/log/munin/munin-cgi-graph.log
# ensure munin-node knows the name of this machine
# and reduce logging level to warning
tools/editconf.py /etc/munin/munin-node.conf -s \
host_name=$PRIMARY_HOSTNAME
host_name="$PRIMARY_HOSTNAME" \
log_level=1
# Update the activated plugins through munin's autoconfiguration.
munin-node-configure --shell --remove-also 2>/dev/null | sh
munin-node-configure --shell --remove-also 2>/dev/null | sh || /bin/true
# Deactivate monitoring of NTP peers. Not sure why anyone would want to monitor an NTP peer. The addresses seem to change
# (which is taken care of by munin-node-configure, but only when we re-run it).
@ -50,15 +52,24 @@ find /etc/munin/plugins/ -lname /usr/share/munin/plugins/ntp_ -print0 | xargs -0
# Deactivate monitoring of network interfaces that are not up. Otherwise we can get a lot of empty charts.
for f in $(find /etc/munin/plugins/ \( -lname /usr/share/munin/plugins/if_ -o -lname /usr/share/munin/plugins/if_err_ -o -lname /usr/share/munin/plugins/bonding_err_ \)); do
IF=$(echo $f | sed s/.*_//);
if ! ifquery $IF >/dev/null 2>/dev/null; then
rm $f;
IF=$(echo "$f" | sed s/.*_//);
if ! grep -qFx up "/sys/class/net/$IF/operstate" 2>/dev/null; then
rm "$f";
fi;
done
# Create a 'state' directory. Not sure why we need to do this manually.
mkdir -p /var/lib/munin-node/plugin-state/
# Create a systemd service for munin.
ln -sf "$PWD/management/munin_start.sh" /usr/local/lib/mailinabox/munin_start.sh
chmod 0744 /usr/local/lib/mailinabox/munin_start.sh
cp --remove-destination conf/munin.service /lib/systemd/system/munin.service # target was previously a symlink so remove first
hide_output systemctl link -f /lib/systemd/system/munin.service
hide_output systemctl daemon-reload
hide_output systemctl unmask munin.service
hide_output systemctl enable munin.service
# Restart services.
restart_service munin
restart_service munin-node
@ -66,4 +77,8 @@ restart_service munin-node
# generate initial statistics so the directory isn't empty
# (We get "Pango-WARNING **: error opening config file '/root/.config/pango/pangorc': Permission denied"
# if we don't explicitly set the HOME directory when sudo'ing.)
sudo -H -u munin munin-cron
# We check to see if munin-cron is already running; if it is, there is no need to run it
# simultaneously and generate an error.
if [ ! -f /var/run/munin/munin-update.lock ]; then
sudo -H -u munin munin-cron
fi

View File

@ -1,3 +1,4 @@
#!/bin/bash
# Install the 'host', 'sed', and 'nc' tools. This script is run before
# the rest of the system setup so we may not yet have things installed.
apt_get_quiet install bind9-host sed netcat-openbsd
@ -6,7 +7,7 @@ apt_get_quiet install bind9-host sed netcat-openbsd
# The user might have chosen a name that was previously in use by a spammer
# and will not be able to reliably send mail. Do this after any automatic
# choices made above.
if host $PRIMARY_HOSTNAME.dbl.spamhaus.org > /dev/null; then
if host "$PRIMARY_HOSTNAME.dbl.spamhaus.org" > /dev/null; then
echo
echo "The hostname you chose '$PRIMARY_HOSTNAME' is listed in the"
echo "Spamhaus Domain Block List. See http://www.spamhaus.org/dbl/"
@ -22,8 +23,8 @@ fi
# The user might have ended up on an IP address that was previously in use
# by a spammer, or the user may be deploying on a residential network. We
# will not be able to reliably send mail in these cases.
REVERSED_IPV4=$(echo $PUBLIC_IP | sed "s/\([0-9]*\).\([0-9]*\).\([0-9]*\).\([0-9]*\)/\4.\3.\2.\1/")
if host $REVERSED_IPV4.zen.spamhaus.org > /dev/null; then
REVERSED_IPV4=$(echo "$PUBLIC_IP" | sed "s/\([0-9]*\).\([0-9]*\).\([0-9]*\).\([0-9]*\)/\4.\3.\2.\1/")
if host "$REVERSED_IPV4.zen.spamhaus.org" > /dev/null; then
echo
echo "The IP address $PUBLIC_IP is listed in the Spamhaus Block List."
echo "See http://www.spamhaus.org/query/ip/$PUBLIC_IP."

454
setup/nextcloud.sh Executable file
View File

@ -0,0 +1,454 @@
#!/bin/bash
# Nextcloud
##########################
source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars
# ### Installing Nextcloud
echo "Installing Nextcloud (contacts/calendar)..."
# Nextcloud core and app (plugin) versions to install.
# With each version we store a hash to ensure we install what we expect.
# Nextcloud core
# --------------
# * See https://nextcloud.com/changelog for the latest version.
# * Check https://docs.nextcloud.com/server/latest/admin_manual/installation/system_requirements.html
# for whether it supports the version of PHP available on this machine.
# * Since Nextcloud only supports upgrades from consecutive major versions,
# we automatically install intermediate versions as needed.
# * The hash is the SHA1 hash of the ZIP package, which you can find by just running this script and
# copying it from the error message when it doesn't match what is below.
nextcloud_ver=26.0.12
nextcloud_hash=b55e9f51171c0a9b9ab3686cf5c8ad1a4292ca15
# Nextcloud apps
# --------------
# * Find the most recent tag that is compatible with the Nextcloud version above by:
# https://github.com/nextcloud-releases/contacts/tags
# https://github.com/nextcloud-releases/calendar/tags
# https://github.com/nextcloud/user_external/tags
#
# * For these three packages, contacts, calendar and user_external, the hash is the SHA1 hash of
# the ZIP package, which you can find by just running this script and copying it from
# the error message when it doesn't match what is below:
# Always ensure the versions are supported, see https://apps.nextcloud.com/apps/contacts
contacts_ver=5.5.3
contacts_hash=799550f38e46764d90fa32ca1a6535dccd8316e5
# Always ensure the versions are supported, see https://apps.nextcloud.com/apps/calendar
calendar_ver=4.6.6
calendar_hash=e34a71669a52d997e319d64a984dcd041389eb22
# Always ensure the versions are supported, see https://apps.nextcloud.com/apps/user_external
user_external_ver=3.2.0
user_external_hash=a494073dcdecbbbc79a9c77f72524ac9994d2eec
# Developer advice (test plan)
# ----------------------------
# When upgrading above versions, how to test?
#
# 1. Enter your server instance (or the Vagrant image)
# 2. Git clone <your fork>
# 3. Git checkout <your fork>
# 4. Run `sudo ./setup/nextcloud.sh`
# 5. Ensure the installation completes. If any hashes mismatch, correct them.
# 6. Enter the Nextcloud web interface and run the following tests:
# 6.1 You can still create, edit and delete contacts
# 6.2 You can still create, edit and delete calendar events
# 6.3 You can still create, edit and delete users
# 6.4 Go to Administration > Logs and ensure no new errors are shown
# Clear prior packages and install dependencies from apt.
apt-get purge -qq -y owncloud* # we used to use the package manager
apt_install curl php"${PHP_VER}" php"${PHP_VER}"-fpm \
php"${PHP_VER}"-cli php"${PHP_VER}"-sqlite3 php"${PHP_VER}"-gd php"${PHP_VER}"-imap php"${PHP_VER}"-curl \
php"${PHP_VER}"-dev php"${PHP_VER}"-gd php"${PHP_VER}"-xml php"${PHP_VER}"-mbstring php"${PHP_VER}"-zip php"${PHP_VER}"-apcu \
php"${PHP_VER}"-intl php"${PHP_VER}"-imagick php"${PHP_VER}"-gmp php"${PHP_VER}"-bcmath
# Enable APC before Nextcloud tools are run.
tools/editconf.py /etc/php/"$PHP_VER"/mods-available/apcu.ini -c ';' \
apc.enabled=1 \
apc.enable_cli=1
InstallNextcloud() {
version=$1
hash=$2
version_contacts=$3
hash_contacts=$4
version_calendar=$5
hash_calendar=$6
version_user_external=${7:-}
hash_user_external=${8:-}
echo
echo "Upgrading to Nextcloud version $version"
echo
# Download and verify
wget_verify "https://download.nextcloud.com/server/releases/nextcloud-$version.zip" "$hash" /tmp/nextcloud.zip
# Remove the current owncloud/Nextcloud
rm -rf /usr/local/lib/owncloud
# Extract ownCloud/Nextcloud
unzip -q /tmp/nextcloud.zip -d /usr/local/lib
mv /usr/local/lib/nextcloud /usr/local/lib/owncloud
rm -f /tmp/nextcloud.zip
# The two apps we actually want are not in Nextcloud core. Download the releases from
# their github repositories.
mkdir -p /usr/local/lib/owncloud/apps
wget_verify "https://github.com/nextcloud-releases/contacts/archive/refs/tags/v$version_contacts.tar.gz" "$hash_contacts" /tmp/contacts.tgz
tar xf /tmp/contacts.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/contacts.tgz
wget_verify "https://github.com/nextcloud-releases/calendar/archive/refs/tags/v$version_calendar.tar.gz" "$hash_calendar" /tmp/calendar.tgz
tar xf /tmp/calendar.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/calendar.tgz
# Starting with Nextcloud 15, the app user_external is no longer included in Nextcloud core,
# so we install it from its GitHub repository.
if [ -n "$version_user_external" ]; then
wget_verify "https://github.com/nextcloud-releases/user_external/releases/download/v$version_user_external/user_external-v$version_user_external.tar.gz" "$hash_user_external" /tmp/user_external.tgz
tar -xf /tmp/user_external.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/user_external.tgz
fi
# Fix weird permissions.
chmod 750 /usr/local/lib/owncloud/{apps,config}
# Create a symlink to the config.php in STORAGE_ROOT (for upgrades we're restoring the symlink we previously
# put in, and in new installs we're creating a symlink and will create the actual config later).
ln -sf "$STORAGE_ROOT/owncloud/config.php" /usr/local/lib/owncloud/config/config.php
# Make sure permissions are correct or the upgrade step won't run.
# $STORAGE_ROOT/owncloud may not yet exist, so use -f to suppress
# that error.
chown -f -R www-data:www-data "$STORAGE_ROOT/owncloud /usr/local/lib/owncloud" || /bin/true
# If this isn't a new installation, immediately run the upgrade script.
# Then check for success (0=ok and 3=no upgrade needed, both are success).
if [ -e "$STORAGE_ROOT/owncloud/owncloud.db" ]; then
# ownCloud 8.1.1 broke upgrades. It may fail on the first attempt, but
# that can be OK.
sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/occ upgrade
E=$?
if [ $E -ne 0 ] && [ $E -ne 3 ]; then
echo "Trying ownCloud upgrade again to work around ownCloud upgrade bug..."
sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/occ upgrade
E=$?
if [ $E -ne 0 ] && [ $E -ne 3 ]; then exit 1; fi
sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/occ maintenance:mode --off
echo "...which seemed to work."
fi
# Add missing indices. NextCloud didn't include this in the normal upgrade because it might take some time.
sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/occ db:add-missing-indices
sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/occ db:add-missing-primary-keys
# Run conversion to BigInt identifiers, this process may take some time on large tables.
sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/occ db:convert-filecache-bigint --no-interaction
fi
}
# Current Nextcloud Version, #1623
# Checking /usr/local/lib/owncloud/version.php shows the version of the Nextcloud application, not of the database.
# $STORAGE_ROOT/owncloud is kept together even during a backup. It is better to rely on config.php than
# version.php since the restore procedure can leave the system in a state where you have a newer Nextcloud
# application version than the database.
# If config.php exists, get version number, otherwise CURRENT_NEXTCLOUD_VER is empty.
if [ -f "$STORAGE_ROOT/owncloud/config.php" ]; then
CURRENT_NEXTCLOUD_VER=$(php"$PHP_VER" -r "include(\"$STORAGE_ROOT/owncloud/config.php\"); echo(\$CONFIG['version']);")
else
CURRENT_NEXTCLOUD_VER=""
fi
# If the Nextcloud directory is missing (it has never been installed before), or the Nextcloud version to be installed
# is different from the version currently installed, do the install/upgrade.
if [ ! -d /usr/local/lib/owncloud/ ] || [[ ! ${CURRENT_NEXTCLOUD_VER} =~ ^$nextcloud_ver ]]; then
# Stop php-fpm if it is running. If it is not running (which happens after a previously failed install), don't bail.
service php"$PHP_VER"-fpm stop &> /dev/null || /bin/true
# Backup the existing ownCloud/Nextcloud.
# Create a backup directory to store the current installation and database to
BACKUP_DIRECTORY=$STORAGE_ROOT/owncloud-backup/$(date +"%Y-%m-%d-%T")
mkdir -p "$BACKUP_DIRECTORY"
if [ -d /usr/local/lib/owncloud/ ]; then
echo "Upgrading Nextcloud --- backing up existing installation, configuration, and database to directory to $BACKUP_DIRECTORY..."
cp -r /usr/local/lib/owncloud "$BACKUP_DIRECTORY/owncloud-install"
fi
if [ -e "$STORAGE_ROOT/owncloud/owncloud.db" ]; then
cp "$STORAGE_ROOT/owncloud/owncloud.db" "$BACKUP_DIRECTORY"
fi
if [ -e "$STORAGE_ROOT/owncloud/config.php" ]; then
cp "$STORAGE_ROOT/owncloud/config.php" "$BACKUP_DIRECTORY"
fi
# If ownCloud or Nextcloud was previously installed....
if [ -n "${CURRENT_NEXTCLOUD_VER}" ]; then
# Database migrations from ownCloud are no longer possible because ownCloud cannot be run under
# PHP 7.
if [ -e "$STORAGE_ROOT/owncloud/config.php" ]; then
# Remove the read-onlyness of the config, which is needed for migrations, especially for v24
sed -i -e '/config_is_read_only/d' "$STORAGE_ROOT/owncloud/config.php"
fi
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^[89] ]]; then
echo "Upgrades from Mail-in-a-Box prior to v0.28 (dated July 30, 2018) with Nextcloud < 13.0.6 (you have ownCloud 8 or 9) are not supported. Upgrade to Mail-in-a-Box version v0.30 first. Setup will continue, but skip the Nextcloud migration."
return 0
elif [[ ${CURRENT_NEXTCLOUD_VER} =~ ^1[012] ]]; then
echo "Upgrades from Mail-in-a-Box prior to v0.28 (dated July 30, 2018) with Nextcloud < 13.0.6 (you have ownCloud 10, 11 or 12) are not supported. Upgrade to Mail-in-a-Box version v0.30 first. Setup will continue, but skip the Nextcloud migration."
return 0
elif [[ ${CURRENT_NEXTCLOUD_VER} =~ ^1[3456789] ]]; then
echo "Upgrades from Mail-in-a-Box prior to v60 with Nextcloud 19 or earlier are not supported. Upgrade to the latest Mail-in-a-Box version supported on your machine first. Setup will continue, but skip the Nextcloud migration."
return 0
fi
# Hint: whenever you bump, remember this:
# - Run a server with the previous version
# - On a new if-else block, copy the versions/hashes from the previous version
# - Run sudo ./setup/start.sh on the new machine. Upon completion, test its basic functionalities.
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^20 ]]; then
InstallNextcloud 21.0.7 f5c7079c5b56ce1e301c6a27c0d975d608bb01c9 4.0.7 45e7cf4bfe99cd8d03625cf9e5a1bb2e90549136 3.0.4 d0284b68135777ec9ca713c307216165b294d0fe
CURRENT_NEXTCLOUD_VER="21.0.7"
fi
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^21 ]]; then
InstallNextcloud 22.2.6 9d39741f051a8da42ff7df46ceef2653a1dc70d9 4.1.0 697f6b4a664e928d72414ea2731cb2c9d1dc3077 3.2.2 ce4030ab57f523f33d5396c6a81396d440756f5f 3.0.0 0df781b261f55bbde73d8c92da3f99397000972f
CURRENT_NEXTCLOUD_VER="22.2.6"
fi
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^22 ]]; then
InstallNextcloud 23.0.12 d138641b8e7aabebe69bb3ec7c79a714d122f729 4.1.0 697f6b4a664e928d72414ea2731cb2c9d1dc3077 3.2.2 ce4030ab57f523f33d5396c6a81396d440756f5f 3.0.0 0df781b261f55bbde73d8c92da3f99397000972f
CURRENT_NEXTCLOUD_VER="23.0.12"
fi
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^23 ]]; then
InstallNextcloud 24.0.12 7aa5d61632c1ccf4ca3ff00fb6b295d318c05599 4.1.0 697f6b4a664e928d72414ea2731cb2c9d1dc3077 3.2.2 ce4030ab57f523f33d5396c6a81396d440756f5f 3.0.0 0df781b261f55bbde73d8c92da3f99397000972f
CURRENT_NEXTCLOUD_VER="24.0.12"
fi
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^24 ]]; then
InstallNextcloud 25.0.7 a5a565c916355005c7b408dd41a1e53505e1a080 5.3.0 4b0a6666374e3b55cfd2ae9b72e1d458b87d4c8c 4.4.2 21a42e15806adc9b2618760ef94f1797ef399e2f 3.2.0 a494073dcdecbbbc79a9c77f72524ac9994d2eec
CURRENT_NEXTCLOUD_VER="25.0.7"
fi
fi
InstallNextcloud $nextcloud_ver $nextcloud_hash $contacts_ver $contacts_hash $calendar_ver $calendar_hash $user_external_ver $user_external_hash
fi
# ### Configuring Nextcloud
# Setup Nextcloud if the Nextcloud database does not yet exist. Running setup when
# the database does exist wipes the database and user data.
if [ ! -f "$STORAGE_ROOT/owncloud/owncloud.db" ]; then
# Create user data directory
mkdir -p "$STORAGE_ROOT/owncloud"
# Create an initial configuration file.
instanceid=oc$(echo "$PRIMARY_HOSTNAME" | sha1sum | fold -w 10 | head -n 1)
cat > "$STORAGE_ROOT/owncloud/config.php" <<EOF;
<?php
\$CONFIG = array (
'datadirectory' => '$STORAGE_ROOT/owncloud',
'instanceid' => '$instanceid',
'forcessl' => true, # if unset/false, Nextcloud sends a HSTS=0 header, which conflicts with nginx config
'overwritewebroot' => '/cloud',
'overwrite.cli.url' => '/cloud',
'user_backends' => array(
array(
'class' => '\OCA\UserExternal\IMAP',
'arguments' => array(
'127.0.0.1', 143, null, null, false, false
),
),
),
'memcache.local' => '\OC\Memcache\APCu',
'mail_smtpmode' => 'sendmail',
'mail_smtpsecure' => '',
'mail_smtpauthtype' => 'LOGIN',
'mail_smtpauth' => false,
'mail_smtphost' => '',
'mail_smtpport' => '',
'mail_smtpname' => '',
'mail_smtppassword' => '',
'mail_from_address' => 'owncloud',
);
?>
EOF
# Create an auto-configuration file to fill in database settings
# when the install script is run. Make an administrator account
# here or else the install can't finish.
adminpassword=$(dd if=/dev/urandom bs=1 count=40 2>/dev/null | sha1sum | fold -w 30 | head -n 1)
cat > /usr/local/lib/owncloud/config/autoconfig.php <<EOF;
<?php
\$AUTOCONFIG = array (
# storage/database
'directory' => '$STORAGE_ROOT/owncloud',
'dbtype' => 'sqlite3',
# create an administrator account with a random password so that
# the user does not have to enter anything on first load of Nextcloud
'adminlogin' => 'root',
'adminpass' => '$adminpassword',
);
?>
EOF
# Set permissions
chown -R www-data:www-data "$STORAGE_ROOT/owncloud" /usr/local/lib/owncloud
# Execute Nextcloud's setup step, which creates the Nextcloud sqlite database.
# It also wipes it if it exists. And it updates config.php with database
# settings and deletes the autoconfig.php file.
(cd /usr/local/lib/owncloud || exit; sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/index.php;)
fi
# Update config.php.
# * trusted_domains is reset to localhost by autoconfig starting with ownCloud 8.1.1,
# so set it here. It also can change if the box's PRIMARY_HOSTNAME changes, so
# this will make sure it has the right value.
# * Some settings weren't included in previous versions of Mail-in-a-Box.
# * We need to set the timezone to the system timezone to allow fail2ban to ban
# users within the proper timeframe
# * We need to set the logdateformat to something that will work correctly with fail2ban
# * 'mail_domain' needs to be set every time we run the setup, making sure we are setting
# the correct domain name if the domain has been changed since the previous setup.
# Use PHP to read the settings file, modify it, and write out the new settings array.
TIMEZONE=$(cat /etc/timezone)
CONFIG_TEMP=$(/bin/mktemp)
php"$PHP_VER" <<EOF > "$CONFIG_TEMP" && mv "$CONFIG_TEMP" "$STORAGE_ROOT/owncloud/config.php";
<?php
include("$STORAGE_ROOT/owncloud/config.php");
\$CONFIG['config_is_read_only'] = false;
\$CONFIG['trusted_domains'] = array('$PRIMARY_HOSTNAME');
\$CONFIG['memcache.local'] = '\OC\Memcache\APCu';
\$CONFIG['overwrite.cli.url'] = 'https://${PRIMARY_HOSTNAME}/cloud';
\$CONFIG['mail_from_address'] = 'administrator'; # just the local part, matches our master administrator address
\$CONFIG['logtimezone'] = '$TIMEZONE';
\$CONFIG['logdateformat'] = 'Y-m-d H:i:s';
\$CONFIG['mail_domain'] = '$PRIMARY_HOSTNAME';
\$CONFIG['user_backends'] = array(
array(
'class' => '\OCA\UserExternal\IMAP',
'arguments' => array(
'127.0.0.1', 143, null, null, false, false
),
),
);
echo "<?php\n\\\$CONFIG = ";
var_export(\$CONFIG);
echo ";";
?>
EOF
chown www-data:www-data "$STORAGE_ROOT/owncloud/config.php"
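To confirm the rewritten settings were picked up, occ can read them back (an optional check, not part of the setup script):
sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/occ config:system:get trusted_domains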
# Enable/disable apps. Note that this must be done after the Nextcloud setup.
# The firstrunwizard gave Josh all sorts of problems, so disabling that.
# user_external is what allows Nextcloud to use IMAP for login. The contacts
# and calendar apps are the extensions we really care about here.
hide_output sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/console.php app:disable firstrunwizard
hide_output sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/console.php app:enable user_external
hide_output sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/console.php app:enable contacts
hide_output sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/console.php app:enable calendar
# When upgrading, run the upgrade script again now that apps are enabled. It seems like
# the first upgrade at the top won't work because apps may be disabled during upgrade?
# Check for success (0=ok, 3=no upgrade needed).
sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/occ upgrade
E=$?
if [ $E -ne 0 ] && [ $E -ne 3 ]; then exit 1; fi
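If you need to confirm which Nextcloud version the instance ended up on after the chained upgrades, occ can report it (illustrative):
sudo -u www-data php"$PHP_VER" /usr/local/lib/owncloud/occ status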
# Disable default apps that we don't support
sudo -u www-data \
php"$PHP_VER" /usr/local/lib/owncloud/occ app:disable photos dashboard activity \
| (grep -v "No such app enabled" || /bin/true)
# Set PHP FPM values to support large file uploads
# (semicolon is the comment character in this file, hashes produce deprecation warnings)
tools/editconf.py /etc/php/"$PHP_VER"/fpm/php.ini -c ';' \
upload_max_filesize=16G \
post_max_size=16G \
output_buffering=16384 \
memory_limit=512M \
max_execution_time=600 \
short_open_tag=On
# Set Nextcloud recommended opcache settings
tools/editconf.py /etc/php/"$PHP_VER"/cli/conf.d/10-opcache.ini -c ';' \
opcache.enable=1 \
opcache.enable_cli=1 \
opcache.interned_strings_buffer=8 \
opcache.max_accelerated_files=10000 \
opcache.memory_consumption=128 \
opcache.save_comments=1 \
opcache.revalidate_freq=1
# Migrate users_external data from <0.6.0 to version 3.0.0
# (see https://github.com/nextcloud/user_external).
# This version was probably in use in Mail-in-a-Box v0.41 (February 26, 2019) and earlier.
# We moved to v0.6.3 in 193763f8. Ignore errors - maybe there are duplicated users with the
# correct backend already.
sqlite3 "$STORAGE_ROOT/owncloud/owncloud.db" "UPDATE oc_users_external SET backend='127.0.0.1';" || /bin/true
# Set up a general cron job for Nextcloud.
# Also add another job for Calendar updates, per advice in the Nextcloud docs
# https://docs.nextcloud.com/server/24/admin_manual/groupware/calendar.html#background-jobs
cat > /etc/cron.d/mailinabox-nextcloud << EOF;
#!/bin/bash
# Mail-in-a-Box
*/5 * * * * root sudo -u www-data php$PHP_VER -f /usr/local/lib/owncloud/cron.php
*/5 * * * * root sudo -u www-data php$PHP_VER -f /usr/local/lib/owncloud/occ dav:send-event-reminders
EOF
chmod +x /etc/cron.d/mailinabox-nextcloud
# We also need to change the sending mode from background-job to occ.
# Or else the reminders will just be sent as soon as possible when the background jobs run.
hide_output sudo -u www-data php"$PHP_VER" -f /usr/local/lib/owncloud/occ config:app:set dav sendEventRemindersMode --value occ
# Now set the config to read-only.
# Do this only at the very bottom when no further occ commands are needed.
sed -i'' "s/'config_is_read_only'\s*=>\s*false/'config_is_read_only' => true/" "$STORAGE_ROOT/owncloud/config.php"
# Rotate the nextcloud.log file
cat > /etc/logrotate.d/nextcloud <<EOF
# Nextcloud logs
$STORAGE_ROOT/owncloud/nextcloud.log {
size 10M
create 640 www-data www-data
rotate 30
copytruncate
missingok
compress
}
EOF
# There's nothing much of interest that a user could do as an admin for Nextcloud,
# and there's a lot they could mess up, so we don't make any users admins of Nextcloud.
# But if we wanted to, we would do this:
# ```
# for user in $(management/cli.py user admins); do
# sqlite3 $STORAGE_ROOT/owncloud/owncloud.db "INSERT OR IGNORE INTO oc_group_user VALUES ('admin', '$user')"
# done
# ```
# Enable PHP modules and restart PHP.
restart_service php"$PHP_VER"-fpm

View File

@ -1,234 +0,0 @@
#!/bin/bash
# Owncloud
##########################
source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars
# ### Installing ownCloud
echo "Installing ownCloud (contacts/calendar)..."
apt_install \
dbconfig-common \
php5-cli php5-sqlite php5-gd php5-imap php5-curl php-pear php-apc curl libapr1 libtool libcurl4-openssl-dev php-xml-parser \
php5 php5-dev php5-gd php5-fpm memcached php5-memcached unzip
apt-get purge -qq -y owncloud*
# Install ownCloud from source of this version:
owncloud_ver=8.2.3
owncloud_hash=bfdf6166fbf6fc5438dc358600e7239d1c970613
# Migrate <= v0.10 setups that stored the ownCloud config.php in /usr/local rather than
# in STORAGE_ROOT. Move the file to STORAGE_ROOT.
if [ ! -f $STORAGE_ROOT/owncloud/config.php ] \
&& [ -f /usr/local/lib/owncloud/config/config.php ]; then
# Move config.php and symlink back into previous location.
echo "Migrating owncloud/config.php to new location."
mv /usr/local/lib/owncloud/config/config.php $STORAGE_ROOT/owncloud/config.php \
&& \
ln -sf $STORAGE_ROOT/owncloud/config.php /usr/local/lib/owncloud/config/config.php
fi
# Check if ownCloud dir exist, and check if version matches owncloud_ver (if either doesn't - install/upgrade)
if [ ! -d /usr/local/lib/owncloud/ ] \
|| ! grep -q $owncloud_ver /usr/local/lib/owncloud/version.php; then
# Download and verify
wget_verify https://download.owncloud.org/community/owncloud-$owncloud_ver.zip $owncloud_hash /tmp/owncloud.zip
# Clear out the existing ownCloud.
if [ -d /usr/local/lib/owncloud/ ]; then
echo "upgrading ownCloud to $owncloud_ver (backing up existing ownCloud directory to /tmp/owncloud-backup-$$)..."
mv /usr/local/lib/owncloud /tmp/owncloud-backup-$$
fi
# Extract ownCloud
unzip -u -o -q /tmp/owncloud.zip -d /usr/local/lib #either extracts new or replaces current files
rm -f /tmp/owncloud.zip
# The two apps we actually want are not in ownCloud core. Clone them from
# their github repositories.
mkdir -p /usr/local/lib/owncloud/apps
git_clone https://github.com/owncloudarchive/contacts 9ba2e667ae8c7ea36d8c4a4c3413c374beb24b1b '' /usr/local/lib/owncloud/apps/contacts
git_clone https://github.com/owncloudarchive/calendar 2086e738a3b7b868ec59cd61f0f88b49c3f21dd1 '' /usr/local/lib/owncloud/apps/calendar
# Fix weird permissions.
chmod 750 /usr/local/lib/owncloud/{apps,config}
# Create a symlink to the config.php in STORAGE_ROOT (for upgrades we're restoring the symlink we previously
# put in, and in new installs we're creating a symlink and will create the actual config later).
ln -sf $STORAGE_ROOT/owncloud/config.php /usr/local/lib/owncloud/config/config.php
# Make sure permissions are correct or the upgrade step won't run.
# $STORAGE_ROOT/owncloud may not yet exist, so use -f to suppress
# that error.
chown -f -R www-data.www-data $STORAGE_ROOT/owncloud /usr/local/lib/owncloud
# If this isn't a new installation, immediately run the upgrade script.
# Then check for success (0=ok and 3=no upgrade needed, both are success).
if [ -f $STORAGE_ROOT/owncloud/owncloud.db ]; then
# ownCloud 8.1.1 broke upgrades. It may fail on the first attempt, but
# that can be OK.
sudo -u www-data php /usr/local/lib/owncloud/occ upgrade
if [ \( $? -ne 0 \) -a \( $? -ne 3 \) ]; then
echo "Trying ownCloud upgrade again to work around ownCloud upgrade bug..."
sudo -u www-data php /usr/local/lib/owncloud/occ upgrade
if [ \( $? -ne 0 \) -a \( $? -ne 3 \) ]; then exit 1; fi
sudo -u www-data php /usr/local/lib/owncloud/occ maintenance:mode --off
echo "...which seemed to work."
fi
fi
fi
# ### Configuring ownCloud
# Setup ownCloud if the ownCloud database does not yet exist. Running setup when
# the database does exist wipes the database and user data.
if [ ! -f $STORAGE_ROOT/owncloud/owncloud.db ]; then
# Create user data directory
mkdir -p $STORAGE_ROOT/owncloud
# Create an initial configuration file.
instanceid=oc$(echo $PRIMARY_HOSTNAME | sha1sum | fold -w 10 | head -n 1)
cat > $STORAGE_ROOT/owncloud/config.php <<EOF;
<?php
\$CONFIG = array (
'datadirectory' => '$STORAGE_ROOT/owncloud',
'instanceid' => '$instanceid',
'forcessl' => true, # if unset/false, ownCloud sends a HSTS=0 header, which conflicts with nginx config
'overwritewebroot' => '/cloud',
'overwrite.cli.url' => '/cloud',
'user_backends' => array(
array(
'class'=>'OC_User_IMAP',
'arguments'=>array('{127.0.0.1:993/imap/ssl/novalidate-cert}')
)
),
'memcache.local' => '\\OC\\Memcache\\Memcached',
"memcached_servers" => array (
array('127.0.0.1', 11211),
),
'mail_smtpmode' => 'sendmail',
'mail_smtpsecure' => '',
'mail_smtpauthtype' => 'LOGIN',
'mail_smtpauth' => false,
'mail_smtphost' => '',
'mail_smtpport' => '',
'mail_smtpname' => '',
'mail_smtppassword' => '',
'mail_from_address' => 'owncloud',
'mail_domain' => '$PRIMARY_HOSTNAME',
);
?>
EOF
# Create an auto-configuration file to fill in database settings
# when the install script is run. Make an administrator account
# here or else the install can't finish.
adminpassword=$(dd if=/dev/urandom bs=1 count=40 2>/dev/null | sha1sum | fold -w 30 | head -n 1)
cat > /usr/local/lib/owncloud/config/autoconfig.php <<EOF;
<?php
\$AUTOCONFIG = array (
# storage/database
'directory' => '$STORAGE_ROOT/owncloud',
'dbtype' => 'sqlite3',
# create an administrator account with a random password so that
# the user does not have to enter anything on first load of ownCloud
'adminlogin' => 'root',
'adminpass' => '$adminpassword',
);
?>
EOF
# Set permissions
chown -R www-data.www-data $STORAGE_ROOT/owncloud /usr/local/lib/owncloud
# Execute ownCloud's setup step, which creates the ownCloud sqlite database.
# It also wipes it if it exists. And it updates config.php with database
# settings and deletes the autoconfig.php file.
(cd /usr/local/lib/owncloud; sudo -u www-data php /usr/local/lib/owncloud/index.php;)
fi
# Update config.php.
# * trusted_domains is reset to localhost by autoconfig starting with ownCloud 8.1.1,
# so set it here. It also can change if the box's PRIMARY_HOSTNAME changes, so
# this will make sure it has the right value.
# * Some settings weren't included in previous versions of Mail-in-a-Box.
# * We need to set the timezone to the system timezone to allow fail2ban to ban
# users within the proper timeframe
# * We need to set the logdateformat to something that will work correctly with fail2ban
# Use PHP to read the settings file, modify it, and write out the new settings array.
TIMEZONE=$(cat /etc/timezone)
CONFIG_TEMP=$(/bin/mktemp)
php <<EOF > $CONFIG_TEMP && mv $CONFIG_TEMP $STORAGE_ROOT/owncloud/config.php;
<?php
include("$STORAGE_ROOT/owncloud/config.php");
\$CONFIG['trusted_domains'] = array('$PRIMARY_HOSTNAME');
\$CONFIG['memcache.local'] = '\\OC\\Memcache\\Memcached';
\$CONFIG['overwrite.cli.url'] = '/cloud';
\$CONFIG['mail_from_address'] = 'administrator'; # just the local part, matches our master administrator address
\$CONFIG['logtimezone'] = '$TIMEZONE';
\$CONFIG['logdateformat'] = 'Y-m-d H:i:s';
echo "<?php\n\\\$CONFIG = ";
var_export(\$CONFIG);
echo ";";
?>
EOF
chown www-data.www-data $STORAGE_ROOT/owncloud/config.php
# Enable/disable apps. Note that this must be done after the ownCloud setup.
# The firstrunwizard gave Josh all sorts of problems, so disabling that.
# user_external is what allows ownCloud to use IMAP for login. The contacts
# and calendar apps are the extensions we really care about here.
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:disable firstrunwizard
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable user_external
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable contacts
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable calendar
# When upgrading, run the upgrade script again now that apps are enabled. It seems like
# the first upgrade at the top won't work because apps may be disabled during upgrade?
# Check for success (0=ok, 3=no upgrade needed).
sudo -u www-data php /usr/local/lib/owncloud/occ upgrade
if [ \( $? -ne 0 \) -a \( $? -ne 3 \) ]; then exit 1; fi
# Set PHP FPM values to support large file uploads
# (semicolon is the comment character in this file, hashes produce deprecation warnings)
tools/editconf.py /etc/php5/fpm/php.ini -c ';' \
upload_max_filesize=16G \
post_max_size=16G \
output_buffering=16384 \
memory_limit=512M \
max_execution_time=600 \
short_open_tag=On
# Set up a cron job for owncloud.
cat > /etc/cron.hourly/mailinabox-owncloud << EOF;
#!/bin/bash
# Mail-in-a-Box
sudo -u www-data php -f /usr/local/lib/owncloud/cron.php
EOF
chmod +x /etc/cron.hourly/mailinabox-owncloud
# There's nothing much of interest that a user could do as an admin for ownCloud,
# and there's a lot they could mess up, so we don't make any users admins of ownCloud.
# But if we wanted to, we would do this:
# ```
# for user in $(tools/mail.py user admins); do
# sqlite3 $STORAGE_ROOT/owncloud/owncloud.db "INSERT OR IGNORE INTO oc_group_user VALUES ('admin', '$user')"
# done
# ```
# Enable PHP modules and restart PHP.
php5enmod imap
restart_service php5-fpm

View File

@ -1,41 +1,48 @@
#!/bin/bash
# Are we running as root?
if [[ $EUID -ne 0 ]]; then
echo "This script must be run as root. Please re-run like this:"
echo
echo "sudo $0"
echo
exit
exit 1
fi
# Check that we are running on Ubuntu 14.04 LTS (or 14.04.xx).
if [ "`lsb_release -d | sed 's/.*:\s*//' | sed 's/14\.04\.[0-9]/14.04/' `" != "Ubuntu 14.04 LTS" ]; then
echo "Mail-in-a-Box only supports being installed on Ubuntu 14.04, sorry. You are running:"
# Check that we are running on Ubuntu 22.04 LTS (or 22.04.xx).
if [ "$( lsb_release --id --short )" != "Ubuntu" ] || [ "$( lsb_release --release --short )" != "22.04" ]; then
echo "Mail-in-a-Box only supports being installed on Ubuntu 22.04, sorry. You are running:"
echo
lsb_release -d | sed 's/.*:\s*//'
lsb_release --description --short
echo
echo "We can't write scripts that run on every possible setup, sorry."
exit
exit 1
fi
# Check that we have enough memory.
#
# /proc/meminfo reports free memory in kibibytes. Our baseline will be 768 MB,
# which is 750000 kibibytes.
# /proc/meminfo reports free memory in kibibytes. Our baseline will be 512 MB,
# which is 500000 kibibytes.
#
# We will display a warning if the memory is below 768 MB which is 750000 kibibytes
#
# Skip the check if we appear to be running inside of Vagrant, because that's really just for testing.
TOTAL_PHYSICAL_MEM=$(head -n 1 /proc/meminfo | awk '{print $2}')
if [ $TOTAL_PHYSICAL_MEM -lt 750000 ]; then
if [ "$TOTAL_PHYSICAL_MEM" -lt 490000 ]; then
if [ ! -d /vagrant ]; then
TOTAL_PHYSICAL_MEM=$(expr \( \( $TOTAL_PHYSICAL_MEM \* 1024 \) / 1000 \) / 1000)
TOTAL_PHYSICAL_MEM=$(( TOTAL_PHYSICAL_MEM * 1024 / 1000 / 1000 ))
echo "Your Mail-in-a-Box needs more memory (RAM) to function properly."
echo "Please provision a machine with at least 768 MB, 1 GB recommended."
echo "Please provision a machine with at least 512 MB, 1 GB recommended."
echo "This machine has $TOTAL_PHYSICAL_MEM MB memory."
exit
fi
fi
if [ "$TOTAL_PHYSICAL_MEM" -lt 750000 ]; then
echo "WARNING: Your Mail-in-a-Box has less than 768 MB of memory."
echo " It might run unreliably when under heavy load."
fi
# Check that tempfs is mounted with exec
MOUNTED_TMP_AS_NO_EXEC=$(grep "/tmp.*noexec" /proc/mounts)
MOUNTED_TMP_AS_NO_EXEC=$(grep "/tmp.*noexec" /proc/mounts || /bin/true)
if [ -n "$MOUNTED_TMP_AS_NO_EXEC" ]; then
echo "Mail-in-a-Box has to have exec rights on /tmp, please mount /tmp with exec"
exit
@ -47,16 +54,14 @@ if [ -e ~/.wgetrc ]; then
exit
fi
# Check that we are running on x86_64 or i686; any other architecture is unsupported and
# will fail later in the setup when we try to install the custom-built lucene packages.
#
# Set ARM=1 to ignore this check if you have built the packages yourself. If you do this
# you are on your own!
# Check that we are running on x86_64 or i686 architecture, which are the only
# ones we support / test.
ARCHITECTURE=$(uname -m)
if [ "$ARCHITECTURE" != "x86_64" ] && [ "$ARCHITECTURE" != "i686" ]; then
if [ -z "$ARM" ]; then
echo "Mail-in-a-Box only supports x86_64 or i686 and will not work on any other architecture, like ARM."
echo "Your architecture is $ARCHITECTURE"
exit
fi
echo
echo "WARNING:"
echo "Mail-in-a-Box has only been tested on x86_64 and i686 platform"
echo "architectures. Your architecture, $ARCHITECTURE, may not work."
echo "You are on your own."
echo
fi

View File

@ -1,4 +1,5 @@
if [ -z "$NONINTERACTIVE" ]; then
#!/bin/bash
if [ -z "${NONINTERACTIVE:-}" ]; then
# Install 'dialog' so we can ask the user questions. The original motivation for
# this was being able to ask the user for input even if stdin has been redirected,
# e.g. if we piped a bootstrapping install script to bash to get started. In that
@ -7,12 +8,14 @@ if [ -z "$NONINTERACTIVE" ]; then
#
# Also install dependencies needed to validate the email address.
if [ ! -f /usr/bin/dialog ] || [ ! -f /usr/bin/python3 ] || [ ! -f /usr/bin/pip3 ]; then
echo Installing packages needed for setup...
echo "Installing packages needed for setup..."
apt-get -q -q update
apt_get_quiet install dialog python3 python3-pip || exit 1
fi
# email_validator is repeated in setup/management.sh
# Installing email_validator is repeated in setup/management.sh, but in setup/management.sh
# we install it inside a virtualenv. In this script, we don't have the virtualenv yet
# so we install the python package globally.
hide_output pip3 install "email_validator>=1.0.0" || exit 1
message_box "Mail-in-a-Box Installation" \
@ -23,13 +26,13 @@ if [ -z "$NONINTERACTIVE" ]; then
fi
# The box needs a name.
if [ -z "$PRIMARY_HOSTNAME" ]; then
if [ -z "$DEFAULT_PRIMARY_HOSTNAME" ]; then
if [ -z "${PRIMARY_HOSTNAME:-}" ]; then
if [ -z "${DEFAULT_PRIMARY_HOSTNAME:-}" ]; then
# We recommend using box.example.com as this host's name. The
# domain the user probably wants to use is then example.com. We
# strip the string "box." from the hostname to get the mail
# domain; if the hostname doesn't start with "box.", it is left unchanged.
DEFAULT_DOMAIN_GUESS=$(echo $(get_default_hostname) | sed -e 's/^box\.//')
DEFAULT_DOMAIN_GUESS=$(get_default_hostname | sed -e 's/^box\.//')
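# (For example, a hostname of "box.example.com" yields a domain guess of
# "example.com"; a hostname without the "box." prefix stays as-is.)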
# This is the first run. Ask the user for their email address so we can
# provide the best default for the box's hostname.
@ -49,11 +52,11 @@ you really want.
# user hit ESC/cancel
exit
fi
while ! management/mailconfig.py validate-email "$EMAIL_ADDR"
while ! python3 management/mailconfig.py validate-email "$EMAIL_ADDR"
do
input_box "Your Email Address" \
"That's not a valid email address.\n\nWhat email address are you setting this box up to manage?" \
$EMAIL_ADDR \
"$EMAIL_ADDR" \
EMAIL_ADDR
if [ -z "$EMAIL_ADDR" ]; then
# user hit ESC/cancel
@ -63,7 +66,7 @@ you really want.
# Take the part after the @-sign as the user's domain name, and add
# 'box.' to the beginning to create a default hostname for this machine.
DEFAULT_PRIMARY_HOSTNAME=box.$(echo $EMAIL_ADDR | sed 's/.*@//')
DEFAULT_PRIMARY_HOSTNAME=box.$(echo "$EMAIL_ADDR" | sed 's/.*@//')
fi
input_box "Hostname" \
@ -72,7 +75,7 @@ you really want.
address, so we're suggesting $DEFAULT_PRIMARY_HOSTNAME.
\n\nYou can change it, but we recommend you don't.
\n\nHostname:" \
$DEFAULT_PRIMARY_HOSTNAME \
"$DEFAULT_PRIMARY_HOSTNAME" \
PRIMARY_HOSTNAME
if [ -z "$PRIMARY_HOSTNAME" ]; then
@ -84,30 +87,30 @@ fi
# If the machine is behind a NAT, inside a VM, etc., it may not know
# its IP address on the public network / the Internet. Ask the Internet
# and possibly confirm with user.
if [ -z "$PUBLIC_IP" ]; then
if [ -z "${PUBLIC_IP:-}" ]; then
# Ask the Internet.
GUESSED_IP=$(get_publicip_from_web_service 4)
# On the first run, if we got an answer from the Internet then don't
# ask the user.
if [[ -z "$DEFAULT_PUBLIC_IP" && ! -z "$GUESSED_IP" ]]; then
if [[ -z "${DEFAULT_PUBLIC_IP:-}" && -n "$GUESSED_IP" ]]; then
PUBLIC_IP=$GUESSED_IP
# Otherwise on the first run at least provide a default.
elif [[ -z "$DEFAULT_PUBLIC_IP" ]]; then
elif [[ -z "${DEFAULT_PUBLIC_IP:-}" ]]; then
DEFAULT_PUBLIC_IP=$(get_default_privateip 4)
# On later runs, if the previous value matches the guessed value then
# don't ask the user either.
elif [ "$DEFAULT_PUBLIC_IP" == "$GUESSED_IP" ]; then
elif [ "${DEFAULT_PUBLIC_IP:-}" == "$GUESSED_IP" ]; then
PUBLIC_IP=$GUESSED_IP
fi
if [ -z "$PUBLIC_IP" ]; then
if [ -z "${PUBLIC_IP:-}" ]; then
input_box "Public IP Address" \
"Enter the public IP address of this machine, as given to you by your ISP.
\n\nPublic IP address:" \
$DEFAULT_PUBLIC_IP \
"${DEFAULT_PUBLIC_IP:-}" \
PUBLIC_IP
if [ -z "$PUBLIC_IP" ]; then
@ -119,30 +122,30 @@ fi
# Same for IPv6. But it's optional. Also, if it looks like the system
# doesn't have an IPv6, don't ask for one.
if [ -z "$PUBLIC_IPV6" ]; then
if [ -z "${PUBLIC_IPV6:-}" ]; then
# Ask the Internet.
GUESSED_IP=$(get_publicip_from_web_service 6)
MATCHED=0
if [[ -z "$DEFAULT_PUBLIC_IPV6" && ! -z "$GUESSED_IP" ]]; then
if [[ -z "${DEFAULT_PUBLIC_IPV6:-}" && -n "$GUESSED_IP" ]]; then
PUBLIC_IPV6=$GUESSED_IP
elif [[ "$DEFAULT_PUBLIC_IPV6" == "$GUESSED_IP" ]]; then
elif [[ "${DEFAULT_PUBLIC_IPV6:-}" == "$GUESSED_IP" ]]; then
# No IPv6 entered and machine seems to have none, or what
# the user entered matches what the Internet tells us.
PUBLIC_IPV6=$GUESSED_IP
MATCHED=1
elif [[ -z "$DEFAULT_PUBLIC_IPV6" ]]; then
elif [[ -z "${DEFAULT_PUBLIC_IPV6:-}" ]]; then
DEFAULT_PUBLIC_IPV6=$(get_default_privateip 6)
fi
if [[ -z "$PUBLIC_IPV6" && $MATCHED == 0 ]]; then
if [[ -z "${PUBLIC_IPV6:-}" && $MATCHED == 0 ]]; then
input_box "IPv6 Address (Optional)" \
"Enter the public IPv6 address of this machine, as given to you by your ISP.
\n\nLeave blank if the machine does not have an IPv6 address.
\n\nPublic IPv6 address:" \
$DEFAULT_PUBLIC_IPV6 \
"${DEFAULT_PUBLIC_IPV6:-}" \
PUBLIC_IPV6
if [ ! $PUBLIC_IPV6_EXITCODE ]; then
if [ ! -n "$PUBLIC_IPV6_EXITCODE" ]; then
# user hit ESC/cancel
exit
fi
@ -152,10 +155,10 @@ fi
# Get the IP addresses of the local network interface(s) that are connected
# to the Internet. We need these when we want to have services bind only to
# the public network interfaces (not loopback, not tunnel interfaces).
if [ -z "$PRIVATE_IP" ]; then
if [ -z "${PRIVATE_IP:-}" ]; then
PRIVATE_IP=$(get_default_privateip 4)
fi
if [ -z "$PRIVATE_IPV6" ]; then
if [ -z "${PRIVATE_IPV6:-}" ]; then
PRIVATE_IPV6=$(get_default_privateip 6)
fi
if [[ -z "$PRIVATE_IP" && -z "$PRIVATE_IPV6" ]]; then
@ -180,25 +183,22 @@ if [ "$PUBLIC_IPV6" = "auto" ]; then
fi
if [ "$PRIMARY_HOSTNAME" = "auto" ]; then
PRIMARY_HOSTNAME=$(get_default_hostname)
elif [ "$PRIMARY_HOSTNAME" = "auto-easy" ]; then
# Generate a probably-unique subdomain under our justtesting.email domain.
PRIMARY_HOSTNAME=`echo $PUBLIC_IP | sha1sum | cut -c1-5`.justtesting.email
fi
# Set STORAGE_USER and STORAGE_ROOT to default values (user-data and /home/user-data), unless
# we've already got those values from a previous run.
if [ -z "$STORAGE_USER" ]; then
STORAGE_USER=$([[ -z "$DEFAULT_STORAGE_USER" ]] && echo "user-data" || echo "$DEFAULT_STORAGE_USER")
if [ -z "${STORAGE_USER:-}" ]; then
STORAGE_USER=$([[ -z "${DEFAULT_STORAGE_USER:-}" ]] && echo "user-data" || echo "$DEFAULT_STORAGE_USER")
fi
if [ -z "$STORAGE_ROOT" ]; then
STORAGE_ROOT=$([[ -z "$DEFAULT_STORAGE_ROOT" ]] && echo "/home/$STORAGE_USER" || echo "$DEFAULT_STORAGE_ROOT")
if [ -z "${STORAGE_ROOT:-}" ]; then
STORAGE_ROOT=$([[ -z "${DEFAULT_STORAGE_ROOT:-}" ]] && echo "/home/$STORAGE_USER" || echo "$DEFAULT_STORAGE_ROOT")
fi
# Show the configuration, since the user may have not entered it manually.
echo
echo "Primary Hostname: $PRIMARY_HOSTNAME"
echo "Public IP Address: $PUBLIC_IP"
if [ ! -z "$PUBLIC_IPV6" ]; then
if [ -n "$PUBLIC_IPV6" ]; then
echo "Public IPv6 Address: $PUBLIC_IPV6"
fi
if [ "$PRIVATE_IP" != "$PUBLIC_IP" ]; then
@ -208,6 +208,6 @@ if [ "$PRIVATE_IPV6" != "$PUBLIC_IPV6" ]; then
echo "Private IPv6 Address: $PRIVATE_IPV6"
fi
if [ -f /usr/bin/git ] && [ -d .git ]; then
echo "Mail-in-a-Box Version: " $(git describe)
echo "Mail-in-a-Box Version: $(git describe --always)"
fi
echo

View File

@ -48,7 +48,7 @@ echo "public.pyzor.org:24441" > /etc/spamassassin/pyzor/servers
# * Disable localmode so Pyzor, DKIM and DNS checks can be used.
tools/editconf.py /etc/default/spampd \
DESTPORT=10026 \
ADDOPTS="\"--maxsize=500\"" \
ADDOPTS="\"--maxsize=2000\"" \
LOCALONLY=0
# Spamassassin normally wraps spam as an attachment inside a fresh
@ -61,9 +61,61 @@ tools/editconf.py /etc/default/spampd \
# content or execute scripts, and it is probably confusing to most users.
#
# Tell Spamassassin not to modify the original message except for adding
# the X-Spam-Status mail header and related headers.
# the X-Spam-Status & X-Spam-Score mail headers and related headers.
tools/editconf.py /etc/spamassassin/local.cf -s \
report_safe=0
report_safe=0 \
"add_header all Report"=_REPORT_ \
"add_header all Score"=_SCORE_
# Authentication-Results SPF/DMARC checks
# ---------------------------------------
# OpenDKIM and OpenDMARC are configured to validate and add "Authentication-Results: ..."
# headers by checking the sender's SPF & DMARC policies. Instead of blocking mail that fails
# these checks, we can use these headers to evaluate the mail as spam.
#
# Our custom rules are added to their own file so that an update to the deb package config
# does not remove our changes.
#
# We need to escape periods in $PRIMARY_HOSTNAME since the spamassassin config uses regex.
escapedprimaryhostname="${PRIMARY_HOSTNAME//./\\.}"
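# (e.g. a PRIMARY_HOSTNAME of "box.example.com" becomes "box\.example\.com".)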
cat > /etc/spamassassin/miab_spf_dmarc.cf << EOF
# Evaluate DMARC Authentication-Results
header DMARC_PASS Authentication-Results =~ /$escapedprimaryhostname; dmarc=pass/
describe DMARC_PASS DMARC check passed
score DMARC_PASS -0.1
header DMARC_NONE Authentication-Results =~ /$escapedprimaryhostname; dmarc=none/
describe DMARC_NONE DMARC record not found
score DMARC_NONE 0.1
header DMARC_FAIL_NONE Authentication-Results =~ /$escapedprimaryhostname; dmarc=fail \(p=none/
describe DMARC_FAIL_NONE DMARC check failed (p=none)
score DMARC_FAIL_NONE 2.0
header DMARC_FAIL_QUARANTINE Authentication-Results =~ /$escapedprimaryhostname; dmarc=fail \(p=quarantine/
describe DMARC_FAIL_QUARANTINE DMARC check failed (p=quarantine)
score DMARC_FAIL_QUARANTINE 5.0
header DMARC_FAIL_REJECT Authentication-Results =~ /$escapedprimaryhostname; dmarc=fail \(p=reject/
describe DMARC_FAIL_REJECT DMARC check failed (p=reject)
score DMARC_FAIL_REJECT 10.0
# Evaluate SPF Authentication-Results
header SPF_PASS Authentication-Results =~ /$escapedprimaryhostname; spf=pass/
describe SPF_PASS SPF check passed
score SPF_PASS -0.1
header SPF_NONE Authentication-Results =~ /$escapedprimaryhostname; spf=none/
describe SPF_NONE SPF record not found
score SPF_NONE 2.0
header SPF_FAIL Authentication-Results =~ /$escapedprimaryhostname; spf=fail/
describe SPF_FAIL SPF check failed
score SPF_FAIL 5.0
EOF
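# For reference, the Authentication-Results headers these rules match look
# roughly like the following (illustrative values; the trailing details vary):
# ```
# Authentication-Results: box.example.com; dmarc=fail (p=quarantine dis=none) header.from=example.org
# Authentication-Results: box.example.com; spf=pass smtp.mailfrom=example.org
# ```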
# Bayesian learning
# -----------------
@ -83,11 +135,11 @@ tools/editconf.py /etc/spamassassin/local.cf -s \
# the filemode in the config file.
tools/editconf.py /etc/spamassassin/local.cf -s \
bayes_path=$STORAGE_ROOT/mail/spamassassin/bayes \
bayes_file_mode=0660
bayes_path="$STORAGE_ROOT/mail/spamassassin/bayes" \
bayes_file_mode=0666
mkdir -p $STORAGE_ROOT/mail/spamassassin
chown -R spampd:spampd $STORAGE_ROOT/mail/spamassassin
mkdir -p "$STORAGE_ROOT/mail/spamassassin"
chown -R spampd:spampd "$STORAGE_ROOT/mail/spamassassin"
# To mark mail as spam or ham, just drag it in or out of the Spam folder. We'll
# use the Dovecot antispam plugin to detect the message move operation and execute
@ -132,8 +184,8 @@ chmod a+x /usr/local/bin/sa-learn-pipe.sh
# Create empty bayes training data (if it doesn't exist). Once the files exist,
# ensure they are group-writable so that the Dovecot process has access.
sudo -u spampd /usr/bin/sa-learn --sync 2>/dev/null
chmod -R 660 $STORAGE_ROOT/mail/spamassassin
chmod 770 $STORAGE_ROOT/mail/spamassassin
chmod -R 660 "$STORAGE_ROOT/mail/spamassassin"
chmod 770 "$STORAGE_ROOT/mail/spamassassin"
# Initial training?
# sa-learn --ham storage/mail/mailboxes/*/*/cur/

View File

@ -10,7 +10,7 @@
#
# * DNSSEC DANE TLSA records
# * IMAP
# * SMTP (opportunistic TLS for port 25 and submission on port 587)
# * SMTP (opportunistic TLS for port 25 and submission on ports 465/587)
# * HTTPS
#
# The certificate is created with its CN set to the PRIMARY_HOSTNAME. It is
@ -19,16 +19,16 @@
#
# The Diffie-Hellman cipher bits are used for SMTP and HTTPS, when a
# Diffie-Hellman cipher is selected during TLS negotiation. Diffie-Hellman
# provides Perfect Forward Secrecy.
# provides Perfect Forward Secrecy.
source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars
# Show a status line if we are going to take any action in this file.
if [ ! -f /usr/bin/openssl ] \
|| [ ! -f $STORAGE_ROOT/ssl/ssl_private_key.pem ] \
|| [ ! -f $STORAGE_ROOT/ssl/ssl_certificate.pem ] \
|| [ ! -f $STORAGE_ROOT/ssl/dh2048.pem ]; then
|| [ ! -f "$STORAGE_ROOT/ssl/ssl_private_key.pem" ] \
|| [ ! -f "$STORAGE_ROOT/ssl/ssl_certificate.pem" ] \
|| [ ! -f "$STORAGE_ROOT/ssl/dh2048.pem" ]; then
echo "Creating initial SSL certificate and perfect forward secrecy Diffie-Hellman parameters..."
fi
@ -38,7 +38,7 @@ apt_install openssl
# Create a directory to store TLS-related things like "SSL" certificates.
mkdir -p $STORAGE_ROOT/ssl
mkdir -p "$STORAGE_ROOT/ssl"
# Generate a new private key.
#
@ -60,39 +60,39 @@ mkdir -p $STORAGE_ROOT/ssl
#
# Since we properly seed /dev/urandom in system.sh we should be fine, but I leave
# in the rest of the notes in case that ever changes.
if [ ! -f $STORAGE_ROOT/ssl/ssl_private_key.pem ]; then
if [ ! -f "$STORAGE_ROOT/ssl/ssl_private_key.pem" ]; then
# Set the umask so the key file is never world-readable.
(umask 077; hide_output \
openssl genrsa -out $STORAGE_ROOT/ssl/ssl_private_key.pem 2048)
openssl genrsa -out "$STORAGE_ROOT/ssl/ssl_private_key.pem" 2048)
fi
# Generate a self-signed SSL certificate because things like nginx, dovecot,
# etc. won't even start without some certificate in place, and we need nginx
# so we can offer the user a control panel to install a better certificate.
if [ ! -f $STORAGE_ROOT/ssl/ssl_certificate.pem ]; then
if [ ! -f "$STORAGE_ROOT/ssl/ssl_certificate.pem" ]; then
# Generate a certificate signing request.
CSR=/tmp/ssl_cert_sign_req-$$.csr
hide_output \
openssl req -new -key $STORAGE_ROOT/ssl/ssl_private_key.pem -out $CSR \
-sha256 -subj "/C=/ST=/L=/O=/CN=$PRIMARY_HOSTNAME"
openssl req -new -key "$STORAGE_ROOT/ssl/ssl_private_key.pem" -out $CSR \
-sha256 -subj "/CN=$PRIMARY_HOSTNAME"
# Generate the self-signed certificate.
CERT=$STORAGE_ROOT/ssl/$PRIMARY_HOSTNAME-selfsigned-$(date --rfc-3339=date | sed s/-//g).pem
hide_output \
openssl x509 -req -days 365 \
-in $CSR -signkey $STORAGE_ROOT/ssl/ssl_private_key.pem -out $CERT
-in $CSR -signkey "$STORAGE_ROOT/ssl/ssl_private_key.pem" -out "$CERT"
# Delete the certificate signing request because it has no other purpose.
rm -f $CSR
# Symlink the certificate into the system certificate path, so system services
# can find it.
ln -s $CERT $STORAGE_ROOT/ssl/ssl_certificate.pem
ln -s "$CERT" "$STORAGE_ROOT/ssl/ssl_certificate.pem"
fi
# Generate some Diffie-Hellman cipher bits.
# openssl's default bit length for this is 1024 bits, but we'll create
# 2048-bit parameters per the latest recommendations.
if [ ! -f $STORAGE_ROOT/ssl/dh2048.pem ]; then
openssl dhparam -out $STORAGE_ROOT/ssl/dh2048.pem 2048
if [ ! -f "$STORAGE_ROOT/ssl/dh2048.pem" ]; then
openssl dhparam -out "$STORAGE_ROOT/ssl/dh2048.pem" 2048
fi
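# A quick way to inspect the resulting self-signed certificate later
# (illustrative; not run by setup):
# ```
# openssl x509 -in $STORAGE_ROOT/ssl/ssl_certificate.pem -noout -subject -enddate
# ```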

View File

@ -4,7 +4,7 @@
source setup/functions.sh # load our functions
# Check system setup: Are we running as root on Ubuntu 14.04 on a
# Check system setup: Are we running as root on Ubuntu 22.04 on a
# machine with enough memory? Is /tmp mounted with exec?
# If not, this shows an error and exits.
source setup/preflight.sh
@ -14,7 +14,7 @@ source setup/preflight.sh
# Python may not be able to read/write files. This is also
# in the management daemon startup script and the cron script.
if [ -z `locale -a | grep en_US.utf8` ]; then
if ! locale -a | grep en_US.utf8 > /dev/null; then
# Generate locale if not exists
hide_output locale-gen en_US.UTF-8
fi
@ -46,7 +46,7 @@ fi
# in the first dialog prompt, so we should do this before that starts.
cat > /usr/local/bin/mailinabox << EOF;
#!/bin/bash
cd `pwd`
cd $PWD
source setup/start.sh
EOF
chmod +x /usr/local/bin/mailinabox
@ -60,31 +60,38 @@ source setup/questions.sh
# Run some network checks to make sure setup on this machine makes sense.
# Skip on existing installs since we don't want this to block the ability to
# upgrade, and these checks are also in the control panel status checks.
if [ -z "$DEFAULT_PRIMARY_HOSTNAME" ]; then
if [ -z "$SKIP_NETWORK_CHECKS" ]; then
if [ -z "${DEFAULT_PRIMARY_HOSTNAME:-}" ]; then
if [ -z "${SKIP_NETWORK_CHECKS:-}" ]; then
source setup/network-checks.sh
fi
fi
# Create the STORAGE_USER and STORAGE_ROOT directory if they don't already exist.
#
# Set the directory and all of its parent directories' permissions to world
# readable since it holds files owned by different processes.
#
# If the STORAGE_ROOT is missing the mailinabox.version file that lists a
# migration (schema) number for the files stored there, assume this is a fresh
# installation to that directory and write the file to contain the current
# migration number for this version of Mail-in-a-Box.
if ! id -u $STORAGE_USER >/dev/null 2>&1; then
useradd -m $STORAGE_USER
if ! id -u "$STORAGE_USER" >/dev/null 2>&1; then
useradd -m "$STORAGE_USER"
fi
if [ ! -d $STORAGE_ROOT ]; then
mkdir -p $STORAGE_ROOT
if [ ! -d "$STORAGE_ROOT" ]; then
mkdir -p "$STORAGE_ROOT"
fi
if [ ! -f $STORAGE_ROOT/mailinabox.version ]; then
echo $(setup/migrate.py --current) > $STORAGE_ROOT/mailinabox.version
chown $STORAGE_USER.$STORAGE_USER $STORAGE_ROOT/mailinabox.version
f=$STORAGE_ROOT
while [[ $f != / ]]; do chmod a+rx "$f"; f=$(dirname "$f"); done;
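# (For the default STORAGE_ROOT of /home/user-data, the loop above runs
# chmod a+rx on /home/user-data and /home, stopping before /.)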
if [ ! -f "$STORAGE_ROOT/mailinabox.version" ]; then
setup/migrate.py --current > "$STORAGE_ROOT/mailinabox.version"
chown "$STORAGE_USER:$STORAGE_USER" "$STORAGE_ROOT/mailinabox.version"
fi
# Save the global options in /etc/mailinabox.conf so that standalone
# tools know where to look for data.
# tools know where to look for data. The default MTA_STS_MODE setting
# is blank unless set by an environment variable, but see web.sh for
# how that is interpreted.
cat > /etc/mailinabox.conf << EOF;
STORAGE_USER=$STORAGE_USER
STORAGE_ROOT=$STORAGE_ROOT
@ -93,6 +100,7 @@ PUBLIC_IP=$PUBLIC_IP
PUBLIC_IPV6=$PUBLIC_IPV6
PRIVATE_IP=$PRIVATE_IP
PRIVATE_IPV6=$PRIVATE_IPV6
MTA_STS_MODE=${DEFAULT_MTA_STS_MODE:-enforce}
EOF
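# (Standalone scripts pick these values up with: source /etc/mailinabox.conf)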
# Start service configuration.
@ -106,7 +114,7 @@ source setup/dkim.sh
source setup/spamassassin.sh
source setup/web.sh
source setup/webmail.sh
source setup/owncloud.sh
source setup/nextcloud.sh
source setup/zpush.sh
source setup/management.sh
source setup/munin.sh
@ -114,7 +122,7 @@ source setup/munin.sh
# Wait for the management daemon to start...
until nc -z -w 4 127.0.0.1 10222
do
echo Waiting for the Mail-in-a-Box management daemon to start...
echo "Waiting for the Mail-in-a-Box management daemon to start..."
sleep 2
done
@ -127,38 +135,48 @@ tools/web_update
# fail2ban was first configured, but they should exist now.
restart_service fail2ban
# If DNS is already working, try to provision TLS certificates from Let's Encrypt.
# Suppress extra reasons why domains aren't getting a new certificate.
management/ssl_certificates.py -q
# If there aren't any mail users yet, create one.
source setup/firstuser.sh
# Register with Let's Encrypt, including agreeing to the Terms of Service.
# We'd let certbot ask the user interactively, but when this script is
# run in the recommended curl-pipe-to-bash method there is no TTY and
# certbot will fail if it tries to ask.
if [ ! -d "$STORAGE_ROOT/ssl/lets_encrypt/accounts/acme-v02.api.letsencrypt.org/" ]; then
echo
echo "-----------------------------------------------"
echo "Mail-in-a-Box uses Let's Encrypt to provision free SSL/TLS certificates"
echo "to enable HTTPS connections to your box. We're automatically"
echo "agreeing you to their subscriber agreement. See https://letsencrypt.org."
echo
certbot register --register-unsafely-without-email --agree-tos --config-dir "$STORAGE_ROOT/ssl/lets_encrypt"
fi
# Done.
echo
echo "-----------------------------------------------"
echo
echo Your Mail-in-a-Box is running.
echo "Your Mail-in-a-Box is running."
echo
echo Please log in to the control panel for further instructions at:
echo "Please log in to the control panel for further instructions at:"
echo
if management/status_checks.py --check-primary-hostname; then
# Show the nice URL if it appears to be resolving and has a valid certificate.
echo https://$PRIMARY_HOSTNAME/admin
echo "https://$PRIMARY_HOSTNAME/admin"
echo
echo "If you have a DNS problem put the box's IP address in the URL"
echo "(https://$PUBLIC_IP/admin) but then check the SSL fingerprint:"
openssl x509 -in $STORAGE_ROOT/ssl/ssl_certificate.pem -noout -fingerprint \
| sed "s/SHA1 Fingerprint=//"
echo "(https://$PUBLIC_IP/admin) but then check the TLS fingerprint:"
openssl x509 -in "$STORAGE_ROOT/ssl/ssl_certificate.pem" -noout -fingerprint -sha256\
| sed "s/SHA256 Fingerprint=//i"
else
echo https://$PUBLIC_IP/admin
echo "https://$PUBLIC_IP/admin"
echo
echo You will be alerted that the website has an invalid certificate. Check that
echo the certificate fingerprint matches:
echo "You will be alerted that the website has an invalid certificate. Check that"
echo "the certificate fingerprint matches:"
echo
openssl x509 -in $STORAGE_ROOT/ssl/ssl_certificate.pem -noout -fingerprint \
| sed "s/SHA1 Fingerprint=//"
openssl x509 -in "$STORAGE_ROOT/ssl/ssl_certificate.pem" -noout -fingerprint -sha256\
| sed "s/SHA256 Fingerprint=//i"
echo
echo Then you can confirm the security exception and continue.
echo "Then you can confirm the security exception and continue."
echo
fi

View File

@ -1,3 +1,4 @@
#!/bin/bash
source /etc/mailinabox.conf
source setup/functions.sh # load our functions
@ -11,8 +12,15 @@ source setup/functions.sh # load our functions
#
# First set the hostname in the configuration file, then activate the setting
echo $PRIMARY_HOSTNAME > /etc/hostname
hostname $PRIMARY_HOSTNAME
echo "$PRIMARY_HOSTNAME" > /etc/hostname
hostname "$PRIMARY_HOSTNAME"
# ### Fix permissions
# The default Ubuntu Bionic image on Scaleway throws warnings during setup about incorrect
# permissions (group writeable) set on the following directories.
chmod g-w /etc /etc/default /usr
# ### Add swap space to the system
@ -37,23 +45,23 @@ hostname $PRIMARY_HOSTNAME
# for reference
SWAP_MOUNTED=$(cat /proc/swaps | tail -n+2)
SWAP_IN_FSTAB=$(grep "swap" /etc/fstab)
ROOT_IS_BTRFS=$(grep "\/ .*btrfs" /proc/mounts)
TOTAL_PHYSICAL_MEM=$(head -n 1 /proc/meminfo | awk '{print $2}')
SWAP_IN_FSTAB=$(grep "swap" /etc/fstab || /bin/true)
ROOT_IS_BTRFS=$(grep "\/ .*btrfs" /proc/mounts || /bin/true)
TOTAL_PHYSICAL_MEM=$(head -n 1 /proc/meminfo | awk '{print $2}' || /bin/true)
AVAILABLE_DISK_SPACE=$(df / --output=avail | tail -n 1)
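# (Units for the check below: /proc/meminfo and df report in 1 KB units, so
# 1900000 is roughly 1.9 GB of RAM and 5242880 is 5 GiB of free disk; the
# swap file allocated below is 1024*1024 blocks of 1 KB, i.e. 1 GiB.)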
if
[ -z "$SWAP_MOUNTED" ] &&
[ -z "$SWAP_IN_FSTAB" ] &&
[ ! -e /swapfile ] &&
[ -z "$ROOT_IS_BTRFS" ] &&
[ $TOTAL_PHYSICAL_MEM -lt 1900000 ] &&
[ $AVAILABLE_DISK_SPACE -gt 5242880 ]
[ "$TOTAL_PHYSICAL_MEM" -lt 1900000 ] &&
[ "$AVAILABLE_DISK_SPACE" -gt 5242880 ]
then
echo "Adding a swap file to the system..."
# Allocate and activate the swap file. Allocate in 1 KB chunks;
# doing it in one go could fail on low-memory systems.
dd if=/dev/zero of=/swapfile bs=1024 count=$[1024*1024] status=none
dd if=/dev/zero of=/swapfile bs=1024 count=$((1024*1024)) status=none
if [ -e /swapfile ]; then
chmod 600 /swapfile
hide_output mkswap /swapfile
@ -68,17 +76,17 @@ then
fi
fi
# ### Add Mail-in-a-Box's PPA.
# ### Set log retention policy.
# We've built several .deb packages on our own that we want to include.
# One is a replacement for Ubuntu's stock postgrey package that makes
# some enhancements. The other is dovecot-lucene, a Lucene-based full
# text search plugin for (and by) dovecot, which is not available in
# Ubuntu currently.
#
# So, first ensure add-apt-repository is installed, then use it to install
# the [mail-in-a-box ppa](https://launchpad.net/~mail-in-a-box/+archive/ubuntu/ppa).
# Set the systemd journal log retention from infinite to 10 days,
# since over time the logs take up a large amount of space.
# (See https://discourse.mailinabox.email/t/journalctl-reclaim-space-on-small-mailinabox/6728/11.)
tools/editconf.py /etc/systemd/journald.conf MaxRetentionSec=10day
# ### Add PPAs.
# We install some non-standard Ubuntu packages maintained by other
# third-party providers. First ensure add-apt-repository is installed.
if [ ! -f /usr/bin/add-apt-repository ]; then
echo "Installing add-apt-repository..."
@ -86,23 +94,37 @@ if [ ! -f /usr/bin/add-apt-repository ]; then
apt_install software-properties-common
fi
hide_output add-apt-repository -y ppa:mail-in-a-box/ppa
# Ensure the universe repository is enabled since some of our packages
# come from there and minimal Ubuntu installs may have it turned off.
hide_output add-apt-repository -y universe
# Install the duplicity PPA.
hide_output add-apt-repository -y ppa:duplicity-team/duplicity-release-git
# Stock PHP is now 8.1, but we're transitioning through 8.0 because
# of Nextcloud.
hide_output add-apt-repository -y ppa:ondrej/php
# ### Update Packages
# Update system packages to make sure we have the latest upstream versions of things from Ubuntu.
# Update system packages to make sure we have the latest upstream versions
# of things from Ubuntu, as well as the directory of packages provided by the
# PPAs so we can install those packages later.
echo Updating system packages...
echo "Updating system packages..."
hide_output apt-get update
apt_get_quiet upgrade
# Old kernels pile up over time and take up a lot of disk space, and because of Mail-in-a-Box
# changes there may be other packages that are no longer needed. Clear out anything apt knows
# is safe to delete.
apt_get_quiet autoremove
# ### Install System Packages
# Install basic utilities.
#
# * haveged: Provides extra entropy to /dev/random so it doesn't stall
# when generating random numbers for private keys (e.g. during
# ldns-keygen).
# * unattended-upgrades: Apt tool to install security updates automatically.
# * cron: Runs background processes periodically.
# * ntp: keeps the system time correct
@ -112,12 +134,21 @@ apt_get_quiet upgrade
# * sudo: allows privileged users to execute commands as root without being root
# * coreutils: includes `nproc` tool to report number of processors, mktemp
# * bc: allows us to do math to compute sane defaults
# * openssh-client: provides ssh-keygen
echo Installing system packages...
apt_install python3 python3-dev python3-pip \
netcat-openbsd wget curl git sudo coreutils bc \
haveged pollinate \
unattended-upgrades cron ntp fail2ban
echo "Installing system packages..."
apt_install python3 python3-dev python3-pip python3-setuptools \
netcat-openbsd wget curl git sudo coreutils bc file \
pollinate openssh-client unzip \
unattended-upgrades cron ntp fail2ban rsyslog
# ### Suppress Upgrade Prompts
# When the next Ubuntu LTS comes out, we don't want users to be prompted to upgrade,
# because we don't yet support it.
if [ -f /etc/update-manager/release-upgrades ]; then
tools/editconf.py /etc/update-manager/release-upgrades Prompt=never
rm -f /var/lib/ubuntu-release-upgrader/release-upgrade-available
fi
# ### Set the system timezone
#
@ -133,8 +164,8 @@ apt_install python3 python3-dev python3-pip \
# section) and syslog (see #328). There might be other issues, and it's
# not likely the user will want to change this, so we only ask on first
# setup.
if [ -z "$NONINTERACTIVE" ]; then
if [ ! -f /etc/timezone ] || [ ! -z $FIRST_TIME_SETUP ]; then
if [ -z "${NONINTERACTIVE:-}" ]; then
if [ ! -f /etc/timezone ] || [ -n "${FIRST_TIME_SETUP:-}" ]; then
# If the file is missing or this is the user's first time running
# Mail-in-a-Box setup, run the interactive timezone configuration
# tool.
@ -160,7 +191,6 @@ fi
# * DNSSEC signing keys (see `dns.sh`)
# * our management server's API key (via Python's os.urandom method)
# * Roundcube's SECRET_KEY (`webmail.sh`)
# * ownCloud's administrator account password (`owncloud.sh`)
#
# Why /dev/urandom? It's the same as /dev/random, except that it doesn't wait
# for a constant new stream of entropy. In practice, we only need a little
@ -197,7 +227,7 @@ fi
# hardware entropy to get going, by drawing from /dev/random. haveged makes this
# less likely to stall for very long.
echo Initializing system random number generator...
echo "Initializing system random number generator..."
dd if=/dev/random of=/dev/urandom bs=1 count=32 2> /dev/null
# This is supposedly sufficient. But because we're not sure if hardware entropy
@ -208,6 +238,12 @@ pollinate -q -r
# Between these two, we really ought to be all set.
# We need an SSH key to store backups via rsync; if it doesn't exist, create one
if [ ! -f /root/.ssh/id_rsa_miab ]; then
echo 'Creating SSH key for backup…'
ssh-keygen -t rsa -b 2048 -a 100 -f /root/.ssh/id_rsa_miab -N '' -q
fi
# ### Package maintenance
#
# Allow apt to install system updates automatically every day.
@ -216,7 +252,7 @@ cat > /etc/apt/apt.conf.d/02periodic <<EOF;
APT::Periodic::MaxAge "7";
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::Verbose "1";
APT::Periodic::Verbose "0";
EOF
# ### Firewall
@ -224,22 +260,22 @@ EOF
# Various virtualized environments like Docker and some VPSs don't provide #NODOC
# a kernel that supports iptables. To avoid error-like output in these cases, #NODOC
# we skip this if the user sets DISABLE_FIREWALL=1. #NODOC
if [ -z "$DISABLE_FIREWALL" ]; then
if [ -z "${DISABLE_FIREWALL:-}" ]; then
# Install `ufw` which provides a simple firewall configuration.
apt_install ufw
# Allow incoming connections to SSH.
ufw_allow ssh;
ufw_limit ssh;
# ssh might be running on an alternate port. Use sshd -T to dump sshd's #NODOC
# settings, find the port it is supposedly running on, and open that port #NODOC
# too. #NODOC
SSH_PORT=$(sshd -T 2>/dev/null | grep "^port " | sed "s/port //") #NODOC
if [ ! -z "$SSH_PORT" ]; then
if [ -n "$SSH_PORT" ]; then
if [ "$SSH_PORT" != "22" ]; then
echo Opening alternate SSH port $SSH_PORT. #NODOC
ufw_allow $SSH_PORT #NODOC
echo "Opening alternate SSH port $SSH_PORT." #NODOC
ufw_limit "$SSH_PORT" #NODOC
fi
fi
@ -249,51 +285,84 @@ fi #NODOC
# ### Local DNS Service
# Install a local DNS server, rather than using the DNS server provided by the
# ISP's network configuration.
# Install a local recursive DNS server --- i.e. for DNS queries made by
# local services running on this machine.
#
# We do this to ensure that DNS queries
# that *we* make (i.e. looking up other external domains) perform DNSSEC checks.
# We could use Google's Public DNS, but we don't want to create a dependency on
# Google per our goals of decentralization. `bind9`, as packaged for Ubuntu, has
# DNSSEC enabled by default via "dnssec-validation auto".
# (This is unrelated to the box's public, non-recursive DNS server that
# answers remote queries about domain names hosted on this box. For that
# see dns.sh.)
#
# So we'll be running `bind9` bound to 127.0.0.1 for locally-issued DNS queries
# and `nsd` bound to the public ethernet interface for remote DNS queries asking
# about our domain names. `nsd` is configured later.
# The default systemd-resolved service provides local DNS name resolution. By default it
# is a recursive stub nameserver, which means it simply relays requests to an
# external nameserver, usually provided by your ISP or configured in /etc/systemd/resolved.conf.
#
# This won't work for us for three reasons.
#
# 1) We have higher security goals --- we want DNSSEC to be enforced on all
# DNS queries (some upstream DNS servers do, some don't).
# 2) We will configure postfix to use DANE, which uses DNSSEC to find TLS
# certificates for remote servers. DNSSEC validation *must* be performed
# locally because we can't trust an unencrypted connection to an external
# DNS server.
# 3) DNS-based mail server blacklists (RBLs) typically block large ISP
# DNS servers because they only provide free data to small users. Since
# we use RBLs to block incoming mail from blacklisted IP addresses,
# we have to run our own DNS server. See #1424.
#
# systemd-resolved has a setting to perform local DNSSEC validation on all
# requests (in /etc/systemd/resolved.conf, set DNSSEC=yes), but because it's
# a stub server the main part of a request still goes through an upstream
# DNS server, which won't work for RBLs. So we really need a local recursive
# nameserver.
#
# We'll install `bind9`, which as packaged for Ubuntu, has DNSSEC enabled by default via "dnssec-validation auto".
# We'll bind it to 127.0.0.1 so that it does not interfere with
# the public, non-recursive (authoritative) nameserver `nsd` bound to the public ethernet interfaces.
#
# About the settings:
#
# * RESOLVCONF=yes will have `bind9` take over /etc/resolv.conf to tell
# local services that DNS queries are handled on localhost.
# * Adding -4 to OPTIONS will have `bind9` not listen on IPv6 addresses
# so that we're sure there's no conflict with nsd, our public domain
# name server, on IPV6.
# * The listen-on directive in named.conf.options restricts `bind9` to
# binding to the loopback interface instead of all interfaces.
apt_install bind9 resolvconf
tools/editconf.py /etc/default/bind9 \
RESOLVCONF=yes \
# * The max-recursion-queries directive increases the maximum number of iterative queries.
# If more queries than specified are sent, bind9 returns SERVFAIL. After flushing the cache during system checks,
# we ran into the limit, so we increase it from the default of 75 to 100.
apt_install bind9
tools/editconf.py /etc/default/named \
"OPTIONS=\"-u bind -4\""
if ! grep -q "listen-on " /etc/bind/named.conf.options; then
# Add a listen-on directive if it doesn't exist inside the options block.
sed -i "s/^}/\n\tlisten-on { 127.0.0.1; };\n}/" /etc/bind/named.conf.options
fi
if [ -f /etc/resolvconf/resolv.conf.d/original ]; then
echo "Archiving old resolv.conf (was /etc/resolvconf/resolv.conf.d/original, now /etc/resolvconf/resolv.conf.original)." #NODOC
mv /etc/resolvconf/resolv.conf.d/original /etc/resolvconf/resolv.conf.original #NODOC
if ! grep -q "max-recursion-queries " /etc/bind/named.conf.options; then
# Add a max-recursion-queries directive if it doesn't exist inside the options block.
sed -i "s/^}/\n\tmax-recursion-queries 100;\n}/" /etc/bind/named.conf.options
fi
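# After these two edits, the options block in named.conf.options ends up
# containing lines roughly like (illustrative; other directives omitted):
# ```
# options {
#         ...
#         listen-on { 127.0.0.1; };
#         max-recursion-queries 100;
# };
# ```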
# First we'll disable systemd-resolved's management of resolv.conf and its stub server.
# Breaking the symlink to /run/systemd/resolve/stub-resolv.conf means
# systemd-resolved will read it for DNS servers to use. Put in 127.0.0.1,
# which is where bind9 will be running. Obviously don't do this before
# installing bind9 or else apt won't be able to resolve a server to
# download bind9 from.
rm -f /etc/resolv.conf
tools/editconf.py /etc/systemd/resolved.conf DNSStubListener=no
echo "nameserver 127.0.0.1" > /etc/resolv.conf
# Restart the DNS services.
restart_service bind9
restart_service resolvconf
systemctl restart systemd-resolved
# ### Fail2Ban Service
# Configure the Fail2Ban installation to prevent dumb brute-force attacks against dovecot, postfix, ssh, etc.
rm -f /etc/fail2ban/jail.local # we used to use this file but don't anymore
rm -f /etc/fail2ban/jail.d/defaults-debian.conf # removes default config so we can manage all of fail2ban rules in one config
cat conf/fail2ban/jails.conf \
| sed "s/PUBLIC_IPV6/$PUBLIC_IPV6/g" \
| sed "s/PUBLIC_IP/$PUBLIC_IP/g" \
| sed "s#STORAGE_ROOT#$STORAGE_ROOT#" \
> /etc/fail2ban/jail.d/mailinabox.conf
@ -305,3 +374,5 @@ cp -f conf/fail2ban/filter.d/* /etc/fail2ban/filter.d/
# scripts will ensure the files exist and then fail2ban is given another
# restart at the very end of setup.
restart_service fail2ban
systemctl enable fail2ban

View File

@ -8,7 +8,7 @@ source /etc/mailinabox.conf # load global vars
# Some Ubuntu images start off with Apache. Remove it since we
# will use nginx. Use autoremove to remove any Apache dependencies.
if [ -f /usr/sbin/apache2 ]; then
echo Removing apache...
echo "Removing apache..."
hide_output apt-get -y purge apache2 apache2-*
hide_output apt-get -y --purge autoremove
fi
@ -18,7 +18,8 @@ fi
# Turn off nginx's default website.
echo "Installing Nginx (web server)..."
apt_install nginx php5-fpm
apt_install nginx php"${PHP_VER}"-cli php"${PHP_VER}"-fpm idn2
rm -f /etc/nginx/sites-enabled/default
@ -30,26 +31,69 @@ sed "s#STORAGE_ROOT#$STORAGE_ROOT#" \
conf/nginx-ssl.conf > /etc/nginx/conf.d/ssl.conf
# Fix some nginx defaults.
#
# The server_names_hash_bucket_size seems to prevent long domain names!
# The default, according to nginx's docs, depends on "the size of the
# processor's cache line." It could be as low as 32. We fixed it at
# 64 in 2014 to accommodate a long domain name (20 characters?). But
# even at 64, a 58-character domain name won't work (#93), so now
# we're going up to 128.
#
# Drop TLSv1.0, TLSv1.1, following the Mozilla "Intermediate" recommendations
# at https://ssl-config.mozilla.org/#server=nginx&server-version=1.17.0&config=intermediate&openssl-version=1.1.1.
tools/editconf.py /etc/nginx/nginx.conf -s \
server_names_hash_bucket_size="128;"
server_names_hash_bucket_size="128;" \
ssl_protocols="TLSv1.2 TLSv1.3;"
# Tell PHP not to expose its version number in the X-Powered-By header.
tools/editconf.py /etc/php5/fpm/php.ini -c ';' \
tools/editconf.py /etc/php/"$PHP_VER"/fpm/php.ini -c ';' \
expose_php=Off
# Set PHPs default charset to UTF-8, since we use it. See #367.
tools/editconf.py /etc/php5/fpm/php.ini -c ';' \
tools/editconf.py /etc/php/"$PHP_VER"/fpm/php.ini -c ';' \
default_charset="UTF-8"
# Bump up PHP's max_children to support more concurrent connections
tools/editconf.py /etc/php5/fpm/pool.d/www.conf -c ';' \
pm.max_children=8
# Configure the path environment for php-fpm
tools/editconf.py /etc/php/"$PHP_VER"/fpm/pool.d/www.conf -c ';' \
env[PATH]=/usr/local/bin:/usr/bin:/bin
# Configure php-fpm based on the amount of memory the machine has
# This is based on the nextcloud manual for performance tuning: https://docs.nextcloud.com/server/17/admin_manual/installation/server_tuning.html
# Some synchronisation issues can occur when many people access the site at once.
# The pm=ondemand setting is used for memory-constrained machines (< 2 GB); this is copied over from PR: 1216
TOTAL_PHYSICAL_MEM=$(head -n 1 /proc/meminfo | awk '{print $2}' || /bin/true)
if [ "$TOTAL_PHYSICAL_MEM" -lt 1000000 ]
then
tools/editconf.py /etc/php/"$PHP_VER"/fpm/pool.d/www.conf -c ';' \
pm=ondemand \
pm.max_children=8 \
pm.start_servers=2 \
pm.min_spare_servers=1 \
pm.max_spare_servers=3
elif [ "$TOTAL_PHYSICAL_MEM" -lt 2000000 ]
then
tools/editconf.py /etc/php/"$PHP_VER"/fpm/pool.d/www.conf -c ';' \
pm=ondemand \
pm.max_children=16 \
pm.start_servers=4 \
pm.min_spare_servers=1 \
pm.max_spare_servers=6
elif [ "$TOTAL_PHYSICAL_MEM" -lt 3000000 ]
then
tools/editconf.py /etc/php/"$PHP_VER"/fpm/pool.d/www.conf -c ';' \
pm=dynamic \
pm.max_children=60 \
pm.start_servers=6 \
pm.min_spare_servers=3 \
pm.max_spare_servers=9
else
tools/editconf.py /etc/php/"$PHP_VER"/fpm/pool.d/www.conf -c ';' \
pm=dynamic \
pm.max_children=120 \
pm.start_servers=12 \
pm.min_spare_servers=6 \
pm.max_spare_servers=18
fi
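# (For example, a machine reporting about 2050000 kB of memory falls into the
# third branch above and gets pm=dynamic with pm.max_children=60.)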
# Other nginx settings will be configured by the management service
# since it depends on what domains we're serving, which we don't know
@ -78,34 +122,33 @@ cat conf/mozilla-autoconfig.xml \
> /var/lib/mailinabox/mozilla-autoconfig.xml
chmod a+r /var/lib/mailinabox/mozilla-autoconfig.xml
# Create a generic mta-sts.txt file which is exposed via the
# nginx configuration at /.well-known/mta-sts.txt
# more documentation is available on:
# https://www.uriports.com/blog/mta-sts-explained/
# default mode is "enforce". In /etc/mailinabox.conf change
# "MTA_STS_MODE=testing" which means "Messages will be delivered
# as though there was no failure but a report will be sent if
# TLS-RPT is configured" if you are not sure you want this yet. Or "none".
PUNY_PRIMARY_HOSTNAME=$(echo "$PRIMARY_HOSTNAME" | idn2)
cat conf/mta-sts.txt \
| sed "s/MODE/${MTA_STS_MODE}/" \
| sed "s/PRIMARY_HOSTNAME/$PUNY_PRIMARY_HOSTNAME/" \
> /var/lib/mailinabox/mta-sts.txt
chmod a+r /var/lib/mailinabox/mta-sts.txt
# make a default homepage
if [ -d $STORAGE_ROOT/www/static ]; then mv $STORAGE_ROOT/www/static $STORAGE_ROOT/www/default; fi # migration #NODOC
mkdir -p $STORAGE_ROOT/www/default
if [ ! -f $STORAGE_ROOT/www/default/index.html ]; then
cp conf/www_default.html $STORAGE_ROOT/www/default/index.html
if [ -d "$STORAGE_ROOT/www/static" ]; then mv "$STORAGE_ROOT/www/static" "$STORAGE_ROOT/www/default"; fi # migration #NODOC
mkdir -p "$STORAGE_ROOT/www/default"
if [ ! -f "$STORAGE_ROOT/www/default/index.html" ]; then
cp conf/www_default.html "$STORAGE_ROOT/www/default/index.html"
fi
chown -R $STORAGE_USER $STORAGE_ROOT/www
# We previously installed a custom init script to start the PHP FastCGI daemon. #NODOC
# Remove it now that we're using php5-fpm. #NODOC
if [ -L /etc/init.d/php-fastcgi ]; then
echo "Removing /etc/init.d/php-fastcgi, php5-cgi..." #NODOC
rm -f /etc/init.d/php-fastcgi #NODOC
hide_output update-rc.d php-fastcgi remove #NODOC
apt-get -y purge php5-cgi #NODOC
fi
# Remove obsoleted scripts. #NODOC
# exchange-autodiscover is now handled by Z-Push. #NODOC
for f in webfinger exchange-autodiscover; do #NODOC
rm -f /usr/local/bin/mailinabox-$f.php #NODOC
done #NODOC
chown -R "$STORAGE_USER" "$STORAGE_ROOT/www"
# Start services.
restart_service nginx
restart_service php5-fpm
restart_service php"$PHP_VER"-fpm
# Open ports.
ufw_allow http
ufw_allow https

setup/webmail.sh Executable file → Normal file
View File

@ -22,150 +22,203 @@ source /etc/mailinabox.conf # load global vars
echo "Installing Roundcube (webmail)..."
apt_install \
dbconfig-common \
php5 php5-sqlite php5-mcrypt php5-intl php5-json php5-common php-auth php-net-smtp php-net-socket php-net-sieve php-mail-mime php-crypt-gpg php5-gd php5-pspell \
tinymce libjs-jquery libjs-jquery-mousewheel libmagic1
apt_get_quiet remove php-mail-mimedecode # no longer needed since Roundcube 1.1.3
# We used to install Roundcube from Ubuntu, without triggering the dependencies #NODOC
# on Apache and MySQL, by downloading the debs and installing them manually. #NODOC
# Now that we're beyond that, get rid of those debs before installing from source. #NODOC
apt-get purge -qq -y roundcube* #NODOC
php"${PHP_VER}"-cli php"${PHP_VER}"-sqlite3 php"${PHP_VER}"-intl php"${PHP_VER}"-common php"${PHP_VER}"-curl php"${PHP_VER}"-imap \
php"${PHP_VER}"-gd php"${PHP_VER}"-pspell php"${PHP_VER}"-mbstring libjs-jquery libjs-jquery-mousewheel libmagic1 \
sqlite3
# Install Roundcube from source if it is not already present or if it is out of date.
# Combine the Roundcube version number with the commit hash of vacation_sieve to track
# whether we have the latest version.
VERSION=1.2.1
HASH=81fbfba4683522f6e54006d0300a48e6da3f3bbd
VACATION_SIEVE_VERSION=91ea6f52216390073d1f5b70b5f6bea0bfaee7e5
PERSISTENT_LOGIN_VERSION=1e9d724476a370ce917a2fcd5b3217b0c306c24e
HTML5_NOTIFIER_VERSION=4b370e3cd60dabd2f428a26f45b677ad1b7118d5
UPDATE_KEY=$VERSION:$VACATION_SIEVE_VERSION:$PERSISTENT_LOGIN_VERSION:$HTML5_NOTIFIER_VERSION:a
# Combine the Roundcube version number with the commit hash of plugins to track
# whether we have the latest version of everything.
# For the latest versions, see:
# https://github.com/roundcube/roundcubemail/releases
# https://github.com/mfreiholz/persistent_login/commits/master
# https://github.com/stremlau/html5_notifier/commits/master
# https://github.com/mstilkerich/rcmcarddav/releases
# The easiest way to get the package hashes is to run this script and get the hash from
# the error message.
VERSION=1.6.6
HASH=7705d2736890c49e7ae3ac75e3ae00ba56187056
PERSISTENT_LOGIN_VERSION=bde7b6840c7d91de627ea14e81cf4133cbb3c07a # version 5.3
HTML5_NOTIFIER_VERSION=68d9ca194212e15b3c7225eb6085dbcf02fd13d7 # version 0.6.4+
CARDDAV_VERSION=4.4.3
CARDDAV_HASH=74f8ba7aee33e78beb9de07f7f44b81f6071b644
UPDATE_KEY=$VERSION:$PERSISTENT_LOGIN_VERSION:$HTML5_NOTIFIER_VERSION:$CARDDAV_VERSION
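# (With the values above, UPDATE_KEY expands to
# "1.6.6:bde7b6840c7d91de627ea14e81cf4133cbb3c07a:68d9ca194212e15b3c7225eb6085dbcf02fd13d7:4.4.3".)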
# paths that are often reused.
RCM_DIR=/usr/local/lib/roundcubemail
RCM_PLUGIN_DIR=${RCM_DIR}/plugins
RCM_CONFIG=${RCM_DIR}/config/config.inc.php
needs_update=0 #NODOC
if [ ! -f /usr/local/lib/roundcubemail/version ]; then
# not installed yet #NODOC
needs_update=1 #NODOC
elif [[ "$UPDATE_KEY" != `cat /usr/local/lib/roundcubemail/version` ]]; then
elif [[ "$UPDATE_KEY" != $(cat /usr/local/lib/roundcubemail/version) ]]; then
# checks if the version is what we want
needs_update=1 #NODOC
fi
if [ $needs_update == 1 ]; then
# if upgrading from 1.3.x, clear the temp_dir
if [ -f /usr/local/lib/roundcubemail/version ]; then
if [ "$(cat /usr/local/lib/roundcubemail/version | cut -c1-3)" == '1.3' ]; then
find /var/tmp/roundcubemail/ -type f ! -name 'RCMTEMP*' -delete
fi
fi
# install roundcube
wget_verify \
https://github.com/roundcube/roundcubemail/releases/download/$VERSION/roundcubemail-$VERSION.tar.gz \
https://github.com/roundcube/roundcubemail/releases/download/$VERSION/roundcubemail-$VERSION-complete.tar.gz \
$HASH \
/tmp/roundcube.tgz
tar -C /usr/local/lib --no-same-owner -zxf /tmp/roundcube.tgz
rm -rf /usr/local/lib/roundcubemail
mv /usr/local/lib/roundcubemail-$VERSION/ /usr/local/lib/roundcubemail
mv /usr/local/lib/roundcubemail-$VERSION/ $RCM_DIR
rm -f /tmp/roundcube.tgz
# install roundcube autoreply/vacation plugin
git_clone https://github.com/arodier/Roundcube-Plugins.git $VACATION_SIEVE_VERSION plugins/vacation_sieve /usr/local/lib/roundcubemail/plugins/vacation_sieve
# install roundcube persistent_login plugin
git_clone https://github.com/mfreiholz/Roundcube-Persistent-Login-Plugin.git $PERSISTENT_LOGIN_VERSION '' /usr/local/lib/roundcubemail/plugins/persistent_login
git_clone https://github.com/mfreiholz/Roundcube-Persistent-Login-Plugin.git $PERSISTENT_LOGIN_VERSION '' ${RCM_PLUGIN_DIR}/persistent_login
# install roundcube html5_notifier plugin
git_clone https://github.com/kitist/html5_notifier.git $HTML5_NOTIFIER_VERSION '' /usr/local/lib/roundcubemail/plugins/html5_notifier
git_clone https://github.com/kitist/html5_notifier.git $HTML5_NOTIFIER_VERSION '' ${RCM_PLUGIN_DIR}/html5_notifier
# download and verify the full release of the carddav plugin
wget_verify \
https://github.com/mstilkerich/rcmcarddav/releases/download/v${CARDDAV_VERSION}/carddav-v${CARDDAV_VERSION}.tar.gz \
$CARDDAV_HASH \
/tmp/carddav.tar.gz
# unzip and cleanup
tar -C ${RCM_PLUGIN_DIR} -zxf /tmp/carddav.tar.gz
rm -f /tmp/carddav.tar.gz
# record the version we've installed
echo $UPDATE_KEY > /usr/local/lib/roundcubemail/version
echo $UPDATE_KEY > ${RCM_DIR}/version
fi
# ### Configuring Roundcube
# Generate a safe 24-character secret key of safe characters.
SECRET_KEY=$(dd if=/dev/urandom bs=1 count=18 2>/dev/null | base64 | fold -w 24 | head -n 1)
# Generate a secret key of PHP-string-safe characters appropriate
# for the cipher algorithm selected below.
SECRET_KEY=$(dd if=/dev/urandom bs=1 count=32 2>/dev/null | base64 | sed s/=//g)
# Create a configuration file.
#
# For security, temp and log files are not stored in the default locations
# which are inside the roundcube sources directory. We put them instead
# in normal places.
cat > /usr/local/lib/roundcubemail/config/config.inc.php <<EOF;
cat > $RCM_CONFIG <<EOF;
<?php
/*
* Do not edit. Written by Mail-in-a-Box. Regenerated on updates.
*/
\$config = array();
\$config['log_dir'] = '/var/log/roundcubemail/';
\$config['temp_dir'] = '/tmp/roundcubemail/';
\$config['temp_dir'] = '/var/tmp/roundcubemail/';
\$config['db_dsnw'] = 'sqlite:///$STORAGE_ROOT/mail/roundcube/roundcube.sqlite?mode=0640';
\$config['default_host'] = 'ssl://localhost';
\$config['default_port'] = 993;
\$config['imap_host'] = 'ssl://localhost:993';
\$config['imap_conn_options'] = array(
'ssl' => array(
'verify_peer' => false,
'verify_peer_name' => false,
),
);
\$config['imap_timeout'] = 15;
\$config['smtp_server'] = 'tls://127.0.0.1';
\$config['smtp_port'] = 587;
\$config['smtp_user'] = '%u';
\$config['smtp_pass'] = '%p';
\$config['smtp_host'] = 'tls://127.0.0.1';
\$config['smtp_conn_options'] = array(
'ssl' => array(
'verify_peer' => false,
'verify_peer_name' => false,
),
);
\$config['support_url'] = 'https://mailinabox.email/';
\$config['product_name'] = '$PRIMARY_HOSTNAME Webmail';
\$config['des_key'] = '$SECRET_KEY';
\$config['plugins'] = array('html5_notifier', 'archive', 'zipdownload', 'password', 'managesieve', 'jqueryui', 'vacation_sieve', 'persistent_login');
\$config['skin'] = 'classic';
\$config['cipher_method'] = 'AES-256-CBC'; # persistent login cookie and potentially other things
\$config['des_key'] = '$SECRET_KEY'; # 37 characters -> ~256 bits for AES-256, see above
\$config['plugins'] = array('html5_notifier', 'archive', 'zipdownload', 'password', 'managesieve', 'jqueryui', 'persistent_login', 'carddav');
\$config['skin'] = 'elastic';
\$config['login_autocomplete'] = 2;
\$config['login_username_filter'] = 'email';
\$config['password_charset'] = 'UTF-8';
\$config['junk_mbox'] = 'Spam';
/* ensure roundcube session ids aren't leaked to other parts of the server */
\$config['session_path'] = '/mail/';
/* prevent CSRF, requires php 7.3+ */
\$config['session_samesite'] = 'Strict';
?>
EOF
# Configure vaction_sieve.
cat > /usr/local/lib/roundcubemail/plugins/vacation_sieve/config.inc.php <<EOF;
# Configure CardDav
cat > ${RCM_PLUGIN_DIR}/carddav/config.inc.php <<EOF;
<?php
/* Do not edit. Written by Mail-in-a-Box. Regenerated on updates. */
\$rcmail_config['vacation_sieve'] = array(
'date_format' => 'd/m/Y',
'working_hours' => array(8,18),
'msg_format' => 'text',
'logon_transform' => array('#([a-z])[a-z]+(\.|\s)([a-z])#i', '\$1\$3'),
'transfer' => array(
'mode' => 'managesieve',
'ms_activate_script' => true,
'host' => '127.0.0.1',
'port' => '4190',
'usetls' => false,
'path' => 'vacation',
)
\$prefs['_GLOBAL']['hide_preferences'] = true;
\$prefs['_GLOBAL']['suppress_version_warning'] = true;
\$prefs['ownCloud'] = array(
'name' => 'ownCloud',
'username' => '%u', // login username
'password' => '%p', // login password
'url' => 'https://${PRIMARY_HOSTNAME}/cloud/remote.php/dav/addressbooks/users/%u/contacts/',
'active' => true,
'readonly' => false,
'refresh_time' => '02:00:00',
'fixed' => array('username','password'),
'preemptive_auth' => '1',
'hide' => false,
);
?>
EOF
# Create writable directories.
mkdir -p /var/log/roundcubemail /tmp/roundcubemail $STORAGE_ROOT/mail/roundcube
chown -R www-data.www-data /var/log/roundcubemail /tmp/roundcubemail $STORAGE_ROOT/mail/roundcube
mkdir -p /var/log/roundcubemail /var/tmp/roundcubemail "$STORAGE_ROOT/mail/roundcube"
chown -R www-data:www-data /var/log/roundcubemail /var/tmp/roundcubemail "$STORAGE_ROOT/mail/roundcube"
# Ensure the log file monitored by fail2ban exists, or else fail2ban can't start.
sudo -u www-data touch /var/log/roundcubemail/errors
sudo -u www-data touch /var/log/roundcubemail/errors.log
# Password changing plugin settings
# The config comes empty by default, so we need the settings
# The config comes empty by default, so we need the settings
# we're not planning to change in config.inc.dist...
cp /usr/local/lib/roundcubemail/plugins/password/config.inc.php.dist \
/usr/local/lib/roundcubemail/plugins/password/config.inc.php
cp ${RCM_PLUGIN_DIR}/password/config.inc.php.dist \
${RCM_PLUGIN_DIR}/password/config.inc.php
tools/editconf.py /usr/local/lib/roundcubemail/plugins/password/config.inc.php \
"\$config['password_minimum_length']=6;" \
tools/editconf.py ${RCM_PLUGIN_DIR}/password/config.inc.php \
"\$config['password_minimum_length']=8;" \
"\$config['password_db_dsn']='sqlite:///$STORAGE_ROOT/mail/users.sqlite';" \
"\$config['password_query']='UPDATE users SET password=%D WHERE email=%u';" \
"\$config['password_dovecotpw']='/usr/bin/doveadm pw';" \
"\$config['password_dovecotpw_method']='SHA512-CRYPT';" \
"\$config['password_dovecotpw_with_method']=true;"
"\$config['password_query']='UPDATE users SET password=%P WHERE email=%u';" \
"\$config['password_algorithm']='sha512-crypt';" \
"\$config['password_algorithm_prefix']='{SHA512-CRYPT}';"
# so PHP can use doveadm, for the password changing plugin
usermod -a -G dovecot www-data
# set permissions so that PHP can use users.sqlite
# could use dovecot instead of www-data, but not sure it matters
chown root.www-data $STORAGE_ROOT/mail
chmod 775 $STORAGE_ROOT/mail
chown root.www-data $STORAGE_ROOT/mail/users.sqlite
chmod 664 $STORAGE_ROOT/mail/users.sqlite
chown root:www-data "$STORAGE_ROOT/mail"
chmod 775 "$STORAGE_ROOT/mail"
chown root:www-data "$STORAGE_ROOT/mail/users.sqlite"
chmod 664 "$STORAGE_ROOT/mail/users.sqlite"
# Run Roundcube database migration script, if the database exists (it's created by
# Roundcube on first use).
if [ -f $STORAGE_ROOT/mail/roundcube/roundcube.sqlite ]; then
/usr/local/lib/roundcubemail/bin/updatedb.sh --dir /usr/local/lib/roundcubemail/SQL --package roundcube
fi
# Fix Carddav permissions:
chown -f -R root:www-data ${RCM_PLUGIN_DIR}/carddav
# root:www-data need all permissions, others only read
chmod -R 774 ${RCM_PLUGIN_DIR}/carddav
# Run Roundcube database migration script (database is created if it does not exist)
php"$PHP_VER" ${RCM_DIR}/bin/updatedb.sh --dir ${RCM_DIR}/SQL --package roundcube
chown www-data:www-data "$STORAGE_ROOT/mail/roundcube/roundcube.sqlite"
chmod 664 "$STORAGE_ROOT/mail/roundcube/roundcube.sqlite"
# Patch the Roundcube code to eliminate an issue that causes postfix to reject our sqlite
# user database (see https://github.com/mail-in-a-box/mailinabox/issues/2185)
sed -i.miabold 's/^[^#]\+.\+PRAGMA journal_mode = WAL.\+$/#&/' \
/usr/local/lib/roundcubemail/program/lib/Roundcube/db/sqlite.php
# Because Roundcube wants to set the PRAGMA we just deleted from the source, we apply it here
# to the roundcube database (see https://github.com/roundcube/roundcubemail/issues/8035)
# Database should exist, created by migration script
hide_output sqlite3 "$STORAGE_ROOT/mail/roundcube/roundcube.sqlite" 'PRAGMA journal_mode=WAL;'
# Enable PHP modules.
php5enmod mcrypt
restart_service php5-fpm
phpenmod -v "$PHP_VER" imap
restart_service php"$PHP_VER"-fpm

View File

@ -17,25 +17,40 @@ source /etc/mailinabox.conf # load global vars
echo "Installing Z-Push (Exchange/ActiveSync server)..."
apt_install \
php-soap php5-imap libawl-php php5-xsl
php"${PHP_VER}"-soap php"${PHP_VER}"-imap libawl-php php"$PHP_VER"-xml
php5enmod imap
phpenmod -v "$PHP_VER" imap
# Copy Z-Push into place.
TARGETHASH=80cbe53de4ab8dd598d1f2af6f0a23fa396c529a
VERSION=2.7.1
TARGETHASH=f15c566b1ad50de24f3f08f505f0c3d8155c2d0d
needs_update=0 #NODOC
if [ ! -f /usr/local/lib/z-push/version ]; then
needs_update=1 #NODOC
elif [[ $TARGETHASH != `cat /usr/local/lib/z-push/version` ]]; then
elif [[ $VERSION != $(cat /usr/local/lib/z-push/version) ]]; then
# checks if the version is what we want
needs_update=1 #NODOC
fi
if [ $needs_update == 1 ]; then
git_clone https://github.com/fmbiete/Z-Push-contrib $TARGETHASH '' /usr/local/lib/z-push
# Download
wget_verify "https://github.com/Z-Hub/Z-Push/archive/refs/tags/$VERSION.zip" $TARGETHASH /tmp/z-push.zip
# Extract into place.
rm -rf /usr/local/lib/z-push /tmp/z-push
unzip -q /tmp/z-push.zip -d /tmp/z-push
mv /tmp/z-push/*/src /usr/local/lib/z-push
rm -rf /tmp/z-push.zip /tmp/z-push
# Create admin and top scripts with PHP_VER
rm -f /usr/sbin/z-push-{admin,top}
ln -s /usr/local/lib/z-push/z-push-admin.php /usr/sbin/z-push-admin
ln -s /usr/local/lib/z-push/z-push-top.php /usr/sbin/z-push-top
echo $TARGETHASH > /usr/local/lib/z-push/version
echo '#!/bin/bash' > /usr/sbin/z-push-admin
echo php"$PHP_VER" /usr/local/lib/z-push/z-push-admin.php '"$@"' >> /usr/sbin/z-push-admin
chmod 755 /usr/sbin/z-push-admin
echo '#!/bin/bash' > /usr/sbin/z-push-top
echo php"$PHP_VER" /usr/local/lib/z-push/z-push-top.php '"$@"' >> /usr/sbin/z-push-top
chmod 755 /usr/sbin/z-push-top
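# Each generated wrapper ends up looking like this (illustrative, assuming
# PHP_VER is 8.0):
# ```
# #!/bin/bash
# php8.0 /usr/local/lib/z-push/z-push-admin.php "$@"
# ```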
echo $VERSION > /usr/local/lib/z-push/version
fi
# Configure default config.
@ -53,6 +68,7 @@ cp conf/zpush/backend_combined.php /usr/local/lib/z-push/backend/combined/config
# Configure IMAP
rm -f /usr/local/lib/z-push/backend/imap/config.php
cp conf/zpush/backend_imap.php /usr/local/lib/z-push/backend/imap/config.php
sed -i "s%STORAGE_ROOT%$STORAGE_ROOT%" /usr/local/lib/z-push/backend/imap/config.php
# Configure CardDav
rm -f /usr/local/lib/z-push/backend/carddav/config.php
@ -66,6 +82,7 @@ cp conf/zpush/backend_caldav.php /usr/local/lib/z-push/backend/caldav/config.php
rm -f /usr/local/lib/z-push/autodiscover/config.php
cp conf/zpush/autodiscover_config.php /usr/local/lib/z-push/autodiscover/config.php
sed -i "s/PRIMARY_HOSTNAME/$PRIMARY_HOSTNAME/" /usr/local/lib/z-push/autodiscover/config.php
sed -i "s^define('TIMEZONE', .*^define('TIMEZONE', '$(cat /etc/timezone)');^" /usr/local/lib/z-push/autodiscover/config.php
# Some directories it will use.
@ -91,4 +108,8 @@ EOF
# Restart service.
restart_service php5-fpm
restart_service php"$PHP_VER"-fpm
# Fix states after upgrade
hide_output php"$PHP_VER" /usr/local/lib/z-push/z-push-admin.php -a fixstates
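For clarity, the switch from git_clone to wget_verify above pins the Z-Push release archive to a SHA-1 hash before it is installed. A rough Python sketch of the same idea (purely illustrative, not the project's helper):
#!/usr/bin/python3
# Download the pinned Z-Push release and refuse to use it if the SHA-1
# digest does not match the expected value (the hash from the script above).
import hashlib, sys, urllib.request
VERSION = "2.7.1"
EXPECTED_SHA1 = "f15c566b1ad50de24f3f08f505f0c3d8155c2d0d"
url = f"https://github.com/Z-Hub/Z-Push/archive/refs/tags/{VERSION}.zip"
data = urllib.request.urlopen(url).read()
if hashlib.sha1(data).hexdigest() != EXPECTED_SHA1:
    sys.exit("z-push.zip: checksum mismatch, aborting")
with open("/tmp/z-push.zip", "wb") as f:
    f.write(data)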


@ -6,15 +6,15 @@
# try to log in to.
######################################################################
import sys, os, time, functools
import sys, os, time
# parse command line
if len(sys.argv) != 3:
print("Usage: tests/fail2ban.py \"ssh user@hostname\" hostname")
if len(sys.argv) != 4:
print('Usage: tests/fail2ban.py "ssh user@hostname" hostname owncloud_user')
sys.exit(1)
ssh_command, hostname = sys.argv[1:3]
ssh_command, hostname, owncloud_user = sys.argv[1:4]
# define some test types
@ -24,7 +24,6 @@ socket.setdefaulttimeout(10)
class IsBlocked(Exception):
"""Tests raise this exception when it appears that a fail2ban
jail is in effect, i.e. on a connection refused error."""
pass
def smtp_test():
import smtplib
@ -33,13 +32,14 @@ def smtp_test():
server = smtplib.SMTP(hostname, 587)
except ConnectionRefusedError:
# looks like fail2ban worked
raise IsBlocked()
raise IsBlocked
server.starttls()
server.ehlo_or_helo_if_needed()
try:
server.login("fakeuser", "fakepassword")
raise Exception("authentication didn't fail")
msg = "authentication didn't fail"
raise Exception(msg)
except smtplib.SMTPAuthenticationError:
# authentication should fail
pass
@ -57,11 +57,56 @@ def imap_test():
M = imaplib.IMAP4_SSL(hostname)
except ConnectionRefusedError:
# looks like fail2ban worked
raise IsBlocked()
raise IsBlocked
try:
M.login("fakeuser", "fakepassword")
raise Exception("authentication didn't fail")
msg = "authentication didn't fail"
raise Exception(msg)
except imaplib.IMAP4.error:
# authentication should fail
pass
finally:
M.logout() # shuts down connection, has nothing to do with login()
def pop_test():
import poplib
try:
M = poplib.POP3_SSL(hostname)
except ConnectionRefusedError:
# looks like fail2ban worked
raise IsBlocked
try:
M.user('fakeuser')
try:
M.pass_('fakepassword')
except poplib.error_proto:
# Authentication should fail.
M = None # don't .quit()
return
M.list()
msg = "authentication didn't fail"
raise Exception(msg)
finally:
if M:
M.quit()
def managesieve_test():
# We don't have a Python sieve client, so we'll
# just run the IMAP client and see what happens.
import imaplib
try:
M = imaplib.IMAP4(hostname, 4190)
except ConnectionRefusedError:
# looks like fail2ban worked
raise IsBlocked
try:
M.login("fakeuser", "fakepassword")
msg = "authentication didn't fail"
raise Exception(msg)
except imaplib.IMAP4.error:
# authentication should fail
pass
@ -87,17 +132,17 @@ def http_test(url, expected_status, postdata=None, qsargs=None, auth=None):
headers={'User-Agent': 'Mail-in-a-Box fail2ban tester'},
timeout=8,
verify=False) # don't bother with HTTPS validation, it may not be configured yet
except requests.exceptions.ConnectTimeout as e:
raise IsBlocked()
except requests.exceptions.ConnectTimeout:
raise IsBlocked
except requests.exceptions.ConnectionError as e:
if "Connection refused" in str(e):
raise IsBlocked()
raise IsBlocked
raise # some other unexpected condition
# return response status code
if r.status_code != expected_status:
r.raise_for_status() # anything but 200
raise IOError("Got unexpected status code %s." % r.status_code)
raise OSError("Got unexpected status code %s." % r.status_code)
# define how to run a test
@ -107,7 +152,7 @@ def restart_fail2ban_service(final=False):
if not final:
# Stop recidive jails during testing.
command += " && sudo fail2ban-client stop recidive"
os.system("%s \"%s\"" % (ssh_command, command))
os.system(f'{ssh_command} "{command}"')
def testfunc_runner(i, testfunc, *args):
print(i+1, end=" ", flush=True)
@ -121,7 +166,6 @@ def run_test(testfunc, args, count, within_seconds, parallel):
# run testfunc sequentially and still get to count requests within
# the required time. So we split the requests across threads.
import requests.exceptions
from multiprocessing import Pool
restart_fail2ban_service()
@ -137,7 +181,7 @@ def run_test(testfunc, args, count, within_seconds, parallel):
# Distribute the requests across the pool.
asyncresults = []
for i in range(count):
ar = p.apply_async(testfunc_runner, [i, testfunc] + list(args))
ar = p.apply_async(testfunc_runner, [i, testfunc, *list(args)])
asyncresults.append(ar)
# Wait for all runs to finish.
@ -183,14 +227,20 @@ if __name__ == "__main__":
# IMAP
run_test(imap_test, [], 20, 30, 4)
# POP
run_test(pop_test, [], 20, 30, 4)
# Managesieve
run_test(managesieve_test, [], 20, 30, 4)
# Mail-in-a-Box control panel
run_test(http_test, ["/admin/me", 200], 20, 30, 1)
run_test(http_test, ["/admin/login", 200], 20, 30, 1)
# Munin via the Mail-in-a-Box control panel
run_test(http_test, ["/admin/munin/", 401], 20, 30, 1)
# ownCloud
run_test(http_test, ["/cloud/remote.php/webdav", 401, None, None, ["aa", "aa"]], 20, 120, 1)
run_test(http_test, ["/cloud/remote.php/webdav", 401, None, None, [owncloud_user, "aa"]], 20, 120, 1)
# restart fail2ban so that this client machine is no longer blocked
restart_fail2ban_service(final=True)
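Note that with the POP, ManageSieve, and ownCloud-login additions above, the test now takes a third argument, e.g. python3 tests/fail2ban.py "ssh user@box.example.com" box.example.com user@example.com (the hostname and user here are placeholders), where the last value is an existing account used for the /cloud/remote.php/webdav probe.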


@ -7,7 +7,7 @@
# where ipaddr is the IP address of your Mail-in-a-Box
# and hostname is the domain name to check the DNS for.
import sys, re, difflib
import sys, re
import dns.reversename, dns.resolver
if len(sys.argv) < 3:
@ -27,10 +27,10 @@ def test(server, description):
("ns2." + primary_hostname, "A", ipaddr),
("www." + hostname, "A", ipaddr),
(hostname, "MX", "10 " + primary_hostname + "."),
(hostname, "TXT", "\"v=spf1 mx -all\""),
("mail._domainkey." + hostname, "TXT", "\"v=DKIM1; k=rsa; s=email; \" \"p=__KEY__\""),
(hostname, "TXT", '"v=spf1 mx -all"'),
("mail._domainkey." + hostname, "TXT", '"v=DKIM1; k=rsa; s=email; " "p=__KEY__"'),
#("_adsp._domainkey." + hostname, "TXT", "\"dkim=all\""),
("_dmarc." + hostname, "TXT", "\"v=DMARC1; p=quarantine\""),
("_dmarc." + hostname, "TXT", '"v=DMARC1; p=quarantine;"'),
]
return test2(tests, server, description)
@ -48,7 +48,7 @@ def test2(tests, server, description):
for qname, rtype, expected_answer in tests:
# do the query and format the result as a string
try:
response = dns.resolver.query(qname, rtype)
response = dns.resolver.resolve(qname, rtype)
except dns.resolver.NoNameservers:
# host did not have an answer for this query
print("Could not connect to %s for DNS query." % server)
@ -59,7 +59,7 @@ def test2(tests, server, description):
response = ["[no value]"]
response = ";".join(str(r) for r in response)
response = re.sub(r"(\"p=).*(\")", r"\1__KEY__\2", response) # normalize DKIM key
response = response.replace("\"\" ", "") # normalize TXT records (DNSSEC signing inserts empty text string components)
response = response.replace('"" ', "") # normalize TXT records (DNSSEC signing inserts empty text string components)
# is it right?
if response == expected_answer:
@ -98,7 +98,7 @@ else:
# And if that's OK, also check reverse DNS (the PTR record).
if not test_ptr("8.8.8.8", "Google Public DNS (Reverse DNS)"):
print ()
print ("The reverse DNS for %s is not correct. Consult your ISP for how to set the reverse DNS (also called the PTR record) for %s to %s." % (hostname, hostname, ipaddr))
print (f"The reverse DNS for {hostname} is not correct. Consult your ISP for how to set the reverse DNS (also called the PTR record) for {hostname} to {ipaddr}.")
sys.exit(1)
else:
print ("And the reverse DNS for the domain is correct.")


@ -30,26 +30,21 @@ print("IMAP login is OK.")
# Attempt to send a mail to ourself.
mailsubject = "Mail-in-a-Box Automated Test Message " + uuid.uuid4().hex
emailto = emailaddress
msg = """From: {emailaddress}
msg = f"""From: {emailaddress}
To: {emailto}
Subject: {subject}
Subject: {mailsubject}
This is a test message. It should be automatically deleted by the test script.""".format(
emailaddress=emailaddress,
emailto=emailto,
subject=mailsubject,
)
This is a test message. It should be automatically deleted by the test script."""
# Connect to the server on the SMTP submission TLS port.
server = smtplib.SMTP(host, 587)
server = smtplib.SMTP_SSL(host)
#server.set_debuglevel(1)
server.starttls()
# Verify that the EHLO name matches the server's reverse DNS.
ipaddr = socket.gethostbyname(host) # IPv4 only!
reverse_ip = dns.reversename.from_address(ipaddr) # e.g. "1.0.0.127.in-addr.arpa."
try:
reverse_dns = dns.resolver.query(reverse_ip, 'PTR')[0].target.to_text(omit_final_dot=True) # => hostname
reverse_dns = dns.resolver.resolve(reverse_ip, 'PTR')[0].target.to_text(omit_final_dot=True) # => hostname
except dns.resolver.NXDOMAIN:
print("Reverse DNS lookup failed for %s. SMTP EHLO name check skipped." % ipaddr)
reverse_dns = None


@ -6,11 +6,11 @@ if len(sys.argv) < 3:
sys.exit(1)
host, toaddr, fromaddr = sys.argv[1:4]
msg = """From: %s
To: %s
msg = f"""From: {fromaddr}
To: {toaddr}
Subject: SMTP server test
This is a test message.""" % (fromaddr, toaddr)
This is a test message."""
server = smtplib.SMTP(host, 25)
server.set_debuglevel(1)


@ -17,7 +17,7 @@
# through some other host you can ssh into (maybe the box
# itself?):
#
# python3 --proxy user@ssh_host yourservername
# python3 tls.py --proxy user@ssh_host yourservername
#
# (This will launch "ssh -N -L10023:yourservername:testport user@ssh_host"
# to create a tunnel.)
@ -61,9 +61,9 @@ common_opts = ["--sslv2", "--sslv3", "--tlsv1", "--tlsv1_1", "--tlsv1_2", "--ren
# Assumes TLSv1, TLSv1.1, TLSv1.2.
#
# The 'old' ciphers bring compatibility back to Win XP IE 6.
MOZILLA_CIPHERS_MODERN = "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK"
MOZILLA_CIPHERS_INTERMEDIATE = "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA"
MOZILLA_CIPHERS_OLD = "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA"
MOZILLA_CIPHERS_MODERN = "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
MOZILLA_CIPHERS_INTERMEDIATE = "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS"
MOZILLA_CIPHERS_OLD = "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:DES-CBC3-SHA:HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP"
######################################################################
@ -88,14 +88,14 @@ def sslyze(opts, port, ok_ciphers):
try:
# Execute SSLyze.
out = subprocess.check_output([SSLYZE] + common_opts + opts + [connection_string])
out = subprocess.check_output([SSLYZE, *common_opts, *opts, connection_string])
out = out.decode("utf8")
# Trim output to make better for storing in git.
if "SCAN RESULTS FOR" not in out:
# Failed. Just output the error.
out = re.sub("[\w\W]*CHECKING HOST\(S\) AVAILABILITY\n\s*-+\n", "", out) # chop off header that shows the host we queried
out = re.sub("[\w\W]*SCAN RESULTS FOR.*\n\s*-+\n", "", out) # chop off header that shows the host we queried
out = re.sub("[\\w\\W]*CHECKING HOST\\(S\\) AVAILABILITY\n\\s*-+\n", "", out) # chop off header that shows the host we queried
out = re.sub("[\\w\\W]*SCAN RESULTS FOR.*\n\\s*-+\n", "", out) # chop off header that shows the host we queried
out = re.sub("SCAN COMPLETED IN .*", "", out)
out = out.rstrip(" \n-") + "\n"
@ -105,8 +105,8 @@ def sslyze(opts, port, ok_ciphers):
# Pull out the accepted ciphers list for each SSL/TLS protocol
# version outputted.
accepted_ciphers = set()
for ciphers in re.findall(" Accepted:([\w\W]*?)\n *\n", out):
accepted_ciphers |= set(re.findall("\n\s*(\S*)", ciphers))
for ciphers in re.findall(" Accepted:([\\w\\W]*?)\n *\n", out):
accepted_ciphers |= set(re.findall("\n\\s*(\\S*)", ciphers))
# Compare to what Mozilla recommends, for a given modernness-level.
print(" Should Not Offer: " + (", ".join(sorted(accepted_ciphers-set(ok_ciphers))) or "(none -- good)"))
@ -128,7 +128,7 @@ def sslyze(opts, port, ok_ciphers):
proxy_proc.terminate()
try:
proxy_proc.wait(5)
except TimeoutExpired:
except subprocess.TimeoutExpired:
proxy_proc.kill()
# Get a list of OpenSSL cipher names.
@ -142,7 +142,7 @@ for cipher in csv.DictReader(io.StringIO(urllib.request.urlopen("https://raw.git
client_compatibility = json.loads(urllib.request.urlopen("https://raw.githubusercontent.com/mail-in-a-box/user-agent-tls-capabilities/master/clients.json").read().decode("utf8"))
cipher_clients = { }
for client in client_compatibility:
if len(set(client['protocols']) & set(["TLS 1.0", "TLS 1.1", "TLS 1.2"])) == 0: continue # does not support TLS
if len(set(client['protocols']) & {"TLS 1.0", "TLS 1.1", "TLS 1.2"}) == 0: continue # does not support TLS
for cipher in client['ciphers']:
cipher_clients.setdefault(cipher_names.get(cipher), set()).add("/".join(x for x in [client['client']['name'], client['client']['version'], client['client']['platform']] if x))


@ -13,18 +13,18 @@ PORT 25
* Session Resumption:
With Session IDs: OK - Supported (5 successful, 0 failed, 0 errors, 5 total attempts).
With TLS Session Tickets: NOT SUPPORTED - TLS ticket not assigned.
With TLS Session Tickets: OK - Supported
* SSLV2 Cipher Suites:
Server rejected all cipher suites.
* TLSV1_2 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-GCM-SHA384 ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-GCM-SHA384 ECDH-521 bits 256 bits 250 2.0.0 Ok
Accepted:
ECDHE-RSA-AES256-SHA384 ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-GCM-SHA384 ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-SHA384 ECDH-521 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-SHA ECDH-521 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-GCM-SHA384 ECDH-521 bits 256 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
DHE-RSA-AES256-SHA256 DH-2048 bits 256 bits 250 2.0.0 Ok
DHE-RSA-AES256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
@ -33,9 +33,9 @@ PORT 25
AES256-SHA256 - 256 bits 250 2.0.0 Ok
AES256-SHA - 256 bits 250 2.0.0 Ok
AES256-GCM-SHA384 - 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA256 ECDH-256 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-256 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-GCM-SHA256 ECDH-256 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA256 ECDH-521 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-521 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-GCM-SHA256 ECDH-521 bits 128 bits 250 2.0.0 Ok
DHE-RSA-SEED-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-AES128-SHA256 DH-2048 bits 128 bits 250 2.0.0 Ok
@ -46,56 +46,47 @@ PORT 25
AES128-SHA256 - 128 bits 250 2.0.0 Ok
AES128-SHA - 128 bits 250 2.0.0 Ok
AES128-GCM-SHA256 - 128 bits 250 2.0.0 Ok
ECDHE-RSA-DES-CBC3-SHA ECDH-256 bits 112 bits 250 2.0.0 Ok
EDH-RSA-DES-CBC3-SHA DH-2048 bits 112 bits 250 2.0.0 Ok
DES-CBC3-SHA - 112 bits 250 2.0.0 Ok
* TLSV1_1 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-SHA ECDH-521 bits 256 bits 250 2.0.0 Ok
Accepted:
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-SHA ECDH-521 bits 256 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
DHE-RSA-AES256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
CAMELLIA256-SHA - 256 bits 250 2.0.0 Ok
AES256-SHA - 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-256 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-521 bits 128 bits 250 2.0.0 Ok
DHE-RSA-SEED-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-AES128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
SEED-SHA - 128 bits 250 2.0.0 Ok
CAMELLIA128-SHA - 128 bits 250 2.0.0 Ok
AES128-SHA - 128 bits 250 2.0.0 Ok
ECDHE-RSA-DES-CBC3-SHA ECDH-256 bits 112 bits 250 2.0.0 Ok
EDH-RSA-DES-CBC3-SHA DH-2048 bits 112 bits 250 2.0.0 Ok
DES-CBC3-SHA - 112 bits 250 2.0.0 Ok
* TLSV1 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits 250 2.0.0 Ok
Accepted:
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
DHE-RSA-AES256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
CAMELLIA256-SHA - 256 bits 250 2.0.0 Ok
AES256-SHA - 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-256 bits 128 bits 250 2.0.0 Ok
DHE-RSA-SEED-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-AES128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
SEED-SHA - 128 bits 250 2.0.0 Ok
CAMELLIA128-SHA - 128 bits 250 2.0.0 Ok
AES128-SHA - 128 bits 250 2.0.0 Ok
ECDHE-RSA-DES-CBC3-SHA ECDH-256 bits 112 bits 250 2.0.0 Ok
EDH-RSA-DES-CBC3-SHA DH-2048 bits 112 bits 250 2.0.0 Ok
DES-CBC3-SHA - 112 bits 250 2.0.0 Ok
* SSLV3 Cipher Suites:
Server rejected all cipher suites.
Should Not Offer: DHE-RSA-SEED-SHA, EDH-RSA-DES-CBC3-SHA, SEED-SHA
Could Also Offer: DH-DSS-AES128-GCM-SHA256, DH-DSS-AES128-SHA, DH-DSS-AES128-SHA256, DH-DSS-AES256-GCM-SHA384, DH-DSS-AES256-SHA, DH-DSS-AES256-SHA256, DH-DSS-CAMELLIA128-SHA, DH-DSS-CAMELLIA256-SHA, DH-DSS-DES-CBC3-SHA, DH-RSA-AES128-GCM-SHA256, DH-RSA-AES128-SHA, DH-RSA-AES128-SHA256, DH-RSA-AES256-GCM-SHA384, DH-RSA-AES256-SHA, DH-RSA-AES256-SHA256, DH-RSA-CAMELLIA128-SHA, DH-RSA-CAMELLIA256-SHA, DH-RSA-DES-CBC3-SHA, DHE-DSS-AES128-GCM-SHA256, DHE-DSS-AES128-SHA, DHE-DSS-AES128-SHA256, DHE-DSS-AES256-GCM-SHA384, DHE-DSS-AES256-SHA, DHE-DSS-AES256-SHA256, DHE-DSS-CAMELLIA128-SHA, DHE-DSS-CAMELLIA256-SHA, ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-SHA, ECDHE-ECDSA-AES128-SHA256, ECDHE-ECDSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-SHA, ECDHE-ECDSA-AES256-SHA384, ECDHE-ECDSA-DES-CBC3-SHA, SRP-3DES-EDE-CBC-SHA, SRP-AES-128-CBC-SHA, SRP-AES-256-CBC-SHA, SRP-DSS-3DES-EDE-CBC-SHA, SRP-DSS-AES-128-CBC-SHA, SRP-DSS-AES-256-CBC-SHA, SRP-RSA-3DES-EDE-CBC-SHA, SRP-RSA-AES-128-CBC-SHA, SRP-RSA-AES-256-CBC-SHA
Supported Clients: OpenSSL/1.0.2, OpenSSL/1.0.1l, BingPreview/Jan 2015, Yahoo Slurp/Jan 2015, YandexBot/Jan 2015, Android/4.4.2, Safari/7/iOS 7.1, Safari/8/OS X 10.10, Safari/8/iOS 8.1.2, Safari/7/OS X 10.9, Safari/6/iOS 6.0.1, Firefox/31.3.0 ESR/Win 7, Baidu/Jan 2015, IE/11/Win 8.1, IE/11/Win 7, IE Mobile/11/Win Phone 8.1, Android/5.0.0, Java/8u31, Chrome/42/OS X, Googlebot/Feb 2015, Android/4.1.1, Android/4.0.4, Safari/6.0.4/OS X 10.8.4, Android/4.2.2, Android/4.3, Safari/5.1.9/OS X 10.6.8, Firefox/37/OS X, OpenSSL/0.9.8y, Java/7u25, IE/8-10/Win 7, IE/7/Vista, IE Mobile/10/Win Phone 8.0, Android/2.3.7, Java/6u45, IE/8/XP
* TLSV1 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-521 bits 256 bits 250 2.0.0 Ok
Accepted:
ECDHE-RSA-AES256-SHA ECDH-521 bits 256 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
DHE-RSA-AES256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
CAMELLIA256-SHA - 256 bits 250 2.0.0 Ok
AES256-SHA - 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-521 bits 128 bits 250 2.0.0 Ok
DHE-RSA-SEED-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-AES128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
SEED-SHA - 128 bits 250 2.0.0 Ok
CAMELLIA128-SHA - 128 bits 250 2.0.0 Ok
AES128-SHA - 128 bits 250 2.0.0 Ok
Should Not Offer: (none -- good)
Could Also Offer: AES128-CCM, AES128-CCM8, AES256-CCM, AES256-CCM8, CAMELLIA128-SHA256, CAMELLIA256-SHA256, DHE-DSS-AES128-GCM-SHA256, DHE-DSS-AES128-SHA, DHE-DSS-AES128-SHA256, DHE-DSS-AES256-GCM-SHA384, DHE-DSS-AES256-SHA, DHE-DSS-AES256-SHA256, DHE-DSS-CAMELLIA128-SHA, DHE-DSS-CAMELLIA128-SHA256, DHE-DSS-CAMELLIA256-SHA, DHE-DSS-CAMELLIA256-SHA256, DHE-DSS-SEED-SHA, DHE-RSA-AES128-CCM, DHE-RSA-AES128-CCM8, DHE-RSA-AES256-CCM, DHE-RSA-AES256-CCM8, DHE-RSA-CAMELLIA128-SHA256, DHE-RSA-CAMELLIA256-SHA256, DHE-RSA-CHACHA20-POLY1305, ECDHE-ECDSA-AES128-CCM, ECDHE-ECDSA-AES128-CCM8, ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-SHA, ECDHE-ECDSA-AES128-SHA256, ECDHE-ECDSA-AES256-CCM, ECDHE-ECDSA-AES256-CCM8, ECDHE-ECDSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-SHA, ECDHE-ECDSA-AES256-SHA384, ECDHE-ECDSA-CAMELLIA128-SHA256, ECDHE-ECDSA-CAMELLIA256-SHA384, ECDHE-ECDSA-CHACHA20-POLY1305, ECDHE-RSA-CAMELLIA128-SHA256, ECDHE-RSA-CAMELLIA256-SHA384, ECDHE-RSA-CHACHA20-POLY1305
Supported Clients: Yahoo Slurp/Jan 2015, OpenSSL/1.0.2, BingPreview/Jan 2015, OpenSSL/1.0.1l, YandexBot/Jan 2015, Android/4.4.2, Safari/6/iOS 6.0.1, Safari/8/OS X 10.10, Safari/7/OS X 10.9, Safari/7/iOS 7.1, IE/11/Win 8.1, Safari/8/iOS 8.1.2, IE Mobile/11/Win Phone 8.1, IE/11/Win 7, Baidu/Jan 2015, Firefox/31.3.0 ESR/Win 7, Android/5.0.0, Chrome/42/OS X, Java/8u31, Googlebot/Feb 2015, Firefox/37/OS X, Android/4.3, Android/4.2.2, Safari/5.1.9/OS X 10.6.8, Android/4.0.4, Android/4.1.1, Safari/6.0.4/OS X 10.8.4, IE Mobile/10/Win Phone 8.0, IE/8-10/Win 7, IE/7/Vista, OpenSSL/0.9.8y, Java/7u25, Android/2.3.7, Java/6u45
PORT 587
--------
@ -112,18 +103,18 @@ PORT 587
* Session Resumption:
With Session IDs: OK - Supported (5 successful, 0 failed, 0 errors, 5 total attempts).
With TLS Session Tickets: NOT SUPPORTED - TLS ticket not assigned.
With TLS Session Tickets: OK - Supported
* SSLV2 Cipher Suites:
Server rejected all cipher suites.
* TLSV1_2 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-GCM-SHA384 ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-GCM-SHA384 ECDH-521 bits 256 bits 250 2.0.0 Ok
Accepted:
ECDHE-RSA-AES256-SHA384 ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-GCM-SHA384 ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-SHA384 ECDH-521 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-SHA ECDH-521 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-GCM-SHA384 ECDH-521 bits 256 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
DHE-RSA-AES256-SHA256 DH-2048 bits 256 bits 250 2.0.0 Ok
DHE-RSA-AES256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
@ -132,9 +123,9 @@ PORT 587
AES256-SHA256 - 256 bits 250 2.0.0 Ok
AES256-SHA - 256 bits 250 2.0.0 Ok
AES256-GCM-SHA384 - 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA256 ECDH-256 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-256 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-GCM-SHA256 ECDH-256 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA256 ECDH-521 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-521 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-GCM-SHA256 ECDH-521 bits 128 bits 250 2.0.0 Ok
DHE-RSA-SEED-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-AES128-SHA256 DH-2048 bits 128 bits 250 2.0.0 Ok
@ -148,31 +139,14 @@ PORT 587
* TLSV1_1 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-SHA ECDH-521 bits 256 bits 250 2.0.0 Ok
Accepted:
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES256-SHA ECDH-521 bits 256 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
DHE-RSA-AES256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
CAMELLIA256-SHA - 256 bits 250 2.0.0 Ok
AES256-SHA - 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-256 bits 128 bits 250 2.0.0 Ok
DHE-RSA-SEED-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-AES128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
SEED-SHA - 128 bits 250 2.0.0 Ok
CAMELLIA128-SHA - 128 bits 250 2.0.0 Ok
AES128-SHA - 128 bits 250 2.0.0 Ok
* TLSV1 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits 250 2.0.0 Ok
Accepted:
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
DHE-RSA-AES256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
CAMELLIA256-SHA - 256 bits 250 2.0.0 Ok
AES256-SHA - 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-256 bits 128 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-521 bits 128 bits 250 2.0.0 Ok
DHE-RSA-SEED-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-AES128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
@ -183,9 +157,26 @@ PORT 587
* SSLV3 Cipher Suites:
Server rejected all cipher suites.
Should Not Offer: AES128-GCM-SHA256, AES128-SHA, AES128-SHA256, AES256-GCM-SHA384, AES256-SHA, AES256-SHA256, CAMELLIA128-SHA, CAMELLIA256-SHA, DHE-RSA-CAMELLIA128-SHA, DHE-RSA-CAMELLIA256-SHA, DHE-RSA-SEED-SHA, SEED-SHA
Could Also Offer: DHE-DSS-AES128-GCM-SHA256, DHE-DSS-AES128-SHA256, DHE-DSS-AES256-GCM-SHA384, DHE-DSS-AES256-SHA, ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-SHA, ECDHE-ECDSA-AES128-SHA256, ECDHE-ECDSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-SHA, ECDHE-ECDSA-AES256-SHA384
Supported Clients: OpenSSL/1.0.2, OpenSSL/1.0.1l, BingPreview/Jan 2015, Yahoo Slurp/Jan 2015, YandexBot/Jan 2015, Android/4.4.2, Safari/7/iOS 7.1, IE/11/Win 8.1, Safari/8/iOS 8.1.2, IE/11/Win 7, IE Mobile/11/Win Phone 8.1, Safari/8/OS X 10.10, Safari/7/OS X 10.9, Safari/6/iOS 6.0.1, Firefox/31.3.0 ESR/Win 7, Baidu/Jan 2015, Chrome/42/OS X, Android/5.0.0, Java/8u31, Googlebot/Feb 2015, Firefox/37/OS X, Android/4.0.4, Android/4.1.1, Safari/6.0.4/OS X 10.8.4, Android/4.2.2, Android/4.3, Safari/5.1.9/OS X 10.6.8, IE/8-10/Win 7, IE/7/Vista, IE Mobile/10/Win Phone 8.0, OpenSSL/0.9.8y, Java/7u25, Java/6u45, Android/2.3.7
* TLSV1 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-521 bits 256 bits 250 2.0.0 Ok
Accepted:
ECDHE-RSA-AES256-SHA ECDH-521 bits 256 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
DHE-RSA-AES256-SHA DH-2048 bits 256 bits 250 2.0.0 Ok
CAMELLIA256-SHA - 256 bits 250 2.0.0 Ok
AES256-SHA - 256 bits 250 2.0.0 Ok
ECDHE-RSA-AES128-SHA ECDH-521 bits 128 bits 250 2.0.0 Ok
DHE-RSA-SEED-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-CAMELLIA128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
DHE-RSA-AES128-SHA DH-2048 bits 128 bits 250 2.0.0 Ok
SEED-SHA - 128 bits 250 2.0.0 Ok
CAMELLIA128-SHA - 128 bits 250 2.0.0 Ok
AES128-SHA - 128 bits 250 2.0.0 Ok
Should Not Offer: AES128-GCM-SHA256, AES128-SHA, AES128-SHA256, AES256-GCM-SHA384, AES256-SHA, AES256-SHA256, CAMELLIA128-SHA, CAMELLIA256-SHA, DHE-RSA-AES128-GCM-SHA256, DHE-RSA-AES128-SHA, DHE-RSA-AES128-SHA256, DHE-RSA-AES256-GCM-SHA384, DHE-RSA-AES256-SHA, DHE-RSA-AES256-SHA256, DHE-RSA-CAMELLIA128-SHA, DHE-RSA-CAMELLIA256-SHA, DHE-RSA-SEED-SHA, ECDHE-RSA-AES128-SHA, ECDHE-RSA-AES256-SHA, SEED-SHA
Could Also Offer: ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-SHA256, ECDHE-ECDSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-SHA384, ECDHE-ECDSA-CHACHA20-POLY1305, ECDHE-RSA-CHACHA20-POLY1305
Supported Clients: Yahoo Slurp/Jan 2015, OpenSSL/1.0.2, BingPreview/Jan 2015, OpenSSL/1.0.1l, YandexBot/Jan 2015, Android/4.4.2, Safari/6/iOS 6.0.1, Safari/8/OS X 10.10, Safari/7/OS X 10.9, Safari/7/iOS 7.1, IE/11/Win 8.1, Safari/8/iOS 8.1.2, IE Mobile/11/Win Phone 8.1, IE/11/Win 7, Baidu/Jan 2015, Firefox/31.3.0 ESR/Win 7, Android/5.0.0, Chrome/42/OS X, Java/8u31, Googlebot/Feb 2015, Firefox/37/OS X, Android/4.3, Android/4.2.2, Safari/5.1.9/OS X 10.6.8, Android/4.0.4, Android/4.1.1, Safari/6.0.4/OS X 10.8.4, IE Mobile/10/Win Phone 8.0, IE/8-10/Win 7, IE/7/Vista, OpenSSL/0.9.8y, Java/7u25, Android/2.3.7, Java/6u45
PORT 443
--------
@ -197,15 +188,15 @@ PORT 443
Client-initiated Renegotiations: OK - Rejected
Secure Renegotiation: OK - Supported
* OpenSSL Heartbleed:
OK - Not vulnerable to Heartbleed
* HTTP Strict Transport Security:
OK - HSTS header received: max-age=15768000
* Session Resumption:
With Session IDs: OK - Supported (5 successful, 0 failed, 0 errors, 5 total attempts).
With TLS Session Tickets: OK - Supported
* HTTP Strict Transport Security:
OK - HSTS header received: max-age=31536000
* OpenSSL Heartbleed:
OK - Not vulnerable to Heartbleed
Unhandled exception when processing --chrome_sha1:
exceptions.TypeError - Incorrect padding
@ -223,13 +214,18 @@ exceptions.TypeError - Incorrect padding
DHE-RSA-AES256-SHA256 DH-2048 bits 256 bits HTTP 200 OK
DHE-RSA-AES256-SHA DH-2048 bits 256 bits HTTP 200 OK
DHE-RSA-AES256-GCM-SHA384 DH-2048 bits 256 bits HTTP 200 OK
AES256-SHA256 - 256 bits HTTP 200 OK
AES256-SHA - 256 bits HTTP 200 OK
AES256-GCM-SHA384 - 256 bits HTTP 200 OK
ECDHE-RSA-AES128-SHA256 ECDH-256 bits 128 bits HTTP 200 OK
ECDHE-RSA-AES128-SHA ECDH-256 bits 128 bits HTTP 200 OK
ECDHE-RSA-AES128-GCM-SHA256 ECDH-256 bits 128 bits HTTP 200 OK
DHE-RSA-AES128-SHA256 DH-2048 bits 128 bits HTTP 200 OK
DHE-RSA-AES128-SHA DH-2048 bits 128 bits HTTP 200 OK
DHE-RSA-AES128-GCM-SHA256 DH-2048 bits 128 bits HTTP 200 OK
DES-CBC3-SHA - 112 bits HTTP 200 OK
AES128-SHA256 - 128 bits HTTP 200 OK
AES128-SHA - 128 bits HTTP 200 OK
AES128-GCM-SHA256 - 128 bits HTTP 200 OK
* TLSV1_1 Cipher Suites:
Preferred:
@ -237,9 +233,13 @@ exceptions.TypeError - Incorrect padding
Accepted:
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits HTTP 200 OK
DHE-RSA-AES256-SHA DH-2048 bits 256 bits HTTP 200 OK
AES256-SHA - 256 bits HTTP 200 OK
ECDHE-RSA-AES128-SHA ECDH-256 bits 128 bits HTTP 200 OK
DHE-RSA-AES128-SHA DH-2048 bits 128 bits HTTP 200 OK
DES-CBC3-SHA - 112 bits HTTP 200 OK
AES128-SHA - 128 bits HTTP 200 OK
* SSLV3 Cipher Suites:
Server rejected all cipher suites.
* TLSV1 Cipher Suites:
Preferred:
@ -247,16 +247,14 @@ exceptions.TypeError - Incorrect padding
Accepted:
ECDHE-RSA-AES256-SHA ECDH-256 bits 256 bits HTTP 200 OK
DHE-RSA-AES256-SHA DH-2048 bits 256 bits HTTP 200 OK
AES256-SHA - 256 bits HTTP 200 OK
ECDHE-RSA-AES128-SHA ECDH-256 bits 128 bits HTTP 200 OK
DHE-RSA-AES128-SHA DH-2048 bits 128 bits HTTP 200 OK
DES-CBC3-SHA - 112 bits HTTP 200 OK
* SSLV3 Cipher Suites:
Server rejected all cipher suites.
AES128-SHA - 128 bits HTTP 200 OK
Should Not Offer: (none -- good)
Could Also Offer: AES128-GCM-SHA256, AES128-SHA, AES128-SHA256, AES256-GCM-SHA384, AES256-SHA, AES256-SHA256, CAMELLIA128-SHA, CAMELLIA256-SHA, DH-DSS-AES128-GCM-SHA256, DH-DSS-AES128-SHA, DH-DSS-AES128-SHA256, DH-DSS-AES256-GCM-SHA384, DH-DSS-AES256-SHA, DH-DSS-AES256-SHA256, DH-DSS-CAMELLIA128-SHA, DH-DSS-CAMELLIA256-SHA, DH-RSA-AES128-GCM-SHA256, DH-RSA-AES128-SHA, DH-RSA-AES128-SHA256, DH-RSA-AES256-GCM-SHA384, DH-RSA-AES256-SHA, DH-RSA-AES256-SHA256, DH-RSA-CAMELLIA128-SHA, DH-RSA-CAMELLIA256-SHA, DHE-DSS-AES128-GCM-SHA256, DHE-DSS-AES128-SHA, DHE-DSS-AES128-SHA256, DHE-DSS-AES256-GCM-SHA384, DHE-DSS-AES256-SHA, DHE-DSS-AES256-SHA256, DHE-DSS-CAMELLIA128-SHA, DHE-DSS-CAMELLIA256-SHA, DHE-RSA-CAMELLIA128-SHA, DHE-RSA-CAMELLIA256-SHA, ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-SHA, ECDHE-ECDSA-AES128-SHA256, ECDHE-ECDSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-SHA, ECDHE-ECDSA-AES256-SHA384, SRP-AES-128-CBC-SHA, SRP-AES-256-CBC-SHA, SRP-DSS-AES-128-CBC-SHA, SRP-DSS-AES-256-CBC-SHA, SRP-RSA-AES-128-CBC-SHA, SRP-RSA-AES-256-CBC-SHA
Supported Clients: OpenSSL/1.0.2, OpenSSL/1.0.1l, BingPreview/Jan 2015, YandexBot/Jan 2015, Yahoo Slurp/Jan 2015, Android/4.4.2, Safari/7/iOS 7.1, Safari/8/OS X 10.10, Safari/8/iOS 8.1.2, Safari/7/OS X 10.9, Safari/6/iOS 6.0.1, Chrome/42/OS X, IE/11/Win 8.1, IE/11/Win 7, Android/5.0.0, Java/8u31, IE Mobile/11/Win Phone 8.1, Googlebot/Feb 2015, Firefox/31.3.0 ESR/Win 7, Firefox/37/OS X, Android/4.1.1, Android/4.0.4, Baidu/Jan 2015, Safari/6.0.4/OS X 10.8.4, Android/4.2.2, Android/4.3, Safari/5.1.9/OS X 10.6.8, IE/8-10/Win 7, IE/7/Vista, OpenSSL/0.9.8y, IE Mobile/10/Win Phone 8.0, Java/7u25, Android/2.3.7, Java/6u45, IE/8/XP
Could Also Offer: ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-SHA, ECDHE-ECDSA-AES128-SHA256, ECDHE-ECDSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-SHA, ECDHE-ECDSA-AES256-SHA384, ECDHE-ECDSA-CHACHA20-POLY1305, ECDHE-RSA-CHACHA20-POLY1305
Supported Clients: Yahoo Slurp/Jan 2015, OpenSSL/1.0.2, YandexBot/Jan 2015, BingPreview/Jan 2015, OpenSSL/1.0.1l, Android/4.4.2, Safari/6/iOS 6.0.1, Safari/8/OS X 10.10, Safari/7/OS X 10.9, Safari/7/iOS 7.1, IE/11/Win 8.1, Safari/8/iOS 8.1.2, IE Mobile/11/Win Phone 8.1, IE/11/Win 7, Android/5.0.0, Chrome/42/OS X, Java/8u31, Googlebot/Feb 2015, Firefox/31.3.0 ESR/Win 7, Firefox/37/OS X, Android/4.3, Android/4.2.2, Baidu/Jan 2015, Safari/5.1.9/OS X 10.6.8, Android/4.0.4, Android/4.1.1, Safari/6.0.4/OS X 10.8.4, IE Mobile/10/Win Phone 8.0, IE/8-10/Win 7, IE/7/Vista, OpenSSL/0.9.8y, Java/7u25, Android/2.3.7, Java/6u45
PORT 993
--------
@ -279,55 +277,55 @@ _nassl.OpenSSLError - error:140940F5:SSL routines:ssl3_read_bytes:unexpected rec
* TLSV1_2 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
ECDHE-RSA-AES128-GCM-SHA256 ECDH-384 bits 128 bits
Accepted:
ECDHE-RSA-AES256-SHA384 ECDH-384 bits 256 bits
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
DHE-RSA-CAMELLIA256-SHA DH-1024 bits 256 bits
DHE-RSA-AES256-SHA DH-1024 bits 256 bits
CAMELLIA256-SHA - 256 bits
ECDHE-RSA-AES256-GCM-SHA384 ECDH-384 bits 256 bits
DHE-RSA-AES256-SHA256 DH-2048 bits 256 bits
DHE-RSA-AES256-SHA DH-2048 bits 256 bits
DHE-RSA-AES256-GCM-SHA384 DH-2048 bits 256 bits
AES256-SHA256 - 256 bits
AES256-SHA - 256 bits
AES256-GCM-SHA384 - 256 bits
ECDHE-RSA-AES128-SHA256 ECDH-384 bits 128 bits
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
DHE-RSA-CAMELLIA128-SHA DH-1024 bits 128 bits
DHE-RSA-AES128-SHA DH-1024 bits 128 bits
CAMELLIA128-SHA - 128 bits
ECDHE-RSA-AES128-GCM-SHA256 ECDH-384 bits 128 bits
DHE-RSA-AES128-SHA256 DH-2048 bits 128 bits
DHE-RSA-AES128-SHA DH-2048 bits 128 bits
DHE-RSA-AES128-GCM-SHA256 DH-2048 bits 128 bits
AES128-SHA256 - 128 bits
AES128-SHA - 128 bits
AES128-GCM-SHA256 - 128 bits
* TLSV1_1 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
Accepted:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
DHE-RSA-CAMELLIA256-SHA DH-1024 bits 256 bits
DHE-RSA-AES256-SHA DH-1024 bits 256 bits
CAMELLIA256-SHA - 256 bits
DHE-RSA-AES256-SHA DH-2048 bits 256 bits
AES256-SHA - 256 bits
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
DHE-RSA-CAMELLIA128-SHA DH-1024 bits 128 bits
DHE-RSA-AES128-SHA DH-1024 bits 128 bits
CAMELLIA128-SHA - 128 bits
AES128-SHA - 128 bits
* TLSV1 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
Accepted:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
DHE-RSA-CAMELLIA256-SHA DH-1024 bits 256 bits
DHE-RSA-AES256-SHA DH-1024 bits 256 bits
CAMELLIA256-SHA - 256 bits
AES256-SHA - 256 bits
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
DHE-RSA-CAMELLIA128-SHA DH-1024 bits 128 bits
DHE-RSA-AES128-SHA DH-1024 bits 128 bits
CAMELLIA128-SHA - 128 bits
DHE-RSA-AES128-SHA DH-2048 bits 128 bits
AES128-SHA - 128 bits
* SSLV3 Cipher Suites:
Server rejected all cipher suites.
Should Not Offer: AES128-SHA, AES256-SHA, CAMELLIA128-SHA, CAMELLIA256-SHA, DHE-RSA-CAMELLIA128-SHA, DHE-RSA-CAMELLIA256-SHA
Could Also Offer: DHE-DSS-AES128-GCM-SHA256, DHE-DSS-AES128-SHA256, DHE-DSS-AES256-GCM-SHA384, DHE-DSS-AES256-SHA, DHE-RSA-AES128-GCM-SHA256, DHE-RSA-AES128-SHA256, DHE-RSA-AES256-GCM-SHA384, DHE-RSA-AES256-SHA256, ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-SHA, ECDHE-ECDSA-AES128-SHA256, ECDHE-ECDSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-SHA, ECDHE-ECDSA-AES256-SHA384, ECDHE-RSA-AES128-GCM-SHA256, ECDHE-RSA-AES128-SHA256, ECDHE-RSA-AES256-GCM-SHA384, ECDHE-RSA-AES256-SHA384
Supported Clients: OpenSSL/1.0.2, Firefox/31.3.0 ESR/Win 7, OpenSSL/1.0.1l, BingPreview/Jan 2015, Yahoo Slurp/Jan 2015, Baidu/Jan 2015, Safari/7/iOS 7.1, Chrome/42/OS X, Googlebot/Feb 2015, Android/4.0.4, Safari/8/iOS 8.1.2, Android/4.1.1, Android/5.0.0, Safari/6/iOS 6.0.1, YandexBot/Jan 2015, Safari/6.0.4/OS X 10.8.4, Android/4.2.2, Safari/8/OS X 10.10, Firefox/37/OS X, Safari/7/OS X 10.9, Android/4.3, Safari/5.1.9/OS X 10.6.8, Android/4.4.2, IE/8-10/Win 7, IE/7/Vista, IE/11/Win 8.1, IE/11/Win 7, OpenSSL/0.9.8y, IE Mobile/10/Win Phone 8.0, IE Mobile/11/Win Phone 8.1, Java/7u25, Java/8u31, Java/6u45, Android/2.3.7
* TLSV1 Cipher Suites:
Preferred:
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
Accepted:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
DHE-RSA-AES256-SHA DH-2048 bits 256 bits
AES256-SHA - 256 bits
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
DHE-RSA-AES128-SHA DH-2048 bits 128 bits
AES128-SHA - 128 bits
Should Not Offer: AES128-GCM-SHA256, AES128-SHA, AES128-SHA256, AES256-GCM-SHA384, AES256-SHA, AES256-SHA256, DHE-RSA-AES128-GCM-SHA256, DHE-RSA-AES128-SHA, DHE-RSA-AES128-SHA256, DHE-RSA-AES256-GCM-SHA384, DHE-RSA-AES256-SHA, DHE-RSA-AES256-SHA256, ECDHE-RSA-AES128-SHA, ECDHE-RSA-AES256-SHA
Could Also Offer: ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-SHA256, ECDHE-ECDSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-SHA384, ECDHE-ECDSA-CHACHA20-POLY1305, ECDHE-RSA-CHACHA20-POLY1305
Supported Clients: Yahoo Slurp/Jan 2015, OpenSSL/1.0.2, YandexBot/Jan 2015, BingPreview/Jan 2015, OpenSSL/1.0.1l, Android/4.4.2, Safari/6/iOS 6.0.1, Safari/8/OS X 10.10, Safari/7/OS X 10.9, Safari/7/iOS 7.1, IE/11/Win 8.1, Safari/8/iOS 8.1.2, IE Mobile/11/Win Phone 8.1, IE/11/Win 7, Android/5.0.0, Chrome/42/OS X, Java/8u31, Googlebot/Feb 2015, Firefox/31.3.0 ESR/Win 7, Firefox/37/OS X, Android/4.3, Android/4.2.2, Baidu/Jan 2015, Safari/5.1.9/OS X 10.6.8, Android/4.0.4, Android/4.1.1, Safari/6.0.4/OS X 10.8.4, IE Mobile/10/Win Phone 8.0, IE/8-10/Win 7, IE/7/Vista, OpenSSL/0.9.8y, Java/7u25, Android/2.3.7, Java/6u45
PORT 995
--------
@ -350,53 +348,53 @@ _nassl.OpenSSLError - error:140940F5:SSL routines:ssl3_read_bytes:unexpected rec
* TLSV1_2 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
ECDHE-RSA-AES128-GCM-SHA256 ECDH-384 bits 128 bits
Accepted:
ECDHE-RSA-AES256-SHA384 ECDH-384 bits 256 bits
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
DHE-RSA-CAMELLIA256-SHA DH-1024 bits 256 bits
DHE-RSA-AES256-SHA DH-1024 bits 256 bits
CAMELLIA256-SHA - 256 bits
ECDHE-RSA-AES256-GCM-SHA384 ECDH-384 bits 256 bits
DHE-RSA-AES256-SHA256 DH-2048 bits 256 bits
DHE-RSA-AES256-SHA DH-2048 bits 256 bits
DHE-RSA-AES256-GCM-SHA384 DH-2048 bits 256 bits
AES256-SHA256 - 256 bits
AES256-SHA - 256 bits
AES256-GCM-SHA384 - 256 bits
ECDHE-RSA-AES128-SHA256 ECDH-384 bits 128 bits
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
DHE-RSA-CAMELLIA128-SHA DH-1024 bits 128 bits
DHE-RSA-AES128-SHA DH-1024 bits 128 bits
CAMELLIA128-SHA - 128 bits
ECDHE-RSA-AES128-GCM-SHA256 ECDH-384 bits 128 bits
DHE-RSA-AES128-SHA256 DH-2048 bits 128 bits
DHE-RSA-AES128-SHA DH-2048 bits 128 bits
DHE-RSA-AES128-GCM-SHA256 DH-2048 bits 128 bits
AES128-SHA256 - 128 bits
AES128-SHA - 128 bits
AES128-GCM-SHA256 - 128 bits
* TLSV1_1 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
Accepted:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
DHE-RSA-CAMELLIA256-SHA DH-1024 bits 256 bits
DHE-RSA-AES256-SHA DH-1024 bits 256 bits
CAMELLIA256-SHA - 256 bits
DHE-RSA-AES256-SHA DH-2048 bits 256 bits
AES256-SHA - 256 bits
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
DHE-RSA-CAMELLIA128-SHA DH-1024 bits 128 bits
DHE-RSA-AES128-SHA DH-1024 bits 128 bits
CAMELLIA128-SHA - 128 bits
AES128-SHA - 128 bits
* TLSV1 Cipher Suites:
Preferred:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
Accepted:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
DHE-RSA-CAMELLIA256-SHA DH-1024 bits 256 bits
DHE-RSA-AES256-SHA DH-1024 bits 256 bits
CAMELLIA256-SHA - 256 bits
AES256-SHA - 256 bits
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
DHE-RSA-CAMELLIA128-SHA DH-1024 bits 128 bits
DHE-RSA-AES128-SHA DH-1024 bits 128 bits
CAMELLIA128-SHA - 128 bits
DHE-RSA-AES128-SHA DH-2048 bits 128 bits
AES128-SHA - 128 bits
* SSLV3 Cipher Suites:
Server rejected all cipher suites.
Should Not Offer: AES128-SHA, AES256-SHA, CAMELLIA128-SHA, CAMELLIA256-SHA, DHE-RSA-CAMELLIA128-SHA, DHE-RSA-CAMELLIA256-SHA
Could Also Offer: DHE-DSS-AES128-GCM-SHA256, DHE-DSS-AES128-SHA256, DHE-DSS-AES256-GCM-SHA384, DHE-DSS-AES256-SHA, DHE-RSA-AES128-GCM-SHA256, DHE-RSA-AES128-SHA256, DHE-RSA-AES256-GCM-SHA384, DHE-RSA-AES256-SHA256, ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-SHA, ECDHE-ECDSA-AES128-SHA256, ECDHE-ECDSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-SHA, ECDHE-ECDSA-AES256-SHA384, ECDHE-RSA-AES128-GCM-SHA256, ECDHE-RSA-AES128-SHA256, ECDHE-RSA-AES256-GCM-SHA384, ECDHE-RSA-AES256-SHA384
Supported Clients: OpenSSL/1.0.2, Firefox/31.3.0 ESR/Win 7, OpenSSL/1.0.1l, BingPreview/Jan 2015, Yahoo Slurp/Jan 2015, Baidu/Jan 2015, Safari/7/iOS 7.1, Chrome/42/OS X, Googlebot/Feb 2015, Android/4.0.4, Safari/8/iOS 8.1.2, Android/4.1.1, Android/5.0.0, Safari/6/iOS 6.0.1, YandexBot/Jan 2015, Safari/6.0.4/OS X 10.8.4, Android/4.2.2, Safari/8/OS X 10.10, Firefox/37/OS X, Safari/7/OS X 10.9, Android/4.3, Safari/5.1.9/OS X 10.6.8, Android/4.4.2, IE/8-10/Win 7, IE/7/Vista, IE/11/Win 8.1, IE/11/Win 7, OpenSSL/0.9.8y, IE Mobile/10/Win Phone 8.0, IE Mobile/11/Win Phone 8.1, Java/7u25, Java/8u31, Java/6u45, Android/2.3.7
* TLSV1 Cipher Suites:
Preferred:
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
Accepted:
ECDHE-RSA-AES256-SHA ECDH-384 bits 256 bits
DHE-RSA-AES256-SHA DH-2048 bits 256 bits
AES256-SHA - 256 bits
ECDHE-RSA-AES128-SHA ECDH-384 bits 128 bits
DHE-RSA-AES128-SHA DH-2048 bits 128 bits
AES128-SHA - 128 bits
Should Not Offer: AES128-GCM-SHA256, AES128-SHA, AES128-SHA256, AES256-GCM-SHA384, AES256-SHA, AES256-SHA256, DHE-RSA-AES128-GCM-SHA256, DHE-RSA-AES128-SHA, DHE-RSA-AES128-SHA256, DHE-RSA-AES256-GCM-SHA384, DHE-RSA-AES256-SHA, DHE-RSA-AES256-SHA256, ECDHE-RSA-AES128-SHA, ECDHE-RSA-AES256-SHA
Could Also Offer: ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-SHA256, ECDHE-ECDSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-SHA384, ECDHE-ECDSA-CHACHA20-POLY1305, ECDHE-RSA-CHACHA20-POLY1305
Supported Clients: Yahoo Slurp/Jan 2015, OpenSSL/1.0.2, YandexBot/Jan 2015, BingPreview/Jan 2015, OpenSSL/1.0.1l, Android/4.4.2, Safari/6/iOS 6.0.1, Safari/8/OS X 10.10, Safari/7/OS X 10.9, Safari/7/iOS 7.1, IE/11/Win 8.1, Safari/8/iOS 8.1.2, IE Mobile/11/Win Phone 8.1, IE/11/Win 7, Android/5.0.0, Chrome/42/OS X, Java/8u31, Googlebot/Feb 2015, Firefox/31.3.0 ESR/Win 7, Firefox/37/OS X, Android/4.3, Android/4.2.2, Baidu/Jan 2015, Safari/5.1.9/OS X 10.6.8, Android/4.0.4, Android/4.1.1, Safari/6.0.4/OS X 10.8.4, IE Mobile/10/Win Phone 8.0, IE/8-10/Win 7, IE/7/Vista, OpenSSL/0.9.8y, Java/7u25, Android/2.3.7, Java/6u45


@ -1,9 +1,10 @@
#!/bin/bash
# Use this script to make an archive of the contents of all
# of the configuration files we edit with editconf.py.
for fn in `grep -hr editconf.py setup | sed "s/tools\/editconf.py //" | sed "s/ .*//" | sort | uniq`; do
for fn in $(grep -hr editconf.py setup | sed "s/tools\/editconf.py //" | sed "s/ .*//" | sort | uniq); do
echo ======================================================================
echo $fn
echo "$fn"
echo ======================================================================
cat $fn
cat "$fn"
done


@ -3,4 +3,4 @@ POSTDATA=dummy
if [ "$1" == "--force" ]; then
POSTDATA=force=1
fi
curl -s -d $POSTDATA --user $(</var/lib/mailinabox/api.key): http://127.0.0.1:10222/dns/update
curl -s -d $POSTDATA --user "$(</var/lib/mailinabox/api.key):" http://127.0.0.1:10222/dns/update
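The same call expressed in Python, following the key-as-username pattern the management tools use elsewhere in this diff (the force flag mirrors --force):
#!/usr/bin/python3
# POST to the management daemon's /dns/update endpoint; the API key file
# is used as the HTTP Basic Auth username with an empty password.
import urllib.request, urllib.parse
with open("/var/lib/mailinabox/api.key") as f:
    key = f.read().strip()
handler = urllib.request.HTTPBasicAuthHandler()
handler.add_password(realm="Mail-in-a-Box Management Server",
                     uri="http://127.0.0.1:10222",
                     user=key, passwd="")
urllib.request.install_opener(urllib.request.build_opener(handler))
data = urllib.parse.urlencode({"force": 1}).encode("utf8")
print(urllib.request.urlopen("http://127.0.0.1:10222/dns/update", data).read().decode("utf8"))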


@ -14,19 +14,23 @@
#
# NAME VALUE
#
# If the -e option is given and VALUE is empty, the setting is removed
# from the configuration file if it is set (i.e. existing occurrences
# are commented out and no new setting is added).
#
# If the -c option is given, then the supplied character becomes the comment character
#
# If the -w option is given, then setting lines continue onto following
# lines while the lines start with whitespace, e.g.:
#
# NAME VAL
# UE
# UE
import sys, re
# sanity check
if len(sys.argv) < 3:
print("usage: python3 editconf.py /etc/file.conf [-s] [-w] [-c <CHARACTER>] [-t] NAME=VAL [NAME=VAL ...]")
print("usage: python3 editconf.py /etc/file.conf [-e] [-s] [-w] [-c <CHARACTER>] [-t] NAME=VAL [NAME=VAL ...]")
sys.exit(1)
# parse command line arguments
@ -35,6 +39,7 @@ settings = sys.argv[2:]
delimiter = "="
delimiter_re = r"\s*=\s*"
erase_setting = False
comment_char = "#"
folded_lines = False
testing = False
@ -44,6 +49,9 @@ while settings[0][0] == "-" and settings[0] != "--":
# Space is the delimiter
delimiter = " "
delimiter_re = r"\s+"
elif opt == "-e":
# Erase settings that have empty values.
erase_setting = True
elif opt == "-w":
# Line folding is possible in this file.
folded_lines = True
@ -68,69 +76,75 @@ for setting in settings:
found = set()
buf = ""
input_lines = list(open(filename))
with open(filename, encoding="utf-8") as f:
input_lines = list(f)
while len(input_lines) > 0:
line = input_lines.pop(0)
# If this configuration file uses folded lines, append any folded lines
# into our input buffer.
if folded_lines and line[0] not in (comment_char, " ", ""):
if folded_lines and line[0] not in {comment_char, " ", ""}:
while len(input_lines) > 0 and input_lines[0][0] in " \t":
line += input_lines.pop(0)
# See if this line is for any settings passed on the command line.
for i in range(len(settings)):
# Check that this line contain this setting from the command-line arguments.
# Check if this line contains this setting from the command-line arguments.
name, val = settings[i].split("=", 1)
m = re.match(
"(\s*)"
+ "(" + re.escape(comment_char) + "\s*)?"
+ re.escape(name) + delimiter_re + "(.*?)\s*$",
r"(\s*)"
"(" + re.escape(comment_char) + r"\s*)?"
+ re.escape(name) + delimiter_re + r"(.*?)\s*$",
line, re.S)
if not m: continue
indent, is_comment, existing_val = m.groups()
# If this is already the setting, do nothing.
if is_comment is None and existing_val == val:
# If this is already the setting, keep it in the file, except:
# * If we've already seen it before, then remove this duplicate line.
# * If val is empty and erase_setting is on, then comment it out.
if is_comment is None and existing_val == val and not (not val and erase_setting):
# It may be that we've already inserted this setting higher
# in the file so check for that first.
if i in found: break
buf += line
found.add(i)
break
# comment-out the existing line (also comment any folded lines)
if is_comment is None:
buf += comment_char + line.rstrip().replace("\n", "\n" + comment_char) + "\n"
else:
# the line is already commented, pass it through
buf += line
# if this option oddly appears more than once, don't add the setting again
if i in found:
# if this option already is set don't add the setting again,
# or if we're clearing the setting with -e, don't add it
if (i in found) or (not val and erase_setting):
break
# add the new setting
buf += indent + name + delimiter + val + "\n"
# note that we've applied this option
found.add(i)
break
else:
# If it did not match any setting names, pass this line through.
buf += line
# Put any settings we didn't see at the end of the file.
# Put any settings we didn't see at the end of the file,
# except settings being cleared.
for i in range(len(settings)):
if i not in found:
name, val = settings[i].split("=", 1)
buf += name + delimiter + val + "\n"
if not (not val and erase_setting):
buf += name + delimiter + val + "\n"
if not testing:
# Write out the new file.
with open(filename, "w") as f:
with open(filename, "w", encoding="utf-8") as f:
f.write(buf)
else:
# Just print the new file to stdout.


@ -1,131 +1,3 @@
#!/usr/bin/python3
import sys, getpass, urllib.request, urllib.error, json, re
def mgmt(cmd, data=None, is_json=False):
# The base URL for the management daemon. (Listens on IPv4 only.)
mgmt_uri = 'http://127.0.0.1:10222'
setup_key_auth(mgmt_uri)
req = urllib.request.Request(mgmt_uri + cmd, urllib.parse.urlencode(data).encode("utf8") if data else None)
try:
response = urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
if e.code == 401:
try:
print(e.read().decode("utf8"))
except:
pass
print("The management daemon refused access. The API key file may be out of sync. Try 'service mailinabox restart'.", file=sys.stderr)
elif hasattr(e, 'read'):
print(e.read().decode('utf8'), file=sys.stderr)
else:
print(e, file=sys.stderr)
sys.exit(1)
resp = response.read().decode('utf8')
if is_json: resp = json.loads(resp)
return resp
def read_password():
while True:
first = getpass.getpass('password: ')
if len(first) < 4:
print("Passwords must be at least four characters.")
continue
if re.search(r'[\s]', first):
print("Passwords cannot contain spaces.")
continue
second = getpass.getpass(' (again): ')
if first != second:
print("Passwords not the same. Try again.")
continue
break
return first
def setup_key_auth(mgmt_uri):
key = open('/var/lib/mailinabox/api.key').read().strip()
auth_handler = urllib.request.HTTPBasicAuthHandler()
auth_handler.add_password(
realm='Mail-in-a-Box Management Server',
uri=mgmt_uri,
user=key,
passwd='')
opener = urllib.request.build_opener(auth_handler)
urllib.request.install_opener(opener)
if len(sys.argv) < 2:
print("Usage: ")
print(" tools/mail.py user (lists users)")
print(" tools/mail.py user add user@domain.com [password]")
print(" tools/mail.py user password user@domain.com [password]")
print(" tools/mail.py user remove user@domain.com")
print(" tools/mail.py user make-admin user@domain.com")
print(" tools/mail.py user remove-admin user@domain.com")
print(" tools/mail.py user admins (lists admins)")
print(" tools/mail.py alias (lists aliases)")
print(" tools/mail.py alias add incoming.name@domain.com sent.to@other.domain.com")
print(" tools/mail.py alias add incoming.name@domain.com 'sent.to@other.domain.com, multiple.people@other.domain.com'")
print(" tools/mail.py alias remove incoming.name@domain.com")
print()
print("Removing a mail user does not delete their mail folders on disk. It only prevents IMAP/SMTP login.")
print()
elif sys.argv[1] == "user" and len(sys.argv) == 2:
# Dump a list of users, one per line. Mark admins with an asterisk.
users = mgmt("/mail/users?format=json", is_json=True)
for domain in users:
for user in domain["users"]:
if user['status'] == 'inactive': continue
print(user['email'], end='')
if "admin" in user['privileges']:
print("*", end='')
print()
elif sys.argv[1] == "user" and sys.argv[2] in ("add", "password"):
if len(sys.argv) < 5:
if len(sys.argv) < 4:
email = input("email: ")
else:
email = sys.argv[3]
pw = read_password()
else:
email, pw = sys.argv[3:5]
if sys.argv[2] == "add":
print(mgmt("/mail/users/add", { "email": email, "password": pw }))
elif sys.argv[2] == "password":
print(mgmt("/mail/users/password", { "email": email, "password": pw }))
elif sys.argv[1] == "user" and sys.argv[2] == "remove" and len(sys.argv) == 4:
print(mgmt("/mail/users/remove", { "email": sys.argv[3] }))
elif sys.argv[1] == "user" and sys.argv[2] in ("make-admin", "remove-admin") and len(sys.argv) == 4:
if sys.argv[2] == "make-admin":
action = "add"
else:
action = "remove"
print(mgmt("/mail/users/privileges/" + action, { "email": sys.argv[3], "privilege": "admin" }))
elif sys.argv[1] == "user" and sys.argv[2] == "admins":
# Dump a list of admin users.
users = mgmt("/mail/users?format=json", is_json=True)
for domain in users:
for user in domain["users"]:
if "admin" in user['privileges']:
print(user['email'])
elif sys.argv[1] == "alias" and len(sys.argv) == 2:
print(mgmt("/mail/aliases"))
elif sys.argv[1] == "alias" and sys.argv[2] == "add" and len(sys.argv) == 5:
print(mgmt("/mail/aliases/add", { "address": sys.argv[3], "forwards_to": sys.argv[4] }))
elif sys.argv[1] == "alias" and sys.argv[2] == "remove" and len(sys.argv) == 4:
print(mgmt("/mail/aliases/remove", { "address": sys.argv[3] }))
else:
print("Invalid command-line arguments.")
sys.exit(1)
#!/bin/bash
# This script has moved.
management/cli.py "$@"

Some files were not shown because too many files have changed in this diff Show More