Compare commits

...

91 Commits

Author SHA1 Message Date
fe93cb3474 Merge pull request #3089 from NginxProxyManager/develop
v2.10.4
2023-08-02 11:44:02 +10:00
fa851b61da Bump version 2023-07-31 07:25:09 +10:00
3333a32612 Merge pull request #2971 from wolviex/certbot-dnsplugin-user-site-fix
drop --user on pip install dns plugin
2023-07-31 07:21:18 +10:00
9a79fce498 Merge pull request #3078 from andycandy-de/patch-1
Corrected docker-compose.yml
2023-07-26 10:27:30 +10:00
b1180f5077 Corrected docker-compose.yml
The mysql folder should not be mounted to the npm container!
2023-07-25 18:00:48 +02:00
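A minimal docker-compose sketch of the corrected layout (image names and paths are illustrative, based on the project's README example): the MySQL data directory is mounted only into the db service, never into the npm app container.

```yml
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
      # note: no ./mysql mount here
  db:
    image: 'jc21/mariadb-aria:latest'   # example database image
    volumes:
      - ./mysql:/var/lib/mysql
```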
5454352fe5 Merge pull request #2929 from FlixMa/develop
Add strato.de to certbot dns plugins
2023-07-20 12:25:37 +10:00
aee93a2f6f Merge pull request #2932 from nietzscheanic/patch-1
Fix for ignored ssl_protocols and ssl_ciphers directive in conf.d/inc…
2023-07-20 12:25:09 +10:00
f38cb5b500 Merge pull request #2942 from wrouesnel/444_default_support
Add support for nginx 444 default response
2023-07-20 12:23:57 +10:00
f1b7156c89 Merge pull request #3000 from xrh0905/xrh0905-patch-sed
Fix device or resource busy when patching IPv6 settings
2023-07-20 12:17:34 +10:00
98465cf1b0 Merge pull request #3018 from NginxProxyManager/dependabot/npm_and_yarn/docs/semver-7.5.2
Bump semver from 7.3.2 to 7.5.2 in /docs
2023-07-20 12:16:11 +10:00
137e865b66 Merge pull request #3069 from lug-gh/develop
update year 2022 -> 2023 in footer
2023-07-20 12:16:01 +10:00
e740fb4064 update year 2022 -> 2023 2023-07-19 13:27:17 +02:00
f91f0ee8db Merge pull request #3044 from 6twenty/2741-suppress-s6-supervise-disk-writes
Fix #2741 - Prevent excessive disk writes by only adding frontend service when in development
2023-07-19 13:09:12 +10:00
1c9f751512 Fix path to frontend service 2023-07-19 14:05:57 +12:00
a602bdd514 Bump semver from 7.3.2 to 7.5.2 in /docs
Bumps [semver](https://github.com/npm/node-semver) from 7.3.2 to 7.5.2.
- [Release notes](https://github.com/npm/node-semver/releases)
- [Changelog](https://github.com/npm/node-semver/blob/main/CHANGELOG.md)
- [Commits](https://github.com/npm/node-semver/compare/v7.3.2...v7.5.2)

---
updated-dependencies:
- dependency-name: semver
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-19 00:02:51 +00:00
f7b2be68cc Merge pull request #3048 from NginxProxyManager/dependabot/npm_and_yarn/docs/tough-cookie-4.1.3
Bump tough-cookie from 4.0.0 to 4.1.3 in /docs
2023-07-19 10:02:40 +10:00
ab4586fc6b Merge pull request #3049 from deftdawg/patch-1
Add bunny.net DNS to DNS challenges
2023-07-19 10:02:29 +10:00
a984a68065 Merge pull request #3051 from NginxProxyManager/dependabot/npm_and_yarn/backend/semver-5.7.2
Bump semver from 5.7.1 to 5.7.2 in /backend
2023-07-19 10:02:04 +10:00
52875fca6e Merge pull request #3053 from NginxProxyManager/dependabot/npm_and_yarn/test/semver-7.5.4
Bump semver from 7.3.2 to 7.5.4 in /test
2023-07-19 10:01:55 +10:00
63b50fcd95 Merge pull request #3054 from NginxProxyManager/dependabot/npm_and_yarn/frontend/semver-5.7.2
Bump semver from 5.7.1 to 5.7.2 in /frontend
2023-07-19 10:01:47 +10:00
5ab4aea03f Merge pull request #3065 from NginxProxyManager/dependabot/npm_and_yarn/test/word-wrap-1.2.4
Bump word-wrap from 1.2.3 to 1.2.4 in /test
2023-07-19 10:01:40 +10:00
d73135378e Merge pull request #3066 from NginxProxyManager/dependabot/npm_and_yarn/backend/word-wrap-1.2.4
Bump word-wrap from 1.2.3 to 1.2.4 in /backend
2023-07-19 10:01:30 +10:00
e19d685cb6 Merge pull request #3067 from NginxProxyManager/dependabot/npm_and_yarn/frontend/word-wrap-1.2.4
Bump word-wrap from 1.2.3 to 1.2.4 in /frontend
2023-07-19 10:01:20 +10:00
c8caaa56d9 Bump word-wrap from 1.2.3 to 1.2.4 in /backend
Bumps [word-wrap](https://github.com/jonschlinkert/word-wrap) from 1.2.3 to 1.2.4.
- [Release notes](https://github.com/jonschlinkert/word-wrap/releases)
- [Commits](https://github.com/jonschlinkert/word-wrap/compare/1.2.3...1.2.4)

---
updated-dependencies:
- dependency-name: word-wrap
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-18 20:59:11 +00:00
11a98f4c12 Bump word-wrap from 1.2.3 to 1.2.4 in /frontend
Bumps [word-wrap](https://github.com/jonschlinkert/word-wrap) from 1.2.3 to 1.2.4.
- [Release notes](https://github.com/jonschlinkert/word-wrap/releases)
- [Commits](https://github.com/jonschlinkert/word-wrap/compare/1.2.3...1.2.4)

---
updated-dependencies:
- dependency-name: word-wrap
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-18 20:59:11 +00:00
4a85d4ac4e Bump word-wrap from 1.2.3 to 1.2.4 in /test
Bumps [word-wrap](https://github.com/jonschlinkert/word-wrap) from 1.2.3 to 1.2.4.
- [Release notes](https://github.com/jonschlinkert/word-wrap/releases)
- [Commits](https://github.com/jonschlinkert/word-wrap/compare/1.2.3...1.2.4)

---
updated-dependencies:
- dependency-name: word-wrap
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-18 20:59:08 +00:00
3138ba46ce Bump semver from 5.7.1 to 5.7.2 in /frontend
Bumps [semver](https://github.com/npm/node-semver) from 5.7.1 to 5.7.2.
- [Release notes](https://github.com/npm/node-semver/releases)
- [Changelog](https://github.com/npm/node-semver/blob/v5.7.2/CHANGELOG.md)
- [Commits](https://github.com/npm/node-semver/compare/v5.7.1...v5.7.2)

---
updated-dependencies:
- dependency-name: semver
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-12 05:37:17 +00:00
cdd0b2e6d3 Bump semver from 7.3.2 to 7.5.4 in /test
Bumps [semver](https://github.com/npm/node-semver) from 7.3.2 to 7.5.4.
- [Release notes](https://github.com/npm/node-semver/releases)
- [Changelog](https://github.com/npm/node-semver/blob/main/CHANGELOG.md)
- [Commits](https://github.com/npm/node-semver/compare/v7.3.2...v7.5.4)

---
updated-dependencies:
- dependency-name: semver
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-12 02:34:02 +00:00
f458730d87 Bump semver from 5.7.1 to 5.7.2 in /backend
Bumps [semver](https://github.com/npm/node-semver) from 5.7.1 to 5.7.2.
- [Release notes](https://github.com/npm/node-semver/releases)
- [Changelog](https://github.com/npm/node-semver/blob/v5.7.2/CHANGELOG.md)
- [Commits](https://github.com/npm/node-semver/compare/v5.7.1...v5.7.2)

---
updated-dependencies:
- dependency-name: semver
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-11 02:29:25 +00:00
d20873dcbb Add bunny.net DNS to DNS challenges
- Add support for bunny.net DNS challenges using @mwt's certbot-dns-bunny plugin.
2023-07-08 22:48:54 -04:00
d1e9407e4d Bump tough-cookie from 4.0.0 to 4.1.3 in /docs
Bumps [tough-cookie](https://github.com/salesforce/tough-cookie) from 4.0.0 to 4.1.3.
- [Release notes](https://github.com/salesforce/tough-cookie/releases)
- [Changelog](https://github.com/salesforce/tough-cookie/blob/master/CHANGELOG.md)
- [Commits](https://github.com/salesforce/tough-cookie/compare/v4.0.0...v4.1.3)

---
updated-dependencies:
- dependency-name: tough-cookie
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-08 14:17:45 +00:00
63ee69f432 Fix device or resource busy when patching IPv6 settings 2023-06-15 11:17:02 +08:00
f39e527680 drop --user on pip install dns plugin godaddy
Do not install the DNS plugin into the user site, because it would lack sys.path precedence over urllib3 in /opt/certbot/lib/python3.7/site-packages
2023-06-01 11:02:06 -07:00
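A rough shell sketch of the change described above, assuming the certbot virtualenv activation pattern used elsewhere in this repo (the plugin name is just an example):

```sh
# run inside the container; install into the certbot venv, not the user site
. /opt/certbot/bin/activate
# before: pip install --user certbot-dns-godaddy
pip install certbot-dns-godaddy   # venv install keeps /opt/certbot's urllib3 first on sys.path
deactivate
```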
2dd4434ceb Add support for nginx 444 default response
The default nginx 444 response drops the inbound connection without
sending any response to the client.
2023-05-22 11:59:50 +10:00
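For reference, a minimal nginx sketch of such a default server (not the project's exact template):

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 444;  # close the connection without sending any response
}
```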
81054631f9 Fix for ignored ssl_protocols and ssl_ciphers directive in conf.d/include/ssl-ciphers.conf
nginx only applies the `ssl_protocols` directive from the `server{}` block of the first host config it processes, which is the default config in `/etc/nginx/conf.d/default.conf`. In version `v2.9.20` the default SSL site was dropped by using `ssl_reject_handshake on` in the default host config, but the include of `conf.d/include/ssl-ciphers.conf` was also removed from that config. As a result, `TLSv1.3` is no longer applied by default, and the same goes for the defined cipher suites. NPM has been broken in this respect since `2023-03-16`.

commit that broke the config -> a7f0c3b730
2023-05-19 14:13:29 +02:00
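A simplified sketch of the repaired default host config, assuming the include path named in the commit message:

```nginx
# /etc/nginx/conf.d/default.conf (simplified sketch)
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_reject_handshake on;                  # still no default SSL site
    include conf.d/include/ssl-ciphers.conf;  # restores the ssl_protocols / ssl_ciphers defaults
}
```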
53d61bd626 Try to fix linter error in certbot plugin definitions. 2023-05-18 14:14:38 +02:00
847e879b3f Update certbot-dns-plugins.js
Add dns wildcard certificate support for strato.de using the provided certbot plugin
2023-05-18 13:44:52 +02:00
824c837a38 Merge pull request #2906 from NginxProxyManager/develop
Fix certbot plugins install when using PUID/PGID
2023-05-10 14:40:15 +10:00
2a06384a4a Merge branch 'master' into develop 2023-05-10 14:40:06 +10:00
05307aa253 Fix certbot plugins install when using PUID/PGID 2023-05-10 14:39:08 +10:00
3d2406ac3d Merge pull request #2905 from NginxProxyManager/develop
v2.10.3
2023-05-10 14:09:04 +10:00
0127dc7f03 Bump version 2023-05-10 11:32:22 +10:00
4349d42636 Merge pull request #2904 from NginxProxyManager/s6-verbose
Fixes for s6 timeout at startup
2023-05-10 11:31:17 +10:00
4b6f9d9419 Remove s6 service timeout 2023-05-10 09:57:24 +10:00
c3f019c911 Test ipv6 disabled in ci 2023-05-09 08:19:09 +10:00
ecf0290203 Update s6-overlay 2023-05-09 08:15:44 +10:00
4f41fe0c95 Update s6-overlay 2023-05-05 08:46:54 +10:00
c3735fdbbb Missed a file that was explicit verbose 2023-05-04 12:30:27 +10:00
c432c34fb3 Small refactor of user/groups and add checks during startup. Only use -x in bash scripts when DEBUG=true set in env vars 2023-05-04 10:03:06 +10:00
a1245bc161 Split up ownership to identify the point of failure 2023-05-04 08:27:38 +10:00
db4ab1d548 Verbose debugging of s6 scripts 2023-05-03 16:01:27 +10:00
86ddd9c83c Merge pull request #2784 from NginxProxyManager/develop
v2.10.2
2023-03-31 09:37:08 +10:00
67208e43cc Merge branch 'master' into develop 2023-03-31 08:27:00 +10:00
ddf80302c6 Bump version 2023-03-31 08:25:45 +10:00
5f2576946d Merge pull request #2783 from NginxProxyManager/uidgid
Make PUID and PGID optional
2023-03-31 08:25:07 +10:00
9fe07fa6c3 Update documentation 2023-03-30 15:37:59 +10:00
d9b9af543e Fix text replacement whoops 2023-03-30 15:03:57 +10:00
eb2e2e0478 Throw in a docker restart during testing phase 2023-03-30 14:44:15 +10:00
9225d5d442 Tweak test 2023-03-30 13:00:22 +10:00
308a7149ed Tweak test 2023-03-30 12:55:20 +10:00
8a4a7d0caf Allow 201 as success in test result 2023-03-30 12:51:26 +10:00
5d03ede100 Add test for creating a host 2023-03-30 12:44:28 +10:00
4a86bb42cc Different approach: always create npmuser
even if the user ID is zero, and then always use it
2023-03-30 11:19:16 +10:00
dad8561ea1 Use numbers for permissions in case npmuser doesn't exist 2023-03-30 10:20:20 +10:00
56a92e5c0e Run as root by default
Optionally run as another user/group only if the env vars are specified. This should give flexibility to those who need to run processes as root and open ports without having to request additional privileges
2023-03-30 09:04:37 +10:00
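A docker-compose sketch of the now-optional settings (values are examples):

```yml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    environment:
      # omit both variables to keep the new default of running as root
      PUID: '1000'   # run nginx and the backend as this user id
      PGID: '1000'   # ...and this group id
```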
9d672f5813 Own this nginx folder too 2023-03-29 14:04:48 +10:00
d5ed70dbb6 Own this nginx folder too 2023-03-29 14:03:58 +10:00
c197e66d62 Merge pull request #2764 from NginxProxyManager/develop
v2.10.1
2023-03-29 08:54:30 +10:00
91cf3c8873 Tweaks to docker compose ci after updates 2023-03-29 08:24:28 +10:00
7f5e0414ac Bump version 2023-03-29 07:22:15 +10:00
d179887c15 Another fix for #2734, only chown parts of /etc/nginx 2023-03-28 10:39:26 +10:00
35abb4d7ae Execute permissions missing on script 2023-03-28 09:33:30 +10:00
61b290e220 Chown each folder separately
Not really sure why this fixes #2734, but it does help the ownership script succeed, specifically on armv7/Raspbian
2023-03-28 08:50:10 +10:00
e1bcef6e5c Merge pull request #2749 from NginxProxyManager/develop
v2.10.0
2023-03-27 12:17:07 +10:00
81f51f9e2d Merge branch 'master' into develop 2023-03-27 08:29:08 +10:00
661953db25 Bump version 2023-03-27 08:26:42 +10:00
065c2dac42 Merge pull request #2721 from NginxProxyManager/docker-user-group
Docker users and groups, refactor configuration
2023-03-27 08:19:57 +10:00
c40e48e678 Fix docker restart because user already exists 2023-03-23 10:21:34 +10:00
124cb18e17 Fix renewing certs because of permission errors 2023-03-22 13:40:36 +10:00
5ac9dc0758 Attempt to set HOME for npmuser backend 2023-03-22 13:00:26 +10:00
9a799d51ce Optimize docker image a bit 2023-03-22 09:42:16 +10:00
77eb618758 Fix pip installs running as non-root user 2023-03-22 09:41:59 +10:00
79fedfcea4 Use consistent docker-compose file version in docs 2023-03-22 09:41:19 +10:00
8fdb8ac853 Update docs 2023-03-21 18:26:28 +10:00
4fdc80be01 Fix logical error with keys and mysql config 2023-03-21 17:59:27 +10:00
f8e6c8d018 Fix mistake with debug output 2023-03-21 17:49:39 +10:00
c3469de61b Linting fixes 2023-03-21 17:11:16 +10:00
ea61b15a40 don't zip log files anymore 2023-03-21 16:59:36 +10:00
60175e6d8c Updates for ci stack 2023-03-21 16:56:45 +10:00
2a07445005 Refactor configuration
- No longer use config npm package
- Prefer config from env vars, though still has support for config file
- No longer writes a config file for database config
- Writes keys to a new file in /data folder
- Removes a lot of cruft and improves config understanding
2023-03-21 16:53:39 +10:00
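A sketch of the env-var-driven database configuration this refactor prefers, using the variable names from backend/lib/config.js shown further down (values are placeholders):

```yml
services:
  app:
    environment:
      DB_MYSQL_HOST: 'db'
      DB_MYSQL_PORT: '3306'
      DB_MYSQL_USER: 'npm'
      DB_MYSQL_PASSWORD: 'changeme'
      DB_MYSQL_NAME: 'npm'
      # leave these unset to fall back to SQLite at /data/database.sqlite;
      # JWT keys now live in /data/keys.json rather than a generated config file
```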
dad3e1da7c Adds support to run processes as a user/group, defined
with PUID and PGID environment variables

- Detects if image is run with a user in docker command and fails if so
- Adds s6 prepare scripts for adding a 'npmuser'
- Split up and refactor the s6 prepare scripts
- Runs nginx and backend node as 'npmuser'
- Changes ownership of files required at startup
2023-03-20 16:56:52 +10:00
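A hypothetical shell sketch of what such a prepare step can look like (not the actual s6 script; paths and flags are assumptions):

```sh
#!/bin/sh
# create 'npmuser' to match the requested PUID/PGID and hand it the runtime dirs
PUID="${PUID:-0}"
PGID="${PGID:-0}"
groupadd -o -g "$PGID" npmuser 2>/dev/null || true
useradd -o -u "$PUID" -g "$PGID" -M -s /bin/false npmuser 2>/dev/null || true
chown -R "$PUID:$PGID" /data /etc/letsencrypt
```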
50 changed files with 1604 additions and 1155 deletions

View File

@@ -1 +1 @@
-2.9.22
+2.10.4

Jenkinsfile (vendored), 24 lines changed
View File

@@ -14,9 +14,9 @@ pipeline {
		ansiColor('xterm')
	}
	environment {
-		IMAGE = "nginx-proxy-manager"
+		IMAGE = 'nginx-proxy-manager'
		BUILD_VERSION = getVersion()
-		MAJOR_VERSION = "2"
+		MAJOR_VERSION = '2'
		BRANCH_LOWER = "${BRANCH_NAME.toLowerCase().replaceAll('/', '-')}"
		COMPOSE_PROJECT_NAME = "npm_${BRANCH_LOWER}_${BUILD_NUMBER}"
		COMPOSE_FILE = 'docker/docker-compose.ci.yml'
@@ -90,20 +90,24 @@ pipeline {
			steps {
				// Bring up a stack
				sh 'docker-compose up -d fullstack-sqlite'
-				sh './scripts/wait-healthy $(docker-compose ps -q fullstack-sqlite) 120'
+				sh './scripts/wait-healthy $(docker-compose ps --all -q fullstack-sqlite) 120'
+				// Stop and Start it, as this will test it's ability to restart with existing data
+				sh 'docker-compose stop fullstack-sqlite'
+				sh 'docker-compose start fullstack-sqlite'
+				sh './scripts/wait-healthy $(docker-compose ps --all -q fullstack-sqlite) 120'
				// Run tests
				sh 'rm -rf test/results'
				sh 'docker-compose up cypress-sqlite'
				// Get results
-				sh 'docker cp -L "$(docker-compose ps -q cypress-sqlite):/test/results" test/'
+				sh 'docker cp -L "$(docker-compose ps --all -q cypress-sqlite):/test/results" test/'
			}
			post {
				always {
					// Dumps to analyze later
					sh 'mkdir -p debug'
-					sh 'docker-compose logs fullstack-sqlite | gzip > debug/docker_fullstack_sqlite.log.gz'
-					sh 'docker-compose logs db | gzip > debug/docker_db.log.gz'
+					sh 'docker-compose logs fullstack-sqlite > debug/docker_fullstack_sqlite.log'
+					sh 'docker-compose logs db > debug/docker_db.log'
					// Cypress videos and screenshot artifacts
					dir(path: 'test/results') {
						archiveArtifacts allowEmptyArchive: true, artifacts: '**/*', excludes: '**/*.xml'
@@ -116,20 +120,20 @@ pipeline {
			steps {
				// Bring up a stack
				sh 'docker-compose up -d fullstack-mysql'
-				sh './scripts/wait-healthy $(docker-compose ps -q fullstack-mysql) 120'
+				sh './scripts/wait-healthy $(docker-compose ps --all -q fullstack-mysql) 120'
				// Run tests
				sh 'rm -rf test/results'
				sh 'docker-compose up cypress-mysql'
				// Get results
-				sh 'docker cp -L "$(docker-compose ps -q cypress-mysql):/test/results" test/'
+				sh 'docker cp -L "$(docker-compose ps --all -q cypress-mysql):/test/results" test/'
			}
			post {
				always {
					// Dumps to analyze later
					sh 'mkdir -p debug'
-					sh 'docker-compose logs fullstack-mysql | gzip > debug/docker_fullstack_mysql.log.gz'
-					sh 'docker-compose logs db | gzip > debug/docker_db.log.gz'
+					sh 'docker-compose logs fullstack-mysql > debug/docker_fullstack_mysql.log'
+					sh 'docker-compose logs db > debug/docker_db.log'
					// Cypress videos and screenshot artifacts
					dir(path: 'test/results') {
						archiveArtifacts allowEmptyArchive: true, artifacts: '**/*', excludes: '**/*.xml'

View File

@@ -1,7 +1,7 @@
 <p align="center">
 	<img src="https://nginxproxymanager.com/github.png">
 	<br><br>
-	<img src="https://img.shields.io/badge/version-2.9.22-green.svg?style=for-the-badge">
+	<img src="https://img.shields.io/badge/version-2.10.4-green.svg?style=for-the-badge">
 	<a href="https://hub.docker.com/repository/docker/jc21/nginx-proxy-manager">
 	<img src="https://img.shields.io/docker/stars/jc21/nginx-proxy-manager.svg?style=for-the-badge">
 	</a>
@@ -56,7 +56,7 @@ I won't go in to too much detail here but here are the basics for someone new to
 2. Create a docker-compose.yml file similar to this:
 ```yml
-version: '3'
+version: '3.8'
 services:
   app:
     image: 'jc21/nginx-proxy-manager:latest'
@@ -70,6 +70,8 @@ services:
       - ./letsencrypt:/etc/letsencrypt
 ```
+
+This is the bare minimum configuration required. See the [documentation](https://nginxproxymanager.com/setup/) for more.
 3. Bring up your stack by running
 ```bash
View File

@@ -2,6 +2,7 @@ const express = require('express');
 const bodyParser = require('body-parser');
 const fileUpload = require('express-fileupload');
 const compression = require('compression');
+const config = require('./lib/config');
 const log = require('./logger').express;
 /**
@@ -24,7 +25,7 @@ app.enable('trust proxy', ['loopback', 'linklocal', 'uniquelocal']);
 app.enable('strict routing');
 // pretty print JSON when not live
-if (process.env.NODE_ENV !== 'production') {
+if (config.debug()) {
 	app.set('json spaces', 2);
 }
@@ -65,7 +66,7 @@ app.use(function (err, req, res, next) {
 	}
 };
-	if (process.env.NODE_ENV === 'development' || (req.baseUrl + req.path).includes('nginx/certificates')) {
+	if (config.debug() || (req.baseUrl + req.path).includes('nginx/certificates')) {
 		payload.debug = {
 			stack: typeof err.stack !== 'undefined' && err.stack ? err.stack.split('\n') : null,
 			previous: err.previous
@@ -74,7 +75,7 @@ app.use(function (err, req, res, next) {
 	// Not every error is worth logging - but this is good for now until it gets annoying.
 	if (typeof err.stack !== 'undefined' && err.stack) {
-		if (process.env.NODE_ENV === 'development' || process.env.DEBUG) {
+		if (config.debug()) {
 			log.debug(err.stack);
 		} else if (typeof err.public == 'undefined' || !err.public) {
 			log.warn(err.message);

View File

@ -1,33 +1,27 @@
const config = require('config'); const config = require('./lib/config');
if (!config.has('database')) { if (!config.has('database')) {
throw new Error('Database config does not exist! Please read the instructions: https://github.com/jc21/nginx-proxy-manager/blob/master/doc/INSTALL.md'); throw new Error('Database config does not exist! Please read the instructions: https://nginxproxymanager.com/setup/');
} }
function generateDbConfig() { function generateDbConfig() {
if (config.database.engine === 'knex-native') { const cfg = config.get('database');
return config.database.knex; if (cfg.engine === 'knex-native') {
} else return cfg.knex;
return { }
client: config.database.engine, return {
connection: { client: cfg.engine,
host: config.database.host, connection: {
user: config.database.user, host: cfg.host,
password: config.database.password, user: cfg.user,
database: config.database.name, password: cfg.password,
port: config.database.port database: cfg.name,
}, port: cfg.port
migrations: { },
tableName: 'migrations' migrations: {
} tableName: 'migrations'
}; }
};
} }
module.exports = require('knex')(generateDbConfig());
let data = generateDbConfig();
if (typeof config.database.version !== 'undefined') {
data.version = config.database.version;
}
module.exports = require('knex')(data);

View File

@ -40,6 +40,210 @@
} }
} }
}, },
"/nginx/proxy-hosts": {
"get": {
"operationId": "getProxyHosts",
"summary": "Get all proxy hosts",
"tags": ["Proxy Hosts"],
"security": [
{
"BearerAuth": ["users"]
}
],
"parameters": [
{
"in": "query",
"name": "expand",
"description": "Expansions",
"schema": {
"type": "string",
"enum": ["access_list", "owner", "certificate"]
}
}
],
"responses": {
"200": {
"description": "200 response",
"content": {
"application/json": {
"examples": {
"default": {
"value": [
{
"id": 1,
"created_on": "2023-03-30T01:12:23.000Z",
"modified_on": "2023-03-30T02:15:40.000Z",
"owner_user_id": 1,
"domain_names": ["aasdasdad"],
"forward_host": "asdasd",
"forward_port": 80,
"access_list_id": 0,
"certificate_id": 0,
"ssl_forced": 0,
"caching_enabled": 0,
"block_exploits": 0,
"advanced_config": "sdfsdfsdf",
"meta": {
"letsencrypt_agree": false,
"dns_challenge": false,
"nginx_online": false,
"nginx_err": "Command failed: /usr/sbin/nginx -t -g \"error_log off;\"\nnginx: [emerg] unknown directive \"sdfsdfsdf\" in /data/nginx/proxy_host/1.conf:37\nnginx: configuration file /etc/nginx/nginx.conf test failed\n"
},
"allow_websocket_upgrade": 0,
"http2_support": 0,
"forward_scheme": "http",
"enabled": 1,
"locations": [],
"hsts_enabled": 0,
"hsts_subdomains": 0,
"owner": {
"id": 1,
"created_on": "2023-03-30T01:11:50.000Z",
"modified_on": "2023-03-30T01:11:50.000Z",
"is_deleted": 0,
"is_disabled": 0,
"email": "admin@example.com",
"name": "Administrator",
"nickname": "Admin",
"avatar": "",
"roles": ["admin"]
},
"access_list": null,
"certificate": null
},
{
"id": 2,
"created_on": "2023-03-30T02:11:49.000Z",
"modified_on": "2023-03-30T02:11:49.000Z",
"owner_user_id": 1,
"domain_names": ["test.example.com"],
"forward_host": "1.1.1.1",
"forward_port": 80,
"access_list_id": 0,
"certificate_id": 0,
"ssl_forced": 0,
"caching_enabled": 0,
"block_exploits": 0,
"advanced_config": "",
"meta": {
"letsencrypt_agree": false,
"dns_challenge": false,
"nginx_online": true,
"nginx_err": null
},
"allow_websocket_upgrade": 0,
"http2_support": 0,
"forward_scheme": "http",
"enabled": 1,
"locations": [],
"hsts_enabled": 0,
"hsts_subdomains": 0,
"owner": {
"id": 1,
"created_on": "2023-03-30T01:11:50.000Z",
"modified_on": "2023-03-30T01:11:50.000Z",
"is_deleted": 0,
"is_disabled": 0,
"email": "admin@example.com",
"name": "Administrator",
"nickname": "Admin",
"avatar": "",
"roles": ["admin"]
},
"access_list": null,
"certificate": null
}
]
}
},
"schema": {
"$ref": "#/components/schemas/ProxyHostsList"
}
}
}
}
}
},
"post": {
"operationId": "createProxyHost",
"summary": "Create a Proxy Host",
"tags": ["Proxy Hosts"],
"security": [
{
"BearerAuth": ["users"]
}
],
"parameters": [
{
"in": "body",
"name": "proxyhost",
"description": "Proxy Host Payload",
"required": true,
"schema": {
"$ref": "#/components/schemas/ProxyHostObject"
}
}
],
"responses": {
"201": {
"description": "201 response",
"content": {
"application/json": {
"examples": {
"default": {
"value": {
"id": 3,
"created_on": "2023-03-30T02:31:27.000Z",
"modified_on": "2023-03-30T02:31:27.000Z",
"owner_user_id": 1,
"domain_names": ["test2.example.com"],
"forward_host": "1.1.1.1",
"forward_port": 80,
"access_list_id": 0,
"certificate_id": 0,
"ssl_forced": 0,
"caching_enabled": 0,
"block_exploits": 0,
"advanced_config": "",
"meta": {
"letsencrypt_agree": false,
"dns_challenge": false
},
"allow_websocket_upgrade": 0,
"http2_support": 0,
"forward_scheme": "http",
"enabled": 1,
"locations": [],
"hsts_enabled": 0,
"hsts_subdomains": 0,
"certificate": null,
"owner": {
"id": 1,
"created_on": "2023-03-30T01:11:50.000Z",
"modified_on": "2023-03-30T01:11:50.000Z",
"is_deleted": 0,
"is_disabled": 0,
"email": "admin@example.com",
"name": "Administrator",
"nickname": "Admin",
"avatar": "",
"roles": ["admin"]
},
"access_list": null,
"use_default_location": true,
"ipv6": true
}
}
},
"schema": {
"$ref": "#/components/schemas/ProxyHostObject"
}
}
}
}
}
}
},
"/schema": { "/schema": {
"get": { "get": {
"operationId": "schema", "operationId": "schema",
@ -55,14 +259,10 @@
"get": { "get": {
"operationId": "refreshToken", "operationId": "refreshToken",
"summary": "Refresh your access token", "summary": "Refresh your access token",
"tags": [ "tags": ["Tokens"],
"Tokens"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["tokens"]
"tokens"
]
} }
], ],
"responses": { "responses": {
@ -104,19 +304,14 @@
"scope": { "scope": {
"minLength": 1, "minLength": 1,
"type": "string", "type": "string",
"enum": [ "enum": ["user"]
"user"
]
}, },
"secret": { "secret": {
"minLength": 1, "minLength": 1,
"type": "string" "type": "string"
} }
}, },
"required": [ "required": ["identity", "secret"],
"identity",
"secret"
],
"type": "object" "type": "object"
} }
} }
@ -144,23 +339,17 @@
} }
}, },
"summary": "Request a new access token from credentials", "summary": "Request a new access token from credentials",
"tags": [ "tags": ["Tokens"]
"Tokens"
]
} }
}, },
"/settings": { "/settings": {
"get": { "get": {
"operationId": "getSettings", "operationId": "getSettings",
"summary": "Get all settings", "summary": "Get all settings",
"tags": [ "tags": ["Settings"],
"Settings"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["settings"]
"settings"
]
} }
], ],
"responses": { "responses": {
@ -194,14 +383,10 @@
"get": { "get": {
"operationId": "getSetting", "operationId": "getSetting",
"summary": "Get a setting", "summary": "Get a setting",
"tags": [ "tags": ["Settings"],
"Settings"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["settings"]
"settings"
]
} }
], ],
"parameters": [ "parameters": [
@ -244,14 +429,10 @@
"put": { "put": {
"operationId": "updateSetting", "operationId": "updateSetting",
"summary": "Update a setting", "summary": "Update a setting",
"tags": [ "tags": ["Settings"],
"Settings"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["settings"]
"settings"
]
} }
], ],
"parameters": [ "parameters": [
@ -305,14 +486,10 @@
"get": { "get": {
"operationId": "getUsers", "operationId": "getUsers",
"summary": "Get all users", "summary": "Get all users",
"tags": [ "tags": ["Users"],
"Users"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["users"]
"users"
]
} }
], ],
"parameters": [ "parameters": [
@ -322,9 +499,7 @@
"description": "Expansions", "description": "Expansions",
"schema": { "schema": {
"type": "string", "type": "string",
"enum": [ "enum": ["permissions"]
"permissions"
]
} }
} }
], ],
@ -345,9 +520,7 @@
"name": "Jamie Curnow", "name": "Jamie Curnow",
"nickname": "James", "nickname": "James",
"avatar": "//www.gravatar.com/avatar/6193176330f8d38747f038c170ddb193?default=mm", "avatar": "//www.gravatar.com/avatar/6193176330f8d38747f038c170ddb193?default=mm",
"roles": [ "roles": ["admin"]
"admin"
]
} }
] ]
}, },
@ -362,9 +535,7 @@
"name": "Jamie Curnow", "name": "Jamie Curnow",
"nickname": "James", "nickname": "James",
"avatar": "//www.gravatar.com/avatar/6193176330f8d38747f038c170ddb193?default=mm", "avatar": "//www.gravatar.com/avatar/6193176330f8d38747f038c170ddb193?default=mm",
"roles": [ "roles": ["admin"],
"admin"
],
"permissions": { "permissions": {
"visibility": "all", "visibility": "all",
"proxy_hosts": "manage", "proxy_hosts": "manage",
@ -389,14 +560,10 @@
"post": { "post": {
"operationId": "createUser", "operationId": "createUser",
"summary": "Create a User", "summary": "Create a User",
"tags": [ "tags": ["Users"],
"Users"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["users"]
"users"
]
} }
], ],
"parameters": [ "parameters": [
@ -426,9 +593,7 @@
"name": "Jamie Curnow", "name": "Jamie Curnow",
"nickname": "James", "nickname": "James",
"avatar": "//www.gravatar.com/avatar/6193176330f8d38747f038c170ddb193?default=mm", "avatar": "//www.gravatar.com/avatar/6193176330f8d38747f038c170ddb193?default=mm",
"roles": [ "roles": ["admin"],
"admin"
],
"permissions": { "permissions": {
"visibility": "all", "visibility": "all",
"proxy_hosts": "manage", "proxy_hosts": "manage",
@ -454,14 +619,10 @@
"get": { "get": {
"operationId": "getUser", "operationId": "getUser",
"summary": "Get a user", "summary": "Get a user",
"tags": [ "tags": ["Users"],
"Users"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["users"]
"users"
]
} }
], ],
"parameters": [ "parameters": [
@ -501,9 +662,7 @@
"name": "Jamie Curnow", "name": "Jamie Curnow",
"nickname": "James", "nickname": "James",
"avatar": "//www.gravatar.com/avatar/6193176330f8d38747f038c170ddb193?default=mm", "avatar": "//www.gravatar.com/avatar/6193176330f8d38747f038c170ddb193?default=mm",
"roles": [ "roles": ["admin"]
"admin"
]
} }
} }
}, },
@ -518,14 +677,10 @@
"put": { "put": {
"operationId": "updateUser", "operationId": "updateUser",
"summary": "Update a User", "summary": "Update a User",
"tags": [ "tags": ["Users"],
"Users"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["users"]
"users"
]
} }
], ],
"parameters": [ "parameters": [
@ -574,9 +729,7 @@
"name": "Jamie Curnow", "name": "Jamie Curnow",
"nickname": "James", "nickname": "James",
"avatar": "//www.gravatar.com/avatar/6193176330f8d38747f038c170ddb193?default=mm", "avatar": "//www.gravatar.com/avatar/6193176330f8d38747f038c170ddb193?default=mm",
"roles": [ "roles": ["admin"]
"admin"
]
} }
} }
}, },
@ -591,14 +744,10 @@
"delete": { "delete": {
"operationId": "deleteUser", "operationId": "deleteUser",
"summary": "Delete a User", "summary": "Delete a User",
"tags": [ "tags": ["Users"],
"Users"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["users"]
"users"
]
} }
], ],
"parameters": [ "parameters": [
@ -637,14 +786,10 @@
"put": { "put": {
"operationId": "updateUserAuth", "operationId": "updateUserAuth",
"summary": "Update a User's Authentication", "summary": "Update a User's Authentication",
"tags": [ "tags": ["Users"],
"Users"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["users"]
"users"
]
} }
], ],
"parameters": [ "parameters": [
@ -700,14 +845,10 @@
"put": { "put": {
"operationId": "updateUserPermissions", "operationId": "updateUserPermissions",
"summary": "Update a User's Permissions", "summary": "Update a User's Permissions",
"tags": [ "tags": ["Users"],
"Users"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["users"]
"users"
]
} }
], ],
"parameters": [ "parameters": [
@ -755,14 +896,10 @@
"put": { "put": {
"operationId": "loginAsUser", "operationId": "loginAsUser",
"summary": "Login as this user", "summary": "Login as this user",
"tags": [ "tags": ["Users"],
"Users"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["users"]
"users"
]
} }
], ],
"parameters": [ "parameters": [
@ -797,9 +934,7 @@
"name": "Jamie Curnow", "name": "Jamie Curnow",
"nickname": "James", "nickname": "James",
"avatar": "//www.gravatar.com/avatar/3c8d73f45fd8763f827b964c76e6032a?default=mm", "avatar": "//www.gravatar.com/avatar/3c8d73f45fd8763f827b964c76e6032a?default=mm",
"roles": [ "roles": ["admin"]
"admin"
]
} }
} }
} }
@ -807,11 +942,7 @@
"schema": { "schema": {
"type": "object", "type": "object",
"description": "Login object", "description": "Login object",
"required": [ "required": ["expires", "token", "user"],
"expires",
"token",
"user"
],
"additionalProperties": false, "additionalProperties": false,
"properties": { "properties": {
"expires": { "expires": {
@ -840,14 +971,10 @@
"get": { "get": {
"operationId": "reportsHosts", "operationId": "reportsHosts",
"summary": "Report on Host Statistics", "summary": "Report on Host Statistics",
"tags": [ "tags": ["Reports"],
"Reports"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["reports"]
"reports"
]
} }
], ],
"responses": { "responses": {
@ -878,14 +1005,10 @@
"get": { "get": {
"operationId": "getAuditLog", "operationId": "getAuditLog",
"summary": "Get Audit Log", "summary": "Get Audit Log",
"tags": [ "tags": ["Audit Log"],
"Audit Log"
],
"security": [ "security": [
{ {
"BearerAuth": [ "BearerAuth": ["audit-log"]
"audit-log"
]
} }
], ],
"responses": { "responses": {
@ -925,10 +1048,7 @@
"type": "object", "type": "object",
"description": "Health object", "description": "Health object",
"additionalProperties": false, "additionalProperties": false,
"required": [ "required": ["status", "version"],
"status",
"version"
],
"properties": { "properties": {
"status": { "status": {
"type": "string", "type": "string",
@ -944,11 +1064,7 @@
"revision": 0 "revision": 0
}, },
"additionalProperties": false, "additionalProperties": false,
"required": [ "required": ["major", "minor", "revision"],
"major",
"minor",
"revision"
],
"properties": { "properties": {
"major": { "major": {
"type": "integer", "type": "integer",
@ -969,10 +1085,7 @@
"TokenObject": { "TokenObject": {
"type": "object", "type": "object",
"description": "Token object", "description": "Token object",
"required": [ "required": ["expires", "token"],
"expires",
"token"
],
"additionalProperties": false, "additionalProperties": false,
"properties": { "properties": {
"expires": { "expires": {
@ -988,16 +1101,147 @@
} }
} }
}, },
"ProxyHostObject": {
"type": "object",
"description": "Proxy Host object",
"required": [
"id",
"created_on",
"modified_on",
"owner_user_id",
"domain_names",
"forward_host",
"forward_port",
"access_list_id",
"certificate_id",
"ssl_forced",
"caching_enabled",
"block_exploits",
"advanced_config",
"meta",
"allow_websocket_upgrade",
"http2_support",
"forward_scheme",
"enabled",
"locations",
"hsts_enabled",
"hsts_subdomains",
"certificate",
"use_default_location",
"ipv6"
],
"additionalProperties": false,
"properties": {
"id": {
"type": "integer",
"description": "Proxy Host ID",
"minimum": 1,
"example": 1
},
"created_on": {
"type": "string",
"description": "Created Date",
"example": "2020-01-30T09:36:08.000Z"
},
"modified_on": {
"type": "string",
"description": "Modified Date",
"example": "2020-01-30T09:41:04.000Z"
},
"owner_user_id": {
"type": "integer",
"minimum": 1,
"example": 1
},
"domain_names": {
"type": "array",
"minItems": 1,
"items": {
"type": "string",
"minLength": 1
}
},
"forward_host": {
"type": "string",
"minLength": 1
},
"forward_port": {
"type": "integer",
"minimum": 1
},
"access_list_id": {
"type": "integer"
},
"certificate_id": {
"type": "integer"
},
"ssl_forced": {
"type": "integer"
},
"caching_enabled": {
"type": "integer"
},
"block_exploits": {
"type": "integer"
},
"advanced_config": {
"type": "string"
},
"meta": {
"type": "object"
},
"allow_websocket_upgrade": {
"type": "integer"
},
"http2_support": {
"type": "integer"
},
"forward_scheme": {
"type": "string"
},
"enabled": {
"type": "integer"
},
"locations": {
"type": "array"
},
"hsts_enabled": {
"type": "integer"
},
"hsts_subdomains": {
"type": "integer"
},
"certificate": {
"type": "object",
"nullable": true
},
"owner": {
"type": "object",
"nullable": true
},
"access_list": {
"type": "object",
"nullable": true
},
"use_default_location": {
"type": "boolean"
},
"ipv6": {
"type": "boolean"
}
}
},
"ProxyHostsList": {
"type": "array",
"description": "Proxyn Hosts list",
"items": {
"$ref": "#/components/schemas/ProxyHostObject"
}
},
"SettingObject": { "SettingObject": {
"type": "object", "type": "object",
"description": "Setting object", "description": "Setting object",
"required": [ "required": ["id", "name", "description", "value", "meta"],
"id",
"name",
"description",
"value",
"meta"
],
"additionalProperties": false, "additionalProperties": false,
"properties": { "properties": {
"id": { "id": {
@ -1057,17 +1301,7 @@
"UserObject": { "UserObject": {
"type": "object", "type": "object",
"description": "User object", "description": "User object",
"required": [ "required": ["id", "created_on", "modified_on", "is_disabled", "email", "name", "nickname", "avatar", "roles"],
"id",
"created_on",
"modified_on",
"is_disabled",
"email",
"name",
"nickname",
"avatar",
"roles"
],
"additionalProperties": false, "additionalProperties": false,
"properties": { "properties": {
"id": { "id": {
@ -1117,9 +1351,7 @@
}, },
"roles": { "roles": {
"description": "Roles applied", "description": "Roles applied",
"example": [ "example": ["admin"],
"admin"
],
"type": "array", "type": "array",
"items": { "items": {
"type": "string" "type": "string"
@ -1137,10 +1369,7 @@
"AuthObject": { "AuthObject": {
"type": "object", "type": "object",
"description": "Authentication Object", "description": "Authentication Object",
"required": [ "required": ["type", "secret"],
"type",
"secret"
],
"properties": { "properties": {
"type": { "type": {
"type": "string", "type": "string",
@ -1167,64 +1396,37 @@
"visibility": { "visibility": {
"type": "string", "type": "string",
"description": "Visibility Type", "description": "Visibility Type",
"enum": [ "enum": ["all", "user"]
"all",
"user"
]
}, },
"access_lists": { "access_lists": {
"type": "string", "type": "string",
"description": "Access Lists Permissions", "description": "Access Lists Permissions",
"enum": [ "enum": ["hidden", "view", "manage"]
"hidden",
"view",
"manage"
]
}, },
"dead_hosts": { "dead_hosts": {
"type": "string", "type": "string",
"description": "404 Hosts Permissions", "description": "404 Hosts Permissions",
"enum": [ "enum": ["hidden", "view", "manage"]
"hidden",
"view",
"manage"
]
}, },
"proxy_hosts": { "proxy_hosts": {
"type": "string", "type": "string",
"description": "Proxy Hosts Permissions", "description": "Proxy Hosts Permissions",
"enum": [ "enum": ["hidden", "view", "manage"]
"hidden",
"view",
"manage"
]
}, },
"redirection_hosts": { "redirection_hosts": {
"type": "string", "type": "string",
"description": "Redirection Permissions", "description": "Redirection Permissions",
"enum": [ "enum": ["hidden", "view", "manage"]
"hidden",
"view",
"manage"
]
}, },
"streams": { "streams": {
"type": "string", "type": "string",
"description": "Streams Permissions", "description": "Streams Permissions",
"enum": [ "enum": ["hidden", "view", "manage"]
"hidden",
"view",
"manage"
]
}, },
"certificates": { "certificates": {
"type": "string", "type": "string",
"description": "Certificates Permissions", "description": "Certificates Permissions",
"enum": [ "enum": ["hidden", "view", "manage"]
"hidden",
"view",
"manage"
]
} }
} }
}, },
@ -1251,4 +1453,4 @@
} }
} }
} }
} }

View File

@ -3,9 +3,6 @@
const logger = require('./logger').global; const logger = require('./logger').global;
async function appStart () { async function appStart () {
// Create config file db settings if environment variables have been set
await createDbConfigFromEnvironment();
const migrate = require('./migrate'); const migrate = require('./migrate');
const setup = require('./setup'); const setup = require('./setup');
const app = require('./app'); const app = require('./app');
@ -42,90 +39,6 @@ async function appStart () {
}); });
} }
async function createDbConfigFromEnvironment() {
return new Promise((resolve, reject) => {
const envMysqlHost = process.env.DB_MYSQL_HOST || null;
const envMysqlPort = process.env.DB_MYSQL_PORT || null;
const envMysqlUser = process.env.DB_MYSQL_USER || null;
const envMysqlName = process.env.DB_MYSQL_NAME || null;
let envSqliteFile = process.env.DB_SQLITE_FILE || null;
const fs = require('fs');
const filename = (process.env.NODE_CONFIG_DIR || './config') + '/' + (process.env.NODE_ENV || 'default') + '.json';
let configData = {};
try {
configData = require(filename);
} catch (err) {
// do nothing
}
if (configData.database && configData.database.engine && !configData.database.fromEnv) {
logger.info('Manual db configuration already exists, skipping config creation from environment variables');
resolve();
return;
}
if ((!envMysqlHost || !envMysqlPort || !envMysqlUser || !envMysqlName) && !envSqliteFile){
envSqliteFile = '/data/database.sqlite';
logger.info(`No valid environment variables for database provided, using default SQLite file '${envSqliteFile}'`);
}
if (envMysqlHost && envMysqlPort && envMysqlUser && envMysqlName) {
const newConfig = {
fromEnv: true,
engine: 'mysql',
host: envMysqlHost,
port: envMysqlPort,
user: envMysqlUser,
password: process.env.DB_MYSQL_PASSWORD,
name: envMysqlName,
};
if (JSON.stringify(configData.database) === JSON.stringify(newConfig)) {
// Config is unchanged, skip overwrite
resolve();
return;
}
logger.info('Generating MySQL knex configuration from environment variables');
configData.database = newConfig;
} else {
const newConfig = {
fromEnv: true,
engine: 'knex-native',
knex: {
client: 'sqlite3',
connection: {
filename: envSqliteFile
},
useNullAsDefault: true
}
};
if (JSON.stringify(configData.database) === JSON.stringify(newConfig)) {
// Config is unchanged, skip overwrite
resolve();
return;
}
logger.info('Generating SQLite knex configuration');
configData.database = newConfig;
}
// Write config
fs.writeFile(filename, JSON.stringify(configData, null, 2), (err) => {
if (err) {
logger.error('Could not write db config to config file: ' + filename);
reject(err);
} else {
logger.debug('Wrote db configuration to config file: ' + filename);
resolve();
}
});
});
}
try { try {
appStart(); appStart();
} catch (err) { } catch (err) {

View File

@ -1,22 +1,24 @@
const _ = require('lodash'); const _ = require('lodash');
const fs = require('fs'); const fs = require('fs');
const https = require('https'); const https = require('https');
const tempWrite = require('temp-write'); const tempWrite = require('temp-write');
const moment = require('moment'); const moment = require('moment');
const logger = require('../logger').ssl; const logger = require('../logger').ssl;
const error = require('../lib/error'); const config = require('../lib/config');
const utils = require('../lib/utils'); const error = require('../lib/error');
const certificateModel = require('../models/certificate'); const utils = require('../lib/utils');
const dnsPlugins = require('../global/certbot-dns-plugins'); const certificateModel = require('../models/certificate');
const internalAuditLog = require('./audit-log'); const dnsPlugins = require('../global/certbot-dns-plugins');
const internalNginx = require('./nginx'); const internalAuditLog = require('./audit-log');
const internalHost = require('./host'); const internalNginx = require('./nginx');
const letsencryptStaging = process.env.NODE_ENV !== 'production'; const internalHost = require('./host');
const archiver = require('archiver');
const path = require('path');
const { isArray } = require('lodash');
const letsencryptStaging = config.useLetsencryptStaging();
const letsencryptConfig = '/etc/letsencrypt.ini'; const letsencryptConfig = '/etc/letsencrypt.ini';
const certbotCommand = 'certbot'; const certbotCommand = 'certbot';
const archiver = require('archiver');
const path = require('path');
const { isArray } = require('lodash');
function omissions() { function omissions() {
return ['is_deleted']; return ['is_deleted'];
@ -46,6 +48,8 @@ const internalCertificate = {
const cmd = certbotCommand + ' renew --non-interactive --quiet ' + const cmd = certbotCommand + ' renew --non-interactive --quiet ' +
'--config "' + letsencryptConfig + '" ' + '--config "' + letsencryptConfig + '" ' +
'--work-dir "/tmp/letsencrypt-lib" ' +
'--logs-dir "/tmp/letsencrypt-log" ' +
'--preferred-challenges "dns,http" ' + '--preferred-challenges "dns,http" ' +
'--disable-hook-validation ' + '--disable-hook-validation ' +
(letsencryptStaging ? '--staging' : ''); (letsencryptStaging ? '--staging' : '');
@ -833,6 +837,8 @@ const internalCertificate = {
const cmd = certbotCommand + ' certonly ' + const cmd = certbotCommand + ' certonly ' +
'--config "' + letsencryptConfig + '" ' + '--config "' + letsencryptConfig + '" ' +
'--work-dir "/tmp/letsencrypt-lib" ' +
'--logs-dir "/tmp/letsencrypt-log" ' +
'--cert-name "npm-' + certificate.id + '" ' + '--cert-name "npm-' + certificate.id + '" ' +
'--agree-tos ' + '--agree-tos ' +
'--authenticator webroot ' + '--authenticator webroot ' +
@ -871,13 +877,15 @@ const internalCertificate = {
const escapedCredentials = certificate.meta.dns_provider_credentials.replaceAll('\'', '\\\'').replaceAll('\\', '\\\\'); const escapedCredentials = certificate.meta.dns_provider_credentials.replaceAll('\'', '\\\'').replaceAll('\\', '\\\\');
const credentialsCmd = 'mkdir -p /etc/letsencrypt/credentials 2> /dev/null; echo \'' + escapedCredentials + '\' > \'' + credentialsLocation + '\' && chmod 600 \'' + credentialsLocation + '\''; const credentialsCmd = 'mkdir -p /etc/letsencrypt/credentials 2> /dev/null; echo \'' + escapedCredentials + '\' > \'' + credentialsLocation + '\' && chmod 600 \'' + credentialsLocation + '\'';
// we call `. /opt/certbot/bin/activate` (`.` is alternative to `source` in dash) to access certbot venv // we call `. /opt/certbot/bin/activate` (`.` is alternative to `source` in dash) to access certbot venv
let prepareCmd = '. /opt/certbot/bin/activate && pip install ' + dns_plugin.package_name + (dns_plugin.version_requirement || '') + ' ' + dns_plugin.dependencies + ' && deactivate'; const prepareCmd = '. /opt/certbot/bin/activate && pip install --no-cache-dir ' + dns_plugin.package_name + (dns_plugin.version_requirement || '') + ' ' + dns_plugin.dependencies + ' && deactivate';
// Whether the plugin has a --<name>-credentials argument // Whether the plugin has a --<name>-credentials argument
const hasConfigArg = certificate.meta.dns_provider !== 'route53'; const hasConfigArg = certificate.meta.dns_provider !== 'route53';
let mainCmd = certbotCommand + ' certonly ' + let mainCmd = certbotCommand + ' certonly ' +
'--config "' + letsencryptConfig + '" ' + '--config "' + letsencryptConfig + '" ' +
'--work-dir "/tmp/letsencrypt-lib" ' +
'--logs-dir "/tmp/letsencrypt-log" ' +
'--cert-name "npm-' + certificate.id + '" ' + '--cert-name "npm-' + certificate.id + '" ' +
'--agree-tos ' + '--agree-tos ' +
'--email "' + certificate.meta.letsencrypt_email + '" ' + '--email "' + certificate.meta.letsencrypt_email + '" ' +
@ -974,6 +982,8 @@ const internalCertificate = {
const cmd = certbotCommand + ' renew --force-renewal ' + const cmd = certbotCommand + ' renew --force-renewal ' +
'--config "' + letsencryptConfig + '" ' + '--config "' + letsencryptConfig + '" ' +
'--work-dir "/tmp/letsencrypt-lib" ' +
'--logs-dir "/tmp/letsencrypt-log" ' +
'--cert-name "npm-' + certificate.id + '" ' + '--cert-name "npm-' + certificate.id + '" ' +
'--preferred-challenges "dns,http" ' + '--preferred-challenges "dns,http" ' +
'--no-random-sleep-on-renew ' + '--no-random-sleep-on-renew ' +
@ -1004,6 +1014,8 @@ const internalCertificate = {
let mainCmd = certbotCommand + ' renew ' + let mainCmd = certbotCommand + ' renew ' +
'--config "' + letsencryptConfig + '" ' + '--config "' + letsencryptConfig + '" ' +
'--work-dir "/tmp/letsencrypt-lib" ' +
'--logs-dir "/tmp/letsencrypt-log" ' +
'--cert-name "npm-' + certificate.id + '" ' + '--cert-name "npm-' + certificate.id + '" ' +
'--disable-hook-validation ' + '--disable-hook-validation ' +
'--no-random-sleep-on-renew ' + '--no-random-sleep-on-renew ' +

View File

@ -1,9 +1,9 @@
const _ = require('lodash'); const _ = require('lodash');
const fs = require('fs'); const fs = require('fs');
const logger = require('../logger').nginx; const logger = require('../logger').nginx;
const utils = require('../lib/utils'); const config = require('../lib/config');
const error = require('../lib/error'); const utils = require('../lib/utils');
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG; const error = require('../lib/error');
const internalNginx = { const internalNginx = {
@ -65,7 +65,7 @@ const internalNginx = {
} }
}); });
if (debug_mode) { if (config.debug()) {
logger.error('Nginx test failed:', valid_lines.join('\n')); logger.error('Nginx test failed:', valid_lines.join('\n'));
} }
@ -101,7 +101,7 @@ const internalNginx = {
* @returns {Promise} * @returns {Promise}
*/ */
test: () => { test: () => {
if (debug_mode) { if (config.debug()) {
logger.info('Testing Nginx configuration'); logger.info('Testing Nginx configuration');
} }
@ -184,7 +184,7 @@ const internalNginx = {
generateConfig: (host_type, host) => { generateConfig: (host_type, host) => {
const nice_host_type = internalNginx.getFileFriendlyHostType(host_type); const nice_host_type = internalNginx.getFileFriendlyHostType(host_type);
if (debug_mode) { if (config.debug()) {
logger.info('Generating ' + nice_host_type + ' Config:', JSON.stringify(host, null, 2)); logger.info('Generating ' + nice_host_type + ' Config:', JSON.stringify(host, null, 2));
} }
@ -239,7 +239,7 @@ const internalNginx = {
.then((config_text) => { .then((config_text) => {
fs.writeFileSync(filename, config_text, {encoding: 'utf8'}); fs.writeFileSync(filename, config_text, {encoding: 'utf8'});
if (debug_mode) { if (config.debug()) {
logger.success('Wrote config:', filename, config_text); logger.success('Wrote config:', filename, config_text);
} }
@ -249,7 +249,7 @@ const internalNginx = {
resolve(true); resolve(true);
}) })
.catch((err) => { .catch((err) => {
if (debug_mode) { if (config.debug()) {
logger.warn('Could not write ' + filename + ':', err.message); logger.warn('Could not write ' + filename + ':', err.message);
} }
@ -268,7 +268,7 @@ const internalNginx = {
* @returns {Promise} * @returns {Promise}
*/ */
generateLetsEncryptRequestConfig: (certificate) => { generateLetsEncryptRequestConfig: (certificate) => {
if (debug_mode) { if (config.debug()) {
logger.info('Generating LetsEncrypt Request Config:', certificate); logger.info('Generating LetsEncrypt Request Config:', certificate);
} }
@ -292,14 +292,14 @@ const internalNginx = {
.then((config_text) => { .then((config_text) => {
fs.writeFileSync(filename, config_text, {encoding: 'utf8'}); fs.writeFileSync(filename, config_text, {encoding: 'utf8'});
if (debug_mode) { if (config.debug()) {
logger.success('Wrote config:', filename, config_text); logger.success('Wrote config:', filename, config_text);
} }
resolve(true); resolve(true);
}) })
.catch((err) => { .catch((err) => {
if (debug_mode) { if (config.debug()) {
logger.warn('Could not write ' + filename + ':', err.message); logger.warn('Could not write ' + filename + ':', err.message);
} }
@ -416,8 +416,8 @@ const internalNginx = {
* @param {string} config * @param {string} config
* @returns {boolean} * @returns {boolean}
*/ */
advancedConfigHasDefaultLocation: function (config) { advancedConfigHasDefaultLocation: function (cfg) {
return !!config.match(/^(?:.*;)?\s*?location\s*?\/\s*?{/im); return !!cfg.match(/^(?:.*;)?\s*?location\s*?\/\s*?{/im);
}, },
/** /**

backend/lib/config.js (new file), 184 lines
View File

@ -0,0 +1,184 @@
const fs = require('fs');
const NodeRSA = require('node-rsa');
const logger = require('../logger').global;
const keysFile = '/data/keys.json';
let instance = null;
// 1. Load from config file first (not recommended anymore)
// 2. Use config env variables next
const configure = () => {
const filename = (process.env.NODE_CONFIG_DIR || './config') + '/' + (process.env.NODE_ENV || 'default') + '.json';
if (fs.existsSync(filename)) {
let configData;
try {
configData = require(filename);
} catch (err) {
// do nothing
}
if (configData && configData.database) {
logger.info(`Using configuration from file: ${filename}`);
instance = configData;
instance.keys = getKeys();
return;
}
}
const envMysqlHost = process.env.DB_MYSQL_HOST || null;
const envMysqlUser = process.env.DB_MYSQL_USER || null;
const envMysqlName = process.env.DB_MYSQL_NAME || null;
if (envMysqlHost && envMysqlUser && envMysqlName) {
// we have enough mysql creds to go with mysql
logger.info('Using MySQL configuration');
instance = {
database: {
engine: 'mysql',
host: envMysqlHost,
port: process.env.DB_MYSQL_PORT || 3306,
user: envMysqlUser,
password: process.env.DB_MYSQL_PASSWORD,
name: envMysqlName,
},
keys: getKeys(),
};
return;
}
const envSqliteFile = process.env.DB_SQLITE_FILE || '/data/database.sqlite';
logger.info(`Using Sqlite: ${envSqliteFile}`);
instance = {
database: {
engine: 'knex-native',
knex: {
client: 'sqlite3',
connection: {
filename: envSqliteFile
},
useNullAsDefault: true
}
},
keys: getKeys(),
};
};
const getKeys = () => {
// Get keys from file
if (!fs.existsSync(keysFile)) {
generateKeys();
} else if (process.env.DEBUG) {
logger.info('Keys file exists OK');
}
try {
return require(keysFile);
} catch (err) {
logger.error('Could not read JWT key pair from config file: ' + keysFile, err);
process.exit(1);
}
};
const generateKeys = () => {
logger.info('Creating a new JWT key pair...');
// Now create the keys and save them in the config.
const key = new NodeRSA({ b: 2048 });
key.generateKeyPair();
const keys = {
key: key.exportKey('private').toString(),
pub: key.exportKey('public').toString(),
};
// Write keys config
try {
fs.writeFileSync(keysFile, JSON.stringify(keys, null, 2));
} catch (err) {
logger.error('Could not write JWT key pair to config file: ' + keysFile + ': ' + err.message);
process.exit(1);
}
logger.info('Wrote JWT key pair to config file: ' + keysFile);
};
module.exports = {
/**
*
* @param {string} key ie: 'database' or 'database.engine'
* @returns {boolean}
*/
has: function(key) {
instance === null && configure();
const keys = key.split('.');
let level = instance;
let has = true;
keys.forEach((keyItem) =>{
if (typeof level[keyItem] === 'undefined') {
has = false;
} else {
level = level[keyItem];
}
});
return has;
},
/**
* Gets a specific key from the top level
*
* @param {string} key
* @returns {*}
*/
get: function (key) {
instance === null && configure();
if (key && typeof instance[key] !== 'undefined') {
return instance[key];
}
return instance;
},
/**
* Is this a sqlite configuration?
*
* @returns {boolean}
*/
isSqlite: function () {
instance === null && configure();
return instance.database.knex && instance.database.knex.client === 'sqlite3';
},
/**
* Are we running in debug mode?
*
* @returns {boolean}
*/
debug: function () {
return !!process.env.DEBUG;
},
/**
* Returns a public key
*
* @returns {string}
*/
getPublicKey: function () {
instance === null && configure();
return instance.keys.pub;
},
/**
* Returns a private key
*
* @returns {string}
*/
getPrivateKey: function () {
instance === null && configure();
return instance.keys.key;
},
/**
* @returns {boolean}
*/
useLetsencryptStaging: function () {
return !!process.env.LE_STAGING;
}
};

View File

@@ -5,7 +5,7 @@ const definitions = require('../../schema/definitions.json');
 RegExp.prototype.toJSON = RegExp.prototype.toString;
 const ajv = require('ajv')({
-	verbose: true, //process.env.NODE_ENV === 'development',
+	verbose: true,
 	allErrors: true,
 	format: 'full', // strict regexes for format checks
 	coerceTypes: true,

View File

@ -1,11 +1,11 @@
const db = require('../db'); const db = require('../db');
const config = require('config'); const config = require('../lib/config');
const Model = require('objection').Model; const Model = require('objection').Model;
Model.knex(db); Model.knex(db);
module.exports = function () { module.exports = function () {
if (config.database.knex && config.database.knex.client === 'sqlite3') { if (config.isSqlite()) {
// eslint-disable-next-line // eslint-disable-next-line
return Model.raw("datetime('now','localtime')"); return Model.raw("datetime('now','localtime')");
} }
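For context, a hedged sketch of how this timestamp helper is typically consumed from an Objection model; the model name, table name and helper path here are assumptions for illustration only:

```js
// Illustrative only: using the now() helper shown above as a row timestamp.
const Model = require('objection').Model;
const now   = require('./now.helper'); // path assumed

class Setting extends Model {
	static get tableName() {
		return 'setting'; // table name assumed
	}

	$beforeInsert() {
		// On sqlite this resolves to raw "datetime('now','localtime')"
		this.created_on  = now();
		this.modified_on = now();
	}

	$beforeUpdate() {
		this.modified_on = now();
	}
}

module.exports = Setting;
```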

View File

@ -6,44 +6,36 @@
const _ = require('lodash'); const _ = require('lodash');
const jwt = require('jsonwebtoken'); const jwt = require('jsonwebtoken');
const crypto = require('crypto'); const crypto = require('crypto');
const config = require('../lib/config');
const error = require('../lib/error'); const error = require('../lib/error');
const logger = require('../logger').global;
const ALGO = 'RS256'; const ALGO = 'RS256';
let public_key = null;
let private_key = null;
function checkJWTKeyPair() {
if (!public_key || !private_key) {
let config = require('config');
public_key = config.get('jwt.pub');
private_key = config.get('jwt.key');
}
}
module.exports = function () { module.exports = function () {
let token_data = {}; let token_data = {};
let self = { const self = {
/** /**
* @param {Object} payload * @param {Object} payload
* @returns {Promise} * @returns {Promise}
*/ */
create: (payload) => { create: (payload) => {
if (!config.getPrivateKey()) {
logger.error('Private key is empty!');
}
// sign with RSA SHA256 // sign with RSA SHA256
let options = { const options = {
algorithm: ALGO, algorithm: ALGO,
expiresIn: payload.expiresIn || '1d' expiresIn: payload.expiresIn || '1d'
}; };
payload.jti = crypto.randomBytes(12) payload.jti = crypto.randomBytes(12)
.toString('base64') .toString('base64')
.substr(-8); .substring(-8);
checkJWTKeyPair();
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
jwt.sign(payload, private_key, options, (err, token) => { jwt.sign(payload, config.getPrivateKey(), options, (err, token) => {
if (err) { if (err) {
reject(err); reject(err);
} else { } else {
@ -62,13 +54,15 @@ module.exports = function () {
* @returns {Promise} * @returns {Promise}
*/ */
load: function (token) { load: function (token) {
if (!config.getPublicKey()) {
logger.error('Public key is empty!');
}
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
checkJWTKeyPair();
try { try {
if (!token || token === null || token === 'null') { if (!token || token === null || token === 'null') {
reject(new error.AuthError('Empty token')); reject(new error.AuthError('Empty token'));
} else { } else {
jwt.verify(token, public_key, {ignoreExpiration: false, algorithms: [ALGO]}, (err, result) => { jwt.verify(token, config.getPublicKey(), {ignoreExpiration: false, algorithms: [ALGO]}, (err, result) => {
if (err) { if (err) {
if (err.name === 'TokenExpiredError') { if (err.name === 'TokenExpiredError') {
@ -132,7 +126,7 @@ module.exports = function () {
* @returns {Integer} * @returns {Integer}
*/ */
getUserId: (default_value) => { getUserId: (default_value) => {
let attrs = self.get('attrs'); const attrs = self.get('attrs');
if (attrs && typeof attrs.id !== 'undefined' && attrs.id) { if (attrs && typeof attrs.id !== 'undefined' && attrs.id) {
return attrs.id; return attrs.id;
} }
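The net effect of this change is that keys are read via `config.getPrivateKey()` / `config.getPublicKey()` at call time instead of a cached `config.get('jwt.*')`. A minimal sketch of the RS256 round-trip the handler performs (not the project's exact code; require paths assumed):

```js
const jwt    = require('jsonwebtoken');
const config = require('./lib/config'); // path assumed

// Sign: RSA-SHA256, 1 day expiry by default
const sign = (payload) =>
	new Promise((resolve, reject) => {
		jwt.sign(payload, config.getPrivateKey(), { algorithm: 'RS256', expiresIn: '1d' }, (err, token) =>
			err ? reject(err) : resolve(token));
	});

// Verify: reject expired tokens, restrict to RS256
const verify = (token) =>
	new Promise((resolve, reject) => {
		jwt.verify(token, config.getPublicKey(), { ignoreExpiration: false, algorithms: ['RS256'] }, (err, data) =>
			err ? reject(err) : resolve(data));
	});
```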

View File

@ -10,7 +10,6 @@
"bcrypt": "^5.0.0", "bcrypt": "^5.0.0",
"body-parser": "^1.19.0", "body-parser": "^1.19.0",
"compression": "^1.7.4", "compression": "^1.7.4",
"config": "^3.3.1",
"express": "^4.17.3", "express": "^4.17.3",
"express-fileupload": "^1.1.9", "express-fileupload": "^1.1.9",
"gravatar": "^1.8.0", "gravatar": "^1.8.0",
@ -22,7 +21,6 @@
"moment": "^2.29.4", "moment": "^2.29.4",
"mysql": "^2.18.1", "mysql": "^2.18.1",
"node-rsa": "^1.0.8", "node-rsa": "^1.0.8",
"nodemon": "^2.0.2",
"objection": "3.0.1", "objection": "3.0.1",
"path": "^0.12.7", "path": "^0.12.7",
"signale": "1.4.0", "signale": "1.4.0",
@ -36,8 +34,9 @@
"author": "Jamie Curnow <jc@jc21.com>", "author": "Jamie Curnow <jc@jc21.com>",
"license": "MIT", "license": "MIT",
"devDependencies": { "devDependencies": {
"eslint": "^6.8.0", "eslint": "^8.36.0",
"eslint-plugin-align-assignments": "^1.1.2", "eslint-plugin-align-assignments": "^1.1.2",
"nodemon": "^2.0.2",
"prettier": "^2.0.4" "prettier": "^2.0.4"
} }
} }

View File

@ -1,6 +1,4 @@
const fs = require('fs'); const config = require('./lib/config');
const NodeRSA = require('node-rsa');
const config = require('config');
const logger = require('./logger').setup; const logger = require('./logger').setup;
const certificateModel = require('./models/certificate'); const certificateModel = require('./models/certificate');
const userModel = require('./models/user'); const userModel = require('./models/user');
@ -9,62 +7,6 @@ const utils = require('./lib/utils');
const authModel = require('./models/auth'); const authModel = require('./models/auth');
const settingModel = require('./models/setting'); const settingModel = require('./models/setting');
const dns_plugins = require('./global/certbot-dns-plugins'); const dns_plugins = require('./global/certbot-dns-plugins');
const debug_mode = process.env.NODE_ENV !== 'production' || !!process.env.DEBUG;
/**
* Creates a new JWT RSA Keypair if not already set on the config
*
* @returns {Promise}
*/
const setupJwt = () => {
return new Promise((resolve, reject) => {
// Now go and check if the jwt gpg keys have been created and if not, create them
if (!config.has('jwt') || !config.has('jwt.key') || !config.has('jwt.pub')) {
logger.info('Creating a new JWT key pair...');
// jwt keys are not configured properly
const filename = config.util.getEnv('NODE_CONFIG_DIR') + '/' + (config.util.getEnv('NODE_ENV') || 'default') + '.json';
let config_data = {};
try {
config_data = require(filename);
} catch (err) {
// do nothing
if (debug_mode) {
logger.debug(filename + ' config file could not be required');
}
}
// Now create the keys and save them in the config.
let key = new NodeRSA({ b: 2048 });
key.generateKeyPair();
config_data.jwt = {
key: key.exportKey('private').toString(),
pub: key.exportKey('public').toString(),
};
// Write config
fs.writeFile(filename, JSON.stringify(config_data, null, 2), (err) => {
if (err) {
logger.error('Could not write JWT key pair to config file: ' + filename);
reject(err);
} else {
logger.info('Wrote JWT key pair to config file: ' + filename);
delete require.cache[require.resolve('config')];
resolve();
}
});
} else {
// JWT key pair exists
if (debug_mode) {
logger.debug('JWT Keypair already exists');
}
resolve();
}
});
};
/** /**
* Creates a default admin user if one doesn't already exist in the database * Creates a default admin user if one doesn't already exist in the database
@ -119,8 +61,8 @@ const setupDefaultUser = () => {
.then(() => { .then(() => {
logger.info('Initial admin setup completed'); logger.info('Initial admin setup completed');
}); });
} else if (debug_mode) { } else if (config.debug()) {
logger.debug('Admin user setup not required'); logger.info('Admin user setup not required');
} }
}); });
}; };
@ -151,8 +93,8 @@ const setupDefaultSettings = () => {
logger.info('Default settings added'); logger.info('Default settings added');
}); });
} }
if (debug_mode) { if (config.debug()) {
logger.debug('Default setting setup not required'); logger.info('Default setting setup not required');
} }
}); });
}; };
@ -189,7 +131,7 @@ const setupCertbotPlugins = () => {
}); });
if (plugins.length) { if (plugins.length) {
const install_cmd = '. /opt/certbot/bin/activate && pip install ' + plugins.join(' ') + ' && deactivate'; const install_cmd = '. /opt/certbot/bin/activate && pip install --no-cache-dir ' + plugins.join(' ') + ' && deactivate';
promises.push(utils.exec(install_cmd)); promises.push(utils.exec(install_cmd));
} }
@ -225,8 +167,7 @@ const setupLogrotation = () => {
}; };
module.exports = function () { module.exports = function () {
return setupJwt() return setupDefaultUser()
.then(setupDefaultUser)
.then(setupDefaultSettings) .then(setupDefaultSettings)
.then(setupCertbotPlugins) .then(setupCertbotPlugins)
.then(setupLogrotation); .then(setupLogrotation);
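With JWT key creation moved into `lib/config.js`, the setup chain now starts directly at the default user. A hedged sketch of how an entry point would invoke it (the actual `index.js` is not part of this diff, so the surrounding names are assumptions):

```js
// Illustrative only.
const setup  = require('./setup');
const logger = require('./logger').global;

setup()
	.then(() => {
		logger.info('Backend setup complete');
		// the express app would start listening here
	})
	.catch((err) => {
		logger.error('Setup failed: ' + err.message);
		process.exit(1);
	});
```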

View File

@ -24,6 +24,12 @@ server {
} }
{% endif %} {% endif %}
{%- if value == "444" %}
location / {
return 444;
}
{% endif %}
{%- if value == "redirect" %} {%- if value == "redirect" %}
location / { location / {
return 301 {{ meta.redirect }}; return 301 {{ meta.redirect }};

File diff suppressed because it is too large

View File

@ -10,9 +10,13 @@ ARG BUILD_VERSION
ARG BUILD_COMMIT ARG BUILD_COMMIT
ARG BUILD_DATE ARG BUILD_DATE
# See: https://github.com/just-containers/s6-overlay/blob/master/README.md
ENV SUPPRESS_NO_CONFIG_WARNING=1 \ ENV SUPPRESS_NO_CONFIG_WARNING=1 \
S6_FIX_ATTRS_HIDDEN=1 \
S6_BEHAVIOUR_IF_STAGE2_FAILS=1 \ S6_BEHAVIOUR_IF_STAGE2_FAILS=1 \
S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0 \
S6_FIX_ATTRS_HIDDEN=1 \
S6_KILL_FINISH_MAXTIME=10000 \
S6_VERBOSITY=1 \
NODE_ENV=production \ NODE_ENV=production \
NPM_BUILD_VERSION="${BUILD_VERSION}" \ NPM_BUILD_VERSION="${BUILD_VERSION}" \
NPM_BUILD_COMMIT="${BUILD_COMMIT}" \ NPM_BUILD_COMMIT="${BUILD_COMMIT}" \
@ -35,21 +39,17 @@ COPY frontend/dist /app/frontend
COPY global /app/global COPY global /app/global
WORKDIR /app WORKDIR /app
RUN yarn install RUN yarn install \
&& yarn cache clean
# add late to limit cache-busting by modifications # add late to limit cache-busting by modifications
COPY docker/rootfs / COPY docker/rootfs /
# Remove frontend service not required for prod, dev nginx config as well # Remove frontend service not required for prod, dev nginx config as well
RUN rm -rf /etc/services.d/frontend /etc/nginx/conf.d/dev.conf RUN rm -rf /etc/s6-overlay/s6-rc.d/user/contents.d/frontend /etc/nginx/conf.d/dev.conf \
&& chmod 644 /etc/logrotate.d/nginx-proxy-manager \
# Change permission of logrotate config file && pip uninstall --yes setuptools \
RUN chmod 644 /etc/logrotate.d/nginx-proxy-manager && pip install --no-cache-dir "setuptools==58.0.0"
# fix for pip installs
# https://github.com/NginxProxyManager/nginx-proxy-manager/issues/1769
RUN pip uninstall --yes setuptools \
&& pip install "setuptools==58.0.0"
VOLUME [ "/data", "/etc/letsencrypt" ] VOLUME [ "/data", "/etc/letsencrypt" ]
ENTRYPOINT [ "/init" ] ENTRYPOINT [ "/init" ]

View File

@ -1,9 +1,13 @@
FROM jc21/nginx-full:certbot-node FROM jc21/nginx-full:certbot-node
LABEL maintainer="Jamie Curnow <jc@jc21.com>" LABEL maintainer="Jamie Curnow <jc@jc21.com>"
ENV S6_LOGGING=0 \ # See: https://github.com/just-containers/s6-overlay/blob/master/README.md
SUPPRESS_NO_CONFIG_WARNING=1 \ ENV SUPPRESS_NO_CONFIG_WARNING=1 \
S6_FIX_ATTRS_HIDDEN=1 S6_BEHAVIOUR_IF_STAGE2_FAILS=1 \
S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0 \
S6_FIX_ATTRS_HIDDEN=1 \
S6_KILL_FINISH_MAXTIME=10000 \
S6_VERBOSITY=2
RUN echo "fs.file-max = 65535" > /etc/sysctl.conf \ RUN echo "fs.file-max = 65535" > /etc/sysctl.conf \
&& apt-get update \ && apt-get update \

View File

@ -1,17 +1,18 @@
# WARNING: This is a CI docker-compose file used for building and testing of the entire app, it should not be used for production. # WARNING: This is a CI docker-compose file used for building and testing of the entire app, it should not be used for production.
version: "3" version: '3.8'
services: services:
fullstack-mysql: fullstack-mysql:
image: ${IMAGE}:ci-${BUILD_NUMBER} image: "${IMAGE}:ci-${BUILD_NUMBER}"
environment: environment:
NODE_ENV: "development" DEBUG: 'true'
LE_STAGING: 'true'
FORCE_COLOR: 1 FORCE_COLOR: 1
DB_MYSQL_HOST: "db" DB_MYSQL_HOST: 'db'
DB_MYSQL_PORT: 3306 DB_MYSQL_PORT: '3306'
DB_MYSQL_USER: "npm" DB_MYSQL_USER: 'npm'
DB_MYSQL_PASSWORD: "npm" DB_MYSQL_PASSWORD: 'npm'
DB_MYSQL_NAME: "npm" DB_MYSQL_NAME: 'npm'
volumes: volumes:
- npm_data:/data - npm_data:/data
expose: expose:
@ -26,11 +27,15 @@ services:
timeout: 3s timeout: 3s
fullstack-sqlite: fullstack-sqlite:
image: ${IMAGE}:ci-${BUILD_NUMBER} image: "${IMAGE}:ci-${BUILD_NUMBER}"
environment: environment:
NODE_ENV: "development" DEBUG: 'true'
LE_STAGING: 'true'
FORCE_COLOR: 1 FORCE_COLOR: 1
DB_SQLITE_FILE: "/data/database.sqlite" DB_SQLITE_FILE: '/data/mydb.sqlite'
PUID: 1000
PGID: 1000
DISABLE_IPV6: 'true'
volumes: volumes:
- npm_data:/data - npm_data:/data
expose: expose:
@ -45,26 +50,26 @@ services:
db: db:
image: jc21/mariadb-aria image: jc21/mariadb-aria
environment: environment:
MYSQL_ROOT_PASSWORD: "npm" MYSQL_ROOT_PASSWORD: 'npm'
MYSQL_DATABASE: "npm" MYSQL_DATABASE: 'npm'
MYSQL_USER: "npm" MYSQL_USER: 'npm'
MYSQL_PASSWORD: "npm" MYSQL_PASSWORD: 'npm'
volumes: volumes:
- db_data:/var/lib/mysql - db_data:/var/lib/mysql
cypress-mysql: cypress-mysql:
image: ${IMAGE}-cypress:ci-${BUILD_NUMBER} image: "${IMAGE}-cypress:ci-${BUILD_NUMBER}"
build: build:
context: ../test/ context: ../test/
dockerfile: cypress/Dockerfile dockerfile: cypress/Dockerfile
environment: environment:
CYPRESS_baseUrl: "http://fullstack-mysql:81" CYPRESS_baseUrl: 'http://fullstack-mysql:81'
volumes: volumes:
- cypress-logs:/results - cypress-logs:/results
command: cypress run --browser chrome --config-file=${CYPRESS_CONFIG:-cypress/config/ci.json} command: cypress run --browser chrome --config-file=${CYPRESS_CONFIG:-cypress/config/ci.json}
cypress-sqlite: cypress-sqlite:
image: ${IMAGE}-cypress:ci-${BUILD_NUMBER} image: "${IMAGE}-cypress:ci-${BUILD_NUMBER}"
build: build:
context: ../test/ context: ../test/
dockerfile: cypress/Dockerfile dockerfile: cypress/Dockerfile

View File

@ -1,6 +1,7 @@
# WARNING: This is a DEVELOPMENT docker-compose file, it should not be used for production. # WARNING: This is a DEVELOPMENT docker-compose file, it should not be used for production.
version: "3.5" version: '3.8'
services: services:
npm: npm:
image: nginxproxymanager:dev image: nginxproxymanager:dev
container_name: npm_core container_name: npm_core
@ -14,14 +15,19 @@ services:
networks: networks:
- nginx_proxy_manager - nginx_proxy_manager
environment: environment:
NODE_ENV: "development" PUID: 1000
PGID: 1000
FORCE_COLOR: 1 FORCE_COLOR: 1
DEVELOPMENT: "true" # specifically for dev:
DB_MYSQL_HOST: "db" DEBUG: 'true'
DB_MYSQL_PORT: 3306 DEVELOPMENT: 'true'
DB_MYSQL_USER: "npm" LE_STAGING: 'true'
DB_MYSQL_PASSWORD: "npm" # db:
DB_MYSQL_NAME: "npm" DB_MYSQL_HOST: 'db'
DB_MYSQL_PORT: '3306'
DB_MYSQL_USER: 'npm'
DB_MYSQL_PASSWORD: 'npm'
DB_MYSQL_NAME: 'npm'
# DB_SQLITE_FILE: "/data/database.sqlite" # DB_SQLITE_FILE: "/data/database.sqlite"
# DISABLE_IPV6: "true" # DISABLE_IPV6: "true"
volumes: volumes:
@ -42,10 +48,10 @@ services:
networks: networks:
- nginx_proxy_manager - nginx_proxy_manager
environment: environment:
MYSQL_ROOT_PASSWORD: "npm" MYSQL_ROOT_PASSWORD: 'npm'
MYSQL_DATABASE: "npm" MYSQL_DATABASE: 'npm'
MYSQL_USER: "npm" MYSQL_USER: 'npm'
MYSQL_PASSWORD: "npm" MYSQL_PASSWORD: 'npm'
volumes: volumes:
- db_data:/var/lib/mysql - db_data:/var/lib/mysql

View File

@ -0,0 +1,54 @@
#!/bin/bash
set -e
CYAN='\E[1;36m'
BLUE='\E[1;34m'
YELLOW='\E[1;33m'
RED='\E[1;31m'
RESET='\E[0m'
export CYAN BLUE YELLOW RED RESET
PUID=${PUID:-0}
PGID=${PGID:-0}
NPMUSER=npm
NPMGROUP=npm
NPMHOME=/tmp/npmuserhome
export NPMUSER NPMGROUP NPMHOME
if [[ "$PUID" -ne '0' ]] && [ "$PGID" = '0' ]; then
# set group id to same as user id,
# the user probably forgot to specify the group id and
# it would be ridiculous to intentionally use the root group
# for a non-root user
PGID=$PUID
fi
export PUID PGID
log_info () {
echo -e "${BLUE} ${CYAN}$1${RESET}"
}
log_error () {
echo -e "${RED} $1${RESET}"
}
# The `run` file will only execute 1 line so this helps keep things
# logically separated
log_fatal () {
echo -e "${RED}--------------------------------------${RESET}"
echo -e "${RED}ERROR: $1${RESET}"
echo -e "${RED}--------------------------------------${RESET}"
/run/s6/basedir/bin/halt
exit 1
}
# param $1: group_name
get_group_id () {
if [ "${1:-}" != '' ]; then
getent group "$1" | cut -d: -f3
fi
}

View File

@ -1,46 +0,0 @@
#!/bin/bash
# This command reads the `DISABLE_IPV6` env var and will either enable
# or disable ipv6 in all nginx configs based on this setting.
# Lowercase
DISABLE_IPV6=$(echo "${DISABLE_IPV6:-}" | tr '[:upper:]' '[:lower:]')
CYAN='\E[1;36m'
BLUE='\E[1;34m'
YELLOW='\E[1;33m'
RED='\E[1;31m'
RESET='\E[0m'
FOLDER=$1
if [ "$FOLDER" == "" ]; then
echo -e "${RED} $0 requires a absolute folder path as the first argument!${RESET}"
echo -e "${YELLOW} ie: $0 /data/nginx${RESET}"
exit 1
fi
FILES=$(find "$FOLDER" -type f -name "*.conf")
if [ "$DISABLE_IPV6" == "true" ] || [ "$DISABLE_IPV6" == "on" ] || [ "$DISABLE_IPV6" == "1" ] || [ "$DISABLE_IPV6" == "yes" ]; then
# IPV6 is disabled
echo "Disabling IPV6 in hosts"
echo -e "${BLUE} ${CYAN}Disabling IPV6 in hosts: ${YELLOW}${FOLDER}${RESET}"
# Iterate over configs and run the regex
for FILE in $FILES
do
echo -e " ${BLUE} ${YELLOW}${FILE}${RESET}"
sed -E -i 's/^([^#]*)listen \[::\]/\1#listen [::]/g' "$FILE"
done
else
# IPV6 is enabled
echo -e "${BLUE} ${CYAN}Enabling IPV6 in hosts: ${YELLOW}${FOLDER}${RESET}"
# Iterate over configs and run the regex
for FILE in $FILES
do
echo -e " ${BLUE} ${YELLOW}${FILE}${RESET}"
sed -E -i 's/^(\s*)#listen \[::\]/\1listen [::]/g' "$FILE"
done
fi

View File

@ -32,6 +32,7 @@ server {
server_name localhost; server_name localhost;
access_log /data/logs/fallback_access.log standard; access_log /data/logs/fallback_access.log standard;
error_log /dev/null crit; error_log /dev/null crit;
include conf.d/include/ssl-ciphers.conf;
ssl_reject_handshake on; ssl_reject_handshake on;
return 444; return 444;

View File

@ -1,7 +1,7 @@
# run nginx in foreground # run nginx in foreground
daemon off; daemon off;
pid /run/nginx/nginx.pid;
user root; user npm;
# Set number of worker processes automatically based on number of CPU cores. # Set number of worker processes automatically based on number of CPU cores.
worker_processes auto; worker_processes auto;
@ -57,7 +57,7 @@ http {
} }
# Real IP Determination # Real IP Determination
# Local subnets: # Local subnets:
set_real_ip_from 10.0.0.0/8; set_real_ip_from 10.0.0.0/8;
set_real_ip_from 172.16.0.0/12; # Includes Docker subnet set_real_ip_from 172.16.0.0/12; # Includes Docker subnet

View File

@ -3,17 +3,19 @@
set -e set -e
echo " Starting backend ..." . /bin/common.sh
if [ "$DEVELOPMENT" == "true" ]; then
cd /app || exit 1 cd /app || exit 1
# If yarn install fails: add --verbose --network-concurrency 1
yarn install log_info 'Starting backend ...'
node --max_old_space_size=250 --abort_on_uncaught_exception node_modules/nodemon/bin/nodemon.js
if [ "${DEVELOPMENT:-}" = 'true' ]; then
s6-setuidgid "$PUID:$PGID" yarn install
exec s6-setuidgid "$PUID:$PGID" bash -c "export HOME=$NPMHOME;node --max_old_space_size=250 --abort_on_uncaught_exception node_modules/nodemon/bin/nodemon.js"
else else
cd /app || exit 1
while : while :
do do
node --abort_on_uncaught_exception --max_old_space_size=250 index.js s6-setuidgid "$PUID:$PGID" bash -c "export HOME=$NPMHOME;node --abort_on_uncaught_exception --max_old_space_size=250 index.js"
sleep 1 sleep 1
done done
fi fi

View File

@ -5,11 +5,17 @@ set -e
# This service is DEVELOPMENT only. # This service is DEVELOPMENT only.
if [ "$DEVELOPMENT" == "true" ]; then if [ "$DEVELOPMENT" = 'true' ]; then
. /bin/common.sh
cd /app/frontend || exit 1 cd /app/frontend || exit 1
# If yarn install fails: add --verbose --network-concurrency 1 HOME=$NPMHOME
yarn install export HOME
yarn watch mkdir -p /app/frontend/dist
chown -R "$PUID:$PGID" /app/frontend/dist
log_info 'Starting frontend ...'
s6-setuidgid "$PUID:$PGID" yarn install
exec s6-setuidgid "$PUID:$PGID" yarn watch
else else
exit 0 exit 0
fi fi

View File

@ -3,5 +3,7 @@
set -e set -e
echo " Starting nginx ..." . /bin/common.sh
exec nginx
log_info 'Starting nginx ...'
exec s6-setuidgid "$PUID:$PGID" nginx

View File

@ -0,0 +1,22 @@
#!/command/with-contenv bash
# shellcheck shell=bash
set -e
. /bin/common.sh
if [ "$(id -u)" != "0" ]; then
log_fatal "This docker container must be run as root, do not specify a user.\nYou can specify PUID and PGID env vars to run processes as that user and group after initialization."
fi
if [ "$DEBUG" = "true" ]; then
set -x
fi
. /etc/s6-overlay/s6-rc.d/prepare/10-usergroup.sh
. /etc/s6-overlay/s6-rc.d/prepare/20-paths.sh
. /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh
. /etc/s6-overlay/s6-rc.d/prepare/40-dynamic.sh
. /etc/s6-overlay/s6-rc.d/prepare/50-ipv6.sh
. /etc/s6-overlay/s6-rc.d/prepare/60-secrets.sh
. /etc/s6-overlay/s6-rc.d/prepare/90-banner.sh

View File

@ -0,0 +1,40 @@
#!/command/with-contenv bash
# shellcheck shell=bash
set -e
log_info "Configuring $NPMUSER user ..."
if id -u "$NPMUSER" 2>/dev/null; then
# user already exists
usermod -u "$PUID" "$NPMUSER"
else
# Add user
useradd -o -u "$PUID" -U -d "$NPMHOME" -s /bin/false "$NPMUSER"
fi
log_info "Configuring $NPMGROUP group ..."
if [ "$(get_group_id "$NPMGROUP")" = '' ]; then
# Add group. This will not set the id properly if it's already taken
groupadd -f -g "$PGID" "$NPMGROUP"
else
groupmod -o -g "$PGID" "$NPMGROUP"
fi
# Set the group ID and check it
groupmod -o -g "$PGID" "$NPMGROUP"
if [ "$(get_group_id "$NPMGROUP")" != "$PGID" ]; then
echo "ERROR: Unable to set group id properly"
exit 1
fi
# Set the group against the user and check it
usermod -G "$PGID" "$NPMGROUP"
if [ "$(id -g "$NPMUSER")" != "$PGID" ] ; then
echo "ERROR: Unable to set group against the user properly"
exit 1
fi
# Home for user
mkdir -p "$NPMHOME"
chown -R "$PUID:$PGID" "$NPMHOME"

View File

@ -0,0 +1,41 @@
#!/command/with-contenv bash
# shellcheck shell=bash
set -e
log_info 'Checking paths ...'
# Ensure /data is mounted
if [ ! -d '/data' ]; then
log_fatal '/data is not mounted! Check your docker configuration.'
fi
# Ensure /etc/letsencrypt is mounted
if [ ! -d '/etc/letsencrypt' ]; then
log_fatal '/etc/letsencrypt is not mounted! Check your docker configuration.'
fi
# Create required folders
mkdir -p \
/data/nginx \
/data/custom_ssl \
/data/logs \
/data/access \
/data/nginx/default_host \
/data/nginx/default_www \
/data/nginx/proxy_host \
/data/nginx/redirection_host \
/data/nginx/stream \
/data/nginx/dead_host \
/data/nginx/temp \
/data/letsencrypt-acme-challenge \
/run/nginx \
/tmp/nginx/body \
/var/log/nginx \
/var/lib/nginx/cache/public \
/var/lib/nginx/cache/private \
/var/cache/nginx/proxy_temp
touch /var/log/nginx/error.log || true
chmod 777 /var/log/nginx/error.log || true
chmod -R 777 /var/cache/nginx || true
chmod 644 /etc/logrotate.d/nginx-proxy-manager

View File

@ -0,0 +1,27 @@
#!/command/with-contenv bash
# shellcheck shell=bash
set -e
log_info 'Setting ownership ...'
# root
chown root /tmp/nginx
# npm user and group
chown -R "$PUID:$PGID" /data
chown -R "$PUID:$PGID" /etc/letsencrypt
chown -R "$PUID:$PGID" /run/nginx
chown -R "$PUID:$PGID" /tmp/nginx
chown -R "$PUID:$PGID" /var/cache/nginx
chown -R "$PUID:$PGID" /var/lib/logrotate
chown -R "$PUID:$PGID" /var/lib/nginx
chown -R "$PUID:$PGID" /var/log/nginx
# Don't chown entire /etc/nginx folder as this causes crashes on some systems
chown -R "$PUID:$PGID" /etc/nginx/nginx
chown -R "$PUID:$PGID" /etc/nginx/nginx.conf
chown -R "$PUID:$PGID" /etc/nginx/conf.d
# Prevents errors when installing python certbot plugins when non-root
chown -R "$PUID:$PGID" /opt/certbot

View File

@ -0,0 +1,17 @@
#!/command/with-contenv bash
# shellcheck shell=bash
set -e
log_info 'Dynamic resolvers ...'
DISABLE_IPV6=$(echo "${DISABLE_IPV6:-}" | tr '[:upper:]' '[:lower:]')
# Dynamically generate resolvers file, if resolver is IPv6, enclose in `[]`
# thanks @tfmm
if [ "$DISABLE_IPV6" == "true" ] || [ "$DISABLE_IPV6" == "on" ] || [ "$DISABLE_IPV6" == "1" ] || [ "$DISABLE_IPV6" == "yes" ];
then
echo resolver "$(awk 'BEGIN{ORS=" "} $1=="nameserver" { sub(/%.*$/,"",$2); print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf) ipv6=off valid=10s;" > /etc/nginx/conf.d/include/resolvers.conf
else
echo resolver "$(awk 'BEGIN{ORS=" "} $1=="nameserver" { sub(/%.*$/,"",$2); print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf) valid=10s;" > /etc/nginx/conf.d/include/resolvers.conf
fi

View File

@ -0,0 +1,39 @@
#!/command/with-contenv bash
# shellcheck shell=bash
# This command reads the `DISABLE_IPV6` env var and will either enable
# or disable ipv6 in all nginx configs based on this setting.
set -e
log_info 'IPv6 ...'
# Lowercase
DISABLE_IPV6=$(echo "${DISABLE_IPV6:-}" | tr '[:upper:]' '[:lower:]')
process_folder () {
FILES=$(find "$1" -type f -name "*.conf")
SED_REGEX=
if [ "$DISABLE_IPV6" == "true" ] || [ "$DISABLE_IPV6" == "on" ] || [ "$DISABLE_IPV6" == "1" ] || [ "$DISABLE_IPV6" == "yes" ]; then
# IPV6 is disabled
echo "Disabling IPV6 in hosts in: $1"
SED_REGEX='s/^([^#]*)listen \[::\]/\1#listen [::]/g'
else
# IPV6 is enabled
echo "Enabling IPV6 in hosts in: $1"
SED_REGEX='s/^(\s*)#listen \[::\]/\1listen [::]/g'
fi
for FILE in $FILES
do
echo "- ${FILE}"
echo "$(sed -E "$SED_REGEX" "$FILE")" > $FILE
done
# ensure the files are still owned by the npm user
chown -R "$PUID:$PGID" "$1"
}
process_folder /etc/nginx/conf.d
process_folder /data/nginx

View File

@ -0,0 +1,30 @@
#!/command/with-contenv bash
# shellcheck shell=bash
set -e
# in s6, environmental variables are written as text files for s6 to monitor
# search through full-path filenames for files ending in "__FILE"
log_info 'Docker secrets ...'
for FILENAME in $(find /var/run/s6/container_environment/ | grep "__FILE$"); do
echo "[secret-init] Evaluating ${FILENAME##*/} ..."
# set SECRETFILE to the contents of the full-path textfile
SECRETFILE=$(cat "${FILENAME}")
# if SECRETFILE exists / is not null
if [[ -f "${SECRETFILE}" ]]; then
# strip the appended "__FILE" from environmental variable name ...
STRIPFILE=$(echo "${FILENAME}" | sed "s/__FILE//g")
# echo "[secret-init] Set STRIPFILE to ${STRIPFILE}" # DEBUG - rm for prod!
# ... and set value to contents of secretfile
# since s6 uses text files, this is effectively "export ..."
printf $(cat "${SECRETFILE}") > "${STRIPFILE}"
# echo "[secret-init] Set ${STRIPFILE##*/} to $(cat ${STRIPFILE})" # DEBUG - rm for prod!"
echo "Success: ${STRIPFILE##*/} set from ${FILENAME##*/}"
else
echo "Cannot find secret in ${FILENAME}"
fi
done
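The `VAR__FILE` convention above is resolved inside the container before the app starts; purely for illustration, a rough JavaScript equivalent of the same resolution logic (not part of the image):

```js
// Illustrative only: resolve FOO__FILE env vars into FOO.
const fs = require('fs');

for (const [name, filePath] of Object.entries(process.env)) {
	if (name.endsWith('__FILE') && filePath && fs.existsSync(filePath)) {
		const target = name.slice(0, -'__FILE'.length);
		process.env[target] = fs.readFileSync(filePath, 'utf8').trim();
	}
}
```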

View File

@ -0,0 +1,18 @@
#!/command/with-contenv bash
# shellcheck shell=bash
set -e
set +x
echo "
-------------------------------------
_ _ ____ __ __
| \ | | _ \| \/ |
| \| | |_) | |\/| |
| |\ | __/| | | |
|_| \_|_| |_| |_|
-------------------------------------
User: $NPMUSER PUID:$PUID ID:$(id -u "$NPMUSER") GROUP:$(id -g "$NPMUSER")
Group: $NPMGROUP PGID:$PGID ID:$(get_group_id "$NPMGROUP")
-------------------------------------
"

View File

@ -1,93 +0,0 @@
#!/command/with-contenv bash
# shellcheck shell=bash
set -e
DATA_PATH=/data
# Ensure /data is mounted
if [ ! -d "$DATA_PATH" ]; then
echo '--------------------------------------'
echo "ERROR: $DATA_PATH is not mounted! Check your docker configuration."
echo '--------------------------------------'
/run/s6/basedir/bin/halt
exit 1
fi
echo " Checking folder structure ..."
# Create required folders
mkdir -p /tmp/nginx/body \
/run/nginx \
/var/log/nginx \
/data/nginx \
/data/custom_ssl \
/data/logs \
/data/access \
/data/nginx/default_host \
/data/nginx/default_www \
/data/nginx/proxy_host \
/data/nginx/redirection_host \
/data/nginx/stream \
/data/nginx/dead_host \
/data/nginx/temp \
/var/lib/nginx/cache/public \
/var/lib/nginx/cache/private \
/var/cache/nginx/proxy_temp \
/data/letsencrypt-acme-challenge
touch /var/log/nginx/error.log && chmod 777 /var/log/nginx/error.log && chmod -R 777 /var/cache/nginx
chown root /tmp/nginx
# Dynamically generate resolvers file, if resolver is IPv6, enclose in `[]`
# thanks @tfmm
if [ "$DISABLE_IPV6" == "true" ] || [ "$DISABLE_IPV6" == "on" ] || [ "$DISABLE_IPV6" == "1" ] || [ "$DISABLE_IPV6" == "yes" ];
then
echo resolver "$(awk 'BEGIN{ORS=" "} $1=="nameserver" { sub(/%.*$/,"",$2); print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf) ipv6=off valid=10s;" > /etc/nginx/conf.d/include/resolvers.conf
else
echo resolver "$(awk 'BEGIN{ORS=" "} $1=="nameserver" { sub(/%.*$/,"",$2); print ($2 ~ ":")? "["$2"]": $2}' /etc/resolv.conf) valid=10s;" > /etc/nginx/conf.d/include/resolvers.conf
fi
echo "Changing ownership of /data/logs to $(id -u):$(id -g)"
chown -R "$(id -u):$(id -g)" /data/logs
# Handle IPV6 settings
/bin/handle-ipv6-setting /etc/nginx/conf.d
/bin/handle-ipv6-setting /data/nginx
# ref: https://github.com/linuxserver/docker-baseimage-alpine/blob/master/root/etc/cont-init.d/01-envfile
# in s6, environmental variables are written as text files for s6 to monitor
# search through full-path filenames for files ending in "__FILE"
echo " Secrets-init ..."
for FILENAME in $(find /var/run/s6/container_environment/ | grep "__FILE$"); do
echo "[secret-init] Evaluating ${FILENAME##*/} ..."
# set SECRETFILE to the contents of the full-path textfile
SECRETFILE=$(cat "${FILENAME}")
# if SECRETFILE exists / is not null
if [[ -f "${SECRETFILE}" ]]; then
# strip the appended "__FILE" from environmental variable name ...
STRIPFILE=$(echo "${FILENAME}" | sed "s/__FILE//g")
# echo "[secret-init] Set STRIPFILE to ${STRIPFILE}" # DEBUG - rm for prod!
# ... and set value to contents of secretfile
# since s6 uses text files, this is effectively "export ..."
printf $(cat "${SECRETFILE}") > "${STRIPFILE}"
# echo "[secret-init] Set ${STRIPFILE##*/} to $(cat ${STRIPFILE})" # DEBUG - rm for prod!"
echo "[secret-init] Success! ${STRIPFILE##*/} set from ${FILENAME##*/}"
else
echo "[secret-init] cannot find secret in ${FILENAME}"
fi
done
echo
echo "-------------------------------------
_ _ ____ __ __
| \ | | _ \| \/ |
| \| | |_) | |\/| |
| |\ | __/| | | |
|_| \_|_| |_| |_|
-------------------------------------
"

View File

@ -1,2 +1,2 @@
# shellcheck shell=bash # shellcheck shell=bash
/etc/s6-overlay/s6-rc.d/prepare/script.sh /etc/s6-overlay/s6-rc.d/prepare/00-all.sh

View File

@ -8,8 +8,8 @@ BLUE='\E[1;34m'
GREEN='\E[1;32m' GREEN='\E[1;32m'
RESET='\E[0m' RESET='\E[0m'
S6_OVERLAY_VERSION=3.1.4.1 S6_OVERLAY_VERSION=3.1.5.0
TARGETPLATFORM=${1:unspecified} TARGETPLATFORM=${1:-linux/amd64}
# Determine the correct binary file for the architecture given # Determine the correct binary file for the architecture given
case $TARGETPLATFORM in case $TARGETPLATFORM in

View File

@ -1,5 +1,26 @@
# Advanced Configuration # Advanced Configuration
## Running processes as a user/group
By default, the services (nginx etc) will run as the `root` user inside the docker container.
You can change this behaviour by setting the following environment variables.
Not only will they run the services as this user/group, they will also change the ownership
of the `data` and `letsencrypt` folders at startup.
```yml
services:
app:
image: 'jc21/nginx-proxy-manager:latest'
environment:
PUID: 1000
PGID: 1000
# ...
```
This may have the side effect of a failed container start due to a permission error when
trying to open port 80 on some systems. The only way to fix that is to remove the variables
and run as the default root user.
## Best Practice: Use a Docker network ## Best Practice: Use a Docker network
For those who have a few of their upstream services running in Docker on the same Docker For those who have a few of their upstream services running in Docker on the same Docker
@ -25,7 +46,7 @@ networks:
Let's look at a Portainer example: Let's look at a Portainer example:
```yml ```yml
version: '3' version: '3.8'
services: services:
portainer: portainer:
@ -60,14 +81,14 @@ healthcheck:
timeout: 3s timeout: 3s
``` ```
## Docker Secrets ## Docker File Secrets
This image supports the use of Docker secrets to import from file and keep sensitive usernames or passwords from being passed or preserved in plaintext. This image supports the use of Docker secrets to import from files and keep sensitive usernames or passwords from being passed or preserved in plaintext.
You can set any environment variable from a file by appending `__FILE` (double-underscore FILE) to the environmental variable name. You can set any environment variable from a file by appending `__FILE` (double-underscore FILE) to the environmental variable name.
```yml ```yml
version: "3.7" version: '3.8'
secrets: secrets:
# Secrets are single-line text files where the sole content is the secret # Secrets are single-line text files where the sole content is the secret
@ -96,9 +117,7 @@ services:
# DB_MYSQL_PASSWORD: "npm" # use secret instead # DB_MYSQL_PASSWORD: "npm" # use secret instead
DB_MYSQL_PASSWORD__FILE: /run/secrets/MYSQL_PWD DB_MYSQL_PASSWORD__FILE: /run/secrets/MYSQL_PWD
DB_MYSQL_NAME: "npm" DB_MYSQL_NAME: "npm"
# If you would rather use Sqlite uncomment this # If you would rather use Sqlite, remove all DB_MYSQL_* lines above
# and remove all DB_MYSQL_* lines above
# DB_SQLITE_FILE: "/data/database.sqlite"
# Uncomment this if IPv6 is not enabled on your host # Uncomment this if IPv6 is not enabled on your host
# DISABLE_IPV6: 'true' # DISABLE_IPV6: 'true'
volumes: volumes:
@ -108,6 +127,7 @@ services:
- MYSQL_PWD - MYSQL_PWD
depends_on: depends_on:
- db - db
db: db:
image: jc21/mariadb-aria image: jc21/mariadb-aria
restart: unless-stopped restart: unless-stopped
@ -119,7 +139,7 @@ services:
# MYSQL_PASSWORD: "npm" # use secret instead # MYSQL_PASSWORD: "npm" # use secret instead
MYSQL_PASSWORD__FILE: /run/secrets/MYSQL_PWD MYSQL_PASSWORD__FILE: /run/secrets/MYSQL_PWD
volumes: volumes:
- ./data/mysql:/var/lib/mysql - ./mysql:/var/lib/mysql
secrets: secrets:
- DB_ROOT_PWD - DB_ROOT_PWD
- MYSQL_PWD - MYSQL_PWD

View File

@ -5,7 +5,7 @@
Create a `docker-compose.yml` file: Create a `docker-compose.yml` file:
```yml ```yml
version: "3" version: '3.8'
services: services:
app: app:
image: 'jc21/nginx-proxy-manager:latest' image: 'jc21/nginx-proxy-manager:latest'
@ -20,7 +20,7 @@ services:
# Uncomment the next line if you uncomment anything in the section # Uncomment the next line if you uncomment anything in the section
# environment: # environment:
# Uncomment this if you want to change the location of # Uncomment this if you want to change the location of
# the SQLite DB file within the container # the SQLite DB file within the container
# DB_SQLITE_FILE: "/data/database.sqlite" # DB_SQLITE_FILE: "/data/database.sqlite"
@ -51,7 +51,7 @@ are going to use.
Here is an example of what your `docker-compose.yml` will look like when using a MariaDB container: Here is an example of what your `docker-compose.yml` will look like when using a MariaDB container:
```yml ```yml
version: "3" version: '3.8'
services: services:
app: app:
image: 'jc21/nginx-proxy-manager:latest' image: 'jc21/nginx-proxy-manager:latest'
@ -64,6 +64,7 @@ services:
# Add any other Stream port you want to expose # Add any other Stream port you want to expose
# - '21:21' # FTP # - '21:21' # FTP
environment: environment:
# Mysql/Maria connection parameters:
DB_MYSQL_HOST: "db" DB_MYSQL_HOST: "db"
DB_MYSQL_PORT: 3306 DB_MYSQL_PORT: 3306
DB_MYSQL_USER: "npm" DB_MYSQL_USER: "npm"
@ -86,7 +87,7 @@ services:
MYSQL_USER: 'npm' MYSQL_USER: 'npm'
MYSQL_PASSWORD: 'npm' MYSQL_PASSWORD: 'npm'
volumes: volumes:
- ./data/mysql:/var/lib/mysql - ./mysql:/var/lib/mysql
``` ```
::: warning ::: warning
@ -118,13 +119,12 @@ Please note that the `jc21/mariadb-aria:latest` image might have some problems o
After the app is running for the first time, the following will happen: After the app is running for the first time, the following will happen:
1. The database will initialize with table structures 1. GPG keys will be generated and saved in the data folder
2. GPG keys will be generated and saved in the configuration file 2. The database will initialize with table structures
3. A default admin user will be created 3. A default admin user will be created
This process can take a couple of minutes depending on your machine. This process can take a couple of minutes depending on your machine.
## Default Administrator User ## Default Administrator User
``` ```
@ -134,49 +134,3 @@ Password: changeme
Immediately after logging in with this default user you will be asked to modify your details and change your password. Immediately after logging in with this default user you will be asked to modify your details and change your password.
## Configuration File
::: warning
This section is meant for advanced users
:::
If you would like more control over the database settings you can define a custom config JSON file.
Here's an example for `sqlite` configuration as it is generated from the environment variables:
```json
{
"database": {
"engine": "knex-native",
"knex": {
"client": "sqlite3",
"connection": {
"filename": "/data/database.sqlite"
},
"useNullAsDefault": true
}
}
}
```
You can modify the `knex` object with your custom configuration, but note that not all knex clients might be installed in the image.
Once you've created your configuration file you can mount it to `/app/config/production.json` inside you container using:
```
[...]
services:
app:
image: 'jc21/nginx-proxy-manager:latest'
[...]
volumes:
- ./config.json:/app/config/production.json
[...]
[...]
```
**Note:** After the first run of the application, the config file will be altered to include generated encryption keys unique to your installation.
These keys affect the login and session management of the application. If these keys change for any reason, all users will be logged out.

View File

@ -9,3 +9,4 @@ This project will automatically update any databases or other requirements so yo
any crazy instructions. These steps above will pull the latest updates and recreate the docker any crazy instructions. These steps above will pull the latest updates and recreate the docker
containers. containers.
See the [list of releases](https://github.com/NginxProxyManager/nginx-proxy-manager/releases) for any upgrade steps specific to each release.

View File

@ -8477,9 +8477,11 @@ semver@^6.0.0, semver@^6.1.0, semver@^6.2.0, semver@^6.3.0:
integrity sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw== integrity sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw==
semver@^7.3.2: semver@^7.3.2:
version "7.3.2" version "7.5.2"
resolved "https://registry.yarnpkg.com/semver/-/semver-7.3.2.tgz#604962b052b81ed0786aae84389ffba70ffd3938" resolved "https://registry.yarnpkg.com/semver/-/semver-7.5.2.tgz#5b851e66d1be07c1cdaf37dfc856f543325a2beb"
integrity sha512-OrOb32TeeambH6UrhtShmF7CRDqhL6/5XpPNp2DuRH6+9QLw/orhp72j87v8Qa1ScDkvrrBNpZcDejAirJmfXQ== integrity sha512-SoftuTROv/cRjCze/scjGyiDtcUyxw1rgYQSZY7XTmtR5hX+dm76iDbTH8TkLPHCQmlbQVSSbNZCPM2hb0knnQ==
dependencies:
lru-cache "^6.0.0"
send@0.17.2, send@^0.17.1: send@0.17.2, send@^0.17.1:
version "0.17.2" version "0.17.2"
@ -9498,13 +9500,14 @@ toposort@^2.0.2:
integrity sha1-riF2gXXRVZ1IvvNUILL0li8JwzA= integrity sha1-riF2gXXRVZ1IvvNUILL0li8JwzA=
tough-cookie@^4.0.0: tough-cookie@^4.0.0:
version "4.0.0" version "4.1.3"
resolved "https://registry.yarnpkg.com/tough-cookie/-/tough-cookie-4.0.0.tgz#d822234eeca882f991f0f908824ad2622ddbece4" resolved "https://registry.yarnpkg.com/tough-cookie/-/tough-cookie-4.1.3.tgz#97b9adb0728b42280aa3d814b6b999b2ff0318bf"
integrity sha512-tHdtEpQCMrc1YLrMaqXXcj6AxhYi/xgit6mZu1+EDWUn+qhUf8wMQoFIy9NXuq23zAwtcB0t/MjACGR18pcRbg== integrity sha512-aX/y5pVRkfRnfmuX+OdbSdXvPe6ieKX/G2s7e98f4poJHnqH3281gDPm/metm6E/WRamfx7WC4HUqkWHfQHprw==
dependencies: dependencies:
psl "^1.1.33" psl "^1.1.33"
punycode "^2.1.1" punycode "^2.1.1"
universalify "^0.1.2" universalify "^0.2.0"
url-parse "^1.5.3"
tough-cookie@~2.5.0: tough-cookie@~2.5.0:
version "2.5.0" version "2.5.0"
@ -9690,11 +9693,16 @@ unique-string@^2.0.0:
dependencies: dependencies:
crypto-random-string "^2.0.0" crypto-random-string "^2.0.0"
universalify@^0.1.0, universalify@^0.1.2: universalify@^0.1.0:
version "0.1.2" version "0.1.2"
resolved "https://registry.yarnpkg.com/universalify/-/universalify-0.1.2.tgz#b646f69be3942dabcecc9d6639c80dc105efaa66" resolved "https://registry.yarnpkg.com/universalify/-/universalify-0.1.2.tgz#b646f69be3942dabcecc9d6639c80dc105efaa66"
integrity sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg== integrity sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg==
universalify@^0.2.0:
version "0.2.0"
resolved "https://registry.yarnpkg.com/universalify/-/universalify-0.2.0.tgz#6451760566fa857534745ab1dde952d1b1761be0"
integrity sha512-CJ1QgKmNg3CwvAv/kOFmtnEN05f0D/cn9QntgNOQlQF9dgvVTHj3t+8JPdjqawCHk7V/KA+fbUqzZ9XWhcqPUg==
universalify@^1.0.0: universalify@^1.0.0:
version "1.0.0" version "1.0.0"
resolved "https://registry.yarnpkg.com/universalify/-/universalify-1.0.0.tgz#b61a1da173e8435b2fe3c67d29b9adf8594bd16d" resolved "https://registry.yarnpkg.com/universalify/-/universalify-1.0.0.tgz#b61a1da173e8435b2fe3c67d29b9adf8594bd16d"
@ -9796,10 +9804,10 @@ url-parse-lax@^3.0.0:
dependencies: dependencies:
prepend-http "^2.0.0" prepend-http "^2.0.0"
url-parse@^1.4.3, url-parse@^1.4.7: url-parse@^1.4.3, url-parse@^1.4.7, url-parse@^1.5.3:
version "1.5.9" version "1.5.10"
resolved "https://registry.yarnpkg.com/url-parse/-/url-parse-1.5.9.tgz#05ff26484a0b5e4040ac64dcee4177223d74675e" resolved "https://registry.yarnpkg.com/url-parse/-/url-parse-1.5.10.tgz#9d3c2f736c1d75dd3bd2be507dcc111f1e2ea9c1"
integrity sha512-HpOvhKBvre8wYez+QhHcYiVvVmeF6DVnuSOOPhe3cTum3BnqHhvKaZm8FU5yTiOu/Jut2ZpB2rA/SbBA1JIGlQ== integrity sha512-WypcfiRhfeUP9vvF0j6rw0J3hrWrw6iZv3+22h6iRMJ/8z1Tj6XfLP4DsUix5MhMPnXpiHDoKyoZ/bdCkwBCiQ==
dependencies: dependencies:
querystringify "^2.1.1" querystringify "^2.1.1"
requires-port "^1.0.0" requires-port "^1.0.0"

View File

@ -18,6 +18,10 @@
<input class="custom-control-input" name="value" value="404" type="radio" required <%- value === '404' ? 'checked' : '' %>> <input class="custom-control-input" name="value" value="404" type="radio" required <%- value === '404' ? 'checked' : '' %>>
<div class="custom-control-label"><%- i18n('settings', 'default-site-404') %></div> <div class="custom-control-label"><%- i18n('settings', 'default-site-404') %></div>
</label> </label>
<label class="custom-control custom-radio">
<input class="custom-control-input" name="value" value="444" type="radio" required <%- value === '444' ? 'checked' : '' %>>
<div class="custom-control-label"><%- i18n('settings', 'default-site-444') %></div>
</label>
<label class="custom-control custom-radio"> <label class="custom-control custom-radio">
<input class="custom-control-input" name="value" value="redirect" type="radio" required <%- value === 'redirect' ? 'checked' : '' %>> <input class="custom-control-input" name="value" value="redirect" type="radio" required <%- value === 'redirect' ? 'checked' : '' %>>
<div class="custom-control-label"><%- i18n('settings', 'default-site-redirect') %></div> <div class="custom-control-label"><%- i18n('settings', 'default-site-redirect') %></div>

View File

@ -60,7 +60,7 @@
}, },
"footer": { "footer": {
"fork-me": "Fork me on Github", "fork-me": "Fork me on Github",
"copy": "&copy; 2022 <a href=\"{url}\" target=\"_blank\">jc21.com</a>.", "copy": "&copy; 2023 <a href=\"{url}\" target=\"_blank\">jc21.com</a>.",
"theme": "Theme by <a href=\"{url}\" target=\"_blank\">Tabler</a>" "theme": "Theme by <a href=\"{url}\" target=\"_blank\">Tabler</a>"
}, },
"dashboard": { "dashboard": {
@ -287,6 +287,7 @@
"default-site": "Default Site", "default-site": "Default Site",
"default-site-congratulations": "Congratulations Page", "default-site-congratulations": "Congratulations Page",
"default-site-404": "404 Page", "default-site-404": "404 Page",
"default-site-444": "No Response (444)",
"default-site-html": "Custom Page", "default-site-html": "Custom Page",
"default-site-redirect": "Redirect" "default-site-redirect": "Redirect"
} }

View File

@ -5698,19 +5698,19 @@ semver-diff@^3.1.1:
semver "^6.3.0" semver "^6.3.0"
"semver@2 || 3 || 4 || 5", semver@^5.3.0, semver@^5.4.1, semver@^5.5.0, semver@^5.6.0, semver@^5.7.1: "semver@2 || 3 || 4 || 5", semver@^5.3.0, semver@^5.4.1, semver@^5.5.0, semver@^5.6.0, semver@^5.7.1:
version "5.7.1" version "5.7.2"
resolved "https://registry.yarnpkg.com/semver/-/semver-5.7.1.tgz#a954f931aeba508d307bbf069eff0c01c96116f7" resolved "https://registry.yarnpkg.com/semver/-/semver-5.7.2.tgz#48d55db737c3287cd4835e17fa13feace1c41ef8"
integrity sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ== integrity sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g==
semver@^6.0.0, semver@^6.1.2, semver@^6.2.0, semver@^6.3.0: semver@^6.0.0, semver@^6.1.2, semver@^6.2.0, semver@^6.3.0:
version "6.3.0" version "6.3.1"
resolved "https://registry.yarnpkg.com/semver/-/semver-6.3.0.tgz#ee0a64c8af5e8ceea67687b133761e1becbd1d3d" resolved "https://registry.yarnpkg.com/semver/-/semver-6.3.1.tgz#556d2ef8689146e46dcea4bfdd095f3434dffcb4"
integrity sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw== integrity sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==
semver@^7.3.2, semver@^7.3.4: semver@^7.3.2, semver@^7.3.4:
version "7.3.5" version "7.5.4"
resolved "https://registry.yarnpkg.com/semver/-/semver-7.3.5.tgz#0b621c879348d8998e4b0e4be94b3f12e6018ef7" resolved "https://registry.yarnpkg.com/semver/-/semver-7.5.4.tgz#483986ec4ed38e1c6c48c34894a9182dbff68a6e"
integrity sha512-PoeGJYh8HK4BTO/a9Tf6ZG3veo/A7ZVsYrSA6J8ny9nb3B1VrpkuN+z9OE5wfE5p6H4LchYZsegiQgbJD94ZFQ== integrity sha512-1bCSESV6Pv+i21Hvpxp3Dx+pSD8lIPt8uVjRrxAUt/nbswYc+tK6Y2btiULjd4+fnq15PX+nqQDC7Oft7WkwcA==
dependencies: dependencies:
lru-cache "^6.0.0" lru-cache "^6.0.0"
@ -6742,9 +6742,9 @@ window-size@0.1.0:
integrity sha1-VDjNLqk7IC76Ohn+iIeu58lPnJ0= integrity sha1-VDjNLqk7IC76Ohn+iIeu58lPnJ0=
word-wrap@~1.2.3: word-wrap@~1.2.3:
version "1.2.3" version "1.2.4"
resolved "https://registry.yarnpkg.com/word-wrap/-/word-wrap-1.2.3.tgz#610636f6b1f703891bd34771ccb17fb93b47079c" resolved "https://registry.yarnpkg.com/word-wrap/-/word-wrap-1.2.4.tgz#cb4b50ec9aca570abd1f52f33cd45b6c61739a9f"
integrity sha512-Hz/mrNwitNRh/HUAtM/VT/5VH+ygD6DV7mYKZAtHOrbs8U7lvPS6xf7EJKMF0uW1KJCl0H701g3ZGus+muE5vQ== integrity sha512-2V81OA4ugVo5pRo46hAoD2ivUJx8jXmWXfUkY4KFNw0hEptvN0QfH3K4nHiwzGeKl5rFKedV48QVoqYavy4YpA==
wordwrap@0.0.2: wordwrap@0.0.2:
version "0.0.2" version "0.0.2"

View File

@ -66,6 +66,16 @@ dns_azure_zone2 = example.org:/subscriptions/99800903-fb14-4992-9aff-12eaf274462
full_plugin_name: 'dns-azure', full_plugin_name: 'dns-azure',
}, },
//####################################################// //####################################################//
bunny: {
display_name: 'bunny.net',
package_name: 'certbot-dns-bunny',
version_requirement: '~=0.0.9',
dependencies: '',
credentials: `# Bunny API token used by Certbot (see https://dash.bunny.net/account/settings)
dns_bunny_api_key = xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx`,
full_plugin_name: 'dns-bunny',
},
//####################################################//
cloudflare: { cloudflare: {
display_name: 'Cloudflare', display_name: 'Cloudflare',
package_name: 'certbot-dns-cloudflare', package_name: 'certbot-dns-cloudflare',
@ -521,6 +531,19 @@ aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`,
full_plugin_name: 'dns-route53', full_plugin_name: 'dns-route53',
}, },
//####################################################// //####################################################//
strato: {
display_name: 'Strato',
package_name: 'certbot-dns-strato',
version_requirement: '~=0.1.1',
dependencies: '',
credentials: `dns_strato_username = user
dns_strato_password = pass
# uncomment if domain name contains special characters
# insert domain display name as seen on your account page here
# dns_strato_domain_display_name = my-punicode-url.de`,
full_plugin_name: 'dns-strato',
},
//####################################################//
transip: { transip: {
display_name: 'TransIP', display_name: 'TransIP',
package_name: 'certbot-dns-transip', package_name: 'certbot-dns-transip',

View File

@ -0,0 +1,48 @@
/// <reference types="Cypress" />
describe('Hosts endpoints', () => {
let token;
before(() => {
cy.getToken().then((tok) => {
token = tok;
});
});
it('Should be able to create a http host', function() {
cy.task('backendApiPost', {
token: token,
path: '/api/nginx/proxy-hosts',
data: {
domain_names: ['test.example.com'],
forward_scheme: 'http',
forward_host: '1.1.1.1',
forward_port: 80,
access_list_id: '0',
certificate_id: 0,
meta: {
letsencrypt_agree: false,
dns_challenge: false
},
advanced_config: '',
locations: [],
block_exploits: false,
caching_enabled: false,
allow_websocket_upgrade: false,
http2_support: false,
hsts_enabled: false,
hsts_subdomains: false,
ssl_forced: false
}
}).then((data) => {
cy.validateSwaggerSchema('post', 201, '/nginx/proxy-hosts', data);
expect(data).to.have.property('id');
expect(data.id).to.be.greaterThan(0);
expect(data).to.have.property('enabled');
expect(data.enabled).to.be.greaterThan(0);
expect(data).to.have.property('meta');
expect(typeof data.meta.nginx_online).to.be.equal('undefined');
});
});
});

View File

@ -126,7 +126,7 @@ BackendApi.prototype._putPostJson = function(fn, path, data, returnOnError) {
logger('Response data:', data); logger('Response data:', data);
if (!returnOnError && data instanceof Error) { if (!returnOnError && data instanceof Error) {
reject(data); reject(data);
} else if (!returnOnError && response.statusCode != 200) { } else if (!returnOnError && (response.statusCode < 200 || response.statusCode >= 300)) {
if (typeof data === 'object' && typeof data.error === 'object' && typeof data.error.message !== 'undefined') { if (typeof data === 'object' && typeof data.error === 'object' && typeof data.error.message !== 'undefined') {
reject(new Error(data.error.code + ': ' + data.error.message)); reject(new Error(data.error.code + ': ' + data.error.message));
} else { } else {

View File

@ -2061,15 +2061,10 @@ sax@0.5.x:
resolved "https://registry.yarnpkg.com/sax/-/sax-0.5.8.tgz#d472db228eb331c2506b0e8c15524adb939d12c1" resolved "https://registry.yarnpkg.com/sax/-/sax-0.5.8.tgz#d472db228eb331c2506b0e8c15524adb939d12c1"
integrity sha1-1HLbIo6zMcJQaw6MFVJK25OdEsE= integrity sha1-1HLbIo6zMcJQaw6MFVJK25OdEsE=
semver@^7.2.1: semver@^7.2.1, semver@^7.3.2:
version "7.3.2" version "7.5.4"
resolved "https://registry.yarnpkg.com/semver/-/semver-7.3.2.tgz#604962b052b81ed0786aae84389ffba70ffd3938" resolved "https://registry.yarnpkg.com/semver/-/semver-7.5.4.tgz#483986ec4ed38e1c6c48c34894a9182dbff68a6e"
integrity sha512-OrOb32TeeambH6UrhtShmF7CRDqhL6/5XpPNp2DuRH6+9QLw/orhp72j87v8Qa1ScDkvrrBNpZcDejAirJmfXQ== integrity sha512-1bCSESV6Pv+i21Hvpxp3Dx+pSD8lIPt8uVjRrxAUt/nbswYc+tK6Y2btiULjd4+fnq15PX+nqQDC7Oft7WkwcA==
semver@^7.3.2:
version "7.3.8"
resolved "https://registry.yarnpkg.com/semver/-/semver-7.3.8.tgz#07a78feafb3f7b32347d725e33de7e2a2df67798"
integrity sha512-NB1ctGL5rlHrPJtFDVIVzTyQylMLu9N9VICA6HSFJo8MCGVTMW6gfpicwKmmK/dAjTOrqu5l63JJOpDSrAis3A==
dependencies: dependencies:
lru-cache "^6.0.0" lru-cache "^6.0.0"
@ -2450,9 +2445,9 @@ wide-align@1.1.3:
string-width "^1.0.2 || 2" string-width "^1.0.2 || 2"
word-wrap@^1.2.3, word-wrap@~1.2.3: word-wrap@^1.2.3, word-wrap@~1.2.3:
version "1.2.3" version "1.2.4"
resolved "https://registry.yarnpkg.com/word-wrap/-/word-wrap-1.2.3.tgz#610636f6b1f703891bd34771ccb17fb93b47079c" resolved "https://registry.yarnpkg.com/word-wrap/-/word-wrap-1.2.4.tgz#cb4b50ec9aca570abd1f52f33cd45b6c61739a9f"
integrity sha512-Hz/mrNwitNRh/HUAtM/VT/5VH+ygD6DV7mYKZAtHOrbs8U7lvPS6xf7EJKMF0uW1KJCl0H701g3ZGus+muE5vQ== integrity sha512-2V81OA4ugVo5pRo46hAoD2ivUJx8jXmWXfUkY4KFNw0hEptvN0QfH3K4nHiwzGeKl5rFKedV48QVoqYavy4YpA==
workerpool@6.0.0: workerpool@6.0.0:
version "6.0.0" version "6.0.0"