For Hecate to work, her nginx.conf needs to point at the Eos Web Apps she sits in front of. Here we will provide example server blocks to deploy in front of the Eos Web Apps. Take each of the blocks you need, put them together in the correct order, and Hecate will serve them from her nginx.conf file. Alongside the server blocks, that file also needs the other main parts: the main configuration context, the stream block, and the http block, as appropriate.
When deploying the server blocks, you need to change these placeholder values:
${HOSTNAME}
${BASE_DOMAIN}
${backendIP}
<sub> (where applicable)
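The substitution can be done by hand, or scripted. Here is a minimal sketch, assuming the template is saved as nginx.conf.template and that you fill the ${...} placeholders with envsubst; the hostname, domain, and IP below are made-up example values, and any <sub> placeholders still need to be edited by hand:
# Example values only - replace with your own
export HOSTNAME="hecate"
export BASE_DOMAIN="example.com"
export backendIP="10.0.0.10"
# Substitute only these three variables, leaving the rest of the file untouched
envsubst '${HOSTNAME} ${BASE_DOMAIN} ${backendIP}' < nginx.conf.template > nginx.conf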
In Hecate's nginx.conf, there are four main parts we need to be aware of. Like previously, we will start with a high-level overview of the overall structure and get more granular from there.
Here is a super basic overview of how the Hecate nginx.conf.template file is structured. In text only, it looks something like this:
###
# NGINX main configuration context
###
worker_processes auto;
events { # <- Open
worker_connections 1024;
} # <- Close
###
# STREAM BLOCK
###
stream { # <- Open
} # <- Close
...
###
# HTTP BLOCK
###
http { # <- Open HTTP
...
###
# SERVER BLOCKS
###
server { # <- Open server
} # <- Close server
} # <- Close HTTP
Notice that each of the open curly brackets { has a corresponding close curly bracket }.
Also note that the server block/s are nested (contained) within the HTTP block.
We are here
Don't change this. It needs to be here because we are placing this file as the nginx.conf. Don't put anything above it unless it is # commented out.
It should always look like this:
###
# NGINX main configuration context
###
user nginx;
worker_processes auto;
events {
worker_connections 1024;
}
...
We are here
Because NGINX is a web server, it mostly only has to deal with HTTP, because the web is built on top of HTTP/S. However, it sometimes needs to handle traffic which isn't HTTP/S.
The stream block lets NGINX handle this traffic. Commonly, this means TCP or UDP traffic, the 'lower level' networking protocols which HTTP/S is built on top of, or SMTP.
While the stream block is not always needed, because the vast majority of traffic on the web is HTTP/S, we include it here because when it is needed, it is essential.
Example Web Apps we need it for here include Wazuh and mailcow.
For Wazuh, a stream block looks similar to:
...
###
# STREAM BLOCK
###
stream {
# -- 1515 --
upstream wazuh_manager_1515 {
server ${backendIP}:1515;
}
server {
listen 1515;
proxy_pass wazuh_manager_1515;
}
# -- 1514 --
upstream wazuh_manager_1514 {
server ${backendIP}:1514;
}
server {
listen 1514;
proxy_pass wazuh_manager_1514;
}
# -- 55000 --
upstream wazuh_manager_55000 {
server ${backendIP}:55000;
}
server {
listen 55000;
proxy_pass wazuh_manager_55000;
}
...
...
#--------------------------------------------------
# MAILCOW STREAMS
#--------------------------------------------------
# --- SMTP (port 25) ---
upstream mailcow_smtp {
# The Postfix container typically listens on port 25 internally
server ${backendIP}:25;
}
server {
listen 25;
proxy_pass mailcow_smtp;
}
# --- Submission (port 587) ---
upstream mailcow_submission {
server ${backendIP}:587;
}
server {
listen 587;
proxy_pass mailcow_submission;
}
# --- SMTPS (port 465) ---
upstream mailcow_smtps {
server ${backendIP}:465;
}
server {
listen 465;
proxy_pass mailcow_smtps;
}
# --- POP3 (port 110) ---
upstream mailcow_pop3 {
server ${backendIP}:110;
}
server {
listen 110;
proxy_pass mailcow_pop3;
}
# --- POP3S (port 995) ---
upstream mailcow_pop3s {
server ${backendIP}:995;
}
server {
listen 995;
proxy_pass mailcow_pop3s;
}
# --- IMAP (port 143) ---
upstream mailcow_imap {
server ${backendIP}:143;
}
server {
listen 143;
proxy_pass mailcow_imap;
}
# --- IMAPS (port 993) ---
upstream mailcow_imaps {
server ${backendIP}:993;
}
server {
listen 993;
proxy_pass mailcow_imaps;
}
...
We are here
###
# HTTP BLOCK
###
http {
# Hide NGINX version
server_tokens off;
include mime.types;
default_type application/octet-stream;
# Enable debug logging
error_log /var/log/nginx/error.log warn; #change warn to debug if installing a development server
# enable access logging
access_log /var/log/nginx/access.log;
...
We are here
Here is a minimal example/template server block which can be adapted to your needs.
For clarity, we have included the other example blocks surrounding this server block, to give a better understanding of what a full template nginx.conf file looks like.
We only show the full structure for this first example. For simplicity, the rest of the templates provide only the server block.
###
# NGINX main configuration context
###
...
###
# STREAM BLOCK
###
...
###
# HTTP BLOCK
###
...
###
# SERVER BLOCKS
###
...
#--------------------------------------------------
# WEB PAGE: ${BASE_DOMAIN}
#--------------------------------------------------
# Redirect HTTP → HTTPS
server {
listen 80;
listen [::]:80;
server_name ${BASE_DOMAIN}; # or _ for any host
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
http2 on;
server_name ${BASE_DOMAIN}; # or _ for any host
# SSL certificates
ssl_certificate /etc/nginx/certs/fullchain.pem;
ssl_certificate_key /etc/nginx/certs/privkey.pem;
# Basic SSL config (adapt cipher suites, etc. to your needs)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5:!SHA1:!kRSA;
ssl_prefer_server_ciphers on;
# Increase buffer sizes (optional, for large responses)
client_max_body_size 50M; # Adjust as needed
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
# Enable error interception and define custom error page handling
proxy_intercept_errors on;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 425 426 428 429 431 451
500 501 502 503 504 505 506 507 508 510 511 /custom_error.html;
# Serve the custom error page internally
location = /custom_error.html {
root /usr/share/nginx/html;
internal;
}
location / {
proxy_pass http://${backendIP}:8081;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 0;
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
# Handle WebSocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
...
...
#--------------------------------------------------
# WIKI_JS WEB UI: wiki.${BASE_DOMAIN}
#--------------------------------------------------
# Redirect HTTP traffic to HTTPS for Wiki.js
server {
listen 80;
listen [::]:80;
server_name wiki.${BASE_DOMAIN};
return 301 https://$host$request_uri; # Redirect to HTTPS
}
# HTTPS for Wiki.js
server {
listen 443 ssl;
listen [::]:443 ssl;
http2 on;
server_name wiki.${BASE_DOMAIN};
# SSL certificates
ssl_certificate /etc/nginx/certs/wiki.fullchain.pem;
ssl_certificate_key /etc/nginx/certs/wiki.privkey.pem;
# Increase buffer sizes (optional, for large responses)
client_max_body_size 50M; # Adjust as needed
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
# Enable error interception and define custom error page handling
proxy_intercept_errors on;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 425 426 428 429 431 451
500 501 502 503 504 505 506 507 508 510 511 /custom_error.html;
# Serve the custom error page internally
location = /custom_error.html {
root /usr/share/nginx/html;
internal;
}
# Proxy settings for Wiki.js
location / {
proxy_pass http://${backendIP}:11080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Handle WebSocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
...
Remember, Wazuh requires a stream block, shown above, as well as the following server blocks in the http context:
#--------------------------------------------------
# WAZUH WEB UI: delphi.${BASE_DOMAIN}
#--------------------------------------------------
server {
listen 80;
listen [::]:80;
server_name delphi.${BASE_DOMAIN};
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
http2 on;
server_name delphi.${BASE_DOMAIN};
# SSL certificates
ssl_certificate /etc/nginx/certs/wazuh.fullchain.pem;
ssl_certificate_key /etc/nginx/certs/wazuh.privkey.pem;
# Basic SSL config (adapt cipher suites, etc. to your needs)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5:!SHA1:!kRSA;
ssl_prefer_server_ciphers on;
# Increase buffer sizes (optional, for large responses)
client_max_body_size 50M; # Adjust as needed
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
# Enable error interception and define custom error page handling
proxy_intercept_errors on;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 425 426 428 429 431 451
500 501 502 503 504 505 506 507 508 510 511 /custom_error.html;
# Serve the custom error page internally
location = /custom_error.html {
root /usr/share/nginx/html;
internal;
}
# Proxy settings
location / {
proxy_pass https://${backendIP}:5601/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 0;
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
}
}
...
#--------------------------------------------------
# MAILCOW PAGE: mail.${BASE_DOMAIN}
#--------------------------------------------------
# --- Server block for HTTP (80) ---
server {
listen 80;
listen [::]:80;
server_name mail.${BASE_DOMAIN} autodiscover.* autoconfig.*;
return 301 https://$host$request_uri; # Redirect everything else to HTTPS
}
# --- Server block for HTTPS (443) ---
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name mail.${BASE_DOMAIN} autodiscover.* autoconfig.*;
http2 on;
# SSL certs if you're terminating TLS here
ssl_certificate /etc/nginx/certs/mail.fullchain.pem;
ssl_certificate_key /etc/nginx/certs/mail.privkey.pem;
# Basic SSL config (adapt cipher suites, etc. to your needs)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5:!SHA1:!kRSA;
ssl_prefer_server_ciphers on;
# Enable error interception and define custom error page handling
proxy_intercept_errors on;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 425 426 428 429 431 451
500 501 502 503 504 505 506 507 508 510 511 /custom_error.html;
# Serve the custom error page internally
location = /custom_error.html {
root /usr/share/nginx/html;
internal;
}
# Forward all HTTPS traffic to Mailcow’s internal Nginx (on port 12443)
location / {
proxy_pass https://${backendIP}:12443;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 0;
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
}
}
#--------------------------------------------------
# ERPNext WEB UI: erp.${BASE_DOMAIN}
#--------------------------------------------------
# Redirect HTTP traffic to HTTPS for ERPNext
server {
listen 80;
listen [::]:80;
server_name erp.${BASE_DOMAIN};
return 301 https://$host$request_uri; # Redirect to HTTPS
}
# HTTPS for ERPNext
server {
listen 443 ssl;
listen [::]:443 ssl;
http2 on;
server_name erp.${BASE_DOMAIN};
# SSL certificates
ssl_certificate /etc/nginx/certs/erp.fullchain.pem;
ssl_certificate_key /etc/nginx/certs/erp.privkey.pem;
# Basic SSL config (adapt cipher suites, etc. to your needs)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5:!SHA1:!kRSA;
ssl_prefer_server_ciphers on;
# Increase buffer sizes (optional, for large responses)
client_max_body_size 50M; # Adjust as needed
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
# Enable error interception and define custom error page handling
proxy_intercept_errors on;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 425 426 428 429 431 451
500 501 502 503 504 505 506 507 508 510 511 /custom_error.html;
# Serve the custom error page internally
location = /custom_error.html {
root /usr/share/nginx/html;
internal;
}
# Proxy settings for ERPNext
location / {
proxy_pass http://${backendIP}:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 0;
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
# Handle WebSocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
...
#--------------------------------------------------
# JENKINS WEB UI: jenkins.${BASE_DOMAIN}
#--------------------------------------------------
# Redirect HTTP traffic to HTTPS for Jenkins
server {
listen 80;
listen [::]:80;
server_name jenkins.${BASE_DOMAIN};
return 301 https://$host$request_uri; # Redirect to HTTPS
}
# HTTPS for Jenkins
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name jenkins.${BASE_DOMAIN};
http2 on;
# SSL certificates
ssl_certificate /etc/nginx/certs/jenkins.fullchain.pem;
ssl_certificate_key /etc/nginx/certs/jenkins.privkey.pem;
# Basic SSL config (adapt cipher suites, etc. to your needs)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5:!SHA1:!kRSA;
ssl_prefer_server_ciphers on;
# Increase buffer sizes (optional, for large responses)
client_max_body_size 50M; # Adjust as needed
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
# Enable error interception and define custom error page handling
proxy_intercept_errors on;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 425 426 428 429 431 451
500 501 502 503 504 505 506 507 508 510 511 /custom_error.html;
# Serve the custom error page internally
location = /custom_error.html {
root /usr/share/nginx/html;
internal;
}
# Proxy settings for Jenkins
location / {
proxy_pass http://${backendIP}:9080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 0;
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
# Handle WebSocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
#--------------------------------------------------
# MATTERMOST PAGE: collaborate.${BASE_DOMAIN}
#--------------------------------------------------
# --- Server block for HTTP (80) ---
server {
listen 80;
listen [::]:80;
server_name collaborate.${BASE_DOMAIN};
return 301 https://$host$request_uri; # Redirect everything else to HTTPS
}
# --- Server block for HTTPS (443) ---
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name collaborate.${BASE_DOMAIN};
http2 on;
# SSL certs if you're terminating TLS here
ssl_certificate /etc/nginx/certs/collaborate.fullchain.pem;
ssl_certificate_key /etc/nginx/certs/collaborate.privkey.pem;
# Recommended SSL/TLS config
ssl_session_timeout 1d;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_early_data on;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:50m;
# HSTS: ensure browsers use HTTPS only (six months)
add_header Strict-Transport-Security max-age=15768000;
# Enable OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
# Enable error interception and define custom error page handling
proxy_intercept_errors on;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 425 426 428 429 431 451
500 501 502 503 504 505 506 507 508 510 511 /custom_error.html;
# Serve the custom error page internally
location = /custom_error.html {
root /usr/share/nginx/html;
internal;
}
# Standard (HTTP) traffic
location / {
proxy_pass http://${backendIP}:8065;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 100M;
proxy_set_header Connection "";
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_buffers 256 16k;
proxy_buffer_size 16k;
proxy_read_timeout 600s;
proxy_http_version 1.1;
}
# WebSocket (real-time) connections for Mattermost
location ~ /api/v[0-9]+/(users/)?websocket$ {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Basic proxy settings
client_max_body_size 50M;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_buffers 256 16k;
proxy_buffer_size 16k;
# Timeouts
client_body_timeout 60s;
send_timeout 300s;
lingering_timeout 5s;
proxy_connect_timeout 90s;
proxy_send_timeout 300s;
proxy_read_timeout 90s;
proxy_http_version 1.1;
proxy_pass http://${backendIP}:8065;
}
}
#--------------------------------------------------
# ANALYTICS PAGE: analytics.${BASE_DOMAIN}
#--------------------------------------------------
# Redirect HTTP → HTTPS
server {
listen 80;
listen [::]:80;
server_name analytics.${BASE_DOMAIN}; # or _ for any host
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
http2 on;
server_name analytics.${BASE_DOMAIN}; # or _ for any host
# SSL certificates
ssl_certificate /etc/nginx/certs/analytics.fullchain.pem;
ssl_certificate_key /etc/nginx/certs/analytics.privkey.pem;
# Basic SSL config (adapt cipher suites, etc. to your needs)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5:!SHA1:!kRSA;
ssl_prefer_server_ciphers on;
# Increase buffer sizes (optional, for large responses)
client_max_body_size 50M; # Adjust as needed
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
# Enable error interception and define custom error page handling
proxy_intercept_errors on;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 425 426 428 429 431 451
500 501 502 503 504 505 506 507 508 510 511 /custom_error.html;
# Serve the custom error page internally
location = /custom_error.html {
root /usr/share/nginx/html;
internal;
}
location / {
proxy_pass http://${backendIP}:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 0;
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
}
}
#--------------------------------------------------
# NEXTCLOUD PAGE: cloud.${BASE_DOMAIN}
#--------------------------------------------------
server {
listen 80;
listen [::]:80; # comment to disable IPv6
if ($scheme = "http") {
return 301 https://$host$request_uri;
}
if ($http_x_forwarded_proto = "http") {
return 301 https://$host$request_uri;
}
listen 443 ssl;
listen [::]:443 ssl;
http2 on;
proxy_buffering off;
proxy_request_buffering off;
client_max_body_size 0;
client_body_buffer_size 512k;
proxy_read_timeout 86400s;
server_name cloud.${BASE_DOMAIN};
# Enable error interception and define custom error page handling
proxy_intercept_errors on;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 425 426 428 429 431 451
500 501 502 503 504 505 506 507 508 510 511 /custom_error.html;
# Serve the custom error page internally
location = /custom_error.html {
root /usr/share/nginx/html;
internal;
}
location / {
proxy_pass http://${backendIP}:11000$request_uri; # Adjust to match APACHE_PORT and APACHE_IP_BINDING. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md#adapting-the-sample-web-server-configurations-below
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Scheme $scheme;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header Early-Data $ssl_early_data;
# Websocket
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
# SSL certificates
ssl_certificate /etc/nginx/certs/cloud.fullchain.pem;
ssl_certificate_key /etc/nginx/certs/cloud.privkey.pem;
ssl_dhparam /etc/dhparam; # curl -L https://ssl-config.mozilla.org/ffdhe2048.txt -o /etc/dhparam
ssl_early_data on;
ssl_session_timeout 1d;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ecdh_curve x25519:x448:secp521r1:secp384r1:secp256r1;
ssl_prefer_server_ciphers on;
ssl_conf_command Options PrioritizeChaCha;
ssl_ciphers TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256;
}
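Note that the Nextcloud location block above uses the $connection_upgrade variable. If it is not defined anywhere in your http block, NGINX will typically refuse to start with an "unknown variable" error. The Nextcloud reverse-proxy documentation referenced in the proxy_pass comment defines it with a map directive; a minimal sketch (place it in the http context alongside the server blocks, not inside one) looks like:
# Maps the client's Upgrade header to the Connection header sent upstream:
# WebSocket requests get "Connection: upgrade", everything else gets "close".
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}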
#--------------------------------------------------
# GRAFANA: observe.${BASE_DOMAIN}
#--------------------------------------------------
# Redirect HTTP → HTTPS
server {
listen 80;
listen [::]:80;
server_name observe.${BASE_DOMAIN}; # or _ for any host
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
http2 on;
server_name observe.${BASE_DOMAIN}; # or _ for any host
# SSL certificates
ssl_certificate /etc/nginx/certs/observe.fullchain.pem;
ssl_certificate_key /etc/nginx/certs/observe.privkey.pem;
# Basic SSL config (adapt cipher suites, etc. to your needs)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5:!SHA1:!kRSA;
ssl_prefer_server_ciphers on;
# Increase buffer sizes (optional, for large responses)
client_max_body_size 50M; # Adjust as needed
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
# Enable error interception and define custom error page handling
proxy_intercept_errors on;
error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 421 422 423 424 425 426 428 429 431 451
500 501 502 503 504 505 506 507 508 510 511 /custom_error.html;
# Serve the custom error page internally
location = /custom_error.html {
root /usr/share/nginx/html;
internal;
}
location / {
proxy_pass http://${backendIP}:8069;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 0;
proxy_buffer_size 128k;
proxy_buffers 64 512k;
proxy_busy_buffers_size 512k;
# Handle WebSocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
...
With certificates in place and nginx.conf updated, start your container:
cd $HOME/hecate
docker compose down
docker compose up -d
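Optionally, you can ask NGINX to validate the configuration once the container is up. A minimal sketch, assuming the proxy service in your docker-compose.yml is called nginx (adjust the service name if yours differs):
# Run the built-in configuration test inside the running container
docker compose exec nginx nginx -t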
You should now test your endpoints. Using a private browsing window, navigate to each of the subdomains you configured above (for example wiki.${BASE_DOMAIN}, delphi.${BASE_DOMAIN}, mail.${BASE_DOMAIN}, and so on).
Each of the web applications listed will be accessible via the relevant subdomain, so make sure these records are set up with your DNS provider.
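As a quick smoke test from the command line (a sketch only; the subdomain list below is an assumption, so trim it to the server blocks you actually deployed):
# Expect 200/30x status codes; -k skips certificate verification for self-signed certs
BASE_DOMAIN="example.com"
for sub in "" wiki. delphi. mail. erp. jenkins. collaborate. analytics. cloud. observe.; do
curl -sko /dev/null -w "%{http_code} https://${sub}${BASE_DOMAIN}\n" "https://${sub}${BASE_DOMAIN}"
done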