Checklist
I have read the intro post: About the Installation Issues category
I have read the tutorials, help and searched for similar issues
I provide relevant information about my server (component names and versions, etc.)
I provide a copy of my logs and healthcheck
I describe the steps I have taken to troubleshoot the problem
I describe the steps on how to reproduce the issue
I’m trying to set up Passbolt on my Kubernetes cluster using the Helm chart. I keep getting a 500 error on the healthcheck endpoint used by the liveness and readiness probes, which causes the pods to restart indefinitely.
We are using the load balancer for HTTPS termination, so I’ve followed the steps found in other posts to set Passbolt up as plain HTTP. We are also using an external PostgreSQL database for storage, so that is set in the values as well.
Here is my (anonymized) values.yml:
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

## Dependencies configuration parameters
## Redis dependency parameters
# -- Install redis as a depending chart
redisDependencyEnabled: true
# -- Install mariadb as a depending chart
mariadbDependencyEnabled: false # We don't want mariadb to be installed as a dependency
# -- Install postgresql as a depending chart
postgresqlDependencyEnabled: false # We don't want postgresql to be installed as a dependency

# Configure redis dependency chart
redis:
  auth:
    # -- Enable redis authentication
    enabled: true
    password: PASSWORD
  sentinel:
    # -- Enable redis sentinel
    enabled: false

## Passbolt configuration
## Passbolt container and sidecar parameters
app:
  # -- Configure passbolt deployment init container that waits for database
  databaseInitContainer:
    # -- Toggle passbolt deployment init container that waits for database
    enabled: true
  # Allowed options: mariadb, mysql or postgresql
  database:
    kind: postgresql
  cache:
    # Use CACHE_CAKE_DEFAULT_* variables to configure the connection to the redis
    # instance on the passboltEnv configuration section
    redis:
      # -- By enabling redis the chart will mount a configuration file on /etc/passbolt/app.php
      # that instructs passbolt to store sessions on redis and to use it as a general cache.
      enabled: true
      sentinelProxy:
        # -- Inject a haproxy sidecar container configured as a proxy to redis sentinel
        # Make sure that CACHE_CAKE_DEFAULT_SERVER is set to '127.0.0.1' to use the proxy
        enabled: false
  tls:
    # -- If autogenerate is true, the chart will generate a secret with a certificate for the APP_FULL_BASE_URL hostname
    # -- If autogenerate is false, existingSecret should be filled with an existing tls kind secret name
    # @ignored
    autogenerate: true
    # existingSecret: ""

## Passbolt environment parameters
# -- Configure passbolt gpg directory
gpgPath: /etc/passbolt/gpg
# -- Name of the existing secret for the GPG server keypair. The secret must contain the `serverkey.asc` and `serverkey_private.asc` keys.
gpgExistingSecret: "passbolt-secrets"
# -- Configure passbolt jwt directory
jwtPath: /etc/passbolt/jwt
# -- Name of the existing secret for the JWT server keypair. The secret must contain the `jwt.key` and `jwt.pem` keys.
jwtExistingSecret: "passbolt-secrets"

passboltEnv:
  plain:
    # -- Configure passbolt privacy url
    PASSBOLT_LEGAL_PRIVACYPOLICYURL: https://www.passbolt.com/privacy
    # -- Configure passbolt fullBaseUrl
    APP_FULL_BASE_URL: https://passbolt.local
    # -- Configure passbolt to force ssl
    PASSBOLT_SSL_FORCE: false
    # -- Toggle passbolt public registration
    PASSBOLT_REGISTRATION_PUBLIC: false
    # -- Configure passbolt cake cache server
    CACHE_CAKE_DEFAULT_SERVER: passbolt-redis-master
    # -- Configure database host
    DATASOURCES_DEFAULT_HOST: DATABASE_URL
    # -- Configure database port
    DATASOURCES_DEFAULT_PORT: 5432
    # -- Configure passbolt default email service port
    EMAIL_TRANSPORT_DEFAULT_PORT: 587
    # -- Toggle passbolt debug mode
    DEBUG: true
    # -- Configure email used on gpg key. This is used when automatically creating a new gpg server key and when automatically calculating the fingerprint.
    PASSBOLT_KEY_EMAIL: passbolt@domain.dk
    # -- Toggle passbolt selenium mode
    PASSBOLT_SELENIUM_ACTIVE: false
    # -- Configure passbolt default email from
    EMAIL_DEFAULT_FROM: passbolt@domain.dk
    # -- Configure passbolt default email from name
    EMAIL_DEFAULT_FROM_NAME: Passbolt
    # -- Configure passbolt default email host
    EMAIL_TRANSPORT_DEFAULT_HOST: URL
    # -- Configure passbolt default email timeout
    EMAIL_TRANSPORT_DEFAULT_TIMEOUT: 30
    # -- Toggle passbolt email tls
    EMAIL_TRANSPORT_DEFAULT_TLS: true
    # -- Configure passbolt jwt private key path
    PASSBOLT_JWT_SERVER_KEY: /var/www/passbolt/config/jwt/jwt.key
    # -- Configure passbolt jwt public key path
    PASSBOLT_JWT_SERVER_PEM: /var/www/passbolt/config/jwt/jwt.pem
    # -- Toggle passbolt jwt authentication
    PASSBOLT_PLUGINS_JWT_AUTHENTICATION_ENABLED: true
    # -- Download command for kubectl
    KUBECTL_DOWNLOAD_CMD: curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  secret:
    # -- Configure passbolt default database username
    DATASOURCES_DEFAULT_USERNAME: passbolt
    # -- Configure passbolt default database
    DATASOURCES_DEFAULT_DATABASE: passbolt
    CACHE_CAKE_DEFAULT_PASSWORD: PASSWORD

# -- Environment variables to add to the passbolt pods
extraEnv: []
# -- Environment variables from secrets or configmaps to add to the passbolt pods
extraEnvFrom:
  - secretRef:
      name: passbolt-secrets-env # <- This provides the DATASOURCES_DEFAULT_PASSWORD

## Passbolt deployment parameters
service:
  # -- Configure passbolt service type
  type: ClusterIP
  # -- Annotations to add to the service
  annotations: {}
  # -- Configure the service ports
  ports:
    http:
      # -- Configure passbolt HTTP service port
      port: 80
      # -- Configure passbolt HTTP service targetPort
      targetPort: 80
      # -- Configure passbolt HTTP service port name
      name: http

# -- Configure passbolt container livenessProbe
livenessProbe:
  # @ignore
  httpGet:
    port: https
    scheme: HTTPS
    path: /healthcheck/status.json
    httpHeaders:
      - name: Host
        value: passbolt.local
  initialDelaySeconds: 20
  periodSeconds: 10

# -- Configure passbolt container readinessProbe
readinessProbe:
  # @ignore
  httpGet:
    port: https
    scheme: HTTPS
    httpHeaders:
      - name: Host
        value: passbolt.local
    path: /healthcheck/status.json
  initialDelaySeconds: 5
  periodSeconds: 10
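For reference, since the service only exposes HTTP, a plain-HTTP variant of the probes would presumably look like the snippet below (a sketch, assuming the chart passes probe overrides through to the pod spec verbatim; the `http` port name matches the service ports above):

```yaml
livenessProbe:
  httpGet:
    port: http
    scheme: HTTP
    path: /healthcheck/status.json
    httpHeaders:
      - name: Host
        value: passbolt.local
  initialDelaySeconds: 20
  periodSeconds: 10
```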
I’ve tried changing passbolt.local to the actual domain, changing the TLS settings, and so on, but nothing changes the fact that I get a 500 whenever the probes actually reach the application.
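To get past the generic 500, I assume the underlying error can be inspected from inside the pod, roughly like this (a sketch; the pod name is a placeholder taken from the events below, and the cake path is the default in the official passbolt image):

```shell
# Run Passbolt's built-in healthcheck as the web user (pod name is a placeholder)
kubectl exec -it passbolt-depl-srv-f8558f896-8srjz -- \
  su -s /bin/bash -c "/usr/share/php/passbolt/bin/cake passbolt healthcheck" www-data

# Fetch the failing endpoint directly to see the JSON error body
kubectl exec -it passbolt-depl-srv-f8558f896-8srjz -- \
  curl -sk -H "Host: passbolt.local" https://127.0.0.1/healthcheck/status.json
```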
Here is the output of the passbolt-depl-srv logs:
Defaulted container "passbolt" out of: passbolt, passbolt-depl-srv-init (init)
gpg: keybox '/var/lib/passbolt/.gnupg/pubring.kbx' created
gpg: /var/lib/passbolt/.gnupg/trustdb.gpg: trustdb created
gpg: key 40F7C61D338F5494: public key "Hiper Passbolt <passbolt@hiper.dk>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: key 40F7C61D338F5494: "Hiper Passbolt <passbolt@hiper.dk>" not changed
gpg: key 40F7C61D338F5494: secret key imported
gpg: Total number processed: 1
gpg: unchanged: 1
gpg: secret keys read: 1
gpg: secret keys imported: 1
[... key-generation progress output trimmed ...]
-----
Installing passbolt
____ __ ____
/ __ \____ _____ ____/ /_ ____ / / /_
/ /_/ / __ `/ ___/ ___/ __ \/ __ \/ / __/
/ ____/ /_/ (__ |__ ) /_/ / /_/ / / /
/_/ \__,_/____/____/_.___/\____/_/\__/
Open source password manager for teams
-------------------------------------------------------------------------------
openssl_pkey_get_details(): Argument #1 ($key) must be of type OpenSSLAsymmetricKey, bool given
Running migrations
____ __ ____
/ __ \____ _____ ____/ /_ ____ / / /_
/ /_/ / __ `/ ___/ ___/ __ \/ __ \/ / __/
/ ____/ /_/ (__ |__ ) /_/ / /_/ / / /
/_/ \__,_/____/____/_.___/\____/_/\__/
Open source password manager for teams
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Running migration scripts.
-------------------------------------------------------------------------------
using migration paths
- /etc/passbolt/Migrations
using seed paths
- /etc/passbolt/Seeds
using environment default
using adapter pgsql
using database passbolt
ordering by creation time
All Done. Took 0.0094s
Clearing cake caches
Clearing _cake_model_
Cleared _cake_model_ cache
Clearing _cake_core_
Cleared _cake_core_ cache
Enjoy! ☮
/usr/lib/python3/dist-packages/supervisor/options.py:474: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
self.warnings.warn(
2024-05-03 10:08:55,848 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2024-05-03 10:08:55,848 INFO Included extra file "/etc/supervisor/conf.d/cron.conf" during parsing
2024-05-03 10:08:55,848 INFO Included extra file "/etc/supervisor/conf.d/nginx.conf" during parsing
2024-05-03 10:08:55,848 INFO Included extra file "/etc/supervisor/conf.d/php.conf" during parsing
2024-05-03 10:08:55,850 INFO RPC interface 'supervisor' initialized
2024-05-03 10:08:55,850 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-05-03 10:08:55,851 INFO supervisord started with pid 1
2024-05-03 10:08:56,853 INFO spawned: 'php-fpm' with pid 89
2024-05-03 10:08:56,854 INFO spawned: 'nginx' with pid 90
[03-May-2024 10:08:56] NOTICE: fpm is running, pid 89
[03-May-2024 10:08:56] NOTICE: ready to handle connections
[03-May-2024 10:08:56] NOTICE: systemd monitor interval set to 10000ms
2024-05-03 10:08:57,907 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-05-03 10:08:57,907 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
10.0.3.193 - - [03/May/2024:10:09:07 +0000] "GET /healthcheck/status.json HTTP/2.0" 500 397 "-" "kube-probe/1.28+"
10.0.3.193 - - [03/May/2024:10:09:17 +0000] "GET /healthcheck/status.json HTTP/2.0" 500 397 "-" "kube-probe/1.28+"
10.0.3.193 - - [03/May/2024:10:09:17 +0000] "GET /healthcheck/status.json HTTP/2.0" 500 397 "-" "kube-probe/1.28+"
10.0.3.193 - - [03/May/2024:10:09:27 +0000] "GET /healthcheck/status.json HTTP/2.0" 500 397 "-" "kube-probe/1.28+"
10.0.3.193 - - [03/May/2024:10:09:27 +0000] "GET /healthcheck/status.json HTTP/2.0" 500 397 "-" "kube-probe/1.28+"
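The `openssl_pkey_get_details(): Argument #1 ($key) must be of type OpenSSLAsymmetricKey, bool given` line above suggests a keypair failed to load during install. I assume the mounted JWT keys can be sanity-checked like this (a sketch; the pod name is a placeholder, and note that the values above mount the secret at `/etc/passbolt/jwt` while the env vars point at `/var/www/passbolt/config/jwt`):

```shell
# Check that the private key parses and the public key is valid PEM
# (pod name is a placeholder; paths are the chart's jwtPath defaults)
POD=passbolt-depl-srv-f8558f896-8srjz
kubectl exec -it "$POD" -- openssl rsa -in /etc/passbolt/jwt/jwt.key -noout -check
kubectl exec -it "$POD" -- openssl rsa -pubin -in /etc/passbolt/jwt/jwt.pem -noout
```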
And the events from kubectl describe pod:
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/passbolt-depl-srv-f8558f896-8srjz to ip-10-0-3-193.eu-west-1.compute.internal
  Normal   Pulled     10m                    kubelet            Container image "postgres" already present on machine
  Normal   Created    10m                    kubelet            Created container passbolt-depl-srv-init
  Normal   Started    10m                    kubelet            Started container passbolt-depl-srv-init
  Normal   Pulled     10m (x2 over 10m)      kubelet            Container image "passbolt/passbolt:4.6.2-1-ce" already present on machine
  Normal   Created    10m (x2 over 10m)      kubelet            Created container passbolt
  Normal   Started    10m (x2 over 10m)      kubelet            Started container passbolt
  Normal   Killing    10m                    kubelet            Container passbolt failed liveness probe, will be restarted
  Warning  Unhealthy  9m29s (x5 over 10m)    kubelet            Liveness probe failed: HTTP probe failed with statuscode: 500
  Warning  BackOff    5m58s (x5 over 6m39s)  kubelet            Back-off restarting failed container passbolt in pod passbolt-depl-srv-f8558f896-8srjz_default(d9b54d91-7e4a-4db9-959f-795f0c4c92fb)
  Warning  Unhealthy  49s (x40 over 10m)     kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500
Note: when I provide a wrong password for the database, the healthcheck seems to return 200 OK.
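Since the healthcheck result seems tied to the database credentials, the external PostgreSQL connection can presumably be checked in isolation with a throwaway client pod (a sketch; `DATABASE_URL` and `PASSWORD` are the anonymized placeholders from the values file above, and the image tag is an assumption):

```shell
# One-off psql client pod; removed automatically after it exits
kubectl run pg-check --rm -it --restart=Never --image=postgres:15 \
  --env=PGPASSWORD=PASSWORD -- \
  psql "host=DATABASE_URL port=5432 user=passbolt dbname=passbolt" -c "SELECT version();"
```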