Checklist
[x] I have read intro post: https://community.passbolt.com/t/about-the-installation-issues-category/12
[x] I have read the tutorials, help and searched for similar issues
[x] I provide relevant information about my server (component names and versions, etc.)
[x] I provide a copy of my logs and healthcheck
[x] I describe the steps I have taken to troubleshoot the problem
[x] I describe the steps on how to reproduce the issue
Hi Passbolt Community,
I’m trying to set up Passbolt Community Edition (CE) using the official Docker image and Docker Compose for a Proof of Concept (POC), but I’m consistently running into critical GPG configuration failures during the initial container startup, even after trying multiple versions and complete data resets. I’m hoping someone might have insights into what could be wrong in my specific environment.
My Environment:
Host OS: Ubuntu 24.04 LTS (Fresh Install)
Kernel: 6.8.0-57-generic (output from uname -a inside container was Linux ... 6.8.0-57-generic #59-Ubuntu SMP PREEMPT_DYNAMIC Sat Mar 15 17:40:59 UTC 2025 x86_64 GNU/Linux)
Docker Version: 28.0.4 (installed from Docker’s official APT repository)
Docker Compose: plugin version 2.34.0 (installed via the docker-compose-plugin package)
Passbolt Image Tags Tried: :latest-ce (which corresponds to 4.12.1-1-ce) and :4.11.0-1-ce
Setup Method:
Standard Docker Compose setup based on official documentation examples (a sketch of the layout follows this list).
Using MariaDB (mariadb:10.11) as the database backend in a separate container.
Using an .env file for secrets (APP_KEY, database credentials, email settings, APP_FULL_BASE_URL).
APP_KEY was generated using openssl rand -base64 32 after attempts with cake passbolt generate_app_key failed.
APP_FULL_BASE_URL is set correctly to https://passbolt.coqui.cloud.
Database credentials and email settings were populated (using test password values for now, user/db names confirmed).
Using named Docker volumes for persistence (passbolt_db_data, passbolt_passbolt_gpg, passbolt_passbolt_avatars).
Also running Caddy (caddy:latest) as a reverse proxy in a separate Docker Compose stack, connected via an external Docker network (proxy-net). Both caddy and passbolt-app containers are confirmed attached to proxy-net using docker network inspect.
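For reference, here is a minimal sketch of the Compose layout described above. It is not my exact file: email/SMTP settings and the avatars volume are omitted for brevity, and DB_NAME/DB_USER/DB_PASSWORD are placeholder names for values kept in .env, while APP_FULL_BASE_URL and the DATASOURCES_* variable names follow the official Docker image documentation.

```yaml
# docker-compose.yml (sketch, not the exact file in use)
services:
  passbolt-db:
    image: mariadb:10.11
    environment:
      MYSQL_ROOT_PASSWORD: "${DB_ROOT_PASSWORD}"   # placeholders resolved from .env
      MYSQL_DATABASE: "${DB_NAME}"
      MYSQL_USER: "${DB_USER}"
      MYSQL_PASSWORD: "${DB_PASSWORD}"
    volumes:
      - db_data:/var/lib/mysql

  passbolt-app:
    image: passbolt/passbolt:latest-ce
    depends_on:
      - passbolt-db
    environment:
      APP_FULL_BASE_URL: "https://passbolt.coqui.cloud"
      DATASOURCES_DEFAULT_HOST: passbolt-db
      DATASOURCES_DEFAULT_USERNAME: "${DB_USER}"
      DATASOURCES_DEFAULT_PASSWORD: "${DB_PASSWORD}"
      DATASOURCES_DEFAULT_DATABASE: "${DB_NAME}"
      # email/SMTP and other PASSBOLT_* settings omitted here
    volumes:
      - passbolt_gpg:/etc/passbolt/gpg   # created on the host with the project prefix, e.g. passbolt_passbolt_gpg
      # (the real file also mounts an avatars volume, as in the official example)
    networks:
      - default
      - proxy-net                        # shared with the Caddy stack

volumes:
  db_data:
  passbolt_gpg:

networks:
  proxy-net:
    external: true
```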
The Problem:
The docker compose up -d command successfully starts the passbolt-db and passbolt-app containers without apparent errors in the main docker compose logs. Database migrations seem to complete.
However, running the healthcheck command (docker compose exec --user www-data passbolt-app /usr/share/php/passbolt/bin/cake passbolt healthcheck --verbose) consistently shows multiple critical failures in the GPG Configuration section. This happens regardless of the image version used (latest-ce or 4.11.0-1-ce) and persists even after a complete reset: docker compose down, removing all related Docker volumes (passbolt_db_data, passbolt_passbolt_gpg, passbolt_passbolt_avatars), then docker compose pull and docker compose up -d.
Key Healthcheck Failures (Consistently Appear):
GPG Configuration
[FAIL] The server OpenPGP key is not set.
[FAIL] The server key fingerprint doesn't match...
[FAIL] The server public key defined ... is not in the keyring
[FAIL] The server key does not have a valid email id.
[FAIL] The private key cannot be used to decrypt a message
[FAIL] The private key cannot be used to decrypt and verify a message
[FAIL] The public key cannot be used to verify a signature.
(The JWT permission failure also appeared initially, seemed resolved after one reset, then reappeared; it may be secondary.) I am also getting [FAIL] Could not reach the /healthcheck/status with the url specified in App.fullBaseUrl, which is potentially related to the GPG state.
Troubleshooting Done:
Confirmed .env variables are present and APP_KEY / APP_FULL_BASE_URL are set.
Confirmed docker-compose.yml syntax is correct.
Performed multiple complete resets including removing all named volumes.
Confirmed ufw rules are correct (though not relevant to internal GPG init).
Confirmed cake commands are run as www-data using docker compose exec --user www-data ....
Internal diagnostic tools (ss, netstat, ps) are unavailable in the CE image, limiting internal inspection.
Question:
Has anyone else encountered persistent GPG initialization failures like this specifically on Ubuntu 24.04 / Kernel 6.8 / Docker 28 (perhaps on Contabo)? Is this a known issue, or are there any other potential causes or diagnostic steps I could try, short of complex manual GPG key generation and configuration?
Time Synchronization Check (Based on Community Suggestion):
Checked host server time using timedatectl status.
Confirmation: As of Mon Apr 7 11:04:06 CEST 2025 (which is Mon Apr 7 09:04:06 UTC), the output confirmed System clock synchronized: yes and NTP service: active. Client machine time was also verified as accurate. Time sync does not appear to be the issue.
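The check itself was nothing more than the following; I also compared the host clock against the container clock to rule out skew between the two:

```bash
# on the Ubuntu 24.04 host
timedatectl status        # System clock synchronized: yes / NTP service: active
date -u

# inside the container, for comparison
docker compose exec passbolt-app date -u
```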
Host Server Reboot: Rebooted the Ubuntu 24.04 host server.
Post-Reboot Attempt: Started the Passbolt stack (docker compose up -d). Container logs showed GPG entropy gathering (+++...) followed by apparently successful service startup (nginx, php-fpm running).
Health Check #1 (Post-Reboot): Ran sudo docker exec -it passbolt-app su -s /bin/bash -c "/usr/share/php/passbolt/bin/cake passbolt healthcheck" www-data.
Result: Still showed 7 GPG-related [FAIL] messages (Server key not set, fingerprint mismatch, key not in keyring, invalid email ID, crypto ops failed). Also showed [FAIL] Passbolt is not configured to force SSL use despite PASSBOLT_SSL_FORCE=true being set in .env.
Attempt Clean Auto-Generation (command sketch after these steps):
Stopped stack (docker compose down).
Removed GPG named volume (sudo docker volume rm passbolt_gpg).
Verified no specific GPG key variables (PASSBOLT_GPG_SERVER_KEY_*) were set in .env.
Restarted stack with force recreation (sudo docker compose up -d --force-recreate).
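The command sketch for the steps above:

```bash
docker compose down
sudo docker volume rm passbolt_gpg              # remove the GPG named volume
grep PASSBOLT_GPG_SERVER_KEY .env || echo "no server key variables set"
sudo docker compose up -d --force-recreate
```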
Health Check #2 (After Volume Clear): Ran the health check again.
Result: Identical to Health Check #1. Same 7 GPG failures and SSL Force failure, indicating auto-generation did not result in a valid setup and ENV VARS were still not applied correctly.
Attempt Manual Key Configuration (command and mount sketch after these steps):
Generated a new GPG key pair inside the running container as www-data using gpg --batch --gen-key. Verified key creation with gpg --list-keys --fingerprint.
Exported the public (.asc) and private (_private.asc) keys to the host.
Updated .env file: removed any _PUBLIC or _PRIVATE key variables, added PASSBOLT_GPG_SERVER_KEY_FINGERPRINT= with the correct fingerprint.
Updated docker-compose.yml: commented out the passbolt_gpg volume mount, added mounts for the exported .asc files to /etc/passbolt/gpg/serverkey.asc:ro and /etc/passbolt/gpg/serverkey_private.asc:ro.
Restarted stack with force recreation (sudo docker compose down && sudo docker compose up -d --force-recreate).
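The sketch referenced above. The key parameters (Name-Real, Name-Email, key type) are placeholders of my own and not taken from the official documentation, so treat this as an example of the pattern rather than the recommended values:

```bash
# 1. Generate a key non-interactively inside the container, as www-data
docker compose exec --user www-data passbolt-app bash -c '
gpg --homedir /var/lib/passbolt/.gnupg --batch --gen-key <<EOF
%no-protection
Key-Type: default
Subkey-Type: default
Name-Real: Passbolt Server Key
Name-Email: admin@passbolt.coqui.cloud
Expire-Date: 0
%commit
EOF
'

# 2. Confirm the key exists and note its fingerprint
docker compose exec --user www-data passbolt-app \
  gpg --homedir /var/lib/passbolt/.gnupg --list-keys --fingerprint

# 3. Export both keys to the host (-T avoids TTY line-ending mangling of the redirected output)
docker compose exec -T --user www-data passbolt-app \
  gpg --homedir /var/lib/passbolt/.gnupg --armor --export > passbolt_server_key.asc
docker compose exec -T --user www-data passbolt-app \
  gpg --homedir /var/lib/passbolt/.gnupg --armor --export-secret-keys > passbolt_server_key_private.asc
```

```yaml
# docker-compose.yml changes for the passbolt-app service (sketch)
    volumes:
      # - passbolt_gpg:/etc/passbolt/gpg   # commented out
      - ./passbolt_server_key.asc:/etc/passbolt/gpg/serverkey.asc:ro
      - ./passbolt_server_key_private.asc:/etc/passbolt/gpg/serverkey_private.asc:ro
```

In .env, PASSBOLT_GPG_SERVER_KEY_FINGERPRINT was set to the fingerprint reported in step 2 (entered as 40 hex characters without spaces, which is my understanding of the expected format).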
Health Check #3 (After Manual Key Config): Ran the health check again.
Result: Still identical to Health Checks #1 and #2. The 7 GPG failures persist, and the PASSBOLT_SSL_FORCE=true setting is still ignored.
Current Situation: Unable to get a healthy Passbolt CE latest-ce (v4.12.1) instance on Ubuntu 24.04 / Kernel 6.8. Both automatic GPG key generation and manual key configuration methods result in the same GPG failures reported by the internal health check. Environment variables set in the .env file (like PASSBOLT_SSL_FORCE or the GPG fingerprint) do not seem to be correctly recognized or applied by the application startup/health check process in this specific environment. Suspecting an incompatibility or bug.
Can you try going inside the container and running gpg --home /var/lib/passbolt/.gnupg --list-keys to see whether your server key is listed?
Also, can you confirm the permissions on the keyring?
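Something along these lines would show it (container name taken from your setup above):

```bash
docker exec -it passbolt-app ls -al /var/lib/passbolt/.gnupg
# the .gnupg directory should be owned by www-data and not readable by others (drwx------)
```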
Thank you for the troubleshooting suggestions regarding the Passbolt CE (latest-ce, v4.12.1) installation issues on Ubuntu 24.04 LTS (Kernel 6.8.0-57, Docker 28.0.4).
Following up on your advice:
Time Sync Check: We checked the host server time. timedatectl status confirmed System clock synchronized: yes and NTP service: active (using Chrony) around Mon Apr 7 09:04 UTC / 11:04 CEST. Client time was also confirmed accurate. Time synchronization does not appear to be the cause of the GPG issues.
Server Reboot: We rebooted the host server as a general measure. After the reboot, the Passbolt container started, and I was able to access the web UI. It prompted me that the server key had changed (new fingerprint 0FAB A105 96E7 8649 90DA 6953 A4E8 E71E 5431 8890). After accepting this new auto-generated key, I could successfully log in and the application appears functionally working.
Internal Health Check Issues: Despite the successful login via the UI, we ran the internal health check (sudo docker exec -it passbolt-app su -s /bin/bash -c "/usr/share/php/passbolt/bin/cake passbolt healthcheck" www-data) again to verify the status based on your other suggestions. It still reported multiple [FAIL] errors, primarily:
7 GPG configuration errors (key not set, fingerprint mismatch, key not in keyring, invalid email, crypto ops failed).
1 JWT Authentication error (/etc/passbolt/jwt/ directory should not be writable).
1 Application configuration error (Passbolt is not configured to force SSL use, despite PASSBOLT_SSL_FORCE=true being set in .env).
It also continued to show warnings indicating other .env variables (PASSBOLT_SECURITY_SMTP_SETTINGS_ENDPOINTS_DISABLED=true, PASSBOLT_EMAIL_VALIDATE_MX=true) were not recognized.
Further Checks based on your suggestions:
Check Keyring: Running gpg --list-keys inside the container after the stack was restarted showed No public key, indicating that keys generated inside the container do not persist across container recreation.
Check Mounted Key Perms: We attempted the manual key generation + mounting method again. We verified host key file permissions (chmod 644 ...) and checked permissions inside the container (ls -al /etc/passbolt/gpg/), which seemed readable by www-data. Restarting and running the health check still yielded the same 7 GPG failures.
Try ENV VAR Keys: We also tried providing the full key content via PASSBOLT_GPG_SERVER_KEY_PUBLIC and _PRIVATE environment variables (removing volume mounts). The health check still showed the same 7 GPG failures and ignored other ENV VARS.
Fix JWT Perms: We successfully fixed the JWT permission failure ([FAIL] changed to [PASS]) by running the chown/chmod commands suggested by the health check output via docker exec (see the sketch below).
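The fix was along these lines; the paths and ownership follow the healthcheck's own suggestion as I recall it, so double-check against the exact commands in your output:

```bash
docker exec -it passbolt-app bash -c '
  chown -R root:www-data /etc/passbolt/jwt/
  chmod 750 /etc/passbolt/jwt/
  chmod 640 /etc/passbolt/jwt/jwt.key
  chmod 640 /etc/passbolt/jwt/jwt.pem
'
```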
Current Situation:
The Passbolt application is functionally working - I can access the UI and log in successfully after accepting the key that was automatically generated following the host reboot and a clean volume start (docker volume rm passbolt_gpg before docker compose up -d --force-recreate).
However, the internal passbolt healthcheck script continues to report critical GPG failures and ignores several environment variables (PASSBOLT_SSL_FORCE, PASSBOLT_GPG_SERVER_KEY_FINGERPRINT, SMTP/MX settings).
Conclusion: It seems the passbolt healthcheck script itself has issues accurately detecting the GPG status or applying environment variables on this specific platform (Ubuntu 24.04 / Kernel 6.8 / Docker 28). While the application works now after getting a clean key generation post-reboot, the health check results are misleading.
Thanks again for your suggestions regarding the Passbolt CE (latest-ce, v4.12.1) setup on Ubuntu 24.04 / Kernel 6.8 / Docker 28.0.4. Here’s what we tried based on your advice:
Checked Keyring (gpg --list-keys): We found that keys generated manually inside the container using docker exec ... gpg --gen-key were not persistent. After stopping and restarting the container (docker compose down/up), running gpg --list-keys again showed “No public key”. So, checking the keyring state this way wasn’t reliable across restarts.
Checked .gnupg Permissions: We ran ls -ail /var/lib/passbolt/ inside the container and confirmed the .gnupg directory permissions looked correct (drwx------ owned by www-data).
Checked Mounted Key File Permissions: We had previously attempted manual configuration by mounting exported key files (.asc) into /etc/passbolt/gpg/ and setting the PASSBOLT_GPG_SERVER_KEY_FINGERPRINT in .env. Based on your suggestion, we double-checked permissions:
Inside the container (ls -al /etc/passbolt/gpg/), the mounted files showed read access for www-data.
On the host (ls -al ~/docker-apps/passbolt/passbolt_server_key*), we ensured they were world-readable (sudo chmod 644 ...).
Result: After confirming permissions and restarting passbolt-app, the passbolt healthcheck still showed the same 7 GPG failures and ignored environment variables (like PASSBOLT_SSL_FORCE).
Tried ENV VARs for Full Key Content: We then tried your other suggestion:
Removed the key file mounts and the passbolt_gpg volume mount from docker-compose.yml.
Removed the _FINGERPRINT variable from .env.
Added PASSBOLT_GPG_SERVER_KEY_PUBLIC and PASSBOLT_GPG_SERVER_KEY_PRIVATE to .env, pasting the full multi-line key content within quotes.
Restarted the stack (down, up --force-recreate).
Result: The passbolt healthcheck still showed the identical 7 GPG failures and ignored the environment variables. (A sketch of this variant is included below.)
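For reference, the .env variant looked roughly like this (key material truncated). Two things I could not confirm: whether Compose's .env parser preserves multi-line quoted values, and whether these two variables expect the armored key content at all rather than a path to a key file, so this is a sketch of what was attempted, not a known-good configuration:

```
PASSBOLT_GPG_SERVER_KEY_PUBLIC="-----BEGIN PGP PUBLIC KEY BLOCK-----
...
-----END PGP PUBLIC KEY BLOCK-----"
PASSBOLT_GPG_SERVER_KEY_PRIVATE="-----BEGIN PGP PRIVATE KEY BLOCK-----
...
-----END PGP PRIVATE KEY BLOCK-----"
```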
Attempted Healthcheck with Exported Fingerprint: We tried the command export PASSBOLT_GPG_SERVER_KEY_FINGERPRINT=$(...); /usr/share/php/passbolt/bin/cake passbolt healthcheck. This failed immediately with gpg: error reading key: No public key because the key generated inside wasn’t persistent after the required container restart.
Current Working State & Discrepancy:
Interestingly, the only way we achieved a state where I can actually log in and use the Passbolt web UI was by doing the following after the initial host server reboot:
Reverted docker-compose.yml to use the standard named volume (- passbolt_gpg:/etc/passbolt/gpg).
Reverted .env to have no PASSBOLT_GPG_SERVER_KEY_* variables defined.
Ensured the passbolt_gpg volume was deleted (sudo docker volume rm passbolt_gpg).
Started the stack (sudo docker compose up -d --force-recreate).
The UI then prompted that the “server key has changed” (new fingerprint 0FAB...8890).
After accepting this new key, login works and the application appears fully functional.
Conclusion for Max: Even though the application is working functionally now (post-reboot, clean volume auto-generation), the internal passbolt healthcheck script still reports 7 critical GPG failures and 1 SSL Force failure (ignoring ENV VAR). We did fix the JWT permission failure. This strongly suggests the health check script itself is giving false negatives or cannot correctly read the configuration/key state in this specific environment (Ubuntu 24.04/Kernel 6.8/Docker 28), while the core application is able to function.