[Docker] Error when restarting server

Good morning, and welcome to this impossible love story between a human and a Docker rootless instance. Please take a seat, remember to switch off your phone to enjoy the full experience, and don’t hesitate to comment with any suggestion!

Once upon a time, last August, you may have read a topic about a human dating a Passbolt PRO Docker rootless instance to get to know it better.

The spoiler for that story is that it didn’t end with “and they lived happily ever after”, because the human was not able to configure his Apache server as a reverse proxy. So they kept their distance from each other until a few days ago, when they got in touch again! And that is what this topic is about. Will they get a happy ending? Let’s read on!

Disclaimer: sorry for the themed introduction, I wanted to do something different to make you laugh a bit :slight_smile:

It was a cold morning in Spain when the human ran a docker compose up -d and met his rootless instance again. The instance responded to his dating request without any errors, except for one clue about the ending:

docker logs passbolt_db
2024-02-12 08:54:48+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:11.2.2+maria~ubu2204 started.
2024-02-12 08:54:48+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2024-02-12 08:54:48+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:11.2.2+maria~ubu2204 started.
2024-02-12 08:54:49+00:00 [Note] [Entrypoint]: Initializing database files


PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
To do so, start the server, then issue the following command:

'/usr/bin/mariadb-secure-installation'

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the MariaDB Knowledgebase at https://mariadb.com/kb

Please report any problems at https://mariadb.org/jira

The latest information about MariaDB is available at https://mariadb.org/.

Consider joining MariaDB's strong and vibrant community:
https://mariadb.org/get-involved/

2024-02-12 08:54:50+00:00 [Note] [Entrypoint]: Database files initialized
2024-02-12 08:54:50+00:00 [Note] [Entrypoint]: Starting temporary server
2024-02-12 08:54:50+00:00 [Note] [Entrypoint]: Waiting for server startup
2024-02-12  8:54:50 0 [Note] Starting MariaDB 11.2.2-MariaDB-1:11.2.2+maria~ubu2204 source revision 929532a9426d085111c24c63de9c23cc54382259 as process 107
2024-02-12  8:54:50 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2024-02-12  8:54:50 0 [Note] InnoDB: Number of transaction pools: 1
2024-02-12  8:54:50 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
2024-02-12  8:54:50 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
2024-02-12  8:54:50 0 [Note] InnoDB: Using liburing
2024-02-12  8:54:50 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
2024-02-12  8:54:50 0 [Note] InnoDB: Completed initialization of buffer pool
2024-02-12  8:54:50 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
2024-02-12  8:54:50 0 [Note] InnoDB: End of log at LSN=46300
2024-02-12  8:54:50 0 [Note] InnoDB: Opened 3 undo tablespaces
2024-02-12  8:54:50 0 [Note] InnoDB: 128 rollback segments in 3 undo tablespaces are active.
2024-02-12  8:54:50 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
2024-02-12  8:54:50 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
2024-02-12  8:54:50 0 [Note] InnoDB: log sequence number 46300; transaction id 14
2024-02-12  8:54:50 0 [Note] Plugin 'FEEDBACK' is disabled.
2024-02-12  8:54:50 0 [Note] Plugin 'wsrep-provider' is disabled.
2024-02-12  8:54:50 0 [Warning] 'user' entry 'root@c51d974835f4' ignored in --skip-name-resolve mode.
2024-02-12  8:54:50 0 [Warning] 'proxies_priv' entry '@% root@c51d974835f4' ignored in --skip-name-resolve mode.
2024-02-12  8:54:50 0 [Note] mariadbd: Event Scheduler: Loaded 0 events
2024-02-12  8:54:50 0 [Note] mariadbd: ready for connections.
Version: '11.2.2-MariaDB-1:11.2.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 0  mariadb.org binary distribution
2024-02-12 08:54:51+00:00 [Note] [Entrypoint]: Temporary server started.
2024-02-12 08:54:51+00:00 [Note] [Entrypoint]: GENERATED ROOT PASSWORD: $password
2024-02-12 08:54:51+00:00 [Note] [Entrypoint]: Creating database dockerbolt
2024-02-12 08:54:51+00:00 [Note] [Entrypoint]: Creating user $passboltdbuser
2024-02-12 08:54:51+00:00 [Note] [Entrypoint]: Giving user $passboltdbuser access to schema dockerbolt
2024-02-12 08:54:51+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)

2024-02-12 08:54:51+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/dump.sql


2024-02-12 08:54:54+00:00 [Note] [Entrypoint]: Stopping temporary server
2024-02-12  8:54:54 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
2024-02-12  8:54:54 0 [Note] InnoDB: FTS optimize thread exiting.
2024-02-12  8:54:54 0 [Note] InnoDB: Starting shutdown...
2024-02-12  8:54:54 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2024-02-12  8:54:54 0 [Note] InnoDB: Restricted to 2016 pages due to innodb_buf_pool_dump_pct=25
2024-02-12  8:54:54 0 [Note] InnoDB: Buffer pool(s) dump completed at 240212  8:54:54
2024-02-12  8:54:55 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
2024-02-12  8:54:55 0 [Note] InnoDB: Shutdown completed; log sequence number 76830867; transaction id 407
2024-02-12  8:54:55 0 [Note] mariadbd: Shutdown complete

2024-02-12 08:54:55+00:00 [Note] [Entrypoint]: Temporary server stopped

2024-02-12 08:54:55+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.

2024-02-12  8:54:55 0 [Note] Starting MariaDB 11.2.2-MariaDB-1:11.2.2+maria~ubu2204 source revision 929532a9426d085111c24c63de9c23cc54382259 as process 1
2024-02-12  8:54:55 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2024-02-12  8:54:55 0 [Note] InnoDB: Number of transaction pools: 1
2024-02-12  8:54:55 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
2024-02-12  8:54:55 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
2024-02-12  8:54:55 0 [Note] InnoDB: Using liburing
2024-02-12  8:54:55 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
2024-02-12  8:54:55 0 [Note] InnoDB: Completed initialization of buffer pool
2024-02-12  8:54:55 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
2024-02-12  8:54:55 0 [Note] InnoDB: End of log at LSN=76830867
2024-02-12  8:54:55 0 [Note] InnoDB: Opened 3 undo tablespaces
2024-02-12  8:54:55 0 [Note] InnoDB: 128 rollback segments in 3 undo tablespaces are active.
2024-02-12  8:54:55 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
2024-02-12  8:54:55 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
2024-02-12  8:54:55 0 [Note] InnoDB: log sequence number 76830867; transaction id 408
2024-02-12  8:54:55 0 [Note] Plugin 'FEEDBACK' is disabled.
2024-02-12  8:54:55 0 [Note] Plugin 'wsrep-provider' is disabled.
2024-02-12  8:54:55 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2024-02-12  8:54:55 0 [Note] Server socket created on IP: '0.0.0.0'.
2024-02-12  8:54:55 0 [Note] Server socket created on IP: '::'.
2024-02-12  8:54:55 0 [Note] mariadbd: Event Scheduler: Loaded 0 events
2024-02-12  8:54:55 0 [Note] mariadbd: ready for connections.
Version: '11.2.2-MariaDB-1:11.2.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: $dbport  mariadb.org binary distribution
2024-02-12  8:54:55 0 [Note] InnoDB: Buffer pool(s) load completed at 240212  8:54:55
**2024-02-12  8:54:56 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)**

As the instance was responding fine and they were having a great day, the human paid no attention to that last warning, since the conversation between them flowed without errors:

docker logs passbolt_server
wait-for.sh: waiting for passbolt_db:$dbport without a timeout
wait-for.sh: passbolt_db:$dbport is available after 8 seconds
gpg: keybox '/var/lib/passbolt/.gnupg/pubring.kbx' created
gpg: /var/lib/passbolt/.gnupg/trustdb.gpg: trustdb created
gpg: key $adminkeyid: public key "$admindata" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: key $adminkeyid: "$admindata" not changed
gpg: key $adminkeyid: secret key imported
gpg: Total number processed: 1
gpg:              unchanged: 1
gpg:       secret keys read: 1
gpg:   secret keys imported: 1
Subscription file found: /etc/passbolt/subscription_key.txt

     ____                  __          ____  
    / __ \____  _____ ____/ /_  ____  / / /_ 
   / /_/ / __ `/ ___/ ___/ __ \/ __ \/ / __/ 
  / ____/ /_/ (__  |__  ) /_/ / /_/ / / /    
 /_/    \__,_/____/____/_.___/\____/_/\__/   

 Open source password manager for teams
-------------------------------------------------------------------------------
The subscription key /etc/passbolt/subscription_key.txt has been successfully imported in the database.
Installing passbolt

     ____                  __          ____  
    / __ \____  _____ ____/ /_  ____  / / /_ 
   / /_/ / __ `/ ___/ ___/ __ \/ __ \/ / __/ 
  / ____/ /_/ (__  |__  ) /_/ / /_/ / / /    
 /_/    \__,_/____/____/_.___/\____/_/\__/   

 Open source password manager for teams
-------------------------------------------------------------------------------
Thanks for choosing Passbolt Pro
Below are your subscription key details
Customer id:    $customerid
Users limit:    $X (currently: $Y)
Valid from:     $dateX
Expires on:     $dateY
For any question / feedback / subscription renewal,
kindly contact us at sales@passbolt.com
Running baseline checks, please wait...
Some tables are already present in the database. A new installation would override existing data.
Please use --force to proceed anyway.
Running migrations

     ____                  __          ____  
    / __ \____  _____ ____/ /_  ____  / / /_ 
   / /_/ / __ `/ ___/ ___/ __ \/ __ \/ / __/ 
  / ____/ /_/ (__  |__  ) /_/ / /_/ / / /    
 /_/    \__,_/____/____/_.___/\____/_/\__/   

 Open source password manager for teams
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Thanks for choosing Passbolt Pro
Below are your subscription key details
Customer id:    $customerid
Users limit:    $X (currently: $Y)
Valid from:     $dateX
Expires on:     $dateY
For any question / feedback / subscription renewal,
kindly contact us at sales@passbolt.com
 Running migration scripts.
-------------------------------------------------------------------------------
using migration paths 
 - /etc/passbolt/Migrations
using seed paths 
using environment default
using adapter mysql
using database dockerbolt
ordering by creation time

All Done. Took 0.0102s
Clearing cake caches
Clearing _cake_model_
Cleared _cake_model_ cache
Clearing _cake_core_
Cleared _cake_core_ cache
Enjoy! ☮

2024-02-12 08:54:58,061 INFO Included extra file "/etc/supervisor/conf.d/cron.conf" during parsing
2024-02-12 08:54:58,061 INFO Included extra file "/etc/supervisor/conf.d/nginx.conf" during parsing
2024-02-12 08:54:58,061 INFO Included extra file "/etc/supervisor/conf.d/php.conf" during parsing
2024-02-12 08:54:58,063 INFO RPC interface 'supervisor' initialized
2024-02-12 08:54:58,063 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-02-12 08:54:58,063 CRIT could not write pidfile /var/run/supervisord.pid
2024-02-12 08:54:59,067 INFO spawned: 'php-fpm' with pid 252
2024-02-12 08:54:59,070 INFO spawned: 'nginx' with pid 253
2024-02-12 08:54:59,071 INFO spawned: 'cron' with pid 254
time="2024-02-12T08:54:59Z" level=info msg="read crontab: /etc/cron.d/passbolt-pro-server"
[12-Feb-2024 08:54:59] NOTICE: [pool www] 'user' directive is ignored when FPM is not running as root
[12-Feb-2024 08:54:59] NOTICE: [pool www] 'user' directive is ignored when FPM is not running as root
[12-Feb-2024 08:54:59] NOTICE: [pool www] 'group' directive is ignored when FPM is not running as root
[12-Feb-2024 08:54:59] NOTICE: [pool www] 'group' directive is ignored when FPM is not running as root
[12-Feb-2024 08:54:59] NOTICE: fpm is running, pid 252
[12-Feb-2024 08:54:59] NOTICE: ready to handle connections
[12-Feb-2024 08:54:59] NOTICE: systemd monitor interval set to 10000ms
time="2024-02-12T08:55:00Z" level=info msg=starting iteration=0 job.command="$PASSBOLT_BASE_DIR/bin/cron > $PASSBOLT_LOG_DIR/cron.log 2> $PASSBOLT_LOG_DIR/cron-error.log" job.position=0 job.schedule="* * * * *"
time="2024-02-12T08:55:00Z" level=info msg="job succeeded" iteration=0 job.command="$PASSBOLT_BASE_DIR/bin/cron > $PASSBOLT_LOG_DIR/cron.log 2> $PASSBOLT_LOG_DIR/cron-error.log" job.position=0 job.schedule="* * * * *"
2024-02-12 08:55:00,157 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-02-12 08:55:00,157 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-02-12 08:55:00,157 INFO success: cron entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
time="2024-02-12T08:56:00Z" level=info msg=starting iteration=1 job.command="$PASSBOLT_BASE_DIR/bin/cron > $PASSBOLT_LOG_DIR/cron.log 2> $PASSBOLT_LOG_DIR/cron-error.log" job.position=0 job.schedule="* * * * *"
time="2024-02-12T08:56:00Z" level=info msg="job succeeded" iteration=1 job.command="$PASSBOLT_BASE_DIR/bin/cron > $PASSBOLT_LOG_DIR/cron.log 2> $PASSBOLT_LOG_DIR/cron-error.log" job.position=0 job.schedule="* * * * *"
time="2024-02-12T08:57:00Z" level=info msg=starting iteration=2 job.command="$PASSBOLT_BASE_DIR/bin/cron > $PASSBOLT_LOG_DIR/cron.log 2> $PASSBOLT_LOG_DIR/cron-error.log" job.position=0 job.schedule="* * * * *"
time="2024-02-12T08:57:00Z" level=info msg="job succeeded" iteration=2 job.command="$PASSBOLT_BASE_DIR/bin/cron > $PASSBOLT_LOG_DIR/cron.log 2> $PASSBOLT_LOG_DIR/cron-error.log" job.position=0 job.schedule="* * * * *"

At the end of the day, they each went home and agreed to meet again soon.
Two mornings later, the human waited a while for the Passbolt instance, but it never showed up for the meeting.
So he checked his messages and found that there had been a server restart, and the instance had sent some error messages that kept it from coming to meet him (starting from that earlier clue):

docker logs passbolt_db
2024-02-12  8:54:56 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2024-02-14  8:58:28 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
2024-02-14  8:58:28 0 [Note] InnoDB: FTS optimize thread exiting.
2024-02-14  8:58:28 0 [Note] InnoDB: Starting shutdown...
2024-02-14  8:58:28 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2024-02-14  8:58:28 0 [Note] InnoDB: Restricted to 2016 pages due to innodb_buf_pool_dump_pct=25
2024-02-14  8:58:28 0 [Note] InnoDB: Buffer pool(s) dump completed at 240214  8:58:28
2024-02-14  8:58:28 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
2024-02-14  8:58:28 0 [Note] InnoDB: Shutdown completed; log sequence number 76949685; transaction id 420
2024-02-14  8:58:28 0 [Note] mariadbd: Shutdown complete

2024-02-14 08:59:19+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:11.2.2+maria~ubu2204 started.
2024-02-14 08:59:20+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2024-02-14 08:59:20+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:11.2.2+maria~ubu2204 started.
2024-02-14 08:59:20+00:00 [Note] [Entrypoint]: MariaDB upgrade not required
2024-02-14  8:59:20 0 [Note] Starting MariaDB 11.2.2-MariaDB-1:11.2.2+maria~ubu2204 source revision 929532a9426d085111c24c63de9c23cc54382259 as process 1
2024-02-14  8:59:20 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2024-02-14  8:59:20 0 [Note] InnoDB: Number of transaction pools: 1
2024-02-14  8:59:20 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
2024-02-14  8:59:20 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
2024-02-14  8:59:20 0 [Note] InnoDB: Using liburing
2024-02-14  8:59:20 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
2024-02-14  8:59:20 0 [Note] InnoDB: Completed initialization of buffer pool
2024-02-14  8:59:20 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
2024-02-14  8:59:20 0 [Note] InnoDB: End of log at LSN=76949685
2024-02-14  8:59:20 0 [Note] InnoDB: Opened 3 undo tablespaces
2024-02-14  8:59:20 0 [Note] InnoDB: 128 rollback segments in 3 undo tablespaces are active.
2024-02-14  8:59:20 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
2024-02-14  8:59:20 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
2024-02-14  8:59:20 0 [Note] InnoDB: log sequence number 76949685; transaction id 419
2024-02-14  8:59:20 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2024-02-14  8:59:20 0 [Note] Plugin 'FEEDBACK' is disabled.
2024-02-14  8:59:20 0 [Note] Plugin 'wsrep-provider' is disabled.
2024-02-14  8:59:20 0 [Note] Server socket created on IP: '0.0.0.0'.
2024-02-14  8:59:20 0 [Note] Server socket created on IP: '::'.
2024-02-14  8:59:20 0 [Note] mariadbd: Event Scheduler: Loaded 0 events
2024-02-14  8:59:20 0 [Note] mariadbd: ready for connections.
Version: '11.2.2-MariaDB-1:11.2.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 18001  mariadb.org binary distribution
2024-02-14  8:59:20 0 [Note] InnoDB: Buffer pool(s) load completed at 240214  8:59:20
2024-02-14  8:59:20 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2024-02-14  8:59:21 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2024-02-14  8:59:22 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2024-02-14  8:59:23 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2024-02-14  8:59:24 7 [Warning] Aborted connection 7 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2024-02-14  8:59:26 8 [Warning] Aborted connection 8 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2024-02-14  8:59:30 9 [Warning] Aborted connection 9 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2024-02-14  8:59:37 10 [Warning] Aborted connection 10 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2024-02-14  8:59:50 11 [Warning] Aborted connection 11 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2024-02-14  9:00:16 12 [Warning] Aborted connection 12 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)

Then he decided to visit it to see what had happened, and they had this conversation:

docker logs passbolt_server
time="2024-02-14T08:57:00Z" level=info msg="job succeeded" iteration=2882 job.command="$PASSBOLT_BASE_DIR/bin/cron > $PASSBOLT_LOG_DIR/cron.log 2> $PASSBOLT_LOG_DIR/cron-error.log" job.position=0 job.schedule="* * * * *"
time="2024-02-14T08:58:00Z" level=info msg=starting iteration=2883 job.command="$PASSBOLT_BASE_DIR/bin/cron > $PASSBOLT_LOG_DIR/cron.log 2> $PASSBOLT_LOG_DIR/cron-error.log" job.position=0 job.schedule="* * * * *"
time="2024-02-14T08:58:00Z" level=info msg="job succeeded" iteration=2883 job.command="$PASSBOLT_BASE_DIR/bin/cron > $PASSBOLT_LOG_DIR/cron.log 2> $PASSBOLT_LOG_DIR/cron-error.log" job.position=0 job.schedule="* * * * *"
[14-Feb-2024 08:58:28] NOTICE: Terminating ...
2024-02-14 08:58:28,480 WARN received SIGTERM indicating exit request
2024-02-14 08:58:28,480 INFO waiting for php-fpm, nginx, cron to die
time="2024-02-14T08:58:28Z" level=info msg="received terminated, shutting down"
time="2024-02-14T08:58:28Z" level=info msg="waiting for jobs to finish"
time="2024-02-14T08:58:28Z" level=info msg=exiting
[14-Feb-2024 08:58:28] NOTICE: exiting, bye-bye!
2024-02-14 08:58:28,483 INFO stopped: cron (exit status 0)
2024-02-14 08:58:28,483 INFO exited: nginx (exit status 0; expected)
2024-02-14 08:58:28,489 INFO stopped: php-fpm (exit status 0)
2024-02-14 08:58:28,489 WARN received SIGTERM indicating exit request
wait-for.sh: waiting for passbolt_db:$dbport without a timeout
wait-for.sh: passbolt_db:$dbport is available after 1 seconds
ln: failed to create symbolic link '/etc/passbolt/gpg/serverkey.asc': File exists
wait-for.sh: waiting for passbolt_db:$dbport without a timeout
wait-for.sh: passbolt_db:$dbport is available after 0 seconds
ln: failed to create symbolic link '/etc/passbolt/gpg/serverkey.asc': File exists
wait-for.sh: waiting for passbolt_db:$dbport without a timeout
wait-for.sh: passbolt_db:$dbport is available after 0 seconds
ln: failed to create symbolic link '/etc/passbolt/gpg/serverkey.asc': File exists
wait-for.sh: waiting for passbolt_db:$dbport without a timeout
wait-for.sh: passbolt_db:$dbport is available after 0 seconds
ln: failed to create symbolic link '/etc/passbolt/gpg/serverkey.asc': File exists
wait-for.sh: waiting for passbolt_db:$dbport without a timeout
wait-for.sh: passbolt_db:$dbport is available after 0 seconds

He can run a docker compose down -v followed by another docker compose up -d and the story starts over, but at some point the ending is always the same.
Now it’s your turn! Will you help the human meet the Passbolt instance, or will you write another unhappy ending for them?

This is the docker-compose file:

version: "3.9"
name: passbolt
services:
    passbolt_db:
        container_name: passbolt_db
        image: mariadb:latest
        restart: unless-stopped
        env_file:
            - database.env
        secrets:
            - db_database
            - db_username
            - db_password
        volumes:
            - database_volume:/var/lib/mysql
            - ./passbolt.sql:/docker-entrypoint-initdb.d/dump.sql

    passbolt_server:
        container_name: passbolt_server
        image: passbolt/passbolt:latest-pro-non-root
        restart: unless-stopped
        depends_on:
            - passbolt_db
        env_file:
            - passbolt.env
        secrets:
            - db_database
            - db_username
            - db_password
            - server_key_public
            - server_key_private
            - ssl_cert
            - ssl_key
        volumes:
            - jwt_volume:/etc/passbolt/jwt
            - gpg_volume:/etc/passbolt/gpg
            - ./subscription_key.txt:/etc/passbolt/subscription_key.txt:ro
        command:
            - /bin/bash
            - -c
            - |
              # Uncomment line below if you are using ldap sync. Change '* * * * *' to your preferred sync schedule. More info: https://en.wikipedia.org/wiki/Cron
              # echo '* * * * * www-data exec /bin/bash -c "source /etc/environment && /usr/share/php/passbolt/bin/cake directory_sync all" >> /var/log/cron.log 2>&1' >> /etc/cron.d/passbolt-pro-server
              /usr/bin/wait-for.sh -t 0 passbolt_db:$dbport -- /docker-entrypoint.sh
        ports:
            - 8080:8080
            - 4433:4433
secrets:
    db_database:
        file: secrets/db_database.txt
    db_username:
        file: secrets/db_username.txt
    db_password:
        file: secrets/db_password.txt
    server_key_public:
        file: secrets/serverkey.asc
    server_key_private:
        file: secrets/serverkey_private.asc
    ssl_cert:
        file: secrets/fullchain.pem
    ssl_key:
        file: secrets/privkey.pem
volumes:
    jwt_volume:
    gpg_volume:
    database_volume:

The database env file:

MYSQL_RANDOM_ROOT_PASSWORD: "true"
MYSQL_TCP_PORT: $dbport
MYSQL_DATABASE_FILE: /run/secrets/db_database
MYSQL_USER_FILE: /run/secrets/db_username
MYSQL_PASSWORD_FILE: /run/secrets/db_password

The passbolt env file:

APP_FULL_BASE_URL: https://passbolt.domain.tld:4433

DATASOURCES_DEFAULT_HOST: passbolt_db
DATASOURCES_DEFAULT_PORT: $dbport
DATASOURCES_DEFAULT_DATABASE_FILE: /run/secrets/db_database
DATASOURCES_DEFAULT_USERNAME_FILE: /run/secrets/db_username
DATASOURCES_DEFAULT_PASSWORD_FILE: /run/secrets/db_password

PASSBOLT_GPG_SERVER_KEY_PUBLIC_FILE: /run/secrets/server_key_public
PASSBOLT_GPG_SERVER_KEY_PRIVATE_FILE: /run/secrets/server_key_private

PASSBOLT_SSL_SERVER_CERT_FILE: /run/secrets/ssl_cert
PASSBOLT_SSL_SERVER_KEY_FILE: /run/secrets/ssl_key

EMAIL_TRANSPORT_DEFAULT_HOST: "$mailhost"
EMAIL_TRANSPORT_DEFAULT_PORT: $mailport
EMAIL_TRANSPORT_DEFAULT_USERNAME: "$mailuser"
EMAIL_TRANSPORT_DEFAULT_PASSWORD: "$mailpass"
EMAIL_TRANSPORT_DEFAULT_CLIENT: "$mailserver"

EMAIL_DEFAULT_FROM: "$mailuser"
EMAIL_DEFAULT_FROM_NAME: "Passbolt"

PASSBOLT_GPG_SERVER_KEY_FINGERPRINT: "$fingerprint"
PASSBOLT_KEY_EMAIL: "$adminmail"
PASSBOLT_SSL_FORCE: true
PASSBOLT_SECURITY_SMTP_SETTINGS_ENDPOINTS_DISABLED: true
PASSBOLT_EMAIL_VALIDATE_MX: true
PASSBOLT_PLUGINS_TOTP_RESOURCE_TYPES_ENABLED: true

P.S.: Happy Valentine’s Day to everyone!


I’m not sure where this symlink error is coming from. There are no operations on the server key, at least no ln -s involving it, in the container’s definition.

Maybe it runs an ln -s when the container is created, to copy in the server key provided by the Docker secrets.
The container is created from the image without any changes, so something must be trying to create the symbolic links again even when they already exist on the volume.
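To illustrate that suspicion with a self-contained sketch (this is not the actual Passbolt entrypoint, just a demo of the failure mode with made-up paths): a plain ln -s succeeds on the first start but fails with “File exists” on a restart, while an idempotent variant would not:

```shell
#!/bin/sh
# Demo of the suspected failure mode. The paths are stand-ins:
# $target plays the role of /run/secrets/server_key_public and
# $link plays the role of /etc/passbolt/gpg/serverkey.asc.
dir=$(mktemp -d)
target="$dir/server_key_public"
link="$dir/serverkey.asc"
echo "key material" > "$target"

# First start: the link does not exist yet, so this succeeds.
ln -s "$target" "$link"

# Second start (container restart): the link already exists,
# so a plain `ln -s` fails, just like in the container logs.
if ln -s "$target" "$link" 2>/dev/null; then
    echo "unexpected: second ln -s succeeded"
else
    echo "second ln -s failed with File exists, as in the logs"
fi

# An idempotent variant (-f replaces an existing link) survives restarts.
ln -sfn "$target" "$link"
echo "idempotent link now points to: $(readlink "$link")"
```

So if the entrypoint uses the plain form, any leftover link from a previous start would reproduce exactly the error in the logs.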

Ahh, I didn’t realize you use Docker secrets. Without digging too deep into it, you might have a workaround: specify the path of the server key in a location where www-data has write permissions inside the container. You can test with PASSBOLT_GPG_SERVER_KEY_PUBLIC=/tmp/key.asc and PASSBOLT_GPG_SERVER_KEY_PRIVATE=/tmp/key.asc and mount the secrets in those locations.

#nottested

That might work, but I would like to keep using Docker secrets to secure the keys.
Do you know if this could also happen in non-rootless Docker containers using secrets?
Maybe I’m the only one trying to use secrets and I’ve found a bug like this xd

Forget what I just said about the env vars and mounting them in a separate location.

Docker secrets and non-root work out of the box, I just confirmed it. The problem with your setup might be that you are mounting /etc/passbolt/gpg as a Docker volume on top of using Docker secrets.

The following sample docker-compose works:

version: "3.9"
services:
  db:
    image: mariadb:10.11
    restart: unless-stopped
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: "true"
      MYSQL_DATABASE: "passbolt"
      MYSQL_USER: "passbolt"
      MYSQL_PASSWORD: "P4ssb0lt"
    volumes:
      - database_volume:/var/lib/mysql

  passbolt:
    #image: passbolt/passbolt:latest-ce
    #Alternatively you can use rootless:
    image: passbolt/passbolt:latest-ce-non-root
    restart: unless-stopped
    depends_on:
      - db
    environment:
      APP_FULL_BASE_URL: https://passbolt.local
      DATASOURCES_DEFAULT_HOST: "db"
      DATASOURCES_DEFAULT_USERNAME: "passbolt"
      DATASOURCES_DEFAULT_PASSWORD: "P4ssb0lt"
      DATASOURCES_DEFAULT_DATABASE: "passbolt"
      PASSBOLT_GPG_SERVER_KEY_PUBLIC_FILE: /run/secrets/pubkey
      PASSBOLT_GPG_SERVER_KEY_PRIVATE_FILE: /run/secrets/key
    volumes:
      - jwt_volume:/etc/passbolt/jwt
    command:
      [
        "/usr/bin/wait-for.sh",
        "-t",
        "0",
        "db:3306",
        "--",
        "/docker-entrypoint.sh",
      ]
    ports:
     # - 80:80
     # - 443:443
    #Alternatively for non-root images:
      - 80:8080
      - 443:4433
secrets:
  pubkey:
    file: pubkey
  key:
    file: key

volumes:
  database_volume:
  jwt_volume:

Hello @diego, and thank you for answering.
I tried deleting the gpg volume as you suggested, but it still fails with the same error.
Here is my current docker-compose:

version: "3.9"
name: passbolt
services:
    passbolt_db:
        container_name: passbolt_db
        image: mariadb:latest
        restart: unless-stopped
        env_file:
            - database.env
        secrets:
            - db_database
            - db_username
            - db_password
        volumes:
            - database_volume:/var/lib/mysql
            - ./passbolt.sql:/docker-entrypoint-initdb.d/dump.sql

    passbolt_server:
        container_name: passbolt_server
        image: passbolt/passbolt:latest-pro-non-root
        restart: unless-stopped
        depends_on:
            - passbolt_db
        env_file:
            - passbolt.env
        secrets:
            - db_database
            - db_username
            - db_password
            - server_key_public
            - server_key_private
            - ssl_cert
            - ssl_key
        volumes:
            - jwt_volume:/etc/passbolt/jwt
            - ./subscription_key.txt:/etc/passbolt/subscription_key.txt:ro
        command:
            - /bin/bash
            - -c
            - |
              # Uncomment line below if you are using ldap sync. Change '* * * * *' to your preferred sync schedule. More info: https://en.wikipedia.org/wiki/Cron
              # echo '* * * * * www-data exec /bin/bash -c "source /etc/environment && /usr/share/php/passbolt/bin/cake directory_sync all" >> /var/log/cron.log 2>&1' >> /etc/cron.d/passbolt-pro-server
              /usr/bin/wait-for.sh -t 0 passbolt_db:3306 -- /docker-entrypoint.sh
        ports:
            - 8080:8080
            - 4433:4433
secrets:
    db_database:
        file: secrets/db_database.txt
    db_username:
        file: secrets/db_username.txt
    db_password:
        file: secrets/db_password.txt
    server_key_public:
        file: secrets/serverkey.asc
    server_key_private:
        file: secrets/serverkey_private.asc
    ssl_cert:
        file: secrets/fullchain.pem
    ssl_key:
        file: secrets/privkey.pem
volumes:
    jwt_volume:
    database_volume:

The env files are the same as before.

I tried the same setup on a Raspberry Pi I have, to check whether something had gone wrong due to the different tests I had made, but the same error shows up when I restart the server (or just the container).
I have to delete the Passbolt container and recreate it to get it working again.

Maybe you can connect to your running container and check whether your server key is correctly mounted from your secret and not from the gpg volume. Also, when you restart the container, you can modify the entrypoint to get some information about the files, like running ls -las /etc/passbolt/gpg at the beginning of the entrypoint.
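For example (untested, and assuming the command: block from your compose file above), you could prepend the listing so it runs on every start, before the entrypoint:

```yaml
        command:
            - /bin/bash
            - -c
            - |
              # Debug step: show the state of the gpg dir on every start
              ls -las /etc/passbolt/gpg || true
              /usr/bin/wait-for.sh -t 0 passbolt_db:3306 -- /docker-entrypoint.sh
```

That way the directory listing lands in docker logs even if the entrypoint itself fails right after.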

I can only run commands before restarting, because once it throws the error the container keeps restarting and I can’t access it.
This is the output while it is running. Keep in mind that on the Raspberry Pi, the GPG volume was never created.

www-data@b31b3b17d238:/usr/share/php/passbolt$ ls -las /etc/passbolt/gpg
total 8
4 drwxrwx--- 1 root     www-data 4096 Feb 19 16:37 .
4 drwxrwx--- 1 root     www-data 4096 Feb 19 16:37 ..
0 lrwxrwxrwx 1 www-data www-data   30 Feb 19 16:37 serverkey.asc -> /run/secrets/server_key_public
0 lrwxrwxrwx 1 www-data www-data   31 Feb 19 16:37 serverkey_private.asc -> /run/secrets/server_key_private

This is if I try to get a bash when it is restarting:

user@rpi:~ $ docker ps
CONTAINER ID   IMAGE                                   COMMAND                  CREATED        STATUS                          PORTS                                                                                                                                   NAMES
b31b3b17d238   passbolt/passbolt:latest-pro-non-root   "/bin/bash -c '# Unc…"   17 hours ago   Restarting (1) 16 seconds ago                                                                                                                                           passbolt_server
7217d7dcc8e5   mariadb:latest                          "docker-entrypoint.s…"   17 hours ago   Up 17 hours                     3306/tcp                                                                                                                                passbolt_db
user@rpi:~ $ docker exec -it passbolt_server /bin/bash
Error response from daemon: Container b31b3b17d238f7a9bbd57f94f9b3d90654f22ec526f7563c5bb6208bc6482b21 is restarting, wait until the container is running

Hi again! I did some testing and wanted to update the thread so we can try to find the problem together and fix it.

To recap: I’m using the latest Passbolt PRO rootless Docker image (so it’s updated to the newest version), and I’m using Docker secrets (see the first post for the docker-compose and environment variables).
When I restart the server, or restart the container without removing it, the container throws an error about creating a symlink for an already existing serverkey.asc file.

Last week I created a new instance to try to configure Traefik as a reverse proxy and do some testing (because we all know it’s not recommended to test on your production instance). There I’m using the Passbolt PRO Docker image (not the rootless one), with exactly the same docker-compose and environment variables except for the image, from which I removed the non-root suffix to test whether it was a rootless thing.

The same thing happens to me there: when I stop the container and start it again without deleting it (docker stop and docker start), or reboot the server, it gets stuck in a loop with the error. I’ve tried with the CE images and it happens as well.

Does anyone else have the same problem? What could be throwing that error? I currently don’t have the GPG volume that they told me to try to delete and the same error keeps coming up.