Passbolt integration with an external database hosted in the cloud as a managed MariaDB service

Is there any possibility of connecting Passbolt to an external database hosted in a cloud as a managed service?

If yes, please share the steps for the integration.

I'm unable to find the exact steps to configure Passbolt with a cloud-managed MariaDB service.

Requirement:
Integrate Passbolt with an Azure-hosted managed MariaDB instance.

What are you using to host Passbolt?

Debian, Ubuntu, Docker?

If it's the Linux package, please edit /etc/passbolt/passbolt.php and change the host in:

// Database configuration.
    'Datasources' => [
        'default' => [
            // Point 'host' at your managed database's FQDN,
            // e.g. your Azure MariaDB server hostname.
            'host' => 'localhost',
            //'port' => 'non_standard_port_number',
            'username' => 'user',
            'password' => 'secret',
            'database' => 'passbolt',
        ],
    ],

If you are using Docker, please change the following environment variable instead: DATASOURCES_DEFAULT_HOST

Have a look here if you need other env vars: Passbolt Help | Passbolt reference environment variables
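For example, a minimal docker-compose sketch, assuming the official passbolt/passbolt image; the hostname, credentials, and base URL below are placeholders, not values from your setup:

# Sketch of a compose file pointing Passbolt at a managed MariaDB.
# Hostname, credentials, and URL are illustrative placeholders.
services:
  passbolt:
    image: passbolt/passbolt:latest-ce
    environment:
      APP_FULL_BASE_URL: https://passbolt.example.com
      DATASOURCES_DEFAULT_HOST: my-server.mariadb.database.azure.com
      DATASOURCES_DEFAULT_PORT: "3306"
      DATASOURCES_DEFAULT_USERNAME: passbolt
      DATASOURCES_DEFAULT_PASSWORD: change-me
      DATASOURCES_DEFAULT_DATABASE: passbolt
    ports:
      - "443:443"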

@max, I would say the Docker image.
I'm planning to use this Helm chart on Kubernetes, which is again hosted in Azure (AKS): https://github.com/passbolt/charts-passbolt

Also, let me know whether we can use PostgreSQL with Passbolt. If yes, how can we achieve the same thing using the above-mentioned Helm chart?

I could see an option for Passbolt with PostgreSQL in the Docker installation, but for Kubernetes I didn't find any documentation.

I believe Postgres is possible but not "officially" supported yet.

You'll be looking at adding these to your values.yaml file:

DATASOURCES_DEFAULT_HOST
DATASOURCES_DEFAULT_PORT
DATASOURCES_DEFAULT_URL
DATASOURCES_DEFAULT_DRIVER

You can see how these are currently set up in the configmap-app-config.yaml file

You'll also likely want to set mariadbDependencyEnabled to false, since you won't be using the bundled MariaDB container.
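For an external Postgres, a hypothetical values.yaml fragment could look like this (the Azure-style hostname and credentials are placeholder assumptions; the driver value is CakePHP's Postgres driver class):

# Hypothetical fragment; host and credentials are placeholders.
mariadbDependencyEnabled: false

passboltEnv:
  plain:
    DATASOURCES_DEFAULT_HOST: my-server.postgres.database.azure.com
    DATASOURCES_DEFAULT_PORT: 5432
    DATASOURCES_DEFAULT_DRIVER: 'Cake\Database\Driver\Postgres'
  secret:
    DATASOURCES_DEFAULT_USERNAME: passbolt
    DATASOURCES_DEFAULT_PASSWORD: change-me
    DATASOURCES_DEFAULT_DATABASE: passbolt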

I will try to modify my values.yaml file and will let you know.

It's not working for me when I pass the external database settings using the below:
data:
  DATASOURCES_DEFAULT_DATABASE
  DATASOURCES_DEFAULT_HOST
  DATASOURCES_DEFAULT_PASSWORD
  DATASOURCES_DEFAULT_URL
  DATASOURCES_DEFAULT_USERNAME

Somehow, for the secrets, it's taking values from the default settings, which the Helm chart defines as below:

data:
  {{ include "passbolt-library.secret-range.tpl" .Values.passboltEnv.secret | nindent 2 }}

If I don't set any values under .Values.passboltEnv.secret, it doesn't connect to MariaDB; the init container uses those values for the database connection check it runs before Passbolt becomes available.

Just to be sure, you are adding those to the .Values.passboltEnv.secret section, correct?
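Something along these lines, as a sketch with placeholder values:

# Hypothetical fragment; host and credentials are placeholders.
passboltEnv:
  secret:
    DATASOURCES_DEFAULT_HOST: my-server.mariadb.database.azure.com
    DATASOURCES_DEFAULT_USERNAME: passbolt
    DATASOURCES_DEFAULT_PASSWORD: change-me
    DATASOURCES_DEFAULT_DATABASE: passbolt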

Yes! I'm adding them to that section.
When I deploy Passbolt, the init container is not able to connect to the database to complete the connection check defined in deployment.yaml, so the Passbolt app doesn't come up either.

I also tried passing those secrets via extraEnvFrom:

  - secretRef:
      name: passbolt-secrets

It didn't help; somehow the values aren't being picked up properly for the connection.

Can you post the YAML so we can try to understand what is happening?
If you use a separate env file, please paste it too, and censor any sensitive data.

This is my values.yaml:

redisDependencyEnabled: false
redis:
  auth:
    enabled: false
  sentinel:
    enabled: false

mariadbDependencyEnabled: false
mariadb:
  primary:
    persistence:
      enabled: false
  secondary:
    persistence:
      enabled: false

app:
  initImage:
    client: mariadb
    repository: mariadb
    pullPolicy: IfNotPresent
    tag: latest
  image:
    repository: passbolt/passbolt
    pullPolicy: IfNotPresent
    tag: 4.1.2-1-ce
  cache:
    redis:
      enabled: false
      sentinelProxy:
        enabled: false
  cronJobEmail:
    enabled: false
    schedule: "* * * * *"
  extraPodLabels: {}

gpgPath: /etc/passbolt/gpg
jwtPath: /etc/passbolt/jwt

jobCreateGpgKeys:
  extraPodLabels: {}

passboltEnv:
  plain:
    PASSBOLT_LEGAL_PRIVACYPOLICYURL: "Privacy With Passbolt: Website Privacy Policy"
    APP_FULL_BASE_URL: https://passbolt.global
    PASSBOLT_SSL_FORCE: true
    PASSBOLT_REGISTRATION_PUBLIC: true
    EMAIL_TRANSPORT_DEFAULT_PORT: 587
    DEBUG: false
    PASSBOLT_KEY_EMAIL: passbolt@global
    PASSBOLT_SELENIUM_ACTIVE: false
    PASSBOLT_PLUGINS_LICENSE_LICENSE: /etc/passbolt/subscription_key.txt
    EMAIL_DEFAULT_FROM: no-reply@passbolt.local
    EMAIL_TRANSPORT_DEFAULT_HOST: 127.0.0.1
    EMAIL_TRANSPORT_DEFAULT_TLS: true
    PASSBOLT_JWT_SERVER_KEY: /var/www/passbolt/config/jwt/jwt.key
    PASSBOLT_JWT_SERVER_PEM: /var/www/passbolt/config/jwt/jwt.pem
    PASSBOLT_PLUGINS_JWT_AUTHENTICATION_ENABLED: true
    KUBECTL_DOWNLOAD_CMD: curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  secret: {}
  extraEnv:
  extraEnvFrom:
    - secretRef:
        name: passbolt-db-secret

replicaCount: 1

autoscaling:
  enabled: false

rbacEnabled: true

livenessProbe:
  httpGet:
    port: https
    scheme: HTTPS
    path: /healthcheck/status.json
    httpHeaders:
      - name: Host
        value: passbolt.local
  initialDelaySeconds: 20
  periodSeconds: 10
readinessProbe:
  httpGet:
    port: https
    scheme: HTTPS
    httpHeaders:
      - name: Host
        value: passbolt.local
    path: /healthcheck/status.json
  initialDelaySeconds: 5
  periodSeconds: 10

networkPolicy:
  enabled: false

imagePullSecrets:
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  annotations: {}

podAnnotations: {}

podSecurityContext: {}

service:
  type: ClusterIP
  port: 443
  targetPort: 443
  name: https
  annotations: {}

ingress:
  enabled: true
  annotations: {}
  hosts:
    - host: passbolt.global
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - secretName: tls-global-secret
      hosts:
        - passbolt.global

Also, I have a secret file with the below values (censored); a sketch of the full manifest follows the list:
DATASOURCES_DEFAULT_DATABASE
DATASOURCES_DEFAULT_HOST
DATASOURCES_DEFAULT_PASSWORD
DATASOURCES_DEFAULT_URL
DATASOURCES_DEFAULT_USERNAME
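Assembled as a Kubernetes Secret, it looks roughly like this (all values are censored placeholders; typically either the URL form or the individual fields is enough, not both):

apiVersion: v1
kind: Secret
metadata:
  name: passbolt-db-secret
type: Opaque
stringData:
  DATASOURCES_DEFAULT_DATABASE: passbolt
  DATASOURCES_DEFAULT_HOST: my-server.mariadb.database.azure.com
  DATASOURCES_DEFAULT_PASSWORD: change-me
  # DSN form; duplicates the individual fields above.
  DATASOURCES_DEFAULT_URL: mysql://passbolt:change-me@my-server.mariadb.database.azure.com:3306/passbolt
  DATASOURCES_DEFAULT_USERNAME: passbolt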

Another thing to try here, since you mentioned that the connection couldn't be completed: you could deploy just a plain vanilla MariaDB container in your cluster and try to connect to the DB from there. That would rule out a connectivity issue, such as the cluster not having access to the resource.

Do you want me to try with MariaDB running in the Kubernetes cluster?

Yeah, I think that would be a good step to at least rule out an issue with communication from the cluster to the database.
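For example, a throwaway client pod along these lines (image tag, hostname, and credentials are placeholders, not values from your setup); afterwards, check the result with kubectl logs mariadb-client-test:

# One-shot pod that runs the MariaDB client once against the managed database.
# Host and credentials are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-client-test
spec:
  restartPolicy: Never
  containers:
    - name: client
      image: mariadb:latest
      command: ["mariadb"]
      args:
        - "--host=my-server.mariadb.database.azure.com"
        - "--user=passbolt"
        - "--password=change-me"
        - "--execute=SELECT 1;"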

I tried with Passbolt and MariaDB both in the Kubernetes cluster, but it's not working for me; I'm not able to reach Passbolt.

So, just to confirm: you were able to get the pods up and running in the cluster, but you don't have connectivity to them from outside of the cluster?

Yes. I tried to connect to Passbolt via port forwarding and via the ingress as well.
No access to the app.

Are you seeing any errors when trying to connect with either of those methods?

These are the logs I could see:

gpg: keybox '/var/lib/passbolt/.gnupg/pubring.kbx' created
gpg: /var/lib/passbolt/.gnupg/trustdb.gpg: trustdb created
gpg: key 21382955CB3F4F9A: public key "Passbolt default user <passbolt@yourdomain.com>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: key 21382955CB3F4F9A: "Passbolt default user <passbolt@yourdomain.com>" not changed
gpg: key 21382955CB3F4F9A: secret key imported
gpg: Total number processed: 1
gpg: unchanged: 1
gpg: secret keys read: 1
gpg: secret keys imported: 1
(key generation progress output trimmed)

Installing passbolt

[Passbolt ASCII art banner]

Open source password manager for teams

Running baseline checks, please wait…
Could not use key for signing. get_key failed
Please run ./bin/cake passbolt healthcheck for more information and help.
Running migrations

[Passbolt ASCII art banner]

Open source password manager for teams


Running migration scripts.

using migration paths
 - /etc/passbolt/Migrations
using seed paths
using environment default
using adapter mysql
using database passbolt
ordering by creation time

All Done. Took 0.0130s
Clearing cake caches
Clearing cake_model
Cleared cake_model cache
Clearing cake_core
Cleared cake_core cache
Enjoy! ✌️

/usr/lib/python3/dist-packages/supervisor/options.py:474: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
self.warnings.warn(
2023-08-17 10:50:02,128 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2023-08-17 10:50:02,128 INFO Included extra file "/etc/supervisor/conf.d/cron.conf" during parsing
2023-08-17 10:50:02,128 INFO Included extra file "/etc/supervisor/conf.d/nginx.conf" during parsing
2023-08-17 10:50:02,129 INFO Included extra file "/etc/supervisor/conf.d/php.conf" during parsing
2023-08-17 10:50:02,132 INFO RPC interface 'supervisor' initialized
2023-08-17 10:50:02,132 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2023-08-17 10:50:02,132 INFO supervisord started with pid 1
2023-08-17 10:50:03,135 INFO spawned: 'php-fpm' with pid 94
2023-08-17 10:50:03,138 INFO spawned: 'nginx' with pid 95
[17-Aug-2023 10:50:03] NOTICE: fpm is running, pid 94
[17-Aug-2023 10:50:03] NOTICE: ready to handle connections
[17-Aug-2023 10:50:03] NOTICE: systemd monitor interval set to 10000ms
10.104.53.99 - - [17/Aug/2023:10:50:04 +0000] "GET /healthcheck/status.json HTTP/2.0" 200 220 "-" "kube-probe/1.22"
2023-08-17 10:50:04,229 INFO success: php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-17 10:50:04,229 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-17 10:50:04,230 INFO reaped unknown pid 103 (exit status 0)
2023-08-17 10:50:04,230 INFO reaped unknown pid 105 (exit status 0)
2023-08-17 10:50:04,230 INFO reaped unknown pid 107 (exit status 0)
2023-08-17 10:50:04,231 INFO reaped unknown pid 109 (exit status 0)
2023-08-17 10:50:04,231 INFO reaped unknown pid 111 (exit status 0)
10.104.53.99 - - [17/Aug/2023:10:50:13 +0000] "GET /healthcheck/status.json HTTP/2.0" 200 220 "-" "kube-probe/1.22"
2023-08-17 10:50:13,939 INFO reaped unknown pid 113 (exit status 0)
2023-08-17 10:50:13,940 INFO reaped unknown pid 115 (exit status 0)
2023-08-17 10:50:13,940 INFO reaped unknown pid 117 (exit status 0)
2023-08-17 10:50:13,941 INFO reaped unknown pid 119 (exit status 0)
2023-08-17 10:50:13,941 INFO reaped unknown pid 121 (exit status 0)
2023/08/17 10:50:15 [info] 99#99: *7 SSL_do_handshake() failed (SSL: error:0A000416:SSL routines::sslv3 alert certificate unknown:SSL alert number 46) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:443
2023/08/17 10:50:15 [info] 96#96: *6 SSL_do_handshake() failed (SSL: error:0A000416:SSL routines::sslv3 alert certificate unknown:SSL alert number 46) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:443
2023/08/17 10:50:16 [info] 98#98: *5 client closed connection while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:443
10.104.53.99 - - [17/Aug/2023:10:50:23 +0000] "GET /healthcheck/status.json HTTP/2.0" 200 220 "-" "kube-probe/1.22"
2023/08/17 10:50:31 [info] 96#96: *13 SSL_do_handshake() failed (SSL: error:0A000416:SSL routines::sslv3 alert certificate unknown:SSL alert number 46) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:443
2023/08/17 10:50:31 [info] 96#96: *12 client closed connection while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:443
(the healthcheck 200s and SSL handshake failures then repeat every few seconds)

It looks like there are some SSL issues showing there. How are you handling that for your cluster?

I'm using my internal company domain, with the SSL/TLS certificates added in the ingress.
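For reference, the tls section of the ingress above expects a secret shaped like this minimal sketch (the certificate and key data are placeholders):

# TLS secret consumed by the ingress 'tls' section (tls-global-secret above).
apiVersion: v1
kind: Secret
metadata:
  name: tls-global-secret
type: kubernetes.io/tls
stringData:
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    (PEM certificate placeholder)
    -----END CERTIFICATE-----
  tls.key: |
    -----BEGIN PRIVATE KEY-----
    (PEM private key placeholder)
    -----END PRIVATE KEY-----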