IVAAP Operations

Starting IVAAP for the first time

How you start IVAAP for the first time depends largely on your deployment method and deployment type. Since IVAAP uses a Helm template, there are many possible environments and approaches. First, complete the Pre-Deployment Steps for pre-deployment setup. Then, decide on your deployment method by reading the Deployment Methods section of the General Helm Configuration guide.

Once you have decided on your deployment method, refer to the other guides under the IVAAP Helm Template section for environment-specific setup details:

K3s Single Server Deployment

AWS EKS

Azure AKS

OpenShift

Regardless of the deployment method chosen, the Helm command is essentially the same across environments, unless deploying with GitOps tools.

    helm upgrade --install ivaap /opt/ivaap/IVAAPHelmTemplate \
        -f /opt/ivaap/IVAAPHelmTemplate/values.yaml \
        -f /opt/ivaap/my-modified-values.yaml \
        --namespace <namespace>

The Helm with Multiple Values Files section provides the context needed to understand how this Helm template is meant to be deployed.

Before deployment, make sure all Pre-Deployment Steps are completed, the PostgreSQL database is running, the schema is loaded, and the IVAAP deployment is able to access this database. Each environment-specific guide referenced above contains details on PostgreSQL setup depending on where IVAAP is deployed. Reference these guides, as well as the IVAAP Deployment Operations Guide - Database Administration.

SLB provides a default super admin user called ivaaproot@int.com. In local authentication deployments, this user can be accessed from the admin client (https://<IVAAP_DOMAIN>/admin). After first-time deployment, we recommend logging in as this user to begin initial IVAAP administration configuration. The password is printed after running the k3s install script, or provided separately if not deploying on k3s. This is not applicable for deployments using external authentication, as the local database users will not be accessible.

Primary Operations

Kubernetes basics

If you are unfamiliar with Kubernetes, the sections below cover some basics using the kubectl command. These basics focus on IVAAP specifically; for a broader understanding of Kubernetes, refer to the official Kubernetes documentation.

IVAAP Pods

IVAAP deploys several pods, the largest of which is the backend pod; it contains multiple containers depending on the number of data connectors. View these pods with the command kubectl get pods -n <namespace>:

user@linux:/opt/ivaap/IVAAPHelmTemplate$ kubectl get pods -n ivaap
NAME                                                  READY   STATUS    RESTARTS   AGE
adminserver-deployment-6b4bfb4497-c4bp6               1/1     Running   0          14d
ivaap-activemq-deployment-6b6956684f-rfhdt            1/1     Running   0          49d
ivaap-admin-deployment-5b8f44cc4c-8jp96               1/1     Running   0          27d
ivaap-backend-deployment-765b4dc65-cm6fg              16/16   Running   0          24h
ivaap-dashboard-deployment-96b65978f-k5zhp            1/1     Running   0          35d
ivaap-dashboard-publish-deployment-6bc4d97ddf-wwcjc   1/1     Running   0          35d
ivaap-infinispan-deployment-0                         1/1     Running   0          50d
ivaap-proxy-deployment-6875dcf849-qjh8h               1/1     Running   0          12d
ivaap-scheduledtasks-deployment-645f6fd6d7-zdps7      3/3     Running   0          29d

For more pod information, kubectl describe pod <pod_name> -n <namespace> can be used:

user@linux:/opt/ivaap/IVAAPHelmTemplate$ kubectl describe pod ivaap-proxy-deployment-6875dcf849-qjh8h -n ivaap
Name:             ivaap-proxy-deployment-6875dcf849-qjh8h
Namespace:        ivaap
Priority:         0
Service Account:  default
Node:             nodename/10.0.0.26
Start Time:       Fri, 01 Aug 2025 19:24:25 +0000
Labels:           app=ivaap-proxy
                  pod-template-hash=6875dcf849
Annotations:      dapr.io/enabled: true
                  dapr.io/id: ivaap-proxy
Status:           Running
IP:               11.40.0.100
IPs:
  IP:           11.40.0.100
Controlled By:  ReplicaSet/ivaap-proxy-deployment-6875dcf849
Containers:
  ivaap-proxy:
    Container ID:   containerd://03f6f2cfdade42b0d840d1da809d9f64fd07e2ca5deb14b53ef60f324ddfee17
    Image:          245634265005.dkr.ecr.us-west-2.amazonaws.com/ivaap/proxy:proxy-2024.0-cce98866-20250627T135324Z
    Image ID:       245634265005.dkr.ecr.us-west-2.amazonaws.com/ivaap/proxy@sha256:4f25728cb58dd145ae65168d372fa9355219f6556b04628d6e6881a60f4bcdd7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 01 Aug 2025 19:24:26 +0000
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      proxy       ConfigMap  Optional: false
    Environment:  <none>
    Mounts:
      /etc/nginx/logs from logs-proxy-pv (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z8znt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  logs-proxy-pv:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  logs-proxy-pvc
    ReadOnly:   false
  kube-api-access-z8znt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

IVAAP Secrets

IVAAP deployments contain many kubernetes secrets. Two are created during the Pre-Deployment Steps (if applicable), but the important ones are the four secrets referenced by the IVAAP Helm Template.

  • activemq-conf-secrets
  • adminserver-conf-secrets
  • circle-of-trust-secrets
  • ivaap-license-secret

These secrets can be seen with the command kubectl get secrets -n <namespace>:

user@linux:/opt/ivaap/IVAAPHelmTemplate$ kubectl get secrets -n ivaap
NAME                            TYPE                             DATA   AGE
activemq-conf-secrets           Opaque                           1      63d
adminserver-conf-secrets        Opaque                           8      63d
circle-of-trust-secrets         Opaque                           3      63d
intecrcred                      kubernetes.io/dockerconfigjson   1      27d
ivaap-license-secret            Opaque                           1      63d
ivaap-tls-secret                kubernetes.io/tls                2      63d

More information about the secret can be seen with kubectl describe secret <secret_name> -n <namespace>:

user@linux:/opt/ivaap/IVAAPHelmTemplate$ kubectl describe secret circle-of-trust-secrets -n ivaap
Name:         circle-of-trust-secrets
Namespace:    ivaap
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: ivaap
              meta.helm.sh/release-namespace: ivaap

Type:  Opaque

Data
====
IVAAP_TRUST_PRIVATE_AES_ENCRYPTION_KEY:  15 bytes
IVAAP_TRUST_PRIVATE_KEY:                 2176 bytes
IVAAP_TRUST_PUBLIC_KEY:                  392 bytes

As seen from the output, the secret circle-of-trust-secrets contains 3 keys:

  • IVAAP_TRUST_PRIVATE_AES_ENCRYPTION_KEY
  • IVAAP_TRUST_PRIVATE_KEY
  • IVAAP_TRUST_PUBLIC_KEY

There are a couple of ways to see each key's value. The command kubectl get secret <secret_name> -n <namespace> -o yaml will show the base64 encoded values of the secrets, which can then be decoded to show the true value.

user@linux:/opt/ivaap/IVAAPHelmTemplate$ kubectl get secret circle-of-trust-secrets -n ivaap -o yaml
apiVersion: v1
data:
  IVAAP_TRUST_PRIVATE_AES_ENCRYPTION_KEY: bXlTdXBlclNlY3JldEtleQ==
  IVAAP_TRUST_PRIVATE_KEY: dDZ6K1k0QU9yckJyOUpIWU9IbEN1eG1JUGdDWkkzMzhPTm5qbTluRkQ0RUY1ZUFQaG5YczVTZlhacG9tV3pOYWRpTkFyMVZrc1dsaTQvTDdJNjhSNjZMOTVPdmZqNTBqeEVjN1BvMGN3UUJublc4MlBPWlF0Wkd0ZlltaGE4TVdWN2Y0cHBzWWgzbW9uVkZlWDB4R3QvcFJ5VlpCa0IwRjhCTGtFcGhTN0lURWRGNnB0WU41cTV1UHA2dnNZdXRDK1ErWEROeHBBcFNFUnQxb0VzaHYwQURjMEhjSXFvbmpaZmwxUVkrRUJPN0VudXBhKzZTcElFS3VTbEZud0pNaDRmRlc1Sng5YmI5dnBOMWNUQ2ljNXppak10WVYrdU03WDIxaTdUUWphQml6dWE5TE0vb0pVbmNpTzFweUo3bmVxbllUL1BYanEwUFFSMEhmcWRDMlF4cnc1SklBcWxhd0k2MmxqbE1PSDlHRFBaRWhsK3JJby9QcXFXdEFkTU9kcUpEakQxbUJCdEovUTR2SGFoQURHbzhmY21KWlg0Y2N4NzQySGNOODhkbXJZcjZRZnNTR1AySkIvOG1oTmNDSW9ENGhIdHNqOGJFemdXUERkdVRBdW9pNTdDM0xJSTZPRFZucnI4VUZvL0FBWGtaS1VpajNyRnRIU2pweC9BWW5xdnEwUVg4MjF4L2VkeGNxcnRYaVl4bS9XY1pyRVpMOWszUktacXdTeUV5Vk90NHgycjVGRTVXSlova2FMQi83TldoQTBEaXBCc1BLZFZwbVZYcXdEUXc5cS9RYzNJUWNCcDZ6SzhDRVpvTmUrMFdYaTdzMjFhZzBIcTMxN0xRZkdkejhEYVNrL0RwamsyOEFQN3RzL2p4OHBra3pQTEYyaDlZdkltbkM0ZEpVV2dNR2U2aG5SemtMVk50RlJBcmVDbjlCQ3lhL05LL05oTGU0ZnZqaHlyeW81aTVRcUdUeDVvY2FkSlNiUFBEL1Z4fdncpalkemgnjboeGeWWWTCaTQzMUdlbG1lZmlUK1RiNHhoU3hxWDlWdHpPb1lzVTZUQW1DclgzTnNYUEhQOU9qem0ycVpaaitHWk93MVlkcnA3WGZmSkxvQWFDV2UrR09TNkJVbHgyVG1GZEVDZG90ZFVtSUgrOFBQZFhVRllYUStoZm8xczNMeTZ2VFBqVGY5VEM4eHRTcFA0RlQ3TGUzVXEvYjIvMHBWVXhZY2lrWU1WU01rNTl6akpxQksxYmI3VDJ6L3g4WE12eXh6cGR6bW9RU1MwSWpjZGtwVnJNWkErczB3cnd2QU1WbTIwTTJpM3Izb2hERGIzZ3pqR0oyYnl5RlhuMWl6T2M5eHB0SDJRU2J3bTZJN2NFL05PZHhBQ1c3OWtKUjFPTm1VV3g2UXVmYnU4d1dPS2xBK0VMTFh2YUxMUXZxQ2YyRW82RVJpTmlHSmQxZlVZU0phY1o4R3l1THlGb2xZayt1VjNSSEZ2cHhnV0VmVTBhK0dheGJtOCszeHg4T1hOMzIwa1RRTktqSTQyY3JxZW9CeW0zMDNyMFo1U0NlU2MrY0tDR1J5ZUhXRHBKeHRQNk1ndmEvSWlhbHI4Z2plNFQ4Z3NQaHlpdVZtL0puQXdMTC96RXN1RjF4cVlzMGVpMDVQdThXa3QyYndtbEs4MU9MWEFBSWh3VExYUk1oUXMzRk9EQjBxTFlMZHM4b2ZsZUlGdG1UQVd5ckhZMmxUZGp0R3VEaUhQZzR0QjNjZDgvcWFnaUxpMytPcHpCMmVMUVJDaXBPVGlzbEdXNXRCdkcrSWdaV1JnT3A1N1RoNVBsMWQySFY3bVZwNlFINHpJOXRVR24xcHBPeWdEVFgyTExSQit1N1MzdDZDZkdkc1BtNDdkdFF4V3owVTVPdDF0WThIUzZWNzJGYy9RN2srS005OEJHcEtCc3ZuNE9RaWREcitKQmNoQUFRcTdjSm
xma0JTWGdjeFR3LzRaOEk4ODZlTHRSNU1uOHhvNituZXRzMmd0ci9XRDhHSTFETUh1SlZkclc5MU1rcHpGSS90SDJqbG9BMlc5ejRmN2pXWHMwbnZiU0N0MGdXSUFGbTZhUFNaQXNMN2I1eENyUHhycjdJOHdRMmxRem42RUZYVGdMVlNRQVNsbFUvekdBQUNCREpMc0xQcjVNdDFOMGVvYUN2bm83NXcxeTROZkQ0MzloTlpFRXNjYVZUWGkvOStkbWRlUTRObUUzYldDcHlRQSs5bXdXcERwNG9va0tJWFdFYnFRdW9LRzNmVlNUWHhHMzBTUEErcFdPNGtsa1ZTbVlpZTNYSEZHRzdXc3dNS3lxWnBGSm4veDVGMw==
  IVAAP_TRUST_PUBLIC_KEY: TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUFsQmRYTCtMV1JkWkc3dEFJNGtHamtKN1BwUFkzdC9aODhwOXhJUVdSWDNTVzlQenY5SnZVSnhOODdyU1JmUDd5Zmo1Vk1ybGV2VjFOQldUSXlJZmdWbTRQd0FkdkZKSDlYTDNzaFJMUG9VRkVjVFhVaURrQ1FHY2VJSEx4elBQZXhNbnNvZ2NjZHNqUGtJbnQ2Q3R1bXplNm5DcHpJMnRnSzNPTWFSTENwUUg4cjYrY2FQM0w2Q2t5RlZNRXF2TVg3QmhtbUNkbmRRSWxXa29BWWYxdlhVQ0N6d29tbzM3c09WSzVsZnlYRXUydkNWY2pzTWFJZEU0T1l1MFhWQ1dMZ3BZT0UxTkEza0ZLaXNQem1UY1JCcWJ2ajlSdHp1VS9SUFM3QmR3L2R0Z2pLUFhuNEFIMVJYdlBGamg3L1lJRk5STDZBdlQyYTVrVlVlVVV5emYwTVFJREFRQUI=
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: ivaap
    meta.helm.sh/release-namespace: ivaap
  creationTimestamp: "2025-06-12T15:00:14Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: circle-of-trust-secrets
  namespace: ivaap
  resourceVersion: "187444"
  uid: 5c946003-643c-4953-9ff3-b4657b5ac88c
type: Opaque

Alternatively, passing one of the keys into the command below will output the decoded value:

user@linux:/opt/ivaap/IVAAPHelmTemplate$ kubectl get secret circle-of-trust-secrets -n ivaap -o jsonpath='{.data.IVAAP_TRUST_PRIVATE_AES_ENCRYPTION_KEY}' | base64 -d
mySuperSecretKey
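
To decode every key in a secret at once, kubectl's go-template output format provides a base64decode function. Below is a sketch reusing the circle-of-trust-secrets example; it requires access to a running cluster.

```shell
# Print each key of the secret with its decoded value, one per line
kubectl get secret circle-of-trust-secrets -n ivaap \
  -o go-template='{{range $k, $v := .data}}{{$k}}={{$v | base64decode}}{{"\n"}}{{end}}'
```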

Shell into pods/containers

Sometimes you need to shell into a pod, or into a container within a pod. To do this, use the kubectl exec command.

user@linux:/opt/ivaap/IVAAPHelmTemplate$ kubectl exec -it adminserver-deployment-6b4bfb4497-c4bp6 -n ivaap -- /bin/bash
adminserver-deployment-6b4bfb4497-c4bp6:/opt/ivaap/adminserver$ 

The above example opens a bash shell inside the adminserver pod. The syntax is kubectl exec -it <pod_name> -n <namespace> -- <command>. Since the last part is just a command, any command can be passed, such as printenv to show the environment variables inside the pod.

user@linux:/opt/ivaap/IVAAPHelmTemplate$ kubectl exec -it adminserver-deployment-6b4bfb4497-c4bp6 -n ivaap -- printenv
PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=adminserver-deployment-6b4bfb4497-c4bp6
JAVA_HOME=/opt/java/openjdk
...
...
...

Since the backend pod has multiple containers, the -c flag should be passed to specify which container you want to execute the command in. This must match the name of the backend node found under .Values.ivaapBackendNodes.

user@linux:/opt/ivaap/IVAAPHelmTemplate$ kubectl exec -it ivaap-backend-deployment-765b4dc65-cm6fg -n ivaap -c playnode -- /bin/bash
ivaap-backend-deployment-765b4dc65-cm6fg:/opt/ivaap/ivaap-playserver/deployment$ 

Note that some pods' images do not include bash. In these cases, /bin/sh should be used.
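
For example (using the infinispan pod from the earlier listing purely as an illustration; whether a given image includes bash varies by deployment):

```shell
# Fall back to sh when the container image does not ship bash
kubectl exec -it ivaap-infinispan-deployment-0 -n ivaap -- /bin/sh
```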

Updating Environment Variables

Whenever changes are made to the Helm deployment configuration, the helm upgrade command needs to be run to apply them.

        helm upgrade ivaap /opt/ivaap/IVAAPHelmTemplate \
            -f /opt/ivaap/IVAAPHelmTemplate/values.yaml \
            -f /opt/ivaap/IVAAPHelmTemplate/my-modified-values.yaml \
            --namespace <namespace>

However, if updating environment variables within a ConfigMap, running the helm upgrade alone is not enough. Environment variable changes are not detected as changes that require pods to be recreated. Other changes, such as new image tags, cause the pods to be automatically deleted and recreated to reflect the change - this is not the case with environment variables.

To update the deployment after environment variable changes, first run the helm upgrade command to apply the change, then kill the pod(s) that require this environment variable change.

user@linux:~$ kubectl delete pod ivaap-proxy-deployment-6875dcf849-qjh8h -n ivaap
pod "ivaap-proxy-deployment-6875dcf849-qjh8h" deleted

This also applies to environment variable changes within kubernetes secrets. The kubernetes secret is automatically updated with the variable change; however, the pods that use the secret are not recreated and will continue using the old values. These pods also need to be deleted before the new secret takes effect.
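
As an alternative to deleting pods by hand, kubectl rollout restart recreates every pod in a deployment, and kubectl rollout status can be used to watch progress. A sketch, using the proxy deployment name from the earlier pod listing:

```shell
# Gracefully recreate all pods in the deployment so they pick up the new
# ConfigMap/Secret values, then wait until the rollout completes
kubectl rollout restart deployment ivaap-proxy-deployment -n ivaap
kubectl rollout status deployment ivaap-proxy-deployment -n ivaap
```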

Logs

To ensure effective monitoring and troubleshooting, it is recommended that all application logs be exported to a centralized logging solution. While Kubernetes stores pod logs locally and provides access via kubectl logs, these logs are ephemeral and are lost when pods are restarted or removed.
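
Until a centralized solution is in place, kubectl logs remains useful for ad-hoc troubleshooting. The pod and container names below come from the earlier listings and will differ in your deployment:

```shell
# Follow the last 100 lines from one container of the multi-container backend pod
kubectl logs ivaap-backend-deployment-765b4dc65-cm6fg -n ivaap -c playnode --tail=100 --follow

# Inspect logs from the previous instance of a restarted pod
kubectl logs ivaap-proxy-deployment-6875dcf849-qjh8h -n ivaap --previous
```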

Integrating with a logging platform such as ELK (Elasticsearch, Logstash, Kibana), Grafana Loki, or a managed cloud logging service will provide:

  • Centralized, persistent log storage
  • Powerful search and filtering capabilities
  • Correlation across multiple services and namespaces
  • Better visibility into errors, performance issues, and usage patterns

You can export logs using tools like Fluent Bit, Filebeat, or Promtail to ship them to your chosen backend. This setup is essential for production environments to maintain historical log data and support advanced analysis.

K3s Persisted logs

In K3s deployments, logs can also be persisted to the file system. Refer to K3s Single Server Deployment guide for logging setup.

Making and Restoring Backups

PostgreSQL

Sometimes it may be necessary to make backups of the PostgreSQL database. It is recommended to back up the database before ALL IVAAP upgrades in case unexpected issues arise during migration.

Creating this backup largely depends on how your database is deployed. Typically, though, the psql and pg_dump commands used to access the database and create the backup dump are similar no matter how it is deployed.

Requirements

In order to create the backup, you need the postgresql-client Linux package matching the version of your PostgreSQL database. If your database is version 15.x, you will need postgresql-client-15.

user@linux:~$ psql --version
psql (PostgreSQL) 15.13 (Ubuntu 15.13-1.pgdg24.04+1)

user@linux:~$ pg_dump --version
pg_dump (PostgreSQL) 15.13 (Ubuntu 15.13-1.pgdg24.04+1)

user@linux:~$ pg_restore --version
pg_restore (PostgreSQL) 15.13 (Ubuntu 15.13-1.pgdg24.04+1)

Additionally, the Linux machine where you will collect the backup needs network access to the PostgreSQL database. This can be checked using netcat with the following command: nc -v <database_host> <database_port>

user@linux:~$ nc -v ivaap-eks.djabeijtoc.us-east-2.rds.amazonaws.com 5432
Connection to ivaap-eks.djabeijtoc.us-east-2.rds.amazonaws.com (10.0.0.148) 5432 port [tcp/postgresql] succeeded!

From here, test the connection to the database to ensure you have access. This can be done with the following psql command: psql -h <database_host> -U <database_username> -d <database_name>. This will prompt for a password; enter it, and if successful, you will be connected to the database.

user@linux:~$ psql -h ivaap-eks.djabeijtoc.us-east-2.rds.amazonaws.com -U ivaapserver -d postgres
Password for user ivaapserver: 
psql (15.13 (Ubuntu 15.13-1.pgdg24.04+1), server 15.12)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-ECDSA-AES256-GCM-SHA384, compression: off)
Type "help" for help.

postgres=>

Creating the Backup

Creating the database dump will be done using the pg_dump command.

pg_dump -h DB_HOSTNAME -U DB_USERNAME -p DB_PORT -Fc -W DB_NAME > DB_DUMP_FILENAME.psql

This command creates a backup of the database and saves it to the file name specified. Enter the db password when prompted.

user@linux:~$ pg_dump -h ivaap-eks.djabeijtoc.us-east-2.rds.amazonaws.com -U ivaapserver -p 5432 -Fc -W ivaap_db > 2025-06-02_ivaap-db-eks-dump.psql
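
Before relying on the dump, you can sanity-check it with pg_restore --list, which prints the archive's table of contents without touching any database (the filename is from the example above):

```shell
# Show the first entries of the dump's table of contents to verify it is readable
pg_restore --list 2025-06-02_ivaap-db-eks-dump.psql | head -n 20
```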

Loading the Database Dump

To load the database dump and restore the backup, the custom-format file may first need to be converted using pg_restore. Attempting to load it directly with psql will report this:

user@linux:~$ psql -f 2025-06-02_ivaap-db-eks-dump.psql -h ivaap-eks.djabeijtoc.us-east-2.rds.amazonaws.com -U ivaapserver -d new_ivaapdb
Password for user ivaapserver: 
The input is a PostgreSQL custom-format dump.
Use the pg_restore command-line client to restore this dump to a database.

To convert this file with pg_restore, the syntax is pg_restore -f NEW_CONVERTED_FILENAME.psql -F c EXISTING_FILENAME.psql

pg_restore -f 2025-06-02_ivaap-db-eks-dump_converted.psql -F c 2025-06-02_ivaap-db-eks-dump.psql

Now, you can load the new converted file.

user@linux:~$ psql -f 2025-06-02_ivaap-db-eks-dump_converted.psql -h ivaap-eks.djabeijtoc.us-east-2.rds.amazonaws.com -U ivaapserver -d new_ivaapdb
Password for user ivaapserver: 
SET
SET
SET
SET
SET
 set_config 
------------

(1 row)

SET
SET
SET
SET
CREATE SCHEMA
ALTER SCHEMA
ALTER SCHEMA
CREATE SCHEMA
ALTER SCHEMA
CREATE SCHEMA
ALTER SCHEMA
CREATE SCHEMA
ALTER SCHEMA
...
...
...
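
As an alternative to converting the dump to plain SQL, pg_restore can load a custom-format dump directly into the target database. A sketch using the example hostname and database names from above:

```shell
# Restore the custom-format dump directly into the target database;
# prompts for the database password
pg_restore -h ivaap-eks.djabeijtoc.us-east-2.rds.amazonaws.com -U ivaapserver \
  -W -d new_ivaapdb 2025-06-02_ivaap-db-eks-dump.psql
```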

Backing Up IVAAP Dir

Since IVAAP is deployed as a Helm template, there should be only a single configuration file that requires backing up: the modified deployment values.yaml file containing the deployment-specific configuration. It is recommended to keep this file and the Helm template in your own git repository. If you do not have a git repository, you can create a backup of the entire /opt/ivaap directory by saving it as a tar file.

sudo tar -czvf ivaap.tar.gz /opt/ivaap
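
To restore from this archive later, extract it back to the root of the filesystem; GNU tar strips the leading / when archiving, so -C / recreates /opt/ivaap in place:

```shell
sudo tar -xzvf ivaap.tar.gz -C /
```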

Final Commercial License

IVAAP is sometimes delivered with a short-term evaluation license for ease of deployment or proofs of concept. Once IVAAP is up and running, SLB will provide a new license tied to your account. Supply SLB with the hostname and/or unique IP address used for the deployment, and upon receiving the final license, update the value of LM_LICENSE_FILE in your deployment. This variable is stored as a kubernetes secret:

secrets:
  type:
    k8sSecrets:
      ivaap-license-secret:
        LM_LICENSE_FILE: ""

The license will be provided as a .dat file, which contains the license across multiple lines.

FEATURE IVAAPServer INTD 1.0 26-may-2025 uncounted \
    VENDOR_STRING=users:16 HOSTID=ANY SIGN="00A4 721B 40CF 7861 \
    3FF1 F85B X5X3 8E6E 9909 5F8C 0043 064E A0C1 C479 6AA3 B55E \
    0D59 E08E AM83 3C97 D4DB"

For the variable value, the license needs to be collapsed into a single line and wrapped in three curly brackets ({{{}}}). Below is an example of how the variable value should look:

{{{FEATURE IVAAPServer INTD 1.0 26-may-2025 uncounted VENDOR_STRING=users:16 HOSTID=ANY SIGN="00A4 721B 40CF 7861 3FF1 F85B X5X3 8E6E 9909 5F8C 0043 064E A0C1 C479 6AA3 B55E 0D59 E08E AM83 3C97 D4DB"}}}
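
Converting the multi-line .dat file into this single-line form can be scripted. The sketch below joins the backslash-continued lines with awk and wraps the result; /tmp/license.dat and its contents are hypothetical examples, not a real license:

```shell
# Create a sample multi-line license file (hypothetical contents)
cat > /tmp/license.dat <<'EOF'
FEATURE Example VENDOR 1.0 01-jan-2030 uncounted \
    HOSTID=ANY SIGN="AAAA BBBB"
EOF

# Join the continuation lines into one line: drop each trailing backslash
# (and spaces before it), strip continuation-line indentation, and glue
# the pieces together with single spaces
one_line=$(awk '{
  sub(/[[:space:]]*\\$/, "")
  gsub(/^[[:space:]]+/, "")
  printf "%s%s", sep, $0; sep = " "
} END { print "" }' /tmp/license.dat)

# Wrap the single line in three curly brackets as required for the secret value
printf '{{{%s}}}\n' "$one_line"
```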

Further actions may be required, such as base64 encoding, depending on your deployment. Refer to the IVAAP Secrets section of the General Helm Configuration guide for configuring Kubernetes secrets.

HTTPS

HTTPS is a requirement for IVAAP. It is recommended to only use certificates from a well-known Certificate Authority and to avoid self-signed certificates. Self-signed certificates cause additional complexities that may require passing these certificates into some data nodes' Java Virtual Machines. If an external component that IVAAP connects to uses a self-signed certificate, refer to the Adding Self Signed Root CA to Java Keystore section below.

Let's Encrypt

To enable HTTPS for your website, you need a certificate (a type of file) from a Certificate Authority (CA). Let's Encrypt is one such CA. To get a certificate for your website's domain from Let's Encrypt, you can use the following script, which creates a SWAG (Secure Web Application Gateway) container running an NGINX web server that automates free SSL server certificate generation and renewal. You need to add this deployment server's IP to the DNS target.

Update the command with your valid DNS name. After the certificates are created at the /opt/ivaap/certs/letsencrypt path, you can terminate the swag container and update your TLS kubernetes secret for IVAAP.

docker run \
  --name=swag \
  --cap-add=NET_ADMIN \
  --rm \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=America/New_York \
  -e VALIDATION=http \
  -p 443:443 \
  -p 80:80 \
  -v /opt/ivaap/certs/letsencrypt:/config/etc/letsencrypt \
  -e URL=<your DNS> \
  ghcr.io/linuxserver/swag

Tip

Port 80 on the host must be open for this to work.
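
Once the certificates exist, updating the TLS secret can look like the sketch below. The fullchain.pem/privkey.pem filenames are typical certbot output, so verify the actual paths under /opt/ivaap/certs/letsencrypt; the secret name matches the ivaap-tls-secret seen in the earlier secrets listing:

```shell
# Create or update the TLS secret from the generated certificate and key;
# the dry-run + apply pattern works whether or not the secret already exists
kubectl create secret tls ivaap-tls-secret \
  --cert=/opt/ivaap/certs/letsencrypt/live/<your DNS>/fullchain.pem \
  --key=/opt/ivaap/certs/letsencrypt/live/<your DNS>/privkey.pem \
  -n ivaap --dry-run=client -o yaml | kubectl apply -f -
```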

Self Signed Certificates

Adding Self Signed Root CA to Java Keystore

In some cases, IVAAP may connect to an external server that uses self-signed certificates. This could be an external authentication server, OSDU server, or other data server. In these situations, certain components of the IVAAP backend may not trust the external server due to not recognizing the CA.

However, our base image for building the backend comes from Eclipse Temurin, which incorporates a simple way to pass these certificates into the Java keystore. Using this mechanism, SLB has added simple configuration options to pass the Root CA as a kubernetes secret, which is imported into the keystore when the component starts up.

For more information on how this works in the Eclipse Temurin base image, refer to Eclipse Temurin's official documentation.

Adminserver

If IVAAP is connecting to an external authentication server using self-signed certificates, you may experience errors similar to the code block below after an attempted login:

2025-09-25 14:15:17.786 [com.interactive.ivaap.server.akkaservices.isexternal.IsExternalWorkerActor.AsyncWorkerControllerThread] INFO  c.i.i.s.a.AuthenticationDispatcherLoader - Authentication dispatcher set to LocalDexAuthenticationDispatcher
2025-09-25 14:15:27.878 [com.interactive.ivaap.server.akkaservices.callback.CallbackWorkerActor.AsyncWorkerControllerThread] WARN  c.i.i.s.actors.AbstractWorkerActor - Received error  com.interactive.ivaap.server.akkaservices.callback.CallbackRequest@ee0d5c0 in worker actor. Forwarding to requestor
javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
...
...
...
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
...
...
...
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
...
...
...
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

To resolve this issue, we first need to create a kubernetes secret using the Root CA of the external authentication server.

kubectl create secret generic <secretName> \
 --from-file=root-ca.crt=/path/to/RootCA.crt \
 -n <namespace>

Once the secret is created, Eclipse Temurin expects the root certificate to be located in the /certificates directory inside the adminserver pod. To simplify this process, SLB has added an extraVolumes section to the adminserver block in the Helm template's values.yaml file.

######################################
##        IVAAP Adminserver         ##
######################################
ivaapAdmin:
  extraVolumes: []
  extraVolumeMounts: []
  hostAlias:
    enabled: false
    ipAddress: ""
    hostName: ""
  adminserver:
    repoName: ivaap/adminserver
    tag: adminserver-3.0.47-88a0153-eclipse-temurin-21-jdk-alpine-202509251826Z

Below is an example of how this can be configured in your deployment yaml:

ivaapAdmin:
  extraVolumes:
    - name: root-ca-cert
      secret:
        secretName: <secretName>
  extraVolumeMounts:
    - name: root-ca-cert
      mountPath: /certificates
      readOnly: true

The name of your volume can be anything of your choosing. The important part to remember for this functionality is that secretName points to the same name given to your secret in the previous step, and that the mountPath of this secret is /certificates.

In addition to this, two environment variables need to be applied to the adminserver in your deployment values yaml: JAVA_TOOL_OPTIONS and USE_SYSTEM_CA_CERTS.

configmap:
  adminserver:
    JAVA_TOOL_OPTIONS: "-Djavax.net.ssl.trustStore=$JRE_CACERTS_PATH -Djavax.net.ssl.trustStorePassword=changeit"
    USE_SYSTEM_CA_CERTS: "true"

USE_SYSTEM_CA_CERTS needs to be set to true in order for the Eclipse Temurin base image to know certificates need to be added to the Java keystore.

JAVA_TOOL_OPTIONS needs to be set to the value above because our images run as a non-root user. If the user inside the adminserver were root, this step would not be needed, as Java would use the expected keystore. However, since our user is UID 1000, a new keystore is created in /tmp at startup.

Once all of these changes are applied to the deployment, the external authentication's root CA certificate should be available in the keystore. There are a few things that can be checked to validate this.

First, check that the root CA certificate has been added to the adminserver pod, and is visible in /certificates.

user@linux:~$ ki exec ivaap adminserver
adminserver-deployment-6dccc476bc-82xwp:/opt/ivaap/adminserver$ ls /certificates/
root-ca.crt
adminserver-deployment-6dccc476bc-82xwp:/opt/ivaap/adminserver$ cat /certificates/root-ca.crt
-----BEGIN CERTIFICATE-----
...
...
...
...
-----END CERTIFICATE-----

Second, we can observe the logs when the adminserver starts up. At the very beginning of the logs, you should see something like this:

Running script to check for USE_SYSTEM_CA_CERTS
This will add mounted certificates to the JAVA keystore.
Using a temporary truststore at /tmp/tmp.bT32kovnmc
Picked up JAVA_TOOL_OPTIONS: -Djavax.net.ssl.trustStore=$JRE_CACERTS_PATH -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.trustStore=/tmp/tmp.bT32kovnmc -Djavax.net.ssl.trustStorePassword=changeit
Importing keystore /tmp/tmp.6bSWgx43mT to /tmp/tmp.bT32kovnmc...
Entry for alias digicertassuredidrootca successfully imported.
Entry for alias anfsecureserverrootca successfully imported.
Entry for alias vtrusrootca successfully imported.
Entry for alias ssl.comtlseccrootca2022 successfully imported.
...
...
...
...
Entry for alias ssl.comevrootcertificationauthorityrsar2 successfully imported.
Import command completed:  142 entries successfully imported, 0 entries failed or cancelled
Adding certificate with alias Schlumberger Root CA2 to the JVM truststore
Picked up JAVA_TOOL_OPTIONS: -Djavax.net.ssl.trustStore=$JRE_CACERTS_PATH -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.trustStore=/tmp/tmp.bT32kovnmc -Djavax.net.ssl.trustStorePassword=changeit
Certificate was added to keystore

In the log above, we see that a certificate with alias 'Schlumberger Root CA2' has been added to the keystore successfully. The final confirmation is to attempt a login and ensure the keystore errors have been resolved.
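
You can also inspect the temporary truststore directly with keytool from inside the pod. The /tmp filename is random on each startup, so take it from the pod's startup logs; the path and alias below match the log example above:

```shell
# List the runtime truststore and look for the imported root CA entry
keytool -list -keystore /tmp/tmp.bT32kovnmc -storepass changeit | grep -i "schlumberger"
```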

Backend Nodes

Adding self-signed Root CA certificates to the backend nodes is essentially the same process as for the adminserver, since the backend nodes and adminserver share the same Eclipse Temurin base image. Often, this issue is encountered when an IVAAP data connector attempts to connect to an external data server that uses a self-signed certificate; a common example is an on-prem OSDU deployment. In these cases, the same steps as for the adminserver can be followed.

  • Create kubernetes secret with Root CA
    kubectl create secret generic <secretName> \
     --from-file=root-ca.crt=/path/to/RootCA.crt \
     -n <namespace>
    
  • Set the .Values.ivaapBackendNodes.extraVolumes and .Values.ivaapBackendNodes.extraVolumeMounts to mount the kubernetes secret to /certificates
    ivaapBackendNodes:
      extraVolumes:
        - name: root-ca-cert
          secret:
            secretName: <secretName>
      extraVolumeMounts:
        - name: root-ca-cert
          mountPath: /certificates
          readOnly: true
    
  • Set the environment variables JAVA_TOOL_OPTIONS and USE_SYSTEM_CA_CERTS. In the example below, we set this for the opensdus3r3node; for other nodes, refer to the Backend Node Configmaps section of the General Helm Configuration guide.
    nodeEnvConfigMaps:
      opensdus3r3node:
        JAVA_TOOL_OPTIONS: "-Djavax.net.ssl.trustStore=$JRE_CACERTS_PATH -Djavax.net.ssl.trustStorePassword=changeit"
        USE_SYSTEM_CA_CERTS: "true"
    

After applying, the same steps for validation found in the above adminserver section can be done for the backend node to confirm the certificate was added to the JVM keystore.

user@linux:~$ ki exec ivaap-bhuat backend opensdus3r3node
ubuntu@ivaap-backend-deployment-dd6596c84-bgrvh:/opt/ivaap/ivaap-playserver/deployment$ ls /certificates/
root-ca.crt
ubuntu@ivaap-backend-deployment-dd6596c84-bgrvh:/opt/ivaap/ivaap-playserver/deployment$ cat /certificates/*
-----BEGIN CERTIFICATE-----
...
...
...
...
-----END CERTIFICATE-----

The same logs should be visible as well, indicating the certificate was added:

Using a temporary truststore at /tmp/tmp.bT32kovnmc
Picked up JAVA_TOOL_OPTIONS: -Djavax.net.ssl.trustStore=$JRE_CACERTS_PATH -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.trustStore=/tmp/tmp.bT32kovnmc -Djavax.net.ssl.trustStorePassword=changeit
Importing keystore /tmp/tmp.6bSWgx43mT to /tmp/tmp.bT32kovnmc...
Entry for alias digicertassuredidrootca successfully imported.
Entry for alias anfsecureserverrootca successfully imported.
Entry for alias vtrusrootca successfully imported.
Entry for alias ssl.comtlseccrootca2022 successfully imported.
...
...
...
...
Entry for alias ssl.comevrootcertificationauthorityrsar2 successfully imported.
Import command completed:  142 entries successfully imported, 0 entries failed or cancelled
Adding certificate with alias Schlumberger Root CA2 to the JVM truststore
Picked up JAVA_TOOL_OPTIONS: -Djavax.net.ssl.trustStore=$JRE_CACERTS_PATH -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.trustStore=/tmp/tmp.bT32kovnmc -Djavax.net.ssl.trustStorePassword=changeit
Certificate was added to keystore

Adding Self Signed Root CA to Linux Truststore

Some backend data connectors use native code, which in IVAAP uses curl or libcurl for the majority of its calls. So, similarly to the JVM, this can break when self-signed certificates are in use. In such cases, you may have already added the root CA certificate to the JVM keystore, but not to the Linux truststore inside the container. This causes behavior where some data works and can be visualized, but other data (such as seismic) will not. Below is an example log from the opensdus3r3node where this error occurred:

Caused by: java.io.IOException: Seismic dms lock dataset failed: CURL error received for: 'https://osdu-data-platform.com/path/to/seismic/data'. CURL error code: 60, CURL error string: 'SSL peer certificate or SSH remote key was not OK' -

To resolve this, set the environment variable SSL_CERT_FILE to point to the same root CA certificate that was mounted for the JVM keystore.

nodeEnvConfigMaps:
  opensdus3r3node:
    JAVA_TOOL_OPTIONS: "-Djavax.net.ssl.trustStore=$JRE_CACERTS_PATH -Djavax.net.ssl.trustStorePassword=changeit"
    USE_SYSTEM_CA_CERTS: "true"
    SSL_CERT_FILE: "/certificates/root-ca.crt"

In some cases, the CURL_CA_BUNDLE environment variable may also need to be set to point to the same certificate.
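
To confirm the native-code path trusts the certificate, you can run curl from inside the backend container against the data server. The pod name and namespace are placeholders, and the URL is the example hostname from the error above:

```shell
# Expect an HTTP status code on stdout rather than a certificate error (code 60)
kubectl exec -it <backend_pod_name> -n <namespace> -c opensdus3r3node -- \
  sh -c 'SSL_CERT_FILE=/certificates/root-ca.crt curl -sS -o /dev/null -w "%{http_code}\n" https://osdu-data-platform.com/'
```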