General Helm Configuration
This section covers the primary configuration requirements for IVAAP using the Helm Template. The steps in this section apply to all supported platforms. For deployment details specific to your platform, refer to the other guides relevant to your environment.
IVAAP Deliverable Package¶
The IVAAP package will be delivered as a .tar.gz file. When extracted, you will see the following content:
user@linux:~/2025.1.1_BASE_IVAAPHelmTemplate_Chart-v1.1.8-2025-11-18$ ls
IVAAPHelmTemplate IVAAP_Documentation_2025.1 docker-images.tar.gz ivaap-helpers ivaap-postgres-2024.1-2024-12-06.sql
- IVAAPHelmTemplate - The IVAAP Helm Template. This contains all of the IVAAP helm charts that will be used for deployment. This directory should not be modified during deployment.
- IVAAP_Documentation_2025.1 - The latest IVAAP documentation package, containing all documentation related to deployment and configuration. Inside the documentation directory is an index.html file - simply open this file with a web browser to browse the documentation locally.
- ivaap-helpers - This directory contains helpful resources for deployment, including scripts, functions, and aliases. We highly recommend configuring and using our functions and aliases by sourcing them in your .bashrc. Information on how to set up and use ivaap-helpers can be found in the IVAAP Helpers supplemental guide.
- ivaap-postgres-2024.1-2024-12-06.sql - A starting PostgreSQL schema used to initialize the Postgres database. This file name is subject to change with new schema versions.
- docker-images.tar.gz - The base package container images. This is an image tar file that contains all of the images necessary to run IVAAP.
Note
The BASE package image tar file does not contain any backend data connectors. The connectors will be delivered as separate, individual image .tar.gz files in the connectors directory of your delivery artifact.
Pre-Deployment Steps¶
Before deploying IVAAPHelmTemplate, some pre-deployment setup should be handled first, depending on your environment and specific needs. For environment-specific details, please refer to the Deployment Guides for your environment, then return here once that environment-specific setup is complete.
IVAAP Namespace¶
Before getting started, it is best to create IVAAP's namespace, as it will be needed to begin adding secrets for the deployment to use. For simplicity, this guide will be written in the context of our namespace being called ivaap.
kubectl create namespace ivaap
Pre-Deployment Secrets¶
TLS Secret¶
IVAAP requires TLS certificates, and the certificate and key can be passed as a kubernetes secret. The configuration for this can be found in .Values.environment.TLSSecret:
environment:
  TLSSecret:
    # ----- TLS is required for IVAAP. If not terminated externally, TLS cert and key should be set as a kubernetes secret.
    # ----- Only set TLSSecret.enabled=true if TLS termination will happen within the ingress controller for the deployment.
    # ----- TLSSecret enabled does not need to be set if termination happens external to the IVAAP deployment.
    # ----- The secretName should be referenced here as the value "secretName"
    # ----- Below is an example bash script for creating this kubernetes secret:
    # -----
    # ----- #!/bin/bash
    # ----- kubectl create -n ivaap secret tls <secretName> \
    # -----   --cert=path/to/cert.crt \
    # -----   --key=path/to/private.key
    enabled: ""
    secretName: ""
To use this kubernetes secret, set .Values.environment.TLSSecret.enabled to true, then set .Values.environment.TLSSecret.secretName to the name you want to give the secret. The secret can be created with the following command:
kubectl create -n ivaap secret tls <secretName> \
  --cert=path/to/cert.crt \
  --key=path/to/private.key
This method requires the certificate and key to be on the file system where you are performing the deployment. Once the secret is created, these files can be removed.
Any method can be used for creating the TLS Secret. The important thing to remember is that the correct secret name is referenced in .Values.environment.TLSSecret.secretName in order for IVAAP to be able to use the secret.
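If a certificate is not yet available in a test environment, a self-signed pair can be generated with openssl before creating the secret. This is only a sketch for non-production use; the CN and file names below are examples:

```shell
# Generate a throwaway self-signed certificate and key
# (replace the CN with your IVAAP hostname)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout private.key -out cert.crt -subj "/CN=ivaap-domain.com"
```

The resulting cert.crt and private.key can then be passed to the kubectl create secret tls command shown above, and removed from the file system once the secret exists.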
If deploying in AWS EKS, TLS secret does not have to be configured if you are assigning DNS in Route53 to the ingress controller. IVAAP's Application Load Balancer will collect the TLS certificate from ACM automatically and terminate HTTPS.
IVAAP Image Pull Secret¶
For your kubernetes cluster, there may already be an existing method of authenticating to your container image registry to pull the images for deployment. However, if there is no such method already in place, we have included an optional configuration for using a secret to authenticate. This can be found at .Values.environment.imagePullSecrets:
environment:
  imagePullSecrets:
    # ----- [ OPTIONAL ]
    # ----- imagePullSecrets allows for a simple secret to access a container registry in order to pull docker images
    # ----- Example script to delete and recreate this secret to re-authenticate (requires docker):
    # -----
    # ----- #!/bin/bash
    # ----- kubectl delete secret <secret_name> -n <namespace>
    # ----- aws ecr get-login-password --region <aws_region> | docker login --username AWS --password-stdin <container_registry_url>
    # ----- kubectl create secret generic <secret_name> --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson -n <namespace>
    # -----
    # ----- This is an optional feature and does not have to be enabled if your cluster already has access to your container registry.
    enabled: ""
    secretName: ""
Configuration is similar to TLSSecret. Set .Values.environment.imagePullSecrets.enabled to true, and set .Values.environment.imagePullSecrets.secretName to the desired name of your secret. Our example for creating the secret does require docker, but there are other methods of creating this secret to authenticate to your registry. No matter which method you choose, the logic in the helm template will still work as long as the correct secretName is referenced.
#!/bin/bash
kubectl delete secret <secretName> -n <namespace>
aws ecr get-login-password --region <aws_region> | docker login --username AWS --password-stdin <container_registry_url>
kubectl create secret generic <secretName> --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson -n <namespace>
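If docker is not available on the machine performing the deployment, kubectl can build the credential secret directly using its built-in docker-registry secret type. Below is a sketch using the same ECR login flow as the example above; values in angle brackets are placeholders:

```shell
kubectl create secret docker-registry <secretName> -n <namespace> \
  --docker-server=<container_registry_url> \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region <aws_region>)"
```

This produces a secret of type kubernetes.io/dockerconfigjson, the same type the helm template expects.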
Deployment Methods¶
IVAAP can be deployed many different ways with Helm. Since this is a universal template, no files need direct modification. The primary deployment method is to deploy using two values.yaml files. Alternatively, the values required for your deployment can be passed into helm directly, using GitOps tools or deployment pipelines.
Helm with Multiple Values Files¶
Deploying with two values.yaml files is the primary deployment method for IVAAPHelmTemplate. The values.yaml file found in the template should remain just that - a template. This file contains some default values, and will create the general structure for the deployment. A second values.yaml file should be created from the template file. This second file is where all IVAAP configuration for your environment should live.
To create this file, simply copy values.yaml to a new file, then strip the new file down to only the configuration details you need for your deployment. As a starting point, some example yaml files have been provided in the template, located at IVAAPHelmTemplate/deployment-examples/deploy-with-two-values-files. Below is a stripped down version of the example file example-k3s-native-secrets.yaml:
namespace: "ivaap"
environment:
  hostname: "ivaap-domain.com"
  type:
    k3s:
      enabled: true
      logging:
        # ----- Set persistLogs to true in order to persist logs to the k3s host file system.
        # ----- This requires proper permissions in order to work - if permissions are incorrect, IVAAP will fail.
        # ----- Refer to File Permissions section of the K3s Single Server Deployment guide.
        persistLogs: false
        localLogsPath: "/opt/ivaap/ivaap-volumes/logs"
  authentication:
    externalAuthEnabled: "false"
  TLSSecret:
    # ----- Refer to values.yaml for configuration details
    enabled: true
    secretName: "ivaap-tls-secret"
ivaapBackendNodes:
  dataNodes:
    geofilesmasternode:
      enabled: true
    geofilesseismicnode:
      enabled: true
    geofilesreservoirsnode:
      enabled: true
  datanodePVCs:
    geofiles:
      localPath: /opt/ivaap/ivaap-volumes/geofiles
      mountPath: /opt/ivaap/ivaap-volumes/geofiles
secrets:
  base64EncodedValues: false
  type:
    nativek8s:
      enabled: true
  k8sSecrets:
    circle-of-trust-secrets:
      IVAAP_TRUST_PRIVATE_AES_ENCRYPTION_KEY: 'detail.Kindly10'
      IVAAP_TRUST_PRIVATE_KEY: 'hWgDxGVKDb5j6oHiWwRvR+TBeMCb0NM'
      IVAAP_TRUST_PUBLIC_KEY: 'MIIBIjANBgkqhkiG9w0B'
    activemq-conf-secrets:
      IVAAP_WS_MQ_QUEUE_PASSWORD: 'ENC(UVtkT9XocmNVOKUIUDncEA==)'
    ivaap-license-secret:
      # ----- This is a dummy, placeholder license, but use this same format.
      # ----- Place your license between the three curly brackets {{{}}} and on a single line.
      LM_LICENSE_FILE: '{{{FEATURE IVAAPServer INTD 1.0 22-jan-2025 uncounted VENDOR_STRING=users:10 HOSTID=ANY SIGN="0183 2198 7E60 A2E1 33CF 1CC3 A5D4 5670 N8S7 D200 K9K9 D400 8686 3F9E ASDF 1ADB JKLH 722F C7A7 C9E4 991F"}}}'
    adminserver-conf-secrets:
      # ----- PostgreSQL DB Connection Configuration
      IVAAP_SERVER_ADMIN_DATABASE_HOST: 'ivaap-postgres-host'
      IVAAP_SERVER_ADMIN_DATABASE_NAME: 'ivaapdb'
      IVAAP_SERVER_ADMIN_DATABASE_PORT: '5432'
      IVAAP_SERVER_ADMIN_DATABASE_USERNAME: 'ivaapserver'
      IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTION_KEY: 'dbEncryptionKey'
      IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTED_PASSWORD: 'dbEncryptedPassword'
      IVAAP_COMMON_TEAM_ENCRYPTION_KEY: 'commonEncryptionKey'
nodeEnvConfigMaps:
  playnode:
    IVAAP_EXAMPLE_ENVAR: "EXAMPLE_VALUE"
  geofilesmasternode:
    IVAAP_EXAMPLE_ENVAR: "EXAMPLE_VALUE"
configmap:
  adminserver:
    IVAAP_SERVER_ADMIN_AUTO_MIGRATE: "false"
In this example, values have been configured to:
- Deploy in a K3s environment on a single server VM
- Persist logs to the file system
- Use local authentication
- Configure TLS and image pull secrets
- Deploy Geofiles backend nodes (including the path to the local data)
- Use native kubernetes secrets, configured directly
All of the values in this second yaml file reference values in the primary values.yaml from the template. Any value found in this second values file will override the values in the primary values.yaml. Once the second file is created, deployment can be performed with the following command:
helm upgrade --install ivaap /opt/ivaap/IVAAPHelmTemplate \
  -f /opt/ivaap/IVAAPHelmTemplate/values.yaml \
  -f /opt/ivaap/deployment-config.values.yaml \
  --namespace <namespace>
This process can be used in any kubernetes environment using helm. The template's primary values.yaml should always be referenced first in the command, followed by the modified second values file. This allows the second file's values to take priority over the defaults in values.yaml.
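Before running the upgrade, the merged values can be rendered locally to catch configuration mistakes early. helm template renders the manifests without touching the cluster (paths match the example above):

```shell
helm template ivaap /opt/ivaap/IVAAPHelmTemplate \
  -f /opt/ivaap/IVAAPHelmTemplate/values.yaml \
  -f /opt/ivaap/deployment-config.values.yaml \
  --namespace <namespace> > /dev/null && echo "chart renders cleanly"
```

Any YAML or templating error in the second values file will surface here instead of during the live upgrade.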
GitOps Tools¶
IVAAPHelmTemplate was designed to be easy to use with GitOps tools, since everything is configurable via values.yaml. Inside the template, we have included examples of deployment with some popular GitOps tools, located at IVAAPHelmTemplate/deployment-examples/deploy-with-gitops. For all GitOps methods, the IVAAPHelmTemplate will need to be stored in your own git repository for the GitOps tool to pull from. SLB does not provide direct access to the IVAAPHelmTemplate repository.
Any GitOps tool can be used to deploy the IVAAPHelmTemplate, and the example deployment files in this section should be viewed only as examples. The deployment files may vary depending on your environment and tool.
Additionally, all GitOps methods can also still be deployed using two values.yaml files, instead of passing values through the GitOps deployment yaml itself.
ArgoCD¶
Deploying with ArgoCD is straightforward, and somewhat similar to deploying with a second values file. Once a second values file has been created and configured for your environment's needs, it is easy to adapt it into an ArgoCD Application yaml. Below is the example from the helm template:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ivaap
  namespace: argo-cd
spec:
  project: default
  source:
    repoURL: 'https://User@git-repo.com/IVAAPHelmTemplate'
    path: '.'
    targetRevision: 'repoBranch'
    helm:
      valueFiles:
        - values.yaml
      values: |
        namespace: "ivaap"
        environment:
          hostname: "ivaap-domain.com"
          type:
            aws:
              enabled: true
              # ----- Include subnets for your VPC
              # ----- Refer to templates/aws/ivaap-ingress.yaml
              awsSubnets: "subnet1,subnet2"
          authentication:
            externalAuthEnabled: "true"
            ifExternal:
              externalAuthType: "awsCognito"
          TLSSecret:
            # ----- Refer to values.yaml for configuration details
            enabled: true
            secretName: "ivaap-tls-secret"
          imagePullSecrets:
            # ----- Refer to values.yaml for configuration details
            enabled: "true"
            secretName: "intecrcred"
        secrets:
          base64EncodedValues: true
          type:
            nativek8s:
              enabled: true
          # All secrets defined in this section must be base64 encoded.
          k8sSecrets:
            circle-of-trust-secrets:
              IVAAP_TRUST_PRIVATE_AES_ENCRYPTION_KEY: "bXlFbmNyeXB0aW9uS2V5"
              IVAAP_TRUST_PRIVATE_KEY: "BASE64_ENCODED_PRIVATE_KEY"
              IVAAP_TRUST_PUBLIC_KEY: "BASE64_ENCODED_PUBLIC_KEY"
            activemq-conf-secrets:
              IVAAP_WS_MQ_QUEUE_PASSWORD: "RU5DKFVWdGtUOVhvY21OVk9LVUlVRG5jRUE9PSk="
            ivaap-license-secret:
              LM_LICENSE_FILE: "BASE64_ENCODED_LICENSE"
            adminserver-conf-secrets:
              # ----- PostgreSQL DB Connection Configuration
              IVAAP_SERVER_ADMIN_DATABASE_HOST: "aXZhYXAtcG9zdGdyZXMtaG9zdA=="
              IVAAP_SERVER_ADMIN_DATABASE_NAME: "aXZhYXBkYg=="
              IVAAP_SERVER_ADMIN_DATABASE_PORT: "NTQzMg=="
              IVAAP_SERVER_ADMIN_DATABASE_USERNAME: "aXZhYXBzZXJ2ZXI="
              IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTION_KEY: "ZGJFbmNyeXB0aW9uS2V5"
              IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTED_PASSWORD: "ZW5jcnlwdGVkLWRiLXBhc3N3b3Jk"
              # ----- AWS Cognito Authentication
              # ----- Only use this section if .Values.environment.authentication.ifExternal.externalAuthType equals awsCognito
              #######
              # ----- https://ivaap-domain.com/IVAAPServer/api/v2/callback
              IVAAP_AWS_COGNITO_CALLBACK_URL: "aHR0cHM6Ly9pdmFhcC1kb21haW4uY29tL0lWQUFQU2VydmVyL2FwaS92Mi9jYWxsYmFjaw=="
              # ----- https://ivaap-domain.com/ivaap/viewer/ivaap.html
              IVAAP_AWS_COGNITO_VIEWER_URL: "aHR0cHM6Ly9pdmFhcC1kb21haW4uY29tL2l2YWFwL3ZpZXdlci9pdmFhcC5odG1s"
              # ----- user1@email.com,user2@email.com,user3@email.com
              IVAAP_AWS_COGNITO_ADMIN_USERS: "dXNlcjFAZW1haWwuY29tLHVzZXIyQGVtYWlsLmNvbSx1c2VyM0BlbWFpbC5jb20="
              # ----- https://cognito-idp.REGION.amazonaws.com/REGION_aa1b2c3D4/.well-known/openid-configuration
              IVAAP_AWS_COGNITO_DISCOVERY_URL: "aHR0cHM6Ly9jb2duaXRvLWlkcC5SRUdJT04uYW1hem9uYXdzLmNvbS9SRUdJT05fYWExYjJjM0Q0Ly53ZWxsLWtub3duL29wZW5pZC1jb25maWd1cmF0aW9u"
              IVAAP_AWS_COGNITO_CLIENT_ID: "BASE64_ENCODED_CLIENT_ID"
              IVAAP_AWS_COGNITO_ENCRYPTED_CLIENT_SECRET: "BASE64_ENCODED_ENCRYPTED_CLIENT_SECRET"
              # ----- openid email
              IVAAP_AWS_COGNITO_SCOPE: "b3BlbmlkIGVtYWls"
        ivaapFrontend:
          osdu: true
        ivaapBackendNodes:
          dataNodes:
            opensdublobstorager3node:
              enabled: "true"
        configmap:
          adminserver:
            IVAAP_SERVER_ADMIN_AUTO_MIGRATE: "false"
            IVAAP_REQUIRE_EXTERNAL_AUTH: "true"
            IVAAP_AWS_COGNITO_USER_DOMAIN_NAME: "DefaultDomain"
            IVAAP_AWS_COGNITO_USER_GROUP_NAME: "DefaultGroup"
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: ivaap
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: false
    syncOptions:
      - Validate=false
      - Timeout=600
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
As you can see, all values can be configured at spec.source.helm.values and passed into ArgoCD. Alternatively, ArgoCD deployments can still be done with a second, hard-coded values.yaml file stored in your repo alongside the template.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ivaap
  namespace: argo-cd
spec:
  project: default
  source:
    repoURL: 'https://User@git-repo.com/IVAAPHelmTemplate'
    path: '.'
    targetRevision: 'repoBranch'
    helm:
      valueFiles:
        - values.yaml
        - deployment-config.values.yaml
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: ivaap
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: false
    syncOptions:
      - Validate=false
      - Timeout=600
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
Notice that spec.source.helm.valueFiles references both the template values.yaml and the second values file, deployment-config.values.yaml. This is a good way to track the entire deployment with git, including IVAAP configuration. Of course, ensure this is stored in a private repository when setting sensitive secrets.
Passing Values Through Pipeline¶
IVAAPHelmTemplate deployments can also be done through automation tools, like Jenkins and GitHub Actions. These tools allow for full control of when to deploy, and can be configured for manual deployments, or triggered based on a change in the git repository.
Likewise, these deployment methods can also still use two values.yaml files instead of passing values directly through the pipeline.
Jenkins¶
Jenkins is one of the most popular choices for automation tools. IVAAP can be deployed easily with a Jenkins pipeline.
Jenkins pipelines can also be configured to act similarly to GitOps. Jenkins does not sync like GitOps tools do, but a Jenkins pipeline can be configured to run when a new commit is pushed to a repository. This type of setup behaves much like GitOps, since it deploys automatically when changes are detected.
Below is an example Jenkinsfile for deploying IVAAPHelmTemplate:
pipeline {
    agent any
    environment {
        KUBECONFIG_FILE = "${WORKSPACE}/kubeconfig"
    }
    tools {
        helm 'helm3'
    }
    stages {
        stage('Prepare kubeconfig') {
            steps {
                withCredentials([string(credentialsId: 'kubeconfig', variable: 'KCFG')]) {
                    writeFile file: "${KUBECONFIG_FILE}", text: "${KCFG}"
                    sh 'chmod 600 ${KUBECONFIG_FILE}'
                }
            }
        }
        stage('Checkout Helm Chart') {
            steps {
                git url: 'https://git-repo.com/IVAAPHelmTemplate', branch: 'repoBranch'
            }
        }
        stage('Deploy with Helm') {
            steps {
                withCredentials([
                    string(credentialsId: 'enc_key', variable: 'ENC_KEY'),
                    file(credentialsId: 'private_key', variable: 'PRIVATE_KEY_FILE'),
                    string(credentialsId: 'public_key', variable: 'PUBLIC_KEY'),
                    string(credentialsId: 'mq_password', variable: 'MQ_PASS'),
                    string(credentialsId: 'license', variable: 'LICENSE'),
                    string(credentialsId: 'pg_host', variable: 'PG_HOST'),
                    string(credentialsId: 'pg_name', variable: 'PG_NAME'),
                    string(credentialsId: 'pg_port', variable: 'PG_PORT'),
                    string(credentialsId: 'pg_user', variable: 'PG_USER'),
                    string(credentialsId: 'pg_encryption_key', variable: 'PG_ENC_KEY'),
                    string(credentialsId: 'pg_encrypted_password', variable: 'PG_ENC_PASS'),
                    string(credentialsId: 'cognito_callback_url', variable: 'COGNITO_CALLBACK'),
                    string(credentialsId: 'cognito_viewer_url', variable: 'COGNITO_VIEWER'),
                    string(credentialsId: 'cognito_admin_users', variable: 'COGNITO_ADMINS'),
                    string(credentialsId: 'cognito_discovery_url', variable: 'COGNITO_DISCOVERY'),
                    string(credentialsId: 'cognito_client_id', variable: 'COGNITO_ID'),
                    string(credentialsId: 'cognito_encrypted_client_secret', variable: 'COGNITO_SECRET')
                ]) {
                    withEnv(["KUBECONFIG=${KUBECONFIG_FILE}"]) {
                        script {
                            def PRIVATE_KEY_B64 = sh(script: 'base64 -w 0 "$PRIVATE_KEY_FILE"', returnStdout: true).trim()
                            // Dollar signs are escaped so the secrets are expanded by the shell
                            // rather than Groovy, echo -n avoids encoding a trailing newline,
                            // and the comma in the subnet list is escaped for helm --set.
                            def helmCmd = """
                                helm upgrade --install ivaap . --namespace ivaap --create-namespace \
                                  --set namespace=ivaap \
                                  --set environment.hostname=ivaap-domain.com \
                                  --set environment.type.aws.enabled=true \
                                  --set 'environment.type.aws.awsSubnets=subnet1\\,subnet2' \
                                  --set environment.authentication.externalAuthEnabled=true \
                                  --set environment.authentication.ifExternal.externalAuthType=awsCognito \
                                  --set environment.TLSSecret.enabled=true \
                                  --set environment.TLSSecret.secretName=ivaap-tls-secret \
                                  --set environment.imagePullSecrets.enabled=true \
                                  --set environment.imagePullSecrets.secretName=intecrcred \
                                  --set secrets.type.nativek8s.enabled=true \
                                  --set secrets.base64EncodedValues=true \
                                  --set secrets.k8sSecrets.circle-of-trust-secrets.IVAAP_TRUST_PRIVATE_AES_ENCRYPTION_KEY=\$(echo -n "\$ENC_KEY" | base64 -w 0) \
                                  --set secrets.k8sSecrets.circle-of-trust-secrets.IVAAP_TRUST_PRIVATE_KEY=${PRIVATE_KEY_B64} \
                                  --set secrets.k8sSecrets.circle-of-trust-secrets.IVAAP_TRUST_PUBLIC_KEY=\$(echo -n "\$PUBLIC_KEY" | base64 -w 0) \
                                  --set secrets.k8sSecrets.activemq-conf-secrets.IVAAP_WS_MQ_QUEUE_PASSWORD=\$(echo -n "\$MQ_PASS" | base64 -w 0) \
                                  --set secrets.k8sSecrets.ivaap-license-secret.LM_LICENSE_FILE=\$(echo -n "\$LICENSE" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_HOST=\$(echo -n "\$PG_HOST" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_NAME=\$(echo -n "\$PG_NAME" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_PORT=\$(echo -n "\$PG_PORT" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_USERNAME=\$(echo -n "\$PG_USER" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTION_KEY=\$(echo -n "\$PG_ENC_KEY" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTED_PASSWORD=\$(echo -n "\$PG_ENC_PASS" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_CALLBACK_URL=\$(echo -n "\$COGNITO_CALLBACK" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_VIEWER_URL=\$(echo -n "\$COGNITO_VIEWER" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_ADMIN_USERS=\$(echo -n "\$COGNITO_ADMINS" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_DISCOVERY_URL=\$(echo -n "\$COGNITO_DISCOVERY" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_CLIENT_ID=\$(echo -n "\$COGNITO_ID" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_ENCRYPTED_CLIENT_SECRET=\$(echo -n "\$COGNITO_SECRET" | base64 -w 0) \
                                  --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_SCOPE=\$(echo -n 'openid email' | base64 -w 0) \
                                  --set ivaapFrontend.osdu=true \
                                  --set ivaapBackendNodes.dataNodes.opensdublobstorager3node.enabled=true \
                                  --set configmap.adminserver.IVAAP_SERVER_ADMIN_AUTO_MIGRATE=false \
                                  --set configmap.adminserver.IVAAP_REQUIRE_EXTERNAL_AUTH=true \
                                  --set configmap.adminserver.IVAAP_AWS_COGNITO_USER_DOMAIN_NAME=DefaultDomain \
                                  --set configmap.adminserver.IVAAP_AWS_COGNITO_USER_GROUP_NAME=DefaultGroup
                            """
                            sh helmCmd
                        }
                    }
                }
            }
        }
    }
    post {
        success { echo "Helm deployment complete." }
        failure { echo "Helm deployment failed." }
    }
}
Credentials can be configured in Jenkins for each of the kubernetes secrets. This allows Jenkins to also act as its own secrets manager.
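One detail worth noting when pipelines like the one above pipe secret values through base64: plain echo appends a trailing newline, which becomes part of the encoded value, so use echo -n (or printf '%s') to ensure the stored secret decodes to exactly the original string. A quick illustration using the placeholder key from the earlier example:

```shell
# base64 of the raw value vs. the value plus echo's trailing newline
printf '%s' 'dbEncryptionKey' | base64 -w 0   # ZGJFbmNyeXB0aW9uS2V5
echo 'dbEncryptionKey' | base64 -w 0          # ZGJFbmNyeXB0aW9uS2V5Cg==
```

A secret encoded with the trailing newline will decode to the wrong value at runtime, which can be difficult to debug.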
GitHub Actions¶
GitHub Actions is also compatible with IVAAPHelmTemplate, and the setup is very similar to the Jenkinsfile, other than being written in yaml. Below is an example where the IVAAP kubernetes secrets are passed into helm from GitHub Secrets:
name: Deploy IVAAP with Helm
on:
  workflow_dispatch:
    inputs:
      repoBranch:
        description: 'Branch of IVAAPHelmTemplate to deploy'
        required: true
        default: 'main'
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      KUBECONFIG_FILE: ${{ github.workspace }}/kubeconfig
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4
      - name: Setup Helm
        uses: azure/setup-helm@v4
        with:
          version: v3.14.0
      - name: Prepare kubeconfig
        run: |
          echo "${{ secrets.KUBECONFIG }}" > $KUBECONFIG_FILE
          chmod 600 $KUBECONFIG_FILE
      - name: Checkout Helm Chart
        uses: actions/checkout@v4
        with:
          repository: your-org/IVAAPHelmTemplate
          ref: ${{ github.event.inputs.repoBranch }}
          path: IVAAPHelmTemplate
      - name: Helm Deploy
        # echo -n avoids encoding a trailing newline into each secret, and the
        # comma in the subnet list is escaped for helm --set.
        run: |
          cd IVAAPHelmTemplate
          helm upgrade --install ivaap . --namespace ivaap --create-namespace \
            --set namespace=ivaap \
            --set environment.hostname=ivaap-domain.com \
            --set environment.type.aws.enabled=true \
            --set 'environment.type.aws.awsSubnets=subnet1\,subnet2' \
            --set environment.authentication.externalAuthEnabled=true \
            --set environment.authentication.ifExternal.externalAuthType=awsCognito \
            --set environment.TLSSecret.enabled=true \
            --set environment.TLSSecret.secretName=ivaap-tls-secret \
            --set environment.imagePullSecrets.enabled=true \
            --set environment.imagePullSecrets.secretName=intecrcred \
            --set secrets.type.nativek8s.enabled=true \
            --set secrets.base64EncodedValues=true \
            --set secrets.k8sSecrets.circle-of-trust-secrets.IVAAP_TRUST_PRIVATE_AES_ENCRYPTION_KEY=$(echo -n "${{ secrets.ENC_KEY }}" | base64 -w 0) \
            --set secrets.k8sSecrets.circle-of-trust-secrets.IVAAP_TRUST_PRIVATE_KEY=$(echo -n "${{ secrets.PRIVATE_KEY }}" | base64 -w 0) \
            --set secrets.k8sSecrets.circle-of-trust-secrets.IVAAP_TRUST_PUBLIC_KEY=$(echo -n "${{ secrets.PUBLIC_KEY }}" | base64 -w 0) \
            --set secrets.k8sSecrets.activemq-conf-secrets.IVAAP_WS_MQ_QUEUE_PASSWORD=$(echo -n "${{ secrets.MQ_PASS }}" | base64 -w 0) \
            --set secrets.k8sSecrets.ivaap-license-secret.LM_LICENSE_FILE=$(echo -n "${{ secrets.LICENSE }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_HOST=$(echo -n "${{ secrets.PG_HOST }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_NAME=$(echo -n "${{ secrets.PG_NAME }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_PORT=$(echo -n "${{ secrets.PG_PORT }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_USERNAME=$(echo -n "${{ secrets.PG_USER }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTION_KEY=$(echo -n "${{ secrets.PG_ENC_KEY }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTED_PASSWORD=$(echo -n "${{ secrets.PG_ENC_PASS }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_CALLBACK_URL=$(echo -n "${{ secrets.COGNITO_CALLBACK }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_VIEWER_URL=$(echo -n "${{ secrets.COGNITO_VIEWER }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_ADMIN_USERS=$(echo -n "${{ secrets.COGNITO_ADMINS }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_DISCOVERY_URL=$(echo -n "${{ secrets.COGNITO_DISCOVERY }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_CLIENT_ID=$(echo -n "${{ secrets.COGNITO_ID }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_ENCRYPTED_CLIENT_SECRET=$(echo -n "${{ secrets.COGNITO_SECRET }}" | base64 -w 0) \
            --set secrets.k8sSecrets.adminserver-conf-secrets.IVAAP_AWS_COGNITO_SCOPE=$(echo -n 'openid email' | base64 -w 0) \
            --set ivaapFrontend.osdu=true \
            --set ivaapBackendNodes.dataNodes.opensdublobstorager3node.enabled=true \
            --set configmap.adminserver.IVAAP_SERVER_ADMIN_AUTO_MIGRATE=false \
            --set configmap.adminserver.IVAAP_REQUIRE_EXTERNAL_AUTH=true \
            --set configmap.adminserver.IVAAP_AWS_COGNITO_USER_DOMAIN_NAME=DefaultDomain \
            --set configmap.adminserver.IVAAP_AWS_COGNITO_USER_GROUP_NAME=DefaultGroup
        env:
          KUBECONFIG: ${{ env.KUBECONFIG_FILE }}
Container Images¶
Pushing Images to a Container Registry¶
When deploying to a cloud Kubernetes provider, you will be required to store the images in a container registry for the kubernetes service to pull from. Container images will be provided by SLB via multiple tar.gz files. These images will need to be loaded, retagged, and pushed to your own container registry.
Note
This is not required for K3s deployments, as images can be loaded directly into K3s, which will be addressed later in this guide.
For easy retagging, a script is included in the ivaap-helpers repository (ivaap-helpers/image-retagging/retag-images.sh). This script will load the docker images from the provided tar files, retag them for your own registry, and push them. To use this script, first authenticate to your registry, then create a directory called images next to the script, place all image tar files in that directory, and run the script. You will be prompted for your base registry - enter it and press Enter (example: caspian.azurecr.io). For more details on this retagging script, refer to the IVAAP Helpers supplemental guide.
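The steps above can be sketched as the following sequence (the delivery path is an example):

```shell
cd ivaap-helpers/image-retagging
mkdir -p images
cp /path/to/delivery/*.tar.gz images/
./retag-images.sh
# When prompted, enter your base registry, e.g. caspian.azurecr.io
```

Remember to authenticate to your registry (for example with docker login) before running the script, since it pushes the retagged images immediately.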
Registry and Image Structure¶
The base container registry URL can be found in .Values.registry.base.
# ----- Docker Registry Base URL
registry:
  # ----- Default SLB Registry
  base: "caspian.azurecr.io"
Though it is not required to keep the image repos and tags as they are configured in values.yaml, it is highly recommended to do so to reduce additional configuration. To better understand this structure, below is the full context for the reverse proxy image:
# ----- From values.yaml
registry:
  base: "caspian.azurecr.io"
ivaapProxy:
  proxy:
    repoName: ivaap/proxy
    tag: proxy-2024.0-cce98866-20250627T135324Z
# ----- From templates/common/proxy.yaml
spec:
  template:
    spec:
      containers:
        - name: ivaap-proxy
          image: "{{ .Values.registry.base }}/{{ .Values.ivaapProxy.proxy.repoName }}:{{ .Values.ivaapProxy.proxy.tag }}"
The above yaml snippets show a clear picture of how the full image reference is pieced together from the base registry, repo name, and tag. The full proxy image is caspian.azurecr.io/ivaap/proxy:proxy-2024.0-cce98866-20250627T135324Z.
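The composition can be reproduced in plain shell to sanity-check a custom registry value before deploying (values taken from the snippet above):

```shell
# The helm template joins base/repoName with a colon before the tag
base="caspian.azurecr.io"
repoName="ivaap/proxy"
tag="proxy-2024.0-cce98866-20250627T135324Z"
echo "${base}/${repoName}:${tag}"
# caspian.azurecr.io/ivaap/proxy:proxy-2024.0-cce98866-20250627T135324Z
```

If you change .Values.registry.base to your own registry, every image reference in the deployment is re-composed the same way.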
Loading Images in K3s¶
For K3s deployments, images do not have to be pushed to a container registry; they can instead be loaded directly into K3s using k3s ctr images import:
k3s ctr images import /path/to/deployment/docker-images.tar.gz
Do this for all image tar files in your delivery package. Once complete, your k3s deployment will have access to all images without any dependency on pulling from a registry.
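To import every tar file in one pass and confirm the images are present, a loop such as the following can be used (the path is an example, and root privileges are typically required):

```shell
# Import each image tar from the delivery, then list the loaded IVAAP images
for f in /path/to/deployment/*.tar.gz; do
  sudo k3s ctr images import "$f"
done
sudo k3s ctr images ls -q | grep ivaap
```

If an image is missing from the listing, re-import its tar file before deploying.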
3rd Party Services¶
NGINX Reverse Proxy¶
IVAAP in kubernetes uses a simple HTTP NGINX configuration as its reverse proxy. Even though the config is HTTP, HTTPS is expected to be terminated at the load balancer or ingress controller. Even so, many security features are baked into the reverse proxy image, including security headers. Some of these are configurable options.
configmap:
  proxy:
    IVAAP_HYPERTEXT_PROTOCOL: "https"
    IVAAP_DEPLOYMENT_TYPE: "k8s"
    IVAAP_STOMP_ENABLED: "true"
    # ----- [OPTIONAL] - Advanced proxy settings
    # ----- These settings are for enhanced security headers, and will require NGINX knowledge combined with trial and error.
    # ----- Content Security Policy can be a very difficult and troublesome setup process, depending on the level of security goal.
    # IVAAP_PROXY_GLOBAL_ALLOW_ORIGIN: ""
    # IVAAP_PROXY_CONTENT_SECURITY_POLICY: ""
Above is the proxy configmap where environment variables can be configured. The three defaults enabled here should remain as they are. The two optional settings are for additional security features. IVAAP_PROXY_GLOBAL_ALLOW_ORIGIN maps to the Access-Control-Allow-Origin security header, which by default is set to the * wildcard. For improved security, this header can be set to the IVAAP domain.
IVAAP_PROXY_CONTENT_SECURITY_POLICY is a much more advanced feature, and can require extensive troubleshooting, trial, and error to achieve a high level of security. A Content Security Policy (CSP) gives you full control on what is and is not allowed. Below is a basic example:
configmap:
  proxy:
    IVAAP_HYPERTEXT_PROTOCOL: "https"
    IVAAP_DEPLOYMENT_TYPE: "k8s"
    IVAAP_STOMP_ENABLED: "true"
    IVAAP_PROXY_CONTENT_SECURITY_POLICY: |
      add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; object-src 'none'; base-uri 'self';";
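Once deployed, the headers actually being served can be checked with curl against your IVAAP hostname, which is useful when iterating on a CSP (the domain below is an example):

```shell
curl -sI https://ivaap-domain.com/ | grep -i '^content-security-policy'
```

Combined with the browser's developer console, which reports blocked resources, this makes the trial-and-error loop for tightening the policy much faster.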
Apache ActiveMQ¶
IVAAP’s messaging is backed by an Apache ActiveMQ server. ActiveMQ is a Java-based message broker which is designed to securely and efficiently handle a large number of concurrent messages via an isolated message channel for each active websocket connection.
The configmap for ActiveMQ can be found in values.yaml:
configmap:
backendactivemq:
IVAAP_WS_MQ_QUEUE_HOST: "ivaap-activemq-service"
IVAAP_WS_MQ_CONN_RETRY_COUNT: 20
IVAAP_WS_MQ_CONN_RETRY_INTERVAL: 5000
IVAAP_WS_MQ_POOLED_CONN_ENABLED: "true"
ACTIVEMQ_OPTS_MEMORY: "-Xms512M -Xmx4G -Djava.net.preferIPv4Stack=true -Divaapsystem.mq.queue.enabled=on"
Tuning ActiveMQ¶
IVAAP’s connection to ActiveMQ is tunable: retry counts and intervals can be increased where the environment calls for it. These envars should be applied to playnode and mqgatewaynode.
IVAAP_WS_MQ_CONN_RETRY_INTERVAL=150000
IVAAP_WS_MQ_CONN_RETRY_COUNT=3
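These envars can be applied through the nodeEnvConfigMaps section covered later in this guide. Below is a sketch of one way to do so, assuming both nodes should use the tuned values:

```yaml
nodeEnvConfigMaps:
  playnode:
    IVAAP_WS_MQ_CONN_RETRY_INTERVAL: 150000
    IVAAP_WS_MQ_CONN_RETRY_COUNT: 3
  mqgatewaynode:
    IVAAP_WS_MQ_CONN_RETRY_INTERVAL: 150000
    IVAAP_WS_MQ_CONN_RETRY_COUNT: 3
```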
Infinispan¶
Infinispan is an optional caching service used by IVAAP for real-time data caching. When enabled, the Infinispan server handles all of the caching for the IVAAP backend nodes and scheduled tasks. This allows for higher performance when monitoring real-time data, as well as general performance gains by reducing the overhead of internal communication and cluster activities.
At the time of writing, Infinispan is only available for use with the witsmlnode.
To enable Infinispan, there are a few things that need to be set in values.yaml.
- The witsmlnode must be deployed (.Values.ivaapBackendNodes.dataNodes.witsmlnode.enabled)
ivaapBackendNodes:
dataNodes:
witsmlnode:
enabled: true
repoName: ivaap/backend/witsmlnode
tag: witsmlnode-3.0-1-c2f557d-20250708T223606Z
- Infinispan must be enabled (.Values.ivaapBackendNodes.infinispan.enabled)
ivaapBackendNodes:
infinispan:
enabled: true
- Realtime must be enabled (.Values.ivaapBackendNodes.realtimeEnabled)
ivaapBackendNodes:
realtimeEnabled: true
- IVAAP_JCACHE_ENABLED in the infinispan configmap must be set to true
configmap:
backendinfinispan:
IVAAP_JCACHE_ENABLED: "true"
These are the primary config changes required to deploy and use Infinispan. However, additional configuration is still needed.
backendinfinispan:
IVAAP_JCACHE_ENABLED: "false"
# ----- If IVAAP_JCACHE_ENABLED = true, set the envars below following deployment documentation.
IVAAP_INFINISPAN_CLIENT_HOTROD_AUTH__USERNAME: "Ivaapuser"
IVAAP_INFINISPAN_CLIENT_HOTROD_AUTH__PASSWORD_ENCRYPTED: f1B5Cw7mwc+Kdd36XZXnlg==
IVAAP_INFINISPAN_CLIENT_ENCRYPTION_KEY: "shrank.Salary5"
# ----- Below are examples for deployer and admin users - refer to deployment guide for steps on how to create these users.
IVAAP_INFINISPAN_DEPLOYER_USER_CREDENTIALS: |
Ivaapuser=scram-sha-1\:BYGcIAzOEPEN3b8lpDxiWMAktGrZDzonv8fVogf7GVi5EZl/Aw\=\=;scram-sha-256\:BYGcIAxfDp7OIFiYv/6kE70SuTNXwYL09mMcBSEa5GmeCdrHo4f1Zhnk7rmpX29C6w\=\=;scram-sha-384\:BYGcIAx73MYDX2sCVxPjN0rC2V2MOHKx+/MEjI5cRkWIiV7iMx6V/mYnh2JhDyQdsXdRUMADwOslW6R2HYLcoRs\=;scram-sha-512\:BYGcIAxWR7ckt/v7jx+/LmCOS0gjgzY+kCT3pPqNYS6ieWpdHcMjlZKz8x+5kb481vu/xvB+NrXnGv/4+9OG4UQBb2iuSU8RxNDYG4jZ5ZNh;digest-md5\:AglJdmFhcHVzZXIHZGVmYXVsdNPsjpYwvLHo3YKK73ruRyE\=;digest-sha\:AglJdmFhcHVzZXIHZGVmYXVsdHBvc1SDdhG4Wdz8Nj4EInFL/7g2;digest-sha-256\:AglJdmFhcHVzZXIHZGVmYXVsdHNQU71b3743dchoy3ndXEQoRnlagWT7vcF+7UbMHTsd;digest-sha-384\:AglJdmFhcHVzZXIHZGVmYXVsdNQbtLiAnOdR1tvjEgrC1v3/DolyfNeSENnaCw1M6utG7ZbpWv26fA1YezRaazKluQ\=\=;digest-sha-512\:AglJdmFhcHVzZXIHZGVmYXVsdC0df0zp7rNZe6+20TgE/xsZXSznDg9VG6uil9p6PNqxHoJaOPpjUmZ8gRcyjsm0wks/2gPHdJj0RCIkH49kHfE\=;
IVAAP_INFINISPAN_ADMIN_USER_CREDENTIALS: |
Ivaapadmin=scram-sha-1\:BYGcIAwRd0UDYHjlThc8A1Hr2WeYKALggq0vP0Ysz4APdrfsaw\=\=;scram-sha-256\:BYGcIAxgERNdoIxqKqDuFZgTw71fhl0gJf7OZejVe/Cbs5Q4KpKZ/Mdh3nwQgPB/5A\=\=;scram-sha-384\:BYGcIAzpBATsSURyndcX3Myidxdk+MGKvOvmz06aWfTaLfoJXe7sd9AEDlBEJw38EWDxZG0sGvAh+Q8994PMftw\=;scram-sha-512\:BYGcIAyfx3GqwstbkFBgDNKv1NCQxu2LAgdrvX94OhxzxittEEGsXEh1UTnX+XzMT2f9ev3yKdFiS3reo1jD9SZEo3ZapSz7juNFUh0/x+eG;digest-md5\:AgpJdmFhcGFkbWluB2RlZmF1bHRGtJdvUGUl6lqxZUsg4pzy;digest-sha\:AgpJdmFhcGFkbWluB2RlZmF1bHTeLEsNROrcUcKL6q6TtIDi/rRfkw\=\=;digest-sha-256\:AgpJdmFhcGFkbWluB2RlZmF1bHTuxqS7anEX+F29iMQ6FCrKjZQhDcTUBz0yrW8J8BoMiA\=\=;digest-sha-384\:AgpJdmFhcGFkbWluB2RlZmF1bHSOLSA6smivQ24d0+/+jMbQmdP/xUG0hMMMiEOTmlK/vH01QS5I1djeJuwiEZ8oIiA\=;digest-sha-512\:AgpJdmFhcGFkbWluB2RlZmF1bHTPO9JgyuMtW1w9lZLKHpBrGFEmyV6q4fYEHLJKSVpK4jknsxXh0UchRlDz5SuWDqdRs4DRwUSxVTk3/pVxCgo4;
For more details on how to configure Infinispan, refer to the in-depth IVAAP Scalability Supplemental guide.
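Putting the steps above together, a minimal values.yaml fragment that enables Infinispan could look like the following. The image tag is copied from the example above; yours may differ:

```yaml
ivaapBackendNodes:
  realtimeEnabled: true
  infinispan:
    enabled: true
  dataNodes:
    witsmlnode:
      enabled: true
      repoName: ivaap/backend/witsmlnode
      tag: witsmlnode-3.0-1-c2f557d-20250708T223606Z

configmap:
  backendinfinispan:
    IVAAP_JCACHE_ENABLED: "true"
```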
Authentication¶
IVAAP is compatible with multiple authentication types.
environment:
authentication:
# ----- If authentication.externalAuthEnabled = false, local/vanilla authentication will be used.
# ----- Local Authentication: Users are stored in PostgreSQL database and managed by IVAAP administrator users
# ----- External Authentication:
# ----- Azure AD: Users are managed by external Azure AD Service.
# ----- AWS Cognito: Users are managed by external AWS Cognito Service.
externalAuthEnabled: ""
ifExternal:
# ----- Options: awsCognito, azureAD
externalAuthType: ""
Local Authentication¶
With local authentication, all user details and information are created and stored in the PostgreSQL database. This does not rely on any external IdP. The IVAAP Adminserver handles all user authentication and connection with the database. Users are created manually by an IVAAP Administrator. This is the default configuration used by IVAAP, unless .Values.environment.authentication.externalAuthEnabled is set to true.
External Authentication¶
AWS Cognito¶
To enable AWS Cognito authentication, the following values must be set:
environment:
authentication:
externalAuthEnabled: "true"
ifExternal:
externalAuthType: "awsCognito"
These settings will force IVAAP to use external authentication, but additional configuration is still needed. Now IVAAP needs to know details about the external authentication server in order to connect.
The following IVAAP_AWS_COGNITO secret values must be set:
secrets:
type:
k8sSecrets:
adminserver-conf-secrets:
# ----- AWS Cognito Authentication
# ----- Only use this section if .Values.environment.authentication.ifExternal.externalAuthType equals awsCognito
IVAAP_AWS_COGNITO_CALLBACK_URL: ""
IVAAP_AWS_COGNITO_VIEWER_URL: ""
IVAAP_AWS_COGNITO_ADMIN_USERS: ""
IVAAP_AWS_COGNITO_DISCOVERY_URL: ""
IVAAP_AWS_COGNITO_CLIENT_ID: ""
IVAAP_AWS_COGNITO_ENCRYPTED_CLIENT_SECRET: ""
IVAAP_AWS_COGNITO_SCOPE: ""
As well as a few variables found in the adminserver configmap:
configmap:
adminserver:
# ----- AWSCognito Only
IVAAP_AWS_COGNITO_USER_DOMAIN_NAME: "DefaultDomain"
IVAAP_AWS_COGNITO_USER_GROUP_NAME: "DefaultGroup"
IVAAP_AWS_COGNITO_END_SESSION_URL: ""
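As a sketch, a filled-in AWS Cognito configuration could look like the following. Every value shown here (URLs, IDs, emails) is a placeholder to be replaced with the details of your own Cognito user pool and IVAAP domain:

```yaml
secrets:
  type:
    k8sSecrets:
      adminserver-conf-secrets:
        # ----- All values below are placeholders
        IVAAP_AWS_COGNITO_CALLBACK_URL: "https://ivaap.example.com/callback"
        IVAAP_AWS_COGNITO_VIEWER_URL: "https://ivaap.example.com"
        IVAAP_AWS_COGNITO_ADMIN_USERS: "admin@example.com"
        # ----- Standard Cognito OIDC discovery URL format: region + user pool ID
        IVAAP_AWS_COGNITO_DISCOVERY_URL: "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE/.well-known/openid-configuration"
        IVAAP_AWS_COGNITO_CLIENT_ID: "EXAMPLECLIENTID"
        IVAAP_AWS_COGNITO_ENCRYPTED_CLIENT_SECRET: "ENC(EXAMPLE==)"
        IVAAP_AWS_COGNITO_SCOPE: "openid email profile"
```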
For more in-depth information on configuration for external authentication, please refer to the IVAAP Authentication guide.
Azure AD¶
To enable Azure AD authentication, the following values must be set:
environment:
authentication:
externalAuthEnabled: "true"
ifExternal:
externalAuthType: "azureAD"
These settings will force IVAAP to use external authentication, but additional configuration is still needed. Now IVAAP needs to know details about the external authentication server in order to connect.
The following IVAAP_AZURE_AD secret values must be set:
secrets:
type:
k8sSecrets:
adminserver-conf-secrets:
# ----- Azure AD Authentication
# ----- Only use this section if .Values.environment.authentication.ifExternal.externalAuthType equals azureAD
IVAAP_AZURE_AD_CALLBACK_URL: ""
IVAAP_AZURE_AD_VIEWER_URL: ""
IVAAP_AZURE_AD_ADMIN_USERS: ""
IVAAP_AZURE_AD_DISCOVERY_URL: ""
IVAAP_AZURE_AD_CLIENT_ID: ""
IVAAP_AZURE_AD_ENCRYPTED_CLIENT_SECRET: ""
IVAAP_AZURE_AD_SCOPE: ""
IVAAP_AZURE_AD_TENANT_ID: ""
As well as a few variables found in the adminserver configmap:
configmap:
adminserver:
# ----- AzureAD Only
IVAAP_AZURE_AD_USER_DOMAIN_NAME: "DefaultDomain"
IVAAP_AZURE_AD_USER_GROUP_NAME: "DefaultGroup"
IVAAP_AZURE_AD_USE_USER_INFO_ENDPOINT: ""
IVAAP_AZURE_AD_END_SESSION_URL: ""
For more in-depth information on configuration for external authentication, please refer to the IVAAP Authentication guide.
IVAAP Frontend¶
IVAAP's frontend comprises three components:
######################################
## IVAAP Frontend ##
######################################
ivaapFrontend:
viewerName: ivaap-dashboard
viewer2Name: ivaap-dashboard-publish
adminName: ivaap-admin
osdu: false
images:
viewer:
repoName: ivaap/frontend/dashboard-standard
tag: ivaap-dashboard-standard-3.4.0
externalTag: ivaap-dashboard-standard-external-3.4.0
osduTag: osdu-standard-3.4.0
viewer2:
repoName: ivaap/frontend/dashboard-publish
tag: ivaap-dashboard-publish-3.4.0
externalTag: ivaap-dashboard-publish-external-3.4.0
osduTag: osdu-publish-3.4.0
admin:
repoName: ivaap/frontend/admin-client
tag: ivaap_admin-3.0.1
For each of the two viewer containers, there are three tags - standard tag, external tag, and OSDU tag. The tag that is deployed is determined by specific configurations. The standard tag is deployed by default, and implies that local authentication is being used. The external tag is deployed if .Values.environment.authentication.externalAuthEnabled is set to true.
The OSDU tag requires external authentication. It can be deployed if .Values.environment.authentication.externalAuthEnabled is set to true, and if .Values.ivaapFrontend.osdu is set to true.
All of these variations will be provided in the docker image tar file.
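For example, to deploy the OSDU variants of both viewers, the following combination of settings would be used:

```yaml
environment:
  authentication:
    externalAuthEnabled: "true"

ivaapFrontend:
  osdu: true
```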
IVAAP Backend¶
Core Nodes¶
The core nodes are required and deployed in ALL IVAAP deployments. These core nodes, combined with the enabled data nodes, make up the entirety of the IVAAP Backend Pekko cluster.
# [ IVAAP Backend Core Nodes ]
# ----- Core nodes used for all IVAAP deployments
coreNodes:
seednode:
repoName: ivaap/backend/seednode
tag: seednode-3.0-3-0af410b-20250708T223606Z
adminnode:
repoName: ivaap/backend/adminnode
tag: adminnode-3.0-3-0af410b-20250708T223606Z
playnode:
repoName: ivaap/backend/playnode
tag: playnode-3.0-3-0af410b-20250708T223606Z
epsgnode:
repoName: ivaap/backend/epsgnode
tag: epsgnode-3.0-3-0af410b-20250708T223606Z
mqgatewaynode:
repoName: ivaap/backend/mqgatewaynode
tag: mqgatewaynode-3.0-3-0af410b-20250708T223606Z
messagingnode:
repoName: ivaap/backend/messagingnode
tag: messagingnode-3.0-3-0af410b-20250708T223606Z
Seednode¶
The seed node is the orchestration point of the cluster, with an address and port known to all other nodes.
Playnode¶
The play node is the “gateway” for HTTP and WS requests into the cluster, i.e., the initial contact point for requests, and it routes these requests to the correct node. It also moves web socket messages to and from a dedicated ActiveMQ websocket client instance topic when enabled.
Adminnode¶
The admin node (not to be confused with the Admin Backend or Adminserver) handles root data source requests, such as connector configuration requests. It also supplies metrics and licensing information.
EPSGNode¶
The epsg node is used to perform all coordinate conversions.
MQGatewaynode¶
The mqgateway node is the “gateway” which moves ActiveMQ websocket topic messages to and from data nodes in the cluster. It is not end-user accessible as a datasource.
Messagingnode¶
The messaging node is for sending messages to a channel. Each channel is designated by a URL.
Data Nodes (Connectors)¶
The data nodes for IVAAP are the primary nodes used as connectors. Each serves a specific purpose for connecting to certain types of data.
# [ IVAAP Backend Data Nodes ]
# ----- IVAAP Data nodes - deployment specific, and depends on data type to be visualized.
# ----- Set enabled to true for the nodes provided to you.
dataNodes:
dataimportnode:
enabled: false
repoName: ivaap/backend/dataimportnode
tag: dataimportnode-3.0-3-0af410b-20250708T223606Z
mongonode:
enabled: false
repoName: ivaap/backend/mongonode
tag: mongonode-3.0-6-77b1f74-20250708T223606Z
s3node:
enabled: false
repoName: ivaap/backend/s3node
tag: s3node-3.0-4-7b59464-20250708T223606Z
blobstoragenode:
enabled: false
repoName: ivaap/backend/blobstoragenode
tag: blobstoragenode-3.0-3-bb04d54-20250708T223606Z
cloudstoragenode:
enabled: false
repoName: ivaap/backend/cloudstoragenode
tag: cloudstoragenode-3.0-3-fd24b41-20250708T223606Z
ppdmnode:
enabled: false
repoName: ivaap/backend/ppdmnode
tag: ppdmnode-3.0-4-5fcc873-20250708T223606Z
witsmlnode:
enabled: false
repoName: ivaap/backend/witsmlnode
tag: witsmlnode-3.0-1-c2f557d-20250708T223606Z
# ----- OSDU Data nodes
opensdus3r3node:
enabled: false
repoName: ivaap/backend/opensdus3r3node
tag: opensdus3r3node-3.0-4-6d06752-20250708T223606Z
opensdublobstorager3node:
enabled: false
repoName: ivaap/backend/opensdublobstorager3node
tag: opensdublobstorager3node-3.0-4-6d06752-20250708T223606Z
opensducloudstoragenode:
enabled: false
repoName: ivaap/backend/opensducloudstoragenode
tag: opensducloudstoragenode-3.0-4-6d06752-20250708T223606Z
# ----- Geofiles Data Nodes
geofilesmasternode:
enabled: false
repoName: ivaap/backend/geofilesmasternode
tag: geofilesmasternode-3.0-2-f38d7a2-20250708T223606Z
pvcs:
- geofiles
geofilesseismicnode:
enabled: false
repoName: ivaap/backend/geofilesseismicnode
tag: geofilesseismicnode-3.0-2-f38d7a2-20250708T223606Z
pvcs:
- geofiles
geofilesreservoirsnode:
enabled: false
repoName: ivaap/backend/geofilesreservoirsnode
tag: geofilesreservoirsnode-3.0-2-f38d7a2-20250708T223606Z
pvcs:
- geofiles
Enabling/Disabling Data Nodes¶
To enable or disable a data node, simply set the enabled flag to true/false. Each datanode will have this flag - .Values.ivaapBackendNodes.dataNodes.<nodename>.enabled. For example, to enable the witsmlnode, all that is required is the following:
ivaapBackendNodes:
dataNodes:
witsmlnode:
enabled: true
This will deploy the node using the configured image tag. This image tag must be available in the docker image tar file in order for it to be used. License restrictions may apply. Reach out to SLB for more information regarding adding new nodes to your license.
Data Node Persistent Volumes (datanodePVCs)¶
Data nodes can mount persistent storage via the datanodePVCs system. Each entry in datanodePVCs defines a named volume with a mount path and storage backend. Data nodes reference these volumes by name in their pvcs list — the Helm template automatically creates the corresponding PersistentVolume and PersistentVolumeClaim resources for any PVC referenced by an enabled data node.
Three storage types are supported:
| Type | Use Case | Required Fields |
|---|---|---|
| hostPath (default) | k3s only — mounts a local directory on the node | localPath, mountPath |
| nfs | NFS share — works on all platforms | type: nfs, server, path, mountPath |
| smb | SMB/CIFS share — requires the smb.csi.k8s.io CSI driver | type: smb, source, secretName, mountPath |
Warning
hostPath volumes are only supported on k3s deployments. The Helm template will fail with an error if a hostPath PVC is used on a non-k3s platform. Use nfs or smb for EKS, AKS, and OpenShift deployments.
hostPath Example (k3s only)¶
datanodePVCs:
geofiles:
localPath: /opt/ivaap/ivaap-volumes/geofiles
mountPath: /opt/ivaap/ivaap-volumes/geofiles
If localPath is omitted, it defaults to /opt/ivaap/ivaap-volumes/<pvc-name>.
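Relying on that default, the hostPath example above can be shortened to the following, which is equivalent to setting localPath explicitly:

```yaml
datanodePVCs:
  geofiles:
    # localPath defaults to /opt/ivaap/ivaap-volumes/geofiles
    mountPath: /opt/ivaap/ivaap-volumes/geofiles
```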
NFS Example¶
datanodePVCs:
geofiles:
type: nfs
server: 192.168.1.100
path: /exports/geofiles
mountPath: /opt/ivaap/ivaap-volumes/geofiles
storageCapacity: 100Gi
SMB Example¶
SMB Dependency
The CSI driver for SMB must be installed on your Kubernetes cluster as a prerequisite to SMB attachment.
datanodePVCs:
geofiles:
type: smb
source: "//fileserver/geofiles-share"
secretName: smb-credentials
mountPath: /opt/ivaap/ivaap-volumes/geofiles
storageCapacity: 100Gi
The SMB secret (.Values.datanodePVCs.geofiles.secretName) must contain username and password keys, and must be created in the IVAAP namespace before deployment.
Example:
kubectl -n <namespace> create secret generic <secret_name> \
--from-literal=username='USERNAME' \
--from-literal=password='PASSWORD'
Linking PVCs to Data Nodes¶
Data nodes reference PVCs by name in their pvcs list. A node can mount multiple PVCs, and multiple nodes can share the same PVC:
ivaapBackendNodes:
dataNodes:
geofilesmasternode:
enabled: true
repoName: ivaap/backend/geofilesmasternode
tag: geofilesmasternode-3.0-2-f38d7a2-20250708T223606Z
pvcs:
- geofiles
Only PVCs that are referenced by at least one enabled data node will have their PV/PVC resources created.
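As a sketch of the multi-PVC case, the node below mounts two volumes. Both names must exist under datanodePVCs; seismic-cache is an invented example name:

```yaml
datanodePVCs:
  geofiles:
    mountPath: /opt/ivaap/ivaap-volumes/geofiles
  seismic-cache:
    mountPath: /opt/ivaap/ivaap-volumes/seismic-cache

ivaapBackendNodes:
  dataNodes:
    geofilesseismicnode:
      enabled: true
      repoName: ivaap/backend/geofilesseismicnode
      tag: geofilesseismicnode-3.0-2-f38d7a2-20250708T223606Z
      pvcs:
        - geofiles
        - seismic-cache
```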
Multi-Namespace Deployments¶
PersistentVolumes are cluster-scoped in Kubernetes, so deploying multiple IVAAP instances in separate namespaces on the same cluster requires unique PV names. The Helm template automatically prefixes PV names with the namespace (<namespace>-<pvcname>-pv), ensuring each namespace gets its own PV/PVC pair without conflicts.
For NFS and SMB volumes, ReadWriteMany access mode is used so that multiple pods (or namespaces pointing to the same backing storage) can read and write concurrently. hostPath volumes use ReadWriteOnce.
Witsml Scheduled Tasks¶
IVAAP supports real-time monitoring with the witsmlnode. For real-time monitoring to work, the witsml scheduled tasks are required.
ivaapBackendTasks:
deploymentName: ivaap-scheduledtasks
replicas: 1
# [ IVAAP Backend Data Scheduled Tasks ]
# ----- These tasks will only be enabled if the witsmlnode is deployed and Values.ivaapBackendNodes.realtimeEnabled = true
scheduledTasks:
witsmlwelltrajectorypollingtask:
repoName: ivaap/task-release
tag: witsmlwelltrajectorypollingtask_release-3.0.1-1511246-202507091432-20250709T143406Z
witsmlliveupdatetask:
repoName: ivaap/task-release
tag: witsmlliveupdatetask_release-3.0.1-1511246-202507091432-20250709T143406Z
witsmlpollingtask:
repoName: ivaap/task-release
tag: witsmlpollingtask_release-3.0.1-1511246-202507091432-20250709T143406Z
These scheduled tasks will be deployed automatically if the witsmlnode is deployed, and if .Values.ivaapBackendNodes.realtimeEnabled equals true.
{{- if and .Values.ivaapBackendNodes.dataNodes.witsmlnode.enabled .Values.ivaapBackendNodes.realtimeEnabled }}
The scheduled tasks will be in their own backend pod, separate from the nodes.
NAME READY STATUS RESTARTS AGE
adminserver-deployment-6b4bfb4497-dqrq2 1/1 Running 0 7d2h
ivaap-activemq-deployment-6b6956684f-rfhdt 1/1 Running 0 27d
ivaap-admin-deployment-5b8f44cc4c-8jp96 1/1 Running 0 5d22h
ivaap-backend-deployment-765b4dc65-4xvjq 16/16 Running 0 7d5h
ivaap-dashboard-deployment-96b65978f-k5zhp 1/1 Running 0 14d
ivaap-dashboard-publish-deployment-6bc4d97ddf-wwcjc 1/1 Running 0 14d
ivaap-proxy-deployment-6875dcf849-wwwf2 1/1 Running 0 26d
ivaap-scheduledtasks-deployment-645f6fd6d7-zdps7 3/3 Running 0 7d5h
Real-time in IVAAP works with or without the Infinispan caching service. However, if Infinispan is enabled (.Values.ivaapBackendNodes.infinispan.enabled), the scheduled task containers will automatically mount the Infinispan configmap, as seen in this snippet from the scheduledtask template file:
spec:
template:
spec:
containers:
# Universal scheduled tasks block
{{- range $taskName, $taskConfig := .Values.ivaapBackendTasks.scheduledTasks }}
- name: {{ $taskName }}
envFrom:
{{- if $.Values.ivaapBackendNodes.infinispan.enabled }}
- configMapRef:
name: backend-infinispan
{{- end }}
For more in-depth configuration details on Infinispan, refer to the IVAAP Scalability Supplemental guide.
Backend Node Configmaps¶
For all of the backend nodes (Core and Data), environment variables can be easily added in values.yaml through the nodeEnvConfigMaps section.
# ----- In this section, environment variables can easily be added to any backend node.
# ----- Just make a new section for the node you want to add environment variables to.
# ----- This must use the expected node name. Ex: adminnode, mongonode, witsmlnode, etc.
nodeEnvConfigMaps:
playnode:
IVAAP_SERVER_MAX_STREAM_BYTES: "1000000000"
IVAAP_NODE_JAVA_OPTS: "-Dplay.server.http.idleTimeout=75s"
IVAAP_JVM_MAX_MEMORY: "4G"
epsgnode:
IVAAP_CRS_CONFIGS_PATH: "/opt/ivaap/ivaap-playserver/deployment/ivaapnode/conf/ivaapcrsconfigs"
SIS_DATA: "/opt/ivaap/ivaap-playserver/deployment/ivaapnode/conf/sisdata"
mqgatewaynode:
IVAAP_MQGATEWAY_STOMP_ENABLED: "true"
IVAAP_MQGATEWAY_STOMP_TIMEOUT: 360000
An environment variable can be added to any node here, but the nodename must be correct and match the node name found in .Values.ivaapBackendNodes.
For example, to add an environment variable to the seednode:
# ----- Location of node name
ivaapBackendNodes:
coreNodes:
seednode: # ----- Must match this node name
nodeEnvConfigMaps:
seednode: # ----- Matching node name
IVAAP_TEST_ENVAR: "test" # ----- New Environment Variable
Host Alias¶
In some use cases, users may need a way to set a host entry in the hosts file for a backend node. We have added an optional feature to achieve this.
ivaapBackendNodes:
# ----- [ OPTIONAL ] Set a host alias to allow a hosts file entry within the backend node pod.
# ----- Example use case: k3s deployment with a mongo database running in a docker container on the same VM -
# ----- ipAddress would equal the VM LAN IP, and hostName could be set to the mongo database container name.
# ----- This would allow setting the container name in connector properties for the mongo database host.
hostAlias:
enabled: false
ipAddress: ""
hostName: ""
As mentioned in the comments from values.yaml, an example use case of this could be a k3s deployment with a mongo database running in a docker container on the same VM. Below is an example of that scenario:
user@linux:~/opt/ivaap/IVAAPHelmTemplate$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
43e723939390 mongo:3.4 "docker-entrypoint.s…" 4 weeks ago Up 4 weeks 0.0.0.0:27017->27017/tcp, [::]:27017->27017/tcp ivaap-mongodb
Above is a docker container running a mongo database on the same VM as the IVAAPHelmTemplate deployment. In the connector properties for mongonode, we want the host name to match the container name ivaap-mongodb. The configuration below maps ivaap-mongodb to the VM's LAN IP of 10.42.0.1.
ivaapBackendNodes:
hostAlias:
enabled: true
ipAddress: "10.42.0.1"
hostName: "ivaap-mongodb"
Below shows the hosts file entry within the mongonode container in the backend pod:
ubuntu@ivaap-backend-deployment-765b4dc65-4xvjq:/opt/ivaap/ivaap-playserver/deployment$ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.42.0.172 ivaap-backend-deployment-765b4dc65-4xvjq
# Entries added by HostAliases.
10.42.0.1 ivaap-mongodb
IVAAP Secrets¶
IVAAPHelmTemplate can handle secrets in many ways. The default method is native Kubernetes secrets. All methods of providing secrets to the IVAAPHelmTemplate sync to native Kubernetes secrets and use the same secret references.
Secret References¶
Secret references can be found here, in values.yaml:
secrets:
envSecretRefs:
license: ivaap-license-secret
activemq: activemq-conf-secrets
circleOfTrust: circle-of-trust-secrets
adminserverConfig: adminserver-conf-secrets
These are the names of the native Kubernetes secrets that will be created. Most of these will contain more than one secret key and value. Because all secret methods sync to these native secret references, the template files can be hard-coded to use those references.
spec:
template:
spec:
containers:
- name: {{ $nodeName }}
envFrom:
- secretRef:
name: {{ $.Values.secrets.envSecretRefs.license }}
- secretRef:
name: {{ $.Values.secrets.envSecretRefs.circleOfTrust }}
Base64 Encoded Secrets¶
Starting with Helm Chart version 1.2.0, base64-encoding secrets before adding them to your deployment configuration yaml is optional. By default, the helm template expects secrets in plain text; when you deploy IVAAP, helm takes care of the base64 encoding before passing the secrets to Kubernetes.
However, if you choose to base64 encode these values yourself, set the flag .Values.secrets.base64EncodedValues to true. By default, this is false. More details on this can be found in the Native Kubernetes Secrets section below.
If this flag is set to true, all secrets under .Values.secrets.type.k8sSecrets will need to be base64 encoded. This can be done with the following command:
echo -n '<secret_value>' | base64 -w 0
# Encode the secret
user@linux:~$ echo -n 'myNewSecret' | base64 -w 0
bXlOZXdTZWNyZXQ=
# Decode a secret
user@linux:~$ echo 'bXlOZXdTZWNyZXQ=' | base64 -d
myNewSecret
The -n option for echo prevents a trailing newline character from being included in the encoded value.
It is also important to note that quoting matters when encoding/decoding a secret. For example, the IVAAP license contains embedded double quotes. If you wrap the full license in double quotes in the echo command for base64 encoding, the license will be broken, because the shell strips the embedded double quotes. Below demonstrates the improper way to encode the license:
# Encoding the license
user@linux:~$ echo -n "{{{FEATURE IVAAPServer INTD 1.0 9-oct-2025 uncounted VENDOR_STRING=users:16 HOSTID=ANY SIGN="022B 6B4F 7G92 85AB asdf 4AB1 142S 4524 BB2B 3EEF 0001 1BDD D69C A8FC asdf 6208 9CFC B54C CF12 F252 77E1"}}}" | base64 -w 0
e3t7RkVBVFVSRSBJVkFBUFNlcnZlciBJTlREIDEuMCA5LW9jdC0yMDI1IHVuY291bnRlZCBWRU5ET1JfU1RSSU5HPXVzZXJzOjE2IEhPU1RJRD1BTlkgU0lHTj0wMjJCIDZCNEYgN0c5MiA4NUFCIGFzZGYgNEFCMSAxNDJTIDQ1MjQgQkIyQiAzRUVGIDAwMDEgMUJERCBENjlDIEE4RkMgYXNkZiA2MjA4IDlDRkMgQjU0QyBDRjEyIEYyNTIgNzdFMX19fQo=
# Decoding the license
user@linux:~$ echo "e3t7RkVBVFVSRSBJVkFBUFNlcnZlciBJTlREIDEuMCA5LW9jdC0yMDI1IHVuY291bnRlZCBWRU5ET1JfU1RSSU5HPXVzZXJzOjE2IEhPU1RJRD1BTlkgU0lHTj0wMjJCIDZCNEYgN0c5MiA4NUFCIGFzZGYgNEFCMSAxNDJTIDQ1MjQgQkIyQiAzRUVGIDAwMDEgMUJERCBENjlDIEE4RkMgYXNkZiA2MjA4IDlDRkMgQjU0QyBDRjEyIEYyNTIgNzdFMX19fQo=" | base64 -d
{{{FEATURE IVAAPServer INTD 1.0 9-oct-2025 uncounted VENDOR_STRING=users:16 HOSTID=ANY SIGN=022B 6B4F 7G92 85AB asdf 4AB1 142S 4524 BB2B 3EEF 0001 1BDD D69C A8FC asdf 6208 9CFC B54C CF12 F252 77E1}}}
Notice that in the decoded license above, the hashed value within the license is no longer in quotes. Instead, wrap the entire license in single quotes '' for the echo command to avoid this issue:
user@linux:~$ echo -n '{{{FEATURE IVAAPServer INTD 1.0 9-oct-2025 uncounted VENDOR_STRING=users:16 HOSTID=ANY SIGN="022B 6B4F 7G92 85AB asdf 4AB1 142S 4524 BB2B 3EEF 0001 1BDD D69C A8FC asdf 6208 9CFC B54C CF12 F252 77E1"}}}' | base64 -w 0
e3t7RkVBVFVSRSBJVkFBUFNlcnZlciBJTlREIDEuMCA5LW9jdC0yMDI1IHVuY291bnRlZCBWRU5ET1JfU1RSSU5HPXVzZXJzOjE2IEhPU1RJRD1BTlkgU0lHTj0iMDIyQiA2QjRGIDdHOTIgODVBQiBhc2RmIDRBQjEgMTQyUyA0NTI0IEJCMkIgM0VFRiAwMDAxIDFCREQgRDY5QyBBOEZDIGFzZGYgNjIwOCA5Q0ZDIEI1NEMgQ0YxMiBGMjUyIDc3RTEifX19Cg==
user@linux:~$ echo 'e3t7RkVBVFVSRSBJVkFBUFNlcnZlciBJTlREIDEuMCA5LW9jdC0yMDI1IHVuY291bnRlZCBWRU5ET1JfU1RSSU5HPXVzZXJzOjE2IEhPU1RJRD1BTlkgU0lHTj0iMDIyQiA2QjRGIDdHOTIgODVBQiBhc2RmIDRBQjEgMTQyUyA0NTI0IEJCMkIgM0VFRiAwMDAxIDFCREQgRDY5QyBBOEZDIGFzZGYgNjIwOCA5Q0ZDIEI1NEMgQ0YxMiBGMjUyIDc3RTEifX19Cg==' | base64 -d
{{{FEATURE IVAAPServer INTD 1.0 9-oct-2025 uncounted VENDOR_STRING=users:16 HOSTID=ANY SIGN="022B 6B4F 7G92 85AB asdf 4AB1 142S 4524 BB2B 3EEF 0001 1BDD D69C A8FC asdf 6208 9CFC B54C CF12 F252 77E1"}}}
Please be mindful of this when encoding secrets. Alternatively, there are online resources that can encode/decode values for you without using the CLI.
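The quoting pitfall can be reproduced with a short value. The embedded double quotes only survive when the string is wrapped in single quotes:

```shell
# Double quotes: the shell consumes the embedded quotes before encoding
echo -n "a="x"b" | base64 -w 0   # YT14Yg==  (encodes a=xb)

# Single quotes: the embedded quotes are preserved
echo -n 'a="x"b' | base64 -w 0   # YT0ieCJi  (encodes a="x"b)
```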
Native Kubernetes Secrets¶
The below yaml snippet shows what environment variables are attached to each secret reference.
secrets:
base64EncodedValues: false
type:
k8sSecrets:
circle-of-trust-secrets:
IVAAP_TRUST_PRIVATE_AES_ENCRYPTION_KEY: ""
IVAAP_TRUST_PRIVATE_KEY: ""
IVAAP_TRUST_PUBLIC_KEY: ""
activemq-conf-secrets:
IVAAP_WS_MQ_QUEUE_PASSWORD: ""
ivaap-license-secret:
LM_LICENSE_FILE: ""
adminserver-conf-secrets:
# ----- PostgreSQL DB Connection Configuration
IVAAP_SERVER_ADMIN_DATABASE_HOST: ""
IVAAP_SERVER_ADMIN_DATABASE_NAME: ""
IVAAP_SERVER_ADMIN_DATABASE_PORT: ""
IVAAP_SERVER_ADMIN_DATABASE_USERNAME: ""
IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTION_KEY: ""
IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTED_PASSWORD: ""
IVAAP_COMMON_TEAM_ENCRYPTION_KEY: ""
# ----- AWS Cognito Authentication
# ----- Only use this section if .Values.environment.authentication.ifExternal.externalAuthType equals awsCognito
# IVAAP_AWS_COGNITO_CALLBACK_URL: ""
# IVAAP_AWS_COGNITO_VIEWER_URL: ""
# IVAAP_AWS_COGNITO_ADMIN_USERS: ""
# IVAAP_AWS_COGNITO_DISCOVERY_URL: ""
# IVAAP_AWS_COGNITO_CLIENT_ID: ""
# IVAAP_AWS_COGNITO_ENCRYPTED_CLIENT_SECRET: ""
# IVAAP_AWS_COGNITO_SCOPE: ""
# ----- Azure AD Authentication
# ----- Only use this section if .Values.environment.authentication.ifExternal.externalAuthType equals azureAD
# IVAAP_AZURE_AD_CALLBACK_URL: ""
# IVAAP_AZURE_AD_VIEWER_URL: ""
# IVAAP_AZURE_AD_ADMIN_USERS: ""
# IVAAP_AZURE_AD_DISCOVERY_URL: ""
# IVAAP_AZURE_AD_CLIENT_ID: ""
# IVAAP_AZURE_AD_ENCRYPTED_CLIENT_SECRET: ""
# IVAAP_AZURE_AD_SCOPE: ""
# IVAAP_AZURE_AD_TENANT_ID: ""
To use native Kubernetes secrets, .Values.secrets.type.nativek8s.enabled must be set to true. The secrets can then be passed directly as values, either in plain text or base64 encoded, depending on .Values.secrets.base64EncodedValues. If base64EncodedValues is set to true, the values must be base64 encoded before being added to the yaml file. If it is set to false, the secret values can be plain text, and helm will take care of the base64 encoding before passing them to Kubernetes.
Below is an example of base64 encoded secrets:
secrets:
base64EncodedValues: true
type:
nativek8s:
enabled: true
# All secrets defined in this section must be base64 encoded.
k8sSecrets:
circle-of-trust-secrets:
IVAAP_TRUST_PRIVATE_AES_ENCRYPTION_KEY: "bXlFbmNyeXB0aW9uS2V5"
IVAAP_TRUST_PRIVATE_KEY: "BASE64_ENCODED_PRIVATE_KEY"
IVAAP_TRUST_PUBLIC_KEY: "BASE64_ENCODED_PUBLIC_KEY"
activemq-conf-secrets:
IVAAP_WS_MQ_QUEUE_PASSWORD: "RU5DKFVWdGtUOVhvY21OVk9LVUlVRG5jRUE9PSk="
ivaap-license-secret:
LM_LICENSE_FILE: "BASE64_ENCODED_LICENSE"
adminserver-conf-secrets:
# ----- PostgreSQL DB Connection Configuration
IVAAP_SERVER_ADMIN_DATABASE_HOST: "aXZhYXAtcG9zdGdyZXMtaG9zdA=="
IVAAP_SERVER_ADMIN_DATABASE_NAME: "aXZhYXBkYg=="
IVAAP_SERVER_ADMIN_DATABASE_PORT: "NTQzMg=="
IVAAP_SERVER_ADMIN_DATABASE_USERNAME: "aXZhYXBzZXJ2ZXI="
IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTION_KEY: "ZGJFbmNyeXB0aW9uS2V5"
IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTED_PASSWORD: "ZW5jcnlwdGVkLWRiLXBhc3N3b3Jk"
And here is an example of the same secrets, but as plain-text values:
secrets:
base64EncodedValues: false
type:
nativek8s:
enabled: true
k8sSecrets:
circle-of-trust-secrets:
IVAAP_TRUST_PRIVATE_AES_ENCRYPTION_KEY: "myEncryptionKey"
IVAAP_TRUST_PRIVATE_KEY: "PRIVATE_KEY"
IVAAP_TRUST_PUBLIC_KEY: "PUBLIC_KEY"
activemq-conf-secrets:
IVAAP_WS_MQ_QUEUE_PASSWORD: "ENC(UVtkT9XocmNVOKUIUDncEA==)"
ivaap-license-secret:
LM_LICENSE_FILE: "LICENSE"
adminserver-conf-secrets:
# ----- PostgreSQL DB Connection Configuration
IVAAP_SERVER_ADMIN_DATABASE_HOST: "ivaap-postgres-host"
IVAAP_SERVER_ADMIN_DATABASE_NAME: "ivaapdb"
IVAAP_SERVER_ADMIN_DATABASE_PORT: "5432"
IVAAP_SERVER_ADMIN_DATABASE_USERNAME: "ivaapserver"
IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTION_KEY: "dbEncryptionKey"
IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTED_PASSWORD: "encrypted-db-password"
Whichever method you choose will create four Kubernetes secrets, as shown in the following kubectl output:
user@linux:~$ kubectl get secrets -n <namespace>
NAME TYPE DATA AGE
activemq-conf-secrets Opaque 1 41d
adminserver-conf-secrets Opaque 6 41d
circle-of-trust-secrets Opaque 3 41d
ivaap-license-secret Opaque 1 41d
Cloud Native Secret Providers¶
AWS Secrets Manager¶
If deploying IVAAP to an EKS cluster in AWS, the IVAAPHelmTemplate supports ingesting secrets from AWS Secrets Manager.
secrets:
  type:
    ######################################
    ##        AWS Secrets Manager       ##
    ######################################
    awsSecretsManager:
      enabled: ""
      serviceAccount:
        name: ""       # ----- Name of your service account to be created
        iamRoleArn: "" # ----- ARN of the IAM role for secret access. Ex: arn:aws:iam::123456789000:role/ivaap-secret
      secretName: ""   # ----- The name of your secret in AWS Secrets Manager - this secret should contain all key/value pairs
For in-depth configuration of AWS Secrets Manager in EKS deployments, refer to the AWS EKS helm documentation. This feature is only available in AWS EKS deployments.
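For illustration, a filled-in block might look like the following; the service account name, role ARN, and secret name are placeholder values, not defaults shipped with the template:

```yaml
secrets:
  type:
    awsSecretsManager:
      enabled: "true"
      serviceAccount:
        name: "ivaap-secrets-sa"                                  # hypothetical service account name
        iamRoleArn: "arn:aws:iam::123456789000:role/ivaap-secret" # example ARN format
      secretName: "ivaap-secrets"                                 # hypothetical Secrets Manager secret
```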
Azure Key Vault¶
If deploying IVAAP to an AKS cluster in Azure, the IVAAPHelmTemplate supports ingesting secrets from Azure Key Vault.
secrets:
  type:
    ######################################
    ##          Azure Keyvault          ##
    ######################################
    # ----- These values are only for Azure Keyvault access
    # ----- This will use templates/azure/serviceaccount.yaml
    azureKeyVault:
      enabled: ""
      keyvaultName: ""           # ----- Name of your Azure Keyvault
      useVMManagedIdentity: ""   # ----- Set to true for using managed identity
      userAssignedIdentityID: "" # ----- Leave empty for system-assigned identity, or specify if using user-assigned
      cloudName: ""              # ----- [OPTIONAL] Defaults to AzurePublicCloud if not provided
      tenantid: ""               # ----- Tenant ID of the Azure tenant containing your Keyvault secrets
For in-depth configuration of Azure Key Vault in AKS deployments, refer to the Azure AKS helm documentation. This feature is only available in Azure AKS deployments.
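As an illustration, a filled-in block using a system-assigned managed identity might look like the following; the Key Vault name and tenant ID are placeholder values:

```yaml
secrets:
  type:
    azureKeyVault:
      enabled: "true"
      keyvaultName: "ivaap-kv"   # hypothetical Key Vault name
      useVMManagedIdentity: "true"
      userAssignedIdentityID: "" # empty: use the system-assigned identity
      cloudName: ""              # defaults to AzurePublicCloud
      tenantid: "00000000-0000-0000-0000-000000000000"  # placeholder tenant ID
```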
Limits and Requests¶
By default, the IVAAPHelmTemplate will not set limits or requests for any pod/container. However, since resource constraints are often required in production clusters, the template provides the ability to set memory and CPU limits and requests for each component.
As an example, let's compare the ActiveMQ block with and without resource limits and requests.
# ----- Default ActiveMQ block from the template values.yaml
ivaapActiveMQ:
  activemq:
    repoName: ivaap/activemq
    tag: activemq-6.1.7-96e1cba9-202506251620Z
# ----- ActiveMQ block with resource limits and requests added
ivaapActiveMQ:
  activemq:
    repoName: ivaap/activemq
    tag: activemq-6.1.7-96e1cba9-202506251620Z
    resources:
      enabled: true
      requests:
        memory: 256Mi
        cpu: 250m
      limits:
        memory: 2048Mi
The logic is as simple as setting .Values.ivaapActiveMQ.activemq.resources.enabled to true. Then, the requests and limits can be set the same as they would be in any standard Kubernetes deployment YAML file. Order does not matter as long as the structure is correct.
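Before applying the change, a local dry run can confirm that the stanza renders onto the container spec; the release name, chart path, and values file below are placeholders for your own deployment:

```shell
# Render the chart locally and inspect the generated resources stanzas
helm template ivaap ./IVAAPHelmTemplate -f my-values.yaml \
  | grep -B 2 -A 6 'resources:'
```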
This can be done for all IVAAP components:
ivaapProxy:
  proxy:
    resources:
      enabled: true
      requests:
        memory: 256Mi
        cpu: 250m
      limits:
        memory: 2048Mi
ivaapBackendNodes:
  realtimeEnabled: true
  coreNodes:
    seednode:
      resources:
        enabled: true
        requests:
          memory: 512Mi
          cpu: 250m
        limits:
          memory: 1Gi
          cpu: 500m
    playnode:
      resources:
        enabled: true
        requests:
          memory: 512Mi
          cpu: 250m
        limits:
          cpu: 500m
  dataNodes:
    witsmlnode:
      enabled: true
      resources:
        enabled: true
        requests:
          memory: 512Mi