Lux: Helm Configuration Reference

Lux Overrides

The following reference lists each configuration option's name, description, default value, and an example value:

global.uriScheme
  Sets the URI scheme to be used; acceptable values are http and https. This setting deploys the appropriate ingress configs and propagates the change to Lux services. Requires a restart of Lux services (see restart).
  Default: http
  Example: http

global.domain
  Domain name to be used for ingress and CORS rules. Requires a restart of Lux services (see restart).
  Default: domain.nip.io
  Example: domain.nip.io

global.subDomain
  Use a sub-domain based deployment. This hosts the various Lux components on sub-domains instead of paths. The following sub-domain ingresses are created:
    auth.domain.nip.io
    logs.domain.nip.io
    accounts.domain.nip.io
    data.domain.nip.io
    metrics.domain.nip.io
    oauth.domain.nip.io
  Requires a restart of Lux services (see restart).
  Default: domain.nip.io
  Example: domain.nip.io

global.namespace
  Namespace to deploy resources to; the namespace should already exist.
  Default: default
  Example: hasura

global.certIssuer
  Cert issuer to be used.
  Default: letsencrypt-staging
  Example: letsencrypt-prod

global.containerRegistry
  Configure the container registry to be used.
  Default: gcr.io/hasura-ee
  Example: docker.io/yourcompany

secrets.dbUrl
  External Postgres connection string to be used by Lux services. By default, a Postgres container is deployed within Kubernetes and used. Requires a restart of Lux services (see restart).
  Default: (empty)
  Example: postgres://ext-username:password@aws-rds-postgres.com:5432/lux_db

secrets.timescaledbUrl
  External TimescaleDB connection string to be used by Lux metrics. By default, a Timescale container is deployed within Kubernetes and used. Requires a restart of Lux metrics services (see restart).
  Default: (empty)
  Example: postgres://ext-username:password@aws-timescaledb.com:5432/dbname

secrets.hgeDbUrl
  External Postgres connection string to be used by HGE Pro. By default, the Postgres container deployed within Kubernetes is shared with Lux and used. Requires a restart of HGE (see restart).
  Default: (empty)
  Example: postgres://hasura%40test.postgres.database.azure.com:password@test.postgres.database.azure.com:5432/hasura

configs.authMethods
  Configure the authentication methods allowed for logging in to Lux. Requires a restart of Lux services (see restart).
  Default: password,google,github,saml
  Example: saml

configs.authRedisHost
  Redis host to be used for caching Lux tokens. By default, a Redis container deployed within Kubernetes is used. Requires a restart of Lux services (see restart).
  Default: auth-redis:6379
  Example: test.ng.0001.use2.cache.amazonaws.com:6379

configs.authRedisUser
  Redis username.
  Default: default
  Example: default

secrets.data.AUTH_REDIS_PASSWORD
  Redis password to be used.
  Default: (empty)

configs.logsRedisHost
  Redis host to be used for caching Lux logs and metrics. By default, a Redis container deployed within Kubernetes is used. Requires a restart of Lux metrics services (see restart).
  Default: logs-redis:6379
  Example: test-logs.ng.0001.use2.cache.amazonaws.com:6379

configs.logsRedisUser
  Redis username.
  Default: default
  Example: default

secrets.data.LOGS_REDIS_PASSWORD
  Redis password to be used.
  Default: (empty)

configs.smtpHost
  SMTP host to be used to send out account related emails. Requires a restart of Lux auth services (see restart).
  Default: smtp.org.com
  Example: smtp.org.com

configs.smtpPort
  SMTP port.
  Default: 25
  Example: 2525

configs.smtpUser
  SMTP user.
  Default: email_user
  Example: user

secrets.data.SMTP_PASSWORD
  SMTP password.
  Default: (empty)

configs.smtpDisableAuth
  Disable SMTP authentication (boolean).
  Default: false
  Example: true

configs.emailFromAddress
  Email "from" address to be used.
  Default: hasura-pro-team@org.com
  Example: accounts@org.com

configs.emailFromName
  Email "from" name to be used.
  Default: Hasura Pro Team
  Example: Org Delta Team

configs.emailSubjectPrefix
  Email subject prefix to be used.
  Default: Hasura Pro
  Example: Hasura Pro

secrets.data.GITHUB_CLIENT_ID
  Github OAuth client ID.
  Default: <github-client-id>

secrets.data.GITHUB_CLIENT_SECRET
  Github OAuth client secret.
  Default: <github-client-secret>

secrets.data.GOOGLE_CLIENT_ID
  Google OAuth client ID.
  Default: <google-client-id>

secrets.data.GOOGLE_CLIENT_SECRET
  Google OAuth client secret.
  Default: <google-client-secret>
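
Putting a few of these options together, a minimal overrides file could look like the following sketch; the domain, connection string, and SMTP values are illustrative placeholders:

global:
  uriScheme: https
  domain: lux.example.com
  namespace: hasura

configs:
  authMethods: password,saml
  smtpHost: smtp.example.com

secrets:
  dbUrl: postgres://lux-user:password@postgres.example.com:5432/lux_db
  data:
    SMTP_PASSWORD: "<smtp-password>"

The file is then applied with the usual Helm workflow, for example helm upgrade <release-name> <lux-chart> -f overrides.yaml (the release and chart names depend on how Lux was installed), followed by the restart described below where required.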

restart

Due to the way Kubernetes handles updates to ConfigMaps and Secrets, pods have to be restarted to pick up the new values.

One way is to trigger a rolling restart of the Lux deployments:

kubectl rollout restart deployment/cloud deployment/api deployment/auth deployment/logs-monitor deployment/data deployment/logs-grpc deployment/logs-worker deployment/metrics deployment/metricsapi deployment/oauth

There are other alternatives, such as using Reloader. In an upcoming release, Lux will also provide a flag that can be used to force a rolling update of the above services.
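
As a sketch of the Reloader approach: Stakater Reloader watches annotated Deployments and rolls them automatically when a referenced ConfigMap or Secret changes. Assuming Reloader is installed in the cluster, one way to opt a deployment in is its standard annotation, applied outside the chart, for example:

# Opt the auth deployment into automatic restarts on config/secret changes
# (requires Reloader to be running in the cluster; repeat for the other deployments above).
kubectl annotate deployment/auth reloader.stakater.com/auto=true -n <namespace>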

SAML request signing certs configuration

Add the below snippet to the overrides file. This assumes the certs are available in the Kubernetes cluster in a secret named secrets:

auth:
  additionalEnv: |
    - name: SAML_SIGNING_KEY
      value: "/etc/ssl/saml-certs/private-key.pem"     
    - name: SAML_SIGNING_CERT
      value: "/etc/ssl/saml-certs/cert.pem"         
  extraVolumes: |
    - name: saml-certs
      secret:
        secretName: secrets    
  extraVolumeMounts: |     
    - name: saml-certs
      mountPath: "/etc/ssl/saml-certs/private-key.pem"
      readOnly: true    
      subPath: private-key.pem
    - name: saml-certs
      mountPath: "/etc/ssl/saml-certs/cert.pem"
      readOnly: true    
      subPath: cert.pem  
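
For reference, the secret referenced above can be created from local certificate files with kubectl; the key names must match the subPath values used in the mounts, and the local file paths here are placeholders:

kubectl create secret generic secrets \
  --from-file=private-key.pem=./private-key.pem \
  --from-file=cert.pem=./cert.pem \
  -n <namespace>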

connect to database with cert auth

Add the below snippet to the overrides file. This assumes the certs are available in the Kubernetes cluster in a secret named secrets:

data:
  extraVolumes: |
    - name: certs
      secret:
        defaultMode: 0600
        secretName: secrets
        items:
        - key: STAGE_PGSSLCERT
          path: certs/client-cert.pem
        - key: STAGE_PGSSLKEY
          path: certs/client-key.pem
        - key: STAGE_PGSSLROOTCERT
          path: certs/server-ca.pem    
  extraVolumeMounts: |     
    - name: certs
      mountPath: "/etc/pgcerts"
      readOnly: true
  additionalEnv: |
    - name: PGSSLMODE
      value: "verify-ca"   
    - name: PGSSLCERT
      value: "/etc/pgcerts/certs/client-cert.pem"   
    - name: PGSSLKEY
      value: "/etc/pgcerts/certs/client-key.pem"   
    - name: PGSSLROOTCERT
      value: "/etc/pgcerts/certs/server-ca.pem"      

configure Prometheus exporter

The Prometheus exporter is disabled by default; add the below snippet to the overrides file to enable it. Once enabled, a Prometheus integration can be added to a project from the EE dashboard to export its metrics.

prometheus-exporter:
    enabled: true

configure Datadog trace exporter

The Datadog integration exports logs and metrics by default once configured. To also publish traces (disabled by default), add the below snippet:

ddtrace:
    enabled: true

configure self-hosted Github EE for OAuth2 login

Self-hosted Github EE can be used for OAuth2 login (by default, public Github is used) by adding the below snippet. Replace the values of the ENV vars below with your self-hosted Github URLs; the example values are those of public Github.

auth:
  additionalEnv: |
    - name: GITHUB_AUTH_URL
      value: https://github.com/login/oauth/authorize
    - name: GITHUB_TOKEN_URL
      value: https://github.com/login/oauth/access_token
    - name: GITHUB_INFO_ENDPOINT
      value: https://api.github.com/user
    - name: GITHUB_EMAIL_ENDPOINT
      value: https://api.github.com/user/emails    
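
As an illustration, for a GitHub Enterprise Server instance reachable at github.example.com (a placeholder hostname), the overrides would typically look like the following; verify the exact OAuth and API endpoints against your GitHub Enterprise documentation:

auth:
  additionalEnv: |
    - name: GITHUB_AUTH_URL
      value: https://github.example.com/login/oauth/authorize
    - name: GITHUB_TOKEN_URL
      value: https://github.example.com/login/oauth/access_token
    - name: GITHUB_INFO_ENDPOINT
      value: https://github.example.com/api/v3/user
    - name: GITHUB_EMAIL_ENDPOINT
      value: https://github.example.com/api/v3/user/emails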

configure rate-limit Redis

You can configure the rate-limit Redis on GraphQL Engine Pro (if using the provided helm chart) using the following snippet in the overrides file:

hge-pro:
  additionalEnv: |  
    - name: HASURA_GRAPHQL_RATE_LIMIT_REDIS_URL
      value: "redis://username:password@hostwithport"

configure caching Redis

You can configure the caching Redis on GraphQL Engine Pro (if using the provided helm chart) using the following snippet in the overrides file:

hge-pro:
  additionalEnv: |  
    - name: HASURA_GRAPHQL_REDIS_URL
      value: "redis://username:password@hostwithport"

Common Overrides

The below configs can be used to override per-service settings and are applicable to all Lux services. For example, using the below snippet in the overrides file, we can override various aspects of the hge-pro service:

hge-pro:
  tag: "v1.3.3-pro.3"
  additionalEnv: |
    - name: HASURA_GRAPHQL_ENABLED_APIS
      value: "graphql,metadata,config,developer,pgdump"    
  resources: |
    requests:
      cpu: 200m
      memory: 1Gi
    limits:
      cpu: 1000m 
      memory: 1Gi    
The per-service parameters are listed below, each with its description, default value, and an example value.

namespace
  Default namespace for the service.
  Default: default
  Example: hasura

replicas
  Number of pods to be created.
  Default: 1
  Example: 2

additionalEnv
  Additional environment variables to be added.
  Default: (empty)
  Example:
    additionalEnv: |
      - name: HASURA_GRAPHQL_ENABLED_APIS
        value: "graphql,metadata,config,developer,pgdump"
      - name: USER_BEHAVIOUR_SERVICE_URL
        valueFrom:
          configMapKeyRef:
            name: configs
            key: DATA_HOST

args
  Define the arguments to be passed to the command.
  Default: (empty)
  Example:
    args: |
      - serve

httpPort
  Default port number for the service.
  Default: 8080
  Example: 3000

labels
  Labels for the service.
  Default: ''
  Example:
    labels: |
      app: "postgres"
      group: "db"

ingress.enabled
  Enable an ingress for the service.
  Default: false
  Example: true

ingress.context
  When ingress is enabled, the context path on which the service is exposed.
  Default: (empty)
  Example: auth

ingress.additionalAnnotations
  Additional annotations to add to the ingress resource.
  Default: (empty)
  Example:
    ingress:
      additionalAnnotations: |
        nginx.ingress.kubernetes.io/rewrite-target: "/$2"

ingress.waf.enabled
  When ingress is enabled, enable the Web Application Firewall for the service.
  Default: false
  Example: true

image.pullPolicy
  Image pull policy for the service; by default, pulling is skipped if the image already exists on the node.
  Default: IfNotPresent
  Example: Always

image.tag
  Docker image tag for the service.
  Default: latest
  Example: v1.3.3-pro.5

initContainers.gitSync.enabled
  Add a gitSync init container which clones a repository using the configured SSH read token.
  Default: false

healthChecks.enabled
  Enable or disable health checks (liveness and readiness probes) for the pod.
  Default: false
  Example: true

healthChecks.livenessProbe
  Liveness probe to be added (advanced configuration; the path and port overrides should suffice for most scenarios). Passed through the tpl function and thus to be configured as a string.
  Default: (empty)
  Example:
    livenessProbe: |
      httpGet:
        path: "{{ .Values.healthChecks.livenessProbePath }}"
      initialDelaySeconds: 60

healthChecks.livenessProbe.httpGet.path
  Context path of the service used to check the liveness of the pod.
  Default: {{ .Values.healthChecks.livenessProbePath }}
  Example: /healthz

healthChecks.livenessProbe.httpGet.port
  Port number of the service used to check the liveness of the pod.
  Default: {{ .Values.httpPort }}
  Example: 8080

healthChecks.readinessProbe
  Readiness probe to be added (advanced configuration). Passed through the tpl function and thus to be configured as a string.
  Default: (empty)
  Example:
    readinessProbe: |
      httpGet:
        path: "{{ .Values.healthChecks.readinessProbePath }}"
      initialDelaySeconds: 60

healthChecks.readinessProbe.httpGet.path
  Context path of the service used to check the readiness of the pod.
  Default: {{ .Values.healthChecks.readinessProbePath }}
  Example: /healthz

healthChecks.readinessProbe.httpGet.port
  Port number of the service used to check the readiness of the pod.
  Default: {{ .Values.httpPort }}
  Example: 8080

lifecycle.preStop.exec.command
  Executes the given command in the pod before it is stopped.
  Default:
    - sh
    - -c
    - "sleep 10"

resources
  Resource requests and limits for the pod. Passed through the tpl function and thus to be configured as a string.
  Default: (empty)
  Example:
    resources: |
      requests:
        cpu: 200m
        memory: 1Gi
      limits:
        cpu: 1000m
        memory: 1Gi

extraVolumes
  Additional volumes to add to the service. Passed through the tpl function and thus to be configured as a string.
  Default: (empty)
  Example:
    extraVolumes: |
      - name: new-volume
        configMap:
          name: service-new-volume

extraVolumeMounts
  Mount additional volumes into the service at the desired mount path. Passed through the tpl function and thus to be configured as a string.
  Default: (empty)
  Example:
    extraVolumeMounts: |
      - name: new-volume
        mountPath: /opt/service-path/file.conf
        subPath: file.conf

extraInitContainers
  Additional init containers, e.g. for providing themes. Passed through the tpl function and thus to be configured as a string.
  Default: ""

extraContainers
  Additional sidecar containers, e.g. for a database proxy such as Google's cloudsql-proxy. Passed through the tpl function and thus to be configured as a string.
  Default: ""
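
For instance, a minimal sketch of adding a sidecar via extraContainers, using the cloudsql-proxy mentioned above as the example; the image tag, instance connection name, and flags are illustrative, so confirm the exact invocation against Google's Cloud SQL Auth Proxy documentation:

hge-pro:
  extraContainers: |
    - name: cloudsql-proxy
      # Illustrative image and flags; verify against the Cloud SQL Auth Proxy docs.
      image: gcr.io/cloudsql-docker/gce-proxy:1.17
      command:
        - /cloud_sql_proxy
        - -instances=my-project:us-central1:my-instance=tcp:5432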