Dynamic Scaling in a Container - Tableau Server Backgrounders

Introduction

The dynamic scaling of Backgrounder in a container enables various scaling strategies to be applied to backgrounder and scheduled jobs in Tableau Server. Autoscaling in this context means services can be independently scaled to handle variable task loads without requiring human intervention or impacting the uptime of other server systems. The Tableau Server Containers, which contain complete nodes of Tableau Server processes, continue to run as monolithic systems. Instead, a smaller set of decoupled, independent container services that comprise the "backgrounder" service role becomes dynamically scalable and handles computational load that would normally be handled by the Tableau Server Containers. Backgrounder services are responsible for processing system tasks that include refreshing/creating extracts, sending subscriptions, checking data alerts, and many maintenance jobs. If there are times, for example, when it would be advantageous to refresh a large number of datasets or compute a swath of computationally expensive data alerts, you can now leverage Kubernetes to scale up compute power and complete those tasks efficiently. This guide covers the configuration and deployment requirements for Autoscaling Backgrounders in Kubernetes. This document is a supplement to the Tableau Server in a Container documentation.

Prerequisites

Autoscaling Backgrounders is only available in Kubernetes and is based on Tableau Server in a Container. To use the autoscaling backgrounders feature, you must satisfy these prerequisites:

  • Your deployment must run in Kubernetes and be based on the Tableau Server in a Container images.
  • You must have an Advanced Management license.
  • You must build the images with version 2022.3.0 or later of the container build tool.
  • Network file shares must be available for the External Filestore and clone configuration directories.

Limitations

  • This feature only works as a part of a Linux-based deployment.
  • Flow jobs are not supported on the autoscaling backgrounders. Flow jobs will be handled by the backgrounder services that continue to run in the Tableau Server container.

Creating Tableau Server and Backgrounder Pod Images

The first step for using autoscaling backgrounders in containers is to create the service images that comprise the Tableau Server installation. These images will include the Tableau Server image, as well as images for the separate backgrounder and supporting services. You use the same build tool that is used to create the comprehensive all-in-one Tableau Server container image, but the tool must be version 2022.3.0 or later, you must have an Advanced Management license, and you need to use a special flag when building the images.

  1. To create the service images, run this command:

    build-image --accepteula -i <installer> --backgrounder-images

    This creates the Tableau Server image and four new images. These additional images contain individual services that comprise the new autoscalable backgrounder pod.

    The docker images command lists the images created in the local docker repository:

    hyper                          20214.21.1117.1006             52fd9843c178   10 seconds ago
    gateway                        20214.21.1117.1006             2308e602a2a3   11 seconds ago
    backgrounder                   20214.21.1117.1006             4540e459cf23   12 seconds ago
    dataserver                     20214.21.1117.1006             c5345ed47f51   12 seconds ago
    tableau_server_image           20214.21.1117.1006             b27817047dd1   7 minutes ago

    The hyper, gateway, backgrounder, and dataserver images comprise the new Backgrounder pod. Custom drivers, installation scripts, and properties will be shared across all five of these images. For more information, see Customizing the image.

  2. Publish all of these images to your internal image repository for deployment.
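
    For example, a minimal sketch of tagging and pushing all five images, assuming a hypothetical internal registry at registry.example.com and the image tag shown in the listing above:

    # Hypothetical registry host; substitute your own internal repository and tag.
    for image in tableau_server_image backgrounder gateway dataserver hyper; do
        docker tag ${image}:20214.21.1117.1006 registry.example.com/tableau/${image}:20214.21.1117.1006
        docker push registry.example.com/tableau/${image}:20214.21.1117.1006
    done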

Deployment Guide

The following information provides context for how to deploy Tableau Server in a Container with Autoscaling Backgrounders. This information assumes you already understand and know how to deploy Tableau Server in a self-contained container. For more information, see Tableau Server in a Container. The three Kubernetes configuration files in the Kubernetes Configuration section are templates that can be used to set up the deployment. The other sections in this guide cover the requirements and details of the deployment.

Deployment of Tableau Server and Autoscaling Backgrounders should be as simple as deploying the populated Kubernetes Configuration files at the bottom of the guide:

kubectl apply -f <tableau-kubeconfig-dir>

Backgrounder Jobs

Backgrounder pods help Tableau Server in a Container compute additional scheduled workloads in parallel. Backgrounder handles extract refresh, subscription, alert, flow, and system workloads. Distributing jobs among backgrounder pods means there are more compute resources available for Tableau Server to handle other tasks, such as interactive user activities like rendering workbooks and dashboards. Flow jobs are the only type of backgrounder job that does not run in the backgrounder pods. For details about Backgrounder jobs, see Managing Background Jobs in Tableau Server.

Backgrounder pods can handle any kind of backgrounder load except flow jobs, which must run in the main Tableau Server containers, where the backgrounder service continues to run.

The Node Roles feature gives users the flexibility to dedicate backgrounder pods to specific types of jobs. This feature is an extension of the Node Roles feature on Tableau Server. A detailed description of the different node roles can be found here. Note that by default, flow jobs are disabled on the backgrounder pods (that is, the role is set to "no-flows") because the backgrounder pods are unable to run flow jobs.

To set up node roles for backgrounder, set the NODE_ROLES environment variable in the kubeconfig for the container running the backgrounder service. For example, to configure the backgrounder to run only extract-refresh jobs, set the NODE_ROLES environment variable to extract-refreshes as shown below:

NODE_ROLE_CONFIG

containers:
- name: backgrounder
image: <backgrounder_image> # Backgrounder Image
ports:
- containerPort: 8600
volumeMounts:
- name: logmount
mountPath: /var/log/tableau
- name: clone-volume
mountPath: /docker/clone
- name: dataengine-volume
mountPath: /docker/dataengine
- name: temp
mountPath: /var/opt/tableau/tableau_server/data/tabsvc/temp
env:
- name: ROOT_LOG_DIR
value: /var/log/tableau
- name: CLONE_ARTIFACT_DIR
value: /docker/clone/clone-artifacts
- name: FILESTORE_MOUNT_PATH
value: /docker/dataengine
- name: NODE_ROLES
value: extract-refreshes

The Tableau Server pods have at least one backgrounder configured in their topology, which is required to ensure there is always a place to run backgrounder jobs. By default, TSM requires that there be a backgrounder able to handle every type of background job. In some scenarios, you may want the backgrounder pods to handle all jobs of a particular type. To do so, set the server configuration key topology.roles_handle_all_jobs_constraint_disabled to true, which disables the requirement that the TSM topology handle all job types. With this parameter set, the backgrounder role for the Tableau Server backgrounder instance could be set to no-extract-refreshes and the role for the backgrounder pods could be set to extract-refreshes, ensuring that all extract refresh jobs run only on the backgrounder pods.

Note: Disabling this constraint allows you to configure roles such that some job types are never scheduled. Set the role configuration of TSM backgrounders and backgrounder pods carefully, because TSM no longer verifies that all backgrounder job types can be scheduled.
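As a hedged sketch of that configuration (the node ID node1 is hypothetical; substitute the node running the Tableau Server backgrounder instance), the corresponding commands inside the Tableau Server in a Container pod might look like this:

## Run these commands in the Tableau Server in a Container pod.
tsm configuration set -k topology.roles_handle_all_jobs_constraint_disabled -v true
tsm topology set-node-role -n node1 -r no-extract-refreshes
tsm pending-changes apply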

Tableau Server in a Container Pods

The Tableau Server container that is part of an autoscaling backgrounder deployment is deployed in almost the same way as the existing Tableau Server in a Container. There are a few key requirement changes:

  • A network file share is required to transfer configuration between the Tableau Server container and the backgrounder pods.
  • You must enable and use the External Filestore feature. This also requires a dedicated network file share.

Backgrounder Pods

Backgrounder pods consist of four independent service containers working together: gateway, hyper, dataserver, and backgrounder. You may deploy these pods like typical independent Kubernetes container pods. The pods have the following requirements:

  • Backgrounder pods must be able to reach the Tableau Server node using hostname DNS resolution.
  • External Filestore and Clone network file shares must be provided.

Note: Backgrounder pods are configured with an init-container to wait until the Tableau Server container has successfully produced clone configuration output before proceeding to run.

Logs

Backgrounder pod services (like Tableau Server) still write logs predominantly to disk. Because the backgrounder pods can be scaled in and out, they are ephemeral, so it is essential to ensure that logs are stored off the pods. Many customers with existing K8s environments will already be using a log aggregation service of some type to collect the logs from the pods they deploy. Examples of log aggregation services are Splunk and fluentd. We strongly recommend that customers use some sort of log aggregation service to collect the logs from their backgrounder pods. To make log management easier, the kubeconfig we provide configures each service in the pod to write to a shared log volume. The path of the directory in each service container is specified by the ROOT_LOG_DIR environment variable.
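For example, one possible approach (not part of the provided templates) is to add a log-forwarding sidecar container to the backgrounder pod that tails the shared log volume. The fluent/fluent-bit image, file pattern, and stdout output below are illustrative only; replace the output with your own aggregation backend:

# Illustrative sidecar sketch, added under the backgrounder pod's containers: list.
# Tails the shared log volume and prints records to stdout; swap the output
# plugin for your aggregation backend (Splunk, Elasticsearch, and so on).
- name: log-forwarder
  image: fluent/fluent-bit:1.9
  args: ["-i", "tail", "-p", "path=/var/log/tableau/*/*.log", "-o", "stdout"]
  volumeMounts:
  - name: logmount
    mountPath: /var/log/tableau
    readOnly: true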

If you need to open a support case and provide logs, you will provide two sets of logs: ziplogs gathered from the main Server containers, and logs from the backgrounder pods (either retrieved from your log aggregation service, or using the manual process below).

For customers that are unable to use a log aggregation service, logs can be retrieved manually from the pods.

Note: Logs from any pod that did not use a Persistent Volume Claim for the volume containing the logs will be lost when the pod is scaled down!

All the relevant logs are available in the /var/log/tableau directory (configurable via the ROOT_LOG_DIR environment variable) inside the backgrounder pod. We highly recommend you mount a PersistentVolumeClaim at this location so that logs are available after the pod terminates.

Collecting logs when the backgrounder pod is running:

Create a tar file of the logs inside the container:

kubectl exec -it <backgrounder-pod-name> -- bash -c "tar -czf /docker/user/backgrounder-pod-logs.tar.gz /var/log/tableau"

Copy the tarfile to outside the container:

kubectl cp <backgrounder-pod-name>:docker/user/backgrounder-pod-logs.tar.gz ./backgrounder-pod-logs.tar.gz

Collecting logs when the backgrounder pod has exited (or failed to start):

Attach any long-running pod that mounts the PersistentVolumeClaim used for the backgrounder pod deployment logs. One example configuration:

apiVersion: v1
kind: Pod
metadata:
name: <name>
namespace: <namespace>
spec:
containers:
- name: get-logs-pod
image: busybox:1.28
securityContext:
runAsUser: 0
allowPrivilegeEscalation: true
command: ['sh', '-c', "while :; do sleep 5; done"]
volumeMounts:
- name: logmount
mountPath: /var/log/tableau
restartPolicy: Never
volumes:
- name: logmount
persistentVolumeClaim:
claimName: logvolume

Create a tar file of the logs inside the container:

kubectl exec -it <backgrounder-pod-name> -- bash -c "tar -czf /backgrounder-pod-logs.tar.gz /var/log/tableau"

Copy the tarfile to outside the container:

kubectl cp <backgrounder-pod-name>:/backgrounder-pod-logs.tar.gz ./backgrounder-pod-logs.tar.gz

Live Configuration Changes

If you make configuration changes in Tableau Server in a Container (for example using the tsm command line) and want those configuration changes to be represented in Backgrounder pods, you need to run the tsm settings clone command to produce a new set of clone configuration files ("clone payload").

  1. Use TSM to make configuration changes in the Tableau Server in a Container pod and apply the configuration changes to the server.
  2. Run the following command in the Tableau Server in a Container pod:

    ## Run this command in the Tableau Server in a Container pod.
    tsm settings clone -d $CLONE_ARTIFACT_DIR

    This command creates a new set of configuration files and writes it to the Clone NFS drive location.

  3. Redeploy your backgrounder pods. The pods should be configured to use the Clone NFS drive and will pick up the new configuration.
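
    If the pods are managed by the backgrounder Deployment shown in the Kubernetes Configuration section below, one way to do this is a rolling restart, assuming the deployment and namespace names used in the templates:

    kubectl rollout restart deployment/backgrounder -n <namespace>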

Scaling Strategies

Backgrounder pods can be scaled in Kubernetes using a variety of techniques and strategies. We provide an example scaling strategy that changes Backgrounder pod pool size based on a time schedule.

Note that CPU and memory utilization are not good metrics for scaling Backgrounder pods. Memory and CPU utilization do not accurately portray the overall load demand on the cluster. For example, a Backgrounder pod could be at maximum utilization while refreshing an extract even though no additional jobs are waiting in the Backgrounder job queue. In this case autoscaling would not improve job throughput.

Scheduled Scaling

Standard Kubernetes mechanisms using cron jobs allow you to schedule a scaling solution.

An example Kubernetes configuration for this is provided in the Kubernetes Configuration section below.
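
The same effect can also be triggered manually. Assuming the backgrounder Deployment and namespace placeholders used in the templates, a one-off scale command looks like this:

kubectl scale deployment/backgrounder --replicas=4 -n <namespace>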

Kubernetes Configuration

New Environment Variables

In addition to the standard Tableau Server container environment variables (see Initial Configuration Options), there are some new required environment variables that must be set in the Kubernetes Configuration.

  • FILESTORE_MOUNT_PATH (recommended value: /docker/dataengine): External Filestore directory mount location. This directory should point to the dataengine NFS directory mounted inside each deployed Tableau container. For more information about External Filestore, see Tableau Server External File Store. The value should be the same for the Tableau Server in a Container pod and the Backgrounder pod.
  • CLONE_ARTIFACT_DIR (recommended value: /docker/clone/clone-artifacts): Clone configuration directory mount location. This directory should point to an NFS directory mounted inside each Tableau container. Tableau Server will output configuration data that Backgrounder pods consume to become members of the cluster.
  • ROOT_LOG_DIR (recommended value: /var/log/tableau; Backgrounder pods only): Common log directory location for all services running in a backgrounder pod.

Backgrounder Pod Ports

Backgrounder pods consist of four services, each of which runs on a specified port by default. If you want to change the port a service attaches to inside the container, you need to supply the key that corresponds to the service's port assignment. This kind of configuration should not be necessary in most cases, unless a sidecar container or some other additional component added to the pod conflicts with a service's port.

Port environment variables and their defaults:

  • BACKGROUNDER_PORT: 8600
  • DATASERVER_PORT: 8400
  • HYPER_PORT: 8200
  • GATEWAY_PORT: 8080

Dataserver also uses port 8300, which cannot be reconfigured.
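
For example, a minimal sketch of moving the backgrounder service to a different port in the backgrounder container spec (the port value 8601 is arbitrary):

# Illustrative fragment of the backgrounder container spec; 8601 is an arbitrary example port.
env:
- name: BACKGROUNDER_PORT
  value: "8601"
ports:
- containerPort: 8601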

Shared Network Directory

This deployment of Tableau Server requires two network shares to function properly. Note that these network directories are present in all of the Tableau Server and Backgrounder pod Kubernetes configuration templates:

  • Dataengine Directory (FILESTORE_MOUNT_PATH): Backgrounder Pods require the External Filestore feature. This network share contains extracts and other file-based artifacts that will be shared between Tableau Server and Backgrounder Pods.
  • Clone Directory (CLONE_ARTIFACT_DIR): Tableau Server writes static connection and configuration information to a network share. Backgrounder pods will consume this information to become members of the Tableau Server cluster. In future pre-releases this configuration will be incorporated into the standard life cycle of Kubernetes configuration.

Important: If you want to redeploy the cluster entirely (including a new Tableau Server Container), you must purge the contents of the clone NFS mount (otherwise backgrounder pods will attempt to attach to the old server).

Kubernetes Configuration Examples

Note: The configuration examples include use of a readiness probe. You can use the readiness probe when your Tableau Server Container deployment is a single-node TSM deployment (the deployment can include multiple backgrounder pods). You cannot use a readiness probe with multi-node Tableau Server in a Container deployments.

Tableau Server Container Config

---
apiVersion: v1
kind: Service
metadata:
name: <service_name>
namespace: <namespace>
spec:
selector:
app: <service_name>
ports:
- protocol: TCP
port: 8080
nodePort: <nodeport-number>
name: http
- protocol: TCP
port: 8443
nodePort: <nodeport-number>
name: https
type: NodePort
---
apiVersion: v1
kind: ConfigMap
metadata:
name: configfile
namespace: <namespace>
data:
config.json: |-
{
"configEntities": {
"identityStore": {
"_type": "identityStoreType",
"type": "local"
}
},
"configKeys" : {
"tabadmincontroller.init.smart_defaults.enable" : "false",
"wgserver.domain.ldap.starttls.enabled" : "false"
},
"daysLeftForMaintenanceExpiring" : 0
}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: extrepojsonfile
namespace: <namespace>
data:
config.json: |-
{
"flavor":"generic",
"masterUsername":"<admin-name>",
"masterPassword":"<password>",
"host":"<hostname>",
"port":5432,
"prerequisiteCheckEnabled":false,
"requireSsl":false
}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: datadir1
namespace: <namespace>
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
---
# This is required for multi-node tableau server in container
apiVersion: v1
kind: PersistentVolume
metadata:
name: bootstrapnfs
namespace: <namespace>
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 1Gi
nfs:
server: <nfs-ip>
path: <mount-path>
---
# This is required for multi-node tableau server in container
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: bootstrapnfs
namespace: <namespace>
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: clonenfs
namespace: <namespace>
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 1Gi
nfs:
server: <nfs-ip>
path: <mount-path>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: clonenfs
namespace: <namespace>
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: dataenginenfs
namespace: <namespace>
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 1Gi
nfs:
server: <nfs-ip>
path: <mount-path>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dataenginenfs
namespace: <namespace>
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Mi
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: tableau-server-in-a-container-secrets
namespace: <namespace>
stringData:
license_key: <license_key> # Tableau License Key String
tableau_username: <tableau_username> # Initial admin username in Tableau Server
tableau_password: <tableau_password> # Initial admin password
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: tableau-server
namespace: <namespace>
labels:
app: <service_name>
spec:
selector:
matchLabels:
app: <service_name>
replicas: 1
serviceName: <service_name>
template:
metadata:
labels:
app: <service_name>
spec:
securityContext:
runAsUser: 999
fsGroup: 998
fsGroupChangePolicy: "OnRootMismatch"
terminationGracePeriodSeconds: 120
dnsPolicy: "None"
dnsConfig:
nameservers:
- <dns_ip> # DNS IP for resolving container hostnames
searches:
- <service_name>.<namespace>.svc.<cluster_domain>.<example> # SRV Record
- <namespace>.svc.<cluster_domain>.<example> # SRV Record
- svc.<cluster_domain>.<example> # SRV Record
- <cluster_domain>.<example> # SRV Record
options:
- name: ndots
value: "5"
initContainers: # init containers are optional, to clear directory content if already exists
- name: clean-bootstrap-dir
image: busybox:1.28
securityContext:
runAsUser: 0
allowPrivilegeEscalation: true
volumeMounts:
- name: bootstrap
mountPath: /docker/config/bootstrap
command: ['sh', '-c', 'rm -rf /docker/config/bootstrap/* || true']
- name: clean-clone-artifacts-dir
image: busybox:1.28
securityContext:
runAsUser: 0
allowPrivilegeEscalation: true
volumeMounts:
- name: clone
mountPath: /docker/clone
command: ['sh', '-c', 'rm -rf /docker/clone/clone-artifacts || true']
containers:
- name: <container_name> # Name of container
image: <tableau_server_image> # Tableau Server in Container Image
env:
- name: LICENSE_KEY
valueFrom:
secretKeyRef:
name: tableau-server-in-a-container-secrets
key: license_key
- name: FILESTORE_MOUNT_PATH
value: /docker/dataengine
- name: CLONE_ARTIFACT_DIR_FOR_INDEPENDENT_CONTAINERS
value: /docker/clone/clone-artifacts
- name: SERVER_FOR_INDEPENDENT_SERVICE_CONTAINERS
value: "1"
- name: EXT_REP_JSON_FILE
value: /docker/external-repository/config.json
- name: TABLEAU_USERNAME
valueFrom:
secretKeyRef:
name: tableau-server-in-a-container-secrets
key: tableau_username
- name: TABLEAU_PASSWORD
valueFrom:
secretKeyRef:
name: tableau-server-in-a-container-secrets
key: tableau_password
resources:
requests:
memory: 40Gi
cpu: 15
limits:
memory: 40Gi
cpu: 15
ports:
- containerPort: 8080
volumeMounts:
- name: configmount
mountPath: /docker/config/config.json
subPath: config.json
- name: externalrepomount
mountPath: /docker/external-repository
- name: datadir1
mountPath: /var/opt/tableau
- name: bootstrap
mountPath: /docker/config/bootstrap
- name: clone
mountPath: /docker/clone
- name: dataengine
mountPath: /docker/dataengine
imagePullPolicy: IfNotPresent
startupProbe:
exec:
command:
- /bin/sh
- -c
- /docker/server-ready-check
initialDelaySeconds: 300
periodSeconds: 15
timeoutSeconds: 30
failureThreshold: 200
readinessProbe:
exec:
command:
- /bin/sh
- -c
- /docker/server-ready-check
periodSeconds: 30
timeoutSeconds: 60
livenessProbe:
exec:
command:
- /bin/sh
- -c
- /docker/alive-check
initialDelaySeconds: 600
periodSeconds: 60
timeoutSeconds: 60
volumes:
- name: configmount
configMap:
name: configfile
- name: externalrepomount
configMap:
name: extrepojsonfile
- name: datadir1
persistentVolumeClaim:
claimName: datadir1
- name: bootstrap
persistentVolumeClaim:
claimName: bootstrapnfs
- name: clone
persistentVolumeClaim:
claimName: clonenfs
- name: dataengine
persistentVolumeClaim:
claimName: dataenginenfs

Backgrounder Pod Config

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: logvolume
namespace: <namespace>
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backgrounder
labels:
app: backgrounder
namespace: <namespace>
spec:
replicas: 2
selector:
matchLabels:
app: backgrounder
template:
metadata:
labels:
app: backgrounder
spec:
securityContext:
runAsUser: 999
runAsGroup: 998
fsGroup: 998
fsGroupChangePolicy: "OnRootMismatch"
hostname: backgrounder
dnsPolicy: "None"
dnsConfig:
nameservers:
- <dns_ip> # DNS IP for resolving container hostnames
searches:
- <service_name>.<namespace>.svc.<cluster_domain>.<example> # SRV Record
- <namespace>.svc.<cluster_domain>.<example> # SRV Record
- svc.<cluster_domain>.<example> # SRV Record
- <cluster_domain>.<example> # SRV Record
options:
- name: ndots
value: "5"
initContainers:
- name: init-myservice
image: busybox # This init-container is optional (as long as there is a mechanism to set the log volume directory permissions and the pod waits for clone artifacts)
securityContext:
runAsUser: 0
allowPrivilegeEscalation: true
env:
- name: CLONE_ARTIFACT_DIR_FOR_INDEPENDENT_CONTAINERS
value: /docker/clone/clone-artifacts
volumeMounts:
- name: logmount
mountPath: /var/log/tableau
- name: clone-volume
mountPath: /docker/clone
command: ['sh', '-c', "chmod 777 /var/log/tableau && while [ ! -d ${CLONE_ARTIFACT_DIR_FOR_INDEPENDENT_CONTAINERS} ]; do sleep 5; done"]
containers:
- name: backgrounder
image: <backgrounder_image> # Backgrounder Image
ports:
- containerPort: 8600
imagePullPolicy: IfNotPresent
readinessProbe:
exec:
command:
- /bin/sh
- -c
- /tsm_docker_utils/status_check.sh | grep -E 'ACTIVE|BUSY'
periodSeconds: 30
timeoutSeconds: 60
livenessProbe:
exec:
command:
- /bin/sh
- -c
- /tsm_docker_utils/status_check.sh | grep -E 'ACTIVE|BUSY'
initialDelaySeconds: 600
periodSeconds: 60
timeoutSeconds: 60
volumeMounts:
- name: logmount
mountPath: /var/log/tableau
- name: clone-volume
mountPath: /docker/clone
- name: dataengine-volume
mountPath: /docker/dataengine
- name: temp
mountPath: /var/opt/tableau/tableau_server/data/tabsvc/temp
env:
- name: ROOT_LOG_DIR
value: /var/log/tableau
- name: CLONE_ARTIFACT_DIR_FOR_INDEPENDENT_CONTAINERS
value: /docker/clone/clone-artifacts
- name: FILESTORE_MOUNT_PATH
value: /docker/dataengine
- name: dataserver
image: <dataserver_image> # Dataserver Image
ports:
- containerPort: 8400
imagePullPolicy: IfNotPresent
readinessProbe:
exec:
command:
- /bin/sh
- -c
- /tsm_docker_utils/status_check.sh | grep -E 'ACTIVE|BUSY'
periodSeconds: 30
timeoutSeconds: 60
livenessProbe:
exec:
command:
- /bin/sh
- -c
- /tsm_docker_utils/status_check.sh | grep -E 'ACTIVE|BUSY'
initialDelaySeconds: 600
periodSeconds: 60
timeoutSeconds: 60
volumeMounts:
- name: logmount
mountPath: /var/log/tableau
- name: clone-volume
mountPath: /docker/clone
- name: dataengine-volume
mountPath: /docker/dataengine
- name: temp
mountPath: /var/opt/tableau/tableau_server/data/tabsvc/temp
env:
- name: ROOT_LOG_DIR
value: /var/log/tableau
- name: CLONE_ARTIFACT_DIR_FOR_INDEPENDENT_CONTAINERS
value: /docker/clone/clone-artifacts
- name: FILESTORE_MOUNT_PATH
value: /docker/dataengine
- name: gateway
image: <gateway_image> # Gateway Image
ports:
- containerPort: 8080
imagePullPolicy: IfNotPresent
readinessProbe:
exec:
command:
- /bin/sh
- -c
- /tsm_docker_utils/status_check.sh | grep -E 'ACTIVE|BUSY'
periodSeconds: 30
timeoutSeconds: 60
livenessProbe:
exec:
command:
- /bin/sh
- -c
- /tsm_docker_utils/status_check.sh | grep -E 'ACTIVE|BUSY'
initialDelaySeconds: 600
periodSeconds: 60
timeoutSeconds: 60
volumeMounts:
- name: logmount
mountPath: /var/log/tableau
- name: clone-volume
mountPath: /docker/clone
- name: dataengine-volume
mountPath: /docker/dataengine
- name: temp
mountPath: /var/opt/tableau/tableau_server/data/tabsvc/temp
env:
- name: ROOT_LOG_DIR
value: /var/log/tableau
- name: CLONE_ARTIFACT_DIR_FOR_INDEPENDENT_CONTAINERS
value: /docker/clone/clone-artifacts
- name: FILESTORE_MOUNT_PATH
value: /docker/dataengine
- name: hyper
image: <hyper_image> # Hyper Image
ports:
- containerPort: 8200
imagePullPolicy: IfNotPresent
readinessProbe:
exec:
command:
- /bin/sh
- -c
- /tsm_docker_utils/status_check.sh | grep -E 'ACTIVE|BUSY'
periodSeconds: 30
timeoutSeconds: 60
livenessProbe:
exec:
command:
- /bin/sh
- -c
- /tsm_docker_utils/status_check.sh | grep -E 'ACTIVE|BUSY'
initialDelaySeconds: 600
periodSeconds: 60
timeoutSeconds: 60
volumeMounts:
- name: logmount
mountPath: /var/log/tableau
- name: clone-volume
mountPath: /docker/clone
- name: dataengine-volume
mountPath: /docker/dataengine
- name: temp
mountPath: /var/opt/tableau/tableau_server/data/tabsvc/temp
env:
- name: ROOT_LOG_DIR
value: /var/log/tableau
- name: CLONE_ARTIFACT_DIR_FOR_INDEPENDENT_CONTAINERS
value: /docker/clone/clone-artifacts
- name: FILESTORE_MOUNT_PATH
value: /docker/dataengine
volumes:
- name: clone-volume
nfs:
server: <nfs_ip>
path: <mount_path>
- name: dataengine-volume
nfs:
server: <nfs_ip>
path: /dataengine
- name: logmount
persistentVolumeClaim:
claimName: logvolume
- name: temp
emptyDir: {}

Scheduled Scaling Config

apiVersion: v1
kind: ServiceAccount
metadata:
name: backgrounder-scaler-service-account
namespace: <namespace> # Namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scale-backgrounder-pods
namespace: <namespace> # Namespace
subjects:
- kind: ServiceAccount
name: backgrounder-scaler-service-account
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: scale-up-job
namespace: <namespace> # Namespace
spec:
schedule: "0 7 * * *" # Cron Job timing to scale up deployment replicas
jobTemplate:
spec:
template:
spec:
serviceAccountName: backgrounder-scaler-service-account
restartPolicy: OnFailure
containers:
- name: scale
image: bitnami/kubectl:1.21
imagePullPolicy: IfNotPresent
args:
- scale
- --replicas=4
- deployment/backgrounder
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: scale-down-job
namespace: <namespace>
spec:
schedule: "0 9 * * *" # Cron Job timing to scale down deployment replicas
jobTemplate:
spec:
template:
spec:
serviceAccountName: backgrounder-scaler-service-account
restartPolicy: OnFailure
containers:
- name: scale
image: bitnami/kubectl:1.21
imagePullPolicy: IfNotPresent
args:
- scale
- --replicas=2
- deployment/backgrounder

Kubernetes Job to clean Clone Configuration (Optional)

This is a convenience Kubernetes job you might use during testing. If you want to clear out the clone configuration produced by Tableau Server in a Container between distinct deployment runs, you can run a job like this to clean the NFS.

apiVersion: batch/v1
kind: Job
metadata:
name: delete-clone-artifacts-job
namespace: manatee-cluster
spec:
template:
spec:
containers:
- name: delete-clone-artifacts
image: busybox:1.28
command: ['sh', '-c', "rm -rf ${CLONE_ARTIFACT_DIR}"]
env:
- name: CLONE_ARTIFACT_DIR
value: /docker/clone/clone-artifacts
securityContext:
runAsUser: 0
allowPrivilegeEscalation: true
volumeMounts:
- name: clone-volume
mountPath: /docker/clone
restartPolicy: Never
volumes:
- name: clone-volume
nfs:
server: <nfs_ip> # IP for shared NFS directory for clone output
path: /clone

 

 
