Deploy Metric Insights on Kubernetes
Leveraging Kubernetes (K8s) for highly scalable container orchestration is a great option. Because Kubernetes is an open-source framework, deployment is generally the same across the many available providers (Azure, GCP, TKE, etc.). In summary, an initial deployment requires a deployment configuration file and Docker images for each of the Metric Insights services. Let's walk through the process together below.
- Understanding Application Architecture in Kubernetes
- Obtain Docker Registry Credentials
- Create Secrets for Docker Registry
- Create & Upload Secrets for Each MI Service
- Deploy Load Balancers for External Communications to Data Processor & Seed
- Create Deployment Configuration File to Deploy Metric Insights Application
- Confirm Metric Insights is Deployed in Kubernetes Dashboard
- Access Metric Insights Application in Browser
For non-orchestrated environments, see the help article on using Simple Installer.
Below is an architectural diagram of Metric Insights deployed in a Kubernetes namespace. A namespace is a virtual cluster that consists of several Nodes (servers). The Nodes host Pods, each of which is essentially a container. Metric Insights consists of services that each run inside their own container, rooted to a shared file system for persistence.
The deployment scheme consists of the following services deployed in individual pods (1 service per pod):
- Web Master
- Web Slave
- Data Analyzer
- Data Processor
- Remote Data Processor 1
- Remote Data Processor 2
Additional items of note:
- A minimum of 3 Nodes are required for automatic failover
- MySQL is required to host the Metric Insights application database, and it should run on a remote server
- Persistent storage is required for the shared file system
The following Kubernetes versions support the MI deployment manifest:
The following is required to deploy to Kubernetes:
- Access to the Kubernetes Dashboard (Web UI)
- Kubectl command-line tool to manage the Kubernetes Cluster
- Remote database server to host the application database; e.g., MySQL/MariaDB (MariaDB is supported only in MI versions prior to 6.2.0)
- Persistent shared storage; e.g., NFS, Portworx
The following ports must also be open on the network:
- 80, 443 - HTTP/HTTPS ports for UI access
- 2550 - TCP port for the Data Processor cluster within the Kubernetes namespace
- 2551 - TCP port for the Seed service within the Kubernetes namespace
- 32550 - TCP port for external access to the Data Processor cluster
- 32551 - TCP port for external access to the Seed service
- 3306 - MySQL port
- 8080, 8443 - HTTP/HTTPS ports for the REST API Data Processor service
- 8081 - TCP port for the Monitoring Tool
Kubernetes namespace requirements:
- Create a unique namespace for Metric Insights:
- How to deploy a Kubernetes cluster: https://kubernetes.io/docs/setup/
- How to create a unique namespace: https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/#create-new-namespaces
- Mount persistent storage volumes to the namespace. See Select a Storage Class for Persistent Volumes below for more.
- Configure an ingress controller to allow incoming connections to the namespace (UI access in a browser). See Choose an Ingress Controller type below for more.
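As a minimal sketch of the namespace step above (assuming kubectl is already configured for your cluster, and using the name metricinsights purely as an example), a dedicated namespace can be created and verified like this:

```shell
# Create a dedicated namespace for the Metric Insights deployment
# ("metricinsights" is an example name; use your own)
kubectl create namespace metricinsights

# Confirm the namespace exists
kubectl get namespace metricinsights
```

Subsequent kubectl commands in this guide target the namespace via the --namespace flag.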
The main services (web, data analyzer, data processor, seed, monitoring) can be installed separately depending on the needs of your deployment.
2.1. Select a Storage Class for Persistent Volumes
Metric Insights supports two Storage Classes for the application file system, which is shared across pods as persistent volumes: NFS and Portworx.
If using an NFS server hosted on Linux, please configure the mounted share as follows:
- In /etc/exports, set the mounted share to /opt/mi with the following options:
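For illustration only, an /etc/exports entry for the /opt/mi share might look like the line below. The export options shown (rw, sync, no_root_squash) are common NFS defaults and an assumption here, not the confirmed MI-required options; verify the exact options with MI Support or your installation guide:

```shell
# /etc/exports (example only; confirm the required options for your deployment)
/opt/mi    *(rw,sync,no_root_squash)
```

After editing /etc/exports, re-export the shares (e.g., with exportfs -ra on most Linux distributions).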
If using Portworx, identify the exact class name to use for the deployment.
- The class name must also be enabled for multi-node volume attachment.
Running the installer to set the storage class for the deployment configuration file:
$ ./installer.py kubernetes --storage-class <nfs/portworx>
2.2. Choose an Ingress Controller type
Metric Insights supports the following Ingress Controller types for incoming traffic from outside of the namespace. For example, users opening the application in a browser are routed through the Ingress Controller. Supported types are:
- NGINX
- Traefik v1.7
Note that Traefik v1.7 does not support TCP ports, so for external communications to Seed and the Data Processor, additional load balancers must be deployed. See Deploy Load Balancers for External Communications to Data Processor & Seed for more.
Running the installer to set the ingress controller type for the deployment configuration file:
$ ./installer.py kubernetes --ingress-controller-type <nginx/traefik>
3. Obtain Docker Registry Credentials
Contact MI Support for access to the official Metric Insights Docker Registry. Credentials are needed to pull docker images for each Metric Insights service.
- Note, the MI Docker Registry address (docker.metricinsights.com) is specified in the deployment configuration file.
4. Create Secrets for Docker Registry
Before deploying to Kubernetes, the Docker Registry credentials must be registered as a Secret for Kubernetes to reference. Metric Insights uses a secret of type docker-registry to authenticate against a Docker registry and pull Docker images.
- Note, a Secret is a Kubernetes object for storing and managing sensitive information such as passwords and OAuth tokens. See Kubernetes Secrets for more.
Create Secret for Docker Registry credentials using kubectl:
kubectl --namespace <MI-namespace> create secret docker-registry <secret-name> --docker-server <docker-registry-url> --docker-username <username> --docker-password <password> --docker-email <email-address>
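As a quick sanity check (assuming the same namespace and secret name used above), you can confirm the secret was stored and has the expected type:

```shell
# List the secret; the TYPE column should read kubernetes.io/dockerconfigjson
kubectl --namespace <MI-namespace> get secret <secret-name>
```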
5. Create & Upload Secrets for Each MI Service
Metric Insights provides template files (environment variables) for creating the secrets for each service. The templates are available in the installer directory:
- Copy each template file from the installer directory to your working directory
- The template files are saved with the extension .example (e.g., web.env.example). Rename each file by removing the .example extension.
- Open each file in an editor and update all fields. Ensure passwords are consistent between files.
- You can encode passwords of your choice or generate random passwords.
- To encode, run
echo -n '<password>' | base64
- To generate a random password, run
openssl rand -base64 8 | tr -d /=+ | cut -c -11
- Create the secrets for each service by uploading each file to the namespace using kubectl:
$ kubectl --namespace <MI-namespace> create secret generic metricinsights-web --from-file web.env
$ kubectl --namespace <MI-namespace> create secret generic metricinsights-dataprocessor --from-file dataprocessor.env
$ kubectl --namespace <MI-namespace> create secret generic metricinsights-seed --from-file seed.env
$ kubectl --namespace <MI-namespace> create secret generic metricinsights-mysql-root-password --from-file mysql.secret
$ kubectl --namespace <MI-namespace> create secret generic metricinsights-data-analyzer --from-file data-analyzer.env
$ kubectl --namespace <MI-namespace> create secret generic metricinsights-monitoring --from-file monitoring.env
6. Deploy Load Balancers for External Communications to Data Processor & Seed
This step is critical if you plan on integrating with BI tools that require a Remote Data Processor to run on a Windows machine (e.g., Power BI, Qlik Sense, TIBCO Spotfire, etc.).
For the Remote Data Processor to communicate with the Local Data Processor in Kubernetes, deploy additional Load Balancers to handle external TCP traffic. Make sure the following ports are open on the load balancers:
- 32550 (data processor)
- 32551 (seed)
Note, additional load balancers are needed because K8s Ingress Controller types like Traefik only handle HTTP/HTTPS traffic, not TCP. By comparison, with an ingress controller like NGINX you can either use a combination of the ingress controller and NodePorts for external communications, or deploy load balancers as their own service (separate pods).
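As one hedged sketch of the standalone load balancer approach, a Kubernetes Service of type LoadBalancer can expose the external TCP port for the Data Processor. The service name and selector labels below are assumptions for illustration and must match the labels actually used in your deployment manifest; an analogous service would expose port 32551 for Seed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: metricinsights-dataprocessor-lb   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: metricinsights-dataprocessor     # assumption: must match your pod labels
  ports:
  - name: dataprocessor
    port: 32550        # external port for the Remote Data Processor
    targetPort: 2550   # internal Data Processor cluster port
```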
For the public hostname, we recommend taking the hostname of the K8s master node and appending "dataprocessor" or "seed" to keep things simple. For example, if the hostname of the master node (the hostname by which users access the UI in a browser) is customName.companyName.com, then you can map the Data Processor LB to customName-dataprocessor.companyName.com.
Once the load balancers are deployed and a public hostname is set, update the following parameters in either the Data Processor secrets file (dataprocessor.env) or in the K8 deployment manifest file (metricinsights-6.x.x.yml):
Here's an example for the deployment configuration file:
...
env:
- name: SPRING_PROFILES_ACTIVE
  value: "singleton,swagger,admin"
- name: DATAPROCESSOR_HOSTNAME
  value: "metricinsights-dataprocessor"
- name: DATAPROCESSOR_SEED_HOSTNAME
  value: "metricinsights-seed"
- name: DATAPROCESSOR_SEED_BIND_PORT
  value: "2551"
...
7. Create Deployment Configuration File to Deploy Metric Insights Application
To create a Kubernetes Deployment configuration file (also called a manifest), use the MI installer to generate a yaml file. Here's an example where we are setting the following values:
- Storage Type = NFS
- Ingress Controller = Nginx
- Private Docker Registry
- Data Processor Hostname
If the remote DB server has the same timezone as the MI app:
./installer.py kubernetes --storage-class nfs --nfs-server-address <nfs.example.com> --ingress-controller-type nginx --hostname <MI-hostname> --dp-hostname <dataprocessor_hostname> --registry <registry-url> --timezone <MI app timezone> -o <manifest filename>.yml
If the remote DB server has a different timezone than the MI app:
./installer.py kubernetes --storage-class nfs --nfs-server-address <nfs.example.com> --ingress-controller-type nginx --hostname <MI-hostname> --dp-hostname <dataprocessor_hostname> --registry <registry-url> --timezone <MI app timezone> --mysql-timezone <remote database server timezone> -o <manifest filename>.yml
The key here is the -o option for the output file: specify a file name with a .yml extension (YAML file).
Run ./installer.py kubernetes -h for more options.
Next, deploy the Metric Insights application to the K8 namespace using the newly created deployment configuration file:
$ kubectl --namespace <MI-namespace> apply -f <manifest filename>.yml
You should see the services and pods being created as soon as the yaml file is applied. The kubectl apply output should look something like this:
$ kubectl --namespace <MI-namespace> apply -f <manifest filename>.yml
service/metricinsights-web created
deployment.apps/metricinsights-web-master created
deployment.apps/metricinsights-web-slave created
service/metricinsights-seed created
deployment.apps/metricinsights-seed created
service/metricinsights-dataprocessor created
service/metricinsights-data-analyzer created
deployment.apps/metricinsights-data-analyzer created
persistentvolume/metricinsights-v613-data created
persistentvolumeclaim/metricinsights-v613-data created
persistentvolume/metricinsights-v613-ssl created
persistentvolumeclaim/metricinsights-v613-ssl created
persistentvolume/metricinsights-v613-external-libs created
persistentvolumeclaim/metricinsights-v613-external-libs created
ingress.extensions/metricinsights-ingress-nginx created
. . .
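To follow the rollout from the command line (an optional check alongside the Dashboard step below, using the same namespace placeholder), you can watch the pods until each reports a Running status:

```shell
# Watch pod status during the rollout; Ctrl+C to stop
kubectl --namespace <MI-namespace> get pods -w

# Inspect a pod that stays in Pending or CrashLoopBackOff
kubectl --namespace <MI-namespace> describe pod <pod-name>
```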
8. Confirm Metric Insights is Deployed in Kubernetes Dashboard
Once the deployment is complete, log into the Kubernetes Dashboard to confirm Metric Insights is available. See the image below as a reference.
9. Access Metric Insights Application in Browser
Now that Metric Insights is up and running in Kubernetes, the web UI is accessible through the requested hostname or IP address:
- The hostname or IP address can point either to the Ingress Controller or to the Kubernetes worker node on which the Web pod is running (port 80/443).
To learn how to administer Metric Insights in Kubernetes, see Administering Metric Insights in Kubernetes.