Deployment for environments with Container Orchestration

All 6.x releases of Metric Insights can be deployed using the Kubernetes container orchestration platform. The main services (web, data processor, data processor seed node, and data analyzer) can be installed separately depending on the needs of your configuration.

Requirements:

  • kubectl command-line tool to manage the Kubernetes Cluster
  • Database as an external service (e.g. MySQL/MariaDB, Amazon RDS or equivalent)
  • Network shared storage (e.g. NFS, EFS, Portworx)
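Before starting, you can confirm that kubectl is installed and can reach the target cluster. A minimal check:

$ kubectl version          # client and server versions
$ kubectl cluster-info     # confirms the API server is reachable
$ kubectl get nodes        # lists the cluster Nodes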

Deployment process:

  1. Obtain Docker Registry credentials
  2. Choose a Deployment Method (Kubernetes, Amazon ECS)
  3. Create a Secret for Kubernetes
  4. Identify Data Processor future IP address (or DNS name)
  5. Upload MI Secret files into Kubernetes
  6. Deploy the MI application
  7. Access Metric Insights


MI app deployment in a Kubernetes cluster (6.1.0)

Below is an example of how a Metric Insights application can be deployed inside a Kubernetes cluster.

Deployment Scheme:

  1. Web master service (required)
  2. Data analyzer service (required)
  3. Web slave service (required)
  4. Data Processor service (required)
  5. Data Processor seed service (required)
  6. Remote Data Processor service #1 (optional)
  7. Remote Data Processor service #2 (optional)

Physical Nodes (servers):

  1. N nodes inside a Kubernetes cluster (a minimum of 3 Nodes is required for failover)
  2. MySQL service (required component that can either be deployed inside a Kubernetes Cluster or on a remote server)
  3. Remote NFS shared storage (optional component that can either be deployed inside a Kubernetes Cluster or on a remote server)

1. Obtain Docker Registry credentials

Contact Support to get a login and password for the MI private Docker Registry. Each customer has their own credentials to pull images from the Metric Insights private Docker Registry.
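If Docker is installed on your workstation, you can verify the credentials before proceeding. A quick check against the registry address used in Step 3 (replace the placeholders with the values provided by Support):

$ docker login docker.metricinsights.com:5002 --username <your-name> --password <your-pword>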

2. Choose Deployment Method (Kubernetes, Amazon ECS)

Amazon ECS Prerequisites:

  1. Database (RDS or EC2 instance with custom database deployment)
  2. EFS or custom NFS shared storage
  3. Optional: If utilizing a custom registry (non-Metric Insights), ensure that you have those credentials available.
  4. Skip ahead to Step 7


Kubernetes Prerequisites:

  1. Deploy cluster with a unique namespace for Metric Insights: 
    1. Deploy Kubernetes cluster: https://kubernetes.io/docs/setup/
    2. If cluster already exists, add a unique namespace for Metric Insights: https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/#create-new-namespaces
    3. Azure AKS: https://docs.microsoft.com/en-us/azure/aks/
    4. Amazon EKS: https://aws.amazon.com/eks/
  2. Mount the NFS shared persistent storage volume (see the example after this list)
    1. Check the contents of /etc/exports on the server that will be used as the NFS shared storage:
      cat /etc/exports
    2. The NFS share should be mounted to /opt/mi by default; all Kubernetes containers need to have the same configuration:
      /opt/mi 192.168.33.200(rw,fsid=1,crossmnt,no_subtree_check,no_root_squash,async)
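A minimal sketch of these prerequisites, assuming the namespace is named metricinsights and using the NFS server address 192.168.33.200 from the export example above:

$ kubectl create namespace metricinsights
$ showmount -e 192.168.33.200                       # confirm the /opt/mi export is visible
$ sudo mount -t nfs 192.168.33.200:/opt/mi /mnt     # optional manual mount test from a Node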

3. Create a Secret for Kubernetes

Before deploying in Kubernetes, Docker Registry credentials must be registered as a Secret (a Kubernetes object that lets you store and manage sensitive information, such as passwords and OAuth tokens). The Kubernetes cluster uses a docker-registry Secret to authenticate with the container registry and pull private images.

Create a Secret with Docker Registry credentials using the following command:

kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

WHERE:

  1. regcred: the name of your Secret
  2. your-registry-server: docker.metricinsights.com:5002
  3. your-name: your Docker username
  4. your-pword: your Docker password
  5. your-email: your email
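For example, with illustrative placeholder values (only the registry server address is real; the username, password, and email below are not):

$ kubectl create secret docker-registry regcred \
    --docker-server=docker.metricinsights.com:5002 \
    --docker-username=acme \
    --docker-password='s3cr3t' \
    --docker-email=admin@acme.example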

4. Identify Data Processor future IP address (or DNS name)

The future IP address (or DNS name) of the Data Processor is required for a Remote Data Processor to work correctly.

These parameters can either be specified in the Data Processor Secret file (dataprocessor.env) or in the deployment configuration file (metricinsights-6.x.x.yml):

  • DATAPROCESSOR_HOSTNAME can be left at its default value
  • DATAPROCESSOR_SEED_HOSTNAME must be provided so that a Remote Data Processor running on a Remote Server outside the Kubernetes cluster can connect

View example below:

...
        env:
          - name: SPRING_PROFILES_ACTIVE
            value: "singleton,swagger,admin"
          - name: DATAPROCESSOR_HOSTNAME
            value: "metricinsights-dataprocessor"
          - name: DATAPROCESSOR_SEED_HOSTNAME
            value: "metricinsights-seed"
          - name: DATAPROCESSOR_SEED_BIND_PORT
            value: "2551"
...
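The same parameters can instead be placed in the dataprocessor.env Secret file described in Step 5; a sketch with a hypothetical external DNS name:

DATAPROCESSOR_HOSTNAME=metricinsights-dataprocessor
DATAPROCESSOR_SEED_HOSTNAME=dataprocessor.example.com
DATAPROCESSOR_SEED_BIND_PORT=2551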

5. Upload the MI Secret files into Kubernetes

MI Secret files can be uploaded into Kubernetes using the data-analyzer.env [6.1.0], web.env, dataprocessor.env, seed.env, and mysql.secret files.

  1. In your file system, create separate Secret files for the MI Web (UI), Data Processor, and Seed Node components, as well as the MySQL root password Secret
  2. Upload data-analyzer.env [6.1.0], web.env, dataprocessor.env, seed.env, and mysql.secret containing the said Secrets into Kubernetes (view the example below)
$ kubectl create secret generic metricinsights-data-analyzer --from-file data-analyzer.env [6.1.0]
$ kubectl create secret generic metricinsights-mysql-root-password --from-file mysql.secret
$ kubectl create secret generic metricinsights-web --from-file web.env
$ kubectl create secret generic metricinsights-seed --from-file seed.env
$ kubectl create secret generic metricinsights-dataprocessor --from-file dataprocessor.env
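You can confirm that the Secrets were created (regcred is the Docker Registry Secret from Step 3):

$ kubectl get secrets
$ kubectl describe secret metricinsights-web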

6. Deploy the MI application

For MI Version 6.1.0, use the following command:

$ ./installer.py kubernetes --storage-class nfs --nfs-server-address <nfs.example.com> -o metricinsights-6.1.0.yml

Deploy the MI app according to the provided deployment configuration file:

$ kubectl apply -f metricinsights-6.1.0.yml 
service/metricinsights-web unchanged 
deployment.apps/metricinsights-web configured 
deployment.apps/metricinsights-web-replica configured 
service/metricinsights-seed unchanged 
deployment.apps/metricinsights-seed configured 
service/metricinsights-dataprocessor unchanged 
deployment.apps/metricinsights-dataprocessor configured 
service/metricinsights-rdc1 unchanged 
deployment.apps/metricinsights-rdc1 configured 
service/metricinsights-rdc2 unchanged 
deployment.apps/metricinsights-rdc2 configured 
persistentvolume/cache-volume unchanged 
persistentvolumeclaim/cache-volume unchanged
...
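Once the manifests are applied, you can watch the Pods start and confirm that the rollouts completed; the Deployment names below are taken from the output above:

$ kubectl get pods -w
$ kubectl rollout status deployment/metricinsights-web
$ kubectl rollout status deployment/metricinsights-dataprocessor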

6.1. Select a StorageClass in Kubernetes

The --storage-class option allows setting a StorageClass for the shared data folder. Possible values are: nfs, portworx.

$ ./installer.py kubernetes --storage-class nfs --nfs-server-address <nfs server address>

6.2. Choose an Ingress Controller type

The --ingress-controller-type option allows setting the Ingress Controller type. Possible values are: nginx, traefik.

$ ./installer.py kubernetes --ingress-controller-type nginx --hostname <host name>

6.3. Deployed components will be displayed in the Kubernetes UI

7. Access Metric Insights

Once the deployment is finished, Metric Insights becomes accessible through the requested IP address:

  • This can be the address of a Load Balancer or the direct address of a Kubernetes Node with an external port
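To find the externally reachable address, you can inspect the Service and Ingress objects; the Service name below follows the deployment example above:

$ kubectl get service metricinsights-web
$ kubectl get ingress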

What's next?

Access the MI console from the Kubernetes UI

To access the console of a Metric Insights Docker container deployed in Kubernetes, you can use the Kubernetes Dashboard:

  1. Go to the Navigation Menu > Pods
  2. Select a Pod to open the Pod's Detail Page
  3. Click [EXEC] to open the Console
  4. Use the Console to execute your commands in a container
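Alternatively, the same console can be reached from the command line; the Pod name here is a placeholder taken from kubectl get pods:

$ kubectl get pods
$ kubectl exec -it <metricinsights-web-pod-name> -- /bin/bash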
Access Containers' logs in Kubernetes

By default, each Metric Insights service prints its logs to stdout.

To get logs:

  1. Go to the Navigation Menu > Pods
  2. Select a Pod to open the Pod's Detail Page
  3. Click [LOGS]
  4. Logs will show:
    • General: application status and the Container boot process
    • For the Web Container: Apache logs, HTTP requests, and other provisioning information
    • For the Data Processor Container: logs from the Java application

NOTE: it is possible to download logs directly from your browser.
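Logs can also be retrieved from the command line; the Pod names are placeholders:

$ kubectl logs <metricinsights-web-pod-name>
$ kubectl logs -f <metricinsights-dataprocessor-pod-name>     # -f follows the log stream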

Access Containers' logs in Splunk

Instead of the default, you can use the Splunk logging driver, which sends container logs to the HTTP Event Collector in Splunk Enterprise and Splunk Cloud.
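One way to enable it is to configure the Docker daemon on each Kubernetes Node; the sketch below assumes the Node's container runtime is Docker, and the Splunk HTTP Event Collector URL and token are placeholders:

$ cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-url": "https://splunk.example.com:8088",
    "splunk-token": "<your-hec-token>",
    "splunk-format": "json"
  }
}
EOF
$ sudo systemctl restart docker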

Update the MI application in Kubernetes

If required, you can update the Metric Insights application using the following command:

$ kubectl apply -f metricinsights-6.x.x.yml
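After applying an updated configuration file, you can confirm that the rollout completed:

$ kubectl rollout status deployment/metricinsights-web
$ kubectl get pods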
