Deployment for environments with Container Orchestration

All 6.x releases of Metric Insights can be deployed using the Kubernetes container orchestration platform. The main services (web, data processor, data processor seed node, and data analyzer) can be installed separately depending on the needs of your configuration.

Requirements:

  • kubectl command-line tool to manage the Kubernetes Cluster
  • Database as an external service (e.g. MySQL/MariaDB, Amazon RDS or equivalent)
  • Network shared storage (e.g. NFS, EFS, Portworx)

Deployment process:

  1. Obtain Docker Registry credentials
  2. Create a Secret for Kubernetes
  3. Identify the Data Processor's future IP address (or DNS name)
  4. Upload MI secret files into Kubernetes
  5. Deploy the MI application
  6. Access Metric Insights


MI app deployment in a Kubernetes cluster (6.1.0)

Below is an example of how a Metric Insights application can be deployed inside a Kubernetes cluster.

Deployment Scheme:

  1. Web master service (required)
  2. Data analyzer service (required)
  3. Web slave service (required)
  4. Data Processor service (required)
  5. Data Processor seed service (required)
  6. Remote Data Processor service #1  (optional)
  7. Remote Data Processor service #2 (optional)

Physical Nodes (servers):

  1. N nodes inside a Kubernetes cluster (minimum 3 Nodes are required for failover)
  2. MySQL service (required component that can either be deployed inside a Kubernetes Cluster or on a remote server)
  3. Remote NFS shared storage (optional component that can either be deployed inside a Kubernetes Cluster or on a remote server)

1. Obtain Docker Registry credentials

Contact Support to get a login and password for the MI private Docker Registry. Each customer has their own credentials for pulling images from Metric Insights' private Docker Registry.

2. Choose Deployment Method (Kubernetes, Amazon ECS)

Amazon ECS Prerequisites:

  1. Database (RDS or EC2 instance with custom database deployment)
  2. EFS or custom NFS shared storage
  3. Optional: If utilizing a private registry (non-Metric Insights), ensure that you have those credentials available.
  4. Skip ahead to Step 8


Kubernetes Prerequisites:

  1. Deploy cluster with a unique namespace for Metric Insights: 
    1. Deploy Kubernetes cluster: https://kubernetes.io/docs/setup/
    2. If cluster already exists, add a unique namespace for Metric Insights: https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/#create-new-namespaces
    3. Azure AKS: https://docs.microsoft.com/en-us/azure/aks/
    4. Amazon EKS: https://aws.amazon.com/eks/
  2. Mount persistent storage volume to the namespace. NFS shared storage and the Portworx storage type are both supported. For Portworx, the exact class name used is required for deployment. For NFS, proceed with the steps below:
    1. Check the contents of /etc/exports on the server that will be used as the NFS shared storage:
      cat /etc/exports 
    2. The NFS should be mounted to /opt/mi by default; all Kubernetes containers need to have the same configuration:
      /opt/mi 192.168.33.200(rw,fsid=1,crossmnt,no_subtree_check,no_root_squash,async)
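The installer normally generates the storage objects for you (note the persistentvolume/cache-volume entries in the deployment output later in this article), but for reference, a minimal hand-written NFS PersistentVolume and claim matching the export above might look like the following sketch. The object names and capacity are illustrative assumptions; only the server address and path come from the example export:

```yaml
# Illustrative NFS PersistentVolume and claim; names and capacity are assumptions
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mi-shared-storage
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.33.200   # the NFS server from /etc/exports above
    path: /opt/mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mi-shared-storage
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # bind to the pre-provisioned PV above
  resources:
    requests:
      storage: 50Gi
```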

3. Create a Secret for Kubernetes

Before deploying in Kubernetes, Docker Registry credentials must be registered as a Secret (a Kubernetes object that lets you store and manage sensitive information, such as passwords and OAuth tokens). The Kubernetes cluster uses a Secret of type docker-registry to authenticate with a container registry and pull a private image.

Create a Secret with Docker Registry credentials using the following command:

kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

WHERE:

  1. regcred: the name of your Secret
  2. your-registry-server: docker.metricinsights.com:5002
  3. your-name: your Docker username
  4. your-pword: your Docker password
  5. your-email: your email
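Once created, the Secret is referenced from each Deployment's pod spec via imagePullSecrets. The fragment below is a sketch of that wiring; the container name and image path are placeholders, not the actual MI image names:

```yaml
# Fragment of a Deployment pod spec; container name and image path are placeholders
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred                              # the Secret created above
      containers:
        - name: web
          image: docker.metricinsights.com:5002/<image>:<tag>
```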

4. Identify the Data Processor's future IP address (or DNS name)

The Data Processor's future IP address (or DNS name) is required for the Remote Data Processor to work correctly.

These parameters can either be specified in the Data Processor Secret file (dataprocessor.env) or in the deployment configuration file (metricinsights-6.x.x.yml):

  • DATAPROCESSOR_HOSTNAME can be left as default
  • DATAPROCESSOR_SEED_HOSTNAME must be provided so that a Remote Data Processor on a server outside the Kubernetes cluster can connect

View example below:

...
        env:
          - name: SPRING_PROFILES_ACTIVE
            value: "singleton,swagger,admin"
          - name: DATAPROCESSOR_HOSTNAME
            value: "metricinsights-dataprocessor"
          - name: DATAPROCESSOR_SEED_HOSTNAME
            value: "metricinsights-seed"
          - name: DATAPROCESSOR_SEED_BIND_PORT
            value: "2551"
...

5. Upload the MI Secret files into Kubernetes

MI Secret files can be uploaded into Kubernetes using the data-analyzer.env, web.env, dataprocessor.env, seed.env, and mysql.secret files. Templates for each of the files are available in the installer directory: MetricInsights-Installer-v6.x.x-Full/utils/orchestration/kubernetes/secrets/

  1. Update the fields in each file that represent the major MI services: MI Web (UI), the Data Processor and Seed Node components, and the Data-Analyzer service, as well as the MySQL root password
  2. Upload data-analyzer.env, web.env, dataprocessor.env, seed.env, and mysql.secret containing the Secrets into Kubernetes using kubectl:
$ kubectl create secret generic metricinsights-data-analyzer --from-file data-analyzer.env [6.1.0]
$ kubectl create secret generic metricinsights-mysql-root-password --from-file mysql.secret
$ kubectl create secret generic metricinsights-web --from-file web.env
$ kubectl create secret generic metricinsights-seed --from-file seed.env
$ kubectl create secret generic metricinsights-dataprocessor --from-file dataprocessor.env
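The generated deployment configuration (metricinsights-6.x.x.yml) wires these Secrets to the containers; the exact mechanism is defined there. As a sketch of one common Kubernetes pattern (the volume name and mount path are assumptions, not the actual MI layout), a Secret created with --from-file can be mounted as a file inside a container:

```yaml
# Sketch: mounting the metricinsights-web Secret as a file in the web container
# (volume name and mount path are assumptions; the generated yml defines the real layout)
spec:
  template:
    spec:
      volumes:
        - name: web-env
          secret:
            secretName: metricinsights-web
      containers:
        - name: web
          volumeMounts:
            - name: web-env
              mountPath: /etc/mi/secrets   # assumed path
              readOnly: true
```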

6. Deploy the MI application

For MI Version 6.1.0, use the following command:

$ ./installer.py kubernetes --storage-class nfs --nfs-server-address <nfs.example.com> -o metricinsights-6.1.0.yml

Deploy the MI app according to the provided deployment configuration file:

$ kubectl apply -f metricinsights-6.1.0.yml 
service/metricinsights-web unchanged 
deployment.apps/metricinsights-web configured 
deployment.apps/metricinsights-web-replica configured 
service/metricinsights-seed unchanged 
deployment.apps/metricinsights-seed configured 
service/metricinsights-dataprocessor unchanged 
deployment.apps/metricinsights-dataprocessor configured 
service/metricinsights-rdc1 unchanged 
deployment.apps/metricinsights-rdc1 configured 
service/metricinsights-rdc2 unchanged 
deployment.apps/metricinsights-rdc2 configured 
persistentvolume/cache-volume unchanged 
persistentvolumeclaim/cache-volume unchanged
...

6.1. Select a StorageClass in Kubernetes

The --storage-class option sets the StorageClass for the shared data folder. Possible values are: nfs, portworx.

$ ./installer.py kubernetes --storage-class nfs --nfs-server-address <nfs server address>

6.2. Choose an Ingress Controller type

The --ingress-controller-type option sets the Ingress Controller type. Possible values are: nginx, traefik.

$ ./installer.py kubernetes --ingress-controller-type nginx --hostname <host name>

6.3. Deployed components will be displayed in the Kubernetes UI

7. Access Metric Insights

Once the deployment is finished, Metric Insights becomes accessible through the requested IP address:

  • It can be the address of the Load Balancer or the direct address of a Kubernetes Node with an external port

What's next?

Access the MI console from the Kubernetes UI

To access the console of a Metric Insights Docker container deployed in Kubernetes, you can use the Kubernetes Dashboard:

  1. Go to the Navigation Menu > Pods
  2. Select a Pod to open the Pod's Detail Page
  3. Click [EXEC] to open the Console
  4. Use the Console to execute your commands in a container
Access Containers' logs in Kubernetes

By default, each Metric Insights service prints its logs to stdout.

To get logs:

  1. Go to the Navigation Menu > Pods
  2. Select a Pod to open the Pod's Detail Page
  3. Click [LOGS]
  4. Logs will show:
    • General: confirmation that the application is working and the boot process for Containers
    • For the Web Container: Apache logs, HTTP requests, and other provisioning information
    • For the Data Processor Container: logs from the Java application

NOTE: it is possible to download logs directly from your browser.

Access Containers' logs in Splunk

Instead of the default, you can use the Splunk logging driver, which sends container logs to the HTTP Event Collector in Splunk Enterprise and Splunk Cloud.
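For example, the Docker daemon on each node can be pointed at a Splunk HTTP Event Collector via /etc/docker/daemon.json (the token and URL below are placeholders for your own Splunk deployment):

```json
{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-token": "<HEC-token>",
    "splunk-url": "https://splunk.example.com:8088"
  }
}
```

Restart the Docker daemon after changing this file so that newly started containers pick up the logging driver.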

Update the MI application in Kubernetes

If required, you can update the Metric Insights application using the following command:

$ kubectl apply -f metricinsights-6.x.x.yml

8. Generate the configuration file to deploy to Amazon ECS

The configuration file can be generated using the Metric Insights installer package:

  1. Download the installer package to a linux system and unpack
  2. Change into the installer directory, then run the installer with the ecs command and specify a target filename to generate the configuration file:
    $ ./installer.py ecs -o <filename>.json
  3. The configuration file can now be used as a template with AWS CloudFormation to create and deploy the Metric Insights environment

9. Creating the ECS Stack with AWS CloudFormation

Prepare the following:

  1. RDS address with root credentials
  2. EFS address to connect to Metric Insights application

Apply the configuration file through the CloudFormation UI:

  1. Upload the generated json file as a template

  2. Fill out each field, then click Next at the bottom of the page. Some key notes:

  • To generate passwords for each service, you can either run echo -n '<pwd>' | base64 to encode a password of your choice, or run something like openssl rand -base64 8 to auto-generate one for you.
  • Use the full RDS address for the field `DBHostName`
  • Enter the RDS root user in the field `DBRootUserName`
  • Enter the full EFS address in the field `NFSServerAddress`
  • Select all Subnet IDs available in the field `SubnetIDs`
  • The field `WebReplicationsCount` represents the number of web slave containers (secondary to web master)
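The two password commands from the notes above can be run as follows; the sample password is only an illustration:

```shell
# Base64-encode a password of your choice for the CloudFormation fields
echo -n 'MyStr0ngPass' | base64
# -> TXlTdHIwbmdQYXNz

# Or auto-generate a random base64 password
openssl rand -base64 8
```

Note that echo -n matters: without -n, the trailing newline would be encoded into the password as well.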

  3. Click Next to skip through the subsequent pages until you reach the final review window. Check the box to acknowledge that IAM resources might be created on deployment, then click the Update stack button.

  4. Allow incoming connections to RDS from the new ECS/EC2 security group to complete the deployment

  1. As the new ECS Stack is being deployed, go to the EC2 Console and select one of the new EC2s created for ECS
  2. Go to the Security Group field and select the new security group name
  3. Copy the Group ID (e.g., `sg-name`)
  4. Switch to the RDS Console and select the RDS instance being used for ECS
  5. Go to the VPC Security Group field and select the security group name
  6. Switch to the `Inbound` tab and click the Edit button
  7. Add the new EC2 security group to the list then Save (Add Rule > All Traffic > Paste Group ID)

After adding the group, switch back to CloudFormation to monitor the ECS Stack deployment. The deployment should complete in 5-10 minutes.

10. Accessing the Metric Insights Deployment

Once the ECS Stack is deployed, switch back to the EC2 Console and select Load Balancers in the left menu pane. Identify the Load Balancer DNS name to access the Metric Insights application in a browser. For the best user experience, map the Load Balancer DNS name to a user friendly name in Amazon Route 53. Metric Insights is now deployed in ECS and browser ready!

Resources involved in running Metric Insights in ECS

  1. AWS ECS Task Definitions
  2. AWS ECS Cluster
  3. AWS ECS Services
  4. AWS EC2 Auto Scaling group
  5. AWS EC2 Launch Configuration
  6. AWS EC2 Security Groups
  7. AWS Target Groups
  8. AWS Network Load Balancer
  9. IAM Roles
  10. AWS Secret Manager
  11. AWS CloudFormation (only for deployment and updates)

Non-ECS resources in AWS needed for deployment include:

  1. AWS RDS instance based on MariaDB 10.1 (custom parameter group with log_bin_trust_function_creators enabled)
  2. AWS EFS Shared Storage

 
