Deploying Container Orchestration with Docker Swarm

Docker Swarm is an excellent option for container orchestration if any of the following conditions apply:

  • Need to utilize existing hardware already procured for Metric Insights
  • Don't have a cloud service provider to take advantage of orchestration tools like:
    • AWS Kubernetes (EKS) or AWS Native Orchestration (ECS)
    • Azure Kubernetes (AKS)
    • GCP Kubernetes (GKE)
  • Don't have an on-premise Kubernetes or OpenShift cluster to deploy to

In terms of hardware, Docker Swarm requires Metric Insights to be deployed across a minimum of two application servers, with each server serving as a node for the containers to run on.

  • NOTE: For best fault tolerance, three application nodes are recommended.

See Operating system Linux package requirements for v6.1+ for the host server package requirements.

One downside of Docker Swarm is that host nodes are not auto-managed: if a host node goes down, a replacement node is not automatically brought up for the swarm (as is typically the case with Kubernetes/OpenShift).

Also, Docker Swarm is managed via the command line (although third-party GUIs are available).

1. Architectural Diagram of Docker Swarm

MI v7.0.1:

This diagram highlights a basic 2-node Docker Swarm deployment with primary services (containers) on Node 1 and additional Web services on Node 2.

Required services:

  1. Web Master, Web Slave: The application's user interface.
  2. Data Analyzer: Provides global search capabilities within the MI application.
  3. Data Processor: Manages the integration between MI and external BI services.
  4. Console: Monitors the application's services and their status.
  5. Redis: Handles internal caching for optimized performance.
  6. Image Generator: Renders images directly from web pages.

Optional services:

  • Remote Data Processor 1, Remote Data Processor 2: Services for BI Tools that require integration from a Windows environment instead of Linux.
MI v6.4.5:

This diagram highlights a basic 2-node Docker Swarm deployment with primary services (containers) on Node 1 and additional Web services on Node 2.

Required services:

  1. Web Master, Web Slave: The application's user interface.
  2. Data Analyzer: Provides global search capabilities within the MI application.
  3. Data Processor: Manages the integration between MI and external BI services.
  4. Seed: Works together with Data Processor to manage the integration between MI and external BI services.
  5. Monitoring: Monitors the application's services and their status.
  6. Redis: Handles internal caching for optimized performance.

Optional services:

  • Remote Data Processor 1, Remote Data Processor 2: Services for BI Tools that require integration from a Windows environment instead of Linux.

2. Prerequisites

  1. Docker-CE with swarm mode enabled
  2. Connect the required list of nodes to the Docker Swarm cluster
  3. Credentials for your private Docker Registry to pull Metric Insights docker images
    • You can also pull from Metric Insights' official docker registry (contact [email protected] for credentials)
  4. NFS Share mounted to all nodes to store the Metric Insights file system
    • This ensures continuity of operations: critical application and data files survive, and containers across the cluster continue to run, should any node go down (a setup sketch covering swarm mode and the NFS mount follows the port lists below).
  5. Ensure the following ports are open in the load balancer:

MI v7.0.1a:

  • 80, 443 - HTTP and HTTPS ports for the Web UI Service (default redirection to 443)
  • 32550 - TCP port for the Data Processor cluster
  • 3306 - MySQL port for external access
  • 8080, 8443 - HTTP and HTTPS ports for the REST API Data Processor Service
  • 8081 - TCP port for Console Tool

 

MI v6.4.5:

  • 80, 443 - HTTP and HTTPS ports for the Web UI Service (default redirection to 443)
  • 2550 - TCP port for the Data Processor cluster
  • 2551 - TCP port for the Seed Node Service
  • 3306 - MySQL port for external access
  • 8080, 8443 - HTTP and HTTPS ports for the REST API Data Processor Service
  • 8081 - TCP port for Monitoring Tool
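
The sketch below shows one way to satisfy prerequisites 1, 2, and 4 (the manager IP, join token, and NFS export are illustrative placeholders; adjust for your environment):

# On the first (manager) node: enable swarm mode
docker swarm init --advertise-addr <manager-ip>

# On each additional node: join the cluster with the token printed by "swarm init"
docker swarm join --token <join-token> <manager-ip>:2377

# On every node: mount the NFS share that will hold the MI file system
# (assumes an export at nfs-server.example.com:/opt/mi)
mount -t nfs nfs-server.example.com:/opt/mi /opt/mi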

3. Preparing for the Deployment

See Docker Commands Cheat Sheet to understand the Docker commands used below.

  1. Install all required packages on both application nodes to support Docker
  2. Provision a remote MySQL server database to store the application database
    • MySQL 8+
      • 8.0.37 for MI v7.0.1a
      • 8.0.32 for MI v6.4.5
    • MySQL root user required for the deployment
  3. Provision an NFS share to mount to all nodes to store the MI filesystem (default path is /opt/mi)
  4. Identify and label the nodes as master and slave:
    • For master node, enter the following docker command to label it as master:
      docker node update --label-add type=master node_master
    • For slave node, enter the following docker command to label it as slave:
      docker node update --label-add type=worker node_slave
  5. Download the Metric Insights installer, then unpack it on the master node (contact [email protected] for the installer tarball). This creates an installer directory from which the installer can be run to generate a Deployment Manifest for Docker Swarm
    • tar xf MetricInsights-Installer-vX.X.X-Full.tar.gz
    • cd MetricInsights-Installer-vX.X.X-Full
  6. Prepare Docker Secret files for each MI service
    • In the installer directory, you can find the template files to generate the secrets in ../utils/orchestration/swarm/secrets
    • Copy the *.env.example files from there to your local directory and rename them to *.env (a shell sketch follows the file lists below)
    • Each template file represents a different MI service:

MI v7.0.1a:

  • web.env
  • console.env
  • dataprocessor.env
  • data-analyzer.env
  • image-generator.env
  • redis.env

MI v6.4.5:

  • web.env
  • seed.env
  • dataprocessor.env
  • data-analyzer.env
  • monitoring.env
  • redis.env
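
A minimal shell sketch of the copy-and-rename step described in item 6 (the template path is the one mentioned above; the destination is your working directory):

# Copy the secret templates and drop the .example suffix
cp ../utils/orchestration/swarm/secrets/*.env.example .
for f in *.env.example; do mv "$f" "${f%.example}"; done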
  7. Edit each template file and update each parameter. Make sure the passwords you generate are consistent throughout the files
  8. Generate the deployment manifest to use with Docker Swarm by running the following from the installer directory (a complete example command appears after step 9):
    • If the remote DB server has the same timezone as the MI app: ./installer.py swarm --timezone <MI app timezone> -o <manifest filename>.yml
    • If the remote DB server has a different timezone than the MI app: ./installer.py swarm --timezone <MI app timezone> --mysql-timezone <remote database server timezone> -o <manifest filename>.yml
    • If the Metric Insights Docker images will be pulled from a private Docker registry, use the --registry option along with the Docker registry URL, as in ./installer.py swarm... --registry docker.metricinsights.com. When providing a registry URL, do not include "https://". The supported values are <hostname> or <hostname>:<port>, without any protocol.

Note: Run ./installer.py swarm --help to see the list of available installer options. See the Basic Console Commands section.

  9. Finally, update the NFS share address in the deployment manifest file:
    • Look for this section and change the address nfs-server.metricinsights.com to the appropriate NFS share:
data:
    driver_opts:
      type: "nfs"
      o: "addr=nfs-server.metricinsights.com,nolock,soft,rw"
      device: ":/opt/mi/data/data"
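
As a concrete example of step 8, a complete manifest-generation command might look like the following (the database host, credentials, timezone, registry, and output filename are illustrative; see the Basic Console Commands section for all options):

./installer.py swarm \
    --db-hostname mysql.example.com \
    --db-user root \
    --db-password '<mysql-root-password>' \
    --timezone America/New_York \
    --registry docker.metricinsights.com \
    -o mi-swarm.yml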

4. Deployment

To deploy the stack to Docker Swarm using the deployment manifest, use the docker stack deploy command:

$ docker stack deploy -c mi-swarm.yml --with-registry-auth mi
Creating network mi_metricinsights_net 
Creating secret mi_dataprocessor 
Creating secret mi_data-analyzer 
Creating secret mi_console 
Creating secret mi_redis 
Creating secret mi_image-generator 
Creating secret mi_web 

Creating service mi_dataprocessor 
Creating service mi_data-analyzer 
Creating service mi_console 
Creating service mi_redis 
Creating service mi_image-generator 
Creating service mi_web-master 
Creating service mi_web-slave
  • Note, the docker stack deploy command will create the docker secrets required for each service based on the *.env files. There is no need to create the secrets yourself.
  • Please see Docker Commands Cheat Sheet for a list of docker commands to use to manage the deployment.
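
As a quick post-deployment check (assuming the stack name mi used above), the state of the services can be inspected with standard Docker commands:

# List the services in the stack and how many replicas of each are running
docker stack services mi

# Show the individual tasks (containers), the node each runs on, and any errors
docker stack ps mi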

If you need to customize some processes after the application deployment, see the Configuring Custom Components article.

5. Upgrading Metric Insights to a Newer Version

To upgrade the deployment to a newer version of Metric Insights, simply generate a new config file using the installer for the new release, then redeploy using docker stack deploy.
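
For illustration, an upgrade might look like the sequence below (the version number, installer options, and manifest filename are placeholders; reuse the same options and secrets you deployed with):

# Unpack the installer for the new release on the master node
tar xf MetricInsights-Installer-vY.Y.Y-Full.tar.gz
cd MetricInsights-Installer-vY.Y.Y-Full

# Regenerate the deployment manifest with the same options used previously
./installer.py swarm --timezone <MI app timezone> -o mi-swarm.yml

# Redeploy the stack; Swarm updates the services to the new images
docker stack deploy -c mi-swarm.yml --with-registry-auth mi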

6. Basic Console Commands

Basic console commands can be checked by running ./installer.py swarm --help.

The following list of utilities is available for use on the host.

Note, all of these tools become available only if the Web Component is installed.

Optional Parameters
  • -h, --help - Show this help message and exit
  • -o OUTPUT, --output OUTPUT - Save Metric Insights Swarm deployment config to a file.
  • --nfs-server-address NFS_SERVER_ADDRESS - Set NFS server address to connect the network shared data folder.
  • --nfs-shared-folder NFS_SHARED_FOLDER - Set NFS network shared data folder path. (default: /opt/mi/data)
  • --hostname HOSTNAME - Web service additional hostname. (default: None)
  • --web-instances-count WEB_INSTANCES_COUNT - Set the number of web instances. Possible values: >=1. (default: 2)
  • --dp-hostname DP_HOSTNAME - Set hostname of the Data Processor. (default: dataprocessor)
  • --registry REGISTRY - Docker registry URL that will be used for deploying MI components. Example: <hostname> or <hostname>:<port>. (default: None)
  • --db-hostname DB_HOSTNAME - MySQL hostname of the Database Server.
  • --db-name DB_NAME - MySQL database name. (default: dashboard)
  • --db-port DB_PORT - MySQL port for db-hostname. (default: 3306)
  • --db-user DB_USER - MySQL admin user name to init the Metric Insights database.
  • --db-password DB_PASSWORD - MySQL admin user password.
  • --timezone TIME_ZONE - Set time zone. (default: UTC)
  • --mysql-timezone MYSQL_TIME_ZONE - Set MySQL time zone if the MySQL engine has a different timezone than the application. If not specified, the value from the --timezone option is used.
  • --shared-drive-folder SHARED_DRIVE_FOLDER - Enable specification for a shared drive from host to web container.
  • --shared-drive-address SHARED_DRIVE_ADDRESS - Set NFS server address. (default: None)
  • --high-load - Enable high-load configuration for the Web service. This will optimize the system for several thousand concurrent users.
  • --enable-remote-execution - Allow remote command execution.
  • --require-2mfa - Require 2MFA for MI Console users.
  • --da-cpu-number DA_CPU_NUMBER - Set the number of Data-Analyzer search processes. (default: 2)
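
As a usage illustration only (all values are hypothetical), several of these options can be combined in a single run:

./installer.py swarm \
    --web-instances-count 3 \
    --high-load \
    --nfs-server-address nfs-server.example.com \
    --timezone UTC \
    -o mi-swarm.yml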