How to Deploy a Cloud Native Monitoring Application on Kubernetes

A guide on how to create a Python monitoring application using Flask, containerize it with Docker, and deploy it to Kubernetes.

Project Overview

Here's a brief overview of the process of building the monitoring app and what the app does:

In this blog post, we will be building a monitoring application using Python and Flask that can monitor the CPU and memory utilization of your computer. The application makes use of the psutil library to retrieve the CPU and memory utilization data.

We start by creating a basic Flask application that renders an HTML page with two gauges showing the CPU and memory utilization, drawn with Plotly.js and fed by the psutil data. We also add a message that is displayed on the page if the CPU or memory utilization goes above 80%.

Next, we containerize the application using Docker. We create a Dockerfile that installs the required Python libraries and copies the application code to the container. We then build a Docker image from the Dockerfile and run a Docker container from the image.

We then deploy the application on Kubernetes using Amazon EKS. We create an ECR repository and push the Docker image to it. We then create an Amazon EKS cluster and worker nodes, and deploy the application on the cluster with Python code that uses the Kubernetes client library to create a Deployment and a Service.

Finally, we port forward and expose the Kubernetes application to our local machine, allowing us to access the monitoring application from a web browser.

Overall, the monitoring application allows us to monitor the CPU and memory utilization of our computer in a scalable and efficient manner, and with Plotly.js we can display the utilization data in a visually rich format.

Prerequisites

Before we begin, make sure you have the following:

  • Python 3.10 or higher installed on your machine

  • Flask web framework

  • Docker installed on your machine

  • AWS CLI installed on your machine

  • eksctl command-line tool installed on your machine

  • Access to an AWS account

  • kubectl command-line tool installed on your machine

  • Access to an EKS cluster

  • VSCode installed

  • Basic Docker and Kubernetes knowledge

Project Architecture

I have used LucidChart to design a sketch architectural diagram that represents what we intend to build. The diagram provides an overview of the system's components, their relationships, and how they fit together.

Step 1: Create the Flask application

Create a new Python file and name it app.py. Copy and paste the following code:

import psutil
# render_template renders an HTML template with dynamic values
from flask import Flask, render_template

app = Flask(__name__)


@app.route("/")
def index():
    # Current CPU and memory utilization, as percentages
    cpu_metric = psutil.cpu_percent()
    mem_metric = psutil.virtual_memory().percent
    message = None
    if cpu_metric > 80 or mem_metric > 80:
        message = "High CPU or Memory utilization detected. Please scale up"
    return render_template("index.html", cpu_metric=cpu_metric, mem_metric=mem_metric, message=message)


if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')

This code defines a Flask application that listens on the root URL / and returns the CPU and memory utilization of your computer as a rich HTML page using render_template.

The psutil library is used to retrieve the CPU and memory utilization data. The CPU utilization is measured as a percentage using the cpu_percent() function, and the memory utilization is read as a percentage from virtual_memory().percent.

If the CPU or memory utilization goes above 80%, a message is displayed on the page asking you to scale up.
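
If you want to see what these values look like before wiring them into Flask, you can call psutil directly in a quick standalone script. This is a minimal sketch; the sample output values are illustrative only, and cpu_percent is given a one-second interval so it measures over that window instead of returning the value since the previous call:

import psutil

# Sample CPU utilization over one second and read the current memory utilization
print(psutil.cpu_percent(interval=1))     # e.g. 12.5 (percent)
print(psutil.virtual_memory().percent)    # e.g. 63.2 (percent)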

Running the Application Locally

To run the application locally, save the code in a file called app.py and run it using the following command:

python3 app.py

This will start the Flask development server and you can access the application by opening a web browser and navigating to http://localhost:5000/

If you are using a Mac, you may encounter the error "Port 5000 already in use". This happens because macOS uses port 5000, Flask's default port, for its AirPlay Receiver service.

To resolve this, go to System Preferences --> Sharing --> uncheck AirPlay Receiver, which is the service occupying port 5000.
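
Alternatively, if you would rather leave AirPlay Receiver enabled, you can run the Flask development server on a different port. This is a minimal sketch, assuming port 5001 is free on your machine; note that the rest of this guide (the Dockerfile, Service, and port-forwarding) assumes the default port 5000:

# In app.py, run the development server on port 5001 instead of the default 5000
if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=5001)

If you do this, open http://localhost:5001/ instead.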

Run the server again and you should be able to open http://localhost:5000/ in your browser:

Next, create a folder named 'templates' and, inside it, a file named 'index.html' with the code below:

<!DOCTYPE html>
<html>

<head>
    <title>System Monitoring</title>
    <script src="https://cdn.plot.ly/plotly-latest.min.js"></script>
    <style>
        .plotly-graph-div {
            margin: auto;
            width: 50%;
            background-color: rgba(151, 128, 128, 0.688);
            padding: 20px;
        }

        .alert {
            color: red;
        }
    </style>
</head>

<body>
    <div class="container">
        <h1>System Monitoring</h1>
        <div id="cpu-gauge"></div>
        <div id="mem-gauge"></div>
        {% if message %}
        <div class="alert">{{ message }}</div>
        {% endif %}
    </div>
    <script>
        var cpu_metric = {{ cpu_metric }}; // Render the dynamic value here
        var mem_metric = {{ mem_metric }}; // Render the dynamic value here

        var cpuGauge = {
            type: "indicator",
            mode: "gauge+number",
            value: cpu_metric,
            gauge: {
                axis: { range: [null, 100] },
                bar: { color: "#1f77b4" },
                bgcolor: "white",
                borderwidth: 2,
                bordercolor: "#ccc",
                steps: [
                    { range: [0, 50], color: "#d9f0a3" },
                    { range: [50, 85], color: "#ffeb84" },
                    { range: [85, 100], color: "#ff5f5f" }
                ],
                threshold: {
                    line: { color: "red", width: 4 },
                    thickness: 0.75,
                    value: cpu_metric,
                }
            }
        };

        var memGauge = {
            type: "indicator",
            mode: "gauge+number",
            value: mem_metric,
            gauge: {
                axis: { range: [null, 100] },
                bar: { color: "#1f77b4" },
                bgcolor: "white",
                borderwidth: 2,
                bordercolor: "#ccc",
                steps: [
                    { range: [0, 50], color: "#d9f0a3" },
                    { range: [50, 85], color: "#ffeb84" },
                    { range: [85, 100], color: "#ff5f5f" }
                ],
                threshold: {
                    line: { color: "red", width: 4 },
                    thickness: 0.75,
                    value: mem_metric
                }
            }
        };

        var cpuGaugeLayout = { title: "CPU Utilization" };
        var memGaugeLayout = { title: "Memory Utilization" };

        Plotly.newPlot('cpu-gauge', [cpuGauge], cpuGaugeLayout);
        Plotly.newPlot('mem-gauge', [memGauge], memGaugeLayout);
    </script>
</body>

</html>

After creating 'index.html', the application renders a rich HTML view of the CPU and memory utilization metrics.

Run python3 app.py again and open your browser; you should see the gauges as shown below:

Step 2: Containerize the application using Docker

Create a new file in the same directory as app.py and name it requirements.txt. It lists the dependency libraries and versions required for the project. Copy and paste the following:

Flask==2.2.3
MarkupSafe==2.1.2
Werkzeug==2.2.3
itsdangerous==2.1.2
psutil==5.8.0
plotly==5.5.0
tenacity==8.0.1
boto3==1.9.148
kubernetes==10.0.1

Create a new file in the same directory as app.py and name it Dockerfile. Copy and paste the following code:

FROM python:3.9-slim-buster

WORKDIR /app

COPY requirements.txt .

# Update the package manager and install required build dependencies
RUN apt-get update && apt-get install -y gcc python3-dev

# Upgrade pip
RUN python3 -m pip install --upgrade pip

# Install the required Python packages
RUN pip3 install --no-cache-dir -r requirements.txt

# Copy the application code to the working directory
COPY . .

# Set the environment variable so the Flask app listens on all interfaces
ENV FLASK_RUN_HOST=0.0.0.0

# Expose the port on which the Flask app will run
EXPOSE 5000

# Start the Flask app when the container is run
CMD ["flask", "run"]

This Dockerfile defines the steps to build a Docker image for a Python application that uses Flask web framework. Here's what each instruction does:

  • FROM python:3.9-slim-buster: This specifies the base image for the container. In this case, the base image is python:3.9-slim-buster.

  • WORKDIR /app: This sets the working directory to /app in the container.

  • COPY requirements.txt .: This copies the requirements.txt file from the local directory to the container's working directory.

  • RUN apt-get update && apt-get install -y gcc python3-dev: This updates the package manager and installs required dependencies such as gcc and python3-dev.

  • RUN python3 -m pip install --upgrade pip: This updates pip to the latest version.

  • RUN pip3 install --no-cache-dir -r requirements.txt: This installs the Python packages specified in requirements.txt.

  • COPY . .: This copies the application code from the local directory to the container's working directory.

  • ENV FLASK_RUN_HOST=0.0.0.0: This sets the environment variable FLASK_RUN_HOST to 0.0.0.0, which allows the Flask app to listen on all network interfaces.

  • EXPOSE 5000: This exposes port 5000 on the container, which is the port on which the Flask app will run.

  • CMD [ "flask", "run"]: This specifies the command that will be executed when the container starts. In this case, it starts the Flask app using the command flask run.

To build the Docker image, run the following command in the same directory as the Dockerfile:

docker build -t my-flask-app .

This will build a Docker image with the tag my-flask-app.

To run the Docker container from the image, run the following command:

docker run -d -p 5000:5000 my-flask-app

This command starts a Docker container and maps port 5000 on the host to port 5000 in the container. The '-d' flag runs the container in detached mode.
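
If you want to confirm from the terminal that the container is serving the application, a short check with Python's standard library is enough. This is a minimal sketch, assuming the container from the previous command is still running and mapped to port 5000:

import urllib.request

# Request the root page served by the container mapped to localhost:5000
with urllib.request.urlopen("http://localhost:5000/") as response:
    print(response.status)        # expect 200
    print(response.read()[:80])   # first bytes of the rendered HTML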

Open a web browser and navigate to http://localhost:5000 to see the render_template HTML output:

Step 3: Create an ECR repository and push the image to it

Assuming you have an AWS account, you can use the AWS CLI to create an Amazon ECR repository and push the Docker image to the repository.

First, create an ECR repository by running the following command:

aws ecr create-repository --repository-name my-cloud-native-repo

The next step is to access the AWS management console and search for "ECR" in the search bar to navigate to the Elastic Container Registry. Once there, you can view the recently created repository.

To push the Docker image that was created on your local machine to the AWS ECR Repository, click on "View Push Commands" to get instructions on how to proceed. Follow the instructions to complete the image push.

OR use the following steps to authenticate and push the image to your repository.

  1. Retrieve an authentication token and authenticate your Docker client to your registry using the AWS CLI:

     aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <your-aws-account-id>.dkr.ecr.us-east-1.amazonaws.com

     Note: if you receive an error using the AWS CLI, make sure that you have the latest versions of the AWS CLI and Docker installed.

  2. After the build is completed, tag your local my-flask-app image so you can push it to this repository:

     docker tag my-flask-app:latest <your-aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/my-cloud-native-repo:latest

  3. Run the following command to push the image to your newly created AWS repository:

     docker push <your-aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/my-cloud-native-repo:latest

This pushes the locally built my-flask-app image to the my-cloud-native-repo repository in ECR.
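
Since boto3 is already listed in requirements.txt, you can also verify the push from Python. This is a minimal sketch, assuming your AWS credentials are configured locally, the region is us-east-1, and the repository is named my-cloud-native-repo as above:

import boto3

# List the images in the ECR repository to confirm the push succeeded
ecr = boto3.client("ecr", region_name="us-east-1")
response = ecr.describe_images(repositoryName="my-cloud-native-repo")

for image in response["imageDetails"]:
    print(image.get("imageTags"), image.get("imagePushedAt"))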

Step 4: Create EKS cluster and nodes

You can use the AWS CLI to create an Amazon EKS cluster and nodes.

First, create an Amazon EKS cluster by running the following command:

eksctl create cluster --name my-flask-app --region <your-aws-region> --nodegroup-name standard-workers --node-type t2.micro --nodes 2 --nodes-min 1 --nodes-max 3 --ssh-access --ssh-public-key <your-public-ssh-key>

Replace <your-aws-region> and <your-public-ssh-key> with the appropriate values for your AWS account.

This command creates a new EKS cluster named my-flask-app and launches two worker nodes of type t2.micro, with a minimum of one node and a maximum of three.

I have chosen to use an AWS free tier t2.micro instance type to minimize expenses. However, this instance type is equipped with only 1 CPU and 1GB memory, which may be inadequate for certain Kubernetes deployments to operate efficiently. If you encounter performance issues or errors, it is recommended to upgrade to an instance type with at least 2 CPUs and 2GB memory, such as the t3.small instance type. Please note that the specific instance type required for your deployment may vary depending on its requirements.

Overall, eksctl automates the process of creating and configuring the necessary AWS resources for an EKS cluster and node groups, including the required IAM roles and policies.
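
When eksctl finishes, it also updates your local kubeconfig to point at the new cluster, which is what the deployment script in the next step relies on. If you want to double-check the active context from Python, here is a minimal sketch using the kubernetes client library already pinned in requirements.txt:

from kubernetes import config

# List the contexts in your kubeconfig and show which one is currently active
contexts, active_context = config.list_kube_config_contexts()
print("Available contexts:", [c["name"] for c in contexts])
print("Active context:", active_context["name"])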

Step 5: Create Kubernetes Deployment and Service using Python

Now, create a Kubernetes Deployment and Service that use the Docker image pushed to ECR.

Create a new file in the same directory as app.py and name it eks.py. Copy and paste the following code:

from kubernetes import client, config

# Load kubernetes configuration
config.load_kube_config()

# create a kubernetes API Client
api_client = client.ApiClient()

# Define the deployment
deployment = client.V1Deployment(
    api_version="apps/v1",  # apiVersion and kind are required by the API server
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="my-flask-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # set the desired number of replicas
        min_ready_seconds=30,  # seconds a new pod must be ready before it is counted as available
        strategy=client.V1DeploymentStrategy(
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(
                max_unavailable=1,  # set the maximum number of unavailable pods during a rolling update
                max_surge=1  # set the maximum number of pods that can be created above the desired number of replicas during a rolling update
            )
        ),
        selector=client.V1LabelSelector(
            match_labels={"app": "my-flask-app"}
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(
                labels={"app": "my-flask-app"}
            ),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="my-flask-container",
                        image="<your-aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/my-cloud-native-repo:latest",
                        ports=[client.V1ContainerPort(container_port=5000)]
                    )
                ]
            )
        )
    )
)

# Create the deployment
api_instance = client.AppsV1Api(api_client)

api_instance.create_namespaced_deployment(
    namespace="default",
    body=deployment
)

# Define the service
service = client.V1Service(
    api_version="v1",  # apiVersion and kind are required by the API server
    kind="Service",
    metadata=client.V1ObjectMeta(name="my-flask-service"),
    spec=client.V1ServiceSpec(
        selector={"app": "my-flask-app"},
        ports=[client.V1ServicePort(port=5000)]
    )
)

# Create the service
api_instance = client.CoreV1Api(api_client)
api_instance.create_namespaced_service(
    namespace="default",
    body=service
)

This Python code is for deploying a Flask application on a Kubernetes cluster.

Firstly, the code imports the required Kubernetes client and configuration modules from the Kubernetes Python library.

Then it loads the Kubernetes configuration using config.load_kube_config(). This step ensures that the code is using the correct Kubernetes cluster for deployment.

Next, the code defines a V1Deployment object that describes the deployment of the Flask application. It includes the desired number of replicas, minimum number of seconds for a pod to become ready, and the rolling update strategy. The deployment object also specifies the label selector, pod template, and container image for the application.

After defining the deployment object, the code creates the deployment by calling the create_namespaced_deployment() method of the AppsV1Api class, passing in the deployment object and the namespace where it should be created.

Finally, the code defines a V1Service object that describes how to access the Flask application. The service object specifies the label selector and the port where the Flask application is listening. The code creates the service object by calling the api_instance.create_namespaced_service() method of the CoreV1Api class, passing in the service object and the namespace where it should be created.

To create the Deployment and Service on the cluster, run the script:

python3 eks.py

Confirm that the Kubernetes Deployment and Service were created:

kubectl get deploy,svc
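
If you prefer to check readiness from Python as well, a short sketch with the same client library can read the Deployment status (the names below match the objects created in eks.py):

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Read the Deployment and report how many replicas are ready
dep = apps.read_namespaced_deployment(name="my-flask-app", namespace="default")
print(f"Ready replicas: {dep.status.ready_replicas} / {dep.spec.replicas}")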

Step 6: Port forward and expose the Kubernetes application

To access the Kubernetes application from your local machine, you need to forward the Kubernetes Service port to your local machine.

To do this, run the following command:

kubectl port-forward service/my-flask-service 5000:5000

This command forwards port 5000 of the Kubernetes Service named my-flask-service to port 5000 on your local machine.

You can now access the my-flask-app application by opening a web browser and navigating to http://localhost:5000/.

Cleanup

To delete the EKS cluster and associated resources created by eksctl to avoid incurring charges, you can use the following command:

eksctl delete cluster --name <cluster-name> --region <your-aws-region>

Replace <cluster-name> with the name of the EKS cluster you want to delete, and <your-aws-region> with the AWS region where the cluster is located.

This command will delete the CloudFormation stack that was created by eksctl, including the control plane, worker nodes, and networking resources. It will also delete the IAM roles and policies created for the EKS cluster and worker nodes.

Note that deleting an EKS cluster and its associated resources is an irreversible operation, and you will lose all data stored on the worker nodes. Before running the eksctl delete cluster command, make sure you have backed up any data that you want to keep and that you have terminated any services or applications running on the cluster.
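
The ECR repository created in Step 3 also accrues storage charges for the images it holds. If you no longer need it, you can remove it as well; this is an optional sketch using boto3, where force=True deletes the repository even though it still contains images:

import boto3

# Delete the ECR repository and all images it contains
ecr = boto3.client("ecr", region_name="us-east-1")
ecr.delete_repository(repositoryName="my-cloud-native-repo", force=True)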

Conclusion

In this blog post, you learned how to build a cloud-native monitoring application using Python and Flask, monitor CPU and memory utilization of your computer, containerize the application using Docker, deploy the application on Kubernetes using Amazon EKS, and port forward and expose the Kubernetes application to your local machine.

By following these steps, you can create a scalable and efficient monitoring application that can be run in the cloud. You can also extend the application to monitor other aspects of your system, such as network usage, disk usage, and more.
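
As a starting point for such extensions, here is a minimal sketch of additional psutil calls you could feed into new gauges; how you surface them in index.html is up to you:

import psutil

# Disk usage of the root filesystem, as a percentage
disk_metric = psutil.disk_usage("/").percent

# Cumulative network I/O since boot, in bytes
net = psutil.net_io_counters()
print(disk_metric, net.bytes_sent, net.bytes_recv)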

I hope this blog post was helpful to you in building your own cloud-native monitoring application.