Day 25. Launching Your Kubernetes Cluster with Deployment

Prerequisites:

  • An Ubuntu 22.04 system.

  • Minimum 2GB RAM or more.

  • Minimum 2 CPU cores (or 2 vCPUs).

  • 20 GB of free disk space on /var (or more).

  • A custom ICMP rule for all nodes in the Security Group.

  • Docker and Kubernetes installed on the VMs.

  • The following ports open on all nodes: 6443, 2379-2380, 10250, 10257, 10259.

I am using the following repository for this project. Clone it to your local machine:

git clone https://github.com/Ashvini379/DevOps-Challenge.git
cd 'Day 25. Launching Your Kubernetes Cluster with Deployment'

The Dockerfile in the cloned repository contains the following:

FROM python:3
WORKDIR /data
RUN pip install django==3.2
COPY . .
RUN python manage.py migrate
EXPOSE 8000
CMD ["python","manage.py","runserver","0.0.0.0:8000"]

Go to the location and build the image (make sure Docker is running):

cd django-todo
docker build . -t ashvini34/django-todo-app-new:latest
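Before pushing, you can optionally smoke-test the image locally. This is a sketch, assuming Docker is running on your machine; the container name `todo-test` is arbitrary:

```shell
# Run the image locally, mapping the Django dev server's port 8000.
docker run -d --name todo-test -p 8000:8000 ashvini34/django-todo-app-new:latest
sleep 3                        # give the dev server a moment to start
curl -sL http://localhost:8000 # should return the todo app's HTML
docker rm -f todo-test         # clean up the test container
```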

Let’s verify that the image was created by running:

docker images

Yes, it is created. Now push this image to Docker Hub.

Log in to Docker Hub:

docker login

Now push the image to the registry:

docker push ashvini34/django-todo-app-new:latest

Let’s verify in Docker Hub that the image was pushed.

We can see that ashvini34/django-todo-app-new has been updated, confirming the push from our local repo.

Now let us start with creating the deployment. Create the deployment.yml file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-django-application
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-todo-app
  template:
    metadata:
      labels:
        app: django-todo-app
    spec:
      containers:
        - name: todo-app
          image: ashvini34/django-todo-app-new:latest
          ports:
            - containerPort: 8000
          resources:
            requests:
              ephemeral-storage: "2Gi"
            limits:
              ephemeral-storage: "2Gi"
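Before applying, it can help to validate the manifest and then confirm the rollout completed. A quick sketch, assuming `kubectl` is configured against your cluster:

```shell
# Client-side validation of the manifest; makes no changes to the cluster.
kubectl apply --dry-run=client -f deployment.yml
# After applying, block until all 3 replicas are available.
kubectl rollout status deployment/my-django-application
```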

Let’s create this deployment. Run this command as a root user:

kubectl apply -f deployment.yml

Let us verify if any pods are created.

kubectl get pods
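The `-o wide` output format is useful here: it adds each pod's IP and the node it was scheduled onto, which is exactly the information needed for the connectivity check on the worker node:

```shell
# Shows pod IPs and node placement in addition to status.
kubectl get pods -o wide
```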

Let us test whether the containers we created are working. This check is done on the worker node.

Let’s connect to any one of the containers locally.

sudo docker exec -it <container_id> bash

Let’s connect to the application using the container’s IP.

curl -L http://<container_ip>:8000
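As an alternative that works from the master node without knowing a pod IP, you can port-forward to the Deployment. A sketch, assuming `kubectl` is configured for the cluster:

```shell
# Forward local port 8000 to port 8000 of one of the Deployment's pods.
kubectl port-forward deployment/my-django-application 8000:8000 &
sleep 2                        # wait for the forward to establish
curl -L http://localhost:8000  # hits the pod through the tunnel
kill %1                        # stop the background port-forward
```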

So the deployment I created is working successfully.

Let’s check the auto-healing and autoscaling features.

What are auto-healing and auto-scaling features in k8s?

Auto-healing, also known as self-healing, is a feature that automatically detects and recovers from failures within the cluster.

Auto-scaling is a feature that dynamically adjusts the number of running instances (pods) based on the current demand or resource utilization.
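For illustration, auto-scaling is typically configured with a HorizontalPodAutoscaler. The manifest below is a hypothetical sketch: it assumes metrics-server is installed in the cluster, and our Deployment would also need a CPU request set for utilization-based scaling to work (the manifest above only requests ephemeral storage):

```yaml
# hpa.yml -- illustrative only; requires metrics-server and CPU requests.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-django-application-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-django-application
  minReplicas: 3
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```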

So I will delete two pods.

Use the following commands on the Master node to delete pods:

kubectl get pods
kubectl delete pod <podname> <podname>

If we check the pods again, we can observe that the number of replicas we specified (in this case 3) is restored.

You can observe that two pods were created 77 seconds ago, proving that they came up automatically after the deletion. This is Kubernetes auto-healing in action: the Deployment’s ReplicaSet recreates pods to maintain the desired replica count.
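To watch the replacement happen live, you can keep a watch running in a second terminal while deleting the pods (assumes the same cluster context):

```shell
# Streams pod status changes as the ReplicaSet replaces deleted pods;
# press Ctrl+C to stop watching.
kubectl get pods -w
```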

To delete the deployment, we use:

kubectl delete -f deployment.yml

We can also observe that, along with deployment, the pods created are also deleted.