Day 27. Kubernetes Networking

What is Kubernetes Networking?

In a Kubernetes cluster, the way Pods communicate with each other and with the outside world is crucial. Proper networking ensures that your applications are reliable, scalable, and secure. Understanding these fundamentals will not only make you proficient in managing Kubernetes clusters but also enable you to troubleshoot and optimize your deployments more effectively.

What is a Service?

Just as Deployment resources took care of deployments for us, Service resources take care of serving the application to connections from outside (and also inside!) of the cluster.

Create a file service.yaml in the manifests folder. We need the service to do the following things:

  1. Declare that we want a Service

  2. Declare which port to listen to

  3. Declare the application where the request should be directed to

  4. Declare the port where the request should be directed to

This translates into a YAML file with the following contents:

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: hashresponse-svc
spec:
  type: NodePort
  selector:
    app: hashresponse # This is the app as declared in the deployment.
  ports:
    - name: http
      nodePort: 30080 # This is the port that is available outside. Value for nodePort can be between 30000-32767
      protocol: TCP
      port: 1234 # This is a port that is available to the cluster, in this case it can be ~ anything
      targetPort: 3000 # This is the port the application listens on inside the Pod

Apply the manifest:

kubectl apply -f manifests/service.yaml
  service/hashresponse-svc created
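
If you are running a local cluster, you can now reach the application through any node's IP on the declared NodePort. A quick check (the node IP is a placeholder for your environment):

# Confirm the port mapping (1234:30080)
kubectl get svc hashresponse-svc

# Query the application through a node IP on the NodePort
curl http://<node-ip>:30080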

Types of Services

  1. ClusterIP :

    It exposes the Service on a cluster-internal IP, making it reachable only from within the cluster. This is the default Service type.

    It is simple yet essential for k8s to perform its core duty of container orchestration.

  2. NodePort :

    It exposes the workload to the outside internet by mapping a static port (in the 30000-32767 range) on each host VM's IP across the cluster.

  3. LoadBalancer:

    It exposes the workload through an external load balancer, typically provisioned by the cloud provider.

    It uses NodePort as its backend, which in turn uses ClusterIP access.

    It provides a permanent or temporary IP address to access the application from the outside internet.

  4. ExternalName:

    It does not expose the workload.

    It is used to map the Service to an external domain name by returning a CNAME record for the service (see the sketch after this list).

    The service type can be set using the --type flag of the kubectl expose command.

    To create a headless Service, set clusterIP: None in the manifest.

  5. Ingress (strictly speaking a separate resource rather than a Service type):

    An Ingress resource manages external access to the services in a cluster, typically HTTP/HTTPS. Ingress can also provide load balancing, TLS termination and name-based virtual hosting. Ingress consists of an Ingress API object and the Ingress Controller. The Ingress Controller implements the Ingress API. The Ingress Controller is usually a load balancer for routing external traffic to your Kubernetes cluster and is responsible for L4-L7 Network Services.

    Ingress consists of three components:

    • Ingress resources,

    • L7 Application Load Balancer (ALB),

    • An external L4 load balancer to handle incoming requests across zones, depending on the provider.
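
As a sketch of the types not shown elsewhere in this section, here are minimal ExternalName and headless Service manifests; the Service names and the external domain are made up for illustration:

# ExternalName: a DNS-level alias; queries for this Service return a CNAME
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: my.database.example.com # hypothetical external host
---
# Headless Service: no cluster IP; DNS returns the Pod IPs directly
apiVersion: v1
kind: Service
metadata:
  name: hashresponse-headless
spec:
  clusterIP: None
  selector:
    app: hashresponse
  ports:
    - port: 3000
      targetPort: 3000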

What is DNS?

Your workload can discover Services within your cluster using DNS; this section explains how that works.

Kubernetes creates DNS records for Services and Pods. You can contact Services with consistent DNS names instead of IP addresses.

A DNS query may return different results based on the namespace of the Pod making it.
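
For example, assuming the nginx-service from the examples below exists in the apps namespace, the short name resolves only from Pods in that namespace:

# From a Pod in the "apps" namespace the short name resolves via the search path
$ kubectl run -it test -n apps --image=busybox:1.28 --rm --restart=Never -- nslookup nginx-service

# From the "default" namespace it must be qualified with the namespace
$ kubectl run -it test --image=busybox:1.28 --rm --restart=Never -- nslookup nginx-service.apps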

DNS Records

What objects get DNS records?

  1. Services

     # Expose the nginx Pod (created in item 2 below)
     $ kubectl expose pod nginx --name=nginx-service --port 80 --namespace apps
     service/nginx-service exposed
    
     # Get the nginx-service in the namespace "apps"
     $ kubectl get svc -n apps
     NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
     nginx-service   ClusterIP   10.96.120.174   <none>        80/TCP    6s
    
     # To get the dns record of the nginx-service from the default namespace
     $ kubectl run -it test --image=busybox:1.28 --rm --restart=Never -- nslookup nginx-service.apps.svc.cluster.local
     Server:    10.96.0.10
     Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
    
     Name:      nginx-service.apps.svc.cluster.local
     Address 1: 10.96.120.174 nginx-service.apps.svc.cluster.local
     pod "test" deleted
    
     # Accessing with curl command
     $ kubectl run -it nginx-test --image=nginx --rm --restart=Never -- curl -Is http://nginx-service.apps.svc.cluster.local
     HTTP/1.1 200 OK
     Server: nginx/1.19.2
    
  2. Pods

     # To create a namespace
     $ kubectl create ns apps
    
     # To create a Pod
     $ kubectl run nginx --image=nginx --namespace apps
    
     # To get the additional information of the Pod in the namespace "apps"
     $ kubectl get po -n apps -owide
     NAME    READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
     nginx   1/1     Running   0          99s   10.244.1.3   node01   <none>           <none>
    
     # To get the dns record of the nginx Pod from the default namespace
     $ kubectl run -it test --image=busybox:1.28 --rm --restart=Never -- nslookup 10-244-1-3.apps.pod.cluster.local
     Server:    10.96.0.10
     Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
    
     Name:      10-244-1-3.apps.pod.cluster.local
     Address 1: 10.244.1.3
     pod "test" deleted
    
     # Accessing with curl command
     $ kubectl run -it nginx-test --image=nginx --rm --restart=Never -- curl -Is http://10-244-1-3.apps.pod.cluster.local
     HTTP/1.1 200 OK
     Server: nginx/1.19.2
    

What is CoreDNS?

CoreDNS is a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS. Like Kubernetes, the CoreDNS project is hosted by the CNCF.

You can use CoreDNS instead of kube-dns in your cluster by replacing kube-dns in an existing deployment, or by using tools like kubeadm that will deploy and upgrade the cluster for you.

CoreDNS is a DNS server that is modular and pluggable, with plugins adding new functionalities. The CoreDNS server can be configured by maintaining a Corefile, which is the CoreDNS configuration file. As a cluster administrator, you can modify the ConfigMap for the CoreDNS Corefile to change how DNS service discovery behaves for that cluster.
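
For example, to open the Corefile for editing:

$ kubectl -n kube-system edit configmap coredns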

In Kubernetes, CoreDNS is installed with the following default Corefile configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

The Corefile configuration includes the following plugins of CoreDNS:

  • errors: Errors are logged to stdout.

  • health: Health of CoreDNS is reported to http://localhost:8080/health. In this extended syntax lameduck will make the process unhealthy then wait for 5 seconds before the process is shut down.

  • ready: An HTTP endpoint on port 8181 will return 200 OK, when all plugins that are able to signal readiness have done so.

  • kubernetes: CoreDNS will reply to DNS queries based on IP of the Services and Pods. You can find more details about this plugin on the CoreDNS website.

    • ttl allows you to set a custom TTL for responses. The default is 5 seconds. The minimum TTL allowed is 0 seconds, and the maximum is capped at 3600 seconds. Setting TTL to 0 will prevent records from being cached.

    • The pods insecure option is provided for backward compatibility with kube-dns.

    • You can use the pods verified option, which returns an A record only if there exists a pod in the same namespace with a matching IP.

    • The pods disabled option can be used if you don't use pod records.

  • prometheus: Metrics of CoreDNS are available at http://localhost:9153/metrics in the Prometheus format (also known as OpenMetrics).

  • forward: Any queries that are not within the Kubernetes cluster domain are forwarded to predefined resolvers (/etc/resolv.conf).

  • cache: This enables a frontend cache.

  • loop: Detects simple forwarding loops and halts the CoreDNS process if a loop is found.

  • reload: Allows automatic reload of a changed Corefile. After you edit the ConfigMap configuration, allow two minutes for your changes to take effect.

  • loadbalance: This is a round-robin DNS loadbalancer that randomizes the order of A, AAAA, and MX records in the answer.
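
The task below assumes the cluster's Corefile has been extended so that a custom DNS suffix resolves to cluster-internal Service names. A minimal sketch using the CoreDNS rewrite plugin, with blackpink.io as the made-up suffix used in the task, could look like this inside the server block shown above:

.:53 {
    errors
    # Rewrite queries for <name>.blackpink.io to <name>.default.svc.cluster.local
    rewrite name regex (.*)\.blackpink\.io {1}.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}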

Task

  1. Create and expose a new nginx pod so that its FQDN reflects the new suffixes.

Solution:

  1. Create a new nginx pod using the kubectl command.

     kubectl run nginx-coredns --image=nginx:latest --port 80
    

    Output:

     pod/nginx-coredns created

  2. Expose the nginx-coredns pod.

     kubectl expose pod nginx-coredns --type NodePort --name nginx-svc
    

    Output:

     service/nginx-svc exposed

  3. Exec into the amazon-linux container (or any Linux container that provides the curl utility) and send a request to the nginx container using the new FQDN.

     kubectl exec -it amazon-linux -- /bin/sh
     curl nginx-svc.blackpink.io
    

    Output:

Ingress

Make your HTTP (or HTTPS) network service available using a protocol-aware configuration mechanism that understands web concepts like URIs, hostnames, paths, and more. The Ingress concept lets you map traffic to different backends based on rules you define via the Kubernetes API.

What is Ingress?

Ingress resource - Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

Here is a simple example where an Ingress sends all its traffic to one Service:

Figure. Ingress (an Ingress sending all its traffic to one Service)

  • Create an Ingress resource
Ingress-wear.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear
spec:
  backend:
    serviceName: wear-service
    servicePort: 80
  • To create the ingress resource
$ kubectl create -f Ingress-wear.yaml
ingress.extensions/ingress-wear created
  • To get the ingress
$ kubectl get ingress
NAME           CLASS    HOSTS   ADDRESS   PORTS   AGE
ingress-wear   <none>   *
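
Note that the extensions/v1beta1 API shown above was removed in Kubernetes 1.22; on current clusters the same resource would be written against networking.k8s.io/v1, roughly as follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wear
spec:
  defaultBackend: # replaces the old spec.backend
    service:
      name: wear-service
      port:
        number: 80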

Ingress controller - An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.

In order for an Ingress to work in your cluster, there must be an ingress controller running. You need to select at least one ingress controller and make sure it is set up in your cluster. The Kubernetes documentation lists common ingress controllers that you can deploy.

  • Deployment of Ingress Controller

ConfigMap

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      serviceAccountName: ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
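
The controller also needs a Service to receive traffic, and the ingress-serviceaccount referenced above must exist with the appropriate RBAC permissions. A minimal NodePort Service sketch for the controller:

apiVersion: v1
kind: Service
metadata:
  name: ingress
spec:
  type: NodePort
  selector:
    name: nginx-ingress # matches the Pod labels in the Deployment above
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443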

Ingress Annotations and rewrite-target

In this section, we will take a look at Ingress annotations and rewrite-target.

  • Different Ingress controllers have different options to customize the way they work. The NGINX Ingress Controller has many options, but here we will look at one of them: the rewrite-target option.

  • Kubernetes Version 1.18

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: critical-space
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /pay
        backend:
          serviceName: pay-service
          servicePort: 8282
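
With this annotation, a request to /pay is rewritten to / before being forwarded to pay-service. On clusters newer than the 1.18 example above, the same Ingress would be expressed in the networking.k8s.io/v1 API, roughly:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: critical-space
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /pay
        pathType: Prefix # pathType is required in the v1 API
        backend:
          service:
            name: pay-service
            port:
              number: 8282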