Intro to Containerization

1. What is Containerization?

Containerization is the process of packaging software and its dependencies into a portable, lightweight container that can run consistently across different environments.

2. Types of Containerization

Containerization comes in several forms, differing in runtime environment, degree of resource isolation, and intended use case.

3. Understanding Docker

Docker is a popular platform for creating, deploying, and managing containers. It simplifies the development process and enhances portability.
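
As a quick illustration, a containerized web application might be built and started with a couple of Docker commands (the image name and port mapping here are placeholders):

docker build -t myapp:latest .           # package the app and its dependencies into an image
docker run -d -p 8080:80 myapp:latest    # run the image as a container, exposing port 80 on 8080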

4. Benefits of Containerization

Containerization offers numerous benefits that make it ideal for modern application development and deployment.

Overview of Kubernetes

1. Origin & History

Kubernetes, often referred to as "K8s," originated as a project by Google and was open-sourced in 2014. It has since become a key technology in the container orchestration landscape.

2. Features of Kubernetes

Kubernetes provides a robust set of features that make it highly effective for managing containerized applications.

3. Understanding Architecture

Kubernetes has a modular architecture, consisting of various components that work together to manage applications efficiently.

4. Benefits of Kubernetes

Kubernetes offers numerous benefits, making it the go-to choice for container orchestration.

Kubernetes Architecture

1. Kubernetes Control Plane

The Control Plane is the central component of the Kubernetes architecture, orchestrating and managing the cluster.

2. Nodes and Pods

Nodes and Pods form the core of Kubernetes’ application deployment structure, with nodes representing physical or virtual machines and pods representing units of deployment.

3. Services in Kubernetes

Services in Kubernetes enable communication between various components and facilitate reliable access to applications within a cluster.

4. Kubernetes Objects

Kubernetes Objects are the fundamental units that represent the desired state of the cluster. They define what resources the system manages and how.
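
One practical way to explore which object kinds a cluster supports, and the fields each kind accepts, is through the CLI, for example:

kubectl api-resources          # list all object kinds available in the cluster
kubectl explain pod.spec       # describe the fields of a Pod's spec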

Setting Up Kubernetes Locally

1. Minikube: Local Kubernetes

Minikube is a tool that lets you run Kubernetes locally, allowing developers to test and experiment with Kubernetes clusters on their local machines.
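
Once Minikube is installed, a single-node cluster can typically be started and inspected with a few commands:

minikube start                 # create and start a local cluster
minikube status                # check the state of the cluster components
minikube dashboard             # open the Kubernetes dashboard in a browser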

2. Setting up Minikube

Installing Minikube is straightforward and requires minimal setup.
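
For example, on Linux with an x86-64 processor a typical install looks like this (other platforms use a different binary; see the official Minikube documentation for your OS):

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start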

3. Kubernetes CLI – kubectl

kubectl is the command-line interface for managing Kubernetes clusters and is essential for interacting with your Minikube cluster.
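
With Minikube running, kubectl is normally already pointed at the local cluster, so basic checks like these should work (Minikube also bundles its own kubectl):

kubectl get nodes                    # the single Minikube node should show as Ready
minikube kubectl -- get pods -A      # run the kubectl bundled with Minikube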

Kubernetes CLI – kubectl

1. Installation of kubectl

kubectl is the command-line tool for interacting with Kubernetes clusters, used to deploy and manage applications.
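
One common installation route on Linux, following the official Kubernetes documentation (adjust the architecture and platform as needed for your machine), looks like this:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client       # verify the installed client version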

2. Basic kubectl Commands

These foundational commands allow you to manage Kubernetes resources effectively.
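
A few of the most frequently used commands are shown below; resource and file names such as my-pod and manifest.yaml are placeholders:

kubectl get pods                    # list pods in the current namespace
kubectl describe pod my-pod         # show detailed state and events for a pod
kubectl apply -f manifest.yaml      # create or update resources from a manifest
kubectl delete -f manifest.yaml     # delete the resources defined in a manifest
kubectl logs my-pod                 # print a pod's container logs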

3. Kubernetes Authentication

Authentication ensures secure communication between kubectl and the Kubernetes API server.
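
kubectl reads cluster addresses and credentials from a kubeconfig file (by default ~/.kube/config). A few commands for inspecting and switching the active context, assuming a context named minikube exists:

kubectl config view                   # show the merged kubeconfig (certificate data redacted)
kubectl config get-contexts           # list the available contexts
kubectl config use-context minikube   # switch to the minikube context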

Kubernetes Pods

1. Understanding Pods

In Kubernetes, a pod is the smallest deployable unit and represents a single instance of a running process in your cluster.
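
A minimal pod manifest, here running a single nginx container, might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80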

2. Deploying a Pod

Pods can be deployed manually or managed by higher-level controllers such as Deployments and ReplicaSets.
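
For a standalone pod, deployment can be done declaratively from a manifest (such as the nginx-pod example above, saved to a file) or imperatively from the command line:

kubectl apply -f nginx-pod.yaml              # create the pod from a manifest file
kubectl run nginx-pod --image=nginx:latest   # or create a pod imperatively
kubectl get pods                             # confirm the pod is running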

3. Managing & Troubleshooting Pods

Managing pods in Kubernetes involves inspecting, scaling, and diagnosing issues.
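
Common commands for inspecting and debugging a pod, using the nginx-pod example from above:

kubectl get pods -o wide              # list pods with node and IP details
kubectl describe pod nginx-pod        # status, conditions, and recent events
kubectl logs nginx-pod                # container logs
kubectl exec -it nginx-pod -- sh      # open an interactive shell in the container
kubectl delete pod nginx-pod          # remove the pod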

4. Pod Lifecycle

The lifecycle of a pod consists of several phases, each representing the state of the pod from creation to termination.
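
A pod's phase is one of Pending, Running, Succeeded, Failed, or Unknown, and it can be read directly from the pod's status, for example:

kubectl get pod nginx-pod -o jsonpath='{.status.phase}'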

Deployments & ReplicaSets

1. Understanding Deployments

In Kubernetes, Deployments are used to manage the desired state of applications by ensuring that a specified number of instances of an application (pods) are running at all times.
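
A simple Deployment manifest that keeps three nginx replicas running might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80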

2. Creating & Managing a Deployment

Deployments can be managed using YAML files or commands in the Kubernetes CLI.
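
Typical commands for working with the Deployment above (the manifest file name is a placeholder):

kubectl apply -f nginx-deployment.yaml                   # create or update the deployment
kubectl scale deployment nginx-deployment --replicas=5   # change the replica count
kubectl rollout status deployment/nginx-deployment       # watch a rollout complete
kubectl rollout undo deployment/nginx-deployment         # roll back to the previous revision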

3. Understanding ReplicaSets

A ReplicaSet is a Kubernetes resource that ensures a specified number of pod replicas are running at all times.

4. Creating & Managing a ReplicaSet

ReplicaSets are usually managed indirectly by Deployments, but they can also be created directly if needed.
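
A standalone ReplicaSet manifest, analogous to the Deployment example above, might look like this:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest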

Kubernetes Services

1. Understanding Services

In Kubernetes, Services provide stable networking for pods. Pods have a dynamic lifecycle, and their IPs change as they are created and deleted. A Service gives a set of pods a persistent IP address and DNS name, ensuring consistent access, enabling communication between different parts of an application, and optionally exposing the application outside the cluster.
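
A basic Service manifest that exposes the pods labeled app: nginx inside the cluster might look like this:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80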

2. Types of Services

Kubernetes supports different service types to suit various use cases and network configurations.
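
The main types are ClusterIP (the default, reachable only inside the cluster), NodePort (exposes a port on every node), LoadBalancer (provisions an external load balancer where the platform supports it), and ExternalName (maps the service to an external DNS name). As a sketch, the earlier nginx pods could be exposed on each node's port 30080 like this:

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080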

3. Service Discovery

Service discovery in Kubernetes enables services and applications to find and communicate with each other.
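
With the default cluster domain, each Service gets an internal DNS name of the form <service>.<namespace>.svc.cluster.local, so pods in the same namespace can reach my-service simply by that name. One quick way to verify resolution from inside the cluster is with a throwaway busybox pod:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup my-service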

ConfigMaps and Secrets

1. ConfigMaps for Application Configuration

In Kubernetes, ConfigMaps are used to manage configuration data separately from application code. They allow you to inject configuration details such as environment variables, command-line arguments, and configuration files into your application, enabling flexibility and reusability across multiple environments without modifying code.
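
The app-config ConfigMap referenced in the examples below could be defined like this (the keys and values are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  APP_MODE: "production"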

Using ConfigMap as Environment Variables

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: myapp:latest
      envFrom:
        - configMapRef:
            name: app-config

Mounting ConfigMap as Files

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: myapp:latest
      volumeMounts:
        - name: config-volume
          mountPath: "/etc/config"
  volumes:
    - name: config-volume
      configMap:
        name: app-config

2. Managing ConfigMaps and Secrets

Secrets store sensitive information like passwords, tokens, and keys. Their values are base64-encoded, which is an encoding rather than encryption, so Secrets should still be protected with measures such as RBAC and encryption at rest.
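
The db-credentials Secret used in the examples below could be created from a manifest such as this one (the values are placeholders; stringData lets you write plain text, which the API server stores base64-encoded):

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_USER: admin
  DB_PASSWORD: changeme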

Using Secrets as Environment Variables

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: myapp:latest
      env:
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_USER

Mounting Secret as Files

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app-container
      image: myapp:latest
      volumeMounts:
        - name: secret-volume
          mountPath: "/etc/secret"
  volumes:
    - name: secret-volume
      secret:
        secretName: db-credentials

3. Best Practices for Managing ConfigMaps and Secrets

Volumes & Persistent Storage

1. Understanding Volumes

In Kubernetes, volumes provide a way for containers to access and store data that persists beyond the container's lifecycle. Unlike a container’s local storage, volumes allow data sharing between containers within the same pod and can ensure data persists even if the container crashes or restarts.

Example YAML for emptyDir Volume

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app-container
      image: nginx:latest
      volumeMounts:
        - name: cache-volume
          mountPath: "/cache"
  volumes:
    - name: cache-volume
      emptyDir: {}

2. Creating & Managing Volumes

Creating and managing volumes in Kubernetes involves specifying a volume under the pod's spec and configuring its mount path in each container. Different volumes can be mounted at specific paths to isolate storage needs between containers within a pod.

Mounting a HostPath Volume

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  containers:
    - name: app-container
      image: nginx:latest
      volumeMounts:
        - name: host-storage
          mountPath: "/data"
  volumes:
    - name: host-storage
      hostPath:
        path: "/path/on/host"

3. Persisting Application Data with Persistent Volumes (PVs)

Persistent Volumes (PVs) are a way to abstract storage outside of the pod's lifecycle, enabling data to persist even if the pod is deleted or recreated. Persistent Volumes are independent storage units that a cluster administrator provisions, while Persistent Volume Claims (PVCs) allow users to request PV resources.

Creating a Persistent Volume and Claim

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-storage
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Using PVCs in Pods

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-pvc
spec:
  containers:
    - name: app-container
      image: nginx:latest
      volumeMounts:
        - name: pvc-storage
          mountPath: "/persistent-storage"
  volumes:
    - name: pvc-storage
      persistentVolumeClaim:
        claimName: pvc-storage

4. Best Practices for Persistent Storage

Ingress and Network Policies

1. Ingress Controls Routing

Ingress in Kubernetes is an API object that manages external access to services within a cluster, typically HTTP/S. Ingress allows you to define rules for routing traffic based on the host or path of the incoming request, enabling the management of multiple services through a single entry point.

Key Components of Ingress

Basic Ingress Example

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80

Ingress Features

2. Network Policies

Network Policies in Kubernetes are a way to control the communication between pods and/or services at the IP address or port level. By defining Network Policies, you can restrict which pods can communicate with each other, enhancing the security and isolation of applications.

Key Concepts of Network Policies

Example of a Network Policy

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-network-policy
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - podSelector:
            matchLabels:
              role: database
      ports:
        - protocol: TCP
          port: 5432

Best Practices for Using Network Policies

Advanced Features

1. Kubernetes Auto Scaling

Kubernetes Auto Scaling is a feature that automatically adjusts the number of pod replicas in a deployment based on observed CPU utilization or other select metrics. This helps ensure that your applications have the necessary resources to handle fluctuations in demand without manual intervention.

Types of Auto Scaling

Example of Horizontal Pod Autoscaler

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

2. Role-Based Access Control (RBAC)

Role-Based Access Control (RBAC) in Kubernetes is a method for regulating access to resources based on the roles of individual users within an organization. RBAC allows you to define roles and permissions for users and groups, enhancing security by ensuring that only authorized users can access or modify resources.

Key Concepts of RBAC

Example of a Role and RoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-rolebinding
  namespace: my-namespace
subjects:
  - kind: User
    name: example-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io

3. Kubernetes Namespaces

Namespaces in Kubernetes provide a way to divide cluster resources between multiple users or teams. They allow for resource isolation, meaning that different environments (such as development, testing, and production) can coexist within the same cluster without conflict.

Benefits of Using Namespaces

Example of Creating a Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace

Viewing and Managing Namespaces

To view all namespaces in a cluster, use the following command:

kubectl get namespaces

To switch the context to a specific namespace, use:

kubectl config set-context --current --namespace=my-namespace