Containerization is the process of packaging software and its dependencies into a portable, lightweight container that can run consistently across different environments.
Several types of containerization exist, differing in runtime environment, degree of resource isolation, and intended use case.
Docker is a popular platform for creating, deploying, and managing containers. It simplifies the development process and enhances portability.
Containerization offers numerous benefits that make it ideal for modern application development and deployment.
Kubernetes, often referred to as "K8s," originated as a project by Google and was open-sourced in 2014. It has since become a key technology in the container orchestration landscape.
Kubernetes provides a robust set of features that make it highly effective for managing containerized applications.
Kubernetes has a modular architecture, consisting of various components that work together to manage applications efficiently.
These strengths have made Kubernetes the go-to choice for container orchestration.
The Control Plane is the central component of the Kubernetes architecture, orchestrating and managing clusters.
Nodes and Pods form the core of Kubernetes' application deployment structure, with nodes representing physical or virtual machines and pods representing units of deployment.
Services in Kubernetes enable communication between various components and facilitate reliable access to applications within a cluster.
Kubernetes Objects are the fundamental units that represent the desired state of the cluster. They define what resources the system manages and how.
Minikube is a tool that lets you run Kubernetes locally, allowing developers to test and experiment with Kubernetes clusters on their local machines.
Installing Minikube is straightforward and requires minimal setup. Here’s how you can set it up:
On Windows, install Minikube with choco install minikube or scoop install minikube.
On macOS, install it with brew install minikube.
On Linux, download the Minikube binary and move it to the /usr/local/bin directory for easy access.
Run minikube start to launch the cluster. This will initialize the Kubernetes cluster locally on a virtual machine.
Run minikube dashboard to open the web-based Kubernetes dashboard.
Kubectl is the command-line interface for managing Kubernetes clusters. It's essential for interacting with your Minikube cluster:
kubectl get nodes: Verifies that Minikube has successfully created a node.
kubectl get pods: Lists all running pods in the current namespace.
kubectl create deployment [name] --image=[image]: Deploys an application in the Kubernetes cluster.
Inspect the cluster configuration with kubectl config view or kubectl cluster-info.
Other frequently used commands include kubectl apply for creating resources from YAML files, and kubectl exec for accessing containerized applications.
kubectl is the command-line tool for interacting with Kubernetes clusters, used to deploy and manage applications:
On Windows, install kubectl with choco install kubernetes-cli.
On macOS, install it with brew install kubectl.
On Linux, download the kubectl binary and move it to the /usr/local/bin directory for easy access. For example:
wget https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
Run kubectl version --client to check the client version.
These foundational commands allow you to manage Kubernetes resources effectively:
kubectl cluster-info: Displays details about the Kubernetes cluster.
kubectl get nodes: Lists all nodes in the cluster.
kubectl get pods: Lists all pods in the current namespace.
kubectl create deployment [name] --image=[image]: Deploys an application in the cluster.
kubectl delete pod [pod-name]: Deletes a specified pod.
kubectl get namespaces: Lists all namespaces.
kubectl get pods --namespace=[namespace]: Lists pods within a specific namespace.
kubectl apply -f [file.yaml]: Creates or updates resources from a YAML file.
kubectl edit [resource-type]/[name]: Opens a specified resource for editing directly.
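To see how a few of these commands fit together, here is a minimal sketch of a typical workflow; the deployment name web and the nginx image are illustrative choices, not part of the original examples:
kubectl create deployment web --image=nginx    # deploy an nginx container via a deployment
kubectl get pods                               # confirm the pod is running
kubectl get pods --namespace=kube-system       # inspect system pods in another namespace
kubectl delete pod [pod-name]                  # delete one pod; the deployment recreates it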
Authentication ensures secure communication between kubectl and the Kubernetes API server:
Use kubectl config commands to manage and switch between cluster configurations.
kubectl config view: Shows the current kubeconfig file details.
kubectl config set-credentials: Adds new user credentials to the kubeconfig file.
kubectl create rolebinding: Binds a role to a user in a specific namespace.
kubectl auth can-i: Verifies user permissions on resources.
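For example, the following sketch grants a user read-only access and then checks the permission; the names dev-view, jane, and dev are hypothetical placeholders:
kubectl create rolebinding dev-view --clusterrole=view --user=jane --namespace=dev   # bind the built-in view role to user jane in namespace dev
kubectl auth can-i list pods --namespace=dev --as=jane                               # check whether jane may list pods there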
In Kubernetes, a pod is the smallest deployable unit and represents a single instance of a running process in your cluster.
Containers within the same pod communicate with each other over localhost and share storage volumes.
Pods can be deployed manually or as part of higher-level controllers (like Deployments and ReplicaSets):
kubectl run [pod-name] --image=[container-image]: Quickly deploy a pod with a specified container image.
A pod can also be defined declaratively in a YAML manifest:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
kubectl apply -f [filename.yaml]: Create a pod from a YAML file configuration.
Managing pods in Kubernetes involves inspecting, scaling, and diagnosing issues:
kubectl get pods: Lists all pods in the current namespace.
kubectl describe pod [pod-name]: Provides detailed information about a specific pod.
kubectl logs [pod-name]: Displays the logs of a pod's main container.
kubectl exec -it [pod-name] -- /bin/sh: Access the container in the pod to check configurations and logs manually.
kubectl port-forward [pod-name] [local-port]:[container-port]: Forward a port for local debugging access to the application.
For troubleshooting, kubectl describe pod [pod-name] shows recent events such as scheduling, networking, and resource issues.
The lifecycle of a pod consists of several phases, each representing the state of the pod from creation to termination. For example, pods whose restartPolicy is set to OnFailure or Never reach the Succeeded phase once their containers exit successfully.
In Kubernetes, Deployments are used to manage the desired state of applications by ensuring that a specified number of instances of an application (pods) are running at all times.
Deployments can be managed using YAML files or commands in the Kubernetes CLI:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx
kubectl apply -f deployment.yaml: Create a deployment from a YAML file.
kubectl scale deployment my-deployment --replicas=5: Scale up/down the number of pods managed by the deployment.
Use the kubectl set image command to update the container image in a rolling update:
kubectl set image deployment/my-deployment my-container=my-image:v2
kubectl rollout undo deployment/my-deployment: Reverts to the previous deployment version.
A ReplicaSet is a Kubernetes resource that ensures a specified number of pod replicas are running at all times.
ReplicaSets are usually managed indirectly by Deployments, but they can also be created directly if needed:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx
kubectl apply -f replicaset.yaml: Create a ReplicaSet from a YAML file.
kubectl scale rs my-replicaset --replicas=5: Scale the ReplicaSet to the desired number of replicas.
In Kubernetes, Services provide stable networking for pods. Pods have a dynamic lifecycle, and their IPs may change over time as they are created and deleted. Services create a persistent IP address and DNS name for a set of pods, ensuring consistent access, enabling communication between different parts of an application, and allowing external access to it where needed.
Kubernetes supports different service types to suit various use cases and network configurations:
# ClusterIP (the default type): exposes the service on an internal IP reachable only from within the cluster
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
# NodePort: exposes the service on a static port on each node's IP, reachable from outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30007
# LoadBalancer: provisions an external load balancer through the cloud provider and routes traffic to the service
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
# ExternalName: maps the service to an external DNS name instead of selecting pods
apiVersion: v1
kind: Service
metadata:
  name: my-externalname-service
spec:
  type: ExternalName
  externalName: example.com
Service discovery in Kubernetes enables services and applications to find and communicate with each other. Every service receives a cluster-internal DNS name (for example, my-service.default.svc.cluster.local). By setting clusterIP: None, Kubernetes configures a service without a ClusterIP, making it "headless." Headless services allow for direct pod access and support stateful applications requiring pod-specific IPs.
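A minimal sketch of such a headless service, reusing the app: my-app selector from the earlier examples (the service name is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None   # no virtual IP; DNS resolves directly to the pod IPs
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080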
In Kubernetes, ConfigMaps are used to manage configuration data separately from application code. They allow you to inject configuration details such as environment variables, command-line arguments, and configuration files into your application, enabling flexibility and reusability across multiple environments without modifying code. Example ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_URL: "jdbc:mysql://db.example.com:3306/mydb"
  APP_MODE: "production"
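The same ConfigMap could also be created imperatively, which is often convenient for quick experiments (a sketch, assuming the literal values from the manifest above):
kubectl create configmap app-config --from-literal=DATABASE_URL="jdbc:mysql://db.example.com:3306/mydb" --from-literal=APP_MODE=production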
# Injecting the ConfigMap as environment variables with envFrom
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app-container
    image: myapp:latest
    envFrom:
    - configMapRef:
        name: app-config
# Mounting the ConfigMap as files under /etc/config
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app-container
    image: myapp:latest
    volumeMounts:
    - name: config-volume
      mountPath: "/etc/config"
  volumes:
  - name: config-volume
    configMap:
      name: app-config
Secrets store sensitive information like passwords, tokens, and keys. They are base64-encoded rather than encrypted, so the encoding adds only a thin layer of obfuscation and they should still be stored and handled securely. Secrets can be defined in a YAML manifest or created directly with the kubectl command. Example YAML:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  DB_USER: YWRtaW4=          # base64 encoded 'admin'
  DB_PASSWORD: cGFzc3dvcmQ=  # base64 encoded 'password'
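Equivalently, the same Secret could be created imperatively; kubectl performs the base64 encoding automatically (a sketch using the same example values):
kubectl create secret generic db-credentials --from-literal=DB_USER=admin --from-literal=DB_PASSWORD=password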
# Exposing individual Secret keys as environment variables
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app-container
    image: myapp:latest
    env:
    - name: DB_USER
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: DB_USER
# Mounting the Secret as files under /etc/secret
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app-container
    image: myapp:latest
    volumeMounts:
    - name: secret-volume
      mountPath: "/etc/secret"
  volumes:
  - name: secret-volume
    secret:
      secretName: db-credentials
In Kubernetes, volumes provide a way for containers to access and store data that persists beyond the container's lifecycle. Unlike a container’s local storage, volumes allow data sharing between containers within the same pod and can ensure data persists even if the container crashes or restarts.
Kubernetes supports many volume types, including emptyDir, hostPath, configMap, and cloud provider-specific volumes like awsElasticBlockStore. For example, an emptyDir volume mounted at /cache:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app-container
    image: nginx:latest
    volumeMounts:
    - name: cache-volume
      mountPath: "/cache"
  volumes:
  - name: cache-volume
    emptyDir: {}
Creating and managing volumes in Kubernetes involves specifying a volume under the pod's spec and configuring its mount path in each container. Different volumes can be mounted at specific paths to isolate storage needs between containers within a pod.
Volumes are declared under spec.volumes in a Pod and referenced in volumeMounts for each container. For example, a hostPath volume that exposes a directory from the node's filesystem:
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  containers:
  - name: app-container
    image: nginx:latest
    volumeMounts:
    - name: host-storage
      mountPath: "/data"
  volumes:
  - name: host-storage
    hostPath:
      path: "/path/on/host"
Persistent Volumes (PVs) are a way to abstract storage outside of the pod's lifecycle, enabling data to persist even if the pod is deleted or recreated. Persistent Volumes are independent storage units that a cluster administrator provisions, while Persistent Volume Claims (PVCs) allow users to request PV resources.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-storage
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-pvc
spec:
  containers:
  - name: app-container
    image: nginx:latest
    volumeMounts:
    - name: pvc-storage
      mountPath: "/persistent-storage"
  volumes:
  - name: pvc-storage
    persistentVolumeClaim:
      claimName: pvc-storage
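To confirm that the claim has bound to the volume before the pod relies on it, the standard listing commands can be used:
kubectl get pv
kubectl get pvc
kubectl describe pvc pvc-storage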
Ingress in Kubernetes is an API object that manages external access to services within a cluster, typically HTTP/S. Ingress allows you to define rules for routing traffic based on the host or path of the incoming request, enabling the management of multiple services through a single entry point.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
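To illustrate how one Ingress can front several services, here is a sketch with path-based routing; the service names api-service and web-service are hypothetical:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api          # requests to example.com/api go to api-service
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /web          # requests to example.com/web go to web-service
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80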
Network Policies in Kubernetes are a way to control the communication between pods and/or services at the IP address or port level. By defining Network Policies, you can restrict which pods can communicate with each other, enhancing the security and isolation of applications.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-network-policy
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: database
    ports:
    - protocol: TCP
      port: 5432
Kubernetes Auto Scaling is a feature that automatically adjusts the number of pod replicas in a deployment based on observed CPU utilization or other select metrics. This helps ensure that your applications have the necessary resources to handle fluctuations in demand without manual intervention.
# autoscaling/v2 is the stable HPA API; older clusters may still use autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
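Roughly the same autoscaling behavior can be configured imperatively with kubectl (a sketch targeting the my-deployment Deployment from the earlier example):
kubectl autoscale deployment my-deployment --cpu-percent=50 --min=2 --max=10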
Role-Based Access Control (RBAC) in Kubernetes is a method for regulating access to resources based on the roles of individual users within an organization. RBAC allows you to define roles and permissions for users and groups, enhancing security by ensuring that only authorized users can access or modify resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: example-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-rolebinding
  namespace: my-namespace
subjects:
- kind: User
  name: example-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
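Whether the binding has the intended effect can be checked with kubectl auth can-i; given the Role above, the first command should return yes and the second no:
kubectl auth can-i get pods --namespace=my-namespace --as=example-user
kubectl auth can-i delete pods --namespace=my-namespace --as=example-user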
Namespaces in Kubernetes provide a way to divide cluster resources between multiple users or teams. They allow for resource isolation, meaning that different environments (such as development, testing, and production) can coexist within the same cluster without conflict.
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
To view all namespaces in a cluster, use the following command:
kubectl get namespaces
To switch the context to a specific namespace, use:
kubectl config set-context --current --namespace=my-namespace
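The namespace can also be created imperatively and used as the target for workloads; a short sketch in which the nginx pod is purely illustrative:
kubectl create namespace my-namespace
kubectl run nginx --image=nginx --namespace=my-namespace   # run a pod inside the namespace
kubectl get pods --namespace=my-namespace                  # list only that namespace's pods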