Containerization: Docker and Kubernetes
Containers enable rapid deployment, scalability, and consistency of software. Traditional deployment has struggled with problems such as environment mismatches and dependency conflicts. Two major technologies that address these problems are Docker and Kubernetes.
Containerization
Containerization is the process of packaging an application and its dependencies into a lightweight, portable container. A container encapsulates everything the application needs to run, including libraries and configuration files, and shares the host system's kernel, which makes it faster and more resource efficient than a full virtual machine. Containers have three key characteristics. First, they are lightweight: they need far fewer resources than virtual machines, which each run their own operating system and are allocated dedicated resources. Second, they are portable: they run consistently across different environments, from local machines to cloud servers. Third, they are isolated: applications in separate containers do not interfere with one another.
Docker
Docker is a widely used platform for containerization, and on its own it is best suited to building and running containers for smaller-scale applications. It provides tools to create, deploy, and manage containers and is used by developers, testers, DevOps engineers, and system administrators. Important components of Docker include:
-Docker Engine: Runtime that builds, runs, and manages containers.
-Docker Images: Read-only templates used to create containers; an image defines everything needed to run the application, including the working directory, dependencies, and the command that starts the app.
-Docker Hub: Cloud registry for sharing and downloading pre-built Docker images.
Example of a Dockerfile, the script that defines how a Docker image is built:
# Start from a slim Python 3.9 base image
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# Copy the rest of the application code
COPY . .
# Start the application
CMD ["python", "app.py"]
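The Dockerfile assumes the project directory contains a requirements.txt (listing Flask, for example) and an app.py; neither file appears in the original post, so the following is a minimal hypothetical Flask app that matches the port mapping used below.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Simple endpoint so the container has something to serve
    return "Hello from a containerized Python app!"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the app is reachable from outside the container
    app.run(host="0.0.0.0", port=5000)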
Build Image:
docker build -t my-python-app .
Run Container:
docker run -p 5000:5000 my-python-app
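If the container is running the hypothetical Flask app sketched above, the -p 5000:5000 mapping makes it reachable from the host:

curl http://localhost:5000/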
Kubernetes (K8s)
Kubernetes excels at large-scale applications. Rather than managing individual containers by hand, it orchestrates the deployment and management of containers at any scale across multiple machines. Kubernetes automates deployment, scaling, and management, which provides fault tolerance (the system keeps working even when one or more components fail) and high availability (the system stays accessible and reliable for as close to 100% of the time as possible). It also uses resources efficiently by scheduling containers onto nodes that have enough capacity. Key concepts include:
-Pods: A pod is the smallest deployable unit in Kubernetes, consisting of one or more containers that share storage and networking.
-Nodes: A node is a machine, physical or virtual, that runs containers.
-Cluster: A set of nodes that Kubernetes manages as a group, adapting workloads across them to meet demand.
-Control Plane: The components that manage the cluster and orchestrate the scheduling of containers onto the worker nodes.
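On a running cluster, these building blocks can be inspected with kubectl; the output depends entirely on the cluster, so this is just a quick sketch:

kubectl get nodes                   # list the machines in the cluster
kubectl get pods --all-namespaces   # list pods running across all namespaces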
Important Features of Kubernetes:
-Load Balancing: Distributes traffic across multiple containers so the application remains responsive and reliable.
-Auto Scaling: Automatically adjusts the number of running containers based on demand (see the sketch after this list).
-Self Healing: Restarts failed containers and reschedules them onto healthy nodes as needed.
-Rolling Updates: Rolls out updates gradually to minimize downtime.
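As an example of auto scaling, the manifest below sketches a HorizontalPodAutoscaler targeting the Deployment defined later in this post; the CPU target and replica bounds are assumptions rather than values from the original setup.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-python-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-python-app
  minReplicas: 3        # keep at least the Deployment's 3 replicas
  maxReplicas: 10       # assumed upper bound
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU exceeds 70%

Applying this manifest with kubectl apply lets Kubernetes add or remove pods as load changes.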
Example Kubernetes Deployment manifest (deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-python-app
spec:
  replicas: 3                      # run three copies of the application pod
  selector:
    matchLabels:
      app: my-python-app
  template:
    metadata:
      labels:
        app: my-python-app
    spec:
      containers:
      - name: my-python-app
        image: my-python-app:latest   # assumes the image built above is available to the cluster
        ports:
        - containerPort: 5000
Apply Config:
kubectl apply -f deployment.yaml
Expose Application:
kubectl expose deployment my-python-app --type=LoadBalancer --port=80 --target-port=5000
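To confirm that everything is running, the usual kubectl status commands can be used; the external IP of the LoadBalancer service depends on the environment:

kubectl get deployments
kubectl get pods -l app=my-python-app
kubectl get service my-python-app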
Overall, Docker and Kubernetes together provide consistency, scalability, efficient use of resources, automated deployment, scaling, and recovery, and portability of containers across environments, from local machines to cloud providers.