Kubernetes resources and their use cases
What is Kubernetes?
Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers. It allows you to manage containerized applications across a cluster of machines, providing mechanisms for deploying, maintaining, and scaling applications with ease. With Kubernetes, you can deploy your applications quickly and predictably, scale them on the fly, and seamlessly roll out updates or changes. It abstracts away the underlying infrastructure, making it easier to focus on developing and running your applications.
What are Kubernetes resources?
In Kubernetes, resources refer to the compute, memory, and storage units available to your applications. These resources are allocated to containers running within Kubernetes pods. When you define a pod or a deployment in Kubernetes, you can specify resource requests and limits for each container.
Resource Requests: These are the minimum amount of resources that Kubernetes guarantees to allocate to a container. If a container requests 1 CPU and 1 GB of memory, Kubernetes will ensure that these resources are available before scheduling the container onto a node.
Resource Limits: These are the maximum amount of resources that a container can use. If a container exceeds its resource limits, Kubernetes may throttle or terminate the container to prevent it from impacting other containers or the overall cluster performance.
By setting resource requests and limits appropriately, you can ensure that your applications have the resources they need to run efficiently, while also preventing them from consuming excessive resources and affecting other applications running in the cluster.
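As a sketch of how the requests and limits described above look in practice, here is a minimal pod spec (the pod name, image, and the specific CPU/memory values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # illustrative name
spec:
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:
        cpu: "500m"          # guaranteed: half a CPU core
        memory: "256Mi"      # guaranteed: 256 MiB of memory
      limits:
        cpu: "1"             # hard cap: one full CPU core
        memory: "1Gi"        # hard cap: 1 GiB of memory
```

A container that exceeds its CPU limit is throttled, while one that exceeds its memory limit may be terminated, matching the behavior described above.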
Here are some Kubernetes resources and their use cases:
1. Pods: Pods are the smallest deployable units in Kubernetes, consisting of one or more containers. They are used to run and scale applications.
Use cases:
1) Running a Single Container: The most basic use case for pods is to run a single container. This is useful for simple applications that only require one container.
2) Sidecar Containers: Pods can be used to run a main application container alongside one or more sidecar containers. Sidecar containers can provide additional functionality, such as logging, monitoring, or proxying, to the main application.
3) Multi-Container Applications: Pods can be used to run multi-container applications where multiple containers need to work together and share resources. For example, a web server container and a database container can be run together in a pod to create a web application.
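As a sketch of the sidecar pattern described above, the following pod runs an NGINX container alongside a hypothetical log-tailing sidecar, with an emptyDir volume shared between them (the pod name, sidecar image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar              # illustrative name
spec:
  containers:
  - name: web                         # main application container
    image: nginx:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-agent                   # sidecar container (illustrative)
    image: busybox:latest
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs
    emptyDir: {}                      # shared between both containers
```

Both containers see the same files under /var/log/nginx, which is how a sidecar typically picks up the main container's output.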
Here’s how you can create a pod:
Step 1: Create a file named my-pod.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx:latest
    ports:
    - containerPort: 80
This manifest defines a pod named my-pod with a single container running the nginx:latest image, which listens on port 80.
Step 2: Apply the manifest using the kubectl apply command:
kubectl apply -f my-pod.yaml
Step 3: You can verify that the pod has been created by running:
kubectl get pods
2. Deployments: Deployments manage the lifecycle of pods, allowing for easy scaling, rolling updates, and rollbacks of application versions.
Use cases:
Rolling Updates: Deployments support rolling updates, allowing you to update your application without downtime. You can gradually update pods to a new version, ensuring that the application remains available throughout the update process.
Rollbacks: If a deployment update fails or causes issues, you can easily roll back to a previous version. Kubernetes will automatically revert the deployment to the previous version, minimizing downtime and impact on users.
Scaling: Deployments allow you to scale your application up or down based on demand. You can easily scale the number of replicas (pods) running your application to handle increased traffic or reduce costs during periods of low demand.
Here’s how you can create a deployment:
Step 1: To create a deployment, define a Deployment manifest in YAML or JSON format. Here’s an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:latest
        ports:
        - containerPort: 80
This manifest defines a deployment named my-deployment with 3 replicas, each running a container with the nginx:latest image.
Step 2: Use the kubectl apply command to create the deployment:
kubectl apply -f my-deployment.yaml
Your deployment has now been created.
Rolling Updates: Kubernetes will perform rolling updates by default, gradually updating pods to the new version while ensuring that the application remains available. You can monitor the progress of the update using:
kubectl rollout status deployment/my-deployment
Rollback: If an update causes issues, you can roll back to a previous version of the deployment:
kubectl rollout undo deployment/my-deployment
Cleaning Up: To delete the deployment and all associated resources (pods and the underlying ReplicaSet), you can use:
kubectl delete deployment my-deployment
3. Services: Services provide network connectivity to pods, allowing them to communicate with each other both internally and externally.
Use cases:
Load Balancing: Services can distribute incoming traffic across multiple pods that are part of the service. This helps in scaling your application horizontally by adding more pods to handle increased traffic.
Service Discovery: Services provide a stable endpoint (cluster IP or external IP) and DNS name that other applications can use to communicate with your pods. This allows other services within the cluster to discover and communicate with your application without needing to know the specific IP addresses of individual pods.
External Access: Services can be used to expose your application to external traffic. This can be done using a NodePort service type, which opens a port on every node in the cluster and forwards external traffic to the service.
Here’s how you can create a service:
Step 1: Create a service manifest. Define a Service manifest in YAML or JSON format. Here’s an example of a basic service that exposes port 80 on the nginx pods:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
In this manifest, selector specifies which pods the service should target based on their labels (app: my-app in this case), and ports specifies the port configuration for the service.
Step 2: Apply the service using the kubectl apply command:
kubectl apply -f my-service.yaml
Step 3: You can verify that the service has been created by running:
kubectl get services
Accessing the Service: Depending on the type of service you created, you can access it using the cluster IP (for ClusterIP type), the node’s IP (for NodePort type), or an external IP (for LoadBalancer type). For example, if you created a NodePort service, you can access it using any node’s IP address and the NodePort.
Here’s an example of creating a simple NGINX service in Kubernetes using a NodePort type service:
Step 1: First, create a Deployment for NGINX. Create a file named nginx-deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Step 2: Apply the deployment:
kubectl apply -f nginx-deployment.yaml
Step 3: Next, create a Service to expose the NGINX Deployment. Create a file named nginx-service.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
Step 4: Apply the service:
kubectl apply -f nginx-service.yaml
Step 5: To access the NGINX service, use any node’s IP address along with the NodePort (30080 in this example). You can find the node’s IP address using:
kubectl get nodes -o wide
Now, you can access the NGINX service using http://<node-ip>:30080 in your web browser.
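If your cluster runs on a cloud provider that can provision external load balancers, the same deployment could instead be exposed with a LoadBalancer service, as mentioned earlier. This is a sketch of the NodePort example above with the type changed (the service name is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb              # illustrative name
spec:
  type: LoadBalancer          # cloud provider provisions an external IP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Once the provider has provisioned the load balancer, kubectl get services shows the assigned address in the EXTERNAL-IP column.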
4. ConfigMaps and Secrets: ConfigMaps and Secrets store configuration data and sensitive information, respectively, which can be injected into pods as environment variables or mounted volumes.
Use cases: ConfigMaps
Configuration Data: Storing configuration data that can be consumed by applications running in pods. This can include environment variables, command-line arguments, configuration files, or any other type of configuration data.
Decoupling Configuration from Application Code: Keeping configuration data separate from application code, making it easier to change configuration without rebuilding the application.
Environment-specific Configurations: Providing different configurations for different environments (e.g., development, staging, production) without modifying the application code.
Use cases: Secrets
Sensitive Information: Storing sensitive information such as passwords, API keys, and TLS certificates securely.
Access Credentials: Providing access credentials to applications without exposing them in the application code or configuration.
Volume Mounts: Mounting sensitive files or data into pods as volumes, allowing applications to access them securely.
Here’s how you can create ConfigMaps and Secrets:
Step 1: Create a ConfigMap. ConfigMaps are used to store configuration data that can be consumed by pods in your cluster, such as database connection strings, environment variables, or any other configuration data. Create a ConfigMap named my-configmap using a YAML file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  database_url: "mysql://user:password@hostname/database"
  app_config: |
    key1: value1
    key2: value2
The data section contains key-value pairs that represent the configuration data.
In this example, database_url and app_config are two pieces of configuration data stored in the ConfigMap.
Step 2: Apply the ConfigMap:
kubectl apply -f my-configmap.yaml
Step 3: Create a Secret. Secrets are used to store sensitive information, such as passwords, API keys, and TLS certificates. Secret values are base64-encoded in Kubernetes. Create a YAML file named my-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: dXNlcg== # base64-encoded "user"
  password: cGFzc3dvcmQ= # base64-encoded "password"
Step 4: Apply the Secret:
kubectl apply -f my-secret.yaml
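If you prefer not to base64-encode values by hand, the Secret API also accepts a stringData field, which takes plain text and encodes it for you on the server side. A sketch equivalent to the Secret above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:                   # plain-text values; Kubernetes encodes them
  username: user
  password: password
```

When you read the Secret back with kubectl get secret my-secret -o yaml, the values appear base64-encoded under data, just as in the manifest above.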
Step 5: Use the ConfigMap and Secret in a pod. You can consume ConfigMaps and Secrets in your pods by mounting them as volumes or setting them as environment variables. Here’s how you can mount a ConfigMap and a Secret in a pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
    - name: secret-volume
      mountPath: /etc/secret
  volumes:
  - name: config-volume
    configMap:
      name: my-configmap
  - name: secret-volume
    secret:
      secretName: my-secret
Step 6: Apply the pod:
kubectl apply -f my-pod.yaml
This pod mounts the my-configmap ConfigMap to /etc/config and the my-secret Secret to /etc/secret in the container.
You have now created a Kubernetes pod that uses a ConfigMap and a Secret to access configuration data and sensitive information, respectively, inside the container.
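Alternatively, the same ConfigMap and Secret can be injected as environment variables instead of volumes; here is a sketch (the pod name and the variable names DATABASE_URL and DB_PASSWORD are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-env-pod            # illustrative name
spec:
  containers:
  - name: my-container
    image: nginx
    env:
    - name: DATABASE_URL      # from the ConfigMap key database_url
      valueFrom:
        configMapKeyRef:
          name: my-configmap
          key: database_url
    - name: DB_PASSWORD       # from the Secret key password
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: password
```

Inside the container, the application reads these values as ordinary environment variables, with no volume mounts required.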
Conclusion
Kubernetes is a powerful open-source platform for automating the deployment, scaling, and management of containerized applications. It provides a rich set of features for managing resources, deploying pods, managing their lifecycle with deployments, and enabling communication between pods using services. Additionally, Kubernetes offers ConfigMaps and Secrets for managing configuration data and sensitive information securely.