Kubernetes Basics: Pods, Nodes, Containers, Deployments and Clusters

Kubernetes won the Container Orchestration War.

👉 How to Manage Secrets in Terraform -    • How to Manage Secrets in Terraform?  
👉 Terraform Tips & Tricks -    • Terraform Tips & Tricks: loops, if-st…  
👉 ArgoCD Tutorial -    • ArgoCD Tutorial for Beginners: GitOps…  

💼 - I’m a Senior Software Engineer at Juniper Networks (11+ years of experience)
📍 - Located in San Francisco Bay Area, CA (US citizen)

🤝 - LinkedIn - https://www.linkedin.com/in/anton-putra
🎙 - Twitter - https://twitter.com/antonvputra
📧 - Email - [email protected]
👨‍💻 - GitHub - https://github.com/antonputra

=========
⏱️TIMESTAMPS⏱️
0:00 Intro
0:18 Kubernetes Nodes
1:42 Kubernetes Persistent Volumes
2:26 Kubernetes Containers
3:08 Kubernetes Pods
4:16 Kubernetes Deployment
4:49 Kubernetes Load Balancer & Ingress

=========
Source Code
📚 - Tutorial: https://github.com/antonputra/tutoria…

#Kubernetes #DevOps #K8s


Content

0.06 -> Kubernetes won the Container Orchestration War.
2.57 -> If you are a developer, a DevOps engineer, or an SRE, you have to know at least the basics of how
7.16 -> Kubernetes operates.
8.5 -> In this video, we will go over the basic concepts such as pods, nodes, containers, deployments,
14.22 -> and clusters.
15.22 -> Also, we will touch on ingresses and load balancers.
18.16 -> Let's start with a Node, which is the smallest unit of computing hardware in the Kubernetes
22.3 -> cluster.
23.3 -> It is a single machine where your applications will run.
25.449 -> It can be a physical server in a data center or a virtual machine in the public cloud such
30.029 -> as AWS or GCP.
31.829 -> You can even build a Kubernetes cluster from multiple Raspberry Pis.
35.14 -> Thinking of a machine as a Node allows us to insert a level of abstraction.
38.89 -> Now, you don't need to worry about any single server in your cluster or its unique characteristics,
43.42 -> such as how much memory or CPU it has.
45.75 -> Instead, you can delegate the decision of where to deploy your service to Kubernetes
49.26 -> based on the spec that you provide.
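As a minimal sketch of what "the spec that you provide" looks like, a pod can declare CPU and memory requests and the scheduler uses them to pick a suitable node. The image name and resource values below are only illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx:1.25          # illustrative image
      resources:
        requests:
          cpu: "250m"            # scheduler places the pod on a node with this much free CPU
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"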
51.3 -> Also, if something happens with a single node, it can be easily replaced, and Kubernetes
55.53 -> will take care of the load distribution for you.
58.03 -> Sometimes it can be helpful to work with individual servers, but it's not the Kubernetes way.
62.19 -> In general, it should not matter to you or to the application where it runs.
66.07 -> Multiple nodes are combined into a node pool.
68.09 -> When you deploy the service, Kubernetes will inspect individual nodes for you and select
72.2 -> one node based on the available CPU, memory, and other characteristics.
76.44 -> If for some reason, that node fails, Kubernetes will make sure that your application is rescheduled
81.67 -> and healthy.
82.729 -> You can have multiple node pools, sometimes called instance groups, in your cluster.
86.99 -> For example, you can have ten nodes with high CPU and low memory to run CPU-intensive tasks,
91.68 -> and another node pool with high memory and low CPU.
95.799 -> In the cloud, it's very common to separate node pools into on-demand nodes and spot nodes,
100.039 -> which are much cheaper but can be taken away at any moment.
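As a rough sketch, assuming the spot node pool carries a hypothetical label such as node-type: spot and a matching taint, you can steer a workload onto it with a nodeSelector and a toleration:

apiVersion: v1
kind: Pod
metadata:
  name: batch-job
spec:
  nodeSelector:
    node-type: spot              # hypothetical label applied to the spot node pool
  tolerations:
    - key: "spot"                # hypothetical taint on spot nodes
      operator: "Exists"
      effect: "NoSchedule"
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sh", "-c", "echo running on a spot node && sleep 3600"]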
103.679 -> Since applications running on your cluster aren't guaranteed to run on a specific node,
108.02 -> you cannot use the local disk to save any data.
110.289 -> If the application saves something on the local file system and then is relocated to
114.56 -> another node, the file will no longer be there.
117.27 -> That's why you can only use a local disk as a temporary location for the cache.
121.179 -> To store data permanently, Kubernetes uses Persistent Volumes.
125.039 -> While the CPU and memory resources of all nodes are pooled and managed by the Kubernetes
129.2 -> cluster, persistent file storage is not.
131.739 -> Instead, local or cloud drives can be attached to the cluster as a Persistent Volume.
136.48 -> You can think about it as plugging an external hard drive into the cluster.
140.55 -> Persistent Volumes provide a file system that can be mounted to the cluster without being
144.379 -> associated with any particular node.
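A minimal sketch of how this looks in practice: you request storage with a PersistentVolumeClaim and mount it into a pod. The storage class name is cluster-specific and the image and credentials below are only illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard     # cluster-specific; assumption for illustration
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: postgres
      image: postgres:16
      env:
        - name: POSTGRES_PASSWORD
          value: example         # illustrative only; use a Secret in practice
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim    # the data survives rescheduling to another node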
146.48 -> To run an application on the Kubernetes cluster, you need to package it as a Linux container.
150.989 -> Containerization allows you to create self-contained Linux execution environments.
155.239 -> Any application and all its dependencies can be bundled up into a single image and then
159.54 -> can be easily distributed.
161.47 -> Anyone can download the image and deploy it on their infrastructure with minimal setup
165.72 -> required.
166.72 -> Usually, creating Docker images is a part of the CI/CD pipeline.
168.84 -> You check out the code, run some unit tests and then build an image.
173.68 -> You can put multiple applications into a single container, but you should limit yourself to
177.62 -> one process per container if possible.
179.8 -> It's better to have a lot of small containers than one large one.
183.26 -> If the container has a tight focus, updates are easier to deploy, and issues are easier
187.48 -> to debug.
188.48 -> Kubernetes doesn't run containers directly; instead, it wraps one or more containers into
193 -> a higher-level structure called a pod.
195.54 -> Any containers in the same pod will share the same resources and local network.
200.37 -> Containers can easily communicate with other containers in the same pod as though they
204.099 -> were on the same machine while maintaining a degree of isolation from others.
208.4 -> Pods are used as the unit of replication in Kubernetes.
212 -> If your application needs to be scaled up, you simply increase the number of pods.
215.84 -> Kubernetes can be configured to automatically scale up and down your application based on
220.39 -> the load.
221.39 -> You can use CPU, memory, or even custom metrics such as the number of requests to the application.
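For instance, a HorizontalPodAutoscaler can scale a deployment on CPU utilization; the target name and thresholds below are only placeholders:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # add pods when average CPU exceeds 70%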
227.24 -> Usually, you would run multiple copies of the same application to avoid downtime if
230.79 -> something happens to a single node.
232.52 -> Just as a container can have multiple processes, a pod can have multiple containers inside.
237.379 -> However, since pods are scaled up and down as a unit, all containers in a pod must be
241.959 -> scaled together, regardless of their individual needs.
245.01 -> This leads to wasted resources and an expensive bill.
247.93 -> To avoid this, pods should remain as small as possible, typically holding only a main
252.11 -> process and its tightly coupled helper containers.
255.01 -> We typically call these helpers sidecars.
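A minimal sketch of a pod with a main container and a logging sidecar; both images and the shared volume are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web                  # main process
      image: nginx:1.25
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper          # tightly coupled helper (sidecar)
      image: busybox:1.36
      command: ["sh", "-c", "touch /var/log/nginx/access.log && tail -f /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
  volumes:
    - name: logs
      emptyDir: {}               # shared scratch space that lives as long as the pod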
256.269 -> Pods are the basic unit of computation in Kubernetes, but they are typically not created
260.93 -> directly in the cluster.
262.09 -> Instead, Kubernetes provides another level of abstraction, such as a Deployment.
266.28 -> A deployment's primary purpose is to declare how many replicas of a pod should be running
270.66 -> at a time.
271.66 -> When a deployment is added to the cluster, it will automatically spin up the requested
275.699 -> number of pods and then monitor them.
277.4 -> If a pod fails, the deployment will automatically re-create it.
280.9 -> Using a deployment, you don't have to deal with pods manually.
283.56 -> You can just declare the desired state of the system, and it will be managed for you
288.11 -> automatically.
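A minimal Deployment manifest, with illustrative names, looks roughly like this; the replicas field is the declared desired state that Kubernetes keeps enforcing:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # desired number of pod copies
  selector:
    matchLabels:
      app: web
  template:                      # pod template the deployment creates and monitors
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25      # illustrative image
          ports:
            - containerPort: 80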
289.11 -> By now, we have learned about some core components in Kubernetes.
292.5 -> We can run the application in the cluster with the deployment, but how can we expose
296.639 -> our service to the internet?
297.94 -> By default, Kubernetes provides isolation between pods and the outside world.
302.27 -> If you want to communicate with a service running in a pod, you have to open up a channel
306.28 -> for communication.
307.58 -> There are multiple ways to expose your service.
309.78 -> If you want to expose the application directly, you can use a Service of type LoadBalancer.
313.65 -> It will map one application per load balancer.
316.509 -> In this case, you can use almost any kind of protocol: TCP, UDP, gRPC, WebSockets, and
321.961 -> others.
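A sketch of such a Service, with a placeholder selector and port; on a cloud provider this provisions an external load balancer pointing at the matching pods:

apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer             # provisions one cloud load balancer for this service
  selector:
    app: web                     # routes to pods carrying this label
  ports:
    - port: 80                   # external port
      targetPort: 80             # container port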
323.069 -> Another popular method is the Ingress controller.
324.84 -> There are a lot of different ingress controllers available for Kubernetes, with different capabilities.
329.88 -> When using an ingress controller, you would share the single load balancer between all
333.83 -> your services and use subdomains or paths to direct traffic to a particular application
338.4 -> within the cluster.
340 -> Ingresses only allow you to use the HTTP and HTTPS protocols.
343.75 -> And an ingress is way more complicated to set up and maintain over time than a simple load balancer.
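With an ingress controller already installed in the cluster, a single Ingress resource can route different hosts or paths to different services behind one load balancer; the hostnames and service names below are only placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com      # placeholder subdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # placeholder service name
                port:
                  number: 80
    - host: api.example.com      # another subdomain sharing the same load balancer
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80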
348.93 -> If you want more videos like this, subscribe to my channel.
351.62 -> Thank you for watching, and I'll see you in the next video.

Source: https://www.youtube.com/watch?v=B_X4l4HSgtc