Kubernetes Node Autoscaling with Karpenter (AWS EKS & Terraform)
Aug 16, 2023
Karpenter automatically launches just the right compute resources to handle your cluster's applications.
👉 How to Manage Secrets in Terraform - • How to Manage Secrets in Terraform?
👉 Terraform Tips & Tricks - • Terraform Tips & Tricks: loops, if-st…
👉 ArgoCD Tutorial - • ArgoCD Tutorial for Beginners: GitOps…
💼 - I'm a Senior Software Engineer at Juniper Networks (11+ years of experience)
📍 - Located in San Francisco Bay Area, CA (US citizen)
🤝 - LinkedIn - https://www.linkedin.com/in/anton-putra
🎙 - Twitter - https://twitter.com/antonvputra
📧 - Email - [email protected]
👨‍💻 - GitHub - https://github.com/antonputra
=========
⏱️TIMESTAMPS⏱️
0:00 Intro
0:52 Cluster Autoscaler & Karpenter & AWS Fargate
1:29 Create AWS VPC Using Terraform
2:22 Create EKS Cluster Using Terraform
4:12 Create Karpenter Controller IAM Role
5:36 Deploy Karpenter to EKS
6:18 Create Karpenter Provisioner
7:02 Demo: Automatic Node Provisioning
=========
Source Code
📚 - Tutorial: https://antonputra.com/amazon/kuberne …
#AWS #Karpenter #DevOps
Content
0.08 -> In this video, we will go over the following
section.
3.05 -> First of all, we will discuss the differences
between Cluster Autoscaler, Karpenter, and AWS
8.5 -> Fargate.
9.5 -> Then we will create AWS VPC Using Terraform.
12.809 -> Right after that, I'll show you how to create
an EKS cluster and a default node group with
18.16 -> terraform as well.
19.25 -> To grant Karpenter access to create Kubernetes
nodes, we need to create an OpenID Connect
25.01 -> provider and an IAM role.
26.98 -> Then we will deploy Karpenter to EKS using
Helm.
30.619 -> To create EC2 instances, Karpenter needs a
Custom Resource called a Provisioner.
35.39 -> Finally, we will test how quickly Karpenter
can create Kubernetes nodes and schedule new
41.25 -> pods in the cluster.
42.68 -> Source code and all the commands are available
on my website and github repository.
48.03 -> You can also find the timestamps for each
section in the video description.
52.02 -> So what are the differences between Cluster
Autoscaler, Karpenter, and AWS Fargate?
53.02 -> When you create a regular Kubernetes cluster
in AWS, each node group will be managed by
58.86 -> the AWS autoscaling group.
60.79 -> Cluster Autoscaler will adjust the desired
size based on the load in your cluster to
65.6 -> fit all the unschedulable pods.
68.08 -> Karpenter on the other hand creates Kubernetes
nodes directly from EC2 instances.
73.21 -> It improves the efficiency and cost of running
workloads on that cluster.
78.01 -> Finally, AWS Fargate creates a dedicated node
for each pod running in the cluster.
83.75 -> With Fargate, you don't need to worry about
infrastructure management and only focus on
88.909 -> your workloads.
90.06 -> First of all, we need to create a VPC using
terraform.
93.45 -> In this video, I'm not going to go over each
configuration parameter of each terraform
98.25 -> resource as I did in the previous videos.
100.84 -> If you'd like you can find more details in
this video.
103.97 -> Newer versions of terraform generate a
lock file with all the provider versions that
109.19 -> I use.
110.19 -> If you face any errors during the tutorial,
copy this file and rerun terraform.
115.11 -> Next is a provider with some variables such
as EKS cluster name and a region.
120.799 -> Then the VPC resource with EKS-specific parameters.
125.08 -> Internet Gateway.
126.08 -> Four subnets, two private and two public.
129.42 -> NAT Gateway.
130.86 -> Finally, two route tables: a public one with a
default route to the Internet Gateway, and a private one
136.34 -> with a default route to the NAT Gateway.
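As a rough sketch, the components listed above map to Terraform resources like the following. All names, CIDR ranges, and availability zones here are my own placeholders, not necessarily the ones used in the tutorial, and only one AZ is shown:

```hcl
# Placeholder names and CIDRs for illustration; a real setup
# would repeat the subnets across a second availability zone.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true # required for EKS
  enable_dns_hostnames = true
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.0.0/19"
  availability_zone = "us-east-1a"
  tags = { "kubernetes.io/role/internal-elb" = "1" }
}

resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.64.0/19"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
  tags = { "kubernetes.io/role/elb" = "1" }
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}
```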
138.86 -> Let's initialize terraform and create all
those components with terraform apply.
143.31 -> Next, we need to create an EKS cluster and
a node group.
146.99 -> EKS requires an IAM role to access the AWS API
on your behalf to create resources.
152.569 -> There is a single IAM policy that needs to
be attached to that role: AmazonEKSClusterPolicy.
158.73 -> Then the EKS cluster itself.
161.04 -> Provide a cluster name and the ARN of the
role that we just created.
165.88 -> You can configure your cluster endpoint to be private,
but for this demo we'll keep a public endpoint.
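The cluster role and cluster described above could look roughly like this. The `demo` name and the subnet references are placeholders for whatever your VPC code exports:

```hcl
# IAM role that EKS assumes to manage AWS resources on your behalf.
resource "aws_iam_role" "eks" {
  name = "demo-eks-cluster" # placeholder name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "eks.amazonaws.com" }
    }]
  })
}

# The single AWS-managed policy the role needs.
resource "aws_iam_role_policy_attachment" "eks" {
  role       = aws_iam_role.eks.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_eks_cluster" "demo" {
  name     = "demo"
  role_arn = aws_iam_role.eks.arn

  vpc_config {
    endpoint_private_access = false
    endpoint_public_access  = true # public endpoint for the demo
    subnet_ids = [
      aws_subnet.private_a.id, # placeholder subnet references
      aws_subnet.public_a.id,
    ]
  }
}
```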
170.64 -> Now we need to create another IAM role for
Kubernetes nodes.
174.73 -> It's going to be used by the regular node
pool and not Karpenter.
178.78 -> You have two options: either use the same
IAM role and create an instance profile for
183.73 -> Karpenter, or create a dedicated IAM
role.
187.4 -> But in this case, you would need to manually
update auth configmap to authorize nodes created
193.43 -> by karpenter with a new IAM role to join the
cluster.
196.77 -> I'll show you how to add it later on.
199.099 -> For this video let's use the same IAM role.
201.96 -> We usually need to attach 3 AWS-managed policies
as a base minimum.
206.91 -> Then the EKS node group.
208.57 -> It's going to be an EKS-managed node group
backed by an AWS autoscaling group, with
214.02 -> min, max, and desired sizes.
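A sketch of the node role, the three base policy attachments, and the node group might look like this; names and sizes are illustrative placeholders:

```hcl
# IAM role assumed by the EC2 worker nodes.
resource "aws_iam_role" "nodes" {
  name = "demo-eks-nodes" # placeholder name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# The three AWS-managed policies typically attached as a base minimum.
resource "aws_iam_role_policy_attachment" "nodes" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
  ])
  role       = aws_iam_role.nodes.name
  policy_arn = each.value
}

resource "aws_eks_node_group" "general" {
  cluster_name    = aws_eks_cluster.demo.name
  node_group_name = "general"
  node_role_arn   = aws_iam_role.nodes.arn
  subnet_ids      = [aws_subnet.private_a.id] # placeholder reference

  scaling_config {
    desired_size = 1
    min_size     = 1
    max_size     = 5
  }
}
```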
216.629 -> Your typical cluster autoscaler would adjust
this value based on your load.
221.94 -> Karpenter on the other hand creates EC2 instances
directly without autoscaling groups.
226.95 -> Now let's again apply the terraform to create
an EKS cluster.
230.68 -> To connect to the cluster you need to update
the Kubernetes context with this command.
235.04 -> Then run a quick check to see if we can reach Kubernetes.
238.44 -> It should return the default k8s service.
241.209 -> Right now we have a single node.
243.3 -> As I mentioned before, if you decide to create
a separate IAM role and instance profile you
248.599 -> would need to edit the auth configmap to add
the ARN of the new role.
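If you did go with a dedicated role, the aws-auth entry would look roughly like this; the account ID and role name below are placeholders:

```yaml
# kubectl edit configmap aws-auth -n kube-system
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # placeholder ARN - use your dedicated Karpenter node role
    - rolearn: arn:aws:iam::111122223333:role/KarpenterNodeRole-demo
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```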
251.377 -> Lastly, let's create a Kubernetes deployment
to test how quickly Karpenter can create EC2
252.377 -> instances and schedule new pods.
253.377 -> It's going to be a simple nginx-based deployment
with 5 replicas.
254.377 -> With given resources, this deployment won't
fit on the default node group.
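A minimal version of such a deployment might look like the following; the image tag and resource requests are my own example values, chosen only so that five replicas exceed what a single small node can hold:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25 # example tag
          resources:
            requests:
              cpu: "1"      # large enough that 5 replicas
              memory: 512Mi # won't fit on the default node
```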
255.377 -> When you're just getting started with Karpenter,
it's a good idea to check its logs in case you
256.377 -> get any errors.
257.377 -> In another window, let's run get pods.
258.377 -> Then let's get all the nodes available in
the Kubernetes cluster.
259.377 -> Finally, create the deployment with 5 replicas.
260.377 -> In a few seconds, you'll see Karpenter creates
a node.
261.377 -> In a couple of minutes, when the node transitions
to the ready state, all the pods will be scheduled.
262.377 -> Karpenter is a nice tool to have in your toolbox.
263.377 -> If I couldn't use AWS Fargate, I would definitely
prefer Karpenter over the traditional cluster
264.377 -> autoscaler.
265.377 -> Karpenter needs permissions to create EC2
instances in AWS.
266.377 -> If you use a self-hosted Kubernetes cluster,
for example one created with kOps,
267.377 -> you can add additional IAM policies to the
existing IAM role attached to Kubernetes nodes.
268.377 -> Since we use EKS, the best way to grant access to
an internal service is with IAM roles for
274.349 -> service accounts.
275.349 -> First, we need to create an OpenID Connect
provider.
278.33 -> With Terraform, it's very easy.
280.249 -> Get the cluster's TLS certificate and use it in
the terraform resource.
284.169 -> Next is a trust policy to allow the Kubernetes
service account to assume the IAM role.
290.27 -> Make sure that you deploy Karpenter to the
karpenter namespace with the same service
295.43 -> account name.
296.43 -> Then create a karpenter controller role and
attach that policy.
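The OIDC provider, trust policy, and controller role described above could be sketched like this; the role name and the `karpenter/karpenter` namespace/service-account pair are assumptions that must match your Helm deployment:

```hcl
data "tls_certificate" "eks" {
  url = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}

# Trust policy: only the karpenter/karpenter service account
# may assume this role via the OIDC provider.
data "aws_iam_policy_document" "karpenter_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
    }
    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:karpenter:karpenter"]
    }
  }
}

resource "aws_iam_role" "karpenter_controller" {
  name               = "karpenter-controller" # placeholder name
  assume_role_policy = data.aws_iam_policy_document.karpenter_assume.json
}
```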
300.349 -> Next is a set of permissions that we need
to grant to Karpenter to manage Kubernetes
305.369 -> nodes.
306.369 -> We will create this json file later.
308.61 -> Then attach it to the role as well.
310.999 -> Since we will be using the same IAM role,
we need to create an IAM instance profile
315.509 -> that Karpenter will use to attach to EC2 instances.
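The instance profile itself is a small resource; the profile name below is a placeholder, and it reuses the node role created for the default node group:

```hcl
# Karpenter attaches this profile to the EC2 instances it launches,
# reusing the same IAM role as the regular node group.
resource "aws_iam_instance_profile" "karpenter" {
  name = "KarpenterNodeInstanceProfile" # placeholder name
  role = aws_iam_role.nodes.name
}
```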
319.409 -> Let's create the controller-trust-policy.json
file.
327.129 -> You can find an example in the official Karpenter
github project.
330.759 -> Alright, since we've added an additional provider
we need to initialize before we can apply
335.979 -> the terraform code.
336.979 -> To deploy Karpenter to our cluster, we're
going to use Helm.
340.539 -> First of all, you need to authenticate with
EKS using the helm provider.
344.599 -> Then the helm release.
345.719 -> At this time, this is the latest version of
the Helm chart.
349.509 -> There are a few important variables that we
need to override.
352.939 -> First is the annotation for the Kubernetes
service account.
356.3 -> Then the EKS cluster name.
358.229 -> Most of these variables you can pull dynamically
from terraform resources.
362.599 -> We also need a cluster endpoint, Karpenter
will use it to join new nodes to the EKS.
367.909 -> The final variable is the instance profile.
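Putting those variables together, the Helm provider and release might look roughly like the following. The chart repository, pinned version, and the exact `settings.aws.*` value keys are assumptions — they vary between Karpenter chart versions, so check the chart you install:

```hcl
provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.demo.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.demo.certificate_authority[0].data)
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.demo.name]
    }
  }
}

resource "helm_release" "karpenter" {
  name             = "karpenter"
  namespace        = "karpenter" # must match the IAM trust policy
  create_namespace = true
  repository       = "oci://public.ecr.aws/karpenter"
  chart            = "karpenter"
  version          = "v0.27.5" # example pin; use whatever is current

  # Annotation that links the service account to the controller IAM role.
  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = aws_iam_role.karpenter_controller.arn
  }
  set {
    name  = "settings.aws.clusterName"
    value = aws_eks_cluster.demo.name
  }
  # Karpenter uses the endpoint to join new nodes to the cluster.
  set {
    name  = "settings.aws.clusterEndpoint"
    value = aws_eks_cluster.demo.endpoint
  }
  set {
    name  = "settings.aws.defaultInstanceProfile"
    value = aws_iam_instance_profile.karpenter.name
  }
}
```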
370.68 -> Let's apply and check if the controller is
running.
373.669 -> Check if the Helm release was deployed successfully.
376.11 -> Then the karpenter pod in its dedicated namespace.
379.759 -> Before we can test Karpenter, we need to create
a Provisioner.
383.27 -> Karpenter defines a Custom Resource called
a Provisioner to specify provisioning configuration.
388.91 -> Each provisioner manages a distinct set of
nodes.
392.259 -> I included some parameters such as TTL.
395.129 -> Also, you can define the limit on how many
nodes Karpenter can create.
399.75 -> It's measured in CPU cores.
401.93 -> You can define which EC2 instance families you
want to use or exclude.
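A Provisioner covering the parameters just mentioned might look like this. Note this uses the `v1alpha5` API that matches this era of Karpenter (newer releases replaced Provisioner with NodePool), and the TTL, limit, and excluded families are example values:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  ttlSecondsAfterEmpty: 60 # example: remove empty nodes after a minute
  limits:
    resources:
      cpu: "100" # example cap, measured in vCPU cores across all nodes
  requirements:
    - key: karpenter.k8s.aws/instance-family
      operator: NotIn
      values: ["t2", "t3"] # example exclusion
  providerRef:
    name: default # points at the AWSNodeTemplate below
```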
406.55 -> Also, we need to create another Custom Resource:
AWSNodeTemplate.
411.3 -> Here you need to use AWS tags to select subnets
and security groups.
416.4 -> You need to replace demo with your EKS
cluster name.
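A minimal AWSNodeTemplate selecting by tags could look like this; the `karpenter.sh/discovery` tag key is the common convention, and it assumes your subnets and security groups carry that tag:

```yaml
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  subnetSelector:
    karpenter.sh/discovery: demo # replace demo with your cluster name
  securityGroupSelector:
    karpenter.sh/discovery: demo
```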
419.819 -> Finally, use kubectl to create those
resources in the cluster.
Source: https://www.youtube.com/watch?v=C_YZXpXwtbg