AWS re:Invent 2021 - Deep dive on Amazon EKS
Amazon EKS is a fully managed Kubernetes service. This session covers recent enhancements to EKS and dives deep into the latest features. Learn how EKS gives you the flexibility to start, run, and scale Kubernetes applications in the AWS Cloud or on premises and how customers trust EKS to run their most sensitive and mission-critical applications.
Content
1.437 -> (upbeat music)
10.85 -> - Hello, everyone, welcome to CON304.
13.9 -> This is a deep dive on Amazon
Elastic Kubernetes Service.
18.18 -> My name is Mike Stefaniak.
19.78 -> I am a senior product
manager on the EKS team.
27.67 -> So we'll start with what is Amazon EKS?
29.93 -> I'm sure many of you are either
using EKS or trying it out,
33.95 -> but let's just cover the basics.
36.11 -> EKS is our managed
Kubernetes service at AWS,
40.57 -> and it provides the
flexibility of Kubernetes
44.22 -> with the security and resiliency
45.86 -> of being an AWS managed service.
48.88 -> Some of the important things
we think about for EKS
52.4 -> is first, EKS is just
vanilla upstream Kubernetes.
56.5 -> We don't fork it in any way.
58.88 -> The only occasional thing we
do is backport security patches
62.32 -> because we support versions
for longer than upstream does.
65.94 -> But otherwise, if you're
running open-source Kubernetes,
68.91 -> you're going to get the same versions
70.1 -> if you're running EKS.
72.8 -> We do a lot of work to give
74.91 -> a performant, reliable, secure experience.
78.66 -> And really our goal with EKS
80.27 -> is to make Kubernetes
operations, administration,
83.45 -> and management simple.
90.59 -> So before we jump more into EKS,
92.78 -> I think we need to talk
about Kubernetes itself.
95.2 -> What problems is Kubernetes solving?
98.317 -> Why did AWS even build a
managed Kubernetes service
101.42 -> in the first place?
103.03 -> And really, Kubernetes
simplifies the deployment
106.8 -> and management of
containerized applications.
109.53 -> It's also backed by a
vibrant and active community
113.18 -> and part of the Cloud
Native Computing Foundation.
116.78 -> One of the more common
reasons you hear for
121.11 -> moving to Kubernetes is portability,
122.82 -> the fact that you can
125.13 -> run Kubernetes
across multiple environments.
127.47 -> But that's actually not
the top reason we hear
130.39 -> from customers and yourselves.
132.47 -> It's really actually the
faster deployment time.
135.14 -> By moving to Kubernetes, your
organization can move faster,
140.07 -> deploy and ship code faster.
142.08 -> And it's really the declarative,
143.904 -> self-healing nature of Kubernetes
146.22 -> that provides those benefits
and allow operations teams
150.115 -> to sleep more soundly at night,
151.9 -> knowing that they don't have to wake up
154.07 -> or deal with problems
deploying applications.
157.86 -> Some of the other reasons are scalability.
160.28 -> It's easy to run lots of
applications in Kubernetes
163.96 -> as well as availability.
169.38 -> So then why Amazon EKS?
171.41 -> Why did we build EKS?
173.66 -> And really it comes down
to, at a certain scale,
176.37 -> running and self-managing Kubernetes
178.553 -> adds significant operational overhead,
181.36 -> which is that classic
undifferentiated heavy lifting.
184.91 -> And it takes time and resources away
187.1 -> from core business applications.
189.01 -> And so this was really the
first reason why we built EKS
194.13 -> is to take away that heavy lifting.
197.9 -> First, running and scaling
Kubernetes is challenging.
201.17 -> The other part is securing
Kubernetes can be a lot of work
206.09 -> for organizations,
206.923 -> and that's something
that you offload to EKS.
210.95 -> And then the other major part with EKS
213.57 -> is that we spend a lot of time
215.24 -> making sure Kubernetes on AWS
217.54 -> is integrated with other AWS
services securely and reliably.
225.59 -> All right, so many of you
maybe are using EKS today.
229.18 -> Others of you might be new to Kubernetes.
231.66 -> And yet another group of you
233.13 -> might be self-managing
Kubernetes yourself on EC2,
237.84 -> and possibly because
239.49 -> you were using Kubernetes
before EKS launched
242.572 -> three and a half years ago now.
245.26 -> And here are some tips
247.52 -> that help customers move
253.57 -> from self-managed Kubernetes to EKS.
257.45 -> The first thing to keep in mind
258.7 -> is you're going to want to make sure
260.45 -> you qualify your applications
261.74 -> on a recent version of Kubernetes.
263.77 -> EKS typically supports
265.5 -> the most recent four
versions of Kubernetes.
268.55 -> And if you're self managing,
269.85 -> you may be on an older version,
271.15 -> you're going to have to upgrade
272.479 -> and qualify your applications
274.59 -> on a version that's supported by EKS.
277.36 -> The next is evaluating your
user authentication process.
280.69 -> We'll talk about that a little
later on in the presentation.
285.11 -> Then as far as building out clusters,
287.16 -> you may have some custom tooling today
289.47 -> to manage your clusters.
291.1 -> Maybe you're using the
open-source kOps project.
294.25 -> When moving to EKS, you're
going to have to evaluate
296.71 -> moving to a different tooling choice,
298.02 -> whether that's CloudFormation
299.77 -> or open source tools
like Terraform or Pulumi,
303.69 -> you're going to need to
adopt different tooling
305.38 -> to deploy your clusters.
307.18 -> You're going to want to
perform load testing.
308.99 -> We'll go into more details on how EKS
311.16 -> can scale to support your applications.
314 -> Then you'll have to figure out
314.93 -> how to move your
applications from one cluster
318.72 -> on your self managed
Kubernetes over to EKS.
322.51 -> One option is the more
traditional backup and restore,
326.66 -> taking a backup of
everything you're running
328.26 -> in your self managed cluster,
and moving it to EKS.
331.08 -> But more simply, we see
customers be successful
334.1 -> by just redirecting
their CI/CD pipelines
337.82 -> over to the new clusters on EKS.
340.397 -> And the beauty of Kubernetes is
342.057 -> if it's running on self
managed Kubernetes,
345.03 -> you really shouldn't have
to make too many changes
347.2 -> to get it working on EKS.
349.28 -> And then the final step is going to be
351.42 -> shifting over application traffic.
353.64 -> So you can see in the
diagram on the right,
356.54 -> this is a typical pattern
we see customers making
360.1 -> where they'll use Route 53 Weighted DNS
362.25 -> to slowly shift traffic
367.8 -> from clusters running on
self-managed Kubernetes to EKS.
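As a rough sketch of that weighted shift with the AWS CLI (the hosted zone ID, record name, and load balancer DNS name below are placeholders):

```sh
# Send 10% of traffic to the new EKS cluster's load balancer; re-run with
# higher weights to complete the cutover (all identifiers are placeholders).
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "eks-cluster",
        "Weight": 10,
        "TTL": 60,
        "ResourceRecords": [{"Value": "my-eks-nlb.elb.amazonaws.com"}]
      }
    }]
  }'
```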
375.01 -> And then before we get into
more of the details here,
378.21 -> I want to quickly touch
on the tenets of EKS.
382.01 -> What are the guiding
principles that we keep in mind
384.59 -> when we make decisions?
387.61 -> So really our mission is to
bring you Kubernetes you want
391.89 -> while delivering better
efficiency with less overhead.
396.32 -> And that's with applying
398 -> our security, operations,
scaling expertise.
401.62 -> From the beginning, EKS was built
403.86 -> as a production ready service,
406.39 -> and we support production workloads
407.98 -> from a variety of industries and startups
410.71 -> all the way to large enterprises.
412.5 -> We also, as I mentioned,
413.52 -> work closely with other AWS
services to make integrations
416.56 -> with Kubernetes as smooth as possible.
419.55 -> And then of course we engage
with open-source projects,
423.17 -> and really open source is
one of our favorite ways
425.67 -> to collaborate with you.
429.51 -> So moving from these
tenets into the five areas
433.73 -> we're going to talk about today.
436.34 -> So we're going to talk about security,
438.04 -> reliability, efficiency,
cluster operations,
441.77 -> and portability.
443.09 -> And as a product manager,
445.08 -> one of my jobs really
is to answer your questions.
448.5 -> I get a lot of questions,
whether it's from
451.86 -> customers like yourselves on
GitHub or in the AWS Forums
455.53 -> or from the AWS solutions
architects and account managers.
459.84 -> There's lots of questions
on how to do this
461.999 -> or how to do that with EKS and Kubernetes.
464.667 -> And so for the rest of this presentation,
467.28 -> we're just going to cover
a lot of the questions
469.56 -> that I commonly get.
470.82 -> And we're going to go over
473.09 -> a lot of the areas in EKS,
475.08 -> especially in these five categories.
480.65 -> Okay, starting with
security, it's no accident
482.97 -> that security is the first
section of this presentation.
487.47 -> It's really the number one feature of EKS.
490.69 -> It's the thing that's
going to get prioritized
493.12 -> over everything else.
494.18 -> And it's really the area
where we spend the most time.
500.69 -> So how do I secure my cluster?
503.42 -> This is a rather broad
question that I get,
507.37 -> and I think it's important to talk about
509.81 -> the shared responsibility model
511.577 -> before we can really answer this question.
514.58 -> So if you look at the diagram here,
515.99 -> you can see the orange and the blue.
518.29 -> And by moving to EKS, everything in orange
521.257 -> is the responsibility of the
Elastic Kubernetes Service.
527.19 -> So the control plane runs
in EKS owned accounts.
530.36 -> We are responsible for the
security of the API server
533.927 -> and the scheduler, the controller manager,
536.135 -> as well as the etcd instances
that are backing your cluster.
541.14 -> And then if you move to
Fargate, which again,
543.22 -> we'll talk a little bit
about in this presentation,
546.41 -> even more of that
responsibility shifts to EKS.
549.05 -> So some of the worker node sides
550.56 -> such as the operating system kubelet
553.03 -> also moves into the EKS side
of the shared responsibility.
557.61 -> On your side is everything in blue.
560.52 -> So things like
worker nodes, policies,
564.71 -> how you're configuring your cluster
566.79 -> and deploying your applications
568.68 -> are the areas that you need
to concern yourself with
572 -> when thinking about security.
573.61 -> In a lot of the rest of this presentation
575.36 -> I'll be referring back to the
shared responsibility model.
582.97 -> How do I secure sensitive cluster data?
585.27 -> One quick area to address is
for the control plane side,
591.34 -> EKS is part of the Kubernetes
product security committee
594.73 -> that's responsible for triaging issues.
597.84 -> And so most of the time
599.46 -> when there's a security issue upstream,
601.47 -> we're going to be part
of that inner circle
603.59 -> that's responsible for addressing it,
605.67 -> rolling out the fix,
606.86 -> and your cluster is going
to be patched
610.497 -> before the issue
is publicly released.
612.93 -> So that's really one of the
main benefits you get of EKS.
617.13 -> But then it comes to the data on your side
619.07 -> and data actually running in the cluster.
621.46 -> And so there's a couple of
features I want to highlight here
623.9 -> that we've seen customers use
625.53 -> to protect sensitive cluster data,
628.19 -> such as Kubernetes Secrets.
631.13 -> The first option, which we
strongly recommend you do
633.83 -> is to encrypt your secrets
635.73 -> using an AWS Key Management Service (KMS) key.
638.36 -> This was a feature we launched
at least a year ago now,
641.99 -> and it gives you the option
to add an extra layer
645.08 -> of encryption onto Kubernetes Secrets.
647.16 -> EKS already encrypts the etcd volumes
651.68 -> backing your cluster at rest.
654.2 -> But this gives you an
option to add an extra layer
656.46 -> of encryption and defense in depth.
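As a sketch, enabling that extra KMS layer on an existing cluster looks like this (the cluster name and key ARN are placeholders):

```sh
# Associate a KMS key with the cluster so Kubernetes Secrets get an extra
# layer of envelope encryption on top of EKS's at-rest etcd encryption.
aws eks associate-encryption-config \
  --cluster-name my-cluster \
  --encryption-config '[{
    "resources": ["secrets"],
    "provider": {"keyArn": "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE"}
  }]'
```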
660.833 -> Another feature, and this
was another recent launch
663.04 -> that we worked closely with
the AWS Secrets Manager team
665.57 -> is now you have the ability
to store secrets outside
668.69 -> of the cluster in our managed
Secrets Manager service.
672.61 -> And then using the Secrets
Manager CSI driver,
674.85 -> you can retrieve your secrets.
677.04 -> And one of the benefits here
is that with Secrets Manager,
680.66 -> you can have IAM policies
that really restrict down
684 -> which applications running in your cluster
686.03 -> can access which secrets.
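A minimal sketch of that flow, assuming the Secrets Store CSI driver and its AWS provider are installed (the secret, namespace, and service account names are hypothetical):

```yaml
# Declare which Secrets Manager secret should be exposed to pods.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: db-credentials
  namespace: orders
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "prod/db-credentials"
        objectType: "secretsmanager"
---
# Mount it in a pod; the pod's IAM role (via IAM roles for service accounts)
# must allow secretsmanager:GetSecretValue on that specific secret.
apiVersion: v1
kind: Pod
metadata:
  name: orders-app
  namespace: orders
spec:
  serviceAccountName: orders-app
  containers:
    - name: app
      image: example/orders:latest
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: db-credentials
```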
688.12 -> And then finally, this
is a very recent launch.
690.87 -> Another one where we worked
692.54 -> with the AWS Certificate Manager team
694.475 -> is through their private
certificate authority,
698.43 -> you can now use the cert-manager plugin
701.21 -> for AWS Certificate Manager Private CA
704.97 -> to generate TLS certificates,
707.59 -> to provide secure authentication
and encryption over TLS.
713.55 -> So I think
714.383 -> you can see a theme here that
we spend quite a bit of time
717.08 -> working with AWS service teams,
718.72 -> especially security-focused teams,
722.27 -> on how we can use some
of their managed services
725.26 -> to provide a more secure
experience in EKS.
732.87 -> How do I limit access
to the cluster endpoint?
734.77 -> This is a pretty common question
736.56 -> when you're getting started with EKS.
739.58 -> By default when you create a cluster,
741.74 -> we give you an endpoint
that's publicly accessible,
745.28 -> and this is good for getting
started and trying things out,
748.98 -> but it's generally not a best practice
750.62 -> for running your cluster in production.
752.93 -> And so we also give you the option
755 -> to enable a private
endpoint for your cluster.
759.867 -> And this endpoint is only accessible
761.67 -> within the VPC of your cluster.
765.892 -> If possible, we recommend you
disable the public endpoint
769.74 -> and only use the private endpoint.
771.88 -> And if you can see in the diagram,
773.17 -> when you enable this private endpoint
775.431 -> all of the traffic is
going to go through the ENI
778.7 -> that enables the communication
781.17 -> between the subnets in your VPC,
783.69 -> and then the subnets in the EKS owned VPC
786.36 -> where the control plane's running.
788.15 -> If you can't disable the public endpoint,
791.75 -> then we have a feature
where you can restrict
793.922 -> using IP address CIDR ranges,
796.57 -> which addresses can access
the public endpoints.
799.73 -> So for example,
800.563 -> if you have corporate networks
from different offices,
803.12 -> and those are the places where your users
804.63 -> need to access the cluster,
806.26 -> then you can just restrict
the public endpoint
808.18 -> to those ranges.
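For illustration, both endpoint settings are a single CLI call (the cluster name and CIDR range are placeholders):

```sh
# Enable the private endpoint and restrict the public endpoint
# to a corporate CIDR range.
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config \
    endpointPrivateAccess=true,endpointPublicAccess=true,publicAccessCidrs="203.0.113.0/24"
```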
810.02 -> The last thing I'll call out is
812.33 -> for some of our most
security conscious customers,
815.22 -> if you don't have a need for any
817.36 -> inbound or outbound
internet communication,
819.31 -> you can run clusters in
private only subnets.
823.02 -> And we have documentation on this.
824.64 -> You'll have to set up all
of the AWS private link
827.913 -> for several AWS services like S3, ECR.
831.97 -> But once you set all those up,
833.47 -> you can run your cluster
in a totally private VPC
837.21 -> without any internet access.
842.21 -> Okay, onto worker nodes.
844.35 -> How do I run secure worker nodes?
845.98 -> There's a lot we could cover here.
848.72 -> Probably the easiest way
to secure your worker nodes
851.76 -> is to use an operating system
that is built for containers.
856.2 -> And the one I'm going to talk about here
857.64 -> is AWS Bottlerocket,
859.43 -> which is the AWS supported
host operating system
863.45 -> that's optimized to run containers.
865.83 -> And it's an open source project
868.73 -> and we publish EKS optimized
variants of Bottlerocket
872.22 -> for each version of Kubernetes.
875.21 -> I'd also like to shout
out that we just launched
877.44 -> native managed node group
support for Bottlerocket.
880.54 -> So it's easier than ever
882.21 -> to launch worker nodes in your cluster
883.98 -> using a container
optimized operating system.
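A sketch of what that looks like in an eksctl config file (names and sizes are placeholders):

```yaml
# eksctl ClusterConfig fragment: a managed node group running the
# EKS-optimized Bottlerocket AMI.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-west-2
managedNodeGroups:
  - name: bottlerocket-workers
    amiFamily: Bottlerocket
    instanceType: m5.large
    desiredCapacity: 3
```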
888.03 -> The next tip would be to treat
worker nodes as immutable.
890.43 -> Just like
containers are immutable
893.411 -> in that once you publish your container,
895.57 -> you're not going to change it,
896.97 -> you should treat worker
nodes the same way.
898.62 -> So if you need to update the AMI version
901.91 -> of your worker nodes, rather
than changing it in place,
904.76 -> you spin up a new EC2 instance,
907.574 -> bring down the old one,
908.68 -> and the new one has the
updated AMI version.
912.31 -> And again, if you're
using managed node groups,
914.32 -> all of this is automated.
915.45 -> It's just a single API call.
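That single call looks like this (names are placeholders); the node group cordons, drains, and replaces instances for you:

```sh
# Roll the node group onto the latest EKS-optimized AMI for the cluster's
# Kubernetes version; instances are replaced rather than mutated in place.
aws eks update-nodegroup-version \
  --cluster-name my-cluster \
  --nodegroup-name workers
```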
918.72 -> Next it's strongly recommended
to use AWS Systems Manager
922.74 -> instead of SSH.
923.93 -> Both the EKS optimized Amazon Linux AMIs
927.697 -> and the Bottlerocket AMIs
include SSM by default.
932.01 -> There's no need to open
port 22 or enable SSH keys
937.06 -> on the instance.
938.15 -> If you need to access the
instance for debugging reasons,
940.91 -> you can go through SSM.
943.54 -> EKS supports custom AMIs.
946 -> While we do publish our own AMIs
947.47 -> we also open source the build scripts.
949.45 -> And many of our customers
951.043 -> have requirements or compliance reasons
953.59 -> that they need to build
their own custom AMIs.
955.34 -> And if you do that,
956.54 -> then we recommend using
the EKS CIS benchmark,
960.72 -> which we published a
little over a year ago
962.75 -> to validate that the
custom AMIs you've built
965.3 -> are following best practices.
968.11 -> Finally, if you don't want
to deal with any of this,
970.64 -> if this sounds like a lot of work to you
972.71 -> and you don't have specific
security or compliance reasons,
976.16 -> you can pass off this
responsibility to Fargate.
978.78 -> EKS supports Fargate,
980.44 -> which is the serverless
container engine for AWS.
984.76 -> And if you saw in the shared
responsibility slide earlier,
987.64 -> that becomes our responsibility
989.74 -> to maintain secure worker nodes.
996.22 -> Okay, from worker nodes
down to the pod level,
999.41 -> when migrating from a world
1002.27 -> where you're deploying
applications to VMs,
1005.403 -> this is one area that we see customers
1007.38 -> needing some help on,
1009.01 -> is how do you implement
security at the pod level?
1012.5 -> By default, you can give
your worker node an IAM role
1016.69 -> like you might do in a previous world
1020.21 -> where you're deploying
applications to virtual machines.
1022.84 -> The problem is if you have multiple pods
1024.89 -> that land on the same node,
1026.27 -> they're all going to
assume the IAM permissions
1028.87 -> of the worker node role,
1030.28 -> which violates the
principle of least privilege.
1033.81 -> And so the recommendation
here is using IAM roles
1036.98 -> for service accounts,
1038.55 -> where instead of giving pods permissions
1041.5 -> to the worker node role,
you instead create a role,
1044.97 -> assign it specifically to a
Kubernetes service account,
1048.3 -> in a specific namespace and cluster.
1050.91 -> And then when you launch the pod,
1052.14 -> you just have to add a simple annotation
1053.66 -> to the service account
1054.99 -> and the pod will pick up
1056.916 -> the IAM role you created for it
1059.81 -> instead of the worker node role.
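The annotation itself is small (the role ARN, namespace, and names below are hypothetical):

```yaml
# Pods using this service account assume the annotated IAM role
# instead of the worker node's role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orders-app
  namespace: orders
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/orders-app-role
```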
1062.37 -> Now, when you do that,
1064.24 -> the pod will still be able
to assume the privileges
1066.64 -> of the worker node role,
where in this case,
1068.8 -> you only have very limited policies
1071.09 -> associated with the node role,
1072.44 -> such as ECR read-only access.
1075.22 -> But as an even more secure principle,
1078.52 -> you can totally block access
1080.7 -> to the instance metadata service.
1082.71 -> And then the pod can't
assume any permissions
1084.92 -> of the worker node IAM role.
1088.58 -> That's at the authorization layer.
1090.78 -> You can take it a step further
1092.31 -> and do security at the networking layer.
1094.8 -> So you can use Kubernetes network policies
1096.75 -> to restrict traffic within the cluster.
1099.71 -> And then you can also
use the feature of EKS
1101.7 -> where you can associate pods
with specific security groups.
1105.22 -> And in this case,
1106.27 -> one of the common reasons
we see customers doing that,
1109.24 -> if they have pods that
need to access AWS services
1111.91 -> such as RDS or ElastiCache,
1114.6 -> you'll assign the pod to a security group
1116.63 -> that can access that service.
1118.68 -> But then the other pods on the node
1120.09 -> won't be able to access that.
1122.03 -> So there's a defense in depth layer here
1124.57 -> where as a minimum,
1126.67 -> you should do security to pod level
1128.75 -> at the authorization layer and IAM layer.
1131.04 -> And then as a defense in depth step,
1132.41 -> you can do it at the
networking layer as well.
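A sketch of the security-group association, assuming the VPC CNI's security groups for pods feature is enabled (the label and group ID are placeholders):

```yaml
# Attach a dedicated security group to matching pods so only they can
# reach, say, an RDS database that trusts this group.
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: rds-access
  namespace: orders
spec:
  podSelector:
    matchLabels:
      app: orders
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0
```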
1139 -> Okay, what are some additional
security best practices?
1143.79 -> It's important to point
out here that Kubernetes
1147.14 -> is a single tenant orchestrator.
1149.7 -> It was designed for use cases
1153.39 -> where tenants within
the cluster are trusted.
1156.9 -> If you have untrusted
tenants, for example,
1159.91 -> you're running a SaaS platform
1161.97 -> and you have various customers
1163.66 -> that need to be deployed
into your Kubernetes,
1168.13 -> the recommendation is
to use separate clusters
1171.11 -> because namespaces are not
a hard security boundary.
1174.67 -> However, if you have trusted tenants,
1176.75 -> for example, team A, team
B within your organization,
1180.14 -> then these are some practices to follow.
1182.31 -> So using Kubernetes RBAC and
namespaces to isolate tenants,
1188.92 -> using Kubernetes quotas and ranges,
1191.27 -> which will help control the consumption
1194.22 -> of compute resources.
1195.14 -> So say you want to prevent
team A or application A
1199.03 -> from consuming an
entire EC2 instance's vCPUs.
1203.31 -> You can implement limits and
ranges as an administrator.
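A minimal sketch of those controls for a team namespace (all names and values are placeholders):

```yaml
# Cap team A's total compute consumption in its namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "16"
    requests.memory: 32Gi
    limits.cpu: "32"
    limits.memory: 64Gi
---
# Default per-container requests and limits so a single pod can't
# consume an entire instance's vCPUs.
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-limits
  namespace: team-a
spec:
  limits:
    - type: Container
      default:
        cpu: "1"
        memory: 1Gi
      defaultRequest:
        cpu: 250m
        memory: 256Mi
```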
1207.01 -> A feature that was launched
in Kubernetes version 1.20
1210.58 -> is API priority and fairness.
1213.08 -> So if you have pods running in the cluster
1214.95 -> that need to talk to the API server,
1217.5 -> for example, controllers you've written
1219.476 -> or, you know, add ons you're running
1222.69 -> that need to talk to the API server,
1224.59 -> you can use priority and fairness to limit
1228.05 -> the amount of requests that
can be made to the API server.
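A rough sketch of a FlowSchema doing that; the controller name and namespace are hypothetical, and it maps requests to the built-in workload-low priority level:

```yaml
# Requests from this service account draw from the "workload-low"
# concurrency budget instead of competing with higher-priority traffic.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: team-a-controller
spec:
  priorityLevelConfiguration:
    name: workload-low
  matchingPrecedence: 1000
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: custom-controller
            namespace: team-a
      resourceRules:
        - verbs: ["*"]
          apiGroups: ["*"]
          resources: ["*"]
          namespaces: ["*"]
```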
1231.516 -> Another good practice
is enforcing policies
1235.54 -> to limit what dev teams
can run in the cluster.
1240.14 -> As a good example, say
you want to prevent teams
1243.27 -> from running host networking pods.
1245.57 -> You can use tools like Gatekeeper,
1248.11 -> with policies
built on Open Policy Agent,
1251.1 -> to limit what teams can run.
1254.61 -> And then, again, down
at the networking layer
1256.3 -> you can use Kubernetes network policies.
1258.14 -> A good practice we see is default policies
1260.62 -> to deny cross namespace traffic
1263.4 -> and only enable it where you need it.
1265.21 -> For example,
1266.043 -> if application A has a
dependency on application B,
1270.03 -> then after the deny policy
1271.91 -> you can add an additional policy
1273.27 -> to allow traffic between them.
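Sketched below, assuming a network policy engine such as Calico is installed to enforce these (the namespace names are hypothetical, and the kubernetes.io/metadata.name label requires Kubernetes 1.21 or later):

```yaml
# Deny all ingress into application B's namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app-b
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Then explicitly allow traffic from application A's namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-app-a
  namespace: app-b
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: app-a
```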
1275.57 -> And then again, I'll mention Fargate.
1277.34 -> As a defense in depth step
1279.03 -> you can use VM level isolation
1281.33 -> where each Fargate pod runs
in its own virtual machine.
1284.66 -> And this helps.
1285.493 -> But again, it's important to point out,
1287.479 -> if you have untrusted tenants,
1289.45 -> the strong recommendation
is to use separate clusters.
1296.01 -> We know many of you operate
in regulated industries.
1298.88 -> And one of the main
benefits of adopting EKS
1302.3 -> is that many of these compliance standards
1304.95 -> that you need to meet
are already done for you.
1307.38 -> So one to call out here is FedRAMP.
1310.49 -> We just achieved FedRAMP
High earlier this year.
1313.64 -> And this just makes it much
easier to run your applications
1318.23 -> in environments with these
strict security requirements.
1321.49 -> And we're always working
on additional ones.
1323.76 -> This is not an exhaustive list.
1325.41 -> There are ones from Europe
that aren't shown here.
1328.36 -> All of this is in the EKS documentation.
1335.418 -> Okay, after security the
second most important feature
1341.38 -> of EKS is reliability.
1346.86 -> And we'll cover a couple areas here
1349.52 -> of how EKS ensures a
highly available service,
1353.47 -> as well as practices you can
take to ensure reliability
1356.83 -> and high availability
of your applications.
1361.21 -> How does EKS ensure high availability?
1363.65 -> This is a question I get a lot.
1367.05 -> And if you look at the
diagram on the right,
1369.34 -> this is all of the work we do to ensure
1372.72 -> that the Kubernetes endpoint you get
1374.49 -> when you create a cluster is
going to be able to respond
1378.401 -> across all kinds of situations.
1382.52 -> So we run the control plane
across three availability zones.
1387.28 -> The etcd instances are spread out
1389.89 -> across those three availability zones.
1392.37 -> And there's exactly three etcd instances
1394.63 -> to meet the quorum required for etcd.
1397.3 -> And then we run at least two API servers
1401.01 -> across those availability zones.
1403.53 -> All of it is fronted by
a network load balancer.
1408.01 -> And this architecture
is designed to eliminate
1410.65 -> any single point of failure
1411.88 -> that may compromise the
availability and durability
1415.21 -> of the Kubernetes control plane.
1419.118 -> One of the main areas that we test a lot
1422.36 -> is being able to survive single AZ events.
1425.53 -> If there's an issue in an AZ of a region,
1428.03 -> your cluster will still be available.
1432.38 -> We do rolling control plane upgrades.
1434.77 -> So if you do an upgrade,
say from 1.19 to 1.20,
1439.19 -> it's done in a rolling fashion
1441.04 -> where new instances are brought up,
1443.08 -> old instances are brought down.
1445.53 -> And as long as your clients,
which most of them are,
1447.77 -> are configured to reconnect
1449.52 -> in case there's an IP address change,
1451.55 -> again, the end point is
going to always be available
1453.76 -> even if it's undergoing an upgrade.
1456.1 -> We take automated snapshots in etcd.
1459.3 -> In case something does go wrong
1460.98 -> we have the ability to restore it
1462.53 -> and have automated processes to do so.
1465.07 -> All of this architecture and availability
1469.918 -> leads to the 99.95% SLA.
1473.47 -> This is a guarantee, this
is not a best effort.
1476.36 -> It's something we take very seriously.
1478.85 -> And then as the last line of defense,
1482.45 -> rarely are you going
to encounter an issue,
1484.57 -> but if you do, we have 24/7, 365 support
1487.51 -> from the EKS engineering team,
1492 -> which works on hundreds
of thousands of clusters
1494.927 -> and has seen every issue possible.
1497.06 -> And if something happens to
go wrong in your cluster,
1499.81 -> we're always here to help.
1505.14 -> Can you guys handle the
scale of my applications?
1508.21 -> Yes, but it's a shared responsibility.
1512.07 -> On our side we auto scale
on the control plane.
1516.31 -> We look at a bunch of different signals
1519.43 -> to decide whether to
scale your control plane.
1522.31 -> And we're much more aggressive scaling up
1524 -> than we are scaling down.
1526.01 -> And so we're going to auto
scale to larger EC2 instances
1528.67 -> if we see you have any
increased load on your cluster.
1532.2 -> We also will autotune
1533.87 -> various Kubernetes
control plane parameters,
1536.58 -> such as QPS, max requests in flight,
1539.85 -> as those instances go up and down.
1541.66 -> So we want to make sure that
1543.37 -> when you're running on a larger instance,
1545 -> the software is configured
to take advantage
1547.43 -> of all of the available computing power
1549.94 -> on the larger hardware.
1553.58 -> And then, running EKS
we get to take advantage
1557.57 -> of all the latest AWS
infrastructure enhancements,
1560.43 -> whether it's newer EBS volumes,
1564.59 -> the latest generation EC2 instances,
1566.91 -> or network
load balancer enhancements.
1571.43 -> We're always working to validate
1574.5 -> the new infrastructure
enhancements that come out of AWS.
1578.09 -> So even in an existing cluster,
1580.27 -> you're going to see better performance
1582.09 -> without having to do anything.
1583.41 -> That's just something we're
always working on enhancing
1586.04 -> behind the scenes.
1587.3 -> And we have a team right
now dedicated to testing
1590.92 -> what we're calling mega clusters,
1592.33 -> really, really large clusters
with up to 15,000 worker nodes.
1595.587 -> And this isn't forking Kubernetes,
1597.38 -> we're not changing anything.
1598.64 -> It's just taking advantage
of the AWS scale and hardware
1602.827 -> and Kubernetes settings and
pushing Kubernetes to its limit.
1607.77 -> Now your side.
1610.16 -> There's a limit on what
Kubernetes can handle.
1613.33 -> If you have some runaway
controller in your cluster
1616.06 -> that's creating 10,000 secrets
1619.32 -> or a ton of pods without cleaning them up,
1623.52 -> at a certain point, we can't scale.
1625.97 -> There's no instance size
1628.96 -> that's going to be able to handle that.
1630.27 -> So there's definitely things
to monitor on your side.
1634.21 -> We expose metrics of the control plane
1637.1 -> where you can look at things
1638.24 -> like API server request latency.
1640.723 -> We also recommend enabling
1643.66 -> the control plane audit log,
1645.82 -> which again is
forwarded to CloudWatch.
1647.87 -> You can run queries to
identify top callers.
1650.35 -> One of the best
1652.08 -> troubleshooting tips we give,
1656.38 -> and what we do when any
support cases are opened,
1658.7 -> is looking at these top callers.
1660.93 -> Very quickly, you'll identify
1663.72 -> some misconfigured controller
1665.52 -> or application that is misbehaving,
1668.9 -> and it's quick to fix.
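A sketch of such a top-callers query in CloudWatch Logs Insights, run against the cluster's /aws/eks/&lt;cluster&gt;/cluster log group (field names follow the Kubernetes audit event schema and may need adjusting):

```
# Surface the identities making the most API server requests.
fields @timestamp, @message
| filter @logStream like /kube-apiserver-audit/
| stats count(*) as total by user.username
| sort total desc
| limit 10
```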
1672.42 -> And then we recommend
visualizing these metrics,
1675.3 -> whether it's through CloudWatch
1676.9 -> which supports Prometheus
metrics or Grafana,
1679.75 -> whether you're self managing Grafana
1681.52 -> or you can now use the recently launched
1683.84 -> Amazon Managed Grafana service.
1689.71 -> Okay, on to efficiency.
1693.58 -> One question I get a lot:
1694.64 -> how do I right-size compute capacity?
1698.66 -> If done right,
1701.506 -> this is one of the major
benefits of Kubernetes:
1705.8 -> being able to make the
best possible trade-off
1710.09 -> of cost versus performance.
1712.94 -> When it comes to right-sizing compute,
1715.49 -> this is the business decision
1716.77 -> you're always going to have to make:
1718.542 -> how close to the edge
1721 -> of the required compute capacity
1723.52 -> do I run, versus keeping a buffer
1726.14 -> for some scaling event?
1729.18 -> And some recommendations here
for right-sizing capacity.
1733.21 -> One is you're going to want
to set requests and limits
1739.07 -> as close as possible to actual utilization
1741.68 -> on your Kubernetes pods.
1743.49 -> And two of the ways you can do this
1746.04 -> is using the vertical pod autoscaler,
1747.94 -> which will help you scale
1750.21 -> your pod size up and down vertically.
1753.28 -> And then the horizontal pod autoscaler,
1755.6 -> which can help you scale
out the number of replicas.
1758.2 -> Once you've right-sized
the pod vertically,
1760.51 -> then you're going to have
to scale out horizontally.
1763.34 -> And so using HPA and VPA
is strongly recommended.
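A minimal HPA sketch (the deployment name and targets are placeholders; clusters older than 1.23 use the autoscaling/v2beta2 API):

```yaml
# Scale between 3 and 30 replicas to hold average CPU near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 30
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```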
1768.56 -> Then there's the actual
compute capacity in your cluster.
1771.99 -> You need worker nodes or Fargate nodes
1774.64 -> to actually run the containers.
1777.98 -> And there's a couple of different
available solutions here.
1781.32 -> The most commonly used
one is cluster autoscaler.
1783.99 -> It's the defacto autoscaling
solution in Kubernetes.
1788.24 -> It'll take a look at any pending pods,
1790.55 -> ones that can't be scheduled
1791.85 -> because there isn't enough capacity,
1793.85 -> and it'll spin up additional instances
1796.85 -> in auto-scaling groups.
1798.74 -> We also have a new project
that we've been working on
1801.41 -> over the last year called Karpenter,
1802.86 -> which is our take on
re-imagined node auto-scaling
1808.69 -> in Kubernetes.
1809.55 -> So it's a different paradigm
compared to cluster autoscaler.
1816.6 -> And then the other
option you have is AWS Fargate,
1823.14 -> where you can totally take away the need
1825.55 -> to scale capacity at all.
1827.35 -> Because with AWS Fargate,
it's our responsibility.
1830.78 -> We're going to scale
capacity behind the scenes.
1833.29 -> All you have to do is
schedule your pod to Fargate,
1835.38 -> and we will find the capacity for you.
1841.81 -> Okay, next up is reducing costs.
1844.78 -> Right-sizing compute
that we just talked about
1847.17 -> is one good way to reduce costs.
1849.09 -> And if you look at the graph on the right,
1852.03 -> this was from a case study we
published earlier this year
1856.08 -> of a customer's cost optimization journey.
1860.6 -> And auto-scaling and right-sizing
were the first steps taken
1864.08 -> and provided a lot of bang for the buck.
1866.55 -> But then different purchase
options that AWS has
1870.43 -> will also help you save costs.
1871.98 -> So using Spot Instances is
a great way you can save
1876.17 -> up to 90% off the on-demand pricing.
1879.12 -> Using managed node groups
is strongly recommended
1882.06 -> with Spot because the
Spot interruption notices
1885.03 -> are automatically handled
by managed node groups.
1888.92 -> I mentioned cluster autoscaler on the last slide;
1890.61 -> there are different settings
in cluster autoscaler.
1893.04 -> You can use the priority
expander mode
1896.73 -> and prioritize lower-cost Spot node groups
1900.182 -> over on-demand node groups.
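A sketch of the priority expander's configuration; it assumes the autoscaler runs with --expander=priority, and the patterns are placeholder regexes matched against your node group names:

```yaml
# Higher numbers win: prefer Spot node groups, fall back to on-demand.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |
    50:
      - .*spot.*
    10:
      - .*on-demand.*
```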
1902.49 -> Savings Plans are another good way
1904.57 -> we've seen customers save costs.
1906 -> This works for both EC2 and Fargate.
1909.27 -> Moving to the latest
generation EC2 instances helps too:
1911.79 -> the sixth-generation Intel
instances were just launched,
1916.16 -> and with Graviton instances
1918.68 -> you can save even more, up to 40%
1921.04 -> compared to the traditional instances.
1925.26 -> And then lastly, again,
another mention of Karpenter.
1929.2 -> One of the areas where
Karpenter works really well
1932.96 -> is if you're running really large clusters
1934.74 -> and you want to scale really fast
1936.84 -> without overprovisioning
a lot of capacity.
1940.07 -> This is an area where
we've seen some customers
1943.11 -> get early success with Karpenter.
1948.41 -> Okay, scaling applications in clusters
1950.56 -> with limited VPC IPv4 space.
1954.82 -> The way the default networking
plugin in EKS,
1959.52 -> the VPC CNI, works
1961.18 -> is that it uses real IP
addresses from your VPC,
1965.51 -> as compared to other CNIs,
1967.36 -> which may build an overlay network.
1970.198 -> And this has a lot of benefits.
1972.04 -> By directly using the VPC
it's easier to troubleshoot,
1975.08 -> you get better performance.
1977.14 -> But one of the issues you might run into
1978.84 -> is you're running in a VPC
1980.923 -> that has limited IP address space.
1984.23 -> And so what we've always said
1986.06 -> is the best solution to
IPv4 exhaustion is IPv6.
1990.8 -> So this is now supported in EKS.
1993.15 -> Instead of giving a pod a V4 address,
1995.37 -> you can give a pod a
globally routed IPv6 address.
2000.09 -> Every node gets a /80
chunk of IPv6 addresses
2005.195 -> so you get pods that launch faster.
2008.07 -> We also support egress IPv4
traffic with IPv6 clusters.
2013.2 -> So it lets you migrate
without having to wait
2015.78 -> on the rest of your organization
to already support IPv6.
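An IPv6 cluster can be sketched with an eksctl config like this (names are placeholders; it assumes a recent eksctl, Kubernetes 1.21 or later, and a VPC with an IPv6 CIDR):

```yaml
# eksctl ClusterConfig fragment: pods receive globally routable IPv6
# addresses instead of consuming VPC IPv4 space.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ipv6-cluster
  region: us-west-2
kubernetesNetworkConfig:
  ipFamily: IPv6
```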
2020.1 -> If you're still in IPv4 and
you can't move to IPv6 yet,
2023.89 -> we have another solution called
VPC CNI custom networking.
2027.76 -> And here you can add
2031.85 -> additional IPv4 CIDR blocks to your VPC.
2034.27 -> And you can run pods in
subnets from those CIDR blocks
2037.3 -> while your worker nodes can stay
2038.77 -> in the primary CIDR block subnets.
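A sketch of custom networking, assuming the CNI is running with AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true (the subnet and security group IDs are placeholders):

```yaml
# One ENIConfig per availability zone; pod ENIs land in the secondary-CIDR
# subnet while worker nodes stay in the primary subnets.
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-west-2a  # conventionally named after the availability zone
spec:
  subnet: subnet-0abc1234def567890
  securityGroups:
    - sg-0123456789abcdef0
```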
2045.793 -> This is a fun topic and comes up a lot.
2048.18 -> Should I run lots of small
clusters or fewer large clusters?
2053.93 -> The answer is, it depends.
2057.38 -> At a minimum we recommend that you run
2060.012 -> separate clusters per environment.
2062.32 -> So if you have dev,
staging, and prod environments,
2065.43 -> you certainly want to run
different clusters there.
2068.46 -> The next is,
2069.793 -> you're going to have to
look at your constraints.
2071.81 -> The first one to look
at is untrusted tenants.
2074.61 -> If you remember back to
the best practices slide,
2077.97 -> if you have untrusted tenants
2079.7 -> you're going to want to
separate those out per cluster.
2082.51 -> The next thing we see customers look at
2085.38 -> is organizational boundaries.
2086.58 -> Sometimes it's just too
difficult to manage
2091.51 -> teams in different organizations
2092.8 -> and try to have them
run in the same cluster.
2094.48 -> So in that case, you're going
to do separate clusters.
2097.15 -> Once you look at those constraints,
2099.19 -> then you're going to want
to optimize within them.
2101.01 -> So the fewer clusters you can run
2103.62 -> the more efficient bin packing you'll get
2105.8 -> and more efficient
utilization of EC2 resources.
2109.4 -> So really it comes down
to split when you have to
2112.17 -> based on constraints, but
within those constraints
2115.35 -> run as few clusters as you can.
2120.78 -> Okay, moving on to cluster operations.
2125.04 -> How do I upgrade my cluster?
2126.93 -> One of the main benefits
you get from moving to EKS
2130.07 -> is that we handle that for you.
2131.53 -> It's just a single API call to say,
2134.18 -> upgrade from, say, 1.19 to 1.20.
2138.05 -> Really, the more managed
features of EKS you can use,
2141.21 -> such as the managed data plane,
2143.94 -> like node groups and Fargate,
as well as EKS add-ons,
2147.32 -> which we recently launched,
the simpler that upgrade process becomes.
2151.46 -> EKS aims to be around 100 to 150 days
2155.64 -> behind upstream Kubernetes.
2157.49 -> We do a lot of work to
validate and qualify
2160.11 -> each Kubernetes version
2161.7 -> and make sure it passes our
application security bar.
2165.5 -> So at a minimum, you're
going to have to plan
2168.26 -> for a Kubernetes version
upgrade at least yearly.
2171.78 -> And if you do it yearly,
2172.74 -> you're going to have to jump
multiple versions at a time.
2175.1 -> If you want to go one version at a time,
2177.13 -> you're going to have to plan
2177.98 -> for about two to three times a year.
2180.43 -> It's just a part of adopting Kubernetes.
2183.92 -> You're going to need to upgrade versions.
2186.35 -> And the reason our policy is that way
2189.06 -> is because of security.
2190.25 -> The newer Kubernetes versions
have security patches applied,
2193.94 -> old ones are much harder to patch.
2197.24 -> So use EKS managed
capabilities when you can.
2200.73 -> Test, test, test:
2202.18 -> run the upgrade process
in your test environments.
2205.36 -> Test your application manifests
2207.16 -> against deprecated or removed APIs.
2210.11 -> And if there's a version that's coming out
2211.86 -> that happens to remove
any Kubernetes APIs,
2213.92 -> we will call that out in very bold text
2216.58 -> in our release notes.
2217.46 -> So I strongly recommend you
look at the release notes
2220.01 -> before doing any kind of upgrade.
2224.5 -> Managing user access, we
have two options here.
2227.779 -> You can either use IAM users,
2230.02 -> and with IAM there's no need to maintain
2231.95 -> any kind of separate data store.
2233.86 -> You can use your existing
IAM roles and users
2238.43 -> for providing user access to the cluster.
2241.15 -> If you have lots of users
2242.52 -> and they all need to get
the same permissions,
2244.15 -> the recommendation is to
just create a single role,
2247.04 -> give that access to the cluster,
2248.32 -> and then have the users
assume those roles.
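A sketch of that single-role pattern in the aws-auth ConfigMap (the role ARN and group name are placeholders; merge with your existing node role entries rather than overwriting them):

```yaml
# Users who assume the mapped IAM role authenticate as members of the
# "dev-team" Kubernetes group; RBAC bindings grant the actual permissions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-dev-team
      username: dev-team:{{SessionName}}
      groups:
        - dev-team
```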
2251.37 -> The other option is OIDC, OpenID Connect.
2254.87 -> Many of you told us
2255.82 -> that you had existing
identity management systems,
2258.48 -> and it would be much easier
to just integrate that
2260.52 -> into your cluster
2261.7 -> rather than having to give
your users all IAM access.
2265.56 -> And so EKS now supports OIDC.
2268.1 -> You can use this as an alternative
2270.27 -> or in addition to IAM roles.
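Associating an external provider can be sketched with the CLI (the issuer URL, client ID, and names are placeholders):

```sh
# Let users authenticate with tokens from an existing identity provider
# instead of IAM credentials.
aws eks associate-identity-provider-config \
  --cluster-name my-cluster \
  --oidc identityProviderConfigName=my-idp,issuerUrl=https://idp.example.com,clientId=kubernetes
```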
2272.61 -> And these are your options
2273.443 -> for authenticating users to the cluster.
2275.57 -> Once they're authenticated,
2276.64 -> to actually give them permissions
to the various resources
2279.55 -> in Kubernetes, that's
where you're going to use
2281.01 -> role-based access control.
2287.38 -> Monitoring, observability.
2289.21 -> This is an important part.
2290.27 -> How do I monitor the state of my cluster
2293.89 -> is a question I get quite a bit.
2295.9 -> And again, going back to that
shared responsibility side,
2299.48 -> there's the control plane side
2301.53 -> and then there's the application side.
2303.34 -> On the control plane side
2304.41 -> you're going to want to
enable control plane logging
2307.08 -> so you get logs for components
like the API server,
2311.09 -> the authenticator, the audit log,
2313.24 -> and all those get
forwarded onto CloudWatch.
2315.78 -> And I believe I mentioned this earlier,
2317.04 -> but you can also scrape
the Prometheus metrics
2320.66 -> that we expose through the
control plane and monitor those.
2325.41 -> On the application side,
EKS add-ons now has support,
2329.52 -> in preview, for the AWS
Distro for OpenTelemetry.
2333.28 -> We have an entire team in AWS
2337.14 -> that's dedicated to working
on the OpenTelemetry Project.
2339.69 -> We think it's the future of
observability in containers.
2344.25 -> Today OpenTelemetry
supports metrics and traces.
2346.53 -> And you can send those
off to many destinations,
2348.79 -> whether it's CloudWatch,
2350.24 -> Amazon Managed Service for Prometheus
2352.24 -> or any partner destinations.
2354.3 -> For logging we have
AWS for Fluent Bit.
2359.57 -> And if you're using Fargate
that already comes built in,
2361.98 -> you don't have to install anything.
2364.18 -> And as a separate option,
2365.2 -> if you just want an out-of-the-box,
all-batteries-included option,
2368.59 -> we have CloudWatch Container Insights,
2370.25 -> which you can install,
2371.13 -> which handles logging, metrics, traces,
2373.24 -> and is an all-in-one package solution
2376.7 -> that a lot of our customers are using.
2382.45 -> How do I route traffic to my cluster?
2384.95 -> The recommendation here
2387.04 -> is to use the AWS Load
Balancer Controller.
2390.34 -> Previously,
2392.5 -> if you wanted to expose a
service in your cluster,
2394.62 -> you could use the in-tree controller.
2396.85 -> But that is deprecated.
2398.57 -> With the out of tree controller
we can move much faster
2402.62 -> and release new versions
without having to be tied
2405.55 -> to the Kubernetes version release cycle.
2407.93 -> So with the load balancer controller,
2409.36 -> you can provision
application load balancers
2413.501 -> in response to Kubernetes ingress objects,
2416.381 -> or you can provision
network load balancers
2420.32 -> in response to Kubernetes service objects.
2422.96 -> One of the more powerful features
2424.67 -> of the AWS Load Balancer Controller
2427.42 -> combined with the VPC CNI
is direct to pod routing.
2431.15 -> So you can skip the extra hop
2432.87 -> where you have to hit the worker node
2434.51 -> and have kube-proxy forward
it to the right pod.
2437.18 -> Instead, you can just go
directly from the load balancer
2439.58 -> right to the pod in your cluster.
2441.75 -> And then another feature we
launched about a year or so ago
2445.15 -> is the ability to save money
by grouping multiple ingresses
2448.68 -> under a same ALB.
2450.12 -> So even if you have applications
2451.95 -> running in different namespaces,
2453.8 -> you can have them all fronted
2455.27 -> by a single application load balancer.
2457.66 -> And if you can see in
the diagram on the right,
2459.33 -> you have multiple
different rules on the ALB
2461.78 -> that are routing traffic to
the different applications
2464.53 -> in your cluster.
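A sketch of one such ingress; a second ingress in another namespace carrying the same group.name annotation joins the same ALB (hosts, names, and the alb ingress class are assumptions based on recent controller versions):

```yaml
# group.name shares one ALB across ingresses; target-type ip routes
# traffic straight to pod IPs, skipping the kube-proxy hop.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders
  namespace: orders
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - host: orders.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80
```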
2470.53 -> In an earlier slide, we
talked about fewer clusters
2473.83 -> versus lots of clusters.
2475.58 -> Even at a minimum, you're
going to have several clusters
2478.576 -> for multiple environments.
2480.48 -> And some of our customers
today are running
2483.45 -> tens, hundreds, in some cases,
maybe thousands of clusters.
2488.24 -> And so as an administrator,
2490.04 -> how do you ensure that a fleet of clusters
2492.67 -> has a consistent set of configuration?
2495.53 -> And the strong recommendation
here that we give to you
2499.63 -> is to follow a GitOps operational model.
2502.65 -> By using GitOps, you have
that configuration state
2506.44 -> stored in a central git repository.
2510.33 -> And as an administrator, you
can commit new configurations,
2514.16 -> whether that's defining manifests
2516.33 -> for operational, security, compliance,
2518.74 -> whatever tooling needs
to be on your cluster
2521.71 -> before you're deploying
business logic applications.
2525.03 -> And then the Flux
operator, which is a GitOps tool
2530.85 -> natively integrated into eksctl as well,
2533.958 -> will run in your cluster,
2535.94 -> notice that a change
got picked up in Git,
2538.51 -> and automatically sync the changes
2540.07 -> from Git to your cluster.
2542.016 -> And you can have this
running in multiple clusters.
2543.94 -> And this is the way we recommend
2548.76 -> syncing changes
from a central place
2553.21 -> out to the multiple clusters
that may be in your fleet.
2560.03 -> How to provision and access AWS services
2562.425 -> from Kubernetes workloads?
2564.33 -> One of the projects we're
really excited about
2566.33 -> that we've been working
on for the last year now
2568.16 -> is the AWS Controllers for
Kubernetes or ACK for short.
2573.58 -> And what we heard from
customers, especially developers,
2577.8 -> who are deploying
applications in Kubernetes
2580.105 -> and looking for AWS services
2582.74 -> to support those applications,
2584.34 -> whether it's an S3 bucket
as shown in the example here,
2587.705 -> a database like RDS or ElastiCache,
2591.25 -> or a messaging service like SNS, is this:
2596.11 -> Customers want a consistent way to define
2599.28 -> both their applications
2600.55 -> and the resources that are
supporting those applications.
2603.07 -> And so that's what ACK is.
2604.33 -> You can define AWS resources
2608.6 -> in a Kubernetes manifest,
2610.14 -> deploy a controller to the
cluster in this admin namespace,
2614.063 -> and then in the application namespace
2615.773 -> the controller will see that you have
2618.4 -> one of these resources defined
2619.67 -> and it's responsible for going
and creating and maintaining
2622.6 -> the state of that resource.
2623.99 -> And so we have lots of services
2625.56 -> that have controllers already
2626.83 -> and many more will be coming next year.
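The S3 case from the slide might be sketched like this, assuming the ACK S3 controller is installed (the bucket and namespace names are placeholders):

```yaml
# The ACK controller reconciles this manifest into an actual S3 bucket
# and keeps the bucket's state matching the spec.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: orders-assets
  namespace: orders
spec:
  name: orders-assets-111122223333
```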
2631 -> Running stateful workloads in
EKS takes some special care
2635.5 -> compared to running stateless workloads.
2637.57 -> So a stateless workload
might be your web server
2639.74 -> that's just serving web requests,
2642.11 -> whereas a stateful workload
has to store state somewhere.
2645.7 -> And so for high performance
use cases for databases,
2648.87 -> we have the Elastic
Block Store CSI driver.
2651.95 -> For storage that needs
to be shared across pods,
2655.727 -> we have the Elastic
File System CSI driver.
2659.65 -> If you're running EBS workloads,
2661.96 -> one of the important
points to keep in mind here
2664.84 -> is that you need to
have autoscaling groups
2668.62 -> tied to a single availability zone.
2670.93 -> Because EBS volumes
can't move across zones
2673.548 -> you might run into a case
where a node gets scaled up
2677.865 -> in a different AZ and the pod
gets stuck and can't be moved.
2681.12 -> So the diagram shown on the right
2683.36 -> is the recommendation we have
2684.61 -> for running EBS stateful workloads.
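Alongside zonal node groups, a StorageClass like this helps; WaitForFirstConsumer delays volume creation until the pod is scheduled, so the volume is created in the pod's availability zone (gp3 is an example volume type):

```yaml
# StorageClass for the EBS CSI driver with topology-aware binding.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
```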
2691.3 -> And then the last slide
here on cluster operations.
2694.43 -> This is, again, with customers
2696.45 -> running more and more clusters
2697.945 -> one of the questions we get
is how do I discover services
2700.74 -> running in other clusters?
2702.52 -> And where you have the
central platform team
2705.51 -> who might be trying to keep the lights on
2707.54 -> for hundreds of clusters
2708.94 -> and the application team who just wants
2710.41 -> to run their apps somewhere
2711.72 -> and they don't want to care where
2713.81 -> their dependent application is running.
2717.111 -> And one option to do this
that's been around for a while
2719.22 -> is a service mesh.
2721.01 -> However, a service mesh
includes a lot of features
2723.88 -> such as encryption and traffic shaping,
2726.51 -> canary deployments, observability,
2728.74 -> and a lot of customers don't
need all of those tools.
2731.64 -> And so we recently
launched an implementation
2735.01 -> of the upstream Kubernetes
multi-cluster services controller
2738.2 -> that uses Cloud Map behind the scenes.
2740.4 -> And it does bi-directional syncing
2742.247 -> of Kubernetes service
definitions across clusters.
2745.75 -> And so as long as your clusters
have network reachability,
2749.62 -> you can now run application
A in cluster A,
2752.84 -> application B in cluster B,
2754.65 -> and they don't have to know
2755.62 -> that they're running
across separate clusters.
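Exporting a service can be sketched like this, assuming the AWS Cloud Map MCS controller is installed in the participating clusters (names are placeholders):

```yaml
# Exporting "orders" in cluster A makes it discoverable from cluster B
# through the synced multi-cluster service definition.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: orders
  namespace: orders
```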
2760.91 -> Okay, and the final piece is portability.
2765.08 -> How do I run consistent
Kubernetes across environments?
2768.39 -> Up until recently, EKS was a product
2771.52 -> that you deployed on AWS.
2774.57 -> But we've heard from many of you,
2777.948 -> whether it's because you have existing costs
2780.18 -> in data centers on premises
2781.77 -> or because of various
compliance requirements,
2784.75 -> that you need to run certain
workloads on premises.
2787.19 -> How do I get a consistent
Kubernetes experience
2789.52 -> across those environments?
2791.72 -> And earlier this year,
we launched EKS Anywhere
2794.428 -> which is our distribution of Kubernetes,
2799.457 -> the EKS Distro, plus
opinionated, consistent tooling
2803.645 -> for spinning up Kubernetes clusters
2806.26 -> on your hardware in your environments.
2809.42 -> If you still want to use
the managed control plane,
2813.5 -> we also have options for
local zones and outposts
2816.21 -> which are fully supported by EKS,
2818.2 -> which lets you keep the
fully managed control plane
2820.63 -> but reduce latency by putting the compute
2822.89 -> close to where
your customers are
2826.17 -> and where you need it.
2827.29 -> So we now offer multiple
options for running Kubernetes,
2830.76 -> from fully managed
2832.3 -> all the way down to
running on your hardware.
2836.84 -> Okay, now I have all of these
clusters running on premises,
2840.41 -> running in the cloud.
2841.243 -> How do I see them all in one place?
2843.23 -> And the EKS Connector is a recent launch
2845.93 -> that enables you to connect
2847.33 -> any Kubernetes conformant cluster to AWS
2850.97 -> and visualize it.
2851.81 -> So as you can see here, I
have a kOps cluster running,
2856.09 -> a local cluster running in my data center,
2858.29 -> and they're all showing
up in the EKS console.
2861.23 -> So this gives you that
single pane of glass
2863.14 -> for seeing a status of
all of your clusters.
2870.61 -> Okay, wrapping up.
2872.71 -> Where can I learn more?
2873.56 -> We covered a lot here,
2875.67 -> but Kubernetes is a broad space.
2878.25 -> EKS is always launching new features
2880.56 -> to make running and
managing Kubernetes easier.
2883.72 -> And one of the strong
recommendations I have
2886.88 -> is for you to go check out
the EKS Best Practices Guide,
2889.95 -> which is our open source guide.
2892.89 -> We're constantly adding new chapters,
2894.62 -> there's chapters on security, reliability,
2897.21 -> a lot of the things we talked about today
2899 -> but in even more detail.
2903.12 -> And then where can you give feedback
2905.01 -> and submit feature requests?
2906.91 -> One of my favorite
places to surf on the web
2911.65 -> is the Containers Roadmap.
2912.94 -> It's our open source roadmap on GitHub.
2916.53 -> I love interacting with you
2917.92 -> and hearing your feature
requests on there.
2920.19 -> We look at this super
closely as the product team.
2923.38 -> It really helps us prioritize
and gives you the ability
2927.13 -> to put in your feature requests
2929.31 -> and let us know what are
the most important things
2931.86 -> that the EKS team can be working on.
2936.72 -> And that's it.
2938.031 -> Thank you very much.
2938.864 -> I hope this was helpful
2940.4 -> in understanding various features of EKS
2944.56 -> that can help you run Kubernetes.
2947.7 -> And thank you.
2950.394 -> (upbeat music)
Source: https://www.youtube.com/watch?v=cipDJwDWWbY