AWS Certified Solutions Architect - Associate 2020 (PASS THE EXAM!)
Aug 16, 2023
AWS Certified Solutions Architect is one of the most popular cloud computing certifications. In this full course taught by an expert trainer, you will learn the major parts of Amazon Web Services, and prepare for the associate-level AWS Certified Solutions Architect exam. By the end of this course, you will be ready to take the AWS Certified Solutions Architect Associate exam - and pass! Course developed by Andrew Brown of ExamPro. Check out the ExamPro YouTube channel: / @examprochannel
ExamPro AWS Obsessed Certification Training: https://www.exampro.co
LinkedIn: https://www.linkedin.com/company/exam …
Twitter: https://twitter.com/examproco
Instagram: https://www.instagram.com/exampro.co/
"Course Contents": Check the pinned comment for the course contents with time codes. This course is so massive the full contents won't fit in this description!
More AWS Courses:
AWS Certified Cloud Practitioner Training: • AWS Certified Cloud Practitioner Trai…
AWS Certified Developer Associate Training: • AWS Certified Developer - Associate 2…
AWS for Startups - Deploying with AWS: • AWS for Startups - Deploying with AWS…
Learn to code for free and get a developer job: https://www.freecodecamp.org
Read hundreds of articles on programming: https://freecodecamp.org/news
Content
1.14 -> Hey, this is Andrew Brown from ExamPro. And
I'm bringing you another free AWS certification
5.66 -> course. And this one happens to be the most
popular and in-demand. And it is the solutions
9.47 -> architect associate certification. So if you're
looking to pass that exam, this is the course
14.82 -> for you. And we are going to learn a broad
amount of AWS services, we're going to
20.32 -> learn how to build applications that are highly
available, scalable, and durable. And we're
24.36 -> going to learn how to architect solutions
based on the business use case. Now, if you're
28.711 -> looking at this course, and it's 2020, and
you're wondering if you can use it to pass,
33.46 -> you definitely can because the video content
here was shot in late 2019. The only difference
39.05 -> is that the AWS interface has changed a little
bit in terms of aesthetic, but it's more or
44.82 -> less the same. So you can definitely use this
course, if you want a high chance of passing,
49.3 -> I definitely recommend that you do the hands
on labs here in your own AWS account. If you
56.1 -> enjoyed this course, I definitely want to
get your feedback. So definitely share anything
60.44 -> that you've experienced throughout this course
here. And if you pass, I definitely want to
65.86 -> hear that as well. And I also hope you enjoy
this course. And good luck studying.
72.67 -> Hey, this is Andrew Brown from ExamPro. And
we are going to look at the solution architect
81.43 -> associate and whether it's a good fit for
us to take the certification. So the first
87.74 -> thing I want you to know is that this kind
of role is for finding creative solutions
91.43 -> by leveraging cloud services instead of reinventing
the wheel. It's all about big picture thinking.
96.73 -> So you're going to need broad knowledge across
multiple domains. It's great for those who
100.729 -> get bored really easily. And so you're gonna
have to wear multiple hats. And it's really
105.63 -> less about how are we going to implement this
and more about what are we going to implement,
111.27 -> okay, so you would come up with an architecture
using multiple different cloud services, and
116.59 -> then you would pass it on to your cloud engineers
to actually go implement it. It's not uncommon
121.969 -> for a solution architect to be utilized within
the business development team. So it's not
127.079 -> that unusual to see solution architects being
very charismatic speakers and extroverts,
132.73 -> because they're going to have to talk to other
companies to collaborate with, alright, and
138.69 -> just to really give you a good idea of what
a solution architect does, they're going to
143.26 -> be creating a lot of architectural diagrams.
So here, I just pulled a bunch from the internet,
146.93 -> and you can see kind of the complexity and
how they tie into different services, you're
151.69 -> going to require a lot of constant learning,
because AWS is constantly adding new services
156.09 -> and trying to figure out how they all fit
together is a common thing. And advice that
160.65 -> I get from some senior solution architects
at large companies, is you're always thinking
165.489 -> about pricing, and you're always thinking
about, can you secure it, whatever it is, okay,
169.31 -> but AWS is gonna have their own definition
there, which is all about the five pillars,
175.489 -> which come from the Well-Architected Framework.
But you know, we'll learn that as we go along
180.549 -> here. Okay, so let's talk about what value
do we get out of the solution architect associate?
186.22 -> Well, it is the most popular AWS certification
out of every single one. It's highly in demand
192.39 -> with startups, because you can help wherever
help is needed. Startups, from small to medium
197.69 -> size, just need people to fill any possible
role. And because you're gonna have broad
201.29 -> knowledge, you're going to be considered very,
very valuable, it is recognized as the most
205.099 -> important certification at the associate level,
and it's going to really help you stand out
210.18 -> on a resume. I would not say the associate
is going to help you increase your salary
too much. But you're definitely going to see
a lot more job opportunities. To see those
220.459 -> increases in salary, you're gonna have to
get those pro and specialty certifications.
225.31 -> Okay, so if you're still not sure whether
you should take the solution architect associate,
231.2 -> let me just give you a little bit more information.
So it is the most in-demand AWS certification.
237.51 -> So it has the most utility out of any other
certification because of that broad knowledge.
243.31 -> It's not too easy, but it's not too hard.
So it's not too easy, in the sense that, you
247.84 -> know, the information you're learning isn't superficial;
it's actually going to be very useful on the
251.95 -> job. But it's also not that hard. So you're
not going to risk failing the exam because
you don't know the nitty-gritty of all the
services, okay. It requires the least amount
261.51 -> of technical knowledge. So if you're really
more of an academic or theory-based learner,
269.29 -> instead of having that hands on experience,
you're going to excel here taking the solution
273.79 -> architect associate. And again, when in doubt,
just take this certification because it gives
279.35 -> you the most flexible future learning path.
So I always say that if you aren't sure what
285.6 -> specialty you want to take, take the solution
architect associate. So you get to familiarize
290.01 -> yourself with all the different kinds of roles
that you can encounter. So if you're definitely
294.83 -> thinking about doing big data, security, or machine
learning, I would absolutely tell you to
300.04 -> take the Solution Architect Associate first.
Of course, you can always do the solution
303.89 -> architect professional, if you want to keep
on going down this specific path. And if you
309.63 -> are new to AWS and cloud computing in general,
then I strongly recommend that you take the
315.48 -> CCP before taking the solution architect associate
because it's a lot easier. And it's going
322.27 -> to give you more foundational knowledge so
that you're going to have a really easy time
326.97 -> with this exam. And it specifically is the
direct upgrade path. So all that stuff you
332.4 -> learn in the CCP is directly applicable to
the Solution Architect associate. So how much
339.63 -> time are we going to have to invest in order
to pass the solution architect
343.48 -> associate. And this is going to depend on
your past experience. And so I've broken down
348.29 -> three particular archetypes to give you an
idea of time investment. So if you are already
353.63 -> a cloud engineer, you're already working with
AWS, you're looking at 20 hours of study,
358.78 -> you could pass this in a week, okay, but that's
if you're using AWS on a day to day basis,
364.78 -> if you are a bootcamp grad, it's going to
take you one to two months. So we're looking
369.09 -> between 80 to 160 hours of study. If you have
never used AWS or heard of it, then you probably
376.6 -> should go take the certified cloud practitioner
first, it's going to make things a lot easier,
381.95 -> which has a lot more foundational information,
you might start this here and be overwhelmed,
387.88 -> because you feel that you're missing information.
So you will probably want to go there first.
391.77 -> If you are a developer, and you've been working
in the industry for quite a few years, but
395.51 -> maybe you've just never used AWS, then you're
looking at one month of study. So that's about
402.03 -> 80 hours of study. Okay, and so that will
give you an idea how much time you need to
407.32 -> commit. Okay, so let's just touch on the exam
itself here. So the exam itself is going to
414.02 -> cost $150 USD for you to take.
You have to take it at a test center that
421.69 -> is partnered with AWS. So you will have to
go through the portal there and book it, and
427.79 -> then you'll be going down to that test center
to write the exam. It gives you 130 minutes
433.4 -> to complete it. There are 65 questions on
the exam, and the passing score is around 72%.
440.58 -> And once you have the certification, it's
going to be valid for three years. All right.
444.92 -> So hopefully, that gives you a bit of perspective
and whether the solution architect associate
449.45 -> is right for you. Here I have on the right
hand side, the exam guide. And I'm just going
459.72 -> to walk you quickly through it just so you
get a kind of a breakdown of what it is that
463.411 -> AWS recommends that we should learn and how
this exam is broken up in terms of domains,
472.41 -> and also its scoring. Okay, so here on the
left hand side, we're going to first look
476.12 -> at the content outline. Okay, if we just scroll
down here, you can see it's broken up into
five domains. And we get a bunch more
additional information. Okay, so we have a
487.55 -> design resilient architectures, define performant
architectures, specify secure applications
492.75 -> and architectures, design cost-optimized
architectures, and define operationally excellent
497.11 -> architectures. Now I highlighted the words
in there: resilient, performant, secure, cost-optimized,
502.63 -> and operationally excellent, because
this actually maps to the five pillars of
507.78 -> the well architected framework, which is a
recommended read for study here, okay. So
515.87 -> there is a rhyme and reason to this layout
here, which we will talk about when we get
520.12 -> to the white paper section. But let's just
look inside of each of these domains. So for
524.97 -> resilient architecture, you have to choose
reliable and resilient storage. So there we're
529.99 -> talking about Elastic Block Store and S3 and
all the different storage options available
537.64 -> to us. Then, determine how to design decoupling mechanisms
using AWS services. So they're talking about
543.18 -> application integration, such as SQS and
SNS. Then we have determine
549.51 -> how to design a multi-tier architecture solution.
Maybe you're wondering what they mean by
555.76 -> multi-tier. So when you have tiers, you'd have
your database layer, your web layer, your
561.08 -> load balancing layer, okay, so that's probably
what they mean by tiers. Determine how to
design high availability or fault
tolerant architectures. So that's going to
569.74 -> be knowing how to use Route 53, load balancing,
auto scaling groups, what happens when an
574.33 -> AZ goes out what happens when a region goes
out? That kind of stuff, okay. The next thing
579.81 -> is design performant architectures. So choose
performant storage and databases. So that's
585.65 -> just going to be knowing DynamoDB versus
RDS versus Redshift, okay. Then, how we can apply
591.02 -> caching to improve performance. That's going
to be knowing that DynamoDB has a caching layer,
595.38 -> that's going to be knowing how to use ElastiCache,
or maybe using CloudFront to cache
600.7 -> your static content, then we have designed
solutions for elasticity and scalability.
606.9 -> So that sounds pretty much like auto scaling
groups to me, okay. And then we got specify
613.02 -> secure applications and architectures. So determine
how to secure application tiers. So again,
618.25 -> there's three tiers: database, web, and network
or load balancing. There's obviously other
622.2 -> tiers there. But just knowing which box to
check to turn on security for those services
627.95 -> and how that stuff works, from a general perspective,
okay.
631.74 -> Determine how to secure data. So just knowing
data at rest and data in transit. Okay,
639.05 -> then defining the networking infrastructure
for a single VPC application. This is about
643.19 -> knowing VPCs inside and out, which we definitely
cover heavily in the Solutions Architect Associate
649.5 -> and all the associate certifications, because
it's so darn important. Then we have design
654.07 -> cost-optimized architectures. So determine how
to design cost-optimized storage, determine
659.02 -> how to design cost-optimized compute. When they're
talking about storage, they're probably really,
663.62 -> really talking about S3. S3 has a bunch of
storage classes that you can change, and they
667.94 -> get cheaper the further down you go, and
knowing when and how to use that. For compute,
673.66 -> maybe they're talking about just knowing when
to use different kinds of EC2 instances,
679.65 -> or maybe using auto scaling groups to reduce
that cost, to scale in when you don't
684.92 -> have a lot of usage. Then the last one here
is define operationally excellent architectures.
689.13 -> So design features and solutions that
enable operational excellence, okay,
694.19 -> and so you can see, and I'm not even exactly
sure what they're saying here. But that's
698.72 -> okay. Because it's worth 6%. Okay, it's definitely
covered in the course; it's just funny
705.54 -> wording in there, I never could remember
what they're saying there. Okay. But you can
709.45 -> see the most important one here is designing
resilient architecture. Okay, so that's the
714.43 -> highest one there. And the lowest two are cost
and operational excellence. So you're not
719.23 -> going to be hit with too many cost questions,
but you just generally have to know, you know,
723.81 -> when it makes sense to use x over y. Alright.
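To keep that breakdown straight, the domain weightings can be sketched as a quick reference. The numbers below are the ones published in the SAA-C01 exam guide that this section is describing; treat them as approximate, since AWS revises them between exam versions.

```python
# Approximate scoring weights for the five SAA-C01 exam domains.
# Each domain maps to a pillar of the Well-Architected Framework.
domain_weights = {
    "Design Resilient Architectures": 34,
    "Define Performant Architectures": 24,
    "Specify Secure Applications and Architectures": 26,
    "Design Cost-Optimized Architectures": 10,
    "Define Operationally Excellent Architectures": 6,
}

# The five domains cover the whole exam.
assert sum(domain_weights.values()) == 100

# Resilient architectures carries the most weight, operational excellence the least.
heaviest = max(domain_weights, key=domain_weights.get)
assert heaviest == "Design Resilient Architectures"
```

As the transcript notes, resilience is the biggest slice, and cost plus operational excellence together are only 16%, so budget your study time accordingly.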
So yeah, there's the outline, and we will
729.4 -> move on to the next part here. And that's
the response types. Okay. So this exam, I
735.441 -> believe, has 65 questions. I don't think
it actually states it in here. But generally,
740.91 -> it's 65. Okay, a lot of times when you take
the exams, they'll actually have additional
745.23 -> questions in there that are not scored, because
they're always testing out new questions.
750.09 -> Questions are going to come in two formats:
multiple choice, so we're going to have the
753.76 -> standard one out of four. And then we're going
to have multiple response, which is going
758.84 -> to be choose two or more out of five or more,
okay, generally, it's always two out of five.
764.59 -> But I guess sometimes you could have three
out of six. All right. And so just be aware
771.9 -> of that. Now, the passing score for this is
going to be 720 points out of 1,000 points.
778.34 -> Okay, so they have this point system, and
so 720 is passing. So the way you can think
782.06 -> about it, it's 72%, which is about a C-minus, to
pass. All right, I put a tilde there, I know
788.43 -> it looks a bit funny there. But the tilde
means to say, like, about or around, because
792.77 -> that value can fluctuate. So the thing is
that it's not exactly 72%; you could go
799.13 -> in and get 72%, and fail, you can go and get
75% and fail, it just depends on how many
804.5 -> people are taking the exam, and they're going
to adjust it based on how people are passing
809.47 -> or failing, okay. But it doesn't
fluctuate too far from this point, okay, it's
813.7 -> not gonna be like, you have to get 85%. Alright.
And then just the last thing here is the white
818.25 -> paper. So AWS recommends white papers
for you to read. And they're not very clear
823.8 -> here. So they do Architecting for the Cloud:
AWS Best Practices; that's one you should
828 -> definitely read. It's not a very difficult
read. So it's at the top of your reading list.
832.52 -> And then there's the AWS Well-Architected
webpage. And that webpage contains a bunch
838.839 -> of white papers. And this is the full list
here. Okay, so we have the Well-Architected
843.29 -> Framework, which talks about the five pillars,
and then they actually have a white paper
847.461 -> for each pillar. And then there's these other
ones down below, which are kind of new additions.
852.57 -> So the question is, do you have to read all
of these things? No. In fact, you should just
857.06 -> probably just read the top one here, the Well-Architected
Framework, and you could read
860.231 -> half of that and you'd still be good. It is
great to dive into these ones
865.92 -> here. So they are still listed here.
The last ones here are definitely 100% optional.
870.21 -> I do not believe they are on the exam. But
again, they just tell you to go to the entire
874.47 -> page. So it is a bit confusing there. So hopefully,
that gives you a bit of a breakdown so you
880.07 -> are prepared for what's ahead of you for study.
884.07 -> Hey, this is Andrew Brown from ExamPro. And
we are looking at Simple Storage Service,
891.06 -> also known as S3, which is an object-based
storage service. It's also serverless storage
896.05 -> in the cloud. And the key thing is you don't
have to worry about file systems or disk space.
901.17 -> To really understand S3, we need to know
what object storage is. And so it is a data
905.589 -> storage architecture that manages data as
objects, as opposed to other storage architectures,
911.78 -> other architectures being file systems, where
you manage data as files within a file hierarchy,
918.18 -> or you have block storage, which manages data
918.18 -> as blocks within sectors and tracks;
923.29 -> the huge benefit to object storage is you
don't have to think about the underlying infrastructure,
928.09 -> it just works. And with s3, you have practically
unlimited storage. So you just upload stuff,
934.3 -> you don't worry about disk space. s3 does
come with a really nice console that provides
940.18 -> you that interface to upload and access your
data. And the two most key components
946.339 -> to S3 are S3 objects and S3 buckets. So objects
are what contain your data. And they're
952.42 -> kind of like files. And an object is composed
of a key, a value, a version ID, and metadata.
957.79 -> So the key is just the name of the file or
the object, the value is actually the data
962.2 -> itself made up as a sequence of bytes. The
version ID: if you're using versioning (you
966.98 -> have to enable that on S3), then each object
you upload would have an ID. And then you
972.83 -> have metadata, which is just additional information
you want to attach to the object. An S3 object
977.791 -> can be from zero bytes to five terabytes in size.
Please note that I really highlighted zero
983.32 -> bytes, because that is a common trick question
on the exam, because a lot of people think
988.39 -> you can't do zero bytes, and you definitely
can. And then you have buckets, and buckets
993.2 -> hold objects; they are kind of like top-level
folders or directories. Buckets can also have
998.57 -> folders, which in turn hold objects. So buckets
have a concept called folders. And so you
1003.92 -> can have those objects directly in that
bucket, or in those folders. When you name
1009.24 -> an s3 bucket, it's using a universal namespace.
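To make the object model just described concrete, here is a tiny illustrative sketch in plain Python, with no AWS calls; the key, metadata, and version ID below are made up for illustration.

```python
# Illustrative model of an S3 object: a key, a value (raw bytes),
# a version ID, and metadata. This is a local sketch of the concept,
# not the AWS API.
s3_object = {
    "key": "photos/cat.jpg",                 # the name of the object
    "value": b"",                            # the data itself, a sequence of bytes
    "version_id": "111111",                  # populated when versioning is enabled
    "metadata": {"uploaded-by": "andrew"},   # extra info you attach to the object
}

# An object can legitimately be zero bytes, which is the exam trick
# question mentioned above. Sizes run from 0 bytes up to 5 terabytes.
size = len(s3_object["value"])
assert 0 <= size <= 5 * 1024**4
```

The bucket, then, is just the uniquely named container that holds objects like this one.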
So the bucket names must be unique. It's like
1014.91 -> having a domain name. So you have to really
choose unique names. Okay. So the concept
1024.491 -> behind storage classes is that we get to trade
retrieval time, accessibility, and durability
1029.23 -> for cheaper storage. And so when you upload
data to S3, by default it's using Standard.
1034.48 -> And it's really, really fast. It has 99.99%
availability, it has 11 nines of durability,
1040.92 -> and it replicates your data across three availability
zones. And as we go down, this list is going
1045.939 -> to get cheaper. We're going to skip over
Intelligent-Tiering; we'll come back to that.
1050.27 -> And we're gonna look at Standard Infrequent
Access, also known as IA. So it's just as
1054.26 -> fast as Standard. The trade off here is that
it's cheaper if you access files less than
1059.27 -> once a month. There is an additional retrieval
fee when you access that data. But the cost
1065.98 -> overall is 50% less than Standard. So the
trade off here is you're getting reduced availability.
1071.62 -> Then you have One Zone-IA. And as the name
implies, it only replicates
1076.429 -> your data in one AZ, so you have
reduced durability. So there is a chance that
1082.71 -> your data could get destroyed. A retrieval
fee is applied just like with IA. Your availability
1089.01 -> is going to drop down to 99.5%. Okay, then
you have Glacier, and Glacier is for long term
1096.01 -> cold storage.
The trade off here, though, is that
the retrieval is going to take minutes to
1102.53 -> an hour, but you get extremely, extremely
cheap storage. There also is a retrieval fee
1107.87 -> applied here as well. And Glacier normally
is pitched kind of like its own service,
1113.04 -> but really, it's an S3 service. Then you have
Glacier Deep Archive. And this is just like
1119.03 -> Glacier, except now it's going to take 12 hours
before you can access your data. Again, it's
1125.98 -> very, very, very, very cheap at this level.
It's the cheapest tier here. And so this is
1132.71 -> really suited for long archival data. Now
we glossed over Intelligent-Tiering, but
1138.66 -> let's talk about it. So what it does is it
uses machine learning to analyze your object
1143.37 -> usage and determine the appropriate storage
class. So it's going to decide for you what
1148.2 -> storage class you should use so that you save
money, okay, and so that is all of the classes
1154.52 -> and we're going to compare them in a big chart
in the next slide. I just have here the
1163.4 -> comparison of storage classes, just to make
it a bit easier for you to see what's going
1167.3 -> on here. So you can see across the board we
have durability at the 11 nines across all
1171.78 -> classes. There is reduced durability in One
Zone-IA, but I guess it's trying to say that
1177.12 -> maybe it has 11 nines in that one zone. I'm
not sure, so that one confuses me a bit. But
1182.6 -> you have to think that if you're only running
one zone, there has to be reduced durability.
1187.43 -> For availability, it's 99.9% until we hit
One Zone-IA. For Glacier and Glacier Deep
1193.45 -> Archive, it's not applicable, because it's
just going to take a really long time to access
1197.12 -> those files. So for availability there,
we're not going to put a percentage
1201.12 -> on that. For AZs, it's going to run in three
or more AZs, from Standard to Standard-IA,
1209.46 -> actually, across the board. The
only one that is reduced is One Zone-IA.
1214.05 -> I always wonder, you know, if you're running
in Canada Central, it would only use two,
1218.76 -> because there's only two availability zones
there. So it's always a question I have on
1223.08 -> the top of my head. But anyway, it's always
three or more AZs. You can see that there
1229.809 -> is a minimum capacity charge for Standard-IA
and above, and there is a minimum storage duration charge
1237.45 -> for all the tiers with the exception of Standard.
And then you have your retrieval fees, which
1243.23 -> are only going to come in with your IA tiers and
Glacier, okay. And then you have the latency.
1247.97 -> That's how fast you can access files. And
you can see ms means milliseconds. So for
1252.44 -> all these tiers, it's super, super fast. And
you know, it's good to just repeat, but you
1257.9 -> know, AWS does give a guarantee of 99.99%
availability, and it has a guarantee of 11 nines
1268.16 -> durability. Alright, so there you go, that
1278.1 -> is the big comparison chart. Now we're taking a look at S3
security. So when you create a bucket, they're
all private by default. And AWS really obsesses
1284.22 -> over not exposing public buckets, they've
changed the interface like three or four times,
1289.22 -> and they now send you email reminders telling
you what buckets are exposed because it's
1293.24 -> a serious vulnerability for AWS, and people
just seem to keep on leaving these buckets
1297.85 -> open. So when you create a new bucket, you
have all public access denied. And if you
1303.48 -> want to have public access, you have to go
check this off for either your ACLs or
1308.67 -> your bucket policies. Okay.
1310.99 -> Now, in s3, you can turn on logging per request.
So you get all the detailed information of
1317.679 -> what objects were accessed, uploaded, or deleted,
in granular detail. Log files are generated,
1323.79 -> but they're not put in the same bucket; they're
put in a different bucket. Okay. Now to
control access to your objects and your buckets,
1335.32 -> you have two options. We have bucket policies
1335.32 -> and access control lists. So access control
lists came first, before bucket policies.
1339.77 -> They are a legacy feature, but they're not
deprecated, so it's not a faux pas to use them.
1344.309 -> They're just simpler in nature. And sometimes
there are use cases where you might want to
1347.95 -> use them over bucket policies. And so it's
just a very simple way of granting access:
1352.179 -> you right-click on a bucket or an object and
you can choose who; so, like, there's an option
1359.07 -> to grant all public access, you can say list
the objects, write the objects, or just read
1362.99 -> and write permissions. And it's as simple
as that. Now, bucket policies are a
1367.14 -> bit more complicated, because you have to
write a JSON policy document, but you get
1372.16 -> a lot more rich, complex rules. If you're
ever setting up static S3 hosting, you definitely
1378.799 -> have to use a bucket policy. And that's what
we're looking at here. This is actually an
1381.95 -> example website where we're saying allow read-only
access, for GetObject, to this bucket.
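A read-only GetObject policy of the kind just described is a small JSON document. Here is a minimal sketch, built and validated in Python; the bucket name "example-bucket" is a placeholder, not one from the course.

```python
import json

# Minimal public-read bucket policy, the kind used for static S3 hosting.
# "example-bucket" is a placeholder bucket name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",                 # anyone on the internet
            "Action": "s3:GetObject",          # read-only: just fetching objects
            "Resource": "arn:aws:s3:::example-bucket/*",  # every object in the bucket
        }
    ],
}

# json.dumps gives you the document you'd paste into the console's
# bucket policy editor.
document = json.dumps(policy, indent=2)
assert "s3:GetObject" in document
```

The `/*` on the Resource ARN is what applies the statement to the objects inside the bucket rather than to the bucket itself.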
1387.84 -> And so this is used in a more complicated
setup. But that is the difference. So bucket
1391.88 -> policies are generally used more, they're
more complex, and ACLs are just simple, and
1396.41 -> it's no faux pas to use them. So we talked about security,
and a very big feature of that is encryption.
1406.11 -> And so when you are uploading files to S3,
it by default uses SSL or TLS. So that means
1411.799 -> we're going to have encryption in transit.
When it comes to server side encryption, that's
1416.85 -> what is sitting on the actual data at rest.
We have a few options here. So we have SSE-AES,
1425.96 -> we have SSE-KMS, and SSE-C. If you're
wondering what SSE stands for, it's server
1431.51 -> side encryption. And so for the first option,
this is just an algorithm for encryption, AES-256.
1436.99 -> So that means it's going to use a 256-bit
key when it
1442.16 -> uses encryption, which is very long. And S3
is doing all the work here. So it's handling
all the encryption for you. Then you have
1451.58 -> KMS, which is Key Management Service, and
it uses envelope encryption. So the key is
1457.19 -> then encrypted with another key. And with
KMS, the keys themselves are either managed by AWS or
1462.32 -> managed by you, okay. Then you have
1462.32 -> customer provided keys, this is where you
provide the key yourself. There's not an interface
1467.83 -> for it here, it's a bit more complicated.
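As a quick reference, each of these server-side options is selected per upload request through HTTP headers. Here is a sketch of that mapping as a plain Python dict (no AWS calls); the header names come from the S3 REST API.

```python
# How each server-side encryption option is requested when you upload
# an object. Header names are from the S3 REST API.
sse_headers = {
    # SSE-AES: S3 manages the key and encrypts with AES-256.
    "SSE-AES": {"x-amz-server-side-encryption": "AES256"},
    # SSE-KMS: the key lives in Key Management Service (envelope encryption).
    "SSE-KMS": {"x-amz-server-side-encryption": "aws:kms"},
    # SSE-C: you supply your own key material with every request.
    "SSE-C": {"x-amz-server-side-encryption-customer-algorithm": "AES256"},
}

assert sse_headers["SSE-AES"]["x-amz-server-side-encryption"] == "AES256"
```

With SSE-C there are additional headers carrying the actual key and its checksum on every request, which is part of why the transcript calls it more complicated.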
But you know, all you need to know is that
the C stands for customer-provided. Then you
1476.809 -> have client side encryption; there's no interface
or anything for that, it's just you encrypting
1480.45 -> the files locally and then uploading them
to S3. Now we're looking at S3 data consistency. Sorry,
I don't have any cool graphics here because
1490.929 -> it's not a very exciting topic, but we definitely
need to know what it is. So when you put data
1495.75 -> or you write data, to S3, which is when you're
writing new objects, the consistency is going
1501.66 -> to be different when you are overwriting files
or deleting objects. Okay, so when you send
1508.72 -> new data to s3, as a new object, it's going
to be read after write consistency. What that
1514.5 -> means is as soon as you upload it, you can
immediately read the data and it's going to
1518.61 -> be consistent. Now when it comes to overwriting
and deleting objects, when you overwrite or
1525.9 -> delete, it's going to take time for S3 to
replicate it to all those other AZs. And so
1530.65 -> if you were to immediately read the data,
s3 may return to you an old copy. Now, it
1535.96 -> only takes like a second or two for it to
update. So it might be unlikely in
1540.93 -> your use case, but you just have to consider
that that is a possibility. Okay. So we're
1550.13 -> taking a look at cross region replication,
which provides higher durability in the case
1557.86 -> of a disaster, okay, and so what we do is
we turn it on, and we're going to specify
1562.35 -> a destination bucket in another region, and
it's going to automatically replicate those
1566.36 -> objects from the source region to that
destination region. Now, you can also have
1572.809 -> it replicate to a bucket in another AWS account.
In order to use this feature, you do have
1578.76 -> to have versioning turned on in both the source
and destination buckets.
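The replication setup just described boils down to a configuration attached to the source bucket. Here is a hedged sketch of its shape as a plain Python dict, mirroring the structure the S3 replication API expects; the role and bucket ARNs are placeholders.

```python
# Sketch of a cross-region replication configuration for a source bucket.
# The IAM role ARN and destination bucket ARN below are placeholders.
replication_config = {
    # Role S3 assumes to copy objects on your behalf.
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "Status": "Enabled",
            "Prefix": "",  # empty prefix: replicate every object
            # The destination can be a bucket in another region,
            # and even in another AWS account.
            "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
        }
    ],
}

# As noted above, versioning must be enabled on BOTH buckets
# before replication will work.
source_versioning = destination_versioning = "Enabled"
assert source_versioning == destination_versioning == "Enabled"
```

Only new objects written after the rule is enabled are replicated; existing objects are not copied retroactively.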
1587.79 -> Now in S3, you can turn on bucket versioning.
And what versioning does is it allows you
1593.341 -> to version your objects, all right. And the
idea here is to help you prevent data loss
1599.92 -> and just keep track of versions. So if
you had a file (here, I have an image),
1604.96 -> the name is the same thing as the
key, right, and it's gonna have a version ID.
1610.75 -> And so here we have one, which is 111111.
And when we put a new file, like a new object
1618.52 -> with the exact same key, it's going to create
a new version of it. And it's going to give
1623.44 -> it a new ID, whatever that ID is 121212. And
the idea is now if you access this object,
1631.5 -> it's always going to pull the one from the
top. And if you were to delete that object,
1635.53 -> now it's going to access the previous one.
So it's a really good way of protecting your
1641.17 -> data from loss. And also, if you did need to go
get an older version of it, you can actually
1646.34 -> get any version of the file you want, you
just have to specify the version ID. Now when
1653.419 -> you do turn on s3, versioning, you cannot
disable it after the fact. So you'll see over
1658.83 -> here it says enabled or suspended. So once
it's turned on, you cannot remove versioning
1663.75 -> from existing files, all you can do is suspend
versioning. And you'll have all these files
1669.11 -> with one version, alright.
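The behavior just described, where a second put on the same key stacks a new version on top, the latest version is what you get back, and deleting the latest exposes the previous one, can be sketched as a toy model (plain Python, not the real S3 API; the random hex version IDs are just for illustration):

```python
import uuid


class VersionedBucket:
    """Toy model of S3 versioning semantics (not the real API)."""

    def __init__(self):
        self.versions = {}  # key -> list of (version_id, body), newest last

    def put(self, key, body):
        vid = uuid.uuid4().hex  # S3 assigns an opaque version ID
        self.versions.setdefault(key, []).append((vid, body))
        return vid

    def get(self, key, version_id=None):
        stack = self.versions[key]
        if version_id is None:
            return stack[-1][1]  # always pulls the one from the top
        return dict(stack)[version_id]  # or any specific version by ID

    def delete_latest(self, key):
        self.versions[key].pop()  # the previous version becomes current


b = VersionedBucket()
v1 = b.put("data.jpg", "old image")
v2 = b.put("data.jpg", "new image")  # same key -> new version on top
assert b.get("data.jpg") == "new image"
assert b.get("data.jpg", v1) == "old image"
b.delete_latest("data.jpg")
assert b.get("data.jpg") == "old image"
```

The point of the model is the access pattern: reads without a version ID always hit the newest entry, and older data is never overwritten, only stacked under it.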
1673.26 -> So s3 has a feature called Lifecycle Management.
And what it does is it automates the process
1680.65 -> of moving objects to different storage classes
or deleting them altogether. So down below
1685.13 -> here, I have a use case. So I have an s3 bucket.
And I would create a lifecycle rule here to
1689.87 -> say after seven days, I want to move this
data to glacier because I'm unlikely to use
1695.169 -> that data for the year. But I have to keep
it around for compliance reasons. And I want
1699.99 -> that cheaper cost. So that's what you do with
lifecycle rule, then you create another lifecycle
1704.04 -> rule to say after a year, you can go ahead
and delete that data. Now, Lifecycle Management
1710.33 -> does work with versioning, and it can apply
to both current and previous versions. So here
1715 -> you can see you can specify what you're talking
about when you're looking at a lifecycle rule.
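The two rules in that use case (archive to Glacier after seven days, delete after a year) could be written as a lifecycle configuration roughly like this (a sketch of the JSON shape the s3api lifecycle call accepts; the rule ID is made up):

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        { "Days": 7, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```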
1725.09 -> So let's look at transfer acceleration for
s3. And what it does is it provides you with
1729 -> fast and secure transfer of files over long
distances between your end users and an s3
1734.85 -> bucket. So the idea is that you are uploading
files and you want to get them to s3 as soon
1740.1 -> as possible. So what you're going to do is
instead of uploading it to s3, you're going
1744.45 -> to send it to a distinct URL for an edge location
nearby. An edge location is just a data center
1750.45 -> that is as close to you as possible. And once
it's uploaded there, it's going to then accelerate
1756.77 -> the uploading to your s3 bucket using the
AWS backbone network, which is an optimized
1762.35 -> network path. Alright. And so that's all there
is to it. So pre-signed URLs are something
1772.73 -> you're definitely going to be using in practicality
when you're building a web application. So the
1777.059 -> idea behind it is that you can generate a
URL which provides you temporary access to
1781.79 -> an object to either upload or download object
data to that endpoint. So pre-signed URLs are
1787.87 -> commonly used to provide access to private
objects. And you can use the CLI or SDK to generate
1794.15 -> pre-signed URLs. Actually, that's the only
way you can do it. So here using the CLI you
1798.309 -> can see I'm specifying the actual object and
I'm saying that it's going to expire after
1802.071 -> 300. I think that's seconds. And so, anyway,
the point is, is that, you know, it's only
1810.29 -> going to be accessible for that period of
time. And what it's going to do is gonna generate
1815.86 -> this very long URL. And you can see it actually
has an access key in here, sets the expiry
1821.97 -> and has the signature. Okay, so this is going
to authenticate us temporarily to
1827.59 -> do what we want to do to that object. A very,
very common use case is, if you have a web application,
1833.16 -> you need to allow users to download files
from a password protected part of your web
1837.38 -> app, you'd also expect that those files on
s3 would be private. So what you do is you
1844.13 -> generate out a pre signed URL, which will
expire after like something like five seconds,
1848.04 -> enough time for that person to download the
file. And that is the concept. So if you're
1858.33 -> really paranoid about people deleting your
objects in s3, what you can do is enable MFA
1863.45 -> delete. And what that does is it makes you
require an MFA code in order to delete said
1871.799 -> object. All right, now in order to enable
MFA delete, you have to jump through a few hoops.
1876.7 -> And there's some limitations around how you
can use it. So what you have to do is you
have to make sure versioning is turned on on your
bucket or you can't use MFA delete. The other
1886.36 -> thing is that in order to turn on MFA delete,
you have to use the CLI. So here down below,
1891.27 -> I'm using the CLI. And you can see the
configuration for versioning. I'm setting
1894.52 -> it to MFA delete enabled. Another caveat is
that only the bucket owner, logged in as
1899.96 -> the root user can delete objects from the
bucket. Alright, so those are your three caveats.
1905.309 -> But this is going to be a really good way
to ensure that files do not get deleted by
1909.46 -> accident. Hey, this is Andrew Brown from exam Pro,
and welcome to the s3 Follow along where we're
1918.78 -> going to learn how to use s3. So the first
thing I want you to notice in the top right
1922.69 -> corner is that s3 is in a global region. So
most services are going to be region specific.
1928.01 -> And you'd have to switch between them to see
the resources of them, but not for s3, you
1932.7 -> see all your buckets from every single region
in one view, which is very convenient. That
1937.43 -> doesn't mean that buckets don't belong to
a region, it's just that the interface here
1941.57 -> is
1942.57 -> global.
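Before the bucket gets created in the next step, it helps to know the naming rules the walkthrough runs into: bucket names must be globally unique and DNS-compliant. A rough validator for the core constraints (a simplified sketch; it skips extra documented rules such as rejecting names that look like IP addresses):

```python
import re

# Simplified S3 bucket-name check: 3-63 characters, lowercase letters,
# digits, dots and hyphens, starting and ending with a letter or digit.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")


def is_valid_bucket_name(name: str) -> bool:
    return BUCKET_RE.match(name) is not None


print(is_valid_bucket_name("exampro-000"))  # True
print(is_valid_bucket_name("ExamPro"))      # False: uppercase not allowed
print(is_valid_bucket_name("ab"))           # False: shorter than 3 characters
```

Uniqueness, on the other hand, can only be checked by AWS at creation time, which is exactly the error hit below.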
1943.57 -> Okay, so we're going to go ahead and create
our first bucket and the bucket name has to
1946.77 -> be unique. So if we choose a name that's already
taken by another AWS user, we're not
1950.96 -> gonna be able to name it that and the name
has to be DNS compliant. So it's just like
1954.91 -> when you register for a domain name, you're
not allowed certain characters. So whatever
1960.25 -> is valid for a domain name, or URL is what's
going to be valid here. So I'm going to try
1964.01 -> to name it exam Pro. And it's going to be
in this region here. And we do have all these
1967.97 -> other options. But honestly, everybody always
just goes ahead and creates and configures
1971.53 -> after the fact, we're gonna hit Create. And
you're gonna notice that it's gonna say this
1974.77 -> bucket name has already been taken, and it
definitely has been because I have it in my
1979.5 -> other AWS account here. So I'm just gonna
go ahead and name it something else, I'm gonna
1985.26 -> try 000. Okay, I'm gonna hit Create. And there
we go. So we have our own bucket. Alright,
1992.88 -> now if we wanted to go ahead and delete this
bucket, I want you to do that right away here,
1997.32 -> we're going to go ahead and delete this bucket,
it's gonna pop up here, and it's going to
2000.33 -> ask us to put in the name of the bucket. So
I'll just copy it in here, like this. And
2004.91 -> we're going to delete that bucket. Okay, so
that's how you create a bucket. And that's
2008.02 -> how you delete a bucket. But we're gonna need
a bucket to learn about s3. So we're gonna
2012.299 -> have to go ahead and make a new bucket here.
So I'm going to put in exam pro 000. And we
2018.21 -> will go ahead and create that. And now we
have our buckets. So great. And we'll go click
2023.61 -> into this bucket. And there we go. So let's
start actually uploading our first file. So
2032.89 -> I prepared some files here for upload. And
just before I upload them, I'm just gonna
2037.51 -> go create a new folder here in this bucket.
Okay. And I have a spelling mistake there.
2045.74 -> Great. And so I'm gonna just go upload these
images. Here. They are from Star Trek The
2050.45 -> Next Generation. They're the characters in
the show. And so all I have to do here is
2054.989 -> click and drag, and then I can go ahead and
hit upload down here. And they're just going
2060.48 -> to go ahead and upload them. Alright. And
we'll just give that a little bit of time
2065.909 -> here. And they are all in. Great. So now that
I have all my images here in my bucket, I
2072.399 -> can click into an individual one here. And
you can see who the owner is, when it was
2077.889 -> uploaded, the storage class, the
size of it. And it also has an object URL,
2082.659 -> which we're going to get to in a moment here.
All right. So but if we want to actually just
2086.909 -> view it in the console here, we can click
open and we can view it or we can hit download.
2092.559 -> Alright, and so that's going to download that
file there. But then we have this object URL
2096.419 -> and this was why we're saying that you have
to have unique bucket names because
2102.239 -> they literally are used as URLs. Okay, so
if I were actually to take this URL and try
2106.91 -> to access it, and I actually just had it open
here a second ago, you can see that we're
2111.45 -> getting an Access denied because by default,
s3 buckets are private. Okay? So if we wanted
2118.289 -> to make this public, so anyone could access
this URL, we want to hit the make public button,
2122.66 -> we're going to see that it's disabled. So
if we want to be able to make things public,
2129.319 -> we're gonna have to go to our bucket up here
at the top here, go to the properties, or
2133.9 -> sorry, I should say permissions. And we're
going to have to allow public access. Okay,
2140.079 -> so this is just an additional security feature
that AWS has implemented, because
2145.219 -> people have a really hard time with accidentally making
things public on their buckets and getting
2150.509 -> a lot of sensitive stuff exposed. So we're gonna
go over here and hit edit, and we have a bunch
2154.77 -> of options. But we're first going to untick
block all public access, and this is gonna
2159.039 -> allow us to now make things public. So I hit
save, okay, and I have to type in confirm. Okay.
2168.109 -> And so now, if I go back to our bucket here
into the enterprise D and data, I now have
2173.479 -> the ability to make this public. So I'm going
to click make public. And so now, this file
2178.68 -> is public. And if I were to go back here and
refresh, okay, so there you go. So now I could,
2185.039 -> I could take this link and share it with you,
or anybody, and anyone in the world can now
2189.349 -> view that file. Okay. Great. So now that we
learned how to upload a file, or files, and
2198.88 -> how to make a file public, let's learn about
versioning.
2206.39 -> So we uploaded all these different files here.
But let's say we had a newer version of those
2212.14 -> files. And we wanted to keep track of that.
And that's where versioning is going to help
2215.43 -> us in s3. So, you know, we saw that I uploaded
these characters from Star Trek The Next Generation.
2223.45 -> And here, we actually have newer images
here of the characters, not all of them, but
2227.539 -> some of them. And so when I upload them, I
don't want the old ones to vanish, I want
2233.12 -> it to keep track of them. And that's where
versioning is going to come into play. So to turn
2237.609 -> on versioning, we're going to go to exam pro
000. And we're going to go to properties here
2242.64 -> and we have a box here to turn on versioning,
I just want you to notice that when you turn
2246.88 -> on versioning, you cannot turn it off, you
can only suspend it, okay. So what that means
2252.23 -> is that objects are still going to have version
history, it's just that you're not gonna be
2255.801 -> able to add additional versions, if you turn
it off, we'll go ahead and we're going to
2259.719 -> enable versioning here, and versioning is
now turned on. And you're going to notice
2263.559 -> now that we have this versions tab, here we
go hide and show and it gives us additional
2268.339 -> information here for the version ID. Okay,
so I'm going to go into the enterprise D,
2272.589 -> I hit show here, so I can see the versions.
And now you're gonna see, we've got a big
2276.359 -> mess here. Maybe we'll turn that off here
for a second. And we're going to go ahead
2279.2 -> and upload our new files here, which have
the exact same name, okay, so I'm gonna click
2283.68 -> and drag, and we're gonna hit upload, and
Oh, there they go. Okay, so we're gonna upload
2291.19 -> those files there. And now let's hit show.
And we can see that some of our files where
2295.779 -> we've done some uploading there have
additional versions. So you're gonna notice
2300.91 -> that the first file here actually has no version
ID, it's just null, okay, but the latest version
2306.369 -> does, it's just because these were the, the
initial files. And so the initial files are
2310.999 -> going to have null there. But you know, from
then on, you're going to have these new version
2317.359 -> IDs. Okay, so let's just see if we can see
the new version. So we're looking at data
2321.599 -> before. And what we want to see here is what
he looks like now. So I click open, and it's
2327.229 -> showing us the latest version of data. All
right. Now, if we wanted to see the previous
2331.94 -> version, I think if we drop down here, we see
the latest and prior versions, so we have some dates
2335.93 -> there, we click here, we hit open, and now
we can see the previous version of data. Okay.
2341.729 -> Now, one other thing I want to check here
is if we go up to this file here, and we were
2347.31 -> to click this link, is this going to be accessible?
No, it's not. Okay. Now, let's go look at
2352.92 -> the previous example. Now we had set this
to be public, is this one still public? It
2358.489 -> is okay, great. So what you're seeing here
is that when you do upload new files, you're
2362.819 -> not going to inherit the original properties,
like for the the public access, so if we want
2368.049 -> data to be public, we're going to have to
set the new one to be public. So we're going
2371.54 -> to drop down the version here and hit make
public. And now if we go open this file here,
2376.47 -> he should be public. So there you go, that
is versioning. Now, if we were to delete this
2380.719 -> file out of here, let's go ahead and delete
data out of here. Okay, so I'm going to hit
2386.049 -> actions and I'm going to delete data. So I'm
going to hit Delete. Notice it says the version
2391.71 -> ID here, okay? So I hit delete, and data is
still here. So if we go into data,
2397.41 -> and we hit open, okay, now we get the
old data, right? So we don't have
2404.369 -> the previous versions. I'm pretty sure we
don't. So we just hit open here open. Great.
2408.259 -> And if I go here and go this one open, okay,
so the specified version does not exist. So
2414.16 -> we, it still shows up in the console, but
the file is no longer there. Okay? Now
2419.829 -> let's say we wanted to delete this original
data file, right? Can we do that, let's go
2424.349 -> find out, delete, and we're gonna hit delete,
okay, and still shows the data's there, we're
2430.309 -> gonna hit the refresh, we're going to go in
there, and we're going to look at this version
2434.54 -> and open it. Okay, and so you can still see
it's there. So the thing about versioning
2439.27 -> is that it's a great way to help you protect
from the deletion of files. And it also allows
2447.42 -> you to, you know, keep versions of stuff and
those properties, you know, again, do not
2451.849 -> carry over to the next one. So, now that we've
learned about versioning, I think it'd be
2456.15 -> a bit of fun to learn about encryption. Okay.
Actually, just before we move on to encryption,
2461.279 -> I just want to double check something here.
So if we were to go to versions here, and
2465.289 -> I was saying, like, the initial version here
is null, what would happen if we uploaded
2469.599 -> a file for the first time? That's because, remember,
these were uploaded before we turned on
2475.89 -> versioning, right. And so they were set to
null. What happens when we upload a new file
2479.219 -> with versioning turned on? Is it going to
be null? Or is it going to have its own
2482.529 -> version ID? Okay, that's a kind of a burning
question I have in the back of my mind. And
2487.619 -> so I added another image here, we now have
Keiko, so we're going to upload Keiko. And
2492.38 -> we're going to see, is it going to be null, or
is it going to have an ID? And so look, it actually
2496.019 -> has a version ID. So the only reason these
are null is because they existed prior to versioning.
2503.73 -> Okay, so if you see null, that's the reason
why. But if you have versioning turned on,
2509.44 -> and then from then on, it's always going to
have a version ID.
2566.95 -> Alright, so let's explore how to turn
on server side encryption, which is very easy
2574.4 -> on s3. So we're going to go back to our bucket,
go to our properties, and then click on default
2579.569 -> encryption. And by default, you're gonna see
we don't have any encryption turned on. But
2582.839 -> we can turn on server side encryption using
AES-256, which uses a 256-bit
2591.259 -> key length. I'm always bad on
that description there. But the point is,
2595.94 -> it's 256 bits in length for encryption,
and then we have AWS KMS. We're going to turn
2601.809 -> on AES-256, because that's the easiest way
to get started here. But look at the warning
2605.67 -> up here, it says this property
does not affect existing objects in your bucket.
2610.339 -> Okay. And so we're going to turn on our encryption.
So we got our nice purple checkmark. And we're
2614.609 -> going to go back to our bucket to see if any
of our files are encrypted, which I don't
2618.209 -> think they're going to be based on that message
there. So we're going to go and check out
2621.91 -> data. And we can see that there is no server
side encryption. Okay. So in order to turn
2628.829 -> it on for existing files, I would imagine
it's gonna be the same process here, we'll
2632.63 -> go to properties. We're going to have encryption
here, and we're going to turn on AES-256.
2636.7 -> So you can see that you can set individual
encryption profiles. And you can also do it
2642.27 -> per bucket. And so we're going to go ahead
there and encrypt data there. Alright, so
2648.45 -> now if we were to go access this URL, do we
have permissions even though it is set to
2653.38 -> public? So remember, data is public, right?
But can we see it when encryption is turned
2657.599 -> on? And apparently, we totally can.
So encryption doesn't necessarily mean that
2663.509 -> the files aren't accessible, right? Because
we have made this file public, it's still viewable; encryption
just means that when files are at rest on the
2668.73 -> servers on AWS, they are going to be encrypted.
2674.78 -> Okay. So, you know, that is how easy it is
to turn on encryption. Now when it comes to
2681.069 -> accessing files via the CLI and KMS, there
is a little bit more work involved there.
2687.549 -> So you know, for that there's going to be
a bit more of a story there. But, you know,
2692.269 -> if we do get to the CLI, we'll talk about
that. Okay.
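For reference, the same default encryption can also be applied from the command line by passing `aws s3api put-bucket-encryption` a configuration shaped like this (treat the exact shape as a sketch; swap `AES256` for `aws:kms` plus a key ID to use KMS instead):

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": { "SSEAlgorithm": "AES256" }
    }
  ]
}
```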
2697.579 -> Now, I want to show you how to access private
files using pre-signed URLs. But just before
2705.63 -> we get to that, I figured this is a good opportunity
to learn how to use the CLI for s3. And we'll
2711.969 -> work our way to a pre signed URL. So here
I have my terminal open. And I already have
2717.459 -> the AWS CLI installed on my computer here.
So what we're going to do first is just list
all the buckets within our AWS account. So
2728.4 -> we can do aws s3 ls; ls stands for list. And
2728.4 -> then what's going to do is we're going to
see that single bucket that we do actually
2731.459 -> have here, if we wanted to see the contents
of it, we can just type aws s3 ls, and then
2736.489 -> provide the bucket name. And it's gonna show
us its content, which is a single folder.
2742.339 -> And then if we wanted to see within that folder,
you kind of get where we're going here, we
2745.809 -> can put that on the end there and hit enter.
Okay, and then we're gonna see all our files.
2750.239 -> So that's how easy it is to use ls. You're
going to notice over here, I do have a very
2756.039 -> slightly different syntax here, which is
using the s3:// protocol here in the front. This
2762.44 -> is sometimes needed for certain commands,
which we're going to find out here with cp in
2767.21 -> a moment. But not all commands require it,
okay; so for instance, in ls, we've omitted
2772.88 -> that protocol there. But yeah, moving on to
copying files, which CP stands for, we can
2779.43 -> download objects to and from our desktop here.
So let's go ahead and actually go download
2784.779 -> a Barclay from our bucket here. So I'm just
going to clear this here, and type aws
2789.519 -> s3 cp, we're gonna use that protocol, we
definitely have to use it for cp or we'll error
2794.039 -> out, and we'll do exam pro 000,
enterprise D here. And then it's
2803.39 -> going to be Barclay, okay. And we're just
going to want to download that fellow
2809.099 -> there to our desktop. Okay, and then we're
just gonna hit enter there. And it's just
2815.69 -> complaining, because I typed it in manually.
And I have a spelling mistake, we need an
2819.7 -> R there. And it should just download that
file. Great. So if we go check our desktop,
2825.059 -> there we go, we've downloaded a file from
our s3 bucket. Now, we want to upload a file.
2829.89 -> So down in here, I have an additional file
here called Q, and I want to get that into
2834.119 -> my bucket via the CLI. It's going to be the same
command, we're just going to do it in the
2838.21 -> reverse order here. So we're gonna do aws
s3 cp, and we're first gonna provide the
2842.279 -> file locally, we want to upload here, and
that's going to be enterprise d, q dot jpg,
2848.569 -> and we're going to want to send that to s3.
So we have to specify the protocol, the
2853.069 -> bucket name, the folder here that it's going
to be in, enterprise D, and make sure there won't
2858.299 -> be spelling mistakes this time. And we're
just going to put q dot jpg, okay. And we're
2862.079 -> going to send that out to s3 and you can
see it's uploaded, we're going to refresh.
2867.229 -> And there it's been added to our s3 bucket.
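As an aside, the s3:// form that cp insists on is just a URI whose host part is the bucket and whose path is the object key; a toy parser (not part of the CLI, just an illustration) makes the split explicit:

```python
from urllib.parse import urlparse


def split_s3_uri(uri: str):
    """Split an s3://bucket/key URI, as used by `aws s3 cp`, into its parts."""
    parts = urlparse(uri)
    if parts.scheme != "s3":
        raise ValueError("expected an s3:// URI")
    return parts.netloc, parts.path.lstrip("/")


print(split_s3_uri("s3://exampro-000/enterprise-d/q.jpg"))
# ('exampro-000', 'enterprise-d/q.jpg')
```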
Great. So now we know how to list things and
2874.749 -> upload or download things from s3. And now
we can move on to pre signed URL. So we saw
2879.539 -> earlier with data, we had access to data
here because he was public. So if we were
2886.74 -> to click this fellow here, we can access him
right. But let's say we wanted to access a
2891.65 -> Q that we just uploaded, right? And so
by default, they are private. Okay, so if
2896.46 -> I was to open this, I'm not going to be able
to see it, it's access denied, which is a
2900.209 -> good and sane default. But let's
say I wanted to give someone temporary access.
2906.969 -> And this is where pre-signed URLs come in. So
pre signed URLs, what it's going to do is
2910.4 -> going to generate a URL with the credentials
that we need to be able to temporarily
2916.039 -> access it. Okay. So if I were to copy this
aws s3 presign command here, we'll
2923.109 -> just type it out, it's not a big deal here.
And we're going to try to get access to
2930.839 -> this Q file here. So we're going to want to
do enterprise D.
2937.199 -> And we're gonna say q dot jpg, and we're
gonna put an expires-in on there;
2941.989 -> by default, I
2942.989 -> think it's like an hour or something. But
we want it to expire after 300 seconds. So
these links aren't
staying around there. Again, they're temporary,
2950.009 -> right? And I'm just going to hit enter there.
Um, and I've made a mistake, I actually forgot
2956.13 -> to write the word pre signed in there. Okay,
2959.559 -> what it's going to do is spit back
2959.559 -> a URL. So if we were to take this URL, right,
and then supply it up here, now we actually
2964.809 -> have access. So that's a way for you to provide
temporary access to private files. This is
2969.68 -> definitely a use case that you'd have if let's
say you had paid content behind, like a web
2974.63 -> application that you'd have to sign up to
gain access. And this is how you give them
2979.569 -> temporary access to whatever file they wanted.
And I just wanted to note, I think this is
2984.43 -> the case, where if we were to actually open
this here, so again, if we have this URL that
2989.001 -> has the Access ID, etc, up there, but if we
were to open it up via this tab, I think it
does the exact same thing. So it has a security
token here. So I guess maybe it's not exactly
2999.109 -> the same thing, but I was hoping maybe this
was actually also using a pre signed URL here.
But anyway, the point is that if you want
3007.42 -> temporary access to files, you're going to
3007.42 -> be using pre signed URLs.
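To make the moving parts concrete, here is a toy signer in the same spirit: the URL carries an expiry and an HMAC signature, and validation rejects expired or tampered links. This is a deliberately simplified illustration, not AWS's real SigV4 scheme; the secret, host name, and query parameter names are all made up:

```python
import hashlib
import hmac
import time
from urllib.parse import parse_qs, urlencode, urlparse

SECRET = b"demo-secret"  # stand-in for a signing key; real S3 derives SigV4 keys


def presign(key, expires_in, now=None):
    """Build a URL that embeds an expiry time and a signature over key+expiry."""
    expires = int(time.time() if now is None else now) + expires_in
    sig = hmac.new(SECRET, f"{key}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"https://bucket.example/{key}?" + urlencode(
        {"Expires": expires, "Signature": sig}
    )


def check(url, now):
    """Reject the URL if it is past its expiry or the signature does not match."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    expires = int(query["Expires"][0])
    if now > expires:
        return False  # the link has expired
    expected = hmac.new(
        SECRET, f"{parts.path.lstrip('/')}:{expires}".encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, query["Signature"][0])


url = presign("enterprise-d/q.jpg", 300, now=1000)
print(check(url, now=1100))  # True: inside the 300-second window
print(check(url, now=1400))  # False: expired
```

The design point carries over to the real thing: because the expiry is covered by the signature, a client cannot simply edit the Expires value in the URL to extend their access.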
3014.739 -> So we uploaded all our files here into this
bucket. And when we did so, it automatically
3020.279 -> went to the storage class standard by default.
So let's say we want to change the storage
3025.829 -> class for our objects, we're not going to
do that at the bucket level, we're going to
3029.619 -> do it at the object level here. So we're gonna
go to properties here for Guinan. And all
3035.309 -> we have to do here is choose the class that
we want, Standard-IA. And we're going to
3040.519 -> hit save, and now we can start saving money.
So that's all it takes to switch storage classes.
3046.289 -> But let's say we want to automate that process.
Because if we were handling a lot of log files,
3051.97 -> maybe after 30 days, we don't really need
them anymore, but we need to hold on
3055.329 -> to them for the next seven years. And that's
where lifecycle policies are going to come
3059.18 -> in play. So what we're going to do is we're
going to go back to our bucket, we're going
3062.849 -> to go to management. Here, we have lifecycle,
and we're going to add a new lifecycle rule.
3067.459 -> So we'll say, here, we'll do just for something
simple, say so after 30 days, so 30 day rule.
3073.849 -> And we could limit the scope of what files
we want. So if we wanted to say just enterprise
3077.89 -> D here, we could do enterprise D, okay, that's
not what I'm going to do. I'm just going to
3083.42 -> say, all files within the actual bucket here,
go next. And then we can choose the storage
3088.069 -> class. So transition. So here, we have to
decide whether it's the current version of
3092.92 -> the previous versions, okay, and so I'm just
gonna say it's gonna be the current version.
3097.299 -> All right, always the current version here,
we're going to add a transition, and we're
3100.68 -> going to move anything that's in standard
into Standard-IA, and it's going to
3104.539 -> be after 30 days, I don't think you can go
below that, if I try seven here, see, so the
3109.14 -> minimum value here has to be 30. So we're
gonna have to set it to 30. I think we saw
3114.27 -> those minimums in the actual
storage class when we were setting them. So
3121.24 -> if you're wondering what those are, they're
probably over there. But we'll just hit next
3125.049 -> here. And so then after we're seeing that
it's been transitioned, after 30 days, it's
3131.589 -> going to move to that. And we can
also set an expiration. So we don't necessarily
3139.059 -> need to set this but this is if we wanted
to actually delete the file. So after a set number of
3142.559 -> days, we could then say to completely delete
the files, which is not what we're going to
3148.299 -> do, we're just going to hit next. And click
that. And now we have a rule that is going
3155.47 -> to automate the moving of our files from one
storage class to another. Alright, so there
3161.17 -> you go.
3163.839 -> So we're gonna learn how to set up cross region
replication here. And so this is going to
3170.599 -> allow us to copy files from one bucket
to another bucket. And this could be in another
3175.739 -> region and in another AWS account, okay, so
there's a few different possibilities here
3182.019 -> as to why we'd want to do that. But let's
just learn how to actually do it. To do
3186.4 -> so, we're going to need to create a replication rule.
But before we can do that, we're going to
3189.599 -> need a destination bucket. So we're going
to go back here to s3, we're going to create
3194.19 -> a new bucket, I'm just going to call it exam
pro BBB. And I'm going to set this to Canada
3199.269 -> Central, okay, if this name is not available,
you're just going to have to come up with
3203.319 -> your own names here. But just make sure you're
setting it to another region for this
3208.63 -> example here. And so now we have a bucket
in the States and in Canada, and we're almost
3213.809 -> ready to go, we just have to make sure that
we have versioning turned on in both buckets,
3217.14 -> both the source and destination. So we'll
go here to our new bucket, turn on versioning.
3221.999 -> Okay, and I already know that we have versioning
turned on in our source. But we'll just take
3226.52 -> a quick look here. So here it is turned on.
And so now we are ready to turn on cross region
3232.079 -> replication. So we'll go ahead and create
our rule in our source bucket, our source
3236.559 -> bucket is selected here. Then we'll go next.
And we will choose our destination bucket.
3242.68 -> And so now we have a couple options here,
which can happen during replication. So we
3248.339 -> can actually change the storage class, which
is a good idea if you want to save money.
3251.469 -> So the other bucket is just like your backup
bucket. And you don't plan to really use those
3256.819 -> files, you probably would want to change the
storage class there to save money. And you
can also send this to someone else's bucket
3267.93 -> in another AWS account. So maybe your
use case is this bucket has files and you
want to provide it to multiple clients. And
3273.109 -> so you've used that replication rule to replicate
3278.14 -> it to their buckets, okay. But
this is just going to be our bucket for the
time being. And we're going to go ahead here
and create a new rule. And we'll just call
3290.51 -> it CRR us to Canada. Okay. We will just create
a new role, so we have permissions to go ahead
and do that there. And we'll get a nice little
3295.4 -> summary here and hit save. And we will wait
3300.65 -> and we'll cross our fingers. Ah, the replication
configuration was not found; so this sometimes
happens, it's not a really big deal here.
3305.539 -> So just go back to replication. And it actually
did work, I think. So, sometimes what happens
is, the role isn't created in time. So you
know, sometimes it's green, and sometimes
3318.119 -> it's red, but just come back and double check
here because it definitely is set. So now
3321.569 -> we have replication set up. So now we're going
to learn how to set up bucket policies. So
3331.459 -> we can create custom rules about the type
of access we want to allow to our buckets.
3336.239 -> So in order to do so we're going to go to
our exam pro 000 bucket, we're going to go
3340.48 -> to permissions, and we're going to go to bucket
policy, okay. And so this is where we're going
3344.789 -> to provide a policy in the format of a JSON
file here, it's very hard to remember these.
3350.099 -> So luckily, they have a little policy generator
down here, I'm gonna open it in a new tab.
3352.4 -> And we're going to drop this down to s3, and
we're going to define what kind of access
3359.069 -> control we want to have. So let's say we wanted
to deny anyone being able to upload new files
3365.989 -> in this bucket. I don't know why you'd want
to do that. But maybe there's a use case.
3368.599 -> So we're gonna say Deny, we're gonna give
it an asterisk here. So we can say this applies
3374.14 -> to everyone. The service is going to be s3,
of course, and we're going to look for the
3378.119 -> actions here. So we're just going to look
for the puts. So we'll say put bucket ACL.
3385.01 -> And there should just be a regular put in
here. Oh, that's bucket ACL. We want
3389.41 -> objects. So we say PutObject and
PutObjectAcl so we can't upload files. And
3394.369 -> we'll have to provide the ARN, and they give
you a bit of an indicator as to what the format
3398.829 -> is here. That's going to be exampro-000,
forward slash, asterisk, so it's gonna
3404.459 -> say any of the files within that bucket. And
we're gonna go add that statement and generate
3409.13 -> that policy. And now we have our JSON. Okay,
so we'll copy that, go back over here, paste
3415.279 -> it in, save it, cross your fingers, hope it
works. And it has saved Yeah, so you don't
3420.049 -> get like a response here. I'm just gonna save
again, and then just refresh it to just be
3424.229 -> 100% sure here and go back to your bucket
policy. And there it persists. So our bucket
3428.109 -> policy is now in place. So we should not be
able to upload new files. So let's go find
3433.2 -> out if that is actually the case. So here
I am in the overview here in enterprise-d.
3439.119 -> And I want to upload a new file to it to see.
So let's go to our enterprise-d, and we have
3444.689 -> a new person here, we have Tomalak, he is
a Romulan, and we do not want him in the enterprise-d
3450.539 -> bucket here. So we're going to drag it over here
and see what happens, we're going to hit upload.
3457.96 -> Okay. And you're going to see that it's successfully
uploaded. Okay,
3462.22 -> so we're going to go ahead and do a refresh
here and see if it actually is there. And
3468.549 -> it looked like it worked. But I guess it didn't,
because we do have that policy there. So there
3472.44 -> you go. But let's just to be 100% sure that
our policy actually is working, because I
3478.119 -> definitely don't see it there. We're gonna
go back there. And we're just going to remove
3481.03 -> our policy. Okay, so we're gonna go ahead
here and just delete the policy, right? It's
3486.03 -> weird that the interface shows it to you as if,
you know, it's actually working. And so Tomalak
3490.509 -> is definitely not in there. But our policy
has been removed. So now if we were to upload
3494.39 -> it, Tom should be able to infiltrate the enterprise
D bucket here, we're going to do an upload
3499.209 -> here. Okay. Let's see if we get a different
result. And there it is. So there you go.
3505.989 -> So our bucket policy was working. You can
see that AWS can be a little bit misleading,
3512.839 -> so you do have to double check things; that
happens for me all the time. But there you
3517.241 -> go, that is how you set up bucket policies.
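The policy the generator produces looks roughly like the following. This is a sketch built in Python just to show the shape of the JSON document; the bucket name exampro-000 comes from the demo, and the Sid label is made up:

```python
import json

# Sketch of the deny policy from the demo. The Sid is a made-up label;
# "exampro-000" is the bucket used in the walkthrough.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUploads",
            "Effect": "Deny",
            "Principal": "*",  # applies to everyone
            "Action": ["s3:PutObject", "s3:PutObjectAcl"],
            "Resource": "arn:aws:s3:::exampro-000/*",  # any object in the bucket
        }
    ],
}

# The console's bucket policy editor expects a JSON document like this.
print(json.dumps(bucket_policy, indent=2))
```

Pasting the printed JSON into the bucket policy editor gives the same result as the policy generator in the demo.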
3524.089 -> So we are on to the s3 cheat sheet. And this
is a very long cheat sheet because s3 is so
3529.309 -> important to the AWS associate certification,
so we need to know the service inside and
3535.449 -> out. So s3 stands for simple storage service.
It's an object based storage, and allows you
3542.671 -> to store unlimited amounts of data without
worrying of the underlying storage infrastructure.
3547.329 -> s3 replicates data across at least three availability
zones to ensure 99.99% availability and 11
3554.119 -> nines of durability. Objects contain your data,
so you can think of objects like files, and
3560.499 -> objects can be sized anywhere from zero bytes
to five terabytes, I've highlighted zero bytes
3565.96 -> in red because most people don't realize they
can be zero bytes in size. Buckets contain
3571.859 -> objects and buckets can also contain folders,
which can in turn contain objects. And you
3577.17 -> can also just think of buckets themselves
as folders. Bucket names are unique across
3582.439 -> all AWS accounts. So you can treat them like
domain names. So your bucket name
3588.41 -> has to be unique within the entire
world. When you upload a file to s3 successfully,
3594.519 -> then you'll receive an HTTP 200 code. Then there's the Lifecycle
Management feature. So this allows you to move
3601.809 -> objects between different storage classes.
And objects can be deleted automatically based
3607.019 -> on a schedule. So you will create
lifecycle rules or policies to make that happen.
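A lifecycle rule pairing transitions with a scheduled expiration looks roughly like this. A sketch only: the rule ID, prefix, and day counts are invented for illustration.

```python
# Sketch of a lifecycle configuration: move objects to cheaper storage
# classes over time, then delete them on a schedule. The rule ID,
# prefix, and day counts are invented for illustration.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},   # only objects under logs/
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},     # automatic scheduled delete
        }
    ]
}

print(lifecycle_config["Rules"][0]["ID"])
```

Each rule moves matching objects between storage classes as they age, then deletes them, which is exactly the move-then-delete behavior described above.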
3614.759 -> then you have versioning. So this allows you
to have
3617.059 -> version IDs on your objects. So when you upload
a new object overtop of an existing object,
3625.839 -> the old object will still remain; you can
access any previous object based on its
3630.96 -> version ID. When you delete an object, the
previous object will be restored. Once you
3635.45 -> turn on versioning, it cannot be turned off; it
can only be suspended. Then we have MFA delete.
3641.619 -> So this allows you to enforce all delete operations
to require an MFA token in order to delete
3648.269 -> an object, so you must have versioning turned
on to use this, you can only turn on MFA delete
3654.499 -> from the AWS CLI, and it's really just the root
account or the root user who's allowed to
3659.47 -> delete these objects. All new buckets are
private by default, logging can be turned
3665.039 -> on on a bucket. So you can track all the operations
performed on objects. Then you have access
3671.809 -> control, which is configured using either
bucket policies or access control lists. So
3676.38 -> we have bucket policies, which are JSON documents,
which let you write complex access control.
3683.329 -> Then you have ACLs. And they are the legacy
method. They came before bucket
3688.189 -> policies. And they're not deprecated, so
there's no fault in using them, but they're
3692.14 -> just not used as often anymore. And they allow
you to grant object access to objects and
3697.869 -> buckets with simple actions. And so now we're
on to the security portion. So security in
3704.02 -> transit is something you have with s3, because
all the files uploaded are done over SSL.
3709.769 -> And so you have SSE, which stands for server
side encryption. s3 has three options for
3714.599 -> SSE. We have SSE-AES, and so s3 handles the
key itself, and it uses the AES-256 algorithm
3724.519 -> as the encryption method. Then you have SSE-
KMS, and as the name implies, it is using
3730.799 -> a key management service, which is an envelope
encryption service. And so AWS manages
3735.97 -> the key and so do you. Then you have SSE-C,
and the C stands for customer. So it's
3741.65 -> a customer provided key, you actually upload
the key, and you have full control over key
3746.699 -> but you also have to manage that key. All
right, s3 doesn't come with client side encryption,
3751.969 -> it's up to you to encrypt your files locally,
and then upload them to s3, you could store
3756.809 -> your client side key in kms. So that is an
option for you. But it's not that important
3761.979 -> to actually have here on the cheat sheet.
You have also cross region replication. This
3767.549 -> allows you to replicate files across regions
for greater durability, you must have versioning
3772.96 -> turned on in the source and destination bucket
in order to use cross region replication.
3777.91 -> And you can replicate a source bucket to a
bucket in another AWS account, then you have
3784.599 -> transfer acceleration. This provides fast
and secure uploads from anywhere in the world
3788.63 -> data is uploaded via a distinct URL to an
edge location. And data is then transported
3793.019 -> to your s3 bucket via the AWS backbone network,
which is super fast, then you have pre signed
3798.199 -> URLs. And this is a URL generated via the
AWS CLI or SDK; it provides temporary access
3803.24 -> to write or download an object, like uploading data
to that actual object via that endpoint. Pre
3809.73 -> signed URLs are commonly used to access private
objects. And the last thing is our storage
3814.859 -> classes. And we have six different kinds of
storage classes, starting with standard. And
3819.619 -> that's the default one. And it's fast. It
has 99.99% availability, 11 nines of durability,
3825.739 -> you access files within the milliseconds and
it replicates your data across at least three
3830.039 -> azs. Then you have the intelligent tiering
storage class. And this uses machine learning to analyze
3835.979 -> your object usage and determine the appropriate
storage class to help you save money, and it just moves objects
3840.929 -> to those other storage classes, which we're covering
now. Then you have standard infrequent
3845.319 -> access, abbreviated to IA. It's just as fast as
standard. It's cheaper to access files, if
3852.799 -> you're only accessing files less than once
a month. So just one file in the month, if
3857.549 -> you access it twice, now it's the same cost
as standard, probably a little bit more because
3860.809 -> there's an additional retrieval fee when
you try to grab those files. It's 50% less
3867.75 -> than standard. The trade off here is reduced
availability. Then you have one zone IA.
3872.059 -> And as the name implies, it's not replicated
across at least three azs; it's
3877.61 -> only in one az, so it's going to be super
fast. And the trade off here is it's going
3883.299 -> to be 20% cheaper than standard IA, but now
you also have reduced durability. And again,
3889.73 -> it has a retrieval fee. Then you have glacier
and Glacier is for long term cold storage.
3895.969 -> It's archival storage, and it's very,
very cheap. The tradeoff here is that it's
3901.46 -> going to take between minutes to hours for
you to actually access your files if you need
3906.01 -> them. Then you have glacier deep archive.
It is the cheapest storage
3910.949 -> class on our list, and you can't
access your files for up to 12 hours, so
3918.079 -> that's how long it's going to take before
you can use them. So that is the s3 cheat
3921.249 -> sheet. It was a very long cheat sheet. But
there's a lot of great information here. So
3925.339 -> hey, this is Andrew Brown from exam Pro. And
we are looking at AWS snowball, which is a
3934.39 -> petabyte scale data transfer service. So move
data onto AWS via a physical briefcase-sized
3940.579 -> computer. Alright, so let's say you needed
to get a lot of data onto AWS very quickly
3946.16 -> and very inexpensively. Well, snowball is
going to help you out there because if you
3951.029 -> were to try and transfer 100 terabytes over
a high speed internet to AWS, it could take
3956.519 -> over 100 days where with a snowball, it will
take less than a week. And again, for cost.
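You can sanity-check that speed claim with some back-of-the-envelope math. The 100 Mbps link speed below is my assumption for what "high speed internet" means, not a number from the course:

```python
# Rough transfer time for 100 TB over a sustained uplink.
# The 100 Mbps figure is an assumption, not a number from the course.
data_tb = 100
terabyte_bits = 1_000_000_000_000 * 8      # 1 TB (decimal) in bits
link_bps = 100 * 1_000_000                 # 100 Mbps in bits/second

seconds = data_tb * terabyte_bits / link_bps
days = seconds / 86_400

print(f"{data_tb} TB at 100 Mbps takes about {days:.0f} days")
```

That works out to roughly 93 days of continuous transfer at full line rate, which lines up with the "over 100 days" figure once real-world throughput overhead is factored in.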
3962.089 -> If you had to transfer 100 terabytes over
high speed internet, it's gonna cost you 1000s
3965.709 -> of dollars, where snowball is going to reduce
that cost by 1/5. Now we'll just go through
3970.13 -> some of the features of snowball here, it
does come with an E ink display, it kind of
3974.619 -> looks like your shipping label, but it is
digital, which is kind of cool. It's tamper
3979.619 -> and weatherproof. The data is encrypted end
to end using 256-bit encryption. It
3984.13 -> has a Trusted Platform Module, TPM is just
this little chip here. And as it says here, it's an
3990.13 -> endpoint device that stores RSA encryption
keys specific to host systems for hardware
3994.91 -> authentication. So that's a cool little hardware
feature. And for security purposes, data transfers
3999.41 -> must be completed within 90 days of the snowball
being prepared. And this data is going to
4003.88 -> come into s3, you can either import or export.
So not only can you, you know, use this to get
4010.339 -> data into the cloud, it can be a way for you
4014.2 -> to get data out of the cloud. snowball comes
4014.2 -> in two sizes, we have 50 terabytes and 80
terabytes. Now you don't get to utilize all
4018.88 -> the space on there. So in reality, it's really
42 terabytes and 72 terabytes. And you're
4023.779 -> going to notice that it said this was a petabyte
scale migration; well, they're suggesting
4029.18 -> that you use multiple snowballs to
get to petabytes. So you don't transport
4035.579 -> petabytes in one snowball, it's going to take
multiple snowballs to do that. Alright. Now
4044.449 -> we're going to take a look here at AWS snowball
edge, which again, is a petabyte scale data
4048.849 -> transfer service to move data onto AWS via
a physical briefcase-sized computer, but it's going
4053.14 -> to have more storage and onsite compute capacity
capabilities. So just looking at snowball
4058.349 -> edge here in greater detail, you're going
to notice one aesthetic difference is that
4062.38 -> it has these little orange bars, maybe that's
the way to distinguish snowball from snowball
4065.829 -> edge, but it's similar to snowball, but with
more storage and with local processing. So
4070.539 -> going through these features, instead of having
an E Ink display, it's going to have an LCD
4075.93 -> display. So again, it's the shipping information.
But with other functionality. The huge advantage
4080.66 -> here is that it can undertake local processing
and edge computing workloads. It also has
4085.489 -> the ability to cluster so you can get a bunch
4090.469 -> of these snowball edges and have them work on
4090.469 -> a single job kind of like having your own
little mini data center up to five to 10 devices.
4095.499 -> And it comes in three options for device configuration.
So you can optimize for storage compute or
4099.94 -> with GPU optimization. And the CPU amounts
are going to change in this device based on
4105.9 -> what you need. And snowball edge comes in
two sizes. We have 100 terabytes, that's 83
4110.67 -> terabytes of usage space. And then we have
the clustered version, and it's a
4115.85 -> fewer amount of terabytes. But of course,
you're gonna be using this in clusters, so
4119.18 -> there's good reasons for that. So there you
go. That's snowball edge. Now we're taking a look
4127.15 -> here at snowmobile, and it is a 45 foot long
shipping container pulled by a semi trailer
4132.52 -> truck. It can transfer up to 100 petabytes
per snowmobile. So in order to get to exabytes,
4138.13 -> you're gonna need a few of these, but it definitely
is feasible. And it has some really cool security
4142.5 -> features built in. We have GPS tracking, alarm
monitoring, 24 seven video surveillance and
4147.52 -> an escort security vehicle while in transit.
Now that is an optional feature. I don't know
4151.5 -> if it costs more, but it's definitely sounds
really cool. So you know, just to wrap this
4156.65 -> up here, eight of us personnel will help you
connect your network to the snowmobile. And
4161.02 -> when data transfer is complete, we'll drive
it back data bus and import it into s3 or
4165.23 -> s3 Glacier. So
4166.251 -> there you are. I'm for the cheat sheet and
it's for snowball, snowball edge and snowmobile.
4175.32 -> So let's jump into it. So snowball and snowball
edge is a rugged container, which contains
4179.31 -> a storage device. snowmobile is a 45 foot
long ruggedized shipping container pulled
4184.73 -> by a semi trailer truck. snowball and snowball
edge are for petabyte scale migration, whereas
4190.089 -> snowmobile is for exabyte scale migration.
So the advantages here with snowball is low
4196.17 -> cost: it costs 1000s of dollars to transfer 100 terabytes
over high speed internet, and snowball comes at
4201.15 -> 1/5 of the price. Then we have speed: 100 terabytes
takes over 100 days to transfer over high speed
4205.23 -> internet. Or you can use snowball, which takes
less than a week. And so then we talked about
4210.25 -> snowball here. So snowball comes in two sizes,
we have 50 terabytes and 80 terabytes, but
4214.19 -> the actual usable space is less so it's 42
and 72. Then you have snowball edge, it comes
4219.28 -> in two sizes, we have 100 terabytes and 100
terabytes clustered. And then the usable space
4223.91 -> here is gonna be 83 and 45 terabytes. snowmobile comes
in one size 100 petabytes per vehicle, and
4230.22 -> you can both export or import data using snowball
or snowmobile, okay, and that also includes
4235.32 -> snowball edge there, you can import into s3
4241.65 -> or glacier. snowball edge can undertake local
processing and edge computing workloads.
snowball edge can be used in a cluster
4247.52 -> in groups of five to 10 devices. And snowball
edge provides three options for device configurations,
4252.7 -> we have storage optimized, compute optimized
and GPU optimized and the variation there
4258.35 -> is going to be how many CPUs are utilized
and the GPU one is going to have more GPUs on
4262.96 -> board there. So there you go, that is
your snowball, snowball edge and snowmobile.
4269.1 -> Hey, this is Andrew Brown from exam Pro. And
we are looking at Virtual Private Cloud known
4277.76 -> as VPC. And this service allows you to provision
logically isolated sections of your AWS
4284.4 -> cloud where you can launch AWS resources
in a virtual network that you define.
4295.83 -> So here we are looking at an architectural
diagram of a VPC with multiple networking
4300.63 -> resources or components within it. And I just
want to emphasize how important it is to learn
4306.77 -> VPC and all components inside and out because
it's for every single AWS certification
4312.35 -> with the exception of the cloud practitioner.
So we definitely need to master all these
4317.43 -> things. So the easiest way to remember what
a VPC is for is think of it as your own personal
4322.91 -> data center, it gives you complete control
over your virtual networking environment.
4327.43 -> All right, so the idea is that we have internet,
it flows into an internet gateway, it goes
4332.23 -> to a router, the router goes to a route table,
the route table passes through a NACL. And
4337.24 -> the NACL sends the traffic to the public
and private subnets. And your resources could
4343.43 -> be contained within a security group all within
a VPC. So there's a lot of moving parts. And
4347.68 -> these are not even all the components. And
there's definitely a bunch of different configurations
4351.97 -> we can look at. So looking at the core components,
these are the ones that we're going to learn
4356.56 -> in depth, and there are a few more than these,
but these are the most important ones. So
4361.32 -> we're going to learn what an internet gateway
is. We're gonna learn what a virtual private
4365.75 -> gateway is, route tables, NACLs, security
groups, public and private subnets, Nat gateway
4372.68 -> and instances customer gateway VPC endpoints
and VPC peering. So, this section is very
4379.79 -> overwhelming. But you know, once you get it
4385.68 -> down, it's pretty easy going forward.
4385.68 -> So we just need to master all these things
and commit them to memory. Now that we kind
4393.45 -> of have an idea, what's the purpose of VPC,
let's look at some of its key features, limitations
4399.41 -> and some other little things we want to talk
about. So here on the right hand side, this
4402.73 -> is the form to create a VPC, it's literally
four fields. It's that simple. You name it,
4407.11 -> you give it an address, you can also give
it an additional ipv6 address;
4413.21 -> it's either this one or both of these. And you can
set its tenancy to default or dedicated,
4418.54 -> dedicated, meaning that it's running on dedicated
hardware. If you're an enterprise, you might
4423.13 -> care about that. This is what the ipv6 CIDR
block would look like, because you don't enter
4427.28 -> it in; Amazon generates one for you. So VPCs
are region specific. They do not span regions,
4434.77 -> you can create up to five VPCs per region.
Every region comes with a default VPC, you
4439.81 -> can have 200 subnets per VPC, that's a lot
of subnets. You can create, as we said here,
4446.42 -> an ipv4 CIDR block; you actually have to
create one, it's a requirement. And in addition,
4450.732 -> you can provide an ipv6 CIDR block. It's
good to know that when you create a VPC, it
4457.42 -> doesn't cost you anything. That goes the same
for route tables, NACLs, internet gateways,
4461.941 -> security groups, subnets and VPC peering. However,
there are resources within the VPC that are
4468.03 -> going to cost you money such as Nat gateways,
VPC endpoints, VPN gateways, customer gateways,
4472.56 -> but most the time you'll be working with the
ones that don't cost any money so that there
4477.25 -> shouldn't be too much of a concern of getting
over billed. One thing I do want to point
4481.96 -> out is that when you do create a VPC, it doesn't
have DNS host names turned on by default.
4487.89 -> If you're wondering what that option is for
what it does is when you launch EC2 instances:
4492.72 -> so here down below, I have an EC2
instance and it will get a public IP, but it
4498.94 -> will only get a Public DNS, which looks like
a domain name, like an address, and that's
4503.46 -> literally what it is. But if this isn't turned
on, that EC2 instance won't get one. So
4508.43 -> if you're wondering, why isn't that there,
it's probably because your DNS host names are
4512.01 -> disabled, and they are disabled by default;
you just got to turn that on.
4519.5 -> So we were
4520.5 -> saying earlier that you get a default VPC
for every single region. And the idea behind
4524.85 -> that is so that you can immediately launch
EC2 instances without having to really
4529.13 -> think about all the networking stuff you have
to set up. But for AWS certification, we do
4534.41 -> need to know what is going on. And it's not
just a default VPC; it comes with other things
4539.56 -> and with specific configurations. And we definitely
need to know that for the exams. So the first
4544.6 -> thing is it creates a VPC of CIDR block size
/16. We're going to also get default subnets
4550.56 -> with it. So for every single AZ in that region,
we're going to get a subnet per AZ, and they're
4556.41 -> gonna be of CIDR block size /20. It's going
to create an internet gateway and connect
4560.89 -> it to your default VPC. So that means that
our instances are going to reach the internet,
4565.54 -> it's going to come with a default security
group and associated with your default VPC.
4569.29 -> So if you launch an EC2 instance, it will
automatically default to that security group
4574.33 -> unless you override it. It will also come,
by default, with a NACL associated
4579.56 -> with your VPC, and it will also have default DHCP options.
One thing that is implied is that it
4588.25 -> comes with a main route table, okay? So when
you create a VPC, it automatically comes with
4592.06 -> the main route table, so I would assume that
that comes by default as well. So there you
go, people.
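Those CIDR sizes are easy to check with Python's stdlib ipaddress module. 172.31.0.0/16 is the range the default VPC typically uses:

```python
import ipaddress

# Default VPC CIDR (/16) versus a default subnet (/20).
vpc = ipaddress.ip_network("172.31.0.0/16")
subnet = ipaddress.ip_network("172.31.0.0/20")

print(vpc.num_addresses)     # 65536 addresses in the /16 VPC
print(subnet.num_addresses)  # 4096 addresses per /20 subnet

# A /16 divides into sixteen /20 subnets.
print(len(list(vpc.subnets(new_prefix=20))))  # 16

# 0.0.0.0/0 (covered next) matches every possible IPv4 address.
print(ipaddress.ip_address("203.0.113.9") in ipaddress.ip_network("0.0.0.0/0"))  # True
```

Note that AWS reserves five addresses in every subnet (the network address, the first three hosts, and the broadcast address), so each /20 actually yields 4,091 usable addresses.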
4603.06 -> So I just wanted to touch on this 0.0.0.0/0
here, which is also
4607.48 -> known as default. And what it is, is it represents
all possible IP addresses. Okay, and so you
4614.18 -> know, when you're doing AWS networking,
you're going to be using this with the IGW
4621.29 -> to have a route, routing traffic through the
IGW to the internet. When you're using a security
4626.69 -> group, when you set up your inbound rules,
you're going to set 0.0.0.0/0 to allow any
4631.84 -> traffic from the internet to access your public
resources. So anytime you see this, just think
4638.061 -> of it as giving access from anywhere or the
internet. Okay. We're looking at
4647.17 -> VPC peering, which allows you to connect one
VPC to another over direct network route using
4652.94 -> private IP addresses. So the idea is we have
VPC A and VPC B, and we want them so
4660.06 -> that they behave like they're on the same
network. And that's what a VPC peering connection
4664.79 -> allows us to do. So it's very simple to create
a peering connection, we just give it a name,
4670.65 -> we say what we want as the requester, so
that could be VPC A, and then what we want as the
4675.19 -> accepter, which could be VPC B, and we can
say whether it's in my account, or another
4680.46 -> account, or this region or another region.
So you can see that allows VPCs from same
4685.25 -> or different regions to talk to each other.
There are some limitations around the configuration.
4692.67 -> So you know, when you're peering, you're using
star configuration, so you'd have one central
4696.34 -> VPC and then you might have four around it.
And so for each one, you're going to have
4700.36 -> to have a peering connection. There's no transitive
peering. So what does that mean? Well, the
4706.02 -> idea is like, let's say VPC C wants to talk
to VPC B; the traffic's not going to flow
4712.41 -> through A, you actually would have to create
another direct connection from C to B. So
4719.94 -> it's only to the nearest neighbor, where that
communication is going to happen. And you
4724.43 -> can't have overlapping CIDR blocks. So if
these had the same CIDR block, this was 172.31,
4729.42 -> and this was 172.31, we're gonna have a conflict
and we're not gonna be able to talk to each
4734.56 -> other. So that is the VPC peering in a nutshell.
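The no-transitive-peering rule is easy to model: peering connections are direct edges, and traffic never transits a middle VPC. This sketch uses made-up VPC names, not real AWS resources:

```python
# Peering connections as an undirected edge set (made-up VPC names).
peerings = {frozenset({"A", "B"}), frozenset({"A", "C"})}

def can_talk(vpc1: str, vpc2: str) -> bool:
    """Peering is not transitive: traffic only flows over a direct edge."""
    return frozenset({vpc1, vpc2}) in peerings

print(can_talk("B", "A"))  # True  - a direct peering connection exists
print(can_talk("C", "B"))  # False - no transit through A; C needs its own peering to B
```

This is why a hub-and-spoke ("star") layout needs one peering connection per spoke, plus extra direct connections if the spokes must reach each other.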
4737.99 -> Alright, so we're taking a look here at route
tables. The route tables are used to determine
4746.75 -> where network traffic is directed, okay. And
so each subnet in your VPC must be associated
4753.71 -> with a route table. And a subnet can only
be associated with one route table at a time,
4758.79 -> but you can associate multiple subnets
with the same route table. Alright, so now
4764.87 -> down below, I have just like the most common
example of where you're using route tables.
4769.08 -> And that's just allowing your EC2 instances
to gain access to the internet. So you'd have
4775.88 -> a public subnet where that EC2 instance
resides, and that's going to be associated
4780.13 -> with a route table. That route table
is going to have routes in here. And here
4784.94 -> you can see we have a route, which has the
internet gateway attached that allows access
4792.33 -> to the internet. Okay, so there you go. That's
all there is to it. We're taking a look at
4801.25 -> Internet gateways. An internet gateway allows your
VPC access to the internet, and an IGW does
4806.75 -> two things. It provides a target in your VPC
route tables for internet routable traffic.
4812.69 -> And it can also perform network address translation
Nat, which we'll get into in another section
4817.59 -> for instances that have been assigned a public
ipv4 address. Okay, so down below here, I
4824.08 -> have a representation of how an IGW works. So
the idea is that we have internet over here
4831.06 -> and to access the internet, we need an internet
gateway, but to route traffic from our EC
4836 -> two instances or anything, they're gonna have
to pass through a route table to get to a
4840.16 -> router. And so we need to create a new route
in our route table for the IGW. So the
4847.6 -> igw-id identifies that resource, and then
we're going to give it 0.0.0.0/0 as the
4854.44 -> destination. Alright, so that's all there
is to it. So we talked about how we could
4863.39 -> use Nat gateways or Nat instances to gain
access to the internet for our EC2 instances
4868.83 -> that live in a private subnet. But let's say
you wanted to SSH into that EC2 instance;
4874.31 -> well, it's in a private subnet, so it doesn't
have a public IP address. So what you need
4878.63 -> is you need an intermediate EC2 instance
that you're going to SSH into. And then you're
4883.05 -> going to jump from that box to this one, okay?
And that's why bastions are also known as
jump boxes. And this EC2 instance for
the bastion is hardened. So it should be very,
4894.06 -> very secure, because this is going to be your
point of entry into your private EC2 instances.
4900.52 -> And some people might ask, well,
Nat gateways we obviously can't
4905.86 -> turn into bastions, but a NAT instance
is just an EC2 instance, so couldn't you
4910.12 -> have it double as a bastion? And it
is possible, but generally, the way you
4917.51 -> configure NATS and also, from a security perspective,
you'd never ever want to do that, you'd always
4922.82 -> want to have a different EC2 instance
as your bastion. Now, there is a service called
4930.03 -> Systems Manager Session Manager, and it replaces
the need for bastions so that you don't have
4935.03 -> to launch your own EC2 instances. So generally,
that's recommended in AWS. But you know, bastions
4941.14 -> are still being commonly used throughout a
lot of companies, because they need to meet
4946.15 -> whatever their requirements are, and they're
just comfortable with them. So there you go.
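In practice, jumping through a bastion is usually done with OpenSSH's ProxyJump option (the same thing as `ssh -J`). A sketch with made-up host names, IPs, and key paths:

```
# ~/.ssh/config - host names, IPs, and key paths here are made up
Host bastion
    HostName 203.0.113.10           # bastion's public IP
    User ec2-user
    IdentityFile ~/.ssh/bastion.pem

Host private-app
    HostName 10.0.2.15              # instance in the private subnet
    User ec2-user
    ProxyJump bastion               # hop through the bastion first
```

With that in place, `ssh private-app` tunnels through the bastion automatically instead of requiring two manual SSH hops.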
4952.04 -> So we're gonna take a look at Direct Connect,
and Direct Connect is an AWS solution
4958.08 -> for establishing dedicated network connections
from on premise locations to AWS, it's extremely
4964.95 -> fast. And so depending on what configuration
you get, if it's in the lower bandwidth, we're
4969.8 -> looking at between 50 Mbps and 500 Mbps,
and the higher bandwidth is 1 Gbps to
4976.25 -> 10 Gbps. So the transfer rate to your
on premise network, to AWS,
4982.66 -> is considerably fast. And this can be really
important if you are an enterprise and you
4987.91 -> want to keep the same level of performance
that you're used to. So yeah, the takeaway
4993.24 -> here with Direct Connect is that it helps
reduce network costs, increase bandwidth throughput,
4997.8 -> it provides a more consistent network experience
than a typical internet-based connection.
5003.63 -> Okay, so that's all.
5008.65 -> We're looking
5009.65 -> at VPC endpoints, and they're used to privately
connect your VPC to other AWS services, and
5015.49 -> VPC endpoint services. So I have a use case
here to make it crystal clear. So imagine
5020.08 -> you have an EC2 instance, and you want
to get something from your s3 bucket. So what
5024.86 -> you normally do is use the AWS SDK, and you
would make that call, and it would go out
5030.93 -> of your internet gateway to the internet back
into the AWS network to get that file or
5038.72 -> object out of s3. So wouldn't it be more convenient
if we could just keep the traffic within the
5044.99 -> AWS network and that is the purpose of a VPC
endpoint. It helps you keep traffic within
5051.22 -> the AWS network. And the idea is, now because
it does not leave the network, we do not require
5056.29 -> a public IP address to communicate with these
services. It eliminates the need for an internet
5061.13 -> gateway. So let's say we didn't need this
internet gateway, the only reason we were
5064.03 -> using it was to get to s3, we can now eliminate
that and keep everything private. So you know,
5069.34 -> there you go. There are two types of VPC endpoints
interface endpoints and gateway endpoints.
5074.2 -> And we're going to get into that.
5080.71 -> So we're going to look at the first type of
VPC endpoint, and that is interface endpoints.
5086.31 -> And they're called interface endpoints because
they actually provision an elastic network
5090.76 -> interface, an actual network interface card
with a private IP address, and they serve
5095.5 -> as an entry point for traffic going to a supported
service. If you read a bit more about interface
5101.77 -> endpoints, they are powered by AWS PrivateLink.
There's not much to say here, that's
5106.89 -> just what it is: you access services hosted on AWS
easily and securely by keeping your network
5110.61 -> traffic within the AWS network. This branding of AWS
PrivateLink has always confused me,
5115.73 -> but you know, you might as well just
think of interface endpoints and AWS PrivateLink
5119.55 -> as being the same thing. Again, it does cost
something, because it is spinning up an ENI,
5123.63 -> and so you know, it's $0.01
per hour. And so over a month's time,
5130.23 -> if you had it on for the entire time, it's
going to cost around $7.50. And the interface
5136.19 -> endpoint supports a variety of AWS services,
not everything. But here's a good list of
5141.3 -> them for you.
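A quick sanity check on that pricing. This assumes a flat $0.01/hour figure as quoted above; real PrivateLink pricing varies by region and also adds a per-GB data processing charge on top:

```python
# Rough monthly cost of one interface endpoint ENI left running 24/7.
# Assumes a flat $0.01/hour rate (an assumption for illustration);
# real pricing varies by region and bills data processing per GB too.
HOURLY_RATE = 0.01          # USD per hour (assumed)
HOURS_PER_MONTH = 24 * 30   # 720 hours in a 30-day month

monthly_cost = HOURLY_RATE * HOURS_PER_MONTH
print(f"${monthly_cost:.2f}")  # $7.20, in line with the ~$7.50 quoted above
```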
5142.69 -> The second type of VPC endpoint is a gateway
endpoint. A gateway endpoint is a
5152.99 -> target for a specific route in your route table,
used for traffic destined for a supported
5157.73 -> AWS service. And this endpoint is 100%
free, because you're just adding something
5162.85 -> to your route table. And you're going to be
utilizing it mostly for Amazon S3 and DynamoDB.
5168.89 -> So that first use case, where I showed
you the EC2 instance getting an object out of
5175.75 -> S3, that was using a gateway endpoint.
So there you go. Here we are at the VPC endpoint
5185.581 -> cheat sheet, and this is going to be a quick
one, so let's get to it. VPC endpoints help
5190.52 -> keep traffic between AWS services within the
AWS network. There are two kinds of VPC endpoints:
5196.67 -> interface endpoints and gateway endpoints.
Interface endpoints cost money, whereas gateway
5201.96 -> endpoints are free. Interface endpoints use
an elastic network interface (ENI) with
5208.1 -> a private IP address, and this is all
powered by AWS PrivateLink. A gateway endpoint
5213.38 -> is a target for a specific route in your route
table. And interface endpoints support many
5219.1 -> AWS services, whereas gateway endpoints only
support DynamoDB and S3.
5230.07 -> So we're going to take a look at VPC flow
logs, which allow you to capture IP traffic
5234.09 -> information, in and out, from the network interfaces
within your VPC. So you can turn on flow logs
5240.89 -> at three different levels. You can turn it
on at the VPC level, which we're doing right
5244.7 -> here. You can turn it on for a specific subnet,
or you can turn it on for a specific network
5249.41 -> interface. The idea is this all trickles down:
if you turn it on at the VPC level, it's monitoring everything
5254.03 -> below, and same thing with subnets. To find
VPC flow logs, you just go to the VPC
5260.39 -> console, and there's going to be a tab for flow
logs. Same thing with subnets and network
5264.22 -> interfaces, we're going to be able to create that
flow log there. And so here is that form, to
5268.53 -> give you an idea of what you can do here. The
idea is you can choose to filter for only
5273.95 -> the accepted traffic, or only the rejected, or all. So I'm
selecting all. And it can deliver those logs to CloudWatch
5280.11 -> Logs, or it can also deliver them to an S3
bucket if you would prefer them to go there
5285.59 -> instead. So you know, that's the general stuff.
But once you create a flow log, you can't
5290.41 -> really edit it; all you can do is delete it.
So there you go. So we now know what VPC flow
5300.3 -> logs are for. But let's actually take a look
at what VPC flow logs look like. And so here
5305.3 -> I have the structure up here of the data that
is stored in a VPC flow log, and it stores
5310.94 -> these as individual lines. And immediately
below, we actually have an example of a VPC
5316.01 -> flow log. And this is the full description
of all these attributes. And these are pretty
5320.86 -> straightforward. The only thing I really want
you to go away with here is the fact
5324.59 -> that it stores the source IP address and the
destination IP address. There are some exam
5330.65 -> questions, probably at the pro level
or the specialty level, where we're talking
5336.24 -> about VPC flow logs, and the question might
have to do with: do VPC
5340.61 -> flow logs contain hostnames, or do they contain
IP addresses? And the answer is, they contain
5346.24 -> IP addresses. So that's the big takeaway here
that I wanted to show. So now that we've learned
5355.63 -> everything about VPC flow logs, here's your
cheat sheet for when you go sit the exam.
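As an aside, that record structure can be made concrete with a few lines of Python. The field order here follows AWS's documented default (version 2) flow log format, and the sample line is the kind of record you'd see in CloudWatch Logs:

```python
# Parse a VPC flow log record (default v2 format) into named fields.
# Field order matches the AWS default format; sample values are illustrative.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(line: str) -> dict:
    return dict(zip(FIELDS, line.split()))

record = parse_flow_log(
    "2 123456789010 eni-1235b8ca 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK"
)
# Note: source and destination are IP addresses, not hostnames.
print(record["srcaddr"], record["dstaddr"], record["action"])
```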
5360.41 -> So the first thing is: VPC flow logs monitor
the in and out traffic of the network interfaces
5364.8 -> within your VPC. You can turn on flow logs
at the VPC, subnet, or network interface level.
5371.39 -> VPC flow logs cannot be tagged like other
resources. You cannot change the configuration
5376.33 -> of a flow log after it's been created. You
cannot enable flow logs for VPCs which are
5381.28 -> peered with your VPC unless it's the same
account. VPC flow logs can be delivered to
5386.3 -> S3 or CloudWatch Logs. VPC flow logs contain
the source and destination IP addresses, so
5392.45 -> not the hostnames, okay. And there's some
instance traffic that will not get monitored:
5397.82 -> instance traffic generated by contacting the
AWS DNS servers, Windows license activation
5403.66 -> traffic from instances, traffic to and from
instance metadata addresses, DHCP traffic, and any
5409.88 -> traffic to the reserved IP address of the
default VPC router. So there you go. Andrew
5420.12 -> Brown from ExamPro, and we are looking at
network access control lists, also known as
5423.94 -> NACLs. A NACL is an optional layer of security
that acts as a firewall for controlling traffic
5429.98 -> in and out of subnets. So NACLs act as
a virtual firewall at the subnet level. And
5436.3 -> when you create a VPC, you automatically get
a NACL by default. Just like security groups,
5442.53 -> NACLs have both inbound and outbound rules.
The difference here is that you're going
5447.36 -> to have the ability to allow or deny traffic
either way. Okay, so for security groups,
5453.18 -> you can only allow, whereas with NACLs, you also have
deny. Now, when you create these rules here,
5460.63 -> it's pretty much the same as security groups,
with the exception that we have this thing
5463.26 -> called a rule number. And the rule number is going
to determine the order of evaluation for these
5469.38 -> rules, and the way it evaluates is going to
be from the lowest to the highest. The highest
5474.14 -> rule number can be 32766, and AWS recommends
that when you come up with these rule numbers,
5483.58 -> you use increments of 10 or 100, so you have
some flexibility to create rules in between
5489.41 -> if need be. Again, NACLs are at the
subnet level, so in order for them to apply,
5494.35 -> you need to associate subnets to NACLs,
and a subnet can only belong to a single NACL.
5500.77 -> Okay, so whereas with security groups
you can have instances that belong to multiple
5508.73 -> ones, for NACLs it's just a singular case,
okay? Alright,
5515.29 -> we're just gonna look at a use case for NACLs
here. It's going to be really around this
5520.84 -> deny ability. So let's say there is a malicious
actor trying to gain access to our instances,
5526.51 -> and we know the IP address. Well, we can add
that as a rule to our NACL and deny that
5531.07 -> IP address. And let's say we know that we
never need to SSH into these instances, and
5538.07 -> we just want an additional guarantee, in case
someone misconfigures a security group,
5543.47 -> that SSH access is denied. So we'll just deny
on port 22. And now we have those two cases
5549.87 -> covered. So there you go. So we're on to the
NACL cheat sheet. Let's jump into it.
5558.27 -> So, network access control list is commonly
shortened to NACL. VPCs are automatically given a
5564.44 -> default NACL, which allows all outbound and
inbound traffic. Each subnet within a VPC
5569.86 -> must be associated with a NACL. Subnets
can only be associated with one NACL at
5574.95 -> a time; associating a subnet with a new NACL
will remove the previous association. If a
5579.9 -> NACL is not explicitly associated with
a subnet, the subnet will automatically be
5583.81 -> associated with the default NACL. A NACL
has inbound and outbound rules, just like security
5589.5 -> groups. Rules can either allow or deny traffic,
unlike security groups, which can only allow.
5595.84 -> NACLs are stateless, so traffic allowed
inbound is not automatically allowed back
5599.94 -> outbound; return traffic must be explicitly allowed
by the outbound rules. When you create a custom NACL, it will
5606.47 -> deny all traffic by default. NACLs contain
a numbered list of rules that gets
5611.9 -> evaluated in order, from lowest to highest.
If you need to block a single IP address,
5617.9 -> you can do so via a NACL. You cannot
do this via security groups, because you cannot
have deny actions. Okay, so there you go.
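A minimal sketch of how that rule evaluation works: rules are checked in ascending rule-number order, the first match wins, and the implicit final rule denies anything left over. The rule structure here is made up for illustration:

```python
import ipaddress

# Hypothetical NACL rules: (rule_number, source_cidr, port, action).
# Evaluated from lowest to highest rule number; first match wins;
# the implicit final "*" rule denies everything else.
rules = [
    (10, "203.0.113.5/32", 443, "DENY"),   # block a known malicious IP
    (20, "0.0.0.0/0", 22, "DENY"),         # extra guarantee: no SSH at all
    (100, "0.0.0.0/0", 443, "ALLOW"),      # allow HTTPS from anywhere
]

def evaluate(src_ip: str, port: int) -> str:
    for _, cidr, rule_port, action in sorted(rules):
        if port == rule_port and ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr):
            return action
    return "DENY"  # implicit "*" rule

print(evaluate("203.0.113.5", 443))   # DENY  (rule 10 matches before rule 100)
print(evaluate("198.51.100.7", 443))  # ALLOW (falls through to rule 100)
print(evaluate("198.51.100.7", 22))   # DENY  (rule 20)
```

Note how the blocked IP is denied even though rule 100 would allow it; the lower rule number is evaluated first.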
5626.23 -> Hey, it's Andrew Brown from ExamPro. We are
looking at security groups, and they help
5632.27 -> protect our EC2 instances by acting as
a virtual firewall controlling the inbound
5637.17 -> and outbound traffic. As I just said, security
groups act as a virtual firewall at the instance
5642.91 -> level. So you would have an EC2 instance,
and you would attach security groups to it.
5646.95 -> And so here is an EC2 instance, and we've
attached a security group to it. So what does
5652.63 -> it look like on the inside? For security groups,
each security group contains a set of rules
5657.21 -> that filter traffic coming into, so that's
inbound, and out of, so that's outbound, that EC2
5663.07 -> instance. So here we have two tabs, inbound
and outbound, and we can set these rules,
5668.24 -> right. And we can set these rules with a particular
protocol and a port range, and also who's
5673.98 -> allowed to have access. So in this case, I
want to be able to SSH into this EC2 instance,
5679.74 -> which uses the TCP protocol, and the standard
port for SSH is 22. And I'm going to allow
5686.65 -> only my IP. So anytime you see forward slash
32, that always means exactly one IP. All right, so
5693.81 -> that's all you have to do to add inbound and
outbound rules. There are no deny rules, so
5699.32 -> all traffic is blocked by default unless a
rule specifically allows it. And multiple
5705.34 -> instances across multiple subnets can belong
to a security group. So here I have three
5710.81 -> different EC2 instances, and they're all
in different subnets. And security groups
5714.89 -> do not care about subnets; you just assign the
EC2 instance to a security group. And,
5721.38 -> you know, in this case, they're all
in the same one, and now they can all talk
5724.63 -> to each other, okay?
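The matching logic just described, allow-only rules matched on protocol, port, and source, can be sketched like this; the rule shape is invented for illustration:

```python
import ipaddress

# Hypothetical inbound security group rules: (protocol, port, source_cidr).
# Security groups only have allow rules; anything unmatched is blocked.
inbound_rules = [
    ("tcp", 22, "203.0.113.10/32"),  # SSH from "my IP" only (/32 = one address)
    ("tcp", 80, "0.0.0.0/0"),        # HTTP from anywhere
]

def is_allowed(protocol: str, port: int, src_ip: str) -> bool:
    return any(
        protocol == p
        and port == prt
        and ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr)
        for p, prt, cidr in inbound_rules
    )

print(is_allowed("tcp", 22, "203.0.113.10"))  # True  - my IP, port 22
print(is_allowed("tcp", 22, "198.51.100.1"))  # False - blocked by default
print(is_allowed("tcp", 80, "198.51.100.1"))  # True  - HTTP open to all
```

There's no deny branch anywhere: traffic either matches an allow rule or is dropped, which is exactly why blocking a specific IP needs a NACL instead.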
5728.53 -> Here I have three security group scenarios,
and they all pretty much do the same thing,
5734.39 -> but the configuration is different, to give
you a good idea of the variations on how you can
5738.35 -> achieve things. And so the idea is we have
a web application running on an EC2 instance,
5743.2 -> and it is connecting to an RDS database, running
in a private subnet, to get its information.
5748.59 -> Okay. And so in the first case, what we're
doing is we have an inbound rule on the RDS
5755.65 -> database's security group allowing anything on port
5432, which is the Postgres port number, for
5763.13 -> this specific IP address. And so it allows
the EC2 instance to connect to that RDS
5769.11 -> database. And so the takeaway here is you
can specify the source to be an IP range
5773.48 -> or a specific IP. And so this is very specific,
it's forward slash 32, and that's a nice way
5778.59 -> of saying exactly one IP address. Now, the
second scenario looks very similar, and
5784.8 -> the only difference is, instead of providing
an IP address as a source, we can provide
5788.42 -> another security group. So now anything within
that security group is allowed to gain access
5794.42 -> for inbound traffic on 5432. Okay, now, in
our last use case, down below, we have inbound
5803.63 -> traffic on port 80 and inbound traffic on
port 22 for the SG public group, and then
5809.85 -> we have the EC2 instance and the RDS database
within their own security group. So the idea
5814.07 -> is that that EC2 instance is allowed to
talk to that RDS database, and that EC2
5819.09 -> instance is not exposing the RDS database
to the internet, because it's in a private
5826 -> subnet that doesn't have a public IP address.
But the point is that this EC2 instance
5830.42 -> now is able to get traffic from the internet,
and it's also able to accept someone connecting
5838.13 -> for SSH access, okay. And so the big takeaway
here is that you can see that an instance
5843.2 -> can belong to multiple security groups, and
rules are permissive. So when we have two
5847.99 -> security groups, and one has allow rules,
those allows are going to take precedence over
5852.51 -> the other group, which doesn't have anything, you
know, because everything is denied by default,
5856.82 -> but anything that is allowed is going to override
that. Okay, so you can attach multiple security
5861.9 -> groups onto one EC2 instance. So just keep
that in mind. There are a few security group
limits I want you to know about. And so we'll
5875.23 -> look at the first: you can have up to 10,000
security groups in a single region, and it defaults
5880.51 -> to 2500. If you want to go beyond
that 2500, you need to make a service limit
5886.9 -> increase request to AWS support. You can have
60 inbound rules and 60 outbound rules per
5891.84 -> security group. And you can have 16 security
groups per ENI, and that defaults to five.
5899.51 -> Now, if you think about how many security groups
can you have on an instance? Well, it depends
5904.09 -> on how many ENIs are actually attached to
that instance. So if you have two ENIs
5908.42 -> attached to the instance, then
by default, you'll have 10, or if you have
5912.86 -> the upper limit here, 16, you'll be able to
have 32 security groups on a single instance.
5918.3 -> Okay, so those are the limits, you know, I
thought were worth telling. So we're gonna
take a look at our security groups cheat sheet.
5926.86 -> So we're ready for exam time. So security
groups act as a firewall at the instance level.
5931.97 -> Unless specifically allowed, all inbound
traffic is blocked by default. All outbound
5937.4 -> traffic from the instance is allowed by default.
You can specify the source to be either
5942.71 -> an IP range, a single IP address, or another
security group. Security groups are stateful:
5948.29 -> if traffic is allowed inbound, it is also allowed
outbound. Okay, so that's what stateful means.
5954.26 -> Any changes to your security group will take
effect immediately. EC2 instances can belong
5958.84 -> to multiple security groups. Security groups
can contain multiple EC2 instances. You
5964.15 -> cannot block a specific IP address with security
groups; for this, you need to use NACLs.
5969.32 -> Right? So again, sorry, everything is denied
by default, and you're only allowing
5976.07 -> things, okay. You can have up to 10,000 security
groups per region; the default is 2500. You can
5981.96 -> have 60 inbound and 60 outbound rules per
security group. And you can have 16 security
5986.56 -> groups associated to an ENI; the default is
five. And I can see that I added an extra zero
5991.09 -> there, so don't worry, when you print out your
security groups cheat sheet it will be all correct.
5997.28 -> Okay.
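A quick check of the security-groups-per-instance math from the limits section, using the per-ENI defaults and upper limits mentioned above:

```python
# Max security groups on an instance = groups per ENI x number of ENIs.
# Default is 5 groups per ENI; the raisable upper limit is 16.
def max_security_groups(groups_per_eni: int, num_enis: int) -> int:
    return groups_per_eni * num_enis

print(max_security_groups(5, 2))   # 10 with two ENIs at the default limit
print(max_security_groups(16, 2))  # 32 with two ENIs at the upper limit
```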
5999.1 -> We're looking
6001.15 -> at network address translation, also known
as NAT. And this is the method of remapping
6005.95 -> one IP address space into another. And so
here you can see we have our local network
6012.57 -> with its own IP address space, and as traffic passes
through the NAT, it's going to change that
6018.4 -> IP address. Well, why would we want to do
this? Well, there are two good reasons.
6022.69 -> If you have a private network and need to
gain outbound access to the internet,
6027.76 -> you need to use a NAT gateway to remap those
private IPs. If you have two networks which
6032.82 -> have conflicting network addresses, maybe
they actually have the same range, you can use a
6037.57 -> NAT to make the addresses more agreeable for
communication. So when we want to launch our
6048.101 -> own NAT, we have two different options in
AWS: we have NAT instances and NAT gateways.
6053.75 -> So we're just going to go through a comparison
of these two. So before NAT gateways, all
6058.45 -> there was was NAT instances, and so you had
to configure that instance, it's just a
6065.31 -> regular EC2 instance, to do that remapping.
And so luckily, the community came up with
6070.4 -> a bunch of NAT instance images. And so through the
AWS Marketplace, under community AMIs,
6077.26 -> you can still do this, and some people have
use cases for it, and you can launch a NAT
6082.56 -> instance, okay. And so in order for NAT instances
to work, they have to be in a public subnet,
6087.961 -> because it has to be able to reach the
internet; if it was in a private subnet, there's
6091.93 -> no way it's going to get to the internet.
So you would launch a NAT instance there,
6096.19 -> and there you go. There'd be a few more
configuration steps, but that's all you need
6100.63 -> to know. Now, when we go over to NAT gateways,
it's a managed service. So it's going
6106.8 -> to set up that EC2 instance for you, you're
not going to have access to it, AWS is going
6110.14 -> to 100% manage it for you. But it's not
just going to launch one, it's going to have
6114.33 -> a redundant instance for you. Because when
you launch your own NAT instances, if for
6119.75 -> whatever reason one gets taken down, then you'd
have to run more than one, and now you have
6125.01 -> to do all this work to make sure that these
instances are going to scale based on your
6130.66 -> traffic, or have the durability that you need.
NAT gateways take care of that
6137.4 -> for you. And again, you would launch it in
a public subnet. The only thing that NAT gateways
6144.901 -> don't do is launch automatically
across other AZs for you. So you need to launch
6150.55 -> a NAT gateway per AZ, but you do get redundancy
for your instances. So those are the two methods.
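The remapping idea can be sketched as a tiny translation table: outbound packets from private addresses get their source rewritten to the NAT's public IP and a tracked port, so replies can be mapped back inside. This is an illustrative toy model of NAT in general, not how the AWS service is implemented:

```python
import itertools

# Toy NAT: rewrite private source addresses to one public IP,
# keeping a translation table so return traffic can be mapped back.
PUBLIC_IP = "54.0.0.1"            # hypothetical NAT public address
_ports = itertools.count(20000)   # pool of ports on the public side
table = {}                        # public_port -> (private_ip, private_port)

def outbound(private_ip: str, private_port: int) -> tuple:
    public_port = next(_ports)
    table[public_port] = (private_ip, private_port)
    return (PUBLIC_IP, public_port)   # what the internet sees

def inbound(public_port: int) -> tuple:
    return table[public_port]         # map the reply back inside

src = outbound("10.0.3.15", 44321)
print(src)              # ('54.0.0.1', 20000) - private address is hidden
print(inbound(src[1]))  # ('10.0.3.15', 44321) - reply routed back
```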
6156.3 -> And generally, you want to use NAT gateways
when possible, because it is the new way of
6160.81 -> doing it, but you could still use the legacy
way of doing it. So we're on to the NAT cheat
6171.23 -> sheet, and we have a lot of information here.
It's not that important for the Solutions Architect
6175.98 -> Associate; it would definitely come up for
the SysOps, where some of these details might matter.
6183.34 -> So we'll just go through this here. So when
creating a NAT instance, you must disable
6187.33 -> source and destination checks on the instance.
NAT instances must exist in a public subnet;
6193.65 -> you must have a route out of the private subnet
to the NAT instance. The size of a NAT instance
6199.86 -> determines how much traffic can be handled.
High availability can be achieved
6205.17 -> using Auto Scaling groups, multiple subnets
in different AZs, and automated failover between
6210.61 -> them using a script. So you can see there's
a lot of manual labor when you want to have
6215.27 -> availability and durability and scalability
for NAT instances; it's all on you.
6219.12 -> And then we'll look at NAT gateways. So NAT
gateways are redundant inside an availability
6224.08 -> zone, so they can survive the failure of
an EC2 instance. You can only have
6229.46 -> one NAT gateway inside one AZ, so they cannot
span multiple AZs. They start at 5 gigabits
6236.59 -> per second and scale all the way up to 45
gigabits per second. NAT gateways are the
6241.16 -> preferred setup for enterprise systems. There
is no requirement to patch NAT gateways, and
6246 -> there's no need to disable source and destination
checks for the NAT gateway, unlike NAT instances.
6251.41 -> NAT gateways are automatically assigned a
public IP address. Route tables for the NAT
6256.73 -> gateway must be updated. Resources in multiple
AZs sharing a gateway will lose internet
6262.12 -> access if the gateway goes down, unless you
create a gateway in each AZ and configure
6267.04 -> route tables accordingly. So there you go.
That is your NAT cheat sheet. Hey, this is Andrew Brown
6275.99 -> from ExamPro, and we are starting the VPC follow-along,
and this is a very long section, because
6282.03 -> we need to learn about all the kinds of networking
components that we can create.
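Throughout this follow-along we'll be picking CIDR blocks like 10.0.0.0/16 for the VPC and /24 ranges for subnets. Python's `ipaddress` module is a handy way to sanity-check how many addresses a given prefix length actually gives you:

```python
import ipaddress

# A /16 VPC range and a /24 subnet carved out of it.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnet = ipaddress.ip_network("10.0.0.0/24")

print(vpc.num_addresses)      # 65536 - the whole VPC block
print(subnet.num_addresses)   # 256   - a bigger prefix number = a smaller slice
print(subnet.subnet_of(vpc))  # True  - the /24 is a slice of the /16 pie
```

Note that AWS reserves five addresses in every subnet (the first four and the last), so the usable count is slightly lower than the raw arithmetic suggests.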
6286.86 -> So we're going to learn how to create our
own VPCs, subnets, route tables, internet gateways,
6291.97 -> security groups, NAT gateways, NACLs; we're
going to touch it all, okay. So it's very core
6297.86 -> to learning about AWS, and it's just great
to get it out of the way. So let's jump into
6303.55 -> it. So let's start off by creating our own
VPC. So on the left-hand side, I want you
6308.68 -> to click on Your VPCs. And right away, you're
gonna see that we already have a default VPC
6313.34 -> within this region of North Virginia. Okay,
your region might be different from mine,
6318.38 -> and it actually does kind of matter what
region you use, because different regions
6323.22 -> have different amounts of available AZs. So
I'm going to really strongly suggest that
6329.2 -> you switch to North Virginia to make this
section a little bit smoother for you. But
6335.05 -> just notice that the default VPC uses an IPv4
CIDR block range of 172.31.0.0 forward
6343.19 -> slash 16. Okay, and so if I was to change
regions, no matter what region, let's go to
6349.51 -> US West Oregon, we're going to find that
we already have a default VPC in here as well,
6356.17 -> and it's going to have the same CIDR block
range, okay. So just be aware that AWS
6361.87 -> does give you a default VPC so that you can
start launching resources immediately without
6366.71 -> having to worry about all this networking,
and there's nothing wrong with using the default
6370.74 -> VPC, it's totally acceptable to do so. But
we definitely need to know how to do this
6375.25 -> ourselves. So we're going to create our own
VPC. Okay, and so I'm a big fan of Star Trek,
6380.98 -> and so I'm going to name it after the planet
of Bajor, which is a very well-known planet
6385.83 -> in the Star Trek universe. And I'm going to
have to provide my own CIDR block; it cannot
6390.82 -> be one that already exists. So I can't use
that 172 range that AWS was using. So I'm
6397.21 -> gonna do 10.0.0.0 forward slash 16. And
there is a bit of rhyme and reason to choosing
6404.45 -> these; this one is a very commonly chosen
one. And so I mean, you might be looking at
6409.39 -> this going, okay, well, what is this whole
thing with the IP address slash 16?
6414.02 -> And we will definitely explain that in a separate
video here. But just to give you a quick rundown,
6418.92 -> you are choosing the IP address that you
want to have here, and this is the actual
6424.27 -> range, and this is saying how many IP addresses
you want to allocate. Okay. Um, so yeah, we'll
6430.19 -> cover that more later on. And so now we have
the option to set an IPv6 CIDR block
6437.16 -> here. And so just to keep it simple, I'm going
to turn it off. But you know, obviously, IPv6
6442.16 -> is supported on AWS, and it is the future
of, you know, our IP protocol. So it's definitely
6450.49 -> something you might want to turn on, okay,
and just be prepared for the future there.
6454.53 -> Then we have this tenancy option, and this
is going to give us dedicated hosts for
6458.93 -> our VPC. This is an expensive, expensive option,
so we're going to leave it at default and
6463.36 -> go proceed and create our VPC. And so there
it has been created, and it was very fast,
6469.67 -> it was just instantaneous there. So we're
going to click through to that link there.
6473.4 -> And now we can see we have our VPC named Bajor.
And I want you to notice that we have our
6479.32 -> IPv4 CIDR range; there is no IPv6 set.
And by default, it's going to give us a route
6486.89 -> table and a NACL. Okay, and so we are going
to override the route table, because we're going
6492.96 -> to want to learn how to do that by ourselves.
The NACL is not so important, so we might
6497.93 -> just gloss over that. But um, yeah, so there
you are. Now, there's just one more thing
6502.9 -> we have to do. Because if you look down below
here, we don't have DNS resolution, or,
6509.74 -> sorry, DNS hostnames is disabled by default.
And so if we launch an EC2 instance, it's
6514.72 -> not going to get a DNS hostname, that's
just like a URL, so you can access that EC2
6522.22 -> instance. We definitely want to turn that
on. So I'm going to drop this down to Actions,
6526.14 -> and we're going to set DNS hostnames here to
Enabled, okay. And so now we will get that,
6532.63 -> and that will not cause us pain later down
the road. So now that we've created our VPC,
6537.8 -> we want to actually make sure the internet
can reach it. And so we're going to next learn
6543.45 -> about internet gateways. So we have our VPC,
but it has no way to reach the internet, and
6549.28 -> so we're going to need an internet gateway.
6551.25 -> Okay, so on the left-hand side, I want you
to go to Internet Gateways, and we are going
6556.78 -> to go ahead and create a new one. Okay, and
I'm just going to call it internet gateway
6562 -> Bajor; some people name these IGW, and that
doesn't hurt. And so our internet gateway has been
6567.75 -> created, and so we'll just click through to
that one. And so you're going to see that it's
6572.14 -> in a detached state. So internet gateways
can only be attached to one specific
6577.47 -> VPC; it's a one-to-one relationship. So for
every VPC, you're going to have an internet
6581.9 -> gateway. And so you can see it's detached,
and there is no VPC ID. So I'm going to drop
6586.37 -> this down and attach the VPC, and then select
Bajor there and attach it, and there you
6591.95 -> go. Now it's attached, and we can see the ID is
associated. So we have an internet gateway,
6597.11 -> but that still doesn't mean that things within
our network can reach the internet, because
6601.31 -> we have to add a route to our route table.
Okay, so just closing this tab here, you can
6606.72 -> see that there already is a route table associated
with our VPC, because it did create us a default
6612.9 -> route table. So I'm just going to click through
to that one here to show you, okay, and you
6617.29 -> can see that it's our main route table, it's
set to Main. But I want you to learn how to
6620.66 -> create route tables, so we're going to make
one from scratch here. Okay. So we'll just
6624.93 -> hit Create route table here. And we're just
going to name it our main route table, or
6632.6 -> our internet route table, I don't know, it doesn't
matter. Okay, we'll just say RT, to shorten
6638.83 -> that there, and we will drop down and choose
Bajor, and then we will go ahead and create
6642.69 -> that route table. Okay, and so we'll just
hit Close, and we will click off here so
6647.34 -> we can see all of our route tables. And so
here we have our main one here for
6652.84 -> Bajor, and then this is the one we created.
Okay, so if we click into this route table
6657.93 -> here, you can see by default it has the full
scope of our local network here. And so I
6664.28 -> want to show you how to change this one to
our main. So we're just going to click on
6669.45 -> this one here and switch it over to main,
so Set as Main Route Table. So the main route
6673.6 -> table is, you know, just what is going
to be used by default. All right, and so we'll
6679.89 -> just go ahead and delete the default one here
now, because we no longer need it. Alright,
6685.26 -> and we will go select our new one here and
edit our routes, and we're going to add one
6689.63 -> for the internet gateway here. So I'm just gonna
6694.239 -> write 0.0.0.0 forward slash zero, which
means accept anything from anywhere
6699.23 -> there. And then we're going to drop down,
select Internet Gateway, select Bajor,
6702.83 -> and hit Save Routes. Okay, and we'll hit Close.
And so now we have a gateway, and we have
6710.1 -> a way for our subnets to reach the internet.
So there you go. So now that we have a route
6715.69 -> to the internet, it's time to create some
subnets. So we have some way of actually launching
6720.93 -> our EC two instances, somewhere. Okay, so
on the left hand side, I want you to go to
6726.07 -> subnets. And right away, you're going to start
to see some subnets. Here, these are the default
6729.989 -> ones created with you with your default VPC.
And you can see that there's exactly six of
6735.09 -> them. So there's exactly one for every availability
zone within each region. So the North Virginia
6740.44 -> has six azs. So you're going to have six,
public subnets. Okay, the reason we know these
6747.239 -> are public subnets. If we were to click on
one here and check the auto assign, is set
6751.65 -> to Yes. So if a if this is set to Yes, that
means any EC two instance launch in the subnet
6758.18 -> is going to get a public IP address. Hence,
it's going to be considered a public subnet.
6763.95 -> Okay. So if we were to switch over to Canada
Central, because I just want to make a point
6768.72 -> here, that if you are in a another region,
it's going to have a different amount of availability
6774.239 -> zones, Canada only has two, which is a bit
sad, we would love to have a third one there,
6778.26 -> you're going to see that we have exactly one
subnet for every availability zone. So we're
6782.4 -> going to switch back to North Virginia here.
And we are going to proceed to create our
6787.39 -> own subnets. So we're going to want to create
at least three subnets if we can. So because
6793.49 -> the reason why is a lot of companies, especially
enterprise companies have to run it in at
6797.45 -> least three availability zones for high availability.
Because if you know one goes out and you only
6803.61 -> have another one, what happens if two goes
out. So there's that rule of you know, always
6807.61 -> have at least, you know, two additional Okay,
so we're going to create three public subnets
6813.01 -> and one, one private subnet, we're not going
to create three private subnets, just because
6817.239 -> I don't want to be making subnets here all
day. But we'll just get to it here. So we're
6821.17 -> going to create our first subnet, I'm going
to name this Bayshore public, okay, all right,
6827.51 -> and we're going to select our VPC. And we're
going to just choose the US East one, eight,
6833.22 -> and we're going to give it a cider block of
10.0 dot 0.0 forward slash 24. Now, notice,
6839.34 -> this cider range is a smaller than the one
up here, I know the number is larger, but
6843.8 -> from the perspective of how many IP addresses
6846.76 -> it allocates, there's actually a fewer here,
so you are taking a slice of the pie from
6851.05 -> the larger range here. So just be aware, you
can set this as 16, it's always going to be
6855.73 -> less, less than in by less, I mean, a higher
number than 16. Okay, so we'll go ahead and
6862.28 -> create our first public subnet here. And we'll
just hit close. And this is not by default
public, because by default, the auto-assign is
going to be set to No. So we're just going
6871.72 -> to go up here and modify this and set it so
6871.72 -> that it does auto-assign IPv4, and now it is
6879.32 -> considered a public subnet. So we're going
to go ahead and do that for our B and C here.
6884.57 -> So it's going to be the same thing Bayshore
public,
6887.5 -> be,
6888.72 -> okay, choose that. We'll do B, we'll do
10.0.1.0/24. Okay. And we're going to go create
6897.51 -> that, close. And we're going to set that auto-assign
there. All right. And the next thing
6907.02 -> we're going to do is create our next subnet
here, so Bayshore public
6914.33 -> C. And we will do that. And we'll go to
C here, it's going to be
6921.41 -> 10.0.2.0/24. Okay, we'll create that one. Okay,
let it close. And we will make sure, did I
6928.09 -> set the auto-assign on that
one? Not as of yet. And so we will modify
6933.59 -> that there, okay. And we will create another
subnet here, and this is going to be Bayshore
6940.55 -> private A, okay. And we are going to
set that to A here. And we're going to
6950.66 -> set this to 10.0.3.0/24. Okay, so this
is going to be our private subnet. Alright,
6959.03 -> so we've created all of our subnets. So the
next thing we need to do is associate them
6963.47 -> with a route table, actually, we don't have
to, because by default, it's going to use
6967.55 -> the main, alright, so they're already automatically
associated there. But for our private one,
6973.45 -> we're not going to be wanting to really use
6979.73 -> the main route table there,
6979.73 -> we probably would want to create our own route
table for our private subnets there. So I'm
6984.09 -> just gonna create a new one here, and we're
gonna just call it private RT. Okay, I'm going
6989.72 -> to drop that down, choose Bayshore here. And
we're going to hit close, okay. And the idea
6995.63 -> is that, you know, we don't need the subnet
to reach the internet. So it doesn't really
6999.59 -> make sense to be there. And then we could
set other things later on. Okay, so what I
7005.03 -> want you to do is just change the association
here. So we're gonna just edit the route table
7008.53 -> Association. And we're just going to change
that to be our private one. Okay. And so now
7013.63 -> our route tables are set up. So we will move
on to the next step. So our subnets are ready.
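The four subnets we just carved out of the 10.0.0.0/16 VPC can be sanity-checked with Python's standard `ipaddress` module. This is just an illustrative sketch of the CIDR math from the walkthrough; the subnet names are the Bayshore names used above:

```python
import ipaddress

# The Bayshore VPC range and the four /24 subnets carved from it
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = {
    "bayshore-public-a":  ipaddress.ip_network("10.0.0.0/24"),
    "bayshore-public-b":  ipaddress.ip_network("10.0.1.0/24"),
    "bayshore-public-c":  ipaddress.ip_network("10.0.2.0/24"),
    "bayshore-private-a": ipaddress.ip_network("10.0.3.0/24"),
}

# Every subnet must be a slice of the VPC's pie, and a /24 is
# smaller than a /16 even though 24 is the bigger number.
for name, net in subnets.items():
    assert net.subnet_of(vpc), f"{name} falls outside the VPC range"
    print(name, net, "->", net.num_addresses, "addresses")

print("VPC total:", vpc.num_addresses)  # 65536 for the /16, vs 256 per /24
```

Note that AWS reserves five addresses in every subnet (network, router, DNS, future use, broadcast), so the usable count per /24 is actually 251, not 256.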
7024 -> And now we are able to launch some EC two
instances. So we can play around and learn
7028.49 -> some of these other networking components.
So what I want you to do is go to the top
7033.04 -> here and type in EC two. And we're going to
go to the EC two console. And we're going
7039.23 -> to go to instances on the left hand side.
And we're going to launch ourselves a couple
7043.56 -> of instances. So we're going to launch our
first instance, which is going to be for our
7047.34 -> public subnet here. So we're going to choose
t2.micro, we're going to go next, and we
7052.881 -> are going to choose the Bayshore VPC that
we created. We're going to launch this in
7059.18 -> the public subnet here, public A, okay, and
we're going to need a new IAM role. So I'm
7066 -> just going to right click here and create
a new IAM role because we're going to want
7069.45 -> to give it access to both SSM for Session
Manager and also so we have access to s3.
7076.73 -> Okay, so just choosing EC2 there. I'm going
to type in SSM, okay, SSM, there it is at
7084.841 -> the top, then we'll type in s3, we're gonna
give it full access, we're going to go next,
7090.53 -> we're going to go to next and we're going
7090.53 -> to just type in MyBayshoreEC2. Okay.
7097.93 -> And we're going to hit Create role. Okay,
so now we have the role that we need for our
7104.16 -> EC two instance, we're just going to refresh
that here, and then drop down and choose
7109.4 -> MyBayshoreEC2. Okay, and we are going to
want to provide it a script here to run. So
7118.6 -> I already have a script prepared that
I will provide to you. And this is the public
7122.55 -> user data.sh. All this is going to do. And
if you want to just take a peek here at what
7128.22 -> it does, I guess they don't have it already
open here. But we will just quickly open this
7133.68 -> up here. all it's going to do is it's going
to install an Apache server. And we're just
7138.489 -> going to have a static website page here served
up. Okay, and so we're going to go ahead and
7146.95 -> go to storage, nothing needs to be changed
here. We're going to add, we don't need to
7152.4 -> add any tags. We're gonna go to security group
and we're going to create a new security group,
7155.68 -> I'm going to call it my bayshore
EC2 SG, okay. And we're going to make sure
7169.33 -> that we have access to HTTP, because this
is a website, we're going to have to have
7174.489 -> Port 80 open, we're going to restrict it down
to just us.
7179.07 -> And we could also do that for SSH, so we might
as well do that there as well. Okay, we're
7185.17 -> going to go ahead and review and launch this
EC two instance and already have a key pair
7189.52 -> that is created. You'll just have to go ahead
and create one if you don't have one there.
7193.35 -> And we'll just go ahead and launch that instance
there. Okay, great. So now, we have this EC
7199.25 -> two instance here. Which is going to be for
our public subnet. Okay. And we will go ahead
7205.05 -> and launch another instance. So we'll go to
Amazon Linux 2 here, choose t2.micro.
7210.95 -> And then this time we're going to choose our
private subnet. Okay, I do want to point out
7216.86 -> that when you have this auto-assign here,
see how it's by default disabled, because
7221.51 -> it's inheriting whatever the parent subnet
has set, whereas when we set up the first
7225.47 -> one, you might have not noticed, but it was
set to enable, okay. And we are going to also
7229.33 -> give it the same role there, MyBayshoreEC2.
And then this time around, we're going
7234.24 -> to give it the other scripts here. So I have
a private script here, I'm just going to open
7239.08 -> it up and show it to you. Okay, and so what
7244.84 -> this script does is, well, it doesn't actually
7244.84 -> need to install Apache, so we'll just remove
that, I guess it's just old. But anyway, what
7249.3 -> it's going to do is it's going to reset the
7254.52 -> password on the ec2-user to kaiwinn. Kai Winn,
that's a character from Star Trek: Deep Space
Nine. And we're also going to enable password
7259.04 -> authentication. So we can SSH into this using
a password. And so that's all the script does
7266.69 -> here. Okay, and so we are going to go ahead
and choose that file there. And choose that.
7273.61 -> And we will move on to storage, storage is
totally fine. We're not going to add tags,
7278.97 -> security groups, we're gonna actually create
a new security group here. It's not necessarily
7282.7 -> necessary, but I'm going to do it anyway, so
I'm gonna say my private EC2
7292.38 -> SG, maybe put Bayshore in there. So we just
keep these all grouped together. Note, therefore,
7300.14 -> it's only going to need SSH, we're not going
to have any access to the internet there.
7304 -> So like, there's no website or anything running
on here. And so we'll go ahead and review
7307.52 -> and launch. And then we're going to go launch
that instance, and choose our key pair. Okay,
7313.19 -> great. So now we're just going to wait for
these two instances to spin up here. And then
7319.57 -> we will play around with security groups and
7329.94 -> NACLs. So I just had a quick coconut water.
7329.94 -> And now I'm back here and our instances are
running, they don't usually take that long
7333.8 -> to get started here. And so we probably should
have named these to make it a little bit easier.
7338.34 -> So we need to determine which is our public
and private. And you can see right away, this
7342.57 -> one has a public DNS hostname, and
also it has its public IP address. Okay, so this
7351.18 -> is how we know this is the public one. So
7356.78 -> I'm just going to say, Bayshore public. Okay.
7356.78 -> And this one here is definitely the private
one. All right. So we will say Bayshore
7360.84 -> private. Okay. So, yeah, just to reiterate
here, if we were to look, here, you can see
7368.63 -> we have the DNS and the public IP address.
And then for the private, there's nothing
7373.66 -> set. Okay, so let's go see if our website
is working here. So I'm just going to copy
7378.95 -> the public IP address, or we can take the
7382.63 -> DNS, it doesn't matter. And we will paste
7382.63 -> this in a new tab. And here we have our working
website. So our public IP address is definitely
7388.09 -> working. Now, if we were to check our private
one, there is nothing there. So there's nothing
7392.57 -> for us to copy, we can even copy this private
one and paste it in here. So there's no way
of accessing the website that is running
on the private one there. And it doesn't really
7402.8 -> make a whole lot of sense to run your
website, in the private subnet there. So you
7409.23 -> know, just to make a very clear example of
that, now that we have these two instances,
7414.72 -> I guess it's a good opportunity to learn about
security groups. Okay, so we had created a
7418.93 -> security group. And the reason why we were
able to access this instance, publicly was
7424.88 -> that in our security group, we had an inbound
rule on port 80. So Port 80, is what websites
7429.52 -> run on. And when we're accessing through the
web browser, we are allowing my
7434.13 -> IP here. So that's why I was allowed to access
it. So I just want to illustrate to you what
7439.86 -> happens if I change my IP. So at the top here,
7445.77 -> I have a VPN, it's a service you
can buy, a lot of people use it so
that they can watch Netflix in other regions.
7450.32 -> I use it for this purpose not to watch Netflix
somewhere else.
7454.84 -> So don't get that in your mind there. But
I'm just going to turn it on. And I'm going
7459.01 -> to change my IP. So I think this is
Brazil. And so I'm going to have an IP from
7464.29 -> Brazil here shortly once it connects. And
so now if I were to go and access this here,
7469.59 -> it shouldn't work. Okay, so I'm just going
to close that tab here. And it should just
hang. Okay, so it's hanging because I'm not
using that IP. So that's how security groups
7481.13 -> work. Okay, and so I'm just going to turn
that off. And I think I should have the same
7485.76 -> one and it should resolve instantly there.
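What that "My IP" security group rule is really doing is a CIDR membership test on the source address. The addresses below are made-up placeholders, just to illustrate why the page hangs from the VPN exit IP:

```python
import ipaddress

# Hypothetical values: the /32 the "My IP" rule captured, and two clients
allowed = ipaddress.ip_network("203.0.113.7/32")  # exactly one address
home_ip = ipaddress.ip_address("203.0.113.7")     # my real IP
vpn_ip  = ipaddress.ip_address("198.51.100.42")   # the VPN exit in Brazil

def inbound_allowed(client):
    # Security groups only have allow rules; no match means implicit deny
    return client in allowed

print(inbound_allowed(home_ip))  # True  -> the page loads
print(inbound_allowed(vpn_ip))   # False -> the request just hangs
```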
So great. So just showing you how the security
7490.91 -> groups work for inbound rules. Okay, for outbound
rules, that's traffic going out to the internet,
7496.34 -> it's almost always open like this:
0.0.0.0/0, right? Because you'd want to be able to
7501.66 -> download stuff, etc. So that is pretty normal
business. Okay. So now that we can
7509.09 -> see that, maybe we would like to show off how
NACLs work compared to security groups.
7513.74 -> As you can see, if we were
just to open this one up here, okay. Security
7521.16 -> groups, by default, can only allow things,
so everything is denied. And then you're always
7528.78 -> opening things up. So you're adding allow
rules only, you can't add an explicit deny
7533.41 -> rule. So where NACLs are very useful
is that you can use them to block specific
7539.14 -> IP addresses, okay, or IP ranges, if you will.
And you cannot do that for a security group.
7544.32 -> Because how would you go about doing that?
So if I wanted to block access just to my
7547.969 -> IP address, I guess I could only allow every
other IP address in the world except for mine.
7553.99 -> But you can see how that would be an undue burden.
So let's see if we can set our NACL to just
7559.93 -> block our IP address here. Okay. So security
groups are associated with the actual EC two
7565.93 -> instances. So the question is,
how do we figure out the NACLs? And NACLs
7572.61 -> are associated with the subnets. Okay, so
in order to block our IP address for this
7577.26 -> EC, two instance, we have to determine what
subnet it runs in. And so it runs in our Bayshore
7582.2 -> public A, right, and so now we get to find
the NACL that's associated with it. So
7587.12 -> going up here to subnets, I'm going to go
to public A, and I'm going to see what NACLs
7591.59 -> are associated with it. And so it is this
NACL here, and we have some rules that we
7596.45 -> can change. So let's actually try just blocking
my IP address here. And we will go just grab
7601.91 -> it from here. Okay. All right. And just to
7609.19 -> note, if you look here, see how it says forward
slash 32. What that means is, that's a CIDR block
range of exactly one IP address. That's how
7613.989 -> you specify a single IP address with forward
slash 32. But I'm going to go here and just
edit the NACL here, and we are going to
this is not the best way to do it. So I'm
7627.43 -> just going to open it here. Okay.
7628.88 -> And because I didn't get some edit options
there, I don't know why. And so we'll just
7634.45 -> go up to inbound rules here, I'm
7635.93 -> going to add a new rule. And it goes from
lowest to highest for these rules. So I'm
7640.87 -> just going to add a new rule here. And I'm
going to put in rule 10. Okay, and I'm going
7648.67 -> to block it here on the side arrange. And
I'm going to do it for Port 80. Okay, so this
7656.85 -> and we're going to have an explicit deny,
okay, so this should not allow
7662.219 -> me to access that EC2 instance any
longer. Okay, so we're going to go back to
7667.03 -> our instances here, we're going to grab that
IP address there and paste it in there and
7671.07 -> see if I still have access, and I do not okay,
so that NACL is now blocking it. So that's
7675.38 -> how you block individual IP addresses there.
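The behavior we just saw, where rule 10's deny beats the broader allow, comes from how NACLs evaluate rules: lowest rule number first, first match wins, with an implicit deny at the end. Here's a rough sketch of that logic in Python; the rule numbers mirror the walkthrough, but the IP addresses are made up:

```python
import ipaddress

# A NACL is an ordered rule list; security groups, by contrast,
# are allow-only and have no rule ordering to worry about.
nacl = [
    # (rule number, source CIDR, port, action)
    (10,  "203.0.113.7/32", 80, "DENY"),   # the explicit deny we just added
    (100, "0.0.0.0/0",      80, "ALLOW"),  # the default-style allow-all
]

def evaluate(src_ip, port):
    src = ipaddress.ip_address(src_ip)
    # Rules are checked from lowest to highest number; first match applies
    for _num, cidr, rule_port, action in sorted(nacl):
        if port == rule_port and src in ipaddress.ip_network(cidr):
            return action
    return "DENY"  # the implicit deny-all at the end of every NACL

print(evaluate("203.0.113.7", 80))   # DENY  -> our own IP is blocked
print(evaluate("198.51.100.9", 80))  # ALLOW -> everyone else gets through
```

This is why removing rule 10 restores access immediately: with it gone, rule 100 becomes the first match again.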
And I'm just going to go back and now edit
7680.1 -> the rule here. And so we're just going to
remove this rule, and hit save. And then we're
7685.62 -> going to go back here and hit refresh. Okay.
And I should now have access on I do. So there
7691.11 -> you go. So that is security groups and NACLs.
So I guess the next thing we can move on to
7695.36 -> is how do we actually get access to the private
subnet. Okay, and the thing is
7703.75 -> that we have our private EC2 instance.
And it doesn't have a public IP address, so there's
7709.83 -> no direct way to gain access to it. So we
can't just easily SSH into it. So this is
7714.9 -> where we're going to need a bastion. Okay,
and so we're going to go ahead and go set
7718.95 -> one up here. So what I want you to do is I
want you to launch a new instance here, I'm
7725.15 -> just gonna open a new tab, just in case I
want this old tab here. And I'm just going
7730.2 -> to hit on launch instance here. Okay, and
so I'm going to go to the marketplace here,
7735.71 -> I'm gonna just type in Bastion. And so we
have some options here, there is this free
7740.17 -> one Bastion host, SSH, but I'm going to be
using guacamole and there is an associated
7744.64 -> cost here with it, they do have a trial version,
so you can get away without paying anything
7749.53 -> for it. So I'm just going to proceed and select
guacamole. And anytime you're using something
7754.64 -> from the marketplace, they generally will
have the instructions in here. So if you do
7757.95 -> view additional details here, we're going
to get some extra information. And then we
7763.41 -> would just scroll down here to usage information
such as usage instructions, and we're going
7769.42 -> to see there is more information. I'm just
going to open up this tab here because I've
7773.57 -> done this a few times. So I remember where
all this stuff is okay, and we're just going
7777.33 -> to hit continue here. Okay, and we're going
to start setting up this instance. So we're
7781.51 -> going to need a small so this one doesn't
allow you to go into micros. Okay, so there
7786.64 -> is an associated cost there. We're going to
configure this instance, we're going to want
7790.17 -> it in the same VPC as our private, okay, when
we have to launch this in a public subnet,
7798.13 -> so just make sure that you select the public
one Here, okay. And we're going to need to
7804.04 -> create a new IAM role. And this is part of
Guacamole's instructions here, because
7808.01 -> you need to give it some access so that it
can auto discover instances. Okay? And so
7814.48 -> down here, they have the instructions here,
and they're just going to tell you to make
7817.28 -> an IAM role, we could launch a CloudFormation
template to make this, but I would rather
7820.91 -> just make it by hand here. So we're going
to grab this policy here, okay. And we are
7827.83 -> going to make a new tab and make our way over
to IAM, okay. And once we're in IAM here,
7836.03 -> we're going to have to make this policy. So
I'm going to make this policy. Okay, unless
7839.91 -> I already have it. Let's see if it's already
in here. New, okay, good. And I'm gonna go
7846.14 -> to JSON, paste that in there, review the policy,
I'm going to name it, they have a suggestion
7850.55 -> here, what to name it, GuacAWS, that seems
fine to me. Okay, and here, you can see it's
7856.03 -> gonna give us permissions to CloudWatch and
STS. So we'll go ahead and create that policy.
7860.13 -> It says it already exists. So I already have
it. So just go ahead and create that policy.
7864.98 -> And I'm just going to skip the step for myself.
Okay, and we're just going to cancel there.
7870.07 -> So I'm just going to type guac. I don't know
why it's not showing up, says it already exists.
7877.22 -> Again, so yeah, there it is. So I already
have that policy. Okay, so I couldn't hit
7882.57 -> that last step. But you'll be able to get
through that no problem. And then once you
7885.71 -> have it, you're gonna have to create a new
role. So we're going to create a role here,
7889.17 -> and it's going to be for EC two, we're going
to go next. And we're going to want I believe
7894.4 -> EC to full access is that the right Oh, read
only access, okay. So we're going to want
7900.5 -> to give this easy to read only access. And
we're also going to want to give it that new
7904.93 -> GWAC role. So I'm going to type in type AWS
here. Oh, that's giving me a hard time here,
7910.37 -> we'll just copy and paste the whole name in
7915.53 -> here. There it is. And so those are the
two policies you need to have attached. And
then we're just going to name this something
here. So I'm gonna just call it MyGuac
7927.5 -> Bastion. Okay, role here. I'm going to create
7927.5 -> that role. Okay, and so that role has now
been created, we're going to go back here,
7934.04 -> refresh the IM roles, and we're going to see
if it exists. And there it is, MyGuacBastion
7938.43 -> role. I may have spelled bastion wrong there, but
I don't think that really matters. And then
7942.51 -> we will go to storage. There's nothing to
do here, we'll skip tags, we'll go to security
7948.17 -> groups. And here you can see it comes with
some default configurations. So we're going
7952.34 -> to leave those alone. And then we're going
to launch this EC two instance. Okay. So now
7958.89 -> we're launching that it's taking a bit of
time here, but this is going to launch. And
7963.91 -> as soon as this is done, we're going to come
back here and actually start using this Bastion
7970.969 -> to get into our private instance. So our bastion
here is now provisioned. So let's
7977.41 -> go ahead and just type in Bastion, so we don't
lose that later on, we can go grab either
7982.86 -> the DNS or public IP, I'll just grab the DNS
one here. And we're going to get this connection,
7987.93 -> not private warning, that's fine, because
we're definitely not using SSL here. So just
7992.01 -> hit advanced, and then just click to proceed
here. Okay, and then it's might ask you to
7996.72 -> allow we're going to definitely say allow
for that, because that's more of the advanced
8000.29 -> functionality, guacamole there, which we might
touch in. At the end of this here, we're going
8004.14 -> to need the username and password. So it has
8009.76 -> a default, so we have guacadmin here, okay.
And then the password is going to be
8014.89 -> the instance ID. All right, and this is
all in the instructions here. I'm just walking
you through it. And then we're going to hit
8018.51 -> login here. And so now it has auto discovered
the instances which are in the VPC that is
8025.12 -> launched. And so here, we have Bayshore, private.
So let's go ahead and try to connect to it.
8030.42 -> Okay. So as soon as I click, it's going to
make this shell here. And so we'll go attempt
8035.68 -> and log in now. So our user is ec2-user.
And I believe our password is kaiwinn.
8043.24 -> And we are in our instance, so there
you go. That's how we gain access to our private
8050.13 -> instance here. Just before we start doing
some other things within this private EC2,
8056.03 -> I just want to touch on some of the functionality
of Bastion here, or sorry, guacamole, and
8060.64 -> so why you might actually want to use the
bastion. So it does, it is a hardened instance,
8066.34 -> it does allow you to authenticate via multiple
methods. So you can enable multi factor authentication
8072.01 -> to use this. It also has the ability to do
screen recordings, so you can really be sure
8078.09 -> what people are up to, okay, and then it just
has built in audit logs and etc, etc. So,
8083.36 -> there's definitely some good reasons to use
a bastion, but we can also use a sessions
8087.59 -> manager which does a lot of this for us with
the exception of screen recording within the
8093.04 -> within AWS. But anyway, so now that we're
in our instance, let's go play around here
8097.86 -> and see what we can do. So now that we are
in this private EC two instance, I just want
8108.131 -> to show you that it doesn't have any internet
access. So if I was to ping something like
8111.4 -> Google, right, okay, and I'm trying to get
information here to see how it's hanging,
8116.43 -> and we're not getting a ping back, that's
because there is no route to the internet.
8121.26 -> And so the way we're going to get a route
to the internet is by creating a NAT instance
8126.251 -> or a NAT gateway. Generally, you want to use
a NAT gateway, but there are cases to use NAT instances.
8131.5 -> So if you were trying to save money, you can
definitely save money by having to manage
8135.98 -> a NAT instance by yourself. But we're gonna
learn how to do a NAT gateway, because that's
8139.37 -> the way AWS wants you to go. And so back
in our console, here, we are in EC two instances,
8146.18 -> where we're going to have to switch over to
8151.969 -> VPC, okay, because that's where the NAT
8151.969 -> gateway is. So on the left hand side, we can
scroll down and we are looking under VPC,
8160.01 -> we have Nat gateways. And so we're going to
launch ourselves a NAT gateway. NAT gateways
8164.51 -> do cost money. So they're not terribly expensive.
But you know, at the end of this, we'll tear
8170.29 -> it down, okay. And so, the idea is that we
need to launch this Nat gateway in a public
8176.35 -> VPC or sorry, public subnet, and so we're
gonna have to look here, I'm gonna launch
8180.55 -> it in the Bayshore public A, doesn't matter
which one just has to be one of the public
8184.27 -> ones. And we can also create an elastic IP
here. I don't know if it actually is required
8190.66 -> to assign a public IP, I don't know if it
really matters. Um, but we'll try to go ahead
8196.84 -> and create this here without any IP, no, it's
required. So we'll just hit Create elastic
8201.349 -> IP there. And that's just a static IP address.
So it's never changing. Okay. And so now that
8206.17 -> we have that, as associated with our Nat gateway,
we'll go ahead and create that. And it looks
8211.029 -> like it's been created. So once your Nat gateway
is created, the next thing we have to do is
8214.939 -> edit your route table. So there actually is
a way for that VPC, or sorry, that private
8220.889 -> instance to access the internet. Okay, so
let's go ahead and edit that route table.
8226.599 -> And so we created a private route table specifically
for our private EC two. And so here, we're
8232.51 -> going to edit the routes, okay. And we're
going to add a route from that private subnet to
8238.319 -> that NAT gateway. Okay. So we're just
going to type in 0.0.0.0/0.
8247.59 -> And we are then just going to go ahead, yep.
And then we're going to go ahead and choose
8251.849 -> our Nat gateway. And we're going to select
that there, and we're going to save that route.
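Route tables pick the most specific matching prefix, which is why this 0.0.0.0/0 entry catches "everything else" without overriding the local VPC route. A sketch of that selection logic; the NAT gateway ID here is a made-up placeholder:

```python
import ipaddress

# Our private route table after adding the default route
routes = {
    "10.0.0.0/16": "local",        # traffic inside the VPC stays local
    "0.0.0.0/0":   "nat-0abc123",  # everything else -> NAT gateway (placeholder ID)
}

def next_hop(dest_ip):
    dest = ipaddress.ip_address(dest_ip)
    matches = [ipaddress.ip_network(cidr) for cidr in routes
               if dest in ipaddress.ip_network(cidr)]
    # Longest prefix (largest prefixlen) wins
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[str(best)]

print(next_hop("10.0.0.25"))  # an instance in the public subnet -> local
print(next_hop("8.8.8.8"))    # the outside world -> out via the NAT gateway
```

Every route table always contains that non-removable `local` route for the VPC's own CIDR, so instance-to-instance traffic never touches the NAT gateway.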
8257.51 -> Okay, so now our Nat gateway is configured.
And so there should be a way for our instance
8263.07 -> to get to the internet. So let's go back and
do a ping. And back over here, in our private
8268.969 -> EC two instance, we're just going to go ahead
and ping Google here. Okay, and we're going
8273.619 -> to see if we get some pings back, and we do
so there you go. That's all we had to do to
8278.029 -> access the internet. All right. So why would
our private EC2 instance need to reach
8283.429 -> the internet? We don't want inbound traffic,
but we definitely want outbound, because we
8288.989 -> would probably want to update packages on
our EC two instance. So if we did a sudo,
8294.029 -> yum, update, okay, we wouldn't be able to
do this without an outbound connection. All
8298.149 -> right. So it's a way of like getting access
to the internet, only for the things that
8302.761 -> we need for outbound connections. Okay. So
now that we've seen how we can set an outbound
8312.639 -> connection to the Internet, let's talk about
how we could access other AWS services via
8317.55 -> our private EC two instance here. So s3 would
be a very common one to utilize. So I'm just
8322.439 -> going to go over to s3 here, I'm just going
to type in s3, and open this in a new tab,
8326.699 -> I'm going to try to actually access some s3
files here. Okay. And so I should already
8332.439 -> have a bucket in here called exam pro 000.
And I do have some images already in here
8338.429 -> that we should be able to access. And we did
get that Iam role permissions to access that
stuff there. So the CLI should
8349.029 -> be already pre-installed here. And so we'll
8349.029 -> just type in AWS s3, and it should be, if
we wanted to copy a file locally, we'll type
8356.779 -> in CP. And we're going to need to actually
just do LS, okay, so we'll do LS here, okay.
8362.409 -> I don't think we have to go as advanced as
copying and doing other stuff here. But you
8367.569 -> can definitely see that we have a way of accessing
s3 via the CLI. So what would happen if we
8374.279 -> removed that Nat gateway, would we still be
able to access s3? So let's go find out. All
8379.289 -> right. I think you know the answer to this,
but let's just do it. And then I'll show you
8383.08 -> a way that you can still access s3 without
a NAT gateway. All right. So we're going to
8389.239 -> go ahead here and just delete this Nat gateway,
it's not like you can just turn them off so
8392.66 -> you have to delete them. And we'll just wait
till that finishes deleting here. So our Nat
8398.74 -> gateway has deleted after a few moments here,
just hit the refresh button here just in case
8402.31 -> because sometimes it will say it's deleting
when it's already done. And you don't want
8405.66 -> to be waiting around for nothing. So
8407.449 -> let's go back to our EC two instance here.
We'll just clear the screen here. And now
8412.459 -> the question is, will we be able to access
8420.71 -> AWS s3 via the CLI here, okay, and
8420.71 -> so I hit Enter, and I'm waiting, waiting,
waiting. And it's just not going to complete
8424.52 -> because it no longer has any way to access
s3. So the way it works when using the CLI
8430.43 -> to AWS is it's going to go out
to the internet, out of the AWS network,
8434.699 -> and then come back into the AWS network to
then access s3. And so since there is no outbound
8441.189 -> way of connecting to the internet, there's
no way we're going to be able to connect to
8445.039 -> s3. Okay, so it seems a little bit silly,
because you'd say, Well, why wouldn't you
8449.659 -> just keep the traffic within the network because
we're already on an EC2 within the AWS network,
8454.939 -> and s3 is within the AWS network. And so that
brings us to endpoints, which is actually
how we can create, like, our own little private
8466.52 -> tunnel within the AWS network, so that we don't
have to leave out to the internet. So let's
go ahead and create an endpoint and see if
8470.319 -> we can connect to s3 without having outbound
8474.959 -> connectivity. So we're going to proceed to create
8474.959 -> our VPC endpoints on the left hand side, you're
going to choose endpoints, okay. And we're
8480.13 -> going to create ourselves a new endpoint.
And this is where we're going to select it
8485.39 -> for the service that we want to use. So this
is going to be for s3. But just before we
8489.64 -> do that, I want you to select the VPC that
we want this for down below. And then we're
8494.869 -> going to need this for s3. So we'll just scroll
down here and choose s3. Okay, and we're going
8499.909 -> to get a bunch of options here. Okay. And so
we're going to need to configure our
8506.479 -> route table. So we have that connection
there. And it's going to ask what route table
8511.13 -> Do you want put it in, and we're going to
want to put it in our private one, because
8514.54 -> that's where our private easy to instance
is. And then down below, we will have a policy
8519.399 -> here. And so this is going to be great. So
we will just leave that as is, and we're going
8524.76 -> to hit Create endpoint. Okay, so we're gonna
go back and hit close there. And it looks
8530.59 -> like our endpoint is available immediately
there. And so now we're going to go find out
8535.189 -> if we actually have access to s3. So back
over here we are in our private EC two instance.
8542.789 -> And I'm just going to hit up and see if we
now have access and look at that. So we've
8546.869 -> created our own private connection to s3 without
leaving the AWS network.
8556.17 -> Alright, so we had a fun time playing around
with our private EC two instance there. And
8563.989 -> so we're pretty much wrapped up here for stuff.
I mean, there's other things here, but you
8567.949 -> know, at the associate level, it's, there's
not much reason to get into all these other
8572.76 -> things here. But I do want to show you one
more thing for VP C's, which are VPC flow
8578.26 -> logs, okay, and so I want you to go over to
your VPC here, okay, and then I just want
8584.41 -> you to go up and I want you to create a flow
8590.89 -> log, so flow logs will track all the
traffic that is going through your
VPC. Okay, and so it's just nice to know how
to create that. So we can have it track accept,
8600.829 -> reject, or all. I'm going to set it to all,
8600.829 -> and it can either be delivered to cloudwatch
8604.899 -> logs, or s3. CloudWatch is a very good destination
8604.899 -> for that. In order to deliver that, we're
8611.489 -> going to need a destination log group, and I
8611.489 -> don't have one. So in order to send this to
a log group, we're going to have to go to
8617.81 -> CloudWatch, okay. We'll just open this up
in a new tab here. Okay. And then once we're
8624.949 -> here in CloudWatch, we're going to create
ourselves a new CloudWatch log group, alright.
8630.459 -> And we're just going to say actions create
log group, and we'll just call this Bayshore
VPC flow logs or VPC logs or flow logs,
okay. And we will hit Create there. And now
8643.83 -> if we go back to here and hit refresh, we
may have that destination now available to
8649.45 -> us. There it is. Okay, we might need an IAM
role associated with this, to have permissions
8654.999 -> to publish to CloudWatch Logs. So we're definitely
going to need permissions for that. Okay.
8662.579 -> And I'll just pop back here with those credentials
here in two seconds. So I just wanted to collect
8667.359 -> a little bit of flow log data, so I could
show it off to you to see what it looks like.
8671.96 -> And so you know, under our VPC, we can see
that we have flow logs enabled, we had just
8675.619 -> created that log there a moment ago. And
just to get some data, I went to my EC2
8680.62 -> instances, and we had that public one running
right. And so I just took that IP address.
8684.92 -> And I just started refreshing the page. I
don't know if we actually looked at the actual
8688.58 -> webpage I had here earlier on, but here it
is. And so I just hit enter, enter, enter
8693.689 -> here a few times. And then we can go to our
cloud watch here and look for the log here
8698.999 -> and so we should have some streams, okay.
And so if we just open that up, we can kind
8704.82 -> of see what we have here. And so we have some
raw data. So I'll just change it to text.
8709.09 -> And so here we can see the IP address of the
source and the destination and additional
8714.54 -> information, okay, and that we got a 200 response.
Okay, so there you go. I'm just wanting to
8721.59 -> give you a little quick peek into there.
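The raw records we just peeked at follow a fixed, space-separated layout: the default version 2 flow log format carries the version, account ID, network interface ID, source and destination address and port, protocol, packet and byte counts, the time window, and the ACCEPT/REJECT action. As a rough sketch of how you could work with one of those records outside the console, here is a small parser in Python; the sample record itself is made up for illustration.

```python
# Split a VPC Flow Log record (default version 2 format) into named fields.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(record: str) -> dict:
    """Map the space-separated values of one record onto field names."""
    return dict(zip(FIELDS, record.split()))

# A made-up sample record: TCP (protocol 6) traffic to port 80, accepted.
sample = ("2 123456789012 eni-0a1b2c3d 203.0.113.12 10.0.0.5 "
          "54915 80 6 10 840 1565898000 1565898060 ACCEPT OK")
parsed = parse_flow_log(sample)
print(parsed["srcaddr"], "->", parsed["dstaddr"], parsed["action"])
```

The source and destination addresses and the ACCEPT/REJECT action are the same fields we spotted in the CloudWatch log stream a moment ago.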
8729.239 -> So now we're done the VPC section, let's clean
up whatever we created here, so we're not
8733.72 -> incurring any costs. So we're gonna make our
way over to EC2 instances. And you can
8738.029 -> easily filter out the instances which are
in that VPC here by going to VPC ID
8744.71 -> here, and then just selecting the VPC. So
these are the three instances that are running,
8748.47 -> and I'm just going to terminate them all.
Because you know, we don't want to spend up
8752.789 -> our free credits or incur cost because of
that bastion in there. So just hit terminate
8757.499 -> there, and those things are going to now shut
down. We also have that VPC endpoint still
8763.859 -> running, just double check to make sure your
NAT gateway isn't still there. So under the
8768.64 -> back in the VPC section here, just make sure
of that. And there we have our gateway
8775.42 -> endpoint there for S3. So we'll just go ahead
and delete that. I don't believe it cost us
8779.44 -> any money, but it doesn't hurt to get that
out of the way there. We'll also check our
8783.38 -> Elastic IPs. Now it did create an EIP when
we created that gateway. EIPs that are not
8788.46 -> being utilized cost us money. So we'll go
ahead and release that EIP. Okay, we'll double
8794.6 -> check our gateway. So under NAT gateways,
making sure nothing is running under there,
8800.539 -> and we had deleted it previously, so we're good,
we can go ahead and delete these other things
8804.14 -> not so important. But we can go ahead and
attempt to delete them, it might throw an
8809.789 -> error here. So we'll see what it says. Nope,
they all deleted. That's great. So those
8815.13 -> are deleted now. And then we have our route
table. So we can delete those two route tables,
8820.61 -> okay. And we can get rid of our internet gateway,
so we can find that internet
8826.26 -> gateway.
8828.37 -> There it is, okay, we will detach it, okay.
And then we will go ahead and delete that
8835.84 -> internet gateway, okay. And we'll go attempt
to delete our, our actual VPC now, we'll see
8843.569 -> if there's any dependencies here. So if we
haven't deleted all the things that it wants,
8847.459 -> from here, it's going to complain. So there
might be some security groups here, but we'll
8850.78 -> find out in a second. Oh, it just deleted it
for us. Great. So it just deleted it there.
8854.909 -> So we're all cleaned up. So there you go.
8860.02 -> Hey, this is Andrew Brown from ExamPro. And
we are looking at Identity and Access Management,
8867.39 -> IAM, which manages access of AWS users and
resources. So now it's time to look at IAM
8877.81 -> core components. And we have these so-called
identities. And those are going to be users,
8882.06 -> groups and roles. Let's go through those.
So a user is an end user who can log into
8886.449 -> the console or interact with AWS resources
programmatically, then you have groups, and
8891.05 -> that is when you take a bunch of users and
you put them into a logical grouping. So they
8895.11 -> have shared permissions. That could be administrators,
developers, auditors, whatever you want to
8899.51 -> call that, then you have roles and roles,
have policies associated with them. That's
8904.119 -> what holds the permissions. And then you can
take a role and assign it to users or groups.
8909.659 -> And then down below, you have policies. And
this is a JSON document, which defines the
8914.569 -> rules in which permissions are allowed. And so
those are the core components. But we'll get
8919.819 -> more in detail to all these things. Next.
So now that we know the core components,
8925.479 -> let's talk about how we can mix and match
them. Starting at the top here, we have a
8928.959 -> bunch of users in a user group. And if we
want to, en masse, apply permissions, all we
8934.17 -> have to do is create a role with the policies
attached to that role. And then once we attach
8939.439 -> that role to that group, all these users have
that same permission great for administrators,
8943.7 -> auditors, or developers. And this is generally
the way you want to use IAM when assigning
8952.609 -> roles to users. You can also assign a role
directly to a user. And then there's also
8959.229 -> a way of assigning a policy, which is called
inline policy directly to a user. Okay, so
8965.47 -> why would you do this? Well, maybe you have
exactly one action you want to attach to this
8970.14 -> user and you want to do it for a temporary
amount of time. You don't want to create a
8974.579 -> managed role because it's never
going to be reused for anybody else. There
8978.37 -> are use cases for that. But generally, you
always want to stick with the top level here.
8982.149 -> A role can have multiple policies attached
to it, okay. And also a role can be attached
8988.359 -> to certain AWS resources. All right. Now,
there are cases where resources actually have
8995.77 -> inline policies directly attached to them,
but there are cases where you have roles attached
9002.17 -> to or somehow associated to resources, all
right. But generally, this is the mix and
9007.81 -> match of it. If you were taking the AWS
security certification, then this stuff
9014.579 -> in detail really matters. But for the associate
and the pro level, you just need to conceptually
9019.89 -> know what you can and cannot do. All right.
So in IAM you have different types of policies,
9029.899 -> the first being managed policies, these are
ones that are created by AWS out of convenience
9034.229 -> for you for the most common permissions you
may need. So over here, we'd have Amazon EC2
9039.67 -> full access, you can tell that it's a managed
policy, because it says it's managed by AWS,
9044.42 -> and an even further indicator is this orange
box, okay? Then you have customer managed
9050.17 -> policies, these are policies created by you,
the customer, they are editable, whereas
9054.14 -> the managed policies are read only. They're
marked as customer managed, you don't have
9058.93 -> that orange box. And then last are inline
policies. So inline policies, you don't manage
9064.619 -> them, because they're like they're one and
done. They're intended to be attached directly
to a user or directly to a resource. And
they're not managed, so you can't
9076.659 -> apply them to more than one identity or resource.
Okay, so those are your three types of policies.
9087.17 -> So it's now time to actually look at a policy
here. And so we're just going to walk through
9092.292 -> all the sections so we can fully understand
how these things are created. And the first
9096.13 -> thing is the version and the version is the
policy language version. If this changes,
9100.77 -> then that means all the rules here could change.
So this doesn't change very often, you can
9106.35 -> see the last time was 2012. So it's going
to be years until they change it, if they
9110.02 -> did make changes, it probably would be minor,
okay, then you have the statement. And so
9115.249 -> the statement is just a container for the
other policy elements. So you can have a single
9119.899 -> one here. So here I have an array, so we have
multiples. But if you didn't want to have
9124.939 -> multiples, you just get rid of the square
brackets there, you could have a single policy
9130.43 -> element there. Now going into the actual policy
element, the first thing we have is Sid and
9134.8 -> this is optional. It's just a way of labeling
your statements. So Sid stands for
9139.239 -> statement identifier, you know, again, it's
optional, then you have the effect, the effect
9145.311 -> can be either allow or deny. And that's going
to set the conditions or the access for
9151.609 -> the rest of the policy. The next thing is
we have the action. So actions can be individualized,
right. So here we have IAM and we have an
individual one, or we can use asterisk to
9163.47 -> select everything under s3. And these are
the actual actions the policy will allow or
9169.47 -> deny. And so you can see we have a deny policy,
and we're denying all access to S3 for a very
9176.439 -> specific user here, which gets us into the
principal. And the principal is kind of a
9181.229 -> conditional field as well. And what you can
do is you can specify an account a user role
9185.619 -> or federated user, to which you would like
to allow or deny access. So here we're really
9190.49 -> saying, hey, Barkley, you're not allowed to
use S3, okay, then you have the resource,
9195.499 -> that's the actual thing. That is we're allowing
or denying access to so in this case, it's
a very specific S3 bucket. And the last thing
is condition. And so condition is going to
9206.649 -> vary based on the resource, but
here we have one, and it does something, but
9215.39 -> I'm just showing you that there is a condition
in here. So there you go, that is the makeup
9219.579 -> of a policy, if you can master these
things, it's going to make your life a whole
9225.459 -> lot easier. But you know, just learn what
you need to learn.
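Putting those elements together, a statement like the one described, denying one user all S3 actions on one bucket, can be sketched as JSON. The account ID, user name, and bucket name below are made-up placeholders, and note that the Principal element appears in resource-based policies such as bucket policies:

```python
import json

# Sketch of a deny statement like the one walked through above.
# The account ID, user name, and bucket name are placeholders.
policy = {
    "Version": "2012-10-17",            # policy language version
    "Statement": [
        {
            "Sid": "DenyS3ForOneUser",  # optional statement label
            "Effect": "Deny",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/barkley"},
            "Action": "s3:*",           # asterisk selects every S3 action
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

# Serialize it the way you'd paste it into the policy JSON editor.
document = json.dumps(policy, indent=2)
print(document)
```

Building the document programmatically like this also guarantees the JSON is well-formed before you paste it into the console.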
9234.1 -> So you can also set up password policies for
your users. So you can set like the minimum
9238.67 -> password length or the rules to what makes
up a good password, you can also rotate out
9244.479 -> passwords, so that is an option you have as
well. So it will expire after x days. And
9248.96 -> then a user then must reset that password.
So just be aware that you have the ability
9252.932 -> to set password policies.
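As a rough illustration of the kinds of rules a password policy enforces, here is a small checker in Python. The specific rule set, a minimum length plus four character classes, just mirrors the options mentioned above; it's a sketch, not AWS's actual validation logic.

```python
import re

def check_password(password: str, min_length: int = 8) -> list:
    """Return the list of violated rules; an empty list means it passes."""
    problems = []
    if len(password) < min_length:
        problems.append("too short")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("no number")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("no non-alphanumeric character")
    return problems

print(check_password("weak"))         # violates four of the five rules
print(check_password("Str0ng!Pass"))  # passes: []
```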
9259.43 -> Let's take a look at access keys. Because
this is one of the ways you can interact with
AWS programmatically, either through the AWS
9268.67 -> CLI or the SDK. So when you create a user
9268.67 -> and you say it's allowed to have programmatic
access, it's going to then create an access
9273.57 -> key for you, which is an ID and a secret access
key. One thing to note is that you can
9279.82 -> only have up to two access keys within your
account. Down below, you can see that we have
9285.039 -> one, as soon as we add a second one, that
grey button for create access key will vanish.
9289.619 -> And if we want more, we would
have to remove keys, okay. But you know,
9294.529 -> just be aware that that's what access keys
are, and you can make them inactive, and you're
9298.5 -> only allowed to have two. Let's quickly talk about
MFA. So MFA can be turned on per user. But
9308.7 -> there is a caveat to it where the user has
to be the one that turns it on. Because when
9314.729 -> you turn it on, you have to connect it to
a device, and your administrator is not going
9317.81 -> to have the device, so it's on the user to do
so. There is no option for the administrator
9322.89 -> to go in and say, Hey, you have to use MFA.
So it cannot be enforced directly from an
9329.329 -> administrator or root account. But what the
administrator can do, if they want, is they
9335.859 -> can restrict access to resources only to people
that are using MFA, so you can't make the
9340.829 -> user account itself have MFA. But you can
definitely restrict access to API calls and
9346.29 -> things like that. And this is Andrew Brown
from ExamPro, and we are going to do the
9355.199 -> IAM follow along. So let's make our way over
to the IAM console. So just go up to services
9360.55 -> here and type in IAM, and we will get to learning
this right away. So here I am on the
9369.43 -> IAM dashboard, and we have a couple things
that AWS wants us to do. It wants us
9374.55 -> to set MFA on our root account. It also wants
us to apply an IAM password policy, so that
9381.42 -> our passwords stay very secure. So let's take
what they're saying in consideration and go
9386.99 -> through this. Now I am logged in as the root
user. So we can go ahead and set MFA. So what
9392.439 -> I want you to do is drop this down as your
root user and we'll go manage MFA. And we
9397.789 -> will get to this here. So this is just a general
disclaimer here to help you get started here.
9408.359 -> I don't ever want to see this again. So I'm
just going to hide it. And we're going to
9411.239 -> go to MFA here and we're going to activate
MFA. So for MFA, we have a few options available.
9417.37 -> We have a virtual MFA, this is what you're
probably most likely going to use where you
9420.89 -> can use a mobile device or computer, then
you can use a U2F security key. So this is
9428.159 -> like a YubiKey. And I actually have a YubiKey,
but we're not going to use it for this, but
9433.06 -> it's a physical device, which holds the credentials,
okay, so you can take this key around with
9437.739 -> you. And then there are other hardware mechanisms.
Okay, so but we're going to stick with virtual
9443.91 -> MFA here. Okay, so we'll hit Continue. And
what it's going to do is it's going to you
9449.359 -> need to install a compatible app on your mobile
phone. So if we take a look here, I bet you
9454.06 -> authenticator is one of them. Okay. So if
you just scroll down here, we have a few different
9460.869 -> kinds. I'm just looking for the virtual ones.
Yeah. So for Android or iPhone, we have Google
9466.55 -> Authenticator or Authy two-factor authentication.
So you're going to have to go install authenticator
9472.52 -> on your phone. And then when you are ready
to do so you're going to have to show this
9476.31 -> QR code. So I'm just going to click that and
show this to you here. And then you need to
9480.779 -> pull out your phone. I know you can't see
me doing this, but I'm doing it right now.
9485.399 -> Okay. And I'm not too worried that you're
seeing this because I'm going to change this
9490.51 -> two factor authentication out here. So if
you decide that you want to also add this
9495.029 -> to your phone, you're not going to get too
far. Okay, so I'm just trying to get my authenticator
9500.55 -> app out here, and I'm gonna hit plus and the
thing and I can scan the barcode, okay. And
9508.249 -> so I'm just going to put my camera over it
here. Okay, great. And so it has saved the
9515.01 -> secret. All right, and so it's been added
to Google Authenticator. Now, now that I have
9520.49 -> it in my application, I need to enter in to
two consecutive MFA codes. Okay, so this is
9526.76 -> a little bit confusing. It took me a while
to figure this out. The first time I was using
9529.869 -> AWS, the idea is that you need to set the
first one. So the first one I see is 089265.
9536.97 -> Okay, and so I'm just going to wait for the
next one to expire, okay, so there's a little
9542.85 -> circle that's going around. And I'm just waiting
for that to complete to put in a second one,
9547.739 -> which just takes a little bit of time here.
9551.76 -> Still going here. Great. And so I have new
numbers. So the numbers are 369626. Okay,
9565.46 -> so it's not the same number, but it's two
consecutive numbers, and we'll hit assign
9570.93 -> MFA. And now MFA has been set on my phone.
So now when I go and log in, it's going to
9577.029 -> ask me to provide additional code. Okay, and
so now my root account is protected. So we're
9583.41 -> gonna go back to our dashboard, and we're
gonna move on to password policies. Okay.
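Those rotating six-digit codes are TOTP values (RFC 6238): the app hashes a shared secret, delivered in the QR code, together with the current 30-second time window. As a rough sketch of what the authenticator computes; the base32 secret below is a made-up demo value, not a real credential:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = timestamp // step                      # 30-second window number
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

demo_secret = "JBSWY3DPEHPK3PXP"  # made-up demo secret, base32-encoded
print(totp(demo_secret, int(time.time())))
```

This is also why two consecutive codes were needed: they prove the device's clock and secret line up across two adjacent time windows.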
9588.37 -> So let's take the recommendation down here
and manage our password policy. Okay. And
9593.81 -> we are going to set a password policy. So
password policy allows us to enforce some
9598.83 -> rules that we want to have on your users.
And so to make passwords a lot stronger, so
9603.87 -> we can say it should require at least one
uppercase letter, one lowercase letter, at
9609.01 -> least one number, a non-alphanumeric character,
enable the password expiration. So after 90
9617.359 -> days, they're going to have to change the
password, you can have password expiration
9621.18 -> requires the administrator reset, so you can't
just reset it, the admin will do it for you
9627.459 -> allow users to change their own password is
something you could set as well. And then
9630.8 -> you could say prevent password reuse. So for
the next five passwords, you can't reuse the
9634.729 -> same one. Okay? So I would probably put
this at a high number, so there is a very
9639.85 -> high chance they won't use the same one. Okay,
so, um, yeah, there we go. We'll just hit
9644.39 -> Save Changes. And now we have a password policy
in place. Okay. And so that's how
9652.119 -> that will be. So to make it easier for users
to log into the IAM console, you can provide
9657.729 -> a customized sign in link here. And so here,
it has the account ID, or I think that's the
9664.37 -> account ID but we want something nicer here.
So we can change it to whatever you want.
9668.449 -> So I can call it Deep Space Nine. Okay. And
so now what we have is if I spelt that, right,
9675.14 -> I think so yeah. So now that we have a more
convenient link that we can use to login with,
9680.189 -> okay, so I'm just going to copy that for later,
because we're going to use it to login. I
9684.859 -> mean, obviously, you can name it, whatever
you want. And I believe that these are just
9689.439 -> like, I'm like picking your Yahoo or
your Gmail email, it has to be unique.
9695.43 -> Okay, so you're not gonna be able to use Deep
Space Nine, as long as I have it in use, I believe.
9700.169 -> But yeah, okay, so maybe we'll move on to
actually creating a user. So here I am under the
9706.199 -> Users tab in IAM, and we already have an
existing user that I created for myself, when
9710.869 -> I first set up this account, we're going to
create a new user so we can learn this process.
9714.709 -> So we can fill the name here, Harry Kim, which
is the character from Star Trek Voyager, you
9720.029 -> can create multiple users in one go here,
but I'm just gonna make one. Okay, I'm going
9724.489 -> to give him programmatic access and also access
to the console, so he can log in. And I'm
9730.75 -> going to have an auto generated password here,
so I don't have to worry about it. And you
9733.97 -> can see that it will require them to reset
their password when they first sign in. So
9738.029 -> going on to permissions, we need to usually
put our users within a group, we don't have
9743.22 -> to, but it's highly recommended. And here
I have one called admin, which has
9747.72 -> administrator access, I'm going to create
a new group here, and I'm going to call it
9752.01 -> developers okay. And I'm going to give them
power access, okay, so it's not full access,
9758.659 -> but it gives them quite a bit of control within
the system. Okay. And I'm just going to create
9763.39 -> that group there. And so now I have a new
group. And I'm going to add Harry to that
9769.399 -> group there. And we will proceed to the next
step here. So we have tags, ignore that we're
9775.749 -> going to review and we're going to create
Harry Kim the user. Okay. And so what it's
9780.739 -> done here is it's also created a secret access
key and a password. Okay, so if Harry wants
9787.029 -> that programmatic access, he can use these
and we can send the an email with the, with
9792.26 -> this information along to him. Okay.
9794.529 -> And, yeah, we'll just close that there. Okay.
And then we will just poke around here in
9798.77 -> Harry Kim for a little bit.
9800.31 -> So just before we jump into Harry Kim, here,
you can see that he has never used his access
9804.76 -> key. He, the last time his password was used
was today, which was set today. And there
9809.35 -> is no activity and he does not have MFA. So
if we go into Harry Kim, we can look around
9814.899 -> here. And we can see that he has policies
applied to him from a group. And you can also
9820.909 -> individually attach permissions to him. So
we have the ability to give them permissions
9824.949 -> via group. Or we can copy permissions from
existing user or we can attach policies directly
9829.39 -> to them. So if we wanted to give
them S3 full access, we could do so here.
9834.939 -> Okay. And then we can just apply those permissions
there. And so now he has those permissions.
9841.039 -> We also have the ability to add inline policies.
Okay, so in here, we can add whatever we want.
9846.52 -> And so we could use the visual editor here
and just add an inline policy. Okay, so I'm
9851.01 -> just trying to think of something we could
give him access to some EC2, okay, but he
9854.979 -> already has access to it because he's a power
user, but we're just going through the motions
9857.959 -> of this here. So I'm gonna select EC2,
and we're going to give him just a list access,
9864.739 -> okay. And we're going to say review policy,
okay. And so we have full access there.
9871.939 -> And we can name the policy here so we can
say, FullHarry, okay. FullHarryEC2.
9879.199 -> And we'll go ahead and create that policy.
Maximum policy size exceeded for Harry
9886.169 -> Kim, I guess it's just that it has a lot of
stuff in there. So I'm gonna go previous,
9891.199 -> okay, and I guess it's just a very large policy.
So I'm just going to pare that down there.
9897.14 -> Okay. Again, this is just for show, so it doesn't
really matter what we select here. And then
9902.811 -> we'll go ahead and review that policy, and
then we will create that policy. Okay. And
9907.329 -> so here we have an inline policy, or a policy
directly attached. And then we have a manage
9912.989 -> policy, okay. And then we have a policy that
comes from a group. Alright. So that's policies,
9920.499 -> we can see what group he belongs to here and
add them to additional groups, tags or tags,
9925.47 -> we can see his security credentials here.
So we could manage whether we want to change
9930.89 -> whether he has full access or not retroactively.
And we can fiddle with his password or reset
9936.33 -> it for him here. And we can manage MFA. So
we can set MFA for this user. Normally, you
9944.38 -> want the user to do it, by themselves, because
if you had to set MFA for them, as administrator,
9949.47 -> they'd have to give you their phone to
set this up. But I guess if you
9953.64 -> had a YubiKey, you could set up the YubiKey for
them, and then give them the YubiKey. And then
9959.279 -> we have the access keys. So you can have up
to two access keys within your account here.
9963.55 -> Okay, so I can go ahead and create another
one, it's a good habit to actually create
9967.361 -> both of them, because for security purposes,
if you take the AWS security certification,
9973.479 -> one way of compromising an account is always
taking up that extra slot, you can also make
9977 -> these inactive. Okay? So if this becomes
inactive, you can set them. All right. But see,
9982.609 -> we still can't create any additional keys,
we have to hit the x's here. And so then we
9987.51 -> can create more access keys. Okay, if we were
using CodeCommit, we could upload their SSH
9994.06 -> key here. And so same thing, we can generate
credentials for CodeCommit. And then there's
10001.56 -> access advisor. This gives you general
advice of like what access they have, I think.
10007.3 -> So we can scroll down here and see what do
they actually have access to? And when did
10011.47 -> they last access something? Okay. And then
there's the ARN for Harry Kim. So it's something
10016.939 -> that we might want to utilize there. So we
got the full gamut of this here. Okay. And
10021.119 -> so I'm just going to go ahead and delete Harry,
because we're pretty much done here. Okay.
10027.189 -> Great. And so there we are. So that was the
run through with users. So just to wrap up
10033.829 -> this section, we're just going to cover
roles and policies here. So first, we'll go
10037.72 -> into policies. And here we have a big list
of policies here that are managed by AWS,
10042.46 -> they say they're managed over here, and you
can definitely tell because they're CamelCase.
10046.729 -> And they also have this nice little orange
box, okay. And so these are policies, which
10051.76 -> you cannot edit, they're read only, but they're
a quick and fast way for you to start giving
10056.239 -> access to your users. So if we were just to
take a look at one of them, like the EC2 full access one,
10061.199 -> or maybe just read only access, we can click
into them. And we can see kind of the summary
10066.919 -> of what we get access to. But if we want to
actually see the real policy here, we can
10071.419 -> see it in full. Alright. And so we do have
some additional things here to see actually
10077.55 -> who is using this policy. So we can see that
we have a role named
10083.149 -> that I created here for a different follow
along and it's utilizing this policy right
10087.54 -> now. We also have policy versions. So a policy
can have revisions over time, and you can have
10093.319 -> up to five of them, okay, so if you ever need
to roll back a policy or just see how things
10097.76 -> have changed, we can see that here. And we
also have the access advisor, which tells
10101.739 -> us who is utilizing this. So again, for Amazon
ECS, we're seeing that role being utilized
10107.83 -> for this custom role that I've created. Okay,
so let's just actually copy the JSON
10113.029 -> here, so we can actually go try and make our
own policy. Okay, because we did create a
10117.109 -> policy for Harry Kim, but it would be nice to
actually create one with the JSON here. So
10121.289 -> we'll go back to the policy here. And we'll
create a new policy, we'll go to the JSON
10125.279 -> editor here, and we will paste that in. And
we do cover this obviously, in the lecture
10129.499 -> content, but you know, you have to specify
the version, then you need a statement. And
10134.08 -> the statement has multiple actions
within here that you need to define. Okay.
10142.41 -> And so here we have one that has an allow
effect, and it is for the action EC2
10147.189 -> describe. And it's for all possible resources.
Okay, so we're going to go ahead and create
10152.189 -> that there. And we're just going to name this
as my read-only EC2 access, okay. We're
10160.319 -> just gonna go ahead and create that policy.
Okay. And so we have that policy there. I'm
10167.25 -> just going to search it there quickly. And
you can see that this is customer managed,
10170.85 -> because it doesn't have the orange box. And
it says that it's customer managed. All right.
10175.649 -> So let's just go ahead and create
a role now. So we can go ahead and create
10179.189 -> a role. And so generally, you want to choose
the role, who this role is for. We're gonna
10185.09 -> say this is for EC2, okay, but you could
also set it for another AWS account. This
10190.779 -> is for creating cross-account roles. Then you have
web identity and SAML. We're gonna stick with
10195.619 -> services here and go to EC2, and now we
have the option to choose the policies,
10200.979 -> and we can create or, sorry, choose multiple
policies here. So I'm going to do my
10204.46 -> read-only EC2 here, okay? But we could also
select another one, I guess S3, okay. And
10211.51 -> I'm just going to skip tags because they don't
care. And we're going to go next review, I'm
10214.739 -> going to say my role, alright, and it shows the
policies that were attached there. And now
10219.329 -> we can create that role. Okay. And so we now
have that role. And so we can now attach it,
10224.38 -> we can attach that role to a resource, such
as when we launch an EC2 instance,
10230.439 -> we could assign it that way, or, you know,
associate it with a user, but yeah, there you
10239.58 -> go.
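One detail the console hides when you pick EC2 as the trusted service: it generates a trust policy on the role that lets the EC2 service assume it. Sketched here just to show the shape of that document:

```python
import json

# The trust (assume role) policy generated when EC2 is the trusted service:
# it allows the EC2 service to assume the role on an instance's behalf.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
print(json.dumps(trust_policy, indent=2))
```

So the permissions policies we attached say what the role can do, while this trust policy says who is allowed to use the role.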
10241.329 -> We're onto the IAM cheat sheet. Let's jump
into it. So Identity and Access Management is
10245.42 -> used to manage access to users and resources.
IAM is a universal system, so it's applied
10250.6 -> to all regions at the same time. IAM is also
a free service. A root account is the account
10256.09 -> initially created when AWS is set up, and
so it has full administrator access. New IAM
10261.26 -> accounts have no permissions by default until
granted, new users get assigned an access
10266.34 -> key ID and secret when first created when
you give them programmatic access. Access
10271.26 -> keys are only used for the CLI and SDK,
they cannot access the console. Access keys
10277.09 -> are only shown once when created, if lost,
they must be deleted and recreated again,
10281.84 -> always set up MFA for your root accounts.
Users must enable MFA on their own; administrators
10286.8 -> cannot turn it on for each user. IAM allows
you to set password policies to set minimum
10294.239 -> password requirements or rotate passwords.
Then you have IAM identities, such as users,
10299.56 -> groups and roles and we'll talk about them
now. So we have users, those are the end users
10303.59 -> who log into the console or interact with
AWS resources programmatically, then you have
10308.449 -> groups. So that is a logical group of users
that share all the same permission levels
10312.72 -> of that group. So think administrators, developers,
auditors, then you have roles, which associates
10318.43 -> permissions to a role, and then that role
is then assigned to users or groups. Then
10323.039 -> you have policies. So that is a JSON document,
which grants permissions for specific users
10328.05 -> groups, roles to access services, policies
are generally always attached to Iam identities,
10333.729 -> you have some variety of policies, you have
managed policies, which are created by AWS,
10338.459 -> that cannot be edited, then you have customer
managed policies, those are policies created
10342.56 -> by you that are editable and you have inline
policies which are directly attached to the
10347.959 -> user. So there you go, that is IAM. Hey,
this is Andrew Brown from ExamPro. And we
10356.34 -> are looking at Amazon Cognito, which is a
decentralized way of managing authentication.
10361.859 -> So think sign-up and sign-in integration for your
apps, social identity providers like connecting
10366.51 -> with Facebook or Google. So Amazon Cognito
actually does multiple different things. And
10373.26 -> we are going to look at three things in specific,
we're going to look at cognito user pools,
10377.359 -> which is a user directory to authenticate
against identity providers, we're going to
10381.499 -> look at cognito identity pools, which provides
temporary credentials for your users to access
10386.01 -> native services. And we're gonna look at cognito
sync, which syncs users data and preferences
10390.039 -> across all devices. So let's get to
10397.31 -> So to fully understand Amazon Cognito, we
have to understand the concepts of web identity
10401.34 -> federation and identity providers. So let's
go through these definitions. So for web identity
10406.8 -> federation, it's to exchange the identity
and security information between an identity
10411.64 -> provider and an application. So now looking
at identity provider, it's a trusted provider
10418.05 -> for your user identity that lets you authenticate
to access other services. So an identity provider
10423.919 -> could be Facebook, Amazon, Google, Twitter,
GitHub, or LinkedIn. You commonly see this on
10429.649 -> websites where it allows you to log in with
a Twitter or GitHub account; that is an identity
10434.989 -> provider. So that would be Twitter or GitHub.
And they're generally powered by different
10440.22 -> protocols. So whenever you're doing this with
social accounts, it's going to be with
10444.761 -> OAuth, and that can be powered by OpenID
Connect, which is pretty much the standard
10448.68 -> now. There are other identity providers;
so if you needed a single sign-on solution,
10454.439 -> SAML is the most common one. Alright. So the
first thing we're looking at is Cognito user
10463.33 -> pools, which is the most common use case for
Cognito. And that is just a directory of your
10469.069 -> users, which is decentralized here. And it's
going to handle actions such as sign-up, sign-in,
10474.58 -> account recovery, that would be like
resetting a password, and account confirmation,
10479.14 -> that would be like confirming your email after
sign-up. And it has the ability to connect
10484.069 -> to identity providers. So it does have its
own email and password form that it can
10488.729 -> take, but it can also leverage, maybe if you
want to have, Facebook Connect or Amazon,
10494.569 -> etc. You can do that as well. The way
it persists a connection after it authenticates
10500.52 -> is that it generates a JWT. So that's how you're
going to persist that connection. So let's
10504.689 -> look at more of the options so that we can
really bake in the utility here of user pools.
10509.569 -> So here on the left-hand side, we have a bunch of
different settings. And for attributes, we
10513.979 -> can determine what should be our primary attribute:
should it be our username when they sign up,
10518.97 -> or should it be email and phone number?
And, you know, can they sign up or
10526.01 -> sign in if the email address hasn't been
verified, or the conditions around that? We
10530.42 -> can set the restrictions on the password, the
length, if it requires special characters.
10535.68 -> We can see what kind of attributes are required
to collect on signup, if we need their birthday,
10539.84 -> or email, etc. It has the capability of
turning on MFA. So if you want multi-factor
10544.13 -> authentication, it's a very easy way to integrate
that. If you want to have user campaigns, so
10549.249 -> if you're used to sending out campaigns
via MailChimp, you can easily integrate Cognito
10553.859 -> with Pinpoint, which is user campaigns, right.
And you also can override a lot of functionality
10561.68 -> using Lambda. So anytime a sign-up,
sign-in, or password recovery is triggered,
10566.409 -> there is a hook so that you can then trigger
Lambda to do something with that. So that's
10570.329 -> just some of the things that you can do with Cognito
user pools. But the most important thing to
10573.749 -> remember is that it's a way of decentralizing
your authentication. That's it for user pools.
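Since user pools persist the session by issuing a JWT, it can help to see what that token actually is. Here is a minimal sketch in Python using only the standard library (Python stands in for the course's Ruby/JavaScript examples here, and the token and its claims are invented for illustration, not real Cognito output): a JWT is three dot-separated base64url parts, and the middle part carries the user's claims.

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    # JWT segments are base64url-encoded with padding stripped,
    # so restore the padding before decoding.
    padding = "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment + padding)

def decode_jwt_payload(token: str) -> dict:
    # A JWT is three dot-separated parts: header.payload.signature.
    # We only inspect the payload; actually verifying the signature
    # requires the issuer's public key and a JOSE library.
    header_b64, payload_b64, _signature = token.split(".")
    return json.loads(b64url_decode(payload_b64))

# Build a fake token for illustration (these claims are invented).
claims = {"sub": "abc-123", "email": "spock@example.com", "token_use": "id"}
payload_b64 = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).rstrip(b"=").decode()
fake_token = f"eyJhbGciOiJSUzI1NiJ9.{payload_b64}.fake-signature"

decoded = decode_jwt_payload(fake_token)
print(decoded["email"])  # spock@example.com
```

In practice you would verify the token's signature against the user pool's published public keys rather than trusting a decoded payload as-is.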
10580.479 -> All right. So now it's time to look at Cognito
identity pools. Identity pools provide temporary
10587.97 -> AWS credentials to access services such
as DynamoDB or S3. Identity pools can be
10592.689 -> thought of as the actual mechanism authorizing
access to the AWS resources. So you know,
10598.819 -> the idea is you have an identity pool, you're
going to say who's allowed to generate those
10603.609 -> AWS credentials, and then use the SDK to generate
those credentials. And then that application
10608.669 -> can then access those AWS services. So
just to really hit that home here, I do have
10614.72 -> screenshots to give you an idea what that
is. So first, we're going to choose a provider.
10619.34 -> So our provider can be authenticated, so we
can choose Cognito, or even a variety of other
10624.66 -> ones, or you can have it unauthenticated.
So that is also an option for you. And then
10630.039 -> after you create that identity pool, they
have an easy way for you to use the SDK. So
10635.02 -> you could just drop down your platform and
you have the code and you're ready to go
10638.399 -> get those credentials. If you're thinking,
did I actually put in my real example
10644.76 -> or identity pool ID there? It's not. I actually
go in and replace all these.
10648.989 -> So if you're ever wondering watching these
videos and you're seeing these things, I always
replace them.
10657.85 -> We're going to just touch on one more, which
is Cognito Sync. And so Sync lets you sync
10662.89 -> user data and preferences across all devices
with one line of code. Cognito uses push notifications
10667.649 -> to push those updates and synchronize data.
And under the hood, it's using Simple Notification
10672.329 -> Service to push this data to devices. And
the data, which is user data and preferences,
10680.489 -> is key-value data. And it's actually stored
with the identity pool. So that's what you're
10684.81 -> pushing back and forth. But the only thing
you need to know is what it does. And what
10688.39 -> it does is it syncs user data and preferences
across all devices with one line of code.
10697.05 -> So we're onto the Amazon Cognito cheat sheet,
and let's jump into it. So Cognito is a decentralized
10702.529 -> managed authentication system. So when you
need to easily add authentication to your
10706.529 -> mobile or desktop apps, think Cognito. So
let's talk about user pools. So a user pool
10711.52 -> is a user directory that allows users to authenticate
using OAuth 2.0 IdPs, such as Facebook, Google,
10716.88 -> or Amazon, to connect to your web applications.
And a Cognito user pool is in itself an IdP,
10723.72 -> all right, so it can be on that list as well.
User pools use JWTs to persist authentication.
10730.199 -> Identity pools provide temporary AWS
credentials to access services such as S3
10735.93 -> or DynamoDB. Cognito Sync can sync user data and
preferences across devices with one line of
10740.39 -> code, powered by SNS. Web identity federation:
they're not going to ask you these questions,
10744.479 -> but you need to know what these are. It's to exchange
identity and security information between an
10748.17 -> identity provider and an application. An identity
provider is a trusted provider for your user,
10753.03 -> to authenticate, or sorry, to identify that
user, so you can use them to authenticate to access
10759.249 -> other services. Then you have OIDC, which is a
type of identity provider which uses OAuth,
10764.409 -> and you have SAML, which is a type of identity
provider which is used for single sign-on.
10768.709 -> So there you go. We're done with Cognito. Hey,
this is Andrew Brown from ExamPro, and we
10777.18 -> are going to take a look here at the AWS command
line interface, also known as the CLI, which lets you control
10782.3 -> multiple AWS services from the command
line and automate them through scripts. So the
10788.539 -> CLI lets you interact with AWS from anywhere
by simply using a command line. So down below
10793.92 -> here I have a terminal and I'm using the AWS
CLI, which starts with aws. So to get it installed
10800.43 -> on your computer, AWS has a script, a Python
script, that you can use to install the CLI.
10806.34 -> But once it's installed, you're going to now
have the ability to type aws within your terminal,
10810.89 -> followed by a bunch of different commands.
And so the things that you can perform from
10814.47 -> the CLI are: you could list buckets, upload
data to S3, launch, stop, start and terminate
10819.369 -> EC2 instances, update security groups,
create subnets; there's an endless amount
10823.42 -> of things that you can do. All right. And
so I just wanted to point out a couple of
10828.13 -> very important flags. Flags are these things
where we have hyphen, hyphen, and then we
10831.909 -> have a name here. And this is going to change
10837.359 -> the behavior of these CLI commands. So we
have output, and so the output is what's going
to be returned to us. And we have the option
10840.909 -> between JSON, table, and plain text.
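To make those output formats a little more concrete, here is a small Python sketch that mimics the shapes the CLI prints (this is only an illustration of the idea, not the CLI's actual implementation, and the bucket names are invented):

```python
import json

# A response shaped roughly like what a list-buckets call returns.
response = {"Buckets": [{"Name": "exam-pro-000"}, {"Name": "enterprise-d"}]}

# --output json style: pretty-printed JSON, easy for programs to parse.
as_json = json.dumps(response, indent=4)

# --output text style: tab-separated rows, easy for shell tools like awk.
as_text = "\n".join(f"BUCKETS\t{b['Name']}" for b in response["Buckets"])

print(as_json)
print(as_text)
```

The table format is the same data again, just rendered as an ASCII table for human reading.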
For profiles, if you are switching between
10847.53 -> multiple AWS accounts, you can specify the
profile, which is going to reference the
10852.729 -> credentials file to quickly let you perform
CLI actions under different accounts. So there
10860.42 -> you go. So now we're going to take a look
at the AWS software development kit, known
10866.979 -> as the SDK. And this allows you to control multiple
AWS services using popular programming languages.
10873.709 -> So to understand what an SDK is, let's go
define that. So it is a set of tools and libraries
10878.989 -> that you can use to create applications for
a specific software package. So in the case
10884.79 -> of the AWS SDK, it is a set of API libraries
that let you integrate AWS services
10891.51 -> into your applications. Okay, so that fits
pretty well into the definition of an SDK.
10896.55 -> And the SDK is available for the following
languages. We have C++, Go, Java, JavaScript,
10901.22 -> .NET, Node.js, PHP, Python, and Ruby. And
so I just have an example of a couple of things
10908.369 -> I wrote in the SDK. One is Node.js and
one is Ruby, and it's the exact same script; it's
10914.77 -> for AWS Rekognition for detecting labels,
but just to show you how similar it is among
10921.1 -> different languages. So more or less,
the syntax is going to be the same. But yeah,
10927.419 -> that's all you need to know. In order to use
the CLI and SDK, we're gonna have to do a little
bit of work beforehand and enable programmatic
10940.979 -> access for the user, where we want to be able
to use these development tools, okay. And
10945.32 -> so when you turn on programmatic access for a
user, you're going to then get an access key
10950.52 -> and a secret. So then you can utilize these
services. And so down below, you can see I
have an access key and secret generated.
10955.18 -> Now, once you have these, you're going to
want to store them somewhere. And you're going
10958.6 -> to want to store them in your user's home
directory. And you're going to want them within
10964.21 -> a hidden directory called .aws, and then
a file called credentials. Okay, so down below
10969.8 -> here, I have an example of a credentials file.
And you'll see that we have default credentials.
10975.1 -> So if we were to use the CLI or SDK, it's going to
use those ones by default if we don't specify
any. But if we were working with multiple
10985.689 -> AWS accounts, we're going to end up with multiple
credentials. And so you can organize them
10989.43 -> into something called profiles here. And so
I have one here for Enterprise D and Deep Space
10994.16 -> Nine. So now that we understand programmatic
access, let's move on to learning about the CLI.
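Before moving on, here is a quick sketch of what that credentials file with profiles looks like: it is plain INI text, and each profile is a named section. The Python below parses such a file with the standard library (the profile names and key values are invented placeholders, not real credentials):

```python
import configparser
import io

# An ~/.aws/credentials file with a default profile and two named
# profiles, as described above (all values are fake placeholders).
credentials_file = """\
[default]
aws_access_key_id = AKIAEXAMPLEDEFAULT
aws_secret_access_key = defaultsecret

[enterprise-d]
aws_access_key_id = AKIAEXAMPLEENTD
aws_secret_access_key = entdsecret

[deep-space-nine]
aws_access_key_id = AKIAEXAMPLEDS9
aws_secret_access_key = ds9secret
"""

config = configparser.ConfigParser()
config.read_file(io.StringIO(credentials_file))

# The CLI and SDKs use [default] unless you ask for a profile,
# the way a command like: aws s3 ls --profile enterprise-d does.
profile = config["enterprise-d"]
print(profile["aws_access_key_id"])  # AKIAEXAMPLEENTD
```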
Hey, this is Andrew Brown from ExamPro. And
11003.609 -> we are going to do the CLI and SDK follow
along here. So let's go over to IAM and create
11010.189 -> ourselves a new user so we can generate some
AWS credentials. So now we're going to
11016.609 -> go ahead and create a new user. And we're
going to give them programmatic access so
11022.35 -> we can get a key and secret. I'm going to
name this user Spock. Okay. And we're going
11028.619 -> to go next here. And we're going to give them
developer permissions, which is a power user
11033.779 -> here. Okay, and you can do the same here,
if you don't have that, just go ahead and
11038.609 -> create that group. Name it developers and
just select PowerUserAccess there, okay? Power
11044.52 -> with a capital P, I guess, there, okay. But I
already created it earlier with our IAM
11050.69 -> walkthrough there, or follow along. So
we're going to skip over to tags and a review.
11057.27 -> And we're gonna create that user. Alright.
And so now we're gonna get an Access ID and
11062.359 -> secret. So I actually want to hold on to these.
So I'm just gonna copy that there. And we're
11066.649 -> gonna move over here and just paste that in.
Okay. And so we're just going to hold on to
11073.77 -> these and now we need an environment where
we can actually do a bit of coding and use
11078.939 -> the CLI, and the best place to do that in AWS
is Cloud9. So we're gonna make our way
11083.189 -> over to Cloud9, and we're gonna spin ourselves
up a new environment, okay. So here we are,
11090.27 -> and still have these two other environments
here. I just seem to not be able to get rid
11095.021 -> of them. Generally, they do delete with
very little trouble, but because I messed with
11099.68 -> the CloudFormation stack, they're sticking
around here, but you won't have this problem.
11103.709 -> So I'm just going to create a new environment
here. And we are going to go ahead and call
11108.13 -> this Spock Dev. Okay, and we're gonna go to
the next step. And I'm just going to continuously
11113.03 -> ignore these silly errors. And we're going
to use the smallest instance here, the t2.micro,
11117.209 -> which will be on the free tier, okay,
and we're going to use Amazon Linux. And
11124.909 -> Cloud9 will actually spin down after 30
minutes of non-use. So if you forget about it,
11129.489 -> it will automatically turn itself off,
which is really nice. And we'll go
11133.529 -> ahead and go to next step. And it's going
to then just give us a summary here, we'll
11139.39 -> hit Create environment. And now we just have
to wait for this environment to start up.
11145.21 -> So we'll just wait a few minutes here. So
our Cloud9 environment is ready here.
11154.08 -> Okay. And we have a terminal here, and it's
connected to an EC2 instance. And the first
11158.999 -> thing I'm going to do is I just can't stand
this white theme. So I'm gonna go to themes
11162.3 -> down here, go to UI themes, and we're going
to go to classic dark, okay, and that's gonna
11166 -> be a lot easier on my eyes here. And so the
first thing we want to do is we want to plug
11171.619 -> in our credentials here so that we can start
using the CLI. So the CLI is already pre-installed
11176.249 -> on this instance here. So if I was to type
AWS, we already have access to it. But if
11181.729 -> we wanted to learn how to install it, let's
actually just go through the motions of that.
11185.689 -> Okay, so I just pulled up a couple of docs
here just to talk about the installation process
11190.209 -> of the CLI. We already have the CLI installed;
if I type aws, it's already here. So
11195.43 -> it's going to be too hard to uninstall it
just to install it to show you here, but I'm
11199.109 -> just going to kind of walk you through it
through these docs here just to get you an
11201.949 -> idea how you do it. So the CLI requires either
Python 2 or Python 3. And so on
11208.84 -> Amazon Linux, I believe that it has both.
So if I was to type in py and do version here,
11213.609 -> okay.
11214.609 -> Or just python, sorry, I'm always thinking
of shorthands. This has version 3.6.8,
11220.62 -> okay. And so when you go ahead and
install it, you're going to be using pip.
11225.71 -> So pip is the way that you install things
in Python, okay, and so it could be pip or
11231.449 -> pip3, it depends, because there used
to be Python 2, and so when Python 3
11235.91 -> came out, they needed a way to distinguish
it. So they called it pip3. But Python
11239.329 -> 2 is no longer supported. So now pip3
is just pip. Okay, so you know, you've got to
11245.27 -> play around based on your system, okay. But
generally, it's just pip install awscli.
And that's all there is to it. And to
11254.8 -> get Python installed, your system is going to
vary, but generally, you know, for
11260.59 -> Amazon Linux, it is a CentOS or Red Hat
kind of flavor of Linux. So it's going to
11265.68 -> use yum install python. And for all
those other Unix distributions, it's mostly
11272.109 -> going to be apt-get. Okay, so now that we know
how to install the CLI, I'm just gonna type
clear here and we are going to set up our
11276.479 -> credentials. Alright, so we're gonna go ahead
and install our credentials here, they're
11286.949 -> probably already installed, because Cloud9
is very good at setting you up with everything
11290.84 -> that you need. But we're going to go through
the motions of it anyway. And just before
11293.77 -> we do that, we need to install one thing on
Cloud9 here. And so I'm going to install,
11298.119 -> via the Node package manager, c9, which allows
us to open files from the terminal into Cloud9
11305.12 -> here. And so the first thing I want you
to do is I want to go to your home directory;
11308.779 -> you do that by typing tilde, which is for
home, and forward slash, OK. And so now I
11314.689 -> want you to do ls -la, okay. And
it's going to list everything within this
11318.949 -> directory, and we were looking for a directory
called .aws. Now, if you don't have this
11323.34 -> one, you just type mkdir, and you do .aws
to create it, okay, but it already exists
11328.279 -> for us. Because again, Cloud9 is very
good at setting things up for us. And then
11333.07 -> in here, we're expecting to see a credentials
file that should contain our credential. So
11336.221 -> typing c9, the program we just installed
there, I'm just going to do credentials here,
11341.409 -> okay. And it's going to open it up above here.
And you can already see that it's a set of
11346.13 -> credentials for us, okay. And I'm just going
to flip over and just have a comparison here.
11351.35 -> So we have some credentials. And it is for
I don't know who, but we have them and I'm
11358.92 -> going to go ahead and add a new one, I'm just
going to make a new one down here called Spock,
11364.669 -> okay. All right. And basically, what I'm doing
is I'm actually creating a profile here, so
11371.699 -> that I can actually switch between credentials.
Okay. And I'm just going to copy and paste
11381.03 -> them in here. Alright, and so I'm just going
to save that there. And so now I have a second
11389.189 -> set of credentials within the credential file
there, and it is saved. And I'm just going
11394.419 -> to go down to my terminal here and do clear.
And so now what I'm going to do is I'm going
11398.879 -> to type in aws s3 ls, and I'm going to
do hyphen, hyphen profile, I'm going to now
11404.779 -> specify Spock. And that's going to use that
set of credentials there. And so now I've
11410.109 -> done that using Spock's credentials, and we
get a list of our buckets. Okay? So now
11415.46 -> if we wanted to copy something down from S3,
we're going to do aws s3 cp. And we are going
11423.57 -> to go into that bucket there. So it's going
to be exam pro, enterprise D, I have this
11430.829 -> from memory. And we will do data.jpg,
okay. And so what that's going to do is it's
11438.27 -> going to download a file, but before I actually
run this here, okay, I'm just going to cd
11443.76 -> dot dot and go back to my home directory here.
Okay. And I'm just going to copy this again
11450.1 -> here and paste it, and so I should be able
to download it. But again, I got to do hyphen,
11453.8 -> hyphen profile to specify Spock, because
I don't want to use the default profile there.
11458.08 -> Okay, um, and, uh, it complains, because I'm missing
the g on the end of jpg there, okay?
11465.789 -> And it's still complaining. Maybe I have to
do s3:// with the two forward slashes,
11476.449 -> huh?
11477.709 -> No, that's the command. Oh, you know why?
It's because when you use cp, you have to
11482.989 -> actually specify the output file here. So
you need your source and your destination.
11487.09 -> Okay, so I'm just gonna type Spock, or sorry,
data, sorry, data.jpg there. Okay. And
11493.3 -> that's going to download that file. So, I
mean, I already knew that I had something
11497.35 -> for AWS there. So I'm just going to go to
AWS to show you that there. So if you want
11502.871 -> to do the same thing as I did, you
definitely need to go set up a bucket in S3.
11507.829 -> Okay. So if I just go over here, we have the
exam pro 000, enterprise D, and we have some
11513.76 -> images there. Okay. So that's where I'm grabbing
that image from. And I could just move this
11519.039 -> file into my environment directory, so I actually
can have access to it there. Okay, so I'm
11524.46 -> just going to do mv data. And I'm just going
to move that one directory up here. Okay.
11531.43 -> All right. And so now we have data over here,
okay. And so you know, that's how you'd go
11538.579 -> about using the CLI with credentials. Okay.
Yeah, we just open that file there if we wanted
11544.221 -> to preview it. Okay. So now let's, let's move
on to the SDK, and let's use our credentials
11549.97 -> to do something programmatically. So now that
we know how to use the CLI, and where to store
11558.859 -> our credentials, let's go actually do something
programmatically with the SDK. And so I had
11564.59 -> recently contributed to the AWS docs for
Rekognition. So I figured we could pull some
11568.369 -> of that code and have some fun there. Okay.
So what I want you to do is go to Google and type
11573.21 -> in AWS docs Rekognition. And we're going
to click through here to Amazon Rekognition;
11579.609 -> we're going to go to the developer
guide in HTML. Apparently, they have a new
11583.779 -> look, let's give it a go. Okay, there's always
something new here. I'm not sure if I like
11588.63 -> it. But this is the new look to the docs.
And we're gonna need to find that code there.
11593.93 -> So I think it is under detecting faces here.
And probably under detecting faces in an image,
11601.619 -> okay. And so the code that I added was actually
the Ruby and the Node.js one, okay, so we can
11606.249 -> choose which one we want. I'm going to do
the Ruby one, because I think that's more
11610.06 -> fun. And that's my language of choice. Okay.
And so I'm just going to go ahead and copy
11615.729 -> this code here. Okay. And we're going to go
back to our Cloud9 environment, I'm going
11621.01 -> to create a new file here, and I'm
just going to call this detect faces, oops,
11630.619 -> with an underscore there, detect_faces.rb. Okay. And
I'm just going to double click in here and
11636.899 -> paste that code in. Alright. And what we're
going to have to do is we're going to need
11641.35 -> to supply our credentials here, generally,
you do want to pass them in as environment
11647.209 -> variables, that's a very safe way to provide
them. So we can give that a go. But in order
11653.41 -> to get this working, we're going to have to
create a Gemfile in here. So I'm just going
11656.64 -> to create a new file here, because we need
some dependencies here. And we're just going
11661.109 -> to type in Gemfile, okay. And within this
Gemfile, we're going to have to provide the
11667.749 -> Rekognition gem. Okay, so I'm just gonna go
over here and supply that there. There are
11673.511 -> a few other lines here that we need to supply.
So I'm just gonna go off screen and go grab
11677.55 -> them for you. Okay, so I just went off screen
here and grabbed that extra code here. This
11682.129 -> is pretty boilerplate stuff that you have
to include in a Gemfile. Okay. And so what
11686.85 -> this is going to do, it's going to install
the AWS SDK for Ruby, but specifically just
11691.35 -> for Rekognition. So I do also have open up
here the AWS SDK for Ruby, and for Node.js,
11698.499 -> Python, etc., they all
11701.249 -> have one here. And so it tells you how
you can install gems. So for dealing with
11706.329 -> Rekognition here, I'm just going to do a quick
search here for Rekognition. Okay, sometimes
11711.329 -> it's just better to navigate on the left-hand
side here. Alright, and so I'm just looking
11716.22 -> for Rekognition. Okay, and so if we want
to learn how to use this thing, usually a
11720.51 -> lot of times with this, it's going to tell
you what gem you're gonna need to install.
11723.64 -> So this is the one we are installing. And
then we click through here through client,
11727.449 -> and then we can get an idea of all the kind
of operations we can perform. Okay, so when
11732.091 -> I needed to figure out how to write this,
I actually went to the client docs here, and I just
11735.64 -> kind of read through it and pieced it together
and looked at the output to figure that out.
11739.829 -> Okay, so nothing too complicated there. But
anyway, we have all the stuff we need here.
11744.83 -> So we need to make sure we're in our environment
directory here, which is that Spock dev directory.
11750.58 -> So we're going to tilde out, which goes to
our home directory environment. Okay, we're
11755.66 -> gonna do an ls -la. And just make sure
that we can see that file there and the
11759.76 -> Gemfile, okay, and then we can go ahead and do
a bundle install. All right, and so what that's
11764.85 -> going to do is it's going to now install that
dependency. So here we can see that it installed
11769.039 -> the AWS SDK core and also Rekognition.
Okay. And so now we have all our dependencies
11775.289 -> to run the script here. So the only thing
that we need to do here is we need to provide
11780.42 -> it an input. So here, we can provide it a
specific bucket and a file, there's a way
11788.479 -> to provide it locally, we did download this
file, but I figured what we'll do is we'll
11791.88 -> actually provide the bucket here. So we will
say what's the bucket called exam pro 000.
11801.529 -> And the next thing we need to do is define
the key. So it's probably the key here. So
11805.05 -> I'm going to do enterprise D. Okay. And then
we're just going to supply data there. All
11812.2 -> right. And we can pass these credentials via
the environment variables, we could just hard
11817.76 -> code them and paste them in here. But that's
a bit sloppy. So we are going to go through
11821.84 -> the full motions of providing them through
the environment here. And all we have to do
11826.409 -> that is we're just going to paste in, like
so. Okay. And we're just going to copy that.
11834.539 -> That's the first one. And then we're going
to do the password here. Oops. Okay. And hopefully,
11842.609 -> this is going to work the first time and then
we'll have to do bundle exec detect_faces,
11848.26 -> okay. And then this is how these are going
to get passed into there. And assuming that
11852.499 -> my key and bucket are correct, then hopefully,
we will get some output back. Okay.
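As an aside, passing credentials through the environment like this works because the SDKs look for well-known variable names. A minimal Python sketch of the same idea (the values here are fake placeholders; the Ruby SDK in the video reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment the same way):

```python
import os

# Export fake credentials into the environment, the way you would
# before running the detect faces script.
os.environ["AWS_ACCESS_KEY_ID"] = "AKIAFAKEEXAMPLEKEY"
os.environ["AWS_SECRET_ACCESS_KEY"] = "fake-secret-for-illustration"

def load_credentials_from_env() -> dict:
    # This mirrors what an SDK does at startup: read the standard
    # variable names out of the process environment.
    return {
        "access_key_id": os.environ["AWS_ACCESS_KEY_ID"],
        "secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"],
    }

creds = load_credentials_from_env()
print(creds["access_key_id"])  # AKIAFAKEEXAMPLEKEY
```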
11857.95 -> All right, it's just saying it couldn't detect
faces here, I just have to hit up here, I
11863.649 -> think I just need to put the word Ruby in
front of here.
11866.01 -> Okay, so my bad. Alright, and... is it working?
So we don't have the correct permissions here.
11877.499 -> So we are a power user. So maybe we just don't
have enough permission. So I'm just going
11880.619 -> to go off screen here and see what permissions
we need to be able to do this. So just playing
11887.359 -> around a little bit here, and also reading
the documentation for the Ruby SDK, I figured
11892 -> out what the problem was, it's just that we
don't need this forward slash here. So we
11897.869 -> just take that out there, okay, and just run
what we ran last there, okay. And then we're
11902.93 -> going to get some output back. And then it
just shows us that it detected a face. So
11907.039 -> we have the coordinates of a face. And if
we used some additional tool there, we could
11914.1 -> actually draw overtop of the image, a bounding
box to show where the face is detected. There's
11918.42 -> some interesting information. So it detected
that the person in the image was male, and
11923.31 -> that they were happy. Okay. So, you know,
if you think that that is happy, then that's
11929.66 -> what Rekognition thinks, okay. And it also
detected the face between ages 32 and 48.
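About those face coordinates: Rekognition returns the bounding box as ratios of the image's dimensions, so drawing it on the image means scaling by the image size. A short Python sketch (the box values and image size are made up for illustration, not the actual response from the video):

```python
# Rekognition's DetectFaces returns BoundingBox values as fractions
# of the image's width and height (Left, Top, Width, Height).
bounding_box = {"Left": 0.25, "Top": 0.10, "Width": 0.40, "Height": 0.50}

def to_pixels(box: dict, image_width: int, image_height: int) -> dict:
    # Scale each relative value by the matching image dimension.
    return {
        "x": round(box["Left"] * image_width),
        "y": round(box["Top"] * image_height),
        "width": round(box["Width"] * image_width),
        "height": round(box["Height"] * image_height),
    }

# For an 800x600 image, the face rectangle in pixels:
print(to_pixels(bounding_box, 800, 600))
# {'x': 200, 'y': 60, 'width': 320, 'height': 300}
```

With those pixel values, any drawing library can paint the rectangle over the photo.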
11935.6 -> To be fair, Data is an android, so he has
a very unusual skin color. So you know, it's
11941.31 -> very hard to determine that age, but
I would say that this is the acceptable age
11947.18 -> range of the actor at the time, so it totally
makes sense. Okay. Um, so yeah, and there
11954.34 -> you go. So that is the programmatic way of doing
it. Now, you don't ever really want to
11960.13 -> store your credentials on your server,
okay? Because you can always use IAM roles,
11965.359 -> attach them to EC2 instances, and then
that will safely provide credentials onto
11970.05 -> your EC2 instances, to have those privileges.
But it's important to know how to use the
11975.46 -> SDK. And whenever you're in development working
on your local machine, or maybe you're in
11979.88 -> a Cloud9 environment, you are going to have
to supply those credentials. Okay. So there
11985.31 -> you go. So now that we are done with our AWS
CLI and SDK follow along here, let's
11993.319 -> just do some cleanup. So I'm just going to
close this tab here for Cloud9. And we're
11997.399 -> going to go over to Cloud9 and we're going
to just delete that. And now again, it's not
12001.529 -> going to be bad for you to have it hanging
around here, it's not going to cause you any
12004.959 -> problems, it's going to shut down on its own.
But you know, if we don't need it, we might
12008.52 -> as well get rid of it. Okay. And so I'm just
gonna have to type the word delete here. And hopefully
12013.76 -> this one deletes as long as they don't fiddle
and delete these security groups before it
12017.42 -> has an opportunity to delete. That should
have no problem for me here. And then we're
12021.64 -> just going to go to our IAM role, or sorry,
our user there. And what we really want to
12026.72 -> do is since they're not being used anymore,
we want to expire those credentials. But I'm
12031.51 -> actually going to also go ahead and delete
the user here. So they're going to be 100%
12036.31 -> gone there. Okay, so there, that's all the
cleanup we had to do.
12041.669 -> over onto the AWS CLI and SDK cheat sheet, so
let's jump into it. So CLI stands for command
12050.88 -> line interface, SDK stands for software development
kit. The CLI lets you interact with
12056.439 -> AWS from anywhere by simply using a command
line. The SDK is a set of API libraries that
12062.979 -> let you integrate AWS services into your applications.
Programmatic access must be enabled per user via
12068.84 -> the IAM console to use the CLI or SDK. The aws configure
command is used to set up your AWS credentials
12075.709 -> for the CLI. The CLI is installed via a Python
script. Credentials get stored in a plain text
12083.069 -> file. Whenever possible, use roles instead
of AWS credentials. I do have to put that
12087.419 -> in there. And the SDK is available for the
following programming languages: C++, Go, Java,
12091.89 -> JavaScript, .NET, Node.js, PHP, Python and
Ruby. Okay, so for the solutions architect
12098.789 -> associate, they're probably not going to ask
you questions about the SDK, but for the developer,
12103.17 -> there definitely are. So just keep that in mind.
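One point from the cheat sheet worth seeing concretely is that the CLI stores credentials in a plain text file (typically ~/.aws/credentials) in INI format. As a rough sketch of what that format looks like and how an SDK would read a profile back out, here is a minimal illustration using Python's configparser; the key values are made-up placeholders, not real credentials.

```python
import configparser
import io

# Sample of the plain-text INI format that `aws configure` writes.
# The values below are made-up placeholders, not real credentials.
sample = """
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = examplesecretkey
"""

config = configparser.ConfigParser()
config.read_file(io.StringIO(sample))

# Reading a profile back out, roughly the way an SDK does on startup
access_key = config["default"]["aws_access_key_id"]
print(access_key)  # AKIAEXAMPLEKEY
```

Because this file is plain text on disk, anyone who can read it has your keys, which is exactly why the cheat sheet says to prefer roles over embedded credentials whenever possible.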
Hey, this is Andrew Brown from exam Pro. And
12112.499 -> we are looking at domain name systems also
abbreviated as DNS, and you can think of them
12116.77 -> as the phonebook of the internet. DNS translates
domain names to IP addresses, so browsers
12122.021 -> can find internet resources. So again, domain
name servers are a service which handles converting
12128.829 -> a domain name such as exampro.co, into
a routable Internet Protocol address. So here
12134.131 -> we have an example of an ipv4 address. And
this allows your computer to find specific
12138.859 -> servers on the internet automatically, depending
what domain name you browse. So here we can
12144.399 -> see it again. So we see example.co, it looks
up the domain name and determines that
12151.209 -> it should go to this IP address, which should
go to this server. So that is the process.
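The phonebook analogy above can be sketched as a simple lookup table: DNS conceptually maps human-readable names to routable IP addresses. The addresses below are made-up examples, not real records, and this also previews why IPv6 exists, since the two address spaces differ enormously in size.

```python
# A toy "phonebook" illustrating what DNS does conceptually:
# it maps human-readable names to routable IP addresses.
# (Addresses below are made-up examples, not real records.)
dns_records = {
    "example.co": "52.45.123.10",       # an IPv4-style address (32-bit space)
    "ipv6.example.co": "2600:1f18::1",  # an IPv6-style address (128-bit space)
}

def resolve(domain: str) -> str:
    """Return the IP a browser would connect to for this domain."""
    return dns_records[domain]

print(resolve("example.co"))  # 52.45.123.10

# Why IPv6 exists: compare the sizes of the two address spaces.
print(2 ** 32)   # about 4.3 billion IPv4 addresses
print(2 ** 128)  # about 340 undecillion IPv6 addresses
```

Real resolution of course involves recursive queries to name servers rather than one local table, but the input/output relationship is the same.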
12156.09 -> So we need to understand the concept of Internet
Protocol, also known as IP. So IP addresses
12164.689 -> are what uniquely identify computers
on a network. And they allow communication
12170.069 -> between them. And that is what the IP
is. And so IPS come in two variations. We
12175.449 -> have ipv4, which you're probably most familiar
with, because it's been around longer. And
12180.02 -> then we have ipv6, which looks a bit unusual,
but there are definitely benefits to it. So
12184.699 -> ipv4 is an address space with 32 bits. And
this is the amount of available addresses
12190.59 -> that are out there. And the issue with ipv4
is we're running out of IP addresses, okay,
12196.22 -> because it's based on this, this way of writing
numbers, and there's a limit to how many numbers
12202.239 -> are here. So to come up, or to combat that
solution. That's where ipv6 comes in. And
12207.939 -> so ipv6 uses an address space of 128 bits.
And it has up to 340 undecillion potential
12216.209 -> addresses. And so basically, they've invented
a way so we will not run out of addresses,
12222.959 -> okay, and this is what that address looks
like. So it's big and long, it's not as easy
12226.68 -> to look at as an ipv4 address, but
we're never gonna run out of them. So you
12233.069 -> know, come the future. We're gonna see this
implemented more, you can definitely use ipv6
12238.97 -> on AWS, as well as ipv4, but you know, this
is the future. So domain registrars are the authorities
12250.169 -> who have the ability to assign domain names
under one or more top level domains. And if
12254.8 -> you're thinking about what are some common
registrar's, we have them listed down below.
12259.239 -> So you've probably seen them before like Hostgator,
GoDaddy, domain.com. AWS is also their own
12265.419 -> domain registrar with Route 53. And Namecheap,
okay. Domains get registered through
12270.909 -> InterNIC, which is a service provided
by the Internet Corporation for Assigned Names
12275.47 -> and Numbers. ICANN enforces the uniqueness
of domain names all over the internet. And
12281.239 -> when you register a domain name, it can be
found publicly in the central WHOIS database.
12285.76 -> So if you've ever wondered who owns a domain,
there's a high chance that you could go type
it into WHOIS and they might have the registrant's
contact information. Now, you can pay additional,
or in the case of Route 53, I don't think
there's any additional cost to keep that
12297.869 -> information private. But that's it. You have
a registered domain name. And you wonder why
12302.05 -> somebody is calling you out of the blue, maybe
they've been looking you up through here.
12305.43 -> So there you go, domain registrars. So we're looking
12311.819 -> at the concept of top level domains. And so
if you've ever typed a domain name in and
12316.06 -> you're wondering what that.com is, that is
the top level domain name. And there are domains
12321.06 -> that also have second level domains. So in
the example of .co.uk, the .co is the second
12327.77 -> level. Top level domain names are controlled
by the Internet Assigned Numbers Authority.
12333.239 -> And so anytime there's new ones, they're the
ones who are the number one authority on new
12340.2 -> top level domains. These domains are managed
by different organizations. And it might surprise
12347.1 -> you to see that there's hundreds upon hundreds
of top level domains. And you might not even
12352.76 -> know about them, because these companies are
just sitting on them. But like, you can see
12356.399 -> Disney has one for dot ABC. And then we have
a dot Academy one, and also AWS has their
12362.079 -> own, which is dot AWS. Okay, so there you
go, that's top level. So when you have a domain
12372.149 -> name, you're gonna have a bunch of records
that tell it what to do. And one that's very
12376.31 -> important and is absolutely required is the
SOA, the start of authority. And the SOA is
12381.85 -> a way for the domain admins to provide additional
information about the domain such as how often
12387.47 -> it's updated, what's the admins email address,
if there was a failure responding from the
12393.379 -> master, how many seconds it should wait before trying
to failover to the secondary name server,
12399.289 -> so it can contain a bunch of information,
and you can see it on the right hand side
12403.01 -> here. And you don't necessarily have to provide
all the information. But those are all the
12408.449 -> options. And it comes in the format of one
big long string. So you can see, we can see
12415.069 -> the format here. And we have an example. And
then we've got an AWS example. So there
12420.289 -> you go. And yeah, you can only actually
have one SOA record within a single zone.
12426.249 -> So you can't have more than one. But yeah,
it's just to give additional information.
12431.8 -> It's absolutely required.
12436.39 -> So now we're gonna take a look at address
records. And a records are one of the most
12439.84 -> fundamental types of DNS records. And the
idea is that you're going to be able to convert
12445.029 -> the name of a domain directly into an IP address.
Okay. So if you had testingdomain.com, and
12452.959 -> you wanted to point it to an ipv4 address,
you'd use an A record for it.
12461.319 -> And one more thing I want to note is
that you can use naked domain names, or root
12467.319 -> domains, that's when you don't have any www. or other
subdomains, as an A record. Okay. So canonical
12478.169 -> names, also known as CNAMEs, are another fundamental
DNS record used to resolve one domain name
12483.61 -> to another rather than an IP address. So if
you wanted, for example, to send all your
12490.649 -> naked domain traffic to the www route, you
could do so here, right? So here we have,
12497.64 -> we're specifying the naked domain, and we're
going to send it to the www domain
12504.581 -> for some reason, for some reason, I have four
W's in here. But you can't give me a hard
12508.64 -> time because at least that error is consistent.
But yeah, that's all there is to it. So a
12513.699 -> records are for IP addresses, and CNAMEs are for
domain names. So besides the SOA, the second most
12523.499 -> important record or records are the name server
records. And they're used by top level domain
12529.709 -> servers to direct traffic to the DNS
server containing the authoritative DNS records.
12535.409 -> If you don't have these records, your domain
name cannot do anything. Typically, you're
12540.529 -> gonna see multiple name servers provided as
redundancy. Something like with GoDaddy, they're
12546.189 -> gonna give you two, with AWS, you're gonna
have four. But the more the merrier. Okay,
12551.579 -> and so you can see an example down below here.
So if you're managing your DNS records with
12557.47 -> route 53, DNS records for the domain would
be pointing to AWS servers, and there we have
12562.409 -> four: we have a .com, a .net,
a .co.uk, and a .org. Okay, so we
12568.14 -> have a lot of redundancy there. Oh, now I
want to talk about the concept of TTL Time
12576.81 -> To Live and this is the length of time that
a DNS record gets cached on resolving servers
12582.52 -> or the user's own local machine. So the lower
the TTL, the faster the changes to DNS records
12587.899 -> will propagate across the internet. TTL is
always measured in seconds, so
12594.609 -> you're gonna see more TTLs here. So if it's
not super clear, it will make sense further
12599.39 -> into this.
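To make the TTL idea concrete, here is a minimal sketch of resolver-side caching: a record is served from cache until its TTL (in seconds) elapses, after which the resolver has to look it up again. This is an illustration only, with a made-up domain and IP, and the clock is injected so the behavior is easy to follow.

```python
import time

class DnsCache:
    """Minimal sketch of resolver-side TTL caching: a record is
    served from cache until its TTL (in seconds) elapses, after
    which the resolver must query the name server again."""

    def __init__(self):
        self._cache = {}  # domain -> (ip, expiry_timestamp)

    def put(self, domain, ip, ttl_seconds, now=None):
        now = time.time() if now is None else now
        self._cache[domain] = (ip, now + ttl_seconds)

    def get(self, domain, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(domain)
        if entry is None or now >= entry[1]:
            return None  # expired or missing: must re-query upstream
        return entry[0]

cache = DnsCache()
cache.put("app.example.co", "10.0.0.1", ttl_seconds=300, now=0)
print(cache.get("app.example.co", now=60))   # 10.0.0.1 (still cached)
print(cache.get("app.example.co", now=400))  # None (TTL elapsed)
```

This is also why a lower TTL makes DNS changes propagate faster: cached copies expire sooner, so resolvers pick up the new record earlier.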
12605.149 -> So it's time to wrap up DNS with another cheat
sheet. So let's get to it. So Domain Name
12610.689 -> System, which is DNS is an internet service
that converts domain names into routable IP
12616.459 -> addresses. We have two types of internet protocols.
We have ipv4, which is a 32 bit address space
12622.329 -> and has a limited number of addresses. And
then we have an example of one there. And
12626.14 -> then we have ipv6, which is 128 bit address
space and has unlimited number of addresses.
12632.64 -> And we also have an example there as well.
Then we talked about top level domains. And
12636.93 -> that's just the last part of a domain like.com,
then you have second level domains. And this
12642.33 -> doesn't always happen. But it's usually the
second last part of the domain. So in.co.uk,
12647.17 -> it's going to be the.co. Then there we have
domain registrars. These are third party companies
12652.369 -> who you register domains through,
then you have name servers, they're the servers
12657.539 -> which contain the DNS records of the domain,
then we have some records of interest here.
12663.59 -> So we have the SOA. This contains information
about the DNS zone and associated DNS records,
12668.55 -> we have a records, these are records which
directly convert a domain name into an IP
12673.609 -> address, then we have a C name records. And
these are records, which lets you convert
12677.72 -> a domain name into another domain name. And
then we have TTLs, and it's the time it
12683.951 -> takes for DNS records, or rather, it's the time that
a DNS record will be cached for,
12689.51 -> and the lower that time means, the faster
it will propagate or update. Okay, and there
12697.829 -> you go. Hey, this is Andrew Brown from exam
Pro. And we are looking at Route 53, which
12703.819 -> is a highly available and scalable domain
name service. So whenever you think about
Route 53, the easiest way to remember
12708.739 -> what it does is think of GoDaddy or Namecheap,
which are both DNS providers. But the difference
12713.09 -> is that Route 53 has more synergies with
AWS services. So you have a lot more rich
12719.26 -> functionality that you can use on AWS
than you could with one of these other DNS
providers. What can you do with Route 53?
12729.039 -> you can register and manage domains, you can
create various record sets on a domain, you
12733.789 -> can implement complex traffic flows such as
blue/green deploys or failovers, you can
12738.51 -> continuously monitor records via health checks,
and resolve VPCs outside of AWS.
12750.01 -> So here, I have a use case, and this is actually
how we use it at exam Pro is that we have
12754.59 -> our domain name, you can purchase it or you
can have refer to three manage the the name
12760.789 -> servers, which allow you to then set your
record sets within route 53. And so here we
12766.841 -> have a bunch of different record sets for
subdomains. And we want those sub domains
12771.029 -> to point to different resources on AWS. So
for our app, our app runs behind elastic load
12776.619 -> balancer. If we need to work on an ami image,
we could launch a single EC two instance and
12781.899 -> point that subdomain there for our API, if
it was powered by API gateway, we could use
12786.239 -> that subdomain for that, for our static website
hosting, we would probably want to point to
12790.46 -> CloudFront. So the www. subdomain points to a CloudFront
distribution. And for fun, and for learning,
12795.529 -> we might run a minecraft server on a very
specific IP, probably would be elastic IP
12800.3 -> because we wouldn't want it to change. And
that could be minecraft.exampro.co. So there's
12804.76 -> a basic example. But we're gonna jump into
all the different complex rules that we can
12810.22 -> do in Route 53 here. So in the previous
use case, we saw a bunch of sub domains, which
12820.55 -> were pointing to AWS resources, well, how
do we create that link so that Route 53
12825.84 -> will point to those resources, and that is
by creating record sets. So here, I just have
12831.52 -> the form for record sets. So you can see the
kind of the types of records that you can
12835.489 -> create, but it's very simple, you just fill
in your sub domain or even leave the naked
12839.069 -> domain and then you choose the type. And in
the case for a this is allows you to point
12843.729 -> this sub domain to a specific IP address,
you just fill it in, that's all there is to
12849.039 -> it. Okay, now, I do need to make note of this
alias option here, which is a special option
12855.989 -> created by AWS. So here in the next slide
here, we've set alias to true. And what it
12861.271 -> allows us to do is directly select specific
AWS resources. So we could select CloudFront,
12867.919 -> Elastic Beanstalk, ELB, S3, VPC, API Gateway,
and why would you want to do this over making
12874.629 -> a traditional type record? Well, the idea
here is that this alias has the ability to
12881.39 -> detect changes of IP addresses. So it continuously
keeps pointing that endpoint to the correct
resource. Okay. So normally, if
and whenever you can use alias, always use
12893.629 -> alias, because it just makes it easier to manage
the connections between resources via Route
12899.38 -> 53 records. And the limitations
are listed here as follows.
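The reason the alias option is preferred can be sketched in a few lines: a plain A record stores a fixed IP, while an alias stores a reference to a resource and re-resolves to that resource's current IP at query time. This is a conceptual toy model, not how Route 53 is actually implemented, and the resource names and IPs are made up.

```python
# Toy model of a plain A record (fixed IP) versus an AWS-style alias
# record (which re-resolves to the resource's current IP, so it
# tracks changes automatically). Names and IPs are made up.
current_resource_ips = {"my-load-balancer": "10.0.0.1"}

records = {
    "api.example.co": {"type": "A", "value": "203.0.113.5"},
    "www.example.co": {"type": "ALIAS", "value": "my-load-balancer"},
}

def resolve(domain):
    record = records[domain]
    if record["type"] == "ALIAS":
        # Alias: look up the resource's *current* IP at query time.
        return current_resource_ips[record["value"]]
    return record["value"]

print(resolve("www.example.co"))  # 10.0.0.1

# The load balancer's underlying IP changes; the alias keeps
# pointing correctly, while a plain A record would now be stale.
current_resource_ips["my-load-balancer"] = "10.0.0.2"
print(resolve("www.example.co"))  # 10.0.0.2
```

That automatic tracking of IP changes is exactly the behavior described above, and why you use alias whenever the option is available.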
12907.09 -> So the major advantage of Route 53 is its
seven types of routing policies. And we're
12915.989 -> going to go through every single one here.
So we understand the use case, for all seven.
12921.34 -> Before we get into that a really good way
to visualize how to work with these different
12926.459 -> routing policies is through traffic flow.
And so traffic flow is a visual editor that
12930.959 -> lets you create sophisticated routing configurations
within route 53. Another advantage of traffic
12936.079 -> flow is that we can version these policy routes.
So if you created a complex routing policy,
12942.159 -> and you wanted to change it tomorrow, you
could save it as version one, version two,
12945.709 -> and roll, roll this one out or roll back to
that. And just to play around traffic flow,
12950.45 -> it does cost a few dollars per policy record.
So this whole thing is one policy record.
12955.46 -> But they don't charge you until you create
it. So if you do want to play around with
12958.319 -> it, just just create a new traffic flow, and
name it and it will get, you'll get to this
12964.579 -> visual editor. And it's not until you save
this. So you can play around with this to
12968.109 -> get an idea of like all the different routing
rules and how you can come up with creative
12971.62 -> solutions. But now that we've covered traffic
flow, and we know that there are seven routing
12976.27 -> rules, let's go deep and look at what we can
do. We're gonna look at our first routing
12986.189 -> policy, which is the simple routing policy.
And it's also the default routing policy.
12991.27 -> So when you create a record setting here,
I have one called random, and we're on the
12995.26 -> a type here. Down below, you're gonna see
that routing policy box that's always by default
13001.109 -> set to simple. Okay, so what can we do with
simple The idea is that you have one record,
13007.249 -> which is here, and you can provide either
a single IP address or multiple IP addresses.
13013.609 -> And if it's just a single, that just means
that random is going to go to that first IP
13017.539 -> address every single time. But if you have
multiples, it's going to pick one at random.
So it's a good way, if you wanted
some kind of randomization for A/B
13028.249 -> testing, you could do this. And that is as
simple as it is. So there you go. So now we're
13038.989 -> looking at weighted routing policies. And
so what a weighted routing policy lets you
13043.55 -> do is it allows you to split up traffic based
on different weights assigned. Okay, so down
13048.359 -> below, we have app.example.co. And we would
13048.359 -> create two record sets in Route 53, and
they'd be the exact same thing, they both
13054.91 -> say app.example.co. But we'd set them both
13058.14 -> to weighted and we give them two different
weights. So for this one, we would name it
13062.39 -> stable. So we've named that one stable, give
it 85%. And then we make a new record set
13066.529 -> with exact same sub domain and set this one
to 15% call experiment, okay. And the idea
13073.06 -> is that when ever traffic, any traffic hits
app.example.co, it's going to look at the
13078.31 -> two weighted values. 85% is gonna go to the
stable one. And for the 15%, it's going to
13083.31 -> go to the experimental one. And a good use
case for that is to test a small amount of
13087.609 -> traffic to minimize impact when you're testing
out new experimental features. So that's a
13092.21 -> very good use case for a weighted routing.
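The 85/15 split described above behaves like a weighted random choice, which can be sketched with Python's random.choices. The endpoint names are illustrative placeholders, and this is a simulation of the routing behavior, not how Route 53 implements it internally.

```python
import random

# Sketch of weighted routing: roughly 85% of lookups resolve to the
# stable target and 15% to the experimental one. Endpoint names are
# illustrative placeholders.
targets = ["stable-elb", "experimental-elb"]
weights = [85, 15]

def route():
    """Pick one target per lookup, proportionally to its weight."""
    return random.choices(targets, weights=weights, k=1)[0]

random.seed(42)  # deterministic for the demo
hits = [route() for _ in range(10_000)]
print(hits.count("stable-elb") / len(hits))  # close to 0.85
```

Over many lookups the traffic split converges to the assigned weights, which is why this works well for sending a small slice of users to an experimental stack.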
Now we're going to take a look at latency
13101.689 -> based routing. Okay, so latency based routing
allows you to direct traffic based on the
13107.149 -> lowest network latency possible for your end
user based on a region. Okay, so the idea
is, let's say people want to hit app.exampro.co.
And they're coming from Toronto.
13118.22 -> All right, so coming from Toronto, and the
idea is that we have, we've created two records,
13124.589 -> which have latency with this sub domain, and
one is set to us West. So that's on the west
13130.229 -> coast. And then we have one central Canada,
I believe that's located in Montreal. And
13135.369 -> so the idea is that it's going to look here
and say, Okay, which one produces the least
13138.729 -> amount of latency, it doesn't necessarily
mean that it has to be the closest one geographically,
13142.829 -> just whoever has the lowest return in milliseconds
is the one that it's going to route traffic
13149.05 -> to. And so in this case, it's 12 milliseconds.
And logically, things that are closer by should
13153.959 -> be faster. And so the, so it's going to route
it to this lb, as opposed to that one. So
13160.039 -> that's, that's how latency based routing works.
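The decision in that example is simply "pick the record with the lowest measured latency", which can be sketched in a couple of lines. The latency figures are made-up examples mirroring the Toronto scenario above, and this is a simulation of the selection logic, not Route 53's internal measurement system.

```python
# Latency-based routing sketch: route to whichever region currently
# returns the lowest measured latency, which is not necessarily the
# geographically closest one. Measurements (ms) are made-up examples.
measured_latency_ms = {
    "us-west-1": 34,
    "ca-central-1": 12,
}

def route_by_latency(latencies):
    """Return the region whose measured latency is lowest."""
    return min(latencies, key=latencies.get)

print(route_by_latency(measured_latency_ms))  # ca-central-1
```

If the west-coast region ever started answering faster than the Montreal one, the same rule would flip the choice, geography notwithstanding.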
So now we're looking at another routing policy,
13169.939 -> this one is for failover. So failover allows
you to create an Active Passive setup in situations
13174.869 -> where you want a primary site in one location,
and a secondary data recovery site and another
one. Okay, another thing to note is that Route
13185.8 -> 53 automatically monitors via health checks
from your primary site to determine if
that endpoint is healthy. If it determines
13191.649 -> that it's in a failed state, then all the
traffic will be automatically redirected to
13196.01 -> that secondary location. So here, we have
the following example, we have app.example.co.
13202.279 -> And we have a primary location and a secondary
one. All right. And so the idea is that
13211.039 -> Route 53, it's going to check and it
determines that this one is unhealthy based
13215.3 -> on a health check, it's going to then reroute
the traffic to our secondary locations. So
13220.92 -> you'd have to create, you know, two routing
policies with the exact same. The exact same
13227.06 -> domain, you just set which one is the primary
and which one is the secondary, it's that
13232.52 -> simple. So here, we are looking at the geolocation
routing policy. And it allows you to direct
13240.55 -> traffic based on the geolocation, geographical
location of where the request is originating
13245.51 -> from. So down below, we have a request from
the US hitting app.exampro.co. And we
13252.22 -> have a record set for geolocation that's
set for North America. So since the US is
13259.979 -> in North America, it's going to go to this
record set. Alright. And that's as
13265.52 -> simple as that.
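That geolocation decision can be sketched as a lookup from the request's origin to a record: the request's country maps to a configured location, and each location has its own endpoint. The mapping and endpoint names below are made-up illustrations, and a default record is included because Route 53 lets you define one for locations you haven't matched.

```python
# Geolocation routing sketch: the record chosen depends on where
# the request originates. All mappings below are made-up examples.
geo_records = {
    "North America": "us-east-1-elb",
    "Europe": "eu-west-1-elb",
    "default": "us-east-1-elb",  # fallback when no location matches
}

country_to_region = {
    "US": "North America",
    "CA": "North America",
    "FR": "Europe",
}

def route_by_geo(country_code):
    """Return the endpoint for the requester's configured location."""
    region = country_to_region.get(country_code)
    return geo_records.get(region, geo_records["default"])

print(route_by_geo("US"))  # us-east-1-elb
print(route_by_geo("FR"))  # eu-west-1-elb
```

Note the contrast with latency-based routing: here the choice is driven purely by where the request comes from, not by how fast any endpoint responds.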
13269.439 -> So we're going to look at geo proximity routing
policy, which is probably the most complex
13274.869 -> routing policy is a bit confusing, because
it sounds a lot like geolocation, but it's
13278.909 -> not. And we'll see shortly here, you cannot
create this using record sets, you have to
13285.101 -> use traffic flow, because it is a lot more
complicated, and you need to visually see
13289.51 -> what you're doing. And so it's gonna be crystal
clear, we're just going to go through here
13293.77 -> and look at what it does. So the idea is that
you are choosing a region. So you can choose
13300.029 -> one of the existing Eva's regions, or you
can give your own set of coordinates. And
13304.239 -> the idea is that you're giving it a bias around
this location, and it's going to draw boundaries.
13309.229 -> So the idea is that if we created a geo proximity
routing for these regions, this is what it
13315.011 -> would look like. But if we were to give this
13320.209 -> one 25% more bias, you're gonna see that here,
13320.209 -> it was a bit smaller, now it's a bit larger,
but if we minus it, it's going to reduce it.
13324.149 -> So this is the idea behind a geo proximity
where you have these boundaries, okay. Now,
13328.699 -> just to look at in more detail here, the idea
is that you can set as many regions or points
13337.199 -> as you want here. And so here, I just have
two as an example. So I have China chosen
13342.47 -> over here. And it looks like we have Dublin
chosen. So just an idea to show you a simple
13347.47 -> example. Here's a really complicated one here,
I chose every single region just so you have
13351.92 -> an idea of the split. So the idea is you can choose
as little or as many as you want. And then
13357.66 -> you can also give it custom coordinates. So
here I chose Hawaii. So I looked at the Hawaii
13361.419 -> coordinates, plugged it in, and then I turned
the bias down to 80%. So that it would have
13365.76 -> exactly around here and I could have honed
it in more. So it just gives you a really
13369.699 -> clear picture of how geo proximity works.
And it really is boundary based and you have
13375.169 -> to use traffic flow for that. So the last
routing policy we're going to look at is multivalue.
13385.76 -> And multivalue is exactly like simple routing
policy. The only difference is that it uses
13392.76 -> a health check. Okay, so the idea is that
if it picks one by random, it's going to check
13397.109 -> if it's healthy. And if it's not, it's just
going to pick another one by random. So that
13401.22 -> is the only difference between multivalue
and simple. So there you go.
13410.399 -> Another really powerful feature of Route 53
is the ability to do health checks. Okay,
13414.999 -> so the idea is that you can go create a health
check, and I can say for app.exampro.co,
13420.729 -> it will check on a regular basis to see whether
it is healthy or not. And that's a good way
13425.67 -> to see at the DNS level if something's wrong
with your instance, or if you want to failover
13430.689 -> so let's get into the details of here. So
we can check health every 30 seconds by default
13436.319 -> and it can be reduced down to 10 seconds,
okay. A health check can initiate a failover
13441.71 -> if the status returned is unhealthy. A CloudWatch
alarm can be created to alert you of status
13447.319 -> unhealthy, a health check can monitor other
health checks to create a chain of reactions.
13454.02 -> You can have up to 50 in a single AWS account.
And the pricing is pretty affordable. So it's
13461.249 -> 50 cents, so that's two quarters, per endpoint
on AWS. And there are some additional features,
13467.41 -> which is $1 per feature. Okay.
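The failover behavior driven by these health checks reduces to a small decision: serve the primary while its health check passes, and flip to the secondary when it fails. This pure function mimics that decision from a recorded check result; the endpoint names are illustrative defaults, and the real service makes this call continuously on its 30-second (or 10-second) polling interval.

```python
# Failover sketch: Route 53 polls the primary's health check on an
# interval (30s default, down to 10s) and redirects traffic to the
# secondary while the primary is unhealthy. Endpoint names here are
# illustrative placeholders.
def choose_endpoint(primary_healthy,
                    primary="primary.example.co",
                    secondary="secondary.example.co"):
    """Return the endpoint traffic should go to right now."""
    return primary if primary_healthy else secondary

print(choose_endpoint(True))   # primary.example.co
print(choose_endpoint(False))  # secondary.example.co
```

This is the active-passive setup from the failover routing policy: the secondary only receives traffic while the primary's check is failing.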
13469.979 -> So if you're using route 53, you might wonder
well, how do I route traffic to my on premise
environment, and that's where Route 53
13485.229 -> Resolver comes into play, formerly known as
the .2 resolver. Resolver is a regional service that
lets you connect route DNS queries between
13490.5 -> your VPC and your network. So it is a tool
for hybrid environments on premises and cloud.
13496.109 -> And we have some options here. If we just
want to do inbound and outbound, inbound only,
13499.539 -> or outbound only. So that's all you really
need to know about it. And that's how you
13504.949 -> do hybrid networking. So now we're taking
13514.859 -> a look at the Route 53 cheat sheet, and we're
going to summarize everything that we have
13518.279 -> learned about Route 53. So Route 53
is a DNS provider to register and manage domains,
create record sets, think GoDaddy or namecheap.
13523.439 -> Okay, there's seven different types of routing
policies, starting with simple routing policy,
13527.81 -> which allows you to input a single or multiple
IP addresses to randomly choose an endpoint
13533.899 -> at random, then you have weighted routing,
which splits up traffic between different
13537.88 -> weights, assigned as percentages; latency
based routing, which is based on routing
13541.95 -> traffic by region for the lowest
possible latency for users. So it's not necessarily
13546.81 -> the closest geolocation but the lowest
latency, okay, we have a failover routing,
13551.689 -> which uses a health check. And you set a primary
and a secondary, it's going to failover to
13556.81 -> the secondary if the primary health check
fails, you have geolocation, which routes traffic
13561.069 -> based on the geolocation. So this is based
on geolocation would be like North America
13567.649 -> or Asia, then you have geo proximity routing,
which can only be done in traffic flow allows
13573.359 -> you to set biases so you can set basically
like this map of boundaries, based on the
13579.839 -> different ones that you have, you have multi
value answer, which is identical to a simple,
13584.939 -> simple routing, the only difference being
that it uses a health check. In order to do
13589.3 -> that. We looked at traffic flow, which is
a visual editor for changing routing policies,
13594.01 -> you can version those policy records
for easy rollback. We have the alias record, which
13599.38 -> is AWS's smart DNS record, which detects
IP changes for AWS resources and adjusts
13604.31 -> them automatically. You always want to use the alias
record, when you have the opportunity to do
13608.3 -> so you have route 53 resolver, which is a
hybrid solution. So you can connect your on
13614.89 -> premise and cloud so you can network between
them. And then you have health checks, which
13620.089 -> can be created to monitor and automatically
failover to another endpoint. And you can
13625.68 -> have health checks, monitor other health checks
to create a chain of reactions, for detecting
13630.949 -> issues for endpoints. Hey, this is Andrew Brown
from exam Pro. And we are looking at Elastic
13642.329 -> Compute Cloud, EC2, which is a cloud computing
service. So choose your OS, storage, memory,
13647.589 -> and network throughput, and then launch and SSH
into your server within minutes. Alright,
13652.609 -> so we're on to the introduction to EC2.
And so EC2 is a highly configurable server.
13657.56 -> It's resizeable compute capacity, it takes
minutes to launch new instances, and anything
13662.569 -> and everything on AWS uses EC2 instances
underneath. So whether it's RDS or ECS
13669.01 -> or Simple Systems Manager, I highly, highly
believe that at AWS, they're all using EC2
13675.479 -> underneath, okay. And so we said they're highly configurable.
So what are some of the options we have here?
13679.689 -> Well, we get to choose an Amazon machine image,
which is going to have our OS, so whether
13684.35 -> you want Red Hat, Ubuntu, Windows, Amazon
Linux, or SUSE. Then you choose your instance
13689.169 -> type. And so this is going to tell you like
how much memory you want versus CPU. And here,
13694.279 -> you can see that you can have very large instances.
So here is one server that costs $5 a month.
13701.539 -> And here we have one that's $1,000 a month.
And this one has 36 CPUs and 60 gigabytes
13707.289 -> of memory with 10 gigabyte performance, okay,
then you add your storage, so you could add
13713.529 -> EBS or EFS, and we have different volume types
we can attach. And then you can configure
13719.499 -> your instance. So you can secure it and
get your key pairs. You can have user data,
13724.97 -> IAM roles and placement groups, which we're
all going to talk about starting. All right,
13731.51 -> so we're gonna look at instance types and
what their usage would be. So generally, when
13737.489 -> you launch an EC two instance, it's almost
always going to be in the T two or the T three
13741.93 -> family. And
13742.93 -> yes, we have all these little acronyms which
represent different types of instance types.
13748.39 -> So we have these more broad categories. And
then we have subcategories, or families have
13753.35 -> instances that are specialized. Okay, so starting
with general purpose, it's a balance of compute
13758.1 -> and compute memory and networking resources.
They're very good for web servers and code
13762.97 -> repository. So you're going to be very familiar
with this level here. Then you have compute
13767.529 -> optimized instances. So these are ideal for
compute bound applications that benefit from
13773.06 -> high performance processors. And as the name
suggests, this is compute it's going to have
13778.069 -> more computing power. Okay, so scientific
modeling, dedicated gaming servers, and ad
13782.7 -> server engines. And notice they all start
with C. So that makes it a little bit easier
13786.149 -> to remember. Then you have memory optimized
and as the name implies, it's going to have
13790.789 -> more memory on the server. So fast performance
for workloads that process large data sets
13795.819 -> in memory. So use cases in memory caches in
memory databases, Real Time big data analytics,
13802.22 -> then you have accelerated optimized instances,
these are utilizing hardware accelerators
13807.819 -> or co processors. They're going to be good
for machine learning, computational finance,
13813.369 -> seismic analysis, speech recognition, really
cool. Future tech uses a lot of accelerated
13818.239 -> optimized instances. And then you have storage
optimized. So this is for high sequential
13823.88 -> reads and write access to very large data
sets on local storage. Two use cases might
13827.979 -> be a no SQL database in memory or transactional
databases or data warehousing. So how is it
13833.39 -> important? How important is it to know all
these families? It's not so important at the associate
13838.6 -> track; at the professional track, you will
need to know them. All you need to know
13842.239 -> are these general categories, and
just kind of remember which fits into
13847.879 -> where and just their general purposes. All
right.
13857.279 -> So in each family of EC two instance types,
so here we have the T two, we're gonna have
13862.529 -> different sizes, and so we can see small,
medium, large x large, I just wanted to point
13867.069 -> out that generally, the way the sizing works
is you're gonna always get double of whatever
13872.33 -> the previous one was, generally, I say generally,
because it does vary. But the price is almost
13877.97 -> always double. Okay, so from small to medium,
you can see the RAM has doubled, the CPU has
13882.77 -> doubled. From medium to large, it isn't exactly doubled,
but here the CPU has doubled. Okay, but
13888.819 -> the price definitely has doubled,
or nearly so; it's almost always twice
13894.819 -> the price. So the general rule is, if you're wondering
when you should upgrade: if you need double
13899.55 -> of what you have, then you're better off just going
to the next size up.
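The doubling rule can be sanity-checked with a few lines of Python. The t2 hourly rates below are approximate us-east-1 on-demand prices from around the time of the course; treat them as illustrative, since real prices change.

```python
# Approximate t2 on-demand hourly prices (us-east-1, circa 2020) --
# illustrative only, not current AWS pricing.
T2_HOURLY = {
    "t2.small":  0.023,
    "t2.medium": 0.0464,
    "t2.large":  0.0928,
    "t2.xlarge": 0.1856,
}

def price_ratios(prices):
    """Ratio of each size's price to the previous size's price."""
    values = list(prices.values())
    return [round(b / a, 2) for a, b in zip(values, values[1:])]
```

Running `price_ratios(T2_HOURLY)` shows each step is roughly 2x the previous, which is the "double of what you have" rule of thumb in practice.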
13904.669 -> So we're gonna look at the concept called
instance profile. And this is how your EC
13910.859 -> two instances get permissions. Okay? So instead
of embedding your AWS credentials, your access
13916.569 -> key and secret in your code, so your instance
has permissions to access certain services,
13921.369 -> you can attach a role to an instance via an
instance profile. Okay, so the concept here
is you have an EC2 instance, and you have
an instance profile. And that's just the container
13931.939 -> for a role. And then you have the role that
actually has the permissions. Alright. And
13937.6 -> so I do need to point out that whenever you
have the chance to not embed AWS credentials,
13943.159 -> you should never embed them. Okay, that's
like a hard rule with AWS. And anytime you
13948.65 -> see an exam question on that, definitely,
always remember that. The way you set an instance
13953.499 -> profile on an EC2 instance: if you're using
the wizard, you're going to see the IAM role
13958.29 -> here. And so you're going to choose or
create one and then attach it. But
13962.09 -> there's one thing that people don't see:
the instance profile itself, because
13966.54 -> it's kind of an invisible step. So
if you're using the console, it's actually
13970.52 -> going to create it for you. If you're doing
this programmatically through CloudFormation,
13974.479 -> you'd actually have to create an instance
profile. So sometimes people don't realize
13977.42 -> that this thing exists. Okay. We're gonna
take a look here at placement groups, the
13987.609 -> placement groups let you choose the logical
placement of your instances to optimize for
13991.149 -> communication performance, or durability.
And placement groups are absolutely free.
13995.96 -> And they're optional, you do not have to launch
your EC two instance in within a placement
13999.97 -> group. But you do get some benefits based
on your use case. So let's first look at cluster
14004.58 -> so cluster packs instances close together inside
an AZ and they're good for low latency network
14010.59 -> performance for tightly coupled node to node
communication. So when you want servers to
be really close together, so communication
is super fast. They're well suited for high
14018.77 -> performance computing HPC applications, but
clusters cannot be multi az, alright, then
14024.659 -> you have partitions. And so partitions spread
instances across logical partitions. Each
14029.409 -> partition does not share the underlying hardware.
So they're actually running on individual
14034.209 -> racks here for each partition. They're well
suited for large distributed and replicated
14039.029 -> workloads, such as Hadoop, Cassandra, and
Kafka, because these technologies use partitions
14044.359 -> and now we have physical partitions. So that
makes total sense there, then you have spread
14049.27 -> and so spread is when each instance is placed
on a different rack. And so, when you have
14055.43 -> critical instances that should be kept separate
from each other, this is the case where
14061.68 -> you'd use it. You can spread a max of seven
instances, and spreads can be multi-AZ, okay,
14067.939 -> whereas clusters are not allowed to go multi
AZ. So there you go. So user data is a script,
14077.55 -> which will automatically run when launching
easy to instance. And this is really useful
14081.209 -> when you want to install packages or apply
updates or anything you'd like before the
14085.319 -> launch of an instance. And so when you're
going through the EC2 wizard, there's
14089.699 -> this advanced details step where you can provide
your bash script here to do whatever you'd
14095.319 -> like. So here I have it installing Apache,
and then it starts that server. If you were
14100.97 -> logging into an EC2 instance and you didn't
really know whether a user data script was performed
14107.459 -> on that instance on launch, you could actually
use the URL at 169.254.169.254. If you
14114.93 -> were to curl that within that EC2 instance,
with the user data path, it would actually return
14119.47 -> whatever script was run. So that's just good
to know. But yeah, user data scripts are very
14123.93 -> useful. And I think you will be using one.
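The curl check described above can be sketched in Python too. The 169.254.169.254 address and the /latest/user-data and /latest/meta-data paths are the real link-local endpoints (IMDSv1 style), but the fetch itself only works from inside a running EC2 instance, so off-instance only the URL builder is exercisable.

```python
from urllib.request import urlopen

# Link-local address of the EC2 instance metadata service.
BASE = "http://169.254.169.254/latest"

def endpoint(kind: str) -> str:
    """Build the URL for 'user-data' or 'meta-data'."""
    return f"{BASE}/{kind}"

def fetch(kind: str, timeout: float = 2.0) -> str:
    # Equivalent to: curl http://169.254.169.254/latest/user-data
    # Only works from inside a running EC2 instance.
    with urlopen(endpoint(kind), timeout=timeout) as resp:
        return resp.read().decode()
```

Calling `fetch("user-data")` on the instance returns whatever script was run at launch, which is exactly the debugging trick described above.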
14131.439 -> So metadata is additional information about
your EC two instance, which you can get at
14137.619 -> runtime. Okay. So if you were to SSH into
your EC2 instance, and run this curl command
14142.659 -> with latest metadata on the end, you're going
to get all this information here. And so the
14147.051 -> idea is that you could get information such
as the current public IP address, or the AMI
14153.55 -> ID that was used to launch the instance, or
maybe the instance type. And so the idea here
14158.529 -> is that by being able to do this programmatically,
you could use a bash script, and combine
14162.56 -> user data and metadata to perform
all sorts of advanced operations.
14167.92 -> So yeah, metadata is quite useful and great
for debugging. So yeah, it's time to look
14177.199 -> at the EC two cheat sheet here. So let's jump
into it. So Elastic Compute Cloud (EC2) is
14181.64 -> a cloud computing service. So you configure
your EC2 by choosing your storage, memory,
14186.359 -> and network throughput, and other options
as well. Then you launch and SSH into your
14191.72 -> server within minutes. EC2 comes in a variety
of instance types specialized for different
14196.109 -> roles. So we have general purpose, that's
for balance of compute memory and network
14200.061 -> resources, you have compute optimized, as
the name implies, you can get more computing
14204.18 -> power here. So ideal for compute bound applications
that benefit from high performance processors,
14209.43 -> then you have memory optimized. So that's
fast performance for workloads that process
14213.12 -> large data sets in memory, then you have accelerated
optimized hardware accelerators or co processors,
14219.109 -> then you have storage optimized that's high
sequential read and write access to very large
14224.069 -> data sets on local storage, then you have
the concept of instance sizes. And so instance
14229.709 -> sizes generally double in price and key attributes.
So if you're ever wondering when it's time
14233.999 -> to upgrade, just think: when you need double
of what you have, it's time to upgrade. Then
14239.449 -> you have placement groups, and they let you
choose the logical placement of your instances
14243.05 -> to optimize communication performance, durability,
and placement groups are free, it's not so
14247.069 -> important to remember the types, because
I don't think they'll come up on the Solutions
14251.199 -> Architect Associate. And then we have user
data. So a script that will be automatically
14255.229 -> run when launching an EC2 instance. Then there's metadata.
Metadata is about the current instance. So
14261.409 -> you could access this metadata via a local
endpoint when SSH'd into an EC2 instance.
14266.81 -> So you have this curl command here with metadata,
and metadata could be the instance type, current
14272.229 -> IP address, etc, etc. And then the last thing
is instance profile. This is a container for
an IAM role that you can use to pass role information
to an EC2 instance when the instance
14281.839 -> starts. Alright, so there you go, that's EC2.
So we're gonna take a look at the EC two pricing
14291.56 -> model. And there are four ways we can pay
with EC2: we have on demand, spot, reserved,
14297.289 -> and dedicated. And we're going to go through
each section and see where each one is.
14302.939 -> We're gonna take first a look at on demand
pricing. And this is whenever you launch an
14311.899 -> EC2 instance, it's going to by default
use on demand, and so on demand has no upfront
14316.72 -> payment and no long term commitment. You're
only charged by the hour or by the minute; it's
14321.22 -> going to vary based on EC2 instance type.
And that's how the pricing is going to work.
14325.829 -> And you might think, okay, what's the use
case here? Well, on demand is for applications
where the workload is short term, spiky, or
14334.17 -> unpredictable. When you have a new app for
14334.17 -> development, or you want to just run an experiment,
this is where on demand is going to be a good
14338.14 -> fit for you. Now we're taking a look at
reserved instances, also known as RIs, and these
14346.979 -> are going to give you the best long term savings.
And it's designed for applications that have
14352.369 -> steady state predictable usage or require
reserved capacity. So what you're doing is
14357.72 -> you're saying to AWS, you know, I'm gonna
make a commitment to you. And I'm going to
14361.35 -> be using this over the next period of time, and
they're going to give you savings. Okay, so
14365.02 -> this reduced pricing is going to be based
on three variables: we have term, class offerings,
14368.459 -> and payment options. We'll walk through these
things to see how they all work. So for class
14373.649 -> offerings, we have standard, convertible, and
scheduled. Standard is going to give us the
14376.479 -> greatest savings with 75% reduced pricing.
And this is compared to obviously to on demand.
14383.64 -> The thing here though, is that you cannot
change the RI attributes, attributes being
14387.569 -> like instance type, right? So whatever you
have, you're stuck with it. Now if
14391.89 -> you needed a bit more flexibility, because
you might need to have more room to grow in
14395.64 -> the future. You'd look at convertible so the
14399.39 -> savings aren't gonna be as great; we're looking
at up to 54%. But now you have the ability
to let's say, change your instance type to
14404.569 -> a larger size. You can't go smaller, but you can
always go larger, so you're going
14409.29 -> to have some flexibility there. Then there's
scheduled, and this is when you need to reserve
14413.31 -> instance, for a specific time period, this
could be the case where you always have a
14418.339 -> workload that's predictable every single Friday
for a couple hours. And the idea is, by telling
14424.239 -> AWS that you're going to be running it on a
schedule, they will give you savings there
14427.55 -> that are going to vary. The other two things
are term and payment options. So the term
14432.39 -> is how long you are willing to commit: a one
year or three year contract. The greater the
14437.56 -> terms, the greater the savings, and you have
payment options: so you have all upfront,
14442.149 -> partial upfront, and no upfront. No upfront is
the most interesting one, because you could
14446.27 -> say, you know, I'm going to use the server
for a year, and you'll just pay at
14451.64 -> the end of the month. And so that is a really
good way of saving money. Right off the bat,
14457.449 -> a lot of people don't seem to know that. So
you know, mix those three together. And that's
14461.3 -> going to change the outcome there. And
I do here have a graphic to show you that
14465.739 -> you can select things and just show you how
they would estimate the actual cost for you.
14473.06 -> A couple things you want to know about reserved
instances: they can be shared between multiple
14476.51 -> accounts within a single organization, and
unused RIs can be sold in the reserved
14482.489 -> instance marketplace. So if you do buy in
through your contract, you're not fully
14486.51 -> out of luck, because you can always try to
resell it to somebody else who might want
14490.619 -> to use it. So there you go. Now we're taking
a look at spot instances, and they have the
14499.339 -> opportunity to give you the biggest savings
with 90% discount compared to on demand pricing.
14504.64 -> There are some caveats, though. So AWS
has all this unused compute capacity, so
14508.869 -> they want to maximize utility of their idle
servers, it's no different than when a hotel
14513.3 -> offers discounts to fill vacant suites, or
when the plane offers discounts to fill vacant
14519.93 -> seats. Okay, so there's just easy two instances
lying around, it would be better to give people
14524.609 -> discounts than for them to do nothing. So
the only caveat though is that when you use
14530.779 -> spot instances, if another customer wants
to pay the higher on-demand price to use
14537.5 -> it, AWS needs to give that capacity to
that on-demand user, and your instance can be
14543.439 -> terminated at any given time, okay. And that's
going to be the trade off. So just looking
14548.83 -> at termination, termination conditions down
below, instances can be terminated by AWS
14552.869 -> at any time. If your instance is terminated
by AWS, you don't get charged for the partial
14559.479 -> hour of usage. But if you were to terminate
an instance, you will still be charged for
14563.76 -> any hour that it ran. Okay, so there you go.
That's the little caveat to it. But what would
14570.629 -> you use spot instances for, if these
instances could be interrupted at any time?
14574.899 -> Well, they're designed for applications that
have flexible Start and End Times or applications
14578.85 -> that are only feasible at very low compute
costs. And so you can see I pulled out the
14584.699 -> configuration graphic from when you create a spot request.
So it's asking, is it for load balancing
14589.079 -> workloads, flexible workloads, big data workloads,
or defined duration workloads. So you can
14592.739 -> see there are some definitions as to what kind
of utility you would have there. But there
14599.739 -> you are.
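The spot termination billing caveat above can be modeled as a tiny function. This is a toy model of the hourly rule exactly as described in the course; AWS billing granularity has changed over time, so treat it as an illustration of the rule, not current billing.

```python
def billable_hours(hours_ran: float, terminated_by_aws: bool) -> int:
    """Toy model of the spot billing caveat: if AWS reclaims the
    instance, the partial final hour is free; if you terminate it
    yourself, that partial hour is still billed."""
    full_hours = int(hours_ran)
    has_partial_hour = hours_ran > full_hours
    if not has_partial_hour:
        return full_hours
    # Partial hour: free only when AWS did the terminating.
    return full_hours if terminated_by_aws else full_hours + 1
```

So an instance reclaimed by AWS after 2.5 hours bills 2 hours, while one you terminate yourself at 2.5 hours bills 3.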
14601.39 -> So we're taking a look at dedicated host instances,
which is our most expensive option with EC
14607.319 -> two pricing models. And it's designed to meet
regulatory requirements when you have strict
14612.459 -> server bound licensing that won't support
multi tenancy or cloud deployments. So to
14617.8 -> really understand dedicated hosts, we need
to understand multi tenant versus single tenant.
14623.339 -> So whenever you launch an EC2 instance
and choose on demand or any of the other
14628.3 -> types besides dedicated hosts, it's multi tenant,
meaning you are sharing the same hardware
14633.59 -> as other AWS customers, and the only separation
between you and other customers is through
14638.899 -> virtualized isolation, which is software,
okay, then you have single tenant and this
14643.951 -> is when a single customer has dedicated hardware.
And so customers are separated through physical
14649.529 -> isolation. All right. And so to just compare
these two, I think of multi tenant as like
14654.109 -> everyone living in an apartment, and single
tenant is everyone living in a house. Right?
14659.81 -> So, you know, why would we want to have our
own dedicated hardware? Well, large enterprises
14665.479 -> and organizations may have security concerns
or obligations about sharing the same hardware
14669.669 -> with other AWS customers. So it really just
boils down to that with dedicated hosts. It
14677.14 -> comes in an on demand flavor and a reserved
flavor. Okay, so you can save up to 70%. But
14683.71 -> overall, dedicated hosts is way more expensive
than our other EC two pricing. Now we're on
14694.069 -> to the EC2 pricing cheat sheet, and this one
is a two pager but we'll make our way through
it. So EC2 has four pricing models: we have
on demand, spot, reserved instances (also known
14703.09 -> as RI), and dedicated. Looking first at on demand,
it requires the least commitment from you.
14709.189 -> It is low cost and flexible, you only pay
per hour. And the use cases here are for short
14715.1 -> term spiky, unpredictable workloads or first
time applications, it's going to be ideal
14720.052 -> when you want workloads that cannot be interrupted,
whereas in spot, that's when you can have
14724.939 -> interruption and we'll get to that here shortly.
So onto reserved instances, you can save up
14729.77 -> to 75% off, it's gonna give you the best long
term value. The use case here are steady state
14735.319 -> or predictable usage. You can resell unused
reserved instances in the reserved instance
14740.271 -> marketplace. The reduced pricing is going to
be based off of these three variables: term,
14744.239 -> class offering, and payment option. So for
terms, we have a one year or a three
14748.6 -> year contract. With payment options, we can
either pay all upfront, partial upfront or
14752.949 -> no upfront. And we have three class offerings,
we have standard convertible and scheduled.
14757.72 -> So for standard we're gonna get up to 75%
reduced pricing compared to on demand. But
14762.05 -> you cannot change those RI attributes, meaning
like, if you want to change to a larger instance
14766.609 -> type, it's not going to be possible, you're
stuck with what you have. If you wanted more
14770.489 -> flexibility we have convertible where you
can get up to 54% off, and you get that flexibility.
14776.119 -> As long as those RI attributes are greater
than or equal in value, you can change those
values. Then you have scheduled, and this is
used for reserved instances for specific
14786.14 -> time periods. So maybe you want to run something
once a week for a few hours. And the savings
14790.76 -> here are gonna vary. Now onto our last two
pricing models, we have spot pricing, which
14796.56 -> is up to 90% off, it's gonna give you the
biggest savings, what you're doing here is
14800.329 -> you're requesting spare computing capacity.
So you know, as we said earlier, it's like
14805.619 -> hotel rooms where they're just trying to fill
the vacant suites. If you're comfortable
14812.14 -> with flexible start and end times, spot pricing
is going to be good for you. The use case
14816.23 -> here is if you can handle interruptions, so
14821.47 -> servers randomly stopping and starting; it's
a very good fit for non-critical background
jobs. Instances can be terminated by AWS
14827.619 -> at any time. If your instance is terminated
by AWS, you won't get charged for that
14832.589 -> partial hour of usage. If you terminate that
instance, you will be charged for any hour
14837.06 -> that it ran, okay. And the last is dedicated
hosting, it's the most expensive option. And
14842.689 -> it's just dedicated servers, okay? And so
it can be utilized in on-demand
14849.489 -> or reserved flavors; you can save up to 70% off. And
the use case here is when you need a guarantee
of isolated hardware. So this is like enterprise
requirements. So there you go, we made it
14859.029 -> all the way through EC2 pricing.
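The cheat sheet above can be condensed into a small decision sketch. The rules below are a study-purpose simplification of the four models just covered, not official AWS guidance.

```python
def pick_pricing_model(steady_state: bool,
                       can_be_interrupted: bool,
                       needs_dedicated_hardware: bool) -> str:
    """Simplified chooser for the four EC2 pricing models."""
    if needs_dedicated_hardware:
        return "dedicated"   # regulatory / server-bound licensing needs
    if steady_state:
        return "reserved"    # up to 75% off for 1-3 year commitments
    if can_be_interrupted:
        return "spot"        # up to 90% off, can be reclaimed by AWS
    return "on-demand"       # short-term, spiky, unpredictable workloads
```

The ordering mirrors the exam logic: hard requirements (dedicated) first, then commitment (reserved), then interruption tolerance (spot), with on-demand as the flexible default.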
14861.649 -> Hey, this is Andrew Brown from exam Pro. And
we are looking at Amazon Machine Images
14869.379 -> (AMIs), which are templates to configure new
instances. So an AMI provides the information
14874.56 -> required to launch an instance. So you can
turn your EC2 instances into AMIs, so that
14880.779 -> in turn, you can create copies of your servers,
okay. And so an AMI holds the following: it's
14886.539 -> going to have a template for the root volume
for the instance. And it's either going to
14889.499 -> be an EBS snapshot, or an instance store template.
And that is going to contain your operating
14895.26 -> system, your application server, your applications,
everything that actually makes up what you
14899.689 -> want your AMI to be. Then you have launch
permissions, which control which AWS accounts
can use the AMI to launch instances. Then
you have block device mapping, that specifies
14912.069 -> volumes to attach to the instances when it's
launched. Alright, and so now I just have
14916.549 -> that physical representation over here. So
you have your EBS snapshot, which is registered
14921.089 -> to an ami and then you can launch that ami
or make a copy of an ami to make another ami,
14926.439 -> okay. And AMIs are region specific, and
we're going to get into that shortly. I just
14935.58 -> wanted to talk about the use cases of AMIs,
and this is how I utilize them. So AMIs help
14941.01 -> you keep incremental changes to your OS, application
code and system packages. Alright, so let's
14945.939 -> say you have a web application or web
server, and you create an AMI of it. And it's
14950.85 -> going to have some things you've already installed
on it. But let's say you had to come back
14954.52 -> and install Redis because you want to run
something like Sidekiq, or now you need to
14958.159 -> install ImageMagick for image processing.
Or you need the cloudwatch agent because you
14962.81 -> wanted to stream logs from your EC2 instance
to cloud watch. And that's where you're going
14967.409 -> to be creating those revisions, okay, and
it's just gonna be based on the names. AMIs
14972.619 -> are generally utilized with Systems Manager
automation. So this is a service which will
14978.909 -> routinely patch AMIs with security updates.
And then you can bake those AMIs so you can
14983.899 -> quickly launch them, which ties into launch
configuration. So when you're dealing with
14989.039 -> auto scaling groups, those use launch configurations
and launch configurations have to have an
14993.169 -> AMI. So when you attach an AMI to a launch configuration
and you update the launch configuration in
14997.989 -> your auto scaling group, it's going to roll
out those updates to all those multiple instances.
15002.569 -> So just to give you like a bigger picture
of how AMIs tie into the AWS ecosystem.
15010.989 -> So I just quickly wanted to show you the AWS
Marketplace. And so again, the marketplace
15018.25 -> lets you purchase subscriptions to vendor
maintained AMIs. There can also be free ones
15022.829 -> in here as well, but generally they are
paid, and they come at additional cost on
15027.5 -> top of your EC2 instance. So here, if you
wanted to use Microsoft's deep learning AMI,
15033.37 -> you could, but you'd have to pay whatever that
is per hour. But generally people are purchasing
15039.01 -> from the marketplace. Security-hardened AMIs
are very popular. So let's say you had to
15044.399 -> run Amazon Linux and you wanted it to meet
the requirements of CIS Level 1. Well, there
15050.819 -> it is in the marketplace, and it only costs
you 0.02, I guess that's two cents, per
15055.339 -> hour, or $130 per year. Yeah, so I just
wanted to highlight that.
15065.88 -> So when you're creating an AMI, you can create
an AMI from an existing EC2 instance that's
15069.85 -> either running or stopped, okay. And all you
have to do to create an AMI is drop down your
15074.819 -> actions, go to image and create image. And
that's all there is to it.
15080.76 -> Now we're gonna look
15084.089 -> at how we would go about choosing our AMI,
and so AWS has hundreds of AMIs you can
15088.879 -> search and select from. And so they have something
called community AMIs, which are free AMIs
15093.899 -> maintained by the community. And then we
also have the AWS Marketplace, which has
15098.02 -> free or paid AMIs maintained by vendors.
And so here in front of me, this is where
15103.72 -> you would actually go select an ami. But I
wanted to show you something that's interesting
15107.669 -> because you can have an AMI, so let's say
Amazon Linux 2. And if you were to look at
15112.249 -> it in North Virginia, and compare it to it
in another region, such as Canada Central,
15117.17 -> you're going to notice that there's going
to be some variation there. And that's because
15119.829 -> AMIs, even though they appear the same,
are different to meet the needs of that region.
15125.699 -> And so, you know, you can see here for
Amazon Linux 2 in North Virginia, we can launch
15130.489 -> it in x86 or ARM, but in Canada Central
only 64-bit is available. Alright, so the way we can tell
15137.72 -> that these AMIs are unique is that they have
AMI IDs, so they're not one to one. And so
15143.149 -> AMIs are region specific. And so they will
have different AMI IDs per region. And you're
15149.199 -> not going to just be able to take an ID
from one region and launch it in another
15153.169 -> region; there are some things you have
to do to get an AMI to another region. And
15158.12 -> we will talk about that. The most important
thing here I just want to show you is that
15162.359 -> we do have hundreds of AMIs to choose from.
And there is some variation between regions.
15167.689 -> So when choosing an AMI, we do have a lot
of options open to us to filter down what
15172.589 -> it is that we're looking for. And so you can
see that we could choose based on our OS, whether
15177.55 -> the root device type is EBS or instance store,
whether it's for all regions or the current region,
15183.569 -> or maybe the architecture, so we do have a
bunch of filtration options available to us.
15187.869 -> AMIs are categorized as either backed by
EBS or backed by instance store; this is a very
15192.959 -> important option here. And you're going to
notice that in the bottom left corner on there,
15198.68 -> I just wanted to highlight because it is something
that's very important.
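Stepping back to the region-specific point from a moment ago, here is a toy simulation (not the AWS API) of why it matters: the "same" image gets a different region-local ID everywhere it exists, which is why an AMI ID from one region can't simply be launched in another.

```python
import itertools

_counter = itertools.count(1)  # fake ID generator for the simulation

def register_ami(catalog, name, region):
    """Register an image in one region and return its region-local ID."""
    ami_id = f"ami-{next(_counter):08x}"
    catalog[(name, region)] = ami_id
    return ami_id

def copy_ami(catalog, name, src_region, dst_region):
    """Copying produces a brand-new ID in the destination region."""
    assert (name, src_region) in catalog, "source AMI must exist"
    return register_ami(catalog, name, dst_region)
```

Even though both entries represent "the same" image, the two regions end up holding different IDs, mirroring the Amazon Linux 2 comparison above.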
15206.359 -> So
15207.359 -> you can also make copies of your AMI. And
this feature is also really important when
15210.649 -> we're talking about AMIs, because they are
region specific. So the only way you can get
15215.01 -> an AMI from one region to another is you have
to use the copy command. So you do copy AMI,
15220.279 -> and then you'd have the ability to choose
to send that AMI to another region. So there
15225.68 -> we are. So we're onto the AMI cheat sheet.
Let's jump into it. So Amazon Machine Image,
15236.359 -> also known as AMI, provides the information
required to launch an instance. AMIs are
15240.52 -> region specific. If you need to use an AMI
in another region, you can copy an AMI into
15246.43 -> the destination region via the copy AMI command.
You can create AMIs from an existing EC2
15252.739 -> instance that's either running or stopped.
Then we have community AMIs, and these are
15257.01 -> free AMIs maintained by the community. Then
there are AWS Marketplace AMIs, and
15262.829 -> these are free or paid subscription AMIs
maintained by vendors. AMIs have AMI IDs;
15269.14 -> the same AMI, so if you take Amazon Linux 2, it
will vary in both AMI ID and options, such
15275.56 -> as the architecture option, in different regions;
they are not exactly the same, they're different.
15281.689 -> Okay, an ami holds the following information,
it's going to have a template for the root
15286.359 -> volume for the instance. So that's either
going to be an EBS snapshot or an instance
store template. And that will contain the operating
system, the application server, and application
15295.569 -> data, okay. And then you have the launch permissions
that control which AWS accounts can be
15301.59 -> used with the AMI to launch instances, and
a block device mapping that specifies the
15307.499 -> volumes to attach to the instances when it's
launched. So there is your AMI cheat sheet, and good
15314.619 -> luck. Hey, this is Andrew Brown from exam
Pro. And we are looking at auto scaling groups.
15321.919 -> So auto scaling groups let you set scaling
rules, which will automatically launch additional
15325.839 -> EC2 instances or shut down instances to
meet the current demand. So here's our introduction
15331.529 -> to auto scaling groups. So an auto scaling group,
abbreviated to ASG, contains a collection of EC2
15337.449 -> instances that are treated as a group
for the purpose of automatic scaling and management.
15342.839 -> And automatic scaling can occur via capacity
settings, health check replacements, or scaling
15349.18 -> policies, which is going to be a huge topic.
15356.47 -> So the simplest way to use auto scaling groups
is just to work with the capacity settings
15360.939 -> with nothing else set. And so we have desired
capacity, Min, and Max. Okay, so let's talk
15366.43 -> through these three settings. So min is
how many EC2 instances should at least
15371.27 -> be running, okay; max is the maximum number of EC2
instances allowed to be running; and desired
15377.3 -> capacity is how many EC2 instances you
ideally want to run. So when min is set to
15382.68 -> one, and let's say you had a new auto scaling
group, and you launched it, and there was nothing
15387.399 -> running, it would spin up
one. And if that server died, for whatever
15391.129 -> reason, because it was unhealthy, or
just crashed, for whatever reason, it's always
15395.81 -> going to spin up at least one. And then you
have that upper cap, where it can never go
15400.811 -> beyond two, because auto scaling groups could
trigger more instances. And this is like a
15407.14 -> safety net to make sure that you know, you
just don't have lots and lots of servers running.
15411.149 -> And desired capacity is what you ideally want
to run. The ASG will try to get it to be
15417.659 -> that value, but there's no guarantee that
it will always be that value. So that's capacity.
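The interplay of these three settings can be sketched as a clamp. This is an assumption-level model of what the ASG aims for, not the actual scaling algorithm.

```python
def effective_capacity(desired: int, minimum: int, maximum: int) -> int:
    """The group aims for `desired` instances, but never runs fewer
    than `minimum` or more than `maximum`."""
    return max(minimum, min(desired, maximum))
```

For example, with min=1 and max=2: if every instance has died (desired effectively 0), the group still brings one back up, and no matter how many a scaling policy asks for, it never exceeds two, which is the safety-net behavior described above.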
15424.17 -> So another way that auto scaling can occur
with an auto scaling group is through health
15431.43 -> checks. And down here, we actually have two
types: we have EC2 and ELB. So we're gonna
15436.04 -> look at EC2 first. So the idea here is
that when this is set, it's going to check
15440.589 -> the EC2 instance to see if it's healthy,
and that's dependent on the two checks
15444.81 -> that are always performed on EC2 instances.
And so if any of them fail, it's going to
15449.239 -> be considered unhealthy. And the auto scaling
group is going to kill that EC2 instance.
15454.18 -> And if you have your minimum capacity set
to one, it's going to then spin up a new EC
15460.01 -> two instance. So that's the EC2
type. Now let's go look at the ELB type. So
15465.049 -> for the ELB type, the health check is performed
based on an ELB health check. And the ELB
15470.93 -> can perform a health check by pinging an
endpoint on that server, which could be HTTP
15476.01 -> or HTTPS, and it expects a response. And you
can say I want a 200 back at this specific
15481.72 -> endpoint. So here, that's actually what
we do. So if you have a web app, you might
15485.67 -> make an HTML page called health check, and
it should return 200. And if it does, then it's
15491.43 -> considered healthy. If that fails, then the
auto scaling group will kill that EC2 instance.
15496.93 -> And again, if your minimum is set to one, it's
going to spin up a healthy new EC2 instance.
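The ELB-style check described above boils down to: hit an endpoint and compare the status code to the expected one. Here is a minimal sketch; the health check page and the 200 code mirror the example in the text, and the callable stands in for the actual HTTP GET.

```python
def is_healthy(status_code: int, expected: int = 200) -> bool:
    """An instance passes only if the endpoint returns the
    expected status code (200 in the example above)."""
    return status_code == expected

def check_instance(fetch_status) -> str:
    """fetch_status: any callable that performs the HTTP(S) GET
    against the health check page and returns the status code."""
    return "healthy" if is_healthy(fetch_status()) else "unhealthy"
```

An "unhealthy" result is what triggers the ASG to kill the instance and, if the minimum allows, replace it.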
15505.89 -> The final and
15509.109 -> most important way scaling gets triggered
within an auto scaling group is scaling policies.
15513.909 -> And there's three different types of scaling
policies. And we'll start with target tracking
15517.729 -> scaling policy. And what that does is it maintains
a specific metric at a target value. What
15523.609 -> does that mean? Well, down here, we can choose
a metric type. And so we'd say average CPU
15528.839 -> utilization. And if it were to exceed our
target value, and we'd set our target value
15532.579 -> to 75% here, then we could tell it to add
another server, okay. Whenever we're adding
15538.31 -> instances, that means we're scaling out; whenever
we are removing instances, we're removing servers,
15542.949 -> and we're scaling in, okay. The second type
of scaling policy is simple scaling policy.
15550.76 -> And this scales when an alarm is breached. So
we create whatever alarm we want. And we would
15554.761 -> choose it here. And we can tell it to scale
out by adding instances, or scale in by removing
15560.659 -> instances. Now, this scaling policy is no
longer recommended, because it's a legacy
15564.919 -> policy. And now we have a new policy that
is similar but more robust, to replace this
15571.399 -> one. You can still use it, but you know, it's
not recommended, though it's still in the console.
15575.56 -> But let's look at the one that replaces it
called scaling policies with steps. So same
15580.189 -> concept, you scale based on when an alarm is breached,
but it can escalate based on the alarm's value,
15586.84 -> which changes over time. So before, where you
just had a single value, here we could say,
15591.49 -> well, if we have this alarm and the
value is between one and two, then add one
15598.119 -> instance, and then when it goes between two
and three, then add another instance, or when
15602.069 -> it exceeds three and beyond, then add another
instance. So it helps you grow based
15608.68 -> on that alarm's value as it changes, okay.
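To make those steps concrete, here is a rough Python sketch of the escalation logic. The function name and the step thresholds are made up for illustration — real step scaling is configured on the auto scaling group, not hand-coded like this:

```python
# Hypothetical sketch of a step scaling policy: the more steps the
# alarm's value has climbed past, the more instances get added.

def instances_to_add(alarm_value: float, steps: list[tuple[float, int]]) -> int:
    """steps is a list of (threshold, add_count) pairs, lowest first."""
    added = 0
    for threshold, add_count in steps:
        if alarm_value >= threshold:
            added += add_count  # each step crossed adds more capacity
    return added

# Between 1 and 2 add one instance, between 2 and 3 add another, and so on.
policy = [(1.0, 1), (2.0, 1), (3.0, 1)]
print(instances_to_add(2.5, policy))  # two steps crossed, so add 2
```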
So earlier I was showing you that you can
15620.51 -> do health checks based on ELBs. But I wanted
to show you actually how you would associate
15626.02 -> that ELB to an auto scaling group. And so we
have classic load balancers. And then we have
15631.22 -> application load balancer and network load
balancer. So there's a bit of variation based
15635.55 -> on the load balancer in how you would connect
it, but it's pretty straightforward. So
15640.089 -> in the auto scaling group settings, we have
these two fields, classic load balancers and
15643.339 -> target groups. And for classic load balancers,
we just select the load balancer, and now
15648.149 -> it's associated. So it's as simple as that,
it's very straightforward. But with the newer
15652.709 -> load balancers, there's a target group that's in
between the auto scaling group and the load
15656.589 -> balancer, so you're associating the target
group. And so that's all there is to it. So
15661.939 -> that's how you associate. So to give you the
big picture on what happens when you get a
15671.81 -> burst of traffic, and auto scaling occurs,
I just wanted to walk through this architectural
15675.329 -> diagram with you. So let's say we have a web
server and we have one EC two instance running,
15680.129 -> okay, and all of a sudden, we get a burst
of traffic, and that traffic comes in through Route
15684.379 -> 53. Route 53 points to our application
load balancer. The application load balancer has
15689.329 -> a listener that sends the traffic to the target
group. And we have this EC2 instance
15693.01 -> which is associated with that target group.
And we have so much traffic that it causes
15697.859 -> our CPU utilization to go over 75%. And once
it goes over 75%, because we had a target
15705.439 -> scaling policy attached, that said anything
above 75%, spin up a new instance. That's
15710.689 -> what the auto scaling group does. And so the
way it does that is it uses a launch configuration,
15715.35 -> which is attached to the auto scaling group.
And it launches a new EC2 instance. So
15719.709 -> that's just to give you full visibility
on the entire pipeline and how that actually
15726.1 -> works.
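That whole pipeline boils down to a simple target-tracking decision, which can be sketched in Python — the function and its parameters are hypothetical, with the 75% target mirroring the example above:

```python
# Hypothetical sketch of target tracking scaling: when average CPU goes
# over the target value, the ASG launches one more instance from the
# launch configuration, up to the maximum capacity.

def scale_out(running: int, avg_cpu: float, target: float, max_cap: int) -> int:
    if avg_cpu > target and running < max_cap:
        return running + 1  # scale out: one more EC2 instance
    return running          # otherwise leave capacity alone

print(scale_out(running=1, avg_cpu=85.0, target=75.0, max_cap=3))  # 2
```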
15728.069 -> So when you have an auto scaling group, and
it launches a new EC2 instance, how does it know
15733.899 -> what configuration to use to launch that
new EC2 instance? That is what a launch
15739.449 -> configuration is. So when you have an auto
scaling group, you actually set what launch
15745.319 -> configuration you want to use. And a launch
configuration looks a lot like when you launch
15749.539 -> a new EC2 instance. So you go through and
you'd set all of these options. But instead
15754.76 -> of launching an instance, at the end, it's
actually just saving the configuration. Hence,
15759.459 -> it's called a launch configuration. A couple
of limitations around launch configurations
15763.569 -> that you need to know is that a launch configuration
cannot be edited once it's been created. So
15769.18 -> if you need to update or replace that launch
configuration, you need to either make a new
15773.659 -> one, or they have this convenient button to
clone the existing configuration and make
15778.149 -> some tweaks to it. There is something also
known as a launch template. And they are launch
15785.229 -> configurations, just with versioning. And
so it's AWS's new version of launch configuration.
15790.789 -> And you know, generally when there's something
new, I might recommend that you use it, but
15795.459 -> it seems so far that most of the community
still uses launch configuration. So the benefit
15801.34 -> of versioning isn't huge, it doesn't have a
lot of value there. So, you know,
15807.799 -> I'm not pushing you to use launch templates,
but I just want you to know the difference
15811.319 -> because it is a bit confusing because you
look at it, it looks like pretty much the
15814.01 -> same thing, and it just has versions in here.
Now we can review the auto scaling group cheat
15823.919 -> sheet. So an ASG is a collection of
EC2 instances grouped for scaling and management.
15828.89 -> Scaling out is when you add servers; scaling in
is when you remove servers; scaling up is when
15834.26 -> you increase the size of an instance so like
you'd update the launch configuration with
15837.8 -> a larger size. The size of an ASG is based
on the min, max, and desired capacity. Target
15843.97 -> scaling policy scales based on when a target
value of a metric is breached. So example
15848.079 -> average CPU utilization exceeds 75%. Simple
scaling policy triggers scaling when an
15853.68 -> alarm is breached. Scaling policy with steps
is the new version of simple scaling policy; it allows
15858.64 -> you to create steps based on escalating alarm
values. Desired capacity is how many instances
15864.22 -> you want to ideally run. An ASG will always
launch instances to meet the minimum capacity.
15870.479 -> Health checks determine the current state
of an instance in an ASG. Health checks can
15874.129 -> be run against either an ELB or an EC2
instance. When an auto
15880.529 -> scaling group launches a new instance, it will
use a launch configuration which holds the
15884.909 -> configuration values of that new instance.
For example, maybe the AMI, instance type, or
15889.299 -> role. Launch configurations cannot be edited
and must be cloned, or a new one created. Launch
15895.6 -> configurations must be manually updated
by editing the auto scaling group settings.
15900.779 -> So there you go. And that's everything with
auto scaling. Hey, it's Andrew Brown from Exam
15909.899 -> Pro. And we are looking at elastic load balancers,
also abbreviated to ELB, which distribute
15915.31 -> incoming application traffic across multiple
targets such as EC2 instances, containers,
15920.22 -> IP addresses, or Lambda functions. So let's
learn a little bit about what a load balancer is.
15924.949 -> So a load balancer can be physical hardware
or virtual software that accepts incoming
15929.51 -> traffic and then distributes that traffic
to multiple targets. They can balance the
15934.01 -> load via different rules. These rules vary
based on the type of load balancers. So for
15939.089 -> elastic load balancer, we actually have three
load balancers to choose from. And we're going
15944.22 -> to go into depth for each one, we'll just
list them out here. So we have application
15947.59 -> load balancer, network, load balancer, and
classic load balancer.
15954.069 -> To understand the flow of traffic for ELBs, we
need to understand the three components involved.
15960.839 -> And we have listeners, rules, and target groups.
And these things are going to vary based on
15965.25 -> our load balancers, which we're going to find
out very shortly here. Let's quickly just
15968.589 -> summarize what these things are. And then
see them in context with some visualization.
15973.689 -> So the first one is listeners, and they listen
for incoming traffic, and they evaluate it
15979.01 -> against a specific port, whether that's Port
80, or 443, then you have rules and rules
15986.189 -> can decide what to do with traffic. And so
that's pretty straightforward. Then you have
15991.56 -> target groups, and target groups are a way of
collecting all the EC2 instances you
15996.76 -> want to route traffic to in logical groups.
So let's go take a look first at application
16001.949 -> load balancer and network load balancer. So
here on the right hand side, I have traffic
16005.829 -> coming in through Route 53, that points
to our load balancer. And once it arrives, our
16010.979 -> load balancer goes to the listener, and it's going to
check what port it's running on. So if it's
16014.289 -> on port 80, I have a simple rule here, which
is going to redirect it to port 443. So it's
16020.279 -> gonna go to this listener, and this listener
has a rule attached to it, and it's going
16024.39 -> to forward it to target one. And that target
one contains all these two instances. Okay.
16031.34 -> And down below here, we can just see where
the listeners are. So I have listener at 443.
16035.47 -> And this is for application load balancer,
you can see I also can attach an SSL certificate
16041.35 -> here. But if you look over at rules, and these
rules are not going to appear for network
16046.27 -> load balancer, but they are going to appear
for ALB. And so I have some more complex
16050.63 -> rules. If you're using NLB, it simply just
forwards it to a target, you don't get more
16056.399 -> rich options, which will show you those richer
options in a future slide. But let's talk
16062.109 -> about classic load balancer. So classic load
balancer is much simpler. And so you have
16067.81 -> traffic coming in it goes to CLB. You have
your listeners, they listen on those ports,
16072.899 -> and you have registered targets. So there
aren't target groups, you just have EC2
16079.109 -> instances that are associated with a classic
load balancer. Let's take a deeper look at
16089.529 -> all three load balancers starting with application
load balancer. So application load balancer,
16094.93 -> also known as ALB, is designed to balance
HTTP and HTTPS traffic. It operates at layer
16101.04 -> seven of the OSI model, which makes a lot
of sense because layer seven is application.
16106.439 -> ALB has a feature called request routing, which
allows you to add routing rules to your listeners
16111.069 -> based on the HTTP protocol. So we saw previously,
when we were looking at rules, it was only
16117.079 -> for ALB; those are the request routing
rules. You can attach a web application firewall
16123.89 -> to an ALB. And that makes sense because they're
both application specific. And if you want
16129.669 -> to think of the use case for application load
balancer, well, it's great for web applications.
16138.539 -> So now let's take a look at network load balancer,
which is designed to balance TCP and UDP traffic.
16144.409 -> It operates at the layer four of the OSI model,
which is the transport layer, and it can handle
16149.06 -> millions of requests per second while still
maintaining extremely low latency. It can
16153.869 -> perform cross zone load balancing, which we'll
talk about later on. It's great for you know,
16159.489 -> multiplayer video games, or when network performance
is the most critical thing to your application.
16170.35 -> Let's take a look at classic load balancers.
So it was AWS's first load balancer. So it
16174.699 -> is a legacy load balancer. It can balance
HTTP or TCP traffic, but not at the same time.
16181.189 -> It can use layer seven specific features such
as sticky sessions. It can also use a strict
16186.989 -> layer 4 for balancing purely TCP applications.
So that's what I'm talking about where it
16192.56 -> can do one or the other. It can perform cross
zone load balancing, which we will talk about
16197.51 -> later on and I put this one in here. Because
it is kind of an exam question, I don't know
16202.529 -> if it still appears, but it will respond with
a 504 error in case of timeout if the underlying
16208.01 -> application is not responding. An example of
the application not responding would be
16212.76 -> the web server or maybe the database itself.
So classic load balancer is not recommended
16218.04 -> for use anymore, but it's still around, you
can utilize it. But you know, it's recommended
16223.359 -> to use NLB or ALB when possible.
16232.46 -> So let's look at the concept of sticky sessions.
So sticky sessions is an advanced load balancing
16237.079 -> method that allows you to bind a user session
to a specific EC2 instance. And this is
16243.289 -> useful when you have specific information
that's only stored locally on a single instance.
16247.76 -> And so you need to keep on sending that person
to the same instance. So over here, I have
16252.77 -> the diagram that shows how this works. So
on step one, we route traffic to the first
16259.159 -> EC2 instance, and it sets a cookie, and
so the next time that person comes through,
16263.189 -> we check to see if that cookie exists. And
we're gonna send it to that same EC2 instance.
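That cookie-based binding could be sketched like this in Python — the cookie name, the instance IDs, and the `route` helper are all invented for illustration; this is not how ELB actually implements it internally:

```python
import random

# Hypothetical sketch of sticky sessions: if the request carries our
# affinity cookie, send it to that same instance; otherwise pick an
# instance and set the cookie so future requests stick to it.

COOKIE = "lb-affinity"  # made-up cookie name
INSTANCES = ["i-aaa", "i-bbb", "i-ccc"]

def route(cookies: dict) -> tuple[str, dict]:
    instance = cookies.get(COOKIE)
    if instance not in INSTANCES:            # first visit (or stale cookie)
        instance = random.choice(INSTANCES)  # bind the session to an instance
    return instance, {**cookies, COOKIE: instance}

first, jar = route({})   # step one: the cookie gets set
second, _ = route(jar)   # next visit goes to the same instance
print(first == second)   # True
```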
16268.05 -> Now, this feature only works for classic load
balancer and application load balancer, it's
16273.93 -> not available for NLB. And if you need to
set it for application load balancer, it has
16280.8 -> to be set on the target group and not the individual
EC2 instance. So here's a scenario you
16290.079 -> might have to worry about. So let's say you
have a user that's requesting something from
16295.409 -> your web application, and you need to know
what their IP address is. So, you know, the
16300.85 -> request goes through and then on the EC2
instance, you look for it, but it turns out
16304.939 -> that it's not actually their IP address. It's
the IP address of the load balancer. So how
16309.47 -> do we actually see the user's IP address?
Well, that's through the x forwarded for header,
16315.22 -> which is a standardized header when dealing
with load balancers. So the X-Forwarded-For
16321.26 -> header is a common method for identifying
the originating IP address of a
16326.1 -> client connecting to a web server through an
HTTP proxy or a load balancer. So you would
16330.84 -> just make sure that in your web application
you're reading that header, and then you
16337.43 -> just have to parse it within your web application
to get that user's IP address. So we're taking
16346.81 -> a look at health checks for elastic load balancer.
And the purpose behind health checks is to
16351.55 -> help you route traffic away from unhealthy
instances, to healthy instances. And how do
16357.229 -> we determine if an instance is unhealthy? Well, it's
through all these options, which for ALB and
16362.68 -> NLB is set on the target group, or for classic
load balancers directly set on the load balancer
16367.13 -> itself. So the idea is we are going to ping
the server at a specific URL with a specific
16374.89 -> protocol and get an expected specific response
back. And if that happens more than once over
16381.299 -> a specific interval that we specify, then
we're going to mark it as unhealthy and the
16385.77 -> load balancer is not going to send any more
traffic to it, it's going to set it as out
16389.52 -> of service. Okay. So that's how it works.
One thing that you really need to know is
16395.91 -> that ELB does not terminate unhealthy instances,
it's just going to redirect traffic to healthy
16401.58 -> instances. So that's all you need to know.
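Here is a rough Python simulation of that marking logic — the threshold, interval, and status codes are illustrative assumptions, and note that nothing in it terminates an instance, matching the point above:

```python
# Hypothetical sketch of ELB health check evaluation: after enough
# consecutive failed pings, the target is marked OutOfService and the
# load balancer just stops routing to it (it is never terminated).

def evaluate(ping_statuses: list[int], unhealthy_threshold: int) -> str:
    failures = 0
    for status in ping_statuses:  # one ping per check interval
        failures = failures + 1 if status != 200 else 0
        if failures >= unhealthy_threshold:
            return "OutOfService"  # traffic routed away, instance kept
    return "InService"

print(evaluate([200, 500, 500], unhealthy_threshold=2))  # OutOfService
```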
So here, we're taking a look at cross zone
16409.092 -> load balancing, which is a feature that's
only available for classic and network load
16413.561 -> balancer. And we're going to look at it when
it's enabled, and then when it's disabled
16416.781 -> and see what the difference is. So when it's
enabled, requests are distributed evenly across
16421.301 -> the instances in all the enabled availability
zones. So here we have a bunch of EC2 instances
16426.18 -> in two different AZs, and you can see the
traffic is even across all of them. Okay?
16431.5 -> Now, when it's disabled, requests are distributed
evenly across the instances in only their
16437.24 -> availability zone. So here, we can see in
AZ A, it's evenly distributed within this
16444.33 -> AZ and then the same thing over here. And
then down below, if you want to know how to
16449.051 -> enable cross zone load balancing, it's under
the description tab, and you'd edit the attributes.
16453.801 -> And then you just check box on cross zone
load balancing. Now we're looking at an application
16463.211 -> load balancer specific feature called request
routing, which allows you to apply rules to
16467.25 -> incoming requests, and then forward or redirect
that traffic. And we can check on a few different
16472.49 -> conditions here. So we have six in total.
So we have host header, source IP,
16477.109 -> path, HTTP header, HTTP request method,
or query string. And then you can see we have
16482.25 -> some "then" options: we can forward, redirect,
return a fixed response, or authenticate.
16487.951 -> So let's just look at a use case down here
where we actually have one, two, three, four, five different
16493.541 -> examples. And so one thing you could do is
you could use this to route traffic based
16497.621 -> on subdomain. So if you want the app
subdomain to go to target prod, and QA to
16503.352 -> go to the target QA, you can do that. You can
also do it on the path. So you could
16507.691 -> have forward slash prod and forward slash qa, and
that would route to the respective target
16511.4 -> groups, you could do it as a query string,
you could use it by looking at HTTP header.
16518.182 -> Or you could say all the get methods go to
prod, I don't know why you'd want to do this, but you
16522.041 -> could, and then all the post methods would
go to QA. So that is request routing
16527.49 -> in a nutshell.
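Those conditions and "then" actions can be sketched as a small routing function in Python — the rule shapes and target group names are invented for illustration; real ALB rules are configured on the listener, not hand-coded:

```python
# Hypothetical sketch of ALB request routing: check rules in order and
# forward to the first matching target group, else use a default action.

RULES = [
    {"host": "app.example.com", "target": "prod"},  # route by subdomain
    {"path": "/qa",             "target": "qa"},    # route by path
    {"method": "POST",          "target": "qa"},    # route by HTTP method
]

def pick_target(host: str, path: str, method: str, default: str = "prod") -> str:
    for rule in RULES:
        if (rule.get("host") == host or rule.get("path") == path
                or rule.get("method") == method):
            return rule["target"]
    return default  # no rule matched: fall back to the default action

print(pick_target("app.example.com", "/", "GET"))   # prod
print(pick_target("qa.example.com", "/qa", "GET"))  # qa
```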
16532.76 -> We made it to the end of the elastic load
balancer section and on to the cheat sheet.
16537.111 -> So there are three elastic load balancers,
network, application, and classic load balancer.
16542.461 -> An elastic load balancer must have at least
two availability zones for it to work. Elastic
16548.051 -> load balancers cannot go cross-region; you
must create one per region. ALBs have listeners,
16554.1 -> rules, and target groups to route traffic,
NLBs have listeners and target groups to route
16559.311 -> traffic. And CLBs use listeners, and EC2
instances are directly registered as targets
16565.65 -> to the CLB. For application load balancer,
it uses HTTPS or HTTP traffic.
16573.211 -> And then as the name implies, it's good for
web applications. Network load balancer is
16577.33 -> for TCP, UDP, and is good for high network
throughput. So think maybe like multiplayer
16582.359 -> video games. Classic load balancer is legacy,
and it's recommended to use ALB or NLB when
16589.291 -> you can. Then you have the X-Forwarded-For,
and the idea here is to get the original IP
16595.422 -> of the incoming traffic passing through the
LB. You can attach a web application firewall
16600.58 -> to an ALB, because, you know, web application
firewall has application in the name, and NLB
16605.66 -> and CLB do not. You can attach an Amazon Certificate
Manager SSL certificate, so that's ACM,
16613.18 -> to any of the ELBs to get SSL. For
ALB you have advanced request routing rules,
16621.68 -> where you can route based on subdomain, header,
path, and other HTTP information. And then you
16627.121 -> have sticky sessions, which can be enabled
for CLB or ALB. And the idea is that it helps
16632.561 -> the session remember which EC2
instance to use, based on a cookie.
16638.32 -> All right, so it's time to get some hands
on experience with EC2, and I want you to
16645.83 -> make your way over to the EC2 console here
by going up to services and typing EC2
16652.26 -> and click through, and you should arrive at
the EC2 dashboard. Okay, I want you to take
16658.1 -> a look on the left hand side here because
it's not just EC2 instances that are
16662.92 -> under here; we're going to have AMIs.
Okay, we're going to have elastic block store,
16668.791 -> we're going to have some networking security.
So we have our security groups, our elastic
16672.73 -> IPs, our key pairs, we're going to have load
balancing, we're going to have auto scaling,
16676.58 -> okay, so a lot of times when you're looking
for these things, they happen to be under
16680.689 -> the EC2 console here, okay. But now
that we've familiarized ourselves
16687.76 -> with the overview here, let's actually go
ahead and launch our first instance. Alright,
16693.141 -> so we're gonna proceed to launch our first
instance here, and we're gonna get this really
16697.58 -> nice wizard here. And we're gonna have to
work our way through these seven steps. So
let's first start with choosing your AMI,
16706.91 -> your Amazon Machine Image. And so an AMI,
16706.91 -> it's a template that contains the software
configuration. So the operating system, the
16710.51 -> application server and applications required
to launch your instance, right. And we have
16716.4 -> some really sane choices here. We have Amazon
Linux 2, which is my go-to, but if you wanted
16722.471 -> something else, like Red Hat or SUSE or
Ubuntu or Microsoft Windows, AWS
16729.24 -> has a bunch of AMIs that they support and they
manage, okay. And so if you were to pay for
16736.211 -> AWS support, and you were to use these
AMIs, you're gonna get a lot of help around them.
16741.051 -> Now if you want more options, besides the
ones that are provided from AWS, there is
16745.84 -> the marketplace and also community AMIs. So
if we go to community AMIs, and we're just
16749.83 -> going to take a peek here, we can see that
we can filter based on things like architecture,
16755.25 -> etc. But let's say we wanted a WordPress AMI,
something pre-loaded with WordPress, we
16760.93 -> have some here. And if we wanted to get a
paid one, one that is actually provided
16767.611 -> by a vendor and that they support it and you
pay some money to make sure that it's in good
16771.73 -> shape, maybe as good security or just kept
up to date. You can do that. So here's WordPress
16775.98 -> by bitnami. Alright, and that's a very common
one to launch for WordPress on here. But you
16780.55 -> know, we're just going to stick to the defaults
here and go back to the Quickstart and we're
16784.46 -> going to launch an Amazon Linux 2 AMI, alright,
and we'll just proceed by clicking select
16790.81 -> here. So now we're on to our second option,
which is to choose our instance type, okay,
16796.52 -> and this is going to determine how many vCPUs
you're going to be using, how much
16800.66 -> memory you're going to have, whether you're going to be
backed by EBS or instance store, and also
16805.042 -> whether there are going to be limitations around our
network performance. And so you can filter
16810.522 -> based on the types that you want. And we learned
this earlier that there are a bunch of different
16814.9 -> categories, we're going to stay in the general
purpose family here, which are the first that
16818.78 -> are listed. We're always going to see T2,
and T2 micro is a very sane choice and
16822.84 -> also a free choice if we have our free tier
here. Okay, and so we will select it and proceed
16830.872 -> to instance details. So now it's time to configure
our instance. And the first option available
16838.122 -> to us is how many instances we want to run.
So if you wanted to launch 100, all at once,
16843.352 -> you can do so and put it in an auto scaling
group, but we're going to stick with one
16846.93 -> to be cost effective, then we have the option
here to turn this into a spot instance. And
16853.48 -> that will help us save a considerable amount
of money. But I think for the time being we
16857.74 -> will stick with on demand. Next we need to
determine what VPC and subnet we're going
16864.22 -> to want to launch this into. I'm going to
stick with the default VPC and the default
16868.301 -> sub network, I should say it will pick one
at random for me, we're definitely going to
16872.872 -> want to have a public IP address. So we're
going to allow that to be enabled, you could
16878.33 -> put this EC2 instance into a placement group,
you'd have to create a placement group first.
16883.17 -> But we're not going to do that because we
don't have the need for it. And we're going
16886.192 -> to need an IM role. So I'm going to go ahead
here and right click and create a new tab
16890.55 -> here and create a new Im role for us to give
this easy to some permissions. And so we're
16896.41 -> gonna go to UC to here, we're going to go
next, I want you to type in SSM for system,
16902.47 -> simple SYSTEMS MANAGER. And we're going to
get this one role for SSM. We hit Next, we're
16907.25 -> going to go next to review, I'm going to say
my EC to EC to URL, okay. And we're gonna
16915.64 -> create that role, close this tab, and we're
gonna hit that refresh there and then associate
16919.98 -> this Im role to our instance here. The reason
I did that, and is because I want us to have
16926.74 -> the ability to use simple SYSTEMS MANAGER
sessions manager, because there are two different
16931.092 -> ways that we can log into our EC two instance,
after it is launched, we can SSH into it,
16936.122 -> or we can use SYSTEMS MANAGER in order to
use this sessions manager, we're going to
16940.88 -> have to
16941.88 -> have that Im role with those permissions,
the default behavior will it will shut down,
16947.56 -> that's or do a stop, that's good to me. If
we wanted detailed monitoring, we could turn
16951.262 -> that on would cost additional, if we want
to protect against accidental accidental termination,
16956.49 -> that is also a very good option. But again,
this is a test here. So we'll probably be
16960.84 -> tearing this down pretty quick. So we don't
need to have that enabled. Then we have the tenancy
16965.55 -> option. So if we get a dedicated host, that's
going to be very expensive. But that'd be
16970.09 -> very good if you are on, you know, an enterprise
organization that has to meet certain requirements.
16976.372 -> So there are use cases for that. And the last
option here is under the advanced details,
16981.84 -> you might have to drop this down and open
it. And this is that script that allows us
16986.42 -> to set something up initially when the server
launches. And we have a script here that we
16991.61 -> want to run. So I'm going to just copy paste
that in there. And what it's going to do is
16995.702 -> when the EC2 instance launches, it's going
to set us up with an Apache server and start
17000.26 -> that server. So we have a very basic website.
Okay. And so yeah, we're all done here, we
17006.433 -> can move on to storage. So now we're going
to look at adding storage to our EC2 instance.
17011.991 -> And by default, we're going to always have
a root volume, which we cannot remove. But
17016.59 -> if we wanted to add additional volumes here
and choose their, their mounting directory,
17021.93 -> we could do so there. And we do have a lot
of different options available to us for volume
17027.33 -> types. But we only want one for the sake of
this tutorial. So we'll just remove that we
17033.22 -> can set the size, we're gonna leave it to
eight. If we want to have this root volume
17039.481 -> persist in the case that the EC2 instance is terminated,
so the volume doesn't get deleted, we can
17043.981 -> uncheck this box; this is a very good idea. In
most cases, you want to keep that unchecked.
17050.31 -> But for us, we want to make cleanup very easy.
So we're going to keep that on. And then we
17054.601 -> have encryption. And so we can turn on encryption
here using kms. The default key, we're just
17060.13 -> going to leave it off for the time being and
yeah, we'll proceed to tags here. And we're
17065.77 -> just gonna skip over tags. Now tags are good
to set. Honestly, I just I'm very lazy, I
17071.78 -> never set them. But if you want to group your
resources across a city or anything, you want
17076.622 -> to consistently set tags, but we're just gonna
skip that and go on to security groups. So
17082.542 -> now we need to configure some security groups
here for this EC2 instance. And then we
17087.19 -> don't have any, so we're gonna have to create
a new one, but we could choose an existing
17090.452 -> one if there was one, but we're gonna go ahead
and name it here because I really don't like
the default name. So we'll just say my SG
17104.98 -> for EC2, okay, and we're gonna set
17104.98 -> some inbound rules here. And so we have SSH
and that stuff, something that we definitely
17109.73 -> want to do, I'm going to set it to my IP,
because I don't want to have the ability for
17114.48 -> anyone to SSH into this instance, I want to
really lock it down to my own. We're gonna
17118.28 -> add another rule here, because this is
running an Apache server. And we do want to
17122.52 -> expose the Port 80 there, so I'm going to
drop that down, it's going to automatically
17127.02 -> pick Port 80. And we do want this to be internet
accessible. But if we want to be very explicit,
17131.32 -> I'm just gonna say anywhere, okay, and I'm
gonna make a note here, just saying, like
17135.88 -> my home, and, you know, for Apache, you know,
it doesn't hurt to put these notes in here.
17142.66 -> And then we'll go ahead and review. So now
it's time to just review and make sure everything
17148.01 -> we set in the wizard here is what we want
it to be. So you know, we just review it.
17153.16 -> And if you're happy with it, we can proceed
to launch. So now when you hit launch, it's
17157.32 -> going to ask you to create a key pair, okay.
And this is going to be so that we can actually
17162.25 -> SSH into the instance and access it. So I'm
going to drop down here, I'm going to create
17166.05 -> a new key pair here. And I'm going to
call it, um, I'm going to call it my EC2.
17174.76 -> And I'm going to go ahead and download that
pair there. And we're going to go ahead and
17178.42 -> launch that instance. Alright. And so now
it says that that instance, is being created.
17184.41 -> So we can go ahead and click on the name here.
And so now it is spinning up, I do suggest
17189.36 -> that you untick here so that we can see
all of our EC2 instances. And what we're going
17193.5 -> to be waiting for is to move from a pending
state into a green state, I forget the name
17198.59 -> of it, I guess enabled or created. And then
after it goes green, we're going to be waiting
17204.322 -> for our two status checks to pass. And if
those two status checks pass, which will show
17209.83 -> up under here: we have our system status check
and our instance status check. That means
17214.39 -> the instance is going to be ready and available,
it's going to run that user data script that
17219.18 -> we have. So once again, we wait until these
two checks are done. It's... oh, sorry, green
17223.71 -> is running. Great. So now we're waiting for
these checks. And once they are complete,
17228.21 -> we're going to be able to use our public IP
address here, either this one here, or even
17232.99 -> this DNS record, and we're going to see
17237.38 -> if our server's running. Okay, so if that
works, then we will be in good shape here.
17243.71 -> Okay, so we're just going to wait for that.
So our two checks have passed, meaning
17253.03 -> that our instance is now ready to attempt
to access via either the public IP or the
17259.182 -> public DNS record, we can use either or I
like using the public DNS record. And so I'm
17263.9 -> just going to copy it using the little clipboard
feature there, make a new tab in my browser,
17268.57 -> and paste it in there. And I'm going to get
the Apache test page, meaning that our user
17273.08 -> data script worked, and it's successfully
installed and started Apache here. So that's
17277.92 -> all great. Now, um, so now, you know, our
instances are in good working order. But let's
17283.332 -> say we had to log into our instance to debug
it or do something with it. That's where we're
17288.19 -> gonna have to know how to either SSH in, which
is usually the method that most people like
to do. Or we can use Systems Manager Sessions
Manager, which is the recommended way by AWS.
17298.252 -> Okay, so I'm gonna show you both
methods here, starting with Sessions Manager.
17304.78 -> But just before we do that, I just want to
name our instance something here to make our
17308.42 -> lives a little bit easier here, you just have
to click the little pencil there and rename
17312.862 -> it. So I'm just going to rename it to my server. And
we'll pop down services here and type SSM
17318.63 -> for Simple Systems Manager and make a new
tab here, click off here. So we have our space
17325.682 -> here, and we will go to the SYSTEMS MANAGER
console. Now, on the left hand side, we are
17331.24 -> looking for sessions manager, which is all
the way down here. And we're just going to
17335.39 -> click that. And we are going to start a new
session. So we're going to choose the instance
17340.262 -> we want to connect to. So just hit start instance
here. Okay. And here, we have my server, which
17348.25 -> is the instance we want to connect to, we'll
hit start session, and it will gain access
17352.42 -> to this instance, immediately. So there is
next to no wait. And we are in this instance,
17357.01 -> the only thing I don't like is that it logs
you in as the root user, which to me is very
over-permissive. But we can get to the correct
user here. For Amazon Linux 2 instances,
17367.06 -> there is always an ec2-user; that is the user
that you want to be doing things under. So
17371.22 -> I'm just going to switch over to that user
here just quickly here. Okay, and so now I'm
17377.31 -> the correct user and I can go about doing
whatever it is that I want to do here. Okay.
17382.8 -> Right. So, so there you go. So that's the
sessions manager, I'm just going to terminate
17389.452 -> here to terminate that instance. And again,
the huge benefit here is that you get sessions
17394.76 -> history so you can get a history of who has
logged in and done like gone into server to
17400.49 -> do something. And, you know, the other thing
is that you don't have to share around that
key pair. Because when you launch an EC2 instance,
you only have that one key pair, and you really
17409.97 -> don't want to share that around. So this does
remove that obstacle for you. Also, because
17416.6 -> people have to log into the console, when
people leave your company, you know, you're
also denying them access there, and you don't
have to retract that key pair from them. So
17425.9 -> it is a lot easier to use sessions manager.
The downside, though, is that it just has
17431.36 -> a very simplistic terminal within the browser.
So if you're used to more rich features from
17436.4 -> your, your OS terminal, that is a big downside.
That's why people still SSH in, which is now
17443.98 -> the next method we are going to use to gain
access to our EC two instance. So in order
17449.3 -> to do that, we are going to need a terminal
Okay, and I just moved that, that key pair
17454.86 -> onto my desktop, when you download, it probably
went to your download. So just for convenience,
17459.46 -> I've moved it here. And what we're going to
do is we are going to use our SSH command
17464.542 -> here, and we're going to log in as the ec2-user,
because that's the user you should be
17468.28 -> logging
17469.28 -> in as when you SSH into it. We're just going to grab
17472.93 -> the public IP, now we could use the public
17475.11 -> DNS record, but the IP address is a bit shorter
here, so it's a bit nicer. And then we're
17479.43 -> gonna use a hyphen flag to specify the private
key that we want to pass along. That's on
17485.13 -> our desktop here, I'm actually already on
the desktop, so I don't have to do anything
17489 -> additional here. So we're just going to pass
along, so we're gonna hit enter, okay. And
17495.022 -> we're just going to wait here, and we got
a Permission denied. Now, if this is the first
17498.57 -> time you've logged into the server, it might ask
you for a fingerprint prompt, where you will type
17502.16 -> in yes. Okay, it didn't ask me that. So that's
totally fine. But you're going to see that
it's giving me a 0644, because the private
key is too open, so it is required that your
17514.182 -> private key files are not accessible by others.
So AWS wants to really make sure that
17518.351 -> you lock down those permissions. And so if
we just do an ls -l here, we can
17523.542 -> see that it has quite a few permissions here.
And we could just lock that down by typing
17527.35 -> chmod 400. Okay. And then if we just take
a look here, again, now it's locked down here.
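The permission fix and login just walked through can be sketched as shell commands; `my-ec2.pem` is the key-pair name from this walkthrough (here a stand-in file is created just so the chmod step is demonstrable, and the SSH call is shown as a comment since the IP is yours to fill in):

```shell
# Stand-in for the downloaded key pair file (yours comes from the console):
touch my-ec2.pem
chmod 644 my-ec2.pem   # mimics the too-open 0644 state that SSH complains about

# Lock the key down so only the owner can read it:
chmod 400 my-ec2.pem

# Verify: the mode should now read -r--------
ls -l my-ec2.pem

# Then log in as ec2-user, passing the key with the -i flag
# (substitute your instance's public IP or DNS name):
#   ssh -i my-ec2.pem ec2-user@<public-ip>
```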
17534.442 -> So if we try to SSH in again, okay, we should
have better luck, this time around. It's not
17539.76 -> as fast as sessions manager, but it will get
you in the door there. There we are. And we
17543.811 -> are logged in as the user, okay, and we'd
go about our business doing whatever it is
17548.43 -> that we'd want to do. Now, we did talk about
in the, the the actual journey about user
17554.68 -> data metadata, and this is the best opportunity
to take a look there. So we have this private,
17561.88 -> this private addresses only accessible when
you're inside your EC two instance. So you
17565.09 -> can gain additional information. First one
is the user data one, okay, so if I just was
to paste that in there, and it's just curl
http://169.254.169.254/latest/
17575.23 -> user-data, and if I were to hit enter, it
will return the script that actually was
17581.02 -> performed on launch. Okay, so if you were
to debug an EC2 instance, and,
17586.3 -> you know, you're like working for another
company, and you didn't launch that instance,
and you really wanted to know what
was performed on launch, you could use
17593.68 -> that to find out, then we have the metadata
endpoint, and that is a way for us to get
17598.4 -> a lot of different rich information. So it's
the same thing, it's just gonna have meta-data
17601.72 -> on the end there with a forward slash, and
you have all these different options. Okay.
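Concretely, the two endpoints being demonstrated look like this when run from inside the instance (the link-local address is fixed and only answers on the instance itself; the `--max-time` flag is my addition so the commands return promptly if run elsewhere):

```shell
# Base URL of the EC2 instance metadata service -- only reachable
# from inside an EC2 instance (link-local address).
IMDS="http://169.254.169.254/latest"

# Return the user data script that ran at launch:
curl --max-time 5 "$IMDS/user-data"

# Return a specific metadata value, e.g. the public IPv4 address:
curl --max-time 5 "$IMDS/meta-data/public-ipv4"
```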
17607.112 -> So let's say we wanted to get the IP address
here of this instance, we could just append
public-ipv4 on there, okay. And there you
17620.622 -> go. So, you know, that is how you log
into an instance via SSH or Sessions
17625.173 -> Manager. And that's how we get some user data
and metadata after the fact. So when we had
17634.43 -> launched our EC2 instance, we didn't actually
17634.43 -> encrypt the root volume, okay. And so if you
were to launch an instance, I just want to
17640.19 -> quickly just show you what I'm talking about
here. And we go to storage here. We just didn't
17646.6 -> we didn't go down here and select this. Alright,
so let's say we had to retro actively apply
17650.622 -> encryption, well, it's not as easy as just
a drop down here, we have to go through a
17655.511 -> bunch of steps, but we definitely need to
know how to do this. Okay, so how would we
17658.753 -> go about that? Well, we go to a volumes on
the left hand side here, and we'd find our
17664.272 -> running volume. So here is the volume that
we want to encrypt that is unencrypted. And
17670.53 -> so what we do is we would first create a snapshot
of it, okay. And so I would say, um, we'll
17677.782 -> say my volume. Okay. We'll go ahead and create
that snapshot. And we'll go back here, and
17684.84 -> we're just going to wait for this snapshot
to complete it's going to take a little bit
17690.372 -> of time here. But once it comes back here,
we'll move on to the next step. So our progress
17695.33 -> is at now at 100%. And we can see that this
snapshot is unencrypted. Okay, so what we're
17701.48 -> gonna do is we're gonna go up to actions at
the top here, make a copy of the snapshot.
17706.66 -> And this is where we're gonna have the ability
to apply encryption. Alright, and we're just
17710.56 -> going to use the default kms key here, which
is a pretty good default to use. And we're
17715.89 -> gonna hit copy. And that's going to now initiate
a copy here. So we'll just visit our snapshots
17721.56 -> page ever. Now we're just going to wait for
this to create our copy. So our progress is
17727.57 -> at 100%. It's encrypted, even though it's
it's pending and 0% down there. Sometimes
17732.14 -> if you hit refresh, that will just get your
interface up to date here. I'm just going
17736.83 -> to turn this off here. So you can see that
we have our volume here. And then we have
17741.52 -> our, our snapshot there. I don't really like
the name of the description, I wonder if I
17745.05 -> can change that. No, so that's fine, though.
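For reference, the same retroactive-encryption flow can be sketched with the AWS CLI (assuming the CLI is installed and configured; the volume ID, region, and snapshot ID below are placeholders):

```shell
# Placeholder IDs -- substitute your own unencrypted volume and region.
VOLUME_ID="vol-0123456789abcdef0"
REGION="us-east-1"

# 1. Snapshot the unencrypted root volume:
aws ec2 create-snapshot --volume-id "$VOLUME_ID" \
    --description "my volume" --region "$REGION"

# 2. Copy the snapshot with encryption turned on (default KMS key);
#    use the snapshot ID returned by step 1:
aws ec2 copy-snapshot --source-region "$REGION" --region "$REGION" \
    --source-snapshot-id "snap-<id-from-step-1>" --encrypted

# 3. From the encrypted copy, use the console's "Create image" action
#    (or `aws ec2 register-image`) and launch a new instance from that AMI.
```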
But anyway, we now have our unencrypted and
17752.532 -> our encrypted volume. So if we wanted to have
this encrypted one launch, all we're gonna
have to do here is launch the EC2,
or sorry, create an image from this snapshot
17764.34 -> here. So I'm just gonna hit Create image.
And this is
17768.02 -> going to create an image from our EBS snapshot,
okay, so that's going to be our ami. And I'm
17773.85 -> just gonna say, my server here, okay. And,
yeah, this all looks good to me. And we'll
17781.83 -> just go ahead and hit Create. Alright, and
we'll click through here to our ami. And so
17787.6 -> we're just going to wait here, actually, I
think it's instantly created. So our ami is
17792.172 -> ready to launch. Okay. So if we want to now
have our server with a version that is encrypted,
17799.73 -> we can just go ahead here and launch a new
instance. And it's going to pull up our big
17803.372 -> interface here. And we'll just quickly go
through it. So t to micro is good. We have
17809.24 -> one instance, we're going to drop down to
our, our EC2 role here, we're going to have
17814.66 -> to again, copy our startup script here. Okay.
Actually, I guess not, because if
17822.96 -> we created a snapshot of our instance, this
would already be installed. So we don't have
17826.16 -> to do that again. So that's good. And then
we'll go to our storage, and there is our
17830.85 -> encrypted volume. Okay, we'll go to security
groups, and we're just going to select that
17834.602 -> existing one there. And then we're gonna go
to review, and we're going to launch, okay,
17839.72 -> and we're gonna choose the existing one. So
we'll say launch instance. Okay. And we'll
17844.85 -> go back here, and we just check box off here,
we're gonna actually have two instances running,
17848.69 -> you can see I have a few terminated ones,
those are just old ones there. But this is
17852.43 -> the one we have running here. So once this
is running, we'll have this here. And we'll
17857.42 -> just double check to make sure it's working.
But we'll talk about, you know, how do we
17861.4 -> manage launching multiple instances here next.
So our new instance is now running, I'm
17871.85 -> just going to name it to my new server. So
we can distinguish it from our old one. And
17876.6 -> I want to see if that root device is actually
encrypted, because that was our big goal here.
17881.49 -> And so we're going to open this up in a new
tab here and look at our, our volume and is
17887.44 -> indeed encrypted. So we definitely were successful
there. Now, the main question, is our server
17892.88 -> still running our Apache test page here? So
I'm going to grab the new IP for the new server
17898.03 -> and take a look here, and we're going to find
that Apache isn't running. So you know, what
17902.15 -> happened? Why is it not working? And there's
a really good reason for that. If we go over
17905.692 -> to our script, when we first use this user
data script on our first server, what it did
17911.63 -> was it installed Apache, and then it started
Apache, okay. But that doesn't mean that it
17917.19 -> kept Apache running if there was a restart.
17923.52 -> So the thing is, is that when we made
an AMI, or like a copy of our volume,
it had the installed part, but there's nothing
17930.24 -> that says on launch, start up, start Apache,
okay, so what we need to do is we need to
17936.71 -> enable this command, which will always have
Apache start on boot, or stop or start or
17944.14 -> restart. Okay. So let's go ahead and actually
turn that on and get our server running. And
17949.4 -> we are going to use sessions manager to do
that. So we'll go back to SYSTEMS MANAGER
17953.872 -> here. If it's not there, just type SSM and
you can just right click and make a new tab
17958.25 -> there. And we're going to go down to sessions
manager. And we are going to start a new session,
17963.99 -> we are going to choose my new server if you
named it that makes it a lot easier to find
17967.47 -> it. And it will get us into that server lickety
split. And we are going to switch to the
17974.23 -> ec2-user because we don't want to do this as
root. And what we're going to do is first
17978.43 -> start our service because it'd be nice to
see that it is working. So we'll go back to
17982.9 -> this IP here and is now working. And now we
want to work on reboots. I'm going to copy
17987.65 -> this command, paste it in, hit enter, and it's
going to create a symlink for the service.
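The command being pasted is presumably `systemctl enable` on Amazon Linux 2, which creates exactly the symlink described, so systemd starts Apache on every boot (the `httpd` unit name is Apache's on Amazon Linux; this is a sketch, not the video's literal terminal output):

```shell
# Apache's service unit on Amazon Linux 2 is called httpd.
UNIT="httpd"

# Start it right now:
sudo systemctl start "$UNIT"

# Enable it at boot -- this creates the symlink under
# /etc/systemd/system/multi-user.target.wants/ that makes
# systemd launch Apache on every reboot:
sudo systemctl enable "$UNIT"

# Confirm it's marked "enabled":
systemctl is-enabled "$UNIT"
```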
17993.38 -> Okay, and so now when we restart the server,
this should work. So what I want you to do
18000.98 -> I want you to just close this tab here, leave
this one open. And we will go back to this
18005.432 -> server here. And we're going to reboot it.
So we're going to go, reboot. And so if our
18012.93 -> command does what we hope it will do, that
means that it will always work on reboot.
18018 -> Okay.
18019.28 -> Great. So it is, it should be rebooting now.
I'm pretty sure. And let me just do that one
18027.46 -> more time.
18028.46 -> Sure, you want to reboot? Yes. There we go.
Was it really that fast? Oh, I think it just
18034.41 -> booted really fast. Okay, so I guess it's
finished booting. And we'll just go here,
18039.31 -> and it's still working so great. So I always
get that confused, because if you stop and
18044.372 -> start instance, it takes a long time reboots
can be very quick. So now that we have this
instance here, the only issue is that
if we were to create a copy of this instance,
18056.49 -> we want to bake in that new functionality.
So we need to create a new ami of that instance.
18061.76 -> Okay, so what we're gonna do is we're gonna
go to our images and create a new image, and
18065.97 -> we're gonna call this, my servers 000. And
we're gonna say what we did to this. So what
18073.122 -> we were doing was ensuring Apache restarts
on boot. On reboot, okay, and then we will
18084.46 -> create our image, okay. And we will let that
image proceed there... it failed. Oh, I never,
I never get failures. That's interesting. Um,
well, that's fine. We'll just do a refresh
18097.952 -> here. Honestly, I've never had an image fail
on me. So what I'm going to do is I'm just
18102.702 -> going to try that one more time here. My servers,
there was a zero, restart Apache on reboot,
18114.98 -> okay. And we will create that image again.
Okay, we'll go back here. And we'll just see
18122.542 -> if that creates it, there just takes a little
bit time. And so I'll come back here. So sometimes
18126.65 -> instances can fail or like ami, or snapshots
can fail, and it's just AWS. So in those cases,
18132.21 -> you just retry again. But it rarely happens
to me. So. But yeah, we'll see you after this
18138.55 -> is done. Our ami is now available. And if
I was to remove this filter, here, we'd see
our original ami, where we installed
Apache, but it doesn't run it by default.
18153.27 -> So if we were to launch this one, we'd have
our problems that we had previous but this
18156.78 -> one would always start Apache, okay, now we
have a couple servers here, I want you to
18162.32 -> kill both of them because we do not need them
anymore. Okay, we're going to terminate them.
18166.71 -> And we're going to learn about auto scaling
groups. All right. So whenever we want to
18172.122 -> have a server always running, this is going
to be a good use case for it. So before we
18176.46 -> can go ahead and create an auto scaling group,
I want you to go create a launch configuration,
18181.17 -> okay. And so I just clicked that down there
below, and we are going to create a configuration,
18186.26 -> and we can choose our ami, but we want to
actually use one of our own AMIs, so I'm
18191.18 -> gonna go my ami here, and I'm going to select
this one, which is the Apache server, it's
18195.55 -> going to be T two micro, we want the role
to be my EC two, we're going to name this
18201.48 -> my server lc 000. LC stands for launch configuration
there. That's just my convention, you can
18211.251 -> do whatever you want. We're going to have
this volume encrypted by default, because
18216.85 -> if you use an ami that is encrypted, you can't
unencrypt it. Okay, we'll go to our security
18222.27 -> groups. And we will drop this down and select
the security group we created previously here.
18227.56 -> Yeah, miss that one, we'll go to review. And
we will create our launch configuration, choose
18233.24 -> our existing key pair there and launch our
or sorry, create our launch configuration.
18237.622 -> Okay, so we've created launch configuration,
you'll see that process was very similar to
18242.15 -> an EC two instance. And that was just that
was saving all the settings because an ami
doesn't save all the settings, right? And a
launch configuration saves all of those settings.
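As a sketch, the equivalent CLI call looks roughly like this (every name and ID below is a placeholder following the video's conventions, not a value from the video itself):

```shell
# Naming convention from the walkthrough: my-server-lc-000.
LC_NAME="my-server-lc-000"

# Create a launch configuration that bundles the AMI, instance type,
# IAM role, key pair, and security group into one reusable recipe:
aws autoscaling create-launch-configuration \
    --launch-configuration-name "$LC_NAME" \
    --image-id "ami-0123456789abcdef0" \
    --instance-type "t2.micro" \
    --iam-instance-profile "my-ec2" \
    --key-name "my-ec2" \
    --security-groups "sg-0123456789abcdef0"
```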
18250.89 -> So now that we have our launch configuration,
we can go ahead and launch an auto scaling
18255.68 -> group, okay. And this is going to help us
keep a server always continuously running.
18260.49 -> So we'll go ahead and create our our auto
scaling group, we're going to use our launch
18263.92 -> configuration here. We're gonna go next steps.
We're going to name it; we'll just say,
18268.71 -> or say, my-server-asg. ASG stands for auto
scaling group. We're going to have a group
18274.1 -> size of one instance, we're going to launch
it into default subnet, we are going to have
18277.292 -> to choose a couple here. So we'll do a and
b. Let's check advanced details. We're going
18282.69 -> to leave that alone. We're going to go to
the next step. And we're going to leave that
18287.41 -> alone. We're going to see notifications. We're
gonna leave that alone tags alone. Oops, I
18293.03 -> think I went backwards here review and we
are going to create that auto scaling group.
18297.92 -> So now we're going to hit close and we're
going to check out that ESG here. And so look
18301.81 -> at that it's set to one desired one min and
one max. Okay, it's using that ami. So now.
18306.013 -> Now what it's going to do, it's going to just
start spinning up that server. So the way
18312.08 -> this works, if I just go to the Edit options,
here, we have these three values, okay, and
18316.47 -> so minimum is the minimum number of instances
the
18318.5 -> auto scaling group should have at any time,
so there's always at least one server running.
18322.64 -> Okay, it's going to start it up. And we can
never have beyond one server. So there's
18327.6 -> a chance where, if you have auto scaling
policies, they would try to trigger and go beyond
18331.32 -> the max. So this max is like a safety,
so that we don't end up with too many servers.
18336.07 -> And then we have desired capacity. And that's
the desired number of instances we want to
18341.71 -> be running. And this value actually changes
based on auto scaling; it will adjust
18346.49 -> it. So generally, you want to be at this number,
etc. A lot of times I'll have this, yeah,
18351.41 -> exactly: you know, I might have like two
for very simple applications, and then these
18356.51 -> would be one and one. Okay. But anyway, this
instance, is automatically starting, okay.
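The min/max/desired trio just described maps directly onto the CLI; a sketch under the walkthrough's naming conventions (the group name, launch configuration name, and AZs are placeholders):

```shell
ASG_NAME="my-server-asg"

# Keep exactly one instance running at all times:
# min 1 (never fewer), max 1 (the safety ceiling), desired 1.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name "$ASG_NAME" \
    --launch-configuration-name "my-server-lc-000" \
    --min-size 1 --max-size 1 --desired-capacity 1 \
    --availability-zones "us-east-1a" "us-east-1b"
```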
18362.48 -> And it looks like it's in service. That doesn't
mean it's running. Because if I go to
18365.622 -> instances over here, okay, and we take a look,
here we have it, and it's initializing. All
18372.02 -> right. So what is the big benefit to ASGs?
It is the fact that, you know, if this instance
18380.85 -> got terminated for any reason, the ASG will
spin up another one, right. So we're just
18384.64 -> going to wait a little bit of time here for
this, to get it to two checks here. And then
18390.1 -> we will attempt to kill it and see if the
auto scaling group spins up a new one. So
18394.15 -> our instances running that was launched our
auto scaling group. And let's just double
18398.282 -> check to make sure that it's working. It's
always good to do sanity checks here in our
18402.5 -> Apache page is still operational. So now the
real question is, is that if we are to terminate
18409.32 -> this instance, will the auto scaling group
launch a new instance, because it should,
18413.261 -> it should detect that it's unhealthy and launch
a new one. So this is terminating here, and
18418.01 -> we're gonna go to our auto scaling group here.
And we are going to just see if it's going
18427.19 -> to monitor, so it's saying that it's terminating,
so it can actually tell that it's terminating,
18433.6 -> and it's unhealthy, okay. And so it's going
to determine that there are no instances it's
18437.53 -> going to start up another one here shortly.
Okay. So we are just going to give it a little
18443.55 -> bit of time here. And so now we have no instances
running, right. And so it should detect very
18450.69 -> shortly, okay. And there is that health check
grace period. So we are just waiting a little
18455.56 -> bit of time here. Okay, and great. So now
it's starting up a new EC two instance, because
18459.3 -> it determined that we're not running. So our
ASG is working as expected. Okay. So, yeah,
18465.97 -> there you go. So I think maybe the next thing
we want to do is, let's say we wanted to change
18470.49 -> our Apache page, that index to have some kind
of different text on there. And we can learn
18477.23 -> how to actually update that. So we'll have
to create a new oma ami and then swap out
18482.63 -> our launch configuration so that the auto
scaling group can update that page. So we'll
18486.91 -> just go back here. And we'll just wait for
this to complete so we can SSH in. And we
18491.32 -> will do that next. So we had terminated our
previous instance. And the auto scaling group
18496.22 -> spun up a new one, and is now running. So
let's just double check to make sure our Apache
18499.94 -> page is still there. And it is, and so now
let's go through the process of figuring out
18505.702 -> how to update this page, so that when we spin
up new instances, with our auto scaling group,
18510.97 -> they all have the latest changes, okay, so
what we're going to have to do is we're going
18515.65 -> to need to update our launch configuration,
but we're also gonna have to bake a new ami.
18519.702 -> And even before we do that, we need to SSH
or get into an instance and update the default
18525.71 -> page. So what we're going to do is go to services
and type SSM and open up SYSTEMS MANAGER,
18531.51 -> okay. And we'll just click off here, so this
is not in our way, and we will go down to
18537.91 -> sessions manager here. Okay, start a new session,
we're going to choose the unnamed one because
18544.97 -> that was the one launched by the auto scaling
group. And we will instantly get into our
18550.292 -> instance here and we are going to have to
switch over to the ec2-user because we do not
18554.19 -> want to do this as root. Okay, and so if my
memory is still good, Apache stores its
18563.702 -> pages in /var/www/html. Yes, it is;
great. And so we're just going to create a
18570.07 -> new page here. So I'm gonna do sudo touch
index.html, which is going to create a new file.
18575.64 -> Alright, and now we're going to want to edit
that. I'm going to use vi; you can use nano
18579.9 -> here, nano is a lot easier to use. In vi, every
single thing is a hotkey, so you might regret
18586.03 -> launching vi here, but this is what I'm
most familiar with. So I'm going to just open
18589.72 -> that here. And I already have a page prepared.
And obviously you'll have access to this too,
18595.57 -> for this tutorial series. And I'm just going
to copy this in here and we're going to Paste
18602.48 -> that there. And I'm just going to write and
quit to save that file. Okay, and then we're
18606.82 -> going to go back,
18607.82 -> we'll kill this. Before we kill it, let's
just double check to make sure it works. So
18612.02 -> we're going to go back to this instance, grab
that IP, paste it in there. And there we are,
18616.622 -> we have our Star Trek reference page from
the very famous episode "In the Pale Moonlight"
18622.442 -> here. So our page has now been replaced. Great.
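The page swap just performed boils down to something like the following; the document root is Apache's default on Amazon Linux, and the HTML body is a stand-in for whatever page you prepared (the video edits in vi, so `tee` here is just a non-interactive equivalent):

```shell
# Apache's default document root on Amazon Linux:
DOCROOT="/var/www/html"

# The video does `sudo touch index.html` and then edits in vi;
# non-interactively, the same result can be written with tee
# (sudo, because root owns the document root):
echo "<html><body><h1>In the Pale Moonlight</h1></body></html>" \
    | sudo tee "$DOCROOT/index.html"
```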
So now that we have this in place, the next
18628.14 -> thing we need to do is get it so that when
we launch a new auto scaling group, it will
18632.92 -> have that page, right. And so I said that
we need to create an ami. So let's we're going
18638.05 -> to do so we are going to get the current instance
here. And we are going to create a new image
18644.16 -> and we are going to follow our naming convention,
I can't remember what I called it. So I'm
18649 -> just going to go double check it here quickly
here. Because you only get the opportunity
18653.032 -> to name this one. So you want to get them
right. So we will go here and name it. What
18659.75 -> was it my server my image, I'm getting confused.
Now my server, okay. So we'll say my server
18666.84 -> 001, I really wish they'd show you the previous
names there. And we'd say, update default
18672.77 -> party, or create our own custom index HTML
page for a party. Okay, so there we go. And
18682.3 -> we are going to create that image. Great.
And we will go to View pending ami here. And
18688.07 -> we will just wait until that is complete.
And we will continue on to updating our launch
18692.07 -> configuration. So our image there is now available.
If we were just to click off here, we can
18697.85 -> see we have our ami, so this one has the Apache
server where it doesn't restart on boot. So
18703.461 -> you have to manually do it. This one starts
up Apache, but it has the default page. And
18708.23 -> then this one actually has our custom page.
Okay, so we need to get this new ami into
18712.99 -> our auto scaling group. So the way we're going
to do that is we're going to update the launch
18716.63 -> configuration. Okay, so launch configurations
are read only, you cannot edit it and change
18722.1 -> the AMI. So we'll have to create a copy of
that launch... or sorry, launch configuration;
18728.682 -> careful, there's launch templates, which are like
the new way of doing launch configurations.
18732.53 -> But people still use launch configs.
So we're just gonna go to actions here and
18738.08 -> create a copy of launch configuration. Okay,
and it's gonna go all the way to the end step,
18743.28 -> which is not very helpful. But we're going
to go all the way back to the beginning here.
18746.622 -> And we're going to remove our 000 here,
we'll click that off there, we'll choose the 001,
18751.39 -> it probably will warn us saying Yeah, do you
want to do this? Yes, of course. And we're
18756.872 -> going to leave all the settings the same,
we're going to go to configure details, because
18760.47 -> it does a stupid copy thing here. And I'm
just going to name it 001. And you've got to
18764.08 -> be careful here because you do not get the
opportunity to rename these. So it's just
18768.042 -> nice to have consistency there. And we will
just proceed to add storage to make sure everything's
18772.192 -> fine. Yes, it's opposite encrypted security
group, it tried to make a new one. So we're
18777.06 -> going to be careful there and make sure it
uses our existing one, we don't need to continuously
18781.91 -> create lots of security groups that we don't
need, we're going to review it, we're going
18785.38 -> to create the lock configuration we're going
to associate with our key pair as usual. Okay,
18790.452 -> and now this launch configuration is ready.
So now that we have a new launch configuration,
18794.63 -> in order to launch this, what we need to do
is go to our auto scaling group, okay. And
18801.9 -> what we will do is, we're going to edit it,
okay, and we are just going to drop it down
18807.36 -> and choose one. Alright. So now what we can
do is we can either, we'll just hit Save here,
18814.932 -> but let's say we want this new instance to
take place. What we can do here is just terminate
18820.23 -> this old one, and now the new the new auto
scaling group should spin up and use the new
18824.59 -> launch configuration. Okay, I'm just paranoid
18828.93 -> here. And you should always be, in AWS;
18828.93 -> I'm just going to double check to make sure
that I did set that correctly. Sometimes I
18832.56 -> hit save, and it just doesn't take effect.
And this is the new launch configuration being
set for the ASG. So we'll go back here. Okay.
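Since launch configurations are immutable, the whole swap amounts to pointing the ASG at the new copy; a CLI sketch, with names following the walkthrough's conventions (both are placeholders):

```shell
ASG_NAME="my-server-asg"
NEW_LC="my-server-lc-001"   # the copied launch configuration

# Point the auto scaling group at the new launch configuration:
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name "$ASG_NAME" \
    --launch-configuration-name "$NEW_LC"

# Terminating the old instance then lets the ASG replace it
# using the new launch configuration (and thus the new AMI).
```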
And we're just going to stop this instance.
18843.372 -> And the new one that's spun up should have
that page that we created there. So I'll just
18848.31 -> terminate here. And I'll talk to you here
in a moment we get back, ok. Our new instances
18859.3 -> running here, and let's take a peek to see
if it's showing our web page here. And it
18863.8 -> is, so we are in good shape here. We successfully
updated that launch configuration for the
18869.21 -> auto scaling group. And so now anytime a new
instance is launched, it will have all all
18873.27 -> our latest changes. So auto scaling groups
are great because, you know, they just ensure
18879.452 -> that there's always at least one server running.
But what happens if the entire availability
18883.66 -> zone goes out? All right, it's not. It's not
going to help if the auto scaling group is
18888.73 -> set in that AZ. So what we're going to have
to do is create high availability using the
18894.622 -> load balancers so that we can run instances
in more than one
18898.8 -> one AZ at any given time. Okay, so let's go
ahead and layer in our auto scaling group
18903.65 -> into our load balancer. So what I want you
to do is make a new tab here, and we are going
18907.372 -> to create a load balancer. Okay, it's not
terribly hard, we'll just get through here.
18912.9 -> And we'll just create a load balancer here.
And we have three options provided to us application
18918.22 -> load balancer network load balancer, and classic
load balancer, okay. And we're going to make
18922.52 -> an application load balancer. Alright, and
I'm just going to name this my-alb,
18927.033 -> okay, and it's going to be internet facing,
we're going to use ipv4 because that's the
18934.21 -> easiest to work with, we're gonna use the
HTTP Port 80 protocol, because we don't have
18939.89 -> a custom domain name. So we're gonna just
have to stick with that. We're gonna want
18943.41 -> to run this in multiple AZs. It's always good
to run it in at least three public AZs. So
18950.15 -> I'm going to select those there. Okay, we're
going to proceed to security here — Configure
18956.292 -> Security Settings for the load balancer doesn't seem
to have anything for us here. We'll go next here.
18960.99 -> And we are going to create a new security
group actually. And this is going to be the
18966.33 -> security group for the lb here. So we'll say
Alp. Let's what's the SD, that's always thing
18976.46 -> I like to put on the on the end there. And
we can leave the default description in there.
18980.802 -> And so we want it. So anything on port 80
is accessible from anywhere. So to me, that
18987.43 -> is a good rule. And we will go to the next
step here. And we will have to create our
18993.14 -> target group. Okay, so the target group is
what points to the actual EC2 instances,
18999.38 -> okay, so we will just make a new one, and
we'll just call it my target group. And we
19006.34 -> will call it the production one here, say
my-tg-prod, because you can have
19011.5 -> multiple target groups,
okay. And it's going to be for instances, we'll
19016.5 -> use the HTTP protocol, and the health check is actually
going to be the default page. That's pretty
19020.58 -> good to me. And we're going to go ahead and
register some targets. And so here we can
19025.96 -> individually register targets, but we actually
want to associate it via the auto scaling
19031.352 -> group. So we're not going to do it this way.
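As a CLI sketch, the target group step looks like this — the VPC ID and ARN are placeholders, and attaching the auto scaling group afterwards is what registers the instances for us:

```shell
# Create a target group for EC2 instances on HTTP port 80
# (the VPC ID is a placeholder for your default VPC).
aws elbv2 create-target-group \
  --name my-tg-prod \
  --protocol HTTP \
  --port 80 \
  --target-type instance \
  --vpc-id vpc-0123456789abcdef0

# Rather than registering instances one by one, attach the ASG so it
# manages registration itself (the ARN is a placeholder).
aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name my-asg \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg-prod/1234567890abcdef
```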
We're just gonna go next and create this load
19036.44 -> balancer here. And it takes a very little
time to do so. Okay. And I mean, it says it's
19041.52 -> provisioning. So I guess we'll have to wait
a little bit here. But what we want to do
19045.35 -> is we want to get those auto scaling groups
associated with our target group. Okay. And
19049.372 -> so the way we'll go about doing that is we're
going to go to our auto scaling group. And
19055.21 -> we are going to go ahead and go to actions,
edit this here. And we are going to associate
19061.41 -> to tg prod. So that's how it's going to know
how to the load balancing browser, it's good
19067.782 -> to know how to associate with the the auto
scaling group. And we will also change your
19071.702 -> health check to EOB. Because that is a lot
better. Here. We're going to save that there.
19076.67 -> And we are going to go back to load balancers
and see how that is progressing. Remember
19080.452 -> how long it takes for an ALB to spin up? Generally,
it's very quick. It still says it's provisioning.
19085.782 -> But while that is going, we have an opportunity
to talk about some of the settings here. So
19089.61 -> the load balancer has listeners, right. And
so it created a listener for us through that
19094.792 -> wizard there. And so we have one here that
listens on port 80. And they always have rules.
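The path-based rule described here can also be sketched from the CLI — both ARNs below are placeholders:

```shell
# Forward requests whose path matches /secret-page to a different
# target group. Listener and target group ARNs are placeholders.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc123/def456 \
  --priority 10 \
  --conditions Field=path-pattern,Values='/secret-page' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-secret-tg/123456
```

Lower priority numbers are evaluated first; anything that matches no rule falls through to the listener's default forward action.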
19100.01 -> And that rule is it's going to always forward
to the target group we had selected. So if
19106.33 -> we wanted to edit it or do some more things
with those rules, we can hit rules here. And
19110.942 -> here we can see port 80. So by default,
everything on port
19116.28 -> 80 is going to be forwarded to that
target group. So if we wanted to add
19120.67 -> another rule here, we could add rules such
as all sorts of kinds. So we could say if
19127.282 -> the path here was /secret-page, okay, we could
then make an action and forward it to a target
19136.23 -> group that has some very special servers
in it. Okay, so there are some very advanced
19140.39 -> rules that we can set in here, we're going
to leave them alone, I just wanted to give
19144.11 -> you a little tour of that. And so
yeah, we're just gonna have to wait for this
19149.61 -> to finish here. And once it's done provisioning,
we're going to see if we can get traffic through
19155.26 -> our load balancer. So our lb is ready. Now
just make sure you press that refresh up there,
19161.69 -> because a lot of the times these things are
ready and you're just sitting there because
19165.4 -> the UI does not refresh. So always hit the
refresh there once in a while. And let's just
19171.102 -> see if our load balancer is working and routing
traffic to our single instance there. So down
19177.09 -> below, we do have our DNS name. So this
is a way that we would access our load balancer
19184.491 -> there and there you go. So now everything's
being routed through the load balancer. Now,
19189.4 -> if we were to go back
19190.4 -> to this EC2 instance here, okay, we might
want to actually restrict traffic so that
19196.41 -> you can't ever directly go to the instance
only through the load balancer. Alright, so
19199.99 -> I'm just going to copy this IP here. Okay,
and so I'm able to access this through the
19204.98 -> IP address, which is not too bad. But let's
say I didn't want to be able to access it
19209.43 -> through here, okay, so it always
has to be directly through the load balancer.
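The trick here — allowing port 80 only from the load balancer — works by referencing the ALB's security group as the source instead of a CIDR range. A sketch, with placeholder group IDs:

```shell
# Allow HTTP in from the load balancer's security group only,
# not from 0.0.0.0/0. Both group IDs are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0instances0000000 \
  --protocol tcp \
  --port 80 \
  --source-group sg-0alb0000000000000
```

A security group source matches any traffic coming from a resource that carries that group, so new instances behind the ALB are covered automatically.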
19215.01 -> And the way we can do that is we can just
go adjust this auto scaling group — or sorry,
19219.4 -> the security group — so that it denies traffic
from port 80. Now, I kind of like having
19226.09 -> the security group around for the EC2
instance. And so what I want to do is I want
19233.45 -> to actually create a new security group just
for the auto scaling group. Okay, so we'll
19236.522 -> go to security groups here. And we will create
a new security group and we're going to call
it my — well, I don't think I was very consistent
here. So yeah, kinda. So we'll say
19248.65 -> my ASG security group. And so for this, we
are going to allow SSH. Honestly,
19260.63 -> we don't really need to SSH anymore because
19265.07 -> we can use Systems Manager. But in the case
19265.07 -> we wanted to do that, we can set that rule
there. And so we will allow inbound traffic
that way. And we will also allow inbound traffic
19271.99 -> for port 80, but only from the load balancer.
19278.8 -> So in here, we can actually supply a security
group of another one here. So for the load
19284.48 -> balancer — I don't remember what it's called. Oh,
I can move this, nice. And so the other one
19288.8 -> is called alb-sg. So I'm just going to start
typing alb here. And now we are allowing
19294.24 -> traffic on port 80, just from that
load balancer. Okay, so we're going to hit
19300.85 -> Create there — oh, I gotta give it a description.
So, my ASG security group, okay. I know, you
19308.46 -> always have to provide those descriptions,
it's kind of annoying. Okay, and so now what
19313.55 -> we're gonna do is we're gonna go back to our
auto scaling group. We might actually
19321.07 -> have to make a new launch configuration,
because the security group
19325.76 -> is associated with the launch configuration.
So I think that's what we're gonna have to
19329.11 -> do here. So we're gonna have to create a new
ami, or new launch configuration, you can
19334.11 -> see this is a very common pattern here, copy
the launch configuration here. And we're going
19339.77 -> to want to go back, we're gonna want to use
the
19344.01 -> same AMI, we're not changing anything here,
we just want to make sure that it's using
19347.76 -> the new security group here. So we will go
with the ASG one here. Okay. And I think that's
19356.75 -> all I want to do. I'm just gonna double check,
make sure all our settings are the same. Oh
19359.442 -> yeah, it does this stupid copy naming thing. So we'll
just do 002 there. And yeah, everything is
19364.51 -> in good shape, we will just double check here.
Yep. And we will all create that new launch
19370.4 -> configuration there, close it. Okay, we'll
go to our auto scaling group. And we're going
19376.7 -> to go to actions, edit, as always, and go
to version two here. Okay, and we are going
19382.18 -> to save it. Alright. And the next thing we're
going to do is we are going to terminate this
19389.182 -> instance here, because we want the new security
group to take effect, okay. So we're going
19394.89 -> to terminate that instance, if we stopped,
it also would do it. But we want to get rid
19397.971 -> of the server here. So we'll terminate it,
we're going to go back to our auto scaling
group, okay, because we want to, by default,
19407.6 -> run in at least three AZs to get that
19407.6 -> full availability there. So what I'm going
to do is I'm going to change our desire to
19413.5 -> three, our min to three and our max to three.
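Setting the group's size from the CLI is one call — a sketch with placeholder names and subnet IDs, pinning one subnet per AZ:

```shell
# Run exactly three instances, one per availability zone.
# Group name and subnet IDs are placeholders.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --min-size 3 \
  --max-size 3 \
  --desired-capacity 3 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333"
```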
Okay? And I'm going to add an additional
19422.042 -> subnet here — we need C here, okay — so that
I can actually do that. And, yeah, now I'm
19428.782 -> expecting this ASG to launch in
all three, okay. If I just go back here and
19436.99 -> edit here — yeah, so now all three AZs appear
there. Because if we didn't have that there,
19442.122 -> I don't think the load balancer would have
been able to do that; it would have launched
19444.83 -> two in one AZ and one in another. So now we
have those there. And what we're gonna do
19450.262 -> is we're just going to wait for these three
servers to spin up here. Okay. There's two,
19457.6 -> where's my third one? Give me a third one.
But yeah, we set it to three. So the third
19464.442 -> one will appear here shortly. And we
will just resume here once they all appear.
19468.85 -> So our three EC2 instances have launched,
I was a bit worried about the last one, but
it eventually made its way here, and look at
the AZs: one is in A, one is in B, and one is
19478.372 -> in C. So we have high availability. So if
two availability zones go out, we're
19485.33 -> always going to have a third server running.
So we're going to be in very good shape. Now,
19488.92 -> the other thing we were doing is we wanted
to make sure that people couldn't directly
19492.39 -> access the servers and had to be through the
load balancer. So let's go discover that right
now. So if I pull up this IP — it should no longer
be accessible via the IP or public DNS
19502.48 -> here — and I try this here, it's never loading.
That's great. That's what we want. Okay, so
19507.81 -> the question is, is now if we go to our load
balancer, do we still have access to our
instances here, through that DNS record
here, okay. So we're gonna copy that. And
19518.25 -> we do. So, you know, the good reason for that
is that we always want to restrict our traffic
19524.3 -> through like this narrow pipeline. Because
if everything always passes through the load
19528.952 -> balancer, then we can get richer analytics
and put things in front of there. One thing
19533.53 -> we can do with a load balancer is attach a
WAF, a web application firewall, which is
19538.151 -> a really good thing to do. And so if you have
people accessing things directly, not through
19545.88 -> the load balancer, then they wouldn't pass
through the WAF, okay, so it's just creating those
19549.34 -> nice choke points there. And so now that we
have our lb, I guess the next step would be
19555.85 -> really to serve this website up via a custom
domain name. So yeah, let's do a bit with
19562.84 -> route 53 and get a custom domain.
19569.92 -> So now it's time to learn how to use route
53 to get a custom domain name, because, you
19575.47 -> know, we do have this URL for our website,
but it's kind of ugly. And we want to go custom
here. And we want to learn how to integrate
Route 53 with our load balancer. So let's
19585.31 -> go up to the top here and type Route 53. Okay,
and we're going to register our domain. Now,
19590.68 -> this does cost money. So I guess you could
skip it or just watch here. But you know,
19595.452 -> to get the full effect, I really do think
you should go out and purchase a very inexpensive
19600.33 -> domain. And so
19601.4 -> we're gonna go to the top here, and I'm going
to register a new domain, I'm going to get
19604.55 -> a .com if I can, unless there's something cheaper.
19607.21 -> I mean, there are cheaper ones. I guess I
don't have to be so frugal, okay, and I'm
19615.36 -> going to try to get — I know what domain I want.
And as always, it's going to be Star Trek:
19619.99 -> I'm gonna see if I can get the Ferengi Alliance.
So we'll type in ferengialliance here and
19623.51 -> check if it's available. And it is, so that's
going to be our domain name. So I'm going
19628.25 -> to add it to my cart here. We'll have our
subtotal. We'll hit Continue. And now we have
19632.86 -> a bunch of information to fill in. And I just
want to point out down below that you can
19638.98 -> have privacy protection turned on. So normally,
with other services like GoDaddy, you'd have
19643.46 -> to pay an additional fee in order to have
yourself protected. So it doesn't display
19649.52 -> your information on who is okay. So if you're
wondering, I'm talking about if you go to
19653.43 -> who is here.
19656.21 -> DomainTools, I believe — is this the one
here? I'm not sure why — oh, here it is. There
19662.27 -> we go. I clicked the wrong one.
19665.14 -> Okay, and so if we were to go to WHOIS here,
we can generally look up anything we want.
19670.182 -> So if we typed in like google.com, okay. I
say I'm not a robot. Okay. Generally, there
19680.42 -> is like additional information here. And it
can provide like someone's phone number and
19685.352 -> the company here. And sometimes you want to
keep this information, private. And so that
19691.76 -> is what this option here is going to do. So
if you've ever had a random call from somebody,
19695.782 -> you wonder how they got your phone number,
maybe you registered a domain name, and it
19699.2 -> didn't have privacy turned on. But anyway,
I'm gonna go ahead and fill this out. And
19704.512 -> then I'm going to show you the steps afterwards.
Okay. All right. So we're on to the next page
19709.05 -> here. And I did substitute my private information
here. And if you do call this number, you
19713.85 -> are looking forward to some tasty, tasty pizza
hot in Toronto. And so on this page here,
19722.01 -> it's gonna ask us if we want to automatically
renew our domain, I'm gonna say yes, because
19725.22 -> I think I want to keep this domain. And I'm
just going to agree to the terms and conditions,
19729.76 -> it's going to cost $12 USD. Unfortunately,
it's not Canadian dollars. So it's a little
19733.92 -> bit more expensive. But for me, it is worth
it, we're going to go ahead and complete our
19737.88 -> purchase. Okay, and so it says we've registered
the domain. And so it has been successfully
19745.042 -> registered. So there we go. Cool. So I just
want to show you that the domain is in pending.
19750.93 -> So we are just waiting to get some emails
from AWS. And once we get those emails, we
19756.26 -> will confirm them. And we should have our
domain very shortly. They say it can take
19760.69 -> up to three days, I've never had to wait that
long to get a domain it's pretty quick, especially
19764.84 -> with dot coms. So as those emails come in,
I will then switch over to our email and show
19768.932 -> you what those look like. So our first email
has arrived, and it's from Amazon Registrar. It's
19774.09 -> just a confirmation, and there's nothing
for us to confirm. It's just saying, hey,
19778.5 -> you are now using Amazon Registrar. That's
the thing that actually registers the domains
19781.93 -> behind Route 53. And this is not really
the email we care about, but I just want to
19786.11 -> show you so if you're wondering, you know
what it's about. Okay, so our second email
19790.66 -> came in here. It didn't take too long —
I think I maybe waited about 15 minutes —
19795.91 -> and it says we've successfully registered
ferengialliance.com, and so now we are ready
19801.72 -> to start using this domain. So we're gonna
go back to Route 53 here. I'm just going to
19805.4 -> get out of my email. So our domain name is
registered. And we can see it appearing under
19810.75 -> the register domains, it's no longer in the
pending state. So let's go all the way up
19814.63 -> to hosted zones, because AWS
will have created one by default for us. And
19819.442 -> we can go ahead and start hooking up this
domain to our load balancer. So I'm going
19823.38 -> to click into the Ferengi Alliance zone, okay.
And right off the bat, we have some NS records,
19829.952 -> and we have an SOA record. So they've really set
us up here. And we're going to create our
19834.581 -> first record set. And we're going to want
to hook up our www, okay, and we are going
19843.292 -> to want to use an alias, and we're going to choose
the target, and we're gonna choose our ELB.
19847.792 -> Okay, and so we're gonna leave it on simple
routing. And we're gonna create that. And
19852.06 -> now our www should start pointing to our
load balancer. I'm pretty sure this takes effect
19858.11 -> like immediately, but this is a new domain name,
so I'm not sure if it has to propagate through
19862.15 -> all the DNS records around the world. So if this
doesn't work, I'm not going to get too upset
19866.16 -> about this here, but I'm gonna cross my fingers —
and it can't be reached, okay, so it doesn't
19872.1 -> work just as of yet. So what I'm going to
do is, I'm just going to give it a little
19876.792 -> bit of time here, just to see if it does take
effect here, because everything is hooked
19881.9 -> up correctly. And we will be back here shortly.
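For reference, the alias record we just clicked together can be sketched with the CLI like so. The zone ID, domain, and ALB DNS name are placeholders; the AliasTarget hosted zone ID is the region-specific one for ALBs (Z35SXDOTRQ7X7K in us-east-1):

```shell
# Create a www alias A record pointing at the load balancer.
# Zone ID, domain, and ALB DNS name below are placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0EXAMPLE12345 \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```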
So I took a quick break there had some coconut
19887.36 -> water came back refreshed, and now our website's
working, I didn't have to do anything, it
19891.481 -> just sometimes takes a little bit of time for
those changes to propagate over the internet.
19896.58 -> So now our next thing we need to resolve
is the fact that this isn't secure. Okay.
19902.07 -> And so AWS has a certificate service
called Amazon Certificate
19908.603 -> Manager, and it allows you to get free
SSL certificates. But just be sure, when you
19913.97 -> do go to the service, we're going to click
on provision certificates and not private,
19918.51 -> these ones are very expensive — they're $500
initially — and you really want to just provision
19924.872 -> a certificate. I wish they'd say free or public
over here so it's less confusing. You're
19928.93 -> only ever going to see the splash screen once,
for the first time you've ever created a certificate
19933.53 -> within a zone. So hopefully, that's not too
confusing there. But it does ask you again,
19937.872 -> if you want public or private, you definitely
definitely want the public certificate, not
19941.9 -> the private, okay, so we're going to request
a certificate, and we are going to put our
19945.872 -> domain name in. So just to cover all our bases
here, I'm going to put in the naked domain,
19951.2 -> I'm also going to put in a wildcard. So all
our sub domains are included here, this is
19956.35 -> going to catch all cases. And we'll just hit
next here,
19960.57 -> we're going to use DNS validation. Email validation
is the older mechanism here for validation.
19966.41 -> Everyone does DNS validation, okay, and we're
gonna hit review, then we're gonna hit confirm
19972.26 -> request. And this is going to start spinning
19978.622 -> here. And now what it's gonna ask us
to do is to validate that we have ownership
19984.13 -> of that domain. And so we can verify this
here. And luckily, we can just drop this down
19989.122 -> and create a record in Route 53.
So this is what they do: they put a CNAME
19995.22 -> in your DNS records here. And that's
how we know that we own that domain name.
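The same request-and-validate flow can be sketched with the CLI — the domain and ARN below are placeholders:

```shell
# Request a free public certificate for the naked domain plus a
# wildcard for all subdomains, using DNS validation.
aws acm request-certificate \
  --domain-name example.com \
  --subject-alternative-names '*.example.com' \
  --validation-method DNS

# The CNAME record you need to create in Route 53 to prove ownership
# shows up under DomainValidationOptions (the ARN is a placeholder):
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:us-east-1:111122223333:certificate/abcd1234 \
  --query 'Certificate.DomainValidationOptions[].ResourceRecord'
```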
And we're gonna do it for both here. Okay,
20000.362 -> and so now we're just gonna have to wait for
these to validate here and it shouldn't take
20004.66 -> too long. So we'll hit continue
here. Okay, and it's in pending validation.
20150.41 -> And so we're just going to wait a little bit
here, just like we did for the website update.
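Once the certificate flips to issued, attaching it as an HTTPS listener can also be done from the CLI — a sketch, with all ARNs as placeholders:

```shell
# Add an HTTPS listener on 443 that terminates TLS with the ACM
# certificate and forwards to the production target group.
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123 \
  --protocol HTTPS \
  --port 443 \
  --certificates CertificateArn=arn:aws:acm:us-east-1:111122223333:certificate/abcd1234 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg-prod/123456
```

Remember the load balancer's security group also needs an inbound rule for 443, which this follow along covers as well.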
20156.73 -> And I will do a refresh, because it's
a static page, so you will have to hit
20160.6 -> this a few times. So that certificate has
been issued, the console didn't directly take
20166.43 -> me to the screen. So I did have to go to the
top here, type ACM, like this, and to get
20172.06 -> here and hit refresh. But again, this only
takes a few minutes when you are using route
20176.82 -> 53. And so um, you know, just be aware, that's
how long it should take. And so now that it's
20181.98 -> been issued, we can go ahead and attach it
to our load balancers. So we're gonna go back
20186.42 -> to the EC2 console. On the left hand side,
we're going to go to load balancers. And we're
20192.95 -> going to make sure our load balancers selected
there, go to listeners, and we're going to
20196.4 -> add another for 443, which is SSL. And we're
going to forward this to our production target
20204.1 -> group here. Okay. And then this is where we
get to attach our SSL certificate. So we're
20208.35 -> going to drop that down to the Ferengi Alliance one,
and we're going to hit save. Okay, and so
20212.75 -> now, we're able to listen on
port 443 here. We do have this little
20218.18 -> caution symbol is saying that we can't actually
receive inbound traffic for 443. So we're
20223.86 -> gonna have to update our security group. So
going down to a security group here, we will
20227.782 -> click it,
and it is the ALB security group that we're
looking for. So this one here, we're going
20232.692 -> to go to inbound rules
20233.74 -> we're going to edit, we're going to add a
new rule, and we're going to set it for
20237.43 -> HTTPS. So there we go. So now we can accept traffic
on 443. And it is attached there. So now we
20246.18 -> should be able to have
a protected or secure URL there when we access
20254.18 -> our domain name. So I'm just gonna grab the
name. So don't make any spelling mistakes
20257.372 -> here. And we'll paste it in here. And there
we go. It's secure. So um, yeah, so there
we are. Okay, great. So we're all done this
20263.34 -> follow along here. And I just want to make sure
20267.91 -> that we just do some cleanup here to make
sure that we aren't being charged for things,
20271.83 -> we don't need any more. So the first thing
we're going to do is we're going to just terminate
20275.42 -> our load balancer there, which is not too
difficult to do. So we'll just go ahead there
20281.15 -> and go to actions and just go ahead and delete,
and delete that, it should be pretty quick.
20286.042 -> And wow, that was very fast. So now on to
our auto scaling groups. So that's the next
20291.24 -> thing, we need to go ahead and delete there.
And so we're just gonna drop down to actions
20296.82 -> there and go delete. Now, when you delete
the auto scaling group, it will automatically
20300.372 -> delete those EC2 instances there. But
it's good to keep a little eye on them there.
20304.35 -> So we're going to pop over to here for a minute.
And you can see they're not terminating just
20309.26 -> yet. So we're gonna wait on that auto scaling
group. And so once that scaling group has
20315.56 -> been deleted, and this might take a little
bit of time here, it will definitely get rid
20319.38 -> of those EC2 instances for us. That took
an incredibly long time to delete that auto
20323.91 -> scaling group. I don't know why. But we'll
go back to our instances here. And we will
20328.64 -> see if they are still running. So you can
see they've all been terminated. So when that
20332.122 -> auto scaling group is deleted, it's going
to take down the EC2 instances with it,
20336.1 -> we're probably also going to want to go to
route 53 and remove those dead endpoints
20343.6 -> there, because there is a way of compromising
those if you are very smart. So we'll just
20350.17 -> go ahead and delete that record, because it's
pointing to nothing right now. Right. And
20353.542 -> so there you go, that was our cleanup. So
we're all in good shape. And hopefully you
20358.432 -> found the section useful.
20361.83 -> Hey, this is Andrew Brown from ExamPro. And
we are looking at Elastic File System, EFS,
20369.42 -> which is a scalable, elastic, cloud native NFS
file system. So you can attach a single file
20374.59 -> system to multiple EC2 instances and you
don't have to worry about running out or managing
20379.77 -> disk space. Alright, so now we are looking
at EFS, and it is a file storage service for
20385.1 -> EC2 instances. Storage capacity is going
to grow up to petabytes' worth and shrink automatically
20391.17 -> based on your data stored — so that's why it
has elastic in its name. The drive is going
20396.45 -> to change to meet whatever demand you
are having stored. Now, the huge advantage
20404.43 -> here is that you can have multiple EC2
instances in the same VPC mount to a single
20409.85 -> EFS volume. So it's like they're all sharing
the same drive. And it's not just two
20414.75 -> instances, it's any of them in a single
VPC. That's amazing. And so in order for your
20420.47 -> EC2 instances to actually mount EFS,
they do have to install the NFS version 4.1
20426.24 -> client, which totally makes sense, because
EFS is using that protocol, the NFS version
20432.101 -> 4 protocol. And so EFS will create multiple
mount targets in all your VPC subnets. So this is
20438.47 -> how it's able to allow you to mount in different
subnets or different AZs here. And so we'll
20446.01 -> just see it create a bunch of mount points,
and that's what you will mount. And the way
20450.76 -> it bills, it's going to be
based on the space that you're using — it's
20454.86 -> going to be 30 cents per gigabyte, month
20460.77 -> over month, recurring. Okay. So there you
go. Hey, this is Andrew Brown from ExamPro,
and we are going to do a quick EFS follow
20470.49 -> along. So we're going to launch two EC2
instances, connect them both to EFS, and see
20474.611 -> if they can share files between them from
the one EFS, okay. So what we're going to need
20480.31 -> to do is make our way to the EFS console here,
okay. And we're going to get here and we're
20487.21 -> going to create a new file system, we are
going to launch this in our default VPC, you're
20491.72 -> going to see that it's going to create mount
targets for every single availability zone
20497.25 -> for us here, okay. And so it's also going
to use the default security group. Okay, and
20503.14 -> so we're gonna go next, we're gonna skip tags,
we do have the ability to do Lifecycle Management.
20508.36 -> So this is no different than s3, allowing
you to reduce costs by moving stuff that you're
20513.28 -> not using as frequently into infrequent access.
So you get a cheaper, cheaper storage cost
20518.83 -> there. So that's really nice. We're going
to be comfortable with using bursting here.
20524.05 -> We're going to stick with general purpose,
and we're going to turn on encryption, okay,
20527.17 -> it's good practice to use encryption here,
especially when you're doing the certifications.
20531.27 -> You want to know what you can encrypt.
Okay, so AWS has a default KMS key for us.
20537.282 -> So we can go to next here. And we're just
going to do a quick review. And we're going
20541.41 -> to create that file system. Okay. So it's
going to take a little bit of time here to
20546.08 -> create that file system. Well, while that's
going, I think we can go prep our EC two instances.
20550.692 -> Okay. So yeah, it's just going to take a bit
of time. So if we are looking at our Fs volume
20557.01 -> here, we can still see that our mount targets
are creating. So let's go ahead and create
20560.262 -> those EC two instances. So in the ECG console,
you can get there just by typing EC two, and
20566.02 -> we are going to launch a couple of instances.
So we will choose Amazon Lex two, because
20570.36 -> that's always a good choice, we'll stick with
T two micros because that is part of the free
20574.542 -> tier, we're going to go next. And we're going
to launch two of these fellows here. Okay,
and we got to make sure that we are launching
this in the same VPC as EFS, which is the
20583.08 -> default one, okay? We're not going to worry
about which subnet, because we're gonna
20587.51 -> be able to access it from anywhere. And I want
you to create a new role here. So we're gonna
20592.34 -> go here and create a new role. Okay, and we've
created this multiple times through our follow-
20597.252 -> alongs. But just in case you've missed it,
we're going to go ahead and create a role:
20600.5 -> go to EC2, go next. And we're going to choose
SSM for Systems Manager, okay, and we're going
20607.52 -> to hit next and then next, and then we're
going to type in a name, we'll say my-efs-ec2-role,
20615.26 -> okay, and we will create that role. Alright.
And so that's going to allow us to access
20619.7 -> these instances via session manager, which
is the preferred way over SSH. So we will
20624.32 -> just hit refresh here and then see if it is
there. And there it is. Okay, and we're just
20630.57 -> going to need to also just attach a user data
script here, because we need to configure
20636.17 -> these instances here to
20638.94 -> to be able to use EFS, okay, so there is
this little thing you do have to install.
20644.97 -> And so we're just gonna make that easy here,
it's gonna install it for both our instances.
20649.1 -> Alright, and so we're gonna go ahead and go
to storage. And then we're going to go to
20653.5 -> tags, and we're gonna go to security groups,
and we're gonna make a new security group
20658.08 -> for this here. Okay? So I'm just going to
call it my-ec2-efs-sg, alright. And
20666.73 -> I mean, we don't really plan on doing much
with this here. So I'm just going to go ahead
20671.68 -> and review and launch and then we will get
our key pair here. Not that our key pair matters
20676.96 -> because we are going to use sessions manager
to gain access to these instances. Okay, so
20682.18 -> now as these are spinning up, let's go take
a look back over here and see if these endpoints
20686.92 -> are done creating. Now it says they're still
creating, but you know, with AWS you constantly
20690.83 -> can never trust it. So we'll go up here and
do a refresh here and see if they are done
20695.35 -> creating, and they are all available. Okay? So
we do actually have some instructions
20700.46 -> on how to set these up here with us. So if
we just click here, it's going to tell us
20705.33 -> that we need to install this here onto our
EC two instances for it to work. And we have
20710.542 -> absolutely done that. And then when we scroll
down here, we need to then mount the EFS.
20717.702 -> Okay, and so since we are using encryption,
we're going to have to use this command here.
20722.96 -> Okay, and so that's what we're going to do,
we're going to have to use sessions manager
20727.01 -> to log in, and then mount it using the following
command. Okay, so we're going to go back to
20733.76 -> EC two instances, and we're just going to
wait till those status checks are complete.
20737.3 -> Okay, so as these are initializing here, I
bet we should probably go set our security
20742.28 -> groups. So I'm going to go over to security
groups here. And I'm going to look for our
default one, because I believe that's what
we launched our EFS in, I probably should
20750.51 -> have created our own security group here,
but that's totally fine. So it's going to
20753.78 -> be the one ending in 2198. Okay, and so that's what
we're looking for, and it is this one here,
20761.35 -> and it's the default here, okay, and we're
just going to add an inbound rule here. And
we're going to have a rule here from before,
so we'll just remove
20769.61 -> that. And we'll look for NFS. Okay, that's
gonna set it for 2049. And now we just need
to allow the security group for our EC2
instances. So I believe we called it, and so
20780.702 -> I just start typing "my". So we have my-ec2-efs-sg.
And so now we shouldn't have any
20785.98 -> issues with mounting, because we probably
do need that to gain access. Okay, and so
20791.1 -> we're gonna go and now we'll go back and see
how our instances are doing. So it looks like
20795.51 -> they're ready to go. So now that they're,
they're in good shape, we can actually go
20799.622 -> gain access to them. So what we're going to
do is we're going to go to SYSTEMS MANAGER,
20805.35 -> so type SSM and make a new tab here, okay.
And we're going to wait for this to load here
20812.14 -> and on the left hand side, we're going to
go to sessions manager. And we're going to
20816.76 -> start a session. And so there are both of
our instances, we probably should go name
20820.18 -> them, it'll just make our lives a lot easier.
So I'm going to say EC two Fs, a, okay. And
20826.93 -> then we have E, F, E, AC to DFS B. All right,
so we'll just go back here, do a refresh.
20833.65 -> And there are our two instances. So we need to
start a session for this one, okay. And
20838.851 -> I'm gonna have to start there. And then I'm
gonna have to make another session here and
20842.97 -> start it for a B. All right. Okay, and so
now we have these two here, and we're going
to have to switch to the correct user. So
we are the root user, we don't want to be that,
20853.57 -> we want to be the ec2-user. Okay, so we
will switch over to the ec2-user, assuming
20859.372 -> this one's A, it doesn't really matter at this
point, because you know, it doesn't, but we will
20864.86 -> just switch over here to that user here. Alright,
so now we are set up here, and we can go ahead
and mount it there. So it's just a matter
of copying this entire command. So we are
20877.9 -> going to need the sudo in there. So just make
sure you include it. And we're just going
20881.872 -> to paste that in there. And we're going to
mount, okay, and it just says it doesn't exist.
20886.52 -> So we'll just Alright, so that can fail because
it has nothing to mount to. So we actually
20892.39 -> have to create a directory for it. Okay, so
just here in the home directory, I'm just
20895.77 -> going to go make MK dir, and type in Fs. Okay,
and then we will just go up, and then we will
20903.1 -> mount it. And so now it should mount to that
new directory. So it is mounted. We're going
20907.46 -> to do the same story over here. Okay, so just
to save us some time, I'm just going to copy
20912.6 -> these commands over, okay. All right. Oops,
we forgot the M there on the front. Okay.
20919.672 -> And then we will just copy over the same command
here. And we will also mount efs, okay, so
20925.38 -> they should both be mounted now. And so I'm
just going to do an ls, and we're going to
20928.66 -> go into that directory there. Okay. And we
are just going to create any kind of files,
20933.35 -> I'm gonna say touch and then a file name,
20937.43 -> dot txt. Okay, so I've touched that file. Oh,
we cannot do that there. So I'll just type
20944.16 -> in sudo. And so I'll do an ls within that
directory. And so that file is there. So now
20947.5 -> if I go over to this one here, and do an ls
and go into Fs, and do an ls, there is that
20953.59 -> file. Okay. So that's how you can access files,
across instances using EFS. So that's all
20960.07 -> you need to know. So we're, we are done. Okay,
so we'll just terminate this instance here,
20964.5 -> okay. And we will also terminate this instance
here. Great, we will close that sessions manager,
20969.262 -> we will just kill our instances here. Okay.
Because we are all done. And we'll go over
20975.66 -> to EFS. So it only costs for when we're
consuming stuff, but since we're done with
20980.542 -> it, let's go ahead and tear this down. Okay,
and we need to just copy this fella in here.
20987.5 -> And we're all good to go. So there you are,
that is EFS. Alright, so we're onto the EFS
20996.63 -> cheat sheet here. So Elastic File System,
EFS, supports the Network File System version
21000.601 -> 4 protocol (NFSv4). You pay per gigabyte of storage
per month. Volumes can scale to petabyte size.
21006.84 -> Storage volumes will shrink and grow to meet
current data stored. So that's why it's elastic.
21013.65 -> It can support thousands of concurrent connections
over NFS. Your data is stored across multiple
21018.57 -> AZs within a region. So you have good
durability there. You can mount multiple EC2
21024.85 -> instances to a single EFS as long as they're
all in the same VPC. It creates mount points
21029.99 -> in all your VPC subnets. So you can mount
from anywhere within your VPC and it provides
21034.532 -> read-after-write consistency. So there you
go. That's
21042.542 -> the EFS cheat sheet.
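As a quick reference, the mount sequence from the follow-along can be condensed into a short shell sketch. This is a sketch, not the exact commands from the video: the file system ID fs-12345678 is a placeholder for your own, and it assumes the amazon-efs-utils mount helper that the EFS console instructions have you install.

```shell
# Install the EFS mount helper (Amazon Linux 2)
sudo yum install -y amazon-efs-utils

# Create a directory to mount into, then mount over TLS
# (-o tls is the option the console instructions give you
# when encryption is enabled on the file system)
mkdir ~/efs
sudo mount -t efs -o tls fs-12345678:/ ~/efs
```

A file touched under ~/efs on one instance should then be visible from any other instance that has mounted the same file system.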
21043.542 -> Hey, this is Andrew Brown from exam Pro. And
we are looking at elastic block store also
21046.38 -> known as EBS, which is a virtual hard drive
in the cloud create new volumes attached easy
21051.36 -> to instances, backup via snapshots and easy
encryption. So before we jump into EBS, I
21057.44 -> wanted to lay some foundational knowledge
that's going to help us understand why certain
21061.85 -> storage mediums are better than others based
on their use case. So let's talk about IOPS.
21067.442 -> So IOPS stands for input/output operations per second.
It is the speed at which non-contiguous reads
21072.542 -> and writes can be performed on a storage medium.
So when someone says high IO, they're
21077.352 -> saying that this medium has the ability to
do lots of small fast reads and writes. Then
21082.92 -> we have the concept of throughput. So this
is the data transfer rate to and from the
21087.75 -> storage medium in megabytes per second. Then
you have bandwidth, which sounds very similar,
21092.3 -> but it's different. And so bandwidth is the
measurement of total possible speed of data
21096.85 -> movement along the network. So to really distinguish
between the throughput and the bandwidth,
21101.65 -> we're going to use the pipe in water example.
So think of bandwidth as the pipe and throughput
21106.99 -> as the water. Okay, so now let's jump into
EBS. So we are now on to talking about EBS
21115.71 -> and it is a highly available durable solution
for attaching persistent block storage volumes
to EC2 instances. Volumes are automatically
replicated within their AZ to protect them
21125.6 -> from component failure. And we have five types
of EBS storage to choose from we have general
21130.72 -> purpose provision, I Ops, throughput, optimized
HDD, cold, HDD and EBS. Magnetic, okay, and
21139.38 -> so we do have some short definitions here,
but we're going to cover them again here.
21144.782 -> Alright, so we're gonna look at the different
volume types for EBS, and just try to understand
21152.44 -> their use cases. And so again, there are five
types, and we're gonna go through each one
21156.48 -> starting with general purpose. And it is as
the name implies, good for general usage without
21161.38 -> specific requirements. So you're gonna be
using this for most workloads like your web
21165.612 -> apps, and has a good balance between price
and performance for the actual attributes
21170.07 -> underneath it, it can have a volume size between
one gigabyte and 16 terabytes, and a max
21175.783 -> IOPS of 16,000. Moving on to provisioned
IOPS SSD, it's really good when you need
21182.02 -> fast input and output, or the more verbose
description is when you need mission critical,
21189.15 -> low latency or high throughput. So it's not
just IOPS, it's also high throughput as
21194.61 -> well, it's going to be great for large databases.
So you know, think RDS, or Cassandra. And
the way you know when you should start using
21202.36 -> provisioned IOPS is if you exceed 16,000 IOPS,
since that's where the limit is for general
21208.89 -> purpose. So when you need to go beyond that,
21212.36 -> you're going to want to use this one, or if
the throughput is greater than 250 megabytes
21217.89 -> there as well. Now, the volume size here can
be between four gigabytes and 16 terabytes, and
we can have a max IOPS of 64,000. Okay, moving
on to our hard disk drives, we have throughput
21230.34 -> optimized HDD, and it is low cost. It's
designed for frequently accessed
21238.28 -> data and throughput-intensive
workloads, it's going to be great for data
21242.58 -> warehousing, big data and log processing stuff
where we have a lot of data. The volume size,
21248.532 -> you can see by default is a lot larger, we
have 500 gigabytes to 15 terabytes, and the
21253.07 -> max IOPS is 500. Moving over to cold HDD,
this is the lowest costing hard drive here.
It's for less frequently used workloads,
where you're going to have things in storage, okay,
21267.8 -> so if you want to put backups or file storage,
this is going to be a good drive for that.
21273.83 -> And it has the same volume size as the throughput
one, it's just that the throughput is lower
21280.21 -> here, and we have a max IOPS of 250. Moving
on to EBS magnetic, we're looking at very
21289.202 -> inexpensive storage for long term archival
storage, where you're going to have between
500 gigabytes and 15 terabytes and a max
21303.77 -> IOPS of 40 to 200, okay, and so generally
21303.77 -> it's using previous generation hard drives.
So that's why we get that low cost there.
21307.98 -> But yeah, there is the full spectrum of volume
types for you here, on EBS.
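To make the IOPS and throughput numbers above concrete, here is a rough back-of-the-envelope check you can do in the shell. The 16 KB I/O size is an assumed illustrative value, not something from the video; the point is that throughput is roughly IOPS times I/O size:

```shell
# throughput (MB/s) ~= IOPS x I/O size
# gp2 tops out at 16,000 IOPS; with 16 KB operations:
iops=16000
io_kb=16
echo "$(( iops * io_kb / 1024 )) MB/s"   # prints: 250 MB/s
```

which lines up with the 250 MB/s figure mentioned as the point where provisioned IOPS starts to make sense.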
21315.16 -> So we're gonna look at some different storage
volumes starting with hard disk drives. So
21323.45 -> hard disk drives, is a magnetic storage that
uses rotating platters on an actuator arm
21328.78 -> and a magnetic head. So look, we got this
round thing, an arm and a little needle, or
21333.68 -> head on the end of it remind you something,
it's like a record player, right? So h acds
21338.862 -> is very good at writing a continuous amount
of data. So the idea is that once the arm
21343.55 -> goes down, you have lots of data, it's really
good at just writing a bunch of data. Where
hard disk drives do not excel is when
you have many small reads and writes. Okay,
21353.96 -> so that's high IO, okay. And the reason why
is just think about the arm, the arm would
21358.63 -> have to like, lift up, move to where it needs
to right, go down and then right, and then
21363.78 -> lift up again. And so there's a lot of physical
movement there. And that's going to limit
21368.8 -> its ability to have high IO. So hard disk
drives are really good for throughput, because
21375.543 -> again, it's continuous amounts of data. And
so that means fast amounts of data being written
21380.192 -> continuously. So it's gonna be great for throughput.
But you know, the caveat here is does it does
21385.21 -> have physical moving parts, right? Now we're
taking a look at our next storage volume,
21394.57 -> solid state drives SSDs. And they're different
because they don't have any physical moving
21399.76 -> parts, they use integrated circuits to transport
the data and store it on to things like flash
21406.292 -> memory here. And so SSDs are typically more
resistant to physical shock. They're very
21412.99 -> quiet, because there's no moving parts. And
they have quicker access times and lower latency.
21417.422 -> So they're really, really good at frequent
reads and writes. So they're going to have
21420.71 -> high IO, they can also have really good
throughput. But generally when we're thinking
21425.112 -> about SSD, we just think high IO. Okay, so there
you go. So looking at our last storage volume,
21434.24 -> here, we have magnetic tape. And if you've
ever seen an old computer, you've seen magnetic
21438.112 -> tape because they look like film, right? You
have these big reels of magnetic tape, and
21444.38 -> we still use them today, because they are
highly durable. They last for decades. And
21449.71 -> they're extremely cheap to produce. So if
it isn't broke, why throw it out. Now, we
21455.41 -> don't really use magnetic tape in this way
anymore with big spools, or sorry, reels,
21461.292 -> what we do is we have a tape drive down below
and you can insert a cartridge into it, which
21466.21 -> contains the magnetic tape. Okay, so there
you go. So I want to talk about how we can
21476.83 -> move our volumes around. So if you wanted
to move your volume from one AZ to another,
21485.91 -> what you'd have to do is you have to create
a snapshot. And then once you create that
21490.08 -> snapshot, you'd create an ami from that snapshot.
And then you could launch an EC two instance
21495.63 -> into another availability zone. Now for regions,
there's a little bit more work involved, it's
21501.14 -> going to be the same process to begin with,
we're going to create a snapshot. And from
21505.672 -> there, we're going to create an ami from that
snapshot. But in order to get into another
21510.112 -> region, we're going to have to copy that ami.
So we're gonna use that copy ami command into
region B, and then we're going to launch our EC2
21519.862 -> instance. And so that's how we're going
21519.862 -> to get our volumes from one region to another.
Okay.
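The snapshot-then-copy flow just described can be sketched with the AWS CLI. All of the IDs and regions here are placeholders; treat this as a sketch of the flow, not a script to run as-is:

```shell
# 1. Snapshot the volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0

# 2. Create an AMI from that snapshot
aws ec2 register-image --name "my-moved-ami" \
  --root-device-name /dev/xvda \
  --block-device-mappings \
  '[{"DeviceName":"/dev/xvda","Ebs":{"SnapshotId":"snap-0123456789abcdef0"}}]'

# 3. Copy the AMI into the target region, then launch from it there
aws ec2 copy-image --source-region us-east-1 --region us-west-2 \
  --source-image-id ami-0123456789abcdef0 --name "my-moved-ami"
```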
21528.5 -> So now we're taking a look at how to encrypt
our root volume. And so when you create an
21533.202 -> EC2 instance, a little way through
the launch wizard, there is a storage step.
21538.51 -> And so here we can see our storage volume,
that's going to be our root. And we actually
21543.86 -> have a box over here where we just drop it
down and encrypt it based on the method that
21548.24 -> we want. This wasn't possible prior, I don't
know how many years ago was maybe last year
21554.65 -> or something. But um, you weren't able to
encrypt a volume on creation, but you definitely
21559.89 -> can now. Now what happens if we have a volume
that we created that was unencrypted, and
21566.56 -> now we want to apply encryption to it? Well,
we're gonna have to go through a little bit
21571.21 -> more effort here. And here are the steps below.
So the first thing we're going to do is take
21574.6 -> a snapshot of that unencrypted volume. And
then once we have that snapshot, we're going
21579.51 -> to copy or use the copy command to create
another snapshot. And with that, we actually
21586.07 -> will have the option to encrypt it. And so
we will encrypt that copy giving us an encrypted
21592.22 -> snapshot. And then from there, we will create
an AMI from that encrypted
21597.25 -> snapshot, and then launch a new EC2 instance
from that AMI. And so that's how we're gonna
21601.84 -> get an encrypted root volume. So there you
go. So when you launch an EC2 instance,
21611.89 -> it can be backed by either an EBS volume or
an instance store volume. And there's going
21615.92 -> to be some cases when you want to use one
or the other, but 99.9% of the time,
21620.59 -> you're going to be wanting to use an EBS volume.
And so the EBS volume is a durable block level
21626.792 -> storage device that you can attach to a single
EC2 instance. Now the instance store volume
21631.17 -> is a temporary storage type located on disks
that are physically attached to a host machine.
21639.01 -> And so the key word here is one is durable
and one is temporary. All right. And you know,
21645.16 -> anytime we talk about instance stores, ephemeral
is another word that comes up. And so if you
21651.042 -> ever see that word, it means lasting for a
very short time. Okay, so that makes total
21654.48 -> sense why they're called that sometimes. And
so for an EBS volume, if you want
21660.551 -> to use it with an EC2 instance,
the volume is going to be created from
21663.39 -> an EBS snapshot. With an instance store
volume, it's going to be created from a template
21668.88 -> stored in s3. Now, the way you use these volumes
is going to also affect the behavior you see
21674.57 -> too, because you can stop and start an EC
two instance. And the data is going to
21679.69 -> persist when it starts back up again. When
you look at an instance store volume, you
21684.27 -> cannot stop the instance, if you terminate
it, you're going to lose all that data because
21687.99 -> it's a temporary storage type, you might be
able to reboot it, but you won't be able to
21693.74 -> stop the volume, okay, so you might just have
reboot and terminate as your only two options
21700.16 -> there. And also, you know, when an EC two
instance launches up, it goes through status
21704.66 -> checks, and so the one it goes through is
like a host check. If that were to fail, then
21710.02 -> also the data would be lost there. But generally,
when you're spinning up an EC two instance,
21715.16 -> you don't have any data you're not too worried
about any way, but that's just a consideration
21718.622 -> to list out there. And so we're gonna talk
about the use cases. So EBS volumes are
21723.63 -> ideal when you want data to persist. In most
cases, you'll want to use an EBS-backed
21729.39 -> volume. And for instance store it's ideal
for temporary backup, for storing an application's
21734.48 -> cache, logs, or random data. So this is data
where, when the server is not running,
21741.23 -> it should go away. Okay, so there you go.
That's the difference between EBS volumes
21747.48 -> and instance store
21749.65 -> volumes.
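If you ever need to check which backing an image uses, the root device type is visible through the AWS CLI; the AMI ID below is a placeholder:

```shell
# Returns "ebs" for EBS-backed images, "instance-store" otherwise
aws ec2 describe-images --image-ids ami-0123456789abcdef0 \
  --query 'Images[].RootDeviceType'
```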
21750.65 -> So we're done with EBS, and we're looking
at the EBS cheat sheet so let's jump into
21756.292 -> it. So elastic block store EBS is a virtual
hard disk, and snapshots are a point in time
21761.84 -> copy of that disk. Volumes exist on EBS; snapshots
exist on S3. Snapshots are incremental: only
21768.96 -> changes made since the last snapshot are
moved to S3. Initial snapshots of an EC2
21773.792 -> instance will take longer to create than
subsequent snapshots. When taking snapshots of
21778.24 -> a volume, the EC2 instance should be stopped
before snapshotting, though you can take snapshots
21783.21 -> while the instance is still running. You can
create AMIs from volumes or from snapshots.
EBS volumes, just to define what they
21794.022 -> are here, are a durable block-level storage device
21794.022 -> that you can attach to a single EC two instance.
EBS volumes can be modified on the fly so
you can change their storage type or increase their
volume size. Volumes always exist in the
21803.3 -> same AZ as the EC2 instance. And then looking
at instance store volumes a temporary storage
21808.28 -> type located on disks that are physically
attached to a host machine. Instance store
21812.96 -> volumes are ephemeral, meaning they
cannot be stopped. If the host fails,
21818.77 -> you lose your data. Okay, EBS-backed instances
can be stopped and you will not lose any data.
21824.692 -> By default root volumes are deleted on termination.
EBS volumes can have termination protection,
21830.23 -> so don't delete the volume on termination.
And snapshots of encrypted volumes
21834.792 -> will also be encrypted. You cannot share a
snapshot if it has been encrypted; unencrypted
21840.34 -> snapshots can be shared with other AWS accounts
or made public. So there you go. That's your
21848.2 -> EBS cheat sheet.
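Tying a couple of those cheat sheet points together, encrypting an existing unencrypted volume works through a snapshot copy. A sketch with the AWS CLI, where all the IDs are placeholders:

```shell
# Snapshot the unencrypted volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0

# Copy the snapshot, turning encryption on for the copy
aws ec2 copy-snapshot --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --encrypted --kms-key-id alias/aws/ebs

# The encrypted copy can then be registered as an AMI (or restored
# as a volume) and used to launch an instance with an encrypted root.
```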
21850.6 -> Hey, this is Andrew Brown from exam Pro. And
we are looking at CloudFront, which is a CDN
21856.36 -> a content distribution network. It creates
cache copies of your website at various edge
21860.9 -> locations around the world. So to understand
what CloudFront is, we need to understand
21866.14 -> what a content delivery network is. So a CDN
is a distributed network of servers, which
21870.17 -> deliver webpages and content to users based
on their geographical location, the origin
21874.862 -> of the web page and a content delivery server.
So over here, I have a graphical representation
21880.76 -> of a CDN specifically for CloudFront. And
so the idea is that you have your content
21886.11 -> hosted somewhere. So here the origin is s3
and the idea is that the server CloudFront
21892.68 -> is going to distribute a copy of your website
on multiple edge locations which are just
21899.55 -> servers around the world that are nearby to
the user. So when a user from Toronto tries
21904.522 -> to access our content, it's not going to go
to the s3 bucket, it's going to go to CloudFront.
21909.872 -> And CloudFront is going to then route it to
the nearest edge location so that this user
21916.3 -> has the lowest latency. And that's the concept
behind CloudFront.
21925.43 -> So it's time to look at the core components
for CloudFront. And we'll start with origin,
21930.18 -> which is where the original files are located.
And generally, this is going to be an s3 bucket.
21936.11 -> Because the most common use case for CloudFront
is static website hosting. However, you can
21940.43 -> also specify origin to be an EC2 instance,
an Elastic Load Balancer, or Route 53. The
21946.57 -> next thing is the actual distribution itself.
So distribution is a collection of edge locations,
21951.05 -> which define how cache content should behave.
So this definition, here is the thing that
21958.35 -> actually says, Hey, I'm going to pull from
origin. And I want this to update the cache,
21964.48 -> at whatever frequency, or use HTTPS,
or that should be encrypted. So that is the
21970.21 -> settings for the distribution. And then there's
the edge locations. And an edge location is
21974.28 -> just a server. And it is a server that is
nearby to the actual user that stores that
21980.61 -> cache content. So those are the three components
of CloudFront. So we need to look at the distribution
21990.872 -> component of CloudFront in a bit more detail,
because there's a lot of things that we can
21994.852 -> set in here. And I'm not even showing you
them all, but let's just go through it. So
21999.48 -> we have an idea, the kinds of things we can
do with it. So again, a distribution is a
22003.67 -> collection of edge locations. And the first
thing you're going to do is you're going to
22008.03 -> specify the origin. And again, that's going
to be s3, EC2, ELB, or Route 53. And when you
22013.602 -> set up your distribution, what's really going
to determine the cost and also how much it's
22018.09 -> going to replicate across is the price class.
So here, you can see, if you choose all edge
22023.34 -> locations, it's gonna be the best performance
because your website is going to be accessible
22026.82 -> from anywhere in the world. But you know,
if you're operating just in North America
22031.202 -> and the EU, you can limit the amount of servers
it replicates to. There are two types of distributions,
22037.762 -> we have web, which is for websites, and rtmp,
which is for streaming media, okay, um, you
22044.55 -> can actually serve up streaming video under
web as well. But rtmp is a very specific protocol.
22050.542 -> So it is its own thing. When you set up behaviors,
there's a lot of options we have. So we could
22055.792 -> redirect all the traffic to be HTTPS, we could
restrict specific HTTP methods. So if we don't
22061.33 -> want to have PUTs, we can say to not include
those. Or we can restrict the viewer access,
22067.522 -> which we'll look into a little bit more detail
here, we can set the TTL, which is time to
22072.16 -> expiry, or Time To Live sorry, which says
like after, we could say every two minutes,
22077.65 -> the content should expire and then refresh
it right, depending on how, how stale we want
22081.862 -> our content to be. There is a thing called
invalidations in CloudFront, which allow you
22086.272 -> to manually expire content, so you don't have to wait
for the TTL to expire, you could just say
22090.93 -> I want to expire these files, this is very
useful when you are pushing changes to your
22096.85 -> s3 bucket because you're gonna have to go
manually create that invalidation. So those
22099.78 -> changes will immediately appear. You can also
serve back error pages. So if you need
22105.741 -> a custom 404, you can do that through CloudFront.
And then you can set restrictions. So if you
22111.78 -> for whatever reason, aren't operating in specific
countries, and you don't want those countries
22118.522 -> to consume a lot of traffic, which might cost
you money, you can just restrict them saying
22123.21 -> I'm blocking these countries, or you could
do the other way and say I only whitelist
22128.32 -> these countries, these are the only countries
that are allowed to view things from CloudFront.
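The invalidation feature mentioned above can also be driven from the CLI, which is handy in a deploy script; the distribution ID here is a placeholder:

```shell
# Expire cached copies of these paths at every edge location
aws cloudfront create-invalidation \
  --distribution-id E1ABCDEF2GHIJK \
  --paths "/index.html" "/images/*"
```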
22137.7 -> So there's one interesting feature I do want
to highlight on CloudFront, which is lambda
22141.342 -> edge and lambda edge are lambda functions
that override the behavior of requests and
22145.85 -> responses that are flowing to and from CloudFront.
And so we have four that are available to
22151.542 -> us, we have the viewer requests, the origin
requests, the origin response, and the viewer
22156.282 -> response, okay, and so on our CloudFront
distribution under probably behavior, we can
22162.65 -> associate lambda functions. And that allows
us to intercept and do things with these. What
22167.83 -> would you possibly use Lambda@Edge for? A very
common use case: let's say you have protected
22174.682 -> content, and you want to authenticate it against
something like cognito. So only users that
22181.94 -> are within your cognito authentication system
are allowed to access that content. That's
22187.282 -> actually something we do on exam pro for the
video content here. So you know, that is one
22192.41 -> method for protecting stuff, but there's a
lot of creative solutions here where you can
22197.362 -> use Lambda@Edge, you could use it to serve
up A/B testing websites, so you could have
22205.452 -> it so when the viewer request comes in, you
have a roll of the die, and it will change
22209.622 -> what it serves back. So it could serve
up A or serve up B. And that's something
22214.032 -> we also do in the exam pro marketing website.
So there's a lot of opportunities here with
22218.372 -> lambda edge. I don't know if it'll show up
in the exam, I'm sure eventually it will.
22222.52 -> And it's just really interesting. So I thought
it was worth a mention.
22224.74 -> Now we're talking about CloudFront protection.
So CloudFront might be serving up your static
22235.292 -> website, but you might have protected content,
such as video content, like on exam Pro, or
22241.06 -> other content that you don't want to be easily
accessible. And so when you're setting up
22245.95 -> your CloudFront distribution, you have this
option to restrict viewer access. And so that
22249.862 -> means that in order to view content, you're
going to have to use signed URLs or signed
22254.13 -> cookies. Now, when you do check this on, it
actually will create you an origin access
22258.91 -> identity, an OAI. And what that is, it's
a virtual user identity that it will be used
22264.88 -> to give CloudFront distributions permission
to fetch private objects. And so those private
22270.09 -> objects generally mean from an s3 bucket that's
private, right. And as soon as that set up,
22276.18 -> and that's automatically set up for you. Now
you can go ahead and use signed URLs and signed
22280.612 -> cookies. So for one of these things, well, the
idea behind it is a signed URL is just a URL
22286.65 -> that CloudFront will provide you that gives
you temporary access to those private cached
22291.46 -> objects. Now, you might have heard of pre
signed URLs, and that is an s3 feature. And
22297.71 -> it's similar nature. But it's very easy to
get these two mixed up because signed URLs and
22302.792 -> pre-signed URLs sound very similar. But just
know that pre-signed URLs are for s3 and signed
22308.68 -> URLs are for CloudFront. Then you have signed
cookies, okay. And so it's similar to signed
22314.862 -> URLs, the only difference is that you're
passing along a cookie with your request to
22322.27 -> allow users to access multiple files, so you
don't have to, every single time generate
22327.69 -> a signed cookie, you set it once, as long
as that cookie is valid and pass along, you
22332.602 -> can access as many files as you want. This
is extremely useful for video streaming, and
22337.07 -> we use it on exam Pro, we could not do video
streaming protected with sign URLs, because
22342.19 -> all the video streams are delivered in parts,
right, so a cookie has to get set. So that
22349.02 -> that is your options for protecting cloud.
It's time to get some hands on experience
22356.8 -> with CloudFront here and create our first
distribution. But before we do that, we need
22360.58 -> something to serve up to the CDN. Okay, um,
so we had an s3 section earlier, where I uploaded
22367.02 -> a bunch of images from Star Trek The Next
Generation. And so for you, you can do the
22372.42 -> same or you just need to make a bucket and
have some images within that bucket so that
22377.41 -> we have something to serve up, okay. So once
you have your bucket of images prepared, we're
22382.97 -> going to go make our way to the CloudFront
console here. And so just type in CloudFront,
22386.97 -> and then click there. And you'll get to the
same place as me here, we can go ahead and
22391.44 -> create our first distribution. So we're gonna
be presented with two options, we have web
22396.5 -> and rtmp. Now rtmp is for the Adobe Flash
media server protocol. So since nobody really
22402.44 -> uses flash anymore, we can just kind of ignore
this distribution option. And we're going
22406.42 -> to go with web, okay. And then we're going
to have a bunch of options, but don't get
22410.96 -> overwhelmed, because it's not too tricky.
So the first thing we want to do is set our
22414.74 -> origin. So where is this distribution going
to get its files that wants to serve up, it's
22419.53 -> going to be from s3. So we're going to click
into here, we're going to get a drop down
22422.78 -> and we're going to choose our s3 bucket, then
we have path, we'll leave that alone, we have
22427.07 -> origin ID, we'll leave that alone. And then
we have restrict bucket access. So this is
22430.75 -> a cool option. So the thing is, let's
say you only want people to access your
22438.01 -> bucket resources through CloudFront.
Because right now, if we go to s3 console,
I think we made Data public, right? And
if we were to look at this URL, okay, this
22449.47 -> is publicly accessible. But let's say we wanted
to force all traffic through CloudFront,
22453.61 -> because we want CloudFront to track
things. So we get some rich analytics
22458.13 -> there. And we just don't want people directly
accessing this ugly URL. Well, that's where
22462.52 -> this option comes in, restrict bucket access,
okay, and it will create an origin access
22467.03 -> identity for us, but we're gonna leave
it to No, I just want you to know about that.
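As a sketch of what restrict bucket access sets up under the hood, here is the kind of S3 bucket policy that lets the origin access identity, and nobody else, read the objects. The bucket name and OAI ID below are hypothetical, just to show the shape of the document.

```python
import json

# Hypothetical bucket name and OAI ID.
BUCKET = "exampro-enterprise-d"
OAI_ID = "E2EXAMPLEOAIID"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIRead",
        "Effect": "Allow",
        # The OAI is the virtual user identity CloudFront fetches objects as.
        "Principal": {"AWS": "arn:aws:iam::cloudfront:user/"
                             f"CloudFront Origin Access Identity {OAI_ID}"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*"
    }]
}

policy_document = json.dumps(bucket_policy, indent=2)
```

With a policy like this attached and public access off, the objects are only reachable through the distribution.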
22470.862 -> And then down to the actual behavior settings,
we have the ability to redirect HTTP to HTTPS.
22476.64 -> That seems like a very sane setting. We can
set the allowed HTTP methods; we're only ever
22480.792 -> going to be GETting things, we're never going
to be PUTting or POSTing things. And then we'll
22486.35 -> scroll down, scroll down, we can set our TTL.
The defaults are very good. And then down
22490.77 -> here, we have restrict viewer access. So if
we wanted to restrict the viewer access to
require signed URLs or signed cookies to protect
access to our content, we'd press yes here
22500.27 -> But again, we just want this to be publicly
available. So we're going to set it to No.
22504.32 -> Okay. And then down below, we have distribution
settings. And this is going to really affect
22508.16 -> our price of the cost we're going to pay here,
as it says price class, okay,
22513.66 -> and so we can either distribute all copies
of our files to every single edge location,
22520.74 -> or we can say US, Canada, Europe, Asia, Middle
East and Africa, or
22525.612 -> just US, Canada and Europe, the main three. So
I want to be cost saving here. It's not really
22530.31 -> going to cost us a lot anyway. But I think
that if we set it to the lowest cost here
22533.85 -> that it will take less time for the distribution
to replicate, and this tutorial will go a lot
22538.792 -> faster, okay, then we have the ability to
set an alternate domain name, this is important
22543.762 -> if we are using a CloudFront certificate,
and we want a custom domain name, which we
would do in another follow along but not
in this one here. Okay, and if this was a
website, we would set the default root object here
to index.html. Okay, so that's pretty
22559.542 -> much all we need to know here. And we'll go
ahead and create our distribution, okay, and
22563.22 -> so our distribution is going to be in progress.
And we're going to wait for it to distribute
22567.9 -> those files to all those edge locations. Okay,
and so this will just take a little bit of
22572.06 -> time here, it usually takes, I don't know,
like three to five minutes. So we'll
22576.96 -> resume the video when this is done. So creating
that distribution took a lot longer than I
22585.852 -> was hoping for, it was more like 15 minutes,
but I think the initial one always takes a
22590.961 -> very long time. And then after, whenever
you update things, it still takes a bit of
22595.622 -> time, but it's not 15 minutes, more like five
minutes. Okay. But anyway, um, so our distribution
22600.57 -> is created. Here, we have an ID, we have a
domain name, and we're just going to click
22604.95 -> in to this distribution. And we're going to
see all the options we have here. So we have
22609.48 -> general origins, behaviors, error pages, restrictions,
and validations. And tags. Okay, so when we
22614.622 -> were creating the distribution, we configured
both general origins and behaviors all in
22619.27 -> one go. Okay. And so if we wanted to override
the behaviors from before, we just looked
22624.65 -> at it here, we're not going to change anything
here. But I just want to show you that we
22627.91 -> have these options previous. And just to see
that they are broken up between these three
22631.692 -> tabs here. So if I go to Edit, there's some
information here and some information there.
22636.48 -> Okay. So now that we have our distribution
working, we have this domain name here. And
if we had used our own SSL certificate from
22641.542 -> AWS Certificate Manager, we could
22647.81 -> add a custom domain, but we didn't. So we
just have the domain that is provided with
22652.38 -> us. And this is how we're going to actually
access our our cache file. So what I want
22655.96 -> you to do is copy that there. I'm just going
to place it here in a text editor here. And
22662.512 -> the idea here is we want to then, from the
enterprise D folder, pull one of the images here.
22667.202 -> So if we have data, we'll just take the front
of it there, okay. And we are going to just
22672.56 -> assemble a new URL. So we're going to try
data first here, and data should work without
22677.38 -> issue. Okay. And so now we are serving this
up from CloudFront. So that is how it works
22682.71 -> now, but data is set to public access. Okay,
so that isn't much of a trick there. But for
22691.97 -> all these other ones, I just want to make
sure it has public access here, and it
22695.661 -> is set here, yep, to public access. But let's
look at one that actually doesn't have
22700.81 -> public access, such as Keiko, she does not
have public access. So the question is, will
22706.3 -> CloudFront make files that do not have public
access set in here publicly accessible, that's
22711.042 -> what we're going to find out. Okay. So we're
just going to then assemble another URL
22716.612 -> here, but this time with Keiko, okay, and
we're gonna see if we can access her. All right.
22722.31 -> Okay, oops, I copied the wrong link. Just
copy that one more time. Okay, and there you
22728.75 -> go. So Keiko is not available. And this is
because she is not publicly accessible. Okay.
22734.09 -> So just because you create a CloudFront distribution
doesn't necessarily mean that these files
22737.33 -> will be accessible. So if we were to go to
Keiko now and then set her to public, would
22744.542 -> she be accessible now through CloudFront?
Okay, so now she is, all right. So just
22751.282 -> keep that in mind that when you create a CloudFront
distribution, you're going to get these URLs
22756.06 -> and unless you explicitly set the objects
in here to be publicly accessible, they're
22761.301 -> not going to be publicly accessible. Okay.
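The URL assembly we did by hand above is nothing more than joining the distribution's domain name with the object's key in the bucket. A tiny sketch (the domain and key are made up):

```python
def cloudfront_url(distribution_domain: str, object_key: str) -> str:
    # The CloudFront URL is just the distribution's domain plus the
    # object's key (its path) inside the origin bucket.
    return f"https://{distribution_domain}/{object_key.lstrip('/')}"

# Hypothetical distribution domain and object key.
url = cloudfront_url("d111111abcdef8.cloudfront.net", "enterpriseD/keiko.jpg")
```

Whether that URL returns the image still depends on the object's own access settings, as just demonstrated.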
But yeah, that's all there is to it. So we
22765.21 -> created our CloudFront distribution. So we need to touch
on one more thing here with CloudFront. And
22773.71 -> that is invalidation. So, up here we have
this Keiko image which is being served up
22778.87 -> by CloudFront. But let's say we want to replace
it. Okay, so in order to replace images on
22784.75 -> CloudFront, it's not as simple as just replacing
an s3. So here we have Keiko, right, and this
22790.4 -> is the current image. And so let's say we
wanted to replace that and so I have another
22795 -> version of Keiko here. I'm just going to upload
it here. And that's going to replace the existing
one, okay.
22803.42 -> And so I'm just going to make sure that the
new one is here. So I'm just going to right
22808.87 -> click or sorry, gonna hit open here, make
sure it's set to public. And then I'm just
22812.81 -> going to click the link here. And it still
now it's the new one, right, so here we have
22817.85 -> the new one. And if we were to go to the CloudFront
distribution and refresh, it's still the old
22822.16 -> image, okay, because in order for these new
changes to propagate, you have to invalidate
22827.4 -> the old cache, okay, and that's where
invalidations come into play. So, to invalidate
22833.022 -> the old cache, we can go in here to create
invalidations. And we can put a wildcard to
22838.07 -> expire everything, or we could just expire
Keiko. So, for Keiko, she's at forward slash
22845.122 -> enterprise D. So we would just paste that
in there. And we have now created an invalidation.
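As a toy sketch of how an invalidation path with a wildcard selects which cached objects to expire, here is the matching simulated locally with `fnmatch` — this is not the CloudFront API, just an illustration of the pattern behaviour (CloudFront only allows the `*` as the final character of a path).

```python
import fnmatch

def invalidate(cached_paths, pattern):
    """Return the cached paths an invalidation pattern would expire.
    A trailing '*' behaves like CloudFront's wildcard invalidation."""
    return [p for p in cached_paths if fnmatch.fnmatch(p, pattern)]

# Hypothetical cached object paths in a distribution.
cached = ["/enterpriseD/keiko.jpg", "/enterpriseD/data.jpg",
          "/voyager/janeway.jpg"]
```

So `invalidate(cached, "/enterpriseD/*")` would expire just the Enterprise-D images, while `invalidate(cached, "/*")` expires everything.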
22849.782 -> And this is going to take five minutes,
I'm not going to wait around to show you this
22855.27 -> because I know it's going to work. But I just
want you to know that if you update something
22859.542 -> in order for it to work, you have
to create an invalidation. So it's time to look
22868.02 -> at the CloudFront cheat sheet. And let's get
to it. So CloudFront is a CDN, a content delivery
22873.32 -> network. It makes websites load fast by serving
cache content that is nearby CloudFront distributes
22879.66 -> cached copies at edge locations, edge locations
aren't just read only you can actually write
22885.12 -> to them. So you can do puts to them. We didn't
really cover that in the core content. But
22889.34 -> it's good to know CloudFront has a feature
called TTL, which is time to live and that
22894.83 -> defines how long until a cache expires. Okay,
so if you set it to expire every hour, every
22902.42 -> day, that's how fresh or I guess you'd say
how stale your content is going to be. When
22907.6 -> you invalidate your cache, you're forcing it
to immediately expire. So just understand
22912.58 -> that invalidation means you're refreshing
your cache, okay? Refreshing the cache does
22918.292 -> cost money because of the transfer cost to
update edge locations, right. So if you have
22923.58 -> a file and it's expired, it then
has to send that file to 10, 20, whatever
22929.47 -> number of servers it is, and there's always
that outbound transfer cost, okay? origin
22934.292 -> is the address of where the original copies
of your files reside. And again, that can
22938.75 -> be S3, EC2, an ELB, or Route 53, then
you have distribution, which defines a collection
22944.1 -> of edge locations and behavior on how it should
handle your cache content. We have two types
22949.522 -> of distributions, we have the web distribution,
also known as web, which is for static website
22955.46 -> content. And then you have rtmp, which is
for streaming media, again, that is a very
22960.21 -> specific protocol, you can serve up video
streaming via the web distribution. Then we
22966.24 -> have origin access identity, which is used
to access private s3 buckets. If we want to
22971.31 -> access cached content that is protected, we
need to use signed URLs or signed cookies, again,
22977.012 -> don't get signed URLs confused with pre-signed
URLs, which are an s3 feature, but it's pretty
22981.43 -> much the same in terms of giving you access
to something, then you have Lambda@Edge, which
22987.192 -> allows you to pass each request through a
lambda to change the behavior of the response
22992.55 -> or the request. Okay, so there you go. That
is CloudFront in a nutshell. Hey, this is
23001.64 -> Andrew Brown from exam Pro. And we are looking
at relational database service RDS, which
23006.6 -> is a managed relational database service and
supports multiple SQL engines, easy to scale,
23012.08 -> backup and secure. So jumping into RDS RDS
is a relational database service. And it is
23018.62 -> the AWS solution for relational databases.
So there are six relational database options
23024.21 -> currently available to us. So we have Amazon
Aurora, which we have a whole section dedicated
23028.292 -> on, MySQL, MariaDB, Postgres, which is what
we use at exam Pro, Oracle and Microsoft SQL Server.
23040.32 -> So let's look at what we can do for encryption,
so you can turn on encryption at rest for
23043.81 -> all RDS engines, I've noticed that you might
not be able to turn on encryption for older
23044.81 -> versions of some engines. So sometimes this
option is not available, but generally it
23045.81 -> always is. And also, when you do turn on encryption,
it's also going to encrypt, as well as your
23046.81 -> automated backups, your snapshots and your
read replicas related to that database. And
23047.81 -> encryption is handled by AWS key management
service kms, because it always is. So you
23048.81 -> can see it's as simple as turning on encryption,
and you can either use the default key or
23049.81 -> provide another kms key that you were taking
a look here at RDS backups. Okay, so we have
two solutions available to us, starting with
23051.81 -> automated backups. What you're
going to do is choose a retention period between
one and 35 days. Now generally, most people
are going to set this to seven, and if you
were to set it to zero, that's actually how
23053.81 -> you turn
23054.81 -> it off. So when they say that, automated backups
are enabled by default, they just mean that
23055.81 -> they fill it in with like seven by default
for you. And you can just turn that to zero,
23056.81 -> it's going to store transaction logs throughout
the day, all the data is going to be stored
23057.81 -> inside s3, and there is no additional charge
for those backups, okay, you're going
23058.81 -> to define when you want backups to occur through
a backup window. So here you can see UTC
23059.81 -> six, and the duration can't be shorter than
half an hour. And then storage and I/O may
23060.81 -> be suspended during backup. So just understand
that you might have some issues during that
23061.81 -> period of time. So you might really want to
choose that time carefully. Now, the other
23062.81 -> way is manual snapshots. And all you have
to do is drop down actions and take a snapshot.
23063.81 -> So it is a manual process. Now, if your primary
database or your RDS instance was deleted,
23064.81 -> you're still going to have the snapshot. So
if you do want to restore a previous snapshot,
23065.81 -> you totally can do that. Okay, they don't
go away when you delete the RDS instance.
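The retention-period rule just described — 1 to 35 days, and 0 meaning automated backups are off — can be sketched as a tiny validation function:

```python
def backups_enabled(retention_days: int) -> bool:
    # A retention period of 0 turns automated backups off;
    # RDS otherwise accepts 1 through 35 days.
    if not 0 <= retention_days <= 35:
        raise ValueError("retention must be between 0 and 35 days")
    return retention_days > 0
```

So the console's default of seven days means backups are on, and dialing it to zero is how you switch them off.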
23066.81 -> So let's learn how to actually restore a backup
now. And it's as simple as dropping down actions
23067.81 -> here and choosing restore to point in time.
So when recovering, AWS will take the most
23068.81 -> recent daily backup and apply transaction
log data relevant to that day. This allows
23069.81 -> point in time recovery down to a second inside
the retention period. backup data is never
23070.81 -> restored over top of an existing instance,
what it's going to do is when you restore
23071.81 -> an automated backup, or manual snapshot, it's
going to create a new instance
23072.81 -> for the restored database,
okay. And so when you do make this new restored
23073.81 -> RDS instance, it's going to have a new DNS
endpoint. And you are going to have to do
23074.81 -> a little bit of manual labor here because
you're going to want to delete your old instance,
23075.81 -> and then use this new endpoint for your applications.
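The point-in-time window described above comes down to a simple date check: restorable down to the second, but only between the oldest retained backup and the latest restorable time. A sketch:

```python
from datetime import datetime, timedelta

def restorable(target: datetime, latest_restorable: datetime,
               retention_days: int) -> bool:
    # Point-in-time recovery works down to the second, but only inside
    # the retention window that ends at the latest restorable time.
    oldest = latest_restorable - timedelta(days=retention_days)
    return oldest <= target <= latest_restorable
```

For example, with seven days of retention, a moment three days back is restorable, while one three weeks back is not.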
Okay. So we're going to be looking at multi
23076.81 -> AZ deployment. And this ensures your database
remains available if another AZ becomes unavailable.
23077.81 -> So what it's going to do, it's going to make
an exact copy of the database in another availability
23078.81 -> zone, and it is automatically going to synchronize
those changes from the primary database, the
23079.81 -> master database over to the standby database.
Okay. So the thing about this standby is that
23080.81 -> it is a slave database, it is not receiving
any real time traffic, it is just there as
23081.81 -> a backup to take the place of the master database
in case the AZ goes down. So over here
23082.81 -> we have automatic failover protection. So
if the AZ does go down, then failover will
23083.81 -> occur. There's a
URL or address that points
23084.81 -> to the database. So it's going to point to
the slave and the slave is going to be promoted
23085.81 -> to master and now it is your master database.
All right, so that's multi AZ. Now we're going to take
23086.81 -> a look at read replicas, and they allow you
to run multiple copies of your database. These
23087.81 -> copies only allow reads, so you can't do writes
to them. And it's intended to alleviate the
23088.81 -> workload of your primary database, also known
as your master database to improve performance.
23089.81 -> Okay, so in order to use read replicas, you
must have automatic backups enabled. And in
23090.81 -> order to create a replica, you're just dropping
down actions here and hitting create read
23091.81 -> replica as easy as that. And it uses asynchronous
replication between your master and your read
23092.81 -> replica. So you can have up to five replicas
of the database, each read replica will have
23093.81 -> its own DNS endpoint. You can have multi AZ
replicas, cross region replicas, and even
23094.81 -> replicas of replicas. replicas can be promoted
to their own database, but this will break
23095.81 -> replication, which makes a lot of sense, and
no automatic failover. So if the primary copy
23096.81 -> fails, you must manually update your URLs
to point at the copy.
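Since there's no automatic failover for read replicas and each replica has its own DNS endpoint, applications typically route reads and writes themselves. A toy sketch of that routing idea (the endpoint names are made up):

```python
import itertools

class ConnectionRouter:
    """Toy sketch: writes go to the primary's endpoint; reads are spread
    across the read replica endpoints (each replica has its own DNS name)."""

    def __init__(self, primary, replicas):
        if len(replicas) > 5:
            raise ValueError("RDS allows at most five read replicas")
        self.primary = primary
        # Round-robin over replicas; fall back to the primary if none exist.
        self._cycle = itertools.cycle(replicas or [primary])

    def endpoint_for(self, query: str) -> str:
        is_read = query.lstrip().upper().startswith("SELECT")
        return next(self._cycle) if is_read else self.primary

# Hypothetical endpoints.
router = ConnectionRouter("primary.abc.rds.amazonaws.com",
                          ["replica1.abc.rds.amazonaws.com",
                           "replica2.abc.rds.amazonaws.com"])
```

This is only to illustrate the division of labour; real applications usually get this from their ORM or a proxy layer rather than hand-rolling it.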
23097.81 -> So now it's time to compare multi AZ and read
replicas because it's very important to know
23098.81 -> the difference between the two. So for replication,
multi AZ has synchronous replication and read
23099.81 -> replicas have asynchronous replication. For
what is actually active, on multi AZ it's just
23100.81 -> gonna be the primary instance. So the standby
doesn't do anything, it's just there. If the
23101.81 -> primary instance becomes unavailable,
then the standby becomes the primary instance. Whereas with
23102.81 -> read replicas, the primary and all the replicas
are being utilized. Okay. For multi AZ,
23103.81 -> automated backups are taken from the
standby, whereas for read replicas there are no backups
23104.81 -> configured by default. Multi AZ, as the name
implies, will span two AZs within a single
23105.81 -> region. Replicas are within a single AZ, but
they can be multi AZ, cross AZ, or cross region.
23106.81 -> Okay. When upgrades are occurring, it's going
to happen on the primary database for multi
23107.81 -> AZ, whereas for read
replicas, upgrades are going to be independent from
23108.81 -> the source instance. And then lastly, we have
failover. So automatic failover will happen
23109.81 -> to standby. And so for read replicas, you're
not going to have that automatic failover,
23110.81 -> you're gonna have to manually promote one
of those replicas to become the standalone
23111.81 -> database. Hey, it's Andrew Brown from exam Pro,
and we are looking at Amazon RDS. And so we
23112.81 -> are going to create our own RDS database,
as well as an aurora database, as well as
23113.81 -> looking how to migrate from RDS to Aurora.
And also maybe we'll look at some backup solutions
23114.81 -> there and some of the other superfluous features
that RDS supplies us. If you're wondering
23115.81 -> how do you get to the console here, you go
to the top here, type RDS, and click that and
23116.81 -> you will end up in the same place as I am.
So let's get to it and create our first database.
23117.81 -> Okay, and we're going to do that by going
on the left hand side here to databases, and
23118.81 -> clicking Create database. So here we are in
the RDS creation interface. And the first
23119.81 -> thing we're presented with is the create options,
standard create and easy create, I assume that
23120.81 -> this would eliminate some options for us,
we're going to stick with standard, because
23121.81 -> we want to have full control over what we're
doing here. The next thing is to choose
23122.81 -> your engine. And so we're going to do Aurora
later. And we're going to spin up a Postgres
23123.81 -> database, which is very popular amongst Ruby
on Rails developers, which is my primary web
23124.81 -> framework that I like to use, then under templates,
this is a preset configuration for you that
23125.81 -> allows you to get started very easily. So
if you leave this to production here, I want
23126.81 -> to show you the cost because it's laughably
expensive. It's $632, because it's
23127.81 -> doing a bunch of stuff here, it's running in more
than one AZ, it's using a very large EC2
23128.81 -> instance, and it has provisioned IOPS, okay.
And so for our use case, I don't think we
23129.81 -> want to spend $632. But if you were like an
enterprise, it makes sense why they would
23130.81 -> do that. But if you aren't paying attention,
that's very expensive. And there's obviously
23131.81 -> the free tier, which is what we will want
to use here. But we will configure this so
23132.81 -> that it will end up as free tier here. And
we will learn the options as we go through
23133.81 -> it. So the first thing is we're going to have
to have a database, so we'll just keep our
23134.81 -> database name as database one, then we need
to set a master password. So we will set it
23135.81 -> as Postgres. And we'll see if we can get to
get away with that. Obviously, when you make
23136.81 -> your real password, you'd use a password generator,
or let it auto generate one. So it's very
23137.81 -> long in length. But I just want to be able
to work with this database very easily for
23138.81 -> the purpose of you know, this, this follow
along, okay, then we have our DB instance
23139.81 -> size. So for the DB instance size, you can
see it's set to the standard classes, m. I would
23140.81 -> say the class we want is more like burstable,
which are t2s. And so this is what we're
23141.81 -> used to when we're saving money. So we have
the t2.micro. If you are a very small startup,
23142.81 -> you would probably be starting on t2.micro
and do totally fine for you. So we're going
23143.81 -> to change it to t2.micro, okay. And the
next thing is storage type. So here we can
23144.81 -> choose provisioned IOPS. And so we'd have
faster IOPS, right, but we're gonna
23145.81 -> go general purpose here, because we
don't need that crazy amount of IOPS there.
23146.81 -> And that's going to reduce costs, there
is this ability to do storage, auto scaling.
23147.81 -> So this is kind of nice, where it dynamically
will scale your database to your needs. Um,
23148.81 -> I think I will just leave that on, I don't
see why we wouldn't want to keep that on, unless
23149.81 -> there's additional cost, and I don't believe there
is. Then there's multi AZ, and so that would
23150.81 -> set up another database for us in another
availability zone as a standby, I don't think
23151.81 -> we need that. So we're going to turn that
off. But just to show you how easy it is to
23152.81 -> turn that on, then we're going to need to
choose our VPC, it's very important, whatever
23153.81 -> web application you are deploying, that your
RDS database is in the same VPC, or it's
23154.81 -> going to be a bit of trouble trying to connect
to it. So we'll just leave that in the default
23155.81 -> there. There's some additional
connectivity options here. And so there is a
23156.81 -> subnet group, we're gonna leave that at default.
Then we're asked whether we want this
23157.81 -> to be publicly accessible. Generally, you're
23158.81 -> not going to want a public IP address. But
if we want to interact with this database
23159.81 -> very easily, I'm going to set it to Yes, for
the for the sake of this follow along, because
23160.81 -> it would be nice to put some data into this
database, interact with it with table plus,
23161.81 -> and then we'll go ahead and delete it. Okay.
Then down below, we have a VPC security group,
23162.81 -> I'm thinking that it will probably create
one for us by default, so we'll leave it with
23163.81 -> the default one, which is totally fine. We
can choose our preference for AZ I don't care,
23164.81 -> we'll leave it by default. And then we have
the port number 5432, which is the standard
23165.81 -> port number there. You might want to change
it just for the sake of security reasons.
23166.81 -> Because if you change the port number then
people have to guess what it is. And there's
23167.81 -> some additional configurations here. So what
we have is the initial database name, if you're
23168.81 -> not specify when RDS does not create a database
Okay, so we probably want to name our database
23169.81 -> here. So I'm just going to name the database
one here. You can also authenticate using
23170.81 -> IMDb authentication. So if that is one way
you want to authenticate to your database,
23171.81 -> that is definitely a really nice way of doing
that. So you might want to have that checkbox
23172.81 -> on, then you have backups, okay, and so backups
are enabled automatically, and they're set
for seven days. If you want to turn backups off,
which I definitely do, I'm gonna set it to
23174.81 -> zero days, if we had left that on and created
our RDS instance, it would take forever to
23175.81 -> create because immediately after it starts
up, it's going to then create a backup. And
23176.81 -> that just takes a long time, you can set the
backup window here and select when you want
23177.81 -> to have it run, there is the chance of interruptions
during a backup window. So you definitely
23178.81 -> want to pick this one, it's it's not the most
important usage by your users, we can enable
performance insights. I'm pretty sure I thought
23180.81 -> that was only accessible for certain classes
of database. It's advanced database
23181.81 -> performance monitoring, and offers a free tier
of seven days rolling retention, okay, sure, we'll
turn it on. But at one point, you had to have
23182.81 -> a higher tier to be able to use this, then
we have the retention period here for performance
insights, I guess it's seven days, we'll leave
23184.81 -> that there. It appears to be encrypted
by default. That seems like a good thing there.
There's an account, Id kind of ignore my kindly
23185.81 -> ignore my account ID for this account. But
this is a burner account. So it's not like
23186.81 -> you're going to be able to do anything with
that ID. We have enhanced monitoring, I don't
23187.81 -> need that. So I'm going to turn that off,
that seems kind of expensive. We can export
23188.81 -> our logs, that is a good thing to have. We
can have it so it automatically upgrades minor
23189.81 -> versions, that is a very good thing to set,
you can also set the window for that, we can
23190.81 -> turn on deletion protection, I'm not going
to have that on because I'm going to want
23191.81 -> to delete this lickety split. And so there
we go. And so it says 1544. But I know that
23192.81 -> this is free tier because it's using the T
to micro and so we get things so even though
23193.81 -> it doesn't say it's free, I definitely know
it's free. But this gives you the true costs.
So after your free tier would run out, this
23195.81 -> is what it would cost, about $15 per month
at the lowest tier, the lowest thing you can
23196.81 -> get on AWS for your RDS instance. Okay,
23196.81 -> so let's go ahead and create that database.
So I failed creating my database, because
23197.81 -> it turns out database is a reserved word for
this engine. And that's totally fine. So we're
23198.81 -> gonna have to scroll up here and change the
database name. And so I'm just going to change
23199.81 -> it to Babylon five. And we're going to go
ahead and create that. And this time, we're
23200.81 -> going to get our database. And so now we're
just waiting for our database to be created
23201.81 -> here. And so we just saw the region AZ pop
into existence here, and it is currently being
23202.81 -> created, you might have to hit refresh here
a few times. So we'll just wait a little bit
23203.81 -> here until this status has changed. So our
database is available. And now we can actually
23204.81 -> try to make a connection to it and maybe put
in some SQL and run a query. And so before
23205.81 -> we can try to even make a connection, we need
to edit our security group, okay, because
23206.81 -> we are going to need access via Port 5432
to connect to that instance there. So we'll
23207.81 -> just edit our inbound rules. And we're going
to drop down and look for Postgres in here.
So there's 5432. And we're gonna set it to My
23209.81 -> IP, because we don't want to make this publicly
accessible. And we will hit save, okay. And
so now if we want to make connection, we should
23210.81 -> have no trouble here. I'm just going to close
that tab, and we're going to have to collect
23211.81 -> some information, you're going to need to
use a tool such as table Plus, if you are
23212.81 -> on Mac or Windows, it's free to download and
install if you're on Linux, or you could use
23213.81 -> B, I think it's called D Beaver. Okay, so
that's an open source
23214.81 -> SQL tool here. And so I'm gonna just make
a new connection here. And we're just going
23215.81 -> to choose Postgres, we're going to fill in
some information. So I called this Babylon
23216.81 -> five. Okay, and that was the name of the database
as well, Babylon five. All right, and the
23217.81 -> username was Postgres. And the very, not secure
password is Postgres as well, again, if
23218.81 -> you are doing this for production, or any
case, you should really generate a very long
23219.81 -> password there. And then we need the host,
the host is going to be this endpoint here,
23220.81 -> okay. And the port is 5432 by default, so
I don't have to do anything special here.
23221.81 -> I'm just gonna hit test and see if we connect.
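Those same connection details — user, password, host, port, and database name — can also be expressed as a single libpq-style connection URL, which many tools accept in place of individual fields. A sketch, with the RDS hostname made up and the password percent-encoded in case it contains special characters:

```python
from urllib.parse import quote

def postgres_url(user, password, host, dbname, port=5432):
    # Standard libpq-style connection URL; percent-encode the password
    # so special characters don't break the URL syntax.
    return (f"postgresql://{user}:{quote(password, safe='')}"
            f"@{host}:{port}/{dbname}")

# Hypothetical RDS endpoint; use your instance's real endpoint.
url = postgres_url("postgres", "postgres",
                   "database-1.abcdefgh1234.us-east-1.rds.amazonaws.com",
                   "babylon5")
```

The host portion is exactly the endpoint shown on the RDS instance page.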
Okay, and so it went green. So that is great.
23222.81 -> I'm gonna hit save. So I can save myself some
trouble here. I will just double click
23223.81 -> and make a connection. There's a bit of latency
here when you are connecting to RDS and just
23224.81 -> running things. So if you don't see things
immediately, I just give it a little bit time
23225.81 -> or hit the refresh here. But I already have
a SQL script prepared here. I'm just going
23226.81 -> to show it to you. So this is a script and
what it does, well, actually, it shouldn't
23227.81 -> have been called the Babylon five database
because I'm mixing Star Trek with Babylon
23228.81 -> five. Ridiculous, right? But this is a bunch
of starship classes from Star Trek and a bunch
23229.81 -> of starships from Star Trek here. And I'm
going to run the script to get us some data
here. Okay, and so if we do it in TablePlus,
we're gonna go import from SQL dump. And I
23231.81 -> have it on my desktop here called Starfleet
ship registry, I'm gonna hit open. Okay, I'm
23232.81 -> going to import, I'm just going to run that
script and import us our data here into Postgres.
23233.81 -> Now, if you're using a different database
type like MySQL or Oracle, I can't guarantee that
23234.81 -> this will work. But it will definitely work
for Postgres, because SQL does vary based
23235.81 -> on engines, okay, and so it says it's successfully
done, it even tells us to do a refresh here,
23236.81 -> there is a nice refresh button up there that
you can click, and we're gonna wait for our
23237.81 -> tables to appear. So there we are, we have
our ship classes. And we also have our starships.
23238.81 -> Okay, I just want to run one query here to
make sure queries are working, I'm sure it's
23239.81 -> gonna work. And we're gonna go over here,
and we're just going to get out, we're going
23240.81 -> to want to pull all the starships that are
of ship class Defiant. Okay, and we'll just
make a new query here, or run the old one and say
run all. Okay, and so there you go. So we're
23242.81 -> getting data. So that's how you can connect
to your RDS database, you just have to open
23243.81 -> up that port number there. If you are
to connect this to your web application, what
23244.81 -> you probably want to do for your security
group is to just allow 5432 to the security
23245.81 -> group of the web application. Okay, so here,
I gave access to my IP, right. But you
23246.81 -> know, you'd just have whatever your security
group is, you know, here. So if you had one
23247.81 -> for your EC2 instances, your auto scaling
group that holds EC2 instances,
23248.81 -> you just put that in there. Alright. So now
that we have our database running, I figured
23249.81 -> it'd be cool to go check out performance insights,
which I'm really excited about, because this
23250.81 -> service used to be only available to a certain
expensive tier, so you had to pay, I don't
23251.81 -> know, it was like a t2.large before
you could utilize this. But now it looks like
23252.81 -> AWS has brought it down all the way to the
t2.micro. And it just gives you some rich
performance insights into your application
23253.81 -> here. So here, I actually ran that query, and it
actually shows me kind of the performance
23254.81 -> over time. So this is really great to see
here. I bet if I was to perform another query,
it would appear, so I could probably just run
23256.81 -> the same one here. And we could just change
it to a different class. So we're just going
23257.81 -> to go here and change it. Let's pick one at
random like this one here. Okay. And I'll
23258.81 -> just run that there. And so I ran that query.
And I'm not sure how real time this is, because
23259.81 -> I've actually never had a chance to use it
until now, because I just never, never wanted
23260.81 -> to upgrade for that there. So it looks like
it does take a little bit of time for those
queries to appear. But I did run a query there.
So I bet it will come in. It
23262.81 -> says "past five minutes". So I'm going
to assume that it's at a five minute interval.
23263.81 -> So if we waited five minutes, I'm sure this
query would show up there. But just nice to
23264.81 -> know that you can get these kind of rich analytics
here, because normally, you'd have to pay
23265.81 -> for data dog or some other third party service.
And now it's free with AWS. So I just want
to quickly show you that you can reserve instances
with RDS just like EC2, and you can start
23267.81 -> saving money. So just go to the reserved instances
tab here
23268.81 -> and go to purchase reserved DB instances.
And we're going to have to wait a little bit
23269.81 -> of time here for this to load, it's probably
because it's getting the most up to date information
23270.81 -> about pricing. And so what we're going to
do is just go through this here and just kind
23271.81 -> of get an idea of the difference in cost.
So we're going to drop down and choose Postgres
23272.81 -> as our database, I always seem to have to
select that twice. Okay, but now I have Postgres
23273.81 -> selected, we are using a t2.micro. Okay,
we are not doing Multi-AZ, a one-year term seems great
23274.81 -> to me, we will first start with no upfront,
we only want one DB instance. And we'll look
23275.81 -> at what we're getting. So here, it's going
to tell us what we're going to save. So it's
23276.81 -> gonna say it's at $0.014 per hour. So
to compare this, I have the pricing up here.
23277.81 -> So for a t2.micro, it is $0.018
there, okay? And so that's your savings there.
23278.81 -> So if you just fiddle with this, you'll see
now it's $0.007, which is considerably cheaper,
23279.81 -> and you have all upfront and that can't be
right. So $0.000, I guess that would be the case
23280.81 -> because you've already paid for it. So
there'll be no hourly charge, that makes total
23281.81 -> sense. But now you have an idea of what that
cost is for the year. So for $111,
23282.81 -> your cost is totally covered for you.
And so if we wanted to actually calculate
the full cost here, we would just go here
and grab
23284.81 -> the full on-demand price here to get a comparison.
Where is our t2.micro buddy here? Here it
23285.81 -> is. So I always just multiply by 730, because
that's generally how many hours there are
23286.81 -> in a month. Multiply 0.018 by 730 and you
have basically a $14 monthly charge. So you say 14
23287.81 -> times 12. Okay. And so it's $168 for the year.
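The back-of-envelope math above can be written out explicitly. The figures are the illustrative prices used in the walkthrough, not current AWS pricing.

```python
# On-demand vs one-year all-upfront reserved pricing for a db.t2.micro,
# using the walkthrough's illustrative figures (NOT current AWS prices).
HOURS_PER_MONTH = 730          # rough number of hours in a month
on_demand_hourly = 0.018       # $/hour on demand

monthly = on_demand_hourly * HOURS_PER_MONTH   # ~ $13.14, "basically $14"
yearly_on_demand = monthly * 12                # ~ $157.68 for the year
reserved_all_upfront = 111.0                   # one-year, all-upfront quote

savings = yearly_on_demand - reserved_all_upfront  # ~ $47, "about 50 bucks"
print(round(monthly, 2), round(yearly_on_demand, 2), round(savings, 2))
```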
So if you're paying upfront for one year,
23288.81 -> we are saving about 50 bucks. If we go on
for three years, we're saving more money,
23289.81 -> I'm not gonna do the math on that, but you
get the idea. So just be aware those options
23290.81 -> are available to us. At the t2.micro level,
it's not a huge impact. But when you get to
23291.81 -> these larger instances, you realize you definitely
want your savings. Okay, so yeah. So I'm going
23292.81 -> to show you how to create a snapshot for your
database, it's pretty darn straightforward.
23293.81 -> We're going to go into our database, go to
our maintenance and backups. If we had backups,
23294.81 -> you know, they'll be turned on here. And so
just to take a snapshot, which is the manual
process of backing up, we can name our snapshot
whatever we want, like, say, first snapshot
23296.81 -> there. Okay, and then we'll just press take
snapshot. And it's just going to go into the
23297.81 -> creating state. And we're just going to now
wait for that snapshot to complete. So our
23298.81 -> snapshot is now available to us. And so there's
a few things we can do with it. This only
23299.81 -> took about seven minutes. I didn't wait that
long for the snapshot here. But if we go to
23300.81 -> the top here to actions, there's a few things
we can do, we can restore our snapshots. So
23301.81 -> that's the first thing we're going to look
at here. And so you're gonna be presented
23302.81 -> with a bunch of options to pretty much spin
up a new RDS instance here. And so the reason
23303.81 -> why you might want to do this is you have
a database, and you have outrun or outlived
23304.81 -> the size that you're currently using. So if
you're using that T to micro, which is super
small here, we'll just show
t3.micro here as an example. And if
23306.81 -> you wanted to increase to the next size, you
would do so. And you could also switch to
23307.81 -> multi AZ change your storage type
23308.81 -> here etc. And then you could restore it, which
will spin up a new RDS instance, okay. And
23309.81 -> then you just kill your old one and move your
endpoints over to this one. Alright.
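The restore-to-a-bigger-instance flow just described can be sketched as the parameters you could hand to boto3's RDS `restore_db_instance_from_db_snapshot` call. The identifiers below are hypothetical, and the boto3 call is shown only in a comment so this sketch stays self-contained.

```python
# Sketch of restoring a snapshot into a NEW, larger instance, as the
# parameter shape boto3's restore_db_instance_from_db_snapshot accepts.
# All identifiers here are HYPOTHETICAL examples.
restore_params = {
    "DBInstanceIdentifier": "babylon-five-restored",  # the new instance to create
    "DBSnapshotIdentifier": "first-snapshot",         # the manual snapshot taken above
    "DBInstanceClass": "db.t3.micro",                 # stepping up from db.t2.micro
    "MultiAZ": False,                                 # could flip to True during restore
}

# With boto3 installed this would be roughly:
#   import boto3
#   boto3.client("rds").restore_db_instance_from_db_snapshot(**restore_params)
# then point your application at the new endpoint and delete the old instance.
print(restore_params["DBInstanceClass"])
```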
23310.81 -> So that's one thing we can do here. With restoring
a snapshot, the other is migrating a snapshot,
23311.81 -> okay. And so we'll look into that next here.
So just before we get onto a migrate snapshot,
23312.81 -> let's take a look at copy and share. So copy
allows you to move your snapshot to another
23313.81 -> region. So if you need to migrate your snapshot
somewhere else, this is how you're going to
23314.81 -> go about doing that. And then you can also
enable encryption. So if you don't have encryption
23315.81 -> enabled, this is a good opportunity for you
to encrypt your snapshots. So when you launch
23316.81 -> an RDS instance, it will be encrypted. Okay,
just like an easy to instance. And so then
23317.81 -> we have the ability to share. So now let's
say you wanted to make this snapshot available
23318.81 -> to other people via other AWS accounts, where
you'd add their ID here. And so now they would
23319.81 -> be able to reference that snapshot ID and
utilize it. Or you can also set it to public.
23320.81 -> So that anyone, anyone could access the snapshot.
But you know, we're we're just going to leave
23321.81 -> that alone, just so you are aware of those
two options. Now the one that is of most interest
23322.81 -> is migrating. So this is how you're going
to create an aurora database, okay, so you
23323.81 -> can just directly create an aurora database,
but if you wanted to migrate from your RDS,
23324.81 -> Postgres to Aurora, Postgres, this is how
you're going to go about it. Okay, so we're
23325.81 -> just going to choose, obviously, Aurora, Postgres,
because we're dealing with a Postgres database
23326.81 -> here, that we have our engine version, okay.
So this is an opportunity where we could upgrade
23327.81 -> our version, we're going to change our instance
class. Now, Aurora instances are a lot larger
23328.81 -> than your normal RDS instances. So we're not
going to have a teaching micro here, you might
23329.81 -> want to skip this step, because it is kind
of expensive, and you might forget about it.
23330.81 -> So you don't want to leave this thing running.
So down below, I'm gonna just choose t3.medium
because that is the least expensive
option I have here. And I'm just going to
23332.81 -> end up doing this anyway. So it's not a big
deal. Then we can choose our VPC, we're going
23333.81 -> to leave it to the default here, we can make
it publicly accessible. I'm gonna leave it
23334.81 -> publicly accessible here, because I don't
care. Um, and yeah, so there you go. And we'll
23335.81 -> just scroll down here, and we will migrate.
Okay. And so you might get a complaint here,
23336.81 -> sometimes I get that. And so what I normally
do is I just go ahead and hit migrate again.
23337.81 -> Okay. Let me just drop down the version, maybe
it won't let us do it for version 10.6. Okay,
23338.81 -> and we'll hit migrate one more time. Funny,
as soon as you choose 10.7, you have to re
23339.81 -> choose your instance class there. So I'll
23339.81 -> go back to t3.medium there and
23340.81 -> now hit migrate. Okay, so now it's going to
go ahead and create that cluster there. So
23341.81 -> it's going to go ahead and create that cluster.
You can see here, we had a two to previous
23342.81 -> failed attempts there when I hit a save there,
so those will vanish. But we're just going
23343.81 -> to wait a while for this to spin up. So our
migration has completed. And so our RDS instance
23344.81 -> is now running on Aurora. So let's just take
a quick peek inside of here, it did take a
23345.81 -> considerable amount of time, I think I was
waiting about like 20 minutes for this Aurora
23346.81 -> instance to get up here. And so right away,
you're gonna see that we have a cluster. And
23347.81 -> then we have the writer underneath.
23348.81 -> We have
23349.81 -> two endpoints, one for writing and one for
reading. And you can obviously create your
23350.81 -> own custom endpoints here. But we're just
going to go back here and I just want to show
23351.81 -> you that you can connect to this database.
So going back to table plus, we're going to
23352.81 -> create a new connection and inherited all
the settings from our previous database. So
23353.81 -> just grabbing the reader endpoint here, I'm
just going to paste in the host name. We called
23354.81 -> the user was Postgres. The password was Postgres.
Not very secure password. By the way, the
23355.81 -> database is called Babylon five. Okay, and
we'll just say this is our Aurora
23356.81 -> Babylon Five. Babby... lon Five. I don't know
why I'm having such a hard time spelling that
23357.81 -> today. Okay. I think I spelt it wrong there.
Oh, okay. But anyway, let's just test our
23358.81 -> connection to see here if it works definitely
have spelled something wrong here. Okay, there
23359.81 -> we go. So it's the same the same credentials,
right, just the host has changed. And I can
23360.81 -> obviously connect to that. And we will see,
we'll have read only access to our data there.
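The writer/reader endpoint split just shown is usually handled in application code by routing queries to the right hostname. A minimal sketch, with placeholder endpoint names and deliberately naive routing:

```python
# Sketch: routing queries between the two Aurora cluster endpoints.
# Writes go to the cluster (writer) endpoint; reads can go to the reader
# endpoint. Both hostnames below are PLACEHOLDERS.
WRITER = "babylon-five.cluster-xyz.us-east-1.rds.amazonaws.com"
READER = "babylon-five.cluster-ro-xyz.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Very naive routing: SELECTs to the reader, everything else to the writer."""
    return READER if sql.lstrip().upper().startswith("SELECT") else WRITER

print(endpoint_for("SELECT * FROM starships"))                       # reader
print(endpoint_for("INSERT INTO starships VALUES (4, 'USS X', 1)"))  # writer
```

Real applications typically configure the two endpoints in their database driver or ORM rather than inspecting SQL text, but the idea is the same: reads can fan out to replicas, writes must hit the writer.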
23361.81 -> So yeah, it's the same process. Yeah, and
there you go. So just just to peek around
23362.81 -> here, you can create additional readers. So
you know, so you have more read replicas,
23363.81 -> we also have this option for activity stream,
which is for auditing all the activity. So
23364.81 -> this might be for an enterprise requirement
there for you. But we're pretty much done
23365.81 -> with this cluster here. So I'm just going
to go to databases here. And I'm just going
23366.81 -> to terminate it here. And so when we want
to terminate that here, we have, we can go
23367.81 -> down here and just delete, and we just type
in delete me. Okay, and that's going to take
23368.81 -> out the whole thing here. Okay. So once this
is done here, we'll just have to hit refresh
23369.81 -> there. And this will take a considerable long
time, see, it's deleting both, then this URL
23370.81 -> will be gone here. So yeah, there you are.
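The cluster teardown done through the console above can also be sketched as the parameters boto3's RDS `delete_db_instance` and `delete_db_cluster` calls take. Identifiers are hypothetical; the calls are shown only in comments so the sketch stays self-contained.

```python
# Sketch of the Aurora cleanup step: delete every instance in the cluster,
# then the cluster itself. Identifiers are HYPOTHETICAL examples.
delete_instance = {
    "DBInstanceIdentifier": "babylon-five-instance-1",
    "SkipFinalSnapshot": True,  # console equivalent: not creating a final snapshot
}
delete_cluster = {
    "DBClusterIdentifier": "babylon-five",
    "SkipFinalSnapshot": True,
}

# With boto3 this would be roughly:
#   rds = boto3.client("rds")
#   rds.delete_db_instance(**delete_instance)
#   rds.delete_db_cluster(**delete_cluster)
print(sorted(delete_cluster))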
So um, we created an RDS Postgres database,
23371.81 -> we connected to it. We created, we migrated
it to Aurora. But I wanted to show you a little
23372.81 -> bit more with Aurora. Because I don't feel
like we got to look at all the options here.
23373.81 -> And we're only going to be able to see that
by creating a new instance here. So we're
23374.81 -> going to stick with the standard create there,
we're going to have Amazon Aurora, we have
23375.81 -> the option between MySQL and Postgres, we're
going to select Postgres, and which is going
23376.81 -> to have on the version 10.7 there. And what
I really want to show you here is this database
23377.81 -> feature setting. So we had this set here,
which had one writer and multiple readers.
23378.81 -> And so you're continuously paying for
23379.81 -> Aurora there, and it's very expensive. But
we have this option called serverless. And
23381.81 -> serverless is a very inexpensive option. So
let's say we were building a web application,
and it was in development, so only a few
clients were using it, or it was only
23383.81 -> being used sporadically throughout
the month, not a lot of usage, then serverless
23384.81 -> is going to be a very cost effective option
for us to use Aurora and also a way for us
23385.81 -> to scale up to using Aurora when we need to
full time. Okay, so what I'm going to do is
23386.81 -> just go and set up a serverless Aurora database
here. We're going to name the database,
23387.81 -> and we're going to also call the user Postgres and give
it that very weak password. And this is the
23388.81 -> big thing here. So we have this capacity setting.
So this is only showing up because we have
23389.81 -> serverless I'm pretty sure if we checkbox
that off, it doesn't appear. So now just we
23390.81 -> just choose our DB instance size, okay, but
we're gonna go back up here and go to serverless.
23391.81 -> And so the idea here is that we are choosing
I believe it's called ACU is the acronym for
23392.81 -> this capacity in here, but we're gonna choose
our minimum and our maximum. So at our minimum,
we want to use two gigabytes of RAM, and a
maximum as well, okay, and we have some
23394.81 -> scaling options here, which we're going to
ignore, we're going to launch this in our
default VPC. And they do have
a clear warning here. Once you create your
23396.81 -> database, you cannot change your VPC selection.
But I mean, that's the case with EC2 instances
23397.81 -> or whatever: you always have to create
a new one, right? But I guess some people aren't
23398.81 -> aware of that. We are going to leave these
alone here. There is this option here for
23399.81 -> the Data API. So this allows you to access
and run SQL via an HTTP endpoint. This is an
23400.81 -> extremely convenient way of accessing
your database here, and it's only available
23401.81 -> here because we are using serverless. Okay.
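The Data API just mentioned lets you run SQL over HTTPS through the `rds-data` service instead of holding a persistent database connection. A sketch of what such a request looks like, as the parameter shape boto3's `execute_statement` accepts; both ARNs are placeholders:

```python
# Sketch of a Data API request against Aurora serverless, shaped like the
# parameters for boto3's rds-data execute_statement. ARNs are PLACEHOLDERS.
data_api_request = {
    "resourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:babylon-five",
    "secretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",
    "database": "babylonfive",
    "sql": "SELECT name FROM starships WHERE ship_class_id = :cid",
    "parameters": [{"name": "cid", "value": {"longValue": 1}}],
}

# With boto3: boto3.client("rds-data").execute_statement(**data_api_request)
print(data_api_request["sql"])
```

Note it authenticates via a Secrets Manager secret rather than a username/password in the request, which is part of what makes it convenient for serverless functions.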
23402.81 -> And this is the same thing with the query
editor. So if you're using the query
editor, which is what it's called there,
23403.81 -> by having this enabled, then we can do both,
okay, and we can have a retention period,
23404.81 -> I'm going to set it to one day, I wish I could
set it to zero. But with Aurora, you have
23405.81 -> to have something set up, you can't have backups
turned off. And it's going to have encryption
23406.81 -> by default. So you can see that it's really
making sure that we make all the smart decisions
23407.81 -> here, and we have deletion protection, I'm
gonna turn that off, because I definitely
23408.81 -> want to be able to delete this, and we're
gonna hit create database. Okay, so there
23409.81 -> you go, we're going to just wait for that
to create, and then we're going to see how
23410.81 -> we can use it with the query editor here,
maybe loaded up with some data, and etc. So
23411.81 -> our service is now available to us. And so
let's go actually connect to it and play around
23412.81 -> with our server. And in order to connect to
Aurora serverless is a little bit different,
23413.81 -> because you have to be within the same VPC,
we're not going to be able to use table plus.
23414.81 -> So in order to do or to connect to it, we're
gonna have to launch an EC two instance. But
23415.81 -> to make things really easy, we're going to
use Cloud9, okay, because Cloud9 is
23416.81 -> an IDE that's backed by an EC2 instance;
it already has the MySQL client installed.
23417.81 -> So it's going to make it really easy for us.
So what I want you to do is go to services
23418.81 -> here and type in cloud nine. And we will make
our way over to the cloud nine console here.
23419.81 -> And we'll create ourselves a new environment.
And so I'm just going to call this MySQL,
23420.81 -> Aurora serverless, okay, because that's all
we're gonna use this for. Okay. And we're
23421.81 -> gonna hit next step. And we're going to create
a new EC two instance, we're gonna leave it
23422.81 -> at T two micro, the smallest instance there,
we're going to launch it with Amazon Linux,
23423.81 -> it's going to shut down automatically after
30 minutes. So that's great for us. And we'll
23424.81 -> go ahead and hit next step. Okay, and then
we will create that environment. And now we
just have to wait for that IDE to spin up here.
Okay. So it shouldn't take too long. just
23426.81 -> takes a few minutes. All right. So our cloud
nine environment here is ready here. And down
23427.81 -> below, we have our environment. And so I can
type MySQL, okay. And you can see that the
23428.81 -> client is installed, but we didn't actually
specify any information. So there's no way
23429.81 -> it's going to connect anything here. But let's
go ahead and let's go to our RDS because we
23430.81 -> need to prepare this so we can actually make
a connection here for cloud nine. So let's
23431.81 -> go into the database here and grab this endpoint
here. And I've actually prepped a little file
23432.81 -> over here with the stuff that we need. So
we're gonna need to prepare this command.
23433.81 -> But before we even do this, okay, we are going
to need to update our security group, okay,
23434.81 -> because we're going to need to grant access
to the security group of that EC, or that
23435.81 -> Cloud9 environment. So also, on the left
hand side,
23436.81 -> we'll open up security groups again here. And
we're going to look for this
23437.81 -> Cloud9 environment's group. So we have one,
here, it's this one up here. Okay, and so
23438.81 -> I just need this, the actual name, the group
ID of the security group. And we'll go back
to our serverless security
group here, and we're going to edit it here,
23440.81 -> this looks like it's using the default one,
which is kind of a mess, we shouldn't be using
23441.81 -> this one. But I'm going to go ahead here and
just remove those, and we'll drop down and
23442.81 -> choose MySQL, MySQL, Aurora, if I can find
it in here. There it is. Okay, and we'll just
23443.81 -> paste that in there. And so that is going
to allow the cloud nine environment to connect
23444.81 -> to, or have permission to connect to the Aurora
serverless there. So going back to our environment,
23445.81 -> now we're ready to try out that line. So here
is our line here. And so we're just going
23446.81 -> to copy that whole thing in there and paste
it in. Okay, it's gonna prompt for password,
23447.81 -> and we made it password 123. And there we
are. So we're connected to our database there.
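The mysql client invocation used from Cloud9 above can be sketched as an argument list. The endpoint is a placeholder, and the `admin` username is an assumption (use whatever master username you set); `-p` makes the client prompt for the password rather than putting it on the command line.

```python
# Sketch: assembling the mysql CLI command run inside Cloud9. The endpoint
# hostname is a PLACEHOLDER and the "admin" user is an ASSUMPTION.
import shlex

host = "babylon-five.cluster-xyz.us-east-1.rds.amazonaws.com"  # placeholder
cmd = ["mysql", "-h", host, "-u", "admin", "-p"]  # -p prompts for the password

print(shlex.join(cmd))
```

Keeping the password out of the command line (by letting `-p` prompt) also keeps it out of your shell history.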
23448.81 -> And so if we wanted to create whatever we
want, it would just be as we were doing with
23449.81 -> Postgres there. So there you go. That's, that's
how you create a Aurora serverless database.
23450.81 -> And that's how you would go about connecting
to it. So now it's just time to do a bit of
23451.81 -> cleanup here. So we are not incurring any
23452.81 -> costs. Now, Aurora serverless doesn't cost
any money while it's not running. So it's not
going to cost you anything. I did terminate
23453.81 -> these other instances earlier on. So you just
have to go to the top here and hit Delete.
23454.81 -> And we don't want to create a final snapshot
and we will delete that cluster. The other
23455.81 -> thing that we need to consider deleting is
this cloud nine environment again, it will
23456.81 -> automatically shut down after 30 minutes.
So it's not going to cost you things the long
23457.81 -> term but you know just to keep it out of your
account. You can go ahead here and delete.
You can see I was attempting an earlier one
here with Aurora serverless Postgres that
23459.81 -> didn't work and I messed up the cloudformation
templates, I can't get rid of that one, but
this one will delete here, and that's all I
got to do to clean up for this section. On to
23461.81 -> the RDS cheat sheet, and this one is a two
pager. So RDS is a Relational Database Service
23462.81 -> and it's AWS's solution for relational databases.
RDS instances are managed by AWS, so you
23463.81 -> cannot SSH into the VM running the database.
There are six relational database options
23464.81 -> currently available. So we have Aurora, MySQL,
MariaDB, Postgres, Oracle and Microsoft SQL
23465.81 -> Server. Multi AZ is an option you can turn
on which makes an exact copy of a database
in another AZ that is only a standby.
23467.81 -> For Multi-AZ, it automatically synchronizes
23467.81 -> changes in the database over to the standby
copy. Multi AZ has automatic failover protection
23468.81 -> so if one AZ goes down, failover will occur
and the standby slave will be promoted to
master. Then we have read replicas. Replicas
allow you to run multiple copies of your database.
23470.81 -> These copies only allow reads and no writes
and are intended to alleviate the workload
23471.81 -> of your primary database to improve performance.
Replicas use asynchronous replication. You
23472.81 -> must have automatic backups enabled to use
read replicas. You can have up to five read
23473.81 -> replicas you can combine read replicas. With
multi AZ you can have read replicas in another
23474.81 -> region. So we have cross region read replicas,
read replicas can be promoted to their own
23475.81 -> database. But this breaks replication. You
can have read replicas of read replicas, RDS
23476.81 -> has two backup solutions. We have automated
backups and database snapshots, aka manual
23477.81 -> snapshots, but it means the same thing. So
automated backups, you choose a retention
23478.81 -> period between one and 35 days, there is no
additional cost for backup storage, you define
23479.81 -> your backup window, then you have manual snapshots.
So you manually create backups.
23480.81 -> If you delete your primary, the manual snapshots
will still exist, and you can they can be
23481.81 -> restored. When you restore an instance it
will create a new database, you just need
23482.81 -> to delete your old database and point traffic
to the new restore database. And you can turn
23483.81 -> on encryption at rest for RDS via kms. So
there you go. That's
it. This is Andrew Brown from ExamPro. And
we are looking at Aurora which is a fully
23485.81 -> managed Postgres or MySQL compatible database,
designed by default to scale and is fine tuned
23486.81 -> to be really really, really fast. Looking
more here at Aurora, it combines the speed
23487.81 -> and availability of a high end database with
the simplicity and cost effectiveness of an
23488.81 -> open source database. So Aurora can either
run on MySQL or Postgres compatible engines.
23489.81 -> And the advantage of using Aurora over just
a standard RDS, Postgres or MySQL engine,
23490.81 -> is the fact that it's fine tuned
for performance. So if you're using MySQL,
23491.81 -> it's five times faster than your traditional
MySQL. And the Postgres version is three times
23492.81 -> more performant than the traditional Postgres.
And the big benefit is the cost. So it's
23493.81 -> 1/10th the cost of other solutions offering
similar performance and availability.
23494.81 -> So let's talk about Aurora scaling, which
is one of its managed features. So it starts
23495.81 -> with 10 gigabytes of storage initially, and
can scale in 10 gigabyte increments all the
23496.81 -> way up to 64 terabytes, so you have a lot
of room for growth here. And storage is auto
scaling, so it just happens automatically. For
computing power, compute resources can scale
23498.81 -> all the way up to 32 vCPUs, and up to 244
gigabytes of memory.
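The storage side of that scaling can be sketched as a simple rule: storage grows in 10 GB increments up to the 64 TB (65,536 GB) ceiling. The exact rounding behavior here is an illustrative assumption.

```python
# Sketch of Aurora's storage auto scaling rule: 10 GB increments, starting
# at 10 GB, capped at 64 TB. The round-up-to-increment behavior is an
# ASSUMPTION used for illustration.
import math

INCREMENT_GB = 10
MAX_GB = 64 * 1024  # 64 TB expressed in GB

def provisioned_storage(used_gb: float) -> int:
    """Round usage up to the next 10 GB increment, capped at 64 TB."""
    stepped = math.ceil(used_gb / INCREMENT_GB) * INCREMENT_GB
    return min(MAX_GB, max(INCREMENT_GB, stepped))

print(provisioned_storage(3))      # 10  -- starts at the 10 GB floor
print(provisioned_storage(41))     # 50  -- next 10 GB step up
print(provisioned_storage(10**9))  # 65536 -- capped at 64 TB
```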
23499.81 -> Let's take a look at Aurora's availability, and
you can see that it's extremely available
23500.81 -> because it runs six copies of your data across
three availability zones, with
23501.81 -> two in each AZ. Okay, so if you were
to lose two copies of your data, it would
23502.81 -> not affect write availability. If you were
to lose three copies of your data, it would
23503.81 -> not affect read availability. So this thing
is super super bomb. Now looking at fault
23504.81 -> tolerance and durability for Aurora backups
and failover are handled automatically if
23505.81 -> you wanted to share your data to another Eva's
account snapshots can be shared. It also comes
23506.81 -> with self healing for your storage so data
blocks and disk are continuously scan for
23507.81 -> errors and repaired automatically. Looking
at replication for Aurora There are two types
23508.81 -> of replicas available we have Amazon or more
replicas in MySQL read replicas knows for
23509.81 -> MySQL we can only have up to five for performance
impact on primary is high. It does not have
23510.81 -> auto Automatic failover However, it does have
support for user defined replication delay,
23511.81 -> or for different data or schema versus primary.
So you have to decide for yourself, which
23512.81 -> one makes more sense for you. But just for
exams, you might need to know there's two
23513.81 -> different types. If for whatever reason they
had you looked into Aurora pricing, you'd
23514.81 -> find out, it's really expensive if you aren't
using it for high production applications.
23515.81 -> So if you're a hobbyist, like me, and you
still want to use Aurora, Aurora has Aurora
23516.81 -> serverless, which is just another mode that
it runs in. And the advantage here is that
23517.81 -> it only runs when you need it to and it can
scale up and down based on your applications
23518.81 -> needs. And so when you set serverless, in
the database features, you're going to have
23519.81 -> this capacity settings, so you can set the
minimum and maximum capacity for a work capacity
23520.81 -> units, also abbreviated as ACU. And so here
it's between two and 384 ac use. And that's
23521.81 -> what it's going to charge you based on only
when it's consumed. So when would you want
to use Aurora serverless? It's really good
for low volume blog sites, maybe a chatbot.
23523.81 -> Maybe you've built an MVP that you are demoing
out to clients, so it's not used very often,
but you plan on using Aurora down the road.
So that's the use case for Aurora. It works
23525.81 -> with both MySQL and Postgres. For over a year,
Postgres wasn't there, but now it is here.
23526.81 -> There are some limitations on the versions
of Postgres and MySQL you can
23527.81 -> use; it used to be only MySQL 5.6. But
last time I checked, I saw 5.6 and 5.7 for
23528.81 -> MySQL. And for Postgres, I saw a lot of
versions. So there is a lot of flexibility
23529.81 -> there for you. But there are some limitations
around that. There's also other things that
23530.81 -> it can't do that Aurora can do. But it's a
big long list. I'm not going to list it here,
23531.81 -> but I just want you to know the utility of
Aurora serverless. We've finished the Aurora section
23532.81 -> and now on to the Aurora cheat sheet where we're
going to summarize everything that we've learned.
23533.81 -> So when you need a fully managed Postgres
or MySQL database that needs to scale, have
23534.81 -> automatic backups, high availability, and
fault tolerance. Think Aurora, Aurora can
23535.81 -> run on MySQL or Postgres database engines.
Aurora, MySQL is five times faster over regular
23536.81 -> MySQL, and Aurora, Postgres is three times
faster over regular Postgres. Aurora is 1/10th
23537.81 -> the cost over its competitors with similar
performance and availability options. Aurora replicates
23538.81 -> six copies of your database across three AZs.
Aurora allows up to 15 Aurora replicas.
23539.81 -> An Aurora database can span multiple regions
via Aurora global database. Aurora serverless,
allows you to stop and start Aurora and scale
automatically while keeping costs low. And
23541.81 -> the ideal use case for serverless is for new
projects or projects with infrequent database
23542.81 -> usage. So there you go. That's everything
23542.81 -> you need to know about Aurora.
23543.81 -> We are looking at Amazon redshift, which is
a fully managed petabyte size data warehouse.
23544.81 -> So what we use a data warehouse for we would
use it to analyze massive amounts of data
23545.81 -> via complex SQL queries. Amazon redshift is
a columnar store database. So to really understand
23546.81 -> what redshift is, we need to understand what
a data warehouse is to understand what a data
23547.81 -> warehouse is, it's good to compare it against
a database and understand this, we need to
23548.81 -> set some foundational knowledge and understand
what a database transaction is. So let's define
23549.81 -> a database transaction. A transaction symbolizes
a unit of work performed within a database
23550.81 -> management system. So an example of a transaction
are reads and writes. It's as simple as that.
23551.81 -> And for database and data warehouse, they're
going to treat transactions differently. And
23552.81 -> so for a database, which we have an online
transactional processing system, an OLTP,
23553.81 -> the transactions are going to be short.
So look at the bottom here, we say short transaction.
23554.81 -> So that means small and simple queries with
an emphasis on writes. Okay, so why would
23555.81 -> we want short transactions for OLTP?
Well, a database was built to store current
23556.81 -> transactions, and enables fast access to specific
transactions for ongoing business processes.
23557.81 -> So they're just talking about I have a web
app. And we need to be very responsive for
23558.81 -> the current user for reads and writes. Okay,
and so that could be adding an item to your
23559.81 -> shopping list, that could be signing up, that
could be doing any sort of thing in a web
23560.81 -> application. And generally, these are backed
by a single source. So a single source would
23561.81 -> be Postgres, which could be running on RDS. And
so that's the idea behind a database. So if
23562.81 -> we go over to the data warehouse side, it
runs on an online analytical processing system,
23563.81 -> an OLAP. And all apps are all about long transaction
so long and complex SQL queries with an emphasis
23564.81 -> on reads. So a data warehouse is built to
store large quantities of historical data
23565.81 -> and enable fast and complex queries
across all data. So the utility here is business
23566.81 -> intelligence tools generating reports. And
a data warehouse isn't a single source;
23567.81 -> it takes data from multiple sources. So
dynamodb, EMR, s3, Postgres all over the place,
23568.81 -> data is coming into one place so that we can
run complex queries, and not too frequently.
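To make the OLTP versus OLAP distinction above concrete, here's a toy sketch in plain Python; the table data is invented. An OLTP-style query fetches one current record fast, while an OLAP-style query aggregates across all historical rows.

```python
# Toy data: each dict is a "row" of order transactions (invented values).
orders = [
    {"id": 1, "customer": "alice", "amount": 30, "year": 2019},
    {"id": 2, "customer": "bob",   "amount": 45, "year": 2019},
    {"id": 3, "customer": "alice", "amount": 20, "year": 2020},
]

# OLTP-style: short transaction, fetch one specific record (like a web app would).
def get_order(order_id):
    return next(row for row in orders if row["id"] == order_id)

# OLAP-style: long query, aggregate across all historical data (like a report).
def total_sales_by_year():
    totals = {}
    for row in orders:
        totals[row["year"]] = totals.get(row["year"], 0) + row["amount"]
    return totals

print(get_order(2)["customer"])   # bob
print(total_sales_by_year())      # {2019: 75, 2020: 20}
```

Same data, two very different access patterns: that difference is why databases and data warehouses are built differently.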
23569.81 -> So now that we know what a data
warehouse is, let's talk about the reasons
23570.81 -> why you'd want to use Redshift. So Redshift
pricing starts at 25 cents per hour with
23571.81 -> no upfront costs or commitments. It scales
up to petabytes of data for $1,000
23572.81 -> per terabyte per year. Redshift is priced at
less than 1/10 the cost of most similar services.
23573.81 -> Redshift is used for business intelligence.
Redshift uses OLAP. Redshift is a columnar
23574.81 -> store database. That's the second time we've
mentioned this. And we really need to understand
23575.81 -> what a columnar storage database is to really
understand the power behind Redshift and data
23576.81 -> warehouses. So columnar storage for
database tables is an important factor
23577.81 -> in optimizing an analytic query performance
because it drastically reduces the overall
23578.81 -> disk IO requirements and reduces the amount
of data you need to load from the disk. So
23579.81 -> columnar storage is the reason why redshift
is so darn fast. And we're going to look at
23580.81 -> that in more detail here. So let's really
cement our knowledge with redshift and show
23581.81 -> a use case example. So here I have, I want
to build my own business intelligence tool.
23582.81 -> And I have a bunch of different sources. So
I have data coming from EMR, I have data coming
23583.81 -> from s3, I have data coming from dynamodb.
And I'm going to copy that data however I
23584.81 -> want. There's a copy command, I'm going to
copy that data into redshift. Okay, so but
23585.81 -> once that data is in there, you say, Well,
how do I interact and access redshift data?
23586.81 -> Normally, you know, most services use the
AWS SDK. But in this case, we're not using the
23587.81 -> SDK, because we just need to make a generic
SQL connection to Redshift. And so if we were
23588.81 -> using Java, and generally you probably will
be using Java if you're using Redshift, you'd
23589.81 -> be using JDBC or ODBC, which are third
party libraries to connect and query Redshift
23590.81 -> data. So, you know, I said columnar storage
is very important to Redshift's performance.
23591.81 -> And so let's conceptually understand what
that means. With a database, we would normally
23592.81 -> be reading via the rows, whereas in an OLAP
system, we're reading via the columns,
23593.81 -> because if we're going to be looking at a
lot of data and crunching it, it's better
23594.81 -> to look at it by columns. Because that
way, if we're reading columns, that allows
23595.81 -> us to store data of the same
datatype together, allowing for easy compression. That
23596.81 -> means that we're going to be able to load
data a lot quicker. And because we're always
23597.81 -> looking at massive amounts of data, at the
same time, we can pull in only the columns
23598.81 -> that we need in bulk, okay,
23599.81 -> and so that's gonna give us much faster performance
for our use case, which is like business intelligence
23600.81 -> tools. So redshift configuration, you can
set it up in two different cluster types.
23601.81 -> So you have single node, which is a great
way to get started on redshift, if you don't
23602.81 -> have a lot of money you want to play around,
you can just launch a single node of 160 gigabytes,
23603.81 -> or you can launch a multi node.
And so when you launch a multi node, you always
23604.81 -> have a leader node, and then you have compute
nodes. And you can add up to 128 compute nodes,
23605.81 -> so you have a lot of computing power behind
you. Now, I just want to point out that when
23606.81 -> you do spin up redshift and multi node, you're
gonna see there's a maximum set of 32. And
23607.81 -> I just said, there's 128. So what's going
on here? Well, it's just one of those sane
23608.81 -> defaults, where AWS wants to be really sure
that you want more than 32. Because, you know,
23609.81 -> if someone comes in day one and wants 128, they
want to make sure that they have the money
23610.81 -> to pay for it. So if you need more than 32
nodes, you just have to go request a
23611.81 -> service limit increase. Now besides
there being different cluster types, there's
23612.81 -> also different node types. And so we have
two that are labeled here: we have DC, dense
23613.81 -> compute, and dense storage, DS, okay. And they
are what they say they are: one is optimized
23614.81 -> for computing power, and one is optimized
for storage. So you know, depending on your
23615.81 -> use case, you're going to choose what type
of node you want. Notice that there are no
23616.81 -> smalls or micros; we only start at
large here. Because if you're doing redshift,
23617.81 -> you're working with large amounts of data.
So you know, that makes total sense. Compression
23618.81 -> is the most important thing
in terms of speed. So Redshift uses multiple
23619.81 -> compression techniques to achieve significant
compression relative to traditional relational
23620.81 -> data stores. Similar data is stored sequentially
on disk. It does not require indexes or materialized
23621.81 -> views, which saves a lot of space compared
to traditional systems. When loading data into
23622.81 -> an empty table, data is sampled and the most
appropriate compression scheme is selected
23623.81 -> automatically. So this is all great information,
but for the exam, you know, it's not so important
23624.81 -> to remember this. You should know what Redshift
is utilized for; these nitty-gritty details, for
23625.81 -> the associate exam, are not so important. Redshift
processing. So, Redshift uses massively parallel
23626.81 -> processing, which they abbreviate as
MPP. It automatically distributes data and
23627.81 -> query loads across all nodes, and lets you
easily add new nodes to your data warehouse
23628.81 -> while still maintaining fast query performance.
So yeah, it's easy to add more compute power
23629.81 -> on demand. Okay, and so we've got Redshift backups.
Backups are enabled by default, with a
23630.81 -> one day retention period, and retention periods
can be modified up to 35 days. All right.
23631.81 -> redshift always attempts to maintain at least
three copies of your data. One is the original
23632.81 -> copy. The second is a replica on the compute
nodes. And then the third is a backup copy
23633.81 -> on S3. And so Redshift can also asynchronously
replicate your snapshots to s3 in a different
23634.81 -> region. So you know, if you need to move your
data region per region, you have that option
23635.81 -> as well. For Redshift billing, there are compute
node hours: the total number of hours run
23636.81 -> across all nodes in the billing period. You're billed
one unit per node per hour, and you're not
23637.81 -> charged for the leader node per hour. So
when you spin up a cluster, and you only have
23638.81 -> one compute node and one leader node,
you're just paying for the compute node. For
23639.81 -> backups, backups are stored on S3 and you're
billed the S3 storage fees, right. So you
23640.81 -> know, just the same thing as usual. And for data
transfer, you're billed only for transfers within
23641.81 -> a VPC, not outside of it. Okay.
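A quick back-of-the-envelope of that billing model in Python. The node count, hours, and hourly rate below are made-up numbers; only the rules (billed per compute node per hour, leader node free, pricing starting around 25 cents per hour) come from the discussion above.

```python
# Redshift compute billing sketch: you pay per compute node per hour;
# the leader node in a multi-node cluster is not billed.
compute_nodes = 3          # hypothetical cluster size (plus one unbilled leader)
hours_in_period = 730      # roughly one month
rate_per_node_hour = 0.25  # example rate: pricing "starts at 25 cents per hour"

node_hours = compute_nodes * hours_in_period  # leader node excluded
bill = node_hours * rate_per_node_hour
print(node_hours, bill)  # 2190 547.5
```

The leader node dropping out of the math is the detail worth remembering for the exam.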
Redshift security. So for data in transit, we
23642.81 -> encrypt using SSL; for data at rest, we can encrypt
using AES-256 encryption. Database encryption
23643.81 -> can be applied using KMS, or you can use Cloud
HSM, and here you can see it's just as easy
23644.81 -> as applying it. Redshift availability. So
redshift is single AZ super important to remember
23645.81 -> this because a lot of services are multi AZ
but redshift is not one of them maybe in the
23646.81 -> future, but maybe not. To run in multi AZ,
you would have to run multiple Redshift clusters
23647.81 -> in different AZs with the same inputs. So
you're basically just running a clone; it's
23648.81 -> all manual labor, right? So there's no managed,
automatic way of doing multi AZ. Snapshots
23649.81 -> can be restored to a different AZ in the event
23650.81 -> an outage occurs. And just to wrap everything
up, we have a really good redshift cheat sheet
23651.81 -> here definitely recommend you print this out
for your exam. And we're going to go through
23652.81 -> everything again. So data can be loaded from
S3, EMR, DynamoDB, or multiple data sources
23653.81 -> on remote hosts. Redshift is a columnar
store database, which can give you SQL-like
23654.81 -> queries and is an OLAP system. Redshift can
handle petabytes worth of data. Redshift is
23655.81 -> for data warehousing. Redshift's most common
use case is business intelligence. Redshift can
23656.81 -> only run in one AZ, so it's single
AZ, it's not multi AZ. Redshift can run via
23657.81 -> a single node or multi node for clusters.
A single node is 160 gigabytes in size. A
23658.81 -> multi node is comprised of the leader node
and multiple compute nodes. You are billed
23659.81 -> per hour for each node excluding the leader
node in multi node, you're not billed for
23660.81 -> the leader node. just repeating that again
there, you can have up to 128 compute nodes.
23661.81 -> Again, I said earlier that the maximum by
default was 32. But they're not going to ask
23662.81 -> you what the default is. redshift has two
kinds of node types dense compute and dense
23663.81 -> storage. And it should be pretty obvious when
you should use one or the other. Redshift attempts
23664.81 -> to back up your data three times: the original,
on the compute node, and on S3. Similar data
23665.81 -> is stored on disk sequentially for faster
reads. Data in the database can be encrypted
23666.81 -> via KMS or CloudHSM. Backup retention is
defaulted to one day and can be increased to
23667.81 -> a maximum of 35 days. Redshift can asynchronously
back up your snapshots
23668.81 -> to another region, delivered via S3. And Redshift
uses massively parallel processing to distribute
23669.81 -> queries and data across all nodes. And in
the case of an empty table, when importing,
23670.81 -> Redshift will sample the data to create a
schema. So there you go. That's Redshift in
23671.81 -> a nutshell, and that should help you for the
exams.
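Before leaving Redshift, the row-store versus column-store idea from this section can be sketched in plain Python. The toy table below is invented; the point is that the columnar layout keeps each column contiguous with one datatype, so an analytic query touches only the column it needs.

```python
# The same toy table (invented data) in row orientation and column orientation.
rows = [
    {"id": 1, "region": "us-east-1", "sales": 100},
    {"id": 2, "region": "us-west-2", "sales": 250},
    {"id": 3, "region": "us-east-1", "sales": 175},
]

# Columnar layout: each column stored contiguously, same datatype together,
# which is what makes compression and bulk column reads cheap.
columns = {
    "id": [1, 2, 3],
    "region": ["us-east-1", "us-west-2", "us-east-1"],
    "sales": [100, 250, 175],
}

# Row store: summing sales walks every row and every field in it.
row_total = sum(r["sales"] for r in rows)

# Column store: summing sales reads exactly one contiguous list.
col_total = sum(columns["sales"])

print(row_total, col_total)  # 525 525
```

Both layouts give the same answer; the columnar one just reads far less data per analytic query, which is the whole trick behind Redshift's speed.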
23672.81 -> Hey, this is Andrew Brown from ExamPro. And
we are looking at DynamoDB, which is a key
23673.81 -> value and document database, a NoSQL database
which can guarantee consistent reads and writes
23674.81 -> at any scale. So let's just double check a
couple things before we jump into DynamoDB.
23675.81 -> So what is a NoSQL database? Well, it
is non-relational and does not use SQL
23676.81 -> to query the data for results, hence the No
SQL part. For NoSQL databases, the method of
23677.81 -> how they store data is different. dynamodb
does both key value store and document store.
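As a quick invented illustration of those two models: a key-value item is just a key and a simple value, while a document item carries structured data as the value. The keys and values here are made up.

```python
import json

# Key-value store: one key, one simple value, nothing more (invented example).
kv_item = {"key": "user#42", "value": "alice"}

# Document store: the value itself is structured data; the whole document
# below counts as a single value in the database.
doc_item = {
    "key": "user#42",
    "value": {
        "name": "alice",
        "addresses": [{"city": "Toronto", "country": "CA"}],
        "active": True,
    },
}

# Documents serialize naturally to and from JSON.
print(json.loads(json.dumps(doc_item))["value"]["name"])  # alice
```

DynamoDB supports both shapes, which is why it's described as a key-value and document database.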
23678.81 -> So a key value store is when you simply have a
key and a value, and nothing more. And then a
23679.81 -> document store is where you have structured
data, right? So this whole thing here would
23680.81 -> be a single value in the database. So again,
DynamoDB is a NoSQL key value and document
23681.81 -> database for internet-scale applications.
It has a lot of functionality behind it: it is
23682.81 -> fully managed, multi-region, multi-master,
a durable database, with built-in security,
23683.81 -> backup and restore, and in-memory caching.
The big takeaway why you'd want to use dynamodb
23684.81 -> is that you just say what you need, you say
I need 100 reads per second, or 100 writes
23685.81 -> per second, and you're guaranteed to get that
it's just based on what you're willing to
23686.81 -> pay. Okay, so scaling is not an issue here,
it's just do you want to pay that amount for
23687.81 -> whatever capacity you need. So when we're
talking about durability, DynamoDB does store
23688.81 -> its data across three availability zones. And we definitely
have fast reads and writes because it's using
23689.81 -> SSD drives. So that's the level of durability.
And the next thing we're going to look into
23690.81 -> is the consistency: because it replicates data
across multiple locations, you could be reading
23691.81 -> copies of data, and we might run into inconsistency.
So we need to talk about those caveats.
23692.81 -> So I just wanted to touch on table structure
here. Because Dynamo DB does use different
23693.81 -> terminologies instead of what a relational
database uses. So instead of a row, they
23694.81 -> call it an item; instead of a column or a cell
or whatever you want to call it, they just
23695.81 -> call it an attribute. And then the other most
important thing is the primary key, which is
23696.81 -> made up of a partition key and a
sort key. And that's all you need to know
23697.81 -> for the solution architect associate for the
other certifications, we have to really know
23698.81 -> this stuff. But this is all we need to know
for this case.
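The terminology mapping above can be shown with a small hypothetical item; the attribute names and values are made up. The item plays the role of a row, each field is an attribute, and the primary key is the partition key plus the sort key.

```python
# Hypothetical DynamoDB-style item: a "row" is an item, each field an attribute.
item = {
    "artist": "Enterprise",   # partition key (made-up attribute name)
    "song_title": "Warp 9",   # sort key (made-up attribute name)
    "year": 2366,             # plain attribute
}

# The composite primary key is the partition key plus the sort key.
primary_key = (item["artist"], item["song_title"])
print(primary_key)  # ('Enterprise', 'Warp 9')
```

For the Solutions Architect exam, knowing that the primary key is partition key plus sort key is the part that matters.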
23699.81 -> So consistency is something that's a very
important concept when we're dealing with
23700.81 -> dynamodb. Because when data is written to
the database, it has to then copy it to those
23701.81 -> other copies. And so if someone was reading
from copy C when an update was occurring,
23702.81 -> there's that chance that you're
reading it before it has the opportunity to
23703.81 -> write it. Okay. And so DynamoDB gives us a
couple options to give us choices on our use
23704.81 -> case. And we'll go through the two. And so
the first one is eventual consistent reads,
23705.81 -> which is the default functionality. And the
idea here is when copies are being updated,
23706.81 -> it is possible for you to read and be returned an
inconsistent copy, okay. But the trade-off
23707.81 -> here is the reads are fast, but there's no
guarantee of consistency; all copies of data
23708.81 -> eventually will become generally consistent
within a second. Okay, so here the
23709.81 -> trade-off is that, you know, you could be reading
it before it's updated. But generally it will
23710.81 -> be up to date. So you have to decide whether
that's the trade off you want. That's default
23711.81 -> option. The other one is strongly consistent
reads, okay, and this is where, when all copies are
23712.81 -> being updated and you attempt to read,
it will not return a result until all copies
23713.81 -> are consistent; you have a guarantee of consistency.
But the trade-off is higher latency, so slower
23714.81 -> reads, but the reads are going to be at most
as slow as a second, because all copies of
23715.81 -> data will be consistent within a second. So
if you can wait up to a second in the case
23716.81 -> of a write, then that's what you'll have to
do. For eventual consistent reads, if you
23717.81 -> can tolerate something being inconsistent because
it's not important, then those are your two
23718.81 -> options. So we're on to the DynamoDB cheat
sheet. If you are studying for the developer
23719.81 -> associate, this would be two pages long, but
since this is for the solution architect associate,
23720.81 -> this is a lot shorter. Okay, so DynamoDB
is a fully managed no SQL key value and document
23721.81 -> database. Applications that contain large
amounts of data but require predictable read
23722.81 -> and write performance while scaling are a good
fit for DynamoDB. DynamoDB scales with whatever
23723.81 -> read and write capacity you specify per second.
DynamoDB can be set to have eventually consistent
23724.81 -> reads, which is the default option, or strongly
consistent reads. For eventually consistent reads,
23725.81 -> data is returned immediately, but data can
be inconsistent; copies of data will be generally
23726.81 -> consistent within one second. Strongly consistent
reads will wait until data is consistent; data
23727.81 -> will never be inconsistent, but latency will
be higher, only up to a second though. Copies
23728.81 -> of data will be consistent within a
guarantee of one second.
23729.81 -> DynamoDB stores three copies of
data on SSD drives across three availability zones. And
23730.81 -> there you go, that's all you need. Hey, this
is Andrew Brown. And we are looking at AWS
23731.81 -> CloudFormation, which is a templating language
that defines AWS resources to be provisioned,
23732.81 -> automating the creation of resources via
code. And all these concepts are called infrastructure
23733.81 -> as code which we will cover again in just
a moment here. So to understand cloud formation,
23734.81 -> we need to understand infrastructure as code
because that is what cloudformation is. So
23735.81 -> let's reiterate what infrastructure as
code is. So it's the process of managing and
23736.81 -> provisioning computer data centers. So in our
case, it's AWS, through machine readable definition
23737.81 -> files. And so in this case, it's cloudformation,
template YAML, or JSON files, rather than
23738.81 -> the physical hardware configuration or interactive
configuration tools. So the idea is to stop
23739.81 -> doing things manually, right. So if you launch
resources in AWS, you're used to configuring
23740.81 -> in the console all those resources, but through
a scripting language, we can automate that
23741.81 -> process. So now let's think about what is
the use case for cloud formation. And so here,
23742.81 -> I have an example, where let's pretend that
we have our own minecraft server business,
23743.81 -> and people sign up on our website and pay
a monthly subscription, and we will run that
23744.81 -> server for them. So the first thing they're
going to do is they're gonna tell us where
23745.81 -> they want the server to run, so they have
low latency, and what size of server. So the
23746.81 -> larger the server, the more performant the
server will be. And so they give us those
23747.81 -> two inputs. And then we somehow send that
to a lambda function, and that lambda function
23748.81 -> triggers to launch a new cloudformation stack
using our cloud formation template, which
23749.81 -> defines, you know, how to launch that server,
that EC2 instance running Minecraft, and
23750.81 -> a security group and what region and what
size. And when it's finished creating, we
23751.81 -> can monitor maybe using cloud watch events
that it's done, and using the outputs from
23752.81 -> that cloud formation stack, send the IP address
of the new minecraft server to the user, so
23753.81 -> they can log in and start using their servers.
So that's a way of automating our infrastructure.
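In that Lambda function, kicking off a new stack per customer might look roughly like this sketch. The parameter names follow boto3's CloudFormation `create_stack` API, but the template URL, parameter keys, and values are all invented for the example.

```python
# Sketch of launching a CloudFormation stack per customer from a Lambda.
# The template URL and parameter names below are hypothetical.
def build_create_stack_params(customer_id, region, instance_size):
    return {
        "StackName": f"minecraft-{customer_id}",
        "TemplateURL": "https://example-bucket.s3.amazonaws.com/minecraft.yaml",
        "Parameters": [
            {"ParameterKey": "InstanceType", "ParameterValue": instance_size},
            {"ParameterKey": "ServerRegion", "ParameterValue": region},
        ],
    }

params = build_create_stack_params("c123", "us-east-1", "t3.large")

# A real Lambda would then call (requires AWS credentials):
#   import boto3
#   boto3.client("cloudformation").create_stack(**params)
print(params["StackName"])  # minecraft-c123
```

The customer's two inputs, region and server size, flow straight into the stack parameters, which is the whole automation loop described above.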
23754.81 -> So we're gonna look at what a cloudformation
template looks like. And this is actually
23755.81 -> one we're going to use later on to show you
how to launch a very simple Apache server.
23756.81 -> But cloudformation comes in two variations.
It comes in JSON, and YAML. So why is there
23757.81 -> two different formats? Well, JSON just came
first. And YAML is an indent-based language,
23758.81 -> which is just more concise. So it's literally
the same thing, except it's indent-based.
23759.81 -> So we don't have to do all these curlies.
And so you end up with something that is,
23760.81 -> in length, half the size. Most people prefer
to write YAML files, but there are edge cases
23761.81 -> where you might want to use JSON. But just
be aware of these two different formats. And
23762.81 -> it doesn't matter which one you use, just
use what works best for you. Now we're looking
23763.81 -> at the anatomy of a cloud formation template.
And so these are made up of a bunch of different
23764.81 -> sections. And here are all the sections listed
out here. And we'll work our way from top
23765.81 -> to bottom. And so the first one is metadata.
So that allows you to provide additional information
23766.81 -> about the template. I don't have one in the
example here, and I rarely ever use metadata.
23767.81 -> But you know, it's just about additional information,
then you have the description. So that is
23768.81 -> just describing what you want this template
to do. And you can write whatever you want
23769.81 -> here. And so I described this template to
launch an EC2 instance running Apache,
23770.81 -> and it's hard coded to work for us-east-1.
Then you have parameters, and parameters is
23771.81 -> something you can use a lot, which is you
defining what inputs are allowed to be passed
23772.81 -> in to this template at runtime. So one
thing we want to ask the user is what size
23773.81 -> of instance type Do you want to use, it's
defaulted to micro, but they can choose between
23774.81 -> micro and nano. Okay, so we can have as many
parameters as we want, which we'll use throughout
23775.81 -> our template to reference, then you have mappings,
which is like a lookup table, it maps keys
23776.81 -> to values, so you can change your values to
something else. A good example of this would
23777.81 -> be, let's say, you have a region. And for
each region, the image ID string is different.
23778.81 -> So you'd have the region keys mapped to different
image IDs based on the region. So that's a
23779.81 -> very common use for mappings. Then you'd have
conditions; these are like your if/else statements
23780.81 -> within your template. I don't have an example
here. But that's all you need to know. Transform
23781.81 -> is very difficult to explain if you don't
know what macros are, but the idea is it's like applying
23782.81 -> a mod to the actual template. And it will
actually change what you're allowed to use
23783.81 -> in the template. So if I define a transform
template, the rules here could be wildly different,
23784.81 -> different based on what kind of extra functionality
that transform adds. We see that with Sam,
23785.81 -> the serverless application model is a transform.
So if you ever take a look at that you'll
23786.81 -> have a better understanding of what I'm talking
about there. Then you have resources which
23787.81 -> is the main show to the whole template. These
are the actual resources you are defining
23788.81 -> that will be provisioned. So think any kind
of resource: IAM role, EC2 instance, Lambda, RDS,
23789.81 -> anything, right? And then you have outputs
and outputs is, it's just what you want to
23790.81 -> see as the end results. So like, when I create
the server, we don't know the IP address
23791.81 -> until it spins up. And so I'm saying
down here, get me the public IP address. And
23792.81 -> then in the console, we can see that IP address,
so that we don't have to, like, go into the
23793.81 -> EC2 console to pull it out. The other advantage
of outputs is that you can pass information
23794.81 -> on to other CloudFormation templates, creating
like a chain of effects, because we have these
23795.81 -> outputs. But the number one thing you need
to remember is what makes a valid template.
23796.81 -> And there's only one thing that is required,
and that is specifying at least one resource.
23797.81 -> All these other fields are optional, but resource
is mandatory, and you have to have at least
23798.81 -> one resource. So if you're looking for cloudformation
templates to learn by example, AWS Quick Starts
23799.81 -> is a great place to do it, because they have
a variety of different categories, where we
23800.81 -> have templates that are pre built by AWS partners
and the APN. And they actually usually show
23801.81 -> the architectural diagram, but the idea is
you can view the template, you don't even have
23802.81 -> to run it, you can just press a button here
and then actually see the raw template. And
23803.81 -> that's going to help you understand how to
connect all this stuff together. Because if
23804.81 -> you go through the AWS documentation, you're
going to have to spend a lot of time figuring
23805.81 -> that out, whereas this might speed that up if
this is your interest. So I just wanted to
23806.81 -> point that out for you. It's not really important
for the exam, it's not going to come up as
23807.81 -> an exam question. It's just a learning resource
that I want you to
23808.81 -> consider.
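To hammer home the "at least one resource" rule from above, here is a minimal valid template built in Python and emitted as JSON. The S3 bucket resource and its logical name are arbitrary examples; Resources is the only required section.

```python
import json

# The smallest valid CloudFormation template: only Resources is required,
# and it must contain at least one resource. "MyBucket" is an arbitrary
# logical name; an S3 bucket is just an easy example resource.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",  # optional section
    "Description": "Minimal valid template",   # optional section
    "Resources": {
        "MyBucket": {"Type": "AWS::S3::Bucket"}
    },
}

print("Resources" in template)  # True
print(json.dumps(template, indent=2))
```

Drop the Resources block and the template is invalid; drop everything else and it still works.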
23809.81 -> We're on to the cloudformation cheat sheet,
please consider that this is specific for
23810.81 -> the solution architect associate. Whereas
for the SIS ops associate, this would be a
23811.81 -> much longer cheat sheet because you have to
know it more in detail. I did add a few additional
23812.81 -> things we did not cover in the core content
just in case they do creep up on the exam.
23813.81 -> I don't think they will, but I threw them
in there just in case. And so let's get through
23814.81 -> this list. So when being asked to automate
the provisioning of resources, think cloudformation.
23815.81 -> When infrastructure as code is mentioned,
think cloud formation. cloudformation can
23816.81 -> be written in either JSON or YAML. When cloudformation
encounters an error, it will roll back with
23817.81 -> ROLLBACK_IN_PROGRESS. Again, this might not show
up in an exam; I'm putting it in there. CloudFormation
23818.81 -> templates larger than half a megabyte are
too large. In that case, you'd have to upload
23819.81 -> from s3. So the most important thing is you
can upload templates directly or you can provide
23820.81 -> a link to an object in an s3 bucket. Okay,
nested stacks help you break up cloudformation
23821.81 -> templates into smaller reusable templates
that can be composed into larger templates.
23822.81 -> At least one resource under Resources must
be defined for a cloudformation template to
23823.81 -> be valid. And then we talk about all the sections.
So we have metadata, that's for extra information
23824.81 -> about your template; description, that describes
what your template should do; parameters, how
23825.81 -> you get user inputs into the template; transform
applies macros; outputs, these are values
23826.81 -> you can import into other stacks.
So it's just output variables. Mappings is
23827.81 -> like a lookup table. So it maps keys to values
resources, define the resources you want to
23828.81 -> provision. And again, I repeat it, at least
one resource is required. All these other sections
23829.81 -> are optional. And conditions, these
are like your if/else statements within your CloudFormation
23830.81 -> templates. So there you go. We're all done
with cloudformation. Hey, this is Andrew Brown
23831.81 -> from exam Pro. And we are looking at cloudwatch,
which is a collection of monitoring services
23832.81 -> for logging, reacting and visualizing log
data. So I just want you to know that cloud
23833.81 -> watch is not just one service, it's multiple
services under one name. So we have cloudwatch,
23834.81 -> logs, cloudwatch metrics, cloud watch events,
cloud watch alarms, and cloud watch dashboards,
23835.81 -> I'm not going to go through the list here,
because we're going to cover each section
23836.81 -> and then we'll cover it in the cheat sheet.
But just so you know, the most important thing
23837.81 -> to know is that their cloud watch is not a
single service, it's multiple services. So
23838.81 -> it's time to look at cloudwatch logs, which
is the core service of cloud watch. All the
23839.81 -> other cloud services are built on top of this
one. And it is used to monitor store and access
23840.81 -> your log file. So here we have a log file.
And logs belong within a log group; they cannot
23841.81 -> exist outside of a log group. So here I
have one called production dot log, which
23842.81 -> is a Ruby on Rails application. And it contains
multiple log files over a given period of
23843.81 -> time, and this is inside of those log files.
And we have the ability to like filter that
23844.81 -> information and do other things with it. So
log files are stored indefinitely by default,
23845.81 -> and they never expire. Okay, so you don't
ever have to worry about losing this data.
23846.81 -> And most AWS services are integrated with CloudWatch
Logs by default. Now there are
23847.81 -> actually multiple cases where
you have to turn on CloudWatch Logs, or you
23848.81 -> have to add IAM permissions. So like when
you're creating a Lambda function, the default
23849.81 -> permissions allow you to write to logs. But
the thing is, you wouldn't normally realize
that you're enabling it. So anyway, that's
23850.81 -> CloudWatch Logs. So now we're going to take a
23851.81 -> look at cloudwatch metrics, which is built
on top of logs. And the idea behind this is
23852.81 -> it represents a time ordered set of data points,
23852.81 -> or you can think of it as a variable to monitor.
So within the logs, we'll have that data, and
23853.81 -> it extracts it out as data points, and then we
23854.81 -> can graph it, right. So in this case, I'm
showing you for an EC two instance. So you
23855.81 -> have some network in coming into that EC two
instance. And you can choose that specific
23856.81 -> metric, and then get a visual of it. So that
is cloudwatch metrics. Now, these metrics
23857.81 -> are predefined for you. So you don't have
23857.81 -> to do anything. To leverage this, you just
23858.81 -> have to have logs enabled on specific services.
And these metrics will become available when
23859.81 -> data arrives. Okay.
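Pulling one of those predefined metrics programmatically could look like this sketch; the parameter names follow boto3's CloudWatch `get_metric_statistics` API, but the instance ID and time window are invented.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical request for the NetworkIn metric of one EC2 instance,
# matching the predefined EC2 metrics example above.
end = datetime(2020, 1, 1, tzinfo=timezone.utc)
request = {
    "Namespace": "AWS/EC2",
    "MetricName": "NetworkIn",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # made-up ID
    "StartTime": end - timedelta(hours=1),
    "EndTime": end,
    "Period": 300,             # 5-minute datapoints
    "Statistics": ["Average"],
}

# A real script would call (requires AWS credentials):
#   import boto3
#   boto3.client("cloudwatch").get_metric_statistics(**request)
print(request["MetricName"])  # NetworkIn
```

Nothing needs to be created first; because the metric is predefined, the request just names the namespace, metric, and dimension.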
23860.81 -> So now we're going to take a look at cloudwatch
events, which builds off of metrics and logs
23861.81 -> to allow you to react to your data and
take an action on that, right. So we can specify
23862.81 -> an event source based on an event pattern
or a schedule, and that's going to then trigger
23863.81 -> to do something in a target, okay. And a very
good use case for this would be to schedule
23864.81 -> something that you'd normally do in a
crontab. So maybe you need to back up a server
23865.81 -> once a day. So you trigger that, and then
there's probably like, EBS snapshot in here.
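A daily-backup schedule like that could be expressed with a rule along these lines; the rule name, target ID, and ARN below are hypothetical, and the shapes loosely follow the CloudWatch Events `put_rule`/`put_targets` APIs.

```python
# Sketch of a CloudWatch Events scheduled rule, the managed replacement
# for a crontab entry. The rule name and target below are hypothetical.
rule = {
    "Name": "nightly-backup",                  # made-up rule name
    "ScheduleExpression": "cron(0 3 * * ? *)", # every day at 03:00 UTC
    "State": "ENABLED",
}

# The rule then points at a target, e.g. a Lambda that snapshots an EBS volume.
# The ARN here is illustrative only.
target = {
    "Rule": "nightly-backup",
    "Targets": [{"Id": "backup-fn", "Arn": "arn:aws:lambda:us-east-1:111122223333:function:backup"}],
}

print(rule["ScheduleExpression"])  # cron(0 3 * * ? *)
```

Swap the schedule expression for an event pattern and the same rule-plus-target shape reacts to events instead of a timer.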
23866.81 -> But you know, that's the idea behind it here.
Okay, so you can either trigger based on a
23867.81 -> pattern or a timeframe. And it has a lot of
different inputs here, it's not even worth
23868.81 -> going through them all. But EBS snapshot,
and Lambda are the most common. So we were looking
23869.81 -> at CloudWatch metrics, and we had a bunch
of predefined ones that came for us. But let's
23870.81 -> say we wanted to make our own custom metric,
well, we can do that. And all we have to do
23871.81 -> is use the AWS CLI, the command line
interface, or the SDK, software development
23872.81 -> kit, we can programmatically send data for
custom metrics. So here I have a custom metric
23873.81 -> for the Enterprise D, which is namespaced under
Starfleet. And we're collecting dimensions
23874.81 -> such as hull integrity, shields and thrusters.
Okay, so we can send any kind of data that
23875.81 -> we want and publish that to CloudWatch
metrics. Now another cool feature about custom
23876.81 -> metrics is that it opens the opportunity for
us to have high resolution metrics, which
23877.81 -> can only be done through custom metrics. So
if you want data at an even more
23878.81 -> granular level, below one minute, with high
resolution metrics, you can go down to one
23879.81 -> second. And we have these intervals: you can
do one second, five second, ten second, or
23880.81 -> thirty seconds. But generally, you know, if you
can turn it on, you're probably gonna want
23881.81 -> to go as low as possible. The higher the frequency,
the more it's going to cost you. So do take
23882.81 -> that into consideration. But the only way to
get high resolution metrics is through a custom
23883.81 -> metric. So now we're taking a look at cloudwatch
alarms, which trigger a notification
23884.81 -> when a metric breaches a
defined threshold. A very common use case is
a billing alarm. It's like one of the first
things you want to do when you set up your
23886.81 -> AWS account. And so here we have some options
when we go set our alarm. So we can say whether
it's static or it's anomaly. What is the condition?
So does it trigger when it's
23888.81 -> greater than, equal to, lower than, etc.? And
what's the amount? So, you know, on my account,
23889.81 -> I'm watching for $1,000, if it's under $1,000,
I don't care. And if it goes over, please
23890.81 -> send me an email about it. And that is the
utility there. So there you go: CloudWatch
23891.81 -> alarms. So now it's time to look at CloudWatch
dashboards, which, as the name implies, allows
23892.81 -> you to create dashboards. And this is based
off of cloudwatch metrics. So here, we have
a dashboard in front of us. And we add widgets.
And we have all sorts of kinds here: graphs,
23894.81 -> bar charts, etc. And you drag them on,
you pick your options, and then you have to
23895.81 -> just make sure you hit that Save button. And
there you go. So it's really not that complicated.
23896.81 -> Just when you need a visualization of your
data. You know, think about using cloud watch.
23897.81 -> So I just wanted to quickly touch on availability
of data, and how often cloudwatch updates
23898.81 -> the metrics that are available to you because
it varies by service, and the ones we really
23899.81 -> need to know are these two, because this does
creep into a few exam questions. So by
23900.81 -> default, when you're using EC2, it monitors
at a five minute interval. And if
23901.81 -> you want to get down to one minute, you have
to turn on detailed monitoring, which costs
money, okay? For all other services, it's
going to be between one minute, three minutes,
23903.81 -> or five minutes. There might be a few other
services that have detailed monitoring; I
23904.81 -> feel like ElastiCache might have it.
But generally, all you have to worry about
23905.81 -> is EC2; the majority of services are
by default one minute. So that's why I just
23906.81 -> had to really emphasize this: because EC2
does not default to one minute, it's five
23907.81 -> minutes. And to get that one minute, you have
to turn on detailed monitoring. I just want
23908.81 -> to make you aware that cloudwatch doesn't
track everything you'd normally think it would
23909.81 -> track for an EC2 instance. And specifically,
if you wanted to know like your memory utilization,
23910.81 -> or how much disk space was left on your server,
it does not track that by default. Because
those are host level metrics. Those
23912.81 -> are more detailed metrics. And in order
to gather that information, you need to install
23913.81 -> the CloudWatch agent. And the CloudWatch
agent is a script which can be installed
23914.81 -> via the Systems Manager Run Command; it probably
comes preinstalled on Amazon Linux 1 and
23915.81 -> Amazon Linux 2. And so you know, if you need
those more detailed metrics, such as memory
23916.81 -> and disk space, you're gonna have to install
that. But these ones you already have by default:
23917.81 -> so there is disk usage, there is network usage
and CPU usage. The disk usage here is limited.
23918.81 -> I can't remember what they are off the top of my head, but
it's not like disk space, like, do I have 40% left
23919.81 -> in disk space, okay. So you know, just be
aware of these two things, okay.
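The custom metrics described earlier (the Enterprise D example) are published with PutMetricData via the CLI or SDK, which is also where high-resolution intervals come in. A rough boto3-flavoured sketch, with illustrative namespace, metric, and dimension names, and the network call commented out:

```python
# Custom metric payload in the shape PutMetricData expects.
# Namespace, metric name, and dimension values are illustrative.
metric_data = {
    "Namespace": "Starfleet",
    "MetricData": [
        {
            "MetricName": "HullIntegrity",
            "Dimensions": [{"Name": "Ship", "Value": "EnterpriseD"}],
            "Value": 98.5,
            "Unit": "Percent",
            # High-resolution metrics are only possible via custom metrics:
            # 1 = one-second resolution, 60 = standard one-minute.
            "StorageResolution": 1,
        }
    ],
}

# With credentials configured:
# import boto3
# boto3.client("cloudwatch").put_metric_data(**metric_data)
```

The CloudWatch agent publishes its memory and disk metrics through the same mechanism, which is why they show up as custom metrics rather than the built-in EC2 ones.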
23920.81 -> Hey, this is Andrew Brown from exam Pro. And
we are going to do a very short follow along
here for CloudWatch. So if you're taking the
23921.81 -> Solutions Architect Associate, you don't need
to know a considerable amount about CloudWatch
23922.81 -> in terms of details; that's more for
the SysOps Associate. But we do need to generally
know what's going on here. So maybe the first
23924.81 -> thing we should learn how to do is create
an alarm. Okay, so alarms trigger
23925.81 -> when metrics breach
a certain threshold. So I have a bunch here
23926.81 -> already, because I was creating some dynamodb
tables, whenever you create dynamodb, you
23927.81 -> always get a bunch of alarms, I'm going to
go ahead here and create a new alarm, the
most common alarm to create is a
billing alarm. So maybe we could go ahead
23929.81 -> and do that. So under billing, we're going
to choose total estimated charge, we're gonna
23930.81 -> choose USD, I'm going to select metric, okay.
And, you know, we can choose the period of
23931.81 -> time we want it to happen, we have the static
and anomaly detection, we have whether we
23932.81 -> need to determine when it should get triggered.
So we would say when we go over $1,000, okay,
we should get alerted: $1,000 USD. Okay. So
it's not letting me fill it in there. There
23934.81 -> we go. When we hit that metric there, then
we should get an email about it. All right.
23935.81 -> And so we are going to go ahead and just hit
next there. And for this alarm to work, we
23936.81 -> need to have an SNS topic. Okay, so we're
gonna create a new topic here. And I'm just
23937.81 -> gonna say Andrew at exam pro.co. All right.
And what we'll end up doing here is we'll
just hit next, oops, we have to hit the create topic
23938.81 -> button there. Okay, so it's created that topic,
23939.81 -> we'll hit next. And we'll just define this
as billing alarm. Okay. And we will hit next
23940.81 -> here, and we will create the alarm. And so
now we have an alarm. So anytime billing goes
23941.81 -> over $1,000, it's going to send us an email,
it's very unlikely this is going to happen
23942.81 -> within this account. Because I'm not spending
that much here. It does have to wait for some
23943.81 -> data to come in. So it will say insufficient
data to begin with. And it's still waiting
23944.81 -> for pending confirmation. So it is waiting
for us to confirm that SNS topic. So I'm just
23945.81 -> going to hop over to my email and just confirm
that for you very quickly here. And so in
23946.81 -> very short amount of time I've received here
a subscription. So I'm just gonna hit confirm
23947.81 -> subscription here. Okay, it's just going to
show that I've confirmed that, okay. And I'm
23948.81 -> just going to go ahead and close that here.
And we'll just give this a refresh. Alright,
23949.81 -> so that pending confirmation is gone. So that
means that this billing alarm isn't an OK
23950.81 -> status, and it is able to send me emails,
when that does occur. Okay, so there's a few
23951.81 -> different ways to set alarms, sometimes you
can directly do it with an EC2 instance. So I just
23952.81 -> want to show you here, okay. So we're just
23953.81 -> going to go over to EC2. I don't think we
23953.81 -> have anything running over here right now.
Okay, and so I'm just going to launch a new
23954.81 -> instance because I just want to show that
to you. We're going to go to Amazon Linux
23955.81 -> two. We're going to go to configuration next.
Now there is this option here for detailed
23956.81 -> monitoring. And this is going to provide monitoring
for every minute as opposed to every five
23957.81 -> minutes by default. Okay, so this does cost
additional money. But I'm just going to turn
23958.81 -> it on here, for the sake of this follow along.
Okay, I'm just going to give it a key pair
23959.81 -> here. And then I'm just going to go to View
instances here. And I just want to show you
23960.81 -> that under the monitoring tab, we do get a
bunch of metrics here about this EC two instance.
23961.81 -> And if you wanted to create an alarm, it's
very convenient, you can actually just do
23962.81 -> it from here. So if you have an EC two instance,
and you want to send an alarm for this here,
23963.81 -> it could be for a variety of things. So take an action,
so send a notification here. And also, you
23964.81 -> could stop the instance. So we could say,
when the CPU utilization goes over 50%, shut
23965.81 -> down the server. Okay. And so that's one very
easy way to create alarms. All right. Okay,
um, so, you know, it's good to
know that a lot of services are like that,
23967.81 -> I bet if we went over to dynamodb, I bet it's
the same thing. So if we go over to Dynamo
23968.81 -> dB, okay, and we go to tables here, and we
were using this for another tutorial here,
23969.81 -> and we create an alarm, okay, it's the same
story. So you're gonna want to take a peek
23970.81 -> at different services, because they do give
you some basic configurations that
23971.81 -> make it very easy to set up alarms. You
can, of course, always do it through here.
23972.81 -> But it's a lot easier to do that through a
lot of the services. Okay. And so I think
23973.81 -> maybe what we'll do here is look at events.
Next, we're going to take a look now at cloudwatch
23974.81 -> events, which has been renamed to Amazon EventBridge.
So these are exactly the same service.
23975.81 -> So AWS added some additional functionality,
such as the ability to create additional event
23976.81 -> buses, and to use partner event sources. And
so they gave it a rebranding, okay.
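The scheduled rule built in the console in the walkthrough below can also be sketched via the API, which CloudWatch Events and EventBridge share. The rule name and target ARN here are placeholders; a real snapshot target would typically be a Lambda function that calls EC2's CreateSnapshot on your volume.

```python
# A scheduled rule: fire once a day, like a serverless crontab.
rule = {
    "Name": "daily-ebs-snapshot",
    "ScheduleExpression": "rate(1 day)",  # or e.g. cron(0 5 * * ? *)
    "State": "ENABLED",
}

# The rule needs a target to invoke; here a placeholder Lambda ARN.
targets = {
    "Rule": rule["Name"],
    "Targets": [
        {
            "Id": "snapshot-target",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:ebs-backup",
        }
    ],
}

# With credentials configured:
# import boto3
# events = boto3.client("events")
# events.put_rule(**rule)
# events.put_targets(**targets)
```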
23977.81 -> So the way it works is, and we'll just do
this through cloudwatch events here. And then
23978.81 -> we'll also do it through the new interface.
But the idea is, you generally will create
23979.81 -> rules within cloud watch. And so you have
the ability to do from an event pattern, okay?
23980.81 -> Or from a schedule. All right. So we are going
to actually just do from schedule, because
23981.81 -> that's the easiest one to show here. And so
based on a schedule, we could say every day,
all right, once a day, I want to create a
backup of an EBS volume. So we have a bunch
23983.81 -> of options here. Okay. And this is a very
common one. So I actually have a minecraft
23984.81 -> server that I run, I like to backup the volume
at least once a day. And so this is the way
23985.81 -> I would go about doing that. So here, I just
have to supply it. Oops, we actually want
23986.81 -> to do the snapshot here. And so I would just
have to supply the volume. So here I have
23987.81 -> an EC2 instance running from earlier in
this follow along here. And so I'm just going
23988.81 -> to provide the volume ID there. Okay. And
so once that is there, I can hit configure
23989.81 -> details and say EBS snapshot, okay. Or we
just say volume snapshot, doesn't matter.
23990.81 -> Just I'm being picky here. And we'll create
that. And so now we have that rule. So once
23991.81 -> a day, it's going to create that snapshot
for us. We're gonna go ahead and do that in
23992.81 -> EventBridge; you're gonna see it's the same process.
It's just this new UX, or UI design. Whether
23993.81 -> it is an improvement over the old one is questionable,
because one thing that people always
23994.81 -> argue about with AWS is the changes to the
interface. Here, we're going to see the same
23995.81 -> thing of that pattern and schedule, I'm going
to go to schedule here, we're going to choose
23996.81 -> a one day, alright. And you can see now we
choose our Event Bus. And so whenever
23997.81 -> we're creating rules here, it's always using
the default Event Bus. But we can definitely
23998.81 -> create other event buses and use partner events.
Okay. And we're just gonna drop this down
23999.81 -> here and choose Create a snapshot here. I
don't know if it's still in my clipboard; it is,
24000.81 -> there we go. And we'll just create that. So
you're gonna see that we can see both of them
here. Okay, so we can see the EBS one and the snapshot one.
And if we go back to our rules here, we should
24002.81 -> be able to see both, is it just the one or
both? Yeah, so they're both so you can just
24003.81 -> see that they're the exact same service. Okay.
And just to wrap up talking about Amazon event
24004.81 -> bridge, I just want to show you that you can
create multiple event buses here. So if we
24005.81 -> go and create an event bus, you can actually
create an event bus that is shared from another
AWS account. Okay, so you could actually react,
within your system, to an event from another
24007.81 -> actual account, which is kind of cool. And
then you also have your partner event sources.
24008.81 -> So here you could react to data from Datadog
or something that has to do with logging.
24009.81 -> So you know, there are some ways to react
cross account. Okay, so that's just the point
24010.81 -> I wanted to make there. All right. And we're
going to just check one more thing out here,
24011.81 -> which is a CloudWatch dashboard. So CloudWatch
dashboards allow you to take a bunch of
24012.81 -> metrics and put them on a dashboard. So I'm
just going to make one here, my EC2 dashboard,
24013.81 -> because we do have an EC two instance running
24014.81 -> And what we can do here is just start adding
things. So I could add a line graph, okay.
24015.81 -> And we do have a running EC two instance.
So we should be able to get some information
24016.81 -> there. So let's say per instance metrics here,
and we will see if we can find anything that's
24017.81 -> running, should be something running here.
I actually, you know what, I think it's just
24018.81 -> this one here; we didn't name that instance.
That's why I'm not seeing anything there.
24019.81 -> Okay, I'm just gonna create that there. And
so you know, not a lot of stuff is happening
24020.81 -> with that instance. So that's why we're not
seeing any data there. But if there was, we
24021.81 -> would start to see some spikes there. But
all you need to know is that you can create
dashboards like these, and we can create widgets
based on metric information. And just be sure
24023.81 -> to hit that save dashboard button. It's very
non intuitive, this interface here, so maybe
24024.81 -> this will get a refresh one day, but yeah,
there you go. That's dashboards. So that wraps
24025.81 -> up the cloud watch section here. So what we're
gonna want to do is, we're just going to want
24026.81 -> to tear down whatever we created. So let's
go to our dashboard. And I believe we can
24027.81 -> delete it, how do we go about doing it, we
go to delete dashboard. These dashboards, you
24028.81 -> get like a few free, but they do cost in the
long term. Then we're going to tear down
24029.81 -> our alarm, because alarms actually
do cost money if you have a lot of them,
24030.81 -> so let's get rid of ones that we aren't
using. Okay, then we will go to our rules
24031.81 -> here. And we will just go ahead and delete
these rules, okay, I'm just disabling them,
24032.81 -> I actually want to delete them. Okay, and
I believe I started an EC two instance. So
24033.81 -> we're gonna just go over to our instances
here, and terminate. Okay, so there we go.
24034.81 -> That's a full cleanup there. Of course, if
you're doing the SysOps, you have to
24035.81 -> know CloudWatch in greater detail, but this
is just generally what you need to know for
24036.81 -> the Solutions Architect Associate, and likely
the Developer as well. Now you're on to the CloudWatch
24037.81 -> cheat sheet. So let's jump into it. So Cloud
watch is a collection of monitoring services.
24038.81 -> We have dashboards, events, alarms, logs and
metrics, starting with logs. First, it is
the core service to all of CloudWatch. And
it logs data from AWS services. So a very
24040.81 -> common thing that you might log would be
CPU utilization. Then we go on to metrics
24041.81 -> and metrics builds off of logs, and it represents
a time ordered set of data points. It is a
24042.81 -> variable to monitor. So let's go back to CPU
utilization and visualize it as a line graph.
24043.81 -> That is what metrics does. Then we go on to
cloudwatch events, which triggers an event
24044.81 -> based on a condition, a very common use case
is maybe you need to take a snapshot of your
24045.81 -> server every hour, I like to think of events
as a serverless crontab, because that's how
24046.81 -> I use it. Then you have alarms, which triggers
notifications based on a metric when a defined
24047.81 -> threshold is breached. So a very common use
case is a billing alarm. So if we go over
24048.81 -> $1,000, I want an email about it, you got
to tell me, then you got cloudwatch dashboards,
24049.81 -> as the name implies, it's a dashboard. So
it creates visualizations based on metrics.
24050.81 -> There are a couple of exceptions when we're
dealing with EC2 and CloudWatch. And the
24051.81 -> first is that EC2 monitors at an interval of
five minutes. And if you want to get that
24052.81 -> one minute interval, you have to turn on detailed
monitoring. Most services do monitor at one
24053.81 -> minute intervals, and if they don't, it's
going to be the one, three, or five minute interval.
24054.81 -> Logs must belong to a log group. The CloudWatch
agent needs to be installed on EC2 hosts
24055.81 -> if you want to get memory usage or disk size,
because that doesn't come by default. You
24056.81 -> can stream custom log files to CloudWatch
logs. So maybe if you're gonna have a Ruby
24057.81 -> on Rails app, you have a production log, and
you want to get that in cloud watch logs,
24058.81 -> you can do that. And then the last thing is
CloudWatch custom metrics. Custom metrics allow you to
24059.81 -> track high resolution metrics, so that you
can have sub minute intervals, tracking all
24060.81 -> the way down to one second. So if you need
something more granular, you can only do that
24061.81 -> through custom metrics. So there you go. That's
CloudWatch.
24062.81 -> Hey, this is Andrew Brown from Exam Pro. And
we are looking at CloudTrail, which is used
24063.81 -> for logging API calls between AWS services.
And the way I like to think about this service:
24064.81 -> It's when you need to know who to blame. Okay,
so as I said earlier, cloud trail is used
24065.81 -> to monitor API calls and actions made on an
AWS account. And whenever you see these keywords
24066.81 -> governance, compliance, operational auditing
or risk auditing, it's a good indicator, they're
24067.81 -> probably talking about AWS CloudTrail.
Now, I have a record over here to give you
24068.81 -> an example of the kinds of things that cloud
trail tracks, to help you know who you can
24069.81 -> blame when something's gone
wrong. And so we have the where, when, who
24070.81 -> and what. So the where: we have the account
ID, like which account did it happen
24071.81 -> in, and we have the IP address of the person who
created that request. The when: so the time
24072.81 -> it actually happened. The who: so we have
the user agent, which, you know, could
24073.81 -> tell you the operating system,
the language, the method of making this API
24074.81 -> call, and the user itself. So here we can see Worf
made this call. And the what: so to what service,
24075.81 -> and you know, it'll say what region and what
service. So this record, it's using IAM
24076.81 -> here, and the action, so it's creating a
user. So there you go, that is CloudTrail
24077.81 -> in a nutshell.
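The where/when/who/what record described above can be queried programmatically out of the event history. A minimal sketch, assuming boto3 and credentials; the username matches the example record, and the API call is commented out:

```python
# Parameters for CloudTrail's LookupEvents call: filter the 90-day
# event history down to API calls made by one user.
lookup = {
    "LookupAttributes": [
        {"AttributeKey": "Username", "AttributeValue": "Worf"}
    ],
    "MaxResults": 50,
}

# With credentials configured:
# import boto3
# events = boto3.client("cloudtrail").lookup_events(**lookup)["Events"]
# Each event carries the where (account, source IP), when (EventTime),
# who (Username, user agent), and what (EventName, EventSource).
```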
24078.81 -> So within your AWS account, you actually
already have CloudTrail logging things by
24079.81 -> default, and it will collect the last
90 days under the event history here. And
24080.81 -> we get a nice little interface here. And we
can filter out these events. Now, if you need
24081.81 -> logging beyond 90 days, and that is a very
common use case, you definitely want
24082.81 -> to create your own trail; you'd have to create
a custom trail. The only downside when you
24083.81 -> create a custom trail is that it doesn't have
a GUI like here, such as event history.
24084.81 -> So there is some manual labor involved to
visualize that information. And a very common
24085.81 -> method is to use Amazon Athena. So if you
see CloudTrail and Amazon Athena being mentioned
24086.81 -> in unison, there's a reason for that, okay.
So there's a bunch of trail options, I want
24087.81 -> to highlight and you need to know these, they're
very important for cloud trail. So the first
24088.81 -> thing you need to know is that a trail can
be set to log in all regions. So we have the
24089.81 -> ability here, say yes, and now, no region
is missed. If you are using an organization,
24090.81 -> you'll have multiple accounts, and you want
to have coverage across all those. So in a
24091.81 -> single trail, you can checkbox on apply
to my entire organization. You can encrypt
24092.81 -> your CloudTrail logs, which you definitely
want to do using server side encryption via
24093.81 -> Key Management Service, which is abbreviated
SSE-KMS. And you want to enable log file validation,
24094.81 -> because this is going to tell you whether someone's
actually tampered with your logs. So it's
24095.81 -> not going to prevent someone from being able
to tamper with your logs. But it's going to
24096.81 -> at least let you know how much you can trust
your logs.
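The trail options just listed map directly onto the CreateTrail API. A sketch with placeholder bucket and key names, and the call commented out; an organization trail additionally has to be created from the organization's management account:

```python
# Trail settings mirroring the options above: all regions, the whole
# organization, SSE-KMS encryption, and log file validation.
trail = {
    "Name": "exampro-trail",
    "S3BucketName": "exampro-trails",     # placeholder bucket
    "IsMultiRegionTrail": True,           # log in all regions
    "IsOrganizationTrail": True,          # cover every account in the org
    "EnableLogFileValidation": True,      # detect log tampering
    "KmsKeyId": "alias/exampro-trails",   # SSE-KMS encryption
}

# With credentials configured:
# import boto3
# boto3.client("cloudtrail").create_trail(**trail)
```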
24097.81 -> So I do want to emphasize that cloud trail
can deliver its events to cloudwatch. So there's
24098.81 -> an option after you create the trail where
you can configure it, and then it will send your
24099.81 -> events to cloud watch logs. All right, I know
cloud trail and cloud watch are confusing,
24100.81 -> because they seem like they have overlapping
responsibilities. And there are a lot of
24101.81 -> AWS services that are like that. But you
know, just know that you can send CloudTrail
24102.81 -> events to CloudWatch Logs, not the other
way around. And there is that ability to do so.
24103.81 -> There are different types of events in cloud
trail: we have management events, and data
24104.81 -> events. And generally, you're always looking
at management events, because that's what's
24105.81 -> turned on by default. And there's a lot of
those events. So I can't really list them
24106.81 -> all out for you here. But I can give you a
general idea what those events are. So here
24107.81 -> are four categories. So it could be configuring
security. So you have attach role policy,
24108.81 -> you'd be registering devices, it would be
configuring rules for routing data, it'd be
24109.81 -> setting up logging. Okay. So 90% of events
in cloud trail are management events. And
24110.81 -> then you have data events. And data events
are actually only for two services currently.
24111.81 -> So if you were creating your trail, you'd
see tabs, and I assume as soon as they have other
24112.81 -> services that can leverage data events, we'll
see more tabs here. But really, it's just
24113.81 -> s3 and lambda. And they're turned off by default,
for good reason. Because these events are
24114.81 -> high volume. They occur very frequently. Okay.
And so this is tracking in more detail S3
24115.81 -> events, such as get object, delete object,
put object; if it's a Lambda, it'd be every
24116.81 -> time it gets invoked. So those are just higher volume
there. And so those are turned off by default.
24117.81 -> Okay. So now it's time to take a quick tour
of cloud trail and create our very own trail,
24118.81 -> which is something you definitely want to
do in your account. But before we jump into
24119.81 -> doing that, let's go over to event history
and see what we have here. So AWS, by default
24120.81 -> will track events in the last 90 days. And
this is a great safeguard if you have yet
24121.81 -> to create your own trail. And so we have some
event history here. And if we were just to
24122.81 -> expand any of them, doesn't matter which one,
and click View event, we get to
24123.81 -> see what the raw data looks like here for
a specific event. And we do have this nice
24124.81 -> interface where we can search via time ranges
and some additional information. But if you
24125.81 -> need data now beyond 90 days, you're going
to have to create a trail. And also just to
24126.81 -> analyze this, because we're not going to have
this interface, we're gonna have to use Athena
24127.81 -> to really make sense of any cloud trail information.
But now that we have learned that we do have
24128.81 -> event history available to us, let's move
on to creating our own trail. Let's go ahead
24129.81 -> and create our first trail. And I'm just going
to name my trail here exam pro trail, I do
24130.81 -> want you to notice that you can apply a trail
to all regions, and you definitely want to
24131.81 -> do that, then we have management events, where
we can decide whether we want to have read
24132.81 -> only or write only events, we're going to
want all of them, then you have data events.
24133.81 -> Now these can get expensive, because s3 and
lambda, the events that they're tracking are
24134.81 -> high frequency events. So you can imagine
how often someone might access something from
24135.81 -> an s3 bucket, such as a get or put. So they
definitely do not include these. And you have
24136.81 -> to check them on here to have the inclusion
of them. So if you do want to track data events,
24137.81 -> we would just say for all our s3 buckets,
or specify them and lambdas are also high
24138.81 -> frequency because we would track the invocations
of lambdas. And you could be in the 1000s
24139.81 -> upon millions there. So these are sensibly not
included by default. Now down below, we need
24140.81 -> to choose our storage location, we're going
to let it create a new S3 bucket for us;
24141.81 -> that seems like a good choice. We're going
to drop down advanced here because it
24142.81 -> had some really good tidbits here. So we can
turn on encryption, which is definitely something
24143.81 -> we want to do with kms. And so I apparently
have a key already here. So I'm just gonna
24144.81 -> add that I don't know if that's the default
key. I don't know, if you get a default key
24145.81 -> with cloud trail, usually, you'd have one
in there. But I'm just going to select that
24146.81 -> one there, then we have enable log file validation.
So we definitely want to set this to Yes;
24147.81 -> it's going to check whether someone's ever
tampered with our logs, and whether we should
24148.81 -> or should not be able to trust our logs. And then we
could send a notification about log file delivery,
24149.81 -> this is kind of annoying, so I don't want
to do that. And then we should be able to
24150.81 -> create our trail as soon as we name our bucket
here. So we will go ahead and just name it
24151.81 -> will say exam pro trails, assuming I don't
have one in another account. Okay, and so
24152.81 -> it doesn't like that one, that's fine. So
I'm just going to create a new kms key here.
24153.81 -> KMS keys do cost a buck per month, so if you want
to skip this step you can totally do so. I'm
24154.81 -> just going to create one for this here called
exam pro trails.
24155.81 -> Okay. Great. And so now it has created that
trail. And we'll just leave this here. And
24156.81 -> then maybe we'll take a peek here in that
s3 bucket when we do have some data. Alright,
24157.81 -> I do want to point out one more thing is that
you couldn't set the trail
24158.81 -> to track across an entire organization; I didn't
see that option there. It's probably because
24159.81 -> I'm in a sub account. So if
you have an AWS organization, right, and
24160.81 -> this was the root account, I bet I could probably
turn it on to work across all accounts. So
24161.81 -> we didn't have that option there. But just
be aware that it is there, and you can turn
24162.81 -> a trail on to be across an entire organization. So
I just had to switch into my root organization
24163.81 -> account, because I definitely wanted to show
you that this option does exist here. So when
24164.81 -> you create a trail, we have applied all regions,
but we also can apply to all organizations,
24165.81 -> which means all the accounts within an organization.
Okay. So you know, just be aware of that.
24166.81 -> So now that our trail is created, I just want
you to click into it and be aware that there's
24167.81 -> an additional feature that wasn't available
to us when we were creating the trail. And
24168.81 -> that is the ability to send our cloud trail
events to cloud watch logs. So if you want
24169.81 -> to go ahead and do that, you can configure
that and create an IAM role and send it to
24170.81 -> a CloudWatch log group. There are
additional fees that apply here. And it's not that
24171.81 -> important to go through the motions of this.
But just be aware that that is a capability
24172.81 -> of CloudTrail. So I said earlier
that this will collect beyond 90 days, but
24173.81 -> you're not going to have that nice interface
that you have in event history here. So how
24174.81 -> would you go about analyzing that log, and
I said you could use Amazon Athena. So luckily,
24175.81 -> they have this link here. That's going to
save you a bunch of setup to do that. So if
24176.81 -> you were to click this here, and choose the
s3 bucket, which is this one here, it's going
24177.81 -> to create that table for you in Athena. We
used to have to do this manually; it was quite
24178.81 -> the pain. So it's very nice that they've
added this one link here, and I can just hit
24179.81 -> create table. And so what that's going to
do, it's going to create that table in Athena
24180.81 -> for us and we can jump over to Athena. Okay.
And um, yeah, it should be created here. Just
24181.81 -> give it a little refresh here. I guess we'll
just click Get Started. I'm not sure why it's
24182.81 -> not showing up here. We're getting the splash
screen. But we'll go in here and our table
24183.81 -> is there. So we get this little goofy tutorial.
I don't want to go through it. But that table
24184.81 -> has now been created. And we have a bunch
of stuff here. There is a way of running a
24185.81 -> sample query; I think you could go here and
hit preview table. And that will create
24186.81 -> us a query. And then we it will just run the
query. And so we can start getting data. So
24187.81 -> the cool advantage here is that if we want
to query our data, just like using SQL, you
24188.81 -> can do so here. And Athena, I'm not doing
this on a day to day basis. So I can't say
24189.81 -> I'm the best at it. But you know, if we gave
this a try here and tried to query something,
24190.81 -> maybe based on event type, I wonder if we
could just like group by event type here.
24191.81 -> So that is definitely a option. So we say
distinct. Okay, and I want to be distinct
24192.81 -> on maybe, event type here.
24193.81 -> Okay.
24194.81 -> doesn't like that little bit, just take that
out there. Great. So
24195.81 -> there we go. So that was just like a way so
I can see all the unique event types, I just
24196.81 -> take the limit off there, the query will take
longer. And so we do have that one there.
24197.81 -> But anyway, the point is, is that you have
this way of using SQL to query your logs.
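To make that querying step concrete, here is a rough sketch of the kind of SQL you would run against the table the one-click setup creates. The table name cloudtrail_logs is an assumption; yours will match whatever the setup link generated.

```python
# A sketch of querying CloudTrail logs in Athena with SQL.
# The table name "cloudtrail_logs" is an assumption; use whatever
# name the one-click setup created for you.

def distinct_event_types_query(table: str = "cloudtrail_logs") -> str:
    """Build the SQL shown in the demo: list unique event types."""
    return f"SELECT DISTINCT eventtype FROM {table};"

def events_by_user_query(table: str = "cloudtrail_logs") -> str:
    """A who-to-blame style query: count events per IAM identity."""
    return (
        f"SELECT useridentity.arn, count(*) AS calls "
        f"FROM {table} GROUP BY useridentity.arn ORDER BY calls DESC;"
    )

# You would paste these into the Athena query editor, or submit them
# with the AWS SDK (for example, boto3's Athena client).
print(distinct_event_types_query())
```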
24198.81 -> Obviously, we don't have much in our logs,
but it's just important for you to know that
24199.81 -> you can do that, and there's that one button
to press to create that table and then
24200.81 -> start querying your logs. So we're onto the
CloudTrail cheat sheet, and let's get to
24201.81 -> it. CloudTrail logs API calls made between
AWS services. When you see keywords
24202.81 -> such as governance, compliance, audit, operational
auditing and risk auditing, there's a high chance
24203.81 -> they're talking about CloudTrail. When you
need to know who to blame, think CloudTrail.
24204.81 -> CloudTrail by default logs event data for
the past 90 days via event history. To track
24205.81 -> beyond 90 days, you need to create a trail.
To ensure logs have not been tampered with,
24206.81 -> you need to turn on the log file validation option.
CloudTrail logs can be encrypted using KMS.
24207.81 -> CloudTrail can be set to log across all
accounts in an organization and all regions
24208.81 -> in an account. CloudTrail logs can be streamed
to CloudWatch Logs. Trails are outputted
24209.81 -> to S3 buckets that you specify. CloudTrail
logs come in two kinds: management
24210.81 -> events and data events. Management events
log management operations, so, you know, AttachRolePolicy.
24211.81 -> Data events log data operations
for resources, and there are only really two
24212.81 -> candidates here: S3 and Lambda. So think GetObject,
DeleteObject, PutObject. Data events
24213.81 -> are disabled by default when creating a trail.
Trail logs live in S3 and can be analyzed
24214.81 -> using Athena. But yeah, that is your cheat sheet. Hey,
24215.81 -> this is Andrew Brown from ExamPro, and we
are looking at AWS Lambda, which lets you
24216.81 -> run code without provisioning or managing
servers. Servers are automatically started
24217.81 -> and stopped when needed. You can think of
Lambdas as serverless functions, because
24218.81 -> that's what they're called, and it's pay per
invocation. So as we just said, Lambda
24219.81 -> is a compute service that lets you run code
without provisioning or managing servers.
24220.81 -> Lambda executes your code only when needed
and scales automatically, from a few to 1000
24221.81 -> Lambda functions running concurrently, in seconds.
You pay only for the compute time you consume;
24222.81 -> there is no charge when your code is not running.
So the main highlights are: Lambda is cheap,
24223.81 -> Lambda is serverless, and Lambda scales automatically.
Now, in order to use Lambda, you are just
24224.81 -> uploading your code, and you have seven runtime
options that are supported by AWS. We have
24225.81 -> Ruby, Python, Java, Go, PowerShell, Node.js
and C#. If you want to use something
24226.81 -> outside of this list, you can create your
own custom runtimes, which are not officially supported.
24227.81 -> So AWS Support is not going to help you
with them, but you can definitely run them
24228.81 -> on Lambda. So when we're thinking about how to
use AWS Lambda, there is a variety of use
24229.81 -> cases, because Lambda is like glue: it helps
you connect different services together. And
24230.81 -> so I have two use cases in front of you
here. The first is processing a thumbnail.
24231.81 -> Imagine you are a web service, and users
are allowed to upload their profile photo.
24232.81 -> What you would normally do is store
that in an S3 bucket. Now, you can set event
24233.81 -> triggers on S3 buckets so that an upload would
trigger a Lambda, and then that image would
24234.81 -> get pulled from that bucket. Using something
like sharp.js or ImageMagick, you could
24235.81 -> then take that profile photo, crop
it to a thumbnail, and store it back into
24236.81 -> the bucket. Okay, another use case would be
a contact email form.
24237.81 -> When you fill in the contact email form, it
24238.81 -> sends that form data to an API Gateway endpoint,
which then triggers a Lambda function, and
24239.81 -> then you have this Lambda function that evaluates
whether the form data is valid or not. If
24240.81 -> it's not valid, it's going to say, hey, you need
to make these corrections. If it's
24241.81 -> good, it's going to create a record in
our DynamoDB table, where the records
24242.81 -> are called items. It's also going to send
out an email notification to the company, so
24243.81 -> that we know that you've contacted us, via
SNS. All right, so there you go.
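As a rough sketch of what that contact-form Lambda might look like, here's a minimal handler. The field names and validation rules are made up for illustration, and the DynamoDB and SNS calls are left as comments since they would need boto3 and real resources.

```python
# Sketch of a contact-form Lambda handler (hypothetical field names).
# In a real function you'd use boto3 to write the DynamoDB item and
# publish the SNS notification where the comments indicate.

def handler(event, context=None):
    form = event.get("body", {})
    errors = []
    if not form.get("email") or "@" not in form["email"]:
        errors.append("A valid email is required.")
    if not form.get("message"):
        errors.append("A message is required.")

    if errors:
        # API Gateway returns this to the user as "make these corrections"
        return {"statusCode": 400, "errors": errors}

    # dynamodb.put_item(TableName="contacts", Item={...})  # store the item
    # sns.publish(TopicArn="...", Message="New contact form submission")
    return {"statusCode": 200, "body": "Thanks, we got your message."}
```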
24244.81 -> So to invoke a Lambda, to make it execute,
we can either use the AWS SDK, or we can trigger
24245.81 -> it from another AWS service. We have
a big long list here, and this is definitely
24246.81 -> not the full list. You can see that we
can use API Gateway; we just showed that with
24247.81 -> the email contact form. If you have IoT devices,
you could trigger a Lambda function, or
24248.81 -> maybe your Echo Dot, using an Alexa
skill, would trigger a Lambda. ALBs, CloudFront,
24249.81 -> CloudWatch, DynamoDB, Kinesis, S3, SNS, SQS.
And I can even think of other ones outside
24250.81 -> of here, like GuardDuty and Config; there's
a bunch. Okay, so you can see that AWS Lambda
24251.81 -> integrates with a lot of stuff. It also can
integrate with partnered
24252.81 -> third parties, and that's
powered through Amazon EventBridge, which
24253.81 -> is very much like CloudWatch
Events but with some additional functionality.
24254.81 -> You can see we can integrate with
Datadog, OneLogin, PagerDuty. So that's just
24255.81 -> to give you a scope of the possible triggers
available. I just want to touch on Lambda
24256.81 -> pricing here quickly. So the first million
requests, the first function executions,
24257.81 -> are free per month, okay? So if you're a startup
and you're not doing over a million requests
24258.81 -> per month, and a lot aren't, you're basically
not paying anything for compute. After that,
24259.81 -> it's 20 cents per additional million requests,
so very, very inexpensive. The other
24260.81 -> cost to it, besides just how often things
are requested, is
24261.81 -> how long the duration is. So the first 400,000
GB-seconds are free. Thereafter,
24262.81 -> it's going to be this very, very small amount
for every GB-second. Okay, this
24263.81 -> value is also going to change
based on the amount of memory you use; I bet
24264.81 -> this is for the lowest amount, 128 megabytes.
Most of the time, you're not going to see
24265.81 -> yourself increasing beyond 512; that's really
high. But yeah, I always find that I'm between
24266.81 -> 128 and 256. Now, just to do a calculation
to give you an idea of total pricing:
24267.81 -> let's say we had a Lambda function that's
at 128 megabytes, the lowest; we have
24268.81 -> 30 million executions per month, those are
requests; and the duration is 200 milliseconds.
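Those numbers can be sanity-checked with a quick back-of-envelope calculation. The rates used here, a 400,000 GB-second free tier and roughly $0.00001667 per GB-second, are the ones quoted for the lowest memory setting at the time, so treat them as assumptions.

```python
# Back-of-envelope Lambda compute pricing (historical rates assumed):
# first 400,000 GB-seconds free, then ~$0.00001667 per GB-second.

executions = 30_000_000
duration_s = 0.2              # 200 ms per invocation
memory_gb = 128 / 1024        # 128 MB expressed in GB

gb_seconds = executions * duration_s * memory_gb   # 750,000 GB-seconds
billable = max(0, gb_seconds - 400_000)            # subtract free tier
compute_charge = billable * 0.00001667

print(round(compute_charge, 2))  # → 5.83
```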
24269.81 -> For those Lambda functions, we're only paying
$5.83. So you can see that Lambda is extremely
24270.81 -> inexpensive. I just wanted to give you
a quick tour of the actual AWS Lambda interface,
24271.81 -> just so you can get an idea of how everything
works together. You would choose your runtime;
24272.81 -> here we're using Ruby. Then you can
upload your code, and you have different ways:
24273.81 -> you can edit it inline, since they have
Cloud9 integrated here, so you can
24274.81 -> just start writing code. If it's too large,
then you either have to upload it in a zip or
24275.81 -> provide it via S3. So there are some limitations,
and the larger it gets, eventually you'll end
24276.81 -> up on S3 when you want to import a Lambda.
Then you have your triggers, and there are
24277.81 -> a lot of different triggers. For this
Lambda function, it's using DynamoDB: when
24278.81 -> a record is inserted into DynamoDB, it goes
to DynamoDB Streams, and then that triggers
24279.81 -> this Lambda function. Then we have, on
the right-hand side, the outputs. For those
24280.81 -> outputs, it's the Lambda function that actually
has to call those services, but you create
24281.81 -> an IAM role, and whatever you have permissions
to will actually show here on the right-hand
24282.81 -> side. So here you can see this Lambda
is allowed to interact with CloudWatch Logs,
24283.81 -> DynamoDB and Kinesis Firehose. So there you
go. We are looking at default limits for AWS
Lambda. It's not all of them, but it's the
24284.81 -> ones that I think are most important for you
to know. By default, you can only have
24285.81 -> 1000 Lambdas running concurrently. Okay,
so if you want to have more, you'd have to
24286.81 -> go ask AWS Support. It's possible there could
be an exam question where it's like, hey,
24287.81 -> you want to run X amount of Lambdas and they're
not running, or something; this could be because
24288.81 -> of that limit. You are able to store temporary
files on a Lambda as it's running, and
24289.81 -> that has a limit of up to 512 megabytes. When you
create a Lambda, by default it's going to
24290.81 -> be running in no VPC, and sometimes there
are services, such as RDS, where you can only
24291.81 -> reach them if you are in the same VPC. So you
might actually have to change the VPC in
24292.81 -> some use cases. When you do set a Lambda to
a VPC, it's going to lose internet access.
24293.81 -> That's not to say that you cannot expose it, because
it's in a security group, so there might be
24294.81 -> some way to do it, but that is a consideration
there. You can set the timeout to a maximum
24295.81 -> of 15 minutes. If you
had to go beyond 15 minutes, this is where
24296.81 -> you probably want to use Fargate, which is
similar to AWS Lambda, but there's a lot
24297.81 -> more work in setup, and you're charged per
second as opposed to per 100 milliseconds. Just
24298.81 -> be aware, if you need anything beyond 15 minutes,
you're going to want Fargate. And the last thing
24299.81 -> is memory. Memory starts
at 128 megabytes and goes up all the way to
24300.81 -> 3008 megabytes; the more megabytes you use,
the more expensive it's going to be, paired
24301.81 -> with how long the duration is, and this
goes up in 64 megabyte increments. Okay, so
24302.81 -> there you go, those are the most important ones to
know. So one of the most important concepts
to AWS Lambda is cold starts, because this
24304.81 -> is one of the negative trade-offs of using
serverless functions. So, you know, AWS
24305.81 -> has servers pre-configured, and they're just
lying around in a turned-off
24306.81 -> state for your runtime environment. When
a Lambda is invoked, the servers need to
24307.81 -> be turned on and your code needs to be copied
over, and during that time there's going
24308.81 -> to be a delay when that function initially
runs. That's what we call a cold start.
24309.81 -> Over here, you can see I have a Lambda
function; it gets triggered, and there is
24310.81 -> no server for it to run on. So what's
going to happen is that a server is going to have to
24311.81 -> start, we're going to copy that code, and
there's going to be a period of delay. Now, if
24312.81 -> you were to invoke that function again, and
it's recent enough, right,
24313.81 -> the same function, so the code's already
there and the server's already running,
24314.81 -> then you're not going to have that delay;
that cold start is not going to be there. And
24315.81 -> that's when your server is actually warm. All
right. So, you know, serverless functions
24316.81 -> are cheap, but everything comes with a trade-off,
and with serverless functions, cold starts
24317.81 -> can cause delays in the user experience.
This was actually a direct concern for us
24318.81 -> on ExamPro: we didn't use serverless
architecture because we wanted everything
24319.81 -> to be extremely fast, and, you know, using
other providers, we weren't happy with the
24320.81 -> delay in experience. Now, there are ways around
cold starts, such as pre-warming.
24321.81 -> What you can do is invoke a
function so that it starts prematurely, so
24322.81 -> that when someone actually uses it, it's
going to stay warm. Or you can take a Lambda
24323.81 -> and give it more responsibility, so that
more things are passing through it and it stays
24324.81 -> warm more consistently. And, you know, cold
starts are becoming less and less of an issue
24325.81 -> going forward, because cloud providers are
trying to find solutions to reduce those times
24326.81 -> or to mitigate them, but they are still a
problem. So just be very aware of this one
24327.81 -> caveat to serverless. We're on
24328.81 -> to the Lambda cheat sheet. Lambdas are serverless
functions: you upload your code and it runs
24329.81 -> without you managing or provisioning any servers.
Lambda is serverless; you don't need to worry
24330.81 -> about the underlying architecture. Lambda
is a good fit for short-running tasks where
24331.81 -> you don't need to customize the OS environment.
If you need long-running tasks greater than
24332.81 -> 15 minutes, or a custom OS environment, then
consider using Fargate. There are seven runtime
24333.81 -> language environments officially supported
by Lambda: Ruby, Python, Java, Node.js,
24334.81 -> C#, PowerShell and Go. You pay per
invocation, so that's the duration and the amount
24335.81 -> of memory used, rounded up to the nearest
100 milliseconds, and you're
24336.81 -> also billed based on the number of requests,
where the first 1 million requests per month
24337.81 -> are free. You can adjust the duration timeout
to be up to 15 minutes, and the memory up
24338.81 -> to 3008 megabytes. You can trigger Lambdas
from the SDK or multiple AWS services, such
24339.81 -> as S3, API Gateway, DynamoDB. Lambda by default
runs in no VPC; to interact with some services,
24340.81 -> you need to have your Lambdas in the same
VPC. So, you know, in the case of RDS, you'd
24341.81 -> have to have your Lambda in the same VPC as
RDS. Lambdas can scale to 1000 concurrent functions
24342.81 -> in a second; 1000 is the default, and if you want
to increase this, you have to make an AWS
24343.81 -> service limit increase with AWS Support. And
Lambdas have cold starts: if a function has
24344.81 -> not been recently executed, there will be
a delay. Hey, this is Andrew Brown from Exam
24345.81 -> Pro, and we are looking at Simple Queue Service,
also known as SQS, which is a fully managed
24346.81 -> queuing service that enables you to decouple
and scale microservices, distributed systems
24347.81 -> and serverless applications. To fully understand
SQS, we need to understand what a queueing
24348.81 -> system is. A queueing system is just
a type of messaging system, which provides
24349.81 -> asynchronous communication and decouples
processes via messages, which could also be known
24350.81 -> as events, from a sender and receiver, or,
in the case of a streaming system, known
24351.81 -> as a producer and consumer. Looking at
a queueing system, when you have messages
24352.81 -> coming in, they're usually being deleted on
the way out, as soon as they're consumed.
24353.81 -> It's for simple communication;
it's not really for real time. And to
24354.81 -> interact with the queue and the messages,
both the sender and receiver have
24355.81 -> to poll to see what to do, so it's not reactive.
Okay, we've got some examples of queueing systems
24356.81 -> below: we have Sidekiq, SQS, and RabbitMQ,
which is debatable because it could
24357.81 -> be considered a streaming service. And so
now let's look at the streaming side to see
24358.81 -> how it compares against a queueing system.
A streaming system can react to events,
24359.81 -> with multiple consumers. So if you have
multiple people that want to do something
24360.81 -> with that event, they can all do something
with it, because it doesn't get immediately
24361.81 -> deleted; it lives in the event stream for
a long period of time. And the advantage of
24362.81 -> having a message hang around in that event
stream is that it allows you to apply complex operations.
24363.81 -> So that's the huge difference: one
is reactive and one is not; one allows you
24364.81 -> to do multiple things with the messages and
retains them in the stream, while the other deletes them and
24365.81 -> doesn't really think too hard about
what it's doing. Okay, so there's your comparison
24366.81 -> between queuing and streaming. We're going
to continue on with SQS here, which is a
24367.81 -> queueing system. The number one thing I
want you to think of when you think of SQS
24368.81 -> is application integration. It's for connecting
isolated applications together, acting as
24369.81 -> a bridge of communication, and SQS happens
to use messages and queues for that. You can
24370.81 -> see SQS appears in the AWS console under
Application Integration; these are all
24371.81 -> services that do application integration, and SQS
is one of them. As we said, it uses a
24372.81 -> queue. A queue is a temporary repository for
messages that are waiting to be processed,
24373.81 -> right? So just think of going to the bank,
and everyone is waiting in that line: that is
24374.81 -> the queue. The way you interact with that
queue is through the AWS SDK. You have
24375.81 -> to write code that's going to publish messages
to the queue, and then, when you want to read
24376.81 -> them, you're going to have to use the AWS
SDK to poll for messages. So SQS is pull-based:
24377.81 -> you have to poll for things; it's not push-based,
okay.
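That pull-based flow can be sketched with a toy in-memory queue whose method names mirror the SDK's shape (send_message, receive_message, delete_message). A real app would make these calls on boto3's SQS client instead; this is only an illustration of the flow.

```python
import uuid

# Toy in-memory queue mimicking the shape of the SQS SDK calls.
# Real code would use boto3's SQS client; this only illustrates
# the pull-based send / receive / delete flow.

class ToyQueue:
    def __init__(self):
        self._messages = {}

    def send_message(self, body):
        mid = str(uuid.uuid4())
        self._messages[mid] = body
        return {"MessageId": mid}

    def receive_message(self):
        # The consumer has to poll; nothing is ever pushed to it.
        for mid, body in self._messages.items():
            return {"MessageId": mid, "Body": body}
        return {}  # an empty receive: nothing waiting in the queue

    def delete_message(self, message_id):
        # The consumer reports back that it has consumed the message.
        self._messages.pop(message_id, None)

queue = ToyQueue()
queue.send_message("hello from the mobile app")
msg = queue.receive_message()           # web app polls, gets the message
queue.delete_message(msg["MessageId"])  # ...and deletes it once processed
print(queue.receive_message())          # → {} (queue is now empty)
```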
24378.81 -> So to make this crystal clear, I have an SQS
use case here. We have a mobile app
24379.81 -> and a web app, and they want to talk
to each other. Using the AWS SDK,
24380.81 -> the mobile app sends a message to the queue.
Now the web app, what it has to do is
24381.81 -> use the AWS SDK and poll the queue
whenever it wants. It's up to this
24382.81 -> app to code in how frequently it will check,
but it's going to see if there's anything
24383.81 -> in the queue. If there is a message, it's
going to pull it down, do something with it,
24384.81 -> and report back to the queue that it's consumed
it, meaning it tells the queue to go ahead and
24385.81 -> delete that message from the queue. All right,
now for the app on the left-hand side, the mobile app,
24386.81 -> to know whether the message has been consumed, it's going
to have to, on its own schedule, periodically
24387.81 -> poll to see if that message is still
in the queue; if it no longer is, that's how
24388.81 -> it knows. So that is the process of using
SQS between two applications. Now let's look
24389.81 -> at some SQS limits, starting with message size.
The message size can be between one byte
24390.81 -> and 256 kilobytes. If you want to go beyond
that message size, you can use the Amazon
24391.81 -> SQS Extended Client Library, which is only for Java,
not for anything else, to extend the
24392.81 -> message size up to two gigabytes. The
way that works is that the message
24393.81 -> would be stored in S3 and the library would
reference that S3 object, right? So you're
24394.81 -> not actually pushing two gigabytes to SQS;
it's just loosely pointing to something in an
24395.81 -> S3 bucket. Message retention: message retention
is how long SQS will hold onto a message before
24396.81 -> dropping it from the queue. The message
retention by default is four days, and
24397.81 -> message retention can
be adjusted from a minimum of 60 seconds to
24398.81 -> a maximum of 14 days.
24399.81 -> SQS is a queueing system, so let's talk about
the two different types of queues. We have
24400.81 -> the standard queue, which allows for a nearly unlimited
number of transactions per second, where a
24401.81 -> transaction is just a message, and it
guarantees that a message will be delivered
24402.81 -> at least once. However, the trade-off here
is that more than one copy of the message
24403.81 -> could potentially be delivered, and that would
cause things to happen out of order. So if
24404.81 -> ordering really matters to you, just consider
that there's that caveat with standard queues;
24405.81 -> however, you do get nearly unlimited transactions,
so that's the trade-off. It does make a
24406.81 -> best effort to ensure messages stay
generally in the order that they were delivered,
24407.81 -> but again, there's no guarantee. Now, if you
need a guarantee of the ordering of messages,
24408.81 -> that's where we're going to use FIFO, also
known as first in, first out; well, that's
24409.81 -> what it stands for, right? The idea here
is that, you know, a message comes into the
24410.81 -> queue in order and leaves the queue in order. The trade-off
here is the number of transactions you can
24411.81 -> do per second: we don't have nearly unlimited
per second; instead, we have a cap of up to 300.
24412.81 -> So there you go.
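Since standard queues deliver at least once, consumers are expected to protect themselves against duplicates. Here's a minimal sketch of that idea, tracking already-seen message IDs; in production the "seen" set would live in a durable store rather than in memory.

```python
# Idempotent consumer sketch: a standard queue can deliver a message
# more than once, so track processed message IDs and skip repeats.
# In production the "seen" set would live in a durable store.

processed_ids = set()
results = []

def process_once(message_id, body):
    if message_id in processed_ids:
        return False          # duplicate delivery: skip it
    processed_ids.add(message_id)
    results.append(body)      # the real work would happen here
    return True

process_once("m-1", "charge customer")
process_once("m-1", "charge customer")  # duplicate, ignored
print(len(results))  # → 1
```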
24413.81 -> So how do we prevent another app from reading
a message while another one is busy with that
24414.81 -> message? The idea behind this is we want
to avoid someone doing the same
24415.81 -> work that's already being done by somebody
else, and that's where visibility timeout
24416.81 -> comes into play. Visibility timeout is
the period of time that messages
24417.81 -> are invisible in the SQS queue. When a reader
picks up that message, we set a visibility
24418.81 -> timeout, which can be between zero seconds and 12
hours; by default, it's 30 seconds. And so
24419.81 -> no one else can touch that message. What's
going to happen is that whoever picked
24420.81 -> up that message is going to work on
it, and they're going to report back to the
24421.81 -> queue that, you know, we finished working
with it, and it's going to get deleted from the
24422.81 -> queue. Okay. But what happens if they don't
complete it within the visibility
24423.81 -> timeout frame? What's going to happen is that the
message is now going to become visible again, and
24424.81 -> anyone can pick up that job, okay. So
there is one consideration you have to think
24425.81 -> of when you build out your web
apps: that you bake in the time, so that,
24426.81 -> if the job is going to take too long, like if
30 seconds have expired, then you should
24427.81 -> probably kill that job, because otherwise
you might end up with this issue where you have
24428.81 -> the same message being delivered twice.
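The timeout behavior just described can be modeled with a tiny simulation. The clock here is a plain number rather than real time so the example runs instantly, and the defaults are illustrative.

```python
# Simulation of SQS visibility timeout: once received, a message is
# invisible to other readers until the timeout expires or it's deleted.
# Time is a plain number here so the example runs instantly.

class VisibleQueue:
    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self._messages = []  # list of [body, invisible_until]

    def send(self, body):
        self._messages.append([body, 0])

    def receive(self, now):
        for msg in self._messages:
            if now >= msg[1]:                      # currently visible
                msg[1] = now + self.visibility_timeout
                return msg[0]
        return None                                # everything is invisible

q = VisibleQueue(visibility_timeout=30)
q.send("resize image")
print(q.receive(now=0))    # → "resize image" (worker A picks it up)
print(q.receive(now=10))   # → None (still invisible to worker B)
print(q.receive(now=40))   # → "resize image" (timeout expired, redelivered)
```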
And that could be an issue. Okay, so just
a consideration for visibility timeout. With SQS,
24429.81 -> we have two different ways of doing polling:
we have short versus long. Polling is the
24430.81 -> method in which we retrieve messages from
the queue. By default, SQS uses short
24431.81 -> polling, and short polling returns messages
immediately, even if the message queue being
24432.81 -> polled is empty. So short polling can be a
bit wasteful, because if there's nothing to
24433.81 -> poll, then you're just
making calls for no particular reason. But
24434.81 -> there could be a use case where you need a
message right away, so short polling is what
24435.81 -> you'd want there. But in the majority of use
cases, the majority of use cases, you should
24436.81 -> be using long polling, which, bizarrely, is
not the default, but that's what it is. So
24437.81 -> long polling waits until a message arrives
in the queue, or the long poll timeout expires.
24438.81 -> Okay. Long polling makes it inexpensive
to retrieve messages from the queue as soon
24439.81 -> as messages are available; using long polling
will reduce the cost, because you can reduce
24440.81 -> the number of empty receives, right? If
there's nothing to poll, then you're wasting
24441.81 -> your time, right? If you want to enable long
polling, you have to do it within the SDK,
24442.81 -> and what you're doing is setting
the receive message request with a wait time.
24443.81 -> By doing that, that's how you set long
polling. Let's take a look at our Simple Queue
Service cheat sheet that's going to help you
24445.81 -> pass your exam. First, SQS is
a queuing service using messages with a queue,
24446.81 -> so think Sidekiq or RabbitMQ, if that helps,
if you know those services. SQS is used for
24447.81 -> application integration: it lets you decouple
services and apps so that they can talk to
24448.81 -> each other. Okay, to read SQS, you need to
poll the queue using the AWS SDK. SQS is not
24449.81 -> push-based; okay, it's not reactive. SQS
supports both standard and first in, first
24450.81 -> out (FIFO) queues. Standard queues allow for
nearly unlimited messages per second, do not guarantee
24451.81 -> the order of delivery, always deliver at least
once, and you must protect against duplicate
24452.81 -> messages being processed. FIFO, first in, first
out, maintains the order of messages with a limit
24453.81 -> of 300 transactions per second; that's the trade-off there. There
are two kinds of polling: short, by default,
24454.81 -> and long. Short polling returns messages immediately,
even if the message queue being polled
24455.81 -> is empty. Long polling waits until messages
arrive in the queue or the long poll time
24456.81 -> expires. In the majority of cases, long polling
is preferred over short polling. Majority, okay.
24457.81 -> Visibility timeout is the period of time
that messages are invisible to the SQS queue.
24458.81 -> Messages will be deleted from the queue after
a job has been processed, before the visibility
24459.81 -> timeout expires. If the visibility timeout
expires, the job will become visible to the
24460.81 -> queue again. The default visibility timeout
is 30 seconds, and the timeout can be between zero
24461.81 -> seconds and a maximum of 12 hours. I highlighted
zero seconds because that is a trick
24462.81 -> question sometimes on the exams; people don't
realize you can do it for zero seconds. SQS
24463.81 -> can retain messages from 60 seconds to 14
days; the default is four days. 14 days
24464.81 -> is two weeks; that's an easy way to remember
it. Message sizes can be between one byte
24465.81 -> and 256 kilobytes, and using the Extended
Client Library for Java they can be extended to
24466.81 -> two gigabytes. So there you go, we're done
with SQS.
24467.81 -> Hey, this is Andrew Brown from exam Pro. And
we are looking at simple notification service
24468.81 -> also known as SNS, which lets you subscribe
and send notifications via text message email,
24469.81 -> web hooks, lambdas Sq s and mobile notification.
Alright, so to fully understand SNS, we need
24470.81 -> to understand the concept of pub sub. And
so pub sub is a publish subscribe pattern
24471.81 -> commonly implemented in messaging systems.
So in a pub sub system, the sender of messages,
24472.81 -> also known as the publisher here, doesn't
send the message directly to the receiver.
24473.81 -> Instead, they're going to send the messages
to an Event Bus. And the event pumps categorizes
24474.81 -> the messages into groups. And then the receiver
of messages known as the subscriber here subscribes
24475.81 -> to these groups. And so whenever a new message
appears within their subscription, the messages
24476.81 -> are immediately delivered to them. So it's
not unlike registering for a magazine. All
24477.81 -> right, so, you know, down below, we have that
kind of representation. So we have those publishers,
24478.81 -> and they're publishing to the Event Bus which
have groups in them, and then that's gonna
24479.81 -> send it off to those subscribers, okay, so
it's pushing it all along the way here, okay,
24480.81 -> so publishers have no knowledge of who their
subscribers are. Subscribers Do not pull for
24481.81 -> messages, they're gonna get pushed to them.
messages are instead automatically immediately
24482.81 -> pushed to subscribers and messages and events
are interchangeable terms in pub sub. So if
24483.81 -> you see me saying messages and events, it's
the same darn thing. So we're now looking
24484.81 -> at SNS here. So SNS is a highly available,
durable, secure, fully managed pub sub messaging
24485.81 -> service that enables you to decouple microservices
distributed systems and serverless applications.
24486.81 -> So whenever we talking about decoupling, we're
talking about application integration, which
24487.81 -> is like a family of AWS services that connect
one service to another. Another such service is
24488.81 -> SQS, and SNS is also application integration.
So down below, we can see our pub-sub system.
24489.81 -> We have our publishers on the left side
and our subscribers on the right side, and
24490.81 -> our event bus is SNS. So for the publisher,
we have a few options here. It's basically
24491.81 -> anything that can programmatically use the
AWS API. The SDK and CLI use the AWS
24492.81 -> API underneath, and so that's going to be
the way publishers publish their
24493.81 -> messages or events onto an SNS topic. There are
also other services on AWS that can trigger
24494.81 -> or publish to SNS topics. CloudWatch definitely
can, because you'd be using it for building
24495.81 -> alarms. And then on the right-hand side, you
have your subscribers, and we have a bunch
24496.81 -> of different outputs, which we're going to
go through. But here you can see we have Lambda,
24497.81 -> SQS, email, and the HTTP/HTTPS protocol. So publishers
push events to an SNS topic; that's how
24498.81 -> they get into the topic. And then subscribers
subscribe to the SNS topic to have events
24499.81 -> pushed to them. And then down below,
you can see I have a very dry description
24500.81 -> of an SNS topic, which is that it's a logical access
point and communication channel. So that makes
24501.81 -> sense. Let's move on.
We're going to take a deeper look here at SNS
24502.81 -> topics. Topics allow you to group multiple
subscriptions together, and a topic is able to
24503.81 -> deliver to multiple protocols at once. So
it could be sending out email, text message,
24504.81 -> HTTPS, all the sorts of protocols we saw earlier.
And publishers don't care about the subscriber's
24505.81 -> protocol, because when a publisher sends a message
to the topic, it's essentially saying,
24506.81 -> "you figure it out, this is the message I want
to send out." The topic knows what subscribers
24507.81 -> it has, and so when it delivers
messages, it will automatically format the message
24508.81 -> according to each subscriber's
chosen protocol. And the last thing
24509.81 -> I want you to know is that you can encrypt
your topics via KMS (Key Management Service).
24510.81 -> And it's just as easy as turning
it on and picking your key.
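The fan-out behavior described here can be pictured with a toy in-memory model. To be clear, this is an illustration of the pub-sub pattern, not the real SNS API; all class and variable names below are made up:

```python
# Toy model of SNS-style pub-sub fan-out (illustrative only, not the AWS API).
# A topic holds subscriptions; the publisher just publishes to the topic, and
# the topic formats and delivers the message per subscriber protocol.
class Topic:
    def __init__(self, name):
        self.name = name
        self.subscriptions = []  # list of (protocol, endpoint, handler)

    def subscribe(self, protocol, endpoint, handler):
        self.subscriptions.append((protocol, endpoint, handler))

    def publish(self, message):
        # The publisher doesn't care about protocols; the topic formats
        # the message for each subscriber's chosen protocol.
        for protocol, endpoint, handler in self.subscriptions:
            if protocol == "email":
                # email subscriptions get plain text
                payload = message["subject"] + "\n" + message["body"]
            else:
                # e.g. sqs/lambda/https get the structured message
                payload = message
            handler(endpoint, payload)

received = []
topic = Topic("billing-alarms")
topic.subscribe("email", "me@example.com", lambda ep, p: received.append((ep, p)))
topic.subscribe("sqs", "my-queue", lambda ep, p: received.append((ep, p)))
topic.publish({"subject": "Alarm", "body": "Billing exceeded $10"})
# One published message reached both subscribers, each in its own format.
```

The key point the sketch shows: the publisher makes one `publish` call and never sees the subscriber list or their protocols.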
24511.81 -> So now we're taking a look at subscriptions.
Subscriptions are something you create
24512.81 -> on a topic. So here I have a subscription
that is an email subscription, and the endpoint
24513.81 -> is obviously going to be an email address. So I provided
my email there. If you want to say hello,
24514.81 -> send me an email. And it's just as simple
as clicking that button and filling in those
24515.81 -> options. Now you have to choose your protocol,
and here we have our full list on the
24516.81 -> right-hand side, so we'll just go through
it. First we have HTTP/HTTPS, and you're going
24517.81 -> to want to be using this for webhooks. The
idea is that this is usually going to
24518.81 -> be an API endpoint in your web application
that's going to listen for incoming messages
24519.81 -> from SNS. Then you can send out emails. Now,
there's another service called SES, which
24520.81 -> specializes in sending out emails. SNS
is really good for internal email notifications,
24521.81 -> because with SNS you don't get your custom
domain name, and the emails have to
24522.81 -> be plain text only. There are some other limitations
around that. So they're really, really good
24523.81 -> for internal notifications, maybe billing
alarms, or maybe someone signed up on your
24524.81 -> platform and you want to know about it. Then they
also have email-JSON, which is going
24525.81 -> to send you JSON via email. Then you have
SQS, so you can send an SNS message to an SQS
24526.81 -> queue; that's an option you have there. You
can also have SNS trigger Lambda functions,
24527.81 -> which is a very useful feature as well. And
you can also send text messages, which will
24528.81 -> be using the SMS protocol. And the last one
here is platform application endpoints,
24529.81 -> which is for mobile push. A bunch of
different devices, laptops, and phones have
24530.81 -> notification systems in them, and this
will integrate with those. And we're
24531.81 -> actually going to talk about that a bit more here.
So I wanted to talk a bit more about this
24532.81 -> platform application endpoint, which is
for doing mobile push. So we have a
24533.81 -> bunch of different mobile devices, and even
laptops, that have notification systems in them.
24534.81 -> And here you can see a big list: we have
ADM, which is Amazon Device Messaging; we
24535.81 -> have Apple; Baidu; Firebase, which is Google;
and then we have two for Microsoft, so we
24536.81 -> have Microsoft push and Windows push.
So with this protocol you can push out
24537.81 -> to all of that. And the advantage here is that
when you push notification messages
24538.81 -> to these mobile endpoints, they can appear in
the mobile app as message alerts, badges,
24539.81 -> updates, or even sound alerts. So that's pretty
cool. I just want you to be aware
24540.81 -> of that. Alright, so on to the SNS cheat sheet.
So Simple Notification Service, also known
24541.81 -> as SNS, is a fully managed pub-sub messaging
service. SNS is for application integration;
24542.81 -> it allows decoupled services and apps to communicate
with each other. We have a topic, which is
24543.81 -> a logical access point and communication channel,
and a topic is able to deliver to multiple protocols.
24544.81 -> You can encrypt topics via KMS. Then you
have your publishers, and they use the AWS
24545.81 -> API via the CLI or the SDK to push messages
to a topic. Many AWS services integrate
24546.81 -> with SNS and act as publishers, so think
CloudWatch and other services. Then you have
24547.81 -> subscriptions, which
subscribe to topics. When a topic
24548.81 -> receives a message, it automatically and immediately
pushes messages to subscribers. All messages
24549.81 -> published to SNS are stored redundantly across
multiple AZs, which isn't something we talked about
24550.81 -> in the core content, but it's good to know.
And then we have the following protocols we
24551.81 -> can use. We have HTTP/HTTPS, which is great
for webhooks into your web application. We
24552.81 -> have email, good for internal email notifications;
remember, it's plain text only, and if you need
24553.81 -> rich text and custom domains, you're going to
be using SES for that. Then you have email-
24554.81 -> JSON, very similar to email, just sending JSON
along the way. You can also send your
24555.81 -> SNS messages into an SQS queue, you can
trigger Lambdas, and you can send text messages.
24556.81 -> And the last one is platform
application endpoints, which is mobile push,
24557.81 -> and that's going to be for systems like
Apple, Google, Microsoft, and Baidu. All right.
24558.81 -> Hey, this is Andrew Brown from ExamPro, and
we are looking at ElastiCache, which is
24559.81 -> used for managing caching services that
run on either Redis or Memcached. To fully
24560.81 -> understand what ElastiCache is, we need
to answer a couple of questions:
24561.81 -> what is caching, and what is an in-memory data
store? So let's start with caching. Caching
24562.81 -> is the process of storing data in a cache,
and a cache is a temporary storage area.
24563.81 -> Caches are optimized for fast retrieval, with
the trade-off that the data is not durable.
24564.81 -> And we'll explain what it means
when we say it's not durable. So now
24565.81 -> let's talk about the in-memory data store, because
that is what ElastiCache is. It's when
24566.81 -> data is stored in memory. For memory, literally
think RAM, because that's where the data is going.
24567.81 -> And the trade-off is high volatility.
When I say it's very volatile, that
24568.81 -> means low durability. So what does that mean?
It just means there's a risk of data being lost,
24569.81 -> because, again, this is a temporary
storage area. And in exchange we're going
24570.81 -> to have fast access to that data. All right.
So that is generally what a cache and in-memory
24571.81 -> data store is. So with ElastiCache, we can
deploy, run, and scale popular open-source-
24572.81 -> compatible in-memory data stores. One cool
feature is that it will identify
24573.81 -> queries that you use often and will store
those in the cache, so you get an additional performance
24574.81 -> boost. One caveat I found out when using
this in production for my own use cases is
24575.81 -> that ElastiCache is only accessible to
resources operating in the same VPC. So here
24576.81 -> I have an EC2 instance; as long as it's in the
same VPC, it can connect to ElastiCache. If
24577.81 -> you're trying to connect something outside
of AWS, such as DigitalOcean, it is not
24578.81 -> possible to connect that to ElastiCache. And
if it's outside of this VPC, you're not going to
24579.81 -> be able to make that connection directly;
through peering or some other efforts you
24580.81 -> could do that, but generally you
want ElastiCache and the servers
24581.81 -> that use it to be in the same VPC. And we
said that it runs open-source-compatible in-
24582.81 -> memory data stores, and the two options we
have here are Memcached and Redis. And we're
24583.81 -> going to talk about the difference between
those two in the next slide. For ElastiCache,
those two in the next slide. for lots of cash,
24584.81 -> we have two different engines, we can launch
we have memcached, and Redis. And there is
24585.81 -> a difference between these two engines, we
don't really need to know in great detail,
24586.81 -> you know, all the differences. But we do have
this nice big chart that shows you that Redis
24587.81 -> takes more boxes, then mem cache. So you can
see that Redis can do snapshots, replication,
24588.81 -> transact transactions, pub sub, and support
geospatial support. So you might think that
24589.81 -> Redis is the clear winner here. But it really
comes down to your use case. So mem cache
24590.81 -> is generally preferred for caching HTML fragments.
And mem cache is a simple key value store.
24591.81 -> And that trade off there is that even though
it's simpler and has less features, it's going
24592.81 -> to be extremely fast. And then you have Redis
on the other side, where he has different
24593.81 -> kinds of operations that you can do on your
data in different data structures that are
24594.81 -> available to you. It's really good for leaderboards,
or tracking unrenewed notification, any kind
24595.81 -> of like real time cached information that
has some logic to it. Redis is going to be
24596.81 -> your choice there. It's very fast, we could
argue to say who is faster than the other
24597.81 -> because on the internet, some people say Redis
is overtaking memcached, even in the most
24598.81 -> basic stuff, but generally, you know for for
the exam, memcached is technically are generally
24599.81 -> considered faster for HTML fragments, okay.
But you know, it doesn't really matter because
24600.81 -> on the exam, they're not gonna really ask
you to choose between memcached and Redis.
24601.81 -> But you do need to know the difference. So
we are on to the elastic cache cheat sheet.
24602.81 -> It's a very short cheat sheet, but we got
to get through it. So elastic cache is a managed
24603.81 -> in memory caching service. Elastic cache can
launch either memcached or Redis. mem cache
24604.81 -> is a simple key value store preferred for
caching HTML fragments is arguably faster
24605.81 -> than Redis. Redis has richer data types and
operations, great for leaderboards, geospatial
24606.81 -> data, or keeping track of unread notifications,
a cache is a temporary storage area. Most
24607.81 -> frequently identical queries are stored in
the cache and resources only within the same
24608.81 -> VPC may connect to lots of cash to ensure
low latency. So there you go. That's the last
24609.81 -> thing.
24610.81 -> So now we're taking a look at high availability
architecture, also known as HA. This
24611.81 -> is the ability for a system to remain available.
So what we need to do is
24612.81 -> think about what could cause a service
to become unavailable, and the solution we
24613.81 -> need to implement in order to ensure high
availability. Starting with number one,
24614.81 -> we're dealing with the scenario where
an availability zone becomes unavailable.
24615.81 -> Remember, an AZ is essentially a data center.
So you can imagine a data center becoming
24616.81 -> flooded for some reason, and now all the
servers there are not operational. So what
24617.81 -> would you need to do? Well, you need to have
EC2 instances in another data center.
24618.81 -> And how would you route traffic from
one AZ to another? That's where we would
24619.81 -> use an Elastic Load Balancer, so that
way we can be multi-AZ. Now, what would happen
24620.81 -> if two AZs went out? Well, then you'd need
a third one, and a lot of enterprises have
24621.81 -> this as a minimum requirement: you have to
be running in at least three AZs. Moving
24622.81 -> on to our next scenario, what happens when
a region becomes unavailable? Let's say
24623.81 -> there is a meteor strike. It's a very unlikely
scenario, but we need a scenario that
24624.81 -> would take out an entire region, all the data
centers in that geographical location.
24625.81 -> What you're going to need is
instances running in another region. So how
24626.81 -> would you facilitate the routing of traffic
from one region to another? That's going to
24627.81 -> be Route 53. So that's the solution
there. Now, what happens when you have a web
24628.81 -> application that becomes unresponsive because
of too much traffic? If you're
24629.81 -> having too much traffic coming to your platform,
then you're probably going to need more
24630.81 -> EC2 instances to handle the demand. And
that's where we're going to use auto scaling
24631.81 -> groups, which have the ability to scale based
on the amount of traffic that's coming in.
24632.81 -> So now, what happens if we have an instance
that becomes unavailable because there's
24633.81 -> an instance failure? Something with the
hardware or the virtualization software is failing,
24634.81 -> and so it's no longer healthy. Well, again,
that's where we can have auto scaling groups,
24635.81 -> because we can set the minimum
number of instances. Let's say we
24636.81 -> always have a minimum of three running to handle the load,
and if one fails, the group is going to spin
24637.81 -> up another one. And also, of course, the ELB
would route traffic to the other instances in other
24638.81 -> AZs, so we have high availability. And now
we have our last scenario here. What happens
24639.81 -> when our web application becomes unresponsive
due to distance and geographical location?
24640.81 -> Let's say someone's accessing our web application
from Asia, and we are in North America, and
24641.81 -> the distance is causing unavailability. Well,
we have a couple of options here. We can use
24642.81 -> CloudFront, and CloudFront could cache
our static content, or even our dynamic content
24643.81 -> to some degree, so that there's content
nearby to that user, which gives them back
24644.81 -> availability. Or we could just be
running our servers in
24645.81 -> another region that's nearby and use a Route
53 geolocation routing policy.
24646.81 -> So if we have servers in Asia,
it's going to route traffic to those
24647.81 -> servers. Okay, so there you go. That's the
rundown for high availability.
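A geolocation routing policy like the Route 53 one just described can be pictured as a location-to-endpoint lookup with a default fallback. The regions, hostnames, and mapping below are invented purely for illustration; real Route 53 matches on the resolver's geographic location, not a string you pass in:

```python
# Toy sketch of geolocation routing: send users to the nearest region's
# endpoint, falling back to a default record when no location matches.
# All hostnames and locations here are made-up examples.
GEO_ROUTES = {
    "Asia": "app.ap-northeast-1.example.com",
    "North America": "app.us-east-1.example.com",
}
DEFAULT_ENDPOINT = "app.us-east-1.example.com"

def resolve(user_location: str) -> str:
    """Return the endpoint a user in this location should be routed to."""
    return GEO_ROUTES.get(user_location, DEFAULT_ENDPOINT)

print(resolve("Asia"))    # routed to the nearby Asia endpoint
print(resolve("Europe"))  # no matching record: falls back to the default
```

The default entry matters: in real geolocation routing you configure a default record so users from unmapped locations still get an answer.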
24648.81 -> We're looking at scale up versus scale out.
When utilization increases and we're reaching
24649.81 -> capacity, we can either scale up, known as
vertical scaling, or we can scale out, known
24650.81 -> as horizontal scaling. In the case of
scaling up, all we're doing is increasing
24651.81 -> the instance size to meet that capacity. The
trade-off here is that this is going to be
24652.81 -> simple to manage, because we're just increasing
the instance size, but we're going to have lower
24653.81 -> availability: if that single instance fails,
the service is going to become unavailable.
24654.81 -> Now for scale out, known as horizontal scaling,
what we're going to do is
24655.81 -> add more instances. The advantage here
is we're going to have higher availability, because
24656.81 -> if a single instance fails, it doesn't matter.
But we're going to have more complexity to manage;
24657.81 -> more servers means more of a headache.
So what I would suggest is that
24658.81 -> you generally want to scale out first to get
more availability, and then
24659.81 -> scale up so that you keep simplicity.
You do want to use both of these methods;
24660.81 -> it just depends on the specific scenario
that's in front of you.
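The availability claim above can be made concrete with a little probability. The 99% per-instance figure below is an illustrative assumption, not an AWS number: if instances fail independently, the service is only down when every instance is down at once, so adding instances (scaling out) improves availability dramatically.

```python
# Rough sketch (illustrative numbers, not AWS figures): why scaling out
# improves availability. If each instance is independently up 99% of the
# time, the service is down only when ALL instances are down at once.
def service_availability(per_instance_availability: float, instance_count: int) -> float:
    """Probability that at least one instance is up."""
    p_down = 1.0 - per_instance_availability
    return 1.0 - p_down ** instance_count

one = service_availability(0.99, 1)    # scale up: one big instance
three = service_availability(0.99, 3)  # scale out: three smaller instances

print(f"1 instance:  {one:.6f}")   # 0.990000
print(f"3 instances: {three:.6f}") # 0.999999
```

Scaling up leaves availability at the single instance's 99%, while three scaled-out instances take the service from roughly 3.65 days of downtime a year to under a minute, which is why the suggestion is to scale out first.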
24661.81 -> Hey, this is Andrew Brown from ExamPro, and
we are looking at Elastic Beanstalk, which
24662.81 -> allows you to quickly deploy and manage web
apps on AWS without worrying about infrastructure.
24663.81 -> The easiest way to think of Elastic Beanstalk
is to compare it to Heroku; I always
24664.81 -> say it's the Heroku of AWS. You choose
a platform, you upload your code, and it runs
24665.81 -> with little worry for developers about
the actual underlying infrastructure.
24666.81 -> AWS just does not recommend it for production
applications, but when AWS says that,
24667.81 -> they're really talking about enterprises or
large companies. For startups, I
24668.81 -> know ones that are still using it three years
in, so it's totally fine for those use cases.
24669.81 -> And if you do see exam questions
talking about a workload
24670.81 -> that's just for developers who don't want
to have to really think about what they're
24671.81 -> doing, Elastic Beanstalk is going to be the
choice. So what kinds of things
24672.81 -> does Elastic Beanstalk set up for you? Well,
it's going to set up a load balancer,
24673.81 -> auto scaling groups, maybe a database, and EC2
instances pre-configured with the platform
24674.81 -> that you're running on. So if you're running
a Rails application, you choose Ruby; if you're
24675.81 -> running Laravel, you choose PHP. You can
also create your own custom platforms and
24676.81 -> run them on Elastic Beanstalk. Another thing
that's really important to know is that Elastic
24677.81 -> Beanstalk can run Dockerized environments;
here we have multi-container Docker.
24678.81 -> It does have some nice security features where,
if you have RDS connected, it can rotate
24679.81 -> those passwords for you. And it does have
a couple of deployment methodologies in there.
24680.81 -> By default it's in-place, but it can also
do blue/green deployment, and it can do monitoring
24681.81 -> for you. Down below, you just see these
little boxes; that's just me showing you that
24682.81 -> when you go into Elastic Beanstalk, you'd
have all these boxes to do some fine tuning.
24683.81 -> But more or less, you just choose if you want
high availability or you want it to be cheap,
24684.81 -> and it will then choose all these options
for you. So that is all you need to know about
24685.81 -> Elastic Beanstalk for the solutions architect.
Hey, this is Andrew Brown from ExamPro, and we are
24686.81 -> going to learn how to utilize Elastic Beanstalk
so we can deploy, monitor, and scale
24687.81 -> our applications quickly and easily.
So we're going to go ahead here and hit Get
24688.81 -> Started, and we're going to create
a new application here. We're going to name
24689.81 -> it expressjs, because that's what we're going
to be utilizing here, Express.js being the example,
24690.81 -> and choose the platform to be Node.js.
So now we're at this option where we can
24691.81 -> use a sample application, which is totally
something we can do, or we can upload our
24692.81 -> own code. For this, I really want you to
learn a few of the caveats of Elastic Beanstalk,
24693.81 -> and you're only going to learn those if you
upload your own code. You can definitely just
24694.81 -> do the sample application and just watch the videos
to follow along. But the next thing we're
24695.81 -> going to do is prep an application;
I have a sample repo here, and we're going
24696.81 -> to talk about some of the things we
need to configure and upload in the next video.
24697.81 -> All right. So I prepared this Express.js
application so we can learn the caveats of
24698.81 -> Elastic Beanstalk. And I even have the instructions
here: if you don't want to just use this
24699.81 -> premade one and you want to go to the extra effort
to make your own, I do have the instructions
24700.81 -> here, omitting how to install Node.js; you'll
have to figure that out for yourself. But
24701.81 -> just using this application here, we're
going to go ahead, and you can either download
24702.81 -> the zip or clone it. I'm going to clone it,
because that's the way I like to do it.
24703.81 -> We'll go over to our terminal here, and I'm
just going to clone that to my desktop.
24704.81 -> It won't take long here. And there
we go, we have it cloned. We'll go inside
24705.81 -> of it; I'm just going to open up the folder
here so you can
24706.81 -> get an idea of the contents of this Express.js
application. As with most applications,
24707.81 -> we just want to make sure that it runs before
we upload it here. So I'm going to do npm
24708.81 -> install, which is going to install all the
dependencies; you just saw a node_modules
24709.81 -> directory created. And I'm just going to run
my application; I have a nice little script
24710.81 -> to do that, and it's going to start on localhost.
And here's our application. It's a very
24711.81 -> simple application that references a very popular
episode of Star Trek: The Next Generation.
24712.81 -> So we're going to go back and just kill
our application here. And now we're going
24713.81 -> to start preparing this application for
Elastic Beanstalk. When you have an Elastic
24714.81 -> Beanstalk application, it needs to know how
to actually run the application. And
24715.81 -> the way it's going to do that
is through a hidden directory with a couple
24716.81 -> of hidden files. So if you scroll down in
my example here, I'm going to tell you that
24717.81 -> you need to create a
hidden folder called .ebextensions.
24718.81 -> That folder is going to contain different
types of configuration files, and it's going
24719.81 -> to tell Elastic Beanstalk how to run this
application. So for Node.js, we want it to execute
24720.81 -> the npm start command, which is going to start
up the server to run this application. We
24721.81 -> also need it to serve static files, so we
have another configuration file in here to
24722.81 -> serve those static files. Now, this .ebextensions
folder is actually already part of the repository,
24723.81 -> so you don't have to go ahead and create the files.
But a very common mistake that people make
24724.81 -> with Elastic Beanstalk is that they fail to upload
that hidden folder, because they simply don't
24725.81 -> see it. So if you are on a Mac
here, you can hit Command+Shift+Period, which
24726.81 -> is going to show those hidden folders; for
Windows and Linux, you're going to have to figure
24727.81 -> that out for yourself. But just be aware
that you need to include this folder for packaging.
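To make the hidden folder concrete, a `.ebextensions` config file might look roughly like this. Treat this as a sketch: the `option_settings` namespaces shown come from the legacy Node.js platform, and the exact option names and file name are assumptions you should verify against the Elastic Beanstalk documentation for your platform version.

```yaml
# .ebextensions/nodecommand.config -- a sketch of the kind of file the
# hidden folder holds (option names assume the legacy Node.js platform;
# verify against the Elastic Beanstalk docs for your platform version).
option_settings:
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "npm start"          # how Beanstalk should start the app
  aws:elasticbeanstalk:container:nodejs:staticfiles:
    /public: public                   # serve ./public at the /public URL path
```

Every `*.config` file in `.ebextensions` is applied when the environment is created or deployed, which is why forgetting to include the hidden folder in the zip breaks the deployment.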
24728.81 -> Alright, so now we know that our application
runs and we can see all the files we need,
24729.81 -> so we're going to go ahead and package it. I'm
going to grab what we need here. We don't
24730.81 -> need the docs; that's just something that
I added here to get that nice graphic
24731.81 -> in there for you. And I believe that's all
we need. We could exclude the readme, and we're
24732.81 -> going to exclude the .git directory, because
sometimes it contains sensitive credentials.
24733.81 -> But the most important thing is this .ebextensions folder.
So I'm going to go ahead here and zip this
24734.81 -> into an archive. And now I have that
ready to upload. Just one more caveat
24735.81 -> here: you saw me do an npm install
to install the dependencies for this application.
24736.81 -> Well, how is Elastic Beanstalk going to do
that? It actually does it automatically,
24737.81 -> and this is the case for most environments.
If it's a Ruby application,
24738.81 -> it's going to do bundle install; if you
have a requirements file in your Django application,
24739.81 -> it's going to install it. And for Node.js
on Elastic Beanstalk, it's going to automatically
24740.81 -> run npm install for you, so you don't
have to worry about that. But anyway,
24741.81 -> we've prepared the archive, and so now we
can proceed to actually uploading this archive
24742.81 -> into Elastic Beanstalk. So I left this screen open because
we had to make a detour and package our code
24743.81 -> into an archive. Now we're ready to
upload our code, so just make sure you have
24744.81 -> "Upload your code" selected here and we'll click
Upload. Just before we upload, I want
24745.81 -> you to notice that you can either upload a
local archive or you can provide it via an
24746.81 -> S3 URL, and you will have to do the latter once you
exceed 512 megabytes, which isn't a very
24747.81 -> hard thing to do, because applications definitely
get larger. So just be aware of that. But
24748.81 -> we still have a very small application here;
it's only five megabytes,
24749.81 -> so we can definitely upload this directly.
I do want to point out that we did zip the
24750.81 -> node_modules directory, and this directory is usually
very large. I bet if I had excluded it, this
24751.81 -> would have been less than a megabyte, but
for convenience I just included it. We
24752.81 -> did see previously that Elastic Beanstalk does
an npm install automatically for us, so
24753.81 -> if we had omitted this, it would be installed
on the server. But I'm just going to
24754.81 -> upload this archive; it is five megabytes
in size, so it will take a little bit of time
24755.81 -> when we hit the upload button. But just
before we do, we need to set our version label.
24756.81 -> I'm going to name this version 0.0.1,
and it's a good idea to try to match
24757.81 -> the versioning here with your Git tags, because
in Git you can tag specific commits with
24758.81 -> specific versions. So we're going to
go ahead and upload this, and it will
24759.81 -> just take a little bit of time; my Internet's
not the fastest here, so five megabytes is
24760.81 -> going to take a minute or so.
24760.81 -> going to take a minute or so here.
24761.81 -> Okay, great. So we've uploaded that code here.
And we have version 0.0. point one. And so
24762.81 -> now we can talk about more advanced configuration,
we could go ahead and create this application.
24763.81 -> But I want to just show you all the little
things that you can configure in Alaska. So
24764.81 -> let's look at a bit more advanced configurations
here and just make sure that we aren't getting
24765.81 -> overbilled because we spun up resources, we
didn't realize we're gonna cost us money.
24766.81 -> So the preset configuration here is on the
low cost here. So it's going to be essentially
24767.81 -> free. If we were to launch this here, which
is great for learning, but let's talk about
24768.81 -> if we actually set it to high availability.
So if we set it to high availability, we're
24769.81 -> going to get a load balancer. So a load bouncer
generally costs at least $15 USD per month.
24770.81 -> So by having a low cost, we're saving that
money there. And when we set it to high availability,
24771.81 -> it's going to set it in an auto scaling group,
okay, between one to four instances, with
24772.81 -> low cost, it will only run a single server,
you can see that is set to a tee to micro,
24773.81 -> which is the free tier. And we could adjust
that there if we want. And then we have updates.
24774.81 -> So the deployment method right now is all
at once. And so if we were to deploy our application,
24775.81 -> again, let's say it's already been uploaded,
and we deploy it again, using all at once,
24776.81 -> we're going to have downtime, because it's
going to take that server off offline, and
24777.81 -> then put a new server up with the code in
order to deploy that code. And so we can actually
24778.81 -> use bluegreen deployment to mitigate that,
and I'm just going to pop in here to show
24779.81 -> you so all at once means that it's going to,
it's going to shut down and start up a new
24780.81 -> server in place. And then immutable means
it's going to create a new server in isolation,
24781.81 -> okay, so just be aware of those options there.
But um, there's that. And we can also create
24782.81 -> our database and attach it here as well. Sometimes,
that is a great idea. Because if you create
24783.81 -> an RDS database, so here, I could select like
MySQL and Postgres, right, and you provide
24784.81 -> the username and password. But the advantage
of creating your art RDS database with Elastic
24785.81 -> Beanstalk is that it's going to automatically
rotate your RDS passwords for you for for
24786.81 -> security purposes. So that's a very good thing
to have here. I generally do not like creating
24787.81 -> my RDS instances with Elastic Beanstalk, I
create them separately and hook them up to
24788.81 -> my application. But just be aware that you
can go ahead and do that. And I think that's
24789.81 -> like the most important options there. But
we're just going to make sure that we are
24790.81 -> set to the low cost free tier here with T
to micro, okay. And we'll go ahead and create
24791.81 -> our app. And here we go. And so now, what
we're going to see is some information here
24792.81 -> as it creates our application. And this does
take a few minutes here anytime you launch,
24793.81 -> because it has to launch at two instance.
But it always takes about, you know, three
24794.81 -> to five minutes to spin up a fresh instance.
So I will probably clip this video. So this
24795.81 -> proceeds a lot quicker here. So that deploy
finished there. And it redirected me to this,
24796.81 -> this dashboard here. So if you are still on
that old screen, and you need to get to the
24797.81 -> same place as me just go up to express j s
sample up here and just click into your environment
24798.81 -> and we will be in the same place. So did this
work. So it created us a URL here and we will
24799.81 -> view it and there you go. Our application
is running on Elastic Beanstalk. Alright.
24800.81 -> Now if you're looking up here and saying well,
what if I wanted to get my custom domain here?
24801.81 -> That's where row 53 would come into play.
So in roughly two, three, you would point
24802.81 -> it to your elastic IP, which is the case here,
because we created a single instance to be
24803.81 -> cost saving, and attached an elastic IP for
us. If we had done the high availability option,
24804.81 -> which created a load bouncer, we would be
pointing roughly three, two, that load balancer.
24805.81 -> And that's how we get our custom domain on
Elastic Beanstalk. And let's just quickly
24806.81 -> look at what it was doing as it was creating
here. So if we go to events, we're gonna get
24807.81 -> all the same information. As we were in that
prior, you know, that black terminal screen
24808.81 -> where it was showing us progress. It's the
exact same information here. So it created
24809.81 -> an environment for us it, um, had environment
data that it uploaded to s3, it created a
24810.81 -> security group, it created an elastic IP on
and then it spun up that easy to essence and
24811.81 -> it took three minutes. And this is what I
said it would take between three to five minutes
24812.81 -> to spin up an EC two instance, if we had chose
to create an RDS instance, in our configuration
24813.81 -> to create that initial RDS always takes about
10 to 15 minutes because it has to create
24814.81 -> that that initial backup. But then from then
on, if we did other deploys, would only take
24815.81 -> the three to five minutes. Okay, so there
you go. That's all we need to really know
24816.81 -> for Elastic Beanstalk for the solution architect.
24817.81 -> So just so we're not wasting our free tier
credits, we should tear down this Elastic
24818.81 -> Beanstalk environment. So I'm going to go
up here to actions. And we are going to terminate
24819.81 -> this environment here. And we're going to
have to provide its name so it's up here.
24820.81 -> So I'm just going to copy it, paste it in
and hit terminate, okay, and this is a bit
24821.81 -> slow. But we'll let it go here. And it's going
to hopefully destroy this environment. Sometimes
it does fail, and you'll have to give it another
24823.81 -> try there. But once that is done, then you
might want to go ahead and delete
24824.81 -> the application, okay? But sorry, it's not
necessary to delete the application. It's just
24825.81 -> necessary to destroy this environment here,
24825.81 -> because this is actually running instances.
All right. So we'll just wait here. shouldn't
24826.81 -> take too long, just a couple minutes. All
right. So we finished terminating, and it
24827.81 -> redirected me here. And we can see the previous
terminated environment. And now just to fully
24828.81 -> clean everything up here, we can delete the
application. Now, there's no cost to keep
24829.81 -> this application around. It's really the environments
that contain that running resources. But just
24830.81 -> to be tidy here, we'll go ahead and delete
that there. And we'll provide its name and
should be relatively quick there. Great.
So the environment and also the application
24832.81 -> is destroyed. So we're fully cleaned
up. So onto the Elastic Beanstalk cheat sheet.
24833.81 -> And this is very minimal for the solution
architect associate, if you're doing other
24834.81 -> exams, where Elastic Beanstalk is more important,
there are going to be like two pages,
24835.81 -> okay, so just keep that in mind. But let's
get through this cheat sheet. So Elastic Beanstalk
24836.81 -> handles the deployment, from capacity provisioning,
load balancing, and auto scaling, to application
24837.81 -> health monitoring when you want to run a web
app, but you don't want to have to think about
24838.81 -> the underlying infrastructure. You want to
think Elastic Beanstalk. It costs nothing
24839.81 -> to use Elastic Beanstalk only the resources
it provision. So RDS lb easy to recommend
24840.81 -> it for test or development apps, not Rebecca
recommended for production use, you can choose
24841.81 -> from the following pre configured platforms,
you got Java, .NET, PHP, Node.js, Python,
24842.81 -> Ruby, Go, and Docker. And you can run Dockerized
environments on Elastic Beanstalk. So
24843.81 -> there you go.
24844.81 -> Hey, this is Andrew Brown from ExamPro. And
we are looking at API gateway, which is a
24845.81 -> fully managed service to create, publish,
maintain, monitor and secure APIs at any
24846.81 -> scale. So API gateway is a solution for creating
secure APIs in your cloud environment at
24847.81 -> any scale. So down below, I have a representation
of how API gateway works. So on the left hand
24848.81 -> side, you'd have your usual suspects, you'd
have your mobile app, your web app, or even
24849.81 -> an IoT device. And they would be making HTTP
requests to your API, which is generally a
24850.81 -> URL. And so API gateway provides you a URL
that it generates, so that you can do that.
24851.81 -> And then in API gateway, you create your endpoint.
So here I have like endpoints for tasks with
24852.81 -> different methods. And the idea here is that
it lets you create these virtual endpoints
24853.81 -> so that you can then point them to AWS services,
the most commonly used service being lambda.
24854.81 -> But yeah, so the easiest way to think about
24855.81 -> API gateway, and this is really AWS's
definition, is that the API acts as the front
door for applications to access data, business
24856.81 -> logic and functionality for back end services.
So it's just a virtual, a bunch of virtual
24857.81 -> endpoints to connect to AWS So let's just
talk about some of the key features API gateway.
24858.81 -> So API gateway handles all tasks involving
accepting and processing up to hundreds of
24859.81 -> thousands of concurrent API calls, including traffic
management, authorization and monitoring.
24860.81 -> So it allows you to track and control any
usage of your API, you can throttle requests
24861.81 -> to help prevent attacks, you can expose HTTPS
endpoints to define a RESTful API. It's highly
24862.81 -> scalable, so everything happens automatically
and is cost effective. And you can send each
24863.81 -> API endpoint to a different target, and maintain
multiple versions of your API.
24864.81 -> All right, so let's look at how we actually
configure an API and the components involved.
24865.81 -> So the first most important thing are resources.
Okay, so the resource over here is forward slash
24866.81 -> projects. And so you know, they literally
just are URLs, that's what resources are.
24867.81 -> And in an API gateway project, you're gonna
want to be creating multiples of
24868.81 -> these resources, right, because you're not
gonna just have one endpoint. And so here
24869.81 -> we have a forward slash projects. And underneath,
you can see we have another resource, which
24870.81 -> is a child of this Parent Resource, which
would create this full URL here for you. You
24871.81 -> see this weird hyphen ID syntax that is actually
a variable. So that would be replaced with
24872.81 -> like three or four. But you know, what you
need to know is that resources are URLs, and
24873.81 -> they can have children, okay? Then what you're
going to want to do is you're going to want
24874.81 -> to apply methods to your resources. So methods
are your HTTP methods. So it's the usual, right:
24875.81 -> DELETE, PATCH, POST, PUT, OPTIONS, HEAD, okay.
And you can define multiple methods. So
24876.81 -> if you wanted to do a GET to projects ID,
you could do that. And you could also do a
24877.81 -> post. And now those are unique endpoints.
So both GET and POST are going to have different
24878.81 -> functionality. But yeah, you just need to
define all that stuff, right? So yeah, a resource
24879.81 -> can have multiple methods, resources can have
children, you know. And so once we've defined
24880.81 -> our API using resources and methods, the next
thing is to actually get our API published.
24881.81 -> And in order to do that, you're going to need
a bunch of different stages setup. And so
24882.81 -> stages are just like a way of versioning
your API for published versions. And
24883.81 -> you're normally going to do this based on
your environment. So you would have like production,
24884.81 -> QA, for quality assurance, staging, maybe
you'd have one for developers. So yeah, so
24885.81 -> you'll create those stages. Now once you create
a stage, you're going to get a unique URL
24886.81 -> that's automatically generated from AWS. So
here, I have one, and this is called the
24887.81 -> invoke URL, and this is the endpoint, you're
actually going to hit. So you do this, like
24888.81 -> forward slash prod. And then whatever your endpoints
are. So we saw in the previous example, we
24889.81 -> had forward slash tasks and projects, you just
append them on there and make the appropriate method,
24890.81 -> whatever it is, GET or POST. And that's how
you're gonna interact with API gateway. Now,
24891.81 -> you might look at this and say, I don't really
like the look of this URL, I wish I can use
24892.81 -> a custom one, you can definitely do that in
API gateway. So you could make this like
24893.81 -> api.exampro.co, instead of this big ugly
URL. But again, for each stage, there's
24894.81 -> going to be one. So here, it's prod. So
there'd be one here, QA and staging. All right.
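As a sketch of the invoke URL pattern described above, here is a tiny helper; the API ID, region, and resource path are made-up placeholders, not real endpoints from the video.

```python
# Hypothetical sketch of the invoke URL API Gateway generates per stage.
# "abc123", "us-east-1", and "/projects/3" are placeholders for illustration.
def invoke_url(api_id: str, region: str, stage: str, resource: str = "") -> str:
    # The generated hostname embeds the API ID and region; the stage and
    # resource path are appended to form the endpoint you actually hit.
    return f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}{resource}"

url = invoke_url("abc123", "us-east-1", "prod", "/projects/3")
```

Each stage (prod, QA, staging) would get its own URL of this shape unless a custom domain is mapped in front of it.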
24895.81 -> And so in order to deploy these versions to
a stage, you'd go to your actual
24896.81 -> API, and you'd have to do the Deploy API action.
Every time you make a change, you have to
24897.81 -> do the Deploy API action; it doesn't automatically
happen. That's something that confused me
24898.81 -> for some time, because you think, you made
this endpoint, why isn't it working? It's
24899.81 -> generally because you have to deploy. So we looked
at how to define the API, and also how to
24900.81 -> deploy it. The last thing is actually how
do we configure those endpoints. So when you
24901.81 -> select the method for your resource, you're
going to choose your integration type. And
24902.81 -> so we have a bunch of different ones. So we
got Lambda, HTTP, and Mock. You can send
24903.81 -> it to another AWS service, which, this
option is very confusing, but it's supposed
24904.81 -> to be there. And you have VPC Link, so it would
go to your on-premise environment or your
24905.81 -> local data center or local network. Okay.
So you do have those integration types. And
24906.81 -> then once you do that, the options are going
to vary, but the most common one is lambda
24907.81 -> function. And so what we're going to see is generally
this, and the idea is that you get to configure
24908.81 -> the request coming in and the response
going out. Okay, so you could apply authorization,
24909.81 -> or none, so you can make it so they
have to authenticate or be authorized. And then
24910.81 -> you have some configuration for lambda,
and you have some manipulation on the response
24911.81 -> going out. Okay.
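The request-in/response-out idea with a Lambda integration can be sketched as a minimal handler in the Lambda proxy style; the field names follow the proxy event shape, and the project data returned here is invented for illustration.

```python
import json

# Sketch of a Lambda function sitting behind an API Gateway proxy integration.
# API Gateway passes the HTTP request in `event` and expects a dict back
# with statusCode, headers, and a string body.
def handler(event, context):
    method = event.get("httpMethod")
    path_params = event.get("pathParameters") or {}
    if method == "GET":
        # Hypothetical payload for GET /projects/{id}
        body = {"project_id": path_params.get("id"), "name": "demo"}
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(body),
        }
    # Anything we didn't wire up gets rejected
    return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}
```

Because the event and response are plain dicts, this kind of handler is easy to exercise locally before wiring it to a method in the console.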
24912.81 -> So to get a bit of cost saving here and a bit
less of a burden on your API, what you
24913.81 -> can do is you can turn on API gateway cache,
so you can cache the results of common endpoints.
24914.81 -> Okay. So when enabled on a stage, API gateway
caches responses from your endpoint for a specific
24915.81 -> TTL. Okay, so that's just a period of time
before it expires, right, or Time To Live. API gateway
24916.81 -> responds to requests by looking up the responses
from the cache. So instead of making a request
24917.81 -> to the endpoint, okay, and the reason you're
gonna want to do this is it's going to reduce the
24918.81 -> number of calls to your endpoint, which is
going to save you money. And it's going to
24919.81 -> improve latency for the requests made to your
API, which is going to lead to a great experience.
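The caching behavior just described can be sketched as a tiny TTL cache; this is a toy model of the idea, not API Gateway's actual implementation.

```python
import time

# Toy TTL cache: responses are kept for a time-to-live and served from the
# cache instead of re-invoking the backend endpoint.
class TtlCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, response)

    def get_or_fetch(self, key, fetch):
        now = time.time()
        entry = self.store.get(key)
        if entry and entry[0] > now:      # cache hit, still fresh
            return entry[1], True
        response = fetch()                # cache miss: call the endpoint
        self.store[key] = (now + self.ttl, response)
        return response, False

cache = TtlCache(ttl_seconds=300)
resp1, hit1 = cache.get_or_fetch("GET /projects", lambda: {"status": 200})
resp2, hit2 = cache.get_or_fetch("GET /projects", lambda: {"status": 200})
```

The second identical request is answered from the cache, which is exactly where the latency and cost savings come from.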
24920.81 -> So definitely, you might want to turn it on.
So we're gonna look at CORS now. And this
24921.81 -> stands for cross-origin resource sharing.
And this is to address the issue of same-origin
24922.81 -> policy. So same-origin policy protects us
against XSS attacks, but there are times
24923.81 -> when we actually do want to access things
from another domain name, okay, that's not
24924.81 -> our own. And so that's what CORS allows
you to do. CORS is kind of like this document
24925.81 -> or these headers, which say, okay, this
domain is okay to run scripts on. Okay. And
24926.81 -> so in API gateway, this is something we're
going to be commonly turning on. Because by
24927.81 -> default, CORS is not enabled. And so you're
gonna have to enable it for the entire API
24928.81 -> or particular endpoints, with the CORS
headers you want to
24929.81 -> be passed along with those endpoints. And
so here, you'd say, okay, well, POST is
24930.81 -> allowed, OPTIONS is allowed, and see where
it says Access-Control-Allow-Origin, you're
24931.81 -> doing a wildcard saying everything's allowed.
Okay? So that's how you would set it. But
24932.81 -> you know, just understand what CORS is.
CORS is these headers that say, this
24933.81 -> domain is allowed access to run these
things from this location, okay. And so
24934.81 -> CORS is always enforced by the client, the client
being the browser. Okay, so that
24935.81 -> means that the browser is going to look for
CORS headers, and if it has them, then it's going
24936.81 -> to, you know, do something, okay. So there
you go. So there's this common vulnerability
24937.81 -> called cross-site scripting, or XSS attacks.
And this is when you have a script, which
24938.81 -> is trying to be executed from another website,
on your website. And it's for malicious reasons.
24939.81 -> Because a lot of people when they're trying
to do something malicious, they're not going
24940.81 -> to be doing it from your site, because that's
from you. It's gonna be from somebody else.
24941.81 -> So in order to prevent that, by default, web
browsers are going to restrict the ability
24942.81 -> to execute scripts that are cross site from
another site. But in order to allow scripts
24943.81 -> to be executed cross-site, you need CORS,
okay? That's the concept of the browser saying,
24944.81 -> Okay, these scripts are allowed to execute
from another website. So again, web browsers
24945.81 -> do enforce this by default. But if you're
using tools such as Postman and curl, they're
24946.81 -> going to ignore same origin policy. So if
you're ever wondering why something's not
24947.81 -> working, cross site, it's going to likely
be this.
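The wildcard setup described above corresponds roughly to a preflight response like the sketch below; the exact header values are illustrative, not taken from the video's configuration.

```python
# Hedged sketch of the headers a CORS-enabled endpoint returns for a
# preflight OPTIONS request. The wildcard origin mirrors the permissive
# example above; real APIs usually list a specific domain instead.
def preflight_response():
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": "*",            # any domain may call
            "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
            "Access-Control-Allow-Headers": "Content-Type,Authorization",
        },
        "body": "",
    }

resp = preflight_response()
```

The browser reads these headers on the preflight and only then allows the actual cross-origin request, which is what "enforced by the client" means in practice.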
24948.81 -> Right, so we're on to the API gateway cheat
sheet. So API gateway is a solution for creating
24949.81 -> secure APIs in your cloud environment at
any scale, create APIs that act as a front
24950.81 -> door for applications to access data, business
logic or functionality from back end services,
24951.81 -> API gateway throttles API endpoints at 10,000
requests per second. We didn't mention that
24952.81 -> in the core content, but it's definitely exam
question that might come up where they're
24953.81 -> like, Oh, you have something, you're going
beyond 10,000. And it's not working? Well.
24954.81 -> That's the reason why is that there's a hard
limit of 10,000 requests per second. And then
24955.81 -> you have to ask for an increase, a service limit
increase, through support. Stages allow you to
24956.81 -> have multiple published versions of your API.
So prod staging QA. Each stage has an invoke
24957.81 -> URL, which is the endpoint you use to interact
with your API, you can use a custom domain
24958.81 -> for your invoke URL. So it could be
api.exampro.co to be a bit prettier.
24959.81 -> You need to publish your API via the deploy
API action, you choose which stage you
24960.81 -> want to publish your API. And you have to
do this every single time you make a change.
24961.81 -> It's annoying, but you have to do it. Resources
are URLs. So just think forward slash projects.
24962.81 -> Resources can have child resources.
So the child here being hyphen-id; the hyphen
24963.81 -> is like a syntax saying
this is a custom variable that could be three or
24964.81 -> four. Not likely an exam question, but it's good
for you to know. You define multiple methods
24965.81 -> on your resources. So you're gonna have your
get, post, delete, whatever you want. CORS
24966.81 -> issues are common with API gateway. CORS can
be enabled on all or individual endpoints.
24967.81 -> caching improves latency and reduces the amount
of calls made to your endpoint. Same-origin
24968.81 -> policies help to prevent XSS attacks. Same-origin
policies are ignored by tools like Postman
24969.81 -> or curl. So same-origin policies just don't
apply there, which makes it easier
24970.81 -> to work with those tools.
CORS is also enforced by the client; the client
24971.81 -> would be the browser. So the browser
is going to definitely look for CORS headers
24972.81 -> and interpret them. You can require authorization
to your API via Amazon Cognito or a custom
24973.81 -> lambda. So just so you know, you can protect
the calls to your API.
24974.81 -> Hey, this is Andrew Brown from ExamPro, and
we are looking at Amazon Kinesis, which is
24975.81 -> a scalable and durable real-time data streaming
service, used to ingest and analyze data in
24976.81 -> real time from multiple sources. So again,
Amazon Kinesis is AWS's fully managed solution
24977.81 -> for collecting, processing and analyzing
streaming data in the cloud. So when you need
24978.81 -> real time, think kinesis. So some examples
where kinesis would be of use stock prices,
24979.81 -> game data, social media data, geospatial data,
clickstream data, and kinesis has four types
of streams: we have Kinesis Data Streams,
24980.81 -> Kinesis Firehose delivery streams, Kinesis
Data Analytics, and Kinesis Video Streams,
and we're going to go through all four of
24982.81 -> them. So we're gonna first take a look at
kinesis data streams. And the way it works
24983.81 -> is you have producers on the left hand side,
which are going to produce data, which is
24984.81 -> going to send it to the kinesis data stream,
and that data stream is going to then ingest
24985.81 -> that data. And it has shards, so it's going
24986.81 -> to take that data and distribute it amongst
24986.81 -> its shards. And then it has consumers. And
so consumers with data streams, you have to
24987.81 -> manually configure those yourself using some
24988.81 -> code. But the idea is you have these EC2 instances
24988.81 -> that are specialized to then consume that
data and then send it to something in particular.
24989.81 -> So we have a consumer that is specialized
to sending data to Redshift, then DynamoDB,
24990.81 -> then s3, and then EMR, okay, so whatever you
want the consumer to send it to, it can send
24991.81 -> it wherever it wants. But the great thing
about data streams is that when data enters
24992.81 -> into the stream, it persists for quite a while.
So it will be there for 24 hours, by default,
24993.81 -> you could extend it up to 168 hours. So
if you need to do more with that data, and
24994.81 -> you want to run it through multiple consumers,
or you want to do something else with it,
24995.81 -> you can definitely do that with it. The way
you pay for kinesis data streams, it's like
spinning up an EC2 instance, except you're
spinning up shards, okay. And that's what's
24997.81 -> going to be the cost there. So as long as
the shard is running, you pay X amount of
24998.81 -> costs for X amount of shards. And that is
kinesis data. So onto kinesis firehose delivery
24999.81 -> stream, similar to data streams, but it's
a lot simpler. So the way it works is that
25000.81 -> it also has producers and those producers
send data into kinesis firehose. The difference
25001.81 -> here is that as soon as data is ingested,
so like a consumer consumes that data, it
25002.81 -> immediately disappears from the queue. Okay,
so data is not being persisted. The other
25003.81 -> trade off here is that you can only choose
one consumer. So you have a few options, you
25004.81 -> can choose s3, redshift, Elasticsearch, or
Splunk, generally, people are going to be
25005.81 -> outputting to s3. So there's a lot more simplicity
here. But there's also limitations around
25006.81 -> it. The nice thing though, is you don't have
to write any code to consume data. But that's
25007.81 -> the trade off is you don't have any flexibility
on how you want to consume the data, it's
25008.81 -> very limited. firehose can do some manipulations
to the data that is flowing through it, it
25009.81 -> can transform the data. So if you have something
where you want it from JSON, you want to convert
25010.81 -> it to Parquet. There are limited options
for this. But the idea is that you can put
25011.81 -> it into the right data format, so that if
it gets inserted into s3, so maybe Athena
25012.81 -> would be consuming that, that it's now in
parkette file, which is optimized for Athena,
25013.81 -> it can also compress the file. So just simply
zip them, right. There's different compression
25014.81 -> methods, and it can also secure them. So there's
that advantage. The big advantage is firehose
25015.81 -> is very inexpensive, because you only pay
for what you consume. So only data that's
25016.81 -> ingested is what you pay for. You can
kind of think of it like, I don't know, even
25017.81 -> lambda or fargate. So the idea is you're not
paying for those running shards, okay. And
25018.81 -> it's just simpler to use. And so if you don't
need data retention, it's a very good. Okay,
25019.81 -> on to kinesis video streams. And as the name
implies, it is for ingesting video data. So
25020.81 -> you have producers, and that's going to be
sending either video or audio encoded data.
25021.81 -> And that could be from security cameras, web
cameras, or maybe even a mobile phone. And
25022.81 -> that data is going to go into kinesis video
streams, it's going to secure and retain that
25023.81 -> encoded data so that you can consume it from
services that are used for analyzing video
25024.81 -> and audio data. So you got SageMaker, Rekognition,
or maybe you need to use TensorFlow or you
25025.81 -> have a custom video processing or you have
something that has like HLS-based video playback.
25026.81 -> So that's all there is to it. It's just so
you can analyze and process video streams,
25027.81 -> applying like ML or a video processing service.
25028.81 -> Now we're gonna take a look at kinesis data
analytics. And the way it works is that it
25029.81 -> takes an input stream and then it has an output
stream. And these can either be firehose or
25030.81 -> data streams. And the idea is you're going
to be passing information through Data Analytics. What
25031.81 -> this service lets you do is it lets you run
custom SQL queries so that you can analyze
25032.81 -> your data in real time. So if you have to
do real time reporting, this is the service
25033.81 -> you're going to want to use. The only downside
is that you have to use two streams. So it
25034.81 -> can get a little bit expensive. But for data
analytics, it's really great. So that's
25035.81 -> all there is. So it's time to look at kinesis
cheat sheet. So Amazon Kinesis is the AWS
25036.81 -> solution for collecting, processing and analyzing
streaming data in the cloud. When you need
25037.81 -> real time, think kinesis. There are four types
of streams, the first being kinesis data streams,
25038.81 -> and with that, you're paying per shard that's
running. So think of an EC2 instance, you're
25039.81 -> always paying for the time it's running. So
25040.81 -> Kinesis Data Streams is just like that. Data
can persist within that stream, data is ordered,
and every consumer keeps its own position,
25041.81 -> consumers have to be manually added. So they
have to be coded to consume, which gives you
25042.81 -> a lot of custom flexibility. Data persists
for 24 hours by default, up to 168 hours.
25043.81 -> Now looking at kinesis firehose, you only
pay for the data that is ingested, okay, so
25044.81 -> think of like lambdas, or fargate. The idea
is that you're not paying for a server that's
25045.81 -> running all the time. It's just data that's
ingested. Data immediately disappears once
25046.81 -> it's processed by a consumer. You only have the
choice from a predefined set of services to
25047.81 -> either s3, Redshift, Elasticsearch, or
Splunk. And they're not custom. So you're
25048.81 -> stuck with what you got. Kinesis Data Analytics
allows you to perform queries in real time.
25049.81 -> So it needs Kinesis Data Streams or
Firehose as the input and the output, so you
25050.81 -> have to have two additional streams to use
the service, which makes it a little bit expensive.
25051.81 -> Then you have Kinesis Video Streams, which
is for securely ingesting and storing video
25052.81 -> and audio encoded data for consumers such as
SageMaker, Rekognition, or other services
25053.81 -> to apply machine learning and video processing.
To actually send data to the streams, you
25054.81 -> have to either use KPL, which is the Kinesis
Producer Library, which is like a Java library
25055.81 -> to write to a stream. Or you can write data
to a stream using the AWS SDK. KPL is more
25056.81 -> efficient, but you have to choose what you
need to do in your situation. So there is
25057.81 -> the kinesis cheat sheet.
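The per-key ordering mentioned for data streams comes from how records are routed: the MD5 hash of a record's partition key picks its shard. Here is a toy sketch of that routing, using a simple modulo in place of the real 128-bit hash-range assignment.

```python
import hashlib

# Toy model of Kinesis record routing: hash the partition key and map it
# onto one of N shards. Real shards own explicit 128-bit hash key ranges;
# the modulo here is a simplification for illustration.
def shard_for(partition_key: str, shard_count: int) -> int:
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % shard_count

# Records with the same partition key always land on the same shard,
# which is what preserves per-key ordering for consumers.
a = shard_for("user-42", 4)
b = shard_for("user-42", 4)
```

This also shows why a hot partition key can overload a single shard: every record for that key is routed to the same place.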
25058.81 -> Hey, this is Andrew Brown. And we are looking
at AWS storage gateway, which is used for
extending and backing up on-premise storage
25060.81 -> to the cloud. Storage gateway provides you seamless and
25060.81 -> secure integration between your organization's
on-premise IT environment and AWS storage
25061.81 -> infrastructure. We can securely store our
data to the AWS cloud. And it's scalable
25062.81 -> and cost effective. It uses virtual machine
images in order to facilitate this on your
25063.81 -> on premise system. So it supports both VMware
ESXi and Microsoft Hyper-V. And once it's
25064.81 -> installed and activated, you can use the AWS
console to create your gateway. Okay, so there
25065.81 -> is an on premise component and a cloud component
to connect those two things. And we have three
25066.81 -> different types of gateways, which we're going
to get into now. So the three types of gateways,
25067.81 -> we have file gateway, which uses NFS, or SMB,
which is used for storing your files in s3,
25068.81 -> then you have volume gateway, which is using
iSCSI. And this is intended as a backup solution.
25069.81 -> And we have two different methods of storing
those volumes. And then the last is tape gateway,
25070.81 -> which is for backing up your virtual
tape library. Here we're looking at file gateway.
25071.81 -> And what it does is it allows you to use either
the NFS, or SMB protocol, so that you can
25072.81 -> create a mount point so that you can treat
s3, just like a local hard drive or local
25073.81 -> file system. And so I always think of file
gateway as extending your local storage onto
25074.81 -> s3. Okay, and there's some details here we
want to talk about, so ownership, permissions,
25075.81 -> and timestamps are all stored within s3 metadata
for the objects that are associated with the
25076.81 -> file. Once the file is transferred to s3,
it can be managed as a native s3 object, and
25077.81 -> bucket policies, versioning, lifecycle
management, and cross-region replication apply
25078.81 -> directly to your objects stored in your bucket.
So not only do you get to use s3, like a normal
25079.81 -> file system or hard drive, you also get all
the benefits of s3.
25080.81 -> Now we're going to look at the second type
of storage gateway volume gateway. So volume
25081.81 -> gateway presents your application with disk
volumes using internet small computer systems
25082.81 -> interface. So iSCSI block protocol. Okay,
so the idea is that you have your local storage
25083.81 -> volume and using this protocol through storage
gateway, we're going to be able to interact with
25084.81 -> s3 and store a backup of our storage volume
as an EBS snapshot, this is going to depend
25085.81 -> on the type of volume gateway we use because
there are two different types and we'll get
25086.81 -> into that in a later slide. But let's just
get through what we have here in front of
25087.81 -> us. So the data is written to the volumes
and can be asynchronously backed up as a point
25088.81 -> in time snapshot of the volume and stored
in the cloud as EBS snapshots. Snapshots are
25089.81 -> incremental backups that capture only changed
blocks in the volume. All snapshot storage
25090.81 -> is also compressed to help minimize your storage
charges. So I like to think of this as giving
25091.81 -> you the power of EBS locally, because if you
were to use EBS on AWS, it does all these
25092.81 -> cool things for you, right? So it's just
treating your local drives like EBS drives,
25093.81 -> and it's doing all this through s3. So let's
go look at the two different types here. So
25094.81 -> the first type is volume gateway stored
volumes. And the key thing is that it's
25095.81 -> where the primary data is being stored. Okay,
so the primary data is stored locally while
25096.81 -> asynchronously backing up the data to AWS.
So all your local data is here, and
25097.81 -> then you just get your backup on AWS. So it
provides on premise applications with low
25098.81 -> latency access to the entire data set while
still providing durable offset backups. It
25099.81 -> creates storage volumes and mounts them as
iSCSI devices from your on-premise servers.
25100.81 -> As we saw in the last illustration, any data
written to the stored volumes is stored on
25101.81 -> your on premise storage hardware. That's what
this is saying here with the primary data.
25102.81 -> EBS snapshots are backed up to AWS
s3 and stored volumes can be between one gigabyte
25103.81 -> to 16 terabytes in size. Let's take a look
at cached volumes. So the difference here between
25104.81 -> stored volumes and cached volumes is that the primary
data is stored on AWS. And we are caching the
25105.81 -> most frequently accessed files. So that's
the difference here. And the key thing to
25106.81 -> remember between stored
volumes and cached volumes is where the primary
25107.81 -> data is. So why would we want to do this?
Well, it minimizes the need to scale your
25108.81 -> on premise storage infrastructure while still
providing your applications with low latency
25109.81 -> data access, create storage volumes up to
32 terabytes in size and attach them as
25110.81 -> iSCSI devices from your on-premise servers.
Your gateway stores data that you will write
25111.81 -> to these volumes in s3 and retain recently
read data in your on-premise storage, so just
25112.81 -> caching those most frequently used files in the gateway
cache and upload buffer storage. Cached volumes
25113.81 -> can be between one gigabyte and 32 terabytes
in size. So there you go, that is volume gateway.
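The incremental snapshot idea above can be sketched as a diff over fixed-size blocks; this is a toy model only, meant to show why follow-up backups carry just the changed blocks rather than the whole volume.

```python
# Toy sketch of an incremental snapshot: compare the current volume state
# against the previous snapshot block by block, and keep only the blocks
# that differ. The "blocks" here are just strings for illustration.
def changed_blocks(previous: list, current: list) -> dict:
    return {
        i: block
        for i, (old, block) in enumerate(zip(previous, current))
        if old != block
    }

snap1 = ["aaaa", "bbbb", "cccc", "dddd"]   # first snapshot: full copy
snap2 = ["aaaa", "BBBB", "cccc", "dddd"]   # one block modified since then
delta = changed_blocks(snap1, snap2)       # only the changed block is captured
```

Since each incremental backup is small, storage charges stay low even when snapshots are taken frequently, which matches the compression and incremental-capture points above.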
25114.81 -> We're looking at the third type of storage
gateway tape gateway. And as the name implies,
25115.81 -> it's for backing up virtual tape libraries
to AWS. So it's a durable cost effective solution
25116.81 -> to archive your data in AWS. You can leverage
existing tape-based backup application infrastructure,
25117.81 -> storing data on virtual tape cartridges that you
create on your tape gateway, each tape gateway
25118.81 -> is pre configured with a media changer and
tape drives. I know I'm not showing that in
25119.81 -> here. But you know, I think it's just better
to see the simpler visualization, but which
25120.81 -> are available to your existing client backup
25120.81 -> applications as iSCSI devices. You add tape
25121.81 -> cartridges as you need to archive your data,
and it supports these different tape
25122.81 -> backup services. Okay, so you got Veeam, you got Backup
Exec, NetBackup. There's also one from Symantec
25123.81 -> that's called Backup Exec, I don't
know, they used to be there, got bought out, I don't
25124.81 -> know. So I just listed this one. So maybe
I made a mistake there. But it's not a big
25125.81 -> deal. But the point is, is that you have virtual
tape libraries, and you want to store them
25126.81 -> on s3, and it's going to be using s3 Glacier
because of course that is for long-term storage.
25127.81 -> So there you go. That's it.
25128.81 -> So we're at the end of storage gateway. And
here I have a storage gateway cheat sheet
25129.81 -> which summarizes everything that we've learned.
So let's start at the top here, storage gateway
25130.81 -> connects on premise storage to Cloud Storage.
So it's a hybrid storage solution. There are
25131.81 -> three types of gateways: file gateway, volume
gateway, and tape gateway. File gateway lets
25132.81 -> s3 act as a local file system using NFS or
SMB. And the easy way to think about this
25133.81 -> is think of like a local hard drive being
extended into s3. Okay. Volume gateway is
25134.81 -> used for backups and has two types stored
and cached. Stored volume gateway continuously
25135.81 -> backs up local storage to s3 as EBS
snapshots, and it's important for you to remember
25136.81 -> that the primary data is on premises that's
what's going to help you remember the difference
25137.81 -> between stored and cached. Stored volumes
are between one gigabyte to 16 terabytes in
25138.81 -> size. Cached volume gateway caches the most
frequently used files on premise and the primary
25139.81 -> data is stored on s3 again, remember the difference
between where the primary data is being stored.
25140.81 -> Cached volumes are between one gigabyte and
32 terabytes in size, and tape gateway backs
25141.81 -> up virtual tapes to s3 Glacier for long archival
storage. So there you go, we're all done with
25142.81 -> storage. Hey, this is Andrew Brown. And we
are going to do another follow along. And
25143.81 -> this is going to touch multiple services.
The core to it is lambda, but we're going
25144.81 -> to do static website hosting, use dynamodb
use SNS and API gateway, we're all going to
25145.81 -> glue it together, because I have built here
a contact form, and we are going to get it
25146.81 -> hosted and make it serverless. Okay, so let's
get to it. So, um, we're gonna first try to
25147.81 -> get this website here hosted on s3, okay,
and so what I want you to do is make your
25148.81 -> way to the s3 console, you can just go up
to services here and type s3 and click here
25149.81 -> and you will arrive at the same location.
And we're going to need two buckets. So I've
25150.81 -> already registered a domain called frankielyons.com
here in Route 53. Okay, and we're
25151.81 -> going to have to copy that name exactly here
and create two buckets, okay, and these buckets
25152.81 -> are going to have to be the exact name as
the domain name. So we're going to first do
25153.81 -> the naked domain, which is just frankielyons.com,
okay. And then we're going to need to
25154.81 -> do the second one here, once this creates,
it's taking its sweet time, we're gonna have
25155.81 -> to do that with the sub domain, okay. And
so now we have both our buckets, okay. And
25156.81 -> so now we're going to click into each one
and turn on static website hosting. So going
25157.81 -> to management over here, or sorry, properties,
there is a box here called static website
25158.81 -> hosting. And we're going to have this one
redirect to our subdomain here. So we'll do
www dot, I'm not even gonna try to spell that.
So I'm just gonna copy paste it in there.
25160.81 -> Okay. And we're just going to hit save. Alright,
and so we have a static website hosting redirect
25161.81 -> set up here. And then we're going to go to
back to Amazon s3 and turn on static website
25162.81 -> hosting for this one here. So we're going
to go to properties, and set static website
25163.81 -> hosting and use this bucket. And we're going
to make it index dot HTML error dot html.
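As an aside, if you ever want to script this step instead of clicking through the console, the static website hosting settings we just picked boil down to a small configuration document, something like this (a sketch; the file names are just the defaults we chose):

```json
{
  "IndexDocument": { "Suffix": "index.html" },
  "ErrorDocument": { "Key": "error.html" }
}
```

The naked-domain bucket would instead use a `RedirectAllRequestsTo` block pointing at the www host name, and either document can be applied with `aws s3api put-bucket-website --bucket <bucket> --website-configuration file://website.json`.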
25164.81 -> And yeah, that's good. So now we have our
other stuff turned on, this is going to need
25165.81 -> to be public facing because it is a bucket.
So we're going to go over to our permissions
25166.81 -> here and edit the block public access. And
we're going to have to hit Save here, okay.
25167.81 -> And we just need to type confirm. Okay. And
now we should be able
25168.81 -> to upload content here. So let's go ahead
and upload our website here. So I do have
25169.81 -> it on my desktop here under a folder called
web. So this is all the stuff that we need
25170.81 -> to run it probably not the package dot JSON
stuff. So I'm just going to go ahead here
25171.81 -> and grab this, okay. And we're just going
to click and drag that there. And we'll just
25172.81 -> upload it. Okay, and that won't take too long
here. And now if we want to preview the static
25173.81 -> website hosting, we're going to go to our
properties here, and just right click on this
25174.81 -> endpoint to or I guess you can right click,
we'll just copy it. Okay. And we'll just give
25175.81 -> it a paste and give it a look here. So we're
getting a 403 forbidden, um, this shouldn't
25176.81 -> be the case, because we have it. Oh, you know,
it's not WW. Oh, no, and just www. So that's
25177.81 -> a bit confusing, because we should have this
turned on. So I think what it is, is that
25178.81 -> I need to update the bucket policy. Okay,
so I'm just going to go off screen here and
grab the bucket policy, it's in the AWS
25180.81 -> documentation, I just can't remember it off
25180.81 -> the top my head. So I just did a quick Google
on static website hosting bucket policy. And
I arrived here on the AWS docs. And so what
we need is this policy here. Okay.
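For reference, the policy in question is the standard public-read one from the docs, and it looks roughly like this (swap the bucket name in the Resource ARN for your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.example.com/*"
    }
  ]
}
```

The Sid is the optional label that gets removed in a moment; everything else is required for public read access.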
25182.81 -> And so I'm just going to go copy it here.
And I'm going to go back to s3, and I'm going
25183.81 -> to paste in this bucket policy. Now I do need
to change this to match the bucket name here.
25184.81 -> So we'll just copy that here at the top. Okay,
and so what we're doing is we're saying allow,
25185.81 -> allow read access to all the files within
this bucket, okay. And this Sid, we can name
25186.81 -> it whatever you want, it's actually optional,
I'm just gonna remove it to clean it up here,
25187.81 -> okay. And we should be able to save this.
Okay, and we are and now this bucket has public
25188.81 -> access. So if we go back to this other tab
here, and do refresh, our website is now up.
25189.81 -> So there is a few other things we need to
do. So this form when we submit it, I wanted
25190.81 -> to send off an email via SNS, and I also want
it to, I want it to also stored in Dynamo
25191.81 -> dB, so we have a reference of it. So let's
go ahead and set up an SNS topic and then
25192.81 -> we'll proceed to do that. Alright, so let's make
our way over to SNS here. So I'm just gonna
25193.81 -> go back up to the top here. Just click down
services here and type SNS and we're going
25194.81 -> to open this up in a new tab. Because it's
great to have all these things open. And we'll
25195.81 -> just clear out these ones here. Okay? And
we're going to get to SNS here, and what I'm
25196.81 -> going to do is on the first time I'm here,
so I get this big display here, but a lot
25197.81 -> of times, you can just click the hamburger
here and get to what you want. So I'm just
25198.81 -> going to go to topics on the left hand side,
because that's what we need to create here.
25199.81 -> I'm going to create a topic. And I'm going
to name this topic, um, Frankie Alliance.
25200.81 -> Okay, so I'm just going to grab that domain
name here. Okay, I'm just gonna say topic
25201.81 -> here. And I don't need an optional display
name, I guess it depends, because sometimes
25202.81 -> it's used in the actual email here that's
actually displayed. So I'm just going to copy
25203.81 -> this here and just put F, and here, Frankie
Alliance, okay, I think we can have uppercase
25204.81 -> there. And we have a few options here, we
can encrypt it, I'm not going to bother with
25205.81 -> that, we can set our access policy, we're
gonna leave that, by default, we have this
25206.81 -> ability to do retries, we're not doing HTTP,
we're going to be using this for email. So
25207.81 -> this doesn't matter. And, you know, the rest
are not important. So I'm going to hit Create
25208.81 -> topic here. Okay, and what that's going to
do is that's going to create an Arn for us.
25209.81 -> And so we're going to have to come back to
this later to utilize that there. Okay. But
25210.81 -> what we're going to need to do is if we want
to receive emails from here, we're going to
25211.81 -> have to subscribe to this topic. So down below,
we'll hit the Create subscription here. And
25212.81 -> we're going to choose the protocol. And so
I want it to be email, and I'm
25213.81 -> going to choose andrew@exampro.co. All
right, I'm just gonna drop down here, see
25214.81 -> if there's anything else no, nothing important.
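Just to preview where this topic gets used from code later on: publishing a notification to it looks roughly like this. This is only a sketch, not the course's actual file; the function and parameter names are made up, and the SNS client is passed in so the example can run without AWS credentials (with the real aws-sdk you'd pass in `new AWS.SNS()`):

```javascript
// Sketch: publish a contact-form notification to an SNS topic.
// `sns` is anything shaped like the aws-sdk SNS client: publish(params).promise().
function sendEmail(sns, topicArn, contact) {
  const params = {
    TopicArn: topicArn, // the ARN shown on the topic's detail page
    Subject: `New contact from ${contact.name}`,
    Message: `${contact.name} <${contact.email}> says:\n\n${contact.message}`
  };
  // Every confirmed subscriber (like our email subscription) receives this message.
  return sns.publish(params).promise();
}
```

Because the client is injected, you can exercise the message shape locally with a fake client before wiring up the real one.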
And
25215.81 -> I'm just gonna hit Create subscription. So
what is that going to do? It's
25216.81 -> going to send me a confirmation email to say,
hey, do you really want to subscribe to this?
25217.81 -> So you're going to get emails, I'm going to
say yes, so I'm just going to flip over to
25218.81 -> my email here, and go ahead and do that. Alright,
and so here came the email was nearly instantaneous.
25219.81 -> And so I'm just going to hit the confirmation
here, okay. And now that's going to confirm
25220.81 -> my subscription. Okay, so that means I'm going
to now receive emails if something gets pushed
25221.81 -> to that topic. All right. So yeah, if we go
back to SNS here, you can see it was in a
25222.81 -> pending state, if we just do a refresh here.
Okay, now we have a confirmation. So there
25223.81 -> you go. Um, now we can move on to creating
our Dynamo DB table. Alright, so now that
25224.81 -> we have SNS, let's proceed to create our Dynamo
DB table. So I want to go to services at the
25225.81 -> top here, type in Dynamo dB. And we will open
this up in a new tab because we will have
25226.81 -> to come back to all these other things here.
And we're just gonna wait for this to load
25227.81 -> and we're going to create ourselves a dynamodb
table. So we'll hit Create. And I'm going
25228.81 -> to name this based off my domain name. So
I'm gonna say Frankie Alliance. And we need
25229.81 -> to set a partition key. So a good partition
key is something that is extremely unique,
25230.81 -> like a user ID, or in this case, an email.
So we'll use that as email. And then for the
25231.81 -> sort key, we are going to use a created date.
Okay, so there is no datetime data structure
25232.81 -> here in dynamodb. So we'll just have to go
with a string, and that's totally fine. And
25233.81 -> there are some defaults here. So it says no
secondary indexes, provisioned capacity five,
25234.81 -> and five, etc, etc. So we're just going to
turn that off, and we're gonna override this
25235.81 -> ourselves. So there is the provisioned, and
we can leave it at provision, I'd rather just
25236.81 -> go Yeah, we'll leave it at provision for the
time being. But I'm going to override these
25237.81 -> values. So I'm just gonna say one and one,
okay. And the reason why is just because I
25238.81 -> don't imagine we're gonna have a lot of traffic
here. So being able to do one read and one
25239.81 -> write per second should be extremely capable
for us here, right? So this should be no issue.
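And to preview how this table gets written to later: an insert ends up looking something like this. Again just a sketch with made-up names, and the DocumentClient-style object is injected so you can see the item shape without real AWS calls. Notice the item carries both our partition key (email) and our sort key (created, an ISO 8601 string, since there's no native datetime type):

```javascript
// Sketch: insert one contact-form submission into the DynamoDB table.
// `db` is anything shaped like AWS.DynamoDB.DocumentClient: put(params).promise().
function insertRecord(db, tableName, contact) {
  const item = {
    email: contact.email,               // partition key
    created: new Date().toISOString(),  // sort key, stored as a string
    name: contact.name,
    message: contact.message
  };
  return db.put({ TableName: tableName, Item: item }).promise();
}
```

ISO 8601 strings sort lexicographically in chronological order, which is exactly why they work fine as a string sort key.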
25240.81 -> And then we'll just go down below, this is
all fine. We could also encrypt this at rest,
25241.81 -> I'm just gonna leave that alone. Okay, and
that all looks good to me. So I'm going to
25242.81 -> hit Create there. Okay, and so this table
is just going to create here and so what we're
25243.81 -> looking for is that Arn. So once we have the
Arn for the table, then we will be able to
25244.81 -> go ahead and hook that up into our lambda
code. Okay. So yeah, this all looks great.
25245.81 -> Um, so I guess maybe the next thing is now
to actually get the lambda function
25246.81 -> working. So maybe we'll do that or I guess
we could go ahead and actually put this behind
25247.81 -> CloudFront and hook up the domain. I think
we'll do that first. Okay, so we're gonna
25248.81 -> go and do some CloudFront here.
25249.81 -> So I guess the next thing here is to actually
get our, our proper domain here, so we're
25250.81 -> not using the AWS one. So I've actually already
registered a domain here and I might actually
25251.81 -> include those steps here from another tutorial.
So if you feel that there
25252.81 -> is a jarring section, it's just because I am
bringing that in over here. But we already
25253.81 -> have the domain name. And so once you have
your domain name, that means that you can
25254.81 -> go ahead and start setting up CloudFront and
ACM. So we're gonna want to do ACM first.
25255.81 -> So we're going to type in ACM here. Okay,
that's AWS Certificate Manager. And that's
25256.81 -> how we're going to get our SSL certificate, make
sure you click on the one on the left hand
25257.81 -> side provision certificates, because this
one is like $500 starting, so it's very expensive.
25258.81 -> So just click this over here to provision
just make sure again, that is the public certificate
and not the private. Private, again, is very
expensive, we're going to hit request,
25260.81 -> we're going to put the domain name in. So
we have the domain name there, I'm always
25261.81 -> really bad at spelling it. So I'm just gonna
grab it here. Let's really hold it, we don't
25262.81 -> need spelling mistakes.
25263.81 -> And we're going to have the naked domain.
And we're also going to do wildcard. And so
25264.81 -> just by doing this, we're going to cover all
our bases of the naked and all subdomains.
25265.81 -> So I strongly recommend that you go ahead
and do this, when you are creating your certificates,
25266.81 -> we're going to hit Next, we're going to use
DNS validation; email is just a very old way,
25267.81 -> nobody really does it that way anymore. We're
going to hit review, we're gonna hit confirm
25268.81 -> request, okay. And so what's happening here
is that it's now in pending validation.
25269.81 -> And so we need to confirm that we own this
domain. And so we need to add a record to
25270.81 -> our domain name, since our domain name is
hosted on Route 53, that's going to make it
25271.81 -> very easy to add these records, it's going
to be one click of a button. So I'm just gonna
25272.81 -> go ahead here and hit create, and then go
here and hit Create. Okay, and so um, yeah,
25273.81 -> this shouldn't take too long, we'll hit continue,
okay. And we're just going to wait for this
25274.81 -> to go pending to issued, okay, this is not
going to take very long, it takes usually
25275.81 -> a few minutes here. So we're just going to
wait, I'm going to go grab a coconut water,
25276.81 -> and I'll be back here shortly. Alright, so
I'm back here, and it only took us a few minutes
25277.81 -> here. And the status has now issued, so meaning
our SSL certificate is ready for use. So that
25278.81 -> means we can now create our CloudFront distribution.
So what I want you to do is go up to here
25279.81 -> and type in CloudFront. Okay. And we're going
to make our way over to CloudFront. So, here
25280.81 -> we are in CloudFront. And we're just going
to create ourselves a new distribution, we
25281.81 -> have web and RTMP, we're not going to be using
RTMP. This is for Adobe Flash Media Server.
25282.81 -> And very rarely does anyone ever use Adobe
anymore, so it's going to have to be the web
25283.81 -> distribution. And we're gonna have to go through
the steps here. So the first thing is, we
25284.81 -> need to select our actual bucket here. So
we are going to be doing this for the www,
25285.81 -> okay. And we're going to restrict bucket access,
because we don't want people directly accessing
25286.81 -> the website via this URL here, we want to
always be through the domain name. So that's
25287.81 -> what this option is going to allow us to do.
It's going to create a new origin identity,
25288.81 -> we can just let it do as it pleases, we need
to grant permission. So I'm gonna say yes,
25289.81 -> update my bucket policy. So that should save
us a little bit of time there. Now on to the
25290.81 -> behavior settings, we're going to want to
redirect HTTP to HTTPS, because really, no
25291.81 -> one should be using HTTP, we are going to
probably need to allow, we'll probably have
25292.81 -> this: GET and HEAD. I was just thinking
whether we need POST and PATCH. But that's
25293.81 -> only for if we were uploading files to s3
through the distribution. So I think we can just
25294.81 -> leave it as GET and HEAD. So we're fine
there. We're just going to keep on scrolling
25295.81 -> down here, we're not gonna restrict access
to this is a public website, we're just going
25296.81 -> to drop down and choose US, Canada and Europe,
you can choose best performance, I just feel
25297.81 -> this is going to save me some time because
it does take a long time for this thing to
25298.81 -> distribution to create. So the fewer edge
locations, I think the less time it takes
25299.81 -> to create that distribution, we're going to
need to put our alternate domain name in here.
25300.81 -> So that is just our domain names, we're going
to put www dot and again, I don't want to
25301.81 -> spell it wrong. So I'm just going to copy
it here manually. Okay. Back to CloudFront
25302.81 -> here and we'll just do frankielyons.com.
Now we need to choose our custom SSL, we will
drop down here and choose frankiealliance.com.
Okay, and we need to set our default
25304.81 -> root object, that's going to be index dot
HTML. That's how it knows to look at your
25305.81 -> index HTML page right off the bat. Okay, and
that's everything. So we're gonna hit Create
25306.81 -> distribution. And luckily, there are no errors.
There are no errors. So we are in good shape
25307.81 -> here. I'm not sure why it took me here. So
I'm just going to click here to see if it
25308.81 -> actually created that distribution. It's very
strange, it usually takes you to the distribution
25309.81 -> page there. Okay, but it is creating that
distribution. Okay, so we're gonna wait in
25310.81 -> progress. This does take a considerable amount
of time. So go take a shower, go take a walk,
25311.81 -> go watch an episode of Star Trek, and we will
be back here shortly. So our distribution
25312.81 -> is created. It took about 20 minutes for this
to create. I did kind of forget to tell you
25313.81 -> that we have to create two distributions.
So sorry about that, but we're going to have
25314.81 -> to go ahead and make another one. So we have
one here for the www, but we're going to need
25315.81 -> one for the naked domain. So I want you to
go to create distribution and go to web. And
25316.81 -> for the domain name, it's going to be slightly
different. Okay, so instead of selecting the
25317.81 -> bucket, what I want you to do is I want to
go back to s3. And I want you to go to the
25318.81 -> bucket with the naked domain. And we're going
to go to properties here, okay. And under static
25319.81 -> website hosting, I want you to copy
everything for the endpoint here with the
25320.81 -> exception of the HTTP, colon, forward slash
forward slash, okay. And we're just going
25321.81 -> to copy that. And we're going to paste that
in as the domain name. All right, so we're
25322.81 -> not going to autocomplete anything; I
want you to hit tab, so that it autocompletes
25323.81 -> this origin ID here. And then we can proceed.
So we will redirect HTTP to HTTPS.
25324.81 -> We'll just scroll down here. So this is all
good. The only thing is, we want to change
25325.81 -> our price class to the first one here,
okay, we're going to need to put that domain
25326.81 -> name in there. So we'll just copy it here
from s3, and paste that in, we're going to
25327.81 -> need to choose our custom SSL drop down to
our SSL from ACM. And we'll leave everything
25328.81 -> else blank. And we'll create that distribution.
So now it's going to be another long wait.
25329.81 -> And I will talk to you in a bit here. So after
waiting 20 minutes, our second distribution
25330.81 -> is complete, I want you to make note that
for the naked domain, we were pointing to
25331.81 -> that endpoint for the static s3 website hosting,
and for the www, we are pointing to the actual
25332.81 -> s3 bucket, and this does matter. Otherwise,
this redirect won't work. Okay, so just make
25333.81 -> sure this one is not set to the bucket. Alright.
So now that we have our two distributions
25334.81 -> deployed, we can now start hooking them up
to our custom domain name. Alright, so I want
25335.81 -> you to make your way over to route 53, we're
going to go to hosted zones on the left hand
25336.81 -> side, we're going to click into that domain
name. And we're going to add two record sets,
25337.81 -> we're gonna add one record set for the naked
domain, and then the www Alright, so we're
25338.81 -> gonna leave that blank there, choose alias.
And we're going to drop down here, and we're
25339.81 -> going to choose that CloudFront distribution.
Now, there are two distributions, there's
25340.81 -> no chance of you selecting the incorrect one,
because it's only going to show the one that
25341.81 -> matches for the domain. Alright, so we'll
hit create for the naked domain, and then
25342.81 -> we'll add another one. And we'll go to www,
and we're gonna go to alias, and we're going
25343.81 -> to choose the www CloudFront distribution.
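If you're curious what those alias records amount to under the hood, the change Route 53 makes is roughly the JSON below (a sketch: the domain and the distribution's DNSName are placeholders, though Z2FDTNDATAQYW2 really is the fixed hosted zone ID used for all CloudFront alias targets):

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1234abcdef.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

This is the shape of a change batch that `aws route53 change-resource-record-sets` would accept, with a second entry just like it for the naked domain.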
All right, and now that those are both created,
25344.81 -> we can go ahead and just grab this domain
name here and give it a test. Okay, and so
25345.81 -> it's working, we're on our custom domain name.
Now, definitely, you want to check all four
25346.81 -> cases. So with and without the www. And
with and without the s for SSL. And then in
25347.81 -> combination. So now that one case works, we'll
try without the www. Okay, it redirects as
25348.81 -> expected, okay, and we will try now without
25348.81 -> the SSL on the naked domain. And it works as
25349.81 -> expected. And we will just try it without
the S here. And so all four cases work, we
25350.81 -> are in great shape. If anyone ever reports
25350.81 -> that your website's not working, even though
25351.81 -> it's working for you just check all those
25351.81 -> four cases, maybe there is an issue there.
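If you want to be systematic about those four cases, a tiny helper like this (purely illustrative, not part of the course code) spells them out so you can paste each one into a browser or a curl loop:

```javascript
// Enumerate the four URL variants worth smoke-testing after a DNS change:
// http and https, each crossed with the naked and www hosts.
function urlVariants(domain) {
  const hosts = [domain, `www.${domain}`];
  const schemes = ['http', 'https'];
  const urls = [];
  for (const scheme of schemes) {
    for (const host of hosts) {
      urls.push(`${scheme}://${host}/`);
    }
  }
  return urls;
}
```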
25352.81 -> Alright. So now that we have route 53, pointing
to our distributions and our custom domain
25353.81 -> hooked up, we need to do a little bit of work
25353.81 -> here with our www bucket there because
25354.81 -> when we first created this policy, we added
this statement here, which allowed access
25355.81 -> to this bucket. And then when we created our
CloudFront distribution, we told it to only
25356.81 -> allow access from that bucket. So it added
this, this statement here, all right. And
25357.81 -> so this one was out of convenience, because
we weren't using a custom domain. And so we
25358.81 -> only want this website to be accessible through
CloudFront. And this is still allowing it
25359.81 -> from the original endpoint domain. So if we
were to go to management, I'm just going to
25360.81 -> show you what I mean, I'm sorry, properties.
And we were to go to this endpoint here. All
25361.81 -> right, we're going to see that it's still
accessible by this URL. So it's not going
25362.81 -> through CloudFront. And we don't want people
directly accessing the bucket. We want everything
25363.81 -> to go through CloudFront. So we get statistics
and other fine tuned control over access to
25364.81 -> our website here. So what I want you to do
is I want you to go and back to your permissions
25365.81 -> here and go to bucket policy and remove that
first statement. Okay. And we're going to
25366.81 -> hit save. All right, and we're going to go
back to this endpoint, and we're going to
25367.81 -> get a refresh. And we should get a 403. If
you don't get it immediately,
25368.81 -> sometimes Chrome caches things, so just
try another browser or just hit refresh until
25369.81 -> it works. Because if you definitely have removed
that bucket policy, it should return a 403.
25370.81 -> So now, the only way to access it and we'll
go back to the original one here is through
25371.81 -> the domain name. So there you go. It's all
hooked up here and we can now proceed to actually
25372.81 -> working with the Lambda. Alright, so now
it's time to work with AWS lambda. And so
25373.81 -> I prepared a function for you here in
a folder called function, and the idea
25374.81 -> is we have this form here. And when we submit
it, it's going to go to API gateway and then
25375.81 -> trigger this lambda function. Alright. And
those, that data will be inputted into this
25376.81 -> function here. So it gets passed in through
event. And then it parses event body, which
25377.81 -> will give us the returned JSON of these fields
here, that we're going to use validate, which
25378.81 -> is a third party library that we use
to do validations. Okay, so if I just open
25379.81 -> up the constraints file here, this
actually validates all these fields.
25380.81 -> And so whether the input is valid or not,
it's going to either return an error to this
25381.81 -> form saying, hey, you have some mistakes.
If it is successful, then what is it going
25382.81 -> to do? Alright, it's going to call this success
function, and then it will call insert, record
25383.81 -> and send email. So these two things are in
two separate files here. So one is for Dynamo
25384.81 -> dB, and one is for SNS. So when we say insert
record, we're saying we're going to insert
25385.81 -> a record into the dynamodb table that we created.
And then for SNS, we're going to send that
25386.81 -> email off. Alright, so this is a slightly
complex lambda function. But the reason I
25387.81 -> didn't just make this one single file, which
could have been very easy is because I want
you to learn, at the bare minimum, about more
complex lambda functions, such as having to
25389.81 -> upload via zip, and dealing with dependencies.
All right, so now that we have a
25390.81 -> little walkthrough there, let's actually get
this lambda function into the actual AWS platform.
25391.81 -> All right. So before we can actually upload
this to AWS, we have to make sure that we
25392.81 -> compile our dependencies. Now, I could easily
do this locally. But just in case, you don't
25393.81 -> have the exact same environment as myself,
I'm gonna actually show you how to do this
25394.81 -> via cloud nine. All right, so what I want
you to do is, we're just going to close that
25395.81 -> tab here. And I want you to close these other
ones here and just leave one open. And we're
25396.81 -> going to make our way over to cloud nine.
25397.81 -> Okay, and so just before we create this environment,
can you double check to make sure that you
25398.81 -> are in the correct region. So I seem to sit
on the border of US East, North Virginia,
25399.81 -> and Ohio, and sometimes it likes to flip me
to the wrong region. So I'm gonna switch it
25400.81 -> to North Virginia, this is super important,
because when we create our lambda functions,
25401.81 -> if they're not in the same region, we're going
to have a bunch of problems. All right, so
25402.81 -> just make sure that's North Virginia, and
go ahead and create your environment. Okay.
25403.81 -> And I'm going to name this based off our domain
name. So I
25404.81 -> should probably have that tab open there.
So I'm just gonna open up route 53. There,
25405.81 -> okay. And I'm just going to copy the domain
name here. Okay, and I'm going to name it
Frankie Alliance.
25407.81 -> Alright. And we're going to go to a next step.
And we are going to choose t2.micro because
25408.81 -> it's the smallest instance, we're gonna use
Amazon Linux, because it's packed with a bunch
25409.81 -> of languages pre installed for us. Cloud Nine
environments do shut off after 30 minutes,
25410.81 -> so you won't be wasting your free credits.
Since it is a t2.micro it is free tier
25411.81 -> eligible. And the cost of using cloud nine
is just the cost of running the
25412.81 -> actual EC2 instance underneath to run the
environment. So we're going to hit next step
25413.81 -> here, we're going to create the environment
here. And now it will only take a few minutes,
25414.81 -> and I will see you back shortly here. Oh,
so now we are in cloud nine. And I'm just
25415.81 -> going to change the theme to a darker because
it's easier on my eyes, I'm also going to
25416.81 -> change the mode to vim, you can leave it as
default; vim is a complex keyboard setup. So
25417.81 -> you may not want to do that. And now what
we can do is we can upload our code into Cloud
25418.81 -> Nine. So that we can install the dependencies
for the specific Node JS version we are going
25419.81 -> to need, so I want you to go to File and we're
going to go to upload local files. And then
25420.81 -> on my desktop, here, I have the contact form,
it's probably a good idea, we grab both the
25421.81 -> web and function, the web contains the actual
static website, and we will have to make adjustments
25422.81 -> to that code. So we are just prepping ourselves
for a future step there. And so now that code
25423.81 -> is uploaded, okay, so what we're looking to
do is we want to install the dependencies,
25424.81 -> okay, because we need to also bundle
in whatever dependencies this function uses
25425.81 -> for it to work, and in order to know what
to install with, we need to know what version
25426.81 -> of Node JS we're going to be using. And the
only way we're going to know is by creating
25427.81 -> our own lambda function. Alright, so what
I want you to do is just use this other tab
25428.81 -> here that I have open, and I'm going to go
to the lambda, the lambda interface here,
25429.81 -> and we are going to go ahead and create our
first. All right, so let's proceed to create
25430.81 -> our lambda function. And again, just make
sure you're in the same region so North Virginia
25431.81 -> because it needs to be the Same one is the
cloud nine environment here. So we're going
25432.81 -> to go ahead and create this function and we're
going to need to name it, I'm going to name
25433.81 -> it the offeror engi. Alliance, okay? And I'm
gonna say contact form. Alright. And I believe
25434.81 -> I spelt that correctly there, I'm just using
that as a reference up here. And we need to
25435.81 -> choose a runtime. So we have a bunch of different
languages. So we have Ruby, Python, Java,
25436.81 -> etc, etc, we are using Node.js. And so now
you can use 10, or 8.10, it's
25437.81 -> generally recommended to use 10. But there
are use cases where you might need to use
25438.81 -> eight, and six is no longer supported. So
that used to be an option here, but it is
25439.81 -> off of the table now, so to speak. So we need
to also set up permissions, we're not going
25440.81 -> to have a role. So we're going to need to
create one here. So let's go ahead and let
25441.81 -> this lambda create one for us. And we will
hit Create function, okay. And we're just
25442.81 -> gonna have to wait a few minutes here for
it to create our lambda function. Okay, not
25443.81 -> too long. And here's our function. Great.
So it's been created here. And the left hand
25444.81 -> side, we have our triggers. So for our case,
it's going to be API gateway. And on the right
25445.81 -> hand side, we have our permissions. And so
you can see, by default, it's giving us access
25446.81 -> to cloud watch logs. And we are going to need
dynamodb and SNS in here. So we're going to
25447.81 -> have to update those permissions, just giving
a quick scroll down here, we actually do have
25448.81 -> a little cloud nine environment embedded in
here. And we could have actually done all
25449.81 -> the code within here. But I'm trying to set
you up to be able to edit this web page. And
25450.81 -> also, if your lambda function is too
large, then you actually can't use this environment
25451.81 -> here. And so you'd have to do this anyway.
So I figured, we might as well learn how to
25452.81 -> do it with the cloud nine way, alright, but
you could edit it in line, upload a zip file,
25453.81 -> so as long as it's under 10 megabytes, you
can upload it and you should, more or less
25454.81 -> be able to edit all those files. But if it
gets too big, then you have to supply it via
25455.81 -> s3. Alright, so, um, yeah, we need to get
those additional permissions. And so we're
25456.81 -> going to need to edit our role, which
was created for us by default. All right,
25457.81 -> let's make our way over to IAM, so just gonna
type in IAM here. And once this loads here,
25458.81 -> we're going to go to the left hand side, we're
going to go to a roles, and we're going to
25459.81 -> start typing Frankie. So F-R, okay,
there's that role. And we're going to add,
25460.81 -> attach a couple policies here. So we're gonna
give it we said SNS, right, we'll give it
25461.81 -> full access. And we're
25462.81 -> gonna give it dynamodb. And we'll give it
full access. Now, for the associate level,
25463.81 -> it's totally appropriate to use full access.
But when you become a database professional,
25464.81 -> you'll learn that you'll want to pair these
down to only give access to exactly the actions
25465.81 -> that you need. But I don't want to burden
you with that much. Iam knowledge at this
25466.81 -> time, all right, but you'll see in a moment,
because when we go back to, sorry, lambda
25467.81 -> function, and we refresh here, okay, I'm just
hitting the manual refresh there, we're gonna
25468.81 -> see what we have access to. So this is now
what we have access to, and we have a bunch
25469.81 -> of stuff. So we don't just have access to
dynamodb. We have access to DynamoDB Accelerator,
25470.81 -> we're not going to use that; we have access
to EC2, we don't need that; we have
25471.81 -> access to auto scaling, we probably don't
need that data pipeline. So that's the only
25472.81 -> problem with using those full access things
as you get a bit more than what you want.
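As an aside, a pared-down policy scoped to just the actions this function needs might look something like this. This is a sketch, not the course's policy; the region, account ID, table name, and topic name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/FrankieAlliance"
    },
    {
      "Effect": "Allow",
      "Action": ["sns:Publish"],
      "Resource": "arn:aws:sns:us-east-1:123456789012:FrankieAlliance"
    }
  ]
}
```

Compare that to the managed full-access policies attached in the video, which grant far more actions (DAX, auto scaling, data pipeline) than the function actually uses.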
25473.81 -> But anyway, for our purposes, it's totally,
totally fine. Okay, and so now that we have
25474.81 -> our role, we want to get our code uploaded
here. So what I want you to do is I want you
25475.81 -> to go back to cloud nine. All right, and we're
going to bundle this here. So down below,
25476.81 -> we are in the correct directory environment.
But just to make sure we are in the same place,
25477.81 -> I want you to type cd tilde (~), which is for
home, and then type forward slash environment.
25478.81 -> Okay, and then we're gonna type forward slash
function. All right, I want to show you a
25479.81 -> a little thing that I discovered is
that this top directory that says Frankie
25480.81 -> Alliance is actually the environment directory,
for some reason, they name it differently
25481.81 -> for your purpose. But just so you know, environment
is this folder here. Okay. And so now we know
25482.81 -> our node version is going to be 10. I want
you to type in nvm, which stands for node
25483.81 -> version manager, type nvm list, and we're going
to see what versions of node we have installed
25484.81 -> and which one is being used. And by default,
it's using 10. So we're already in great shape
25485.81 -> to be installing our dependencies in the right
version. So I want you to do npm install,
25486.81 -> or just I, which is the short form there.
Okay. And it's going to install all the dependencies
25487.81 -> we need. We can see they were installed under
the node modules directory there. So we have
25488.81 -> everything we want. And so now we just need
to download this here and bring it into our
25489.81 -> lambda function. So we are going to need to
zip this here and then upload it to the lambda
25490.81 -> interface here. So what I want you to do is
I want you to right click the function folder
25491.81 -> here and click download and it's going to
download this to our Downloads folder here,
25492.81 -> okay. And then I want you to unzip it, okay,
because we actually just want the contents
25493.81 -> of it, we don't want this folder here. Alright.
And the idea is that we are just including
25494.81 -> this node modules, which we didn't have earlier
here. And I'm just going to go ahead and compress
25495.81 -> that. And then we're going to have an archive.
And I want you to make your way back to the
25496.81 -> lambda interface here. And we're going to drop
down and upload a zip. Alright, and we are
25497.81 -> going to upload that archive. Alright, and
then we will hit save so that it actually
25498.81 -> uploads the archive, it doesn't take too long,
because you can see that it's less than a
25499.81 -> megabyte. And so we can access our files in
here. All right. And again, if this was too
25500.81 -> large, then it would actually not allow us
to even edit in here, and we'd still have
25501.81 -> to do cloud nine. Alright. So now that our
code is uploaded, now we can go and try and
25502.81 -> run it, or better yet, we will need to learn
how to sync it back to here, so that we can
25503.81 -> further edit it. Okay. So what I want to just
show you quickly here in cloud nine is if
25504.81 -> you go to the right hand side to AWS resources,
we have this lambda thing here. And again,
25505.81 -> if we were in the wrong region, if we were
in US East two, in our cloud nine environment,
25506.81 -> we wouldn't be able to see our function here.
But here's the remote function. And what we
25507.81 -> can do is if we want to continuously edit
this, we can pull it to cloud nine, and edit
25508.81 -> it here and then push it back and upload it.
So this is going to save us the trouble of
25509.81 -> continuously zipping the folder. Now, you
could automate this process with CloudFormation,
25510.81 -> or other serverless frameworks. But,
you know, I find this is very easy. It's also
25511.81 -> a good opportunity to learn the functionalities
of cloud nine. So now that we have the remote
25512.81 -> function here, I just want you to press this
button here to import the lambda function
25513.81 -> to our local environment. And it's saying,
Would you like to import it? Yes, absolutely.
25514.81 -> Okay. And so now, this is the function here,
and this is the one we're going to be working
25515.81 -> with. So we can just ignore this one here.
25516.81 -> All right. And so whenever we make a change,
we can then push this back, alright. And we
25517.81 -> might end up having to do that here or not.
But I just wanted to show you that you have
25518.81 -> this ability. All right. So actually, let's
actually try syncing it back, we're just going
25519.81 -> to make some kind of superficial change something
that doesn't matter, I just want to show you
25520.81 -> that you can do it. So we're going to just
type in anything here. So I'm just going to
25521.81 -> type in a little comment here. Just a comment, okay. And
I'm going to save this file here. And see,
25522.81 -> now it's green. So that
says it's been changed. And what I'm going
25523.81 -> to do is I'm going to go up here and click
this, and I'm going to re import this function.
25524.81 -> Alright, so I'm just going to hit this deploy.
Alright, and that's going to go ahead and
25525.81 -> send the changes back, alright, to the remote
function here. Okay. And I'm just going to
25526.81 -> hit our refresh here. All right. And then
what I'm going to do is I'm going to go back
to the lambda environment here, I'm just going
to give it a refresh here. And let's see if
25528.81 -> our comment shows up. Okay, and so there you
are. So that's how we can sync changes between
25529.81 -> cloud nine in here. And again, if this file
was too large, we wouldn't even be able to
25530.81 -> see this. So Cloud Nine would be our only
convenient way of doing this. So now that we
25531.81 -> know how to do that, let's actually learn
how to test our function. So let's proceed
25532.81 -> to doing that. So now let's go ahead and test
our function out. And so we can do here is
25533.81 -> create a new test event. Alright, and so I've
prepared one here for us. Okay, so we have
a JSON with body. And then we have a stringified
25535.81 -> JSON, because that's how it would
25535.81 -> actually come in from API gateway. All right,
and so I'm just going to do a contact form,
25536.81 -> test contact form, okay? And hit Create there.
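To make that concrete: with the proxy integration, the client payload arrives as a string in event.body, so the handler has to parse it and read its configuration from environment variables. A minimal sketch (the env var names follow what the code reads via process.env; the values, and the exact TOPIC_ARN name, are assumptions, not taken from the course code):

```javascript
// Sketch: how the function sees an API Gateway proxy test event.
// Env var values here are placeholders, set only so the sketch runs
// self-contained.
process.env.TABLE_NAME = "FrankieAlliance";
process.env.TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:FrankieAlliance";

// The proxy integration delivers the client payload as a *string* in
// event.body, so the handler must JSON.parse it first.
const testEvent = { body: JSON.stringify({ name: "Riker", message: "bye" }) };

function parseEvent(event) {
  const data = JSON.parse(event.body); // would fail on a bare object
  return {
    data,
    table: process.env.TABLE_NAME, // undefined until configured on the function
    topic: process.env.TOPIC_ARN,
  };
}

console.log(parseEvent(testEvent).data.name); // → Riker
```

This is also why the first test run below fails with an undefined table name: until the environment variables are set on the function, process.env.TABLE_NAME comes back undefined.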
And so now we have this thing that we can
25537.81 -> test out. All right, and so I'm just going
to go ahead and hit test, and we're going
25538.81 -> to get a failure and that's totally, totally
fine because if we scroll down, it actually
tells us that we have an undefined table
25539.81 -> name. Alright. And the reason why we have
an undefined table name is because we actually
25540.81 -> haven't configured our SNS topic ARN
or the dynamodb table name. So what we're
going to need to do is we're gonna need to
25542.81 -> supply this function with that information
and I believe we have them as environment
25543.81 -> variables, that's how they're specified. So
if I was to go back to cloud nine here, and
we were to look maybe in the dynamodb code, I'm just
looking at where we actually configure the
25545.81 -> actual table name here. Okay, um, there it
is. So see where it says process.env.TABLE_NAME,
25546.81 -> so that means it's expecting it down here
in the environment. We are expecting, under
25547.81 -> our environment variables, where are our environment
variables, here they are, so we're expecting
25548.81 -> a table name. And also for SNS, we are
expecting the topic Arn. Okay, so we need
25549.81 -> to go grab those two things there. And we
will have better luck this time around. So
25550.81 -> I'm going to go and look for dynamodb here.
Okay, and I'm going to also get SNS. And we
25551.81 -> will go ahead and swap those in. So for our
table, it's just called Frankie Alliance.
25552.81 -> So that's not too complicated. Okay. And we
will just apply it there. And then for SNS,
25553.81 -> we might actually have an additional topic
here since the last time we were here, and
25554.81 -> we need the Arn. So we're going to go in here
and grab this. Okay, we need the Arn. All
25555.81 -> right. And so I'll paste that in there. And
we will hit save. Okay, and we will give this
25556.81 -> another trial. So let's go ahead and hit
that test button and see what we get this
25557.81 -> time around. Fingers crossed. And look, we
got a success here. All right. So if we were
25558.81 -> to wanting to take a look at the results of
this, if we go down to monitoring here, okay,
25559.81 -> and we go to View cloud watch logs, we can
actually see our errors and our success rates.
25560.81 -> Okay. So here, we have a couple logs, I'm
just gonna open up this one here. Alright.
25561.81 -> And so we can see here that the body was passed
25562.81 -> along. And it inserted into the dynamodb
table there. And it also did the SNS response.
And so these are all just the console logs
25563.81 -> that I have within the actual code. So if
you're wondering where these are coming from,
25564.81 -> it's just these console logs here. Okay, so
I set those up so that I'd have an idea for
25565.81 -> that. So let's actually go take a look at
dynamodb and see if our record exists. And
25566.81 -> there it is. So it's added to the database.
So now the real question is, have we gotten
25567.81 -> an email notification. And so I'm just going
to hop over to my email, and we're going to
25568.81 -> take a quick look now.
25569.81 -> Alright, so here I am in my email, and we
literally got this email in less than a minute.
25570.81 -> And here is the information that has been
provided to us. So there you go, our lambda
25571.81 -> function is working as expected. And so now
that we have that working, the next big thing
25572.81 -> is actually to hook up our actual form
to this lambda function. And so in order to
25573.81 -> do that, we are going to need to set up API
gateway. So that that form has somewhere to
25574.81 -> send its data to. So let's proceed over to
API gateway. And we are going to create our
own API gateway.
25576.81 -> Okay, hello, let's go. So here we are
at the API gateway. And if you need to figure
25577.81 -> out how to get here, just type API up here
in the services. And we will be at this console.
25578.81 -> And we will get started. And we'll just hit
X here, I don't care what they're saying here.
25579.81 -> And we're going to hit new API, make sure
it's REST. And we're going to name it as we
25580.81 -> have been naming everything here. So I'm going
to call this Frankie Alliance, okay. And
25581.81 -> the endpoint is regional, that's totally fine
here. And we will go ahead and create that
25582.81 -> API. So now our API has been created. And
we have this our default route here, okay.
25583.81 -> And so we can add multiple resources or methods,
we can totally just work with this route here.
25584.81 -> But I'm going to add a resource here, and
I'm going to call it transmit, okay.
25585.81 -> And I'm just going to name it the same up
here. And we're also going to want to enable
25586.81 -> API gateway CORS, we do not want to be dealing
with CORS issues. So we will just checkbox
25587.81 -> that there. And we'll go ahead and create
that resource. Okay, and so now we have a
25588.81 -> resource. And by default, we'll have options.
We don't care about options, we want to have
25589.81 -> a new method in here, and we are going to
make it a post. Okay. And we are going to
25590.81 -> do that there. And it's actually going to
be a lambda function. That's what we want
25591.81 -> it to have. All right. And do we want to use
the lambda proxy integration? Requests will
25592.81 -> be proxied to lambda with request details available
in your event? Yes, we do. That sounds good
25593.81 -> to me. And then we can specify the lambda region
is US East one, and then the lambda function.
25594.81 -> So here, we need to supply the lambda function
there. So we're going to make our way back
25595.81 -> to that lambda function there. So we have
this SNS topic, I'm just going to go over
25596.81 -> here and go back to lambda. And we're going
to grab the name of that lambda function and
25597.81 -> go back to API gateway and supply it there.
Okay, and then we're going to go ahead and
25598.81 -> save that. Yes, we are cool with that. Okay.
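One thing worth noting about the proxy integration we just chose: the function itself has to return the full HTTP response object, including any CORS headers, since API Gateway passes its return value straight through. A minimal sketch of that shape (a sketch only; the header value is assumed, not taken from the course code):

```javascript
// With the Lambda proxy integration, the handler's return value becomes
// the HTTP response, so it must carry statusCode, headers, and a string body.
function buildResponse(statusCode, bodyObj) {
  return {
    statusCode,
    headers: { "Access-Control-Allow-Origin": "*" }, // CORS header, assumed value
    body: JSON.stringify(bodyObj), // body must be a string, not an object
  };
}

console.log(buildResponse(200, { status: "success" }).statusCode); // → 200
```

Returning a bare object without this shape is a common cause of 502 "Malformed Lambda proxy response" errors from API Gateway.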
And we'll hit Save there. And so now our lambda
25599.81 -> function is kind of hooked up here. We might
have to fiddle with this a little bit more.
25600.81 -> But if we go back to our lambda console
here and hit refresh. Now on the left hand
25601.81 -> side, we should see API gateway. So API gateway
is now a trigger. Okay? So yeah, we will go
25602.81 -> back to API gateway here. Alright, so now
to just test out this function to see if it's
25603.81 -> working, I'm going to go to test here and
we have some things we can fill in like query
25604.81 -> strings, headers, and the request body. So
flipping back to here, you're going to probably
25605.81 -> wonder why I had this up here. Well, it was
to test for this box here. And so I'm just
25606.81 -> gonna just slightly change it here. So we'll
just change it to Riker. Okay, and we're gonna
25607.81 -> say bye. Okay. All right. And so, you know,
now that I have this slight variation here,
25608.81 -> I'm just going to paste that in there. And
hit test. Okay. And it looks like it has worked.
25609.81 -> So we can just double check that by going
to a dynamodb here and doing a refresh here.
25610.81 -> And so the second record is there. Obviously
in cloud watch, if we were to go back here
25611.81 -> and refresh, okay, we are going to have updated
logs within here. Alright, so not sure if
25612.81 -> the logs are showing up here. There is Riker.
So he is in there. So our API gateway endpoint
25613.81 -> is now working as expected. Okay. So now what
we need to do is we need to publish this API
25614.81 -> gateway
25615.81 -> endpoint. Okay, so let's go ahead and do that.
All right. Okay. Hello, let's go. So in order
25616.81 -> to publish our API, what we're going to need
to do is deploy it. Okay. So anytime you make
25617.81 -> changes, you always have to deploy your API.
So I'm going to go ahead here and hit deploy
25618.81 -> API. And we're going to need to choose a stage,
we haven't created any stages yet. So I'm
25619.81 -> going to go here, I'm going to type prod,
which is short for production, okay, very
25620.81 -> common for people to do that. You could also
make it pro if you like, but prod is generally
25621.81 -> what is used. And I'm gonna go ahead and deploy
that. Okay. And so now it is deployed. And
25622.81 -> now I have this nice URL. And so this URL
is what we're going to use to invoke the lambda.
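For reference, the invoke URL that API Gateway hands back after a deploy follows a predictable shape, so a quick sketch of how it's composed (the API ID here is made up):

```javascript
// A deployed stage's invoke URL follows this pattern:
// https://{api-id}.execute-api.{region}.amazonaws.com/{stage}{resource}
function invokeUrl(apiId, region, stage, resource) {
  return `https://${apiId}.execute-api.${region}.amazonaws.com/${stage}${resource}`;
}

// e.g. for our prod stage and /transmit resource (hypothetical API id):
console.log(invokeUrl("abc123", "us-east-1", "prod", "/transmit"));
// → https://abc123.execute-api.us-east-1.amazonaws.com/prod/transmit
```

Note the stage name is part of the path, which is why redeploying to a new stage changes the URL the form has to point at.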
25623.81 -> So all we have to do is copy this here, and
25623.81 -> then send a, sorry, post
request to transmit here. So what we're going
to do is copy this URL here, okay. And we're
25625.81 -> going to go back to our cloud nine environment,
and we are going to go to our web in here.
25626.81 -> Okay. And in this, we have our HTML code,
and we have a function called form Submit.
25627.81 -> And so if we were to go into the JavaScript
here, okay, there is a place to supply that
25628.81 -> in here. And it's probably going to be Oh,
25628.81 -> where did I put it? Um, it is right here.
25629.81 -> So on the form submit, it takes a second parameter,
which is URL. All right. And so actually,
25630.81 -> it's just all right over here, I made it really
easy for myself here, and I'm just going to
25631.81 -> supply it there. Okay, and we're going to
need strings around that. Otherwise, it's
25632.81 -> going to give us trouble, it's going to have
to be a double quotations. Okay. And so now
25633.81 -> this form submit is going to submit it to
that endpoint, it has to be transmit, of course.
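The form submit call itself boils down to a POST with a JSON string body, something like this sketch (formSubmit is the page's helper; the request-building shown here is an assumption about what it does, and the URL is hypothetical):

```javascript
// Sketch of what the page's form submit amounts to: a POST to the
// API Gateway invoke URL with a JSON string body. URL is hypothetical.
const API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/transmit";

function buildRequest(formData) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // API Gateway's proxy integration hands this string to the Lambda
    // as event.body
    body: JSON.stringify(formData),
  };
}

// In the browser this would be:
// fetch(API_URL, buildRequest({ name: "Riker", message: "bye" }))
```

Which is also why the URL has to be wrapped in quotes when pasted into the page's JavaScript: it's just a string argument to the submit helper.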
25634.81 -> Okay. So, yeah, there we go. I'm just going
to double check that transmit that looks good
25635.81 -> to me, I'm just gonna double check that it's
using a post it is. So yeah, that's all we
25636.81 -> need to do here. So now that we've changed
our index HTML, we're going to need to
25637.81 -> update that in CloudFront, and invalidate
that. So let's make our way over to s3 and
25638.81 -> upload this new file. Okay. All right. So
the first thing we're going to need to do
25639.81 -> is download this file here. So we're just
gonna go ahead and download that new index
25640.81 -> HTML. And we're going to need to use one of
our other tabs here, we'll take up the cloud
25641.81 -> watch one here, we don't need to keep all
these things open. And we're going to make
25642.81 -> our way to s3, okay. And once we make it into
s3, we're going to go into the www Frankie
25643.81 -> Alliance bucket. And we're going to upload
that new file, I believe it's in my downloads.
25644.81 -> So I'm just gonna go down here and show in
Finder, and here it is. So I'm just going
25645.81 -> to drag it on over Okay, upload that file.
Okay, and so now that file has been changed,
25646.81 -> but that doesn't mean that CloudFront has
been updated. So we have to go to our friendly
25647.81 -> service called CloudFront. Okay, and we're
going to need to invalidate that individual
25648.81 -> file there. So we're gonna go to www, and
we're gonna go into invalidations, create. Now,
25649.81 -> we could put an asterisk, but we know exactly
what we're changing. So we're gonna do index
25650.81 -> dot HTML, and we're going to invalidate it.
And we're gonna just wait for that invalidation
25651.81 -> to finish. Okay, and then we will go test out
our form. So after waiting a few minutes here,
25652.81 -> our invalidation is complete. And so let's
go see if our new form is hooked up. So we're
25653.81 -> going to need to go to the domain name. And
I always have a hard time typing it, so I'm just
25654.81 -> going to copy-paste it directly here,
okay, just gonna go to route 53 and grab it for
25655.81 -> myself. And there it is. Come on, there we
are. Okay. And so I'm just going to then paste
25656.81 -> it in there. Okay, so here's our form. And
I just want to be 100% sure, because
25657.81 -> when you're working with Chrome and
stuff, things can aggressively cache there.
25658.81 -> So see, it's still using the old one. But
we've definitely updated it. So
25659.81 -> I'm just going to give it a hard refresh here.
Okay, and so now it is there. So just make
25660.81 -> sure you check that before you go ahead and
do this so that you save yourself some frustration.
25661.81 -> All right. So now it's the moment of truth
here, and we are going to go ahead and fill
25662.81 -> up this form and see if it works. So I'm gonna
put in my name Andrew Brown. Okay, and we're
25663.81 -> gonna just put in Exam Pro, andrew@exampro.co,
we're gonna leave the phone number blank there.
25664.81 -> I'm going to say, Federation, I want to buy
something. Can I buy a spaceship? Okay. Whoa,
25665.81 -> boy. And we're going to now hit transmit,
okay, and it's transmitting there, and it
25666.81 -> says success.
25667.81 -> Okay, so we're gonna go to dynamodb, and do
a refresh there, and we can see that it's
25668.81 -> inserted. So there you go, we're all done
all the way there. If we wanted to test the
25669.81 -> validation, we could totally do so as well.
So I'm just gonna hit Submit here. And here,
25670.81 -> it's throwing an error. So we're done.
So we went through everything: we created dynamodb,
25671.81 -> an SNS topic, a lambda function, we used cloud
nine, we hooked up API gateway, we set up
25672.81 -> static website hosting. And we backed it
all by CloudFront using route 53. So we did
25673.81 -> a considerable amount to get this form. So
you know, that was quite impressive there.
25674.81 -> So I guess, now that we're done, let's go
ahead and tear all this stuff down. So now
25675.81 -> it's time to tear down and do some cleanup
here. All the resources we're using here pretty
25676.81 -> much don't cost any money or shut themselves
down. So there's, we're not going to be in
25677.81 -> trouble if we don't do this. But, you know,
we should just learn how to delete things
25678.81 -> here. So first, I'm going to go to dynamodb,
I'm going to go ahead and delete that table.
25679.81 -> Okay, and I don't want to create a backup.
So we'll go ahead and do that. Then we'll
25680.81 -> make our way over to SNS, okay. And we are
going to go ahead and delete that topic. So
25681.81 -> there it is, and we will just delete it, okay,
and we will put in a delete me. And then we
25682.81 -> will make our way over to lamda. All right.
And we are going to go ahead and delete that
25683.81 -> lambda function. And then we are going to
make our way over to IAM. And IAM roles aren't
25684.81 -> that big of a deal, but you might not want
to keep them around. So we will just go in
25685.81 -> here and type in Frankie here. Okay, and we
will delete that role. And then we were running
25686.81 -> an EC2 instance via cloud nine.
So I'm just going to close that there, this
25687.81 -> tab here. And I'm gonna just type in, in here,
cloud nine, okay. And we are going to terminate
25688.81 -> that instance, there, you can see I have a
few others that did not terminate, but we'll
25689.81 -> go ahead and delete that there. And we will
just type delete. Okay, and that's going to
25690.81 -> delete on its own. You will want to double
check that there. But I want to get through
25691.81 -> this all with you here. So we're not going
to watch that. And then we want to delete
25692.81 -> our API gateway here. Why don't we delete
this, um, we'll go up to the top here. I rarely
25693.81 -> ever delete APIs. So we'll go here and enter
the name of the API before committing. Couldn't it
25694.81 -> just be delete, right? Sometimes it's delete
me, sometimes it's etc, etc, they give you
25695.81 -> all these different ways of deleting, we'll
hit Delete API. So now that is deleted there.
25696.81 -> Um, we need to delete our CloudFront. Okay,
so we'll go here. And we have two distributions
25697.81 -> here. So in order to delete them, you have
to first disable them. And this takes forever
25698.81 -> to do Okay, so once they're disabled, there'll
be off and then you can just delete them,
25699.81 -> I'm not going to stick around to show you
how to delete them. As long as they're disabled,
25700.81 -> that is going to be good enough, but you should
take the extra time and delete them, 20 to 30 minutes
25701.81 -> when these decide to finish, okay, and we
want to delete our ACM if we can. Okay, so
25702.81 -> we'll make our way over there. So I'm gonna
hit delete, and it won't let me until those
25703.81 -> are deleted. So we'll wait till those are
fully deleted, then you just go ahead to here
25704.81 -> and delete that. I'm not going to come back
and show you that it's just not not worth
25705.81 -> the time here. Some of these things just take
too too long for me. Okay, and then we'll
25706.81 -> go into Frankie Alliance. We will go ahead
and remove these records here, okay, you don't
25707.81 -> want to keep records around that are pointing
to nothing. So if those CloudFront distributions
25708.81 -> are there, there is a way of people can compromise
and have those resources point to somewhere
25709.81 -> else. So you don't want to keep those around,
then we're going to go to s3, okay. And we're
25710.81 -> going to go ahead and delete our buckets.
Generally, you have to empty them before you
25711.81 -> can delete them. So I'm going to go here.
I don't know if AWS has made this a little
25712.81 -> bit easier now, but generally, you'd always
have to empty them before you can delete.
25713.81 -> So I'll hit Delete there. And we will try
this one as well. Okay. Oops, www dot. Okay,
25714.81 -> look at that. Yeah, that's nice. You don't
have to hit empty anymore. So yeah, I think
25715.81 -> we have everything with the exception of CloudFront
there and the ACM again, once they're disabled,
25716.81 -> you delete them, and then you delete your
ACM. So yeah, we have fully cleaned up here.
25717.81 -> And hopefully, you really enjoyed this. Alright,
so now it's time to book our exam. And it's
25718.81 -> always a bit of a trick to actually find where
this page is. So if you were to search AWS
25719.81 -> certification and go here.
25720.81 -> Alright, and then maybe go to the training
overview, and then click get started, it's
25721.81 -> going to take you to aws.training,
and this is where you're going to register
25722.81 -> to take the exam. So in the top right corner,
we are going to have to go ahead and go sign
25723.81 -> in. And I already have an account. So I'm
just going to go and login with my account
25724.81 -> there. So I'm just gonna hit sign in there.
Okay, and we're just going to have to provide
25725.81 -> our credentials here. So I'm just going to
go ahead and fill mine in, and I will see
25726.81 -> you on the other side and just show you the
rest of it. Alright, so now we are in the
25727.81 -> training and certification portal. So at the
25728.81 -> top, we have one-stop training. And to get
25728.81 -> to booking our exam, we got to go to certification
here. And then we're going to have to go to
25729.81 -> our account. And we're going to be using the
CertMetrics, the third party service that
25730.81 -> actually manages the certifications. So we're
going to go to our CertMetrics account
25731.81 -> here. And now we can go ahead and schedule
our exam. So we're going to schedule a new
25732.81 -> exam. And down below, we're going to get a
full list of exams here. So it used to just
be PSI. And so now they also have PSI and Pearson
25734.81 -> VUE, these are just networks of training
25734.81 -> centers where you can actually go take and
sit the exam, for the CCP, you can actually
25735.81 -> take it from home. Now it's the only certification
you can take from home, it is a monitored
25736.81 -> exam. But for the rest, they have to be done
at a data center. And so I'm just going to
25737.81 -> show you how to book it either with psi or
a Pearson VUE here. And again, they have different
25738.81 -> data centers. So if you do not find a data
center in your area, I'll just go give Pearson
25739.81 -> VUE a look so that you can actually go book
that exam. So let's go take a look at an exam.
25740.81 -> So maybe we will book the professional here.
So I'm just going to open this in a tab and
25741.81 -> open that in a tab and we're going to review
how we can book it here through these two
25742.81 -> portals. So let's take a look at psi, this
is the one I'm most familiar with. Okay, because
25743.81 -> Pearson VUE wasn't here the last time I checked.
But so here you can see the duration and the
25744.81 -> confirmation number, you want to definitely
make sure you're taking the right exam. Sometimes
25745.81 -> there are similar exams like the old ones,
that will be in here. So just be 100% sure,
25746.81 -> before you go ahead and do that, and go and
schedule your exam. And so it's even telling
25747.81 -> you that there is more than one available
here, and that's fine. So we'll just hit Continue.
25748.81 -> Okay. And then from here, we're going to wait
here and we're going to select our language,
25749.81 -> okay. And then we get to choose our test centers.
So the idea is you want to try to find a test
25750.81 -> center near you. So if I typed in Toronto
here, so we'll enter a city in here like Toronto,
25751.81 -> I don't know why it thinks I'm over here. And
I'm just going to hit Toronto here. And we're
25752.81 -> going to search for exam centers. Okay, and
then we are going to have a bunch of over
25753.81 -> here. So the closest one in Toronto is up
here. So I'm gonna click one. Alright, and
25754.81 -> it's going to show me the available times
that I can book. So there's not a lot of times
this week, generally it has to
be like two, three days ahead. Every time
I've booked an exam, it's never been the next day.
But here we actually have one it's going to
25757.81 -> vary based on the test center that you have
here. We're going to go ahead here and this
25758.81 -> one only lets you do Wednesdays and Thursdays.
So if we had the Thursday here at 5pm. Okay,
25759.81 -> and then we would choose that and we would
continue. Okay, and then we would hit Continue
25760.81 -> again. Alright, and so the booking has been
created and in order to finalize it, we just
25761.81 -> have to pay. It is in US dollars, okay.
So you'd have to just go and fill that out.
25762.81 -> And once that's filled out and you pay it,
then you are ready to go sit that exam. So
25763.81 -> that's how we do with psi and then we're gonna
go take a look over at Pearson VUE. So I'm
25764.81 -> just gonna go ahead and clear this. Because
I'm not serious about booking an exam right
25765.81 -> now. Okay. And we'll go take a look how we
do it with Pearson VUE. So here we are in
25766.81 -> the Pearson VUE section to book. And you first
need to choose your preferred language. I'll
25767.81 -> choose English because that's what I'm most
comfortable with. And we're going to just
25768.81 -> hit next here. And the next thing it's going
to show us is the price and we will say schedule
25769.81 -> this exam. All right. And now we can proceed
to scheduling. Okay, so we'll just proceed
25770.81 -> to scheduling, it's giving me a lot of superfluous
options. And here we can see locations in Toronto.
25771.81 -> Okay, so here are test centres. And we do
actually have a bit of variation here. So
25772.81 -> you can see there are some different offerings,
you might also see the same data center, so
25773.81 -> I can choose this one here. Okay, and it lets
you select up to three to compare the availability.
25774.81 -> So sure, we will select three, and we will
hit next. Okay, we'll just wait a little bit
25775.81 -> here. And now we are just going to choose
when we want to take that exam there. So we
25776.81 -> do have the three options to compare. And
so you know, just choose that 11:00 time, okay.
25777.81 -> And so then we would see that information
and we could proceed to checkout.
Source: https://www.youtube.com/watch?v=Ia-UEYYR44s