
AWS re:Invent 2020 - Infrastructure Keynote with Peter DeSantis
Peter DeSantis, Senior Vice President of AWS Infrastructure and Support, goes behind the scenes to show how AWS thinks differently about reliability, performance, and efficiency. He shares his insights on AWS data center, Availability Zone, and region design. Peter dives deep on AWS Graviton2, AWS Nitro System, and how Nitro enables the new Amazon EC2 Mac instance. He also provides an update on Amazon’s progress towards The Climate Pledge.
Guest Speakers:
Michelle McKenna, of NFL
Jerry Hunter, of Snap Inc.
Content
2.359 -> [music playing]
13.57 -> Please welcome the Senior VP
of AWS Infrastructure and Support,
16.03 -> Infrastructure Leadership,
Peter DeSantis.
21.08 -> [music playing]
29.8 -> Good morning.
Thank you for joining me.
32.68 -> For those of you who have not
watched this keynote before,
35.46 -> we like to go a little deeper
37.36 -> on how AWS thinks about
and builds infrastructure.
41.26 -> This morning I have some
pretty cool things to share.
44.4 -> Hopefully by now,
you’ve heard about Graviton2
47.49 -> as customers are reporting
some fabulous results
50.17 -> both improving their performance
and lowering their cost.
53.42 -> Today, I’m going to show you
how we designed Graviton2
56.69 -> and why it performed so well.
58.02 -> And I’m excited to give you an update
on our progress on sustainability.
62.37 -> We’ve made a ton of progress
since last year
64.55 -> and I can’t wait to share
that with you as well.
67.32 -> Of course, I have some
really great customers here
69.8 -> to talk about how they’re using
AWS infrastructure
72.78 -> to do amazing things
for their customers,
75.5 -> but I want to start off
this morning
76.76 -> by talking to you about something
a bit different – how we operate.
81.86 -> Can you tell me how AWS operates?
85.52 -> This is a question
that I often get from customers.
88.74 -> And they usually ask the question
for a couple of reasons.
91.74 -> First, they want to understand
our operational practices
95.45 -> because if you’re going to run
your critical workloads in the cloud,
99.27 -> you want to know that you can depend
upon the operational practices
102.11 -> of your cloud provider.
104.2 -> But customers
also ask this question
106.49 -> because they want to see
what they can learn from AWS
108.93 -> and apply those learnings
to their own business.
113.11 -> At Amazon, we pride ourselves
on being a little bit peculiar
117.55 -> and one of the ways
this comes across
119.48 -> is in the way some leaders
express their ideas.
123.18 -> I’ve been collecting these
peculiar quotes for over a decade.
126.9 -> You see, it’s part
of my retirement plan
129.06 -> to write a book
on Amazon culture.
131.63 -> Unfortunately that book is not
coming along that quickly,
135.15 -> so I thought it might be fun
to use a few of these quotes
137.61 -> to help us along in this discussion.
140.13 -> Let me start with
one of our favorites.
144.23 -> When it comes to being a great
operator, there’s no shortcuts.
148.65 -> Great operational performance is
the result of a long-term commitment
152.21 -> and an accrual of small decisions
and investments
154.45 -> compounding on top of one another.
157.22 -> But I do think there are
a few things different
159.61 -> about the way we approach
availability at AWS.
162.83 -> And I think it starts with AWS
growing out of Amazon.
166.6 -> Amazon is at its heart
a technology company
169.07 -> that’s uniquely tied
to the physical world
171.89 -> operating warehouses
and customer support centers,
174.65 -> running a complex
international supply chain
177.11 -> and keeping a website up
24 hours a day,
179.54 -> seven days a week for decades.
181.61 -> These are all real-world
operational challenges
184.28 -> that benefit from
technological invention.
187.18 -> And these problems
have inspired us
189.35 -> to be both great technologists
and great operators.
195.65 -> Here’s an interesting quote
from a leader
198.07 -> that must have been
a big fan of Columbo
200.08 -> or perhaps the board game Clue.
202.63 -> But what’s interesting
about this quote is the context.
205.93 -> This leader was patiently
sifting through the details
209.18 -> to find the root cause of a problem.
Now when I interview senior leaders
214.07 -> from other companies
they often ask me,
216.03 -> “Are there common challenges that are
faced by new leaders at Amazon?”
219.97 -> And I tell them
the most common reason
221.91 -> that I see new leaders
at Amazon struggle
225.16 -> is not getting in the details
the way we would expect.
228.54 -> We expect all our leaders,
however senior,
231.41 -> to spend a considerable amount of
their time working in the details.
235.43 -> If you’re an engineer,
it’s nearly impossible
237.76 -> to build a high availability system
239.53 -> without understanding
the operational challenges
241.49 -> encountered by the current systems.
And if you’re a manager,
244.78 -> it’s difficult to make informed
decisions about things like roadmap
247.96 -> without understanding
these same details.
252.62 -> Setting the right culture
and being in the details matters.
257.22 -> But if you want to provide
your customers
258.9 -> with differentiated
operational performance,
260.99 -> you need to design it in.
You may have heard Werner Vogels say,
264.76 -> “Everything fails,”
and that idea that anything can
268.26 -> and will fail influences
everything we build at AWS.
272.46 -> The idea is simple.
273.72 -> Anticipate failure
and design your products
276.5 -> to protect your customers.
278.43 -> To illustrate this idea, we’ll look
at how we build our infrastructure,
282.1 -> specifically our data center
power infrastructure.
286.61 -> Let’s start with the basics.
288.34 -> These are the usual suspects involved
in high availability power design.
291.91 -> There’s a utility feed
coming into the data center.
294.68 -> Power from the utility goes
into what’s called switch gear.
297.49 -> This is really just a set
of interlock breakers
299.68 -> with some monitoring logic
and control logic.
304.05 -> Our switch gear here
is responsible for detecting
306.63 -> when there’s a utility power issue
and when it sees this,
309.48 -> it disconnects the utility power
and it activates a backup generator.
313.62 -> The power from the switch gear
is fed into what’s called
316.09 -> an uninterruptible power
supply, or UPS.
319.55 -> A UPS is essentially
an energy storage device
322.17 -> that supports the critical load
while the switch gear switches
325.32 -> between the utility
and the backup generator.
328.43 -> Now, while this is a simple design
and it works for utility failures,
333.35 -> this comes up far short of what
you need to run a data center.
336.96 -> Let’s take a look at that.
338.58 -> We’ll start with a generator.
340.19 -> Generators are big mechanical systems
and they sit idle most of the time.
345.16 -> But for the handful of minutes
you need them every few years,
348.25 -> they really need to work.
350.46 -> Now with the right
preventative maintenance
352.09 -> they’re remarkably reliable.
353.91 -> But remarkably reliable
isn’t good enough.
356.67 -> At scale, everything fails
359.41 -> and to keep your equipment
running at its best,
361.38 -> you need to do
preventative maintenance
363.16 -> and to do this maintenance you have
to take the generator offline.
366.45 -> And while it’s offline, it’s not
there to protect your critical load.
370.14 -> So you have to add
a backup generator.
372.29 -> This concept called concurrent
maintainability with redundancy
376.9 -> is important for
all of our critical gear.
380.36 -> So what can we do?
Well we add another generator.
385.21 -> That’s a fairly easy change,
387.32 -> but we still have problems
with the simple design.
390.39 -> What happens if the UPS
or the switch gear fail?
393.48 -> And how do we do maintenance
on those components?
396.43 -> Let me show you how we think about
different components in a system
399.57 -> and their potential
impact on availability.
403.86 -> When you look at the components
of a system,
406.13 -> whether the system is
a software system
408.46 -> or a physical system
like the one we’re looking at here,
411.12 -> it’s useful to think about them
along a couple of dimensions.
414.49 -> The first is what is the blast radius
of the component if it has a failure?
419.3 -> If the impact is really small, then
the failure may be less of a concern,
423.41 -> but as the blast radius gets bigger,
things get much more concerning.
427.65 -> In the case of our power design,
429.5 -> the blast radius of both components
is big, really big.
434.62 -> Data center UPSs these days tend
to be around 1 megawatt of power.
439.17 -> To put that in perspective,
440.27 -> you can power thousands of servers
with 1 megawatt of power.
443.92 -> And your switch gear, it needs to be
at least as big as your UPS
447.51 -> and it’s often much bigger.
449.56 -> So both of these components
have big blast radius.
453.51 -> The other dimension
that’s interesting
455.18 -> when you’re evaluating components
is complexity.
458.18 -> The more complicated a component,
460.38 -> the more likely it is
to have an issue.
462.46 -> And here the switch gear
and the UPS are quite different.
466.14 -> Let’s take a deeper look.
468.89 -> As I mentioned switch gear
is fairly uncomplicated equipment.
473.18 -> It’s big and super-important
475.68 -> but it’s really just a bunch
of mechanical circuit breakers,
478.37 -> some power sensing equipment
and a simple software control system.
482.79 -> Now, that control system is simple,
but it is software.
487.51 -> Most vendors will refer to it
as firmware
489.55 -> but that really just means
it’s embedded software
491.82 -> that gets saved
to a persistent memory module.
494.62 -> And software that you don’t own that
496.89 -> is in your infrastructure
can cause problems.
501.04 -> First, if you find a bug, you can
spend weeks working with the vendor
504.76 -> to reproduce
that bug in their environment
507.41 -> and then you wait months
for the vendor to produce a fix
510.85 -> and apply and validate that fix.
And in the infrastructure world,
515.65 -> you have to take that fix
and apply it to all these devices
518.71 -> and you might have to send
a technician to manually do that.
522.37 -> And by the time you’re done,
523.8 -> it can easily take a year
to fix an issue
525.9 -> and this just won’t work
to operate the way we want.
529.03 -> Another problem with third party
embedded software
532.02 -> is that it needs to support lots
of different customer use cases.
535.35 -> Vendors need to optimize
their equipment
537.43 -> for what most
of their customers want,
539.69 -> not what you need
and this means added complexity
543.61 -> which in turn increases
the risk of a failure.
547.47 -> Finally,
small operational differences
549.69 -> in how the different software behaves
can make your operations different,
553.16 -> more complex
and this can lead to mistakes.
558.1 -> So many years ago we developed
our own switch gear control system.
561.74 -> We call it AMCOP.
You can see it pictured here.
565.37 -> Now this may look fairly simple
and indeed we’ve invested heavily
569.15 -> in keeping it as simple as possible.
571.44 -> We don’t add features
to our controller.
573.74 -> Instead we focus on ensuring it does
its very important job perfectly.
579.3 -> Today, we use dozens of different
makes and models of switch gear
582.75 -> from several partners,
but they’re all controlled by AMCOP
587.82 -> and this means that we can
operate our global data centers
590.87 -> exactly the same way everywhere.
Now let’s look at a UPS.
597.42 -> There’s lots of ways
to spot complexity.
600.22 -> The first is to look at how many
components are in the system.
603.14 -> And you can see from this picture
604.84 -> that UPSs have
very complex electronics,
608.19 -> but what you can’t see
in this picture
610.19 -> is the software that runs
on these components
612.64 -> and that’s where things
get really complex.
615.47 -> The UPS has a hard job to start with
and vendors have jam-packed the UPS
619.61 -> with features over the last 20 years.
622 -> Now we often disable
many of these features,
624.52 -> but they still add complexity
to the UPS.
629.37 -> Beyond the complexity of
the UPS itself, there are the batteries.
632.96 -> UPSs need to store energy somewhere
635.47 -> and they usually do that
in lead acid batteries.
638.63 -> There are other solutions, but lead
acid batteries have an advantage
641.44 -> of being an old,
well understood technology.
644.05 -> But they have disadvantages too.
646.07 -> The batteries for a single 1
megawatt UPS weigh 12,000 lbs
650.61 -> and they’re best stored
in a dedicated room
652.57 -> because they require
special environmentals.
655.3 -> All right, let’s go back
to our chart.
659.04 -> As we saw earlier, both the UPS
and the switch gear are big,
662.85 -> but the UPS is significantly more
complex, so let’s update our chart.
670.2 -> Okay, now you can clearly see
what’s keeping us up at night
674.86 -> and we’re not the only ones
that have come to the conclusion
677.17 -> that a single UPS
is not reliable enough.
679.69 -> Lots of smart people
have worked on solutions.
682.08 -> There’s no standard solution
but the common approach
684.88 -> is to throw more redundancy
at your design,
687.24 -> usually by adding a second UPS.
690.25 -> And often this is done by using
a feature of the UPS
693.28 -> that allows it to be paralleled
with other UPSs.
696.96 -> The problem with this approach
699.12 -> is that it doesn’t really change
your position on our graph.
702.39 -> You still have a big, complicated
component keeping you awake at night.
706.22 -> You’ve just added one more and
it’s connected to the other UPS.
710.54 -> More interconnected
complicated components
713.27 -> is seldom a solution
that yields increased availability.
718.21 -> The approach we’ve taken
for a long time
720.43 -> is to power our servers
with two independent power lineups.
724.28 -> By independent I mean each lineup
has its own switchgear,
727.35 -> its own generator, its own UPS,
even its own distribution wires.
731.66 -> And by keeping these lineups
completely independent
733.93 -> all the way down to the rack,
736.99 -> we’re able to provide
very high availability
740.12 -> and protect ourselves
from issues with the UPS.
742.82 -> And while we don’t love our UPSs
for the reasons I mentioned earlier,
746.18 -> this redundancy performs very well.
749.11 -> In fact, our data centers running
this design achieve availability
752.74 -> of almost 7 9s.
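A back-of-envelope sketch makes the point concrete. Assuming each lineup fails independently and using an illustrative per-lineup availability (not an AWS figure), a rack loses power only when both lineups are down at the same time:

```python
# Illustrative only: the per-lineup availability is an assumed figure, not an AWS number.
lineup_availability = 0.9999                      # assume "four 9s" per independent lineup
both_fail = (1 - lineup_availability) ** 2        # rack loses power only if both lineups fail together
combined_availability = 1 - both_fail
print(f"combined availability: {combined_availability:.10f}")  # ~0.99999999, roughly "eight 9s"
```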
756.39 -> But if you want to be
an amazing operator,
759.1 -> you need to constantly
push to improve.
761.79 -> And as you can see
in this peculiar quote,
764.28 -> you can’t just ruminate about
how to make things better,
766.699 -> you need to act.
768.39 -> So how do we improve
our power design?
770.9 -> Well, we need to address the big
blast radius complicated component,
776.87 -> the UPS.
And let me show you how we did that.
782.48 -> Rather than using a big third-party
UPS,
784.97 -> we now use small battery packs
and custom power supplies
788.02 -> that we integrate into every rack.
790.2 -> You can think about this as a micro
UPS, but it’s far less complicated.
795.42 -> And because we designed it ourselves,
we know everything about it
798.56 -> and we control all the pieces
of the software.
801.29 -> And as we discussed earlier,
this allows us to eliminate
803.88 -> complexity
from features we don’t need
806.36 -> and we can iterate at Amazon speed
to improve the design.
810.42 -> Now the batteries can also be
removed and replaced in seconds
813.22 -> rather than hours and you can do this
without turning off the system.
817.23 -> So this allows us
to drastically reduce
819.11 -> the risk of maintenance we need
to do to the battery shells.
824.19 -> The end result, we eliminated
a big blast radius high complexity,
829.09 -> failure-prone UPS
830.52 -> and we replaced it with a small blast
radius, lower complexity component.
834.63 -> Now this is exactly
the sort of design
837.12 -> that lets me sleep like a baby.
839.26 -> And indeed, this new design is
giving us even better availability
842.65 -> than what I showed earlier.
844.64 -> And better availability
is not the only improvement
846.71 -> we get from this new design.
847.8 -> I’m going to come back to this later
when we look at sustainability.
851.19 -> But for now I want
to stay on infrastructure.
853.85 -> Like the example I showed you here
today, we are continually investing
857.4 -> and improving the availability
of our infrastructure.
861.71 -> But we are also aware that no matter
how good our infrastructure
865.75 -> design and operations,
some level of failure is inevitable.
869.75 -> Physical events like fires,
floods, hurricanes,
872.28 -> tornadoes are tangible proof
874.02 -> that you cannot achieve
extremely high reliability
876.78 -> from any single server
or even a single data center.
880.54 -> After we launched Amazon EC2,
Availability Zones
883.95 -> were one of the very
first features that we added.
886.97 -> At the time the idea
of an Availability Zone
889.69 -> was a brand new concept
to most users,
892.38 -> but this idea was not new to Amazon
because this was the way
895.44 -> we had been running
our infrastructure for a long time.
899.94 -> About five years
before we launched AWS
902.14 -> we made a decision to expand
the Amazon.com
904.78 -> infrastructure
beyond the single data centers
907.34 -> that we were using in Seattle.
909.46 -> Several operational near misses
made it clear
912.85 -> that we needed
an infrastructure strategy
914.33 -> that allowed us to run our critical
systems out of several data centers.
920 -> One of the leading ideas at the time
921.85 -> was to move our system
to two data centers,
923.95 -> one on the west coast
and one on the east coast.
926.94 -> This is the availability model
used by most of the world
929.79 -> at that time
and it’s still common today.
931.67 -> And while this idea seems
compelling at a high level,
935.59 -> it loses its charm quickly
when we get into the details.
939.38 -> Amazon is a highly stateful,
real-time application
942.12 -> that needs to do things like keep
inventory information
945.59 -> up-to-date and consistent while all
the users are accessing the website.
950.11 -> And when you’re trying to keep data
951.56 -> that changes
this rapidly synchronized,
953.84 -> with either strong or eventual
consistency, latency matters.
958.26 -> With a synchronous strategy
the latency between your replicas
961.66 -> will directly impact
your maximum transaction rate.
964.5 -> And with an asynchronous strategy,
the higher the latency,
968.16 -> the more out-of-date
your replicas will be.
970.92 -> And for Amazon
where millions of customers
973.54 -> are accessing real-time
inventory information
976.23 -> and making real-time changes
to that information,
979.25 -> neither a low transaction rate
981.23 -> nor out-of-date information
is acceptable.
984.34 -> And this problem
isn’t unique to Amazon.
986.61 -> All modern high-scale applications
988.47 -> need to support
high transaction rates
990.62 -> and seek to provide up-to-date
information to their users.
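To see why replica latency caps throughput, consider a strictly serialized synchronous commit path where each commit must wait a full round trip before the next can begin. The round-trip times below are rough, assumed figures, not numbers from the talk:

```python
# Rough upper bound for a strictly serialized synchronous commit stream:
# each commit waits one full round trip to the replica before the next begins.
for label, rtt_ms in [("nearby data centers", 1), ("coast to coast", 70)]:
    max_commits_per_sec = 1000 / rtt_ms
    print(f"{label}: ~{max_commits_per_sec:.0f} commits/sec per stream")
```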
994.45 -> So the obvious solution is to run
from multiple data centers
997.82 -> located closer together.
But how close?
1003.14 -> Like all good engineering challenges
we have a trade-off.
1006.67 -> The further apart
you put two data centers,
1008.81 -> the less likely they’ll be
impacted by a single event.
1012.5 -> On one extreme, if you put two
data centers across the street
1015.46 -> from one another, it’s pretty easy
to think about things
1018.09 -> that might impact both data centers.
1020.1 -> Common things like utility failures
to less common things like fires
1023.69 -> and floods to unlikely but really
scary things like tornadoes.
1029.34 -> But as you get further apart,
1030.97 -> the probability of these sorts of
natural disasters goes down quickly.
1036.26 -> And once you get
to miles of separation,
1038.36 -> even natural disasters
like hurricanes and earthquakes
1040.92 -> are unlikely to have a significant
impact on both data centers.
1046.22 -> Now you can keep going,
but after a while you need to start
1049.11 -> imagining some absurdly
low probability disasters,
1052.1 -> things that haven’t happened
in any of our lifetimes
1054.41 -> to imagine simultaneous impact.
1057.55 -> And of course, on the other side
adding distance adds latency.
1063.25 -> The speed of light
is a pesky little thing
1065.1 -> and no amount of innovation
speeds it up.
1068.51 -> Here I’ve added some estimates for
how much latency would be observed
1071.55 -> as we move
our data centers further apart.
1075.52 -> The conclusion we came to is that
for high availability applications
1079.26 -> there’s a Goldilocks zone
for your infrastructure.
1082.05 -> When you look at the risk,
the further away the better,
1084.59 -> but after a handful of miles,
there’s diminishing returns.
1088.23 -> This isn’t an exact range.
1089.87 -> Too close varies a little
based on geographic region.
1092.69 -> Things like seismic activity,
hundred-year flood plains,
1095.71 -> probability of large hurricanes
or typhoons
1097.89 -> can influence
how we think about too close.
1100.89 -> But we want miles of separation.
1103.28 -> And what about too far?
Here we look at the latency
1106.19 -> between all the Availability Zones
in the region
1108.28 -> and we target a maximum latency
roundtrip of about 1 millisecond.
1113.29 -> Now whether you’re
setting up replication
1115.04 -> with a relational database
1116.55 -> or running a distributed system
like Amazon S3 or DynamoDB,
1120.6 -> we’ve found that things
get pretty challenging
1122.5 -> when latency goes much
beyond 1 millisecond.
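For intuition on how distance maps to that roughly 1 millisecond round-trip target, here is a quick lower-bound estimate using the speed of light in fiber (about c divided by a glass refractive index of roughly 1.47); real fiber paths are longer than straight lines, so actual latencies are higher:

```python
# Lower-bound round-trip time over a straight fiber run; real routes are longer.
SPEED_OF_LIGHT_KM_S = 299_792
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S / 1.47     # ~204,000 km/s in glass

for distance_km in [1, 10, 100, 1000]:
    rtt_ms = 2 * distance_km / FIBER_SPEED_KM_S * 1000
    print(f"{distance_km:>5} km apart -> at least {rtt_ms:.3f} ms round trip")
```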
1127.97 -> I just spent a bunch of time
giving you insight
1129.8 -> into how we think
about Availability Zones,
1132.48 -> but really this is not news.
1134.08 -> You can read all about this
in our documentation.
1137.46 -> Here you can see how we define
Regions and Availability Zones.
1141 -> Of particular note
you’ll see that each Region
1143.41 -> has multiple Availability Zones
1145.6 -> and we clearly state
that every Availability Zone
1147.93 -> is physically separated
by a meaningful distance.
1151.52 -> Seems pretty straightforward.
1153.44 -> But let’s see how
some other cloud providers
1155.73 -> talk about
their Availability Zones
1157.76 -> and more importantly let’s look
at what they don’t say.
1162.41 -> Here's the way other
US cloud providers
1164.81 -> talk about Availability Zones.
1167 -> Neither provider is clear
and direct about what they mean.
1169.68 -> Words like ‘usually’
1171.11 -> and ‘generally’ are used
throughout their documentation.
1175.15 -> And when you’re talking about
protecting your application
1177.72 -> with meaningful separation,
1179.34 -> usually and generally
just aren’t good enough.
1183.62 -> But the most concerning thing
about this documentation
1185.72 -> is what it doesn’t say.
1187.59 -> Neither provider says anything
about how far apart
1190.5 -> their Availability Zones are.
1192.44 -> Two rooms in the same house
are separate locations,
1196.27 -> but that’s not what you want
from your Availability Zones.
1199.41 -> And another challenge
is the fine print.
1202.23 -> One provider says
that Availability Zones
1204.07 -> are generally available
in select Regions.
1206.45 -> Well what’s a select Region?
1208.54 -> I took a quick look to figure out
how many Regions that might be
1212.38 -> and it looks like about 12 Regions
have Availability Zones
1215.3 -> and another 40 Regions do not.
1217.94 -> And notably, none of the Regions
in South America,
1220.92 -> Africa or the Middle East
have Availability Zones.
1223.75 -> It also appears that countries
like China and Korea
1226.96 -> lack Regions
with Availability Zones entirely.
1231.07 -> For most customers, properly
designed Availability Zones
1234.16 -> provide a powerful tool
1235.27 -> to cost effectively achieve
very high availability.
1238.86 -> We believe that availability
you can achieve from properly
1241.74 -> designed Availability Zones
1243.21 -> is sufficient for the vast
majority of workloads.
1246.15 -> But some customers require
even higher levels of assurance.
1249.48 -> For example,
to meet regulatory requirements.
1253.06 -> And for these workloads AWS
offers the option to run applications
1256.85 -> in multiple geographic regions
to attain
1259.35 -> even higher fault-tolerance
and disaster isolation.
1263.25 -> But here too we think
about things a little differently.
1267.99 -> When thinking about things
that can impact your services,
1270.58 -> we naturally think about fires,
floods, tornadoes.
1273.65 -> But humans are the most
likely source of a problem
1276.61 -> and it’s usually
well-intentioned humans.
1279.11 -> As the quote here tells us,
anything that a human can touch,
1283.16 -> a human can muck up.
1285.08 -> And that’s why AWS
goes to extreme lengths
1287.46 -> to protect our infrastructure
and our services
1290.18 -> from human and natural disasters.
1292.18 -> Anything that can impact them from
a single event or a single change.
1296.68 -> We do this everywhere.
But AWS Regions are special.
1301.28 -> Regions are designed to be entirely
isolated from one another
1305.68 -> at the infrastructure level
1306.99 -> and also at the software
and the services level.
1309.66 -> By default, all AWS services
are built and deployed
1312.86 -> separately to every AWS Region
1314.81 -> to assure that we do not
have operational issues
1317.12 -> across multiple Regions
simultaneously.
1319.73 -> A small number of services
provide cross-region capabilities.
1322.94 -> For example, you can use
the same AWS credentials
1325.52 -> to log into two different regions.
These capabilities are limited,
1329.24 -> highly scrutinized
and carefully managed.
1332.29 -> As a result,
in our 15 years of operation,
1334.93 -> services like Amazon S3,
Amazon EC2 and Amazon DynamoDB
1339.42 -> have never had significant issues
in multiple regions at the same time.
1344.76 -> AWS currently
has 24 independent regions,
1348.1 -> each with multiple well-designed
Availability Zones.
1352.13 -> This year, we’re delighted to launch
AWS Regions in Cape Town,
1355.36 -> South Africa and Milan, Italy.
1357.87 -> We’ve also announced new Regions
coming in Switzerland,
1361.08 -> Indonesia and Spain
1362.61 -> and will be adding our second Regions
in India, Japan and Australia.
1367.56 -> That last one was just
announced earlier this week.
1371.93 -> Of course, great availability
1374.99 -> is not just about
keeping everything running.
1377.44 -> For a cloud provider,
it also includes providing you
1379.96 -> with the capacity you need when you
need it and where you need it.
1383.85 -> Just like a technical design,
1385.36 -> a supply chain
is made up on components
1388.56 -> or in this case suppliers and each
of these components can fail.
1392.55 -> These failures can be caused
by short-term issues like labor
1395.82 -> strikes or fires or longer
lasting issues like trade disputes.
1400.59 -> No year in recent memory
has seen more disruption than 2020.
1405.07 -> Starting in March of this year,
we saw varying degrees of disruption
1408.1 -> all over the world
as local communities
1410.22 -> responded to the Coronavirus.
1412.33 -> The impact has been different
based on location and time,
1416.74 -> but unexpected delays
and closures were the norm
1419.74 -> and in many ways they continue to be.
To deal with the real world,
1423.45 -> the best protection is engineering
your supply chain
1426.02 -> with as much geographic
and supplier diversity as you can.
1429.63 -> Just as separation can protect
your infrastructure,
1432.21 -> it can also protect
your supply chain.
1434.49 -> And this has been an area
of continued focus for us
1436.71 -> over the last decade.
1439.51 -> Here's a view of our supply chain
for four critical components.
1442.57 -> Each dot represents
one or more suppliers.
1445.02 -> A bigger dot represents
more suppliers.
1447.58 -> At this time, we had a total
of twenty-nine suppliers
1451.14 -> in four countries
for these four components.
1453.68 -> Now, this is reasonable
supplier diversity
1455.92 -> but we wanted to do much better.
1460.66 -> Here’s the global supply map for
those same four components in 2020.
1464.99 -> Since 2015, we’ve nearly tripled
the number of suppliers
1468.8 -> and increased our supply base
to seven countries.
1471.6 -> And this added diversity
was a big help
1474.18 -> in navigating the challenges of 2020.
1477.94 -> Our investments in a
geographically diverse supply chain
1480.58 -> and our operational focus that we
put on capacity planning meant that,
1484.32 -> despite all the challenges
of this last year,
1486.85 -> our customers were able to keep
scaling without interruption.
1490.3 -> It’s so satisfying to see
how those investments allowed us
1493.77 -> to help our customers
through this challenging time.
1496.56 -> Now, I want to introduce
one of our customers
1498.95 -> that had to quickly reinvent
how they did business
1501.06 -> when COVID hit in the spring.
1502.63 -> Here is Michelle McKenna,
Chief Information Officer
1505.43 -> for the National Football League.
1511.17 -> Thank you, Peter.
1513.06 -> Our thirty-two teams
and their passionate fans
1515.57 -> make the NFL America’s
largest sports organization.
1520.32 -> Last season, we finished
our one hundredth season
1523.65 -> with over fifteen
million viewers per game,
1526.45 -> creating forty-one of the top
fifty broadcasts in the US.
1531.15 -> As CIO, I oversee
the League’s technology strategy
1535.42 -> which includes making sure
we leverage the best and greatest
1538.65 -> and new technologies
to evolve our game,
1541.94 -> engage our fans,
and protect and develop our players.
1546.44 -> In early March of this year,
we were preparing for the NFL draft.
1550.1 -> A live event where we welcome
our newest players.
1553.91 -> The NFL draft is about conducting
the business of football
1558.22 -> but it has now also grown into
one of our marquee events enjoyed
1562.57 -> by hundreds of thousands of fans
over three days in an NFL city
1567.44 -> and even watched by millions online
and on television.
1571.06 -> The 2020 draft
was to be in Las Vegas.
1574.5 -> But, like the rest of the world,
we learned about COVID-19
1577.63 -> and we rapidly began to understand
1579.75 -> that the event would be
much different in 2020.
1583.42 -> On March 13th, our offices shut down
1586.29 -> and the offices and facilities
of our clubs soon followed suit.
1590.85 -> On March 24th, we had a meeting.
I recently looked back at my calendar
1596.38 -> and it was aptly named
“Draft Contingencies”.
1599.8 -> And my, what a contingency
we ended up needing.
1603.09 -> By then we had learned
that we would not be able
1605.58 -> to gather in our facilities at all.
1608.34 -> So, five weeks out,
the League had to call an audible.
1612.09 -> The draft was officially
going virtual.
1615.16 -> In the span of a few days,
1616.51 -> we went from a live broadcast
in Las Vegas
1619.5 -> to hopefully being able to gather
in our facilities coaches and staff
1623.47 -> to ultimately everyone,
1625.26 -> every player prospect,
every coach, general manager, scout,
1629.23 -> and even our Commissioner would need
to be remote from their homes.
1633.22 -> The challenge was immense.
1635.3 -> With hundreds of questions about
how we were going to pull it off,
1638.52 -> would we really be able
to do it remotely?
1641.38 -> Could it be done
without technical interruption?
1643.93 -> I remember holding my breath
1645.67 -> when asked that question
by our Commissioner
1648.54 -> because, typically,
televised broadcasts
1651.11 -> require a production truck
and the reliability of satellite,
1655.55 -> which rarely fails, transmitted back
to studios for production.
1660.85 -> But this traditional route
wouldn’t work
1662.75 -> for all the hundreds of remotes
that we would need.
1665.66 -> So, we had to figure out a new plan.
1669.73 -> We quickly got together
with our partners and events
1672.93 -> and one of the first companies
we reached out to for help was AWS.
1678.26 -> The NFL and AWS have been
strategic partners for many years now
1682.78 -> and as the CIO of the League,
I have leaned on AWS
1685.66 -> many times to help me
solve challenges
1688.62 -> that we haven’t faced before.
1690.72 -> So, right away,
I reached out to my partners.
1693.81 -> Our Head of Technology John Cave
1695.62 -> actually suggested to us all
on a big video call
1699 -> that perhaps we could carry
the broadcast
1700.97 -> over the internet
using AWS and mobile phones
1704.29 -> instead of broadcast satellites
and high-end broadcast cameras.
1708.27 -> At first, it seemed impossible.
ESPN, I recall, told us,
1713.92 -> “We’ve never done anything
like this before.”
1716.78 -> We had eighty-five draft picks to do,
an even larger number of coaches,
1720.77 -> GMs, and staff, and we were scattered
all over the country.
1724.89 -> How could this possibly work?
1726.76 -> Well, with ESPN,
our broadcast partner, and AWS,
1730.31 -> we put our heads down
and came up with a plan.
1733.27 -> Each player would receive
two devices.
1735.68 -> One always on device that would show
the emotion of the moment,
1740.49 -> the anticipation and the excitement.
1743.42 -> It was actually the “live from
the living room” shot, so to speak.
1747.46 -> And the other interview camera
was to be used for interviews
1750.76 -> so that a player could step aside
1752.89 -> and have one-to-one interactions
with their new teams, fans,
1756.69 -> and our Commissioner.
1758.69 -> We created and shipped nearly two
hundred at home production booths
1762.87 -> for top prospects,
coaches, teams, and personnel,
1766.49 -> including everything from
two mobile phones to lighting,
1769.11 -> to microphones and tripods.
And even further than this,
1772.43 -> we went through a full tech analysis
of each location
1775.61 -> to ensure that if connectivity
needed upgrading,
1778.38 -> it could be done in advance.
1780.16 -> And we also had every internet
service provider on speed dial.
1784.54 -> This is Jordan Love
from Utah State University.
1786.91 -> Here at the house getting ready
for the virtual draft.
1788.99 -> This morning,
I received my draft kit.
1790.9 -> Got my whole setup.
1792.13 -> This is where I’ll be sitting.
1793.63 -> Hope everyone is staying home,
staying strong.
1795.904 -> Can’t wait to see
everyone on Thursday night.
1800.32 -> We were also able to implement
a fundraising platform
1803.14 -> that raised over a hundred
million dollars for COVID
1805.5 -> relief. Knowing that this draft
could have that kind of impact
1808.99 -> is really what pushed our teams
to keep working forward
1812.01 -> through this technical challenge
so that we could pull this off,
1815.46 -> ultimately leaving a legacy
in the League’s history.
1818.93 -> AWS is a strategic partner
and known to be a resilient Cloud.
1823.31 -> And we knew if any organization
could help us pull this off,
1826.2 -> it would be AWS.
1828.16 -> AWS deployed several
of their top engineers
1831.01 -> to help us think through how we could
simultaneously manage
1833.99 -> thousands of feeds to flow
over the internet
1836.46 -> and up to ESPN in Bristol,
Connecticut, to put on the air.
1840.23 -> In order to make that successful,
the IT team had to become
1842.99 -> somewhat of an air traffic
controller for a live broadcast.
1846.32 -> But we also had to see problems
1847.93 -> in the broadcast
ahead of them happening,
1850.37 -> utilizing the best
in the crystal ball technology.
1853.58 -> You know, seeing the future.
1855.28 -> Something that I know
you all have to do.
1858.41 -> The always on video feeds
were sent to EC2
1861.02 -> instances running media gateways.
1863.45 -> ESPN pulled the feeds from EC2
and produced the show live.
1868.14 -> The NFL on-premise systems
also received the feeds
1871.56 -> via Direct Connect
for our own internal use,
1874.4 -> which included monitoring
and archiving.
1877.56 -> We used AWS Shield Advanced,
1879.72 -> a dedicated service
to monitor traffic in real time
1882.26 -> and mitigate attacks,
1883.45 -> to enhance protection
of the NFL media gateways.
1886.9 -> We used multiple Availability Zones
1888.83 -> to minimize impact
in the event of a failure
1891.57 -> and, just in case,
even more contingencies,
1894.37 -> we had additional infrastructure
ready to go in another region.
1898.95 -> AWS helped monitor and alert us
when problems were around the corner,
1903.6 -> using that crystal ball,
1905.18 -> so that we could react in time
for live television.
1909.26 -> It’s one thing to be resilient in
your day-to-day technology operation
1913.68 -> and a totally different thing
to be resilient when we were live.
1917.71 -> This was my first live
television experience
1920.98 -> and I can tell you it is not
for the faint of heart.
1924.83 -> The AWS Cloud infrastructure
was the resilient backbone
1928.47 -> that helped us move
thousands of feeds.
1931.08 -> Many people called
this draft special.
1933.74 -> And it was special indeed.
1935.87 -> Raising over
a hundred million dollars,
1938.11 -> the draft also had
a record number of viewers.
1940.96 -> Over fifteen million tuning in
for the first-round picks,
1944.58 -> a thirty-seven percent
increase from the prior year,
1948.16 -> and totaling
fifty-five million viewers
1950.78 -> over the course
of a three-day event.
1953.19 -> What resulted by chance
were the personal connections
1956.46 -> to the Commissioner, prospects,
owners, GMs, and head coaches.
1961.21 -> Social media was a testimony
to our fans’ involvement.
1965.06 -> Platforms were buzzing
everywhere about every topic.
1969.04 -> Even our Commissioner’s jar of M&Ms
became a subject of discussion,
1973.79 -> something that
we could never have imagined.
1976.69 -> But at the core of all this madness,
1979.13 -> what was so special was how our fans
were able to connect with the NFL.
1983.98 -> You see, they were going through
much the same thing
1986.83 -> we were all going through.
1988.67 -> They were able to relate
to what they were watching.
1992.05 -> All the people at home were going
through the same thing as we were.
1995.74 -> Coping with a pandemic,
remote working with our families
1998.95 -> around, pets, and many distractions.
2002.19 -> This intimate interaction
could not have been planned for
2005.37 -> and that’s what made it
so special and real.
2008.97 -> The 2020 virtual draft
will have long lasting effects.
2012.95 -> How we plan and produce our
broadcasts is going to forever change
2017.41 -> and we will now always have,
2019.19 -> I believe,
some virtual components to our draft.
2022.61 -> For example, our Commissioner
was able to personally welcome
2025.35 -> almost every player to the NFL
2027.67 -> instead of a select few
that get to attend a live event.
2032.28 -> Going forward, we will continue
to push and use AWS’
2036.38 -> Cloud to enable and transform
our broadcast and events.
2041.94 -> Thank you.
2044.12 -> Thank you, Michelle. It’s great
to hear how the NFL and AWS
2047.45 -> worked together to deliver a special
and successful NFL draft.
2052.64 -> The last couple of years,
2053.91 -> I’ve talked a lot
about our investments in AWS Silicon.
2057.03 -> That’s because these investments
have been allowing us
2059.53 -> to deliver differentiated
performance, exciting new features,
2063.43 -> improved security, and better power
efficiency for AWS customers.
2069.16 -> We’re going to look now
at how we design our chips.
2072.09 -> But chips are only part
of this story.
2074.22 -> What’s really exciting
and transformative
2076.26 -> about deep investments in AWS
2077.9 -> Silicon is being able to work
across custom hardware and software
2082.31 -> to deliver unique capabilities.
2084.83 -> And by working across
this whole stack,
2087.19 -> we’re able to deliver these
improvements faster than ever before.
2092.09 -> At AWS, we’ve been building custom
hardware for a very long time.
2096.69 -> Our investments in AWS
Custom Silicon all started in 2014
2101.65 -> when we began working with a company
called Annapurna Labs
2104.59 -> to produce
our first custom Nitro chip.
2108.17 -> Shortly after this,
Annapurna Labs became part of AWS
2111.51 -> and is now the team working on
all of these exciting chips.
2115 -> Now, we use that Nitro chip I just
talked about to create specialized
2118.54 -> hardware which we call
the Nitro Controller.
2121.54 -> We use the Nitro Controller to turn
any server into an EC2 instance.
2126.23 -> The Nitro Controller runs all
the code that we use to manage
2129.34 -> and secure the EC2
instance and virtualize
2132.05 -> and secure our network and storage.
2134.44 -> And by running on the Nitro
Controller rather than on the server,
2138.31 -> we’re able to improve
customer instance performance,
2141.16 -> increase security,
and innovate more quickly.
2144.29 -> Today, I have an example that
I believe really brings this to life.
2149.34 -> Last week, we announced
the Amazon EC2 Mac Instance.
2153.92 -> There was a ton of excitement
about this launch.
2156.84 -> But how do you make a Mac
into an EC2 instance?
2161.49 -> Well, here you can see
an actual Mac EC2 server.
2166.19 -> You probably recognize the Mac Mini
in the middle of the server tray.
2170.34 -> And if we pan out a bit,
you’ll see the Nitro Controller.
2175.16 -> The first Mac EC2 Instance
is the marriage of a Mac Mini
2178.65 -> and a Nitro Controller.
And as you see, we did not need
2181.69 -> to make any changes
to the Mac hardware.
2184.17 -> We simply connected
a Nitro Controller
2186.36 -> via the Mac’s Thunderbolt
connection.
2189.01 -> When you launch a Mac Instance,
your Mac-compatible AMI
2192.08 -> runs directly on the Mac Mini.
No hypervisor.
2195.45 -> The Nitro Controller
sets up the instance
2197.584 -> and provides secure access
to the network
2199.657 -> and any storage you attach.
2201.46 -> And that Mac Mini can now
natively use any AWS service.
2205.54 -> It can have multiple network ENIs.
2207.4 -> It can attach high performance
encrypted EBS volumes.
2210.22 -> It can have instance firewalls.
2212.36 -> And the instance has low latency
access to other AWS services,
2215.99 -> like Amazon S3 and Amazon Aurora.
2218.65 -> All the great stuff that comes
with being an EC2 Instance.
2222.03 -> And because all of this
happens outside the Mac Mini,
2225.02 -> you get all the resources of the Mac
dedicated to your workload.
2229.73 -> Just as you would if that Mac
was running on your desktop.
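For a concrete sense of what launching one looks like, here is a minimal sketch using boto3. EC2 Mac instances run on Dedicated Hosts, so a host is allocated first; the Region, Availability Zone, and AMI ID below are placeholders, not values from the talk:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# EC2 Mac instances run on Dedicated Hosts, so allocate a mac1.metal host first.
host = ec2.allocate_hosts(
    InstanceType="mac1.metal",
    AvailabilityZone="us-east-1a",
    Quantity=1,
)
host_id = host["HostIds"][0]

# Launch a macOS AMI onto that host (the AMI ID here is a placeholder).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="mac1.metal",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
```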
2234.32 -> Today, we’re on our fourth generation
of custom Nitro chips.
2238.68 -> And each generation of Nitro chip
has enabled improved performance.
2242.62 -> The most recent generation
of our Nitro chip
2244.96 -> is powering the recently
announced C6gn Instance.
2249.53 -> The C6gn is our highest performing
network optimized EC2 instance.
2254.77 -> It’s specifically designed for
the most demanding network workloads,
2258.73 -> including high performance computing.
2261.45 -> Now, there are lots of different ways
to look at network performance
2264.5 -> and the C6gn improves performance
on all of these dimensions.
2268.34 -> But as I’ve discussed a few times
in past keynotes,
2271.3 -> achieving lower latency
with reduced jitter
2273.72 -> is one of the hardest problems
in engineering.
2276.43 -> Latency is one of those challenges
that cannot be solved
2279.07 -> with more transistors,
more engineers, more power.
2284.77 -> So, here you can see
the C6gn instance
2288.05 -> and how it reduces
round-trip latencies
2289.88 -> significantly compared to the C5n.
2293.52 -> The C5n was previously
our best performing instance
2296.17 -> for network intensive workloads.
2298.75 -> And improvements like this
aren’t just at the average.
2301.37 -> You can see the improvement
in the tail latency as well.
2304.88 -> And this means reduced
performance variability
2307.91 -> which, for scale out applications,
means better overall performance.
2311.94 -> We’re going to look
at this more in a minute.
2314.54 -> Now, while we’re very excited
about the Nitro chips
2317.62 -> and our investments here,
our investments in AWS
2320.93 -> Custom Silicon
extend far beyond Nitro.
2325.13 -> Last year, we released our
first machine learning chip, AWS
2328.96 -> Inferentia. We targeted inference
with our first chip
2332.98 -> because for most at-scale
machine learning workloads,
2337.82 -> the cost of inference represents
the vast majority of the cost,
2342.12 -> and Inferentia provides
the highest
2344.35 -> throughput at almost half the cost
per inference when compared to GPUs
2348.57 -> which are commonly used for large
scale inference infrastructure.
2353.49 -> Our AWS Neuron team developed
software to allow machine
2358.03 -> learning developers to use Inferentia
as a target for popular frameworks,
2362.57 -> including TensorFlow,
PyTorch, and MXNet.
2366.58 -> With Neuron, they can take advantage
of the cost savings
2369.57 -> and performance of Inferentia
with little
2371.67 -> or no change
to their ML code,
2373.8 -> all while maintaining support
for other ML processors.
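As a rough illustration of that “little or no change” workflow, here is a minimal sketch of compiling a PyTorch model for Inferentia with the torch-neuron package; the module name and trace call reflect the Neuron SDK documentation of that era as I understand it, and the model choice is arbitrary, so treat the specifics as assumptions:

```python
import torch
import torch_neuron  # assumed package name; registers the torch.neuron namespace
from torchvision import models

model = models.resnet50(pretrained=True).eval()
example = torch.zeros([1, 3, 224, 224], dtype=torch.float32)

# Compile the model for Inferentia; operators Neuron can't handle fall back to CPU.
neuron_model = torch.neuron.trace(model, example_inputs=[example])
neuron_model.save("resnet50_neuron.pt")  # load this TorchScript artifact on an Inf1 instance
```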
2378.87 -> We’ve been delighted by the results
customers have achieved
2381.24 -> in migrating their large-scale
inference workloads to Inferentia.
2384.67 -> Amazon Alexa recently moved
their inference workload
2387.48 -> from Nvidia GPU-based hardware
to Inferentia based EC2 Instances
2391.88 -> and reduced costs by thirty percent
while achieving a twenty-five percent
2396.46 -> improvement in their
end-to-end latency.
2399.2 -> And as you can see,
many other customers
2401.42 -> are reporting great results.
2404.89 -> And while we’re excited
by the results customers
2407.07 -> are seeing with Inferentia,
2408.44 -> our investment in machine
learning chips is just beginning.
2411.55 -> Last week, Andy announced AWS
2413.48 -> Trainium, our second machine
learning chip.
2416.8 -> Like Inferentia
has done for inference,
2418.84 -> Trainium will provide the lowest cost
and highest performance
2422.12 -> way to run your training workloads.
2424.51 -> I’m looking forward to showing you
more technical details about
2427.18 -> Trainium next year.
2428.56 -> But today, I want to talk to you
2430.24 -> about our third area
of silicon investment, AWS
2433.13 -> Graviton.
2436.05 -> We introduced Graviton
a couple of years ago
2438.04 -> with the Graviton based A1 Instance.
2440.92 -> Our purpose with that instance
was to work with our customers
2443.89 -> and our ISV partners to understand
what they needed
2446.82 -> to run their workloads
on a modern 64-bit ARM processor.
2450.88 -> We learned a lot about how
to make it easy for customers to port
2453.75 -> and run applications on Graviton.
This year, we released Graviton2.
2460.4 -> I didn’t get a chance to tell you
about it at last year’s keynote
2464.25 -> but the good news is I can now
get into the details of Graviton2,
2468.05 -> how we designed it
and, more importantly,
2469.9 -> show you some of the amazing results
2471.96 -> that our customers are seeing
moving their workloads to Graviton2.
2477.06 -> With Graviton2,
we set out to design the best
2479.87 -> performing general purpose processor.
2482.32 -> And while we wanted
the best absolute performance,
2484.7 -> we also wanted the lowest cost.
Faster and less expensive.
2489.64 -> Having lofty goals for a
multi-hundred-million-dollar project
2492.48 -> like a new chip isn’t unusual.
2494.98 -> What’s unusual
is exceeding these goals.
2497.38 -> Graviton2 is the best performing
general purpose processor
2500.86 -> in our Cloud by a wide margin.
2503.95 -> It also offers
significantly lower cost.
2507.12 -> And as I will show you later
when I update you on sustainability,
2509.99 -> it’s also the most power efficient
processor we’ve ever deployed.
2514.34 -> Great results like this require
amazing execution.
2517.76 -> But they also require a clever plan.
2520.55 -> Taking the same path as every other
processor would not have delivered
2524.72 -> the type of performance
we’re seeing here.
2527.41 -> Our plan was to build a processor
that was optimized for AWS
2531.49 -> and modern Cloud workloads,
2534.4 -> taking full advantage
of the Nitro architecture
2536.87 -> that I talked about earlier.
2538.92 -> So, what do modern
Cloud workloads look like?
2542.73 -> Well, to understand that,
let’s start by looking at
2545.12 -> what a modern processor looks like.
Until about fifteen years ago,
2550.97 -> the main difference between
one processor generation
2553.26 -> and the next was the speed
of the processor.
2556.26 -> And this was great while it lasted.
2558.45 -> But about fifteen years ago,
this all changed.
2561.73 -> New processors continued
to improve their performance
2564.42 -> but not nearly as quickly
as they had in the past.
2567.13 -> Instead, new processors
started adding cores.
2571.01 -> And now, you can think of a core
like a mini processor on the chip.
2575.19 -> Each core on the chip
can work independently
2577.68 -> and at the same time
as all the other cores.
2580.13 -> And this means that if you can divide
your work up,
2582.33 -> you can get
that work done in parallel.
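A small, generic sketch (not from the talk) shows what dividing work up looks like in practice: each chunk below runs in its own worker process, so the chunks can execute on separate cores at the same time:

```python
from multiprocessing import Pool

def count_primes(bounds):
    """CPU-bound work: count primes in [lo, hi) by trial division."""
    lo, hi = bounds
    return sum(
        1
        for n in range(max(lo, 2), hi)
        if all(n % d for d in range(2, int(n ** 0.5) + 1))
    )

if __name__ == "__main__":
    # Split one big range into independent chunks, one per worker process.
    chunks = [(start, start + 250_000) for start in range(0, 1_000_000, 250_000)]
    with Pool() as pool:            # defaults to one worker per available CPU
        print(sum(pool.map(count_primes, chunks)))
```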
2584.56 -> Processors went from one core to two
and then four.
2587.72 -> The trend was obvious and exciting.
2590 -> So, how did workloads adapt
to this new reality?
2595.31 -> Well, the easiest way
to take advantage of cores
2597.53 -> is to run more independent
applications on the server
2600.72 -> and modern operating systems
have got very good at scheduling
2604.2 -> and managing multiple processes
on high core systems.
2607.97 -> Another approach
is multi-threaded applications.
2611.56 -> Multi-threaded applications
allow builders
2614.45 -> to have the appearance of scaling up
2616.6 -> while taking advantage
of parallel execution.
2619.39 -> Languages like Java make
multi-threaded programming easier
2623.21 -> and safer than the C++
I grew up with.
2626.33 -> But modern languages like Go, Erlang,
2628.45 -> and Rust have completely
changed the game
2630.62 -> for high performance multi-threaded
application development.
2633.91 -> To me, one of the most exciting
trends is the move to services.
2638.59 -> Service based architectures move us
from large monolithic applications
2642.76 -> to small, purpose built
independent services.
2646.33 -> This is exactly the type of computing
that containers and Lambda enable.
2650.26 -> Taken together, you can call
these trends scale out computing.
2656.43 -> And while scale out computing
2658.24 -> has evolved to take advantage
of higher core processors;
2661.39 -> processor designers have never
really abandoned the old world.
2665.36 -> Modern processors have tried
to have it both ways,
2668.95 -> catering to both legacy applications
and modern scale out applications.
2673.98 -> And this makes sense
if you think about it.
2675.95 -> As I mentioned,
producing a new processor
2678.01 -> can cost hundreds
of millions of dollars
2680.2 -> and the way you justify
that sort of large upfront investment
2683.55 -> is by targeting
the broadest option possible.
2686.63 -> The more processors you
ultimately end up producing,
2689.48 -> the less significant
that upfront cost
2691.3 -> is to each incremental
processor produced.
2694.09 -> So, modern many-core processors
have unsurprisingly tried
2697.82 -> to appeal to both legacy applications
and modern scale out applications.
2704.8 -> Processor designers
have been constantly
2706.69 -> adding functionality
to their cores for decades.
2709.64 -> With legacy workloads,
you need to assure
2711.76 -> that every core never stalls
while waiting for resources
2715.5 -> so you end up adding more
and more of everything
2717.86 -> and everything gets bigger.
2719.95 -> And somewhere along the way,
a funny thing started happening.
2723.52 -> Cores got so big and complex that it
was hard to keep everything utilized.
2728.58 -> And the last thing you want
2729.8 -> is transistors on your processor
doing nothing.
2734.26 -> So, to work around
this limitation,
2736.59 -> processor designers
invented a new concept
2738.82 -> called simultaneous
multi-threading or SMT.
2743.42 -> SMT allows a single core
to work on multiple tasks.
2747.77 -> Each task is called a thread.
2750.64 -> Threads share the core so SMT
doesn’t double your performance
2754.35 -> but it does allow you
to make use of that big core
2756.75 -> and maybe improves your performance
by twenty or thirty percent.
2760.63 -> But SMT also has drawbacks.
The biggest drawback of SMT
2766.61 -> is it introduces overhead
and performance variability.
2769.89 -> And because each core
has to work on multiple tasks,
2773.3 -> each task’s performance
is dependent
2775.52 -> on what the other tasks
are doing around it.
2778.35 -> Workloads can contend for the same
resources like cache space
2781.58 -> slowing down the other threads
on the same core.
2785.71 -> In fact,
workloads like video transcoding
2788.23 -> and high-performance computing,
which spend a lot of time
2791.51 -> optimizing their code for scale-out,
disable SMT entirely
2797.29 -> because the variability
introduced by SMT
2800.05 -> makes their applications
run less efficiently.
2803.56 -> And while you can turn off SMT,
you can’t reclaim the transistors
2807.71 -> that you used to put it in there
in the first place.
2810.68 -> And this means you’re paying
for a lot of idle transistors.
2815.82 -> There are also
security concerns with SMT.
2818.59 -> SMT is the main vector
that researchers have focused on
2821.54 -> with
so-called side channel attacks.
2823.95 -> These attacks try to use SMT
to inappropriately share
2828.34 -> and access information
from one thread to another.
2831.87 -> Now, we don’t share threads
from the same processor core
2834.89 -> across multiple customers
with EC2 to ensure customers
2837.91 -> are never exposed to these
potential SMT side channel attacks.
2843.83 -> And SMT isn’t the only way processor
designers have tried to compensate
2847.85 -> for overly large
and complex cores.
2851.05 -> The only thing worse
than idle transistors
2853.65 -> is idle transistors that use power.
2856.93 -> So, modern cores have complex
power management functions
2860.38 -> that attempt to turn off
or turn down parts of the processor
2863.83 -> to manage power usage.
2866 -> The problem is, these power
management features introduce
2870 -> even more performance variability.
2872.53 -> Basically, all sorts of things
can happen to your application
2875.25 -> and you have no control over it.
2877.17 -> And if you’re a system engineer
trying to focus on performance,
2880.12 -> this can be extremely difficult
to cope with.
2884.93 -> And in this context,
you can now understand
2887.28 -> how Graviton2 is different.
2890.02 -> The first thing we did with Graviton2
was focus on making sure
2893.3 -> that each core delivered
the most real-world performance
2896.93 -> for modern Cloud workloads.
When I say real-world performance,
2901.51 -> I mean better performance
on actual workloads.
2904.9 -> Not things that lead
to better spec sheet stats
2907.07 -> like processor frequency
or performance micro benchmarks
2910.68 -> which don’t capture
real-world performance.
2913.49 -> We used our experience
running real scale out applications
2916.97 -> to identify where we needed
to add capabilities
2919.59 -> to assure optimal performance
without making our cores too bloated.
2925.71 -> Second, we designed Graviton2
2928.13 -> with as many independent cores
as possible.
2931.17 -> When I say independent,
Graviton2 cores
2933.43 -> are designed
to perform consistently.
2936.42 -> No overlapping SMT threads.
No complex power state transitions.
2941.89 -> Therefore, you get
no unexpected throttling,
2944.15 -> just consistent performance.
And some of our design choices
2948.41 -> actually help us
with both of these goals.
2950.89 -> Let me give you an example.
2953.77 -> Caches help your cores run fast
by hiding the fact that system memory
2958.21 -> runs hundreds of times
slower than the processor.
2962.28 -> Processors often use
several layers of caches.
2965.6 -> Some are slower and shared
by all the cores.
2968.38 -> And some are local to a core
and run much faster.
2972.31 -> With Graviton2,
2973.58 -> one of the things we prioritized
was large core local caches.
2977.75 -> In fact, the core local
L1 caches on Graviton2
2981.44 -> are twice as large as the current
generation x86 processors.
2985.98 -> And because we don’t have SMT,
2988.51 -> this whole cache is dedicated
to a single execution thread
2992.22 -> and not shared by competing
execution threads.
2995.17 -> And this means that each Graviton2
core has four times the local
2999.05 -> L1 caching as
SMT enabled x86 processors.
3003.51 -> All of this means each core
can execute faster
3006.6 -> and with less variability.
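To make the arithmetic behind that claim concrete, here is a quick sketch using illustrative cache sizes (a 64 KB L1 per Graviton2 core versus a 32 KB L1 shared by two SMT threads on an x86 core; the exact sizes vary by processor and are an assumption here):

```python
# Back-of-the-envelope L1 cache available to each execution thread.
# Cache sizes here are illustrative assumptions, not official figures.
graviton2_l1_kb = 64   # per core; one execution thread per core
x86_l1_kb = 32         # per core; shared by two SMT threads

graviton2_per_thread_kb = graviton2_l1_kb / 1   # 64 KB per thread
x86_per_thread_kb = x86_l1_kb / 2               # 16 KB per thread

print(graviton2_per_thread_kb / x86_per_thread_kb)  # 4.0 -> the "four times" figure
```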
Okay.
3010.39 -> Now, hopefully you have a pretty
good idea of what we focused on
3013.21 -> when we designed and built Graviton2.
Let’s look at how things turned out.
3019.17 -> Here’s a view of how many
execution threads were available
3023.05 -> in the processors
3024.89 -> that we used to build
EC2 instances over the years.
3028.03 -> On the left you see our C1 instance
3031.16 -> which was launched with the processor
that had four threads.
3034.7 -> And on the right you see Graviton2
with its 64 execution threads
3039.07 -> which is used in the C6g.
Now, when you look at this graph,
3043.91 -> this looks like pretty
incremental progress,
3046.59 -> but remember, this view
is threads not cores.
3050.5 -> So, for most of these
processors we’re looking at,
3052.97 -> those threads have been
provided by SMT.
3056.08 -> Let me adjust the graph
and let’s look at real cores.
3061.18 -> Okay, now you see how Graviton2
really differentiates itself,
3065.14 -> that’s a significant
non-linear improvement.
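One way to redo that adjustment yourself is to divide vCPU (thread) counts by threads per core; the instance figures below are assumptions for illustration, not an official spec table:

```python
# Rough sketch: translating vCPU (thread) counts into physical cores.
# The instance figures are assumptions for illustration only.
instances = {
    "c5.24xlarge (x86, SMT on)": {"vcpus": 96, "threads_per_core": 2},
    "c6g.16xlarge (Graviton2)":  {"vcpus": 64, "threads_per_core": 1},
}

for name, spec in instances.items():
    cores = spec["vcpus"] // spec["threads_per_core"]
    print(f"{name}: {spec['vcpus']} threads -> {cores} physical cores")
```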
3068.24 -> So, let’s look at some benchmarks.
3072.53 -> Because Graviton2
is an Arm processor,
3075.38 -> a lot of people will assume
that Graviton2
3077.78 -> will perform well
at front-end applications,
3080.24 -> but they doubt it can perform
well enough
3081.9 -> for serious I/O intensive
back-end applications.
3085.37 -> But this is not the case.
So, let’s look at a Postgres database
3088.93 -> workload performing a standard
database benchmark called HammerDB.
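As a rough stand-in for this kind of experiment (using pgbench rather than the HammerDB TPROC-C workload discussed here, and a hypothetical database name), a sweep over client counts might look like this:

```python
# A rough stand-in for the scaling experiment described here: sweep client
# counts against Postgres with pgbench (a simpler benchmark, not HammerDB's
# TPROC-C workload). Assumes pgbench is installed and a database named
# "bench" was initialized once with:  pgbench -i -s 100 bench
import re
import subprocess

def run_pgbench(clients: int, seconds: int = 60) -> float:
    """Runs one pgbench pass and returns the reported transactions per second."""
    out = subprocess.run(
        ["pgbench", "-c", str(clients), "-j", str(clients),
         "-T", str(seconds), "bench"],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(re.findall(r"tps = ([\d.]+)", out)[-1])

if __name__ == "__main__":
    for clients in (1, 2, 4, 8, 16, 32, 48, 64):
        print(f"{clients:3d} clients: {run_pgbench(clients):10.1f} tps")
```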
3094.59 -> First we’re going to look
at the m5 instance.
3097.11 -> Now the smallest m5 instance
has two execution threads.
3101.1 -> It’s one core but two threads.
3103.36 -> Remember, I mentioned we don’t
share cores across customers.
3107.03 -> So, we can only scale
down to two threads.
3109.54 -> And our largest m5 instance
actually has 96 threads.
3113.78 -> But that’s actually two
processors on the same system
3118.44 -> and that’s going to cause
some problems.
3119.92 -> So, we’re going to start by
looking at
3121.47 -> just how this benchmark performs
on one m5 processor.
3128.4 -> Okay. Here you can see
we get pretty good scaling.
3132.05 -> As we add more threads things
improve almost linearly,
3135.6 -> not quite, but pretty close.
Okay.
3138.05 -> Now I am going to add
the larger m5 instance sizes.
3141.43 -> This is the threads
from the other processor.
3146.37 -> Okay. You can see right away
the scaling here isn’t nearly as good.
3149.55 -> And there’s a few reasons
for this flattening.
3152 -> But it mostly comes down
to sharing memory
3154.09 -> across two
different processors,
3155.96 -> and that sharing adds latency
and variability to the memory access.
3160.11 -> And like all variability,
3161.73 -> this makes it hard for scale-out
applications to scale efficiently.
3166.75 -> Let’s add Graviton.
3170.02 -> Here we can see the M6g
instance on the same benchmark.
3173.41 -> You can see that M6g delivers better
absolute performance at every size.
3177.91 -> But that’s not all.
3179.64 -> First you see the M6g scales
almost linearly all the way up
3183.38 -> to the 64 core
largest instance size.
3186.34 -> And by the time you get to 48 cores,
you have better absolute performance
3190.31 -> than even the largest m5 instance
with twice as many threads.
3194.64 -> And you can see M6g
offers a one core option.
3197.9 -> Because the M6g doesn’t have SMT threads
we can scale
3201.35 -> all the way down giving you
3202.79 -> an even more cost-effective option
for your smallest workloads.
3207.13 -> And for your most demanding workloads
the 64 core M6g
3210.85 -> instance provides over 20%
better absolute performance
3214.61 -> than any m5 instance.
But this isn’t the whole story.
3219.34 -> What we’re looking at here
is absolute performance.
3222.13 -> Things get even better when we factor
in the lower cost of the M6g.
3226.63 -> Let's look at that.
3229.62 -> Okay. Here you can see the biggest
and smallest M6g instance
3234.02 -> compared to the corresponding
m5 instance variants
3237.02 -> on the same cost per operation basis
for the benchmark we just looked at.
3242.16 -> You can see the larger sized
instances are nearly 60% lower cost.
3246.97 -> And because the M6g scales down
better than a threaded processor,
3251.09 -> you can save even more
with the small instance,
3253.47 -> over 80% on this workload.
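The cost-per-operation view is simply price divided by throughput; here is a sketch of that calculation with placeholder prices and throughput numbers, not the measured results behind the chart:

```python
# Cost per operation is just hourly price divided by operations per hour.
# Prices and throughput below are placeholders, not measured results.
def cost_per_million_ops(hourly_price_usd: float, ops_per_sec: float) -> float:
    ops_per_hour = ops_per_sec * 3600
    return hourly_price_usd / ops_per_hour * 1_000_000

m5  = cost_per_million_ops(hourly_price_usd=4.60, ops_per_sec=10_000)
m6g = cost_per_million_ops(hourly_price_usd=2.46, ops_per_sec=12_000)
print(f"m5:  ${m5:.4f} per million operations")
print(f"m6g: ${m6g:.4f} per million operations ({1 - m6g / m5:.0%} lower)")
```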
3257.62 -> Of course,
benchmarks are exciting
3259.64 -> but what’s really exciting
is seeing customers
3262.75 -> having success
using Graviton2.
3265.23 -> And the benchmarks
that really matter,
3267.36 -> customer workloads, are
showing the performance
3269.68 -> and price benefits
we expected.
3271.98 -> I got a chance to catch up
with one of those customers,
3274.43 -> Jerry Hunter,
who runs Engineering for Snap,
3277.22 -> about some of the benefits they are
seeing from AWS and Graviton2.
3280.98 -> Jerry actually
ran AWS infrastructure
3282.9 -> before he took his current job
at Snap about four years ago.
3286.37 -> So, it was fun
to catch up with Jerry.
3288.36 -> Let me share a little bit
of our conversation.
3293.78 -> Jerry, great to see you.
3295.48 -> When we talked about doing this
I thought we might be in Vegas
3298.7 -> and we might be able
to have a beer afterwards.
3300.67 -> Don’t think
that’s going to happen.
3302.09 -> But it’s still great
to catch up with you.
3304.17 -> Nice to be here.
3305.45 -> Awesome, well today,
I spent a little time
3307.68 -> talking about Amazon culture
so let’s start there.
3310.87 -> You were at Amazon
for almost ten years.
3312.71 -> Can you tell me is there
anything you learned at Amazon
3314.51 -> that you took with you to Snap?
3316.81 -> Yes, you know, I actually
think operational discipline,
3320.51 -> and I will call it
operational discipline,
3322.01 -> is the leaders being deep in
the details, both technically
3325.55 -> and operationally, of the space
that they are running.
3328.02 -> One of my favorite stories
is like when I first started at Snap
3331.23 -> we were trying
to understand cost.
3333.6 -> And as we grew,
our costs were going up.
3336.07 -> There was a tactic
that we used at AWS
3338.35 -> that I really liked
and that was understanding
3340.25 -> how to take the cost and associate
the unit of cost with the value
3344.23 -> you’re giving to your customer
so that unit cost is associated
3346.45 -> with what the customer
is buying.
3348.38 -> And it turns out that it not
only works inside of AWS
3351.03 -> but it works for people
that are using AWS.
3353.23 -> So, we look at the services
that we’re offering to our customers
3356.45 -> in terms of the value they get
and then scale it,
3359.63 -> aggregate all of those
different services
3361.36 -> we’re using to describe
the cost associated
3363.57 -> with the thing
we’re delivering.
3364.92 -> And then I hand it over
to the engineers
3366.85 -> and it gives
the engineers observability
3369.05 -> into how costs are being spent
3370.94 -> and where there is
an opportunity to tune costs.
3374.09 -> So that cost efficiency comes
straight out of the metric.
3376.4 -> And that’s turned out
to be a real help for us
3379.04 -> on our path to profitability.
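What Jerry is describing is a unit-economics metric: roll up the infrastructure spend and divide it by the unit of value delivered to customers. A minimal sketch with hypothetical numbers:

```python
# Minimal unit-economics sketch: aggregate infrastructure spend, then divide
# by the unit of customer value. Every number here is hypothetical.
monthly_spend_usd = {
    "ec2": 120_000,
    "dynamodb": 45_000,
    "cloudfront": 30_000,
    "s3": 15_000,
}
snaps_sent_this_month = 4_000_000_000  # the unit of value delivered to customers

cost_per_thousand_snaps = sum(monthly_spend_usd.values()) / snaps_sent_this_month * 1000
print(f"${cost_per_thousand_snaps:.4f} per thousand snaps")
```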
3380.88 -> Awesome. Well, you were one
of the best operators I know.
3383.99 -> So, it’s great to hear
you’ve taken that with you.
3386.83 -> But while we’re talking
about your time at Amazon,
3388.68 -> why don’t you tell us
about something you did here
3390.33 -> that maybe not everybody knows
about that you’re proud of.
3393.13 -> Yes, there’s a lot of stuff
I was proud of.
3395.28 -> There’s a lot of firsts,
but this one is really easy for me.
3398.42 -> When we worked
on renewable power
3400.04 -> I just … I still am satisfied
by the work that we did there.
3404.16 -> It was deeply satisfying
and here’s why. It was complicated.
3409.18 -> Laws that you have
to understand and follow
3412.2 -> for putting power
on the grid are byzantine.
3414.94 -> And building wind farms
and solar farms is new
3417.53 -> and it’s new technology and there’s
all these different ways to do it.
3420.73 -> And so, there was a lot of firsts
and it was really fun.
3424.21 -> I also think that there are things
I learned about renewable power
3427.51 -> that I think AWS knows now that would
be useful to the rest of the world,
3432.58 -> because it’s a powerful thing
to be able to deliver power
3437.07 -> and know that
it’s being done productively.
3439.57 -> Yes, well, we are definitely
appreciative of that early work
3442.61 -> you did with renewable power.
We’ve come a long way.
3445.57 -> But like anything, you build
on the success of the past.
3449.2 -> It’s actually a big part
of what the Climate
3450.86 -> Pledge is all about for us.
3452 -> It’s how we can help other companies
and work together
3456.02 -> and solve all of these problems
that we have to solve.
3457.95 -> So, I’m looking forward
to giving an update on that.
3460.74 -> But let’s get back to Snap.
So, tell me about Snap.
3464.2 -> Sure. Snap is the company
that built the Snapchat app.
3467.72 -> It’s the fastest and easiest way
to communicate
3469.72 -> with friends
and family through the camera.
3472.36 -> Every day 250 million people
around the globe use Snapchat
3477.32 -> to send 4 billion snaps,
that’s billion with a b,
3482.27 -> to either communicate, tell stories,
or use our augmented reality.
3488.6 -> And we care deeply about
our customers’ privacy.
3492.29 -> So, we have a privacy
first engineering focus
3494.58 -> and we do things like messages
and snaps that disappear by default.
3499.07 -> We have curated content
that comes from trusted partners
3502.19 -> rather than an uncurated
unmoderated newsfeed.
3505.94 -> And I think lastly,
we care deeply about innovation.
3509.43 -> Very exciting. So, tell me about
how Snap is using AWS.
3514.33 -> Well, we use tons… We use EC2
and Dynamo and CloudFront and S3,
3519.16 -> and we tried just about everything.
3521.13 -> And we use it because
it allows us to control costs
3525.81 -> and I don’t have to spend engineers
on building infrastructure.
3529.42 -> I can spend them on doing features
which is what allows us
3532.38 -> to provide value to our customers.
3534.64 -> And we get to use
new innovations from AWS
3537.66 -> like Graviton
and reduce cost,
3540.38 -> create better performance
for our customers
3543.07 -> with not a lot of energy.
3545.26 -> Awesome. Well, I am excited
to hear you using Graviton.
3548.45 -> One of the things that customers
always worry about
3551.38 -> is how difficult it’s going
to be to move to Graviton.
3553.99 -> Can you tell us what your experience
was there?
3556.45 -> Yes, we found it
pretty straightforward.
3558.96 -> The APIs are pretty similar
to what we were using before.
3561.77 -> So, it didn’t take a lot for us
to migrate our code over to test it out.
3566.22 -> We started trying it out with some of
our customers to see how it worked.
3569.24 -> We liked the results.
So, we rolled it out into the fleet
3572.39 -> and immediately got like a 20%
savings which is fantastic
3577.47 -> because like we were able
to switch this load over
3579.81 -> and immediately get that cost savings
and get higher performance.
3584.61 -> Awesome, glad to hear you’re getting
that value from Graviton.
3588.52 -> But what else besides cost and
performance do you value about AWS?
3592.5 -> Well, like when I was at AWS
I spent a lot of personal time
3596.84 -> thinking about how
to make things more secure.
3599.4 -> And I know that everybody
at AWS does that.
3601.86 -> It’s a huge point of value for us.
3603.47 -> As I’ve just said, we care deeply
about privacy and security,
3606.64 -> and that allows us to spend our time,
3609.62 -> my security team which I love,
they do a great job,
3612.33 -> focus on the part of the app
that we own.
3614.69 -> We don’t have to spend time worrying
about what’s happening in the Cloud
3617.73 -> because we know
it’s being done for us
3619.19 -> by people who are really excellent.
So, I personally appreciate that.
3623.17 -> It’s something that brings me comfort
at night when I go to bed.
3627.68 -> The other thing that I really like
is that AWS
3630.42 -> is in regions all over the world.
3632.68 -> You know, early days we had
our back-end as a monolith
3636.08 -> in a single region
in the middle of the country.
3639.17 -> And so, if you were in Frankfurt
for instance
3641.72 -> and you were communicating with
somebody that was also in Frankfurt,
3644.61 -> those communications had to travel
all the way back to the US
3647.32 -> through undersea cable,
blah, blah, blah,
3649.07 -> and make its way back to that person.
3650.71 -> Well, there’s a speed
of light thing there
3652.01 -> and so it could be clunky and slow.
3654.01 -> And this is a conversation;
if you’re not speaking quickly
3656.74 -> it doesn’t feel
like a conversation.
3658.57 -> So, bringing all of that
to a data center
3660.95 -> in, say, Frankfurt
or India or Sydney,
3663.96 -> gives us real-time access
3665.99 -> to that speedy tech
that our customers expect.
3669.28 -> Awesome. Well, sounds like
you’re making use of AWS,
3672.7 -> but what’s next for Snap?
3674.6 -> Well, there’s a ton of stuff
that we’re working on,
3676.46 -> but there’s two things
I care deeply about right now.
3679.59 -> The first is we’re on a path
to profitability
3682.56 -> and we’re making good progress.
3685.18 -> I could never make that progress if I
was building my own data centers.
3688.2 -> And so, it’s super-useful for me
to try things like we did
3691.26 -> with Graviton, turn them on
and find the immediate cost savings.
3694.43 -> So, I am pretty happy
about the path we’re on.
3696.12 -> I’m happy about the partnership
that we have and just getting there.
3699.92 -> And the second thing is AWS
keeps innovating
3702.91 -> and that lets us keep innovating.
3705.01 -> And I can’t hire enough people
to do all the innovation
3708.4 -> that I want which,
by the way, I am hiring.
3710.395 -> [laughs]
3713.07 -> But we test just about everything
that comes out from AWS
3716.53 -> and I look forward to continued
innovation from AWS
3721.72 -> because there’s a lot of innovation
you should expect
3723.85 -> to see coming out of our services
in the coming years,
3725.5 -> which I am not going to talk
about what it is,
3726.88 -> but I am very excited about it.
3728.53 -> And I am really looking forward
to our partnership together
3730.31 -> and delivering that.
3731.77 -> Well, I am disappointed that you
didn’t give away any secrets, Jerry.
3734.07 -> But I will have to leave it at that.
3737.25 -> I really appreciated this chance
to catch up and I am looking forward
3740.39 -> to when we can actually
see each other in person.
3743.11 -> Me too, counting on it.
3747.261 -> It was really nice catching up
with Jerry
3749.15 -> and hearing about the great work
he’s doing at Snap.
3751.83 -> And you heard us talk
about the early work
3753.73 -> that he did on renewable energy
many years ago.
3756.33 -> And now I am really excited to give
you an update on where we are now.
3760.05 -> Last year we announced
The Climate Pledge,
3762.62 -> which commits Amazon
and other signatories
3765.21 -> to achieve net zero carbon by 2040.
3768 -> Ten years ahead
of the Paris Agreement.
3770.53 -> The Climate Pledge is not just
one company’s climate commitments.
3774.586 -> It offers the opportunity to join
a community of leading businesses
3778.2 -> committed to working together
as a team
3780.5 -> to tackle the world’s
greatest challenge, climate change.
3784.04 -> Including Amazon, 31 companies
have now signed The Climate Pledge
3788.34 -> including Verizon, Rivian,
Siemens and Unilever.
3791.79 -> Today I am going to give you
an update on the investments
3794.47 -> we’re making in AWS
to support the Climate
3796.99 -> Pledge, as well as some of
our other sustainability efforts.
3800.56 -> Because power is such an important
part of AWS’s path
3803.91 -> to zero net carbon,
let’s start there.
3808.46 -> I’m going to give you an update
on our path
3809.97 -> to 100% renewable energy here.
3812.13 -> But first I want to talk
about efficiency.
3814.71 -> The greenest energy is
the energy we don’t use.
3817.5 -> And that’s why AWS has been
and remains laser
3820.42 -> focused on improving efficiency in
every aspect of our infrastructure.
3824.38 -> From the highly available
infrastructure that powers
3826.48 -> our servers, to the techniques
we use to cool our data centers,
3830.01 -> to the innovative server designs
we use to power our customers’
3833.13 -> workloads,
energy efficiency is a key part
3836 -> of every part
of our global infrastructure.
3838.97 -> We actually looked at
two innovations earlier today.
3843.58 -> First, we talked about how we removed
3846.13 -> the central UPS
from our data center design.
3849.07 -> What we didn’t talk about was how
this improved our power efficiency.
3855.16 -> Every time you have to convert power
from one voltage to another,
3858.38 -> or from AC to DC, or DC to AC,
you lose some power in the process.
3863.93 -> So, by eliminating the central UPS
3866.1 -> we were able to reduce
these conversions.
3868.6 -> And additionally,
we’ve spent time innovating
3871.07 -> and optimizing the power supplies
on the racks
3874.42 -> to reduce energy loss
in that final conversion.
3877.52 -> Combined, these changes reduced our
energy conversion loss by about 35%.
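Conversion losses compound multiplicatively, which is why removing a stage matters; here is a sketch with illustrative per-stage efficiencies (not AWS's actual figures) showing the shape of that calculation:

```python
# Chained power conversions multiply, so removing a stage compounds.
# The per-stage efficiencies below are illustrative, not AWS's measured values.
def delivered_fraction(stage_efficiencies):
    frac = 1.0
    for eff in stage_efficiencies:
        frac *= eff
    return frac

loss_with_ups    = 1 - delivered_fraction([0.98, 0.95, 0.96, 0.98])  # extra central-UPS conversions
loss_without_ups = 1 - delivered_fraction([0.98, 0.96, 0.98])        # fewer conversion steps

print(f"conversion loss with a central UPS:    {loss_with_ups:.1%}")
print(f"conversion loss without a central UPS: {loss_without_ups:.1%}")
print(f"reduction in conversion loss:          {1 - loss_without_ups / loss_with_ups:.0%}")
```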
3883.9 -> We also talked about Graviton2.
3886.3 -> And I mentioned it was
our most power efficient processor.
3889.39 -> In fact, Graviton2 processors
provide 2-3½ times
3893.65 -> better performance per watt of energy
use than any other AWS processor.
3899.95 -> This is a remarkable improvement
in power efficiency.
3902.66 -> With the world’s increasing need
for compute
3905.42 -> and other IT infrastructure,
innovations like these
3908.19 -> are going to be critical to ensuring
3909.62 -> that we can sustainably power
the workloads of the future.
3912.9 -> And AWS’s scale
and focus on innovation
3915.76 -> allow us to improve efficiency
3917.74 -> faster than traditional
enterprise data centers.
3922.15 -> According to a 451
Research study, the infrastructure
3926.07 -> that AWS operates is 3.6 times
more energy efficient
3929.98 -> than the median surveyed
US enterprise data center.
3933.76 -> And this study was based
on infrastructure
3935.87 -> before the innovations
I just talked about.
3938.21 -> So, this gap is only going to widen.
3940.39 -> And when you combine AWS’s
continuous focus on efficiency
3944.01 -> with our renewable energy progress,
3945.98 -> customers can achieve up
to 88% reduction in carbon emissions
3949.57 -> compared to using
an enterprise data center.
3952.4 -> So, let’s look at
that renewable energy progress.
3956.81 -> During my keynote in 2018,
3959.03 -> I showed you the large-scale
utility wind and solar projects
3962.18 -> that we were building
to power our data centers.
3964.47 -> At the time, we had added over
900 megawatts of new wind
3968.58 -> and solar in the United States.
3971.59 -> In addition, Amazon also
deployed solar on the rooftops
3975.5 -> of our sort centers and distribution
centers across the world.
3980.15 -> Last year we announced 14 new wind
3983.22 -> and solar projects
adding approximately 1300 megawatts
3987.95 -> including our first
renewable projects
3989.98 -> outside the United States,
in Ireland, Sweden,
3993.24 -> the United Kingdom,
Spain and Australia.
3996.34 -> But I also mentioned
that we were just getting started.
3999.98 -> So far this year we have announced
700 megawatts of new wind
4004.38 -> and solar farms including our
first renewable project in China.
4008.69 -> Even with the challenges of Covid-19
our projects in Sweden and Spain
4012.97 -> went into operation.
And today we have much more to show.
4019.97 -> We’re proud to announce
more than 3400
4022.85 -> megawatts of additional
renewable energy projects
4025.43 -> including our first projects
in Italy, France,
4028.15 -> South Africa, and Germany.
4030.31 -> These projects will bring
Amazon’s 2020 total buy to nearly
4034.24 -> 4200 megawatts of renewable power
across 35 wind and solar farms.
4040.12 -> Sound impressive?
It is.
4043.77 -> Amazon’s renewable energy
procurement in 2020
4046.52 -> is the largest by a corporation
in a single year,
4049.55 -> exceeding the record by 50%.
4052.3 -> It’s also a 300% increase over
the projects we announced last year.
4057.496 -> And this progress is a big part
of why we now believe
4059.95 -> we’re on track to hit our 100%
renewable energy goal by 2025.
4064.52 -> Five years ahead of our initial
target of 2030
4066.93 -> that we shared last year.
4070.43 -> Our focus on energy efficiency
and renewable power
4072.9 -> delivers significant progress
on AWS’s sustainability journey.
4076.7 -> However, to meet Amazon’s Climate
4078.83 -> Pledge commitment to reach
zero net carbon by 2040,
4084.61 -> we have to reduce
a broad category of emissions.
4087.22 -> And these are known
as Scope 3 indirect emissions.
4090.83 -> As that name implies
these emission sources
4093.85 -> are not directly controlled by us,
4095.89 -> but they still result
from our business operations.
4100.07 -> All businesses have
these sorts of emissions.
4102.73 -> And they include things like employee
travel and office expenses.
4106.48 -> For AWS, our largest source
of indirect carbon emissions
comes from constructing
our data centers
4111.83 -> and manufacturing
our hardware.
4113.99 -> Our sustainability,
engineering, construction,
4116.02 -> and procurement teams are hard
at work on these problems.
4121.09 -> For example, cement production
is responsible
4123.95 -> for 7-8%
of the world’s carbon emissions.
4127.19 -> Largely due to a process to make
a cement ingredient called clinker.
4133.26 -> The process to make clinker
was patented almost 200 years ago.
4136.86 -> And it’s still widely used
because it’s cheap, it's reliable
4140.32 -> and it produces
high quality concrete.
4143.09 -> Clinker is made by grinding limestone
and combining it with other materials
4146.79 -> and then processing it
at very high heat.
4149.63 -> This processing produces
large amounts of carbon emissions
4153.27 -> both from the burning of fossil fuels
to process it
4156.06 -> as well as the gasses released
from the chemical reactions
4158.92 -> during the production
process itself.
4162.84 -> And concrete is critical for
so many different types of building
4167.4 -> and infrastructure in modern life.
4169.59 -> Things like buildings and highways,
bridges, dams, schools,
4173.69 -> hospitals, concrete is everywhere.
4176.66 -> Most of the world’s
concrete production
4178.2 -> is used to produce
all of this infrastructure
4180.78 -> and only
a very small fraction
is used to build data centers.
4185.64 -> To help you understand the scale,
we estimate that all the concrete
4189.29 -> that AWS used to build data
centers last year
4192.41 -> is far less than the concrete
4193.9 -> used to build the foundation
of the Golden Gate Bridge.
4197.45 -> But while we’re a small part
of global concrete usage,
4200.65 -> we believe we can have an outsized
impact on solving a problem
4203.56 -> that the world
so desperately needs to address.
4206.31 -> So, what are we doing?
4208.99 -> While we can’t eliminate concrete
our scale
4212.04 -> enables us to help
drive industry change
4214.44 -> by creating demand for
more sustainable alternatives.
4218.15 -> In the near term, AWS
plans to increase
4220.95 -> the use of supplementary
cementitious materials
4224.38 -> in the cement
that we use for our data centers.
4227.01 -> These supplementary materials
replace the need
4229.84 -> for the carbon intensive clinker.
4232.55 -> One example would be using
recycled byproducts
4235.21 -> from other industrial processes
like manufacturing iron and steel.
4239.61 -> We expect that increasing the amount
of these replacement materials
4242.91 -> can reduce the embodied carbon
in a data center by about 25%.
4248.23 -> Longer term we’ll need solutions
beyond these current substitutes.
4251.94 -> And we’re working with partners
on alternative clinkers
4255.29 -> that are made
with different processes
4257.12 -> that result in lower emissions.
4260.49 -> AWS is also evaluating
and experimenting with technologies
4264.05 -> that produce lower carbon concrete
4266.23 -> by utilizing carbon dioxide
during the manufacturing process.
4270.58 -> One example is CarbonCure
which injects carbon dioxide
4274.68 -> into the concrete
during production.
4277.45 -> This process sequesters or traps
carbon dioxide in the concrete
4281.73 -> and it also reduces
the cement needed in the concrete,
4285.21 -> which further lowers the embodied
carbon in the concrete.
4289.17 -> CarbonCure is also one of
the first companies we invested
4292.11 -> in with The Climate Pledge Fund,
which we announced earlier this year.
4296.16 -> This fund with an initial
two billion in funding
4299.76 -> will invest in visionary companies
whose products and solutions
4303.05 -> will facilitate the transition
to a sustainable low-carbon economy.
4307.3 -> Amazon’s already
incorporating CarbonCure
4309.54 -> into its construction process
in HQ2 in Virginia.
4314.25 -> And this is just one example of how
our commitment to be net zero carbon
4318.29 -> will drive us to innovate
for a lower carbon future.
4322.84 -> For AWS, running our operations
4324.72 -> sustainably means reducing the amount
of water we use as well.
4329.46 -> Like concrete, data centers represent
4331.99 -> a tiny portion
of the world’s water usage.
4334.99 -> But our view at Amazon
is that we can punch above
4337.4 -> our weight class on these problems.
4339.67 -> In many regions AWS uses outside air
for cooling much of the year.
4344.24 -> And on the hottest days
when we do need to use water
4347.22 -> to help with that cooling,
4348.52 -> we’ve optimized our systems
to reduce this water usage.
4352.18 -> With our designs, even our largest
data center running at full capacity
4357.58 -> uses about the same water that
25 average US households would use.
4363.52 -> So, as part of this water use,
we also look for opportunities
4366.91 -> to return the water
that we do use to the communities.
4369.68 -> And I want to share with you
an example
4371.14 -> of how we partner to deliver water
to farmers in our US West region.
4377.574 -> [music playing]
4382.31 -> My name is Dave Stockdale.
4384.48 -> I’m the City Manager
for the city of Umatilla.
4386.9 -> We’re a small farm community
out here in Eastern Oregon.
4390.27 -> Four years ago, as we were looking
at the growth
4392.64 -> that AWS itself
brought to the community,
4395.03 -> we realized pretty quickly
we were going to exceed our capacity
4399.04 -> at our waste-water
treatment facility.
4400.87 -> We started looking
at other creative solutions.
4404.14 -> I’m Beau Schilz. I’m on the AWS
Water Team.
4407.16 -> One of the things we do here
is we treat the water
4410.18 -> before it runs
through our data center
4412.01 -> and then we’re able to use it
three or four times.
4414.34 -> And our cooling water is not dirty.
4417.56 -> It just didn’t make sense
to have clean water
4420.76 -> run through a very expensive
treatment process.
4423.61 -> Instead of it going
to a waste-water treatment plant,
4426.81 -> we put it into this canal,
4428.6 -> where it then goes to reach
the local community
4431.02 -> so it can be repurposed
for irrigation.
4433.51 -> In our US West Region here in Oregon,
4436.12 -> we reuse 96% of all the waste-water
we discharge from our data centers.
4441.95 -> I’m Vern Frederickson.
I’ve been growing hay
4444.16 -> and other irrigated crops
in this area for the last 35 years.
4447.96 -> Besides the land that we own, water
is one of the greatest assets
4451.23 -> that we have in this community.
We’re glad to see businesses like AWS
4455.32 -> giving water back
to the farming community.
4458.15 -> We’re very grateful to be able
to work with the City of Umatilla
4461.51 -> and the Port of Morrow,
4463.26 -> to enable this and to be good
water stewards in this community.
4467.26 -> Every time we reuse water it’s less
water we’re pulling
4470.01 -> from our rivers and streams.
4471.5 -> It’s good for the environment.
It’s good for the economy.
4474.11 -> It’s good for our community
as a whole.
4481.91 -> In addition to the water
we’re returning to communities,
4484.316 -> AWS is working on community water
programs all around the world.
4488.31 -> In 2018 many of you might know
that Cape Town, South Africa,
4492.02 -> nearly ran out
of fresh water.
4494.066 -> One of the problems
in Cape Town
4495.79 -> is invasive species that soak up
vast quantities of fresh water.
4499.96 -> AWS is funding projects to remove
these invasive species
4503.66 -> through a partnership led
by The Nature Conservancy.
4506.7 -> In addition to
4508.57 -> a data
center design in Cape Town
4511.03 -> that reduces water use,
4513.13 -> these efforts ensure
that we are returning
4514.76 -> far more water to the Cape Town
community than we use.
4518.31 -> We’re also working on watershed
restoration projects
4521.08 -> in Sao Paulo, Brazil.
4522.94 -> And Amazon is funding
water filtration,
4525.42 -> rainwater harvesting and ground
water recharge projects
4528.28 -> that will bring 250 million gallons
of water annually to
165,000 people in India and Indonesia.
4538.98 -> I hope that what I have shared today
gives you a sense of the depth
4542.3 -> and breadth of sustainability
efforts at AWS.
4545.39 -> Teams across AWS are working
to increase our efficiency,
4548.78 -> achieve 100% renewable energy
4551.19 -> and reduce carbon intensity
in our infrastructure.
4554.22 -> New technologies,
products and services
4555.97 -> are required to achieve our goal
4560.23 -> of net zero carbon by 2040.
4560.23 -> And we’re committed to working
with companies
4561.9 -> across many industries
to drive innovation,
4564.88 -> not just for Amazon
and the signatories of the Climate
4567.38 -> Pledge but for the world.
4571.12 -> I want to end today
with this exciting fact.
4573.75 -> As I mentioned earlier,
4574.86 -> Amazon has announced over 6.5
gigawatts of new renewable energy.
4579.66 -> With nearly 4.2 gigawatts
of this total coming in 2020.
4584.24 -> And as we announced this morning,
4585.7 -> this makes Amazon
the largest corporate procurer
4588.39 -> of renewable energy in the world.
4590.74 -> And as we like to say at Amazon,
it’s still day one.
4594.14 -> With that, thank you
for participating in this year’s
4596.89 -> very unique
re:Invent, stay safe,
4599.1 -> and I look forward
to seeing you in person soon.
4602.402 -> [music playing]
Source: https://www.youtube.com/watch?v=AaYNwOh90Pg