AWS re:Invent 2021 - AWS Networking: Making all workloads possible


AWS Networking helps enable you to run any kind of workload in the cloud. In this session, join Dave Brown, VP of Amazon EC2 Compute and Networking Services, to learn how this is possible. Dave reviews the progress AWS made in the last year across networking and content delivery solutions, which are designed to be the most secure, have the highest network availability, deliver consistent high performance, and have the broadest global coverage. Dave also discusses new capabilities and how AWS customers are using our networking services to build on the AWS comprehensive network for all workloads.

Learn more about AWS at https://bit.ly/3dJVcv4

Subscribe:
More AWS videos http://bit.ly/2O3zS75
More AWS events videos http://bit.ly/316g9t4

ABOUT AWS
Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts.

AWS is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

#AWS #AmazonWebServices #CloudComputing


Content

1.668 -> [music playing]
5.372 -> Please welcome Vice President of Amazon EC2, AWS, Dave Brown.
21.088 -> Well, hi, all.
22.122 -> And thanks very much for joining me here today.
24.758 -> It's great to be back after nearly two years of being at home
28.629 -> and having re:Invent virtual last year.
30.964 -> I'm thoroughly enjoying the time
32.466 -> that I'm getting to spend with customers,
34.334 -> learning about what they're doing
35.569 -> and what they've been building with AWS.
40.274 -> We've come through a lot in the last year
42.476 -> and the innovation in the networking space continues.
45.846 -> We continue to innovate for our customers
47.381 -> to ensure that they can build networks on AWS
50.851 -> focused on a couple of key areas.
53.62 -> First, it's being able to support you with the largest
56.023 -> and most scalable global network.
58.325 -> Secondly, we want to take a look at how do we make sure
60.894 -> that we give you the best network performance at all times.
64.565 -> Obviously in today's world the security
66.834 -> is something we're all thinking about
68.101 -> and I'll take a look at some of the network security work
69.736 -> that we've been doing as well.
71.905 -> Fourth, we'll look at a network for every single workload
74.141 -> and finally we'll look at what are we doing to bring AWS and AWS Networking
78.979 -> closer to you, wherever you may be in the world.
81.548 -> That's quite a bit to get going,
83.15 -> we're going to be jampacked for the next 60 minutes or so
85.385 -> and I'm looking forward to it.
86.62 -> So, let's get started with that largest
88.689 -> and most scalable global network.
92.492 -> You know, we've seen customers from all segments using AWS.
96.763 -> I was fortunate enough to join EC2 in about 2007 in our Cape Town office
102.002 -> and it was incredibly early days and back then you really needed,
105.973 -> you know, to have a PhD degree to do anything really useful with EC2.
110.01 -> It was very simple, we had a very simple network.
112.98 -> But today, you know, across all sectors,
115.148 -> we've seen incredible adoption.
116.817 -> In the start-up space we have start-ups like Pinterest and Redfin,
120.053 -> enterprises like General Electric, Intuit and Pfizer.
123.423 -> In the public sector we have customers
124.925 -> like the American Heart Association, FINRA
127.961 -> and the USDA, and software providers
130.264 -> and partners like Dedalus, Adobe and Accenture,
132.966 -> and it's a real privilege to get to work with all of these customers
135.903 -> and no matter what they are trying to do at incredible scale.
139.773 -> I wanted to highlight two very quickly,
141.675 -> the first of them is Slack.
143.544 -> I'm sure most of you have used Slack
145.546 -> or are using Slack on a day-to-day basis,
148.182 -> and I've had the privilege of working with the Slack team
150.851 -> as they've gone through their network journey on AWS.
154.321 -> Slack started out from a simple, single VPC
157.724 -> and today they run hundreds if not thousands of VPCs
161.495 -> across regions all over the world.
163.664 -> They use Transit Gateway heavily for that architecture
166.667 -> and every single Slack message
168.235 -> that you send travels through a Transit Gateway
170.771 -> somewhere within an AWS region.
173.674 -> The second customer we'll look at is Twilio.
177.744 -> Twilio has been native on AWS
179.613 -> almost right from the beginning of their journey
181.815 -> and they actually provide programmatic access for developers
184.785 -> to make phone calls and send and receive text messages
188.155 -> and to perform other communication functions.
191.158 -> Obviously in the last year with call center
193.76 -> needs just exploding the demand for Twilio services really expanded
198.899 -> and very often developers are making calls that span the globe –
201.869 -> your customer could really be anywhere.
203.837 -> Now, we've been privileged to work with Twilio
205.405 -> to provide them with low latency access
208.242 -> to the network no matter where they may be.
210.544 -> When you make a call on Twilio today, it actually travels across our AWS
214.181 -> Global Backbone to get to the location that it needs to be at.
219.286 -> That's where I want to start,
220.487 -> is looking at our AWS global infrastructure.
223.857 -> When I joined EC2 there was really just a single region
226.927 -> and that was our US-East-1 region,
228.762 -> but we've expanded that a lot in the last few years.
231.698 -> In the first five years of EC2 we added four regions.
235.502 -> We thought that was pretty impressive.
237.704 -> In the next five years we added seven,
240.274 -> and in the last five years or so we've added another 14 regions
243.744 -> and we have another nine regions ready for launch
246.18 -> that we've also already proactively announced.
248.582 -> All of this… we also, along with that,
250.884 -> have 275 CloudFront PoPs, points of presence,
255.322 -> where you can get access to our CloudFront CDN
257.791 -> in multiple countries around the world.
259.86 -> We also have over 100 direct connect locations
263.697 -> where you can bring a low latency, MPLS-like network connection directly into AWS.
269.937 -> And, finally, all of these locations are connected
272.973 -> with the AWS Global Backbone.
276.276 -> Every line that you see on that screen
277.978 -> is actually a piece of optical fiber –
280.113 -> very often 100 Gigabit optical fiber –
282.316 -> and many, many strands that are actually owned and managed by AWS,
287.354 -> and this spans the globe.
288.689 -> It's an incredible undertaking
290.39 -> and the growth of that global network has really been amazing.
295.095 -> Now we have to think several years out.
297.497 -> Often when you think about the cloud
299.299 -> you think about this idea of the illusion of infinite capacity,
302.769 -> that's something I probably say to my teams
304.371 -> once a week where we say "we have to provide our customers
307.474 -> with this illusion that we always have capacity for them."
310.577 -> It is an illusion and it takes an enormous amount of work
313.413 -> behind the scenes to make that happen.
315.549 -> I often think about that in terms of EC2 instances
317.951 -> but is also important in the AWS Global Backbone.
321.755 -> Here you can see a couple of locations
323.457 -> and those lines that are highlighted are actually trans-oceanic fiber
328.328 -> and cables that we put in place very often as part of a consortium.
332.199 -> We put the Hawaiki cable in from New Zealand, through Sydney
334.902 -> and it actually terminates up in Portland, Oregon.
337.571 -> We did the Jupiter cable that started in Singapore, terminated in Hong Kong
341.608 -> and then crossed the Pacific to Los Angeles,
345.479 -> and the MAREA cable which runs from the East Coast of the US to Europe.
349.55 -> And when we do those, we're actually looking a number of years ahead
352.819 -> because we've got to make sure that we always have enough
355.422 -> backbone capacity to carry your workload.
358.425 -> Very often customers say to me
359.893 -> "you don't tell us enough about the AWS Global Backbone"
362.763 -> and the reality is I don't really want to be talking to you
364.865 -> about it cause it should just work.
366.667 -> You should never have to worry
368.001 -> if we have enough capacity to carry your workload.
371.238 -> You know this literally is a ship that goes out to sea
374.308 -> with the cable behind it
375.809 -> and in the case of the Hawaii cable this is actually
379.046 -> 9000 miles long starting in Japan and terminating in Portland
383.116 -> and here you can actually see this is the coast of Japan
385.385 -> where they're actually putting this cable off the beach –
388.021 -> you can see how they dug up the sand there and obviously cleared
389.923 -> that, so at some point you won't have any idea
392.059 -> that a piece of optical fiber,
393.727 -> or very large cable with many strands of optical fiber, came from there.
397.264 -> So a lot of work that goes into that.
399.933 -> Our AWS regions, you've heard us speak a lot
401.735 -> about this over the years, we have never lowered the standard
405.405 -> for what it means to have a highly available AWS region.
409.209 -> Every one of our regions consists of multiple availability zones,
412.312 -> at least three, and in many cases we actually have regions
414.915 -> with many more than three availability zones.
417.818 -> Those availability zones, and very often a single availability zone,
421.154 -> actually consists of multiple datacenters.
424.057 -> And we think about the space between these datacenters,
426.293 -> they're sort of the Goldilocks zone as we call it.
428.629 -> We want those datacenters to be far enough apart
431.465 -> that they won't fail for the same reason at the same time,
434.201 -> but not so far apart that the network latency
437.137 -> actually exceeds about one to two milliseconds.
439.973 -> So customers are able to use multiple availability zones
442.342 -> for high availability without having to worry about the latency,
445.579 -> that they see and how that would affect their application.
449.116 -> The network behind these availability zones is critically important
452.986 -> and we actually have multiple redundant pairs of fiber
457.591 -> that run between all of these datacenters, and transit centers,
461.161 -> to ensure that there's always multiple paths
463.964 -> between any of these locations.
466.2 -> We have to obviously assume that failure could happen at any time
470.27 -> and it's always a little amusing to see
471.939 -> what actually caused a piece of optical fiber to break.
474.775 -> I can't tell you how many times I've seen backhoes
476.844 -> digging up fiber all around the world.
479.313 -> I even had a dumpster fire in Brazil
481.081 -> burn some optical fiber at the top of a telephone pole.
484.451 -> And so it's always interesting but we have to plan for it
487.855 -> and you'll see no customer impact from fiber cuts
491.425 -> because we've planned those alternative paths
493.493 -> and we really think about the network convergence time.
495.896 -> We actually do that failover at the optical level
498.832 -> so it's really just a handful of packets that may be dropped
501.668 -> when a piece of optical fiber is broken.
506.106 -> You know this year we actually just celebrated 15 years of Amazon EC2.
511.845 -> It's an incredible milestone,
513.213 -> it's an incredible journey that we've walked in that time,
516.583 -> and so I want to go back, in this talk today,
519.119 -> and look at some of the things that we learnt
521.522 -> over those years in the network and the journey that we went on.
525.692 -> Firstly, as I said, EC2 launched on August the 26th 2006.
530.364 -> We had no idea what we were building,
532.466 -> we certainly didn't think it would become what it has become today.
536.103 -> Our network at this stage was actually just a single flat subnet
540.24 -> and all of the instances actually only had public
543.177 -> IPs, there was no NATing, there was no private IP address,
547.08 -> we said "here's an instance on the internet",
550.384 -> that was what we called Direct Addressing.
554.021 -> That worked pretty well for the first couple of months.
557.357 -> We ended up adding private addressing,
559.059 -> which was a private IP address, we didn't have elastic
561.328 -> IPs at that stage in 2007, and I actually remember the day
565.399 -> that I interviewed at Amazon in Cape Town,
568.535 -> I walked into the office and one of the engineers –
571.138 -> somebody said hello to him and said "how are you doing?"
572.94 -> and he said "oh, I've been up all night
575.876 -> fighting with that network device" and what had happened is,
579.546 -> a little bit less than about six months,
581.381 -> less than a year after launching EC2
583.951 -> we started to see network devices at the edge of our network
587.387 -> starting to struggle.
588.555 -> And what they were struggling with was not necessarily
591.024 -> the data plane load,
592.492 -> they were struggling with the control plane load.
595.395 -> Within a traditional datacenter your networking team
598.131 -> is making a handful of changes
600.2 -> maybe every week or everyday normally done via command line.
603.904 -> In the cloud environment we provide API access
607.074 -> and we were launching instances every few seconds
610.077 -> and having to reprogram these devices with the NAT rules
612.579 -> that they needed at the time,
614.014 -> those devices just could not get keep up.
616.283 -> The control plane on there just hadn't been designed for it.
619.753 -> And so we knew that we had to think differently about
622.689 -> how we were building these network devices and deploying them.
626.193 -> And so that kicked off a project, and in early 2008
629.93 -> we actually added Elastic
631.164 -> IPs, which I'm sure many of you know and love today:
633.934 -> the ability to attach the public IP that you own to an EC2 instance.
637.838 -> The way we do that is actually just to
639.273 -> NAT that IP to your private IP internally.
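As a quick aside, what Dave describes here is essentially the same allocate-and-associate model that exists today. Below is a minimal, hypothetical boto3 sketch of it; the region, instance ID and other identifiers are placeholders, not values from the talk.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# Allocate an Elastic IP in the VPC scope.
allocation = ec2.allocate_address(Domain="vpc")

# Associate it with a running instance; behind the scenes AWS NATs this
# public IP to the instance's private IP, as described above.
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",          # hypothetical instance ID
    AllocationId=allocation["AllocationId"],
)
```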
643.41 -> We knew we had to build custom hardware,
645.646 -> we could see the rate of change that Elastic
647.614 -> IPs brought, and there was no way that standard devices out there,
650.651 -> network devices,
651.785 -> could keep up with that rate of change that the cloud had brought.
654.755 -> And so we actually designed our very first custom network device
658.792 -> which we call Blackfoot, named after a specific species of penguin in Cape
663.497 -> Town, obviously aligned with Linux as well and the Linux logo.
667.267 -> And it was a Linux server basically that just did basic
669.87 -> NAT translation with a number of networking cards.
674.908 -> You know, Blackfoot turned out to be a great success
678.078 -> and one of the things that proved that – there were really two things –
681.048 -> it really simplified the approach.
683.083 -> We said this device doesn't need to do all of the other stuff
686.153 -> that the alternative could do,
687.788 -> all it needs to do is NAT translation.
690.09 -> And we actually were able to achieve times of less than one millisecond,
692.86 -> and back in 2009 less than a millisecond for
695.629 -> NAT translation was pretty amazing
697.764 -> and actually better than anything else out there.
700.3 -> And the way I know it was really successful is we still use
703.103 -> Blackfoot today for every single packet that comes into EC2
707.841 -> and other parts of AWS as well.
709.977 -> And it was so successful they even named a building after it.
713.18 -> And so the main building for AWS is actually called Blackfoot today.
719.286 -> In 2010 we realized that our custom hardware was obviously the way to go
723.023 -> and we had started to see other challenges with network
726.493 -> scalability in other places,
727.761 -> and the one we started to look at then was our top-of-rack switches.
731.798 -> Every single EC2 rack within our datacenters
733.834 -> obviously contains a number of servers,
735.769 -> but is connected to the rest of the network using the top-of-rack switch.
739.84 -> And so we actually formed one of our famous two pizza teams.
743.443 -> Now in South Africa the pizzas are a lot smaller
745.379 -> so those teams tend to be fairly small.
747.748 -> Yeah, that's about 8 to 12 people with the American sized pizzas.
751.385 -> But we had a small team and we gave them about 10 months
753.82 -> to actually go and build a new top-of-rack switch from scratch
757.925 -> – and they were able to deliver something.
759.66 -> I mean there are a couple of things, you know, that really allowed us
761.828 -> to innovate rapidly in the custom hardware,
765.232 -> custom network hardware space,
766.433 -> so some things we really hold true today.
768.769 -> The first thing is we use an incredibly simple design.
771.705 -> You know, we want to make sure that these devices
774.107 -> do what we need them to do and absolutely nothing else.
777.477 -> We don't want any complexity, we don't want any code paths
780.514 -> that we don't use frequently. We keep them incredibly simple.
784.251 -> We also design for high availability.
787.487 -> We want to make sure that not only are the devices themselves
790.424 -> highly available with very, very low annual failure rates
793.427 -> – and we've been able to achieve numbers there
795.095 -> that we didn't think would be possible.
796.997 -> But we also want to make sure that when we actually make changes
798.866 -> to these devices that we do that in a very consistent way.
802.669 -> One of the things we have to think about in the datacenters
804.404 -> is network convergence,
806.039 -> if I'm rebooting a device my network is going to converge and customers
809.176 -> are going to see periods of connectivity issue.
811.445 -> That's not okay on AWS and we don't have that today.
814.414 -> The way we get around that is every single time
816.617 -> we make change to a device,
818.151 -> we refresh the operating system completely –
820.554 -> before that we've taken it out of service – we refresh it
823.19 -> and we put it back into service.
824.858 -> It's exactly the same workflow as if I've added a new device
827.394 -> to the network and that eliminates the problem of convergence times.
831.632 -> Also because we build all this hardware ourselves
833.734 -> from custom point of view,
835.169 -> we have complete control of both the hardware and the software
838.172 -> and that allowed us to lower costs in the network,
841.375 -> improve our security and improve our reliability.
844.344 -> When an issue happens I'm able to see the code and fix it myself,
847.548 -> and it's obviously given us a big performance boost.
851.919 -> There are a number of things we do as well that are kind of funny
854.555 -> and just very practical.
856.456 -> Obviously over the years
857.658 -> we've built many different versions of this hardware
860.727 -> and when a top-of-rack switch fails, a Data Tech has to go to a store,
865.132 -> check out a new top-of-rack switch and go and replace it.
868.268 -> And one of the problems we had was Data Techs would often go
871.004 -> and choose the wrong switch because they all looked kind of similar
873.106 -> and we said "well these aren't really devices
875.209 -> we're trying to sell to anybody,
876.777 -> so let's just make them all a different color."
878.979 -> So if you go into our data centers today
880.848 -> we have red ones and green ones and blue ones and orange ones.
883.917 -> But when we actually have these made, we get sent a Pantone color palette
887.955 -> by the manufacturer that's going to bend the sheet metal for us.
891.325 -> And we say… we go and pick the color and we send it back to them
893.493 -> and they've actually come back to us and said
895.229 -> "here's a sample piece of sheet metal,
898.465 -> is this okay with your design?"
900.4 -> And when we see it we can't really tell –
902.436 -> is that color called Magical Blue?
905.405 -> If the color is called magical blue we're good to go.
907.908 -> If it's not, well, we're going to have to rework it.
909.476 -> So we were picking the names, not the colors.
911.712 -> It works very well for us.
914.615 -> So our first top-of-rack switch could do 10 gigabits per second.
919.386 -> We very quickly moved on to be able to support 100 gigabits
921.889 -> and today we can support 400 gigabits at our top-of-racks.
926.026 -> With one of our instances and the network that we had,
929.363 -> we could support 460 terabits of networking capacity
933.166 -> with a one-way latency of 12 microseconds.
935.068 -> That was back in 2013.
937.437 -> In our innovation of the top-of-rack switch
938.906 -> we've actually increased that now to be able to do
941.208 -> 10,000 terabits per second, or ten petabits per second,
944.945 -> and latency is all the way down to about 7 microseconds.
947.948 -> That's almost half the latency that we had in 2013.
952.519 -> It's incredible progress that we've made.
955.989 -> Today we monitor over 11 trillion events across the network.
961.695 -> With millions of network devices spread across all of our regions
964.998 -> we have to make sure that no device causes customer impact
968.869 -> and that includes hard failures where a device is clearly down,
972.806 -> but also the infamous grey failure
974.675 -> where it might be doing something to the packets
976.376 -> that's incredibly difficult to monitor.
978.545 -> The way we do this is all of our monitoring is end to end.
982.149 -> We do have a lot of monitoring obviously at the device level
984.151 -> but really making sure that packets can traverse from the source
987.654 -> to their destination with no problems
990.157 -> is what we're looking at, with 11 trillion different
992.86 -> monitoring metrics coming out across the AWS network on a daily basis,
997.664 -> all monitored at the second granularity.
1001.869 -> You know, it's this investment in building at a global scale
1006.24 -> and building a network that can scale that really allowed us
1009.977 -> to be able to handle what we saw during Covid-19.
1014.314 -> In a matter of days we saw our network traffic increase by 400%
1019.386 -> when Italy was locked down for Covid-19.
1023.156 -> There's no way that any planning over a few days
1025.826 -> could have solved that problem,
1027.194 -> you had to be ready for that sort of scale
1029.763 -> and planned many, many months if not years in advance.
1033.233 -> We also had customers such as Zoom and Peloton and Robinhood
1037.104 -> that have seen tremendous growth either due to the pandemic
1040.607 -> or due to an IPO, and we've really been able,
1043.01 -> from a network scaling point of view,
1044.978 -> to support them, and it's been exciting to see.
1049.816 -> Next I'd like to take a look… if we can just go to the next slide.
1057.791 -> Okay, let's go into the next one which is our highest performance.
1062.496 -> You know, we've invested an enormous amount over the years.
1066.466 -> I remember when we released our very first EC2 instance,
1069.036 -> you know, you would see latencies of 200 to 300
1071.505 -> microseconds within the network
1073.707 -> and that was the virtualization engine –
1075.909 -> obviously the hypervisor was adding a whole lot of latency
1078.378 -> with Xen back in the day – and very quickly we realized
1081.715 -> that for us to really get the scale and the performance that we needed,
1084.952 -> we would have to think very differently
1086.486 -> about how we build our servers.
1088.622 -> And we started a journey in rethinking hypervisors
1092.125 -> to really lower the latency
1094.595 -> and also the jitter to make sure that the latency was low
1097.531 -> and consistently low for our customers,
1100.701 -> and the network was the first place we started in this journey.
1104.104 -> In about 2013 we actually shipped our first instance
1107.875 -> that we call Network Optimized, it was our C3 instance,
1110.978 -> and it actually used a network offload card,
1112.88 -> which was the first Nitro card with an ARM-based chip on that card,
1116.884 -> where we actually did all of the network processing on that card
1121.021 -> instead of using the CPU.
1123.457 -> And over the years we've actually transferred all of our computing
1127.194 -> that we need to do as a cloud provider,
1129.062 -> whether that's the security that we need to process on every packet,
1131.698 -> whether that's the networking, whether that's the storage,
1134.368 -> whether it's all the management, the billing,
1135.836 -> everything that happens behind the scenes – we've taken 100%
1139.072 -> of that away from the central Intel, AMD or Graviton processor.
1143.644 -> And so, today, we're the only cloud provider that's actually
1145.779 -> able to give you 100% of that CPU and 100% of the system memory
1151.418 -> and 100% of the storage because we run in a separate system.
1155.956 -> Now you've actually seen this innovation that we've done
1157.925 -> with Nitro in the instance level bandwidth.
1160.494 -> How much bandwidth are we providing at a single instance level?
1164.031 -> I mean back in the day when we launched our first instance
1166.033 -> we actually provided 1 Gigabit of networking,
1168.168 -> which is pretty amazing back then,
1169.903 -> and you can see how the baseline networking
1171.572 -> has increased over the years.
1173.373 -> As I said 2013 was our first instance that actually was network optimized,
1178.712 -> in 2019 we increased our baseline to 25 Gigabits per second and in 2021
1183.784 -> we hit 50 Gigabits per second on our latest Ice Lake
1187.955 -> and Milan processors, our sixth generation.
1193.327 -> We also have network optimized instances where we've actually…
1196.163 -> we're the first cloud provider to provide
1198.031 -> 100 Gigabits of Ethernet connectivity.
1201.235 -> We had some instances that provided
1202.703 -> 400 Gigabits of Ethernet connectivity and yesterday
1206.073 -> at re:Invent we launched our first instance,
1208.175 -> the Trn1 instance, used for machine learning training.
1211.378 -> It actually provides 800 Gigabits of networking –
1215.983 -> and we won't be stopping there by the way.
1221.088 -> Customers have sometimes asked us,
1222.589 -> I actually saw a tweet just three days ago
1225.158 -> that said "can I get more than five Gigabits
1227.995 -> of network connectivity to the Internet from Amazon EC2"
1231.698 -> and the answer, today, is absolutely yes.
1234.401 -> And we've increased the outgoing bandwidth from EC2 instances
1237.571 -> to 50 Gigabits per second on our latest instance types
1241.441 -> and so now you're able to get higher bandwidth
1243.11 -> both between applications running in multiple regions –
1245.746 -> so inter-region data transfer.
1247.614 -> You're also able to get that between the instance and the Internet
1250.384 -> and we also provide that for data transfer
1252.819 -> between the instance and your on-premises location
1255.455 -> via AWS Direct Connect.
1258.725 -> AWS Direct Connect, as I said earlier,
1260.294 -> provides you with MPLS-like connectivity to AWS regions
1264.898 -> from an on-premise location via a Direct Connect location.
1269.203 -> And Direct Connect started out with 10 Gigabit connections,
1272.472 -> you could get 10 Gigabits of connectivity to AWS –
1275.542 -> that was the largest size.
1277.077 -> A couple of years ago we launched support to be able to bring multiple
1281.982 -> 10 Gigabit connections together as a single connection to AWS,
1285.586 -> but you still had to manage those connections separately.
1288.488 -> And today I'm happy to announce the availability of 100 Gigabits
1291.825 -> per second connections via Direct Connect to AWS.
1296.496 -> This is full 100 Gigabit port connectivity,
1299.8 -> you're getting access to that full port, it's not shared
1302.87 -> and it's also not bringing together a number of 10 Gigabit ports.
1307.074 -> It's a single port.
1308.709 -> We also provide security and privacy through MACsec
1312.312 -> encryption that's available on those ports as well
1314.781 -> if you want to ensure those links are encrypted.
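For readers who want to see what requesting one of these connections might look like programmatically, here is a hedged boto3 sketch; the location code and connection name are hypothetical placeholders, and in practice the physical cross-connect and MACsec key setup are completed separately.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")  # hypothetical region

# Request a dedicated 100 Gbps Direct Connect port, asking for MACsec
# capability so the link can later be encrypted at layer 2.
connection = dx.create_connection(
    location="EqDC2",                    # hypothetical Direct Connect location code
    bandwidth="100Gbps",
    connectionName="dc-to-aws-100g",     # hypothetical connection name
    requestMACSec=True,
)
print(connection["connectionState"])     # e.g. "requested"
```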
1317.251 -> We work with a lot of customers that do an enormous amount of bandwidth
1321.421 -> and one of those customers is ByteDance.
1324.825 -> They have a number of social media applications
1327.895 -> that just generate an enormous amount of content,
1331.365 -> and so these 100 Gigabit network connections on Direct Connect
1334.902 -> are something that they've used to scale,
1337.037 -> and you can see… they said, to support
1338.906 -> ByteDance applications we need high speed, low latency
1341.842 -> connections capable of sending exabytes of data around the world,
1345.879 -> and we're very excited to be able to work with them
1347.681 -> and support their incredible applications.
1351.451 -> If you think back to your first-year networking course at college
1355.822 -> one of the things you learned about was TCP congestion.
1359.026 -> You know traditional TCP routing
1360.894 -> does not effectively use all available network capacity
1363.864 -> and can often lead to network congestion,
1366.033 -> and that's something we've been looking at within our datacenters.
1368.569 -> But we knew we must be able to get more out of the capacity
1371.371 -> we have available and not be limited by what a single TCP flow can do.
1376.777 -> We developed a protocol a few years ago
1378.278 -> called Scalable Reliable Datagram,
1380.514 -> which we've spoken about previously, and this is actually an Ethernet
1383.383 -> based protocol that we developed internally.
1385.953 -> We looked at protocols like InfiniBand
1387.788 -> and other low latency network protocols,
1389.523 -> but we thought, you know, we had such an investment in Ethernet
1392.492 -> connectivity that we really wanted to make sure we doubled down on that
1395.329 -> and see if we could innovate to get around some of the challenges
1398.165 -> that TCP congestion brings.
1400.234 -> One of those challenges is TCP requires that packets arrive in order
1404.404 -> and if a packet arrives out of order, well what TCP will do
1406.84 -> is it will hold up the packets and say
1408.575 -> "I missed the packet can we please have a retransmit"
1411.178 -> and that's a lot of latency to add and it also means that your flow
1414.748 -> actually has to follow a common path through the network.
1418.719 -> And so we delivered Scalable Reliable Datagrams
1420.754 -> so that it actually doesn't require 'in order' packet delivery.
1423.357 -> We don't mind anymore in what order those packets arrive
1425.492 -> and we rely on the layers in the Nitro system above the data plane…
1428.695 -> above the data layer to make sure
1430.13 -> that we're actually reconverging those packets in the right way.
1432.966 -> We've also shipped Elastic Fabric Adapter, or EFA,
1434.935 -> which uses SRD as the underlying protocol, for network intensive applications.
1439.54 -> So when you use Elastic Fabric Adapter with an EC2 instance today,
1443.01 -> you're able to get latencies as low as 15 – that's 1-5 –
1447.181 -> microseconds between EC2 instances.
1449.082 -> It's incredibly popular in both the high performance computing space
1452.519 -> and the deep learning and machine learning spaces today.
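As an illustration of how you would opt into EFA (and therefore SRD) today, here is a minimal boto3 sketch; the AMI, subnet, security group and placement group names are hypothetical, and the instance type is just one example of an EFA-capable type.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# Launch an instance with an EFA network interface so inter-instance traffic
# can use the SRD protocol, typically inside a cluster placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # hypothetical AMI
    InstanceType="c5n.18xlarge",                  # an EFA-capable instance type
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "hpc-cluster"},       # hypothetical cluster placement group
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",   # hypothetical subnet
        "Groups": ["sg-0123456789abcdef0"],       # hypothetical security group
        "InterfaceType": "efa",                   # request an EFA interface
    }],
)
```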
1455.989 -> Let's take a look at how that works.
1457.324 -> So when a network flow is routed through the network,
1459.693 -> typically it will choose a set of routers
1461.995 -> and all packets in that flow, based on how it hashes
1464.198 -> the source IP and destination IP and ports,
1466.733 -> will route along those same paths.
1468.268 -> But you can see there's routers in the network
1470.137 -> that aren't actually being utilized effectively for that flow,
1472.739 -> and there's effectively capacity available
1475.175 -> that we're not using for that flow today.
1477.244 -> The other problem you see there is a few of the routers
1479.112 -> are actually seeing multiple flows.
1480.614 -> As our flows get larger we might have routers
1482.783 -> that actually become overloaded.
1484.551 -> And so with Scalable Reliable Datagram
1487.354 -> we're actually able to send those packets,
1489.59 -> even for a single flow, through multiple routers.
1492.025 -> It no longer matters in what order they actually flow
1494.995 -> and so the results of this mean you just see far less congestion,
1498.265 -> you have much better utilization of your network and for you,
1500.834 -> as a customer, you see significantly lower latencies.
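To make the routing difference concrete, here is a purely illustrative Python sketch (not AWS code): classic ECMP hashes the 5-tuple once, so every packet of a flow takes one path, while an SRD-style approach sprays the packets of a single flow across many paths and leaves reordering to a higher layer.

```python
import hashlib

def ecmp_path(src_ip, dst_ip, src_port, dst_port, proto, n_paths):
    """Classic ECMP: hash the 5-tuple once; all packets of the flow
    then follow the same path through the network."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n_paths

def srd_style_paths(n_packets, n_paths):
    """Simplified SRD-style spraying: packets of one flow are spread across
    all available paths, and arrival order no longer matters."""
    return [i % n_paths for i in range(n_packets)]

# One flow over eight equal-cost paths:
print(ecmp_path("10.0.0.1", "10.0.1.1", 40000, 443, "tcp", 8))  # always the same path
print(srd_style_paths(10, 8))                                    # packets fan out across paths
```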
1504.838 -> One of these customers was Fox Sports,
1507.608 -> they like to do live broadcasts.
1509.443 -> If any of you do live broadcasts you know how incredibly stressful
1513.113 -> that can be, you don't get a second chance
1514.948 -> at a live broadcast and what Fox had been trying to do
1517.417 -> was to see whether they could move their live streams
1520.42 -> and their live broadcasts to the cloud
1522.356 -> and they haven't been able to do that with any other cloud provider.
1525.826 -> And they actually need latencies as low as 16.6 microseconds per frame
1530.264 -> to make sure that they can effectively broadcast a 4K stream.
1533.567 -> And with EFA and now SRD protocol we've been able to provide Fox Sports
1539.306 -> with access to being able to live broadcast directly from AWS.
1545.712 -> Coming soon, in 2022,
1548.415 -> we are bringing the power of SRD not only to HPC applications
1551.852 -> or for machine learning or specific use cases,
1553.954 -> we're going to be integrating SRD deeply into the Nitro system
1558.091 -> and so into the… the VPC protocol.
1560.794 -> And so every single instance that communicates
1563.263 -> either to the internet or between instances
1565.299 -> is actually going to start to see lower latencies.
1567.768 -> We expect a 300% increase in single flow bandwidth
1571.104 -> and up to 90% reduction in tail latencies.
1574.441 -> And so hopefully we're going to see… you'll see that rolling out next year
1577.544 -> and really should improve and lower the latencies
1579.613 -> that you're seeing across your applications.
1583.217 -> Let's take a quick look at security.
1588.121 -> Here we started off with this slide about our Global Backbone,
1590.691 -> and I mentioned it earlier, but every single packet
1593.794 -> that flows across this Global Backbone is actually encrypted.
1596.897 -> We don't have any flows that happened between regions
1599.867 -> that aren't encrypted at the edge.
1601.735 -> We also encrypt everything on the wire,
1603.437 -> we don't do it on a per customer level,
1605.439 -> which means that your traffic is just lost in the noise.
1607.708 -> Not only is it lost in the noise it's also encrypted,
1610.01 -> which means that it's very difficult to either decrypt or do
1613.413 -> any sort of sniffing or interpretation
1615.382 -> of what the traffic could be.
1617.284 -> Within the region traffic between availability zones
1620.02 -> and data centers is also encrypted at the line level.
1624.558 -> Cross-region peering is also encrypted on top
1628.061 -> of what's provided natively at the line level.
1629.696 -> We do additional encryption there.
1631.598 -> We have encryption between instances for any of our newer Nitro instances,
1635.969 -> so any of the fifth generation instances
1638.105 -> that have an 'N' in the name, as a suffix,
1640.941 -> or any of our sixth generation instances
1643.01 -> and every single Nitro instance going forward,
1645.746 -> would support native network encryption within the VPC.
1649.283 -> Nothing that you need to do.
1650.784 -> We obviously always recommend you do use HTTPS as well.
1653.887 -> Then we also have IPsec on VPN tunnels
1656.323 -> and MACsec on Direct Connect to your data center, and Elastic Load
1659.893 -> Balancing provides easy encryption of any application traffic using TLS.
1667 -> The other thing we've had to think about is how do we make sure
1669.603 -> that this encryption is not only good today,
1672.339 -> but with the imminent arrival in the next 10 years
1675.576 -> or so of quantum computers,
1677.644 -> which have significantly more processing power,
1680.647 -> there's a chance that somebody could capture traffic today
1683.016 -> and be able to decrypt that in the future.
1685.452 -> And so to protect against that,
1686.753 -> all of our encryption algorithms make use of AES-256 encryption
1691.425 -> to ensure that we're actually quantum safe and reduce
1694.161 -> the risk of somebody recording traffic today and, in 20 or 30 years' time,
1698.198 -> being able to do something with it.
1704.872 -> You know, one thing you often see when I talk to customers,
1707.241 -> especially when they've migrated to the cloud,
1709.009 -> is they have some on-premise devices that they absolutely love.
1712.613 -> They've been using them for years, they've worked incredibly well
1715.349 -> and they want to be able to use them within AWS.
1718.051 -> Now the good news is most of these providers have taken their appliances
1722.322 -> and actually virtualized them and made them available in the cloud,
1725.325 -> and many of you are using things like Palo Alto Networks
1728.095 -> firewalls, Aviatrix, Cisco routers, NetScout, Trend Micro.
1732.699 -> Many of them are security devices providing security that you've used,
1736.703 -> maybe have compliance rules to use certain ones
1738.872 -> and you want to be able to use them in AWS.
1740.874 -> And these are actually all available in the AWS Marketplace.
1744.244 -> This is actually the largest category today of any type of application
1748.348 -> in the AWS Marketplace.
1750.05 -> You know one of the challenges of deploying these though
1752.653 -> is that they're actually a single device,
1754.588 -> so you actually have to launch an EC2 instance that's running the firewall
1758.659 -> that you want to use and put it into your network and route traffic to it.
1762.596 -> Obviously being a single instance
1764.264 -> it could experience some sort of failure,
1767.334 -> maybe that failure is the instance itself
1768.936 -> actually having some sort of technical problem,
1770.437 -> or it could be that the traffic has increased and it doesn't scale.
1774.107 -> This is something that customers really struggled with
1776.577 -> for a long period of time, is
1777.845 -> 'how do I scale these devices and how do I make sure that,
1781.682 -> if they fail, I'm not going to see problems.'
1784.551 -> And a few months ago we actually launched a Gateway Load Balancer.
1787.454 -> This is a new Load Balancer as part of the Elastic Load
1789.556 -> Balancing fleet that allows you to run these network
1793.794 -> appliances within the transit path and automatically scale them
1797.564 -> horizontally as the traffic increases,
1799.933 -> but also makes sure that if any one of them fails,
1802.703 -> it will automatically migrate those network flows to other devices
1806.673 -> and then ensure that device is replaced.
1809.109 -> Gateway Load Balancer actually works at layer three
1811.645 -> and, as I said, manages all of those flows
1814.147 -> so that even if the hardware provider or the appliance vendor
1817.384 -> hasn't thought about how they should manage state between these devices
1820.254 -> – in many cases they haven't, they've just built a single device –
1823.056 -> we actually take care of that state management by ensuring
1825.492 -> that flows travel to these devices correctly.
1828.829 -> Let's take a look at how that works.
1830.23 -> And so now we've got the same deployment,
1831.632 -> you can see we've added a Gateway
1832.799 -> Load Balancer there and now we've taken our firewall
1835.135 -> and actually added three of those behind the Gateway Load Balancer.
1838.805 -> Now if any one of those firewall devices or instances experiences
1841.942 -> a problem, as I said, Gateway
1843.61 -> Load Balancer actually will adjust the flows.
1846.113 -> Now importantly it doesn't do an ECMP flow rehash type algorithm,
1850.05 -> because that will break all the flows
1851.318 -> that are going through the Load Balancer,
1852.719 -> it only affects those that are actually affected by the device
1855.689 -> that's actually failed.
1857.024 -> And they will be merged onto the other ones
1858.725 -> and, ultimately, a new device will be brought on board.
1861.762 -> The other thing we looked at doing is how do we make it really simple
1864.264 -> for you to actually put these devices in the correct place in the network.
1869.803 -> And many of you, I'm sure, have built all sorts of proxies
1873.54 -> and very clever solutions to try and do this in the past,
1876.677 -> but earlier this year we actually launched North-south traffic support,
1879.88 -> or we call it ingress routing, that allows us to basically say
1883.55 -> that any request going to or from the internet
1886.72 -> should flow through a Gateway Load Balancer
1888.655 -> and hit the network appliance.
1890.891 -> If that firewall decides it should drop the packet,
1892.826 -> it simply drops the packet.
1894.261 -> If the firewall decides it should route the packet
1896.296 -> to the original destination it puts it back into the VPC,
1899.8 -> the VPC route table takes over
1901.435 -> and it'll actually route it directly to the destination address.
1904.872 -> One of the things customers asked for as soon as we launched that one was
1907.674 -> 'how do we do that between VPCs' and so just a few months ago
1911.445 -> we actually launched more specific routing
1913.78 -> or support for East-west traffic.
1915.616 -> And so in this diagram you can see how we're routing packets
1918.218 -> between Subnet A and Subnet B,
1920.954 -> and even though the packets are not destined for the firewall
1923.657 -> or the Gateway Load Balancer, the route tables within those VPCs
1926.76 -> allow us to direct that traffic to the Gateway Load Balancer,
1929.863 -> hit the firewall and the firewall
1931.331 -> gets to decide what to do with that packet.
1933.467 -> So really it's super simple now on AWS to deploy any of those devices
1937.671 -> that you so love from your on-premise networks.
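To see what the east-west case might look like in practice, here is a hedged boto3 sketch that points a subnet's route table at a Gateway Load Balancer endpoint; the route table, CIDR and endpoint IDs are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# In Subnet A's route table, send traffic destined for Subnet B through a
# Gateway Load Balancer endpoint; the firewall fleet behind the GWLB then
# decides whether each packet is dropped or forwarded on.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",       # hypothetical route table for Subnet A
    DestinationCidrBlock="10.0.2.0/24",         # hypothetical CIDR of Subnet B
    VpcEndpointId="vpce-0123456789abcdef0",     # hypothetical GWLB endpoint
)
```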
1941.041 -> If you decide not to use those,
1942.543 -> there's obviously the AWS network firewall.
1945.179 -> You know, one of the secrets to success in the cloud is us
1948.115 -> providing fully managed services
1950.284 -> where we handle all of the complexity,
1952.152 -> ensuring that they remain highly available.
1955.022 -> And the firewall has really done that
1956.456 -> and we will manage, you know, all of the rules.
1959.493 -> You get to configure those rules in a very highly flexible rules engine,
1962.729 -> you configure the firewall exactly how you want it to run.
1966.366 -> We actually use Gateway Load Balancer behind the scenes
1969.102 -> with network firewall
1970.737 -> to ensure that it does exactly that sort of routing
1972.773 -> and remains highly available and obviously also provides real time
1976.009 -> monitoring so you get an idea of what your firewalls are seeing.
1981.582 -> You know one of the things customers always want to know,
1985.185 -> in the security space,
1986.553 -> is you've configured your network in a certain way,
1989.59 -> has it stayed in the way that you wanted it to be
1993.16 -> or did some engineer come along
1995.195 -> and add an internet gateway or NAT gateway to a VPC
1998.665 -> that you really wanted to be locked down and remain private.
2002.636 -> So, last year at re:Invent
2004.805 -> we launched a service called Reachability Analyzer
2007.474 -> that really simplified, you know, the problem of
2009.643 -> 'I created a VPC, I launched an instance
2011.512 -> but I can't connect to it.'
2012.88 -> You can just ask Reachability Analyzer and it will actually tell you
2015.482 -> 'you need a subnet, you need a route table, you didn't add
2017.684 -> IGW, your security group is closed' and makes it really simple.
2021.622 -> And that actually uses automated reasoning
2023.724 -> which is a mathematical process behind the scenes
2026.193 -> and we essentially use a mathematical version of a VPC
2029.663 -> and model your VPC and we're able to tell what the problems might be.
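As a rough sketch of using Reachability Analyzer programmatically (the Network Access Analyzer announced just below uses a different, policy-based API), here is a hedged boto3 example; the instance IDs and region are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# Define a path from a source instance to a destination instance on TCP 443,
# then run the automated-reasoning analysis over the model of your VPC.
path = ec2.create_network_insights_path(
    Source="i-0123456789abcdef0",        # hypothetical source instance
    Destination="i-0fedcba9876543210",   # hypothetical destination instance
    Protocol="tcp",
    DestinationPort=443,
)
analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPath"]["NetworkInsightsPathId"],
)
print(analysis["NetworkInsightsAnalysis"]["Status"])  # "running" until results are ready
```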
2033.534 -> And so today we're happy to announce the availability of Amazon VPC Network
2037.771 -> Access Analyzer.
2040.007 -> Amazon VPC Network Access Analyzer
2041.742 -> allows you to configure policies and rules that model what you allow
2046.613 -> and what you don't allow, within your network configuration.
2050.384 -> And so you might say that you don't want to allow traffic
2052.352 -> between two subnets,
2053.453 -> you might say those two subnets only use encrypted traffic.
2056.356 -> You might say that this VPC can never have any sort of external connectivity
2059.993 -> whether it's internet gateway, NAT gateway, Direct Connect.
2063.697 -> Network Access Analyzer will actually take a look at all of that,
2066.667 -> analyze your network
2068.101 -> and if it finds any policy that doesn't obey what you set up,
2071.004 -> it will provide you with an alert as well as an indication
2073.674 -> of how you should go about fixing that.
2076.343 -> And so from a network maintenance point of view,
2078.011 -> now you're able to configure those policies
2080.214 -> and know that no matter who in your organization
2081.949 -> is creating network constructs,
2083.951 -> we're watching those all the time and able to give you an indication
2087.287 -> if any of those policies are violated.
2088.856 -> So we're excited to see what you do with that feature.
2093.894 -> Next I want to take a look at a network for every workload.
2098.765 -> You know this has been an area that I've thought deeply about over the years
2103.871 -> and when we started off in the cloud,
2105.305 -> as I said earlier, we had a classic network.
2107.241 -> It was called the classic network.
2109.243 -> We actually called it classic much later,
2111.411 -> we didn't have a name at the time, it was just the network.
2114.515 -> It had a single subnet and we used security groups to control
2117.885 -> who had access to what.
2119.586 -> There were a lot of customers that actually love that set up,
2121.788 -> they found it incredibly simple and they were kind of happy
2124.258 -> that they didn't have to deal with a lot of the networking constructs
2126.26 -> that they maybe had on their own data centers.
2132.165 -> What became quickly evident over time, as we went and added VPCs,
2136.537 -> was that our customers grew their networks.
2138.438 -> They'd create a single VPC, then many VPCs, and we said that, you know,
2142.676 -> we need to make sure that the network that you build today
2146.046 -> should not limit your ability to grow and scale in the future.
2149.716 -> So one of the things that I really hate
2151.752 -> is where a customer will come to me and say
2153.187 -> "Dave, we made this decision to do something three years ago,
2157.691 -> now our business has grown and our network has grown
2159.66 -> and that's really slowing us down."
2162.196 -> So the focus that we've had in the last five or so years
2165.999 -> has been what services and features can we build that makes sure
2168.902 -> the customers make the right decisions
2171.004 -> and then we can support them,
2172.306 -> no matter how large the organization becomes over time.
2176.276 -> And it was really interesting for us to learn that
2178.846 -> because we created Amazon VPC in 2009 and I was one of the engineers
2184.184 -> that actually worked on some of the code there
2186.286 -> and we literally believed that nobody would ever need more than one VPC.
2192.025 -> It sounds crazy, I actually told a customer yesterday
2194.962 -> and they said "what?
2196.096 -> Is that really…" and I went, "yes, that was the limit.
2197.698 -> If you wanted more than one you'd have to ask us."
2199.766 -> But what happened is over time, customers started to create hundreds
2203.637 -> if not thousands of these VPCs
2206.974 -> and they became this really useful little network
2208.842 -> that you could deploy for an organization
2210.444 -> or give to a developer for their application
2212.446 -> and it was a great way to keep network access isolated.
2216.517 -> Obviously when you create thousands of VPCs the next problem
2219.186 -> becomes well, how do we talk between these VPCs.
2222.122 -> And that was something we'd never thought about
2223.824 -> because we always thought you only needed one VPC
2225.526 -> and we started this journey of how do we solve this problem.
2227.794 -> And in 2014 we shipped VPC peering which allows you
2232.132 -> to peer up to 125 VPCs
2235.302 -> by essentially creating a flat network between them.
2239.006 -> And customers loved it and then these VPCs started to get larger and larger
2244.678 -> and the management of these peers
2246.48 -> became more complicated and more complicated
2248.348 -> because you needed a full mesh.
2250.384 -> And we launched things like VPN and Direct Connect
2252.452 -> bringing those edge connectivity into your VPC
2254.955 -> just became impossible to manage and so we knew we had a problem.
2259.359 -> At re:Invent 2018 we launched Transit Gateway and Transit Gateway
2264.398 -> is essentially a network router or network hub for your VPC
2268.302 -> that allows you to bring in thousands of VPCs to a single Transit Gateway,
2273.006 -> allows you to bring in VPN connections,
2275.275 -> Direct Connect connections directly into that
2277.911 -> and really solved the network connectivity problem.
2281.415 -> It manages all those routes for you as well,
2283.417 -> so when you bring in a new VPC it will automatically share the routes,
2286.22 -> it allows you to do some smart policy based routing and so,
2289.156 -> you know, the conversation with customers around
2291.124 -> 'how do I grow my VPCs' has really gone away
2294.962 -> because of the launch of Transit Gateway.
2297.097 -> So we've been very happy with the progress.
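For reference, attaching a VPC to a Transit Gateway is essentially a two-step call; the sketch below is a hypothetical boto3 example (placeholder VPC and subnet IDs) and skips waiting for the gateway to become available before attaching.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# Create a Transit Gateway to act as the regional network hub; default route
# table association/propagation means attached VPCs share routes automatically.
tgw = ec2.create_transit_gateway(
    Description="hub-for-prod-vpcs",  # hypothetical description
    Options={
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)

# Attach a VPC, providing one subnet per Availability Zone you want reachable.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGateway"]["TransitGatewayId"],
    VpcId="vpc-0123456789abcdef0",               # hypothetical VPC
    SubnetIds=["subnet-0123456789abcdef0"],      # hypothetical subnet
)
```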
2299.399 -> About a year after the launch we actually launched Inter-Region
2302.402 -> peering with Transit Gateway
2304.705 -> where we made it very, very simple for you to create global networks
2311.578 -> using Transit Gateways and VPCs.
2311.578 -> We've always had a very strong approach
2315.115 -> to how we build our regions in a fully isolated way
2318.085 -> and so we've never allowed,
2319.486 -> and will never allow, VPCs to span multiple regions.
2322.689 -> Transit Gateway has really solved that problem
2325.292 -> of allowing you to create that global network
2327.461 -> without the concern of having multiple regions
2330.664 -> fail for the same reason, at the same time.
2334.034 -> And so we've seen just incredible progress in the space
2337.104 -> of allowing you to build the network topology that you want to build
2340.274 -> and need to build for your application and allowing us…
2343.11 -> and allowing you to scale as your business scales.
2346.747 -> And to tell us more about the journey that they've taken at Morningstar,
2350.984 -> please welcome AWS Cloud Hero and Program Manager
2353.954 -> at Morningstar, Margaret Valtierra.
2357.424 -> [applause]
2367.835 -> Thank you, Dave.
2369.102 -> I'm happy to be here to share our journey to cloud
2371.572 -> and our network evolution.
2375.175 -> So Morningstar's journey to cloud began five years ago.
2378.779 -> Our first step was to set up account and network security.
2383.116 -> Each team gets two accounts, production and non-production,
2387.955 -> each region gets two VPCs, primary and DR or Disaster Recovery.
2395.128 -> One key decision we made from the beginning
2397.297 -> was that teams separate VPCs should never have connectivity.
2401.702 -> We use AWS networks, an account structure to keep our user data
2406.406 -> and our business functions separate from each other.
2412.179 -> As teams migrated to AWS, we scaled globally and grew a lot.
2417.918 -> We have over 200 AWS team accounts,
2421.488 -> that's more than 100 separate teams using AWS.
2425.359 -> To connect all of these we have over 400 VPCs, that's 400 NAT gateways.
2431.865 -> Additionally, we have Direct Connect
2433.634 -> to connect our on-premise data centers to AWS
2438.038 -> and that's more than 1000 virtual interfaces.
2443.944 -> So with our growth and account structure we hit some walls,
2447.881 -> in each account there's a primary and a backup Direct Connect,
2452.152 -> so that's 233 accounts with at least two pairs
2455.822 -> of connections and virtual interfaces that we have to manage.
2460.694 -> Direct Connect has a limit of 50 virtual interfaces,
2465.799 -> as we add more and more that's both physical pairs
2468.368 -> we have to add as well as virtual interfaces
2470.904 -> to manage and keep track of.
2474.374 -> We also decided to use network
2476.143 -> ACLs to control this security and connectivity
2480.848 -> and we sometimes hit that limit of 40 rules per network ACL or NACL.
2488.722 -> So as we scaled our accounts and VPCs it became pretty complicated.
2493.56 -> VPC peering is limited to 125 peers per VPC,
2499.399 -> additionally VPC peering exposes the entire VPC.
2504.805 -> So, for example, our data lake team needs to connect
2507.641 -> to a lot of different team accounts and they need to share widely.
2511.745 -> Other teams and accounts need to stay completely separate
2514.414 -> and isolated for regulatory reasons.
2518.552 -> VPC end points are not enabled for cross-region connectivity
2523.924 -> and they're limited to certain AWS services,
2526.76 -> so other teams want to share a certain resource
2529.963 -> and rather than VPC peering we can tell them to use VPC endpoints,
2534.468 -> but it can start to blur our rigid network segmentation.
2540.107 -> So we are now upgrading our network to use Transit Gateway
2544.077 -> and network firewall.
2546.146 -> Transit Gateway will modernize our networks,
2548.715 -> allowing us to scale and manage them globally.
2553.587 -> Transit Gateway allows us to connect across regions,
2556.423 -> it helps us overcome those VPC peering limits
2559.56 -> and it simplifies our networking centrally.
2562.996 -> Transit Gateway also lays the foundation for apps and services
2566.333 -> that can't use those VPC endpoints
2568.902 -> and enables cross-region connectivity.
2572.606 -> Consolidating those NAT gateways in Transit Gateway
2576.41 -> instead will save us money and reduce complexity.
2581.348 -> We're also going to replace those network
2583.15 -> ACLs with network firewall which helps us with security
2587.154 -> and we get the ability to audit our security posture continuously.
2594.094 -> So with Transit Gateway we are going to scale out
2596.23 -> globally and simplify our networking centrally.
2599.833 -> All right. Back to you, Dave.
2601.401 -> [applause]
2610.31 -> Great. Thank you, Margaret. That was just amazing to see.
2612.679 -> I love when we build all this technology behind the scenes
2614.848 -> and then to see our customers go
2616.416 -> and use it in ways that we expected and very often in ways
2619.753 -> we didn't expect to solve their problems
2621.588 -> is always incredibly exciting.
2622.823 -> You know there was one feature –
2625.526 -> and I'll tell you, I told my team the other day –
2627.561 -> that I had this amazing idea before we shipped.
2629.83 -> I said to the team "wouldn't it be amazing if I could also peer
2632.9 -> Transit Gateways within a single region."
2635.435 -> And apparently that was not quite required at the time,
2639.206 -> but I'm happy to announce the availability of intra-region
2642.576 -> peering for Transit Gateways.
2645.012 -> And so Transit Gateways have become a meaningful building block
2647.881 -> for your network bringing in thousands of VPCs.
2650.817 -> Very often you might have multiple
2652.019 -> Transit Gateways now within a single region.
2654.688 -> So as you think about being able to peer those together,
2656.723 -> not only between regions but now in the same region,
2658.892 -> you're really able to bring together
2660.26 -> maybe different networks, different parts of your organization
2663.03 -> and build incredible network topologies
2664.831 -> or advanced network topologies
2666.733 -> and we're very excited to support that.
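As a rough illustration of the new capability (the gateway IDs, account number, and region below are hypothetical placeholders), an intra-region peering is created the same way as a cross-region one, with the peer region set to the local region:

    # Minimal sketch: peer two Transit Gateways that live in the same region.
    # IDs and the account number are hypothetical placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    peering = ec2.create_transit_gateway_peering_attachment(
        TransitGatewayId="tgw-0aaaaaaaaaaaaaaaa",      # requesting gateway
        PeerTransitGatewayId="tgw-0bbbbbbbbbbbbbbbb",  # accepting gateway
        PeerAccountId="111122223333",
        PeerRegion="us-east-1",  # same region as the requester: intra-region peering
    )["TransitGatewayPeeringAttachment"]

    # The owner of the peer gateway accepts the attachment to complete the peering.
    ec2.accept_transit_gateway_peering_attachment(
        TransitGatewayAttachmentId=peering["TransitGatewayAttachmentId"]
    )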
2672.172 -> The other thing we have to start thinking about
2673.874 -> or have been thinking about more
2674.942 -> is not only how you build large scale networks within AWS
2678.378 -> with the topology you need,
2680.013 -> but also how we bring in the network that you have on-premise
2683.417 -> and allow you to easily connect that to the regions that you might be in.
2689.289 -> Transit Gateway actually helps significantly there as well.
2692.226 -> We have VPN, accelerated VPN connectivity,
2695.162 -> which actually routes through one of our Edge locations
2697.431 -> to the region, connects directly into Transit Gateway
2700.067 -> and we also have Direct Connect connectivity
2702.87 -> that actually brings that in as well, with Direct Connect
2706.473 -> connections through to the regions.
2708.675 -> We also have partnered with about 26 SD-WAN vendors
2712.446 -> to actually allow you to bring your SD-WAN router
2714.681 -> and your SD-WAN configuration and start to use AWS
2717.918 -> on the AWS backbone for connectivity between regions.
2721.555 -> We actually have many customers that are doing that today,
2723.957 -> connected into AWS and then running across the AWS backbone
2726.793 -> to get to their branch office in Europe from the US.
2731.765 -> Now one thing we've been looking at is as more and more customers
2735.235 -> use this global network for the inter-branch connectivity,
2739.006 -> could we find ways to make that even more efficient.
2742.442 -> You know, how do we reduce latency on those paths,
2745.145 -> and today we're happy to announce the availability
2747.581 -> of AWS Direct Connect SiteLink.
2750.918 -> Direct Connect SiteLink allows you to connect
2752.753 -> into multiple Direct Connect locations
2754.621 -> from your on-premise locations or branch offices
2757.824 -> and then send communication and packets across Direct Connect,
2762.162 -> taking the absolute shortest network path.
2766.033 -> Whereas today all of those packets would have to traverse an AWS region,
2769.736 -> with AWS Direct Connect SiteLink,
2771.738 -> those packets may just flow on the backbone
2774.041 -> without even going anywhere near an AWS region.
2776.977 -> So let's take a look at how that works.
2778.879 -> Here you can see in a global network we've got our TGWs on the top layer,
2782.783 -> we've got Direct Connect Gateways connected into those TGWs
2786.053 -> and then we've got our various branch offices
2788.255 -> around the world actually connected into those Direct Connect locations.
2792.526 -> Today if I send traffic on those Direct Connect locations
2795.028 -> and I wanted to communicate from one branch
2796.463 -> to the next I could do it, but I'd take the latency hit
2799.166 -> of having to go all the way to the AWS region
2801.802 -> before I go back down to the branch office.
2804.371 -> With Direct Connect SiteLink that's changed completely
2807.508 -> and you can now have direct connectivity
2809.409 -> between your on-premise locations using the AWS backbone
2813.247 -> without the need to traverse back to the AWS region
2817.284 -> and in many cases that can save tens if not hundreds of milliseconds,
2821.054 -> depending on your network configuration.
2823.991 -> So we're very excited to see what customers do with this.
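For a sense of how you would turn this on, here is a minimal boto3 sketch; the connection ID, gateway ID, VLAN, and ASN are hypothetical, and the assumption is that SiteLink is toggled per virtual interface via the enableSiteLink flag in the Direct Connect API.

    # Minimal sketch: create a private virtual interface with SiteLink enabled so
    # traffic between Direct Connect locations can take the shortest path on the
    # AWS backbone instead of hairpinning through a region. IDs are hypothetical.
    import boto3

    dx = boto3.client("directconnect", region_name="us-east-1")

    vif = dx.create_private_virtual_interface(
        connectionId="dxcon-aaaaaaaa",
        newPrivateVirtualInterface={
            "virtualInterfaceName": "branch-office-newyork",
            "vlan": 101,
            "asn": 65001,
            "directConnectGatewayId": "11111111-2222-3333-4444-555555555555",
            "enableSiteLink": True,  # assumption: per-VIF SiteLink switch
        },
    )
    print(vif["virtualInterfaceId"])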
2827.895 -> You know we're always thinking about how we can improve things
2831.331 -> and while customers are building global networks
2833.901 -> using the AWS Global Backbone
2837.171 -> from on-premise connectivity or between regions,
2840.073 -> we're always asking ourselves is there more that we can do
2842.743 -> to simplify the management of that network that spans multiple regions
2846.313 -> and on-premise locations.
2848.248 -> And, unfortunately, I'm not able to give you the answer to that question,
2850.951 -> but I encourage you to attend Werner's keynote on Thursday morning
2853.954 -> to find out more about what we'll be doing in that space.
2857.157 -> I want to talk about a really new and emerging technology,
2860.861 -> I'm not sure if any of you have heard about it - IPv6.
2866.033 -> It's been around for a while, you know,
2868.202 -> I think maybe 25 years or so,
2870.204 -> but we're obviously keeping a very, very close eye on IPv6
2873.807 -> and our migration from IPv4 to IPv6.
2877.945 -> You know, over the years we've seen a lot of progress,
2880.881 -> we've had a lot of the eyeball networks,
2882.449 -> as we call them, folks like Comcast,
2884.384 -> that they've actually moved all of their edge networking to IPv6
2887.554 -> because they were running out of IP addresses.
2889.456 -> You know there's actually more IPv6 addresses
2892.226 -> than grains of sand on the earth.
2894.761 -> I don't know what's more amazing about that
2896.396 -> that there's more IPv6 addresses or that someone's actually counted
2899.266 -> how many grains of sand there are on the planet.
2902.836 -> You know, one thing that's interesting,
2904.171 -> while it's super easy today to actually set up a network
2908.475 -> and use IPv6 just with a Load Balancer –
2910.811 -> so hopefully all of you that have websites you just click the button
2913.514 -> to enable V6, super important –
2916.35 -> that's not really driving the adoption that we're seeing though.
2920.153 -> What's driving the adoption is customers that are saying to us
2923.09 -> "you know what, I never want to have to think about an IP address again.
2926.994 -> I don't want to worry about where they are, how they overlap,
2929.596 -> I just want a different range allocated to every VPC.
2932.799 -> I don't want to think about how I manage them."
2935.035 -> And that's what's really driving V6 adoption,
2937.671 -> is internal usage within VPCs.
2940.073 -> And today we're happy to announce the availability
2941.775 -> of IPv6 only subnets.
2944.545 -> And so while until about a week ago – we actually launched this –
2947.281 -> until a week ago you had to have an IPv4 address
2951.485 -> with every single EC2 instance and we only supported dual stack.
2954.988 -> With this feature and a number of features
2957.624 -> that we've actually built behind the scenes to do this,
2959.993 -> you're able to have VPCs that have IPv6 only subnets.
2964.565 -> No need for an IPv4 address and you can get away
2967.968 -> from some of that IP management that's really been painful to do.
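As a rough sketch of what that looks like with boto3 (the region, Availability Zone, and CIDRs are hypothetical), you give the VPC an Amazon-provided IPv6 block and then mark the subnet as IPv6-native, so instances in it never consume an IPv4 address:

    # Minimal sketch: a VPC with an Amazon-provided IPv6 block and an IPv6-only
    # ("IPv6 native") subnet. IDs, region, and CIDRs are hypothetical.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    vpc = ec2.create_vpc(
        CidrBlock="10.0.0.0/24",              # an IPv4 block is still required at the VPC level
        AmazonProvidedIpv6CidrBlock=True,     # ask AWS for a /56 IPv6 block
    )["Vpc"]

    # Look up the assigned IPv6 block (the association may take a moment),
    # then carve the first /64 out of the /56 for the subnet.
    vpc = ec2.describe_vpcs(VpcIds=[vpc["VpcId"]])["Vpcs"][0]
    ipv6_block = vpc["Ipv6CidrBlockAssociationSet"][0]["Ipv6CidrBlock"]
    subnet_cidr = ipv6_block.replace("/56", "/64")

    subnet = ec2.create_subnet(
        VpcId=vpc["VpcId"],
        Ipv6CidrBlock=subnet_cidr,
        Ipv6Native=True,                      # the IPv6-only switch
        AvailabilityZone="us-east-1a",
    )["Subnet"]
    print(subnet["SubnetId"])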
2974.107 -> I'm also happy to announce, for those of you that want to stay on IPv4 –
2977.544 -> but you really want to get rid of that Excel spreadsheet
2979.78 -> that's tracking where you've allocated
2981.281 -> all of your IP address ranges – I'm happy to announce
2983.851 -> the availability of Amazon VPC IP Address Manager.
2989.022 -> IP Address Manager allows you to manage all of your IP addresses
2992.292 -> and CIDR ranges both within AWS and on-premise.
2996.463 -> I was kind of joking about the Excel spreadsheet
2998.365 -> but I'm always surprised at how many customers I talk to
3000.968 -> and the networking team actually does have an Excel spreadsheet
3003.971 -> that they're having to manage, and every time an engineering team
3007.808 -> comes along and says "let me go and create a new VPC,"
3010.844 -> they have to contact somebody that manages a spreadsheet
3013.347 -> and get a CIDR range that hasn't yet been used in the company.
3016.35 -> I see a few nods out there.
3018.452 -> Well IPAM or IP Address Manager
3021.722 -> actually is deeply integrated with VPC,
3023.657 -> so you can set up policies on how you want to manage your IP addresses
3027.027 -> and when a developer goes along to create a new VPC
3029.763 -> it'll automatically allocate a CIDR range
3032.633 -> based on the policies that you've set up.
3035.536 -> It's also able to automatically detect
3037.604 -> if you have any risk of overlapping CIDR ranges in any of your VPCs.
3041.208 -> You can actually go into a management dashboard,
3042.976 -> call those out, and go in and think about what changes you
3045.512 -> have to make to avoid that risk.
3047.281 -> And then, finally, it's able to provide you with utilization data
3051.552 -> on how well you're utilizing your IP addresses
3054.388 -> and very often we find out about that too late
3056.456 -> when somebody tries to launch an instance or create a Load Balancer
3059.459 -> and we don't have any IP addresses in that subnet for them.
3062.429 -> And so it allows you to get ahead of solving that problem.
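To sketch that workflow in boto3 (the top-level range, netmask policy, and region here are hypothetical), the networking team provisions a pool once, and new VPCs then draw a free, non-overlapping CIDR from it instead of from a hand-maintained spreadsheet:

    # Minimal sketch: an IPAM, a regional pool with an allocation policy, and a
    # VPC that pulls its CIDR from the pool. Ranges and region are hypothetical.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ipam = ec2.create_ipam(
        OperatingRegions=[{"RegionName": "us-east-1"}]
    )["Ipam"]

    pool = ec2.create_ipam_pool(
        IpamScopeId=ipam["PrivateDefaultScopeId"],
        AddressFamily="ipv4",
        Locale="us-east-1",
        AllocationDefaultNetmaskLength=24,    # policy: every new VPC gets a /24
    )["IpamPool"]

    # Give the pool the corporate range it is allowed to hand out.
    # (The provisioned CIDR can take a short while to become available.)
    ec2.provision_ipam_pool_cidr(IpamPoolId=pool["IpamPoolId"], Cidr="10.0.0.0/16")

    # A developer creates a VPC against the pool; IPAM picks a non-overlapping CIDR.
    vpc = ec2.create_vpc(
        Ipv4IpamPoolId=pool["IpamPoolId"],
        Ipv4NetmaskLength=24,
    )["Vpc"]
    print(vpc["CidrBlock"])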
3065.666 -> This has been something that customers have been asking for for years
3068.135 -> and I'm super happy that as a team we got to a point
3070.17 -> where we're able to provide this to you
3071.638 -> and I think it's going to be incredibly useful.
3075.275 -> There's one final section we have left to talk about
3077.811 -> and that's what are we doing to bring AWS closer to you.
3082.416 -> And as somebody that's been in AWS, you know,
3084.051 -> for many, many years – 14 years now –
3086.32 -> it's been interesting for me to watch this journey of how we've gone from
3089.923 -> 'we're a cloud provider and we run in regions' to
3092.459 -> 'I'm able to actually bring compute
3094.094 -> and networking all the way to your on-premise location.'
3097.564 -> And we actually think about this today as more of a continuum
3100.4 -> all the way from our regions, through our local zones,
3102.603 -> through our wavelength zones, through AWS Outposts,
3105.706 -> what we doing in the IoT space, Internet of Things,
3108.408 -> and what we're doing from a completely disconnected point of view
3111.245 -> with our AWS Snow Family.
3113.981 -> Obviously the first place to start in all of this
3116.416 -> is that AWS Global Infrastructure,
3118.886 -> and so for us to be able to bring connectivity closer to you
3121.488 -> we have to make sure that we are in every location around the world
3125.792 -> with very, very low latency access from wherever you may be.
3131.164 -> AWS Local Zones, we launched our first Local Zone in 2019
3135.736 -> and it was actually targeted at the Los Angeles region.
3138.605 -> So if any of you live in the LA region there are actually
3142.676 -> two full Local Zones available there
3144.545 -> and we have customers getting latency as low as one or two
3147.181 -> milliseconds within Los Angeles to those Local Zones.
3150.817 -> And the reason we launched them there was actually
3152.519 -> for the media and entertainment space where they have graphic designers
3155.856 -> and artists actually editing video footage
3158.825 -> that the, you know, Hollywood and the media industry
3160.928 -> has actually shot during the day.
3162.896 -> They have them editing that on GPU instances in EC2
3166.366 -> and obviously you want to have very, very low latencies for that
3169.603 -> so the graphic designers can, you know, not drive themselves crazy.
3173.073 -> And so AWS Local Zones provide low latency access to AWS
3177.01 -> infrastructure within metro areas.
3179.646 -> They're fully managed, you can think of them as full availability zones
3182.115 -> where we're actually running everything
3183.383 -> and making sure that that illusion of infinite capacity is sustained
3186.987 -> for everybody that wants to use it.
3188.922 -> And obviously we're targeting large metro areas for this.
3192.559 -> Last year at re:Invent we said we would launch
3193.894 -> 12 Local Zones in 2021… sorry 15 Local Zones in 2021
3199.9 -> and we've actually launched twelve of those
3201.268 -> already at the locations you see on the slide.
3204.137 -> And over the next two to three weeks
3205.639 -> we'll actually be launching the final three in Seattle,
3208.308 -> Atlanta and Phoenix.
3211.144 -> And so if you happen to be in any of these regions
3213.413 -> where we have an AWS Local Zone,
3214.681 -> you're able to get the compute capacity
3216.683 -> and several other AWS services very close to you.
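To give a feel for how you would actually use one (the zone group name and IDs below are illustrative), Local Zones are opt-in per account and then behave like an additional zone of their parent region:

    # Minimal sketch: opt in to a Local Zone group, then place a subnet in that
    # zone so instances run close to your users. IDs and CIDR are hypothetical.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")

    # Local Zones are opt-in per zone group, e.g. the Los Angeles group.
    ec2.modify_availability_zone_group(
        GroupName="us-west-2-lax-1",
        OptInStatus="opted-in",
    )

    # Once opted in, treat the Local Zone like any other AZ for subnet placement.
    subnet = ec2.create_subnet(
        VpcId="vpc-0123456789abcdef0",
        CidrBlock="10.0.8.0/24",
        AvailabilityZone="us-west-2-lax-1a",
    )["Subnet"]
    print(subnet["SubnetId"])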
3220.387 -> One of the things, as soon as we put it out there,
3222.189 -> that customers said to us is they not only wanted lower latency
3225.626 -> from the network or from the internet,
3227.494 -> they also wanted that from Direct Connect.
3229.796 -> And today I'm happy to announce the availability
3232.165 -> of Direct Connect support for AWS Local Zones
3236.336 -> and in the coming weeks you'll see three of our Local Zones in Boston,
3239.206 -> New York and Chicago will get Direct Connect access
3242.209 -> through a Direct Connect Gateway to these Local Zones.
3245.612 -> And so you maybe have a datacenter in one of those locations
3248.048 -> or maybe you have your company located there,
3250.117 -> you can now consider moving your compute to that edge zone,
3252.486 -> that Local Zone and actually use Direct Connect
3254.421 -> to talk directly to it.
3255.622 -> It will also get you into the AWS backbone,
3258.025 -> so all those other features about Direct Connect SiteLink
3260.127 -> would obviously work from those locations as well.
3262.596 -> So we're very excited to see what customers do with this.
3264.965 -> You know, one of my most favorite parts of Adam's keynote yesterday
3269.436 -> was the announcement with NASDAQ.
3271.772 -> For many, many years
3273.373 -> I thought about you know imagine running NASDAQ on AWS
3276.076 -> and it felt like something that you know we may never ever get to.
3279.613 -> However, Nasdaq announced
3281.315 -> that they would actually be moving their exchange,
3283.517 -> the Matching Engine which is the key part of the exchange,
3286.086 -> they'd be moving that to run on AWS Outposts
3289.223 -> in Local Zones within their Carteret facility in New Jersey.
3293.694 -> This is an incredibly low latency use case,
3296.496 -> it has networking challenges that we've never seen in the cloud
3299.199 -> and honestly never thought we could support in the cloud.
3302.002 -> They provide fairness in the network
3303.637 -> by providing everybody with a piece of optical fiber
3305.906 -> that's exactly the same length.
3307.875 -> Whether you're right next to the server
3309.243 -> or happen to be on the other side of the datacenter.
3311.612 -> And we've worked out ways to do all of that with AWS Outposts
3314.414 -> and integrate with the rest of the network
3316.416 -> to provide the low latency that an exchange would need.
3319.453 -> So very excited to see about the progress we're making there.
3325.259 -> Local Zones, again: we're not finished with Local Zones
3328.395 -> and we have grand plans to see where they would go
3331.031 -> and I'd encourage you as well to be at Werner's keynote on Thursday
3333.934 -> to see what our next plan is for Local Zones.
3337.671 -> Now I suspect most of you are somehow involved with networking,
3341.241 -> this talk seems to attract people that like networking,
3344.578 -> and 5G is definitely an emergent technology
3347.915 -> I think all of us are watching.
3349.783 -> 4G was a great step forward but it doesn't have quite the bandwidth
3352.853 -> that we need for a lot of applications
3354.588 -> and also, often, 40 to 50
3356.056 -> milliseconds of latency is just a little too much
3359.026 -> for a lot of applications to deal with.
3361.662 -> You know, when 5G came out we think it offers
3365.599 -> up to a 10x improvement in latency
3368.202 -> over what you could get with
3369.369 -> LTE, as well as for peak data speeds and throughput.
3372.539 -> I actually measured more than 2 gigabits per second on my iPhone in the Climate
3375.876 -> Pledge Arena in Seattle just a few weeks ago on Verizon's
3379.213 -> latest 5G network, which is amazing to see.
3382.683 -> And the first thing we did in the 5G space was AWS Wavelength
3385.886 -> and it allows us to embed our compute capacity
3388.455 -> directly into 5G networks and we've actually partnered with a number of
3392.059 -> TELCOs now including Vodafone, Bell, Verizon, KDDI and SKT
3397.865 -> to bring AWS compute directly into their 5G network.
3400.701 -> And it's just amazing to see the use cases and the innovation
3404.104 -> that we think this low latency on 5G networks is going to drive.
3407.674 -> And one of these is a company called Origo and they were based in the UK
3411.745 -> and they're pioneering driverless technologies.
3415.282 -> Now this is not just any car,
3416.717 -> this is actually a full bus of students without a driver.
3420.12 -> I imagine it's probably pretty terrifying.
3422.556 -> And they're running this driverless bus
3424.491 -> from a car pool location to Cambridge University
3428.495 -> and they worked with us and Vodafone, in the UK,
3431.231 -> to actually provide the buses with ultra low latency
3433.567 -> and expansive bandwidth
3435.335 -> and it lets Origo monitor these autonomous vehicles in real time
3438.972 -> and provide safe and secure communication
3440.641 -> to ensure that they have everything just as it should be
3443.51 -> for these driverless vehicles transporting students around campus.
3447.08 -> I expect to see more of this.
3450.918 -> And what about 5G networks within a factory, a smart building or campus?
3455.022 -> This is something that Adam spoke about yesterday as well
3457.024 -> and we've been looking at this and speaking to a number of customers
3459.993 -> and they said "I'd love to use 5G."
3462.596 -> 5G has a couple of interesting advantages,
3463.964 -> and we see this with our fulfillment centers:
3465.365 -> Wi-Fi can be incredibly difficult,
3467.601 -> bringing access points to all the right locations.
3471.138 -> Also, from a throughput point of view,
3472.506 -> as your number of devices increase on a Wi-Fi network
3474.908 -> it can be very difficult to support and with IoT and robotics
3477.778 -> taking off we're starting to saturate our Wi-Fi networks.
3480.814 -> And so the promise of 5G is very, very compelling.
3484.918 -> The problem is that deploying a 5G network is just incredibly expensive
3489.456 -> and also requires expertise that our customers often don't have
3493.193 -> or they have to go and employ an SI
3494.862 -> to do a very large project to make it happen,
3497.164 -> and so we thought there must be a better way.
3499.967 -> So as Adam announced yesterday we now have a new service
3502.503 -> called AWS Private 5G,
3504.638 -> which is a fully managed service that allows you to install,
3507.407 -> operate and scale private cellular networks.
3511.178 -> And so these are networks within a smart building,
3513.247 -> within a campus, within a factory.
3515.415 -> They obviously don't provide
3516.65 -> any sort of connectivity outside of that location,
3518.819 -> otherwise we wouldn't have called them Private 5G.
3521.855 -> And so you can very easily, just like ordering an Outpost,
3524.458 -> order a Private 5G network.
3526.827 -> We actually have a starter pack where we will ship you a small cell
3529.796 -> together with an outpost and a core that allows you to actually configure
3534.635 -> a Private 5G network and get up and running in as little as a few hours.
3538.372 -> And the customers that have used that have just loved it.
3541.408 -> We also thought a lot about the billing model
3543.243 -> and how we wanted to go about making sure
3545.412 -> that it wasn't cost prohibitive for our customers.
3548.148 -> And while traditional 5G networks and LTE networks
3551.185 -> actually require you to pay, per SIM,
3553.687 -> for connectivity, which often just doesn't work as your SIMs grow
3555.989 -> and your utilization changes,
3558.225 -> we've got a different model:
3559.526 -> you only pay for the bandwidth that you've actually gone and provisioned.
3562.896 -> And so it's significantly cheaper, drives better utilization
3566.2 -> and allows you to get a whole lot more out of the network
3569.436 -> and get it up and running.
3571.004 -> It's also hassle free, you can start as small as you want
3573.607 -> and you can scale over time,
3575.509 -> so we're incredibly excited to see what customers do with Private 5G.
3580.214 -> And one of these customers that we've been working very closely with
3583.383 -> is Koch Industries.
3584.985 -> Koch Industries is just an incredible company
3586.82 -> and they're actually a conglomerate that has bought
3588.789 -> and invested in a number of companies around the world,
3591.925 -> across an incredible array of different industries
3594.494 -> and I just love talking to them because it's almost never a topic
3597.865 -> that I bring up that they say "oh, you haven't invested there"
3601.068 -> it's always "yes, we did that, we're doing this." Incredible.
3604.204 -> And one of the things that they're looking at is how do they invest more
3606.44 -> and what can they do in the Private 5G space.
3609.176 -> And they're partnering with a company called Mavenir,
3611.578 -> that we're working very closely with as well,
3613.814 -> to really find ways to bring 5G into many of their facilities.
3618.218 -> And so they said AWS Private 5G can help solve real challenges
3620.921 -> that enterprises face in deploying private cellular
3623.524 -> networks around the world.
3625.826 -> It's an exciting space,
3626.994 -> it's a growing space
3628.095 -> and we're excited to see what customers do with it.
3630.497 -> And so it's been quite a journey, and it's been very exciting
3632.799 -> to take you through all of these innovations,
3634.468 -> a little bit of history as well,
3636.37 -> and you know for the 11th straight year AWS
3639.406 -> was recognized as a leader in the Gartner Magic Quadrant.
3644.178 -> And we're very proud of that, it shows the ongoing innovation
3646.914 -> that we do for our customers and part of our job here at AWS
3650.617 -> is not only just to listen to our customers and hear what they ask,
3653.954 -> but also to predict what they may need in the future
3657.057 -> and to improve the services that they build on AWS.
3660.494 -> It's been a pleasure to talk to you today
3662.629 -> and I'm very excited to see what you do in the future
3665.332 -> with all the services we've spoken about.
3667.267 -> Thank you.
3668.468 -> [applause]
3669.837 -> [music playing]

Source: https://www.youtube.com/watch?v=ii5XWpcYYnI