AWS re:Invent 2020 - Machine Learning Keynote with Swami Sivasubramanian


Swami Sivasubramanian, VP of Machine Learning at Amazon Web Services, delivers the first-ever Machine Learning Keynote at re:Invent. Hear how AWS is freeing builders to innovate on machine learning with the latest developments in AWS machine learning, demos of new technology, and insights from customers. Launches include Distributed Training on SageMaker, SageMaker Clarify, Deep Profiling for SageMaker Debugger, SageMaker Edge Manager, Amazon Redshift ML, Amazon Neptune ML, Amazon Lookout for Metrics, and Amazon HealthLake. Guest speakers include Jennifer Langton (NFL) and Elad Benjamin (Philips), with demos and deep dives from AWS speakers including Dr. Nashlie Sephus, Dorothy Li, and Dr. Matt Wood.

Launch Announcements:
00:00 Machine Learning Keynote
15:56 Distributed Training on SageMaker
36:16 SageMaker Clarify
43:16 Deep Profiling for SageMaker Debugger
53:29 SageMaker Edge Manager
1:01:58 Amazon Redshift ML
1:04:30 Amazon Neptune ML
1:15:44 Amazon Lookout for Metrics
1:36:40 Amazon HealthLake

Demos:
45:54 SageMaker
1:07:18 QuickSight Q
1:19:50 Industrial AI
1:36:40 Amazon HealthLake

Guest speakers include:
21:56 Jennifer Langton, NFL
1:41:40 Elad Benjamin, Philips



Content

2.375 -> [music playing]
27.15 -> Hello.
28.38 -> Welcome to the second week of re:Invent.
31.23 -> It has certainly been an exciting event
33.26 -> so far with so many groundbreaking launches,
36.69 -> a record-breaking toast to kick off the event
39.42 -> and tons of sessions to attend so far.
42.79 -> And last week over 200,000 viewers tuned in
46.61 -> to watch the first-ever virtual live races
50.3 -> during the AWS Deep Racer League re:Invent championships on Twitch.
55.66 -> You will also have the opportunity to hear
57.8 -> from our experts and many customers such as NASCAR,
62.02 -> McDonald's, Mobileye, Intuit, and PwC in more than 50 machine
67.53 -> learning sessions taking place during the event.
71.13 -> I love seeing the enthusiasm for machine
73.52 -> learning among our customers; it is a testament to the technology's
77.67 -> potential to change businesses and industries for the better.
85.17 -> Machine learning is one of the most disruptive technologies
89.03 -> we will encounter in our generation.
92.02 -> More than 100,000 customers use AWS for machine learning today,
97.01 -> right from creating a more personalized customer experience
100.8 -> to developing personalized pharmaceuticals.
104.03 -> These tools are no longer a niche investment,
106.85 -> our customers are applying machine
108.65 -> learning to the core of their business.
111.43 -> Now let's take a look at some examples.
115.85 -> Domino's Pizza uses machine learning for predictive
118.98 -> ordering to help meet its goal of delivering hot fresh pizzas
123.8 -> in 10 minutes or less following an order.
126.54 -> Roche, the second largest pharmaceutical company in the world
130.18 -> uses Amazon SageMaker to accelerate the delivery of treatments
134.45 -> and tailor medical experiences. Kabbage, an American Express company,
139.67 -> applied machine learning to their loan application process
142.83 -> and surpassed major US banks to become the second largest Paycheck
147.72 -> Protection Program lender in the country,
150.66 -> preserving an estimated 945,000 jobs across the US.
156.37 -> The BMW Group is using Amazon SageMaker to process,
160.83 -> analyze and enrich more than seven petabytes of data
165.61 -> in order to forecast the demand for both model
168.01 -> mix and individual equipment on a worldwide scale.
172.13 -> Using Amazon SageMaker,
174.49 -> Nike built a product recommender on Nike.net
178.41 -> to deliver a more relevant shopping experience
181.35 -> to wholesale customers.
183.65 -> And finally in sports,
185.35 -> Formula One applies machine learning to their car design process,
189.62 -> giving them new insights into more than 550 million data points
194.58 -> collected through more than 5,000 single and multi-car simulations.
199.79 -> As you can see our customers are innovating
203.16 -> in virtually every industry. So why do our customers choose us?
208.06 -> They choose us because of our depth and breadth of services
212.28 -> and the rapid pace of innovation.
215.21 -> So now let's take a look at our machine learning offerings.
220.23 -> At the bottom layer of the machine learning stack we provide
223.31 -> ML capabilities for expert machine learning practitioners.
227.08 -> These include optimized versions
228.89 -> of the most popular deep learning frameworks,
231.27 -> including PyTorch, MXNet, and TensorFlow.
234.97 -> And we provide choice in infrastructure across GPUs, CPUs,
239.82 -> and our own silicon innovations for training and inference as well.
244.47 -> At the middle layer of the stack we have Amazon SageMaker,
248.52 -> which allows developers and data scientists to build,
251.82 -> train and deploy machine learning models at scale.
255.93 -> SageMaker includes a broad set of capabilities,
259.53 -> many of which are both novel and unique to AWS.
263.64 -> And these services are available
265.75 -> through an integrated development environment for machine learning,
269.36 -> which we call SageMaker Studio.
272.09 -> Many organizations are standardizing on SageMaker
276.19 -> to remove the complexity from each step
279.07 -> of the ML development workflow,
281.26 -> so that it's faster, more cost effective and easier to do.
287.1 -> At the top layer are our AI services,
290.28 -> where we are helping customers adopt machine
293.02 -> learning without having to build their own models from scratch.
296.81 -> In vision we have Rekognition and in speech
299.92 -> we provide text-to-speech with Polly and speech-to-text with Transcribe
304.16 -> and customers can create their own chatbots with Amazon Lex.
308.03 -> For text we provide natural language
310.16 -> processing with Comprehend, Translate for translation,
313.77 -> and Textract to extract structured text from documents and images.
318.98 -> We have also applied Amazon's more than
321.47 -> 20 years of experience in machine learning to deliver services,
325.4 -> including Amazon Personalize for personalized recommendation,
329.25 -> Amazon Forecast to automatically create custom demand forecasts
334.1 -> and Amazon Fraud Detector to identify online fraud.
338.47 -> We have also built end to end solutions
341.04 -> including Contact Lens for Amazon Connect
343.83 -> for contact center analytics, Amazon Kendra for Enterprise Search,
348.29 -> Amazon CodeGuru for automated code review and DevOps Guru
352 -> to improve application availability.
354.53 -> And last week we introduced services
356.8 -> that are custom built for the industrial sector.
360.21 -> As you can see we are innovating
363.27 -> at a rapid clip to meet the needs of our customers.
368.453 -> Four years ago at re:Invent 2016 we launched our first AI services,
373.86 -> Polly, Lex and Rekognition.
376.12 -> Since then we have launched hundreds of features
379.21 -> including Amazon SageMaker, 11 new AI services
383.59 -> with six more launched just last week.
386.71 -> This year alone we have already launched more than 250 features
390.86 -> and we have delivered over 200 features each year
393.91 -> for the past three years.
395.84 -> That's a really big deal for a new area of technology,
399.16 -> which is moving so rapidly.
401.81 -> As you can see, we are building the most comprehensive
405.32 -> set of machine learning products because giving our customers
409.15 -> the right tools to invent with machine learning,
412.41 -> is necessary to unlock the power of this technology.
417.11 -> 15 years ago, when I was making the transition
420.5 -> to my first job out of grad school,
422.75 -> I noticed that builders were being held back by their technology.
426.98 -> Instead of bringing their ideas to fruition they were waiting on
430.62 -> IT departments to procure the necessary hardware or software
434.47 -> to build their applications.
436.9 -> Shortly after I started at AWS, I had the fortune to be part
441.89 -> of the launch of some amazing technologies like Amazon S3,
446.97 -> RDS, Dynamo and saw how it transformed every industry
452.6 -> as builders finally had the right tools to do their jobs.
457.13 -> It's no exaggeration to say that cloud computing
460.28 -> has enabled various startups and businesses
463.54 -> to achieve a new level of success.
467.73 -> Today, machine learning has reached a similar moment.
471.89 -> Until recently it was only accessible
474.49 -> to the big tech firms and cool startups
477.05 -> that had the resources needed to hire experts
480.38 -> to build sophisticated ML models.
484.12 -> But freedom to invent requires that builders of all skill levels
489.58 -> can reap the benefits of revolutionary technologies.
492.91 -> And the technologies themselves allow for experimentation,
497.2 -> failures and limitless possibilities.
500.62 -> So today, we are enabling all builders,
503.45 -> irrespective of the size of their company or their skill level
506.89 -> to unlock the power of machine learning.
509.48 -> And through feedback from our customers
511.87 -> and our own experience implementing machine learning at Amazon,
516.12 -> we have learnt a lot about what it takes
518.89 -> to create an environment that promotes boundless innovation.
524.92 -> At Amazon we often use tenets or principles to follow
528.99 -> as guides for teams or projects.
532.11 -> Today, I'm going to talk through some of the tenets
534.7 -> that enable freedom to invent.
537.43 -> We will also share more about the work
539.31 -> we are doing to give builders the power to harness machine
542.74 -> learning along the way.
544.66 -> So let's start with the very first thing you will need: firm foundations.
551.6 -> To enable more builders to build and deploy machine
554.59 -> learning we are focused on optimizing the very foundations
558.19 -> that these models are built upon: frameworks and infrastructure
562.34 -> that are used to speed up the process of training
564.76 -> and deploying these models and reducing costs.
568.35 -> Firm foundations
569.5 -> are essential to giving builders the freedom to invent.
576.01 -> With the abundance of compute power and data available today, machine
580.89 -> learning is doing some incredible things,
583.51 -> things that we never thought were possible before,
586.96 -> like self-driving cars, autonomous systems or machines
590.32 -> that understand what we are saying.
593.09 -> Often these more advanced applications of machine
596.09 -> learning use deep learning, which consumes massive amounts
599.86 -> of input to achieve high accuracy.
603.26 -> The complexity of the model and the size of training data
606.24 -> set means that building a deep learning model
609.45 -> can be resource intensive
611.39 -> and can take days or even months to train.
615.88 -> Moreover, there is not a single framework that is universally used
620.24 -> by all expert machine learning practitioners.
623.86 -> They typically build on three primary frameworks
626.5 -> for deep learning, TensorFlow, PyTorch, and MXNet.
630.9 -> We know that choice is important to our customers
634.2 -> that is why we invest in making AWS
637.55 -> the best place to run all of the major deep learning frameworks.
642.29 -> Through our deep learning containers
644.32 -> and deep learning AMIs
646.68 -> we ensure customers always have the latest versions
650.65 -> of the major frameworks optimized to run on AWS.
657.41 -> Today, 92% of cloud-based TensorFlow and 91% of
663.14 -> cloud-based PyTorch run on AWS
665.94 -> and we actively participate in the community
668.85 -> to add new functionality to these frameworks, for example,
672.69 -> TorchServe, now the default model serving library for PyTorch, was built
678.63 -> and is maintained by AWS in partnership with Facebook.
683.07 -> We are also expanding the usage of deep learning to new audiences
687.17 -> and widening the available talent pool
689.82 -> with projects like Deep Java library,
692.26 -> an open source toolkit for performing deep learning in Java.
696.73 -> In addition to optimizing frameworks, a critical part
700.87 -> of being able to efficiently deploy machine learning models,
705.18 -> including deep learning models, is the underlying infrastructure.
710.92 -> Now, every machine learning project is different
714.73 -> with different compute needs
716.64 -> and we have built the broadest and deepest choice of compute,
720.89 -> networking and storage infrastructure to help our customers
724.72 -> meet their unique performance and budget needs.
728.04 -> We are rapidly investing in this area
730.49 -> to keep up with the growth of machine learning sophistication,
733.72 -> introducing new chips and instances
736.06 -> that help our customers keep the cost of training and inference down
740.43 -> while speeding up their innovation.
743.48 -> The latest addition to our portfolio to help builders train faster
748.07 -> and more cost effectively is the P4d instances,
751.35 -> which provide the highest performance for ML training in the cloud.
755.38 -> They feature the latest NVIDIA A100 GPUs
759.36 -> and first-in-the-cloud 400 gigabit per second networking.
763.87 -> For Inference we launched AWS
766.27 -> Inferentia-based EC2 Inf1 instances.
770.58 -> They provide the lowest cost per inference in the cloud,
774.33 -> up to 45% lower cost or 30%
778.04 -> higher throughput than comparable GPU based instances.
782.11 -> After migrating the vast majority of inferences
784.7 -> to Inf1 the Amazon Alexa team saw 25% lower end-to-end latency
790.75 -> for their text-to-speech workloads. And customers such as Snap,
795.48 -> Finra, Autodesk and Conde Nast use Inf1 instances
799.81 -> to get high performance and low cost ML inference.
804.1 -> Conde Nast, for instance, observed a 72% reduction in cost
809.24 -> compared to the previously deployed GPU instances.
813.24 -> Together this powerful hardware and optimized frameworks
817.35 -> provide firm foundations for innovation in machine learning.
822.52 -> For training last week, Andy announced two new efforts.
827.84 -> The first is Habana-based Amazon EC2 instances.
831.77 -> Habana Gaudi accelerators from Intel offer 40%
835.76 -> better price performance over current
838.46 -> GPU-based EC2 instances for training deep learning workloads;
842.62 -> they will be available in the first half of 2021.
846.64 -> The second is AWS Trainium,
850.14 -> a machine learning training chip custom
852.54 -> designed by AWS for the most cost effective training in the cloud.
857.22 -> Coming in 2021.
859.58 -> We are building Trainium specifically to provide
863.25 -> the best price performance for training machine
865.95 -> learning workloads in the cloud.
869.37 -> Now, our customers tell us they need more than just the best hardware
873.73 -> to train large models.
875.81 -> For example, let's take a look at two deep learning models
879.76 -> that are highly popular.
882.88 -> Mask-RCNN is a state-of-the-art computer vision model
886.67 -> used by our customers for things like autonomous driving,
889.93 -> it requires a significant amount of training data.
893.43 -> Similarly, T5 is a state-of-the-art natural language model
898.06 -> with 3 billion parameters.
900.65 -> To speed up training for both of these we can use distributed training.
906.1 -> To speed our training times for models
908.29 -> with large training data sets, like Mask-RCNN,
912.2 -> you can split your data sets across multiple GPUs,
915.59 -> commonly known as data parallelism.
919.33 -> When training large models like T5,
922.19 -> which are too big for even the biggest, most powerful GPUs,
926.3 -> you can write code to split it across multiple GPUs,
930.03 -> commonly known as model parallelism.
933.07 -> But doing this is difficult and requires
936.06 -> a high level of expertise and experimentation,
939.08 -> which can take weeks even for expert practitioners.
943.6 -> So we asked ourselves, how can we make it easier for our customers
949.65 -> to do distributed training? Do it well and do it really fast?
956.6 -> Today, I'm excited to announce with only a few lines
959.92 -> of additional code in your PyTorch and TensorFlow training scripts,
964.29 -> Amazon SageMaker will automatically apply data parallelism
968.78 -> or model parallelism for you, allowing you to train models faster.
973.594 -> [applause]
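The data parallelism idea described in the keynote, splitting a training batch across workers and averaging their gradients, can be sketched conceptually. The following is a minimal, hypothetical NumPy illustration of gradient averaging for a toy linear model; it is not SageMaker's actual distributed training API, and the function names are invented for illustration.

```python
import numpy as np

def data_parallel_step(w, X, y, n_workers, lr=0.1):
    """One conceptual data-parallel SGD step for linear regression:
    split the batch across workers, compute each worker's gradient
    locally, then average them (an "all-reduce") before updating."""
    X_shards = np.array_split(X, n_workers)
    y_shards = np.array_split(y, n_workers)
    grads = []
    for Xs, ys in zip(X_shards, y_shards):
        # Each "worker" computes the MSE-loss gradient on its own shard.
        pred = Xs @ w
        grads.append(2 * Xs.T @ (pred - ys) / len(ys))
    g = np.mean(grads, axis=0)  # all-reduce: average the shard gradients
    return w - lr * g

# Toy data: recover a known weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = np.zeros(3)
for _ in range(200):
    w = data_parallel_step(w, X, y, n_workers=4)
print(np.round(w, 2))  # converges toward true_w
```

In a real framework the all-reduce step runs over the network between GPUs; model parallelism, by contrast, splits the parameters themselves across devices rather than the data.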
980.2 -> Data parallelism with Amazon SageMaker
982.68 -> allows you to train 40% faster. Similarly, with model parallelism,
988.08 -> what used to take a dedicated research lab weeks of effort
991.98 -> and hand tuning training code now takes only a few hours.
996.51 -> So what does this mean for our customers?
1000.21 -> Using these engines we challenged our teams
1002.79 -> that work on TensorFlow and PyTorch to train Mask-RCNN
1006.7 -> and T5 as fast as possible. Here is what happened.
1012.32 -> Last year we shared that AWS had the fastest training for Mask-RCNN,
1017.34 -> at 28 minutes on TensorFlow and 27 minutes on PyTorch.
1022.15 -> With our optimization we cut that training time
1025.27 -> by approximately 75% to six minutes and 13 seconds for TensorFlow
1031.33 -> and six minutes and 45 seconds for PyTorch.
1035.17 -> Our TensorFlow training time is 23%
1038.4 -> faster than the previous fastest published record
1041.47 -> training time held by our friends in Mountain View.
1045.71 -> And with our optimizations such as model parallelism on T5
1050.3 -> we went from development to fully trained model on PyTorch
1053.98 -> in less than six days,
1056.07 -> it is the fastest published training time for this model.
1059.86 -> Previously, this would have taken weeks of developer time
1064.27 -> to find the optimal way to split the model.
1067.98 -> We are very excited about the innovations
1069.99 -> we are bringing to builders in this area
1072.72 -> but deep learning is still the domain of expert practitioners
1076.9 -> and is simply too hard for most people to do.
1081.13 -> That leads to my second tenet.
1084.86 -> For any technology to have significant impact, builders
1088.75 -> need to be given the shortest possible path to success.
1093.2 -> Having the tools for your builders to be able to test
1096.8 -> and explore their ideas quickly without barriers
1099.98 -> is a significant accelerator to your business.
1104.18 -> Historically, machine learning development
1106.48 -> was a complex and costly process. There are barriers to adoption
1110.44 -> at each step of the ML development workflow,
1113.26 -> right from collecting and preparing data,
1115.62 -> which is time consuming and undifferentiated.
1118.42 -> Then choosing the right algorithm,
1120.28 -> which is often done by trial and error,
1122.96 -> leading to lengthy training times which leads to higher costs.
1126.63 -> Then there is model tuning, which can be a very long cycle
1130.12 -> and require adjusting thousands of different combinations.
1134 -> Once you have deployed a model you must monitor it
1137.34 -> and then scale and manage it in production.
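The tuning step described above, trying thousands of hyperparameter combinations, is often automated with random search. Below is a minimal, self-contained sketch of that idea; the objective function is a hypothetical stand-in for a real training job, and this is not SageMaker's automatic model tuning API.

```python
import random

def train_and_score(lr, batch_size, depth):
    """Stand-in for a real training job returning a validation score.
    (Hypothetical toy objective; a real job would train a model.)"""
    return -(lr - 0.01) ** 2 - 0.0001 * abs(batch_size - 64) - 0.01 * abs(depth - 6)

def random_search(n_trials, seed=0):
    """Sample hyperparameter combinations at random and keep the best,
    instead of exhaustively sweeping every grid point."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "lr": 10 ** rng.uniform(-4, -1),           # log-uniform learning rate
            "batch_size": rng.choice([16, 32, 64, 128]),
            "depth": rng.randint(2, 10),
        }
        score = train_and_score(**params)
        if score > best_score:
            best, best_score = params, score
    return best, best_score

best_params, best_score = random_search(n_trials=50)
print(best_params)
```

Managed tuning services layer smarter strategies (e.g. Bayesian optimization) on top of the same loop, running trials in parallel instead of sequentially.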
1140.99 -> To make things more complicated many of the tools
1144.01 -> developers take for granted
1145.89 -> when building traditional software such as debuggers,
1149.14 -> project management, collaboration and so forth,
1152.21 -> are disconnected when it comes to machine learning development.
1157.45 -> To address these barriers in 2017 we launched Amazon SageMaker.
1162.79 -> We built Amazon SageMaker from the ground
1165.32 -> up to provide every developer and data scientist
1168.53 -> with the ability to build, train and deploy ML models quickly
1172.52 -> and at lower costs by providing the tools
1175.23 -> required for every step of the ML development lifecycle
1178.9 -> in one integrated fully managed service.
1182.31 -> In fact, we have launched more than 50 SageMaker capabilities
1186.58 -> in the past year alone,
1188.25 -> all aimed at making this process easier for our customers.
1193.82 -> The customer response to what we are building
1195.93 -> has been incredible making Amazon SageMaker
1199.19 -> one of the fastest growing services in AWS history.
1204.89 -> And tens of thousands of customers are using Amazon SageMaker today.
1209.66 -> These are customers from virtually every industry
1212.52 -> including financial services, healthcare, media, sports,
1217.09 -> retail, automotive and manufacturing.
1220.49 -> These customers are seeing significant results
1223.56 -> from standardizing their ML workloads on SageMaker.
1227.33 -> Let's take a look at some of them.
1230.73 -> Lyft's autonomous vehicle division,
1232.8 -> Level 5, reduced model training time from days to hours.
1237.35 -> T-Mobile saved data scientists significant time
1240.33 -> labeling thousands upon thousands of customer messages
1243.84 -> to improve customer service by using SageMaker Ground Truth.
1247.84 -> Vanguard deploys workloads up to 20 times faster using SageMaker.
1253.24 -> iFood, the leader in online food delivery in Latin America,
1257.55 -> uses Amazon SageMaker to optimize delivery routes
1261.45 -> to decrease their distance
1263.35 -> travelled by their delivery partners by 12 percent.
1267.65 -> At the scale at which iFood operates, 12% is a big deal.
1271.86 -> And ADP reduced time to deploy machine
1274.72 -> learning models from two weeks to just one day.
1279.72 -> As you can see so many customers are able to innovate more quickly
1284.15 -> by using SageMaker. Another customer that has done
1287.97 -> some really fascinating things with machine learning is the NFL.
1292.35 -> We started working with the NFL to create Next Gen Stats
1296 -> as a new way to engage fans.
1298.34 -> And we have expanded that work recently to the Player
1301.41 -> Health and Safety initiative. To talk more about this
1304.98 -> and how the NFL is expanding its use of machine learning with SageMaker,
1309.8 -> I'd like to introduce Jennifer Langton of the NFL.
1313.867 -> [applause]
1321.551 -> My name is Jennifer Langton.
1323.1 -> I'm Senior Vice President of Health and Innovation
1325.56 -> for the National Football League. While sports is my profession,
1329.41 -> it's always been a central part of my life.
1331.95 -> In my second year as a college athlete a knee injury
1335.13 -> took me off the field
1336.73 -> and what was at that time one of the greatest challenges I had
1339.96 -> ever faced became inspiration for my career.
1343.52 -> I know personally the impact an injury
1345.79 -> can make on an athlete's life
1347.72 -> and also what an impact technology and innovation
1350.67 -> can have in treating injuries and preventing them before they happen.
1355.66 -> In our work at the NFL our highest priority
1358.76 -> is the health and safety of our players.
1361.29 -> We leverage data and innovation in order to protect our players
1366 -> and make our game safer.
1368.66 -> The NFL has used AWS as its official cloud computing and machine
1373.6 -> learning provider for the NFL Next Gen Stats platform since 2017.
1379.38 -> Next Gen Stats provides real-time location data, speed and acceleration
1384.18 -> for every player during every play on every inch of our fields.
1389.06 -> Powered by AWS, Next Gen Stats enhances the fan experience.
1394 -> At re:Invent last year we built on that successful platform
1397.75 -> with the announcement of a new expanded partnership,
1401.26 -> a partnership that will pursue an audacious goal,
1404.97 -> making the sport of football and ultimately all sports safer
1409.29 -> for athletes who play them.
1411.76 -> We will combine unique data sets of human performance
1414.88 -> and football information including hours of video with AWS’s
1419.8 -> strong culture of technology innovation,
1422.44 -> to develop a more profound understanding of our game
1425.74 -> and human performance than has ever been done before.
1430.84 -> Our goal is to improve player safety
1433.11 -> by eventually being able to predict and therefore prevent injury.
1437.26 -> AWS’s AI and machine learning services
1440.79 -> combined with the NFL’s data
1443.09 -> will speed an entire generation of new insights into player injuries,
1448.23 -> game rules, equipment, rehabilitation and recovery.
1453.09 -> But before we talk more about where we're going together,
1456.7 -> let me first share where we've been in our innovation journey
1460.06 -> as a league,
1461.18 -> showing just how much we've accomplished
1463.78 -> in a short amount of time.
1465.97 -> Over the past six years, biomechanical engineers
1469.48 -> jointly appointed by the NFL and the NFL Players Association
1473.57 -> have analyzed on field injuries and developed laboratory
1476.85 -> tests for helmets that represent the impacts
1479.82 -> which caused those injuries on the field.
1482.52 -> This work has been central in informing
1485.46 -> everything from our rules changes to improve protective equipment,
1489.89 -> to centre pieces of our efforts to reduce injuries,
1493.4 -> specifically concussions.
1495.5 -> Using video of head impacts, our biomechanical engineers
1499.72 -> developed a test that accounts for hundreds of different variables:
1503.63 -> speed, direction,
1505.5 -> who makes contact with whom, play type and impact, among others.
1510.03 -> We use it to test the performance of the helmets NFL players wear
1514.16 -> and then we create a simple color coded chart
1517.99 -> with the best performing helmets in the darkest green
1521.21 -> and the worst, which are prohibited, in red.
1524.27 -> This has led to a tremendous behavioral change
1527.34 -> among NFL players over the last five seasons,
1531.05 -> we've gone from having about one third of players
1534.08 -> in top performing helmets to nearly 100%.
1538.08 -> Moving players into better performing equipment,
1541.07 -> encouraging safer tackling techniques and rules changes.
1545.07 -> All three underpinned by data and innovation
1547.96 -> together have led to significant progress
1550.71 -> in keeping our players safe on the field.
1553.5 -> As a result of this three-pronged injury reduction
1556.53 -> plan we saw a 24% drop in reported concussions
1561.36 -> during the 2018 season, and in the 2019 season
1565.7 -> our reported concussions remained at that lower rate.
1569.52 -> This validated our intervention.
1572.53 -> That's what we mean when we talk about our key drivers,
1575.42 -> data and innovation, that have evolved the game
1578.31 -> and will continue to evolve the game, and AWS is helping us to do that.
1585.66 -> As a part of our work together we are developing the digital athlete,
1590.21 -> a computer simulation model of a football player
1593.12 -> that can be used to replicate infinite scenarios
1596.03 -> within our game environment,
1597.87 -> including variations by position and even environmental factors.
1602.84 -> By simulating different situations within a game environment our goal
1607.52 -> is to better foster an understanding
1609.85 -> of how to treat and rehabilitate injuries
1612.27 -> in the near term and eventually predict
1614.93 -> and prevent injuries in the future.
1618.11 -> Leveraging video and Next Gen Stats data
1621.67 -> we together are doing something that has never been done in football
1626.01 -> before: developing computer vision models
1629.4 -> that identify the forces that cause concussions among other injuries.
1634.38 -> Using Amazon SageMaker we are in the early phases
1637.91 -> of training deep learning models
1639.882 -> to identify and track a player on the field,
1643.27 -> an important first step as we train the system to detect,
1647.48 -> classify and identify injury significant events and collisions.
1653.44 -> In the case of our helmet example
1655.69 -> the volume of new data the system generates
1658.54 -> and the speed with which we can incorporate
1661.29 -> that new data into our helmet testing and analysis
1664.24 -> could exponentially expand our ability to rank, develop
1668.35 -> and ultimately encourage player adoption
1670.96 -> of better performing helmets.
1673.63 -> And over time the techniques developed to detect
1676.57 -> and prevent concussions will also be extended
1679.86 -> to reduce a wide range of injuries
1682.01 -> including foot, ankle and knee injuries.
1684.82 -> This technology is giving us a deeper understanding of the game
1689.04 -> than ever before allowing us to reimagine the future of football.
1695.71 -> We’ve just launched a challenge to pressure test the solutions
1699.22 -> that the NFL and AWS are creating together.
1702.99 -> The crowd-sourced computer vision challenge is currently underway
1706.87 -> and allows anyone with the interest and capability
1710.39 -> to be a part of our important work.
1713.24 -> The data and insights collected through this project
1716.14 -> have the potential not only to revolutionize football
1720.02 -> but also to help address injury prevention
1722.64 -> and detection beyond the football field to society more broadly.
1727.91 -> Last year the NFL celebrated its 100th season
1731.79 -> and we look forward to the next 100 years of football.
1735.02 -> We remain committed to innovating on behalf of our players
1738.89 -> and those that come after them.
1740.93 -> I'm so proud of the work we are doing together to that end,
1745.37 -> the future of football together with AWS is very bright.
1752.044 -> [applause]
1758.67 -> Thanks, Jennifer. It really amazes me to see the impact
1762.45 -> that SageMaker can have in helping our customers to embrace machine
1766.25 -> learning as a core part of their strategy.
1769.41 -> And as it becomes easier for our customers to build,
1772.73 -> train and deploy one model they are inevitably going to do more of it.
1778.12 -> Take Intuit for example. Intuit was one of our very first
1781.86 -> SageMaker customers; they started with a machine
1784.9 -> learning model to help customers make the most of their tax deductions.
1788.92 -> And today ML has become a core part of the business
1792.5 -> touching everything from fraud detection
1795.18 -> to customer service to personalization
1797.82 -> to the development of new features within their products.
1801.52 -> Just in the last year alone they have increased the number of models
1805.22 -> deployed across the platform by over 50%.
1809.692 -> This increased use of AI and ML drove a variety of customer benefits,
1814.57 -> including saving customers 25,000 hours with self-help
1818.79 -> and cutting expert review time in half,
1821.45 -> improving customer confidence.
1823.91 -> And it's not just Intuit, we see this across many of our customers.
1828.81 -> Many of them today are looking to scale to hundreds
1832 -> or even thousands of models in production
1835.53 -> and at this scale, bottlenecks in ML development,
1838.93 -> whether in data prep, training,
1841.08 -> or elsewhere, become amplified and new challenges arise.
1846.36 -> So we needed to build new tools across the entirety of the machine
1850.84 -> learning workflow to help with not just one model
1854.37 -> but with hundreds or even thousands of models.
1857.83 -> So I'm going to walk through these tools today,
1860.9 -> some of which we launched last week and some that are new today.
1865.44 -> Let's start with data prep, the first step of building a machine
1868.86 -> learning model. It is a time consuming and involved process
1872.71 -> that is largely undifferentiated, and we hear from our customers
1876.93 -> that it constitutes up to 80% of their time spent in ML development.
1884.31 -> Last week, we announced Amazon SageMaker
1887.07 -> Data Wrangler, a game changing way to do
1889.84 -> ML data prep much faster
1891.89 -> through a visual interface in SageMaker Studio.
1896.93 -> Typically, to get data ready for a machine
1899.43 -> learning model you need to collect data
1901.3 -> in various formats from different sources,
1904.06 -> which may require you to create complex queries.
1907.43 -> With Data Wrangler you can quickly select data
1909.91 -> from multiple data sources, such as Athena, Redshift,
1913.45 -> Lake Formation, S3 and SageMaker Feature Store.
1918.16 -> Previously, you would then need to write code to transform your data.
1922.27 -> But with Data Wrangler we provide 300
1924.9 -> plus pre-configured data transformations,
1927.73 -> so you can transform your data without writing a single line of code.
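The no-code transformation idea described here can be sketched in miniature. The sketch below is purely illustrative: the transform names, the registry, and the `apply_workflow` runner are invented for this example and are not Data Wrangler's actual catalog or API. It shows how a recorded list of named transform steps can be replayed over tabular rows without the user writing transformation code.

```python
# Toy registry of pre-configured, named data transforms (illustrative only).

def fill_missing(rows, column, default):
    """Replace missing (None) values in a column with a default."""
    return [{**r, column: r[column] if r[column] is not None else default}
            for r in rows]

def min_max_scale(rows, column):
    """Scale a numeric column to the [0, 1] range."""
    values = [r[column] for r in rows]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1
    return [{**r, column: (r[column] - lo) / span} for r in rows]

TRANSFORMS = {"fill_missing": fill_missing, "min_max_scale": min_max_scale}

def apply_workflow(rows, steps):
    """Replay a declarative list of (name, kwargs) steps, as a visual
    interface might record them when the user clicks through a workflow."""
    for name, kwargs in steps:
        rows = TRANSFORMS[name](rows, **kwargs)
    return rows

raw = [{"tempo": 120, "rating": 4},
       {"tempo": None, "rating": 2},
       {"tempo": 180, "rating": 5}]
prepared = apply_workflow(raw, [
    ("fill_missing", {"column": "tempo", "default": 120}),
    ("min_max_scale", {"column": "tempo"}),
])
```

The point of the declarative step list is that the same workflow can later be exported and re-run unchanged, which is what the one-click export to a notebook or script gives you.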
1933.13 -> Next, once the data is transformed it's easy to clean
1937.69 -> and explore the data, through data visualizations in SageMaker Studio.
1942.84 -> These visuals allow you to quickly identify inconsistencies
1946.87 -> in the data prep workflow and diagnose issues before models
1950.36 -> are deployed in production.
1952.96 -> Finally, rather than having to engage an IT ops team to get your data
1957.42 -> ready for production,
1958.99 -> you simply export your data prep workflow to a notebook
1962.7 -> or a code script with a single click.
1967.75 -> Now, not only will SageMaker Data Wrangler integrate
1971.02 -> with AWS data sources, but coming soon,
1974.41 -> you will be able to quickly select and import data directly into
1978.52 -> SageMaker from Snowflake, Databricks Delta Lake
1982.77 -> and MongoDB Atlas.
1985.325 -> [applause]
1991.6 -> Using SageMaker Data Wrangler with just a few clicks
1994.97 -> you can complete each step of the data prep workflow
1998.17 -> and easily transform your raw data into features.
2003.75 -> Talking about features: in machine learning, features represent
2008.24 -> relevant attributes or properties
2010.08 -> that your model uses for training or for inference
2013.82 -> where you make your predictions.
2015.94 -> Now to explain it a little bit more let's take a look at
2018.71 -> Intuit, for example, in their TurboTax contextual help model,
2023.14 -> which tries to provide
2024.37 -> the most relevant possible tax guidance to a tax filer.
2028.34 -> The features the model might use can include information
2031.69 -> like what step you're on in the tax
2033.64 -> filing process or your prior year's tax returns.
2037.71 -> Intuit uses features in large batches to train its model, and at inference
2044.29 -> they need to be available in real time to make fast predictions.
2049.45 -> Previously, Intuit was storing features for batch training
2052.74 -> in one data store and the real time features in another.
2056.55 -> This means it required months of coding and deep expertise
2060.64 -> to keep these features consistent, so Intuit came to us with this challenge
2066.16 -> and together we worked backwards from the problem to build a feature store
2070.59 -> which served as a training repository for features
2073.64 -> where latency isn't as important
2076.25 -> and also provided access to the exact same features at runtime
2079.89 -> where latency is important.
2082.06 -> It also enabled feature discoverability
2084.92 -> and reuse accelerating the model development lifecycle
2088.78 -> and improving data worker productivity.
2092.68 -> To solve this problem for all of our SageMaker customers, last week
2096.51 -> we launched SageMaker Feature
2098.2 -> Store so you can securely store, discover and share features,
2102.8 -> so you don't need to recreate the same features for different
2105.92 -> ML applications.
2107.8 -> This saves months of development effort.
2111.07 -> Now Feature Store serves features in large batches for training
2114.56 -> and also serves features with single digit
2116.96 -> millisecond latency for inference. And it does all the hard work
2121.75 -> of keeping these features in sync and consistent.
2125.45 -> You can use visuals to search your features and share and collaborate
2128.98 -> with other members in your organization.
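The two access patterns just described, bulk reads for training and fast single-record lookups for inference served from one consistent copy, can be sketched with a toy in-memory store. The class and method names below are invented for illustration; this is not the SageMaker Feature Store SDK.

```python
# A toy feature store: one write path, two read paths that can never drift
# apart because they are served from the same stored copy.

class FeatureStore:
    def __init__(self):
        self._groups = {}  # feature group name -> {record id -> feature dict}

    def ingest(self, group, record_id, features):
        """Write features once; training and inference both read this copy."""
        self._groups.setdefault(group, {})[record_id] = dict(features)

    def get_training_batch(self, group):
        """Offline path: every record in the group, for batch training."""
        return list(self._groups.get(group, {}).values())

    def get_online(self, group, record_id):
        """Online path: fast lookup of a single record at inference time."""
        return self._groups[group][record_id]

store = FeatureStore()
store.ingest("songs", "abba-dancing-queen", {"bpm": 101, "danceability": 0.97})
store.ingest("songs", "misc-blues-track", {"bpm": 80, "danceability": 0.41})

batch = store.get_training_batch("songs")          # for training
one = store.get_online("songs", "abba-dancing-queen")  # for inference
```

The design point the transcript makes is exactly this single-copy property: because both paths read the same stored features, there is no separate "real-time" store to keep in sync by hand.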
2132.86 -> Now, as you can see, SageMaker Data Wrangler and Feature Store
2137.56 -> make it easier to aggregate data and prepare and store features.
2142.21 -> This is an important part of the machine learning process
2145.71 -> because your model’s predictions are only as good as the data
2149.7 -> and features they use.
2152.03 -> That's also why we need to better understand
2155.17 -> the bias in the data our models use
2158.04 -> and why our models make a certain prediction.
2161.63 -> But today, it's very hard to get this visibility.
2164.94 -> It requires a lot of manual effort
2167.45 -> and stitching together a bunch of open source solutions.
2170.79 -> So our customers asked us to make this process easy for them.
2176.95 -> Today, we are launching Amazon SageMaker
2179.27 -> Clarify, which helps improve your ML models
2182.87 -> by detecting potential bias across the machine learning workflow.
2187.637 -> [applause]
2194.12 -> To talk more about the work we are doing here
2196.32 -> I'd like to welcome Dr. Nashlie Sephus, one of our leaders at AWS
2200.92 -> focused on algorithmic bias and fairness.
2205.005 -> [applause]
2211.89 -> Thank you, Swami. I have been immersed in machine
2215.06 -> learning technologies as both a scientist and a consumer
2219.62 -> and I have developed a personal passion
2222.63 -> for mitigating bias in technology and identifying potential blind spots.
2227.7 -> As one of the scientists working on bias and fairness at Amazon,
2232.12 -> I see firsthand the challenges in doing this
2235.89 -> and the increasing need for us to get it right.
2240.1 -> Mitigating model bias and understanding
2242.37 -> why a model is making a prediction helps data
2245.61 -> scientists create better machine learning models.
2249.58 -> And it helps the consumers of machine learning predictions
2252.84 -> make better decisions based on that information.
2257.04 -> Bias can show up at every stage of the machine learning workflow
2261.84 -> so even with the best possible intentions
2264.8 -> and a whole lot of expertise, removing bias in machine
2268.53 -> learning models is difficult.
2272.64 -> Bias could come in at the very beginning from the training data
2276.21 -> itself when it's not representative.
2279.78 -> For example, not having enough dramas in your training data for a TV
2284.82 -> show recommendation model may bias the outcome.
2289.67 -> You could also introduce bias through an imbalance in training data labels
2293.78 -> and by selecting a subset of that training data.
2297.98 -> And then you can also have bias through model drift,
2302.13 -> where your model is making predictions using data
2305.93 -> which is sufficiently different from the data on which it is trained.
2310.85 -> For example, a substantial change in mortgage rates
2315.23 -> could cause a home loan model to become biased.
2319.5 -> Today, the process to get insights into data bias across the machine
2324.94 -> learning workflow is tedious for both data scientists
2329.66 -> and machine learning developers.
2332.23 -> I have spent my career working on this problem, and it's hard to do.
2338.17 -> I'm excited to present a product feature that I've been a part of
2342.18 -> from day one of its inception, SageMaker Clarify.
2347.28 -> SageMaker Clarify provides an end to end solution
2350.79 -> to help you mitigate bias in machine learning and provide transparency
2355.1 -> across the entire machine learning workflow.
2357.83 -> And it all works within SageMaker Studio and integrates
2362.16 -> with other SageMaker tools throughout the process of building a model.
2366.7 -> Let's take a look at how it works.
2368.53 -> To start, during your initial data preparation in SageMaker
2372.34 -> Data Wrangler, SageMaker
2374.34 -> Clarify enables you to specify attributes of interest,
2378.17 -> such as location or occupation
2381.26 -> and then it runs a set of algorithms to detect the presence of bias.
2386 -> SageMaker Clarify then provides a visual report
2389.78 -> with a description of the sources and severity of possible bias
2394.29 -> so that you can take steps to mitigate.
2397.1 -> After you've trained the model on this data,
2399.73 -> Clarify will check the trained models for imbalances,
2403.53 -> such as more frequent denial of services
2406.44 -> to one group over another and provide you with a visual report
2412.31 -> on the different types of bias for each attribute.
2416.5 -> With this information you can then
2418.31 -> go back and relabel data to correct any imbalances.
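One of the simplest pre-training bias checks of the kind described here is the gap in positive-label proportions between two groups. The sketch below is a back-of-the-envelope version of that idea; the dataset, the 0.2 threshold, and the function names are all made up for illustration and are not Clarify's metric definitions.

```python
# Positive-rate gap between two groups of training examples: a value near 0
# suggests balanced labels; a large gap is a signal to inspect the data.

def positive_rate(rows, group):
    """Fraction of positive labels among rows belonging to one group."""
    members = [r for r in rows if r["genre"] == group]
    return sum(r["recommended"] for r in members) / len(members)

def label_imbalance(rows, group_a, group_b):
    """Difference in positive-label proportions between two groups."""
    return positive_rate(rows, group_a) - positive_rate(rows, group_b)

# Hypothetical TV-show recommendation labels, echoing the drama example above.
data = [
    {"genre": "comedy", "recommended": 1},
    {"genre": "comedy", "recommended": 1},
    {"genre": "comedy", "recommended": 1},
    {"genre": "comedy", "recommended": 0},
    {"genre": "drama", "recommended": 1},
    {"genre": "drama", "recommended": 0},
    {"genre": "drama", "recommended": 0},
    {"genre": "drama", "recommended": 0},
]
gap = label_imbalance(data, "comedy", "drama")  # 0.75 - 0.25
flag_for_review = abs(gap) > 0.2  # arbitrary illustrative threshold
```

A report built from metrics like this is what lets you go back, relabel, or rebalance before training rather than discovering the skew in production.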
2424.14 -> Once your model is deployed you can get a detailed report
2428.28 -> showing the importance of each model input for a specific prediction.
2433.59 -> This can help the consumers of your machine
2435.9 -> learning model better understand
2438.11 -> why a model is making a certain prediction.
2441.29 -> For instance, a business analyst wanting to understand
2445.31 -> what is driving a demand forecast prediction.
2448.96 -> Lastly, while your initial data or model may not have been
2452.81 -> biased, changes in the real world may cause bias to develop over time.
2459.56 -> SageMaker Clarify is packaged with SageMaker Model Monitor
2464.19 -> so that you can get alerts to notify you
2467.16 -> if your model begins to develop bias or if changes in real world data
2473.19 -> can cause your model to give different weights to model inputs.
2478.44 -> This way, you can retrain your model on a new data set.
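The monitoring loop described here, comparing live inputs against the training baseline and alerting on divergence, can be sketched with a deliberately simple test. The mean-shift check and the 25% threshold below are simplifications invented for illustration; real monitors use richer distribution-distance measures.

```python
# Minimal drift check: alert when the live input distribution moves too far
# from the training baseline, echoing the mortgage-rate example above.

def mean(xs):
    return sum(xs) / len(xs)

def drift_alert(training, live, threshold=0.25):
    """Return (alert, shift) where shift is the relative move of the live
    mean away from the training mean; alert fires above the threshold."""
    base = mean(training)
    shift = abs(mean(live) - base) / abs(base)
    return shift > threshold, shift

# Rates the model was trained on vs. what it now sees in production.
trained_on = [3.1, 3.3, 2.9, 3.0, 3.2]
observed = [5.1, 5.4, 4.9, 5.2, 5.0]
alert, shift = drift_alert(trained_on, observed)
# When alert is True, the remedy is the one the transcript names:
# retrain the model on a data set that reflects the new conditions.
```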
2483.43 -> Reducing bias will continue to be a challenge in machine
2488.01 -> learning,
2488.01 -> but SageMaker Clarify provides tools to assist in these efforts.
2493.77 -> Thank you.
2495.622 -> [applause]
2501.59 -> Thank you, Nashlie.
2503.32 -> We are really excited to bring this important feature to our customers.
2507.7 -> As customers scale machine learning, managing time
2511.29 -> and cost is critical. And while data prep consumes
2514.96 -> a large part of the time to build machine
2517.02 -> learning models, training a machine learning model on the data
2520.69 -> can be a costly process at scale.
2523.57 -> So data scientists and machine learning practitioners
2526.76 -> want to naturally maximize their resources.
2529.88 -> Now if you look at training itself it has different phases like data
2534.27 -> pre-processing, training and finalization.
2537.58 -> And one potential bottleneck in optimizing
2540.29 -> your resources can be when your data pre-processing
2543.79 -> ends up being compute intensive and your CPU core is busy,
2547.66 -> while the GPU, which is used for training phase
2550.7 -> and is the most expensive resource in your system,
2553.72 -> sits there idling, underutilized.
2556.92 -> But today, there isn't a standard way to identify these bottlenecks
2561.07 -> like there is in software development with profilers.
2564.56 -> So customers today need to cobble together a diverse set of open tools,
2569.39 -> many of them unique to the ML framework they're using.
2574.29 -> To address this, last year we started with introducing SageMaker
2578.63 -> Debugger, which automatically identifies complex issues
2583.18 -> developing in ML training jobs. Now customers wanted to use Debugger
2588.7 -> to get more detailed profile information
2591.89 -> to optimize their resources.
2595.87 -> Today, we are adding a new capability to SageMaker Debugger
2599.69 -> to provide deep profiling for neural network
2602.56 -> training to help identify bottlenecks
2605.39 -> and maximize resource utilization for training.
2609.466 -> [applause]
2616.589 -> With deep profiling for debugger
2618.65 -> you can visualize different system resources
2621.43 -> including GPU, CPU, network, I/O, and memory, within SageMaker Studio.
2627.71 -> With this information you can analyze your utilization and make changes
2632.34 -> based on the recommendations from the profiler or on your own.
2636.74 -> You can profile your training runs at any point in the training workflow.
2641.15 -> SageMaker Debugger saves developers valuable time while reducing costs.
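The bottleneck described earlier, a busy CPU preprocessing stage leaving an expensive GPU idle, is easy to spot once per-step timings exist. The sketch below is an invented, simplified version of that analysis (the field names, threshold, and recommendation text are all made up), not Debugger's actual profiler output.

```python
# Estimate how much of total step time the GPU spends waiting on CPU-side
# data preprocessing, and flag it when the idle fraction is too high.

def gpu_idle_fraction(steps):
    """steps: dicts of seconds spent in CPU preprocessing and GPU compute.
    Preprocessing time is modeled as time the GPU sits idle."""
    cpu = sum(s["cpu_preprocess_s"] for s in steps)
    gpu = sum(s["gpu_compute_s"] for s in steps)
    return cpu / (cpu + gpu)

def recommend(steps, max_idle=0.3):
    idle = gpu_idle_fraction(steps)
    if idle > max_idle:
        return idle, "bottleneck: move preprocessing off the training loop"
    return idle, "utilization looks healthy"

# Fabricated profile: each step spends 6s preparing data and 4s training.
profile = [{"cpu_preprocess_s": 6.0, "gpu_compute_s": 4.0}] * 10
idle, advice = recommend(profile)
```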
2647.71 -> Now, as you can see, ML comprises multiple steps,
2652.35 -> some of which take place in sequence and others in parallel.
2656.13 -> And there is a lot that goes on in stitching these workflows together.
2660.63 -> In traditional software development continuous integration
2664.28 -> and continuous deployment (CI/CD) pipelines
2667.7 -> are used to automate the deployment of workflows
2670.68 -> and keep them up to date, but in machine
2673.34 -> learning, CI/CD-style tools are rarely used
2677.56 -> because where they do exist, they are super hard to set up,
2681.1 -> configure and manage.
2684.61 -> To address this, last week we launched Amazon SageMaker Pipelines,
2688.81 -> the first purpose-built, easy-to-use ML CI/CD service accessible
2694.04 -> to every developer and data scientist.
2696.92 -> With just a few clicks in SageMaker Pipelines
2699.86 -> you can create an automated ML workflow
2702.23 -> that reduces months of coding to just a few hours.
2706.25 -> SageMaker Pipelines takes care of the heavy lifting involved
2709.72 -> by managing dependencies and tracking each step of the workflow.
2713.83 -> Now, pretty much anything you can do in SageMaker
2717.26 -> you can add to your workflow in Pipelines.
2719.96 -> Moreover, these workflows can be shared
2722.72 -> and re-used within your organization as well.
2726.37 -> Templates for model building and model deployment pipelines
2730.06 -> help you get started quickly.
2732.53 -> And once created, these workflows can be easily visualized
2736.34 -> and managed in SageMaker Studio
2738.87 -> so you can even compare your model performance.
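What a pipeline service manages under the hood, steps with declared dependencies, run in order and replayable on demand, can be sketched with a tiny runner. The step names echo the workflow above, but the runner itself and its API are invented for illustration and are not the SageMaker Pipelines SDK.

```python
# Bare-bones dependency-tracked workflow runner: each step runs only after
# the steps it depends on, and the whole pipeline can be replayed identically.

def run_pipeline(steps):
    """steps: {name: (list_of_dependency_names, callable)}.
    Returns the order in which steps executed."""
    done, order = set(), []
    while len(done) < len(steps):
        progressed = False
        for name, (deps, fn) in steps.items():
            if name not in done and all(d in done for d in deps):
                fn()
                done.add(name)
                order.append(name)
                progressed = True
        if not progressed:
            raise ValueError("cyclic or missing dependency")
    return order

log = []
pipeline = {
    "prepare_data": ([], lambda: log.append("prep")),
    "train":        (["prepare_data"], lambda: log.append("train")),
    "evaluate":     (["train"], lambda: log.append("eval")),
    "deploy":       (["evaluate"], lambda: log.append("deploy")),
}
order = run_pipeline(pipeline)
```

Because the pipeline is declared as data rather than hand-run, rerunning it after a data or code change is the "replay with a single click" the transcript describes.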
2742.88 -> Now, to show you how all of these new features work together,
2746.84 -> I'd like to invite Dr. Matt Wood to the stage for a demo.
2751.346 -> [music playing - applause]
2762.98 -> Thank you, Swami, and good morning.
2765.2 -> With capabilities like Data Wrangler, Feature Store, Clarify,
2769.92 -> Pipelines and new debugging and profiling features,
2773.16 -> it's never been easier to build, train and deploy machine
2777.17 -> learning models in Amazon SageMaker. By working together,
2781.07 -> these capabilities provide a way for developers and data
2783.96 -> scientists to focus on what really matters, building accurate,
2788.08 -> high quality machine learning models which improve over time
2791.55 -> without all of the undifferentiated heavy lifting.
2794.5 -> SageMaker removes the muck of building machine learning models
2798.04 -> and leaves only the diamonds. So, what goes into a great model?
2802.77 -> Let's take a look at building a model
2804.69 -> which uses track and artist information
2807.24 -> to create the perfect musical play list.
2810.99 -> First, you need data, lots of data, and lots of different types of data.
2816.12 -> The more, the merrier.
2817.83 -> SageMaker lets you connect and load your data
2820.23 -> from sources such as S3 and Redshift
2822.91 -> in just a few clicks from SageMaker Studio.
2825.76 -> SageMaker can then use this data to train a model.
2829.19 -> Models learn complex and often subtle patterns
2832.6 -> to let you map inputs to predicted outputs.
2835.82 -> So, we will need tons of metadata about the songs in our library,
2839.16 -> length, beats per minute, genre, ratings
2842.64 -> and more to use as our inputs.
2846 -> Next, we will need a strong set of features.
2849.34 -> Data in its raw form usually doesn't provide
2852.47 -> enough or optimal information to train a great model.
2856.36 -> So, to maximize the signal and reduce the noise and the data,
2860.37 -> we need to convert and transform it into features through a process
2863.76 -> known as "feature engineering".
2866.58 -> For instance, beat and genre could be combined into a more abstract
2871.33 -> or super-feature called danceability.
2874.26 -> Now, creating features can take a ton of time.
2877.23 -> Some customers estimate that it is about 80% of the time
2880.12 -> spent building machine learning models.
2882.75 -> Instead, we can use Data Wrangler to convert,
2885.98 -> transform or combine raw tabular data
2889.54 -> into features in a fraction of the time
2891.98 -> without writing a single line of code.
2895.39 -> With a single click, we can then save these features
2897.94 -> to the SageMaker Feature Store
2899.74 -> which lets us check in and check out features
2902.24 -> in a very similar way as you would with a source code repository.
2906.58 -> The service lets us create multiple versions of features
2909.9 -> and we can add descriptions and search our features
2912.58 -> which helps teams understand and reuse them for other models.
2917.15 -> You can retrieve an entire data set for training,
2920.08 -> or, once your model is deployed, retrieve individual features
2923.45 -> used in making low latency predictions.
2926.57 -> Such as predicting that I want to listen to more songs
2929.36 -> with high danceability like ABBA’s Dancing Queen.
2932.66 -> All with single digit millisecond latency.
2935.49 -> There is no need to try to recompute these features
2938.47 -> on the fly over and over again.
2940.44 -> You can just do it once in Data Wrangler
2942.47 -> and use them again and again from the Feature Store.
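The "danceability" super-feature mentioned above is a nice concrete case of feature engineering. The scoring rule below is entirely invented for illustration (the genre weights and the 120 BPM reference are assumptions, not anything from the keynote or a real model); it just shows two raw attributes being combined into one engineered value that is computed once and then reused.

```python
# Hypothetical engineered feature: combine beats per minute and genre into a
# single danceability score in [0, 1]. Weights are made up for illustration.

GENRE_DANCE_WEIGHT = {"disco": 1.0, "pop": 0.8, "blues": 0.3}

def danceability(bpm, genre):
    """Tempo closeness to ~120 BPM, scaled by how danceable the genre is."""
    tempo_fit = max(0.0, 1.0 - abs(bpm - 120) / 120)
    return round(tempo_fit * GENRE_DANCE_WEIGHT.get(genre, 0.5), 3)

dancing_queen = danceability(101, "disco")  # high score, per the ABBA example
slow_blues = danceability(60, "blues")      # low score
```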
2945.85 -> Next, great models can be used in many different situations
2950.05 -> if they are trained on a balanced set of features and data.
2953.69 -> We are going to use SageMaker Clarify
2955.94 -> to ensure that our training data is well balanced.
2959.51 -> That means that the possible values of features and labels
2963.66 -> are well represented across the data.
2966.22 -> And that the accuracy of our trained model
2968.18 -> is roughly the same across different subsets of the data,
2971.36 -> such as different musical genres.
2974.27 -> For example, if we had a preponderance of blues music
2977.22 -> in our training set,
2978.34 -> our model would probably create a lot of blues play lists.
2981.39 -> That is fine if all you want to do is listen to the blues.
2984.29 -> But our model will be even more useful
2986.59 -> if we use an evenly balanced set of features
2988.99 -> representing dozens of different genres for training.
2993.08 -> So here we can make sure that that is the case
2995.59 -> and that our model makes good predictions with good coverage
2998.4 -> across a wide range of musical genres.
3001.7 -> We can also use Clarify to inspect every single prediction
3005.34 -> to understand how each feature plays a role in that prediction.
3009.39 -> This allows us to check that our model isn't overly reliant
3012.26 -> on features which we know to be underrepresented in our data.
3016.86 -> Now, one of the great things about machine learning
3019.42 -> is that models can improve over time,
3021.75 -> not just based on new data as it becomes available,
3024.91 -> but also by incorporating the learnings
3026.85 -> we see from tools like Clarify
3029.2 -> and the new debugging and profile features
3031.53 -> to systematically identify sources of error or slowness
3035.24 -> and remove them from our model. With this approach,
3038.66 -> we can condense hundreds of thousands of hours of real world experience
3042.64 -> into just a few re-training iterations
3045.03 -> and our models can improve far more quickly.
3048.55 -> And since we often want to continually improve our model
3051.3 -> by rebuilding it over and over again,
3053.68 -> we can take advantage of the automation in Pipelines,
3056.66 -> the new continuous integration and continuous
3058.89 -> deployment capability in SageMaker,
3061.32 -> which lets us automate the entire end-to-end machine
3064.22 -> learning build process and replay it perfectly with a single click.
3068.42 -> This not only accelerates the time to our first model,
3071.73 -> but it decreases the time between model improvements
3074.53 -> and gets us to better models more quickly.
3077.5 -> So, in SageMaker, we have made the tools which every developer
3081.13 -> is familiar with, visual editors, debuggers, profilers and CI/CD,
3086.27 -> all wrapped into an integrated development environment
3088.73 -> available for machine learning.
3090.57 -> And we can't wait to see what you'll use SageMaker for next.
3094.07 -> With that, I'll hand it back to Swami. Thanks a lot.
3098.08 -> [music playing - applause]
3107.71 -> Thanks, Dr. Wood.
3109.77 -> Another place where we are seeing a lot more machine
3112.51 -> learning happening is the edge.
3115 -> More and more applications such as industrial robots,
3118.15 -> autonomous vehicles, and automated checkouts,
3121 -> require machine-learning models that run on smart cameras,
3124.31 -> robots, equipment and more.
3126.98 -> However, operating ML models on Edge devices is challenging.
3131.69 -> This is because of limited compute, memory, and connectivity.
3135.99 -> It also takes months of hand-tuning each model to optimize performance.
3141.53 -> In addition, many ML applications
3143.63 -> require multiple models to run on a single device.
3147.23 -> For example, a self-navigating robot
3149.81 -> needs an object detection model to detect obstacles,
3153.12 -> a classification model to recognize them
3155.95 -> and a planning model to determine the appropriate actions.
3160.03 -> Now, once a model is deployed in production,
3162.77 -> your model quality may decay because real world data
3165.8 -> used to make predictions often differs from the data
3168.87 -> used to train the model, which leads to inaccurate predictions.
3173.67 -> In 2018, we announced Amazon SageMaker Neo
3177.74 -> to make it easier to deploy models on Edge devices.
3182.08 -> While Neo addresses model deployment for a single model,
3186.35 -> developers still had to deal with managing models
3189.2 -> across fleets of edge devices
3191.29 -> and also build mechanisms to monitor their performance and accuracy.
3196.4 -> This became harder for our customers as their ML edge adoption grew,
3201.62 -> and that is why we are investing more in this area
3205.02 -> to bring the full power of SageMaker to edge devices.
3210.04 -> Today we are excited to announce Amazon SageMaker Edge Manager.
3215.04 -> It provides model management for edge devices
3217.81 -> so you can prepare, run, monitor, and update machine
3221.65 -> learning models across fleets of edge devices.
3225.416 -> [applause]
3232.525 -> SageMaker Edge Manager applies specific performance optimizations
3237.4 -> that can make your model run up to 25 times faster
3241.14 -> compared to hand-tuning. You can easily integrate Edge Manager
3245.58 -> to your existing edge apps through APIs
3248.46 -> and common programming languages.
3250.83 -> And you can understand the performance of models
3253 -> running on each device across your fleet
3255.68 -> through a single dashboard.
3257.9 -> Finally, Edge Manager continuously monitors
3261.14 -> each model instance across your device fleet
3264.18 -> to detect when model quality declines.
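Fleet-level monitoring of the kind described here boils down to aggregating a quality metric per device and surfacing the outliers. The sketch below invents everything concrete: the device names, the accuracy floor, and the function shape are illustrative, not Edge Manager's API or dashboard.

```python
# Flag devices in a fleet whose deployed model quality has degraded below a
# floor, so those devices can be targeted for a model update.

def degraded_devices(fleet_metrics, min_accuracy=0.9):
    """fleet_metrics: {device id: recent accuracy samples}. Returns the
    devices whose average accuracy fell below the floor, with that average."""
    flagged = {}
    for device, samples in fleet_metrics.items():
        avg = sum(samples) / len(samples)
        if avg < min_accuracy:
            flagged[device] = round(avg, 3)
    return flagged

fleet = {
    "camera-001": [0.96, 0.95, 0.97],
    "camera-002": [0.93, 0.88, 0.79],  # quality drifting downward
    "robot-007":  [0.99, 0.98, 0.99],
}
to_update = degraded_devices(fleet)
```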
3270 -> With these services, we are delivering the most complete
3273.8 -> end-to-end solution for ML development
3276.77 -> with Amazon SageMaker,
3278.57 -> all integrated in one pane of glass with SageMaker Studio.
3286.44 -> While tools like SageMaker make machine learning model building
3290.32 -> and scaling more accessible to data scientists
3293.21 -> and developers with machine learning skills,
3296.81 -> there are many more people who either lack the skills
3300 -> or the time to build models,
3302 -> but they can benefit from the insights
3304.17 -> that running machine learning can provide.
3307.14 -> As you all know, good ideas can come from anywhere in the organization,
3311.53 -> so we need to invest in making machine
3313.92 -> learning more available to more builders.
3317.85 -> And one of the ways we do this today is through SageMaker Autopilot.
3324.338 -> Building machine learning models
3327.19 -> has traditionally required a binary choice.
3330.67 -> On one hand, you can manually prepare the features,
3333.42 -> select the algorithm, and optimize model parameters
3337.03 -> and have full control of your model design and understand
3341.25 -> all the thought that went into creating it.
3343.96 -> But this requires deep machine learning expertise.
3347.26 -> On the other hand, if you don't have that expertise,
3350.27 -> you could use an automated approach to model generation with AutoML.
3355.26 -> But that provides very little visibility
3358 -> into how the model was created.
3361.55 -> Last year we launched SageMaker Autopilot
3364.57 -> to address this trade off.
3366.31 -> It automatically trains and tunes the best machine
3369.08 -> learning models for classification or regression based on your data
3373.36 -> while giving you full control and visibility
3376.29 -> so you can create your first model in minutes.
3379.62 -> With Autopilot, you just need to upload the training data,
3383.15 -> it automatically transforms the data into the correct format for ML training.
3387.54 -> It then selects the best algorithm for the prediction
3389.95 -> you’re trying to make, trains up to 50 different models
3393.45 -> and then ranks them in a model leader board in SageMaker Studio
3397.4 -> so you can choose which model to use.
3400.05 -> Then you can deploy the model into production with a single click.
3403.97 -> No longer are developers left in the dark about how an AutoML model
3408.59 -> was built or the process in which it was created.
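The leaderboard step just described is, at its core, ranking automatically generated candidates by a validation metric and picking the winner. The candidate names and scores below are fabricated for illustration; this is the idea behind the leaderboard, not Autopilot's output format.

```python
# Rank candidate models best-first by a validation metric; the top entry is
# the model you would inspect and, with one click, deploy.

def leaderboard(candidates, metric="accuracy"):
    """Sort automatically generated candidate models best-first."""
    return sorted(candidates, key=lambda c: c[metric], reverse=True)

candidates = [
    {"name": "xgboost-variant-03", "accuracy": 0.91},
    {"name": "linear-baseline",    "accuracy": 0.84},
    {"name": "mlp-variant-01",     "accuracy": 0.89},
]
ranked = leaderboard(candidates)
best = ranked[0]
```

Keeping the full ranked list visible, rather than returning only the winner, is what gives the transparency the transcript contrasts with black-box AutoML.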
3413.15 -> Now, over the past year we have invested
3416.24 -> in making Autopilot even more useful, increasing its accuracy by over 20%
3421.83 -> and reducing training time by 40%.
3427.27 -> While SageMaker Autopilot makes machine
3430.15 -> learning more accessible to ML builders and developers,
3434.66 -> there is a large group of database developers and data analysts
3438.69 -> who work in databases and data warehouses
3441.78 -> that still find it too difficult
3443.82 -> and involved to extract meaningful insights from that data.
3448.32 -> While they are SQL experts, they may not know Python
3451.87 -> and are reliant on data scientists to build the models for them
3455.14 -> so that they can add intelligence
3456.78 -> to their applications to derive insights.
3459.95 -> And even when they have a model in hand,
3462.09 -> there is a long and involved process
3464.36 -> to move data from the data source to the model
3467.29 -> and back to the application
3468.88 -> so that they can actually add intelligence to their apps.
3474.07 -> The result is that machine learning
3475.82 -> isn't being used as much as it could be.
3479.62 -> So, we ask ourselves how can we bring machine
3483.31 -> learning to this large and growing group of database
3486.38 -> developers and data analysts.
3490.49 -> We are bringing Amazon SageMaker and other ML services
3494.42 -> directly into the tools that database developers,
3498.07 -> data analysts and business analysts use every day.
3501.34 -> These are databases, data warehouses, data lakes and BI tools.
3507.64 -> Our customers use different types of data stores,
3510.5 -> relational, non-relational, data warehouses and analytic services,
3515.37 -> for different use cases.
3517.19 -> So, we are providing a range of integrations
3520.12 -> to give customers options for training their models
3522.78 -> on the data and adding inference results right from the data store
3527.34 -> without having to export and process that data.
3531.59 -> Now, let's start with relational databases.
3535.13 -> Our customers use Amazon Aurora as an efficient relational database
3539.9 -> for enterprise apps, SaaS, and web and mobile apps.
3544.83 -> Historically, adding machine learning from Aurora
3549.64 -> to an application was very complicated.
3552.45 -> It involved the data scientists building and training a model,
3556.02 -> next you had to write app code to read data from the database,
3560.01 -> then you had to call an ML service to run the model,
3563.04 -> then the output must be then reformatted to your application,
3566.83 -> and finally you had to load the results into the app.
3570.27 -> This process is bad enough with a single database
3572.86 -> but if you are using multiple data services
3575.68 -> like a customer database and an order management system,
3578.58 -> then there is even more work and integration to be done.
3583.8 -> So, to make it easier for customers to integrate machine
3587.15 -> learning into Aurora-powered apps,
3589 -> we launched Aurora ML which makes it super-easy to apply ML
3593.43 -> to apps right from the database just by using a SQL query.
3598.84 -> Let's say you wanted to conduct sentiment analysis
3601.29 -> of customer product reviews to identify negative feedback.
3605.46 -> No longer do you have to do all this multi-step process.
3608.99 -> You can simply run a SQL query and then under the
3611.93 -> covers Aurora passes the data to Amazon Comprehend
3616.05 -> and then the results are then returned to Aurora ready to be used.
3620.59 -> This integration makes it so much easier
3623.62 -> for relational database developers to apply ML.
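The "ML from a plain SQL query" pattern can be imitated locally with SQLite: register a scoring function with the database, then call it from a SELECT, much as Aurora ML forwards rows to Amazon Comprehend under the covers. The keyword-matching "model" below is a toy stand-in invented for illustration, not a real sentiment service.

```python
# Simulate calling an ML model from SQL: register a toy sentiment scorer as a
# SQL function, then invoke it directly in a query over a reviews table.
import sqlite3

NEGATIVE_WORDS = {"broken", "bad", "slow", "refund"}

def sentiment(review):
    """Toy classifier: any negative keyword makes the review NEGATIVE."""
    words = {w.strip(".,!?") for w in review.lower().split()}
    return "NEGATIVE" if words & NEGATIVE_WORDS else "POSITIVE"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reviews (id INTEGER, body TEXT)")
conn.executemany("INSERT INTO reviews VALUES (?, ?)", [
    (1, "Great product, works perfectly"),
    (2, "Arrived broken, want a refund"),
])
conn.create_function("sentiment", 1, sentiment)  # expose the "model" to SQL

rows = conn.execute(
    "SELECT id, sentiment(body) FROM reviews ORDER BY id").fetchall()
```

The multi-step export/call/reformat/load process described above collapses into that single query, which is the developer-experience point the transcript is making.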
3627.49 -> Now let's talk about data analysts. They often use Amazon Athena,
3632.85 -> an interactive serverless query service
3635.3 -> to easily analyze data in Amazon S3 using standard SQL.
3640.74 -> And they want to apply ML to this data to generate deeper insights.
3646.95 -> To address this, we launched Amazon Athena ML.
3649.66 -> Customers can now use more than a dozen built-in ML algorithms
3653.88 -> provided by SageMaker directly in Athena
3657.27 -> to get an ML-based prediction for their data sitting on S3.
3661.89 -> Within seconds analysts can run inferences to forecast sales,
3665.67 -> detect suspicious logins, or sort users into customer cohorts
3670.25 -> by invoking pre-trained ML models with simple SQL queries.
3675.45 -> So, we have shown you how you can use pre-trained models
3679.52 -> in Amazon Aurora and Athena,
3682.03 -> but what if you didn't need to fuss with selecting a model at all?
3686.85 -> Every day our customers use Amazon Redshift
3690.84 -> to process exabytes of data to power their analytics workloads.
3697.07 -> And customers want their analysts to leverage machine
3699.85 -> learning with their data in Redshift
3702.89 -> without needing to have the skills
3706.31 -> or the time to use machine learning. So, we asked ourselves,
3711.57 -> how can we make this easy for our Redshift customers?
3716.41 -> Today, I am really excited to announce Amazon Redshift ML,
3720.1 -> an integration of Amazon SageMaker Autopilot into Amazon Redshift
3725.01 -> to make it easy for data warehouse users
3728.09 -> to apply machine learning on their data.
3730.77 -> [applause]
3736.77 -> Let's see how it works.
3739.84 -> It starts with the simple SQL statement for creating a model.
3743.64 -> And once this SQL has run,
3745.94 -> the selected data is securely exported from Redshift to Amazon S3
3750.53 -> and SageMaker Autopilot takes it from there.
3753.38 -> It performs the data cleansing and preprocessing,
3757.53 -> then creates a model by applying the best algorithm.
3760.66 -> All of the interaction between Amazon Redshift,
3763.44 -> Amazon S3, and Amazon SageMaker is completely abstracted away
3767.42 -> and occurs automatically. Now once a model is trained,
3771.17 -> it becomes available as a SQL function
3774.12 -> right in the customer’s Redshift data warehouse.
3777.21 -> Customers can then use the function
3779.3 -> to apply the ML model to their data in queries, reports, and dashboards.
3785.06 -> So, for instance, in our customer churn example,
3788.67 -> they can run the customer churn SQL function
3790.86 -> on new customer data in the data warehouse regularly to identify
3795.29 -> which customers are more at risk
3797.39 -> and then feed this information to sales and marketing.
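As a sketch of the workflow just described, the statements below follow the general shape of Redshift ML's CREATE MODEL syntax; the table, columns, IAM role, and S3 bucket names are all hypothetical placeholders:

```python
# Illustrative only: hypothetical table/column/role names, composed as
# plain SQL strings to show the two-step Redshift ML workflow.

# Step 1: a single SQL statement kicks off training. Redshift exports the
# selected data to S3 and SageMaker Autopilot handles the rest.
create_model_sql = """
CREATE MODEL customer_churn
FROM (SELECT age, monthly_spend, support_calls, churned
      FROM customer_activity)
TARGET churned
FUNCTION predict_customer_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-bucket');
"""

# Step 2: once training completes, the model is available as a SQL
# function that analysts can call in ordinary queries and reports.
score_sql = """
SELECT customer_id,
       predict_customer_churn(age, monthly_spend, support_calls) AS at_risk
FROM new_customers;
"""

print(create_model_sql)
print(score_sql)
```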
3802.31 -> Now, in addition to making ML more accessible to data analysts,
3806.7 -> it turns out that combining machine
3809.04 -> learning with certain types of data models
3811.15 -> can also lead to better predictions.
3815.65 -> For example, graph databases are often used
3818.87 -> to store complex relationships
3820.799 -> between data and a graph model.
3822.87 -> These include things like knowledge graphs used by search engines,
3826.87 -> graphs of models of disease and gene interactions,
3831.09 -> and the relationship between financial and purchase transactions
3835.5 -> to aid in fraud detection and product graphs for recommendation engines.
3841.1 -> Amazon Neptune is a fast, reliable, fully managed graph database service
3846.57 -> that makes it easy to build and run applications
3849.63 -> that work with these kind of graphs.
3852.68 -> Our customers tell us that they would like to apply machine
3855.82 -> learning to applications that use graph data
3858.53 -> to build things like better recommendation engines
3861.54 -> and generate more accurate predictions for fraud detection.
3865.31 -> But again, they lack the time or the skills.
3870.43 -> So, today, we are announcing Amazon Neptune ML,
3873.51 -> enabling easy, fast, and accurate predictions for graph applications.
3879.27 -> [applause]
3885.611 -> Neptune ML does the hard work for you by selecting the graph data
3889.53 -> needed for training.
3890.81 -> It automatically chooses the best ML model for the selected data,
3895.11 -> exposing ML capabilities via simple graph queries
3898.73 -> and providing templates to allow developers
3901.65 -> to customize ML models for advanced scenarios.
3905.37 -> And with machine-learning algorithms that are purpose-built for graph data
3909.73 -> using SageMaker and the Deep Graph Library,
3913.13 -> developers can improve prediction accuracy by over 50%
3917.7 -> compared to that of traditional ML techniques.
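Neptune ML itself trains graph neural networks with SageMaker and the Deep Graph Library; as a much simpler illustration of why graph structure improves predictions, here is a toy label-propagation sketch in plain Python (the account names and fraud scores are invented):

```python
# Toy illustration (NOT Neptune ML's actual GNN approach): an account's
# fraud risk estimate is informed by the accounts it transacts with.
edges = {
    "acct_a": ["acct_b", "acct_c"],
    "acct_b": ["acct_a", "acct_d"],
    "acct_c": ["acct_a"],
    "acct_d": ["acct_b"],
}
known_fraud_score = {"acct_a": 1.0, "acct_d": 0.0}  # labeled nodes

def propagate(known, edges, rounds=10):
    """Spread labels along edges; unlabeled nodes take the mean of neighbors."""
    scores = dict(known)
    for _ in range(rounds):
        for node, nbrs in edges.items():
            if node in known:
                continue  # keep labeled nodes fixed
            nbr_scores = [scores.get(n, 0.5) for n in nbrs]
            scores[node] = sum(nbr_scores) / len(nbr_scores)
    return scores

scores = propagate(known_fraud_score, edges)
# acct_c, connected only to a known-fraud account, ends up riskier
# than acct_b, which sits between a fraud and a non-fraud account.
print(scores)
```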
3921.08 -> We are very excited about this one.
3925.38 -> We are not only integrating the power of machine
3928.37 -> learning into our own products, but we are integrating SageMaker
3932.33 -> into partners’ products as well.
3934.58 -> We have integrated SageMaker Autopilot into Domo,
3938.19 -> Sisense and Qlik, with Tableau and Snowflake coming early next year.
3944.228 -> [applause]
3952.36 -> In May of this year, we also added machine
3954.81 -> learning to Amazon QuickSight,
3956.96 -> the scalable, embeddable BI service built for the Cloud.
3961.25 -> QuickSight ML Insights integrates with Amazon SageMaker Autopilot
3965.81 -> to enable business analysts to do things like anomaly detection
3969.65 -> and forecasting without any heavy lifting.
3973.12 -> Customers like Expedia Group, Tata Consultancy Services,
3976.74 -> and Ricoh Company are already benefiting from the
3979.98 -> out-of-the-box ML experience with QuickSight.
3982.75 -> And it has a really great feature called ‘auto-narratives.’
3986.41 -> It uses machine learning insights
3988.51 -> to tell customers the story of their dashboard
3991.27 -> using plain language narratives.
3993.86 -> Customers love these human readable narratives.
3996.96 -> And they told us that they want to interact
3999.31 -> with their dashboards in a similar way,
4001.93 -> ask new business questions in plain written language
4005.52 -> when the answers are not easily found in the data
4007.84 -> displayed in their existing dashboards.
4011.62 -> Last week Andy announced Amazon QuickSight Q
4014.64 -> to solve just this problem.
4016.79 -> Q is a deep-learning powered capability in Amazon QuickSight
4020.36 -> that empowers business users to ask questions in natural language
4025.25 -> and get answers instantly.
4027.52 -> To tell us more, I would like to invite Dorothy Li
4030.8 -> to give a look at how Q works.
4033.807 -> [applause]
4038.197 -> Hi everyone! Amazon QuickSight Q
4043.51 -> is a deep-learning based capability in QuickSight
4046.37 -> that is built using state-of-the-art machine
4048.57 -> learning and natural language processing techniques
4051.63 -> allowing business users to ask data questions in plain language
4055.62 -> and get answers instantly.
4057.97 -> Let's dive into the capabilities of Q.
4060.85 -> Let's look at the scenario of a sales leader
4063 -> who is trying to look at the insights from her dashboard
4065.42 -> to inform next year’s planning.
4067.61 -> Now my dashboard shows a summary of the data.
4070.5 -> Sales per state, sales per product and some yearly trends.
4074.86 -> But what if I wanted to understand
4076.18 -> something not in the dashboard like the specific sales
4079.25 -> for the top two performing states, California, and New York?
4083.05 -> Typically to do that I would need to cut a ticket
4086.19 -> or send an email to the BI Team and wait for an answer.
4091.52 -> And since most BI teams are thinly staffed,
4094.27 -> that answer could come in days or weeks.
4097.46 -> Now with Q, I can simply type my questions in QuickSight
4101.47 -> and get answers.
4103.13 -> ‘Show me last year's weekly sales in California,’
4109.57 -> and, Q provides an answer in just a few seconds.
4113.94 -> ‘Now let's see how it compares to New York,’
4122.02 -> and now, Q shows a nice comparison of the two trend lines.
4125.97 -> It's interesting to see that in March,
4128.18 -> California sales had a huge spike
4130.64 -> and that most likely got them to the top spot in sales last year.
4135.1 -> Since Q uses advanced natural language
4137.37 -> understanding you can ask the same question in multiple ways.
4141.57 -> For the same question, let's try asking a different way.
4144.89 -> ‘Weekly revenue for California versus New York in 2019.’
4150.74 -> And I get the same answer.
4153.37 -> Typically, users in different functions of the business
4156.97 -> from sales, marketing, to finance,
4159.38 -> often have their own specific language.
4162.28 -> To understand everyday phrases in these different functions
4165.3 -> of the enterprise, we partnered with hundreds of teams in Amazon
4169.15 -> to collect a large volume of real-world data
4172.75 -> and train Q’s models to understand these phrases,
4176.06 -> so there's no need for users to learn anything new.
4179.27 -> They ask questions in the natural way that they already do and get answers.
4184.65 -> Let's continue from our sales example.
4187.4 -> I know that California was our best-performing territory.
4190.76 -> I want to drill a little bit deeper
4192.57 -> and find the best-selling product categories in California.
4196.61 -> All I need to do is ask Q,
4198.97 -> what are the best-selling categories in California this year?
4205.17 -> Ah, it's kitchenware and outdoor,
4208.16 -> but it's a bit hard to see who the laggards are.
4210.94 -> How about we change the visual to show a bar chart?
4216.75 -> In the bar chart, I noticed that gaming is underperforming.
4221.47 -> Look how easy it was to get these insights.
4225.79 -> Getting started with Q is incredibly easy.
4229.14 -> Once you have connected Q with your existing data,
4231.83 -> Q automatically generates a knowledge layer
4234.45 -> that captures the meaning and relationship of your data.
4238.08 -> Allowing you to start asking questions in natural language
4241.58 -> in a matter of minutes.
4243.35 -> From all your data, not just a specific data set or dashboard.
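Q relies on deep learning and a generated knowledge layer rather than keyword rules, but a toy sketch shows the shape of the problem it solves, mapping a plain-language question to a structured query (the metric and dimension vocabularies here are invented):

```python
# Very simplified illustration of natural-language-to-query mapping.
# QuickSight Q uses deep learning models; this is keyword matching only.
import re

DIMENSIONS = {"california": ("state", "California"),
              "new york": ("state", "New York")}
METRICS = {"sales": "sales", "revenue": "revenue"}

def parse_question(question):
    """Extract a metric, dimension filters, and a year from a question."""
    q = question.lower()
    metric = next((v for k, v in METRICS.items() if k in q), None)
    filters = [v for k, v in DIMENSIONS.items() if k in q]
    year = re.search(r"\b(20\d\d)\b", q)
    return {"metric": metric,
            "filters": filters,
            "year": year.group(1) if year else None}

query = parse_question("Weekly revenue for California versus New York in 2019")
print(query)
```

A real system must also handle the fact that "revenue," "sales," and team-specific jargon can all mean the same column, which is why Q's models were trained on large volumes of real-world business phrasing.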
4248.15 -> And getting started is just the beginning.
4250.91 -> Q uses machine learning models to continuously
4253.95 -> improve with no machine learning expertise required.
4258.22 -> It's incredibly exciting to be able to reinvent
4261.17 -> BI using machine learning with Amazon QuickSight Q.
4265.13 -> Thank you.
4267.01 -> [applause]
4273.13 -> Thanks, Dorothy. For technology to be really impactful
4277.96 -> it has to solve real business problems
4280.54 -> end-to-end and Amazon QuickSight
4283.61 -> Q is one example of the impact machine
4286.6 -> learning can have when applied to a real business need.
4290.92 -> And the most successful customers
4292.55 -> are those in which domain experts and technical experts
4296.42 -> come together to move from idea to implementation to do just that.
4303.1 -> What makes a good machine learning problem?
4306.43 -> When we think about good machine learning problems,
4309.55 -> these are typically areas that are rich in data,
4312.74 -> impactful to the business but that you haven't been able
4316.52 -> to solve sufficiently using traditional methods.
4320.37 -> Examples where our customers find these synergies are areas
4323.93 -> like product recommendations, improving code reviews,
4327.81 -> bringing more efficiency to manual processes,
4331.21 -> faster and more accurate forecasting and fraud detection.
4335.34 -> When we identify these common use cases
4338.55 -> we build AI services that enable companies
4341.8 -> to quickly add intelligence to these areas
4344.47 -> without needing any machine learning expertise.
4349.9 -> Some customers also ask us instead of us having
4353.08 -> to stitch together these point-products ourselves
4356.2 -> by writing code on top of your AI services,
4359.1 -> could you just solve the problem for us end-to-end?
4362.32 -> That's why we have launched several things that do just this.
4366.19 -> Amazon Connect is one example. A contact center in the cloud
4370.205 -> where we provide automatic voice transcription,
4373.304 -> sentiment analysis and analytics using ML through Contact Lens.
4378.74 -> Amazon Kendra is an end-to-end intelligent search solution
4382.9 -> which can connect to multiple internal data silos
4386.22 -> and uses machine learning to create an accurate index
4389.67 -> which can be searched with simple natural language queries.
4392.96 -> Again, no ML experience required.
4395.81 -> Customers can build and customize their index
4398.63 -> and search interface without writing a line of code.
4402.01 -> And today we are expanding the support
4404.2 -> for more than 40 more data sources
4406.79 -> via the Amazon Kendra connector library
4409.54 -> including Atlassian Jira, GitLab, Slack, and Box.
4414.41 -> Plus, we are releasing incremental learning
4418.46 -> which is a capability that learns from user behavior
4421.88 -> to improve your results on an individual level.
4425.69 -> We also launched Amazon CodeGuru that allows developers to use machine
4429.76 -> learning to provide automated code review
4432.82 -> providing guidance and recommendations
4434.91 -> on how to fix some truly hard to find bugs
4438.04 -> and to locate the most expensive line of code
4440.36 -> by automatically profiling applications
4444.17 -> as they are running and making recommendations
4446.78 -> for how to dramatically reduce latency,
4449.62 -> CPU contention and so on.
4452.46 -> And we just launched DevOps Guru to easily improve
4455.86 -> an application’s operational performance and availability.
4461.55 -> Another area where our customers are asking us
4465.43 -> to do the heavy lifting for them
4467.19 -> to solve a business problem is anomaly detection.
4470.4 -> It turns out machine learning is really good
4473.07 -> at identifying subtle signals against a lot of noisy data.
4477.77 -> And there is data across a broad spectrum of industries
4481.23 -> where machine learning can be applied to help understand
4484.49 -> and catch anomalies before it's too late.
4488.91 -> Organizations of all sizes use data to monitor trends
4492.67 -> and changes in their business metrics
4495.12 -> in an attempt to find unexpected anomalies
4497.54 -> from the norm such as a dip in a product sales
4500.34 -> or a sudden increase in qualified sales leads.
4503.92 -> Now, traditional methods for detecting these anomalies
4506.95 -> such as setting fixed thresholds are error prone leading to false alarms,
4512.39 -> undetected anomalies and results that are not always actionable.
4517.4 -> The cost of not finding these anomalies
4519.61 -> in a timely manner can be really high.
4523.08 -> For instance, if a retailer prices
4525.44 -> something incorrectly on an e-commerce site,
4528.58 -> that product could be completely sold out
4530.84 -> before someone even realizes that there is a sudden spike in sales.
4535.51 -> So, our customers asked,
4538.24 -> how can we make this process of anomaly detection
4540.9 -> for business metrics easier?
4545.14 -> To solve this problem, I'm excited to announce
4548.09 -> that we are launching Amazon Lookout for Metrics.
4551.9 -> [applause]
4557.07 -> It uses machine learning to detect anomalies
4559.7 -> in virtually any timeseries-driven
4561.88 -> business and operational metrics such as revenue performance,
4565.8 -> purchase transactions and customer acquisition and retention rates.
4572.24 -> Lookout for Metrics detects unexpected changes
4575.46 -> in your metrics with high accuracy
4577.39 -> by applying the right algorithm to the right data.
4581.33 -> It's very easy to get up and running with Lookout for Metrics
4586.11 -> because it has 25 built-in connectors for data analysis.
4589.99 -> It not only identifies the anomaly
4592.94 -> but it also helps you find the root cause of these anomalies
4596.54 -> so that you can take quick action to remediate an issue
4600.01 -> or to react to an opportunity.
4602.22 -> And it continues to improve over time with the feedback as well.
4606.94 -> Retail customers can gain insights into category-level
4609.71 -> revenue by monitoring point-of-sale or clickstream data,
4613.86 -> or an adtech company can optimize spend
4616.53 -> by detecting spikes or dips in metrics like reach,
4620.34 -> impressions, views and ad clicks.
4623.93 -> Now, let's take a look at how it works.
4628.6 -> It automatically retrieves the data you want to monitor
4631.6 -> from your selected data sources,
4634.27 -> including popular AWS services such as S3, Redshift,
4638.59 -> RDS, CloudWatch,
4640.43 -> and many popular SaaS applications such as Salesforce,
4643.94 -> Marketo, Amplitude, Zendesk and others.
4647.44 -> The service inspects the data and trains
4650.1 -> ML models to find anomalies using the best algorithm.
4654.11 -> It automatically scores and ranks anomalies
4657.21 -> based on their severity and helps you find potential root
4660.76 -> causes of the detected anomalies.
4663.4 -> Finally, it also prepares an impact analysis
4666.23 -> and sends you a real time alert via your preferred alert channel.
4670.56 -> You can also automatically trigger a custom Lambda function
4674.43 -> whenever an anomaly is detected.
4676.55 -> So, for instance, if something is selling out quickly on your site
4679.9 -> due to pricing inaccuracy, you could trigger an action
4682.95 -> to pull the product off the site until further inspection is done.
4687.78 -> Lookout for Metrics uses your feedback
4690.6 -> to continuously optimize its algorithm
4693.06 -> and improve its accuracy over time.
4695.68 -> You can visualize and review the details
4697.96 -> of these anomalies in the AWS console or retrieve them through an API.
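As a simplified illustration of why an adaptive detector beats the fixed thresholds mentioned earlier (Lookout for Metrics selects its own algorithms per metric; the sales figures below are invented):

```python
# Illustrative only: compare a fixed threshold to a rolling z-score
# detector on a growing daily-sales series with one anomalous dip.
import statistics

sales = [100, 104, 108, 112, 116, 120, 124, 60, 132, 136]  # index 7 is the dip

def fixed_threshold_alerts(series, low=50):
    """Alert only when a value crosses a hard-coded floor."""
    return [i for i, v in enumerate(series) if v < low]

def rolling_zscore_alerts(series, window=5, z=3.0):
    """Alert when a value deviates sharply from its recent history."""
    alerts = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean, stdev = statistics.mean(hist), statistics.stdev(hist)
        if stdev and abs(series[i] - mean) / stdev > z:
            alerts.append(i)
    return alerts

print(fixed_threshold_alerts(sales))  # [] -- the dip to 60 never crosses 50
print(rolling_zscore_alerts(sales))   # [7] -- the dip is flagged immediately
```

The fixed threshold misses the anomaly entirely because the floor was set too low for a metric that trends upward, which is exactly the brittleness described above.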
4704.24 -> Amazon Lookout for Metrics has use cases that apply across industries,
4709.34 -> but we also hear from our customers that they want more solutions
4713.43 -> that are tailored and specific to their industries.
4717.64 -> To share more, I would like to invite again, Dr. Matt Wood.
4722.57 -> [music playing - applause]
4733.85 -> Thanks, Swami.
4735.06 -> Machine learning is driving extraordinary levels of reinvention
4738.64 -> across virtually every industry.
4741.08 -> Take for example, iHeartMedia which uses machine learning on AWS
4745.45 -> to give its listeners real-time music recommendations
4747.96 -> across all of their media and entertainment platforms.
4751.03 -> Or, in the auto space, Lyft is gathering petabytes of data
4755.47 -> and analyzing it with Amazon SageMaker
4757.71 -> to improve self-driving systems.
4760.15 -> In finance, J.P. Morgan is improving its banking experience
4764.37 -> by adding personalization to its client interactions
4767.3 -> including real-time coaching and recommendations
4769.83 -> for contact center agents
4771.45 -> so they can better serve their customers.
4773.95 -> And we see reinvention happening in industrial manufacturing,
4777.47 -> where they are using data in the Cloud
4779.35 -> and in nodes at the edge
4780.64 -> to rethink virtually all of their design
4783.21 -> processes on the production line
4784.92 -> from supply chain to finished product.
4787.96 -> At their simplest, industrial processes are a series of steps
4792.31 -> but, unlike most software, industrial processes are monolithic.
4797 -> They are very, very tightly coupled
4799.09 -> which means that an equipment or process problem
4801.73 -> anywhere on the line can have a very, very large blast radius.
4805.55 -> As a result, maintaining throughput and cost goals in manufacturing
4809.54 -> and other industrial processes
4811.22 -> is a high-wire tightrope balancing act.
4814.38 -> It's critical that these systems are monitored
4816.87 -> and that early warnings are given when something is off.
4820.3 -> Today, much of this is managed by process control
4823.02 -> with fixed thresholds.
4824.89 -> But these are brittle and don't take advantage
4826.89 -> of the vast amount of data available from industrial systems.
4830.95 -> Now last week, we announced new industrial focused services
4834.53 -> that enable customers to apply machine
4836.27 -> learning to find and maintain balance in industrial processes
4840.58 -> making it easier, safer,
4842.79 -> and faster to monitor and evaluate everything from manufacturing
4847.02 -> to power generation, to agriculture.
4850.07 -> Together, these services help lower and widen
4853.51 -> that tight rope significantly.
4855.4 -> So, let me walk you through them briefly
4857.67 -> and then I will show you how they all worked together.
4860.81 -> So, there are a lot of industrial companies who know that
4863.4 -> if they could use this data to do better predictive maintenance,
4866.93 -> they could save a lot of time and money.
4869.77 -> But some customers either don't have sensors installed
3873.06 -> or their sensors are not modern or not sensitive enough.
3876.66 -> And they don’t know how to take that data from the sensors
4879.15 -> and send it to the Cloud,
4880.51 -> or to build the machine learning models
4882.38 -> that detect a problem before it occurs.
4884.93 -> To help last week we launched Amazon Monitron,
4888.29 -> an end-to-end solution for equipment monitoring.
4891.4 -> Monitron comes with three things.
4893.52 -> A set of sensors, I have one here with me right here.
4896.54 -> A network gateway device and a mobile app to track
4899.77 -> and resolve machine failures detected by Monitron on the shop floor.
4904.43 -> They work right out of the box. These are wireless sensors
4907.53 -> and they are designed to have a three-year battery life.
4910.38 -> They measure vibration in three directions as well as temperature
4914.37 -> and they can easily be mounted to equipment with epoxy.
4917.84 -> You easily mount sensors to any piece of equipment like motors,
4921.89 -> gear boxes, compressors, turbines, fans, and pumps,
4925.76 -> and they start taking vibration
4927.15 -> and temperature measurements straight away.
4930.05 -> The vibration and temperature data
4931.68 -> is sent automatically from the sensors to the network gateway,
4935.26 -> which then transfers the measurements to the Cloud.
4937.76 -> You can view the sensor readings
4939.39 -> right away directly on the mobile app.
4942.33 -> Monitron will also start building an ML model using the sensor data
4946.7 -> and use it to determine the normal baseline operating performance.
4950.71 -> If there is an anomaly in the machine-sensor data,
4953.34 -> Monitron alerts technicians via push notifications to the app.
4957.31 -> It’s a simple end-to-end solution for predictive maintenance
4960.98 -> with no machine learning expertise required.
4963.35 -> And that’s a big deal.
4964.77 -> It makes it much, much easier for companies
4966.86 -> to do predictive maintenance on their equipment.
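A minimal sketch of the baseline idea behind this kind of monitoring (Monitron's actual models are more sophisticated; the vibration readings and alert threshold below are invented):

```python
# Illustrative only: learn a "normal" vibration baseline from a machine's
# own sensor history, then flag readings that deviate far from it.
import statistics

# Hypothetical vibration readings (mm/s) during normal operation
normal_vibration = [2.1, 2.3, 2.0, 2.2, 2.4, 2.1, 2.3, 2.2]

baseline_mean = statistics.mean(normal_vibration)
baseline_stdev = statistics.stdev(normal_vibration)

def is_anomalous(reading, z=4.0):
    """Flag a reading far outside the learned operating baseline."""
    return abs(reading - baseline_mean) / baseline_stdev > z

print(is_anomalous(2.2))  # False -- within the normal operating range
print(is_anomalous(6.5))  # True  -- worth pushing an alert to a technician
```

The key point is that the baseline comes from the machine's own data rather than a manually chosen threshold, so no machine learning expertise is needed from the operator.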
4969.83 -> Now there are other companies that we talk to that say,
4971.71 -> “Look, I have modern sensors that I am fine with
4975.13 -> and I don’t want to build the machine-learning models
4977.28 -> based on their data.
4978.56 -> I just want to send you the data, use your models,
4981.52 -> and have the predictions come back to me through the API
4983.85 -> so that I can integrate it with my existing systems.”
4987.45 -> So, we have something for this group of customers too called
4990.2 -> ‘Amazon Lookout for Equipment.’
4992.15 -> A new anomaly detection service for industrial machinery.
4996.16 -> With Lookout for Equipment, you send the data to AWS.
4999.04 -> It gets stored in S3.
5000.79 -> The service can analyze data from up to 300 sensors
5003.67 -> per industrial machine,
5005.52 -> and uses machine-learning models to identify early warning signs
5009.25 -> that could be a sign of impending machine failures.
5012.3 -> The service pinpoints the sensor or sensors
5015.06 -> indicating anomalies letting you respond
5017.39 -> even more quickly before the line is impacted.
5020.74 -> And if it finds anomalies, the service will send them to you via API
5024.43 -> so that you can do your predictive maintenance.
5027.43 -> Anomalies detected by Lookout for Equipment
5029.73 -> can be integrated with your existing monitoring software,
5033.28 -> IoT SiteWise, or industrial data systems such as OSIsoft,
5037.54 -> and you can also set up automated actions to take
5040.37 -> when anomalies are detected, such as filing a trouble ticket
5043.7 -> or sending an automated alarm
5045.46 -> that notifies you immediately of any issues.
5049.76 -> Customers are also asking for help
5052.62 -> with using Computer Vision to improve industrial processes.
5056.99 -> Industrial manufacturing processes, they move fast
5060.2 -> and often require constant vigilance to maintain quality control.
5064.67 -> Determining if a part has been manufactured correctly
5067.77 -> or if it is damaged, can significantly impact
5070.6 -> product quality and operational safety.
5073.41 -> You can try and do it manually,
5075.01 -> but it’s super-hard to do this accurately,
5077.24 -> and to scale this on a fast-moving line.
5080.59 -> So last week we launched Lookout for Vision,
5084.05 -> a new service that spots visual defects and anomalies in images
5087.66 -> using Computer Vision.
5090.33 -> You start by providing as few as 30 images
5093.052 -> to establish a baseline good state
5094.95 -> for machine parts or manufactured products.
5097.65 -> Then you can start sending images from cameras on the line
5100.91 -> straight away to identify anomalies.
5103.95 -> Lookout for Vision will spot differences
5106.24 -> between the known good state and any differences
5109.34 -> it detects like dents on a manufactured part,
5112.14 -> a crack in a machine part, irregular shapes,
5114.87 -> or inconsistent colors in a product.
5117.79 -> If anomalies are detected you can get alerts
5120.18 -> in the Lookout for Vision dashboard
5123.1 -> where it will highlight the portion of the image
5124.99 -> that differs from the baseline.
5127.29 -> Now, Lookout for Vision’s machine-learning models
5129.91 -> are sophisticated enough to handle variances in camera angle,
5133.84 -> pose, and lighting from changes in the work environment.
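A toy sketch of the baseline-and-compare idea (Lookout for Vision's models are far more robust to camera angle, pose, and lighting; the "images" below are tiny invented grayscale grids):

```python
# Illustrative only: build a per-pixel baseline from known-good images,
# then score new images by how far they deviate from that baseline.
good_images = [
    [[10, 10], [10, 10]],
    [[11, 10], [10, 11]],
    [[10, 11], [11, 10]],
]

rows, cols = 2, 2
# Per-pixel mean of the known-good images defines the "good state"
baseline = [[sum(img[r][c] for img in good_images) / len(good_images)
             for c in range(cols)] for r in range(rows)]

def defect_score(image):
    """Total per-pixel deviation from the learned baseline."""
    return sum(abs(image[r][c] - baseline[r][c])
               for r in range(rows) for c in range(cols))

print(defect_score([[10, 10], [11, 10]]))  # low  -- looks like a good part
print(defect_score([[10, 10], [90, 10]]))  # high -- a dent-like anomaly
```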
5139.22 -> In an industrial line,
5141.1 -> there are also lots of split-second decisions to make.
5144.14 -> We just don't have the time to send that information to the Cloud
5147.85 -> and get the answer back.
5150 -> So, many industrial companies try to use smart cameras
5153.46 -> that allow them to process video on-site at the edge.
5157.53 -> But the problem is that most of the smart cameras out there today,
5160.59 -> they're just not powerful enough
5161.95 -> to run sophisticated computer vision models.
5165.33 -> And most companies that we talked to,
5167.06 -> they don't want to rip out all of their cameras
5168.84 -> that they have just installed and put in a different one.
5172.08 -> That's why we built the AWS Panorama Appliance,
5175.78 -> a new hardware appliance that allows organizations
5178.59 -> to add computer vision to existing on-premises smart cameras.
5182.77 -> Here's how it works.
5184.16 -> You simply plug in the Panorama Appliance
5186.45 -> and connect it to the network.
5188.31 -> Panorama starts to recognize and pick-up video streams
5191.63 -> from your existing cameras in the facility.
5194.33 -> The appliance can then process streams
5196.26 -> of up to 20 concurrent cameras
5198.26 -> and operate Computer Vision models on those streams.
5201.77 -> And if you need to have more concurrently,
5203.6 -> you can just buy more Panorama Appliances.
5206.81 -> We have prebuilt models inside Panorama
5209.03 -> that do Computer Vision for you
5210.94 -> and that we have optimized by industry.
5213.09 -> So, we have got them for manufacturing,
5214.91 -> construction, retail, safety, and a host of others.
5219.1 -> And of course, you can also build your own models in SageMaker
5222.87 -> and then just deploy those to Panorama,
5225.37 -> and Panorama also integrates seamlessly with the rest of AWS
5228.81 -> and IoT machine-learning services.
5232.05 -> The appliance itself is small but perfectly formed for industrial use.
5236.15 -> I have one here with me.
5238.48 -> It's IP62 rated which means that it is dust and water resistant.
5242.77 -> It's not as rugged as a Snowball Edge device,
5245.06 -> but you also don't have to treat this with white gloves.
5247.98 -> That said, it's one rack unit tall
5249.71 -> and half a rack wide with chassis mount points,
5252.65 -> so if you did want to mount it in the cabinet, you can.
5255.68 -> It has multiple GigE networking ports for redundancy
5259.28 -> or to connect cameras from multiple subnets.
5262.39 -> People are pretty excited about the possibility
5264.42 -> of having real Computer Vision at the edge,
5266.47 -> but they have also told us that,
5267.74 -> “Look, we’re going to buy the next generation of smart cameras
5272.15 -> and those smart camera manufacturers have told us,
5274.51 -> that we want to actually embed something
5276.25 -> that allows us to run more powerful Computer Vision models
5279.012 -> right on those devices.”
5281.5 -> So, we’re also providing a brand new AWS Panorama SDK
5285.73 -> which enables hardware vendors to build new cameras
5288.3 -> that run more sophisticated Computer Vision models at the edge.
5292.19 -> This SDK and the API’s associated with it
5295.2 -> can be used to add a lot more Computer Vision power to cameras.
5299.21 -> We have done the work to optimize models for memory and latency
5302.98 -> so that you can fit more powerful models
5304.89 -> into what is often a very constrained space.
5307.46 -> The Panorama SDK devices will integrate with other AWS services.
5311.32 -> You can build and train models in SageMaker
5313.3 -> and then deploy them with a single click to all of your devices.
5317.2 -> Those devices will also integrate
5318.67 -> with SageMaker Edge Manager and IoT services
5321.45 -> such as SiteWise for integration with existing systems.
5325.1 -> And we are already seeing a ton of excitement
5327.17 -> with partners across system integrators,
5329.25 -> devices, independent software vendors and silicon providers
5332.93 -> working with us on this next generation of cameras.
5335.66 -> It's really exciting.
5341.12 -> So, all these new capabilities are designed
5343.48 -> to help customers in industrial manufacturing
5345.92 -> to improve their processes from start to finish.
5348.94 -> So, let's see how they all work together
5351.26 -> looking at a manufacturing line.
5353.4 -> Building a product which is manufactured in the billions each year
5357.17 -> that many of us always carry in our pockets
5359.57 -> and has famously changed the way that most of us create and communicate.
5364.24 -> The humble number two pencil. Like many industrial processes,
5370.15 -> pencil manufacturing is a high volume low-margin game
5373.73 -> which is automated in part
5375.16 -> but still requires several manual steps to keep moving.
5378.68 -> So, let's look at our pencil manufacturing line.
5381.6 -> Large compressors create the pencil wafers,
5384.34 -> and large-scale, high-throughput machines insert graphite,
5387.71 -> paint, and then sharpen the pencil.
5390.61 -> Industrial machines like these include dozens of individual sensors.
5395.32 -> Using Amazon Lookout for Equipment,
5397.48 -> the sensor data from this equipment is aggregated and analyzed
5400.95 -> using machine learning models which are trained using your own data
5404.52 -> but require no machine learning experience to apply.
5408.18 -> The ML models are trained to identify early warning signs
5411.32 -> of future operational issues by monitoring behavior
5414.43 -> such as how many reps per minute is considered normal for a machine.
5418.84 -> These are the proverbial needle in the haystack problems
5421.42 -> that if found early, could help avoid expensive downtime.
5426.23 -> When the ML model detects a potential issue,
5428.42 -> such as a sudden drop in the rate of repetitions
5430.98 -> of this pencil wafer machine, the service will send text alerts
5434.72 -> so you can send engineers to take a look or preemptively inspect
5438.3 -> the equipment for issues way before disaster strikes
5441.82 -> and the entire line is impacted and has to come down.
5446.24 -> Even with this sensor data, in lines like this,
5448.89 -> there is often bound to be blind spots.
5451.61 -> Equipment which either doesn’t have sensors installed
5454.29 -> or rotating equipment such as conveyor belts which move products
5457.68 -> between equipment and provide a potential point of failure.
5461.75 -> Monitron allows you to completely remove these blind spots
5465.97 -> by expanding the coverage of the sensors
5468.61 -> with an end-to-end machine monitoring solution.
5471.61 -> Process engineers can install Monitron sensors onto machines
5475.05 -> to start closing these blind spots in minutes.
5479.67 -> Like on this pencil sharpening machine.
5481.85 -> Once the sensors are installed, you can start collecting data
5485.51 -> such as vibration and temperature which is then analyzed automatically
5491.29 -> and any early warning signals that deviate from the norm
5494.93 -> are flagged to staff onsite through a mobile app,
5498.75 -> providing a completely closed loop for monitoring and remediation
5502.53 -> which requires no machine learning
5504.01 -> or even AWS skills to set up and operate.
5508.83 -> Now, quality at every step is critical in lines like this.
5513.48 -> Even small imperfections at each step can compound
5516.97 -> and they get more expensive to correct as they move down the line.
5520.8 -> Amazon Lookout for Vision uses machine
5522.86 -> learning to automatically evaluate quality at every step on the line.
5527.9 -> Using as few as thirty reference images,
5530.39 -> Lookout for Vision can identify even subtle defects such as misalignments,
5534.85 -> dents and scratches, sending alerts and notifications
5538.48 -> as soon as defects are identified, before they move down the line
5542.33 -> and impact entire batches of products.
5546.4 -> Amazon Lookout for Vision processes the pencils’ images
5549.38 -> from the cameras along the belt
5551.38 -> and the model analyzes them for defects in real time.
5555.05 -> Each time it spots a lead that is out of alignment,
5557.53 -> it will record it and report the rate of defect via an online dashboard
5562.07 -> so that you can quickly take actions such as maintenance
5564.79 -> or switching off a line
5566.52 -> to stop more defects from occurring.
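Acting on the reported defect rate is left to the consumer of those alerts. A minimal sketch of that decision, stop the line once the defect rate over a recent window of inspections crosses a threshold, might look like this (the window size, rate, and minimum sample count are illustrative choices, not part of Lookout for Vision):

```python
from collections import deque

def should_stop_line(results, window=100, max_rate=0.05, min_count=20):
    """Decide whether to stop the line from per-image inspection results.

    `results` is an iterable of booleans (True = defect detected).
    We wait for at least `min_count` inspections, then stop once the
    defect rate over the last `window` inspections exceeds `max_rate`.
    """
    recent = deque(maxlen=window)
    for is_defect in results:
        recent.append(is_defect)
        if len(recent) >= min_count and sum(recent) / len(recent) > max_rate:
            return True
    return False

# 3 defective pencils in the first 20 -> 15% rate, well over 5%
print(should_stop_line([True, False, True] + [False] * 16 + [True]))  # -> True
```

The `min_count` guard avoids tripping the alarm on the very first defect, when the sample is too small for the rate to mean anything.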
5570.15 -> Now, of course, these lines don’t exist in isolation.
5573.26 -> They are surrounded by entire teams of people,
5576.34 -> stacks of inventory, other lines,
5578.67 -> and dozens of other pieces of equipment and moving vehicles.
5583.28 -> In addition to monitoring each process,
5585.78 -> many customers have installed cameras
5587.51 -> to help monitor the environment as a whole.
5591.07 -> With the AWS Panorama Appliance,
5593.42 -> these cameras just got a whole lot more useful.
5596.39 -> Now you can process video onsite with low latency.
5599.38 -> So, for example, you can count and monitor inventory and analyze
5603.69 -> its movement through the site
5605.41 -> or monitor the impact of process changes for improvements.
5609.68 -> Panorama can help transform your existing
5611.96 -> on-premises cameras into computer vision-enabled devices
5615.81 -> so that you can monitor all of these processes,
5618.52 -> remove bottlenecks,
5619.65 -> and make improvements to the overall supply chain.
5623.53 -> So, with services such as Lookout for Equipment, Monitron,
5627.26 -> Lookout for Vision, and Panorama,
5629.17 -> you can use machine learning to add end-to-end monitoring and analysis
5632.88 -> to your industrial processes, whether you’re manufacturing cars,
5636.47 -> mobile phones, producing and packing food,
5639.18 -> collecting harvests, generating power,
5641.4 -> or yes, even building billions of pencils.
5645.12 -> We can’t wait to see how our industrial customers
5647.36 -> reinvent their processes through machine
5649.05 -> learning using these services.
5652.64 -> So, industrial manufacturing is transforming in a very rapid way.
5657.94 -> And the same thing is happening with healthcare.
5661.21 -> A good example is to look at what Moderna has done in the last year
5664.84 -> or so, in really just the last nine months.
5667.55 -> They built an entire digital manufacturing suite on top of AWS
5671.61 -> to sequence their most recent COVID-19 candidate
5674.69 -> that they just submitted, which has 94% effectiveness.
5678.2 -> And they did it on AWS in forty-two days
5681.06 -> instead of the typical twenty months that it takes.
5684.14 -> Novartis uses natural language
5685.87 -> processing to improve its ability to detect adverse events,
5689.69 -> a crucial part of delivering drugs safely to market.
5693.63 -> Cerner is using SageMaker to query large anonymized patient data sets
5698.24 -> and build complex deep learning models
5700.33 -> to predict the onset of congestive heart failure
5702.83 -> up to fifteen months before clinical manifestation.
5706.29 -> But even with all of this innovation,
5708.62 -> piecing together data that lives in silos
5711.63 -> and different formats to create this
5713.69 -> three-hundred-and-sixty-degree view of patients
5715.94 -> or trial participants is really hard.
5719.06 -> And this is really the Holy Grail for healthcare companies.
5722.52 -> And they’re just not there yet.
5725.41 -> This data is often spread out across various systems
5728.42 -> such as electronic medical records, lab systems,
5731.73 -> and exists in dozens of incompatible formats.
5735.06 -> It often includes unstructured information
5737.7 -> contained in medical records like clinical notes,
5740.32 -> documents like PDF laboratory reports,
5742.91 -> forms such as insurance claims, or medical images,
5746.18 -> and it all needs to be organized and normalized
5749 -> before you can start to analyze it.
5750.93 -> And gathering and preparing all of this data
5753.11 -> for analysis takes healthcare organizations weeks or even months.
5757.67 -> This often involves manually going through individual health records
5761.55 -> to identify and extract key clinical information
5764.12 -> like diagnoses, medications, and procedures from notes,
5768.6 -> documents, images, recordings,
5770.49 -> and forms, before normalizing it so that it can be searched.
5774.61 -> It's expensive and time-consuming to do well,
5777.75 -> which means analysis like this effectively remains out of reach
5781.61 -> for almost all healthcare and life sciences companies.
5784.65 -> Every healthcare provider, payer, and life science company
5788.54 -> is trying to solve the problem of analyzing this data.
5791.4 -> Because if you do, you can make better patient support decisions,
5795.4 -> operate more efficiently, and better understand population health trends.
5800.78 -> So, today, I'm excited to announce the launch of Amazon HealthLake.
5804.97 -> A new service that enables healthcare organizations to store,
5808.73 -> transform and analyze petabytes of health
5811.68 -> and life sciences data in the cloud.
5817.91 -> HealthLake transforms data seamlessly to automatically understand
5822.43 -> and extract meaningful medical information
5824.49 -> from raw disparate data such as prescriptions,
5827.87 -> procedures and diagnoses.
5829.87 -> Reinventing a process that was traditionally manual,
5832.85 -> error prone and costly.
5835 -> HealthLake organizes data in chronological order
5837.93 -> so that you can look at trends like disease progression over time,
5841.26 -> giving healthcare organizations new tools
5843.65 -> to improve care and intervene earlier.
5846.4 -> Healthcare organizations can query and search data
5849.22 -> and build machine learning models with Amazon SageMaker
5852.05 -> to find patterns, identify anomalies and forecast trends.
5857.01 -> HealthLake also supports interoperability standards like FHIR,
5860.75 -> the Fast Healthcare Interoperability Resources,
5863.52 -> a standard format to enable data sharing across health systems
5867.28 -> in a consistent compatible format.
5870.86 -> So, let's take a look at an example of how HealthLake can be applied
5874.3 -> to one of the most common chronic medical conditions, diabetes.
5878.86 -> Now, early detection and control of diabetes
5881.48 -> is critical to prevent the disease from getting worse
5884.34 -> and can lead to tangible improvements in the quality of life for patients.
5888.61 -> Data can help with earlier diagnosis
5891.4 -> and more fine-grained control over treatment.
5894.66 -> Healthcare organizations receive a lot of data for diabetic patients.
5898.6 -> For just one patient, there are hundreds of thousands of health data
5902.03 -> points from doctors’ notes to prescriptions to blood sugar levels.
5905.66 -> And it is all stored in different silos,
5907.56 -> in dozens of different formats and file types.
5913.22 -> It is a Herculean effort for healthcare organizations
5915.84 -> to organize all of this information for each patient
5918.96 -> and to normalize it for analysis.
5921.1 -> But with HealthLake, we can bring together all of this data
5924.22 -> in minutes with natural language understanding,
5926.83 -> ontology mapping, and medical comprehension.
5930.05 -> HealthLake can load prescriptions and identify
5932.96 -> if a patient has been prescribed a drug like metformin,
5936.17 -> accurately identifying
5937.53 -> and pulling out the medication’s name, dosage and frequency.
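The heavy lifting here is done by clinical natural language processing inside HealthLake, but the shape of the output, a medication name, dosage, and frequency pulled from free text, can be sketched with a toy extractor. The pattern below only handles one tidy phrasing and is purely illustrative; real clinical notes are far messier:

```python
import re

# Toy pattern for lines like "metformin 500 mg twice daily".
RX = re.compile(
    r"(?P<name>[A-Za-z]+)\s+(?P<dose>\d+\s?mg)\s+"
    r"(?P<freq>once|twice|three times)\s+daily",
    re.IGNORECASE,
)

def extract_medication(note):
    """Return {'name', 'dose', 'freq'} for the first match, else None."""
    m = RX.search(note)
    return m.groupdict() if m else None

note = "Continue metformin 500 mg twice daily with meals."
print(extract_medication(note))
# -> {'name': 'metformin', 'dose': '500 mg', 'freq': 'twice'}
```

The service’s extraction additionally maps what it finds onto standard medical ontologies, which is what makes the results queryable across patients.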
5942.32 -> Here, the information from a patient’s blood
5944.52 -> glucose monitoring system can be added.
5946.53 -> HealthLake can load this structured data on an ongoing basis.
5950.54 -> And HealthLake also extracts important information
5953.34 -> from forms like physicians’ notes, insurance forms and lab reports,
5957.93 -> and then adds it to the data lake
5959.62 -> so that it can be queried using a standard nomenclature.
5963.06 -> Separately, these are all just pieces of the puzzle,
5966.22 -> scattered around different silos.
5968.34 -> But when combined, we can start to get a much clearer picture of health.
5973.05 -> With HealthLake, you can bring together
5975.42 -> hundreds of millions of data points across millions of patients
5979.32 -> to paint a picture of the entire diabetic patient population.
5983.31 -> So, now that this data is collected and normalized in HealthLake,
5986.63 -> it is immensely more useful.
5988.87 -> Let's see what we can do with this data
5990.58 -> to start to unlock new insights about this population.
5994.55 -> First, we can identify a subset of patients
5996.86 -> with uncontrolled diabetes with high blood sugar levels
6000.43 -> so as a provider we can adjust the treatment
6002.91 -> and avoid severe complications by better managing the disease.
6006.95 -> To do this, we can query the data directly from the HealthLake console
6010.84 -> to identify these high-risk patients using standard medical terms
6014.83 -> such as medications, diagnoses, or blood sugar levels.
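Because HealthLake speaks FHIR, a query like this can also be expressed as a standard FHIR search. As a hedged sketch, the helper below builds a search URL for hemoglobin A1c observations above a threshold, using the standard LOINC code 4548-4 for HbA1c; the base URL is a placeholder, and in practice it would be your datastore’s FHIR endpoint with signed requests:

```python
from urllib.parse import urlencode

def hba1c_search_url(base_url, min_percent):
    """Build a FHIR search for HbA1c observations above `min_percent`.

    Uses standard FHIR search parameters: `code` with the LOINC system
    and code 4548-4 (Hemoglobin A1c), and a `value-quantity` filter
    with the `gt` (greater-than) prefix.
    """
    params = urlencode({
        "code": "http://loinc.org|4548-4",
        "value-quantity": f"gt{min_percent}",
    })
    return f"{base_url}/Observation?{params}"

# Placeholder endpoint for illustration only
print(hba1c_search_url("https://example.test/fhir", 9.0))
```

Because the query uses standard codes rather than free text, the same search works against any FHIR-conformant store, which is the interoperability point made above.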
6019.07 -> Next, we can use Amazon QuickSight to build a dashboard
6022.24 -> to visualize this data to get a more complete picture.
6025.65 -> We can compare this group of patients
6027.25 -> against others in a similar situation to identify trends.
6031.05 -> And monitor patients to better understand
6033.27 -> how their risk factors change over time
6035.33 -> based on interventions or public health initiatives.
6039.99 -> We can also build predictive models which look forward.
6044.34 -> We can use SageMaker to forecast the number
6046.59 -> of new diabetic cases year-over-year informed
6049.54 -> by millions of points of health data,
6051.53 -> providing a quick easy way to identify health trends
6054.69 -> in patient populations.
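A production forecast like this would be a SageMaker model trained on millions of data points, but the year-over-year idea reduces to fitting a trend and projecting it forward. Here is a minimal least-squares sketch with made-up case counts; the numbers and the linear model are illustrative assumptions only:

```python
# Fit a linear trend to yearly new-case counts and project the next year.
years = [2016, 2017, 2018, 2019, 2020]
cases = [1000, 1100, 1210, 1290, 1400]  # illustrative counts

n = len(years)
mx = sum(years) / n
my = sum(cases) / n

# Ordinary least squares: slope = cov(x, y) / var(x)
slope = sum((x - mx) * (y - my) for x, y in zip(years, cases)) / sum(
    (x - mx) ** 2 for x in years
)
intercept = my - slope * mx

forecast_2021 = slope * 2021 + intercept
print(round(forecast_2021))  # -> 1497
```

A real model would of course capture seasonality and covariates rather than a straight line, which is where a dedicated forecasting algorithm earns its keep.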
6056.62 -> What was once just a pile of disparate and unstructured data
6059.95 -> is now structured, easily read and searched.
6063.61 -> And for every healthcare provider, payer and life sciences company,
6067.34 -> HealthLake helps them get more value out of their health data
6070.66 -> by removing the undifferentiated heavy lifting
6073.37 -> associated with storing, normalizing, organizing
6077.01 -> and understanding their data
6078.68 -> so that they can answer important questions
6080.73 -> which help their patients and improve the quality of their care.
6084.94 -> So, to talk more about how they’re applying machine
6087.14 -> learning to reduce this complexity and provide better care,
6090.05 -> I would like to invite Elad Benjamin,
6091.85 -> the General Manager of Radiology Informatics at Philips
6094.46 -> to talk more about his work. Thanks a lot.
6097.034 -> [applause]
6105.042 -> Hi, everyone. My name is Elad Benjamin.
6106.98 -> I'm the General Manager
6109.01 -> of the Radiology Informatics Business at Philips.
6112.4 -> When you envision how you would like healthcare to be,
6115.43 -> what do you think of?
6116.52 -> For me and many others what comes to mind is quality.
6120.48 -> And what is quality in the context of healthcare?
6123.5 -> It’s the optimal meeting point of speed, accuracy and cost.
6128.12 -> The holy grail of medicine is to reach diagnosis
6131.15 -> and deliver treatment in the least time possible
6133.59 -> with no mistakes at the lowest cost.
6136.07 -> We have the opportunity to get closer to the Holy Grail
6139.21 -> by synthesizing data in new ways.
6142.36 -> And today, I will talk specifically about data analytics,
6145.88 -> machine learning and computer vision.
6148.89 -> Part of the difficulty in healthcare today
6151.13 -> is the abundance of data being generated:
6153.6 -> its quantity, its diversity, and its many sources, from imaging and monitoring to genomics.
6158.44 -> Physicians need to work through those data silos
6161.63 -> and it's getting harder for them to diagnose and treat.
6165.16 -> At Philips, we are trying to help tackle these challenges
6167.54 -> in a number of ways.
6169.12 -> One is HealthSuite. HealthSuite is a foundational layer,
6172.55 -> a cloud-based data platform that consolidates patient records,
6177.17 -> data from wearable or home-based remote medical monitoring equipment,
6181.19 -> and information from insurance companies or healthcare organizations.
6185.5 -> The HealthSuite clinical data lake runs on AWS
6189.19 -> and brings high volume clinical data together
6191.86 -> while meeting regulatory requirements.
6194.62 -> HealthSuite includes dozens of AWS services
6197.26 -> from the edge to the cloud,
6199.16 -> providing the cloud foundation for IoT and remote connectivity
6203.72 -> for smart diagnostic systems,
6206.05 -> operational analytics for optimizing workflows,
6209.75 -> scalable tele-diagnostics for remote and emerging points of care,
6214.45 -> and cloud PACS for integrated diagnostics.
6219.03 -> A specific example within HealthSuite,
6221.61 -> we just announced the availability
6223.17 -> of the new Analyze AI training service.
6226.26 -> It’s a multi-tenant service that provides functionality to submit
6230.48 -> and manage CPU or GPU-based AI, machine learning,
6234.51 -> and deep learning training jobs.
6236.37 -> It uses Amazon SageMaker as the execution engine in the background.
6240.44 -> The training service offers users the ability
6242.43 -> to configure the custom compute environments
6245.07 -> and permitted compute targets.
6246.99 -> It helps users submit and manage the long running training jobs
6250.69 -> connecting to an existing repository and associating its execution
6255.43 -> with required compute environment and targets.
6259.08 -> Within radiology, in order to advance precision diagnosis,
6264.05 -> Philips is applying machine learning and AI tools
6267.11 -> to improve diagnostic systems,
6269.18 -> realizing first-time-right diagnosis
6271.89 -> through clinically relevant and intelligent diagnostics,
6275.48 -> optimized workflows, connecting and integrating workflows
6278.7 -> to drive operational efficiency, integrating insights from imaging,
6283.41 -> monitoring, laboratory, genomics, and longitudinal data
6287.36 -> to help create clear care pathways assisting with decision
6291.05 -> making at pivotal moments of the patient’s journey.
6296.22 -> We build machine learning models using Amazon SageMaker
6299.43 -> to draw insights from the data.
6301.7 -> And in the future, we may use Amazon Transcribe Medical
6304.52 -> and Amazon Comprehend Medical to integrate additional data sources
6308.91 -> and store them in a data lake built on AWS.
6313.03 -> Using AWS machine learning and AI services to streamline building,
6316.99 -> training and deploying our models makes sense.
6320.09 -> AWS builds these services to run at scale
6322.59 -> and cost efficiently, freeing our data scientists
6325.33 -> to focus on higher value activities. Philips and AWS share a common goal,
6330.27 -> to demystify data science and artificial intelligence methods
6334.36 -> and accelerate their use to extract new knowledge
6337.16 -> from health data to improve healthcare delivery.
6340.65 -> At Philips, we are disrupting healthcare
6342.34 -> by bringing together the right information,
6345.23 -> the right tools, to make the right decisions for patients
6348.53 -> and have providers really do what they signed up for.
6352.14 -> Taking care of those patients. We expect to see AWS machine
6356.8 -> learning and AI services continue to be further embedded
6360.09 -> throughout the broader Philips organization.
6362.71 -> There are a number of business areas
6364.94 -> that will benefit from accelerated AI adoption,
6367.82 -> from image-guided therapy to sleep and respiratory care
6371 -> and remote patient monitoring.
6372.81 -> The unlocking of data using ML and AI tools
6376.6 -> will support the fundamental shift from volume to value-based care
6381.27 -> and to a precise diagnosis for each patient.
6384.72 -> Thank you very much.
6387.487 -> [applause]
6393.3 -> Thank you, Elad.
6394.93 -> Finally, the last tenet that I’m going to talk about
6397.87 -> is giving builders the ability to learn continuously.
6402.25 -> Training and education,
6403.67 -> especially in emerging areas like machine learning,
6407.03 -> enables teams to keep up with new technologies
6409.78 -> and fosters innovation throughout an organization.
6413.91 -> At Amazon, one of our leadership principles
6417.24 -> and my favorite one, is learn and be curious.
6420.63 -> We encourage everyone, including our builders,
6423.78 -> to try new things, learn new technologies,
6426.77 -> and stay curious about the world around us.
6429.78 -> This is one of the reasons why Amazon has been at the forefront
6433.75 -> in adopting disruptive technologies like machine
6436.62 -> learning before they were even mainstream.
6441.19 -> In fact, early on in our adoption journey of machine learning,
6444.99 -> we developed Machine Learning University,
6447.07 -> which we have used for over six years to train our engineers on ML.
6451.19 -> Now, to help others benefit from this content,
6455.12 -> we've made it available for free for anyone to learn
6458.82 -> and launched a certification for machine learning on AWS.
6462.88 -> And developers cannot get enough of it.
6465.5 -> Based on this demand, we also developed content for massive open online
6469.62 -> course platforms such as Udacity, Coursera,
6472.56 -> and edX to bring practical applications of machine
6476.05 -> learning to more people.
6478.68 -> Also, to make more complex machine
6481.08 -> learning concepts like reinforcement learning,
6483.65 -> deep learning, and GANs more accessible,
6487.17 -> we created our educational devices
6489.86 -> like DeepRacer, DeepLens, and DeepComposer.
6493.6 -> Over the years, programs like DeepRacer,
6496.41 -> our fully autonomous one eighteenth scale race car
6499.71 -> driven by reinforcement learning, have built a loyal fan base
6503.64 -> and the teams continue to bring new experience
6506.76 -> to our DeepRacer leagues.
6511.74 -> And over one hundred and fifty customers globally,
6513.77 -> including Capital One, Moody’s, Accenture,
6516.71 -> DBS Bank, JP Morgan Chase, BMW, and Toyota,
6520.09 -> have held events for their workforce
6524.42 -> and trained thousands of developers.
6527.15 -> Now, let's take a look at the fun we had with DeepRacer last year
6531.19 -> and a look ahead at what's next with DeepRacer this year.
6535.83 -> [applause]
6538.19 -> [revving engines]
6543.078 -> [revving engines and techno music playing]
6550.875 -> What am I getting myself into?
6553.602 -> [techno music playing]
6559.032 -> Welcome to another AWS DeepRacer Underground.
6562.734 -> [techno music playing]
6567.6 -> Oh, wow. OMG.
6570.249 -> [techno music playing]
6572.359 -> [revving engines]
6574.822 -> Go. Go. Go. Go.
6577.2 -> [revving engines - techno music playing]
6593.286 -> [applause]
6599.77 -> Some exciting stuff coming from DeepRacer.
6603.44 -> Over the past few years,
6604.65 -> machine learning has come an incredibly long way.
6607.74 -> The barriers to entry have been significantly lowered,
6611.1 -> enabling builders to quickly apply machine
6613.42 -> learning to their most pressing challenges and biggest opportunities.
6618.15 -> This was never more apparent than in the wake of the pandemic.
6621.89 -> Our customers needed to move faster
6623.86 -> than ever to respond to the changing world.
6626.69 -> They applied machine learning to create new ways
6629.13 -> to interact with customers, reimagine the way we work and learn,
6633.39 -> and automate business processes to react faster to customer needs.
6638.65 -> They applied machine learning to tracking the disease,
6641.58 -> finding new ways to care for patients,
6643.92 -> and to speed up vaccine discovery.
6646.59 -> They were able to do all this because their builders were free
6649.8 -> to harness the potential of machine learning.
6652.47 -> Free to build remarkable technology on top of it.
6657.96 -> Enabling this freedom is what our team is passionate about.
6661.34 -> It is what drives our own innovation
6663.74 -> and it is why we push out new features nearly every single day.
6668.43 -> In fact, we have so many things launching during re:Invent that,
6672.41 -> even between myself and Andy, we are not able to announce them all.
6676.31 -> So, be sure to check out the more than 50 ML sessions
6680.26 -> that we have available throughout the event.
6682.83 -> Thank you and have a great rest of your re:Invent.
6687.442 -> [music playing]

Source: https://www.youtube.com/watch?v=PjDysgCvRqY