AWS re:Invent 2022 - Innovate with AI/ML to transform your business (AIM217-L)

AI/ML can make your business a disruptive innovator in your industry. But, you might encounter barriers to get started and scale AI/ML. In this session, Bratin Saha, VP of AWS AI and ML Services, explains how AWS customers have overcome these barriers by using AWS AI/ML services, fueling business profitability and growth. Bratin also dives deep into the latest trends in AI/ML and how they are enabled by the newly launched AWS capabilities.

Learn more at https://go.aws/3VJe15v

Subscribe:
More AWS videos http://bit.ly/2O3zS75
More AWS events videos http://bit.ly/316g9t4

ABOUT AWS
Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts.

AWS is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

#reInvent2022 #AWSreInvent2022 #AWSEvents


Content

2.636 -> [music playing]
4.605 -> Please welcome Vice President Machine Learning and AI Services,
8.809 -> Bratin Saha.
10.277 -> [music playing]
18.819 -> Good afternoon, everyone.
20.287 -> Welcome and thank you for being here.
24.224 -> I'm Bratin Saha, VP of AI and Machine Learning Services at AWS.
29.596 -> When I earned my PhD in computer science,
32.466 -> machine learning was a largely academic pursuit,
35.936 -> but over the last five years, machine learning has transitioned
39.673 -> to become a rapidly growing, mainstream endeavor,
43.31 -> and I feel incredibly fortunate
45.012 -> to have been part of AWS during this time
47.314 -> and to have been leading machine learning at AWS during this time,
51.218 -> because in large part,
53.353 -> this transformation has been driven by AWS.
57.724 -> This also gave me the opportunity to help build
60.46 -> one of the fastest-growing services in AWS history,
64.464 -> with more customers doing machine learning on AWS than anywhere else.
71.638 -> Now, as a reminder, the AI and machine learning services
76.443 -> are part of AWS's overall portfolio of data services
80.514 -> that Swami talked about in his keynote this morning.
84.051 -> Now, machine learning helps computers learn from data,
89.223 -> identify patterns, and make predictions,
93.26 -> and so, where the AI and ML services come in,
95.863 -> within the entire AWS portfolio of data services,
99.433 -> is when you're trying to extract insights from your data
102.903 -> and then act on those insights.
106.94 -> Now, before I get into the details of our AI and machine learning,
111.178 -> let me share an interesting anecdote with you.
114.815 -> So, from time to time,
116.783 -> my friends and colleagues send me books and articles
120.521 -> and events of interest in machine learning.
123.857 -> So, five years back, I would get books like this.
129.296 -> You know, this is a really deep book on machine learning
132.533 -> that has been written for experts and data scientists and researchers.
137.938 -> These days, however, I get book links like this.
142.91 -> Yes, babies are now part of the machine learning community,
150.317 -> and that tells me machine learning is, indeed, getting democratized,
157.724 -> and the data seems to back this up.
161.361 -> According to a McKinsey survey on the adoption of AI,
165.866 -> almost 60% of companies now say that they use AI
170.804 -> in at least one function in their organizations,
174.842 -> and that shows how machine learning
176.81 -> has transitioned from being a niche activity
180.414 -> to becoming integral to how companies do their business,
185.319 -> and in large part, AWS drove this transformation
190.057 -> by building the broadest and deepest set of machine learning services,
195.495 -> and as a result, today, over 100,000 customers
201.068 -> do machine learning on AWS.
204.371 -> Now, customers approach machine learning in one of three ways,
209.243 -> and therefore, at AWS,
210.811 -> we have built three layers of machine learning services,
213.981 -> so we can meet customers where they are.
219.253 -> At the bottom layer are the machine learning infrastructure services.
224.591 -> This is where we provide the machine learning hardware
227.628 -> and the machine learning software
230.397 -> that customers can use to build their own machine learning infrastructure,
235.335 -> and this is meant for customers with highly custom needs,
238.472 -> and that is why they want to build
240.073 -> their own machine learning infrastructure.
244.011 -> At the middle layer is Amazon SageMaker.
247.414 -> This is where AWS builds the machine learning infrastructure,
252.119 -> so that customers can focus just on the differentiated work
256.256 -> of building machine learning models,
258.926 -> and because customers just focus on the differentiated work,
262.529 -> that is where most ML builders are,
266.3 -> and then at the top layer are our AI services.
270.037 -> This is where AWS embeds machine learning into different use cases,
274.675 -> such as personalization, such as forecasting,
277.211 -> anomaly detection, speech transcription, and others,
281.048 -> and because AWS embeds machine learning into these services,
285.152 -> and customers can call these services,
288.055 -> they're able to embed machine learning into their applications
291.458 -> without requiring any ML expertise.
298.498 -> Now, customers across every domain and across every geo,
302.87 -> more than 100,000 of them, as I said,
305.739 -> are using these services to innovate at a very rapid clip,
310.611 -> and since machine learning is now so important for innovation,
315.983 -> I want to spend the rest of this time talking about the key trends
320.954 -> that drive machine learning innovation,
323.857 -> the key enablers that let customers scale
327.361 -> out their machine learning innovation,
330.13 -> so that, as you're thinking about your own machine learning strategy,
333.734 -> you can consider how you can leverage these key trends,
339.106 -> and they should also give you an idea of where machine learning is headed.
347.147 -> To innovate with machine learning,
349.583 -> it's really important to be able to leverage these six key trends.
358.358 -> First is the exponential increase
361.862 -> in the sophistication of machine learning models
364.198 -> and being able to use the latest models.
369.069 -> Next is harnessing the variety of data
374.107 -> available to train machine learning models.
378.679 -> Then comes machine learning industrialization,
381.815 -> or the standardization of machine learning infrastructure and tools.
387.921 -> Then there are ML-powered use cases
390.591 -> or automating use cases by embedding machine learning into them.
396.73 -> Then there is responsible AI
399.7 -> or making sure
400.801 -> that we are using machine learning in an appropriate way,
404.671 -> and finally, is ML democratization, in other words,
409.142 -> making sure that more users
411.745 -> have access to machine learning tools and skills.
415.716 -> Let's now dive deeper into the first trend,
418.318 -> and that is the exponential increase
421.188 -> in the sophistication of machine learning models,
423.29 -> and how you can use these latest models.
427.661 -> Now, one way in which we measure
431.465 -> the sophistication of machine learning models
433.734 -> is by counting the number of parameters
436.069 -> within these models.
437.638 -> You can think of parameters as like variables or values
440.44 -> that are embedded inside machine learning models.
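
To make the idea of parameters concrete, here is a minimal PyTorch sketch (an illustration, not anything shown in the talk) that counts the trainable parameters of a small stand-in network:

    import torch.nn as nn

    # A tiny stand-in model; real foundation models hold billions of these values.
    model = nn.Sequential(
        nn.Linear(768, 3072),
        nn.ReLU(),
        nn.Linear(3072, 768),
    )

    # Each trainable value is a "parameter" embedded inside the model.
    num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"{num_params:,} trainable parameters")

The same one-liner applied to a 2019-era state-of-the-art model would report on the order of 300 million parameters, versus more than 500 billion today.
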
443.777 -> Now, in 2019,
445.946 -> the state-of-the-art machine learning models
449.049 -> had about 300 million parameters.
452.152 -> Now, the state-of-the-art models
456.49 -> have more than 500 billion parameters.
461.662 -> In other words, in just three years,
464.298 -> the sophistication of machine learning models
466.567 -> has increased by 1,600 times.
471.839 -> Now, these models are also called foundation models,
476.143 -> and because they're so massive,
478.245 -> you can actually train them once on a lot of data
481.481 -> and then reuse them for a variety of tasks,
484.484 -> and as a result,
486.52 -> they reduce the cost and effort of doing machine learning
489.89 -> by an order of magnitude.
492.559 -> In fact, within Amazon,
495.362 -> we also use these foundation models for a variety of tasks,
499.933 -> and one of the tasks that we use these foundation models for
503.604 -> is actually software development, and we are now making this available
508.509 -> to our customers through Amazon CodeWhisperer,
512.346 -> and so, I'm very happy to announce that Amazon CodeWhisperer
516.483 -> is now open for all developers.
526.026 -> Now, Amazon CodeWhisperer
528.428 -> is a machine learning powered coding assistant,
532.065 -> and it generates code just like a developer writes code.
536.837 -> It's based on a giant foundation model that has been trained
540.44 -> on billions of lines of code, and it comes integrated with IDEs.
546.313 -> IDEs are the tools that software developers use
548.849 -> for writing their programs,
550.918 -> and so, as a developer is writing the program in the IDE,
555.455 -> CodeWhisperer understands the intent of the developer,
559.593 -> and by understanding what the developer is trying to do,
563.13 -> CodeWhisperer is able to generate code
566.233 -> just like a developer writes code.
570.437 -> Let me give you a demo of how CodeWhisperer works.
575.142 -> So, you have this IDE, and the programmer is writing some code.
579.68 -> CodeWhisperer looks at this code and understands
582.916 -> this is Python code.
584.718 -> CodeWhisperer also understands that the developer wants to use AWS
589.022 -> APIs just by looking at the libraries being used.
592.693 -> Now, all that the developer has to do is write a comment.
597.197 -> The developer says, hey, generate me a function
601.535 -> that loads data from S3 and uses encryption.
605.806 -> That's all that the developer has to do.
608.842 -> CodeWhisperer looks at the context, looks at the comment,
612.312 -> understands what it is that the developer wants to do,
616.817 -> and then CodeWhisperer automatically generates the code.
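
As an illustration of what that generated code might look like (a plausible sketch, not the exact output shown on stage; the function, bucket, and key names are made up), assuming the boto3 SDK:

    import boto3

    def load_data_from_s3(bucket_name, object_key):
        """Load an object from S3. Objects encrypted server-side with SSE-KMS
        are decrypted transparently by S3 when they are read."""
        s3 = boto3.client("s3")
        response = s3.get_object(Bucket=bucket_name, Key=object_key)
        return response["Body"].read()

    # Hypothetical usage with placeholder names.
    data = load_data_from_s3("example-bucket", "datasets/sales.csv")
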
623.056 -> This is amazing.
625.425 -> This is transformational,
629.062 -> and I would encourage all of you,
630.764 -> and I would encourage all of the software
632.633 -> developers in your organizations to go try out CodeWhisperer,
637.771 -> because it's going to change the software development paradigm.
644.178 -> Now, many other customers are also using foundation models.
649.75 -> To hear more about this, let's look at this video from LG AI.
654.922 -> [KOREAN SPOKEN]
722.523 -> Isn't it amazing
724.358 -> that a machine learning model is generating fashion designs
729.93 -> that were displayed at the New York Fashion Week?
734.168 -> I mean, 18 months back, this was unthinkable,
739.773 -> and so, customers are now telling us
742.943 -> that they want to be able to use foundation models on AWS,
746.88 -> and they want us to make these foundation models
749.55 -> available to them on SageMaker,
752.119 -> because they don't want to have to build
754.188 -> these foundation models themselves,
757.09 -> and so, I'm very happy to announce that foundation models
760.794 -> from Stability AI are now available on SageMaker.
770.637 -> These models are some of the most popular
774.508 -> foundation models available today,
777.177 -> and these are going to be transformational.
779.713 -> These are going to be able to act as assistants to your creative work,
785.419 -> and so, I'm very happy to welcome Emad Mostaque,
788.989 -> the CEO and founder of Stability AI.
792.159 -> Welcome, Emad.
793.393 -> [applause]
797.431 -> Hi, everyone.
798.632 -> Thank you, Amazon, AWS, for having me here today.
803.303 -> Stability AI is a company we set up 13 months ago,
806.006 -> actually, it's been quite a period.
808.642 -> Our mission is to build the foundation
810.344 -> to activate humanity's potential through AI.
812.479 -> What does that mean?
813.68 -> These foundation models are just so amazingly flexible,
816.884 -> trained on almost the entirety of human knowledge.
822.022 -> The nature of these things is that we've seen gigantic models,
825.592 -> 540 billion parameters.
827.427 -> We've seen flexible models that can do fashion,
829.296 -> as we've seen, text and others, but we thought,
831.932 -> what if we made these models available to everyone?
833.734 -> What if we built AI for the people by the people?
837.204 -> To do that, we developed communities.
839.072 -> So, we have OpenBioML doing protein folding,
841.475 -> Carper doing code, and other models, Harmonai doing audio,
845.112 -> EleutherAI doing language,
847.648 -> and these communities have tens of thousands of developers
849.583 -> that work with our core team
850.684 -> and our partners to build some of the most advanced foundation models
853.554 -> in the world that we then give away to everyone.
856.123 -> We give it away to stimulate this sector
858.258 -> and to see what can we create around this.
862.596 -> The most famous model that we've released is Stable Diffusion,
864.898 -> which was led by the CompVis Lab
867.501 -> at the University of Munich with our team,
870.304 -> Runway ML, EleutherAI, LAION, and many others contributing,
874.208 -> and it's an interesting model.
876.21 -> In just two gigabytes of file size,
879.58 -> it can generate any image in any style.
882.082 -> We took 100,000 gigabytes of images and labels
884.918 -> to compress it down to that, and it's been an absolute revolution.
888.555 -> So, these, you just type in floral wolf,
890.724 -> a color splash lady, or a cat knight,
893.393 -> and that's what you get, all in a matter of seconds.
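
For a sense of what that looks like in code, here is a minimal sketch using the open-source Hugging Face diffusers library, one common way to run Stable Diffusion outside of SageMaker; the model ID and prompt are illustrative assumptions:

    import torch
    from diffusers import StableDiffusionPipeline

    # Load Stable Diffusion weights from the Hugging Face Hub (model ID is an assumption).
    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
    ).to("cuda")

    # Type a prompt such as "floral wolf" and get an image back in seconds on a GPU.
    image = pipe("a floral wolf, color splash style").images[0]
    image.save("floral_wolf.png")
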
896.864 -> It’s taken the world by storm.
899.766 -> This is the time it took to get to 40,000 GitHub stars.
902.903 -> So, Ethereum and Bitcoin just got there.
905.038 -> If you look on the left-hand side, there, yep,
907.474 -> that’s Stable Diffusion in 90 days,
910.077 -> so one of the most popular pieces of software ever, let alone AI.
913.981 -> You can see Kafka and Cockroach and kind of other things there.
917.15 -> The developer community is hundreds of thousands strong,
919.953 -> building hundreds of different applications.
921.989 -> It runs on your MacBook M1 without internet.
924.491 -> It runs on your iPhone now.
927.027 -> This is a step change.
929.897 -> Last week, we were proud to release
931.231 -> Stable Diffusion 2.0, developed entirely at Stability,
934.568 -> which is another step forward.
936.57 -> It's a cleaner dataset, better quality, less bias, and faster.
940.44 -> These are some of the example images that were created from that.
944.611 -> We worked very hard to listen to community feedback,
947.247 -> and so, we made it safer.
948.882 -> We have attribution mechanisms coming in,
951.218 -> and we built this all on AWS.
954.555 -> We're happy now to turn and take this forward,
956.123 -> and I'll give you some examples of the types of things that you can do.
960.561 -> It's hit photorealism, or at least, it's approaching that.
963.33 -> These people do not exist.
966.967 -> This content does not exist.
969.736 -> These were created in two seconds on G5s.
975.742 -> These interiors do not exist,
977.678 -> but they do now, just from a few words of description.
981.448 -> This is a revolution, and you can take this general model
983.65 -> and create anything, or Suraj Patel at Hugging Face
988.488 -> took ten images and created a Mad Max World
990.924 -> and a Mad Max model in just an hour.
993.694 -> You can take your own content and bring it to these models,
996.23 -> or in fact, you bring the models to your data.
998.732 -> This is one of the revolutions that we've seen,
1000.334 -> because typically, you've had to do massive training tasks,
1003.003 -> where these models know about the world,
1004.771 -> and then you can extend that knowledge.
1006.673 -> So, hopefully, we don't end up like that,
1008.275 -> although the cars are kind of cool,
1011.345 -> but it's not enough just to have the models that can do anything.
1013.68 -> What if the images aren't quite right?
1015.582 -> We released Depth to Image, which creates a 3D depth map
1018.552 -> so that you can transform one image into another, just with words.
1022.055 -> You can use it, for example, to transform a CEO into a robot
1025.692 -> or something else, you know,
1028.829 -> but then, if that image itself isn't correct, we can do inpainting.
1032.065 -> We can make him cool.
1034.868 -> The ability to transform and adjust these pictures is amazing,
1038.472 -> and it'll be through natural language and new interfaces.
1041.708 -> Beyond that, you can have things like our four times Upscaler,
1044.444 -> soon to be eight times.
1045.979 -> It's a bit like enhance, enhance, enhance on a procedural TV show.
1051.251 -> This technology is revolutionary.
1052.452 -> I mean, look at the whiskers there.
1053.787 -> It's fantastic, and this technology is getting faster and faster
1056.89 -> and better and better.
1058.292 -> When we released Stable Diffusion in August,
1060.26 -> oh gosh, 23rd of 2022, it took 5.6 seconds to generate an image.
1065.966 -> Now, it takes 0.9 seconds, thanks to the work of our partners at NVIDIA.
1070.504 -> Today, I'm proud to announce Distilled Stable Diffusion,
1073.273 -> this will be a paper released today,
1075.309 -> and the model's available very soon on SageMaker.
1079.213 -> We've managed to get a ten times improvement in speed.
1081.882 -> So, it's not 0.9 seconds anymore.
1084.451 -> It usually takes 50 steps of iteration to get to that image,
1088.322 -> those images that you just saw, in one second.
1090.924 -> Now, it takes five, and in fact, in the last 24 hours
1094.328 -> since I submitted this, it now takes two.
1098.966 -> What does that mean?
1100.1 -> It means you're heading towards real-time generation of images
1104.004 -> in high resolution.
1107.374 -> That is completely disruptive for every creative industry,
1110.611 -> and it's something everyone has to get used to now,
1112.312 -> or any image generation industry,
1114.448 -> because what we've done in the last year
1115.782 -> is we've actually enabled humans to communicate visually.
1118.986 -> Talking is the easiest, then writing.
1120.487 -> Visual communication is awful, especially slides.
1123.69 -> We'll be able to make this PowerPoint presentation
1126.46 -> just by talking within the next couple years, and that's amazing.
1130.831 -> That's why we're delighted to work with SageMaker.
1133.867 -> AWS and Stability worked together to build
1136.703 -> one of the largest open-source public cloud clusters in the world.
1139.806 -> We have nearly, gosh, over 5,000 A100s.
1145.245 -> Working with SageMaker, we have unprecedented quality
1148.315 -> of output, unprecedented resilience, and this is across our model suite.
1153.253 -> So, for example, GPT NeoX from our eleutherai community
1156.623 -> is the most popular language model foundation in the world.
1159.493 -> It's been downloaded 20 million times.
1162.763 -> Working with SageMaker, we took it on 500 to 1,000
1165.832 -> A100s (to give you an example,
1167.668 -> the fastest supercomputer in the UK is 640),
1171.004 -> from 103 teraflops to 163 teraflops within a week,
1175.843 -> a 60% performance increase.
1178.512 -> Scaling our infrastructure is incredibly hard.
1180.514 -> Making these models available is incredibly hard.
1182.683 -> We think that with SageMaker, with the broader Amazon suite,
1186.053 -> we can bring this technology,
1187.254 -> to everyone to create not only one model for someone
1190.224 -> but create models all around the world and make this accessible.
1193.427 -> We have audio, video, 3D, code, and all other models coming,
1197.231 -> and these will be available as tools to use in CodeWhisperer
1199.533 -> and others for you to create amazing new things
1202.302 -> to activate the potential of your businesses,
1203.937 -> your community, and humanity,
1206.039 -> and we're super excited to see what you're going to create.
1208.709 -> Thank you, everyone.
1210.477 -> [applause]
1217.217 -> Thank you, Emad.
1219.753 -> I mean, I'm really excited by what customers
1222.656 -> will be able to do with Stable Diffusion on AWS.
1226.126 -> You can imagine, as these models start
1229.162 -> developing photorealistic images and start doing it in real-time,
1233.734 -> all kinds of content generation will get disrupted.
1238.872 -> Now, I talked about foundation models,
1243.076 -> and they have billions of parameters,
1246.813 -> and they need terabytes of data to be trained,
1250.784 -> and that means they need lots of compute,
1254.221 -> and they need lots of compute at very low cost,
1258.525 -> and that is why AWS
1260.527 -> is also innovating on machine learning hardware.
1269.603 -> AWS Trainium is a purpose-built machine learning processor
1274.641 -> that has been designed from the ground up
1278.212 -> for machine learning tasks.
1280.681 -> In fact, compared to GPUs,
1283.65 -> it has twice the number of accelerators, 60% more memory,
1290.023 -> and twice the network bandwidth,
1293.56 -> and so, what this means is that Trainium can provide
1296.53 -> you more compute power than any other processor in the cloud,
1301.969 -> and not just that.
1303.737 -> Trainium provides you the lowest cost of any processor in the cloud,
1309.643 -> and because it has such a compelling value proposition,
1313.413 -> we have been collaborating with a lot of customers
1316.283 -> for developing Trainium,
1318.452 -> and so, to hear more about this collaboration,
1320.888 -> let's listen to Aparna Ramani,
1322.856 -> who's the VP of AI and data infrastructure at Meta.
1327.594 -> Hello, I'm Aparna Ramani, VP of AI, data,
1331.532 -> and developer infrastructure engineering at Meta,
1334.635 -> and PyTorch Foundation board member.
1337.404 -> It is my pleasure to talk about Meta's AI relationship with AWS.
1342.075 -> Our collaboration has been expanding since 2018,
1345.212 -> when Meta AI researchers started using AWS
1347.881 -> for state-of-the-art AI research.
1350.217 -> PyTorch is seeing great adoption among large enterprises and startups
1354.821 -> and is a leading machine learning framework today.
1357.691 -> For years now, Meta’s PyTorch engineers have been collaborating
1360.961 -> with AWS on key PyTorch projects,
1363.297 -> such as co-leading and maintaining TorchServe
1366.466 -> and making open source contributions to TorchElastic.
1369.203 -> More recently, we've been working together
1371.438 -> on PyTorch enhancements for AWS
1373.407 -> purpose-built ML chips: Inferentia and Trainium.
1377.711 -> We are excited to see AWS launch Trainium-based EC2 instances.
1382.649 -> Our engineers saw near-linear scaling
1385.285 -> across the Trainium cluster for large language models.
1388.222 -> Meta has also collaborated extensively with AWS
1391.258 -> to provide native PyTorch support
1392.86 -> for these new Trainium-powered instances.
1395.562 -> AWS contributed a new XLA backend to TorchDistributed,
1399.366 -> that makes it really easy to migrate your models to Trainium instances.
1402.636 -> This also enables developers to seamlessly integrate PyTorch
1405.706 -> with their applications and leverage the speed of distributed
1408.942 -> training libraries and models.
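
To give a flavor of that migration path, here is a minimal training-step sketch against the PyTorch/XLA API that the Neuron/Trainium integration builds on; the model, data, and hyperparameters are stand-ins, not Meta's or AWS's actual code:

    import torch
    import torch.nn as nn
    import torch_xla.core.xla_model as xm  # PyTorch/XLA, the backend used for Trainium

    # Place a stand-in model on the XLA device (a Trainium core on a Trn1 instance).
    device = xm.xla_device()
    model = nn.Linear(512, 10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # One illustrative training step on random data.
    inputs = torch.randn(32, 512).to(device)
    labels = torch.randint(0, 10, (32,)).to(device)

    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    xm.optimizer_step(optimizer, barrier=True)  # steps the optimizer and flushes the XLA graph
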
1410.511 -> We look forward to continuing our collaboration
1412.579 -> through the PyTorch Foundation and beyond.
1423.524 -> I'm truly thankful to the Meta team,
1426.093 -> because I think this collaboration between AWS and Meta
1429.463 -> is going to make it much easier to use Trainium, PyTorch,
1433.3 -> and do machine learning on AWS.
1436.937 -> Let me now get to the next key trend
1439.907 -> that drives machine learning innovation,
1442.309 -> and that is harnessing the variety of data
1445.946 -> available to train machine learning models,
1448.015 -> harnessing multiple modalities of data
1450.817 -> to train machine learning models.
1453.253 -> Now, data fuels machine learning, and so, at AWS,
1457.157 -> we have been building a variety of data processing capabilities,
1460.761 -> so that customers can prepare a variety of data,
1464.231 -> multiple modalities of data, as I mentioned.
1467.234 -> So, you have SageMaker Ground Truth
1469.102 -> that can be used for processing images,
1471.238 -> audio, video, text, and other forms of unstructured data.
1475.409 -> You have SageMaker Data Wrangler that can be used
1477.878 -> for processing structured data, and then you have SageMaker notebooks
1482.482 -> that can be used for Spark-based data processing,
1486.386 -> and all of these are allowing customers
1488.889 -> to train machine learning models to extract insights from data,
1493.994 -> insights that let machine learning systems
1498.398 -> answer the who and the what.
1502.569 -> So, for example, if I take a trained machine learning system,
1506.306 -> and I show it this image, and I ask, what is this image about,
1512.246 -> it'll actually be able to answer, this is an image of a football game,
1516.95 -> and if I ask, who are in this image,
1521.655 -> it'll actually be able to identify all the players in this image,
1527.294 -> but if I ask when was this game played,
1532.332 -> where was this game played,
1535.235 -> unfortunately, machine learning models do not do a good job
1539.306 -> of answering the when and the where,
1542.609 -> but ironically, most of the data generated in the world today
1546.78 -> actually comes tagged with geospatial coordinates
1550.15 -> that let you answer the when and the where.
1553.32 -> It's just that it's too hard to process this data,
1557.057 -> and that's because it needs special visualization tools
1560.494 -> and special data processing parameters,
1564.231 -> but it's important to answer the when and the where,
1568.735 -> and that is why we are augmenting our machine learning capabilities
1572.573 -> to train with geospatial data.
1576.743 -> At this morning’s keynote, we announced the public preview
1580.38 -> of SageMaker’s geospatial machine learning capabilities
1584.184 -> that will now allow customers to train models with geospatial data
1589.056 -> and answer the when and the where, now.
1594.294 -> [applause]
1598.832 -> Now, the automotive industry uses geospatial data in a variety of ways.
1603.871 -> For example, BMW uses geospatial data for many different use cases.
1609.543 -> To talk more about this, I'm pleased to welcome Marco Görgmaier,
1614.915 -> the general manager of AI and Data Transformation at BMW.
1619.887 -> [music playing]
1627.828 -> So, thank you, Bratin.
1629.263 -> Good afternoon, everyone. It's great being here with you.
1632.099 -> My name is Marco Görgmaier,
1633.4 -> and I'm heading our Data Transformation
1635.269 -> Artificial Intelligence unit at the BMW group.
1638.372 -> So, the vision and the mission of our team
1640.641 -> is to drive and scale business value creation
1643.544 -> through the usage of AI across our value chain.
1648.315 -> Now, looking to our products, at the BMW group,
1650.817 -> we believe that individual mobility
1653.353 -> is more than just moving the body from A to B.
1656.323 -> We believe it's also about touching the heart,
1658.992 -> stimulating the mind,
1661.228 -> and what you see here is the BMW i Vision Circular.
1664.731 -> It's a compact, all-electric vehicle that shows how a sustainable
1670.137 -> and luxurious approach could look in the future,
1672.94 -> and we believe this future is electric, digital, and circular.
1678.378 -> So, today, I have an exciting use case for you
1680.714 -> where we touch on all three of those areas,
1684.218 -> and before I jump right into the use case,
1686.72 -> I just want to give you a short overview
1689.389 -> of where we stand with our data and AI transformation.
1692.659 -> So, we've built up our data analytics
1695.295 -> and our AI ecosystem at the BMW group,
1698.432 -> and we have more than 40,000 of our employees engaged here,
1702.035 -> and they created thousands of curated data assets
1704.905 -> in the company that can be reused and that bring siloed data together,
1709.309 -> and based on this, they were able to deliver more than 800 use cases
1713.68 -> with more than 1 billion US dollars in value since 2019.
1717.751 -> So, we’re taking this transformation very seriously,
1720.754 -> and one main area where we focus on is sustainability,
1724.992 -> and today, I want to drive you through one specific area there,
1729.363 -> namely, mobility.
1732.566 -> So, around 60% of the world's population
1735.335 -> lives in cities and urban areas,
1737.404 -> and that's also where 70% of greenhouse
1740.073 -> gas emissions are generated.
1742.676 -> So, clearly, we can make the biggest contribution here,
1745.479 -> and that's why we, the BMW group, are getting involved here,
1749.483 -> and our vision,
1751.518 -> and also, the idea here is to assist city planners
1754.054 -> in solving problems in those urban areas,
1756.89 -> and let me give you three examples how we do this already today.
1760.394 -> So, we are able to train machine learning models to predict
1764.464 -> how new traffic regulations, for example,
1766.9 -> E-drive zones can probably reduce traffic
1771.271 -> and gas emissions locally.
1774.007 -> We can also help identify where we have insufficient
1777.177 -> charging infrastructure, since obviously,
1779.947 -> that prevents people from switching to an electric vehicle,
1784.084 -> and the last example here, based on machine learning models,
1787.421 -> we can predict how changes in pricing policies,
1790.09 -> for example, for parking or the use of certain streets,
1793.527 -> can impact drivers' commuting routes,
1796.363 -> and therefore, estimate the traffic and emissions.
1801.235 -> So, all of these problems,
1803.37 -> they're characterized by geospatial information.
1806.273 -> So, to solve them, we had to extensively use geo services
1810.11 -> within machine learning,
1811.512 -> such as map matching, efficient geo hashing, or digital maps,
1816.65 -> and we opted to test the new geospatial capabilities
1820.32 -> Bratin just mentioned, and let's see, how and with what results.
1826.059 -> So, specifically for our fleet customers,
1828.762 -> so large company fleets, it's difficult to foresee
1832.866 -> how their share of electric vehicles will look in the future.
1839.406 -> So, we set ourselves the goal of training machine learning models
1839.406 -> to learn correlations between engine type and driving profiles.
1843.911 -> The rationale behind this was, if such a correlation would exist,
1847.414 -> then the model could learn to predict
1849.249 -> the affinity of certain drivers for an electric vehicle,
1851.852 -> based on their profiles.
1853.62 -> Of course, we did this with fully anonymized data,
1856.056 -> and also, only on a fleet level.
1858.592 -> So, we could never draw any conclusions to individual drivers.
1862.93 -> So, now let's see how the solution works.
1867.668 -> So, we started from anonymized raw GPS data
1870.37 -> of where vehicles are driven and parked,
1873.14 -> and then we converted those GPS traces into routes
1876.109 -> using map matching, and if a route were a sentence,
1880.681 -> then the landmarks along the route would be words.
1883.283 -> So, we used a natural language processing model to predict
1885.986 -> which routes are likely to be taken by EV drivers.
1890.624 -> In parallel, we built a second model to cluster vehicle
1893.327 -> parking locations to predict where EVs are likely to be parked,
1898.198 -> so for example, near charging infrastructure.
1902.536 -> Then we merged the two models to triangulate the predictions,
1906.106 -> and at the end of the training,
1907.441 -> the hybrid model was capable of predicting
1909.943 -> how likely it was for specific fleets to convert to EV,
1913.347 -> with an accuracy of more than 80%.
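
To make the route-as-a-sentence idea concrete, here is a toy sketch (in no way BMW's actual model or data) that treats each route as a sequence of landmark IDs and trains a simple text-style classifier on it, assuming scikit-learn:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Each "sentence" is a route written as space-separated landmark IDs (toy data).
    routes = [
        "charger_A office_park charger_B residential_1",
        "highway_on_ramp gas_station_3 industrial_2",
        "residential_4 charger_C school_1 charger_A",
        "gas_station_1 highway_on_ramp depot_7",
    ]
    is_ev_route = [1, 0, 1, 0]  # 1 = route driven by an EV, 0 = combustion vehicle (toy labels)

    # Bag-of-landmarks features plus logistic regression stand in for the NLP model.
    model = make_pipeline(CountVectorizer(token_pattern=r"\S+"), LogisticRegression())
    model.fit(routes, is_ev_route)

    # Probability that an unseen route belongs to an EV driver.
    print(model.predict_proba(["office_park charger_B residential_1"])[:, 1])
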
1919.186 -> So, let me show you three things that really helped us here
1923.457 -> to be so quick on building the solution.
1926.293 -> So, one advantage of SageMaker geospatial capabilities
1929.897 -> is the standardization of common APIs
1933.2 -> to access, transform, and enrich geospatial data.
1936.336 -> So, for example, for reverse geocoding,
1939.806 -> SageMaker provides a single managed interface to APIs
1943.343 -> through the integration with Amazon Location Service,
1946.446 -> which in turn sources high-quality geospatial data from Esri
1949.383 -> and HERE.
1951.285 -> So, second thing is, with SageMaker, you have pre-built algorithms
1956.823 -> that split the raw dataset along geospatial boundaries.
1960.427 -> So, the data can be used for training and inference,
1963.83 -> and in the end, of course, you need to visualize,
1967.401 -> and there are great pre-built visualization tools
1970.47 -> really tailored to geospatial data.
1975.275 -> So, to sum it up, yeah, we went from idea to solution in just eight weeks,
1980.881 -> and with a high prediction accuracy of 80% in that short time.
1985.719 -> So, it was really great using the services, and they helped,
1989.356 -> but we also had a great collaboration
1991.191 -> with the Amazon Machine Learning Solutions Lab team
1994.628 -> and our internal BMW team.
1996.63 -> So, Bratin, thank you very much for the great collaboration,
2003.403 -> and let’s move on.
2004.638 -> [music playing]
2012.379 -> Thank you, Marco.
2013.58 -> Truly inspiring work at BMW,
2016.884 -> and I'm also really impressed
2019.586 -> by how BMW has successfully applied
2022.256 -> machine learning to automotive, because it's hard,
2026.326 -> and I have some personal experience of it.
2029.396 -> In a previous life, I worked on self-driving cars,
2032.966 -> and machine learning then was hard.
2034.468 -> It was hard to apply machine learning to automotive,
2039.206 -> and so, my management would ask me, from time to time,
2042.643 -> when will these cars work,
2045.579 -> and I would tell them, look, we got to have patience.
2050.584 -> These cars have to be at least 18 years old
2053.353 -> before they can drive by themselves,
2057.191 -> and in hindsight, what I realized
2062.696 -> is that we lacked an industrial-scale machine learning system,
2067.901 -> a machine learning infrastructure that would've allowed us
2070.838 -> to quickly iterate on developing machine learning models,
2073.74 -> that would've allowed us to make machine learning development
2076.443 -> robust and scalable and reliable,
2079.713 -> and that gets me to the next key trend
2083.05 -> that drives machine learning innovation,
2085.786 -> and that is ML industrialization.
2089.823 -> Let me first define what is machine learning industrialization,
2094.027 -> and why that's important.
2096.63 -> ML industrialization is the standardization
2099.566 -> of machine learning tools and machine learning infrastructure,
2103.303 -> and it's important, because it helps customers automate
2107.04 -> and make the development reliable and scalable.
2111.044 -> Like, five years back, you would have customers deploying
2114.581 -> maybe half a dozen models.
2117.017 -> Now, you have customers deploying thousands of models,
2120.254 -> and they train models with billions
2122.456 -> or hundreds of billions of parameters,
2124.491 -> and the infrastructure often makes trillions of predictions a month,
2129.73 -> and so, when you're talking of billions and trillions,
2133.367 -> you need an industrial-scale machine learning infrastructure,
2138.639 -> and on AWS, you can use SageMaker for standardizing and industrializing
2143.443 -> your machine learning development,
2145.078 -> and tens of thousands of customers are doing that now.
2148.582 -> In fact, AstraZeneca moved to SageMaker,
2153.053 -> and they were able to reduce the lead time
2155.289 -> to start machine learning projects
2157.291 -> from three months to just one day.
2161.195 -> Think of it. Three months to just one day.
2165.899 -> Even within Amazon, we’re using SageMaker
2169.303 -> for industrializing our machine learning development.
2172.372 -> For example, the most complex Alexa speech models
2176.743 -> are now being trained on SageMaker.
2179.279 -> To hear more about this, let's start with Alexa.
2185.118 -> Hey, Alexa, I'm curious.
2188.222 -> How are you able to answer all the questions
2191.158 -> that people ask you so intelligently?
2194.294 -> Hi, Bratin.
2195.462 -> Thanks for the compliment.
2197.13 -> There's actually a whole team of applied scientists and engineers
2200.4 -> who train ML models that power my intelligence.
2205.005 -> Thank you, Alexa.
2206.406 -> I'm pleased to welcome now Anand Victor,
2208.742 -> VP of Alexa ML development,
2210.978 -> who can talk about the Alexa machine learning infrastructure,
2214.715 -> and how they use SageMaker
2217.117 -> to industrialize the machine learning development.
2220.32 -> [music playing]
2229.029 -> Oh, this is awesome.
2231.365 -> Before I get started,
2232.466 -> I was wondering how many of you are Alexa users in the room.
2235.435 -> If you're an Alexa user, make some noise,
2239.339 -> and if you're a hey, Siri,
2240.674 -> hey, Google or Siri user, maybe you should be…
2242.876 -> I'm kidding, I'm kidding. Don't get worried.
2245.712 -> You know, folks, in my role at Amazon,
2250.184 -> I'm on fire for ML builders anywhere,
2252.119 -> and I'm really excited to be here to speak about
2254.922 -> how SageMaker has helped the Alexa ML builders innovate way faster.
2262.129 -> Our mission for Alexa is to become an indispensable assistant,
2266.266 -> a trusted advisor, and a fun and caring companion,
2270.404 -> and today, Alexa supports 17 languages, with 130,000+ skills,
2276.376 -> and 900,000 developers building on Alexa.
2280.314 -> Of course, these are active on more than a hundred million
2283.617 -> Alexa-powered devices.
2288.388 -> To deliver this awesome experience, behind the scenes,
2291.892 -> Alexa is powered by thousands of ML models
2294.962 -> that power the billions of customer interactions that happen worldwide,
2298.599 -> and my team is specifically responsible
2301.768 -> for the tooling that enables this:
2303.67 -> thousands of ML builders building effectively on Alexa.
2308.275 -> Of course, we need to do this at massive scale,
2310.511 -> millions GPU hours, but more importantly,
2314.214 -> we need to do this securely while maintaining customer privacy.
2319.219 -> So, when we started on this journey with SageMaker,
2322.022 -> we launched one of our simpler Alexa models to prove
2324.892 -> that SageMaker does help our scientists innovate faster,
2327.961 -> and it worked.
2329.93 -> The scientists for this particular model were so happy,
2333.567 -> but the broader business teams
2334.968 -> and the security teams were still not convinced.
2337.604 -> You know, most of the feedback was, oh no, this is not going to work.
2340.174 -> We’re unique.
2341.275 -> We don’t have use cases,
2344.511 -> and we realized that to really go with SageMaker,
2349.55 -> we had to really go big or go home.
2352.419 -> So, we picked one of the biggest,
2353.82 -> most complex critical models for Alexa at the time:
2357.457 -> the Alexa speech recognition model, and a little bit of mea culpa.
2360.561 -> They were right.
2361.695 -> There were gaps we had to fix.
2363.964 -> So, we worked closely with SageMaker
2366.466 -> and other AWS teams to design a secure foundation.
2370.27 -> This secure foundation included an air-gapped network,
2374.007 -> fine-grained permission controls,
2376.443 -> and a secure browser that enabled our ML builders
2379.646 -> to interact with data inside SageMaker.
2383.05 -> Now, this becomes a standard pattern
2385.219 -> if you're going to industrialize ML with critical data.
2388.422 -> With this secure foundation in place,
2390.891 -> we use the same tools you do
2392.292 -> to ingest and store training data into S3,
2396.196 -> and we use the same SageMaker tool set to develop, train,
2400.033 -> and host ML models, and of course, our ML builders are so happy,
2406.54 -> because, you know, they get to focus on building
2408.942 -> and executing experiments,
2410.711 -> instead of wasting their time building and managing infrastructure,
2414.348 -> literally saving them multiple hours every week,
2421.655 -> and of course, the business teams are happy.
2423.557 -> Not only did we actually increase the security bar for Amazon
2426.693 -> by moving our most critical model into SageMaker,
2429.429 -> the pay for what you use model
2431.498 -> has helped us increase our resource utilization.
2434.601 -> This enables us to train more models, more iterations,
2437.471 -> with the same resources to improve Alexa customer experience,
2443.577 -> but of course, it's still day one for us.
2446.18 -> All these happy ML builders still have a truckload of experiments
2450.184 -> and features they want from SageMaker
2452.352 -> for the next wave of Alexa functionality,
2455.889 -> but before I leave, I want to leave you with some words of wisdom.
2459.426 -> Some of you are in my role
2460.727 -> where you own the ML infrastructure for your teams,
2463.03 -> and you're going to go back, you're going to tell them,
2464.932 -> hey, get the ML models on SageMaker, right,
2467.734 -> and what can you tell them?
2471.138 -> You're going to tell them, Bratin told you,
2472.472 -> hey, I saw babies doing their own SageMaker.
2474.541 -> Yeah? You're going to get beat up.
2475.943 -> Don't do that.
2477.744 -> You know, you're going to say, hey, Alexa's running on SageMaker.
2480.18 -> The most critical model is running on SageMaker.
2482.216 -> We can do this, but more importantly, my learning has been,
2487.02 -> as a leader who’s leading
2488.922 -> and owning the infrastructure for ML builders,
2491.525 -> we need to be on fire for ML builders,
2494.294 -> and they need to hear this from us, not just think it.
2497.564 -> So, before I go, I want to practice this with you, right?
2499.633 -> I often say, I'm on fire for ML builders.
2501.969 -> So, I'm going to ask you,
2503.67 -> are you on fire for ML builders in the room,
2505.439 -> and I want you to say I'm on fire for ML builders.
2508.342 -> You guys got that?
2509.71 -> You're going to shout it: I'm on fire for ML builders.
2511.678 -> You got that?
2512.88 -> Yes? Okay.
2514.181 -> Who's on fire for ML Builders?
2517.15 -> Oh, come on guys, louder.
2518.252 -> Who's on fire for ML Builders?
2520.22 -> I love it.
2521.355 -> Thank you, guys.
2522.456 -> Thank you, Bratin.
2530.163 -> Thank you, Anand.
2531.265 -> I really look forward to all of the innovations that Alexa comes up with.
2537.07 -> Now, one of the capabilities of SageMaker that makes it easy
2540.908 -> for customers to standardize their machine learning development
2543.844 -> is SageMaker Studio notebooks,
2546.58 -> and these notebooks are based on the open source
2548.849 -> Jupyter Notebooks that revolutionized data science
2552.019 -> by making it easy for customers
2554.421 -> to prepare data and experiment with machine learning models,
2558.158 -> and as these notebooks have become more popular for development,
2562.362 -> we saw an opportunity to make them easier to use on SageMaker,
2567.034 -> and so, I'm pleased to announce that SageMaker
2570.637 -> just launched the next generation of Studio notebooks,
2574.942 -> which makes it easy for customers
2582.349 -> to visually prepare their data, to do real-time collaboration,
2587.287 -> and to quickly move from experimentation to production.
2591.058 -> Let me dive a little deeper into these details.
2595.229 -> Now, machine learning development today
2596.864 -> is a highly collaborative activity,
2599.499 -> but what happens is developers use one tool
2602.603 -> for developing their models and a different tool
2605.739 -> for communicating with each other.
2607.574 -> So, they're using notebooks for developing their models,
2610.911 -> but they communicate with each other
2612.312 -> on email or Slack or other ad-hoc ways,
2615.516 -> and that makes their collaboration a little disjointed.
2619.152 -> With this new generation of notebooks,
2621.655 -> SageMaker now allows you to both develop and collaborate
2626.46 -> within the notebook itself,
2629.129 -> and what that means is that multiple users
2631.999 -> can simultaneously co-edit and read these notebooks and files,
2637.371 -> and not just that.
2638.705 -> These notebooks are also integrated with source code repositories
2642.442 -> like Bitbucket and AWS CodeCommit, and that makes it much easier
2647.381 -> to manage multiple versions of these notebooks
2650.25 -> that get created as users are collaborating with each other.
2655.255 -> Now, when you want to go from experimentation to production today,
2660.861 -> a data scientist has to take all of the code
2663.197 -> they've written in a notebook,
2664.665 -> paste it into a script, convert it into a container,
2668.068 -> spin up the infrastructure, run their code,
2671.138 -> and then tear down the infrastructure.
2674.441 -> Instead, with this new generation of notebooks,
2678.011 -> all you do is you click a single button,
2681.682 -> and SageMaker does all of the work of taking your code,
2685.052 -> converting it into a container,
2686.62 -> spinning up the infrastructure, running your container,
2689.122 -> and then tearing down the infrastructure,
2691.892 -> and so, what used to take weeks before takes only a few hours now.
2698.966 -> Now, SageMaker industrializes your machine learning
2702.336 -> and makes it much easier and much faster
2705.172 -> for you to do machine learning deployments,
2709.276 -> but we didn't just stop there.
2711.678 -> We also embedded machine learning into many commonly used use cases,
2717.017 -> and that gets me to the next key trend
2720.387 -> that drives machine learning innovation,
2722.356 -> and that is ML-powered use cases.
2727.16 -> Customers asked us to help them automate
2729.63 -> a lot of common use cases, like document processing,
2732.866 -> like industrial manufacturing,
2735.369 -> like personalization, forecasting, anomaly detection,
2739.306 -> language translation, and others, and so,
2742.409 -> we built a lot of AI services to help customers automate
2746.413 -> these use cases through machine learning.
2749.183 -> Let me give you a few examples
2750.717 -> of how customers are innovating with these AI services.
2757.191 -> Amazon Transcribe lets you embed AI
2760.961 -> into your contact center solutions, both on-prem and in the cloud,
2765.799 -> and Amazon Transcribe supports both post-call analytics
2769.97 -> and real-time call analytics.
2773.106 -> So, for example, State Auto Insurance,
2775.943 -> they provide insurance in many different segments.
2779.279 -> They used Amazon Transcribe’s call analytics
2782.349 -> to be able to glean insights from millions of calls
2785.953 -> to their customer service representatives,
2788.755 -> and by using these insights, State Auto was able to increase
2793.36 -> the efficiency of their call handling by 83%.
2800.067 -> Wix used Amazon Transcribe’s post-call analytics
2804.872 -> to increase visibility of customer sentiment
2807.774 -> from just 12% to 100% of the calls.
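
For the post-call flavor, a minimal boto3 sketch of starting a call analytics job might look like the following; the bucket, job name, role ARN, and channel layout are placeholders, not details from the talk:

    import boto3

    transcribe = boto3.client("transcribe")

    # Start a post-call analytics job on a recorded, two-channel agent/customer call.
    transcribe.start_call_analytics_job(
        CallAnalyticsJobName="example-support-call-001",
        Media={"MediaFileUri": "s3://example-bucket/calls/support-call-001.wav"},
        DataAccessRoleArn="arn:aws:iam::123456789012:role/ExampleTranscribeRole",
        ChannelDefinitions=[
            {"ChannelId": 0, "ParticipantRole": "AGENT"},
            {"ChannelId": 1, "ParticipantRole": "CUSTOMER"},
        ],
    )

    # Poll later for the transcript, sentiment, and call characteristics.
    job = transcribe.get_call_analytics_job(CallAnalyticsJobName="example-support-call-001")
    print(job["CallAnalyticsJob"]["CallAnalyticsJobStatus"])
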
2813.313 -> Now, the experience that customers have
2815.716 -> when they call into your call centers
2818.252 -> can have a profound influence on how they view your company,
2822.823 -> and so, it's really important that they get all the help they need
2826.96 -> when they call into your call centers.
2829.196 -> Now, today, contact center supervisors
2833.166 -> listen in on a fraction of the calls
2835.002 -> to make sure that customers are getting the help they need.
2838.739 -> Obviously, this is not scalable,
2840.908 -> and so, there are many calls where customers remain frustrated.
2845.812 -> So, our customers have been asking us for a solution
2849.416 -> that enables live call assistance, and so, I'm very happy to announce
2855.022 -> Amazon Transcribe’s new real-time call analytics capabilities.
2865.299 -> This new real-time call analytics capability uses machine learning.
2870.07 -> It uses speech recognition models to understand customer sentiment.
2875.576 -> For example, it uses speech recognition models
2878.679 -> to detect raised voices or prolonged periods of silence
2883.984 -> or repeated requests to talk to a manager
2887.187 -> or even user phrases like,
2889.189 -> I'm going to cancel this subscription,
2892.192 -> and when Transcribe finds these customer issues,
2895.362 -> it then sends a notification to the call center supervisor,
2900.1 -> in real-time, who can then join the call
2902.603 -> and help both the customer and the agent.
2907.174 -> Another domain that is getting transformed by AI
2910.544 -> is actually document processing, and Amazon Textract lets you embed AI
2915.616 -> into document processing and automate it
2918.318 -> by extracting things like names, addresses,
2921.788 -> and other key bits of information from documents.
2925.459 -> In fact, Pennymac used to spend hours every day processing documents.
2931.999 -> By using Textract, they are now able to process 3,000-page PDFs
2938.005 -> in just five minutes.
2939.573 -> Imagine.
2940.641 -> 3,000-page PDFs in just five minutes.
2944.211 -> Elevance Health also automated their document processing,
2948.382 -> their insurance claims,
2949.917 -> and they have been able to automate 90% of the document processing.
2955.189 -> Now, customers tell us that they want to be able to automate document
2958.992 -> processing in specialized tasks, like mortgage processing.
2963.197 -> It turns out that a mortgage loan package can have 500 pages
2968.635 -> and can take 45 days to close,
2971.705 -> and almost half of this time, almost 20 days,
2975.909 -> is just spent getting information out of these documents
2979.813 -> and sending it to various departments,
2982.716 -> and so, I'm very happy to announce Amazon Textract’s new Analyze
2987.621 -> Lending capability.
2994.194 -> We built this capability by taking Amazon Textract
2997.531 -> and training it on a lot of mortgage-specific documents,
3001.301 -> like mortgage loan forms and W2s and pay slips and others,
3006.607 -> and here is how this works.
3008.609 -> So, Analyze Lending uses a machine learning model
3012.946 -> to first understand what kind of document it is.
3015.749 -> Is it a pay slip, is it a W2,
3017.784 -> is it a mortgage loan form, or something else?
3021.021 -> It then uses a second set of machine learning models
3024.124 -> to extract all of the information,
3027.427 -> and not just that, it can actually even flag pages
3032.266 -> that need review by a human underwriter.
3036.069 -> So, for example, if a page is missing a signature,
3040.14 -> Analyze Lending will actually flag that page for the human underwriter,
3044.912 -> and that makes it a lot easier to automate document processing.
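
As a sketch of what calling this looks like from code (the S3 location and names are placeholders, and the asynchronous flow is simplified to its two main calls), assuming boto3:

    import boto3

    textract = boto3.client("textract")

    # Kick off an asynchronous Analyze Lending job on a mortgage package stored in S3.
    job = textract.start_lending_analysis(
        DocumentLocation={
            "S3Object": {"Bucket": "example-bucket", "Name": "loan-packages/application-42.pdf"}
        }
    )

    # Once the job finishes, fetch the per-page document classification and summary.
    summary = textract.get_lending_analysis_summary(JobId=job["JobId"])
    print(summary)
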
3050.918 -> Another domain that is getting transformed by AI
3054.087 -> is industrial monitoring.
3056.056 -> In fact, by being able to predict
3058.592 -> when a piece of equipment is due for maintenance,
3061.094 -> we can significantly reduce equipment downtime,
3065.265 -> and to enable this for our customers, we launched Amazon Monitron in 2020.
3071.371 -> Amazon Monitron uses machine learning to predict
3075.409 -> when a piece of equipment may need maintenance,
3078.812 -> and it's a complete end-to-end solution.
3081.315 -> It comes with its own wireless sensors,
3084.084 -> its own gateway, and its own app,
3087.254 -> and best of all, it needs no machine learning expertise to use.
3092.359 -> So, here is how it works.
3094.828 -> You first have to decide on what equipment you want to monitor,
3098.265 -> and then once you've decided that, you take the Amazon Monitron sensors.
3103.036 -> These just work out of the box,
3105.272 -> and they measure your equipment’s vibrations and temperature.
3109.676 -> So, you just take these sensors, attach them to your equipment,
3113.113 -> and then pair them with the gateway, and that's it.
3117.284 -> Amazon Monitron sensors then take your equipment's temperature
3121.688 -> and vibrations, stream that data to the cloud,
3125.792 -> where machine learning models analyze that equipment's data,
3129.563 -> and if they find any anomalies, they send an alert to the app.
3136.236 -> To hear more about Monitron in action, please welcome A.K.
3140.14 -> Karan, the Senior Director of Digital Transformation at Baxter.
3144.378 -> [music playing]
3148.982 -> Thank you, Bratin.
3153.253 -> Yeah, hello and good afternoon.
3154.988 -> I'm A.K. Karan, the senior director of Digital Transformation
3158.192 -> for Baxter Healthcare.
3159.96 -> It's my pleasure and great honor to be here today.
3165.632 -> Since 1931, the Baxter name has stood for excellence and innovation.
3170.704 -> We are a global manufacturer of healthcare and lifesaving products.
3176.41 -> We have a pretty broad portfolio,
3178.111 -> and we are driven by a higher purpose,
3180.447 -> with a mission to save and sustain lives.
3184.918 -> So, if you’ve been into a doctor's office,
3186.687 -> which I think most of us have been, or say, been in an emergency room,
3191.325 -> or say, been in surgery, you have been touched
3194.127 -> by one of the many products that we make.
3197.831 -> Our impact is felt by 315 million patients,
3203.203 -> whose lives we touch in a year, their families, and their friends.
3211.011 -> As a company, we have over 70 manufacturing sites,
3213.614 -> which are located globally, and we run 24/7, 365,
3218.852 -> and as any other manufacturer,
3221.154 -> our supply chain is very complex and very dynamic.
3224.725 -> So, what does it mean to us, right?
3226.393 -> So, if we have to keep our operations running trouble-free,
3230.43 -> nonstop, equipment reliability is going to be key.
3236.036 -> Every minute of production counts for us,
3238.005 -> and every instance of downtime that we can avoid
3240.774 -> is very critical and highly crucial.
3244.545 -> Let's say it could be an HVAC system that is providing conditioned air
3248.682 -> to a clean room assembly process,
3251.084 -> or it could be a pump that is supplying water to a steam generator,
3254.788 -> or it could be a motor that is driving a high-speed conveyor line.
3258.592 -> When any of these systems fail, we have a catastrophe on our hands.
3264.998 -> So, as we started exploring tools, I mean, we were looking
3267.301 -> for some predictive tools, tools that can give us insights
3273.507 -> before the systems go down or fail,
3273.507 -> as opposed to having a condition-based monitoring tool
3275.876 -> or time-based systems
3278.345 -> to kind of take us into the next generation,
3280.48 -> and what we found out
3282.049 -> was Amazon Monitron has some unique capabilities.
3285.519 -> First and foremost, right, it's a plug-and-play system,
3287.855 -> as Bratin showed on his previous slide.
3290.324 -> For us, as a consumer, we had to just stick the sensor onto the device.
3294.962 -> It's literally flipping a button.
3297.264 -> You stream the data to the cloud,
3300.033 -> and the system has built-in capabilities to do all the analytics
3302.369 -> and give us alerts to let us know when things might go wrong.
3306.64 -> We were looking for a system that was agnostic,
3308.909 -> meaning we have systems that are five years old or 50 years old,
3312.946 -> but still in good shape.
3315.082 -> We wanted to have the system deployed across the board.
3318.519 -> So, Monitron made it a breeze.
3321.688 -> The third was we were looking for ease of use
3323.991 -> in terms of deploying the sensors,
3326.193 -> scaling it up, ease of use of the software, and a mobile app.
3330.797 -> It gave us what we wanted, but the biggest game changer
3334.434 -> or the biggest driver
3336.036 -> was the embedded machine learning and AI capabilities.
3339.64 -> The system has capabilities to develop a custom signature profile
3343.11 -> or a temperature profile for every single asset.
3346.446 -> So, this helped us scale very fast to thousands of our assets,
3350.951 -> and this, indeed, was truly a game changer for us.
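To make the idea of a per-asset signature profile concrete, here is a minimal Python sketch of the general concept: learn a baseline from an asset's own healthy readings and flag readings that drift too far from it. This is only an illustration of the idea, not Amazon Monitron's actual algorithm, and the vibration values are made up.

# A simplified illustration of per-asset signature profiles: each asset gets
# its own baseline, learned from its own healthy readings, and new readings
# are flagged when they deviate too far from that baseline.
from statistics import mean, stdev

def build_baseline(healthy_readings):
    """Summarize an asset's healthy vibration (or temperature) readings."""
    return {"mean": mean(healthy_readings), "stdev": stdev(healthy_readings)}

def is_anomalous(reading, baseline, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations away from
    the asset's own baseline."""
    return abs(reading - baseline["mean"]) > threshold * baseline["stdev"]

# Hypothetical vibration RMS values (mm/s) for a single asset
baseline = build_baseline([2.1, 2.3, 2.0, 2.2, 2.4, 2.1])
print(is_anomalous(2.2, baseline))  # False: within the normal range
print(is_anomalous(5.8, baseline))  # True: worth scheduling a maintenance check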
3356.023 -> In one of our early use cases,
3358.859 -> this was an HVAC system that provides conditioned air
3362.196 -> to a group of machines,
3364.131 -> and we got an alert from Monitron.
3367.301 -> What our technicians found was that the gearbox on the system
3370.204 -> was in pretty bad shape.
3372.172 -> So, they planned for a downtime event.
3374.274 -> They took it down, replaced the hardware,
3376.71 -> and put it back into good health.
3379.146 -> If we had not reacted to this alert,
3381.715 -> it would've created a very serious supply chain issue for us.
3386.253 -> So, thanks to Monitron for helping us identify this issue
3389.356 -> and react in a timely manner.
3393.794 -> Our journey with Monitron has been very exciting.
3396.663 -> We started with a group of sensors, around 300-400 sensors.
3401.001 -> We wanted to deploy them, get a feel for it,
3403.604 -> and see how it operates in real life.
3406.106 -> We saw some good success.
3407.941 -> From that, we launched it across the entire site,
3409.81 -> and this is at one of our lighthouse plants in the U.S.,
3413.981 -> and now, based on all the results we have,
3415.849 -> we are scaling this across all our global sites
3419.853 -> in a prudent, timely manner.
3424.925 -> What we've seen so far
3426.527 -> is around 500 hours of downtime eliminated,
3431.198 -> and this translates to seven million units of production,
3435.869 -> but the bigger impact is that we have been able to supply lifesaving products
3440.541 -> to our patients on time,
3442.442 -> and this could not be more gratifying.
3447.381 -> On the operations front, this is my team that really took Monitron
3451.685 -> and deployed it for the entire site.
3454.755 -> There's a myth that machine learning
3456.69 -> and AI are going to eliminate jobs.
3460.227 -> In our case, that has not been true.
3462.429 -> It has augmented our workforce.
3465.399 -> It is driving higher productivity levels,
3468.335 -> and our engineering team has never been more excited.
3472.94 -> Before, they used to go on rounds.
3474.241 -> They used to check every single piece of hardware or device.
3477.778 -> They used to log the data.
3479.313 -> They don't do that anymore, because Monitron, with these capabilities,
3482.85 -> gives us actionable alerts and helps us be more efficient.
3490.624 -> Monitron has really helped us to democratize machine learning
3495.262 -> and AI on our shop floor,
3498.465 -> but the biggest benefit that we've seen:
3501.101 -> the system has really put a smile on our engineering team's face,
3507.14 -> and ladies and gentlemen,
3509.009 -> this is only a start in our digital transformation journey.
3513.146 -> Thank you. Bratin?
3514.948 -> [music playing]
3520.621 -> Thank you, A.K. Awesome work at Baxter.
3523.724 -> It's an amazing example of how a company
3525.959 -> is transforming an entire domain with AI.
3530.43 -> Now, all of this great innovation
3532.566 -> that I've been talking about would not be possible
3536.069 -> unless we knew how to use machine learning in a responsible way,
3540.774 -> and that gets me to the next key trend
3543.61 -> that drives machine learning innovation,
3545.412 -> and that is responsible AI.
3548.849 -> According to IDC, the global spend on AI-related technologies
3552.719 -> will exceed $200 billion by 2025.
3557.558 -> In fact, more than 50% of executives
3560.06 -> say that AI will transform their organization in the next three years.
3565.265 -> With that growth in AI and machine learning
3568.368 -> comes the realization that we must use it responsibly.
3574.107 -> Now, what does it mean to use AI in a responsible way?
3579.146 -> At AWS, we think of it along these six key dimensions.
3584.218 -> First is fairness, or in other words,
3587.221 -> the machine learning system must operate equally for all users
3591.158 -> regardless of race, religion, gender, and other factors.
3595.562 -> Then there is explainability, or in other words,
3598.665 -> we must be able to understand
3600.1 -> how the machine learning system operates.
3604.371 -> Then there is robustness, or in other words,
3607.04 -> there must be a mechanism
3608.575 -> to ensure that the machine learning system is working reliably.
3613.58 -> Then there is privacy and security,
3615.249 -> which is always job number one at AWS.
3620.32 -> Then there's governance, which means there must be mechanisms
3624.024 -> to make sure responsible AI practices are being used,
3628.729 -> and finally, there's transparency, which increases customer trust
3634.201 -> and makes it possible for them to make informed decisions
3637.871 -> about how to use your systems.
3640.24 -> Now, talking about transparency,
3642.142 -> I'm really pleased to announce a new transparency tool
3645.445 -> for our AI services called AI Service Cards.
3650.15 -> Now, we are announcing these cards for Amazon Rekognition,
3656.156 -> Amazon Textract, and Amazon Transcribe,
3659.66 -> and these will serve as a one-stop shop
3662.663 -> for all of our customers' responsible AI questions.
3666.8 -> They represent our comprehensive development process
3670.204 -> that spans all of the dimensions of responsible AI
3673.407 -> that I talked about previously, and they go into the model,
3676.977 -> the systems, the features, and the performance.
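For a sense of what the covered services look like in use, here is a minimal, illustrative boto3 call to one of them, Amazon Rekognition; the S3 bucket and object key are placeholders, and the corresponding AI Service Card is where you would check intended use cases and limitations before building on output like this.

# Illustrative only: a basic face-detection request against Amazon Rekognition,
# one of the services covered by the new AI Service Cards. Bucket and object
# key names are placeholders.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "example.jpg"}},
    Attributes=["ALL"],  # request detailed attributes, each with a confidence score
)

for face in response["FaceDetails"]:
    # Confidence scores let you apply use-case-appropriate thresholds,
    # the kind of deployment decision the service cards are meant to inform.
    print(face["Confidence"], face.get("AgeRange"))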
3681.048 -> Now, it's important to build our services in a responsible way,
3685.552 -> but at AWS, we are also taking a people-centric approach
3689.89 -> and educating developers on responsible AI,
3693.594 -> and that is why I'm pleased to announce a new course on fairness
3697.431 -> and bias mitigation as part of the AWS
3700.267 -> Machine Learning University.
3702.936 -> This free, public course has more than nine hours of tutorials,
3708.642 -> and once you've taken the course,
3710.41 -> you will understand why bias happens in practice
3714.848 -> and how you can mitigate it with scientific methods.
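As a flavor of the kind of measurement that bias-mitigation work involves (this sketch is not taken from the course itself), here is a small Python example of one common fairness metric, the demographic parity difference: the gap in positive-prediction rates between two groups. The example predictions are made up.

# Demographic parity difference: the gap between two groups' rates of
# positive predictions. A value near 0 means similar treatment on this
# particular metric; it is only one of many fairness measures.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical model outputs (1 = approved, 0 = declined) for two groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [1, 0, 0, 0, 1, 0, 0, 0]
print(demographic_parity_difference(group_a, group_b))  # 0.375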
3720.354 -> Talking about education gets me to the last key trend
3724.691 -> that drives machine learning innovation,
3727.494 -> and that is ML democratization, or making machine learning tools
3732.733 -> and skills accessible to more people.
3736.87 -> Customers tell us that they often have a hard time
3739.773 -> hiring all the data science talent
3743.71 -> that they need, and to address this,
3746.446 -> we launched Amazon SageMaker Canvas at last year's re:Invent.
3751.785 -> Canvas is a completely no-code tool
3754.721 -> for doing machine learning.
3757.09 -> What this means is that Canvas prepares your data,
3761.094 -> builds your models, trains your models,
3763.897 -> and then deploys a fully explainable model,
3767.167 -> all of this without the user having to write
3770.37 -> even a single line of code,
3774.441 -> and so, what this means is that data analysts,
3777.544 -> marketing professionals, sales professionals,
3779.88 -> finance professionals,
3781.415 -> anybody that uses data and would benefit from machine learning
3786.687 -> but may not have the coding skills
3788.322 -> or the machine learning skills,
3790.224 -> can actually now do machine learning,
3792.993 -> and so, analysts at Samsung are using this for forecasting.
3797.13 -> At 3M, they're using it for operations improvements.
3802.336 -> At Siemens, they're using it for supply chain research.
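Canvas itself requires no code, but a model built with it can still be consumed programmatically once it has been deployed as a SageMaker real-time endpoint (a separate step outside Canvas). A minimal boto3 sketch, where the endpoint name and the feature payload are purely hypothetical:

# Minimal sketch: invoke a SageMaker real-time endpoint hosting a model
# originally built in Canvas. The endpoint name and CSV payload below are
# hypothetical placeholders.
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="canvas-forecast-endpoint",  # hypothetical endpoint name
    ContentType="text/csv",                   # one row of feature values
    Body="1200,holiday,store-17",
)

print(response["Body"].read().decode("utf-8"))  # the model's prediction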
3806.34 -> Now, it's important for us to make our services easier to use,
3811.345 -> and we are going to continue to do that,
3814.181 -> but AWS is also investing
3818.919 -> in training the next set of machine learning developers.
3823.857 -> Amazon has committed that, by 2025,
3827.427 -> we will help more than 29 million people
3831.665 -> improve their tech skills through free cloud
3834.935 -> computing skills training.
3837.604 -> Then there's AWS DeepRacer,
3839.54 -> which has now educated more than 320,000 developers
3844.545 -> in more than 160 countries.
3847.381 -> We also have the training and certification programs
3850.317 -> that are part of AWS Machine Learning University
3853.12 -> and available for free to the public, and then lastly, earlier this year,
3859.493 -> we launched the AWS AI and Machine Learning Scholarships
3863.897 -> in partnership with Intel and Udacity,
3867.067 -> and, to date, we have been able to train
3869.77 -> more than 20,000 underserved and underrepresented college
3874.508 -> and high school students
3875.943 -> on foundational machine learning concepts
3878.245 -> and prepare them for careers in machine learning.
3883.217 -> So, to summarize, machine learning is no longer the future.
3889.957 -> Machine learning is the present that needs to be harnessed now,
3895.495 -> and if you want to harness machine learning,
3897.631 -> you want to be able to leverage these six key trends.
3903.103 -> First, leverage the exponential increase
3906.34 -> in the sophistication of machine learning models
3908.675 -> and use these latest models.
3912.045 -> Harness the variety of data available,
3914.414 -> the multiple modalities of data available,
3917.05 -> to train your machine learning models.
3919.72 -> Industrialize machine learning in your companies.
3923.59 -> Use machine learning-powered use cases for automation.
3928.629 -> Make responsible AI an integral part of everything you do,
3933.8 -> and then democratize machine learning in your companies,
3936.336 -> so that more employees have access to machine learning tools and skills.
3942.142 -> Thank you for coming and enjoy the rest of re:Invent
3944.444 -> and please fill out the session survey. Thank you.
3946.88 -> [applause]

Source: https://www.youtube.com/watch?v=Jf1ca54dofI