DevOps Tutorial for Beginners | Learn DevOps in 7 Hours - Full Course | DevOps Training | Edureka
Aug 16, 2023
Edureka DevOps Training: https://www.edureka.co/devops-certifi … This Edureka DevOps Tutorial for Beginners will help you learn DevOps concepts and DevOps tools with examples and demos. You will understand how a DevOps pipeline can be imagined for existing infrastructure. Furthermore, it will cover different DevOps tools & phases. Below are the topics covered in this Full Course DevOps Tutorial for Beginners: 00:00 Introduction 2:06 Waterfall Model 3:35 Limitations of Waterfall Model 6:39 Agile Methodology 7:32 Waterfall vs Agile 8:20 Limitation of Agile Methodology 11:21 What is DevOps? 13:06 DevOps Stages 17:02 Source Code Management 21:40 Introduction to Git 23:50 Basic Git Commands 28:50 Continuous Integration 30:19 Continuous Delivery 31:33 Continuous Deployment 34:06 Jenkins Demo 35:44 Configuration Management 41:56 Containerization 45:15 Docker Demo 47:38 Continuous Monitoring 49:28 Introduction to Nagios 51:53 DevOps Use-Case 1:00:27 Git & GitHub 1:01:21 Version Control System 1:03:43 Why Version Control? 1:04:08 Collaboration 1:05:56 Storing Versions 1:08:06 Backup 1:09:57 Analyze 1:10:54 Version Control Tools 1:13:04 Git & GitHub 1:17:06 GitHub Case Study 1:20:33 What is Git? 1:21:33 Features of Git 1:32:42 What is a Repository? 1:33:26 Central & Local Repository 1:35:15 Git Operations & Commands 1:36:00 Creating Repositories 1:43:32 Syncing Repositories 1:47:22 Making Changes 1:56:12 Parallel Development 1:56:25 Branching 2:01:00 Merging 2:06:35 Rebasing 2:20:36 Git Flow 2:27:04 Continuous Integration using Jenkins 2:27:44 Process Before Continuous Integration 2:28:29 Problem Before Continuous Integration 2:33:27 What is Continuous Integration? 2:34:09 Continuous Integration Case Study 2:36:48 What is Jenkins? 2:36:58 Jenkins Plugins 2:39:52 Jenkins Example 2:52:39 Shortcomings of Single Jenkins Server 2:53:19 Jenkins Distributed Architecture 2:56:50 Introduction to Docker 2:57:39 Why we need Docker 3:01:39 What is Docker? 3:05:30 Docker Case Study 3:08:50 Docker Registry 3:10:22 Docker Image & Containers 3:14:33 Docker Compose 3:21:14 Kubernetes 3:21:14 Kubernetes Installation 3:48:35 Introduction to Kubernetes 3:55:20 Kubernetes: Container Management Tool 3:57:44 Kubernetes Features 4:01:40 Uncovering Myths About Kubernetes 4:07:06 Kubernetes vs Docker Swarm 4:12:09 Kubernetes Use-Case: Pokemon Go 4:18:42 Kubernetes Architecture 4:20:15 Working of Kubernetes 4:21:40 Kubernetes Hands-on 4:52:06 Ansible 4:53:03 Configuration Management 4:54:42 Why Configuration Management 5:03:30 Configuration Management Tools 5:04:17 What is Ansible? 5:04:48 Features of Ansible 5:06:32 Ansible Case Study: NASA 5:13:32 Ansible Architecture 5:17:05 Writing a Playbook 5:18:37 Ansible Playbook Example 5:20:12 How to use Ansible? 5:28:53 Ansible Hands-on 5:48:23 Introduction to Puppet 5:49:07 Why Configuration Management? 5:53:06 What is Configuration Management? 5:55:22 Configuration Management Components 5:56:39 Configuration Management Tools 5:57:07 What is Puppet? 5:57:55 Puppet Master-Slave Architecture 5:59:33 Puppet Master Slave Connection 6:03:46 Puppet Use-Case 6:05:20 Resources, Classes, Manifests & Modules 6:21:01 Continuous Monitoring using Nagios 6:21:36 Why Continuous Monitoring? 6:25:36 What is Continuous Monitoring? 6:29:35 Continuous Monitoring Tools 6:30:07 What is Nagios?
6:31:43 Nagios Features 6:32:26 Nagios Architecture 6:35:24 Monitoring Remote Linux Hosts 6:37:15 Nagios Case Study 6:33:26 Nagios Demo
Edureka DevOps Tutorial Playlist: https://bit.ly/3iJoJIP
Edureka DevOps Tutorial Blog Series: https://goo.gl/05m82t
Content
10.959 -> Welcome everyone to the Edureka YouTube channel. My name is Saurabh and today I'll be taking
14.749 -> you through this entire session on the DevOps
full course. So we have designed this crash
18.619 -> course in such a way that it starts from the
basic topics and also covers the advanced
22.699 -> ones. So we'll be covering all the stages
and tools involved in DevOps. So this is how
27.859 -> the modules are structured. We'll start by
understanding what the meaning of DevOps is,
31.05 -> what was the methodology before DevOps, right?
So all those questions will be answered in
35.42 -> the first module. Then we are going to talk
about what Git is, how it works, what
39.579 -> the meaning of version control is, and how we
can achieve that with the help of Git; that
43.559 -> session will be taken by Miss Reyshma. Post that I'll be teaching you how you can create
47.469 -> really cool delivery pipelines with the help
of Jenkins, Maven, Git and GitHub. After
52.019 -> that. I'll be talking about the most famous
software containerization platform, which
56.199 -> is Docker, and post that, Vardhan will be
teaching you how you can use Kubernetes for orchestrating
61.41 -> Docker container clusters. After that, we
are going to talk about configuration management
66.8 -> using Ansible and Puppet. Now, both of these
tools are really famous in the market. Ansible
71.49 -> is pretty trending, whereas Puppet is very
mature; it has been in the market since 2005.
76.67 -> Finally, I'll be teaching you how you can
perform continuous monitoring with the help
80.17 -> of Nagios. So let's start the session guys.
We'll begin by understanding what DevOps is.
91.37 -> So this is what we'll be discussing today.
We'll begin by understanding why we need DevOps;
95.009 -> everything exists for a reason, so we'll try
to figure out that reason. We are going to
98.189 -> see what the various limitations of
the traditional software delivery methodologies are,
101.93 -> and how DevOps overcomes all of those limitations.
Then we are going to focus on what exactly
106.399 -> is the DevOps methodology and what are the
various stages and tools involved in DevOps.
110.37 -> And then finally, in the hands-on part, I will
tell you how you can create a Docker image,
114.35 -> how you can build it, test it, and even push
it onto Docker Hub in an automated fashion
119.289 -> using Jenkins. So I hope you all are clear with the
agenda. So let's move forward guys and we'll
124.14 -> see why we need DevOps. So guys, let's start
with the waterfall model. Now before DevOps,
129.489 -> organizations were using this particular software
development methodology. It was first documented
133.78 -> in the year 1970 by Royce and was the first
publicly documented life cycle model. The waterfall
140.019 -> model describes a development method that
is linear and sequential. Waterfall development
145.6 -> has distinct goals for each phase of development.
Now, you must be thinking why the name waterfall
150.62 -> model because it's pretty similar to a waterfall.
Now what happens in a waterfall once the water
155.299 -> has flowed over the edge of the cliff, it
cannot turn back; the same is the case for
159.1 -> the waterfall development strategy as well. An
application will go to the next stage only
163.28 -> when the previous stage is complete. So let
us focus on what are the various stages involved
167.39 -> in waterfall methodology. So notice the diagram
that is there in front of your screen. If
172.07 -> you notice it's almost like a waterfall or
you can even visualize it as a ladder as well.
176.53 -> So first, what happens: the client gives requirements
for an application. So you gather those requirements
181.5 -> and you try to analyze them. Then what happens:
you design the application, how the application
185.959 -> is going to look. Then you start writing
the code for the application and you build
189.91 -> it. When I say build, it involves multiple things:
compiling your application, you know, unit
194.37 -> testing, and it even involves packaging as
well. After that it is deployed onto the test
199 -> servers for testing and then deployed onto
the prod servers for release. And once the
202.76 -> application is live, it is monitored. Now,
I know this model looks perfect, and trust
207.13 -> me guys, it was at that time. But think about
it: what will happen if we use it now? Fine,
212.11 -> Let me give you a few disadvantages of this
model. So here are a few disadvantages. So
216.78 -> The first one is: once the application is in the
testing stage, it is very difficult to go
220.799 -> back and change something that was not well
thought out in the concept stage. Now what do
225.01 -> I mean by that? Suppose you have written the
code for the entire application, but in testing
229.51 -> there's some bug in that particular application;
now in order to remove that bug you need to
233.859 -> go through the entire source code of the application
which used to take a lot of time, right? So
239.62 -> that is a very big limitation of the waterfall model.
Apart from that, no working software is produced
244.43 -> until late during the life cycle. We saw that
when we were discussing the various stages
248.68 -> of the waterfall model. There is a high amount of
risk and uncertainty, which means that once
252.709 -> your product is live, it is there in the market,
then if there is any bug or any downtime,
257.93 -> then you have to go through the entire source
code of the application again, you have to
262.45 -> go through that entire process of waterfall
model that we just saw in order to produce
267.039 -> a working software again, right? So that's
how it used to take a lot of time. There's
271.46 -> a lot of risk and uncertainty and imagine
if you have upgraded some software stack in
276.01 -> your production environment and that led to
the failure of your application. Now to go
280.38 -> back to the previous stable version used to
also take a lot of time. Now, it is not a good
285.06 -> model for complex and object-oriented projects,
and it is not suitable for the projects where
290.229 -> requirements are at a moderate to high risk
of changing. So what I mean by that suppose
294.639 -> your client has given you a requirement for
a web application today. Now you have taken
299.78 -> your own sweet time and you are in a position
to release the application, say, after one
304.849 -> year. Now after one year, the market has changed.
The client does not want a web application.
309.919 -> He's looking for a mobile application now,
so this type of model is not suitable where
314.86 -> requirements are at a moderate to high risk
of changing. So there's a question popped
319.569 -> up on my screen; it's from Jessica. She's asking,
do all the iterations in the waterfall model
323.86 -> go through all the stages? Well, there are
no iterations as such, Jessica. First of all,
329.249 -> it is not Agile methodology or DevOps, it
is the waterfall model, right? There are no
334 -> iterations; once a stage is complete, then only
will it be going to the next
338.12 -> stage. So there are no iterations as such. Now
if you're talking about the application, and
342.629 -> it is live and then there is some bug or there
is some downtime then at that time based on
347.74 -> the kind of bugs which are there in the application.
Suppose there might be a bug because of some
352.129 -> flawed version of a software stack installed
in your production environment. Probably some
355.909 -> upgraded version because of which your application
is not working properly. You need to roll
360.389 -> back to the previous stable version of the
software stack in your production environment.
363.909 -> So that can be one bug. Apart from that, there
might be bugs related to the code, in which case
368.509 -> you have to check the entire source code of
the application again. Now if you look at
372.669 -> it, to roll back and incorporate the feedback
that you have got used to take a lot of
377.58 -> time. Right? So I hope this answers your question.
All right, she's fine with the answer. Any of
382.659 -> the questions, any other doubt you have guys,
you can just go ahead and ask me. Fine, so there
387.669 -> are no questions right now. So I hope you
have understood what the traditional
391.77 -> waterfall model was and what the various limitations
of this waterfall model are. Now we are going
396.039 -> to focus on the next methodology that is called
the agile methodology. Now agile methodology
401.259 -> is a practice that promotes continuous iteration
of development and testing throughout the
405.539 -> software development life cycle of the project.
So the development and the testing of an application
409.219 -> used to happen continuously with the agile
methodology. So what I mean by that if you
414.509 -> focus on a diagram that is there in front
of your screen, so here we get the feedback
418.03 -> from the testing that we have done in the
previous iteration. We design the application
421.909 -> again, then we develop it, then again we
test it, then we discover a few things that we
425.749 -> can incorporate in the application. We again
design it and develop it, and there are multiple
430.27 -> iterations involved in the development and testing
of a particular application. In Agile methodology,
435.379 -> each project is broken up into several iterations,
and all iterations should be of the same time
439.749 -> duration, generally between 2 to
8 weeks, and at the end of each iteration a
445.03 -> working product should be delivered. So this
is what Agile methodology in a nutshell is.
449.559 -> now let me go ahead and compare this with
the waterfall model. Now if you notice in
453.479 -> the diagram that is there in front of your
screen, so waterfall model is pretty linear
457.469 -> and it's pretty straight as you can see from
the diagram that we analyze requirements.
460.759 -> We plan it, design it, build it, test it, and
then finally we deploy it onto the prod
465.379 -> servers for release. But when I talk about the
Agile methodology, over here the design, build
470.46 -> and testing part is happening continuously.
We are writing the code. We are building the
474.979 -> application. We are testing it continuously
and there are several iterations involved
478.58 -> in this particular stage. And once the final
testing is done, it is then deployed onto
482.719 -> the prod servers for release, right? So Agile
methodology basically breaks down the entire
488.11 -> software delivery life cycle into small sprints,
or iterations as we call them, due to which
492.729 -> the development and the testing part of the
software delivery life cycle used to happen
497.069 -> continuously. Let's move forward and we are
going to focus on what are the various limitations
501.86 -> of Agile methodology. The first and the biggest
limitation of Agile methodology is that the
506.74 -> Dev part of the team was pretty agile, right?
The development and testing used to happen
511.139 -> continuously. But when I talk about deployment,
that was not continuous; there were still
516.61 -> a lot of conflicts happening between the Dev and
the Ops side of the company. The Dev team wants
521.52 -> agility, whereas the Ops team wants stability,
and there's a very common conflict that happens
526.1 -> and a lot of you can actually relate to it
that the code works fine on the developer's
530.27 -> laptop, but when it reaches production
there is some bug in the application or it
534.47 -> does not work in production at all. So this
is because of, you know, some inconsistency
538.51 -> in the computing environment that has
caused that, due to which the operations
542.65 -> team and the dev team used to fight a lot.
There are a lot of conflicts guys at that
547.85 -> time happening. So Agile methodology made
the Dev part of the company pretty agile,
551.14 -> but when I talk about the Ops side of the
company, they needed some solution in order
555.31 -> to solve the problem that I've just discussed
right? So I hope you are able to understand
559.59 -> what kind of a problem I'm focusing on. If
you go back to the previous diagram as well
564.02 -> so over here, if you notice, only the design,
build and test, or you can say the development,
568.3 -> building and testing part, is continuous, right;
the deployment is still linear. You need to
573.34 -> deploy it manually onto the various prod
servers. That's what was happening in the
578.01 -> Agile methodology, right? So the error that
I was talking about, due to which our application
582.56 -> is not working fine: I mean, once your application
is live and you upgrade some software stack
587.68 -> in the production environment, it doesn't
work properly. Now to go back and change something
592.1 -> in the production environment used to take
a lot of time. For example, you know, you
595.67 -> have upgraded some particular software stack
and because of that your application is not
600.14 -> working, it fails to work. Now to go back to
the previous stable version of the software
604.65 -> stack, the operations team was taking a lot
of time because they had to go through the
607.52 -> long scripts that they had written in
order to provision the infrastructure. So
612.68 -> let me just give you a quick recap of the
things that we have discussed till now; we
616.5 -> have discussed quite a lot of history. We
started with the waterfall model, the traditional
620.66 -> waterfall model; we understood what its
various stages are and what the limitations
624.16 -> of this waterfall model are. Then we went ahead
and understood what exactly the Agile methodology is
629.14 -> and how it is different from the waterfall
model and what are the various limitations
632.88 -> of the Agile methodology are. So this is what
we have discussed till now. Now we are going
636.97 -> to look at the solution to all the problems
that we have just discussed and the solution
640.95 -> is none other than DevOps. DevOps is basically
a software development strategy which bridges
647.07 -> the gap between the Dev side and the Ops side
of the company. So DevOps is basically a term
652.54 -> for a group of concepts that, while not all
new, have catalyzed into a movement and are rapidly
659.03 -> spreading through the technical community. Like
any new and popular term, people may have confused
664.42 -> and sometimes contradictory impressions of
what it is. So let me tell you guys, DevOps
669.3 -> is not a technology, it is a methodology.
So basically DevOps is a practice that equates
674.779 -> to the study of building, evolving and operating
rapidly changing systems at scale. Now let
680.96 -> me put this in simpler terms. So DevOps is
the practice of operations and development
685.72 -> engineers participating together in the entire
software life cycle, from design through the
691.21 -> development process to production support,
and you can also say that DevOps is characterized
696.44 -> by operations staff making use of many of the
same techniques as developers for their systems
701.99 -> work. I'll explain to you how this definition is
relevant, because all we are saying here is
707.571 -> DevOps is characterized by operations staff
making use of many of the same techniques as
713.01 -> developers for their systems work. When I
explain infrastructure as code, you
717.39 -> will understand why I am using this particular
definition. So as you know, DevOps is
722.45 -> a software development strategy which bridges
the gap between the Dev part and the Ops side
726.02 -> of the company and helps us to deliver good
quality software on time. And how does this happen?
731.04 -> This happens because of the various stages and
tools involved in DevOps. So here is a
735.52 -> diagram which is nothing but an infinite loop,
because everything happens continuously in
740.03 -> DevOps, guys. Everything, starting from coding,
testing, deployment, monitoring, everything is
745.41 -> happening continuously, and these are the
various tools which are involved in the DevOps
751.13 -> methodology, right? So not only is the knowledge
of these tools important for a DevOps
755.57 -> engineer, but also how to use these tools,
How can I architect my software delivery lifecycle
760.88 -> such that I get the maximum output right?
So it doesn't mean that you know, if I have
765.53 -> a good knowledge of Jenkins or Git or Docker,
then I become a DevOps engineer. No, that
770.49 -> is not true. You should know how to use them.
You should know where to use them to get the
774.61 -> maximum output. So I hope you have got my
point what I'm trying to say here in the next
779.39 -> slide. We'll be discussing the various stages
that are involved in DevOps. Fine, so let's
783.44 -> move forward guys and we are going to focus
on the various stages involved in DevOps. So
787.57 -> these are the various stages involved in DevOps.
Let me just take you through all these stages
791.79 -> one by one starting from Version Control.
So I'll be discussing all of these stages
795.93 -> one by one as well. But let me just give you
an entire picture of these stages in one slide
800.73 -> first. So version control is basically maintaining
different versions of the code. What do I mean
804.47 -> by that? Suppose there are multiple developers
writing code for a particular application.
808.88 -> So how will I know which developer has
made which commit at what time, and which
813.94 -> commit is actually causing the error, and
how will I revert back to the previous commit
817.94 -> So I hope you are getting my point; my point
here is, how will I manage that source code?
822.66 -> Suppose developer A has made a commit and
that commit is causing some error. Now how
827.09 -> will I know that developer A has made that
commit, and at what time he made that commit,
831.82 -> and where in the code that editing happened,
right? So all of these questions can be answered
836.56 -> once you use version control tools like Git,
Subversion, etc. Of these, we are going to focus on Git in our course.
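For instance, answering who committed what, and when, is exactly what commands like these are for (a quick sketch; <commit-hash> is a placeholder for a real commit ID):

    git log                    # list commits with author, date and message
    git show <commit-hash>     # see exactly what a given commit changed
    git revert <commit-hash>   # undo that commit by creating a new commit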
841.17 -> So then we have continuous
integration. So continuous integration is
845.43 -> basically building your application continuously
What I mean by that: suppose any developer
849.97 -> makes a change in the source code; a continuous
integration server should be able to pull
853.43 -> that code and prepare a build. Now when I
say build, people have this misconception of,
858.2 -> you know, only compiling the source code.
It is not true guys; it includes everything starting
862.36 -> from compiling your source code, validating
your source code, code review, unit testing,
866.49 -> integration testing, etc., and even packaging
your application as well.
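To make "build" a bit more concrete, for a typical Java project the steps Jenkins runs often boil down to commands like these (a sketch assuming a Maven project; the exact goals vary from team to team):

    mvn validate   # check the project structure and configuration
    mvn test       # compile the sources and run unit tests
    mvn verify     # run integration tests and other checks
    mvn package    # bundle the application into a WAR/JAR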
872.4 -> Then comes continuous delivery. Now the same continuous
integration tool that we are using, suppose Jenkins. Now
877.06 -> what Jenkins will do once the application
is built, it will be deployed onto the test
881.73 -> servers for testing to perform, you know,
user acceptance tests or end user testing, whatever
885.4 -> you call it. There we'll be using tools like
Selenium, right, for performing automation testing.
890.17 -> And once that is done, it will then be deployed
onto the prod servers for release, right;
894.56 -> that is called continuous deployment and here
we'll be using configuration management
899.61 -> tools. So this is basically to provision your
infrastructure, to provision your prod environment,
904.79 -> and let me tell you guys continuous deployment
is something which is not a good practice
908.21 -> because before releasing a product in the
market, there might be multiple checks that
911.71 -> you want to do before that right? There might
be multiple other testings that you want to
915.5 -> do. So you don't want this to be automated
right? That's why continuous deployment is
919.95 -> something which is not preferred. After continuous
delivery, we can go ahead and manually use
924.15 -> configuration management tools like Puppet,
Chef, Ansible and SaltStack, or we can even
927.67 -> use Docker for a similar purpose, and then
we can go ahead and deploy it onto the prod servers
932.4 -> for release. And once the application is live,
it is continuously monitored by tools like
937.18 -> Nagios or Splunk, which will provide the
relevant feedback to the concerned teams, right?
942.3 -> So these are the various stages involved in DevOps,
right? So now let me just go back and clear
947.01 -> any doubts if there are any. So this is how our various
stages are scheduled, how the various jobs are scheduled.
951.47 -> So we have Jenkins here. We have a continuous
integration server. So what Jenkins will do
956.17 -> the moment any developer makes a change in
the source code, it will take that code and then
960.75 -> it will trigger a build using tools like Maven
or Ant or Gradle. Once that is done, it will
965.95 -> deploy it onto the test servers for testing
for end user testing, using tools like Selenium,
971.14 -> JUnit, etc. Then what happens: it will automatically
take that tested application and deploy it
976.43 -> onto the prod servers for release, right?
And then it is continuously monitored by tools
981.12 -> like Nagios, Splunk, ELK, et cetera.
So Jenkins is basically the heart of the DevOps life
986.34 -> cycle. It gives you a nice 360 degree view
of your entire software delivery life cycle.
992.1 -> So with that UI you can go ahead and have
a look how your application is doing currently
996.15 -> right, in which stage it is right
now, whether testing is done or not. All those things
1000.87 -> You can go ahead and see in the Jenkins dashboard
right? There might be multiple jobs running
1005.39 -> in the Jenkins dashboard that you can see
and it gives you a very good picture of the
1010.53 -> entire software delivery life cycle. Uh, don't
worry. I'm going to discuss all of these stages
1015.05 -> in detail when we move forward. We are going
to discuss each of these stages one by one.
1019.36 -> starting from source code management, or we can
also call it version control. Now what happens
1023.47 -> in source code management? There are two types
of source code management approaches: one is
1027.35 -> called centralized version control, and another
one is called distributed version control
1032.22 -> in source code management. Now imagine there
are multiple developers writing code for
1035.97 -> an application; if there is some bug introduced,
how will we know which commit has caused
1040.13 -> that error and how will I revert back to the
previous version of the code? In order to solve
1044.059 -> these issues, source code management tools
were introduced and there are two types of
1047.679 -> source code management tools one is called
centralized Version Control and another is
1051.13 -> distributed Version Control. So let's discuss
the centralized Version Control first. So
1055.769 -> a centralized version control system uses a
central server to store all the files and
1060.2 -> enables team collaboration. It works in a
single repository to which users can directly
1064.679 -> access a central server. So this is what happens
here guys. So every developer has a working
1069.759 -> copy, the working directory. So the moment
they want to make any change in the source
code, they can go ahead and make a commit
in the shared repository, right, and they can
1077.95 -> even update their working copy by, you know, pulling
the code that is there in the repository as
1082.399 -> well. So the repository in the diagram that
you are noticing indicates a central server
1087.19 -> that could be local or remote, which is directly
connected to each of the programmers' workstations.
1091.7 -> As you can see now every programmer can extract
or update their workstation with the data present
1096.4 -> in the repository or can even make changes
to the data or commit it in the repository.
1101.46 -> Every operation is performed directly on the
central server or the central repository,
1106.519 -> even though it seems pretty convenient to
maintain a single repository, it has a
1111.049 -> lot of drawbacks. But before I tell you the
drawbacks, let me tell you what advantage
1114.409 -> we have here. So first of all, if anyone makes
a commit in the repository, then there will
1119.039 -> be a commit ID associated with it and there
will always be a commit message. So, you know,
1123.919 -> which person has made that commit and at what
time and where in the code basically, right
1128.85 -> so you can always revert back. But let me now
discuss a few disadvantages. First of all, it
1133.85 -> is not locally available, meaning you always
need to be connected to a network to perform
1138.48 -> any action. It is not always available locally,
right? So you need to be connected to
1143.07 -> some sort of network. Basically, since everything
is centralized, in case of the central server
1147.94 -> getting crashed or corrupted, it will result
in losing the entire data of the project.
1152.009 -> Right? So that's a very serious issue guys.
And that is one of the reasons why Industries
1156.24 -> don't prefer a centralized version control system.
Let's talk about the distributed version
1160.299 -> control system now. Now these systems do not
necessarily rely on a central server to store
1165.22 -> all the versions of the project file. So in
distributed Version Control System, every
1169.77 -> contributor has a local copy or clone of the
main repository, as you can see; I'm highlighting it
with my cursor right now. That is, everyone
maintains a local repository of their own
1179.58 -> which contains all the files and metadata
present in the main repository. As you can
1184.59 -> see in the diagram as well, every programmer
maintains a local repository of their own, which
1189.129 -> is actually the copy or clone of the central
repository on their hard drive. They can commit
1193.309 -> and update the local repository without any
interference. They can update the local repositories
1198.71 -> with new data coming from the central server
by an operation called pull, and affect changes
1204.51 -> in the main repository by an operation called
push, right, an operation called push from the
1209.679 -> local repository. Now, you must be thinking what
advantage we get here. What are the advantages
1215.299 -> of distributed version control over the centralized
version control. Now basically the act of cloning
an entire repository gives you that advantage.
1225.179 -> Let me tell you how. Now, all operations apart
from push and pull are very fast, because the
1230.2 -> tool only needs to access the hard drive, not
a remote server; hence, you do not always
need an internet connection. Committing new
1234.751 -> change sets can be done locally without manipulating
the data on the main repository. Once
1240.47 -> you have a group of change sets ready. You
can push them all at once. So what you can
do is you can make the commit to your local
repository, which is there on your local hard
1248.57 -> drive. You can commit the changes that you
want in the source code, you know,
1253.279 -> once you review it, and then once you have
quite a lot of change sets ready, you can go ahead
1258.08 -> and push them onto the central server as well.
If the central server gets crashed at any
1262.419 -> point of time, the lost data can be easily
recovered from any one of the contributors'
1266.7 -> local repositories. This is one very big advantage.
Apart from that, since every contributor has
1272.11 -> a full copy of the project repository. They
can share changes with one another if they
1276.899 -> want to get some feedback before affecting
the changes in the main repository as well.
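To make the distributed idea concrete, this is all it takes to get a full, self-sufficient copy (a sketch; the URL is a placeholder):

    git clone https://github.com/example/project.git   # full copy of the repository, history included
    cd project
    git log --oneline           # browse the entire history offline, no server needed
    git commit -am "local fix"  # commit locally; push later, when the network is back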
1280.559 -> So these are the various ways in which you
know distributed version control system is
1284.73 -> actually better than a centralized version
control system. So we saw the two types of
1288.419 -> source code management systems and I hope
you have understood it. We are going to discuss
1292.48 -> one source code management tool called Git,
which is very popular in the market right
1296.46 -> now; almost all the companies actually use
Git. For now, I'll move forward and we'll go
1301.26 -> and focus on a source code management tool,
a distributed version control tool that is
1305.399 -> called Git. Now before I move forward guys,
let me make this thing clear: when I say
1309.29 -> version control or source code management,
it's one and the same thing. Let's talk about
1313.159 -> Git now. Now Git is a distributed version control
tool that supports distributed nonlinear workflows
1319.059 -> by providing data assurance for developing
quality software, right? So it's a pretty
1324.139 -> tough definition to follow but it will be
easier for you to understand with the diagram
1327.85 -> that is there in front of your screen. So
for example, I am a developer and this is
1331.86 -> my working directory right now. What I want
to do is I want to make some changes to my
local repository; because it is a distributed
version control system, I have my local repository
1341.129 -> as well. So what I'll do: I'll perform a git
add operation. Now because of git add, whatever
1346.46 -> was there in my working directory will be
present in the staging area. Now, you can
1350.33 -> visualize the staging area as something which
is between the working directory and your
1354.389 -> local repository, right? And once you have
done git add, you can go ahead and perform git
1360 -> commit to make changes to your local repository.
And once that is done you can go ahead and
1365.27 -> push your changes to the remote repository
as well. After that you can even perform git
1369.86 -> pull to add whatever is there in your remote
repository to your local repository, and perform
1374.259 -> git checkout to add everything which was
there in your local repository to your working directory
1378.23 -> as well. All right, so let me just repeat
it once more for you guys. So I have a working
1382.669 -> directory here. Now in order to add that to
my local repository, I need to first perform
1387.23 -> git add; that will add it to my staging area.
The staging area is nothing but the area between the
1392.85 -> working directory and the local repository.
After git add, I can go ahead and execute git
1397.13 -> commit, which will add the changes to my local
repository. Once that is done, I can perform
1401.639 -> git push to push the changes that I've made
in my local repository to the remote repository,
1405.57 -> and in order to pull the changes which are
there in the remote repository to the local
1409.78 -> repository, you can perform git pull, and finally
git checkout, so that they will be added to your working
1414.33 -> directory as well, and git merge, which is also
a pretty similar command.
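As a quick reference, the workflow just described maps onto the following commands (a minimal sketch, using the same file name as the demo below):

    git add edureka.py            # working directory -> staging area
    git commit -m "first commit"  # staging area -> local repository
    git push origin master        # local repository -> remote repository
    git pull origin master        # remote repository -> local repository
    git checkout master           # update the working directory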
1419.02 -> Now before we move forward guys, let me just show
you a few basic commands of Git. So I've already installed
1423.029 -> Git in my CentOS virtual machine. So let
me just quickly open my CentOS virtual
1427.129 -> machine to show you a few basic operations
that you can perform with Git. This is my virtual
1431.519 -> machine, and I've told you that I have already
installed Git. Now in order to check the version
1435.419 -> of Git, you can just type git --version,
and you can see that I have 2.7.2
1440.58 -> here. Let me go ahead and clear
my terminal. So now let me first make a directory,
1445.22 -> and let me call this edureka-repository,
and I'll move into this edureka repository.
1452.259 -> So first thing that I need to do is initialize
this repository as an empty git repository.
1456.97 -> So for that, all I have to type here is git
init, and it will go ahead and initialize
1461.32 -> this empty directory as a local Git repository.
So it has been initialized now; as you can
1466.649 -> see, it says initialized empty Git repository in
the .git folder of the edureka repository. Right,
1471.749 -> then, so over here I'm just going to create
a file, a Python file. So let me just name
1476.669 -> it edureka.py, and I'm going
to make some changes in this particular file.
1482.58 -> So I'll use gedit for that. I'm just going
to write in here a normal print statement:
1489.44 -> print("Welcome to Edureka"), close the parenthesis,
save it, close it. Let me get back to my terminal
1498.71 -> now. If I hit an ls command, I can see that the
edureka.py file is here. Now, if you can
1503.11 -> recall from the slides, I was telling you
in order to add a particular file or a directory
1508.289 -> into the local Git repository, first I need
to add it to my staging area, and how will
1512.41 -> I do that? By using the git add command. So
all I have to type here is git add and the name
1517.2 -> of my file, which is edureka.py, and here
we go. So it is done now. Now if I type in
1522.399 -> here git status, it will give me the files
which I need to commit. So this particular
1528.009 -> command gives me the status; it will
tell me all the files that I need to commit to
1532.169 -> the local repository. So it says a new
file has been created, that is edureka.py,
1536.62 -> and it is present in the
staging area, and I need to commit this particular
1540.98 -> file. So all I have to type here is git commit
-m and the message that I want, so I'll just
1549.539 -> type in here "first commit" and here we go.
So it is successfully done now. So I've added
1556.049 -> a particular file to my local git repository.
So now what I'm going to show you is basically
1561.169 -> how to deal with the remote repositories.
So I have a remote git repository present
1565.169 -> on GitHub. So I have created a GitHub account.
The first thing that you need to do is create
1569.23 -> a GitHub account and then you can go ahead
and create a new repository there and then
1573.539 -> I'll tell you how to add that particular repository
to a local git repository. Let me just go
to my browser once, and let me just zoom in a bit.
And yeah, so this is my GitHub account guys.
1586.71 -> And what I'm going to do is I'm first going
to go to this repository stab and I'm going
1591.529 -> to add one new repository. So I'll click on
new. I'm going to give a name to this repository.
1597.12 -> So whatever name that you want to give you
just go ahead and do that. Let me just write
it here: git-tutorial-devops, whatever
name that you feel like just go ahead and
1609.489 -> write that I'm going to keep it public if
you want any description you can go ahead
1613.059 -> and give that and I can also initialize it
with a README. Create the repository, and that's all
1617.96 -> you have to do in order to create a remote
GitHub repository now over here. You can see
1622.22 -> that there's only one README.md file.
So what I'm going to do, I'm just going to
1625.749 -> copy this particular SSH link and I'm going
to perform git remote add origin and the link
1633.73 -> that I just copied. I'll paste it here and
here we go. So this has basically added my
1639.489 -> remote repository to my local repository.
Now, what I can do is I can go ahead and pull
1644.509 -> whatever is there in my remote repository
to my local git repository for that. All our
1648.69 -> to type here is git pull origin master and
here we go. So it is done. Now as you can see
1656.499 -> that I've pulled all the changes. So let me
clear my terminal and hit an ls command.
1660.039 -> So you'll find README.md present here
right now. What I'm going to show you is basically
1664.049 -> how to push this edureka.py file onto
my remote repository. So for that all I have
1668.73 -> to type here is git push origin master and
here we go. So it is done. Now. Let me just
1677.739 -> go ahead and refresh this particular repository
and you'll find the edureka.py file here. Let me
1683.36 -> just go ahead and reload this, so you can see
the edureka.py file where I've written Welcome
1687.889 -> to Edureka. So it's that easy guys. Let
me clear my terminal now.
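Putting the remote-repository part of the demo together, the whole sequence was roughly this (a sketch; the SSH URL is the one you copy from your own GitHub repository page):

    git remote add origin [email protected]:<your-username>/git-tutorial-devops.git
    git pull origin master    # fetch README.md from the remote repository
    git push origin master    # publish edureka.py to the remote repository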
1693.22 -> So I've covered a few basics of Git. So let's move forward with
this DevOps tutorial and we are going to focus
1697.509 -> on the next stage, which is called continuous
integration. So we have seen a few basic commands
1701.399 -> of Git: we saw how to initialize an empty directory
into a Git repository, how we can, you know,
1705.869 -> add a file to the staging area and how we
can go ahead and commit in the local repository.
1710.419 -> After that, we saw how we can push the changes
in the local repository to the remote repository.
1715.809 -> My repository was on GitHub. I told you how
to connect to the remote repository and then
1719.84 -> how even you can pull the changes from the
remote repository, right. All of these things
1723.499 -> we have discussed in detail. Now, let's move
forward guys, and we are going to focus on the
1727.58 -> next stage which is called continuous integration.
So continuous integration is basically a development
1732.71 -> practice in which the developers are required
to commit changes to the source code in
1736.679 -> a shared repository several times a day, or
you can say more frequently and every commit
1741.45 -> made in the repository is then built. This
allows the teams to detect the problems early.
1745.72 -> So let us understand this with the help of
the diagram that is there in front of your
1749.269 -> screen. So here we have multiple developers
which are writing code for a particular application
1753.74 -> and all of them are committing code to a shared
repository which can be a git repository or
1758.049 -> subversion repository from there the Jenkins
server, which is nothing but a continuous
1762.44 -> integration tool, will pull that code. The moment
any developer commits a change in the source
1767.34 -> code, the moment any developer commits a change
in the source code, the Jenkins server will pull
1771.409 -> it and it will prepare a build. Now as I have
told you earlier as well build does not only
1775.739 -> mean compiling the source code. It includes
compiling but apart from that there are other
1779.919 -> things as well: for example, code review, unit
testing, integration testing, you know, packaging
1784.869 -> your application into an executable file.
It can be a WAR file, it can be a JAR file.
1789.159 -> So it happens in a continuous manner: the moment
any developer commits a change in the source
1793.25 -> code, the Jenkins server will pull it and prepare
a build. Right, this is called continuous
1797.84 -> integration. So Jenkins has various tools
in order to perform this; it has various
1802.36 -> tools for development, testing and deployment
technologies. It has well over 2,500 plugins.
1807.95 -> So you need to install that plug-in and you
can just go ahead and Trigger whatever job
1811.269 -> you wanted with the help of Jenkins. It is
originally written in Java. Right and let's
1816.14 -> move forward and we are going to focus on
continuous delivery now, so continuous delivery
1820.359 -> is nothing but taking continuous integration
to the next step. So what are we doing in
1824.419 -> a continuous manner or in an automated fashion?
We are taking this built application onto
1829.66 -> the test server for end user testing,
or user acceptance tests, right? So that is
1835.23 -> basically what continuous delivery is. So
let us just summarize continuous delivery
1838.929 -> again: the moment any developer makes a change
in the source code, Jenkins will pull that
1842.72 -> code and prepare a build. Once the build is successful,
Jenkins will take the built application and
1848.029 -> deploy it onto the test server for end
user testing or user acceptance tests. So this
1852.65 -> is basically what continuous delivery is; it
happens in a continuous fashion.
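As a rough sketch of what such an automated delivery step can look like in practice (hypothetical server names and paths, purely for illustration):

    # after a successful build, ship the package to the test server and restart the app
    scp target/app.war testuser@test-server:/opt/tomcat/webapps/
    ssh testuser@test-server 'systemctl restart tomcat'
    # then run the automated acceptance tests, e.g. a Selenium suite
    mvn verify -Pacceptance-tests   # 'acceptance-tests' is a hypothetical Maven profile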
1856.929 -> So what advantage do we get here? Basically, if there is a build failure,
then we know which commit has caused that
1862.59 -> error and we don't need to go through the
entire source code of the application. Similarly
1866.739 -> for testing: even if any bug appears in testing
as well, we know which commit has caused
1871.4 -> that error, and we can just go ahead and, you know,
have a look at that particular commit instead
1875.129 -> of checking out the entire source code of
the application. So basically this system
1879.389 -> allows the team to detect problems early,
right, as you can see from the diagram as well.
1884.09 -> You know, if you want to learn more about
Jenkins, I'll leave a link in the chat box.
1887.529 -> You can go ahead and refer to that, and people
who are watching it on YouTube can find that link
1891.07 -> in the description box below. Now, we're going
to talk about continuous deployment. So continuous
1895.194 -> deployment is basically taking the application
the built application that you have tested,
1900.34 -> and deploying that onto the prod servers
for release in an automated fashion. So once
1904.44 -> the application is tested, it will automatically
be deployed onto the prod servers for release.
1908.669 -> Now, this is not a good practice,
as I've told you earlier as well, because there
1912.22 -> might be certain checks that you need to do
before you release your software in the market,
1915.72 -> or you might want to market your product
before that. So there are a lot of things
1919.249 -> that you want to do before deploying your
application. So it is not advisable or a good
1923.35 -> practice to, you know, actually automatically
deploy your application onto the prod servers
1927.619 -> for release. So this is basically continuous
integration, delivery and deployment. Any questions
1932.669 -> you have guys, you can ask me. All right, so
Dorothy wants me to repeat it once more. Sure,
1936.629 -> I will do that. Let's start with continuous
integration. So continuous integration is
1941.2 -> basically committing the changes in the source
code more frequently and every commit will
1945.97 -> then be built using a Jenkins server, right
or any continuous integration server. So this
1950.619 -> Jenkins, what it will do: it will trigger a
build the moment any developer commits a change
1954.33 -> in the source code, and build includes compiling,
code review, unit testing, integration testing,
1959.239 -> packaging and everything. So I hope you are
clear with what is continuous integration.
1962.139 -> It is basically continuously building your
application, you know, the moment any developer
1966.59 -> commits a change in the source code, Jenkins
will pull that code and prepare a build. Let's
1970.179 -> move forward and now I'm going to explain
you continuous delivery. Now in continuous delivery,
1974.159 -> the package that we created here, the WAR or
the JAR file or the executable file, Jenkins
1979.57 -> will take that package and it will deploy
it onto the test server for end user testing.
1983.889 -> So this kind of testing is called the end
user testing or user acceptance test where
1987.94 -> you need to deploy your application onto a
server which can be a replica of your production
1992.47 -> server and you perform end user testing or
you call it user acceptance test. For example
1996.46 -> in my application if I want to check all the
functions right functional testing if I want
2000.03 -> to perform functional testing of my application,
I will first go ahead and check whether my
2003.47 -> search engine is working then I'll check whether
people are able to log in or not. So all those
2007.409 -> functions of a website when I check or an
application and I check is basically after
2011.22 -> deploying it onto a server, right? So that
sort of testing is basically what your
2015.6 -> functional testing is, or what I'm trying to
refer here next up. We are going to continuously
2019.85 -> deploy our application onto the prod servers
for release. So once the application is tested,
2024.519 -> it will then be deployed onto the prod servers
for release, and as I've told you earlier as well,
2028.249 -> it is not a good practice to deploy your application
continuously or in an automated fashion. So
2033.029 -> guys, we have discussed a lot about Jenkins.
How about I show you how the Jenkins UI looks
2037.2 -> and how you can download plugins and all
those things? So I've already installed Jenkins
2041.529 -> in my CentOS virtual machine. So let me
just quickly open my CentOS virtual machine.
2045.789 -> So guys, this is my CentOS virtual machine
again and over here. I have configured my
2049.77 -> Jenkins on localhost port 8080 at /jenkins, and
here we go. You just need to provide the username
2059.819 -> and password that you have given when you
are installing Jenkins. So this is how Jenkins
2065.45 -> looks like guys over here. There are multiple
options. You can just go and play around with
2069.52 -> it. Let me just take you through a few basic
options that are there. So when you click
2073.149 -> on new item, you'll be directed to a page
which will ask you to give a name to your
2076.76 -> project. So give whatever name that you want
to give then choose a kind of project that
2080.77 -> you want. Right and then you can go ahead
and provide the required specifications and
2085.129 -> configurations for your project. Now when
I was talking about plugins, let me tell you
2089.1 -> how you can actually install plug-ins. So
you need to go to Manage Jenkins, and here's
2093.55 -> a tab that you'll find manage plugins. In
this tab, you can find all the updates that
2098.28 -> are there for the plugins that you have already
installed in the available section. You'll
2101.96 -> find all the available plugins that Jenkins
support so you can just go ahead and search
2106.329 -> for the plug-in that you want to install just
check it and then you can go ahead and install
2109.78 -> it. Similarly, the plugins that are installed
will be found in the Installed tab, and then
2114.38 -> you can go ahead and check out the advanced
tab as well. So this is something different.
2117.73 -> Let's not just focus on this for now. Let
me go back to the dashboard and this is basically
2122.46 -> one project that I've executed which is called
Edureka Pipeline, and this blue colour symbolizes
2127.2 -> that it was successful; the blue colour ball
means it was successful. That's how it works
2131.02 -> guys. So I was just giving you a tour of the
Jenkins dashboard; we'll actually execute the
2135.06 -> practical as well. So we'll come back to it
later. But for now, let me open my slides
2139.83 -> and we'll proceed with the next stage in the
DevOps life cycle. So now let's talk about
2144.079 -> configuration management. So what exactly
is configuration management? So now let me
2148.49 -> talk about a few issues with the deployment
of a particular application or the provisioning
2152.67 -> of the servers. So basically what happens,
you know, I've built my application, but when
2156.559 -> I deploy it onto the test servers or onto the
prod servers, there are some dependency
2160.45 -> issues because of which my application is not
working fine. For example, in my developer's
2164.809 -> laptop. There might be some software stack
which was upgraded but in my prod and in the
2169.109 -> test environment, they're still using the
outdated version of that software stack, because
2173.119 -> of which the application is not working fine.
This is just one example. Apart from that, what
2177.39 -> happens when your application is live and
it goes down because of some reason, and that
2181.44 -> reason can be that you have upgraded the software
stack? Now, how will you go back to the previous
2185.16 -> stable version of that software stack? So there
are a lot of issues with you know, the admin
2189.339 -> side of the company, the Ops side of the company,
which were removed with the help of configuration
2193.18 -> management tools. So, you know, before, admins
used to write these long scripts in order
2197.92 -> to provision the infrastructure whether it's
the test environment or the prod environment
2201.22 -> or the dev environment. So they utilized those
long scripts, right, which is prone to error,
2205.58 -> plus it used to take a lot of time; and apart
from that, apart from the admin who has written that
2209.88 -> script, no one else can actually recognize
what's the problem with it if you have
2213.18 -> to debug it. So there are a lot of problems
that were there with the admin side or the Ops side of
2217.56 -> the company, which were removed with the help
of configuration management tools, and one
2221.18 -> very important concept that you guys should
understand is called infrastructure as code
2225.26 -> which means writing code for your infrastructure.
That's what it means. Suppose I want to
2229.44 -> install a LAMP stack on all of these three environments,
whether it's dev, test or prod: I will write the
2233.74 -> code for installing the LAMP stack in one central
location and I can go ahead and deploy it
2238.16 -> onto dev, test and prod. So I have the record
of the system state present in my one central
2244.24 -> location; even if I upgrade to the next version,
I still have the record of the previous stable
2248.78 -> version of the software stack, right? So I
don't have to manually go ahead and, you know,
2253.11 -> write scripts and deploy them onto the nodes;
it is that easy guys.
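In its simplest form, infrastructure as code just means that the install steps live in a versioned script rather than in someone's head. A minimal sketch (assuming a Debian-based system; the package names are illustrative):

    #!/bin/bash
    # lamp.sh - one definition of the LAMP stack, applied identically to dev, test and prod
    set -e
    apt-get update
    apt-get install -y apache2 mysql-server php libapache2-mod-php
    systemctl enable apache2 mysql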
2258.29 -> So let me just focus on a few challenges that
configuration management helps us to overcome. First of all, it can
2263.05 -> help us to figure out which components to
change when requirements change. It also helps
2267.53 -> us in redoing an implementation because the
requirements have changed since the last implementation
2272.869 -> and, a very important point guys, it helps
us to revert to a previous version of a
2276.47 -> component if you have replaced it with a new
but flawed version. Now, let me tell you
2280.52 -> the importance of configuration management
through a use case. Now the best example I
2284.96 -> know is of the New York Stock Exchange: a software
glitch prevented the NYSE from trading stocks
2290.79 -> for almost 90 minutes. This led to millions
of dollars of loss. A new software installation
2296.349 -> caused the problem; that software was installed
on 8 of its twenty trading terminals, and the
2301.619 -> system was tested out the night before. However,
in the morning it failed to operate on the
2305.981 -> eight terminals. So there was a need to switch
back to the old software. Now you might think
2310.23 -> that this was a failure of the NYSE's configuration
management process, but in reality it was
2315.24 -> a success: as a result of proper configuration
management, the NYSE recovered from that situation
2320.93 -> in 90 minutes, which was pretty fast. Had
the problem continued longer, the consequences
2325.48 -> would have been more severe guys. So I hope
you have understood its importance. Now, let's
2330 -> focus on various tools available for configuration
management. So we have multiple tools like
2334.74 -> Puppet, Chef, Ansible and SaltStack. I'm going to
focus on Puppet for now. So Puppet is a configuration
2338.76 -> management tool that is used for deploying
configuring and managing servers. So, let's
2342.849 -> see, what are the various functions of puppet.
So first of all, you can Define distinct configurations
2348.02 -> for each and every host and continuously check
and confirm whether required configuration
2353.24 -> is in place and is not altered on the host.
So what I mean by that you can actually Define
2358.89 -> distinct configurations: for example, on one
particular node I need this software stack, and on
2363.25 -> another node I need that software stack, so
I can, you know, define distinct configurations
2367.119 -> for different nodes and continuously check
and confirm whether the required configuration
2372.24 -> is in place and is not altered, and if it is
altered, Puppet will revert back to the required
2377.97 -> configurations. This is one function of Puppet.
It can also help in dynamic scaling up and
2381.74 -> scaling down of machines. So what will happen
if in your company there's a big billion day
2385.45 -> sale, right, and you're expecting a lot of
traffic? So at that time you need to provision
2389.38 -> more servers: probably today your task is to
provision 10 servers, and tomorrow you might
2394.01 -> have to provision more machines, right. So how will
you do that? You cannot go ahead and do that
2398.849 -> manually by writing scripts. You need tools
like Puppet that can help you in dynamic scaling
2403.13 -> up and scaling down of machines. It provides
control over all of your configured machines.
2407.599 -> So a centralized change gets propagated to
all automatically so it follows a master-slave
2412.61 -> architecture in which the slaves will pull
the central server for changes made in the
2417.71 -> configuration. So we have multiple nodes there
which are connected to the master. So they
2421.17 -> will poll they will check continuously. Is
there any change in the configuration happened
2424.91 -> the master the moment any change happen it
will pull that configuration and deploy it
2428.93 -> onto that particular node. I hope you're getting
So there are two models here: pull configuration and push configuration. In push configuration the master actually pushes the configurations onto the nodes, which is what happens in Ansible and SaltStack, but that does not happen in Puppet and Chef. Puppet and Chef follow pull configuration: the nodes keep checking the master at regular intervals, and if there is any change in the configuration, they pull it. Now let me explain the architecture
that is there in front of your screen. That is a typical Puppet architecture: a master/slave architecture where we have the Puppet master and the Puppet slave. The functions performed in this architecture are as follows. First, the Puppet agent sends facts to the Puppet master; the Puppet slave will first send its facts to the master. What are facts? They are key-value data pairs that represent some aspect of the slave's state, such as its IP address, uptime, operating system, or whether it's a virtual machine. The Puppet master then uses the facts to compile a catalog that defines how the slave should be configured. And what is a catalog? It is a document that describes the desired state for each resource that the Puppet master manages. Finally, the Puppet slave reports back to the master indicating that the configuration is complete, which is also visible in the Puppet dashboard. That's how it works.
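If you want to see what those facts look like on a node, Puppet's companion tool Facter can print them from the command line. A hedged sketch; fact names vary a little by Facter version and platform:

    # Query a few facts on a Puppet agent node (output differs per machine)
    facter os.name        # e.g. "Ubuntu" (structured fact, Facter 3+)
    facter ipaddress      # the node's primary IP address
    facter uptime         # how long the node has been up
    facter is_virtual     # whether the node is a virtual machine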
Now let's move forward and talk about containerization. So what exactly is containerization? I believe all of you have heard about virtual machines. So what are containers? Containers are nothing but lightweight alternatives to virtual machines. Let me explain that to you. We have Docker containers that contain the binaries and libraries required for a particular application, and that's when we say we have containerized a particular application. Let us focus on the diagram in front of your screen: here we have the host operating system, on top of which we have the Docker engine. There is no guest operating system here, guys; containers use the host operating system, and we are running two containers. Container one has application one and its binaries and libraries; container two has application two and its binaries and libraries. So all I need in order to run my application is that particular container, because all the dependencies are already present inside it. So what is a container, basically? It contains my application, the dependencies of my application, and the binaries and libraries required for that application. Nowadays, you must have noticed that even when you want to install some software, you often get a ready-to-use Docker container. That is because it's pretty lightweight when you compare it with a virtual machine.
So let me discuss a use case of how you can actually use Docker in the industry. Suppose you have some complex requirements for your application; it can be a microservice, it can be a monolithic application, anything. Let's take a microservice. Suppose you have complex requirements for your microservice and you have written a Dockerfile for it. With the help of that Dockerfile I can create a Docker image. A Docker image is nothing but a template; you can think of it as a template for your Docker container, and with the help of a Docker image you can create as many Docker containers as you want. Let me repeat it once more: we have written the complex requirements for a microservice application in an easy-to-write Dockerfile; from there we have created a Docker image, and with the help of the Docker image we can build as many containers as we want. Now, that Docker image I can upload onto Docker Hub, which is nothing but something like a git repository of Docker images; we can have public repositories and private repositories. And from Docker Hub any team, be it staging or production, can pull that particular image and prepare as many containers as they want.
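That upload-then-pull flow maps onto a handful of Docker CLI commands. A hedged sketch, where the Docker Hub username myuser and image name myservice are made-up placeholders:

    # Build the image from the Dockerfile in the current directory
    docker build -t myuser/myservice:1.0 .

    # Push it to Docker Hub (requires a prior `docker login`)
    docker push myuser/myservice:1.0

    # On any staging or production machine: pull and run containers
    docker pull myuser/myservice:1.0
    docker run -d myuser/myservice:1.0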
So what advantage do we get here? Whatever was there on my developer's laptop, namely the microservice application and its requirements written by the developer, I have replicated in my staging as well as my production environment. So there is a consistent computing environment throughout my software delivery life cycle. I hope you are getting my point. Let me quickly brief you again on what exactly a Docker container is: just visualize a container as a box in which our application is present with all its dependencies, except the box is infinitely replicable. Whatever happens in the box stays in the box, unless you explicitly take something out or put something in; and when it breaks, you just throw it away and get a new one. So containers make your application easy to run on different computers, and ideally the same image should be used to run containers in every environment, from development to production. That's what Docker containers are.
So guys, this is my CentOS virtual machine here again, and I've already installed Docker. The first thing is that I need to start Docker; for that I'll type systemctl start docker, give the password, and it has started successfully. Now, there are a few images that are already there on Docker Hub, which are public images; you can pull them any time you want, run an image as many times as you want, and create as many containers as you want. Basically, when I execute the command to pull an image, Docker will first try to find it locally, and if it is present, well and good; otherwise it will go ahead and pull it from Docker Hub. Before I move forward, let me show you how Docker Hub looks. If you have not created an account on Docker Hub, you need to go and do that, because you will need one for executing the use case; it's free of cost. So this is how Docker Hub looks, guys, and this is my repository that you can notice here. I can go ahead and search for images here as well.
For example, if I want to search for Hadoop images, which I believe one of you asked about, you can find that we have Hadoop images present here as well. These are just a few of the images that are on Docker Hub. Now I can go back to my terminal and execute a few basic Docker commands. The first thing I'm going to execute is docker images, which gives the list of all the images that I have on my local system. You can see I have quite a lot of images: the size, when each image was created, and this field called the image ID are all displayed on my console. Let me just clear my terminal. Now I'm going to pull an image. All I have to type here is docker pull; for example, if I want to pull an Ubuntu image, I just type docker pull ubuntu, and here we go. It is using the default tag, latest; a tag is something I'll tell you about later, but Docker will provide the default tag latest if you don't specify one. It is pulling from Docker Hub right now because it couldn't find the image locally; the download is completed and it is currently extracting it. Now, if I want to run a container, all I have to type here is docker run -it ubuntu, or you can type the image ID as well. And now I am inside the Ubuntu container.
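Collected in one place, the demo so far is just these commands (the images on your machine will differ):

    # Start the Docker daemon (CentOS/systemd)
    sudo systemctl start docker

    # List images already present on this machine
    docker images

    # Pull the official Ubuntu image (tag "latest" is implied)
    docker pull ubuntu

    # Start an interactive container from that image
    docker run -it ubuntu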
So I've told you how you can see the various Docker images, how you can pull an image from Docker Hub, and how you can go ahead and run a container. Now we're going to focus on continuous monitoring. Continuous monitoring tools resolve system errors, things like low memory or an unreachable server, before they have any negative impact on your business productivity. Now, what are the reasons to use continuous monitoring tools?
Let me tell you. They detect any network or server problems; they can determine the root cause of any issue; they maintain the security and availability of the services; and they monitor and troubleshoot server performance issues. They also allow us to plan infrastructure upgrades before outdated systems cause failures, and they can respond to issues at the first sign of a problem. And let me tell you, guys, these tools can be used to automatically fix problems when they are detected as well. They also ensure that IT infrastructure outages have a minimal effect on your organization's bottom line, and they can monitor your entire infrastructure and business processes.
So what is continuous monitoring? It is all about the ability of an organization to detect, report, respond to, contain and mitigate attacks that occur on its infrastructure or software. Basically, we have to monitor events on an ongoing basis and determine what level of risk we are experiencing. If I have to summarize continuous monitoring in one definition, I would say it is the integration of an organization's security tools: the integration of those tools, the aggregation, normalization and correlation of the data they produce, the analysis of that data based on the organization's risk goals and threat knowledge, and the near real-time response to the risks identified. That is what continuous monitoring is. And there is a very good saying here, guys: if you can't measure it, you can't manage it. I hope you know what I'm talking about.
Now, there are multiple continuous monitoring tools available in the market; we're going to focus on Nagios. Nagios is used for continuous monitoring of systems, application services and business processes in a DevOps culture, and in the event of a failure Nagios can alert technical staff of the problem, allowing them to begin the remediation process before outages affect business processes, users or customers. With Nagios you don't have to explain why an infrastructure outage affected your organization's bottom line. So let me tell you how it works.
I'll focus on the diagram that is there in front of your screen. Nagios runs on a server, usually as a daemon or a service. It periodically runs plugins residing on the same server; they contact hosts or servers on your network or on the internet, which can be present locally or remotely, as you can see in the diagram. One can view the status information using the web interface, and you can also receive email or SMS notifications if something happens. The Nagios daemon behaves like a scheduler that runs certain scripts at certain moments; it stores the results of those scripts and will run other scripts if those results change. Now, what are plugins? Plugins are compiled executables or scripts that can be run from a command line to check the status of a host or service. Nagios uses the results from the plugins to determine the current status of the hosts and services on your network. So what actually happens in this diagram: the Nagios server is running on a host, and plugins interact with local or remote hosts; these plugins send the information to the scheduler, which displays it in the GUI. That's what is happening, guys.
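For a flavor of how Nagios is told what to check, here is a hedged sketch of host and service definitions in Nagios' object configuration format; the hostname and address are made up, and check_ping is one of the standard plugins:

    # A host Nagios should monitor (values here are illustrative)
    define host {
        use        linux-server      ; inherit a stock template
        host_name  web01
        address    192.168.1.10
    }

    # A service check run against that host via a plugin
    define service {
        use                 generic-service
        host_name           web01
        service_description PING
        check_command       check_ping!100.0,20%!500.0,60%
    }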
All right, so we have discussed all the stages. Let me give you a quick recap of everything we have covered. First we saw the methodology before DevOps: we saw the waterfall model and its limitations, then we understood the agile model, the difference between the waterfall and agile methodologies, and the limitations of the agile methodology. Then we understood how DevOps overcomes all of those limitations and what exactly DevOps is. We saw the various stages and tools involved in DevOps, starting from version control; then we saw continuous integration, continuous delivery and continuous deployment, and understood the difference between integration, delivery and deployment. Then we saw what configuration management and containerization are, and finally I explained continuous monitoring. In between I was switching over to my virtual machine, where a few tools are already installed, and telling you a few basics about those tools. Now comes the most awaited topic of today's session, which is our use case. So let's see what we are going to implement in today's use case.
This is what we'll be doing. We have a git repository, and developers will be committing code to it. From there, Jenkins will pull that code: it will first clone the repository, and after cloning it, it will build a Docker image using a Dockerfile. Once that image is built, we are going to test it and then push it onto Docker Hub, which, as I've told you, is nothing but something like a git repository of Docker images. Let me repeat it once more: developers will be committing changes to the source code; the moment any developer commits a change, Jenkins will clone the entire git repository, build a Docker image based on a Dockerfile that we will create, and push the Docker image onto Docker Hub. This will happen automatically, at the click of a button. So we'll be using Git, Jenkins and Docker. Let me quickly open my virtual machine and show you.
Let me tell you what our application is all about. We are basically creating a Docker image of a particular application and then pushing it onto Docker Hub in an automated fashion, and our code lives in a GitHub repository. So what is the application? It's basically a Hello World server written with Node, so we have a main.js. Let me show you my GitHub repository. This is how our application looks, guys: we have main.js, and apart from that we have package.json for the dependencies; then we have a Jenkinsfile and a Dockerfile. I'll explain what we are going to do with the Jenkinsfile, but before that let me explain a few basics of the Dockerfile and how we can build a Docker image of this very basic Node.js application.
The first thing is writing a Dockerfile. To be able to build a Docker image with our application, we will need a Dockerfile; you can think of it as a blueprint for Docker. It tells Docker what the contents and parameters of our image should be. Docker images are often based on other images, but before that, let me go ahead and create the Dockerfile for you. First let me clone this particular repository and go to that directory; it's in Downloads. Let me unzip it first with unzip devops-tutorial and hit an ls command, and here my application is present. I'll go into this devops-tutorial-master directory and clear my terminal. Let us focus on which files we have: we have the Dockerfile (let's not focus on the Jenkinsfile at all for now), main.js, package.json, README.md, and test.js. So I have a Dockerfile with the help of which I will be creating a Docker image. Let me show you what I have written in this Dockerfile.
Before this, let me tell you that Docker images are often based on other images. For this example, we are basing our image on the official node Docker image; the first line you see is there to base our application on it. This makes our job easy and our Dockerfile very short, guys, because the hectic task of installing Node and its dependencies in the image is already done in our base image; we just need to include our application. Then we have set a maintainer label; this is optional, so if you want to do it, go ahead, and if you don't, it's still fine. There is a health check, which is basically for Docker to be able to tell whether the server is actually up or not. And finally we are telling Docker which port our server will run on. So this is how we have written the Dockerfile.
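Putting those pieces together, a Dockerfile along these lines would match the description. The exact file in the course repository may differ, so treat this as a hedged reconstruction; the port and file names follow the main.js server described above, and the health check assumes curl exists in the base image:

    # Base the image on the official node image from Docker Hub
    FROM node

    # Optional maintainer label
    LABEL maintainer="you@example.com"

    # Include our application and its dependency manifest
    COPY package.json main.js ./
    RUN npm install

    # Let Docker probe whether the server is actually up
    HEALTHCHECK CMD curl --fail http://localhost:8000/ || exit 1

    # The port our server listens on, and the command to start it
    EXPOSE 8000
    CMD ["node", "main.js"]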
Let me close this, and now I'm going to create an image using this Dockerfile. For that, all I have to type here is sudo docker build /home/edureka/Downloads/devops-tutorial, basically the path to my Dockerfile, and here we go; I need to provide the sudo password. It has started now and is creating the Docker image for me, and it is done: it successfully built, and this is my image ID. So I can just go ahead and run it as well; all I have to type here is docker run -it and my image ID, and here we go. It is listening on port 8000. Let me just stop it for now.
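As typed in the demo, that build-and-run step is just these two commands; the path is specific to this VM, and you would substitute your own image ID (adding -t to docker build would let you use a name instead of the raw ID):

    # Build an image from the Dockerfile in the given directory
    sudo docker build /home/edureka/Downloads/devops-tutorial

    # Run a container from the resulting image
    docker run -it <image-id>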
So I've told you how you can create an image using a Dockerfile. Now what I'm going to do is use Jenkins in order to clone a git repository, then build an image, then perform testing, and finally push the image onto Docker Hub, to my own Docker Hub profile. But before that, we need to tell Jenkins what our stages are and what to do in each one of them; for this purpose we will write the Jenkins pipeline specification in a Jenkinsfile. Let me show you how the Jenkinsfile looks; just click on it.
So this is what I have written in my Jenkinsfile; it's pretty self-explanatory. First I've defined my application. The stages just clone the repository that I have, then build the image, where the target is my Docker Hub username followed by the repository name; then test it, where we are simply going to print "test passed"; and then finally push it onto Docker Hub. This is the URL of Docker Hub, and my credentials are saved in Jenkins under the Docker Hub credentials ID.
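A Jenkinsfile with this shape would express those four stages. This is a hedged sketch in scripted-pipeline style using the Docker Pipeline plugin; yourUser/yourRepo, the registry URL and the docker-hub-credentials ID are placeholder assumptions, not the course's exact file:

    // Jenkinsfile sketch: clone, build, test, push (names are placeholders)
    node {
        def app

        stage('Clone repository') {
            checkout scm                              // pull the source Jenkins was pointed at
        }

        stage('Build image') {
            app = docker.build("yourUser/yourRepo")   // uses the Dockerfile in the repo
        }

        stage('Test image') {
            app.inside {
                sh 'echo "test passed"'               // the demo simply prints a message
            }
        }

        stage('Push image') {
            docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
                app.push("latest")
            }
        }
    }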
Let me show you how you can save those credentials. Go to the Credentials tab, click on System, and click on Global credentials. Over here you can click on Update, and you need to provide your username, your password, and the Docker Hub credentials ID, whatever you are going to reference in the Jenkinsfile. Let me just type the password again. All right.
Now we need to tell Jenkins two things: where to find our code, and what credentials to use to publish the Docker image. I've already configured my project; let me show you what I have entered there. The first thing is the name of the project, which you give when you create a new item; there's a field where you need to enter the name of your project, and I've chosen a pipeline project. If I have to show you the pipeline project type, go to New Item, and this is the kind of project I've chosen. Then I clicked on Build Triggers: this will poll my SCM, the source code management repository, every minute, so whenever there is a change in the source code it will pull it and repeat the entire process. Then, under the advanced project options, I selected Pipeline script from SCM; here you can either write the pipeline script directly or choose Pipeline script from source code management, where the kind of source code management is Git. Then I provided the link to my repository, and that's all I have done. When I scroll down there's nothing else; I can just click Apply and Save.
I've already built this project once, so let me just go ahead and run it again. All right, it has started. First it will clone the repository that I have. You can find all the logs once you click on this blue-colored ball, and you can find the logs here as well; once you click here you'll find them over here too, and similarly the logs are present here also. Now we have successfully built our image and tested it, and we are pushing it onto Docker Hub. And we have successfully pushed our image onto Docker Hub as well. Now if I go back to my profile and open my repository here, you can find the image is already present; I've actually pushed it multiple times. So this is how you execute the practical; it was very easy, guys.
So let me give you a quick recap of all the things we have done. First, I told you how you can write a Dockerfile in order to create a Docker image of a particular application; we were basing our image on the official node image present on Docker Hub, which already contains all the dependencies and makes our Dockerfile look very small. After that I built an image using the Dockerfile. Then I explained how you can use Jenkins to automate the tasks of cloning a repository, building a Docker image, testing the Docker image, and finally uploading it onto Docker Hub; we did all that automatically with the help of Jenkins. I told you where you need to provide the credentials, what tags are, and how you can write a Jenkinsfile. The next part of the use case is that different teams, be it staging or production, can pull the image that we have uploaded onto Docker Hub and run as many containers as they want.
Hey everyone, this is Reyshma from Edureka, and in today's tutorial we're going to learn about Git and GitHub. So without any further ado, let us begin this tutorial by looking at the topics we'll be learning today. At first we will see what version control is and why we actually need version control. After that we'll take a look at the different version control tools, and then we'll see all about GitHub and Git, taking into account a case study of Dominion Enterprises and how they're using GitHub. After that we'll take a look at the features of Git, and finally we're going to use the git commands to perform all the Git operations. So this is exactly what we'll be learning today, and we're good to go. Let us begin with the first topic: what is version control?
Well, you can think of version control as the management system that manages the changes you make in your project, right through to the end. The changes you make might be adding new files, or modifying older files by changing their source code. What the version control system does is that every time you make a change in your project, it creates a snapshot of your entire project and saves it, and these snapshots are known as different versions. Now, if you're having trouble with the word snapshot, just consider that a snapshot is the entire state of your project at a particular time: it records what files your project is storing at that time and what changes you have made. That is what a particular version contains. Now, if you see the example here, let's say I have been developing my own website. In the beginning I had only one web page, called index.html. After a few days I added another web page, called about.html, and I made some modifications to about.html by adding some pictures and some text. So let's see what the version control system actually stores. You'll see that it has detected that something has been created and something has been modified; for example, it records that about.html was created and some photo was added to it. Let's say that after a few days I changed the entire page layout of the about.html page; again my version control system will detect the change and record that about.html has been modified. You can consider all three of these snapshots as different versions: when I only have my index.html web page and nothing else, that is version 1; when I add another web page, that is version 2; and after I change the page layout of my web page, that is version 3. So this is how a version control system stores different versions.
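In Git terms, each of those three snapshots would be one commit. A hedged sketch of the same story on the command line, with file names following the example above:

    # Version 1: just the first page
    git init
    git add index.html
    git commit -m "version 1: add index.html"

    # Version 2: a second page with a photo
    git add about.html photo.jpg
    git commit -m "version 2: add about.html and photo"

    # Version 3: the page layout changes
    git add about.html
    git commit -m "version 3: rework about.html layout"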
So I hope you've all understood what a version control system is and what versions are. Let us move on to the next topic and see why we actually need version control. You might be thinking: why should I need version control? I know what changes I have made, and maybe I'm making these changes just because I'm correcting my project or something. But there are a number of reasons why we need version control, so let us take a look at them one by one. The first thing a version control system gives us is collaboration. Now imagine there are three developers working on a particular project, each working in isolation, or even in the same shared folder. There might be conflicts sometimes when they try to modify the same file. Let's say they are working in isolation, everyone minding their own business: developer one has made some changes XYZ in a particular application, and in the same application developer two has made some other changes ABC, and they keep going like that, modifying the same file but in different ways. At the end, when you try to collaborate, when you try to merge all of their work together, you'll run into a lot of conflicts, you might not know who made which changes, and it will all end in chaos. But a version control system provides you with a shared workspace, and it continuously tells you who has made what change and what has been changed, so you'll always get notified when someone makes a change in your project. With a version control system, collaboration is available between all the developers, you can visualize everyone's work properly, and as a result your project will always evolve as a whole from the start. It will save you a lot of time, because there won't be many conflicts: if developer A sees that someone has already made certain changes, he won't duplicate that work; he can carry on making other changes without interfering with anyone else's work. Okay, so let's move on to the next reason why we need a version control system.
And this is one of the most important reasons why we need a version control system; I'll tell you why. The next reason is storing versions, because saving a version of your project after you have made changes is essential, and without a version control system it can actually get confusing. There are some questions that arise in your mind when you try to save a version. The first question might be: how much do you save? Would you save the entire project, or just the changes you made? If you only save the changes, it will be very hard for you to view the whole project at any one time; and if you save the entire project every time, there will be a huge amount of unnecessary, redundant data lying around, because you'll be saving the same unchanged things again and again, and that will eat up a lot of your space. Then the next problem comes: how do I actually name these versions? Even if you are a very organized person and come up with a very comprehensive naming scheme, as soon as your project starts growing and the versions multiply, there is a pretty good chance you'll lose track of the names. And finally, the most important question: how do you know what exactly is different between these versions? What's the difference between version 1 and version 2? What exactly was changed? You would need to remember or document all of that as well. Now, when you have a version control system, you don't have to worry about any of that: not how much to save, not how to name them, and you don't have to remember what exactly differs between the versions, because the version control system acknowledges that there is only one project. So when you're working on your project, there is only one version on your disk, and everything else, all the changes made in the past, is neatly packed inside the version control system. Let us go ahead and see the next reason.
Next, a version control system provides me with a backup. The diagram you see here is the layout of a typical distributed version control system. You've got your central server, where all the project files are located, and apart from that every one of the developers has a local copy of all the files present in the central server inside their local machine; these are known as the local copies. What the developers do is that every time they start coding, at the start of the day, they fetch all the project files from the central server and store them on the local machine, and after they are done working they transfer all the files back into the central server. So at all times you will have a local copy on your local machine, which helps in times of crisis. Let's say your central server crashes and you have lost all your project files: you don't have to worry about that, because all the developers maintain a local copy, the exact same copy of all the project files that were present in the central server, on their local machines. And even if, let's say, one developer has not updated his local copy with all the files when the central server crashes, there is always going to be someone who has already updated theirs, because obviously there is going to be a huge number of collaborators working on the project; a developer can communicate with other developers and fetch all the project files from another developer's local copy as well. So it is very reliable to have a version control system, because you're always going to have a backup of all your files.
The next thing version control helps us with is analyzing my project, because when you have finished your project you want to know how it actually evolved, so that you can make an analysis of it and learn what you could have done better or what could have been improved. You need data to make that analysis: you want to know what exactly changed, when it changed, and how much time it took. A version control system provides you with all of that information, because every time you change something it records a proper description of what was changed and when. You can also see the entire timeline, and you can put together your analysis report very easily because all the data is present there. So this is how a version control system helps you analyze your project as well. Let us move ahead and take a look at the version control tools, because in order to incorporate a version control system in your project, you have to use a version control tool. So let us take a look at what is available, what kind of tools you can use to incorporate a version control system.
Here we've got the four most popular version control tools. The first is Git, and this is what we'll be learning in today's tutorial: how to use Git. Apart from Git you have other options as well: you've got Apache Subversion, popularly known as SVN, and CVS, the Concurrent Versions System. These two are centralized version control tools, which means they do not provide all the developers with a local copy; all the contributors and collaborators work directly with the central repository only, and they don't maintain local copies. They are kind of becoming obsolete, because everyone prefers a distributed version control system where everyone has a local copy. Mercurial, on the other hand, is very similar to Git; it is also a distributed version control tool. But we'll be learning all about Git here, which is why Git is highlighted in yellow. So let's move ahead. This is the interest-over-time graph, collected from Google Trends, and it shows you how many people have been using which tool at which time. The blue line here represents Git, the green is SVN, the yellow is Mercurial and the red is CVS. You can see that from the start Git has always been the most popular version control tool compared with Subversion, Mercurial and CVS, and it has always kind of been a bad day for CVS, but Git has always been popular. So why not use Git, right? A lot of my fellow attendees agree with me: we should all use Git, and we're going to learn how to use it in this tutorial. So let us move ahead and learn all about Git and GitHub right now.
So the diagram you see on my left represents what exactly GitHub is and what exactly Git is. Now, I've been talking about distributed version control systems, and the right-hand diagram shows you the typical layout of a distributed version control system: we've got a central server, or central repository. I'll be using the word repository a lot from now on, so just so you don't get confused, I'll give you a brief overview here; I'll also explain in detail what a repository is later in this tutorial. For now, just consider a repository as a data space where you store all the project files, any kind of files related to your project; so don't get confused when I say repository instead of server or anything else. In a distributed version control system you've got a central repository, and you've got local repositories as well. Each developer first makes changes in their local repository, and after that they push, or transfer, those changes into the central repository; they also update their local repositories with all the new files that have been pushed into the central repository, by an operation called pull. This is how they fetch data from the central repository. Now, if you see the diagram on the left again, you'll know that GitHub is going to be my central repository, and Git is the tool that is going to allow me to create my local repositories.
me to create my local repositories. Now, let
4488.85 -> me exactly tell you what is GitHub. Now people
actually get confused between git and GitHub
4494.6 -> they I think that it's kind of the same thing
maybe because of the name they sound very
4499.679 -> alike. But it is actually very different.
Well git is a Version Control tool that will
4505.79 -> allow you to perform all these kind of operations
to fetch data from the central server and
4510.869 -> to just push all your local files into the
central server. So this is what get will allow
4516.59 -> you to do it is just a Version Control Management
tool. Whereas in GitHub. It is a code hosting
4524.15 -> platform for Version Control collaboration.
So GitHub is just a company that allows you
4531.28 -> to host your central repository in a remote
server. If you want me to explain in easy
4537.88 -> words, you can consider GitHub as a social
network, which is very much similar to Facebook.
4544.31 -> Like only the differences that this is a social
network for the developers. We're in Facebook,
4550.58 -> you're sharing all your photos and videos
or any kind of statuses. What the developers
4556.36 -> doing get have is that they share their code
for everyone to see their projects either
4560.739 -> code about how they have worked on. So that
is GitHub. There are certain advantages of
4567.119 -> a distributed Version Control System. Well,
the first thing that I've already discussed
4571.73 -> was that it provides you with the backup.
So if at any time your central server crashes,
4576.65 -> everyone will have a backup of all their files
and the next reason is that it provides you
4582.87 -> with speed because Central servers typically
located on a remote server and you have to
4588.219 -> always travel over a network to get access
to all the files. So if at sometimes you don't
4593.08 -> have internet and you want to work on your
project, so that will be kind of impossible
4597.4 -> because you don't have access to all your
files, but with a distributed Version Control
4601.489 -> System, you don't need internet access always
you just need internet when you want to push
4607.15 -> or pull from the central server apart from
that you can work on your own your files are
4612.85 -> all inside your local machine so fetching
it. In your workspace is not a problem. So
4618.389 -> that are all the advantages that you get with
a distributed version control system and a
4623.57 -> centralized version control system cannot
actually provide you that so now let us take
4627.74 -> a look at a GitHub case study of the Dominion
Enterprises. So Dominion Enterprises is a
4635.56 -> leading marketing services and Publishing
company that works across several Industries
4640.94 -> and they have got more than 100 offices worldwide.
So they have distributed a technical team
4647.98 -> support to develop a range of a website and
they include the most popular websites like
4653.9 -> for and.com volts.com homes.com. All the Dominion
Enterprises websites actually get more than
4662.6 -> tens of million unique visitors every month
and each of the website that they work on
4669.719 -> has a separate development team and all of
them has got a unique needs and You were close
4675.95 -> of their own and all of them were working
independently and each team has their own
4681.87 -> goals their own projects and budgets, but
they actually wanted to share the resources
4687.3 -> and they wanted everyone to see what each
of the teams are actually working on. So basically
4693.48 -> they want to transparency. Well the needed
a platform that was flexible enough to support
4698.889 -> a variety of workflows. And that would provide
all the Dominion Enterprises development around
4704.47 -> the world with a secure place to share code
and work together and for that they adopted
4710.42 -> GitHub as the platform. And the reason for
choosing GitHub is that all the developers
4716.469 -> across the Dominion Enterprises, we're already
using github.com. So when the time came to
4722.389 -> adopt a new version control platform, so obviously
GitHub Enterprise definitely seemed like a
4728.13 -> very intuitive choice and because everyone
all the developers were also familiar with
4733.93 -> GitHub. So the learning curve Was also very
small and so they could start contributing
4739.28 -> code right away into GitHub and with GitHub
all the developer teams. All the development
4744.98 -> teams were provided access to when they can
always share their code on what they're working
4749.73 -> on. So at the end everyone has got a very
secure place to share code and work together.
4756.51 -> And as Joe Fuller, the CIO of dominion Enterprises
says that GitHub Enterprise has allowed us
4763.02 -> to store our company source code in a central
corporately control system and Dominion Enterprises
4770.31 -> actually manages more than 45 websites, and
it was very important for dominion and the
4776.23 -> price to choose a platform that made working
together possible. And this wasn't just a
4781.55 -> matter of sharing Dominion Enterprises open
source project on GitHub. They also had to
4786.55 -> combat the implications of storing private
code publicly to make their work more transparent
4791.719 -> across the company as well and they were also
using Jenkins to facilitate continuous integration
4798.56 -> environment and in order to continuously deliver
their software. They have adopted GitHub as
4805.369 -> a Version Control platform. So GitHub actually
facilitated a lot of things for Dominion Enterprises
4812.4 -> and for that there were able to incorporate
a continuous integration environment with
4817 -> Jenkins and they were actually sharing their
code and making software delivery even more
4822.06 -> faster. So this is how GitHub helped not only
just a minute Enterprises, but I'm sure there's
4827.6 -> might be common to a lot of other companies
as well. So let us move forward. So now this
Now this is the topic we were waiting for: we'll learn what Git is. Git is a distributed version control tool, and it supports distributed, non-linear workflows. Git is the tool that facilitates all the benefits of a distributed version control system: it lets you create a local repository on your local machine, and it helps you access your remote repository to fetch files from there or push files to it. Git is the tool you require to perform all these operations, and I'll be telling you how to perform them using Git later in this tutorial; for now, just think of Git as the tool you need for every kind of version control task. So we'll move on and see the different features of Git. These are the features: Git is distributed; Git is compatible; Git provides you with a non-linear workflow; it avails you of branching; it's very lightweight; it provides you with speed; it's open source; and it's reliable, secure and economical. So let us take a look at all these features one by one.
to look into is its distributed now, I've
4912.59 -> been like telling you it's a it's a distributor.
Version Control tool that means that the feature
4917.621 -> that get provides you is that it gives you
the power of having a local repository and
4923.83 -> lets you have a local copy of the entire development
history, which is located in the central repository
4929.79 -> and it will fetch all the files from the central
repository to get your local repository always
4935.73 -> updated and this time calling it distributed
because every was let's say that there might
4941.82 -> be a number of collaborators or developers
so they might be living in different parts
4948.84 -> of the world. Someone might be working from
the United States and one might be in India.
4953.11 -> So the word the project is actually distributed.
Everyone has a local copy. So it is distributed
4959.56 -> worldwide you can say so this is what distributed
actually means. So the next feature is that
4966.51 -> it is compatible. Now, let's say that you
might not be using get on the first place.
4972.739 -> But you have a different version control system
already installed like SVN, like Apache subversion
4979.77 -> or CVS and you want to switch to get because
obviously you're not happy with the centralized
4985.76 -> version control system and you want a more
distributed version control system. So you
4989.909 -> want to migrate from SVN to get but you are
worried that you might have to transfer all
4995.88 -> the files all the huge amount of files that
you have in your SVN repository into a git
5000.969 -> repository. Well, if you are afraid of doing
that, let me tell you you don't have to be
5005.79 -> anymore because get is compatible with as
VM repositories as well. So you just have
5011.69 -> to download and install get in your system
and and you can directly access the SVN repository
5018.34 -> over a network which is the central repository.
So the local repository that you'll have is
5023.5 -> going to be a good trip. The tree and if you
don't want to change your central repository,
5028.23 -> then you can do that as well. We can use get
SVN and you can directly access all the files
5034.04 -> all the files in your project that is residing
in an SVN repository. So do you don't have
5039.63 -> to change that and it is compatible with existing
systems and protocols but there are protocols
5045.39 -> like SSH and winner in protocol. So obviously
get users SSH to connect to the central repository
5051.699 -> as well. So it is very compatible with all
the existing things so you don't have to so
5056.869 -> when you are migrating into get when you are
starting to use get you don't have to actually
5060.77 -> change a lot of things so is as I have everyone
understood these two features by so far Okay,
The next feature of Git is that it supports non-linear development of software. When you're working with Git, Git records the current state of your project by creating a tree graph from the index, and as you know, a tree is a non-linear data structure; in Git's case it actually takes the form of a directed acyclic graph, popularly known as a DAG. This is how Git facilitates non-linear development of software, and it also includes techniques that let you navigate and visualize all the work you are currently doing. And when I'm talking about non-linearity, how does Git actually facilitate non-linear development? By branching. Branching is what allows non-linear software development, and this is the Git feature that makes Git stand apart from nearly every other version control management tool, because Git is the only one with such a branching model. Git allows, and actually encourages, you to have multiple local branches, all of them independent of each other, and the creation, merging and deletion of these branches takes only a few seconds. There is also a thing called the master branch, meaning the main branch, which runs from the start of your project to its end and always contains the production-quality code; it will always contain the entire project.
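Here's what that branching model looks like in practice; a hedged sketch with a made-up feature branch name:

    # Create an independent branch off master and switch to it
    git branch feature-login
    git checkout feature-login

    # ...commit work on the branch, then bring it back into master
    git checkout master
    git merge feature-login

    # Deleting a fully merged branch takes a second
    git branch -d feature-login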
After that, it is very lightweight. You might be thinking: since we're using local repositories on our local machines and we're fetching all the files in the central repository, and there may be hundreds, maybe thousands, of people pushing their code into the central repository and updating my local repository with all those files, the data might be very huge. But Git actually uses lossless compression and compresses the data on the client side. So even though it might look like you've got a lot of files, when it actually comes to storing the data in your local repository it is all compressed, and it doesn't take up a lot of space. Only when you're fetching data from the local repository into your workspace does it convert it back so you can work on it, and whenever you push again, it compresses it again and stores it in very minimal space on your disk. After that, it provides you with a lot of speed. Since you have a local repository, you don't always have to travel over a network to fetch files, so it takes almost no time to get files into your workspace from your local repository; it is far faster than fetching data from a remote repository, where you obviously have to travel over a network to get the data or files you want. Mozilla has performed performance tests and found that Git is one order of magnitude faster than some other version control tools, which is to say about 10 times faster. And the reason is that Git is written in C, and C, unlike other high-level languages, is very close to machine language, so it reduces the runtime overhead and makes all the processing very fast. So Git is very small and Git is very fast.
The next feature is that it is open source. You know that Git was created by Linus Torvalds, the famous man who created the Linux kernel, and he used Git in the development of the Linux kernel. They were using a version control system called BitKeeper first, but it was not open source; the owner of BitKeeper made it a paid product, and this got Linus Torvalds mad. So what he did is create his own version control tool: he came up with Git and made it open source for everyone, so the source code is available, you can modify it on your own, and you can get it for free. So that is one more good thing about Git.
After that, it is very reliable. Like I've been telling you since the start, you have a backup of all the files in your local repository, so if your central server crashes, you don't have to worry; your files are all saved in your local repository. And even if something is not in your local repository, it might be in some other developer's local repository, and you can ask him whenever you need that data. After your central server is restored from the crash, he can push all the data directly into the central repository, and from there everyone can always have a backup.
Git uses SHA-1 to name and identify objects. Whenever you make a change, Git creates a commit object, and after you have made and committed those changes, it is very hard to go back and alter them without other people knowing, because whenever you make a commit, SHA-1 hashes it. So what is SHA-1? It is a cryptographic algorithm, a message digest algorithm, that converts your commit object into a 40-digit hexadecimal code. It belongs to the same family of message digest algorithms as MD4 and MD5, and it is considered very secure; even the National Security Agency of the United States of America uses SHA-1, so if they're using it, you can take it that it is very secure. If you want to know more about MD5 and message digests, I'm not going to walk you through the whole cryptographic algorithm of how the cipher is built; you can Google it and learn what SHA-1 is. But the main idea is that after you have made changes, you cannot deny having made them, because Git stores them and everyone can see the commit hash it creates for you. This commit hash is also useful when you want to revert to previous versions: you can find out exactly which commit caused which problem, and if you want to remove that commit or that version you can do so, because SHA-1 gives you a hash for every commit in the log.
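To make the SHA-1 idea concrete, here is a minimal sketch of how Git derives those names (assuming a shell with git and sha1sum available; demo.txt is just an example file, not part of this demo):

    $ echo 'hello' > demo.txt
    $ git hash-object demo.txt        # ask Git for the object's SHA-1 name
    ce013625030ba8dba906f756967f9e9ca394464a
    # Git hashes a small "blob <size>" header plus the content, so the
    # same 40-digit code can be reproduced by hand:
    $ printf 'blob 6\0hello\n' | sha1sum
    ce013625030ba8dba906f756967f9e9ca394464a  -

Because the name is derived from the content itself, silently altering a committed file is practically impossible: its hash would no longer match.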
So we move on and see the next feature, which is economical. Git is released under the General Public License, which means it is free: you don't have to pay any money to download Git onto your system, and you can have Git without burning a hole in your pocket. Since all the heavy lifting is done on the client side, because everything you do happens in your own workspace and is pushed into the local repository first, and only after that pushed to the central server, people only push into the central server once they're sure about their work; they're not experimenting on the central repository. So your central repository can be fairly simple: you don't have to worry about having very complex and powerful hardware, and a lot of money can be saved there as well. So Git is free, Git is small, and Git provides you with all the cool features you would actually want. Those are all the Git features.
So we'll go ahead to the next topic. First we'll see what a repository is. As GitHub puts it, it is a directory or storage space where all your projects live. It can be local, a folder on your computer, like your local repository, or it can be a storage space on GitHub or another online host, meaning your central repository. You can keep your code files, text files, image files, you name it, inside a repository: everything related to your project. And as I have been repeating since the start of this tutorial, we have two kinds of repositories: the central repository and the local repository. Now let us take a look at what these repositories actually are. On my left-hand side you can see the central repository, on the right-hand side the local repository, and the diagram in the middle shows the entire layout: the local repository lives inside my local machine, and my central repository, for now, is going to be on GitHub.
So my central repository is typically located on a remote server, and as I just told you, in our case that is GitHub. My local repository is on my local machine and resides in a .git folder inside my project's root. That .git folder contains all the templates, objects, and every other configuration file created with your local repository, and since you're pushing all the code, your central repository will also have the same .git folder inside it. The sole purpose of having a central repository is so that all the developers can share and exchange data, because someone might be working on one problem and someone else might need help with it. A developer can push all the code, the problems he has solved, or anything he has worked on, to the central repository, and everyone else can see it, pull his code, and use it themselves. So the central repository is meant for sharing data, whereas the local repository is something only you can access; it is meant for your own work. You can work in your local repository in isolation and no one will interfere, and once you're sure your code is working and you want to show it to everyone, you just transfer it, or push it, into the central repository.
Okay, so now we'll look at the Git operations and commands. This is how we'll be using Git: there are various operations and commands that help us do everything we were just talking about, like pushing changes. We'll be performing all of them: creating repositories, making changes to the files in our repositories, doing the parallel, non-linear development I was just talking about, and syncing our repositories so that the central repository and local repository stay connected. I'll show you how to do each of these one by one. The first thing we need to do is create repositories: we need a central repository and a local repository. We will host our central repository on GitHub, so for that you need a GitHub account and you create a repository there, and for your local repository you have to install Git on your system. If you are working on a completely new project and want to start something fresh, you can just use git init to create your repository; or, if you are joining an ongoing project that you're new to, you can clone the central repository using the command git clone.
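As a quick sketch of those two starting points (the URL is a placeholder for whichever repository you are joining):

    # Option 1: start a fresh project
    $ git init                                    # creates the .git folder in the current directory

    # Option 2: join an ongoing project
    $ git clone https://github.com/<user>/<repo>.git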
So let us do that: first we'll create a GitHub account and a repository on GitHub. First you need to go to github.com, and if you don't have an account, you can sign up. You just have to pick a username that has not already been taken, provide your email address, set a password, and click the green button here, and your account will be created. It's very easy, you don't have to do much; after that you just verify your email, and once you're done with all of that you can sign in. I already have an account, so I'm just going to sign in here. After you're signed in you'll find this page, with two buttons: you can read the guide on how to use GitHub, or you can start a project right away. I'll be telling you all about GitHub, so you don't have to click the guide button; just go ahead and start a project. Now, the convention is that for every project you should maintain a unique repository, because it's healthy and keeps things clean: if you store only the files related to one project in a repository, you won't get confused later. So when you're creating a new repository, you have to provide a repository name; I'm just going to name it git-github.
You can also provide a description of the repository; this is optional, so if you don't want to, you can leave it blank. Then you can choose whether you want it public or private. If you want it to be private, you have to pay some amount; at the time of recording it costs $7 a month. And what is the benefit of having a private repository? Only you can see it: if you don't want to share your code with anyone or let anyone see it, you can do that on GitHub as well. But for now I'll just leave it public: I want it for free, and I'm happy to let everyone see my work. After that, you can initialize this repository with a README. The README file contains the description of your files and is the first file inside your repository when you create it; it's a good habit to initialize your repository with a README, so I'll just tick this option.
This is the option to add a .gitignore. There might be files that you don't want involved in operations like push or pull, files you don't want pushed or pulled, such as log files; you can list those in the .gitignore here. Right now I don't have any such files, since this is just the start of our project, so I'll skip the .gitignore for now. Then you can also add a license; you can read through what these licenses are, but for now I'll just leave it as none. After that, just click the green button to create the repository. And there it is.
You can see this is the initial commit: you have initialized your repository with the README, and this is your README file. If you want to make changes to the README file, just click on it, then click the pencil (edit) icon here, and you can edit the README if you want to write something. Let's just write a description: 'This is for our tutorial purpose.' That's it, keeping it simple. After you've made changes, the next thing to do is commit them, so just go down and click the green 'Commit changes' button. And it's done: you have updated README.md, and this is your commit hash, which you can see here. If you go back to your repository, you can see that something has been updated; it will show you when your last commit was, even the time. For now you're on the branch master, and this view will show you all the logs. Since only I'm contributing here, there is one contributor, and I've made just two commits: the first when I initialized the repository, and the one just now when I modified it. I have not created any branches, so there is only one branch. So now my central repository has been created.
The next thing I need to do is create a local repository on my local machine. I have already installed Git on my system; I am using a Windows system, so I have installed Git for Windows. If you want some help with the installation, I have already written a blog on that; I'll leave the link in the description below, and you can refer to it to install Git on your system. I've already done that, so let's say I want my project to be in the C drive. I'm creating a folder here for my project; let's name it 'Edureka project', and let's say this is where I want my local repository to be. The first thing I'll do is right-click and choose the option 'Git Bash Here'. This opens up a very colorful terminal for you to use, called the Git Bash emulator; this is where you'll type all your commands and do all your work. To create your local repository, the first thing you do is type the command git init and press Enter. Now you can see it says 'Initialized empty Git repository' on this path. And if we look, you can see that a .git folder has been created here, and inside it you can see it contains all the configuration, the object details, and everything. So your repository is initialized; this is going to be your local repository.
After we have created our repositories, it is very important to link them, because how would you know which repository to push into, and how would you pull changes or files from a remote repository, if they're not connected properly? To connect them, the first thing we need to do is add a remote: we're going to call our remote repository 'origin', and we'll use the command git remote add origin so that we can pull files from our GitHub, or central, repository. To fetch files we can use git pull, and to transfer or push files to GitHub we'll be using git push.
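Put together, the linking commands look like this (the URL is your own repository's HTTPS address from GitHub):

    $ git remote add origin https://github.com/<user>/git-github.git
    $ git pull origin master      # fetch the central repository's master branch
    $ git push origin master      # later, publish your local commits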
Let me show you how to do that. We are back in the local repository, and as you can see, I don't have any files yet, while in my central repository you can see I've got a README file. So the first thing I need to do is add the remote repository as my origin. I'll clear my screen first, and then use this command: git remote add origin, followed by the link of your central repository. Let me show you where to find this link: when you go back to your repository, you'll find this green button, 'Clone or download'; click it, and this is the HTTPS URL that you want. Just copy it to your clipboard, go back to your Git Bash, paste it, and press Enter. Your origin has been added successfully, since it's not showing any errors. Now we'll perform a git pull, meaning we'll fetch all the files from the central repository into my local repository. Just type the command git pull origin master, and you can see that it has fetched from the master branch into the master branch. Let us check whether all the files have been fetched: go back to our local repository, and there is the README file that was in my central repository, now in my local repository as well. So this is how you update your local repository from the central repository: you perform a git pull and it fetches the files from the central repository onto your local machine. So let us move ahead to the next operation.
move ahead to the next operation. Now, I've
6376.739 -> told you in order to sync repositories, you
also need to use a git push, but since we
6381.98 -> have not done anything in our local repository
now, so I'll perform the good get push later
6386.78 -> on after a show you all the operations and
we'll be doing a lot of things. So at the
6392.03 -> end I'll be performing the git push and push
all the changes into my central Repository.
6397.52 -> And actually that is how you should do that
the it's a good habit and it's a good practice
6404.07 -> if you're working with GitHub and get is that
when you start working. The first thing that
6408.639 -> you need to do is make a get bull to fetch
all the files from your central repository
6413.83 -> so that you could get updated with all the
changes that has been recently made by everyone
6419.25 -> else and after you're done working after you're
sure that your code is running then only make
6425.54 -> the get Bush so that everyone can see it you
should not make very frequent changes into
6430.679 -> the central repository because that might
interrupt the work of your other collaborators
6435.8 -> or other contributors as well. So let us move
ahead and see how we can make changes. So
6442.961 -> now get actually has a concept it has an intermediate
layer that resides between your workspace
6450.829 -> and your local repository. Now when you want
to commit changes or make changes in your
6455.989 -> local repository, you have to add those files
in the index first. So this is the layer that
6462.17 -> is between your workspace and local repository.
Now, if your files are not in the index, you
6467.59 -> cannot make commit organ app cannot make changes
into your local repository. So for that you
6473.099 -> have to use the command git add and you might
get confused that which all files are in the
6478.869 -> index and which all are not. So if you want
to see that you can use the command git status
6483.739 -> and after you have added the changes in the
index you can use the command git commit to
6490.31 -> make the changes in the local repository.
Now, let me tell you what is exactly a git
6495.94 -> commit everyone will be talking about get
coming. Committing changes when you're making
6501.11 -> changes. So let us just know what is a git
commit. So let's say that you have not made
6507.13 -> any kind of changes or this is your initial
project. So what a comet is is that it is
6513.44 -> kind of object which is actually a version
of your project. So let's say that you have
6520.17 -> made some changes and you have committed those
changes what your version control system will
6524.78 -> do is that it will create another commit object
and this is going to be your different version
6530.909 -> with the changes. So your commit snapshots
actually going to contain snapshots of the
6538.28 -> project which is actually changed. So this
is what come it is. So I'll just show you
6543.699 -> I'll just go ahead and show you how to commit
changes in your local repository. So we're
6549.39 -> back into our local repository. And so let's
just create some files here. So now if you're
6555.46 -> developing a project you might be just only
contributing your source code files into the
6560.06 -> central repository. So now I'm not just going
to tell you all about coding. So we're just
6564.56 -> going to create some text files write something
in that which is actually pretty much the
6568.949 -> same if you're working on a gold and you're
storing your source code in your repositories.
6573.85 -> So I just go ahead and create a simple text
file. Just name it Eddie one. Just write something
6583.639 -> so I'll just try first file. Save this file
close it. So now remember that even if I have
6595.38 -> created inside this repository, this is actually
showing my work space and it is not in my
6602.489 -> local repository now because I have not committed
it. So what I'm going to do is that I'm going
6609.15 -> to see what all files are in my index. But
before that I'll clear my screen because I
6615.58 -> don't like junk on my screen. Okay. So the
first thing that we're going to see is that
6621.09 -> what all files are added in my index and for
that I just told you we're going to use the
6625.25 -> command git status. So you can see that it
is calling anyone dot txt which we just have
6634.44 -> written. It is calling it an untracked file
now untracked files are those which are not
6640.469 -> added in the index yet. So this is newly created.
I have not added it explicitly into the index.
6646.34 -> So if I want to commit changes in Eddie one
dot txt, I will have to add it in the index.
6653.08 -> So for that I'll just use the command git
add and the name of your file which is a D1
6660.21 -> Dot txt. And it has been added. So now let
Now let us check the status again with git status, and you can see that under 'changes to be committed' is edu1.txt, because it's in the index, and now you can commit the changes to your local repository. To commit, the command to use is git commit -m, because whenever you commit you should always give a commit message, so that everyone can see who made which commits and what exactly changed. The commit message is for your benefit, so you can see what was changed; but even if you don't write one, the version control system still records the commit, and if you have configured your Git, it will always show which user committed the change. So, as I was saying about commit messages, I'll just write something like 'adding first commit' and press Enter. You can see one file changed, one insertion. So the changes are finally committed in my local repository.
If you want to see how Git actually stores all these commits, I'll show you that after I show you how to commit multiple files together. Let's go back into our local repo folder and create some more text files. I'll name one edu2 and create another named edu3, and write something in there, say 'second file'. Now let's go back to our Git Bash terminal and check git status. You can see it showing that edu2 and edu3 are not in my index, and if you remember, edu1 was already in the index. Actually, let me also go back and make a modification in edu1; I'll write 'modified one'. Checking git status again, you can see that edu1 shows as modified, and that there are untracked files, edu2 and edu3, because I haven't added them to my index yet. Now, Sebastian and Jamie, you have been asking me how to add multiple files together, so I'm going to add all these files at once; for that I use git add -A, with a capital A. Press Enter, and now look at git status: all the files have been added to the index at once. It's similar with commit. Now that all the files are in the index, I can also commit them all at once: you just have to write git commit -a, with a small a. So to commit all you use a small -a with git commit, whereas with git add, to add all the files you use a capital -A; just remember that difference, and add a message. And it's done: you can see three files have been changed.
Now let me show you how Git actually stores all these commits. You can perform an operation called git log, and you can see this is the 40-digit hexadecimal code I was talking about, the SHA-1 hash. You can see the date, and you've got the commit message we just provided, where I wrote 'adding three files together'; it shows the date, the exact time, and the author, which is me, because I've already configured Git with my name. So this is how you view commits, and this is how a version control system like Git stores all your commits.
Let us go back and see the next operation, which is how to do parallel, or non-linear, development. The first operation is branching. We've been talking about branching a lot, so let me tell you what exactly branching is and what you can do with it. You can think of a branch as a pointer to a commit. Say you've made changes in your main branch; remember, the main branch I told you about is called the master branch, and the master branch contains all the code. Now say you're working on the master branch, you've just made a change, and you've decided to add some new feature. You want to work on the new feature separately, without interfering with the master branch. To keep that work separate, you can create a branch from this commit, and let me show you how to create branches. Also, let me tell you that there are two kinds of branches: local branches and remote-tracking branches. Remote-tracking branches connect the branches in your local repository to your central repository, while local branches are ones you create that work only with the files in your local repository. I'll show you how to create branches, and then everything will be clear to you. So let us go back to our Git Bash.
Clear the screen. Right now we are on the master branch; this indicator shows which branch you are on. We're on the master branch, and we're going to create a different branch. For that you type the command git branch followed by a branch name; let us call it first-branch, and press Enter. Now you have created a branch, and this first branch will contain all the files that were in master, because it originated from the master branch. The prompt still shows that you are on the master branch, and to switch to the new branch you just created, you use the command git checkout; moving from one branch to another is called checking out in Git. So we use git checkout and the name of the branch: 'Switched to branch first-branch', and now you can see we are on the first branch and can start doing all our work there.
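The branch-and-switch steps, in command form (branch name as used in this demo):

    $ git branch first-branch      # create a new pointer to the current commit
    $ git checkout first-branch    # move your workspace onto it
    Switched to branch 'first-branch'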
So let us create some more files in the first branch. Let's go back; the folder now shows the workspace of my first branch. We'll create another text document, name it edu4, and write something, say 'first branch'; save it and go back. Now that we've made some changes, let us commit them all at once: let me use git add, and after that, if you remember, you perform a git commit. One file changed. Now remember that I made this edu4 change only in my first branch; it is not in my master branch. Since we are on the first branch, if I list all the files here, you can see you've got edu1, edu2, edu3, and the README, which were in the master branch, because the first branch originated from master; and apart from those it has a new file called edu4.txt. If you move back into the master branch and list the files there, you'll find there is no edu4, because I made the changes only in my first branch.
only made the changes in my first Branch.
7249.73 -> So what we have done now is that we have created
branches and we have also understood the purpose
7254.59 -> of creating branches because you're moving
on to the next topic. The next thing we'll
7260.28 -> see is merging so now if you're creating branches
and you are developing a new feature and you
7266.87 -> want to add that new feature, so you have
to do an operation called emerging emerging
7272.429 -> means combining the work of different branches
all together and it's very important that
7276.96 -> after you have branched off from a master
Branch always combine it back in at the end
7282.76 -> after you're done working with the branch
always remember to merge it back in so now
7287.88 -> we have created branches. Let us see and we
have made changes in our Branch like we have
7292.98 -> added edu for and if you want to combine that
back in our Master Branch because like I told
7297.59 -> you your master Branch will always contain
your production quality. Code so let us know
7302.75 -> actually merge start merging those files because
I've already created branches. It's time that
7308.06 -> we merge them. So we are back in my terminal.
And what do we need to do is merge those changes
7316.849 -> and if you remember that we've got a different
file in my first branch, which is the ending
7320.969 -> for and it's not there in the master Branch
yet. So what I want to do is merge that Branch
7326.29 -> into my master Branch so for that I'll use
a command called git merge and the name of
7333.829 -> my branch and there is a very important thing
to remember when you're merging is that you
7339.05 -> want to merge the work of your first Branch
into master. So you want Master to be the
7345.03 -> destination. So whenever you're merging you
have to remember that you were always checked
7350.05 -> out in the destination Branch some already
checked out in the master Branch, so I don't
7355.52 -> have to change it back. So I'll just use the
command git merge and the name of the branch
7360.57 -> which word you want to merge it into and you
have to provide the name of the branch whose
7365.199 -> work you want merged into the current branch
that you were checked out. So for now, I've
7369.829 -> just got one branch, which is called the first
branch. and and so you can see that one file
7377.679 -> chain. Something has been added. We are in
the master bounce right now. So now let us
7382.51 -> list out all the files in the master branch
and there you see now you have edu for DOT
7387.88 -> txt, which was not there before. I'm merged
it. So this is what merging does now you have
7395.42 -> to remember that your first branch is still
separate. Now, if you want to go back into
7401.63 -> your first branch and modify some changes
again in the first branch and keep it there
7406.36 -> you can do that. It will not actually affect
the master Branch until you merge it. So let
7411.949 -> me just show you an example. So just go back
to my first branch. So now let us make changes
7424.139 -> and add you for. I'll just ride modified in
first branch. We'll go back and we'll just
7441.19 -> commit all these changes and I'll just use
git. So now remember that the git commit all
7463.449 -> is also performed for another purpose now.
It doesn't only actually commit all the uncommitted
7469.23 -> file at once if your files are in the index
and you have just modified it also does the
7474.61 -> job of adding it to the index Again by modifying
it and then committing it but it won't work.
7481.89 -> If you have never added that file in the index
now Eddie for was already in the index now
7486.92 -> after modifying it I have not explicitly added
in the index. And if I'm using git commit
7492.099 -> all it will explicitly add it in the index
bit will because it was already a track file
7498.11 -> and then it will commit the changes also in
my local Repository. So you see I didn't use
7504.909 -> the command git add. I just did it with Git
commit because it was already attract file.
7510.09 -> So one file has been changed. So now if you
just just cat it and you can see that it's
7522.15 -> different. It shows the modification that
we have done, which is modified it first Branch
7527.119 -> now, let's just go back to my master branch.
Now remember that I have not emerged it yet
7538.38 -> and my master Branch also contains a copy
of edu for and let's see what this copy actually
7544.92 -> contains. See you see that the modification
has not affected in the master Branch because
7558.3 -> I have only done the modification in the first
Branch. So the copy that is in the master
7564.599 -> branch has not it's not the modified copy
because I have not emerged it yet. So it's
7569.679 -> very important to remember that if you actually
want all the changes that you have made in
7574.349 -> the first Branch all the things that you have
developed in the Anu branch that you have
7577.76 -> created make sure that you merge it in don't
forget to merge or else it will not show any
7584.08 -> kind of modifications. So I hope that if understood
why emerging is important how to actually
7592.829 -> merge different branches together. So we'll
just move on to the next topic and which is
7599.8 -> rebasing now when you say rebasing rebasing
is also another kind of merging. So the first
7608.84 -> thing that you need to understand about vbase
is that it actually solves the same problem
7614.59 -> as of git merge and both of these commands
are designed to integrate changes from one
7620.789 -> branch into another. It's just that they just
do the same task in a different way. Now what
7627.01 -> rebasing means if you see the workflow diagram
here is that you've got your master branch
7631.469 -> and you've got a new Branch now when you're
rebasing it what it does if you see in this
7637.349 -> workflow diagram here is that if God a new
branch and your master branch and when your
7642.409 -> rebasing it instead of creating a Comet which
will have two parent commits. What rebasing
7648.949 -> does is that it actually places the entire
commit history of your branch onto the tip
7654.6 -> of the master. Now you would ask me. Why should
we do that? Like what is the use of that?
7660.619 -> Well, the major benefit of using a re basis
that you get a much cleaner project history.
7667.87 -> So I hope you've understood the concept of
rebase. So let me just show you how to actually
7674.139 -> do rebasing. Okay. So what we're going to
do is that we're going to do some more work
7680.14 -> in our branch and after that will be base
our branch on to muster. So we'll just go
7686.53 -> back to our branch. You skip check out. first
branch and now we're going to create some
7701.17 -> more files here. same it at your five and
let's say I do six. So we're going to write
7717.94 -> some random stuff. I'd say we're saying welcome
to Ed, Eureka. one all right the same thing
7730.449 -> again that Sarah come two so we have created
this and now we're going back to our get bash
7743.159 -> and we're going to add all these new files
because now we need to add because it we cannot
7749.05 -> do it with just get commit all because these
are untracked files. This is the files that
7754.05 -> I've just created right now. So I'm using
And now we're going to commit. And it has
7780.17 -> been committed. So now if you just see all
the files, you can see any one two, three,
7788.61 -> four five six and read me and if you go back
to the master. And if you just list out all
7805.25 -> the files and master it only has up to four
the five and six are still in my first brush
7811.36 -> and I have not emerged it yet. And I'm not
going to use git merge right now. I'm going
7816.719 -> to use rebase this time instead of using git
merge and this you'll see that this will actually
7822.46 -> do the same thing. So for that you just have
to use the command. So let us go back to our
7829.44 -> first branch. Okay did a typing error? Irst
BR a MCH. Okay switch the first branch and
7857.05 -> now we're going to use the command git rebase
master. Now it is showing that my current
7867.429 -> Branch first branch is up to date just because
because whatever is in the master branch is
7872.51 -> already there in my first branch and they
were no new files to be added. So that is
7879.27 -> the thing. So, but if you want to do it in
the reverse way, I'll show you what will happen.
7885.17 -> So let's just go and check out let's do rebasing
kit rebase first branch. So now what happened
7903.61 -> is that all the work of first branch has been
attached to the master branch and it has been
7910.55 -> done linearly. There was no new set of comments.
So now if you see all the files are the master
7915.929 -> Branch, you'll find that you've got a new
five and Ed U6 as well, which was in the first
7922.119 -> Branch. So basically rebasing has merged all
the work of my first Branch into the master,
7928.369 -> but the only thing that happened is that it
happened in a linear way all the commits that
7933.159 -> we did in first Branch actually got rid dashed
to the head in the master. So this was all
7938.93 -> about nonlinear development. I have told you
about branching merging rebasing we've made
7944.98 -> changes with pull changes committed changes,
but I remember that I haven't shown you how
7950.57 -> to push changes. So since we're done working
in our local repository now, we have made
7956.099 -> are all final changes and now we want it to
contribute in our Central Repository. Tree.
7961.5 -> So for that we're going to use git push and
I'm going to show you how to do a get Bush
7966.429 -> right now. Before I go ahead to explain you
a get Bush. You have to know that when you
7977.59 -> are actually setting up your repository. If
you remember your GitHub repository as a public
7982.42 -> repository, it means that you're giving a
read access to everyone else in the GitHub
7986.64 -> community. So everyone else can clone or download
your repository files. So when you're pushing
7992.059 -> changes in a repository, you have to know
that you need to have certain access rights
7996.699 -> because it is the central repository. This
is where you're storing your actual code.
8001.84 -> So you don't want other people to interfere
in it by pushing wrong codes or something.
We're going to connect to my central repository via SSH in order to push changes into it. At the beginning, when I was trying to make this connection with SSH, I faced certain problems, so let me go back to the repository and show you. When you click this button, you see that this is your HTTPS URL, the one we used to connect to the central repository; if you want to use SSH, this is your SSH connection URL. To connect with SSH, you need to generate a public SSH key and then simply add that key to your GitHub account; after that you can start pushing changes. So first we'll generate the SSH public key, using the command ssh-keygen. Under my user folder there is already an SSH key, so it asks whether I want to overwrite it; yes. My SSH key has been generated and saved here; to see it, I just use cat and copy it. This is my public SSH key. To add it, I go back to my GitHub account, open Settings, and click the option 'SSH and GPG keys'. I already have two SSH keys added, and I want to add my new one, so I click the button 'New SSH key'. Make sure you give it a name; to keep things in order, since I've named the other ones ssh1 and ssh2, I'll call this one ssh3. Then just paste your SSH key in here: copy the key, paste it, and click the 'Add SSH key' button.
now well the first thing you need to do is
8159.63 -> clear the screen. And now what you need to
do is you need to use this command as the
8164.51 -> search - d And your SSI at URL that we use
which is get at the rate github.com. And enter
8175.449 -> so my SSH authentication has been successfully
done. So I'll go back to my GitHub account.
8182.48 -> And if I refresh this you can see that the
key is green. It means that it has been properly
8188.57 -> authenticated and now I'm ready to push changes
on to the central repository. So we'll just
8195.08 -> start doing it. So let me just tell you one
more thing that if you are developing something
8204.179 -> in your local repository and you have done
it in a particular branch in your repository
8209.929 -> and let's say that you don't want to push
this changes into the master branch of your
8215.849 -> central report or your GitHub repository.
So let's say that whatever work that you have
8222.01 -> done. It will stay in a separate branch in
your GitHub repository so that it does not
8227.96 -> interfere with the master branch and everyone
can identify that it is actually your branch
8232.519 -> and you have created it and this Branch only
contains your work. So for that let me just
8240.219 -> go to the GitHub repository and show you something.
Let's go to the repositories. And this is
8248.269 -> the repository that I have just created today.
So when you go in the repository, you can
8254.229 -> see that I have only got one branch here,
which is the master branch. And if I want
8259.58 -> to create branches I can create it here, but
I would advise you to create all branches
8264.58 -> from your command line or from you get bash
only in your central repository as well. So
8269.98 -> let us go back in our branch. So now what
I want is that I want all the work of the
8290.08 -> first branch in my local repository to make
a new branch in the central repository and
8296.45 -> that branch in my central repository will
contain all the files that is in the first
8301.12 -> branch of my local repository through so for
that I'll just perform. get Push the name
8310.229 -> of my remote which is origin and first branch.
And you can see that it has pushed all the
8322.599 -> changes. So let us verify. Let us go back
to our repository and let's refresh it. So
8332.42 -> this is the master branch and you can see
that it has created another branch, which
8337.33 -> is called the first Branch because I have
pushed all the files from my first Branch
8344.22 -> into an and I have created a new Branch or
first Branch as similar to my first branch
8350.109 -> in my local repository here in GitHub. So
now if we go to Branch you can see that there
8356.059 -> is not only a single Master we have also got
another branch, which is called the first
8360.87 -> Branch now if you want to check out this brand
just click on it. And you can see it has all
8366.859 -> the files with all the combat logs here in
this Branch. So this is how you push changes
8373.559 -> and if you want to push all the change in
to master you can do the same thing. Let us
8383.309 -> go back to our Branch master. And we're going
to perform a git push here. But only what
8395.621 -> we're going to do this time is we're going
to push all the files into the master branch
8400 -> and my central repository. So for that I'll
just use this get bush. Okay, so the push
8411.6 -> operation is done. And if you go back here
and if you go back to master, you can see
8417.85 -> that all the files that were in the master
branch in. My local repo has been added into
8421.99 -> the master branch of my central Ripple also.
So this is how you make changes and from your
8428.98 -> central repository to look repository. So
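The two pushes from this demo, side by side (remote name origin as configured earlier):

    $ git push origin first-branch   # publish the branch; GitHub creates it remotely
    $ git push origin master         # publish master's commits to the central repository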
So this is exactly what you do with Git. If I summarize everything I just showed you about git add, committing, pushing, and pulling, this is what is happening: this is your local repository, this is your working directory, and the staging area is our index, the intermediate layer between your workspace and your local repository. You add your files to the staging area, or index, with git add, and you commit those changes into your local repository with git commit. If you want to push all of this to the remote, or central, repository where everyone can see it, you use git push; similarly, to pull or fetch those files from your GitHub repository, you use git pull. And for branches: to move from one branch to another you use git checkout, and to combine the work of different branches you use git merge. That is the whole of what you do when performing all these operations; I hope it is clear to everyone.
what has been changed and modifications so
8507.18 -> So just clear the screen and okay. So let
us go back to our terminal and just for experimentation
8514.66 -> proper just to show you that how we can actually
get revert back to our previous changes. So
8520.35 -> now you might not want to change everything
that you made an Eddie wanted to do a duet
8525.19 -> for or some other files that we just created.
So let's just go and create a new file modify
8530.47 -> it two times and revert back to the previous
version just for demonstration purpose. So
8535.52 -> I'm just going to create a new text file.
Let's call it revert. And now let us just
8547.51 -> type something. Hello. Let's just keep it
that simple. Just save it and go back. We'll
8560.69 -> add this file. then commit this let's say
just call it revert once just remember that
8577.85 -> this is the first comment that I made with
revert one enter. So it has been changed.
Now let's go back and modify this. Since I've committed this file, Git has stored a version of my revert text file with the text 'Hello'. I'll go back and change something here; let us make it 'Hello there'. Save it, go back to our Bash, and commit the file again, because I've made changes and I want a different version of the revert file. So we commit again: I use git commit -a with the message 'revert two', press Enter, and it's done. Now, if you look at the file, you can see I've modified it: it says 'Hello there'. Say I want to go back to my previous version, to when I had just 'Hello'. For that I check my git log. I can see here the commit hash from when I first committed revert, meaning this is version one of my revert file. What you need to do is copy this commit hash; you can copy just the first eight hexadecimal digits and that will be enough, or copy the whole thing. I'll clear the screen first. Then you use the command git checkout, the hexadecimal digits you just copied, and the name of your file, which is revert.txt. So the command is git checkout, the first eight digits of the commit hash, and the file name revert.txt.
to name the file, which is revert Dot txt.
8754.271 -> So now when you just see this file, you have
gone back to the previous commit. And now
8759.91 -> when you just display this file, you can see
that now I've only got just hello. It means
8765.22 -> that I have rolled back to the previous version
because I have used the commit hash when I
8771.109 -> initially committed with the first change.
So this is how you revert back to a previous
8776.35 -> version. So this is what we have learned today
in today's tutorial. We have understood. What
8781.569 -> is Version Control and why do we need version
controls? And we've also learned about the
8786.62 -> different version control tools. And in that
we have primarily focused on get and we have
8792.58 -> learned all about git and GitHub about how
to create repositories and perform some kind
8797.95 -> of operations and commands in order to push
pull and move files from one repository to
8804.47 -> another we've also studied about the features
of git and we've also seen a case study about
8809.72 -> how Dominion Enterprises which is one of the
biggest public In company who makes very popular
8815.85 -> websites that we have got right now. We have
seen how they have used GitHub as well. Hello
Hello everyone, this is your instructor from Edureka. In today's session we will focus on what Jenkins is. So without any further ado, let us move forward and have a look at the agenda for today. First we'll see why we need continuous integration and what problems industries were facing before continuous integration was introduced; after that we'll understand what exactly continuous integration is and look at various continuous integration tools. Among those tools we'll focus on Jenkins, and we'll also look at the Jenkins distributed architecture. Finally, in our hands-on part, we'll prepare a build pipeline using Jenkins, and I'll also show you how to add Jenkins slaves. Now I'll move forward, and we'll see why we need continuous integration.
This is the process before continuous integration. Over here, as you can see, there's a group of developers making changes to the source code that is kept in the source code repository; this repository can be a Git repository, a Subversion repository, etc. Once the entire source code of the application is written, it is built by tools like Ant, Maven, etc. After that, the built application is deployed onto the test server for testing; if there's any bug in the code, developers are notified through the feedback loop, as you can see on the screen, and if there are no bugs, the application is deployed onto the production server for release. I know you must be thinking: what is the problem with this process? It looks fine: you first write the code, then you build it, then you test it, and finally you deploy.
there in this process one by one. So this
8909.45 -> is the first problem guys as you can see that
there is a developer who's waiting for a long
8913.81 -> time in order to get the test results as first
the entire source code of the application
8917.91 -> will be built and then only it will be deployed
onto the test server for testing. It takes
8922.069 -> a lot of time so developers have to For a
long time in order to get the test results.
8926.92 -> The second problem is since the entire source
code of the application is first build and
8931.819 -> then it is tested. So if there's any bug in
the code developers have to go through the
8936.131 -> entire source code of the application as you
can see that there is a frustrated developer
8941.25 -> because he has written a code for an application
which was built successfully but in testing
8945.25 -> there were certain bugs in that so he has
to check the entire source code of the application
8949.39 -> in order to remove that bug which takes a
lot of time so basically locating and fixing
8953.899 -> of bugs was very time-consuming. So I hope
you are clear with the two problems that we
8958.29 -> have just discussed now, we'll move forward
and we'll see two more problems that were
8961.92 -> there before continuous integration. So the
third problem was software delivery process
8965.99 -> was slow developers were actually wasting
a lot of time in locating and fixing of birds
8970.5 -> instead of building new applications as we
just saw that locating and fixing of bugs
8974.939 -> was a very time-consuming task due to which
developers are not able to focus on building
8978.939 -> new applications. You can relate that to the
diagram which is present in front of your
8983.63 -> screen as Always a lot of time in watching
TV doing social media similarly developers
8988.81 -> were also basic a lot of time in fixing bugs.
All right. So let us have a look at the fourth
8993.689 -> problem that is continuous feedback continues
feedback related to things like build failures
8998.51 -> test status Etc was not present due to which
the developers were unaware of how their application
9003.78 -> is doing the process that you showed before
continuous integration. There was a feedback
9008.12 -> loop present. So what I will do I will go
back to that particular diagram and I'll try
9011.77 -> to explain you from there. So the feedback
loop is here when the entire source code of
9015.569 -> the application is built and tested then only
the developers are notified about the bugs
9020.03 -> in the code. All right, when we talk about
Cantonese feedback suppose this developer
9024.07 -> that I'm highlighting makes any commit to
the source code that is present in the source
9027.72 -> code repository. And at that time the code
should be pulled and it should be built and
9032.37 -> the moment it is built the developer should
be notified about the build status and then
9036.2 -> once it is built successfully it is then deployed
onto the test server for testing at that time.
9041.27 -> Whatever the test data says the developer
should be notified about it. Similarly, if
9044.83 -> this developer makes any commit to the source
code at that time. The coach should be pulled.
9048.85 -> It should be built and the build status should
be notified the developers after that. It
9052.96 -> should be deployed onto the test server for
testing and the test results should also be
9056.26 -> given to the developers. So I hope you all
are clear. What is the difference between
9060.11 -> continents feedback and feedback? So incontinence
feedback you're getting the feedback on the
9064.33 -> run. So we'll move forward and we'll see how
exactly continuous integration addresses these
9069.01 -> problems. Let us see how exactly continuous
integration is resolving the issues that we
So what happens here: there are multiple developers, and if any one of them makes a commit to the source code in the source code repository, the code is pulled, built, tested, and deployed. What advantage do we get? First of all, every commit made to the source code is built and tested, so if there is any bug in the code, developers know exactly where the bug is, or which commit caused the error; they don't need to go through the entire source code of the application, only the particular commit that introduced the bug. In that way, locating and fixing bugs becomes very easy. Next, remember the first problem, where developers had to wait a long time for test results: here, every commit made to the source code is tested, so they don't need to wait long for test results. As for the third problem, the slow software delivery process, it is completely removed with this approach: developers are not spending their time locating and fixing bugs, because that no longer takes long, as we just discussed; instead they focus on building new applications. The fourth problem was that continuous feedback was not present, but here, as you can see, developers are getting feedback on the run about build status, test results, etc.; they are continuously notified about how their application is doing.
9148.8 -> So I will move forward now, I'll compare the
two scenarios that is before continuous integration
9153.87 -> and after continuous integration now over
here what you can see is before continuous
9157.42 -> integration as we just saw first the source
code of the application will be built the
9161.46 -> entire source code then only it will be tested.
But when we talk about after continuous integration
9166.279 -> every commit, whatever change you made in the
source code, even the most minute changes
9171.189 -> you committed to the source code, at that
time only, the code will be pulled, it will
9174.85 -> be built, and then it will be tested. Developers used
to have to wait for a long time in order to get
9180.12 -> the test results, as we just saw, because the
entire source code would first be built and then
9183.069 -> it would be deployed onto the test server.
But when we talk about continuous integration,
9186.99 -> the test result of every commit will be given
to the developers and when we talk about feedback,
9192.27 -> there was no feedback that was present earlier,
but in continuous integration feedback is
9196.439 -> present for every commit made to the source
code. You will be provided with the relevant
9200.38 -> result. All right, so now let us move forward
and we'll see what exactly is continuous integration
9205.9 -> now in continuous integration process developers
are required to make frequent commits to the
9210.979 -> source code. They have to frequently make
changes in the source code and because of
9215.5 -> that, any change made in the source code
will be reported by the continuous integration
9219.7 -> server, and then that code will be built, or
you can say it will be compiled. All right,
9224.19 -> now, depending on the continuous integration tool
that you are using, or depending on the needs
9229.99 -> of your organization, it will also be deployed
onto the test server for testing and once
9233.031 -> testing is done. It will also be deployed
onto the production server for release and
9236.66 -> developers are continuously getting the feedback
about their application on the run. So I hope
9242.06 -> I'm clear with this particular process. So
we'll see the importance of continuous integration
9249.52 -> with the help of a case study of Nokia. So
Nokia adopted a process called nightly build
9253.811 -> nightly build can be considered as a predecessor
to continuous integration. Let me tell you
9256.67 -> why. All right. So over here as you can see
that there are developers who are
9261.14 -> committing changes to the source code that
is present in a shared repository. All right,
9265.96 -> and then what happens in the night? There
is a build server. This build server will
9269.66 -> poll the shared repository for changes, and
then it'll pull that code and prepare a build.
9274.11 -> All right. So in that way whatever commits
are made throughout the day are compiled in
9278.42 -> the night. So obviously this process is better
than writing the entire source code of the
9283.06 -> application and then building it. But again,
if there is any bug in the code, developers
9288.56 -> have to check all the commits that have been
made throughout the day, so it is not the ideal
9292.81 -> way of doing things, because you are again
wasting a lot of time in locating and fixing
9296.32 -> bugs. All right, so I want answers from
you all guys. What can be the solution to
9300.47 -> this problem? How can Nokia address this particular
problem? Since we have seen what exactly continuous
9305.149 -> integration is and why we need it, now, without
wasting any time. I'll move forward and I'll
9309.55 -> show you how Nokia solved this problem. So
Nokia adopted continuous integration as a
9314.689 -> solution in which what happens developers
commit changes to the source code in a shared
9319.41 -> repository. All right, and then what happens
9324.67 -> is, there is a continuous integration server. This continuous
integration server polls the repository for
9328.8 -> changes; if it finds that there is any change
made in the source code, it will pull the
9332.84 -> code and compile it. So what is happening:
9332.84 -> the moment you commit a change in the source
code continuous integration server will pull
9336.359 -> that and prepare a build. So if there is any
bug in the code, developers know which commit
9341.96 -> is causing that error. All right, so they
can go through that particular commit in
9345.78 -> order to fix the bug. So in this way, locating
and fixing of bugs was very easy. But we saw
9351.439 -> that in nightly builds, if there is any bug,
they have to check all the commits that have
9354.97 -> been made throughout the day. So with the
help of continuous integration, they know
9358.7 -> which commit is causing that error, so locating
and fixing of bugs didn't take a lot of time.
9363.689 -> Okay before I move forward, let me give you
a quick recap of what we have discussed till
9367.26 -> now first. We saw why we need continuous integration.
What were the problems that industries were
9371.99 -> facing before continuous integration was introduced
after that. We saw how continuous integration
9376.49 -> addresses those problems and we understood
what exactly continuous integration is. And
9381.11 -> then in order to understand the importance
of continuous integration, we saw case study
9385.161 -> of Nokia in which they shifted from nightly
build to continuous integration. So we'll
9389.979 -> move forward and we'll see various continuous
integration tools available in the market.
9394.11 -> These are the four most widely used continuous
integration tools. First is Jenkins on which
9399.26 -> we will focus in today's session, then Buildbot,
Travis, and Bamboo. Right, let us move forward
and see what exactly Jenkins is. So Jenkins
is a continuous integration tool. It is an
open source tool, and it is written in Java.
9415.38 -> How does it achieve continuous integration? It
does that with the help of plugins. Jenkins
9419.5 -> has well over a thousand plugins. And that
9419.5 -> is the major reason why we are focusing on
Jenkins. Let me tell you guys it is the most
9423.79 -> widely accepted tool for continuous integration
because of its flexibility and the amount
9428.59 -> of plugins that it supports. So as you can
see from the diagram itself that it is supporting
9433.43 -> various development, deployment, and testing technologies,
for example Git, Maven, Selenium, Puppet, Ansible,
9439.92 -> Nagios. All right. So if you want to integrate
a particular tool, you need to make sure the
9444.5 -> plug-in for that tool is installed in your
Jenkins. Now, for a better understanding of Jenkins,
9449.43 -> Let me show you the Jenkins dashboard. I've
installed Jenkins in my Ubuntu box. So if
9453.99 -> you want to learn how to install Jenkins,
you can refer the Jenkins installation video.
9457.58 -> So this is a Jenkins dashboard guys, as you
can see, there are currently no jobs; because
9462 -> of that, this section is empty. Otherwise, it will
give you the status of all your build jobs
9465.899 -> over here. Now when you click on new item,
you can actually start a new project all over
9470.75 -> from scratch. All right. Now, let us go back
to our slides. Let us move forward and see
9475.729 -> what are the various categories of plugins
as I told you earlier. The way Jenkins
9480 -> achieves continuous integration is with the help
of plugins. All right, and Jenkins supports
9483.37 -> well over a thousand plugins and that is the
major reason why Jenkins is so popular nowadays.
9488.359 -> So the plug-in categorization is there on
your screen but there are certain plugins
9492.14 -> for testing, like JUnit, Selenium, etc. When
we talk about reports, we have multiple plugins,
9497.92 -> for example HTML Publisher. For notifications
also, we have many plugins, and I've written
9502.46 -> one of them: the Jenkins build notification
plug-in. When we talk about deployment, we
9506.8 -> have plugins like the Deploy plugin, and when we talk
about compiling, we have plugins like Maven
9511.13 -> and so on. Alright, so let us move forward and
see how to actually install a plug-in on the
9517.03 -> same Ubuntu box where my Jenkins is installed.
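(As a side note: besides the Manage Plugins UI shown next, plugins can also be installed from a terminal with the Jenkins CLI client. This is a minimal sketch, assuming Jenkins is reachable at localhost:8080 and that you have an admin API token; "htmlpublisher" is the plugin id of the HTML Publisher plugin used in this demo.

    # download the CLI client from the running Jenkins master
    wget http://localhost:8080/jnlpJars/jenkins-cli.jar
    # install the HTML Publisher plugin, then restart so Jenkins picks it up
    java -jar jenkins-cli.jar -s http://localhost:8080/ \
      -auth admin:API_TOKEN install-plugin htmlpublisher -restart

)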
So over here, in order to install a plug-in,
9521.49 -> what you need to do is click on the
Manage Jenkins option, and over here, as you
9526.1 -> can see that there's an option called manage
plugins. Just click over there. As you can
9530.6 -> see that it has certain updates for the existing
plugins, which I have already installed. Right
9535.62 -> then there's an option called installed where
you'll get the list of plugins that are there
9539.729 -> in your system. All right, and at the same
time, there's an option called available.
9543.46 -> It will give you all the plugins that are
available with Jenkins. Alright, so now what
9547.229 -> I will do I will go ahead and install a plug-in
that is called HTML publisher. So it's very
9552.55 -> easy. What you need to do is just type the
9552.55 -> name of the plug-in. I'll type HTML Publisher
9558.62 -> plugin, just click over there and install
without restart. So it is now installing that
9564.45 -> plug-in we need to wait for some time. So
it has now successfully installed now, let
9570.14 -> us go back to our Jenkins dashboard. So we
have understood what exactly Jenkins is and
9575.08 -> we have seen various Jenkins plugins as well.
So now is the time to understand Jenkins with
9580.56 -> an example. We'll see a general workflow of how
Jenkins can be used. All right. So let us
9584.26 -> go back to our slides. So now as I have told
you earlier as well, we'll see Jenkins example,
9588.811 -> so let us move forward. So what is
happening: developers are committing changes
9593.27 -> to the source code and that source code is
present in a shared repository. It can be
9597.67 -> a git repository subversion repository or
any other repository. All right. Now, let
9602.14 -> us move forward and see what happens now. Now,
over here, what is happening: there's a Jenkins
9608.47 -> server. It is actually polling the source
code repository at regular intervals to see
9611.061 -> if any developer has made any commit to the
source code. If there is a change in the source
9615.41 -> code it will pull the code and we'll prepare
a build and at the same time developers will
9620.46 -> be notified about the build results now, let
us execute this practically. All right, so
9624.66 -> I will again go back to my Jenkins dashboard,
which is there in my Ubuntu box. What
9629.03 -> I'm going to do is I'm going to create
a new item, basically a new project. Now,
9634.05 -> over here, I'll give a suitable name to my project;
you can use any name that you want. I'll just
9639.14 -> write compile. And now I click on freestyle
project. The reason for doing that is free-style
9644.89 -> project is the most configurable and the flexible
option. It is easier to set up as well. And
9649.88 -> at the same time many of the options that
we configure here are present in other build
9653.51 -> jobs as well move forward with freestyle project
and I'll click on ok now over here what I'll
9658.319 -> do, I'll go to the source code management
Tab and it will ask you for what type of source
9662.88 -> code management you want. I'll click on get
and over here. You need to type your repository
9668.189 -> URL. In my case, it is https://github.com/ your
username, slash the name of your repository,
9684.6 -> and finally .git. All right, now in the build
section, you have multiple options. All right.
9690.64 -> So what I will do I click on invoke top-level
Maven targets. So now over here, let me tell
9695.3 -> you guys, Maven has a build life cycle,
and that build life cycle is made up of multiple
9700.311 -> build phases. Typically, the sequence of build
phases will be: first you validate the code, then
9705.08 -> you compile it. Then you test it. Then you
perform unit test by using suitable unit testing
9710.27 -> framework. Then you package your code in a
distributable format like a JAR, then you
9715.98 -> verify it and you can actually install any
package that you want with the help of install
9721.109 -> build phase and then you can deploy it in
the production environment for release. So
9725.52 -> I hope you have understood the maven build
life cycle. So, in the Goals tab, what I
9730.06 -> need to do is compile the code that
is present in the GitHub account. So for that,
9734.229 -> in the Goals tab I need to write compile.
So this will trigger the compile build phase
9739.189 -> of Maven. Now, that's it guys. That's it. Just
click on Apply and Save. Now, on the left hand
9745.029 -> side, there's an option called Build Now; to
trigger the build, just click over there and
9749.55 -> you will be able to see the build starting.
In order to see the console output, you can
9754.541 -> click on that build and you see the console
output. So it has validated the GitHub account
9759.8 -> and it is now starting to compile that code
which is there in the GitHub account. So we
9765.51 -> have successfully compiled the code that was
present in the GitHub account.
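(Under the hood, this Jenkins job is just invoking a Maven goal on the checked-out workspace. A rough command-line equivalent, assuming Git and Maven are installed locally; the repository URL is a placeholder, not the demo's actual account:

    # fetch the code Jenkins would have pulled
    git clone https://github.com/<username>/<repository>.git
    cd <repository>
    # invoke the compile phase; Maven runs the earlier
    # lifecycle phase (validate) automatically first
    mvn compile

)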
9769.41 -> Now, let us go back to the Jenkins dashboard. Now, in this
Jenkins dashboard, you can see that my project
9774.68 -> is displayed over here. And as you can see
the blue color of the ball indicates that
9778.92 -> it has been successfully executed.
All right. Now, let us go back to the slides
9784.109 -> now, let us move forward and see what happens.
Once you have compile your code. Now the code
9789.31 -> that you have compiled you need to test it.
All right. So what Jenkins will do it will
9793.41 -> deploy the code onto the test server for testing
and at the same time developers will be notified
9798.04 -> about the test results as well. So let us
again execute this practically, I'll go back
9802.63 -> to my Ubuntu box again. So in the GitHub repository,
the test cases are already defined. Alright,
9807.229 -> so we are going to analyze those test cases
with the help of Maven. So let me tell you
9812.021 -> how to do it. We'll again go and click on new
item, and over here we'll give any suitable name
9816.62 -> to the project. I'll just type test. I'll again
use freestyle project, for the reason that
9822.26 -> I've told you earlier, click on OK, and then the
source code management tab. Now, before applying
9829.029 -> unit testing on the code that I've compiled,
I need to first review it with the help of the
9833.83 -> PMD plug-in. I'll do that. So for that, I will
again click on new item, and over here I
9839.189 -> need to type the name of the project. So I'll
just type it as code underscore review. Freestyle
9845.02 -> project, click OK. Now, in the source code management
tab, I will again choose Git and give my
9853.62 -> repository URL: https://github.com/ username
/ name of the repository .git. All right, now
9867.29 -> scroll down. Now, in the build tab, I'm going
to click over there. And again, I will click
9872.069 -> on invoke top-level Maven targets now in order
to review the code, I am going to use the
9876.75 -> metrics profile of Maven. So how to do that?
Let me tell you: you need to type here -P
9889.48 -> metrics pmd:pmd. All right, and this will
actually produce a PMD report that contains
9893.76 -> all the warnings and errors. Now, in the post-build
actions tab, I click on publish PMD analysis
9899.851 -> result. That's all; click on Apply and Save,
then finally click on Build Now. And let us
9908.64 -> see the console output. So it has now pulled
the code from the GitHub account, and it is performing
9914.25 -> the code review. So it has successfully reviewed
the code.
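(For reference, the same report can be produced outside Jenkins with the maven-pmd-plugin; pmd:pmd is a standard goal of that plugin, while "metrics" is just whatever profile the demo project happens to define, so treat this as a sketch:

    # run PMD static analysis with the project's "metrics" profile active
    mvn -P metrics pmd:pmd
    # warnings land in target/pmd.xml, with an HTML page under target/site/

)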
9919.71 -> Now, let us go back to the project. Over here, you can see an option called PMD
warnings just click over there and it will
9924.87 -> display all the warnings that are there present
9929.39 -> in your code. So this is the PMD analysis report.
Over here, as you can see, there are a total of
9933.899 -> 11 warnings, and you can find the details here
as well: you have the package, then
you have the categories, then the types of
9939.22 -> warnings which are there, like, for example,
empty catch blocks and empty finally blocks. Now,
9943.81 -> you have one more tab called warnings over
there. You can find where the warning is present
9947.899 -> the filename package. All right, then you
can find all the details in the details tab.
9952 -> It will actually tell you where the warning
is present in your code. All right. Now, let
9956.52 -> us go back to the Jenkins dashboard and now
we'll perform unit tests on the code that
9961.06 -> we have compiled for that again. I'll click
on new item and I'll give a name to this project.
9966.439 -> I will just type test. And I click on freestyle
project. Okay. Now in the source code management
tab, I'll click on Git. Now, over here, I'll
9989.01 -> type the repository URL: https://github.com/ username
/ name of the repository .git. And in the
build option I click on again invoke top-level
9999.88 -> Maven targets now over here as I've told you
earlier as well that Maven build life cycle
10004.51 -> has multiple build phases: first it will
validate the code, compile it, then test it, package it,
10010.55 -> then verify; then it will install if certain
packages are required, and then finally it
will deploy it. Alright. So one of the phases
10019.85 -> is actually test, which performs unit testing
using a suitable unit testing framework.
The test cases are already defined in my GitHub
10024.28 -> account. So, to analyze the test cases, in the
Goals section I need to write test. All right,
10028.15 -> and it will invoke the test phase of the Maven
build life cycle. All right, so just click
10034.191 -> on Apply and Save, and finally click on Build Now.
10103.609 -> In order to see the console output, click here. So it has pulled the
code from the GitHub account, and now it is
10109.35 -> performing unit tests. So we have successfully
performed testing on that code.
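(Again, the Jenkins job is simply running a Maven goal; locally the equivalent would be:

    # invoke the test phase of the Maven build life cycle;
    # validate and compile run automatically before it,
    # and the Surefire plugin executes the JUnit test cases
    mvn test

)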
10115.59 -> Now I will go back to my Jenkins dashboard, where as you can
see, all the three build jobs that we have
10119.98 -> executed are successful, which is indicated with
the help of the blue colored ball. All right.
10124.75 -> Now, let us go back to our slides. So we have
successfully performed unit tests on the
10129.09 -> test cases that were there on the GitHub account
now, we'll move forward and see what happens
10133.93 -> after that. Now finally, you can deploy that
build application or to the production environment
10138.16 -> for release, but when you have one single
Jenkins server, there are multiple disadvantages.
10143.87 -> So let us discuss them one by one; we'll
move forward and we'll see what are the disadvantages
10148.029 -> of using one single Jenkins server. Now, what
I'll do I'll go back to my Jenkins dashboard
10152.64 -> and I'll show you how to create a build pipeline.
All right. So for that I'll move to my Ubuntu
10157.78 -> box. Once again now over here you can see
that there is an option of plus. Ok, just
10161.69 -> click over there. Now, over here, click on build
pipeline view; whatever name you want, you
10166.051 -> can give. I'll just give it as edureka
pipeline, and click on OK. Now, over here, what
10177.51 -> you can do is give a certain description
about your build pipeline. All right, and there
10181.05 -> are multiple options that you can just have
a look and over here. There's an option called
10187.36 -> select initial job. So I want compile to
be my first job, and there are display options
10193.08 -> over here number of display builds that you
want, I'll just keep it as 5. Then there are the row headers
10198.811 -> that you want, the column headers; you can just
have a look at all these options and you can
10203.141 -> play around with them just for the introductory
example, let us keep it this way now finally
10207.89 -> click on apply and ok. Currently you can see
that there is only one job, that is, compile.
10214.64 -> So what I'll do, I'll add more jobs to this pipeline;
for that. I'll go back to my Jenkins dashboard
10221.439 -> and over here. I'll add code review as well.
So for that I will go to configure. And in
this Build Triggers tab, what I'll do is click
on build after other projects are built. So
10232.85 -> whatever project that you want to execute
before code review just type that so I want
10237.13 -> compile. Yeah, click on compile and over here.
You can see that there are multiple options
10242.17 -> like trigger only if the build is stable; trigger
even if the build is unstable; trigger even
10246.31 -> if the build fails. So I'll just click on
trigger even if the build fails. All right,
10251.689 -> finally click on apply and save. Similarly,
if I want to add my test job as well to the
10257.18 -> pipeline. I can click on configure and again
in the Build Triggers tab, I'll click on build
10263.58 -> after other projects are built. So over here,
type the project that you want to execute
10268.069 -> before this particular project; in our case,
it is code review. So let us click over there:
10274.271 -> trigger even if the build fails, Apply and
Save. Now let us go back to the dashboard and
10282.39 -> see how our pipeline looks like. So this is
our pipeline. Okay, so when we click on run
10288.149 -> Let us see what happens first. It will compile
the code from the GitHub account. That is
10292.76 -> it will pull the code and it will compile
it. So now this compile is done. All right,
10296.729 -> now it will review the code. So the code review
has started in order to see the log. You can
10301.859 -> click on Console. It will give you the console
output. Now once code review is done. It will
start testing, and it will perform unit tests.
So the code has been successfully reviewed;
10311.6 -> as you can see, the color has become
green. Now the testing has started: it will
10317.41 -> perform unit tests on the test cases that are
there in the GitHub account. So we have successfully
10322.06 -> executed three build jobs that is compile
the code then review it and then perform testing.
10327.1 -> All right, and this is the build pipeline
guys. So let us go back to the Jenkins dashboard.
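(As a side note, you don't have to press Run in the UI: Jenkins jobs can also be triggered over its HTTP API, and the downstream jobs in the pipeline then fire on their own. A sketch, assuming Jenkins at localhost:8080 and an admin API token; your security settings may additionally require a crumb:

    # kick off the first job of the pipeline remotely
    curl -X POST http://localhost:8080/job/compile/build --user admin:API_TOKEN

)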
10332.09 -> And we'll go back to our slides now. So now
we have successfully performed unit tests
10338.06 -> on the test cases that are present in the
GitHub account. All right. Now, let us move
10341.71 -> forward and see what else you can do with
Jenkins. Now the application that we have
10345.71 -> tested that can also be deployed onto the
production server for release as well. Alright,
10350.75 -> so now let us move forward and see what are
the disadvantages of this one single Jenkins
10354.56 -> server. So there are two major disadvantages
of using one single Jenkins server. First is,
10360.42 -> you might require different environments for
your builds and test jobs. All right. So at
10364.689 -> that time, one single Jenkins server cannot serve
the purpose. And the second major disadvantage is,
10369.729 -> suppose you have heavier projects to build
on a regular basis. So at that time, one single
10375.62 -> Jenkins server cannot simply handle the load.
Let us understand this with an example. Suppose
10381.27 -> you need to run web tests using Internet
Explorer. So at that time you need a Windows
10384.75 -> machine, but your other build jobs might require
a Linux box. So you can't use one single Jenkins
10389.75 -> server. All right, so let us move forward and see
what is actually the solution to this problem
10395.25 -> the solution to this problem is Jenkins distributed
architecture. So the Jenkins distributed architecture
10400.859 -> consists of a Jenkins master and multiple
Jenkins slaves. So this Jenkins master is actually
10407.149 -> used for scheduling build jobs. It also dispatches
builds to the slaves for actual execution.
10413.22 -> All right, it also monitors the slaves, that is,
possibly taking them online and offline as
10418.109 -> required, and it also records and presents
the build results. And you can directly execute a build
10424.13 -> job on the master instance as well. Now, when we
talk about Jenkins slaves, these slaves are
10428.91 -> nothing but Java executables that are present
on remote machines. All right, so these slaves
10434.16 -> basically hear the requests of the Jenkins
master, or you can say they perform the jobs
10438.97 -> as told by the Jenkins master. They operate
on a variety of operating systems. So you can
10442.99 -> configure Jenkins in order to execute a particular
type of build job on a particular Jenkins
10447.279 -> slave, or on a particular type of Jenkins slave,
or you can actually let Jenkins pick the next
10452.91 -> available slave. All right. Now
I go back again to my Ubuntu box and I'll
10457.319 -> show you practically how to add Jenkins slaves
now over here as you can see that there is
10461.24 -> an option called Manage Jenkins; just click over
there, and when you scroll down, you'll see
10467.021 -> an option called Manage Nodes. On the
left hand side, there is an option called
10471.16 -> new node. Just click over there, click on permanent
agent give a name to your slave. I'll just
10476.609 -> give it as slave underscore one. Click on
OK over here. You need to write the remote
10483.46 -> root directory. So I'll keep it as /home/edureka.
And labels are not mandatory;
10491.1 -> still, if you want, you can use them. And for the launch
method, I want it to be launch slave agents
10497.6 -> via SSH. All right, over here you need to
give the IP address of your host. So let
10502.83 -> me show you the IP address of my host, that is,
my Jenkins slave, which I'll be using as the
10509.89 -> Jenkins slave. So, this is the machine that
I'll be using as the Jenkins slave. In order to
10517.38 -> check the IP address. I'll type ifconfig.
This is the IP address of that machine just
10524.7 -> copy it. Now I'll go back to my Jenkins master.
And in the host tab, I'll just paste that
10534 -> IP address and over here. You can add the
credentials to do that. Just click on ADD
10539 -> and over here. You can give the user name.
I'll give it as root, and then the password. That's all,
10548.529 -> just click on ADD. And over here select it.
Finally save it. Now it is currently adding
10556.931 -> the slave in order to see the logs. You can
click on that slave again. Now, it has successfully
10562.899 -> added that particular slave. Now what I'll
do, I'll show you the logs for that and click
10566.47 -> on slave. And on the left hand side, you will
notice an option called log just click over
10572.3 -> there and we'll give you the output. So as
you can see agent has successfully connected
10576.569 -> and it is online right now. Now what I'll
do, I'll go to my Jenkins slave, and I'll show
10581.95 -> you, in /home/edureka, that it
is added. Let me first clear my terminal. Now,
10590.239 -> what I'll do, I'll show you the contents of
/home/edureka. As you can see,
10602.85 -> we have successfully added slave.jar.
That means we have successfully added the
10606.729 -> Jenkins slave to our Jenkins master.
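(When adding a slave this way, it is worth checking the SSH plumbing the master depends on; a quick sketch, where the IP address and user are the ones you configured, shown here as placeholders:

    # on the slave: make sure an SSH server is installed and running
    sudo apt-get install openssh-server
    sudo service ssh status
    # from the master: confirm you can log in and that a JRE is present,
    # since the slave agent (slave.jar) is a Java executable
    ssh root@<slave-ip> 'java -version'

)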
Hello everyone, this is your host from Edureka, and
10616.93 -> today's session will focus on what is Docker.
So without any further Ado let us move forward
10622.31 -> and have a look at the agenda for today first.
We'll see why we need Docker will focus on
10626.979 -> various problems that industries were facing
before Docker was introduced after that will
10632.27 -> understand what exactly Docker is and for
better understanding of Docker will also look
10636.859 -> at a Docker example after that will understand
how Industries are using Docker with the case
10642.37 -> study of Indiana University. Our fifth topic
will focus on various Docker components, like
10648.18 -> images containers Etc and our Hands-On part
will focus on installing WordPress and phpmyadmin
10654.979 -> using Docker compose. So we'll move forward
and we'll see why we need Docker. So this
10660.14 -> is the most common problem that industries
were facing as you can see that there is a
10664.62 -> developer who has built an application that
works fine in his own environment. But when
10669.311 -> it reached production, there were certain issues
with that application. Why does that happen
10674.109 -> that happens because of difference in the
computing environment between dev and prod.
10678.7 -> I'll move forward and we'll see the second
problem before we proceed with the second
10683.67 -> problem, it is very important for us to understand
what microservices are. Consider a very large
10689.08 -> application; that application is broken down
into smaller Services. Each of those Services
10693.27 -> can be termed as micro services or we can
put it in another way as well microservices
10698.54 -> can be considered as small processes that communicate
with each other over a network to fulfill
10704.2 -> one particular goal. Let us understand this
with an example as you can see that there
10709.23 -> is an online shopping service application.
It can be broken down into smaller micro services
10714.189 -> like account service, product catalog, cart
service and order service. Microservice architecture
10720.83 -> is gaining a lot of popularity nowadays even
giants like Facebook and Amazon are adopting
10725.92 -> micro service architecture. There are three
major reasons for adopting microservice architecture,
10730.68 -> or you can say there are three major advantages
of using microservice architecture. First, there
10735.53 -> are certain applications which are easier
to build and maintain when they are broken
10740.08 -> down into smaller pieces or smaller Services.
Second reason is suppose if I want to update
10745.59 -> a particular software or I want a new technology
stack in one of my module on one of my service
10751.06 -> so I can easily do that because the dependency
concerns will be very less when compared to
10755.82 -> the application as a whole. Apart from that
the third reason is if any of my module of
10761.63 -> or any of my service goes down, then my whole
application remains largely unaffected. So
10767.899 -> I hope we are clear with what our micro services
and what are their advantages so we'll move
10772.84 -> forward and see what are the problems in adopting
this micro service architecture. So this is
10777.6 -> one way of implementing microservice architecture
over here, as you can see that there's a host
10782.14 -> machine and on top of that host machine there
are multiple virtual machines each of these
10786.64 -> virtual machines contains the dependencies
for one micro service. So you must be thinking
10790.96 -> what is the disadvantage here? The major disadvantage
10795.899 -> here is: in virtual machines, there is a lot
of wastage of resources. Resources such as
10801.63 -> RAM, processor, and disk space are not utilized
10801.63 -> completely by the micro service which is running
in these virtual machines. So it is not an
10806.22 -> ideal way to implement microservice architecture
and I have just given you an example of five
10811.71 -> microservices. What if there are more than
10816.07 -> 5 microservices? What if your application
is so huge that it requires many more microservices?
10821.08 -> At that time, using virtual machines doesn't
make sense, because of the wastage of resources.
10825.939 -> So let us first discuss the solution to the microservice
implementation problem that we just saw.
So what is happening here. There's a host
10830.27 -> machine and on top of that host machine. There's
a virtual machine and on top of that virtual
10834.649 -> machine, there are multiple Docker containers
and each of these Docker containers contains
10838.27 -> the dependencies for one microservice. So you
must be thinking, what is the difference here?
10842.851 -> Earlier, we were using virtual machines. Now,
we are using our Docker containers on top
10847.291 -> of virtual machines. Let me tell you guys
Docker containers are actually lightweight
10851.729 -> alternatives to virtual machines. What does
that mean in Docker containers? You don't
10856.1 -> need to pre-allocate any Ram or any disk space.
So it will take the RAM and disk space according
10861.35 -> to the requirements of applications. All right.
Now, let us see how Docker solves the problem
10866.022 -> of not having a consistent Computing environment
throughout the software delivery life cycle.
10870.739 -> Let me tell you first of all Docker containers
are actually developed by the developers.
10875.07 -> So now let us see how Docker solves the first problem
that we saw, where an application works fine
10879.99 -> in the development environment but not in production.
So Docker containers can be used throughout
10884.83 -> the SDLC in order to provide a consistent
computing environment. So the same environment
10890.149 -> will be present in dev, test and prod. So
there won't be any difference in the Computing
10895.13 -> environment. So let us move forward and understand
what exactly Docker is. So the docker containers
10901.609 -> does not use the guest operating system. It
uses the host operating system. Let us refer
10906.39 -> to the diagram that is shown. There is the
host operating system and on top of that host
10910.609 -> operating system. There's a Docker engine
and with the help of this Docker engine Docker
10914.81 -> containers are formed and these containers
have applications running in them and the
10919.47 -> requirements for those applications such as
all the binaries and libraries are also packaged
10923.939 -> in the same container. All right, and there
can be multiple containers running as you
10928.79 -> can see that there are two containers here
1 & 2. So on top of the host machine is a
10933.479 -> docker engine and on top of the docker engine
there are multiple containers and Each of
10937.79 -> those containers will have an application
running on them and whatever the binaries
10941.68 -> and library is required for that application
is also packaged in the same container. So
10946.58 -> I hope you are clear. So now let us move forward
and understand Docker in more detail. So this
10951.42 -> is a general workflow of Docker or you can
say one way of using Docker over here. What
10955.96 -> is happening: a developer writes code that
defines an application's requirements or
10961.6 -> dependencies in an easy-to-write Dockerfile,
and this Dockerfile produces Docker images.
10967.189 -> So whatever dependencies are required for
a particular application is present inside
this image. And what are Docker containers?
10975.07 -> Docker containers are nothing but the runtime
instances of Docker images. This particular
image is uploaded onto the docker Hub. Now,
10979.649 -> what is Docker Hub? Docker Hub is nothing
but like a Git repository for Docker images; it
10984.32 -> contains public as well as private repositories.
So from public repositories, you can pull
10988.689 -> your image as well and you can upload your
own images as well on to the docker Hub. All
10993.55 -> right from Docker Hub various teams such as
QA or production will pull the image and
10998.95 -> prepare their own containers as you can see
from the diagram. So what is the major advantage
11003.31 -> we get through this workflow? So whatever
the dependencies that are required for your
11008.01 -> application is actually present throughout
the software delivery life cycle. If you can
11012.83 -> recall the first problem that we saw that
an application works fine in development environment,
11017.48 -> but when it reaches production, it is not
working properly. So that particular problem
11022.08 -> is easily resolved with the help of this particular
workflow, because you have the same environment
11027.739 -> throughout the software delivery lifecycle,
be it dev, test or prod. Now we'll see, for a better
11032.2 -> understanding of Docker, a Docker example.
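(To make that workflow concrete: a Dockerfile, the image built from it, and the container run from that image might look as follows. This is a minimal sketch; the Python app, file names and image tag are invented for illustration, not taken from the demo:

    # write a Dockerfile describing one small service's dependencies
    cat > Dockerfile <<'EOF'
    FROM python:3.9-slim
    COPY app.py /app/app.py
    RUN pip install flask
    CMD ["python", "/app/app.py"]
    EOF
    # build a read-only image from it, run a container (the runtime instance),
    # and push the image to Docker Hub for other teams to pull
    sudo docker build -t <your-id>/myservice:1.0 .
    sudo docker run -d <your-id>/myservice:1.0
    sudo docker push <your-id>/myservice:1.0   # requires docker login

)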
So this is another way of using Docker in
11036.97 -> the previous example, we saw that Docker images
were used and those images were uploaded onto
the Docker Hub, and from Docker Hub various
teams were pulling those images and building
11046.739 -> their own containers. But Docker images are
huge in size and requires a lot of network
11051.33 -> bandwidth. So in order to save that network
bandwidth, we use this kind of a work flow
11055.89 -> over here: we use a Jenkins server, or any continuous
integration server to build an environment
11060.8 -> that contains all the dependencies for a particular
application or a microservice, and that
11065.97 -> build environment is deployed onto various
teams, like testing staging and production.
11071.3 -> So let us move forward and see what exactly
is happening in this particular image over
11075.38 -> here developer has written complex requirements
for a micro service in an easy to write dockerfile.
11081.649 -> And the code is then pushed onto the Git repository.
From the GitHub repository, continuous integration
11086.859 -> servers. Like Jenkins will pull that code
and build an environment that contains all
11091.819 -> the dependencies for that particular
micro service and that environment is deployed
11096.54 -> on to testing staging and production. So in
this way, whatever requirements are there
11101.87 -> for your micro service is present throughout
the software delivery life cycle. So if you
11106.64 -> can recall the first problem we're application
works fine in Dev, but does not work in prod.
11111.55 -> So with this workflow we can completely remove
that problem because the requirements for
11115.8 -> the microservice are present throughout
The software delivery life cycle and this
11120.51 -> image also explains how easy it is to implement
a microservice architecture using Docker. Now,
11126.04 -> let us move forward and see how Industries
are adopting Docker. So this is the case study
11130.859 -> of Indiana University before Docker. They
were facing many problems. So let us have
11135.83 -> a look at those problems one by one. The first
problem was they were using custom script
11140.3 -> in order to deploy that application onto various
vm's. So this requires a lot of manual steps
11145.88 -> and the second problem was their environment
was optimized for legacy Java based applications,
11152.359 -> but they're growing environment involves new
products that aren't solely java-based. So
11157.02 -> in order to provide these students the best
11160.55 -> possible experience, they needed to begin
11160.55 -> modernizing their applications. Let us move
forward and see what all other problems Indiana
11165.51 -> University was facing. So in the previous
problem, we saw that Indiana University wanted
11170.77 -> to start modernizing their applications. So,
for that, they wanted to move from a monolithic
11175.41 -> architecture to a microservice architecture.
In the previous slides, we also saw that
11180.39 -> if you want to update a particular technology
in one of your micro service it is easy to
do that, because there will be very few dependency
constraints when compared to the whole application.
11189.56 -> So because of that reason they wanted to start
modernizing their application. They wanted
11193.88 -> to move to a micro service architecture. Let
us move forward and see what are the other
11198.41 -> problems that they were facing Indiana University
also needed security for their sensitive student
11203.85 -> data such as SSN and student health care data.
So there are four major problems that they
11209.2 -> were facing before Docker now, let us see
how they have implemented Docker to solve
11213.46 -> all these problems the solution to all these
problems was docker Data Center and Docker
11218.911 -> data center has various components, which
are there in front of your screen first is
11223.77 -> Universal Control Plane, then comes LDAP, Swarm,
CS Engine, and finally Docker Trusted Registry.
11230.05 -> now, let us move forward and see how they
have implemented Docker data center in their
11234.729 -> infrastructure. This is a workflow of how
Indiana University has adopted Docker data
11239.54 -> center. This is Docker Trusted Registry. It is
nothing but the storage of all your Docker
11244.819 -> images, and each of those images contains the
dependencies for one microservice. As we saw,
11250.029 -> that the Indiana University wanted to move
from a monolithic architecture to a microservice
11254.51 -> architecture. So because of that reason,
these Docker images contain the dependencies
11258.66 -> for one particular micro service, but not
the whole application. All right, after that
11264.18 -> comes universal control plane. It is used
to deploy Services onto various hosts with
11268.84 -> the help of Docker images that are stored
11273.3 -> in the Docker Trusted Registry. So the IT ops team
11273.3 -> can manage their entire infrastructure from
one single place with the help of universal
11278.46 -> control plane web user interface. They can
actually use it to provision Docker installed
11282.75 -> software on various hosts, and then deploy
applications without doing a lot of manual
11287.46 -> steps as we saw in the previous slides that
Indiana University was earlier using custom
11292.45 -> scripts to deploy our application onto VMS
that requires a lot of manual steps that problem
11297.14 -> is completely removed here when we talk about
security the role based access controls within
11302.319 -> the docker data center allowed Indiana University
to define levels of access for various teams.
11308.49 -> For example, they can provide read-only access
to Docker containers for production team.
11313.71 -> And at the same time they can actually provide
read and write access to the dev team. So
11319.39 -> I hope we all are clear with how Indiana University
has adopted Docker data center will move forward
11325.36 -> and see what are the various Docker components.
First is Docker registry Docker registry is
11333.41 -> nothing but the storage of all your Docker
images your images can be stored either in
11337.55 -> public repositories or in private repositories.
These repositories can be present locally
11342 -> or they can be present on the cloud. Docker provides
a cloud hosted service called Docker Hub. Docker
11347.13 -> Hub has public as well as private repositories;
from public repositories. You can actually
11352.13 -> pull an image and prepare your own containers
at the same time. You can write an image and
11356.7 -> upload that onto the docker Hub. You can upload
that into your private repository or you can
11361.4 -> upload that on a public repository as well.
That is totally up to you. So for better understanding
11366.26 -> of Docker Hub, let me just show you how it
looks like. So this is how a Docker Hub looks
11370.699 -> like. So first you need to actually sign in
with your own login credentials. After that.
11375.16 -> You will see a page like this, which says
welcome to Docker Hub over here, as you can
11379.41 -> see that there is an option of create repository
where you can create your own public or private
11383.87 -> repositories and upload images and at the
same time. There's an option called explore
repositories; this contains all the repositories
11392.85 -> which are available publicly. So let
11392.85 -> us go ahead and explore some of the publicly
available repositories. So we have repositories
11398.88 -> for nginx, Redis, Ubuntu; then we have Docker
Registry, Alpine, Mongo, MySQL, Swarm. So what
11406.229 -> I'll do, I'll show you the CentOS repository.
So this is the CentOS repository, which
11410.96 -> contains the CentOS image. Now, what
I will do later in the session, I'll actually
11415.819 -> pull a CentOS image from Docker Hub.
Now, let us move forward and see what
11420.041 -> Docker images and containers are. So Docker images
are nothing but the read-only templates that
11425.47 -> are used to create containers these Docker
images contains all the dependencies for a
particular application or a microservice.
You can create your own image and upload that
11435.5 -> onto the docker Hub. And at the same time
you can also pull the images which are available
in the public repositories in Docker
Hub. Let us move forward and see what
11444.171 -> Docker containers are. Docker containers are nothing
but the runtime instances of Docker images
11450.46 -> it contains everything that is required to
run an application or a microservice, and
11455.05 -> at the same time, it is also possible that
more than one image is required to create
11459.18 -> one container. Alright, so for better understanding
of Docker images and Docker containers, what
I'll do on my Ubuntu box, I will pull a CentOS
11470.12 -> image and I'll run a CentOS container
from it. So let us move forward and first
install Docker in my Ubuntu box. So guys,
11475.72 -> this is my Ubuntu box over here first. I'll
update the packages. So for that I will type
11480.439 -> sudo apt-get update. It is asking for the password. It
is done now. Before installing Docker, I need
11503.24 -> to install the recommended packages. For that,
I'll type sudo apt-get install linux-image-extra-$(uname -r),
11515.969 -> and then linux-image-extra-virtual, and here we
11550.479 -> go. Press Y. So we are done with the prerequisites.
So let us go ahead and install Docker for
11564.75 -> that, I'll type sudo apt-get install
docker-engine. So we have successfully installed
11581.58 -> Docker. If you want to install Docker on CentOS,
you can refer to the CentOS Docker
11586.04 -> installation video. Now we need to start the
Docker service. For that, I'll type sudo service
11595.08 -> docker start. So it says the job is already
running. Now, what I will do is pull a CentOS
11604.02 -> image from Docker Hub, and I will
run a CentOS container. So for that,
11608.63 -> I will type sudo docker pull and the name
of the image, that is, centos. First, it
11616.97 -> will check the local registry for the CentOS image;
if it doesn't find it there, then it will go to
11621.56 -> Docker Hub for the CentOS image, and it will
pull the image from there. So we have successfully
11632.53 -> pulled the CentOS image from Docker Hub.
Now I'll run the CentOS container. So,
11637.91 -> for that, I'll type sudo docker run -it centos;
that is the name of the image. And here
11646.199 -> we go. So we are now in the CentOS container.
Let me exit from this. Clear my terminal.
11656.22 -> So let us now recall what we did. First, we
installed Docker on Ubuntu. After that, we
11660.79 -> pulled the CentOS image from Docker Hub.
And then we built a CentOS container using
11666 -> that CentOS image.
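(Put together, the whole demo so far fits in a handful of commands:

    sudo apt-get update                  # refresh package lists
    sudo apt-get install docker-engine   # install Docker (package name of that era)
    sudo service docker start            # make sure the daemon is running
    sudo docker pull centos              # fetch the CentOS image from Docker Hub
    sudo docker run -it centos           # start an interactive CentOS container
    exit                                 # leave the container's shell

)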
Now, I'll move forward and I'll tell you what exactly Docker Compose
11672.239 -> is. So let us understand what exactly Docker
compose is suppose you have multiple applications
11678.689 -> on various containers and all those containers
are actually linked together. So you don't
11683.6 -> want to actually execute each of those containers
one by one but you want to run those containers
11689.04 -> at once with a single command. So that's where
Docker compose comes into the picture with
11694.25 -> Docker compose. You can actually run multiple
applications present on various containers
11698.819 -> with one single command, that is, docker-compose
up. As you can see, there is an example
11704.66 -> in front of you imagine you're able to Define
11709.25 -> three containers, one running a web app, another
running a Postgres, and another running a
11715.17 -> Redis, in a YAML file that is called a Docker
Compose file. And from there, you can actually
11719.96 -> execute all these three containers with one
single command, that is, docker-compose up.
let us understand this with an example suppose.
11725.819 -> You want to publish a Blog for that you'll
use CMS and WordPress is one of the most widely
used CMS. So you need one container for WordPress,
11736.729 -> and you need one more container for MySQL
as backend, and that MySQL container should
be linked to the WordPress container apart
11742.25 -> from that. You need one more container for
phpmyadmin that should be linked to my SQL
11747.38 -> database as it is used to access mySQL database.
So what if you are able to Define all these
11753.56 -> three containers in one yamen file and with
one command that is docker - composer, all
11758.39 -> three containers are up and running. So let
me show you practically how it is done on
11763.439 -> the same open to box where I've installed
Docker and I've pulled a center s image. This
11768.88 -> is my Ubuntu box first. I need to install
Docker compose here, but before that I need
python-pip. So for that, I will type sudo apt-
get install python-pip, and here we go.
11798.33 -> So it is done now. I will clear my terminal,
and now I'll install Docker Compose. For that,
11803.529 -> I'll type sudo pip install docker-compose,
and here we go. So Docker compose is successfully
11819.89 -> installed. Now I'll make a directory and I'll
name it as WordPress: mkdir WordPress. Now
11829.75 -> I'll enter this WordPress directory. Now, over
here, I'll edit the docker-compose.yml
11838.5 -> file using gedit. You can use any other editor
that you want; I'll use gedit. So I'll type
11844.43 -> sudo gedit docker-compose.yml, and
here we go. So over here, what I'll do is
11858.62 -> first open a document and copy this
YAML code, and I will paste it here. So
11869.79 -> let me tell you what I've done first. I have
defined a container, and I've named it as
11873.61 -> WordPress. It is built from the wordpress image
that is present on the Docker Hub. But this
11878.989 -> WordPress image does not have a database.
So for that, I have defined one more container,
11883.58 -> and I've named it as wordpress_db.
It is actually built from the image that
11888.479 -> is called mariadb, which is present on
Docker Hub. And I need to link this wordpress_db
11893.42 -> container with the WordPress container.
So for that, I have written links: wordpress_db:mysql.
11898.33 -> All right, and in the
ports section, port 80 of the Docker container
11905.569 -> will actually be mapped to port 8080
of my host machine. So are we clear
11911.64 -> till here now? Then, I've defined
a password here as edureka; you can give
11916.1 -> whatever password you want. And I have defined
one more container, called phpmyadmin. This
11922.12 -> container is built from the image corbinu
/docker-phpmyadmin that is present on the
11927.53 -> Docker Hub. Again, I need to link this particular
container with the wordpress_db container;
11933.14 -> for that, I have written links: wordpress_db:mysql.
And in the ports section, port 80
11939.43 -> of my Docker container will actually be mapped
to port 8181 of the host machine. And finally,
11945.739 -> I've given the username, that is, root, and I've
given the password as edureka. So let us now
11950.92 -> save it and quit. Let me first clear
my terminal.
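(The YAML itself isn't legible on screen, but from the description above, a file along these lines would do it, in the old version-1 Compose format that pip-installed docker-compose used; treat this as a reconstruction rather than the exact demo file:

    cat > docker-compose.yml <<'EOF'
    wordpress:
      image: wordpress
      links:
        - wordpress_db:mysql      # visible inside the container as host "mysql"
      ports:
        - "8080:80"               # host port 8080 -> container port 80
    wordpress_db:
      image: mariadb
      environment:
        MYSQL_ROOT_PASSWORD: edureka
    phpmyadmin:
      image: corbinu/docker-phpmyadmin
      links:
        - wordpress_db:mysql
      ports:
        - "8181:80"
      environment:
        MYSQL_USERNAME: root
        MYSQL_ROOT_PASSWORD: edureka
    EOF

)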
11961.12 -> And now I'll run the command: sudo docker-compose up -d. And here we go. So
this command will actually pull all the three
11972.54 -> images and will build the three containers.
So it is done now. Let me clear my terminal.
11988.729 -> Now what I'll do, I'll open my browser and
over here. I'll type the IP address of my
11994.85 -> machine or I can type the hostname as well.
The hostname of my machine is localhost. So
11999.649 -> I'll type localhost and the port 8080
that I've given for WordPress. So it will
12005.68 -> direct you to a WordPress installation page
over here. You need to fill this particular
12009.729 -> form, which is asking you for site title.
I'll give it as edureka. The username, also,
12014.989 -> I will give as edureka. The password, I'll type
edureka, and confirm the use of a weak password.
12023.239 -> Then type your email address, and it is asking
about search engine visibility, so I'll
12029.681 -> just click here. And finally, I'll click
on install WordPress. So this is my WordPress
12037.06 -> dashboard and WordPress is now successfully
installed. Now what I'll do, I'll open one
12042.41 -> more tab, and over here I'll type localhost
or the IP address of the machine, and I'll go
12048.29 -> to port 8181 for phpMyAdmin. And over here,
I need to give the username. If you can recall,
12055.359 -> I've given root, and the password is given as
edureka. And here we go. So phpMyAdmin
12063.029 -> is successfully installed. This phpmyadmin
is actually used to access the MySQL database,
12069.399 -> and this MySQL database is used as the back-end
for WordPress. If you've landed on this video,
12080.39 -> then it's definitely because you want to install
a Kubernetes cluster on your machine. Now,
12084.729 -> we all know how tough the installation process
is; hence this video on our YouTube channel.
12089.069 -> My name is Walden and I'll be your host for
today. So without wasting any time let me
12093.23 -> show you what are the various steps that we
have to follow. Now. There are various steps
12096.65 -> that we have to run both at the master's end
and the slave's end, and then a few commands
12101.39 -> only at the master's end to bring up the cluster,
and then one command which has to be run at
12105.57 -> all the slave ends so that they can join the
cluster. Okay. So let me get started by showing
12111.19 -> you those commands on those installation steps,
which have to be run commonly on both the
12115.649 -> master's end and the slave's end. First of all,
we have to update your repository. Okay, since
12121.08 -> I am using Ubuntu, I'll update my apt-get
repository. Okay, and after that, we would
12126.351 -> have to turn off the swap space, be it the master's
end or the slave's end; Kubernetes will not
12130.62 -> work if the swap space is on. Okay, we have
to disable that so there are a couple of commands
12134.43 -> for that and then the next part is you have
to update the hostname, the hosts file, and
12139.44 -> we have to set a static IP address for all
the nodes in your cluster. Okay, we have to
12143.949 -> do that because at any point of time if your
master or a node in the cluster fails,
12148.37 -> then when they restart they should have the
same IP address if you have a dynamic IP address
12153.239 -> and then if they restart because of a failure
condition, then it will be a problem because
12157.2 -> they will not be able to join the cluster, because
you'll have a different IP address. So that's
12160.71 -> why you have to do these things. All right,
there are a couple of commands for that and
12164.17 -> after that we have to install the openssh
server and Docker. That is because Kubernetes
12169.93 -> requires the openssh functionality, and it
of course needs Docker because everything
12173.95 -> in kubernetes is containers, right? So we
are going to make use of Docker containers.
12177.49 -> So that's why we have to install these two
components, and finally we have to install
12181.89 -> kubeadm, kubelet and kubectl. These are
the core components of your
12186.1 -> Kubernetes setup. All right.
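(For reference, installing those three components on Ubuntu at the time of this tutorial boiled down to adding the Kubernetes apt repository and installing the packages; a sketch, noting that the repository location has changed for newer releases, so check the current docs:

    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | \
      sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y kubeadm kubelet kubectl

)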
So these are the various components that have to be installed on both
12190.449 -> your master's end and your slave's end. So let me first
of all open up my VMS and then show you how
12195.819 -> to get started now before I get started. Let
me tell you one thing. You have a cluster:
12200.47 -> you have a master and then you have slaves
in that cluster, right? Your master should
12204.04 -> always have a better configuration than your
slaves. So for that reason, if you're using
12208.21 -> virtual machines on your host, then you have
to ensure that your master has at least 2
12212.45 -> GB of RAM and two core CPUs. Okay, and your
slave has 2 GB of RAM and at least one core
12218.41 -> CPU. So these are the basic necessities for
your master and slave machines. On that note,
12223.8 -> I think I can get started. So first of all,
I'll bring up my virtual machine and go through
12227.95 -> these installation processes. So I hope everyone
can see my screen here. This is my first VM
12234.63 -> and what I'm going to do is I'm going to make
this my master. Okay, so all the commands
12239.23 -> to install the various components are present
with me in my notepad. Okay, so I'm going to
12244.529 -> use this for reference and then quickly execute
these commands and show you how Kubernetes
12248.79 -> is installed. So first of all, we have to
update our apt-get repository. Okay, but
12253.399 -> before that, let's log in as su. Okay, so
I'm going to do a sudo su so that I can execute
12258.819 -> all the following commands as the root user.
Okay. So sudo su, and there goes my root password,
12265.89 -> and now you can see the difference here, right?
Here I was executing as a normal user,
12269.649 -> but from here I am a root user. So I'm going
to execute all these commands as su. So
12274.359 -> first of all, let's do an update. I'm going
to copy this and paste it here: apt-get update
12281.399 -> updates my Ubuntu repositories. All right,
so it's going to take quite some time, so
12288.31 -> just hold on till it's completed. Okay. So
this is done. The next thing I have to do
12296.17 -> is turn off my swap space. Okay. Now the command
to disable my swap space is swapoff with the
12303.12 -> flag -a. Let me go back here and do the same.
Okay, swapoff with the flag. And now we have to
12311.949 -> go to the fstab. So this is a file called
fstab, okay, and we will have a line with the
12317.43 -> entry of the swap space, because at any point of
time, if you have enabled swap space, then
12321.08 -> you will have a line over there. Now we have
to disable that line. Okay, we can disable
12325.029 -> that line by commenting it out. So
let me show you how that's done. I'm just
12328.91 -> using the nano editor to open this fstab file.
Okay, so you can see this line right here where
12335.01 -> it says swap file. This is the one I have to
comment out. So let me just come down here
12339.62 -> and comment it out like this, okay, with the
hash. Now, let me save this and exit.
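For reference, here is the swap step as a minimal sketch (the exact swap line in /etc/fstab is illustrative and may look different on your system):

    sudo swapoff -a               # turn swap off for the running system
    sudo nano /etc/fstab          # open fstab and comment out the swap entry, e.g.
    # /swapfile none swap sw 0 0    <- prefixing with '#' keeps swap off after a reboot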
12349.199 -> Now, the next thing I have to do is update my hostname
and my hosts file and then set a static IP
12354.43 -> address. So let me get started by first updating
the hostname. So for that, I have to go to
12359.43 -> this file, hostname, which is in the /etc
path. So I'm again using nano for that. You
12366.81 -> can see here it's the default name ending
in -VirtualBox, right? So let me replace this and say
12371.739 -> kMaster, as in Kubernetes master. So let
me save this and exit. Now, if you want your
12378.22 -> hostname to reflect over here, because right
now the prompt still shows the old VirtualBox name,
12383.649 -> the hostname does not look updated as
yet, and if you want it to be updated to
12388.34 -> kMaster, then you have to first of all restart
this VM or your system. If you're doing it
12392.89 -> on a system, then you have to restart your
system, and if you do it on a VM, you have
12396.59 -> to restart your VM. Okay, so let me restart
my VM in some time. But before that, there
12400.529 -> are a few more commands which I want to run,
and that is to set a static IP address. Okay,
12405 -> so I'm going to copy this ifconfig. I'm
going to run this ifconfig command. Okay. So
12409.949 -> right now my IP address is 192.168.56.101,
and the
12415 -> next time when I turn on this machine, I do
not want a different IP address. So to set
12418.83 -> this as a static IP address, I have a couple
of commands. Let me execute that command first.
12423.64 -> So you can see this interfaces file, right?
So under /etc/network, we have a file called
12429.5 -> interfaces. So this is where you define all
your network interfaces. Now, let me enter
12433.47 -> this file and add the rules to make it a static
IP address. As you can see here, the last three
12440.05 -> lines are the ones which ensure that this
machine will have a static IP address. These
12443.91 -> three lines are already there on my machine.
Now, if you want to set a static IP address
12447.592 -> at your end, then make sure that you have these
things defined correctly. Okay. My IP address
12452.34 -> is the .101, so I would just retain
it like this. So let me just exit. So the
12458.08 -> next thing that I have to do is go to the
hosts file and update my IP address over there.
12463.02 -> Okay, so I'm going to copy this and go to
my /etc/hosts file. Now over here, you can
12470.279 -> see that there is no entry, so I have to mention
that this is my kMaster. So let me specify
12475.39 -> my IP address first. This is my IP address,
and now we have to update the name of the
12480.25 -> host. So this host is kMaster, so I'm
just going to enter that and save this. Okay.
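Putting those edits together, a minimal sketch of what was changed on the master (the adapter name enp0s8 is an assumption; use whatever name ifconfig shows for your host-only network):

    sudo nano /etc/hostname            # replace the contents with: kMaster
    sudo nano /etc/hosts               # add the line:  192.168.56.101  kMaster
    sudo nano /etc/network/interfaces  # append, to pin the static IP:
    #   auto enp0s8
    #   iface enp0s8 inet static
    #   address 192.168.56.101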
12489.01 -> Now the thing that we have to do is restart
this machine. So let me just restart this machine
12494.02 -> and get back to you in the meanwhile. Okay.
So now that we are back on, let me check if
12499.979 -> my hostname and hosts have all been updated.
Yes, there you go. You can see here, right,
12505.47 -> it reads kMaster. So this means that my
hostname has been successfully updated. We
12510.41 -> can also verify my IP address is the same;
let me do an ifconfig, and as you can see,
12515.67 -> my IP address has not changed. All right,
so this is good. Now, this is what we wanted.
12520.939 -> Now, let's continue with our installation
process. Let me clear the screen and go back
12525.77 -> to the notepad and execute those commands
which, first of all, install my openssh server.
12531.689 -> So this is going to be the command to do that,
and we have to execute this as a sudo user.
12536.819 -> Right, so sudo apt-get install openssh-server.
That's the command. Okay, let me say yes and
12545.02 -> enter. Okay. So my SSH server would have been
installed by now. Let me clear the screen
12556.72 -> and install Docker. But before I run the
command which installs Docker, I will update
12563.06 -> my repository. Okay, so let me log in as sudo
first of all. Okay, sudo su is the command,
12569.399 -> and okay, I have logged in as the root user. Now,
the next thing is to update my repository, so
12574.649 -> I have to do an apt-get update. Now again, this
is going to take some more time, so just hold
12579.78 -> on till then. Okay, this is also done. Now
we can straight away run the command to install
12585 -> Docker. Now, this is the command to install
Docker. Okay, from the apt-get repository
12590.02 -> I'm installing Docker, and I'm specifying
-y because -y is my flag. So whenever
12595.56 -> there's a prompt that comes up during installation,
saying do you want to install it, yes or no,
12600.12 -> then when you specify -y, it means
that by default it will accept yes as a parameter.
12605.149 -> Okay, so that is the only concept behind
-y. So again, installing Docker is going to
12609.62 -> take a few more minutes. Just hang on till
then. Okay, great. So Docker is also installed.
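For reference, the two installs just run, as a minimal sketch (docker.io is the Ubuntu-repository package name commonly used in this kind of setup):

    sudo apt-get update
    sudo apt-get install -y openssh-server   # SSH access for the cluster machines
    sudo apt-get install -y docker.io        # -y auto-answers the install prompt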
12623.21 -> Okay. So let me go back to the notepad. So
to establish the Kubernetes environment, the
12628.13 -> three main components that Kubernetes is made
up of are kubeadm, kubelet and kubectl, but
12634.06 -> just before I install these three components,
there are a few things I have to do. They are
12638.779 -> like installing curl and then downloading
certain packages from this URL and then running
12644.649 -> an update. Okay. So let me execute these commands
one after the other first and then install
12649.23 -> Kubernetes. So let's first of all start with
this command where I'm installing curl. Okay.
12660.41 -> Now, the next command is basically downloading
these packages using curl, and curl is basically
12665.77 -> this tool using which you can download these
packages from your command line. Okay. So
12669.79 -> this is basically a web URL, right? So I can
access whatever packages are there on this
12675.1 -> web URL and download them using curl. So that's
why I've installed curl in the first place.
12680.93 -> So on executing this command, I get this,
which is perfect. Now, when I go back, then there
12686.06 -> is this which we have to execute. Okay, let
me hit enter, and I'm done. And finally, I have
12693.449 -> to update my apt-get repository, and the command
for that is this one: apt-get update. Okay,
12702.2 -> great. So all the preparation steps are also
done. Now I can finally set up my Kubernetes
12708.25 -> environment by executing this command. So
in the same command I say install kubelet,
12712.64 -> kubeadm and kubectl, and to just avoid
the yes prompt I am specifying the -y flag,
12718.569 -> okay, which would by default take yes as a
parameter. And of course I'm taking it from
12724.4 -> the apt-get repository, right? So let me
just copy this and paste it here. Give it
12733.152 -> a few more minutes, guys, because installing Kubernetes
is going to take some time. Okay, bingo. So
12744.29 -> Kubernetes has also been installed successfully.
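For reference, the preparation and install commands, as a minimal sketch (this key URL and repository line were the ones commonly used when this walkthrough was recorded; the Kubernetes project has since moved its packages to pkgs.k8s.io, so check the current docs):

    sudo apt-get install -y curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | \
        sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl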
Okay. Let me conclude the setting up of this
12749.649 -> Kubernetes environment by updating the Kubernetes
configuration. Okay. So there's this file,
12755.21 -> right, the kubeadm config file; kubeadm is the
one that's going to let me
12758.819 -> administer my Kubernetes. So I have to go to this
file and add this one line. Okay, so let me
12764.72 -> first of all open up this file using my nano
editor. So let me again log in with sudo su,
12770.42 -> and this is the command. So as you can see,
we have this set of environment variables.
12775.899 -> So right after the last environment variable, I
have to add this one line, and that line is
12780.82 -> this one. All right. Now, let me just save
this and exit. Brilliant.
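A common form of this step, as a minimal sketch (both the file path and the added line are assumptions based on typical kubeadm setups of that era; match the line to your Docker cgroup driver):

    sudo nano /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # after the last Environment= line, add something like:
    # Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs"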
12794.78 -> So with that, the components which have to be installed on both
the master and the slave come to an end. Now,
12798.96 -> what I will do next is run certain commands
only at the master to bring up the cluster,
12803.33 -> and then run this one command at all my slaves
to join the cluster. Alright. So before I
12808.26 -> start doing anything more over here, let me
also tell you that I have already done the
12812.72 -> same steps on my node. So if you are doing
it at your end, then whatever steps you've
12817.06 -> done so far, run the same set of commands on
another VM, because that will be acting as
12820.819 -> your node VM. But in my case, I have already
done that just to save some time, you know.
12825.29 -> So let me show you that: this is my kMaster
VM, and right here I have my kNode, which
12832.71 -> is nothing but my Kubernetes node, and I've
basically run the same set of commands in
12836.99 -> both the places. But there is one thing which
I have to ensure before I bring up the cluster,
12841.18 -> and that is to ensure the network IP addresses
and the hostname and the hosts. So this is
12846.39 -> my Kubernetes node, so all I'm going to do is
cat and say /etc/hosts. Okay. Now over
12853.979 -> here I have the IP address of my Kubernetes
node, that is this very machine, and it specifies
12859.109 -> the name of the host. However, the name of
my Kubernetes master host is not present, and
12863.68 -> neither is the IP address. So that is one
manual entry we have to do. If you remember,
12867.85 -> let me go to my master and check what the
IP address is. Yes. So the IP address over here
12871.41 -> is 192.168.56.101.
So this is the IP address I
12876.62 -> have to add at my node's end. So I have to modify
this file for that. All right, but before
12882.101 -> that, you have to also ensure that this is
a static IP address. So let me ensure that
12886.279 -> the IP address of my cluster node does not
change. So the first thing we have to do before
12890.76 -> anything is check what the current IP
address is, and for my node the IP address is
12896.55 -> 192.168.56.102.
Okay, now let me run this command on the network
12905.51 -> interfaces. Okay. So as you can see here,
this is already set to be a static IP address.
12910.63 -> We have to ensure that these same lines are
there in your machine if you want it to be
12914.69 -> a static IP address. Since it's already there
for me, I'm not going to make any change, but
12918.63 -> rather I'm going to go and check what my
hostname is. I mean, the hostname should anyway
12922.62 -> show the same thing, because right now it's
kNode. So that's what it's going to reflect.
12925.58 -> But anyway, let me just show it to you. Okay,
so my hostname is kNode. Brilliant. So this
12931.91 -> means that there is one thing which I have
to change, and that is nothing but adding the
12935.859 -> particular entry for my master. So let me
first clear the screen and then use my nano
12942.2 -> editor. In fact, I'll have to run it as sudo.
So as a sudo user, I'm going to open my nano
12947.699 -> editor and edit my hosts file. Okay, so here
let me just add the IP address of my master.
12957.08 -> So what exactly is the IP address of the master?
Yes, this is my kMaster. So I'm just going
12962.13 -> to copy this IP address, come back here,
paste the IP address, and I'm going to say the
12968.33 -> name of that particular host is kMaster.
And now let me save this. Perfect. Now, what
12974.35 -> I have to do now is go back to my master and
ensure that the hosts file here has the entry
12979.21 -> about my slave. I'll clear the screen, and
first I'll open up my hosts file. So on my
12987.45 -> master's end, the only entry there is for the
master. So I have to write another line where
12991.19 -> I specify the IP address of my slave and
then add the name of that particular host,
12995.06 -> that is kNode. And again, let me use the
nano editor for this purpose. So I'm going
12999.449 -> to say sudo nano /etc/hosts. Okay, so I'm going
to come here, say 192.168.
13008.66 -> 56.102, and then say kNode.
All right. Now all the entries are perfect.
13017.56 -> I'm going to save this and exit. So the hosts
file on both my master and my slave has been
13024.08 -> updated, the static IP address for both my
master and the slave has been updated, and
13028.33 -> the Kubernetes environment has also been established.
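So, as a quick sketch, /etc/hosts should now carry the same two lines on both machines (IPs and names as used in this demo):

    192.168.56.101  kMaster
    192.168.56.102  kNode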
Okay. Now, before we go further and bring up
13033.88 -> the cluster, let me do a restart, because I've
updated my hosts file. Okay. So let me restart
13038.72 -> both my master and my slave VMs, and if
you're doing it at your end, then you have
13042.95 -> to do the very same. Okay, so let's say restart,
and similarly let me go to my node here and
13050.77 -> do a restart. Okay, so I've just logged in,
and now that my systems are restarted, I can
13064.67 -> go ahead and execute the commands at only
the master's end to bring up the cluster. Okay.
13074.99 -> So first of all, let me go through the steps
which need to be run on the master's
13079.05 -> end. So at the master, first of all, we
have to run a couple of commands to initiate
13083.39 -> the Kubernetes cluster, and then we have to
install a pod network. We have to install
13087.92 -> a pod network because all my containers inside
a single pod will have to communicate over
13092.39 -> a network; a pod is nothing but a group of
containers sharing a network. So there are various container
13097.09 -> networks which I can use: I can use the
Calico pod network, I can use a Flannel pod
13101.76 -> network, or I can use any one; you can see the
entire list in the Kubernetes documentation.
13107.17 -> And in this session, I am going to use the
Calico network. Okay, so that's pretty simple
13110.63 -> and straightforward, and that's what I'm going
to show you next. So once you've set up the
13114.609 -> pod network, you can straight away bring up
the Kubernetes dashboard, and remember that
13119.659 -> you have to set up the Kubernetes dashboard
and bring this up before your nodes join the
13123.87 -> cluster, because in this version of Kubernetes,
if you first get your nodes to join the cluster
13128.909 -> and after that you try bringing the Kubernetes
dashboard up, then your Kubernetes dashboard
13132.93 -> gets hosted on the node, and you don't want that
to happen, right? If you want the dashboard
13137.24 -> to come up at your master's end, you have to
bring up the dashboard before your nodes join
13140.85 -> the cluster. So these would be the three commands
that we will have to run: initiating the cluster,
13145.08 -> installing the pod network, and then setting
up the Kubernetes dashboard. So let me go
13149.37 -> to my master and execute commands for each
of these processes. So I suppose this is my
13154.48 -> master. And yes, this is my kMaster. So
first of all, to bring up the cluster, we have
13161.99 -> to execute this command. Let me copy this,
and over here we have to replace the IP addresses,
13168.29 -> so the IP address of my master, right? So
for this machine, I have to specify that IP address
13173.33 -> over here, because this is where the other
IP addresses can come and join; this is the
13179.659 -> master, right? So I'm just saying API server,
advertise the address 56.101, so that
13186.77 -> all the other nodes can come and join the
cluster on this IP address. And along with
13191.8 -> this, I have to also specify the pod network,
since I've chosen the Calico pod network.
13196.67 -> There is a network range which my Calico pod
network uses; CNI basically stands for
13201.97 -> container network interface. If I'm using
the Calico pod network, then I have to use this
13206.279 -> network range, but in case you want to
use a Flannel pod network, then you can use
13211.27 -> that network range. Okay, so let me just copy
this one and paste it. All right. So the command
13218.63 -> is sudo kubeadm init with the pod network, followed
by the IP address from where the other nodes
13224.449 -> will have to join. So let's go ahead and hit enter.
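For reference, the init command in the form used here, as a minimal sketch (192.168.0.0/16 is the range commonly paired with Calico; Flannel setups typically use 10.244.0.0/16 instead):

    sudo kubeadm init --apiserver-advertise-address=192.168.56.101 \
                      --pod-network-cidr=192.168.0.0/16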
So, since you're doing this for the first time, give
13231.63 -> it a few minutes, because Kubernetes takes some
time to install. Just hold on until that happens.
13239.04 -> All right. Okay, great. Now it says that your
Kubernetes master has initialized successfully;
13244.899 -> that's good news. And it also says that to
start using your cluster, we need to run the
13249.4 -> following commands as a regular user. Okay,
so we'll note that, log out as the sudo user,
13253.991 -> and as a regular user execute these three
commands. And also, if I have to deploy a pod
13259.34 -> network, then I have to run a command. Okay, so
this is that command which I have to run to
13264.8 -> bring up my pod network. So I'll basically be
applying the YAML file which is present over
13268.91 -> here. So before I get to all these things,
let me show you that we have a kubeadm join
13274.84 -> command which is generated, right? So this
is generated at my master's end, and I have
13278.49 -> to execute this command at my node to join
the cluster, but that would be the last step,
13282.8 -> because like I said earlier, these three commands
will have to be executed first, then I have to
13286.41 -> bring up my pod network, then I have to bring
up my dashboard, and then I have to get my
13290.83 -> nodes to join the cluster using this command.
So for my reference, I'm just going to copy
13294.58 -> this command and store it somewhere else.
Okay. So right under this, let me just keep this
13302.16 -> command for later reference. And in the meanwhile,
let me go ahead and execute all these commands
13308.88 -> one after the other. These are as per the
kubeadm instructions, right? Yes, I would
13314.16 -> like to overwrite it. And then, okay.
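Those three regular-user commands are the standard ones kubeadm prints after a successful init:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   # -i asks before overwriting
    sudo chown $(id -u):$(id -g) $HOME/.kube/config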
Now that I'm done with this, let me first of all bring
13322.17 -> up my pod network. Okay. Now, the command to
bring up my pod network is this. Perfect.
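A minimal sketch of that step and its verification (the manifest URL is whatever the Calico docs give you; the placeholder below is not a real path):

    kubectl apply -f <calico-manifest-URL>          # install the Calico pod network
    kubectl get pods -o wide --all-namespaces       # watch the pods come up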
13329.909 -> So my Calico pod has been created. Now I can verify
that my pod has been created by running the
13340.109 -> kubectl get pods command. Okay. So this is
my kubectl get pods; I can say -o wide
13350.34 -> --all-namespaces. Okay, by specifying the -o
wide and all namespaces, I'll basically get
13357.02 -> all the pods ever deployed, even the default
pods which get deployed when the Kubernetes
13361.14 -> cluster initiates. So basically the Kubernetes
cluster is initiated and deployed along with
13365.3 -> a few default ones: especially for your pod
network there is one pod which is hosted
13370.63 -> for your cluster, there are pods for the
control plane components themselves, and
13374.17 -> whatnot. So this is the entire list, right?
13378.979 -> So for your Calico, for your etcd, there's
one pod; for your kube-controller there's
13382.89 -> a pod; and we have various pods like this,
right, for your master and your API server
13389.17 -> and many things. So these are the default
deployments that you get. So anyway, as you
13393.76 -> can see, the default deployments are all healthy,
because it says the status is all Running,
13398.27 -> and everything is basically running
in the kube-system namespace. All right, and
13402.659 -> it's all running on my kMaster; that's my
Kubernetes master. So the next thing that I have
13407.43 -> to do is bring up the dashboard before I can
get my nodes to join. Okay, so I'll go to
13414 -> the notepad and copy the command to bring
up my dashboard. So copy and paste. Great.
13421.12 -> This is my Kubernetes dashboard; as
you can see, basically this pod has come up
13424.359 -> now. If I execute the same kubectl get
pods command, then you can see that I've got
13430.729 -> one more pod, which is deployed for my dashboard
basically. So last time this was not there,
13434.67 -> because I had not deployed my dashboard at
that time, right? I had only deployed
13438.75 -> my pod network and whatnot and the other
things, right? So I've deployed it, and it
13443.27 -> says ContainerCreating, so in probably a few more
seconds this would also be running. Anyway,
13447.79 -> in the meanwhile, what we can do is we can
work on the other things which are needed
13450.479 -> to bring up the dashboard. First of all,
enable your proxy and get it to be hosted on a
13455.58 -> web server. There's a kubectl proxy command.
Okay. So with this, your service would start
13460.81 -> to be served on this particular port number,
okay, localhost port number eight thousand
13464.83 -> and one, on my master. Okay, not from the nodes.
So if I just go to my Firefox and go
13470.09 -> to localhost:8001, then my dashboard would be
up and running over there. So basically my
13480.93 -> dashboard is being served on this particular
port number. But if I want to actually get
13484.55 -> my dashboard, which shows my deployments
and my services, then that's a different URL.
13489.22 -> Okay. So yeah, as you can see here, localhost
8001 /api/v1 and so on; this entire URL
13498.84 -> is the one which is going to lead me to my dashboard.
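A minimal sketch of this step (the exact dashboard path varies with the dashboard version; the one below is the common older form for the kube-system deployment):

    kubectl proxy    # serves the cluster API on localhost:8001
    # then open, in a browser on the master:
    # http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/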
But at this point of time I cannot log into
13502.92 -> my dashboard, because it's prompting me for
a token, and I do not have a token, because
13506.649 -> I have not done any cluster role binding and
I have not mentioned that I am the admin of
13509.79 -> this particular dashboard. So to enable all
those things, there are a few more commands
13513.61 -> that we have to execute, starting with creating
a service account for your dashboard. So this
13518.199 -> is the command to create your service account.
So go back to the terminal, and probably in a
13522.29 -> new terminal window, execute this command. Okay.
So with this, you're creating a service account
13527.699 -> for your dashboard, and after that you have
to do the cluster role binding for your newly
13533 -> created service account. Okay. So the dashboard
account has been created in the default namespace, as
13538.14 -> per this. Okay, and here I'm saying that my
dashboard is going to be for admin, and I'm
13541.85 -> doing the cluster role binding. Okay, and
now that this is created, I can straight away
13547.39 -> get the token, because if you remember, it's
asking me for a token to log in, right? So
13551.62 -> even though I am the admin now, I will not
be able to log in without the token. So to generate
13556.449 -> the token, I have to again run this command:
kubectl get secret, for the dashboard's key. Okay.
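The three commands in this sequence, as a minimal sketch (the service account name "dashboard" is the illustrative one; on the older Kubernetes versions shown here, a token secret is created automatically for the account):

    kubectl create serviceaccount dashboard -n default
    kubectl create clusterrolebinding dashboard-admin -n default \
        --clusterrole=cluster-admin --serviceaccount=default:dashboard
    kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") \
        -o jsonpath="{.data.token}" | base64 --decode    # prints the login token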
13562.22 -> So I'm going to copy this and paste it here. So this
is the token, or this is the key, that basically
13569.28 -> needs to be used. So let me copy this entire
token and paste it over here in the login screen. So let me just
13583.06 -> sign in, and yeah, now you can see that my
Kubernetes cluster has been set up, and I
13588.791 -> can see the same thing from the dashboard
over here. So basically, by default the Kubernetes
13592.58 -> service is deployed, right? So this is what
you can see, but I've just brought the dashboard up
13597.699 -> now, and the cluster is not ready until my
nodes join in. So let's go to the final part
13602.12 -> of this demonstration, wherein I'll ask my
slaves to join the cluster. So you remember
13607.39 -> I copied the join command which was generated
at my master's end into my notepad. So I'm going
13611.359 -> to copy that and execute it at the slave's
end to join the cluster. Okay. So let me first
13615.66 -> of all go to my notepad, and yeah, this is
the join command which I had copied. So
13621.72 -> I'm going to copy this, and now I'm going to
go to my node. Yep. So let me just paste
13627.909 -> this and let's see what happens. Let me just
run this command as sudo. And it's perfect:
13635.25 -> I've got the message that I have successfully
established a connection with the API server
13638.8 -> on this particular IP address and port number,
13642.739 -> right? So this means that my node has joined the cluster.
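The join step, as a minimal sketch (the token and hash are the ones printed by kubeadm init and are placeholders here):

    # on the slave, as root:
    sudo kubeadm join 192.168.56.101:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>
    # back on the master, verify:
    kubectl get nodes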
We can verify that from the dashboard
13648.31 -> itself. So if I go back to my dashboard, which
is hosted at my master's end, I have
13654.09 -> an option here called Nodes. If I click on this,
then I will get the details about my nodes
13658.33 -> over here. So earlier I only had the kMaster,
but now I have both the kMaster and the
13663.1 -> kNode; give it a few more seconds until my
node comes up. I can also verify the same
13668.17 -> from my terminal. So if I go to my terminal
here and I run the command kubectl get
13674.89 -> nodes, then it will give me the details about
the nodes which are there in my cluster. So
13679.04 -> kMaster is one that is already there in the
cluster, but kNode, however, will take some
13684.38 -> more time to join my cluster. Alright, so
that's it, guys. So that is about my deployment,
13689.01 -> and that's how you deploy a Kubernetes cluster.
So from here on, you can do whatever deployment
13692.8 -> you want. Whatever you want to deploy, you
can deploy it easily and very effectively, either
13697.24 -> from the dashboard or from the CLI, and there
are various other video tutorials of ours
13701.7 -> which you can refer to, to see how a deployment
is made on Kubernetes. So I would request
13705.739 -> you to go to the other videos and see how a
deployment is made, and I would like to conclude
13715.319 -> this video on that note. If you're a DevOps
guy, then you would have definitely heard
13720.109 -> of Kubernetes, but I don't think the DevOps
world knows enough of what exactly Kubernetes
13725.06 -> is and where it's used. And that's why we
at Edureka have come up with this video on what
13730.34 -> is Kubernetes. My name is Walden and I'll
be representing Edureka in this video.
13734.899 -> And as you can see from the screen, these
will be the topics that we'll be covering
13738.88 -> in today's session. I'll first start off by talking
about what is the need for Kubernetes, and
13744.14 -> after that I will talk about what exactly
it is and what it's not, and I will do this
13748.52 -> because there are a lot of myths surrounding
Kubernetes and there's a lot of confusion;
13752.72 -> people have misunderstood Kubernetes to be
a containerization platform. Well, it's not,
13757.87 -> okay? So I will explain what exactly it is
over here. And then after that I will talk
13761.811 -> about how exactly Kubernetes works. I will
talk about the architecture and all the related
13766.6 -> things. And after that I will give you a use
case. I will tell you how Kubernetes was
13771.75 -> used at Pokemon Go and how it helped Pokemon
Go become one of the best games of the year
13776.399 -> 2017. And finally, at the end of the video,
you will get a demonstration of how to do a
13781.96 -> deployment with Kubernetes. Okay. So I think
the agenda is pretty clear. I think we
13786.699 -> can get started with our first topic then.
Now, the first topic is all about why do we need
13791.72 -> Kubernetes. Okay, now to understand why we
need Kubernetes, let's understand what
13795.729 -> are the benefits and drawbacks of containers.
Now, first of all, containers are good. They
13801.05 -> are amazingly good, right? Any container, for
that matter of fact, a Linux container or a
13805.68 -> Docker container or even a rkt container,
right? They all do one thing: they package
13810.77 -> your application, isolated from everything
else, right? They isolate the application
13814.989 -> from the host mainly, and this makes the container
fast, reliable, efficient, lightweight and
13821.58 -> scalable. Now hold that thought: yes, containers
are scalable, but then there's a problem that
13827.29 -> comes with that, and this is what is the source
of the need for Kubernetes. Even though containers
13832.939 -> are scalable, they are not very easily scalable.
Okay, so let's look at it this way. You have
13837.64 -> one container; you might want to probably scale
it up to two containers or three containers.
13841.46 -> Well, it's possible, right? It's going to take
a little bit of manual effort, but yeah, you
13845.649 -> can scale it up; it won't be a problem.
But then look at a real-world scenario, where
13850.89 -> you might want to scale up to like 50 to 100 containers.
Then in that case, what happens? I mean, after
13856.04 -> scaling up, what would you do? You have to manage
those containers, right? We have to make sure
13859.74 -> that they are all working, they are all active,
and they're all talking to each other, because
13863.939 -> if they're not talking to each other, then
there's no point in scaling up itself, because
13868.14 -> in that case the servers would not be able
to handle the loads if they're not able to
13872.149 -> talk to each other, correct? So it's really
important that they are manageable when they
13877.53 -> are scaled up. And now let's talk about this
point: is it really tough to scale up containers?
13882.46 -> Well, the answer for that might be no. It
might not be tough; it's pretty easy to scale
13886.63 -> up containers, but the problem is what happens
after that. Okay, once you scale up containers,
13891.36 -> you will have a lot of problems. Like I told
you, the containers first of all have to
13895.09 -> communicate with each other, because they're so
many in number, and they work together to basically
13900.779 -> host the service, right, the application; and
if they are not working together and talking
13905.3 -> together, then the application is not hosted,
and scaling up is a waste. So that's the number
13910.3 -> one reason. And the next is that the containers
have to be deployed appropriately, and they
13914.449 -> have to also be managed. They have to be deployed
appropriately because you cannot have the
13919.189 -> containers deployed in random places;
you have to deploy them in the right places.
13923.689 -> You cannot have one container in one particular
cloud and the other one somewhere else. So
13927.04 -> that would have a lot of complications. Well,
of course it's possible, but yeah, it would
13930.57 -> lead to a lot of complications internally;
you want to avoid all that. So you have to
13934.979 -> have one place where everything is deployed
appropriately, and you have to make sure that
13939.26 -> the IP addresses are set everywhere and the
port numbers are open for the containers to
13942.81 -> talk to each other, and all these things, right?
So these are the two other points. The next
13947.14 -> point, or the next problem, with scaling up
is that auto-scaling is never a functionality
13952.56 -> over here. Okay, and this is one of the things
which is the biggest benefit with Kubernetes.
13957.13 -> The problem technically is there is no auto-
scaling functionality. Okay, there's no concept
13961.04 -> of that at all. And you may ask at this point
of time, why do we even need auto-scaling?
13965.3 -> Okay, so let me explain the need for auto-
scaling with an example. So let's say that
13970.71 -> you are an e-commerce portal, okay, something
like an Amazon or a Flipkart, and let's say
13975.699 -> that you have a decent amount of traffic on
the weekdays, but on the weekends you have
13979.92 -> a spike in traffic. Probably you have like
4x or 5x the usual traffic, and in that case
13985.39 -> what happens is maybe your servers are good
enough to handle the requests coming in on
13990.21 -> weekdays, right? But the requests that come
on the weekends, right, from the increased traffic,
13995.79 -> those cannot be serviced by your servers, right?
Maybe it's too much for your servers to handle
14000.569 -> the load, and maybe in the short term it's
fine; maybe once or twice you can survive. But
14004.46 -> there will definitely come a time when your
server will start crashing, because it cannot
14009.05 -> handle that many requests per second or per minute.
And if you want to really avoid this problem,
14014.37 -> what you do is you have to scale up. And now, would
you really keep scaling up every weekend and
14019.44 -> scaling down after the weekend, right? I mean,
technically, is it possible? Would you be buying
14023.529 -> your servers and then setting them up, and every
Friday would you again be buying new servers and
14027.659 -> setting up your infrastructure? And then the
moment your weekday starts, would you just
14032.159 -> destroy all your servers, whatever you built?
Is that what you would be doing? No, right?
14036.189 -> Obviously, that's a pretty tedious task. So
that's where something like Kubernetes comes
14040.46 -> in, and what Kubernetes does is it keeps analyzing
your traffic and the load that's being used
14045.89 -> by the container, and as and when the traffic
is reaching the threshold, auto-scaling
14050.199 -> happens; and when the servers no longer have that
traffic and it needs no more such servers
14054.069 -> for handling requests, then it starts killing
off the containers on its own. There is no
14058.56 -> manual intervention needed at all. So that's
one benefit with Kubernetes and one traditional
14063.65 -> problem that we have with scaling up of containers.
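As a small illustration of that idea, Kubernetes can scale a deployment automatically based on CPU load with a single command (the deployment name "webapp" here is hypothetical):

    kubectl autoscale deployment webapp --min=2 --max=10 --cpu-percent=80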
Okay, and then yeah, the one last problem
14067.84 -> that we have is the distribution of traffic;
that is still challenging without something
14071.76 -> that can manage your containers. I mean, you
have so many containers, but how will the
14075.5 -> traffic be distributed? Load balancing: how
does that happen? You just have containers,
14078.76 -> right? You have 50 containers; how does the
load balancing happen? So all these are questions
14083.51 -> we should really consider, because containerization
is all good and cool. It was much better than
14088.489 -> VMs. Yes, containerization was basically
a concept which was sold on the basis of
14092.979 -> scaling up, right? We said that VMs cannot
be scaled up easily, so we said use containers,
14097.37 -> and with containers you can easily scale up.
So that was the whole reason; we basically
14101.979 -> sold containers with the tagline of scaling
up. But in today's world, our demand is ever
14107.04 -> more, such that even the regular containers cannot
be enough; the scaling up is so much and so
14112.681 -> detailed that we need something else to manage
your containers, correct? Do we agree that
14117.08 -> we need something, right? And that is exactly
what Kubernetes is. So Kubernetes is a container
14122.77 -> management tool. All right. So this is open
source, and this basically automates your container
14128.38 -> deployment, your container scaling and descaling,
and your container load balancing. The benefit
14133.06 -> with this is that it works brilliantly with
all the cloud vendors, with all the big cloud
14137.25 -> vendors or your hybrid cloud vendors, and it
also works on-premises. So that is one big
14142.37 -> selling point of Kubernetes, right? And if
I should give more information about Kubernetes,
14146.91 -> then let me tell you that this was a Google-
developed product. Okay, it's basically a
14151.58 -> brainchild of Google, and that pretty much
is the end of the story for every other competitor
14155.79 -> out there, because the community that Google
brings in along with it is going to be huge,
14159.899 -> or basically the head start that Kubernetes
would get because of being a Google brain-
14164.46 -> child is humongous. And that is one of the
reasons why Kubernetes is one of the best
14169.6 -> container management tools in the market, period.
And given that Kubernetes is a Google product,
14174.61 -> they have written the whole product in the Go
language. And of course, now Google has contributed
14179.899 -> this whole Kubernetes project to the CNCF,
which is nothing but the Cloud Native Computing
14184.11 -> Foundation, or simply Cloud Native Foundation,
right? You can call them either of those,
14187.899 -> and they have donated their open-source project
to them. And if I have to just summarize what
14192.54 -> Kubernetes is, you can just think of it like
this: it can group a number of containers
14198.04 -> into one logical unit for managing and deploying
an application or a particular service. So
14203.399 -> that's a very simple definition of what Kubernetes
is. It can be easily used for deploying your
14208.6 -> application. Of course, it's going to be Docker
containers which you will be deploying, but
14212.699 -> since you will be using a lot of Docker containers
as part of your production, you will also
14216.87 -> have to use Kubernetes, which will be managing
your multiple Docker containers, right? So
14221.96 -> this is the role it plays in terms of deployment,
and scaling up and scaling down is primarily the
14226.97 -> game of Kubernetes; from your existing architecture,
it can scale up to any number you want, it
14231.42 -> can scale down anytime, and the best part is
the scaling can also be set to be automatic,
14236.13 -> like I just explained some time back, right?
You can make Kubernetes
14239.97 -> analyze the traffic and then figure out if
scaling up needs to be done or scaling
14244.26 -> down can be done, and all those things. And
of course, the most important part: load balancing,
14248.67 -> right? I mean, what good is your container
or group of containers if load balancing cannot
14253.68 -> be enabled, right? So Kubernetes does that
also, and these are some of the points based
14258.479 -> on which Kubernetes is built. So I'm pretty
sure you have got a good understanding of
14262.26 -> what Kubernetes is by now, right, a brief idea
at least. So moving forward, let's look at
14267.78 -> the features of Kubernetes. Okay. So we've
seen what exactly Kubernetes is and how it
14272.021 -> uses Docker containers, or other
containers in general. But now let's see
14276.399 -> some of the selling points of Kubernetes, or
why it's a must for you. Let's start off with
14281.621 -> automatic bin packing. When we say automatic
bin packing, it's basically that Kubernetes
14286.239 -> packages your application, and it automatically
places containers based on their requirements
14292.05 -> and the resources that are available. So that's
the number one advantage. The second thing:
14296.739 -> service discovery and load balancing. There
is no need to worry; I mean, if
14300.81 -> you're going to use Kubernetes, then
you don't have to worry about networking and
14305.46 -> communication, because Kubernetes will just
automatically assign containers their own
14310.199 -> IP addresses, and probably a single DNS name
for a set of containers which are performing
14314.149 -> a logical operation. And of course, there
will be load balancing across them, so you
14317.55 -> don't have to worry about all these things.
So that's why we say that there is service
14322.86 -> discovery and load balancing with Kubernetes.
And the third feature of Kubernetes is
14327.271 -> storage orchestration. With Kubernetes, you
can automatically mount the storage system
14331.46 -> of your choice. You can choose that to be
either local storage, or maybe on a public
14337.149 -> cloud provider such as GCP or AWS, or even
a network storage system such as NFS or other
14342.37 -> things, right? So that was feature number
three. Now, feature number four: self-healing.
14347.87 -> Now, this is one of my favorite parts of Kubernetes,
actually not just Kubernetes, even with respect
14353.35 -> to Docker Swarm; I really like this part of self-healing.
What self-healing is all about is that whenever
14357.72 -> Kubernetes realizes that one of your containers
has failed, then it will restart that container
14361.83 -> on its own, right, and create a new container
in place of this crashed one. And in case your
14366.27 -> node itself fails, then what Kubernetes
would do in that case is: whatever containers
14370.859 -> were running in that failed node, those containers
would be started in another node, right? Of
14374.76 -> course, you would have to have more nodes in that
cluster; if there's another node in the cluster,
14378.93 -> definitely room would be made for the failed
containers to start their services. So that happens.
14385.229 -> So the next feature is batch execution. So
when we say batch execution, it's that along
14389.56 -> with services, Kubernetes can also manage your
batch and CI workloads, which is more of
14394.939 -> a DevOps role, right? So as part of your CI
workloads, Kubernetes can replace your containers
14399.67 -> which fail, and it can restart and restore
the original state; that is what is possible
14404.909 -> with Kubernetes. And secret and configuration
management: that is another big feature with
14409.569 -> Kubernetes, and that is the concept where
you can deploy and update your secrets and
14413.939 -> application configuration without having to
rebuild your entire image, and without having
14418.479 -> to expose your secrets in your stack configuration
or anything, right? So if you want to deploy
14423.11 -> and update your secrets only, that can be done.
It's not available with all the other tools,
14427.399 -> right? So Kubernetes is one that does that;
you don't have to restart everything and rebuild
14432.12 -> your entire container. That's one benefit.
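As a tiny illustration, a secret can be created and updated independently of any image (the secret name and value here are hypothetical):

    kubectl create secret generic db-pass --from-literal=password=s3cr3t
    # pods consume it as an env var or a mounted volume; updating it needs no image rebuild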
And then we have horizontal scaling, which of
14436.17 -> course you would know of already: you can
scale your applications up and down easily
14439.68 -> with a simple command. The simple command
can be run on the CLI, or you can easily do
14445.949 -> it on your GUI, which is your dashboard, your
Kubernetes dashboard. Or auto-scaling is
14451.199 -> possible, right? Based on the CPU usage, your
containers would automatically be scaled up
14456.21 -> or scaled down. So that's one more feature.
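That simple command looks like this (the deployment name "webapp" is again hypothetical):

    kubectl scale deployment webapp --replicas=10   # manual horizontal scaling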
And the final feature that we have is automatic
14461.069 -> rollbacks and rollouts. Now, what Kubernetes
does is, whenever there's an update to your
14465.95 -> application which you want to release, Kubernetes
progressively rolls out these changes and
14469.6 -> updates to the application or its configurations,
by this ensuring that one instance after the
14474.779 -> other is sent these updates, and it makes sure
that not all instances are updated at the
14478.67 -> same time, thus ensuring that yes, there is
high availability. And even if something goes
14482.67 -> wrong, then Kubernetes will roll back
that change for you. So all these things are
14487.659 -> enabled, and these are the features of Kubernetes.
So if you're really considering a solution
14492.25 -> for managing your containers,
then Kubernetes should be your solution;
14496.739 -> that should be your answer. So that is
about the various features of Kubernetes. Now,
14502.37 -> moving forward here, let's talk about a few
of the myths surrounding Kubernetes, and we
14506.29 -> are doing this because a lot of people have
confusion with respect to what exactly it
14510.489 -> is. So people have this misunderstanding that
Kubernetes is like Docker, which is a containerization
14514.859 -> platform, right? That's what people think,
and that is not true. So this kind of confusion
14520.78 -> is what I intend to solve in the upcoming
slides. I will now talk about what exactly
14526.21 -> Kubernetes is and what Kubernetes is not.
Let me start with what it's not. Now, the first
14531.15 -> thing is that Kubernetes is not to be compared
with Docker, because that's not the right set
14535.449 -> of parameters to compare them against.
Docker is a containerization platform, and
14541.21 -> Kubernetes is a container management platform,
which means that once you have containerized
14545.3 -> your application with the help of Docker containers
or Linux containers, and when you are scaling
14549.68 -> up these containers to a big number like 50
or a hundred, that's where Kubernetes would
14553.1 -> come in. When you have multiple containers
which need to be managed, that's where Kubernetes
14557.52 -> can come in and effectively do it. You can
specify the configurations, and Kubernetes
14562.069 -> would make sure that at all times these conditions
are satisfied. So that's what Kubernetes is:
14566.39 -> you can tell in your configurations that at
all times I want these many containers running,
14570.83 -> I want these many pods running, and so many
other needs, right? You can specify much more
14576.04 -> than that, and whatever you do, at all times
your cluster master, or your Kubernetes master,
14581.01 -> would ensure that this condition is satisfied.
So that is what exactly Kubernetes is. But
14585.54 -> that does not mean that Docker does not solve
this purpose. So Docker also has its own
14590.05 -> plug-in; I wouldn't call it a plug-in, it's
actually another tool of theirs. So there's
14594.01 -> something called Docker Swarm, and Docker
Swarm does a similar thing: it does container
14599.31 -> management, like mass container management.
So, similar to what Kubernetes does, when you
14603.55 -> have like 50 to 100 containers, Docker Swarm
would help you in managing those containers.
14607.699 -> But if you look at who is prevailing in the
market today, I would say it's Kubernetes,
14612.311 -> because Kubernetes came in first, and the
moment they came in, they were backed by Google;
14616.859 -> they had this huge community which they just
swept along with them. So they have hardly
14621.61 -> left any market for Docker and for
Docker Swarm. But that does not mean that they
14625.12 -> are better than Docker, because they are, at
the end of the day, using Docker. So Kubernetes
14629.46 -> is only as good as what Docker is; if there
are no Docker containers, then there's no
14633.46 -> need for Kubernetes in the first place. So
Kubernetes and Docker, they go hand in hand.
14637.74 -> Okay. So that is the point you have to note,
and I think that would also explain the point
14642.2 -> that Kubernetes is not for containerizing
applications, right? And the last thing is
14646.939 -> that Kubernetes is not for applications with
a simple architecture. Okay, if
14652.17 -> your application's architecture is pretty
complex, then you can probably use Kubernetes
14656.939 -> to uncomplicate that architecture. Okay, but
if you're having a very simple one in the
14660.79 -> first place, then using Kubernetes would not
serve you any good, and it could probably make
14664.529 -> it a little more complicated than what it
already is, right? So this is what Kubernetes
14671.199 -> is not. Now, speaking of what exactly Kubernetes
is: the first point is Kubernetes is robust
14676.779 -> and reliable. Now, when I say robust and reliable,
I'm referring to the fact that the cluster
14681.55 -> that is created, the Kubernetes cluster, right,
is very strong; it's very rigid, and it's
14686.16 -> not going to be broken easily. The reason
being the configurations which are specified,
14690.55 -> right: at any point of time, if any container
fails, a new container would come up, right,
14694.92 -> or that whole container would be restarted;
one of these things will definitely happen.
14698.951 -> If your node fails, then the containers which
are running in that particular node would
14703 -> start running in a different node, right?
So that's why it's reliable and it's strong,
14707.56 -> because at any point of time your cluster
would be at full force. And at any time, if
14711.99 -> that's not happening, then you would be able
to see that something's wrong, and you have
14715.72 -> to troubleshoot your node, and then everything
would be fine. So Kubernetes would do everything
14719.979 -> possible, and it pretty much does everything
possible, to let us know that the problem is
14725.3 -> not at its end and that it's giving the exact result
that we want. That's what Kubernetes is
14730.909 -> doing. And the next thing is that Kubernetes
actually is the best solution for scaling
14735.1 -> up containers, at least in today's world,
because the two biggest players in this market
14740.03 -> are Docker Swarm and Kubernetes, and Docker
Swarm is not really the better one here, because
14745.28 -> they came in a little late. Even though Docker
was there from the beginning and Kubernetes came
14750.06 -> after that, Docker Swarm, which we are talking
about, came in somewhere around 2016 or 2017,
14755.311 -> right? But Kubernetes came somewhere around
2015, and they had a very good head start.
14760.569 -> They were the first ones to do this, and their
backing by Google is just icing on the cake,
14764.91 -> because whatever problem you have with respect
to containers, if you just go up and
14769.069 -> put your error there, then you will have a
lot of people on github.com, in GitHub queries,
14774.409 -> and then on Stack Overflow, resolving
those errors, right? So that's the kind of
14778.149 -> market they have; so it's backed by a really
huge community. That's what Kubernetes is,
14784.729 -> and to conclude this slide, Kubernetes is a
container orchestration platform and nothing
14789.38 -> else. All right. So I think these two slides
would have given you more information and
14794.14 -> more clarity with respect to what Kubernetes
is, and how different it is from Docker and
14799.949 -> Docker Swarm, right? So now, moving on, let's
go to the next topic, where we will compare
14804.939 -> Kubernetes with Docker Swarm, and we are comparing
with Docker Swarm because we cannot compare
14810.56 -> Docker and Kubernetes head on. Okay, so that
is what you have to understand. If you are
14815.09 -> this person over here, if you are Sam, who is
wondering which is the right comparison, then
14818.67 -> let me reassure you that the comparison can
only be between Kubernetes and Docker Swarm.
14824.109 -> Okay. So let's go ahead and see what the difference
is, actually. Let's start off with your installation
14828.86 -> and configuration. Okay. So that's the first
parameter we'll use to compare these two, and
14834.17 -> over here Docker Swarm comes out on top, because
Docker's is a little easier: you have around two
14838.13 -> or three commands which will help you have
your cluster up and running, and that includes
14842.33 -> the node joining the cluster, right? But with
Kubernetes, it's way more complicated than
14846.069 -> Docker Swarm, right? So you have like close
to ten to eleven commands which you have
14849.81 -> to execute, and then there's a certain pattern
you have to follow to ensure that there are
14854.92 -> no errors, right? Yes, and that's why it's time-
consuming and that's why it's complicated.
14860.8 -> But once your cluster is ready, at that time Kubernetes
is the winner, because the flexibility, the
14865.12 -> rigidness and the robustness that Kubernetes
gives you cannot be offered by Docker Swarm.
14869.83 -> Yes, Docker Swarm is faster, but not as good
as Kubernetes when it comes to your actual
14875.95 -> working. And speaking of the GUI: once you
have set up your cluster, you can use a
14880.87 -> GUI with Kubernetes for deploying your applications,
right? So you don't need to always use your
14884.85 -> CLI. You have a dashboard which comes up, and
the dashboard, if you give it admin privileges,
14889.13 -> then you can use it: you can deploy your application
from the dashboard itself, everything with just
14893.689 -> drag-and-drop, click functionality, right? With
just click functionality, you can do that.
14897.979 -> The same is not the case with Docker Swarm.
You have no GUI in Docker Swarm. Okay, so Docker
14904.569 -> Swarm is not the winner over here; it's Kubernetes.
And now moving to the third parameter: scalability.
14910.72 -> So people again have a bad misconception that
Kubernetes is better, that it is the solution for
14915.729 -> scaling up, and that it is better and faster than
Docker Swarm. Well, it could be better, but yes,
14919.99 -> it's not faster than Docker Swarm, even if
you want to scale up, right? There is a report
14924.67 -> which I recently read, that the scaling up
in Docker Swarm is almost five times faster
14929.35 -> than the scaling up with Kubernetes. So that
is the difference. But yes, once your scaling
14934.64 -> up is done, after that your cluster strength
with Kubernetes is going to be much stronger
14938.58 -> than your Docker Swarm cluster's strength.
That's again because of the various configurations
14942.42 -> that would have been specified by then. That
is the thing. Now moving on, the next parameter
14947.949 -> we have is: load balancing requires manual
service configuration. Okay, this is in the case
14953.68 -> of Kubernetes, and yes, this could be a shortfall.
But with Docker Swarm there are inbuilt load balancing
14957.04 -> techniques which you don't need to worry
about. Okay, but even the load balancing which
14961.76 -> requires manual effort in the case of Kubernetes
is not too much: there are times when you have
14966.069 -> to manually specify what your configurations are,
and you have to make a few changes, but yes, it's
14970.51 -> not as much as what you're thinking. And speaking
of updates and rollbacks: what Kubernetes
14975.069 -> does is, it does the scheduling to maintain
the services while updating. Okay. Yeah, that's
14980.05 -> very similar to how it works with Docker Swarm,
wherein you have like progressive updates,
14984.43 -> and service health monitoring happens throughout
the update. But the difference is, when something
14989.119 -> goes wrong, Kubernetes goes that extra mile
of doing a rollback and putting you back
14993.11 -> to the previous state, right before the update
was launched. So that is the thing with Kubernetes.
14997.899 -> And the next parameter we are comparing these
two upon is data volumes. So data volumes
15004.149 -> in Kubernetes can be shared with other
containers, but only within the same pod;
15009.29 -> so we have a concept called pods in Kubernetes.
Okay, now a pod is nothing but something which
15014.529 -> groups related containers, right, a logical
grouping of containers together. So that is
15019.05 -> a pod, and whichever containers are there inside
this pod, they can have a shared volume, okay,
15024.28 -> like a storage volume. But in the case of Docker
Swarm, you don't have the concept of a pod at
15028.3 -> all, so the shared volumes can be between
any other containers; there is no restriction
15032.63 -> with respect to that in Docker Swarm. And then
finally, we have the last parameter: logging and monitoring.
15038.18 -> So when it comes to logging and monitoring,
Kubernetes provides inbuilt tools for this
15042.369 -> purpose. Okay, but with Docker Swarm you have
to install third-party tools if you want to
15046.63 -> do logging and monitoring. So that is the drawback
with Docker Swarm, because logging is really
15050.779 -> important: you will know what the
problem is, you'll know which container failed,
15055.12 -> what happened there, what exactly the error is,
right? So logs would help give you that answer.
15060.779 -> And monitoring is important because you have
to always keep a check on your nodes, right?
15065.74 -> So as the master of the cluster, it's very
important that there's monitoring, and that's
15069.72 -> where Kubernetes has a slight advantage
over Docker Swarm. Okay, but before I finish
15075.46 -> this topic, there is this one slide I want
to show you, which is about the statistics.
15081.17 -> So this stat, I picked it up from Platform9,
which is nothing but a company that writes
15085.75 -> about tech. Okay, and what they've said is
that of the number of news articles that were
15090.899 -> produced, right, in that one particular year,
90% of those covered Kubernetes, compared
15095.62 -> to the 10 percent on Docker Swarm. Amazing,
right? That's a big difference. That means
15100.64 -> for every one blog written, or for every one
article written, on Docker Swarm, there are
15104.92 -> nine different articles written on Kubernetes.
And similarly for web searches: for web searches,
15109.54 -> Kubernetes is at 90 percent compared to Docker
Swarm's 10%. And publications, GitHub stars,
15114.779 -> the number of commits on GitHub: all these
things are clearly won by Kubernetes; it is everywhere.
15118.85 -> So Kubernetes is the one that's dominating
this market, and that's pretty visible from
15125.029 -> this stat also, right? So I think that pretty
much brings an end to this particular topic.
now moving forward. Let me show you a use
15130.04 -> case. Let me talk about how this game this
amazing game called Pokemon go was powered
15135.29 -> with the help of communities. I'm pretty sure
you all know what it is, right? You guys know
15139.649 -> Pokemon go. It's the very famous game and
it was actually the best game of the year
15143.72 -> 2017 and the main reason for that being the
best is because of kubernetes and let me tell
15148.159 -> you why but before I tell you why there are
few things, which I want to just talk about
15151.869 -> I'll give you an overview of Pokemon goers
and let me talk about a few key Stacks. So
15157.78 -> Pokemon go is an augmented reality game developed
by Niantic for your Android and for iOS devices.
15164.939 -> Okay, and those key stats read that they've
had like 500 million plus downloads overall
15171.35 -> and 20 million plus daily active users. Now
that is massive daily. If you're having like
15177.189 -> 20 million users plus then you have achieved
an amazing thing. So that's how good this
15182.119 -> game is. Okay, and then this game was actually
initially launched only in North America Australia
15187.4 -> New Zealand, and I'm aware of this fact because
I'm based out of India and I did not get access
15193.22 -> to this game because the moment news got out
that we have a game like this. I started downloading
15197.33 -> it, but I couldn't really find any link or
I couldn't download it at all. So they launched
15201.92 -> it only in these countries, but what they
faced right in spite of just reading it in
15205.989 -> these three countries. They had like a major
problem and that problem is what I'm going
15210.12 -> to talk about in the next slide, right? So
my use case is based on that very fact that
15214.23 -> In spite of launching it only in these three
countries or in probably North America and
15218.979 -> then in Australia New Zealand, they could
have had a meltdown but rather with the help
15223.909 -> of Humanity's they used that same problem
as the basis for their raw success. So that's
15228.939 -> what happened. Now let that be a suspense
and before I get to that let me just finish
15232.88 -> this slide one amazing thing about Pokemon
go is that it has inspired users to walk over
15237.619 -> 5.4 billion miles an hour. Okay. Yes do the
math five point four billion miles in one
15243.25 -> year. That's again a very big number and it
says that it has surpassed engineering Expectations
15248.779 -> by 50 times. Now this last sign is not with
respect to the Pokemon Go the game but it
15253.77 -> is with respect to the backend and the use
of Kubernetes to achieve whatever was needed.
15258.93 -> Okay, so I think I've spent enough time over
here. Let me go ahead and talk about the most
15263.28 -> interesting part and tell you how the back
in architecture of Pokemon go was okay. So
15268.1 -> you have a Pokรฉmon go container, which had
two primary components one is your Google
15272.55 -> big table, which is your main. Database where
everything is going in and coming out and
15276.979 -> then you have your programs which is a run
on your java Cloud, right? So these two things
15281.05 -> are what is running your game mapreduce and
Cloud dataflow wear something it was used
15286.119 -> for scaling up. Okay, so it's not just the
container scaling up but it's with respect
15291.02 -> to the application how the program would react
when there are these increased number of users
15296.06 -> and how to handle increased number of requests.
So that's where the mapper uses. The Paradigm
15301.16 -> comes in right the mapping and then reducing
that whole concept. So this was their one
15305.939 -> deployment. Okay, and when we say in defy,
it means that they had this over capacities
15311.04 -> which could go up til five times. Okay. So
technically they could only serve X number
15315.199 -> of requests but in case of failure conditions
or heavy traffic load conditions, the max
15320.44 -> the server could handle was 5x because after
5x the server would start crashing that was
15326.06 -> their prediction. Okay, and what actually
happened at Pokemon go on releasing in just
15331.14 -> those three different geographies. Is that
the Deployed it the usage became so much that
15336.729 -> it was not XM R of X, which is technically
they're a failure limit and it is not even
15342.31 -> 5 x which is the server's capability but the
traffic that they got was up to 50 times 50
15347.91 -> times more than what they expected. So, you
know that when your traffic is so much then
15352.18 -> you're going to be brought down to your knees.
That's a definite and that's a given right.
15356.369 -> This is like a success story and this is too
good to be true kind of a story and in that
15361.11 -> kind of a scenario if the request start coming
in are so much that if they reach 50 x then
15366.189 -> it's gone, right the application is gone for
a toss. So that's where kubernetes comes in
15370.04 -> and they overcome all the challenges. How
did you overcome the challenges because Cuban
15375.13 -> areas can do both vertical scaling and horizontal
scaling at ease and that is the biggest problem
15380.55 -> right? Because any application and any other
company can easily do horizontal scaling where
15385.38 -> you just spin up more containers and more
instances and you set up the environment but
15389.881 -> vertical scaling is something which is very
specific and this is even more challenging.
15394.649 -> Now it's more specific to this particular
game because the virtual reality would keep
15399.159 -> changing whenever a person moves around or
walks around somewhere in his apartments or
15403.56 -> somewhere on the road. Then the ram right
that would have to increase the memory the
15408.189 -> in memory and the storage memory all this
would increase so in real time your servers
15413.02 -> capacity also has to increase vertically.
So once they have deployed it, it's not just
15418.21 -> about horizontal scalability anymore. It's
not about satisfying more requests. It's about
15422.439 -> satisfying that same request with respect
to having more Hardware space more RAM space
15427.25 -> and all these things right that one particular
server should have more performance abilities.
15432.03 -> That's what it's about and communities solve
both of these problems effortlessly and neon
15437.239 -> tape were also surprised that kubernetes could
do it and that was because of the help that
15441.68 -> they got from Google. I read an article recently
that they had a neon thick slab. He met with
15446.189 -> some of the top Executives in Google and then
gcp right and then they figure out how things
15451.949 -> are supposed to go and they of course Met
with the Hedgehog communities and they figure
15456.229 -> out a way to actually scale it up to 50 time
in a very short time. So that is the challenge
15461.71 -> that they represented and thanks to communities.
They could handle three times the traffic
15465.81 -> that they expected which is like a very one
of story and which is very very surprising
15470.04 -> that you know, something like this would happen.
So that is about the use case and that pretty
15476.17 -> much brings an end to this topic of how Pokemon
go used communities to achieve something because
15482.149 -> in today's world Pokemon go is a really revered
game because of what it could write it basically
15487.89 -> beat all the stereotypes of a game and whatever
anybody could have anything negative against
15493.12 -> the game, right? So they could say that these
mobile games and video games make you lazy.
15496.72 -> They make you just sit in one place and all
these things. Right and Pokemon go was something
15501.561 -> which was different it actually made people
walk around and it made people exercise and
15506.96 -> that goes on to show how popular this game
became if humanity is lies at the heart of
15511.85 -> something which became so popular and something
Now became so big then you should imagine
15516.04 -> how big the humanities or how beautiful communities
is, right? So that is about this topic now
15522.97 -> moving forward. Let me just quickly talk about
the architecture of communities. Okay. So
15527.47 -> the communities architecture is very simple.
We have the cube Master which controls a pretty
15532.09 -> much everything. We should note that it is
not a Docker swarm where your Cube Master
15537.17 -> will also have containers running. Okay, so
they won't be containers over here. So all
15541.77 -> the containers will be running all the services
which will be running will be only on your
15545.58 -> nodes. It's not going to be on your master
and you would have to first of all create
15549.93 -> your rock Master. That's the first step in
creating your cluster and then you would have
15554.01 -> to get your notes to join your cluster. Okay.
So bead your pods or beat your containers
15558.859 -> everything would be running on your nodes
and your master would only be scheduling or
15563.8 -> replicating these containers across all these
nodes and making sure that your configurations
15567.619 -> are satisfied, right? Whatever you specify
in the beginning and the way you access your
15571.93 -> Cube Master is why are two ways You can either
use it via the UI or where the CLI. So the
15577.48 -> CLI is the default way and this is the main
way technically because if you want to start
15582.35 -> setting up your cluster you use the CLI, you
set up your cluster and from here, you can
15586.23 -> enable the dashboard and when you enable the
dashboard then you can probably get the GUI
15591.189 -> and then you can start using your communities
and start deploying by just with the help
15596.45 -> of the dashboard right my just the click functionality.
You can deploy an application which you want
15601.689 -> rather than having to write. I am L file or
feed commands one after the other from the
15606.24 -> CLI. So that is the working of Kubernetes.
Okay. Now, let's concentrate a little more
15612.06 -> on how things work on the load end. Now as
said before communities Master controls your
15617.64 -> nodes and inside nodes you have containers.
Okay, and now these containers are not just
15622.79 -> contained inside them but they are actually
contained inside pods. Okay, so you have nodes
15628.58 -> inside which there are pots and inside each
of these pods. They will be a number of containers
15633.6 -> depending upon Your configuration and your
requirement right now these pods which contain
15638.33 -> a number of containers are a logical binding
or logical grouping of these containers supposing
15644.07 -> you have one application X which is running
in Node 1. Okay. So you will have a part for
15648.22 -> this particular application and all the containers
which are needed to execute this particular
15651.89 -> application will be a part of this particular
part, right? So that's how God works and that's
15656.649 -> what the difference is with respect to what
Doc is warm and two bananas because I'm dr.
15660.29 -> Swamp. You will not have a pot. You just have
continuous running on your node and the other
15664.569 -> two terminologies which you should know is
that of replication controller and service.
15669.979 -> Your replication controller is the Masters
resource to ensuring that the request number
15674.08 -> of pods are always running on the nodes, right?
So that's trigger confirmation or an affirmation
15679.979 -> which says that okay. This many number of
PODS will always be running and these many
15683.97 -> number of containers will always be running
something like that. Right? So you see it
15687.729 -> and the replication controller will always
ensure that's happening and your service is
15691.739 -> just an object on the master that provides
load. I don't think of course is replicated
15695.779 -> group of PODS. Right? So that's how Humanities
works and I think this is good enough introduction
15703.71 -> for you. And I think now I can go to the demo
part where and I will show you how to deploy
15715.21 -> applications on your communities by either
your CLI, or either via your Jama files or
15716.21 -> by or dashboard. Okay guys, so let's get started
and for the demo purpose. I have two VMS with
15720.17 -> me. Okay. So as you can see, this is my Cube
Master which would be acting as my master
15725.59 -> in my cluster. And then I have another VM
which is my Cube Node 1. Okay. So it's a cluster
15730.82 -> with one master and one node. All right. Now
for the ease of purpose for this video, I
15736.869 -> have compiled the list of commands in this
text document right? So here I have all the
15741.939 -> commands which are needed to start your cluster
on then the other configurations and all those
15746.37 -> things. So I'll be using these every copying
these commands and then I'll show you side-by-side
15752.32 -> and I will also explain when I do that as
to what each of these commands mean now there's
15757.27 -> one prerequisite that needs to be satisfied.
And that is the master of should have at least
15761.939 -> two core CPUs. Okay and 4GB of RAM and your
node should have at least one course if you
15767.99 -> and 4GB of ram so just make sure that this
much of Hardware is given to your VMS right
15773.619 -> if you are using To what a Linux operating
system well and good but if you are using
15777.649 -> a VM on top of a Windows OS then I would request
you to satisfy these things. Okay, these two
15783.659 -> criterias and I think we can straight away
start. Let me open up my terminal first fault.
15787.96 -> Okay. This is my node. I'm going back to my
master. Okay. Yes. So first of all, if you
15796.189 -> have to start your cluster, you have to start
it from your Masters end. Okay, and the command
15800.401 -> for that is Q barium in it, you specify the
port Network flag and the API server flag.
15806.55 -> We are specifying the port Network flag because
the different containers inside your pod should
15811.399 -> be able to talk to each other easily. Right?
So that was the whole concept of self discovery,
15815.319 -> which I spoke about earlier during the features
of communities. So for this self-discovery,
15821.109 -> we have like different poor networks using
which the containers would talk to each other
15824.89 -> and if you go to the documentation the community
is documentation. You can find a lot of options
15829.37 -> are you can use either Calico pod or you can
use a flannel poor Network. So when we say
15834.149 -> poor Network, it's basically a framed as the
cni. Okay container network interface. Okay,
15840.409 -> so you can use either a Calico cni or a flannel
cni or any of the other ones. This is the
15845.36 -> two popular ones and I will be using the calcio
cni. Okay. So this is the network range for
15850.27 -> this particular pod, and this will Specify
over here. Okay, and then over here we have
15854.619 -> to specify the IP address of the master. So
let me first of all copy this entire line.
15861.08 -> And before I paste it here, let me do an if
config and find out what is the IP address
15865.6 -> of this particular machine of my master machine.
The IP address is one ninety two dot one sixty
15870.39 -> eight dot 56.1. Not one. Okay. So let's just
keep that in mind and let me paste the command
15876.18 -> over here in place of the master IP address.
I'm going to specify the IP address of the
15882.062 -> master. Okay, but I just read out. It is one.
Ninety two dot one sixty eight dot 56.1 not
15887.76 -> one and the Pod Network. I told you that I'm
going to use the Calico pod. So let's copy
15894.97 -> this network range and paste it here. So all
my containers inside this particular pot would
15900.989 -> be assigned an IP address in this range. Okay.
Now, let me just go ahead and hit enter and
15906.56 -> then your cluster would begin to set up. So
it's going X expected. So it's going to take
15915.31 -> a few minutes. So just to hold on there. Okay,
perfect. My Cuban its master has initialized
15922.02 -> successfully and if you want to start using
your cluster, you have to run the following
15926.31 -> as a regular user. Right so we have three
commands which is suggested by kubernetes
15930.319 -> itself. And that is actually the same set
of commands or even I have here. Okay, so
15935.739 -> I'll be running the same commands. This is
to set up the environment. And then after
15939.08 -> that we have this token generated, right the
joining token. So the token along with the
15943.99 -> inlet address of the IP of the master if I
basically execute this command in my nodes,
15949.18 -> then I will be joining this cluster where
this is the master, right? So this is my master
15954.491 -> machine. This is created the cluster. So now
before I do this though, there are a few steps
15958.229 -> in the middle. One of those steps is executing
all these three commands and after that comes
15962.979 -> bring up the dashboard and setting up the
board Network right - the calcio apart. So
15968.54 -> I have to set up the Calico pod and then after
also set up the dashboard because if I do
15973.17 -> not start the And this before the nodes then
the node cannot join and I will have very
15978.8 -> severe complications. So let me first of all
go ahead and run these three commands one
15982.541 -> of the other. Okay, since I have the same
commands in my text doc. I'll just copy it
15987.52 -> from there. Okay, say ctrl-c paste enter.
Okay, and I'll copy this line. So remember
15996.26 -> you have to execute all these things as regular
user. Okay, you can probably use your pseudo.
16000.62 -> But yeah, you'll be executing it as your regular
user and it's asking me if I want to overwrite
16005.88 -> the existing whatever is there in this directory,
I would say yes because I've already done
16009.77 -> this before but if you are setting up the
cluster for the first time, you will not have
16013.36 -> this prompt. Okay. Now, let me go to the third
line copy this and paste it here. Okay, perfect.
16021.449 -> Now I've ran these three commands as I was
told by communities. Now, the next thing that
16026.08 -> I have to do is before I check the node status
and all these things. Let me just set up the
16031.279 -> network. Okay, the poor Network. So like I
said, this is the Line This is the command
16035.569 -> that we have to run to set up the Calico Network.
Okay to all of the notes to join our particular
16041.39 -> Network. So it will be copying the template
of this Calico document file is present over
16045.54 -> here in this box. Okay. So hit enter and yes,
my thing is created. Calcio Cube controllers
16052.5 -> created now, I'll just go back here and see
at this point of time. I can check if my Master's
16058.909 -> connected to the particular pod. Okay, so
I can run the cube CDL get loads command Okay.
16065.41 -> This would say that I have one particular
resource connected to the cluster. Okay name
16070.64 -> of the machine and this role is master and
yet the state is ready. Okay, if you want
16075.699 -> to get an idea of all the different pods which
are running by default then you can do the
16079.569 -> cubes. He'll get pods along with few options.
Okay should specify these flags and they are.
16087.739 -> All namespaces and with the flag O specify
wide. Okay. So this way I get all the pods
16095.81 -> which are started by default. Okay. So there
are different services like at CD4 Cube controllers
16102.22 -> for the Calico node for the SED Master for
every single service. There's a separate container
16107.72 -> and pot started. Okay, so that's what you
can understand from this part. Okay, that
16112.07 -> is the safe assumption. Now that we know the
cluster the cluster is ready and the Masters
16116.04 -> part of a cluster. Let's go ahead and execute
this dashboard. Okay. Remember if you want
16121.25 -> to use a dashboard then you have to run this
command before your notes join this particular
16125.939 -> cluster because the moment your notes join
into the cluster bring up the dashboard is
16129.61 -> going to be challenging and it will start
throwing arrows. OK it will say that it's
16133.119 -> being hosted on the Node which we do not want
we want the dashboard to be on the server
16138.25 -> itself right on the master. So first, let's
bring the dashboard up. So I'm going to copy
16142.41 -> this and paste it here. Okay, Enter great.
Communities dashboard is created. Now the
16150.09 -> next command that you have to get your dashboard
up and running is Cube cereal proxy. Okay
16156.229 -> with this we get a message saying that it's
being served at this particular port number
16160.819 -> and yes, you are right now there you can if
you access Local Host. What was the port number
16166.21 -> again? Localhost? Yeah one 27.0 or 0.1 is
localhost. Okay followed by port number eight
16172.96 -> thousand one, okay. Yeah, so right now we
are not having the dashboard because it is
16179.159 -> a technically accessed on another URL. But
before we do that, there are various other
16183.41 -> things that we have to access. I mean we have
to set okay, because right now we have only
16188.13 -> enabled the dashboard now if you want to access
the dashboard you have to first of all create
16192.33 -> a service account. Okay. The instructions
are here. Okay, you have to first of all create
16197.021 -> a service account for dashboard. Then you
have to say that okay, you are going to be
16201.29 -> the admin user of this particular service
account and we have to enable that functionality
16206.02 -> here. You should say dashboard admin privileges
and you should do the cluster binding. Okay,
16210.17 -> the cluster roll binding is what you have
to do and after that to join to that poor
16215.01 -> to get access to that particular dashboard.
We have to basically give a key. Okay. It's
16219.521 -> like a password. So we have to generate that
token first and then we can access the dashboard.
16223.56 -> So again for the dashboard there are these
three commands. Well, you can get confused
16227.899 -> down the line. But remember this is separate
from the above. Okay. So what we did initially
16232.72 -> is rant these three commands which kubernetes.
Oh To execute and after that the next necessity
16237.97 -> was bring up a pod. So this was that command
for the Pod and then this was the command
16242.391 -> for getting the dashboard up and right after
that run the proxy and then on that particular
16247.71 -> port number will start being served. So my
dad would is being served but I'm not getting
16251.13 -> the UI here and if I want to get the you--if
you create the service account and do these
16254.87 -> three things, right? So let's start with this
and then continue. I hope this wasn't confusing
16259.369 -> guys. Okay, I can't do it here. So let me
open a new terminal. Okay here I'm going to
16264.96 -> paste it. And yes service account created.
Let me go back here and execute this command
16271.49 -> when I'm doing the role binding I'm saying
that my dashboard will should have admin functionalities
16276.76 -> and that's going to be the cluster roll. Okay
cluster admin, and then the service account
16280.35 -> is what I'm using and it's going to be in
default namespace. Okay. So when I created
16285.069 -> the account I said that I want to create this
particular account in default namespace. So
16289.239 -> the same thing I'm specifying here. Okay - good
admin created good. So let's generate the
16295.5 -> That is needed to access my dashboard. Okay
before I execute this command, let me show
16299.79 -> you that once so if you go to this URL, right
/ API slash V 1 / namespaces. Yep, let me
16307.67 -> show to you here. Okay. So this is the particular
URL where you will get access to the dashboard.
16312.37 -> Okay login access to the dashboard localhost
8001 API V1 namespaces / Cube system / Services
16318.84 -> slash HTTP Cuban eighties. Dashboard: / proxy.
Okay. Remember this one that is the same thing
16325.119 -> over here and like I told you it's asking
me for my password. So I would say token but
16330.26 -> let me go here and hit the command and generate
the token. So this is the token amount of
16336.47 -> copy this from here till here going to say
copy and this is what I have to paste over
16343.409 -> here. All right. So Simon update. Yes, perfect
with this is my dashboard, right? This is
16351.33 -> my Cuban eighties dashboard. And this is how
it looks like whatever I want. I can get an
16355.54 -> overview of everything. So that is workloads.
If I come down there is deployments. I have
16361.21 -> option to see the pods and then I can see
what are the different Services running among
16366.5 -> most of the other functionalities. Okay. So
right now we don't have any bar graph or pie
16370.84 -> graph shown you which clusters up which board
is up and all because I have not added any
16375.02 -> node and there is no service out as running
right. So I mean, this is the outlay of the
16380.62 -> dashboard. Okay, you will get access to everything
you want from the left. You can drill down
16384.471 -> into each of these namespaces or pods on containers
right now. If you want to deploy something
16390.33 -> through the dashboard right through the click
functionality, then you can go here. Okay,
16395.66 -> but before I create any container or before
I create any pot or any deployment for that
16399.74 -> matter of fact, I have to have nodes because
these will be running only on nodes. Correct,
16404.17 -> whatever. I deploy they have done only on
node. So let me first open up my node and
16409.1 -> get the node to join this particular cluster
of mine. Now, if you remember the command
16414.09 -> to join the node got generated at the master
and correct. So, let me go and fetch that
16420.012 -> again. So that was the first command that
we ran right this one. So, let's just copy
16426.561 -> this. And paste this one at my node end. This
is the IP of my master and it will just join
16435.43 -> at this particular port number. Let me hit
enter. Let's see what happens. Okay, let me
16440.76 -> run it as root user. Okay? Okay, perfect successfully
established connection with the API server
16449.41 -> and it says this node has joined the cluster
Right Bingo. So this is good news to me. Now
16455.061 -> if I go back to my master and in fact, if
I open up the dashboard there would be an
16460.01 -> option of nodes. Right? So initially now,
it's showing this master Masters. The only
16464.621 -> thing that is part of my nodes, let me just
refresh it and you would see that even node
16469.301 -> - 1 would be a part of it. Right? So there
are two resources to instances one is the
16474.32 -> master itself and the other is the node now
if I go to overview, you will get more details
16480.41 -> if I start my application if I start my servers
or containers then all those would start showing
16485.021 -> up your right. So it's high time. I start
showing you how to deploy it to deployed using
16490.021 -> the dashboard. I told you this is the functionality.
So let's go ahead and click on this create.
16494.891 -> And yeah mind you from the dashboard is the
easiest way to deploy your application, right?
16499.58 -> So even developers around the world do the
same thing for the first time probably they
16502.912 -> created using the Amal file. And then from
there on they start editing the ml file on
16507.561 -> top of the dashboard itself or the create
or deploy the application from here itself.
16512.191 -> So we'll do the same thing. Go to create an
app using functionality click functionality.
16515.92 -> You can do it over here. So let's give a name
to your application. I'll just say it you
16522.111 -> recur demo. Okay, let that be the name of
my application and I want to basically pull
16528.121 -> an engines image. Okay. I want to launch an
engine service. So I'm going to specify the
16532.951 -> image name in my Docker Hub. Okay. So it says
either the URL of a Public Image or any registry
16539.521 -> or a private image hosted on Docker Hub or
Google container registry. So I don't have
16544.66 -> to specify the URL per se but if you are specifying
a Docker Hub, if you are specifying this image
16549.5 -> to be pulled from Docker Hub, then you can
just use the name of the image which has to
16552.941 -> be pulled. That's good enough. Right engine
to the name and that's good enough and I can
16557.541 -> choose to set my number of ports to one or
two in that way. I will have two containers
16562.99 -> running in the pot. Right? So this is done
and the final part is actually without the
16567.922 -> final part. I can strip it deployed. Okay,
but if I deployed then my application would
16572.721 -> be created but I would just don't get the
UI. I mean, I won't see the engine service
16577.281 -> so that I get the service. I have to enable
one more functionality here. Okay, the server's
16582.201 -> here click on the drop down and you will have
external option right? So click on external
16586.521 -> this would let you access this particular
service from your host machine, right? So
16591.891 -> that is the definition so you can see the
explanation here and internal or external
16597.15 -> service can be defined to map and incoming
port to a Target Port seen by the container
16602.57 -> so engines which would be hosted on one of
the container ports. That could not be accessible
16606.99 -> if I don't specify anything here, but now
that I've said access it externally on a particular
16611.471 -> port number then it will get mapped for me
by default. And jinkx runs on port number
16615.73 -> 80. So the target put would be the same but
the port I want to expose it to that. I can
16620.922 -> map into anything I want so I'm going to say
82. All right, so that's it. It's as simple
16625.721 -> as this this way. Your application is launched
with two pods, so I can just go down and click
16631.222 -> on deploy and this way my application should
be deployed. My deployment is successful.
16636.66 -> There are two pods running. So what I can
do is I can go to the service and try to access
16643.701 -> the UI, right? So it says that it's running
on this particular port number 82153. So copy
16649.691 -> this and say localhost 321530 k hit enter
bingo. So it says welcome to Jenkins and I'm
16658.48 -> building the UI, right? So I'm able to access
my application which I just launched through
16663.082 -> the dashboard. It was as simple as that. So
this is one way of for launching or making
16667.43 -> a deployment. There are two other ways. Like
I told you one is using your CLI itself your
16672.17 -> command line interface of your draw Linux
machine, which is the terminal or you can
16676.361 -> do it by uploading the yamen file. You can
do it by uploading the yamen file because
16682.24 -> everything here is in the form of Yama Lord
Jason. Okay, that's like the default way.
16687.32 -> So whatever deployment I made right that also
those configurations are stored in the form
16691.42 -> of Yaman. So if I click on view or edit yeonggil,
all the configurations are specified the default
16696.5 -> ones have been taken. So I said the name should
be a director demo that is what has been.
16700.432 -> Oh you're that is the name of my deployment?
Okay. So kind is deployment the version of
16705.91 -> my API. It's this one extension /we 1 beta
1 and then other metadata I have various other
16712.33 -> lists. So if you know how to write a normal
file then I think it would be a little more
16717.09 -> easier for you to understand and create your
deployment because you will file is everything
16721.861 -> about lists and maps and these are all files
are always lists about maps and maps about
16727.311 -> lists. So it might be a little confusing.
So probably will have another tutorial video
16732.07 -> on how to write a normal file for Cuban its
deployment but I would keep that for another
16736.691 -> session. Okay. Let me get back to this session
and show you the next deployment. Okay, the
16742.58 -> next deployment technique, so let me just
close this and go back to overview. Okay,
16748.041 -> so I have this one deployment very good. Okay.
So let's go to this. Yeah. So what I'll do
16755.58 -> is let me delete this deployment. Okay our
let me at least scale it down because Don't
16760.57 -> want too many resources to be used on my node
also because I will have to show two more
16764.74 -> deployments. Right so I have reduced my deployment
over here. And I think it's be good enough.
16772.172 -> Great. So let's go back to the cube set up
this document of mine. So this is where we're
16779.032 -> at. Right we could check our deployments we
could do all these things. So one thing which
16784.452 -> I might have forgotten is showing the nodes
which are part of the cluster of right. So
16788.81 -> this is my master. Yeah, so I kind of forgot
to show you this Cube CDL get node. So the
16798.702 -> same view that you got on your dashboard you
get it here. Also, I mean, these are the two
16803.89 -> nodes and this is the name and all these things.
Okay, and I can also do the cube CDL get pods
16811.1 -> which would tell me all the pods that are
running under a car. Demo is the pot which
16815.452 -> I have started. Okay. This is my God. Now
if I specify with the other flags right with
16822 -> all namespaces and with wide then all the
default pause which get created along with
16827.76 -> your kubernetes cluster. Those will also get
displayed. Let me show you that also just
16831.5 -> in case Okay. Yeah. So this is the one which
I created and the other ones are the default
16837.97 -> of deployments that come with few minutes
the moment you install set up the cluster
16842.91 -> these get started. Okay, and if you can see
here this particular that this particular
16848.25 -> a dareka demo, which I started is running
on my Node 1 along with this Cube proxy and
16855.522 -> this particular Calico node. So Easter services
are running on master and node. And this one
16860.59 -> is running only on my Node 1 right you can
see this right the Calico node runs both on
16867.272 -> my node over here and on my master and similarly
the queue proxy runs on my node here and on
16873.872 -> my master. So this is the one that's running
only on my Note. Okay, so getting back to
16878.772 -> what I was about to explain you. The next
part is how to deploy anything through your
16885.792 -> terminal now to deploy your same engines application
through your CLI. We can follow these set
16892 -> of commands Okay, so there are a couple of
steps here. First of all to create a deployment.
16896.82 -> We have to run this command. OK Cube cereal
create deployment and drinks and then the
16901.202 -> name of the image that you want to create.
This is going to be the name of your deployment.
16904.09 -> And this is the name of the image which you
want to use so control C and let me go to
16909.15 -> the terminal here on my master. I'm executing
this command Cube cereal create a deployment.
16914.66 -> Okay. So the deployment engines is created
if you want we can verify that also over here
16919.792 -> so under deployments right now, we have one
entry in the array card Mo and yes now you
16925.042 -> can see there are two engines and arica demo.
So this is pending. I mean, it would take
16929.08 -> a few seconds. So in the meanwhile let this
continue with the other steps. Once you have
16934.032 -> created a deployments, you have to create
the service. Okay after say which is the node
16939.67 -> Port which can be used to access that Particular
service, right because deployment of just
16943.96 -> a deployment you're just deploying your container
if you want to access it. Like I told you
16947.542 -> earlier from your local from your host machine
all those things. Then you have to enable
16951.692 -> the node board. If you want to get your deployments
on your terminal you can run this command
16955.782 -> Cube CDL get deployments. Okay engines also
comes up over here, right? If you want more
16963.46 -> details about your diploma. You can use this
command Cube CDL describe you get like more
16967.9 -> details about this particular development
as to what is the name? What is the port number?
16972.4 -> It's sort of siding on all these things. Okay.
Let's not complicate this you can probably
16977.542 -> use that for understanding later. So once
that is done, the next thing that you have
16981.81 -> to do is you have to create the service on
the nodes you have created the deployment,
16985.532 -> but yes create the service on the nodes using
this particular command Cube cereal. Create
16990.792 -> service and say note Port. Okay, this means
you want to access it at this particular Point
16995.661 -> number you're doing the port mapping 80 is
280. Okay, container Port 80 to the internal
16999.522 -> node, Port 80. Okay. So service for engines
is created. And if you want to check which
17005.912 -> of the diplomas are running in which nodes
you can run the command Cube City L. Get SVC.
17011.33 -> Okay, this would tell you okay, you have two
different services at a record Mo and engines
17016.32 -> and they are anyone these port numbers and
on these nodes, right? So communities is the
17021.58 -> one which God created automatically enter
a car. Demo is the one which I created. Okay
17026.16 -> engines is again, the one which I created
communities comes up on its own just specifying
17031.73 -> to you because this is a container for the
cluster itself. Okay. So let's just go back
17036.57 -> here and then yes and similarly if you want
to delete a deployment then you can just use
17041.002 -> this command Cube CDL delete deployment followed
by the name of the deployment, right? It's
17046.17 -> pretty simple. You can do it this way. Otherwise
from the dashboard. You can delete it like
17050.952 -> how I showed you all your click over here
and then you can click on delete and then
17054.75 -> if you want to scale you can scale it. So
both of these deployment of mine have one
17059.202 -> porridge, right? So let's do one thing. So
let's just go to the engines service. And
17067.46 -> here let's try accessing this particular service.
Local Host. Okay, perfect here. Also it says
17074.622 -> welcome to engines right. So with this you
can understand that the port mapping worked
17080.002 -> and by going to service you will get to know
on which port number you can access it on
17084.41 -> your host machine, right? So this is the internal
container Port map to this particular Port
17089.05 -> of mine. Okay. Now if one if not for this
if this doesn't work, you can also use the
17093.71 -> cluster IP for the same thing trust ripe is
going to basically the IP using which all
17099.452 -> your containers access each other, right?
So if your body will have an IP. So whatever
17103.99 -> is running in their containers that will again
be accessible on your cluster I be so so it's
17109.47 -> the same thing right? So let me just close
these pages and that's how you deploy an application
17114.9 -> through your CLI. So this comes to our last
part of this video, which is nothing but deployment
17119.592 -> via Yaman file. So for again deployment where
I am and file you have to write your yawm
17124.522 -> Al code, right? You have to either write your
yawm Al code or your Json code, correct? So
17129.532 -> this the code which I have written. Just in
Jama format. And in fact, I already have it
17134.17 -> in my machine here. So how about I just do
an LS? Yeah, there is deployment at Dotty.
17140.72 -> Alright, so let me show you that so this is
my yamen file. Okay. So here I specify various
17147.63 -> configurations similar to how I did it using
the GUI or Rider reducing the CLI it something
17152.032 -> similar gesture. I specify everything and
one particular file here. If you can see that.
17156.5 -> I have a specify the API version. Okay, so
I'm using extensions dot a slash b 1 or beta
17161.71 -> 1. Okay. I can do this or I can just simply
specify version 1 I can do either of those
17166.88 -> and then the next important line is the kind
so kind is important because you have to specify
17171 -> what kind of file it is. Is it a deployment
file or is it for a pod deployment or is it
17176.24 -> for your container deployment or is it the
overall deployment? What is it? So I've said
17179.57 -> deployment okay, because I want to deploy
the containers also along with the pot. So
17184.022 -> I'm saying deployment in case you want to
deploy only the pod which you realistically
17188.241 -> don't need to. Okay. Why would it just deploy
up? But in case if you want to deploy a pot
17192.82 -> then you can go ahead and write Port here
and then just specify what are the different
17196.18 -> containers. Okay, but in my case, it's a complete
deployment right with the pods and the services
17200.872 -> and the containers. So I will go ahead and
write other things and under the metadata.
17205.272 -> I will specify the name of my application.
I can specify what I want. I can put my name
17210.622 -> also over here like Warden, okay, and I can
save this and then the important part is this
17216.24 -> back part. So here is where you set the number
of replicas. Do you remember I told you that
17221.32 -> there's something called has replication controller
which controls the number of ports that you
17225 -> will be running. So it is that line. So if
I have a set to over here, it means that I
17228.921 -> will have two pods running of this particular
application of Verdun. Okay, what exactly
17234.38 -> am I doing here under spec AB saying that
I want to Containers so I have intended or
17240.872 -> container line over here and then I have two
containers inside. So the first container
17244.702 -> which I want to create is of the name front
end. Okay, and I'm using an engines image
17250.15 -> and similarly. The port number that this would
be active on is container Port 80. All right,
17255.862 -> and then I'm saying that I want a second container
and the container for this could I could rename
17260.682 -> this to anything? I can say back end and I
can choose which image I want. I can probably
17266.272 -> choose a httpd image also. Okay, and I can
again say the port's that this will be running
17272.5 -> on I can say the container Port that it should
run on is put number is 88 right? So that's
17277.862 -> how simple it is. All right. And since it's
your first video tutorial the important takeaways
17282.192 -> from this yawm Al file configuration is that
under specular have to specify the containers?
17287.31 -> And yes everything in Json format with all
the Intel dacians and all these things. Okay,
17292.41 -> even if you have an extra space anywhere over
here, then you are real file would throw an
17296.942 -> invalid error. So make sure that is not there.
Make sure you specify the containers appropriately
17301.58 -> if it's going to be just one container. Well
and good it's two containers. Make sure you
17305.08 -> intend it in the right way and then you can
specify the number of PODS. You want to give
17309.292 -> a name to your deployment and Mainly established
read these rules. Okay. So once you're done
17314.6 -> with this just save it and close the yamen
file. Okay. So this is your deployment djamel.
17321.442 -> Now, you can straight away upload this table
file to your Kubernetes. Okay, and that way
17326.22 -> your application would be straight with deployed.
Okay. Now the command for that is Cube cereal
17330.532 -> create - F and the name of the file. Okay.
So let me copy this and then the name of my
17335.92 -> file is deployment or djamel. So let me hit
enter. Perfect. So my deployment the third
17341.772 -> deployment vardhan is also created right so
we can check our deployments from the earlier
17347.032 -> command. That is nothing but Cube CDL get
deployments. Okay. It's not get deployment
17352.55 -> audiometer. Sorry. It's get deployments. And
as you can see here, there is an Adder a guard
17359.34 -> Mo there is engines and there is Verdun and
the funny thing which you should have noticed
17364.042 -> is that I said, I want to replicas right to
pods. So that's why the desire is to currently
17369.82 -> we have to up to date is one. So okay update
is to brilliant available is 0 because let's
17375.48 -> just give it a few seconds in 23 seconds.
I don't think the board would have started.
17379.512 -> So let's go back to our dashboard and verify
if there's a third deployment that comes up
17383.192 -> over here. Okay, perfect. So that's how it's
going to work. Okay, so probably is going
17390 -> to take some more time because the containers
just restarting. So let's just give it some
17393.57 -> more time. This could well be because of the
fact that my node has very less resource,
17398.58 -> right? So I have too many deployments that
could be the very reason. So what I can do
17403.35 -> is I could go ahead and delete other deployments
so that my node can handle these many containers
17408.872 -> and pods right? So let me delete this particular
deployment and Rings deployment and let me
17415.56 -> also delete this Adder a car demo deployment
of mine. Okay. Now let's refresh and just
17423.282 -> wait for this to happen. Okay. So what I can
do instead is I could have a very simple deployment
17428.452 -> right? So let me go back to my terminal and
let me delete my deployment. Okay, and let
17433.341 -> me redeployed again, so Cube CDL delete deployment.
Okay, so what then this deployment has been
17444.5 -> deleted? Okay. So let's just clear the screen
and let's do G edit of the yamen file again
17450.88 -> and here let's make things simpler. Let me
just delete this container from here. Let
17457.172 -> me save this right and close this now. Let
me create a deployment with this. Okay. So
17464.3 -> what then is created, let me go up here and
refresh. Let's see what happens. Okay. So
17470.84 -> this time it's all green because it's all
healthy. My nodes are successful or at least
17474.83 -> it's going to be successful container creating.
Perfect. So two parts of mine are up and running
17482.05 -> and both my paws are running right and both
are running on Node 1 pause to or of to those
17489.13 -> are the two deployments and replica set and
then Services, right? So it's engines which
17494.51 -> is the basement which is being used. So well
and good. This is also working. So guys. Yeah,
17500.432 -> that's about it. Right. So when I try to upload
it, maybe there was some other error probably
17504.67 -> in the arm will file they could developments
from small mistake or it could have been because
17508.98 -> my known had too many containers running those
could have been the reasons. But anyways,
17512.22 -> this is how you deployed through your yamen
file. All right, so that kind of brings us
17516.502 -> to the end of this session where I've showed
you a demonstration of deploying your containers
17521.63 -> in three different ways CLI dashboard and
your yamen files. Hey everyone, this is Reyshma
17531.63 -> from Edureka. And today we'll be learning
what is ansible. First,let us look at the
17537.88 -> topics that we'll be learning today. Well,
it's quite a long list. It means we'll be
17543.22 -> learning a lot of things today. Let us take
a look at them one by one. So first we'll
17548.71 -> see the problems that were before configuration
management and how configuration management
17553.89 -> help to solve. It will see what ansible is
and the different features of ansible after
17559.512 -> that. We'll see how NASA is implemented and
civil to solve all their problems. After that.
17565.82 -> We'll see how we can use ansible for orchestration
provisioning configuration management application
17572.692 -> deployment and security. And in the end, we'll
write some ansible playbooks to install lamp
17578.83 -> stack on my node machine and host your website
in my note machine. Now before I tell you
17584.452 -> about the problems, let us first understand
what configuration management actually is.
17589.49 -> Well configuration management is actually
the management of your software on top of
17594.73 -> your Hardware. What it does is that it maintains
the consistency of your product based on its
17601.17 -> requirements its design and its physical and
functional attributes. Now, how does it maintain
17607.55 -> the consistency it is because the configuration
management is applied over the entire life
17613.432 -> cycle of your system. And hence. It provides
you with a very good visibility and control
17618.48 -> when I say visibility. It means that you can
continuously check and monitor the performances
17624.3 -> of all your assistants. So if at any time
the performance of any of his system is degrading
17630.042 -> the configuration management system will notify
you and hence. You can prevent errors before
17635.07 -> it actually occurs and by control, I mean
that you have the power to change anything.
17640.24 -> So if any of your servers failed you can reconfigure
it again to repair it so that it is up and
17645.772 -> running again, or you can even replace the
server if needed and also the configuration
17651.502 -> management system holds the entire historical
data of your infrastructure it DOC. Men's
17656.81 -> all the snapshots of every version of your
infrastructure. So overall the configuration
17662.092 -> management process facilitates the orderly
management of your system information and
17666.92 -> system changes so that it can use it for beneficial
purposes. So let us proceed to the next topic
17672.81 -> and see the problems before configuration
management and how configuration management
17677.55 -> solved it and with that you'll understand
more about configuration management as well.
17682.38 -> So, let's see now, why do we need configuration
management now, the necessaries behind configuration
17689.332 -> management was dependent upon a certain number
of factors and certain number of reasons.
17694.202 -> So let us take a look at them one by one.
So the first problem was managing multiple
17699.26 -> servers now earlier every system was managed
by hand and by that, I mean that you have
17705.49 -> to login to them via SSH make changes and
then log off again. Now imagine if a system
17712.112 -> administrator would have to make changes in
multiple number of servers. You'll have to
17717.322 -> do this task of logging in making changes
and longing of again and again repeatedly,
17723.032 -> so this would take up a lot of time and there
is no time left for the system administrators
17728.112 -> to monitor the performances of the system
continuously safe at any time any of the servers
17733.63 -> would fail it took a lot of time to even detect
the faulty server and to even more time to
17739.362 -> repair it because the configuration scripts
that they wrote was very complex and it was
17744.57 -> very hard to make changes on to them. So after
configuration management system came into
17749.362 -> the picture what it did is that it divided
all the systems in my infrastructure according
17754.912 -> to their dedicated tasks their design or architecture
and the organize my system in an efficient
17761.46 -> way. Like I've proved my web servers together
my database servers together application servers
17768.362 -> together and this process is known as baselining.
Now. Let's for an example say that I wanted
17775.33 -> to install lamp stack in my system and lamp
stack is a software bundle where L stands
17780.612 -> for Linux a for Apache and for MySQL and P
for PHP. So I need this different software's
17787.662 -> for different purposes. Like I need Apache
server to host my web pages and it PHP for
17793.402 -> my web development. I need Linux as my operating
system and MySQL as my data definition language
17800.23 -> or data manipulation language since now all
the systems in my infrastructure is Baseline.
17806.49 -> I would know exactly where to install each
of the software's. For example, I'll use Apache
17811.282 -> as my web server here for database. I will
install the MySQL here and also begin easy
17816.442 -> for me to monitor my entire system. For example,
if my web pages are not running I would know
17821.81 -> that there's something wrong. With my web
servers, so I'll go check in here. I don't
17825.642 -> have to check the database servers and application
servers for that. Similarly. If I'm not able
17831.192 -> to insert data or extract data from my database.
I would know that something is wrong with
17835.852 -> my database servers. I don't need to check
these too for that matter. So what configuration
17841.08 -> management system did with baselining is that
it organized mess system in an efficient way
17845.58 -> so that I can manage and monitor all my servers
efficiently. Now, let us see the second problem
17851.33 -> that we had which were scaling up and scaling
down. See nowadays, you can come up with requirements
17858.612 -> at any time and you might have to scale up
or scale down your systems on the Fly and
17864.6 -> this is something that you cannot always plan
ahead and scaling up. Your infrastructure
17869.872 -> doesn't always mean that you just buy new
hardware and just place them anywhere. Haphazardly.
17875.88 -> You cannot do that. You also need to provision
and configure this new machines properly.
17881.34 -> So with configuration management system, I've
already got my infrastructure baselined so
17886.25 -> I know exactly how this new machines are going
to work according to their dedicated task
17891.17 -> and where should I actually place them and
the scripts that configuration management
17896.05 -> uses are reusable so you can use the same
scripts that you use to configure your older
17900.942 -> machines to configure your new machines as
well. So let me explain it to you with an
17905.43 -> example. So let me explain it to you with
an example. Let's say that if you're working
17911.48 -> in an e-commerce website and you decide to
hold a mega sale. New Year Christmas sale
17916.89 -> or anything? So it's obvious that there is
going to be a huge rise in the traffic. So
17922.26 -> you might need more web servers to handle
that amount of requests and you might even
17926.672 -> need a load balancers or maybe to to distribute
that amount of traffic onto your web servers
17932.17 -> and these changes however need to be made
at a very short span of time. So after you've
17937.55 -> got the necessary Hardware, you also need
to provision them accordingly and with configuration
17942.42 -> management, you can easily provision this
new machines using either recipes or play
17946.72 -> books or any kind of script that configuration
management uses. And also after the sale is
17952.66 -> over you don't need that many web servers
or a load balancer so you can disable them
using the same easy scripts as well. Also,
17962.522 -> scaling down is very important when you are
using cloud services: when you do not need
17967.85 -> any of those machines, there's no point in keeping
them, so you have to scale down as well, which means
17972.24 -> you have to reconfigure your entire infrastructure.
With configuration management, it is very easy
17976.67 -> to automatically scale your infrastructure
up and down. So
I think you all have understood this problem
17981.112 -> and how configuration management solved it. So
let us take a look at the third problem. Third
17986.942 -> problem was that the work velocity of the developers
was affected because the system administrators
17992.002 -> were taking time to configure the servers.
After the developers have written the code,
17997.402 -> the next job is to deploy it on different
servers, like test servers and production servers,
18002.452 -> for testing it out and releasing it. But then
again, every server was managed by hand before,
18008.07 -> so the system administrators would again have
to do the same thing: log in to each server,
18013.47 -> configure it properly by making changes,
and do the same thing again on all servers.
18018.38 -> So this was taking a lot of time. Now, before
DevOps came into the picture, there was already
18023.022 -> agility on the developers' end, with which they
were able to release new software very frequently,
18028.292 -> but it was taking a lot of time for the system
administrators to configure the servers for
18033.352 -> testing, so the developers would have to wait
for all the test results, and this highly hampered
18038.76 -> the work velocity of the developers. But after
there was configuration management, the system
18044.9 -> administrators got access to a configuration
management tool which allowed them to configure
18049.68 -> all the servers at one go. All they had to
do was write down all the configurations and
18055.032 -> the list of all the software
that they needed to provision these servers,
18059.17 -> and deploy it on all of the servers at one
go. So now agility came to the system
18065.542 -> administrators as well. So after configuration
management, the developers and the system administrators
18072.352 -> were finally able to work at the same pace.
Now, this is how configuration management
18077.48 -> solved the third problem. Now, let us take a
look at the last problem. Now, the last problem
18083.65 -> was rolling back. In today's scenario, everyone
wants change, and you need to keep making
18090.192 -> changes frequently because customers will
start losing interest if things stay the same
18095.282 -> so you need to keep releasing new features
to upgrade your application. Even giants like
18101.82 -> Amazon and Facebook do it every now and then,
and still they're unsure if the users are
18106.92 -> going to like it or not. Now imagine if the
users did not like it they would have to roll
18112.202 -> back to the previous version again, so, let's
see how it creates a problem. Now before there
was configuration management. Let's say you've
got the old version, which is version one.
18121.542 -> When you're upgrading it, you're changing all
the configurations on the production server:
18126.192 -> You're deleting the old configurations completely
and deploying the new version now if the users
did not like it, you would have to reconfigure
18138.542 -> this server again with the old configurations,
and that would take up a lot of time. So the application
18143.692 -> is going to be down for the amount of time
18143.692 -> that you need for reconfiguring the server
and this might create a problem. But when
18148.97 -> you're using a configuration management system,
as you know, it documents every version
18153.8 -> of your infrastructure. When you're upgrading
it with configuration management, it will
18158.84 -> remove the configurations of the older version,
but it will be well documented. It will be
18164.032 -> kept there and then the newer version is deployed.
Now if the users did not like it this time,
18169.96 -> the older configuration version was
already documented, so all you have to do
18174.18 -> is just switch back to the old version, and
this won't take up any time: you can upgrade
18179.56 -> or roll back your application with zero downtime.
Zero downtime means that your application
18185.792 -> would be down for zero time; it means that
the users will not notice that your application
18191.32 -> went down, and you can achieve it seamlessly.
And this is how the configuration management system
18197.15 -> solved all the problems that existed before. So
guys, I hope that you all understood how configuration management
18202.872 -> did that. Let us now move on to the next topic.
Now the question is how do I incorporate configuration
18210.89 -> Management in my system? Well, you do that
using configuration management tools. So let's
18216.75 -> take a look at all the available configuration
management tools. So here I've got the four
18222.122 -> most popular tools that are available in the
market right now. I've got Ansible and SaltStack,
18228.34 -> which are push-based configuration management
tools. By push-based, I mean that you can directly
18234.39 -> push all those configurations onto your node
machines, while Chef and Puppet are
18239.942 -> both pull-based configuration management tools.
It means that they rely on a central server
18245.4 -> for configurations; they pull all the configurations
from a central server. There are other configuration
18252.13 -> management tools available in the market too,
but these four are the most popular ones.
18257.91 -> So now let's learn more about Ansible. Now, Ansible
is a configuration management tool that can
18263.542 -> be used for provisioning, orchestration, application
deployment and automation, and it's a push-based
18270.202 -> configuration management tool, like I told
you. What it does is that it automates your
18275.372 -> entire IT infrastructure and gives you large
productivity gains, and it can automate pretty
18280.912 -> much anything: it can automate your cloud,
your networks, your servers and all your IT
18286.542 -> processes. So let us move on to the next topic.
So now let us see the features of ansible.
18292.55 -> The first feature is that it's very simple.
It's simple to install and set up, and it's
18297.9 -> very easy to learn, because Ansible playbooks
are written in a very simple data serialization
18303.702 -> language, which is known as YAML, and it's
pretty much like English. So anyone can understand
18309.522 -> it, and it's very easy to learn. The next feature,
because of which ansible is preferred over
18314.292 -> other configuration management tools is because
it's agentless. It means that you do not
18319.97 -> need any kind of agents or any kind of client
software to manage your node machines. All
18326.08 -> you have to do is install ansible in your
control machine and just make an SSH connection
18330.83 -> with your nodes and start pushing configurations
right away. The next feature is that it's
18336.99 -> very powerful: even though we call Ansible
simple, and it does not require any agent,
18342.16 -> it has the capability to model very complex
IT workflows, and it comes with a very interesting
18349.07 -> feature which is called batteries included.
It means that you've already got everything
18354.18 -> you need; in Ansible, that's because it
comes with more than 750 inbuilt modules,
18361.18 -> which you can use for any purpose in
your project. And it's very efficient because
all the modules that Ansible comes with
18374 -> are extensible. It means that you can customize
them according to your needs, and for doing
18378.58 -> that you do not need to use the same programming
language that they were originally written in;
you can choose any kind of programming language
18383.74 -> that you're comfortable with and then customize
those modules for your own use. So this is
18389.16 -> the power and liberty that Ansible gives you.
Now, let us take a look at the case study
18394.18 -> of NASA. What were the problems that NASA
was facing and how ansible solved all those
18399.67 -> problems. Now, NASA is an organization that
has been sending men to the Moon. They are
carrying out missions to Mars, and they're
launching satellites now and then to monitor
18410.83 -> the Earth and not just the Earth. They're
even monitoring other galaxies and other planets
18416.25 -> as well. So you can imagine the kind and the
amount of data that NASA might be dealing
18421.782 -> with. But all their applications were in a traditional
Hardware based Data Center and they wanted
18427.89 -> to move into a cloud-based environment because
they wanted better agility and they wanted
18433.92 -> better adaptive planning for that. And also
they wanted to save costs because a lot of
18440.18 -> money was spent on just the maintenance of
the hardware. And also, they wanted more security:
18445.362 -> NASA is a government organization
of the United States of America, and obviously
18450.49 -> they hold a lot of confidential
18455.551 -> details for the government as well. So they
18461.013 -> just cannot always rely on the hardware to
store all these confidential files; they needed
18466.58 -> more security because if at any time the hardware
fails, they cannot afford to lose that data
18472.122 -> and that is why they wanted to move all their
65 applications from a hardware environment
18478.38 -> to a cloud-based environment. Now, let us
take a look. What was the problem now for
18483.59 -> this migration of all the data into a cloud
environment, they contacted a company called
18489.47 -> InfoZen. Now, InfoZen is a company that is
a cloud broker and integrator, implementing
18495.332 -> solutions to meet needs with security. So
InfoZen was responsible for making this
18500.332 -> transition, and NASA wanted to make this transition
in a very short span of time. So all the applications
were migrated as they were into the cloud environment,
18514.39 -> and because of this, all the AWS accounts and
all the virtual private clouds that were previously
defined got accumulated in a single
18519.59 -> data space, and this made up a huge chunk of
data and NASA had no way of centrally managing
18525.82 -> it and even simple tasks like giving a particular
system administrator access rights to a particular
18532.012 -> account. This became a very tedious job.
NASA wanted to automate end-to-end deployment
18537.792 -> of all their apps, and for that they needed
a management system. So this was the situation
18543.49 -> when NASA moved into the cloud so you can
see that all those AWS accounts and virtual
private clouds got accumulated and made
18553.58 -> a huge chunk of data, and everyone was accessing
it directly. So there was a problem in managing
the credentials for all the users and the
18559.32 -> different teams. What NASA needed was to divide
up all their inventories, all the resources,
18564.34 -> into groups and numbers of hosts. And also
they wanted to divide up all the users
into different teams and give each team different
credentials and permissions. And also if you
18575.342 -> look at a more granular level, each user
in each team could also have different credentials
18580.23 -> and permissions. Let's say that you want to
give the team leader of a particular team
access to some kind of data, but you don't
want the other users in the team to access
18589.16 -> that data. So also NASA wanted to Define different
credentials for each individual member as
18594.55 -> well. They wanted to divide up all the data
according to projects and jobs also. Now,
18600.15 -> they wanted to move from chaos to a more
organized manner, and for that they adopted
18606.38 -> Ansible Tower. Now, Ansible Tower is Ansible
at a more enterprise level. Ansible Tower
18613.33 -> provides you with the dashboard which provides
all the status summaries of all the hosts and
18618.58 -> jobs, and Ansible Tower is a web-based interface
for managing your organization. It provides
18625.49 -> you with a very easy to use user interface
for managing quick deployments and monitoring
18631.202 -> all the configurations. So, let's see what
Ansible Tower did. It has a credential
18636.782 -> management system which could give different
access permission to each individual user
18641.25 -> and teams, and it also divided up the users into
teams and single individual users as well.
18647.13 -> And it has a job assignment system, so you
can also assign jobs using Ansible Tower.
18654.41 -> Suppose, let's say, that you have assigned
job one to a single user and job two to another single
18658.96 -> user, while job three could be assigned to a particular
team. Similarly, the whole inventory was also
18664.9 -> managed: all the servers, let's say, dedicated
to a particular mission were grouped together,
18669.5 -> all the host machines and other systems as
well. So Ansible Tower helped NASA organize
18674.032 -> everything. Now, let us take a look at the
dashboard that ansible Tower provides us.
18679.63 -> So this is the screenshot of the dashboard
at a very initial level. You can see right
now there are zero hosts; nothing is there, but
I'm just showing you what ansible tower provides
18689.24 -> you so on the top you can check all the users
and teams. You can manage the credentials
18694.762 -> from here. You can check your different projects
and inventories. You can make job templates
and schedule jobs as well. So this is where
you can schedule jobs and provide every job
18705.47 -> with a particular ID so that you can track
it. You can check your job status here whether
18710.352 -> your job was successful or failed and since
Ansible Tower is a configuration management
18715.56 -> system, it will hold the historical data as
well. So you can check the job statuses of
18720.81 -> the past month or the month before that. You
can check the host status as well. You can
18726.122 -> check how many hosts are up and running you
can see the host count here. So this dashboard
18731.47 -> of ansible tower provides you with so much
ease of monitoring all your systems. The
18736.372 -> Ansible Tower dashboard is very easy to use;
anyone in your company can use it because
18741.47 -> it's very user-friendly. Now, let us see the
results that NASA achieved after it has used
18746.502 -> Ansible Tower. Now, updating nasa.gov used to
take one hour of time, and after using Ansible
18753.39 -> it got down to just five minutes. Security
patching updates were a multi-day process,
18759.952 -> and now they require only 45 minutes. The provisioning
of OS accounts can be done in just 10 minutes.
18768.25 -> Earlier, the application stack-up time required
one to two hours, and now it's done in only
18773.622 -> 10 minutes. It also achieved near real-time
RAM and disk monitoring. And baselining all
18779.63 -> the standard Amazon Machine Images: this
used to be a one-hour manual process, and
18784.692 -> now you don't even need manual interference
for that; it became an invisible background
18789.202 -> process. So you can see how Ansible has
drastically changed the overall management
18793.99 -> system of NASA. So guys, I hope that you understood
how Ansible helped NASA. If you have
18799.532 -> any questions, you may ask me at any time on
the chat window. So let us proceed to the
18804.73 -> next topic. Now this was all about how others
have used ansible. So now let us take a look
18810.612 -> at the ansible architecture so that we can
understand more about ansible and decide how
18815.74 -> we can use ansible. So this is the overall
Ansible architecture. I've got the Ansible
18821.5 -> automation engine, and I've got the inventory
and a Playbook inside the automation engine.
18826.59 -> I've got the configuration management database
here, and hosts, and this configuration management
18831.9 -> database is a repository that acts as a data
warehouse for all your it installations. It
18838.042 -> holds all the data relating to the collection
of all your IT assets, and these are commonly
18843.42 -> known as configuration items and it also holds
the data which describe the relationships
18848.58 -> between such assets. So this is a repository
for all your configuration management data
18854.46 -> and here I've got the ansible automation engine.
I've got the inventory here, and the inventory
18859.55 -> is nothing but the list of all the IP addresses
18865.67 -> of all my host machines. Now, as I told you,
to use configuration management you use
18870.66 -> a configuration management tool
like Ansible. But how do you use Ansible? Well,
you do that using playbooks. And playbooks
18877.262 -> describe the entire workflow of your system.
Inside playbooks, I've got modules, APIs and
18885.31 -> plugins. Now, modules are the core files:
playbooks contain a set of plays, which are
18891.76 -> a set of tasks, and inside every task there
is a particular module. So when you run a
18897.602 -> playbook, it's the modules that actually
get executed on all your node machines. So
18903.032 -> modules are the core files, and like I told
you before, Ansible already comes with inbuilt
18908.16 -> modules which you can use, and you can also
customize them as well. It comes with different
18912.93 -> cloud modules, database modules and so on. And don't
worry. I'll be showing you how to use those
18917.83 -> modules in ansible and there are different
APIs as well. Well, APIs in Ansible are
18924.32 -> not meant for direct consumption; they're
just there to support the command line tools.
18929.372 -> For example, they have the python API and
these apis can also be used as a transport
18934.3 -> for cloud services, whether public or
private. Then I've got plugins.
Now, plugins are a special kind of module that
18946.25 -> allow you to execute Ansible tasks as a job build
step, and plugins are pieces of code that augment
18952.09 -> Ansible's core functionality. And Ansible
18952.09 -> also comes with a number of Handy plugins
that you can use. For example, you have action
18957.08 -> plugins, cache plugins, callback plugins, and
also you can create plugins of your own as
18961.66 -> well. Let me tell you how exactly different
it is from a module. Let me give you the example
18966.832 -> of action plugins. Now, action plugins are front-end
modules, and what they do is that when you
18972.89 -> start running a playbook, something needs to
be done on the control machine as well, so
18978.13 -> these action plugins trigger those actions and
execute those tasks on the control machine
18983.92 -> before calling the actual modules that are
getting executed in the Playbook. And also
18989.05 -> you have a special kind of plug-in called
the connection plugin, which allows you to
18993.832 -> connect to the Docker containers in your node
machine, and many more. And finally, I have these
18999.73 -> host machines that are connected via SSH, and these
host machines could be either Windows or Linux
19006.98 -> or any kind of machine. And also, let me tell
you that it's not always necessary to use SSH
19012.76 -> for the connection: you can use any kind of network
authentication protocol, you can use Kerberos,
19017.98 -> and you can use the connection plugins
as well. So this is a fairly simple Ansible
19023.14 -> architecture. So now that you've understood
the architecture, let us write a play book
19028.251 -> now. Let me tell you how to write a playbook.
Now, playbooks in Ansible are simple
19033.39 -> files written in YAML code, and YAML is a
data serialization language. You can think
19039.05 -> of data serialization language as a translator
19044.43 -> for breaking down all your data structures
and serializing them in a particular order, which
can be reconstructed again for later use and
19050.1 -> you can use this reconstructed data structure
in the same environment or even in a different
19055.4 -> environment. So this is the control machine
where ansible will be installed and this is
19059.49 -> where you'll be writing your playbooks. Let
me show you the structure of how to write
19064.08 -> a playbook. Every playbook starts with
three dashes at the top. So first you have
19068.84 -> to mention the list of all your host machines
here. It means where do you want this Playbook
19073.782 -> to run? Then you can mention variables by
gathering facts, then you can mention the
19078.93 -> different tasks that you want. Now remember
that the tasks get executed in the same order
19084.46 -> that you write them. For example, if you want
to install software A first and then software
19090.13 -> B later on, make sure that the first
task would be to install software A and the next
19095.452 -> task would be to install software B. And then
I've got handlers at the bottom. The handlers
19100.67 -> are also tasks, but the difference is that in order
to execute handlers, you need some sort of
19106.14 -> triggers in the list of tasks. For example,
we use notify. I'll show you an example now.
19112.24 -> Okay, let me show you an example of Playbook
so that you can relate to this structure.
19116.752 -> So this is an example of an ansible Playbook
to install Apache. Like I told you, it starts with
19122.39 -> three dashes at the top. Remember that every
list item starts with a dash, or hyphen, in
19128.841 -> front. Here, I've only mentioned the name
of one group. You can mention the name of
19133.282 -> several groups where you want to run your
playbook. Then I've got the tasks: you give
19138.582 -> a name for the task, which is install Apache,
and then you use a module. Here I'm using
19144.32 -> the apt module to download the package. So
this is the syntax of writing the apt module:
19150.18 -> you give the name of the package, which
is apache2, then update_cache equal to yes.
19155.952 -> It means that it will make sure that the apt
cache is already updated on your node machine
19160.72 -> before it installs Apache 2. And you mention
state equal to latest, which means that it will
19166.481 -> download the latest version of Apache 2. And
this is the trigger, because I'm using handlers
19171.792 -> here, and the handler here is to restart
Apache. I'm using the service module here,
19177.492 -> and the name of the service that I want to
restart is apache2, and state is equal to restarted.
19183.622 -> So with notify I have mentioned that there is
going to be a handler whose job would be to
19188.24 -> restart Apache 2, and then the task in the
handler gets executed and it will restart
19193.862 -> Apache 2. So this is a simple playbook, and we
will also be writing similar kinds of playbooks
19198.792 -> later in the hands-on part, so you'll be learning
it again. So if it's looking a little like gibberish
19203.762 -> to you, we will be doing hands-on exercises
later, and that will clear all your doubts.
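To make that structure concrete, here is a minimal sketch of the playbook this example describes, reconstructed from the walkthrough above (treat the group name and module options as illustrative rather than the exact file from the demo):

    ---
    - name: Install Apache
      hosts: test-servers
      become: true
      tasks:
        # the apt module refreshes the cache and installs the latest apache2
        - name: install apache2
          apt:
            name: apache2
            update_cache: yes
            state: latest
          notify: restart apache2
      handlers:
        # runs only when notified by a task
        - name: restart apache2
          service:
            name: apache2
            state: restarted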
19209.3 -> So now let us see how to use ansible and understand
its applications so we can use ansible for
19215.41 -> application deployment configuration management
security and compliance provisioning and orchestration.
19222.21 -> So let us take a look at them one by one first.
Let us see how we can use ansible for orchestration.
19227.523 -> Well orchestration means let's say that we
have defined configurations for each of my
19233.522 -> systems, but I also need to make sure how
these configurations will interact with each
19239.39 -> other. So this is the process of orchestration,
by which I decide how the different configurations
19245.63 -> on the different systems in my infrastructure
will interact with each other in order to
19251.502 -> maintain a seamless flow of my application
and your application deployments need to be
19256.84 -> orchestrated because you've got a front-end
and back-end services, you've got databases,
19262.702 -> you've got monitoring, networks and storage,
and each of them has their own role to play
19268.17 -> with their configuration and deployment,
and you cannot just run all of them at once
19273.43 -> and expect that the right thing happens. So
what you need is that you need an orchestration
19278.92 -> tool to ensure that all these tasks happen in the proper
order: that the database is up before the backend
19284.822 -> server and the front end server is removed
from the load balancer before it gets upgraded
19289.752 -> and that your networks would have their proper
19295.08 -> VLANs configured. So this is what Ansible
19295.08 -> helps you to do. So, let me give you a simple
example so that you can understand it better.
19300.782 -> Let's say that I want to host a website on
my node machines. And this is precisely what
19305.1 -> we're going to do later on the Hands-On part.
So first and in order to do that first, I
19311.042 -> have to install the necessary software, which
is the lamp stack and after that I have to
19316.1 -> deploy all the HTML and PHP files on the web
server. And after that I'll be gathering some
19321.71 -> kind of information from my web pages that
will go inside my database server. Now, if
19326.692 -> you want to perform all these tasks, you have
to make sure that the necessary software is
19331.07 -> installed first: I cannot deploy the HTML and
PHP files on the web servers if I don't have
19336.75 -> web servers, if Apache is not installed.
So this is orchestration where you mention
19342.362 -> the tasks that need to be carried out
first and the tasks that need to be carried
19346.35 -> out later. So this is what ansible playbooks
allow you to do. Now, let's see what provisioning
19352.272 -> is. Provisioning in English means to provide
something that is needed. It is the same
19358.282 -> in the case of Ansible: Ansible will make
sure that all the necessary software that
19364.38 -> you need for your application to run is properly
installed in each of the environments of your
19369.702 -> infrastructure. Let us take a look at this
example here to understand what provisioning
actually is. Say I want to provision a
19379.09 -> Python web application that I'm hosting on
Microsoft Azure. Microsoft Azure is very
19385.872 -> similar to AWS; it is also a cloud platform
on which you can build all your applications.
19391.3 -> So let's say now that
I'm developing a Python web application. For
19397.18 -> coding, I would need the Microsoft Azure
document database, I would need Visual Studio,
19403.46 -> I would need to install Python, and some kind
of software development kit and different
19409.07 -> APIs for that. So in Ansible, you can list out
the names of all the software development kits
19413.582 -> and all the necessary software that
you would require in order to develop your web application.
19418.372 -> You can list out all the necessary software
that you'd be needing in an Ansible playbook
19425.18 -> in order to develop your web application, and
for testing your code out you will again need
19430.4 -> the Microsoft Azure document database, you would
again need Visual Studio, and some kind of
19435.34 -> testing software. So again, you can list out
all the software in an Ansible playbook and
19440.5 -> it will provision your testing environment
as well. And it's the same thing while you're
19444.33 -> deploying it on the production server as well:
Ansible will provision your entire application
19449.292 -> at all stages, at the coding stage, at testing, and
at the production stage also. So guys, I hope
19455.49 -> you've understood what provisioning is let
us move on to the next topic and see how we
19460.66 -> can achieve configuration management with
Ansible. Now, Ansible configurations are simple
19466.18 -> data descriptions of your infrastructure,
which are both human-readable and machine-parsable,
19471.442 -> and Ansible requires nothing more than
an SSH key in order to start managing systems,
19477.32 -> and you can start managing them without installing
any kind of agent or client software. So you
19482.59 -> can avoid the problem of managing the management,
which is very common in different automation
19488.292 -> systems. For example, I've got my host machines
and Apache web servers installed in each of
19493.07 -> the host machines. I've also got PHP and MySQL
installed. If I want to make configuration
19498.22 -> changes, if I want to update Apache and update
my MySQL, I can do it directly: I can push
19503.97 -> those new configuration details directly onto
my host machines, or my node machines, and my
19509.5 -> servers, and you can do it very easily using
ansible playbooks. So let us move on to the
19514.532 -> next topic and let us see how application
deployment has been made easier with Ansible.
19519.85 -> Now, Ansible is the simplest way to deploy
your applications. It gives you the power
19525.23 -> to deploy all your multi-tier applications
very reliably and consistently, and you can
19531.192 -> do it all from a common framework. You can
configure all the needed Services as well
19535.9 -> as push application artifacts from one system.
19541.23 -> With Ansible you can write playbooks, which
are descriptions of the desired state of
19545.762 -> your systems, and these are usually kept in
source control. Ansible then does all the
19551.55 -> hard work for you to get your systems to the
desired state, no matter what state they are currently
19556.782 -> in, and playbooks make all your installations,
your upgrades and all your day-to-day management
19583.52 -> very repeatable and reliable. So let's
say that I am using a version control system
like Git while I'm developing my app, and
19595.192 -> I'm also using Jenkins for continuous integration.
Now, Jenkins will extract code from Git every
19600.622 -> time there is a new commit and then make a
software build, and later this build will
19606.9 -> get deployed on the test server for testing.
Now, if changes keep being made to the code
19611.532 -> base continuously, you would have to configure
19611.532 -> your test and the production server continuously
as well according to the changes. So what
19617.07 -> ansible does is that it continuously keeps
on checking the Version Control System here
19621.68 -> so that it can configure the test and the
production server accordingly and quickly
19625.77 -> and hence. It makes your application deployment
like a piece of cake. So guys, I think you
19632.292 -> have understood the application deployment.
Don't worry in the Hands-On part will also
19637.292 -> be deploying our own applications on different
servers as well. Now, let us see how we can
19642.01 -> achieve security with Ansible. In today's complex
IT environment, security is paramount: you need
19648.75 -> security for your systems, you need security
for your data, and not just your data, your
19653.38 -> customers' data as well. Not only must you
be able to define what it means for your systems
19658.542 -> to be secure, you also need to be able to simply
apply that security, and you also need to constantly
19664.852 -> monitor your systems in order to ensure that
they remain compliant with that security and
19670.08 -> with ansible. You can simply Define security
for your systems using playbooks with playbooks.
19676.07 -> You can set up firewall rules, you can lock
down different users or groups, and you can
19680.592 -> even apply custom security policies as well.
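As a small illustration of the kind of firewall rule a playbook can set up, a task using Ansible's ufw module might look like this (a hedged sketch; this exact task is not shown in the session):

    # allow inbound SSH on port 22 over TCP
    - name: allow SSH through the firewall
      ufw:
        rule: allow
        port: '22'
        proto: tcp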
Now, Ansible also works with the MindPoint
19685.4 -> Group, which writes Ansible roles to apply
DISA STIGs. Now, a DISA STIG is a cybersecurity
19691.5 -> methodology for standardizing security protocols
within your networks, servers and different
19697.14 -> computers. And also it is very compliant with
the existing SSH and WinRM protocols, and
19703.27 -> this is also a reason why Ansible is preferred
over other configuration management tools.
19708.13 -> It is also compatible with different security
verification tools like OpenSCAP and STIGMA.
19714.27 -> What tools like OpenSCAP and STIGMA do
is carry out a timely inspection of
19719.23 -> all your software inventory and check for
any kind of vulnerabilities and it allows
19723.952 -> you to take steps to prevent those attacks
before they actually happen and you can apply
19728.8 -> the security over your entire infrastructure
using ansible. So, how about some Hands-On
19734.442 -> with ansible? So let us write some ansible
playbooks now. So what we are going to do
19738.55 -> is install the LAMP stack,
and then we're going to host a website on
19743.41 -> the Apache server and will also collect some
data from our webpage and store it in the
19749.5 -> MySQL server. So guys, let's get started.
So here I'm using the Oracle virtualbox manager
19755.51 -> and here I've created two virtual machines.
The first is the ansible control machine and
19759.83 -> the ansible host machine. So ansible control
machine is the machine where I have installed
19764.522 -> and simple and this is where I'll be writing
all my playbooks and answer will host one
19768.75 -> here is going to be my note machine. This
is where the playbooks are going to get deployed.
19773.16 -> So in this machine, I'll deploy my website.
So I'll be hosting a website on the Ansible
19777.75 -> host machine. Let's go to my control machine
and start writing the playbooks. So this is
19783.202 -> my ansible control machine. Now. Let's go
to the terminal first. So this is the terminal
19789.16 -> of my ansible control machine. And now I've
already installed ansible here and I've already
made an SSH connection with my node machine.
So let me here just become the root user first.
19799.51 -> Now, you should know that you do not always
need to become the root user in order to use
19803.84 -> ansible. I'm just becoming the root user for
my convenience because I like to get all the
19808.17 -> root privileges while I'm using ansible, but
you can sudo to any user if you like. So
19816.442 -> let me clear my screen first. Now, before we
start writing playbooks, let us first check
19821.9 -> the version of Ansible that is installed here,
and for that I'll just use the command ansible
19827.29 -> --version. And as you can see here,
I have got Ansible version
19834.622 -> 2.2.0.0 here. Now, let me
show you my host inventory file since I've
19840.42 -> got only one node machine here. So I'm going
to show you where exactly the IP address of
my node machine is being stored. I'll open the
19849.93 -> hosts file for you now; I'm just going
to open the file and show it to you. I'm
19855.72 -> using the gedit editor, and the default location
of your host inventory file is /etc/ansible/hosts.
And this is your host inventory
19866.84 -> file, and I have mentioned the IP address
of my host machine here, which is
19871.88 -> 192.168.56.102, and
I have named it under the group name test
19878.07 -> servers. So always write the name of your
group under square brackets.
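For reference, the inventory file described here would look roughly like this, with the group name in square brackets and one IP per line (the IP is the one from this demo; yours will differ):

    [test-servers]
    192.168.56.102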
Now, I just have one node machine, so there is only one
19888.63 -> IP address. If you have many node machines,
you can just list the IP addresses under
this line. It's as simple as that. Or if you
19893.3 -> even want to group it under a different name,
you can use another square
19897.82 -> bracket and put a different name for another
set of your hosts. Okay, now let me clear
19903.172 -> my screen first. So first, let me just test
out the SSH connection whether it's working
19908.75 -> properly or not using Ansible. So for that
I'll just type in the command ansible, then
19917.5 -> -m ping, and then the name of the group of my
host machines, which is test-servers in my
19923.75 -> case. And the ping changed to pong. It means
that an SSH connection is already established
19933.15 -> between my control machine and my node machine.
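That connectivity check, and the kind of output it produces, would look roughly like this (a sketch; the exact JSON varies by Ansible version):

    $ ansible test-servers -m ping
    192.168.56.102 | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }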
So we are all ready to write playbooks and
19937.56 -> start deploying it on the nodes. So the first
thing that I need to do is write a provisioning
19943.702 -> playbook. Now, since I'm going to host a website,
I would first need to install the necessary
19948.91 -> software, so I'll be writing a provisioning
playbook for that and I'll provision my node
19953.362 -> machine with the LAMP stack. So let us write
a playbook to install the LAMP stack on my node
19958.55 -> machine. Now, I've already written that playbook,
so I'm just going to show it to you. I'm using
19965.76 -> the gedit editor again, and the name of my
provisioning playbook is lampstack, and the
19972.25 -> extension for a YAML file is .yml. And this
is my playbook. Now, let me tell you how I
19979.292 -> have written this playbook. As I told you,
every playbook starts with three dashes at
19983.75 -> the top, so here are the three dashes, and
then I've given a name to this playbook, which
19987.692 -> is to install Apache, PHP and MySQL. Now, I've
already got the L in my LAMP because I'm using
19993.442 -> an Ubuntu machine, which is a Linux operating
system. So I need to install Apache, PHP and
19998.5 -> MySQL now. And then you have to mention the
host here on which you want this Playbook
20003.442 -> to get deployed. So I've mentioned this over
here and then I want to escalate my privileges
20009.352 -> for which I'm using become and become user
It is because sometimes you want to become
20014.17 -> another user, different from the one you are actually
logged in as on the remote machine. So you can
20019.172 -> use privilege escalation tools like su or
sudo to gain root privileges, and
20024.542 -> that is why I've used become and become_user
for that. So I'm becoming the user root, and
20029.502 -> I'm using become true here on the top. What
it does is activate your privilege
20034.17 -> escalation, and then you become the root user
on the remote machine. And then gather_facts is
20038.99 -> true. Now, what it will do is gather
useful variables about the remote host. Now,
20045.49 -> what exactly it will gather is some sort of
files or some kind of keys which can be used
20050.782 -> later in a different Playbook. And as you
know that every Playbook is a list of tasks
20055.372 -> that you need to perform. So this is the list
of all my tasks that I'm going to perform
and since it's a provisioning playbook, it
20065.18 -> means I'm only installing the necessary software
that will be needed in order to host a website
20070.57 -> on my node machine. So first I'm installing
Apache, so I've given the task the name install apache2,
and then I'm using the package module here.
20076.9 -> And this is the syntax of the package module.
20082.88 -> So you have to first specify the name of the
package that you are going to download, which
20087.72 -> is apache2, and then you put state equal to
present. Since we're installing something
20092.252 -> for the first time and you want this package
to be present on your node machine, you're
20097.05 -> putting state equal to present. Similarly,
if you want to delete something, you can put
20102.362 -> state equal to absent, and it works that way.
So I've installed the Apache PHP module, and
20108.692 -> I've installed the PHP client, the PHP
GD library, and the php-mysql package.
20114.88 -> And finally, I've installed the MySQL server
in a similar way to how I installed Apache 2.
20119.56 -> This is a very simple playbook to provision
your node machine, and actually all the playbooks
20124.34 -> are simple. So I hope that you have understood
how to write a playbook.
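Based on that walkthrough, the provisioning playbook would look something like the sketch below (the Ubuntu package names are assumptions; the demo's exact list may differ slightly):

    ---
    - name: Install Apache, PHP and MySQL
      hosts: test-servers
      become: true
      become_user: root
      gather_facts: true
      tasks:
        # each package is declared with state: present
        - name: install apache2
          package:
            name: apache2
            state: present
        - name: install the PHP and MySQL packages
          package:
            name: "{{ item }}"
            state: present
          with_items:
            - php
            - libapache2-mod-php
            - php-mysql
            - php-gd
            - mysql-server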
Now, let me tell you something that you should always keep in mind while
20128.48 -> you are writing playbooks: make sure that
you are always extra careful with the indentation
20134.09 -> because YAML is a data serialization language
and it differentiates between elements with
20139.4 -> different indentations. For example, I've
got a name here and a name here also, but
20145.25 -> you can see that the indentations are different
it is because this is the name of my entire
20149.55 -> Playbook while this is just the name of my
particular task. So these two are different
20153.9 -> things and they need to have different indentations
the ones with the similar indentations are
20158.702 -> known as siblings like this one. This is also
doing the same thing. This is also installing
20163.14 -> some kind of package and this is also installing
some kind of package. So these are similar,
20167.622 -> so that's why you should be very careful with
indentation. Otherwise, it will create a problem
20172.033 -> for you. So what are we waiting for? Let us
run this playbook. Let me clear my screen first. So
20177.56 -> in order to run a playbook, the command
that you should be using to run an Ansible
20182.282 -> playbook is ansible-playbook and then the
name of your file, which is lampstack.yml.
20191.02 -> And here we go. And here it is. Okay:
it is able to connect to my node machine,
20197.82 -> Apache 2 has been installed. And it's done.
My playbook has run successfully. And how do
20213.99 -> I know that? I know it by seeing these common
return values. So these common return values
like ok, changed, unreachable and failed. They
20220.06 -> give me the status summary of how my playbook
was run. So ok equal to 8 means there
20226.06 -> were eight tasks that ran OK. Changed
equal to 7 means that something on my
20231.172 -> node machine has been changed, because obviously
I've installed new packages on my node machine,
20235.92 -> so it's showing changed equal to 7. Unreachable
equal to 0 means that there were zero
20242.1 -> hosts that were unreachable, and failed equal
to 0 means that zero tasks failed. So
20246.97 -> my playbook ran successfully on my
20253.68 -> node machine.
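For reference, that status summary at the end of a run looks roughly like this (values taken from the run described above; exact formatting varies by Ansible version):

    PLAY RECAP *****************************************************
    192.168.56.102    : ok=8    changed=7    unreachable=0    failed=0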
So let us check my node machine and see if Apache and MySQL have been installed.
20259.05 -> Let us go to my node machine now. So this
is my node machine. So let us check if the
20265.51 -> Apache server has been installed. I'm going
to my web browser. So this is my web browser
20270.67 -> in my note machine. Let me go to the Local
Host and check if Apache web server has been
20276.33 -> downloaded and it's there. It works. Now.
This is the default web page of apache2 web
20281.88 -> server. So now I know for sure that Apache
was installed on my node machine. Now, let
20286.343 -> us see if MySQL server has been installed.
Let me go to my terminal. This is the terminal
20293.3 -> of my node machine. Now, if you want to check
if MySQL is installed, just use the following
20299.22 -> command: mysql -u root -p, then the sudo
password, and the password again for MySQL. And there
20316.05 -> it is. So the MySQL server was also successfully
installed on my node machine. So let's go
20321.91 -> back to my control machine and let's do what
is left to do. So we're back into our control
20327.122 -> machine. Now, I've already provisioned my
node machine, so let's see what we need to
do next. Now, since we are deploying a website
20336.91 -> on the node machine, let me first show you
20342.202 -> how my first web page looks.
So this is going to be my first web page, which
is index.html, and I've got two more PHP
20349.032 -> files also, so I'll actually be deploying these
files onto my node machine. So let me just
20354.92 -> open the first web page for you. So this is
going to be my first web page. And what I'm
20360.64 -> going to do is that I'm going to ask for name
and email because this is a registration page
20365.182 -> for edureka, where you have to register with
your name and email and I want this name and
20371.532 -> email to go into my database. So for that
I need to create a database and also need
20376.601 -> to create a table for this name and email
data to be stored in. For that we'll write
20381.59 -> another playbook, and we'll be using database
modules in it. Let me clear the screen first. Now
20387.24 -> again, I've already written that playbook,
so let me just show it to you. I'm using the
20391.58 -> gedit editor here again, and the name of this
playbook is mysql module. Okay. So this
20403.3 -> is my playbook. Like all playbooks, it starts
with three dashes, and here I have mentioned
20409.032 -> hosts: all. Now, I have only one host; I
know I could have mentioned either the
20415.16 -> one IP address directly or given the
name of my group, but I've written just all
20419.762 -> here so that you know that if you have
many group names, or you have many nodes and
20424.13 -> you want this playbook to run on all of your
node machines, you can use this all and the
20429.39 -> playbook will get deployed on all your node
machines. So this is another way of mentioning
20434.47 -> your hosts. And I'm using remote_user root,
which is another method to escalate your
20440.75 -> privileges. It's similar to become and become_user;
I want the remote user to have root privileges
20446.49 -> while this playbook runs. And then there is the
list of tasks. What I'm doing in
20451.292 -> this playbook is that, since I have to connect
to my MySQL server, which is present on my
20455.862 -> node machine, I need a particular piece of software
for that, which is the MySQL-python module,
20461.46 -> and I download and install it using pip.
Now, pip is the Python package manager with
20466.292 -> which you can install and download Python
packages. But first, I need to install pip on
20471.49 -> my node machine. Since I told you that
the tasks that you write in a playbook
20475.96 -> get executed in the same order that you write
them, my first task is to install pip, and
20480.9 -> I'm using the apt module here. I've
given the name of the package, which is python-
20485.64 -> pip, and state equal to present. And after that
I'm installing some other related software using
20491.08 -> apt as well; I'm also installing libmysqlclient-
20496.682 -> dev. And after that, using pip, I'm
installing the MySQL-python module. Now, notice
20503.75 -> that you can consider this an orchestration
playbook, because here I'm making sure that
20509.542 -> pip gets installed first, and after pip is
installed, I'm using pip to install another
20515.67 -> Python package. So you see what we did here,
right? And then I'm going to use the database
20521.3 -> modules for creating a new user to access the
database, and then I'm creating a database
20526.622 -> named edu. For creating a MySQL user, I've
used the mysql_user database module that Ansible
20534.14 -> comes with, and this is the syntax of the mysql_user
module: you give the name of the new user,
20541.26 -> which is edureka, you mention the password,
and the priv here; it means what privileges
20547.71 -> you want to give to the new user, and
here I'm granting all privileges on all databases.
20553.75 -> And since you're creating it for the first
time, you want the state to be present. Similarly,
20559.39 -> I'm using the mysql_db module to create a database
named edu in my MySQL server. So this is
20567.21 -> the very simple syntax of using the mysql_db module:
you just give the name of the database
20573.25 -> in db equal to, and state equal to present.
So this will create a database named edu.
20579.83 -> And after that, I also need to create
a table inside the database for storing my
20584.58 -> name and email details, right? And unfortunately,
Ansible does not have any MySQL table-creating
20592.612 -> module. So what I did is use the
command module here; with the command module I'm
20598.5 -> directly going to use MySQL queries to create
a table, and the syntax is something like this,
20604.52 -> so you can write it down or remember it if
you want to use it. For that, since I'm
20609.33 -> writing a MySQL query, I started with mysql,
then -u for the user, which is edureka, and
20617.792 -> then the password, and so on. Now, after
-e you just write the query that you need to
20623.122 -> execute on the MySQL server and write it in
single quotations. So I have written the query
20628.66 -> to create a table, with the name and
email columns, and then after that you
20635.43 -> just mention the name of the database on which
you want to create this table, which is
20640.16 -> edu for me. So this is my orchestration playbook.
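Putting those pieces together, the playbook described here would look something like the following sketch (the table name reg and the password variable are stand-ins, since the demo's exact values are not fully audible):

    ---
    - name: Set up the MySQL user, database and table
      hosts: all
      remote_user: root
      tasks:
        # pip and the MySQL client headers are needed before pip-based installs
        - name: install pip and MySQL client headers
          apt:
            name: "{{ item }}"
            state: present
          with_items:
            - python-pip
            - libmysqlclient-dev
        - name: install the MySQL-python module
          pip:
            name: MySQL-python
        - name: create a MySQL user with all privileges
          mysql_user:
            name: edureka
            password: "{{ mysql_password }}"   # stand-in variable
            priv: '*.*:ALL'
            state: present
        - name: create the edu database
          mysql_db:
            name: edu
            state: present
        # Ansible has no table-creation module, so run the query directly
        - name: create the registration table
          command: mysql -u edureka -p{{ mysql_password }} -e 'CREATE TABLE IF NOT EXISTS reg (name VARCHAR(50), email VARCHAR(50));' edu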
Let me clear my screen first. The command is ansible-
20649 -> playbook and the name of your playbook,
which is the mysql module playbook. And here we go. Again,
20662.5 -> my common return values tell me that the Playbook
was run successfully, because there are no
20666.432 -> failed tasks and no unreachable hosts, and there
are changed tasks on my node machine. So now
20672.99 -> all the packages are downloaded now, my node
machine is well provisioned. It's properly
20678.412 -> orchestrated. Now. What are we waiting for?
Let's deploy our application. We'll clear
20683.12 -> the screen first. So now let me tell you what
exactly we need to do in order to deploy
20688.792 -> my application. In my case, there are just
three PHP and HTML files that I need
20695.25 -> to deploy on my node machine. In order to
display these HTML and PHP files on the
20700.99 -> web server on my node machine, what I need
to do is copy these files from
20705.782 -> my control machine to the proper location
on my node machine, and we can do that
20711.22 -> using playbooks. So let me just show you the
playbook to copy files. The name of my
20718.75 -> file is deploy website. So this is my
playbook to deploy my application and here
20728.32 -> again, I've used the three dashes and then
the name of my playbook is copy. The hosts, as
20734.762 -> you know, are going to be the test servers.
I'm using privilege escalation again,
20739.112 -> with become and become_user again, and
gather_facts is again true. And here is the list
20745.202 -> of tasks: the task is to just copy my files
from my control machine and paste them on my
20751.47 -> destination machine, which is my node machine,
and for copying I've used the copy
20756.63 -> module; the copy module is a file module that
Ansible comes with. So this is the syntax of
20763.33 -> the copy module: you just need to mention
a source, and the source is the path where my file
20769.48 -> is contained on my control machine, which
is /home/edureka/Documents, and the name
20774.06 -> of the file is index.html, and I want it
to go to /var/www/html as index.html,
20781.55 -> so I should be copying my files into
this location in order for them to be displayed
20787.032 -> on the web page. And similarly, I have copied
my other PHP files using the same copy module.
20792.6 -> I've mentioned the source and destination
and copying them to the same destination from
20796.942 -> the same source. So I don't think any of you
would have questions here. This is the
20801.39 -> easiest playbook that we have written today.
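For completeness, here is a sketch of that copy playbook, using the source and destination paths mentioned above (the two PHP file names are not spelled out in the session, so only the HTML task is shown in full):

    ---
    - name: copy
      hosts: test-servers
      become: true
      become_user: root
      gather_facts: true
      tasks:
        # copy the page from the control machine into Apache's document root
        - name: copy index.html to the web server
          copy:
            src: /home/edureka/Documents/index.html
            dest: /var/www/html/index.html
        # similar copy tasks follow for the two PHP files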
So let us deploy our application now and for
20806.702 -> that we need to run this play book and before
that we need to clear the screen because there
20811.72 -> is a lot of stuff on our screen right now.
So let's run the Playbook. And here we go,
20827.77 -> and it was very quick because there was nothing
much to do. You just have to copy files from
20831.502 -> one location to another and these are very
small files. Let us go back to our host machine
20836.702 -> and see if it's working. So we're back again
at our host machine. Let's go to my web browser
20843.15 -> to check that. So let me refresh it and there
it is. And so here is my first web page. So
20852.592 -> my application was successfully deployed.
So now let us enter our name and email here
20858.532 -> and check if it is getting entered in my database.
So let's put in our name and the email; it's
20869.63 -> xyz.com. And add it: new record created
successfully. It means that it is getting
20876.422 -> inserted into my database. Now, let's go back
and view it and there it is. So congratulations,
20884.58 -> you have successfully written playbooks to
deploy your application, to provision your
20889.49 -> node machines using playbooks, and to orchestrate
them using playbooks. Now, even though at the
20894.862 -> beginning it seemed like a huge task to do,
Ansible playbooks made it so easy. Hello
20901.8 -> everyone. This is Saurabh from Edureka. In
today's session we'll focus on what Puppet is.
20910.83 -> So without any further ado, let us move forward
and have a look at the agenda for today. First,
20914.752 -> we'll see why we need configuration management,
and the various problems that industries
20919.17 -> were facing before configuration management
was introduced. After that, we'll understand
20923.502 -> what exactly is configuration management and
we'll look at various configuration management
tools. After that, we'll focus on Puppet and we'll
20927.81 -> see the Puppet architecture along with the
various Puppet components. Finally, in our
20932.9 -> hands-on part, we'll learn how to deploy MySQL
and PHP using Puppet. So I'll move forward
and we'll see what are the various problems
20942.84 -> before configuration management. So this is
the first problem guys, let us understand
this with an example. Suppose you are a system
20953.43 -> administrator and your job is to deploy the MEAN
stack, say, on four nodes. All right, the MEAN stack
20959.362 -> is actually MongoDB, Express, AngularJS
and Node.js. So you need to deploy the MEAN stack
20963.922 -> on four nodes; that is not a big issue. You
can manually deploy that on four nodes. But
20968.792 -> what happens when your infrastructure becomes
huge? You may need to deploy the same MEAN
20973.21 -> stack, say, on hundreds of nodes. Now, how will
20973.21 -> you approach the task? You can't do it manually
because if you do it manually, it'll take
a lot of time, plus there will be wastage of
20982.98 -> resources. Along with that, there is a chance
20982.98 -> of human error. I mean, it increases the risk
of human error. All right, so we'll take the
20987.55 -> same example forward. And we'll see what are
the other problems before configuration management.
20992.102 -> Now, this is the second problem guys. So it's
fine: in the previous step you have
20997.702 -> deployed the MEAN stack on hundreds of nodes
manually. Now what happens? There is an updated
21001.952 -> version of MongoDB available, and your organization
wants to shift to that updated version. Now,
21007.702 -> how will you do that? You want to go to the
updated version of MongoDB. So what you'll
21012.52 -> do is actually go and manually update
mongodb on all the nodes in your infrastructure.
21017.442 -> Right? So again, that will take a lot of time
But now what happens? That updated version
21022.81 -> of the software has certain glitches, and your
company wants to roll back to the previous
21027.272 -> version of the software, which is mongo DB
in this case. So you want to go back to the
21032.17 -> previous version. Now, how will you do that?
Remember you have not kept the historical
21036.93 -> record of MongoDB during the update; I
mean, you have updated MongoDB manually on
21042.481 -> all the nodes, and you don't have a record of
the previous version of MongoDB. So what
21046.672 -> you need to do is go and manually
reinstall MongoDB on all the nodes. So rollback
21051.772 -> was a very painful task. I mean it used to
take a lot of time. Now. This is the third
21057.43 -> problem, guys. Over here, what happens? You have
updated MongoDB in the previous step in, say,
21062.8 -> the development environment and in the testing
environment, but when we talk about the production
21066.73 -> environment, they're still using the previous
version of MongoDB. Now, what happens? There
21071.22 -> might be certain applications
that are not compatible with the previous version
21076.39 -> of MongoDB. All right, so what happens? Developers
write code, and it works fine in their own
21081.68 -> environment, that is, on their own laptops; after that,
it works fine in testing as well. Now, when
21087.061 -> it reaches production, since they're using
the older version of MongoDB, which is not
21091.35 -> compatible with the application that the developers
have built, it won't work properly: there
21095.46 -> might be certain functions which won't work
properly in the production environment. So
21099.81 -> there is an inconsistency in the computing
environment, due to which the application might
21104.22 -> work in the development environment, but in
production it is not working properly. Now, what
21109.41 -> I'll do, I'll move forward and I'll tell you
how important configuration management is
21113.612 -> with the help of a use case. So configuration
So: configuration management at the New York Stock Exchange. This is the best example of configuration management that I can think of. What happened: a software glitch prevented the New York Stock Exchange from trading stocks for almost 90 minutes, which led to millions of dollars of loss. A new software installation caused the problem. The software was installed on 8 of its 20 trading terminals, and the system was tested out the night before; however, in the morning it failed to operate properly on the 8 terminals. So there was a need to switch back to the old software. You might think that this was a failure of the New York Stock Exchange's configuration management process, but in reality it was a success: as a result of a proper configuration management process, NYSE recovered from that situation in 90 minutes, which was pretty fast. Let me tell you, guys, had the problem continued longer, the consequences would have been more severe. So because of proper configuration management, the New York Stock Exchange prevented a loss of millions of dollars; they were able to roll back to the previous version of the software within 90 minutes.
forward and we'll see what exactly configuration
21184.99 -> management is. So what is configuration management
configuration management is basically a process
21190.88 -> that helps you to manage changes in your infrastructure
in a more systematic and structured way. If
21195.8 -> you're updating a software you keep a record
of what all things you have updated. What
21199.59 -> will change is you are making in your infrastructure
all those things and how you achieve configuration
21204.47 -> management you achieve that with the help
of a very important concept called infrastructure
21208.42 -> as code. Now. What is the infrastructure is
code infrastructure as code simply means that
21212.66 -> you're writing code for infrastructure. Let
us refer the diagram that is present in front
21216.83 -> of your screen. Now what happens in infrastructure
is code you write the code for infrastructure
21221.47 -> in one central location. You can call it a
server. You can call it a master or whatever
21226.071 -> you want to call it. All right. Now that code
is deployed onto the dev environment test
21231.452 -> environment and the product environment. Basically
your entire infrastructure. All right, whatever.
21235.34 -> No, do you want to configure your configure
that with the help of that one central location?
21240.48 -> So let us take an example. All right suppose
you want to deploy Apache Tomcat say on all
21245.692 -> of your notes. So what you'll do in one location
will write the code to install Apache tomcat
21251.35 -> and then you'll push that onto the nodes which
you want to configure. What are the advantage
What advantages do you get here? First of all, the first problem, if you can recall, was that configuring a large infrastructure was a very hectic job; but because of configuration management it becomes very easy. How does it become easy? You just need to write the code in one central location and replicate that on hundreds of nodes; it is that easy. You don't need to go and manually install or update the software on all the nodes. Now, the second problem was that you could not roll back to the previous stable version in time. But what happens here: since you have everything well documented in the central location, rolling back to the previous version is not a time-consuming task. Now, the third problem was that there was a variation, an inconsistency, across the various teams, like the dev team, test team and prod team; the computing environment was different in dev, testing and prod. But with the help of infrastructure as code, all your three environments, that is dev, test and prod, have the same computing environment. So I hope we are all clear on what configuration management is and what infrastructure as code is.
So we'll move forward and see the different types of configuration management approaches. There are two types of configuration management approaches: one is push configuration, the other is pull configuration. Let me tell you about push configuration first. In push configuration, there is one centralized server and it has all the configurations inside it. If you want to configure a certain number of nodes, say four nodes as shown in the diagram, what happens is that you push those configurations to these nodes: there are certain commands that you need to execute on that central location, and with the help of those commands, the configurations present there are pushed onto the nodes. Now let us see what happens in pull configuration. In pull configuration there is again one centralized server, but it won't push the configurations onto the nodes. What happens is that the nodes actually poll the central server, at say 5-minute or 10-minute intervals, basically at periodic intervals. So a node will poll the central server for the configurations, and after that it will pull the configurations that are there in the central server. Over here you don't need to execute any command; nodes will automatically pull all the configurations that are there in the centralized server. Puppet and Chef both use pull configuration, but when you talk about push configuration, Ansible and SaltStack use push configuration. So I'll move forward and we'll look at various configuration management tools. These are the four most widely adopted tools for configuration management. I have highlighted Puppet because in this session we are going to focus on Puppet, and it uses pull configuration. When we talk about SaltStack, it uses push configuration, and so does Ansible; Chef, like Puppet, uses pull configuration. All right, so Puppet and Chef use pull configuration, while Ansible and SaltStack use push configuration.
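To make the two approaches concrete, here is a minimal sketch (the host group and package are hypothetical, not from the slides): with a push tool like Ansible you run a command on the central server to push a change out, while with a pull tool like Puppet each node's agent fetches its configuration itself.

    # push (Ansible): the control node pushes the change to the managed hosts
    ansible webservers -m yum -a "name=httpd state=present" --become

    # pull (Puppet): run on each node; the agent polls the master and applies what it finds
    puppet agent -t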
Now, let us move forward and see what exactly Puppet is. Puppet is basically a configuration management tool that is used to deploy a particular application, configure your nodes and manage your servers: it can take your servers online and offline as required, configure them, and deploy a certain package or application onto the nodes. With the help of Puppet you can do all that with ease, and the architecture it uses is a master-slave architecture. Let us understand this with an example. This is the Puppet Master over here; all the configurations are present on it, and these are the puppet agents. These puppet agents poll the central server, the Puppet Master, at regular intervals, and whatever configurations are present, they will pull those configurations. So let us move forward and focus on the Puppet master-slave architecture. This is the master-slave architecture, guys. What happens over here: the puppet agent, or puppet node, sends facts to the Puppet Master, and these facts are basically key/value data pairs that represent some aspect of the slave's state; that aspect can be its IP address, time, operating system, or whether it's a virtual machine. Facter gathers this basic information about the puppet slave, such as hardware details, network settings, operating system type and version, IP addresses, MAC addresses, all those things. These facts are then made available in the Puppet Master's manifests as variables.
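As a small illustration of facts as variables, a manifest on the master could reference them like this (a sketch; operatingsystem and ipaddress are standard Facter facts, the message itself is made up):

    # facts gathered by Facter are available as variables in manifests
    notify { "This node runs ${::operatingsystem} and has IP ${::ipaddress}": }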
Now, the Puppet Master uses those facts that it has received from the puppet agent, or puppet node, to compile a catalog. That catalog defines how the slave should be configured: a catalog is a document that describes the desired state for each resource that the Puppet Master manages on a slave. So it is basically a compilation of all the resources that the Puppet Master applies to a given slave, as well as the relationships between those resources. The catalog is compiled by the Puppet Master and then sent back to the node, and finally the slave provides data about how it has implemented that catalog and sends back a report. So basically, the node or agent sends a report back saying the configurations are complete, and you can actually view that in the Puppet dashboard as well. Now, the connection between the node, or puppet agent, and the Puppet Master happens with the help of SSL secure encryption. All right, we'll move forward and see how the connection between the puppet master and puppet node actually happens. So this is how the puppet master and slave connection happens. First of all, the puppet slave requests the Puppet Master certificate: it sends a request for the master certificate, and once the Puppet Master receives that request, it sends the master certificate. Once the puppet slave has received the master certificate, the Puppet Master in turn sends a request to the slave for its own certificate. The puppet slave then generates its own certificate and sends it to the Puppet Master. Now the Puppet Master has to sign that certificate. Once it has signed the certificate, the puppet slave can actually request the data, all the configurations, and finally the Puppet Master will send those configurations to the puppet slave. This is how the puppet master and slave communicate.
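In terms of commands, the handshake we just discussed boils down to something like this on the two machines (a minimal sketch; the agent certificate name is hypothetical):

    # on the slave: generate a key and send a signing request (exits if waitforcert is disabled)
    puppet agent -t

    # on the master: list pending requests, then sign the slave's certificate
    puppet cert list
    puppet cert sign agent1.example.com

    # on the slave again: the catalog can now be fetched over the SSL connection
    puppet agent -t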
Now let me show you practically how this happens. I have installed puppet master and puppet slave on my CentOS machines; I'm using two virtual machines, one for the puppet master and another for the puppet slave. So let us move forward and execute this practically. This is my Puppet Master virtual machine. Over here I've already created a puppet master certificate, but there is no puppet agent certificate right now. How will you confirm that? There is a command, puppet cert list, and it will display all the certificates that are pending in the puppet master, I mean pending for approval from the master. Currently there are no certificates available. So what I'll do is go to my puppet agent and fetch the Puppet Master certificate which I generated earlier, and at the same time generate the puppet agent certificate and send it to the master for signing. So this is my puppet agent virtual machine. Over here, as I've told you earlier, I'll generate a puppet agent certificate and at the same time fetch the Puppet Master certificate; that agent certificate will be sent to the puppet master, and it will sign the agent certificate. Let us proceed with that: for that I'll type puppet agent -t, and here we go. So it is creating a new SSL key for the puppet agent, as you can see in the logs itself; it has sent a certificate request, and this is the fingerprint for that. Then: exiting, no certificate found and waitforcert is disabled. So what I need to do is go back to my Puppet Master virtual machine and sign this particular certificate that was generated by the puppet agent. Now over here, if you want to see the list of certificates, what do you need to do? You need to type puppet cert list, as I told you earlier. So let us see what certificates are there now. As you can see, there is a certificate that has been sent by the puppet agent, and I need to sign this particular certificate. For that I'll type puppet cert sign and the name of the certificate, that is, the puppet agent's, and here we go. So that has successfully signed the certificate that was requested by the puppet agent. Now what I'll do is go back to my puppet agent virtual machine, and over there I'll update the changes that have been made on the Puppet Master. Let me first clear my terminal, and now again I'll type puppet agent -t. All right, so we have successfully established a secure connection between the puppet master and the puppet agent.
Now let me give you a quick recap of what we have discussed so far. First we saw the various problems before configuration management; we focused on the three major problems that were there. After that we saw how important configuration management is with the help of the New York Stock Exchange use case. Then we saw what exactly configuration management is and what we mean by infrastructure as code. We also looked at various configuration management tools, namely Chef, Puppet, Ansible and SaltStack, and after that we understood what exactly Puppet is, the master-slave architecture it has, and how the puppet master and puppet slave communicate. All right, so I'll move forward and we'll see what use case I have for you today. What we are going to do in today's session: we are going to deploy MySQL and PHP using Puppet. For that, I'll first download the predefined modules for MySQL and PHP that are there on the Puppet Forge. Those modules will actually define the two classes, that is, PHP and MySQL. Now, you cannot deploy a class directly onto the nodes; in the puppet manifest you need to declare whatever classes you have defined. I'll tell you what manifests and modules are, you don't need to worry about that; I'm just giving a general overview of what we are going to do in today's session. So you just need to declare those two classes, PHP and MySQL, and finally deploy that onto the nodes. It is that simple, guys. As you can see, there will be code for PHP and MySQL, and from the Puppet Master it will be deployed onto the nodes, the puppet agents. We'll move forward and see the various phases in which we'll be implementing the use case.
Alright. So first we'll define a class. Classes are nothing but collections of various resources. How will we do that? We'll do it with the help of modules: we'll download a module from the Puppet Forge, and that module defines the two classes, as I've told you, PHP and MySQL. Then I'm going to declare those classes in the manifest, and finally deploy them onto the nodes. So let us move forward; but before actually doing this, it is very important for you to understand certain code basics of Puppet, like what resources, classes, manifests and modules are. We'll move forward and understand those things one by one. Now, I'll explain resources, classes, manifests and modules separately, but before that, let me just give you an overview of what these things are and how they work together. So there are certain resources: a user is a resource, a file is a resource; basically anything that is there can be considered a resource. Multiple resources combine together to form a class. Now, this class, you can declare it in any of the manifests that you want; you can declare it in multiple manifests. And then finally you can bundle all these manifests together to form a module. Now, let me tell you, guys, it is not mandatory to combine resources into a class; you can actually deploy resources directly. It is just good practice to combine resources in the form of classes, because it becomes easier for you to manage. The same goes for manifests as well, and I'll tell you how to do that too: you can write puppet code and deploy it onto the nodes directly, and it is not necessary to bundle the manifests you are using into modules. But if you do that, it becomes more manageable and more structured; it becomes easier for you to handle multiple manifests. So let us move forward and have a look at what exactly resources are and what classes are in Puppet.
Now, what are resources? Anything that is there is a resource: a user is a resource, as I've told you, a file can be a resource; basically anything that is there can be considered a resource. Puppet code is composed primarily of resource declarations. A resource describes something about the state of the system, such as that a certain user or file should exist, or that a package should be installed. Here we have the syntax of a resource: first you write the type of the resource, then you give it a name in single quotes, and then the various attributes that you want to define. In the example I've shown you, it will create a file, /etc/inetd.conf, and the ensure attribute will make sure that it is present. So let us execute this practically, guys. I'll again go back to my CentOS virtual machine. Over here, I'll use the gedit editor; you can use whatever editor you want. I'll type the path for my manifests directory, and in this directory I'll define a file with the .pp extension; I'll just name it site.pp, and here we go. Now, the resource example that I've shown you on the slide, I'll just write the same example and let us see what happens: file, open the braces, now give the path, /etc/inetd.conf, then a colon, and enter. Now I'm going to write the attribute: I'm going to make sure that it is present, ensure => present, so the file /etc/inetd.conf is created; then a comma, and now close the braces, save it and close it.
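For reference, the resource we just typed, written out cleanly (reconstructed from the dictation), is:

    # /etc/puppet/manifests/site.pp -- a single file resource
    file { '/etc/inetd.conf':
      ensure => present,
    }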
Now, what do you need to do? You need to go to the puppet agent once more, and over there I'm going to execute the puppet agent -t command, which will pull the changes made on the Puppet Master. So over here I'll use the puppet agent -t command, and let us see if the file inetd.conf is created or not. All right, so it has done it successfully. Now, just to confirm that, I'll use the ls command: I'll type ls /etc/inetd.conf, and as you can see, it has been created. So we have understood what exactly resources are in Puppet. Now let us see what classes are. Classes are nothing but groups of resources: you group multiple resources together to form one single class, and you can declare that class in multiple manifests, as we have seen earlier. Here is its syntax: first you write class, then give a name to that class, open the braces, write the code in the body, and then close the braces. It's very simple, and it is pretty much similar to other coding languages; if you have come across any other coding language, it is pretty much similar to the classes you define over there as well.
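To make that concrete, here is a minimal sketch of a class bundling two related resources (the class name and resources are illustrative, not from the slides):

    # a class groups resources that belong together
    class webserver {
      package { 'httpd':
        ensure => installed,
      }
      service { 'httpd':
        ensure  => running,
        require => Package['httpd'],   # start the service only after the package is installed
      }
    }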
All right, so we have a question from a viewer; he's asking: can you specify what exactly the difference between a resource and a class is? Classes are actually nothing but bundles of resources: all those resources grouped together form a class. You can say a resource describes a single file or package, while a class describes everything needed to configure an entire service or application. So we'll move forward and see what manifests are. This is the puppet manifest; now, what exactly is it? Every slave has its configuration details in the puppet master, written in the native Puppet language. These details are written in a language that Puppet can understand, and that language is termed manifests. All the puppet programs are basically termed manifests. So, for example, you can write a manifest on the puppet master that creates a file and installs the Apache server on the puppet slaves connected to that puppet master. As you can see, I've given you an example over here: it uses a class called apache, and this class is defined with the help of predefined modules that are there on the Puppet Forge, along with various attributes, like defining the virtual host, the port and the root directory. Basically, there are two ways to declare a class in a puppet manifest: either you just write include and the name of the class, or, if you don't want to use the default attributes of that class, you can change them by using this particular syntax: you write class, open the braces, then the class name, a colon, then whatever attributes you want to set apart from the defaults, and then finally close the braces.
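Written out, those two declaration styles look like this (a small sketch; the apache class comes from a Forge module and the attribute shown is illustrative):

    # way 1: declare the class with its default attributes
    include apache

    # way 2: resource-like declaration, overriding a default attribute
    class { 'apache':
      default_vhost => false,
    }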
All right, so now I'll execute a manifest practically that will install Apache on my nodes. I need to deploy Apache using Puppet, so I need to write the code to deploy Apache in the manifests directory. I've already created a file with the .pp extension, if you remember, when I was talking about resources; so I'll use the same file, site.pp, and I'll write the code to deploy Apache. I'll again use the gedit editor, you can use whatever editor you feel like: gedit /etc/puppet/manifests/site.pp, and here we go. Now over here I'll just delete the resource that I defined earlier; I like my screen to be nice and clean. Now I'll write the code to deploy Apache. For that I'll type package, then 'httpd', then a colon. Now I need to ensure it is installed, so for that I'll type ensure => installed, and give a comma. Now I need to start this Apache service; for that I'll type service, then 'httpd', with ensure => running, then a comma. Now close the braces, save it and close it. Let me clear my terminal.
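For reference, the complete manifest from that dictation is just these two resources (reconstructed):

    # /etc/puppet/manifests/site.pp -- install and start Apache
    package { 'httpd':
      ensure => installed,
    }
    service { 'httpd':
      ensure => running,
    }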
Now what I'll do is go to my puppet agent; from there it will pull the configurations that are present on my Puppet Master. What happens is that the puppet agent periodically pulls the configuration from the Puppet Master; the interval is around 30 minutes, so after every half an hour the puppet agent pulls the configuration from the Puppet Master, and you can configure that interval as well.
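That polling interval is controlled by the agent's runinterval setting; a minimal sketch of how you could change it (runinterval is a standard Puppet setting, and the value shown is just the default):

    # /etc/puppet/puppet.conf on the agent
    [agent]
    runinterval = 1800    # seconds; 30 minutes is the default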
If you don't want to wait, just throw in the command puppet agent -t and it will immediately pull the configurations present on the puppet master. So for that I'll go to my puppet agent virtual machine. Here, I'll type the command puppet agent -t, and let us see what happens. So it is done now. Just to confirm, I'll open my browser, and over here I'll type the hostname of my machine, which is localhost, and let us see if Apache is installed. All right, so Apache has been successfully installed.
Now let us go back to our slides and see what exactly modules are. So, what are puppet modules? A puppet module can be considered a self-contained bundle of code and data. To put it another way, we can say that a puppet module is a collection of manifests and data such as facts, files, templates, etc., and they have a specific directory structure. Modules are basically used for organizing your puppet code, because they allow you to split your code into multiple manifests. So they provide you a proper structure to manage your manifests, because in real time you'll be having multiple manifests, and to manage those manifests it is always a good practice to bundle them together in the form of modules. By default, puppet modules are present in the directory /etc/puppet/modules; whatever modules you download from the Puppet Forge will be present in this modules directory. Even if you create your own modules, you have to create them in this particular directory, that is, /etc/puppet/modules.
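For orientation, a typical module follows the standard layout below (a sketch; the module name is illustrative):

    /etc/puppet/modules/apache/
        manifests/
            init.pp       # defines class apache
        files/            # static files the module serves
        templates/        # ERB templates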
So now let us start the most awaited topic of today's session, that is, deploying PHP and MySQL using Puppet. What I'm going to do is download two modules, one for PHP and another for MySQL. Those two modules will actually define the PHP and MySQL classes for me. After that I need to declare those classes in the manifest, the site.pp file present in the puppet manifests directory. And then finally I'll throw in the command puppet agent -t on my agent, and it will pull those configurations, and PHP and MySQL will be deployed. So basically, when you download a module you are defining a class; you cannot directly deploy the class, you need to declare it in the manifest. I'll again go back to my CentOS box. Over here, I'll download the MySQL module from the Puppet Forge. For that I'll type puppet module install puppetlabs-mysql --version 3.10.0, and here we go. So what is happening here: as you can see, it is saying preparing to install into /etc/puppet/modules, so it will be installed in that directory; apart from that, it is downloading this from forgeapi.puppetlabs.com. So it is done now; that means we have successfully installed the MySQL module from the Puppet Forge. Let me just clear my terminal, and now I'll install the PHP module. For that I'll type puppet module install, then the PHP module name, with --version 4.0.0-beta1, and here we go. So it is done now; that means we have successfully installed two modules, one for PHP and the other for MySQL. Let me show you where they are present on my machine: I'll just hit an ls command on the puppet modules directory, and here we go. So as you can see, there are the MySQL module and the PHP module that we have just downloaded from the Puppet Forge.
Now, what I need to do: I have the MySQL and PHP classes defined, but I need to declare them in the site.pp file present in the puppet manifests directory. For that I'll first use the gedit editor; you can use whatever editor you want, I'm saying it again and again, but I personally prefer gedit. So: gedit /etc/puppet/manifests/site.pp, and here we go. Now, as I told you earlier, I like my screen to be clean and nice, so I'll just remove the previous code, and over here I'll declare the two classes, MySQL and PHP: include mysql::server, and on the next line I'll include the PHP class; for that I'll type include php. Just save it, now close it. Let me clear my terminal.
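So after this step, the whole site.pp is just these two declarations (the exact class names come from the downloaded modules):

    # /etc/puppet/manifests/site.pp
    include mysql::server
    include php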
Now what I'll do is go to my puppet agent, and from there I'll hit the command puppet agent -t, which will pull the configurations from the Puppet Master. So let us just proceed with that. Let me first clear my terminal, and now I'll type puppet agent -t, and here we go. So we have successfully deployed PHP and MySQL using Puppet. Let me just clear my terminal and confirm it by typing mysql -V; this will display the version. Now let me just exit from here, and I'll show you the PHP version as well: I'll type php --version, and here we go. Alright, so this means that we have successfully installed PHP and MySQL using Puppet. Now let me just give you a quick recap of what we have discussed. First we saw why we need configuration management and the various problems that were there before configuration management, and we understood the importance of configuration management with the New York Stock Exchange use case. After that we saw what exactly configuration management is, and we understood a very important concept called infrastructure as code. Then we focused on the various types of configuration management approaches, namely push and pull, and we saw various configuration management tools, namely Puppet, Chef, Ansible and SaltStack. After that we focused on Puppet: we saw what exactly Puppet is, its master-slave architecture, and how the puppet master and slave communicate. Then we understood the Puppet code basics: what resources, classes, manifests and modules are. And finally, in the hands-on part, I showed you how to deploy PHP and MySQL using Puppet.
My name is Sato, and today we'll be talking about Nagios. So let's move forward and have a look at the agenda for today. This is what we'll be discussing: we'll begin by understanding why we need continuous monitoring, what continuous monitoring is, and what the various tools available for continuous monitoring are. Then we are going to focus on Nagios: we'll look at its architecture and how it works. We are also going to look at one case study, and finally, in the demo, I'll be showing you how you can monitor a remote host using NRPE, which is nothing but the Nagios Remote Plugin Executor. I hope you are all clear on the agenda. Let's move forward and start by understanding why we need continuous monitoring.
Well, there are multiple reasons, guys, but I've mentioned four very important reasons why we need continuous monitoring, so let's have a look at each of these one by one. The first one is failure of CI/CD pipelines. Since DevOps is a buzzword in the industry right now, and most organizations are using DevOps practices, they are obviously implementing CI/CD pipelines, also called digital pipelines. The idea behind these CI/CD pipelines is to make sure that releases happen more frequently and more stably, in an automated fashion, right? Because there are a lot of competitors you might have in the market and you want to release your product before them, agility is very, very important, and that's why we use CI/CD pipelines. Now, when you implement such a pipeline, you realize that there can't be any manual intervention at any step in the process, or the entire pipeline slows down and you basically defeat its entire purpose: manual monitoring slows down your deployment pipeline and increases the risk of performance problems propagating into production. So I hope you have understood this. If you notice the three points that I've mentioned, they're pretty self-explanatory: rapid introduction of performance problems and errors, because you are releasing software more frequently; rapid introduction of new endpoints causing monitoring issues, again pretty self-explanatory; and then lengthy root cause analysis as the number of services expands, because you are releasing software more frequently, so the number of services is going to increase, and a lengthy root cause analysis means you lose a lot of time. So let's move forward and look at the next reason why we need continuous monitoring.
For example, we have an application which is live; we have deployed it on the production server. Now we are running APM solutions, which is basically application performance monitoring: we are monitoring our application, how its performance is, whether there is any downtime, all those things. And then we figure out certain issues with our application, say performance issues. Now, to roll back and incorporate changes to remove those bugs, developers are going to take some time, because the process is huge, because your application is already live and you cannot afford any downtime. Now imagine: what if, before releasing the software, I could run those APM solutions on a pre-production server, which is nothing but a replica of my production server, to figure out how my application is going to perform before it actually goes live? That way, whatever issues are there, developers will be notified beforehand and they can take corrective action. So I hope you have understood my point. The next thing is that server health cannot be compromised at any cost. I think it's pretty obvious, guys: your application is running on a server, and you cannot afford any downtime on that particular server, or an increase in its response time either. So you require some sort of monitoring system to check your server health as well. What if your application goes down because your server isn't responding? You don't want any scenario like that. In a world like today, where everything is so dynamic and the competition is growing exponentially, you want to give the best service to your customers, and server health is very, very important because that's where your application is running, guys; I don't think I have to stress this too much. So we basically require continuous monitoring of the server as well.
Now let me just give you a quick recap of the things we have discussed. We understood why we need continuous monitoring by looking at three or four examples. The first thing: we saw what the issues with a CI/CD pipeline are; we cannot have any sort of manual intervention for monitoring in such a pipeline, because you're going to defeat the purpose of the pipeline. Then we saw that developers have to be notified about the performance issues of the application before releasing it to the market. Then we saw that server health cannot be compromised at any cost. So these are the three major reasons why I think continuous monitoring is very important for most organizations, although there are many other reasons as well. Now let's move forward and understand what exactly continuous monitoring is, because we just talked about a lot of scenarios where manual monitoring, or traditional monitoring processes, are not going to be enough. So let us understand what exactly continuous monitoring is and how it is different from the traditional process. Basically, continuous monitoring tools resolve any sort of system error before it has a negative impact on your business: it can be low memory, an unreachable server, etc. Apart from that, they can also monitor your business processes and your application, as well as your server, which we have just discussed. So continuous monitoring is basically an effective system where the entire IT infrastructure, from your application to your business processes to your server, is monitored in an ongoing way and in an automated fashion; that is basically the crux of continuous monitoring.
So these are the multiple phases given to us by NIST for implementing continuous monitoring; NIST is basically the National Institute of Standards and Technology. Let me just take you through each of these stages. The first is Define: you develop a monitoring strategy. Then you are going to establish measures and metrics, and you are also going to establish monitoring and assessment frequencies, that is, how frequently you are going to monitor. Then you are going to implement whatever you have established, the plan that you have laid down. Then you're going to analyze data and report findings, so whatever issues are there, you're going to find them. Post that, you're going to respond to and mitigate those errors, and finally you're going to review and update the application, or whatever you were monitoring.
Now let us move forward; we have also been given another set of phases involved in continuous monitoring, so let us have a look at those one by one. The first is continuous discovery. Continuous discovery is basically discovering and maintaining a near real-time inventory of all network and information assets, including hardware and software; if I have to give an example, basically identifying and tracking confidential and critical data stored on desktops, laptops and servers. Next comes continuous assessment. It basically means automatically scanning and comparing information assets against industry and data repositories to determine vulnerabilities; that's the entire point of continuous assessment. One way to do that is prioritizing findings and providing detailed reports, by department, platform, network, asset and vulnerability type. Next comes continuous audit: continuously evaluating your client, server and network device configurations and comparing them with standard policies is basically what continuous audit is. What you're going to do here is gain insights into problematic controls using patterns and access permissions of sensitive data. Then comes continuous patching. It means automatically deploying and updating software to eliminate vulnerabilities and maintain compliance; if I have to give you an example, maybe correcting configuration settings, including network access, and provisioning software according to end-user roles and policies, all those things. Next comes continuous reporting: aggregating the scanning results from different departments, scan types and organizations into one central repository is basically what continuous reporting is, for automatically analyzing and correlating unusual activities and compliance with regulations. So I think it's pretty easy to understand. If I have to repeat it once more, I would say: continuous discovery is discovering and maintaining a near real-time inventory of all the network and information assets, whether it's your hardware or software. Continuous assessment means automatically scanning and comparing the information assets from continuous discovery against industry and data repositories to determine vulnerabilities. Continuous audit is continuously evaluating your client, server and network device configurations and comparing them with standards and policies. Continuous patching is automatically deploying and updating software to eliminate vulnerabilities and maintain compliance; patching is basically your remedy, where you actually respond to the threats or vulnerabilities that you see in your application. Continuous reporting is aggregating scanning results from different departments, scan types and organizations into one central repository. So these are nothing but the various phases involved in continuous monitoring.
have a look at various continents monitoring
23376.43 -> tools available in the market. So these are
pretty famous tools. I think a lot of you
23379.96 -> might have heard about these tools one is
Amazon cloudwatch, which is nothing but a
23383.24 -> service provided to us by AWS Splunk is also
very famous. And we have e LK and argue ways
23388.42 -> right CLK is basically elastic log stash and
Cabana in this session. We are going to focus
23392.612 -> on argue is because it's a pretty mature to
lot of companies have used this tool and it
23397.51 -> has a major market share as well and it's
basically well suited for your entire it Whether
23402.64 -> it's your application or server or even it's
your business process now, let us have a look
23407.112 -> at what exactly is not your ways and how it
works. So now I give which is basically a
23411.22 -> tool used for continuous monitoring of systems
your application your services and business
23416.17 -> processes Etc in a devops culture right now
in the event of failure. Nagios can alert
23421.38 -> technical staff of the problem allowing them
to begin a remedy ation processes before outages
23426.74 -> affect business processes and users or customers.
So I hope you are getting my point. It can
23431.47 -> allow the technical staff of the problem and
they can begin remediation processes before
23436.59 -> outages affect their business process or end
users or customers right with the argues.
23441.112 -> You don't have to explain why an answer in
infrastructure outage affect your organization's
23445.182 -> bottom line, right? So let us focus on the
diagram that is there in front of your screen.
23449.362 -> So now use basically runs on a server usually
as a Daemon or a service and it periodically
23454.89 -> runs plugins residing in the same server what
they do they basically contact hosts on servers
23460.08 -> or on your network or on the Internet. Now
one can view the status information using
23464.792 -> the web interface and you can also receive
email or SMS notification if something goes
23469.07 -> wrong, right so basically nagas Damon behaves
like a scheduler that runs certain scripts
23474.48 -> at certain moments. It stores the results
of those cribs and we'll run other scripts
23480.39 -> if these results change. I hope you are getting
my point here right now. If you're wondering
23485.272 -> what our plugins of these are nothing but
compiled executables or scripts. It can be
23490.65 -> pearls great shell script Etc that can run
from a command line to check the status of
23494.91 -> a host or a service noun argue is uses the
results from the plugins to determine the
23498.97 -> current status of the host. And so this is
on your network. Now, let us see various features
Now, let us see the various features of Nagios; let me just take you through these features one by one. It's pretty scalable, secure and manageable. It has a good log and database system. It automatically sends alerts, which we just saw. It detects network errors and server crashes. It has easy plugin writing: you can write your own plugins based on your requirements, your business needs. You can monitor your business processes and IT infrastructure with a single pass, guys. Issues can be fixed automatically: if you have configured it in such a way, then definitely you can fix those issues automatically. And it also has support for implementing redundant monitoring hosts. So I hope you have understood these features; there are many more, but these are the pretty attractive ones, and the reason Nagios is so popular is these features.
Let us now discuss the architecture of Nagios in detail. Basically, Nagios has a server-agent architecture. Usually, in a network, the Nagios server is running on a host, which we just saw in the previous diagram; so consider this as my host. The Nagios server is running on a host, and plugins interact with local and remote hosts: so here we have plugins, and these will interact with the local resources or services, and they will also interact with the remote resources, services or hosts. Now, these plugins will send the information to the scheduler, which will display it in the GUI. Let me repeat it again: Nagios is built on a server-agent architecture. Usually the Nagios server is running on a host, and the plugins interact with the local host and services, or even remote hosts and services. These plugins then send the information to the Nagios process scheduler, which will display it on the web interface, and if something goes wrong, the concerned teams will be notified via SMS or email. So I think we have covered quite a lot of theory, so let me just go ahead and open my CentOS virtual machine, where I've already installed Nagios. Let me just open my CentOS virtual machine first.
So this is my CentOS virtual machine, guys, and this is how the Nagios dashboard looks. I'm running it at port 8000; you can run it wherever you want, and I've explained how you can install it in the installation video. Now, if you notice, there are a lot of options on the left-hand side; you can go ahead and play around with them, you'll get a better idea, but let me just focus on a few important ones. So here we have a map option: if you click on that, you can see that there is a localhost and a remote host as well. My Nagios process is monitoring both the localhost and the remote host; the remote host is currently down, that's why you see it like this, and when it's running I'll show you how it looks. Now, if I go ahead and click on Hosts, you will see all the hosts that I'm currently monitoring: I'm monitoring edureka and localhost. edureka is basically a remote server, and localhost is the one my Nagios server is currently running on; so obviously it is up, and the other server is down. If I click on Services, you can see the services that I'm monitoring: for my remote host I'm monitoring CPU load, ping and SSH, and for my localhost I'm watching current load, current users, HTTP, ping, root partition, SSH, swap usage and total processes. You can add as many services as you want; all you have to do is change the hosts.cfg file, which I'm going to show you later.
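To give you an idea of what goes into that file, here is a sketch of one host and one service definition (the address is hypothetical; the service mirrors the CPU load check on my dashboard):

    # hosts.cfg -- one remote host and one service checked through NRPE
    define host{
        use          linux-server
        host_name    edureka
        alias        Remote Linux Host
        address      192.168.1.10
    }

    define service{
        use                  generic-service
        host_name            edureka
        service_description  CPU Load
        check_command        check_nrpe!check_load
    }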
our slides will continue from there. So let
23690.241 -> me just give you a small recap of what all
things we have discussed. So we first saw
23694.032 -> why we need continuous monitoring. We saw
various reasons why Industries need continuous
23698.532 -> monitoring and how it is different from the
traditional monitoring systems. Then we saw
23702.48 -> what is exactly continuous monitoring and
what are the various phases involved in implementing
23706.33 -> a continuous monitoring strategy. Then we
saw what are the various continuous monitoring
23709.9 -> tools available in the market and we focus
on argue as we saw what is not gue base how
23715.112 -> it works? What is its architecture right.
Now we're going to talk about something called
23719.522 -> is n RP e nagios remote plug-in executor of
which is basically used for monitoring remote
23725.09 -> Linux or Unix machines. So it'll allow you
to execute nagios plugins on those remote
23730.07 -> machines. Now the main reason for doing this
is to allow nog you wish to monitor local
23733.82 -> resources, you know, like CPU load memory
usage Etc on remote machines now since these
23739.34 -> public resources are not usually exposed to
external machines and agent like NRP must
23744.522 -> be installed on the remote Linux or Unix machines.
So even I have installed that in my Center
23749.16 -> ice box, that's why I was able to monitor
the remote Linux host that I'm talking about.
23753.412 -> Also. If you check out my nagas installation
video, I have also explained how you can install
23757.51 -> NRP now if you notice the diagram here, so
what we have is basically the Jake underscore
23762.89 -> n RP plug-in residing on the local monitoring
machine. This is your local monitoring machine,
23768.542 -> which we just saw right? So this is where
mine argue our server is now the Czech underscore
23773.07 -> in RP plug-in resides in a local monitoring
machine where you're not arguing over is right.
23777.8 -> So the one which we saw is basically my local
machine or you can say where my Naga server
23782 -> is, right? So this check underscoring RP plug-in
resides on that particular machine now this
23787.14 -> NRP Daemon which you can see in the diagram
runs on remote machine the remote Linux or
23792.34 -> Unix machine which in my case was edureka
if you remember and since I didn't start that
23796.282 -> machine so it was down right so that NRP Damon
will run on that particular machine now, there
23801.82 -> is a secure socket layer SSL connection between
monitoring host and the remote host you can
23807 -> see it in the diagram as well the SSL connection,
right? So what it is doing it is checking
23811.06 -> the disk space load HTTP FTP remote services
on the other host site then these are local
23816.73 -> resources and services. So basically this
is how an RP Works guys. Do you have and check
23820.96 -> underscore an Plug in designing in the host
machine. You have NRP Daemon running on the
23825.772 -> remote machine. There's an SSL connection,
right? Yeah, you have SSL connection and this
23830.8 -> NRP plug-in basically helps us to monitor
that remote machine. That's how it works.
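On the Nagios server side, the check_nrpe plugin is wired in through a command definition like this (the standard definition from the NRPE documentation; your paths may differ):

    # /usr/local/nagios/etc/objects/commands.cfg
    define command{
        command_name    check_nrpe
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
    }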
Let's look at one very interesting case study. This is from Bitnetix, and I found it on the Nagios website itself; so if you want, go ahead and check out their website as well, they have pretty cool case studies, and there are a lot of other case studies on their website too. Bitnetix basically provides outsourced IT management and consulting to nonprofits and small-to-medium businesses. Now, Bitnetix got a project where they were supposed to monitor an online store for an e-commerce retailer with a billion-dollar annual revenue, which is huge, guys. They were not only supposed to monitor the store; they also needed to ensure that the cart and checkout functionality was working fine, and they were supposed to check for website defacement and notify the necessary staff if anything went wrong. Seems like an easy task, but let us see the problems that Bitnetix faced. Bitnetix hit a roadblock upon realizing that the client's data center was located in New Jersey, more than 500 miles away from their staff in New York, right?
There was a distance of 500 miles between
23897.51 -> their their staff is located and the data
center. Now, let us see what are the problems
23901.282 -> they face because of this now the two areas
needed a unique but at the same time a comprehensive
23906.33 -> monitoring for their Dev test and prod environment
of the same platform, right and the next challenge
23912.032 -> was monitoring would be hampered by the firewall
restrictions between different applications
23916.112 -> sites functions Etc. So I think you have a
lot of you know about this firewalls is basically
23920.362 -> sometimes can be a nightmare right apart from
that most of the notification that were sent
23925.09 -> to the client what ignored because mostly
those are false positive, right? So the client
23929.102 -> didn't bother to even check those notifications
Now, what was the solution? The first solution they thought of was adding SSH firewall rules for the Network Operations Center personnel and equipment. The second was analyzing web pages to see if there was any problem with their content. The third, and a very important point, was converting notifications to Nagios alerts, and the false-positive problem we saw was completely removed with this escalation logic: they were converting notifications into Nagios alerts and escalations with specific time periods for different groups, right? I hope you are getting my point here. Next was configuring event handlers to restart services before notification, which was basically a fix for 90% of the issues, and using Nagios Core on multiple servers at the NOC facility. Each Nagios worker was deployed at the application level with direct access to the host. So whatever Nagios worker or agent or remote machine we have was deployed at the application level and had direct access to the host, or the master, whatever you want to call it. And they implemented the same architecture for the production, quality assurance, staging, and development environments; you can see a sketch of the escalation and event-handler idea below.
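As a rough illustration of those two ideas, here is a hedged sketch in Nagios object-configuration syntax; the host name, contact group, and restart-httpd command are hypothetical, not taken from the case study:

    # Escalation sketch: only from the 3rd notification onward does the alert
    # go to the NOC group, which filters out one-off false positives.
    define serviceescalation {
        host_name             store-web        ; hypothetical host
        service_description   HTTP
        first_notification    3
        last_notification     0                ; 0 = keep escalating until resolved
        notification_interval 30
        contact_groups        noc-admins       ; hypothetical contact group
    }

    # Event-handler sketch: try restarting the web service before anyone is paged.
    define service {
        use                   generic-service
        host_name             store-web
        service_description   HTTP
        check_command         check_http
        event_handler         restart-httpd    ; hypothetical restart command
    }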
Now, let's see what the result was. Because of this, there was a dramatic reduction in notifications, thanks to the event handlers' new configuration. Then there was an increase in uptime from 85% to nearly 98%, which is significant, guys, right? Then they saw a dramatic reduction in false positives because of the escalation logic that I was just talking about. The fourth point is eliminating the need to log into multiple boxes and change configuration files, and that happened because the Nagios configuration was maintained in a central repository, or a central master, and could be pushed automatically to all the slaves, to all the servers or slaves or agents, whatever you want to call them. So this was the result of using Nagios.
Now it's time to check out a demo where what I'll be doing is monitoring a couple of services, actually more than a couple of services, of a remote Linux machine through my Nagios host, which I just showed you, right? So from there, I'll be monitoring a remote Linux host called edureka, and I'll be monitoring like three or four services; you can have whatever you want. Let me just show you what the process is, once you have Nagios installed, to make sure that you have a remote host or remote machine being monitored by your Nagios host. Now, in order to execute this demo which I'm going to show you, you must have the LAMP stack on your system, right? Linux, Apache, MySQL, and PHP. And I'm going to use CentOS 7 here. Let me just quickly open my CentOS virtual machine and we'll proceed from there.
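If you still need the LAMP pieces, a minimal sketch for CentOS 7 might look like this; the package names are the stock CentOS 7 ones, and are an assumption about your setup:

    # Install Apache, MariaDB (the MySQL drop-in on CentOS 7), and PHP.
    sudo yum install -y httpd mariadb-server php php-mysql
    # Start the services now and have them come up at boot.
    sudo systemctl start httpd mariadb
    sudo systemctl enable httpd mariadb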
So guys, this is my CentOS VirtualBox VM where I've already installed Nagios, as I've told you earlier as well. This is where my Nagios host is running, or you can say the Nagios server is running, and you can see the dashboard in front of your screen as well, right? So let me just quickly open the terminal first and clear the screen. Let me show you where I've installed Nagios; this is the path, right? If you notice in front of your screen, it's in /usr/local/nagios. What I can do is just clear the screen and show you what all the directories inside it are. So we can go inside this etc directory, and inside that I'm going to go inside the objects directory.
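On a default source install, that walk looks something like this (the path assumes the standard /usr/local/nagios prefix):

    cd /usr/local/nagios/etc/objects
    ls    # commands.cfg, templates.cfg, localhost.cfg, and so on live here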
Right? So why I'm doing this is basically that if I want to add any command, for example the check_nrpe command, this is where it goes; that's how I'm going to monitor my remote Linux host, if you remember from the diagram, right? So that's what I'm going to do: I'm going to add that particular command. I've already done that, so let me just show you how it looks. Just type gedit, or you can choose whatever editor you like, and open the commands.cfg file. Let me just open it. So these are the various commands that I was talking about. Now, you can just have a look at all these commands. This one is basically to notify the host admin by email if anything goes down, if anything goes wrong on the host. This one is for services; basically, it'll notify by email if there's any problem with a service. This one will check if my host machine is alive, I mean, is it up and running? Now, this command is basically to check the disk space, like the local disk, then the load, right? You can see all of these things here: swap, FTP. So I've added these commands, and you can have a look at all of them here. The last command you see I've added manually, because all the other commands come by default once you install Nagios, but check_nrpe, which I'm highlighting right now with my cursor, is something which I have added in order to make sure that I can monitor the remote Linux host. Now, let me just go ahead and save this, right?
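For reference, the check_nrpe command definition that gets added to commands.cfg is conventionally the following; $USER1$ and $HOSTADDRESS$ are standard Nagios macros, and I'm assuming the plugin lives in the default libexec directory:

    # 'check_nrpe' command definition (added manually to commands.cfg)
    define command {
        command_name  check_nrpe
        command_line  $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
    }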
Let me clear my screen again and I'll go back to my Nagios directory. Let me share my screen again. Now, basically what this does is allow you to use the check_nrpe command in your Nagios service definitions, right? Now, what we need to do is update the NRPE configuration file, so use your favorite editor and open nrpe.cfg, which you will find in this same etc directory on the machine running the NRPE daemon. So all I have to do is first hit ls, and then I can just check out the etc directory. Now, if you notice, there is an nrpe.cfg file, right? I've already edited it, so I'll just go ahead and show you with the help of gedit, or you can use whatever editor you prefer. Now, over here, you need to find the allowed_hosts directive and add the private IP address of your Nagios server to the comma-delimited list; if you scroll down, you will find something called allowed_hosts, right? So just add a comma and then the IP address of the Nagios server that should be allowed to contact this machine. Let me just open it once more; I'm going to use sudo because I don't have the privileges otherwise. Now, in this allowed_hosts directive, all I have to do is add a comma and the IP address, which in my setup is 192.168.1.21. Just go ahead, save it, come back, clear the terminal. Now save and exit. This configures NRPE to accept requests from your Nagios server over its private IP address, right? And then just go ahead and restart NRPE to put the changes into effect.
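The edited line would look something like this; 127.0.0.1 is the stock default, and 192.168.1.21 is the Nagios server address from this demo:

    # In nrpe.cfg on the monitored machine: who may talk to this NRPE daemon.
    allowed_hosts=127.0.0.1,192.168.1.21

    # Then restart the daemon so the change takes effect:
    sudo systemctl restart nrpe.service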
Now, on your Nagios server, you need to create a configuration file for each of the remote hosts that you monitor, as I was mentioning before as well. You're going to find it in the etc/servers directory; let me just go ahead and open that for you. Let me go to the servers directory. Now, if you notice here, there is an edureka.cfg file. This is basically the host we'll be monitoring right now. If I go ahead and show you what I have written here: first, what I have done is define the host. It's basically a Linux server, and the name of it is edureka; the alias can be whatever you want to give. Then there is the IP address, the max check attempts, the check period (I want to check it 24x7), the notification interval that I have mentioned here, and the notification period. So this is basically all about my host. Now, on that host, which services are we going to monitor? We'll monitor generic services: ping, then I want to monitor SSH, then I'm going to monitor CPU load. These are the three services that I'll be monitoring, and you can find that in your etc/servers directory over there. You have to create a proper configuration file for every host that you want to monitor; a sketch of what such a file can contain follows.
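Here is a hedged sketch of what such a host file can contain; the address and the check_ping threshold arguments are illustrative, not copied from the demo:

    # /usr/local/nagios/etc/servers/edureka.cfg (sketch)
    define host {
        use                   linux-server
        host_name             edureka
        alias                 edureka remote host
        address               192.168.1.30      ; hypothetical remote-host IP
        max_check_attempts    5
        check_period          24x7
        notification_interval 30
        notification_period   24x7
    }

    define service {
        use                   generic-service
        host_name             edureka
        service_description   PING
        check_command         check_ping!100.0,20%!500.0,60%
    }

    define service {
        use                   generic-service
        host_name             edureka
        service_description   CPU Load
        check_command         check_nrpe!check_load   ; runs via the NRPE daemon
    }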
Let me clear my terminal again, just to show you my remote machine as well; let me just open that. So this is my remote machine, guys. Over here, I've already installed NRPE, so I'm just going to show you how you can restart NRPE: systemctl restart nrpe.service. And here we go, it's asking for the password; I've given that, and the NRPE service has started. Actually, it has restarted; I'd already started it before as well.
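If you want to double-check that the daemon really came back up, a quick status query does it (assuming the unit is named nrpe.service, as on this CentOS setup):

    systemctl status nrpe.service    # look for "active (running)" in the output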
Let me just show you how my Nagios dashboard looks on my server. Now, this is my dashboard again. If I go to my Hosts tab, you can see that we are monitoring two hosts, edureka and localhost. edureka is the one which I just showed you, which is up and running, right? I can go ahead and check out this map, the legacy map viewer, as well, which basically shows me edureka as a remote host. Then I also have the various services that I am monitoring; if you remember, I was monitoring CPU load, ping, and SSH, which you can see over here as well, right?
So this is all for today's session. I hope you guys have enjoyed listening to this video. If you have any questions, you can go ahead and mention them in the comment section. And if you're looking to gain hands-on experience in DevOps, you can go ahead and check out our website, www.edureka.co/devops. You can view upcoming batches and enroll for the course that will set you on the path to becoming a successful DevOps engineer. And if you're still curious to know more about DevOps roles and responsibilities, you can check out the videos mentioned in the description. Thank you and happy learning.
Source: https://www.youtube.com/watch?v=hQcFE0RD0cQ