1
00:00:04,210 --> 00:00:07,480
Hello and welcome to this lecture on Kubernetes pods.

2
00:00:09,640 --> 00:00:15,580
Before we head into understanding pods, we assume that the following have been set up

3
00:00:15,580 --> 00:00:23,260
already. At this point, we assume that the application is already developed and built into Docker images,

4
00:00:23,620 --> 00:00:29,540
and it is available on a Docker repository like Docker Hub, so Kubernetes can pull it down.

5
00:00:30,190 --> 00:00:36,540
We also assume that the Kubernetes cluster has already been set up and is working.

6
00:00:37,270 --> 00:00:41,010
This could be a single node set up or a multi node setup.

7
00:00:41,230 --> 00:00:41,950
Doesn't matter.

8
00:00:42,460 --> 00:00:45,370
All the services need to be in a running state.

9
00:00:46,610 --> 00:00:53,630
As we discussed before, with Kubernetes, our ultimate aim is to deploy our application in the form

10
00:00:53,630 --> 00:00:58,990
of containers on a set of machines that are configured as worker nodes in a cluster.

11
00:00:59,450 --> 00:01:05,240
However, Kubernetes does not deploy containers directly on the worker nodes.

12
00:01:05,960 --> 00:01:12,170
The containers are encapsulated into a Kubernetes object known as a pod.

13
00:01:12,740 --> 00:01:16,400
A pod is a single instance of an application.

14
00:01:16,820 --> 00:01:22,250
A pod is the smallest object that you can create in Kubernetes.

15
00:01:24,490 --> 00:01:32,380
Here we see the simplest of cases, where you have a single-node Kubernetes cluster with a

16
00:01:32,380 --> 00:01:38,860
single instance of your application running in a single Docker container encapsulated in a pod.

17
00:01:39,820 --> 00:01:45,870
What if the number of users accessing your application increases and you need to scale your application?

18
00:01:46,510 --> 00:01:51,490
You need to add additional instances of your Web application to share the load.

19
00:01:52,250 --> 00:01:55,250
Now, where would you spin up additional instances?

20
00:01:55,780 --> 00:01:59,530
Do we bring up a new container instance within the same pod?

21
00:02:00,070 --> 00:02:06,580
No, we create a new pod altogether with a new instance of the same application.

22
00:02:07,090 --> 00:02:14,020
As you can see, we now have two instances of our Web application running on two separate pods on the

23
00:02:14,020 --> 00:02:16,360
same Kubernetes system, or node.

24
00:02:17,050 --> 00:02:22,690
What if the user base further increases and your current node has insufficient capacity?

25
00:02:22,930 --> 00:02:27,340
Well, then you can always deploy additional pods on a new node

26
00:02:27,340 --> 00:02:33,640
in the cluster. You will have a new node added to the cluster to expand the cluster's physical capacity.

27
00:02:34,270 --> 00:02:41,470
So what I'm trying to illustrate in this slide is that pods usually have a one-to-one relationship with

28
00:02:41,470 --> 00:02:45,040
containers running your application. To scale up,

29
00:02:45,230 --> 00:02:48,250
you create new pods, and to scale down,

30
00:02:48,250 --> 00:02:50,050
you delete an existing pod.

31
00:02:50,740 --> 00:02:56,140
You do not add additional containers to an existing pod to scale your application.

32
00:02:56,650 --> 00:03:02,290
Also, if you're wondering how we implement all of this and how we achieve load balancing between the

33
00:03:02,290 --> 00:03:06,380
containers, etc., we will get into all of that in a later lecture.

34
00:03:06,880 --> 00:03:11,290
For now, we are only trying to understand the basic concepts.

35
00:03:12,710 --> 00:03:19,370
We just said that pods usually have a one-to-one relationship with containers, but are we restricted

36
00:03:19,370 --> 00:03:22,290
to having a single container in a single pod?

37
00:03:22,790 --> 00:03:30,170
No, a single pod can have multiple containers, except for the fact that they're usually not multiple

38
00:03:30,170 --> 00:03:32,300
containers of the same kind.

39
00:03:32,840 --> 00:03:35,000
As we discussed in the previous slide,

40
00:03:35,310 --> 00:03:41,420
if our intention was to scale our application, then we would need to create additional pods.

41
00:03:42,140 --> 00:03:48,410
But sometimes you might have a scenario where you have a helper container that might be doing some kind

42
00:03:48,410 --> 00:03:55,100
of supporting task for our Web application, such as processing user-entered data, processing a file

43
00:03:55,100 --> 00:04:01,370
uploaded by the user, etc. and you want these helper containers to live alongside your application

44
00:04:01,370 --> 00:04:01,930
container.

45
00:04:02,600 --> 00:04:09,920
In that case, you can have both of these containers be part of the same pod, so that when a new application

46
00:04:09,920 --> 00:04:16,120
container is created, the helper is also created and when it dies, the helper also dies.

47
00:04:16,130 --> 00:04:22,850
Since they are part of the same pod, the two containers can also communicate with each other directly

48
00:04:22,850 --> 00:04:28,160
by referring to each other as localhost, since they share the same network space.

49
00:04:28,670 --> 00:04:32,300
Plus they can easily share the same storage space as well.

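As a rough sketch of this idea, both containers can be declared in a single pod definition. The snippet below uses `kubectl apply` and a pod definition file, which are covered properly in later lectures; the image names `my-web-app` and `my-helper` are hypothetical placeholders.

```shell
# Sketch only: a pod holding an application container and a helper container.
# Both containers share the pod's network namespace, so they can reach each
# other on localhost, and both are created and destroyed together.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
  - name: web-app
    image: my-web-app     # hypothetical application image
  - name: helper
    image: my-helper      # hypothetical helper image
EOF
```
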
50
00:04:33,750 --> 00:04:39,210
If you still have doubts about this topic, I would understand if you did, because I did the first time

51
00:04:39,210 --> 00:04:44,900
I learned these concepts. We could take another shot at understanding pods from a different angle.

52
00:04:46,150 --> 00:04:52,770
Let's for a moment keep Kubernetes out of our discussion and talk about simple Docker containers.

53
00:04:53,260 --> 00:04:59,530
Let's assume we were developing a process or a script to deploy our application on a Docker host.

54
00:05:00,520 --> 00:05:07,840
Then we would first simply deploy our application using a simple docker run command, and the application

55
00:05:07,840 --> 00:05:10,450
runs fine and our users are able to access it.

56
00:05:11,310 --> 00:05:17,730
When the load increases, we deploy more instances of our application by running the docker run commands

57
00:05:17,730 --> 00:05:18,720
many more times.

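The manual scaling described here might look roughly like this on a Docker host; `my-web-app` is a hypothetical image name:

```shell
# Deploy the first instance of the (hypothetical) my-web-app image
docker run -d --name app1 my-web-app

# Load increases: repeat the same command,
# giving each new container a unique name
docker run -d --name app2 my-web-app
docker run -d --name app3 my-web-app
```
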
58
00:05:19,540 --> 00:05:24,220
This works fine and we are all happy. Now, sometime in the future,

59
00:05:24,240 --> 00:05:30,630
our application is further developed, undergoes architectural changes, grows, and gets complex.

60
00:05:31,230 --> 00:05:37,860
We now have a new helper container that helps our web application by processing or vetting data from

61
00:05:37,860 --> 00:05:38,440
elsewhere.

62
00:05:39,150 --> 00:05:45,810
These helper containers maintain a one-to-one relationship with our application containers and thus need

63
00:05:45,810 --> 00:05:51,810
to communicate with the application containers directly and access data from those containers.

64
00:05:52,900 --> 00:05:59,710
For this, we need to maintain a map of which app and helper containers are connected to each other;

65
00:06:00,200 --> 00:06:07,030
we would need to establish network connectivity between these containers ourselves using links and custom

66
00:06:07,030 --> 00:06:07,560
networks.

67
00:06:07,900 --> 00:06:11,650
We would need to create shareable volumes and share them among the containers.

68
00:06:12,340 --> 00:06:14,860
We would need to maintain a map of that as well.

69
00:06:15,550 --> 00:06:20,380
And most importantly, we would need to monitor the state of the application container.

70
00:06:20,620 --> 00:06:25,450
And when it dies, manually kill the helper container as well, as it's no longer required.

71
00:06:26,080 --> 00:06:31,800
When a new container is deployed, we would need to deploy the new helper container as well.

72
00:06:32,110 --> 00:06:34,720
With pods, Kubernetes does all of this for us

73
00:06:34,720 --> 00:06:41,590
automatically. We just need to define what containers a pod consists of, and the containers in a pod

74
00:06:41,590 --> 00:06:48,700
by default will have access to the same storage, the same network namespace, and the same fate.

75
00:06:48,700 --> 00:06:52,210
That is, they will be created together and destroyed together.

76
00:06:52,870 --> 00:06:58,660
Even if our application didn't happen to be so complex and we could live with a single container,

77
00:06:58,660 --> 00:07:01,570
Kubernetes still requires you to create pods.

78
00:07:02,380 --> 00:07:09,160
But this is good in the long run as your application is now equipped for architectural changes and scale

79
00:07:09,160 --> 00:07:09,940
in the future.

80
00:07:10,870 --> 00:07:17,530
However, also note that multi-container pods are a rare use case, and we are going to stick to single

81
00:07:17,530 --> 00:07:20,170
containers per pod in this course.

82
00:07:21,250 --> 00:07:23,820
Let's now look at how to deploy pods.

83
00:07:24,640 --> 00:07:31,780
Earlier, we learned about the kubectl run command. What this command really does is deploy

84
00:07:31,810 --> 00:07:34,510
a Docker container by creating a pod.

85
00:07:35,050 --> 00:07:40,830
It first creates a pod automatically and deploys an instance of the nginx Docker image.

86
00:07:41,590 --> 00:07:48,610
But where does it get the application image from? For that, you need to specify the image name using the

87
00:07:48,610 --> 00:07:51,990
--image parameter. The application image,

88
00:07:52,000 --> 00:07:59,380
in this case the nginx image, is downloaded from the repository Docker Hub, which, as we discussed, is

89
00:07:59,380 --> 00:08:03,860
a public repository where the latest images of various applications are stored.

90
00:08:04,180 --> 00:08:11,410
You could configure Kubernetes to pull the image from the public Docker Hub or a private repository within the organization.

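The command discussed here looks like this; nginx is pulled from Docker Hub unless Kubernetes is configured otherwise:

```shell
# Create a pod named nginx running a container
# from the official nginx image on Docker Hub
kubectl run nginx --image=nginx
```
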
91
00:08:12,370 --> 00:08:17,230
Now that we have a pod created, how do we see the list of pods available?

92
00:08:18,230 --> 00:08:26,390
The kubectl get pods command helps us see the list of pods in our cluster. In this case, we see the

93
00:08:26,390 --> 00:08:33,050
pod is in a ContainerCreating state and soon changes to a Running state when it is actually running.

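Checking on the pod would look roughly like this; the exact names and ages shown are illustrative, not captured output:

```shell
# List the pods in the current namespace
kubectl get pods

# Typical output shortly after creation (illustrative):
#   NAME    READY   STATUS              RESTARTS   AGE
#   nginx   0/1     ContainerCreating   0          5s
# ...and a little later:
#   NAME    READY   STATUS    RESTARTS   AGE
#   nginx   1/1     Running   0          30s
```
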
94
00:08:34,000 --> 00:08:39,850
Also, remember that we haven't really talked about the concepts of how a user can access the nginx

95
00:08:39,850 --> 00:08:46,000
Web server, and so in the current state, we haven't made the Web server accessible to external users.

96
00:08:46,480 --> 00:08:49,330
You can access it internally from the node.

97
00:08:49,810 --> 00:08:54,640
But for now, we will just see how to deploy a pod.

98
00:08:55,030 --> 00:09:01,600
In a later lecture, once we learn about networking and services, we will get to know how

99
00:09:01,600 --> 00:09:04,570
to make the service accessible to end users.

100
00:09:06,830 --> 00:09:08,420
Well, that's it for this lecture.

101
00:09:08,660 --> 00:09:11,540
Head over to the demo and I will see you in the next one.
