1
00:00:00,180 --> 00:00:06,450
So now we're going to deploy the Elasticsearch Helm chart in our cluster, and the first step for that

2
00:00:06,450 --> 00:00:11,030
is to go and look up the chart for Elasticsearch.

3
00:00:11,310 --> 00:00:16,200
So whatever application you are installing, there is a chance that there is a Helm chart already for

4
00:00:16,200 --> 00:00:16,370
it.

5
00:00:16,590 --> 00:00:22,650
And this one is actually from Elastic itself, which is another advantage of Helm charts, because

6
00:00:23,100 --> 00:00:30,120
when the official organization creates them, you can rely on the fact that they will do it as well or better

7
00:00:30,120 --> 00:00:34,960
than you putting all these configuration files and logic together.

8
00:00:35,190 --> 00:00:42,720
So this is the repository for Elastic Helm charts, and they have one for each of their different applications.

9
00:00:42,870 --> 00:00:44,490
We're going to use Kibana later.

10
00:00:44,550 --> 00:00:51,200
Right now, we need the Elasticsearch chart, and the installation is very simple.

11
00:00:51,450 --> 00:00:57,300
You have a guide here, but I will put all the necessary commands in the repository so you can just

12
00:00:57,300 --> 00:00:59,580
copy them from the commands file there.

13
00:00:59,790 --> 00:01:04,670
But just so you see, this is documentation with the installation commands.

14
00:01:04,680 --> 00:01:12,660
And here in this section you see all the parameters that you can externally configure for your cluster.

15
00:01:12,670 --> 00:01:19,620
So you have a pretty long list of things that you can actually change in the default values of the cluster,

16
00:01:19,620 --> 00:01:20,790
which is pretty neat.

17
00:01:21,600 --> 00:01:27,120
We're not going to need most of these parameters because we're not going to configure much.

18
00:01:27,270 --> 00:01:31,020
But one thing that we need to configure is the volume.

19
00:01:31,410 --> 00:01:35,160
So by default, our Elasticsearch will just get some temporary volume.

20
00:01:35,160 --> 00:01:42,270
But this is not what we need, because obviously we need to persist the log data that Elasticsearch stores

21
00:01:42,390 --> 00:01:43,230
permanently.

22
00:01:43,260 --> 00:01:47,070
So even when the pods die, we want the data to be available.

23
00:01:47,520 --> 00:01:54,450
If you don't know the concept of volumes yet, you can actually check out my YouTube video where I explained

24
00:01:54,450 --> 00:01:55,230
that in detail.

25
00:01:55,440 --> 00:02:01,770
But what you need to know for our setup is that we need to persist the data, and we're going to do that

26
00:02:01,770 --> 00:02:08,970
using physical storage on Linode's servers, and the setup is going to look like this.

27
00:02:09,480 --> 00:02:12,750
We're going to install the Elasticsearch Helm chart.

28
00:02:13,650 --> 00:02:21,600
We're going to configure this attribute externally as a parameter and set it to Linode's storage class.

29
00:02:22,200 --> 00:02:29,310
What this is going to do is it's going to dynamically or automatically create the volumes that we need

30
00:02:29,310 --> 00:02:38,880
for each of the three replicas of Elasticsearch, and it will back them up with actual physical storage on

31
00:02:38,880 --> 00:02:40,000
Linode servers.

32
00:02:40,500 --> 00:02:49,170
So even if the pods die, or we delete the whole Elasticsearch chart and reinstall it or upgrade it or whatever,

33
00:02:49,440 --> 00:02:51,180
the data will still be there.

34
00:02:51,540 --> 00:03:00,060
And the great thing about it is that with the storage class for Linode block storage, you have minimal

35
00:03:00,450 --> 00:03:02,300
effort for setting this up.

36
00:03:02,310 --> 00:03:04,590
You don't have to create volumes manually.

37
00:03:04,590 --> 00:03:08,520
You don't have to set up the storage and then connect it to the volumes.

38
00:03:08,520 --> 00:03:11,340
Everything just happens dynamically.

39
00:03:11,550 --> 00:03:15,660
We just need to provide this value right here.

40
00:03:15,750 --> 00:03:18,720
And the way we do that is also very simple.

41
00:03:19,260 --> 00:03:28,050
In order to override any of these values in the Helm chart installation, we need to create a values YAML

42
00:03:28,050 --> 00:03:36,750
file where we just list all of these parameters and set them to our custom values or what we want the

43
00:03:36,750 --> 00:03:37,500
values to be.

44
00:03:37,890 --> 00:03:46,380
So here I have the values YAML file already created, so I'm basically overriding some of the resource

45
00:03:46,560 --> 00:03:50,710
size attributes, which is not very interesting here.

46
00:03:50,760 --> 00:03:52,290
This is the most interesting part.

47
00:03:52,590 --> 00:03:57,440
The volume claim template is basically an attribute of a StatefulSet.

48
00:03:58,200 --> 00:04:02,730
So we're writing that exactly as it would be in a StatefulSet.

49
00:04:02,820 --> 00:04:03,160
Right.

50
00:04:04,140 --> 00:04:08,260
So we have access mode and resources here as well.

51
00:04:08,280 --> 00:04:12,340
So basically, how much storage do you want to be available for each pod?

52
00:04:12,570 --> 00:04:20,010
So each pod replica of the Elasticsearch application will get the storage size defined here.

53
00:04:21,090 --> 00:04:23,460
And this is the interesting part.

54
00:04:23,460 --> 00:04:28,440
The storage class name is basically linode-block-storage, and that's it.

55
00:04:28,830 --> 00:04:36,060
This configuration here will be enough to dynamically create the volumes we need, connect them to

56
00:04:36,060 --> 00:04:41,580
Linode's actual storage, and save and persist all the data there.

57
00:04:42,120 --> 00:04:42,660
That's it.
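The values file is not shown line by line in the transcript, so here is a minimal sketch of what it could look like for the official elastic/elasticsearch chart. The CPU, memory, and storage numbers are illustrative placeholders; the line the video relies on is the storageClassName.

```shell
# Sketch of a values.yaml for the elastic/elasticsearch chart.
# The resource and storage sizes below are placeholders, not the exact
# numbers from the video; the key line is the storageClassName.
cat > elasticsearch-values.yaml <<'EOF'
# Override some resource sizes (optional, placeholder values)
resources:
  requests:
    cpu: "100m"
    memory: "512M"

# The interesting part: written exactly as it would appear in a StatefulSet
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  storageClassName: linode-block-storage  # Linode's dynamic provisioner
  resources:
    requests:
      storage: 10Gi  # each Elasticsearch pod replica gets a volume of this size
EOF
```

Because volumeClaimTemplate is passed through to the StatefulSet, the same keys (accessModes, resources.requests.storage) that you would write in a raw StatefulSet manifest apply here.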

58
00:04:44,310 --> 00:04:47,130
So now we have all the resources, we have the file.

59
00:04:47,880 --> 00:04:48,750
We have the commands.

60
00:04:48,750 --> 00:04:53,520
So let's go ahead and install Elasticsearch.

61
00:04:53,520 --> 00:04:55,200
For that, we need just two commands.

62
00:04:55,200 --> 00:04:59,850
The first one is that we need to add the repository to Helm.

63
00:05:00,290 --> 00:05:08,090
Helm, again, is a package manager for Kubernetes; we need to add a repository that holds or contains our desired

64
00:05:08,090 --> 00:05:11,160
chart, and this is the Elastic repository.

65
00:05:11,360 --> 00:05:12,590
So let's go and do that.
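As a sketch, adding the repository could look like this; the URL is Elastic's official Helm repository, and the commands are guarded so the snippet is a no-op on a machine without Helm or network access.

```shell
# Add Elastic's official chart repository and refresh the local index.
# The URL comes from Elastic's documentation; the guard makes this sketch
# safe to run on a machine without Helm installed.
ELASTIC_REPO_URL="https://helm.elastic.co"

if command -v helm >/dev/null 2>&1; then
  helm repo add elastic "$ELASTIC_REPO_URL" || true
  helm repo update || true
fi
```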

66
00:05:15,290 --> 00:05:22,350
It's been added, and now we have access to the chart. So again, note we're using Helm 3.

67
00:05:24,350 --> 00:05:25,340
This is the command.

68
00:05:25,820 --> 00:05:26,810
I'm going to explain that.

69
00:05:27,290 --> 00:05:27,630
Yeah.

70
00:05:27,950 --> 00:05:32,730
So helm install is the command to install the chart.

71
00:05:33,080 --> 00:05:34,970
This is the name that we are giving it.

72
00:05:34,970 --> 00:05:37,160
So you can basically call it anything you want.

73
00:05:37,400 --> 00:05:39,700
And this is the name of the chart.

74
00:05:40,280 --> 00:05:48,890
And here is how we are going to override those parameters using our values file.

75
00:05:49,160 --> 00:05:50,240
So pretty simple.

76
00:05:50,240 --> 00:05:53,150
We just say -f, which stands for file.

77
00:05:53,300 --> 00:06:01,370
And here we are going to pass our values YAML file, and this will install the Elasticsearch chart.

78
00:06:03,020 --> 00:06:09,620
Note that you will find links to all the charts we use in this course in the Git repository, in the links

79
00:06:09,620 --> 00:06:09,960
file.

80
00:06:10,490 --> 00:06:18,440
Also, an important thing I have to note here is that helm install with just the chart name will install the latest chart version,

81
00:06:18,650 --> 00:06:21,800
so it can be different from the version I am installing right now.

82
00:06:22,130 --> 00:06:27,900
If you want to have the same versions that I have, to be consistent with the tutorial, you can pin

83
00:06:27,920 --> 00:06:33,800
the version of each chart we install with the --version flag.

84
00:06:34,130 --> 00:06:39,950
I have added the commands with the exact versions in the Git repository so you can just easily copy them

85
00:06:39,950 --> 00:06:40,550
from there.
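Putting the pieces together, the install command could look like the sketch below. The release name, the values file name, and the pinned version here are illustrative assumptions; the exact versions live in the course repository's commands file.

```shell
# Sketch of the install command. "elasticsearch" (the first argument) is
# the release name we choose; "elastic/elasticsearch" is the chart; the
# values file name and the pinned version are examples only -- copy the
# exact commands from the course repository.
install_cmd="helm install elasticsearch elastic/elasticsearch \
  -f elasticsearch-values.yaml --version 7.17.3"

# Only actually run it when helm and a reachable cluster are available;
# otherwise just show the command.
if command -v helm >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  eval "$install_cmd"
else
  echo "$install_cmd"
fi
```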

86
00:06:42,630 --> 00:06:50,670
You will just get some status output, and now we can actually watch the components and different stuff

87
00:06:50,670 --> 00:06:51,420
getting created.

88
00:06:54,840 --> 00:07:02,130
And this may need some time to start running because it's pulling the images and doing all the setup

89
00:07:04,260 --> 00:07:06,340
and now everything is running.

90
00:07:07,020 --> 00:07:08,160
These are just the pods.

91
00:07:08,160 --> 00:07:11,700
But we can also see what other components got created.

92
00:07:11,950 --> 00:07:16,580
So let's do a get all and let's actually filter on elastic.
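These inspection steps could look like the sketch below, assuming kubectl is configured against the cluster (the commands are guarded so the snippet is a no-op otherwise).

```shell
# Watch the pods come up, then list everything the chart created,
# filtered on "elastic". Guarded: skipped without a reachable cluster.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl get pods                # add -w to watch them start up
  kubectl get all | grep elastic  # pods, services, statefulset
fi
```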

93
00:07:18,330 --> 00:07:21,720
So here we see our three pod replicas.

94
00:07:22,200 --> 00:07:31,380
We see the services, the elasticsearch-master service and the headless master service, and the StatefulSet that got

95
00:07:31,380 --> 00:07:35,330
created. Briefly about these two services:

96
00:07:35,760 --> 00:07:42,150
And the reason why you have two of them is that this is the service that load balances, so to say,

97
00:07:42,150 --> 00:07:45,080
requests to the pod replicas.

98
00:07:45,510 --> 00:07:53,010
So, for example, when you have queries for the data, to get some information, you would be talking

99
00:07:53,010 --> 00:07:59,760
to this service and this service will then forward that request to one of those pods, whichever is

100
00:08:00,030 --> 00:08:00,930
least busy.

101
00:08:01,290 --> 00:08:08,130
Now, the second one is important, because when you are making update requests to Elasticsearch, meaning

102
00:08:08,400 --> 00:08:18,480
you are writing to its database, we want to address the master pod specifically, because the master pod

103
00:08:18,480 --> 00:08:21,710
is the only one that is allowed to write.

104
00:08:22,050 --> 00:08:26,040
As I mentioned, they are all working on the same data or database.

105
00:08:26,230 --> 00:08:31,900
So if we allow all the pods to write to the same data, we will get data inconsistencies.

106
00:08:32,220 --> 00:08:38,220
That's why only the master pod is allowed to write and the others are basically synchronizing the data

107
00:08:38,220 --> 00:08:41,760
from the master to get an up-to-date data state.
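To make the two services concrete: with the chart's default naming (an assumption here, taken from the elastic/elasticsearch chart's conventions), the load-balanced service and the per-pod DNS names exposed by the headless service would look roughly like this. Port 9200 is Elasticsearch's standard HTTP port.

```shell
# Assumed default names from the chart: "elasticsearch-master" for the
# load-balanced service, "elasticsearch-master-headless" for the headless
# one. The headless service gives every pod a stable DNS name.
es_lb="http://elasticsearch-master:9200"                                    # load balances across pods
es_pod0="http://elasticsearch-master-0.elasticsearch-master-headless:9200"  # addresses pod 0 directly

# From inside the cluster you could query either endpoint, e.g.:
#   curl "$es_lb/_cluster/health"
echo "$es_lb"
echo "$es_pod0"
```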

108
00:08:42,060 --> 00:08:49,770
Again, I explain the whole concept of how this works, including the concept of attached volumes in

109
00:08:49,770 --> 00:08:57,150
far more detail in my YouTube video, which you can see in the link so you can learn more about this

110
00:08:57,150 --> 00:08:58,600
concept from that video.

111
00:08:58,980 --> 00:09:07,650
So overall, as you see, with just one simple command of helm install, we have the whole Elasticsearch

112
00:09:07,650 --> 00:09:14,850
cluster running in our Kubernetes cluster, and we don't need much Elasticsearch-specific knowledge in

113
00:09:14,850 --> 00:09:16,370
order to make that happen.

114
00:09:16,710 --> 00:09:21,410
And you can also see the overview of the deployed Elasticsearch in the dashboard.

115
00:09:21,420 --> 00:09:25,890
So here, if you scroll down, you see the pods of Elasticsearch that got created.

116
00:09:25,890 --> 00:09:32,640
By the way, the name elasticsearch-master in each pod's name may be misleading, because only one of those

117
00:09:32,640 --> 00:09:39,390
three pods is actually the master, which is usually the first one, the one with the zero index, the

118
00:09:39,390 --> 00:09:45,740
other two, or if you had four replicas, the other three, would all be slave pods, so to say.

119
00:09:46,470 --> 00:09:53,760
And here you also have the StatefulSet. Now, we configured the storage backend for our Elasticsearch.

120
00:09:53,760 --> 00:09:54,050
Right.

121
00:09:54,360 --> 00:10:00,840
So that means it must have created volumes in our cluster.

122
00:10:01,200 --> 00:10:07,680
And this happened dynamically without our manual interaction, which is the best part, because doing

123
00:10:07,680 --> 00:10:12,210
that manually is also possible, but it would be much more effort.

124
00:10:12,420 --> 00:10:14,520
So we have the persistent volumes here.

125
00:10:14,730 --> 00:10:17,370
Each pod gets its own persistent volume.

126
00:10:17,580 --> 00:10:18,890
That's important to note.

127
00:10:19,110 --> 00:10:26,130
And each of these volumes is connected to their respective pod using these claims.

128
00:10:26,130 --> 00:10:29,070
So it's like an intermediate sort of layer between them, the connector.

129
00:10:29,080 --> 00:10:36,180
And these claims are also dynamically created using the template that we defined right

130
00:10:36,180 --> 00:10:37,590
here.

131
00:10:37,590 --> 00:10:41,700
And here you see in the configuration that the storage class is linode-block-storage.
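To verify the dynamically created volumes and claims from the command line, a sketch like this could be used (guarded, since it needs a reachable cluster).

```shell
# Inspect the dynamically created volumes, claims, and storage class.
# Guarded: skipped without a reachable cluster.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl get pv            # one PersistentVolume per Elasticsearch pod
  kubectl get pvc           # the claims connecting pods to their volumes
  kubectl get storageclass  # should include linode-block-storage
fi
```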

132
00:10:42,270 --> 00:10:47,850
Now, that also means that physical storage must have been created on Linode's servers.

133
00:10:48,070 --> 00:10:56,520
So if you go back to Linode, under volumes, you see three storage units or storage backends were created

134
00:10:56,520 --> 00:10:58,470
in the region that my cluster is running in.

135
00:10:58,710 --> 00:11:03,860
We have this size for each of those storage volumes, which we define here as well.

136
00:11:04,590 --> 00:11:13,350
This is the storage, and this is actually the path to the actual folder or actual storage that will contain

137
00:11:13,350 --> 00:11:16,260
our information on Linode's systems.

138
00:11:16,590 --> 00:11:24,090
To give you a graphical overview, end to end: physical storage in some folder is created, which gets

139
00:11:24,090 --> 00:11:27,070
attached to a worker node, which is right here.

140
00:11:27,120 --> 00:11:28,440
So we have three worker nodes.

141
00:11:28,440 --> 00:11:30,770
Each one of them gets this storage.

142
00:11:31,230 --> 00:11:38,160
Then respectively, three persistent volumes will get created in the cluster, which are backed up by

143
00:11:38,370 --> 00:11:44,640
the storage, and they will then be passed into each pod using this claim.

144
00:11:45,090 --> 00:11:50,430
So that's how the whole chain of connections actually works.

145
00:11:50,920 --> 00:11:52,710
And as I mentioned, if pods

146
00:11:53,140 --> 00:12:01,570
die, or even if I remove the StatefulSet of Elasticsearch, these Kubernetes volumes and the Linode

147
00:12:01,570 --> 00:12:07,840
volumes will all be there, even though the StatefulSet and the pods aren't running anymore, so your

148
00:12:07,840 --> 00:12:09,400
data will be persisted.
