1
00:00:00,090 --> 00:00:06,000
I'm also going to show you some of the important parts of the DaemonSet configuration as well,

2
00:00:06,180 --> 00:00:09,900
so you have a bit more understanding of how it works.

3
00:00:10,320 --> 00:00:14,640
So first, let's check the DaemonSet. We can simply do an edit.

4
00:00:15,270 --> 00:00:19,500
And here you see the YAML file, or the configuration file, of the DaemonSet.

5
00:00:19,920 --> 00:00:22,790
And of course, you don't have to understand all this stuff.

6
00:00:23,010 --> 00:00:26,600
One thing that I want to point out is this part here.

7
00:00:26,610 --> 00:00:26,920
Right.

8
00:00:27,090 --> 00:00:33,270
So the volumes: these are all the volumes that are defined for the pod replicas of the DaemonSet, and in

9
00:00:33,270 --> 00:00:37,320
here you see something called varlibdockercontainers.

10
00:00:37,650 --> 00:00:44,880
And the hostPath, so to say, is /var/lib/docker/containers.

11
00:00:45,330 --> 00:00:55,020
So this is an important part because every worker node has exactly this folder on the server where the

12
00:00:55,020 --> 00:00:58,110
container logs are being saved.

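As a sketch, the volume being described here would look roughly like this in the DaemonSet YAML (the volume name follows what is shown on screen; your chart's exact spelling may differ):

```yaml
# Hypothetical excerpt from the Fluentd DaemonSet spec
volumes:
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers   # where the container runtime writes logs on each node
```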
13
00:00:58,290 --> 00:01:05,430
So each container will get its own uniquely named folder within this containers folder, and that will

14
00:01:05,430 --> 00:01:12,980
contain all the logs that the container is logging, including our java-app and node-app containers.

15
00:01:13,320 --> 00:01:18,100
And this is how Fluentd can access the logs of those containers.

16
00:01:18,600 --> 00:01:20,430
So that's an important part here.

17
00:01:20,820 --> 00:01:28,700
And obviously this pod volume then gets passed on to the container volume at this level.

18
00:01:28,710 --> 00:01:29,050
Right.

19
00:01:29,070 --> 00:01:30,820
So we see it here as well.

20
00:01:31,830 --> 00:01:38,960
And the second interesting part is this volume right here, which is the fluentd config.

21
00:01:39,240 --> 00:01:41,250
And this is a config map.

22
00:01:41,250 --> 00:01:41,610
Right.

23
00:01:41,820 --> 00:01:45,390
Which was also dynamically created by this Helm chart.

24
00:01:45,930 --> 00:01:50,080
And the ConfigMap's name is fluentd-forwarder-cm.

25
00:01:50,400 --> 00:01:53,030
So this is the configuration that we have to adjust.

26
00:01:53,250 --> 00:01:59,580
And in this configuration, we define how Fluentd should process the logs that it reads from this

27
00:01:59,760 --> 00:02:00,940
containers folder.

28
00:02:01,000 --> 00:02:01,310
Right.

29
00:02:01,620 --> 00:02:07,020
So let's actually go ahead and look at the contents of this configuration file.

30
00:02:07,320 --> 00:02:08,810
In the ConfigMaps section.

31
00:02:09,120 --> 00:02:12,150
You see we have the fluentd forwarder ConfigMap.

32
00:02:12,330 --> 00:02:13,770
This is the one we'll be adjusting.

33
00:02:13,770 --> 00:02:15,890
The other one is for the aggregator.

34
00:02:15,900 --> 00:02:20,390
And this is that Fluentd StatefulSet which was also deployed.

35
00:02:20,400 --> 00:02:22,350
We're not going to touch that or adjust that.

36
00:02:22,350 --> 00:02:23,860
So we're going to leave that alone.

37
00:02:24,690 --> 00:02:30,910
So let's go to the folder and here are the contents of this configuration file.

38
00:02:31,140 --> 00:02:35,650
We can also do an edit here so we can see it better. So, the fluentd config:

39
00:02:35,700 --> 00:02:43,740
This is the configuration file that gets mounted into each Fluentd pod so that Fluentd knows how to process

40
00:02:43,740 --> 00:02:45,440
the logs and what to do with them.

41
00:02:45,780 --> 00:02:51,780
This configuration has three or four important tags, and with those tags you can basically do most

42
00:02:51,780 --> 00:02:52,650
of the configuration.

43
00:02:52,680 --> 00:02:58,790
So the first one here at the top is basically the match tag, which we'll use a couple of times.

44
00:02:59,070 --> 00:03:03,770
So all of these log entries are loaded into Fluentd, right?

45
00:03:03,990 --> 00:03:06,780
And these log entries actually have tags.

46
00:03:06,960 --> 00:03:15,840
Each log entry has a tag that contains the application name, pod name, namespace name, etc.

47
00:03:16,500 --> 00:03:24,630
So using this match tag with a regular expression, we can filter out or target specific applications, or multiple

48
00:03:24,630 --> 00:03:28,750
applications with a similar name, using regular expressions.

49
00:03:28,770 --> 00:03:35,430
So here what we are saying is we're targeting all logs produced by the Fluentd container itself, and we're

50
00:03:35,430 --> 00:03:36,870
just throwing them away.

51
00:03:37,050 --> 00:03:40,230
This is what @type null means, so we're not interested in them.

52
00:03:40,240 --> 00:03:42,130
We don't want to collect or process them.

53
00:03:42,180 --> 00:03:43,940
These are just some health checks.

54
00:03:43,950 --> 00:03:45,190
We're not interested in them.

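A minimal sketch of the match block being described, which throws Fluentd's own log events away (the tag pattern is the conventional one; your chart's generated config may differ slightly):

```
# Match everything tagged fluent.** (Fluentd's own logs) and discard it
<match fluent.**>
  @type null
</match>
```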
55
00:03:45,330 --> 00:03:47,410
This is another interesting part here.

56
00:03:47,430 --> 00:03:54,690
You see the source tag. You saw in the volume configuration that we are reading container logs from a path

57
00:03:54,690 --> 00:03:58,680
called /var/lib/docker/containers.

58
00:03:58,950 --> 00:04:04,590
And this is another path where those log entries are temporarily stored.

59
00:04:05,070 --> 00:04:09,660
So we use that as a source for Fluentd to process the data.

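As a sketch, a tail source like the one being discussed usually looks roughly like this; the paths follow the defaults mentioned in the video, and the exact options in your ConfigMap may differ:

```
# Read container log files from the node's filesystem
<source>
  @type tail
  path /var/log/containers/*.log                      # symlinks into /var/lib/docker/containers
  pos_file /var/log/fluentd-containers.log.pos        # remembers how far each file has been read
  tag kubernetes.*                                    # prefix every event's tag with "kubernetes."
  exclude_path ["/var/log/containers/*fluentd*.log"]  # skip Fluentd's own logs
  <parse>
    @type json                                        # parse each line as JSON
  </parse>
</source>
```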
60
00:04:10,020 --> 00:04:13,560
Again, we exclude any Fluentd logs.

61
00:04:13,560 --> 00:04:19,760
And here we're trying to parse each log entry using a regular expression.

62
00:04:19,980 --> 00:04:23,610
We're not going to go into detail here because we're going to change this regular expression.

63
00:04:23,730 --> 00:04:25,830
But that's what the parse tag is for.

64
00:04:25,840 --> 00:04:32,580
So you can parse log entries of different formats using different regular expressions. So you can

65
00:04:32,580 --> 00:04:33,760
have JSON format.

66
00:04:33,780 --> 00:04:38,610
You can have, for example, nginx, which will have its own format, and so on.

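For illustration, the parse block can use a different parser per log format; two hedged examples:

```
# JSON-formatted container logs
<parse>
  @type json
</parse>

# Or, for nginx access logs, Fluentd ships a dedicated parser
<parse>
  @type nginx
</parse>
```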
67
00:04:39,180 --> 00:04:41,590
Another tag that you see here is filter.

68
00:04:41,630 --> 00:04:46,950
So what we're doing is we are targeting again, just like in match.

69
00:04:47,310 --> 00:04:53,430
We are targeting all the logs which have a kubernetes-dot-something tag.

70
00:04:53,610 --> 00:04:59,730
And this is going to be all the logs coming from here, because you see this line here: we are adding

71
00:05:00,020 --> 00:05:07,160
A prefix to each log entry called kubernetes., so that we can filter them right here.

72
00:05:07,490 --> 00:05:14,750
So what this filter here does is basically it catches all those logs coming from here and it gets them

73
00:05:14,750 --> 00:05:16,510
some metadata from Kubernetes.

74
00:05:16,570 --> 00:05:22,070
So it enriches these base logs with some additional data.

75
00:05:22,280 --> 00:05:27,410
This data will be, for example, a pod name and pod ID, which the logs belong to.

76
00:05:27,710 --> 00:05:30,120
This can also be namespace and so on.

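A minimal sketch of the filter being described, using the kubernetes_metadata plugin (the exact block in your generated ConfigMap may differ):

```
# Enrich every event tagged kubernetes.* with pod name, pod ID, namespace, labels, etc.
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>
```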
77
00:05:30,140 --> 00:05:36,200
And this final part here, basically what it does, again, you see the match tag here: it matches

78
00:05:36,200 --> 00:05:36,950
everything.

79
00:05:36,950 --> 00:05:38,600
So it's kind of like a funnel.

80
00:05:38,810 --> 00:05:40,540
So it comes from top down.

81
00:05:40,880 --> 00:05:45,170
So here we match everything and we're just forwarding them.

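As a sketch, the catch-all output at the bottom forwards everything that reaches it, for example to the aggregator; the host name and port here are illustrative placeholders:

```
# Match every remaining event and forward it onward
<match **>
  @type forward
  <server>
    host fluentd-aggregator   # hypothetical aggregator service name
    port 24224                # Fluentd's default forward port
  </server>
</match>
```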
82
00:05:45,200 --> 00:05:46,910
We're also going to change that part.

83
00:05:46,910 --> 00:05:49,640
So you don't have to understand each detail here.

84
00:05:49,880 --> 00:05:51,500
But that's the configuration file.

85
00:05:51,680 --> 00:05:54,710
It's not very beginner friendly, I would say.

86
00:05:55,310 --> 00:06:00,560
But then again, you just have to understand a couple of tags and how this works, and you should be

87
00:06:00,560 --> 00:06:01,160
good to go.

88
00:06:01,550 --> 00:06:09,830
So now let's go and actually check the logs of the Fluentd pods themselves, and let's see what it actually

89
00:06:09,830 --> 00:06:12,890
does using this configuration out of the box.

90
00:06:13,670 --> 00:06:20,150
So let's go to our Fluentd pods, these three right here, and let's actually open two of them so

91
00:06:20,150 --> 00:06:21,110
we can see the logs.

92
00:06:23,330 --> 00:06:29,480
And here, as you see, a lot of stuff gets logged here. So first of all, as you see here, there

93
00:06:29,480 --> 00:06:33,350
is a whole bunch of "pattern not matched" messages.

94
00:06:33,350 --> 00:06:35,280
Right, or warnings from Fluentd.

95
00:06:35,660 --> 00:06:44,570
So what this means is that it actually read all those log entries from the containers, and using the pattern

96
00:06:44,570 --> 00:06:51,590
matching or the regular expression that we have in this default configuration, which is this one right

97
00:06:51,590 --> 00:06:54,220
here in the parse section, we have this regular expression.

98
00:06:54,530 --> 00:07:02,720
This one doesn't match the log entry format because our log entries are in JSON, and this expression

99
00:07:02,720 --> 00:07:04,740
is for something other than JSON.

100
00:07:04,940 --> 00:07:07,280
So that's why we have all these warnings.

101
00:07:07,280 --> 00:07:11,600
We're going to fix that problem by changing that parser.

102
00:07:11,840 --> 00:07:14,150
So let's go back to the configuration.

103
00:07:15,030 --> 00:07:23,010
And we can edit it directly here. So I'm going to remove that parse section completely, and we're going to write

104
00:07:23,190 --> 00:07:27,440
format json. This is not going to parse the log entries.

105
00:07:27,450 --> 00:07:28,620
We're going to do that later.

106
00:07:28,830 --> 00:07:32,120
But this is just for formatting logs.

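A sketch of what that edit amounts to; the video uses the older `format json` shorthand inside the source, and a modern parse block is equivalent:

```
# Inside the tail source: treat each log line as JSON instead of matching a regex
<parse>
  @type json
</parse>
```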
107
00:07:33,240 --> 00:07:41,310
So I'll update it. Whenever I update a ConfigMap, I have to restart the Fluentd pods so that they can

108
00:07:41,520 --> 00:07:43,330
reload the new config map.

109
00:07:43,350 --> 00:07:45,810
You can do that with an automated script.

110
00:07:45,810 --> 00:07:50,490
You can do that by configuring the DaemonSet to automatically reload it.

111
00:07:50,970 --> 00:07:54,600
But I'm just going to restart the DaemonSet with kubectl

112
00:07:54,600 --> 00:07:56,610
rollout restart.

113
00:07:57,210 --> 00:08:02,930
And this is the name of our DaemonSet, and this will actually restart the DaemonSet and its pods.

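The restart being described can be sketched as follows; the DaemonSet name and namespace are placeholders, since the transcript does not show the exact names:

```shell
# Restart the DaemonSet's pods so they pick up the updated ConfigMap
kubectl rollout restart daemonset <fluentd-daemonset-name> -n <namespace>

# Watch the pods cycle through termination and recreation
kubectl get pods -n <namespace> -w
```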
114
00:08:03,150 --> 00:08:06,570
So here we see the status of the pods restarting.

115
00:08:06,930 --> 00:08:08,910
It will take a couple of seconds.

116
00:08:08,990 --> 00:08:10,640
So now the pods have restarted.

117
00:08:10,860 --> 00:08:12,900
We can actually check them again.

118
00:08:15,000 --> 00:08:16,010
So here are the logs.

119
00:08:16,320 --> 00:08:21,730
So we have a clearer output of the logs, so we can actually see what's going on here.

120
00:08:22,260 --> 00:08:26,240
So right at the top, we have a couple of interesting things.

121
00:08:26,520 --> 00:08:30,060
The first thing to note here are these lines.

122
00:08:30,210 --> 00:08:34,350
Here you see a couple of Fluentd plugins are loaded on startup.

123
00:08:34,620 --> 00:08:37,680
One of them that we are going to use is ElasticSearch.

124
00:08:37,680 --> 00:08:44,640
This is the plugin we're going to need to connect to Elasticsearch and push or forward those logs

125
00:08:44,640 --> 00:08:45,630
to ElasticSearch.

126
00:08:45,630 --> 00:08:48,690
Another one is this kubernetes metadata filter.

127
00:08:48,840 --> 00:08:52,740
This is what we saw in the configuration file right here.

128
00:08:53,100 --> 00:09:00,930
This is a Kubernetes metadata plugin which enriches each log entry with Kubernetes data like pod name,

129
00:09:01,380 --> 00:09:03,620
container name, pod ID and so on.

130
00:09:03,750 --> 00:09:07,710
And we also have a multi-format parser.

131
00:09:07,710 --> 00:09:12,370
If you have logs from different containers which are in different formats, like one of them is logging

132
00:09:12,370 --> 00:09:17,220
in JSON, another one is logging in some other format, and so on.

133
00:09:17,370 --> 00:09:21,420
You can actually define a couple of different formats using this plugin.

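For illustration, the multi-format parser mentioned here lets one parse block try several formats in order; a hedged sketch (the fallback regexp is illustrative, not from the actual config):

```
# Try JSON first; fall back to a simple regexp for non-JSON lines
<parse>
  @type multi_format
  <pattern>
    format json
  </pattern>
  <pattern>
    format regexp
    expression /^(?<time>[^ ]+) (?<message>.*)$/
  </pattern>
</parse>
```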
134
00:09:21,990 --> 00:09:29,070
Another thing to note here is that you have an output of the configuration file that is loaded by

135
00:09:29,070 --> 00:09:29,550
Fluentd.

136
00:09:29,610 --> 00:09:33,560
This is basically what we saw or what we edited here.

137
00:09:34,170 --> 00:09:39,990
And the third interesting part in our logs are these entries right here: you see "following tail of"

138
00:09:39,990 --> 00:09:41,590
and then there's a path.

139
00:09:41,990 --> 00:09:50,670
So all of this basically is a list of different container logs that Fluentd was able to collect or

140
00:09:50,670 --> 00:09:53,050
read out of this log location.

141
00:09:53,220 --> 00:09:57,780
So here you see /var/log/containers: elasticsearch-master, nginx, and so on.

142
00:09:58,130 --> 00:10:04,440
So basically it has logs of every single container that is running in our cluster.

143
00:10:04,680 --> 00:10:10,200
And here we also have the health check logs so we can actually clean up all the stuff that we don't

144
00:10:10,200 --> 00:10:18,210
need. So we can actually turn off the health check logs and also remove tailing or collecting logs from

145
00:10:18,210 --> 00:10:19,850
containers that we're not interested in.

146
00:10:19,910 --> 00:10:26,640
So basically we just want to collect the logs of the java-app and node-app, and that's it. In real production scenarios

147
00:10:26,640 --> 00:10:27,630
it may differ.

148
00:10:27,640 --> 00:10:34,140
So you might want to collect logs of all the system processes or all the services like Elasticsearch and

149
00:10:34,140 --> 00:10:39,960
so on, or you may want to limit your log collection to only your applications.

150
00:10:40,110 --> 00:10:49,710
And the final thing that you may have already noticed is this one right here: you see "following tail of" var

151
00:10:49,710 --> 00:10:51,620
log/containers, java-app.

152
00:10:52,080 --> 00:10:59,590
So this specific Fluentd pod right here is actually following and processing the logs of the java-app,

153
00:10:59,610 --> 00:11:02,000
and there is no node-app on the list.

154
00:11:02,010 --> 00:11:11,370
So if we go to the next Fluentd pod and we check its logs, you see that it has, for example, Kibana

155
00:11:11,370 --> 00:11:18,990
logs. And the third Fluentd pod is actually the one that is collecting and processing our node-app

156
00:11:18,990 --> 00:11:21,450
logs, this one right here.

157
00:11:21,750 --> 00:11:29,430
So how come one Fluentd pod is collecting node-app logs and another one is collecting java-app logs?

158
00:11:29,760 --> 00:11:37,260
The way it works is that, as I mentioned, one Fluentd pod is running on every worker node, and the node-app

159
00:11:37,260 --> 00:11:43,110
may be running on one worker node while the java-app is running on another worker node, so Fluentd will actually

160
00:11:43,110 --> 00:11:51,810
access and process logs of only those containers that are running on the same worker node as that specific

161
00:11:51,810 --> 00:11:52,830
Fluentd pod.
