1
00:00:00,180 --> 00:00:05,970
So now let's actually go back to our configuration and let's clean all this stuff up, let's remove

2
00:00:05,970 --> 00:00:12,600
the tailing of all the containers we're not interested in and maybe all the health checks so we can

3
00:00:12,600 --> 00:00:13,980
see a little bit more.

4
00:00:14,520 --> 00:00:21,480
So let's edit the ConfigMap here, where it says Fluentd health check, which is sending that

5
00:00:21,480 --> 00:00:25,740
output to standard out, the container console output that we saw.

6
00:00:26,070 --> 00:00:31,710
We're going to send it to null; basically we're just going to discard those health checks so we don't

7
00:00:31,710 --> 00:00:32,700
see them in the logs.
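
A minimal sketch of that change in the Fluentd config, assuming the health-check events carry the tag fluentd.healthcheck (the tag name is an assumption; use whatever tag your config gives them):

```
# Discard the health-check events instead of printing them with stdout,
# so they no longer appear in the container's log.
<match fluentd.healthcheck>
  @type null
</match>
```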

8
00:00:32,730 --> 00:00:38,900
And another thing we want to do is provide here a path matching only our Java

9
00:00:39,120 --> 00:00:45,650
and Node apps; we don't want to process logs from other containers, so we can do it using a regular expression.

10
00:00:45,900 --> 00:00:53,490
So instead of *.log, we're going to write a pattern that will match java-app and

11
00:00:53,490 --> 00:00:55,650
node-app, and then .log.

12
00:00:55,770 --> 00:01:02,310
So this will basically only match our two applications, which means we don't need this exclusion because

13
00:01:02,310 --> 00:01:05,460
it's already excluded by the path pattern in the first line.
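
A sketch of the narrowed tail source, assuming the two demo containers are named java-app and node-app (Fluentd's in_tail path takes comma-separated glob patterns, which is what stands in for the regular expression here):

```
<source>
  @type tail
  # Only tail the two demo applications' container logs; other
  # containers on the node are never read, so no exclude_path is needed.
  path /var/log/containers/*java-app*.log,/var/log/containers/*node-app*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
  </parse>
</source>
```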

14
00:01:05,790 --> 00:01:13,260
And another thing that we can do is instead of forwarding it to an aggregator, we are going to send

15
00:01:13,260 --> 00:01:17,360
it to a console output so we can actually see the logs as well.

16
00:01:17,880 --> 00:01:20,860
So instead of forward, we're going to use stdout.
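
Replacing the forward output with stdout might look like this (the kubernetes.** match pattern is an assumption, matching the tag prefix set by the tail source):

```
# Print processed events to Fluentd's own container log
# instead of forwarding them to an aggregator.
<match kubernetes.**>
  @type stdout
</match>
```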

17
00:01:22,140 --> 00:01:28,410
This comment doesn't apply anymore, and we can apply the update again.

18
00:01:28,410 --> 00:01:33,880
We have to restart the DaemonSet because we changed the configuration.

19
00:01:34,050 --> 00:01:36,140
So let's check the logs again.
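
The restart and the log check can be done roughly like this (the DaemonSet name fluentd, the namespace kube-system, and the app=fluentd label are assumptions; adjust them to your setup):

```
# Restart the DaemonSet's pods so they pick up the updated ConfigMap,
# then follow the logs of the Fluentd pods.
kubectl rollout restart daemonset/fluentd -n kube-system
kubectl logs -n kube-system -l app=fluentd -f
```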

20
00:01:43,810 --> 00:01:52,150
And here you see the logs of one of our applications, so node-app is the one that's processed by this

21
00:01:52,150 --> 00:01:52,990
one pod.

22
00:01:53,200 --> 00:01:54,970
And here we see the logs.

23
00:01:55,120 --> 00:01:56,100
Here's the message.

24
00:01:56,110 --> 00:01:57,280
Hello, elastic world.

25
00:01:57,550 --> 00:02:00,150
So these are all our node-app logs.

26
00:02:00,640 --> 00:02:02,440
And let's check the second one.

27
00:02:02,950 --> 00:02:06,590
This is the one that processes the Java app's logs.

28
00:02:06,790 --> 00:02:12,960
So this is how the output looks, aggregated and processed by Fluentd.

29
00:02:13,240 --> 00:02:16,360
So here you have the timestamp, then you have the tag.

30
00:02:16,360 --> 00:02:17,890
This is the tag that I mentioned.

31
00:02:18,040 --> 00:02:21,250
It has kubernetes. as a prefix.

32
00:02:22,420 --> 00:02:24,910
This is the one that we have added here.

33
00:02:26,110 --> 00:02:32,980
And then, instead of /var/log/..., you have that path, so to

34
00:02:32,980 --> 00:02:34,840
say, transformed into a tag.

35
00:02:35,110 --> 00:02:41,340
Then you have the application name, the namespace name, and all this stuff, and then you have the log entry.
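
An illustrative stdout line (all values made up) showing that shape: the event time first, then the kubernetes.-prefixed tag derived from the log file path, then the enriched record:

```
2021-05-04 12:00:00 +0000 kubernetes.var.log.containers.node-app-7d9f_default_node-app-1a2b.log: {"log":"Hello, elastic world\n","stream":"stdout","kubernetes":{"pod_name":"node-app-7d9f","namespace_name":"default"}}
```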

36
00:02:41,620 --> 00:02:43,470
So we're almost ready.

37
00:02:43,810 --> 00:02:46,810
We have the logs that are processed by Fluentd.

38
00:02:46,810 --> 00:02:54,190
And now, instead of sending them to the standard output of Fluentd, we want to send them to Elasticsearch.
