1
00:00:00,210 --> 00:00:06,570
So there is one thing that we still need to do, and that is actually parsing the log entry.

2
00:00:07,020 --> 00:00:14,910
So right now we have the whole log as one line, but we actually want to have them as individual key

3
00:00:14,910 --> 00:00:17,580
value pairs, just like these values here.

4
00:00:17,590 --> 00:00:22,900
So we want level 30, time with the timestamp value, et cetera.

5
00:00:23,070 --> 00:00:29,850
So we want these values actually split and parsed in this kind of structure so we can actually search

6
00:00:30,090 --> 00:00:32,750
and filter those values as well.

7
00:00:32,940 --> 00:00:39,630
And that will also give us a timestamp which later we can use to filter based on time.
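
As a rough illustration (the exact fields and values are assumptions based on a typical JSON logger, not taken from the video), the difference between the raw record and the parsed record looks something like this:

    before:  {"log": "{\"level\":30,\"time\":1616593528765,\"msg\":\"user logged in\"}"}
    after:   {"level": 30, "time": 1616593528765, "msg": "user logged in"}

Once the fields are split out like this, each key can be searched and filtered on its own.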

8
00:00:39,690 --> 00:00:42,570
So this is just the first draft, so to say.

9
00:00:42,780 --> 00:00:46,570
So as the final step, we're going to configure that JSON parser.

10
00:00:47,010 --> 00:00:53,100
So let's go back to the config map here and configure the parsing for both application logs.

11
00:00:53,640 --> 00:00:59,450
So after Fluentd has read the application logs, we're going to add a parser here.

12
00:00:59,460 --> 00:01:02,530
So I'm just going to paste in the code that I have.

13
00:01:02,850 --> 00:01:10,440
So right here we are just matching everything that comes from this source. log is basically the attribute

14
00:01:10,440 --> 00:01:12,340
that holds the whole log entry.

15
00:01:12,570 --> 00:01:13,680
So that's the key name.

16
00:01:13,860 --> 00:01:16,800
And this is the parser, multi_format.

17
00:01:16,800 --> 00:01:21,530
That is the plugin that we saw in the fluentd container logs configuration.

18
00:01:21,840 --> 00:01:28,050
So this is where we use it; basically, with multi_format you can define different formats.

19
00:01:28,410 --> 00:01:34,020
We're going to use JSON, because both of our applications are logging in JSON, and we're going to put

20
00:01:34,020 --> 00:01:35,880
this configuration in the pattern.

21
00:01:36,210 --> 00:01:42,240
So the format is json, and the time key that is going to be added, we call it time.

22
00:01:42,270 --> 00:01:48,140
And this one will just keep the time attribute in the log as well.

23
00:01:48,360 --> 00:01:52,480
And there is one more thing that we have to do in order to make this work.

24
00:01:52,500 --> 00:01:57,560
So basically, the Java and Node.js log entries will be parsed using this filter.
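
A minimal sketch of what this filter section can look like; the tag pattern is an assumption, the rest follows the options discussed here:

    <filter app.**>                # assumed tag pattern for the application logs
      @type parser
      key_name log                 # "log" is the attribute holding the whole log entry
      <parse>
        @type multi_format         # multi_format lets us list several possible formats
        <pattern>
          format json              # both applications log in JSON
          time_key time            # take the timestamp from the "time" attribute
          keep_time_key true       # keep the original time attribute in the record
        </pattern>
      </parse>
    </filter>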

25
00:01:58,230 --> 00:02:02,180
And here we are going to create separate indexes for these two applications.

26
00:02:02,370 --> 00:02:10,620
And the reason why is because those two applications have the same field name, the log level, with two different

27
00:02:10,620 --> 00:02:11,490
data types.

28
00:02:11,820 --> 00:02:19,950
So Elasticsearch won't be able to create one index where this field name has two different data types.

29
00:02:20,100 --> 00:02:23,100
That's why we're going to create separate indices for both.
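
To illustrate the mapping conflict (the concrete values are assumptions about what the two apps emit):

    Java app log entry:     {"level": "INFO", ...}   the level field is a string
    Node.js app log entry:  {"level": 30, ...}       the level field is a number

Elasticsearch assigns one data type per field within an index, so a single shared index cannot hold both.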

30
00:02:23,550 --> 00:02:28,820
And we can do that by defining a separate match section for each application.

31
00:02:29,040 --> 00:02:36,400
And here, instead of matching both, we're going to match each individually, starting with matching the Java tag.

32
00:02:36,630 --> 00:02:37,980
And this is the index name.

33
00:02:37,980 --> 00:02:40,380
So we're going to call it java-logs.

34
00:02:40,680 --> 00:02:47,160
And we also need to adjust this one, because this is the file where the logs get buffered.

35
00:02:47,160 --> 00:02:50,550
So obviously we need separate buffers for both.

36
00:02:51,000 --> 00:03:01,050
And I'm just going to copy this one and create the second match, the second forwarding, for the Node.js app.

37
00:03:03,180 --> 00:03:04,020
Let's call it the node-logs

38
00:03:04,020 --> 00:03:07,490
index, and this is it.
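
Put together, the two match sections could look roughly like this; the tag patterns, host and buffer paths are illustrative assumptions, the separate index names and buffer files are the point:

    <match java.**>                                      # assumed tag for the Java app
      @type elasticsearch
      host elasticsearch
      port 9200
      index_name java-logs                               # own index for the Java app
      <buffer>
        @type file
        path /var/log/fluentd-buffers/java-logs.buffer   # own buffer file
      </buffer>
    </match>

    <match node.**>                                      # assumed tag for the Node.js app
      @type elasticsearch
      host elasticsearch
      port 9200
      index_name node-logs                               # own index for the Node.js app
      <buffer>
        @type file
        path /var/log/fluentd-buffers/node-logs.buffer   # own buffer file
      </buffer>
    </match>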

39
00:03:07,860 --> 00:03:09,120
So it's updated.

40
00:03:10,240 --> 00:03:17,080
We're going to restart the DaemonSet and we should see our logs in Elasticsearch in a couple of seconds. All

41
00:03:17,080 --> 00:03:18,460
fluentd pods restarted.
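
The restart step could look like this (the DaemonSet name and namespace are assumptions; adjust them to your cluster):

    kubectl rollout restart daemonset fluentd -n kube-system
    kubectl get pods -n kube-system -w     # wait until the fluentd pods are Running again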

42
00:03:18,490 --> 00:03:22,420
So let's go back. I actually removed the old indices.

43
00:03:22,690 --> 00:03:24,070
So we have a clean slate.

44
00:03:24,460 --> 00:03:27,790
And here you see those two indices got created.

45
00:03:28,090 --> 00:03:31,210
This one has nine entries and this one has 12 entries.

46
00:03:31,510 --> 00:03:37,610
Now, let's go to the Kibana index page and create one.

47
00:03:39,520 --> 00:03:47,020
So this name here, this name pattern, will actually match both of the indices, so we can create one Kibana

48
00:03:47,020 --> 00:03:55,270
index that contains, or that holds, multiple Elasticsearch indices, so you can do filtering and search

49
00:03:55,870 --> 00:03:56,600
on that level.
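
For example (the exact pattern is an assumption; anything that covers both index names works):

    Kibana index pattern:  *logs*     # matches both java-logs and node-logs
    Time field:            time       # the field extracted via time_key in the parser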

50
00:03:56,830 --> 00:03:58,480
So we're going to go and do that.

51
00:03:59,330 --> 00:04:06,680
And here you see the time field selection, which we didn't have previously when we created the Kibana

52
00:04:06,680 --> 00:04:14,750
index, and this comes from this log structure, you see, so we moved all the key value pairs out

53
00:04:14,750 --> 00:04:17,690
of that field, including the time field.

54
00:04:18,050 --> 00:04:20,300
And we also named it

55
00:04:21,360 --> 00:04:28,830
here, and that's why we have a timestamp field selection at the meta level of the index, so we're

56
00:04:28,830 --> 00:04:30,180
going to create the index here.

57
00:04:33,140 --> 00:04:42,350
So let's go back to Kibana, and here we have the list of logs; we have both logs mixed here, and you

58
00:04:42,350 --> 00:04:47,840
see the time field that we didn't have before, because of this time field that we chose when creating the index.

59
00:04:48,050 --> 00:04:52,260
And here you have a separate column for that as well, so you can filter for the time stamp, et cetera.

60
00:04:52,640 --> 00:04:57,620
So we have Java entries here, and down here

61
00:04:57,620 --> 00:05:01,040
you have the Node.js entries.

62
00:05:01,040 --> 00:05:03,200
You see, the time is in the same format.

63
00:05:03,740 --> 00:05:12,080
Another thing is that if I expand any one of those logs, you also see that our data got structured.

64
00:05:12,080 --> 00:05:19,640
So we have level INFO, logger, method and all those different values as structured key value pairs.

65
00:05:19,640 --> 00:05:22,430
And the same for the Node.js app.

66
00:05:23,030 --> 00:05:29,600
If I expand one of those here, again we don't have this log attribute anymore.

67
00:05:29,600 --> 00:05:37,250
We have again message, level, time and all these attributes separated, so you can actually now filter

68
00:05:37,250 --> 00:05:38,360
and search all this stuff.
