1
00:00:02,440 --> 00:00:10,570
So let's go back to our configuration file, and here we are going to define where all our logs that

2
00:00:10,570 --> 00:00:13,650
are processed up to this point are going to land.

3
00:00:13,960 --> 00:00:22,210
And this is the plugin that I showed you in the Fluentd part, which is called elasticsearch.

4
00:00:22,960 --> 00:00:29,020
So this will connect Fluentd to Elasticsearch in order to send the data there.

5
00:00:29,230 --> 00:00:31,510
And we're going to add some configuration here.

6
00:00:31,690 --> 00:00:34,730
So obviously, we want to know where the host address is.

7
00:00:35,200 --> 00:00:39,040
So this is the Elasticsearch host in our cluster,

8
00:00:41,380 --> 00:00:43,480
elasticsearch-master.

9
00:00:43,480 --> 00:00:45,590
That is the service name of Elasticsearch.

10
00:00:45,820 --> 00:00:51,090
This is the namespace, and this is basically the full name of the Elasticsearch service.

11
00:00:52,000 --> 00:00:53,710
So that is going to be the host.

12
00:00:54,370 --> 00:00:57,820
Then we have the port that Elasticsearch is running on,

13
00:01:00,460 --> 00:01:04,070
and we have the index name and we can give it any name we want.

14
00:01:04,090 --> 00:01:06,310
I'm going to call that app-logs.

15
00:01:06,580 --> 00:01:13,570
If you have multiple microservices, you can also create an index per microservice, or you can group a

16
00:01:13,570 --> 00:01:15,530
couple of microservices in one index.

17
00:01:15,530 --> 00:01:23,800
So basically the index allows you to group log entries from your services and applications in a way that is logical

18
00:01:23,800 --> 00:01:29,640
for you. And we have some other configuration here, like include_tag_key.

19
00:01:29,950 --> 00:01:32,680
This is the tag name that I mentioned.

20
00:01:32,980 --> 00:01:35,800
And we also have some buffer configuration.

21
00:01:35,890 --> 00:01:41,680
So obviously we don't want to write every single entry line by line as it's processed by Fluentd.

22
00:01:41,980 --> 00:01:45,060
So we buffer them and then send them to Elasticsearch.

23
00:01:45,280 --> 00:01:53,040
So this is the configuration for that, and this will be enough to send our logs to Elasticsearch.

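Putting the pieces from this section together, a minimal sketch of such an output section might look like the following. The service name, namespace, index name, and flush interval are assumptions; adjust them to your own setup:

```
<match **>
  @type elasticsearch
  # Elasticsearch service name + namespace = full in-cluster hostname
  host elasticsearch-master.default.svc.cluster.local
  port 9200
  # Index to group the log entries under; any name works
  index_name app-logs
  # Add the Fluentd tag as a field on each log entry
  include_tag_key true
  <buffer>
    # Batch entries instead of sending them one by one
    flush_interval 10s
  </buffer>
</match>
```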
24
00:01:53,800 --> 00:01:55,810
So I'm going to save that.

25
00:01:56,020 --> 00:01:59,140
We see the updates and again, I'm going to restart.

26
00:02:01,050 --> 00:02:08,880
the DaemonSet. So the pods have restarted, and now let's go to our Kibana. Here you basically see a list

27
00:02:08,880 --> 00:02:10,230
of different applications.

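As an aside, the restart step can be done with kubectl; the DaemonSet name and namespace here are assumptions for illustration:

```
# Restart the Fluentd DaemonSet so the pods pick up the updated config
kubectl rollout restart daemonset/fluentd -n kube-system

# Watch until the new pods are ready
kubectl rollout status daemonset/fluentd -n kube-system
```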
28
00:02:10,420 --> 00:02:11,820
For now, we just need Kibana.

29
00:02:12,120 --> 00:02:18,720
And here in the management section, in Stack Management, you can actually manage both Elasticsearch

30
00:02:18,720 --> 00:02:21,210
indices and Kibana indices.

31
00:02:21,240 --> 00:02:28,290
So in the data section, in index management, we can actually see and manage Elasticsearch indices.

32
00:02:29,070 --> 00:02:33,870
And this is the index that we defined in our configuration.

33
00:02:34,290 --> 00:02:35,670
This is the app-logs index.

34
00:02:35,880 --> 00:02:38,230
So our index was successfully created.

35
00:02:38,230 --> 00:02:44,480
A good thing with Elasticsearch indices is that you don't have to create them beforehand; an index gets dynamically

36
00:02:44,490 --> 00:02:46,140
created if it doesn't exist.

37
00:02:46,770 --> 00:02:53,040
And here we also see how many entries this index has, which is 21.

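You can also check an index and its document count directly against the Elasticsearch API, for example with curl; the host and index name here follow the assumed values from earlier:

```
# List all indices with their document counts
curl 'http://elasticsearch-master:9200/_cat/indices?v'

# Count the documents in one index
curl 'http://elasticsearch-master:9200/app-logs/_count?pretty'
```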
38
00:02:53,050 --> 00:02:56,180
This is the number of logs from our applications.

39
00:02:56,670 --> 00:03:00,180
So if we click here, we just see some metadata.

40
00:03:00,180 --> 00:03:06,870
We can't see the actual log entries yet. In order to see those log entries in Kibana, we

41
00:03:06,870 --> 00:03:13,700
need to go to Index Patterns in the Kibana section and create what's called a Kibana index pattern.

42
00:03:13,710 --> 00:03:20,730
So based on the Elasticsearch index that we created, we're going to create a Kibana index pattern here.

43
00:03:22,550 --> 00:03:30,200
So here we're just going to enter the name. This is actually a name pattern, so you can match multiple

44
00:03:30,200 --> 00:03:37,870
indices, or we can also write the name directly, because we just have one, and create the index pattern.

45
00:03:38,660 --> 00:03:40,910
So now we can see the logs.

46
00:03:41,870 --> 00:03:44,600
So let's go back to Kibana Discover.

47
00:03:45,800 --> 00:03:52,550
And this is our index. Here you see the logs from the Java app; we have the log tag here, and the whole

48
00:03:52,670 --> 00:03:54,560
log line is the value.

49
00:03:54,770 --> 00:03:58,490
And after the Java app, we also have the logs from the Node.js app.

50
00:03:58,510 --> 00:04:04,790
So if I expand one of those, I have a structured view of all the tags.

51
00:04:05,060 --> 00:04:10,170
And these are actually the ones that got added by the Kubernetes metadata.

52
00:04:10,400 --> 00:04:13,990
So the logs are enriched with some additional Kubernetes data.

53
00:04:14,000 --> 00:04:18,550
You have the container name, the host, the pod name and ID, the namespace, et cetera.

54
00:04:18,890 --> 00:04:23,810
So you can actually filter and search based on those values.

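For example, in the Discover search bar you could filter on those enriched fields with a KQL query; the field names come from the Kubernetes metadata, while the values here are illustrative:

```
kubernetes.namespace_name : "default" and kubernetes.container_name : "java-app"
```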