1
00:00:02,650 --> 00:00:09,460
Here I see from the wording that this is the conflicting field, because here in the Node app it has a

2
00:00:09,610 --> 00:00:17,010
value of type number, or integer, and in the Java app it has a value of type string, which is "info".

3
00:00:17,470 --> 00:00:26,320
So how Elasticsearch indices actually work is that when you send Elasticsearch data entries, it will create

4
00:00:26,320 --> 00:00:27,250
an index.

5
00:00:27,520 --> 00:00:35,510
And based on the very first entry that you send it for that index, it will also create the schema.

6
00:00:35,680 --> 00:00:41,300
So it will create the fields, and it will assign a data type to each field.

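This schema creation can be sketched like this (a simplified illustration only, not Elasticsearch's real implementation; the type names are stand-ins for Elasticsearch's field data types):

```python
# Simplified sketch of dynamic mapping: infer a type for each
# field of the very first document sent to a new index.
def infer_type(value):
    # Stand-ins for Elasticsearch field data types (long, float, text, ...)
    if isinstance(value, bool):   # check bool before int (bool is a subclass of int)
        return "boolean"
    if isinstance(value, int):
        return "long"
    if isinstance(value, float):
        return "float"
    return "text"

def create_schema(first_document):
    # One data type per field, derived from the first entry
    return {field: infer_type(value) for field, value in first_document.items()}

schema = create_schema({"message": "app started", "status": 200})
print(schema)  # {'message': 'text', 'status': 'long'}
```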
7
00:00:41,650 --> 00:00:47,620
So for example, when the next log entry comes in with different fields, those fields get added to

8
00:00:47,620 --> 00:00:48,280
the schema.

9
00:00:48,640 --> 00:00:55,750
However, if the next entry sent to Elasticsearch contains a field name that already exists in the schema

10
00:00:55,900 --> 00:01:00,610
but the value is of different type, then ElasticSearch will not accept it.

11
00:01:00,760 --> 00:01:05,980
So basically, based on your configuration, it will either discard that value or it will give you an

12
00:01:05,980 --> 00:01:10,850
exception that you can't add this value because it doesn't match the schema.

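The behavior for subsequent entries could be sketched like this (again a simplified model; the field name "level" is an assumption for the conflicting field, and `schema` is just a dict mapping field names to inferred types):

```python
def infer_type(value):
    # Stand-ins for Elasticsearch field data types
    if isinstance(value, bool):
        return "boolean"
    if isinstance(value, int):
        return "long"
    if isinstance(value, float):
        return "float"
    return "text"

def index_entry(schema, entry, strict=True):
    """Add unknown fields to the schema; reject or discard type conflicts."""
    for field, value in entry.items():
        value_type = infer_type(value)
        if field not in schema:
            # New field: it simply gets added to the schema
            schema[field] = value_type
        elif schema[field] != value_type:
            # Same field name, different value type: mapping conflict
            if strict:
                raise TypeError(f"field '{field}' is {schema[field]}, got {value_type}")
            # Non-strict configuration: discard the conflicting value
            entry = {f: v for f, v in entry.items() if f != field}
    return entry

schema = {"level": "long"}            # created from the Node app's numeric value
index_entry(schema, {"level": 42})    # accepted: type matches
# index_entry(schema, {"level": "info"})  # would raise TypeError: mapping conflict
```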
13
00:01:10,900 --> 00:01:16,210
And this is the case with these two apps, because we have the same field name with two different value

14
00:01:16,210 --> 00:01:16,570
types.

15
00:01:16,890 --> 00:01:22,820
Now, when you are logging in your application, you can, of course, try to adjust it.

16
00:01:22,870 --> 00:01:29,120
So in this case, we could go to the Node.js app and configure Pino to log a string instead of a number.

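The idea of that adjustment, sketched in Python (an illustration of coercing a field to one consistent type before the log entry is shipped; the field name "level" is an assumption):

```python
def normalize_entry(entry, string_fields=("level",)):
    # Coerce known-conflicting fields to strings so every app
    # sends the same value type for the same field name.
    return {
        field: str(value) if field in string_fields else value
        for field, value in entry.items()
    }

print(normalize_entry({"level": 30, "msg": "started"}))
# {'level': '30', 'msg': 'started'}
```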
17
00:01:29,230 --> 00:01:36,340
But if you don't have so much control over these fields, for example, if you're collecting logs from

18
00:01:36,340 --> 00:01:39,150
other applications, then obviously you can't change the code.

19
00:01:39,280 --> 00:01:43,240
In that case, you just have to create different indices, as we did.

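Routing each application to its own index avoids the conflict entirely, since every index gets its own independent schema. A minimal sketch of the idea, with hypothetical app and index names:

```python
def index_for(log_entry):
    # One index per application: each index has its own schema,
    # so "level" can be a number in one index and a string in another.
    app = log_entry.get("app", "unknown")
    return f"logs-{app}"

print(index_for({"app": "node-app", "level": 30}))      # logs-node-app
print(index_for({"app": "java-app", "level": "INFO"}))  # logs-java-app
```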
20
00:01:43,270 --> 00:01:51,310
Now, this is maybe an important point to mention here when working with Elasticsearch indices and then displaying

21
00:01:51,310 --> 00:01:52,100
them in Kibana.

22
00:01:52,440 --> 00:01:57,820
So this is exactly the mapping conflict mentioned here that says a field is defined as several types

23
00:01:58,030 --> 00:01:59,590
string and integer.

24
00:01:59,960 --> 00:02:04,000
Another thing that I also want to show you is an exception in Java.

25
00:02:04,540 --> 00:02:08,430
So if I expand this, this is our exception logged in here.

26
00:02:08,440 --> 00:02:14,140
It's also important that you actually see the whole stack trace as one log entry.

27
00:02:14,320 --> 00:02:18,520
So you also have this multi-line log support.

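Multi-line support like this typically comes from the log collector's multiline parser. In Fluentd, a configuration fragment for Java stack traces might look roughly like this (a sketch: the path, tag, and regular expressions are assumptions and must be matched to your own log format):

```
<source>
  @type tail
  path /var/log/containers/java-app*.log
  tag java.app
  <parse>
    @type multiline
    # A new log entry starts with a date; stack trace lines don't,
    # so they are appended to the previous entry.
    format_firstline /^\d{4}-\d{2}-\d{2}/
    format1 /^(?<time>\d{4}-\d{2}-\d{2} \S+) (?<level>\w+) (?<message>.*)/
  </parse>
</source>
```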
28
00:02:18,880 --> 00:02:20,770
So this is the end of our setup.

29
00:02:20,950 --> 00:02:27,000
So we basically have both applications' logs in Elasticsearch, and we have them visualized in Kibana.

30
00:02:27,130 --> 00:02:35,140
So of course, if you want to collect and store logs of other applications, like Kibana itself or the Nginx

31
00:02:35,140 --> 00:02:42,760
Ingress or Elasticsearch, you can add them to the Fluentd configuration and parse them using other parsers,

32
00:02:42,760 --> 00:02:45,220
and also save them as separate indices.

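Saving other applications' logs as separate indices could look roughly like this in a Fluentd Elasticsearch output (a sketch with assumed tags, host, and index names, using the fluent-plugin-elasticsearch `index_name` option):

```
# Hypothetical match blocks: one index per application
<match nginx.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  index_name nginx-ingress
</match>

<match kibana.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  index_name kibana
</match>
```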
33
00:02:45,320 --> 00:02:46,560
So you have that data as well.
