1
00:00:00,120 --> 00:00:05,480
OK, so now that we have a Promtail service pushing data to a Loki service, and we've set up a Loki

2
00:00:05,490 --> 00:00:11,610
data source in Grafana, and we can read it in the Explore tab: select Loki and it shows the log browser.

3
00:00:11,640 --> 00:00:13,590
We can continue now when we open Loki.

4
00:00:13,590 --> 00:00:19,230
It already has some information from the Loki data source about the kind of information that it's already

5
00:00:19,230 --> 00:00:19,850
collecting.

6
00:00:19,860 --> 00:00:23,780
Now, the information that we're seeing here has come from our Promtail configuration.

7
00:00:23,790 --> 00:00:28,430
If we look at the Promtail configuration from the last video, where we downloaded and installed the Promtail

8
00:00:28,440 --> 00:00:31,020
binary, we created this one scrape config.

9
00:00:31,020 --> 00:00:32,549
Here we named it system.

10
00:00:32,549 --> 00:00:34,440
It targets the local server.

11
00:00:34,470 --> 00:00:38,040
It has a job called varlogs and it has a path variable.

12
00:00:38,040 --> 00:00:41,070
So it's scanning everything in the /var/log folder.

13
00:00:41,100 --> 00:00:42,150
Star log.

14
00:00:42,180 --> 00:00:43,970
That's a wildcard for anything ending in log.

15
00:00:43,980 --> 00:00:45,930
So /var/log/*log; we can see that here.

16
00:00:45,960 --> 00:00:47,550
Our job is varlogs.

17
00:00:47,550 --> 00:00:54,150
This filename label is also created, and that's because the scrape config path property here is showing

18
00:00:54,150 --> 00:00:57,030
us all the file names it has found that follow that pattern.
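
For reference, a minimal Promtail scrape config along the lines described might look like this (a sketch; the exact paths and label values depend on your own setup):

  scrape_configs:
    - job_name: system            # the scrape config named "system"
      static_configs:
        - targets:
            - localhost           # targets the local server
          labels:
            job: varlogs          # the job label we see in Loki
            __path__: /var/log/*log   # wildcard: every file ending in "log" under /var/log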

19
00:00:57,220 --> 00:01:01,620
OK, now I can see both those options there because I've got them highlighted here, so I can highlight

20
00:01:01,620 --> 00:01:02,730
it or un-highlight it again.

21
00:01:02,730 --> 00:01:05,099
Same thing with the job: I can make it active or not.

22
00:01:05,099 --> 00:01:11,580
If I make job active and then select varlogs, it then shows me this string here: job equals var

23
00:01:11,580 --> 00:01:11,820
logs.

24
00:01:11,970 --> 00:01:14,850
That string is called a log stream selector.

25
00:01:14,860 --> 00:01:19,700
That's one of the required things we need when we're writing a LogQL query in this window here.

26
00:01:19,710 --> 00:01:25,680
So very quickly we can just click Show logs, and it has put it into the query there: job equals varlogs is showing me everything

27
00:01:25,680 --> 00:01:28,230
in the job varlogs in the last one hour.
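
In other words, the log stream selector typed into the query window is simply:

  {job="varlogs"}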

28
00:01:28,260 --> 00:01:34,110
So if I just scroll up, these are all the log lines from all the files that that scrape config has

29
00:01:34,110 --> 00:01:34,620
collected.

30
00:01:34,620 --> 00:01:38,100
If I look at each of those individually and open it up, we can see it has two labels: file

31
00:01:38,100 --> 00:01:39,030
name and job.

32
00:01:39,120 --> 00:01:43,290
So looking at job equals varlogs, every one of these has job varlogs.

33
00:01:43,320 --> 00:01:45,230
The file name will be different.

34
00:01:45,270 --> 00:01:50,550
For example, that's /var/log/auth.log, whereas that is /var/log/syslog.

35
00:01:50,940 --> 00:01:52,470
OK, so that's a log stream selector.

36
00:01:52,470 --> 00:01:58,260
So if we go back to the log browser and deselect that, press filename, then /var/log/auth.log, that's also

37
00:01:58,260 --> 00:01:59,310
a log stream selector.

38
00:01:59,310 --> 00:02:00,390
It is just a different one.

39
00:02:00,450 --> 00:02:01,200
I press Show logs.

40
00:02:01,230 --> 00:02:05,490
It's showing me all the logs it has for the label filename equals /var/log/auth.log.

41
00:02:05,490 --> 00:02:10,110
So if I open those up, you'll see the label filename equals /var/log/auth.log.

42
00:02:10,199 --> 00:02:16,020
So all these lines here have filename /var/log/auth.log, and that's the log stream selector.

43
00:02:16,050 --> 00:02:19,530
Now we can do more with log streams like that: we can select two log streams at the same time.

44
00:02:19,530 --> 00:02:24,880
So I go back to the log browser and I'll select auth.log and syslog at the same time.

45
00:02:24,900 --> 00:02:27,980
So if we look at the log stream selector now, there are a few differences.

46
00:02:27,990 --> 00:02:34,490
There is a tilde there indicating it's using a regex pattern, equals-tilde, and that is a pipe character.

47
00:02:34,500 --> 00:02:38,520
So /var/log/auth.log or /var/log/syslog. Let's show logs.

48
00:02:39,180 --> 00:02:44,700
It's now showing me all the lines from both of those files, so that one is /var/log/syslog.
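
As a sketch, the regex selector built by the log browser looks roughly like this, assuming the two file paths used here:

  {filename=~"/var/log/auth.log|/var/log/syslog"}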

49
00:02:44,940 --> 00:02:46,130
If I go further down,

50
00:02:46,130 --> 00:02:52,710
that's /var/log/syslog, and there's a /var/log/auth.log that was selected before. Going to my documentation

51
00:02:52,710 --> 00:02:53,160
website.

52
00:02:53,160 --> 00:02:56,510
Log stream selectors: now, inside log stream selectors are operators.

53
00:02:56,520 --> 00:02:59,320
OK, and this is an example of using a regex.

54
00:02:59,340 --> 00:03:00,660
There are several of them here.

55
00:03:00,660 --> 00:03:03,330
There's equals, which is the most common one you'll see.

56
00:03:03,330 --> 00:03:07,680
For example, job equals varlogs. Then there's not-equals down here.

57
00:03:07,680 --> 00:03:08,430
I've got filename

58
00:03:08,430 --> 00:03:09,870
not equal to /var/log/

59
00:03:09,870 --> 00:03:17,100
syslog. Also regex matches: we've seen that, we've searched for a regex /var/log/auth.log or

60
00:03:17,130 --> 00:03:20,200
/var/log/syslog. And regex does not match.

61
00:03:20,220 --> 00:03:24,210
So all these examples here, we can type those by hand into the search query.
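
Summarising the four label matching operators, with examples based on the labels used here:

  {job="varlogs"}                                   # equals
  {filename!="/var/log/syslog"}                     # not equals
  {filename=~"/var/log/auth.log|/var/log/syslog"}   # regex matches
  {filename!~"/var/log/syslog"}                     # regex does not match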

62
00:03:24,240 --> 00:03:29,400
For example, if I was to delete that and type a curly brace, it gives me the available labels

63
00:03:29,400 --> 00:03:32,580
that it knows about. In the last one hour I only have two labels.

64
00:03:32,580 --> 00:03:37,980
So there's no point in me changing that to 24 hours, for example, because I still only have two to

65
00:03:37,980 --> 00:03:38,450
choose from.

66
00:03:38,460 --> 00:03:40,560
But if I just select filename there,

67
00:03:40,560 --> 00:03:45,120
it then says equals, and then it shows me what values I can search for under filename.

68
00:03:45,130 --> 00:03:49,020
That's everything that the Promtail scrape config has found and put into the Loki service

69
00:03:49,020 --> 00:03:49,490
for us.

70
00:03:49,500 --> 00:03:53,280
So for filename, there's the Debian package manager log, and so on, so I could run that query.

71
00:03:53,280 --> 00:03:56,380
And that's everything it has in the last 24 hours.

72
00:03:56,400 --> 00:03:59,280
We can see what it has stored there, for example.

73
00:04:00,170 --> 00:04:07,520
OK, so now let's say I wanted to search for all of those. Two ways we have of doing that: job equals

74
00:04:07,520 --> 00:04:12,110
varlogs, because it just so happens I have one scrape config and the job name was varlogs.

75
00:04:12,110 --> 00:04:15,530
Or I could say filename equals-tilde, using a regex.

76
00:04:15,620 --> 00:04:22,070
That's the tilde character, then dot plus like that, ended off with a curly brace. Press shift-enter and

77
00:04:22,070 --> 00:04:26,000
that will return all the file names, for example /var/log/syslog.

78
00:04:26,880 --> 00:04:28,870
But look at that one, that's /var/log/auth.log.

79
00:04:29,570 --> 00:04:31,250
Most of the time it's /var/log/syslog.

80
00:04:31,310 --> 00:04:36,590
Sure, there's some auth ones there, but let's say I didn't want one of those. Comma, I write filename

81
00:04:36,590 --> 00:04:43,100
again, not equal to /var/log/syslog, for example, and then shift-enter or press Run query.

82
00:04:43,290 --> 00:04:46,400
OK, so it's given me all the file names; auth.log is in there.

83
00:04:46,940 --> 00:04:51,710
We'll find the Debian Package Manager logs as well, but we won't find anything in there with the label

84
00:04:51,710 --> 00:04:56,450
filename /var/log/syslog.
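
Put together, the two approaches and the exclusion look roughly like this:

  {job="varlogs"}
  {filename=~".+"}
  {filename=~".+", filename!="/var/log/syslog"}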

85
00:04:56,480 --> 00:05:00,710
Anyway, that is outlined here, so you can read more about the log stream

86
00:05:00,710 --> 00:05:07,610
selector operators. Now, looking at filter expressions: filter expressions allow us to filter what's returned

87
00:05:07,610 --> 00:05:08,770
from the log stream selector

88
00:05:08,780 --> 00:05:11,750
even more. So let's go back to job equals varlogs.

89
00:05:12,260 --> 00:05:20,010
So for everything that it can find where the label job equals varlogs, I can say I

90
00:05:20,010 --> 00:05:23,990
really only want the lines with the word error in them.

91
00:05:23,990 --> 00:05:26,680
So some of those do have error; it was quite hard to see.

92
00:05:26,690 --> 00:05:28,370
So there's level equals error there.

93
00:05:28,400 --> 00:05:31,310
So I can say pipe-equals error.

94
00:05:32,300 --> 00:05:38,040
So everything in here now has the word error in it. Now, for this filter expression here,

95
00:05:38,060 --> 00:05:43,310
it doesn't work to write equals-equals like that, or just a single equals. It needs to be a pipe-

96
00:05:43,310 --> 00:05:43,940
equals.
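
So the filter expression added after the selector is:

  {job="varlogs"} |= "error"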

97
00:05:44,030 --> 00:05:49,580
The pipe is quite an ambiguous character to use, but that's just how it is, because pipe is the same

98
00:05:49,580 --> 00:05:51,640
as writing "or" in a regex expression.

99
00:05:51,650 --> 00:05:55,460
But that's how you use the filter expression if you want to include everything with error.

100
00:05:55,490 --> 00:05:59,330
Then you use those two characters there. In older versions of Loki,

101
00:05:59,360 --> 00:06:00,430
you could just do that,

102
00:06:00,470 --> 00:06:01,610
but that now shows an error

103
00:06:01,610 --> 00:06:06,410
in the more recent versions. This says: give me everything with the word error in it.

104
00:06:06,650 --> 00:06:10,670
OK, so we can say, give me everything that doesn't have error in it.

105
00:06:10,940 --> 00:06:14,240
So for that, we can use the not character: not-equals error.

106
00:06:14,630 --> 00:06:18,200
So everything returned doesn't have the word error in it anywhere.
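
And the excluding form, as a sketch:

  {job="varlogs"} != "error"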

107
00:06:18,530 --> 00:06:22,700
Now moving on, we could say we want to use a regex in our query.

108
00:06:22,710 --> 00:06:26,750
So in that case we'd use the pipe and then the regex tilde.

109
00:06:26,750 --> 00:06:30,710
And we could say, give me everything with error or info.

110
00:06:30,740 --> 00:06:36,470
Now the pipe is used twice here, there and in there. In this expression, that means give me everything

111
00:06:36,470 --> 00:06:38,630
that matches the regex, and in the regex

112
00:06:38,630 --> 00:06:43,490
it just means "or", so error or info; so we'll see everything with error or info.

113
00:06:43,520 --> 00:06:44,540
So shift enter.

114
00:06:44,900 --> 00:06:51,770
If I scroll down, it's error, info, error, info, so we can find error or info in those results. Let's

115
00:06:51,770 --> 00:06:56,540
say I didn't want error or info; I could say not-regex error or info.

116
00:06:57,080 --> 00:06:58,430
OK, so everything returned

117
00:06:58,430 --> 00:07:02,210
doesn't have error and it doesn't have info written anywhere in there.
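
As a sketch, the two regex filter forms just described:

  {job="varlogs"} |~ "error|info"   # line matches the regex (error or info)
  {job="varlogs"} !~ "error|info"   # line does not match the regex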

118
00:07:02,660 --> 00:07:03,640
We can do more than that.

119
00:07:03,650 --> 00:07:11,210
Sometimes you might get a line with error and info in it, so you can say give me everything with error,

120
00:07:11,210 --> 00:07:13,160
but not if it contains info.

121
00:07:13,880 --> 00:07:18,530
So we're getting error there, and none of those lines also contain info, which is quite hard to find.

122
00:07:18,530 --> 00:07:20,930
And that was the case here anyway, but I'm just showing that it can be done.
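
Filters can be chained like this:

  {job="varlogs"} |= "error" != "info"   # contains "error" but not "info"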

123
00:07:20,960 --> 00:07:22,190
Also, we could do another one.

124
00:07:22,190 --> 00:07:29,080
Invalid user: a regex of invalid user will find everything with invalid user followed by bob or radius,

125
00:07:29,120 --> 00:07:30,740
so there may be a few of those.

126
00:07:31,050 --> 00:07:32,150
OK, so there's none of those.

127
00:07:32,150 --> 00:07:35,420
If I check in the last two days, there are two occurrences of that.

128
00:07:35,450 --> 00:07:39,710
There was invalid user followed by bob, and invalid user followed by radius.
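
A sketch of that regex filter, assuming the usernames mentioned here:

  {job="varlogs"} |~ "invalid user (bob|radius)"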

129
00:07:39,800 --> 00:07:46,160
I could just say give me everything with invalid user in it, and we can see all the attempts where someone's

130
00:07:46,160 --> 00:07:50,300
tried to log into my server, and all the usernames that they're using and their IP addresses.

131
00:07:50,720 --> 00:07:54,740
This is pretty normal for a server on the internet, though. Now, if we were collecting web server logs,

132
00:07:54,740 --> 00:07:58,940
we could also search for status equals 403 or status 500, for example.

133
00:07:58,970 --> 00:08:03,230
That would be a regex. If you know regex, regexes can become quite sophisticated and

134
00:08:03,230 --> 00:08:03,770
quite long.

135
00:08:03,770 --> 00:08:05,750
So we'll see some more regexes later.

136
00:08:05,810 --> 00:08:09,170
OK, now let's look at scalar vectors and series of scalar vectors.

137
00:08:09,260 --> 00:08:12,800
OK, so the data we've returned so far has been returned as streams of log lines.

138
00:08:12,830 --> 00:08:19,040
So, log lines: many, many log lines, and we can look at them individually and at how Grafana tries to

139
00:08:19,040 --> 00:08:19,640
break them up.

140
00:08:19,650 --> 00:08:21,640
But that's the line as it's written in a text file.

141
00:08:21,650 --> 00:08:27,380
We can't really graph that, despite the fact that Grafana in the Explore section is creating a graph

142
00:08:27,380 --> 00:08:27,750
of that.

143
00:08:27,770 --> 00:08:31,360
So, for example, I'll just put job equals varlogs.

144
00:08:31,700 --> 00:08:36,770
It's drawing us a graph and it's colored it, in some way grouped it into common information like info,

145
00:08:36,770 --> 00:08:37,880
error and unknown.

146
00:08:37,909 --> 00:08:42,470
We can look at the tooltip there. Now, if we use the logs visualization in the Grafana dashboards,

147
00:08:42,470 --> 00:08:43,549
it won't show us a graph.

148
00:08:43,850 --> 00:08:46,240
It will just show us the log information.

149
00:08:46,350 --> 00:08:47,690
So I'll quickly demonstrate that.

150
00:08:47,690 --> 00:08:48,610
So I'll copy that.

151
00:08:48,620 --> 00:08:52,490
Go into create dashboard, add a new empty panel, and select

152
00:08:53,860 --> 00:08:55,630
the Logs option just there.

153
00:08:56,770 --> 00:09:00,820
Select Loki and paste your LogQL query, and so there it is; click out of it.

154
00:09:00,820 --> 00:09:01,820
So there we are.

155
00:09:01,840 --> 00:09:03,850
That's the information I was seeing, in the logs panel.

156
00:09:03,880 --> 00:09:07,170
There's no graph there, but if you wanted to see a graph, we can do that.

157
00:09:07,180 --> 00:09:13,090
And that is by converting our log lines into scalar vectors, or a series of scalar vectors.

158
00:09:13,140 --> 00:09:13,420
Okay.

159
00:09:13,420 --> 00:09:18,580
So what we do is we wrap our query into a function that somehow counts our data.

160
00:09:18,610 --> 00:09:24,490
So the first one, count_over_time, shows the total count of log lines for a time range. Going back into

161
00:09:24,490 --> 00:09:25,600
the Explore tab.

162
00:09:26,320 --> 00:09:28,940
I've got that and I'll do that query again.

163
00:09:28,950 --> 00:09:36,070
Curly brace, job equals varlogs, count_over_time, bracket, and the range one minute.

164
00:09:36,610 --> 00:09:36,910
OK.
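
So the query as typed is roughly:

  count_over_time({job="varlogs"}[1m])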

165
00:09:36,970 --> 00:09:43,660
So that has taken our log lines and created two scalar vectors, so it's done a count of varlogs where

166
00:09:43,660 --> 00:09:46,480
the filename label equals /var/log/syslog.

167
00:09:46,480 --> 00:09:51,250
Another one at the bottom here with a filename label of /var/log/auth.log. In the last one hour,

168
00:09:51,250 --> 00:09:55,330
there are only two log files being written. If I change that to the last 24 hours,

169
00:09:55,450 --> 00:09:59,920
then there are four log files that it can create scalar vectors for.

170
00:09:59,920 --> 00:10:04,990
It's quite hard to see: there are some blue dots down here, that would be for the droplet agent update

171
00:10:04,990 --> 00:10:11,380
log, and there's a yellow one just over here, which I'll zoom in to. That's the Debian Package Manager

172
00:10:11,380 --> 00:10:12,250
log just down there.

173
00:10:12,280 --> 00:10:14,710
I can examine that even more, then zoom out.

174
00:10:15,820 --> 00:10:21,670
So that is taking our data, our log lines and converting it into a graph by doing a count on the information

175
00:10:21,670 --> 00:10:24,670
that it's getting back from the log stream selector just there.

176
00:10:24,790 --> 00:10:27,940
Now, this range parameter here, we need to put that in.

177
00:10:28,000 --> 00:10:33,820
That's a range of how far back it should count every time it creates one of these scalar values for the

178
00:10:33,820 --> 00:10:34,210
graph.

179
00:10:34,270 --> 00:10:42,250
So for example, if I look at this value just here for /var/log/syslog, it says 16 in the last one minute;

180
00:10:42,250 --> 00:10:43,540
that's that range there.

181
00:10:43,570 --> 00:10:47,520
There are 16 lines in /var/log/syslog at that point there.

182
00:10:47,530 --> 00:10:53,230
In the last one minute there, there are 21 occurrences of a log line in /var/log/syslog, and that's what

183
00:10:53,230 --> 00:10:53,980
the one minute is.

184
00:10:54,250 --> 00:10:56,980
I could say give me that for one hour. Shift-

185
00:10:56,980 --> 00:10:58,310
enter to update that.

186
00:10:58,330 --> 00:11:00,520
Okay, so now the graph is a little different.

187
00:11:00,880 --> 00:11:01,630
Zoom out.

188
00:11:02,380 --> 00:11:07,110
We can see here that at this point here in the last one hour, there are one thousand three hundred

189
00:11:07,130 --> 00:11:10,090
and twenty-one lines in the /var/log/syslog file.

190
00:11:10,120 --> 00:11:11,350
I could even do one second.

191
00:11:12,350 --> 00:11:18,850
There's not many. Let's try ten seconds. So in the last ten seconds at this point here in the /var/log/

192
00:11:18,850 --> 00:11:21,780
syslog file, there were 11 occurrences. Go back to one minute:

193
00:11:21,790 --> 00:11:23,920
There are 21 occurrences in the last one minute there.

194
00:11:24,040 --> 00:11:25,240
That's what the range property is about.

195
00:11:25,240 --> 00:11:27,050
when you use these functions over your logs.

196
00:11:27,370 --> 00:11:34,750
The next function, rate, is very similar to count_over_time, except it's showing us the rate

197
00:11:34,750 --> 00:11:35,590
per second.

198
00:11:35,620 --> 00:11:42,550
So once again, if I was to look at that value there, there is a rate of zero point six six seven log

199
00:11:42,550 --> 00:11:46,120
lines per second in the last one minute at that point.

200
00:11:46,270 --> 00:11:47,440
So that's how you read that.
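
The rate version of the same query:

  rate({job="varlogs"}[1m])   # log lines per second over the last minute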

201
00:11:47,530 --> 00:11:52,710
We can also do a bytes_over_time count. Shift-enter. I'll zoom into that section there.

202
00:11:52,750 --> 00:11:57,070
That is because sometimes a log line might be very long and contain a lot of bytes.

203
00:11:57,070 --> 00:12:01,180
You may want to know something like that, and we can also get the rate of the bytes per second as

204
00:12:01,180 --> 00:12:05,770
well when we're converting our query into a scalar vector.
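
The byte-based equivalents, as a sketch:

  bytes_over_time({job="varlogs"}[1m])   # total bytes of log lines in each range
  bytes_rate({job="varlogs"}[1m])        # bytes per second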

205
00:12:05,800 --> 00:12:07,720
We can also limit it using a filter expression.

206
00:12:07,720 --> 00:12:13,960
For example, I'll copy that, shift-enter: job equals varlogs lines that contain error, and we're counting over

207
00:12:13,960 --> 00:12:16,630
time with a range of one hour in that time range.

208
00:12:16,780 --> 00:12:19,210
I'll change that to the last 12 hours.

209
00:12:19,360 --> 00:12:21,010
Last 24 hours.

210
00:12:21,160 --> 00:12:26,320
So count_over_time of varlogs lines that contain the word error: at this point here, for the last one hour,

211
00:12:26,350 --> 00:12:27,970
there were 360 entries.
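
Counting only the filtered lines looks like this:

  count_over_time({job="varlogs"} |= "error" [1h])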

212
00:12:28,100 --> 00:12:29,660
OK, now aggregate functions.

213
00:12:29,680 --> 00:12:32,650
Now, what we've been looking at so far are series of scalar vectors.

214
00:12:32,650 --> 00:12:34,510
So I'm getting two series here.

215
00:12:34,540 --> 00:12:40,000
So when doing the count_over_time of job equals varlogs for one hour, for example, it's giving me a

216
00:12:40,000 --> 00:12:42,520
new series broken up by file-

217
00:12:42,520 --> 00:12:48,520
name there. So I'm doing job equals varlogs, but because each of my log lines has two labels in it,

218
00:12:48,520 --> 00:12:49,960
the job and the filename,

219
00:12:49,960 --> 00:12:53,870
it's given me four different series, and you can see them in the different colors.

220
00:12:53,890 --> 00:12:59,830
Now I can do a total of all of those, which would then become a single scalar vector, by wrapping

221
00:12:59,830 --> 00:13:02,680
that into an aggregate function such as sum.

222
00:13:02,680 --> 00:13:04,930
So sum is one of them. Shift-

223
00:13:04,930 --> 00:13:07,660
enter, and so it's now giving me the total of all of them.

224
00:13:07,660 --> 00:13:11,710
It's no longer breaking that up into the different series depending on the labels.

225
00:13:11,750 --> 00:13:12,160
It's now done

226
00:13:12,160 --> 00:13:14,710
sum of count_over_time, which has now given me one.
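
Wrapping the range query in an aggregation, as described:

  sum(count_over_time({job="varlogs"}[1h]))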

227
00:13:14,740 --> 00:13:15,730
There are other ones as well.

228
00:13:15,730 --> 00:13:19,810
We can get the maximum of count_over_time or the minimum of count_over_time.

229
00:13:19,840 --> 00:13:25,030
There are several of them: average, standard deviation, standard variance, and count, which is

230
00:13:25,030 --> 00:13:26,330
counting the number of elements.

231
00:13:26,380 --> 00:13:27,280
So count.

232
00:13:28,090 --> 00:13:34,870
OK, so here there were two, here three, and here there were four filename series that contained log

233
00:13:34,870 --> 00:13:39,120
lines with the label job varlogs. And then down here we've got bottomk and topk.

234
00:13:39,130 --> 00:13:45,400
Now, they won't convert all our series into a single scalar vector like these other ones above here.

235
00:13:45,430 --> 00:13:51,520
These will give us only two or three series of the bottom values, depending on what we use for

236
00:13:51,520 --> 00:13:54,400
k, or the top values, depending on what we use for k.

237
00:13:54,400 --> 00:13:59,470
So as an example, over here there are four series returned, because I can see that when I use count.

238
00:13:59,470 --> 00:14:03,720
So if I zoom into that, I really want to know where the values were highest.

239
00:14:03,730 --> 00:14:09,910
So I can say topk, two, comma, and it's only going to show me the top two series now, even though there

240
00:14:09,910 --> 00:14:11,410
are four series to choose from.

241
00:14:11,440 --> 00:14:13,480
OK, so it's just showing me the top two.

242
00:14:13,510 --> 00:14:16,810
I want to see what the bottom two series were in that collection there.

243
00:14:16,820 --> 00:14:21,050
So bottomk two: show me the bottom two series.

244
00:14:21,070 --> 00:14:26,710
So at that point, there were only two series that contained log lines, being filename /var/log/syslog

245
00:14:26,710 --> 00:14:27,930
and filename /var/log/auth.log.

246
00:14:28,300 --> 00:14:30,710
But at this point, there are other filenames being written.

247
00:14:30,730 --> 00:14:33,810
That's the package manager and the droplet agent update.

248
00:14:33,850 --> 00:14:36,580
So that's the use of topk and bottomk.

249
00:14:36,580 --> 00:14:37,660
Give me the bottom three.

250
00:14:38,380 --> 00:14:42,310
Now, bottomk of three, or four, is just going to give me all of them anyway.
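
Sketches of the aggregations mentioned, including topk and bottomk:

  max(count_over_time({job="varlogs"}[1h]))
  avg(count_over_time({job="varlogs"}[1h]))
  count(count_over_time({job="varlogs"}[1h]))       # number of series
  topk(2, count_over_time({job="varlogs"}[1h]))     # only the two highest series
  bottomk(2, count_over_time({job="varlogs"}[1h]))  # only the two lowest series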

251
00:14:43,180 --> 00:14:46,060
OK, so there are some examples there that you can use.

252
00:14:46,060 --> 00:14:48,600
And of course, you can also filter those further down.

253
00:14:48,610 --> 00:14:54,190
So give me everything with error, or everything with info, etc. Now, not only that: let's say

254
00:14:54,190 --> 00:14:56,710
count_over_time of job varlogs, one minute,

255
00:14:57,710 --> 00:15:03,470
for example, returns four different series based on the labels in the information, so there are two

256
00:15:03,470 --> 00:15:04,070
labels here.

257
00:15:04,130 --> 00:15:08,930
Job equals varlogs, and filename, which of course is whatever the name of the file was. We could have a

258
00:15:08,930 --> 00:15:13,760
third label, which I'll show you in the next video and we can choose what we're going to group by.

259
00:15:13,790 --> 00:15:17,900
So, for example, I'm saying sum: give me the sum of everything

260
00:15:17,900 --> 00:15:21,360
as one. Now, I don't really want just one anymore.

261
00:15:21,380 --> 00:15:24,530
I want to split them up again, and I'll group by the filename.

262
00:15:24,860 --> 00:15:25,970
So it's broken up again.

263
00:15:25,970 --> 00:15:30,050
So that's essentially exactly the same response as what was returned by that.

264
00:15:30,200 --> 00:15:33,470
But I'm explicitly saying to group it by filename.
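
Grouping the sum by a label looks like this:

  sum by (filename) (count_over_time({job="varlogs"}[1m]))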

265
00:15:33,500 --> 00:15:40,310
Now, this is good in those cases where you have more than two labels in your log lines. I only have

266
00:15:40,310 --> 00:15:42,430
two, so I'm not really changing the query

267
00:15:42,440 --> 00:15:44,630
much, so it's quite a useless query, that one.

268
00:15:44,720 --> 00:15:48,860
Let's just say I had a label called host, for example; we could group by host.

269
00:15:49,220 --> 00:15:53,810
And not only that, we could group multiple log streams, et cetera. You can read my documentation for

270
00:15:53,810 --> 00:15:55,610
examples if you want those kinds of things.

271
00:15:55,640 --> 00:15:59,460
Now, comparison operators are another thing we can do with the aggregate functions there.

272
00:15:59,480 --> 00:16:04,810
For example, show me the count_over_time of job varlogs, one minute, where it's greater than four.

273
00:16:05,450 --> 00:16:06,080
Press enter.

274
00:16:06,500 --> 00:16:10,800
OK, so that's given me the totals back where the value was greater than four.

275
00:16:10,820 --> 00:16:14,870
So none of those values are less than four, but we could have less than four if we wanted to.

276
00:16:15,650 --> 00:16:16,370
There's nothing there.

277
00:16:16,400 --> 00:16:17,420
What about less than 10?

278
00:16:17,720 --> 00:16:20,680
I think there were some at 20; I guess a few are less than 20.

279
00:16:20,690 --> 00:16:24,200
So same thing: greater than, greater than or equals, not equals.

280
00:16:24,770 --> 00:16:30,680
There's a few examples. Logical operators: we could say, give me something where the numbers are

281
00:16:30,680 --> 00:16:33,320
greater than four, or less than or equal to one.

282
00:16:33,740 --> 00:16:34,250
Copy that.

283
00:16:34,730 --> 00:16:35,510
Put that in there.

284
00:16:35,540 --> 00:16:37,130
Sum of count_over_time of job

285
00:16:37,130 --> 00:16:40,340
varlogs, one minute, greater than four or less than or equal to one.

286
00:16:40,350 --> 00:16:42,650
So everything's over four anyway.

287
00:16:42,660 --> 00:16:45,530
For this example, I'll change it up to 24 hours.

288
00:16:46,200 --> 00:16:48,800
So another example is between 100 and 200.

289
00:16:48,890 --> 00:16:49,430
Copy that.

290
00:16:50,790 --> 00:16:55,020
There's not very many values between 100 and 200 anyway.
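
Sketches of the comparison and logical operator queries just shown:

  sum(count_over_time({job="varlogs"}[1m])) > 4
  sum(count_over_time({job="varlogs"}[1m])) > 4 or sum(count_over_time({job="varlogs"}[1m])) <= 1
  sum(count_over_time({job="varlogs"}[1m])) > 100 and sum(count_over_time({job="varlogs"}[1m])) < 200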

291
00:16:55,200 --> 00:16:58,480
Operator order, that's the order of how operators are processed.

292
00:16:58,500 --> 00:17:00,150
That's common in computer programming.

293
00:17:00,180 --> 00:17:05,770
If you don't wrap your operations in brackets, it will default to a particular order.

294
00:17:05,819 --> 00:17:11,170
For example, PEMDAS is an acronym you can use to remember it: it will process parentheses first.

295
00:17:11,190 --> 00:17:17,430
If you don't have those, it will then move on to processing exponents, multiplications, divisions, additions,

296
00:17:17,430 --> 00:17:18,780
and subtractions, in that order.

297
00:17:18,839 --> 00:17:24,780
Anyway, some examples there. You'll see that there are no parentheses in there, so it's processing

298
00:17:24,780 --> 00:17:29,790
the exponent first, then the multiplication and division and also the modulus, then the addition.

299
00:17:29,790 --> 00:17:33,900
And that's the same equation, but with parentheses wrapped around everything.

300
00:17:33,900 --> 00:17:35,330
So that's a particular order.
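
An illustration of the default precedence (the PEMDAS order mentioned), using a simple arithmetic sketch rather than the exact example on screen:

  sum(count_over_time({job="varlogs"}[1m])) * 2 + 1     # multiplication happens before addition
  (sum(count_over_time({job="varlogs"}[1m])) * 2) + 1   # same result, parentheses made explicit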

301
00:17:36,400 --> 00:17:36,850
There we go.

302
00:17:37,240 --> 00:17:37,560
OK.

303
00:17:37,590 --> 00:17:42,460
LogQL: you can read more about LogQL here in the official documentation. It's very versatile and you

304
00:17:42,460 --> 00:17:43,630
can do a lot with it.

305
00:17:43,750 --> 00:17:45,940
OK, this will become more useful when we create a dashboard later.

306
00:17:46,000 --> 00:17:51,580
Now, in the next video, I'm going to set up a Promtail service on another server down here, showing

307
00:17:51,580 --> 00:17:56,860
you that you can set up multiple Promtail services all pointing to the same Loki service that Grafana

308
00:17:56,860 --> 00:17:58,740
is pointing to in this Loki data source.

309
00:17:58,750 --> 00:18:01,750
And you can have as many of those as you want, all pointing to the one Loki service anyway.

310
00:18:01,780 --> 00:18:02,290
Excellent.

