1
00:00:01,050 --> 00:00:04,140
Hello and welcome to this lecture. In this lecture

2
00:00:04,170 --> 00:00:10,370
we talk about monitoring a Kubernetes cluster.

3
00:00:10,410 --> 00:00:16,890
So how do you monitor resource consumption on Kubernetes? Or more importantly what would you like

4
00:00:16,890 --> 00:00:18,300
to monitor?

5
00:00:18,300 --> 00:00:23,330
I'd like to know node-level metrics such as the number of nodes in the cluster,

6
00:00:23,460 --> 00:00:31,500
how many of them are healthy, as well as performance metrics such as CPU, memory, network, and disk utilization.

7
00:00:32,160 --> 00:00:38,670
We also want pod-level metrics, such as the number of pods, and performance metrics of each pod, such

8
00:00:38,670 --> 00:00:41,940
as the CPU and memory consumption of each.

9
00:00:41,940 --> 00:00:48,600
So we need a solution that will monitor these metrics, store them, and provide analytics around this data.

10
00:00:49,830 --> 00:00:56,040
As of this recording, Kubernetes does not come with a full featured built-in monitoring solution.

11
00:00:56,040 --> 00:01:03,510
However, there are a number of open-source solutions available today, such as the Metrics-Server, Prometheus,

12
00:01:04,020 --> 00:01:09,450
Elastic Stack, and proprietary solutions like Datadog and Dynatrace.

13
00:01:12,580 --> 00:01:18,870
Heapster was one of the original projects that enabled monitoring and analysis features for Kubernetes.

14
00:01:18,870 --> 00:01:25,640
You will see a lot of references to it online when you look for reference architectures on monitoring Kubernetes.

15
00:01:25,780 --> 00:01:33,580
However, Heapster is now deprecated, and a slimmed-down version of it was formed, known as the Metrics Server.

16
00:01:33,580 --> 00:01:40,310
You can have one Metrics Server per Kubernetes cluster. The Metrics Server retrieves metrics from each

17
00:01:40,310 --> 00:01:46,100
of the kubernetes nodes and pods, aggregates them and stores them in memory.

18
00:01:46,100 --> 00:01:52,340
Note that the Metrics Server is only an in-memory monitoring solution and does not store the metrics

19
00:01:52,400 --> 00:01:57,480
on disk, and as a result you cannot see historical performance data.

20
00:01:57,650 --> 00:02:04,760
For that you must rely on one of the advanced monitoring solutions we talked about earlier in this lecture.

21
00:02:04,760 --> 00:02:10,050
So how are the metrics generated for the pods on these nodes? Kubernetes

22
00:02:10,060 --> 00:02:17,040
runs an agent on each node known as the kubelet, which is responsible for receiving instructions

23
00:02:17,040 --> 00:02:22,430
from the Kubernetes API server on the master and running pods on the nodes.

24
00:02:22,650 --> 00:02:30,750
The kubelet also contains a subcomponent known as cAdvisor, or Container Advisor. cAdvisor is

25
00:02:30,750 --> 00:02:37,140
responsible for retrieving performance metrics from pods, and exposing them through the kubelet API

26
00:02:37,200 --> 00:02:44,480
to make the metrics available for the Metrics Server. If you are using minikube for your local cluster,

27
00:02:44,840 --> 00:02:46,850
run the command minikube

28
00:02:47,000 --> 00:02:54,770
addons enable metrics-server. For all other environments, deploy the Metrics Server by cloning the

29
00:02:54,770 --> 00:03:00,950
metrics-server deployment files from the GitHub repository, and then deploying the required components

30
00:03:01,310 --> 00:03:08,810
using the kubectl create command. This command deploys a set of pods, services, and roles to enable

31
00:03:08,810 --> 00:03:14,150
the Metrics Server to poll for performance metrics from the nodes in the cluster.

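The setup steps above, sketched as shell commands. The repository URL and the layout of the deployment manifests are assumptions and may differ depending on the Metrics Server version you use:

```shell
# On minikube, the Metrics Server ships as an addon:
minikube addons enable metrics-server

# On other clusters, clone the deployment files and create the components
# (repo URL and manifest directory are assumptions; check the project's README):
git clone https://github.com/kubernetes-sigs/metrics-server.git
kubectl create -f metrics-server/deploy/
```

Either path results in a metrics-server pod running in the kube-system namespace, which the kubectl top commands rely on.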
32
00:03:14,150 --> 00:03:22,190
Once deployed, give the metrics-server some time to collect and process data. Once processed, cluster performance

33
00:03:22,190 --> 00:03:27,700
can be viewed by running the command kubectl top node.

34
00:03:27,770 --> 00:03:34,850
This provides the CPU and memory consumption of each of the nodes. As you can see, 8% of the CPU

35
00:03:34,850 --> 00:03:40,480
on my master node is consumed, which is about 166 millicores.

36
00:03:40,820 --> 00:03:48,730
Use the kubectl top pod command to view performance metrics of pods in Kubernetes.

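The two viewing commands mentioned above look like this once the Metrics Server has collected data (actual values vary per cluster):

```shell
# CPU and memory usage per node:
kubectl top node

# CPU and memory usage per pod in the current namespace
# (add -n <namespace> or --all-namespaces to widen the scope):
kubectl top pod
```

Both commands query the Metrics Server through the metrics API, so they return an error until the server is deployed and has finished its first collection cycle.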
37
00:03:48,890 --> 00:03:50,420
That's it for this lecture.

38
00:03:50,420 --> 00:03:55,740
Head over to the coding exercises section and practice viewing performance metrics on the Kubernetes

39
00:03:55,740 --> 00:03:57,120
cluster.

40
00:03:57,170 --> 00:03:57,660
Thank you.
