1
00:00:00,420 --> 00:00:03,120
Hello and welcome to this lecture.

2
00:00:03,120 --> 00:00:06,420
And in this lecture we will discuss ingress in Kubernetes.

3
00:00:06,420 --> 00:00:15,220
One of the common questions that students usually reach out with is regarding services and ingress.

4
00:00:15,240 --> 00:00:18,990
What's the difference between the two, and when to use what?

5
00:00:19,080 --> 00:00:25,150
So we're going to briefly revisit services and work our way towards ingress.

6
00:00:25,200 --> 00:00:31,710
We will start with a simple scenario. You are deploying an application on Kubernetes for a company

7
00:00:32,040 --> 00:00:39,480
that has an online store selling products. Your application would be available at say my-online-store.com.

8
00:00:39,490 --> 00:00:46,420
You build the application into a Docker Image and deploy it on the kubernetes cluster as a

9
00:00:46,420 --> 00:00:54,530
POD in a Deployment. Your application needs a database so you deploy a MySQL database as a POD

10
00:00:54,950 --> 00:01:02,570
and create a service of type ClusterIP called mysql-service to make it accessible to your application.

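A ClusterIP service like this mysql-service could be sketched as follows. This is a minimal sketch, not the lecture's exact file: the selector label is an assumption, and 3306 is simply the standard MySQL port.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: ClusterIP        # default type; reachable only from within the cluster
  selector:
    app: mysql           # assumed label on the MySQL pod
  ports:
    - port: 3306         # standard MySQL port
      targetPort: 3306
```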
11
00:01:03,720 --> 00:01:10,680
Your application is now working. To make the application accessible to the outside world, you create another

12
00:01:10,680 --> 00:01:18,780
service, this time of type NodePort and make your application available on a high-port on the nodes

13
00:01:18,840 --> 00:01:20,430
in the cluster.

14
00:01:20,430 --> 00:01:29,660
In this example, port 38080 is allocated for the service. The users can now access your application

15
00:01:29,690 --> 00:01:37,550
using the URL: http:// followed by the IP of any of your nodes, and port

16
00:01:37,820 --> 00:01:47,870
38080. That setup works and users are able to access the application. Whenever traffic increases, we increase

17
00:01:47,870 --> 00:01:53,870
the number of replicas of the pod to handle the additional traffic and the service takes care of splitting

18
00:01:53,870 --> 00:02:02,760
traffic between the pods. However, if you have deployed a production grade application before, you know

19
00:02:02,760 --> 00:02:09,040
that there are many more things involved in addition to simply splitting the traffic between the pods.

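A NodePort service along these lines might look like the sketch below. The service and label names are assumptions; 38080 is the port from the lecture, and note that a port that high only works if the API server's --service-node-port-range has been extended beyond the default 30000-32767.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wear-service      # assumed name
spec:
  type: NodePort
  selector:
    app: wear             # assumed pod label
  ports:
    - port: 8080          # assumed application port
      targetPort: 8080
      nodePort: 38080     # the high port exposed on every node
```

Users would then reach the application at http://&lt;any-node-ip&gt;:38080.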
20
00:02:09,180 --> 00:02:16,590
For example, we do not want the users to have to type in an IP address every time. So you configure your DNS

21
00:02:16,590 --> 00:02:24,280
server to point to the IP of the nodes. Your users can now access your application using the URL

22
00:02:24,420 --> 00:02:29,960
my-online-store.com and port 38080.

23
00:02:30,110 --> 00:02:34,400
Now you don't want your users to have to remember the port number either.

24
00:02:34,970 --> 00:02:41,630
However, service NodePorts can only allocate high-numbered ports, which are greater than 30000.

25
00:02:44,210 --> 00:02:50,300
so you then bring in an additional layer between the DNS server and your cluster like a proxy server

26
00:02:50,720 --> 00:02:59,240
that proxies requests on port 80 to port 38080 on your nodes. You then point your DNS to this

27
00:02:59,240 --> 00:03:07,960
server, and users can now access your application by simply visiting my-online-store.com.

28
00:03:07,960 --> 00:03:13,530
Now this is if your application is hosted on prem in your data center.

29
00:03:13,700 --> 00:03:19,970
Let's take a step back and see what you could do if you were on a public cloud environment like Google

30
00:03:19,970 --> 00:03:21,030
Cloud Platform.

31
00:03:22,110 --> 00:03:29,340
In that case, instead of creating a service of type NodePort for your web application, you could set it to

32
00:03:29,340 --> 00:03:31,290
type LoadBalancer.

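The switch is just a change of the service type. A sketch, with the same assumed names as before:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wear-service
spec:
  type: LoadBalancer   # still allocates a node port, and additionally asks the
                       # cloud provider (here GCP) for a network load balancer
  selector:
    app: wear          # assumed pod label
  ports:
    - port: 80
      targetPort: 8080  # assumed application port
```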
33
00:03:31,500 --> 00:03:37,980
When you do that Kubernetes would still do everything that it has to do for a NodePort, which is to

34
00:03:37,980 --> 00:03:40,210
provision a high port for the service.

35
00:03:40,380 --> 00:03:47,970
But in addition to that, Kubernetes also sends a request to Google Cloud Platform to provision a network

36
00:03:47,970 --> 00:03:55,830
load balancer for the service. On receiving the request, GCP would then automatically deploy a load balancer

37
00:03:55,860 --> 00:04:02,770
configured to route traffic to the service ports on all the nodes and return its information to kubernetes.

38
00:04:03,890 --> 00:04:10,220
The LoadBalancer has an external IP that can be provided to users to access the application.

39
00:04:10,340 --> 00:04:17,720
In this case we set the DNS to point to this IP and users access the application using the URL

40
00:04:17,720 --> 00:04:20,660
my-online-store.com.

41
00:04:20,810 --> 00:04:27,620
Perfect. Your company's business grows and you now have new services for your customers.

42
00:04:27,620 --> 00:04:34,790
For example, a video streaming service. You want your users to be able to access your new video streaming

43
00:04:34,790 --> 00:04:40,680
service by going to my-online-store.com/watch.

44
00:04:40,680 --> 00:04:49,160
You’d like to make your old application accessible at my-online-store.com/wear. Your developers

45
00:04:49,190 --> 00:04:55,280
developed the new video streaming application as a completely different application as it has nothing

46
00:04:55,280 --> 00:04:57,070
to do with the existing one.

47
00:04:57,080 --> 00:05:05,000
However in order to share the same cluster resources you deploy the new application as a separate deployment

48
00:05:05,120 --> 00:05:12,620
within the same cluster. You create a service called video-service of type LoadBalancer. Kubernetes

49
00:05:12,680 --> 00:05:22,460
provisions port 38282 for this service and also provisions a Network LoadBalancer on the cloud. The

50
00:05:22,460 --> 00:05:30,790
new load balancer has a new IP. Remember, you must pay for each of these load balancers, and having many

51
00:05:30,790 --> 00:05:34,110
such load balancers can adversely affect your cloud bill.

52
00:05:34,420 --> 00:05:38,140
So how do you direct traffic between each of these load balancers?

53
00:05:38,140 --> 00:05:45,370
Based on the URL that the users type in, you need yet another proxy or load balancer that can redirect

54
00:05:45,370 --> 00:05:52,920
traffic based on URLs to the different services.  Every time you introduce a new service

55
00:05:52,920 --> 00:06:00,090
you have to reconfigure the load balancer. And finally, you also need to enable SSL for your applications,

56
00:06:00,270 --> 00:06:04,370
so your users can access your application using https.

57
00:06:04,370 --> 00:06:07,650
Where do you configure that?

58
00:06:07,650 --> 00:06:13,410
It can be done at different levels: either at the application level itself, or at the load balancer or

59
00:06:13,410 --> 00:06:19,890
proxy server level. But which one? You don't want your developers to implement it in their applications,

60
00:06:20,040 --> 00:06:22,080
as they would do it in different ways.

61
00:06:22,080 --> 00:06:26,730
You want it to be configured in one place with minimal maintenance.

62
00:06:26,730 --> 00:06:33,900
Now that's a lot of different configuration and all of this becomes difficult to manage when your application

63
00:06:33,900 --> 00:06:35,430
scales.

64
00:06:35,430 --> 00:06:39,600
It requires involving different individuals in different teams.

65
00:06:39,600 --> 00:06:45,810
You need to configure your firewall rules for each new service, and it's expensive as well, as for each

66
00:06:45,810 --> 00:06:53,100
service a new cloud-native load balancer needs to be provisioned. Wouldn't it be nice if you could manage

67
00:06:53,220 --> 00:06:59,560
all of that within the Kubernetes cluster, and have all that configuration as just another Kubernetes

68
00:06:59,580 --> 00:07:07,790
definition file that lives along with the rest of your application deployment files? That's where

69
00:07:07,880 --> 00:07:13,340
ingress comes in. Ingress helps your users access your application

70
00:07:13,340 --> 00:07:20,030
using a single externally accessible URL, that you can configure to route to different services

71
00:07:20,390 --> 00:07:21,560
within your cluster.

72
00:07:21,680 --> 00:07:23,350
based on the URL path,

73
00:07:23,570 --> 00:07:28,780
and at the same time implement SSL security as well.

74
00:07:28,890 --> 00:07:30,040
Simply put,

75
00:07:30,120 --> 00:07:37,380
think of ingress as a layer 7 load balancer built in to the kubernetes cluster that can be configured

76
00:07:37,470 --> 00:07:39,800
using native kubernetes primitives

77
00:07:39,990 --> 00:07:42,120
just like any other object in kubernetes.

78
00:07:42,130 --> 00:07:48,720
Now remember, even with Ingress you still need to expose it

79
00:07:48,720 --> 00:07:56,160
to make it accessible outside the cluster. So you still have to either publish it as a NodePort or with

80
00:07:56,160 --> 00:07:58,350
a cloud native load balancer.

81
00:07:58,350 --> 00:08:01,320
But that is just a one time configuration.

82
00:08:01,320 --> 00:08:08,310
Going forward you are going to perform all your load balancing, Auth, SSL and URL based

83
00:08:08,310 --> 00:08:12,340
routing configurations on the Ingress controller.

84
00:08:12,410 --> 00:08:13,760
So how does it work?

85
00:08:13,790 --> 00:08:14,680
What is it?

86
00:08:14,680 --> 00:08:15,350
Where is it?

87
00:08:15,340 --> 00:08:16,510
How can you see it?

88
00:08:16,520 --> 00:08:18,160
How can you configure it?

89
00:08:18,230 --> 00:08:19,800
How does it load balance?

90
00:08:19,820 --> 00:08:24,180
How does it implement SSL?  Without ingress,

91
00:08:24,180 --> 00:08:31,480
how would YOU do all of these? I would use a reverse-proxy or a load balancing solution like

92
00:08:31,500 --> 00:08:39,130
NGINX or HAProxy or Traefik. I would deploy them on my kubernetes cluster and configure them to route traffic

93
00:08:39,130 --> 00:08:40,850
to other services.

94
00:08:40,960 --> 00:08:49,140
The configuration involves defining URL Routes, configuring SSL certificates etc. Ingress is implemented

95
00:08:49,170 --> 00:08:57,180
by Kubernetes in kind of the same way. You first deploy a supported solution, which happens to be any of these

96
00:08:57,300 --> 00:09:02,940
listed here and then specify a set of rules to configure ingress.

97
00:09:02,940 --> 00:09:10,290
The solution you deploy is called an ingress controller, and the set of rules you configure are called

98
00:09:10,350 --> 00:09:18,390
ingress resources. Ingress resources are created using definition files, like the ones we used to create

99
00:09:18,570 --> 00:09:20,550
pods, deployments and services

100
00:09:20,550 --> 00:09:28,310
earlier in this course. Now remember a kubernetes cluster does NOT come with an Ingress Controller

101
00:09:28,340 --> 00:09:29,720
by default.

102
00:09:29,720 --> 00:09:35,460
If you setup a cluster following the demos in this course, you won’t have an ingress controller built

103
00:09:35,450 --> 00:09:36,380
into it.

104
00:09:36,380 --> 00:09:43,390
So if you simply create ingress resources and expect them to work, they won't. Let us look at each of

105
00:09:43,390 --> 00:09:44,920
these in a bit more detail.

106
00:09:46,130 --> 00:09:51,950
As I mentioned you do not have an Ingress Controller on Kubernetes by default. So you MUST deploy

107
00:09:51,950 --> 00:09:54,910
one. What do you deploy?

108
00:09:54,960 --> 00:09:58,290
There are a number of solutions available for ingress.

109
00:09:58,290 --> 00:10:02,580
A few of them being GCE, which is Google's Layer 7

110
00:10:02,580 --> 00:10:14,460
HTTP Load Balancer, NGINX, Contour, HAProxy, Traefik and Istio. Out of these, GCE and NGINX

111
00:10:14,640 --> 00:10:19,710
are currently being supported and maintained by the Kubernetes project.

112
00:10:19,860 --> 00:10:28,510
And in this lecture we will use NGINX as an example. These Ingress Controllers are not just another

113
00:10:28,600 --> 00:10:34,730
load balancer or nginx server. The load balancer components are just a part of it.

114
00:10:34,810 --> 00:10:41,560
The Ingress controllers have additional intelligence built into them to monitor the kubernetes cluster

115
00:10:41,590 --> 00:10:49,620
for new definitions of ingress resources and configure the nginx server accordingly. An NGINX

116
00:10:49,630 --> 00:10:53,910
Controller is deployed as just another deployment in Kubernetes.

117
00:10:53,980 --> 00:11:01,630
So we start with a deployment definition file named nginx-ingress-controller, with 1 replica and

118
00:11:01,630 --> 00:11:04,210
a simple pod definition template.

119
00:11:04,210 --> 00:11:10,960
We will label it nginx-ingress and the image used is nginx-ingress-controller with the right

120
00:11:10,960 --> 00:11:12,560
version.

121
00:11:12,640 --> 00:11:19,050
Now this is a special build of NGINX built specifically to be used as an ingress controller in kubernetes.

123
00:11:19,800 --> 00:11:26,830
So it has its own set of requirements. Within the image the nginx program is stored at location

124
00:11:27,010 --> 00:11:28,800
/nginx-ingress-controller.

125
00:11:28,920 --> 00:11:31,760
So you must pass that as the command to start

126
00:11:31,770 --> 00:11:34,830
the nginx-controller-service.

127
00:11:34,830 --> 00:11:40,560
If you have worked with NGINX before, you know that it has a set of configuration options such as

128
00:11:40,620 --> 00:11:42,630
the path to store the logs.

129
00:11:42,630 --> 00:11:50,640
keep-alive threshold, SSL settings, session timeout etc. In order to decouple this configuration

130
00:11:51,000 --> 00:11:58,070
data from the nginx-controller image, you must create a ConfigMap object and pass that in.

131
00:11:58,110 --> 00:12:04,890
Now remember the ConfigMap object need not have any entries at this point. A blank object will do.

132
00:12:04,890 --> 00:12:10,680
But creating one makes it easy for you to modify a configuration setting in the future.

133
00:12:10,680 --> 00:12:16,190
You will just have to add it to this ConfigMap and not have to worry about modifying the nginx

134
00:12:16,200 --> 00:12:18,060
configuration files.

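The blank ConfigMap mentioned here really can be empty. The object name is an assumption:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
# no data yet; nginx settings such as log paths or SSL options
# can be added as key-value pairs under a data: section later
```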
135
00:12:18,060 --> 00:12:24,180
You must also pass in two environment variables that carry the POD’s name and namespace it is deployed

136
00:12:24,180 --> 00:12:31,650
to. The nginx service requires these to read the configuration data from within the POD. And finally

137
00:12:31,860 --> 00:12:38,560
specify the ports used by the ingress controller which happens to be 80 and 443.

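Putting the pieces from the last few steps together, the controller deployment might look roughly like this. This is a sketch based on the description above; the image tag, label value, and ConfigMap name are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          # a special build of nginx, made to run as an ingress controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0  # assumed tag
          args:
            - /nginx-ingress-controller                          # program location inside the image
            - --configmap=$(POD_NAMESPACE)/nginx-configuration   # the (possibly blank) ConfigMap
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
```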
138
00:12:38,620 --> 00:12:43,290
We then need a service to expose the ingress controller to the external world.

139
00:12:43,390 --> 00:12:49,780
So we create a service of type NodePort with the nginx-ingress label selector to link the service

140
00:12:49,780 --> 00:12:57,580
to the deployment. As mentioned before, the Ingress controllers have additional intelligence built into

141
00:12:57,610 --> 00:13:03,640
them to monitor the kubernetes cluster for ingress resources and configure the underlying nginx

142
00:13:03,770 --> 00:13:10,810
server when something is changed. But for the ingress controller to do this, it requires a service account

143
00:13:11,080 --> 00:13:17,710
with the right set of permissions. For that, we create a service account with the correct roles and role

144
00:13:17,710 --> 00:13:18,830
bindings.

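The exposing service and the service account from the last two steps could be sketched as below. The object names are assumptions, and the Role/RoleBinding granting watch access on ingress resources is omitted for brevity:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  selector:
    name: nginx-ingress    # links the service to the controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
# bind this account to a Role/ClusterRole that can watch
# ingress resources, via a RoleBinding/ClusterRoleBinding
```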
145
00:13:18,970 --> 00:13:27,040
So to summarize, with a deployment of the nginx-ingress image, a service to expose it, a ConfigMap

146
00:13:27,040 --> 00:13:33,760
to feed nginx configuration data, and a service account with the right permissions to access

147
00:13:33,820 --> 00:13:35,350
all of these objects.

148
00:13:35,350 --> 00:13:42,610
we should be ready with an ingress controller in its simplest form. Now, on to the next part: creating

149
00:13:42,700 --> 00:13:49,900
ingress resources. An ingress resource is a set of rules and configurations applied on the ingress

150
00:13:49,900 --> 00:13:50,800
controller.

151
00:13:50,950 --> 00:13:59,280
You can configure rules to say simply forward all incoming traffic to a single application or route traffic

152
00:13:59,280 --> 00:14:00,640
to different applications,

153
00:14:00,640 --> 00:14:02,100
based on the URL.

154
00:14:02,210 --> 00:14:09,070
So if the user goes to my-online-store.com/wear, then route to one app. Or if

155
00:14:09,070 --> 00:14:16,840
the user visits the /watch URL, then route to the video app. Or you could route users based on the

156
00:14:16,840 --> 00:14:18,260
domain name itself.

157
00:14:18,280 --> 00:14:25,390
For example, if the user visits wear.my-online-store.com, then route to the wear app,

158
00:14:25,570 --> 00:14:32,980
or else route to the video app. Let us look at how to configure these in a bit more detail. The Ingress

159
00:14:32,980 --> 00:14:36,310
Resource is created with a Kubernetes Definition file.

160
00:14:36,430 --> 00:14:42,140
In this case, ingress-wear.yaml. As with any other object,

161
00:14:42,330 --> 00:14:54,670
we have apiVersion, kind, metadata and spec. The apiVersion is extensions/v1beta1, kind is Ingress,

162
00:14:54,830 --> 00:15:03,060
we will name it ingress-wear. And under spec we have backend. So the traffic is, of course, routed

163
00:15:03,060 --> 00:15:07,130
to the application services and not PODs directly.

164
00:15:07,200 --> 00:15:13,290
As you might know already, the backend section defines where the traffic will be routed to.

165
00:15:13,350 --> 00:15:18,270
So if it's a single backend then you don't really have any rules.

166
00:15:18,270 --> 00:15:25,740
You can simply specify the service name and port of the backend wear service. Create the ingress resource

167
00:15:25,770 --> 00:15:32,100
by running the kubectl create command. View the created ingress by running the kubectl

168
00:15:32,220 --> 00:15:39,720
get ingress command.  The new ingress is now created and routes all incoming traffic directly to the

169
00:15:39,720 --> 00:15:46,270
wear-service. You use rules, when you want to route traffic based on different conditions.

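The single-backend ingress-wear.yaml described above might look like this sketch. The service port is an assumption, and extensions/v1beta1 is the apiVersion the lecture uses (newer clusters use networking.k8s.io/v1 with a slightly different schema):

```yaml
apiVersion: extensions/v1beta1   # apiVersion used in the lecture
kind: Ingress
metadata:
  name: ingress-wear
spec:
  backend:                  # no rules: ALL incoming traffic goes to one service
    serviceName: wear-service
    servicePort: 80         # assumed service port
```

Create it with kubectl create -f ingress-wear.yaml and view it with kubectl get ingress.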
170
00:15:46,300 --> 00:15:52,430
For example you create one rule for traffic originating from each domain or hostname.

171
00:15:52,570 --> 00:15:59,350
That means when users reach your cluster using the domain name, my-online-store.com, you can handle

172
00:15:59,350 --> 00:16:06,250
that traffic using rule1. When users reach your cluster using domain name wear.my-online-store

173
00:16:06,250 --> 00:16:14,350
.com, you can handle that traffic using a separate Rule2. Use Rule3 to handle traffic from

174
00:16:14,430 --> 00:16:22,770
watch.my-online-store.com. And say use a 4th rule to handle everything else. Now within each

175
00:16:22,770 --> 00:16:29,970
rule you can handle different paths. For example, within Rule 1 you can handle the wear path to route that

176
00:16:29,970 --> 00:16:36,390
traffic to the clothes application. And a watch path to route traffic to the video streaming application.

177
00:16:36,720 --> 00:16:38,640
And a third path that routes

178
00:16:38,730 --> 00:16:44,060
anything other than the first two to a 404 not found page.

179
00:16:44,250 --> 00:16:50,640
Similarly, the second rule handles all traffic from wear.my-online-store.com.

180
00:16:50,640 --> 00:16:55,380
You can have path definition within this rule, to route traffic based on different paths.

181
00:16:55,380 --> 00:17:01,980
For example, say you have different applications and services within the apparel section for shopping,

182
00:17:02,070 --> 00:17:06,200
or returns, or support, when a user goes to

183
00:17:06,210 --> 00:17:08,570
wear.my-online-store.com/, by default

184
00:17:08,580 --> 00:17:09,900
they reach the shopping page.

185
00:17:09,900 --> 00:17:16,080
But if they go to exchange or support URL, they reach different backend services.

186
00:17:16,080 --> 00:17:22,770
The same goes for Rule 3, where you route traffic from watch.my-online-store.com to the video streaming

187
00:17:22,770 --> 00:17:31,530
application. But you can have additional paths in it such as movies or tv.  And finally anything other

188
00:17:31,530 --> 00:17:35,070
than the ones listed here will go to the fourth rule,

189
00:17:35,070 --> 00:17:39,150
that would simply show a 404 Not Found Error page.

190
00:17:39,150 --> 00:17:47,370
So remember you have rules at the top for each host or domain name and within each rule you have different

191
00:17:47,400 --> 00:17:49,080
paths to route traffic

192
00:17:49,080 --> 00:17:50,450
based on the URL.

193
00:17:50,550 --> 00:17:54,860
Now, let’s look at how we configure ingress resources in Kubernetes.

194
00:17:54,930 --> 00:17:57,370
We will start where we left off.

195
00:17:57,420 --> 00:18:01,260
We start with a similar definition file. This time under spec,

196
00:18:01,260 --> 00:18:03,690
We start with a set of rules.

197
00:18:03,690 --> 00:18:10,170
Now our requirement here is to handle all traffic coming to my-online-store.com and route them based

198
00:18:10,170 --> 00:18:11,830
on the URL path.

199
00:18:12,090 --> 00:18:14,580
So we just need a single rule for this,

200
00:18:14,580 --> 00:18:21,910
since we are only handling traffic to a single domain name, which is my-online-store.com. Under rules

201
00:18:21,970 --> 00:18:29,740
we have one item, which is an http rule in which we specify different paths. So paths is an array of

202
00:18:29,740 --> 00:18:31,280
multiple items.

203
00:18:31,480 --> 00:18:33,020
One path for each

204
00:18:33,020 --> 00:18:36,410
URL. Then we move the backend

205
00:18:36,550 --> 00:18:42,240
we used in the first example under the first path. The backend specification remains the same,

206
00:18:42,280 --> 00:18:45,190
it has a service name and service port.

207
00:18:45,190 --> 00:18:51,310
Similarly, we create a similar backend entry for the second URL path, for the watch-service, to route

208
00:18:51,400 --> 00:18:58,030
all traffic coming in through the /watch url to the watch-service.  Create the ingress resource using

209
00:18:58,030 --> 00:19:00,460
the kubectl create command.

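The two-path rule described above could be sketched like this (same apiVersion as before; the service names follow the lecture, the ingress name and ports are assumptions):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear-watch   # assumed name
spec:
  rules:
    - http:                  # one rule, no host: matches all incoming traffic
        paths:
          - path: /wear
            backend:
              serviceName: wear-service
              servicePort: 80
          - path: /watch
            backend:
              serviceName: watch-service
              servicePort: 80
```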
210
00:19:00,460 --> 00:19:06,620
Once created, view additional details about the ingress resource by running the kubectl

211
00:19:06,670 --> 00:19:08,810
describe ingress command.

212
00:19:08,950 --> 00:19:15,220
You now see two backend URLs under the rules,  and the backend service they are pointing to.

213
00:19:15,220 --> 00:19:17,320
Just as we created it.

214
00:19:17,440 --> 00:19:24,010
Now if you look closely in the output of this command you see that there is something about a default

215
00:19:24,010 --> 00:19:25,840
backend. Hmm.

216
00:19:26,230 --> 00:19:33,160
What might that be?  If a user tries to access a URL that does not match any of these rules,

217
00:19:33,280 --> 00:19:38,200
Then the user is directed to the service specified as the default backend.

218
00:19:38,230 --> 00:19:46,180
In this case it happens to be a service named default-http-backend. So you must remember

219
00:19:46,480 --> 00:19:50,850
to deploy such a service. Back in your application,

220
00:19:50,910 --> 00:19:58,530
say a user visits the URL my-online-store.com/listen or /eat and you don’t have an

221
00:19:58,560 --> 00:20:01,090
audio streaming or a food delivery service.

222
00:20:01,230 --> 00:20:04,020
You might want to show them a nice message.

223
00:20:04,020 --> 00:20:09,420
You can do this by configuring a default backend service to display this 404

224
00:20:09,510 --> 00:20:11,620
Not found error page.

225
00:20:11,730 --> 00:20:16,200
The third type of configuration is using domain names or host names.

226
00:20:16,200 --> 00:20:20,610
We start by creating a similar definition file for ingress.

227
00:20:20,610 --> 00:20:27,030
Now that we have two domain names, we create two rules, one for each domain.

228
00:20:27,270 --> 00:20:29,400
We split traffic by domain name.

229
00:20:29,400 --> 00:20:37,230
We use the host field. The host field in each rule matches the specified value with the domain name used

230
00:20:37,260 --> 00:20:42,530
in the request URL and routes traffic to the appropriate backend.

231
00:20:42,540 --> 00:20:47,580
Now remember in the previous case we did not specify the host field.

232
00:20:48,090 --> 00:20:55,290
If you don't specify the host field, it will simply consider it as a star, or accept all the incoming

233
00:20:55,290 --> 00:21:02,330
traffic through that particular rule without matching the hostname in this case.

234
00:21:02,330 --> 00:21:07,640
Note that we only have a single backend path for each rule, which is fine.

235
00:21:07,640 --> 00:21:13,490
All traffic from these domain names will be routed to the appropriate backend irrespective of the

236
00:21:13,490 --> 00:21:15,180
URL path.

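Splitting by hostname could be sketched as two rules, each with a host field and a single backend path (service names per the lecture, the ingress name and ports assumed):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear-watch   # assumed name
spec:
  rules:
    - host: wear.my-online-store.com
      http:
        paths:
          - backend:
              serviceName: wear-service
              servicePort: 80
    - host: watch.my-online-store.com
      http:
        paths:
          - backend:
              serviceName: watch-service
              servicePort: 80
```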
237
00:21:15,350 --> 00:21:22,010
You can still have multiple path specifications in each of these to handle different URL paths as

238
00:21:22,010 --> 00:21:27,870
we saw in the example earlier. So let's compare the two.

239
00:21:28,100 --> 00:21:35,240
To split traffic by URL, we had just one rule and we split the traffic with two paths. To split traffic

240
00:21:35,240 --> 00:21:36,410
by hostname.

241
00:21:36,410 --> 00:21:42,300
We used two rules and one path specification in each rule.

242
00:21:42,330 --> 00:21:44,670
Well that's it for this lecture.

243
00:21:44,730 --> 00:21:50,390
Let us now head over to the practice test section and practice working on ingress.

244
00:21:50,470 --> 00:21:53,420
Now there are two types of labs in this section.

245
00:21:53,440 --> 00:22:00,940
The first one is where an ingress controller, resources and applications are already deployed, and you

246
00:22:00,940 --> 00:22:08,520
basically view and walk through the environment, gather data, and answer questions towards the end.

247
00:22:08,550 --> 00:22:16,400
You would create or modify ingress resources based on the needs. The second practice test is

248
00:22:16,400 --> 00:22:22,700
a bit more challenging, and that is where you will be deploying an ingress controller and resources

249
00:22:22,700 --> 00:22:23,540
from scratch.

250
00:22:24,820 --> 00:22:30,600
Well, good luck! I hope you enjoy the labs, and I will see you in the next lecture.
