1
00:00:01,640 --> 00:00:08,570
Now, let's continue with our testing strategy and make some optimizations. We will now remove

2
00:00:10,110 --> 00:00:15,690
this command, because we don't want our tests to fail, we actually want them to succeed. And we will add the quiet

3
00:00:15,690 --> 00:00:23,190
flag to grep in order not to have this large output of data in our logs, which is definitely

4
00:00:23,190 --> 00:00:24,260
not interesting for us.

5
00:00:24,930 --> 00:00:32,070
As previously mentioned, the test artifact job is now using the default image from Docker.

6
00:00:33,150 --> 00:00:37,400
And we can leave it that way or we can specify our own image.

7
00:00:38,100 --> 00:00:43,710
There's a Linux distribution out there that is very, very minimal, it's only five megabytes in size,

8
00:00:43,710 --> 00:00:44,630
and it's quite fast.

9
00:00:44,640 --> 00:00:45,870
It's called Alpine.

10
00:00:46,320 --> 00:00:51,840
And let's give it a try for this job to see if the job succeeds using this much smaller image.

11
00:00:53,170 --> 00:00:58,390
Now, in order to test this artifact, we do not need Node installed or anything else, so for that

12
00:00:58,390 --> 00:01:05,230
reason, we can simply try out this image and see if at least this utility grep is installed, because

13
00:01:05,230 --> 00:01:06,940
this is the only thing that we actually need.

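The test artifact job described so far could look roughly like the following sketch (the job name, the searched string, and the file path are assumptions based on the transcript, not shown verbatim in the video):

```yaml
test artifact:
  image: alpine        # minimal (~5 MB) image; grep is already included
  stage: test
  script:
    # -q suppresses grep's matched output in the logs;
    # the job still fails if the string is not found
    - grep -q "Gatsby" ./public/index.html
```
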
14
00:01:13,720 --> 00:01:20,410
Additionally, let's go ahead and add another job that tests the possibility of starting a server.

15
00:01:21,400 --> 00:01:29,650
And when we're starting a server, we can use HTTP in order to access the website and to see how it

16
00:01:29,770 --> 00:01:30,280
looks.

17
00:01:31,240 --> 00:01:33,190
So this can look like the following.

18
00:01:34,120 --> 00:01:35,800
So let's call this test website.

20
00:01:42,020 --> 00:01:47,930
And we will add this to the test stage as well, and you will later see what effect this has.

21
00:01:54,070 --> 00:01:56,950
And now let's see what we need in order to be able to start this.

22
00:01:58,300 --> 00:02:05,080
So we are going to use gatsby serve, and this will start the server using the production build that

23
00:02:05,080 --> 00:02:06,680
we have previously generated.

24
00:02:07,150 --> 00:02:10,830
Now, in order for Gatsby to run, we still need a couple of steps.

25
00:02:10,840 --> 00:02:15,330
So we still need npm install and we still need to install the Gatsby CLI.

26
00:02:15,550 --> 00:02:16,810
Otherwise, this will not work.

27
00:02:19,430 --> 00:02:21,020
And additionally, we need the following.

28
00:02:22,640 --> 00:02:27,650
We want to use curl, and this is used for fetching resources.

29
00:02:29,410 --> 00:02:36,660
And the resource that we are actually fetching is the local website, so gatsby serve will start a server,

30
00:02:36,760 --> 00:02:40,730
it will be available on localhost, usually on port nine thousand.

31
00:02:41,260 --> 00:02:43,000
So let's see how that can look.

32
00:02:45,310 --> 00:02:52,480
So curl will basically download the website, will download the contents of this website, and we can

33
00:02:52,480 --> 00:02:58,900
use what we have previously used, we can simply grep for this string.

34
00:03:00,260 --> 00:03:07,070
When chaining commands, we can use the pipe operator, so the output from downloading the website will be

35
00:03:07,070 --> 00:03:10,470
piped through, meaning it will be given as input to the grep command.

36
00:03:10,510 --> 00:03:15,660
So we do not have to specify a file in which we want to search for the string.

37
00:03:16,040 --> 00:03:20,840
So the downloaded website is the input, and it is given to grep.

38
00:03:21,290 --> 00:03:25,700
And if grep fails, then this will fail as well.

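The test website job built up in the steps above could be sketched like this (the sleep before curl, the port, and the searched string are assumptions for illustration):

```yaml
test website:
  image: node          # Node is needed for npm and the Gatsby CLI
  stage: test
  script:
    - npm install
    - npm install -g gatsby-cli
    - gatsby serve &   # start the server in the background
    - sleep 3          # give the server a moment to come up
    # curl downloads the page; its output is piped into grep,
    # and if grep finds nothing, the job fails
    - curl "http://localhost:9000" | grep -q "Gatsby"
```
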
39
00:03:27,290 --> 00:03:32,750
There's one more thing that I need to do: I need to specify the image, since we still need Node for this.

40
00:03:33,530 --> 00:03:36,920
And after this, let's give it a try and see how it looks.

41
00:03:38,180 --> 00:03:44,600
One of the first things that you will notice if you're now looking at this new pipeline is that we have

42
00:03:44,600 --> 00:03:46,330
two jobs running in parallel.

43
00:03:47,330 --> 00:03:54,650
And this is because we have created the test artifact job and a test website job and we have assigned

44
00:03:54,650 --> 00:03:56,990
them to the same stage, named test.

45
00:03:57,650 --> 00:04:03,750
And when we do that, we are actually telling GitLab that it should run these jobs in parallel.

46
00:04:04,400 --> 00:04:10,790
Now, in this specific case, running the jobs in parallel will probably not make the pipeline run faster,

47
00:04:10,940 --> 00:04:14,390
because it will have the overhead of downloading the Docker image.

48
00:04:14,810 --> 00:04:20,720
But there are scenarios when it definitely does make sense to run different tests in parallel, because

49
00:04:20,720 --> 00:04:26,330
if their execution is long enough and you run different tests in parallel, then altogether they will

50
00:04:26,330 --> 00:04:29,780
be much faster and your pipeline as a whole will be much faster.

51
00:04:30,350 --> 00:04:34,780
If you're interested in sort of optimizing your process, then this is a good idea.

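The parallelism described here comes purely from stage assignment; a minimal sketch (stage and job names follow the transcript, the script lines are placeholders):

```yaml
stages:
  - build
  - test

build website:
  stage: build         # runs first; test jobs wait for it
  script:
    - gatsby build

test artifact:
  stage: test          # same stage as test website -> runs in parallel
  script:
    - grep -q "Gatsby" ./public/index.html

test website:
  stage: test
  script:
    - echo "runs at the same time as test artifact"
```
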
52
00:04:35,510 --> 00:04:38,930
I have to say that not every job can be parallelized.

53
00:04:39,140 --> 00:04:42,840
And especially if there are dependencies between two jobs, you cannot do it.

54
00:04:42,860 --> 00:04:50,960
So, for example, we cannot run the build step and the test step in parallel, because in order to test

55
00:04:50,960 --> 00:04:53,240
the artifact, we first have to build the artifact.

56
00:04:53,250 --> 00:04:57,800
So if there are any dependencies between the jobs, then we cannot do that.

57
00:04:58,100 --> 00:05:03,620
But as we have it in this scenario, we are building the website, which gives us the artifact, and

58
00:05:03,620 --> 00:05:05,180
then we're doing something with the artifact.

59
00:05:05,180 --> 00:05:11,240
The one job is simply testing its output, testing some files, and the other one is starting a server and

60
00:05:11,240 --> 00:05:11,900
doing something.

61
00:05:12,230 --> 00:05:15,890
Then we have no problem with that; that is definitely possible.

62
00:05:17,830 --> 00:05:24,070
And you will see now GitLab is executing both jobs in the test stage in parallel, and the first one

63
00:05:24,070 --> 00:05:25,210
is actually done.

64
00:05:26,300 --> 00:05:28,160
And you will see that it was quite fast.

65
00:05:29,810 --> 00:05:36,440
So using the Alpine image was definitely worth it, because we have executed the entire job in only

66
00:05:36,440 --> 00:05:39,220
20 seconds and that is absolutely amazing.

