1
00:00:03,840 --> 00:00:10,980
Another layer of testing that we might consider adding to our pipeline is API tests. API testing

2
00:00:10,980 --> 00:00:16,680
is a type of testing that ensures that the API exposed by an application is working properly.

3
00:00:17,170 --> 00:00:19,500
Our application exposes this API.

4
00:00:19,770 --> 00:00:24,780
You can see here, for example, that the endpoint /cars will give back this information in a JSON format

5
00:00:24,780 --> 00:00:30,180
with the list of cars. The application also provides other endpoints to add a car, to get a single car, to delete

6
00:00:30,180 --> 00:00:34,860
a car, to get some statistic information, and so on. At a high level,

7
00:00:34,890 --> 00:00:39,480
this is the functionality that this application provides. It doesn't have a graphical interface.

8
00:00:39,600 --> 00:00:43,440
It only provides this API. From a testing point of view,

9
00:00:43,440 --> 00:00:46,860
we need to make sure that the API is actually working properly.

10
00:00:47,120 --> 00:00:51,960
Otherwise, we are not sure if the application works or not, because just based on the unit tests,

11
00:00:51,960 --> 00:00:57,480
for example, we cannot tell if the application actually works or if these endpoints provide this information.

12
00:00:58,050 --> 00:01:04,319
Since we have used Postman in order to interact with the API, we can also use Postman to write some very,

13
00:01:04,319 --> 00:01:06,140
very simple API tests.

14
00:01:06,150 --> 00:01:09,780
And even if you are not familiar with Postman, I'm sure you'll be able to follow along.

15
00:01:10,230 --> 00:01:15,420
You will get the collection with all the tests as a resource from this lecture and you can look at it

16
00:01:15,420 --> 00:01:16,290
and play around with it.

17
00:01:16,530 --> 00:01:19,290
But let me show you a very simple example with a test.

18
00:01:20,440 --> 00:01:26,680
In this case, I'm getting all the cars and the fundamental thing that we can test in this situation

19
00:01:26,680 --> 00:01:33,490
is if the status code was 200. For example, if I'm just calling something like /car, I'll get a 404 Not

20
00:01:33,490 --> 00:01:35,880
Found, because this page doesn't exist.

21
00:01:35,890 --> 00:01:39,880
This API endpoint doesn't exist. For the cars endpoint,

22
00:01:39,880 --> 00:01:41,070
I'm getting a 200.

23
00:01:41,080 --> 00:01:43,350
OK, I'm going here to the Tests

24
00:01:43,360 --> 00:01:49,030
tab. We can look at existing snippets that you can open from the right side here in case they are not

25
00:01:49,030 --> 00:01:53,080
opened and we can test for something like status is 200.

26
00:01:53,590 --> 00:01:58,680
What it does is insert this piece of code here, which is actually a test, a very simple test.

27
00:01:58,990 --> 00:02:01,510
This will ensure that the status code is 200.

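For reference, the snippet that Postman inserts for this check is a small piece of JavaScript. It runs inside Postman's test sandbox (the pm object exists only there, so this is not standalone Node.js code):

```javascript
// "Status code: Code is 200" snippet from the Tests tab.
// Runs in the Postman sandbox after the response arrives.
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});
```
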
28
00:02:02,710 --> 00:02:08,560
In case you want to learn more about API testing with Postman, I'll attach a resource to a video which

29
00:02:08,560 --> 00:02:11,770
introduces you to Postman and API testing overall.

30
00:02:11,780 --> 00:02:13,930
What is an API and how to get started?

31
00:02:14,110 --> 00:02:19,840
But to keep this section as short and as focused as possible, I'm just going to go into the absolute

32
00:02:19,840 --> 00:02:20,510
basics.

33
00:02:20,530 --> 00:02:24,280
We'll just kind of focus on the fundamentals and not go into many details.

34
00:02:24,970 --> 00:02:30,760
I'm going to save the tests and what I can do then is to go over this collection that I have here and

35
00:02:30,760 --> 00:02:31,750
to export it.

36
00:02:32,680 --> 00:02:37,450
Now I'm going to select the latest version that is available and hit Export.

37
00:02:38,800 --> 00:02:45,010
I navigated to the project path and saved the collection inside the folder of the project.

38
00:02:46,650 --> 00:02:53,340
If you go back to IntelliJ, you'll be able to see here the Cars API Postman collection. You may also

39
00:02:53,340 --> 00:02:58,530
decide to create another folder within the project and to store there multiple collections, or however

40
00:02:58,530 --> 00:03:00,000
you decide to organize your code.

41
00:03:02,380 --> 00:03:08,560
In order to actually execute the tests that are in the collection, we need to use the CLI. Postman

42
00:03:08,560 --> 00:03:14,530
itself does not have a CLI tool, so we need to use a companion tool that comes from the Postman

43
00:03:14,530 --> 00:03:15,000
project.

44
00:03:15,010 --> 00:03:15,970
It's called Newman.

45
00:03:15,970 --> 00:03:18,220
And you can use Newman from the CLI.

46
00:03:18,520 --> 00:03:24,800
Now, I already have a Docker image that has Newman installed, so we can just jump in and use that Docker

47
00:03:24,820 --> 00:03:26,680
image without worrying too much about it.

48
00:03:26,680 --> 00:03:30,760
But I just wanted you to know what Newman is and what its relationship with Postman is.

49
00:03:32,240 --> 00:03:36,230
So let's go ahead and add the stage for the API tests.

50
00:03:37,940 --> 00:03:43,730
Now, the idea would be to test an application that has already been deployed to a server. In this case,

51
00:03:43,730 --> 00:03:46,860
the only server that we have is actually the production server.

52
00:03:47,300 --> 00:03:50,720
This kind of test will not warn us in advance

53
00:03:50,720 --> 00:03:55,970
if there's something wrong with our application; it will warn us when it is a bit too late.

54
00:03:56,240 --> 00:04:01,310
But it is still important sometimes to have post-deployment tests, just to ensure that the application is

55
00:04:01,310 --> 00:04:02,240
working properly.

56
00:04:02,390 --> 00:04:05,690
And because we do not have any environment in between,

57
00:04:05,720 --> 00:04:08,900
as we do not have a testing or a preproduction environment,

58
00:04:08,930 --> 00:04:12,580
we're going to run these tests after the deployment to the production server.

59
00:04:12,830 --> 00:04:18,450
But ideally, we would run such tests on a different environment, before we deploy to production.

60
00:04:19,459 --> 00:04:22,190
With that being said, let's add another stage.

61
00:04:23,640 --> 00:04:26,040
I'm going to call this stage post-deployment.

62
00:04:26,990 --> 00:04:35,000
It goes after deploy, so that we know that this happens after the deployment. And then we can add a new job: API testing.

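In the .gitlab-ci.yml, this translates into one extra entry in the stage list. The earlier stage names below are placeholders for whatever the pipeline already defines:

```yaml
stages:
  - build            # placeholder: existing stages stay as they are
  - test
  - deploy
  - post-deployment  # new stage, runs after the production deploy
```
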
63
00:04:39,620 --> 00:04:41,550
This will be part of the post-deployment stage.

64
00:04:43,530 --> 00:04:49,560
And we're going to use an image that has been created by me, and it also contains some additional reporters

65
00:04:49,560 --> 00:04:52,990
that are very useful when using Newman and Postman.

66
00:04:53,430 --> 00:04:55,950
This is why the name is vdespa

67
00:04:56,900 --> 00:05:01,210
slash newman. Additionally, I'm going to specify an empty entrypoint.

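As a sketch, assuming the image name is vdespa/newman (as the audio suggests), the job definition so far would look something like:

```yaml
api_testing:
  stage: post-deployment
  image:
    name: vdespa/newman   # Newman plus extra reporters preinstalled
    entrypoint: [""]      # empty entrypoint so the runner can execute the script
```
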
68
00:05:06,160 --> 00:05:12,670
When it comes to the script itself, that's quite easy to execute. The first idea would be to always

69
00:05:12,790 --> 00:05:14,020
run newman

70
00:05:14,020 --> 00:05:18,150
--version, and what this will do is print in the console

71
00:05:18,160 --> 00:05:19,900
the current version of Newman.

72
00:05:20,350 --> 00:05:23,290
This observation is valid for any tools that we use.

73
00:05:23,290 --> 00:05:28,420
It's always a good idea to put in the console the version that we are using, not necessarily because

74
00:05:28,420 --> 00:05:33,400
it provides value right now, but just in case all of a sudden something stops working.

75
00:05:33,640 --> 00:05:39,280
We would know exactly which version has been used, because in this case we have not specified a fixed

76
00:05:39,280 --> 00:05:40,870
version of this Docker image.

77
00:05:41,020 --> 00:05:46,360
And you can see that we haven't done that for any of the images that we have used, which means that

78
00:05:46,360 --> 00:05:52,930
if the author or the image itself changes, the versions of the tools that you are using may change from one pipeline

79
00:05:52,930 --> 00:05:54,520
to the next.

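If you do want reproducible pipelines, the fix is to pin the image to a fixed tag instead of the implicit latest. The tag below is purely illustrative; check the registry for the tags that actually exist:

```yaml
image:
  name: vdespa/newman:some-tag  # illustrative tag only; pick a real one from the registry
  entrypoint: [""]
```
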
80
00:05:54,790 --> 00:05:59,950
Typically, this will not cause any issues because most of the time these tools are pretty stable and

81
00:05:59,950 --> 00:06:04,630
there are no major changes. But in case something changes, you would at least be able to tell:

82
00:06:04,750 --> 00:06:08,110
Nothing is wrong with my pipeline itself or with the tests that I'm running.

83
00:06:08,110 --> 00:06:13,120
Something has changed in the tool. This makes it easier to compare: the previous pipeline that

84
00:06:13,120 --> 00:06:16,450
worked used Newman in version, I don't know, let's say six.

85
00:06:16,450 --> 00:06:20,890
And now your pipeline uses Newman in version seven, which is obviously a difference.

86
00:06:21,130 --> 00:06:23,140
So you can go to the Newman GitHub repository

87
00:06:23,140 --> 00:06:26,230
and check if there are any issues or if anybody else is having the same problem.

88
00:06:26,890 --> 00:06:28,050
The next step is quite easy.

89
00:06:28,100 --> 00:06:30,190
I'm going to simply use newman run.

90
00:06:30,190 --> 00:06:34,510
And then we have to specify the name of the collection. Because it contains a space,

91
00:06:34,510 --> 00:06:39,820
it is always a good idea to put it between double quotes, so that it's clear what the path to this

92
00:06:39,820 --> 00:06:45,460
collection is and nothing in between somehow generates any errors. As it stands currently,

93
00:06:45,460 --> 00:06:48,890
it will not work, because we have used two different things in Postman.

94
00:06:48,910 --> 00:06:54,160
We have used the collection itself, but we have also used environments and we need to specify here

95
00:06:54,160 --> 00:06:54,880
the environment.

96
00:06:54,880 --> 00:06:56,300
Otherwise that will not work.

97
00:06:57,040 --> 00:07:01,240
So going back to Postman, I'm going to click here on Manage Environments.

98
00:07:01,720 --> 00:07:06,100
And as I did with the collection, I can export the environment as a file as well.

99
00:07:06,280 --> 00:07:09,070
So I'm going to export the production environment this time.

100
00:07:09,070 --> 00:07:12,580
And just as previously, I'm going to put it in the root of the project.

101
00:07:12,970 --> 00:07:16,780
And it's always a good idea to just copy the name because we're going to use it right away.

102
00:07:18,070 --> 00:07:22,890
Now, in this CLI command, we are also going to specify the environment. That will be --

104
00:07:25,180 --> 00:07:29,980
environment.

104
00:07:25,180 --> 00:07:29,980
And then the environment name, not using double quotes, because there are no spaces here, so there's

105
00:07:29,980 --> 00:07:31,500
no chance of something going wrong.

106
00:07:32,640 --> 00:07:35,310
Finally, I'm going to specify two reporters.

107
00:07:37,160 --> 00:07:42,440
The first will be cli, and the second reporter is htmlextra.

108
00:07:43,780 --> 00:07:50,080
The cli reporter will ensure that if you look at the console from this job, everything will be written

109
00:07:50,080 --> 00:07:56,920
down there as well, while the htmlextra reporter will generate, well, an HTML report that

110
00:07:56,920 --> 00:07:59,260
can help us understand what's going on with a request.

111
00:08:00,370 --> 00:08:05,380
So because we are generating an HTML report, we're going to add an additional parameter. With

112
00:08:05,380 --> 00:08:12,370
this configuration, we are going to tell Newman where to export this file, and we want to put it inside the newman

113
00:08:12,370 --> 00:08:12,930
folder.

114
00:08:14,690 --> 00:08:17,720
And the name of the report should be report.html.

115
00:08:20,130 --> 00:08:23,820
An additional reporter that we will use is junit.

116
00:08:25,100 --> 00:08:30,710
For junit, again, we're going to specify a path where this report should be saved. It will be pretty similar

117
00:08:30,710 --> 00:08:35,450
to what we have used for the htmlextra report, just with a different file extension.

118
00:08:36,169 --> 00:08:39,020
The report will be report.xml instead of report.html.

119
00:08:39,830 --> 00:08:45,050
Finally, we need to tell GitLab as well about these reports that we have generated here.

120
00:08:45,380 --> 00:08:49,440
And the way this works is pretty similar to what we have done before.

121
00:08:49,490 --> 00:08:54,050
I have copied the artifacts from the unit tests and quickly adapted them.

122
00:08:55,700 --> 00:08:59,330
The rule will stay the same: we always generate these artifacts and save them.

123
00:08:59,750 --> 00:09:05,000
Everything will be inside the newman folder, so there's nothing else that we need to add there.

124
00:09:05,450 --> 00:09:11,270
And additionally, we are generating this JUnit report, and we know the path to the report.

125
00:09:11,430 --> 00:09:12,640
This will be quite easy.

126
00:09:13,880 --> 00:09:17,000
So this will be newman/report.xml.

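Putting all of these flags together, the complete job might look roughly like the sketch below. The collection and environment file names are assumptions based on the exports described here, so adjust them to your own files:

```yaml
api_testing:
  stage: post-deployment
  image:
    name: vdespa/newman
    entrypoint: [""]
  script:
    - newman --version
    # assumed export file names; quotes handle the space in the collection name
    - >
      newman run "Cars API.postman_collection.json"
      --environment Production.postman_environment.json
      --reporters cli,htmlextra,junit
      --reporter-htmlextra-export newman/report.html
      --reporter-junit-export newman/report.xml
  artifacts:
    when: always          # save the reports even if a test fails
    paths:
      - newman/
    reports:
      junit: newman/report.xml
```
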
127
00:09:17,560 --> 00:09:24,840
It seems that we have everything in place for API testing on the production system, so let's save this configuration,

128
00:09:24,840 --> 00:09:27,080
run the pipeline, and see how everything looks.

129
00:09:29,000 --> 00:09:37,040
Now, the pipeline includes an additional stage, post-deployment, and also API testing that is done on

130
00:09:37,040 --> 00:09:44,160
the server. Because we have specified the JUnit report format, it has been picked up by GitLab.

131
00:09:44,450 --> 00:09:46,250
You can see it here under Tests.

132
00:09:46,250 --> 00:09:50,090
And now apart from unit tests, we also have API testing.

133
00:09:50,300 --> 00:09:50,930
You will see here

134
00:09:50,930 --> 00:09:55,340
that there's only one test, because we have only written one test.

135
00:09:55,700 --> 00:09:58,900
But this still shows up here and can be scaled without any issues.

136
00:10:00,140 --> 00:10:06,590
Additionally, going back to the pipeline, we can see the API testing job, and we also have the CLI report

137
00:10:06,590 --> 00:10:10,630
here, where we see that there was one assertion; this is the test that we have written.

138
00:10:10,850 --> 00:10:11,720
It was successful.

139
00:10:11,720 --> 00:10:12,770
Nothing has failed.

140
00:10:12,980 --> 00:10:15,430
All the requests worked without any issues.

141
00:10:15,440 --> 00:10:17,460
And this is the command that we have executed.

142
00:10:17,480 --> 00:10:18,730
This is the Newman version.

143
00:10:19,160 --> 00:10:21,980
Everything works fine from this point of view.

144
00:10:22,280 --> 00:10:26,300
Additionally, what we can do is to look at the job artifacts here.

145
00:10:26,300 --> 00:10:29,200
We will be able to find the HTML report as well.

146
00:10:33,060 --> 00:10:38,940
And this is how the report looks: it contains a lot of information regarding what has been executed,

147
00:10:38,950 --> 00:10:40,280
what the total number of requests was, and so on.

148
00:10:40,740 --> 00:10:46,370
It's a very nice report to have saved as part of the pipeline run.

