Semantic memory is organized as knowledge networks, so let's use a simple example to clarify and explain this.

What is going on in our minds when we see this type of situation? We have a kid sitting on a bench, eating an ice cream together with a puppy. Right.

First, the picture, or the situation, is encoded into iconic memory in this case, because it's mostly a visual thing. Of course, there are audio elements in this as well, but for this example let's focus on the iconic stuff.

So we see this situation. We hold it for, as I said, roughly half a second, and then we start processing it directly in working memory, of course. When it comes into working memory, we try to use long-term memory to understand it semantically. So we are retrieving nodes of information, nodes of knowledge, from semantic memory. We are starting to form a mental picture that says: this is a kid and a puppy sitting on a bench, eating an ice cream together. Right.

So we have activated four different nodes in this semantic network. We have activated the notion of a bench: we know what a bench is. We know what a kid is, we know what an ice cream is, and we know what a puppy is. And I should say that I've been using a picture of a kid here just to exemplify that.
Of course, this knowledge is not just encoded as words in long-term memory; it could be encoded in a number of different ways. And I don't know if we actually know exactly how the information is encoded, but we retrieve these specific nodes.

But the nodes as such are not separate in semantic memory; that's the whole point of semantic memory. The nodes are actually connected into a larger network of other nodes. So we know, for example, that kids like ice cream. Likewise, a puppy is an animal, and a puppy is also a pet. And animals actually like food, and ice cream, up to a point, is food. Maybe a parent wouldn't say that, but a kid probably would. A puppy and a cat are both pets, and pets can actually be part of a family, so a pet could be a family member. A kid is also a family member, and a family can contain them both, pets and kids, for example. And usually, people can sit on benches. So semantic memory contains this complete network of nodes and the links between them.

When we start working on them, we start retrieving the nodes up into working memory. Of course, we are also using the other long-term memories. We are, for example, getting information from episodic memory, and episodic memory holds specific situations that I have been part of and that I remember.
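That network of nodes and labeled links can be sketched in code. This is only a toy illustration of the idea, not a claim about how the brain actually encodes it; the node and relation names are made up from the example above.

```python
# A toy sketch of semantic memory as a network of nodes with labeled links.
# All names and relations here are illustrative, taken from the lecture example.
semantic_memory = {
    "kid":       {"is_a": ["person", "family member"], "likes": ["ice cream"]},
    "puppy":     {"is_a": ["animal", "pet"], "likes": ["food"]},
    "pet":       {"can_be": ["family member"]},
    "ice cream": {"is_a": ["food"]},
    "bench":     {"used_by": ["person"]},
}

def related(node):
    """Return every node directly linked to `node`, regardless of link label."""
    links = semantic_memory.get(node, {})
    return {target for targets in links.values() for target in targets}

print(sorted(related("puppy")))  # → ['animal', 'food', 'pet']
```

The point of the structure is exactly what the lecture describes: no node stands alone, so retrieving "puppy" immediately exposes "animal", "pet", and "food" as candidates for further retrieval.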
So I remember, for example, exactly the dog I had myself when I was a kid, or I remember being out and more or less being pulled along by a larger dog: a specific situation. These memories are not semantic and decontextualized; they are actually contextualized. Semantic memory is kind of: you don't know exactly why you know it, you just know it. Whereas with episodic memory, you have specific knowledge about specific episodes from your life.

And then we have procedural memory. So you know things about, for example, how to approach a dog so it won't get angry, or how you ride a bicycle, or how you tie your shoelaces, and so on. They are procedures, like how you play the violin or play the piano. You probably cannot say verbally how you do it; you just know.

But the interesting thing after this is that once you have started activating the nodes that you saw and retrieved from your sensory image, you start activating the links. So let's say now that we have activated those four nodes: the puppy, the ice cream, the kid and the bench. What happens now is that working memory automatically starts retrieving the other nodes that are linked to those nodes. And that is called spreading activation: the activation spreads.

Another thing which is very interesting here. So this is what happens when we are trying to retrieve existing knowledge from long-term memory into working memory.
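The spreading-activation step described above can be sketched as a small graph traversal. This is a minimal, assumed model: the link structure, the decay factor, and the activation threshold are all invented for illustration, not taken from any specific psychological theory.

```python
from collections import deque

# Minimal sketch of spreading activation: the four nodes retrieved from the
# sensory image start fully active, and activation spreads along links,
# weakening by a decay factor at each hop until it falls below a threshold.
# The link structure and the numbers are illustrative assumptions.
links = {
    "kid": ["ice cream", "family member", "bench"],
    "puppy": ["pet", "food"],
    "ice cream": ["food"],
    "pet": ["family member"],
    "bench": [],
    "food": [],
    "family member": [],
}

def spread(seeds, decay=0.5, threshold=0.2):
    """Spread activation outward from the seed nodes; return activation levels."""
    activation = {node: 1.0 for node in seeds}
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        passed = activation[node] * decay
        if passed < threshold:
            continue  # too weak to spread further
        for neighbor in links.get(node, []):
            if passed > activation.get(neighbor, 0.0):
                activation[neighbor] = passed
                queue.append(neighbor)
    return activation

act = spread(["kid", "puppy", "ice cream", "bench"])
```

With these numbers, the seed nodes stay at 1.0, while linked nodes such as "pet", "food" and "family member" come up at a weaker level (0.5), which matches the intuition that directly linked knowledge becomes available without being the focus of attention.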
But what happens when we are trying to encode knowledge into long-term memory from working memory? The interesting thing here is that knowledge is not automatically encoded just because we think of it or have it in working memory. We intentionally have to try to acquire this knowledge, and that can be done in different ways, and the way we do it will actually affect how well organized and how persistent it is.

So let's take this very simple example again. This was research done by, among others, Craik and Tulving, and Bransford, in the seventies. They gave a group complex material to read or to learn. And directly after they had done that, and said that they were done with it, they were tested. So they were probed: they were given trick questions that should reveal whether they actually knew the material they had just read or tried to learn. And then they were probed again some time later, a couple of weeks or a couple of months after this occasion.

And the interesting thing that they saw was that the ones who got the meaning from the beginning had much better memory when they were probed again. Hence, the ones who had integrated the knowledge from the very beginning, from the first time they tried to learn it, recalled better.
That means that if you are working on your knowledge in working memory, you both retrieve information from semantic memory and actually create new memories in semantic memory.

You can do that in different ways. You can do it in a very shallow way, for example rote learning or mechanical rehearsal: just repeating "five plus five is ten, five plus five is ten, five plus five is ten", and so on. That's rote learning, mechanical learning. You can do it in a somewhat more intermediate way, where you try to pay attention to the concrete situation, with its sounds and appearances: exactly how it sounds when you say this specific "five plus five is ten". And you can do it in a deep-processing way. That means that you pay attention to the semantics: you pay attention to what addition is about, you pay attention to arithmetic. And then you can determine, in any situation where you need it, that five plus five is actually ten.

So those are three very different ways of doing this. And the interesting thing that they found out was that if we do it in a shallow way, we get poor persistence: we cannot recall that knowledge after a long time period has passed. Whereas if we do it using deep processing, trying to link this into the semantic memory, then it sticks.
So we're creating new links to existing knowledge, and creating new nodes and also new links. Then we have the best persistence.

And this is very interesting: it isn't enough just to search around for meaning. You have to find the meaning. You have to actually link the new knowledge into your existing memory and find the meaning of that new knowledge in the context of your existing knowledge. Then it sticks.