Source: Osaka Metropolitan University

How do we understand words? Scientists don’t fully understand what happens when a word pops into your brain. A research team led by Professor Shogo Makioka at the Graduate School of Sustainable System Sciences, Osaka Metropolitan University, wanted to test the concept of embodied knowledge. Embodied cognition proposes that people understand words for objects through the ways they interact with those objects, so the researchers devised a test to observe the semantic processing of words when the ways in which participants could interact with objects were limited.

Words are defined in relation to other words: a “cup,” for example, can be “a container, made of glass, used for drinking.” However, you can only use a cup if you understand that, to drink from a cup of water, you hold it in your hand and bring it to your mouth, or that if you drop the cup, it will break on the floor. Without understanding this, it would be difficult to create a robot that can handle a real cup. In artificial intelligence research, this is known as the symbol grounding problem: how to map symbols onto the real world. How do people achieve symbol grounding? Cognitive psychology and cognitive science propose the concept of embodied cognition, in which objects are given meaning through interactions with the body and the environment.

To test embodied cognition, the researchers conducted experiments to see how participants’ brains responded to words describing objects that can be manipulated by hand, comparing trials in which the participants’ hands were free to move with trials in which they were restrained.

“It was very difficult to establish a method for measuring and analyzing brain activity. The first author, Ms. Sae Onishi, worked hard to come up with a task, in a way that we were able to measure brain activity with sufficient accuracy,” Professor Makioka explained.

In the experiment, two words such as “cup” and “broom” were presented to participants on a screen. They were asked to compare the relative sizes of the objects these words represented and to answer verbally which object was larger (in this case, “broom”).

When asked which word represented the larger object, participants responded faster when their hands were free (left) than when their hands were restrained (right). Hand restraint also reduced brain activity during word processing for manipulable objects in the left-hemisphere areas associated with tools. Credit: Makioka, Osaka Metropolitan University

Comparisons were made between words describing two types of objects, manipulable objects such as “cup” or “broom” and non-manipulable objects such as “building” or “lamppost,” to observe how each type was processed. During testing, participants placed their hands on a desk, where they were either free to move or restrained by a transparent acrylic plate. When the two words were presented on the screen, participants had to think about both objects and compare their sizes to answer which one was larger, forcing them to process the meaning of each word.

Brain activity was measured with functional near-infrared spectroscopy (fNIRS), which has the advantage of taking measurements without imposing further physical constraints. Measurements focused on the left intraparietal sulcus and the left inferior parietal lobule (supramarginal gyrus and angular gyrus), regions responsible for tool-related semantic processing. Verbal response speed was measured to determine how quickly each participant responded after the words appeared on the screen.
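To make the trial structure concrete, here is a minimal sketch, in Python, of how a single size-judgment trial with verbal reaction-time logging could be organized. It is illustrative only: the word pairs, the console-based presentation, and the `get_verbal_response` placeholder are assumptions of this article, not details of the study, which presented stimuli on a screen and timed spoken answers.

```python
import time

# Hypothetical word pairs; the study's actual stimulus set is not reproduced here.
TRIALS = [
    ("cup", "broom"),          # hand-manipulable objects
    ("building", "lamppost"),  # non-manipulable objects
]

def get_verbal_response():
    """Placeholder for speech-onset detection.

    The experiment would detect the onset of the spoken answer with a
    microphone; this sketch simply waits for a typed answer instead.
    """
    input("Name the larger object, then press Enter: ")

def run_trial(word_a, word_b):
    """Show one word pair and return the reaction time in seconds."""
    print(f"\n  {word_a}        {word_b}")
    onset = time.perf_counter()   # stimulus onset
    get_verbal_response()         # blocks until a response arrives
    return time.perf_counter() - onset

if __name__ == "__main__":
    for a, b in TRIALS:
        rt = run_trial(a, b)
        print(f"RT for ({a}, {b}): {rt:.3f} s")
```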
The results showed that activity in these left-hemisphere regions in response to manipulable objects was significantly reduced when the hands were restrained. Verbal responses were also affected by hand restraint. These results indicate that restricting hand movement affects the processing of object meaning, supporting the idea of embodied knowledge. They also suggest that embodied knowledge could help artificial intelligence learn the meaning of objects.
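To picture what it could mean for an AI system to “ground” a word rather than merely define it, here is a toy sketch in Python. Everything in it (the `GroundedConcept` class and the affordance entries) is an assumption made for illustration, not a representation used in the study; it only contrasts a dictionary-style entry, where words point at other words, with an embodied entry that also records how the object can be acted on.

```python
from dataclasses import dataclass, field

# Dictionary-style knowledge: symbols defined only in terms of other symbols.
DICTIONARY = {
    "cup": "a container, made of glass, used for drinking",
}

@dataclass
class GroundedConcept:
    """Toy embodied entry: a word tied to hypothetical motor affordances."""
    word: str
    definition: str
    affordances: dict = field(default_factory=dict)  # action -> expected outcome

cup = GroundedConcept(
    word="cup",
    definition=DICTIONARY["cup"],
    affordances={
        "grasp": "held in one hand",
        "lift_to_mouth": "allows drinking",
        "drop": "breaks on the floor",
    },
)

# A purely symbolic system can only restate the definition; a grounded one
# can also predict the result of acting on the object.
print(cup.definition)
print(cup.affordances["drop"])  # -> "breaks on the floor"
```

The contrast mirrors the cup example above: a symbolic system can restate the definition, while a grounded one can also predict what happens when the cup is grasped, lifted, or dropped.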

About this cognitive research news

Author: Yoshiko Tani
Source: Osaka Metropolitan University
Contact: Yoshiko Tani – Osaka Metropolitan University
Image: The image is credited to Makioka, Osaka Metropolitan University

Original Research: Open Access. “Hand restraint reduces brain activity and affects the speed of verbal responses in semantic tasks” by Sae Onishi et al. Scientific Reports

Abstract

Hand restraint reduces brain activity and affects the speed of verbal responses in semantic tasks

According to embodied cognition theory, semantic processing is closely related to body movements. For example, limiting hand movements inhibits memory for objects that can be manipulated with the hands. However, whether body restraint reduces semantics-related brain activity has not been confirmed. We measured the effect of hand restraint on semantic processing in the parietal lobe using functional near-infrared spectroscopy. A pair of words representing the names of hand-manipulable (e.g., cup or pencil) or non-manipulable (e.g., windmill or fountain) objects was presented, and participants were asked to determine which object was larger. Reaction time (RT) in the judgment task and activation of the left intraparietal sulcus (LIPS) and left inferior parietal lobule (LIPL), including the supramarginal gyrus and angular gyrus, were analyzed. We found that restriction of hand movement suppressed brain activity in the LIPS in response to hand-manipulable objects and affected RT in the size-judgment task. These results show that body restraint reduces the activity of brain regions involved in semantic processing. Hand restraint may inhibit motor simulation, which, in turn, would inhibit body-related semantic processing.