"In effect, we discovered how the brain's dictionary is organized," said Just, the D.O. Hebb Professor of Psychology and director of the Center for Cognitive Brain Imaging. "It isn't alphabetical or ordered by the sizes of objects or their colors. It's through the three basic features that the brain uses to define common nouns like apartment, hammer and carrot."
As the researchers report January 12 in the journal PLoS One, the three codes or factors concern basic human fundamentals:
- how you physically interact with the object (how you hold it, kick it, twist it, etc.);
- how it is related to eating (biting, sipping, tasting, swallowing); and
- how it is related to shelter or enclosure.
The three factors, each coded in three to five different locations in the brain, were found by a computer algorithm that searched for commonalities among brain areas in how participants responded to 60 different nouns describing physical objects. For example, the word apartment evoked high activation in the five areas that code shelter-related words.
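The underlying analysis is essentially a factor analysis of the noun-by-voxel activation matrix. The Python sketch below uses synthetic random data and scikit-learn's generic FactorAnalysis rather than the authors' actual pipeline; it is meant only to illustrate the idea of recovering a small number of shared latent factors from many voxel responses.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical data: activation for 60 nouns across 500 voxels.
# (The real study used whole-brain fMRI images from multiple participants;
# random data stands in here purely to make the sketch runnable.)
n_nouns, n_voxels = 60, 500
activations = rng.normal(size=(n_nouns, n_voxels))

# Look for a handful of latent factors whose mixtures account for the
# per-noun activation patterns, analogous to the manipulation, eating,
# and shelter factors reported in the paper.
fa = FactorAnalysis(n_components=4, random_state=0)
noun_scores = fa.fit_transform(activations)   # 60 x 4: how strongly each noun loads on each factor
voxel_loadings = fa.components_                # 4 x 500: which voxels each factor recruits

print(noun_scores.shape, voxel_loadings.shape)
```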
In the case of hammer, the motor cortex was the brain area activated to code the physical interaction. "To the brain, a key part of the meaning of hammer is how you hold it, and it is the sensory-motor cortex that represents 'hammer holding,'" said Cherkassky, who has a background in both computer science and neuroscience. Similarly, shelter words activated the parahippocampal place area of the brain. The eating factor activates areas associated with face-related actions such as chewing or biting. The "manipulation" and "eating" words are represented mainly in the left hemisphere, probably because most of the participants were right-handed for tool use and feeding.
Each noun is represented as a mixture of factors. For example, "apple," being both an object of eating and an object of manipulation, activates multiple brain areas to different degrees, producing a pattern of activation, or a "code." Thus the meanings of concrete nouns can be represented semantically in terms of their activation codes in the cortex.
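This mixture idea can be made concrete with a toy linear model: a noun's activation code is a weighted sum of factor "signatures." All numbers below are invented purely for illustration and are not values from the study.

```python
import numpy as np

# Illustrative factor signatures: each row is a made-up activation
# pattern over 6 brain regions for one semantic factor.
factor_signatures = np.array([
    [0.9, 0.7, 0.1, 0.0, 0.0, 0.0],   # manipulation (e.g., motor cortex areas)
    [0.0, 0.1, 0.8, 0.9, 0.0, 0.0],   # eating (face/mouth-related areas)
    [0.0, 0.0, 0.0, 0.1, 0.9, 0.8],   # shelter (e.g., parahippocampal area)
])

# Hypothetical factor weights for two nouns: "apple" mixes eating and
# manipulation; "apartment" is dominated by shelter.
noun_weights = {
    "apple":     np.array([0.5, 0.9, 0.0]),
    "apartment": np.array([0.1, 0.0, 0.9]),
}

# A noun's predicted activation "code" is the weighted sum of the
# factor signatures it mixes.
for noun, w in noun_weights.items():
    code = w @ factor_signatures
    print(noun, np.round(code, 2))
```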
Interestingly, the researchers found that word length was also a factor featured in the activation code for each written word. The word length factor presents an opportunity to separate a low-level, perceptual feature of the printed word from the high-level, semantic object features (encoded by the manipulation, eating, and shelter factors). The word length factor appeared to represent the width, or number of letters, of the printed word.
The research also showed that the noun meanings were coded similarly in all of the participants' brains. "This result demonstrates that when two people think about the word 'hammer' or 'house,' their brain activation patterns are very similar. But beyond that, our results show that these three discovered brain codes capture key building blocks also shared across people," said Mitchell, head of the Machine Learning Department in the School of Computer Science.
This study marked the first time that thoughts stimulated by words alone were accurately identified using brain imaging, in contrast to earlier studies that used picture stimuli or pictures together with words. The programs were able to identify the thought without the benefit of a picture representation in the visual area of the brain, focusing instead on the semantic or conceptual representation of the objects. This finding is therefore important for understanding how the brain reads and comprehends language.
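One simple way to picture this kind of identification is as matching an observed activation pattern against the predicted code for each candidate noun, for example by correlation. The sketch below is a hypothetical, simplified version of such a matching step, reusing the toy codes from the previous sketch; it is not the authors' classifier.

```python
import numpy as np

def identify_noun(observed, predicted_codes):
    """Pick the noun whose predicted activation code correlates best
    with the observed activation pattern."""
    best_noun, best_r = None, -np.inf
    for noun, code in predicted_codes.items():
        r = np.corrcoef(observed, code)[0, 1]
        if r > best_r:
            best_noun, best_r = noun, r
    return best_noun, best_r

# Toy predicted codes (weighted sums of the invented factor signatures above).
predicted_codes = {
    "apple":     np.array([0.45, 0.44, 0.77, 0.81, 0.00, 0.00]),
    "apartment": np.array([0.09, 0.07, 0.01, 0.09, 0.81, 0.72]),
}

# A noisy "observation" of someone thinking about an apple.
rng = np.random.default_rng(1)
observed = predicted_codes["apple"] + rng.normal(scale=0.1, size=6)

print(identify_noun(observed, predicted_codes))
```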
Reference: Just et al. A Neurosemantic Theory of Concrete Noun Representation Based on the Underlying Brain Codes. PLoS ONE, 2010; 5(1): e8622. DOI: 10.1371/journal.pone.0008622