The ability to represent concepts and the relationships between them is critical to human cognition. How does the brain code relationships between items that share basic conceptual properties (e.g., dog and wolf) while simultaneously representing associative links between dissimilar items that co-occur in particular contexts (e.g., dog and bone)? To clarify the neural bases of these semantic components in neurologically intact participants, both types of semantic relationship were investigated in an fMRI study optimized for anterior temporal lobe (ATL) coverage. The principal finding was that the same core semantic network (ATL, superior temporal sulcus, ventral prefrontal cortex) was equivalently engaged when participants made semantic judgments on the basis of association or conceptual similarity. Direct comparisons revealed small, weak differences for conceptual-similarity > associative decisions (e.g., inferior prefrontal cortex) and for associative > conceptual-similarity decisions (e.g., ventral parietal cortex), which appear to reflect graded differences in task difficulty. Indeed, once reaction time was entered as a covariate into the analysis, no associative versus categorical differences remained. The paper concludes with a discussion of how categorical/feature-based and associative relationships might be represented within a single, unified semantic system.
Journal: Cerebral Cortex (New York, N.Y.: 1991)
Publication status: Published - 30 Jan 2015
- hub-and-spoke model
- semantic memory