Towards the quantification of the semantic information encoded in written language

Marcelo A. Montemurro, Damián H. Zanette

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Written language is a complex communication signal capable of conveying information encoded in the form of ordered sequences of words. Beyond the local order ruled by grammar, semantic and thematic structures affect long-range patterns in word usage. Here, we show that a direct application of information theory quantifies the relationship between the statistical distribution of words and the semantic content of the text. We show that there is a characteristic scale, roughly a few thousand words, which establishes the typical size of the most informative segments in written language. Moreover, we find that the words whose contribution to the overall information is larger are those most closely associated with the main subjects and topics of the text. This scenario can be explained by a model of word usage that assumes that words are distributed along the text in domains of a characteristic size where their frequency is higher than elsewhere. Our conclusions are based on the analysis of a large database of written language, diverse in subjects and styles, and thus are likely to apply to general language sequences encoding complex information. © 2010 World Scientific Publishing Company.
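
    The abstract does not spell out the information measure, but one common way to realize the idea it describes is to split a text into equal-size segments and score each word by its contribution to the mutual information between word identity and segment label. The sketch below is a minimal illustration under that assumption; the estimator, the part_size default of 2000 words (chosen to echo the characteristic scale reported above), and the input file name are illustrative, not the authors' published procedure.

    ```python
    import math
    from collections import Counter

    def word_information(words, part_size=2000):
        """Score each word type by its contribution to the mutual
        information between word identity and segment label.
        Illustrative estimator, not the authors' exact formula."""
        n_parts = max(1, len(words) // part_size)
        parts = [words[i * part_size:(i + 1) * part_size]
                 for i in range(n_parts)]
        part_counts = [Counter(p) for p in parts]

        total = sum(len(p) for p in parts)
        global_counts = Counter(w for p in parts for w in p)

        info = {}
        for w, n_w in global_counts.items():
            p_w = n_w / total
            contrib = 0.0
            for part, counts in zip(parts, part_counts):
                n_wj = counts[w]
                if n_wj == 0:
                    continue
                p_wj = n_wj / total      # joint P(word = w, segment = j)
                p_j = len(part) / total  # P(segment = j)
                contrib += p_wj * math.log2(p_wj / (p_w * p_j))
            info[w] = contrib
        return info

    # Rank words by information; high scorers should be topical words.
    if __name__ == "__main__":
        words = open("sample_text.txt").read().lower().split()  # illustrative input
        scores = word_information(words)
        for w, s in sorted(scores.items(), key=lambda kv: -kv[1])[:20]:
            print(f"{w:15s} {s:.4f}")
    ```

    Summing the per-word scores recovers the total mutual information between words and segments, which is consistent with the paper's framing of individual words contributing to the overall information of the text.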
    Original language: English
    Pages (from-to): 135-153
    Number of pages: 18
    Journal: Advances in Complex Systems
    Volume: 13
    Issue number: 2
    DOIs
    Publication status: Published - Apr 2010

    Keywords

    • Complex communication
    • Information theory
    • Natural language
