Words are fundamental linguistic units that connect thoughts and things through meaning. However, words do not appear independently in a text sequence: the existence of syntactic rules induces correlations among neighboring words. Further, words are not evenly distributed but approximately follow a power law, since terms with purely semantic content appear much less often than terms that specify grammatical relations. Using an ordinal pattern approach, we present an analysis of lexical statistical connections for eleven major languages. We find that the diverse ways in which languages express word relations give rise to unique pattern distributions. Remarkably, we find that these relations can be modeled by a Markov model of order 2, and that this result holds universally for all the languages studied. Furthermore, fluctuations of the pattern distributions allow us to determine the historical period in which a text was written and its author. Taken together, these results emphasize the relevance of time series analysis and information-theoretic methods for understanding statistical correlations in natural languages.
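As a rough illustration of the ordinal pattern approach mentioned above, the sketch below maps a text to a numerical series and tabulates the relative frequencies of ordinal patterns. The choice of word length as the mapped quantity, the tie-breaking rule, and all function names are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of an ordinal-pattern analysis of a text. Assumption: the text
# is first mapped to a time series of word lengths; this is a common choice in
# ordinal analyses of language, but the abstract does not specify the mapping.
from collections import Counter
from itertools import permutations
import math
import re


def word_lengths(text):
    """Map a text to a time series of word lengths (letters only)."""
    return [len(w) for w in re.findall(r"[^\W\d_]+", text.lower())]


def ordinal_pattern(window):
    """Return the permutation that sorts the window; ties broken by position."""
    return tuple(sorted(range(len(window)), key=lambda i: (window[i], i)))


def pattern_distribution(series, order=3):
    """Relative frequencies of all ordinal patterns of the given order."""
    counts = Counter(
        ordinal_pattern(series[i:i + order])
        for i in range(len(series) - order + 1)
    )
    total = sum(counts.values())
    return {p: counts.get(p, 0) / total for p in permutations(range(order))}


def permutation_entropy(dist):
    """Normalized Shannon entropy of the ordinal-pattern distribution."""
    h = -sum(p * math.log(p) for p in dist.values() if p > 0)
    return h / math.log(len(dist))


if __name__ == "__main__":
    sample = ("Words are fundamental linguistic units that connect "
              "thoughts and things through meaning.")
    dist = pattern_distribution(word_lengths(sample), order=3)
    print({"".join(map(str, k)): round(v, 3) for k, v in dist.items()})
    print("normalized permutation entropy:", round(permutation_entropy(dist), 3))
```

Comparing such pattern distributions (and their fluctuations) across texts is, in spirit, how the correlations, authorship, and period signals described in the abstract would be probed; the abstract itself does not prescribe this particular implementation.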