Speech sounds differ subtly in a multidimensional auditory-perceptual space. Distinguishing speech sound categories is a perceptually demanding task, with substantial individual differences as well as heterogeneity across populations (e.g., native versus non-native listeners). The neural representational differences underlying these inter-individual and cross-language differences are not fully understood. These questions have often been examined using joint analyses that ignore individual heterogeneity or separate analyses that cannot characterize similarities across individuals. Neither extreme, therefore, allows principled comparisons between populations and individuals. Motivated by these problems, we develop a novel Bayesian mixed multidimensional scaling method that accounts for heterogeneity across populations and subjects. We design a Markov chain Monte Carlo algorithm for posterior computation. We evaluate the method's empirical performance through synthetic experiments. Applied to a motivating auditory neuroscience study, the method provides novel insights into how biologically interpretable lower-dimensional latent features reconstruct the observed distances between the stimuli and vary across individuals and their native language experiences. Supplementary materials for this article, including a standardized description of the materials for reproducing the work, are available as an online supplement.
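To make the abstract's high-level description concrete, the following is a minimal sketch of generic Bayesian multidimensional scaling fit with a random-walk Metropolis sampler. It is not the paper's mixed model (it omits the population- and subject-level heterogeneity); the toy data, the Gaussian error and prior scales (`sigma`, `tau`), and the step size are illustrative assumptions only.

```python
# Minimal sketch: generic Bayesian MDS via random-walk Metropolis.
# Latent low-dimensional coordinates X are sampled so that their pairwise
# Euclidean distances match observed dissimilarities D. This is an assumed,
# simplified model, not the paper's mixed multidimensional scaling method.
import numpy as np

rng = np.random.default_rng(0)

def pairwise_dist(X):
    # Euclidean distances between rows of X.
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def log_post(X, D, sigma=0.1, tau=1.0):
    # Gaussian likelihood on the upper-triangular observed distances
    # plus a Gaussian prior on the latent coordinates (both assumed).
    iu = np.triu_indices(len(D), k=1)
    resid = D[iu] - pairwise_dist(X)[iu]
    return -0.5 * (resid ** 2).sum() / sigma ** 2 - 0.5 * (X ** 2).sum() / tau ** 2

# Toy data: n stimuli with true 2-D coordinates and noisy observed distances.
n, p = 10, 2
X_true = rng.normal(size=(n, p))
D_obs = pairwise_dist(X_true) + 0.05 * rng.normal(size=(n, n))
D_obs = (D_obs + D_obs.T) / 2
np.fill_diagonal(D_obs, 0.0)

# Random-walk Metropolis over the latent coordinates.
X = rng.normal(size=(n, p))
lp = log_post(X, D_obs)
for it in range(5000):
    X_prop = X + 0.05 * rng.normal(size=X.shape)
    lp_prop = log_post(X_prop, D_obs)
    if np.log(rng.uniform()) < lp_prop - lp:
        X, lp = X_prop, lp_prop

print("final log-posterior:", round(lp, 2))
```

A mixed-effects extension of this sketch would, roughly, replace the single coordinate matrix `X` with population-level coordinates plus subject-specific deviations; the abstract's method develops such a hierarchy in a principled Bayesian framework.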