Quantifying relationships between components of a complex system is critical to understanding the rich network of interactions that characterize the behavior of the system. Traditional methods for detecting pairwise dependence of time series, such as Pearson correlation, Granger causality, and mutual information, are computed directly in the space of measured time-series values. But for systems in which interactions are mediated by statistical properties of the time series ('time-series features') over longer timescales, this approach can fail to capture the underlying dependence from limited and noisy time-series data, and can be challenging to interpret. Addressing these issues, here we introduce an information-theoretic method for detecting dependence between time series mediated by time-series features, which provides interpretable insights into the nature of the interactions. Our method extracts a candidate set of time-series features from sliding windows of the source time series and assesses their role in mediating a relationship to values of the target process. Across simulations of three different generative processes, we demonstrate that our feature-based approach can outperform a traditional inference approach based on raw time-series values, especially in challenging scenarios characterized by short time-series lengths, high noise levels, and long interaction timescales. Our work introduces a new tool for inferring and interpreting feature-mediated interactions from time-series data, contributing to the quantitative analysis of complex systems, with potential applications in domains including neuroscience, finance, climate science, and engineering.
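To make the general idea concrete, the following is a minimal sketch of a feature-based dependence estimate, not the specific method described above: it assumes a small illustrative set of window features (mean, standard deviation, lag-1 autocorrelation), a particular window alignment, and scikit-learn's mutual_info_regression as the dependence estimator, all of which are assumptions made for illustration.

import numpy as np
from sklearn.feature_selection import mutual_info_regression

def window_features(x, window):
    # Compute mean, standard deviation, and lag-1 autocorrelation
    # in each sliding window of the source series x.
    feats = []
    for start in range(len(x) - window + 1):
        w = x[start:start + window]
        ac1 = np.corrcoef(w[:-1], w[1:])[0, 1] if np.std(w) > 0 else 0.0
        feats.append([w.mean(), w.std(), ac1])
    return np.array(feats)

def feature_based_mi(source, target, window=20):
    # Estimate mutual information between each windowed source feature
    # and the target value observed at the end of that window.
    X = window_features(source, window)   # candidate features, one row per window
    y = target[window - 1:]               # target aligned to window end points
    return mutual_info_regression(X, y, random_state=0)

# Toy example (hypothetical): the target tracks the source's recent
# variability rather than its raw values, so the 'std' feature should
# carry most of the dependence.
rng = np.random.default_rng(0)
n = 2000
source = rng.normal(scale=1 + np.sin(np.linspace(0, 8 * np.pi, n)) ** 2, size=n)
target = np.array([source[max(0, t - 20):t + 1].std() for t in range(n)])
target = target + rng.normal(scale=0.1, size=n)
mi_per_feature = feature_based_mi(source, target)
print(dict(zip(["mean", "std", "lag1_ac"], np.round(mi_per_feature, 3))))

In this sketch, comparing the per-feature mutual information values against the mutual information computed on raw lagged source values would indicate whether a windowed statistic, rather than the instantaneous values themselves, mediates the relationship; the choice of features, window length, and estimator here is purely illustrative.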