Quantum causality is an emerging field of study with the potential to greatly advance our understanding of quantum systems. One of the most important problems in quantum causality is captured by the well-known aphorism that correlation does not imply causation. A direct generalization of existing causal inference techniques to the quantum domain is not possible because of superposition and entanglement. We put forth a new theoretical framework for merging quantum information science and causal inference by exploiting entropic principles. To this end, we leverage the concept of conditional density matrices to develop a scalable algorithmic approach for inferring causality in the presence of latent confounders (common causes) in quantum systems. We apply the proposed framework to an experimentally relevant scenario, identifying message senders on noisy quantum links, and validate that the pre-noise input, acting as a latent confounder, is the cause of the noisy outputs. We also demonstrate that the proposed approach outperforms classical causal inference even when the variables are classical, by exploiting quantum dependence between variables through density matrices rather than joint probability distributions. The proposed approach thus unifies classical and quantum causal inference in a principled way. This successful inference on a synthetic quantum dataset can lay the foundation for identifying the originators of malicious activity on future multi-node quantum networks.
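The entropic quantities such a framework builds on can be illustrated with a short sketch (this is not the paper's algorithm, only the underlying quantity): the quantum conditional entropy S(A|B) = S(ρ_AB) − S(ρ_B), which, unlike its classical counterpart, can be negative for entangled states. The helper names below and the Bell-state example are illustrative; numpy is assumed.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0 log 0 = 0 by convention
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace_B(rho_ab, dA, dB):
    """Trace out subsystem B of a (dA*dB) x (dA*dB) density matrix."""
    r = rho_ab.reshape(dA, dB, dA, dB)
    return np.einsum('abcb->ac', r)

def conditional_entropy(rho_ab, dA, dB):
    """Quantum conditional entropy S(A|B) = S(AB) - S(B)."""
    r = rho_ab.reshape(dA, dB, dA, dB)
    rho_b = np.einsum('abac->bc', r)      # trace out A
    return von_neumann_entropy(rho_ab) - von_neumann_entropy(rho_b)

# Bell state (|00> + |11>)/sqrt(2): S(AB) = 0 but S(B) = 1 bit,
# so S(A|B) = -1 -- impossible for classical joint distributions.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_bell = np.outer(psi, psi.conj())
```

The negative conditional entropy of the Bell state is exactly the kind of quantum dependence that a density-matrix-based framework can exploit and a joint probability distribution cannot express.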

### Related Content

In this article, we introduce the BNPqte R package, which implements the Bayesian nonparametric approach of Xu, Daniels and Winterstein (2018) for estimating quantile treatment effects in observational studies. The approach models the distributions of the potential outcomes flexibly, so it can capture a variety of underlying relationships among outcomes, treatments and confounders and estimate multiple quantile treatment effects simultaneously. Specifically, it uses a Bayesian additive regression trees (BART) model to estimate the propensity score and a Dirichlet process mixture (DPM) of multivariate normals to estimate the conditional distribution of the potential outcome given the estimated propensity score. The BNPqte R package provides a fast implementation of this approach through efficient R functions for the DPM of multivariate normals in joint and conditional density estimation. These functions substantially improve the efficiency of the DPM model in density estimation compared to the popular DPpackage. The BART-related functions in BNPqte are inherited from the BART R package, with two modifications concerning variable importance and split probability. To maximize computational efficiency, the actual sampling and computation for each model are carried out in C++, and the Armadillo C++ library is used for fast linear algebra.
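The estimand the package targets can be illustrated with a far simpler, purely frequentist sketch (in Python rather than R, and reproducing none of the BART/DPM machinery): under randomized treatment, the quantile treatment effect at level q reduces to the difference of the q-quantiles of the treated and control outcomes. numpy is assumed; the synthetic data are illustrative.

```python
import numpy as np

def qte(y, t, q):
    """Plug-in quantile treatment effect at level q:
    Q_{Y|T=1}(q) - Q_{Y|T=0}(q).  Valid here only because treatment
    is randomized; the BNPqte approach instead adjusts for observed
    confounding via the estimated propensity score."""
    return float(np.quantile(y[t == 1], q) - np.quantile(y[t == 0], q))

rng = np.random.default_rng(0)
n = 20000
t = rng.integers(0, 2, size=n)          # randomized binary treatment
y0 = rng.normal(0.0, 1.0, size=n)       # control potential outcome
y = y0 + 2.0 * t                        # treatment shifts outcomes by 2
```

Because the treatment here shifts every outcome by the same amount, the QTE is 2 at every quantile level; the point of modeling full outcome distributions, as BNPqte does, is to detect effects that differ across quantiles.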

The problem of comparing the entire second-order structure of two functional processes is considered, and an $L^2$-type statistic for testing equality of the corresponding spectral density operators is investigated. The test statistic integrates, over all frequencies, the Hilbert-Schmidt distance between the two estimated spectral density operators. Under certain assumptions, the limiting distribution under the null hypothesis is derived. A novel frequency-domain bootstrap method is introduced, which approximates the null distribution of the test statistic more accurately than the large-sample Gaussian approximation. Under quite general conditions, asymptotic validity of the bootstrap procedure for estimating the null distribution of the test statistic is established. Furthermore, consistency of the bootstrap-based test under the alternative is proved. Numerical simulations show that, even for small samples, the bootstrap-based test has very good size and power behavior. An application to a bivariate real-life functional time series illustrates the proposed methodology.
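The shape of the test statistic can be sketched with finite-dimensional surrogates: given two sequences of estimated spectral density operators (one matrix per frequency on a grid, stand-ins for the operator-valued estimates in the paper), the statistic accumulates the squared Hilbert-Schmidt distance over frequencies. The function names are illustrative; numpy is assumed.

```python
import numpy as np

def hs_dist_sq(A, B):
    """Squared Hilbert-Schmidt (Frobenius) distance ||A - B||_HS^2."""
    D = np.asarray(A) - np.asarray(B)
    return float(np.sum(np.abs(D) ** 2))

def l2_statistic(spec1, spec2):
    """Sum, over a frequency grid, of the squared Hilbert-Schmidt
    distance between two sequences of estimated spectral density
    operators (a discretized surrogate for the integral over all
    frequencies in the paper's statistic)."""
    return sum(hs_dist_sq(f1, f2) for f1, f2 in zip(spec1, spec2))
```

Under the null the two estimated operator sequences coincide up to estimation error, so the statistic is small; the paper's contribution is the null distribution of this quantity and a frequency-domain bootstrap that approximates it accurately.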

We introduce a methodology for robust Bayesian estimation with robust divergences (e.g., the density power divergence or $\gamma$-divergence) indexed by a single tuning parameter. It is well known that the posterior induced by a robust divergence yields estimators that are highly robust against outliers, provided the tuning parameter is chosen appropriately. In a Bayesian framework, one natural way to choose the tuning parameter would be via the evidence (marginal likelihood). However, we illustrate numerically that the evidence induced by the density power divergence fails to select the optimal tuning parameter, since a robust divergence does not define a statistical model. To overcome this problem, we treat the exponential of the robust divergence as an unnormalized statistical model and estimate the tuning parameter by minimizing the Hyvärinen score. We also provide adaptive computational methods based on sequential Monte Carlo (SMC) samplers, which enable us to obtain the optimal tuning parameter and samples from the posterior distribution simultaneously. The empirical performance of the proposed method is illustrated through simulations and an application to real data.
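The reason the Hyvärinen score works for unnormalized models is that it depends on the density only through derivatives of its logarithm, so the normalizing constant drops out. A minimal sketch on a toy Gaussian scale model (not the paper's robust-divergence setting; the function names are illustrative): the empirical score is the average of 2·(d²/dx²)log p + ((d/dx)log p)², and selecting the model that minimizes it recovers the well-specified scale.

```python
import numpy as np

def hyvarinen_score_gaussian(x, s2):
    """Empirical Hyvarinen score for an unnormalized N(0, s2) model.
    With log p(x) = -x^2 / (2*s2) + const, the score at each point is
    2 * (-1/s2) + (x/s2)^2; the constant (normalizer) never appears."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(-2.0 / s2 + x**2 / s2**2))

def select_s2(x, grid):
    """Pick the variance minimizing the score -- the same selection
    criterion the abstract applies to the robust-divergence tuning
    parameter."""
    return min(grid, key=lambda s2: hyvarinen_score_gaussian(x, s2))
```

For data with second moment 1, the score is minimized at s2 = 1 on the grid below, even though the candidate "models" were never normalized.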

The complex scaling/perfectly matched layer method is a widespread technique for simulating wave propagation problems in open domains. The method is very popular because it is easy to implement and does not require knowledge of a fundamental solution. However, for anisotropic media the method may impose an unphysical radiation condition and lead to erroneous and unstable results. In this article we argue that a radial scaling (as opposed to a Cartesian scaling) does not suffer from this drawback and produces the desired radiation condition. This result is of great importance, as it rehabilitates the application of the complex scaling method to anisotropic media. To present further details, we consider the radial complex scaling method for scalar anisotropic resonance problems. We prove that the associated operator is Fredholm and show the convergence of approximations generated by simultaneous domain truncation and finite element discretization. We present computational studies to support our theoretical results.
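The basic mechanism of complex scaling can be sketched in one line: substituting the scaled coordinate x → x(1 + iσ), σ > 0, turns an outgoing wave e^{ikx} into an exponentially decaying function e^{ikx}e^{−kσx}, which the truncated computational domain can then absorb (while an incoming wave would blow up, which is how the scaling selects a radiation condition). The parameter values below are illustrative; numpy is assumed.

```python
import numpy as np

def scaled_outgoing_wave(x, k=2.0, sigma=0.5):
    """Outgoing wave exp(i*k*x) evaluated on the complex-scaled
    coordinate x -> x*(1 + i*sigma): the imaginary part of the path
    converts oscillation into exponential decay exp(-k*sigma*x)."""
    return np.exp(1j * k * x * (1 + 1j * sigma))

x = np.linspace(0.0, 5.0, 50)
amplitude = np.abs(scaled_outgoing_wave(x))   # decays like exp(-k*sigma*x)
```

The article's point is about *which* waves a given scaling damps: for anisotropic media a Cartesian scaling can damp the wrong branch and thus impose an unphysical radiation condition, whereas the radial scaling damps the physically outgoing waves.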

Claiming causal inferences in network settings necessitates careful consideration of the often complex dependence between outcomes for actors. Of particular importance are treatment spillover, or outcome interference, effects. We consider causal inference when the actors are connected via an underlying network structure. Our key contribution is a model for causality when the underlying network is unobserved and the actor covariates evolve stochastically over time. We develop a joint model for the relational and covariate generating process that avoids restrictive separability assumptions and deterministic network assumptions that do not hold in the majority of social network settings of interest. Our framework utilizes the highly general class of Exponential-family Random Network models (ERNM), of which Markov Random Fields (MRF) and Exponential-family Random Graph models (ERGM) are special cases. We present potential-outcome-based inference within a Bayesian framework and propose a simple modification of the exchange algorithm to allow sampling from ERNM posteriors. We present the results of a simulation study demonstrating the validity of the approach. Finally, we demonstrate the value of the framework in a case study of smoking over time in the context of adolescent friendship networks.
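The exchange algorithm the authors build on handles the intractable normalizing constant of exponential-family network models. Its key identity can be sketched for a one-parameter model with sufficient statistic s (a toy stand-in for ERNM statistics; flat prior and symmetric proposal assumed, and all names below are illustrative): the intractable constants cancel between the observed data and an auxiliary draw from the proposed parameter.

```python
import math, random

def exchange_log_alpha(s_obs, s_aux, theta, theta_prop):
    """Log acceptance ratio of the exchange algorithm for a
    one-parameter exponential family exp(theta * s(x)) / Z(theta),
    with flat prior and symmetric proposal.  The two intractable
    Z's cancel, leaving (theta' - theta) * (s(x_obs) - s(x_aux)),
    where x_aux is drawn from the model at theta'."""
    return (theta_prop - theta) * (s_obs - s_aux)

def sample_edge_count_model(theta, n_pairs, rng):
    """Exact auxiliary draw for a toy ERGM whose only statistic is
    the edge count: dyads are then independent, each an edge with
    probability exp(theta) / (1 + exp(theta))."""
    p = 1.0 / (1.0 + math.exp(-theta))
    return sum(1 for _ in range(n_pairs) if rng.random() < p)
```

For richer models (ERGM, MRF, ERNM) the auxiliary draw itself requires an inner sampler rather than the closed form above, which is where the authors' modification for ERNM posteriors comes in.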

In this study, we consider the problem of monitoring for parameter changes, particularly in the presence of outliers. To construct a sequential procedure that is robust against outliers, we use the density power divergence to derive the detector and stopping time that make up the procedure. We first investigate the asymptotic properties of the sequential procedure for i.i.d. sequences and then extend it to stationary time series models, providing a set of sufficient conditions under which the procedure has an asymptotically controlled size and consistency in power. We then apply the procedure to GARCH models. We demonstrate the validity and robustness of the proposed procedure through a simulation study. Finally, two real data analyses illustrate the usefulness of the proposed sequential procedure.
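Why the density power divergence yields robustness can be sketched with the Gaussian location case: the DPD estimating function (x − μ)·exp(−α(x − μ)²/(2σ²)) is bounded, so a single gross outlier contributes almost nothing to a cumulative detector, whereas the likelihood score (x − μ) is unbounded. The detector below is schematic (an uncalibrated CUSUM-style rule, not the paper's boundary), and the names and thresholds are illustrative.

```python
import math

def dpd_score(x, mu=0.0, sigma=1.0, alpha=0.5):
    """DPD estimating function for Gaussian location:
    ((x - mu)/sigma) * exp(-alpha * ((x - mu)/sigma)^2 / 2).
    Bounded in x, unlike the likelihood score."""
    z = (x - mu) / sigma
    return z * math.exp(-alpha * z * z / 2.0)

def detector(xs, threshold, **kw):
    """CUSUM-style monitoring: alarm at the first time the cumulative
    robust score exceeds the threshold (schematic; the paper derives
    a properly calibrated stopping boundary)."""
    s = 0.0
    for i, x in enumerate(xs):
        s += dpd_score(x, **kw)
        if abs(s) > threshold:
            return i          # alarm time
    return None               # no change detected
```

A sustained mean shift accumulates and triggers the alarm, while an isolated huge outlier is exponentially downweighted and does not.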

Quantum machine learning is an emerging field at the intersection of machine learning and quantum computing. A central quantity in the theoretical foundation of quantum machine learning is the quantum cross entropy. In this paper, we present an operational interpretation of this quantity: the quantum cross entropy is the compression rate of sub-optimal quantum source coding. To show this, we give a simple, universal quantum data compression protocol based on a quantum generalization of variable-length coding and on quantum strong typicality.
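The classical analogue of this operational statement is standard and easy to check numerically: compressing a source p with a code that is optimal for a (mismatched) model q costs H(p, q) = −Σₓ p(x) log₂ q(x) bits per symbol, which is at least the entropy H(p), with equality only when q = p. The paper establishes the quantum counterpart; the sketch below is purely classical.

```python
import math

def cross_entropy(p, q):
    """H(p, q) = -sum_x p(x) log2 q(x): expected bits per symbol when
    source p is compressed with codeword lengths -log2 q(x), i.e.
    with a code optimal for the model q rather than the source p."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    """Shannon entropy H(p) = H(p, p): the optimal compression rate."""
    return cross_entropy(p, p)
```

The gap H(p, q) − H(p) is the relative entropy D(p‖q), the price in extra bits per symbol of the model mismatch; the quantum protocol in the paper plays the same role for quantum sources.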

Graphs are widely used for describing systems made up of many interacting components and for understanding the structure of their interactions. Various statistical models exist, which describe this structure as the result of a combination of constraints and randomness. Model selection techniques need to automatically identify the best model, and the best set of parameters, for a given graph. To do so, most authors rely on the minimum description length paradigm and apply it to graphs by considering the entropy of probability distributions defined on graph ensembles. In this paper, we introduce edge probability sequential inference, a new approach to model selection that relies on probability distributions on edge ensembles. From a theoretical point of view, we show that this methodology provides a more consistent ground for statistical inference than existing techniques, because it relies on multiple realizations of the random variable. It also provides better guarantees against overfitting, by making it possible to lower the number of parameters of the model below the number of observations. Experimentally, we illustrate the benefits of this methodology in two situations: to infer the partition of a stochastic blockmodel, and to identify the most relevant model for a given graph between the stochastic blockmodel and the configuration model.
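The move from graph ensembles to edge ensembles can be sketched at the level of code length: if a model assigns each dyad an edge probability, the description length of the observed graph is the log-loss of those edge-level predictions, and models can be compared by this quantity. This is a generic description-length sketch, not the paper's estimator; the example probabilities are illustrative.

```python
import math

def edge_description_length(edges, probs):
    """Code length (bits) of a graph's dyad indicators under a model
    assigning each dyad an edge probability: -sum_ij log2 P(a_ij).
    Lower is better; a model that is overconfident about the wrong
    dyads pays heavily for each surprising observation."""
    bits = 0.0
    for a, p in zip(edges, probs):
        bits += -math.log2(p if a else 1.0 - p)
    return bits

# 6 dyads, 3 edges present: a calibrated p = 0.5 costs exactly 6 bits,
# while an overconfident p = 0.9 on every dyad costs far more.
obs = [1, 1, 1, 0, 0, 0]
```

Because each dyad is a separate realization, this edge-level view gives the "multiple realizations of the random variable" that the abstract argues makes inference more consistent than a single graph-level entropy.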

We propose a new quantum state reconstruction method that combines ideas from compressed sensing, non-convex optimization, and acceleration methods. The algorithm, called Momentum-Inspired Factored Gradient Descent (\texttt{MiFGD}), extends the applicability of quantum tomography to larger systems. Despite being non-convex, \texttt{MiFGD} converges \emph{provably} to the true density matrix at a linear rate, in the absence of experimental and statistical noise and under common assumptions. In this manuscript, we present the method, prove its convergence, and provide Frobenius-norm guarantees with respect to the true density matrix. From a practical point of view, we benchmark the algorithm against other existing methods, in both synthetic and real experiments performed on an IBM quantum processing unit. We find that the proposed algorithm performs orders of magnitude faster than state-of-the-art approaches, with the same or better accuracy. In both synthetic and real experiments, we observe accurate and robust reconstruction despite experimental and statistical noise in the tomographic data. Finally, we provide ready-to-use code for state tomography of multi-qubit systems.
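The factored-with-momentum idea can be sketched on a single qubit (this is a toy reimplementation, not the authors' code; the function names, step sizes, and noiseless Pauli measurements are all assumptions): parametrize the low-rank state as ρ = UU†, which keeps the iterate positive semidefinite by construction, and run heavy-ball gradient descent on the least-squares measurement misfit.

```python
import numpy as np

# Pauli operators: informationally complete measurements for 1 qubit.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def momentum_factored_gd(y, r=1, iters=3000, lr=0.05, beta=0.5, seed=1):
    """Reconstruct rho = U U^dagger from measurements y_i = Tr(P_i rho)
    by heavy-ball gradient descent on f(U) = sum_i (Tr(P_i U U^+) - y_i)^2
    -- a toy, single-qubit version of the factored, momentum-accelerated
    approach (Wirtinger gradient: sum_i 2 r_i P_i U)."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(2, r)) + 1j * rng.normal(size=(2, r))
    U /= np.linalg.norm(U)
    U_prev = U.copy()
    for _ in range(iters):
        residuals = [np.trace(P @ U @ U.conj().T).real - yi
                     for P, yi in zip(PAULIS, y)]
        grad = sum(2 * ri * (P @ U) for ri, P in zip(residuals, PAULIS))
        U, U_prev = U - lr * grad + beta * (U - U_prev), U
    return U @ U.conj().T

# Target: the pure state |+><+|; y collects Tr(P_i rho) for each Pauli.
rho_true = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
y = [np.trace(P @ rho_true).real for P in PAULIS]
```

The rank-r factorization is what makes the method scale: for an n-qubit state of rank r, U has 2ⁿ·r entries instead of the 4ⁿ entries of a full density matrix.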

The aim of this paper is to offer the first systematic exploration and definition of equivalent causal models in the context where the two models are not made up of the same variables. The idea is that two models are equivalent when they agree on all "essential" causal information that can be expressed using their common variables. I do so by focusing on the two main features of causal models, namely their structural relations and their functional relations. In particular, I define several relations of causal ancestry and several relations of causal sufficiency, and require that the most general of these relations be preserved across equivalent models.
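One ingredient of such a definition, agreement on causal ancestry among common variables, can be checked mechanically: compute each model's ancestral relation by transitive closure, restrict it to the shared variables, and compare. The sketch below illustrates only this one preservation condition, not the paper's full definition of equivalence; the graphs and names are illustrative.

```python
def ancestors(graph):
    """Transitive closure of a DAG given as {node: set(parents)}:
    returns {node: set of all its causal ancestors}."""
    anc = {v: set() for v in graph}
    changed = True
    while changed:
        changed = False
        for v, parents in graph.items():
            new = set(parents)
            for p in parents:
                new |= anc[p]
            if new != anc[v]:
                anc[v] = new
                changed = True
    return anc

def agree_on_ancestry(g1, g2):
    """Do two causal models agree on causal ancestry restricted to
    their common variables?  (One condition in the spirit of the
    paper's ancestry-preservation requirements.)"""
    common = set(g1) & set(g2)
    a1, a2 = ancestors(g1), ancestors(g2)
    return all((a1[v] & common) == (a2[v] & common) for v in common)

# Marginalizing out the mediator M preserves ancestry among {X, Y}:
g_full = {'X': set(), 'M': {'X'}, 'Y': {'M'}}
g_marg = {'X': set(), 'Y': {'X'}}
```

Here X remains an ancestor of Y after M is dropped, so the two models agree on the essential ancestral information expressible over their common variables, while a model reversing the X-Y direction would not.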

Duncan A. Clark, Mark S. Handcock
Zhou Shangnan
Louis Duvivier, Rémy Cazabet, Céline Robardet
Junhyung Lyle Kim, George Kollias, Amir Kalev, Ken X. Wei, Anastasios Kyrillidis
Sander Beckers
