** We explore recent progress and open questions concerning local minima and saddle points of the Cahn--Hilliard energy in $d\geq 2$ and the critical parameter regime of large system size and mean value close to $-1$. We employ the String Method of E, Ren, and Vanden-Eijnden -- a numerical algorithm for computing transition pathways in complex systems -- in $d=2$ to gain additional insight into the properties of the minima and saddle points. Motivated by the numerical observations, we adapt a method of Caffarelli and Spruck to study convexity of level sets in $d\geq 2$. **
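
The String Method referenced above can be sketched in a few lines. Below is a minimal illustration on a toy two-dimensional double-well potential (the potential, image count, and step sizes are illustrative choices, not those of the paper): each image along the string takes a gradient-descent step, and the string is then reparametrized to equal arclength.

```python
import numpy as np

def V(z):
    """Toy double-well potential with minima at (+-1, 0) and a saddle at (0, 0)."""
    x, y = z[..., 0], z[..., 1]
    return (x**2 - 1)**2 + y**2

def gradV(z):
    x, y = z[..., 0], z[..., 1]
    return np.stack([4 * x * (x**2 - 1), 2 * y], axis=-1)

def string_method(z0, z1, n_images=21, dt=0.01, n_iter=2000):
    # initialize the string as a straight line between the two states
    s = np.linspace(0, 1, n_images)[:, None]
    z = (1 - s) * z0 + s * z1
    for _ in range(n_iter):
        z = z - dt * gradV(z)                      # evolve every image downhill
        seg = np.linalg.norm(np.diff(z, axis=0), axis=1)
        arc = np.concatenate([[0.0], np.cumsum(seg)])
        arc /= arc[-1]                             # normalized arclength parameter
        t = np.linspace(0, 1, n_images)
        # redistribute images to equal arclength spacing
        z = np.stack([np.interp(t, arc, z[:, k]) for k in range(2)], axis=1)
    return z

path = string_method(np.array([-1.2, 0.3]), np.array([1.2, 0.3]))
```

The endpoints relax to the two minima, and the highest-energy image along the converged string approximates the saddle point.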

** We propose a numerical method to approximate the scattering amplitudes for the elasticity system with a non-constant matrix potential in dimensions $d=2$ and $3$. This first requires approximating the scattered field for certain incident waves, which can be written as the solution of a suitable Lippmann-Schwinger equation. In this work we adapt the method introduced by G. Vainikko in \cite{V} to solve such equations when considering the Lam\'e operator. Convergence is proved for sufficiently smooth potentials. Implementation details and numerical examples are also given. **
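
For intuition about the kind of equation being solved, here is a naive Nystrom (trapezoid-rule) discretization of the scalar one-dimensional analogue, $u = u_{\mathrm{inc}} + \int G(x,y)\,q(y)\,u(y)\,dy$ with the 1D Helmholtz Green's function. This is not Vainikko's periodized scheme and not the Lame system; the potential $q$, wavenumber, and grid are illustrative choices.

```python
import numpy as np

k, N = 5.0, 400
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]
q = 0.5 * np.exp(-10 * x**2)        # smooth, effectively compactly supported potential
u_inc = np.exp(1j * k * x)          # incident plane wave

# 1D Helmholtz Green's function: G(x, y) = (i / 2k) * exp(i k |x - y|)
G = 1j / (2 * k) * np.exp(1j * k * np.abs(x[:, None] - x[None, :]))

# collocation of u = u_inc + G[q u]:  (I - h * G * diag(q)) u = u_inc
A = np.eye(N) - h * G * q[None, :]
u = np.linalg.solve(A, u_inc)
u_scat = u - u_inc                  # scattered field
```

The solve produces the total field on the grid; the scattered field is recovered by subtracting the incident wave.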

** We use the energy method to study the well-posedness of initial-boundary value problems approximated by overset mesh methods in one and two space dimensions for linear constant-coefficient hyperbolic systems. We show that in one space dimension, for both scalar equations and systems of equations, the problem where one domain partially oversets another is well-posed when characteristic coupling conditions are used. If a system cannot be diagonalized, as is usually the case in multiple space dimensions, then the energy method does not give proper bounds in terms of initial and boundary data. For those problems, we propose a novel penalty approach. We show, by using a global energy that accounts for the energy in the overlap region of the domains, that under well-defined conditions on the coupling matrices the penalized overset domain problems are energy bounded, conservative, well-posed and have solutions equivalent to the original single domain problem. **

** The random-cluster model is a unifying framework for studying random graphs, spin systems and electrical networks that plays a fundamental role in designing efficient Markov Chain Monte Carlo (MCMC) sampling algorithms for the classical ferromagnetic Ising and Potts models. In this paper, we study a natural non-local Markov chain known as the Chayes-Machta dynamics for the mean-field case of the random-cluster model, where the underlying graph is the complete graph on $n$ vertices. The random-cluster model is parametrized by an edge probability $p$ and a cluster weight $q$. Our focus is on the critical regime: $p = p_c(q)$ and $q \in (1,2)$, where $p_c(q)$ is the threshold corresponding to the order-disorder phase transition of the model. We show that the mixing time of the Chayes-Machta dynamics is $O(\log n \cdot \log \log n)$ in this parameter regime, which reveals that the dynamics does not undergo an exponential slowdown at criticality, a surprising fact that had been predicted (but not proved) by statistical physicists. This also provides a nearly optimal bound (up to the $\log\log n$ factor) for the mixing time of the mean-field Chayes-Machta dynamics in the only regime of parameters where no non-trivial bound was previously known. Our proof consists of a multi-phased coupling argument that combines several key ingredients, including a new local limit theorem, a precise bound on the maximum of symmetric random walks with varying step sizes, and tailored estimates for critical random graphs. In addition, we derive an improved comparison inequality between the mixing time of the Chayes-Machta dynamics and that of the local Glauber dynamics on general graphs; this results in better mixing time bounds for the local dynamics in the mean-field setting. **
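
For readers unfamiliar with the chain, one Chayes-Machta update on the complete graph can be simulated directly, following the standard description of the dynamics: activate each cluster independently with probability $1/q$, then refresh all edges among active vertices independently with probability $p$. The union-find bookkeeping and all parameter values below are illustrative.

```python
import random

def find(parent, v):
    """Union-find root lookup with path halving."""
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def chayes_machta_step(n, clusters, p, q, rng):
    """One CM update. `clusters` is a list of vertex lists partitioning range(n)."""
    active, inactive = [], []
    for c in clusters:
        (active if rng.random() < 1.0 / q else inactive).append(c)
    active_vs = [v for c in active for v in c]
    # re-percolate among active vertices: each pair gets an edge with probability p
    parent = list(range(n))
    for i in range(len(active_vs)):
        for j in range(i + 1, len(active_vs)):
            if rng.random() < p:
                ri, rj = find(parent, active_vs[i]), find(parent, active_vs[j])
                if ri != rj:
                    parent[ri] = rj
    comps = {}
    for v in active_vs:
        comps.setdefault(find(parent, v), []).append(v)
    return list(comps.values()) + inactive

rng = random.Random(0)
n = 50
clusters = [[v] for v in range(n)]          # start from the all-singletons state
for _ in range(10):
    clusters = chayes_machta_step(n, clusters, p=0.03, q=1.5, rng=rng)
```

Each step preserves the fact that the clusters partition the vertex set; only the active part of the configuration is resampled.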

** We derive upper and lower bounds on the sum of distances of a spherical code of size $N$ in $n$ dimensions when $N\sim n^\alpha, 0<\alpha\le 2.$ The bounds are derived by specializing recent general, universal bounds on energy of spherical sets. We discuss asymptotic behavior of our bounds along with several examples of codes whose sum of distances closely follows the upper bound. **
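
The quantity being bounded can be illustrated concretely. For the cross-polytope code (the $2n$ points $\pm e_i$), a hand count gives a sum of pairwise distances of $2n + 2\sqrt{2}\,n(n-1)$: the $n$ antipodal pairs are at distance $2$, and the remaining $2n(n-1)$ pairs are at distance $\sqrt{2}$. A brute-force check (the dimension $n=6$ is an arbitrary choice):

```python
import numpy as np

def sum_of_distances(X):
    """Sum of Euclidean distances over all unordered pairs of rows of X."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return D.sum() / 2.0            # the full distance matrix counts each pair twice

n = 6
X = np.vstack([np.eye(n), -np.eye(n)])      # cross-polytope code: N = 2n unit vectors
expected = 2 * n + 2 * np.sqrt(2) * n * (n - 1)
```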

** In this paper, we study the nonlinear inverse problem of estimating the spectrum of a system matrix, which drives a finite-dimensional affine dynamical system, from partial observations of a single trajectory. In the noiseless case, we prove that an annihilating polynomial of the system matrix, whose roots are a subset of the spectrum, can be uniquely determined from the data. We then study which eigenvalues of the system matrix can be recovered and derive various sufficient and necessary conditions characterizing the relationship between the recoverability of each eigenvalue and the observation locations. We propose various reconstruction algorithms, with theoretical guarantees, generalizing the classical Prony method, ESPRIT, and the matrix pencil method. We test the algorithms on a variety of examples with applications to graph signal processing, disease modeling, and a real human-motion dataset. The numerical results validate our theoretical results and demonstrate the effectiveness of the proposed algorithms, even when the data do not follow an exact linear dynamical system. **
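
The classical Prony step that such methods generalize can be sketched briefly: fit linear-prediction coefficients of a scalar trajectory by least squares, then read eigenvalues off as roots of the resulting annihilating polynomial. The toy eigenvalues below are illustrative, not from the paper.

```python
import numpy as np

def prony_eigenvalues(y, p):
    """Recover p eigenvalues from a scalar trajectory y_k = sum_i c_i * lam_i**k."""
    m = len(y) - p
    # least-squares fit of y[k+p] = a[0]*y[k+p-1] + ... + a[p-1]*y[k]
    A = np.column_stack([y[p - 1 - j : p - 1 - j + m] for j in range(p)])
    a, *_ = np.linalg.lstsq(A, y[p:], rcond=None)
    # eigenvalues are the roots of z^p - a[0] z^(p-1) - ... - a[p-1]
    return np.roots(np.concatenate([[1.0], -a]))

k = np.arange(20)
y = 2.0 * 0.9**k + 1.0 * 0.5**k           # two-mode trajectory with eigenvalues 0.9, 0.5
lam = np.sort(prony_eigenvalues(y, 2).real)
```

On exact two-mode data this recovers the eigenvalues to machine precision; the paper's contribution addresses the much harder partially observed, affine, and noisy settings.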

** We study the relations between optimal transport theory (OTT) and approximate Bayesian computation (ABC), possibly connected to relevant metrics defined on probability measures. ABC methods are computational methods, grounded in Bayesian statistics, that estimate the posterior distribution of a given generative model when the likelihood function is intractable. The idea is to simulate sets of synthetic data from the model for assigned parameters and, rather than comparing summaries of these data with the corresponding observed values as ABC typically requires, to employ a distance between a chosen distribution associated with the synthetic data and one associated with the observed values. Our focus lies on theoretical and methodological aspects (algorithmic implementation, though a substantial topic, is left aside); more precisely, issues regarding mathematical foundations and asymptotic properties are carefully analysed, inspired by an in-depth study of our main bibliographic reference, Bernton et al. (2019). We carry out the following: a rigorous formulation of the set-up of the ABC rejection algorithm, which also recovers a transparent and general convergence result as the ABC threshold goes to zero while the number $n$ of samples from the prior stays fixed; general technical proposals about distances relying on OTT; and weak assumptions leading to lower bounds for small values of the threshold as $n$ goes to infinity, ultimately showing a genuine possibility of lack of concentration, contrary to what is proposed in Bernton et al. (2019). **
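
A minimal sketch of the ABC rejection scheme with an OTT-based discrepancy follows, using the 1D Wasserstein-1 distance between equal-size empirical samples (the mean absolute difference of the sorted samples). The Gaussian model, uniform prior, and threshold are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def w1(a, b):
    """Wasserstein-1 distance between two equal-size 1D empirical samples."""
    return np.abs(np.sort(a) - np.sort(b)).mean()

theta_true, m = 2.0, 200
y_obs = rng.normal(theta_true, 1.0, size=m)   # "observed" data

accepted = []
for _ in range(5000):
    theta = rng.uniform(0.0, 5.0)             # draw a parameter from the prior
    y_syn = rng.normal(theta, 1.0, size=m)    # synthetic data from the model
    if w1(y_syn, y_obs) < 0.25:               # ABC threshold epsilon
        accepted.append(theta)
```

The accepted parameters concentrate around the true value; the convergence and lower-bound questions analysed in the abstract concern precisely how this concentration behaves as the threshold shrinks and the number of prior samples grows.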

** To make deliberate progress towards more intelligent and more human-like artificial systems, we need to follow an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans. **

** We investigate how the final parameters found by stochastic gradient descent are influenced by over-parameterization. We generate families of models by increasing the number of channels in a base network, and then perform a large hyper-parameter search to study how the test error depends on learning rate, batch size, and network width. We find that the optimal SGD hyper-parameters are determined by a "normalized noise scale," which is a function of the batch size, learning rate, and initialization conditions. In the absence of batch normalization, the optimal normalized noise scale is directly proportional to width. Wider networks, with their higher optimal noise scale, also achieve higher test accuracy. These observations hold for MLPs, ConvNets, and ResNets, and for two different parameterization schemes ("Standard" and "NTK"). We observe a similar trend with batch normalization for ResNets. Surprisingly, since the largest stable learning rate is bounded, the largest batch size consistent with the optimal normalized noise scale decreases as the width increases. **
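
The noise-scale bookkeeping behind such sweeps can be sketched with the temperature-style definition $g = \eta N / B$ used in the related SGD-as-stochastic-process literature; note this is an assumed stand-in, since the abstract's "normalized noise scale" additionally depends on initialization conditions, which the toy width division below only mimics.

```python
def noise_scale(lr, dataset_size, batch_size):
    """Temperature-style SGD noise scale g = lr * N / B (small-batch approximation)."""
    return lr * dataset_size / batch_size

def normalized_noise_scale(lr, dataset_size, batch_size, width):
    """Illustrative width-normalized variant (assumed form, not the paper's definition)."""
    return noise_scale(lr, dataset_size, batch_size) / width

# doubling both learning rate and batch size leaves the noise scale unchanged,
# which is why sweeps explore (lr, B) jointly rather than one axis at a time
g1 = noise_scale(0.1, 50_000, 128)
g2 = noise_scale(0.2, 50_000, 256)
```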