We investigate the descriptive complexity of a class of neural networks with unrestricted topologies and piecewise polynomial activation functions. We consider the general scenario where the networks run for an unlimited number of rounds and floating-point numbers are used to simulate reals. We characterize these neural networks with a recursive rule-based logic for Boolean networks. In particular, we show that the sizes of the neural networks and the corresponding Boolean rule formulae are polynomially related. In fact, in the direction from Boolean rules to neural networks, the blow-up is only linear. Our translations incur a time delay, which is the number of rounds that the translation of an object needs to simulate a single round of the original object. In the translation from neural networks to Boolean rules, the time delay of the resulting formula is polylogarithmic in the size of the neural network. In the converse translation, the time delay of the neural network is linear in the formula size. As a corollary, by restricting our logic, we obtain a similar characterization for classical feedforward neural networks. We also obtain translations between the rule-based logic for Boolean networks, the diamond-free fragment of modal substitution calculus, and a class of recursive Boolean circuits in which the numbers of input and output gates match. Ultimately, our translations offer a method of translating a given neural network into an equivalent neural network with different activation functions, including linear activation functions.
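To make the network model concrete, the following is a minimal, hypothetical sketch (not the paper's formal definition) of the kind of object the abstract describes: a network with an unrestricted, possibly cyclic topology, a piecewise polynomial activation function, and a floating-point state updated synchronously over many rounds. All names, sizes, and parameters here are illustrative assumptions.

```python
import numpy as np

def piecewise_poly(x):
    # A piecewise polynomial activation with three pieces:
    # 0 below 0, x^2 on [0, 1], and 1 above 1.
    return np.where(x < 0.0, 0.0, np.where(x > 1.0, 1.0, x * x))

rng = np.random.default_rng(0)
n = 8                                   # number of nodes (illustrative)
W = rng.normal(scale=0.3, size=(n, n))  # arbitrary weighted topology, cycles allowed
b = rng.normal(scale=0.1, size=n)       # biases
state = rng.random(n)                   # initial floating-point state

# The model runs for an unlimited number of rounds; we truncate for the demo.
for _ in range(100):
    state = piecewise_poly(W @ state + b)

print(state)
```

The clamped square is one example of the piecewise polynomial activation class named above; the square weight matrix reflects the unrestricted topology, in contrast to the layered, acyclic structure of a classical feedforward network.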