We explored the mathematical foundations of Recurrent Neural Networks ($\mathtt{RNN}$s) and three fundamental procedures: temporal rescaling, discretisation, and linearisation. These techniques provide essential tools for characterising the behaviour of $\mathtt{RNN}$s, enabling insights into temporal dynamics, practical computational implementation, and linear approximations for analysis. We discuss the flexibility in the order in which these procedures can be applied, emphasising their significance in modelling and analysing $\mathtt{RNN}$s for neuroscience and machine-learning applications. We explicitly describe the conditions under which these procedures are interchangeable.
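As a concrete illustration, the three procedures can be sketched on a standard continuous-time $\mathtt{RNN}$ (the specific model form below is a common convention assumed here for exposition, not necessarily the form used in this work),
\begin{equation*}
\tau \, \dot{\mathbf{x}}(t) = -\mathbf{x}(t) + W\,\phi(\mathbf{x}(t)) + \mathbf{b},
\end{equation*}
where $\mathbf{x}$ is the state, $W$ the recurrent weights, $\phi$ a pointwise nonlinearity, and $\tau$ the time constant. Temporal rescaling with $s = t/\tau$ absorbs the time constant, giving $\mathrm{d}\mathbf{x}/\mathrm{d}s = -\mathbf{x} + W\,\phi(\mathbf{x}) + \mathbf{b}$. Forward-Euler discretisation with step $\Delta t$ yields the familiar update
\begin{equation*}
\mathbf{x}_{n+1} = \mathbf{x}_{n} + \frac{\Delta t}{\tau}\left(-\mathbf{x}_{n} + W\,\phi(\mathbf{x}_{n}) + \mathbf{b}\right),
\end{equation*}
and linearisation about a fixed point $\mathbf{x}^{\ast}$ replaces the dynamics by $\dot{\boldsymbol{\delta}} \approx J\,\boldsymbol{\delta}$ with Jacobian $J = \frac{1}{\tau}\left(-I + W\,\mathrm{diag}\!\left(\phi'(\mathbf{x}^{\ast})\right)\right)$.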