Over the past few years, a significant body of research has studied the ReLU activation function, with the aim of proving neural network convergence through over-parameterization. More recently, developments around Large Language Models (LLMs) have sparked interest in exponential activation functions, which appear in the attention mechanism. Mathematically, we define the neural function $F: \mathbb{R}^{d \times m} \times \mathbb{R}^d \rightarrow \mathbb{R}$ using an exponential activation function. We are given a set of labeled data points $\{(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)\} \subset \mathbb{R}^d \times \mathbb{R}$, where $n$ denotes the number of data points. The function $F(W(t),x)$ can be expressed as $F(W(t),x) := \sum_{r=1}^m a_r \exp(\langle w_r(t), x \rangle)$, where $m$ is the number of neurons and $w_r(t)$ are the weights at time $t$. As is standard in the literature, the weights $a_r$ are fixed and never updated during training. We initialize the weights $W(0) \in \mathbb{R}^{d \times m}$ with random Gaussians, i.e., $w_r(0) \sim \mathcal{N}(0, I_d)$, and draw each $a_r$ from a random sign distribution for $r \in [m]$. Using gradient descent, we can find weights $W(T)$ such that $\| F(W(T), X) - y \|_2 \leq \epsilon$ holds with probability $1-\delta$, where $\epsilon \in (0,0.1)$ and $m = \Omega(n^{2+o(1)}\log(n/\delta))$. To improve the over-parameterization bound $m$, we employ several tight analysis techniques from previous work [Song and Yang, arXiv 2019; Munteanu, Omlor, Song and Woodruff, ICML 2022].
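To make the setup concrete, below is a minimal NumPy sketch of the network $F(W,x) = \sum_{r=1}^m a_r \exp(\langle w_r, x \rangle)$, the random initialization ($w_r(0) \sim \mathcal{N}(0, I_d)$, $a_r$ random signs), and full-batch gradient descent on the squared loss $\frac{1}{2}\|F(W,X)-y\|_2^2$. The dimensions, step size, iteration count, and data are illustrative placeholders, not the parameters used in the analysis.

```python
import numpy as np

# Sketch of the exponential-activation network and gradient descent training.
# Dimensions, step size, and data below are illustrative placeholders.

d, m, n = 8, 1024, 32          # input dimension, number of neurons, number of data points
rng = np.random.default_rng(0)

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs (an illustrative assumption)
y = rng.standard_normal(n)

W = rng.standard_normal((d, m))                 # columns are w_r(0) ~ N(0, I_d)
a = rng.choice([-1.0, 1.0], size=m)             # a_r drawn from random signs, kept fixed

def F(W, X):
    """F(W, x_i) = sum_r a_r * exp(<w_r, x_i>), evaluated for every row x_i of X."""
    return np.exp(X @ W) @ a                    # shape (n,)

eta = 1e-4                                      # illustrative step size
for t in range(1000):
    E = np.exp(X @ W)                           # E[i, r] = exp(<w_r, x_i>)
    residual = E @ a - y                        # F(W, X) - y
    # dL/dw_r = sum_i (F(W, x_i) - y_i) * a_r * exp(<w_r, x_i>) * x_i
    grad_W = X.T @ (residual[:, None] * E * a[None, :])
    W -= eta * grad_W                           # only W is trained; the a_r stay fixed

print("training error:", np.linalg.norm(F(W, X) - y))
```

The sketch only illustrates the objects named in the statement; the convergence guarantee with $m = \Omega(n^{2+o(1)}\log(n/\delta))$ comes from the over-parameterized analysis, not from running this code.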