The rule of thumb that a sample size of $n \geq 30$ is sufficient to ensure valid inference in regression analysis is frequently cited but rarely scrutinized. This research note evaluates the lower bound on the number of observations required for regression analysis by exploring how distributional characteristics, such as skewness and kurtosis, influence the convergence of t-values to the t-distribution in linear regression models. Through an extensive simulation study involving over 22 billion regression models, this paper examines a range of symmetric, platykurtic, and skewed distributions, testing sample sizes from 4 to 10,000. The results show that it is sufficient for either the dependent or the independent variable to follow a symmetric distribution for the t-values to converge at much smaller sample sizes than $n = 30$, unless the other variable is extremely skewed. This contradicts previous guidance, which holds that the error term must be normally distributed for this convergence to occur at low $n$. When both variables are highly skewed, however, much larger sample sizes are required. These findings suggest that the $n \geq 30$ rule is overly conservative in some cases and insufficient in others, and the paper offers revised guidelines for determining minimum sample sizes.
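The simulation design described above can be illustrated with a minimal sketch; the specific distributions (lognormal regressor, uniform errors), sample size, and simulation count below are illustrative assumptions, not the paper's actual configuration. The idea is to fit many regressions under a true null slope and compare the empirical slope t-statistics to the reference $t_{n-2}$ distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_t_values(n, n_sims=10_000):
    """Fit y = b0 + b1*x + e on n_sims simulated datasets of size n
    and return the slope t-statistics under the true null b1 = 0."""
    t_vals = np.empty(n_sims)
    for i in range(n_sims):
        x = rng.lognormal(size=n)       # skewed regressor (assumed choice)
        e = rng.uniform(-1, 1, size=n)  # symmetric, non-normal errors (assumed choice)
        y = 0.0 * x + e                 # true slope is zero
        res = stats.linregress(x, y)
        t_vals[i] = res.slope / res.stderr  # t = estimate / standard error
    return t_vals

# Compare the empirical t-values against the t-distribution with n - 2
# degrees of freedom, e.g., via a Kolmogorov-Smirnov test at a small n.
n = 10
t_vals = simulate_t_values(n)
ks = stats.kstest(t_vals, stats.t(df=n - 2).cdf)
print(f"n={n}: KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.4f}")
```

Repeating this check across sample sizes and distribution pairs, and recording how quickly the KS distance shrinks, is one way to operationalize the "convergence of t-values" the abstract refers to.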