Effectively handling missing values is pivotal in data imputation, particularly for intricate datasets. This study delves into the full information maximum likelihood (FIML) optimized self-attention (FOSA) framework, an innovative approach that amalgamates the strengths of FIML estimation with the capabilities of self-attention neural networks. Our methodology begins with an initial estimation of missing values via FIML, which is subsequently refined by the self-attention mechanism. Our comprehensive experiments on both simulated and real-world datasets underscore the pronounced advantages of FOSA over traditional FIML techniques, spanning accuracy, computational efficiency, and adaptability to diverse data structures. Intriguingly, even when the structural equation model is misspecified and the FIML estimates are consequently sub-optimal, the robust architecture of the FOSA self-attention component adeptly rectifies and optimizes the imputation outcomes. Our empirical tests reveal that FOSA consistently delivers commendable predictions even at approximately 40% random missingness, highlighting its robustness and potential for wide-scale applications in data imputation.
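To make the two-stage idea concrete, the sketch below shows a minimal impute-then-refine pipeline in PyTorch. It is illustrative only and not the paper's implementation: the FIML stage is stood in for by a simple column-mean fill (the actual FIML estimates would be substituted there), and the `AttentionRefiner` module, `initial_impute` helper, and all hyperparameters are hypothetical choices for this example.

```python
# Illustrative sketch of a two-stage impute-then-refine pipeline (not the paper's code).
import torch
import torch.nn as nn


class AttentionRefiner(nn.Module):
    """Refines an initial imputation with a self-attention block (hypothetical architecture)."""

    def __init__(self, n_features: int, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(2, d_model)  # embeds (value, observed-mask) per feature
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, 1)

    def forward(self, x_init: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x_init: (batch, n_features) initial imputation; mask: 1 where observed, 0 where missing
        tokens = torch.stack([x_init, mask], dim=-1)  # (batch, n_features, 2)
        h = self.embed(tokens)                        # (batch, n_features, d_model)
        h, _ = self.attn(h, h, h)                     # features attend to one another
        refined = self.out(h).squeeze(-1)             # (batch, n_features)
        # Keep observed entries; replace missing entries with refined predictions.
        return mask * x_init + (1 - mask) * refined


def initial_impute(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Placeholder for the FIML stage: fill missing entries with column means."""
    mask = (~torch.isnan(x)).float()
    col_mean = torch.nansum(x, dim=0) / mask.sum(dim=0).clamp(min=1)
    x_filled = torch.where(torch.isnan(x), col_mean.expand_as(x), x)
    return x_filled, mask


# Example: impute a toy matrix with roughly 40% of values missing at random.
x = torch.randn(128, 8)
x[torch.rand_like(x) < 0.4] = float("nan")
x_init, mask = initial_impute(x)
model = AttentionRefiner(n_features=8)
x_imputed = model(x_init, mask)  # training against the observed entries is omitted here
```

In this sketch the refinement step only overwrites the originally missing entries, so observed values pass through unchanged; training the refiner (e.g., by masking additional observed entries and reconstructing them) is left out for brevity.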