With the rapid advancement of AI, software engineering increasingly relies on AI-driven approaches, particularly language models (LMs), to enhance code performance. However, the trustworthiness and reliability of LMs remain significant challenges due to their potential for hallucinations: unreliable or incorrect responses. To address this challenge, this research aims to develop reliable, LM-powered methods for code optimization that effectively integrate human feedback. This work aligns with the broader objectives of advancing cooperative and human-centric aspects of software engineering, contributing to the development of trustworthy AI-driven solutions.