In recent years, the transition to cloud-based platforms in the IT sector has underscored the importance of cloud incident root cause analysis for ensuring service reliability and maintaining customer trust. Central to this process is the efficient determination of root causes, a task made challenging by the complexity of contemporary cloud infrastructures. Despite the proliferation of AI-driven tools for root cause identification, their applicability remains limited by the inconsistent quality of their outputs. This paper introduces a method for enhancing confidence estimation in root cause analysis tools by prompting retrieval-augmented large language models (LLMs). The approach operates in two phases. First, the model evaluates its confidence based on historical incident data, taking into account its assessment of the evidence strength. Second, the model reviews the root cause generated by the predictor. An optimization step then combines these two evaluations to determine the final confidence assignment. Experimental results show that our method enables the model to articulate its confidence effectively, yielding a better-calibrated score. We address research questions evaluating the ability of our method to produce calibrated confidence scores using LLMs, the impact of domain-specific retrieved examples on confidence estimates, and its potential generalizability across various root cause analysis models. In doing so, we aim to close the confidence estimation gap, aiding on-call engineers in decision-making and improving the efficiency of cloud incident management.
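The sketch below illustrates one way the two-phase confidence estimation described above could be wired together. It is a minimal illustration, not the paper's implementation: the names `retrieve_similar`, `estimate_confidence`, and the generic `llm_score` callable are hypothetical, the word-overlap retriever stands in for a real embedding-based retriever, and the paper's optimization step is reduced to a single fixed weight `w`.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical interface for illustration only; the paper does not prescribe it.

@dataclass
class Incident:
    description: str
    root_cause: str

def retrieve_similar(query: str, history: List[Incident], k: int = 3) -> List[Incident]:
    """Toy retriever: rank historical incidents by word overlap with the query.
    A production system would use an embedding-based retriever instead."""
    def overlap(inc: Incident) -> int:
        return len(set(query.lower().split()) & set(inc.description.lower().split()))
    return sorted(history, key=overlap, reverse=True)[:k]

def estimate_confidence(
    incident: str,
    predicted_root_cause: str,
    history: List[Incident],
    llm_score: Callable[[str], float],  # prompts an LLM and returns a score in [0, 1]
    w: float = 0.5,                     # combination weight; stands in for the optimization step
) -> float:
    examples = retrieve_similar(incident, history)
    context = "\n".join(f"- {e.description} => {e.root_cause}" for e in examples)

    # Phase 1: confidence grounded in retrieved historical incidents,
    # i.e. how strong the evidence is for diagnosing this class of incident.
    p1 = llm_score(
        "Similar past incidents:\n" + context +
        f"\n\nNew incident:\n{incident}\n"
        "On a scale of 0 to 1, how strong is the evidence for diagnosing this incident?"
    )

    # Phase 2: review of the root cause proposed by the upstream predictor.
    p2 = llm_score(
        f"Incident:\n{incident}\n\nProposed root cause:\n{predicted_root_cause}\n"
        "On a scale of 0 to 1, how likely is this root cause to be correct?"
    )

    # Combine the two assessments; in the paper, an optimization step chooses the
    # combination so the final score is well calibrated on held-out incidents.
    return w * p1 + (1.0 - w) * p2
```

In this reading, calibration comes from tuning the combination (here `w`) on historical incidents with known outcomes, so that the returned score tracks the empirical probability that the predicted root cause is correct.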