Automated Machine Learning (AutoML) is used more than ever before to support users in determining efficient hyperparameters, neural architectures, or even full machine learning pipelines. However, users tend to mistrust the optimization process and its results due to a lack of transparency, so manual tuning remains widespread. We introduce DeepCAVE, an interactive framework to analyze and monitor state-of-the-art optimization procedures for AutoML easily and ad hoc. By aiming for full and accessible transparency, DeepCAVE builds a bridge between users and AutoML and contributes to establishing trust. Our framework's modular and easy-to-extend nature provides users with automatically generated text, tables, and graphic visualizations. We show the value of DeepCAVE in an exemplary use case of outlier detection, in which our framework makes it easy to identify problems, compare multiple runs, and interpret the optimization process. The package is freely available on GitHub: https://github.com/automl/DeepCAVE.