Responsible Artificial Intelligence (AI) - the practice of developing, evaluating, and maintaining accurate AI systems that also exhibit essential properties such as robustness and explainability - represents a multifaceted challenge that often stretches standard machine learning tooling, frameworks, and testing methods beyond their limits. In this paper, we present two new software libraries - hydra-zen and the rAI-toolbox - that address critical needs for responsible AI engineering. hydra-zen dramatically simplifies the process of making complex AI applications configurable and their behaviors reproducible. The rAI-toolbox is designed to enable methods for evaluating and enhancing the robustness of AI models in a way that is scalable and that composes naturally with other popular ML frameworks. We describe the design principles and methodologies that make these tools effective, including the use of property-based testing to bolster the reliability of the tools themselves. Finally, we demonstrate the composability and flexibility of the tools by showing how various use cases from adversarial robustness and explainable AI can be concisely implemented with familiar APIs.
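The property-based testing mentioned above can be illustrated with the widely used `hypothesis` library: rather than checking hand-picked inputs, a test asserts a property that must hold for all generated inputs. The `clamp` function below is a hypothetical example, not code from either library.

```python
# Sketch of property-based testing with the hypothesis library
# (assumes hypothesis is installed; `clamp` is a hypothetical utility).
from hypothesis import given, strategies as st


def clamp(x: float, lo: float, hi: float) -> float:
    # hypothetical function under test: restrict x to [lo, hi]
    return max(lo, min(hi, x))


# hypothesis generates many (x, lo, hi) triples and checks the
# property for each one, shrinking any failure to a minimal case
@given(
    st.floats(allow_nan=False),
    st.floats(allow_nan=False),
    st.floats(allow_nan=False),
)
def test_clamp_stays_within_bounds(x, lo, hi):
    lo, hi = min(lo, hi), max(lo, hi)  # ensure a valid interval
    assert lo <= clamp(x, lo, hi) <= hi


test_clamp_stays_within_bounds()
```

Tests of this shape exercise far more of an input space than example-based tests, which is why they are well suited to hardening numerical tooling.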