Despite existing work in machine learning inference serving, ease-of-use and cost efficiency remain key challenges. Developers must manually match the performance, accuracy, and cost constraints of their applications to decisions about selecting the right model and model optimizations, suitable hardware architectures, and auto-scaling configurations. These interacting decisions are difficult for users to make, especially when application load varies, applications evolve, and the available resources change over time. As a result, users often end up overprovisioning resources. This paper introduces INFaaS, a model-less inference-as-a-service system that relieves users of making these decisions. INFaaS provides a simple interface that allows users to specify their inference task along with its performance and accuracy requirements. To implement this interface, INFaaS generates and leverages model-variants: versions of a model that differ in resource footprint, latency, cost, and accuracy. Based on the characteristics of the model-variants, INFaaS automatically navigates the decision space on behalf of users to meet user-specified objectives: (a) it selects a model, hardware architecture, and any compiler optimizations, and (b) it makes scaling and resource allocation decisions. By sharing models across users and hardware resources across models, INFaaS achieves up to 150x cost savings, 1.5x higher throughput, and violates latency objectives 1.5x less frequently, compared to Clipper and TensorFlow Serving.
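To make the abstract's central decision concrete, the following is a minimal, self-contained sketch of the variant-selection step that INFaaS automates: given profiled model-variants, pick the cheapest one that satisfies the user's latency and accuracy objectives. All names, the function select_variant, and the example variant profiles are hypothetical illustrations written for this sketch; they are not INFaaS's actual API or measured data.

```python
# Hypothetical sketch of model-variant selection under user-stated objectives.
# Names and numbers are illustrative assumptions, not from the INFaaS paper.

from dataclasses import dataclass

@dataclass(frozen=True)
class ModelVariant:
    name: str               # model + hardware + compiler optimization
    latency_ms: float       # profiled inference latency
    accuracy: float         # profiled accuracy (e.g., top-1 on a reference set)
    cost_per_query: float   # dollar cost per query on its target hardware

def select_variant(variants, max_latency_ms, min_accuracy):
    """Return the cheapest variant meeting both objectives; fail if none does."""
    feasible = [v for v in variants
                if v.latency_ms <= max_latency_ms and v.accuracy >= min_accuracy]
    if not feasible:
        raise ValueError("no model-variant satisfies the stated objectives")
    return min(feasible, key=lambda v: v.cost_per_query)

# The user states only the task objectives; the system picks the variant.
variants = [
    ModelVariant("resnet50-cpu",     98.0, 0.76, 0.8e-6),
    ModelVariant("resnet50-gpu-trt",  6.0, 0.76, 4.0e-6),
    ModelVariant("mobilenet-cpu",    21.0, 0.71, 0.3e-6),
]
best = select_variant(variants, max_latency_ms=25.0, min_accuracy=0.70)
print(best.name)  # -> mobilenet-cpu: cheapest variant within 25 ms at >= 0.70 accuracy
```

The full system must additionally handle load-dependent scaling and resource sharing, so this greedy per-query choice is only the first of the two decision axes (a) and (b) named above.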