Active learning remains significant in industry because it is data efficient. Not only is it cost effective under a constrained budget, but continuous refinement of the model also allows failure scenarios to be detected and resolved early in the model development stage. Identifying and fixing these failures is crucial, since industrial applications demand that the underlying model perform accurately in all foreseeable use cases. A popular state-of-the-art technique that specializes in continuously refining the model via failure identification is Learning Loss. Although simple and elegant, this approach is empirically motivated. Our paper develops a foundation for Learning Loss, which enables us to propose a novel modification we call LearningLoss++. We show that gradients are crucial in interpreting how Learning Loss works, and provide a rigorous analysis and comparison of the gradients of Learning Loss and LearningLoss++. We also propose a convolutional architecture that combines features at different scales to predict the loss. Following Learning Loss, we validate LearningLoss++ on the regression task of human pose estimation (using the MPII and LSP datasets). We show that LearningLoss++ outperforms Learning Loss in identifying scenarios where the model is likely to perform poorly, which on model refinement translates into reliable performance in the open world.
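To make the idea of predicting the loss from multi-scale features concrete, the sketch below shows one plausible loss-prediction head: it projects feature maps taken at several backbone scales, pools them, and regresses a scalar loss per sample. This is a minimal illustration, not the authors' implementation; the module name `MultiScaleLossPredictor`, the channel widths, and the layer layout are assumptions.

```python
# Minimal sketch (assumed design, not the paper's code): fuse backbone
# features at different scales to predict the target network's loss.
import torch
import torch.nn as nn

class MultiScaleLossPredictor(nn.Module):
    """Predicts a scalar loss value from a list of multi-scale feature maps."""
    def __init__(self, in_channels=(64, 128, 256), hidden=128):
        super().__init__()
        # 1x1 convolutions project each scale to a common channel width.
        self.proj = nn.ModuleList(
            nn.Conv2d(c, hidden, kernel_size=1) for c in in_channels
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling per scale
        self.head = nn.Sequential(
            nn.Linear(hidden * len(in_channels), hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),             # scalar predicted loss
        )

    def forward(self, feats):
        # feats: list of feature maps [B, C_i, H_i, W_i] from the backbone.
        pooled = [self.pool(p(f)).flatten(1) for p, f in zip(self.proj, feats)]
        return self.head(torch.cat(pooled, dim=1)).squeeze(1)

# Usage sketch with three hypothetical feature maps for a batch of 4 images.
if __name__ == "__main__":
    feats = [torch.randn(4, 64, 64, 64),
             torch.randn(4, 128, 32, 32),
             torch.randn(4, 256, 16, 16)]
    pred_loss = MultiScaleLossPredictor()(feats)
    print(pred_loss.shape)  # torch.Size([4])
```

In an active-learning loop, such predicted losses would be used to rank unlabeled samples, with the highest predicted losses flagged as likely failure scenarios for annotation and model refinement.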