The judiciary has historically been conservative in its use of Artificial Intelligence, but recent advances in machine learning have prompted scholars to reconsider such use in tasks like sentence prediction. This paper experimentally investigates the potential use of explainable artificial intelligence for predicting imprisonment sentences in assault cases in New Zealand's courts. We propose a proof-of-concept explainable model and verify in practice that it is fit for purpose, with predicted sentences accurate to within one year. We further analyse the model to identify the phrases most influential in predicting sentence length. We conclude the paper with an evaluative discussion of the future benefits and risks of different ways of using such an AI model in New Zealand's courts.