A central notion in U.S. copyright law is judging the substantial similarity between an original and an (allegedly) derived work. Capturing this notion has proven elusive, and the many approaches offered by case law and legal scholarship are often ill-defined, contradictory, or internally inconsistent. This work suggests that key parts of the substantial-similarity puzzle are amenable to modeling inspired by theoretical computer science. Our proposed framework quantitatively evaluates how much "novelty" is needed to produce the derived work with access to the original work, versus reproducing it without access to the copyrighted elements of the original. "Novelty" is captured by a computational notion of description length, in the spirit of Kolmogorov-Levin complexity, which is robust to mechanical transformations and to the availability of contextual information. The result is an actionable framework that courts could use as an aid in deciding substantial similarity. We evaluate it on several pivotal cases in copyright law and observe that the results are consistent with the rulings and philosophically aligned with the abstraction-filtration-comparison test of Altai.
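To make the comparison concrete, the following is a minimal Python sketch of the underlying idea, using off-the-shelf compression as the usual crude stand-in for description length (the standard approximation behind the normalized compression distance). This is only an illustration under that assumption: the paper's actual notion is a Kolmogorov-Levin-style complexity, and the function names below are hypothetical, not the paper's construction.

```python
import lzma


def description_length(data: bytes) -> int:
    # Compressed size as a crude, standard proxy for the
    # Kolmogorov-style description length of `data`.
    return len(lzma.compress(data, preset=9))


def conditional_length(target: bytes, context: bytes) -> int:
    # Approximate K(target | context) as C(context + target) - C(context),
    # the compression-based approximation familiar from the
    # normalized compression distance literature.
    return description_length(context + target) - description_length(context)


def novelty_gap(derived: bytes, original: bytes) -> int:
    # "Novelty" needed to produce the derived work from scratch, minus the
    # novelty needed with access to the original. A large gap suggests that
    # access to the original does most of the explanatory work, i.e.,
    # heavy borrowing; a small gap suggests independent creation.
    return description_length(derived) - conditional_length(derived, original)


if __name__ == "__main__":
    original = b"Call me Ishmael. Some years ago, never mind how long precisely..."
    derived = b"Call me Ishmael. Some years ago, never mind exactly how long..."
    print(novelty_gap(derived, original))
```

In the full framework, per the abstract, the comparison baseline is reproduction without access to the *copyrighted elements* of the original, so uncopyrightable material (ideas, scenes a faire, public-domain context) would first be filtered out of the conditioning context, in the spirit of Altai's filtration step.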