Today's large-scale algorithms have become immensely influential, as they recommend and moderate the content that billions of humans are exposed to on a daily basis. They are the de facto regulators of our societies' information diet, from shaping opinions on public health to organizing groups for social movements. This creates serious concerns, but also great opportunities to promote quality information. Addressing the concerns and seizing the opportunities is a challenging, enormous and fabulous endeavor, as intuitively appealing ideas often come with unwanted \emph{side effects}, and as it requires us to think about what we deeply prefer. Understanding how today's large-scale algorithms are built is critical to determining which interventions will be most effective. Given that these algorithms rely heavily on \emph{machine learning}, we make the following key observation: \emph{any algorithm trained on uncontrolled data must not be trusted}. Indeed, a malicious entity could take control of the data, poison it with dangerously manipulative fabricated inputs, and thereby make the trained algorithm extremely unsafe. We thus argue that the first step towards safe and ethical large-scale algorithms must be the collection of a large, secure and trustworthy dataset of reliable human judgments. To achieve this, we introduce \emph{Tournesol}, an open-source platform available at \url{https://tournesol.app}. Tournesol aims to collect a large database of human judgments on what algorithms ought to widely recommend (and what they ought to stop widely recommending). We outline the structure of the Tournesol database, the key features of the Tournesol platform, and the main hurdles that must be overcome to make it a successful project. Most importantly, we argue that, if successful, Tournesol may serve as the essential foundation for any safe and ethical large-scale algorithm.