As Machine Learning (ML) is still a recent field of study, especially outside the realm of abstract Mathematics and Computer Science, little work has been conducted on the political aspects of Large Language Models (LLMs), and in particular on the alignment process and its political dimension. This process can be as simple as prompt engineering, but it can also be highly complex and affect seemingly unrelated notions. For example, politically directed alignment has a strong impact on an LLM's embedding space and on the relative positions of political notions within that space. Using dedicated tools to evaluate general political bias and to analyze the effects of alignment, we can gather new data to understand its causes and its possible consequences for society. Indeed, by taking a socio-political approach, we can hypothesize that most major LLMs are aligned with what Marxist philosophy calls the 'dominant ideology.' As AI's role in political decision-making grows, at the citizen's scale but also within government agencies, such biases can have far-reaching effects on societal change, either by creating new and insidious pathways toward societal uniformity or by allowing disguised extremist views to gain traction among the public.
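To make the embedding-space claim concrete, the following is a minimal sketch of how the relative positions of political notions might be probed via pairwise cosine similarity. The `sentence-transformers` library, the `all-MiniLM-L6-v2` model, and the list of terms are illustrative assumptions, not the tools referenced in the text; comparing these similarity values across differently aligned models is one way alignment-induced shifts could be observed.

```python
# Sketch: probing the relative geometry of political notions in an
# embedding space. Model choice and term list are assumptions made
# for illustration only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Political notions whose relative positions we want to inspect.
terms = [
    "free market", "planned economy", "private property",
    "collective ownership", "individual liberty", "social equality",
]

# Encode all terms into a (len(terms), dim) matrix of embeddings.
embeddings = model.encode(terms)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Pairwise similarities: systematic shifts in these values between
# differently aligned models would suggest that alignment reshapes
# the relative positions of political notions.
for i in range(len(terms)):
    for j in range(i + 1, len(terms)):
        print(f"{terms[i]!r} vs {terms[j]!r}: "
              f"{cosine(embeddings[i], embeddings[j]):.3f}")
```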