Generative AI (GAI) offers numerous opportunities for research and innovation, but its commercialization has raised concerns about transparency, reproducibility, and safety. Most open GAI models lack the components needed for full understanding, auditing, and reproducibility, and some use restrictive licenses while claiming to be "open source". To address these concerns, we introduce the Model Openness Framework (MOF), a ranked classification system that rates machine learning models on their completeness and openness, following principles of open science, together with the Model Openness Tool (MOT), a reference implementation for evaluating ML models against the principles outlined by the MOF. The MOF requires specific components of the model development lifecycle to be included and released under appropriate open licenses. This framework aims to prevent misrepresentation of models that claim to be open, to guide researchers and developers in providing all model components under permissive licenses, and to help individuals and organizations identify models that can be safely adopted. By promoting transparency and reproducibility, the MOF combats open-washing and establishes completeness and openness as core tenets of responsible AI research and development. Widespread adoption of the MOF will foster a more open AI ecosystem, benefiting research, innovation, and the adoption of state-of-the-art models.