As artificial intelligence (AI) systems become increasingly embedded in ethically sensitive domains such as education, healthcare, and transportation, the need to balance accuracy and interpretability in decision-making has become a central concern. Coarse Ethics (CE) is a theoretical framework that justifies coarse-grained evaluations, such as letter grades or warning labels, as ethically appropriate under cognitive and contextual constraints. However, CE has lacked mathematical formalization. This paper introduces Coarse Set Theory (CST), a novel mathematical framework that models coarse-grained decision-making using totally ordered structures and coarse partitions. CST defines hierarchical relations among sets and uses information-theoretic tools, such as the Kullback-Leibler divergence, to quantify the trade-off between simplification and information loss. We demonstrate CST through applications in educational grading and explainable AI (XAI), showing how it enables more transparent and context-sensitive evaluations. By grounding coarse evaluations in set theory and probabilistic reasoning, CST contributes to the ethical design of interpretable AI systems. This work bridges formal methods and human-centered ethics, offering a principled approach to balancing comprehensibility, fairness, and informational integrity in AI-driven decisions.
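To make the abstract's central quantity concrete, the following is a minimal sketch of how a Kullback-Leibler divergence could measure the information lost when a fine-grained score distribution is coarsened into letter grades. The score distribution, the three-grade partition, and the uniform-within-bin reconstruction are all illustrative assumptions, not the paper's actual definitions.

```python
import math

# Hypothetical fine-grained score distribution over scores 0..9 (sums to 1).
p = dict(zip(range(10),
             [0.02, 0.03, 0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.07, 0.03]))

# Hypothetical coarse partition of the score range into three grades.
partition = {"C": range(0, 4), "B": range(4, 7), "A": range(7, 10)}

# Reconstruct a coarse distribution q by spreading each grade's total
# probability mass uniformly over the scores it covers (one simple choice
# of reconstruction; others are possible).
q = {}
for grade, scores in partition.items():
    mass = sum(p[s] for s in scores)
    for s in scores:
        q[s] = mass / len(scores)

# D_KL(p || q): how much information the coarse view loses about p.
kl = sum(p[s] * math.log(p[s] / q[s]) for s in p if p[s] > 0)
print(f"information loss (nats): {kl:.4f}")
```

A finer partition would drive `kl` toward zero, while collapsing everything into a single grade would maximize it, which is the simplification-versus-information-loss trade-off the abstract refers to.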