Clinical artificial intelligence (AI) methods have been proposed for predicting social behaviors that could reasonably be understood from patient-reported data. This raises ethical concerns about respect, privacy, and patients' awareness of and control over how their health data are used. Ethical concerns surrounding clinical AI systems for social behavior verification were divided into three main categories: (1) the retrospective use of patient data without informed consent for the specific task of verification, (2) the potential for inaccuracies or biases within such systems, and (3) the impact of introducing automated AI fact-checking on trust in patient-provider relationships. Additionally, this report demonstrated the simulated misuse of a verification system and identified a potential large language model (LLM) bias against patient-reported information in favor of multimodal data, published literature, and the outputs of other AI methods (i.e., AI self-trust). Finally, recommendations were presented for mitigating the risk that AI verification systems will harm patients or undermine the purpose of the healthcare system.