Artificial Intelligence (AI) has achieved transformative success across a wide range of domains, revolutionizing fields such as healthcare, education, and human-computer interaction. However, the mechanisms driving AI's performance often remain opaque, particularly in the context of large language models (LLMs), which have advanced at an unprecedented pace in recent years. Multi-modal large language models (MLLMs) like GPT-4o exemplify this evolution, integrating text, audio, and visual inputs to enable interaction across diverse domains. Despite their remarkable capabilities, these models remain largely "black boxes," offering limited insight into how they process multi-modal information internally. This opacity poses significant risks, including systematic biases, flawed associations, and unintended behaviors, all of which require careful investigation. Understanding the decision-making processes of MLLMs is therefore essential for mitigating these risks and ensuring their reliable deployment in critical applications.

GPT-4o was chosen as the focus of this study for its advanced multi-modal capabilities, which allow simultaneous processing of textual and visual information. These capabilities make it an ideal model for investigating the parallels and distinctions between machine-driven and human-driven visual perception. While GPT-4o performs effectively on tasks involving structured and complete data, its reliance on bottom-up processing, a feature-by-feature analysis of sensory inputs, presents challenges when interpreting complex or ambiguous stimuli. This limitation contrasts with human vision, which is remarkably adept at resolving ambiguity and reconstructing incomplete information through high-level cognitive processes.