When language is used as a medium to store and communicate sensory information, a kind of radical virtual reality arises, namely: "realities that are reduced to the same sentence are virtually equivalent." In the current era, in which artificial intelligence performs this linguistic mediation of sensory information, it is imperative to re-examine the issues this potential VR raises, particularly with regard to bias and (dis)communication. Semantic See-through Goggles are an experimental framework for glasses through which the view is fully verbalized and then re-depicted before reaching the wearer. Participants wear goggles equipped with a camera and a head-mounted display (HMD). In real time, the image captured by the camera is converted by an AI into a single line of text, which is then transformed back into an image and presented to the wearer's eyes. Users thus perceive and interact with the real physical world through this redrawn view. We constructed a prototype of the goggles, examined its fundamental characteristics, and then conducted a qualitative analysis of the wearer's experience. This project investigates a methodology for subjectively capturing the situation in which AI serves as a proxy for our perception of the world. At the same time, it attempts to appropriate some of the energy of today's debate over artificial intelligence for a classical inquiry into the claim that "intelligence can only see the world under meaning."
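The abstract describes a simple perceptual loop: each camera frame is reduced to one sentence, and that sentence is redrawn as an image for the HMD. The following is a minimal sketch of that loop, assuming off-the-shelf captioning and text-to-image models; BLIP and Stable Diffusion are illustrative stand-ins, since the abstract does not name the models the prototype actually uses.

```python
# Hypothetical sketch of the caption-then-redraw loop described in the
# abstract. Model choices and webcam display are assumptions, not the
# authors' documented pipeline.
import cv2
import numpy as np
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from diffusers import StableDiffusionPipeline

# Image-to-text: reduce each camera frame to a single line of text.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

# Text-to-image: re-depict the sentence as a new view for the display.
painter = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if not ok:
        break
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

    # Verbalize the view: only this one sentence survives the frame.
    inputs = processor(image, return_tensors="pt")
    caption = processor.decode(
        captioner.generate(**inputs)[0], skip_special_tokens=True
    )

    # Redraw the sentence: detail absent from the words is lost or invented.
    redrawn = painter(caption).images[0]
    cv2.imshow("semantic see-through", cv2.cvtColor(np.array(redrawn), cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

camera.release()
cv2.destroyAllWindows()
```

Whatever the sentence fails to capture is dropped or hallucinated by the generator, which is precisely the "virtual equivalence" of same-sentence realities that the project sets out to probe.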