Development of large-area, high-speed electronic skins is a grand challenge for robotics, prosthetics, and human-machine interfaces, but is fundamentally limited by wiring complexity and data bottlenecks. Here, we introduce Single-Pixel Tactile Skin (SPTS), a paradigm that uses compressive sampling to reconstruct rich tactile information from an entire sensor array via a single output channel. This is achieved through a direct circuit-level implementation in which each sensing element, equipped with a miniature microcontroller, contributes a dynamically weighted analog signal to a global sum, performing distributed compressed sensing in hardware. Our flexible, daisy-chainable design simplifies wiring to a few input lines and one output, and significantly reduces measurement requirements compared to raster-scanning methods. We demonstrate the system's performance by achieving object classification at an effective 3500 FPS and by capturing transient dynamics, resolving an 8 ms projectile impact into 23 frames. A key feature is support for adaptive reconstruction, where sensing fidelity scales with measurement time. This allows rapid contact localization using as little as 7% of the total data, followed by progressive refinement to a high-fidelity image, a capability critical for responsive robotic systems. This work offers an efficient pathway towards large-scale tactile intelligence for robotics and human-machine interfaces.
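The measure-then-reconstruct loop the abstract describes can be sketched in simulation. Everything below is an illustrative assumption, not the paper's actual parameters or solver: an 8x8 taxel patch, pseudo-random ±1 per-element weights standing in for the microcontroller-set gains, and orthogonal matching pursuit as one common sparse-recovery algorithm.

```python
import numpy as np

# Illustrative sizes (assumptions): N taxels, M single-channel
# weighted-sum measurements (M << N), K simultaneously pressed taxels.
rng = np.random.default_rng(0)
N, M, K = 64, 24, 3

# Hypothetical sparse contact pattern on the array.
x = np.zeros(N)
pressed = rng.choice(N, size=K, replace=False)
x[pressed] = rng.uniform(1.0, 2.0, size=K)

# One measurement = every element scales its analog signal by a
# pseudo-random +/-1 weight and all outputs sum onto one shared line.
Phi = rng.choice([-1.0, 1.0], size=(M, N))
y = Phi @ x  # M values read sequentially from the single output channel

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy sparse reconstruction."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the taxel whose weight pattern best explains the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the currently selected taxels.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, K)
print("compression ratio:", M / N)
print("nonzeros in estimate:", np.count_nonzero(x_hat))
```

Feeding the solver more rows of `Phi` (i.e. more summed measurements) improves fidelity, which mirrors the adaptive coarse-to-fine reconstruction the abstract describes: a few measurements localize contact, and further measurements refine the image.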