Synthetic aperture imaging (SAI) achieves the see-through effect by blurring out off-focus foreground occlusions and reconstructing the in-focus occluded targets from multi-view images. However, very dense occlusions and extreme lighting conditions can severely disturb SAI based on conventional frame-based cameras, leading to performance degradation. To address these problems, we propose a novel SAI system based on an event camera, which produces asynchronous events with extremely low latency and high dynamic range. The system can thus eliminate the interference of dense occlusions by measuring from almost continuous viewpoints, while simultaneously handling over-/under-exposure. To reconstruct the occluded targets, we propose a hybrid encoder-decoder network composed of spiking neural networks (SNNs) and convolutional neural networks (CNNs). In the hybrid network, the spatio-temporal information of the collected events is first encoded by SNN layers and then transformed into a visual image of the occluded targets by a style-transfer CNN decoder. Experiments show that the proposed method performs remarkably well under very dense occlusions and extreme lighting conditions, reconstructing high-quality visual images from pure event data.
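To make the hybrid architecture concrete, below is a minimal sketch (not the authors' implementation) of an SNN encoder followed by a CNN decoder in PyTorch. The leaky integrate-and-fire (LIF) layers, the event voxel-grid input format, the layer widths, and the time-averaging step are all illustrative assumptions; a surrogate gradient would be needed to actually train the spiking layers.

```python
# Hypothetical sketch: hybrid SNN encoder + CNN decoder for event-based reconstruction.
import torch
import torch.nn as nn


class LIFConv(nn.Module):
    """Convolution followed by a leaky integrate-and-fire neuron (illustrative only)."""

    def __init__(self, in_ch, out_ch, beta=0.9, threshold=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.beta, self.threshold = beta, threshold

    def forward(self, x_seq):
        # x_seq: (T, B, C, H, W) sequence of event frames / voxel-grid slices
        mem, spikes = None, []
        for x in x_seq:
            cur = self.conv(x)
            mem = cur if mem is None else self.beta * mem + cur    # leaky integration
            spk = (mem >= self.threshold).float()                  # hard-threshold firing (forward pass only)
            mem = mem - spk * self.threshold                       # soft reset after a spike
            spikes.append(spk)
        return torch.stack(spikes)                                  # (T, B, C', H, W)


class HybridSAINet(nn.Module):
    """SNN layers encode spatio-temporal events; a CNN decoder outputs an intensity image."""

    def __init__(self, in_ch=2, base=32):
        super().__init__()
        self.snn1 = LIFConv(in_ch, base)
        self.snn2 = LIFConv(base, base * 2)
        self.decoder = nn.Sequential(
            nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, 1, 3, padding=1), nn.Sigmoid(),        # grayscale reconstruction
        )

    def forward(self, x_seq):
        spikes = self.snn2(self.snn1(x_seq))   # spatio-temporal spike features
        feat = spikes.mean(dim=0)              # accumulate over time -> (B, C, H, W)
        return self.decoder(feat)


if __name__ == "__main__":
    events = torch.rand(8, 1, 2, 64, 64)       # T=8 steps, 2 polarity channels, 64x64 sensor
    img = HybridSAINet()(events)
    print(img.shape)                           # torch.Size([1, 1, 64, 64])
```

The sketch only illustrates the data flow described in the abstract (events in, spikes through the encoder, image out); the style-transfer design of the actual CNN decoder is not reproduced here.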