Meeting Link
Meeting Organization
Invited Experts
Talk title: Dynamics in Video Snapshot Compressive Imaging (Explore the Time Issues in Snapshot Compressive Imaging)
Abstract: Video snapshot compressive imaging (SCI) is an emerging computational imaging technique that applies coded compression during exposure to compress multiple video frames into a single measurement, enabling a 2D sensor to acquire 3D data. However, recovering the dynamics of the original scene from the compressed measurement is highly challenging. From the perspective of temporal modeling, this talk captures video dynamics in monocular and binocular snapshot compressive imaging systems to achieve fast, high-quality video reconstruction. It further examines the practical challenges that reconstruction algorithms face in large-scale scenes and presents solutions from different angles, including invertible networks and meta-learning.
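For readers less familiar with snapshot compressive imaging, here is a minimal sketch of the standard SCI forward model the abstract refers to: within one exposure, each of T frames is modulated by its own coding mask and the modulated frames are summed into a single coded measurement. This is an illustrative sketch, not the speaker's actual system; all array names and sizes are assumptions.

```python
import numpy as np

def sci_forward(frames, masks):
    """Snapshot compressive imaging forward model (illustrative).

    frames: (T, H, W) video frames captured within one exposure
    masks:  (T, H, W) coding masks, one per frame
    Returns a single (H, W) coded measurement y = sum_t masks[t] * frames[t].
    """
    assert frames.shape == masks.shape
    return np.sum(masks * frames, axis=0)

# Toy example: compress 8 random frames into one snapshot measurement.
T, H, W = 8, 64, 64
frames = np.random.rand(T, H, W)
masks = (np.random.rand(T, H, W) > 0.5).astype(np.float64)
measurement = sci_forward(frames, masks)   # a 2D sensor output encoding 3D data
print(measurement.shape)                   # (64, 64)
```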
Abstract: Terahertz (THz) computational imaging has recently attracted significant attention thanks to its non-invasive, non-destructive, non-ionizing, material-classifying, and ultra-fast nature for 3D object exploration and inspection. However, its strong water absorption and low noise tolerance lead to undesired blurs and distortions in reconstructed THz images. The performance of existing methods is highly constrained by the diffraction-limited THz signals. In this talk, we will introduce the characteristics of THz imaging and its applications. We will also show how to break the limitations of THz imaging with the aid of complementary information between the THz amplitude and phase images sampled at prominent frequencies (i.e., the water absorption profile of the THz signal) for THz image restoration. To this end, we propose a novel physics-guided deep neural network model, namely the Subspace-Attention-guided Restoration Network (SARNet), that fuses such multi-spectral features of THz images for effective restoration. Furthermore, we experimentally construct an ultra-fast THz time-domain spectroscopy system covering a broad frequency range from 0.1 THz to 4 THz to build up a temporal/spectral/spatial/phase/material THz database of hidden 3D objects.
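As a rough illustration of the multi-spectral input described above, the sketch below simply stacks amplitude and phase images sampled at several frequencies into one multi-channel tensor, the form in which such data would typically be fed to a restoration network. The frequency values and names are assumptions for illustration, not the actual SARNet pipeline.

```python
import numpy as np

# Assumed "prominent" frequencies (in THz) near water-absorption lines; illustrative only.
freqs_thz = [0.38, 0.45, 0.56, 0.75, 0.99, 1.10]

def stack_thz_channels(amplitude, phase):
    """Stack per-frequency amplitude and phase images into one multi-channel input.

    amplitude, phase: dicts mapping frequency (THz) -> (H, W) image.
    Returns a (2*F, H, W) array suitable as input to a restoration network.
    """
    chans = []
    for f in freqs_thz:
        chans.append(amplitude[f])
        chans.append(phase[f])
    return np.stack(chans, axis=0)

# Toy usage with random images standing in for measured THz data.
H, W = 128, 128
amp = {f: np.random.rand(H, W) for f in freqs_thz}
phs = {f: np.random.rand(H, W) for f in freqs_thz}
x = stack_thz_channels(amp, phs)
print(x.shape)  # (12, 128, 128)
```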
Talk title: Complementary Enhancement Mechanisms between Neuromorphic Events and Conventional Images
Abstract: Neuromorphic event-based cameras are bio-inspired vision sensors whose pixels work independently of each other and respond asynchronously to brightness changes with microsecond resolution. These advantages make it possible to tackle challenging scenarios in robotics, such as high-speed and high-dynamic-range scenes. The main challenge in robot perception and navigation with these sensors is to design new algorithms that process the unfamiliar stream of intensity changes ("events") and can unlock the sensor's potential. This talk will provide a brief introduction to neuromorphic event-based cameras, including their working principles and application scenarios. I will also share some results of my recent research on event-based perception and navigation, including event-based 3D perception, dynamic scene understanding, and SLAM.
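To make the "stream of events" concrete, here is a minimal sketch of the idealized event-generation model commonly used for such cameras: a pixel fires an event whenever its log intensity drifts from the level at its last event by more than a contrast threshold, and the event's polarity encodes the sign of the change. The threshold value and function names are illustrative assumptions, not a specific sensor's parameters.

```python
import numpy as np

def generate_events(log_frames, timestamps, contrast_threshold=0.2):
    """Simulate events from a sequence of log-intensity frames (illustrative).

    log_frames: (T, H, W) log-intensity images
    timestamps: (T,) acquisition times in seconds
    Returns a list of events (t, x, y, polarity); a pixel fires when its log
    intensity differs from its last-event reference by >= the contrast threshold.
    """
    ref = log_frames[0].copy()            # per-pixel reference level at last event
    events = []
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - ref
        fired = np.abs(diff) >= contrast_threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            events.append((timestamps[t], x, y, int(np.sign(diff[y, x]))))
        ref[fired] = log_frames[t][fired]  # reset reference where events fired
    return events
```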
Talk title: Making Rolling Shutter Distortion Correction Easier
Abstract: Consumer digital cameras are usually equipped with a rolling shutter, which offers advantages in cost, speed, and noise level. However, it leads to noticeable geometric distortion when the camera and the scene undergo fast relative motion. Various algorithms have been proposed to correct rolling shutter distortion, yet they are computationally complex and learned models often generalize poorly. This talk will cover two novel strategies that make rolling shutter distortion correction easier: (1) using the global reset feature to convert the distortion correction task into a deblurring task; and (2) using a dual rolling shutter setup for simultaneous rolling shutter correction and frame interpolation.
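The distortion discussed above stems from the fact that a rolling shutter exposes image rows sequentially rather than all at once. Below is a minimal sketch of how rolling shutter readout is commonly simulated from a stack of densely sampled frames; the line delay and frame interval are assumed values for illustration, not settings from the talk.

```python
import numpy as np

def simulate_rolling_shutter(frames, line_delay, frame_interval):
    """Synthesize one rolling shutter image from densely sampled global-shutter frames.

    frames: (T, H, W) frames sampled every `frame_interval` seconds
    line_delay: readout delay between consecutive rows, in seconds
    Row r of the output is taken from the frame closest in time to r * line_delay.
    """
    T, H, W = frames.shape
    rs_image = np.empty((H, W), dtype=frames.dtype)
    for r in range(H):
        t = r * line_delay                               # capture time of row r
        idx = min(int(round(t / frame_interval)), T - 1)  # nearest available frame
        rs_image[r] = frames[idx, r]
    return rs_image

# Toy usage: 240 virtual frames at 10 kHz, 480 rows read out over ~10 ms.
frames = np.random.rand(240, 480, 640)
rs = simulate_rolling_shutter(frames, line_delay=2e-5, frame_interval=1e-4)
print(rs.shape)  # (480, 640)
```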
Speaker bio: Dr. Yinqiang Zheng received his doctoral degree in engineering from the Department of Mechanical and Control Engineering, Tokyo Institute of Technology, Tokyo, Japan, in 2013. After working at the National Institute of Informatics, Japan, for more than seven years, he is currently an associate professor at the Next Generation Artificial Intelligence Research Center, The University of Tokyo, Japan, where he leads the Optical Sensing and Camera System Laboratory (OSCARS Lab). He has served as a program committee member for major international conferences such as CVPR, ICCV, ECCV, ICLR, NeurIPS, AAAI, IJCAI, and MICCAI, as an area chair for MVA2017, DICTA2018, ISAIR2019, ACCV2020, 3DV2020, ICCV2021, and MM2022, as a workshop chair for ACCV2022, and as a workshop co-organizer for ICPR2018 and ICCV2019. His research interests include image processing, optical imaging, and mathematical optimization.
Agenda
13:55-14:00 | Sign-in
14:00-14:15 | Welcome remarks by university leadership; introduction to the key laboratory
14:15-14:20 | Online group photo
14:20-15:00 | Keynote: Prof. 陈渤
15:00-15:40 | Keynote: Researcher 施柏鑫
15:40-16:20 | Keynote: Prof. 林嘉文
16:20-17:00 | Keynote: Prof. 周易
17:00-17:40 | Keynote: Prof. 郑银强 (Yinqiang Zheng)
17:40-18:00 | Closing remarks
Contact Information
Organizer contact: 戴老师, 18710752665
daiyuchao@nwpu.edu.cn
Host contact: 黄老师, 010-82544754
info@csig.org.cn
Source: CSIG Machine Vision Committee (CSIG机器视觉专委会)