Notes on the revised VALSE Webinar format:
Since January 2019, VALSE Webinar has moved from its previous single-speaker format to one of two possible formats:
1) Webinar panel session: each event has one discussion topic. Two outstanding speakers are first invited to give talks on the topic (30 minutes each), after which 2-3 additional guests join them to discuss the topic (30 minutes).
2) Webinar invited talk: each event features one senior expert giving a systematic, in-depth presentation of research in his or her area of expertise: a 50-minute talk, 10 minutes of host-speaker interaction, and 10 minutes of open Q&A.
Time: 20:00 (Beijing time), Wednesday, February 27, 2019
Topic: Weakly Supervised Image Understanding
Host: Xinggang Wang (Huazhong University of Science and Technology)
Speaker: Qixiang Ye (University of Chinese Academy of Sciences)
Title: Weakly Supervised Detection, Localization and Instance Segmentation
Speaker: Ming-Ming Cheng (Nankai University)
Title: Weakly Supervised Semantic Segmentation
Panel topics:
Does weakly supervised image understanding have, now or in the near future, any killer applications?
Can the performance of weakly supervised object detection, semantic segmentation, and related algorithms eventually approach that of fully supervised algorithms?
What are the main future technical directions for weakly supervised image understanding?
What role can GAN techniques play in weakly supervised image understanding?
Current weakly supervised object detection and semantic segmentation methods are mostly image-based; what opportunities and challenges does the temporal information in video bring to weakly supervised visual understanding?
How should we view image understanding under mixed supervision, e.g., some data with weak labels, a small number of fully labeled samples, or even samples with missing labels?
Panelists:
Wangmeng Zuo (Harbin Institute of Technology), Yu-Feng Li (Nanjing University)
*You are welcome to post topic-related questions in the comments below; the host and panelists will select some of the most popular ones and add them to the panel topics!
Speaker: Qixiang Ye (University of Chinese Academy of Sciences)
Time: 20:00 (Beijing time), Wednesday, February 27, 2019
Title: Weakly Supervised Detection, Localization and Instance Segmentation
Speaker bio:
Qixiang Ye received his B.S. and M.S. degrees from Harbin Institute of Technology in 1999 and 2001, respectively, and his Ph.D. from the Institute of Computing Technology, Chinese Academy of Sciences, in 2006. He joined the faculty of the University of Chinese Academy of Sciences in 2006, where he has served successively as lecturer, associate professor, and professor. He was a visiting assistant professor at the University of Maryland Institute for Advanced Computer Studies (UMIACS) from 2013 to 2014, and a visiting scholar at the Information Initiative at Duke (IID), Duke University, in 2016. His research interests are machine learning and visual object perception. In long-term research on object detection, he has proposed wavelet-domain features, multi-scale and invariant deep features, piecewise-linear SVMs, and weakly supervised visual modeling methods. He developed a reliable ethylene-yield prediction technique that has been deployed at Sinopec, and a high-precision multi-focus image fusion technique used in Beijing Gaochen Langri's high-definition digital microscope systems. He has published over 100 papers, including 25 in major journals and conferences such as IEEE CVPR, ICCV, ECCV, and PAMI. He received the Lu Jiaxi Young Talent Award of the Chinese Academy of Sciences and the Natural Science Award of the Chinese Institute of Electronics. He is an IEEE Senior Member and serves on the editorial board of the SCI-indexed Springer journal The Visual Computer.
Homepage:
http://people.ucas.ac.cn/~qxye
Abstract:
Weakly supervised object detection is a challenging task: the model is provided only with image-level category supervision, yet must learn object locations and object detectors at the same time. The inconsistency between the weak supervision and the learning objectives introduces significant randomness into object locations and ambiguity into detectors. In this work, a min-entropy latent model (MELM) is proposed for weakly supervised object detection. Min-entropy serves both as a model to learn object locations and as a metric to measure the randomness of object localization during learning. It aims to reduce the variance of learned instances and alleviate the ambiguity of detectors in a principled way. MELM is decomposed into three components: proposal clique partition, object clique discovery, and object localization. MELM is optimized with a recurrent learning algorithm, which leverages continuation optimization to solve the challenging non-convexity problem. Experiments demonstrate that MELM significantly improves the performance of weakly supervised object detection, weakly supervised object localization, and image classification over state-of-the-art approaches.
Besides weakly supervised object detection, I will also introduce two of our representative approaches to weakly supervised localization and weakly supervised instance segmentation. These approaches involve not only simple yet effective network architectures, but also insightful ideas for attacking the non-convex optimization problems from various perspectives.
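To make the min-entropy idea concrete, here is a toy sketch of entropy as a measure of localization randomness. This is not the authors' implementation; the function name and the softmax parameterization are illustrative assumptions, showing only why low entropy over region-proposal scores corresponds to a confident, low-variance localization.

```python
import numpy as np

def localization_entropy(scores):
    """Entropy of the softmax distribution over region-proposal scores.

    Low entropy: belief is concentrated on few proposals (confident
    localization). High entropy: the object location is ambiguous.
    """
    p = np.exp(scores - scores.max())  # numerically stable softmax
    p /= p.sum()
    # Shannon entropy in nats; clip to avoid log(0)
    return float(-(p * np.log(np.clip(p, 1e-12, None))).sum())

ambiguous = np.zeros(8)                           # uniform belief over 8 proposals
confident = np.array([8.0, 0, 0, 0, 0, 0, 0, 0])  # one proposal dominates
print(localization_entropy(ambiguous))   # ≈ log(8) ≈ 2.079
print(localization_entropy(confident))   # close to 0
```

A training objective that minimizes this quantity (jointly with the classification loss, as MELM does over proposal cliques) pushes the network to commit to specific object locations rather than spreading probability mass over many candidate regions.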
References:
[1] F. Wan, P. Wei, Z. Han, J. Jiao, Q. Ye, "Min-entropy Latent Model for Weakly Supervised Object Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), DOI:10.1109/TPAMI.2019.2898858.
[2] P. Tang, X. Wang, S. Bai, W. Shen, X. Bai, W. Liu, and A. L. Yuille, "PCL: Proposal Cluster Learning for Weakly Supervised Object Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2018.
[3] Y. Zhou, Y. Zhu, Q. Ye, Q. Qiu, J. Jiao, "Weakly Supervised Instance Segmentation using Class Peak Response," in Proc. of IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2018 (Spotlight).
[4] F. Wan, P. Wei, Z. Han, J. Jiao, Q. Ye, "Min-entropy Latent Model for Weakly Supervised Object Detection," in Proc. of IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 1297-1306.
[5] Y. Zhu, Y. Zhou, Q. Ye, Q. Qiu, and J. Jiao, "Soft Proposal Network for Weakly Supervised Object Localization," in Proc. of IEEE Int. Conf. on Computer Vision (ICCV), 2017.
[6] H. Bilen and A. Vedaldi, "Weakly Supervised Deep Detection Networks," in Proc. of IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2846-2854.
[7] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, "Learning Deep Features for Discriminative Localization," in Proc. of IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2921-2929.
Speaker: Ming-Ming Cheng (Nankai University)
Time: 20:30 (Beijing time), Wednesday, February 27, 2019
Title: Weakly Supervised Semantic Segmentation
Speaker bio:
Ming-Ming Cheng, born in 1985, received his Ph.D. from Tsinghua University in 2012 and then conducted computer vision research in Oxford, UK. He returned to China as faculty in 2014 and has been a professor at Nankai University since 2016. He is a young top-notch talent of the national "Ten Thousand Talents Program" and among the first recipients of the Tianjin Distinguished Young Scholars Fund. His main research interests include computer graphics, computer vision, and image processing. He has published over 30 papers in CCF-A international conferences and journals such as IEEE PAMI and ACM TOG, with more than 10,000 citations by others overall and more than 2,000 for a single paper. His research has been applied in flagship products of Huawei, Tencent, and other companies; in particular, his salient object detection technique was showcased as a highlight feature of Huawei flagship phones such as the Mate 10 at their launch events. His work has been covered by authoritative international media including the BBC, Der Spiegel, and The Huffington Post.
Homepage:
http://mmcheng.net
Abstract:
Semantic segmentation, which aims at recognizing every pixel in an image, has received significant research attention due to its wide range of applications. However, training state-of-the-art semantic segmentation models usually requires a large collection of images with pixel-level annotations, which are very expensive and time-consuming to obtain. In this talk, we will discuss a set of weakly supervised semantic segmentation methods that require only keyword-level supervision. These methods use a rich set of low-level vision cues, such as edges, salient objects, and attention, to bridge high-level keywords and low-level pixels without human guidance, largely reducing manual labeling effort.
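As a minimal sketch of how a low-level cue can bridge an image-level keyword and pixel labels, the toy function below turns a saliency map plus a known image-level class into per-pixel pseudo-labels. This is an illustrative assumption, not the specific method of any cited paper; the function name, thresholds, and the use of 255 as an "ignore" label are my own choices (the ignore value follows the common PASCAL VOC convention).

```python
import numpy as np

def pseudo_labels(saliency, image_label, fg_thresh=0.5, bg_thresh=0.1):
    """Combine a saliency map with an image-level class label to form
    per-pixel pseudo-labels for training a segmentation network.

    saliency    : HxW array in [0, 1] from an off-the-shelf salient
                  object detector (a low-level cue, no pixel labels).
    image_label : integer class id known for the whole image.
    Returns an HxW int array: image_label where saliency is high,
    0 (background) where it is low, 255 (ignore) elsewhere.
    """
    labels = np.full(saliency.shape, 255, dtype=np.int32)  # ignore by default
    labels[saliency >= fg_thresh] = image_label            # confident foreground
    labels[saliency <= bg_thresh] = 0                      # confident background
    return labels

sal = np.array([[0.9, 0.3],
                [0.05, 0.7]])
print(pseudo_labels(sal, image_label=15))
# [[ 15 255]
#  [  0  15]]
```

A segmentation network trained on such pseudo-labels (ignoring the uncertain pixels) never sees human pixel annotations, which is the sense in which these methods reduce labeling cost to keyword-level supervision.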
References:
[1] Associating Inter-Image Salient Instances for Weakly Supervised Semantic Segmentation, Ruochen Fan, Qibin Hou, Ming-Ming Cheng, Gang Yu, Ralph R. Martin, Shi-Min Hu, ECCV, 2018.
[2] Self-Erasing Network for Integral Object Attention, Qibin Hou, Peng-Tao Jiang, Yunchao Wei, Ming-Ming Cheng, NIPS, 2018.
[3] Object Region Mining with Adversarial Erasing: A Simple Classification to Semantic Segmentation Approach, Yunchao Wei, Jiashi Feng, Xiaodan Liang, Ming-Ming Cheng, Yao Zhao, Shuicheng Yan, IEEE CVPR (Oral), 2017.
[4] STC: A Simple to Complex Framework for Weakly-supervised Semantic Segmentation, Yunchao Wei, Xiaodan Liang, Yunpeng Chen, Xiaohui Shen, Ming-Ming Cheng, Yao Zhao, Shuicheng Yan, IEEE TPAMI, 39(11):2314-2320, 2017.
[5] Richer Convolutional Features for Edge Detection, Yun Liu, Ming-Ming Cheng, Xiaowei Hu, Jia-Wang Bian, Le Zhang, Xiang Bai, Jinhui Tang, IEEE TPAMI, 2019.
[6] Deeply supervised salient object detection with short connections, Qibin Hou, Ming-Ming Cheng, Xiaowei Hu, Ali Borji, Zhuowen Tu, Philip Torr, IEEE TPAMI, 2019.
Panelist: Wangmeng Zuo (Harbin Institute of Technology)
Bio:
Wangmeng Zuo is a professor and doctoral advisor in the School of Computer Science at Harbin Institute of Technology. His research covers image enhancement and restoration, image editing and generation, object detection and tracking, and image and video classification. He has published over 80 papers at top conferences such as CVPR, ICCV, and ECCV, and in journals such as T-PAMI, IJCV, and other IEEE Transactions.
Homepage:
http://homepage.hit.edu.cn/wangmengzuo
Panelist: Yu-Feng Li (Nanjing University)
Bio:
Yu-Feng Li, Ph.D., is an associate professor in the Department of Computer Science and Technology at Nanjing University. His research centers on machine learning. He has published over 30 papers in major journals and conferences in the field, including JMLR, TPAMI, ICML, NIPS, IJCAI, and AAAI, and has served as a senior program committee member for conferences including IJCAI 2019/2017/2015 and AAAI 2019.
Homepage:
http://lamda.nju.edu.cn/liyf/
How to join VALSE online talk 19-04:
Long-press or scan the QR code below to follow the "VALSE" WeChat official account (valse_wechat), then reply "04期" in the chat to receive the livestream link.
Special thanks to the main organizers of this Webinar:
Organizing AC: Xinggang Wang (Huazhong University of Science and Technology)
Co-organizing ACs: Wangmeng Zuo (Harbin Institute of Technology), Deyu Meng (Xi'an Jiaotong University), Chenqiang Gao (Chongqing University of Posts and Telecommunications)
Responsible AC: Xinggang Wang (Huazhong University of Science and Technology)
How to participate:
1. VALSE Webinar runs on an online livestreaming platform. During the event the speaker uploads slides or shares the screen; the audience can see the slides, hear the speaker's voice, and interact with the speaker via the chat function.
2. To participate, follow the VALSE WeChat official account valse_wechat, or join a VALSE QQ group (groups A, B, C, D, E, F, and G are currently full; except for speakers and other guests, please apply to join VALSE group I, group number 480601274).
*Note: when applying to join a VALSE QQ group, you must provide your name, affiliation, and role; all three are required. After joining, please set your display name to your real name, role, and affiliation. Roles: university or research institution staff, T; industry R&D, I; Ph.D. student, D; master's student, M.
3. About 5 minutes before the event, the speaker opens the livestream; click the link to join. Windows PCs, Macs, phones, and other devices are supported.
4. During the event, please avoid off-topic chat so as not to disrupt the session.
5. If you cannot hear the audio or see the video during the event, leaving and rejoining usually solves the problem.
6. Please join from a fast network, preferably over a wired connection.
7. The VALSE WeChat account posts the announcement and livestream link for the next week's Webinar every Thursday.
8. With the speaker's permission, the slides of each Webinar talk will be posted as [slides] at the bottom of the corresponding announcement on the VALSE website.
9. With the speaker's permission, the video of each Webinar talk will be uploaded to the VALSE iQiyi channel; follow Valse Webinar on iQiyi to watch.