VALSE Webinar Format Update:
Since January 2019, the VALSE Webinar has changed its format from a single speaker per session to two possible formats:
1) Webinar panel session: each session has a discussion theme. Two outstanding speakers related to the theme first give talks (30 minutes each), after which 2–3 additional invited guests join a discussion of the theme (30 minutes).
2) Webinar invited talk: each session invites one senior expert to give a systematic, in-depth presentation of his or her research in a familiar area. The talk lasts 50 minutes, followed by 10 minutes of host-speaker interaction and 10 minutes of open Q&A.
Time: 10:00 a.m., Wednesday, June 26, 2019 (Beijing time)
Theme: When Deep Learning Meets Optimization Methods: Different Paths, Same Destination?
Host: Risheng Liu (Dalian University of Technology)
Speaker: Wotao Yin (University of California, Los Angeles)
报告题目:Plug-and-Play: Use Trained Networks in Optimization with Provable Convergence
Speaker: Tuo Zhao (Georgia Tech)
报告题目:Towards Principled Methodologies and Efficient Algorithms for Minimax Optimization in Machine Learning
Panel topics:
How should deep networks and optimization methods be combined in a principled way?
What benefits can optimization methods bring to the theoretical analysis of deep learning, and what impact might they have?
How does embedding deep networks into optimization methods help the interpretability of networks?
Which vision problems is the combination of the two currently suited to solve? Is this combination meaningful for industry?
What is the biggest challenge in analyzing deep learning with optimization methods?
In different deep learning tasks (e.g., classification, segmentation, detection), how should one choose or design optimization methods so that deep models achieve better results? Are there any principles or guidelines?
Panel guests:
Wotao Yin (University of California, Los Angeles), Tuo Zhao (Georgia Tech), Zhouchen Lin (Peking University), Mingkui Tan (South China University of Technology)
*You are welcome to post theme-related questions in the comments below; the host and panelists will select some of the most popular ones and add them to the panel discussion!
Speaker: Wotao Yin (University of California, Los Angeles)
Time: 10:00 a.m., Wednesday, June 26, 2019 (Beijing time)
报告题目:Plug-and-Play: Use Trained Networks in Optimization with Provable Convergence
Speaker bio:
Dr. Wotao Yin received his Ph.D. degree in operations research from Columbia University, New York, NY, USA, in 2006. He is currently a Professor with the Department of Mathematics, University of California, Los Angeles, and also a Principal Engineer with Alibaba US. His research interests include computational optimization and its applications in signal processing, machine learning, and other data science problems. He invented fast algorithms for sparse optimization and large-scale distributed optimization problems. From 2006 to 2013, he was at Rice University. He received the NSF CAREER Award in 2008, the Alfred P. Sloan Research Fellowship in 2009, and the Morningside Gold Medal in 2016, and has coauthored five papers that received best-paper-type awards. He is among the top 1% of cited cross-discipline researchers according to Clarivate Analytics.
Homepage:
http://www.math.ucla.edu/~wotaoyin/
Abstract:
Plug-and-play (PnP) is a non-convex framework that integrates denoising priors, such as BM3D or deep learning-based denoisers, into ADMM and other proximal algorithms. An advantage of PnP is that one can use pre-trained networks when there is not sufficient data for end-to-end training. Although PnP has exhibited great empirical results, theoretical analysis addressing even the most basic question of convergence has been insufficient. We establish convergence of PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain Lipschitz condition on the pre-trained network. We propose "real spectral normalization" to train networks that satisfy the proposed Lipschitz condition. Finally, we present experimental results that validate the theory.
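The PnP-FBS iteration in the abstract replaces the proximal (prior) step of forward-backward splitting with a denoiser D. Below is a minimal sketch in Python; it is our own illustration, not code from the paper: soft thresholding stands in for a trained denoising network, and a simple quadratic data term stands in for a real inverse problem. All names (`pnp_fbs`, `grad_f`, `soft`) are ours.

```python
import numpy as np

def pnp_fbs(grad_f, denoiser, x0, alpha=0.5, iters=100):
    """Plug-and-play forward-backward splitting:
    x_{k+1} = D(x_k - alpha * grad_f(x_k)),
    with the proximal operator of the prior replaced by a denoiser D."""
    x = x0
    for _ in range(iters):
        x = denoiser(x - alpha * grad_f(x))
    return x

# Toy denoising problem: f(x) = 0.5 * ||x - y||^2 for a sparse signal y.
y = np.zeros(10)
y[0] = 2.0
grad_f = lambda x: x - y

# Soft thresholding stands in for a trained (Lipschitz) denoising network.
soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - 0.1, 0.0)

x_hat = pnp_fbs(grad_f, soft, np.zeros(10))
```

Here the gradient step is a 0.5-contraction and soft thresholding is nonexpansive, so the iteration converges geometrically to a fixed point; the result in the talk establishes analogous guarantees when the denoiser is a network trained to satisfy the proposed Lipschitz condition.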
References:
[1] E. K. Ryu, J. Liu, S. Wang, et al. "Plug-and-Play Methods Provably Converge with Properly Trained Denoisers." arXiv preprint arXiv:1905.05406, 2019.
Speaker: Tuo Zhao (Georgia Tech)
Time: 10:30 a.m., Wednesday, June 26, 2019 (Beijing time)
报告题目:Towards Principled Methodologies and Efficient Algorithms for Minimax Optimization in Machine Learning
Speaker bio:
Tuo Zhao is an assistant professor in the School of Industrial and Systems Engineering and the School of Computational Science and Engineering at Georgia Tech. He received his Ph.D. degree in computer science from Johns Hopkins University. His research focuses on developing principled methodologies and nonconvex optimization algorithms for machine learning (especially deep learning), as well as open-source software development for scientific computing.
Homepage:
https://www2.isye.gatech.edu/~tzhao80/
Abstract:
Minimax optimization naturally arises in various applications, including Generative Adversarial Networks (GANs), Adversarial Robust Training (ART), and Imitation Learning (IL). Despite significant empirical progress in these applications, the development of their foundation, minimax optimization, has fallen behind. Because of complex deep neural networks, the minimax optimization problems in these applications lack convex-concave structure, so existing algorithms and theory from the convex optimization literature do not apply. We lack principled methodologies and efficient algorithms that can fully harness the potential power of minimax optimization in machine learning.
This talk introduces several recent advances in minimax optimization for machine learning: (1) First, we investigate the importance of normalization techniques in training GANs. Specifically, we show that adopting proper normalization can improve the optimization landscape, stabilize training, and improve generalization performance. (2) Second, we investigate the possibility of learning-to-optimize techniques in ART. Specifically, we show that properly parameterizing the algorithm for solving the minimax optimization as a neural network can improve robust training and computational efficiency. (3) Last, we investigate the convergence properties of the alternating stochastic gradient (ASG) algorithm for solving IL/RL. Specifically, we show that for IL equipped with kernel mean embedding, ASG guarantees a sublinear rate of convergence to a stationary solution, shedding light on the computational performance of ASG in practice.
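The alternating scheme discussed in part (3) can be illustrated on a toy saddle-point problem. The sketch below is our own deterministic illustration, not the ASG algorithm from the talk; a stochastic variant would replace the exact gradients with minibatch estimates.

```python
def alt_gda(grad_x, grad_y, x0, y0, eta=0.1, iters=500):
    """Alternating gradient descent-ascent for min_x max_y f(x, y):
    the min player descends, then the max player ascends using the
    freshly updated x (Gauss-Seidel style, as in alternating schemes)."""
    x, y = x0, y0
    for _ in range(iters):
        x = x - eta * grad_x(x, y)  # descent step for the min player
        y = y + eta * grad_y(x, y)  # ascent step for the max player
    return x, y

# Strongly-convex-strongly-concave toy: f(x, y) = 0.5*x**2 + x*y - 0.5*y**2,
# whose unique saddle point is (0, 0).
grad_x = lambda x, y: x + y  # partial derivative of f in x
grad_y = lambda x, y: x - y  # partial derivative of f in y

x_star, y_star = alt_gda(grad_x, grad_y, 1.0, 1.0)
```

In this convex-concave setting the iterates contract to the saddle point; for bilinear or nonconvex-nonconcave problems (as in GANs) the same dynamics can cycle or diverge, which is exactly why the convergence analysis discussed in the talk is nontrivial.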
References:
[1] H. Jiang, Z. Chen, M. Chen, et al. "On Computation and Generalization of Generative Adversarial Networks under Spectrum Control." International Conference on Learning Representations (ICLR), 2019.
[2] H. Jiang, Z. Chen, Y. Shi, et al. "Learning to Defense by Learning to Attack." arXiv preprint arXiv:1811.01213, 2018.
Panel guest: Zhouchen Lin (Peking University)
Guest bio:
Zhouchen Lin is currently a Professor with the Key Laboratory of Machine Perception, School of Electronics Engineering and Computer Science, Peking University. His research interests include computer vision, image processing, machine learning, pattern recognition, and numerical optimization. He has served as an area chair of ACCV 2009/2018, CVPR 2014/2016/2019, ICCV 2015, NIPS 2015/2018/2019, and AAAI 2019/2020, and as a senior program committee member of AAAI 2016/2017/2018 and IJCAI 2016/2018/2019. He is an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and the International Journal of Computer Vision. He is a Fellow of the IAPR and the IEEE.
Homepage:
http://www.cis.pku.edu.cn/faculty/vision/zlin/zlin.htm
Panel guest: Mingkui Tan (South China University of Technology)
Guest bio:
Mingkui Tan is a professor and Ph.D. supervisor at South China University of Technology. He received a bachelor's degree in environmental science and engineering in 2006 and a master's degree in control science and engineering in 2009, both from Hunan University, and a Ph.D. in computer science from Nanyang Technological University, Singapore, in 2014. He then worked as a senior research fellow in computer vision at the School of Computer Science, University of Adelaide, Australia. In 2018 he was selected for the Guangdong "Pearl River Talent Team" program. Since returning to China full time in September 2016, he has led several major projects, including a National Natural Science Foundation of China Young Scientists project and a Guangdong key R&D project on new-generation artificial intelligence. His research focuses on machine learning and deep learning, with particular expertise in the structural optimization and theoretical analysis of deep neural networks. In recent years, his work as first or corresponding author has appeared in top AI conferences such as NIPS, ICML, ACML, AAAI, CVPR, and IJCAI, and in leading journals such as IEEE TNNLS, IEEE TIP, IEEE TSP, IEEE TKDE, and JMLR.
Homepage:
https://tanmingkui.github.io/
Host: Risheng Liu (Dalian University of Technology)
Host bio:
Risheng Liu received his Ph.D. in computational mathematics from Dalian University of Technology and was a postdoctoral fellow in computational science at The Hong Kong Polytechnic University. He is an associate professor and Ph.D. supervisor at the International School of Information Science and Engineering, Dalian University of Technology, where he is head of the Department of Digital Media Technology and deputy director of the Institute of Geometric Computing and Intelligent Media Technology. His research interests include machine learning, computer vision, and optimization methods. He has published over 100 papers in leading journals and conferences in these areas, including more than 30 in top (CCF-A) venues such as TPAMI, TIP, NIPS, CVPR, ACM MM, IJCAI, and AAAI. He has been selected for the Liaoning "Xingliao Talent Plan" for top young talent, the Hong Kong Scholars Program, the Dalian Young Science and Technology Star program, and the DUT Xinghai Outstanding Young Scholar program.
Homepage:
https://dutmedia.org
How to join VALSE online talk 19-14:
Press and hold or scan the QR code below to follow the "VALSE" WeChat official account (valse_wechat), then reply "14期" to the account to receive the live-stream link.
Special thanks to the main organizers of this Webinar:
Organizing AC: Risheng Liu (Dalian University of Technology)
Co-organizing ACs: Mingkui Tan (South China University of Technology), Lijun Zhang (Nanjing University), Jian Zhang (Peking University Shenzhen Research Institute)
Responsible AC: Hu Han (Institute of Computing Technology, Chinese Academy of Sciences)
How to participate:
1. VALSE Webinar activities are held on an online live-streaming platform. During the activity, the speaker uploads slides or shares the screen; the audience can see the slides, hear the speaker's voice, and interact with the speaker through the chat function.
2. To participate, follow the VALSE WeChat official account (valse_wechat) or join a VALSE QQ group (groups A through I are currently full; except for speakers and other guests, you may only apply to join VALSE group J, group number: 734872379).
*Note: when applying to join a VALSE QQ group, you must provide your name, affiliation, and identity; all three are required. After joining, please set your real name in the format name-identity-affiliation. Identity codes: faculty or staff at a university or research institute, T; industry R&D, I; Ph.D. student, D; master's student, M.
3. About five minutes before the activity starts, the speaker will open the live stream; click the stream link to join. Windows PCs, Macs, phones, and other devices are supported.
4. During the activity, please refrain from off-topic messages so as not to disrupt the session.
5. If you cannot hear the audio or see the video during the activity, try leaving and rejoining; this usually resolves the problem.
6. A fast network connection is strongly recommended; prefer a wired connection.
7. The VALSE WeChat official account publishes the announcement and stream link for the following week's Webinar every Thursday.
8. The slides of a Webinar talk (with the speaker's permission) are posted as [slides] at the bottom of the corresponding announcement on the VALSE website.
9. The video of a Webinar talk (with the speaker's permission) is posted to the VALSE channel on iQIYI; follow "Valse Webinar" on iQIYI to watch.