Lecture Registration | ICML Special Session

September 15, 2021 · THU数据派

Source: AI TIME论道

This article is about 2,145 words and takes roughly 3 minutes to read.

It introduces a live-streamed ICML (International Conference on Machine Learning) sharing session given by several PhD students on September 16 from 15:00 to 21:00. Scan the QR code to watch.

September 16, 15:00-21:00

AI TIME has invited several PhD students to present its fourth ICML session (ICML-4)!

Bilibili live channel

Scan the QR code to follow the official AI TIME Bilibili account

Watch the live stream

Link: https://live.bilibili.com/21813994


15:00-17:00

★ Speaker Introductions ★

朱鑫祺

Third-year PhD student at the University of Sydney, supervised by Prof. Dacheng Tao and Dr. Chang Xu, working on disentangled representation learning and computer vision.


Talk title:

Commutative Lie Group VAE for Disentanglement Learning

Abstract:

We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data. Traditionally, such a structure is fixed to be a vector space with data variations represented by translations along individual latent dimensions. We argue this simple structure is suboptimal since it requires the model to learn to discard the properties (e.g., different scales of change, different levels of abstractness) of data variations, which is extra work beyond equivariance learning. Instead, we propose to encode the data variations with groups, a structure that can not only equivariantly represent variations but also be adaptively optimized to preserve the properties of data variations. Since it is hard to train directly on group structures, we focus on Lie groups and adopt a parameterization using the Lie algebra. Based on this parameterization, some disentanglement learning constraints are naturally derived. A simple model named Commutative Lie Group VAE is introduced to realize group-based disentanglement learning. Experiments show that our model can effectively learn disentangled representations without supervision and can achieve state-of-the-art performance without extra constraints.
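
To make the Lie-algebra parameterization above a bit more concrete, here is a minimal PyTorch-style sketch (an illustration, not the authors' implementation): each latent coordinate scales a learned generator matrix, the matrix exponential maps the resulting algebra element to a group element, and a commutator penalty encourages the learned group to be commutative. The class name, shapes, and initialization scale are illustrative assumptions.

import torch
import torch.nn as nn

class CommutativeLieGroupLatent(nn.Module):
    # Toy sketch: map latent coordinates to a Lie group element.
    def __init__(self, n_factors, dim):
        super().__init__()
        # One learned dim x dim Lie-algebra generator per latent factor.
        self.basis = nn.Parameter(0.01 * torch.randn(n_factors, dim, dim))

    def forward(self, t):
        # t: (batch, n_factors) latent coordinates.
        # Build the algebra element sum_i t_i * A_i, then exponentiate.
        algebra = torch.einsum("bf,fij->bij", t, self.basis)
        return torch.matrix_exp(algebra)  # (batch, dim, dim) group elements

    def commutator_penalty(self):
        # Penalize [A_i, A_j] = A_i A_j - A_j A_i so the group is (nearly) commutative.
        A = self.basis
        comm = torch.einsum("iab,jbc->ijac", A, A) - torch.einsum("jab,ibc->ijac", A, A)
        return comm.pow(2).sum()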


陈晓晖

First-year PhD student at Tufts University, studying generative modeling and graph learning under the supervision of Prof. Liping Liu and Prof. Michael Hughes.


Talk title:

Modeling Node Generation Order in Autoregressive Graph Generative Models

Abstract:

A graph generative model defines a distribution over graphs. One type of generative model is constructed by autoregressive neural networks, which sequentially add nodes and edges to generate a graph. However, the likelihood of a graph under the autoregressive model is intractable, as there are numerous sequences leading to the given graph; this makes maximum likelihood estimation challenging. Instead, in this work we derive the exact joint probability over the graph and the node ordering of the sequential process. From the joint, we approximately marginalize out the node orderings and compute a lower bound on the log-likelihood using variational inference. We train graph generative models by maximizing this bound, without using the ad-hoc node orderings of previous methods. Our experiments show that the log-likelihood bound is significantly tighter than the bound of previous schemes. Moreover, the models fitted with the proposed algorithm can generate high-quality graphs that match the structures of target graphs not seen during training. We have made our code publicly available at https://github.com/tufts-ml/graph-generation-vi.
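
As a rough illustration of the bound described above (the authors' actual code is at the GitHub link), the training objective is a standard variational lower bound in which the node ordering plays the role of the latent variable. A Monte-Carlo estimate of that bound can be sketched as follows; the function name and tensor inputs are hypothetical.

import torch

def ordering_elbo(log_p_graph_and_order, log_q_order):
    # Monte-Carlo estimate of  log p(G) >= E_{q(pi|G)}[ log p(G, pi) - log q(pi|G) ],
    # where pi is a node ordering sampled from the variational posterior q(pi|G).
    # Both inputs hold per-sample log-probabilities of shape (n_samples,).
    return (log_p_graph_and_order - log_q_order).mean()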


张智杰

Fifth-year PhD student at the Institute of Computing Technology, Chinese Academy of Sciences, advised by Prof. 张家琳. Research interests include combinatorial optimization, approximation algorithms, and machine learning; recent topics include submodular optimization and influence maximization.


Talk title:

Network Inference and Data-Driven Influence Maximization

Abstract:

Influence maximization is the task of selecting a small number of seed nodes in a social network to maximize the spread of influence from these seeds, and it has been widely investigated in the past two decades. In the canonical setting, the whole social network as well as its diffusion parameters is given as input. In this paper, we consider the more realistic sampling setting where the network is unknown and we only have a set of passively observed cascades that record the set of activated nodes at each diffusion step. We study the task of influence maximization from these cascade samples (IMS), and present constant approximation algorithms for this task under mild conditions on the seed set distribution. To achieve the optimization goal, we also provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from the cascade data. Compared with prior solutions, our network inference algorithm requires weaker assumptions and does not rely on maximum-likelihood estimation or convex programming. Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn, and we do not need any assumption related to the network structure or diffusion parameters.
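
For context, the "learning-and-then-optimization" baseline that the abstract refers to can be sketched as follows: assume diffusion probabilities have already been estimated from the cascades (not shown), then run the classic greedy seed selection under the estimated independent cascade model. This is a generic illustration under an assumed adjacency-list format, not the paper's IMS algorithm.

import random

def simulate_spread(graph, seeds, n_sims=200):
    # graph: {u: [(v, p_uv), ...]} with estimated diffusion probabilities p_uv.
    # Monte-Carlo estimate of the expected spread under the independent cascade model.
    total = 0
    for _ in range(n_sims):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v, p in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / n_sims

def greedy_seed_selection(graph, k):
    # Classic greedy: repeatedly add the node with the largest estimated marginal gain.
    seeds = set()
    for _ in range(k):
        base = simulate_spread(graph, seeds)
        gains = {v: simulate_spread(graph, seeds | {v}) - base
                 for v in graph if v not in seeds}
        if not gains:
            break
        seeds.add(max(gains, key=gains.get))
    return seeds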


19:30-21:00

杨智勇

He received his PhD from the Institute of Information Engineering, Chinese Academy of Sciences, and is currently a postdoctoral researcher at the University of Chinese Academy of Sciences. His main research interests are AUC optimization, multi-task learning, and machine learning theory. He has published 7 first-author papers in CCF-A journals and conferences such as ICML, NeurIPS, and T-PAMI, and serves as a PC member for ICML, NeurIPS, ICLR, AAAI, and IJCAI, a senior PC member for IJCAI 2021, and a reviewer for international journals such as T-PAMI and T-IP. He was selected for the Postdoctoral Innovative Talent Support Program (博新计划) and Baidu's Top 100 Chinese Rising Stars in AI, and his honors include a Baidu Scholarship global top-20 nomination, the CAS President's Special Award, and recognition as a NeurIPS top 10% reviewer.


Talk title:

An End-to-End Optimization Method for the TPAUC Metric

Abstract:

The Area Under the ROC Curve (AUC) is a crucial metric for machine learning, which evaluates the average performance over all possible True Positive Rates (TPRs) and False Positive Rates (FPRs). Based on the knowledge that a skillful classifier should simultaneously embrace a high TPR and a low FPR, we turn to study a more general variant called Two-way Partial AUC (TPAUC), where only the region with TPR ≥ α and FPR ≤ β is included in the area. Moreover, recent work shows that the TPAUC is essentially inconsistent with the existing Partial AUC metrics, where only the FPR range is restricted, opening a new problem: how to achieve a high TPAUC. Motivated by this, we present the first trial in this paper to optimize this new metric. The critical challenge along this course lies in the difficulty of performing gradient-based optimization with end-to-end stochastic training, even with a proper choice of surrogate loss. To address this issue, we propose a generic framework to construct surrogate optimization problems, which supports efficient end-to-end training with deep learning. Moreover, our theoretical analyses show that: 1) the objective function of the surrogate problems will achieve an upper bound of the original problem under mild conditions, and 2) optimizing the surrogate problems leads to good generalization performance in terms of TPAUC with high probability. Finally, empirical studies over several benchmark datasets speak to the efficacy of our framework.
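
As a simplified illustration (a sketch of one possible surrogate, not the paper's generic framework), a pairwise squared-hinge loss restricted to the hardest positives and hardest negatives can be written as below. The fractions pos_frac and neg_frac are stand-ins for whatever fractions the TPR ≥ α and FPR ≤ β constraints induce.

import torch

def tpauc_surrogate(scores_pos, scores_neg, pos_frac=0.5, neg_frac=0.5):
    # Keep only the hardest positives (lowest scores) and hardest negatives
    # (highest scores), then apply a pairwise squared-hinge surrogate on this
    # restricted region of the ROC curve.
    n_pos = max(1, int(pos_frac * scores_pos.numel()))
    n_neg = max(1, int(neg_frac * scores_neg.numel()))
    hard_pos = torch.topk(scores_pos, n_pos, largest=False).values
    hard_neg = torch.topk(scores_neg, n_neg, largest=True).values
    margins = 1.0 - (hard_pos.unsqueeze(1) - hard_neg.unsqueeze(0))
    return torch.clamp(margins, min=0.0).pow(2).mean()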

沈广宇

Second-year PhD student in the Department of Computer Science at Purdue University, working in Prof. Xiangyu Zhang's group on neural network security, including adversarial attacks, backdoor attacks, and defenses.


Talk title:

Neural Network Backdoor Scanning via Multi-Armed Bandit Optimization

Abstract:

Backdoor attack poses a severe threat to deep learning systems. It injects hidden malicious behaviors into a model such that any input stamped with a special pattern can trigger such behaviors. Detecting backdoors is hence of pressing need. Many existing defense techniques use optimization to generate the smallest input pattern that forces the model to misclassify a set of benign inputs injected with the pattern to a target label. However, the complexity is quadratic in the number of class labels, such that they can hardly handle models with many classes. Inspired by Multi-Arm Bandit in Reinforcement Learning, we propose a K-Arm optimization method for backdoor detection. By iteratively and stochastically selecting the most promising labels for optimization with the guidance of an objective function, we substantially reduce the complexity, allowing us to handle models with many classes. Moreover, by iteratively refining the selection of labels to optimize, it substantially mitigates the uncertainty in choosing the right labels, improving detection accuracy. At the time of submission, the evaluation of our method on over 4000 models in the IARPA TrojAI competition from round 1 to the latest round 4 achieves top performance on the leaderboard. Our technique also supersedes five state-of-the-art techniques in terms of accuracy and the scanning time needed. The code of our work is available at https://github.com/PurduePAML/K-ARM_Backdoor_Optimization
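
A highly simplified sketch of the arm-selection idea (the authors' real implementation is at the GitHub link above): each candidate target label is an arm, and at every round the scheduler advances trigger optimization for the currently most promising label. Here optimize_one_step is a hypothetical callback that runs one step of trigger inversion for a label and returns the current objective value (e.g., trigger size), and the epsilon-greedy rule stands in for the bandit-style selection.

import random

def k_arm_schedule(labels, optimize_one_step, n_rounds=200, epsilon=0.1):
    # Mostly pull the arm (label) with the smallest objective seen so far;
    # occasionally explore a random label.
    best = {y: float("inf") for y in labels}
    for _ in range(n_rounds):
        if random.random() < epsilon:
            y = random.choice(labels)
        else:
            y = min(labels, key=best.get)
        best[y] = min(best[y], optimize_one_step(y))
    # The label with the smallest final objective is the most suspicious target.
    return min(labels, key=best.get)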


闫雪

闫雪 is a first-year PhD student at the Institute of Automation, Chinese Academy of Sciences, with research interests in machine learning and multi-agent evaluation.


Talk title:

Efficient Multi-Agent Policy Evaluation via Low-Rank Matrix Completion

Abstract:

Multi-agent evaluation aims at the assessment of an agent's strategy on the basis of interaction with others. Typically, existing methods such as α-rank and its approximation still require exhaustively comparing all pairs of joint strategies for an accurate ranking, which in practice is computationally expensive. In this paper, we intend to reduce the number of pairwise comparisons needed to recover a satisfactory ranking of the players. We exploit the fact that agents with similar skills may achieve similar payoffs against others, as evidenced by our experiments. Two situations are considered: one where we can obtain the true payoffs (noise-free evaluation), and one where we can only access noisy payoff observations (noisy evaluation). Based on these formulations, we leverage low-rank matrix completion and design two novel algorithms for the noise-free and noisy evaluations, respectively. For both settings, we derive a bound on the number of comparisons (in terms of the number of agents and the rank of the payoff matrix) required to achieve sufficiently good evaluation performance. Empirical results on evaluating the players in three synthetic games and twelve real-world games from OpenSpiel demonstrate that evaluating the payoffs of only a few pairs can yield performance comparable to algorithms that know the complete payoff matrix.
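
A generic low-rank matrix completion sketch (not the paper's algorithm) conveys the core idea: fit a rank-r factorization U V^T to the observed entries of the pairwise payoff matrix, then rank agents from the completed matrix. The shapes, learning rate, and iteration count are illustrative assumptions.

import numpy as np

def complete_payoff(observed, mask, rank=3, lr=0.01, n_iters=2000, seed=0):
    # observed: (n, m) payoff matrix with arbitrary values where mask == 0.
    # mask:     (n, m) binary matrix, 1 where the payoff was actually measured.
    rng = np.random.default_rng(seed)
    n, m = observed.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(n_iters):
        residual = (U @ V.T - observed) * mask  # error on observed entries only
        U -= lr * residual @ V
        V -= lr * residual.T @ U
    return U @ V.T

# Example: rank agents by average completed payoff against all opponents.
# ranking = np.argsort(-complete_payoff(P_obs, mask).mean(axis=1))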


—— END ——
