Synthesizing multimodal medical data provides complementary knowledge and helps doctors make precise clinical decisions. Although promising, existing multimodal brain graph synthesis frameworks have several limitations. First, they mainly tackle only one problem (intra- or inter-modality), limiting their generalizability to synthesizing inter- and intra-modality graphs simultaneously. Second, while a few techniques super-resolve low-resolution brain graphs within a single modality (i.e., intra-modality), inter-modality graph super-resolution remains unexplored, even though it would avoid the need for costly data collection and processing. More importantly, the target and source domains might have different distributions, which causes a domain fracture between them. To fill these gaps, we propose a multi-resolution StairwayGraphNet (SG-Net) framework that jointly infers a target graph modality from a given source modality and super-resolves brain graphs in both inter- and intra-modality domains. Our SG-Net is grounded in three main contributions: (i) predicting a target graph from a source one based on a novel graph generative adversarial network in both inter-modality (e.g., morphological-functional) and intra-modality (e.g., functional-functional) domains, (ii) generating high-resolution brain graphs without resorting to time-consuming and expensive MRI processing steps, and (iii) enforcing the source distribution to match that of the ground-truth graphs using an inter-modality aligner, which relaxes the loss function to optimize. Moreover, we design a new Ground Truth-Preserving loss function to guide both generators in learning the topological structure of the ground-truth brain graphs more accurately. Our comprehensive experiments on predicting target brain graphs from source graphs using a multi-resolution stairway showed that our method outperforms its variants and a state-of-the-art method.
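As a rough illustration of the "Ground Truth-Preserving" idea described above, the sketch below combines an edge-wise L1 reconstruction term with a topology term that compares node strengths (weighted degrees) of the predicted and ground-truth connectomes. This is a minimal assumed form, not SG-Net's exact loss: the term names, the node-strength choice, the weight `lambda_topo`, and the 35-ROI toy graphs are illustrative assumptions.

```python
import torch

def gt_preserving_loss(pred_adj, gt_adj, lambda_topo=1.0):
    """Illustrative ground truth-preserving loss (assumed form, not the
    paper's exact formulation): an edge-wise L1 reconstruction term plus
    a topological term comparing node strengths of predicted and
    ground-truth brain graphs.

    pred_adj, gt_adj: (batch, n_roi, n_roi) weighted adjacency matrices.
    """
    # Edge-wise reconstruction: keep predicted connectivity close to ground truth.
    l1_term = torch.abs(pred_adj - gt_adj).mean()

    # Topology term: node strength = sum of edge weights incident to each ROI.
    pred_strength = pred_adj.sum(dim=-1)
    gt_strength = gt_adj.sum(dim=-1)
    topo_term = torch.abs(pred_strength - gt_strength).mean()

    return l1_term + lambda_topo * topo_term


# Toy usage: batch of 4 low-resolution 35-ROI brain graphs (hypothetical sizes).
pred = torch.rand(4, 35, 35)
gt = torch.rand(4, 35, 35)
print(gt_preserving_loss(pred, gt))
```

In a full adversarial setup, a term like this would be added to the generator's adversarial loss so that the generator is penalized both for implausible graphs and for drifting from the ground-truth topology.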


