Both the CNN mapping function and the sampling scheme are of paramount importance for CNN-based image analysis. Both operate in the same space, with an image axis $\mathcal{I}$ and a feature axis $\mathcal{F}$. Remarkably, we found that no framework existed that unified the two and automatically kept track of the spatial origin of the data. In our own practical experience, this absence often results in complex code and pipelines that are difficult to exchange. This article introduces our framework for 1D, 2D, or 3D image classification and segmentation: DeepVoxNet2 (DVN2). The article serves as an interactive tutorial; a pre-compiled version, including the outputs of the code blocks, can be found online in the public DVN2 repository. The tutorial uses data from the 2018 multimodal Brain Tumor Image Segmentation Benchmark (BRATS) to demonstrate an example 3D segmentation pipeline.