The size of image stacks in connectomics studies now reaches the terabyte and often the petabyte scale, with great diversity of appearance across brain regions and samples. However, manual annotation of neural structures, e.g., synapses, is time-consuming, which leads to limited training data, often smaller than 0.001\% of the test data in size. Domain adaptation and generalization approaches have been proposed to address similar issues for natural images, but they have rarely been evaluated on connectomics data due to the lack of out-of-domain benchmarks.