Boundary representation (B-rep) models are the standard way 3D shapes are described in Computer-Aided Design (CAD) applications. They combine lightweight parametric curves and surfaces with topological information which connects the geometric entities to describe manifolds. In this paper we introduce BRepNet, a neural network architecture designed to operate directly on B-rep data structures, avoiding the need to approximate the model as meshes or point clouds. BRepNet defines convolutional kernels with respect to oriented coedges in the data structure. In the neighborhood of each coedge, a small collection of faces, edges and coedges can be identified, and patterns in the feature vectors from these entities can be detected by specific learnable parameters. In addition, to encourage further deep learning research with B-reps, we publish the Fusion 360 Gallery segmentation dataset: a collection of over 35,000 B-rep models annotated with information about the modeling operations which created each face. We demonstrate that BRepNet can segment these models with higher accuracy than methods working on meshes and point clouds.
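To make the coedge-based convolution concrete, the following is a minimal PyTorch sketch of one such step: for every oriented coedge we gather the feature vectors of a fixed pattern of nearby entities, concatenate them, and mix them with one learnable linear map (the "kernel"). The particular neighborhood pattern used here (the coedge itself, its mating coedge, the next coedge in its loop, its owning edge, and its adjacent face), the class name `CoedgeConv`, and all tensor names are illustrative assumptions, not the paper's exact kernel definition.

```python
import torch
import torch.nn as nn

class CoedgeConv(nn.Module):
    """One BRepNet-style coedge convolution step (minimal sketch).

    Assumed neighborhood pattern (not the paper's full kernel):
    the coedge itself, its mate, the next coedge in its loop,
    its edge, and its face.
    """

    def __init__(self, coedge_dim, edge_dim, face_dim, out_dim):
        super().__init__()
        in_dim = 3 * coedge_dim + edge_dim + face_dim
        # The learnable "kernel": one linear map over the
        # concatenated neighborhood features.
        self.kernel = nn.Linear(in_dim, out_dim)

    def forward(self, coedge_feats, edge_feats, face_feats,
                mate, next_, edge_of, face_of):
        # Index tensors (one entry per coedge) encode the B-rep topology:
        #   mate[i]    -> coedge on the other side of coedge i's edge
        #   next_[i]   -> next coedge in the loop around the face
        #   edge_of[i] -> edge that coedge i lies on
        #   face_of[i] -> face that coedge i borders
        gathered = torch.cat([
            coedge_feats,            # the coedge itself
            coedge_feats[mate],      # its mating coedge
            coedge_feats[next_],     # next coedge in the loop
            edge_feats[edge_of],     # the owning edge
            face_feats[face_of],     # the adjacent face
        ], dim=-1)
        return torch.relu(self.kernel(gathered))

# Toy usage with random features and a hand-made topology of 6 coedges:
N = 6
coedges = torch.randn(N, 8)
edges = torch.randn(3, 4)
faces = torch.randn(3, 4)
mate = torch.tensor([1, 0, 3, 2, 5, 4])
next_ = torch.tensor([2, 3, 4, 5, 0, 1])
edge_of = torch.tensor([0, 0, 1, 1, 2, 2])
face_of = torch.tensor([0, 1, 0, 2, 1, 2])
conv = CoedgeConv(coedge_dim=8, edge_dim=4, face_dim=4, out_dim=16)
out = conv(coedges, edges, faces, mate, next_, edge_of, face_of)  # shape [6, 16]
```

Because the gather is driven purely by integer index tensors derived from the B-rep topology, the same learned kernel applies to models of any size, analogous to how an image convolution reuses its weights at every pixel.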