Tensor decomposition (TD) is an important method for extracting latent information from high-dimensional (multi-modal) sparse data. This study presents a novel framework for accelerating fundamental TD operations on massively parallel GPU architectures. In contrast to prior work, the proposed Blocked Linearized Coordinate (BLCO) format enables efficient out-of-memory computation of tensor algorithms using a unified implementation that works on a single tensor copy. Our adaptive blocking and linearization strategies not only meet the resource constraints of GPU devices, but also accelerate data indexing, eliminate control-flow and memory-access irregularities, and reduce kernel launch overhead. To address the substantial synchronization cost on GPUs, we introduce an opportunistic conflict resolution algorithm in which threads collaborate, instead of contending for memory access, to discover and resolve their conflicting updates on the fly, without keeping any auxiliary information or storing non-zero elements in specific mode orientations. As a result, our framework delivers superior in-memory performance compared to the prior state of the art, and is the only framework capable of processing out-of-memory tensors. On the latest Intel and NVIDIA GPUs, BLCO achieves a 2.12-2.6X geometric-mean speedup (with up to 33.35X speedup) over the state-of-the-art mixed-mode compressed sparse fiber (MM-CSF) format on a range of real-world sparse tensors.
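To give a concrete feel for the linearization idea, the sketch below packs the multi-mode coordinates of each nonzero into a single 64-bit index using precomputed per-mode bit widths, and decodes any one mode with a shift and a mask. This is a minimal illustration under simplifying assumptions: the helper names (encode_coords, decode_mode) are hypothetical, not the paper's API, and actual BLCO additionally partitions the tensor into blocks when the total bit count exceeds the element width.

```cuda
// Minimal sketch of coordinate linearization, in the spirit of BLCO.
// Assumption: each mode m is given ceil(log2(dim_m)) bits, packed from
// the least-significant bits up. Helper names are illustrative only.
#include <cstdint>
#include <cstdio>

// Pack one nonzero's coordinates into a single 64-bit linear index.
__host__ __device__ inline uint64_t encode_coords(const uint32_t* coords,
                                                  const int* bits,
                                                  int num_modes) {
  uint64_t linear = 0;
  int shift = 0;
  for (int m = 0; m < num_modes; ++m) {
    linear |= (uint64_t)coords[m] << shift;
    shift += bits[m];
  }
  return linear;
}

// Recover a single mode's coordinate with a shift and a mask -- the cheap
// on-the-fly indexing that replaces per-mode index arrays.
__host__ __device__ inline uint32_t decode_mode(uint64_t linear,
                                                const int* bits, int mode) {
  int shift = 0;
  for (int m = 0; m < mode; ++m) shift += bits[m];
  uint64_t mask = (1ULL << bits[mode]) - 1;
  return (uint32_t)((linear >> shift) & mask);
}

int main() {
  // A 1000 x 80 x 4096 tensor needs 10 + 7 + 12 = 29 bits per nonzero,
  // so all three coordinates fit comfortably in one 64-bit word.
  const int bits[3] = {10, 7, 12};
  const uint32_t coords[3] = {617, 42, 3001};
  uint64_t lin = encode_coords(coords, bits, 3);
  for (int m = 0; m < 3; ++m)
    printf("mode %d: %u\n", m, decode_mode(lin, bits, m));
  return 0;
}
```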
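The conflict-resolution idea can be approximated with a standard segmented warp reduction: when the nonzeros a warp processes are ordered so that equal output rows sit in adjacent lanes, threads combine their partial updates with register shuffles and only the first lane of each run issues an atomic. The sketch below is a simpler stand-in under that sorted-run assumption; the paper's opportunistic algorithm discovers conflicts without relying on a specific mode orientation, and a real MTTKRP kernel updates a rank-R row rather than the scalar used here.

```cuda
// Minimal sketch of warp-level conflict aggregation for scattered updates.
// Assumption: equal output rows occupy adjacent lanes of a warp (BLCO's
// actual algorithm resolves conflicts on the fly without this ordering).
#include <cstdio>

__device__ void segmented_warp_add(float* out, int row, float val) {
  const unsigned FULL = 0xffffffffu;
  int lane = threadIdx.x & 31;
  // Segmented suffix reduction: each lane folds in neighbors at growing
  // offsets as long as they update the same row.
  for (int offset = 1; offset < 32; offset <<= 1) {
    float v = __shfl_down_sync(FULL, val, offset);
    int   r = __shfl_down_sync(FULL, row, offset);
    if (lane + offset < 32 && r == row) val += v;
  }
  // Only the first lane of each run of equal rows touches global memory,
  // so a warp issues one atomic per distinct row instead of one per lane.
  // Sentinel lanes (row < 0, see caller) never write.
  int prev = __shfl_up_sync(FULL, row, 1);
  if (row >= 0 && (lane == 0 || prev != row)) atomicAdd(&out[row], val);
}

__global__ void scatter_updates(const int* rows, const float* vals,
                                float* out, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  // Pad the tail with a sentinel row so every lane joins the shuffles.
  int   row = (i < n) ? rows[i] : -1;
  float val = (i < n) ? vals[i] : 0.0f;
  segmented_warp_add(out, row, val);
}

int main() {
  const int n = 8;
  int   h_rows[n] = {0, 0, 0, 1, 1, 2, 5, 5};  // equal rows are adjacent
  float h_vals[n] = {1, 1, 1, 2, 2, 3, 4, 4};
  int *d_rows; float *d_vals, *d_out;
  cudaMalloc(&d_rows, n * sizeof(int));
  cudaMalloc(&d_vals, n * sizeof(float));
  cudaMalloc(&d_out, 6 * sizeof(float));
  cudaMemcpy(d_rows, h_rows, n * sizeof(int), cudaMemcpyHostToDevice);
  cudaMemcpy(d_vals, h_vals, n * sizeof(float), cudaMemcpyHostToDevice);
  cudaMemset(d_out, 0, 6 * sizeof(float));
  scatter_updates<<<1, 32>>>(d_rows, d_vals, d_out, n);
  float h_out[6];
  cudaMemcpy(h_out, d_out, 6 * sizeof(float), cudaMemcpyDeviceToHost);
  for (int r = 0; r < 6; ++r) printf("row %d: %g\n", r, h_out[r]);
  return 0;
}
```

With this scheme a warp holding 32 updates to the same row issues a single atomicAdd instead of 32, which is the essence of collaborating rather than contending on memory access.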