Computer-system techniques that have been successfully deployed for dense, regular workloads fall short of their scalability and efficiency goals when applied to irregular and dynamic applications. This is primarily due to the mismatch between the multiple layers of the system design, from hardware architecture, execution model, and programming model to data structure and application code. The paper approaches this issue by addressing all layers of the system design. It presents and argues for key design principles needed for scalable and efficient dynamic graph processing, and from these it builds: 1) a fine-grain, memory-driven architecture that supports asynchronous active messages, 2) a programming and execution model that allows spawning tasks from within data-parallel execution, and 3) a data structure that distributes each vertex object across many compute cells while still providing a single programming abstraction for the data object. Simulated experimental results show a geomean performance gain of $2.38 \times$ over a comparable state-of-the-art system for graph traversals, while natively supporting dynamic graph processing. The system uses the programming abstraction of actions, introduces a new dynamic graph storage scheme, and provides message delivery mechanisms with continuations that contain post-completion actions. Continuations seamlessly adjust prior or running execution to mutations in the input graph and enable dynamic graph processing.
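To make the abstractions concrete, the sketch below is a minimal, illustrative C++ analogue (not the paper's actual API; the names `Action`, `Vertex`, and the queue-based "network" are assumptions for exposition). It shows an action carrying a payload and a post-completion continuation to a target vertex, with further actions spawned from within the traversal itself, in the spirit of asynchronous active messages.

```cpp
// Illustrative sketch only: actions with continuations driving a BFS-like
// traversal. Real hardware would route actions across compute cells; here a
// single queue stands in for the message network.
#include <cstdio>
#include <deque>
#include <functional>
#include <limits>
#include <vector>

struct Vertex {
    int id;
    std::vector<int> neighbors;                   // edges (cross-cell ghosts omitted)
    int level = std::numeric_limits<int>::max();  // traversal state updated by actions
};

struct Action {
    int target;                                  // destination vertex id
    int proposed_level;                          // payload carried by the active message
    std::function<void(Vertex&)> continuation;   // post-completion action
};

int main() {
    // Hypothetical small graph; in the paper's setting the vertex objects
    // would be distributed across many compute cells.
    std::vector<Vertex> graph = {
        {0, {1, 2}}, {1, {0, 3}}, {2, {0, 4}},
        {3, {1, 5}}, {4, {2, 5}}, {5, {3, 4}}
    };

    std::deque<Action> net;  // stand-in for asynchronous message delivery
    auto on_done = [](Vertex& v) {
        std::printf("vertex %d settled at level %d\n", v.id, v.level);
    };

    // Seed the traversal at vertex 0.
    net.push_back({0, 0, on_done});

    while (!net.empty()) {
        Action a = net.front();
        net.pop_front();
        Vertex& v = graph[a.target];
        if (a.proposed_level < v.level) {   // the action performs useful work
            v.level = a.proposed_level;
            for (int n : v.neighbors)       // spawn tasks from within the data parallelism
                net.push_back({n, v.level + 1, a.continuation});
            a.continuation(v);              // invoke the post-completion continuation
        }
    }
    return 0;
}
```

Because the continuation travels with each spawned action, a mutation to the graph (for example, a newly inserted edge enqueuing a fresh action) can be folded into a prior or running traversal rather than requiring a restart, which is the behavior the abstract attributes to continuations.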