Mixed-precision algorithms combine low- and high-precision computations to benefit from the performance gains of reduced-precision arithmetic without sacrificing accuracy. In this work, we design mixed-precision Runge-Kutta-Chebyshev (RKC) methods, in which high precision is used for accuracy and low precision for stability. Generally speaking, RKC methods are low-order explicit schemes whose stability domain grows quadratically with the number of function evaluations; most of the computational effort is therefore spent on stability rather than accuracy. In this paper, we show that a naïve mixed-precision implementation of any Runge-Kutta scheme can harm the convergence order of the method and limit its accuracy, and we introduce a new class of mixed-precision RKC schemes that is unaffected by this limiting behaviour. We present three mixed-precision schemes: a first- and a second-order RKC method, and a first-order multirate RKC scheme for multiscale problems. These schemes perform only the few function evaluations needed for accuracy (one or two for the first- and second-order methods, respectively) in high precision, while the rest are performed in low precision. We prove that while these methods are essentially as cheap as their fully low-precision equivalents, they retain the convergence order of their high-precision counterparts. Numerical experiments confirm that these schemes are as accurate as the corresponding high-precision method.
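To make the construction concrete, the sketch below implements one step of a first-order, s-stage RKC method in which only the single function evaluation needed for accuracy, f(y_n), is carried out in high precision, while the s-1 evaluations that only enlarge the stability domain enter through a low-precision correction. This is a minimal illustration of the idea described in the abstract, not the authors' exact scheme: the names (`rkc1_mixed_step`, `LO`), the damping parameter `eps`, and the use of float32 as a stand-in for the low-precision format are all assumptions made for the example.

```python
import numpy as np

LO = np.float32  # stand-in for the low-precision format (assumption)

def rkc1_mixed_step(f, y, h, s, eps=0.05):
    """One step of a first-order, s-stage RKC method, mixed-precision flavour.

    The one evaluation needed for first-order accuracy, f(y_n), is done in
    high precision; the remaining evaluations enter via a low-precision
    correction f(Y_{j-1}) - f(y_n), whose rounding error is O(u_lo * h).
    """
    # Chebyshev values T_j(w0) and T_j'(w0) via the three-term recurrences
    w0 = 1.0 + eps / s**2                    # damping shift
    T, dT = np.ones(s + 1), np.zeros(s + 1)
    T[1], dT[1] = w0, 1.0
    for j in range(2, s + 1):
        T[j] = 2.0 * w0 * T[j - 1] - T[j - 2]
        dT[j] = 2.0 * T[j - 1] + 2.0 * w0 * dT[j - 1] - dT[j - 2]
    w1 = T[s] / dT[s]                        # enforces R'(0) = 1 (first order)
    b = 1.0 / T                              # b_j = 1/T_j(w0)

    fy = f(y)                                # high precision: accuracy
    fy_lo = f(y.astype(LO)).astype(LO)       # low precision, reused below
    Yjm2, Yjm1 = y, y + b[1] * w1 * h * fy   # stages Y_0 and Y_1
    for j in range(2, s + 1):
        mu, nu = 2.0 * b[j] * w0 / b[j - 1], -b[j] / b[j - 2]
        mut = 2.0 * b[j] * w1 / b[j - 1]
        # low-precision stage evaluation, recentred at the high-precision f(y_n)
        df = f(Yjm1.astype(LO)).astype(LO) - fy_lo
        Yjm2, Yjm1 = Yjm1, mu * Yjm1 + nu * Yjm2 + mut * h * (fy + df)
    return Yjm1                              # y_{n+1} = Y_s
```

In this sketch, recentring the low-precision evaluations at the high-precision f(y_n) is what protects the order: the correction f(Y_{j-1}) - f(y_n) is itself O(h), so rounding it in low precision perturbs the update only at O(u_lo * h), whereas naïvely rounding f(Y_{j-1}) directly introduces an O(u_lo) error floor independent of h. In practice, for a stiff semidiscretized diffusion problem one would pick s from the stability condition (for first-order RKC, roughly h * rho(J) <= 2 s^2 with mild damping), so s grows with stiffness while the number of high-precision evaluations stays fixed at one.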

