In CMOS-based electronics, the most straightforward way to implement a summation operation is the ripple carry adder (RCA). Magnonics, the field of science concerned with data processing by spin waves and their quanta, magnons, has recently proposed a magnonic half-adder that can be considered the simplest magnonic integrated circuit. Here, we develop a computation model for the magnonic basic blocks to enable the design and simulation of magnonic gates and magnonic circuits of arbitrary complexity, and demonstrate its functionality on the example of a 32-bit integrated RCA. It is shown that the RCA requires additional regenerators, based on magnonic directional couplers with embedded amplifiers, to normalize the magnon signals between the half-adders. Benchmarking of large-scale magnonic integrated circuits is performed. The energy consumption of a 32-bit magnonic adder based on 30 nm technology can be as low as 961 aJ per operation, taking into account all required amplifiers.
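For reference, the carry-propagation logic that such a circuit implements can be sketched in ordinary Boolean terms. The Python below illustrates a 32-bit ripple carry adder built from half-adders; it makes no attempt to model the magnonic devices, amplifiers, or regenerators themselves, and the function names are invented for this sketch.

    def half_adder(a, b):
        """Half-adder: sum = a XOR b, carry = a AND b."""
        return a ^ b, a & b

    def full_adder(a, b, carry_in):
        """Full adder assembled from two half-adders plus an OR for the carry."""
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2

    def ripple_carry_add(x, y, width=32):
        """Add two unsigned integers bit by bit; the carry ripples through all stages."""
        carry, result = 0, 0
        for i in range(width):
            a, b = (x >> i) & 1, (y >> i) & 1
            s, carry = full_adder(a, b, carry)
            result |= s << i
        return result, carry  # carry is the final carry-out

    # Example: 32-bit addition wraps around and produces a carry-out of 1.
    assert ripple_carry_add(0xFFFFFFFF, 1) == (0, 1)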

Related content

Quantization has proven to be a vital method for improving the inference efficiency of deep neural networks (DNNs). However, it is still challenging to strike a good balance between accuracy and efficiency when quantizing DNN weights or activation values from high-precision formats to their quantized counterparts. We propose a new method called elastic significant bit quantization (ESB) that controls the number of significant bits of quantized values to obtain better inference accuracy with fewer resources. We design a unified mathematical formula to constrain the quantized values of ESB with a flexible number of significant bits. We also introduce a distribution difference aligner (DDA) to quantitatively align the distributions of the full-precision weight or activation values and the quantized values. Consequently, ESB is suitable for the various bell-shaped distributions of DNN weights and activations, thus maintaining high inference accuracy. Benefiting from the smaller number of significant bits in the quantized values, ESB can also reduce multiplication complexity. We implement ESB as an accelerator and quantitatively evaluate its efficiency on FPGAs. Extensive experimental results illustrate that ESB quantization consistently outperforms state-of-the-art methods and achieves average accuracy improvements of 4.78%, 1.92%, and 3.56% over AlexNet, ResNet18, and MobileNetV2, respectively. Furthermore, the ESB accelerator achieves a peak performance of 10.95 GOPS per 1k LUTs without DSPs on the Xilinx ZCU102 FPGA platform. Compared with CPU, GPU, and state-of-the-art FPGA accelerators, the ESB accelerator improves energy efficiency by up to 65x, 11x, and 26x, respectively.
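To make the notion of "significant bits of quantized values" concrete, the sketch below rounds each magnitude to k significant (mantissa) bits. It is a simplified stand-in for intuition only, not the ESB formula or the DDA alignment proposed in the paper; the function name and parameters are invented.

    import numpy as np

    def quantize_significant_bits(x, k=3):
        """Keep only k significant bits of each value's magnitude (illustrative only)."""
        x = np.asarray(x, dtype=np.float64)
        sign, mag = np.sign(x), np.abs(x)
        out = np.zeros_like(mag)
        nz = mag > 0
        # Exponent of the leading bit, so the mantissa can be rounded to k bits.
        exp = np.floor(np.log2(mag[nz]))
        scale = 2.0 ** (exp - (k - 1))
        out[nz] = np.round(mag[nz] / scale) * scale
        return sign * out

    weights = np.array([0.73, -0.028, 0.0041, 1.9])
    print(quantize_significant_bits(weights, k=3))  # e.g. 0.73 -> 0.75 (3 significant bits)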

Small angle X-ray scattering (SAXS) is extensively used in materials science as a way of examining nanostructures. The analysis of experimental SAXS data involves mapping a rather simple data format to a vast number of structural models. Despite various scientific computing tools that assist with model selection, the activity heavily relies on the SAXS analysts' experience, which is recognized as an efficiency bottleneck by the community. To cope with this decision-making problem, we develop and evaluate the open-source, machine learning-based tool SCAN (SCattering Ai aNalysis), which provides recommendations on model selection. SCAN exploits multiple machine learning algorithms and uses the models and the simulation tool implemented in the SasView package to generate a well-defined set of training datasets. Our evaluation shows that SCAN delivers an overall accuracy of 95%-97%. The XGBoost classifier has been identified as the most accurate method, with a good balance between accuracy and training time. With eleven predefined structural models for common nanostructures and an easy drag-and-drop function to expand the number and types of training models, SCAN can accelerate the SAXS data analysis workflow.
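A minimal sketch of the underlying classification setup, assuming each training example is a simulated 1-D scattering curve labeled by the structural model that generated it. The arrays below are random placeholders rather than SasView simulations, and this is not the SCAN implementation.

    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.model_selection import train_test_split

    # Placeholder data: each row stands in for a simulated SAXS intensity curve I(q),
    # each label indexes one of eleven predefined structural models.
    rng = np.random.default_rng(0)
    X = rng.random((5000, 256))          # 5000 curves sampled at 256 q-points
    y = rng.integers(0, 11, size=5000)   # model labels 0..10

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
    clf.fit(X_train, y_train)
    print("accuracy:", clf.score(X_test, y_test))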

We study a class of weakly identifiable location-scale mixture models for which the maximum likelihood estimates based on $n$ i.i.d. samples are known to have lower accuracy than the classical $n^{-\frac{1}{2}}$ error. We investigate whether the Expectation-Maximization (EM) algorithm also converges slowly for these models. We provide a rigorous characterization of EM for fitting a weakly identifiable Gaussian mixture in a univariate setting, where we prove that the EM algorithm converges in order $n^{\frac{3}{4}}$ steps and returns estimates that are at a Euclidean distance of order $n^{-\frac{1}{8}}$ and $n^{-\frac{1}{4}}$ from the true location and scale parameter, respectively. Establishing the slow rates in the univariate setting requires a novel localization argument with two stages, each stage involving an epoch-based argument applied to a different surrogate EM operator at the population level. We demonstrate several multivariate ($d \geq 2$) examples that exhibit the same slow rates as the univariate case. We also prove slow statistical rates in higher dimensions in a special case, when the fitted covariance is constrained to be a multiple of the identity.
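For concreteness, a canonical example of such weak identifiability (used here purely as an illustration; the paper's exact model specification may differ) is fitting a symmetric two-component location-scale mixture to data drawn from a single standard Gaussian, in which case the rates quoted above read:

$$
\frac{1}{2}\,\mathcal{N}(\theta,\sigma^2) + \frac{1}{2}\,\mathcal{N}(-\theta,\sigma^2)
\quad\text{fitted to}\quad X_1,\dots,X_n \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,1),
$$
$$
T \asymp n^{3/4}\ \text{EM steps},\qquad
|\hat{\theta}-\theta^{*}| \asymp n^{-1/8},\qquad
|\hat{\sigma}-\sigma^{*}| \asymp n^{-1/4}.
$$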

In many practical settings, scientists are particularly interested in detecting which predictors are truly associated with a multivariate response. It is more accurate to model the multiple responses as one vector rather than separating the components one by one; this is particularly true for complex traits with multiple correlated components. A Bayesian multivariate variable selection (BMVS) approach is proposed to select important predictors influencing the multivariate response from a candidate pool of ultrahigh dimension. By applying sample-size-dependent spike and slab priors, the BMVS approach satisfies the strong selection consistency property under certain conditions, which represents an advantage of BMVS over other existing Bayesian multivariate regression-based approaches. The proposed approach considers the covariance structure of the multiple responses without assuming independence and integrates the estimation of covariance-related parameters together with all regression parameters into one framework through a fast-updating MCMC procedure. It is demonstrated through simulations that the BMVS approach outperforms other relevant frequentist and Bayesian approaches. The proposed BMVS approach is flexible enough for a wide range of applications, including genome-wide association studies with multiple correlated phenotypes and a large number of genetic variants and/or environmental variables, as demonstrated in the real data analysis section. The computer code and test data of the proposed method are available as an R package.
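For orientation, a generic sample-size-dependent spike-and-slab prior on the coefficient vector $\boldsymbol{\beta}_j$ of the $j$-th predictor can be written as follows; the hyperparameter choices and the exact slab form used by BMVS may differ, so this is only a schematic:

$$
\boldsymbol{\beta}_j \mid \gamma_j \;\sim\; (1-\gamma_j)\,\mathcal{N}_q\!\big(\mathbf{0},\, \tau_{0,n}^2\,\boldsymbol{\Sigma}\big) \;+\; \gamma_j\,\mathcal{N}_q\!\big(\mathbf{0},\, \tau_{1,n}^2\,\boldsymbol{\Sigma}\big),
\qquad \gamma_j \sim \mathrm{Bernoulli}(\pi_n),
$$

where $\gamma_j = 1$ indicates that predictor $j$ is associated with the $q$-dimensional response, and the spike and slab scales $\tau_{0,n}^2 \ll \tau_{1,n}^2$ and the inclusion probability $\pi_n$ are allowed to depend on the sample size $n$.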

A second-order accurate, linear numerical method is analyzed for the Landau-Lifshitz equation with large damping parameters. This equation describes the dynamics of magnetization, subject to the non-convex constraint that the magnetization has unit length. The numerical method is based on the second-order backward differentiation formula in time, combined with an implicit treatment of the linear diffusion term and explicit extrapolation of the nonlinear terms. Afterward, a projection step is applied to normalize the numerical solution at a point-wise level. This scheme has shown considerable advantages in practical computations for the physical model with large damping parameters, since only a linear system with constant coefficients (independent of both time and the updated magnetization) needs to be solved at each time step, which greatly improves the numerical efficiency. However, a theoretical analysis of this linear scheme has not been available. In this paper, we provide a rigorous error estimate of the numerical scheme, in the discrete $\ell^{\infty}(0,T; \ell^2) \cap \ell^2(0,T; H_h^1)$ norm, under suitable regularity assumptions and a reasonable ratio between the time step-size and the spatial mesh-size. In particular, the projection operation is nonlinear, and a stability estimate for the projection step turns out to be highly challenging. Such a stability estimate is derived in detail; it plays an essential role in the convergence analysis for the numerical scheme when the damping parameter is greater than 3.
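Schematically, with $\hat{\mathbf m}^{\,n+1} = 2\mathbf m^{n} - \mathbf m^{n-1}$ denoting the second-order extrapolation, $\alpha$ the damping parameter, $\Delta_h$ the discrete Laplacian, and the remaining nonlinear terms collected into $\mathbf f$, a scheme of the type described above takes the generic form (not necessarily identical, term by term, to the scheme analyzed in the paper):

$$
\frac{\tfrac{3}{2}\tilde{\mathbf m}^{\,n+1} - 2\mathbf m^{n} + \tfrac{1}{2}\mathbf m^{n-1}}{\Delta t}
= \alpha\,\Delta_h \tilde{\mathbf m}^{\,n+1} + \mathbf f\big(\hat{\mathbf m}^{\,n+1}\big),
\qquad
\mathbf m^{n+1} = \frac{\tilde{\mathbf m}^{\,n+1}}{\big|\tilde{\mathbf m}^{\,n+1}\big|}
\ \ \text{(point-wise projection)},
$$

so that only a constant-coefficient linear system for $\tilde{\mathbf m}^{\,n+1}$ has to be solved at each time step.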

Smartphones have become the most used electronic devices. They carry out most of the functionalities of desktops and offer various useful applications that suit users' needs. As a result, the user, rather than the operator, has become the main controller of the device and its applications, so their reliability has become an emergent requirement. As a first step, based on collected smartphone application failure data, we investigated and evaluated the efficacy of Software Reliability Growth Models (SRGMs) applied to these data, in order to check whether they achieve the same accuracy as in the desktop/laptop area. None of the selected models was able to account for the smartphone data satisfactorily. Their failure is traced back to: (i) the hardware and software differences between desktops and smartphones, (ii) the specific features of mobile applications compared to desktop applications, and (iii) the different operational conditions and usage profiles. Thus, a reliability model suited to smartphone applications is still needed. In a second step, we applied the Weibull and Gamma distributions, and their two particular cases, Rayleigh and S-shaped, to model the smartphone failure data sorted by application version number and grouped into different time periods. An estimate of the expected number of defects in each application version was obtained. The performances of the distributions were then compared with each other. We found that both the Weibull and Gamma distributions can fit the failure data of mobile applications, although the Gamma distribution is frequently better suited.
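The second modeling step can be illustrated with a short SciPy sketch. The inter-failure times below are made-up placeholders rather than the collected smartphone failure data, and the goodness-of-fit comparison via a Kolmogorov-Smirnov test is only one possible choice.

    import numpy as np
    from scipy import stats

    # Hypothetical inter-failure times (e.g., days between reported failures of one
    # application version); the real study uses collected failure data.
    failure_times = np.array([1.2, 2.5, 3.1, 4.8, 5.0, 6.7, 8.3, 9.9, 12.4, 15.0])

    # Fit Weibull and Gamma distributions (location fixed at 0).
    weibull_params = stats.weibull_min.fit(failure_times, floc=0)
    gamma_params = stats.gamma.fit(failure_times, floc=0)

    # Compare goodness of fit with a Kolmogorov-Smirnov test.
    print("Weibull:", stats.kstest(failure_times, "weibull_min", args=weibull_params))
    print("Gamma:  ", stats.kstest(failure_times, "gamma", args=gamma_params))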

Mined bitexts can contain imperfect translations that yield unreliable training signals for Neural Machine Translation (NMT). While filtering out such pairs is known to improve final model quality, we argue that it is suboptimal in low-resource conditions where even mined data can be limited. In our work, we propose instead to refine the mined bitexts via automatic editing: given a sentence in a language xf and a possibly imperfect translation of it xe, our model generates a revised version xf' or xe' that yields a more equivalent translation pair (i.e., <xf, xe'> or <xf', xe>). We use a simple editing strategy of (1) mining potentially imperfect translations for each sentence in a given bitext, and (2) learning a model to reconstruct the original translations and to translate, in a multi-task fashion. Experiments demonstrate that our approach successfully improves the quality of CCMatrix mined bitext for 5 low-resource language pairs and 10 translation directions by up to ~8 BLEU points, in most cases improving upon a competitive back-translation baseline.
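A rough sketch of how the two training tasks could be laid out as (source, target) pairs for a standard sequence-to-sequence model; the control tags, formatting, and example sentences are invented for illustration and do not reproduce the paper's exact data construction.

    def make_training_pairs(xf, xe_gold, xe_mined):
        """Build multi-task examples for one bitext sentence pair: an editing example
        that revises a possibly imperfect translation, and a plain translation example."""
        return [
            # Editing task: given the source and an imperfect translation,
            # reconstruct the original (gold) translation.
            (f"<edit> {xf} </s> {xe_mined}", xe_gold),
            # Translation task: translate the source directly.
            (f"<translate> {xf}", xe_gold),
        ]

    pairs = make_training_pairs(
        xf="le chat dort sur le canapé",
        xe_gold="the cat is sleeping on the couch",
        xe_mined="the cat sleeps on a sofa bed",
    )
    for src, tgt in pairs:
        print(src, "->", tgt)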

Model complexity is a fundamental problem in deep learning. In this paper, we conduct a systematic overview of the latest studies on model complexity in deep learning. The model complexity of deep learning can be categorized into expressive capacity and effective model complexity. We review the existing studies on these two categories along four important factors: model framework, model size, optimization process, and data complexity. We also discuss applications of deep learning model complexity, including understanding model generalization capability, model optimization, and model selection and design. We conclude by proposing several interesting future directions.

We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks using a novel training procedure, referred to as a pseudo-masked language model (PMLM). Given an input text with masked tokens, we rely on conventional masks to learn inter-relations between corrupted tokens and context via autoencoding, and pseudo masks to learn intra-relations between masked spans via partially autoregressive modeling. With well-designed position embeddings and self-attention masks, the context encodings are reused to avoid redundant computation. Moreover, conventional masks used for autoencoding provide global masking information, so that all the position embeddings are accessible in partially autoregressive language modeling. In addition, the two tasks pre-train a unified language model as a bidirectional encoder and a sequence-to-sequence decoder, respectively. Our experiments show that the unified language models pre-trained using PMLM achieve new state-of-the-art results on a wide range of natural language understanding and generation tasks across several widely used benchmarks.
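A much-simplified sketch of how an input with both conventional and pseudo masks might be assembled, ignoring the attention-mask design and the span factorization; the token names and helper function are placeholders, not the actual PMLM preprocessing.

    def build_pmlm_input(tokens, masked_positions):
        """Replace masked tokens with [M] (conventional masks) and append [P] tokens
        (pseudo masks) that reuse the position ids of the masked positions."""
        seq = [("[M]" if i in masked_positions else t) for i, t in enumerate(tokens)]
        pos = list(range(len(tokens)))
        for p in masked_positions:   # pseudo masks for partially autoregressive modeling
            seq.append("[P]")
            pos.append(p)
        return seq, pos

    tokens = ["x1", "x2", "x3", "x4", "x5"]
    print(build_pmlm_input(tokens, masked_positions=[1, 3]))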

Neural waveform models such as WaveNet are used in many recent text-to-speech systems, but the original WaveNet is quite slow in waveform generation because of its autoregressive (AR) structure. Although faster non-AR models have recently been reported, they may be prohibitively complicated due to the use of a distilling training method and the blend of other disparate training criteria. This study proposes a non-AR neural source-filter waveform model that can be directly trained using spectrum-based training criteria and the stochastic gradient descent method. Given the input acoustic features, the proposed model first uses a source module to generate a sine-based excitation signal and then uses a filter module to transform the excitation signal into the output speech waveform. Our experiments demonstrated that the proposed model generates waveforms at least 100 times faster than the AR WaveNet and that the quality of its synthetic speech is close to that of speech generated by the AR WaveNet. Ablation test results showed that both the sine-wave excitation signal and the spectrum-based training criteria were essential to the performance of the proposed model.
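A minimal NumPy sketch of the sine-based excitation idea: a sinusoid whose instantaneous frequency follows the frame-level F0 contour in voiced regions and noise in unvoiced regions. The amplitudes, noise levels, and frame-to-sample upsampling below are illustrative choices, not those of the proposed model.

    import numpy as np

    def sine_excitation(f0, fs=16000, hop=80):
        """Sine-based excitation from a frame-level F0 contour (Hz, 0 = unvoiced)."""
        f0_samples = np.repeat(np.asarray(f0, dtype=float), hop)  # frame -> sample rate
        phase = 2.0 * np.pi * np.cumsum(f0_samples) / fs          # instantaneous phase
        voiced = f0_samples > 0
        harmonic = 0.1 * np.sin(phase) + 0.003 * np.random.randn(f0_samples.size)
        noise_only = 0.03 * np.random.randn(f0_samples.size)
        return np.where(voiced, harmonic, noise_only)

    f0_contour = np.concatenate([np.zeros(10), np.linspace(120, 180, 40), np.zeros(10)])
    excitation = sine_excitation(f0_contour)
    print(excitation.shape)  # (4800,) samples for 60 frames at hop 80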

Related papers
Cheng Gong, Ye Lu, Kunpeng Xie, Zongming Jin, Tao Li, Yanzhi Wang · November 17, 2021

Machine Learning-Assisted Analysis of Small Angle X-ray Scattering
Piotr Tomaszewski, Shun Yu, Markus Borg, Jerk Rönnols · November 16, 2021

Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan, Bin Yu · November 16, 2021

Xiaotian Dai, Guifang Fu, Randall Reese, Shaofei Zhao, Zuofeng Shang · November 15, 2021

Sonia Meskini, Ali Bou Nassif, Luiz Fernando Capretz · November 12, 2021

BitextEdit: Automatic Bitext Editing for Improved Low-Resource Machine Translation
Eleftheria Briakou, Sida I. Wang, Luke Zettlemoyer, Marjan Ghazvininejad · November 12, 2021

Xia Hu, Lingyang Chu, Jian Pei, Weiqing Liu, Jiang Bian · March 8, 2021

Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon · February 28, 2020

Xin Wang, Shinji Takaki, Junichi Yamagishi · November 26, 2018