GPT has been shown to be capable of extracting general information from language sequences, thereby benefiting a wide range of downstream tasks. This motivates us to use pre-trained models to explore the hidden inherent information in DNA sequences. However, data and task requirements in DNA sequence analyses come in different formats, such as generation, prediction, and regression; they are complex and involve different modalities, such as nucleotide sequences and expression levels. Existing BERT-based models are mostly designed for prediction tasks and take only sequence data as input and output, and thus cannot easily handle the variety of DNA analysis tasks within a single model. Herein, we propose a generalized pre-trained DNA model, DNAGPT, trained on over 200 billion base pairs from all mammals. We enhance the classic GPT model by adding a binary classification task (DNA sequence order) and a numerical regression task (guanine-cytosine content prediction) during pre-training, and by extending the architecture with corresponding embedding layers and encoding heads. We also design a comprehensive token language that encodes sequence, numerical, and task-related information in the same token space. As a result, DNAGPT can handle versatile DNA analysis tasks and simultaneously process both sequence and numerical data. We evaluate our model on genomic signal and region recognition, pseudo-genome generation, and mRNA abundance regression tasks. We demonstrate that, benefiting from pre-training, DNAGPT achieves superior performance over existing models specially designed for these downstream tasks.
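To make the multi-task design concrete, the following is a minimal PyTorch sketch of how the three pre-training heads (generation, sequence-order classification, and GC-content regression) and a numerical embedding layer could sit on a shared GPT backbone. All names here (`DNAGPTSketch`, `order_head`, `gc_head`, etc.) are hypothetical illustrations under our stated assumptions, not the released DNAGPT implementation.

```python
import torch
import torch.nn as nn

class DNAGPTSketch(nn.Module):
    """Toy multi-task wrapper around a GPT-style backbone (illustrative only)."""

    def __init__(self, backbone: nn.Module, d_model: int, vocab_size: int):
        super().__init__()
        self.backbone = backbone                       # shared transformer decoder
        self.num_embed = nn.Linear(1, d_model)         # embeds scalars (e.g. GC content) into token space
        self.lm_head = nn.Linear(d_model, vocab_size)  # next-token head for generation
        self.order_head = nn.Linear(d_model, 2)        # binary head: DNA sequence order
        self.gc_head = nn.Linear(d_model, 1)           # regression head: guanine-cytosine content

    def forward(self, token_embeds: torch.Tensor) -> dict:
        # token_embeds: (batch, seq_len, d_model); sequence tokens and embedded
        # numbers are assumed to share this space, as in the paper's token language.
        h = self.backbone(token_embeds)                # (batch, seq_len, d_model)
        pooled = h.mean(dim=1)                         # simple pooling for sequence-level heads
        return {
            "token_logits": self.lm_head(h),             # generation task
            "order_logits": self.order_head(pooled),     # classification task
            "gc_pred": self.gc_head(pooled).squeeze(-1), # regression task
        }
```

Under this sketch, pre-training would simply sum a cross-entropy loss on the generation and classification outputs with a mean-squared-error loss on the regression output, so one backbone is optimized for all three objectives at once.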