The minimizer of a word of size $k$ (a $k$-mer) is defined as its smallest substring of size $m$ (with $m \leq k$), according to some ordering on $m$-mers. Minimizers have notably been used in bioinformatics to partition sequencing datasets, binning together $k$-mers that share the same minimizer. It is folklore that using the lexicographic order leads to very unbalanced partitions, which has spawned an abundant literature devoted to devising alternative orders that achieve better-balanced partitions. To the best of our knowledge, the imbalance of lexicographic minimizer partitions has never been investigated from a theoretical point of view. In this article, we aim to fill this gap and determine, for a given minimizer, how many $k$-mers admit it as their minimizer, that is, the size of the bucket associated with that minimizer in the worst case, where all $k$-mers appear in the data. We show that this number can be computed in $O(km)$ space and $O(km^2)$ time. We further introduce approximations that can be computed in $O(k)$ space and $O(km)$ time. We also show on genomic datasets that the observed number of $k$-mers associated with a minimizer is closely correlated with the theoretical expected number. We introduce two conjectures that could help closely approximate the total number of $k$-mers sharing a minimizer. We believe that characterising the distribution of the number of $k$-mers per minimizer will help devise efficient lexicographic minimizer bucketing schemes.
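To make the setting concrete, here is a minimal Python sketch (not from the paper) of the lexicographic minimizer and of the worst-case bucket-size question studied here: it brute-forces all $4^k$ $k$-mers over the DNA alphabet for toy parameters $k=6$, $m=3$ and counts how many land in each minimizer's bucket; the parameter values and the exhaustive enumeration are illustrative only.

```python
from itertools import product
from collections import Counter

def minimizer(kmer: str, m: int) -> str:
    """Return the lexicographically smallest length-m substring of kmer."""
    return min(kmer[i:i + m] for i in range(len(kmer) - m + 1))

# Toy parameters; the paper considers general k >= m.
k, m = 6, 3

# Worst case: every possible k-mer occurs once in the data.
# Count, for each minimizer, the size of its bucket.
buckets = Counter(minimizer("".join(p), m) for p in product("ACGT", repeat=k))

# The imbalance is already visible at this scale: e.g. 'AAA' is the
# minimizer of every k-mer containing it, while 'TTT' is the minimizer
# of 'TTTTTT' alone.
for mmer, size in buckets.most_common(3):
    print(mmer, size)
print("TTT", buckets["TTT"])
```

The quantity printed for each $m$-mer is exactly the worst-case bucket size that this article characterises; the brute force above costs $O(4^k)$, whereas the results announced here compute it in $O(km^2)$ time.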