In the last decade, companies have started collecting large amounts of data. Without proper analysis, such data are usually useless. The field concerned with analysing these data is called data mining. Unfortunately, the amount of data is often so large that it does not fit into main memory and the processing time becomes prohibitive. Therefore, we need parallel data mining algorithms. One popular and important data mining task is the generation of so-called frequent itemsets. The problem of mining frequent itemsets can be explained with the following example: customers come into a store and put goods into their baskets; the store owner collects the baskets and wants to know the sets of goods that are bought together in at least p% of the baskets. Currently, sequential algorithms for mining frequent itemsets perform quite well. However, parallel algorithms for mining frequent itemsets still do not achieve good speedup. In this thesis, we develop a parallel method for mining frequent itemsets that can be used with an arbitrary depth-first-search sequential algorithm on a distributed-memory parallel computer. Our method achieves a speedup of ~6 on 10 processors. The method is based on an approximate estimation of processor load from a database sample; however, it always computes the set of frequent itemsets from the whole database. In this thesis, we present the theory underlying our method and evaluate the performance of the estimation process.
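To make the market-basket example concrete, the following is a minimal Python sketch of the frequent itemset problem itself, not of the parallel method developed in this thesis. The toy baskets and the min_support value are invented for illustration; the sketch uses naive candidate enumeration rather than any optimized sequential or parallel algorithm.

```python
from itertools import combinations

# Toy market-basket data: each basket is the set of goods one customer bought.
baskets = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"beer", "milk"},
    {"bread", "butter", "milk", "beer"},
]

def frequent_itemsets(baskets, min_support):
    """Return every itemset contained in at least min_support (a fraction,
    e.g. 0.6) of the baskets, found by naive enumeration of candidates."""
    items = sorted(set().union(*baskets))
    threshold = min_support * len(baskets)
    result = {}
    for size in range(1, len(items) + 1):
        found_any = False
        for candidate in combinations(items, size):
            # Support = number of baskets that contain the whole candidate set.
            support = sum(1 for b in baskets if set(candidate) <= b)
            if support >= threshold:
                result[candidate] = support
                found_any = True
        # If no itemset of this size is frequent, no larger one can be
        # (every superset has at most the support of its subsets).
        if not found_any:
            break
    return result

print(frequent_itemsets(baskets, min_support=0.6))
# Example output: frequent single items and the pair ('bread', 'milk').
```

With min_support = 0.6 (3 of the 5 baskets), this reports bread, milk, and beer as frequent items and {bread, milk} as the only frequent pair, which is exactly the kind of answer the store owner in the example is after.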