There is no unified definition of data anomalies, which refer to specific patterns of data operations that may violate the consistency of the database. Known data anomalies include Dirty Write, Dirty Read, Non-repeatable Read, Phantom, Read Skew, and Write Skew. To improve the efficiency of concurrency control algorithms, data anomalies are also used to define isolation levels, since weaker isolation levels can improve the efficiency of transaction processing systems. This paper systematically studies data anomalies and the corresponding isolation levels. We report twenty-two new data anomalies that have not been reported in other papers, and we provide a classification of all data anomalies. Based on this classification, we propose two new isolation-level systems with different granularity, which reveal the rules for defining isolation levels based on data anomalies and make the understanding of data anomalies and isolation levels more concise.
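As a minimal illustration of one such anomaly (not taken from the paper), the following Python sketch shows how Write Skew can arise under snapshot isolation: two transactions each read both of two values in their own snapshot, verify a shared invariant, and then update disjoint items, so both commits succeed even though the resulting state violates the invariant. The variable names and the invariant x + y >= 1 are assumptions chosen for the example.

```python
# A sketch of Write Skew under snapshot isolation (illustrative only).
# Invariant the application intends to preserve: x + y >= 1.

snapshot = {"x": 1, "y": 1}          # committed database state

# Each transaction works on its own snapshot taken at start time.
t1_view = dict(snapshot)
t2_view = dict(snapshot)

# T1: the invariant holds in its snapshot, so it sets x to 0.
t1_writes = {}
if t1_view["x"] + t1_view["y"] >= 1:
    t1_writes = {"x": 0}

# T2: the invariant also holds in its snapshot, so it sets y to 0.
t2_writes = {}
if t2_view["x"] + t2_view["y"] >= 1:
    t2_writes = {"y": 0}

# Snapshot isolation only aborts on overlapping write sets ("first
# committer wins"); the write sets {x} and {y} are disjoint, so both
# transactions commit successfully.
snapshot.update(t1_writes)
snapshot.update(t2_writes)

print(snapshot)                       # {'x': 0, 'y': 0} -- invariant broken
```

Each transaction is individually consistent, yet their interleaving produces a state neither of them would have allowed, which is why Write Skew is not prevented by snapshot isolation and must be handled at a stronger isolation level such as serializability.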