Empirical studies on refactoring require gathering real refactoring instances; however, existing refactoring detection approaches are insufficient in both accuracy and coverage. Reducing the manual effort of curating refactoring data while still obtaining diverse refactoring data accurately is challenging. This paper proposes a tool named RefactorHub, which supports users in manually annotating potential refactoring-related commits obtained from existing refactoring detection approaches, making their refactoring information more accurate and complete with rich details. In the proposed approach, the parameters of each refactoring operation are defined as a meaningful set of code elements in the versions before or after the refactoring. RefactorHub provides interfaces and supporting features for annotating each parameter, such as the automated filling of dependent parameters, thereby avoiding wrong or uncertain selections. A preliminary user study showed that RefactorHub reduced annotation effort and improved the degree of agreement among users. The source code and a demo video are available at https://github.com/salab/RefactorHub