In this paper, we revisit the use of honeypots for detecting reflective amplification attacks. These measurement tools require careful design of both data collection and data analysis, including cautious threshold inference. We survey common amplification honeypot platforms as well as the underlying methods to infer attack detection thresholds and to extract knowledge from the data. By systematically exploring the threshold space, we find that most honeypot platforms produce comparable results despite their different configurations. Moreover, by drawing on data from a large-scale honeypot deployment, network telescopes, and a real-world baseline obtained from a leading DDoS mitigation provider, we question the fundamental assumption of honeypot research that convergence of observations implies their completeness. Conclusively, we derive guidance for precise, reproducible honeypot research and present open challenges.