Since the outbreak of the COVID-19 pandemic, videoconferencing has become the default mode of communication in our daily lives at home, in workplaces, and in schools, and it is likely to remain an important part of our lives in the post-pandemic world. Despite its significance, there has been no systematic study characterizing the user-perceived performance of existing videoconferencing systems beyond anecdotal reports. In this paper, we present a detailed measurement study that compares three major videoconferencing systems: Zoom, Webex, and Google Meet. Our study is based on 48 hours' worth of more than 700 videoconferencing sessions, created with a mix of emulated videoconferencing clients deployed in the cloud and real mobile devices running on a residential network. We find that the existing videoconferencing systems vary in geographic scope, which in turn determines the streaming lag experienced by users. We also observe that the streaming rate can change under different conditions (e.g., the number of users in a session, mobile device status, etc.), which affects user-perceived streaming quality. Beyond these findings, our measurement methodology enables reproducible benchmark analysis for any type of comparative or longitudinal study of available videoconferencing systems.