Prefetching web pages is a well-studied solution to reduce network latency by predicting users' future actions based on their past behaviors. However, such techniques are largely unexplored on mobile platforms. Today's privacy regulations make it infeasible to explore prefetching with the usual strategy of amassing large amounts of data over long periods and constructing conventional, "large" prediction models. Our work is based on the observation that this may not be necessary: Given previously reported mobile-device usage trends (e.g., repetitive behaviors in brief bursts), we hypothesized that prefetching should work effectively with "small" models trained on mobile-user requests collected during much shorter time periods. To test this hypothesis, we constructed a framework for automatically assessing prediction models, and used it to conduct an extensive empirical study based on over 15 million HTTP requests collected from nearly 11,500 mobile users during a 24-hour period, resulting in over 7 million models. Our results demonstrate the feasibility of prefetching with small models on mobile platforms, directly motivating future work in this area. We further introduce several strategies for improving prediction models while reducing the model size. Finally, our framework provides the foundation for future explorations of effective prediction models across a range of usage scenarios.