As conventional storage density reaches its physical limits, the cost of a gigabyte of storage is no longer plummeting; it has remained mostly flat for the past decade. Meanwhile, file sizes continue to grow, leading to ever-fuller drives. When a user's storage fills up, they must disrupt their workflow to laboriously find large files that are good candidates for deletion. Separately, the web acts as a distributed storage network, providing free access to petabytes of redundant files across 200 million websites. An automated method for restoring files from the web would enable more efficient storage management, since files readily recoverable from the web make good candidates for removal. Despite this, there are no established methods for automatically detecting these files and ensuring their easy recoverability from the web, as little is known about either users' largest files or their origins on the web. This study therefore seeks to determine which files consume the most space in users' storage and, from this, to propose an automated method for selecting candidate files for removal. Our investigations show that 989 MB of storage per user can be saved by inspecting the preexisting metadata of their 25 largest files alone, with the files still recoverable from the web 3 months later. This demonstrates the feasibility of applying such a method in a climate of increasingly scarce local storage resources.
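The selection step described above can be pictured with a short sketch: enumerate a user's largest files and check whether their preexisting metadata records a download-origin URL. This is a minimal illustration under stated assumptions, not the study's implementation; the extended-attribute name user.xdg.origin.url (written by some Linux download tools) and the Linux-only os.getxattr call are assumptions, and the 25-file cutoff simply mirrors the figure quoted above.

```python
import os
import heapq

def largest_files(root, n=25):
    """Return the n largest regular files under root as (size, path) pairs."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # broken symlink, permission error, etc.
            sizes.append((size, path))
    return heapq.nlargest(n, sizes)

def origin_url(path):
    """Best-effort lookup of a download-origin URL stored as an extended
    attribute (assumed here to be user.xdg.origin.url, which some Linux
    download tools set). Returns None if the attribute is absent or the
    platform does not expose os.getxattr."""
    try:
        return os.getxattr(path, "user.xdg.origin.url").decode()  # Linux-only API
    except (OSError, AttributeError):
        return None

if __name__ == "__main__":
    for size, path in largest_files(os.path.expanduser("~")):
        url = origin_url(path)
        if url:
            print(f"{size / 1e6:8.1f} MB  {path}  <- possibly recoverable from {url}")
```

In this sketch, files whose metadata yields a URL are the candidates for removal; verifying that the URL still serves an identical file (and re-checking months later) is the part the study evaluates and is not shown here.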