Web caching is essential to the World Wide Web: it saves processing power and bandwidth and reduces latency. Most proxy caching solutions focus on buffering data served by the origin server and neglect cacheable data destined for server writes. Existing systems that do address this problem are often intrusive, requiring modifications to the main application for integration. We identify opportunities to enhance conventional caching proxies, and this paper explores, designs, and implements a prototype of such an application. Our focus is on exploiting the fact that, in relational databases, bulk data writes are faster than single data writes. If an upload request matches a specified cacheable URL, its data is extracted and buffered on the local disk for a later bulk write. Existing caching proxies such as Squid would, in the same uploading scenario, simply redirect the request, forgoing potential gains such as reduced processing power, server load, and bandwidth. After prototyping the suggested application and testing it against Squid on data uploads of 1, 100, 1,000, ..., and 100,000 requests, we consistently observed query execution improvements ranging from 5 to 9 times, depending on the specific test conditions, achieved by buffering and bulk-writing the data.
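As a rough illustration of the mechanism described above (not the authors' implementation), the sketch below buffers matching upload payloads to a local file and later writes them to a relational database with a single bulk statement instead of one INSERT per request. The language (Python), the use of sqlite3 as a stand-in database, and the URL pattern, table schema, and flush threshold are all assumptions made for illustration.

```python
# Sketch of the buffering idea: upload requests whose URL matches a configured
# cacheable pattern are appended to a local disk buffer and later written to a
# relational database in one bulk operation instead of one INSERT per request.
# sqlite3, the URL pattern, table name, and threshold are illustrative assumptions.
import json
import re
import sqlite3
from pathlib import Path

CACHEABLE_URL = re.compile(r"^/api/uploads/")   # assumed cacheable URL pattern
BUFFER_FILE = Path("upload_buffer.jsonl")       # local-disk buffer (JSON lines)
FLUSH_THRESHOLD = 1000                          # bulk-write after this many requests


def handle_upload(url: str, payload: dict) -> bool:
    """Buffer the payload on disk if the URL is cacheable; return True if buffered."""
    if not CACHEABLE_URL.match(url):
        return False  # non-matching requests would be forwarded to the origin server
    with BUFFER_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(payload) + "\n")
    buffered = BUFFER_FILE.read_text(encoding="utf-8").count("\n")
    if buffered >= FLUSH_THRESHOLD:
        flush_buffer()
    return True


def flush_buffer() -> None:
    """Write all buffered payloads to the database in a single bulk INSERT."""
    if not BUFFER_FILE.exists():
        return
    rows = [
        (rec["user"], rec["data"])
        for rec in map(json.loads, BUFFER_FILE.read_text(encoding="utf-8").splitlines())
    ]
    conn = sqlite3.connect("uploads.db")
    conn.execute("CREATE TABLE IF NOT EXISTS uploads (user TEXT, data TEXT)")
    # One bulk write replaces len(rows) individual single-row INSERTs.
    conn.executemany("INSERT INTO uploads (user, data) VALUES (?, ?)", rows)
    conn.commit()
    conn.close()
    BUFFER_FILE.unlink()


if __name__ == "__main__":
    for i in range(3):
        handle_upload("/api/uploads/", {"user": f"u{i}", "data": f"payload {i}"})
    flush_buffer()
```

The point of the sketch is only the shape of the data path: per-request work is reduced to an append on local disk, and the database sees a single bulk write per flush rather than one write per upload.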