Recent developments in Machine Learning and Deep Learning depend heavily on cloud computing and specialized hardware, such as GPUs and TPUs, forcing users of these models to entrust private data to cloud servers. This scenario has prompted great interest in Homomorphic Encryption and Secure Multi-Party Computation protocols, which allow cloud computing power to be used in a privacy-preserving manner. When comparing the efficiency of such protocols, most works in the literature resort to complexity analysis, which yields asymptotic upper bounds on computational cost as the input size tends to infinity. These bounds may differ substantially from the actual cost or execution time when the computations are performed over small or average-sized datasets. We argue that Monte Carlo methods can provide better estimates of computational cost and execution time, fostering better design and implementation decisions for complex systems such as Privacy-Preserving Machine Learning Frameworks.
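To illustrate the idea, the following minimal sketch (not taken from the paper) estimates the expected cost of a single protocol operation by sampling input sizes from an assumed realistic workload distribution, rather than relying on a worst-case asymptotic bound. The cost model `encrypted_dot_product_cost` and the workload parameters are hypothetical placeholders; in practice one would time the actual protocol implementation.

```python
# Minimal sketch: Monte Carlo estimation of expected computational cost
# under an assumed distribution of input sizes. All names and constants
# below are illustrative assumptions, not part of any real framework.

import random
import statistics


def encrypted_dot_product_cost(n: int) -> float:
    # Hypothetical per-call cost model (milliseconds) for a homomorphic
    # dot product over vectors of length n; a real study would replace
    # this with measured timings of the protocol implementation.
    return 0.8 * n + 0.002 * n * n


def monte_carlo_cost(sample_input_size, cost_fn, trials: int = 10_000):
    """Estimate expected cost and its standard error by sampling inputs."""
    costs = [cost_fn(sample_input_size()) for _ in range(trials)]
    mean = statistics.fmean(costs)
    stderr = statistics.stdev(costs) / trials ** 0.5
    return mean, stderr


if __name__ == "__main__":
    # Assumed workload: feature-vector lengths clustered around 128,
    # far below the asymptotic regime where the quadratic term dominates.
    sample_size = lambda: max(1, int(random.gauss(mu=128, sigma=32)))
    mean, stderr = monte_carlo_cost(sample_size, encrypted_dot_product_cost)
    print(f"Estimated expected cost: {mean:.1f} ms (95% CI ±{1.96 * stderr:.1f} ms)")
```

Under these assumptions, the Monte Carlo estimate reflects the cost on typical inputs, which can differ markedly from the picture given by an asymptotic upper bound dominated by the quadratic term.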