Investigation of confidential container performance in fog computing for object detection tasks
In this paper, we propose and formalize a new general task relevant to distributed fog environments with intelligent edge/fog nodes: the secure scaling, updating, and (re)training of deployed ML models (Secure Scale Machine Learning, SSML). As part of the SSML solution for secure object detection in fog infrastructures for video systems, we propose a method for selecting the optimal combination (ensemble) of ML detector and ML framework that takes the features of the Intel SGX enclave into account. The proposed technique also makes it possible to assess, in the course of the experiment, whether edge/fog nodes are ready to detect objects in real time. Implementing the technique on a dedicated fog node with an SGX enclave, using SCONE in hardware mode, showed that secure inference incurs an almost 5-fold increase in latency for the TensorFlow interpreter and more than a 50-fold increase for TensorFlow Lite. The main cause of this high overhead is the limited size of the protected-memory page cache (the SGX Enclave Page Cache) and the resulting intensive page-swapping operations. The absolute latency of secure inference for the best of the studied ensembles (TinyYolo_v3, TensorFlow, SGX) was 331 ms per video-stream frame. We emphasize that real-time confidential inference requires both specialized builds of ML frameworks and modifications to ML detector architectures.
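To illustrate how per-frame latency figures such as the 331 ms reported above can be collected, the following is a minimal timing-harness sketch. It is not the paper's actual measurement setup: the function names, the warm-up count, and the dummy inference callable are illustrative assumptions; in practice `infer` would wrap a TensorFlow or TensorFlow Lite interpreter invocation running inside the SGX enclave.

```python
import time

def benchmark_per_frame_latency(infer, frames, warmup=3):
    """Return mean per-frame latency of `infer` in milliseconds.

    `infer` is any callable taking one frame (e.g., a wrapped
    TFLite interpreter invocation); `frames` is an iterable of
    inputs. The first `warmup` calls are excluded from timing.
    """
    frames = list(frames)
    # Warm-up runs: exclude JIT/cache effects from the measurement.
    for frame in frames[:warmup]:
        infer(frame)
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return elapsed_ms / len(frames)

# Example with a stand-in "detector" (a no-op placeholder):
mean_ms = benchmark_per_frame_latency(lambda f: f, range(100))
```

Comparing the mean latency measured inside the enclave against a native (non-SGX) run of the same ensemble yields the overhead factors discussed above.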