6 DISCUSSION 
The experimental results validate the feasibility of our idea of the bident network structure. For each considered structure, the trained model achieves at least 99% accuracy. In addition, the results on asymmetric network structures show that offloading computation from the trusted execution environment to the untrusted execution environment is feasible.
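As a concrete illustration, the following is a minimal PyTorch sketch of one possible asymmetric bident structure; the framework choice and layer sizes are illustrative assumptions, not the exact configurations evaluated in our experiments. A lightweight branch runs inside the trusted execution environment while a heavier branch is offloaded to the untrusted one, and the two branches interact only at the final output.

    import torch
    import torch.nn as nn

    class AsymmetricBidentNet(nn.Module):
        # Illustrative asymmetric bident network for MNIST-sized
        # input (1 x 28 x 28). The two branches never exchange
        # intermediate activations.
        def __init__(self, num_classes=10):
            super().__init__()
            # Lightweight branch: meant for the resource-constrained
            # trusted execution environment.
            self.trusted_branch = nn.Sequential(
                nn.Flatten(),
                nn.Linear(28 * 28, num_classes),
            )
            # Heavier branch: offloaded to the untrusted environment.
            self.untrusted_branch = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Flatten(),
                nn.Linear(32 * 7 * 7, num_classes),
            )

        def forward(self, x):
            # The input is replicated to both branches; the only
            # interaction between the sub-networks is this addition.
            return self.trusted_branch(x) + self.untrusted_branch(x)

    # Example: logits for a batch of one MNIST-sized image.
    model = AsymmetricBidentNet()
    logits = model(torch.randn(1, 1, 28, 28))

Because each environment holds only its own branch's parameters, neither environment alone suffices to reconstruct the full model.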
However, the training overhead varies among the different structures. We evaluate the performance overhead of the training phase by the required number of epochs, and that of the inference phase by the number of layers in the model. We consider the overhead of the training phase more acceptable than that of the inference phase: the required number of epochs indeed increases, so the training process takes longer, but since the total number of layers is the same across the models in our experiments, the real-time performance of the inference phase is preserved.
Bident network structures protect the model by splitting it across two different environments. However, this method does not prevent query-based model stealing attacks. To fully protect the model, a complementary protection is preferred, such as limiting the query throughput or detecting query-based attacks.
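For instance, limiting the query throughput can be as simple as placing a per-client token bucket in front of the prediction API; the sketch below is a hypothetical Python illustration with arbitrary rate parameters, not a component of our system.

    import time

    class TokenBucket:
        # Minimal token-bucket limiter: `rate` tokens per second are
        # replenished, up to a burst of `capacity` queries.
        def __init__(self, rate=5.0, capacity=10):
            self.rate = rate
            self.capacity = capacity
            self.tokens = float(capacity)
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Replenish tokens in proportion to the elapsed time.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False  # Client has exhausted its query budget.

    # A prediction service would call bucket.allow() before serving
    # each query and reject (or delay) the request when it returns False.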
 
7 CONCLUSION 
As trained models are crucial intellectual property for deep-learning-based applications, we propose the bident network structure to protect model confidentiality. The neural network is divided into two sub-networks with minimal intermediate interaction, and each sub-network is deployed in a different environment. As long as one cannot obtain the parameters from both environments, one cannot reconstruct the model. Experimental results of different bident network structures on the MNIST dataset demonstrate the feasibility of the approach with low performance overhead in the inference phase.
We list some potential directions for future research.
•	To validate the feasibility of bident networks in general, more experiments are required on different datasets, different types of input data (such as text instead of images), different ways of feeding data into the sub-networks (such as dividing the input instead of replicating it), and different types of models (such as recurrent neural networks instead of CNNs).
•	To quantify the impact on confidentiality, further investigation is required to analyze the information entropy of each sub-network.
•	In addition to trusted execution environments on devices, bident network structures are potentially applicable to machine-learning-as-a-service clouds with multi-server architectures, where each server holds one sub-network. As an extension, the bident network structure can be generalized to trident networks, quadruplet networks, and beyond.