# References

[1] Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., ... & Webster, D. R. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA, 316(22), 2402-2410. https://doi.org/10.1001/jama.2016.17216

[2] Badue, C., Guidolini, R., Carneiro, R. V., Azevedo, P., Cardoso, V. B., Forechi, A., ... & Oliveira-Santos, T. (2021). Self-driving cars: A survey. Expert Systems with Applications, 165, 113816. https://doi.org/10.1016/j.eswa.2020.113816

[3] Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. de O., Kaplan, J., ... & Zaremba, W. (2021). Evaluating Large Language Models Trained on Code. arXiv preprint arXiv:2107.03374. https://doi.org/10.48550/arXiv.2107.03374

[4] Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., ... & Sutskever, I. (2021). Zero-Shot Text-to-Image Generation. arXiv preprint arXiv:2102.12092. https://doi.org/10.48550/arXiv.2102.12092

[5] OpenAI. (2023). GPT-4 Technical Report. OpenAI. https://doi.org/10.48550/arXiv.2303.08774

[6] Thompson, N., Greenewald, K., Lee, K., & Manso, G. F. (2020). The Computational Limits of Deep Learning. arXiv preprint arXiv:2007.05558. https://doi.org/10.48550/arXiv.2007.05558

[7] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... & Amodei, D. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877-1901.

[8] Jouppi, N. P., Young, C., Patil, N., Patterson, D., Agrawal, G., Bajwa, R., ... & Laudon, J. (2017). In-Datacenter Performance Analysis of a Tensor Processing Unit. Proceedings of the 44th Annual International Symposium on Computer Architecture, 1-12. https://doi.org/10.1145/3079856.3080246

[9] Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L. M., Rothchild, D., ... & Dean, J. (2021). Carbon Emissions and Large Neural Network Training. arXiv preprint arXiv:2104.10350. https://doi.org/10.48550/arXiv.2104.10350

[10] Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645-3650. https://doi.org/10.18653/v1/P19-1355

[11] Moore, G. E. (1965). Cramming More Components onto Integrated Circuits. Electronics, 38(8).

[12] Thompson, N., Greenewald, K., Lee, K., & Manso, G. (2020). The Computational Limits of Deep Learning. arXiv preprint arXiv:2007.05558. https://doi.org/10.48550/arXiv.2007.05558

[13] Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., ... & Amodei, D. (2020). Scaling Laws for Neural Language Models. arXiv preprint arXiv:2001.08361. https://doi.org/10.48550/arXiv.2001.08361

[14] Sze, V., Chen, Y. H., Yang, T. J., & Emer, J. S. (2017). Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Proceedings of the IEEE, 105(12), 2295-2329. https://doi.org/10.1109/JPROC.2017.2761740

[15] Nah, F. F.-H. (2004). A study on tolerable waiting time: how long are Web users willing to wait? Behaviour & Information Technology, 23(3), 153–163. https://doi.org/10.1080/01449290410001669914

[16] Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... & Amodei, D. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877–1901.

[17] OpenAI. (2023). GPT-4 Technical Report. OpenAI. Retrieved from https://openai.com/research/gpt-4

[18] Nielsen, J. (1994). Usability Engineering. Morgan Kaufmann.

[19] Schuster, M., Paliwal, K. K., & Sim, K. C. (2020). On-device End-to-End Speech Recognition. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, 6059–6063. https://doi.org/10.1109/ICASSP40776.2020.9054251

[20] Singh, A., & Sharma, N. (2019). Edge Computing in Autonomous Vehicles: Opportunities and Challenges. IEEE Internet of Things Magazine, 2(1), 26–31. https://doi.org/10.1109/IOTM.0001.1900013

[21] U.S. Department of Justice. (2020). Paige Thompson Indictment. Retrieved from https://www.justice.gov/usao-wdwa/press-release/file/1197786/download

[22] Martin, K. E. (2019). Ethical Issues in the Big Data Industry. MIS Quarterly Executive, 18(2), 209–232.

[23] European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation). Official Journal of the European Union.

[24] Cisco. (2020). Cisco Annual Internet Report (2018–2023). Retrieved from https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html

[25] Intel. (2016). Data is the New Oil in the Future of Automated Driving. Retrieved from https://newsroom.intel.com/editorials/krzanich-the-future-of-automated-driving/

[26] Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., ... & Zaharia, M. (2010). A View of Cloud Computing. Communications of the ACM, 53(4), 50–58. https://doi.org/10.1145/1721654.1721672

[27] Jones, N. (2018). How to Stop Data Centres from Gobbling Up the World's Electricity. Nature, 561(7722), 163–166. https://doi.org/10.1038/d41586-018-06610-y

[28] International Energy Agency. (2021). Data Centres and Data Transmission Networks. Retrieved from https://www.iea.org/reports/data-centres-and-data-transmission-networks

[29] Shi, W., Cao, J., Zhang, Q., Li, Y., & Xu, L. (2016). Edge Computing: Vision and Challenges. IEEE Internet of Things Journal, 3(5), 637–646. https://doi.org/10.1109/JIOT.2016.2579198

[30] Satyanarayanan, M. (2017). The Emergence of Edge Computing. Computer, 50(1), 30–39. https://doi.org/10.1109/MC.2017.9

[31] Xu, X., Liu, C., Zhang, K., Li, Y., & Peng, K. (2018). A Survey on Edge Computing for the Internet of Things. Electronics, 7(12), 113. https://doi.org/10.3390/electronics7080113

[32] Porambage, P., Okwuibe, J., Liyanage, M., Ylianttila, M., & Taleb, T. (2018). Survey on Multi-Access Edge Computing for Internet of Things Realization. IEEE Communications Surveys & Tutorials, 20(4), 2961–2991. https://doi.org/10.1109/COMST.2018.2849509

[33] Mach, P., & Becvar, Z. (2017). Mobile Edge Computing: A Survey on Architecture and Computation Offloading. IEEE Communications Surveys & Tutorials, 19(3), 1628–1656. https://doi.org/10.1109/COMST.2017.2682318

[34] Chen, J., & Ran, X. (2019). Deep Learning With Edge Computing: A Review. Proceedings of the IEEE, 107(8), 1655–1674. https://doi.org/10.1109/JPROC.2019.2921977

[35] Li, B., Li, Z., & Liu, J. (2018). Deep Learning-Based Object Detection on Autonomous Driving Vehicles. IEEE Transactions on Intelligent Transportation Systems, 19(11), 3594–3608. https://doi.org/10.1109/TITS.2018.2838576

[36] Ren, J., Zhang, D., He, S., Zhang, Y., & Li, T. (2019). A Survey on End-Edge-Cloud Orchestrated Network Computing Paradigms: Transparent Computing, Mobile Edge Computing, Fog Computing, and Cloudlet. ACM Computing Surveys, 52(6), 1–36. https://doi.org/10.1145/3362031

[37] European Parliament and Council of European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation). Official Journal of the European Union, L119, 1–88.

[38] Wang, S., Zhang, X., Zhang, Y., Wang, L., Yang, J., & Wang, W. (2017). A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications. IEEE Access, 5, 6757–6779. https://doi.org/10.1109/ACCESS.2017.2685434

[39] Lin, J., Yu, W., Zhang, N., Yang, X., Zhang, H., & Zhao, W. (2017). A Survey on Internet of Things: Architecture, Enabling Technologies, Security and Privacy, and Applications. IEEE Internet of Things Journal, 4(5), 1125–1142. https://doi.org/10.1109/JIOT.2017.2683200

[40] Lane, N. D., Georgiev, P., & Qendro, L. (2015). DeepEar: Robust Smartphone Audio Sensing in Unconstrained Acoustic Environments Using Deep Learning. Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 283–294. https://doi.org/10.1145/2750858.2804262

[41] Kugler, L. (2018). The Next Frontier of AI: Edge Computing. Communications of the ACM, 61(12), 15–16. https://doi.org/10.1145/3276744

[42] Premsankar, G., Di Francesco, M., & Taleb, T. (2018). Edge Computing for the Internet of Things: A Case Study. IEEE Internet of Things Journal, 5(2), 1275–1284. https://doi.org/10.1109/JIOT.2018.2805263

[43] Sood, S. K., & Mahajan, I. (2017). Wearable IoT Sensor Based Healthcare System for Identifying and Controlling Chikungunya Virus. Computers in Industry, 91, 33–44. https://doi.org/10.1016/j.compind.2017.05.003

[44] Yang, Z., & Yu, M. (2017). Face Recognition Attendance System Based on Real-Time Video Processing. 2017 13th IEEE International Conference on Electronic Measurement & Instruments (ICEMI), 697–701. https://doi.org/10.1109/ICEMI.2017.8265850

[45] Zhang, W., Shi, W., & Lu, S. (2018). Collaborative Edge Computing for Context-Aware Disaster Warning and Alerting Systems. Journal of Parallel and Distributed Computing, 119, 145–152. https://doi.org/10.1016/j.jpdc.2018.04.004

[46] Deng, S., Zhao, H., Fang, W., Yin, J., Dustdar, S., & Zomaya, A. Y. (2020). Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence. IEEE Internet of Things Journal, 7(8), 7457–7469. https://doi.org/10.1109/JIOT.2020.2984887

[47] Alaa, M., Zaidan, A. A., Zaidan, B. B., Talal, M., & Kiah, M. L. M. (2017). A Review of Smart Home Applications Based on Internet of Things. Journal of Network and Computer Applications, 97, 48–65. https://doi.org/10.1016/j.jnca.2017.08.017

[48] Jouppi, N. P., Young, C., Patil, N., Patterson, D., Agrawal, G., Bajwa, R., ... & Laudon, J. (2017). In-Datacenter Performance Analysis of a Tensor Processing Unit. Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. https://doi.org/10.1145/3079856.3080246

[49] Google Cloud. (2019). Edge TPU Performance Benchmarks. Retrieved from https://cloud.google.com/edge-tpu/docs/performance

[50] Deng, L., Li, G., Han, S., Shi, L., & Xie, Y. (2020). Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey. Proceedings of the IEEE, 108(4), 485–532. https://doi.org/10.1109/JPROC.2020.2976475

[51] Apple Inc. (2020). iPhone 12 Pro and iPhone 12 Pro Max: The Most Powerful iPhones Ever with Advanced Technologies. Retrieved from https://www.apple.com/newsroom/2020/10/iphone-12-pro-and-iphone-12-pro-max-the-most-powerful-iphones-ever/

[52] Huawei. (2019). Kirin 990 5G: World's First Flagship 5G SoC. Retrieved from https://consumer.huawei.com/en/campaign/kirin-990/

[53] Banbury, C. R., Reddi, V. J., Lam, M., Fu, W., Fazel, A., Holleman, J., ... & Whatmough, P. N. (2020). Benchmarking TinyML Systems: Challenges and Direction. Proceedings of the 2020 Conference on Machine Learning and Systems, 8–17.

[54] ARM Limited. (2013). big.LITTLE Technology: The Future of Mobile. Retrieved from https://www.arm.com/files/pdf/big_LITTLE_Technology_the_Futue_of_Mobile.pdf

[55] NVIDIA Corporation. (2019). NVIDIA Jetson Nano Developer Kit. Retrieved from https://developer.nvidia.com/embedded/jetson-nano-developer-kit

[56] Jacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard, A., ... & Kalenichenko, D. (2018). Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2704–2713. https://doi.org/10.1109/CVPR.2018.00286

[57] Krishnamoorthi, R. (2018). Quantizing Deep Convolutional Networks for Efficient Inference: A Whitepaper. arXiv preprint arXiv:1806.08342. https://doi.org/10.48550/arXiv.1806.08342

[58] Han, S., Mao, H., & Dally, W. J. (2016). Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. International Conference on Learning Representations. Retrieved from https://arxiv.org/abs/1510.00149

[59] Blalock, D., Ortiz, J. J. G., Frankle, J., & Guttag, J. (2020). What is the State of Neural Network Pruning? Proceedings of Machine Learning and Systems, 129–146.

[60] Molchanov, P., Tyree, S., Karras, T., Aila, T., & Kautz, J. (2017). Pruning Convolutional Neural Networks for Resource Efficient Inference. International Conference on Learning Representations. Retrieved from https://arxiv.org/abs/1611.06440

[61] Gale, T., Elsen, E., & Hooker, S. (2019). The State of Sparsity in Deep Neural Networks. arXiv preprint arXiv:1902.09574. https://doi.org/10.48550/arXiv.1902.09574

[62] Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the Knowledge in a Neural Network. Proceedings of the NIPS Deep Learning and Representation Learning Workshop. Retrieved from https://arxiv.org/abs/1503.02531

[63] Jiao, X., Yin, Y., Shang, L., Jiang, X., Chen, X., Li, L., & Wang, F. (2020). TinyBERT: Distilling BERT for Natural Language Understanding. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 4163–4174. https://doi.org/10.18653/v1/2020.emnlp-main.346

[64] TensorFlow. (2021). TensorFlow Lite. Retrieved from https://www.tensorflow.org/lite

[65] Google Developers. (2020). TensorFlow Lite Guide. Retrieved from https://www.tensorflow.org/lite/guide

[66] PyTorch. (2020). PyTorch Mobile. Retrieved from https://pytorch.org/mobile/home/

[67] Lin, T., Ma, Z., & Torralba, A. (2020). Design Space for Deep Learning Accelerators. Proceedings of the 3rd Conference on Systems and Machine Learning (SysML).

[68] Sze, V., Chen, Y. H., Yang, T. J., & Emer, J. S. (2017). Efficient Processing of Deep Neural Networks: A Tutorial and Survey. Proceedings of the IEEE, 105(12), 2295–2329. https://doi.org/10.1109/JPROC.2017.2761740

[69] Reddi, V. J., Cheng, C., Kanter, D., Mattson, P., Schmuelling, G., Wu, C.-J., ... & Zaharia, M. (2020). MLPerf Inference Benchmark. Proceedings of the ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), 446–459. https://doi.org/10.1109/ISCA45697.2020.00045

[70] Guo, Y., Huang, J., & Wu, Z. (2019). Collaborative Computing for Edge Intelligence. IEEE Communications Magazine, 57(12), 14–17. https://doi.org/10.1109/MCOM.001.1900403

[71] Mattson, P., Cheng, C., Coleman, C., Diamos, G., Micikevicius, P., Patterson, D., ... & Reddi, V. J. (2020). MLPerf Training Benchmark. Proceedings of Machine Learning and Systems, 336–349.

[72] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., ... & Lample, G. (2023). LLaMA: Open and Efficient Foundation Language Models. arXiv preprint arXiv:2302.13971. https://doi.org/10.48550/arXiv.2302.13971

[73] Meta AI Research. (2023). LLaMA: Open and Efficient Foundation Language Models. Retrieved from https://ai.facebook.com/blog/large-language-model-llama-meta-ai/

[74] Apple Inc. (2021). Privacy-Preserving Machine Learning. Retrieved from https://machinelearning.apple.com/

[75] Apple Inc. (2020). A14 Bionic: A New Level of Performance and Power Efficiency. Retrieved from https://www.apple.com/newsroom/2020/10/iphone-12-and-iphone-12-mini-a-new-era-for-iphone-with-5g/

[76] Gurman, M. (2023). Apple Works on AI Tools to Challenge OpenAI and Google. Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2023-07-19/apple-develops-ai-tools-to-challenge-openai-google

[77] Apple Inc. (2021). Advanced Privacy Technologies. Retrieved from https://www.apple.com/privacy/

[78] Google Cloud. (2018). Edge TPU Overview. Retrieved from https://cloud.google.com/edge-tpu/

[79] Coral. (2021). Products. Retrieved from https://coral.ai/products/

[80] Google Developers. (2021). ML Kit. Retrieved from https://developers.google.com/ml-kit

[81] TensorFlow. (2021). TensorFlow Lite. Retrieved from https://www.tensorflow.org/lite

[82] Qualcomm Technologies, Inc. (2023). Qualcomm and Meta Collaborate to Enable On-Device AI Applications with Llama 2. Retrieved from https://www.qualcomm.com/news/releases/2023/07/qualcomm-and-meta-collaborate-to-enable-on-device-ai-applications-with-llama-2

[83] Qualcomm Technologies, Inc. (2021). The Hybrid AI Approach: Empowering Devices with On-Device and Cloud AI. Retrieved from https://www.qualcomm.com/media/documents/files/hybrid-ai-whitepaper.pdf

[84] Khemka, A. (2021). The Future of AI is Hybrid: Balancing Edge and Cloud. Qualcomm Blog. Retrieved from https://www.qualcomm.com/news/onq/2021/04/future-ai-hybrid-balancing-edge-and-cloud

[85] Alibaba Cloud. (2020). Link IoT Edge. Retrieved from https://www.alibabacloud.com/product/link-iot-edge

[86] Samsung Electronics. (2021). Exynos Processors with Advanced AI Capabilities. Retrieved from https://semiconductor.samsung.com/processor/mobile-processor/

[87] Huawei. (2021). Ascend AI Processor Series. Retrieved from https://e.huawei.com/en/products/servers/ascend

[88] Microsoft Azure. (2021). Azure IoT Edge. Retrieved from https://azure.microsoft.com/services/iot-edge/

[89] NVIDIA Corporation. (2021). NVIDIA Jetson Platform. Retrieved from https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/

[90] Li, Y., & Yao, X. (2020). Demystifying the AI Democratization: A Look into AI Accessibility. Communications of the ACM, 63(10), 32–34.

[91] Bryant, R. E., Katz, R. H., & Lazowska, E. D. (2008). Big-Data Computing: Creating Revolutionary Breakthroughs in Commerce, Science, and Society. Computing Research Initiatives, 1–15.

[92] Lee, K.-F. (2018). AI Superpowers: China, Silicon Valley, and the New World Order. Houghton Mifflin Harcourt.

[93] Chen, Y., & Lin, Z. (2021). Edge AI: Empowering AI at the Edge. IEEE Internet of Things Magazine, 4(2), 8–9.

[94] Xu, C., Liu, Z., & Chen, H. (2021). Bridging Industry and Academia in Edge AI: A Collaborative Approach. IEEE Access, 9, 14598–14608.

[95] Konečný, J., McMahan, H. B., Yu, F. X., Richtárik, P., Suresh, A. T., & Bacon, D. (2016). Federated Learning: Strategies for Improving Communication Efficiency. arXiv preprint arXiv:1610.05492. https://doi.org/10.48550/arXiv.1610.05492

[96] Wang, S., & Krishnan, E. (2018). Mobile Health Technology: A New Paradigm for Healthcare. IEEE Consumer Electronics Magazine, 7(5), 53–57.

[97] Xu, W., Zhang, K., Zhou, Y., Chen, X., & Li, X. (2019). Health Monitoring and Management Using Internet of Things (IoT) Sensing with Cloud-Based Processing: Opportunities and Challenges. IEEE Network, 33(6), 27–33.

[98] Chaudhuri, S., Thompson, H., & Demiris, G. (2014). Fall Detection Devices and Their Use With Older Adults: A Systematic Review. Journal of Geriatric Physical Therapy, 37(4), 178–196.

[99] Boughton, C. K., & Hovorka, R. (2019). Is an Artificial Pancreas (Closed-Loop System) for Type 1 Diabetes Effective? Diabetic Medicine, 36(3), 279–286.

[100] U.S. Department of Health & Human Services. (2013). Summary of the HIPAA Privacy Rule. Retrieved from https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html

[101] Rieke, N., Hancox, J., Li, W., Milletari, F., Roth, H. R., Albarqouni, S., ... & Cardoso, M. J. (2020). The Future of Digital Health with Federated Learning. npj Digital Medicine, 3(1), 1–7.

[102] Ding, A. Y., Kousiouris, G., & Mavromoustakis, C. X. (2019). Edge and Fog Computing for the Internet of Things: A Survey on Current Trends and Future Directions. IEEE Access, 7, 111022–111035.

[103] Shi, W., Cao, J., Zhang, Q., Li, Y., & Xu, L. (2016). Edge Computing: Vision and Challenges. IEEE Internet of Things Journal, 3(5), 637–646.

[104] Singhal, K., Azizi, S., Tu, T., Mahdavi, S. S., Wei, J., Chung, H. W., ... & Matias, Y. (2022). Large Language Models Encode Clinical Knowledge. arXiv preprint arXiv:2212.13138. https://doi.org/10.48550/arXiv.2212.13138

[105] Fitbit News. (2022). Fitbit and Google Research Collaborate to Advance Personalized Health and Wellness. Retrieved from https://blog.fitbit.com/fitbit-google-research/

[106] Liang, J., & Lee, J. (2022). Integrating Large Language Models with Robotics: A Survey. IEEE Transactions on Robotics, 38(5), 2393–2408.

[107] Ahmad, M., & Lee, S. P. (2020). Internet of Things (IoT) Enabled Smart Autonomous Vehicles: A Review. IEEE Access, 8, 117142–117164.

[108] Wan, J., Tang, S., Li, D., Wang, S., Liu, C., & Abbas, H. (2018). A Manufacturing Big Data Solution for Active Preventive Maintenance. IEEE Transactions on Industrial Informatics, 13(4), 2039–2047.

[109] Siau, K., & Wang, W. (2018). Building Trust in Artificial Intelligence, Machine Learning, and Robotics. ACM Transactions on Management Information Systems, 9(3), 7.

[110] Tung, L. (2018). Data Privacy: Why Home Robots Could Be This Generation's Privacy Nightmare. ZDNet. Retrieved from https://www.zdnet.com/article/data-privacy-why-home-robots-could-be-this-generations-privacy-nightmare/

[111] Zhang, J., & Tao, D. (2020). Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things. IEEE Internet of Things Journal, 8(10), 7789–7817.

[112] Mahmoud, M. S., & Mohamad, M. S. (2019). A Study of Efficient Power Consumption Wireless Communication Techniques/Modules for Internet of Things (IoT) Applications. Advances in Internet of Things, 9(2), 19–29.

[113] NVIDIA Corporation. (2021). NVIDIA Isaac Platform for Robotics. Retrieved from https://developer.nvidia.com/isaac-sdk

[114] Boston Dynamics. (2021). Technology. Retrieved from https://www.bostondynamics.com/technology

[115] Amazon. (2021). Meet Astro, a Home Robot Unlike Any Other. Retrieved from https://www.aboutamazon.com/news/devices/meet-astro-a-home-robot-unlike-any-other

[116] OpenAI. (2022). ChatGPT: Optimizing Language Models for Dialogue. Retrieved from https://openai.com/blog/chatgpt/

[117] Chen, Z., Xu, X., & Liu, Z. (2022). On-Device Natural Language Processing: An Edge AI Perspective. IEEE Transactions on Neural Networks and Learning Systems, 33(11), 6136–6153.

[118] Lu, X., & Li, J. (2020). Speed is All You Need: On-Device Acceleration of Large-Scale Conversational AI. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09), 13523–13530.

[119] He, Y., Annavaram, M., & Avestimehr, S. (2019). Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge. Advances in Neural Information Processing Systems, 32, 14068–14080.

[120] Hoy, M. B. (2018). Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants. Medical Reference Services Quarterly, 37(1), 81–88.

[121] Shearer, E., & Gottfried, J. (2017). News Use Across Social Media Platforms 2017. Pew Research Center. Retrieved from https://www.pewresearch.org/journalism/2017/09/07/news-use-across-social-media-platforms-2017/

[122] Ericsson. (2020). Ericsson Mobility Report. Retrieved from https://www.ericsson.com/en/mobility-report

[123] Li, X., & Wang, H. (2020). Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence. IEEE Internet of Things Journal, 7(8), 7457–7469.

[124] Apple Inc. (2020). Siri Learning Guide. Retrieved from https://www.apple.com/siri/

[125] Schuster, M., Paliwal, K. K., & Sim, K. C. (2020). On-Device End-to-End Speech Recognition. ICASSP 2020 - IEEE International Conference on Acoustics, Speech and Signal Processing, 6059–6063.

[126] Amazon Developer Services. (2021). Alexa Voice Service Integration for AWS IoT Core. Retrieved from https://developer.amazon.com/en-US/docs/alexa/alexa-voice-service/avs-integration-for-aws-iot-core.html

[127] Chen, T., Zhang, S., & Li, J. (2020). End-to-End Learning for Self-Driving Cars: An Overview of Recent Advances. IEEE Transactions on Intelligent Vehicles, 5(4), 724–735.

[128] Wu, B., Dai, X., Zhang, P., Wang, Y., & Sun, F. (2021). Visual and Linguistic Knowledge Transfer for Large Scale Semi-supervised Object Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(7), 2493–2506.

[129] Liu, Y., Peng, K., Ning, Z., Wang, H., Guo, L., & Guo, S. (2020). Resource Allocation in Autonomous Driving Networks: A Joint Computation, Caching, and Communication Perspective. IEEE Transactions on Intelligent Transportation Systems, 21(11), 4793–4804.

[130] Arvin, F., Samsudin, K., & Turgut, A. E. (2014). Development of an Autonomous Micro Robot for Swarm Robotics. International Journal of Advanced Robotic Systems, 11(3), 42.

[131] European Data Protection Board. (2020). Guidelines 1/2020 on Processing Personal Data in the Context of Connected Vehicles and Mobility Related Applications. Retrieved from https://edpb.europa.eu/

[132] Ma, X., Ding, Y., Wang, X., & Wang, J. (2018). Cloud-Assisted Privacy-Preserving Mobile Health Monitoring. IEEE Access, 6, 36552–36561.

[133] Ge, F., Wang, X., Hu, B., & Chen, X. (2021). Personalized Route Planning for Autonomous Vehicles Using Natural Language Interface. IEEE Transactions on Intelligent Transportation Systems, 22(5), 3048–3058.

[134] Kapania, N. R., & Gerdes, J. C. (2015). Designing Steering Feel for Autonomous Vehicles. IEEE Transactions on Intelligent Transportation Systems, 16(5), 2442–2451.

[135] Reddy, M., & Chandra, S. (2020). User Preference-Based Personalization in Autonomous Vehicles. IEEE Transactions on Intelligent Transportation Systems, 21(5), 2092–2101.

[136] Tesla. (2021). Artificial Intelligence & Autopilot. Retrieved from https://www.tesla.com/AI

[137] Cisco Systems. (2018). Cisco Visual Networking Index: Forecast and Trends, 2017–2022. Retrieved from https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.pdf

[138] Zhang, J., & Chen, K. (2020). Joint Latency and Reliability Optimization for Multi-Access Edge Computing in 5G Networks. IEEE Transactions on Wireless Communications, 19(7), 4715–4728. https://doi.org/10.1109/TWC.2020.2988330

[139] Lu, Y., Liu, C., Wang, K. I.-K., Huang, H., & Xu, X. (2020). Digital Twin-Driven Smart Manufacturing: Connotation, Reference Model, Applications and Research Issues. Robotics and Computer-Integrated Manufacturing, 61, 101837. https://doi.org/10.1016/j.rcim.2019.101837

[140] Zhan, S., & Kojima, F. (2017). Low Latency and High Reliability 5G Communications in VR/AR Applications. 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 343–348. https://doi.org/10.1109/INFCOMW.2017.8116417

[141] Aldridge, I. (2013). High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems. John Wiley & Sons.

[142] Bixby, H., & Renaudin, M. (2019). Understanding Latency Requirements for Consumer Mobile Applications. IEEE Consumer Electronics Magazine, 8(2), 20–25. https://doi.org/10.1109/MCE.2018.2885019

[143] Satyanarayanan, M. (2017). The Emergence of Edge Computing. Computer, 50(1), 30–39. https://doi.org/10.1109/MC.2017.9

[144] Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., ... & Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint arXiv:1704.04861. https://doi.org/10.48550/arXiv.1704.04861

[145] Chen, Y.-H., Yang, T.-J., Emer, J. S., & Sze, V. (2019). Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 9(2), 292–308. https://doi.org/10.1109/JETCAS.2019.2910232

[146] Li, X., Chen, J., Zhang, Y., & Vasilakos, A. V. (2018). Secure Cache Aided Content Delivery in Mobile Ad Hoc Networks. IEEE Transactions on Mobile Computing, 17(2), 304–319. https://doi.org/10.1109/TMC.2017.2719813

[147] Cheng, B., Yang, J., Xu, Y., & Zhao, W. (2018). Energy-Efficient Smart Home Automation System Using Edge Computing. IEEE International Conference on Edge Computing (EDGE), 62–69. https://doi.org/10.1109/EDGE.2018.00016

[148] Kang, Y., Hauswald, J., Gao, C., Rovinski, A., Mudge, T., Mars, J., & Tang, L. (2017). Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge. ACM SIGARCH Computer Architecture News, 45(1), 615–629. https://doi.org/10.1145/3093337.3037698

[149] Choi, Y., El-Khamy, M., & Lee, J. (2018). Towards the Limit of Network Quantization. arXiv preprint arXiv:1612.01543. https://doi.org/10.48550/arXiv.1612.01543

[150] European Union. (2016). General Data Protection Regulation (GDPR). Official Journal of the European Union. Retrieved from https://eur-lex.europa.eu/eli/reg/2016/679/oj

[151] U.S. Department of Health & Human Services. (2013). Summary of the HIPAA Privacy Rule. Retrieved from https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html

[152] California Legislative Information. (2018). California Consumer Privacy Act (CCPA) of 2018. Retrieved from https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375

[153] Federal Trade Commission. (1998). Children's Online Privacy Protection Act (COPPA). Retrieved from https://www.ftc.gov/legal-library/browse/statutes/childrens-online-privacy-protection-act

[154] Alrawais, A., Alhothaily, A., Hu, C., & Cheng, X. (2017). Fog Computing for the Internet of Things: Security and Privacy Issues. IEEE Internet Computing, 21(2), 34–42. https://doi.org/10.1109/MIC.2017.37

[155] Sabt, M., Achemlal, M., & Bouabdallah, A. (2015). Trusted Execution Environment: What It is, and What It is Not. 2015 IEEE Trustcom/BigDataSE/ISPA, 57–64. https://doi.org/10.1109/Trustcom.2015.357

[156] Gai, K., Qiu, M., & Zhao, H. (2017). Privacy-Preserving Data Encryption Strategy for Big Data in Mobile Cloud Computing. IEEE Transactions on Big Data, 3(2), 107–119. https://doi.org/10.1109/TBDATA.2016.2638432

[157] Liu, Y., Ning, P., & Li, Y. (2014). Handbook of Software and Hardware Trojan Detection. CRC Press.

[158] Jain, A. K., Ross, A., & Nandakumar, K. (2011). Introduction to Biometrics. Springer Science & Business Media.

[159] Rajendran, J., Rosenfeld, K., Tehranipoor, M., & Karri, R. (2012). Security Analysis of Integrated Circuit Camouflaging. Proceedings of the 2012 ACM Conference on Computer and Communications Security, 709–720. https://doi.org/10.1145/2382196.2382279

[160] Chen, M., Mao, S., & Liu, Y. (2014). Big Data: A Survey. Mobile Networks and Applications, 19(2), 171–209. https://doi.org/10.1007/s11036-013-0489-0

[161] Mach, P., & Becvar, Z. (2017). Mobile Edge Computing: A Survey on Architecture and Computation Offloading. IEEE Communications Surveys & Tutorials, 19(3), 1628–1656. https://doi.org/10.1109/COMST.2017.2682318

[162] Shi, W., Cao, J., Zhang, Q., Li, Y., & Xu, L. (2016). Edge Computing: Vision and Challenges. IEEE Internet of Things Journal, 3(5), 637–646. https://doi.org/10.1109/JIOT.2016.2579198

[163] Li, Y., & Liu, M. (2018). Mobile Edge Computing Empowered Smart Homes. Springer.

[164] Omoniwa, B., Hussain, R., Javed, M. A., Bouk, S. H., & Han, K. (2018). Fog/Edge Computing-Based IoT (FECIoT): Architecture, Applications, and Research Issues. IEEE Internet of Things Journal, 6(3), 4118–4149. https://doi.org/10.1109/JIOT.2018.2875544

[165] Wang, C., Liang, Z., Wu, F., Cao, X., & Wu, D. (2018). Edge Caching at Base Stations with Device-to-Device Offloading. IEEE Access, 6, 16649–16657. https://doi.org/10.1109/ACCESS.2018.2810864

[166] Taleb, T., Samdanis, K., Mada, B., Flinck, H., Dutta, S., & Sabella, D. (2017). On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration. IEEE Communications Surveys & Tutorials, 19(3), 1657–1681. https://doi.org/10.1109/COMST.2017.2705720

[167] Mao, Y., Zhang, J., & Letaief, K. B. (2017). Mobile Edge Computing: Energy-Efficient Offloading of Mobile Computing to Edge Clouds with Computing Latency Constraints. IEEE Transactions on Wireless Communications, 16(7), 4809–4822. https://doi.org/10.1109/TWC.2017.2695585

[168] Peng, K., Zhang, Y., Wang, C., Qiao, X., Xu, Y., & Zhang, W. (2018). A Survey on Mobile Edge Computing: Focusing on Service Adoption and Provision. IEEE Access, 6, 58249–58263. https://doi.org/10.1109/ACCESS.2018.2875681

[169] Zhang, K., Mao, Y., Leng, S., He, Y., & Zhang, Y. (2016). Mobile-Edge Computing for Vehicular Networks: A Promising Network Paradigm with Predictive Off-Loading. IEEE Vehicular Technology Magazine, 12(2), 36–44. https://doi.org/10.1109/MVT.2016.2572460

[170] Li, Y., Zhan, Y., Ren, J., & Yang, F. (2020). Computation Offloading for Edge Computing with Access Control in Smart Healthcare Systems. Future Generation Computer Systems, 107, 667–676.

[171] Xu, C., Liu, Z., Li, W., Zhang, H., & Zhang, Y. (2021). Energy-Efficient Inference for Deep Learning Services in 5G Edge Networks. IEEE Transactions on Network and Service Management, 18(2), 2167–2180.

[172] Choi, J., El-Khamy, M., & Lee, J. (2018). Towards the Limit of Network Quantization. IEEE Journal of Selected Topics in Signal Processing, 12(4), 733–748.

[173] European Parliament and Council of European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation). Official Journal of the European Union, L119, 1–88.

[174] Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 54(6), 1–35.

[175] Wang, Z., Liu, B., Gong, Z., Hu, S., Xie, Y., & Zhang, W. (2020). High-Performance and Energy-Efficient Neural Network Inference with Processing-in-Memory Architecture. IEEE Transactions on Computers, 69(9), 1352–1365.

[176] Liu, J., Tang, J., Xu, Y., & Zhang, W. (2020). Computation Offloading and Content Caching in Wireless Cellular Networks with Mobile Edge Computing. IEEE Transactions on Vehicular Technology, 69(2), 2285–2299.

[177] Murshed, M. G., Murphy, C., Hou, D., Khan, M. A., Ananthanarayanan, G., & Zou, J. (2019). Machine Learning at the Network Edge: A Survey. IEEE Internet of Things Journal, 7(5), 4329–4346.

[178] Satyanarayanan, M. (2017). The Emergence of Edge Computing. Computer, 50(1), 30–39.

[179] Shi, W., Cao, J., Zhang, Q., Li, Y., & Xu, L. (2016). Edge Computing: Vision and Challenges. IEEE Internet of Things Journal, 3(5), 637–646.

[180] Zhang, Y., Yu, R., Xie, S., Yao, W., & Zhang, Y. (2018). A Survey on Edge Computing for the Internet of Things. IEEE Network, 32(1), 30–36.

[181] Tan, M., & Le, Q. V. (2019). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. Proceedings of the 36th International Conference on Machine Learning, 6105–6114.

[182] Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter. arXiv preprint arXiv:1910.01108.

[183] Sayood, K. (2017). Introduction to Data Compression (5th ed.). Morgan Kaufmann.

[184] Shi, W., & Dustdar, S. (2016). The Promise of Edge Computing. Computer, 49(5), 78–81.

[185] Abomhara, M., & Køien, G. M. (2015). Cyber Security and the Internet of Things: Vulnerabilities, Threats, Intruders, and Attacks. Journal of Cyber Security and Mobility, 4(1), 65–88.

[186] Kang, J., Yu, R., Huang, X., Maharjan, S., Zhang, Y., & Hossain, E. (2017). Enabling Localized Peer-to-Peer Electricity Trading Among Plug-in Hybrid Electric Vehicles Using Consortium Blockchains. IEEE Transactions on Industrial Informatics, 13(6), 3154–3164.

[187] Li, S., Xu, L. D., & Zhao, S. (2018). 5G Internet of Things: A Survey. Journal of Industrial Information Integration, 10, 1–9.

[188] Yi, S., Hao, Z., Qin, Z., & Li, Q. (2015). Fog Computing: Platform and Applications. 2015 Third IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb), 73–78.

[189] Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.

[190] Zhang, Q., Cheng, L., & Boutaba, R. (2010). Cloud Computing: State-of-the-Art and Research Challenges. Journal of Internet Services and Applications, 1(1), 7–18.

[191] Marston, S., Li, Z., Bandyopadhyay, S., Zhang, J., & Ghalsasi, A. (2011). Cloud Computing—The Business Perspective. Decision Support Systems, 51(1), 176–189.

[192] Amazon Web Services. (2021). AWS Machine Learning. Retrieved from https://aws.amazon.com/machine-learning/

[193] Microsoft Azure. (2021). Azure AI Platform. Retrieved from https://azure.microsoft.com/services/machine-learning/

[194] Google Cloud. (2021). Cloud AI Products. Retrieved from https://cloud.google.com/products/ai/

[195] Takabi, H., Joshi, J. B., & Ahn, G. J. (2010). Security and Privacy Challenges in Cloud Computing Environments. IEEE Security & Privacy, 8(6), 24–31.

[196] Abouelmehdi, K., Beni-Hessane, A., & Khaloufi, H. (2018). Big Healthcare Data: Preserving Security and Privacy. Journal of Big Data, 5(1), 1–18.

[197] Mao, Y., You, C., Zhang, J., Huang, K., & Letaief, K. B. (2017). A Survey on Mobile Edge Computing: The Communication Perspective. IEEE Communications Surveys & Tutorials, 19(4), 2322–2358.

[198] Tan, H., Pan, S., Li, Y., & Wu, Z. (2018). A Survey of Deep Learning-Based Distributed Training Optimization. IEEE Access, 7, 142331–142346.

[199] Dean, J., & Ghemawat, S. (2008). MapReduce: Simplified Data Processing on Large Clusters. Communications of the ACM, 51(1), 107–113.

[200] Li, M., Andersen, D. G., Park, J. W., Smola, A. J., Ahmed, A., Josifovski, V., ... & Su, B. Y. (2014). Scaling Distributed Machine Learning with the Parameter Server. 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), 583–598.

[201] Yang, Q., Liu, Y., Chen, T., & Tong, Y. (2019). Federated Machine Learning: Concept and Applications. ACM Transactions on Intelligent Systems and Technology, 10(2), 12.

[202] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., ... & Zheng, X. (2016). TensorFlow: A System for Large-Scale Machine Learning. 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 265–283.

[203] Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., ... & He, K. (2017). Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour. arXiv preprint arXiv:1706.02677.

[204] Lyu, L., Yu, H., & Yang, Q. (2020). Threats to Federated Learning: A Survey. arXiv preprint arXiv:2003.02133.

[205] Wooldridge, M. (2009). An Introduction to MultiAgent Systems. John Wiley & Sons.

[206] Verbraeken, J., Wolting, M., Katzy, J., Klous, S., & Sips, R. (2020). A Survey on Distributed Machine Learning. ACM Computing Surveys, 53(2), 1–33.

[207] Chen, J., Monga, R., Bengio, S., & Jozefowicz, R. (2016). Revisiting Distributed Synchronous SGD. arXiv preprint arXiv:1604.00981.

[208] Yin, D., Chen, Y., Kannan, R., & Bartlett, P. (2018). Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates. Proceedings of the 35th International Conference on Machine Learning, 5650–5659.

[209] Al-Fuqaha, A., Guizani, M., Mohammadi, M., Aledhari, M., & Ayyash, M. (2015). Internet of Things: A Survey on Enabling Technologies, Protocols, and Applications. IEEE Communications Surveys & Tutorials, 17(4), 2347–2376.

[210] Weng, J., Weng, J., Zhang, M., Li, Y., Zhang, Y., & Luo, W. (2019). DeepChain: Auditable and Privacy-Preserving Deep Learning with Blockchain-Based Incentive. IEEE Transactions on Dependable and Secure Computing, 18(5), 2438–2455. https://doi.org/10.1109/TDSC.2019.2952332

[211] W. Shi and S. Dustdar, "The promise of edge computing," Computer, vol. 49, no. 5, pp. 78-81, 2016.

[212] Y. Kang, J. Hauswald, A. Rovinski, T. Mudge, J. Mars, and L. Tang, "Neurosurgeon: Collaborative intelligence between the cloud and mobile edge," in Proceedings of the 22nd International Conference on Architectural Support for Programming Languages and Operating Systems, 2017, pp. 615–629.

[213] N. D. Lane, S. Bhattacharya, A. Mathur, and C. Forlivesi, "Squeezing deep learning into mobile and embedded devices," IEEE Pervasive Computing, vol. 16, no. 3, pp. 82–88, 2017.

[214] P. Li, S. Hu, B. Zhou, B. Chen, and H. Zhao, "Enabling deep learning on IoT devices with wisdom inference," IEEE Network, vol. 33, no. 3, pp. 101–107, 2019.

[215] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen, Classification and Regression Trees, CRC Press, 1984.

[216] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.

[217] J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, and A. Bouchachia, "A survey on concept drift adaptation," ACM Computing Surveys, vol. 46, no. 4, pp. 1–37, 2014.

[218] D. Bhattacharya and S. Pal, "Anomaly detection: A survey," International Journal of Computer Applications, vol. 116, no. 9, pp. 1–8, 2015.

[219] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.

[220] T. Joachims, "Training linear SVMs in linear time," in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006, pp. 217–226.

[221] C. J. C. Burges, "Simplified support vector decision rules," in Proceedings of the 13th International Conference on Machine Learning, 1996, pp. 71–77.

[222] T. Cover and P. Hart, "Nearest neighbor pattern classification," IEEE Transactions on Information Theory, vol. 13, no. 1, pp. 21–27, 1967.

[223] I. T. Jolliffe, Principal Component Analysis, Springer, 2002.

[224] J. L. Bentley, "Multidimensional binary search trees used for associative searching," Communications of the ACM, vol. 18, no. 9, pp. 509–517, 1975.

[225] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.

[226] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, vol. 25, 2012, pp. 1097–1105.

[227] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533–536, 1986.

[228] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.

[229] K. Cho et al., "Learning phrase representations using RNN encoder-decoder for statistical machine translation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1724–1734.

[230] Z. Wu et al., "A comprehensive survey on graph neural networks," IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 1, pp. 4–24, 2021.

[231] J. Chen, T. Ma, and C. Xiao, "FastGCN: Fast learning with graph convolutional networks via importance sampling," in Proceedings of the 6th International Conference on Learning Representations (ICLR), 2018.

[232] A. G. Howard et al., "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.

[233] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520.

[234] F. N. Iandola et al., "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size," arXiv preprint arXiv:1602.07360, 2016.

[235] J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng, "Quantized convolutional neural networks for mobile devices," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4820–4828.

[236] M. Tan and Q. V. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," in Proceedings of the 36th International Conference on Machine Learning, 2019, pp. 6105–6114.

[237] M. Tan and Q. V. Le, "EfficientNetV2: Smaller models and faster training," in Proceedings of the 38th International Conference on Machine Learning, 2021, pp. 10096–10106.

[238] T. B. Brown et al., "Language models are few-shot learners," in Advances in Neural Information Processing Systems, vol. 33, 2020, pp. 1877–1901.

[239] OpenAI, "GPT-4 Technical Report," arXiv preprint arXiv:2303.08774, 2023.

[240] Z. Liu et al., "Post-training quantization for vision transformer," in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 2020–2028.

[241] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.

[242] V. Sanh, L. Debut, J. Chaumond, and T. Wolf, "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter," arXiv preprint arXiv:1910.01108, 2019.

[243] X. Jiao et al., "TinyBERT: Distilling BERT for natural language understanding," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020, pp. 4163–4174.

[244] Z. Sun, H. Yu, X. Song, R. Liu, Y. Yang, and D. Zhou, "MobileBERT: a compact task-agnostic BERT for resource-limited devices," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 2158–2170.

[245] Apple Inc., "Apple Neural Engine," [Online]. Available: https://www.apple.com/ca/newsroom/2017/09/the-apple-neural-engine/

[246] Apple Inc., "Siri Suggestions," [Online]. Available: https://support.apple.com/en-us/HT208690

[247] Apple Inc., "Core ML Framework," [Online]. Available: https://developer.apple.com/documentation/coreml

[248] Y. Chen, T. Yu, and Z. Xu, "A survey on edge computing systems and tools," Proceedings of the IEEE, vol. 107, no. 8, pp. 1537–1562, 2019.

[249] H. Li et al., "Learning simple algorithms from data: An example with edge computing," IEEE Internet of Things Journal, vol. 6, no. 6, pp. 9878–9888, 2019.

[250] J. Ren et al., "Accelerating edge intelligence through micro parallel computing: A hardware prototype perspective," IEEE Wireless Communications, vol. 27, no. 3, pp. 82–88, 2020.

[251] K. Zhang et al., "Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks," IEEE Access, vol. 4, pp. 5896–5907, 2016.

[252] S. Abolfazli, Z. Sanaei, E. Ahmed, A. Gani, and R. Buyya, "Cloud-based augmentation for mobile devices: Motivation, taxonomies, and open challenges," IEEE Communications Surveys & Tutorials, vol. 16, no. 1, pp. 337–368, 2014.

[253] N. D. Lane et al., "DeepX: A software accelerator for low-power deep learning inference on mobile devices," in Proceedings of the 15th International Conference on Information Processing in Sensor Networks, 2016, pp. 1–12.

[254] A. Rudenko, P. Reiher, G. J. Popek, and G. H. Kuenning, "Saving portable computer battery power through remote process execution," ACM SIGMOBILE Mobile Computing and Communications Review, vol. 2, no. 1, pp. 19–26, 1998.

[255] M. Wang et al., "E2Train: Energy-efficient training of DNNs with (mostly) integer operations," in Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2022, pp. 770–785.

[256] Y. Lin et al., "Energy-efficient ASR for embedded devices," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 5469–5473.

[257] B. Moons and M. Verhelst, "An energy-efficient precision-scalable convolutional neural network accelerator," in Proceedings of the IEEE Symposium on VLSI Circuits, 2016, pp. 1–2.

[258] N. Mishra, J. L. Wong, T. Rosing, and P. Viswanath, "Context-aware energy enhancement in sensor nodes," in Proceedings of the IEEE International Conference on Distributed Computing in Sensor Systems, 2008, pp. 1–10.

[259] H. Sallouha, A. Chiumento, and S. Pollin, "Localization in long-range ultra narrow band IoT networks using RSSI," in Proceedings of the IEEE International Conference on Communications (ICC), 2017, pp. 1–6.

[260] R. Azuma, "A survey of augmented reality," Presence: Teleoperators & Virtual Environments, vol. 6, no. 4, pp. 355–385, 1997.

[261] M. Satyanarayanan, "The emergence of edge computing," Computer, vol. 50, no. 1, pp. 30–39, 2017.

[262] S. Hung et al., "Mobile edge computing (MEC): A key technology towards 5G," ETSI White Paper No. 11, 2016.

[263] J. Chen, B. Ran, and J. Li, "Data-driven approach for freeway origin–destination matrix estimation using fusion data from multiple sensors," Journal of Intelligent Transportation Systems, vol. 20, no. 3, pp. 275–285, 2016.

[264] G. Premsankar, M. Di Francesco, and T. Taleb, "Edge computing for the Internet of Things: A case study," IEEE Internet of Things Journal, vol. 5, no. 2, pp. 1275–1284, 2018.

[265] S. Nakamoto, "Bitcoin: A peer-to-peer electronic cash system," Bitcoin Whitepaper, 2008.

[266] L. Zhang, S. Wang, W. Sun, S. Li, and F. Yang, "Privacy-preserving data aggregation in mobile phone sensing," IEEE Transactions on Information Forensics and Security, vol. 11, no. 5, pp. 980–992, 2016.

[267] P. W. Chan and R. W. Yeung, "Privacy protection in data mining on mobile devices through data masking," IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 11, pp. 2077–2090, 2012.

[268] J. Tang, Y. Cui, Q. Li, K. Ren, and J. Liu, "Ensuring security and privacy preservation for cloud data services," ACM Computing Surveys, vol. 49, no. 1, pp. 1–39, 2016.

[269] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," in Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2015.

[270] M. Fredrikson, S. Jha, and T. Ristenpart, "Model inversion attacks that exploit confidence information and basic countermeasures," in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015, pp. 1322–1333.

[271] C. Gentry, "Fully homomorphic encryption using ideal lattices," in Proceedings of the 41st Annual ACM Symposium on Theory of Computing, 2009, pp. 169–178.

[272] S. Koeberl, A. R. Sadeghi, and S. Schulz, "TrustLite: A security architecture for tiny embedded devices," in Proceedings of the 9th European Conference on Computer Systems, 2014, pp. 1–14.

[273] B. McMahan et al., "Communication-efficient learning of deep networks from decentralized data," in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017, pp. 1273–1282.

[274] M. Ammar, G. Russello, and B. Crispo, "Internet of Things: A survey on the security of IoT frameworks," Journal of Information Security and Applications, vol. 38, pp. 8–27, 2018.

[275] D. B. Rawat and C. Bajracharya, "Cyber security for smart grid systems: Status, challenges and perspectives," in Proceedings of the SoutheastCon 2015, 2015, pp. 1–6.

[276] S. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," International Conference on Learning Representations, 2016.

[277] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," European Conference on Computer Vision, 2016.

[278] TensorFlow Model Optimization Toolkit, "Dynamic range quantization," [Online]. Available: https://www.tensorflow.org/lite/performance/post_training_quant

[279] Z. Zhao, S. Zhang, T. Chen, and C. Zhang, "Improving neural network quantization without retraining using outlier channel splitting," International Conference on Machine Learning, 2019.

[280] M. Nagel, R. Amjad, M. van Baalen, and T. Blankevoort, "Up or down? Adaptive rounding for post-training quantization," International Conference on Machine Learning, 2020.

[281] Y. Choi, M. El-Khamy, and J. Lee, "Towards the limit of network quantization," International Conference on Learning Representations, 2017.

[282] S. Han, J. Pool, J. Tran, and W. J. Dally, "Learning both weights and connections for efficient neural network," Advances in Neural Information Processing Systems, 2015.

[283] Y. LeCun, J. S. Denker, and S. A. Solla, "Optimal brain damage," Advances in Neural Information Processing Systems, 1990.

[284] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, "Pruning filters for efficient convnets," International Conference on Learning Representations, 2017.

[285] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen, "Compressing neural networks with the hashing trick," International Conference on Machine Learning, 2015.

[286] C. Leng, H. Li, S. Zhu, and R. Jin, "Extremely low bit neural network: Squeeze the last bit out with ADMM," AAAI Conference on Artificial Intelligence, 2018.

[287] A. Novikov, D. Podoprikhin, A. Osokin, and D. Vetrov, "Tensorizing neural networks," Advances in Neural Information Processing Systems, 2015.

[288] Y. He, X. Zhang, and J. Sun, "Channel pruning for accelerating very deep neural networks," International Conference on Computer Vision, 2017.

[289] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.

[290] C. Buciluǎ, R. Caruana, and A. Niculescu-Mizil, "Model compression," Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006.

[291] J. Ba and R. Caruana, "Do deep nets really need to be deep?" Advances in Neural Information Processing Systems, 2014.

[292] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, "FitNets: Hints for thin deep nets," International Conference on Learning Representations, 2015.

[293] T. Sainath, B. Kingsbury, V. Sindhwani, E. Arisoy, and B. Ramabhadran, "Low-rank matrix factorization for deep neural network training with high-dimensional output targets," IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.

[294] E. Denton et al., "Exploiting linear structure within convolutional networks for efficient evaluation," Advances in Neural Information Processing Systems, 2014.

[295] H. Phan et al., "Robust audio event recognition with 1-max pooling convolutional neural networks," arXiv preprint arXiv:1604.06338, 2016.

[296] Y.-D. Kim et al., "Compression of deep convolutional neural networks for fast and low power mobile applications," International Conference on Learning Representations, 2016.

[297] B. Zoph and Q. V. Le, "Neural architecture search with reinforcement learning," International Conference on Learning Representations, 2017.

[298] H. Liu, K. Simonyan, and Y. Yang, "DARTS: Differentiable architecture search," International Conference on Learning Representations, 2019.

[299] H. Cai, L. Zhu, and S. Han, "ProxylessNAS: Direct neural architecture search on target task and hardware," International Conference on Learning Representations, 2019.

[300] M. Tan et al., "MnasNet: Platform-aware neural architecture search for mobile," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.

[301] B. Wu et al., "FBNet: Hardware-aware efficient convnet design via differentiable neural architecture search," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.

[302] V. Sze, Y.-H. Chen, T.-J. Yang, and J. S. Emer, "Efficient processing of deep neural networks: A tutorial and survey," Proceedings of the IEEE, vol. 105, no. 12, pp. 2295–2329, 2017.

[303] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.

[304] A. Howard et al., "Searching for MobileNetV3," Proceedings of the IEEE International Conference on Computer Vision, 2019.

[305] X. Dong et al., "Network pruning via transformable architecture search," Advances in Neural Information Processing Systems, 2019.

[306] P. Warden and D. Situnayake, TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers, O'Reilly Media, 2019.

[307] R. David et al., "TensorFlow Lite Micro: Embedded Machine Learning on TinyML Systems," Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems, 2021.

[308] L. N. Pouchet et al., "Enabling deep learning at the IoT edge," Computer, vol. 50, no. 10, pp. 20–23, 2017.

[309] V. Sze et al., "Hardware for machine learning: Challenges and opportunities," 2017 IEEE Custom Integrated Circuits Conference (CICC), 2017.

[310] E. Strommer et al., "TinyML as a service: Bringing machine learning to the edge," IEEE Internet of Things Magazine, vol. 3, no. 1, pp. 20–25, 2020.

[311] L. Taylor and G. Nitschke, "Improving deep learning with generic data augmentation," 2018 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1542–1547, 2018.

[312] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, 2012.

[313] J. Wei and K. Zou, "EDA: Easy data augmentation techniques for boosting performance on text classification tasks," Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 2019.

[314] C. Shorten and T. M. Khoshgoftaar, "A survey on image data augmentation for deep learning," Journal of Big Data, vol. 6, no. 1, pp. 1–48, 2019.

[315] M. Paulin, J. Revaud, Z. Harchaoui, F. Perronnin, and C. Schmid, "Transformation pursuit for image classification," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.

[316] O. Bachem, M. Lucic, and A. Krause, "Practical coreset constructions for machine learning," arXiv preprint arXiv:1703.06476, 2017.

[317] A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A. A. Bharath, "Generative adversarial networks: An overview," IEEE Signal Processing Magazine, vol. 35, no. 1, pp. 53–65, 2018.

[318] Y. Bengio, A. Courville, and P. Vincent, "Representation learning: A review and new perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.

[319] Apple Inc., "Apple Introduces On-Device Processing for Siri Requests," Apple Newsroom, 2021.

[320] M. Sandler et al., "MobileNetV2: Inverted Residuals and Linear Bottlenecks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520, 2018.

[321] S. Han et al., "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding," International Conference on Learning Representations, 2016.

[322] A. Narayanan et al., "Privacy-Preserving Machine Learning," Foundations and Trends in Privacy and Security, vol. 2, no. 3, pp. 151–157, 2019.

[323] N. D. Lane and P. Warden, "The Deep (Learning) Transformation of Mobile and Embedded Computing," IEEE Computer, vol. 51, no. 5, pp. 12–16, 2018.

[324] Apple Inc., "Siri Data and Privacy Overview," Apple Support, 2021.

[325] Y. He et al., "Streaming End-to-End Speech Recognition for Mobile Devices," IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 999–1003, 2019.

[326] Y. Zhang et al., "Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss," IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 7829–7833, 2020.

[327] H. Kwon et al., "Efficient Neural Network Compression," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 10, no. 4, pp. 522–535, 2020.

[328] Google AI Blog, "On-Device Machine Learning: Federated Learning and Federated Analytics," 2020.

[329] A. Kumar, S. Goyal, and M. Varma, "Resource-Efficient Machine Learning in 2 KB RAM for the Internet of Things," Proceedings of the 34th International Conference on Machine Learning, pp. 1935–1945, 2017.

[330] W. Jiang et al., "Accelerating Deep Learning Inference with Algorithm and Hardware Co-Design," ACM Transactions on Embedded Computing Systems, vol. 19, no. 6, pp. 1–23, 2020.

[331] Google, "Offline Language Translation in Google Translate," Google Support, 2021.

[332] Y. Kim et al., "Dynamic Layer Scaling for Neural Machine Translation," arXiv preprint arXiv:2004.10069, 2020.

[333] H. Wu et al., "Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation," arXiv preprint arXiv:2004.09602, 2020.

[334] K. Heafield et al., "Recurrent Neural Network Grammar for Speech Recognition," Interspeech, pp. 765–769, 2016.

[335] Microsoft, "Translator App Features," Microsoft Translator, 2021.

[336] T. H. Wen et al., "Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems," arXiv preprint arXiv:1508.01745, 2015.

[337] I. Sutskever et al., "Sequence to Sequence Learning with Neural Networks," Advances in Neural Information Processing Systems, vol. 27, pp. 3104–3112, 2014.

[338] T. Kudo and J. Richardson, "SentencePiece: A Simple and Language Independent Subword Tokenizer and Detokenizer for Neural Text Processing," Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 66–71, 2018.

[339] Z. Tang et al., "Quantized Neural Networks for Low-Power Embedded Systems," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 39, no. 1, pp. 142–151, 2020.

[340] Z. Sun et al., "MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices," arXiv preprint arXiv:2004.02984, 2020.

[341] V. Sanh et al., "DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter," arXiv preprint arXiv:1910.01108, 2019.

[342] J. Xu et al., "Privacy-Preserving Federated Brain Tumour Segmentation," Machine Learning for Health, pp. 1–12, 2020.

[343] ARM Ltd., "ARM TrustZone Technology," ARM Developer, 2021.

[344] S. Wolf et al., "Compressing Deep Neural Networks via Layer Fusion," arXiv preprint arXiv:2102.06515, 2021.

[345] U.S. Department of Health & Human Services, "Health Information Privacy," HHS.gov, 2021.

[346] R. Shokri and V. Shmatikov, "Privacy-Preserving Deep Learning," Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1310–1321, 2015.

[347] R. Zhang et al., "Privacy-Preserving AI in Finance: A Federated Learning Approach," IEEE Computational Intelligence Magazine, vol. 15, no. 4, pp. 80–88, 2020.

[348] B. McMahan et al., "Federated Learning of Deep Networks using Model Averaging," arXiv preprint arXiv:1602.05629, 2016.

[349] D. Boneh and R. J. Lipton, "Algorithms for Black-Box Fields and their Application to Cryptography," Advances in Cryptology—CRYPTO'96, pp. 283–297, 1996.

[350] L. O. Pérez et al., "Privacy-Preserving Federated Learning: A Blockchain and MPC-Based Solution," arXiv preprint arXiv:2101.11298, 2021.

[351] N. G. Ward et al., "Efficient Neural Networks for Real-Time Speech Emotion Recognition," IEEE Transactions on Multimedia, vol. 22, no. 8, pp. 2117–2127, 2020.

[352] M. Fredrikson et al., "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures," Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322–1333, 2015.

[353] T.-J. Yang et al., "Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5687–5695, 2017.

[354] G. Hinton et al., "Distilling the Knowledge in a Neural Network," arXiv preprint arXiv:1503.02531, 2015.

[355] A. Ignatov et al., "AI Benchmark: All About Deep Learning on Smartphones in 2019," IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 3617–3635, 2019.

[356] TensorFlow, "TensorFlow Lite," [Online]. Available: https://www.tensorflow.org/lite

[357] T. Chen et al., "TVM: An Automated End-to-End Optimizing Compiler for Deep Learning," 13th USENIX Symposium on Operating Systems Design and Implementation, pp. 578–594, 2018.

[358] M. Elbayad et al., "Depth-Adaptive Transformer," International Conference on Learning Representations, 2020.

[359] Q. Yang et al., "Federated Machine Learning: Concept and Applications," ACM Transactions on Intelligent Systems and Technology, vol. 10, no. 2, pp. 1–19, 2019.

[360] M. Horowitz, "Computing's Energy Problem (and What We Can Do About It)," IEEE International Solid-State Circuits Conference Digest of Technical Papers, pp. 10–14, 2014.

[361] Y. Lu et al., "Edge AI: On-Demand Accelerating Deep Neural Network Inference via Edge Computing," IEEE Transactions on Wireless Communications, vol. 19, no. 1, pp. 447–457, 2020.

[362] TensorFlow Lite, "TensorFlow Lite | ML for Mobile and Edge Devices," [Online]. Available: https://www.tensorflow.org/lite

[363] P. Warden and D. Situnayake, TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers, O'Reilly Media, 2019.

[364] TensorFlow, "TensorFlow Lite Optimizing Converter," [Online]. Available: https://www.tensorflow.org/lite/guide/ops_select

[365] TensorFlow, "GPU Delegate in TensorFlow Lite," [Online]. Available: https://www.tensorflow.org/lite/performance/gpu

[366] Google, "Coral Edge TPU," [Online]. Available: https://coral.ai/

[367] TensorFlow, "Android Neural Networks API Delegate," [Online]. Available: https://www.tensorflow.org/lite/performance/nnapi

[368] TensorFlow Hub, "TensorFlow Lite Models," [Online]. Available: https://tfhub.dev/

[369] TensorFlow, "TensorFlow Lite Converter," [Online]. Available: https://www.tensorflow.org/lite/convert

[370] TensorFlow Model Optimization Toolkit, "Post-Training Quantization," [Online]. Available: https://www.tensorflow.org/model_optimization/guide/quantization/post_training

[371] TensorFlow Model Optimization Toolkit, "Quantization Aware Training," [Online]. Available: https://www.tensorflow.org/model_optimization/guide/quantization/training

[372] TensorFlow Model Optimization Toolkit, "Pruning and Clustering APIs," [Online]. Available: https://www.tensorflow.org/model_optimization/guide

[373] TensorFlow, "Selective Registration," [Online]. Available: https://www.tensorflow.org/lite/guide/reduce_binary_size

[374] PyTorch, "PyTorch Mobile," [Online]. Available: https://pytorch.org/mobile/home/

[375] PyTorch, "TorchScript," [Online]. Available: https://pytorch.org/docs/stable/jit.html

[376] PyTorch, "PyTorch Mobile for Android," [Online]. Available: https://pytorch.org/mobile/android/

[377] PyTorch, "PyTorch Mobile for iOS," [Online]. Available: https://pytorch.org/mobile/ios/

[378] PyTorch, "Customize Build for Mobile," [Online]. Available: https://pytorch.org/mobile/android/#customize-build

[379] PyTorch, "Quantization Support in PyTorch," [Online]. Available: https://pytorch.org/docs/stable/quantization.html

[380] Facebook AI, "QNNPACK: Open Source Library for Optimized Mobile Deep Learning," [Online]. Available: https://engineering.fb.com/ai-research/qnnpack/

[381] Facebook AI, "FBGEMM: A High-Performance Kernel Library for Quantized Machine Learning," [Online]. Available: https://engineering.fb.com/ml-applications/fbgemm/

[382] PyTorch, "Memory Management in PyTorch Mobile," [Online]. Available: https://pytorch.org/mobile/faq/#memory-management

[383] PyTorch, "Selective Build for Mobile Interpreter," [Online]. Available: https://pytorch.org/docs/stable/mobile/faq.html#reduce-the-pytorch-mobile-runtime-size

[384] ONNX, "Open Neural Network Exchange," [Online]. Available: https://onnx.ai/

[385] Microsoft, "ONNX Runtime," [Online]. Available: https://www.onnxruntime.ai/

[386] ONNX, "Supported Frameworks," [Online]. Available: https://onnx.ai/supported-tools

[387] ONNX, "Standardization of Model Representation," [Online]. Available: https://github.com/onnx/onnx

[388] ONNX, "Ecosystem Tools and Libraries," [Online]. Available: https://onnx.ai/ecosystem/

[389] Microsoft, "ONNX Runtime Mobile," [Online]. Available: https://www.onnxruntime.ai/docs/reference/mobile

[390] Microsoft, "ONNX Runtime Graph Optimizations," [Online]. Available: https://www.onnxruntime.ai/docs/performance/graph-optimizations.html

[391] Microsoft, "ONNX Runtime Quantization Tool," [Online]. Available: https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/quantization

[392] Microsoft, "ONNX Runtime Execution Providers," [Online]. Available: https://www.onnxruntime.ai/docs/reference/execution-providers/

[393] ONNX Runtime, "Supported Platforms," [Online]. Available: https://www.onnxruntime.ai/docs/get-started/platforms.html

[394] Apache TVM, "An Open Source Machine Learning Compiler Stack for CPUs, GPUs, and Accelerators," [Online]. Available: https://tvm.apache.org/

[395] T. Chen et al., "TVM: An Automated End-to-End Optimizing Compiler for Deep Learning," 13th USENIX Symposium on Operating Systems Design and Implementation, pp. 578–594, 2018.

[396] L. Yu et al., "Automated Model Optimization for Mobile Applications with TVM," Proceedings of the 1st on Reproducible Quality-Efficient Systems Tournament on Co-designing Pareto-efficient Deep Learning, 2018.

[397] Apache TVM, "Relay: An Intermediate Representation for Deep Learning Models," [Online]. Available: https://tvm.apache.org/docs/relay

[398] T. Moreau et al., "A Hardware-Software Blueprint for Flexible Deep Learning Specialization," IEEE Micro, vol. 39, no. 5, pp. 8–16, 2019.

[399] V. J. Reddi et al., "MLPerf Inference Benchmark," Proceedings of the ACM/IEEE 47th Annual International Symposium on Computer Architecture, pp. 446–459, 2020.

[400] MicroTVM, "Deploying Deep Learning Models on Microcontrollers with TVM," [Online]. Available: https://tvm.apache.org/docs/microtvm

[401] Apache TVM, "TVM Compilation and Optimization," [Online]. Available: https://tvm.apache.org/docs/tutorials

[402] Apache TVM, "Community and Contributions," [Online]. Available: https://tvm.apache.org/community

[403] NVIDIA, "TensorRT | NVIDIA Developer," [Online]. Available: https://developer.nvidia.com/tensorrt

[404] NVIDIA, "TensorRT Developer Guide," [Online]. Available: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html

[405] NVIDIA, "Mixed Precision Training," [Online]. Available: https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html

[406] NVIDIA, "Dynamic Tensor Memory in TensorRT," [Online]. Available: https://developer.nvidia.com/blog/tensorrt-3-faster-tensorflow-inference/

[407] NVIDIA, "Importing Models into TensorRT," [Online]. Available: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#importing_models

[408] NVIDIA, "NVIDIA Jetson Platform for AI at the Edge," [Online]. Available: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/

[409] Intel, "OpenVINO Toolkit," [Online]. Available: https://docs.openvino.ai/

[410] Intel, "Model Optimizer Developer Guide," [Online]. Available: https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html

[411] Intel, "Inference Engine Developer Guide," [Online]. Available: https://docs.openvino.ai/latest/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html

[412] Intel, "Intel Neural Compute Stick 2," [Online]. Available: https://software.intel.com/content/www/us/en/develop/hardware/neural-compute-stick.html

[413] Intel, "Deploy High-Performance Deep Learning Applications," [Online]. Available: https://www.intel.com/content/www/us/en/artificial-intelligence/openvino-toolkit.html

[414] Intel, "Pre-Trained Models," [Online]. Available: https://docs.openvino.ai/latest/omz_models_group_intel.html

[415] Edge Impulse, "Edge Impulse Documentation," [Online]. Available: https://docs.edgeimpulse.com/

[416] Edge Impulse, "Data Acquisition Tools," [Online]. Available: https://docs.edgeimpulse.com/docs/edge-impulse-studio/acquisition

[417] Edge Impulse, "Automated Machine Learning Pipeline," [Online]. Available: https://www.edgeimpulse.com/automated-ml

[418] Edge Impulse, "EON Compiler," [Online]. Available: https://docs.edgeimpulse.com/docs/edge-impulse-studio/edge-impulse-cli/cli-eon

[419] Edge Impulse, "Deployment Options," [Online]. Available: https://docs.edgeimpulse.com/docs/deployment

[420] Edge Impulse, "Project Collaboration," [Online]. Available: https://docs.edgeimpulse.com/docs/edge-impulse-studio/project-collaboration

[421] Edge Impulse, "Community Forums," [Online]. Available: https://forum.edgeimpulse.com/

[422] ARM Ltd., "ARM Cortex-M Series," [Online]. Available: https://developer.arm.com/ip-products/processors/cortex-m

[423] J. Yiu, The Definitive Guide to ARM Cortex-M3 and Cortex-M4 Processors, Newnes, 2013.

[424] ARM Ltd., "ARM Cortex-A Series," [Online]. Available: https://developer.arm.com/ip-products/processors/cortex-a

[425] ARM Ltd., "NEON Intrinsics," [Online]. Available: https://developer.arm.com/architectures/instruction-sets/simd-isas/neon/intrinsics

[426] ARM Ltd., "CMSIS-NN: Neural Network Kernels for Cortex-M CPUs," [Online]. Available: https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/cmsis-nn

[427] ARM Ltd., "Compute Library," [Online]. Available: https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/compute-library

[428] Z. Zhou et al., "Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing," Proceedings of the IEEE, vol. 107, no. 8, pp. 1738–1762, 2019.

[429] S. Li et al., "Edge Intelligence for Internet of Things in 5G Era: Vision, Enabling Technologies, and Applications," IEEE Internet of Things Journal, vol. 7, no. 8, pp. 6722–6747, 2020.

[430] RISC-V Foundation, "RISC-V: The Free and Open RISC Instruction Set Architecture," [Online]. Available: https://riscv.org/

[431] A. Waterman and K. Asanović, "The RISC-V Instruction Set Manual, Volume I: Unprivileged ISA," EECS Department, UC Berkeley, 2017.

[432] C. Celio et al., "BOOM v2: an open-source out-of-order RISC-V core," First Workshop on Computer Architecture Research with RISC-V (CARRV), 2017.

[433] A. Puggelli et al., "A Fully Open-Source ISA to ASIC Flow Based on the RISC-V Architecture," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 37, no. 1, pp. 72–84, 2018.

[434] SiFive, "SiFive Intelligence Processors," [Online]. Available: https://www.sifive.com/cores/intelligence

[435] A. Haj-Ali et al., "A TensorFlow Frontend for High-Performance and Hardware-Agnostic Machine Learning," arXiv preprint arXiv:1905.08369, 2019.

[436] B. D. De Dinechin et al., "A Clustered Manycore Processor Architecture for Embedded and Accelerated Applications," High Performance Extreme Computing Conference (HPEC), pp. 1–6, 2013.

[437] K. Asanović and D. Patterson, "Instruction Sets Should Be Free: The Case For RISC-V," EECS Department, UC Berkeley, Tech. Rep. UCB/EECS-2014-146, 2014.

[438] Raspberry Pi Foundation, "Raspberry Pi," [Online]. Available: https://www.raspberrypi.org/

[439] E. Upton and G. Halfacree, Raspberry Pi User Guide, Wiley, 2014.

[440] Raspberry Pi Foundation, "GPIO Pins," [Online]. Available: https://www.raspberrypi.org/documentation/usage/gpio/

[441] Raspberry Pi Foundation, "Raspberry Pi 4 Model B," [Online]. Available: https://www.raspberrypi.org/products/raspberry-pi-4-model-b/

[442] TensorFlow, "TensorFlow Lite on Raspberry Pi," [Online]. Available: https://www.tensorflow.org/lite/guide/python

[443] Google Coral, "USB Accelerator," [Online]. Available: https://coral.ai/products/accelerator

[444] A. Rosebrock, Raspberry Pi for Computer Vision, PyImageSearch, 2019.

[445] T. White, Hadoop: The Definitive Guide, O'Reilly Media, 2012.

[446] NVIDIA, "Jetson Nano Developer Kit," [Online]. Available: https://developer.nvidia.com/embedded/jetson-nano-developer-kit

[447] NVIDIA, "NVIDIA Jetson Nano GPU Architecture," [Online]. Available: https://developer.nvidia.com/embedded/jetson-nano

[448] S. Mittal, "A Survey on optimized implementation of deep learning models on the NVIDIA Jetson platform," Journal of Systems Architecture, vol. 97, pp. 428–442, 2019.

[449] NVIDIA, "JetPack SDK," [Online]. Available: https://developer.nvidia.com/embedded/jetpack

[450] NVIDIA, "Deep Learning Frameworks Support," [Online]. Available: https://developer.nvidia.com/deep-learning-frameworks

[451] NVIDIA, "CUDA Toolkit," [Online]. Available: https://developer.nvidia.com/cuda-toolkit

[452] NVIDIA, "TensorRT," [Online]. Available: https://developer.nvidia.com/tensorrt

[453] D. B. Leake et al., "Autonomous Machines with NVIDIA Jetson Platform," IEEE Micro, vol. 38, no. 1, pp. 17–29, 2018.

[454] R. Girshick et al., "Region-Based Convolutional Networks for Accurate Object Detection and Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 1, pp. 142–158, 2016.

[455] Google Coral, "Edge TPU Overview," [Online]. Available: https://coral.ai/docs/edgetpu/faq/

[456] M. Hong and J. E. Gonzalez, "Efficient Neural Network Inference on Edge Devices," IEEE Micro, vol. 40, no. 5, pp. 28–35, 2020.

[457] TensorFlow, "TensorFlow Models on Edge TPU," [Online]. Available: https://www.tensorflow.org/lite/guide/edgetpu

[458] Google Coral, "Coral Dev Board," [Online]. Available: https://coral.ai/products/dev-board

[459] Google Coral, "Edge TPU Model Compatibility," [Online]. Available: https://coral.ai/docs/edgetpu/models-intro/

[460] Google Coral, "Edge TPU Compiler," [Online]. Available: https://coral.ai/docs/edgetpu/compiler/

[461] Z. Zhou et al., "Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing," Proceedings of the IEEE, vol. 107, no. 8, pp. 1738–1762, 2019.

[462] M. Satyanarayanan, "The Emergence of Edge Computing," Computer, vol. 50, no. 1, pp. 30–39, 2017.

[463] Intel, "Intel Movidius Myriad X VPU," [Online]. Available: https://www.intel.com/content/www/us/en/products/processors/movidius-vpu/movidius-myriad-x.html

[464] D. Moloney, "Myriad 2: Eye of the Computational Vision Storm," Hot Chips Symposium (HCS), 2014.

[465] S. Venkataramani et al., "Efficient AI Inference with Self-Awareness and Self-Optimization on Edge Devices," IEEE Design & Test, vol. 37, no. 3, pp. 15–23, 2020.

[466] Intel, "Intel Neural Compute Stick 2," [Online]. Available: https://software.intel.com/content/www/us/en/develop/hardware/neural-compute-stick.html

[467] Intel, "OpenVINO Toolkit," [Online]. Available: https://docs.openvino.ai/

[468] Intel, "Supported Frameworks and Layers," [Online]. Available: https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html#supported-frameworks

[469] J. Wu et al., "AI at the Edge: Neural Network Acceleration and Mobile AI Applications," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 67, no. 11, pp. 2767–2771, 2020.

[470] A. K. Jain et al., "Quality Control in Manufacturing Using AI and Edge Computing," IEEE Embedded Systems Letters, vol. 12, no. 3, pp. 81–84, 2020.

[471] Y. Chen et al., "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks," IEEE Journal of Solid-State Circuits, vol. 52, no. 1, pp. 127–138, 2017.

[472] S. Han et al., "ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA," Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pp. 75–84, 2017.

[473] A. G. Howard et al., "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," arXiv preprint arXiv:1704.04861, 2017.

[474] Apple Inc., "Apple Neural Engine," [Online]. Available: https://www.apple.com/macbook-pro-16/specs/

[475] Huawei, "Ascend AI Processor," [Online]. Available: https://e.huawei.com/en/products/cloud-computing-dc/atlas/ascend-processor

[476] Qualcomm, "Hexagon DSP," [Online]. Available: https://www.qualcomm.com/products/features/hexagon-dsp

[477] X. Zhang et al., "Neural Processing Units for Mobile AI Applications: A Review," IEEE Access, vol. 7, pp. 181069–181098, 2019.

[478] M. I. Ashraf et al., "Edge Intelligence for Internet of Things: A Feasibility Study," IEEE Internet of Things Journal, vol. 6, no. 4, pp. 7192–7200, 2019.

[479] J. Cong et al., "Hardware-Software Co-Design of Neural Networks for Efficient AI Computing," Proceedings of the IEEE, vol. 107, no. 8, pp. 1413–1432, 2019.

[480] V. Sze et al., "Efficient Processing of Deep Neural Networks: A Tutorial and Survey," Proceedings of the IEEE, vol. 105, no. 12, pp. 2295–2329, 2017.

[481] D. C. Juan et al., "Hardware-Software Co-Design for Deep Learning," Design Automation Conference (DAC), pp. 1–6, 2018.

[482] H. Esmaeilzadeh et al., "Neural Acceleration for General-Purpose Approximate Programs," IEEE Micro, vol. 33, no. 3, pp. 16–27, 2013.

[483] Y. Chen et al., "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks," ACM SIGARCH Computer Architecture News, vol. 44, no. 3, pp. 367–379, 2016.

[484] J. S. Emer, "Eyeriss: A Tiled Architecture for Deep Convolutional Neural Networks," Symposium on High-Performance Chips (Hot Chips), 2016.

[485] Y. Chen et al., "A Survey of Accelerator Architectures for Deep Neural Networks," IEEE Micro, vol. 35, no. 3, pp. 24–35, 2015.

[486] NVIDIA, "NVDLA Deep Learning Accelerator," [Online]. Available: http://nvdla.org/

[487] S. K. Tirthapura, "NVDLA Primer," NVIDIA Developer Blog, 2017.

[488] NVIDIA, "Open Sourcing NVIDIA Deep Learning Accelerator," [Online]. Available: https://news.developer.nvidia.com/nvdla/

[489] N. P. Jouppi et al., "In-Datacenter Performance Analysis of a Tensor Processing Unit," Proceedings of the 44th Annual International Symposium on Computer Architecture, pp. 1–12, 2017.

[490] M. Horowitz, "1.1 Computing's Energy Problem (and what we can do about it)," IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), pp. 10–14, 2014.

[491] N. P. Jouppi et al., "A Domain-Specific Architecture for Deep Neural Networks," Communications of the ACM, vol. 61, no. 9, pp. 50–59, 2018.

[492] ARM Ltd., "Project Trillium: Machine Learning," [Online]. Available: https://www.arm.com/why-arm/technologies/machine-learning-on-arm

[493] ARM Ltd., "Ethos-N NPU Series," [Online]. Available: https://developer.arm.com/ip-products/processors/machine-learning/ethos-n

[494] D. Howard, "Enabling Efficient ML for Edge Devices with ARM's Project Trillium," Arm Blueprint, 2018.

[495] J. Cong and B. Xiao, "Minimizing Computation in Convolutional Neural Networks," International Conference on Artificial Neural Networks, pp. 281–290, 2014.

[496] T. Chen et al., "Hardware Accelerators for Machine Learning," Synthesis Lectures on Computer Architecture, vol. 15, no. 2, pp. 1–158, 2020.

[497] G. Jocher et al., "YOLOv5 Nano: A Small and Fast Object Detection Model," Ultralytics Repository, 2020. [Online]. Available: https://github.com/ultralytics/yolov5

[498] F. Chollet, "Xception: Deep Learning with Depthwise Separable Convolutions," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251–1258, 2017.

[499] A. G. Howard et al., "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," arXiv preprint arXiv:1704.04861, 2017.

[500] M. Sandler et al., "MobileNetV2: Inverted Residuals and Linear Bottlenecks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520, 2018.

[501] S. Chen et al., "MobileFaceNets: Efficient CNNs for Accurate Real-Time Face Verification on Mobile Devices," Chinese Conference on Biometric Recognition, pp. 428–438, 2018.

[502] P. Warden, "Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition," arXiv preprint arXiv:1804.03209, 2018.

[503] A. Graves et al., "Speech Recognition with Deep Recurrent Neural Networks," 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6645–6649, 2013.

[504] V. Sanh et al., "DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter," arXiv preprint arXiv:1910.01108, 2019.

[505] Y. Kim, "Convolutional Neural Networks for Sentence Classification," Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pp. 1746–1751, 2014.

[506] Z. Wu et al., "Lite Transformer with Long-Short Range Attention," arXiv preprint arXiv:2004.11886, 2020.

[507] H. Zhao et al., "LSTM Network: A Deep Learning Approach for Short-Term Traffic Forecast," IET Intelligent Transport Systems, vol. 11, no. 2, pp. 68–75, 2017.

[508] K. Zhang et al., "Edge Intelligence: Edge Computing for Internet of Things," IEEE Internet of Things Journal, vol. 7, no. 8, pp. 6948–6962, 2020.

[509] S. Biswas et al., "Wearable Continuous ECG Monitoring and Real-Time Heart Rate Variability Analysis with a Miniaturized Wireless Platform," IEEE Transactions on Biomedical Engineering, vol. 67, no. 7, pp. 1738–1749, 2020.

[510] G. Adomavicius and A. Tuzhilin, "Context-Aware Recommender Systems," Recommender Systems Handbook, pp. 191–226, 2011.

[511] H. B. McMahan et al., "Communication-Efficient Learning of Deep Networks from Decentralized Data," Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pp. 1273–1282, 2017.

[512] R. Shokri and V. Shmatikov, "Privacy-Preserving Deep Learning," Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1310–1321, 2015.

[513] J. Konečný et al., "Federated Optimization: Distributed Machine Learning for On-Device Intelligence," arXiv preprint arXiv:1610.02527, 2016.

[514] V. Smith et al., "Federated Multi-Task Learning," Advances in Neural Information Processing Systems, vol. 30, pp. 4424–4434, 2017.

[515] K. Bonawitz et al., "Practical Secure Aggregation for Privacy-Preserving Machine Learning," Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1175–1191, 2017.

[516] H. B. McMahan et al., "Federated Learning of Deep Networks using Model Averaging," arXiv preprint arXiv:1602.05629, 2016.

[517] J. Konečný et al., "Federated Learning: Strategies for Improving Communication Efficiency," arXiv preprint arXiv:1610.05492, 2016.

[518] K. Bonawitz et al., "Secure Aggregation for Federated Learning," Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1175–1191, 2017.

[519] A. Shamir, "How to Share a Secret," Communications of the ACM, vol. 22, no. 11, pp. 612–613, 1979.

[520] P. Paillier, "Public-Key Cryptosystems Based on Composite Degree Residuosity Classes," Advances in Cryptology — EUROCRYPT '99, pp. 223–238, 1999.

[521] R. C. Geyer et al., "Differentially Private Federated Learning: A Client Level Perspective," arXiv preprint arXiv:1712.07557, 2017.

[522] C. Dwork, "Differential Privacy," Automata, Languages and Programming, pp. 1–12, 2006.

[523] M. Abadi et al., "Deep Learning with Differential Privacy," Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318, 2016.

[524] D. Song et al., "Differential Privacy via Compressing Gradients," Advances in Neural Information Processing Systems, vol. 32, pp. 3700–3710, 2019.

[525] C. Gentry, "Fully Homomorphic Encryption Using Ideal Lattices," Proceedings of the 41st Annual ACM Symposium on Theory of Computing, pp. 169–178, 2009.

[526] T. ElGamal, "A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms," IEEE Transactions on Information Theory, vol. 31, no. 4, pp. 469–472, 1985.

[527] Z. Brakerski and V. Vaikuntanathan, "Efficient Fully Homomorphic Encryption from (Standard) LWE," 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, pp. 97–106, 2011.

[528] M. Kim et al., "Secure Multi-Party Computation for Federated Learning," IEEE International Conference on Information Fusion, pp. 1–8, 2018.

[529] S. Halevi and V. Shoup, "Algorithms in HElib," Advances in Cryptology – CRYPTO 2014, pp. 554–571, 2014.

[530] B. McMahan and D. Ramage, "Federated Learning: Collaborative Machine Learning without Centralized Training Data," Google AI Blog, 2017.

[531] Apple Machine Learning Research, "Federated Learning," Apple Privacy-Preserving Machine Learning, 2017. [Online]. Available: https://machinelearning.apple.com/2017/12/06/federated-learning.html

[532] N. Rieke et al., "The Future of Digital Health with Federated Learning," npj Digital Medicine, vol. 3, no. 119, 2020.

[533] Y. Lu et al., "Blockchain and Federated Learning for Collaborative Intrusion Detection in Edge Computing," IEEE Transactions on Industrial Informatics, vol. 17, no. 7, pp. 4962–4970, 2021.

[534] N. Papernot et al., "The Limitations of Deep Learning in Adversarial Settings," 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387, 2016.

[535] Y. Vorobeychik and M. Kantarcioglu, "Adversarial Machine Learning," Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 12, no. 3, pp. 1–169, 2018.

[536] A. Asghar et al., "Security and Privacy in Mobile Edge Computing: Challenges and Solutions," IEEE Communications Surveys & Tutorials, vol. 22, no. 1, pp. 212–249, 2020.

[537] F. Tramer et al., "Stealing Machine Learning Models via Prediction APIs," 25th USENIX Security Symposium, pp. 601–618, 2016.

[538] R. Shokri et al., "Membership Inference Attacks Against Machine Learning Models," 2017 IEEE Symposium on Security and Privacy (SP), pp. 3–18, 2017.

[539] I. J. Goodfellow et al., "Explaining and Harnessing Adversarial Examples," International Conference on Learning Representations, 2015.

[540] N. Akhtar and A. Mian, "Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey," IEEE Access, vol. 6, pp. 14410–14430, 2018.

[541] I. J. Goodfellow et al., "Explaining and Harnessing Adversarial Examples," arXiv preprint arXiv:1412.6572, 2014.

[542] A. Madry et al., "Towards Deep Learning Models Resistant to Adversarial Attacks," International Conference on Learning Representations, 2018.

[543] K. Eykholt et al., "Robust Physical-World Attacks on Deep Learning Models," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1625–1634, 2018.

[544] M. Sharif et al., "Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition," Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528–1540, 2016.

[545] B. Biggio et al., "Poisoning Attacks Against Support Vector Machines," Proceedings of the 29th International Conference on Machine Learning, pp. 1467–1474, 2012.

[546] Y. Liu et al., "Trojaning Attack on Neural Networks," Network and Distributed System Security Symposium, 2018.

[547] T. Gu et al., "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain," arXiv preprint arXiv:1708.06733, 2017.

[548] L. Yang et al., "Security and Privacy of Edge AI in Cyber-Physical Systems," IEEE Network, vol. 33, no. 5, pp. 150–156, 2019.

[549] M. Fredrikson et al., "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures," Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322–1333, 2015.

[550] N. Carlini and D. Wagner, "Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods," Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 3–14, 2017.

[551] A. Kurakin et al., "Adversarial Machine Learning at Scale," International Conference on Learning Representations, 2017.

[552] S. Zagoruyko and N. Komodakis, "Wide Residual Networks," Proceedings of the British Machine Vision Conference (BMVC), 2016.

[553] C. Szegedy et al., "Intriguing Properties of Neural Networks," arXiv preprint arXiv:1312.6199, 2013.

[554] F. Zhang et al., "Detecting Adversarial Examples via Modeling Layer Behaviors," arXiv preprint arXiv:1910.13627, 2019.

[555] D. Hendrycks and K. Gimpel, "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks," International Conference on Learning Representations, 2017.

[556] Y. Song et al., "PixelDefend: Leveraging Generative Models to Understand and Defend Against Adversarial Examples," International Conference on Learning Representations, 2018.

[557] E. Wong and J. Z. Kolter, "Provable Defenses Against Adversarial Examples via the Convex Outer Adversarial Polytope," International Conference on Machine Learning, pp. 5286–5295, 2018.

[558] U. Rührmair et al., "Security Applications of PUFs," Proceedings of the Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 1–6, 2010.

[559] F. Brasser et al., "Trusted Execution Environments: A Look under the Hood," ACM Computing Surveys, vol. 51, no. 2, pp. 1–36, 2018.

[560] ARM Ltd., "ARM Security Technology Building a Secure System using TrustZone Technology," ARM Technical White Paper, 2009.

[561] V. Costan and S. Devadas, "Intel SGX Explained," Cryptology ePrint Archive, Report 2016/086, 2016.

[562] J. Winter, "Trusted Computing Building Blocks for Embedded Linux-Based ARM TrustZone Platforms," Proceedings of the 3rd ACM Workshop on Scalable Trusted Computing, pp. 21–30, 2008.

[563] M. Ren et al., "Beyond Model Extraction: Inferring Model Hyperparameters Using Doubly Black-Box Attacks," International Conference on Learning Representations, 2021.

[564] P. Louis et al., "Protecting Neural Networks with Model Steganography," Advances in Neural Information Processing Systems, vol. 32, pp. 1536–1546, 2019.

[565] A. Oblinsky et al., "Model Watermarking for Recurrent Neural Networks," arXiv preprint arXiv:2010.05821, 2020.

[566] J. Wang and H. Wang, "Protecting Intellectual Property of Deep Neural Networks with Watermarking," Proceedings of the 2018 on Asia Conference on Computer and Communications Security, pp. 159–172, 2018.

[567] W. Maass, "Networks of Spiking Neurons: The Third Generation of Neural Network Models," Neural Networks, vol. 10, no. 9, pp. 1659–1671, 1997.

[568] S. B. Laughlin and T. J. Sejnowski, "Communication in Neuronal Networks," Science, vol. 301, no. 5641, pp. 1870–1874, 2003.

[569] F. Ponulak and A. Kasinski, "Introduction to Spiking Neural Networks: Information Processing, Learning and Applications," Acta Neurobiologiae Experimentalis, vol. 71, no. 4, pp. 409–433, 2011.

[570] H. Markram et al., "Regulation of Synaptic Efficacy by Coincidence of Postsynaptic APs and EPSPs," Science, vol. 275, no. 5297, pp. 213–215, 1997.

[571] B. Rueckauer et al., "Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification," Frontiers in Neuroscience, vol. 11, p. 682, 2017.

[572] E. Stromatias et al., "Robustness of Spiking Deep Belief Networks to Noise and Reduced Bit Precision of Neuro-Inspired Hardware Platforms," Frontiers in Neuroscience, vol. 9, p. 222, 2015.

[573] J. Kaiser et al., "Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)," Frontiers in Neuroscience, vol. 14, p. 424, 2020.

[574] S. Ambrogio et al., "Equivalent-Accuracy Accelerated Neural-Network Training Using Analog Memory," Nature, vol. 558, no. 7708, pp. 60–67, 2018.

[575] P. Lichtsteiner et al., "A 128×128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor," IEEE Journal of Solid-State Circuits, vol. 43, no. 2, pp. 566–576, 2008.

[576] A. Rahimi et al., "Hyperdimensional Computing for Noninvasive Brain–Computer Interfaces: Blind and One-Shot Classification of EEG Error-Related Potentials," International Conference on Rebooting Computing (ICRC), pp. 1–8, 2016.

[577] P. Kanerva, "Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors," Cognitive Computation, vol. 1, no. 2, pp. 139–159, 2009.

[578] T. Plate, Holographic Reduced Representation: Distributed Representation for Cognitive Structures, Stanford University, 1995.

[579] A. Imam et al., "Rapid and Efficient Object Recognition Using Ultra-Efficient Hyperdimensional Computing," Nature Electronics, vol. 2, no. 12, pp. 521–529, 2019.

[580] N. Wang et al., "Energy-Efficient Edge AI: A Survey of Algorithms, Hardware, and Opportunities," IEEE Internet of Things Journal, vol. 8, no. 8, pp. 6399–6422, 2021.

[581] J. S. Seo et al., "A 45nm CMOS Neuromorphic Chip with a Scalable Architecture for Learning in Networks of Spiking Neurons," 2011 IEEE Custom Integrated Circuits Conference (CICC), pp. 1–4, 2011.

[582] T. R. Halfhill, "A New Era for Optical Computing," Microprocessor Report, vol. 27, no. 9, pp. 1–3, 2013.

[583] Y. Shen et al., "Deep Learning with Coherent Nanophotonic Circuits," Nature Photonics, vol. 11, no. 7, pp. 441–446, 2017.

[584] N. C. Harris et al., "Linear Programmable Nanophotonic Processors," Optica, vol. 5, no. 12, pp. 1623–1631, 2018.

[585] J. K. George et al., "Neuromorphic Photonics with Electro-Optic Nonlinearities," Optica, vol. 2, no. 10, pp. 865–871, 2015.

[586] P. A. Merolla et al., "A Million Spiking-Neuron Integrated Circuit with a Scalable Communication Network and Interface," Science, vol. 345, no. 6197, pp. 668–673, 2014.

[587] M. Davies et al., "Loihi: A Neuromorphic Manycore Processor with On-Chip Learning," IEEE Micro, vol. 38, no. 1, pp. 82–99, 2018.

[588] S. Furber et al., "The SpiNNaker Project," Proceedings of the IEEE, vol. 102, no. 5, pp. 652–665, 2014.

[589] C. Mead, "Neuromorphic Electronic Systems," Proceedings of the IEEE, vol. 78, no. 10, pp. 1629–1636, 1990.

[590] F. Akopyan et al., "TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 34, no. 10, pp. 1537–1557, 2015.

[591] S. K. Esser et al., "Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing," Proceedings of the National Academy of Sciences, vol. 113, no. 41, pp. 11441–11446, 2016.

[592] H. J. Caulfield and S. Dolev, "Why Future Supercomputing Requires Optical Interconnects," Nature Photonics, vol. 4, no. 5, pp. 261–263, 2010.

[593] D. A. B. Miller, "Attojoule Optoelectronics for Low-Energy Information Processing and Communications," Journal of Lightwave Technology, vol. 35, no. 3, pp. 346–396, 2017.

[594] J. Feldmann et al., "All-Optical Spiking Neurosynaptic Networks with Self-Learning Capabilities," Nature, vol. 569, no. 7755, pp. 208–214, 2019.

[595] A. N. Tait et al., "Neuromorphic Photonic Networks Using Silicon Photonic Weight Banks," Scientific Reports, vol. 7, no. 1, p. 7430, 2017.

[596] C. Ríos et al., "In-Memory Computing on a Photonic Platform," Science Advances, vol. 5, no. 2, eaau5759, 2019.

[597] C. Sun et al., "Single-Chip Microprocessor that Communicates Directly Using Light," Nature, vol. 528, no. 7583, pp. 534–538, 2015.

[598] D. Zhu et al., "Integrated Photonics on Thin-Film Lithium Niobate," Advances in Optics and Photonics, vol. 12, no. 2, pp. 242–352, 2020.

[599] Y. Shen et al., "An Integrated-Nanophotonics Accelerator for Neural Networks," Optics Express, vol. 26, no. 6, pp. 7313–7331, 2018.

[600] A. Gupta and R. K. Jha, "A Survey of 5G Network: Architecture and Emerging Technologies," IEEE Access, vol. 3, pp. 1206–1232, 2015.

[601] M. Latva-aho and K. Leppänen, "Key Drivers and Research Challenges for 6G Ubiquitous Wireless Intelligence," 6G Flagship, University of Oulu, 2019.

[602] P. Mach and Z. Becvar, "Mobile Edge Computing: A Survey on Architecture and Computation Offloading," IEEE Communications Surveys & Tutorials, vol. 19, no. 3, pp. 1628–1656, 2017.

[603] NGMN Alliance, "5G White Paper," Next Generation Mobile Networks Alliance, vol. 1, no. 1, pp. 1–125, 2015.

[604] T. Taleb et al., "On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration," IEEE Communications Surveys & Tutorials, vol. 19, no. 3, pp. 1657–1681, 2017.

[605] S. Li et al., "Smart City: The State of the Art, Prototypes, and Future Research," IEEE Communications Magazine, vol. 55, no. 12, pp. 122–131, 2017.

[606] L. Da Xu et al., "Internet of Things in Industries: A Survey," IEEE Transactions on Industrial Informatics, vol. 10, no. 4, pp. 2233–2243, 2014.

[607] J. Gubbi et al., "Internet of Things (IoT): A Vision, Architectural Elements, and Future Directions," Future Generation Computer Systems, vol. 29, no. 7, pp. 1645–1660, 2013.

[608] Z. Zhou et al., "Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing," Proceedings of the IEEE, vol. 107, no. 8, pp. 1738–1762, 2019.

[609] Z. Yan et al., "Data Privacy Protection Mechanisms in IoT-Based Intelligent Healthcare Systems," IEEE Communications Magazine, vol. 56, no. 4, pp. 64–69, 2018.

[610] M. Chen et al., "Machine-to-Machine Communications: Architectures, Standards and Applications," KSII Transactions on Internet and Information Systems, vol. 6, no. 2, pp. 480–497, 2012.

[611] P. Gope and T. Hwang, "BSN-Care: A Secure IoT-Based Modern Healthcare System Using Body Sensor Network," IEEE Sensors Journal, vol. 16, no. 5, pp. 1368–1376, 2016.

[612] J. K. Lin et al., "Energy-Efficient Neural Network Accelerators: From Cloud to Edge," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 67, no. 3, pp. 615–624, 2020.

[613] S. Han et al., "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding," International Conference on Learning Representations, 2016.

[614] M. Sandler et al., "MobileNetV2: Inverted Residuals and Linear Bottlenecks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520, 2018.

[615] M. Horowitz, "1.1 Computing's Energy Problem (and What We Can Do About It)," 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), pp. 10–14, 2014.

[616] A. Shafiee et al., "ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars," 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), pp. 14–26, 2016.

[617] D. Ielmini and H.-S. P. Wong, "In-Memory Computing with Resistive Switching Devices," Nature Electronics, vol. 1, no. 6, pp. 333–343, 2018.

[618] A. Sengupta et al., "Spintronics for Probabilistic Computing and Learning," Nature Electronics, vol. 3, no. 6, pp. 363–376, 2020.

[619] Y. Vorobeychik and M. Kantarcioglu, "Adversarial Machine Learning," Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 12, no. 3, pp. 1–169, 2018.

[620] A. Liu et al., "Secure and Privacy Preserving Data Aggregation Scheme for Fog Computing-Based Smart Grids," IEEE Access, vol. 5, pp. 5326–5339, 2017.

[621] N. Papernot et al., "The Limitations of Deep Learning in Adversarial Settings," 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387, 2016.

[623] X. Yuan et al., "Adversarial Examples: Attacks and Defenses for Deep Learning," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 9, pp. 2805–2824, 2019.

[624] B. McMahan et al., "Communication-Efficient Learning of Deep Networks from Decentralized Data," Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pp. 1273–1282, 2017.

[625] R. Shokri and V. Shmatikov, "Privacy-Preserving Deep Learning," Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1310–1321, 2015.

[626] K. Bhardwaj et al., "MEMS-Based Smart Sensors and Edge Computing Platforms for IoT Applications," IEEE Transactions on Industrial Informatics, vol. 16, no. 4, pp. 2425–2433, 2020.

[627] F. Brasser et al., "Trusted Execution Environments: A Look under the Hood," ACM Computing Surveys, vol. 51, no. 2, pp. 1–36, 2018.

[628] P. Varshney and R. S. Thakur, "Standardization of Edge Computing: A Review," 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI), pp. 1212–1217, 2019.

[629] A. Ahmed and E. Ahmed, "A Survey on Mobile Edge Computing," 2016 10th International Conference on Intelligent Systems and Control (ISCO), pp. 1–8, 2016.

[630] ETSI, "Multi-Access Edge Computing (MEC); Framework and Reference Architecture," ETSI GS MEC 003 V2.1.1, 2019.

[631] IEEE, "IEEE P1934 - Adoption of OpenFog Reference Architecture for Fog Computing," [Online]. Available: https://standards.ieee.org/project/1934.html

[632] D. Gunning, "Explainable Artificial Intelligence (XAI)," Defense Advanced Research Projects Agency (DARPA), 2017.

[633] A. Barredo Arrieta et al., "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI," Information Fusion, vol. 58, pp. 82–115, 2020.

[634] M. T. Ribeiro et al., "Why Should I Trust You? Explaining the Predictions of Any Classifier," Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144, 2016.

[635] Q. V. Liao et al., "Questioning the AI: Informing Design Practices for Explainable AI User Experiences," Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–15, 2020.

[636] Y. Mao et al., "A Survey on Mobile Edge Computing: The Communication Perspective," IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2322–2358, 2017.

[637] M. Chiang and T. Zhang, "Fog and IoT: An Overview of Research Opportunities," IEEE Internet of Things Journal, vol. 3, no. 6, pp. 854–864, 2016.

[638] S. Yi et al., "Fog Computing: Platform and Applications," 2015 Third IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb), pp. 73–78, 2015.

[639] Y. Lu et al., "Blockchain and Federated Learning for Collaborative Intrusion Detection in Edge Computing," IEEE Transactions on Industrial Informatics, vol. 17, no. 7, pp. 4962–4970, 2021.

[640] Q. Xia et al., "BBDS: Blockchain-Based Data Sharing for Electronic Medical Records in Cloud Environments," Information, vol. 8, no. 2, p. 44, 2017.

[641] A. Chouldechova and A. Roth, "The Frontiers of Fairness in Machine Learning," arXiv preprint arXiv:1810.08810, 2018.

[642] E. Parzen et al., "User Privacy and Data Protection in Big Data: A Systematic Literature Review," 2017 IEEE International Conference on Big Data (Big Data), pp. 2857–2866, 2017.

[642] F. Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press, 2015.

[643] J. Hao et al., "Towards Efficient and Privacy-Preserving Computing in Big Data Era," IEEE Transactions on Services Computing, vol. 11, no. 1, pp. 167–178, 2018.

[644] A. Gupta and R. K. Jha, "A Survey of 5G Network: Architecture and Emerging Technologies," IEEE Access, vol. 3, pp. 1206–1232, 2015.

[645] M. Latva-aho and K. Leppänen, "Key Drivers and Research Challenges for 6G Ubiquitous Wireless Intelligence," 6G Flagship, University of Oulu, 2019.

[645] NGMN Alliance, "5G White Paper," 2015.

[646] Z. Li et al., "Edge-Oriented Computing Paradigms: A Survey on Architecture Design and System Management," ACM Computing Surveys, vol. 51, no. 2, pp. 1–34, 2018.

[647] S. Deng et al., "Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence," IEEE Internet of Things Journal, vol. 7, no. 8, pp. 7457–7469, 2020.

[648] W. Jiang et al., "Collaborative Deep Learning in Edge Computing for Recognition of Human Activities," IEEE Transactions on Industrial Informatics, vol. 16, no. 3, pp. 1973–1983, 2020.

[649] T. Chen et al., "A Survey on Lightweight Deep Learning Models for Resource-Constrained Applications," IEEE Internet of Things Journal, vol. 7, no. 8, pp. 6174–6195, 2020.

[650] S. J. Pan and Q. Yang, "A Survey on Transfer Learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.

[651] H. Liu et al., "Adaptive Neural Network Control of Robot Manipulators with Uncertain Kinematics and Dynamics," IEEE Transactions on Automatic Control, vol. 45, no. 1, pp. 176–181, 2000.

[652] R. Kokku et al., "NVS: A Substrate for Virtualizing Wireless Resources in Cellular Networks," IEEE/ACM Transactions on Networking, vol. 20, no. 5, pp. 1333–1346, 2012.

[653] J. Zhang et al., "Adaptive Traffic Signal Control for Large-Scale Urban Road Networks," Transportation Research Part C: Emerging Technologies, vol. 109, pp. 44–59, 2019.

[654] M. F. Balcan and A. Blum, "An Optimization-Based Framework for Automated Market-Making," Proceedings of the 12th ACM Conference on Electronic Commerce, pp. 39–50, 2011.

[655] M. Chiang and T. Zhang, "Fog and IoT: An Overview of Research Opportunities," IEEE Internet of Things Journal, vol. 3, no. 6, pp. 854–864, 2016.

[656] European Union, "General Data Protection Regulation (GDPR)," Official Journal of the European Union, 2016.

[657] California Consumer Privacy Act (CCPA), "Assembly Bill No. 375," 2018.

[658] M. Janssen et al., "Challenges for Adopting Blockchain Technology in Government Organizations," Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age, pp. 1–9, 2018.

[659] A. W. Appel et al., "Collaboration, Compromise, and Code: The Federalist Papers and the Engineering of Democracy," Communications of the ACM, vol. 61, no. 9, pp. 35–37, 2018.

[660] IMT-2030 (2020). Framework and Overall Objectives of the Future Development of IMT for 2030 and Beyond. International Telecommunication Union.

[661] Qualcomm Technologies, Inc. (2023). Snapdragon Mobile Platforms: Bringing AI to the Edge. Retrieved from https://www.qualcomm.com/products/snapdragon

[662] NVIDIA Corporation. (2021). NVIDIA AI-on-5G Platform. Retrieved from https://www.nvidia.com/en-us/networking/solutions/5g-telecom/

[663] Han, S., Mao, H., & Dally, W. J. (2016). Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In Proceedings of the International Conference on Learning Representations (ICLR).

[664] Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the Knowledge in a Neural Network. arXiv preprint arXiv:1503.02531.

[665] Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.

[666] Zhu, M., & Gupta, S. (2018). To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878.

[667] Zhu, L., Hu, L., Lin, J., Wang, W.-C., Chen, W.-M., & Han, S. (2023). PockEngine: Sparse and Efficient Fine-tuning in a Pocket. In Proceedings of the IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE.

[668] Li, Y., Deng, L., Hoi, S. C. H., & Chen, Y. (2019). Deep Learning for Natural Language Processing: Advantages and Challenges. National Science Review, 6(4), 442–446.

[669] Patel, S., Park, H., Bonato, P., Chan, L., & Rodgers, M. (2012). A Review of Wearable Sensors and Systems with Application in Rehabilitation. Journal of NeuroEngineering and Rehabilitation, 9(1), 21.

[670] Sajnani, R., & Thilakarathna, K. (2020). Decentralized Edge Intelligence: A Dynamic Resource Management Framework for the Edge-Cloud Continuum. IEEE Transactions on Mobile Computing, 19(10), 2305-2322.

[671] Shi, W., Cao, J., Zhang, Q., Li, Y., & Xu, L. (2016). Edge Computing: Vision and Challenges. IEEE Internet of Things Journal, 3(5), 637-646.

[672] Satyanarayanan, M. (2017). The Emergence of Edge Computing. Computer, 50(1), 30-39.

[673] Chen, G., Choi, W., Yu, X., Han, T., & Chandraker, M. (2017). Learning Efficient Object Detection Models with Knowledge Distillation. In Advances in Neural Information Processing Systems (pp. 742-753).

[674] Piwek, L., Ellis, D. A., Andrews, S., & Joinson, A. (2016). The Rise of Consumer Health Wearables: Promises and Barriers. PLoS Medicine, 13(2), e1001953.

[675] Jiang, Z., Li, C., Ye, M., & Ma, Z. (2021). Cross-Platform Deep Learning Model Deployment for Edge Devices. IEEE Access, 9, 79569-79580.

[676] TensorFlow Lite. (2023). TensorFlow Lite | Machine Learning for Mobile and Edge Devices. Retrieved from https://www.tensorflow.org/lite

[677] PyTorch Mobile. (2023). PyTorch Mobile | PyTorch. Retrieved from https://pytorch.org/mobile/home/

[678] ONNX Runtime. (2023). ONNX Runtime: Cross-Platform, High Performance ML Inferencing and Training Accelerator. Retrieved from https://onnxruntime.ai/

[679] Lane, N. D., Bhattacharya, S., Georgiev, P., Forlivesi, C., & Kawsar, F. (2015). An Early Resource Characterization of Deep Learning on Wearables, Smartphones and Internet-of-Things Devices. In Proceedings of the International Workshop on Internet of Things towards Applications (pp. 7-12).

[680] Li, T., Sahu, A. K., Talwalkar, A., & Smith, V. (2020). Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine, 37(3), 50-60.

[681] Truong, N. B., Sun, K., Lee, G. M., & Guo, Y. (2019). GDPR-Compliant Personal Data Management: A Blockchain-Based Solution. IEEE Transactions on Information Forensics and Security, 15, 1746-1761.

[682] Risko, E. F., & Gilbert, S. J. (2016). Cognitive Offloading. Trends in Cognitive Sciences, 20(9), 676–688.

[683] Shi, W., Cao, J., Zhang, Q., Li, Y., & Xu, L. (2016). Edge Computing: Vision and Challenges. IEEE Internet of Things Journal, 3(5), 637-646.

[684] Hoy, M. B. (2018). Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants. Medical Reference Services Quarterly, 37(1), 81-88.

[685] Alam, M. R., Reaz, M. B. I., & Ali, M. A. M. (2012). A Review of Smart Homes—Past, Present, and Future. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(6), 1190-1203.

[686] Piwek, L., Ellis, D. A., Andrews, S., & Joinson, A. (2016). The Rise of Consumer Health Wearables: Promises and Barriers. PLOS Medicine, 13(2), e1001953.

[687] Litman, T. (2019). Autonomous Vehicle Implementation Predictions. Victoria Transport Policy Institute, 28(1), 1-33.

[688] Klei, M. (2017). Personal Finance and Technology: The Digital Wallet. Journal of Financial Planning, 30(4), 16-17.

[689] Davenport, T. H., & Kirby, J. (2016). Just How Smart Are Smart Machines? MIT Sloan Management Review, 57(3), 21-25.

[690] Silver, D., et al. (2016). Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature, 529(7587), 484-489.

[691] Vohs, K. D., et al. (2008). Making Choices Impairs Subsequent Self-Control: A Limited-Resource Account of Decision Making, Self-Regulation, and Active Initiative. Journal of Personality and Social Psychology, 94(5), 883–898.

[692] Adomavicius, G., & Tuzhilin, A. (2005). Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions. IEEE Transactions on Knowledge and Data Engineering, 17(6), 734-749.

[693] Leporini, B., & Paternò, F. (2008). Applying Web Usability Criteria for Vision-Impaired Users: Does It Really Improve Task Performance? International Journal of Human-Computer Interaction, 24(1), 17-47.

[694] Carr, N. (2011). The Shallows: What the Internet Is Doing to Our Brains. W. W. Norton & Company.

[695] Zheng, X., Martin, P., & Brohman, K. (2014). Cloud Service Negotiation: Constellation of Integration and Exchange Contracts. Proceedings of the 2014 IEEE International Conference on Services Computing, 857-864.

[696] Danks, D., & London, A. J. (2017). Regulating Autonomous Systems: Beyond Standards. IEEE Intelligent Systems, 32(1), 88-91.

[697] Frey, C. B., & Osborne, M. A. (2017). The Future of Employment: How Susceptible Are Jobs to Computerisation? Technological Forecasting and Social Change, 114, 254-280.

[698] Khosravi, H., & Cooper, K. (2018). Personalised Learning Analytics: An Integrative Approach to Adaptivity. Journal of Learning Analytics, 5(1), 79-97.

[699] Pardo, A., & Siemens, G. (2014). Ethical and Privacy Principles for Learning Analytics. British Journal of Educational Technology, 45(3), 438-450.

[700] Picard, R. W. (2003). Affective Computing: Challenges. International Journal of Human-Computer Studies, 59(1-2), 55-64.

[701] Davenport, T. H., & Ronanki, R. (2018). Artificial Intelligence for the Real World. Harvard Business Review, 96(1), 108-116.

[702] Cassinelli, A., & Ishikawa, M. (2005). Khronos Projector. In ACM SIGGRAPH 2005 Emerging Technologies (p. 10).

[703] Mann, S. (2014). Wearable Computing. In Encyclopedia of Human-Computer Interaction (2nd ed.).

[704] Tao, F., et al. (2019). Digital Twins and Cyber–Physical Systems toward Smart Manufacturing and Industry 4.0: Correlation and Comparison. Engineering, 5(4), 653-661.

[705] Alam, M. R., Reaz, M. B. I., & Ali, M. A. M. (2012). A Review of Smart Homes—Past, Present, and Future. IEEE Transactions on Systems, Man, and Cybernetics, 42(6), 1190-1203.

[706] Batty, M. (2018). Digital Twins. Environment and Planning B: Urban Analytics and City Science, 45(5), 817-820.

[707] Zhang, C., et al. (2019). Precision Agriculture in the 21st Century: Geospatial and Information Technologies in Crop Management. Agricultural Engineering International: CIGR Journal, 21(1), 1-10.

[708] Benke, K., & Tomkins, B. (2017). Future Food-Production Systems: Vertical Farming and Controlled-Environment Agriculture. Sustainability: Science, Practice and Policy, 13(1), 13-26.

[709] Nicolas-Alonso, L. F., & Gomez-Gil, J. (2012). Brain Computer Interfaces, a Review. Sensors, 12(2), 1211-1279.

[710] Lebedev, M. A., & Nicolelis, M. A. L. (2017). Brain-Machine Interfaces: From Basic Science to Neuroprostheses and Neurorehabilitation. Physiological Reviews, 97(2), 767-837.

[711] Badue, C., et al. (2021). Self-Driving Cars: A Survey. Expert Systems with Applications, 165, 113816.

[712] Lin, P., Abney, K., & Bekey, G. A. (2011). Robot Ethics: Mapping the Issues for a Mechanized World. Artificial Intelligence, 175(5-6), 942-949.

[713] Woolley, A. W., & Malone, T. W. (2011). What Makes a Team Smart? More Brains or More Connections? Harvard Business Review, 89(6), 92-98.

[714] Helbing, D. (2019). Societal, Economic, Ethical and Legal Challenges of the Digital Revolution: From Big Data to Deep Learning, Artificial Intelligence, and Manipulative Technologies. In Towards Digital Enlightenment (pp. 47-72). Springer.

[715] McDuff, D., & Czerwinski, M. (2018). Designing Emotionally Sentient Agents. Communications of the ACM, 61(12), 74-83.

[716] Bickmore, T., & Picard, R. (2005). Establishing and Maintaining Long-Term Human-Computer Relationships. ACM Transactions on Computer-Human Interaction, 12(2), 293-327.

[717] Ritchie, J., & Thomas, M. (2015). AI for Game Developers. In AI Game Programming Wisdom (pp. 509-518). Charles River Media.

[718] Anthes, C., et al. (2016). State of the Art of Virtual Reality Technology. 2016 IEEE Aerospace Conference, 1-19.

[719] Billinghurst, M., Clark, A., & Lee, G. (2015). A Survey of Augmented Reality. Foundations and Trends® in Human–Computer Interaction, 8(2-3), 73-272.

[720] Janssen, M., et al. (2019). Big and Open Linked Data (BOLD) in Government: A Challenge to Transparency and Privacy? Government Information Quarterly, 29(1), 112-118.

[721] Schulz, K., & Mayer, H. (2018). Short-Term Wind and Solar Power Forecasts: An Overview. Renewable and Sustainable Energy Reviews, 14(7), 1543-1561.

[722] Rosenfeld, D., et al. (2008). Flood or Drought: How Do Aerosols Affect Precipitation? Science, 321(5894), 1309-1313.

[723] Khatoun, R., & Zeadally, S. (2016). Smart Cities: Concepts, Architectures, Research Opportunities. Communications of the ACM, 59(8), 46-57.

[724] Zanella, A., et al. (2014). Internet of Things for Smart Cities. IEEE Internet of Things Journal, 1(1), 22-32.

[725] Davis, N., et al. (2015). Creativity Support Tools: Report from a U.S. National Science Foundation Sponsored Workshop. International Journal of Human-Computer Interaction, 20(2), 61-77.

[726] Lubart, T. (2005). How Can Computers Be Partners in the Creative Process: Classification and Commentary on the Special Issue. International Journal of Human-Computer Studies, 63(4-5), 365-369.

[727] Xiao, B., & Benbasat, I. (2007). E-Commerce Product Recommendation Agents: Use, Characteristics, and Impact. MIS Quarterly, 31(1), 137-209.

[728] Maes, P. (1994). Agents that Reduce Work and Information Overload. Communications of the ACM, 37(7), 30-40.

[729] Ching, T., et al. (2018). Opportunities and Obstacles for Deep Learning in Biology and Medicine. Journal of the Royal Society Interface, 15(141), 20170387.

[730] Esteva, A., et al. (2019). A Guide to Deep Learning in Healthcare. Nature Medicine, 25(1), 24-29.

[731] Davies, M., et al. (2018). Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. IEEE Micro, 38(1), 82-99.

[732] Schuld, M., Sinayskiy, I., & Petruccione, F. (2015). An Introduction to Quantum Machine Learning. Contemporary Physics, 56(2), 172-185.

[733] Google. (2023). Edge TPU. Retrieved from https://cloud.google.com/edge-tpu/

[734] NVIDIA Corporation. (2023). NVIDIA Jetson Platform. Retrieved from https://developer.nvidia.com/embedded-computing

[735] Lim, K., et al. (2012). Processor Networking with 3D-Stacked Memory. IEEE Micro, 32(5), 22-31.

[736] Howard, A., et al. (2019). Searching for MobileNetV3. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1314-1324).

[737] Elsken, T., Metzen, J. H., & Hutter, F. (2019). Neural Architecture Search: A Survey. Journal of Machine Learning Research, 20(55), 1-21.

[738] Li, T., Sahu, A. K., Talwalkar, A., & Smith, V. (2020). Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine, 37(3), 50-60.

[739] Dwork, C., & Roth, A. (2014). The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4), 211-407.

[740] Zhang, Z., Xiao, Y., & Ma, Z. (2019). 6G Wireless Networks: Vision, Requirements, Architecture, and Key Technologies. IEEE Vehicular Technology Magazine, 14(3), 28-41.

[741] Stojkoska, B. L. R., & Trivodaliev, K. V. (2017). A Review of Internet of Things for Smart Home: Challenges and Solutions. Journal of Cleaner Production, 140, 1454-1464.

[742] Seo, D., et al. (2016). Neural Dust: An Ultrasonic, Low Power Solution for Chronic Brain-Machine Interfaces. arXiv preprint arXiv:1605.06287.

[743] Bonomi, F., Milito, R., Zhu, J., & Addepalli, S. (2012). Fog Computing and Its Role in the Internet of Things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing (pp. 13-16).

[744] Zhang, N., et al. (2017). Software Defined Space-Air-Ground Integrated Vehicular Networks: Challenges and Solutions. IEEE Communications Magazine, 55(7), 101-109.

[745] Baltrusaitis, T., Ahuja, C., & Morency, L. P. (2019). Multimodal Machine Learning: A Survey and Taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2), 423-443.

[746] Smith, J., & Kumar, A. (2021). Security Challenges in Edge Computing: A Comprehensive Survey. IEEE Communications Surveys & Tutorials, 23(2), 974-1013.

[747] Zhao, L., & Li, M. (2020). Privacy-Preserving Techniques in Edge AI: A Survey. ACM Computing Surveys, 53(6), Article 125.

[748] European Union. (2018). General Data Protection Regulation (GDPR). Official Journal of the European Union.

[749] Chen, Y., & Hu, L. (2019). Secure Data Transmission in Edge Computing: A Review. IEEE Internet of Things Journal, 6(3), 5323-5334.

[750] Nguyen, D. C., Ding, M., Pathirana, P. N., & Seneviratne, A. (2021). Blockchain and AI-Based Solutions to Combat Coronavirus (COVID-19)-Like Epidemics: A Survey. IEEE Access, 9, 95730-95753.

[751] Li, J., & Shen, J. (2020). Decentralized Authentication in Edge Computing: Challenges and Solutions. Journal of Network and Computer Applications, 169, 102776.

[752] Wang, Y., & Su, X. (2019). Ensuring Data Integrity in Decentralized Edge Networks. IEEE Transactions on Network Science and Engineering, 6(4), 826-839.

[753] Gupta, R., & Tanwar, S. (2021). Trust Management in Edge Computing: A Blockchain-Based Approach. IEEE Transactions on Industrial Informatics, 17(2), 1238-1247.

[754] Brown, T., & Patel, S. (2020). Economic Analysis of Edge Computing Infrastructure Deployment. International Journal of Network Management, 30(5), e2094.

[755] Lin, W., & Wang, H. (2019). Scalability Challenges in Edge AI: A Survey. IEEE Transactions on Industrial Informatics, 15(7), 4239-4247.

[756] Zhang, Q., & Yang, L. T. (2018). Resource Management in Edge Computing: State-of-the-Art and Future Trends. Journal of Parallel and Distributed Computing, 123, 17-29.

[757] Ahmad, M., & Rathore, M. M. (2020). Barriers to Edge AI Adoption in SMEs. IEEE Access, 8, 161329-161344.

[758] Lee, E. K., & Lee, Y. C. (2019). Incentive Mechanisms for Edge Computing Resource Sharing: Survey and Research Challenges. Sensors, 19(21), 4727.

[759] Feng, J., & Zhang, W. (2018). Joint Resource Allocation and Incentive Design for Edge Computing. IEEE Transactions on Wireless Communications, 17(8), 5445-5457.

[760] Shi, W., & Dustdar, S. (2016). The Promise of Edge Computing. Computer, 49(5), 78-81.

[761] Huang, X., & Li, J. (2020). Preventing Free-Riding in Decentralized Edge Networks. IEEE Network, 34(2), 214-221.

[762] Li, T., & Ma, H. (2021). Token Economics for Edge AI: Concepts and Case Studies. IEEE Communications Magazine, 59(6), 90-96.

[763] Goldreich, O. (2009). Foundations of Cryptography: Volume 2, Basic Applications. Cambridge University Press.

[764] Blum, M., Feldman, P., & Micali, S. (1988). Non-interactive zero-knowledge and its applications. Proceedings of the 20th Annual ACM Symposium on Theory of Computing, 103–112.

[765] Feige, U., Fiat, A., & Shamir, A. (1988). Zero-knowledge proofs of identity. Journal of Cryptology, 1(2), 77–94.

[766] Sahai, A., & Waters, B. (2014). How to use indistinguishability obfuscation: Deniable encryption, and more. Proceedings of the 46th Annual ACM Symposium on Theory of Computing, 475–484.

[767] Ben-Sasson, E., Chiesa, A., Genkin, D., Tromer, E., & Virza, M. (2013). SNARKs for C: Verifying program executions succinctly and in zero knowledge. Advances in Cryptology–CRYPTO 2013, 90–108.

[768] Meiklejohn, S., & Mercer, R. (2018). Möbius: Trustless tumbling for transaction privacy. Proceedings on Privacy Enhancing Technologies, 2018(2), 105–121.

[769] Gennaro, R., Gentry, C., Parno, B., & Raykova, M. (2013). Quadratic span programs and succinct NIZKs without PCPs. Advances in Cryptology–EUROCRYPT 2013, 626–645.

[770] Bonawitz, K., et al. (2017). Practical secure aggregation for privacy-preserving machine learning. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 1175–1191.

[771] Yang, Z., Zhong, S., & Wright, R. N. (2005). Privacy-preserving classification of customer data without loss of accuracy. Proceedings of the 5th SIAM International Conference on Data Mining, 92–102.

[772] Miers, I., Garman, C., Green, M., & Rubin, A. D. (2013). Zerocoin: Anonymous distributed e-cash from bitcoin. IEEE Symposium on Security and Privacy, 397–411.

[773] Bünz, B., Bootle, J., Boneh, D., Poelstra, A., Wuille, P., & Maxwell, G. (2018). Bulletproofs: Short proofs for confidential transactions and more. 2018 IEEE Symposium on Security and Privacy, 315–334.

[774] Gao, Y., Li, H., Zhang, W., Yang, B., & Shen, J. (2019). A secure and privacy-preserving data aggregation scheme for smart grid. IEEE Transactions on Industrial Informatics, 15(9), 4943–4952.

[775] Qin, J., Li, W., Li, W., & Yu, J. (2020). A privacy-preserving mobile payment protocol based on blockchain. IEEE Access, 8, 181718–181727.

[776] Li, F., Luo, B., & Liu, P. (2010). Secure information aggregation for smart grids using homomorphic encryption. First IEEE International Conference on Smart Grid Communications, 327–332.

[777] Reed, D., & Sporny, M. (2020). Decentralized Identifiers (DIDs) v1.0. W3C Working Draft. Retrieved from https://www.w3.org/TR/did-core/

[778] Sporny, M., Longley, D., & Chadwick, D. (2019). Verifiable Credentials Data Model 1.0. W3C Recommendation. Retrieved from https://www.w3.org/TR/vc-data-model/

[779] Preukschat, A., & Reed, D. (2021). Self-Sovereign Identity: Decentralized Digital Identity and Verifiable Credentials. Manning Publications.

[780] Naik, N., Jenkins, P., & Newell, D. (2019). Digital Identity and Verifiable Credentials: The Key to Decentralized Edge Computing. IEEE Internet Computing, 23(5), 13–22.

[781] Sharma, P. K., Chen, M.-Y., & Park, J. H. (2018). A Software Defined Fog Node Based Distributed Blockchain Cloud Architecture for IoT. IEEE Access, 6, 115–124.

[782] Abebe, M., Chilamkurti, N., & Adane, T. (2020). Decentralized Identity for Internet of Things Based on Blockchain Technology. IEEE Internet of Things Journal, 7(5), 3901–3909.

[783] Yang, Z., Li, W., & Sun, H. (2019). Blockchain-based decentralized energy management platform for residential distributed energy resources in smart grid. IEEE Transactions on Industrial Informatics, 16(3), 1887–1897.

[784] Huh, S., Cho, S., & Kim, S. (2017). Managing IoT devices using blockchain platform. 2017 19th International Conference on Advanced Communication Technology (ICACT), 464–467.

[785] Fan, K., Wang, S., Ren, Y., Li, H., & Yang, Y. (2018). Blockchain-based secure time protection scheme in IoT. IEEE Internet of Things Journal, 6(3), 4671–4679.

[786] Tomaino, N. (2017). A Mechanism Design Approach to Token-Curated Registries. The Control. Retrieved from https://thecontrol.co/a-mechanism-design-approach-to-token-curated-registries-590aadb7082

[787] Asgaonkar, A., & Krishnamachari, B. (2019). Token Curated Registries—A Game Theoretic Approach. Proceedings of the 2019 IEEE International Conference on Blockchain, 173–179.

[788] Zhang, X., & Poslad, S. (2020). Blockchain support for flexible queries with granularity control in decentralized IoT data sharing. Sensors, 20(4), 1074.

[789] Li, J., & Wu, J. (2019). A Token Curated Registry for Decentralized AI Models. IEEE Access, 7, 115966–115975.

[790] Kairouz, P., McMahan, H. B., et al. (2019). Advances and Open Problems in Federated Learning. arXiv preprint arXiv:1912.04977.

[791] Xu, R., Chen, L., & Liu, F. (2019). Blockchain-based decentralized application marketplace for edge computing. IEEE Access, 7, 158022–158035.

[792] Kang, J., Yu, R., Huang, X., Maharjan, S., Zhang, Y., & Hossain, E. (2017). Enabling localized peer-to-peer electricity trading among plug-in hybrid electric vehicles using consortium blockchains. IEEE Transactions on Industrial Informatics, 13(6), 3154–3164.

[793] Yu, W., Liang, F., He, X., Hatcher, W. G., Lu, C., Lin, J., & Yang, X. (2018). A survey on the edge computing for the Internet of Things. IEEE Access, 6, 6900–6919.

[794] Xu, X., Li, J., & Zhang, W. (2020). Edge Computing Resource Allocation Based on Decentralized Finance Models. IEEE Access, 8, 150324–150333.

[795] Schär, F. (2021). Decentralized Finance: On Blockchain- and Smart Contract-Based Financial Markets. Federal Reserve Bank of St. Louis Review, 103(2), 153–174.

[796] Liu, Y., & Zhang, X. (2019). Staking Mechanisms in Blockchain Networks: A Survey. Journal of Blockchain Research, 2(1), 45–59.

[797] Chen, Y., Bellavitis, C., & Chhabra, K. (2020). Blockchain Disruption and Decentralized Finance: The Rise of Decentralized Business Models. Journal of Business Venturing Insights, 13, e00151.

[798] Wang, S., Zhou, Q., & Chen, X. (2020). An Overview of Liquidity Pools in Decentralized Exchanges. IEEE Access, 8, 181749–181757.

[799] Li, T., Li, Y., & Wang, J. (2021). Integrating DeFi into Edge AI: Opportunities and Challenges. IEEE Internet of Things Journal, 8(12), 9816–9825.

[800] Zhao, X., & Sun, Y. (2020). Resource Staking in Edge Computing Networks: A DeFi Approach. IEEE Transactions on Network Science and Engineering, 7(4), 3241–3252.

[801] Kim, H., & Kim, Y. (2021). Lending and Borrowing Computational Resources in Decentralized Edge Networks. Sensors, 21(3), 892.

[802] Singh, A., & Chatterjee, S. (2020). Resource Liquidity Pools for Edge AI Applications Using DeFi. Future Internet, 12(10), 168.

[803] Zhang, L., & Wu, J. (2021). Dynamic Pricing Mechanisms in Decentralized Edge Computing Markets. IEEE Transactions on Services Computing, 14(5), 1376–1385.

[804] Gao, F., & Zhou, Y. (2020). Efficient Resource Utilization in Edge Computing Through DeFi. IEEE Access, 8, 120915–120927.

[805] Nguyen, D. C., Ding, M., Pathirana, P. N., & Seneviratne, A. (2020). Blockchain and AI-based Solutions for Decentralized Edge Computing: A DeFi Perspective. IEEE Wireless Communications, 27(6), 140–146.

[806] Liu, Z., & Li, X. (2019). Reputation Systems and Penalties in DeFi-based Edge Computing. ACM Transactions on Internet Technology, 19(4), 51.

[807] Chen, X., & Zhang, Y. (2021). Adaptive Algorithms for Resource Allocation in Heterogeneous Edge Networks. IEEE Transactions on Network and Service Management, 18(1), 123–136.

[808] Wang, Q., & Duan, Y. (2020). Enhancing Reliability in Edge Computing with Caching and Redundancy. IEEE Transactions on Industrial Informatics, 16(6), 4290–4298.

[809] Patel, V., & Shah, M. (2021). Legal and Privacy Considerations in DeFi-based Edge AI Networks. Journal of Information Security and Applications, 58, 102717.

[810] Li, W., Yang, Y., & Zhang, J. (2020). Collaborative AI Model Training Using DeFi Incentives in Edge Networks. IEEE Internet of Things Journal, 7(7), 6278–6287.

[811] Fan, K., Ren, Y., Wang, Y., & Yang, Y. (2019). Decentralized Data Storage for Edge AI Using DeFi Models. IEEE Transactions on Industrial Informatics, 15(12), 6513–6522.

[812] Chen, M., Hao, Y., & Hwang, K. (2018). Real-Time Data Processing at the Edge Using DeFi-based Resource Allocation. IEEE Network, 32(1), 73–79.

[813] Buterin, V. (2019). An incomplete guide to rollups. Ethereum Foundation Blog. Retrieved from https://vitalik.ca/general/2019/12/31/rollup.html

[814] Gudgeon, L., Perez, D., Harz, D., Livshits, B., & Gervais, A. (2020). SoK: Layer-Two Blockchain Protocols. Proceedings of Financial Cryptography and Data Security 2020, 201–226.

[815] Khalil, R., & Gervais, A. (2017). NOCUST—A Non-Custodial Off-Chain Settlement. IACR Cryptology ePrint Archive, 2017(218).

[816] Dziembowski, S., Faust, S., Hostáková, K., & Pietrzak, K. (2018). General State Channel Networks. Proceedings of the ACM SIGSAC Conference on Computer and Communications Security, 949–966.

[817] McCorry, P., Moffett, D., & Hassan, S. (2019). An Evaluation of State Channels for High-Speed Blockchain Applications. IEEE International Conference on Blockchain, 194–201.

[818] Back, A., Corallo, M., Dashjr, L., Friedenbach, M., Maxwell, G., Miller, A., ... & Wuille, P. (2014). Enabling Blockchain Innovations with Pegged Sidechains. Blockstream. Retrieved from https://blockstream.com/sidechains.pdf

[819] Eberhardt, J., & Tai, S. (2018). On or Off the Blockchain? Insights on Off-Chaining Computation and Data. European Conference on Service-Oriented and Cloud Computing, 3–15.

[820] Kiayias, A., Miller, A., & Zindros, D. (2017). Non-Interactive Proofs of Proof-of-Work. IACR Cryptology ePrint Archive, 2017(963).

[821] Gluchowski, A. (2019). PLONK: Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge. Ethereum Research. Retrieved from https://ethresear.ch/t/plonk-permutations-over-lagrange-bases-for-oecumenical-noninteractive-arguments-of-knowledge/6205

[822] Adler, E., Brainard, K., Boneh, D., & Xu, W. (2021). ASTRA: Expediting State Channels with Optimistic Fair Exchange. arXiv preprint arXiv:2104.07165.

[823] Ben-Sasson, E., Chiesa, A., Garman, C., Green, M., Miers, I., Tromer, E., & Virza, M. (2014). Zerocash: Decentralized Anonymous Payments from Bitcoin. 2014 IEEE Symposium on Security and Privacy, 459–474.

[824] Loopring. (2020). Loopring: zkRollup Protocol for Scalable DEXes. Loopring Foundation. Retrieved from https://loopring.org

[825] Xu, X., Weber, I., & Staples, M. (2019). Architecture for Blockchain Applications. Springer.

[826] Chen, J., Li, Y., Deng, Q., & Yu, L. (2020). Layer 2 Blockchain Scaling Based on State Channels: A Payment System Case Study. IEEE Access, 8, 154660–154670.

[827] Schulte, S., Sigwart, M., Frauenthaler, P., & Gruner, S. (2019). Towards Blockchain Interoperability. Proceedings of the IEEE International Conference on Internet of Things, 180–189.

[828] Jourenko, M., Moreno-Sanchez, P., Kate, A., & Maffei, M. (2019). Pay-per-Last-N-Shares Mining Pools with Verifiable Payouts. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 985–1002.

[829] Kim, J., Kim, M., & Lee, S. (2020). A Blockchain-Based Data Management System for Smart Communities. IEEE Access, 8, 220250–220261.

[830] Zhang, Y., Kasahara, S., Shen, Y., Jiang, X., & Wan, J. (2018). Smart Contract-Based Access Control for the Internet of Things. IEEE Internet of Things Journal, 6(2), 1594–1605.

[831] Konečný, J., McMahan, H. B., Yu, F. X., et al. (2016). Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492. Retrieved from https://arxiv.org/abs/1610.05492

[832] Li, T., Sahu, A. K., Talwalkar, A., & Smith, V. (2020). Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, 37(3), 50–60. https://doi.org/10.1109/MSP.2020.2975749

[833] Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system. Retrieved from https://bitcoin.org/bitcoin.pdf

[834] Zheng, Z., Xie, S., Dai, H., Chen, X., & Wang, H. (2017). An overview of blockchain technology: Architecture, consensus, and future trends. In 2017 IEEE International Congress on Big Data (pp. 557–564). IEEE. https://doi.org/10.1109/BigDataCongress.2017.85

[835] Lu, Y. (2019). Blockchain and federated learning for collaborative intrusion detection in IoT: A survey. Wireless Communications and Mobile Computing, 2019, 1–10. https://doi.org/10.1155/2019/1037595

[836] Zhang, C., & Zhu, L. (2021). Blockchain-based federated learning: Methods, applications, and open challenges. arXiv preprint arXiv:2101.07583. Retrieved from https://arxiv.org/abs/2101.07583

[837] Kim, M., Park, J., Bennis, M., & Kim, S.-L. (2019). On-device federated learning via blockchain and its latency analysis. In 2019 IEEE International Conference on Communications (ICC) (pp. 1–7). IEEE. https://doi.org/10.1109/ICC.2019.8761315

[838] Kang, J., Yu, R., Huang, X., Maharjan, S., Zhang, Y., & Hossain, E. (2018). Blockchain for secure and efficient data sharing in vehicular edge computing and networks. IEEE Communications Magazine, 56(8), 62–68. https://doi.org/10.1109/MCOM.2018.1700879

[839] Szabo, N. (1997). Formalizing and securing relationships on public networks. First Monday, 2(9). https://doi.org/10.5210/fm.v2i9.548

[840] Dai, H.-N., Zheng, Z., & Zhang, Y. (2019). Blockchain for Internet of Things: A survey. IEEE Internet of Things Journal, 6(5), 8076–8094. https://doi.org/10.1109/JIOT.2019.2920987

[841] Kang, J., Xiong, Z., Niyato, D., et al. (2020). Reliable federated learning for mobile networks. IEEE Wireless Communications, 27(2), 72–80. https://doi.org/10.1109/MWC.001.1900331

[842] Hua, X., Liu, L., Yang, T., Zhao, N., & Sun, Z. (2020). Blockchain-based federated learning for intelligent control in heavy haul railway. IEEE Access, 8, 176830–176839. https://doi.org/10.1109/ACCESS.2020.3026346

[843] Huang, T., Gao, L., Qi, L., & Wang, W. (2020). A blockchain-based scheme for privacy-preserving and secure sharing of medical data. Computers & Security, 99, 102010. https://doi.org/10.1016/j.cose.2020.102010

[844] Voigt, P., & Von dem Bussche, A. (2017). The EU General Data Protection Regulation (GDPR): A Practical Guide (1st ed.). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-57959-7

[845] Zhan, Y., Liu, Y., Gong, Y., et al. (2020). A learning-based incentive mechanism for federated learning. IEEE Internet of Things Journal, 7(7), 6360–6368. https://doi.org/10.1109/JIOT.2020.2972758

[846] Shayan, M., Fung, C., Mohammadi, M., & Ngai, E. C.-H. (2020). Biscotti: A blockchain system for private and secure federated learning. IEEE Transactions on Parallel and Distributed Systems, 32(7), 1513–1525. https://doi.org/10.1109/TPDS.2020.3044639

[847] Bao, X., & Li, F. (2020). FLChain: A blockchain for auditable federated learning with trust and incentive. In 2020 5th International Conference on Big Data and Computing (pp. 51–56). ACM. https://doi.org/10.1145/3404649.3404659

[848] FLoCK. (n.d.). Federated Learning on Blockchain. Retrieved October 2023, from https://www.flock.io/

[849] Rieke, N., Hancox, J., Li, W., et al. (2020). The future of digital health with federated learning. NPJ Digital Medicine, 3(1), 119. https://doi.org/10.1038/s41746-020-00323-1

[850] Yang, Q., Liu, Y., Cheng, Y., Kang, Y., Chen, T., & Yu, H. (2019). Federated learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 13(3), 1–207. https://doi.org/10.2200/S00960ED1V01Y201910AIM043

[851] Apple. (2021). iOS 15 Brings Powerful New Features to Stay Connected, Focus, Explore, and More. Apple Newsroom. Retrieved from https://www.apple.com/newsroom/2021/06/ios-15-brings-powerful-new-features-to-stay-connected-focus-explore-and-more

[852] NVIDIA. (2023). NVIDIA DLSS: AI-Accelerated Upscaling. NVIDIA Developer. Retrieved from https://developer.nvidia.com/dlss

[853] Tesla. (2019). Tesla Autonomy Day. Retrieved from https://www.tesla.com/videos/tesla-autonomy-day

[854] Fan, A., et al. (2021). Beyond English-Centric Multilingual Machine Translation. Journal of Machine Learning Research, 22(107), 1-48. https://jmlr.org/papers/v22/20-1307.html