With advances in 5G and IoT devices, industries are rapidly adopting artificial intelligence (AI) techniques to improve classification- and prediction-based services. However, the use of AI also raises privacy and security concerns, since the underlying data can be misused or leaked. Private AI was recently coined to address the data security issue by combining AI with encryption techniques, but existing studies have shown that model inversion attacks can reverse-engineer images from model parameters. In this regard, we propose a federated learning and encryption-based private (FLEP) AI framework that provides two-tier security for data and model parameters in an IIoT environment. We propose a three-layer encryption method for data security and outline a hypothetical method for securing the model parameters. Experimental results show that the proposed method achieves better encryption quality at the expense of slightly increased execution time. We also highlight several open issues and challenges regarding the realization of the FLEP AI framework.
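For readers who want a concrete picture of the layered idea, the sketch below chains three invertible transforms over a byte string. The abstract does not specify the actual three layers, so the substitution, permutation, and keystream stages here are hypothetical stand-ins that only illustrate the structure of a three-layer scheme, not the FLEP construction itself.

```python
import hashlib
import random

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a key via SHA-256 in counter mode."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def encrypt_three_layer(data: bytes, key: bytes) -> bytes:
    # Layer 1: byte-wise substitution through a key-seeded S-box.
    rng = random.Random(hashlib.sha256(key + b"sbox").digest())
    sbox = list(range(256))
    rng.shuffle(sbox)
    stage1 = bytes(sbox[b] for b in data)

    # Layer 2: key-seeded permutation of byte positions.
    rng = random.Random(hashlib.sha256(key + b"perm").digest())
    order = list(range(len(stage1)))
    rng.shuffle(order)
    stage2 = bytes(stage1[i] for i in order)

    # Layer 3: XOR with the SHA-256-derived keystream.
    ks = _keystream(key, len(stage2))
    return bytes(a ^ b for a, b in zip(stage2, ks))

print(encrypt_three_layer(b"sensor reading: 42.7 C", b"shared-iiot-key").hex())
```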


Related Content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. MODELS attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum where participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community further opportunities to advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability. Official website: http://www.modelsconference.org/

The broad application of artificial intelligence techniques, ranging from self-driving vehicles to advanced medical diagnostics, affords many benefits. Federated learning is a new breed of artificial intelligence, offering techniques to help bridge the gap between personal data protection and data utilization for research and commercial deployment, especially in use cases where security and privacy are the key concerns. Here, we present OpenFed, an open-source software framework that simultaneously addresses the demands for data protection and utilization. In practice, OpenFed enables state-of-the-art model development in low-trust environments despite limited local data availability, laying the groundwork for sustainable collaborative model development and commercial deployment by alleviating concerns over asset protection. In addition, OpenFed provides an end-to-end toolkit to facilitate federated learning algorithm development as well as several benchmarks for fair performance comparison under diverse computing paradigms and configurations.
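The sketch below shows the basic federated-averaging loop that frameworks in this space orchestrate: clients train on data that never leaves their silo, and only model parameters are aggregated. It is written against plain NumPy as a hedged illustration of the pattern; it does not use or reproduce the OpenFed API, and the linear model and four synthetic clients are assumptions made for brevity.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local linear-regression gradient steps on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):                        # four silos; raw data never leaves them
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                       # federated rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)  # server aggregates parameters only

print("learned weights:", global_w)
```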


Deep learning-based time series models are extensively utilized in the engineering and manufacturing industries for process control and optimization, asset monitoring, and diagnostic and predictive maintenance. These models have greatly improved prediction of the remaining useful life (RUL) of industrial equipment but are inherently vulnerable to adversarial attacks. These attacks can be easily exploited and can lead to catastrophic failure of critical industrial equipment. In general, a different adversarial perturbation is computed for each instance of the input data. This is, however, difficult for an attacker to achieve in real time because of the higher computational requirements and the lack of uninterrupted access to the input data. Hence, we present the concept of the universal adversarial perturbation, a special imperceptible noise that fools regression-based RUL prediction models. Attackers can easily utilize universal adversarial perturbations for real-time attacks, since continuous access to the input data and repeated computation of adversarial perturbations are not prerequisites. We evaluate the effect of universal adversarial attacks using the NASA turbofan engine dataset. We show that adding a universal adversarial perturbation to any instance of the input data increases the error in the output predicted by the model. To the best of our knowledge, we are the first to study the effect of universal adversarial perturbations on time series regression models. We further demonstrate the effect of varying the strength of the perturbation on RUL prediction models and find that model accuracy decreases as the perturbation strength of the universal adversarial attack increases. We also show that universal adversarial perturbations can be transferred across different models.
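To make the "universal" aspect concrete, the following hedged sketch builds a single perturbation vector and adds that same vector to every test instance of a toy regression model, then compares clean and perturbed error. The linear RUL surrogate and the simple sign-of-the-weights construction are illustrative assumptions, not the authors' algorithm or the turbofan dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                    # stand-in sensor features
w_true = rng.normal(size=10)
y = X @ w_true + rng.normal(scale=0.3, size=200)  # stand-in RUL targets

w_model, *_ = np.linalg.lstsq(X, y, rcond=None)   # fitted RUL surrogate model

def rmse(X_in):
    return np.sqrt(np.mean((X_in @ w_model - y) ** 2))

# One instance-independent perturbation: push along the weight signs so that
# every prediction is biased by the same offset, within an L-infinity budget.
eps = 0.3
delta = eps * np.sign(w_model)                    # universal perturbation

print("clean RMSE     :", round(rmse(X), 3))
print("perturbed RMSE :", round(rmse(X + delta), 3))  # same delta on every row
```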


After industrial hemp was legalized as an agricultural commodity by the 2018 U.S. Farm Bill, its production moved from limited pilot programs to a regulated agricultural production system, and the market has kept growing since then. However, the Industrial Hemp Supply Chain (IHSC) faces several critical challenges, including high complexity and variability, data tampering, and the lack of an immutable information tracking system. In this paper, we develop a blockchain-enabled internet-of-things (IoT) platform for IHSC to support process tracking, scalability, interoperability, and risk management. We create a two-layer blockchain with a proof-of-authority-based smart contract, which leverages local authorities together with state/federal regulators to ensure and accelerate quality control verification and regulatory compliance. We then develop a user-friendly mobile app so that each participant can use a smartphone to collect and upload data to the cloud in real time, and further share process verification and tracking information through the blockchain network. Our study indicates that the proposed platform can support interoperability, improve the efficiency of quality control verification, and ensure the safety of the regulated IHSC.
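As a minimal illustration of the tamper-evident tracking idea, the snippet below chains supply-chain records through SHA-256 hashes so that editing any earlier record invalidates every later block. It is only a sketch: the platform's two-layer design, proof-of-authority smart contracts, and mobile-app upload path are not modeled, and the record fields shown are hypothetical.

```python
import hashlib
import json
import time

def make_block(prev_hash: str, record: dict) -> dict:
    """Append-only block: its hash covers the record and the previous hash."""
    body = {"prev_hash": prev_hash, "timestamp": time.time(), "record": record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

chain = [make_block("0" * 64, {"event": "genesis"})]
for event in ({"event": "harvest", "lot": "A1", "thc_pct": 0.21},
              {"event": "lab_test", "lot": "A1", "pass": True}):
    chain.append(make_block(chain[-1]["hash"], event))

def verify(chain):
    """Any later edit to an earlier record breaks a downstream hash link."""
    for prev, blk in zip(chain, chain[1:]):
        body = {k: blk[k] for k in ("prev_hash", "timestamp", "record")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if blk["prev_hash"] != prev["hash"] or blk["hash"] != recomputed:
            return False
    return True

print("chain valid:", verify(chain))
```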


Clinical trials are a multi-billion-dollar industry. One of the biggest challenges facing the clinical trial research community is satisfying Part 11 of Title 21 of the Code of Federal Regulations and ISO 27789. These controls impose audit requirements that guarantee the reliability of the data contained in electronic records. Context-aware smart devices and wearable IoT devices have become increasingly common in clinical trials. Electronic Data Capture (EDC) and Clinical Data Management Systems (CDMS) do not currently address the new challenges introduced by the use of these devices. The healthcare digital threat landscape is continually evolving, and the prevalence of sensor fusion and wearable devices compounds the growing attack surface. We propose Scrybe, a permissioned blockchain, to store proof of clinical trial data provenance. We illustrate how Scrybe addresses each control and discuss the limitations of Ethereum-based blockchains. Finally, we provide a proof-of-concept integration with REDCap to show tamper resistance.


Critical infrastructures (CI) and industrial organizations are aggressively moving towards integrating elements of modern Information Technology (IT) into their monolithic Operational Technology (OT) architectures. Yet, as OT systems progressively become more interconnected, they have quietly turned into alluring targets for diverse groups of adversaries. Meanwhile, the inherent complexity of these systems, along with their aging nature, prevents defenders from fully applying contemporary security controls in a timely manner. Indeed, the combination of these hindering factors has led to some of the most severe cybersecurity incidents of the past years. This work contributes a full-fledged and up-to-date survey of the most prominent threats against Industrial Control Systems (ICS), along with the communication protocols and devices adopted in these environments. Our study highlights that threats against CI follow an upward spiral due to the mushrooming of commodity tools and techniques that can facilitate either the early or late stages of attacks. Furthermore, our survey exposes that existing vulnerabilities in the design and implementation of several OT-specific network protocols may easily grant adversaries the ability to decisively impact physical processes. We provide a categorization of such threats and the corresponding vulnerabilities based on various criteria. As far as we are aware, this is the first time an exhaustive and detailed survey of this kind has been attempted.


As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks against robustness and the corresponding defenses; 3) inference attacks against privacy and the corresponding defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by the various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.


Deep learning algorithms have achieved state-of-the-art performance in image classification and are used even in security-critical applications, such as biometric recognition systems and self-driving cars. However, recent works have shown that these algorithms, which can even surpass human capabilities, are vulnerable to adversarial examples. In computer vision, adversarial examples are images containing subtle perturbations generated by malicious optimization algorithms in order to fool classifiers. As an attempt to mitigate these vulnerabilities, numerous countermeasures have been constantly proposed in the literature. Nevertheless, devising an efficient defense mechanism has proven to be a difficult task, since many approaches have already been shown to be ineffective against adaptive attackers. Thus, this self-contained paper aims to provide readers with a review of the latest research progress on adversarial machine learning in image classification, from a defender's perspective. Novel taxonomies for categorizing adversarial attacks and defenses are introduced, and discussions about the existence of adversarial examples are provided. Further, in contrast to existing surveys, relevant guidance is given that researchers should take into consideration when devising and evaluating defenses. Finally, based on the reviewed literature, some promising paths for future research are discussed.
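For readers new to the area, the following sketch shows the canonical fast-gradient-sign step that underlies many of the attacks such surveys categorize: the input is nudged along the sign of the loss gradient. The tiny logistic-regression "classifier" over synthetic pixels is an assumption made so the example stays self-contained; it stands in for the deep networks discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)                     # weights of a toy linear classifier
x = rng.uniform(0, 1, size=64)              # a "clean" 8x8 image, true label y = 1
y = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x_in):
    p = sigmoid(w @ x_in)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

grad_x = (sigmoid(w @ x) - y) * w           # d(cross-entropy)/dx for this model
eps = 0.05
x_adv = np.clip(x + eps * np.sign(grad_x), 0, 1)   # FGSM step, kept in [0, 1]

print("loss on clean input      :", float(loss(x)))
print("loss on adversarial input:", float(loss(x_adv)))
```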


Training machine learning models on sensitive user data has raised increasing privacy concerns in many areas. Federated learning is a popular approach for privacy protection that collects local gradient information instead of the real data. One way to achieve a strict privacy guarantee is to apply local differential privacy to federated learning. However, previous works do not give a practical solution, for three reasons. First, the noisy data is close to its original value with high probability, increasing the risk of information exposure. Second, a large variance is introduced into the estimated average, causing poor accuracy. Last, the privacy budget explodes due to the high dimensionality of the weights in deep learning models. In this paper, we propose a novel design of a local differential privacy mechanism for federated learning that addresses the above issues. It is capable of making the data more distinct from its original value and introducing lower variance. Moreover, the proposed mechanism bypasses the curse of dimensionality by splitting and shuffling model updates. A series of empirical evaluations on three commonly used datasets, MNIST, Fashion-MNIST, and CIFAR-10, demonstrates that our solution can achieve superior deep learning performance while providing a strong privacy guarantee at the same time.
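The sketch below illustrates the split-and-shuffle intuition with a generic Laplace-based local randomizer: each client clips its update, perturbs every coordinate independently, emits one anonymous (index, value) report per coordinate, and the shuffled reports are averaged by the server. The specific mechanism proposed in the paper is not reproduced; the clipping bound, budget, and noise distribution here are illustrative assumptions.

```python
import random
import numpy as np

rng = np.random.default_rng(2)
n_clients, dim = 100, 8
updates = rng.normal(size=(n_clients, dim))   # clients' raw model updates
clip, eps = 1.0, 1.0                          # per-coordinate bound and budget

reports = []
for cid in range(n_clients):
    u = np.clip(updates[cid], -clip, clip)
    for j in range(dim):                      # split: one report per coordinate
        noisy = u[j] + rng.laplace(scale=2 * clip / eps)
        reports.append((j, noisy))            # client identity is not attached

random.shuffle(reports)                       # shuffling breaks linkability

# Server-side estimate of the average update, coordinate by coordinate.
sums, counts = np.zeros(dim), np.zeros(dim)
for j, value in reports:
    sums[j] += value
    counts[j] += 1

print("true mean    :", updates.mean(axis=0).round(2))
print("LDP estimate :", (sums / counts).round(2))
```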


We detail a new framework for privacy-preserving deep learning and discuss its assets. The framework puts a premium on ownership and secure processing of data and introduces a valuable representation based on chains of commands and tensors. This abstraction allows one to implement complex privacy-preserving constructs such as federated learning, secure multiparty computation, and differential privacy while still exposing a familiar deep learning API to the end user. We report early results on the Boston Housing and Pima Indian Diabetes datasets. While the privacy features, apart from differential privacy, do not impact the prediction accuracy, the current implementation of the framework introduces a significant performance overhead, which will be addressed at a later stage of development. We believe this work is an important milestone, introducing the first reliable, general framework for privacy-preserving deep learning.


In this study, we investigate the limits of the current state-of-the-art AI system for detecting buffer overflows and compare it with current static analysis tools. To do so, we developed a code generator, s-bAbI, capable of producing an arbitrarily large number of code samples of controlled complexity. We found that the static analysis engines we examined have good precision but poor recall on this dataset, except for a sound static analyzer that has both good precision and recall. We found that the state-of-the-art AI system, a memory network modeled after Choi et al. [1], can achieve performance similar to the static analysis engines, but requires an exhaustive amount of training data in order to do so. Our work points towards future approaches that may solve these problems, namely, using representations of code that can capture appropriate scope information and using deep learning methods that are able to perform arithmetic operations.
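In the spirit of such a generator, the toy sketch below emits tiny labeled C snippets whose single buffer write is either in bounds or out of bounds. The template and labeling rule are illustrative guesses meant to show how samples of controlled difficulty can be produced mechanically; they are not the actual s-bAbI generator.

```python
import random

TEMPLATE = """void f(void) {{
    char buf[{size}];
    int i = {index};
    buf[i] = 'x';   /* ground truth: {label} */
}}
"""

def generate_sample(rng: random.Random) -> tuple[str, str]:
    """Return one C snippet plus its safe/unsafe label."""
    size = rng.randint(4, 16)
    if rng.random() < 0.5:
        index, label = rng.randint(0, size - 1), "safe"       # in-bounds write
    else:
        index, label = rng.randint(size, size + 8), "unsafe"  # buffer overflow
    return TEMPLATE.format(size=size, index=index, label=label), label

rng = random.Random(0)
snippet, label = generate_sample(rng)
print(label)
print(snippet)
```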
