Similar Documents
Found 20 similar documents (search time: 296 ms)
1.
The vulnerability of deep neural network (DNN)-based systems makes them susceptible to adversarial perturbations that can cause classification tasks to fail. In this work, we propose an adversarial attack model that uses the Artificial Bee Colony (ABC) algorithm to generate adversarial samples without gradient evaluation or the training of a substitute model, which can further improve the chance of task failure caused by adversarial perturbation. In untargeted attacks, the proposed method obtained success rates of 100%, 98.6%, and 90.0% on the MNIST, CIFAR-10, and ImageNet datasets, respectively. The experimental results show that the proposed ABCAttack can not only obtain a high attack success rate with fewer queries in the black-box setting, but can also break some existing defenses to a large extent, and is not limited by model structure or size, which provides further research directions for deep learning evasion attacks and defenses.
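As a rough illustration of how a bee-colony search can drive a black-box attack using only the model's output probabilities, the sketch below implements a generic ABC loop in Python. The `query` function, the L-infinity budget `eps`, and all hyperparameters are placeholder assumptions, not the paper's ABCAttack configuration.

```python
import numpy as np

def abc_attack(x, query, true_label, eps=0.1, n_bees=20, iters=200, limit=10, seed=0):
    """ABC-style black-box search for an untargeted adversarial example.
    `query(img)` is assumed to return class probabilities only; no
    gradients are used. Fitness is the true-class probability (lower is
    better for the attacker).
    """
    rng = np.random.default_rng(seed)
    clip = lambda d: np.clip(x + np.clip(d, -eps, eps), 0.0, 1.0)
    food = rng.uniform(-eps, eps, size=(n_bees,) + x.shape)   # candidate perturbations
    fit = np.array([query(clip(d))[true_label] for d in food])
    trials = np.zeros(n_bees)
    for _ in range(iters):
        for i in range(n_bees):                               # employed-bee phase
            j = rng.integers(n_bees)
            phi = rng.uniform(-1.0, 1.0, size=x.shape)
            cand = food[i] + phi * (food[i] - food[j])
            f = query(clip(cand))[true_label]
            if f < fit[i]:                                    # keep the better source
                food[i], fit[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        for i in np.where(trials > limit)[0]:                 # scout phase: abandon
            food[i] = rng.uniform(-eps, eps, size=x.shape)    # stagnant sources
            fit[i] = query(clip(food[i]))[true_label]
            trials[i] = 0
        best = clip(food[fit.argmin()])
        if query(best).argmax() != true_label:
            return best                                       # attack succeeded
    return None
```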

2.
Adversarial examples are one of the most intriguing topics in modern deep learning. Imperceptible perturbations to the input can fool robust models. In relation to this problem, attack and defense methods are being developed almost daily. In parallel, efforts are being made to simply point out when an input image is an adversarial example. This can help prevent potential issues, as the failure cases are easily recognizable by humans. The proposal in this work is to study how chaos theory methods can help distinguish adversarial examples from regular images. Our work is based on the assumption that deep networks behave as chaotic systems, and adversarial examples are the main manifestation of this (in the sense that a slight input variation produces a totally different output). In our experiments, we show that the Lyapunov exponents (an established measure of chaoticity), which have recently been proposed for the classification of adversarial examples, are not robust to image processing transformations that alter image entropy. Furthermore, we show that entropy can complement Lyapunov exponents in such a way that the discriminating power is significantly enhanced. The proposed method achieves 65% to 100% accuracy in detecting adversarial examples across a wide range of attacks (for example: CW, PGD, Spatial, HopSkip) on the MNIST dataset, with similar results when entropy-changing image processing methods (such as equalization, speckle, and Gaussian noise) are applied. This is also corroborated on two other datasets, Fashion-MNIST and CIFAR-10. These results indicate that classifiers can enhance their robustness against the adversarial phenomenon, being applicable in a wide variety of conditions that potentially match real-world cases as well as other threatening scenarios.
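A hedged sketch of the feature combination described above: an image-entropy feature concatenated with Lyapunov-exponent features, fed to any off-the-shelf detector. How the exponents are estimated from the network is the paper's contribution and is left as a stub (`lyapunov_fn`) here; the usage lines are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def image_entropy(img, bins=256):
    """Shannon entropy of the intensity histogram (img scaled to [0, 1])."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def detector_features(img, lyapunov_fn):
    """Concatenate Lyapunov-exponent features with the entropy feature.
    `lyapunov_fn` stands in for the paper's exponent-estimation pipeline.
    """
    return np.concatenate([np.atleast_1d(lyapunov_fn(img)), [image_entropy(img)]])

# Hypothetical usage, given clean/adversarial images and a lyapunov_fn:
# X = np.stack([detector_features(im, lyapunov_fn) for im in images])
# y = np.array(is_adversarial, dtype=int)
# detector = LogisticRegression().fit(X, y)
```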

3.
Physical Communication, 2008, 1(2): 134–145
Applications for wireless sensor networks require widespread, highly reliable communications even in the face of adversarial influences. Maintaining connectivity and secure communications between entities are vital networking properties for ensuring the successful and accurate completion of desired sensing tasks. We examine the required communication range for nodes in a wireless sensor network with respect to several parameters. Network properties such as key predistribution schemes and node compromise attacks are modelled with several network parameters and studied in terms of how they influence global network connectivity. These networks are physically vulnerable to malicious behavior by way of node compromise attacks that may affect global connectivity. We introduce a metric that determines the resilience of a network employing a key predistribution scheme with respect to node compromise attacks. In this work, we provide the first study of global network connectivity and its relationship to node compromise attacks. Existing work considers the relationship between the probability of node compromise and the probability of link compromise, and the relationship between the probability of secure link establishment and overall network connectivity, for the Erdős network model. Here, we present novel work which combines these two relationships to study the relationship between node compromise attacks and global network connectivity. Our analysis is performed with regard to large-scale networks; however, we provide simulation results for both large-scale and small-scale networks. First, we derive a single expression to determine the required communication radius for wireless sensor networks that includes the effects of key predistribution schemes. From this, we derive an expression for the required communication range after an adversary has compromised a fraction of the nodes in the network. The required communication range represents the resource usage of nodes in a network to cope with key distribution schemes and node compromise attacks. We introduce the Resiliency-Connectivity metric, which measures the resilience of a network in expending its resources to provide global connectivity in adverse situations.
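To make the flavor of such an expression concrete, the sketch below combines the standard random-geometric-graph connectivity threshold with a link-thinning probability that folds in key predistribution and node compromise. This is a textbook approximation offered as an assumption, not the paper's derived formula.

```python
import numpy as np

def required_radius(n, p_secure):
    """Approximate communication radius for connectivity of n nodes
    placed uniformly in a unit square, when a physical link is usable
    only with probability p_secure (the chance that two neighbors share
    an uncompromised key). Uses the standard geometric-graph threshold
    pi * r^2 * n * p_secure = ln(n); the paper's exact expression differs.
    """
    return np.sqrt(np.log(n) / (np.pi * n * p_secure))

# Halving the fraction of secure links (e.g., after a node compromise
# attack) increases the required radius by a factor of sqrt(2):
print(required_radius(1000, 1.0))   # ~0.047
print(required_radius(1000, 0.5))   # ~0.066
```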

4.
With the recent development of Machine Learning as a Service (MLaaS), various privacy concerns have been raised. Having access to the user's data, an adversary can design attacks with different objectives, namely reconstruction or attribute inference attacks. In this paper, we propose two different training frameworks for an image classification task that preserve user data privacy against the two aforementioned attacks. In both frameworks, an encoder is trained with a contrastive loss, providing a superior utility-privacy trade-off. In the reconstruction attack scenario, a supervised contrastive loss is employed to provide maximal discrimination for the targeted classification task. The encoded features are further perturbed using the obfuscator module to remove all redundant information. Moreover, the obfuscator module is jointly trained with a classifier to minimize the correlation between the private feature representation and the original data while retaining the model utility for classification. For the attribute inference attack, we aim to provide a representation of the data that is independent of the sensitive attribute. Therefore, the encoder is trained with supervised and private contrastive losses. Furthermore, an obfuscator module is trained in an adversarial manner to preserve the privacy of sensitive attributes while maintaining the classification performance on the target attribute. The reported results on the CelebA dataset validate the effectiveness of the proposed frameworks.
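For reference, a minimal PyTorch sketch of the supervised contrastive loss that the encoder training builds on (in the style of Khosla et al.); the obfuscator module and the private contrastive term are not shown, and the temperature value is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss: embeddings with the same label
    attract, all others repel (normalized-temperature cross entropy
    averaged over the positives of each anchor).
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.T / tau
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)          # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = ((labels[:, None] == labels[None, :]) & ~self_mask).float()
    per_anchor = (log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()

z = torch.randn(8, 32)                    # a batch of encoder outputs
labels = torch.randint(0, 2, (8,))        # target-class labels
print(supervised_contrastive_loss(z, labels))
```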

5.
Artificial neural networks have become the go-to solution for computer vision tasks, including problems in the security domain. One such example comes in the form of re-identification, where deep learning can be part of the surveillance pipeline. This use case necessitates considering an adversarial setting, and neural networks have been shown to be vulnerable to a range of attacks. In this paper, preprocessing defences against adversarial attacks are evaluated, including a block-matching convolutional neural network for image denoising used as an adversarial defence. The benefit of using preprocessing defences is that they do not require retraining the classifier, which, in computer vision problems, is a computationally heavy task. The defences are tested in a realistic scenario: a pre-trained, widely available neural network architecture adapted to a specific task with the use of transfer learning. Multiple preprocessing pipelines are tested, and the results are promising.
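The sketch below shows where such a preprocessing stage sits: the input is mildly denoised and re-encoded before reaching the unchanged classifier. Gaussian blur plus JPEG recompression stand in for the paper's stronger pipelines (such as block-matching CNN denoising); the radius and quality settings are illustrative assumptions.

```python
import io
from PIL import Image, ImageFilter

def preprocess_defence(img, blur_radius=1.0, jpeg_quality=75):
    """Mild denoising plus JPEG re-encoding applied before the
    (unmodified) classifier, to wash out adversarial perturbations.
    """
    img = img.convert("RGB").filter(ImageFilter.GaussianBlur(blur_radius))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return Image.open(buf)

# Hypothetical use in a re-identification pipeline:
# logits = classifier(to_tensor(preprocess_defence(suspect_image)))
```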

6.
6G – sixth generation – is the latest cellular technology, currently under development for wireless communication systems. In recent years, machine learning (ML) algorithms have been applied widely in various fields, such as healthcare, transportation, energy, autonomous cars, and many more. These algorithms have also been used in communication technologies to improve system performance in terms of frequency spectrum usage, latency, and security. With the rapid development of ML techniques, especially deep learning (DL), it is critical to consider security when applying these algorithms. While ML algorithms offer significant advantages for 6G networks, security concerns about artificial intelligence (AI) models have so far typically been ignored by the scientific community. However, security is a vital aspect of AI algorithms because attackers can poison the AI model itself. This paper proposes a mitigation method, based on adversarial training, for adversarial attacks against 6G ML models for millimeter-wave (mmWave) beam prediction. The main idea behind generating adversarial attacks against ML models is to produce faulty results by manipulating trained DL models for 6G applications for mmWave beam prediction. We also present the performance of the proposed adversarial learning mitigation method for 6G security in the mmWave beam prediction application under a fast gradient sign method (FGSM) attack. The results show that the mean squared error (i.e., the prediction accuracy) of the defended model under attack is very close to that of the undefended model without attack.
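A minimal PyTorch sketch of the two ingredients named above: an FGSM perturbation for a regression model and one adversarial-training step that mixes clean and perturbed batches. The tiny stand-in model, the synthetic data, `eps`, and the 50/50 loss mix are illustrative assumptions, not the paper's recipe.

```python
import torch

def fgsm(model, x, y, eps):
    """FGSM perturbation for a regression model (MSE loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    torch.nn.functional.mse_loss(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, opt, x, y, eps=0.05):
    """One training step on an even mix of clean and FGSM inputs."""
    x_adv = fgsm(model, x, y, eps)
    opt.zero_grad()
    loss = 0.5 * (torch.nn.functional.mse_loss(model(x), y) +
                  torch.nn.functional.mse_loss(model(x_adv), y))
    loss.backward()
    opt.step()
    return loss.item()

# Synthetic stand-ins for the beam-prediction setting (not the paper's data):
model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(16, 64), torch.randn(16, 8)
print(adversarial_training_step(model, opt, x, y))
```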

7.
In this paper we explore the physical interpretation of statistical data collected from complex black-box systems. Given the output statistics of a black-box system, and considering a class of relevant Markov dynamics which are physically meaningful, we reverse-engineer the Markov dynamics to obtain an equilibrium distribution that coincides with the output statistics observed. This reverse-engineering scheme provides us with a conceptual physical interpretation of the black-box system investigated. Five specific reverse-engineering methodologies are developed, based on the following dynamics: Langevin, geometric Langevin, diffusion, growth-collapse, and decay-surge. In turn, these methodologies yield physical interpretations of the black-box system in terms of conceptual intrinsic forces, temperatures, and instabilities. The application of these methodologies is exemplified in the context of the distribution of wealth and income in human societies, which are outputs of the complex black-box system called “the economy”.
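For the Langevin case, the reverse-engineering idea has a compact closed form: a Langevin dynamic with drift force F(x) and temperature T equilibrates to p(x) proportional to exp of the integral of F/T, so F(x) = T · d/dx ln p(x). A small numerical sketch of this (my own illustration, not the paper's code):

```python
import numpy as np

def langevin_force(log_p, x, temperature=1.0, h=1e-5):
    """Drift force of a Langevin dynamic whose equilibrium density is
    p(x): F(x) = T * d/dx ln p(x), via a central finite difference.
    """
    return temperature * (log_p(x + h) - log_p(x - h)) / (2.0 * h)

# Example: an exponential wealth distribution p(x) ~ exp(-x/m) is the
# equilibrium of a constant restoring force F = -T/m.
m = 2.0
print(langevin_force(lambda x: -x / m, x=1.0))   # ~ -0.5 for T = 1
```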

8.
We present the next step in an ongoing research program to allow for the black-box computation of the so-called finite-genus solutions of integrable differential equations. This next step consists of the black-box computation of the Abel map from a Riemann surface to its Jacobian. Using a plane algebraic curve representation of the Riemann surface, we provide an algorithm for the numerical computation of this Abel map. Since our plane algebraic curves are of arbitrary degree and may have arbitrary singularities, the Abel map of any connected compact Riemann surface may be obtained in this way. This generality is necessary in order for these algorithms to be relevant for the computation of the finite-genus solutions of any integrable equation.  相似文献   

9.
The log messages generated in a system reflect the state of the system at all times. Autonomous detection of anomalies in log messages can help operators find anomalies in time and provides a basis for analyzing their causes. First, this paper proposes a log sequence anomaly detection method based on contrastive adversarial training and dual feature extraction. This method uses BERT (Bidirectional Encoder Representations from Transformers) and a VAE (Variational Auto-Encoder) to extract the semantic features and statistical features of the log sequence, respectively; the dual features are combined to perform anomaly detection on the log sequence, and a novel contrastive adversarial training method is used to train the model. In addition, this paper introduces the method for obtaining statistical features of a log sequence and the method for combining semantic features with statistical features. Furthermore, the specific process of contrastive adversarial training is described. Finally, an experimental comparison is carried out, and the results show that the proposed method outperforms the compared log sequence anomaly detection methods.
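As a toy illustration of the dual-feature idea, the sketch below concatenates a semantic embedding with simple statistics of the parsed log-template sequence. The statistical features chosen here (and the random stand-in embedding) are assumptions; the paper uses a VAE-based extractor and its own feature set.

```python
import numpy as np

def statistical_features(log_sequence):
    """Simple statistics of a log-template sequence: length, number of
    distinct templates, and the max/mean template frequency.
    """
    templates, counts = np.unique(log_sequence, return_counts=True)
    return np.array([len(log_sequence), len(templates),
                     counts.max(), counts.mean()], dtype=float)

def dual_features(semantic_vec, log_sequence):
    # Combine a BERT-style semantic embedding with statistical features.
    return np.concatenate([semantic_vec, statistical_features(log_sequence)])

seq = ["open", "read", "read", "timeout", "read"]   # parsed log templates
emb = np.random.randn(768)                          # stand-in for a BERT embedding
print(dual_features(emb, seq).shape)                # (772,)
```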

10.
The robustness of an urban bus network is essential to a city that relies heavily on buses as its main transportation solution. In this paper, the urban bus network is modeled as a directed, space-L network, and Changsha, a transportation hub of nearly 8 million people and hundreds of bus lines in southern China, is taken as a case study. Based on quantitative analyses of the topological properties, it is found that the Changsha urban bus network is a scale-free network, not a small-world network. To evaluate the robustness of the network, five scenarios of network failure are simulated, including a random failure and four types of intentional attacks that differ in their key node identification methods (i.e., unweighted degree or betweenness centrality) and attack strategies (i.e., normal or cascading attack). It is revealed that intentional attacks are more destructive than random failures, and cascading attacks are more disruptive than normal attacks in the urban bus network. In addition, the key node identification methods are found to play a critical role in the robustness of the urban bus network. Specifically, a cascading attack can be more disruptive when betweenness centrality is used to identify key nodes; in contrast, a normal attack can be more disruptive when the unweighted degree is used to identify key nodes. Our results provide a reference for risk management of urban bus networks.
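A hedged networkx sketch of the "normal" (non-cascading) attack scenarios: remove the top-ranked nodes by degree or betweenness centrality and measure the surviving giant component. The scale-free stand-in graph is an assumption (not Changsha data), and cascading failure propagation is deliberately not modelled.

```python
import networkx as nx

def normal_attack(G, method="degree", fraction=0.1):
    """Remove the top-ranked nodes in one shot and return the relative
    size of the largest weakly connected component that survives.
    (A cascading attack would additionally propagate overload failures.)
    """
    if method == "degree":
        rank = sorted(G.nodes, key=G.degree, reverse=True)
    else:  # betweenness centrality
        bc = nx.betweenness_centrality(G)
        rank = sorted(G.nodes, key=bc.get, reverse=True)
    H = G.copy()
    H.remove_nodes_from(rank[: int(fraction * G.number_of_nodes())])
    giant = max(nx.weakly_connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

# Directed scale-free stand-in for a space-L bus network (not Changsha data).
G = nx.DiGraph(nx.scale_free_graph(500, seed=1))
print(normal_attack(G, "degree"), normal_attack(G, "betweenness"))
```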

11.
Robustness against attacks serves as evidence for complex network structures and the failure mechanisms that lie behind them. Most often, due to limited detection capability or good disguises, attacks on networks are subject to false positives and false negatives, meaning that functional nodes may be falsely regarded as compromised by the attacker, and vice versa. In this work, we initiate a study of false positive/negative effects on network robustness against three fundamental types of attack strategies, namely random attacks (RA), localized attacks (LA), and targeted attacks (TA). By developing a general mathematical framework based upon the percolation model, we investigate, analytically and by numerical simulation, attack robustness with false positive/negative rates (FPR/FNR) on three benchmark models: Erdős–Rényi (ER) networks, random regular (RR) networks, and scale-free (SF) networks. We show that ER networks are equivalently robust against RA and LA only when the FPR equals zero or the initial network is intact. We find several interesting crossovers in RR and SF networks when the FPR is taken into consideration. By defining the cost of attack, we observe diminishing marginal attack efficiency for RA, LA, and TA. Our findings highlight the potential risk of underestimating or ignoring FPR in understanding attack robustness. The results may provide insights into ways of enhancing the robustness of network architectures and improving the level of protection of critical infrastructures.
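A simulation stand-in for the percolation analysis: a random attack where each intended target survives with probability FNR and each non-target is hit with probability FPR, scored by the relative size of the giant component. Reading FPR/FNR as per-node removal errors is my interpretation of the abstract; all parameters are illustrative.

```python
import numpy as np
import networkx as nx

def attack_with_errors(G, q=0.3, fpr=0.0, fnr=0.0, seed=0):
    """Random attack with detection errors: each node is targeted with
    probability q; targeted nodes survive with probability fnr (false
    negative) and non-targeted nodes are removed with probability fpr
    (false positive). Returns the relative giant-component size.
    """
    rng = np.random.default_rng(seed)
    removed = []
    for v in G.nodes:
        targeted = rng.random() < q
        hit = (rng.random() >= fnr) if targeted else (rng.random() < fpr)
        if hit:
            removed.append(v)
    H = G.copy()
    H.remove_nodes_from(removed)
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

G = nx.fast_gnp_random_graph(10000, 4 / 10000, seed=1)   # ER, mean degree ~4
print(attack_with_errors(G, q=0.3))                      # error-free attacker
print(attack_with_errors(G, q=0.3, fpr=0.2, fnr=0.2))    # noisy attacker
```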

12.
Deep neural networks in the area of information security face a severe threat from adversarial examples (AEs). Existing methods of AE generation use one of two optimization models: (1) taking a successful attack as the objective function and limiting perturbations as the constraint; or (2) taking the minimum adversarial perturbation as the target and a successful attack as the constraint. Both involve two fundamental problems concerning AEs: the minimum boundary for constructing AEs, and whether that boundary is reachable. Reachability here means whether AEs that successfully attack the model exist exactly at that boundary. Previous optimization models give no complete answer to these problems. Therefore, in this paper, for the first problem, we propose a definition of the minimum AE and give a theoretical lower bound on the amplitude of the minimum AEs. For the second problem, we prove that generating the minimum AEs is NP-complete, and then, based on its computational inaccessibility, we establish a new, third optimization model. This model is general and can adapt to any constraint. To verify the model, we devise two specific methods for generating controllable AEs under widely used distance measures for adversarial perturbations, namely the Lp constraint and the SSIM (structural similarity) constraint. The model limits the amplitude of the AEs, reduces the search cost over the solution space, and further improves efficiency. In theory, the AEs generated by the new model are closer to the actual minimum adversarial boundary, overcoming the blind amplitude setting of existing methods and further improving the attack success rate. In addition, the model can generate accurate AEs with controllable amplitude under different constraints, which suits different application scenarios. Extensive experiments demonstrate better attack ability than other baseline attacks under the same constraints: on all the datasets we test, the attack success rate of our method improves on the baselines by approximately 10%.
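One simple way to probe how close a given attack gets to the minimum boundary is to binary-search the smallest amplitude, along a fixed attack direction, that still flips the decision. This sketch is my illustration under an L-infinity-style constraint; the paper's third optimization model is more general (it also handles, e.g., SSIM).

```python
import numpy as np

def min_amplitude(x, direction, predict, true_label, hi=1.0, tol=1e-3):
    """Binary-search the smallest amplitude a such that
    predict(clip(x + a * direction)) != true_label. `direction` is any
    fixed attack direction (e.g., a gradient sign); `predict` returns a
    label. Returns None if even amplitude `hi` fails to attack.
    """
    if predict(np.clip(x + hi * direction, 0.0, 1.0)) == true_label:
        return None
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if predict(np.clip(x + mid * direction, 0.0, 1.0)) == true_label:
            lo = mid          # still correctly classified: increase amplitude
        else:
            hi = mid          # already adversarial: shrink toward the boundary
    return hi
```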

13.
This paper studies the privacy of wireless communications from an eavesdropper that employs a deep learning (DL) classifier to detect transmissions of interest. There exists one transmitter that transmits to its receiver in the presence of an eavesdropper. In the meantime, a cooperative jammer (CJ) with multiple antennas transmits carefully crafted adversarial perturbations over the air to fool the eavesdropper into classifying the received superposition of signals as noise. While generating the adversarial perturbation at the CJ, multiple antennas are utilized to improve the attack performance in terms of fooling the eavesdropper. Two main points are considered while exploiting the multiple antennas at the adversary, namely the power allocation among antennas and the utilization of channel diversity. To limit the impact on the bit error rate (BER) at the receiver, the CJ puts an upper bound on the strength of the perturbation signal. Performance results show that this adversarial perturbation causes the eavesdropper to misclassify the received signals as noise with a high probability while increasing the BER at the legitimate receiver only slightly. Furthermore, the adversarial perturbation is shown to become more effective when multiple antennas are utilized.
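A hedged numpy sketch of the two design points named above (power allocation across antennas and use of channel knowledge): weight each antenna by its conjugate channel to the eavesdropper and cap the total transmit power. The weighting rule, power budget, and dimensions are my assumptions, not the paper's exact scheme.

```python
import numpy as np

def jam_perturbation(delta, h, p_max):
    """Spread a crafted perturbation over multiple antennas: weight each
    antenna by its (conjugate) channel to the eavesdropper and cap the
    average transmit power at p_max, bounding the impact on the
    legitimate receiver's BER.
    """
    w = np.conj(h) / np.linalg.norm(h)       # channel-matched power allocation
    tx = np.outer(w, delta)                   # per-antenna perturbation waveforms
    power = np.mean(np.abs(tx) ** 2)
    if power > p_max:
        tx *= np.sqrt(p_max / power)          # enforce the perturbation budget
    return tx

rng = np.random.default_rng(0)
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
delta = rng.standard_normal(256) + 1j * rng.standard_normal(256)
print(jam_perturbation(delta, h, p_max=0.1).shape)   # (4, 256)
```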

14.
As a multi-particle entangled state, the Greenberger–Horne–Zeilinger (GHZ) state plays an important role in quantum theory and its applications. In this study, we propose a flexible multi-user measurement-device-independent quantum key distribution (MDI-QKD) scheme based on a GHZ entangled state. Our scheme can distribute quantum keys among multiple users while being resistant to detection attacks. Our simulation results show that the secure distance between each user and the measurement device can reach more than 280 km while reducing the complexity of the quantum network. Additionally, we propose a method to expand our scheme to a multi-node, multi-user network, which can further enhance the communication distance between users at different nodes.

15.
In recent studies of generative adversarial networks (GANs), researchers have attempted to combine adversarial perturbation with data hiding in order to protect the privacy and authenticity of the host image simultaneously. However, most of the studied approaches can only achieve adversarial perturbation through a visible watermark; the quality of the host image is low, and the concealment of data hiding cannot be achieved. In this work, we propose a true data hiding method with an adversarial effect for generating high-quality covers. Based on a GAN, the data hiding area is selected precisely by limiting the modification strength in order to preserve the fidelity of the image. We devise a genetic algorithm that can explore decision boundaries in an artificially constrained search space to improve the attack effect, and construct aggressive covert adversarial samples by detecting “sensitive pixels” in ordinary samples and placing discontinuous perturbations there. The results reveal that the stego-image has good visual quality and a strong attack effect. To the best of our knowledge, this is the first attempt to use covert data hiding to generate adversarial samples based on a GAN.

16.
杨壮, 颜永红, 黄志华. 《应用声学》, 2024, 43(3): 498–504
Accent recognition is the process of identifying different regional accents within the same language. To improve accent recognition accuracy, we adopt several methods and achieve clear gains. First, to address the problem that key features within the acoustic features do not receive sufficiently prominent weights, we introduce an effective attention mechanism and compare and analyze several attention mechanisms. By letting the model adaptively learn different weights over the channel and spatial dimensions, accent recognition performance is improved. Experimental results on the Common Voice English accent dataset show that introducing the CBAM attention module is effective: recognition accuracy improves by a relative 12.7%, while precision and F1 score improve by a relative 17.9%. We then propose a tree-structured classification method to mitigate the long-tail effect in the dataset, improving recognition accuracy by up to a relative 5.2%. Inspired by domain adversarial training (DAT), we attempt to remove redundant information from the accent features through adversarial learning, improving accuracy by up to a relative 3.4% and recall by up to a relative 16.9%.
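A standard PyTorch sketch of the CBAM module referenced above (channel attention followed by spatial attention); the reduction ratio, kernel size, and the spectrogram-shaped dummy input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention from
    pooled statistics through a shared MLP, then spatial attention
    from a 7x7 convolution over pooled channel maps.
    """
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))       # channel attention
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(1, keepdim=True),  # spatial attention
                       x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))

feats = torch.randn(4, 64, 40, 100)              # e.g. spectrogram feature maps
print(CBAM(64)(feats).shape)                     # torch.Size([4, 64, 40, 100])
```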

17.
We show how to implement cryptographic primitives based on the realistic assumption that quantum storage of qubits is noisy. We thereby consider individual-storage attacks; i.e., the dishonest party attempts to store each incoming qubit separately. Our model is similar to the model of bounded-quantum storage; however, we consider an explicit noise model inspired by present-day technology. To illustrate the power of this new model, we show that a protocol for oblivious transfer is secure for any amount of quantum-storage noise, as long as honest players can perform perfect quantum operations. Our model also allows us to show the security of protocols that cope with noise in the operations of the honest players and achieve more advanced tasks such as secure identification.

18.
We present a device-independent protocol to test if a given black-box measurement device is entangled, that is, has entangled eigenstates. Our scheme involves three parties and is inspired by entanglement swapping; the test uses the Clauser-Horne-Shimony-Holt Bell inequality, checked between each pair of parties. In the case where all particles are qubits, we characterize quantitatively the deviation of the measurement device from a perfect Bell-state measurement.

19.
We present a model which aims at describing the morphology of colonies of Proteus mirabilis and Bacillus subtilis. Our model is based on a cellular automaton obtained by an adequate discretisation of a diffusion-like equation describing the migration of the bacteria, to which we add rules simulating the consolidation process. Our basic assumption, following the findings of the Chuo University group, is that the migration and consolidation processes are controlled by the local density of the bacteria. We show that it is possible within our model to reproduce the morphological diagrams of both bacterial species. Moreover, we model some detailed experiments done by the Chuo University group, obtaining good agreement.
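The sketch below illustrates the general scheme of such an automaton: a discretised diffusion update for the migrating population plus a density-controlled consolidation rule that immobilizes part of it. The specific rule, thresholds, and rates here are my assumptions, not the paper's calibrated model.

```python
import numpy as np

def step(active, frozen, D=0.2, threshold=1.0, settle_rate=0.1):
    """One update: discretised diffusion of the migrating population,
    then density-controlled consolidation that immobilizes part of it.
    """
    lap = (np.roll(active, 1, 0) + np.roll(active, -1, 0) +
           np.roll(active, 1, 1) + np.roll(active, -1, 1) - 4 * active)
    active = active + D * lap                       # migration (diffusion step)
    settling = (active + frozen) > threshold        # high local density
    deposit = np.where(settling, settle_rate * active, 0.0)
    return active - deposit, frozen + deposit       # consolidation

active = np.zeros((201, 201)); active[100, 100] = 50.0   # point inoculation
frozen = np.zeros_like(active)
for _ in range(500):
    active, frozen = step(active, frozen)
print(frozen.sum(), active.sum())   # total mass is conserved across the two fields
```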

20.
This paper studies the resiliency of hierarchical networks when subjected to random errors, static attacks, and cascade attacks. The performance is compared with existing Erdős–Rényi (ER) random networks and Barabási–Albert (BA) scale-free networks using global efficiency as the common performance metric. The results show that critical infrastructures modeled as hierarchical networks are intrinsically efficient and resilient to random errors; however, they are more vulnerable to targeted attacks than scale-free networks. Based on the response dynamics under different attack models, we propose a novel hybrid mitigation strategy that combines discrete levels of critical-node reinforcement with additional edge augmentation. The proposed modified topology takes advantage of the high initial efficiency of the hierarchical network while also making it resilient to attacks. Experimental results show that when the level of damage inflicted on a critical node is low, the node reinforcement strategy is more effective, and as the level of damage increases, additional edge augmentation is highly effective in maintaining overall network resiliency.
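Global efficiency under a static targeted attack is easy to reproduce in networkx; the sketch below compares a balanced tree (a stand-in for a hierarchical topology, my assumption) with a BA graph of the same size.

```python
import networkx as nx

def efficiency_after_attack(G, n_remove):
    """Global efficiency after removing the n_remove highest-degree
    nodes (a one-shot static attack)."""
    H = G.copy()
    top = sorted(H.nodes, key=H.degree, reverse=True)[:n_remove]
    H.remove_nodes_from(top)
    return nx.global_efficiency(H)

hier = nx.balanced_tree(3, 6)                       # 1093-node hierarchical stand-in
ba = nx.barabasi_albert_graph(hier.number_of_nodes(), 2)
print(efficiency_after_attack(hier, 10), efficiency_after_attack(ba, 10))
```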
