Similar Documents
20 similar documents found.
1.
To address the problem that, in deep neural network training, the residual (back-propagated error) shrinks with propagation depth until the lower layers cannot be trained effectively, this paper analyzes the limitations of the conventional sigmoid activation in deep networks and proposes a dual-parameter sigmoid activation function. One parameter keeps the activation's inputs concentrated around the origin, preventing the function from entering its saturation region; the other suppresses the rate at which the residual decays. Together, the two parameters effectively strengthen deep-network training. Digit-classification experiments on the MNIST dataset with a DBN show that the dual-parameter sigmoid can be applied directly to deep networks without pre-training, and that it also improves the training of pre-trained deep networks that use sigmoid activations.
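A minimal NumPy sketch of how such a two-parameter sigmoid might look. The abstract does not give the exact parameterization, so the roles assigned here (a parameter `a` rescaling the input to stay away from the saturation regions, a parameter `b` rescaling the output and hence the back-propagated gradient) are an illustrative assumption, not the paper's formula:

```python
import numpy as np

def dual_param_sigmoid(x, a=2.0, b=1.5):
    """Two-parameter sigmoid (illustrative form, not the paper's exact one):
    `a` rescales the input so activations stay near the origin, away from
    the flat saturation regions; `b` rescales the output, which also
    scales the gradient and so slows the decay of the residual."""
    return b / (1.0 + np.exp(-a * x))

def dual_param_sigmoid_grad(x, a=2.0, b=1.5):
    """Derivative of the function above: a * b * s * (1 - s)."""
    s = 1.0 / (1.0 + np.exp(-a * x))
    return a * b * s * (1.0 - s)
```

At the origin the value is `b / 2` and the gradient is `a * b / 4`, so choosing `a * b > 4` gives a steeper slope than the plain sigmoid's 0.25, which is one way the residual decay could be counteracted.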

2.
Chinese Physics B, 2021, 30(5): 054201
We present a ghost handwritten digit recognition method for unknown handwritten digits based on ghost imaging (GI) with a deep neural network, where a few detection signals from the bucket detector, generated by the cosine transform speckle, are used as the characteristic information and the input of the designed deep neural network (DNN), whose output is the classification. The results show that the proposed scheme achieves a higher recognition accuracy (as high as 98% in simulations and 91% in experiments) with a smaller sampling ratio (say, 12.76%). As the sampling ratio increases, the recognition accuracy is further enhanced. Compared with a traditional recognition scheme using the same DNN structure, the proposed scheme performs slightly better while offering lower complexity and a non-locality property. The proposed scheme provides a promising way for remote sensing.

3.
王全东, 郭良浩, 闫超. Journal of Applied Acoustics, 2019, 38(6): 1004-1014
To address the difficulty of acquiring underwater acoustic target signals under interference or noise, this paper studies adaptive waveform recovery of passive underwater acoustic signals based on deep neural networks. In the single-element case, the method extracts log-power-spectrum features as input, uses a DNN regression model to adaptively learn the target signal's own characteristics, and outputs denoised log-power-spectrum features from which the time-domain waveform is restored. In the multi-element case, an array DNN denoising method is proposed that concatenates the features of some or all array elements into one long vector as input, thereby exploiting spatial information. To make full use of the array's rich time-frequency information, a two-stage feature-fusion DNN is proposed: in the first stage the array is divided into several sub-arrays, each processed by its own array DNN; in the second stage the sub-array outputs and the raw array signals are fed jointly into a DNN for fusion learning. Experiments show that the proposed single-element and two-stage fusion DNNs recover waveforms markedly better than conventional beamforming, accurately estimating the target signal's waveform and power and significantly improving the output signal-to-noise ratio.
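The single-element front end described above (log-power-spectrum features in, denoised features out, waveform restored afterwards) can be sketched as follows. The frame and FFT sizes are illustrative assumptions, and the DNN itself is omitted; reusing the noisy phase for reconstruction is the common practice in spectral enhancement, though the abstract does not spell it out:

```python
import numpy as np

def log_power_spectrum(frame, n_fft=512):
    """Log-power-spectrum feature of one frame, plus the phase that is
    needed later to restore the time-domain waveform."""
    spec = np.fft.rfft(frame, n_fft)
    return np.log(np.abs(spec) ** 2 + 1e-12), np.angle(spec)

def restore_waveform(denoised_log_ps, phase, n_fft=512):
    """Map a (denoised) log power spectrum back to a time-domain frame,
    combining the magnitude with the retained phase."""
    mag = np.sqrt(np.exp(denoised_log_ps))
    return np.fft.irfft(mag * np.exp(1j * phase), n_fft)

# Round trip with an identity "denoiser" standing in for the DNN output.
rng = np.random.default_rng(0)
frame = rng.normal(size=512)
log_ps, phase = log_power_spectrum(frame)
recovered = restore_waveform(log_ps, phase)
```

With an identity mapping in place of the DNN, the round trip recovers the original frame up to numerical error, which confirms the feature pair (log power spectrum, phase) loses no information.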

4.
With the rapid development of sensor technology in recent years, online detection of early faults without a system halt has received much attention in bearing prognostics and health management. Lacking representative samples of the online data, one can adapt a previously learned detection rule to the online detection task instead of training a new rule from online data alone. Since the data distribution may change between offline and online working conditions, it is challenging to use data from different working conditions to improve detection accuracy and robustness. To solve this problem, a new online detection method for early bearing faults is proposed in this paper based on deep transfer learning. The proposed method contains an offline stage and an online stage. In the offline stage, a new state assessment method is proposed to determine the periods of the normal state and the degradation state in whole-life degradation sequences. Moreover, a new deep dual temporal domain adaptation (DTDA) model is proposed. By adopting a dual adaptation strategy on the temporal convolutional network and the domain adversarial neural network, the DTDA model can effectively extract domain-invariant temporal feature representations. In the online stage, each sequentially arrived data batch is fed directly into the trained DTDA model to recognize whether an early fault has occurred. Furthermore, a health indicator of the target bearing is built from the DTDA features to evaluate the detection results intuitively. Experiments are conducted on the IEEE Prognostics and Health Management (PHM) Challenge 2012 bearing dataset. The results show that, compared with nine state-of-the-art fault detection and diagnosis methods, the proposed method achieves an earlier detection location and a lower false-alarm rate.
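The adversarial half of a dual adaptation strategy of this kind typically relies on reversing the domain classifier's gradient at the shared feature extractor, as in standard domain-adversarial training. The following is a schematic sketch of that one mechanism under that assumption, not the DTDA model's actual implementation:

```python
import numpy as np

def feature_extractor_grad(task_grad, domain_grad, lam=1.0):
    """Gradient reaching a shared feature extractor in domain-adversarial
    training: follow the task-loss gradient, but reverse the
    domain-classifier gradient (scaled by lam). Minimizing along this
    direction improves the task while making offline/online domains
    indistinguishable, i.e., the features become domain-invariant.
    Schematic only; the DTDA paper's exact update is not given here."""
    return np.asarray(task_grad) - lam * np.asarray(domain_grad)
```

When the two gradients agree, the reversal cancels part of the update; when they conflict, it pushes the features away from whatever lets the domain classifier tell the two working conditions apart.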

5.
The thermal issue is of great importance during the layout design of heat source components in systems engineering, especially for high functional-density products. Thermal analysis requires complex simulation, which imposes an unaffordable computational burden on layout optimization, as it iteratively evaluates different schemes. Surrogate modeling is an effective method for alleviating this computational complexity. However, temperature field prediction (TFP) with a complex heat source layout (HSL) as input is an ultra-high-dimensional nonlinear regression problem, which poses great difficulty for traditional regression models. Deep neural network (DNN) regression is a feasible approach thanks to its good approximation performance, but it faces great challenges in data preparation (sample diversity and uniformity in the layout space under physical constraints) and in proper DNN model selection and training for good generality, which requires the efforts of both layout designers and DNN experts. To advance this cross-domain research, this paper proposes a benchmark for the DNN-based HSL-TFP surrogate modeling task. With engineering applicability in mind, sample generation, dataset evaluation, DNN models, and surrogate performance metrics are thoroughly investigated. Experiments are conducted with ten representative state-of-the-art DNN models. A detailed discussion of baseline results is provided, and future prospects for DNN-based HSL-TFP tasks are analyzed.

6.
Hai-Yang Meng. Chinese Physics B, 2022, 31(6): 064305
Accurate and fast prediction of aerodynamic noise has always been a research hotspot in fluid mechanics and aeroacoustics. Conventional prediction methods based on numerical simulation often demand huge computational resources and struggle to balance accuracy and efficiency. Here, we present a data-driven deep neural network (DNN) method that achieves fast aerodynamic noise prediction while maintaining accuracy. The proposed deep learning method can predict the spatial distribution of aerodynamic noise under different working conditions. Based on the large eddy simulation turbulence model and the Ffowcs Williams-Hawkings acoustic analogy theory, a dataset of 1216 samples is established. A DNN framework is then proposed to map the relationship between spatial coordinates and inlet velocity on the one hand and overall sound pressure level on the other. The root-mean-square errors of prediction are below 0.82 dB on the test dataset, and the directivity of the aerodynamic noise predicted by the DNN framework is basically consistent with the numerical simulation. This work paves a novel way for fast, accurate prediction of aerodynamic noise and has application potential in acoustic field prediction.
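The mapping described, from (spatial coordinates, inlet velocity) to overall sound pressure level, is a plain fully connected regression. The following is a hedged sketch with illustrative layer sizes and random, untrained weights; it shows the shape of the input/output mapping, not the paper's actual architecture or trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights, biases):
    """Forward pass of a small fully connected regressor. Hidden layers
    use tanh; the output layer is linear, as is usual for regression."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
    return h @ weights[-1] + biases[-1]

# (x, y, inlet_velocity) -> overall sound pressure level in dB.
# Layer sizes are illustrative assumptions, not from the paper.
sizes = [3, 32, 32, 1]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

pred = mlp_forward(np.array([[0.1, 0.2, 30.0]]), weights, biases)
```

Training such a network against the 1216 simulation samples would amount to minimizing the mean squared error between `pred` and the simulated sound pressure levels.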

7.
An intelligent solution method is proposed to achieve real-time optimal control of continuous-time nonlinear systems using a novel identifier-actor-optimizer (IAO) policy-learning architecture. In this IAO-based approach, a dynamical identifier is developed to approximate the unknown part of the system dynamics using deep neural networks (DNNs). Then, an indirect-method-based optimizer is proposed to generate high-quality optimal actions for system control, considering both the constraints and the performance index. Furthermore, a DNN-based actor is developed to approximate the obtained optimal actions and return good initial guesses to the optimizer. In this way, traditional optimal control methods and state-of-the-art DNN techniques are combined in the IAO-based optimal policy-learning method. Compared to reinforcement learning algorithms with actor-critic architectures, which suffer from difficult reward design and low computational efficiency, the IAO-based algorithm enjoys fewer user-defined parameters, higher learning speed, and steadier convergence when solving complex continuous-time optimal control problems (OCPs). Simulation results for three space flight control missions substantiate the effectiveness of this IAO-based policy-learning strategy and illustrate the performance of the developed DNN-based optimal control method for continuous-time OCPs.

8.
Satellite communication is expected to play a vital role in realizing Internet of Remote Things (IoRT) applications. This article considers an intelligent reflecting surface (IRS)-assisted downlink low Earth orbit (LEO) satellite communication network, where the IRS provides additional reflective links to enhance the intended signal power. We aim to maximize the sum-rate of all terrestrial users by jointly optimizing the satellite's precoding matrix and the IRS's phase shifts. However, it is difficult to directly acquire the instantaneous channel state information (CSI) and optimal phase shifts of the IRS due to the high mobility of LEO satellites and the passive nature of the reflective elements. Moreover, most conventional solution algorithms suffer from high computational complexity and are not applicable to such dynamic scenarios. A robust beamforming design based on graph attention networks (RBF-GAT) is proposed to establish a direct mapping from the received pilots and dynamic network topology to the satellite and IRS beamforming, trained offline using an unsupervised learning approach. The simulation results corroborate that the proposed RBF-GAT approach can achieve more than 95% of the performance provided by the upper bound with low complexity.

9.
宋睿卓, 魏庆来. Chinese Physics B, 2017, 26(3): 030505
We develop an optimal tracking control method for chaotic systems with unknown dynamics and disturbances. The method allows the optimal cost function and the corresponding tracking control to update synchronously. Based on the tracking error and the reference dynamics, the augmented system is constructed and the optimal tracking control problem is defined. Policy iteration (PI) is introduced to solve the min-max optimization problem. An off-policy adaptive dynamic programming (ADP) algorithm is then proposed to find the solution of the tracking Hamilton-Jacobi-Isaacs (HJI) equation online, using only measured data and without any knowledge of the system dynamics. A critic neural network (CNN), an action neural network (ANN), and a disturbance neural network (DNN) are used to approximate the cost function, the control, and the disturbance. The weights of these networks form an augmented weight matrix, whose uniform ultimate boundedness (UUB) is proven; the convergence of the tracking-error system is also proven. Two examples show the effectiveness of the proposed synchronous solution method for the chaotic system tracking problem.
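The evaluate/improve loop at the heart of policy iteration can be illustrated on a toy discrete MDP. The paper works in continuous time with the HJI equation and neural approximators; this 2-state example, with made-up transition and reward numbers, is only a schematic analogue of the same iteration:

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative numbers, not from the paper).
P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),   # P[a][s, s'] transition probs
     1: np.array([[0.5, 0.5], [0.6, 0.4]])}
R = np.array([[1.0, 0.0], [0.5, 2.0]])        # R[s, a] rewards
gamma = 0.9                                    # discount factor

policy = np.zeros(2, dtype=int)
for _ in range(50):
    # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
    P_pi = np.array([P[policy[s]][s] for s in range(2)])
    r_pi = np.array([R[s, policy[s]] for s in range(2)])
    v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
    # Policy improvement: act greedily with respect to v.
    q = np.array([[R[s, a] + gamma * P[a][s] @ v for a in (0, 1)]
                  for s in range(2)])
    new_policy = q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break                                  # policy is now optimal
    policy = new_policy
```

In the paper's setting, the exact linear solve in the evaluation step is replaced by the critic network, and the greedy improvement by the action and disturbance networks solving the min-max step of the HJI equation.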

10.
This paper reformulates the two-phase solidification problem (i.e., the Stefan problem) as an inverse problem in which a cost functional is minimized with respect to the position of the interface and subject to PDE constraints. An advantage of this formulation is that it allows for a thermodynamically consistent treatment of the interface conditions in the presence of a contact point involving a third phase. It is argued that such an approach in fact represents a closure model for the original system and some of its key properties are investigated. We describe an efficient iterative solution method for the Stefan problem formulated in this way which uses shape differentiation and adjoint equations to determine the gradient of the cost functional. Performance of the proposed approach is illustrated with sample computations concerning 2D steady solidification phenomena.

11.
A temporal-spatial reconstruction algorithm for phase decoding
For 3D sensing techniques based on phase mapping, absolute phase measurement and phase reconstruction remain difficult for objects with complex geometry or topology, or with large surface gradients. A temporal phase-reconstruction algorithm proposed internationally in recent years offers one solution, but it places stringent demands on the structured-light illumination system; when the system cannot meet them, the reconstruction suffers severe noise. To address this problem, a temporal-spatial reconstruction algorithm for phase decoding, built from piecewise functions, is proposed. During phase reconstruction the algorithm considers the relative phase relations in both the temporal and spatial dimensions simultaneously, so fringe sequences whose spatial frequencies do not grow strictly exponentially can still be reconstructed correctly, eliminating the phase ambiguity at jump boundaries. It can thus handle scenes with discontinuous depth surfaces and mutually isolated surface topologies more effectively. Experimental results demonstrate the feasibility and effectiveness of the algorithm.

12.
To address the fact that current supervised speech enhancement ignores how the magnitude-spectrum similarity among clean speech, noise, and noisy speech affects enhancement, a speech enhancement method combining an accurate ratio mask (ARM) with a deep neural network (DNN) is proposed. Using the normalized cross-correlation coefficients between the magnitude spectra of clean and noisy speech, and of noise and noisy speech, an accurate ratio mask based on the time-frequency ideal ratio mask is designed as the target mask. A DNN trained to output clean-speech and noise magnitude spectra serves as the baseline; its outputs are used to estimate the target mask, the baseline DNN and target mask are jointly optimized, and the enhanced speech is estimated from the noisy speech via the target mask. Furthermore, to exploit the discriminative information between clean speech and noise, a discriminative training objective replaces the mean squared error (MSE) as the baseline DNN's loss, making the network outputs more accurate. Experiments show that the discriminative objective improves the enhancement of both the baseline DNN and the whole jointly optimized network; under both matched and unmatched noise, the proposed method achieves higher average perceptual evaluation of speech quality (PESQ) and short-time objective intelligibility (STOI) scores than other common DNN methods, preserving more speech components while suppressing noise more strongly.
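The ARM is built on the time-frequency ideal ratio mask (IRM). Since the abstract does not give the exact cross-correlation weighting, this sketch shows the standard IRM baseline that the ARM refines, together with how a mask is applied to the noisy spectrum:

```python
import numpy as np

def ideal_ratio_mask(clean_mag, noise_mag):
    """Standard time-frequency ideal ratio mask (IRM): the fraction of
    energy in each bin that belongs to clean speech. Values lie in
    [0, 1]; the paper's ARM additionally weights this by normalized
    cross-correlation terms that are not specified in the abstract."""
    return clean_mag ** 2 / (clean_mag ** 2 + noise_mag ** 2 + 1e-12)

def enhance(noisy_spec, mask):
    """Apply an estimated mask to the noisy spectrum bin by bin."""
    return mask * noisy_spec
```

In speech-dominated bins the mask approaches 1 (the bin passes through), and in noise-dominated bins it approaches 0 (the bin is suppressed), which is the behavior the DNN is trained to estimate.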

13.
A single-channel speech enhancement algorithm combining deep neural networks and convex optimization
The accuracy of noise estimation directly determines the quality of a speech enhancement algorithm. To improve the noise suppression of current speech enhancement algorithms and to solve the unconstrained optimization problem effectively, a time-frequency mask optimization algorithm combining a deep neural network (DNN) with convex optimization is proposed for single-channel speech enhancement. First, the energy spectrum of the noisy speech is extracted as the DNN input feature; next, the in-band cross-correlation coefficient (ICC factor) between the noise and the noisy speech is used as the DNN training target; ...

14.
The vulnerability of deep neural network (DNN)-based systems makes them susceptible to adversarial perturbations that can cause classification failure. In this work, we propose an adversarial attack model based on the Artificial Bee Colony (ABC) algorithm that generates adversarial samples without any gradient evaluation or training of a substitute model, further raising the chance of perturbation-induced task failure. In untargeted attacks, the proposed method obtained success rates of 100%, 98.6%, and 90.00% on the MNIST, CIFAR-10, and ImageNet datasets, respectively. The experimental results show that the proposed ABCAttack not only obtains a high attack success rate with fewer queries in the black-box setting, but also breaks some existing defenses to a large extent, and is not limited by model structure or size, providing further research directions for deep learning evasion attacks and defenses.
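The key property of such attacks is that they need only model queries, no gradients. A toy sketch of that black-box setting, using plain random search in place of the ABC algorithm and a stand-in linear classifier (both are illustrative assumptions, not ABCAttack itself):

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_model(x):
    """Stand-in black-box classifier: linear scores over two classes.
    The attacker may call it but cannot see inside."""
    W = np.array([[1.0, -1.0], [-1.0, 1.0]])
    return int((x @ W).argmax())

def random_search_attack(x, label, eps=0.5, queries=200):
    """Query-only untargeted attack: sample bounded perturbations until
    the model's prediction flips. ABCAttack replaces this blind sampling
    with the Artificial Bee Colony search, which needs far fewer queries."""
    for _ in range(queries):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if toy_model(x + delta) != label:
            return x + delta
    return None

x0 = np.array([0.1, 0.0])
adv = random_search_attack(x0, toy_model(x0))
```

Each candidate costs one query, which is why query efficiency, the property the paper emphasizes, is the main metric for black-box attacks.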

15.
A light-pipe-based laser beam shaper is proposed as a low-loss approach to transform a Gaussian laser beam into a long line beam with uniform distribution along the line direction, for illumination or material-processing applications. Telecentric relay optics are used to solve the radiometric issues of the wide-angle optics. In addition, the length of the light pipe serves as a parameter for manipulating the irradiance distribution at the outlet, compensating for the non-uniformity caused by partial reflection at the optical interface in the relay optics, so as to achieve a highly uniform line beam at the target plane. The proposed scheme provides an economical and versatile solution to the wide-angle beam-shaping problem.

16.
In current deep-neural-network models, the hidden layers can be stacked many levels deep, giving strong capacity for complex problems, but the node connections between layers are mutually independent; this structure cannot exploit contextual information across a speech sequence to improve recognition. Conventional recurrent neural networks improve on this, but can exploit only past context. To address these problems, this paper combines a bidirectional recurrent neural network, which can use both past and future context in the speech sequence, with a deep neural network, and applies the combination to speech recognition. A model with five hidden layers is built, in which the third layer is a bidirectional recurrent structure and the remaining layers are standard deep-network layers. The experimental results show that the model with the bidirectional recurrent layer improves recognition accuracy over the other models; that noise strongly affects bidirectional-RNN Mandarin recognition, especially when the noise types added to the training and test sets differ, so a model trained on a single noisy-speech condition cannot cope with speech under other noise types; and that after adjusting the number of hidden-layer neurons, accuracy does not keep rising with the neuron count, but begins to decline once the count grows beyond a certain point.
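The bidirectional layer runs one recurrent pass left-to-right and one right-to-left, then concatenates the two hidden states per frame, so every output sees both past and future context. A minimal NumPy sketch of that mechanism with illustrative sizes and random weights, not the paper's five-layer model:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(xs, Wx, Wh, b):
    """One directional recurrent pass over a list of frame vectors."""
    h = np.zeros(Wh.shape[0])
    outputs = []
    for x in xs:
        h = np.tanh(x @ Wx + h @ Wh + b)
        outputs.append(h)
    return outputs

def birnn_layer(xs, fwd_params, bwd_params):
    """Bidirectional layer: forward pass over xs, backward pass over the
    reversed sequence (re-reversed so indices line up), concatenated
    per frame. Each output thus depends on the whole utterance."""
    h_fwd = rnn_pass(xs, *fwd_params)
    h_bwd = rnn_pass(xs[::-1], *bwd_params)[::-1]
    return [np.concatenate([f, bk]) for f, bk in zip(h_fwd, h_bwd)]

d_in, d_h, T = 4, 8, 5   # illustrative feature, hidden, sequence sizes

def make_params():
    return (rng.normal(0, 0.1, (d_in, d_h)),
            rng.normal(0, 0.1, (d_h, d_h)),
            np.zeros(d_h))

frames = [rng.normal(size=d_in) for _ in range(T)]
outputs = birnn_layer(frames, make_params(), make_params())
```

The concatenation doubles the layer's output width, which is why such a layer can be dropped into the middle of an otherwise feed-forward stack, as the described five-layer model does.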

17.
How to improve the flexibility of limited communication resources to meet the increasing requirements of data services has become one of the research hotspots in modern wireless communication networks. In this paper, a novel social-aware relay selection method is put forward to allocate energy-efficiency (EE) resources for device-to-device (D2D) communication. To improve system flexibility, a D2D user is selected to act as a relay, named a D2D-relay, instead of a traditional cellular relay. The optimal relay selection strategy is formulated by searching for the maximum trust value, obtained by assessing the link stability and social connections between two users. Then, the resource allocation problem, which turns out to be a mixed-integer nonlinear fractional programming (MINLFP) problem, is solved by maximizing the total EE under physical and social constraints jointly. To improve solution efficiency, a novel iterative algorithm is proposed by integrating the Dinkelbach theory and the Lagrange dual decomposition method. Simulation results demonstrate the effectiveness of the proposed scheme: compared with existing social-blind and social-aware schemes, it significantly improves the probability of successful relay selection and the total EE of the D2D pairs.

18.
In practical ESPI applications in industry, the object under investigation often has low reflectance. Theoretical analysis shows that implementing phase-shifting techniques will then yield a smaller fraction of acceptable measurements for a given level of tolerable phase error. Higher-power lasers may solve this problem, but they are an expensive solution. In this work, structured lighting using diffractive optical elements is proposed as a cost-effective alternative. The approach is particularly useful when the deformation phase has to be obtained only from selected areas on the object. A simple experiment verifies the workability of this approach.

19.
A spoofing relay is an effective way for legitimate agencies to monitor suspicious communication links and prevent malicious behavior. An unmanned aerial vehicle (UAV)-assisted wireless information surveillance system can substantially improve proactive eavesdropping efficiency thanks to the high maneuverability of UAVs. This paper aims to maximize the eavesdropping rate of a surveillance system in which the UAV actively eavesdrops via spoofing relay technology. We formulate the problem of jointly optimizing the amplification coefficient, the splitting ratio, and the UAV's trajectory while ensuring successful monitoring. To make the nonconvex problem tractable, we decompose it into three sub-problems and propose a successive convex approximation based alternating iterative algorithm that quickly obtains a near-optimal solution. The simulation results show that a UAV acting as an active eavesdropper can achieve a markedly higher information eavesdropping rate than a passive eavesdropper.

20.
We consider an intelligent reflecting surface (IRS)-assisted wireless powered communication network (WPCN) in which a multi-antenna power beacon (PB) sends a dedicated energy signal to a wireless powered source. The source first harvests energy and then uses it to send an information signal to the destination, where external interference may also be present. For this system model, we formulate an analytical problem whose objective is to maximize throughput by jointly optimizing the energy harvesting (EH) time and the IRS phase-shift matrices. The optimization problem is high-dimensional and non-convex, so a good-quality solution can be obtained by invoking a state-of-the-art algorithm such as a genetic algorithm (GA). The performance of GA is generally remarkable, but it incurs high computational complexity. To this end, we propose a deep unsupervised learning (DUL) approach in which a neural network (NN) is trained very efficiently, since the time-consuming task of labeling a dataset is not required. Numerical examples show that our proposed approach achieves a better performance-complexity trade-off: it is not only several times faster, but also provides almost the same or even higher throughput compared to the GA.

