Similar documents
Found 20 similar documents (search time: 15 ms)
1.
Tonghe Ying 《中国物理 B》2022,31(7):78402-078402
A machine learning (ML) potential for Au clusters is developed by training on a dataset containing clusters of several different sizes. This ML potential accurately covers the whole configuration space of Au clusters over a broad size range and therefore performs well in searches for their global-minimum energy structures. Based on our potential, the low-lying structures of 17 different cluster sizes are identified, showing that small Au clusters tend to form planar structures while large ones favor three-dimensional geometries, revealing the critical size for the two-dimensional (2D) to three-dimensional (3D) structural transition. Our calculations demonstrate that ML is indeed powerful in describing the interaction of Au atoms and provides a new paradigm for accelerating structure searches.
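As a rough illustration of the workflow this abstract describes, the sketch below runs a basin-hopping global search over a toy Lennard-Jones pair potential standing in for the trained ML potential. The 4-atom cluster size, the potential, and all numbers are our own illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import basinhopping

def lennard_jones(flat_coords):
    """Total pair energy of a cluster; a stand-in for the trained ML potential."""
    xyz = flat_coords.reshape(-1, 3)
    e = 0.0
    for i in range(len(xyz)):
        for j in range(i + 1, len(xyz)):
            r = np.linalg.norm(xyz[i] - xyz[j])
            e += 4.0 * (r ** -12 - r ** -6)
    return e

rng = np.random.default_rng(0)
x0 = rng.uniform(-1.5, 1.5, size=12)   # hypothetical 4-atom cluster, random start
res = basinhopping(lennard_jones, x0, niter=50)
print(round(res.fun, 3))               # known LJ4 global minimum: -6.0
```

The key point is that the surrogate potential is cheap enough to evaluate thousands of times inside the global search, which is exactly what an accurate ML potential enables for larger clusters.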

2.
Magnetization switching is one of the most fundamental topics in the field of magnetism. Machine learning (ML) models based on random forest (RF), support vector machine (SVM), and deep neural network (DNN) methods are built and trained to classify magnetization reversal and non-reversal cases of a single-domain particle, and the classification performance is evaluated by comparison with micromagnetic simulations. The results show that the ML models achieve high accuracy even with a small training dataset: the DNN model reaches the best area under curve (AUC) of 0.997, while the RF and SVM models have lower AUCs of 0.964 and 0.836, respectively. This work validates the potential of ML applications in studies of magnetization switching and provides a benchmark for further ML studies of magnetization switching.
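A minimal sketch of the comparison protocol described here, using scikit-learn classifiers and ROC-AUC on synthetic stand-in data (the features and dataset are illustrative assumptions, not the paper's micromagnetic data):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in features (e.g. field amplitude, field angle, anisotropy).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

aucs = {}
for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC(probability=True, random_state=0))]:
    clf.fit(X_tr, y_tr)
    # AUC from predicted class probabilities, as in the paper's evaluation.
    aucs[name] = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(aucs)
```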

3.
Applying machine learning algorithms for assessing the transmission quality in optical networks is associated with substantial challenges. Datasets that could provide training instances tend to be small and heavily imbalanced. This requires applying imbalance compensation techniques when using binary classification algorithms, but it also makes one-class classification, learning only from instances of the majority class, a noteworthy alternative. This work examines the utility of both approaches using a real dataset from a Dense Wavelength Division Multiplexing network operator, gathered through the network control plane. The dataset is indeed very small and contains very few examples of “bad” paths that do not deliver the required level of transmission quality. Two binary classification algorithms, random forest and extreme gradient boosting, are used in combination with two imbalance handling methods, instance weighting and synthetic minority class instance generation. Their predictive performance is compared with that of four one-class classification algorithms: one-class SVM, one-class naive Bayes classifier, isolation forest, and maximum entropy modeling. The one-class approach turns out to be clearly superior, particularly with respect to classification precision, making it possible to obtain more practically useful models.
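The one-class alternative the study favors can be sketched as follows: an isolation forest learns from "good" paths only and then flags outliers. The three features and all numbers are illustrative assumptions, not the operator's dataset.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Train only on "good" paths (majority class); 3 hypothetical QoT features.
good_paths = rng.normal(0.0, 1.0, size=(200, 3))
# New paths to score: 5 normal-looking ones, then 5 far-off ("bad") ones.
new_paths = np.vstack([rng.normal(0.0, 1.0, size=(5, 3)),
                       rng.normal(6.0, 1.0, size=(5, 3))])

clf = IsolationForest(random_state=0).fit(good_paths)
pred = clf.predict(new_paths)   # +1 = looks like training data, -1 = anomaly
print(pred)
```

No "bad" examples are needed at training time, which is exactly why the one-class approach suits such heavily imbalanced datasets.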

4.
In this paper, the optimization of network performance to support the deployment of federated learning (FL) is investigated. In the considered model, each user owns a machine learning (ML) model trained on its own dataset and transmits its ML parameters to a base station (BS), which aggregates them into a global ML model and transmits it back to each user. Due to limited radio frequency (RF) resources, the number of users that can participate in FL is restricted. Meanwhile, uploading and downloading the FL parameters increases communication costs, further reducing the number of participating users. To this end, we propose to introduce visible light communication (VLC) as a supplement to RF and to use compression methods to reduce the resources needed to transmit FL parameters over wireless links, so as to further improve communication efficiency while simultaneously optimizing the wireless network through user selection and resource allocation. This user selection and bandwidth allocation problem is formulated as an optimization problem whose goal is to minimize the training loss of FL. We first use a model compression method to reduce the size of the FL model parameters transmitted over wireless links. Then, the optimization problem is separated into two subproblems. The first is a user selection problem with a given bandwidth allocation, solved by a traversal algorithm. The second is a bandwidth allocation problem with a given user selection, solved by a numerical method. The final user selection and bandwidth allocation are obtained by iteratively compressing the model and solving these two subproblems. Simulation results show that the proposed FL algorithm can improve the accuracy of object recognition by up to 16.7% and increase the number of selected users by up to 68.7%, compared to a conventional FL algorithm using only RF.
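A hedged sketch of the aggregation-plus-compression idea (not the paper's exact compression scheme or optimization loop): each user quantizes its parameters before uplink, and the BS forms a dataset-size-weighted average, FedAvg-style.

```python
import numpy as np

def quantize(w, n_bits=8):
    """Crude uniform quantizer standing in for uplink model compression."""
    lo, hi = w.min(), w.max()
    levels = 2 ** n_bits - 1
    q = np.round((w - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

rng = np.random.default_rng(0)
user_weights = [rng.normal(size=10) for _ in range(4)]   # local ML parameters
n_samples = np.array([100, 50, 200, 150])                # local dataset sizes

uplinked = [quantize(w) for w in user_weights]           # compressed uplink
global_w = np.average(uplinked, axis=0, weights=n_samples)  # BS aggregation
print(global_w.shape)
```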

5.
The accurate prediction of the solar diffuse fraction (DF), sometimes called the diffuse ratio, is an important topic in solar energy research. In the present study, the current state of diffuse irradiance research is discussed, and three robust machine learning (ML) models are examined using a large dataset (almost eight years) of hourly readings from Almeria, Spain. The ML models used herein are a hybrid adaptive network-based fuzzy inference system (ANFIS), a single multi-layer perceptron (MLP), and a hybrid multi-layer perceptron grey wolf optimizer (MLP-GWO). These models were evaluated for their predictive precision using various solar and DF irradiance data from Spain. The results were assessed with frequently used evaluation criteria: the mean absolute error (MAE), mean error (ME), and root mean square error (RMSE). The results showed that the MLP-GWO model, followed by the ANFIS model, provided higher performance in both the training and the testing procedures.
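A minimal sketch of evaluating a regressor for the diffuse fraction with MAE and RMSE, using scikit-learn's plain MLP on synthetic stand-in features (the ANFIS and GWO-hybrid models, and the Almeria dataset, are not reproduced here; all names and numbers below are assumptions):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 3))        # e.g. clearness index, hour, GHI (assumed)
y = 0.6 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.05, 400)  # synthetic DF

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
mae = mean_absolute_error(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}")
```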

6.
Magnetocardiography (MCG) measurement is important for investigating cardiac biological activity. Traditionally, the extremely weak MCG signal has been detected using superconducting quantum interference devices (SQUIDs). As a room-temperature magnetic-field sensor, the optically pumped magnetometer (OPM) has been shown to have sensitivity comparable to that of SQUIDs, making it well suited for biomagnetic measurements. In this paper, a synthetic gradiometer was constructed using two OPMs operating under spin-exchange relaxation-free (SERF) conditions inside a moderate magnetically shielded room (MSR). The magnetic noise of the OPM was measured to be less than 70 fT/Hz^(1/2). With a baseline of 100 mm, noise cancellation of about 30 dB was achieved, and MCG was successfully measured with a signal-to-noise ratio (SNR) of about 37 dB. The synthetic gradiometer was very effective at suppressing residual environmental fields, demonstrating the OPM gradiometer technique for highly cost-effective biomagnetic measurements.
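A toy numerical illustration of the synthetic-gradiometer idea: two channels share a common environmental field, and subtracting the reference channel suppresses the common-mode part while keeping the local "cardiac" signal. All amplitudes and frequencies are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 5000)
background = 50e-12 * np.sin(2 * np.pi * 50 * t)   # common environmental field (T)
signal = 5e-12 * np.sin(2 * np.pi * 1.2 * t)       # "cardiac" signal at one site

ch_signal = signal + background + 1e-13 * rng.normal(size=t.size)
ch_reference = background + 1e-13 * rng.normal(size=t.size)

gradiometer = ch_signal - ch_reference             # common-mode rejection
residual = gradiometer - signal                    # what remains besides the signal
rejection_db = 20 * np.log10(np.std(background) / np.std(residual))
print(f"common-mode rejection ~ {rejection_db:.1f} dB")
```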

7.
Research on neural-network-based restoration of three-dimensional wide-field microscopy images    Cited by: 8 (self-citations: 0, citations by others: 8)
陈华  金伟其  张楠  石俊生  王霞 《光子学报》2006,35(3):473-476
A nonlinear mapping method using a BP neural network for restoring three-dimensional wide-field microscopy images is proposed. The 3D image is converted into 2D images for processing; exploiting the network's learning ability, a mapping between defocus-blurred 2D images and sharp 2D images is established through training, and the slice stack is then restored frame by frame, achieving 3D restoration of the microscopy image. The restored images show good results both visually and in quantitative analysis. Because a small-scale neural network is used, training time is short and the computational cost is low, making real-time restoration possible.

8.
6G – sixth generation – is the latest cellular technology, currently under development for wireless communication systems. In recent years, machine learning (ML) algorithms have been applied widely in various fields such as healthcare, transportation, energy, and autonomous cars. These algorithms have also been used in communication technologies to improve system performance in terms of frequency spectrum usage, latency, and security. With the rapid development of ML techniques, especially deep learning (DL), it is critical to consider security when applying these algorithms. While ML algorithms offer significant advantages for 6G networks, security concerns about artificial intelligence (AI) models have so far been largely ignored by the scientific community. However, security is a vital aspect of AI algorithms because attackers can poison the AI model itself. This paper proposes a mitigation method, based on adversarial training, against adversarial attacks on 6G ML models for millimeter-wave (mmWave) beam prediction. The main idea behind adversarial attacks on ML models is to produce faulty results by manipulating trained DL models for mmWave beam prediction in 6G applications. We also evaluate the performance of the proposed adversarial-learning mitigation method for 6G security in the mmWave beam prediction application under a fast gradient sign method (FGSM) attack. The results show that the mean square error (i.e., the prediction accuracy) of the defended model under attack is very close to that of the undefended model without attack.
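The FGSM attack at the core of this setup can be sketched on a toy linear model (the paper attacks a DL beam-prediction model; the model and numbers below are our own illustrative assumptions):

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0])            # hypothetical trained model weights

def predict(x):
    return w @ x

def fgsm(x, y_true, eps=0.1):
    """Perturb the input along the sign of the loss gradient w.r.t. x."""
    # MSE loss L = (w.x - y)^2  ->  dL/dx = 2 (w.x - y) w
    grad = 2.0 * (predict(x) - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0, 1.0])
y_true = 0.5
x_adv = fgsm(x, y_true)

def loss(x_):
    return (predict(x_) - y_true) ** 2

print(loss(x), loss(x_adv))               # the attack increases the loss
# Adversarial training then adds such (x_adv, y_true) pairs to the training set.
```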

9.
A hybrid analog/digital multiple-input multiple-output (MIMO) system has been proposed to mitigate the challenges of millimeter wave (mmWave) communication. This architecture enables utilizing the large array gain with reasonable power consumption. However, new methods are required for the channel estimation problem of hybrid architectures because there are fewer radio frequency (RF) chains than antenna elements. Leveraging the sparse nature of mmWave channels, compressed sensing (CS)-based channel estimation methods have been proposed, and machine learning (ML)-aided methods have recently been investigated to improve channel estimation performance. Additionally, the Doppler effect must be considered in high-mobility scenarios, so we deal with a time-varying channel model. In this article, we therefore consider time-varying channels for a multi-user mmWave hybrid MIMO system. By proposing a deep neural network (DNN) and defining its inputs and outputs, we introduce a novel algorithm called Deep Learning Assisted Angle Estimation (DLA-AE) to improve the estimation of the angles of departure/arrival (AoDs/AoAs) of the channel paths. In addition, we suggest Linear Phase Interpolation (LPI) to acquire the path gains at the data-transmission instants. Simulation results show that the proposed DLA-AE and LPI methods enhance time-varying channel estimation accuracy with low computational complexity.
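A hedged sketch of linear phase interpolation as we read it: path gains estimated at pilot instants are interpolated linearly in magnitude and (unwrapped) phase for the data-transmission instants. The pilot values below are illustrative assumptions, not the paper's channel model.

```python
import numpy as np

# Complex path gains estimated at two pilot instants (assumed values).
pilot_t = np.array([0.0, 1.0])
pilot_gain = np.array([1.0 * np.exp(1j * 0.2), 0.8 * np.exp(1j * 1.0)])

def lpi(t):
    """Interpolate magnitude and unwrapped phase linearly between pilots."""
    mag = np.interp(t, pilot_t, np.abs(pilot_gain))
    phase = np.interp(t, pilot_t, np.unwrap(np.angle(pilot_gain)))
    return mag * np.exp(1j * phase)

t_data = np.linspace(0, 1, 5)          # data-transmission instants
print(lpi(t_data))
```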

10.
Machine learning (ML)-based segmentation methods are a common technique in medical image processing. Although numerous research groups have investigated ML-based segmentation frameworks, questions remain about performance variability with respect to two key components: the ML algorithm and the intensity normalization. This investigation reveals that the choice of these elements plays a major part in determining segmentation accuracy and generalizability. Our approach evaluates the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to better account for contrast along structure boundaries. Comparing the eight machine-learning algorithms on down-sampled segmentation MR data showed a significant improvement when using ensemble-based ML algorithms (i.e., random forest) or ANN algorithms. Further comparison of these two revealed that the random forest results agreed exceptionally well with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter datasets. The constructed framework obtained good multicenter reliability and was successfully applied to a large multicenter MR dataset (n > 3000); fewer than 10% of automated segmentations were flagged for minimal expert intervention. These results demonstrate the feasibility of using ML-based segmentation tools for processing large amounts of multicenter MR images. Segmentation accuracy profiles differed dramatically according to the choice of ML algorithm and intensity normalization.

11.
The first application of high-energy ion channeling to the atomically clean GaAs(110) surface and to metal-GaAs interfaces is reported. Questions of sample preparation, background correction, and computer simulation are addressed. It is found that the Ga and As atoms at the clean surface are laterally displaced ≲ 0.1 Å from the ideal bulk-like sites. The implications of this result for current LEED models are discussed. Au overlayers deposited at room temperature do not seem to produce lateral displacements of the substrate for coverages below ≈ 5 monolayers (ML). However, ≈ 0.9 ML of the substrate is expanded or contracted upon Au deposition; this process is completed at a coverage of 0.5 ML. No indication of any order in the Au film is found, nor does a significant (≳ 5%) fraction of Au atoms appear to occupy substitutional sites. In contrast, room-temperature deposition of Pd disorders the substrate substantially, without a threshold coverage, even at very small film thicknesses.

12.
Khaleel  Zahraa S.  Mudhafer  A. 《Optical Review》2023,30(4):454-461
Optical Review - Using a combination of the finite element method (FEM) in COMSOL Multiphysics and machine learning (ML)-based classification models, a computational tool has been...

13.
Despite the importance of few-shot learning, the lack of labeled training data in the real world makes it extremely challenging for existing machine learning methods, because such a limited dataset does not represent the data variance well. In this research, we suggest a generative approach using variational autoencoders (VAEs), which can optimize few-shot learning tasks by generating new samples with more intra-class variation on the Labeled Faces in the Wild (LFW) dataset. The purpose of our research is to increase the size of the training dataset using various methods to improve the accuracy and robustness of few-shot face recognition. Specifically, we employ the VAE generator to enlarge the training dataset, including both the base and the novel sets, while utilizing transfer learning as the backend. Based on extensive experiments, we analyze various data augmentation methods to observe how each affects face recognition accuracy. The face generation method based on VAEs with perceptual loss can effectively improve the recognition accuracy to 96.47% using both the base and the novel sets.

14.
In massive multiple-input multiple-output (MIMO) systems, obtaining accurate channel state information (CSI) after radio frequency (RF) chain reduction is challenging because of the high dimensionality. With the fast development of machine learning (ML), it is widely acknowledged that ML is an effective way to deal with channel models that are typically unknown and hard to approximate. In this paper, we use the low-complexity vector approximate message passing (VAMP) algorithm for channel estimation, combined with a deep learning framework for training the soft-threshold shrinkage function. Furthermore, to improve the estimation accuracy for massive MIMO channels, an optimized threshold function is proposed. This function is based on Gaussian mixture (GM) distribution modeling, and the expectation-maximization (EM) algorithm is used to recover the channel information in beamspace. This shrinkage function and the deep neural network extend the vector approximate message passing algorithm into a high-precision channel estimation algorithm. Simulation results validate the effectiveness of the proposed network.
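The soft-threshold shrinkage function at the heart of AMP/VAMP-style estimators can be sketched directly; here the threshold is a fixed scalar for illustration, whereas in the paper it is learned by the deep network.

```python
import numpy as np

def soft_threshold(x, lam):
    """eta(x) = sign(x) * max(|x| - lam, 0): shrinks small (noise-like) entries to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([-2.0, -0.3, 0.1, 1.5])
# Entries -0.3 and 0.1 fall below the threshold 0.5 and are zeroed out,
# while the large entries are shrunk toward zero by 0.5.
print(soft_threshold(x, 0.5))
```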

15.
Equilibrium states of large layered neural networks with differentiable activation function and a single, linear output unit are investigated using the replica formalism. The quenched free energy of a student network with a very large number of hidden units learning a rule of perfectly matching complexity is calculated analytically. The system undergoes a first order phase transition from unspecialized to specialized student configurations at a critical size of the training set. Computer simulations of learning by stochastic gradient descent from a fixed training set demonstrate that the equilibrium results describe quantitatively the plateau states which occur in practical training procedures at sufficiently small but finite learning rates. Received 16 December 1998

16.
17.
Studying the complex quantum dynamics of interacting many-body systems is one of the most challenging areas in modern physics. Here, we use machine learning (ML) models to identify the symmetrized base states of interacting Rydberg atoms of various atom numbers (up to six) and geometric configurations. To obtain the data set for training the ML classifiers, we generate Rydberg excitation probability profiles that simulate experimental data by utilizing Lindblad equations that incorporate laser intensities and phase noise. Then, we classify the data sets using support vector machines (SVMs) and random forest classifiers (RFCs). With these ML models, we achieve high accuracy of up to 100% for data sets containing only a few hundred samples, especially for the closed atom configurations such as the pentagonal (five atoms) and hexagonal (six atoms) systems. The results demonstrate that computationally cost-effective ML models can be used in the identification of Rydberg atom configurations.

18.
The gray-scale ultrasound (US) imaging method is usually used to assess synovitis in rheumatoid arthritis (RA) in clinical practice. This four-grade scoring system depends highly on the sonographer's experience and has relatively low validity compared with quantitative indexes. However, the training of a qualified sonographer is expensive and time-consuming, and few studies have focused on automatic RA grading methods. The purpose of this study is to propose an automatic RA grading method using deep convolutional neural networks (DCNN) to assist clinical assessment. Gray-scale ultrasound images of finger joints are taken as inputs, and the output is the corresponding RA grade. First, we performed auto-localization of the synovium in the RA image and obtained high localization precision. To make up for the lack of a large annotated training dataset, we performed data augmentation to increase the number of training samples. Motivated by transfer learning, we pre-trained GoogLeNet on ImageNet as a feature extractor and then fine-tuned it on our own dataset. The detection results showed an average precision exceeding 90%. In the experiment on grading RA severity, the four-grade classification accuracy exceeded 90% and the binary classification accuracies exceeded 95%. These results demonstrate that our proposed method achieves performance comparable to RA experts in multi-class classification. The promising results of the proposed DCNN-based RA grading method can provide an objective and accurate reference to assist RA diagnosis and the training of sonographers.

19.
Radio frequency machine learning (RFML) can be loosely defined as the field that applies machine learning (ML) and deep learning (DL) techniques to wireless communications. However, traditional RFML generally assumes that the training-set and test-set data are independent and identically distributed, and that only a large amount of labeled data can train a classification model that classifies the test data effectively. In other words, without enough training samples it is impossible to learn an automatic modulation classifier that performs well in varying noise-interference environments. Feature-based transfer learning minimizes the distribution difference between historical modulated-signal data and new data by learning similarity-maximizing feature spaces. Therefore, in this paper, Dynamic Distribution Adaptation (DDA) is adopted to address these challenges. We propose a Tensor Embedding RF Domain Adaptation (TERFDA) approach, which learns the latent subspace of the tensors formed by the time–frequency maps of the signals, using the multi-dimensional domain information of the signals to jointly learn the shared feature subspace of the source and target domains, and then performs DDA in the shared subspace. The experimental results show that, on modulated-signal data, TERFDA requires fewer samples and categories than state-of-the-art DA algorithms and performs better at confronting the varying noise interference between the source and target domains.

20.
Feature extraction is a key processing step in terahertz spectral identification, and dimensionality-reduction methods are commonly used for it. However, when the terahertz spectra of some compounds differ only slightly overall, dimensionality reduction often loses important discriminative features, leading to misclassification. Without dimensionality reduction, on the other hand, traditional machine-learning classifiers handle the high-dimensional raw terahertz spectra poorly. To address this problem, an identification method based on a bidirectional long short-term memory network (BLSTM-RNN) that automatically extracts terahertz spectral features is proposed. As a special kind of recurrent neural network, the BLSTM-RNN's LSTM units effectively overcome the training difficulties caused by the high dimensionality of raw terahertz spectral data, and its bidirectional use of spectral information strengthens the model's ability to automatically extract effective features from complex spectra. Terahertz transmission spectra of 15 compounds in three classes were used as test objects. First, the 0.9–6 THz transmission spectra of Anthraquinone, Benomyl, Carbazole, and the other compounds were normalized using Savitzky–Golay filtering and cubic-spline interpolation; a recurrent network with bidirectional LSTM layers then performed automatic feature extraction on the full spectra, with a Softmax classifier for classification. After experimentally optimizing the network structure and parameters, a prediction model for complex terahertz transmission spectra was obtained and compared with the traditional machine-learning algorithms SVM and KNN and the neural-network algorithms MLP and CNN. The results show that for dataset-1 and dataset-2, transmission-spectrum datasets of five compounds with large differences and with no obvious peak features respectively, the average recognition rates were 100% and 98.51%, higher than those of the other methods. Most importantly, for dataset-3, a dataset of five compounds with extremely similar spectral lines, the average recognition rate was 96.56%, a marked improvement over the other methods; for dataset-4, the union of dataset-1, dataset-2, and dataset-3, the average recognition rate was 98.87%. This verifies that the BLSTM-RNN model can automatically extract effective terahertz spectral features while maintaining prediction accuracy on complex spectra. Regarding training optimizers, Adam outperformed RMSProp, SGD, and AdaGrad, with the fastest convergence of the loss function, and the prediction accuracy on the similar-spectra dataset improved steadily as the number of training iterations increased. The method offers a new identification approach for spectral retrieval in complex terahertz spectral databases.
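The preprocessing step described above (Savitzky–Golay smoothing plus cubic-spline resampling and normalization) can be sketched as follows; the array sizes, grid, and synthetic spectrum are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
freq = np.linspace(0.9, 6.0, 300)                    # THz, assumed measurement grid
spectrum = np.exp(-(freq - 3.0) ** 2) + 0.05 * rng.normal(size=freq.size)

# Savitzky-Golay smoothing, then cubic-spline resampling onto a common grid.
smooth = savgol_filter(spectrum, window_length=21, polyorder=3)
grid = np.linspace(0.9, 6.0, 512)                    # common input length for the RNN
resampled = CubicSpline(freq, smooth)(grid)

# Min-max normalization to [0, 1] before feeding the BLSTM.
normed = (resampled - resampled.min()) / (resampled.max() - resampled.min())
print(normed.shape, float(normed.min()), float(normed.max()))
```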
