1.
2.
Specific emitter identification (SEI) refers to distinguishing emitters using individual features extracted from wireless signals. Current SEI methods have proven accurate on large labeled data sets at a high signal-to-noise ratio (SNR). However, their performance declines dramatically with small samples in strongly noisy environments. To address this issue, we propose a complex self-supervised learning scheme to fully exploit the unlabeled samples, comprising a pretext task that adopts the contrastive learning concept and a downstream task. In the pretext task, we design a data augmentation method optimized for communication signals to serve the contrastive objective. We then embed a complex-valued network in the learning to improve robustness to noise. The proposed scheme generalizes across both small-sample and sufficient-sample cases, over a wide range of 10 to 400 labeled samples per group. The experiments also show promising accuracy and robustness, with recognition results improving by 10–16% at SNRs from 10 to 15.
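As a rough illustration of the contrastive pretext idea only (not the authors' exact scheme or augmentations), a minimal NT-Xent-style loss over paired embeddings of two augmented signal views could be sketched as follows; the embeddings here are synthetic stand-ins:

```python
# Minimal NT-Xent-style contrastive loss over paired embeddings.
# Hypothetical sketch of the pretext-task idea, not the paper's exact scheme.
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) L2-normalized embeddings of two augmented views."""
    z = np.concatenate([z1, z2], axis=0)            # (2N, D)
    sim = z @ z.T / temperature                     # cosine similarities (inputs normalized)
    np.fill_diagonal(sim, -np.inf)                  # exclude self-pairs
    n = z1.shape[0]
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index per row
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

rng = np.random.default_rng(0)
def normalize(x): return x / np.linalg.norm(x, axis=1, keepdims=True)
anchor = normalize(rng.normal(size=(8, 16)))
positive = normalize(anchor + 0.1 * rng.normal(size=(8, 16)))  # stand-in for an augmented view
print(nt_xent_loss(anchor, positive))
```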
3.
Saleh Albelwi, Entropy (Basel, Switzerland), 2022, 24(4)
Although deep learning algorithms have achieved significant progress in a variety of domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) using unlabeled data has emerged as an alternative, as it eliminates manual annotation. To do this, SSL constructs feature representations using pretext tasks that operate without manual annotation, allowing models trained on these tasks to extract useful latent representations that later improve downstream tasks such as object classification and detection. Early SSL methods are based on auxiliary pretext tasks as a way to learn representations using pseudo-labels, i.e., labels created automatically from the dataset's attributes. Contrastive learning has also performed well in learning representations via SSL; to succeed, it pushes positive samples closer together, and negative ones further apart, in the latent space. This paper provides a comprehensive literature review of the top-performing SSL methods using auxiliary pretext and contrastive learning techniques. It details the motivation for this research, the general pipeline of SSL, and the terminology of the field, and provides an examination of pretext tasks and self-supervised methods. It also examines how self-supervised methods compare to supervised ones, and then discusses both further considerations and ongoing challenges faced by SSL.
4.
It is difficult to identify the working conditions of rotary kilns due to the harsh environment inside the kilns. The flame images of the firing zone in the kilns contain much working-condition information, but the flame-image sample size is too small to fully extract the key features. To solve this problem, a method combining transfer learning and an attention mechanism is proposed to extract key features of flame images: a deep residual network is used as the backbone, and a coordinate attention module is introduced to capture position and channel information on the feature-map branch, further screening the extracted flame-image features to improve the extraction ability. At the same time, transfer learning is performed from a network pre-trained on the ImageNet data set, realizing feature transfer and parameter sharing to cope with the training difficulty of a small sample size. Moreover, the Mish activation function is introduced to reduce the loss of effective information. The experimental results show that, compared with traditional methods, the proposed method improves the working-condition recognition accuracy of rotary kilns by about 5%.
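For concreteness, a minimal coordinate-attention block in the spirit described above might look like the sketch below (following the published coordinate-attention design; layer sizes and the reduction ratio are illustrative, not the paper's configuration):

```python
# A minimal coordinate-attention block: pool along height and width separately,
# then produce direction-aware channel attention maps. Sizes are illustrative.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        pool_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        pool_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.conv1(torch.cat([pool_h, pool_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # attention along height
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # attention along width
        return x * a_h * a_w                                       # position-aware reweighting

x = torch.randn(2, 64, 32, 32)
print(CoordinateAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```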
5.
The electrocardiogram (ECG) signal has become a popular biometric modality due to characteristics that make it suitable for developing reliable authentication systems. However, the long signal segment required for recognition is still one of the limitations of existing ECG biometric recognition methods and affects the acceptability of ECG as a biometric modality. This paper investigates how a short segment of an ECG signal can be effectively used for biometric recognition using deep learning techniques. A small convolutional neural network (CNN) is designed to achieve better generalization capability by entropy enhancement of a short segment of a heartbeat signal. Additionally, the paper investigates how various blind and feature-dependent segments of different lengths affect the performance of the recognition system. Experiments were carried out on two databases that include single-session and multisession records. In addition, the performance of the proposed classifier was compared with four well-known CNN models: GoogLeNet, ResNet, MobileNet and EfficientNet. Using a time–frequency domain representation of a short ECG segment around the R-peak, the proposed model achieved an accuracy of 99.90% for PTB, 98.20% for the ECG-ID mixed-session dataset, and 94.18% for the ECG-ID multisession dataset. Using the pretrained ResNet, we obtained 97.28% accuracy for 0.5-second segments around the R-peaks of the ECG-ID multisession dataset, outperforming existing methods. The findings show that a time–frequency domain representation of a short ECG segment is feasible for biometric recognition, achieving better accuracy and improving the acceptability of this modality.
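A sketch of the kind of preprocessing described, turning a 0.5-second window around an R-peak into a time–frequency image for a CNN, is shown below; the sampling rate, peak index, and STFT parameters are illustrative assumptions:

```python
# Sketch: extract a short window around an R-peak and convert it to a
# time-frequency "image". Sampling rate and STFT settings are assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 500                                  # assumed sampling rate (Hz)
ecg = np.random.randn(10 * fs)            # stand-in for a filtered ECG record
r_peak = 5 * fs                           # assumed R-peak sample index
half = int(0.25 * fs)                     # 0.5 s segment centered on the R-peak
segment = ecg[r_peak - half:r_peak + half]

f, t, sxx = spectrogram(segment, fs=fs, nperseg=64, noverlap=48)
tf_image = np.log1p(sxx)                  # log-scaled time-frequency input for a CNN
print(tf_image.shape)
```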
6.
An object recognition algorithm based on piecewise contour smoothing is proposed. The contour is first divided into feature regions and non-feature regions according to curvature; Gaussian functions with different variances are then used to smooth the contour within the different regions; finally, an object recognition algorithm based on affine-invariant moments is applied to the smoothed object contours. The results show that the algorithm not only achieves a better contour smoothing effect, but also significantly improves recognition accuracy under strong noise.
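A minimal sketch of the curvature-adaptive smoothing idea follows; the curvature estimator, the 80th-percentile feature threshold, and the two Gaussian variances are illustrative assumptions, not the paper's settings:

```python
# Sketch: points in low-curvature (non-feature) regions get a wider Gaussian
# than high-curvature feature points. Thresholds and sigmas are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature(x, y):
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x = np.cos(t) + 0.02 * np.random.randn(400)      # noisy closed contour
y = np.sin(t) + 0.02 * np.random.randn(400)

k = curvature(x, y)
feature = k > np.percentile(k, 80)               # top-curvature points = feature region
# Smooth with two different variances, then pick per-point by region.
x_s = np.where(feature, gaussian_filter1d(x, 1.0, mode="wrap"),
                        gaussian_filter1d(x, 4.0, mode="wrap"))
y_s = np.where(feature, gaussian_filter1d(y, 1.0, mode="wrap"),
                        gaussian_filter1d(y, 4.0, mode="wrap"))
print(x_s.shape, y_s.shape)
```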
7.
The analysis and processing of ECG signals is a key approach in the diagnosis of cardiovascular diseases. The main line of work in this area is classification, which is increasingly supported by machine learning-based algorithms. In this work, a deep neural network was developed for the automatic classification of primary ECG signals. The research was carried out on the data contained in the PTB-XL database. Three neural network architectures were proposed: the first based on a convolutional network, the second on SincNet, and the third on a convolutional network with additional entropy-based features. The dataset was divided into training, validation, and test sets in proportions of 70%, 15%, and 15%, respectively. The studies were conducted for 2, 5, and 20 classes of disease entities. The convolutional network with entropy-based features obtained the best classification result; the convolutional network without them obtained a slightly worse result but had the highest computational efficiency, owing to its significantly lower number of neurons.
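As a sketch of one plausible entropy-based feature of the kind mentioned above (the paper's exact entropy measures may differ), a Shannon entropy of the signal's amplitude histogram can be computed as follows:

```python
# Sketch of a simple entropy feature: Shannon entropy of the amplitude
# histogram of an ECG lead. The bin count is an illustrative assumption.
import numpy as np

def shannon_entropy(signal, bins=32):
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                               # drop empty bins
    return -(p * np.log2(p)).sum()

ecg = np.random.randn(5000)                    # stand-in for one ECG lead
print(shannon_entropy(ecg))
```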
8.
Xun Zhang, Lanyan Yang, Bin Zhang, Ying Liu, Dong Jiang, Xiaohai Qin, Mengmeng Hao, Entropy (Basel, Switzerland), 2021, 23(4)
The problem of extracting meaningful data through graph analysis spans a range of different fields, such as social networks, knowledge graphs, citation networks, and the World Wide Web. As increasing amounts of structured data become available, the importance of being able to effectively mine and learn from such data continues to grow. In this paper, we propose the multi-scale aggregation graph neural network based on feature similarity (MAGN), a novel graph neural network defined in the vertex domain. Our model provides a simple and general semi-supervised learning method for graph-structured data in which only a very small part of the data is labeled as the training set. We first construct a similarity matrix by calculating the similarity of original features between all adjacent node pairs, and then generate a set of feature extractors that use the similarity matrix to perform multi-scale feature propagation on graphs. The output of multi-scale feature propagation is finally aggregated by a mean-pooling operation. Our method aims to improve the model's representation ability via multi-scale neighborhood aggregation based on feature similarity. Extensive experimental evaluation on various open benchmarks shows the competitive performance of our method compared to a variety of popular architectures.
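A loose sketch of the core idea, similarity-weighted neighbor aggregation at several propagation scales followed by mean pooling, is given below; the cosine similarity, row normalization, and choice of scales are assumptions, not MAGN's exact formulation:

```python
# Sketch: weight neighbor aggregation by feature similarity and mean-pool
# several propagation scales. Normalization choices are assumptions.
import numpy as np

def cosine_similarity_matrix(x):
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    return xn @ xn.T

def multi_scale_propagate(adj, x, scales=(1, 2, 3)):
    s = cosine_similarity_matrix(x) * adj          # similarity only on existing edges
    s = s / np.maximum(s.sum(axis=1, keepdims=True), 1e-12)  # row-normalize
    outs, h = [], x
    for k in range(max(scales)):
        h = s @ h                                   # one more hop of propagation
        if k + 1 in scales:
            outs.append(h)
    return np.mean(outs, axis=0)                    # mean-pool across scales

adj = (np.random.rand(6, 6) > 0.5).astype(float)
adj = np.maximum(adj, adj.T); np.fill_diagonal(adj, 1)  # symmetric, self-loops
x = np.abs(np.random.randn(6, 4))                   # non-negative stand-in features
print(multi_scale_propagate(adj, x).shape)          # (6, 4)
```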
9.
Image segmentation plays a central role in a broad range of applications, such as medical image analysis, autonomous vehicles, video surveillance and augmented reality. Portrait segmentation, a subset of semantic image segmentation, is widely used as a preprocessing step in applications such as security systems, entertainment applications, and video conferences. A substantial number of deep learning-based portrait segmentation approaches have been developed, since the performance and accuracy of semantic image segmentation have improved significantly with the recent introduction of deep learning technology. However, these approaches are limited to a single portrait segmentation model. In this paper, we propose a novel ensemble approach that combines multiple heterogeneous deep learning-based portrait segmentation models to improve segmentation performance. Two-model and three-model ensembles were evaluated using both a simple soft voting method and a weighted soft voting method. The Intersection over Union (IoU) metric, the IoU standard deviation, and the false prediction rate were used to evaluate performance, and cost efficiency was calculated to analyze the efficiency of segmentation. The experimental results show that the proposed ensemble approach achieves higher accuracy and lower error than single deep learning-based portrait segmentation models. They also show that although ensembles of deep learning models typically increase memory and computing requirements, an ensemble can perform more efficiently than a single model, achieving higher accuracy with less memory and computing power.
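The weighted soft-voting step can be illustrated directly; in the sketch below the per-model probability maps and weights are synthetic stand-ins:

```python
# Sketch of weighted soft voting over per-pixel foreground probabilities
# from several segmentation models. Weights are illustrative assumptions.
import numpy as np

def weighted_soft_vote(prob_maps, weights):
    """prob_maps: list of (H, W) foreground probabilities; returns binary mask."""
    w = np.asarray(weights, dtype=float)
    fused = np.tensordot(w / w.sum(), np.stack(prob_maps), axes=1)
    return (fused >= 0.5).astype(np.uint8)

h, w = 4, 4
maps = [np.random.rand(h, w) for _ in range(3)]     # stand-ins for three models
mask = weighted_soft_vote(maps, weights=[0.5, 0.3, 0.2])
print(mask)
```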
10.
Xingyu Li, Jitendra Jonnagaddala, Min Cen, Hong Zhang, Steven Xu, Entropy (Basel, Switzerland), 2022, 24(11)
Most deep learning algorithms that use Hematoxylin- and Eosin-stained whole slide images (WSIs) to predict cancer survival incorporate image patches with either the highest scores or a combination of the highest and lowest scores. In this study, we hypothesize that incorporating holistic patch information can predict colorectal cancer (CRC) survival more accurately. We therefore developed a distribution-based multiple-instance survival learning algorithm (DeepDisMISL) to validate this hypothesis on two large international CRC WSI datasets, MCO CRC and TCGA COAD-READ. Our results suggest that combining patches scored at percentile locations of the score distribution with the highest- and lowest-scored patches drastically improves the performance of CRC survival prediction. Including multiple neighboring instances around each selected distribution location (e.g., percentile) can further improve the prediction. DeepDisMISL demonstrated superior predictive ability compared to other recently published, state-of-the-art algorithms. Furthermore, DeepDisMISL is interpretable and can assist clinicians in understanding the relationship between cancer morphological phenotypes and a patient's cancer survival risk.
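A hypothetical sketch of distribution-based patch selection, choosing patches at fixed percentiles of the score distribution plus neighboring instances around each location, follows; the percentiles and neighborhood size are illustrative, not DeepDisMISL's exact settings:

```python
# Sketch: select patch indices at fixed percentiles of the per-patch score
# distribution, plus a few neighbors around each. Settings are assumptions.
import numpy as np

def percentile_patch_indices(scores, percentiles=(0, 25, 50, 75, 100), neighbors=1):
    order = np.argsort(scores)                       # patch indices sorted by score
    n = len(scores)
    picked = set()
    for p in percentiles:
        center = int(round(p / 100 * (n - 1)))       # rank at this percentile
        for k in range(center - neighbors, center + neighbors + 1):
            if 0 <= k < n:
                picked.add(order[k])
    return sorted(picked)

scores = np.random.rand(1000)                        # stand-in per-patch risk scores
print(percentile_patch_indices(scores))
```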
11.
A Particle Filter Visual Tracking Algorithm Based on Dynamic Target Modeling
A particle filter visual tracking algorithm is proposed that dynamically builds the target model according to scene changes. The method first selects simple and complementary features to describe the current image, and uniformly models these features with histograms. Then, within the particle filter framework, the Bhattacharyya measure is used to evaluate how well each target feature can be distinguished from the background, and the confidence assigned to each feature is adjusted dynamically; the noise parameters of each feature's likelihood function are estimated and updated online so that the likelihood functions share a unified measurement scale. Analysis and experiments show that the algorithm outperforms particle filter visual tracking methods that rely on multi-feature fusion alone, and is more robust to camera motion, clutter, occlusion, and changes in the apparent size of the target.
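The feature-weighting idea can be illustrated with the Bhattacharyya coefficient between a feature's target and background histograms, as in the minimal sketch below (histograms are synthetic stand-ins):

```python
# Sketch: the Bhattacharyya coefficient between a feature's target and
# background histograms measures how separable the feature is.
import numpy as np

def bhattacharyya_coefficient(p, q):
    p = p / p.sum()
    q = q / q.sum()
    return np.sqrt(p * q).sum()                      # 1 = identical, 0 = disjoint

target_hist = np.random.rand(16)                     # stand-in feature histograms
background_hist = np.random.rand(16)
bc = bhattacharyya_coefficient(target_hist, background_hist)
discriminability = 1.0 - bc                          # more distinct -> higher confidence
print(bc, discriminability)
```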
12.
A 3D point-cloud object recognition algorithm based on local descriptors is proposed. The algorithm first computes neighborhood and normal-vector information for the point cloud, and derives shape index values from the neighborhoods. Feature points are extracted according to the shape index, and the surface is segmented around each feature point based on Euclidean distance and the angle between normal vectors. Each segmented surface patch is divided into multiple concentric circles of equal Euclidean spacing, projected onto the tangent plane of the feature point, and sampled at equal angular intervals; from the curves of normal-vector and geodesic-distance variation of the sampled points relative to the feature point, a 2D description of the surface patch is built, reducing the 3D recognition problem to 2D. A model database is built with this algorithm; given an object, candidate recognition results are obtained by matching its surface descriptions against the database, and the final result is refined with the iterative closest point algorithm. Finally, extensive experiments verify the effectiveness of the algorithm, and a comparative analysis of its computational complexity and running time demonstrates its efficiency.
13.
Jiding Zhai, Chunxiao Mu, Yongchao Hou, Jianping Wang, Yingjie Wang, Haokun Chi, Entropy (Basel, Switzerland), 2022, 24(10)
Marine oil spills due to ship collisions or operational errors have caused tremendous damage to the marine environment. In order to better monitor the marine environment on a daily basis and reduce the damage and harm caused by oil pollution, we use marine image information acquired by synthetic aperture radar (SAR) and combine it with image segmentation techniques in deep learning to monitor oil spills. However, it is a significant challenge to accurately distinguish oil spill areas in original SAR images, which are characterized by high noise, blurred boundaries, and uneven intensity. Hence, we propose a dual attention encoding network (DAENet) using an encoder–decoder U-shaped architecture for identifying oil spill areas. In the encoding phase, we use the dual attention module to adaptively integrate local features with their global dependencies, thus improving the fusion feature maps of different scales. Moreover, a gradient profile (GP) loss function is used to improve the recognition accuracy of the oil spill areas' boundary lines in the DAENet. We used the Deep-SAR oil spill (SOS) dataset with manual annotation for training, testing, and evaluation of the network, and we established a dataset containing original data from GaoFen-3 for network testing and performance evaluation. The results show that DAENet has the highest mIoU of 86.1% and the highest F1-score of 90.2% in the SOS dataset, and it has the highest mIoU of 92.3% and the highest F1-score of 95.1% in the GaoFen-3 dataset. The method proposed in this paper not only improves the detection and identification accuracy of the original SOS dataset, but also provides a more feasible and effective method for marine oil spill monitoring.
14.
Identification of Cucumber Powdery Mildew Based on Visible-Light Spectral Analysis
Powdery mildew is one of the most common cucumber diseases; it spreads extremely quickly and can severely reduce cucumber yields, so rapid and accurate identification is of great significance for its diagnosis and control. Visible-light spectroscopy was combined with principal component analysis (PCA) and a support vector machine (SVM) to identify cucumber powdery mildew rapidly. A spore suspension of the powdery mildew pathogen was prepared and inoculated onto cucumber leaves in a research greenhouse to induce the disease. After the disease had broken out over a sufficient area, leaf spectra were collected with an Ocean Optics USB2000+ portable spectrometer using the five-point sampling method: at each of five inspection points, two cucumber plants were surveyed, four infected leaves were selected per plant, and five infected regions were randomly sampled per leaf, for a total of 200 infected-leaf spectra; 200 healthy-leaf samples were collected as controls. The spectrometer was calibrated in the Ocean Optics SpectraSuite software by acquiring a diffuse-reflectance white reference and the dark current, and the integration time, number of scans, and smoothing parameters were adjusted to smooth the spectral curves and effectively suppress spectral noise. After discarding the noisy bands at both ends, the visible band from 450 to 780 nm was retained for study. PCA was applied to reduce the dimensionality of the high-dimensional spectral data (947 dimensions) in this band; based on the cumulative contribution rate, the first five principal components were selected as inputs to the classification model, with the powdery mildew/healthy judgment as output. An SVM classifier was trained to discriminate powdery mildew from healthy leaves: 120 samples were randomly selected as the training set for model construction and the remaining 80 served as the test set, and different kernel functions were compared to obtain the optimal model. Classification accuracy was evaluated with a confusion matrix: with the radial basis function kernel, the model achieved its highest recognition accuracy, 100% for healthy leaves and 96.25% for powdery mildew leaves, for an overall accuracy of 98.125%. The results show that visible-light spectral information combined with PCA and SVM can rapidly and accurately identify cucumber powdery mildew, providing a method and reference for cucumber disease diagnosis.
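The abstract's pipeline maps naturally onto a short script; the sketch below mirrors the described setup (PCA to five components, an RBF-kernel SVM, and a 120/80 split) but uses synthetic stand-ins for the reflectance spectra:

```python
# Sketch of the described pipeline: PCA down to five components feeding an
# RBF-kernel SVM with a 120/80 split. Data are synthetic stand-ins for the
# 450-780 nm reflectance spectra (947 bands, as in the paper).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 947))                 # 200 spectra, 947 bands
y = np.repeat([0, 1], 100)                      # 0 = healthy, 1 = powdery mildew
X[y == 1] += 0.2                                # synthetic class difference

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=120, stratify=y, random_state=0)
model = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))
```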
15.
In recent decades, emotion recognition has received considerable attention. As enthusiasm has shifted toward physiological patterns, a wide range of elaborate physiological emotion-data features have emerged and been combined with various classification models to detect emotional states. To circumvent the labor of manually designing features, we propose to acquire affective and robust representations automatically through a Stacked Denoising Autoencoder (SDA) architecture with unsupervised pre-training, followed by supervised fine-tuning. In this paper, we compare the performance of different features and models through three binary classification tasks based on the Valence-Arousal-Dominance (VAD) affect model. Decision fusion and feature fusion of electroencephalogram (EEG) and peripheral signals are performed on hand-engineered features; data-level fusion is performed for the deep learning methods. It turns out that the fused data perform better than either modality alone. To take advantage of deep learning algorithms, we augment the original data and feed it directly into our training model. We use two deep architectures and another generative stacked semi-supervised architecture as references for comparison to test the method's practical effects. The results reveal that our scheme slightly outperforms the other three deep feature extractors and surpasses the state of the art for hand-engineered features.
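A minimal single denoising-autoencoder layer, the building block that is stacked and later fine-tuned, might look like the following sketch; layer sizes and the noise level are illustrative assumptions:

```python
# Minimal denoising-autoencoder layer: corrupt the input, reconstruct the
# clean version. Sizes and noise level are illustrative assumptions.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, dim_in=128, dim_hidden=64, noise=0.1):
        super().__init__()
        self.noise = noise
        self.enc = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU())
        self.dec = nn.Linear(dim_hidden, dim_in)

    def forward(self, x):
        corrupted = x + self.noise * torch.randn_like(x)   # corrupt, then reconstruct
        return self.dec(self.enc(corrupted))

ae = DenoisingAE()
x = torch.randn(32, 128)                                   # stand-in feature vectors
loss = nn.functional.mse_loss(ae(x), x)                    # reconstruct the clean input
loss.backward()
print(float(loss))
```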
16.
David Alaminos, Fernando Aguilar-Vijande, José Ramón Sánchez-Serrano, Entropy (Basel, Switzerland), 2021, 23(1)
Currency crises have been analyzed and modeled over the last few decades. These crises develop mainly from balance-of-payments crises, and in many cases they lead to speculative attacks against the price of the currency. Despite their popularity, these models currently show low estimation precision. In the present study, estimates are made with first- and second-generation speculative attack models using neural network methods. The results conclude that the Quantum-Inspired Neural Network and Deep Neural Decision Trees methodologies are the most accurate, with results around 90% accuracy. These results exceed the estimates made with Ordinary Least Squares, the usual estimation method for speculative attack models. In addition, the time required for estimation is lower for the neural network methods than for Ordinary Least Squares. These results can be of great importance for public and financial institutions when anticipating speculative pressures on currencies whose prices are in crisis in the markets.
17.
A network for detecting an approaching object was proposed and fabricated based on the transient response of the descending contralateral movement detector (DCMD) found in the locust brain. The proposed network was constructed from simple analog circuits. Experimental results from a test chip fabricated in a 1.2 μm complementary metal-oxide-semiconductor (CMOS) process, together with results from a simulation program with integrated circuit emphasis (SPICE), showed that the proposed network detects an approach by generating a peak current just before collision; the peak current allows the approach velocity and direction to be determined without collision. The proposed network could be extended to two-dimensional arrays for three-dimensional motion detection.
18.
Predicting stock trends is a major challenge. Accidental factors often lead to short-term sharp fluctuations in stock markets that deviate from the underlying trend, and such short-term fluctuations in stock prices are noisy, which hinders trend prediction. We therefore applied discrete wavelet transform (DWT)-based denoising to the stock data; denoising helped eliminate the influence of short-term random events on the continuous trend of a stock, and the denoised data showed more stable trend characteristics and smoothness. The extreme learning machine (ELM) is an effective training algorithm for fully connected single-hidden-layer feedforward neural networks (SLFNs); it converges quickly, yields a unique solution, and does not get trapped in a local minimum. This paper therefore proposed combining ELM with DWT-based denoising to predict stock trends. The proposed method was used to predict the trends of 400 stocks in China. The prediction results are good evidence of the efficacy of DWT-based denoising for stock trends and show an excellent performance compared to 12 machine learning algorithms (e.g., the recurrent neural network (RNN) and long short-term memory (LSTM)).
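Both stages are simple enough to sketch: soft-threshold DWT denoising followed by an ELM trained in closed form. The wavelet, threshold rule, window length, and hidden-layer size below are illustrative assumptions, not the paper's settings:

```python
# Sketch of the two stages: soft-threshold DWT denoising, then an extreme
# learning machine (random hidden layer + least-squares output weights).
import numpy as np
import pywt

def dwt_denoise(x, wavelet="db4", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate (assumption)
    thr = sigma * np.sqrt(2 * np.log(len(x)))               # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

class ELM:
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random, fixed
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ y                   # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

prices = np.cumsum(np.random.randn(512))                    # stand-in price series
smooth = dwt_denoise(prices)
X = np.lib.stride_tricks.sliding_window_view(smooth[:-1], 16)  # 16-step windows
y = smooth[16:]                                             # next-step target
print(ELM().fit(X, y).predict(X[-1:]))
```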
19.
The Coronavirus disease 2019 (COVID-19) has become one of the threats to the world. Computed tomography (CT) is an informative tool for the diagnosis of COVID-19 patients. Many deep learning approaches on CT images have been proposed and have brought promising performance. However, due to the high complexity and non-transparency of deep models, explaining the diagnosis process is challenging, making it hard to evaluate whether such approaches are reliable. In this paper, we propose a visual interpretation architecture for explaining deep learning models and apply the architecture to COVID-19 diagnosis. Our architecture provides a comprehensive interpretation of the deep model from different perspectives, including the training trends, diagnostic performance, learned features, feature extractors, hidden layers, and the support regions for the diagnostic decision. With the interpretation architecture, researchers can compare and explain classification performance, gain insight into what the deep model learned from images, and obtain support for diagnostic decisions. Our deep model achieves diagnostic results of 94.75%, 93.22%, 96.69%, 97.27%, and 91.88% in the criteria of accuracy, sensitivity, specificity, positive predictive value, and negative predictive value, which are 8.30%, 4.32%, 13.33%, 10.25%, and 6.19% higher than those of the compared traditional methods. The visualized features in 2-D and 3-D spaces provide the reasons for the superiority of our deep model. Our interpretation architecture allows researchers to understand more about how and why deep models work, can serve as an interpretation solution for any deep learning model based on a convolutional neural network, and can help deep learning methods take a step forward in the clinical COVID-19 diagnosis field.
20.
Caries prevention is essential for oral hygiene, and a fully automated procedure that reduces human labor and human error is needed. This paper presents a fully automated method that segments tooth regions of interest from a panoramic radiograph to diagnose caries. A patient's panoramic oral radiograph, which can be taken at any dental facility, is first segmented into several segments of individual teeth. Informative features are then extracted from the teeth using a pre-trained deep learning network such as VGG, ResNet, or Xception, and each extracted feature set is learned by a classification model such as a random forest, k-nearest neighbors, or a support vector machine. The prediction of each classifier is treated as an individual opinion contributing to the final diagnosis, which is decided by majority voting. The proposed method achieved an accuracy of 93.58%, a sensitivity of 93.91%, and a specificity of 93.33%, making it promising for widespread implementation. It outperforms existing methods in terms of reliability and can facilitate dental diagnosis while reducing the need for tedious procedures.
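The decision stage can be sketched with off-the-shelf components; in the example below the deep-feature vectors are synthetic stand-ins for the pre-trained-CNN embeddings of segmented teeth:

```python
# Sketch of the decision stage: several classifiers trained on deep features,
# combined by majority (hard) voting. Features are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 512))                  # stand-in deep-feature vectors
y = rng.integers(0, 2, size=300)                 # 0 = healthy, 1 = caries
X[y == 1] += 0.3                                 # synthetic class signal

vote = VotingClassifier(
    estimators=[("rf", RandomForestClassifier()),
                ("knn", KNeighborsClassifier()),
                ("svm", SVC())],
    voting="hard",                               # each model casts one vote
)
vote.fit(X[:240], y[:240])
print(vote.score(X[240:], y[240:]))
```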