Similar Literature
20 similar documents found (search time: 31 ms)
1.
2.
Neurofeedback training (NFT) has shown promising results in recent years as a tool to address the effects of age-related cognitive decline in the elderly. Since previous studies have linked reduced complexity of the electroencephalography (EEG) signal to the process of cognitive decline, we propose the use of non-linear methods to characterise the changes in EEG complexity induced by NFT. In this study, we analyse the pre- and post-training EEG from 11 elderly subjects who performed an NFT based on motor imagery (MI–NFT). Spectral changes were studied using the relative power (RP) of the classical frequency bands (delta, theta, alpha, and beta), whilst multiscale entropy (MSE) was applied to assess training-induced changes in EEG complexity. Furthermore, we analysed the subjects’ scores from Luria tests performed before and after MI–NFT. We found that MI–NFT induced a power shift towards faster frequencies, as well as an increase of EEG complexity in all channels except C3. These improvements were most evident in the frontal channels. Moreover, results from the cognitive tests showed significant enhancement of intellectual and memory functions. Therefore, our findings suggest the usefulness of MI–NFT for improving cognitive functions in the elderly and encourage future studies to use MSE as a metric to characterise EEG changes induced by MI–NFT.
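The complexity analysis above relies on multiscale entropy. As a concrete illustration, the sketch below computes MSE of a signal in plain NumPy: the signal is coarse-grained at successive scales and sample entropy is taken at each scale. The embedding dimension m = 2 and tolerance r = 0.15·SD are common defaults assumed here, not parameters reported in the abstract.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal (m: embedding dimension, r: tolerance)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)
    n = len(x)

    def count_matches(length):
        # Build embedding vectors and count pairs within tolerance r (Chebyshev distance)
        emb = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(emb)):
            dist = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=10, m=2, r=None):
    """Coarse-grain the signal at scales 1..max_scale and compute SampEn at each scale."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)            # tolerance fixed on the original signal
    mse = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)  # non-overlapping averages
        mse.append(sample_entropy(coarse, m=m, r=r))
    return np.array(mse)

# Example: MSE of a 10-second EEG-like signal sampled at 256 Hz (toy data)
eeg = np.random.randn(2560)
print(multiscale_entropy(eeg, max_scale=10))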

3.
Brain–computer interface (BCI) technology allows people with disabilities to communicate with the physical environment. One of the most promising signals is the non-invasive electroencephalogram (EEG). However, due to the non-stationary nature of EEG, a subject’s signal may change over time, which poses a challenge for models that must work across time. Recently, domain adaptive learning (DAL) has shown superior performance in various classification tasks. In this paper, we propose a regularized reproducing kernel Hilbert space (RKHS) subspace learning algorithm with a k-nearest neighbors (KNN) classifier for the task of motor imagery signal classification. First, we reformulate the framework of RKHS subspace learning with a rigorous mathematical derivation. Second, since the commonly used maximum mean discrepancy (MMD) criterion measures the discrepancy between distributions using only their means and ignores local distributional information, a regularization term based on source-domain linear discriminant analysis (SLDA) is proposed for the first time; it reduces the variance within similar data and increases the variance between dissimilar data so as to optimize the distribution of the source-domain data. Finally, the RKHS subspace framework is constructed sparsely, considering the sensitivity of BCI data. We first test the proposed algorithm on four standard datasets, and the experimental results show that the baseline algorithms improve their average accuracy by 2–9% after adding SLDA. In the motor imagery classification experiments, the average accuracy of our algorithm is 3% higher than that of the other algorithms, demonstrating the adaptability and effectiveness of the proposed algorithm.
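For reference, the MMD criterion that the SLDA regulariser is meant to complement compares two samples only through kernel mean embeddings. The following is a minimal NumPy sketch of the biased empirical estimate of squared MMD with an RBF kernel; the median-heuristic bandwidth and the toy data are assumptions, not details from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Pairwise RBF kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2_rbf(Xs, Xt, gamma=None):
    """Biased empirical estimate of squared MMD between source and target samples."""
    if gamma is None:
        # Median heuristic for the kernel bandwidth (an assumption, not from the paper)
        Z = np.vstack([Xs, Xt])
        d2 = ((Z[:, None, :] - Z[None, :, :])**2).sum(-1)
        gamma = 1.0 / np.median(d2[d2 > 0])
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()

# Toy example: two sessions of 40 trials with 8 features each
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (40, 8))     # "source" session
Xt = rng.normal(0.5, 1.2, (40, 8))     # "target" session with a distribution shift
print(mmd2_rbf(Xs, Xt))
```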

4.
Users of social networks have a variety of social statuses and roles. For example, the users of Weibo include celebrities, government officials, and social organizations, and these same users may be senior managers, middle managers, or workers in companies. Previous studies on this topic have mainly focused on using the categorical, textual, and topological data of a social network to predict users’ social statuses and roles. However, this cannot fully reflect the overall characteristics of users’ social statuses and roles in a social network. In this paper, we consider which social network structures reflect users’ social statuses and roles, since social networks are designed to connect people. Taking an Enron email dataset as an example, we analyzed a preprocessing mechanism for social network datasets that can extract users’ dynamic behavior features. We further designed a novel social network representation learning algorithm to infer users’ social statuses and roles through the use of an attention and gate mechanism on users’ neighbors. Extensive experimental results on four publicly available datasets indicate that our solution achieves an average accuracy improvement of 2% over GraphSAGE-Mean, the best applicable inductive representation learning method.
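To make the aggregation idea concrete, the sketch below shows one attention-and-gate step over a user’s neighbour embeddings in NumPy: attention weights produce a neighbour summary, and a sigmoid gate mixes it with the user’s own embedding. The scoring and gating functions are illustrative assumptions, not the paper’s actual architecture.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend_and_gate(h_u, h_neighbors, W_att, W_gate):
    """Aggregate neighbour embeddings with attention, then gate against the user's embedding.

    h_u:           (d,)   embedding of the target user
    h_neighbors:   (k, d) embeddings of k neighbours
    W_att, W_gate: (d, d) learnable matrices (random here for illustration)
    """
    # Attention scores: compatibility between the user and each neighbour
    scores = h_neighbors @ (W_att @ h_u)          # shape (k,)
    alpha = softmax(scores)                       # attention weights over neighbours
    agg = alpha @ h_neighbors                     # weighted neighbour summary, shape (d,)

    # Gate decides how much neighbour information to mix into the user's representation
    gate = 1.0 / (1.0 + np.exp(-(W_gate @ h_u)))  # element-wise sigmoid gate, shape (d,)
    return gate * h_u + (1.0 - gate) * agg

rng = np.random.default_rng(1)
d, k = 16, 5
h_u = rng.normal(size=d)
h_nb = rng.normal(size=(k, d))
W_att, W_gate = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1
print(attend_and_gate(h_u, h_nb, W_att, W_gate).shape)  # (16,)
```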

5.
The prevalence of neurodegenerative diseases (NDD) has grown rapidly in recent years, and NDD screening has received much attention. NDDs can cause gait abnormalities, so screening for NDDs using gait signals is feasible. The aim of this study is to develop an NDD classification algorithm based on gait force (GF) using multiscale sample entropy (MSE) and machine learning models. The Physionet NDD gait database is used to validate the proposed algorithm. In the preprocessing stage, new signals are generated by taking the first and second derivatives of the GF signal, and the signals are divided into various time windows (10/20/30/60 s). In feature extraction, the GF signal is used to calculate statistical and MSE values. Owing to the imbalanced nature of the Physionet NDD gait database, the synthetic minority oversampling technique (SMOTE) is used to rebalance the data of each class. Support vector machine (SVM) and k-nearest neighbors (KNN) are used as the classifiers. The best classification accuracies for healthy controls (HC) vs. Parkinson’s disease (PD), HC vs. Huntington’s disease (HD), HC vs. amyotrophic lateral sclerosis (ALS), PD vs. HD, PD vs. ALS, HD vs. ALS, and HC vs. PD vs. HD vs. ALS were 99.90%, 99.80%, 100%, 99.75%, 99.90%, 99.55%, and 99.68%, respectively, under the 10-s time window with KNN. This study successfully developed an NDD gait classification algorithm based on MSE and machine learning classifiers.
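The rebalance-then-classify step above can be sketched in a few lines with imbalanced-learn and scikit-learn. The features, class sizes, and split below are synthetic placeholders standing in for the per-window gait-force features; only the SMOTE + KNN pattern mirrors the abstract.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-window gait-force features (e.g., statistics + MSE values)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 12)),    # majority class (e.g., HC windows)
               rng.normal(1, 1, (40, 12))])    # minority class (e.g., ALS windows)
y = np.array([0] * 200 + [1] * 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Rebalance the training set only, so the test set keeps its natural class ratio
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```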

6.
Malware detection is in a coevolutionary arms race where attackers and defenders constantly seek advantage. This arms race is asymmetric: detection is harder and more expensive than evasion. White hats must be conservative to avoid false positives when searching for malicious behaviour, so most of the time black hats need only make incremental changes to evade them. On occasion, white hats make a disruptive move and find a new technique that forces black hats to work harder; examples include system calls, signatures and machine learning. We seek to redress this imbalance. We present a method, called Hothouse, that combines simulation and search to accelerate the white hat’s ability to counter the black hat’s incremental moves, thereby forcing black hats to perform disruptive moves more often. To realise Hothouse, we evolve EEE, an entropy-based polymorphic packer for Windows executables. Playing the role of a black hat, EEE uses evolutionary computation to disrupt the creation of malware signatures. We enter EEE into the detection arms race with VirusTotal, the most prominent cloud service for running anti-virus tools on software. During our 6-month study, we continually improved EEE in response to VirusTotal, eventually learning a packer that produces packed malware whose median detection rate drops from an initial 51.8% to 19.6%. We report both how well VirusTotal learns to detect EEE-packed binaries and how well VirusTotal forgets in order to reduce false positives. VirusTotal’s tools learn and forget fast, within about 3 days. We also show where VirusTotal focuses its detection efforts by analysing EEE’s variants.
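Since EEE is described as an entropy-based packer, a useful reference point is byte-level Shannon entropy, which packed or encrypted payloads drive toward 8 bits/byte and which both detectors and packers reason about. The sketch below computes it for an arbitrary byte string; it is a generic illustration, not the paper’s fitness function.

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string in bits per byte (range 0.0-8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Plain text is low-entropy; compressed/encrypted (packed) payloads approach 8 bits/byte
print(byte_entropy(b"hello hello hello hello"))   # low
print(byte_entropy(os.urandom(4096)))             # close to 8.0
```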

7.
In most existing multi-task learning (MTL) models, the tasks’ public information is learned by sharing parameters across hidden layers, as in hard sharing, soft sharing, and hierarchical sharing. One promising approach is to introduce model pruning into this information learning, as in sparse sharing, which is regarded as outstanding at knowledge transfer. However, the above methods perform poorly on conflicting tasks, learning the tasks’ private information inadequately or suffering from negative transfer. In this paper, we propose a multi-task learning model (Pruning-Based Feature Sharing, PBFS) that merges a soft parameter-sharing structure with model pruning and adds a prunable shared network among the different task-specific subnets. In this way, each task can select parameters in the shared subnet according to its requirements. Experiments are conducted on three public benchmark datasets and one synthetic dataset, and the impact of the subnets’ sparsity and the tasks’ correlations on model performance is analyzed. The results show that the proposed model’s information-sharing strategy is helpful for transferring knowledge across tasks and superior to several comparison models.
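A minimal NumPy sketch of the underlying idea of a prunable shared network follows: each task keeps only a fraction of the shared layer’s weights, selected here by magnitude, and therefore sees its own sparse subnetwork. The magnitude criterion and keep ratios are illustrative assumptions, not PBFS’s actual selection rule.

```python
import numpy as np

def task_mask(shared_W, keep_ratio):
    """Binary mask keeping the top `keep_ratio` fraction of weights by magnitude."""
    k = int(keep_ratio * shared_W.size)
    threshold = np.sort(np.abs(shared_W).ravel())[-k]     # k-th largest magnitude
    return (np.abs(shared_W) >= threshold).astype(shared_W.dtype)

rng = np.random.default_rng(0)
shared_W = rng.normal(size=(64, 64))          # weights of one shared layer

# Each task selects its own sparse subnetwork of the shared layer
mask_task_a = task_mask(shared_W, keep_ratio=0.3)
mask_task_b = task_mask(shared_W, keep_ratio=0.5)

x = rng.normal(size=64)
out_a = (shared_W * mask_task_a) @ x          # task A's view of the shared layer
out_b = (shared_W * mask_task_b) @ x          # task B's view of the shared layer
print(mask_task_a.mean(), mask_task_b.mean()) # roughly 0.3 and 0.5 of weights kept
```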

8.
Depression is a public health issue that severely affects one’s well-being and can have negative social and economic effects on society. To raise awareness of these problems, this research aims to determine whether the long-lasting effects of depression can be detected from electroencephalographic (EEG) signals. The article contains an accuracy comparison of SVM, LDA, NB, kNN, and D3 binary classifiers, which were trained using linear (relative band power, alpha power variability, spectral asymmetry index) and nonlinear (Higuchi fractal dimension, Lempel–Ziv complexity, detrended fluctuation analysis) EEG features. The age- and gender-matched dataset consisted of 10 healthy subjects and 10 subjects diagnosed with depression at some point in their lifetime. Most of the proposed feature selection and classifier combinations achieved accuracy in the range of 80% to 95%, and all the models were evaluated using 10-fold cross-validation. The results showed that the aforementioned EEG features used in classifying ongoing depression also work for classifying the long-lasting effects of depression.
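As an illustration of one of the linear features listed (relative band power) and of the 10-fold evaluation, the sketch below estimates band powers from Welch’s PSD with SciPy and cross-validates a linear SVM with scikit-learn. The sampling rate, band edges, epoch length and labels are placeholder assumptions, not details of the study’s dataset.

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

FS = 256  # sampling rate in Hz (an assumption)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def relative_band_power(epoch):
    """Relative power of the classical bands for a 1-D EEG epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS * 2)
    broadband = psd[(freqs >= 1) & (freqs <= 30)].sum()
    return [psd[(freqs >= lo) & (freqs < hi)].sum() / broadband for lo, hi in BANDS.values()]

# Synthetic stand-in: 20 subjects, one 30-s epoch each; 0 = control, 1 = depression history
rng = np.random.default_rng(0)
epochs = rng.normal(size=(20, FS * 30))
X = np.array([relative_band_power(e) for e in epochs])
y = np.array([0] * 10 + [1] * 10)

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=10)  # 10-fold CV as in the study
print("10-fold accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```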

9.
A hierarchical learning control framework (HLF) has been validated on two affordable laboratory control systems: an active temperature control system (ATCS) and an electrical rheostatic braking system (EBS). The proposed HLF is data-driven and model-free, while being applicable to general control tracking tasks, which are omnipresent. At the lowermost level, L1, virtual state-feedback control is learned from input–output data using a recently proposed virtual state-feedback reference tuning (VSFRT) principle. L1 ensures linear reference model tracking (or matching) and thus indirect closed-loop control system (CLCS) linearization. On top of L1, experiment-driven model-free iterative learning control (EDMFILC) is then applied for learning reference input–controlled output pairs, coined primitives. The primitives’ signals at the L2 level encode the CLCS dynamics, which are not explicitly used in the learning phase. Data reusability is applied to derive monotonic and safely guaranteed learning convergence. The learning primitives from the L2 level are finally used at the uppermost L3 level, where a decomposition/recomposition operation enables prediction of the optimal reference input that ensures optimal tracking of a previously unseen trajectory, without relearning by repetitions as at level L2. Hence, the HLF enables control systems to generalize their tracking behavior to new scenarios by extrapolating their current knowledge base. The proposed HLF endows CLCSs with learning, memorization and generalization features that are specific to intelligent organisms. This may be considered an advancement towards intelligent, generalizable and adaptive control systems.

10.
Individuals with mild cognitive impairment (MCI) are at high risk of developing Alzheimer’s disease (AD). Repetitive photic stimulation (PS) is commonly used in routine electroencephalogram (EEG) examinations for rapid assessment of perceptual functioning. This study aimed to evaluate neural oscillatory responses and nonlinear brain dynamics under the effects of PS in patients with mild AD, moderate AD, severe AD, and MCI, as well as in healthy elderly controls (HC). EEG power ratios during PS were estimated as an index of oscillatory responses. Multiscale sample entropy (MSE) was estimated as an index of brain dynamics before, during, and after PS. During PS, EEG harmonic responses were lower and MSE values were higher in the AD subgroups than in the HC and MCI groups. PS-induced changes in EEG complexity were less pronounced in the AD subgroups than in the HC and MCI groups. Brain dynamics revealed a “transitional change” between MCI and mild AD. Our findings suggest a deficiency in brain adaptability in AD patients, which hinders their ability to adapt to repetitive perceptual stimulation. This study highlights the importance of combining spectral and nonlinear dynamical analyses when seeking to unravel perceptual functioning and brain adaptability across the various stages of neurodegenerative disease.

11.
Although deep learning algorithms have achieved significant progress in a variety of domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) using unlabeled data has emerged as an alternative, as it eliminates manual annotation. To do this, SSL constructs feature representations using pretext tasks that operate without manual annotation, which allows models trained on these tasks to extract useful latent representations that later improve downstream tasks such as object classification and detection. The early SSL methods are based on auxiliary pretext tasks as a way to learn representations using pseudo-labels, i.e., labels created automatically from the dataset’s attributes. Furthermore, contrastive learning has also performed well in learning representations via SSL: to succeed, it pushes positive samples closer together, and negative ones further apart, in the latent space. This paper provides a comprehensive literature review of the top-performing SSL methods that use auxiliary pretext and contrastive learning techniques. It details the motivation for this research, a general pipeline of SSL, and the terminology of the field, and provides an examination of pretext tasks and self-supervised methods. It also examines how self-supervised methods compare to supervised ones, and then discusses both further considerations and the ongoing challenges faced by SSL.
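The contrastive objective described in the review (pull positives together, push negatives apart) is commonly instantiated as an InfoNCE/NT-Xent loss. Below is a minimal NumPy version for a batch of paired augmented views; the temperature and toy embeddings are assumptions for illustration.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss for a batch of paired views z1[i] <-> z2[i].

    z1, z2: (n, d) L2-normalised embeddings of two augmented views of n images.
    """
    n = z1.shape[0]
    z = np.vstack([z1, z2])                      # (2n, d)
    sim = z @ z.T / temperature                  # cosine similarities (unit-norm embeddings)
    np.fill_diagonal(sim, -np.inf)               # exclude self-similarity

    # Index of each sample's positive: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))  # denominator over all other samples
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

rng = np.random.default_rng(0)
def l2norm(a): return a / np.linalg.norm(a, axis=1, keepdims=True)
z1, z2 = l2norm(rng.normal(size=(8, 32))), l2norm(rng.normal(size=(8, 32)))
print(info_nce_loss(z1, z2))
```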

12.
Radio frequency machine learning (RFML) can be loosely defined as the field that applies machine learning (ML) and deep learning (DL) techniques to applications related to wireless communications. However, traditional RFML basically assumes that the training and test data are independent and identically distributed, and that only a large number of labeled samples can train a classification model that effectively classifies the test data. In other words, without enough training samples, it is impossible to learn an automatic modulation classifier that performs well in a varying noise-interference environment. Feature-based transfer learning minimizes the distribution difference between historical modulated-signal data and new data by learning similarity-maximizing feature spaces. Therefore, in this paper, dynamic distribution adaptation (DDA) is adopted to address the above challenges. We propose a Tensor Embedding RF Domain Adaptation (TERFDA) approach, which learns the latent subspace of the tensors formed by the time–frequency maps of the signals, so as to use the multi-dimensional domain information of the signals to jointly learn the shared feature subspace of the source and target domains, and then performs DDA in that shared subspace. The experimental results show that, on modulated-signal data, compared with state-of-the-art DA algorithms, TERFDA requires fewer samples and categories and performs better against the varying noise interference between the source and target domains.
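The data tensors in TERFDA are built from time–frequency maps of the signals. As a stand-in illustration (not the paper’s exact transform), the sketch below computes a spectrogram of a toy phase-modulated signal with SciPy; each such 2-D map would become one slice of the data tensor.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 10_000                              # sample rate in Hz (an assumption)
t = np.arange(0, 0.2, 1 / fs)

# Toy BPSK-like signal: a carrier whose sign flips with a random bit stream, plus noise
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=50)
symbols = np.repeat(2 * bits - 1, len(t) // len(bits))[: len(t)]
x = symbols * np.cos(2 * np.pi * 1_000 * t) + 0.3 * rng.normal(size=len(t))

# Time-frequency map: one such 2-D map per signal forms one slice of the tensor
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
print(Sxx.shape)   # (frequency bins, time frames)
```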

13.
The differential diagnosis of epileptic seizures (ES) and psychogenic non-epileptic seizures (PNES) may be difficult due to the lack of distinctive clinical features. The interictal electroencephalographic (EEG) signal may also be normal in patients with ES. Innovative diagnostic tools that exploit non-linear EEG analysis and deep learning (DL) could provide important support to physicians for clinical diagnosis. In this work, 18 patients with new-onset ES (12 males, 6 females) and 18 patients with video-recorded PNES (2 males, 16 females) with normal interictal EEG at visual inspection were enrolled. None of them was taking psychotropic drugs. A convolutional neural network (CNN) scheme using DL classification was designed to classify the two categories of subjects (ES vs. PNES). The proposed architecture performs an EEG time-frequency transformation and a classification step with a CNN. The CNN was able to classify the EEG recordings of subjects with ES vs. subjects with PNES with 94.4% accuracy, and provided high performance in the assigned binary classification when compared with standard learning algorithms (multi-layer perceptron, support vector machine, linear discriminant analysis and quadratic discriminant analysis). In order to interpret how the CNN achieved this performance, an information-theoretical analysis was carried out. Specifically, the permutation entropy (PE) of the feature maps was evaluated and compared between the two classes. The achieved results, although preliminary, encourage the use of these innovative techniques to support neurologists in early diagnoses.
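Permutation entropy, used here to inspect the CNN’s feature maps, measures the distribution of ordinal patterns in a sequence. The sketch below is a minimal NumPy implementation; the order and delay are common defaults assumed for illustration, not necessarily the paper’s settings.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Permutation entropy of a 1-D signal (Bandt-Pompe ordinal patterns)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    patterns = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        key = tuple(np.argsort(window))           # ordinal pattern of the embedded vector
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    pe = -np.sum(p * np.log2(p))
    if normalize:
        pe /= np.log2(factorial(order))           # scale to [0, 1]
    return pe

rng = np.random.default_rng(0)
print(permutation_entropy(np.sin(np.linspace(0, 20 * np.pi, 2000))))  # regular -> low PE
print(permutation_entropy(rng.normal(size=2000)))                     # noise -> close to 1
```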

14.
Electroencephalography neurofeedback (EEG-NFB) training can induce changes in the power of targeted EEG bands. The objective of this study is to enhance and evaluate the specific changes in EEG power spectral density that brain-machine interface (BMI) users can reliably generate for power augmentation through EEG-NFB training. First, we constructed an EEG-NFB training system for power augmentation. Then, three subjects were assigned to three NFB training stages, with a 6-day consecutive training session constituting one stage. The subjects received real-time feedback on their EEG signals from a robotic arm while performing flexion and extension movements with their elbow and shoulder joints, respectively. EEG signals were compared across the NFB training stages. The training results showed that EEG beta (12–40 Hz) power increased after NFB training for both the elbow and the shoulder joint movements. EEG beta power showed sustained improvement over the three training stages, which revealed that even short-term training can improve EEG signals significantly. Moreover, the training effect for the shoulder joints was more pronounced than that for the elbow joints. These results suggest that NFB training can improve EEG signals and clarify the specific EEG changes during movement. Our results may also provide insights into how the neural effects of NFB can be better applied to BMI power augmentation systems and improve the performance of healthy individuals.

15.
Currently, deep learning shows state-of-the-art performance in image classification with a pre-defined taxonomy. However, in more realistic scenarios, different users usually have different classification intents for a given image collection. To personalize to these requirements, we propose an interactive image classification system with an offline representation learning stage and an online classification stage. During the offline stage, we learn a deep model to extract features with the flexibility and scalability needed for different users’ preferences. Instead of training the model only with inter-class discrimination, we also encode the similarity between the semantic-embedding vectors of the category labels into the model. This makes the extracted features adapt to multiple taxonomies with different granularities. During the online stage, an annotation task iteratively alternates with a high-throughput verification task. When performing the verification task, the users are only required to indicate incorrect predictions without giving the exact category label. For each iteration, our system chooses the images to be annotated or verified based on interactive efficiency optimization. To provide a high interactive rate, a unified active learning algorithm is used to search for the optimal annotation and verification set by minimizing the expected time cost. After interactive annotation and verification, the newly classified images are used to train a customized classifier online, which reflects the user-adaptive intent of categorization. The learned classifier is then used for subsequent annotation and verification tasks. Experimental results on several public image datasets show that our method outperforms existing methods.
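The heart of the online loop is an active-learning choice of which images to show the user next. As a simplified stand-in for the expected-time-cost optimization described above, the sketch below uses standard least-confidence sampling with scikit-learn on synthetic features; the dataset, classifier and batch size are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for deep features of an image collection
X, y = make_classification(n_samples=500, n_features=32, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
labeled = np.arange(30)                       # small seed set of annotated images
unlabeled = np.arange(30, 500)

clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

# Least-confidence sampling: query the images the classifier is least sure about
proba = clf.predict_proba(X[unlabeled])
uncertainty = 1.0 - proba.max(axis=1)
query = unlabeled[np.argsort(uncertainty)[-10:]]   # 10 images to annotate/verify next
print("next batch to annotate:", query)
```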

16.
With the widespread use of emotion recognition, cross-subject emotion recognition based on EEG signals has become a hot topic in affective computing. Electroencephalography (EEG) can be used to detect the brain’s electrical activity associated with different emotions. The aim of this research is to improve accuracy by enhancing the generalization of features. A multi-classifier fusion method based on mutual information with sequential forward floating selection (MI_SFFS) is proposed. The dataset used in this paper is DEAP, a multi-modal open dataset containing 32 EEG channels and multiple other physiological signals. First, high-dimensional features are extracted from 15 EEG channels of DEAP after slicing the data with a 10-s time window. Second, MI and SFFS are integrated as a novel feature-selection method. Then, support vector machine (SVM), k-nearest neighbor (KNN) and random forest (RF) classifiers are employed to classify positive and negative emotions, and their output probabilities are used as weighted features for further classification. To evaluate the model performance, leave-one-out cross-validation is adopted. Finally, cross-subject classification accuracies of 0.7089, 0.7106 and 0.7361 are achieved by the SVM, KNN and RF classifiers, respectively. The results demonstrate the feasibility of splicing different classifiers’ output probabilities into the weighted features.
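A minimal scikit-learn sketch of the two ingredients above, mutual-information feature ranking (the floating search is omitted for brevity) and fusion of classifier output probabilities as features, is shown below. The synthetic data, feature counts and meta-classifier are assumptions standing in for the DEAP features and the paper’s fusion rule.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=120, n_informative=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: keep the 30 features with the highest mutual information with the labels
mi = mutual_info_classif(X_tr, y_tr, random_state=0)
top = np.argsort(mi)[-30:]
X_tr_sel, X_te_sel = X_tr[:, top], X_te[:, top]

# Step 2: use each base classifier's positive-class probability as a fused feature
# (a stricter stacking setup would use out-of-fold probabilities to avoid leakage)
bases = [SVC(probability=True, random_state=0),
         KNeighborsClassifier(n_neighbors=5),
         RandomForestClassifier(n_estimators=100, random_state=0)]
train_probs, test_probs = [], []
for clf in bases:
    clf.fit(X_tr_sel, y_tr)
    train_probs.append(clf.predict_proba(X_tr_sel)[:, 1])
    test_probs.append(clf.predict_proba(X_te_sel)[:, 1])

fusion = LogisticRegression().fit(np.column_stack(train_probs), y_tr)
print("fused accuracy:", accuracy_score(y_te, fusion.predict(np.column_stack(test_probs))))
```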

17.
In recent decades, emotion recognition has received considerable attention. As attention has shifted toward physiological signals, a wide range of elaborate physiological features have been proposed and combined with various classification models to detect emotional states. To circumvent the labor of manually designing features, we propose to acquire affective and robust representations automatically through a Stacked Denoising Autoencoder (SDA) architecture with unsupervised pre-training followed by supervised fine-tuning. In this paper, we compare the performance of different features and models through three binary classification tasks based on the Valence-Arousal-Dominance (VAD) affection model. Decision fusion and feature fusion of electroencephalogram (EEG) and peripheral signals are performed on hand-engineered features, while data-level fusion is performed for the deep-learning methods. It turns out that the fused data perform better than either single modality. To take advantage of deep-learning algorithms, we augment the original data and feed them directly into our training model. We use two deep architectures and another generative stacked semi-supervised architecture as references for comparison to test the method’s practical effects. The results reveal that our scheme slightly outperforms the other three deep feature extractors and surpasses the state of the art for hand-engineered features.

18.
With the development of technology and the rise of the metaverse concept, the brain-computer interface (BCI) has become a research hotspot, and BCIs based on motor imagery (MI) EEG have received wide attention. However, the performance of MI-EEG decoding models still needs to be improved. At present, most deep-learning-based MI-EEG decoding methods cannot make full use of the temporal and frequency features of EEG data, which leads to low MI-EEG decoding accuracy. To address this issue, this paper proposes a two-branch convolutional neural network (TBTF-CNN) that can simultaneously learn the temporal and frequency features of EEG data. The structure of the EEG data is reconstructed to simplify the CNN’s spatio-temporal convolution process, and a continuous wavelet transform is used to express the time-frequency features of the EEG data. TBTF-CNN fuses the features learned by the two branches and then feeds them into the classifier to decode the MI-EEG. The experimental results on the BCI competition IV 2b dataset show that the proposed model achieves an average classification accuracy of 81.3% and a kappa value of 0.63. Compared with other methods, TBTF-CNN achieves better performance in MI-EEG decoding. The proposed method makes full use of the temporal and frequency features of EEG data and improves the decoding accuracy of MI-EEG.
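The frequency branch above is driven by a continuous wavelet transform of the EEG. A minimal sketch of that preprocessing step follows, assuming PyWavelets (pywt) as the CWT implementation; the Morlet wavelet, the scale range, and the 250 Hz sampling rate are stated assumptions rather than the paper’s exact settings.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

fs = 250                                   # EEG sampling rate in Hz (an assumption)
t = np.arange(0, 4, 1 / fs)                # one 4-second motor-imagery trial

# Toy EEG-like trial: a mu-band (10 Hz) burst over background noise
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) * (t > 1.5) + 0.5 * rng.normal(size=len(t))

# Continuous wavelet transform with a Morlet wavelet; scales cover roughly 3-50 Hz
scales = np.arange(4, 64)
coefs, freqs = pywt.cwt(x, scales, "morl", sampling_period=1 / fs)
tf_map = np.abs(coefs)                     # (scales, time) map fed to one CNN branch
print(tf_map.shape, freqs.min(), freqs.max())
```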

19.
Cybercriminals use malicious URLs as distribution channels to propagate malware over the web. Attackers exploit vulnerabilities in browsers to install malware and gain remote access to the victim’s computer. The purpose of most malware is to gain access to a network, exfiltrate sensitive information, and secretly monitor targeted computer systems. In this paper, a data mining approach known as classification based on association (CBA) is presented for detecting malicious URLs using URL and webpage content features. The CBA algorithm uses a training dataset of URLs as historical data to discover association rules and build an accurate classifier. The experimental results show that CBA gives performance comparable to benchmark classification algorithms, achieving 95.8% accuracy with low false positive and false negative rates.

20.
Multi-label learning is dedicated to learning functions that label each sample with its true label set. As data and knowledge grow, feature dimensionality keeps increasing; however, high-dimensional information may contain noisy data, making multi-label learning difficult. Feature selection is a technique that can effectively reduce the data dimension. In feature selection research, multi-objective optimization algorithms have shown excellent global optimization performance, and the Pareto relationship handles the contradictory objectives of a multi-objective problem well. Therefore, a Shapley-value-fused feature selection algorithm for multi-label learning (SHAPFS-ML) is proposed. The method takes multi-label criteria as the optimization objectives, and the proposed crossover and mutation operators based on Shapley values are conducive to identifying relevant, redundant and irrelevant features. A comparison of experimental results on real-world datasets reveals that SHAPFS-ML is an effective feature selection method for multi-label classification that can reduce the classification algorithm’s computational complexity and improve classification accuracy.
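The Shapley value treats features as players in a cooperative game whose payoff is classification performance; each feature’s value is its average marginal contribution over orderings of the features. The sketch below estimates these values by Monte Carlo permutation sampling with a KNN cross-validation payoff; the estimator, classifier and single-label toy data are illustrative assumptions, not SHAPFS-ML’s operators or its multi-label criteria.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=4, random_state=0)
n_feat = X.shape[1]

def value(subset):
    """Payoff of a feature coalition: mean 3-fold CV accuracy of KNN on those features."""
    if not subset:
        return max(np.bincount(y)) / len(y)     # majority-class baseline for the empty set
    Xs = X[:, sorted(subset)]
    return cross_val_score(KNeighborsClassifier(5), Xs, y, cv=3).mean()

rng = np.random.default_rng(0)
shapley = np.zeros(n_feat)
n_perm = 20                                      # Monte Carlo samples of feature orderings
for _ in range(n_perm):
    order = rng.permutation(n_feat)
    prev, coalition = value([]), []
    for f in order:
        coalition.append(f)
        curr = value(coalition)
        shapley[f] += curr - prev                # marginal contribution of feature f
        prev = curr
shapley /= n_perm
print(np.round(shapley, 3))                      # higher = more relevant feature
```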
