Similar Literature
 20 similar documents retrieved
1.
To address the problems of current pitot-static test equipment, namely complex structure, high cost, low automation, and low test efficiency, a design method for a pitot-static test system based on incremental PID control is proposed. The system uses an STM32 microcontroller for the core control tasks, achieves precise control of the pressure in a sealed vessel and its piping, and performs automatic pressure measurement and display of the related flight parameters. Experiments show that the system offers fast response and small overshoot and meets the accuracy requirements of pitot-static system testing. The test system is low-cost, highly versatile, easy to use, has a short test cycle, is suitable for many aircraft types, and has good application prospects.
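The incremental PID law referred to above is the standard textbook formula; the sketch below applies it to a toy first-order pressure plant. The gains, setpoint, and plant model are illustrative assumptions, not the paper's STM32 implementation.

```python
# Minimal incremental PID sketch: du(k) = Kp*(e_k - e_km1) + Ki*e_k + Kd*(e_k - 2*e_km1 + e_km2)
# Gains and the first-order "pressure vessel" plant below are placeholders only.

def incremental_pid_step(e_k, e_km1, e_km2, kp=0.8, ki=0.2, kd=0.05):
    """Return the control increment for the current error sample."""
    return kp * (e_k - e_km1) + ki * e_k + kd * (e_k - 2.0 * e_km1 + e_km2)

setpoint = 101.3          # target pressure, kPa (illustrative)
pressure = 80.0           # current pressure in the sealed vessel
u = 0.0                   # accumulated actuator command
e_km1 = e_km2 = 0.0

for _ in range(200):
    e_k = setpoint - pressure
    u += incremental_pid_step(e_k, e_km1, e_km2)
    pressure += 0.05 * u - 0.01 * (pressure - 80.0)   # toy plant response with a small leak
    e_km2, e_km1 = e_km1, e_k

print(f"final pressure: {pressure:.2f} kPa")
```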

2.
The accurate prediction of gross box-office markets is of great benefit for investment and management in the movie industry. In this work, we propose a machine learning-based method for predicting the movie box-office revenue of a country, based on empirical comparisons of eight methods with diverse combinations of economic factors. Specifically, in time-series forecasting experiments from 2013 to 2016, we achieved a relative root mean squared error of 0.056 in the US and 0.183 in China for the two case-study movie markets. We concluded that the support-vector-machine-based method using gross domestic product achieved the best prediction performance while relying only on easily available economic factors. The computational experiments and comparison studies provide evidence for the effectiveness and advantages of the proposed prediction strategy. In validating the predicted total box-office markets for 2017, the error rates were 0.044 in the US and 0.066 in China. In the consecutive predictions of nationwide box-office markets in 2018 and 2019, the mean relative absolute percentage errors achieved were 0.041 and 0.035 in the US and China, respectively. The precise predictions, on both the training and validation data, demonstrate the efficiency and versatility of the proposed method.
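As a rough illustration of the kind of SVM-based regression described, the sketch below fits scikit-learn's SVR to synthetic GDP/box-office pairs; the data, kernel, and hyperparameters are assumptions, not the authors' settings.

```python
# Hedged sketch: support-vector regression of annual box office on GDP.
# The numbers below are synthetic placeholders, not the paper's data.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

gdp = np.array([[16.8], [17.5], [18.2], [19.5]])       # toy GDP values (trillions)
box_office = np.array([10.9, 11.1, 11.4, 11.1])        # toy yearly gross (billions)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(gdp, box_office)

next_year_gdp = np.array([[20.5]])
print("predicted gross:", model.predict(next_year_gdp)[0])
```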

3.
Although deep learning algorithms have achieved significant progress in a variety of domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) using unlabeled data has emerged as an alternative, as it eliminates manual annotation. To do this, SSL constructs feature representations using pretext tasks that operate without manual annotation, which allows models trained on these tasks to extract useful latent representations that later improve downstream tasks such as object classification and detection. Early SSL methods are based on auxiliary pretext tasks as a way to learn representations using pseudo-labels, i.e., labels created automatically from the dataset's attributes. Contrastive learning has also performed well in learning representations via SSL; to succeed, it pushes positive samples closer together, and negative ones further apart, in the latent space. This paper provides a comprehensive literature review of the top-performing SSL methods using auxiliary pretext and contrastive learning techniques. It details the motivation for this research, presents a general SSL pipeline and the terminology of the field, and examines pretext tasks and self-supervised methods. It also examines how self-supervised methods compare to supervised ones, and then discusses both further considerations and ongoing challenges faced by SSL.
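The contrastive objective described, pulling positives together and pushing negatives apart, is commonly implemented as an InfoNCE-style loss; a minimal sketch with random embeddings follows, as a generic illustration rather than any specific surveyed method.

```python
# Minimal InfoNCE-style contrastive loss: each row of z1 is positive with the
# matching row of z2; all other rows in the batch act as negatives.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # cosine similarities, scaled
    targets = torch.arange(z1.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

z1 = torch.randn(8, 128)   # embeddings of one augmented view (placeholder)
z2 = torch.randn(8, 128)   # embeddings of the second view
print("contrastive loss:", info_nce(z1, z2).item())
```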

4.
许磊, 陆明万, 曹庆杰. 《计算物理》(Chinese Journal of Computational Physics), 2003, 20(2): 107-110
The incremental harmonic balance (IHB) method is used to derive a computational scheme for the steady-state periodic response of a class of piecewise-linear circuit systems. The system is simulated numerically, phase portraits of the periodic response are plotted for specific control parameters, and the response characteristics are obtained as the control parameter varies over a certain range. The results are compared with those of the Runge-Kutta method.
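For reference, the Runge-Kutta comparison mentioned above can be reproduced for a generic piecewise-linear system by direct time integration; the sketch below uses SciPy's default RK45 on a forced oscillator with a hypothetical piecewise-linear restoring force, not the paper's circuit parameters or IHB scheme.

```python
# Hedged sketch: direct Runge-Kutta integration of a forced oscillator whose
# restoring force is piecewise linear. All coefficients are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def piecewise_linear_oscillator(t, y, delta=0.05, k1=1.0, k2=3.0, gap=1.0, f=0.3, omega=1.2):
    x, v = y
    # slope of the restoring force increases once |x| exceeds the gap
    restoring = k1 * x if abs(x) <= gap else k1 * x + k2 * (x - np.sign(x) * gap)
    return [v, -2 * delta * v - restoring + f * np.cos(omega * t)]

sol = solve_ivp(piecewise_linear_oscillator, (0, 400), [0.0, 0.0], max_step=0.05)
x_steady = sol.y[0][sol.t > 300]                 # discard the transient
print("steady-state amplitude approx.", x_steady.max())
```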

5.
A new method for identifying identical bands among superdeformed bands from the energy point of view is presented and applied to the analysis of more than 100 superdeformed bands. The dependence of the number of identical bands on the criterion parameter is given. The concept of a quantization interval is also introduced to analyze, from a statistical perspective, the quantization of the incremental alignment of angular momentum. It is found that the incremental alignment of identical bands tends toward quantized values as the criterion is tightened, a property not shared by identical bands of normal deformation. The relation between the new method and the particle-rotor model is discussed.

6.
Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), which is concerned with the development of new methods that explain and interpret machine learning models, has been strongly reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey will serve as a reference point for both theorists and practitioners.

7.
The automated classification of heart sounds plays a significant role in the diagnosis of cardiovascular diseases (CVDs). With the recent introduction of medical big data and artificial intelligence technology, there has been an increased focus on the development of deep learning approaches for heart sound classification. However, despite significant achievements in this field, limitations remain due to insufficient data, inefficient training, and the unavailability of effective models. With the aim of improving the accuracy of heart sound classification, an in-depth systematic review and analysis of existing deep learning methods were performed in the present study, with an emphasis on the convolutional neural network (CNN) and recurrent neural network (RNN) methods developed over the last five years. This paper also discusses the challenges and expected future trends in the application of deep learning to heart sound classification, with the objective of providing an essential reference for further study.

8.
In vertical federated learning (FL), the features of a data sample are distributed across multiple agents. As such, inter-agent collaboration can be beneficial not only during the learning phase, as is the case for standard horizontal FL, but also during the inference phase. A fundamental theoretical question in this setting is how to quantify the cost, or performance loss, of decentralization for learning and/or inference. In this paper, we study general supervised learning problems with any number of agents, and provide a novel information-theoretic quantification of the cost of decentralization in the presence of privacy constraints on inter-agent communication within a Bayesian framework. The cost of decentralization for learning and/or inference is shown to be quantified in terms of conditional mutual information terms involving features and label variables.
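For reference, the conditional mutual information quantities mentioned follow the standard definition below; the paper's specific decentralization-cost expressions are not reproduced here, and the split of features into X_1 and X_2 is only a generic placeholder.

```latex
% Standard definition of conditional mutual information between features X_1 and
% label Y given the locally available features X_2 (generic form, not the paper's
% exact cost terms):
I(X_1; Y \mid X_2) \;=\;
\mathbb{E}_{p(x_1, y, x_2)}\!\left[\log \frac{p(x_1, y \mid x_2)}{p(x_1 \mid x_2)\, p(y \mid x_2)}\right]
```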

9.
With the goal of understanding whether the information contained in node metadata can help in the task of link weight prediction, we investigate herein whether incorporating it as a similarity feature (referred to as metadata similarity) between the end nodes of a link improves the prediction accuracy of common supervised machine learning methods. In contrast with previous works, instead of normalizing the link weights, we treat them as count variables representing the number of interactions between end nodes, as this is a natural representation for many datasets in the literature. In this preliminary study, we find no significant evidence that metadata similarity improves prediction accuracy on the four empirical datasets studied. To further explore the role of node metadata in weight prediction, we synthesized weights to analyze the extreme case in which the weights depend solely on the metadata of the end nodes, encoding different relationships between them using logical operators in the generation process. Under these conditions, the random forest method performed significantly better than the other methods in 99.07% of cases, though prediction accuracy for all methods analyzed was significantly degraded in comparison with the experiments on the original weights.
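To make the setup concrete, the sketch below trains scikit-learn's RandomForestRegressor on topological features plus a metadata-similarity feature to predict count-valued link weights; the feature names and synthetic data are hypothetical, not the datasets studied in the paper.

```python
# Hedged sketch: predicting integer link weights from topological features plus
# a node-metadata similarity feature. Features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_links = 500
common_neighbors = rng.integers(0, 20, n_links)          # topological feature
degree_product = rng.integers(1, 400, n_links)           # topological feature
metadata_similarity = rng.random(n_links)                 # similarity of end-node metadata

X = np.column_stack([common_neighbors, degree_product, metadata_similarity])
weights = rng.poisson(1 + 0.3 * common_neighbors)         # count-valued link weights

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, weights)
print("feature importances:", dict(zip(
    ["common_neighbors", "degree_product", "metadata_similarity"],
    model.feature_importances_.round(3))))
```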

10.
Numerical optimization has been a popular research topic in various engineering applications, and differential evolution (DE) is one of the most extensively applied methods. However, it is difficult to choose appropriate control parameters and to avoid falling into local optima and poor convergence when handling complex numerical optimization problems. To address these problems, an improved DE (BROMLDE) with the Bernstein operator and refracted oppositional-mutual learning (ROML) is proposed, which reduces parameter selection, converges faster, and avoids becoming trapped in local optima. First, a new ROML strategy integrates mutual learning (ML) and refractive oppositional learning (ROL), achieving stochastic switching between ROL and ML during population initialization and the generation-jumping period to balance exploration and exploitation. Meanwhile, a dynamic adjustment factor is constructed to improve the ability of the algorithm to jump out of local optima. Second, a Bernstein operator, which requires no parameter setting or intrinsic parameter-tuning phase, is introduced to improve convergence performance. Finally, the performance of BROMLDE is evaluated on 10 bound-constrained benchmark functions from each of CEC 2019 and CEC 2020, together with two engineering optimization problems. The comparative experimental results show that BROMLDE has higher global optimization capability and convergence speed on most functions and engineering problems.
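For context, the sketch below implements the classical DE/rand/1/bin scheme that such improved variants build on, with fixed F and CR on a simple sphere function; it does not include the Bernstein operator or the ROML strategy proposed in the paper.

```python
# Hedged sketch: classical DE/rand/1/bin on a sphere function. The proposed
# BROMLDE adds a Bernstein operator and ROML initialization, not shown here.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(1)
dim, pop_size, F, CR = 10, 30, 0.5, 0.9
pop = rng.uniform(-5, 5, (pop_size, dim))
fitness = np.array([sphere(ind) for ind in pop])

for _ in range(300):
    for i in range(pop_size):
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                                   # differential mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                            # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])                    # binomial crossover
        if (f_trial := sphere(trial)) < fitness[i]:                # greedy selection
            pop[i], fitness[i] = trial, f_trial

print("best value found:", fitness.min())
```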

11.
The Coronavirus disease 2019 (COVID-19) has become one of the major threats to the world. Computed tomography (CT) is an informative tool for the diagnosis of COVID-19 patients. Many deep learning approaches on CT images have been proposed and have shown promising performance. However, due to the high complexity and non-transparency of deep models, explaining the diagnosis process is challenging, making it hard to evaluate whether such approaches are reliable. In this paper, we propose a visual interpretation architecture for explaining deep learning models and apply it to COVID-19 diagnosis. Our architecture provides a comprehensive interpretation of the deep model from different perspectives, including the training trends, diagnostic performance, learned features, feature extractors, hidden layers, and the support regions for the diagnostic decision. With the interpretation architecture, researchers can compare and explain classification performance, gain insight into what the deep model learned from images, and obtain support for diagnostic decisions. Our deep model achieves 94.75%, 93.22%, 96.69%, 97.27%, and 91.88% in terms of accuracy, sensitivity, specificity, positive predictive value, and negative predictive value, which are 8.30%, 4.32%, 13.33%, 10.25%, and 6.19% higher than those of the compared traditional methods. The visualized features in 2-D and 3-D spaces provide the reasons for the superiority of our deep model. Our interpretation architecture allows researchers to understand more about how and why deep models work, and it can serve as an interpretation solution for any deep learning model based on convolutional neural networks. It can also help deep learning methods take a step forward in the clinical COVID-19 diagnosis field.
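The five criteria reported (accuracy, sensitivity, specificity, positive and negative predictive value) all derive from the binary confusion matrix; a small sketch with placeholder counts is given below (the counts are not the paper's).

```python
# Hedged sketch: the five reported criteria computed from a binary confusion
# matrix. TP/FP/TN/FN values are placeholders, not the paper's results.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),          # true positive rate (recall)
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv":         tp / (tp + fp),          # positive predictive value (precision)
        "npv":         tn / (tn + fn),          # negative predictive value
    }

print(diagnostic_metrics(tp=110, fp=4, tn=117, fn=8))
```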

12.
Prevention of magnetic bias saturation in high-power amorphous transformers (cited 1 time: 0 self-citations, 1 by others)
The basic characteristics of amorphous materials are summarized, the causes of magnetic bias saturation in high-power amorphous transformers are analyzed in depth, and prevention methods such as the amplitude-incrementing method, the pulse-width-incrementing method, and the current-integral pulse-width-regulation method are proposed. Keywords: amorphous magnetic bias saturation; amplitude-incrementing method; pulse-width-incrementing method; current-integral pulse-width-regulation method

13.
Electric power forecasting plays a substantial role in the administration and balancing of current power systems. For this reason, accurate predictions of service demand are needed to develop better programming for the generation and distribution of power and to reduce the risk of vulnerabilities in the integration of an electric power system. For the purposes of the current study, a systematic literature review was conducted to identify the type of model with the highest propensity for precision in the context of electric power forecasting. The state-of-the-art model in accurate electric power forecasting was determined from the results reported in 257 accuracy tests from five geographic regions. Two classes of forecasting models were compared: classical statistical or mathematical (MSC) and machine learning (ML) models. Furthermore, the use of hybrid models that have made significant contributions to electric power forecasting is identified, and a case study is presented to demonstrate their good performance compared with traditional models. Among our main findings, we conclude that forecasting errors are minimized by reducing the time horizon, that ML models that consider various sources of exogenous variability tend to have better forecast accuracy, and finally, that the accuracy of forecasting models has significantly increased over the last five years.

14.
To address noise and outliers in dataset samples, a robust extreme learning machine algorithm based on angle optimization is proposed. Using the principle of angle optimization with a robust activation function, the method first reduces the influence of outliers on the classifier, thereby preserving the global structure of the data and achieving better denoising. Second, it effectively avoids inaccurate solutions of the hidden-layer output matrix, further improving the generalization performance of the extreme learning machine. Experiments on common image databases show that the proposed algorithm is more robust and achieves higher recognition rates than competing algorithms.
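For reference, the sketch below shows a plain extreme learning machine (random hidden layer, least-squares output weights obtained from the hidden-layer output matrix); the angle-optimized robust variant proposed in the abstract is not reproduced, and the data are toy placeholders.

```python
# Hedged sketch: a plain extreme learning machine (ELM). The robust
# angle-optimized variant described above is not implemented here.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))                       # toy inputs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)         # toy binary targets

n_hidden = 64
W = rng.standard_normal((X.shape[1], n_hidden))         # random input weights (never trained)
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                                  # hidden-layer output matrix
beta = np.linalg.pinv(H) @ y                            # output weights via pseudo-inverse

pred = (H @ beta > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```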

15.
Autoencoders are commonly used in representation learning. They consist of an encoder and a decoder, which provide a straightforward way to map n-dimensional data in input space to a lower m-dimensional representation space and back. The decoder itself defines an m-dimensional manifold in input space. Inspired by manifold learning, we show that the decoder can be trained on its own by learning the representations of the training samples along with the decoder weights using gradient descent. A sum-of-squares loss then corresponds to optimizing the manifold to have the smallest Euclidean distance to the training samples, and similarly for other loss functions. We derive expressions for the number of samples needed to specify the encoder and decoder and show that the decoder generally requires far fewer training samples to be well-specified than the encoder. We discuss the training of autoencoders from this perspective and relate it to previous work in the field that uses noisy training examples and other types of regularization. On the natural image datasets MNIST and CIFAR10, we demonstrate that the decoder is much better suited to learning a low-dimensional representation, especially when trained on small datasets. Using simulated gene regulatory data, we further show that the decoder alone leads to better generalization and meaningful representations. Our approach of training the decoder alone facilitates representation learning even on small datasets and can lead to improved training of autoencoders. We hope that the simple analyses presented will also contribute to an improved conceptual understanding of representation learning.
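A minimal PyTorch sketch of the core idea, optimizing per-sample latent codes jointly with decoder weights under a sum-of-squares loss, is shown below; the architecture and data are toy assumptions, not the paper's setup.

```python
# Hedged sketch: training only a decoder by treating each sample's latent code
# as a free parameter optimized alongside the decoder weights.
import torch

n_samples, data_dim, latent_dim = 256, 20, 2
data = torch.randn(n_samples, data_dim)                          # toy "images"

decoder = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, data_dim))
codes = torch.zeros(n_samples, latent_dim, requires_grad=True)   # learnable representations

opt = torch.optim.Adam(list(decoder.parameters()) + [codes], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = ((decoder(codes) - data) ** 2).sum(dim=1).mean()      # sum-of-squares loss
    loss.backward()
    opt.step()

print("final reconstruction loss:", loss.item())
```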

16.
In mobile edge computing systems, the edge server placement problem is mainly tackled as a multi-objective optimization problem and solved with mixed integer programming, heuristic, or meta-heuristic algorithms. These methods, however, have serious drawbacks such as poor scalability, convergence to local optima, and difficult parameter tuning. To overcome these drawbacks, we propose a novel edge server placement algorithm based on deep Q-networks and reinforcement learning, dubbed DQN-ESPA, which can achieve optimal placements without relying on previous placement experience. In DQN-ESPA, the edge server placement problem is modeled as a Markov decision process, formalized with a state space, an action space, and a reward function, and it is subsequently solved using a reinforcement learning algorithm. Experimental results using real datasets from Shanghai Telecom show that DQN-ESPA outperforms state-of-the-art algorithms such as the simulated annealing placement algorithm (SAPA), Top-K placement algorithm (TKPA), K-Means placement algorithm (KMPA), and random placement algorithm (RPA). In particular, with a comprehensive consideration of access delay and workload balance, DQN-ESPA achieves up to 13.40% and 15.54% better placement performance for 100 and 300 edge servers, respectively.
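As a rough illustration of the deep Q-network machinery such an approach relies on, the sketch below performs a single temporal-difference update of a Q-network on one toy transition; the state/action sizes and the placement-flavoured reward are assumptions, not the paper's MDP formulation.

```python
# Hedged sketch: one DQN temporal-difference update on a toy transition.
# State/action dimensions and the reward are placeholders, not the paper's MDP.
import torch

state_dim, n_actions, gamma = 16, 4, 0.99
q_net = torch.nn.Sequential(torch.nn.Linear(state_dim, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

state = torch.randn(1, state_dim)          # current placement state (toy)
action = torch.tensor([2])                 # e.g. "place a server at candidate site 2"
reward = torch.tensor([1.0])               # e.g. reward favouring low access delay (toy)
next_state = torch.randn(1, state_dim)

with torch.no_grad():
    target = reward + gamma * q_net(next_state).max(dim=1).values    # TD target
q_sa = q_net(state).gather(1, action.unsqueeze(1)).squeeze(1)         # Q(s, a)
loss = torch.nn.functional.mse_loss(q_sa, target)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print("TD loss:", loss.item())
```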

17.
Recently, there has been a huge rise in malware, which creates a significant security threat to organizations and individuals. Despite the incessant efforts of cybersecurity research to defend against malware threats, malware developers keep discovering new ways to evade these defense techniques. Traditional static and dynamic analysis methods are ineffective in identifying new malware and incur high overhead in terms of memory and time. Typical machine learning approaches that train a classifier based on handcrafted features are also not sufficiently potent against these evasive techniques and require considerable feature-engineering effort. Recent malware detectors also show performance degradation due to class imbalance in malware datasets. To resolve these challenges, this work adopts a visualization-based method in which malware binaries are depicted as two-dimensional images and classified by a deep learning model. We propose an efficient malware detection system based on deep learning. The system uses a reweighted class-balanced loss function in the final classification layer of the DenseNet model to achieve significant performance improvements in classifying malware by handling the imbalanced data issue. Comprehensive experiments performed on four benchmark malware datasets show that the proposed approach can detect new malware samples with higher accuracy (98.23% for the Malimg dataset, 98.46% for the BIG 2015 dataset, 98.21% for the MaleVis dataset, and 89.48% for the unseen Malicia dataset) and reduced false-positive rates compared with conventional malware mitigation techniques, while maintaining low computational time. The proposed malware detection solution is also reliable and effective against obfuscation attacks.
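One widely used form of reweighted class-balanced loss weights each class by (1 - beta) / (1 - beta^n_c), with n_c the class sample count; the sketch below applies that weighting to cross-entropy as a plausible reading of the reweighting described, not necessarily the paper's exact formulation.

```python
# Hedged sketch: class-balanced cross-entropy with weights (1 - beta) / (1 - beta**n_c).
# This is one common reweighting scheme; the paper's exact loss may differ.
import torch

samples_per_class = torch.tensor([9000., 150., 40.])         # toy imbalanced malware families
beta = 0.999
weights = (1.0 - beta) / (1.0 - beta ** samples_per_class)
weights = weights / weights.sum() * len(samples_per_class)   # normalize around 1

criterion = torch.nn.CrossEntropyLoss(weight=weights)
logits = torch.randn(8, 3)                                    # toy model outputs
labels = torch.randint(0, 3, (8,))
print("class-balanced loss:", criterion(logits, labels).item())
```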

18.
丁建勋, 黄海军, 田琼. 《中国物理 B》(Chinese Physics B), 2011, 20(2): 028901
It is known that the commonly used NaSch cellular automaton (CA) model and its modifications can help explain the internal causes of the macroscopic phenomena of traffic flow. However, the randomization probability of vehicle velocity used in these models is assumed to be an exogenous constant or a conditional constant, which cannot reflect the learning and forgetting behaviour of drivers with historical experience. This paper further modifies the NaSch model by enabling the randomization probability to be adjusted on the basis of drivers' memory. The Markov properties of this modified model are discussed. Analytical and simulation results show that the traffic fundamental diagrams can indeed be improved when considering drivers' intelligent behaviour. Some new features of traffic are revealed by different combinations of the model parameters representing learning and forgetting behaviour.
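For readers unfamiliar with the baseline, the sketch below implements one update step of the standard NaSch cellular automaton with a fixed randomization probability; the memory-based adjustment of that probability proposed in the paper is not included, and all parameters are placeholders.

```python
# Hedged sketch: one step of the standard NaSch model with a constant
# randomization probability p; the paper's memory-based p is not modelled here.
import numpy as np

rng = np.random.default_rng(0)
road_length, v_max, p = 100, 5, 0.3
positions = np.sort(rng.choice(road_length, size=20, replace=False))
velocities = rng.integers(0, v_max + 1, size=positions.size)

def nasch_step(positions, velocities):
    gaps = (np.roll(positions, -1) - positions - 1) % road_length   # headway to the car ahead
    velocities = np.minimum(velocities + 1, v_max)                  # 1. acceleration
    velocities = np.minimum(velocities, gaps)                       # 2. slowing down
    slow = rng.random(velocities.size) < p                          # 3. randomization
    velocities = np.where(slow, np.maximum(velocities - 1, 0), velocities)
    positions = (positions + velocities) % road_length              # 4. movement
    return positions, velocities

positions, velocities = nasch_step(positions, velocities)
print("mean speed after one step:", velocities.mean())
```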

19.
With the popularity of Android and its open-source nature, the Android platform has become an attractive target for hackers, and the detection and classification of malware has become a research hotspot. Existing malware classification methods rely on complex manual operations or large volumes of high-quality training data. However, malware data collected by security providers contain user privacy information, such as user identity and behavioural habits. The increasing concern for user privacy poses a challenge to current malware classification schemes. To address this problem, we propose a new Android malware classification scheme based on federated learning (FL), named FedHGCDroid, which classifies malware on Android clients in a privacy-protected manner. First, we use a convolutional neural network and a graph neural network to design a novel multi-dimensional malware classification model, HGCDroid, which can effectively extract malicious behaviour features to classify malware accurately. Second, we introduce an FL framework to enable distributed Android clients to collaboratively train a comprehensive Android malware classification model in a privacy-preserving way. Finally, to adapt to the non-IID distribution of malware on Android clients, we propose a contribution-degree-based adaptive classifier training mechanism, FedAdapt, to improve the adaptability of the federated-learning-based malware classifier. Comprehensive experimental studies on the Androzoo dataset (under different non-IID data settings) show that FedHGCDroid achieves better adaptability and higher accuracy than other state-of-the-art methods.
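The federated training loop underlying such a scheme usually aggregates client updates by weighted averaging (plain FedAvg); the sketch below shows only that baseline aggregation step, not the contribution-degree-based FedAdapt mechanism or the HGCDroid model, and the client weights are toy arrays.

```python
# Hedged sketch: plain FedAvg aggregation of client model weights, weighted by
# local sample counts. FedAdapt's contribution-degree weighting is not shown.
import numpy as np

def fedavg(client_weights, client_sizes):
    """client_weights: list of dicts mapping layer name -> ndarray."""
    total = sum(client_sizes)
    return {
        name: sum(w[name] * (n / total) for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }

clients = [
    {"layer1": np.full((2, 2), 1.0), "bias": np.array([0.0])},   # toy client models
    {"layer1": np.full((2, 2), 3.0), "bias": np.array([1.0])},
]
global_model = fedavg(clients, client_sizes=[100, 300])
print(global_model["layer1"][0, 0], global_model["bias"][0])     # 2.5 and 0.75
```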

20.
Probabilistic predictions with machine learning are important in many applications. These are commonly produced with Bayesian learning algorithms. However, Bayesian learning methods are computationally expensive compared with non-Bayesian methods. Furthermore, the data used to train these algorithms are often distributed over a large group of end devices. Federated learning can be applied in this setting in a communication-efficient and privacy-preserving manner, but it does not include predictive uncertainty. To represent predictive uncertainty in federated learning, our suggestion is to introduce uncertainty in the aggregation step of the algorithm by treating the set of local weights as a posterior distribution for the weights of the global model. We compare our approach to state-of-the-art Bayesian and non-Bayesian probabilistic learning algorithms. By applying proper scoring rules to evaluate the predictive distributions, we show that our approach can achieve performance similar to what the benchmark would achieve in a non-distributed setting.
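One way to read the aggregation idea, treating the collected local weights as samples from a posterior over the global model, is to form an ensemble predictive distribution and score it with a proper scoring rule; the sketch below does this with a toy logistic model and the logarithmic score, purely as an interpretation rather than the authors' algorithm.

```python
# Hedged sketch: treating the collected client models as posterior samples and
# averaging their predictive distributions; evaluated with the log score (a
# proper scoring rule). Models and data are toy placeholders.
import numpy as np

def client_predict(weights, x):
    """Toy per-client logistic model: p(y=1 | x) under one weight sample."""
    return 1.0 / (1.0 + np.exp(-(x @ weights)))

rng = np.random.default_rng(0)
client_weight_samples = [rng.normal(size=3) for _ in range(5)]   # "posterior" from 5 clients
x, y_true = np.array([0.5, -1.2, 0.3]), 1

probs = np.array([client_predict(w, x) for w in client_weight_samples])
p_ensemble = probs.mean()                                        # ensemble predictive probability
log_score = np.log(p_ensemble if y_true == 1 else 1.0 - p_ensemble)
print(f"ensemble p(y=1) = {p_ensemble:.3f}, log score = {log_score:.3f}")
```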
