Similar Literature
20 similar documents found (search time: 15 ms)
1.
With the rapid development of computer science and technology, real-time image processing is used ever more widely in embedded systems, yet traditional hardware offers limited parallelism due to its architecture. To meet the need for high-performance image computation in video surveillance, machine vision, video compression, and medical image analysis, this paper proposes a high-performance image processing solution combining the OpenCL software model with a heterogeneous FPGA architecture, implementing image display and OpenCL acceleration. Taking the Sobel edge detection algorithm as the research object, the algorithm's parallelism is analyzed and the kernel is accelerated with OpenCL on the system; performance tests compare this approach with a baseline ARM platform and with an OpenCL shared-memory acceleration scheme to study the acceleration effect. Experimental data show that, for images of various resolutions, the OpenCL acceleration subsystem achieves roughly a 100x performance improvement over software processing on the on-chip ARM hard core for the same functionality.
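The Sobel operator at the heart of this system is a pair of 3x3 convolutions whose per-pixel independence is exactly what the OpenCL kernel exploits (one work-item per output pixel). A minimal reference sketch in Python/NumPy, not the paper's OpenCL code; names are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_magnitude(gray: np.ndarray) -> np.ndarray:
    """Reference Sobel edge magnitude; every output pixel is independent,
    so an OpenCL kernel can compute one pixel per work-item."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float32)  # horizontal gradient
    ky = kx.T                                      # vertical gradient
    gx = convolve(gray.astype(np.float32), kx)
    gy = convolve(gray.astype(np.float32), ky)
    return np.sqrt(gx * gx + gy * gy)
```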

2.
蒋华  张乐乾  王鑫 《应用声学》2015,23(7):2559-2562
To address the fact that resource scheduling models in cloud computing environments do not adequately account for resource evaluation, and to better adapt to nodes of differing compute performance and the demands of large-scale data processing, a virtual machine resource scheduling strategy based on a multi-dimensional evaluation model is proposed. First, a multi-dimensional resource evaluation model that includes network performance is built for the cloud environment; on this basis, an improved ant colony optimization algorithm implements the scheduling strategy, which is then realized on the cloud simulation platform CloudSim. Experimental results show that the algorithm adapts better to computing environments with differing network performance, significantly improves scheduling performance, and reduces the deviation of virtual machine load balancing, meeting the load-balancing requirements of virtual machine resources in cloud environments.
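A hedged sketch of the ant-colony scheduling loop described in the abstract; the multi-dimensional `score` combining CPU, memory, and network terms, the load-spread cost, and all parameters are illustrative assumptions:

```python
import random

def aco_schedule(tasks, nodes, score, n_ants=20, n_iter=50,
                 alpha=1.0, beta=2.0, rho=0.1):
    """Assign each task (dict task -> demand) to a node; `score(node)` is the
    multi-dimensional evaluation (CPU, memory, network) -- higher is better."""
    pher = {(t, n): 1.0 for t in tasks for n in nodes}   # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            plan = {}
            for t in tasks:
                weights = [pher[(t, n)] ** alpha * score(n) ** beta for n in nodes]
                plan[t] = random.choices(nodes, weights=weights)[0]
            load = {n: 0.0 for n in nodes}               # cost: load imbalance
            for t, n in plan.items():
                load[n] += tasks[t]
            cost = max(load.values()) - min(load.values())
            if cost < best_cost:
                best, best_cost = plan, cost
        for key in pher:                                 # evaporate, then
            pher[key] *= (1 - rho)                       # reinforce the best plan
        for t, n in best.items():
            pher[(t, n)] += 1.0 / (1 + best_cost)
    return best
```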

3.
The rapid advance of artificial intelligence calls for equally rapid progress in dedicated AI hardware. Neuromorphic computing architectures containing synapses and neurons, inspired by the brain's in-memory computing and parallel processing, can effectively reduce the energy cost of AI workloads. Memory devices show great application value in the hardware implementation of neuromorphic computing; compared with conventional devices, building synapses and neurons from memristors greatly lowers computational energy consumption. However, in neural networks built from memristors, updating and read…

4.
Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into "black box" approaches and causing uncertainty regarding the way they operate and, ultimately, the way they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains where their value could be immense, such as healthcare. As a result, scientific interest in Explainable Artificial Intelligence (XAI), a field concerned with developing methods that explain and interpret machine learning models, has been tremendously reignited in recent years. This study focuses on machine learning interpretability methods; more specifically, it presents a literature review and taxonomy of these methods, as well as links to their programming implementations, in the hope that this survey will serve as a reference point for both theorists and practitioners.

5.
Divisive algorithms are of great importance for community detection in complex networks. The algorithm proposed by Girvan and Newman (GN), based on an edge centrality named betweenness, is a typical representative of this field. Here we studied three edge centralities, based on network topology, walks, and paths respectively, to quantify the relevance of each edge in a network, and proposed a divisive algorithm following the rationale of the GN algorithm that finds communities by iteratively removing edges in order of their centrality values. In addition, we compared these measures with edge betweenness and information centrality. We found that the principal difference among these measures during partitioning is that the walk-based edge centrality first removes edges attached to leaf vertices, whereas the others first delete edges that bridge communities; this indicates that the walk-based centrality has more difficulty uncovering communities than the other edge centralities. We also tested these measures for community detection. The results showed that edge information centrality outperforms the other measures, the walk-based edge centrality obtains the worst results, and edge betweenness performs better than the topology-based edge centrality. We also examined efficiency and found that the walk-based edge centrality has a high time complexity and is not suitable for large networks.
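The GN rationale the authors build on fits in a few lines; a minimal sketch using edge betweenness, where the walk- and path-based centralities the paper compares would be drop-in replacements for the centrality call:

```python
import networkx as nx

def divisive_partition(G, n_communities=2):
    """GN-style divisive clustering: repeatedly remove the edge with the
    highest betweenness until the graph splits into enough components."""
    H = G.copy()
    while nx.number_connected_components(H) < n_communities:
        centrality = nx.edge_betweenness_centrality(H)
        edge = max(centrality, key=centrality.get)   # most "bridge-like" edge
        H.remove_edge(*edge)
    return list(nx.connected_components(H))

communities = divisive_partition(nx.karate_club_graph(), 2)
```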

6.
Deep learning has advanced rapidly in object detection, but limited training data and compute efficiency have kept intelligent deep-learning algorithms from wide use in embedded edge computing, especially in real-time tracking. To address this, and to meet current demands for domestically produced, intelligent technology, an improved Siamese-network deep-learning tracking algorithm is proposed. A fine-tuning network is added to the feature network, solving the inability to update the network model online and improving tracking accuracy. A center-distance penalty term is added to the IoUNet loss function, fixing the position jumps that occur when candidates share the same IoU, as well as IoUNet's convergence blind zones and slow convergence. The trained network is channel-pruned to shrink the model and speed up model loading and inference. The algorithm runs in real time on the Huawei Atlas 200 NPU platform, with accuracy up to 0.90 (IoU > 0.7) at a frame rate of 66 Hz.
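The center-distance penalty added to the IoU loss resembles the DIoU formulation; a hedged sketch of such a loss term, where the box format and penalty weighting are assumptions rather than the paper's exact definition:

```python
import torch

def iou_center_loss(pred, target, eps=1e-7):
    """Boxes as (x1, y1, x2, y2). IoU loss plus a normalized center-distance
    penalty, which keeps the gradient informative when candidates share IoU."""
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # squared center distance, normalized by the enclosing-box diagonal
    cp = (pred[:, :2] + pred[:, 2:]) / 2
    ct = (target[:, :2] + target[:, 2:]) / 2
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    diag2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps
    dist2 = ((cp - ct) ** 2).sum(dim=1)
    return (1 - iou + dist2 / diag2).mean()
```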

7.
To address the weak online real-time performance of conventional video surveillance systems, lag in transmitting massive video data, and monolithic task management, an online video surveillance system based on multi-virtual-machine technology in a cloud computing environment is proposed. The physical and service resources of the cloud platform raise the system's data processing capability: virtual machines process large volumes of surveillance data concurrently, and video data are stored on cloud servers, cutting equipment costs and allowing services to be tailored to different users. The system applies tens to hundreds of virtual machines to online surveillance data and implements the cloud-platform system architecture, Ethernet communication interfaces, server hardware configuration, and virtual machine control. On the software side, resources are allocated dynamically from each virtual machine's computed utilization, effectively reducing the bandwidth overhead of transmitting system state over the network. Functional and performance tests show that on an ordinary 10 Mbit/s public network, the system cuts the transmission latency of online surveillance data by more than 85% and the volume of surveillance video data by more than 75% compared with traditional video surveillance.
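A hedged sketch of the utilization-driven placement logic described above; the 0.8 load threshold and the scalar utilization metric are illustrative assumptions:

```python
def place_stream(vms, demand):
    """Assign a new video stream to the least-utilized VM that can still
    absorb its load; `vms` maps vm_id -> current utilization in [0, 1]."""
    candidates = {v: u for v, u in vms.items() if u + demand <= 0.8}
    if not candidates:
        return None                      # no capacity: trigger scale-out instead
    target = min(candidates, key=candidates.get)
    vms[target] += demand
    return target
```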

8.
Visible/near-infrared reflectance spectra of broad bean leaves, combined with derivative spectra, were used to analyze the spectral features of samples at three levels of insect damage (healthy, light, and heavy) and to select the optimal wavebands for pest detection. A cloud computing platform was built with Hadoop, Spark, and VMware virtual machines, and the MLlib machine learning library was used to implement artificial neural network (ANN) and support vector machine (SVM) classifiers, which were trained and evaluated on both the full-band and optimal-band spectra of the three leaf classes. The results show that the ANN spectral classification model is more accurate than the SVM model and runs more efficiently on the cloud platform, and that prediction on the full spectrum is more accurate than on the optimal bands. When the spectral dataset was scaled up, cloud computing markedly improved the computational efficiency of spectral data mining. Cloud-based classification offers a new technique for spectral identification of biotic stress in crops.
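A hedged sketch of the Spark ML pipeline implied by the abstract; the column conventions, layer sizes, and data path are illustrative, and current Spark exposes MLlib-style classifiers through pyspark.ml:

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("leaf-spectra").getOrCreate()
# Expects a "features" vector column (band reflectances) and a "label"
# column (0 = healthy, 1 = light damage, 2 = heavy damage).
data = spark.read.format("libsvm").load("leaf_spectra.libsvm")
train, test = data.randomSplit([0.8, 0.2], seed=42)

ann = MultilayerPerceptronClassifier(layers=[256, 64, 3], seed=42)
model = ann.fit(train)
acc = MulticlassClassificationEvaluator(metricName="accuracy") \
    .evaluate(model.transform(test))
print(f"ANN accuracy: {acc:.3f}")
# An SVM baseline for three classes would wrap LinearSVC (binary-only)
# in a OneVsRest meta-classifier.
```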

9.
汪璐 《物理》2017,46(9):597-605
Deep learning is a class of machine learning algorithms that learn the internal representations of complex data through multiple layers of information abstraction. In recent years, deep learning has made leap-forward progress in artificial intelligence domains such as object recognition and localization and speech recognition. This article first introduces the basic principles of deep learning algorithms and the main motivations for applying them in high-energy physics computing. It then reviews, with examples, applications of deep learning models such as convolutional neural networks, recurrent neural networks, and generative adversarial networks. Finally, it surveys the current state of integrating deep learning with existing high-energy physics computing environments, the open problems, and some reflections.

10.
11.
Edge computing can deliver network services with low latency and real-time processing by providing cloud services at the network edge. It offers advantages such as low latency, locality, and network traffic distribution, but the associated resource management has become a significant challenge because of its inherently hierarchical, distributed, and heterogeneous nature. Cloud-based network services such as crowd sensing, hierarchical deep learning systems, and cloud gaming each have their own traffic patterns and computing requirements, and providing a satisfactory user experience for them requires resource management that comprehensively considers service diversity, client usage patterns, and network performance indicators. This study proposes an algorithm that simultaneously considers computing resources and network traffic load when deploying servers that provide edge services. The algorithm generates candidate deployments based on the factors that affect traffic load, such as the number of servers, server locations, and client mapping according to service characteristics and usage. A final deployment plan is then established using a partial vector bin packing scheme that considers both the generated traffic and the computing resources in the network. The proposed algorithm is evaluated in several simulations that incorporate actual network service and device characteristics.
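The partial vector bin packing step can be pictured as a first-fit-decreasing heuristic over a (traffic, compute) demand vector; a hedged sketch in which the two-dimensional demand model and capacities are assumptions:

```python
def vector_bin_pack(services, capacity=(1.0, 1.0)):
    """First-fit-decreasing over 2-D demands (traffic, cpu).
    Returns a list of edge servers, each a list of placed service demands."""
    servers = []  # each entry: [remaining_traffic, remaining_cpu, placed items]
    # pack "larger" services first, ranked by their dominant dimension
    for dem in sorted(services, key=lambda d: max(d), reverse=True):
        for srv in servers:
            if srv[0] >= dem[0] and srv[1] >= dem[1]:
                srv[0] -= dem[0]; srv[1] -= dem[1]; srv[2].append(dem)
                break
        else:  # no existing server fits: deploy a new one
            servers.append([capacity[0] - dem[0], capacity[1] - dem[1], [dem]])
    return [srv[2] for srv in servers]

plan = vector_bin_pack([(0.4, 0.2), (0.5, 0.6), (0.3, 0.3), (0.2, 0.8)])
```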

12.
To cope with the ever-growing volume of civil aviation data, a big data analysis platform for civil aircraft operations is built with big data storage and analysis technology, more effectively supporting operational services such as rapid response, spare-parts management, and health management. Drawing on the current application of big data technology in civil aviation at home and abroad, the operational business models and data categories of civil aircraft operations are sorted out, and the overall architecture of the platform is designed. Based on current operational business needs, the functions and performance of the hardware platform's management nodes and the computing capability of its data nodes are specified, and the platform's requirements for lightweight computing, offline data computing, and real-time online data processing and analysis are studied, with concrete solutions for each computing mode. Finally, the integration of business applications and the interface technology of the platform are examined. The analysis shows that the results help improve the operational efficiency of Chinese civil aircraft and provide support for a big data platform for civil aircraft operations.

13.
Hybrid composites are a new class of composite materials whose complex microstructure makes predicting their effective thermal conductivity highly challenging. This paper combines the asymptotic homogenization method, the wavelet transform method, and machine learning to develop a new wavelet-machine learning hybrid method that can effectively predict the effective thermal conductivity of hybrid composites. The method consists mainly of offline multiscale modeling and online machine learning. First, offline multiscale modeling with the asymptotic homogenization method establishes the hybrid composite…

14.
Artificial intelligence has developed rapidly in recent years, and universities now widely offer AI courses, yet detailed analyses of AI teaching platforms are lacking. This paper therefore discusses AI course design using the particle swarm algorithm as an example and notes points to consider in teaching. The study shows that timely previewing of fundamentals helps students understand AI models; discussing AI algorithms in the context of concrete problems helps students master the techniques; extending the range of AI applications and guiding students to reflect on the algorithms themselves helps them form correct concepts; and interactive problem-solving in experiments raises students' enthusiasm for learning.
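For reference, a compact particle swarm optimization loop of the kind such a course might walk through; the objective, bounds, and coefficients are illustrative:

```python
import numpy as np

def pso(f, dim=2, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over [-5, 5]^dim with a standard global-best PSO."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))    # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -5, 5)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val                  # update personal bests
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()  # update global best
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda p: (p ** 2).sum())    # minimize the sphere function
```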

15.
杨素素 《应用声学》2017,25(3):55-59
To address the big data problem arising from steadily growing real-time data in networked urban fire-alarm remote monitoring systems, where traditional fire-protection systems cannot process real-time fire data promptly and efficiently, a solution based on cloud computing and the Storm real-time data processing system is proposed. The requirements and performance of the open-source Storm framework are analyzed and its technical architecture improved, and, given the characteristics of fire-protection systems, a highly real-time, highly scalable architecture for real-time data processing at a networked fire monitoring center is designed. A cloud computing platform is also deployed, with a heartbeat mechanism ensuring live connections to each monitored site. The study shows that an architecture based on cloud computing and Storm is fully suited to processing real-time fire data at a networked fire monitoring center, offering high efficiency, high reliability, and notable performance.
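A hedged sketch of the heartbeat mechanism mentioned above; the timeout value and interface are assumptions:

```python
import time

class HeartbeatMonitor:
    """Track last-seen timestamps of monitored sites; a site whose
    heartbeat is older than `timeout` seconds is flagged offline."""
    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_seen = {}

    def beat(self, site_id):
        self.last_seen[site_id] = time.monotonic()   # called on each heartbeat

    def offline_sites(self):
        now = time.monotonic()
        return [s for s, t in self.last_seen.items()
                if now - t > self.timeout]
```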

16.
Advances in technology and computing power have led to the emergence of complex and large-scale software architectures in recent years. However, these are prone to performance anomalies for various reasons, including software bugs, hardware failures, and resource contention. Performance metrics represent the average load on the system and do not help uncover the cause of a problem when abnormal behavior occurs during software execution, so system experts must examine massive amounts of low-level tracing data to determine the cause of a performance issue. In this work, we propose an anomaly detection framework that reduces troubleshooting time while guiding developers toward performance problems by highlighting anomalous parts of trace data. The framework collects streams of system calls during the execution of a process using the Linux Trace Toolkit Next Generation (LTTng) and sends them to a machine learning module that reveals anomalous subsequences of system calls based on their execution times and frequency. Extensive experiments on real datasets from two different applications (MySQL and Chrome), in scenarios with varying amounts of labeled data, demonstrate the effectiveness of our approach in distinguishing normal sequences from abnormal ones.
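One way to realize the frequency/duration features the framework extracts is a fixed-size bag-of-syscalls window scored by an unsupervised detector; a hedged sketch in which the feature choice and the IsolationForest model are illustrative stand-ins for the paper's module:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def window_features(events, vocab, win=50):
    """events: list of (syscall_name, duration). Each window becomes
    per-syscall counts plus the window's mean duration."""
    rows = []
    for i in range(0, len(events) - win + 1, win):
        chunk = events[i:i + win]
        counts = np.zeros(len(vocab))
        for name, _ in chunk:
            counts[vocab[name]] += 1
        rows.append(np.append(counts, np.mean([d for _, d in chunk])))
    return np.array(rows)

vocab = {"read": 0, "write": 1, "open": 2, "close": 3}
normal_trace = [("read", 0.1), ("write", 0.2)] * 200   # placeholder LTTng trace
new_trace = [("open", 5.0), ("close", 4.0)] * 200      # placeholder trace to score
clf = IsolationForest(random_state=0).fit(window_features(normal_trace, vocab))
flags = clf.predict(window_features(new_trace, vocab))  # -1 marks anomalous windows
```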

17.
Ming-Jian Guo 《中国物理 B》2022,31(7):078702
Memristive neural networks have attracted tremendous attention because memristor arrays can perform parallel multiply-accumulate (MAC) operations and in-memory computation, in contrast to digital CMOS hardware systems. However, owing to memristor variability, implementing high-precision neural networks in memristive computation units remains difficult, and existing learning algorithms for memristive artificial neural networks (ANNs) cannot reach performance comparable to that of high-precision CMOS-based systems. Here, we propose an off-chip learning algorithm for low-precision memristive ANNs. The ANN is trained at high precision on digital CPUs, the network weights are then quantized to low precision, and the quantized weights are mapped to the memristor arrays based on the VTEAM model using a pulse-coding weight-mapping rule. In this work, we run inference of a trained five-layer convolutional neural network on the memristor arrays and achieve accuracy close to that of high-precision (64-bit) inference. Compared with other off-chip learning algorithms, the proposed algorithm simplifies the mapping process and is less affected by device variability. Our result provides an effective approach to implementing ANNs on memristive hardware platforms.
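The quantize-then-map step can be sketched as uniform symmetric quantization of trained weights to a small number of conductance levels; the bit width and the reading of level indices as programming-pulse counts are illustrative assumptions:

```python
import numpy as np

def quantize_weights(w, bits=4):
    """Uniformly quantize trained weights to 2^(bits-1) - 1 symmetric levels,
    a stand-in for the limited conductance states of a memristor cell."""
    levels = 2 ** (bits - 1) - 1                  # e.g. 7 positive levels
    scale = np.abs(w).max() / levels
    q = np.round(w / scale).clip(-levels, levels)
    return q * scale, q.astype(int)               # dequantized weights, level indices

w = np.random.randn(64, 32) * 0.1                 # trained layer weights (placeholder)
w_q, idx = quantize_weights(w, bits=4)
# `idx` would drive the number of programming pulses per memristor cell
print("quantization MSE:", float(((w - w_q) ** 2).mean()))
```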

18.
6G – sixth generation – is the latest cellular technology currently under development for wireless communication systems. In recent years, machine learning (ML) algorithms have been applied widely in various fields, such as healthcare, transportation, energy, and autonomous cars. They have also been used in communication technologies to improve system performance in terms of frequency spectrum usage, latency, and security. With the rapid development of ML techniques, especially deep learning (DL), it is critical to consider security when applying these algorithms. While ML algorithms offer significant advantages for 6G networks, security concerns about artificial intelligence (AI) models have so far been largely ignored by the scientific community, even though attackers can poison the AI model itself. This paper proposes a mitigation method, based on adversarial training, for adversarial attacks against 6G ML models for millimeter-wave (mmWave) beam prediction. The idea behind generating such attacks is to produce faulty results by manipulating trained DL models for mmWave beam prediction. We evaluate the proposed mitigation method's performance under a fast gradient sign method (FGSM) attack. The results show that the mean squared error (i.e., the prediction accuracy) of the defended model under attack is very close to that of the undefended model without attack.
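A minimal sketch of FGSM generation and one adversarial-training step for a regression-style beam predictor; the network shape, epsilon, and 50/50 clean/adversarial mix are illustrative assumptions:

```python
import torch

def fgsm(model, x, y, loss_fn, eps=0.01):
    """Fast gradient sign method: perturb inputs along the sign of the
    input gradient to maximize the regression loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_train_step(model, opt, x, y, loss_fn, eps=0.01):
    """One adversarial-training step: mix clean and FGSM batches."""
    x_adv = fgsm(model, x, y, loss_fn, eps)
    opt.zero_grad()
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

model = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                            torch.nn.Linear(128, 16))   # toy beam predictor
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 64), torch.randn(32, 16)         # placeholder batch
adversarial_train_step(model, opt, x, y, torch.nn.MSELoss())
```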

19.
Z.J. Bao  L.J. Ding 《Physica A》2009,388(20):4491-4498
Complex networks may undergo a global cascade of overload failures when a single highly loaded vertex or edge is intentionally attacked. Here we use the recent load model of cascading failures to investigate the performance of small-world (SW) and scale-free (SF) networks subject to deliberate attacks on vertices and edges. Simulation results suggest that, compared with the SW network, the SF network is more vulnerable to deliberate vertex attacks and more robust to deliberate edge attacks. In the SF network, deliberate vertex attacks can cause larger cascading failures than deliberate edge attacks; in the SW network the situation is the opposite. Furthermore, as the rewiring probability increases, the SW network becomes increasingly robust to both deliberate vertex and edge attacks.
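A hedged sketch of the load model used in such simulations, with load taken as betweenness and capacity as (1 + alpha) times the initial load in the standard Motter-Lai fashion; the tolerance alpha and the attack rule are assumptions:

```python
import networkx as nx

def cascade_after_attack(G, alpha=0.2):
    """Attack the most-loaded vertex, then iteratively remove any vertex
    whose recomputed load exceeds its capacity; return surviving fraction."""
    load = nx.betweenness_centrality(G)
    capacity = {v: (1 + alpha) * load[v] for v in G}
    H = G.copy()
    H.remove_node(max(load, key=load.get))        # deliberate vertex attack
    while True:
        load = nx.betweenness_centrality(H)
        failed = [v for v in H if load[v] > capacity[v]]
        if not failed:
            break
        H.remove_nodes_from(failed)               # overload propagates
    return H.number_of_nodes() / G.number_of_nodes()

print(cascade_after_attack(nx.barabasi_albert_graph(200, 3)))  # SF example
```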

20.
卞金洪  吴瑞琦  周锋  赵力 《应用声学》2023,42(2):269-275
Deep neural network methods are widely used in speech enhancement, but achieving strong performance generally requires large, complex models, which are hard to deploy on devices with limited compute or under tight latency constraints. To address this, a teacher-student learning speech enhancement method based on a deep complex convolutional recurrent network is proposed. Real- and imaginary-part feature streams are extracted from the complex long short-term memory recurrent module in the middle of the teacher and student model structures, and frame-level teacher-student distance losses are computed on each stream for knowledge transfer. A multi-resolution spectral loss is also used to further improve the low-complexity student model. Experiments on the public Voice Bank Demand and DNS Challenge datasets show clear improvements over the baseline student model on all metrics.
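The multi-resolution spectral loss can be sketched as an average of STFT-domain distances at several resolutions; the window/hop configurations and the spectral-convergence-plus-log-magnitude form are illustrative assumptions in the spirit of common multi-resolution STFT losses:

```python
import torch

def stft_loss(est, ref, n_fft, hop):
    """Spectral convergence + log-magnitude distance at one resolution."""
    win = torch.hann_window(n_fft)
    E = torch.stft(est, n_fft, hop, window=win, return_complex=True).abs()
    R = torch.stft(ref, n_fft, hop, window=win, return_complex=True).abs()
    sc = torch.norm(R - E) / (torch.norm(R) + 1e-8)
    mag = torch.nn.functional.l1_loss(torch.log(E + 1e-8), torch.log(R + 1e-8))
    return sc + mag

def multi_resolution_stft_loss(est, ref,
                               resolutions=((512, 128), (1024, 256), (2048, 512))):
    """Average the STFT loss over several window/hop configurations so the
    student is supervised at multiple time-frequency trade-offs."""
    return sum(stft_loss(est, ref, n, h) for n, h in resolutions) / len(resolutions)

est, ref = torch.randn(2, 16000), torch.randn(2, 16000)  # placeholder waveforms
loss = multi_resolution_stft_loss(est, ref)
```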
