Similar Literature
16 similar records found (search time: 140 ms)
1.
蒋华  张乐乾  王鑫 《应用声学》2015,23(7):2559-2562
To address the problem that resource scheduling models in cloud computing environments do not adequately account for resource evaluation, and to better adapt to nodes of differing computing performance and the processing demands of large-scale data, a virtual machine resource scheduling strategy based on a multi-dimensional evaluation model is proposed. First, a multi-dimensional resource evaluation model that includes network performance is established for the cloud environment; on this basis, an improved ant colony optimization algorithm implements the scheduling strategy, which is then realized on the cloud simulation platform CloudSim. Experimental results show that the algorithm adapts better to computing environments with different network performance, significantly improves scheduling performance, and reduces the load-balancing deviation across virtual machines, satisfying the load-balancing requirements of virtual machine resources in cloud environments.
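
The pairing of a multi-dimensional evaluation model with an improved ant colony search that this abstract describes might look roughly like the sketch below; the VM profiles, evaluation weights, and pheromone update rule are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical VM profiles (cpu cores, memory GB, network Mbps) and weights --
# illustrative assumptions, not values from the paper.
VMS = [(4, 8, 100), (8, 16, 500), (2, 4, 1000)]
WEIGHTS = (0.4, 0.3, 0.3)

def evaluate(vm):
    """Multi-dimensional score including network performance (higher = stronger node)."""
    return sum(w * c for w, c in zip(WEIGHTS, vm))

TASKS = list(range(10))
ALPHA, BETA, RHO, ANTS, ITERS = 1.0, 2.0, 0.1, 8, 50
pheromone = [[1.0] * len(VMS) for _ in TASKS]

def build_assignment():
    """Each ant assigns every task to a VM by pheromone x heuristic roulette."""
    assign = []
    for t in TASKS:
        scores = [pheromone[t][v] ** ALPHA * evaluate(VMS[v]) ** BETA
                  for v in range(len(VMS))]
        r, acc = random.uniform(0, sum(scores)), 0.0
        for v, s in enumerate(scores):
            acc += s
            if r <= acc:
                assign.append(v)
                break
        else:
            assign.append(len(VMS) - 1)
    return assign

def load_deviation(assign):
    """Std-dev of per-VM load, normalised by node score (the 'deviation' metric)."""
    load = [0.0] * len(VMS)
    for t, v in enumerate(assign):
        load[v] += 1.0 / evaluate(VMS[v])
    mean = sum(load) / len(load)
    return (sum((x - mean) ** 2 for x in load) / len(load)) ** 0.5

best, best_dev = None, float("inf")
for _ in range(ITERS):
    for _ in range(ANTS):
        a = build_assignment()
        d = load_deviation(a)
        if d < best_dev:
            best, best_dev = a, d
    for t in TASKS:                      # evaporate, then reinforce the best path
        for v in range(len(VMS)):
            pheromone[t][v] *= (1 - RHO)
        pheromone[t][best[t]] += 1.0 / (best_dev + 1e-9)

print("assignment:", best, "deviation:", round(best_dev, 4))
```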

2.
王常芳  徐文忠 《应用声学》2015,23(8):2861-2863
The resource scheduling problem in cloud computing environments is studied. Since ant colony optimization (ACO) tends to search slowly and fall into local optima on large-scale combinatorial optimization problems, a bidirectional ant colony optimization algorithm (BACO) is proposed for resource scheduling that achieves load balancing in the cloud. The algorithm takes the load and computing capacity of each virtual machine into account and introduces both forward and backward ant movement in the cloud environment. Simulation experiments on the CloudSim platform show that the algorithm yields a shorter total task completion time, has good search capability, and achieves load balancing, making it an effective resource scheduling algorithm.
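
A minimal sketch of the forward/backward ant idea, under assumed capacities and an assumed pheromone rule (the abstract does not give BACO's actual update equations):

```python
import random

# Assumed setup: VM computing capacities (MIPS) and task lengths -- illustrative only.
CAPACITY = [500.0, 1000.0, 2000.0]
TASKS = [random.uniform(100, 400) for _ in range(20)]
RHO, Q = 0.1, 100.0
pher = [1.0] * len(CAPACITY)                # one trail per VM, kept deliberately simple

def pick_vm():
    """Forward move: roulette selection biased by pheromone and capacity."""
    w = [pher[v] * CAPACITY[v] for v in range(len(CAPACITY))]
    r, acc = random.uniform(0, sum(w)), 0.0
    for v, x in enumerate(w):
        acc += x
        if r <= acc:
            return v
    return len(CAPACITY) - 1

for _ in range(100):                        # one colony iteration per loop
    load = [0.0] * len(CAPACITY)
    route = []
    for t in TASKS:                         # forward ants build an assignment
        v = pick_vm()
        load[v] += t / CAPACITY[v]
        route.append(v)
    makespan = max(load)
    for v in set(route):                    # backward ants retrace the route and
        # deposit more pheromone on lightly loaded VMs (load-balancing feedback)
        pher[v] = (1 - RHO) * pher[v] + Q / (1 + makespan * load[v] * 10)

print("final trails:", [round(p, 2) for p in pher])
```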

3.
罗慧兰 《应用声学》2017,25(12):150-152, 176
To shorten cloud computing execution time, improve cloud performance, and raise the success rate of resource nodes completing tasks, cloud computing resources must be scheduled. Existing scheduling algorithms select suitable scheduling parameters and use the CloudSim toolkit to schedule cloud resources, but they fail to balance load effectively at runtime, which degrades scheduling balance and produces large scheduling errors. A cloud computing resource scheduling algorithm based on Wi-Fi and the Web is therefore proposed. The algorithm first applies adaptive cascade filtering to denoise the cloud resource data stream, then preprocesses the resources using an ontology built on the denoised results, and finally schedules the resources with an artificial bee colony algorithm. Experimental results show that the proposed algorithm applies well to cloud resource scheduling, effectively improves resource utilization, is practical and feasible, and provides reliable support for follow-up research in this field.
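
The final allocation stage names an artificial bee colony; a self-contained ABC sketch for task-to-VM assignment follows, with the cascade-filtering and ontology preprocessing stages omitted and all parameters assumed:

```python
import random

# Hedged ABC sketch for task-to-VM assignment; parameters are illustrative.
N_TASKS, N_VMS, FOOD, LIMIT = 12, 3, 10, 15
EXEC = [[random.uniform(1, 5) for _ in range(N_VMS)] for _ in range(N_TASKS)]

def makespan(sol):
    load = [0.0] * N_VMS
    for t, v in enumerate(sol):
        load[v] += EXEC[t][v]
    return max(load)

def neighbour(sol):
    cand = sol[:]
    cand[random.randrange(N_TASKS)] = random.randrange(N_VMS)
    return cand

foods = [[random.randrange(N_VMS) for _ in range(N_TASKS)] for _ in range(FOOD)]
trials = [0] * FOOD
for _ in range(200):
    for i in range(FOOD):                        # employed bees: local search
        cand = neighbour(foods[i])
        if makespan(cand) < makespan(foods[i]):
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    fits = [1.0 / (1.0 + makespan(f)) for f in foods]
    total = sum(fits)
    for _ in range(FOOD):                        # onlookers: fitness-proportional
        r, acc = random.uniform(0, total), 0.0
        for i, f in enumerate(fits):
            acc += f
            if r <= acc:
                cand = neighbour(foods[i])
                if makespan(cand) < makespan(foods[i]):
                    foods[i], trials[i] = cand, 0
                break
    for i in range(FOOD):                        # scouts: abandon stale sources
        if trials[i] > LIMIT:
            foods[i] = [random.randrange(N_VMS) for _ in range(N_TASKS)]
            trials[i] = 0

print("best makespan:", round(min(makespan(f) for f in foods), 2))
```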

4.
于淑云 《应用声学》2017,25(12):195-198
The communication quality of optical fiber networks is strongly affected by how evenly cloud data is scheduled. To improve communication quality and raise the throughput and fidelity of cloud data transmission, transmission is balanced through parallel scheduling of cloud data, and a parallel cloud-data scheduling model for optical fiber networks based on adaptive decision feedback equalization is proposed. A transmission channel model for optical fiber network communication is constructed; minimum mean square error estimation performs quantized fusion estimation of the network; a matched-filter detector suppresses interference in the cloud data; adaptive decision feedback equalization then equalizes the channel, and over the equalized channel the filtered cloud data is scheduled in parallel and modulated for multi-threaded output. Simulation results show that the method schedules cloud data in parallel with good balance, high output fidelity, and a low bit error rate, improving the overall performance of the optical fiber network.
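
The equalization step can be illustrated with a toy LMS-adapted decision feedback equalizer; the channel, noise level, and tap counts below are assumptions, and the MMSE fusion and matched-filter stages are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy channel and training setup -- not the paper's fiber channel model.
h = np.array([1.0, 0.5, 0.2])                 # channel impulse response
symbols = rng.choice([-1.0, 1.0], size=5000)  # BPSK "cloud data" stream
rx = np.convolve(symbols, h, mode="full")[:len(symbols)]
rx += 0.05 * rng.standard_normal(len(rx))     # additive noise

NF, NB, MU = 6, 3, 0.01                       # feedforward/feedback taps, LMS step
wf, wb = np.zeros(NF), np.zeros(NB)
past = np.zeros(NB)                           # previously decided symbols
errors = 0

for n in range(NF, len(symbols)):
    x = rx[n - NF + 1:n + 1][::-1]            # feedforward input (most recent first)
    y = wf @ x - wb @ past                    # DFE output: FF minus FB of decisions
    d = 1.0 if y >= 0 else -1.0               # hard decision
    e = symbols[n] - y                        # training mode: known symbol as reference
    wf += MU * e * x                          # LMS update of both filters
    wb -= MU * e * past
    past = np.roll(past, 1); past[0] = d
    if n > 1000 and d != symbols[n]:          # count errors after convergence
        errors += 1

print("post-convergence symbol errors:", errors)
```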

5.
何丹丹 《应用声学》2014,22(5):1626-1628,1631
Traditional cloud resource scheduling methods focus only on the maximum task completion time and neglect energy saving and balanced resource load. A cloud resource scheduling optimization method based on a chaotic particle swarm algorithm is therefore proposed. First, a multi-objective mathematical model targeting energy saving and load balancing is defined; a set of solutions near the optimal Pareto front is designed as the initial population, and an improved particle swarm algorithm searches for the optimal schedule. When the best solution does not change for two consecutive generations, a chaotic traversal performs a local search around the particles to accelerate convergence to the global optimum. Experiments in the CloudSim simulation environment combined with Matlab show that the method achieves an average load-balancing deviation of 0.156 and, compared with other methods, offers better load-balancing capability and lower energy consumption, demonstrating strong feasibility.
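
A compact sketch of the chaotic-PSO idea, with an assumed weighted-sum scalarization standing in for the paper's Pareto model and a logistic map driving the local search triggered after two stagnant generations:

```python
import random

# Toy two-objective schedule encoding: position[i] in [0, 1) maps task i to a VM.
# Energy and balance models below are illustrative assumptions only.
N_TASKS, N_VMS, SWARM, ITERS = 15, 4, 20, 60
POWER = [1.0, 1.5, 2.0, 2.5]                 # assumed per-VM power cost

def decode(pos):
    return [int(p * N_VMS) % N_VMS for p in pos]

def fitness(pos):
    """Weighted sum of energy and load deviation (a simple Pareto scalarisation)."""
    vms = decode(pos)
    load = [vms.count(v) for v in range(N_VMS)]
    energy = sum(POWER[v] for v in vms)
    mean = sum(load) / N_VMS
    dev = (sum((l - mean) ** 2 for l in load) / N_VMS) ** 0.5
    return 0.5 * energy / N_TASKS + 0.5 * dev

swarm = [[random.random() for _ in range(N_TASKS)] for _ in range(SWARM)]
vel = [[0.0] * N_TASKS for _ in range(SWARM)]
pbest = [p[:] for p in swarm]
gbest = min(swarm, key=fitness)[:]
stale = 0

for _ in range(ITERS):
    prev = fitness(gbest)
    for i, p in enumerate(swarm):
        for d in range(N_TASKS):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - p[d])
                         + 1.5 * random.random() * (gbest[d] - p[d]))
            p[d] = min(max(p[d] + vel[i][d], 0.0), 0.999)
        if fitness(p) < fitness(pbest[i]):
            pbest[i] = p[:]
        if fitness(p) < fitness(gbest):
            gbest = p[:]
    stale = stale + 1 if fitness(gbest) >= prev else 0
    if stale >= 2:                            # gbest unchanged for two generations:
        z = random.random()                   # chaotic (logistic-map) local search
        for _ in range(20):
            z = 4.0 * z * (1.0 - z)
            cand = [min(max(g + 0.1 * (z - 0.5), 0.0), 0.999) for g in gbest]
            if fitness(cand) < fitness(gbest):
                gbest = cand
        stale = 0

print("best energy+deviation fitness:", round(fitness(gbest), 3))
```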

6.
Research on a cloud computing resource scheduling strategy based on a genetic algorithm (cited 1 time: 0 self-citations, 1 by others)
徐文忠  彭志平  左敬龙 《应用声学》2015,23(5):1653-1656
The resource scheduling problem in cloud computing environments is studied. Given the low resource utilization and unbalanced node load in current cloud environments, a new genetic-algorithm-based scheduling strategy for virtual machine load balancing is proposed. Using historical data, the current system state, and the genetic algorithm, the strategy achieves optimal load balancing while reducing or avoiding dynamic migration; average load is also introduced to measure the algorithm's global load-balancing effect. Simulation experiments on the CloudSim platform show that the strategy has very good global convergence and efficiency: once the system's virtual machines have been scheduled, the algorithm largely resolves load imbalance and high migration cost, and greatly improves resource utilization.
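
A minimal genetic-algorithm sketch of the load-balancing encoding, using deviation from the average load as fitness; the task loads, operators, and rates are illustrative assumptions:

```python
import random

# Assumed encoding: chromosome[i] = VM hosting task i; loads are illustrative.
N_TASKS, N_VMS, POP, GENS = 20, 5, 30, 80
TASK_LOAD = [random.uniform(1, 10) for _ in range(N_TASKS)]

def average_load(chrom):
    load = [0.0] * N_VMS
    for t, v in enumerate(chrom):
        load[v] += TASK_LOAD[t]
    return sum(load) / N_VMS, load

def fitness(chrom):
    """Smaller deviation from the average load = better global balance."""
    mean, load = average_load(chrom)
    return -sum(abs(l - mean) for l in load)

pop = [[random.randrange(N_VMS) for _ in range(N_TASKS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    nxt = pop[:2]                                # elitism: keep the two best
    while len(nxt) < POP:
        a, b = random.sample(pop[:10], 2)        # mate among the fittest
        cut = random.randrange(1, N_TASKS)       # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                # mutation: reassign one task
            child[random.randrange(N_TASKS)] = random.randrange(N_VMS)
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print("balance deviation:", round(-fitness(best), 2))
```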

7.
戚斌 《应用声学》2016,24(12):45-45
Optimizing the storage structure of a database improves its throughput. Traditional approaches use a storage model that partitions data by storage-node fitness checks; duplicate and redundant data in the database cannot be filtered out adaptively, which incurs high storage overhead. A database storage optimization model based on adaptive screening of the distribution structure is proposed. The storage mechanism and distributed data structure of the database are first analyzed; phase-space reconstruction reorganizes the structural distribution of the storage space; adaptive screening of the distribution structure then filters duplicate and redundant data from the extracted information stream, improving how data is distributed across the storage space and achieving storage optimization. Simulation results show that building the database with the improved method raises storage throughput, lowers storage overhead, and improves database access and scheduling performance, demonstrating good practical value.
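
The adaptive redundancy-screening step might be sketched as below; the hash-based duplicate test and threshold rule are assumptions, and the phase-space reconstruction step is not reproduced:

```python
import hashlib

# Minimal sketch of adaptive redundancy screening: records whose content hash
# repeats beyond an adaptive threshold are filtered before storage.
def screen(stream, base_threshold=1):
    seen, kept = {}, []
    for record in stream:
        key = hashlib.sha256(record.encode()).hexdigest()
        seen[key] = seen.get(key, 0) + 1
        # adaptive threshold: tolerate more copies while the stream is small
        limit = base_threshold + (1 if len(kept) < 10 else 0)
        if seen[key] <= limit:
            kept.append(record)
    return kept

data = ["row-a", "row-b", "row-a", "row-a", "row-c", "row-b"]
print(screen(data))                     # duplicates beyond the limit are dropped
```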

8.
孙花  朱锦新 《应用声学》2014,22(10):3343-3346
In heterogeneous cloud environments, the inconsistent physical distribution of computing and storage resources often makes traditional scheduling algorithms inefficient and poorly balanced when allocating task resources. A cloud task resource allocation model based on Q-learning and a bidirectional ACO algorithm is therefore designed. First, a master-slave scheduling model is introduced, and a resource allocation objective function is designed that jointly considers task completion time, network bandwidth, and latency. Next, a Q-learning-based initial allocation method is designed, and the Q-values of the optimal policy it learns are used to initialize the Q-values of the network nodes. Finally, a bidirectional ACO algorithm combining forward and backward ants performs the final task resource allocation; the algorithm is formally defined and described. Simulation experiments in CloudSim show that the method effectively allocates task resources in heterogeneous cloud environments, with an average load-balancing deviation of about 0.0715, making it an effective resource allocation method for heterogeneous clouds compared with other approaches.
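
The Q-learning stage and its hand-off to the ant colony stage can be sketched as follows; the reward weights are assumptions, and the bidirectional ant search itself is omitted (see the ACO sketches under items 1 and 2):

```python
import random

# Assumed objective weights over completion time and delay; the master-slave
# model and the bidirectional ants are not reproduced here.
N_VMS = 4
EXEC = [[random.uniform(1, 5) for _ in range(N_VMS)] for _ in range(10)]
DELAY = [random.uniform(0.1, 1.0) for _ in range(N_VMS)]
W_T, W_D = 0.7, 0.3
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

Q = [[0.0] * N_VMS for _ in EXEC]               # state = task index, action = VM

for _ in range(500):                            # tabular Q-learning episodes
    for t in range(len(EXEC)):
        if random.random() < EPS:               # epsilon-greedy exploration
            v = random.randrange(N_VMS)
        else:
            v = max(range(N_VMS), key=lambda a: Q[t][a])
        reward = -(W_T * EXEC[t][v] + W_D * DELAY[v])
        nxt = max(Q[t + 1]) if t + 1 < len(EXEC) else 0.0
        Q[t][v] += ALPHA * (reward + GAMMA * nxt - Q[t][v])

# The learned Q-values seed the initial pheromone of the ACO stage, as the
# abstract describes (shifted so every trail starts positive).
pheromone = [[q - min(row) + 0.1 for q in row] for row in Q]
print("initial pheromone, task 0:", [round(p, 2) for p in pheromone[0]])
```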

9.
An improved genetic algorithm for cloud computing resource scheduling (cited 1 time: 0 self-citations, 1 by others)
刘峰  毕利  杨军 《应用声学》2016,24(5):202-206
Round-robin scheduling, genetic algorithms, and simulated annealing suffer from slow convergence, premature convergence, and unbalanced resource load in cloud resource scheduling. An improved genetic algorithm incorporating simulated annealing (Simulated Annealing Improved Genetic Algorithm, SAIGA) is therefore proposed. The improved algorithm designs a dual fitness function based on average task completion time and load balance, together with adaptive crossover and mutation probability functions; it accepts inferior solutions with a certain probability during annealing to avoid premature convergence, and uses the standard deviation of the number of tasks assigned to each virtual resource as the selection criterion to balance node load. Simulation results show that the improved algorithm outperforms the above algorithms in average task completion time, resource utilization, and convergence speed, and finds the optimal schedule quickly, demonstrating good feasibility and practicality.
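
A rough SAIGA-style sketch combining tournament selection, adaptive crossover/mutation rates, and Metropolis acceptance of inferior children; summing the two objectives as the dual fitness is an assumption, as are all parameter values:

```python
import math, random

# Assumed toy model: exec time of task t on VM v; SAIGA parameters are guesses.
N_TASKS, N_VMS, POP = 20, 4, 24
EXEC = [[random.uniform(1, 6) for _ in range(N_VMS)] for _ in range(N_TASKS)]

def objectives(ch):
    load, counts = [0.0] * N_VMS, [0] * N_VMS
    for t, v in enumerate(ch):
        load[v] += EXEC[t][v]
        counts[v] += 1
    mean = sum(counts) / N_VMS              # std-dev of task counts = balance term
    bal = (sum((c - mean) ** 2 for c in counts) / N_VMS) ** 0.5
    return sum(load) / N_TASKS, bal         # (avg completion time, balance)

def fitness(ch):
    t, b = objectives(ch)
    return t + b                            # dual fitness, lower is better

pop = [[random.randrange(N_VMS) for _ in range(N_TASKS)] for _ in range(POP)]
temp = 10.0
for gen in range(100):
    f = [fitness(c) for c in pop]
    favg = sum(f) / len(f)
    nxt = []
    for _ in range(POP):
        a, b = random.sample(range(POP), 2)
        parent = pop[a] if f[a] < f[b] else pop[b]
        child = parent[:]
        # adaptive probabilities: fitter-than-average parents are disturbed less
        pc = 0.6 if fitness(parent) <= favg else 0.9
        pm = 0.05 if fitness(parent) <= favg else 0.2
        if random.random() < pc:
            mate = random.choice(pop)
            cut = random.randrange(1, N_TASKS)
            child = parent[:cut] + mate[cut:]
        if random.random() < pm:
            child[random.randrange(N_TASKS)] = random.randrange(N_VMS)
        # simulated-annealing acceptance: keep worse children with prob e^(-dF/T)
        df = fitness(child) - fitness(parent)
        nxt.append(child if df < 0 or random.random() < math.exp(-df / temp)
                   else parent)
    pop = nxt
    temp *= 0.95                            # cooling schedule

best = min(pop, key=fitness)
print("best (time, balance):", tuple(round(x, 2) for x in objectives(best)))
```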

10.
Virtualization lets cloud computing systems allocate computing resources more flexibly and efficiently, enabling administrators to provision cloud resources on demand. However, a virtualized cloud data center contains a large number of diverse virtual machine resources, making it difficult to place virtual machines on the physical host cluster sensibly while maintaining good load balance. A load-balancing model for placing virtual machines onto physical hosts in a cloud data center is therefore presented, and an improved particle swarm optimization (PSO) algorithm is used to find the optimal placement. Simulation comparisons against common virtual machine placement algorithms verify the effectiveness of the proposed cloud load-balancing optimization algorithm.
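
A sketch of PSO-based virtual machine placement with host-utilization variance as the objective; since the abstract does not specify the PSO improvement, linearly decreasing inertia is used here as a stand-in:

```python
import random

# Assumed VM demands and host count; the paper's exact PSO variant is unknown.
VM_CPU = [random.uniform(0.5, 2.0) for _ in range(12)]
HOSTS, SWARM, ITERS = 4, 15, 50

def utilisation(placement):
    u = [0.0] * HOSTS
    for vm, h in enumerate(placement):
        u[h] += VM_CPU[vm]
    return u

def imbalance(placement):
    """Variance of host CPU utilisation -- the load-balancing objective."""
    u = utilisation(placement)
    mean = sum(u) / HOSTS
    return sum((x - mean) ** 2 for x in u) / HOSTS

to_place = lambda pos: [int(p) % HOSTS for p in pos]
swarm = [[random.random() * HOSTS for _ in VM_CPU] for _ in range(SWARM)]
vel = [[0.0] * len(VM_CPU) for _ in range(SWARM)]
pbest = [p[:] for p in swarm]
gbest = min(swarm, key=lambda p: imbalance(to_place(p)))[:]

for it in range(ITERS):
    w = 0.9 - 0.5 * it / ITERS                   # decreasing inertia weight
    for i, p in enumerate(swarm):
        for d in range(len(VM_CPU)):
            vel[i][d] = (w * vel[i][d]
                         + 2.0 * random.random() * (pbest[i][d] - p[d])
                         + 2.0 * random.random() * (gbest[d] - p[d]))
            p[d] = min(max(p[d] + vel[i][d], 0.0), HOSTS - 1e-6)
        if imbalance(to_place(p)) < imbalance(to_place(pbest[i])):
            pbest[i] = p[:]
        if imbalance(to_place(p)) < imbalance(to_place(gbest)):
            gbest = p[:]

print("host utilisation:", [round(x, 2) for x in utilisation(to_place(gbest))])
```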

11.
徐浙君  陈善雄 《应用声学》2017,25(1):127-130
For resource scheduling in the cloud, individuals of the ant colony algorithm are mapped onto feasible cloud resource schedules. Cloud resource scheduling is first formalized; a balance factor is then introduced into the ants' path selection, and the pheromone is examined both locally and globally. The ant individuals are embedded in a membrane computing framework, where intra-membrane and inter-membrane operations improve the local and global convergence of the algorithm. Finally, a matching-table concept is introduced for cloud resource allocation to match tasks with resources. The fused algorithm improves overall performance: simulation experiments show clear reductions in network consumption, cost, and energy consumption, together with improved resource allocation efficiency.
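
The membrane structure can be sketched as an island-style model: each membrane evolves its own population (intra-membrane operations) and the best individual migrates between membranes (inter-membrane operations). Pheromone handling and the matching table are omitted, and all parameters are assumed:

```python
import random

# Hedged sketch of the membrane (P-system) structure only.
N_TASKS, N_VMS, MEMBRANES, SIZE = 12, 3, 4, 6
EXEC = [[random.uniform(1, 5) for _ in range(N_VMS)] for _ in range(N_TASKS)]

def cost(sol):
    load = [0.0] * N_VMS
    for t, v in enumerate(sol):
        load[v] += EXEC[t][v]
    return max(load)

def intra_step(pop):
    """Intra-membrane operation: mutate each individual, keep improvements."""
    out = []
    for s in pop:
        c = s[:]
        c[random.randrange(N_TASKS)] = random.randrange(N_VMS)
        out.append(c if cost(c) < cost(s) else s)
    return out

membranes = [[[random.randrange(N_VMS) for _ in range(N_TASKS)]
              for _ in range(SIZE)] for _ in range(MEMBRANES)]
for _ in range(100):
    membranes = [intra_step(pop) for pop in membranes]
    # inter-membrane operation: the global best replaces each membrane's worst
    best = min((s for pop in membranes for s in pop), key=cost)
    for pop in membranes:
        pop.sort(key=cost)
        pop[-1] = best[:]

print("best makespan:", round(cost(best), 2))
```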

12.
Faced with limited network resources, diverse service requirements, and complex network structures, efficiently allocating resources and improving network performance is an important issue that needs to be addressed in 5G and future 6G networks. In this paper, we propose a multi-timescale collaborative resource allocation algorithm for distributed fog radio access networks (F-RANs) based on self-learning. The algorithm uses a distributed computing architecture for parallel optimization, and each optimization model includes large time-scale resource allocation and small time-scale resource scheduling. First, we establish a large time-scale resource allocation model based on long-term average information, such as the historical bandwidth requirements of each network slice in the F-RAN, using a long short-term memory (LSTM) network to obtain the bandwidth each slice requires in the next period. Then, based on the allocated bandwidth, we establish a resource scheduling model based on short-term instantaneous information such as channel gain, using reinforcement learning (RL), which interacts with the environment to realize adaptive resource scheduling. The cumulative effects of small time-scale scheduling in turn trigger another round of large time-scale resource reallocation, so the two together constitute a self-learning, closed-loop resource allocation optimization. Simulation results show that, compared with other algorithms, the proposed algorithm significantly improves resource utilization.
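
A structural sketch of the two-timescale closed loop only: `predict_bandwidth` stands in for the paper's LSTM and `schedule` for its RL agent, so neither reproduces the actual models, and all values are illustrative:

```python
import random

SLICES = 3
history = {s: [random.uniform(10, 50) for _ in range(8)] for s in range(SLICES)}

def predict_bandwidth(series):
    """Stand-in for the LSTM: exponentially weighted forecast of next demand."""
    f = series[0]
    for x in series[1:]:
        f = 0.7 * f + 0.3 * x
    return f

def schedule(allocated, gains):
    """Stand-in for the RL agent: grant unit slots to the best channel gains,
    capped by the slice's large-timescale bandwidth allocation."""
    used = 0.0
    for g in sorted(gains, reverse=True):
        if used + 1.0 > allocated:
            break
        used += 1.0
    return used

for epoch in range(5):                      # large timescale: slice reallocation
    alloc = {s: predict_bandwidth(history[s]) for s in range(SLICES)}
    for step in range(10):                  # small timescale: per-slot scheduling
        for s in range(SLICES):
            gains = [random.random() for _ in range(20)]
            used = schedule(alloc[s], gains)
            history[s].append(used)         # cumulative usage triggers the next
    history = {s: history[s][-8:] for s in range(SLICES)}  # round of reallocation

print("final allocations:", {s: round(a, 1) for s, a in alloc.items()})
```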

13.
Better resource scheduling has long been a focus of cloud computing research, and this paper introduces the cuckoo search algorithm into cloud resource scheduling. To counter the cuckoo algorithm's premature convergence and tendency toward local oscillation, a Gaussian mutation operator is first introduced to select the best nest position at each stage, and an adaptive dynamic factor then adjusts nest position selection across stages, improving the convergence accuracy of the improved algorithm. By balancing the fitness function and applying the three operators of the genetic algorithm, the algorithm effectively improves resource allocation efficiency in the cloud and reduces network consumption. In simulation experiments on the CloudSim platform, comparisons along three dimensions show that the algorithm substantially improves performance, resource scheduling efficiency, and task scheduling, effectively strengthening the resource scheduling capability of the cloud computing system.
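
A hedged cuckoo-search sketch with Lévy flights and a Gaussian mutation of the best nest, as the abstract describes; the makespan objective, step sizes, and the omission of the genetic operators are all simplifying assumptions:

```python
import math, random

# Assumed task/VM model; cuckoo-search parameters are illustrative.
N_TASKS, N_VMS, NESTS, PA = 15, 4, 12, 0.25
EXEC = [[random.uniform(1, 5) for _ in range(N_VMS)] for _ in range(N_TASKS)]

def cost(nest):
    load = [0.0] * N_VMS
    for t, p in enumerate(nest):
        load[int(p) % N_VMS] += EXEC[t][int(p) % N_VMS]
    return max(load)                             # makespan as the fitness

def levy_step():
    """Mantegna's algorithm for a Levy-distributed step (beta = 1.5)."""
    beta = 1.5
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

nests = [[random.uniform(0, N_VMS) for _ in range(N_TASKS)] for _ in range(NESTS)]
best = min(nests, key=cost)[:]
for _ in range(100):
    for i in range(NESTS):                       # Levy flight toward the best nest
        cand = [min(max(x + 0.1 * levy_step() * (x - b), 0), N_VMS - 1e-6)
                for x, b in zip(nests[i], best)]
        if cost(cand) < cost(nests[i]):
            nests[i] = cand
    for i in range(NESTS):                       # abandon a fraction PA of nests
        if random.random() < PA:
            nests[i] = [random.uniform(0, N_VMS) for _ in range(N_TASKS)]
    best = min(nests + [best], key=cost)[:]
    mutant = [min(max(random.gauss(x, 0.2), 0), N_VMS - 1e-6) for x in best]
    if cost(mutant) < cost(best):                # Gaussian mutation of the best
        best = mutant                            # nest, as the abstract describes

print("makespan:", round(cost(best), 2))
```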

14.
Critical healthcare application tasks require a real-time response because they affect patients' lives. Fog computing is well suited to healthcare because it offers fast response and low energy consumption. However, current solutions have difficulty scheduling tasks to the right computing devices according to their priorities and capacities, so as to meet task deadlines and resource limitations with minimal latency. Further challenges of load balancing and prioritization arise when emergency healthcare tasks must be scheduled over inadequate computing resources and telecommunication networks. In this study, a fog computing resource management (FRM) model is proposed that provides three main solutions. First, resource availability is calculated from the average execution time of each task. Second, load balancing is enhanced by a hybrid approach that combines a multi-agent load balancing algorithm with the throttled load balancing algorithm. Third, tasks are scheduled according to priority, resource availability, and load balance. The results were obtained using the iFogSim toolkit. Two datasets are used: a blood pressure dataset acquired from the UTeM clinic and an ECG dataset acquired from the University of California at Irvine; the two are integrated to enlarge the attribute set and obtain accurate results. The results demonstrate the effectiveness of managing resources and optimizing task scheduling and balancing in a fog computing environment. Compared with other studies, the FRM model reduces delay by 55%, response time by 72%, cost by 72%, and energy consumption by 70%.
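
The scheduling flow (availability check, throttled-style node selection, priority-first dispatch) might be sketched as below; the class names, capacities, and overflow-to-cloud rule are assumptions, not the FRM code:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                     # 0 = emergency, larger = less urgent
    name: str = field(compare=False)
    exec_time: float = field(compare=False)

class FogNode:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.busy = name, capacity, 0.0
    def available(self, task):
        # availability from average execution time vs remaining capacity
        return self.busy + task.exec_time <= self.capacity

def throttled_pick(nodes, task):
    """Throttled-style: choose among available nodes, tie-break by least busy
    (the multi-agent half of the hybrid is abstracted away)."""
    ok = [n for n in nodes if n.available(task)]
    return min(ok, key=lambda n: n.busy) if ok else None

nodes = [FogNode("fog-1", 10.0), FogNode("fog-2", 6.0)]
queue = [Task(0, "ecg-alert", 2.0), Task(2, "bp-log", 3.0), Task(0, "ecg-burst", 4.0)]
heapq.heapify(queue)                  # emergency tasks are dispatched first

while queue:
    task = heapq.heappop(queue)
    node = throttled_pick(nodes, task)
    if node is None:
        print(task.name, "-> cloud (no fog capacity)")   # overflow to the cloud tier
    else:
        node.busy += task.exec_time
        print(task.name, "->", node.name)
```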

15.
With the rapid development of cloud computing, data center applications built on large-scale storage and computing have become one of the most important service types. High-performance computing facilities and large-capacity storage devices are now widely distributed across locations, so making full use of existing data centers depends mainly on effective joint scheduling of application-layer and network-layer resources. To meet the rigid requirements of data center applications, a novel convergence control architecture, the Service-Oriented Group Engine (SOGE) framework, is proposed for multi-domain optical networks based on the DREAM architecture, and a corresponding resource demand model (RDM) is built. A resource joint scheduling algorithm (RJSA) for application-layer and network-layer resources is proposed and implemented on the SOGE framework. The SOGE framework and the joint scheduling algorithm are validated and demonstrated on a test-bed based on the DREAM architecture.
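
In the spirit of RJSA, joint selection over application-layer and network-layer resources can be illustrated by scoring (data centre, lightpath) pairs; all values, the weights, and the function name `rjsa_pick` are illustrative assumptions:

```python
# Choose the (data centre, optical path) pair minimising a weighted sum of
# compute load and path cost -- a toy stand-in for joint scheduling.
DCS = {"dc1": 0.6, "dc2": 0.3, "dc3": 0.8}        # current compute load (0..1)
PATHS = {("dc1", 2): 0.2, ("dc1", 3): 0.5,        # (dc, wavelength): path cost
         ("dc2", 1): 0.4, ("dc3", 1): 0.1}

def rjsa_pick(demand_cpu, w_app=0.5, w_net=0.5):
    feasible = [k for k in PATHS if DCS[k[0]] + demand_cpu <= 1.0]
    if not feasible:
        return None                               # blocked: no joint resources
    return min(feasible,
               key=lambda k: w_app * (DCS[k[0]] + demand_cpu) + w_net * PATHS[k])

print("selected (data centre, wavelength):", rjsa_pick(0.2))
```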

16.
Edge computing can deliver network services with low latency and real-time processing by providing cloud services at the network edge. Edge computing has a number of advantages such as low latency, locality, and network traffic distribution, but the associated resource management has become a significant challenge because of its inherent hierarchical, distributed, and heterogeneous nature. Various cloud-based network services such as crowd sensing, hierarchical deep learning systems, and cloud gaming each have their own traffic patterns and computing requirements. To provide a satisfactory user experience for these services, resource management that comprehensively considers service diversity, client usage patterns, and network performance indicators is required. In this study, an algorithm is proposed that simultaneously considers computing resources and network traffic load when deploying the servers that provide edge services. The proposed algorithm generates candidate deployments based on factors that affect traffic load, such as the number of servers, server location, and client mapping according to service characteristics and usage. A final deployment plan is then established using a partial vector bin packing scheme that considers both the generated traffic and computing resources in the network. The proposed algorithm is evaluated using several simulations that consider actual network service and device characteristics.
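
A toy version of the two-stage approach: enumerate candidate deployments, then pack per-service (cpu, bandwidth) vectors onto the chosen servers with first-fit decreasing. The sites, demand vectors, and scoring rule are assumptions, keeping only a partial flavour of the paper's vector bin packing:

```python
import itertools

SITES = {"A": 0.2, "B": 0.5, "C": 0.3}          # share of client demand near each site
SERVICES = [(0.3, 0.4), (0.5, 0.2), (0.2, 0.3), (0.4, 0.5)]   # (cpu, bw) per service
CAP = (1.0, 1.0)                                 # per-server capacity vector

def traffic_cost(locs):
    """Demand not covered by a local server must cross the core network."""
    return sum(share for s, share in SITES.items() if s not in locs)

def vector_pack(locs):
    """First-fit-decreasing over the (cpu, bw) vectors of the services."""
    bins = {s: [0.0, 0.0] for s in locs}
    for cpu, bw in sorted(SERVICES, key=lambda v: -(v[0] + v[1])):
        for b in bins.values():
            if b[0] + cpu <= CAP[0] and b[1] + bw <= CAP[1]:
                b[0] += cpu; b[1] += bw
                break
        else:
            return None                          # candidate cannot host all services
    return bins

best, best_score = None, float("inf")
for k in range(1, len(SITES) + 1):               # candidate deployments by size
    for locs in itertools.combinations(SITES, k):
        if vector_pack(locs) is None:
            continue
        score = traffic_cost(locs) + 0.2 * k     # traffic + server-count penalty
        if score < best_score:
            best, best_score = locs, score

print("deploy at:", best, "score:", round(best_score, 2))
```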

