Similar Articles
20 similar articles found (search time: 531 ms)
1.
Mobile edge computing (MEC) is a key feature of next-generation mobile networks, aimed at providing a variety of services for different applications by performing the related processing tasks closer to the users. With the advent of next-generation mobile networks, researchers have turned their attention to various aspects of edge computing in an effort to leverage the new capabilities offered by 5G. The integration of software-defined networking (SDN) and MEC techniques has therefore been seriously considered to facilitate the orchestration and management of Mobile Edge Hosts (MEHs). Edge clouds can be installed as an interface between local servers and the core to provide the required services based on the well-known concepts of SDN networks. Nonetheless, reliability and fault tolerance are of great importance in such networks. This paper introduces a dynamic architecture that focuses on the end-to-end mobility support required to maintain service continuity and quality of service. It also presents an SDN control plane with a stochastic network calculus (SNC) framework to control MEC data flows. Based on the arrival processes of the different QoS-class data flows, closed-form problems are formulated to determine the relation between resource utilization and the violation probability of each data flow. Compared to other solutions investigated in the literature, the proposed approach exhibits a significant increase in the throughput distributed over the active links of the mobile edge hosts. The results also show that the outage index and the system's aggregate data rate can be improved by up to 32%.

2.
Design and Implementation of a Resource Scheduling System in a Cloud Computing Environment
张露  尚艳玲 《应用声学》2017,25(1):131-134
In a cloud computing environment, the data in open networked large-database information systems must be scheduled optimally to improve the utilization efficiency and configuration of data resources. Traditional resource scheduling algorithms rely on autocorrelation matching of resource information; when interference in the data transmission channel is strong and prior data on the resource information flow are lacking, scheduling balance is poor and registration accuracy is low. This paper proposes a resource scheduling algorithm based on cloud resource load-balancing control and channel-adaptive equalization, and designs and implements the corresponding scheduling software. First, a time-series analysis model of the information resource flow of open networked large databases in the cloud environment is constructed. An adaptive cascade filtering algorithm then denoises the fitted resource information flow, the correlation-dimension features of the filtered flow are extracted, and scheduling is improved through load-balancing control and channel-adaptive equalization. Simulation results show that the software built on the proposed algorithm improves the registration capability and interference resistance of resource scheduling with low computational overhead, demonstrating superior technical indicators.

3.
蒋华  张乐乾  王鑫 《应用声学》2015,23(7):2559-2562
To address the fact that resource scheduling models in cloud environments insufficiently consider resource evaluation, and to better adapt to nodes with differing computing performance and to large-scale data processing demands, a virtual machine resource scheduling strategy based on a multi-dimensional evaluation model is proposed. First, a multi-dimensional resource evaluation model that includes network performance is established for the cloud environment; on this basis, an improved ant colony optimization algorithm implements the scheduling strategy, which is then realized on the cloud simulation platform CloudSim. Experimental results show that the algorithm adapts better to computing environments with different network performance, significantly improves scheduling performance, and reduces the deviation of virtual machine load balancing, meeting the load-balancing requirements of virtual machine resources in cloud environments.
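The ant-colony scheduling idea above can be sketched in miniature. The toy below is a generic ACO task-to-VM assignment, not the paper's algorithm: the multi-dimensional score `vm_score`, its weights, and all parameters are illustrative assumptions.

```python
import random

def vm_score(vm):
    # Multi-dimensional evaluation: weighted mix of CPU, memory and network.
    # The 0.5/0.3/0.2 weights are illustrative, not from the paper.
    return 0.5 * vm["cpu"] + 0.3 * vm["mem"] + 0.2 * vm["net"]

def aco_schedule(tasks, vms, n_ants=20, n_iters=50, alpha=1.0, beta=2.0,
                 rho=0.1, q=1.0, seed=0):
    rng = random.Random(seed)
    tau = [[1.0] * len(vms) for _ in tasks]   # pheromone[task][vm]
    eta = [vm_score(vm) for vm in vms]        # static heuristic per VM
    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        solutions = []
        for _ in range(n_ants):
            assign, load = [], [0.0] * len(vms)
            for t, demand in enumerate(tasks):
                weights = [(tau[t][v] ** alpha) * (eta[v] ** beta)
                           for v in range(len(vms))]
                v = rng.choices(range(len(vms)), weights=weights)[0]
                assign.append(v)
                load[v] += demand / vms[v]["cpu"]
            cost = max(load)                  # makespan-style balance cost
            solutions.append((cost, assign))
            if cost < best_cost:
                best, best_cost = assign, cost
        # Evaporate, then deposit pheromone weighted by solution quality.
        for t in range(len(tasks)):
            for v in range(len(vms)):
                tau[t][v] *= (1.0 - rho)
        for cost, assign in solutions:
            for t, v in enumerate(assign):
                tau[t][v] += q / (1.0 + cost)
    return best, best_cost

vms = [{"cpu": 4.0, "mem": 8.0, "net": 1.0},
       {"cpu": 2.0, "mem": 4.0, "net": 0.5},
       {"cpu": 8.0, "mem": 16.0, "net": 2.0}]
tasks = [1.0, 2.0, 0.5, 3.0, 1.5, 2.5]
assign, cost = aco_schedule(tasks, vms)
print(assign, round(cost, 3))
```

The heuristic term biases ants toward better-rated VMs while the pheromone rewards assignments that produced low makespan, which is the general ACO mechanism the paper refines with its balance factor.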

4.
罗慧兰 《应用声学》2017,25(12):150-152, 176
To shorten cloud computing execution time, improve performance, and raise the task-completion success rate of resource nodes, cloud computing resources must be scheduled. Current scheduling algorithms select appropriate scheduling parameters and use the CloudSim simulation tool to schedule cloud resources, but they fail to balance load effectively at runtime, leading to poor scheduling balance and large errors in the scheduling results. This paper therefore proposes a cloud resource scheduling algorithm based on Wi-Fi and the Web. The algorithm first denoises the cloud resource data flow with an adaptive cascade filtering algorithm, then preprocesses the cloud resources using an ontology built on the denoised result, and finally schedules the resources with an artificial bee colony algorithm. Experimental results show that the proposed algorithm applies well to cloud resource scheduling and effectively improves resource utilization, demonstrating practicality and providing reliable support for subsequent research in this field.

5.
Vehicular edge computing is a new computing paradigm. By introducing edge computing into the Internet of Vehicles (IoV), service providers can serve users with low-latency services, since edge computing deploys resources (e.g., computation, storage, and bandwidth) close to the IoV users. When mobile nodes are moving and generating structured tasks, they can connect with the roadside units (RSUs) and then choose a proper time and several suitable Mobile Edge Computing (MEC) servers to offload the tasks. However, offloading tasks in sequence efficiently is challenging. In response to this problem, this paper proposes a time-optimized multi-task offloading model adopting the principles of Optimal Stopping Theory (OST), with the objective of maximizing the probability of offloading to the optimal servers. For the case where server utilization is close to uniformly distributed, another OST-based model is proposed with the objective of minimizing the total offloading delay. The proposed models are compared and evaluated against related OST models using simulated and real data sets, and a sensitivity analysis is performed. The results show that the proposed offloading models can be efficiently implemented in the mobile nodes and significantly reduce the total expected processing time of the tasks.
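The flavor of OST-based offloading can be illustrated with the classic 1/e stopping rule, which the paper's models generalize: a vehicle passes MEC servers in sequence, observes each server's quality once, and must decide immediately whether to offload. The uniform qualities and the `offload_stop` helper below are illustrative assumptions, not the paper's exact model.

```python
import math
import random

def offload_stop(qualities):
    # 1/e rule: observe the first n/e servers without stopping, then offload
    # to the first server that beats the best quality seen in that sample.
    n = len(qualities)
    k = max(1, int(n / math.e))
    threshold = max(qualities[:k])
    for i in range(k, n):
        if qualities[i] > threshold:
            return i
    return n - 1  # forced to use the last server in range

random.seed(1)
trials, hits = 10000, 0
for _ in range(trials):
    q = [random.random() for _ in range(20)]
    if offload_stop(q) == q.index(max(q)):
        hits += 1
print(round(hits / trials, 2))  # empirically close to the 1/e ≈ 0.37 bound
```

Despite seeing each server only once and never revisiting, the rule picks the single best server roughly 37% of the time, which is why OST is attractive for sequential offloading decisions.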

6.
With the advent of the Internet of Everything, the combination of AI (artificial intelligence) and edge computing has become a new research hotspot, giving rise to edge intelligence, which enables network edge devices to analyze data with AI algorithms. Since the edge computing environment is more complex and variable than cloud computing, building edge intelligence raises many issues, such as the lack of quantitative evaluation criteria, heterogeneous computing platforms, complex network topologies, and changing user requirements. To analyze the performance of edge intelligence workloads running on heterogeneous hardware platforms, we target machine learning workloads and analyze, in terms of relative performance, the impact of algorithm model complexity, edge data characteristics, and differences between heterogeneous platforms. By analyzing these machine learning workloads, we find that a model's inference time and memory usage can be predicted from its amount of computation and its number of parameters. Moreover, image complexity, edge data network features, and batch size all affect the performance of edge intelligence workloads. Furthermore, the upper limit of model performance on a given computing platform is set by its hardware resources. Finally, the model performance of a platform depends on its own computing power and bandwidth.

7.
To address the weak online real-time performance of conventional video surveillance systems, the lag in transmitting massive video data, and their limited task management, an online video surveillance system based on multi-virtual-machine technology in a cloud computing environment is proposed. The system uses the physical and service resources of the cloud platform to improve the data-processing capability of online video surveillance: virtual machines process large volumes of surveillance data concurrently, and video data are stored on cloud servers, reducing equipment costs while allowing services to be customized for different users. Built on a cloud platform, the system applies tens to hundreds of virtual machines to process online surveillance data; the system architecture, Ethernet communication interface, server hardware configuration, and virtual machine control are designed and implemented. On the software side, resources are allocated dynamically based on the computed utilization of each virtual machine, effectively reducing the bandwidth overhead of transmitting system state information over the network. Functional and performance tests show that, on a conventional public network with 10 Mbps bandwidth, the transmission delay of online surveillance data is reduced by more than 85% compared with traditional video surveillance, and the volume of surveillance video data is reduced by more than 75%.

8.
Having shown promising performance with high flexibility and efficiency in vehicular edge computing (VEC) networks, parked vehicles (PVs) have received increasing attention in recent years. However, the PVs' residual battery power restricts their running time, and previous VEC frameworks offer no alternative resource pool for PVs to cope with emergencies. To alleviate these problems, we model a cloud-assisted parked vehicular edge computing (PVEC) framework in which the PVs are classified by their residual battery power and cooperate with cloud servers (CSs) to provide computational resources. In addition, we formulate the utilities of the service provider (SP) and the PVs and design a contract-based resource allocation problem that maximizes the SP's utility. Since the optimization problem is intractable to solve directly, the primal problem is simplified and decoupled into two sub-problems. To design the optimal contracts, we solve the sub-problems with the Lagrangian multiplier method and the dual function. Simulation results show that the utilities of the PVs reach their maximum when each PV chooses the contract corresponding to its type, and they illustrate the superiority of the proposed scheme over previous schemes in improving the SP's utility and social welfare.

9.
When an unmanned aerial vehicle (UAV) performs tasks such as power line patrol inspection, water quality detection, or field scientific observation, the limits of its computing capacity and battery power prevent it from completing the tasks efficiently. An effective remedy is to deploy edge servers near the UAV, so that it can offload some of its computationally intensive, real-time tasks to them. In this paper, a mobile edge computing offloading strategy based on reinforcement learning is proposed. First, the Stackelberg game model is introduced to model the UAVs and edge nodes in the network, and a utility function expresses the offloading revenue to be maximized. Second, as the problem is a mixed-integer non-linear programming (MINLP) problem, we introduce the multi-agent deep deterministic policy gradient (MADDPG) method to solve it. Finally, the effects of the number of UAVs and of the total computing resources on the UAVs' total revenue were evaluated through simulation experiments. The results show that, compared with other algorithms, the proposed algorithm more effectively improves the total benefit of the UAVs.

10.
徐浙君  陈善雄 《应用声学》2017,25(1):127-130
For the resource scheduling problem in cloud computing, this paper maps individuals of the ant colony algorithm to feasible resource schedules. The cloud resource scheduling problem is first described. A balance factor is then introduced into the ants' path selection, and the pheromone is studied both locally and globally. The ant individuals are embedded into membrane computing, where intra-membrane and inter-membrane operations improve the local and global convergence of the algorithm. Finally, a matching-table concept is introduced into cloud resource allocation to match tasks with resources. The fused algorithm improves overall performance: simulation experiments show clear reductions in network consumption, cost, and energy consumption, and improved resource allocation efficiency.

11.
Better resource scheduling has always been a focus of cloud computing research. This paper introduces the cuckoo search algorithm into cloud resource scheduling. To counter its tendency toward premature convergence and local oscillation, a Gaussian mutation operator is first applied when selecting the best nest position at each stage, and an adaptive dynamic factor then adjusts the nest-position selection across stages, improving the convergence accuracy of the modified algorithm. Through the balancing of the fitness function and the three operators of the genetic algorithm, the proposed algorithm effectively improves resource allocation efficiency in the cloud environment and reduces network consumption. In simulation experiments on the CloudSim platform, comparisons in three respects show that the algorithm achieves large improvements in performance, resource scheduling efficiency, and task scheduling, effectively enhancing the resource scheduling capability of cloud computing systems.
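The two modifications above, Gaussian mutation around the best nest and an adaptive factor that shrinks the step over stages, can be sketched on a toy cost function. The sphere function stands in for scheduling cost, and the mutation scale and abandon fraction are illustrative assumptions, not the paper's settings.

```python
import random

def cost(x):
    # Sphere function as a stand-in for a cloud scheduling cost.
    return sum(v * v for v in x)

def cuckoo_search(dim=5, n_nests=15, n_iters=200, pa=0.25, seed=0):
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=cost)
    for t in range(n_iters):
        sigma = 1.0 / (1.0 + t * 0.05)   # adaptive factor shrinks the step
        for i in range(n_nests):
            # Gaussian mutation around the current best nest position.
            cand = [b + rng.gauss(0.0, sigma) for b in best]
            if cost(cand) < cost(nests[i]):
                nests[i] = cand
        # Abandon a fraction pa of the worst nests and rebuild them randomly,
        # which keeps exploration alive and counters local oscillation.
        nests.sort(key=cost)
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=cost)
    return best

best = cuckoo_search()
print(round(cost(best), 6))
```

The shrinking `sigma` plays the role of the adaptive dynamic factor: wide Gaussian jumps early for exploration, small refinements late for convergence accuracy.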

12.
Research on a Genetic Algorithm-Based Resource Scheduling Strategy for Cloud Computing
徐文忠  彭志平  左敬龙 《应用声学》2015,23(5):1653-1656
The resource scheduling problem in cloud computing environments is studied. Given the low resource utilization and unbalanced node load in current cloud environments, a new genetic-algorithm-based scheduling strategy for virtual machine load balancing is proposed. Using historical data and the current system state, the strategy achieves the best load balance via the genetic algorithm while reducing or avoiding dynamic migration; an average-load metric is also introduced to measure the algorithm's global load-balancing effect. Simulation experiments on the CloudSim platform show that the strategy has very good global convergence and efficiency: once the system's virtual machines have been scheduled, the algorithm largely resolves load imbalance and high migration cost, and greatly improves resource utilization.
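A genetic algorithm for VM load balancing of this kind can be sketched compactly: a chromosome maps each task to a VM, and fitness penalizes deviation from a balanced load. The task sizes, VM capacities, and GA parameters below are illustrative assumptions, not the paper's configuration.

```python
import random
import statistics

def fitness(chrom, tasks, caps):
    # Higher fitness = more balanced: negate the std deviation of VM loads.
    load = [0.0] * len(caps)
    for t, v in enumerate(chrom):
        load[v] += tasks[t] / caps[v]
    return -statistics.pstdev(load)

def ga_balance(tasks, caps, pop=30, gens=100, pm=0.1, seed=0):
    rng = random.Random(seed)
    n, m = len(tasks), len(caps)
    popn = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: fitness(c, tasks, caps), reverse=True)
        nxt = popn[:2]                                 # elitism: keep two best
        while len(nxt) < pop:
            p1, p2 = rng.sample(popn[:pop // 2], 2)    # truncation selection
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]                # one-point crossover
            if rng.random() < pm:                      # mutation: move a task
                child[rng.randrange(n)] = rng.randrange(m)
            nxt.append(child)
        popn = nxt
    return max(popn, key=lambda c: fitness(c, tasks, caps))

tasks = [2.0, 1.0, 3.0, 1.5, 2.5, 0.5, 1.0, 2.0]
caps = [4.0, 2.0, 8.0]
best = ga_balance(tasks, caps)
print(best, round(-fitness(best, tasks, caps), 4))
```

Because the chromosome is a direct task-to-VM map, "migration" in this sketch is just a gene change, which is why a GA can search placements without actually moving workloads until a plan is chosen.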

13.
Load balancing in cloud computing is one of the key problems in guaranteeing SLA agreements. For this problem, an SLA-oriented load-balancing strategy is proposed. The strategy builds a load-balancing model on artificial neural network ideas: a single-layer perceptron algorithm (SLPA) classifies the load states of the virtual machines; a BP neural network algorithm combined with dynamic weighted round robin (BPNNA-DWRRA) then predicts and updates the load weights of the virtual machines in a targeted way; finally, each task is scheduled to the feasible virtual machine with the smallest weight. Simulation with CloudSim confirms the feasibility of the strategy: compared with the weighted least-connection algorithm and particle swarm optimization, the average response time is reduced by 43.6% and 22.5%, and the SLA violation rate by 20.7% and 14.4%, respectively. The proposed strategy thus responds to user tasks with short response times and a low SLA violation rate, guaranteeing the SLA.
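The SLPA classification step above amounts to a standard perceptron over per-VM load features. The sketch below classifies VM load states as overloaded (1) or not (0) from CPU and memory utilization; the training samples and the implied threshold are illustrative assumptions, not the paper's dataset.

```python
def train_perceptron(samples, labels, lr=0.1, epochs=50):
    # Classic perceptron rule: adjust weights by (target - prediction).
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# (cpu_util, mem_util) pairs; label 1 = "overloaded" (roughly cpu+mem > 1.2).
samples = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.3), (0.4, 0.1),
           (0.7, 0.7), (0.3, 0.2), (0.95, 0.5), (0.1, 0.6)]
labels = [1, 1, 0, 0, 1, 0, 1, 0]
w, b = train_perceptron(samples, labels)
print([classify(w, b, x) for x in samples])  # → [1, 1, 0, 0, 1, 0, 1, 0]
```

Because this toy data is linearly separable, the perceptron converges quickly; in the paper's pipeline this classification only gates which VMs are feasible, with the BP network handling the finer weight prediction.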

14.
Nowadays, more and more multimedia services are supported by Mobile Edge Computing (MEC). However, the instability of the wireless environment brings considerable uncertainty to computational offloading, and the intelligent reflecting surface (IRS) is considered a potential technology for enhancing Quality of Service (QoS). Therefore, in this paper we establish a framework for IRS-assisted MEC computational offloading to address this problem, taking fairness optimization over both communication and computing resources as the key point. User consumption is minimized by optimizing the bandwidth allocation, task offloading ratio, edge computing resources, transmission power, and IRS phase shifts. First, we decompose the problem into three sub-problems: bandwidth allocation; computing resource allocation; and transmission power together with IRS phase shifts. An alternating optimization algorithm is then proposed to find the optimal solution, and its convergence is proved. Second, since the optimization problem over transmission power and IRS phase shifts is non-convex, we propose a Riemannian gradient descent (R-SGD) algorithm to solve it. Finally, numerical results show that the proposed algorithm outperforms other algorithms, demonstrating the superiority of the framework.

15.
Mobile edge computing (MEC) moves computing resources close to the user's device and provides high-performance, low-delay services for mobile devices; it is an effective approach for computationally intensive and delay-sensitive tasks. Given the large number of underutilized computing resources on mobile devices in urban areas, leveraging these resources offers tremendous opportunities and value. Considering the spatiotemporal dynamics of user devices and the uncertainty of both the available computing resources and the network channel state in an MEC system, the way computing resources are allocated among mobile devices with idle resources affects task response time. To address these problems, this paper considers the case in which a mobile device can learn from a neighboring IoT device when offloading a computing request. On this basis, a novel self-adaptive learning task offloading algorithm (SAda) is designed to minimize the average offloading delay in the MEC system. SAda works in a distributed mode and uses a perception function to adapt to dynamic real-world environments; it does not require frequent access to equipment information. Extensive simulations demonstrate that SAda achieves preferable latency performance and low learning error compared to the existing upper-bound algorithms.

16.
Critical healthcare application tasks require a real-time response because they affect patients' lives. Fog computing is the best solution for obtaining fast responses and low energy consumption in healthcare. However, current solutions have difficulty scheduling tasks to the right computing devices, based on task priority and device capacity, so as to meet task deadlines and resource limitations with minimal latency. Load balancing and prioritization become even more challenging when computing resources and telecommunication networks are inadequate while the best scheduling of emergency healthcare tasks must still be obtained. In this study, a fog computing resource management (FRM) model is proposed with three main components. First, resource availability is calculated from the average execution time of each task. Second, load balancing is enhanced by a hybrid approach that combines the multi-agent load balancing algorithm with the throttled load balancing algorithm. Third, tasks are scheduled based on priority, resource availability, and load balancing. The results were obtained with the iFogSim toolkit using two datasets: a blood pressure dataset acquired from the UTeM clinic and an ECG dataset acquired from the University of California at Irvine, integrated to enlarge the attribute set and obtain accurate results. The results demonstrate the model's effectiveness in managing resources and optimizing task scheduling and balancing in a fog computing environment. Compared with other research studies, the FRM model reduces delay by 55%, response time by 72%, cost by 72%, and energy consumption by 70%.

17.
In this paper, we consider the latency minimization problem in intelligent reflecting surface (IRS)-assisted mobile edge computing (MEC) networks. When local users cannot complete all computing tasks independently, a common solution is to transfer tasks to cloud servers. We consider an MEC system containing multiple independent users, each of which sends task data to the base station in a partial offloading manner. Our goal is to minimize the maximum latency over all users. The original problem is strongly non-convex and therefore difficult to solve. We first introduce a new variable to transform the max-min problem into an equivalent minimization problem, and then optimize each variable separately with the block coordinate descent method. Finally, our simulation experiments demonstrate that the proposed scheme obtains better performance than other existing schemes.

18.
朱亚东 《应用声学》2017,25(1):167-169, 172
Cloud computing networks now provide a wide range of applications and services for production and daily life, but identifying the boundary nodes of such networks has remained difficult. In traditional networks the boundary node types are complex, boundary deployment costs are high, and many sensing models and static scenarios are hard to realize. This paper therefore proposes an improved method for identifying boundary nodes in cloud computing networks: boundary deployment rules determine the number of boundary nodes to deploy and their requirements; the sensing blind spots of the boundary nodes are patched to guarantee full-coverage identification within the network region; and finally a cloud network identification model is designed, achieving correct identification of the boundary nodes. Simulation experiments show that the proposed method outperforms traditional methods in stability, identification rate, and number of identified nodes, demonstrating its practical value.

19.
With the rapid development of cloud computing, data center applications based on large-scale storage and computing have become one of the most important service types. High-performance computing facilities and large-capacity storage devices are currently distributed across many locations, so making full use of data centers depends mainly on the effective joint scheduling of application-layer and network-layer resources. To meet the rigid requirements of data center applications, a novel convergence control architecture, the Service-Oriented Group Engine (SOGE) framework, is proposed for multi-domain optical networks based on the DREAM architecture, together with a corresponding resource demand model (RDM). A resource joint scheduling algorithm (RJSA) for application-layer and network-layer resources is proposed and implemented on the SOGE framework. The SOGE framework and the joint scheduling algorithm are validated and demonstrated on a test-bed based on the DREAM architecture.

20.
With rapid advances in technology, the number of devices and their varied service needs have increased enormously. Fifth-generation (5G) cellular networks (5G-CNs) with network slicing (NS) have emerged as a necessity for future mobile communication: the available network is logically partitioned into multiple virtual networks to provide an enormous range of user-specific services. Efficient resource allocation methods are critical for delivering to customers their required Quality of Service (QoS) priorities. In this work, we investigate a QoS-based resource allocation (RA) scheme for two types of 5G slices with different service requirements: (1) an enhanced Mobile Broadband (eMBB) slice, which requires a very high data rate, and (2) a massive Machine Type Communication (mMTC) slice, which requires extremely low latency. We study a device-to-device (D2D)-enabled 5G-CN model with NS that assigns resources to users based on their QoS needs while accounting for the data rate requirements of both cellular and D2D users. We propose a Distributed Algorithm (DA) with edge computation to solve the optimization problem; its novelty is that the edge routers solve the problem locally using the augmented Lagrangian method and then send this information to the central server, which finds the globally optimal solution with a consensus algorithm. Simulation analysis shows that the scheme efficiently assigns resources according to QoS requirements while greatly reducing the central load and computation time.
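The local-solve/central-coordinate pattern described above can be sketched with a simple dual decomposition: each edge side solves a local utility problem given a shared "price," and a central step nudges the price toward the capacity constraint. This mirrors the paper's scheme only in spirit (it uses plain dual decomposition rather than the full augmented Lagrangian with consensus); the weights, capacity, and step size are illustrative assumptions.

```python
def distributed_allocate(weights, capacity, iters=500, step=0.01):
    lam = 1.0  # dual price of the shared capacity
    for _ in range(iters):
        # Local step at each edge: maximize w_i*log(x_i) - lam*x_i,
        # whose closed-form solution is x_i = w_i / lam.
        x = [w / lam for w in weights]
        # Central step: subgradient update pushes total usage to capacity.
        lam = max(1e-6, lam + step * (sum(x) - capacity))
    return x

weights = [3.0, 1.0, 2.0]   # e.g., eMBB users weighted above mMTC users
capacity = 12.0
x = distributed_allocate(weights, capacity)
print([round(v, 2) for v in x])  # → approximately [6.0, 2.0, 4.0]
```

Each "edge" needs only its own weight and the broadcast price, never the other users' data, which is the property that keeps the central load and computation time low in such distributed schemes.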
