Similar Articles
20 similar articles found (search time: 78 ms)
1.
Starting from the current state of high-performance computing (HPC), a technical route combining national supercomputing centers with privacy-preserving computation is proposed. Privacy computing is analyzed, and the advantages and shortcomings of combining it with supercomputing are discussed. On this basis, application scenarios arising from the combination of HPC and privacy-computing technologies are surveyed.

2.
The rise of new digital technologies and services, together with the explosive growth of information and data, places enormous demands on cloud-edge-device multi-level computing resources, and ubiquitous computing-power infrastructure has become a major trend. The computing power network (CPN) unifies computing resources with the network, provides optimal resource-allocation strategies based on user demand, and improves the efficiency with which multi-level computing resources cooperate, making it a new direction in network technology. This paper analyzes current CPN technical routes and proposes a CPN implementation scheme based on a domain-name resolution mechanism. The scheme introduces domain-name resolution, uses URL syntax to uniformly identify diverse computing resources, and lets a centralized resource-management platform allocate and schedule them. On receiving an allocated resource identifier, the user resolves the corresponding resource's network location through the domain-name resolution system and establishes a network connection to the resource pool through a computing-power gateway. The scheme supports flexible extension of resource identifiers and offers good practicality and generality.
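The resolution flow described above can be sketched as follows. This is a minimal, in-process illustration; the identifier scheme, registry API, and endpoint values are hypothetical, not from the paper.

```python
# Sketch of the paper's resolution flow: a centralized platform registers a
# URL-style resource identifier, and a DNS-like lookup maps that identifier
# to the network location of the allocated resource pool. All names and
# endpoints here are illustrative assumptions.

class ComputeResourceResolver:
    def __init__(self):
        self._records = {}  # identifier -> (host, port) of the resource pool

    def register(self, identifier, host, port):
        """Called by the centralized resource-management platform."""
        self._records[identifier] = (host, port)

    def resolve(self, identifier):
        """Called on the user side to locate the allocated resource pool."""
        if identifier not in self._records:
            raise KeyError(f"unknown compute resource: {identifier}")
        return self._records[identifier]

resolver = ComputeResourceResolver()
# The management platform allocates a pool and publishes its location.
resolver.register("compute://gpu-pool-1.example/v100", "10.0.3.7", 9000)
# The user resolves the identifier it was handed, then connects via the
# computing-power gateway (connection setup omitted).
host, port = resolver.resolve("compute://gpu-pool-1.example/v100")
```

A real deployment would back the registry with the DNS itself rather than an in-memory table, which is the substitution this sketch makes for brevity.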

3.
4.
Green computing, which responds to the demands of the low-carbon economy, is attracting growing academic attention and is one of the future trends in IT. Through an analysis of the background and current state of green computing, this paper presents the significance of pursuing green computing and effective measures for doing so, laying a foundation for further research.

5.
With the deep integration of digital technology and new urbanization, the computing-network city will become an important component of future urban infrastructure. It takes computing-power infrastructure as its foundation and the coordinated allocation, operation, and management of computing resources as its levers; by building a unified city-wide computing network, it empowers urban digital transformation and plays an important role in achieving China's Digital China goals. This paper reviews the background of computing-network city construction, analyzes its conceptual architecture, clarifies its significance, and proposes a construction path.

6.
Having the network perceive computing power and perform converged compute-network routing accordingly is an important technical approach to compute-network convergence on the network-infrastructure side. Compared with traditional host-address-based routing, the main increment of compute-aware routing is the network-side selection among distributed compute instances; hence location- and ownership-independent service identifiers become the new addressing and routing targets. This paper describes an end-to-end compute-aware routing solution and its impact on routing protocols, and, based on typical scenarios and test cases, analyzes the functional and performance benefits of a service-identifier-oriented compute-aware routing architecture.
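The core increment described above, selecting one of several compute instances behind a single service identifier, can be sketched as a cost-based choice. The instance table, metrics, and weighting are illustrative assumptions, not the paper's actual routing scheme.

```python
# Toy sketch of service-identifier routing: one service ID maps to several
# instances, and the network-side selector picks the instance minimizing a
# combined network + compute cost. The metrics and the weight alpha are
# hypothetical; a real scheme would learn or measure these.

def select_instance(instances, alpha=0.5):
    """instances: list of dicts with 'addr', 'rtt_ms', 'load' in [0, 1]."""
    def cost(inst):
        # Blend network latency with compute load (scaled to comparable units).
        return alpha * inst["rtt_ms"] + (1 - alpha) * 100 * inst["load"]
    return min(instances, key=cost)["addr"]

service_table = {
    "svc://render": [
        {"addr": "10.1.0.2", "rtt_ms": 5.0, "load": 0.9},   # near but busy
        {"addr": "10.2.0.8", "rtt_ms": 12.0, "load": 0.1},  # farther but idle
    ]
}
best = select_instance(service_table["svc://render"])
```

Here the farther-but-idle instance wins, which is exactly the behavior host-address routing cannot express: the service identifier stays fixed while the chosen location varies.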

7.
To meet the challenge of explosive growth in future computing demand, computation reuse is introduced into the computing power network: by reusing the results of computing tasks, service latency is shortened and compute-resource consumption is reduced. On this basis, a context-aware online learning algorithm based on service coalitions is proposed. First, a reuse index is designed to reduce the extra lookup latency; then online learning is performed under the service-coalition mechanism, and task-scheduling decisions are made from context information and historical experience. Simulation results show that the proposed algorithm outperforms the baseline algorithms in service latency and compute-resource consumption.
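The reuse idea itself can be sketched as a result cache keyed by the task's inputs. This shows only the caching half; the paper's reuse index and coalition-based online scheduler are more elaborate, and the class and task names below are hypothetical.

```python
# Minimal sketch of computation reuse: task results are cached under a key
# derived from the task's name and inputs, so an identical task skips
# recomputation entirely. The hash-keyed dict stands in for the paper's
# reuse index.

import hashlib

class ReuseIndex:
    def __init__(self):
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def _key(self, task_name, payload):
        return hashlib.sha256(f"{task_name}:{payload}".encode()).hexdigest()

    def run(self, task_name, payload, compute):
        k = self._key(task_name, payload)
        if k in self._cache:
            self.hits += 1           # reuse: no compute cost incurred
            return self._cache[k]
        self.misses += 1
        result = compute(payload)    # fall back to actual computation
        self._cache[k] = result
        return result

idx = ReuseIndex()
r1 = idx.run("square", 12, lambda x: x * x)   # computed
r2 = idx.run("square", 12, lambda x: x * x)   # reused from the index
```

The design tension the paper addresses is visible even here: the lookup itself adds latency, which is why a well-designed reuse index matters.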

8.
The computing power network is an evolution of cloud-network convergence that offers users integrated compute-network services. Building on the lightweight serverless computing model of cloud-native systems and using functions as the unit of edge compute capability, this paper proposes a serverless-oriented edge computing-power network service architecture; describes its functional requirements, architecture, and deployment scheme; and completes a prototype deployment and service verification of the platform, enabling upper-layer applications to invoke serverless edge compute capabilities and validating both lightweight compute usage and the exposure of edge compute-network capabilities.

9.
王存良 《电光系统》2005,(1):4-7,10
This paper introduces the composition, operating principle, positioning model, solution method, solution procedure, and simulated solution of a dual-satellite electronic-reconnaissance geolocation system; simulates the effect of measurement error on positioning error, giving a set of values for how positioning error varies with measurement error; and analyzes the system's range of applications, its prospects, and its relationship to the three-satellite electronic-reconnaissance geolocation system.

10.
11.
The basis of the concept of reliability is that a given component has a certain stress-resisting capacity; if the stress induced by the operating conditions exceeds this capacity, failure results. Most published results in this area are based on analytical modelling of stress and strength using various probability distributions, followed by an attempt to find an exact expression for system reliability, which can sometimes be very difficult to obtain. The approach used in this paper is very simple: it uses simulation to repeatedly generate the stress and strength of a system on a computer, using a random number generator and methods such as the inverse-transformation technique. The advantage of this approach is that it can be used for any stress-strength distribution functions. Finally, numerical results obtained with this approach are compared with results from the analytical methods for various strength-stress distributions, such as the exponential, normal, lognormal, gamma, and Weibull. The results show the viability of the simulation approach.
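The simulation loop described above can be sketched directly. This example uses exponential stress and strength (one of the distributions the abstract lists), for which the exact reliability P(strength > stress) is a/(a+b) when stress has rate a and strength rate b, giving an analytical check; the specific rates are illustrative.

```python
# Monte Carlo stress-strength reliability: draw stress X and strength Y via
# inverse-transform sampling and count how often strength exceeds stress.
# For X ~ Exp(a) and Y ~ Exp(b), the exact value is a / (a + b).

import math
import random

def reliability_mc(a, b, n=200_000, seed=1):
    rng = random.Random(seed)
    survive = 0
    for _ in range(n):
        # Inverse transform: if U ~ Uniform(0,1), then -ln(U)/rate ~ Exp(rate)
        stress = -math.log(rng.random()) / a
        strength = -math.log(rng.random()) / b
        if strength > stress:
            survive += 1
    return survive / n

a, b = 2.0, 1.0
estimate = reliability_mc(a, b)
exact = a / (a + b)   # analytical P(Y > X) for the exponential case
```

Swapping in any other inverse-transformable (or library-sampled) distribution requires changing only the two sampling lines, which is precisely the flexibility the abstract claims for the simulation approach.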

12.
《现代电子技术》2015,(24):47-49
In the combined application of cloud-computing mass-data storage and data-center energy-saving algorithms, increasing emphasis is placed on effectively solving the energy-consumption problem of cloud systems. Data-related energy consumption in cloud systems not only increases carbon dioxide emissions but also brings serious environmental problems. Starting from the definition and characteristics of cloud computing, this paper analyzes the high energy consumption of cloud-system data; classifies data energy-saving algorithms; analyzes the DVFS (dynamic voltage and frequency scaling) and virtualization-based energy-saving algorithms, comparing their strengths and weaknesses against other algorithms; describes application scenarios; and summarizes the energy-consumption management process of cloud systems.
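The DVFS approach mentioned above rests on the standard dynamic-power model P = C·V²·f: lowering voltage and frequency together cuts power roughly cubically, while the energy for a fixed number of cycles (E = P·cycles/f = C·V²·cycles) falls quadratically with voltage. A minimal numerical sketch, with an assumed capacitance and assumed operating points rather than figures from the article:

```python
# Dynamic-power model underlying DVFS. The effective capacitance and the two
# voltage/frequency operating points are illustrative assumptions.

def dynamic_power(c_eff, voltage, freq_hz):
    """P = C * V^2 * f (dynamic switching power only)."""
    return c_eff * voltage ** 2 * freq_hz

def energy_for_cycles(c_eff, voltage, cycles):
    """E = P * cycles / f = C * V^2 * cycles: frequency cancels out."""
    return c_eff * voltage ** 2 * cycles

C = 1e-9                                   # effective capacitance (farads)
p_high = dynamic_power(C, 1.2, 2.0e9)      # full-speed operating point
p_low = dynamic_power(C, 0.9, 1.0e9)       # scaled-down operating point
e_ratio = energy_for_cycles(C, 0.9, 1) / energy_for_cycles(C, 1.2, 1)
```

The frequency cancellation in the energy formula is why DVFS saves energy only when the voltage drops along with the frequency; scaling frequency alone just stretches the same energy over more time.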

13.
《现代电子技术》2018,(5):56-60
To improve the performance of traditional data-clustering algorithms in big-data mining, a parallel hierarchical clustering algorithm is designed and implemented using cloud-computing techniques combined with non-negative matrix factorization (NMF). The algorithm adopts the MapReduce programming model and uses Hadoop's HDFS to store large volumes of telecom-operator data. The working mechanism and flow of MapReduce-parallelized hierarchical clustering are described, and the master-worker Map/Reduce programming pattern conveniently runs the clustering subtasks on a Hadoop PC cluster. Experimental results show that, compared with the traditional NMF method for data clustering, the approach achieves better running time and speedup and can process the telecom operator's big data within an acceptable time.
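The Map/Reduce split that makes such clustering parallelizable can be shown with a toy in-process example. Note the substitutions: this uses a k-means-style assignment step rather than the paper's NMF-based hierarchical method, and plain Python functions rather than Hadoop; the data points are made up.

```python
# Toy Map/Reduce clustering step: map emits (nearest-center, point) pairs,
# the shuffle groups pairs by center, and reduce recomputes each center as
# the mean of its group. On Hadoop, map and reduce would run as distributed
# tasks over HDFS splits; here they run in-process to show the pattern.

from collections import defaultdict

def map_phase(points, centers):
    for p in points:
        nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
        yield nearest, p                      # (key, value) pair

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:                  # group values by key
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {k: sum(v) / len(v) for k, v in groups.items()}

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]       # two obvious 1-D clusters
centers = [0.0, 10.0]
new_centers = reduce_phase(shuffle(map_phase(points, centers)))
```

Because each map call touches one point and each reduce call one group, both phases scale out across cluster nodes with no shared state, which is the property the paper exploits.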

14.
An efficient simulation algorithm for computing the failure probability of a consecutive-k-out-of-r-from-n:F system (linear or circular) with any component reliability is presented. The algorithm estimates both the failure probability of the system and the associated uncertainty (error). A complete interpretation of the algorithm's results is given through a detailed error analysis.
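The system model can be stated compactly in code: a linear consecutive-k-out-of-r-from-n:F system fails if any window of r consecutive components contains at least k failed ones. The sketch below pairs a naive Monte Carlo estimator (the spirit of the abstract's approach, not its specific algorithm) with brute-force enumeration as an exact check; the parameters are arbitrary small examples.

```python
# Failure probability of a linear consecutive-k-out-of-r-from-n:F system,
# estimated by Monte Carlo and checked against exact 2^n enumeration.
# Components fail independently with probability p here for simplicity,
# though the abstract's algorithm allows arbitrary per-component values.

import itertools
import random

def system_fails(state, k, r):
    """state: sequence of 0/1 flags, 1 = failed component."""
    n = len(state)
    return any(sum(state[i:i + r]) >= k for i in range(n - r + 1))

def failure_prob_exact(n, k, r, p):
    total = 0.0
    for state in itertools.product((0, 1), repeat=n):
        if system_fails(state, k, r):
            f = sum(state)
            total += p ** f * (1 - p) ** (n - f)
    return total

def failure_prob_mc(n, k, r, p, trials=100_000, seed=7):
    rng = random.Random(seed)
    fails = sum(
        system_fails([rng.random() < p for _ in range(n)], k, r)
        for _ in range(trials)
    )
    return fails / trials

n, k, r, p = 8, 2, 3, 0.3
exact = failure_prob_exact(n, k, r, p)
estimate = failure_prob_mc(n, k, r, p)
```

The enumeration is only feasible for tiny n, which is why an efficient simulation algorithm with an error estimate, as the abstract describes, is the practical tool.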

15.
A fast algorithm for searching a tree (FAST) is presented for computing the distance spectrum of convolutional codes. The distance profile of a code is used to substantially limit the error patterns that must be searched. The algorithm can easily be modified to determine the number of nonzero information bits of an incorrect path as well as the length of an error event. For testing systematic codes, a faster version of the algorithm is given. FAST is much faster than the standard bidirectional search: on a microVAX, d=27 was verified for a rate R=1/2, memory M=25 code in 37 s of CPU time. Extensive tables of rate R=1/2 encoders are given. Several of the listed encoders have distance spectra superior to those of any previously known codes of the same rate and memory. Strong support is given to the conjecture that an R=1/2 systematic convolutional code of memory 2M performs as well as a nonsystematic convolutional code of memory M.
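To make the quantity concrete: the first term of the distance spectrum is the free distance, the minimum Hamming weight of a path that diverges from and remerges with the all-zero path. The sketch below computes it by Dijkstra over the state trellis for the standard memory-2 (7,5) octal code, whose free distance is the classic 5. This is a simple baseline, not the FAST algorithm or its distance-profile pruning.

```python
# Free distance of a rate-1/2 feedforward convolutional code via Dijkstra
# over the state trellis: leave the all-zero state with input 1, then find
# the minimum-output-weight path back to state 0. Generators are given as
# binary tap masks, newest register bit first; (0b111, 0b101) is (7,5) octal.

import heapq

def free_distance(g=(0b111, 0b101), m=2):
    def parity(x):
        return bin(x).count("1") & 1

    def step(state, u):
        reg = (u << m) | state                 # register = input + state bits
        weight = sum(parity(reg & gi) for gi in g)
        return (u << 1) | (state >> 1), weight # shift register one step

    start, w0 = step(0, 1)                     # diverge with input bit 1
    dist = {start: w0}
    heap = [(w0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if s == 0:
            return d                           # remerged: this is d_free
        if d > dist.get(s, float("inf")):
            continue
        for u in (0, 1):
            ns, w = step(s, u)
            if d + w < dist.get(ns, float("inf")):
                dist[ns] = d + w
                heapq.heappush(heap, (d + w, ns))

dfree = free_distance()
```

FAST goes further than this: it enumerates how many paths exist at d_free, d_free+1, and so on, which is the spectrum needed for tight error-probability estimates.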

16.
We give a recursive algorithm to calculate submatrices of the Cramer-Rao (CR) matrix bound on the covariance of any unbiased estimator of a vector parameter θ. Our algorithm computes a sequence of lower bounds that converges monotonically to the CR bound with exponential speed of convergence. The recursive algorithm uses an invertible "splitting matrix" to successively approximate the inverse Fisher information matrix. We present a statistical approach to selecting the splitting matrix based on a "complete-data/incomplete-data" formulation similar to that of the well-known EM parameter-estimation algorithm. As a concrete illustration, we consider image reconstruction from projections for emission computed tomography.
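The flavor of such a splitting-matrix recursion can be sketched numerically: with a splitting matrix D dominating the Fisher matrix F in the positive-semidefinite order, the iteration X ← X + D⁻¹(I − F·X) converges to F⁻¹, the CR bound. The 2×2 Fisher matrix and the scaled-identity D below are illustrative choices, and the paper's complete-data/incomplete-data construction of D is not reproduced here.

```python
# Toy splitting-matrix iteration converging to the inverse Fisher matrix.
# With D = d * I and d exceeding F's largest eigenvalue, the error contracts
# by the factor (I - F/d) each step, so X -> F^{-1} geometrically.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

F = [[2.0, 0.5], [0.5, 1.0]]        # toy Fisher information matrix
d = 3.0                             # D = d*I dominates F (eigenvalues < 3)
I2 = [[1.0, 0.0], [0.0, 1.0]]

X = [[1.0 / d, 0.0], [0.0, 1.0 / d]]   # X_0 = D^{-1}, a crude lower bound
for _ in range(200):
    FX = mat_mul(F, X)
    R = [[I2[i][j] - FX[i][j] for j in range(2)] for i in range(2)]
    X = [[X[i][j] + R[i][j] / d for j in range(2)] for i in range(2)]

# Exact inverse for comparison: det(F) = 2*1 - 0.5*0.5 = 1.75
Finv = [[1.0 / 1.75, -0.5 / 1.75], [-0.5 / 1.75, 2.0 / 1.75]]
```

The appeal the abstract describes is visible here: every iterate is a usable approximation, and D can be chosen so that applying D⁻¹ is cheap even when inverting F directly is not.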

17.
An algorithm for obtaining nonnegative joint time-frequency distributions Q(t, f) satisfying the univariate marginals |s(t)|^2 and |S(f)|^2 is presented and applied. The advantage of the algorithm is that large time-series records can be processed without the need for large random access memory (RAM) or central processing unit (CPU) time. The algorithm is based on the Loughlin et al. (1992) method for synthesizing positive distributions using the principle of minimum cross-entropy. The nonnegative distributions with correct marginals obtained with this approach are density functions as proposed by Cohen and Zaparovanny (1980) and Cohen and Posch (1985). Three examples are presented: the first is a nonlinear frequency-modulation (FM) sweep signal (simulated data); the second and third are from physical systems (real data): the acoustic scattering response of an elastic cylindrical shell structure, and an acoustic transient signal from an underwater vehicle. Example one contains 7500 data points, example two 256 data points, and example three in excess of 30000 data points. The RAM requirement of the original Loughlin et al. algorithm is 240 megabytes for the 7500-point signal and 3.5 gigabytes for the 30000-point signal; the new algorithm reduces these to 1 megabyte and 4 megabytes, respectively. Furthermore, the fast algorithm runs 240 times faster on the 7500-point signal and 3000 times faster on the 30000-point signal than the original Loughlin et al. algorithm.

18.
Meeting the performance specifications of consolidated web services in a data center is challenging, as the control of the underlying cloud-computing infrastructure must meet the service-level-agreement requirements and satisfy the system's constraints. In this article, we address the admission control and resource allocation problems jointly by establishing a unified modeling and control framework. Convergence to a desired reference point, along with the stability and feasibility of the control strategy, is guaranteed while achieving high performance of the co-hosted web applications. The efficacy of the proposed approach is illustrated in a real test bed. Copyright © 2015 John Wiley & Sons, Ltd.

19.
The Wigner-Ville distribution (WVD) is a particularly useful technique for analysing nonstationary signals and has been studied extensively. An algorithm has been proposed for computing the WVD using only real operations, but it involves division by sine and cosine factors, which causes numerical instabilities because of roundoff errors in finite-length registers. The authors present a fast and numerically stable algorithm for computing the WVD. The computational complexity of the proposed algorithm is also derived and compared with existing algorithms.

20.
Consider a probabilistic graph in which the edges are perfectly reliable, but vertices can fail with some known probabilities. The K-terminal reliability of this graph is the probability that a given set of vertices K is connected. This reliability problem is #P-complete for general graphs, and remains #P-complete for chordal graphs and comparability graphs. This paper presents a linear-time algorithm for computing K-terminal reliability on proper interval graphs. A graph G = (V, E) is a proper interval graph if there exists a mapping from V to a class of intervals I of the real line such that two vertices in G are adjacent if and only if their corresponding intervals overlap, and no interval in I properly contains another. The algorithm can be implemented in O(|V| + |E|) time.
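The quantity being computed can be pinned down with a brute-force baseline: enumerate every up/down assignment of the vertices, keep those where all of K is up and connected through up vertices, and sum their probabilities. This is exponential in |V|, unlike the paper's linear-time proper-interval algorithm, but it is exact on small graphs; the path graph and probabilities below are an illustrative example (a path is itself a proper interval graph).

```python
# Exact K-terminal reliability with failing vertices, by 2^|V| enumeration.
# Edges are perfectly reliable; vertex v is up with probability p_up[v].

import itertools

def k_terminal_reliability(vertices, edges, K, p_up):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0.0
    for states in itertools.product((False, True), repeat=len(vertices)):
        up = {v for v, s in zip(vertices, states) if s}
        if not K <= up:
            continue
        # BFS among up vertices from one terminal of K
        start = next(iter(K))
        seen, frontier = {start}, [start]
        while frontier:
            v = frontier.pop()
            for w in adj[v]:
                if w in up and w not in seen:
                    seen.add(w)
                    frontier.append(w)
        if K <= seen:                      # all terminals reachable
            prob = 1.0
            for v, s in zip(vertices, states):
                prob *= p_up[v] if s else 1.0 - p_up[v]
            total += prob
    return total

# Path 1-2-3 with terminals {1, 3}: K is connected only when all three
# vertices are up, so the reliability is 0.9 ** 3 = 0.729.
rel = k_terminal_reliability([1, 2, 3], [(1, 2), (2, 3)], {1, 3},
                             {1: 0.9, 2: 0.9, 3: 0.9})
```

The #P-completeness cited above says no polynomial algorithm is expected in general; the paper's contribution is that the proper-interval structure collapses this exponential sum to linear time.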


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.  京ICP备09084417号