20 similar documents were found (search time: 78 ms).
1.
Starting from the current state of high-performance computing, a technical route combining national supercomputing centers with privacy-preserving computation is proposed. Through an analysis of privacy-preserving computation, the advantages and shortcomings of combining it with supercomputing are discussed. On this basis, application scenarios for the combined technologies are surveyed.
2.
The rise of new digital technologies and services, together with the explosive growth of information data, places enormous demands on multi-level cloud-edge-device computing resources, and ubiquitous computing-power infrastructure has become a major development trend. The computing-power network coordinates computing and other resources with the network, provides optimal resource-allocation strategies according to user demand, and improves the efficiency of cooperation among multi-level computing resources; it has become a new direction in network technology. This paper analyzes current technical routes for computing-power networks and proposes an implementation scheme based on a domain-name resolution mechanism. The scheme introduces domain-name resolution, uses URLs to uniformly identify heterogeneous computing resources, and lets a centralized resource-management platform allocate and schedule them. On receiving the identifier of an allocated resource, the user resolves the corresponding network location through the domain-name system and establishes a network connection to the resource pool through a computing-power gateway. The scheme supports flexible extension of resource identifiers and offers good practicality and generality.
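The allocate-then-resolve flow described above can be sketched as follows. This is a minimal illustration only: the resource identifiers, the registry API, and the capacity-based selection rule are assumptions for the sketch, not the paper's actual design.

```python
class ComputeRegistry:
    """Stand-in for the centralized management platform: it assigns
    URL-style identifiers to compute resources and records the
    network location behind each identifier."""

    def __init__(self):
        self._resources = {}  # identifier -> (host, port, free_cores)

    def register(self, identifier, host, port, free_cores):
        self._resources[identifier] = (host, port, free_cores)

    def allocate(self, cores_needed):
        """Pick a resource that can satisfy the request (here: the one
        with the most free capacity) and return its identifier."""
        candidates = [(ident, r) for ident, r in self._resources.items()
                      if r[2] >= cores_needed]
        if not candidates:
            return None
        ident, _ = max(candidates, key=lambda kv: kv[1][2])
        return ident

    def resolve(self, identifier):
        """Stand-in for the domain-name resolution step: map an
        identifier to the network location the gateway connects to."""
        host, port, _ = self._resources[identifier]
        return host, port

registry = ComputeRegistry()
registry.register("compute://edge/gpu-01", "10.0.0.5", 8443, 4)
registry.register("compute://cloud/cpu-02", "10.1.0.9", 8443, 64)

ident = registry.allocate(cores_needed=8)   # only the cloud pool fits
host, port = registry.resolve(ident)        # location for the gateway
```

In the paper's scheme the resolution step is performed by the real DNS rather than an in-process table, which is what makes the identifier space flexible and extensible.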
4.
Green computing, which answers the demands of the low-carbon economy, is receiving increasing attention from academia and is one of the future trends in IT. Through an analysis of the background and current state of green computing, this paper presents the significance of pursuing green computing and effective measures for doing so, laying a foundation for further research.
5.
With the deep integration of digital technology and new-type urbanization, the computing-network city will become an important component of future urban infrastructure. It takes computing-power infrastructure as its foundation and the unified allocation, operation, and management of computing resources as its lever; by building a single city-wide computing network, it empowers urban digital transformation and development and plays an important role in achieving the goals of Digital China. This paper reviews the background of computing-network city construction, analyzes its conceptual architecture, clarifies its significance, and proposes a construction path.
9.
This paper introduces the composition, operating principle, positioning model, solution method, solution procedure, and simulated solution of a dual-satellite electronic-reconnaissance positioning system; simulates the effect of measurement error on positioning error and gives a set of values showing how positioning error varies with measurement error; and analyzes the system's scope of application, its prospects, and its relationship to the three-satellite electronic-reconnaissance positioning system.
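The kind of error study described above can be illustrated with a much-simplified stand-in: a planar two-station bearing-only fix, with Monte Carlo propagation of measurement noise into position error. The geometry, noise model, and station positions are illustrative assumptions; the actual dual-satellite system solves a different (time/frequency-difference based) model on the Earth's surface.

```python
import math
import random

def fix_from_bearings(p1, b1, p2, b2):
    """Intersect two bearing lines (angles in radians from the x-axis)
    emanating from stations p1 and p2."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 by Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    t1 = ((p2[0] - p1[0]) * (-d2[1]) + d2[0] * (p2[1] - p1[1])) / det
    return p1[0] + t1 * d1[0], p1[1] + t1 * d1[1]

def mean_position_error(sigma, trials=2000, seed=3):
    """Average position error when each bearing measurement carries
    zero-mean Gaussian noise of standard deviation sigma (radians)."""
    rng = random.Random(seed)
    p1, p2, target = (0.0, 0.0), (100.0, 0.0), (50.0, 80.0)
    true_b1 = math.atan2(target[1] - p1[1], target[0] - p1[0])
    true_b2 = math.atan2(target[1] - p2[1], target[0] - p2[0])
    err = 0.0
    for _ in range(trials):
        x, y = fix_from_bearings(p1, true_b1 + rng.gauss(0, sigma),
                                 p2, true_b2 + rng.gauss(0, sigma))
        err += math.hypot(x - target[0], y - target[1])
    return err / trials

avg_err = mean_position_error(sigma=0.01)
```

Sweeping `sigma` reproduces the qualitative result of the paper's table: positioning error grows with measurement error.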
11.
The basis of the concept of reliability is that a given component has a certain stress-resisting capacity; if the stress induced by the operating conditions exceeds this capacity, failure results. Most published results in this area are based on analytical modelling of stress and strength with various probability distributions, followed by an attempt to find an exact expression for system reliability, which can sometimes be very difficult to obtain. The approach used in this paper is very simple: it uses simulation techniques to repeatedly generate the stress and strength of a system on a computer, using a random-number generator and methods such as the inverse-transformation technique. The advantage of this approach is that it can be used with any stress-strength distribution functions. Finally, numerical results obtained with this approach are compared with results from the analytical methods for various stress-strength distribution functions, such as the exponential, normal, lognormal, gamma, and Weibull. The results show the viability of the simulation approach.
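The simulation approach described above can be sketched in a few lines. The exponential samplers and rate parameters below are illustrative choices (chosen because an exact answer exists for comparison), not the paper's specific cases.

```python
import math
import random

def simulate_reliability(draw_stress, draw_strength, n=200_000, seed=1):
    """Monte Carlo stress-strength reliability: P(strength > stress).
    Any pair of samplers can be plugged in, which is the stated
    advantage of the simulation approach."""
    rng = random.Random(seed)
    survive = sum(draw_strength(rng) > draw_stress(rng) for _ in range(n))
    return survive / n

# Inverse-transformation technique for the exponential distribution:
# if U ~ Uniform(0, 1), then -ln(U)/lam ~ Exponential(lam).
def exponential_sampler(lam):
    return lambda rng: -math.log(rng.random()) / lam

lam_stress, lam_strength = 2.0, 0.5
r_sim = simulate_reliability(exponential_sampler(lam_stress),
                             exponential_sampler(lam_strength))

# Analytical check: for exponential stress X and strength Y,
# P(Y > X) = lam_stress / (lam_stress + lam_strength).
r_exact = lam_stress / (lam_stress + lam_strength)
```

Replacing the samplers with normal, lognormal, gamma, or Weibull draws requires no change to `simulate_reliability`, which is exactly the flexibility the abstract claims.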
14.
An efficient simulation algorithm is presented for computing the failure probability of a consecutive-k-out-of-r-from-n:F system (linear or circular) with arbitrary component reliabilities. The algorithm estimates both the failure probability of the system and the associated uncertainty (error). A complete interpretation of the algorithm's results is given through a detailed error analysis.
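A basic version of such a simulation, for the linear case with independent identical components, can be sketched as follows; the paper's algorithm additionally handles the circular case and arbitrary per-component reliabilities, and is more efficient than this direct sampler.

```python
import random

def sim_consecutive_k_r_n(n, k, r, p_fail, trials=100_000, seed=7):
    """Monte Carlo estimate for a linear consecutive-k-out-of-r-from-n:F
    system: the system fails when some window of r consecutive
    components contains at least k failed components.
    Returns (estimate, standard_error)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        state = [rng.random() < p_fail for _ in range(n)]  # True = failed
        window = sum(state[:r])          # failures in the first window
        failed = window >= k
        for i in range(r, n):            # slide the window along
            if failed:
                break
            window += state[i] - state[i - r]
            failed = window >= k
        failures += failed
    p = failures / trials
    se = (p * (1 - p) / trials) ** 0.5   # binomial standard error
    return p, se

p_est, se_est = sim_consecutive_k_r_n(n=20, k=2, r=4, p_fail=0.1)
```

The returned standard error plays the role of the "associated uncertainty" that the paper's error analysis interprets.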
15.
Cedervall M.L., Johannesson R. IEEE Transactions on Information Theory, 1989, 35(6):1146-1159.
A fast algorithm for searching a tree (FAST) is presented for computing the distance spectrum of convolutional codes. The distance profile of a code is used to substantially limit the error patterns that have to be searched. The algorithm can easily be modified to determine the number of nonzero information bits of an incorrect path as well as the length of an error event. For testing systematic codes, a faster version of the algorithm is given. FAST is much faster than the standard bidirectional search. On a MicroVAX, d∞ = 27 was verified for a rate R = 1/2, memory M = 25 code in 37 s of CPU time. Extensive tables of rate R = 1/2 encoders are given. Several of the listed encoders have distance spectra superior to those of any previously known codes of the same rate and memory. A conjecture that an R = 1/2 systematic convolutional code of memory 2M will perform as well as a nonsystematic convolutional code of memory M is given strong support.
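FAST itself prunes a code-tree search using the distance profile; as a simpler stand-in with the same goal, the sketch below computes the free distance of a rate R = 1/2 convolutional code by a shortest-path (Dijkstra) search over the encoder state diagram. This is not the paper's algorithm, only an illustration of the quantity being searched for.

```python
import heapq

def free_distance(g1, g2, memory):
    """Free distance of a rate-1/2 convolutional code. g1, g2 are
    generator polynomials as bit masks over the shift register
    [u_t, u_{t-1}, ..., u_{t-memory}] (MSB = current input)."""

    def step(state, bit):
        reg = (bit << memory) | state
        w = bin(reg & g1).count("1") % 2 + bin(reg & g2).count("1") % 2
        return (reg >> 1), w          # next state drops the oldest bit

    # Diverge from the all-zero state with input 1, then find the
    # minimum-weight path that remerges with state 0 (Dijkstra).
    start, w0 = step(0, 1)
    dist = {start: w0}
    heap = [(w0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if s == 0:
            return d                  # first pop of state 0 is optimal
        if d > dist.get(s, float("inf")):
            continue
        for bit in (0, 1):
            t, w = step(s, bit)
            if d + w < dist.get(t, float("inf")):
                dist[t] = d + w
                heapq.heappush(heap, (d + w, t))
    return None

# The classic (7, 5)-octal code with memory M = 2 has free distance 5.
d = free_distance(0b111, 0b101, memory=2)
```

Computing the full distance spectrum (the number of paths at each weight), as FAST does, requires enumerating paths rather than just the single minimum, which is where the distance-profile pruning pays off.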
16.
Hero A., Fessler J.A. IEEE Transactions on Information Theory, 1994, 40(4):1205-1210.
We give a recursive algorithm to calculate submatrices of the Cramér-Rao (CR) matrix bound on the covariance of any unbiased estimator of a vector parameter θ. Our algorithm computes a sequence of lower bounds that converges monotonically to the CR bound with exponential speed of convergence. The recursive algorithm uses an invertible "splitting matrix" to successively approximate the inverse Fisher information matrix. We present a statistical approach to selecting the splitting matrix based on a "complete-data-incomplete-data" formulation similar to that of the well-known EM parameter-estimation algorithm. As a concrete illustration, we consider image reconstruction from projections for emission computed tomography.
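The core splitting idea can be illustrated with a generic matrix-splitting iteration that converges to the inverse Fisher information matrix. The toy matrix and the diagonal choice of splitting matrix below are illustrative assumptions; the paper's statistically motivated (complete-data) choice of splitting matrix and its monotone submatrix bounds are not reproduced here.

```python
import numpy as np

def splitting_inverse(F, M, iters=200):
    """Approximate F^{-1} via the splitting F = M - K and the
    iteration X_{k+1} = M^{-1} (K X_k + I), whose fixed point
    satisfies F X = I. Converges when the spectral radius of
    M^{-1} K is below 1 (e.g., M = diag(F) for a diagonally
    dominant F)."""
    K = M - F
    Minv = np.linalg.inv(M)
    X = np.zeros_like(F, dtype=float)
    I = np.eye(F.shape[0])
    for _ in range(iters):
        X = Minv @ (K @ X + I)
    return X

# Toy stand-in for a Fisher information matrix
# (symmetric, diagonally dominant).
F = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
approx = splitting_inverse(F, M=np.diag(np.diag(F)))
```

Each intermediate `X` plays the role of one member of the sequence of approximations to the CR bound; the exponential convergence rate is governed by the spectral radius of `M^{-1} K`.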
17.
An algorithm for obtaining nonnegative joint time-frequency distributions Q(t, f) satisfying the univariate marginals |s(t)|² and |S(f)|² is presented and applied. The advantage of the algorithm is that large time-series records can be processed without large random-access memory (RAM) and central processing unit (CPU) time requirements. The algorithm is based on the Loughlin et al. (1992) method for synthesizing positive distributions using the principle of minimum cross-entropy. The nonnegative distributions with the correct marginals obtained with this approach are density functions of the kind proposed by Cohen and Zaparovanny (1980) and Cohen and Posch (1985). Three examples are presented: the first is a nonlinear frequency-modulation (FM) sweep signal (simulated data); the second and third come from physical systems (real data). The second example is the acoustic scattering response of an elastic cylindrical shell structure; the third is an acoustic transient signal from an underwater vehicle. Example one contains 7500 data points, example two 256 data points, and example three in excess of 30000 data points. The RAM requirement of the original Loughlin et al. algorithm is 240 megabytes for the 7500-point signal and 3.5 gigabytes for the 30000-point signal; the new algorithm reduces these to 1 megabyte and 4 megabytes, respectively. Furthermore, the fast algorithm runs 240 times faster on the 7500-point signal and 3000 times faster on the 30000-point signal than the original Loughlin et al. algorithm.
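Imposing two marginals on a nonnegative prior with minimum cross-entropy can be done by iterative proportional fitting (alternating row/column scaling), which stands in here for the Loughlin et al. synthesis procedure; the paper's exact iteration and memory-efficient layout are not reproduced, and the random marginals below are placeholders for |s(t)|² and |S(f)|².

```python
import numpy as np

def fit_marginals(prior, p_t, p_f, iters=100):
    """Scale a strictly positive prior Q(t, f) so that its row sums
    match the time marginal p_t and its column sums match the
    frequency marginal p_f (both assumed normalized)."""
    Q = prior.astype(float).copy()
    for _ in range(iters):
        Q *= (p_t / Q.sum(axis=1))[:, None]   # enforce time marginal
        Q *= (p_f / Q.sum(axis=0))[None, :]   # enforce frequency marginal
    return Q

rng = np.random.default_rng(0)
p_t = rng.random(16); p_t /= p_t.sum()        # stand-in for |s(t)|^2
p_f = rng.random(16); p_f /= p_f.sum()        # stand-in for |S(f)|^2
Q = fit_marginals(rng.random((16, 16)), p_t, p_f)
```

Because every step multiplies by nonnegative factors, the result stays nonnegative, which is precisely the property that distinguishes this family from the Wigner-type distributions.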
18.
A control-theoretic approach towards joint admission control and resource allocation of cloud computing services
Dimitrios Dechouniotis, Nikolaos Leontiou, Nikolaos Athanasopoulos, Athanasios Christakidis, Spyros Denazis. International Journal of Network Management, 2015, 25(3):159-180.
Meeting the performance specifications of consolidated web services in a data center is challenging, as control of the underlying cloud computing infrastructure must meet the service-level-agreement requirements and satisfy the system's constraints. In this article, we address the admission control and resource allocation problems jointly by establishing a unified modeling and control framework. Convergence to a desired reference point, as well as stability and feasibility of the control strategy, is guaranteed while achieving high performance of the co-hosted web applications. The efficacy of the proposed approach is illustrated on a real test bed.
19.
Sundaram R.S., Prabhu K.M.M. IEE Proceedings - Vision, Image and Signal Processing, 1997, 144(1):46-48.
The Wigner-Ville distribution (WVD) is a particularly useful technique for analysing nonstationary signals and has been studied extensively. An algorithm has previously been proposed for computing the WVD using only real operations, but it involves division by sine and cosine factors, which causes numerical instabilities due to roundoff errors in finite-length registers. The authors present a fast and numerically stable algorithm for computing the WVD. The computational complexity of the proposed algorithm is also derived and compared with that of existing algorithms.
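For reference, the quantity being computed can be sketched directly: a discrete WVD obtained by FFT of the instantaneous autocorrelation. This plain complex-FFT version is a stand-in for illustration; the paper's real-operations algorithm and its stability fix are not reproduced.

```python
import numpy as np

def wvd(s):
    """Discrete Wigner-Ville distribution:
    W[n, k] = FFT over lag m of s[n+m] * conj(s[n-m]),
    with indices taken modulo the signal length (circular form)."""
    N = len(s)
    W = np.empty((N, N), dtype=complex)
    idx = np.arange(N)
    for n in range(N):
        kernel = s[(n + idx) % N] * np.conj(s[(n - idx) % N])
        W[n] = np.fft.fft(kernel)
    return W.real   # the WVD of any signal is real-valued

N, f0 = 64, 10
t = np.arange(N)
s = np.exp(2j * np.pi * f0 * t / N)   # analytic complex exponential
W = wvd(s)
# For this tone the kernel is exp(j*4*pi*f0*m/N), so the distribution
# concentrates at frequency bin 2*f0 at every time instant.
```

The doubled frequency index is the usual discrete-WVD bookkeeping (the lag enters as 2m), which implementations typically undo by oversampling or interleaving.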
20.
Min-Sheng Lin. IEEE Transactions on Reliability, 2002, 51(1):58-62.
Consider a probabilistic graph in which the edges are perfectly reliable but vertices can fail with known probabilities. The K-terminal reliability of such a graph is the probability that a given set of vertices K is connected. This reliability problem is #P-complete for general graphs, and remains #P-complete for chordal graphs and comparability graphs. This paper presents a linear-time algorithm for computing K-terminal reliability on proper interval graphs. A graph G = (V, E) is a proper interval graph if there exists a mapping from V to a class of intervals I of the real line such that two vertices in G are adjacent if and only if their corresponding intervals overlap, and no interval in I properly contains another. The algorithm can be implemented in O(|V| + |E|) time.
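The quantity defined above can be checked on small graphs by brute force: enumerate every vertex up/down state, weight it by its probability, and test whether all terminals lie in one surviving component. This exponential-time sketch is a correctness reference only, not the paper's linear-time proper-interval algorithm.

```python
from itertools import product

def k_terminal_reliability(vertices, edges, K, p_up):
    """P(all vertices in K lie in one connected component of the
    subgraph induced by the surviving vertices). Edges are perfectly
    reliable; p_up maps each vertex to its survival probability."""
    total = 0.0
    for states in product([False, True], repeat=len(vertices)):
        up = {v for v, alive in zip(vertices, states) if alive}
        if not set(K) <= up:          # a failed terminal disconnects K
            continue
        prob = 1.0
        for v, alive in zip(vertices, states):
            prob *= p_up[v] if alive else 1 - p_up[v]
        # Connectivity check on the induced subgraph, starting at K[0].
        frontier, seen = [K[0]], {K[0]}
        while frontier:
            v = frontier.pop()
            for a, b in edges:
                for u, w in ((a, b), (b, a)):
                    if u == v and w in up and w not in seen:
                        seen.add(w)
                        frontier.append(w)
        if set(K) <= seen:
            total += prob
    return total

# Path a - b - c (a proper interval graph), terminals {a, c}:
# they communicate only if all three vertices survive.
rel = k_terminal_reliability(
    ["a", "b", "c"], [("a", "b"), ("b", "c")], ["a", "c"],
    {"a": 0.9, "b": 0.8, "c": 0.9})
```

For the path graph the answer factorizes as 0.9 × 0.8 × 0.9, which is the kind of structural simplification the interval ordering lets the linear-time algorithm exploit systematically.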