Similar Documents
20 similar documents found.
1.
The Grid is a promising technology for providing access to distributed high-end computational capabilities. Computational tasks can thus be performed by other resources in the Grid that are not under the user's control. However, one of the key problems in the Grid is deciding which jobs are to be allocated to which resources at what time. In this context, the use of market mechanisms for scheduling and allocating Grid resources is a promising approach to solving these problems. This paper proposes an auction mechanism for allocating and scheduling computer resources, such as processors or storage space, that have multiple quality attributes. The mechanism is evaluated by means of a simulation with respect to its economic and computational performance as well as its practical applicability.

2.
As the computational power of high-performance computing systems continues to increase through huge numbers of cores or specialized processing units, high-performance computing applications are increasingly prone to faults. In this paper, we present a new class of numerical fault-tolerance algorithms to cope with node crashes in parallel distributed environments. This new resilient scheme is designed at the application level and does not require extra resources, that is, computational units or computing time, when no fault occurs. In the framework of iterative methods for the solution of sparse linear systems, we present numerical algorithms to extract relevant information from available data after a fault, assuming a separate mechanism ensures fault detection. After data extraction, a well-chosen part of the missing data is regenerated through interpolation strategies to constitute meaningful inputs to restart the iterative scheme. We have developed these methods, referred to as interpolation-restart techniques, for Krylov subspace linear solvers. After a fault, lost entries of the current iterate computed by the solver are interpolated to define a new initial guess from which to restart the Krylov method. A well-suited initial guess is computed using the entries of the faulty iterate available on surviving nodes. We present two interpolation policies that preserve key numerical properties of well-known linear solvers, namely the monotonic decrease of the A-norm of the error for the conjugate gradient method and the decrease of the residual norm for the generalized minimal residual (GMRES) method. The qualitative numerical behavior of the resulting scheme has been validated with sequential simulations in which the number of faults and the amount of data loss are varied. Finally, the computational costs associated with the recovery mechanism have been evaluated through parallel experiments.
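
A minimal sketch of one interpolation policy in this spirit, in the style of the linear-interpolation idea from the interpolation-restart literature (the dense-matrix setting and the function name are illustrative; a real solver would operate on distributed sparse data):

```python
import numpy as np

def li_recover(A, b, x, lost):
    """Linear-interpolation-style recovery: a minimal sketch.

    Regenerates the lost entries of the current iterate x by solving
    the local subsystem A[I, I] x_I = b_I - A[I, S] x_S, where I is
    the set of indices lost in the node crash and x_S contains the
    entries still available on surviving nodes.
    """
    I = np.asarray(lost)                      # indices lost in the fault
    S = np.setdiff1d(np.arange(len(b)), I)    # surviving indices
    rhs = b[I] - A[np.ix_(I, S)] @ x[S]       # move known entries to the rhs
    x_new = x.copy()
    x_new[I] = np.linalg.solve(A[np.ix_(I, I)], rhs)
    return x_new                              # new initial guess for the restart
```

The patched iterate is then used as the initial guess to restart CG or GMRES.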

3.
Applying computationally expensive simulations in design or process optimization results in long-running solution processes even when using a state-of-the-art distributed algorithm and hardware. Within these simulation-based optimization problems, the optimizer has to treat the simulation systems as black boxes. The distributed solution of this kind of optimization problem demands efficient utilization of resources (i.e., processors) and evaluation of the solution quality. Analyzing the parallel performance is therefore an important task in the development of adequate distributed approaches, taking into account the numerical algorithm, its implementation, and the hardware architecture used. In this paper, simulation-based optimization problems are characterized and a distributed solution algorithm is presented. Different performance analysis techniques (e.g., scalability analysis, computational complexity) are discussed, and a new approach integrating parallel performance and solution quality is developed. This approach combines a priori and a posteriori techniques and can be applied in early stages of the solution process. Its feasibility is demonstrated by applying it to three different classes of simulation-based optimization problems from groundwater management.

4.
In recent years, the demand for and availability of computational capabilities have changed radically. Desktops and laptops have increased their processing resources, exceeding users' demand for large parts of the day. On the other hand, computational methods are ever more frequently adopted by scientific communities, which often experience difficulties in obtaining access to the required resources. Consequently, data centers for outsourced computation, relying on the cloud computing paradigm, are proliferating. Notwithstanding the effort to build energy-efficient data centers, their energy footprint is still considerable, since cooling a large number of machines situated in the same room or container requires a significant amount of power. The volunteer cloud, exploiting users' willingness to share a portion of their underused machine resources, can constitute an effective solution for obtaining the required computational resources when needed. In this paper, we foster the adoption of volunteer cloud computing as a green (i.e., energy-efficient) solution that can even outperform existing data centers in specific tasks. To manage the complexity of such a large-scale heterogeneous system, we propose a distributed optimization policy for task scheduling with the aim of reducing the overall energy consumed in executing a given workload. To this end, we formulate an integer programming problem and rely on the Alternating Direction Method of Multipliers (ADMM) for its solution. Our approach is compared with a centralized one and with other solutions that do not target energy efficiency. Results show that the distributed solution found by the ADMM constitutes a good suboptimal solution, worth applying in a real environment.
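
The abstract invokes ADMM without stating it; for reference, the generic scaled-form updates for min f(x) + g(z) subject to Ax + Bz = c are (textbook ADMM, not the paper's specific integer-programming formulation):

$$
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^{k} - c + u^{k}\|_{2}^{2},\\
z^{k+1} &= \operatorname*{arg\,min}_{z}\; g(z) + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c + u^{k}\|_{2}^{2},\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
$$

Applied to an integer program, such a scheme acts as a heuristic rather than an exact method, which is consistent with the good suboptimal solutions reported above.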

5.
We present a new scattered data fitting method, where local approximating polynomials are directly extended to smooth (C^1 or C^2) splines on a uniform triangulation Δ (the four-directional mesh). The method is based on designing appropriate minimal determining sets consisting of whole triangles of domain points for a uniformly distributed subset of Δ. This construction allows us to use discrete polynomial least squares approximations to the local portions of the data directly as parts of the approximating spline. The remaining Bernstein–Bézier coefficients are efficiently computed by extension, i.e., using the smoothness conditions. To obtain high-quality local polynomial approximations even for difficult point constellations (e.g., with voids, clusters, or tracks), we adaptively choose the polynomial degrees by controlling the smallest singular value of the local collocation matrices. The computational complexity of the method grows linearly with the number of data points, which facilitates its application to large data sets. Numerical tests involving standard benchmarks as well as real-world scattered data sets illustrate the approximation power of the method, its efficiency, and its ability to produce surfaces of high visual quality, to deal with noisy data, and to be used for surface compression.
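
A univariate toy version of the degree-control idea (hypothetical function name; the paper works with bivariate polynomials on triangulations, so this sketch only illustrates the singular-value test):

```python
import numpy as np

def local_poly_fit(pts, vals, max_deg=3, sigma_min=1e-3):
    """Adaptive-degree local least squares: a minimal 1D sketch.

    Starts from max_deg and lowers the degree until the smallest
    singular value of the local collocation (Vandermonde) matrix
    exceeds sigma_min, so that difficult point constellations
    (voids, clusters) still yield a stable fit.
    """
    for deg in range(max_deg, -1, -1):
        V = np.vander(pts, deg + 1)                  # local collocation matrix
        if np.linalg.svd(V, compute_uv=False).min() > sigma_min:
            coef, *_ = np.linalg.lstsq(V, vals, rcond=None)
            return deg, coef                         # stable degree found
    raise ValueError("no stable polynomial degree for this point set")
```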

6.
A simulation and decision support system, RealOpt©, for planning large-scale emergency dispensing clinics to respond to biological threats and infectious disease outbreaks is described. The system allows public health administrators to investigate clinic design and staffing scenarios quickly. RealOpt© incorporates efficient optimization technology seamlessly interfaced with a simulation module. The system's correctness and computational advantage are validated via comparisons against simulation runs of the same model developed on a commercial system. Simulation studies exploring facility layout and staffing scenarios for smallpox vaccination and for an actual anthrax-treatment dispensing exercise and post-event analysis are presented. The system produces results consistent with the model built on the commercial system, but requires only a fraction of the computational time: each smallpox scenario runs within 1 CPU minute on RealOpt©, versus run times of 5–10 h on the commercial system. This fast computational time enables its use in large-scale studies, in particular an anthrax response planning exercise involving a county with 864,000 households. The computational effort required for this exercise was roughly 30 min for all scenarios considered, demonstrating that RealOpt© offers a very promising avenue for a comprehensive investigation involving a more diverse set of scenarios, and justifying work towards development of a robust system that can be widely deployed for use by state, local, and tribal health practitioners. Using our staff allocation and assignments for the anthrax field exercise, DeKalb County achieved the highest throughput among all counties that simultaneously conducted the same scale of anthrax exercise at various locations, with labor usage at or below that of the other counties. Indeed, DeKalb exceeded the targeted number of households and processed 50% more individuals than the second-place county; none of the other counties achieved the targeted number of households. The external evaluators commented that DeKalb produced the most efficient floor plan (with no path crossing), the most cost-effective dispensing (lowest labor/throughput value), and the smoothest operations (shortest average wait time and average queue length, equalized utilization rates). The study shows that even without historical data, one can use our system to plan ahead and soundly estimate the required labor resources. The exercise also revealed many areas that need attention during the operational planning and design of dispensing centers. The type of disaster being confronted (e.g., biological attack, infectious disease outbreak, or natural disaster) also dictates different design considerations with respect to the dispensing clinic, facility locations, dispensing and backup strategies, and level of security protection. Depending on the situation, backup plans will differ, and the level of security and military personnel, as well as the number of healthcare workers required, will vary. In summary, the study shows that a real-time decision support system is viable through careful design of a stand-alone simulator coupled with powerful tailor-designed optimization solvers. The flexibility of performing empirical tests quickly makes the system amenable for use in training and preparation, and for strategic planning before and during an emergency situation.
The system facilitates analysis of "what-if" scenarios and serves as an invaluable tool for operational planning and dynamic on-the-fly reconfiguration of large-scale emergency dispensing clinics. It also allows "virtual field exercises" to be performed on the decision support system, offering insight into operations flow and bottlenecks when mass dispensing is required for a region with a large population. The system, designed in modular form with a flexible implementation, enables future expansion and modification of emergency center designs with respect to treatment for different biological threats or disease outbreaks. Working with emergency response departments, we will further fine-tune and develop the system to address different biological attacks and infectious disease outbreaks, and to ensure its practicality and usability.

7.
We revisit the interactive model-based approach to global optimization proposed in Wang and Garcia (J Glob Optim 61(3):479–495, 2015), in which parallel threads independently execute a model-based search method and periodically interact through a simple acceptance-rejection rule aimed at preventing duplication of search efforts. In that paper it was assumed that each thread successfully identifies a locally optimal solution every time the acceptance-rejection rule is implemented. Under this stylized model of computational time, the rate of convergence to a globally optimal solution was shown to increase exponentially in the number of threads. In practice, however, the computational time required to identify a locally optimal solution varies greatly. Therefore, when the acceptance-rejection rule is implemented, several threads may fail to identify a locally optimal solution. This situation calls for reallocation of computational resources in order to speed up the identification of local optima when one or more threads repeatedly fail to do so. In this paper we consider an implementation of the interactive model-based approach that accounts for real time, that is, it takes into account the possibility that several threads may fail to identify a locally optimal solution whenever the acceptance-rejection rule is implemented. We propose a modified acceptance-rejection rule that alternates between enforcing diverse search (in order to prevent duplication) and reallocating computational effort (in order to speed up the identification of local optima). We show that the rate of convergence in real time increases with the number of threads. This result formalizes the idea that in parallel computing, exploitation and exploration can be complementary provided relatively simple rules for interaction are implemented. We report results from extensive numerical experiments that illustrate the theoretical analysis of performance.
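
A schematic of one interaction round combining diversity enforcement with effort reallocation (an illustrative reading of such a modified rule, not the paper's exact specification; all names are hypothetical):

```python
import numpy as np

def acceptance_rejection(candidates, accepted, delta):
    """One interaction round: a schematic sketch.

    candidates: dict thread_id -> local optimum found this round,
                or None if the thread failed to converge in time.
    A candidate is accepted only if it lies at least delta away from
    every previously accepted optimum (diversity); threads that failed
    receive a doubled budget next round (reallocation of effort).
    """
    budgets = {}
    for tid, x in candidates.items():
        if x is None:                 # thread did not finish: give it more time
            budgets[tid] = 2.0
            continue
        budgets[tid] = 1.0
        if all(np.linalg.norm(x - y) >= delta for y in accepted):
            accepted.append(x)        # accept: sufficiently novel optimum
        # rejected threads restart from a fresh random model (not shown)
    return accepted, budgets
```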

8.
Optimal resource allocation is one of the fundamental problems in the design of wireless communication systems. Optimally allocating resources such as power, transmit waveforms, and spectrum can greatly improve the transmission performance of the whole communication system. At present, relative to the vigorous real-world development of communication technologies, the mathematical theory and methods for communication system optimization lag behind, and in some respects have become a key factor limiting their development and application. Optimal resource allocation problems in wireless communications can often be modeled as nonconvex, nonlinearly constrained optimization problems with special structure. On the one hand, these optimization problems are often highly nonlinear and generally hard to solve; on the other hand, they possess special structure of their own, such as hidden convexity and separability. This paper focuses on characterizing the complexity of optimal physical-layer resource allocation in multiuser interference channels, and on how to exploit the special structure of these problems to design computational methods that are both effective and compatible with practical requirements such as distributed implementation.

9.
Stochastic Analysis and Applications, 2013, 31(5): 1151–1173
In this paper, we consider a finite-buffer bulk-arrival and bulk-service queue with variable server capacity: M^X/G^Y/1/K+B. The main purpose of this paper is to discuss the analytic and computational aspects of this system. We first derive steady-state departure-epoch probabilities based on the embedded Markov chain method. Next, we demonstrate two numerically stable relationships for the steady-state probabilities of the queue lengths at three different epochs: departure, random, and arrival. Finally, based on these relationships, we present various useful performance measures of interest, such as moments of the number of customers in the queue at the three epochs, the loss probability, and the probability that the server is busy. Numerical results are presented for a deterministic service-time distribution, a case that has gained importance in recent years.
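
The departure-epoch probabilities are the stationary vector of the embedded chain; a generic sketch of that final step (building the transition matrix P of the M^X/G^Y/1/K+B chain itself is the model-specific part and is not shown):

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an embedded Markov chain.

    Solves pi P = pi together with sum(pi) = 1 by replacing one
    equation of the singular system (P^T - I) pi = 0 with the
    normalization condition.
    """
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0                    # replace last equation by normalization
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```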

10.
11.
We present a way to use the augmented system approach in interior point methods. We elaborate on the increased freedom in determining the pivot order, which makes this approach computationally very competitive. With the pivot search heuristics presented here, in most cases we can achieve performance no worse than that of the AD^{-1}A^T method, and in several cases much better. In practice, the augmented system seems to be the only safe method in the presence of dense columns or a 'bad' nonzero pattern. We found both methods useful, and our implementation includes both. It is also equipped with an analyzer, based on evaluating the nonzero pattern of the constraint matrix, that determines which of the two to use. We also point out that the treatment of free variables is more efficient in the framework of the augmented system. We report on some very favorable computational experience achieved with our implementation of the augmented system based on these ideas.
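
For context (standard interior-point notation; D denotes the positive diagonal scaling matrix of the current iteration), the two alternatives contrasted above are the augmented system

$$
\begin{pmatrix} -D & A^{T} \\ A & 0 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
=
\begin{pmatrix} r \\ h \end{pmatrix},
$$

and the normal equations obtained by eliminating $\Delta x = D^{-1}(A^{T}\Delta y - r)$, namely $A D^{-1} A^{T}\,\Delta y = h + A D^{-1} r$. The latter matrix fills in badly when A has dense columns, which is why the augmented system is the safer route in that case.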

12.
Computing the maximum cycle-mean of a weighted digraph is relevant to a number of applications, and combinatorial algorithms of complexity O(n^3) are known. We present a new algorithm, with computational evidence suggesting an expected run-time growth rate below O(n^3).
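
One classical combinatorial method in this complexity class is Karp's algorithm; a reference sketch for the maximum cycle-mean (assuming a strongly connected digraph; otherwise apply it per strongly connected component):

```python
import math

def max_cycle_mean(n, edges):
    """Karp-style O(n*m) maximum cycle-mean: a reference sketch.

    n: number of vertices (0..n-1); edges: list of (u, v, w).
    D[k][v] = maximum weight of a k-edge walk ending at v, with
    D[0][v] = 0 for every v (equivalent to a zero-weight super-source).
    Returns max over cycles C of weight(C)/length(C), or -inf if acyclic.
    """
    NEG = -math.inf
    D = [[NEG] * n for _ in range(n + 1)]
    for v in range(n):
        D[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] > NEG and D[k - 1][u] + w > D[k][v]:
                D[k][v] = D[k - 1][u] + w
    best = NEG
    for v in range(n):
        if D[n][v] == NEG:
            continue
        # Karp's characterization, with min/max swapped for the maximum case
        worst = min((D[n][v] - D[k][v]) / (n - k)
                    for k in range(n) if D[k][v] > NEG)
        best = max(best, worst)
    return best
```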

13.
In general, solving Global Optimization (GO) problems by Branch-and-Bound (B&B) requires a huge computational capacity, and parallel execution is used to speed up the computing time. Because the foreseen computational workload in this type of algorithm (the number of nodes in the B&B tree) changes dynamically during execution, load balancing and the decision on additional processors are complicated. We use the term left-over to denote the number of nodes that still have to be evaluated at a given moment during execution. In this work, we study new methods to estimate the left-over value based on the observed amount of pruning. This provides information about the remaining running time of the algorithm and the required computational resources. We focus on their use in interval B&B GO algorithms.
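
To give the flavor of a pruning-based left-over estimate (an illustrative branching-process model, not the estimators proposed in the paper):

```python
def leftover_estimate(open_nodes, pruned, branched, b=2):
    """Pruning-based left-over estimate: an illustrative sketch.

    Models the remaining search as a branching process: each open node
    is pruned with the empirically observed probability p, otherwise it
    spawns b children (b=2 for interval bisection). The expected subtree
    size below one open node solves s = 1 + (1 - p) * b * s, i.e.
    s = 1 / (1 - (1 - p) * b) in the subcritical regime, so the
    left-over is roughly open_nodes * s.
    """
    total = pruned + branched
    p = pruned / total if total else 0.0      # observed pruning probability
    growth = (1.0 - p) * b
    if growth >= 1.0:
        return float("inf")                   # supercritical: no finite estimate
    return open_nodes / (1.0 - growth)
```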

14.
The main purpose of this paper is to use elementary methods and properties of the classical Gauss sums to study the computational problem of a fourth power mean of the generalized quadratic Gauss sums modulo q (an odd positive integer), and to give an exact computational formula for it.
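
In one common normalization (the paper's exact definition may differ, e.g. by weighting with a Dirichlet character), the generalized quadratic Gauss sum and the fourth power mean in question read

$$
G(a;q)=\sum_{n=0}^{q-1} e\!\left(\frac{a n^{2}}{q}\right),\qquad e(y)=e^{2\pi i y},\qquad
M(q)=\sum_{a=1}^{q-1}\bigl|G(a;q)\bigr|^{4},
$$

with q an odd positive integer; the paper's contribution is a closed-form evaluation of such a mean.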

15.
Many data dissemination techniques have been proposed for wireless sensor networks (WSNs) to facilitate data dissemination and query processing. However, these techniques may not work well in a large-scale sensor network where a huge amount of sensing data is generated. In this paper, we propose an integrated distributed connected-dominating-set-based indexing (CBI) data dissemination scheme to support scalable handling of large amounts of sensing data in large-scale WSNs. Our CBI scheme minimizes the use of limited network and computational resources while providing timely responses to queries. Moreover, our data dissemination framework ensures scalability and load balance, as well as adaptivity in the presence of dynamic changes. Analysis and simulations are conducted to evaluate the performance of our CBI scheme. The results show that it outperforms the external-storage, local-storage, and data-centric-storage schemes in overall performance.

16.
We report on the Matlab program package HILBERT. It provides an easily accessible implementation of lowest-order adaptive Galerkin boundary element methods for the numerical solution of the Poisson equation in 2D. The library was designed to serve several purposes: the stable implementation of the integral operators may be used in research code; the framework of Matlab ensures usability in lectures on boundary element methods or scientific computing; and we emphasize the use of adaptivity as a general concept and for boundary element methods in particular. In this work, we summarize recent analytical results on adaptivity in the context of BEM and illustrate the use of HILBERT. Various benchmarks are performed to empirically analyze the performance of the proposed adaptive algorithms and to compare adaptive and uniform mesh refinements. In particular, we focus not only on mathematical convergence behavior but also on the usage of critical system resources such as memory consumption and computational time. In all cases, the superiority of the proposed adaptive approach is empirically supported.

17.
E-science infrastructures are becoming essential tools for computational scientific research. In this paper, we describe two e-science infrastructures: the Science and Engineering Applications Grid (SEAGrid) and molecular modeling and parametrization (ParamChem). SEAGrid is a virtual organization with a diverse set of hardware and software resources that provides services for accessing such resources in a routine and transparent manner. These essential services include allocation of computational resources, client-side application interfaces, computational job and data management tools, and consulting activities. ParamChem is another e-science project, dedicated to molecular force-field parametrization based on both ab initio and molecular mechanics calculations on high-performance computers (HPCs) driven by scientific workflow middleware services. Both projects share a similar three-tier computational infrastructure consisting of a front-end client, a middleware web services layer, and a remote HPC computational layer. The client is a Java Swing desktop application with components for pre- and post-processing of data, communication with the middleware server, and local data management. The middleware service is based on Axis2 web services and a MySQL relational database, which provide functionality for user authentication and session control, HPC resource information collection, discovery and matching, and job information logging and notification. It can also be integrated with scientific workflows to manage computations on HPC resources. The grid credentials for accessing HPCs are delegated through the MyProxy infrastructure. Currently, SEAGrid has integrated several popular application software suites, such as Gaussian for quantum chemistry, NAMD for molecular dynamics, and engineering software such as Abacus for mechanical engineering. ParamChem has integrated CGenFF (CHARMM General Force Field) for molecular force-field parametrization of drug-like molecules. Long-term storage of user data is handled by tertiary data archival mechanisms. The SEAGrid science gateway serves more than 500 users, while more than 1000 users currently use ParamChem services such as atom typing and initial force-field parameter guesses.

18.
The Shapley–Shubik power index in a voting situation depends on the number of orderings in which each player is pivotal. The Banzhaf power index depends on the number of ways in which each voter can effect a swing. If there are n players in a voting situation, then the function measuring the worst-case running time for computing these indices is in O(n·2^n). We present a combinatorial method based on generating functions to compute these power indices efficiently in weighted double- or triple-majority games, and we study the time complexity of the algorithms. Moreover, we calculate these power indices for the countries in the Council of Ministers of the European Union under the new decision rules prescribed by the Treaty of Nice.
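
A minimal sketch of the generating-function computation of Banzhaf swing counts for a single weighted majority game (hypothetical function name; for double or triple majority games the generating function carries one variable per rule):

```python
def banzhaf(weights, quota):
    """Banzhaf swing counts via generating functions: a minimal sketch.

    For voter i, the polynomial prod_{j != i} (1 + x**w_j) has as its
    coefficient of x**k the number of coalitions of the other voters
    with total weight k; voter i swings exactly those coalitions with
    quota - w_i <= k <= quota - 1.
    """
    total_w = sum(weights)
    swings = []
    for i, wi in enumerate(weights):
        coef = [0] * (total_w + 1)
        coef[0] = 1
        for j, wj in enumerate(weights):
            if j == i:
                continue
            # multiply by (1 + x**wj); iterate downwards so each voter
            # is counted at most once
            for k in range(total_w - wj, -1, -1):
                if coef[k]:
                    coef[k + wj] += coef[k]
        swings.append(sum(coef[max(0, quota - wi):quota]))
    return swings

# e.g. banzhaf([2, 1, 1], quota=3) -> [3, 1, 1]
```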

19.
The goal of this work is to present some optimal control aspects of distributed systems described by the nonlinear Cahn–Hilliard (CH) equations. Theoretical conclusions on distributed control of the CH system with quadratic criteria are obtained by variational theory (see [17]). A computational result is obtained by a new semi-discrete algorithm, constructed on the basis of the finite element method combined with the updated (nonlinear) conjugate gradient method for minimizing the performance index efficiently. Finally, a laboratory demonstration is included to show the efficiency of the proposed nonlinear scheme.
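
As a reminder of the type of system involved (notation illustrative; the paper's exact equations, boundary conditions, and control operator may differ), a distributed-control Cahn–Hilliard problem with a quadratic criterion takes the form

$$
u_{t} = \Delta\bigl(\varphi(u) - \varepsilon^{2}\,\Delta u\bigr) + v \;\;\text{in } \Omega\times(0,T),\qquad
J(v) = \tfrac{1}{2}\int_{0}^{T}\!\!\int_{\Omega} (u-u_{d})^{2}\,dx\,dt + \tfrac{\alpha}{2}\int_{0}^{T}\!\!\int_{\Omega} v^{2}\,dx\,dt,
$$

with a typical double-well nonlinearity $\varphi(u)=u^{3}-u$, a target state $u_d$, and a regularization weight $\alpha>0$.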

20.
We present an average-case analysis of the minimum spanning tree heuristic for the power assignment problem. The worst-case approximation ratio of this heuristic is 2. We show that in Euclidean d-dimensional space, when the vertex set consists of independent, identically distributed (i.i.d.) uniform random points in [0,1]^d and the distance power gradient equals the dimension d, the minimum spanning tree-based power assignment converges completely to a constant depending only on d.
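
A minimal sketch of the heuristic itself (function name hypothetical; `alpha` is the distance power gradient, equal to d in the setting analyzed above):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_power_assignment(points, alpha):
    """MST heuristic for power assignment: a minimal sketch.

    Assigns each node a power equal to the length of its longest
    incident MST edge raised to alpha, which is enough to sustain
    every MST edge and hence keep the network connected.
    """
    dist = squareform(pdist(points))               # pairwise Euclidean distances
    mst = minimum_spanning_tree(dist).toarray()    # upper-triangular edge weights
    mst = np.maximum(mst, mst.T)                   # symmetrize the edge set
    return mst.max(axis=1) ** alpha                # per-node power

# e.g. total power of the heuristic on i.i.d. uniform points in [0,1]^2:
# pts = np.random.rand(200, 2); total = mst_power_assignment(pts, 2).sum()
```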
