Similar documents
20 similar documents retrieved (search time: 31 ms)
1.
Using Carstensen's results from 1991 we state a theorem concerning the localization of polynomial zeros and derive two a posteriori error bound methods with convergence orders 3 and 4. These methods possess the useful property of inclusion methods of producing disks containing all simple zeros of a polynomial. We establish computationally verifiable initial conditions that guarantee the convergence of these methods. Some computational aspects and the possibility of implementation on parallel computers are considered, including two numerical examples. A comparison of the a posteriori error bound methods with the corresponding circular interval methods, regarding the computational costs and the sizes of the produced inclusion disks, is also given.
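A minimal sketch of the kind of a posteriori inclusion disk involved (this is the classical bound min_i |z - zeta_i| <= n |p(z)/p'(z)| for a degree-n polynomial, used here only as an illustration, not the Carstensen-based bounds of the paper):

```python
# Classical a posteriori result: for a polynomial p of degree n and any z with
# p'(z) != 0, the disk centered at z with radius n*|p(z)/p'(z)| contains at least
# one zero of p.  (Illustration only; not the paper's order-3/4 methods.)
import numpy as np

def inclusion_disk(coeffs, z):
    """Return (center, radius) of a disk about z containing at least one zero of p."""
    n = len(coeffs) - 1                      # polynomial degree
    p = np.polyval(coeffs, z)                # p(z)
    dp = np.polyval(np.polyder(coeffs), z)   # p'(z)
    return z, n * abs(p / dp)

# Example: p(x) = x^3 - 1, approximation z = 1.05 of the zero at 1.
center, radius = inclusion_disk([1.0, 0.0, 0.0, -1.0], 1.05)
print(center, radius)   # the true zero 1.0 lies inside |w - 1.05| <= radius
```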

2.
This paper presents a new class of methods for solving unconstrained optimization problems on parallel computers. The methods are intended to solve small to moderate dimensional problems where function and derivative evaluation is the dominant cost. They utilize multiple processors to evaluate the function, (finite difference) gradient, and a portion of the finite difference Hessian simultaneously at each iterate. We introduce three types of new methods, which all utilize the new finite difference Hessian information in forming the new Hessian approximation at each iteration; they differ in whether and how they utilize the standard secant information from the current step as well. We present theoretical analyses of the rate of convergence of several of these methods. We also present computational results which illustrate their performance on parallel computers when function evaluation is expensive. Research supported by AFOSR grant AFOSR-85-0251, ARO contract DAAG 29-84-K-0140, NSF grant DCR-8403483, and NSF cooperative agreement DCR-8420944.
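A minimal sketch (not the authors' code) of the basic idea of parallel finite-difference derivative evaluation: each partial derivative needs one extra evaluation of the objective f, and those evaluations are independent, so they can be farmed out to a process pool when f is expensive. The objective used here is a placeholder.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def f(x):
    # placeholder for an expensive objective function
    return float(np.sum(x ** 2) + np.prod(np.cos(x)))

def _shifted_eval(args):
    # evaluate f at x shifted by h along coordinate i
    x, i, h = args
    e = np.zeros_like(x)
    e[i] = h
    return f(x + e)

def fd_gradient(x, h=1e-6, workers=4):
    # forward-difference gradient; the n shifted evaluations run concurrently
    fx = f(x)
    tasks = [(x, i, h) for i in range(len(x))]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        shifted = list(pool.map(_shifted_eval, tasks))
    return (np.array(shifted) - fx) / h

if __name__ == "__main__":
    print(fd_gradient(np.array([1.0, 2.0, 3.0])))
```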

3.
The article is the text of a survey report on methods for obtaining lower bounds on computational complexity for abstract computers. Besides the methods for obtaining lower bounds, related methods for the simulation of some machines by others, with the preservation of some complexity measures at the expense of an increase in others (trade-off results), are presented. The methods of crossing sequences, tails, and overlaps, together with related techniques, are examined. A new proof of an old result is sometimes given to illustrate how a method works, or a new result is proved. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 118, pp. 4–24, 1982.

4.
The computational Grid is currently gaining in popularity; it enables computers scattered all over the world to be connected over the Internet as if they were part of a single large computational infrastructure. While the computational Grid gathers more and more computational resources and the number of applications for it increases, load balancing on the computational Grid is still not effective enough. Because the computers on the computational Grid are connected by a wide-area network, the significant communication latency and the frequent large fluctuations in throughput make it difficult to achieve effective load balancing. In this paper we therefore propose an algorithm that predicts networking loads on the computational Grid so that computational resources can be used more efficiently. The proposed algorithm, based on a Markov model, is evaluated using an actual networking load; the results show that it offers more accurate predictions than related work. This revised version was published online in July 2006 with corrections to the Cover Date.
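A minimal sketch of the kind of Markov-chain prediction described (assumptions: the load is discretized into a few states and modeled as a first-order Markov chain; this is an illustration, not the paper's algorithm). The transition matrix is estimated from an observed load trace and the next state is predicted as the most probable successor of the current one.

```python
import numpy as np

def fit_transition_matrix(states, n_states):
    # count observed state-to-state transitions and normalize row-wise
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # avoid division by zero for unseen states
    return counts / row_sums

def predict_next(P, current_state):
    return int(np.argmax(P[current_state]))

# Example: a toy trace of discretized network load levels (0 = low, 1 = medium, 2 = high).
trace = [0, 0, 1, 2, 2, 1, 0, 1, 2, 2, 2, 1]
P = fit_transition_matrix(trace, n_states=3)
print(P)
print("predicted next state:", predict_next(P, trace[-1]))
```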

5.
This paper considers difference schemes of uniform accuracy for a class of convection–diffusion equations in two space dimensions whose highest-order derivative terms are multiplied by a small parameter ε; the discussion is divided into three parts. 1. The continuous problem. We consider the following convection–diffusion equation.
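The abstract breaks off here; as an illustration only, a generic representative of this class of singularly perturbed problems (not necessarily the paper's exact equation) is

\[
-\varepsilon \Delta u + a_1(x,y)\,u_x + a_2(x,y)\,u_y + c(x,y)\,u = f(x,y), \quad (x,y)\in\Omega=(0,1)^2, \qquad u|_{\partial\Omega}=0,
\]

with 0 < ε ≪ 1 multiplying the highest-order (diffusion) terms, which is what causes boundary layers and motivates uniformly accurate schemes.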

6.
The block preconditioned conjugate gradient method on vector computers
We present vectorizable versions of the block preconditioners introduced by Concus, Golub & Meurant [2] for use with the conjugate gradient method. In [2] it was shown that the block preconditioners require less computational work than the classical point preconditioners on conventional serial computers. Here we give numerical results for several vector computers (CDC Cyber 205, CRAY 1-S, CRAY X-MP) and for several problems, which show that in most cases the block method, with slight modifications, also gives better results on vector computers. Dedicated to Germund Dahlquist on his sixtieth birthday.
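A minimal sketch of preconditioned CG with a simple block-Jacobi preconditioner (a stand-in for illustration, not the Concus–Golub–Meurant block preconditioner of [2]): the matrix is split into diagonal blocks, each block is factorized once, and the preconditioner solve applies the block factorizations independently, an operation that vectorizes and parallelizes naturally.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def block_jacobi_factor(A, block_size):
    # Cholesky-factorize each diagonal block once
    n = A.shape[0]
    return [cho_factor(A[i:i + block_size, i:i + block_size])
            for i in range(0, n, block_size)]

def apply_preconditioner(factors, r, block_size):
    # independent block solves z_i = B_i^{-1} r_i
    z = np.empty_like(r)
    for k, fac in enumerate(factors):
        i = k * block_size
        z[i:i + block_size] = cho_solve(fac, r[i:i + block_size])
    return z

def pcg(A, b, block_size=4, tol=1e-10, maxit=500):
    factors = block_jacobi_factor(A, block_size)
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_preconditioner(factors, r, block_size)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_preconditioner(factors, r, block_size)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: 1-D Laplacian (SPD, tridiagonal), n divisible by the block size.
n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
print(np.linalg.norm(A @ x - b))
```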

7.
Carstensen’s results from 1991, connected with Gerschgorin’s disks, are used to establish a theorem concerning the localization of polynomial zeros and to derive an a posteriori error bound method. The presented quasi-interval method possesses the useful property of inclusion methods of producing disks containing all simple zeros of a polynomial. The centers of these disks behave as approximations generated by a cubic derivative-free method in which the reuse of quantities already calculated in the previous iterative step decreases the computational cost. We state initial conditions that guarantee the convergence of the error bound method and prove that the method has order of convergence three. The initial conditions are computationally verifiable, since they depend only on the polynomial coefficients, its degree and the initial approximations. Some computational aspects and the possibility of implementation on parallel computers are considered, including two numerical examples. In honor of Professor Richard S. Varga.

8.
Under study is the performance of some computational models of filtration combustion of gases on multi-core computers. The analysis is restricted to models based on explicit difference schemes. In particular, an explicit two-level parallel algorithm with an adaptive mesh is constructed. Two shared-memory parallelization methods are applied: the straightforward application of OpenMP directives and a special distribution of data among the threads. The simulations show that the latter method has a substantial performance advantage.

9.
David E. Keyes, PAMM, 2007, 7(1): 1026401-1026402
Towards Optimal Petascale Simulations (TOPS) is a scalable solver software project, based on domain-decomposed parallelization, whose aim is to research, implement, and support, in collaboration with users, an open-source package for large-scale discretized PDE problems. Optimal-complexity methods, such as multigrid/multilevel preconditioners, keep the time spent in the dominant algebraic kernels close to linear in the discrete problem size as the applications scale on massively parallel computers. Krylov accelerators and Jacobian-free variants of Newton's method, as appropriate, are wrapped around the multilevel methods to deliver robustness in multirate, multiscale coupled systems, which are solved either implicitly or in more traditional forms of operator splitting. The TOPS software framework is being extended beyond direct computational simulation to computational optimization, including design, control, and inverse problems. We outline and illustrate the philosophy of TOPS. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
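A minimal sketch (not TOPS code) of the Jacobian-free idea mentioned above: the Krylov solver inside Newton's method only needs Jacobian-vector products, which can be approximated by a finite difference of the residual F, so the Jacobian is never formed explicitly. The residual F used here is a toy placeholder.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    # toy nonlinear residual: F(u) = A u + u^3 - b for a small test problem
    A = np.array([[4.0, -1.0], [-1.0, 4.0]])
    b = np.array([1.0, 2.0])
    return A @ u + u ** 3 - b

def newton_krylov(u0, tol=1e-10, maxit=20, eps=1e-7):
    u = u0.copy()
    for _ in range(maxit):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # Jacobian-vector product J(u) v ~= (F(u + eps v) - F(u)) / eps
        Jv = LinearOperator((u.size, u.size), dtype=float,
                            matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, _ = gmres(Jv, -r)    # inner Krylov solve of the Newton correction
        u = u + du
    return u

print(newton_krylov(np.zeros(2)))
```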

10.
Contemporary computers collect databases that can be too large for classical methods to handle. The present work takes data whose observations are distribution functions (rather than the single numerical point values of classical data) and presents a new computational-statistical methodology for grouping the distributions into classes. The clustering method links the searched partition to the decomposition of mixture densities, through the notions of a function of distributions and of multi-dimensional copulas. The new clustering technique is illustrated by ascertaining distinct temperature and humidity regions for a global climate dataset, and the results compare favorably with those obtained from the standard EM algorithm.

11.
Model reduction is an area of fundamental importance in many modeling and control applications. In this paper we analyze the use of parallel computing in model reduction methods based on balanced truncation of large-scale dense systems. The methods require the computation of the Gramians of a linear time-invariant system. Using a sign function-based solver for computing full-rank factors of the Gramians yields some favorable computational aspects in the subsequent computation of the reduced-order model, particularly for non-minimal systems. As sign function-based computations only require efficient implementations of basic linear algebra operations readily available, e.g., in BLAS, LAPACK, and ScaLAPACK, good performance of the resulting algorithms on parallel computers is to be expected. Our experimental results on a PC cluster show the performance and scalability of the parallel implementation.
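A minimal sketch of square-root balanced truncation for a small stable LTI system (x' = A x + B u, y = C x). The paper computes the Gramian factors with a parallel sign-function solver; here they are obtained from a dense Lyapunov solver simply to show the truncation step itself.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    # Controllability and observability Gramians: A Wc + Wc A^T + B B^T = 0, etc.
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    S = cholesky(Wc, lower=True)            # Wc = S S^T
    R = cholesky(Wo, lower=True)            # Wo = R R^T
    U, sigma, Vt = svd(R.T @ S)             # Hankel singular values
    U1, s1, V1 = U[:, :r], sigma[:r], Vt[:r, :].T
    T = S @ V1 / np.sqrt(s1)                # right projection matrix
    W = R @ U1 / np.sqrt(s1)                # left projection matrix (W^T T = I)
    return W.T @ A @ T, W.T @ B, C @ T, sigma

# Example: a random stable system of order 6 reduced to order 2.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) - 6 * np.eye(6)   # shift to make A stable
B = rng.standard_normal((6, 2))
C = rng.standard_normal((2, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print(hsv)
```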

12.
We generalize the extended backward differentiation formulas (EBDFs) introduced by Cash and by Psihoyios and Cash so that the system matrix in the modified Newton process can be block-diagonalized, enabling an efficient parallel implementation. The purpose of this paper is to justify the use of diagonalizable EBDFs on parallel computers and to offer a starting point for the development of a variable-stepsize, variable-order method. We construct methods which are L-stable up to order p = 6 and which have the same computational complexity per processor as the conventional BDF methods. Numerical experiments with the order-6 method show that a speedup factor between 2 and 4 on four processors can be expected.

13.
It has been a long-time dream in electronic structure theory in physical chemistry/chemical physics to compute ground-state energies of atomic and molecular systems by employing a variational approach in which the two-body reduced density matrix (RDM) is the unknown variable. Realization of the RDM approach has benefited greatly from recent developments in semidefinite programming (SDP). We present the current state of this new application of SDP as well as the formulation of the resulting SDPs, which can be arbitrarily large. Numerical results using parallel computation on high-performance computers are given. The RDM method has several advantages, including robustness and high accuracy compared with traditional electronic structure methods, although its computational time and memory consumption are still extremely large. The work of Mituhiro Fukuda was primarily conducted at the Courant Institute of Mathematical Sciences, New York University.

14.
We investigate the numerical solution of the stable generalized Lyapunov equation via the sign function method. This approach has already been proposed for solving standard Lyapunov equations in several publications, and the extension to the generalized case is straightforward. We consider some modifications and discuss how to solve generalized Lyapunov equations with a semidefinite constant term directly for the Cholesky factor of the solution. The basic computational tools of the method are standard linear algebra operations that can be implemented efficiently on modern computer architectures and in particular on parallel computers. Hence, a considerable speed-up compared with the Bartels–Stewart and Hammarling methods is to be expected. We compare the algorithms by performing a variety of numerical tests.
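A minimal sketch of the sign-function iteration for the standard Lyapunov case (E = I; not the generalized variant or the Cholesky-factor formulation of the paper): for A stable and A^T X + X A + Q = 0, the coupled Newton iteration A_{k+1} = (A_k + A_k^{-1})/2, Q_{k+1} = (Q_k + A_k^{-T} Q_k A_k^{-1})/2 drives Q_k to 2X, and only inversions and matrix products are needed, which is why the method maps well onto parallel computers.

```python
import numpy as np

def lyap_sign(A, Q, tol=1e-12, maxit=100):
    # Sign-function (Newton) iteration; returns X with A^T X + X A + Q = 0.
    Ak, Qk = A.copy(), Q.copy()
    for _ in range(maxit):
        Ainv = np.linalg.inv(Ak)
        A_next = 0.5 * (Ak + Ainv)
        Qk = 0.5 * (Qk + Ainv.T @ Qk @ Ainv)
        if np.linalg.norm(A_next - Ak, 1) < tol * np.linalg.norm(Ak, 1):
            Ak = A_next
            break
        Ak = A_next
    return 0.5 * Qk

# Example: 2 x 2 stable A, symmetric positive semidefinite Q.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
Q = np.array([[1.0, 0.0], [0.0, 2.0]])
X = lyap_sign(A, Q)
print(np.linalg.norm(A.T @ X + X @ A + Q))   # residual should be near zero
```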

15.
Many problems arising in different fields of science and engineering can be reduced, by applying some appropriate discretization, either to a system of linear algebraic equations or to a sequence of such systems. The solution of a system of linear algebraic equations is very often the most time-consuming part of the computational process in the treatment of the original problem, because these systems can be very large (containing many millions of equations). It is therefore important to select fast, robust and reliable methods for their solution, even when fast modern computers are available. Since the coefficient matrices of the systems are normally sparse (i.e. most of their elements are zeros), the first requirement is to exploit the sparsity efficiently. However, this is normally not sufficient when the systems are very large. The computation of preconditioners based on approximate LU-factorizations and their use in further increasing the efficiency of the calculations are discussed in this paper. Computational experiments based on comprehensive comparisons of many numerical results obtained by using ten well-known methods for solving systems of linear algebraic equations (direct Gaussian elimination and nine iterative methods) are reported. Most of the considered methods are preconditioned Krylov subspace algorithms.
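A minimal sketch of the kind of approach discussed above: an approximate LU-factorization (here SciPy's incomplete LU) of a sparse coefficient matrix used as a preconditioner for a Krylov subspace method (GMRES). The test matrix is a standard five-point Laplacian, chosen only for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, LinearOperator, gmres

# Sparse test matrix: 2-D five-point Laplacian on a 30 x 30 grid.
n = 30
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

ilu = spilu(A, drop_tol=1e-4, fill_factor=10)     # incomplete LU factorization
M = LinearOperator(A.shape, matvec=ilu.solve)     # preconditioner as an operator

x, info = gmres(A, b, M=M, restart=50, maxiter=500)
print(info, np.linalg.norm(A @ x - b))            # info == 0 indicates convergence
```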

16.
Methods for constructing preconditioners for explicit iterative methods for solving systems of linear algebraic equations with sparse matrices are considered in this work. The techniques considered can, first of all, be realized within the framework of the simplest data structures; secondly, the graph structures of the corresponding algorithms are well adapted to realization on parallel computers; thirdly, in conjunction with modifications of Chebyshev methods they make it possible to construct rather effective computational algorithms. Experimental data are presented which demonstrate the effect of the proposed preconditioning techniques on the distribution of the eigenvalues of the matrices of systems arising in the discretization of two-dimensional elliptic boundary-value problems. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 139, pp. 51–60, 1984.
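A minimal sketch of the Chebyshev iteration referred to above, for a symmetric positive definite system A x = b given spectral bounds lmin <= lambda(A) <= lmax (the spectrum is exactly what preconditioning is meant to improve). It uses only matrix-vector products and fixed scalar recurrences, with no inner products, which is what makes it attractive on parallel computers; this is the textbook algorithm, not the paper's specific construction.

```python
import numpy as np

def chebyshev(A, b, lmin, lmax, x0=None, maxit=100, tol=1e-10):
    # Chebyshev iteration for SPD A with spectrum contained in [lmin, lmax].
    x = np.zeros_like(b) if x0 is None else x0.copy()
    theta, delta = (lmax + lmin) / 2.0, (lmax - lmin) / 2.0
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    d = r / theta
    for _ in range(maxit):
        x = x + d
        r = r - A @ d
        if np.linalg.norm(r) < tol:
            break
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# Example: diagonal test matrix with spectrum in [1, 10].
A = np.diag(np.linspace(1.0, 10.0, 50))
b = np.ones(50)
x = chebyshev(A, b, lmin=1.0, lmax=10.0)
print(np.linalg.norm(A @ x - b))
```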

17.
Optical computing

18.
We discuss methods for solving the unconstrained optimization problem on parallel computers when the number of variables is sufficiently small that quasi-Newton methods can be used. We concentrate mainly, but not exclusively, on problems where function evaluation is expensive. First we discuss ways to parallelize both the function evaluation costs and the linear algebra calculations in the standard sequential secant method, the BFGS method. Then we discuss new methods that are appropriate when there are enough processors to evaluate the function, the gradient, and part but not all of the Hessian at each iteration. We develop new algorithms that utilize this information and analyze their convergence properties. We present computational experiments showing that they are superior to parallelizations of either the BFGS method or Newton's method under our assumptions on the number of processors and the cost of function evaluation. Finally we discuss ways to effectively utilize the gradient values at unsuccessful trial points that are available in our parallel methods and also in some sequential software packages. Research supported by AFOSR grant AFOSR-85-0251, ARO contract DAAG 29-84-K-0140, NSF grants DCR-8403483 and CCR-8702403, and NSF cooperative agreement DCR-8420944.
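A minimal sketch of the BFGS update that forms the sequential kernel referred to above: given the step s = x_{k+1} - x_k and the gradient difference y = g_{k+1} - g_k, the inverse-Hessian approximation H is updated so that the secant condition H y = s holds. (Illustration of the standard formula only; the paper's parallel methods modify and extend this with extra Hessian information.)

```python
import numpy as np

def bfgs_update(H, s, y):
    # Standard BFGS update of the inverse-Hessian approximation H.
    sy = float(s @ y)
    if sy <= 1e-12:              # skip the update if the curvature condition fails
        return H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# One illustrative update on the quadratic f(x) = 0.5 x^T A x with A = diag(1, 4).
A = np.diag([1.0, 4.0])
x0, x1 = np.array([1.0, 1.0]), np.array([0.5, 0.2])
s, y = x1 - x0, A @ x1 - A @ x0
H = bfgs_update(np.eye(2), s, y)
print(H @ y - s)                 # secant condition: should be (numerically) zero
```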

19.
Linear systems represent the computational kernel of many models that describe problems arising in social and economic as well as technical and scientific disciplines. Therefore, much effort has been devoted to the development of methods, algorithms and software for the solution of linear systems. Finite-precision computer arithmetic makes rounding error analysis and perturbation theory a fundamental issue in this framework (Higham 1996). Indeed, Interval Arithmetic was first introduced to deal with the solution of problems on computers (Moore 1979, Rump 1983), since a floating point number actually corresponds to an interval of real numbers. On the other hand, in many applications data are affected by uncertainty (Jerrell 1995, Marino & Palumbo 2002), that is, they are only known to lie within certain intervals. Thus, bounding the solution set of interval linear systems plays a crucial role in many problems. In this work, we focus on the state of the art of theory and methods for bounding the solution set of interval linear systems. We start from basic properties and the main results obtained in recent years, and then give an overview of existing methods.
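A minimal sketch of one classical enclosure bound (an illustration, not a survey implementation): write the interval system via a midpoint matrix Ac with elementwise radius Delta and a midpoint right-hand side bc with radius delta. With R = Ac^{-1} and xc = R bc, if the spectral radius of |R| Delta is below 1, every solution x of A x = b with |A - Ac| <= Delta and |b - bc| <= delta satisfies |x - xc| <= (I - |R| Delta)^{-1} (|R| delta + |R| Delta |xc|), which gives a componentwise box enclosing the solution set.

```python
import numpy as np

def interval_enclosure(Ac, Delta, bc, delta):
    # Componentwise enclosure of the solution set of the interval system.
    R = np.linalg.inv(Ac)
    xc = R @ bc
    M = np.abs(R) @ Delta
    if max(abs(np.linalg.eigvals(M))) >= 1.0:
        raise ValueError("bound not applicable: spectral radius of |R|*Delta >= 1")
    err = np.linalg.solve(np.eye(len(bc)) - M, np.abs(R) @ delta + M @ np.abs(xc))
    return xc - err, xc + err      # lower and upper bounds of the enclosure box

# Example with small elementwise uncertainties in the matrix and right-hand side.
Ac = np.array([[4.0, 1.0], [1.0, 3.0]])
Delta = np.full((2, 2), 0.05)
bc = np.array([1.0, 2.0])
delta = np.array([0.1, 0.1])
lo, hi = interval_enclosure(Ac, Delta, bc, delta)
print(lo, hi)
```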

20.
A sophisticated computational model of metal inert gas arc welding of aluminium alloys is presented. The arc plasma, the wire electrode and the workpiece are included in the computational domain self-consistently. The flow in the arc plasma and in the weld pool are calculated in three dimensions using equations of computational fluid dynamics, modified to take into account plasma effects and coupled to electromagnetic equations. The formation of metal vapour from the wire electrode and workpiece is considered, as is the mixing of the wire electrode alloy with the workpiece alloy in the weld pool. A graphical user interface (GUI) has been developed, and the model runs on standard desktop or laptop computers. The computational model is described, and results are presented for lap-fillet weld geometry. The importance of including the arc in the computational domain is shown. The predictions of the model show good agreement with measurements of weld geometry and weld composition. The GUI is introduced, and the application of the model to predicting the thermal history of the workpiece, which is the input information that is required for predicting important weld properties such as residual stress and distortion and weld microstructure, is discussed. Initial predictions of residual stress and distortion of the workpiece are presented.
