Similar Documents (20 results)
1.
Testing Parallel Variable Transformation
This paper studies performance of the parallel variable transformation (PVT) algorithm for unconstrained nonlinear optimization through numerical experiments on a Fujitsu VPP500, one of the most up-to-date vector parallel computers. Special attention is paid to a particular form of the PVT algorithm that is regarded as a generalization of the block Jacobi algorithm that allows overlapping of variables among processors. Implementation strategies on the VPP500 are described in detail and results of numerical experiments are reported.
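The block Jacobi scheme that the PVT form above generalizes can be sketched in a few lines: each block of variables (one per processor, with overlap allowed) is updated from the same base point, so the block updates are mutually independent and could run in parallel. This is a minimal illustrative sketch, not the VPP500 implementation; the quadratic test function, the step size, and the overlapping block partition are all invented for the example.

```python
import numpy as np

def block_jacobi_step(grad, x, blocks, alpha=0.1):
    """One block-Jacobi iteration: every block of variables is updated
    independently (as if on its own processor) from the same base point x."""
    g = grad(x)
    x_new = x.copy()
    for idx in blocks:            # in a true PVT setting these run in parallel
        x_new[idx] = x[idx] - alpha * g[idx]
    return x_new

# Toy problem: f(x) = 0.5*||x||^2 - b.x, whose minimizer is x* = b
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x: x - b
x = np.zeros(4)
blocks = [np.array([0, 1]), np.array([1, 2, 3])]   # index 1 overlaps both blocks
for _ in range(200):
    x = block_jacobi_step(grad, x, blocks)
print(np.round(x, 3))
```

With identical gradient information, the overlapped variable receives the same update from both blocks, which is the simplest consistent way to allow overlap.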

2.
Three parallel space-decomposition minimization (PSDM) algorithms, based on the parallel variable transformation (PVT) and the parallel gradient distribution (PGD) algorithms (O.L. Mangasarian, SIAM Journal on Control and Optimization, vol. 33, no. 6, pp. 1916–1925), are presented for solving convex or nonconvex unconstrained minimization problems. The PSDM algorithms decompose the variable space into subspaces and distribute the decomposed subproblems among parallel processors. It is shown that if the decomposed subproblems are uncoupled from each other, they can be solved independently; otherwise, the parallel algorithms presented in this paper can be used. Numerical experiments show that these parallel algorithms can save processor time, particularly for medium- and large-scale problems. Up to six parallel processors are connected by Ethernet networks to solve four large-scale minimization problems. The results are compared with those obtained by using sequential algorithms run on a single processor. An application of the PSDM algorithms to the training of multilayer Adaptive Linear Neurons (Madaline) and a new parallel architecture for such parallel training are also presented.

3.
In this paper, we present parallel bundle-based decomposition algorithms to solve a class of structured large-scale convex optimization problems. An example in this class of problems is the block-angular linear programming problem. By dualizing, we transform the original problem to an unconstrained nonsmooth concave optimization problem which is in turn solved by using a modified bundle method. Further, this dual problem consists of a collection of smaller independent subproblems which give rise to the parallel algorithms. We discuss the implementation on the CRYSTAL multi-computer. Finally, we present computational experience with block-angular linear programming problems and observe that more than 70% efficiency can be obtained using up to eleven processors for one group of test problems, and more than 60% efficiency can be obtained for relatively smaller problems using up to five processors for another group of problems.

4.
Based on the second-order quasi-Newton equation, combined with the nonmonotone line search proposed by Zhang H.C., a diagonal second-order quasi-Newton algorithm for large-scale unconstrained optimization problems is constructed. At each iteration the algorithm approximates the inverse of the Hessian by a diagonal matrix, which markedly reduces the storage and work required to compute the search direction, offering a new approach to solving large unconstrained optimization problems. Under the usual assumptions, global and superlinear convergence of the algorithm are proved. Numerical experiments show that the algorithm is effective and practical.

5.
An effective algorithm is described for solving the general constrained parameter optimization problem. The method is quasi-second-order and requires only function and gradient information. An exterior point penalty function method is used to transform the constrained problem into a sequence of unconstrained problems. The penalty weight r is chosen as a function of the point x such that the sequence of optimization problems is computationally easy. A rank-one optimization algorithm is developed that takes advantage of the special properties of the augmented performance index. The optimization algorithm accounts for the usual difficulties associated with discontinuous second derivatives of the augmented index. Finite convergence is exhibited for a quadratic performance index with linear constraints; accelerated convergence is demonstrated for nonquadratic indices and nonlinear constraints. A computer program has been written to implement the algorithm and its performance is illustrated in fourteen test problems.
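The exterior-point penalty transformation described above can be sketched as follows, assuming a single inequality constraint g(x) <= 0 and a crude gradient-descent inner solver. The penalty weight schedule, the step-size rule, and the one-dimensional test problem are all invented for illustration; the paper's rank-one algorithm and its adaptive choice of r are not reproduced.

```python
def exterior_penalty(grad_f, g, grad_g, x0, r0=1.0, growth=10.0,
                     outer=6, inner=200):
    """Exterior-point penalty method (illustrative sketch).
    Minimizes f subject to g(x) <= 0 via P(x) = f(x) + r*max(0, g(x))**2,
    increasing r between unconstrained solves."""
    x, r = x0, r0
    for _ in range(outer):
        alpha = 0.25 / (1.0 + r)      # step sized to the growing curvature
        for _ in range(inner):        # crude gradient descent on P
            viol = max(0.0, g(x))
            x -= alpha * (grad_f(x) + 2.0 * r * viol * grad_g(x))
        r *= growth                   # tighten the penalty
    return x

# min x^2  s.t.  x >= 1  (written as g(x) = 1 - x <= 0); solution x* = 1
x = exterior_penalty(grad_f=lambda x: 2.0 * x,
                     g=lambda x: 1.0 - x,
                     grad_g=lambda x: -1.0,
                     x0=3.0)
print(round(x, 3))
```

The iterates approach the feasible set from outside (here x_r = r/(1+r) < 1 for every finite r), which is the characteristic behaviour of exterior penalties.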

6.
Many constrained sets in problems such as signal processing and optimal control can be represented as the fixed point set of a certain nonexpansive mapping, and a number of iterative algorithms have been presented for solving a convex optimization problem over a fixed point set. This paper presents a novel gradient method with a three-term conjugate gradient direction that is used to accelerate conjugate gradient methods for solving unconstrained optimization problems. The algorithm is guaranteed to converge strongly to the solution of the problem under standard assumptions. Numerical comparisons with existing gradient methods demonstrate the effectiveness and fast convergence of this algorithm.
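One well-known way to build a three-term conjugate gradient direction with guaranteed sufficient descent is the Hestenes–Stiefel-based update d_k = -g_k + beta_k d_{k-1} - theta_k y_{k-1}, which satisfies g_k·d_k = -||g_k||^2 by construction. The sketch below uses that variant with a backtracking Armijo search purely as an illustration; the paper's exact direction and its fixed-point-set machinery are not reproduced, and the quadratic test problem is invented.

```python
import numpy as np

def armijo(fval, x, d, g, c=1e-4, rho=0.5):
    """Backtracking line search; terminates because g.d < 0 by construction."""
    alpha, fx, gd = 1.0, fval(x), g @ d
    while fval(x + alpha * d) > fx + c * alpha * gd and alpha > 1e-12:
        alpha *= rho
    return alpha

def three_term_cg(fval, grad, x, iters=500):
    """Gradient method with a three-term HS-type CG direction.
    The direction satisfies g_k.d_k = -||g_k||^2 (sufficient descent)."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        alpha = armijo(fval, x, d, g)
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d @ y
        if abs(denom) > 1e-12:
            beta = (g_new @ y) / denom
            theta = (g_new @ d) / denom
        else:                         # safeguard: restart with steepest descent
            beta = theta = 0.0
        d = -g_new + beta * d - theta * y
        x, g = x_new, g_new
    return x

# Convex quadratic test problem; the minimizer is A^{-1} b
A = np.diag([1.0, 2.0, 3.0])
b = np.ones(3)
fval = lambda z: 0.5 * z @ A @ z - b @ z
x = three_term_cg(fval, lambda z: A @ z - b, np.zeros(3))
print(np.round(x, 3))
```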

7.
Based on a modified quasi-Newton equation and the Goldstein-Levitin-Polyak (GLP) projection technique, a two-stage-stepsize nonmonotone variable-metric gradient projection algorithm is established for optimization problems with convex set constraints. Global convergence of the algorithm is proved, together with a Q-superlinear convergence rate under certain conditions. Numerical results show that the new algorithm is effective and suitable for large-scale problems.

8.
In order to solve the constrained global optimization problem, we use penalty functions not only on the constraints but also on the objective function. Then, within the framework of interval analysis, an interval branch-and-bound algorithm is given which does not need to solve a sequence of unconstrained problems. Global convergence is proved. Numerical examples show that this algorithm is efficient.

9.
Based on trust-region techniques and a modified quasi-Newton equation, combined with the nonmonotone strategy of Zhang H.C., a new nonmonotone supermemory gradient algorithm for unconstrained optimization problems is designed, and its convergence and convergence rate are analyzed. Numerical experiments show that the algorithm is effective and well suited to large-scale problems.

10.
In this paper we propose an algorithm using only the values of the objective function and constraints for solving one-dimensional global optimization problems where both the objective function and constraints are Lipschitzean and nonlinear. The constrained problem is reduced to an unconstrained one by the index scheme. To solve the reduced problem a new method with local tuning on the behavior of the objective function and constraints over different sectors of the search region is proposed. Sufficient conditions of global convergence are established. We also present results of some numerical experiments.
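For intuition, a classical relative of such methods is the Piyavskii–Shubert sawtooth algorithm, which uses a known Lipschitz constant to build piecewise-linear lower bounds and always refines the interval with the smallest bound. The sketch below implements that unconstrained baseline only (the paper's index scheme, constraint handling, and local tuning are not reproduced), on an invented one-dimensional test function with an assumed Lipschitz constant L.

```python
def shubert_piyavskii(f, a, b, L, iters=60):
    """Piyavskii-Shubert method for 1-D Lipschitz global minimization.
    Between sampled neighbours (x1,f1), (x2,f2) the sawtooth lower bound
    is (f1+f2)/2 - L*(x2-x1)/2, attained at the cone intersection point."""
    pts = sorted([(a, f(a)), (b, f(b))])
    for _ in range(iters):
        best = None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            xm = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)   # intersection
            lb = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)     # lower bound
            if best is None or lb < best[0]:
                best = (lb, xm)
        xm = best[1]
        pts.append((xm, f(xm)))      # evaluate where the bound is lowest
        pts.sort()
    return min(pts, key=lambda p: p[1])

# Invented test: global minimum of (x - 1.3)^2 on [0, 4]; |f'| <= 5.4 < L
x, fx = shubert_piyavskii(lambda t: (t - 1.3) ** 2, 0.0, 4.0, L=8.0)
print(round(x, 2), round(fx, 4))
```

Because L overestimates the true Lipschitz constant, every cone intersection stays inside its interval and the lower bounds are valid; adaptive "local tuning" of L per sector, as in the paper, tightens these bounds.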

11.
A new filter-line-search algorithm for unconstrained nonlinear optimization problems is proposed. Based on the filter technique introduced by Fletcher and Leyffer (Math. Program. 91:239–269, 2002), it extends an existing technique of Wächter and Biegler (SIAM J. Comput. 16:1–31, 2005) for nonlinear equality constrained problems to the fully general unconstrained optimization problem. The proposed method, which differs from their approach, does not depend on any external restoration procedure. Global and local quadratic convergence is established under some reasonable conditions. The results of numerical experiments indicate that it is very competitive with the classical line search algorithm.

12.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, to avoid possible large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with nonmonotone line search. Nonmonotone line search has been recognized in numerical practice as a competitive approach for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of these nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
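A minimal sketch of the idea, assuming the Oren–Luenberger scaling factor tau = s'y / (y'Hy) applied to the inverse-Hessian approximation and a Grippo-style "max of recent values" nonmonotone Armijo rule; the paper analyses different nonmonotone forms, and the test problem, memory length, and tolerances here are invented.

```python
import numpy as np

def ss_bfgs_nonmonotone(f, grad, x0, iters=200, M=5):
    """Self-scaling BFGS with a nonmonotone Armijo line search (sketch)."""
    n = len(x0)
    H = np.eye(n)                       # inverse-Hessian approximation
    x = x0.astype(float)
    g = grad(x)
    hist = [f(x)]                       # recent f-values for the nonmonotone test
    for _ in range(iters):
        d = -H @ g
        fref = max(hist[-M:])           # compare against the worst recent value
        gd = g @ d
        alpha = 1.0
        while f(x + alpha * d) > fref + 1e-4 * alpha * gd and alpha > 1e-12:
            alpha *= 0.5
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                  # curvature condition holds
            H *= sy / (y @ H @ y)       # self-scaling before the update
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse update
        x, g = x_new, g_new
        hist.append(f(x))
    return x

# Invented quadratic test; minimizer is A^{-1} b
A = np.diag([1.0, 4.0, 9.0])
b = np.ones(3)
f = lambda z: 0.5 * z @ A @ z - b @ z
grad = lambda z: A @ z - b
x = ss_bfgs_nonmonotone(f, grad, np.zeros(3))
```

The scaling keeps the eigenvalues of H commensurate with the local curvature along s, which is exactly the large-eigenvalue pathology the abstract describes.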

13.
The Hestenes–Stiefel (HS) method is an efficient method for solving large-scale unconstrained optimization problems. In this paper, we extend the HS method to solve constrained nonlinear equations, and propose a modified HS projection method, which combines the modified HS method proposed by Zhang et al. with the projection method developed by Solodov and Svaiter. Under some mild assumptions, we show that the new method is globally convergent with an Armijo line search. Moreover, the R-linear convergence rate of the new method is established. Some preliminary numerical results show that the new method is efficient even for large-scale constrained nonlinear equations.

14.
A parallel Uzawa-type algorithm for solving unconstrained minimization of large-scale partially separable functions is presented. Using auxiliary unknowns, the unconstrained minimization problem is transformed into a (linearly) constrained minimization of a separable function. The augmented Lagrangian of this problem decomposes into a sum of partially separable augmented Lagrangian functions. To take advantage of this property, a Uzawa block relaxation is applied: in every iteration, unconstrained minimization subproblems are solved in parallel before the Lagrange multipliers are updated. Numerical experiments show that the speed-up factor gained using our algorithm is significant.
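The Uzawa block relaxation can be illustrated on a toy splitting with one auxiliary copy of the variable: minimize (x-1)^2 + (z-3)^2 subject to x = z (solution x = z = 2). Each block minimization of the augmented Lagrangian is carried out with the other block fixed (in closed form for this quadratic), followed by a multiplier update. The test problem, penalty parameter, and iteration count are invented for the sketch.

```python
def uzawa_block_relaxation(iters=200, mu=1.0):
    """Uzawa block relaxation on  min (x-1)^2 + (z-3)^2  s.t. x = z.
    Augmented Lagrangian:
        L = (x-1)^2 + (z-3)^2 + lam*(x - z) + (mu/2)*(x - z)^2."""
    x = z = lam = 0.0
    for _ in range(iters):
        # block minimizations (closed form; independent given the other block)
        x = (2.0 + mu * z - lam) / (2.0 + mu)
        z = (6.0 + mu * x + lam) / (2.0 + mu)
        lam += mu * (x - z)            # Uzawa multiplier update
    return x, z, lam

x, z, lam = uzawa_block_relaxation()
print(round(x, 3), round(z, 3), round(lam, 3))
```

In the partially separable setting of the abstract there are many such blocks, and all block minimizations within one sweep can run on different processors before the single multiplier update.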

15.
Building on the artificial-physics optimization algorithm APO, the idea of RMOAPO, a rank-based unconstrained multiobjective algorithm, is introduced into constrained multiobjective optimization, and a hybrid artificial-physics constrained multiobjective conjugate gradient algorithm, CGRMOAPA, is proposed. The algorithm adopts the exterior-point penalty function method to handle constraints and, borrowing the idea of the aggregate function method, transforms the constrained multiobjective optimization problem into a single-objective unconstrained problem, which is finally solved by the conjugate gradient method. Experimental comparisons with CRMOAPO, MOGA, and NSGA-II show that CGRMOAPA achieves good distribution performance and provides a new approach to solving constrained multiobjective optimization problems.

16.
An Interval Maximum-Entropy Method for a Class of Constrained Nondifferentiable Optimization Problems
This paper studies interval algorithms for discrete minimax problems with inequality constraints, where the objective and constraint functions are of class C^1. Using the penalty function method and the idea of the maximum-entropy function, the problem is transformed into an unconstrained differentiable optimization problem. The interval extension of the maximum-entropy function is discussed, convergence and other properties are proved, and a deletion principle for regions containing no solution is proposed. An interval maximum-entropy algorithm is established and numerical examples are given. The algorithm is convergent, reliable, and effective.
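The maximum-entropy (aggregate) function at the heart of such methods smooths the max of finitely many functions: F_p(x) = (1/p) ln Σ_i exp(p f_i(x)), which satisfies max_i f_i(x) ≤ F_p(x) ≤ max_i f_i(x) + ln(m)/p for m functions, so the approximation tightens as p grows. A small numerical check of that bound (the component functions and the values of p are invented; the paper's interval extensions are not reproduced):

```python
import math

def max_entropy(fs, x, p):
    """Maximum-entropy smoothing of max_i f_i(x):
    F_p(x) = (1/p) * ln(sum_i exp(p * f_i(x)))."""
    vals = [f(x) for f in fs]
    vmax = max(vals)                   # shift for numerical stability
    return vmax + math.log(sum(math.exp(p * (v - vmax)) for v in vals)) / p

fs = [lambda t: t, lambda t: 1.0 - t, lambda t: 0.5 * t]
x0 = 0.7
exact = max(f(x0) for f in fs)
for p in (10.0, 100.0, 1000.0):
    print(round(max_entropy(fs, x0, p) - exact, 6))   # gap shrinks like ln(3)/p
```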

17.
Two dual methods for solving constrained optimization problems are presented: the Dual Active Set algorithm and an algorithm combining an unconstrained minimization scheme, an augmented Lagrangian and multiplier updates. A new preconditioner is introduced that has a significant impact on the speed of convergence. This research was supported by US Army Research Office Contract DAAL03-89-G-0082 and by National Science Foundation Grant DMS-9022899.

18.
We discuss methods for solving the unconstrained optimization problem on parallel computers, when the number of variables is sufficiently small that quasi-Newton methods can be used. We concentrate mainly, but not exclusively, on problems where function evaluation is expensive. First we discuss ways to parallelize both the function evaluation costs and the linear algebra calculations in the standard sequential secant method, the BFGS method. Then we discuss new methods that are appropriate when there are enough processors to evaluate the function, gradient, and part but not all of the Hessian at each iteration. We develop new algorithms that utilize this information and analyze their convergence properties. We present computational experiments showing that they are superior to parallelizing either the BFGS method or Newton's method under our assumptions on the number of processors and the cost of function evaluation. Finally, we discuss ways to effectively utilize the gradient values at unsuccessful trial points that are available in our parallel methods and also in some sequential software packages. Research supported by AFOSR grant AFOSR-85-0251, ARO contract DAAG 29-84-K-0140, NSF grants DCR-8403483 and CCR-8702403, and NSF cooperative agreement DCR-8420944.
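The simplest instance of parallelizing the function-evaluation cost is to distribute the n+1 evaluations of a forward-difference gradient across workers. The sketch below does this with a Python thread pool, purely as an illustration (threads do not accelerate pure-Python arithmetic, and the paper's methods also evaluate parts of the Hessian and parallelize the linear algebra); the test function and step size are invented.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def parallel_fd_gradient(f, x, h=1e-6, workers=4):
    """Forward-difference gradient with the n+1 function evaluations
    distributed across a pool of workers (one evaluation per task)."""
    n = len(x)
    points = [x] + [x + h * np.eye(n)[i] for i in range(n)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        vals = list(ex.map(f, points))   # map preserves argument order
    f0 = vals[0]
    return np.array([(vals[i + 1] - f0) / h for i in range(n)])

# Invented test: f(x) = ||x||^2 has gradient 2x
g = parallel_fd_gradient(lambda z: z @ z, np.array([1.0, 2.0, 3.0]))
print(np.round(g, 3))
```

With an expensive f, the same pattern applied with process- or machine-level workers gives near-linear speed-up in the gradient evaluation, which is exactly the regime the abstract targets.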

19.
In this paper, a new descent algorithm for solving unconstrained optimization problems is presented. Its search direction is a descent direction, and the line search procedure can be avoided except at the first iteration. The algorithm is globally convergent under mild conditions. The search direction of the new algorithm is then generalized, and convergence of the corresponding algorithm is also proved. Numerical results show that the algorithm is efficient on the given test problems.

20.
Infinite-dimensional optimization problems occur in various applications such as optimal control problems and parameter identification problems. If these problems are solved numerically the methods require a discretization which can be viewed as a perturbation of the data of the optimization problem. In this case the expected convergence behavior of the numerical method used to solve the problem does not only depend on the discretized problem but also on the original one. Algorithms which are analyzed include the gradient projection method, conditional gradient method, Newton's method and quasi-Newton methods for unconstrained and constrained problems with simple constraints.
