Similar References
20 similar references found (search time: 0 ms)
1.
Nonlinear optimization algorithms are rarely discussed from a complexity point of view. Even the concept of solving nonlinear problems on digital computers is not well defined. The focus here is on a complexity approach for designing and analyzing algorithms for nonlinear optimization problems providing optimal solutions with prespecified accuracy in the solution space. We delineate the complexity status of convex problems over network constraints, dual of flow constraints, dual of multi-commodity, constraints defined by a submodular rank function (a generalized allocation problem), tree networks, diagonally dominant matrices, and nonlinear knapsack problem's constraint. All these problems, except for the latter in integers, have polynomial time algorithms which may be viewed within a unifying framework of a proximity-scaling technique or a threshold technique. The complexity of many of these algorithms is furthermore best possible in that it matches lower bounds on the complexity of the respective problems. In general, nonseparable optimization problems are shown to be considerably more difficult than separable problems. We compare the complexity of continuous versus discrete nonlinear problems and list some major open problems in the area of nonlinear optimization. MSC classification: 90C30, 68Q25

2.
We consider the problem of finding the nearest point (by Euclidean distance) in a simplicial cone to a given point, and develop an exterior penalty algorithm for it. Each iteration in the algorithm consists of a single Newton step following a reduction in the value of the penalty parameter. Proofs of convergence of the algorithm are given. Various other versions of exterior penalty algorithms for nearest point problems in nonsimplicial polyhedral cones and for convex quadratic programs, all based on a single descent step following a reduction in the value of the penalty parameter per iteration, are discussed. The performance of these algorithms in large scale computational experiments is very encouraging. It shows that the number of iterations grows very slowly, if at all, with the dimension of the problem. Partially supported by NSF Grant No. ECS-8521183, and by the two universities.
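To make the scheme concrete, here is a minimal numerical sketch of the idea described above, assuming the simplicial cone is given as {Bz : z ≥ 0} for a nonsingular matrix B and using a quadratic exterior penalty on z ≥ 0. The function name, the penalty form, and the undamped Newton step are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def nearest_point_in_cone(B, y, mu=1.0, shrink=0.1, iters=30):
    """Nearest point to y in the simplicial cone {B z : z >= 0} (illustrative sketch).

    Exterior quadratic penalty on z >= 0; each outer iteration takes a single
    Newton step and then shrinks the penalty parameter mu, loosely following
    the one-step-per-parameter-value scheme described in the abstract above.
    """
    z = np.linalg.solve(B, y)                      # unconstrained minimizer as a start
    for _ in range(iters):
        neg = np.minimum(z, 0.0)                   # negative parts violate z >= 0
        grad = 2.0 * B.T @ (B @ z - y) + (2.0 / mu) * neg
        hess = 2.0 * B.T @ B + (2.0 / mu) * np.diag((z < 0).astype(float))
        z = z - np.linalg.solve(hess, grad)        # one Newton step per penalty value
        mu *= shrink                               # tighten the penalty
    z = np.maximum(z, 0.0)                         # clean up tiny residual violations
    return B @ z

# Toy usage: cone generated by the identity, i.e. the nonnegative orthant.
B = np.eye(2)
print(nearest_point_in_cone(B, np.array([-1.0, 2.0])))   # approximately [0., 2.]
```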

3.
A family of interior point algorithms is considered. These algorithms can be used for solving mathematical programming problems with nonlinear inequality constraints. Weighted Euclidean norms, which vary from iteration to iteration, are used to find a descent direction that improves the solution. A theoretical justification of the algorithms under some assumptions (including nonsingularity of the problem) is presented.

4.
In this paper, we introduce an iterative algorithm for finding a common element of the set of solutions of a mixed equilibrium problem, the set of fixed points of a nonexpansive mapping, and the set of solutions of a variational inclusion in a real Hilbert space. Furthermore, we prove that the proposed iterative algorithm converges strongly to a common element of the above three sets, which is a solution of a certain optimization problem related to a strongly positive bounded linear operator.

5.
We present the main results in the author’s Ph.D. thesis (Iori 2004), defended at the University of Bologna in April 2004 and supervised by S. Martello. The thesis is written in English and is available from the author upon request. It proposes exact and metaheuristic algorithms for solving some relevant combinatorial optimization problems, with particular emphasis on scheduling, two-dimensional cutting and packing, and capacitated vehicle routing. The performance of each algorithm is tested through extensive computational experiments and comparison with other approaches in the literature. Received: 21 September 2004. AMS classification: 90-08, 90C27, 90C59

6.
In this paper, we consider a class of nonlinear minimum-maximum optimization problems subject to boundedness constraints on the decision vectors. Three algorithms are developed for finding the min-max point using the concept of solving an associated dynamical system. In the first and third algorithms, solutions are obtained by solving systems of differential equations. The second algorithm is a discrete version of the first algorithm. The trajectories generated by the first and second algorithms may move inside or on the boundary of the constraint set, while the third algorithm ensures that any trajectory that begins inside the constraint region remains in its interior. Sufficient conditions for global convergence of the two algorithms are also established. For illustration, four numerical examples are solved. This work was partially supported by a research grant from the Australian Research Committee.
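As a rough illustration of the discrete-time variant described above (the "second algorithm"), the toy sketch below performs an Euler-type descent/ascent step clipped to box constraints. The objective, gradients, step size, and clipping rule are assumptions chosen for illustration and are not the paper's construction.

```python
import numpy as np

def minmax_descent_ascent(grad_x, grad_y, x0, y0, box_x, box_y, step=0.01, iters=5000):
    """Euler-type descent/ascent iteration clipped to box constraints (toy sketch):
    a discrete analogue of the dynamical system x' = -grad_x f, y' = +grad_y f."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(iters):
        x = np.clip(x - step * grad_x(x, y), *box_x)   # descend in the minimizing block
        y = np.clip(y + step * grad_y(x, y), *box_y)   # ascend in the maximizing block
    return x, y

# Toy usage: f(x, y) = x**2 - y**2 + x*y on the box [-1, 1] x [-1, 1]; saddle at (0, 0).
gx = lambda x, y: 2 * x + y          # partial derivative of f with respect to x
gy = lambda x, y: x - 2 * y          # partial derivative of f with respect to y
print(minmax_descent_ascent(gx, gy, [0.8], [0.8], (-1.0, 1.0), (-1.0, 1.0)))
```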

7.
We exhibit linear problems for which every linear algorithm has infinite error, and show a (mildly) nonlinear algorithm with finite error. The error of this nonlinear algorithm can be arbitrarily small if appropriate information is used. We illustrate these examples by the inversion of a finite Laplace transform, a problem arising in remote sensing.

8.
9.
Nonlinear optimization algorithms are rarely discussed from a complexity point of view. Even the concept of solving nonlinear problems on digital computers is not well defined. The focus here is on a complexity approach for designing and analyzing algorithms for nonlinear optimization problems providing optimal solutions with prespecified accuracy in the solution space. We delineate the complexity status of convex problems over network constraints, dual of flow constraints, dual of multi-commodity, constraints defined by a submodular rank function (a generalized allocation problem), tree networks, diagonally dominant matrices, and nonlinear knapsack problem’s constraint. All these problems, except for the latter in integers, have polynomial time algorithms which may be viewed within a unifying framework of a proximity-scaling technique or a threshold technique. The complexity of many of these algorithms is furthermore best possible in that it matches lower bounds on the complexity of the respective problems. In general, nonseparable optimization problems are shown to be considerably more difficult than separable problems. We compare the complexity of continuous versus discrete nonlinear problems and list some major open problems in the area of nonlinear optimization. An earlier version of this paper appeared in 4OR, 3:3, 171–216, 2005.

10.
Given a non-convex two-dimensional area and identical rectangular stands, we consider the problem of placing the maximum number of stands in the area, by satisfying a number of operational constraints. We present linear programming models and show the total unimodularity of the matrices associated with their constraint sets. We then give computational results obtained by applying the models to the real-world case of the Beira Mar handcraft fair of Fortaleza (Brazil).

11.
This paper deals with new variable-metric algorithms for nonsmooth optimization problems, the so-called adaptive algorithms. The essence of these algorithms is that there are two simultaneously working gradient algorithms: the first is in the main space and the second is in the space of the matrices that modify the main variables. The convergence of these algorithms is proved for different cases. The results of numerical experiments are also given.

12.
Infinite-dimensional optimization problems occur in various applications, such as optimal control problems and parameter identification problems. If these problems are solved numerically, the methods require a discretization, which can be viewed as a perturbation of the data of the optimization problem. In this case, the expected convergence behavior of the numerical method used to solve the problem depends not only on the discretized problem but also on the original one. The algorithms analyzed include the gradient projection method, the conditional gradient method, Newton's method, and quasi-Newton methods for unconstrained and constrained problems with simple constraints.

13.
In this paper, we introduce and analyze Uzawa algorithms for non-symmetric saddle point systems. Convergence of the algorithms is established based on new spectral results about Schur complements. A new Uzawa-type algorithm with optimal relaxation parameters at each new iteration is introduced and analyzed in a general framework. Numerical results supporting the efficiency of the algorithms are presented for finite element discretizations of steady state Navier-Stokes equations.

14.
In this paper, we consider iterative algorithms of Uzawa type for solving linear nonsymmetric saddle point problems. Specifically, we consider systems, written as usual in block form, where the upper left block is an invertible linear operator with positive definite symmetric part. Such saddle point problems arise, for example, in certain finite element and finite difference discretizations of Navier-Stokes equations, Oseen equations, and mixed finite element discretization of second order convection-diffusion problems. We consider two algorithms, each of which utilizes a preconditioner for the operator in the upper left block. Convergence results for the algorithms are established in appropriate norms. The convergence of one of the algorithms is shown assuming only that the preconditioner is spectrally equivalent to the inverse of the symmetric part of the operator. The other algorithm is shown to converge provided that the preconditioner is a sufficiently accurate approximation of the inverse of the upper left block. Applications to the solution of steady-state Navier-Stokes equations are discussed, and, finally, the results of numerical experiments involving the algorithms are presented.
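For orientation, a basic Uzawa iteration for a block system of the kind described above might look as follows. The exact solve with the (1,1) block stands in for the preconditioned inner step analyzed in the paper, and the fixed relaxation parameter omega, the tolerance, and the toy data are illustrative assumptions.

```python
import numpy as np

def uzawa(A, B, f, g, omega=0.5, tol=1e-10, max_iter=500):
    """Basic Uzawa iteration for the block system [[A, B^T], [B, 0]] [u; p] = [f; g].

    Assumes A is invertible (e.g. its symmetric part is positive definite);
    the exact solve with A plays the role of the preconditioned inner step."""
    u = np.zeros(A.shape[0])
    p = np.zeros(B.shape[0])
    for _ in range(max_iter):
        u = np.linalg.solve(A, f - B.T @ p)   # inner solve with the (1,1) block
        r = B @ u - g                         # residual of the constraint equation
        p = p + omega * r                     # relaxed Uzawa update of the multiplier
        if np.linalg.norm(r) < tol:
            break
    return u, p

# Toy usage: a small nonsymmetric A whose symmetric part is positive definite.
A = np.array([[3.0, 1.0], [-1.0, 2.0]])
B = np.array([[1.0, 1.0]])
u, p = uzawa(A, B, f=np.array([1.0, 0.0]), g=np.array([0.0]))
```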


15.
New versions and extensions of Benson’s outer approximation algorithm for solving linear vector optimization problems are presented. Primal and dual variants are provided in which only one scalar linear program has to be solved in each iteration rather than two or three as in previous versions. Extensions are given to problems with arbitrary pointed solid polyhedral ordering cones. Numerical examples are provided, one of them involving a new set-valued risk measure for multivariate positions.

16.
Florian A. Potra, PAMM, 2007, 7(1): 2010009–2010010
A short survey of interior point methods for linear complementarity problems with emphasis on algorithms that have both polynomial complexity and superlinear convergence is presented. Some recent results obtained by the author and his collaborators are briefly summarized and several directions of future research are proposed. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

17.
Let C be a nonempty closed convex subset of a real Hilbert space H. Let Q: C → C be a fixed contraction and S, T: C → C be two nonexpansive mappings such that Fix(T) ≠ ∅. Consider the following two-step iterative algorithm: $$\begin{aligned}x_{n+1}&=\alpha_{n}Qx_{n}+(1-\alpha_{n})Ty_{n},\\ y_{n}&=\beta_{n}Sx_{n}+(1-\beta_{n})x_{n},\quad n\geq 0.\end{aligned}$$ It is proven that, under appropriate conditions, the iterative sequence $\{x_{n}\}$ converges strongly to $\tilde{x}\in \mathrm{Fix}(T)$, which solves a variational inequality depending on the given criterion S, namely: find $\tilde{x}\in H$ such that $0\in (I-S)\tilde{x}+N_{\mathrm{Fix}(T)}\tilde{x}$, where $N_{\mathrm{Fix}(T)}$ denotes the normal cone to the set of fixed points of T.
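The recursion quoted above is straightforward to prototype. The sketch below specializes H to R^d and treats Q, S, T and the parameter sequences as user-supplied callables; the concrete maps and parameter choices in the toy usage are assumptions for illustration and do not reflect the paper's conditions for strong convergence.

```python
import numpy as np

def two_step_iteration(Q, S, T, x0, alpha, beta, n_iter=200):
    """Prototype of the two-step scheme quoted above, specialized to H = R^d:

        y_n     = beta_n * S(x_n) + (1 - beta_n) * x_n
        x_{n+1} = alpha_n * Q(x_n) + (1 - alpha_n) * T(y_n)
    """
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        y = beta(n) * S(x) + (1.0 - beta(n)) * x
        x = alpha(n) * Q(x) + (1.0 - alpha(n)) * T(y)
    return x

# Toy usage in R^2: Q is a contraction toward the origin, S is the identity,
# and T is the (nonexpansive) projection onto the closed unit ball.
proj_ball = lambda v: v / max(1.0, np.linalg.norm(v))
x_limit = two_step_iteration(Q=lambda v: 0.5 * v, S=lambda v: v, T=proj_ball,
                             x0=[3.0, 4.0],
                             alpha=lambda n: 1.0 / (n + 2), beta=lambda n: 0.5)
print(x_limit)   # a point in Fix(T), i.e. inside the closed unit ball
```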

18.
A unified treatment is given for partially and totally asynchronous parallel successive overrelaxation (SOR) algorithms for the linear complementarity problem. Convergence conditions are established and compared to previous results. Convergence of the partially asynchronous method for the symmetric linear complementarity problem can be guaranteed if the relaxation factor is sufficiently small. Unlike previous results, this relaxation factor interval does not depend explicitly on problem size. This material is based on research supported by the Air Force Office of Scientific Research Grant No. AFOSR-89-0410. The author wishes to thank the referee for pointing out how to improve the bound (12). The same technique can be used to reduce the factor n in Ref. 5, p. 553, to .
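As background for the asynchronous schemes above, the synchronous (sequential) projected SOR iteration for the symmetric linear complementarity problem can be sketched as follows. The fixed relaxation factor, iteration count, and toy data are assumptions, and no asynchrony or parallelism is modeled here.

```python
import numpy as np

def projected_sor_lcp(M, q, omega=0.9, iters=200):
    """Sequential projected SOR sketch for the symmetric LCP:
       find z >= 0 with M z + q >= 0 and z^T (M z + q) = 0."""
    z = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            r = M[i] @ z + q[i]                          # row residual with latest values
            z[i] = max(0.0, z[i] - omega * r / M[i, i])  # relaxed step, projected onto z_i >= 0
    return z

# Toy usage: M symmetric positive definite, so the LCP has a unique solution.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
print(projected_sor_lcp(M, q))   # approximately [1/11, 7/11]
```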

19.
Numerical Algorithms - The purpose of this paper is to study and analyze two different kinds of extragradient-viscosity-type iterative methods for finding a common element of the set of solutions...

20.
The subgradient extragradient method can be considered an improvement of the extragradient method for variational inequality problems for the class of monotone and Lipschitz continuous mappings. In this paper, we propose two new algorithms, obtained as combinations of the subgradient extragradient method and a Mann-like method, for finding a common element of the solution set of a variational inequality and the fixed point set of a demicontractive mapping.
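For reference, a plain subgradient extragradient step (without the Mann-like averaging proposed in the paper) can be sketched as below. The fixed step size, the half-space projection, and the toy monotone affine problem are stated assumptions, not the paper's algorithms.

```python
import numpy as np

def subgradient_extragradient(F, proj_C, x0, lam=0.1, iters=500):
    """Plain subgradient extragradient sketch for a monotone, Lipschitz VI(F, C) in R^d.

    The second projection of the classical extragradient method is replaced by a
    cheap projection onto the half-space
        T_k = {w : <x_k - lam*F(x_k) - y_k, w - y_k> <= 0}."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj_C(x - lam * F(x))       # first (possibly expensive) projection onto C
        v = x - lam * F(x) - y           # normal vector of the half-space T_k
        z = x - lam * F(y)
        t = np.dot(v, z - y)
        if t > 0.0:                      # project z onto T_k only if it lies outside
            z = z - (t / np.dot(v, v)) * v
        x = z
    return x

# Toy usage: F(x) = A x + b with monotone A (positive semidefinite symmetric part),
# and C the nonnegative orthant.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([-1.0, -1.0])
x_star = subgradient_extragradient(lambda x: A @ x + b, lambda x: np.maximum(x, 0.0),
                                   x0=[1.0, 1.0])
print(x_star)   # approximately [0.2, 0.6]
```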
