Similar Documents
20 similar documents found (search time: 31 ms)
1.
We use normal directions of the outcome set to develop a method of outer approximation for solving generalized convex multiobjective programming problems. We prove the convergence of the method and report some computational experiments. As an application, we obtain an algorithm to solve an associated multiplicative problem over a convex constraint set.

2.
The problem of optimal response [1, 2] with nonsmooth (generally speaking, nonfunctional) constraints imposed on the state variables is considered. This problem illustrates a method of proving necessary conditions of optimality in optimal control problems with phase constraints, based on a constructive approximation of the initial constrained problem by a sequence of optimal control problems with constraint-free state variables. The variational analysis of the approximating problems is carried out by a purely algebraic method involving formulas for the incremental growth of a functional [3, 4]; the theorems on separability of convex sets are not used. Using a passage to the limit, the convergence of the approximating problems to the initial constrained problem is proved, and, under general assumptions, necessary conditions of optimality resembling the Pontryagin maximum principle [1] are derived for generalized solutions of the initial problem. In the case of nonsmooth (nonfunctional) constraints, the transversality conditions are expressed through a novel concept of a cone conjugate to an arbitrary closed set of a finite-dimensional space; this concept generalizes the usual notions of the normal and the normal cone for smooth and convex manifolds.

3.
A descent method is given for minimizing a nondifferentiable function which can be locally approximated by pointwise minima of convex functions. At each iterate the algorithm finds several directions by solving several linear or quadratic programming subproblems. These directions are then used in an Armijo-like search for the next iterate. A feasible direction extension to inequality constrained minimization problems is also presented. The algorithms converge to points satisfying necessary optimality conditions which are sharper than the ones involved in convergence results for algorithms based on the Clarke subdifferential. This research was sponsored by Project 02.15.

4.
The affine rank minimization problem is to minimize the rank of a matrix under linear constraints. It has many applications in areas such as statistics, control, system identification, and machine learning. Unlike the literature, which uses the nuclear norm or the general Schatten \(q\) \((0<q<1)\) quasi-norm to approximate the rank of a matrix, in this paper we use the Schatten \(1/2\) quasi-norm approximation, which is a better approximation than the nuclear norm but leads to a nonconvex, nonsmooth, and non-Lipschitz optimization problem. Importantly, we give a global necessary optimality condition for the \(S_{1/2}\) regularization problem by virtue of the special objective function; this is very different from the local optimality conditions usually used for the general \(S_q\) regularization problems. Explicitly, the global necessary optimality condition for the \(S_{1/2}\) regularization problem is a fixed point inclusion associated with the singular value half thresholding operator. Naturally, we propose a fixed point iterative scheme for the problem and provide a convergence analysis of this iteration. By discussing the location and setting of the optimal regularization parameter, and by using an approximate singular value decomposition procedure, we obtain a very efficient algorithm for the \(S_{1/2}\) regularization problem: the half norm fixed point algorithm with an approximate SVD (HFPA algorithm). Numerical experiments on randomly generated and real matrix completion problems demonstrate the effectiveness of the proposed algorithm.
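A minimal sketch of the fixed-point idea behind such a scheme, assuming the closed-form scalar half-thresholding operator of Xu et al. (2012) and a matrix-completion sampling mask; the names `half_threshold` and `hfpa_step`, the step size `mu`, and the safeguards are illustrative choices, not taken from the paper:

```python
import numpy as np

def half_threshold(s, lam):
    """Scalar half-thresholding applied entrywise to a 1-D array s of singular
    values. The closed form below is the one attributed to Xu et al. (2012);
    it is reproduced here as an assumption, not quoted from the paper above."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    big = np.abs(s) > thresh
    phi = np.arccos((lam / 8.0) * (np.abs(s[big]) / 3.0) ** -1.5)
    out[big] = (2.0 / 3.0) * s[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def hfpa_step(M, mask, X_obs, lam, mu=1.0):
    """One fixed-point step for S_1/2-regularized matrix completion: a gradient
    step on the observed entries, then singular value half-thresholding."""
    G = M + mu * mask * (X_obs - M)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ np.diag(half_threshold(s, lam * mu)) @ Vt
```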

5.
This article presents for the first time an algorithm specifically designed for globally minimizing a finite, convex function over the weakly efficient set of a multiple objective nonlinear programming problem (V1) that has both nonlinear objective functions and a convex, nonpolyhedral feasible region. The algorithm uses a branch and bound search in the outcome space of problem (V1), rather than in the decision space of the problem, to find a global optimal solution. Since the dimension of the outcome space is usually much smaller than the dimension of the decision space, often by one or more orders of magnitude, this approach can be expected to considerably shorten the search. In addition, the algorithm can be easily modified to obtain an approximate global optimal weakly efficient solution after a finite number of iterations. Furthermore, all of the subproblems that the algorithm must solve can be easily solved, since they are all convex programming problems. The key, and sometimes quite interesting, convergence properties of the algorithm are proven, and an example problem is solved.

6.
We consider the problem of minimizing a nondifferentiable function that is the pointwise maximum over a compact family of continuously differentiable functions. We suppose that a certain convex approximation to the objective function can be evaluated. An iterative method is given which uses as successive search directions approximate solutions of semi-infinite quadratic programming problems calculated via a new generalized proximity algorithm. Inexact line searches ensure global convergence of the method to stationary points. This work was supported by Project No. CPBP-02.15/2.1.1.

7.
A nonconvex optimal control problem is examined for a system that is linear with respect to the state and has a terminal objective functional representable as the difference of two convex functions. A new local search method is proposed, and its convergence is proved. A strategy is also developed for finding a globally optimal control process, because the Pontryagin and Bellman principles, as applied to this problem, do not distinguish between locally and globally optimal processes. The convergence of this strategy under appropriate conditions is proved.

8.
We develop an affine-scaling algorithm for box-constrained optimization which has the property that each iterate is a scaled cyclic Barzilai–Borwein (CBB) gradient iterate that lies in the interior of the feasible set. Global convergence is established for a nonmonotone line search, while there is local R-linear convergence at a nondegenerate local minimizer where the second-order sufficient optimality conditions are satisfied. Numerical experiments show that the convergence speed is insensitive to problem conditioning. The algorithm is particularly well suited for image restoration problems which arise in positron emission tomography where the cost function can be infinite on the boundary of the feasible set. This material is based upon work supported by the National Science Foundation under Grants 0203270, 0619080, and 0620286.
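A simplified, hedged sketch of the cyclic Barzilai–Borwein idea, with a plain box projection standing in for the paper's affine scaling of interior iterates; `projected_cbb`, the cycle length, and the curvature safeguard are our own illustrative choices:

```python
import numpy as np

def projected_cbb(f_grad, x, lo, hi, cycle=4, iters=100):
    """Cyclic Barzilai-Borwein gradient sketch with projection onto the box
    [lo, hi]. The BB steplength is recomputed only once per cycle and reused
    in between; the actual CBB/affine-scaling method keeps iterates strictly
    interior instead of projecting, which is not reproduced here."""
    g = f_grad(x)
    alpha = 1.0
    for k in range(iters):
        x_new = np.clip(x - alpha * g, lo, hi)   # gradient step + box projection
        g_new = f_grad(x_new)
        if k % cycle == 0:                       # refresh the BB steplength
            s, y = x_new - x, g_new - g
            sy = s @ y
            alpha = (s @ s) / sy if sy > 1e-12 else 1.0
        x, g = x_new, g_new
    return x
```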

9.
We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover, the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space, a maximum principle of local form is derived, specifying the necessary conditions for optimality; these are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

10.
Control constraints, being nondifferentiable in the space of control functions, make it difficult to cope with terminal equality constraints in optimal control problems; gradient-projection algorithms, for example, cannot be employed easily. These difficulties are overcome in this paper by employing an exact penalty function to handle the cost and terminal equality constraints, while the control constraints define the space of permissible search directions in the search-direction subalgorithm. The search-direction subalgorithm is therefore more complex than the usual linear program employed in feasible-directions algorithms: it approximately solves a convex optimal control problem to determine the search direction, and in the implementable version of the algorithm the accuracy of the approximation is automatically increased to ensure convergence. This work was supported by the United Kingdom Science Research Council, by the US Army Research Office, Contract No. DAAG-29-73-C-0025, and by the National Science Foundation, Grant No. ENG-73-08214-A01.

11.
Many important classes of decision models give rise to the problem of finding a global maximum of a convex function over a convex set. This problem is also known as concave minimization, concave programming, or convex maximization. Such problems can have many local maxima, so finding the global maximum is computationally difficult and standard nonlinear programming procedures fail. In this article, we provide a very simple and practical approach to finding the global solution of quadratic convex maximization problems over a polytope. A convex function achieves its global maximum at extreme points of the feasible domain; since an inscribed ball contains no extreme points, we use the largest inscribed ball for an inner approximation of the domain, while a minimal enclosing box is exploited for an outer approximation. The approach combines these approximations with the standard local search algorithm and cutting plane techniques.
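As a toy illustration of why extreme points matter here, the sketch below maximizes a convex quadratic over a box by enumerating its vertices (tractable only in small dimensions); the inscribed-ball, enclosing-box, and cutting-plane machinery of the actual method is not reproduced:

```python
import itertools
import numpy as np

def max_convex_quadratic_over_box(Q, c, lo, hi):
    """A convex function attains its global maximum at an extreme point of the
    feasible set, so over a box it suffices to check the 2^n corners.
    Illustration only: exponential in the dimension n."""
    best_x, best_val = None, -np.inf
    for corner in itertools.product(*zip(lo, hi)):  # all vertices of the box
        x = np.array(corner, dtype=float)
        val = 0.5 * x @ Q @ x + c @ x
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val
```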

12.
Strong convergence theorems of viscosity approximation methods for nonexpansive mappings have been studied, and the CQ algorithm for solving the split feasibility problem (SFP) is known to converge only weakly. In this paper, we use viscosity approximation methods and related tools to solve a class of generalized SFPs with monotone variational inequalities in Hilbert space. We propose iterative algorithms based on viscosity approximation methods and establish strong convergence theorems. As applications, the proposed algorithms can be used to solve split variational inequality problems (SVIP), split constrained convex minimization problems, and related problems in Hilbert space.
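For orientation, here is a minimal sketch of the classical CQ iteration the abstract refers to, x ← P_C(x − γ A^T (Ax − P_Q(Ax))); the projections `proj_C` and `proj_Q` are assumed to be supplied by the caller, and the viscosity modification that yields strong convergence is not shown:

```python
import numpy as np

def cq_iteration(A, proj_C, proj_Q, x0, iters=500):
    """Byrne's CQ algorithm for the split feasibility problem: find x in C with
    Ax in Q. This is the weakly convergent baseline; the viscosity variants
    add a contraction term to force strong convergence."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size in (0, 2/||A||^2)
    x = x0
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x
```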

13.
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method is globally and superlinearly convergent when the objective function is convex (or even uniformly convex). We propose to solve unconstrained nonconvex optimization problems by a self-scaling BFGS algorithm with a nonmonotone line search, an approach recognized in numerical practice as competitive for solving large-scale nonlinear problems. We consider two different nonmonotone line search forms and study the global convergence of the resulting nonmonotone self-scaling BFGS algorithms. We prove that, under a condition weaker than that in the literature, both forms of the self-scaling BFGS algorithm are globally convergent for unconstrained nonconvex optimization problems.
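A hedged sketch of one self-scaling inverse-BFGS update in the Oren–Luenberger style, the standard form of the scaling the abstract describes; the curvature safeguard and the function name are ours, and the paper's two nonmonotone line search forms are not reproduced:

```python
import numpy as np

def self_scaling_bfgs_update(H, s, y):
    """Oren-Luenberger self-scaling BFGS update of the inverse Hessian
    approximation H: rescale H by tau = (s'y)/(y'Hy) before the usual BFGS
    update, which damps overly large eigenvalues."""
    sy = s @ y
    if sy <= 1e-12:                    # skip update if curvature condition fails
        return H
    rho = 1.0 / sy
    tau = sy / (y @ H @ y)             # self-scaling factor
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return tau * (V @ H @ V.T) + rho * np.outer(s, s)
```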

14.
Recently an affine scaling, interior point algorithm ASL was developed for box constrained optimization problems with a single linear constraint (Gonzalez-Lima et al., SIAM J. Optim. 21:361–390, 2011). This note extends the algorithm to handle more general polyhedral constraints. With a line search, the resulting algorithm ASP maintains the global and R-linear convergence properties of ASL. In addition, it is shown that the unit step version of the algorithm (without line search) is locally R-linearly convergent at a nondegenerate local minimizer where the second-order sufficient optimality conditions hold. For a quadratic objective function, a sublinear convergence property is obtained without assuming either nondegeneracy or the second-order sufficient optimality conditions.

15.
At each iteration, the algorithm determines a feasible descent direction by minimizing a linear or quadratic approximation to the cost on the feasible set. The algorithm is easy to implement if the approximation is easy to minimize on the feasible set, which happens in some important cases. Convergence rate information is obtained, which is sufficient to enable deduction of the number of iterations needed to achieve a specified reduction in the distance from the optimum (measured in terms of the cost). Existing convergence rates for algorithms for solving such convex problems are either asymptotic (and so do not enable the required number of iterations to be deduced) or decrease as the number of constraints increases. The convergence rate information obtained here, however, is independent of the number of constraints. For the case where the quadratic approximation to the cost is not strictly convex (which includes the linear approximation case), the diameter is the only property of the feasible set which affects the convergence rate information. If the quadratic approximation is strictly convex, the convergence rate is independent of the size and geometry of the feasible set. An application to a control-constrained optimal control problem is outlined.
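With the linear approximation, this is the classical conditional-gradient (Frank–Wolfe) scheme. Below is a minimal sketch on the probability simplex, where the linearized subproblem is solved by picking a vertex; the standard 2/(k+2) step size is assumed rather than the line search analyzed in the paper:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=200):
    """Conditional-gradient sketch for the linear-approximation case: at each
    iterate, minimize the linearized cost over the feasible set (here the
    probability simplex, whose minimizer is a vertex) and step toward it."""
    x = x0
    for k in range(iters):
        g = grad(x)
        v = np.zeros_like(x)
        v[np.argmin(g)] = 1.0          # vertex minimizing the linear model
        step = 2.0 / (k + 2.0)         # classical O(1/k) step size
        x = x + step * (v - x)
    return x
```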

16.
A method is proposed for estimating the rate of convergence of approximations to convex, control-constrained optimal-control problems. The method exploits conditions of optimality involving projections onto the set of admissible controls. General results are illustrated by examples of Galerkin-type approximations to optimal-control problems for parabolic systems.
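A minimal sketch of the projection form of the optimality condition such methods exploit, u* = P(u* − c ∇J(u*)), specialized to box-constrained controls; the residual of this fixed-point equation is one natural error measure for an approximate (e.g., Galerkin) control, though the paper's rate estimates themselves are not reproduced:

```python
import numpy as np

def projection_residual(u, grad_J, lo, hi, c=1.0):
    """Residual of the fixed-point optimality condition u = P(u - c*grad J(u)),
    where P is the projection onto the admissible set (here the box [lo, hi]).
    A zero residual characterizes optimality for a convex problem."""
    return np.linalg.norm(u - np.clip(u - c * grad_J(u), lo, hi))
```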

17.
In this paper, a finite branch-and-bound algorithm is developed for the minimization of a concave power law over a polytope; linear terms are also included in the objective function. Using the first-order necessary conditions of optimality, the optimization problem is transformed into an equivalent problem consisting of a linear objective function, a set of linear constraints, a set of convex constraints, and a set of bilinear complementarity constraints. The transformed problem is then solved using a finite branch-and-bound algorithm that solves two convex problems at each of its nodes. The method is illustrated by means of an example from the literature.

18.
A new nonmonotone algorithm is proposed and analyzed for unconstrained nonlinear optimization. The nonmonotone techniques in this algorithm are based on the estimate sequence proposed by Nesterov (Introductory Lectures on Convex Optimization: A Basic Course, 2004) for convex optimization. Under proper assumptions, global convergence of the algorithm is established for minimizing a general nonlinear objective function with Lipschitz continuous derivatives; for a convex objective function, the algorithm retains the optimal convergence rate of convex optimization. In numerical experiments, the algorithm is instantiated with safeguarded nonlinear conjugate gradient search directions. Numerical results show that the nonmonotone algorithm performs significantly better than the corresponding monotone algorithm on the unconstrained optimization problems in the CUTEr library (Bongartz et al. in ACM Trans. Math. Softw. 21:123–160, 1995).
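For contrast, the sketch below shows the familiar max-based nonmonotone Armijo rule of Grippo–Lampariello–Lucidi; note this is a stand-in, since the paper's nonmonotone test is built on Nesterov's estimate sequence rather than on a max over recent function values:

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, beta=0.5, sigma=1e-4, max_back=30):
    """Nonmonotone Armijo backtracking: accept a step if f decreases relative
    to the maximum of the recent function values in f_hist, rather than the
    current value. d is a descent direction, g the gradient at x."""
    f_ref = max(f_hist)                # reference value over the memory window
    t, gd = 1.0, g @ d
    for _ in range(max_back):
        if f(x + t * d) <= f_ref + sigma * t * gd:
            return t
        t *= beta
    return t
```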

19.
This paper presents an algorithm, named the adaptive projected subgradient method, that asymptotically minimizes a certain sequence of nonnegative convex functions over a closed convex set in a real Hilbert space. The proposed algorithm is a natural extension of Polyak's subgradient algorithm (for nonsmooth convex optimization with a fixed target value) to the case where the convex objective itself keeps changing throughout the process. The main theorem, showing the strong convergence of the algorithm as well as the asymptotic optimality of the sequence it generates, can serve as a unified guiding principle for a wide range of set-theoretic adaptive filtering schemes for nonstationary random processes. These include not only existing adaptive filtering techniques, e.g., NLMS, projected NLMS, constrained NLMS, APA, and the adaptive parallel outer projection algorithm, but also new techniques, e.g., the adaptive parallel min-max projection algorithm, as well as their embedded-constraint versions. Numerical examples show that the proposed techniques are well suited to robust adaptive signal processing problems.
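As a concrete special case covered by this framework, here is a minimal NLMS update; the step size `mu` and regularizer `eps` are illustrative:

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One normalized LMS update: relax the weight vector w toward the
    hyperplane of filters that exactly reproduce the new sample (x, d)."""
    e = d - w @ x                      # a priori error
    return w + mu * e * x / (x @ x + eps)
```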

20.
The spectral gradient method has proved to be effective for solving large-scale unconstrained optimization problems. It has recently been extended and combined with the projected gradient method for solving optimization problems on convex sets; this combination uses nonmonotone line search techniques to preserve fast local convergence. In this work we further extend the spectral choice of steplength to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global properties. We discuss implementation details for solving large-scale problems.
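A minimal sketch of the unpreconditioned spectral projected gradient iteration (Birgin–Martínez–Raydan style) that this work builds on; the nonmonotone line search and the preconditioned direction are omitted, and `proj` is assumed to be a caller-supplied projection onto the convex set:

```python
import numpy as np

def spg(grad, proj, x0, iters=200):
    """Spectral projected gradient sketch: a Barzilai-Borwein spectral step
    followed by projection onto the convex feasible set. The full method adds
    a nonmonotone line search (and, in the paper above, preconditioning)."""
    x = proj(x0)
    g = grad(x)
    alpha = 1.0
    for _ in range(iters):
        x_new = proj(x - alpha * g)    # projected spectral step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0   # spectral (BB) steplength
        x, g = x_new, g_new
    return x
```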
