Similar Documents
20 similar documents found.
1.
This paper presents a feasible descent method for a class of nondifferentiable optimization problems with linear constraints, in which the objective function is the composition of a convex function and a differentiable function. The algorithm finds feasible descent directions by solving a sequence of quadratic programs, and new iterates are generated by an inexact line search. Global convergence of the algorithm is proved under fairly weak conditions.
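As a rough illustration of the direction-finding subproblem such methods solve, here is a minimal Python sketch of a QP that produces a feasible descent direction for a smooth objective under linear inequality constraints; the function names and toy data are illustrative, not taken from the paper (which treats a nonsmooth composite objective):

```python
import numpy as np
from scipy.optimize import minimize

def qp_descent_direction(grad, A_active):
    """Solve  min_d  grad@d + 0.5*||d||^2   s.t.  A_active @ d <= 0.

    A small QP of the kind used to generate feasible descent
    directions; d = 0 is optimal iff no feasible descent exists."""
    n = grad.size
    cons = []
    if A_active is not None and len(A_active) > 0:
        # SciPy 'ineq' constraints require fun(d) >= 0, i.e. -A d >= 0.
        cons.append({"type": "ineq", "fun": lambda d: -A_active @ d})
    res = minimize(lambda d: grad @ d + 0.5 * (d @ d), np.zeros(n),
                   jac=lambda d: grad + d, constraints=cons, method="SLSQP")
    return res.x

# Example: gradient pointing out of the feasible half-space d1 <= 0.
g = np.array([-1.0, -2.0])
A = np.array([[1.0, 0.0]])          # active constraint row
d = qp_descent_direction(g, A)
print(d, g @ d)                      # g @ d < 0: a feasible descent direction
```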

2.
An interval algorithm for constrained global optimization
An interval algorithm for bounding the solutions of a constrained global optimization problem is described. The problem functions are assumed only to be continuous. It is shown how the computational cost of bounding a set which satisfies equality constraints can often be reduced if the equality constraint functions are assumed to be continuously differentiable. Numerical results are presented.
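To give a flavor of interval bounding, here is a toy Python branch-and-bound sketch for a one-dimensional constrained problem, using only continuity of the problem functions through their interval extensions; the interval class, problem, and tolerances are illustrative and do not reproduce the paper's algorithm:

```python
# Bound  min f(x)  s.t.  g(x) <= 0  on a box, via interval extensions.
class Interval:
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return Interval(min(p), max(p))
    def width(self): return self.hi - self.lo

def f(X): return X*X*X*X - X*X           # f(x) = x^4 - x^2 (enclosure)
def g(X): return Interval(-X.hi, -X.lo)  # g(x) = -x <= 0, i.e. x >= 0

def bound_min(box, tol=1e-4):
    work, best_ub, boxes = [box], float("inf"), []
    while work:
        X = work.pop()
        if g(X).lo > 0:             # certainly infeasible: discard
            continue
        fX = f(X)
        if fX.lo > best_ub:         # cannot contain the minimum: discard
            continue
        if g(X).hi <= 0:            # certainly feasible: update upper bound
            best_ub = min(best_ub, fX.hi)
        if X.width() < tol:
            boxes.append(X)         # small candidate box, keep
        else:                       # bisect and recurse
            m = 0.5 * (X.lo + X.hi)
            work += [Interval(X.lo, m), Interval(m, X.hi)]
    return best_ub, boxes

ub, boxes = bound_min(Interval(-2.0, 2.0))
print(ub)   # upper bound near the true minimum f(1/sqrt(2)) = -0.25
```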

3.
The aim of this paper is to propose a new multiple subgradient descent bundle method for solving unconstrained convex nonsmooth multiobjective optimization problems. Contrary to many existing multiobjective optimization methods, our method treats the objective functions as they are, without employing a scalarization in the classical sense. The main idea is to find descent directions for every objective function separately by utilizing the proximal bundle approach, and then to form a common descent direction for all the objective functions. In addition, we prove that the method converges and finds weakly Pareto optimal solutions. Finally, some numerical experiments are reported.
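The core idea of a common descent direction can be pictured for two smooth objectives: the negative of the minimum-norm point of the segment joining the two gradients decreases both objectives whenever it is nonzero. A hedged Python sketch (the paper's bundle construction for nonsmooth objectives is more elaborate):

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Negative of the min-norm point of the segment [g1, g2].
    When nonzero it is a descent direction for BOTH objectives;
    when zero, no common descent direction exists (the origin lies
    in the convex hull of the gradients)."""
    u = g1 - g2
    denom = u @ u
    lam = 0.0 if denom == 0.0 else float(np.clip(-(g2 @ u) / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2)

g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
d = common_descent_direction(g1, g2)
print(d, g1 @ d, g2 @ d)   # both directional derivatives are negative
```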

4.
Minimizing a differentiable function over a differential manifold
To generalize the descent methods of unconstrained optimization to the constrained case, we define intrinsically the gradient field of the objective function on the constraint manifold and analyze descent methods along geodesics, including the gradient projection and reduced gradient methods for special choices of coordinate systems. In particular, we generalize the quasi-Newton methods and establish their superlinear convergence; we show that they require only the updating of a reduced-size matrix. In practice, the geodesic search is approximated by a tangent step followed by a constraint restoration, or by a simple arc search again followed by a restoration step. A first draft of this paper was presented at the 10th International Symposium on Mathematical Programming, Montreal, Canada, 1979. The present version was prepared while the author was visiting the Department of Engineering-Economic Systems at Stanford University, partially supported by AFOSR Grant No. 77-3141.
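The "tangent step followed by a restoration" scheme is easy to picture on the unit sphere, where restoration is just renormalization. A minimal Python sketch on an illustrative eigenvalue problem (not the paper's general setting):

```python
import numpy as np

def sphere_descent(grad_f, x, steps=100, t=0.1):
    """Gradient descent on the unit sphere: project the Euclidean
    gradient onto the tangent space (the intrinsic gradient), take a
    tangent step, then restore feasibility by renormalizing."""
    for _ in range(steps):
        g = grad_f(x)
        g_tan = g - (g @ x) * x        # tangent-space projection
        x = x - t * g_tan              # tangent step (leaves the sphere)
        x = x / np.linalg.norm(x)      # restoration step
    return x

# Illustrative problem: min x^T A x on ||x|| = 1 (smallest eigenvector).
A = np.array([[2.0, 0.4], [0.4, 1.0]])
x = sphere_descent(lambda x: 2 * A @ x, np.array([1.0, 0.0]))
print(x, x @ A @ x)   # approaches the eigenvector of the smallest eigenvalue
```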

5.
The present study extends Barzilai and Borwein's method for unconstrained single-objective optimization problems to multiobjective ones. Compared with Newton, quasi-Newton, and steepest descent multiobjective optimization methods, the Barzilai-Borwein multiobjective optimization (BBMO) method requires only simple and quick calculations, since it uses no line search (such as the Armijo rule) that would require function evaluations at each iteration. The innovative aspect of the study is this absence of function evaluations, in contrast with the other non-parametric multiobjective methods investigated so far (e.g. Newton, quasi-Newton, and steepest descent). Convergence of the BBMO method is proved for objective functions that are twice continuously differentiable. The method was implemented in MATLAB and compared with the methods mentioned above; using several performance measures, the quality of the nondominated frontier produced by BBMO was compared with those of the other methods, and for some problems the approximate nondominated frontiers were compared with the exact nondominated frontier. Performance profiles are used to visualize the numerical results presented in tables.
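For reference, the classical single-objective Barzilai-Borwein iteration that BBMO extends is sketched below; the stepsize comes from a closed-form quotient, with no line search or function evaluations. A minimal Python sketch on an illustrative quadratic:

```python
import numpy as np

def bb_gradient(grad, x, iters=50, alpha0=0.1):
    """Barzilai-Borwein iteration: alpha_k = s^T s / s^T y, where
    s = x_k - x_{k-1} and y = grad_k - grad_{k-1}.  No function
    evaluations are needed, the feature BBMO carries over to the
    multiobjective setting."""
    g_old, x_old = grad(x), x.copy()
    x = x - alpha0 * g_old
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_old, g - g_old
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0  # safeguarded step
        x_old, g_old = x.copy(), g
        x = x - alpha * g
    return x

# Illustrative quadratic: f(x) = 0.5 * x^T diag(1, 10) x.
grad = lambda x: np.array([1.0, 10.0]) * x
print(bb_gradient(grad, np.array([3.0, 1.0])))   # approaches the origin
```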

6.
The aim of this paper is to show that the continuously differentiable exact penalty functions recently proposed in the literature can play an important role in constrained global optimization: they allow us to transfer ideas and results from unconstrained global optimization to the constrained case. First, drawing inspiration from the unconstrained case and using the strong exactness properties of a particular continuously differentiable penalty function, we propose a sufficient condition for a local constrained minimum point to be global. Then we show that every constrained local minimum point satisfying the second-order sufficient conditions is an attraction point for a particular implementable minimization algorithm based on the considered penalty function. This result can be used to define new classes of global algorithms for the solution of general constrained global minimization problems. As an example, we describe a simulated annealing algorithm which produces a sequence of points converging in probability to a global minimum of the original constrained problem.
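A hedged sketch of the overall scheme: a penalty function turns the constrained problem into an unconstrained one, on which simulated annealing is run. Here a simple quadratic penalty stands in for the paper's continuously differentiable exact penalty, and the problem and parameters are illustrative:

```python
import math, random

def penalized(f, gs, c):
    """Quadratic penalty stand-in for the paper's exact penalty:
    minimize  f + c * sum_i max(0, g_i)^2  for large enough c."""
    return lambda x: f(x) + c * sum(max(0.0, g(x)) ** 2 for g in gs)

def anneal(F, x, T=1.0, cool=0.995, steps=20000, scale=0.5):
    """Plain Metropolis simulated annealing on the penalized function."""
    fx = F(x)
    best, fbest = x, fx
    for _ in range(steps):
        y = [xi + random.gauss(0.0, scale * T) for xi in x]
        fy = F(y)
        if fy < fx or random.random() < math.exp((fx - fy) / max(T, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cool                    # cooling schedule
    return best, fbest

# Illustrative problem: min (x-2)^2 + (y-2)^2  s.t.  x + y <= 1.
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2
g = lambda x: x[0] + x[1] - 1
print(anneal(penalized(f, [g], c=100.0), [0.0, 0.0]))
# lands near the constrained minimum (0.5, 0.5)
```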

7.
In optimization theory, necessary and sufficient optimality conditions play an essential role. They allow, first of all, checking whether a point under study satisfies the conditions and, secondly, if it does not, finding a "better" point. For the class of directionally differentiable functions, a necessary condition for an unconstrained minimum requires the directional derivative to be non-negative in all directions. This condition becomes efficient for special classes of directionally differentiable functions. For example, in the case of convex and max-type functions, the necessary condition for a minimum takes the form of an inclusion. The problem of verifying this condition reduces to finding the point of some convex compact set C nearest to the origin. If the origin does not belong to C, we easily find the steepest descent direction and are able to construct a numerical method. In the classical Chebyshev polynomial approximation problem, necessary optimality conditions are expressed as an alternation of signs of certain values. In the present paper, a generalization of the alternance approach to a general optimization problem is given. Two equivalent forms of the alternance condition (the so-called inside form and outside form) are discussed in detail. In some cases it may be more convenient to use the condition in the form of an inclusion; in others, the condition in the alternance form, as in the Chebyshev approximation problem. Numerical methods based on the inclusion form are usually "gradient-type" methods, while methods employing the alternance form are often "Newton-type". It is hoped that in some cases it will be possible to enrich the existing armory of optimization algorithms with a new family of efficient tools. We discuss only unconstrained optimization problems in the finite-dimensional setting; in many cases, a constrained optimization problem can be reduced (via exact penalization techniques) to an unconstrained one.
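The inclusion condition can be checked numerically exactly as described: find the point of C nearest to the origin. For C the convex hull of finitely many (sub)gradients, this is a small simplex-constrained QP; a Python sketch using SciPy, with illustrative data:

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_point(G):
    """Nearest point of C = conv{rows of G} to the origin, via the
    simplex-constrained QP  min ||lam @ G||^2, sum(lam) = 1, lam >= 0.
    If the result is (near) zero, 0 is in C and the optimality
    condition holds; otherwise its negative is the steepest descent
    direction."""
    m = G.shape[0]
    res = minimize(lambda lam: np.sum((lam @ G) ** 2),
                   np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq",
                                 "fun": lambda lam: lam.sum() - 1.0}],
                   method="SLSQP")
    return res.x @ G

# Gradients of the active pieces of a max-type function at some point.
G = np.array([[2.0, 1.0], [-1.0, 2.0]])
v = min_norm_point(G)
print(v, np.linalg.norm(v))   # nonzero here, so d = -v is a descent direction
```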

8.
A constrained minimax problem is converted to the minimization of a sequence of unconstrained and continuously differentiable functions, in a manner similar to Morrison's method for constrained optimization. One can thus apply any efficient gradient minimization technique to the unconstrained minimization at each step of the sequence. Based on this approach, two algorithms are proposed: the first is simpler to program, and the second is generally faster. To show the efficiency of the algorithms even for unconstrained problems, examples are used to compare the two algorithms with recent methods in the literature. It is found that the second algorithm converges faster than the other methods. Several constrained examples are also tried and the results are presented.
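A hedged sketch of one Morrison-style construction in the spirit described (the paper's exact transformation may differ): replace max_i f_i(x) by the continuously differentiable merit sum_i max(0, f_i(x) - M)^2 and search for the smallest level M at which this merit can be driven to zero:

```python
import numpy as np
from scipy.optimize import minimize

def minimax_via_levels(fs, x0, lo, hi, tol=1e-4):
    """Bisect on the level M, minimizing the continuously
    differentiable merit  phi_M(x) = sum_i max(0, f_i(x) - M)^2  at
    each level.  phi_M can be driven to (near) zero iff
    M >= min_x max_i f_i(x), so the bisection brackets the minimax value."""
    x = np.asarray(x0, float)
    while hi - lo > tol:
        M = 0.5 * (lo + hi)
        phi = lambda z: sum(max(0.0, f(z) - M) ** 2 for f in fs)
        res = minimize(phi, x, method="Nelder-Mead",
                       options={"xatol": 1e-8, "fatol": 1e-12})
        x = res.x                  # warm-start the next level
        if res.fun < 1e-6:
            hi = M                 # level achievable: minimax value <= M
        else:
            lo = M                 # level not achievable: minimax value > M
    return x, hi

# Illustrative problem: min_x max(x^2, (x - 2)^2); optimum x = 1, value 1.
fs = [lambda z: z[0] ** 2, lambda z: (z[0] - 2) ** 2]
print(minimax_via_levels(fs, [0.0], lo=0.0, hi=4.0))   # about (1, 1)
```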

9.
In this paper, we propose a strongly sub-feasible direction method for the solution of inequality constrained optimization problems whose objective functions are not necessarily differentiable. The algorithm combines the subgradient aggregation technique with the ideas of the generalized cutting plane method and of the strongly sub-feasible direction method; as a result, a new direction-finding subproblem and a new line search strategy are presented. The algorithm not only accepts infeasible starting points but also preserves the "strong sub-feasibility" of the current iteration without unduly increasing the objective value. Moreover, once a feasible iterate occurs, the method automatically becomes a feasible descent algorithm. Global convergence is proved, and some preliminary numerical results show that the proposed algorithm is efficient.

10.
This paper deals with iterative gradient and subgradient methods with random feasibility steps for solving constrained convex minimization problems, where the constraint set is specified as the intersection of possibly infinitely many constraint sets. Each constraint set is assumed to be given as a level set of a convex but not necessarily differentiable function. The proposed algorithms are applicable when the whole constraint set of the problem is not known in advance but is learned over time through observations. The algorithms are also of interest for constrained optimization problems where the constraints are known but their number is either large or not finite. We analyze the proposed algorithm for the case when the objective function is differentiable with Lipschitz gradients and the case when the objective function is not necessarily differentiable. The behavior of the algorithm is investigated both for diminishing and non-diminishing stepsize values. Almost sure convergence to an optimal solution is established for diminishing stepsizes. For non-diminishing stepsizes, error bounds are established for the expected distances of the weighted averages of the iterates from the constraint set, as well as for the expected sub-optimality of the function values along the weighted averages.
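A minimal Python sketch of the scheme for a finite family of halfspace constraints, assuming a Polyak-type feasibility step onto a single randomly drawn constraint per iteration (the paper's setting is far more general):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_feasibility_subgradient(grad_f, gs, subgrad_gs, x, iters=2000):
    """Alternate a diminishing-stepsize gradient step on the objective
    with a Polyak-type feasibility step onto ONE randomly drawn
    constraint level set {x : g_i(x) <= 0}; the constraints need
    never be visited all at once."""
    for k in range(1, iters + 1):
        x = x - (1.0 / k) * grad_f(x)             # objective step
        i = rng.integers(len(gs))                  # sample one constraint
        gi, di = gs[i](x), subgrad_gs[i](x)
        if gi > 0:                                 # Polyak feasibility step
            x = x - (gi / (di @ di)) * di
        # when gi <= 0 the sampled constraint is already satisfied
    return x

# Illustrative problem: min ||x - (2, 2)||^2 over x1 <= 1 and x2 <= 1;
# the solution is (1, 1).
grad_f = lambda x: 2 * (x - np.array([2.0, 2.0]))
gs = [lambda x: x[0] - 1.0, lambda x: x[1] - 1.0]
subgrad_gs = [lambda x: np.array([1.0, 0.0]), lambda x: np.array([0.0, 1.0])]
print(random_feasibility_subgradient(grad_f, gs, subgrad_gs, np.zeros(2)))
```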

11.
In this paper, we first propose a constrained optimization reformulation of the \(L_{1/2}\) regularization problem. The constrained problem minimizes a smooth function subject to some quadratic constraints and nonnegativity constraints. A good property of the constrained problem is that at any feasible point, the set of all feasible directions coincides with the set of all linearized feasible directions; consequently, KKT points always exist. Moreover, we show that the KKT points are the same as the stationary points of the \(L_{1/2}\) regularization problem. Based on the constrained optimization reformulation, we propose a feasible descent direction method, called the feasible steepest descent method, for solving the unconstrained \(L_{1/2}\) regularization problem. It extends the steepest descent method for smooth unconstrained optimization problems. The feasible steepest descent direction has an explicit expression and the method is easy to implement. Under very mild conditions, we show that the proposed method is globally convergent. We apply the proposed method to some practical problems arising from compressed sensing; the results show its efficiency.

12.
New Classes of Globally Convexized Filled Functions for Global Optimization
We propose new classes of globally convexized filled functions. Unlike the globally convexized filled functions previously proposed in the literature, the ones proposed here are continuously differentiable and, under suitable assumptions, their unconstrained minimization allows us to escape from any local minimum of the original objective function. Moreover, we show that the properties of the proposed functions extend to box constrained minimization problems. We also report the results of preliminary numerical experiments.

13.
We consider unconstrained finite-dimensional multi-criteria optimization problems, where the objective functions are continuously differentiable. Motivated by previous work of Brosowski and da Silva (1994), we suggest a number of tests (TESTs 1-4) to detect whether a certain point is a locally (weakly) efficient solution of the underlying vector optimization problem. Our aim is to show that the points at which none of TESTs 1-4 can be applied form a nowhere dense set in the state space. TESTs 1 and 2 are exactly those proposed by Brosowski and da Silva. TEST 3 deals with locally constant behavior of at least one of the objective functions. TEST 4 involves conditions on the gradients of the objective functions satisfied locally around the point of interest; it is formulated as a conjecture and proven under additional assumptions on the objective functions, such as linear independence of the gradients, convexity, or directional monotonicity. This work was partially supported by CONACyT grant 55681.

14.
A new line search algorithm for unconstrained optimization
Building on an analysis of various effective line search algorithms, a new line search algorithm for smooth unconstrained optimization is presented. For unconstrained problems whose objective function is twice continuously differentiable and bounded below, the algorithm has the same theoretical properties as the Wolfe-Powell line search. Each iteration requires at most two gradient evaluations, which saves computation when evaluating the objective gradient is expensive. Numerical experiments show that the algorithm is feasible and effective.
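For reference, a standard bisection implementation of the weak Wolfe-Powell conditions, against which such algorithms are compared, is sketched below in Python; this is not the paper's new algorithm:

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, t=1.0, max_iter=50):
    """Bisection search for a stepsize satisfying the weak
    Wolfe-Powell conditions:
      sufficient decrease: f(x + t d) <= f(x) + c1 * t * g.d
      curvature:           grad(x + t d).d >= c2 * g.d
    """
    lo, hi = 0.0, np.inf
    fx, gd = f(x), grad(x) @ d
    for _ in range(max_iter):
        if f(x + t * d) > fx + c1 * t * gd:
            hi = t                          # Armijo fails: step too long
        elif grad(x + t * d) @ d < c2 * gd:
            lo = t                          # curvature fails: step too short
        else:
            return t
        t = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * lo
    return t

f = lambda x: x @ x
grad = lambda x: 2 * x
x, d = np.array([2.0, 0.0]), np.array([-1.0, 0.0])
print(wolfe_line_search(f, grad, x, d))   # an acceptable Wolfe stepsize
```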

15.
In this paper, motivated by the methods of Martinez and Qi [1], we propose a class of globally convergent inexact generalized Newton methods for unconstrained optimization problems whose objective functions are not twice differentiable but have locally Lipschitzian (LC) gradients. The methods force the norm of the gradient to decrease, are implementable, and are globally convergent. We prove that the algorithms have superlinear convergence rates under some mild conditions. The methods may also be used to solve nonsmooth equations.

16.
In this paper, we study the application of non-monotone derivative-free optimization algorithms to wireless local area network (WLAN) planning, which can be modeled as an unconstrained minimization problem. We wish to determine the access point (AP) positions that maximize coverage in order to provide connectivity to static and mobile users. As the objective function of the optimization model is not everywhere differentiable, previous research discarded gradient methods and employed heuristics such as neighborhood search (NS) and simulated annealing (SA). In this paper, we show that the model fulfills the conditions required by recently proposed non-monotone derivative-free (DF) algorithms. Unlike SA, DF has guaranteed convergence. The numerical tests reveal that a tailored DF implementation (termed "zone search") outperforms NS and SA. A collaboration between the University of Vigo, Spain, and USB, Venezuela.
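A generic compass search illustrates the derivative-free family: only function values are sampled, and the stepsize is halved when no improvement is found, which is what yields convergence guarantees. The paper's tailored "zone search" is more elaborate; the coverage objective below is an illustrative stand-in:

```python
import numpy as np

def compass_search(f, x, h=1.0, h_min=1e-6, max_eval=10000):
    """Compass (coordinate) search: sample f along +/- unit coordinate
    directions, move on improvement, halve the stepsize otherwise.
    Only function values are used, the defining trait of derivative-free
    methods."""
    fx, n, evals = f(x), x.size, 0
    while h > h_min and evals < max_eval:
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            y = x + h * d
            fy = f(y); evals += 1
            if fy < fx:
                x, fx, improved = y, fy, True
                break
        if not improved:
            h *= 0.5            # refine the mesh; this drives convergence
    return x, fx

# Illustrative stand-in for a (negated) coverage objective: a smooth
# bowl plus a nondifferentiable seam, as coverage functions can be.
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 0.5) ** 2 + 0.1 * abs(x[0] - x[1])
print(compass_search(f, np.zeros(2)))
```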

17.
G. Giorgi, B. Jiménez, V. Novo. TOP 17(2):288-304, 2009
We consider a Pareto multiobjective optimization problem with a feasible set defined by inequality and equality constraints and a set constraint, where the objective and inequality constraints are locally Lipschitz, and the equality constraints are Fréchet differentiable. We study several constraint qualifications in the line of Maeda (J. Optim. Theory Appl. 80:483-500, 1994) and, under the weakest ones, we establish strong Kuhn-Tucker necessary optimality conditions in terms of Clarke subdifferentials, so that the multipliers of the objective functions are all positive.

18.
An interval maximum-entropy method for a class of constrained nondifferentiable optimization problems
This paper studies an interval algorithm for discrete minimax problems with inequality constraints, where the objective and constraint functions are C^1. Using a penalty function method and the maximum-entropy function, the problem is transformed into an unconstrained differentiable optimization problem. The interval extension of the maximum-entropy function is discussed, convergence and other properties are proved, a deletion rule for regions containing no solution is established, an interval maximum-entropy algorithm is constructed, and numerical examples are given. The algorithm is convergent, reliable, and effective.
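The maximum-entropy (aggregate) function referred to above is F_p(x) = (1/p) ln sum_i exp(p f_i(x)), which is smooth and satisfies max_i f_i <= F_p <= max_i f_i + ln(m)/p, so it tightens toward the true max as p grows. A numerically stable Python sketch:

```python
import numpy as np

def max_entropy(fvals, p=100.0):
    """Maximum-entropy smoothing of the max function:
        F_p = (1/p) * log(sum_i exp(p * f_i)),
    with  max_i f_i <= F_p <= max_i f_i + log(m)/p.
    Shifted by the max for numerical stability (log-sum-exp trick)."""
    fvals = np.asarray(fvals, float)
    m = fvals.max()
    return m + np.log(np.exp(p * (fvals - m)).sum()) / p

vals = [0.3, 1.0, 0.99]
print(max(vals), max_entropy(vals, p=10.0), max_entropy(vals, p=1000.0))
# the smooth approximation approaches the true max as p grows
```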

19.
We propose feasible descent methods for constrained minimization that do not make explicit use of the derivative of the objective function. The methods iteratively sample the objective function value along a finite set of feasible search arcs and decrease the sampling stepsize if an improved objective function value is not sampled. The search arcs are obtained by projecting search direction rays onto the feasible set, and the search directions are chosen such that a subset approximately generates the cone of first-order feasible variations at the current iterate. We show that these methods have desirable convergence properties under certain regularity assumptions on the constraints. In the case of linear constraints, the projections are redundant and the regularity assumptions hold automatically. Numerical experience with the methods in the linearly constrained case is reported.
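Specialized to box constraints, where the projection is a componentwise clip, the sampling scheme can be sketched as follows (illustrative problem; the paper handles general constraints and proves convergence under regularity assumptions):

```python
import numpy as np

def df_feasible_descent(f, x, lo, hi, t=1.0, t_min=1e-8):
    """Sample f along arcs obtained by projecting coordinate rays onto
    the box [lo, hi]; accept improving points, otherwise shrink the
    sampling stepsize.  Only function values are used."""
    proj = lambda z: np.clip(z, lo, hi)
    n, fx = x.size, f(x)
    while t > t_min:
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            y = proj(x + t * d)            # point on the projected arc
            fy = f(y)
            if fy < fx:
                x, fx, improved = y, fy, True
        if not improved:
            t *= 0.5                       # no improvement: refine stepsize
    return x, fx

# Illustrative problem: min (x1 + 1)^2 + (x2 - 3)^2 on the box [0, 2]^2;
# the solution sits on the boundary at (0, 2).
f = lambda x: (x[0] + 1) ** 2 + (x[1] - 3) ** 2
print(df_feasible_descent(f, np.array([1.0, 1.0]), 0.0, 2.0))
```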

20.
We develop optimality conditions for the second-order cone program. Our optimality conditions are well-defined and smooth everywhere. We then reformulate the optimality conditions into several systems of equations. Starting from a solution to the original problem, the sequence generated by Newton's method converges Q-quadratically to a solution of the perturbed problem under some assumptions. We globalize the algorithm by (1) extending the gradient descent method for differentiable optimization to the minimization of continuous functions that are almost everywhere differentiable, and (2) finding a directional derivative of the equations. Numerical examples confirm that our algorithm is well suited to "warm starting" second-order cone programs: in some cases, the solution of a perturbed instance is reached in two iterations. In the course of developing our algorithm, we also generalize the nonlinear complementarity function approach from two variables to several variables.
