20 similar documents found; search time: 22 ms
1.
J. T. Betts 《Journal of Optimization Theory and Applications》1975,16(1-2):1-24
An effective algorithm is described for solving the general constrained parameter optimization problem. The method is quasi-second-order and requires only function and gradient information. An exterior point penalty function method is used to transform the constrained problem into a sequence of unconstrained problems. The penalty weight r is chosen as a function of the point x such that the sequence of optimization problems is computationally easy. A rank-one optimization algorithm is developed that takes advantage of the special properties of the augmented performance index. The optimization algorithm accounts for the usual difficulties associated with discontinuous second derivatives of the augmented index. Finite convergence is exhibited for a quadratic performance index with linear constraints; accelerated convergence is demonstrated for nonquadratic indices and nonlinear constraints. A computer program has been written to implement the algorithm, and its performance is illustrated on fourteen test problems.
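The exterior point penalty idea above can be illustrated with a minimal sketch. Everything here is a generic stand-in, not Betts's rank-one algorithm: a quadratic exterior penalty, a plain gradient-descent inner solver, and an illustrative one-dimensional problem.

```python
def penalty_min(f_grad, g, g_grad, r, x, iters=5000):
    """Gradient descent on the exterior penalty P(x) = f(x) + r*max(0, g(x))^2.
    The step size shrinks with r to keep the iteration stable as the
    penalty weight grows."""
    lr = 0.5 / (1.0 + r)
    for _ in range(iters):
        v = max(0.0, g(x))
        x -= lr * (f_grad(x) + 2.0 * r * v * g_grad(x))
    return x

# min (x-2)^2  s.t.  x <= 1; the unconstrained optimum x = 2 is infeasible
f_grad = lambda x: 2.0 * (x - 2.0)
g = lambda x: x - 1.0            # constraint written as g(x) <= 0
g_grad = lambda x: 1.0

x = 3.0
for r in [1.0, 10.0, 100.0, 1000.0]:   # increasing penalty weights
    x = penalty_min(f_grad, g, g_grad, r, x)
    print(r, x)                         # minimizers approach x = 1 from outside
```

For this problem the penalized minimizer is (2 + r)/(1 + r), so the iterates approach the constrained optimum x = 1 from the infeasible side, which is the characteristic behavior of exterior penalty methods.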
2.
A second-order smooth penalty function algorithm for constrained optimization problems (total citations: 1; self-citations: 0; cited by others: 1)
Xinsheng Xu Zhiqing Meng Jianwu Sun Liguo Huang Rui Shen 《Computational Optimization and Applications》2013,55(1):155-172
This paper introduces a second-order differentiability smoothing technique for the classical l1 exact penalty function for constrained optimization problems (COP). Error estimates among the optimal objective values of the nonsmooth penalty problem, the smoothed penalty problem, and the original optimization problem are obtained. Based on the smoothed problem, an algorithm for solving COP is proposed, and some preliminary numerical results indicate that the algorithm is quite promising.
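A minimal sketch of the smoothing idea, assuming a softplus smoothing of max(0, t) — a C-infinity stand-in, not the paper's specific second-order construction — applied to an illustrative one-dimensional problem with an illustrative penalty weight rho:

```python
import math

def sigmoid(t):
    """Numerically stable logistic function."""
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

def smoothed_penalty_argmin(eps, rho=6.0, lo=0.0, hi=2.0):
    """Minimize F(x) = x^2 + rho * softplus_eps(1 - x), a smooth surrogate
    of the l1 exact penalty for  min x^2 s.t. x >= 1,  by bisecting the
    root of F'(x) = 2x - rho * sigmoid((1 - x)/eps)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 2.0 * mid - rho * sigmoid((1.0 - mid) / eps) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for eps in [0.1, 0.01, 0.001]:
    print(eps, smoothed_penalty_argmin(eps))  # minimizers approach x = 1
```

As the smoothing parameter eps shrinks, the minimizer of the smooth surrogate approaches the solution x = 1 of the original constrained problem, mirroring the error estimates the abstract describes.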
3.
《Optimization》2012,61(3):403-419
In this article, the application of the electromagnetism-like method (EM) for solving constrained optimization problems is investigated. A number of penalty functions have been tested with EM in this investigation, and their merits and demerits have been discussed. We have also provided motivations for such an investigation. Finally, we have compared EM with two recent global optimization algorithms from the literature. We have shown that EM is a suitable alternative to these methods and that it has a role to play in solving constrained global optimization problems.
4.
Cosimo Resina 《European Journal of Operational Research》1985,21(1):93-100
In this paper a general problem of constrained minimization is studied. The minima are determined by searching for the asymptotical values of the solutions of a suitable system of ordinary differential equations. For this system, if the initial point is feasible, then any trajectory remains inside the set of constraints and tends towards a set of critical points. Each critical point that is not a relative minimum is unstable. For one-step numerical integration formulas, an estimate of the integration step is given so that the above-mentioned qualitative properties of the system of ordinary differential equations are preserved.
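A discrete analogue in the spirit of this ODE approach (not Resina's specific system) can be sketched as follows: explicit Euler on the gradient flow x' = -grad f(x), with a projection keeping each step feasible; the box example and the step bound h = 1/L are illustrative assumptions.

```python
import numpy as np

def projected_euler(grad, proj, x0, h, iters=200):
    """One-step (explicit Euler) integration of x' = -grad f(x); projecting
    each step onto the feasible set keeps the discrete trajectory feasible,
    mirroring the ODE property that feasible trajectories stay feasible."""
    x = proj(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x = proj(x - h * grad(x))
    return x

# f(x) = ||x - c||^2 over the box [0, 1]^2, with c outside the box
c = np.array([2.0, -0.5])
grad = lambda x: 2.0 * (x - c)
proj = lambda x: np.clip(x, 0.0, 1.0)

L = 2.0                          # Lipschitz constant of grad f
x = projected_euler(grad, proj, [0.5, 0.5], h=1.0 / L)
print(x)                         # converges to the box projection of c: (1, 0)
```

The step-size restriction plays the role of the abstract's "estimate of the step of integration": a step no larger than 1/L preserves the descent and feasibility properties of the continuous flow.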
5.
Liya Liu Xiaolong Qin Jen-Chih Yao 《Mathematical Methods in the Applied Sciences》2019,42(18):7367-7380
The purpose of this paper is to introduce a hybrid descent algorithm for finding a common point in fixed-point sets of quasi-nonexpansive mappings and solution sets of variational inequality problems. In the framework of Hilbert spaces, the strong convergence of the hybrid descent algorithm is established. Numerical experiments for the bandwidth allocation, which demonstrate the effectiveness, performance, and convergence of the proposed algorithm, are provided.
6.
7.
This paper introduces and analyses a new algorithm for minimizing a convex function subject to a finite number of convex inequality constraints. It is assumed that the Lagrangian of the problem is strongly convex. The algorithm combines interior point methods for dealing with the inequality constraints and quasi-Newton techniques for accelerating the convergence. Feasibility of the iterates is progressively enforced thanks to shift variables and an exact penalty approach. Global and q-superlinear convergence is obtained for a fixed penalty parameter; global convergence to the analytic center of the optimal set is ensured when the barrier parameter tends to zero, provided strict complementarity holds.
Received: December 21, 2000 / Accepted: July 13, 2001 / Published online: February 14, 2002
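The barrier part of such interior point methods can be sketched in one dimension. This is a heavily simplified stand-in: a plain log-barrier with safeguarded Newton steps, on an illustrative problem, with none of the paper's shift variables, quasi-Newton updates, or exact penalty machinery.

```python
import math

def barrier_newton(mu, x, iters=50):
    """Safeguarded Newton on the log-barrier function
    F_mu(x) = x^2 - mu*log(x - 1)  for  min x^2  s.t.  x >= 1."""
    for _ in range(iters):
        d1 = 2.0 * x - mu / (x - 1.0)          # F_mu'(x)
        d2 = 2.0 + mu / (x - 1.0) ** 2         # F_mu''(x)
        step = d1 / d2
        while x - step <= 1.0:                 # keep the iterate strictly feasible
            step *= 0.5
        x -= step
    return x

x = 2.0
for mu in [1.0, 0.1, 0.01, 0.001]:   # drive the barrier parameter to zero
    x = barrier_newton(mu, x)        # warm-start from the previous solution
    print(mu, x)   # central path x(mu) = (1 + sqrt(1 + 2*mu))/2 approaches 1
```

Each barrier minimizer lies on the central path, and driving mu to zero pushes the iterates toward the constrained solution x = 1, which is the mechanism behind the abstract's "barrier parameter tends to zero" limit.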
8.
We introduce a first-order Mirror-Descent (MD) type algorithm for solving nondifferentiable convex problems having a combination of simple constraint set X (ball, simplex, etc.) and an additional functional constraint. The method is tuned to exploit the structure of X by employing an appropriate non-Euclidean distance-like function. Convergence results and efficiency estimates are derived. The performance of the algorithm is demonstrated by solving certain image deblurring problems.
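The non-Euclidean distance idea can be illustrated for the simplex case of X: with the entropy distance, the mirror descent update becomes an exponentiated-gradient step that stays on the simplex with no projection. This sketch omits the paper's additional functional constraint; the linear objective and step size are illustrative.

```python
import numpy as np

def entropic_md(grad, x0, step, iters):
    """Mirror descent with the entropy prox (exponentiated gradient):
    multiplicative updates followed by renormalization keep the iterates
    on the probability simplex automatically."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        w = x * np.exp(-step * grad(x))
        x = w / w.sum()
    return x

# min <c, x> over the simplex: the solution concentrates on argmin c
c = np.array([0.3, 0.1, 0.7])
x = entropic_md(lambda x: c, np.ones(3) / 3.0, step=0.5, iters=200)
print(x)   # nearly all mass on coordinate 1, where c is smallest
```

Exploiting the simplex structure this way is what the abstract means by "tuned to exploit the structure of X": a Euclidean projection onto the simplex would need its own algorithm, while the entropic update is closed-form.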
9.
《Optimization》2012,61(3-4):215-228
This paper is concerned with the stable solution of ill-posed convex semi-infinite problems on the basis of their sequential approximation by finite-dimensional convex problems on a sequence of grids. These auxiliary programs are constructed by using iterative prox-regularization, and for solving each of them only one step of a penalty method is applied. A simple deletion procedure for inactive constraints is suggested. The choice of the control parameters secures the convergence of the methods, and a linear convergence rate is obtained.
10.
Eliane R. Panier 《Mathematical Programming》1987,37(3):269-292
An algorithm for solving linearly constrained optimization problems is proposed. The search direction is computed by a bundle principle and the constraints are treated through an active set strategy. Difficulties that arise when the objective function is nonsmooth require a clever choice of which constraint to relax. A certain nondegeneracy assumption is necessary to obtain convergence.
Most of this research was performed when the author was with I.N.R.I.A. (Domaine de Voluceau-Rocquencourt, B.P. 105, 78153 Le Chesnay Cédex, France).
This research was supported in part by the National Science Foundation, Grants No. DMC-84-51515 and OIR-85-00108.
11.
12.
In this article, we provide a general iterative method for solving an equilibrium problem and a constrained convex minimization problem. Using the idea of the regularized gradient-projection algorithm (RGPA), we find a common element which is also a solution of a variational inequality problem. Strong convergence theorems are then obtained under suitable conditions.
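The regularization idea behind RGPA-type schemes can be sketched as follows: a projected gradient step with a vanishing Tikhonov term, which steers the iterates toward the minimum-norm element of the solution set. The objective (whose minimizers form a whole line), the box, the stepsize gamma, and the epsilon schedule are all illustrative assumptions, not the paper's setting.

```python
import numpy as np

# min f(x) = (x1 + x2 - 2)^2 over the box [-5, 5]^2: the solution set is a
# line segment; vanishing regularization selects the minimum-norm solution (1, 1).
a = np.array([1.0, 1.0])
grad = lambda x: 2.0 * (a @ x - 2.0) * a
proj = lambda x: np.clip(x, -5.0, 5.0)   # projection onto the box

x, gamma = np.array([3.0, -1.0]), 0.1
for k in range(5000):
    eps = 1.0 / (k + 1) ** 0.5           # slowly vanishing regularization
    x = proj(x - gamma * (grad(x) + eps * x))
print(x)   # approaches the minimum-norm minimizer (1, 1)
```

Without the eps * x term the iteration could settle anywhere on the solution line; the vanishing regularization is what singles out one specific limit, which parallels the role regularization plays in the strong convergence theorems.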
13.
In this paper a constrained optimization problem is transformed into an equivalent one in terms of an auxiliary penalty function. A Lagrange function method is then applied to this transformed problem. Zero duality gap and exact penalty results are obtained without any coercivity assumption on either the objective function or the constraint functions.
The work of the authors was supported by the Australian Research Council (grant DP0343998), the Research Grants Council of Hong Kong (PolyU 5145/02E), and the NNSF of China (10571174), respectively.
14.
Mathematical Programming - Motivated by modern regression applications, in this paper, we study the convexification of a class of convex optimization problems with indicator variables and...
15.
At each outer iteration of standard Augmented Lagrangian methods one tries to solve a box-constrained optimization problem with some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable. Therefore, the possibility of finishing the subproblem resolution without satisfying the theoretical stopping conditions is not contemplated in the usual convergence theories. However, in practice, one might not be able to solve the subproblem up to the required precision. This may be due to different reasons; one of them is that the presence of an excessively large penalty parameter could impair the performance of the box-constrained optimization solver. In this paper a practical strategy for decreasing the penalty parameter in such situations is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the Augmented Lagrangian subproblem are discussed. As a result, an improved Augmented Lagrangian method is presented, which takes numerical difficulties into account in a satisfactory way while preserving a suitable convergence theory. Numerical experiments are presented involving all the test problems of the CUTEr collection.
16.
17.
R. Fletcher 《Mathematical Programming》1972,2(1):133-165
An algorithm for minimization of functions of many variables, subject possibly to linear constraints on the variables, is described. In it a subproblem is solved in which a quadratic approximation is made to the objective function and minimized over a region in which the approximation is valid. A strategy for deciding when this region should be expanded or contracted is given. The quadratic approximation involves estimating the Hessian of the objective function by a matrix which is updated at each iteration by a formula recently reported by Powell [6]. This formula enables convergence of the algorithm from any feasible point to be proved. Use of such an approximation, as against using exact second derivatives, also enables a reduction of about 60% to be made in the number of operations to solve the subproblem. Numerical evidence is reported showing that the algorithm is efficient in the number of function evaluations required to solve well-known test problems. This paper was presented at the 7th International Mathematical Programming Symposium 1970, The Hague, The Netherlands.
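The expand/contract strategy for the model-trust region can be sketched as follows. This is a generic trust-region loop with a simple Cauchy-point step and standard ratio thresholds, in the spirit of, but not identical to, Fletcher's rules; the exact Hessian stands in for the Powell update, and the quadratic test problem is illustrative.

```python
import numpy as np

def trust_region(f, grad, hess, x, delta=1.0, iters=100):
    """Trust-region loop: minimize the quadratic model along -g inside the
    region (Cauchy point), then expand or contract the radius according to
    the ratio of actual to predicted reduction."""
    for _ in range(iters):
        g, B = grad(x), hess(x)
        gn = np.linalg.norm(g)
        if gn < 1e-12:
            break
        gBg = g @ B @ g
        tau = delta / gn if gBg <= 0 else min(delta / gn, (g @ g) / gBg)
        p = -tau * g
        pred = -(g @ p + 0.5 * p @ B @ p)     # reduction predicted by the model
        ared = f(x) - f(x + p)                # actual reduction
        ratio = ared / pred if pred > 0 else 0.0
        if ratio > 0.75:
            delta = min(2.0 * delta, 100.0)   # model trustworthy: expand region
        elif ratio < 0.25:
            delta *= 0.25                     # model poor: contract region
        if ratio > 0.0:
            x = x + p                         # accept the step
    return x

# Convex quadratic test problem: f(x) = 0.5*(x1^2 + 10*x2^2)
f = lambda x: 0.5 * (x[0] ** 2 + 10.0 * x[1] ** 2)
grad = lambda x: np.array([x[0], 10.0 * x[1]])
hess = lambda x: np.diag([1.0, 10.0])
x = trust_region(f, grad, hess, np.array([10.0, 1.0]))
print(x)   # near the minimizer (0, 0)
```

On this quadratic the model is exact, so the ratio is always close to 1 and the region keeps expanding; on a genuinely nonlinear objective the ratio test is what keeps the steps inside the region where the quadratic approximation is valid.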
18.
Z. Akbari 《Optimization》2017,66(9):1519-1529
In this paper, we present a nonsmooth trust region method for solving linearly constrained optimization problems with a locally Lipschitz objective function. Using an approximation of the steepest descent direction, a quadratic approximation of the objective function is constructed. The null space technique is applied to handle the constraints of the quadratic subproblem. Next, the CG-Steihaug method is applied to solve the new quadratic model subject only to the trust region constraint. Finally, the convergence of the presented algorithm is proved. The algorithm is implemented in the MATLAB environment and numerical results are reported.
19.
Z. R. Gabidullina 《Journal of Mathematical Sciences》1995,73(5):538-543
Translated from Issledovaniya po Prikladnoi Matematike, No. 18, 1992, pp. 30–38.
20.
In this paper, we consider linearly constrained multiobjective minimization and propose a new reduced gradient method for solving this problem. Our approach iteratively solves a convex quadratic optimization subproblem to calculate a suitable descent direction for all the objective functions, and then uses a bisection algorithm to find an optimal stepsize along this direction. We prove, under natural assumptions, that the proposed algorithm is well defined and converges globally to Pareto critical points of the problem. Finally, the algorithm is implemented in the MATLAB environment and comparative results of numerical experiments are reported.
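The common-descent-direction idea can be sketched for the unconstrained two-objective case, where the quadratic subproblem has a closed form: the minimum-norm convex combination of the two gradients (a Fliege–Svaiter-style steepest descent step, not the paper's reduced gradient method). The objectives and the fixed stepsize replacing the bisection search are illustrative assumptions.

```python
import numpy as np

def common_descent_dir(g1, g2):
    """Negative of the minimum-norm convex combination of two gradients
    (closed form for two objectives): a descent direction for both."""
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom == 0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return -(t * g1 + (1.0 - t) * g2)

# Two objectives with different minimizers (1,0) and (0,1);
# the Pareto set is the segment joining them.
g1f = lambda x: 2.0 * (x - np.array([1.0, 0.0]))
g2f = lambda x: 2.0 * (x - np.array([0.0, 1.0]))

x = np.array([2.0, 2.0])
for _ in range(200):
    d = common_descent_dir(g1f(x), g2f(x))
    if np.linalg.norm(d) < 1e-10:        # Pareto-critical point reached
        break
    x = x + 0.25 * d                     # fixed step stands in for the bisection search
print(x)   # lands on the Pareto segment joining (1, 0) and (0, 1)
```

A zero minimum-norm combination is exactly the Pareto criticality condition the abstract's convergence result refers to: no direction decreases both objectives at once.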