Similar Articles
20 similar records found (search time: 218 ms)
1.
Quadratic models of objective functions are highly useful in many optimization algorithms. They are updated regularly to include new information about the objective function, such as the difference between two gradient vectors. We consider the case, however, when each model interpolates some function values, so an update is required when a new function value replaces an old one. We let the number of interpolation conditions, m say, be such that there is freedom in each new quadratic model that is taken up by minimizing the Frobenius norm of the second derivative matrix of the change to the model. This variational problem is expressed as the solution of an (m+n+1)×(m+n+1) system of linear equations, where n is the number of variables of the objective function. Further, the inverse of the matrix of the system provides the coefficients of quadratic Lagrange functions of the current interpolation problem. A method is presented for updating all these coefficients in O((m+n)²) operations, which allows the model to be updated too. An extension to the method is also described that suppresses the constant terms of the Lagrange functions. These techniques have a useful stability property that is investigated in some numerical experiments.
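The variational problem above has a compact concrete form. The following NumPy sketch (an illustration, not Powell's implementation, and without the inverse-updating machinery; the function names are invented) forms and solves the (m+n+1)×(m+n+1) system directly, using the fact that the minimum-Frobenius-norm Hessian is a combination of rank-one terms built from the interpolation points.

```python
import numpy as np

def min_frobenius_model(points, fvals):
    """Build Q(x) = c + g.x + 0.5 x'Gx interpolating fvals at the given
    (m, n) array of points, with G of minimum Frobenius norm, via the
    (m+n+1) x (m+n+1) linear system described in the abstract."""
    m, n = points.shape
    A = 0.5 * (points @ points.T) ** 2            # A_ij = 0.5 (x_i . x_j)^2
    X = np.hstack([np.ones((m, 1)), points])      # rows (1, x_j)
    W = np.block([[A, X], [X.T, np.zeros((n + 1, n + 1))]])
    sol = np.linalg.solve(W, np.concatenate([fvals, np.zeros(n + 1)]))
    lam, c, g = sol[:m], sol[m], sol[m + 1:]
    G = (points.T * lam) @ points                 # G = sum_j lam_j x_j x_j'
    return c, g, G

def model_value(c, g, G, x):
    return c + g @ x + 0.5 * x @ G @ x

# Demo on random data: n = 3 variables, m = 7 interpolation conditions,
# so the quadratic model is underdetermined and the norm breaks the tie.
rng = np.random.default_rng(0)
pts = rng.standard_normal((7, 3))
fvals = rng.standard_normal(7)
c, g, G = min_frobenius_model(pts, fvals)
residual = max(abs(model_value(c, g, G, x) - f) for x, f in zip(pts, fvals))
```

The demo checks only the interpolation conditions; the paper's contribution, updating the inverse of W in O((m+n)²) operations, is not reproduced here.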

2.
In this paper, a new derivative-free trust region method is developed based on the conic interpolation model for unconstrained optimization. The conic interpolation model is built by means of the quadratic model function, the collinear scaling formula, quadratic approximation and interpolation. All the parameters in this model are determined by the objective function interpolation conditions. A new derivative-free method is developed based upon this model, and the global convergence of this new method is proved without any gradient information.

3.
In this paper, we propose a derivative-free trust region algorithm for constrained minimization problems with separable structure, where derivatives of the objective function are not available and cannot be directly approximated. At each iteration, we construct a quadratic interpolation model of the objective function around the current iterate. The new iterates are generated by minimizing the augmented Lagrangian function of this model over the trust region. The filter technique is used to ensure the feasibility and optimality of the iterative sequence. Global convergence of the proposed algorithm is proved under some suitable assumptions.

4.
UOBYQA: unconstrained optimization by quadratic approximation
UOBYQA is a new algorithm for general unconstrained optimization calculations that takes account of the curvature of the objective function, F say, by forming quadratic models by interpolation. Therefore, because no first derivatives are required, each model is defined by (n+1)(n+2)/2 values of F, where n is the number of variables, and the interpolation points must have the property that no nonzero quadratic polynomial vanishes at all of them. A typical iteration of the algorithm generates a new vector of variables, t say, either by minimizing the quadratic model subject to a trust region bound, or by a procedure that should improve the accuracy of the model. Then usually F(t) is obtained, and one of the interpolation points is replaced by t. Therefore the paper addresses the initial positions of the interpolation points, the adjustment of trust region radii, the calculation of t in the two cases that have been mentioned, and the selection of the point to be replaced. Further, UOBYQA works with the Lagrange functions of the interpolation equations explicitly, so their coefficients are updated when an interpolation point is moved. The Lagrange functions assist the procedure that improves the model, and also they provide an estimate of the error of the quadratic approximation to F, which allows the algorithm to achieve a fast rate of convergence. These features are discussed and a summary of the algorithm is given. Finally, a Fortran implementation of UOBYQA is applied to several choices of F, in order to investigate accuracy, robustness in the presence of rounding errors, the effects of first derivative discontinuities, and the amount of work. The numerical results are very promising for n≤20, but larger values are problematical, because the routine work of an iteration is of fourth order in the number of variables. Received: December 7, 2000 / Accepted: August 31, 2001 / Published online April 12, 2002

5.
Some highly successful algorithms for unconstrained minimization without derivatives construct changes to the variables by applying trust region methods to quadratic approximations to the objective function ${F (\underline{x}), \underline{x} \in \mathcal{R}^n}$ . A quadratic model has (n + 1) (n + 2)/2 independent parameters, but each new model may interpolate only 2n + 1 values of F, for instance. The symmetric Broyden method takes up the remaining freedom by minimizing the Frobenius norm of the difference between the second derivative matrices of the old and new models, which usually works well in practice. We consider an extension of this technique that combines changes in first derivatives with changes in second derivatives. A simple example suggests that the extension does bring some advantages, but numerical experiments on three test problems with up to 320 variables are disappointing. On the other hand, rates of convergence are investigated numerically when F is a homogeneous quadratic function, which allows very high accuracy to be achieved in practice, the initial and final errors in the variables being about $10$ and $10^{-5000}$, respectively. It is clear in some of these experiments that the extension does reduce the number of iterations. The main difficulty in the work was finding a way of implementing the extension sufficiently accurately in only ${\mathcal{O}( n^2 )}$ operations on each iteration. A version of the truncated conjugate gradient procedure is suitable; it is used in the numerical experiments and is described in detail in an appendix.

6.
We extend the classical affine scaling interior trust region algorithm for linearly constrained smooth minimization to the nonsmooth case where the gradient of the objective function is only locally Lipschitzian. We propose and analyze a new affine scaling trust-region method, combined with a nonmonotonic interior backtracking line search technique, for solving linearly constrained LC^1 optimization, in which the first-order derivative of the objective function is locally Lipschitzian. The general trust region subproblem in the proposed algorithm is defined by minimizing an augmented affine scaling quadratic model, which requires both first- and second-order information of the objective function, subject only to an affine scaling ellipsoidal constraint in a null subspace of the augmented equality constraints. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions, where twice smoothness of the objective function is not required. Applications of the algorithm to some nonsmooth optimization problems are discussed.

7.
This paper studies subspace properties of trust region methods for unconstrained optimization, assuming the approximate Hessian is updated by quasi-Newton formulae and the initial Hessian approximation is appropriately chosen. It is shown that the trial step obtained by solving the trust region subproblem is in the subspace spanned by all the gradient vectors computed. Thus, the trial step can be defined by minimizing the quasi-Newton quadratic model in the subspace. Based on this observation, some subspace trust region algorithms are proposed and numerical results are also reported.

8.
A tensor given by its canonical decomposition is approximated by another tensor (again, in the canonical decomposition) of fixed lower rank. For this problem, the structure of the Hessian matrix of the objective function is analyzed. It is shown that all the auxiliary matrices needed for constructing the quadratic model can be calculated so that the computational effort is a quadratic function of the tensor dimensionality (rather than a cubic function as in earlier publications). An economical version of the trust region Newton method is proposed in which the structure of the Hessian matrix is efficiently used for multiplying this matrix by vectors and for scaling the trust region. At each step, the subproblem of minimizing the quadratic model in the trust region is solved using the preconditioned conjugate gradient method, which is terminated if a negative curvature direction is detected for the Hessian matrix.

9.
Interpolation-based trust-region methods are an important class of algorithms for Derivative-Free Optimization which rely on locally approximating an objective function by quadratic polynomial interpolation models, frequently built from fewer points than there are basis components. Often, in practical applications, the contribution of the problem variables to the objective function is such that many pairwise correlations between variables are negligible, implying, in the smooth case, a sparse structure in the Hessian matrix. To be able to exploit Hessian sparsity, existing optimization approaches require the knowledge of the sparsity structure. The goal of this paper is to develop and analyze a method where the sparse models are constructed automatically. The sparse recovery theory developed recently in the field of compressed sensing characterizes conditions under which a sparse vector can be accurately recovered from few random measurements. Such a recovery is achieved by minimizing the ℓ1-norm of a vector subject to the measurement constraints. We suggest an approach for building sparse quadratic polynomial interpolation models by minimizing the ℓ1-norm of the entries of the model Hessian subject to the interpolation conditions. We show that this procedure recovers accurate models when the function Hessian is sparse, using relatively few randomly selected sample points. Motivated by this result, we developed a practical interpolation-based trust-region method using deterministic sample sets and minimum ℓ1-norm quadratic models. Our computational results show that the new approach exhibits a promising numerical performance both in the general case and in the sparse one.
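The ℓ1-minimization subproblem above can be posed as a linear program. The sketch below (an illustration of the approach, not the paper's implementation; the split-variable LP reduction and all names are standard textbook devices, chosen here as assumptions) fits an underdetermined quadratic model whose Hessian entries have minimum ℓ1-norm subject to the interpolation conditions.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_quadratic_model(points, fvals):
    """Fit m(x) = c + g.x + 0.5 x'Hx from fewer samples than model
    parameters by minimizing the l1-norm of the Hessian entries subject
    to the interpolation conditions, via an LP in z = [c, g, h, t]
    with -t <= h <= t and objective sum(t)."""
    p, n = points.shape
    iu = np.triu_indices(n)
    q = len(iu[0])                       # distinct entries of symmetric H
    rows = []
    for x in points:
        # coefficient of H[i,j] in 0.5 x'Hx: 0.5 x_i^2 on the diagonal,
        # x_i x_j for an off-diagonal pair (the entry appears twice in H)
        r = x[iu[0]] * x[iu[1]]
        r[iu[0] == iu[1]] *= 0.5
        rows.append(np.concatenate([[1.0], x, r, np.zeros(q)]))
    A_eq = np.array(rows)
    obj = np.concatenate([np.zeros(1 + n + q), np.ones(q)])
    I, Z = np.eye(q), np.zeros((q, 1 + n))
    A_ub = np.block([[Z, I, -I], [Z, -I, -I]])   # h - t <= 0, -h - t <= 0
    bounds = [(None, None)] * (1 + n + q) + [(0, None)] * q
    res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(2 * q),
                  A_eq=A_eq, b_eq=fvals, bounds=bounds)
    z = res.x
    return z[0], z[1:1 + n], z[1 + n:1 + n + q], res

# Demo: a quadratic on R^5 whose Hessian has only two nonzero entries,
# sampled at 12 points (fewer than the 21 model parameters).
rng = np.random.default_rng(1)
n = 5
iu = np.triu_indices(n)
h_true = np.zeros(len(iu[0]))
h_true[0] = 3.0                          # diagonal entry H[0,0]
h_true[6] = -2.0                         # off-diagonal entry H[1,2]
H = np.zeros((n, n))
H[iu] = h_true
H = H + H.T - np.diag(np.diag(H))
g_true = rng.standard_normal(n)
f = lambda x: 1.0 + g_true @ x + 0.5 * x @ H @ x
pts = rng.standard_normal((12, n))
c, g, h, res = sparse_quadratic_model(pts, np.array([f(x) for x in pts]))

def mval(x):
    r = x[iu[0]] * x[iu[1]]
    r[iu[0] == iu[1]] *= 0.5
    return c + g @ x + h @ r

err = max(abs(mval(x) - f(x)) for x in pts)
```

By LP optimality the recovered Hessian's ℓ1-norm is never larger than the true sparse Hessian's, since the true model is a feasible point of the program; exact recovery needs the stronger compressed-sensing conditions discussed in the abstract.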

10.
We consider iterative trust region algorithms for the unconstrained minimization of an objective function $F ( \underline{x})$ , $\underline{x}\in \mathcal{R}^{n}$ , when F is differentiable but no derivatives are available, and when each model of F is a linear or a quadratic polynomial. The models interpolate F at n+1 points, which defines them uniquely when they are linear polynomials. In the quadratic case, second derivatives of the models are derived from information from previous iterations, but there are so few data that typically only the magnitudes of second derivative estimates are correct. Nevertheless, numerical results show that much faster convergence is achieved when quadratic models are employed instead of linear ones. Just one new value of F is calculated on each iteration. Changes to the variables are either trust region steps or are designed to maintain suitable volumes and diameters of the convex hulls of the interpolation points. It is proved that, if F is bounded below, if $\nabla^{2} F$ is also bounded, and if the number of iterations is infinite, then the sequence of gradients $\underline{\nabla}F ( \underline{x}_{\,k} )$ , k=1,2,3,…, converges to zero, where $\underline{x}_{\,k}$ is the centre of the trust region of the k-th iteration.
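The linear-model case above is easy to make concrete: interpolating F at n+1 points determines the model uniquely, and its gradient is a simplex gradient. This minimal sketch (test function, point set, and radius are all invented for illustration, not taken from the paper) builds the model and takes one trust-region step.

```python
import numpy as np

def linear_model(points, fvals):
    """Fit the unique linear polynomial c + g.x interpolating F at
    n + 1 points by solving an (n+1) x (n+1) linear system."""
    M = np.hstack([np.ones((len(points), 1)), points])
    coef = np.linalg.solve(M, fvals)
    return coef[0], coef[1:]

# One illustrative iteration on the hypothetical test function
# F(x) = ||x||^2: the model gradient is a simplex gradient, and the
# trial step minimizes the linear model over the trust-region ball.
F = lambda x: float(x @ x)
x0 = np.array([2.0, 2.0, 2.0])
pts = np.vstack([x0, x0 + 0.1 * np.eye(3)])     # x0 plus coordinate offsets
c, g = linear_model(pts, np.array([F(x) for x in pts]))
delta = 0.5                                     # trust region radius
x1 = x0 - delta * g / np.linalg.norm(g)         # model minimizer on the ball
```

Here the simplex gradient (4.1, 4.1, 4.1) approximates the true gradient (4, 4, 4), and the step reduces F; a full method would also manage the geometry of the interpolation set, as the abstract describes.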

11.
Based on the NEWUOA algorithm, a new derivative-free algorithm is developed, named LCOBYQA. The main aim of the algorithm is to find a minimizer $x^{*} \in\mathbb{R}^{n}$ of a non-linear function, whose derivatives are unavailable, subject to linear inequality constraints. The algorithm is based on the model of the given function constructed from a set of interpolation points. LCOBYQA is iterative: at each iteration it constructs a quadratic approximation (model) of the objective function that satisfies the interpolation conditions and leaves some freedom in the model. The remaining freedom is resolved by minimizing the Frobenius norm of the change to the second derivative matrix of the model. The model is then minimized over a trust region, using the conjugate gradient method, to obtain a new iterate. At times the new iterate is found from a model iteration, designed to improve the geometry of the interpolation points. Numerical results are presented which show that LCOBYQA works well and is very competitive with available model-based derivative-free algorithms.

12.
In this paper, we present a new trust region algorithm for systems of singular nonlinear equations with a regularized trust region subproblem. The new algorithm preserves the global convergence of the traditional trust region algorithm, and achieves quadratic convergence under some suitable conditions. Finally, some numerical results are given.

13.
In this paper, we present a new trust region method for nonlinear equations with the trust region converging to zero. The new method preserves the global convergence of traditional trust region methods, in which the trust region radius is kept larger than a positive constant. We study the convergence rate of the new method under the local error bound condition, which is weaker than nonsingularity. An example given by Y.X. Yuan shows that the convergence rate cannot be quadratic. Finally, some numerical results are given. This work is supported by Chinese NSFC grants 10401023 and 10371076, Research Grants for Young Teachers of Shanghai Jiao Tong University, and E-Institute of Computational Sciences of Shanghai Universities. An erratum to this article is available.

14.
Z. Akbari, Optimization, 2017, 66(9): 1519–1529
In this paper, we present a nonsmooth trust region method for solving linearly constrained optimization problems with a locally Lipschitz objective function. Using an approximation of the steepest descent direction, a quadratic approximation of the objective function is constructed. The null space technique is applied to handle the constraints of the quadratic subproblem. Next, the CG-Steihaug method is applied to solve the new approximate quadratic model with only the trust region constraint. Finally, the convergence of the presented algorithm is proved. The algorithm is implemented in the MATLAB environment and the numerical results are reported.

15.
We study a new trust region affine scaling method for general bound constrained optimization problems. At each iteration, we compute two trial steps. One is computed along a direction obtained by solving an appropriate quadratic model in an ellipsoidal region. This region is defined by an affine scaling technique and depends both on the distances of the current iterate to the boundaries and on the trust region radius. To ensure convergence and to avoid iterates being trapped around nonstationary points, an auxiliary step is defined along a newly defined approximate projected gradient. By choosing, as the trial step for generating the next iterate, whichever of the two steps achieves the greater reduction of the quadratic model, we prove that the iterates generated by the new algorithm are not bounded away from stationary points. Moreover, assuming that the second-order sufficient condition holds at some nondegenerate stationary point, we prove Q-linear convergence of the objective function values. Preliminary numerical experience on problems with bound constraints from the CUTEr collection is also reported.

16.
Based on simple quadratic models of the trust region subproblem, we combine the trust region method with nonmonotone and adaptive techniques to propose a new nonmonotone adaptive trust region algorithm for unconstrained optimization. Unlike traditional trust region methods, our trust region subproblem is very simple, as it uses a new scale approximation of the minimized function's Hessian. The new method requires less memory and has lower computational complexity. The convergence results of the method are proved under certain conditions. Numerical results show that the new method is effective and attractive for large scale unconstrained problems.
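When the model Hessian is a scaled identity B = τI, the trust-region subproblem has a closed-form solution along the negative gradient. The sketch below uses the Barzilai-Borwein scale τ = sᵀy/sᵀs as one simple, illustrative choice of scale approximation; this is an assumption for the example, and the paper's exact scaling formula may differ.

```python
import numpy as np

def simple_model_step(g, s, y, delta):
    """Trust-region step for the scaled-identity model B = tau*I, with
    tau = s'y / s's (Barzilai-Borwein-style scaling, an assumed choice).
    Solves  min g'p + 0.5*tau*||p||^2  s.t.  ||p|| <= delta  exactly."""
    tau = (s @ y) / (s @ s)
    gn = np.linalg.norm(g)
    if tau <= 0:                        # model unbounded below in the ball:
        return -(delta / gn) * g        # go straight to the boundary
    return -min(1.0 / tau, delta / gn) * g   # clipped Cauchy-type step

# Demo on a convex quadratic with Hessian diag(1, 4): y = H s, so tau
# lies between the extreme eigenvalues 1 and 4.
H = np.diag([1.0, 4.0])
s = np.array([1.0, 1.0])
y = H @ s                               # = [1, 4], giving tau = 2.5
p_far = simple_model_step(np.array([1.0, 0.0]), s, y, delta=10.0)
p_near = simple_model_step(np.array([1.0, 0.0]), s, y, delta=0.1)
```

The simplicity claimed in the abstract is visible here: no linear system is solved and only a gradient, a step pair (s, y), and the radius are stored.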

17.
Rujun Jiang, Duan Li, Baiyi Wu, Mathematical Programming, 2018, 169(2): 531–563
We investigate in this paper the generalized trust region subproblem (GTRS) of minimizing a general quadratic objective function subject to a general quadratic inequality...

18.
In this paper, we propose a trust region method for minimizing a function whose Hessian matrix at the solutions may be singular. The global convergence of the method is obtained under mild conditions. Moreover, we show that if the objective function is an LC^2 function, the method possesses local superlinear convergence under the local error bound condition, without the requirement of an isolated nonsingular solution. This is the first regularized Newton method with a trust region technique that possesses local superlinear (quadratic) convergence without the assumption that the Hessian of the objective function at the solution is nonsingular. Preliminary numerical experiments show the efficiency of the method. This work is partly supported by the National Natural Science Foundation of China (Grant Nos. 70302003, 10571106, 60503004, 70671100) and Science Foundation of Beijing Jiaotong University (2007RC014).

19.
On the truncated conjugate gradient method
In this paper, we consider the truncated conjugate gradient method for minimizing a convex quadratic function subject to a ball trust region constraint. It is shown that the reduction in the objective function by the solution obtained by the truncated CG method is at least half of the reduction by the global minimizer in the trust region. Received January 19, 1999 / Revised version received October 1, 1999 / Published online November 30, 1999
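The half-reduction result above can be observed numerically. The sketch below (a minimal Steihaug-style truncated CG and a bisection-based exact solver, written for illustration; the function names and the test problem are invented) compares the two reductions on a random convex quadratic.

```python
import numpy as np

def truncated_cg(B, g, delta, tol=1e-12, maxiter=200):
    """Truncated CG for min q(p) = g'p + 0.5 p'Bp, ||p|| <= delta, with
    B positive definite: run CG and, if an iterate would leave the ball,
    stop on the boundary along the current search direction."""
    p = np.zeros_like(g)
    r = g.copy()                        # residual = grad q(p) = g + B p
    d = -r
    for _ in range(maxiter):
        Bd = B @ d
        alpha = (r @ r) / (d @ Bd)
        if np.linalg.norm(p + alpha * d) >= delta:
            pd, dd = p @ d, d @ d       # sigma >= 0 with ||p + sigma d|| = delta
            sigma = (-pd + np.sqrt(pd ** 2 + dd * (delta ** 2 - p @ p))) / dd
            return p + sigma * d
        p = p + alpha * d
        r_new = r + alpha * Bd
        if np.linalg.norm(r_new) <= tol:
            return p
        d = -r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return p

def exact_tr_min(B, g, delta):
    """Global trust-region minimizer for positive definite B, found by
    bisection on lam in (B + lam*I) p = -g (for comparison only)."""
    p = np.linalg.solve(B, -g)
    if np.linalg.norm(p) <= delta:
        return p
    step = lambda lam: np.linalg.solve(B + lam * np.eye(len(g)), -g)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(step(hi)) > delta:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(step(mid)) > delta else (lo, mid)
    return step(hi)

# Demo: on a random convex quadratic, the truncated-CG reduction is at
# least half of the best possible reduction in the ball.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
B = A @ A.T + np.eye(6)                 # symmetric positive definite
g = rng.standard_normal(6)
q = lambda p: g @ p + 0.5 * p @ B @ p
red_cg = -q(truncated_cg(B, g, delta=0.05))
red_best = -q(exact_tr_min(B, g, delta=0.05))
```

The small radius makes the ball constraint likely to be active, which is the interesting case for the theorem; when the constraint is inactive, CG reaches the Newton point and the two reductions coincide.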

20.
A trust region algorithm for optimization problems with nonlinear inequality constraints
欧宜贵 (Ou Yigui), 应用数学 (Applied Mathematics), 2006, 19(1): 80–85
Using the KKT conditions and an NCP function, a trust region algorithm is proposed for optimization problems with nonlinear inequality constraints. At each iteration, instead of solving a quadratic programming subproblem with a trust region bound, the algorithm only needs to solve a system of linear equations. Under suitable assumptions, the algorithm is globally convergent and locally superlinearly convergent. Numerical results show that the method is effective.
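The NCP-function device mentioned in this abstract can be illustrated with the Fischer-Burmeister function, the standard example of an NCP function (an assumed choice for illustration; the abstract does not say which NCP function the paper uses). It turns each complementarity pair of the KKT conditions into a single equation.

```python
import numpy as np

def fischer_burmeister(a, b):
    """phi(a, b) = sqrt(a^2 + b^2) - a - b.  phi(a, b) = 0 holds exactly
    when a >= 0, b >= 0 and a*b = 0, so each KKT complementarity pair
    (multiplier, constraint value) becomes one equation, and the whole
    KKT system becomes a square system of nonlinear equations."""
    return np.sqrt(a * a + b * b) - a - b
```

Applying this componentwise to the multipliers and constraint values is what lets the method replace the quadratic programming subproblem by a linear system at each iteration.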
