Similar Documents
20 similar documents found (search time: 15 ms)
1.
In this paper we propose a fundamentally different conjugate gradient method, in which the well-known parameter βk is computed by approximating the Hessian/vector product through finite differences. For the search direction computation, the method uses a forward difference approximation to the Hessian/vector product in combination with a careful choice of the finite difference interval. For the step length computation we suggest an acceleration scheme able to improve the efficiency of the algorithm. Under common assumptions, the method is proved to be globally convergent. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with conjugate gradient algorithms, including CONMIN by Shanno and Phua [D.F. Shanno, K.H. Phua, Algorithm 500, minimization of unconstrained multivariate functions, ACM Trans. Math. Softw. 2 (1976) 87–94], SCALCG by Andrei [N. Andrei, Scaled conjugate gradient algorithms for unconstrained optimization, Comput. Optim. Appl. 38 (2007) 401–416; N. Andrei, Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Optim. Methods Softw. 22 (2007) 561–571; N. Andrei, A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, Appl. Math. Lett. 20 (2007) 645–650], the new conjugacy condition and related conjugate gradient method of Li, Tang and Wei [G. Li, C. Tang, Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, J. Comput. Appl. Math. 202 (2007) 523–539], and the truncated Newton (TN) code of Nash [S.G. Nash, Preconditioning of truncated-Newton methods, SIAM J. Sci. Stat. Comput. 6 (1985) 599–616], using a set of 750 unconstrained optimization test problems, show that the suggested algorithm outperforms these conjugate gradient algorithms as well as TN.
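As a rough illustration of the finite-difference Hessian/vector product this abstract relies on, the sketch below approximates H(x)v from two gradient evaluations; the differencing-interval rule and the quadratic test function are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np

def hessian_vector_fd(grad, x, v, eps=None):
    """Approximate H(x) @ v by a forward difference of the gradient:
    H v ~ (grad(x + h v) - grad(x)) / h, avoiding explicit Hessians."""
    if eps is None:
        # illustrative interval: balances truncation and round-off error
        eps = 2.0 * np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(x))
    h = eps / max(np.linalg.norm(v), 1e-16)
    return (grad(x + h * v) - grad(x)) / h

# usage on a simple quadratic f(x) = 0.5 x^T A x, where H v = A v exactly
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
x0 = np.ones(3)
v = np.array([1.0, -1.0, 0.5])
print(hessian_vector_fd(grad, x0, v))   # close to A @ v
```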

2.
In this paper, an unconstrained minimization algorithm is defined in which a nonmonotone line search technique is employed in association with a truncated Newton algorithm. Numerical results obtained for a set of standard test problems are reported which indicate that the proposed algorithm is highly effective in the solution of ill-conditioned as well as large-dimensional problems.
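A minimal sketch of a nonmonotone Armijo-type line search of the kind referred to here, assuming a sliding window of recent function values; the constants and the Rosenbrock usage example are illustrative, not taken from the paper.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_history, gamma=1e-4, tau=0.5, max_back=30):
    """Backtracking line search that compares against the maximum of the
    last few function values (nonmonotone acceptance) instead of f(x)."""
    f_ref = max(f_history)          # reference value over a sliding window
    slope = g @ d                   # directional derivative, assumed < 0
    alpha = 1.0
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + gamma * alpha * slope:
            return alpha
        alpha *= tau
    return alpha

# usage: one step on the Rosenbrock function with a steepest-descent direction
def rosen(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosen_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

x = np.array([-1.2, 1.0])
g = rosen_grad(x)
alpha = nonmonotone_armijo(rosen, x, -g, g, f_history=[rosen(x)])
print(alpha, rosen(x - alpha * g) < rosen(x))
```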

3.
The truncated Newton algorithm was devised by Dembo and Steihaug (Ref. 1) for solving large sparse unconstrained optimization problems. When far from a minimum, an accurate solution to the Newton equations may not be justified. Dembo's method solves these equations by the conjugate direction method, but truncates the iteration when a required degree of accuracy has been obtained. We present favorable numerical results obtained with the algorithm and compare them with existing codes for large-scale optimization.
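The core mechanism, conjugate-gradient iterations on the Newton equations that are truncated once a relative-residual accuracy is reached, might be sketched as follows; the tolerance and test data are illustrative.

```python
import numpy as np

def truncated_cg(H, g, tol_ratio=0.5, max_iter=None):
    """Approximately solve H d = -g by conjugate gradients, truncating the
    inner iteration once the residual is small relative to ||g||."""
    n = g.size
    max_iter = max_iter or n
    d = np.zeros(n)
    r = -g.copy()           # residual of H d = -g at d = 0
    p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol_ratio * np.linalg.norm(g):
            break           # truncation: accuracy sufficient far from a minimum
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:
            break           # non-positive curvature: stop with current iterate
        alpha = (r @ r) / curv
        d += alpha * p
        r_new = r - alpha * Hp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d

# usage with a small symmetric positive definite Hessian
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])
print(truncated_cg(H, g))   # approximate Newton step
```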

4.
Filter approaches, initially presented by Fletcher and Leyffer in 2002, are attractive methods for nonlinear programming. In this paper, we propose an interior-point barrier projected Hessian updating algorithm with a line search filter method for nonlinear optimization. The Lagrangian function value, instead of the objective function value, is used in the filter. Damped BFGS updating is employed to maintain the positive definiteness of the matrices in the projected Hessian updating algorithm. Numerical experiments are reported to show the effectiveness of the proposed algorithm.
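The damped BFGS update mentioned here is a standard device (Powell damping) for keeping the updated matrix positive definite; a minimal sketch, with an illustrative damping threshold and test data:

```python
import numpy as np

def damped_bfgs_update(B, s, y, theta_min=0.2):
    """Powell-damped BFGS update: replaces y by a convex combination
    r = theta*y + (1-theta)*B s so that s^T r > 0 and B stays positive definite."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= theta_min * sBs:
        theta = 1.0
    else:
        theta = (1.0 - theta_min) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)

# usage: update an identity approximation with a step/gradient-difference pair
B = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([-0.5, 0.2])       # s^T y < 0, so the damping kicks in
B1 = damped_bfgs_update(B, s, y)
print(np.linalg.eigvalsh(B1))   # eigenvalues stay positive
```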

5.
A parallel asynchronous Newton algorithm for unconstrained optimization
A new approach to the solution of unconstrained optimization problems is introduced. It is based on the exploitation of parallel computation techniques and, in particular, on an asynchronous communication model for the data exchange among concurrent processes. The proposed approach arises by interpreting the Newton method as being composed of a set of iterative and independent tasks that can be mapped onto a parallel computing system for execution. Numerical experiments on the resulting algorithm have been carried out to compare parallel versions using synchronous and asynchronous communication mechanisms in order to assess the benefits of the proposed approach on a variety of parallel computing architectures. It is pointed out that the proposed asynchronous Newton algorithm is preferable for medium- and large-scale problems, in the context of both distributed and shared memory architectures. This research work was partially supported by the National Research Council of Italy, within the special project Sistemi Informatici e Calcolo Parallelo, under CNR Contract No. 90.00675.PF69.

6.
In this paper we state a result on sets in ordered linear spaces that can be used to show that certain properties of a set are inherited by its convex hull under suitable conditions. As applications, we give a characterization of weakly efficient points and a duality result for nonconvex vector optimization problems.

7.
In this paper, we propose a new trust-region projected Hessian algorithm with a nonmonotone backtracking interior-point technique for linearly constrained optimization. By performing the QR decomposition of an affine scaling equality constraint matrix, the subproblem in the algorithm is transformed into the general trust-region subproblem defined by minimizing a quadratic function subject only to an ellipsoidal constraint. By using both the trust-region strategy and the line-search technique, each iteration switches to a backtracking interior-point step generated by the trust-region subproblem. The global convergence and fast local convergence rates of the proposed algorithm are established under reasonable assumptions. A nonmonotone criterion is used to speed up the convergence in some ill-conditioned cases. Selected from Journal of Shanghai Normal University (Natural Science), 2003, 32(4): 7–13.
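A sketch of how a QR factorization of the constraint matrix reduces the constrained quadratic model to a problem in the null-space (projected Hessian) variables; the data are illustrative and the ellipsoidal trust-region constraint is omitted for brevity.

```python
import numpy as np

def null_space_basis(A):
    """Orthonormal basis Z of the null space of A (A Z = 0), obtained from a
    full QR factorization of A^T; steps constrained to A d = 0 are d = Z u."""
    m, n = A.shape
    Q, _ = np.linalg.qr(A.T, mode='complete')
    return Q[:, m:]                  # last n-m columns span null(A)

# usage: reduce a quadratic model restricted to the constraint manifold A d = 0
A = np.array([[1.0, 1.0, 0.0]])      # single linear equality constraint
Z = null_space_basis(A)
H = np.diag([2.0, 1.0, 3.0])
g = np.array([1.0, -2.0, 0.5])
# reduced (projected Hessian) subproblem in the free variables u
H_red, g_red = Z.T @ H @ Z, Z.T @ g
u = np.linalg.solve(H_red, -g_red)
d = Z @ u
print(np.allclose(A @ d, 0.0))       # the step stays in the constraint null space
```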

8.
We present a predictor-corrector algorithm for linear optimization based on a modified Newton direction. In each main iteration, the algorithm performs two kinds of steps: a modified Newton step and a damped predictor step. The modified Newton step is generated from an equivalent reformulation of the centering equation of the system defining the central path, and moves toward a small neighborhood of the central path, while the damped predictor step moves toward the optimal solution and reduces the duality gap. The procedure is repeated until an ε-approximate solution is found. We derive the complexity bound for the algorithm and obtain the best-known result for linear optimization.
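For orientation only, the sketch below takes a single centered primal-dual Newton step on the LP central-path system; it does not reproduce the paper's modified Newton reformulation or its alternation with a damped predictor step, and the tiny LP is an illustrative assumption.

```python
import numpy as np

def centered_newton_step(A, b, c, x, y, s, sigma=0.1):
    """One damped Newton step on the perturbed KKT system of the LP
    min c^T x s.t. Ax = b, x >= 0; sigma controls the amount of centering."""
    m, n = A.shape
    mu = (x @ s) / n                       # duality gap measure
    # Newton system for (dx, dy, ds) on the central-path equations
    K = np.block([
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    rhs = np.concatenate([c - A.T @ y - s,         # dual residual
                          b - A @ x,               # primal residual
                          sigma * mu - x * s])     # centering condition
    d = np.linalg.solve(K, rhs)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    # damped step length keeping x and s strictly positive
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
    return x + alpha * dx, y + alpha * dy, s + alpha * ds

# usage on a tiny LP: min x1 + 2 x2  s.t.  x1 + x2 = 1, x >= 0
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x = np.array([0.5, 0.5]); y = np.array([0.0]); s = c - A.T @ y
x, y, s = centered_newton_step(A, b, c, x, y, s)
print(x, (x @ s) / 2)                      # duality gap shrinks toward zero
```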

9.
In this paper, a parametric simplex algorithm for solving linear vector optimization problems (LVOPs) is presented. This algorithm can be seen as a variant of the multi-objective simplex (Evans–Steuer) algorithm (Math Program 5(1):54–72, 1973). Unlike that method, the proposed algorithm works in the parameter space and does not aim to find the set of all efficient solutions. Instead, it finds a solution in the sense of Löhne (Vector optimization with infimum and supremum. Springer, Berlin, 2011), that is, it finds a subset of efficient solutions that allows one to generate the whole efficient frontier. In that sense, it can also be seen as a generalization of the parametric self-dual simplex algorithm, which originally is designed for solving single-objective linear optimization problems, and is modified to solve two-objective bounded LVOPs with the positive orthant as the ordering cone in Ruszczyński and Vanderbei (Econometrica 71(4):1287–1297, 2003). The algorithm proposed here works for any dimension, any solid pointed polyhedral ordering cone C, and for bounded as well as unbounded problems. Numerical results are provided to compare the proposed algorithm with an objective-space-based LVOP algorithm [Benson's algorithm in Hamel et al. (J Global Optim 59(4):811–836, 2014)], which also provides a solution in the sense of Löhne (2011), and with the Evans–Steuer algorithm (1973). The results show that for non-degenerate problems the proposed algorithm outperforms Benson's algorithm and is on par with the Evans–Steuer algorithm. For highly degenerate problems Benson's algorithm (Hamel et al. 2014) outperforms the simplex-type algorithms; however, for these problems the parametric simplex algorithm is computationally much more efficient than the Evans–Steuer algorithm.

10.
We present a branch and bound algorithm for the global optimization of a twice differentiable nonconvex objective function with a Lipschitz continuous Hessian over a compact, convex set. The algorithm is based on applying cubic regularisation techniques to the objective function within an overlapping branch and bound algorithm for convex constrained global optimization. Unlike other branch and bound algorithms, lower bounds are obtained via nonconvex underestimators of the function. For a numerical example, we apply the proposed branch and bound algorithm to radial basis function approximations.
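A much cruder bound than the paper's, shown only to illustrate how a Lipschitz-Hessian cubic term yields a valid underestimator on a region; the test function and Lipschitz constant are illustrative assumptions.

```python
import numpy as np

def crude_cubic_lower_bound(f0, g, H, radius, L):
    """Crude but valid lower bound on f over a ball of given radius around x0,
    using the cubic underestimator that holds when the Hessian is L-Lipschitz:
    f(y) >= f0 + g^T s + 0.5 s^T H s - (L/6)||s||^3,  s = y - x0,
    and bounding each term of the model over the ball."""
    lam_min = np.linalg.eigvalsh(H)[0]
    return (f0
            - np.linalg.norm(g) * radius
            + 0.5 * min(lam_min, 0.0) * radius**2
            - (L / 6.0) * radius**3)

# usage: bound f(x) = cos(3x) + 0.1 x^2 on [0, 2], expanded about x0 = 1
x0 = 1.0
f0 = np.cos(3 * x0) + 0.1 * x0**2
g = np.array([-3 * np.sin(3 * x0) + 0.2 * x0])
H = np.array([[-9 * np.cos(3 * x0) + 0.2]])
print(crude_cubic_lower_bound(f0, g, H, radius=1.0, L=27.0))
```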

11.
Huang Haoen, Fu Dongyang, Wang Guancheng, Jin Long, Liao Shan, Wang Huan. Numerical Algorithms, 2021, 87(2): 575–599
Numerical Algorithms - The solution of nonlinear optimization problems is encountered in many fields of scientific research and engineering applications, which spawns a large number of...

12.
We propose a new truncated Newton method for large-scale unconstrained optimization, where a Conjugate Gradient (CG)-based technique is adopted to solve Newton's equation. In the current iteration, the Krylov method computes a pair of search directions: the first approximates the Newton step of the quadratic convex model, while the second is a suitable negative curvature direction. A test based on the quadratic model of the objective function is used to select the more promising of the two search directions. Both the latter selection rule and the CG stopping criterion for approximately solving Newton's equation rely strongly on conjugacy conditions. An appropriate linesearch technique is adopted for each search direction: a nonmonotone stabilization is used with the approximate Newton step, while an Armijo-type linesearch is used for the negative curvature direction. The proposed algorithm is both globally and superlinearly convergent to stationary points satisfying second-order necessary conditions. We report significant numerical experience to test our proposal.
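A transparent stand-in for the selection test described above: compute a Newton-like direction and a negative curvature direction (here via an eigendecomposition rather than the paper's CG-based computation) and keep the one with the smaller quadratic-model value; the test data are illustrative.

```python
import numpy as np

def newton_or_negative_curvature(H, g):
    """Build a Newton-type direction and, if available, a negative curvature
    direction, then pick the one with the smaller quadratic model value
    q(d) = g^T d + 0.5 d^T H d."""
    eigvals, eigvecs = np.linalg.eigh(H)
    # Newton-like step computed on the (safeguarded) positive part of H
    pos = np.clip(eigvals, 1e-8, None)
    d_newton = -eigvecs @ ((eigvecs.T @ g) / pos)
    candidates = [d_newton]
    if eigvals[0] < 0:
        v = eigvecs[:, 0]
        d_neg = -v if g @ v > 0 else v    # descent-oriented negative curvature direction
        candidates.append(d_neg)
    q = lambda d: g @ d + 0.5 * d @ H @ d
    return min(candidates, key=q)

# usage near a saddle region: indefinite Hessian, nearly flat gradient
H = np.diag([2.0, -1.0])
g = np.array([1e-3, 0.0])
print(newton_or_negative_curvature(H, g))   # negative curvature direction is chosen
```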

13.
Optimization, 2012, 61(12): 1399–1419
The aim of this article is to introduce and analyse a general vector optimization problem in a unified framework. Using a well-known nonlinear scalarizing function defined by a solid set, we present complete scalarizations of the solution set to the vector problem without any convexity assumptions. As applications of our results we obtain new optimality conditions for several classical optimization problems by characterizing their solution set.
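For the special case where the solid set is generated by the nonnegative orthant and a strictly positive direction e, the nonlinear scalarizing functional reduces to a componentwise maximum; a minimal sketch with an illustrative bi-objective example (not taken from the article):

```python
import numpy as np

def gerstewitz(y, e):
    """Nonlinear (Gerstewitz/Tammer-type) scalarizing functional for the cone
    C = R^m_+ : phi(y) = inf{ t : y in t*e - C } = max_i y_i / e_i, with e > 0."""
    return np.max(np.asarray(y) / np.asarray(e))

# usage: scalarize the bi-objective problem F(x) = (x^2, (x-2)^2) around a reference point
e = np.array([1.0, 1.0])
ref = np.array([0.0, 0.0])
F = lambda x: np.array([x**2, (x - 2.0)**2])
xs = np.linspace(-1.0, 3.0, 401)
vals = [gerstewitz(F(x) - ref, e) for x in xs]
x_star = xs[int(np.argmin(vals))]
print(x_star)   # a weakly efficient point of the bi-objective problem (here x* is about 1)
```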

14.
Starting from the paper by Nash and Sofer (1990), we propose a heuristic adaptive truncation criterion for the inner iterations within linesearch-based truncated Newton methods. Our aim is to possibly avoid “over-solving” of the Newton equation, based on a comparison between the predicted reduction of the objective function and the actual reduction obtained. Numerical experience on unconstrained optimization problems highlights the satisfactory effectiveness and robustness of the proposed adaptive criterion when a residual-based truncation criterion is selected.
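The residual-based truncation criterion mentioned in the last sentence is typically of the form ||r|| <= eta * ||g|| with a forcing sequence eta; a minimal sketch with a commonly used choice of eta (the paper's adaptive criterion modifies such a test, which is not reproduced here):

```python
import numpy as np

def residual_truncation_test(r, g, kind="superlinear"):
    """Classical residual-based truncation test for the inner CG iterations:
    stop when ||r|| <= eta * ||g||, with a forcing term eta that shrinks as
    the gradient does (a commonly used choice)."""
    gnorm = np.linalg.norm(g)
    eta = min(0.5, np.sqrt(gnorm)) if kind == "superlinear" else 0.5
    return np.linalg.norm(r) <= eta * gnorm

# usage: far from a solution a loose inner solve passes, near a solution it does not
g_far, r_far = np.array([10.0, 0.0]), np.array([1.0, 0.0])
g_near, r_near = np.array([1e-4, 0.0]), np.array([1e-5, 0.0])
print(residual_truncation_test(r_far, g_far))    # True:  1     <= 0.5 * 10
print(residual_truncation_test(r_near, g_near))  # False: 1e-5  >  1e-2 * 1e-4
```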

15.
In this paper we focus on approximate minimal points of a set in Hausdorff locally convex spaces. Our aim is to develop a general framework from which it is possible to deduce important properties of these points by applying simple results. For this purpose we introduce a new concept of ε-efficient point based on set-valued mappings, and we obtain existence results and properties on the behavior of these approximate efficient points both when ε is fixed and when ε tends to zero. Finally, the obtained results are applied to vector optimization problems with set-valued mappings.

16.
We prove a fixed point theorem related to the set P2 of [17]. The result gives access to nontrivial infinite ordered sets with the fixed point property. We also show how the result can be used to provide an elementary proof of part of Baclawski and Björner's results on truncated lattices. Dedicated to the memory of Ivan Rival. Received December 1, 2002; accepted in final form June 18, 2004. This revised version was published online in August 2005 with a corrected cover date.

17.
Nonconvex separation theorems and some applications in vector optimization
Separation theorems for an arbitrary set and a not necessarily convex set in a linear topological space are proved and applied to vector optimization. Scalarization results for weakly efficient points and properly efficient points are deduced.

18.
In this paper, we define an unconstrained optimization algorithm employing only first-order derivatives, in which a nonmonotone stabilization technique is used in conjunction with a quasidiscrete Newton method for the computation of the search direction. Global and superlinear convergence is proved, and numerical results are reported.
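A "discrete Newton" direction built only from gradient evaluations, broadly in the spirit of the quasidiscrete Newton computation mentioned here; this sketch forms the full finite-difference Hessian, which is only sensible for small dimensions, and the quadratic test function is illustrative.

```python
import numpy as np

def fd_hessian(grad, x, h=1e-6):
    """Approximate the Hessian column by column from forward differences of the
    gradient, so only first-order derivatives are ever evaluated."""
    n = x.size
    g0 = grad(x)
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        H[:, j] = (grad(x + e) - g0) / h
    return 0.5 * (H + H.T)          # symmetrize the approximation

# usage: discrete Newton step on f(x) = x1^2 + 2 x2^2 + x1 x2
grad = lambda x: np.array([2 * x[0] + x[1], 4 * x[1] + x[0]])
x = np.array([1.0, -1.0])
H = fd_hessian(grad, x)
d = np.linalg.solve(H, -grad(x))
print(x + d)                        # close to the minimizer (0, 0)
```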

19.
In this paper we propose a primal-dual algorithm for the solution of general nonlinear programming problems. The core of the method is a local algorithm which relies on a truncated procedure for the computation of a search direction, and is thus suitable for large scale problems. The truncated direction produces a sequence of points which locally converges to a KKT pair with superlinear convergence rate.

20.
The aim of this paper is to extend the so-called perturbation approach in order to deal with conjugate duality for constrained vector optimization problems. To this end we use two conjugacy notions introduced in the past in the literature in the framework of set-valued optimization. As a particular case we consider a vector variational inequality which we rewrite in the form of a vector optimization problem. The conjugate vector duals introduced in the first part allow us to introduce new gap functions for the vector variational inequality. The properties in the definition of the gap functions are verified by using the weak and strong duality theorems.
