Similar Documents (20 results found)
1.
The stochastic approximation problem is to find a root or extremum of a nonlinear function for which only noisy measurements of the function are available. The classical algorithm for this problem is the Robbins-Monro (RM) algorithm, which uses a noisy evaluation of the negative gradient direction as the iterative direction. In order to accelerate the RM algorithm, this paper gives a framework algorithm using adaptive iterative directions. At each iteration, the new algorithm moves either along the noisy evaluation of the negative gradient direction or along some other direction, according to a switch criterion. Two feasible choices of the criterion are proposed, yielding two corresponding framework algorithms. Different choices of the directions under the same switch criterion can also form different algorithms. We also propose simultaneous perturbation difference forms of the two framework algorithms. The almost sure convergence of the new algorithms is established, and numerical experiments show that the new algorithms are promising.
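The abstract does not state the recursion explicitly; as a reminder of the baseline it accelerates, here is a minimal sketch of the classical Robbins-Monro iteration driven by noisy gradient evaluations. The quadratic test function, noise model, and step-size schedule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def robbins_monro(noisy_grad, x0, n_iter=5000, a=1.0):
    """Classical RM recursion x_{k+1} = x_k - a_k * Y_k, where Y_k is a noisy
    evaluation of the gradient at x_k and a_k = a / (k + 1)."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        a_k = a / (k + 1)           # step sizes with sum a_k = inf, sum a_k^2 < inf
        x = x - a_k * noisy_grad(x)
    return x

# Illustrative use (assumed example): minimize f(x) = ||x||^2 / 2, whose exact
# gradient x is observed with additive Gaussian noise.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
print(robbins_monro(noisy_grad, np.ones(3)))   # converges towards the root 0
```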

2.
Based on a new and efficient technique for identifying active constraints, introduced in this paper, a new sequential systems of linear equations (SSLE) algorithm generating feasible iterates is proposed for solving nonlinear optimization problems with inequality constraints. We introduce a new technique for constructing the system of linear equations, which relies on a perturbation of the gradients of the constraint functions. At each iteration of the new algorithm, a feasible descent direction is obtained by solving only one system of linear equations, without forming a convex combination. To ensure global convergence and avoid the Maratos effect, the algorithm solves two additional reduced systems of linear equations with the same coefficient matrix after finitely many iterations. The proposed algorithm is proved to be globally and superlinearly convergent under mild conditions. What distinguishes this algorithm from previous feasible SSLE algorithms is that an improving direction is obtained easily and the computational cost of generating a new iterate is reduced. Finally, a preliminary implementation has been tested.

3.
In this paper, two PVD-type (parallel variable distribution) algorithms are proposed for solving nonseparable linearly constrained optimization problems. Instead of computing the residual gradient function, the new algorithms use reduced gradients to construct the PVD directions in parallel, which greatly reduces the computation at each iteration and is closer to practical applications for solving large-scale nonlinear programming. Moreover, based on an active set computed by coordinate rotation at each iteration, a feasible descent direction can easily be obtained by the extended reduced gradient method. This direction is then used as the PVD direction, and a new PVD algorithm is proposed for general linearly constrained optimization. Global convergence is also proved.

4.
In this paper we report a sparse truncated Newton algorithm for large-scale nonlinear minimization problems with simple bound constraints. The truncated Newton method is used to update the variables whose indices lie outside the active set, while the projected gradient method is used to update the active variables. At each iteration, the search direction consists of three parts: a subspace truncated Newton direction, a subspace gradient direction, and a modified gradient direction. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. The global convergence and quadratic convergence rate of the algorithm are proved, and some numerical tests are given.
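To make the splitting concrete, the following sketch shows one iteration of the kind of scheme the abstract describes for a box-constrained problem: a conjugate-gradient (truncated Newton) step on the free variables and a projected gradient step on the active ones. The active-set test, the step composition, and the use of a dense Hessian are simplifying assumptions for illustration, not the paper's actual rules.

```python
import numpy as np
from scipy.sparse.linalg import cg

def bound_constrained_step(x, g, H, lb, ub, cg_iters=20, eps=1e-8):
    """One illustrative iteration for min f(x) s.t. lb <= x <= ub, given the
    gradient g and Hessian H at x (assumed scheme, not the paper's algorithm)."""
    # Active variables: at a bound with the gradient pushing outward.
    active = ((x <= lb + eps) & (g > 0)) | ((x >= ub - eps) & (g < 0))
    free = ~active
    d = np.zeros_like(x)
    # Subspace truncated Newton direction: approximately solve H_ff d_f = -g_f by CG.
    d[free], _ = cg(H[np.ix_(free, free)], -g[free], maxiter=cg_iters)
    # Projected (negative) gradient direction on the active variables.
    d[active] = -g[active]
    # Take the step and project the trial point back into the box.
    return np.clip(x + d, lb, ub)
```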

5.
The simplified Newton method reduces the work required by Newton's method, at the expense of fast convergence, by reusing the initial Jacobian matrix. The composite Newton method attempts to balance this trade-off between cost and fast convergence by composing one Newton step with one simplified Newton step. Recently, Mehrotra suggested a predictor-corrector variant of the primal-dual interior point method for linear programming; it is currently the interior-point method of choice for linear programming. In this work we propose a predictor-corrector interior-point algorithm for convex quadratic programming. It is proved that the algorithm is equivalent to a level-1 perturbed composite Newton method. The computations in the algorithm do not require the initial primal and dual points to be feasible. Numerical experiments are reported.
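As background for the "level-1 perturbed composite Newton" interpretation, here is a minimal sketch of the plain composite Newton idea for a nonlinear system F(x) = 0: one full Newton step followed by one simplified step that reuses the Jacobian factorization just computed. The example system is an assumption for illustration; this is not the paper's interior-point algorithm.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def composite_newton(F, J, x0, n_sweeps=20):
    """Each sweep: factorize J(x) once, then take a Newton step and a
    simplified Newton step with the same factorization."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_sweeps):
        lu = lu_factor(J(x))           # one factorization per sweep
        x = x - lu_solve(lu, F(x))     # Newton step
        x = x - lu_solve(lu, F(x))     # simplified step reuses the factorization
    return x

# Illustrative system (assumed example): F(x) = (x0^2 - 2, x0*x1 - 1).
F = lambda x: np.array([x[0]**2 - 2.0, x[0] * x[1] - 1.0])
J = lambda x: np.array([[2.0 * x[0], 0.0], [x[1], x[0]]])
print(composite_newton(F, J, np.array([1.0, 1.0])))   # ~ (sqrt(2), 1/sqrt(2))
```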

6.
A fast convergent feasible trust region algorithm for nonlinear inequality constrained optimization   (Cited: 5; self-citations: 0; cited by others: 5)
简金宝. 《计算数学》, 2002, 24(3): 273-282.
In this paper, by combining the trust region technique with the generalized gradient projection, a new trust region algorithm with feasible iteration points is presented for nonlinear inequality constrained optimization; its trust region is a general compact set containing the origin as an interior point. No penalty function is used in the algorithm, and it is a feasible descent method. Under suitable assumptions, the algorithm is proved to possess global and strong convergence as well as superlinear and quadratic convergence. Some numerical results are reported.

7.
Conjugate gradient optimization algorithms depend on search directions defined by different choices of a parameter. In this note, conditions on the parameter in the conjugate gradient directions are given that ensure the descent property of the search directions, and the global convergence of such a class of methods is discussed. It is shown that, using a reverse modulus of continuity function and a forcing function, the new method for unconstrained optimization works for a continuously differentiable function under a modification of the Curry-Altman step-size rule and a bounded level set. Combining the PR method with the new method, the PR method is modified so as to have the global convergence property. Numerical experiments show that the new methods are efficient in comparison with the FR conjugate gradient method.
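For readers unfamiliar with the role of the parameter, the sketch below shows a generic nonlinear conjugate gradient loop: the search direction is d_{k+1} = -g_{k+1} + beta_k d_k, and the method restarts with the steepest descent direction whenever the descent test fails. The PR+ formula, the Armijo backtracking rule standing in for the modified Curry-Altman step-size rule, and the stopping tolerance are illustrative assumptions, not the exact choices analyzed in the note.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, n_iter=200, tol=1e-8):
    """Generic nonlinear CG sketch with a descent-property safeguard."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search along the descent direction d.
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = max((g_new @ (g_new - g)) / (g @ g), 0.0)   # PR+ parameter
        d = -g_new + beta * d
        if g_new @ d >= 0:        # not a descent direction: restart
            d = -g_new
        g = g_new
    return x

# Illustrative use (assumed example): the 2D Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.zeros(2)))   # moves towards the minimizer (1, 1)
```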

8.
This paper discusses the minimization of nonconvex, nondifferentiable functions that are compositions of max-type functions formed from nondifferentiable convex functions, a problem closely related to practical engineering problems. By utilizing the globality of the ε-subdifferential and the theory of quasidifferentials, and by introducing a new scheme that selects several search directions and considers them simultaneously at each iteration, a minimization algorithm is derived. It is simple in structure, implementable, numerically efficient, and globally convergent. The shortcomings of existing algorithms are thus overcome both in theory and in application.

9.
We extend the classical affine scaling interior trust region algorithm for the linearly constrained smooth minimization problem to the nonsmooth case in which the gradient of the objective function is only locally Lipschitzian. We propose and analyze a new affine scaling trust-region method, combined with a nonmonotone interior backtracking line search technique, for solving the linearly constrained LC1 optimization problem where the second-order derivative of the objective function is explicitly required to be locally Lipschitzian. The general trust region subproblem in the proposed algorithm is defined by minimizing an augmented affine scaling quadratic model, which requires both first- and second-order information of the objective function, subject only to an affine scaling ellipsoidal constraint in a null subspace of the augmented equality constraints. The global convergence and fast local convergence rate of the proposed algorithm are established under reasonable conditions in which twice smoothness of the objective function is not required. Applications of the algorithm to some nonsmooth optimization problems are discussed.

10.
A new algorithm for inequality constrained optimization is presented, which solves a linear programming subproblem and a quadratic programming subproblem at each iteration. The algorithm circumvents the difficulties associated with the possible inconsistency of the QP subproblems of the original SQP method. Moreover, the algorithm can converge to a point satisfying a certain first-order necessary condition even if the original problem is itself infeasible. Under certain conditions, global convergence results are proved, and local superlinear convergence results are also obtained. Preliminary numerical results are reported.

11.
In this paper, a new superlinearly convergent algorithm is presented for optimization problems with general nonlinear equality and inequality constraints. Compared with other methods for these problems, the algorithm has two main advantages. First, it does not solve any quadratic programming (QP) subproblems; its search directions are determined by the generalized projection technique and the solutions of two systems of linear equations. Second, the sequence of points generated by the algorithm satisfies all inequality constraints, and the step length is computed by a straight-line search. The algorithm is proved to possess global and superlinear convergence.

12.
In this paper, a sequential quadratically constrained quadratic programming method of feasible directions is proposed for optimization problems with nonlinear inequality constraints. At each iteration of the proposed algorithm, a feasible direction of descent is obtained by solving only one subproblem, which consists of a convex quadratic objective function and simple quadratic inequality constraints and does not involve the second derivatives of the problem functions; such a subproblem can be formulated as a second-order cone program, which can be solved by interior point methods. To overcome the Maratos effect, an efficient higher-order correction direction is obtained by a single explicit computation formula. The algorithm is proved to be globally and superlinearly convergent under some mild conditions without strict complementarity. Finally, some preliminary numerical results are reported. Project supported by the National Natural Science Foundation (No. 10261001), Guangxi Science Foundation (Nos. 0236001, 064001), and Guangxi University Key Program for Science and Technology Research (No. 2005ZD02) of China.

13.
For current sequential quadratic programming (SQP) type algorithms, two problems exist: (i) to obtain a search direction, one must solve one or more quadratic programming subproblems per iteration, and the computational cost is very large, so these algorithms are not suitable for large-scale problems; (ii) SQP algorithms require the related quadratic programming subproblems to be solvable at every iteration, which is difficult to guarantee. By using an ε-active set procedure with a special penalty function as the merit function, a new sequential systems of linear equations algorithm for general nonlinear optimization problems with an arbitrary initial point is presented. The new algorithm only needs to solve three systems of linear equations with the same coefficient matrix per iteration, and it has global convergence and local superlinear convergence. To some extent, the new algorithm can overcome the shortcomings of the SQP algorithms mentioned above. Project partly supported by the National Natural Science Foundation of China and the Tianyuan Foundation of China.
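The computational advantage claimed here, three linear systems per iteration instead of one or more QPs, comes largely from the systems sharing one coefficient matrix. The sketch below shows that linear-algebra pattern in isolation: the matrix is factorized once and the factorization is reused for the three right-hand sides. The matrix and right-hand sides are placeholders, not the ones constructed by the algorithm.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_three_systems(A, b1, b2, b3):
    """Solve A d_i = b_i, i = 1, 2, 3, with a single LU factorization of A."""
    lu = lu_factor(A)            # O(n^3) factorization, done once per iteration
    return lu_solve(lu, b1), lu_solve(lu, b2), lu_solve(lu, b3)   # O(n^2) each

# Illustrative use with a random well-conditioned matrix (assumed data).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)
b1, b2, b3 = rng.standard_normal((3, 5))
d1, d2, d3 = solve_three_systems(A, b1, b2, b3)
```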

14.
This paper discusses optimization problems with nonlinear inequality constraints and presents a new sequential quadratically-constrained quadratic programming (NSQCQP) method of feasible directions for solving such problems. At each iteration, the NSQCQP method solves only one subproblem, which consists of a convex quadratic objective function, convex quadratic equality constraints, and a perturbation variable, and yields a feasible direction of descent (improved direction). The following results on the NSQCQP method are obtained: the subproblem solved at each iteration is feasible and solvable; the NSQCQP method is globally convergent under the Mangasarian-Fromovitz constraint qualification (MFCQ); the improved direction avoids the Maratos effect without the assumption of strict complementarity; and the NSQCQP method is superlinearly and quasiquadratically convergent under some weak assumptions, without the strict complementarity assumption or the linear independence constraint qualification (LICQ). Research supported by the National Natural Science Foundation of China Project 10261001 and Guangxi Science Foundation Projects 0236001 and 0249003. The author thanks two anonymous referees for valuable comments and suggestions on the original version of this paper.

15.
One of the most interesting topics related to sequential quadratic programming algorithms is how to guarantee the consistency of all quadratic programming subproblems. In the past decade, much work has been done on changing the form of the constraints to obtain consistency of the subproblems. The method proposed by De O. Pantoja J.F.A. and coworkers solves the consistency problem of the SQP method and is the best to the authors' knowledge. However, the scale and complexity of the subproblems in De O. Pantoja's work are greatly increased, since all equality constraints have to be changed into absolute-value form. A new sequential quadratic programming type algorithm is presented by means of a special ε-active set scheme and a special penalty function. The subproblems of the new algorithm are all consistent, and the form of the constraints of the subproblems is as simple as that of general SQP type algorithms. It is proved that the new method retains global convergence and local superlinear convergence. Project partly supported by the National Natural Science Foundation of China.

16.
This paper discusses a special class of mathematical programs with nonlinear complementarity constraints; its goal is to present a globally and superlinearly convergent algorithm for the discussed problems. We first reformulate the complementarity constraints as standard nonlinear equality and inequality constraints by making use of a class of generalized smoothing complementarity functions, and then present a new SQP algorithm for the discussed problems. At each iteration, with the help of a pivoting operation, a master search direction is obtained by solving a quadratic program, and a correction search direction for avoiding the Maratos effect is generated by an explicit formula. Under suitable assumptions, and without strict complementarity on the upper-level inequality constraints, the proposed algorithm converges globally to a B-stationary point of the problem, and its convergence rate is superlinear. AMS Subject Classification: 90C, 49M. This work was supported by the National Natural Science Foundation (10261001) and the Guangxi Province Science Foundation (0236001, 0249003) of China.

17.
18.
We propose a one-step smoothing Newton method for solving the nonlinear complementarity problem with a P0-function (P0-NCP), based on the smoothing symmetric perturbed Fisher function (the SSPF-function for short). The proposed algorithm solves only one linear system of equations and performs only one line search per iteration. Without requiring any strict complementarity assumption at the P0-NCP solution, we show that the proposed algorithm converges globally and superlinearly under mild conditions; furthermore, it has local quadratic convergence under suitable conditions. The main feature of our global convergence results is that we do not assume a priori the existence of an accumulation point. Compared with the previous literature, our algorithm has stronger convergence results under weaker conditions.
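The abstract does not reproduce the SSPF-function; as an illustration of the idea behind such smoothings, the sketch below uses a common smoothed Fischer-Burmeister-type function, which may differ in detail from the paper's SSPF-function. As the smoothing parameter tends to zero it recovers the complementarity conditions, and a smoothing Newton method drives the parameter to zero while applying one Newton step and one line search per iteration to the smooth system.

```python
import numpy as np

def smoothed_fb(a, b, mu):
    """A smoothed Fischer-Burmeister-type function (illustrative choice):
    phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2).
    For mu = 0, phi_0(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0, so the NCP
    x >= 0, F(x) >= 0, x^T F(x) = 0 is equivalent to phi_0(x_i, F_i(x)) = 0
    componentwise; for mu != 0 the function is smooth and amenable to
    Newton's method."""
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

# Quick check of the complementarity characterization at mu = 0.
print(smoothed_fb(0.0, 3.0, 0.0))   # 0.0: a = 0 with b >= 0 is complementary
print(smoothed_fb(1.0, 1.0, 0.0))   # nonzero: a, b both positive violates a*b = 0
```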

19.
An improved SQP algorithm for inequality constrained optimization   (Cited: 5; self-citations: 0; cited by others: 5)
In this paper, the feasible-type SQP method is improved. A new algorithm is proposed for nonlinear inequality constrained problems, in which a new modified method is presented to decrease the computational complexity. Only one QP subproblem, with only a subset of the constraints estimated as active, needs to be solved at each iteration. Moreover, a direction that avoids the Maratos effect is generated by solving a system of linear equations. The theoretical analysis shows that the algorithm has global and superlinear convergence under suitable conditions. Finally, numerical experiments are given to show that the method in this paper is effective. This work is supported by the National Natural Science Foundation (No. 10261001) and the Guangxi Science Foundation (Nos. 0236001 and 0249003) of China. Acknowledgement: we would like to thank one anonymous referee for valuable comments and suggestions, which greatly improved the quality of this paper.

20.
We present a globally convergent phase I-phase II algorithm for inequality-constrained minimization, which computes search directions by approximating the solution to a generalized quadratic program. In phase II these search directions are feasible descent directions. The algorithm is shown to converge linearly under convexity assumptions. Both theory and numerical experiments suggest that it generally converges faster than the Polak-Trahan-Mayne method of centers. The research reported herein was sponsored in part by the Air Force Office of Scientific Research (Grant AFOSR-90-0068), the National Science Foundation (Grant ECS-8713334), and a Howard Hughes Doctoral Fellowship (Hughes Aircraft Co.).
