Similar Documents
 20 similar documents retrieved (search time: 46 ms)
1.
This paper concerns a short-update primal-dual interior-point method for linear optimization based on a new search direction. We apply a vector-valued function, induced by a univariate function, to the nonlinear equation of the system that defines the central path. The common way to obtain an equivalent form of the central path is to use the square root function. In this paper we consider a new function, formed as the difference of the identity map and the square root function, and apply Newton's method to obtain the new directions. Although the analysis is more difficult in this case, we prove that the complexity of the algorithm matches that of the best known methods for linear optimization.
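
To fix notation, a brief sketch of the reformulation being described. The specific map ψ(t) = t − √t is inferred from the abstract's "difference of the identity map and the square root function"; the surrounding setup is the standard algebraic-transformation technique for central paths, not a formula quoted from the paper.

```latex
% Central path of the primal-dual pair (xs is the componentwise product,
% e the all-ones vector):
\[
  Ax = b,\; x > 0, \qquad A^{\top}y + s = c,\; s > 0, \qquad \frac{xs}{\mu} = e .
\]
% Apply a univariate map \psi componentwise to both sides of the last equation,
\[
  \psi\!\Bigl(\frac{xs}{\mu}\Bigr) = \psi(e), \qquad \psi(t) = t - \sqrt{t},
\]
% and linearize this equation by Newton's method to obtain the new search
% direction (\Delta x, \Delta y, \Delta s).
```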

2.
Given a data matrix, we find its nearest symmetric positive-semidefinite Toeplitz matrix. In this paper, we formulate the problem as an optimization problem with a quadratic objective function and semidefinite constraints. In particular, instead of solving the so-called normal equations, our algorithm eliminates the linear feasibility equations from the start to maintain exact primal and dual feasibility during the course of the algorithm. Subsequently, the search direction is found using an inexact Gauss-Newton method rather than a Newton method on a symmetrized system and is computed using a diagonal preconditioned conjugate-gradient-type method. Computational results illustrate the robustness of the algorithm.
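
The paper's own solver is the interior-point/inexact Gauss-Newton scheme above. As a point of reference only, here is a much simpler baseline for the same nearest-matrix problem: Dykstra's alternating projections between the PSD cone and the subspace of symmetric Toeplitz matrices. The function names and iteration count are illustrative, not from the paper.

```python
import numpy as np

def proj_psd(A):
    """Project a symmetric matrix onto the PSD cone by eigenvalue clipping."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.maximum(w, 0)) @ V.T

def proj_toeplitz(A):
    """Project onto symmetric Toeplitz matrices: average each diagonal pair."""
    n = A.shape[0]
    T = np.zeros_like(A)
    for k in range(n):
        if k == 0:
            v = np.trace(A) / n
        else:
            v = (np.trace(A, offset=k) + np.trace(A, offset=-k)) / (2 * (n - k))
        i = np.arange(n - k)
        T[i, i + k] = v
        T[i + k, i] = v
    return T

def nearest_psd_toeplitz(F, iters=1000):
    """Dykstra's alternating projections for the nearest (Frobenius-norm)
    symmetric PSD Toeplitz matrix to F. Since Toeplitz matrices form a
    subspace, only the PSD projection needs a correction term."""
    X = np.asarray(F, dtype=float)
    P = np.zeros_like(X)
    for _ in range(iters):
        Y = proj_psd(X + P)    # project shifted iterate onto the PSD cone
        P = X + P - Y          # Dykstra correction for the (non-affine) cone
        X = proj_toeplitz(Y)   # project back onto the Toeplitz subspace
    return X
```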

3.
A primal interior-point method is developed for linear programming problems in which a linear objective function is to be maximised over polyhedra that are not necessarily in standard form. This algorithm coincides with the affine scaling method of Dikin when the polyhedron is in standard form and satisfies the usual conditions imposed for using that method. If the search direction is regarded as a function of the current iterate, then it is shown that this function has a unique continuous extension to the boundary. In fact, on any given face, this extension is just the value the search direction would take for the problem of maximising the objective function over that face. This extension is exploited to prove convergence. The algorithm presented here can exploit special constraint structure such as bounds, ranges, and free variables without increasing the size of the linear programming problem.
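
For reference, a minimal sketch of Dikin's affine scaling iteration for the standard-form special case mentioned above (max c·x subject to Ax = b, x ≥ 0). The step fraction gamma and the stopping rule are conventional choices, not taken from the paper, and x0 must be strictly feasible.

```python
import numpy as np

def affine_scaling(A, b, c, x0, gamma=0.9, tol=1e-8, max_iter=500):
    """Dikin's affine scaling for: max c@x  s.t.  A@x = b, x >= 0.
    Requires A@x0 = b and x0 > 0; gamma < 1 keeps iterates strictly interior."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        d2 = x * x                                      # diag(X)^2 as a vector
        w = np.linalg.solve(A @ (d2[:, None] * A.T), A @ (d2 * c))
        d = d2 * (c - A.T @ w)                          # ascent direction, A@d = 0
        if np.linalg.norm(d / x) < tol:                 # scaled stationarity test
            return x
        if not (d < 0).any():
            raise ValueError("problem appears unbounded along d")
        alpha = gamma * np.min(-x[d < 0] / d[d < 0])    # ratio test
        x = x + alpha * d
    return x
```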

4.
The Tikhonov identical-regularized total least squares problem (TI) addresses ill-conditioned systems of linear equations whose data are contaminated by noise. A standard approach to (TI) is to reformulate it as the problem of finding a zero of a decreasing, concave, non-smooth univariate function, so that classical bisection search and Dinkelbach's method can be applied. In this paper, by exploiting the hidden convexity of (TI), we reformulate it as a new problem of finding a zero of a strictly decreasing, smooth, concave univariate function. This allows us to apply the classical Newton's method to the reformulated problem; it converges globally to the unique root with an asymptotically quadratic convergence rate. Moreover, in each Newton iteration no optimization subproblem, such as the extended trust-region subproblem, is needed to evaluate the new univariate function, since it has an explicit expression. Promising numerical results based on the new algorithm are reported.
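
A minimal sketch of the resulting root-finding step. Here g, dg and the starting point are placeholders: the paper's univariate function has an explicit, problem-specific expression, and it is its strict monotonicity, smoothness, and concavity that make plain Newton globally convergent.

```python
def newton_root(g, dg, t0, tol=1e-12, max_iter=100):
    """Plain Newton iteration for a root of a univariate function g with
    derivative dg. For a strictly decreasing, smooth, concave g this
    converges globally to the unique root, with quadratic local rate."""
    t = t0
    for _ in range(max_iter):
        step = g(t) / dg(t)
        t = t - step
        if abs(step) < tol:
            break
    return t

# Toy usage: g(t) = 1 - t^3 is strictly decreasing and concave for t > 0;
# the unique positive root is t = 1.
root = newton_root(lambda t: 1 - t**3, lambda t: -3 * t**2, t0=0.5)
```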

5.
This paper presents a feasible direction algorithm for the minimization of a pseudoconvex function over a smooth, compact, convex set. We establish that each cluster point of the generated sequence is an optimal solution of the problem without introducing anti-jamming procedures. Each iteration of the algorithm involves as subproblems only one line search for a zero of a continuously differentiable convex function and one univariate function minimization on a compact interval.

6.
In this paper, the linear complementarity problem (LCP) is converted into an equivalent nonsmooth nonlinear equation system H(x, y) = 0 using the well-known NCP function of Fischer and Burmeister. Some equations in H(x, y) = 0 are nonsmooth and nonlinear, hence difficult to solve, while the others are linear, hence easy to solve. We therefore further convert the nonlinear system H(x, y) = 0 into an optimization problem with linear equality constraints. We then study the conditions under which the Karush-Kuhn-Tucker points of this optimization problem are solutions of the original LCP, and we propose a method for solving the optimization problem. In this algorithm, the search direction is obtained by solving a strictly convex program at each iterate; however, the algorithm is essentially different from the traditional SQP method. Global convergence of the method is proved under mild conditions. In addition, we prove that the algorithm converges superlinearly under the conditions that M is a P0 matrix and the limit point is a strictly complementary solution of the LCP. Preliminary numerical experiments with this method are reported.
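
For concreteness, a minimal sketch of the Fischer-Burmeister reformulation for the standard LCP (find x ≥ 0 with y = Mx + q ≥ 0 and x·y = 0 componentwise). The residual below is illustrative and elides the paper's split of H into smooth linear and nonsmooth blocks.

```python
import numpy as np

def fischer_burmeister(a, b):
    """Fischer-Burmeister NCP function, applied componentwise:
    phi(a, b) = sqrt(a^2 + b^2) - a - b, and
    phi(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def lcp_residual(M, q, x):
    """Nonsmooth residual in the spirit of H(x, y) = 0: x solves the LCP
    (y = M@x + q, x >= 0, y >= 0, x.y = 0) iff every component is zero."""
    y = M @ x + q
    return fischer_burmeister(x, y)
```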

7.
In this paper we define multisections of intervals that yield sharp lower bounds in branch-and-bound type methods for interval global optimization. A so-called 'generalized kite', defined for differentiable univariate functions, is built simultaneously from linear boundary forms and suitably chosen centered forms. Proofs of the existence and uniqueness of optimal cuts are given. The method described may be used either as an accelerating device or in a global optimization algorithm with an efficient pruning effect. A more general principle for the decomposition of boxes is also suggested.

8.
The paper describes an objective function hyperplane search heuristic for solving the general all-integer linear programming problem (ILP). The algorithm searches a series of objective function hyperplanes; the search over any given hyperplane is formulated as a bounded knapsack problem. Theory developed for combinations of the objective function and the problem constraints is used to guide the search. We evaluate the algorithm's performance on a class of ILP problems to assess its areas of effectiveness.

9.
In this paper, we propose an implementation of the stochastic perturbation of reduced gradient and bisection (SPRGB) method for optimizing a non-convex differentiable function subject to linear equality constraints and non-negativity bounds on the variables. At each iteration, we compute a search direction by the reduced gradient, and an optimal line search by a bisection algorithm along this direction yields a decrease in the objective value. The stochastic perturbation is designed so that global convergence of the algorithm can be established. An implementation and tests of the SPRGB algorithm are given, and numerical results on large-scale problems are presented, which show the efficiency of this approach.

10.
Recently, linear lower bounding functions (LLBFs) were proposed and used to find ε-global minima. Basically, an LLBF over an interval is a linear function which lies below a given function over the interval and matches the function value at one endpoint. By comparing it with the best function value found so far, it can be used to eliminate subregions which do not contain ε-global minima. To develop a more efficient LLBF algorithm, two important issues need to be addressed: how to construct a better LLBF, and how to use it efficiently. In this paper, an improved LLBF for factorable functions over n-dimensional boxes is derived, in the sense that the new LLBF is always better than those in [3] for continuously differentiable functions. Exploring the properties of the LLBF enables us to develop a new LLBF-based univariate global optimization algorithm, which is again better than those in [3]. Numerical results on some standard test functions indicate the high potential of our algorithm. This work was supported in part by VLSI Technology Inc. and Tyecin Systems Inc. through the University of California MICRO program under grant number 92-024.
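
To make the idea concrete, a hedged sketch of an LLBF-based branch-and-bound loop using the simplest possible LLBF: if |f'| ≤ L on [lo, hi], then f(lo) − L·(x − lo) lies below f and matches it at the left endpoint. Here `dfbound` is a hypothetical helper supplying such an L; the paper's LLBFs for factorable functions are sharper than this.

```python
def llbf_minimize(f, dfbound, a, b, eps=1e-6):
    """Branch-and-bound sketch with linear lower bounding functions.
    dfbound(lo, hi) must return L with |f'| <= L on [lo, hi]; then
    f(lo) - L*(hi - lo) is the minimum of the LLBF on the subinterval."""
    fbest, xbest = min((f(a), a), (f(b), b))
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        lower = f(lo) - dfbound(lo, hi) * (hi - lo)   # LLBF lower bound
        if lower > fbest - eps or hi - lo < eps:
            continue                # cannot contain an eps-global minimum
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if fm < fbest:
            fbest, xbest = fm, mid
        stack += [(lo, mid), (mid, hi)]
    return xbest, fbest

# Toy usage: f(x) = sin(x) + 0.1*x on [-8, 8] has |f'| <= 1.1 everywhere,
# so a constant derivative bound is a valid (if crude) dfbound.
import math
x_star, f_star = llbf_minimize(lambda x: math.sin(x) + 0.1 * x,
                               lambda lo, hi: 1.1, -8.0, 8.0)
```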

11.
The commutative class of search directions for semidefinite programming was first proposed by Monteiro and Zhang (Ref. 1). In this paper, we investigate the corresponding class of search directions for linear programming over symmetric cones, which is a class of convex optimization problems including linear programming, second-order cone programming, and semidefinite programming as special cases. Complexity results are established for short-step, semilong-step, and long-step algorithms. Then, we propose a subclass of the commutative class for which we can prove polynomial complexities of the interior-point method using semilong steps and long steps. This subclass still contains the Nesterov–Todd direction and the Helmberg–Rendl–Vanderbei–Wolkowicz/Kojima–Shindoh–Hara/Monteiro direction. An explicit formula to calculate any member of the class is also given.

12.
Building on the well-known PRP conjugate gradient method and exploiting the structure of the CG_DESCENT conjugate gradient method, this paper proposes a modified PRP conjugate gradient method for solving large-scale unconstrained optimization problems. At every iteration the method produces a sufficient descent search direction, independently of any line search condition. Under the standard Wolfe line search conditions, the global convergence and the linear convergence rate of the modified PRP method are proved. Numerical results show that the modified PRP method is very effective on the given test problems.
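
The abstract gives no formulas; for illustration, here is one well-known way to build a modified PRP direction with the sufficient-descent property g·d = −‖g‖² guaranteed independently of the line search (a three-term variant in the spirit of CG_DESCENT; the paper's exact formula may differ).

```python
import numpy as np

def mprp_direction(g_new, g_old, d_old):
    """Three-term modified PRP direction:
        beta  = g_new.(g_new - g_old) / ||g_old||^2      (classical PRP)
        theta = g_new.d_old / ||g_old||^2
        d     = -g_new + beta*d_old - theta*(g_new - g_old)
    Expanding g_new.d shows the beta and theta terms cancel, so
    g_new.d = -||g_new||^2 at every iteration, whatever the line search."""
    y = g_new - g_old
    denom = g_old @ g_old
    beta = (g_new @ y) / denom
    theta = (g_new @ d_old) / denom
    return -g_new + beta * d_old - theta * y
```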

13.
We present a novel optimization algorithm for computing the ranges of multivariate polynomials using the Bernstein polynomial approach. The proposed algorithm incorporates four accelerating devices, namely the cut-off test, the simplified vertex test, the monotonicity test, and the concavity test, and also possesses many new features, such as the generalized matrix method for Bernstein coefficient computation, a new subdivision direction selection rule, and a new subdivision point selection rule. The features and capabilities of the proposed algorithm are compared with those of other optimization techniques: interval global optimization, the filled function method, a global optimization method for imprecise problems, and a hybrid approach combining simulated annealing, tabu search, and a descent method. The superiority of the proposed method over the latter methods is illustrated by numerical experiments and qualitative comparisons.
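
For background, a minimal univariate sketch of the range-enclosure principle the algorithm builds on: the Bernstein coefficients of a polynomial on [0, 1] bracket its range. The paper's generalized matrix method computes the multivariate analogue efficiently; the direct formula below is only the textbook version.

```python
from math import comb

def bernstein_enclosure(a):
    """Range enclosure of p(x) = sum a[i] * x**i on [0, 1].
    The Bernstein coefficients are b[j] = sum_{i<=j} C(j,i)/C(n,i) * a[i],
    and range(p) on [0, 1] is contained in [min b, max b]."""
    n = len(a) - 1
    b = [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
         for j in range(n + 1)]
    return min(b), max(b)
```

For example, p(x) = x − x² (coefficients [0, 1, −1]) gives the enclosure [0, 0.5], which contains the true range [0, 0.25]; subdivision of the box tightens such enclosures.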

14.
An algorithm for univariate optimization using a linear lower bounding function is extended to the nonsmooth case by using the generalized gradient in place of the derivative. A convergence theorem is proved under a semismoothness condition. This approach gives global superlinear convergence of the algorithm, which is a generalized Newton-type method.

15.
This paper presents a version of an algorithm, due to Shor, for solving unconstrained nondifferentiable optimization problems. The algorithm uses a space extension operator in the direction of the difference between two successive subgradients. The search direction is determined by multiplying the negative of a subgradient by a positive-definite and symmetric matrix. This matrix is updated by a formula similar to the DFP updating formula for differentiable problems. An approximate line search due to Wolfe is presented. A linear rate of convergence of the successive function values is established.
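
A hedged sketch of the core operation: one space-dilation update along the difference of two successive subgradients, with the search direction obtained by multiplying the negative subgradient by the symmetric positive-definite matrix B·Bᵀ. The dilation coefficient alpha is illustrative, and the paper's DFP-like update of the matrix differs in detail.

```python
import numpy as np

def space_dilation_step(B, g_new, g_old, alpha=3.0):
    """One update of a Shor-type space-dilation method (r-algorithm family).
    Assumes g_new != g_old. Dilation R = I + (1/alpha - 1) * xi @ xi.T acts
    along the transformed difference of successive subgradients."""
    r = B.T @ (g_new - g_old)
    xi = r / np.linalg.norm(r)
    B = B + (1.0 / alpha - 1.0) * np.outer(B @ xi, xi)   # B <- B @ R
    d = -(B @ (B.T @ g_new))   # negative subgradient times SPD matrix B@B.T
    return B, d
```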

16.
In this paper, we suggest another accelerated conjugate gradient algorithm for which both the descent and the conjugacy conditions are guaranteed. The search direction is selected as a linear combination of the gradient and the previous direction, with the coefficients chosen so that both the descent and the conjugacy condition are satisfied at every iteration. The algorithm introduces a modified Wolfe line search, in which the parameter in the second Wolfe condition is modified at every iteration. It is shown that, both for uniformly convex functions and for general nonlinear functions, the algorithm with strong Wolfe line search generates directions whose norms remain bounded. The algorithm uses an acceleration scheme that modifies the step length so as to improve the reduction of the function values along the iterations. Numerical comparisons on a set of 75 unconstrained optimization problems of different dimensions show that the computational scheme outperforms the known conjugate gradient algorithms of Hestenes-Stiefel, Polak-Ribière-Polyak, and Dai-Yuan, the hybrid Dai-Yuan variant, and CG_DESCENT with Wolfe line search, as well as quasi-Newton L-BFGS.

17.
To remedy the weak local search ability and slow late-stage convergence of particle swarm optimization (PSO), an improved PSO algorithm is proposed that incorporates a quasi-Newton method in the later stage of the search. The hybrid exploits the global search ability of PSO together with the fine local search ability of the quasi-Newton method, thereby overcoming the shortcomings of plain PSO. Transcendental equations are recast as function optimization problems and solved with the algorithm; numerical experiments show that it attains high convergence speed and solution accuracy.
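
A hedged sketch of the hybrid scheme just described: a plain global PSO phase followed by a quasi-Newton (BFGS) polish from the best particle. The coefficients w, c1, c2 and the phase switch are conventional choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def pso_quasi_newton(f, bounds, n_particles=30, iters=100, seed=0):
    """Global PSO phase, then a quasi-Newton (BFGS) refinement of the best
    particle found, mirroring the abstract's late-stage hybridization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    p = x.copy()                                     # personal bests
    fp = np.apply_along_axis(f, 1, x)
    g = p[fp.argmin()].copy()                        # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # conventional defaults
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < fp
        p[better], fp[better] = x[better], fx[better]
        g = p[fp.argmin()].copy()
    # late-stage quasi-Newton refinement from the PSO incumbent
    return minimize(f, g, method="BFGS").x
```

To solve a transcendental equation such as cos x = x this way, minimize the squared residual, e.g. `pso_quasi_newton(lambda z: (np.cos(z[0]) - z[0])**2, bounds=[(0.0, 2.0)])`.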

18.
The performance of interval analysis branch-and-bound global optimization algorithms depends strongly on the efficiency of the selection, bounding, elimination, division, and termination rules used in their implementation. All the information obtained during the search process should be taken into account in order to increase algorithm efficiency, especially when this information can be obtained and elaborated without additional cost (in comparison with traditional approaches). In this paper a new way to calculate interval analysis support functions for multiextremal univariate functions is presented. The new support functions are based on the same kind of information used in interval analysis global optimization algorithms, and they enable us to develop more powerful bounding, selection, and rejection criteria and, as a consequence, to accelerate the search significantly. Numerical comparisons on a wide set of multiextremal test functions show that on average the new algorithm works almost twice as fast as a traditional interval analysis global optimization method.

19.
The central path plays a very important role in interior-point methods. By an equivalent reformulation of the central path, we obtain a new search direction which targets a small neighborhood of the central path. For a full-Newton step interior-point algorithm based on this search direction, the complexity bound of the algorithm is the best known for linear optimization.
