Found 20 similar records (search time: 31 ms)
1.
Maria Daneva Torbjörn Larsson Michael Patriksson Clas Rydergren 《Computational Optimization and Applications》2010,46(3):451-466
The feasible direction method of Frank and Wolfe has been claimed to be efficient for solving the stochastic transportation
problem. While this is true for very moderate accuracy requirements, substantially more efficient algorithms exist; among
them are the diagonalized Newton and conjugate Frank–Wolfe algorithms, which we describe and evaluate. Like the Frank–Wolfe algorithm,
these two algorithms take advantage of the structure of the stochastic transportation problem. We also introduce a Frank–Wolfe
type algorithm with multi-dimensional search; this search procedure exploits the Cartesian product structure of the problem.
Numerical results for two classic test problem sets are given. The three new methods that are considered are shown to be superior
to the Frank–Wolfe method, and also to an earlier suggested heuristic acceleration of the Frank–Wolfe method.
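For orientation, the classical Frank–Wolfe iteration that this entry accelerates can be sketched in a few lines. This is an illustrative Python sketch of the basic method only, not of the paper's diagonalized Newton or multi-dimensional-search variants; the quadratic objective and simplex feasible set are invented toy data:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Classical Frank-Wolfe: each step solves a linear subproblem via the
    linear minimization oracle `lmo`, then moves toward that vertex with
    the standard diminishing step size 2/(k+2)."""
    x = x0.copy()
    for k in range(iters):
        s = lmo(grad(x))          # vertex minimizing <grad(x), s>
        gamma = 2.0 / (k + 2.0)   # open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy instance: minimize 0.5*||x - c||^2 over the probability simplex.
c = np.array([0.2, 0.5, 0.3])
grad = lambda x: x - c

def simplex_lmo(g):
    # Linear minimization over the simplex picks the vertex with the
    # smallest gradient component.
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x = frank_wolfe(grad, simplex_lmo, np.ones(3) / 3)
```

Every iterate stays feasible by construction (a convex combination of simplex vertices), which is the structural property the entry's stochastic transportation algorithms also exploit.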
2.
Combining search directions using gradient flows
The efficient combination of directions is a significant problem in line search methods that either use negative curvature,
or wish to include additional information such as the gradient or different approximations to the Newton direction.
In this paper we describe a new procedure to combine several of these directions within an interior-point primal-dual algorithm.
Basically, we combine in an efficient manner a modified Newton direction with the gradient of a merit function and a direction
of negative curvature, if it exists. We also show that the procedure is well-defined, and it has reasonable theoretical properties
regarding the rate of convergence of the method.
We also present numerical results from an implementation of the proposed algorithm on a set of small test problems from the
CUTE collection.
Received: November 2000 / Accepted: October 2002 Published online: February 14, 2003
Key Words. negative curvature – primal-dual methods – interior-point methods – nonconvex optimization – line searches
Mathematics Subject Classification (1991): 49M37, 65K05, 90C30
3.
We study the inverse problem of recovering an interior interface from a boundary measurement in an elliptic boundary value
problem arising from a semiconductor transistor model. We set up a nonlinear least-squares formulation for solving the inverse
problem, and establish the necessary derivatives with respect to the interface. We then propose both the Gauss–Newton iterative
method and the conjugate gradient method for the least-squares problem, and present implementation of these methods using
integral equations.
4.
Roberto Andreani Joaquim J. Júdice José Mario Martínez Joao Patrício 《Numerical Algorithms》2011,57(4):457-485
Interior-point algorithms are among the most efficient techniques for solving complementarity problems. In this paper, a procedure
for globalizing interior-point algorithms by using the maximum stepsize is introduced. The algorithm combines exact or inexact
interior-point and projected-gradient search techniques and employs a line-search procedure for the natural merit function
associated with the complementarity problem. For linear problems, the maximum stepsize is shown to be acceptable if the Newton
interior-point search direction is employed. Complementarity and optimization problems are discussed, which the algorithm
is able to process by either finding a solution or showing that no solution exists. A modification of the algorithm for dealing
with infeasible linear complementarity problems is introduced which, in practice, employs only interior-point search directions.
Computational experiments on the solution of complementarity problems and convex programming problems by the new algorithm
are included.
5.
The limiting factors of second-order methods for large-scale semidefinite optimization are the storage and factorization of
the Newton matrix. For a particular algorithm based on the modified barrier method, we propose to use iterative solvers instead
of the routinely used direct factorization techniques. The preconditioned conjugate gradient method proves to be a viable
alternative for problems with a large number of variables and modest size of the constrained matrix. We further propose to
avoid explicit calculation of the Newton matrix either by an implicit scheme in the matrix–vector product or using a finite-difference
formula. This leads to huge savings in memory requirements and, for certain problems, to further speed-up of the algorithm.
Dedicated to the memory of Jos Sturm.
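The matrix-free idea in this entry, avoiding explicit formation of the Newton matrix, can be illustrated with a finite-difference Hessian-vector product fed to conjugate gradients. This is a generic sketch, not the paper's modified-barrier implementation; the quadratic test function is an invented example whose exact Hessian is known for comparison:

```python
import numpy as np

def hessvec_fd(grad, x, v, eps=1e-6):
    """Approximate H(x) @ v by a forward difference of the gradient,
    so the Hessian never has to be formed or stored."""
    return (grad(x + eps * v) - grad(x)) / eps

def cg_matfree(matvec, b, tol=1e-10, maxiter=200):
    """Plain conjugate gradients, using only matrix-vector products."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy problem: f(x) = 0.5 x^T A x, so grad(x) = A x and the true Hessian is A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
grad = lambda x: A @ x
x0 = np.array([1.0, 2.0])
b = np.array([1.0, 2.0])
sol = cg_matfree(lambda v: hessvec_fd(grad, x0, v), b)
```

Only gradient evaluations and vectors of length n are needed, which is the source of the memory savings the abstract describes.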
6.
LiPing Zhang JiYe Han ZhengHai Huang 《Acta Mathematica Sinica (English Series)》2005,21(1):117-128
We propose a one-step smoothing Newton method for solving the non-linear complementarity problem with P0-function (P0-NCP) based on the smoothing symmetric perturbed Fischer function (for short, the SSPF-function). The proposed algorithm has to solve only one linear system of equations and performs only one line search per iteration. Without requiring any strict complementarity assumption at the P0-NCP solution, we show that the proposed algorithm converges globally and superlinearly under mild conditions. Furthermore, the algorithm has local quadratic convergence under suitable conditions. The main feature of our global convergence results is that we do not assume a priori the existence of an accumulation point. Compared with the previous literature, our algorithm has stronger convergence results under weaker conditions.
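A common smoothing device in this line of work is the smoothed Fischer–Burmeister function φ_μ(a, b) = a + b − √(a² + b² + 2μ²), whose zeros approximate complementarity pairs as μ → 0. The sketch below applies a plain Newton iteration to it on a one-dimensional NCP; this particular smoothing function and the toy problem are illustrative assumptions and may differ in detail from the paper's SSPF-function and one-step scheme:

```python
import math

def phi(mu, a, b):
    """Smoothed Fischer-Burmeister function: its zeros approximate
    complementarity (a >= 0, b >= 0, a*b = 0) as mu -> 0."""
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu * mu)

def solve_ncp_1d(F, dF, x=0.0, mu=1e-8, iters=50):
    """Plain Newton iteration on H(x) = phi(mu, x, F(x)) for a 1-D NCP."""
    for _ in range(iters):
        a, b = x, F(x)
        H = phi(mu, a, b)
        if abs(H) < 1e-12:
            break
        r = math.sqrt(a * a + b * b + 2.0 * mu * mu)
        # Chain rule: dH/dx = 1 + F'(x) - (a + b * F'(x)) / r
        dH = 1.0 + dF(x) - (a + b * dF(x)) / r
        x -= H / dH
    return x

# Toy NCP with F(x) = x - 1: the complementarity solution is x = 1.
x = solve_ncp_1d(lambda t: t - 1.0, lambda t: 1.0)
```

The smoothing term 2μ² keeps the square root, and hence the derivative, well defined everywhere, which is what makes a Newton iteration applicable in the first place.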
7.
This paper is devoted to the study of the proximal point algorithm for solving monotone second-order cone complementarity
problems. The proximal point algorithm is to generate a sequence by solving subproblems that are regularizations of the original
problem. After giving an appropriate criterion for approximate solutions of subproblems by adopting a merit function, the proximal
point algorithm is verified to have global and superlinear convergence properties. For the purpose of solving the subproblems
efficiently, we introduce a generalized Newton method and show that only one Newton step is eventually needed to obtain a
desired approximate solution that approximately satisfies the appropriate criterion under mild conditions. Numerical comparisons
are also made with the derivative-free descent method used by Pan and Chen (Optimization 59:1173–1197, 2010), which confirm the theoretical results and the effectiveness of the algorithm.
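The proximal point scheme referred to above generates iterates by solving regularized subproblems x_{k+1} = argmin_y f(y) + (1/(2λ))‖y − x_k‖². A minimal sketch, using f(x) = |x| (whose proximal map is soft-thresholding) rather than the paper's second-order cone complementarity setting:

```python
def prox_point(prox, x0, iters=100):
    """Proximal point method: each iterate is the solution of a
    regularized subproblem, supplied here as a proximal operator."""
    x = x0
    for _ in range(iters):
        x = prox(x)
    return x

# For f(x) = |x| the proximal operator is soft-thresholding:
# argmin_y |y| + (1/(2*lam)) * (y - x)^2
lam = 0.5
def prox_abs(x):
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

x = prox_point(prox_abs, 5.0)
```

In the paper the subproblems cannot be solved in closed form, which is exactly why an approximate-solution criterion and a generalized Newton inner solver are needed.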
8.
We consider a quadratic cut method based on analytic centers for two cases of convex quadratic feasibility problems. In the
first case, the convex set is defined by a finite yet large number, N, of convex quadratic inequalities. We extend the quadratic cut algorithm of Luo and Sun [3] for solving such problems by placing
or translating the quadratic cuts directly through the current approximate center. We show that, in terms of total number
of addition and translation of cuts, our algorithm has the same polynomial worst case complexity as theirs [3]. However, the
total number of steps, where steps consist of (damped) Newton steps, function evaluations and arithmetic operations, required
to update from one approximate center to another is , where ε is the radius of the largest ball contained in the feasible set. In the second case, the convex set is defined by
an infinite number of certain strongly convex quadratic inequalities. We adapt the same quadratic cut method for the first
case to the second one. We show that in this case the quadratic cut algorithm is a fully polynomial approximation scheme.
Furthermore, we show that, at each iteration, k, the total number of steps (as described above) required to update from one approximate center to another is at most , with ε as defined above.
Received: April 2000 / Accepted: June 2002 Published online: September 5, 2002
Key words. convex quadratic feasibility problem – interior-point methods – analytic center – quadratic cuts – potential function
9.
B. M. Podlevs’kyi 《Journal of Mathematical Sciences》2009,160(3):357-367
An iterative algorithm for determining bilateral (alternating) approximations to the eigenvalues of nonlinear spectral
problems is proposed; it uses a bilateral analog of Newton's method and a new efficient numerical procedure for calculating
the Newton correction and its derivative.
Translated from Matematychni Metody ta Fizyko-Mekhanichni Polya, Vol. 51, No. 1, pp. 65–73, January–March, 2008.
10.
Adam Ouorou 《Mathematical Programming》2009,119(2):239-271
An algorithm is developed for minimizing nonsmooth convex functions. This algorithm extends the Elzinga–Moore cutting plane algorithm
by forcing the next test point to remain not too far from the previous ones, thus removing the compactness assumption. Our
method is to Elzinga–Moore's algorithm what a proximal bundle method is to Kelley's algorithm. Instead of the lower approximations
used in proximal bundle methods, the present approach is based on objects that regularize translations of the objective
function. We propose some variants and, using academic test problems, conduct a numerical comparison with the Elzinga–Moore
algorithm and two other well-known nonsmooth methods.
11.
A feasible semismooth asymptotically Newton method for mixed complementarity problems
Semismooth Newton methods constitute a major research area for solving mixed complementarity problems (MCPs). Early research
on semismooth Newton methods is mainly on infeasible methods. However, some MCPs are not well defined outside the feasible
region or the equivalent unconstrained reformulations of other MCPs contain local minimizers outside the feasible region.
As both these problems could make the corresponding infeasible methods fail, more recent attention is on feasible methods.
In this paper we propose a new feasible semismooth method for MCPs, in which the search direction asymptotically converges
to the Newton direction. The new method overcomes the possible non-convergence of the projected semismooth Newton method,
which is widely used in various numerical implementations, by minimizing a one-dimensional quadratic convex problem prior
to doing (curved) line searches.
As with other semismooth Newton methods, the proposed method only solves one linear system of equations at each iteration.
The sparsity of the Jacobian of the reformulated system can be exploited, often reducing the size of the system that must
be solved. The reason for this is that the projection onto the feasible set increases the likelihood of components of iterates
being active. The global and superlinear/quadratic convergence of the proposed method is proved under mild conditions. Numerical
results are reported on all problems from the MCPLIB collection [8].
Received: December 1999 / Accepted: March 2002 Published online: September 5, 2002
This work was supported in part by the Australian Research Council.
Key Words. mixed complementarity problems – semismooth equations – projected Newton method – convergence
AMS subject classifications. 90C33, 90C30, 65H10
12.
We consider a reaction–diffusion parabolic problem on branched structures. The Hodgkin–Huxley reaction–diffusion equations
are formulated on each edge of the graph. The problems are coupled by some conjugation conditions at branch points. It is
important to note that two different types of flux conservation equations are considered. The first describes conservation
of the axial currents at branch points, and the second defines conservation of the current flowing at the soma
in neuron models. We study three different types of finite-difference schemes. The fully implicit scheme is based on the backward
Euler algorithm. The stability and convergence of the discrete solution are proved in the maximum norm, and the analysis is
done by using the maximum principle method. In order to decouple computations at each edge of the graph, we consider two modified
schemes. In the predictor algorithm, the values of the solution at branch points are computed by using an explicit approximation
of the conservation equations. The stability analysis is done using the maximum principle method. In the predictor–corrector
method, in addition to the previous algorithm, the values of the solution at the branch points are recomputed by an implicit
algorithm, when the discrete solution is obtained on each subdomain. The stability of this algorithm is investigated numerically.
The results of computational experiments are presented.
13.
We present an algorithm for large-scale unconstrained optimization based on Newton's method. In large-scale optimization, solving
the Newton equations at each iteration can be expensive and may not be justified when far from a solution. Instead, an inaccurate
solution to the Newton equations is computed using a conjugate gradient method. The resulting algorithm is shown to have strong
convergence properties and has the unusual feature that the asymptotic convergence rate is a user specified parameter which
can be set to anything between linear and quadratic convergence. Some numerical results on a 916-variable test problem are
given. Finally, we contrast the computational behavior of our algorithm with Newton's method and that of a nonlinear conjugate
gradient algorithm.
This research was supported in part by DOT Grant CT-06-0011, NSF Grant ENG-78-21615 and grants from the Norwegian Research
Council for Sciences and the Humanities and the Norway-American Association.
This paper was originally presented at the TIMS-ORSA Joint National Meeting, Washington, DC, May 1980.
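The inexact (truncated) Newton idea of this entry, solving the Newton equations only approximately with conjugate gradients and tightening the inner tolerance as the gradient shrinks, can be sketched as follows. The forcing sequence η_k = min(0.5, √‖g‖) is one standard choice, not necessarily the paper's user-specified parameter; the convex quadratic test problem is invented:

```python
import numpy as np

def cg(A, b, rtol, maxiter=100):
    """CG for A d = b, stopped at a relative residual rtol (truncation)."""
    d = np.zeros_like(b)
    r = b.copy(); p = r.copy(); rs = r @ r
    bnorm = np.linalg.norm(b)
    for _ in range(maxiter):
        if np.sqrt(rs) <= rtol * bnorm:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        d += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

def truncated_newton(f_grad, f_hess, x0, outer=50, tol=1e-8):
    """Truncated Newton: solve H d = -g only approximately with CG,
    tightening the inner tolerance as the gradient shrinks."""
    x = x0.copy()
    for _ in range(outer):
        g = f_grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        eta = min(0.5, np.sqrt(gnorm))   # forcing sequence
        x = x + cg(f_hess(x), -g, rtol=eta)
    return x

# Convex quadratic test: f(x) = 0.5 x^T A x - b^T x, minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = truncated_newton(lambda x: A @ x - b, lambda x: A, np.zeros(2))
```

Driving the forcing term to zero faster or slower is exactly the mechanism by which the asymptotic rate can be tuned between linear and quadratic.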
14.
We present a matrix-free line search algorithm for large-scale equality constrained optimization that allows for inexact step
computations. For strictly convex problems, the method reduces to the inexact sequential quadratic programming approach proposed
by Byrd et al. [SIAM J. Optim. 19(1) 351–369, 2008]. For nonconvex problems, the methodology developed in this paper allows
for the presence of negative curvature without requiring information about the inertia of the primal–dual iteration matrix.
Negative curvature may arise from second-order information of the problem functions, but in fact exact second derivatives
are not required in the approach. The complete algorithm is characterized by its emphasis on sufficient reductions in a model
of an exact penalty function. We analyze the global behavior of the algorithm and present numerical results on a collection
of test problems.
15.
Recently, Li et al. (Comput. Optim. Appl. 26:131–147, 2004) proposed a regularized Newton method for convex minimization problems. The method retains the local quadratic convergence property
without requiring nonsingularity of the Hessian. In this paper, we develop a truncated regularized Newton method and
show its global convergence. We also establish a local quadratic convergence theorem for the truncated method under the same
conditions as those in Li et al. (Comput. Optim. Appl. 26:131–147, 2004). Finally, we test the proposed method through numerical experiments and compare its performance with the regularized Newton
method. The results show that the truncated method outperforms the regularized Newton method.
This work was supported by the 973 Project (grant 2004CB719402) and the NSF of China (grant 10471036).
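The regularized Newton step discussed in this entry replaces the Hessian H by H + c‖g‖ I, so the linear system stays solvable even where H is singular. A minimal sketch under that assumption; the constant c and the degenerate test function ‖x‖⁴ (whose Hessian vanishes at the minimizer) are illustrative choices, not taken from the paper:

```python
import numpy as np

def regularized_newton(grad, hess, x0, c=1.0, iters=50, tol=1e-10):
    """Regularized Newton: add c*||g|| * I to the Hessian so the step
    is well defined even where the Hessian is singular. The added term
    vanishes as the gradient does, preserving fast local convergence."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        H = hess(x) + c * gnorm * np.eye(len(x))
        x = x - np.linalg.solve(H, g)
    return x

# Convex problem with singular Hessian at the minimizer: f(x) = ||x||^4.
grad = lambda x: 4.0 * (x @ x) * x
hess = lambda x: 4.0 * (x @ x) * np.eye(len(x)) + 8.0 * np.outer(x, x)
x = regularized_newton(grad, hess, np.array([1.0, 1.0]))
```

A pure Newton step would fail near the minimizer here, since the Hessian of ‖x‖⁴ is zero at the origin; the ‖g‖-scaled regularization keeps every step well defined.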
16.
We present two algorithms to compute m-fold hypergeometric solutions of linear recurrence equations for the classical shift case and for the q-case, respectively. The first is an m-fold generalization and q-generalization of the algorithm by van Hoeij (Appl Algebra Eng Commun Comput 17:83–115, 2005; J. Pure Appl Algebra 139:109–131, 1998) for recurrence equations. The second is a combination of an improved version of the algorithms by Petkovšek (Discrete Math
180:3–22, 1998; J Symb Comput 14(2–3):243–264, 1992) for recurrence and q-recurrence equations and the m-fold algorithm from Petkovšek and Salvy (ISSAC 1993 Proceedings, pp 27–33, 1993) for recurrence equations. We will refer to the classical algorithms as van Hoeij or Petkovšek respectively. To formulate our ideas, we first need to introduce an adapted version of an m-fold Newton polygon and its characteristic polynomials for the classical case and q-case, and to prove the important properties in this case. Using the data from the Newton polygon, we are able to present
efficient m-fold versions of the van Hoeij and Petkovšek algorithms for the classical shift case and for the q-case, respectively. Furthermore, we show how one can use the Newton polygon and our characteristic polynomials to conclude
for which m ∈ ℕ there might be an m-fold hypergeometric solution at all. Again by using the information obtained from the Newton polygon, the presentation of
the q-Petkovšek algorithm can be simplified and streamlined. Finally, we give timings for the ‘classical’ q-Petkovšek, our q-van Hoeij and our modified q-Petkovšek algorithm on some classes of problems and we present a Maple implementation of the m-fold algorithms for the q-case.
17.
Maria Gonzalez-Lima Hua Wei Henry Wolkowicz 《Computational Optimization and Applications》2009,44(2):213-247
This paper studies a primal–dual interior/exterior-point path-following approach for linear programming that is motivated
on using an iterative solver rather than a direct solver for the search direction. We begin with the usual perturbed primal–dual
optimality equations. Under nondegeneracy assumptions, this nonlinear system is well-posed, i.e. it has a nonsingular Jacobian
at optimality and is not necessarily ill-conditioned as the iterates approach optimality. Assuming that a basis matrix (easily
factorizable and well-conditioned) can be found, we apply a simple preprocessing step to eliminate both the primal and dual
feasibility equations. This results in a single bilinear equation that maintains the well-posedness property. Sparsity is
maintained. We then apply either a direct solution method or an iterative solver (within an inexact Newton framework) to solve
this equation. Since the linearization is well posed, we use affine scaling and do not maintain nonnegativity once we are
close enough to the optimum, i.e. we switch to a pure Newton step technique. In addition, we correctly identify some of the primal and dual variables that converge to 0 and delete them (purify
step).
We test our method with random nondegenerate problems and problems from the Netlib set, and we compare it with the standard
Normal Equations NEQ approach. We use a heuristic to find the basis matrix. We show that our method is efficient for large,
well-conditioned problems. It is slower than NEQ on ill-conditioned problems, but it yields higher accuracy solutions.
18.
We derive a numerical scheme to compute invariant manifolds for time-variant discrete dynamical systems, i.e., nonautonomous
difference equations. Our universally applicable method is based on a truncated Lyapunov–Perron operator and computes invariant
manifolds using a system of nonlinear algebraic equations which can be solved both locally using (nonsmooth) inexact Newton,
and globally using continuation algorithms. Compared to other algorithms, our approach is quite flexible, since it captures
time-dependent, nonsmooth, noninvertible or implicit equations and enables us to tackle the full hierarchy of strongly stable,
stable and center-stable manifolds, as well as their unstable counterparts. Our results are illustrated using a test example
and are applied to a population dynamical model and the Hénon map. Finally, we discuss a linearly implicit Euler–Bubnov–Galerkin
discretization of a reaction diffusion equation in order to approximate its inertial manifold.
19.
We propose a new truncated Newton method for large scale unconstrained optimization, where a Conjugate Gradient (CG)-based
technique is adopted to solve Newton’s equation. In the current iteration, the Krylov method computes a pair of search directions:
the first approximates the Newton step of the quadratic convex model, while the second is a suitable negative curvature direction.
A test based on the quadratic model of the objective function is used to select the most promising between the two search
directions. Both the latter selection rule and the CG stopping criterion for approximately solving Newton’s equation, strongly
rely on conjugacy conditions. An appropriate linesearch technique is adopted for each search direction: a nonmonotone stabilization
is used with the approximate Newton step, while an Armijo type linesearch is used for the negative curvature direction. The
proposed algorithm is both globally and superlinearly convergent to stationary points satisfying second order necessary conditions.
We carry out extensive numerical experiments to test our proposal.
20.
A new algorithm is proposed for computing the intersection of two plane curves given in rational parametric form. It relies
on the Ehrlich–Aberth iteration complemented with some computational tools like the properties of Sylvester and Bézout matrices,
a stopping criterion based on the concept of pseudo-zero, an inclusion result and the choice of initial approximations based
on the Newton polygon. The algorithm is implemented as a Fortran 95 module. Numerical experiments performed with a wide set
of test problems show better robustness and stability than the Manocha–Demmel approach based on eigenvalue computation. In
fact, the algorithm provides better approximations in terms of the relative error and performs successfully in many critical
cases where the eigenvalue computation fails.
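The Ehrlich–Aberth iteration at the core of this entry refines all roots of a polynomial simultaneously, correcting each Newton step by the mutual repulsion of the other approximations. A minimal sketch on an invented cubic with known roots; the initial guesses are placed near the roots for simplicity, whereas practical implementations derive them from coefficient bounds:

```python
import numpy as np

def ehrlich_aberth(coeffs, z0, iters=100, tol=1e-12):
    """Ehrlich-Aberth simultaneous root refinement for a polynomial
    given by its coefficients (highest degree first). Each update is
    a Newton step divided by an Aberth correction that repels the
    current approximation from all the others."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    z = np.array(z0, dtype=complex)
    n = len(z)
    for _ in range(iters):
        newton = p(z) / dp(z)
        corr = np.array([np.sum(1.0 / (z[i] - np.delete(z, i)))
                         for i in range(n)])
        dz = newton / (1.0 - newton * corr)
        z -= dz
        if np.max(np.abs(dz)) < tol:
            break
    return z

# Invented cubic (z - 1)(z - 2)(z - 3) with guesses near each root.
roots = ehrlich_aberth([1.0, -6.0, 11.0, -6.0],
                       [0.5 + 0.3j, 1.8 - 0.2j, 3.3 + 0.4j])
```

Unlike one-root-at-a-time Newton with deflation, all approximations converge together, which is one reason the method is robust on clustered and near-multiple roots.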