20 similar documents found (search time: 484 ms)
1.
An Adaptive Regularisation framework using Cubics (ARC) was proposed for unconstrained optimization and analysed in Cartis,
Gould and Toint (Part I, Math Program, 2009), generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University
of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser, Deuflhard
and Erdmann (Optim Methods Softw 22(3):413–431, 2007). In this companion paper, we further the analysis by providing worst-case
global iteration complexity bounds for ARC and a second-order variant to achieve approximate first-order, and for the latter
second-order, criticality of the iterates. In particular, the second-order ARC algorithm requires at most O(e-3/2){\mathcal{O}(\epsilon^{-3/2})} iterations, or equivalently, function- and gradient-evaluations, to drive the norm of the gradient of the objective below
the desired accuracy e{\epsilon}, and O(e-3){\mathcal{O}(\epsilon^{-3})} iterations, to reach approximate nonnegative curvature in a subspace. The orders of these bounds match those proved for Algorithm
3.3 of Nesterov and Polyak which minimizes the cubic model globally on each iteration. Our approach is more general in that
it allows the cubic model to be solved only approximately and may employ approximate Hessians.
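As a concrete illustration of the idea (not the paper's exact algorithm), the sketch below implements a bare-bones adaptive cubic regularisation loop in which the cubic model is minimized globally only along the steepest-descent direction, which reduces the subproblem to finding the positive root of a quadratic. The function names and the acceptance thresholds 0.1/0.9 are illustrative choices, not the constants analysed by Cartis, Gould and Toint.

```python
import numpy as np

def cubic_cauchy_step(g, H, sigma):
    # Globally minimize the cubic model m(s) = g.s + s.H.s/2 + (sigma/3)||s||^3
    # along the steepest-descent ray s = -t*g: m'(t) = -||g||^2 + t g.H.g
    # + sigma t^2 ||g||^3 = 0.  The positive root is written in the
    # cancellation-free form 2||g||^2 / (b + sqrt(b^2 + 4 a ||g||^2)).
    gn = np.linalg.norm(g)
    a = sigma * gn**3
    b = g @ H @ g
    t = 2.0 * gn**2 / (b + np.sqrt(b * b + 4.0 * a * gn**2))
    return -t * g

def arc_minimize(f, grad, hess, x, sigma=1.0, tol=1e-8, max_iter=500):
    # Minimal adaptive-regularisation loop; sigma plays the role of the
    # regularisation weight and is doubled/halved by a trust-region-style
    # ratio test (illustrative update rules).
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        s = cubic_cauchy_step(g, H, sigma)
        model_decrease = -(g @ s + 0.5 * s @ H @ s
                           + sigma / 3.0 * np.linalg.norm(s)**3)
        rho = (f(x) - f(x + s)) / model_decrease
        if rho > 0.1:                      # sufficient decrease: accept
            x = x + s
        if rho > 0.9:                      # very successful: relax sigma
            sigma = max(0.5 * sigma, 1e-10)
        elif rho < 0.1:                    # unsuccessful: regularise more
            sigma *= 2.0
    return x
```

On a strictly convex quadratic the ratio test always succeeds, sigma shrinks towards its floor, and the step approaches an exact steepest-descent line search.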
2.
Index information algorithm with local tuning for solving multidimensional global optimization problems with multiextremal constraints
Multidimensional optimization problems where the objective function and the constraints are multiextremal non-differentiable
Lipschitz functions (with unknown Lipschitz constants) and the feasible region is a finite collection of robust nonconvex
subregions are considered. Both the objective function and the constraints may be partially defined. To solve such problems, an algorithm is proposed that uses Peano space-filling curves and the index scheme to reduce the original problem to a Hölder
one-dimensional one. Local tuning on the behaviour of the objective function and constraints is used during the work of the
global optimization procedure in order to accelerate the search. The method neither uses penalty coefficients nor additional
variables. Convergence conditions are established. Numerical experiments confirm the good performance of the technique.
Received: April 2002 / Accepted: December 2002
Published online: March 21, 2003
This research was supported by the following grants: FIRB RBNE01WBBB, FIRB RBAU01JYPN, and RFBR 01-01-00587.
Key Words. global optimization – multiextremal constraints – local tuning – index approach
3.
In this paper, two types of Levitin–Polyak well-posedness of vector equilibrium problems with variable domination structures
are investigated. Criteria and characterizations for two types of Levitin–Polyak well-posedness of vector equilibrium problems
are shown. Moreover, by virtue of a gap function for vector equilibrium problems, the equivalent relations between the Levitin–Polyak
well-posedness for an optimization problem and the Levitin–Polyak well-posedness for a vector equilibrium problem are obtained.
This research was partially supported by the National Natural Science Foundation of China (Grant number: 60574073) and Natural
Science Foundation Project of CQ CSTC (Grant number: 2007BB6117).
4.
We modify the first order algorithm for convex programming described by Nesterov in his book (Introductory lectures on convex optimization. A basic course. Kluwer, Boston, 2004). In his algorithms, Nesterov makes explicit use of a Lipschitz constant L for the function gradient, which is either assumed known (Nesterov in Introductory lectures on convex optimization. A basic course. Kluwer, Boston, 2004) or is estimated by an adaptive procedure (Nesterov 2007). We eliminate the use of L at the cost of an extra imprecise line search, and obtain an algorithm which keeps the optimal complexity properties and also inherits the global convergence properties of the steepest descent method for general continuously differentiable optimization. Besides this, we develop an adaptive procedure for estimating a strong convexity constant for the function. Numerical tests on a limited set of toy problems show an improvement in performance compared with Nesterov’s original algorithms.
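The general idea of running an accelerated method without a known Lipschitz constant can be sketched as follows. This is a generic FISTA-style backtracking scheme, not the authors' imprecise line search; the function name and constants are illustrative.

```python
import numpy as np

def agm_backtracking(f, grad, x0, L0=1.0, tol=1e-8, max_iter=2000):
    # Accelerated gradient method with a backtracking estimate of the
    # gradient's Lipschitz constant L, so no a priori L is needed.
    x, y, L, t = x0.copy(), x0.copy(), L0, 1.0
    for _ in range(max_iter):
        g = grad(y)
        if np.linalg.norm(g) < tol:
            break
        # Grow L until the quadratic upper model at y majorizes f
        # at the trial point y - g/L.
        while True:
            x_new = y - g / L
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= 2.0
        # Standard Nesterov momentum update.
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
        L *= 0.5  # let the estimate shrink again (adaptive behaviour)
    return x
```

Halving L after each outer iteration lets the estimate track local curvature, at the cost of occasional extra function evaluations in the backtracking loop.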
5.
Hongchao Zhang 《Computational Optimization and Applications》2014,57(1):27-43
A new nonmonotone algorithm is proposed and analyzed for unconstrained nonlinear optimization. The nonmonotone techniques applied in this algorithm are based on the estimate sequence proposed by Nesterov (Introductory Lectures on Convex Optimization: A Basic Course, 2004) for convex optimization. Under proper assumptions, global convergence of this algorithm is established for minimizing general nonlinear objective functions with Lipschitz continuous derivatives. For convex objective functions, this algorithm maintains the optimal convergence rate of convex optimization. In numerical experiments, this algorithm is specified by employing safe-guarded nonlinear conjugate gradient search directions. Numerical results show that the nonmonotone algorithm performs significantly better than the corresponding monotone algorithm for solving the unconstrained optimization problems in the CUTEr (Bongartz et al. in ACM Trans. Math. Softw. 21:123–160, 1995) library.
6.
In this paper, global optimization (GO) Lipschitz problems are considered where the multi-dimensional multiextremal objective
function is determined over a hyperinterval. An efficient one-dimensional GO method using local tuning on the behavior of
the objective function is generalized to the multi-dimensional case by the diagonal approach using two partition strategies.
Global convergence conditions are established for the obtained diagonal geometric methods. Results of a wide numerical comparison
show a strong acceleration reached by the new methods working with estimates of the local Lipschitz constants over different
subregions of the search domain in comparison with the traditional approach.
Received July 13, 2001 / Revised version received March 14, 2002 / Published online October 29, 2002
Mathematics Subject Classification (1991): 65K05, 90C30
Correspondence to: Yaroslav D. Sergeyev
7.
A new family of conjugate gradient methods
In this paper we develop a new class of conjugate gradient methods for unconstrained optimization problems. A new nonmonotone line search technique is proposed to guarantee the global convergence of these conjugate gradient methods under some mild conditions. In particular, Polak–Ribière–Polyak and Liu–Storey conjugate gradient methods are special cases of the new class of conjugate gradient methods. By estimating the local Lipschitz constant of the derivative of objective functions, we can find an adequate step size and substantially decrease the function evaluations at each iteration. Numerical results show that these new conjugate gradient methods are effective in minimizing large-scale non-convex non-quadratic functions.
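For reference, a minimal sketch of the Polak–Ribière–Polyak direction mentioned above, here in its restarted PRP+ form and paired with a plain Armijo backtracking search (the paper's nonmonotone line search is replaced by this simpler, standard rule for illustration; names and constants are mine):

```python
import numpy as np

def prp_plus_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    # Conjugate gradient with the PRP+ formula
    # beta = max(0, g_new.(g_new - g_old) / ||g_old||^2).
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking along the descent direction d.
        alpha, c1 = 1.0, 1e-4
        fx, slope = f(x), g @ d
        while f(x + alpha * d) > fx + c1 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PRP+ restart
        d = -g_new + beta * d
        if g_new @ d >= 0:          # safeguard: force a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

The `max(0, .)` truncation is the usual safeguard that restores global convergence of PRP under inexact line searches.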
8.
In this paper we construct the linear support vector machine (SVM) based on the nonlinear rescaling (NR) methodology (see
[Polyak in Math Program 54:177–222, 1992; Polyak in Math Program Ser A 92:197–235, 2002; Polyak and Teboulle in Math Program
76:265–284, 1997] and references therein). The formulation of the linear SVM based on the NR method leads to an algorithm
which reduces the number of support vectors without compromising the classification performance compared to the linear soft-margin
SVM formulation. The NR algorithm computes both the primal and the dual approximation at each step. The dual variables associated
with the given data-set provide important information about each data point and play the key role in selecting the set of
support vectors. Experimental results on ten benchmark classification problems show that the NR formulation is feasible. The
quality of discrimination, in most instances, is comparable to that of the linear soft-margin SVM, while the number of support vectors in several instances was substantially reduced.
9.
In this paper, we consider Levitin–Polyak well-posedness of parametric generalized equilibrium problems and optimization problems
with generalized equilibrium constraints. Some criteria for these types of well-posedness are derived. In particular, under
certain conditions, we show that generalized Levitin–Polyak well-posedness of a parametric generalized equilibrium problem
is equivalent to the nonemptiness and compactness of its solution set. Finally, for an optimization problem with generalized
equilibrium constraints, we also obtain that, under certain conditions, Levitin–Polyak well-posedness in the generalized sense
is equivalent to the nonemptiness and compactness of its solution set.
10.
In this article we develop a global optimization algorithm for quasiconvex programming where the objective function is a Lipschitz
function which may have “flat parts”. We adapt the Extended Cutting Angle method to quasiconvex functions, which significantly reduces
the number of iterations and objective function evaluations, and consequently the total computing time. Applications of such
an algorithm to mathematical programming problems in which the objective function is derived from economic systems and location
problems are described. Computational results are presented.
11.
We study a simple, yet unconventional approach to the global optimization of unconstrained nonlinear least-squares problems.
Non-convexity of the sum-of-squares objective in parameter estimation problems often leads to the presence of multiple
local minima. Here, we focus on the spatial branch-and-bound algorithm for global optimization and experiment with one of
its implementations, BARON (Sahinidis in J. Glob. Optim. 8(2):201–205, 1996), to solve parameter estimation problems. Through the explicit use of first-order optimality conditions, we are able to significantly
expedite convergence to global optimality by strengthening the relaxation of the lower-bounding problem that forms a crucial
part of the spatial branch-and-bound technique. We analyze the results obtained from 69 test cases taken from the statistics
literature and discuss the successes and limitations of the proposed idea. In addition, we discuss software implementation
for the automation of our strategy.
12.
In this paper, we first discuss how the nearly exact (NE) method proposed by Moré and Sorensen [14] for solving trust region
(TR) subproblems can be modified to solve large-scale “low-rank” TR subproblems efficiently. Our modified algorithm completely
avoids computation of Cholesky factorizations by instead relying primarily on the Sherman–Morrison–Woodbury formula for computing
inverses of “diagonal plus low-rank” type matrices. We also implement a specific version of the modified log-barrier (MLB)
algorithm proposed by Polyak [17] where the generated log-barrier subproblems are solved by a trust region method. The corresponding
direction finding TR subproblems are of the low-rank type and are then solved by our modified NE method. We finally discuss
the computational results of our implementation of the MLB method and its comparison with a version of LANCELOT [5] based
on a collection extracted from CUTEr [12] of nonlinear programming problems with simple bound constraints.
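The abstract above states the key computational device explicitly: the Sherman–Morrison–Woodbury formula applied to "diagonal plus low-rank" matrices. A self-contained sketch (function name mine) of solving \((D + UV^T)x = b\) this way, touching only a small \(k \times k\) system instead of a Cholesky factorization of the full matrix:

```python
import numpy as np

def dlr_solve(d, U, V, b):
    # Solve (D + U V^T) x = b with D = diag(d), U, V of shape (n, k),
    # via Sherman-Morrison-Woodbury:
    # (D + U V^T)^{-1} = D^{-1} - D^{-1} U (I + V^T D^{-1} U)^{-1} V^T D^{-1}.
    Dinv_b = b / d                     # D^{-1} b, elementwise
    Dinv_U = U / d[:, None]            # D^{-1} U, scaled rows
    k = U.shape[1]
    small = np.eye(k) + V.T @ Dinv_U   # only a k-by-k linear system
    return Dinv_b - Dinv_U @ np.linalg.solve(small, V.T @ Dinv_b)
```

For rank k much smaller than n, the cost is O(nk^2) per solve rather than the O(n^3) of a dense factorization.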
13.
Napsu Karmitsa Mario Tanaka Filho José Herskovits 《Journal of Optimization Theory and Applications》2011,148(3):528-549
Nowadays, solving nonsmooth (not necessarily differentiable) optimization problems plays a very important role in many areas
of industrial applications. Most of the algorithms developed so far deal only with nonsmooth convex functions. In this paper,
we propose a new algorithm for solving nonsmooth optimization problems that are not assumed to be convex. The algorithm combines
the traditional cutting plane method with some features of bundle methods, and the search direction calculation of the feasible direction interior point algorithm (Herskovits, J. Optim. Theory Appl. 99(1):121–146, 1998). The proposed algorithm generates a sequence of interior points of the epigraph of the objective function. The accumulation
points of this sequence are solutions to the original problem. We prove the global convergence of the method for locally Lipschitz
continuous functions and give some preliminary results from numerical experiments.
14.
In the paper, a global optimization problem is considered where the objective function f(x) is univariate, black-box, and its first derivative f′(x) satisfies the Lipschitz condition with an unknown Lipschitz constant K. In the literature, there exist methods solving this problem by using an a priori given estimate of K, its adaptive estimates, and adaptive estimates of local Lipschitz constants. Algorithms working with a number of Lipschitz constants for f′(x) chosen from a set of possible values are not known, despite the fact that a method working in this way with Lipschitz objective functions, DIRECT, was proposed in 1993. A new geometric method extending these ideas to objective functions with a Lipschitz derivative is introduced and studied in this paper. Numerical experiments executed on a number of test functions show that, as expected, the use of derivatives yields an acceleration in comparison with the DIRECT algorithm.
This research was supported by the RFBR grant 07-01-00467-a and the grant 4694.2008.9 for supporting the leading research
groups awarded by the President of the Russian Federation.
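The geometric idea underlying this family of methods goes back to the classical Piyavskii/Shubert scheme: Lipschitz information gives a piecewise lower bound ("cones") on the objective, and the next trial point is placed at the minimizer of that bound. The sketch below shows the baseline fixed-L version for a Lipschitz function f; the methods in the paper instead work with the derivative and estimate the constants adaptively, so this is only the simplest instance of the idea (function name mine).

```python
import numpy as np

def piyavskii(f, a, b, L, n_iter=200):
    # Piyavskii/Shubert minimization on [a, b] with a *known* Lipschitz
    # constant L.  Over each subinterval [x_i, x_{i+1}] the two cones
    # f(x_i) - L(x - x_i) and f(x_{i+1}) - L(x_{i+1} - x) intersect at a
    # point whose value is a valid lower bound on f over that interval.
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(n_iter):
        order = np.argsort(xs)
        xs = [xs[i] for i in order]
        fs = [fs[i] for i in order]
        best_i, best_lb = 0, np.inf
        for i in range(len(xs) - 1):
            lb = 0.5 * (fs[i] + fs[i + 1]) - 0.5 * L * (xs[i + 1] - xs[i])
            if lb < best_lb:
                best_lb, best_i = lb, i
        # Next trial point: minimizer of the piecewise-linear lower bound.
        x_new = (0.5 * (xs[best_i] + xs[best_i + 1])
                 + (fs[best_i] - fs[best_i + 1]) / (2.0 * L))
        xs.append(x_new)
        fs.append(f(x_new))
    j = int(np.argmin(fs))
    return xs[j], fs[j]
```

Overestimating L keeps the bound valid but slows refinement, which is exactly the motivation for the adaptive and local estimates discussed in the abstract.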
15.
In this paper, we consider Levitin–Polyak type well-posedness for a general constrained vector optimization problem. We introduce
several types of (generalized) Levitin–Polyak well-posednesses. Criteria and characterizations for these types of well-posednesses
are given. Relations among these types of well-posedness are investigated. Finally, we consider convergence of a class of
penalty methods under the assumption of a type of generalized Levitin–Polyak well-posedness.
16.
In this paper we consider the general cone programming problem, and propose primal-dual convex (smooth and/or nonsmooth) minimization
reformulations for it. We then discuss first-order methods suitable for solving these reformulations, namely, Nesterov’s optimal
method (Nesterov in Doklady AN SSSR 269:543–547, 1983; Math Program 103:127–152, 2005), Nesterov’s smooth approximation scheme
(Nesterov in Math Program 103:127–152, 2005), and Nemirovski’s prox-method (Nemirovski in SIAM J Opt 15:229–251, 2005), and
propose a variant of Nesterov’s optimal method which outperformed the original in our computational experiments. We
also derive iteration-complexity bounds for these first-order methods applied to the proposed primal-dual reformulations of
the cone programming problem. The performance of these methods is then compared using a set of randomly generated linear programming
and semidefinite programming instances. We also compare the approach based on the variant of Nesterov’s optimal method with
the low-rank method proposed by Burer and Monteiro (Math Program Ser B 95:329–357, 2003; Math Program 103:427–444, 2005) for
solving a set of randomly generated SDP instances.
17.
Quoc Tran-Dinh 《Computational Optimization and Applications》2017,66(3):425-451
We propose an adaptive smoothing algorithm based on Nesterov’s smoothing technique in Nesterov (Math Prog 103(1):127–152, 2005) for solving “fully” nonsmooth composite convex optimization problems. Our method combines Nesterov’s accelerated proximal gradient scheme with a new homotopy strategy for the smoothness parameter. By an appropriate choice of smoothing functions, we develop a new algorithm that has \(\mathcal{O}\left(\frac{1}{\varepsilon}\right)\) worst-case iteration-complexity while preserving the same per-iteration complexity as Nesterov’s method, and allows the smoothness parameter to be updated automatically at each iteration. Then, we customize our algorithm to solve four special cases that cover various applications. We also specify our algorithm to solve constrained convex optimization problems and show its convergence guarantee on a primal sequence of iterates. We demonstrate our algorithm through three numerical examples and compare it with other related algorithms.
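To make the smoothing-plus-homotopy idea concrete, here is a deliberately simplified toy: an \(\ell_1\) regression objective is Huber-smoothed with parameter \(\mu\), minimized by plain gradient descent, and \(\mu\) is halved between stages. The paper uses an accelerated scheme with an automatic \(\mu\) update; everything below (name, stage counts, fixed halving) is an illustrative assumption.

```python
import numpy as np

def smoothed_l1_regression(A, b, mu0=1.0, n_stages=20, iters=200):
    # Homotopy on the smoothing parameter: minimize a Huber-smoothed
    # ||Ax - b||_1 by gradient descent, then halve mu and repeat.
    x = np.zeros(A.shape[1])
    L_A = np.linalg.norm(A, 2) ** 2     # gradient Lipschitz const. is L_A/mu
    mu = mu0
    for _ in range(n_stages):
        step = mu / L_A                  # 1 / (L_A / mu)
        for _ in range(iters):
            r = A @ x - b
            # Derivative of the Huber penalty: clip(r/mu, -1, 1).
            g = A.T @ np.clip(r / mu, -1.0, 1.0)
            x -= step * g
        mu *= 0.5
    return x
```

As \(\mu \to 0\) the smoothed objective approaches \(\|Ax-b\|_1\); the homotopy keeps the step size (which scales with \(\mu\)) large in early stages and only pays for small steps once the iterate is already close.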
18.
This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) [15] and Cartis et al. (2010) [3] show that at most \(\mathcal{O}(\epsilon^{-3})\) iterations may have to be performed for finding an iterate which is within \(\epsilon\) of satisfying second-order optimality conditions. We first show that this bound can be derived for a version of the algorithm which only uses one-dimensional global optimization of the cubic model, and that it is sharp. We next consider the standard trust-region method and show that a bound of the same type may also be derived for this method, and that it is also sharp in some cases. We conclude by showing that a comparison of the bounds on the worst-case behaviour of the cubic regularization and trust-region algorithms favours the first of these methods.
19.
An optimal method for stochastic composite optimization
Guanghui Lan 《Mathematical Programming》2012,133(1-2):365-397
This paper considers an important class of convex programming (CP) problems, namely, stochastic composite optimization (SCO), whose objective function is given by the summation of general nonsmooth and smooth stochastic components. Since SCO covers non-smooth, smooth and stochastic CP as certain special cases, a valid lower bound on the rate of convergence for solving these problems is known from the classic complexity theory of convex programming. Note, however, that optimization algorithms achieving this lower bound had not previously been developed. In this paper, we show that the simple mirror-descent stochastic approximation method exhibits the best-known rate of convergence for solving these problems. Our major contribution is to introduce the accelerated stochastic approximation (AC-SA) algorithm based on Nesterov’s optimal method for smooth CP (Nesterov in Doklady AN SSSR 269:543–547, 1983; Nesterov in Math Program 103:127–152, 2005), and to show that the AC-SA algorithm can achieve the aforementioned lower bound on the rate of convergence for SCO. To the best of our knowledge, it is also the first universally optimal algorithm in the literature for solving non-smooth, smooth and stochastic CP problems. We illustrate the significant advantages of the AC-SA algorithm over existing methods in the context of solving a special but broad class of stochastic programming problems.
20.
In this paper, the global optimization problem with a multiextremal objective function satisfying the Lipschitz condition over a hypercube is considered. An algorithm that belongs to the class of information methods introduced by R.G. Strongin is proposed. Knowledge of the Lipschitz constant is not assumed. Local tuning on the behavior of the objective function and a new technique, named the local improvement, are used in order to accelerate the search. Two methods are presented: the first deals with one-dimensional problems and the second with multidimensional ones (using Peano-type space-filling curves to reduce the dimension of the problem). Convergence conditions for both algorithms are given. Numerical experiments executed on more than 600 functions show quite promising performance of the new techniques.