Similar Literature
20 similar documents found.
1.
This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. In a first stage, this objective led us to revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex functions having a moderately flat profile near the set of minimizers (such as functions with Hölderian growth). A counterexample shows that the equivalence no longer holds for extremely flat functions. This fact reveals the relevance of an approach based on the KL inequality. In a second stage, we show how KL inequalities can in turn be employed to compute new complexity bounds for a wealth of descent methods for convex problems. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence in the spirit of the famous majorant method of Kantorovich. Our result applies to a very simple abstract scheme that covers a wide class of descent methods. As a byproduct of our study, we also provide new results for the globalization of KL inequalities in the convex framework. Our main results inaugurate a simple method: derive an error bound, compute the desingularizing function whenever possible, identify essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Our method is illustrated through projection methods for feasibility problems and through the famous iterative shrinkage-thresholding algorithm (ISTA), for which we show that the complexity bound is of the form \(O(q^{k})\), where the constituents of the bound depend only on error bound constants obtained for an arbitrary least squares objective with \(\ell^1\) regularization.
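As a concrete illustration of the ISTA scheme named above, here is a minimal sketch for \(\ell^1\)-regularized least squares; the fixed step size \(1/L\), the iteration count, and all identifiers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, num_iters=500):
    """ISTA for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)           # forward (gradient) step direction
        x = soft_threshold(x - grad / L, lam / L)  # backward (proximal) step
    return x
```

The \(O(q^{k})\) complexity bound discussed in the abstract corresponds to geometric decay of the objective gap along such iterates once an error bound holds for the objective.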

2.
Tao Ting, Pan Shaohua, Bi Shujun. Journal of Global Optimization, 2021, 81(4): 991-1017.

This paper is concerned with the squared F(robenius)-norm regularized factorization form of noisy low-rank matrix recovery problems. Under a suitable assumption on the restricted condition number of the Hessian matrix of the loss function, we establish an error bound to the true matrix for the non-strict critical points with rank not more than that of the true matrix. Then, for the squared F-norm regularized factorized least squares loss function, we establish its KL property of exponent 1/2 on the global optimal solution set under the noisy and full-sample setting, and establish this property at a certain class of its critical points under the noisy and partial-sample setting. These theoretical findings are also confirmed by solving the squared F-norm regularized factorization problem with an accelerated alternating minimization method.
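For orientation, the factorization form in question is commonly written as below; the sampling operator \(\mathcal{A}\), the observation vector \(b\), and all symbols are assumed notation rather than quotations from the paper:
\[
\min_{U \in \mathbb{R}^{m \times r},\; V \in \mathbb{R}^{n \times r}} \; \frac{1}{2}\,\big\|\mathcal{A}(UV^{\mathsf T}) - b\big\|^2 + \frac{\lambda}{2}\Big(\|U\|_F^2 + \|V\|_F^2\Big),
\]
where the squared F-norm regularizer is the standard smooth surrogate for the nuclear norm of the product, since \(\|X\|_* = \min\{\tfrac{1}{2}(\|U\|_F^2 + \|V\|_F^2) : UV^{\mathsf T} = X\}\) for \(r \ge \operatorname{rank}(X)\).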


3.
We consider a line-search-based proximal gradient algorithm, together with its convergence analysis, for convex optimization problems whose objective is the sum of a smooth loss function and a nonsmooth regularizer. We prove that the algorithm is $R$-linearly convergent under a local Lipschitz continuity condition on the gradient, and, for the case where the nonsmooth part is the sparse group LASSO regularizer, we prove that an error bound condition holds and obtain a linear convergence rate. Finally, numerical experiments verify the effectiveness of the method.
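A minimal sketch of the kind of line-search-based proximal gradient step described above; the sufficient-decrease test and all parameters are generic assumptions, not the paper's exact scheme.

```python
import numpy as np

def prox_grad_linesearch(f, grad_f, prox_g, x0, t0=1.0, beta=0.5, num_iters=200):
    """Backtracking proximal gradient for min f(x) + g(x), f smooth, g nonsmooth.

    prox_g(v, t) must return argmin_u g(u) + ||u - v||^2 / (2 t).
    """
    x, t = x0, t0
    for _ in range(num_iters):
        gx = grad_f(x)
        while True:
            z = prox_g(x - t * gx, t)
            d = z - x
            # accept the step once the quadratic model upper-bounds f at z
            if f(z) <= f(x) + gx @ d + d @ d / (2.0 * t):
                break
            t *= beta                      # otherwise shrink the step size
        x = z
    return x
```

For the sparse group LASSO case mentioned in the abstract, `prox_g` would be the corresponding (group) soft-thresholding operator.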

4.
In this paper, we consider the computation of a rigorous lower error bound for the optimal value of convex optimization problems. A discussion of large-scale problems, degenerate problems, and quadratic programming problems is included. The parameters which define the convex constraints and the convex objective functions are allowed to be uncertain and to vary between given lower and upper bounds. The error bound is verified for the family of convex optimization problems which correspond to these uncertainties. It can be used to perform a rigorous sensitivity analysis in convex programming, provided the width of the uncertainties is not too large. Branch and bound algorithms can be made reliable by using such rigorous lower bounds.

5.
Regularized empirical risk minimization, including support vector machines, plays an important role in machine learning theory. In this paper, regularized pairwise learning (RPL) methods based on kernels are investigated. One example is regularized minimization of the error entropy loss, which has recently attracted quite some interest from the viewpoint of consistency and learning rates. This paper shows that such RPL methods, and also their empirical bootstrap, additionally have good statistical robustness properties if the loss function and the kernel are chosen appropriately. We treat two cases of particular interest: (i) a bounded and non-convex loss function and (ii) an unbounded convex loss function satisfying a certain Lipschitz-type condition.
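Schematically, a kernel-based RPL estimator minimizes a pairwise empirical risk plus a penalty; the display below is generic assumed notation, not the paper's:
\[
f_{\lambda} \in \arg\min_{f \in H}\; \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} L\big(X_i, Y_i, X_j, Y_j, f(X_i, X_j)\big) + \lambda\, \|f\|_H^2,
\]
where \(H\) is the reproducing kernel Hilbert space of the chosen kernel and \(L\) is a pairwise loss such as the error entropy loss mentioned above.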

6.
We consider an optimization reformulation approach for the generalized Nash equilibrium problem (GNEP) that uses the regularized gap function of a quasi-variational inequality (QVI). The regularized gap function for a QVI is in general not differentiable, but only directionally differentiable. Moreover, a simple condition has yet to be established under which any stationary point of the regularized gap function solves the QVI. We tackle these issues for the GNEP in which the shared constraints are given by linear equalities, while the individual constraints are given by convex inequalities. First, we formulate the minimization problem involving the regularized gap function and show its equivalence to the GNEP. Next, we establish the differentiability of the regularized gap function and show that any stationary point of the minimization problem solves the original GNEP under suitable assumptions. Then, by using a barrier technique, we propose an algorithm that sequentially solves minimization problems obtained from GNEPs with the shared equality constraints only. Further, we discuss the case of shared inequality constraints and present an algorithm that transforms the inequality constraints into equality constraints by means of slack variables. We present some results of numerical experiments to illustrate the proposed approach.
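For reference, the regularized gap function of a QVI with mapping \(F\) and constraint map \(K(\cdot)\) is usually defined, for a parameter \(\alpha > 0\), as
\[
g_\alpha(x) = \max_{y \in K(x)} \Big\{ \langle F(x),\, x - y \rangle - \frac{\alpha}{2}\,\|x - y\|^2 \Big\},
\]
so that \(g_\alpha(x) \ge 0\) whenever \(x \in K(x)\), and \(x\) solves the QVI exactly when \(x \in K(x)\) and \(g_\alpha(x) = 0\); the notation here is assumed, not quoted from the paper.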

7.
The strong conical hull intersection property and bounded linear regularity are properties of a collection of finitely many closed convex intersecting sets in Euclidean space. These fundamental notions occur in various branches of convex optimization (constrained approximation, convex feasibility problems, and linear inequalities, for instance). It is shown that the standard constraint qualification from convex analysis implies bounded linear regularity, which in turn yields the strong conical hull intersection property. Jameson’s duality for two cones, which relates bounded linear regularity to property (G), is re-derived and refined. For polyhedral cones, a statement dual to Hoffman’s error bound result is obtained. A sharpening of a result on error bounds for convex inequalities by Auslender and Crouzeix is presented. Finally, for two subspaces, property (G) is quantified by the angle between the subspaces.
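To fix ideas, the standard definition (restated here in assumed notation): a family \(\{C_1, \dots, C_m\}\) of closed convex sets with nonempty intersection is boundedly linearly regular if for every bounded set \(B\) there is a constant \(\kappa_B > 0\) with
\[
d\Big(x,\ \bigcap_{i=1}^{m} C_i\Big) \le \kappa_B \max_{1 \le i \le m} d(x, C_i) \qquad \text{for all } x \in B,
\]
where \(d(\cdot, C)\) denotes the Euclidean distance to \(C\).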

8.
In this paper we present a robust conjugate duality theory for convex programming problems in the face of data uncertainty within the framework of robust optimization, extending the powerful conjugate duality technique. We first establish robust strong duality between an uncertain primal parameterized convex programming model problem and its uncertain conjugate dual by proving strong duality between the deterministic robust counterpart of the primal model and the optimistic counterpart of its dual problem under a regularity condition. This regularity condition is not only sufficient for robust duality but also necessary for it whenever robust duality holds for every linear perturbation of the objective function of the primal model problem. More importantly, we show that robust strong duality always holds for partially finite convex programming problems under scenario data uncertainty and that the optimistic counterpart of the dual is a tractable finite dimensional problem. As an application, we also derive a robust conjugate duality theorem for support vector machines which are a class of important convex optimization models for classifying two labelled data sets. The support vector machine has emerged as a powerful modelling tool for machine learning problems of data classification that arise in many areas of application in information and computer sciences.
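Schematically, in assumed notation: for an uncertain convex program with objective \(f\) and uncertain constraint \(g(\cdot, v) \le 0\), \(v\) ranging over an uncertainty set \(V\), the deterministic robust counterpart and the optimistic counterpart of the dual read
\[
\inf_{x}\ \big\{ f(x) \;:\; g(x, v) \le 0 \ \ \text{for all } v \in V \big\}
\qquad \text{and} \qquad
\sup_{v \in V,\ \lambda \ge 0}\ \inf_{x}\ \big\{ f(x) + \lambda\, g(x, v) \big\},
\]
and robust strong duality in the sense above means that these two values coincide and the supremum is attained.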

9.
Many polynomial and discrete optimization problems can be reduced to multiextremal quadratic-type models of nonlinear programming. For solving these problems, one may use Lagrangian bounds in combination with branch and bound techniques. For some important examples, the Lagrangian bounds may be improved by adding to the model so-called superfluous quadratic constraints, which modify the Lagrangian bounds. Problems of finding Lagrangian bounds can, as a rule, be reduced to the minimization of nonsmooth convex functions and may be successfully solved by modern methods of nondifferentiable optimization. This approach is illustrated by examples of solving polynomial-type problems and some discrete optimization problems on graphs.

10.
We investigate whether some merit functions for variational inequality problems (VIPs) provide error bounds for the underlying VIP. Under the condition that the involved mapping F is strongly monotone, but not necessarily Lipschitz continuous, we prove that the so-called regularized gap function provides an error bound for the underlying VIP. We also give an example showing that the so-called D-gap function might not provide error bounds for a strongly monotone VIP.
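For context, with the regularized gap function \(g_\alpha\) of a VIP over a fixed closed convex set \(S\) (defined as in the QVI case above, with \(K(x) \equiv S\)), the D-gap function with parameters \(0 < \alpha < \beta\) is
\[
g_{\alpha\beta}(x) = g_\alpha(x) - g_\beta(x),
\]
which is nonnegative on the whole space and vanishes exactly at solutions of the VIP, so it turns the VIP into an unconstrained minimization problem; the notation is assumed.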

11.
We present an inexact spectral bundle method for solving convex quadratic semidefinite optimization problems. This method is a first-order method, so each iteration is much cheaper than in second-order approaches such as interior-point methods. In each iteration of our method, we solve an eigenvalue minimization problem inexactly and solve a small convex quadratic semidefinite program as a subproblem. We give a proof of the global convergence of this method using techniques from the analysis of the standard bundle method, and provide a global error bound under a Slater-type condition for the problem in question. Numerical experiments with matrices of order up to 3000 are performed, and the computational results establish the effectiveness of this method.

12.
For a parametric convex programming problem in a Hilbert space with a strongly convex objective functional, a regularized Kuhn-Tucker theorem in nondifferential form is proved by the dual regularization method. The theorem states (in terms of minimizing sequences) that the solution to the convex programming problem can be approximated by minimizers of its regular Lagrangian (which means that the Lagrange multiplier for the objective functional is unity) with no assumptions made about the regularity of the optimization problem. Points approximating the solution are constructively specified. They are stable with respect to the errors in the initial data, which makes it possible to effectively use the regularized Kuhn-Tucker theorem for solving a broad class of inverse, optimization, and optimal control problems. The relation between this assertion and the differential properties of the value function (S-function) is established. The classical Kuhn-Tucker theorem in nondifferential form is contained in the above theorem as a particular case. A version of the regularized Kuhn-Tucker theorem for convex objective functionals is also considered.
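In assumed notation: for the convex problem \(\min\{f(x) : g(x) \le 0\}\), the regular Lagrangian referred to above is
\[
L(x, \lambda) = f(x) + \langle \lambda, g(x) \rangle, \qquad \lambda \ge 0,
\]
i.e. the multiplier attached to the objective functional is fixed at unity; the theorem approximates the solution by minimizers of \(L(\cdot, \lambda^\delta)\), where the multipliers \(\lambda^\delta\) are produced by the dual regularization method from data with error level \(\delta\).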

13.
The global minimization of large-scale partially separable non-convex problems over a bounded polyhedral set using a parallel branch and bound approach is considered. The objective function consists of a separable concave part, an unseparated convex part, and a strictly linear part, which are all coupled by the linear constraints. These large-scale problems are characterized by having the number of linear variables much greater than the number of nonlinear variables. An important special class of problems that can be reduced to this form is the class of signomial global minimization problems. Such problems often arise in engineering design, and previous computational methods for such problems have been limited to the convex posynomial case. In the current work, a convex underestimating function to the objective function is easily constructed and minimized over the feasible domain to get both upper and lower bounds on the global minimum function value. At each minor iteration of the algorithm, the feasible domain is divided into subregions, and convex underestimating problems over each subregion are solved in parallel. Branch and bound techniques can then be used to eliminate parts of the feasible domain from consideration and improve the upper and lower bounds. It is shown that the algorithm guarantees that a solution is obtained to within any specified tolerance in a finite number of steps. Computational results obtained on the four-processor Cray 2, both sequentially and in parallel on all four processors, are also presented.
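The convex underestimator is cheap for the separable concave part because the convex envelope of a concave function \(\varphi_i\) over an interval \([l_i, u_i]\) is simply its secant (standard fact, restated in assumed notation):
\[
\hat{\varphi}_i(x_i) = \varphi_i(l_i) + \frac{\varphi_i(u_i) - \varphi_i(l_i)}{u_i - l_i}\,(x_i - l_i) \;\le\; \varphi_i(x_i), \qquad x_i \in [l_i, u_i];
\]
replacing each concave term by its secant, and keeping the convex and linear parts unchanged, yields a convex lower-bounding problem over each subregion.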

14.
The notion of weak sharp minima is an important tool in the analysis of the perturbation behavior of certain classes of optimization problems, as well as in the convergence analysis of algorithms designed to solve these problems. It has been studied extensively by several authors. This paper is the second of a series on this subject, in which the basic results on weak sharp minima from Part I are applied to a number of important problems in convex programming. In Part II we study applications to the linear regularity and bounded linear regularity of a finite collection of convex sets, as well as global error bounds in convex programming. We obtain both new results and reproduce several existing results from a fresh perspective.
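As a reminder of the definition (standard, restated in assumed notation): the solution set \(S\) of \(\min_x f(x)\) is a set of weak sharp minima with modulus \(\alpha > 0\) if
\[
f(x) \ge \inf f + \alpha\, d(x, S) \qquad \text{for all feasible } x,
\]
so the objective grows at least linearly with the distance to the solution set, which is precisely the kind of growth that yields global error bounds.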

15.
In this paper, we study the backward–forward algorithm as a splitting method to solve structured monotone inclusions, and convex minimization problems in Hilbert spaces. It has a natural link with the forward–backward algorithm and has the same computational complexity, since it involves the same basic blocks, but organized differently. Surprisingly enough, this kind of iteration arises when studying the time discretization of the regularized Newton method for maximally monotone operators. First, we show that these two methods enjoy remarkable involutive relations, which go far beyond the evident inversion of the order in which the forward and backward steps are applied. Next, we establish several convergence properties for both methods, some of which were unknown even for the forward–backward algorithm. This brings further insight into this well-known scheme. Finally, we specialize our results to structured convex minimization problems, the gradient-projection algorithms, and give a numerical illustration of theoretical interest.
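Schematically, for the inclusion \(0 \in Ax + Bx\) with \(A\) maximally monotone and \(B\) single-valued, and with resolvent \(J_{\gamma A} = (I + \gamma A)^{-1}\), the two schemes read (in assumed notation)
\[
\text{forward–backward:}\quad x_{k+1} = J_{\gamma A}\big(x_k - \gamma B x_k\big),
\qquad
\text{backward–forward:}\quad x_{k+1} = z_k - \gamma B z_k, \;\; z_k = J_{\gamma A} x_k,
\]
so both use one resolvent (backward) step and one explicit (forward) step per iteration, composed in opposite orders.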

16.
This paper proposes a mechanism to produce equivalent Lipschitz surrogates for zero-norm and rank optimization problems by means of the global exact penalty for their equivalent mathematical programs with an equilibrium constraint (MPECs). Specifically, we reformulate these combinatorial problems as equivalent MPECs via the variational characterization of the zero-norm and the rank function, show that their penalized problems, obtained by moving the equilibrium constraint into the objective, are global exact penalizations, and obtain the equivalent Lipschitz surrogates by eliminating the dual variable in the global exact penalty. These surrogates, including the popular SCAD function in statistics, are also differences of two convex functions (D.C.) if the function and the constraint set involved in the zero-norm and rank optimization problems are convex. We illustrate an application by designing a multi-stage convex relaxation approach to the rank plus zero-norm regularized minimization problem.
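For reference, the SCAD penalty mentioned above is commonly written, for parameters \(\lambda > 0\) and \(a > 2\), as
\[
p_\lambda(t) = \begin{cases}
\lambda |t|, & |t| \le \lambda,\\[2pt]
\dfrac{2a\lambda |t| - t^2 - \lambda^2}{2(a-1)}, & \lambda < |t| \le a\lambda,\\[2pt]
\dfrac{(a+1)\lambda^2}{2}, & |t| > a\lambda,
\end{cases}
\]
which can be written as \(\lambda|t|\) minus a convex remainder, consistent with the D.C. structure described above.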

17.
Generalized Nash equilibrium problems (GNEPs) allow, in contrast to standard Nash equilibrium problems, the strategy space of one player to depend on the decisions of the other players. In this paper, we consider jointly convex GNEPs, which form an important subclass of general GNEPs. Based on a regularized Nikaido-Isoda function, we present two (nonsmooth) reformulations of this class of GNEPs, one being a constrained optimization problem and the other an unconstrained optimization problem. While most approaches in the literature compute only so-called normalized Nash equilibria, which form only a subset of all solutions, our two approaches have the property that their minima characterize the set of all solutions of a GNEP. We also investigate the smoothness properties of our two optimization problems and show that both problems are continuous under a Slater-type condition and, in fact, piecewise continuously differentiable under the constant rank constraint qualification. Finally, we present some numerical results based on our unconstrained optimization reformulation.
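In assumed notation, with player \(\nu\)'s objective \(\theta_\nu\) and a parameter \(\alpha > 0\), the regularized Nikaido-Isoda function is
\[
\Psi_\alpha(x, y) = \sum_{\nu=1}^{N} \Big[\theta_\nu(x^\nu, x^{-\nu}) - \theta_\nu(y^\nu, x^{-\nu})\Big] - \frac{\alpha}{2}\,\|x - y\|^2,
\]
and reformulations of the jointly convex GNEP are built from its value function \(V_\alpha(x) = \max_{y \in X} \Psi_\alpha(x, y)\) over the shared feasible set \(X\).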

18.
In this paper, we study convex programming problems with data uncertainty in both the objective function and the constraints. Under the framework of robust optimization, we employ a robust regularity condition, which is much weaker than the ones in the open literature, to establish various properties and characterizations of the set of all robust optimal solutions of the problems. These are expressed in terms of subgradients, Lagrange multipliers, and epigraphs of conjugate functions. We also present illustrative examples to show the significance of our theoretical results.

19.
We consider a class of nonsmooth convex optimization problems in which the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the group reproducing kernel norm. This class of problems arises naturally in applications of group Lasso, a popular technique for variable selection. An effective approach to solving such problems is the proximal gradient method. In this paper, we derive efficient algorithms for this class of convex problems, study them theoretically, and analyze the convergence of the algorithm and its subalgorithm.
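The key computational primitive in the proximal gradient method here is the prox of the group norm. A minimal sketch for non-overlapping groups and the plain group \(\ell_2\) penalty (an assumption; the paper's group reproducing kernel norm may weight groups differently):

```python
import numpy as np

def prox_group_l2(v, groups, tau):
    """Prox of tau * sum_g ||v_g||_2: blockwise soft-thresholding.

    groups: list of index arrays partitioning the coordinates of v.
    """
    out = np.zeros_like(v)
    for g in groups:
        norm_g = np.linalg.norm(v[g])
        if norm_g > tau:
            out[g] = (1.0 - tau / norm_g) * v[g]  # shrink the whole block toward zero
        # blocks with ||v_g|| <= tau are set to zero, giving groupwise sparsity
    return out

# Example: two groups of a 5-vector; the second group is zeroed out entirely
v = np.array([3.0, 4.0, 0.1, -0.1, 0.05])
print(prox_group_l2(v, [np.array([0, 1]), np.array([2, 3, 4])], tau=1.0))
```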

20.
In this paper, we consider a class of split mixed vector quasivariational inequality problems in real Hilbert spaces and establish new gap functions by using the method of the nonlinear scalarization function. Further, we obtain some error bounds for the underlying split mixed vector quasivariational inequality problems in terms of regularized gap functions. Finally, we give some examples to illustrate our results. The results obtained in this paper are new.

