Similar Documents
1.
In this paper, we present sufficient global optimality conditions for weakly convex minimization problems using abstract convex analysis theory. By introducing (L,X)-subdifferentials of weakly convex functions via a class of quadratic functions, we first obtain some sufficient conditions for global optimization problems with weakly convex objective functions and weakly convex inequality and equality constraints. Some sufficient optimality conditions for problems with additional box constraints and bivalent constraints are then derived.
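A standard definition worth keeping in mind here (it is the usual one behind the quadratic-function construction, though the abstract does not spell it out): \(f\) is weakly convex with modulus \(\rho \ge 0\) when
\[ x \mapsto f(x) + \tfrac{\rho}{2}\,\lVert x \rVert^{2} \quad \text{is convex,} \]
so a weakly convex function is a convex function minus a quadratic, and quadratic functions can play the role that affine functions play in classical convex analysis.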

2.
In the first part of this paper we apply a saddle point theorem from convex analysis to show that various constrained minimization problems are equivalent to the problem of smoothing by spline functions. In particular, we show that near-interpolants are smoothing splines with weights that arise as Lagrange multipliers corresponding to the constraints in the problem of near-interpolation. In the second part of this paper we apply certain fixed point iterations to compute these weights. A similar iteration is applied to the computation of the smoothing parameter in the problem of smoothing.



3.
Univariate cubic \(L_1\) smoothing splines are capable of providing shape-preserving \(C^1\)-smooth approximation of multi-scale data. The minimization principle for univariate cubic \(L_1\) smoothing splines results in a nondifferentiable convex optimization problem that, for theoretical treatment and algorithm design, can be formulated as a generalized geometric program. In this framework, a geometric dual with a linear objective function over a convex feasible domain is derived, and a linear system for dual-to-primal conversion is established. Numerical examples are given to illustrate this approach. Sensitivity analysis for data with uncertainty is presented. This work is supported by research grant #DAAG55-98-D-0003 of the Army Research Office, USA.

4.
In this paper, we propose a new smooth function that possesses a property not satisfied by the existing smooth functions. Based on this smooth function, we discuss the existence and continuity of the smoothing path for solving the \(P_0\)-function nonlinear complementarity problem (NCP). Using the characteristics of the new smooth function, we investigate the boundedness of the iteration sequence generated by non-interior continuation methods for solving the \(P_0\)-function NCP under the assumption that the solution set of the NCP is nonempty and bounded. We show that this assumption is weaker than those required by a few existing continuation methods for solving the NCP.
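For reference, the abstract presumes the standard definitions: the NCP for a mapping \(F:\mathbb{R}^{n}\to\mathbb{R}^{n}\) asks for a vector \(x\) with
\[ x \ge 0, \qquad F(x) \ge 0, \qquad x^{\mathsf T}F(x) = 0, \]
and \(F\) is a \(P_0\) function when for every pair \(x \ne y\) there is an index \(i\) with \(x_i \ne y_i\) and \((x_i - y_i)\bigl(F_i(x) - F_i(y)\bigr) \ge 0\).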

5.
Convex approximations to sparse PCA via Lagrangian duality
We derive a convex relaxation for cardinality-constrained Principal Component Analysis (PCA) by using a simple representation of the \(L_1\) unit ball and standard Lagrangian duality. The resulting convex dual bound is an unconstrained minimization of the sum of two nonsmooth convex functions. Applying a partial smoothing technique reduces the objective to the sum of a smooth and a nonsmooth convex function, to which an efficient first-order algorithm can be applied. Numerical experiments demonstrate its potential.

6.
We propose an adaptive smoothing algorithm based on Nesterov's smoothing technique in Nesterov (Math Prog 103(1):127–152, 2005) for solving "fully" nonsmooth composite convex optimization problems. Our method combines Nesterov's accelerated proximal gradient scheme with a new homotopy strategy for the smoothness parameter. By an appropriate choice of smoothing functions, we develop a new algorithm that has the \(\mathcal {O}\left( \frac{1}{\varepsilon }\right) \) worst-case iteration-complexity while preserving the same complexity-per-iteration as in Nesterov's method, and that allows one to automatically update the smoothness parameter at each iteration. We then customize our algorithm to solve four special cases that cover various applications. We also specialize our algorithm to constrained convex optimization problems and show its convergence guarantee on a primal sequence of iterates. We demonstrate our algorithm on three numerical examples and compare it with other related algorithms.
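As a concrete (and deliberately simplified) illustration of the idea, the sketch below smooths \(f(x) = \lVert Ax - b\rVert_1\) with the Huber function and runs an accelerated gradient scheme while shrinking the smoothness parameter; the schedule \(\mu_k = \mu_0/(k+1)\) and the step sizes are illustrative assumptions, not the paper's actual update rules:

    import numpy as np

    # Huber-smoothed l1 objective: grad of f_mu(x) = sum_i huber_mu((Ax-b)_i),
    # where huber'(t) = clip(t/mu, -1, 1).
    def smoothed_grad(A, b, x, mu):
        r = A @ x - b
        return A.T @ np.clip(r / mu, -1.0, 1.0)

    def adaptive_smoothing(A, b, iters=500, mu0=1.0):
        x = np.zeros(A.shape[1])
        y = x.copy()
        t = 1.0
        lipA = np.linalg.norm(A, 2) ** 2      # ||A||^2; grad Lipschitz const is ||A||^2 / mu
        for k in range(iters):
            mu = mu0 / (k + 1)                # homotopy: smoothness parameter decreases
            step = mu / lipA                  # 1 / L_mu
            x_new = y - step * smoothed_grad(A, b, y, mu)
            t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
            y = x_new + (t - 1) / t_new * (x_new - x)   # Nesterov momentum
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 20))
    b = rng.standard_normal(40)
    x = adaptive_smoothing(A, b)
    print("final objective:", np.abs(A @ x - b).sum())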

7.
By smoothing a perturbed minimum function, we propose in this paper a new smoothing function. The existence and continuity of a smooth path for solving the nonlinear complementarity problem (NCP) with a \(P_0\) function are discussed. We investigate the boundedness of the iteration sequence generated by noninterior continuation/smoothing methods under the assumption that the solution set of the NCP is nonempty and bounded. Based on the new smoothing function, we present a predictor-corrector smoothing Newton algorithm for solving the NCP with a \(P_0\) function, which is shown to be globally linearly and locally superlinearly convergent under suitable assumptions. Some preliminary computational results are reported.
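One classical construction of this kind, shown purely for illustration (the paper's perturbed minimum function is its own variant), is the Chen–Harker–Kanzow–Smale smoothing of the minimum function,
\[ \phi_{\mu}(a,b) = \tfrac{1}{2}\Bigl(a + b - \sqrt{(a-b)^{2} + 4\mu^{2}}\Bigr), \]
which is smooth for \(\mu > 0\) and tends to \(\min(a,b)\) as \(\mu \downarrow 0\); since \(x\) solves the NCP exactly when \(\min(x_i, F_i(x)) = 0\) for all \(i\), driving \(\mu\) to zero traces a smooth path toward a solution.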

8.
Kernel functions play an important role in the design and analysis of primal-dual interior-point algorithms. They are used not only for determining the search directions but also for measuring the distance between the given iterate and the \(\mu\)-center of the algorithm. In this paper we present a unified kernel-function approach to primal-dual interior-point algorithms for convex quadratic semidefinite optimization based on the Nesterov-Todd symmetrization scheme. The iteration bounds obtained for large- and small-update methods are analogous to those for the linear optimization case. Moreover, this unifies the analysis for linear, convex quadratic and semidefinite optimization.
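A familiar instance (our example; the paper treats a whole family) is the classical logarithmic kernel
\[ \psi(t) = \frac{t^{2}-1}{2} - \log t, \qquad t > 0, \]
whose induced proximity function \(\Psi(v) = \sum_{i=1}^{n}\psi(v_i)\) serves both purposes mentioned above: its gradient defines the search direction, and its value measures the distance of the scaled iterate \(v\) from the \(\mu\)-center, which corresponds to \(v = e\), where \(\Psi\) vanishes.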

9.
In this note we show that many classes of global optimization problems can be treated most satisfactorily by classical optimization theory and conventional algorithms. We focus on the class of problems involving the minimization of the product of several convex functions on a convex set, which was studied recently by Kuno et al. [3]. It is shown that these problems are typical composite concave programming problems and thus can be handled elegantly by c-programming [4]–[8] and its techniques.

10.
In this paper, we introduce the notion of a self-regular function. Such a function is strongly convex and smooth coercive on its domain, the positive real axis. We show that any such function induces a so-called self-regular proximity function and a corresponding search direction for primal-dual path-following interior-point methods (IPMs) for solving linear optimization (LO) problems. It is proved that the new large-update IPMs enjoy a polynomial \(\mathcal{O}\left(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon}\right)\) iteration bound, where \(q \ge 1\) is the so-called barrier degree of the kernel function underlying the algorithm. The constant hidden in the \(\mathcal{O}\)-symbol depends on \(q\) and the growth degree \(p \ge 1\) of the kernel function. When choosing the kernel function appropriately, the new large-update IPMs have a polynomial \(\mathcal{O}\left(\sqrt{n}\,\log n\,\log\frac{n}{\varepsilon}\right)\) iteration bound, thus improving the currently best known bound for large-update methods by almost a factor \(\sqrt{n}\). Our unified analysis also provides the \(\mathcal{O}\left(\sqrt{n}\,\log\frac{n}{\varepsilon}\right)\) best known iteration bound of small-update IPMs. At each iteration, we need to solve only one linear system. An extension of the above results to semidefinite optimization (SDO) is also presented. Received: March 2000 / Accepted: December 2001 / Published online: April 12, 2002

11.
In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., as the minimization of \(\ell_1\)-regularized linear least squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated as CGD) to solve the more general \(\ell_1\)-regularized convex minimization problem, i.e., the minimization of an \(\ell_1\)-regularized convex smooth function. We establish a Q-linear convergence rate for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We propose efficient implementations of the CGD method and report numerical results for solving large-scale \(\ell_1\)-regularized linear least squares problems arising in compressed sensing and image deconvolution, as well as large-scale \(\ell_1\)-regularized logistic regression problems for feature selection in data classification. Comparison with several state-of-the-art algorithms specifically designed for solving large-scale \(\ell_1\)-regularized linear least squares or logistic regression problems suggests that an efficiently implemented CGD method may outperform these algorithms despite the fact that the CGD method is not specifically designed just to solve these special classes of problems.
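A minimal single-coordinate sketch of the underlying update (soft-thresholding plus a Gauss-Southwell-type coordinate choice) is given below; it is a toy version under simplifying assumptions, not the paper's blocked and tuned implementation:

    import numpy as np

    # Coordinate gradient descent with soft-thresholding for
    # min_x 0.5*||Ax - b||^2 + lam*||x||_1, choosing the coordinate by a
    # Gauss-Southwell-type rule (largest potential move).

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def cgd_lasso(A, b, lam, iters=2000):
        m, n = A.shape
        x = np.zeros(n)
        r = A @ x - b                      # residual, maintained incrementally
        col_sq = (A ** 2).sum(axis=0)      # per-coordinate Lipschitz constants
        for _ in range(iters):
            g = A.T @ r                    # gradient of the smooth part
            # Candidate coordinate-wise proximal steps:
            x_new = soft_threshold(x - g / col_sq, lam / col_sq)
            d = x_new - x
            j = int(np.argmax(np.abs(d)))  # Gauss-Southwell-type choice
            if abs(d[j]) < 1e-10:
                break
            r += A[:, j] * d[j]            # incremental residual update
            x[j] = x_new[j]
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100); x_true[:5] = 3.0
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x = cgd_lasso(A, b, lam=1.0)
    print("nonzeros recovered:", int((np.abs(x) > 1e-6).sum()))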

12.
Joydeep Dutta, TOP 13(2):185–279, 2005
During the early 1960s there was a growing realization that a large number of optimization problems appearing in applications involved the minimization of non-differentiable functions. One of the important areas where such problems appeared was optimal control. The subject of nonsmooth analysis arose out of the need to develop a theory to deal with the minimization of nonsmooth functions. The first impetus in this direction came with the publication of Rockafellar's seminal work titled Convex Analysis, published by the Princeton University Press in 1970. It would be impossible to overstate the impact of this book on the development of the theory and methods of optimization. It is also important to note that a large part of convex analysis had already been developed by Werner Fenchel nearly twenty years earlier and was circulated through his mimeographed lecture notes titled Convex Cones, Sets and Functions, Princeton University, 1951. In this article we trace the dramatic development of nonsmooth analysis and its applications to optimization in finite dimensions. Beginning with the fundamentals of convex optimization, we quickly move on to the path-breaking work of Clarke, which extends the domain of nonsmooth analysis from convex to locally Lipschitz functions. Clarke was the second doctoral student of R.T. Rockafellar. We discuss the notions of the Clarke directional derivative and the Clarke generalized gradient, along with the relevant calculus rules and applications to optimization. While discussing locally Lipschitz optimization we also try to blend in the computational aspects of the theory wherever possible. This is followed by a discussion of the geometry of sets with nonsmooth boundaries. The approach taken to develop the notion of the normal cone to an arbitrary set is sequential in nature and does not rely on the standard techniques of convex analysis. The move away from convexity was pioneered by Mordukhovich and later culminated in the monograph Variational Analysis by Rockafellar and Wets. The approach of Mordukhovich relied on a nonconvex separation principle called the extremal principle, while that of Rockafellar and Wets relied on various convergence notions developed to suit the needs of optimization. We then move on to a parallel development in nonsmooth optimization due to Demyanov and Rubinov called quasidifferentiable optimization. They study the class of directionally differentiable functions whose directional derivatives can be represented as a difference of two sublinear functions. On the other hand, the directional derivative of a convex function, and also the Clarke directional derivative, are sublinear functions of the direction; it had thus been thought that the most useful generalizations of directional derivatives must be sublinear in the direction, so Demyanov and Rubinov made a major conceptual change in nonsmooth optimization. In this section we define the notion of a quasidifferential, which is a pair of convex compact sets. We study some calculus rules and their applications to optimality conditions. We also study the interesting notion of the Demyanov difference between two sets and its applications to optimization. In the last section of this paper we study some second-order tools used in nonsmooth analysis and try to see their relevance in optimization. It is important to note that, unlike the classical case, the second-order theory of nonsmoothness is quite complicated in the sense that there are many approaches to it. We have chosen to describe those approaches which can be developed from the first-order nonsmooth tools discussed here. We present three different approaches, highlighting the second-order calculus rules and their applications to optimization.
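For a locally Lipschitz \(f\), the two central objects the survey builds on are (standard definitions, added here for the reader's convenience):
\[ f^{\circ}(x; d) = \limsup_{y \to x,\; t \downarrow 0} \frac{f(y + t d) - f(y)}{t}, \qquad \partial f(x) = \{\, \xi : f^{\circ}(x; d) \ge \langle \xi, d \rangle \ \text{for all } d \,\}, \]
the Clarke directional derivative and the Clarke generalized gradient; note that \(f^{\circ}(x; \cdot)\) is sublinear in \(d\), which is exactly the property that quasidifferentiability relaxes.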

13.
In this article, we consider a DC (difference of two convex functions) function approach for solving joint chance-constrained programs (JCCP), which was first established by Hong et al. (Oper Res 59:617–630, 2011). They used a DC function to approximate the probability function and constructed a sequential convex approximation method to solve the approximation problem. However, the DC function they used was nondifferentiable. To alleviate this difficulty, we propose a class of smoothing functions to approximate the joint chance-constraint function, based on which smooth optimization problems are constructed to approximate the JCCP. We show that the solutions of a sequence of smoothing approximations converge to a Karush–Kuhn–Tucker point of the JCCP under a certain asymptotic regime. To implement the proposed method, four examples in the class of smoothing functions are explored. Moreover, numerical experiments show that our method is comparable with existing approaches and effective.

14.
In this paper we consider the solution of certain convex integer minimization problems via greedy augmentation procedures. We show that a greedy augmentation procedure that employs only directions from certain Graver bases needs only polynomially many augmentation steps to solve the given problem. We extend these results to convex N-fold integer minimization problems and to convex 2-stage stochastic integer minimization problems. Finally, we present some applications of convex N-fold integer minimization problems for which our approach provides polynomial-time solution algorithms.

15.
Methods for the minimization of composite functions with a nondifferentiable polyhedral convex part are considered. This class includes problems involving minimax functions and norms. Local convergence results are given for "active set" methods, in which an equality-constrained quadratic programming subproblem is solved at each iteration. The active set consists of the components of the polyhedral convex function which are active or near-active at the current iterate. The effects of solving the subproblem inexactly at each iteration are discussed; rate-of-convergence results which depend on the degree of inexactness are given.
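The problem class has the form (our notation; the abstract does not fix one)
\[ \min_{x} \; f(x) + h(c(x)), \qquad h(y) = \max_{1 \le i \le m} \bigl( a_i^{\mathsf T} y + b_i \bigr), \]
with \(f\) and \(c\) smooth and \(h\) polyhedral convex; taking \(h\) to be a max over finitely many affine pieces recovers minimax problems, while \(h(y) = \lVert y \rVert_1\) or \(\lVert y \rVert_\infty\) recovers norm objectives, the "active set" being the set of affine pieces attaining (or nearly attaining) the max at the current iterate.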

16.
A trust region algorithm for minimization of locally Lipschitzian functions
Liqun Qi and Jie Sun, Mathematical Programming 66(1–3):25–43, 1994
The classical trust region algorithm for smooth nonlinear programs is extended to the nonsmooth case where the objective function is only locally Lipschitzian. At each iteration, an objective function that carries both first- and second-order information is minimized over a trust region. The term that carries the first-order information is an iteration function that may not explicitly depend on subgradients or directional derivatives. We prove that the algorithm is globally convergent. This convergence result extends the result of Powell for minimization of smooth functions, the result of Yuan for minimization of composite convex functions, and the result of Dennis, Li and Tapia for minimization of regular functions. In addition, compared with the recent model of Pang, Han and Rangaraj for minimization of locally Lipschitzian functions using a line search, this algorithm has the same convergence property without assuming positive definiteness and uniform boundedness of the second-order term. Applications of the algorithm to various nonsmooth optimization problems are discussed. One author's work was supported in part by the Australian Research Council; the other author's work was carried out while visiting the Department of Applied Mathematics at the University of New South Wales.
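For orientation, here is a bare-bones trust region loop for a smooth objective, included only to show the ratio test and radius update that the paper generalizes to the locally Lipschitzian case (where an iteration function replaces the gradient in the model); the model solve and all constants below are crude illustrative choices:

    import numpy as np

    def trust_region(f, grad, hess, x0, delta=1.0, iters=100, tol=1e-8):
        x = x0.copy()
        for _ in range(iters):
            g, H = grad(x), hess(x)
            if np.linalg.norm(g) < tol:
                break
            # Model step: Newton step if inside the region, else a scaled
            # steepest-descent step (a crude stand-in for a dogleg solve).
            try:
                s = np.linalg.solve(H, -g)
            except np.linalg.LinAlgError:
                s = -g
            if np.linalg.norm(s) > delta:
                s = -delta * g / np.linalg.norm(g)
            pred = -(g @ s + 0.5 * s @ H @ s)        # predicted decrease
            ared = f(x) - f(x + s)                   # actual decrease
            rho = ared / pred if pred > 0 else -1.0  # ratio test
            if rho > 0.1:                            # accept the step
                x = x + s
            delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
        return x

    # Usage on a toy quartic-plus-quadratic objective:
    f = lambda x: (x ** 4).sum() + (x ** 2).sum()
    grad = lambda x: 4 * x ** 3 + 2 * x
    hess = lambda x: np.diag(12 * x ** 2 + 2)
    print(trust_region(f, grad, hess, np.array([2.0, -1.5])))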

17.
The problems of proportional rounding of a nonnegative vector and of biproportional rounding of a nonnegative matrix are written as particular separable convex integer minimization problems. Allowing any separable convex objective function, we use the notions of vector and matrix apportionment problems. As a broader class of problems we consider separable convex integer minimization under linear equality restrictions \(Ax = b\) with a totally unimodular coefficient matrix \(A\). By total unimodularity, Fenchel duality applies despite the integer restrictions on the variables. The biproportional algorithm of Balinski and Demange (Math Program 45:193–210, 1989) is generalized and derived from the dual optimization problem. Also, a primal augmentation algorithm is stated. Finally, for the smaller class of matrix apportionment problems we discuss the alternating scaling algorithm, which is a discrete variant of the well-known Iterative Proportional Fitting procedure.
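For readers unfamiliar with the continuous prototype, this is classical Iterative Proportional Fitting; the paper's alternating scaling algorithm is a discrete variant producing integer apportionments, while the sketch below shows only the row/column alternation itself:

    import numpy as np

    # Scale rows and columns of a nonnegative matrix X alternately so that
    # its row and column sums match targets r and c (sum(r) must equal sum(c)).
    def ipf(X, r, c, iters=200, tol=1e-10):
        X = X.astype(float).copy()
        for _ in range(iters):
            X *= (r / X.sum(axis=1))[:, None]   # match row sums
            X *= (c / X.sum(axis=0))[None, :]   # match column sums
            if np.abs(X.sum(axis=1) - r).max() < tol:
                break
        return X

    X = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(ipf(X, r=np.array([10.0, 20.0]), c=np.array([12.0, 18.0])))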

18.
Error bounds, which refer to inequalities that bound the distance of vectors in a test set to a given set by a residual function, have proven to be extremely useful in analyzing the convergence rates of a host of iterative methods for solving optimization problems. In this paper, we present a new framework for establishing error bounds for a class of structured convex optimization problems, in which the objective function is the sum of a smooth convex function and a general closed proper convex function. Such a class encapsulates not only fairly general constrained minimization problems but also various regularized loss minimization formulations in machine learning, signal processing, and statistics. Using our framework, we show that a number of existing error bound results can be recovered in a unified and transparent manner. To further demonstrate the power of our framework, we apply it to a class of nuclear-norm regularized loss minimization problems and establish a new error bound for this class under a strict complementarity-type regularity condition. We then complement this result by constructing an example to show that the said error bound could fail to hold without the regularity condition. We believe that our approach will find further applications in the study of error bounds for structured convex optimization problems.
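In symbols, an error bound for a set \(\bar X\) (e.g., the solution set) over a test set \(T\), with a residual function \(r(\cdot) \ge 0\) vanishing exactly on \(\bar X\), is an inequality of the form (standard definition, our notation)
\[ \operatorname{dist}(x, \bar X) \le \kappa \, r(x) \qquad \text{for all } x \in T, \]
for some constant \(\kappa > 0\); for structured problems of the kind above, a natural residual is the norm of a proximal-gradient step.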

19.
In the past decade, eigenvalue optimization has gained remarkable attention in various engineering applications. One of the main difficulties in the numerical analysis of such problems is that the eigenvalues, considered as functions of a symmetric matrix, are not smooth at those points where they are multiple. We propose a new explicit nonsmooth second-order bundle algorithm, based on the idea of the proximal bundle method, for minimizing an arbitrary eigenvalue over an affine family of symmetric matrices; such an eigenvalue function is a special class of D.C. function. To the best of our knowledge, few methods currently exist for minimizing an arbitrary eigenvalue function. In this work, we apply the \(\mathcal{U}\)-Lagrangian theory to this class of D.C. functions: the arbitrary eigenvalue function \(\lambda_i\) with affine matrix-valued mappings, where \(\lambda_i\) is usually not convex. We prove the global convergence of our method in the sense that every accumulation point of the sequence of iterates is stationary. Moreover, under mild conditions we show that, if started close enough to the minimizer \(x^*\), the proposed algorithm converges to \(x^*\) quadratically. The method is tested on some constrained optimization problems, and some encouraging preliminary numerical results show the efficiency of our method.

20.
In this paper, a new hybrid method is proposed for solving nonlinear complementarity problems (NCP) with a \(P_0\) function. In the new method, we combine a smoothing nonmonotone trust region method based on a conic model with line search techniques. We reformulate the NCP as a system of semismooth equations using the Fischer-Burmeister function. Using Kanzow's smooth approximation function to construct the smooth operator, we propose a smoothing nonmonotone trust region algorithm of a conic model for solving the NCP with \(P_0\) functions. This differs from classical trust region methods in that, when a trial step is not accepted, the method does not re-solve the trust region subproblem but generates an iterate whose steplength is defined by a line search. We prove that every accumulation point of the sequence generated by the algorithm is a solution of the NCP. Under a nonsingularity condition, the superlinear convergence of the algorithm is established without a strict complementarity condition.
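As a small numerical illustration of the reformulation the abstract refers to, the sketch below evaluates the Fischer-Burmeister NCP function and a Kanzow-style smoothed version (the added \(2\mu\) under the root is the form usually stated in the literature, given here as an assumption rather than the paper's exact operator):

    import numpy as np

    # phi(a,b) = 0  iff  a >= 0, b >= 0, ab = 0, so the NCP becomes the
    # semismooth system Phi(x) = (phi(x_i, F_i(x)))_i = 0; the mu > 0
    # version is smooth everywhere.

    def fb(a, b):
        return np.sqrt(a**2 + b**2) - a - b

    def fb_smoothed(a, b, mu):
        return np.sqrt(a**2 + b**2 + 2.0 * mu) - a - b

    # Sanity check: complementary pairs are roots of fb, and the smoothed
    # function converges to fb as mu -> 0.
    print(fb(0.0, 3.0), fb(2.0, 0.0))             # both 0.0
    for mu in (1e-1, 1e-3, 1e-6):
        print(mu, fb_smoothed(2.0, 0.0, mu))      # -> 0 as mu -> 0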
