Similar Literature
20 similar documents found (search time: 15 ms)
1.
We consider convex relaxations for the problem of minimizing a (possibly nonconvex) quadratic objective subject to linear and (possibly nonconvex) quadratic constraints. Let $\mathcal{F}$ denote the feasible region for the linear constraints. We first show that replacing the quadratic objective and constraint functions with their convex lower envelopes on $\mathcal{F}$ is dominated by an alternative methodology based on convexifying the range of the quadratic form $\binom{1}{x}\binom{1}{x}^T$ for $x\in \mathcal{F}$. We next show that the use of “$\alpha$BB” underestimators as computable estimates of convex lower envelopes is dominated by a relaxation of the convex hull of the quadratic form that imposes semidefiniteness and linear constraints on diagonal terms. Finally, we show that the use of a large class of D.C. (“difference of convex”) underestimators is dominated by a relaxation that combines semidefiniteness with RLT constraints.
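As background for the comparison above (my own illustration, not the paper's construction), the standard semidefinite lifting that such relaxations start from introduces $Y \approx xx^T$ and relaxes the rank-one condition; the symbols $Q_i$, $c_i$, $d_i$, $A$, $b$ below denote generic problem data, not notation from the paper:
\begin{align*}
\min_{x,\,Y}\quad & \langle Q_0, Y\rangle + c_0^{T} x\\
\text{s.t.}\quad  & \langle Q_i, Y\rangle + c_i^{T} x \le d_i, \qquad i = 1,\dots,m,\\
                  & A x \le b,\\
                  & \begin{pmatrix} 1 & x^{T}\\ x & Y \end{pmatrix} \succeq 0 .
\end{align*}
RLT constraints tighten this lifting further by multiplying pairs of the linear constraints and substituting $Y_{jk}$ for the products $x_j x_k$.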

2.
Gradient methods for minimizing composite functions   (total citations: 1; self-citations: 0; citations by others: 1)
In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and the other is a simple general convex function with known structure. Despite the absence of good properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the first part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (with convergence rate $O\left({1 \over k}\right)$ ), and an accelerated multistep version with convergence rate $O\left({1 \over k^2}\right)$ , where $k$ is the iteration counter. For nonconvex problems with this structure, we prove convergence to a point from which there is no descent direction. In contrast, we show that for general nonsmooth, nonconvex problems, even resolving the question of whether a descent direction exists from a point is NP-hard. For all methods, we suggest some efficient “line search” procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We also present the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.
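A minimal sketch of the accelerated composite-gradient idea (my own illustration, not the authors' code), for the special case where the simple convex term is $\lambda\|x\|_1$ so its proximal step has a closed form; assuming here that a Lipschitz constant L of the smooth gradient is known, rather than estimated by a line search:

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t*||.||_1: the 'simple' part of the objective is handled exactly."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def accelerated_composite_gradient(grad_f, L, lam, x0, iters=500):
    """FISTA-style accelerated scheme for min_x f(x) + lam*||x||_1.

    grad_f : gradient oracle of the smooth part f (Lipschitz constant L).
    Achieves the O(1/k^2) rate discussed in the abstract when L is a
    valid Lipschitz constant; otherwise a line search would be needed.
    """
    x = y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_next = soft_threshold(y - grad_f(y) / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Example: LASSO, with smooth part f(x) = 0.5*||Ax - b||^2
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((50, 100)), rng.standard_normal(50)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    x_hat = accelerated_composite_gradient(lambda x: A.T @ (A @ x - b),
                                           L, lam=0.1, x0=np.zeros(100))
```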

3.
In power system generation, economic dispatch (ED) is used to allocate the real power output of thermal generating units to meet the required load demand so that the total cost of the thermal generating units is minimized. This paper proposes a swarm-based mean-variance mapping optimization \((\text{MVMO}^{S})\) for solving ED problems with convex and nonconvex objective functions. The proposed method extends the original single-particle mean-variance mapping optimization by initializing a set of particles. The special feature of the proposed method is a mapping function applied to the mutation, based on the mean and variance of the n-best population. The proposed \(\text{MVMO}^{S}\) is tested on various systems, and the obtained results are compared to those from many other optimization methods in the literature. Test results show that the proposed method obtains better solution quality than the other methods. Therefore, the proposed \(\text{MVMO}^{S}\) is a potential method for efficiently solving convex and nonconvex ED problems in power systems.
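For context (a standard textbook ED formulation, not taken from this paper; the symbols \(a_i, b_i, c_i, e_i, f_i, P_D, P_L\) are generic cost and demand data), the dispatch cost is convex in the basic quadratic case and becomes nonconvex when valve-point effects are modeled:
\begin{align*}
\min_{P_1,\dots,P_N}\ & \sum_{i=1}^{N} \Big( a_i + b_i P_i + c_i P_i^{2} + \big|e_i \sin\!\big(f_i (P_i^{\min} - P_i)\big)\big| \Big)\\
\text{s.t.}\ & \sum_{i=1}^{N} P_i = P_D + P_L, \qquad P_i^{\min} \le P_i \le P_i^{\max},
\end{align*}
with the sinusoidal valve-point term dropped in the convex case.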

4.
This paper addresses the solution of bound-constrained optimization problems using algorithms that require only the availability of objective function values but no derivative information. We refer to these algorithms as derivative-free algorithms. Fueled by a growing number of applications in science and engineering, the development of derivative-free optimization algorithms has long been studied and has recently found renewed interest. Along with many derivative-free algorithms, many software implementations have also appeared. The paper presents a review of derivative-free algorithms, followed by a systematic comparison of 22 related implementations using a test set of 502 problems. The test bed includes convex and nonconvex problems, smooth as well as nonsmooth problems. The algorithms were tested under the same conditions and ranked under several criteria, including their ability to find near-global solutions for nonconvex problems, improve a given starting point, and refine a near-optimal solution. A total of 112,448 problem instances were solved. We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size. For the problems used in this study, TOMLAB/MULTIMIN, TOMLAB/GLCCLUSTER, MCS and TOMLAB/LGO are better, on average, than other derivative-free solvers in terms of solution quality within 2,500 function evaluations. These global solvers outperform local solvers even for convex problems. Finally, TOMLAB/OQNLP, NEWUOA, and TOMLAB/MULTIMIN show superior performance in terms of refining a near-optimal solution.

5.
In this paper our interest is in investigating properties and numerical solutions of proximal split feasibility problems. First, we consider the problem of finding a point which minimizes a convex function \(f\) such that its image under a bounded linear operator \(A\) minimizes another convex function \(g\). Based on an idea introduced in Lopez (Inverse Probl 28:085004, 2012), we propose a split proximal algorithm with a way of selecting the step-sizes such that its implementation does not need any prior information about the operator norm, since calculating, or even estimating, the operator norm \(\Vert A\Vert\) is not an easy task. Secondly, we investigate the case where one of the two involved functions is prox-regular; the novelty of this approach is that the associated proximal mapping is no longer nonexpansive. Such a situation is encountered, for instance, in the numerical solution of the phase retrieval problem in crystallography, astronomy and inverse scattering (Luke, SIAM Rev 44:169–224, 2002), and is therefore of great practical interest.
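A rough sketch of the kind of self-adaptive step-size rule referred to above (my own simplification, not the authors' exact scheme), for the split feasibility special case where \(f\) and \(g\) are indicator functions of a box C and a ball Q, so the proximal maps reduce to projections; the step never uses \(\Vert A\Vert\):

```python
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def project_ball(y, center, radius):
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= radius else center + radius * d / n

def split_feasibility(A, lo, hi, center, radius, x0, iters=1000, rho=1.0):
    """Find x in C = [lo, hi] with A @ x in Q = ball(center, radius).

    Step size tau_k = rho * h(x_k) / ||grad h(x_k)||^2 with
    h(x) = 0.5 * ||(I - P_Q)(A x)||^2, so no estimate of ||A|| is needed.
    """
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        r = Ax - project_ball(Ax, center, radius)   # (I - P_Q)(A x)
        h = 0.5 * np.dot(r, r)
        grad = A.T @ r                              # gradient of h at x
        if h < 1e-12 or np.dot(grad, grad) < 1e-16: # feasible or stationary
            break
        tau = rho * h / np.dot(grad, grad)
        x = project_box(x - tau * grad, lo, hi)
    return x
```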

6.
7.
Several optimization schemes are known for convex optimization problems. However, numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Significant progress toward going beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence results of this algorithm under the main assumption that the objective function satisfies the Kurdyka–Łojasiewicz property.
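To fix ideas, here is a generic proximal DC iteration (my own sketch, for the classical case where both parts are convex rather than the more general nonconvex-minus-convex setting of the paper): the concave part is linearized at the current point and a proximal term is added, so each step reduces to one proximal map evaluation.

```python
import numpy as np

def prox_dc(prox_g, grad_h, x0, step=1.0, iters=200):
    """Proximal DC iteration for min_x g(x) - h(x), with g and h convex.

    Linearizing -h at x_k and adding a proximal term gives
    x_{k+1} = prox_{step*g}(x_k + step * grad_h(x_k)),
    which decreases the objective for any step > 0.
    """
    x = x0
    for _ in range(iters):
        x = prox_g(x + step * grad_h(x), step)
    return x

# Toy DC program: min_x  lam*||x||_1 + 0.5*||x - b||^2 - mu*||x||_2,
# split as g(x) = lam*||x||_1 + 0.5*||x - b||^2 (convex, prox-friendly)
# and h(x) = mu*||x||_2 (convex).
lam, mu = 0.5, 0.3
b = np.array([1.0, -2.0, 0.1])

def prox_g(v, t):
    # prox of t*g at v: soft-threshold a weighted average of v and b
    w = (v + t * b) / (1.0 + t)
    thr = t * lam / (1.0 + t)
    return np.sign(w) * np.maximum(np.abs(w) - thr, 0.0)

def grad_h(x):
    n = np.linalg.norm(x)
    return mu * x / n if n > 0 else np.zeros_like(x)   # subgradient 0 at 0

x_star = prox_dc(prox_g, grad_h, x0=np.ones(3))
```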

8.
In this paper, we study the Kurdyka–Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we develop various calculus rules to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents. In addition, we show that the well-studied Luo–Tseng error bound, together with a mild assumption on the separation of stationary values, implies that the KL exponent is \(\frac{1}{2}\). The Luo–Tseng error bound is known to hold for a large class of concrete structured optimization problems, and thus we deduce the KL exponent of a large class of functions whose exponents were previously unknown. Building upon this and the calculus rules, we are then able to show that the objective functions of many convex or nonconvex optimization models used in applications such as sparse recovery have KL exponent \(\frac{1}{2}\). This includes the least squares problem with smoothly clipped absolute deviation regularization or minimax concave penalty regularization, and the logistic regression problem with \(\ell_1\) regularization. Since many existing local convergence rate analyses for first-order methods in the nonconvex scenario rely on the KL exponent, our results enable us to obtain explicit convergence rates for various first-order methods when they are applied to a large variety of practical optimization models. Finally, we further illustrate how our results can be applied to establishing local linear convergence of the proximal gradient algorithm and the inertial proximal algorithm with constant step sizes for some specific models that arise in sparse recovery.
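For reference (a standard definition, not quoted from the paper): a proper closed function \(f\) satisfies the KL property with exponent \(\alpha \in [0,1)\) at a stationary point \(\bar{x}\) if there exist \(c, \epsilon > 0\) and a neighborhood \(U\) of \(\bar{x}\) such that
\[
  \mathrm{dist}\big(0, \partial f(x)\big) \;\ge\; c\,\big(f(x) - f(\bar{x})\big)^{\alpha}
  \qquad \text{whenever } x \in U \text{ and } f(\bar{x}) < f(x) < f(\bar{x}) + \epsilon .
\]
The exponent \(\alpha = \frac{1}{2}\) is the case that yields local linear convergence of proximal-type first-order methods.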

9.
In deterministic continuous constrained global optimization, upper bounding the objective function generally resorts to local minimization at several nodes/iterations of the branch and bound. We propose in this paper an alternative approach when the constraints are inequalities and the feasible space has a non-null volume. First, we extract an inner region, i.e., an entirely feasible convex polyhedron or box in which all points satisfy the constraints. Second, we select a point inside the extracted inner region and update the upper bound with its cost. We describe in this paper two original inner region extraction algorithms implemented in our interval B&B called IbexOpt (AAAI, pp 99–104, 2011). They apply to nonconvex constraints involving mathematical operators such as \(+\), \(\bullet\), \(/\), power, sqrt, exp, log, and sin. This upper bounding shows very good performance on medium-sized systems from the COCONUT suite.
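A toy sketch of the underlying idea (a drastic simplification of the inner-region extraction described above): certify with interval arithmetic that a whole box satisfies the inequality constraints, then evaluate the objective at any point inside it to update the upper bound. The constraint and objective below are made up for illustration.

```python
import numpy as np

def square_bounds(lo, hi):
    """Tight interval enclosure of x**2 over [lo, hi]."""
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

def box_is_inner(lo, hi):
    """Certify g(x) = x1^2 + x2^2 - 1 <= 0 for ALL x in the box [lo, hi]."""
    ub = sum(square_bounds(l, h)[1] for l, h in zip(lo, hi)) - 1.0
    return ub <= 0.0

def update_upper_bound(lo, hi, best_ub, objective):
    """If the box is entirely feasible, any point in it gives a valid UB."""
    if box_is_inner(lo, hi):
        mid = np.array([(l + h) / 2.0 for l, h in zip(lo, hi)])
        return min(best_ub, objective(mid))
    return best_ub

# Example: objective f(x) = (x1 - 2)^2 + x2, candidate box [0.1,0.4] x [-0.5,0.2]
f = lambda x: (x[0] - 2.0) ** 2 + x[1]
ub = update_upper_bound([0.1, -0.5], [0.4, 0.2], np.inf, f)
```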

10.
Jeyakumar (Methods Oper. Res. 55:109–125, 1985) and Weir and Mond (J. Math. Anal. Appl. 136:29–38, 1988) introduced the concept of preinvex function. Preinvex functions have some interesting properties. For example, every local minimum of a preinvex function is a global minimum, and nonnegative linear combinations of preinvex functions are preinvex. Invex functions were introduced by Hanson (J. Math. Anal. Appl. 80:545–550, 1981) as a generalization of differentiable convex functions. These functions are more general than the convex and pseudoconvex ones. Invex functions are exactly those functions whose stationary points are global minima. Under some conditions, an invex function is also a preinvex function. Syau (Fuzzy Sets Syst. 115:455–461, 2000) introduced the concepts of pseudoconvexity, invexity, and pseudoinvexity for fuzzy mappings of one variable by using the notion of differentiability and the results proposed by Goestschel and Voxman (Fuzzy Sets Syst. 18:31–43, 1986). Wu and Xu (Fuzzy Sets Syst 159:2090–2103, 2008) introduced the concepts of fuzzy pseudoconvex, fuzzy invex, fuzzy pseudoinvex, and fuzzy preinvex mapping from \(\mathbb{R}^{n}\) to the set of fuzzy numbers based on the concept of differentiability of fuzzy mapping defined by Wang and Wu (Fuzzy Sets Syst. 138:559–591, 2003). In this paper, we present some characterizations of preinvex fuzzy mappings. The necessary and sufficient conditions for differentiable and twice differentiable preinvex fuzzy mappings are provided. These characterizations correct and improve previous results given by other authors. This fact is shown with examples. Moreover, we introduce additional conditions under which these results are valid.
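For orientation (standard crisp definitions, not quoted from this paper): a differentiable \(f\) is invex with respect to a kernel \(\eta\) if
\[
  f(x) - f(u) \;\ge\; \nabla f(u)^{T}\, \eta(x, u) \qquad \text{for all } x, u,
\]
and \(f\) is preinvex with respect to \(\eta\) (on an invex set) if
\[
  f\big(u + \lambda\, \eta(x, u)\big) \;\le\; \lambda f(x) + (1 - \lambda) f(u)
  \qquad \text{for all } \lambda \in [0, 1].
\]
Taking \(\eta(x,u) = x - u\) recovers ordinary convexity in both cases.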

11.
In this paper we present an extension of the proximal point algorithm with Bregman distances to solve constrained minimization problems with quasiconvex and convex objective functions on Hadamard manifolds. The proposed algorithm is a modified and extended version of the one presented in Papa Quiroz and Oliveira (J Convex Anal 16(1):49–69, 2009). An advantage of the proposed algorithm, in the nonconvex case, is that in each iteration the algorithm only needs to find a stationary point of the proximal function and not a global minimum. For that reason, from the computational point of view, the proposed algorithm is more practical than the earlier proximal method. Another advantage, in the convex case, is that under minimal conditions on the problem data as well as on the proximal parameters we obtain the same convergence results as the Euclidean proximal algorithm with Bregman distances.
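In its Euclidean form (the manifold version in the paper replaces these quantities with geodesic counterparts), the Bregman proximal point iteration generated from a strictly convex kernel \(h\) reads
\[
  x^{k+1} \in \operatorname*{arg\,min}_{x \in C}\ \Big\{ f(x) + \lambda_k\, D_h(x, x^k) \Big\},
  \qquad
  D_h(x, y) = h(x) - h(y) - \nabla h(y)^{T}(x - y),
\]
and in the nonconvex case the subproblem is only required to be solved to stationarity.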

12.
The purpose of this article is to review the similarity and difference between financial risk minimization and a class of machine learning methods known as support vector machines, which were developed independently. By recognizing their common features, we can understand them in a unified mathematical framework. On the other hand, by recognizing their difference, we can develop new methods. In particular, employing coherent measures of risk, we develop a generalized criterion for two-class classification. It includes existing criteria, such as margin maximization and the \(\nu\)-SVM, as special cases. This extension can also be applied to other types of machine learning methods, such as multi-class classification, regression and outlier detection. Although the new criterion is first formulated as a nonconvex optimization problem, it becomes a convex optimization problem once the nonnegative \(\ell_1\)-regularization is employed. Numerical examples demonstrate how the developed methods work for bond rating.
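One concrete instance of such a risk-based criterion (stated here as an illustration of the connection, not as the paper's exact formulation) uses the conditional value-at-risk of the negative margins over the training sample: with labels \(y_i \in \{\pm 1\}\),
\[
  \min_{w,\, b}\ \ \mathrm{CVaR}_{1-\nu}\big( -y_i (w^{T} x_i + b) \big)
  \quad \text{s.t.} \quad \|w\| \le 1,
\]
which is closely related to the \(\nu\)-SVM; replacing CVaR by another coherent risk measure yields other classification criteria of the same family.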

13.
The complexity of finding $\epsilon$-approximate first-order critical points for the general smooth constrained optimization problem is shown to be no worse than $O(\epsilon ^{-2})$ in terms of function and constraint evaluations. This result is obtained by analyzing the worst-case behaviour of a first-order short-step homotopy algorithm consisting of a feasibility phase followed by an optimization phase, and requires minimal assumptions on the objective function. Since a bound of the same order is known to be valid for the unconstrained case, this leads to the conclusion that the presence of possibly nonlinear/nonconvex inequality/equality constraints is irrelevant for this bound to apply.

14.
15.
In this paper we are concerned with the problem of unboundedness and the existence of an optimal solution in reverse convex and concave integer optimization problems. In particular, we present necessary and sufficient conditions for the existence of an upper bound for a convex objective function defined over the feasible region contained in ${\mathbb{Z}^n}$. The conditions for boundedness are provided in the form of an implementable algorithm, showing that for the considered class of functions, the integer programming problem is unbounded if and only if the associated continuous problem is unbounded. We also address the problem of boundedness in the global optimization problem of maximizing a convex function over a set of integers contained in a convex and unbounded region. It is shown in the paper that in both types of integer programming problems, the objective function is either unbounded from above, or it attains its maximum at a feasible integer point.

16.
Classical stochastic gradient methods are well suited for minimizing expected-value objective functions. However, they do not apply to the minimization of a nonlinear function involving expected values, i.e., a composition of two expected-value functions: the problem \(\min _x \mathbf{E}_v\left[ f_v\big (\mathbf{E}_w [g_w(x)]\big ) \right].\) In order to solve this stochastic composition problem, we propose a class of stochastic compositional gradient descent (SCGD) algorithms that can be viewed as stochastic versions of the quasi-gradient method. The SCGD algorithms update the solution based on noisy sample gradients of \(f_v, g_{w}\) and use an auxiliary variable to track the unknown quantity \(\mathbf{E}_w\left[ g_w(x)\right]\). We prove that the SCGD algorithms converge almost surely to an optimal solution for convex optimization problems, as long as such a solution exists. The convergence involves the interplay of two iterations with different time scales. For nonsmooth convex problems, SCGD achieves a convergence rate of \(\mathcal {O}(k^{-1/4})\) in the general case and \(\mathcal {O}(k^{-2/3})\) in the strongly convex case, after taking k samples. For smooth convex problems, SCGD can be accelerated to converge at a rate of \(\mathcal {O}(k^{-2/7})\) in the general case and \(\mathcal {O}(k^{-4/5})\) in the strongly convex case. For nonconvex problems, we prove that any limit point generated by SCGD is a stationary point, for which we also provide a convergence rate analysis. The stochastic setting where one wants to optimize compositions of expected-value functions is indeed very common in practice. The proposed SCGD methods find wide applications in learning, estimation, dynamic programming, etc.
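A basic two-time-scale sketch of the idea (my own illustration, not the authors' exact pseudocode or step-size constants): the auxiliary variable tracks the inner expectation with a fast averaging step, while the decision variable moves along a chain-rule gradient estimate with a slow step.

```python
import numpy as np

def scgd(sample_g, sample_g_jac, sample_grad_f, x0, iters=5000,
         alpha0=1.0, beta0=1.0):
    """SCGD-style sketch for min_x E_v[ f_v( E_w[ g_w(x) ] ) ].

    y tracks E_w[g_w(x)] via running averages (fast time scale beta_k),
    x follows the estimated chain-rule gradient (slow time scale alpha_k).
    """
    x = np.asarray(x0, dtype=float)
    y = sample_g(x)                       # initialize the tracking variable
    for k in range(1, iters + 1):
        alpha = alpha0 / k ** 0.75        # slow step for x
        beta = beta0 / k ** 0.5           # fast step for y
        y = (1.0 - beta) * y + beta * sample_g(x)
        grad = sample_g_jac(x).T @ sample_grad_f(y)
        x = x - alpha * grad
    return x

# Toy instance: g_w(x) = x + w with E[w] = 0, f(y) = 0.5*||y - c||^2,
# so the compositional objective reduces to 0.5*||x - c||^2 with minimizer c.
rng = np.random.default_rng(0)
c = np.array([1.0, -2.0])
x_hat = scgd(sample_g=lambda x: x + 0.1 * rng.standard_normal(2),
             sample_g_jac=lambda x: np.eye(2),
             sample_grad_f=lambda y: y - c,
             x0=np.zeros(2))
```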

17.
Multivariate incomplete polynomials are considered on compact 0-symmetric starlike domains. Problems of density and quantitative approximation properties of such polynomials are investigated. It is shown that density holds for a certain class of starlike domains which includes both convex and some nonconvex domains. On the other hand, a family of nonconvex starlike domains is also found for which density fails. In addition, it is also shown that on 0-symmetric convex bodies in $\mathbb{R}^{d}$, continuous functions can be approximated by θ-incomplete polynomials at the rate $O(\omega_{2}(f, n^{-1/(d+3)}))$. Moreover, if the convex body is the intersection of simplexes with vertex at the origin, then this order improves to $O(\omega_{2}(f,1/\sqrt{n}))$.

18.
Most branch-and-bound algorithms in global optimization depend on convex underestimators to calculate lower bounds of a minimization objective function. The $\alpha$BB methodology produces such underestimators for sufficiently smooth functions by analyzing interval Hessian approximations. Several methods to rigorously determine the $\alpha$BB parameters have been proposed, varying in tightness and computational complexity. We present new polynomial-time methods and compare their properties to existing approaches. The new methods are based on classical eigenvalue bounds from linear algebra and a more recent result on interval matrices. We show how parameters can be optimized with respect to the average underestimation error, in addition to the maximum error commonly used in $\alpha$BB methods. Numerical comparisons are made, based on test functions and a set of randomly generated interval Hessians. The paper shows the relative strengths of the methods, and proves exact results where one method dominates another.
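As a point of reference (a simple textbook rule, not one of the sharper methods compared in the paper), a uniform $\alpha$ can be obtained from a Gershgorin-type eigenvalue bound on the interval Hessian; the example data below are hypothetical.

```python
import numpy as np

def alpha_gershgorin(H_lo, H_hi):
    """Uniform alphaBB parameter from an interval Hessian [H_lo, H_hi].

    A Gershgorin-type bound on the smallest eigenvalue over the box gives
    alpha >= max(0, -0.5 * lambda_min), which makes the classical
    underestimator below convex on the box.
    """
    n = H_lo.shape[0]
    lam_min = np.inf
    for i in range(n):
        radius = sum(max(abs(H_lo[i, j]), abs(H_hi[i, j]))
                     for j in range(n) if j != i)
        lam_min = min(lam_min, H_lo[i, i] - radius)
    return max(0.0, -0.5 * lam_min)

def alphabb_underestimator(f, alpha, x_lo, x_hi):
    """Convex underestimator L(x) = f(x) + alpha * sum_i (xL_i - x_i)(xU_i - x_i)."""
    return lambda x: f(x) + alpha * np.sum((x_lo - x) * (x_hi - x))

# Hypothetical 2-D example: f(x) = x1*x2 on the box [0,1]^2; its Hessian is
# constant, so the interval Hessian is degenerate with off-diagonal entries 1.
H_lo = np.array([[0.0, 1.0], [1.0, 0.0]])
H_hi = np.array([[0.0, 1.0], [1.0, 0.0]])
alpha = alpha_gershgorin(H_lo, H_hi)          # = 0.5 here
L = alphabb_underestimator(lambda x: x[0] * x[1], alpha,
                           np.zeros(2), np.ones(2))
```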

19.
20.
We present in this paper alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most ${O(1/\epsilon)}$ iterations to obtain an ${\epsilon}$-optimal solution, while our accelerated (i.e., fast) versions require at most ${O(1/\sqrt{\epsilon})}$ iterations, with little change in the computational effort required at each iteration. For both types of methods, we present one algorithm that requires both functions to be smooth with Lipschitz continuous gradients and one algorithm that needs only one of the functions to be so. The algorithms in this paper are Gauss–Seidel-type methods, in contrast to the ones proposed by Goldfarb and Ma (Fast multiple splitting algorithms for convex optimization, Columbia University, 2009), which are Jacobi-type methods. Numerical results are reported to support our theoretical conclusions and demonstrate the practical potential of our algorithms.
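A simplified Gauss-Seidel alternating-linearization sketch (my own illustration under the assumption that both functions are smooth with easy proximal maps; it omits the paper's augmented Lagrangian bookkeeping, skipping rules and stopping tests): each half-step linearizes one function around the latest point and keeps the other exact.

```python
import numpy as np

def alternating_linearization(prox_f, grad_f, prox_g, grad_g, x0,
                              mu, iters=200):
    """Alternating linearization sketch for min f(x) + g(x)."""
    x = y = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_f(y - mu * grad_g(y), mu)   # linearize g, keep f exact
        y = prox_g(x - mu * grad_f(x), mu)   # linearize f, keep g exact
    return y

# Toy problem: f(x) = 0.5*||A x - b||^2,  g(x) = 0.5*||x - c||^2.
rng = np.random.default_rng(1)
A, b, c = rng.standard_normal((20, 5)), rng.standard_normal(20), np.ones(5)
AtA, Atb = A.T @ A, A.T @ b
prox_f = lambda v, mu: np.linalg.solve(np.eye(5) + mu * AtA, v + mu * Atb)
grad_f = lambda x: AtA @ x - Atb
prox_g = lambda v, mu: (v + mu * c) / (1.0 + mu)
grad_g = lambda x: x - c
mu = 1.0 / max(1.0, np.linalg.norm(A, 2) ** 2)   # <= 1/Lipschitz of both gradients
x_hat = alternating_linearization(prox_f, grad_f, prox_g, grad_g,
                                  np.zeros(5), mu)
```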
