Similar Documents
20 similar documents found
1.
A homogeneous interior-point algorithm for solving nonsymmetric convex conic optimization problems is presented. Starting each iteration from the vicinity of the central path, the method steps in the approximate tangent direction and then applies a correction phase to locate the next well-centered primal–dual point. Notable features of the algorithm are that it uses only the primal barrier function, that it can detect infeasibility in the problem, and that no phase-I method is needed. We prove convergence to \(\epsilon \)-accuracy in \({\mathcal {O}}(\sqrt{\nu } \log {(1/\epsilon )})\) iterations. To improve performance, the algorithm employs a new Runge–Kutta-type second-order search direction suitable for the general nonsymmetric conic problem. Moreover, quasi-Newton updating is used to reduce the number of factorizations needed, implemented so that data sparsity can still be exploited. Extensive and promising computational results are presented for the \(p\)-cone problem, the facility location problem, entropy maximization problems and geometric programs, all formulated as nonsymmetric convex conic optimization problems.
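As an illustration of the Runge–Kutta idea for search directions, the sketch below applies a midpoint (second-order) rule to a generic direction oracle. The helper `tangent_dir` is a hypothetical stand-in for the algorithm's actual tangent-direction solve, so this is only a minimal sketch of the technique, not the paper's implementation.

```python
import numpy as np

def rk2_search_direction(x, tangent_dir, h=1.0):
    """Midpoint (second-order Runge-Kutta) search direction.

    `tangent_dir(x)` is a hypothetical oracle returning the first-order
    tangent direction at x (one linear-system solve in an interior-point
    method).  Re-evaluating it at a trial midpoint yields a direction
    that follows the central path to second order, at the cost of one
    extra direction evaluation.
    """
    k1 = tangent_dir(x)                   # direction at the current iterate
    k2 = tangent_dir(x + 0.5 * h * k1)    # direction at the midpoint
    return k2                             # the step actually taken is h * k2
```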

2.
In power system generation, economic dispatch (ED) is used to allocate the real power output of thermal generating units to meet the required load demand so that the total cost of the thermal generating units is minimized. This paper proposes a swarm-based mean-variance mapping optimization \((\hbox {MVMO}^{S})\) for solving ED problems with convex and nonconvex objective functions. The proposed method extends the original single-particle mean-variance mapping optimization by initializing a set of particles. Its special feature is a mapping function, applied in the mutation, that is based on the mean and variance of the n-best population. The proposed \(\hbox {MVMO}^{S}\) is tested on various systems, and the obtained results are compared to those from many other optimization methods in the literature. Test results show that the proposed method obtains better solution quality than the other methods, making \(\hbox {MVMO}^{S}\) a promising method for efficiently solving convex and nonconvex ED problems in power systems.
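The sketch below illustrates the flavor of such a mean-variance mapping mutation. It is not the paper's exact \(\hbox {MVMO}^{S}\) mapping function; the shape-factor formula, the clamping of the variance, and the normalization of genes to [0, 1] are assumptions made for the illustration.

```python
import numpy as np

def mvmo_style_mutation(xbest_mean, xbest_var, rng, fs=1.0):
    """Simplified sketch of a mean-variance mapping mutation.

    A uniform sample u in [0, 1] is mapped through a nonlinear
    transformation shaped by the mean and variance of the n-best
    population: low variance (a converging archive) pulls offspring
    toward the mean, high variance keeps the search broad.
    """
    u = rng.random(xbest_mean.shape)
    # shape factor: small variance -> large s -> strong pull to the mean
    s = fs * -np.log(np.maximum(xbest_var, 1e-12))
    h = xbest_mean * (1.0 - np.exp(-u * s)) \
        + (1.0 - xbest_mean) * np.exp(-(1.0 - u) * s)
    return h  # offspring gene in [0, 1], to be de-normalized afterwards

rng = np.random.default_rng(0)
child = mvmo_style_mutation(np.array([0.4, 0.6]), np.array([0.01, 0.2]), rng)
```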

3.
The alternating direction method of multipliers (ADMM) has proved effective for solving separable convex optimization subject to linear constraints. In this paper, we propose a generalized symmetric ADMM (GS-ADMM), which updates the Lagrange multiplier twice with suitable stepsizes, to solve multi-block separable convex programming. GS-ADMM partitions the variables into two groups, one with p blocks and the other with q blocks, where \(p \ge 1\) and \(q \ge 1\) are integers. The two groups are updated in a Gauss–Seidel scheme, while the variables within each group are updated in a Jacobi scheme, which makes the method attractive in big-data settings. By adding proper proximal terms to the subproblems, we specify the domain of the stepsizes that guarantees GS-ADMM is globally convergent with a worst-case \({\mathcal {O}}(1/t)\) ergodic convergence rate. This stepsize domain turns out to be significantly larger than those in the literature, so GS-ADMM is more flexible in choosing larger stepsizes for the dual variable. Two special cases of GS-ADMM, which allow zero penalty terms, are also discussed and analyzed. Compared with several state-of-the-art methods, preliminary numerical experiments on a sparse matrix minimization problem from statistical learning show that the proposed method is effective and promising.
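A minimal sketch of one GS-ADMM-style sweep is given below, assuming hypothetical subproblem solvers `prox_x[i]` and `prox_y[j]` and a penalty parameter `beta`; the exact subproblem forms, proximal terms, and the admissible stepsize domain are the paper's contribution and are not reproduced here.

```python
def gs_admm_sweep(x, y, lam, prox_x, prox_y, residual, beta=1.0,
                  tau=0.9, sigma=0.9):
    """One Gauss-Seidel-between-groups / Jacobi-within-groups sweep.

    x: list of p blocks (group 1), y: list of q blocks (group 2);
    residual(x, y) returns the linear-constraint residual; prox_x[i]
    and prox_y[j] are hypothetical solvers of the proximal
    subproblems.  The multiplier lam is updated twice, with stepsizes
    tau and sigma, as in the symmetric ADMM.
    """
    # group 1: all p blocks updated from the same old (x, y, lam) -> Jacobi,
    # so these solves could run in parallel
    x = [prox_x[i](x, y, lam) for i in range(len(x))]
    lam = lam - tau * beta * residual(x, y)      # first multiplier update
    # group 2: all q blocks updated from the same (x, y, lam) -> Jacobi
    y = [prox_y[j](x, y, lam) for j in range(len(y))]
    lam = lam - sigma * beta * residual(x, y)    # second multiplier update
    return x, y, lam
```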

4.
The interval subset sum problem (ISSP) is a generalization of the well-known subset sum problem. Given a set of intervals \(\left\{ [a_{i,1},a_{i,2}]\right\} _{i=1}^n\) and a target integer T, the ISSP is to find a set of integers, at most one from each interval, such that their sum best approximates the target T without exceeding it. In this paper, we first study the computational complexity of the ISSP. We show that the ISSP is relatively easy to solve compared to the 0–1 knapsack problem, and we identify several subclasses of the ISSP that are polynomial-time solvable (with high probability), although the problem is NP-hard in general. We then propose a new fully polynomial time approximation scheme for the general ISSP. The time and space complexities of the proposed scheme are \({{\mathcal {O}}}\left( n \max \left\{ 1 / \epsilon ,\log n\right\} \right) \) and \(\mathcal{O}\left( n+1/\epsilon \right) ,\) respectively, where \(\epsilon \) is the relative approximation error. To the best of our knowledge, the proposed scheme has almost the same time complexity as, but a significantly lower space complexity than, the best known scheme. Both the correctness and the efficiency of the proposed scheme are validated by numerical simulations. In particular, the proposed scheme successfully solves ISSP instances with \(n=100{,}000\) and \(\epsilon =0.1\%\) within one second.
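A useful observation behind such algorithms is that the set of attainable sums is a union of integer ranges that can be propagated directly. The sketch below is an exact interval-propagation solver built on that observation; its worst case is exponential, which is precisely what motivates an FPTAS, so it is an illustration rather than the paper's scheme.

```python
def issp_exact(intervals, T):
    """Exact interval-propagation sketch for the ISSP (not the FPTAS).

    Maintains the attainable sums as a sorted list of disjoint integer
    ranges.  Choosing one integer from [lo, hi] maps an attainable
    range [l, h] to [l + lo, h + hi]; skipping the interval keeps
    [l, h].  Merging overlapping ranges keeps the list short in
    practice, though it can grow exponentially in the worst case.
    """
    ranges = [(0, 0)]                        # attainable sums: the empty pick
    for lo, hi in intervals:
        cand = ranges + [(l + lo, h + hi) for (l, h) in ranges]
        cand = [(l, min(h, T)) for (l, h) in cand if l <= T]  # clip at T
        cand.sort()
        merged = [cand[0]]
        for l, h in cand[1:]:
            if l <= merged[-1][1] + 1:       # adjacent integer ranges merge
                merged[-1] = (merged[-1][0], max(merged[-1][1], h))
            else:
                merged.append((l, h))
        ranges = merged
    return max(h for _, h in ranges)         # best attainable sum <= T

print(issp_exact([(2, 4), (10, 11), (20, 20)], 27))  # -> 24 (pick 4 and 20)
```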

5.
In this paper, we develop a parameterized proximal point algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is provably globally convergent with a worst-case O(1/t) convergence rate, where t denotes the iteration number. With properly chosen algorithm parameters, numerical experiments on a sparse optimization problem arising from statistical learning show that the P-PPA can perform significantly better than other state-of-the-art methods, such as the alternating direction method of multipliers and the relaxed proximal point algorithm.
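A minimal sketch of the outer proximal point loop is shown below; `prox_M` is a hypothetical oracle for one proximal step in the parameterized metric, and the \(\ell_1\) shrinkage used in the usage lines is just a convenient closed-form prox for demonstration, not the paper's subproblem.

```python
import numpy as np

def ppa(prox_M, x0, iters=100):
    """Generic proximal point iteration sketch.

    The P-PPA replaces the usual metric by a parameterized
    positive-definite matrix M; here `prox_M(x)` is assumed to return
    argmin_z { theta(z) + 0.5 * ||z - x||_M^2 }, i.e. one proximal
    step in that metric, so the loop below is only the outer scheme.
    """
    x = x0
    for _ in range(iters):
        x = prox_M(x)          # x_{k+1} solves the proximal subproblem
    return x

# usage: soft-thresholding, the prox of theta(x) = ||x||_1 with M = (1/t) I
t = 0.5
prox = lambda x: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
print(ppa(prox, np.array([2.0, -0.3, 1.2]), iters=5))
```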

6.
This paper is devoted to quasi \(\epsilon \)-solutions (one sort of approximate solution) of a robust convex optimization problem in the face of data uncertainty. Using the robust optimization (worst-case) approach, we establish an approximate optimality theorem and approximate duality theorems of Wolfe type for quasi \(\epsilon \)-solutions of the robust convex optimization problem. Moreover, some examples are given to illustrate the obtained results.

7.
In this paper, we consider approximate solutions (\(\epsilon \)-solutions) of a convex semidefinite programming problem in the face of data uncertainty. Using the robust optimization (worst-case) approach, we prove an approximate optimality theorem and approximate duality theorems for \(\epsilon \)-solutions of the robust convex semidefinite programming problem under the robust characteristic cone constraint qualification. Moreover, an example is given to illustrate the obtained results.

8.
For the problem of signal detection in heteroscedastic Gaussian white noise, we show the asymptotic minimaxity of kernel-based tests. The test statistics equal the \(L_2\)-norms of kernel estimators. The sets of alternatives are defined as the sets of all signals such that the \(L_2\)-norms of the signals smoothed by the kernel exceed some constants \(\rho_\epsilon\). The constants \(\rho_\epsilon\) depend on the noise level \(\epsilon\), and \(\rho_\epsilon \to 0\) as \(\epsilon \to 0\). The setup is considered in the zone of moderate deviation probabilities: we suppose that the type I or type II error probabilities of the tests tend to zero as \(\epsilon \to 0\).
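A small sketch of the test statistic is given below: the \(L_2\)-norm of a kernel-smoothed observation vector, to be compared against a threshold \(\rho_\epsilon\). The Gaussian kernel, the grid discretization, and the row normalization are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def kernel_l2_statistic(y, x, bandwidth):
    """Sketch of an L2-norm test statistic for signal detection.

    y holds noisy observations of a signal on the equispaced grid x;
    the statistic is the discrete L2-norm of the Gaussian-kernel
    smoothed data.  Rejecting when it exceeds rho_eps gives the kind
    of kernel-based test whose minimaxity the paper studies.
    """
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    K /= K.sum(axis=1, keepdims=True)      # row-normalized kernel weights
    smoothed = K @ y                        # kernel estimator on the grid
    dx = x[1] - x[0]
    return np.sqrt(np.sum(smoothed ** 2) * dx)   # discrete L2-norm
```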

9.
In this paper, we study nonlinear optimization problems involving eigenvalues of symmetric matrices. One of the difficulties in solving these problems is that the eigenvalue functions are not differentiable where the eigenvalue multiplicity exceeds one. We apply the \({\mathcal {U}}\)-Lagrangian theory to analyze the largest eigenvalue function of a convex matrix-valued mapping, which extends the corresponding results for linear mappings in the literature. We also provide formulas for the first- and second-order derivatives of the \({\mathcal {U}}\)-Lagrangian under mild assumptions. These theoretical results provide new second-order information about the largest eigenvalue function along a suitable smooth manifold, and lead to a new algorithmic framework for analyzing the underlying optimization problem.

10.
Let I be an interval. We consider the non-monotonic convex self-mappings \(f:I\rightarrow I\) such that \(f^2\) is convex. They have the property that all iterates \(f^n\) are convex. In this class of mappings we study three families of functions possessing convex iterative roots. A function f is said to be iteratively convex if f possesses convex iterative roots of all orders. A mapping f is said to be dyadically convex if for every \(n\ge 2\) there exists a convex iterative root \(f^{1/2^n}\) of order \(2^n\) and the sequence \(\{f^{1/2^n}\}\) satisfies the compatibility condition \( f^{1/2^n}\circ f^{1/2^n}= f^{1/2^{n-1}}.\) A function f is said to be flowly convex if it possesses a convex semi-flow, that is, a family of convex functions \(\{f^t,t>0\}\) such that \(f^t\circ f^s=f^{t+s}\) for all \(t,s >0\) and \(f^1=f\). We show the relations among these three types of convexity and determine all convex iterative roots of non-monotonic functions.
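The semi-flow and compatibility conditions are easy to check on a concrete example. Below, \(f^t(x)=x^{2^t}\) on [0, 1] is a convex semi-flow of \(f(x)=x^2\) (a monotone toy case, whereas the paper treats non-monotonic maps), since \((x^{2^s})^{2^t}=x^{2^{s+t}}\) and every exponent \(2^t>1\) keeps the map convex on [0, 1].

```python
import numpy as np

def f_t(x, t):
    """Semi-flow f^t(x) = x**(2**t) on [0, 1]; f^1(x) = x**2.

    Used only to illustrate the compatibility condition
    f^{1/2^n} o f^{1/2^n} = f^{1/2^{n-1}} and the semi-flow law
    f^t o f^s = f^{t+s}: here (x**(2**s))**(2**t) = x**(2**(s+t)).
    """
    return x ** (2.0 ** t)

x = np.linspace(0.0, 1.0, 5)
for n in range(1, 4):
    lhs = f_t(f_t(x, 1 / 2**n), 1 / 2**n)   # f^{1/2^n} composed with itself
    rhs = f_t(x, 1 / 2**(n - 1))            # f^{1/2^{n-1}}
    assert np.allclose(lhs, rhs)
print("compatibility holds for n = 1, 2, 3")
```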

11.
In this paper we are concerned with the Split Feasibility Problem (SFP), in which we are given two Hilbert spaces \(H_1\) and \(H_2\), nonempty, closed and convex sets \(C\subseteq H_1\) and \(Q\subseteq H_2\), and a bounded linear operator \(A:H_1 \rightarrow H_2\). The SFP is to find a point \(x^*\in C\) whose image under A belongs to Q, that is, \(Ax^*\in Q\). This formulation has been employed successfully for solving many inverse problems, for example in intensity-modulated radiation therapy treatment planning. One of the typical classes of methods used to solve the SFP is the class of projection methods. This note focuses on the modified relaxed CQ algorithm with an Armijo line-search rule for solving the SFP. Under common and standard assumptions, we show that the proposed method converges weakly to a solution of the SFP. Numerical examples illustrating the method's efficiency are presented for the LASSO problem, in which the goal is to recover a sparse signal from a limited number of observations.
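A fixed-stepsize sketch of the basic CQ iteration is shown below; the paper's method adds relaxation and an Armijo line search, so treat this as the underlying skeleton only. `proj_C` and `proj_Q` are assumed to be exact projectors, and the toy usage takes C as a box and Q as the singleton \(\{b\}\).

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma=None, iters=200):
    """Basic CQ iteration sketch for the split feasibility problem.

    Seeks x in C with Ax in Q via
        x_{k+1} = P_C( x_k - gamma * A^T (I - P_Q) A x_k ),
    i.e. projected gradient on 0.5 * ||(I - P_Q) A x||^2 with a fixed
    stepsize gamma in (0, 2 / ||A||^2).
    """
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))
        x = proj_C(x - gamma * grad)
    return x

# toy usage: C = [0,1]^2 box, Q = {b}; the SFP then asks for Ax = b, x in C
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([1.5, 2.0])                    # attained at x = (0.5, 0.5)
x = cq_algorithm(A, lambda z: np.clip(z, 0.0, 1.0), lambda z: b, np.zeros(2))
```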

12.
We consider the anisotropic Ginzburg–Landau model in a three-dimensional periodic setting, in the London limit as the Ginzburg–Landau parameter \(\kappa=1/\epsilon\to\infty\). By means of matching upper and lower bounds on the energy of minimizers, we derive an expression for a limiting energy in the spirit of Γ-convergence. We show that, to highest order as \(\epsilon\to 0\), the normalized induced magnetic field approaches a constant vector. We obtain a formula for the lower critical field \(H_{c_1}\) as a function of the orientation of the external field \(h^\epsilon_{ex}\) with respect to the principal axes of the anisotropy, and determine the direction of the limiting induced field as a minimizer of a convex geometrical problem.

13.
A parallel Nesterov algorithm for the unconstrained minimization of large-scale partially separable convex functions is presented. The problem is first transformed into a linearly constrained minimization of a separable function. A fast projected gradient (Nesterov) method is then applied to obtain a decomposition method with an \(O(1/k^2)\) rate of convergence (where k is the iteration number). Preliminary numerical experiments show the efficiency of the proposed approach.
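A generic fast projected gradient sketch in the Nesterov style appears below; `grad`, `proj`, and the Lipschitz constant `L` are problem-specific stand-ins, and the parallelism of the paper (block-separable gradients evaluated concurrently) is not shown.

```python
import numpy as np

def nesterov_projected(grad, proj, x0, L, iters=500):
    """Fast projected gradient (Nesterov) sketch with O(1/k^2) rate.

    grad: gradient of the L-smooth objective; proj: projector onto the
    linear constraint set obtained after the separable reformulation.
    """
    x = y = x0
    t = 1.0
    for _ in range(iters):
        x_new = proj(y - grad(y) / L)                  # projected gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)    # momentum extrapolation
        x, t = x_new, t_new
    return x
```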

14.
In this paper we study two inexact fast augmented Lagrangian algorithms for solving linearly constrained convex optimization problems. Our methods rely on a combination of the excessive-gap-like smoothing technique introduced in Nesterov (SIAM J Optim 16(1):235–249, 2005) and the general inexact oracle framework studied in Devolder (Math Program 146:37–75, 2014). We develop and analyze two augmented Lagrangian based algorithmic instances with constant and adaptive smoothness parameters, and derive a total computational complexity estimate in terms of projections onto a simple primal feasible set for each algorithm. For the constant-parameter algorithm we obtain an overall computational complexity of order \(\mathcal {O}(\frac{1}{\epsilon ^{5/4}})\), while for the adaptive one we obtain \(\mathcal {O}(\frac{1}{\epsilon })\) total projections onto the primal feasible set in order to achieve an \(\epsilon \)-optimal solution for the original problem.

15.
In this paper we propose a new class of Mehrotra-type predictor-corrector algorithms for monotone linear complementarity problems (LCPs). At each iteration, the method computes a corrector direction in addition to the Ai–Zhang direction (SIAM J Optim 16:400–417, 2005) in an attempt to improve performance. Starting with a feasible point \((x^0, s^0)\) in the wide neighborhood \(\mathcal {N}(\tau ,\beta )\), the algorithm enjoys the low iteration bound of \(O(\sqrt{n}L)\), where \(n\) is the dimension of the problem and \(L=\log \frac{(x^0)^T s^0}{\varepsilon }\) with \(\varepsilon \) the required precision. We also prove that the new algorithm can be specialized into an easily implementable variant for solving monotone LCPs, in such a way that the iteration bound is still \(O(\sqrt{n}L)\). Some preliminary numerical results are provided as well.
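The sketch below shows one textbook Mehrotra-style predictor-corrector step for a feasible monotone LCP. It uses the classical centering heuristic \(\sigma = (\mu_{\mathrm{aff}}/\mu)^3\) rather than the paper's wide-neighborhood \(\mathcal {N}(\tau ,\beta )\) rule, so it is an assumption-laden illustration of the step structure, not the proposed algorithm.

```python
import numpy as np

def mehrotra_lcp_step(M, x, s):
    """One predictor-corrector step for the monotone LCP
    s = Mx + q, x >= 0, s >= 0, x^T s = 0, assuming (x, s) feasible.
    """
    n = len(x)
    mu = x @ s / n
    # Newton system: [M -I; S X] [dx; ds] = [0; rhs_c]  (feasible case)
    KKT = np.block([[M, -np.eye(n)], [np.diag(s), np.diag(x)]])

    def solve(rhs_c):
        d = np.linalg.solve(KKT, np.concatenate([np.zeros(n), rhs_c]))
        return d[:n], d[n:]

    def max_step(v, dv):              # largest a in (0,1] with v + a*dv >= 0
        neg = dv < 0
        return min(1.0, np.min(-v[neg] / dv[neg])) if neg.any() else 1.0

    dx_a, ds_a = solve(-x * s)        # predictor (affine-scaling) direction
    a = min(max_step(x, dx_a), max_step(s, ds_a))
    mu_aff = (x + a * dx_a) @ (s + a * ds_a) / n
    sigma = (mu_aff / mu) ** 3        # Mehrotra centering heuristic
    # corrector: recenter and compensate for the second-order term
    dx, ds = solve(sigma * mu * np.ones(n) - x * s - dx_a * ds_a)
    a = 0.99 * min(max_step(x, dx), max_step(s, ds))
    return x + a * dx, s + a * ds
```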

16.
Concave mixed-integer quadratic programming is the problem of minimizing a concave quadratic polynomial over the mixed-integer points in a polyhedral region. In this work we describe an algorithm that finds an \(\epsilon \)-approximate solution to a concave mixed-integer quadratic programming problem. The running time of the proposed algorithm is polynomial in the size of the problem and in \(1/\epsilon \), provided that the number of integer variables and the number of negative eigenvalues of the objective function are fixed. Such a dependence of the running time on these two parameters is to be expected unless \(\mathcal {P}=\mathcal {NP}\).

17.
Classical stochastic gradient methods are well suited for minimizing expected-value objective functions. However, they do not apply to the minimization of a nonlinear function of an expected value, i.e., a composition of two expected-value functions: the problem \(\min _x \mathbf{E}_v\left[ f_v\big (\mathbf{E}_w [g_w(x)]\big ) \right] .\) In order to solve this stochastic composition problem, we propose a class of stochastic compositional gradient descent (SCGD) algorithms that can be viewed as stochastic versions of the quasi-gradient method. The SCGD algorithms update the solution based on noisy sample gradients of \(f_v\) and \(g_{w}\), and use an auxiliary variable to track the unknown quantity \(\mathbf{E}_w\left[ g_w(x)\right] \). We prove that SCGD converges almost surely to an optimal solution for convex optimization problems, whenever such a solution exists. The convergence involves the interplay of two iterations with different time scales. For nonsmooth convex problems, SCGD achieves a convergence rate of \(\mathcal {O}(k^{-1/4})\) in the general case and \(\mathcal {O}(k^{-2/3})\) in the strongly convex case, after taking k samples. For smooth convex problems, SCGD can be accelerated to converge at a rate of \(\mathcal {O}(k^{-2/7})\) in the general case and \(\mathcal {O}(k^{-4/5})\) in the strongly convex case. For nonconvex problems, we prove that any limit point generated by SCGD is a stationary point, for which we also provide a convergence rate analysis. The stochastic setting in which one wants to optimize compositions of expected-value functions is very common in practice; the proposed SCGD methods find wide applications in learning, estimation, dynamic programming, and elsewhere.
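A basic two-time-scale SCGD sketch is given below. The sampling oracles are hypothetical callables, and the stepsize exponents (\(\alpha_k = k^{-3/4}\) for the iterate, \(\beta_k = k^{-1/2}\) for the tracking variable) are one standard choice for the basic nonsmooth convex setting; the paper tunes these exponents per regime.

```python
import numpy as np

def scgd(sample_g, sample_jac_g, sample_grad_f, x0, y0, iters=10000):
    """Basic SCGD sketch (two time scales, auxiliary tracking variable).

    sample_g(x)       -> noisy sample of g_w(x)
    sample_jac_g(x)   -> noisy sample of the Jacobian of g_w at x
    sample_grad_f(y)  -> noisy sample of grad f_v at y
    The auxiliary variable y tracks the unknown E_w[g_w(x)] on the
    faster time scale, while x moves on the slower one.
    """
    x, y = x0, y0
    for k in range(1, iters + 1):
        alpha = 1.0 / k ** 0.75        # slow stepsize for the iterate x
        beta = 1.0 / k ** 0.5          # fast averaging for the tracker y
        y = (1 - beta) * y + beta * sample_g(x)   # track E_w[g_w(x)]
        x = x - alpha * sample_jac_g(x).T @ sample_grad_f(y)
    return x
```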

18.
For strictly increasing concave functions \(\varphi\) whose inverse functions are log-concave, the \(\varphi\)-Brunn–Minkowski inequality for planar convex bodies is established. It is shown that for convex bodies in \(\mathbb{R}^n\) the \(\varphi\)-Brunn–Minkowski inequality is equivalent to the \(\varphi\)-Minkowski mixed-volume inequalities.

19.
We consider the linearly constrained separable convex minimization problem whose objective function is the sum of \(m\) individual convex functions with no coupling variables. While augmented Lagrangian based decomposition methods for such problems have been well developed in the literature, a noteworthy requirement of these methods is that an additional correction step is necessary to guarantee convergence. This note shows that a straightforward Jacobian decomposition of the augmented Lagrangian method is globally convergent if the involved functions are further assumed to be strongly convex.
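The sketch below shows one sweep of the straightforward Jacobian decomposition: every block subproblem is solved from the same old iterate (hence in parallel), then the multiplier is updated, with no correction step. The subproblem solvers `argmin_blocks[i]` are hypothetical stand-ins for the augmented-Lagrangian block minimizations.

```python
def jacobian_alm_step(x_blocks, lam, argmin_blocks, A_blocks, b, beta=1.0):
    """One sweep of the Jacobian decomposition of the augmented
    Lagrangian method for  min sum_i f_i(x_i)  s.t.  sum_i A_i x_i = b.

    `argmin_blocks[i](x_blocks, lam)` is assumed to return the
    minimizer of the i-th augmented-Lagrangian subproblem with all
    other blocks held at their OLD values (Jacobi style), which is
    what the strong-convexity assumption makes globally convergent.
    """
    new_blocks = [
        argmin_blocks[i](x_blocks, lam)      # all solves use the old iterate
        for i in range(len(x_blocks))
    ]
    resid = sum(A @ xi for A, xi in zip(A_blocks, new_blocks)) - b
    lam = lam - beta * resid                 # multiplier update
    return new_blocks, lam
```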

20.
In this paper we extend the smoothing technique (Nesterov in Math Program 103(1):127–152, 2005; Nesterov in Unconstrained convex minimization in relative scale, 2003) to problems of semidefinite optimization. To this end, we develop a simple framework for estimating a Lipschitz constant for the gradient of certain symmetric functions of the eigenvalues of symmetric matrices. Using this technique, we justify the Lipschitz constants for some natural approximations of the maximal eigenvalue and the spectral radius of symmetric matrices. We analyze the efficiency of special gradient-type schemes on the problems of minimizing the maximal eigenvalue or the spectral radius of a matrix that depends linearly on the design variables. We show that in the first case the number of iterations of the method is bounded by \(O({1}/{\epsilon})\), where \(\epsilon\) is the required absolute accuracy of the problem. In the second case, the number of iterations is bounded by \({({4}/{\delta})} \sqrt{(1 + \delta) r\, \ln r }\), where \(\delta\) is the required relative accuracy and \(r\) is the maximal rank of the corresponding linear matrix inequality. Thus, the latter method is a fully polynomial approximation scheme.
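One natural smooth approximation of the maximal eigenvalue is the log-sum-exp of the spectrum, sketched below; it satisfies \(\lambda_{\max}(X) \le f_\mu(X) \le \lambda_{\max}(X) + \mu \ln n\) and its gradient is Lipschitz with a constant of order \(1/\mu\), which is the kind of bound such a framework establishes. The code is an illustrative evaluation, not the paper's scheme.

```python
import numpy as np

def smooth_max_eig(X, mu):
    """Smooth approximation of the maximal eigenvalue of symmetric X:
        f_mu(X) = mu * log( sum_i exp(lambda_i(X) / mu) ),
    with lambda_max(X) <= f_mu(X) <= lambda_max(X) + mu * log(n).
    The shift by lambda_max below is only for numerical stability.
    """
    lam = np.linalg.eigvalsh(X)     # eigenvalues in ascending order
    lmax = lam[-1]
    return lmax + mu * np.log(np.sum(np.exp((lam - lmax) / mu)))

X = np.array([[2.0, 1.0], [1.0, 0.5]])
for mu in (1.0, 0.1, 0.01):
    print(mu, smooth_max_eig(X, mu))   # tends to lambda_max as mu -> 0
```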
