Similar Literature
20 similar documents found.
1.
A new class of smooth exact penalty functions was recently introduced by Huyer and Neumaier. In this paper, we prove that the new smooth penalty function for a constrained optimization problem is exact if and only if the standard nonsmooth penalty function for this problem is exact. We also provide some estimates of the exact penalty parameter of the smooth penalty function, and, in particular, show that it asymptotically behaves as the square of the exact penalty parameter of the standard \(\ell _1\) penalty function. We briefly discuss a simple way to reduce the exact penalty parameter of the smooth penalty function, and study the effect of nonlinear terms on the exactness of this function.
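To fix notation, the standard nonsmooth \(\ell _1\) penalty referred to above attaches the weighted constraint violation to the objective. Below is a minimal runnable sketch of this classical construction; the toy problem and the exactness threshold computed for it are illustrative, not taken from the paper.

```python
import numpy as np

def l1_penalty(f, ineq, eq, c):
    """Standard nonsmooth l1 penalty for  min f(x)  s.t.  g_i(x) <= 0, h_j(x) = 0.

    Returns P(x) = f(x) + c * (sum_i max(g_i(x), 0) + sum_j |h_j(x)|).
    Exactness means minimizers of P coincide with constrained minimizers
    once the penalty parameter c exceeds a finite threshold.
    """
    def P(x):
        viol = sum(max(g(x), 0.0) for g in ineq) + sum(abs(h(x)) for h in eq)
        return f(x) + c * viol
    return P

# Toy problem: min x^2 s.t. x >= 1, i.e. 1 - x <= 0; solution x* = 1 with
# Lagrange multiplier lambda = 2, so any c > 2 makes the l1 penalty exact here.
P = l1_penalty(lambda x: x**2, [lambda x: 1.0 - x], [], c=3.0)
grid = np.linspace(0.0, 2.0, 2001)
print(grid[np.argmin([P(x) for x in grid])])  # -> 1.0
```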

2.
This paper presents a coercive smoothed penalty framework for nonsmooth and nonconvex constrained global optimization problems. The properties of the smoothed penalty function are derived. Convergence to an \(\varepsilon \)-global minimizer is proved. At each iteration k, the framework requires the \(\varepsilon ^{(k)}\)-global minimizer of a subproblem, where \(\varepsilon ^{(k)} \rightarrow \varepsilon \). We show that the subproblem may be solved by well-known stochastic metaheuristics, as well as by the artificial fish swarm (AFS) algorithm. In the limit, convergence of the AFS algorithm to an \(\varepsilon ^{(k)}\)-global minimum of the real-valued smoothed penalty function is guaranteed with probability one, using the limiting behavior of Markov chains. In this context, we show that the transition probability of the Markov chain produced by the AFS algorithm, when generating a population whose best fitness is in the \(\varepsilon ^{(k)}\)-neighborhood of the global minimum, equals one when this property holds in the current population, and is strictly bounded away from zero when it does not. Preliminary numerical experiments show that the penalty algorithm based on the coercive smoothed penalty gives very competitive results when compared with other penalty-based methods.
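A schematic of this kind of penalty loop, with a stock stochastic global solver standing in for the AFS algorithm; the update rules, tolerances, and test problem are illustrative assumptions, not the paper's exact framework.

```python
import numpy as np
from scipy.optimize import differential_evolution  # stand-in for a metaheuristic

def penalty_loop(f, viol, bounds, eps=1e-4, mu=10.0, sigma=10.0, kmax=15):
    """Sketch of an iterative penalty framework: at iteration k, a subproblem is
    solved to (approximate) global optimality, then the penalty parameter mu is
    increased until the constraint violation viol(x) drops below eps."""
    for k in range(kmax):
        res = differential_evolution(lambda x: f(x) + mu * viol(x),
                                     bounds, seed=k)
        if viol(res.x) <= eps:
            break
        mu *= sigma
    return res.x

# Toy run: min (x0-2)^2 + (x1-2)^2  s.t.  x0 + x1 <= 2;  solution ~ (1, 1).
x = penalty_loop(lambda x: (x[0] - 2)**2 + (x[1] - 2)**2,
                 lambda x: max(x[0] + x[1] - 2.0, 0.0)**2,
                 bounds=[(-5.0, 5.0), (-5.0, 5.0)])
print(np.round(x, 3))
```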

3.
Recently, Hoheisel et al. (Nonlinear Anal 72(5):2514–2526, 2010) proved the exactness of the classical \(l_1\) penalty function for mathematical programs with vanishing constraints (MPVC) under the MPVC-linearly independent constraint qualification (MPVC-LICQ) and the assumption that the bi-active set is empty at a local minimum \(x^*\) of the MPVC. In this paper, by relaxing these two conditions, we show that the \(l_1\) penalty function is still exact at a local minimum \(x^*\) of the MPVC under MPVC-generalized pseudonormality and a new assumption. Our \(l_1\) exact penalty result includes that of Hoheisel et al. as a special case.
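For context, an MPVC is usually written in the following standard form, and the bi-active set at \(x^*\) is the index set \(\{i : H_i(x^*)=0,\ G_i(x^*)=0\}\):

\[
\min_{x\in \mathbb {R}^n} f(x)\quad \text{s.t.}\quad g_j(x)\le 0,\quad h_k(x)=0,\qquad H_i(x)\ge 0,\quad G_i(x)\,H_i(x)\le 0,\quad i=1,\ldots ,l.
\]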

4.
We study the minimization problem of a non-convex sparsity-promoting penalty function, the transformed \(l_1\) (TL1), and its application in compressed sensing (CS). The TL1 penalty interpolates the \(l_0\) and \(l_1\) norms through a nonnegative parameter \(a \in (0,+\infty )\), similar to \(l_p\) with \(p \in (0,1]\), and is known to satisfy unbiasedness, sparsity and Lipschitz continuity properties. We first consider the constrained minimization problem and discuss the exact recovery of the \(l_0\) norm minimal solution based on the null space property (NSP). We then prove the stable recovery of the \(l_0\) norm minimal solution if the sensing matrix A satisfies a restricted isometry property (RIP). We formulate a normalized problem to overcome the lack of a scaling property in the TL1 penalty function. For a general sensing matrix A, we show that the support set of a local minimizer corresponds to linearly independent columns of A. Next, we present difference-of-convex algorithms for TL1 (DCATL1) for computing TL1-regularized constrained and unconstrained problems in CS. The DCATL1 algorithm involves outer and inner loops of iterations, a one-time matrix inversion, repeated shrinkage operations and matrix-vector multiplications. The inner loop concerns an \(l_1\) minimization problem, for which we employ the Alternating Direction Method of Multipliers. For the unconstrained problem, we prove convergence of DCATL1 to a stationary point satisfying the first-order optimality condition. In numerical experiments, we identify the optimal value \(a=1\), and compare DCATL1 with other CS algorithms on two classes of sensing matrices: Gaussian random matrices and over-sampled discrete cosine transform (DCT) matrices. Among existing algorithms, the iteratively reweighted least squares method based on the \(l_{1/2}\) norm is the best in sparse recovery for Gaussian matrices, and the DCA algorithm based on the \(l_1\) minus \(l_2\) penalty is the best for over-sampled DCT matrices. We find that for both classes of sensing matrices, the performance of the DCATL1 algorithm (initialized with \(l_1\) minimization) always ranks near the top (if not at the top), and it is the most robust choice, insensitive to the conditioning of the sensing matrix A. DCATL1 is also competitive in comparison with DCA on other non-convex penalty functions commonly used in statistics with two hyperparameters.
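A quick numerical illustration of the interpolation property: in the TL1 literature the penalty is written coordinate-wise as \(\rho _a(t)=(a+1)|t|/(a+|t|)\), which tends to the \(\ell _0\) count as \(a\rightarrow 0\) and to the \(\ell _1\) norm as \(a\rightarrow \infty \). The snippet below is an illustrative sketch checking both limits, not code from the paper.

```python
import numpy as np

def tl1(x, a):
    """Transformed l1 penalty: sum_i (a+1)|x_i| / (a + |x_i|)."""
    x = np.abs(np.asarray(x, dtype=float))
    return np.sum((a + 1.0) * x / (a + x))

x = [0.0, 0.5, -2.0, 3.0]
print(tl1(x, 1e-8))  # ~3.0: approaches ||x||_0 (number of nonzeros) as a -> 0
print(tl1(x, 1e8))   # ~5.5: approaches ||x||_1 as a -> infinity
```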

5.
This paper deals with a set-valued gap function, \(\phi \), known in the literature for smooth and nonsmooth multiobjective optimization problems. It is known that \(0\in \phi (x^*)\) is a sufficient condition for efficiency of a feasible solution \(x^*\), while the converse does not hold. In the current work, the converse of this assertion is proved for properly efficient solutions. Afterwards, to avoid the complexities of set-valued maps, some new single-valued gap functions for nonsmooth multiobjective optimization problems with locally Lipschitz data are introduced. Important properties of the new gap functions are established.

6.
In this paper we study local sharp minima of the nonlinear programming problem via exact penalization. Utilizing generalized differentiation tools in variational analysis such as subderivatives and regular subdifferentials, we obtain some primal and dual characterizations for a penalty function associated with the nonlinear programming problem to have a local sharp minimum. These general results are then applied to the \(\ell _p\) penalty function with 0 ≤ p ≤ 1. In particular, we present primal and dual equivalent conditions in terms of the original data of the nonlinear programming problem, which guarantee that the \(\ell _p\) penalty function has a local sharp minimum with a finite penalty parameter in the case of \(p\in (\frac {1}{2}, 1]\) and \(p=\frac {1}{2}\) respectively. By assuming the Guignard constraint qualification (resp. the generalized Guignard constraint qualification), we also show that a local sharp minimum of the nonlinear programming problem can be an exact local sharp minimum of the \(\ell _p\) penalty function with p ∈ [0, 1] (resp. \(p\in [0, \frac {1}{2}]\)). Finally, we give some formulas for calculating the smallest penalty parameter for a penalty function to have a local sharp minimum.
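For reference, the \(\ell _p\) penalty function in question is usually taken in the following form (the \(p=0\) case being read as counting the violated constraints):

\[
P_p(x;c)\;=\;f(x)+c\Big(\sum_i \max \{g_i(x),0\}^p+\sum_j |h_j(x)|^p\Big),\qquad 0<p\le 1 .
\]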

7.
Fully robust OSCV is a modification of the OSCV method that produces consistent bandwidths in the cases of smooth and nonsmooth regression functions. We propose a practical implementation of the method based on the robust cross-validation kernel \(H_I\) in the case when the Gaussian kernel \(\phi \) is used in computing the resulting regression estimate. The kernel \(H_I\) produces practically unbiased bandwidths in the smooth and nonsmooth cases and performs adequately in the data examples. Negative tails of \(H_I\) occasionally result in unacceptably wiggly OSCV curves in the neighborhood of zero. This problem can be resolved by selecting the bandwidth from the largest local minimum of the curve. A further search for robust kernels with the desired properties led us to consider the quartic kernel for cross-validation purposes. The quartic kernel is almost robust in the sense that, in the nonsmooth case, it substantially reduces the asymptotic relative bandwidth bias compared to \(\phi \). However, the quartic kernel is found to produce more variable bandwidths than \(\phi \). Nevertheless, the quartic kernel has the advantage of producing smoother OSCV curves than \(H_I\). A simplified scale-free version of the OSCV method based on a rescaled one-sided kernel is also proposed.

8.
In this paper, we consider a full-Newton step feasible interior-point algorithm for the \(P_*(\kappa )\)-linear complementarity problem. The perturbed complementarity equation \(xs=\mu e\) is transformed by using a strictly increasing function, i.e., replacing \(xs=\mu e\) by \(\psi (xs)=\psi (\mu e)\) with \(\psi (t)=\sqrt{t}\), and the proposed interior-point algorithm is based on this algebraic equivalent transformation. Furthermore, we establish the currently best known iteration bound for the \(P_*(\kappa )\)-linear complementarity problem, namely \(O((1+4\kappa )\sqrt{n}\log \frac{n}{\varepsilon })\), which almost coincides with the bound derived for linear optimization, except that the iteration bound in the \(P_{*}(\kappa )\)-linear complementarity problem case is multiplied by the factor \((1+4\kappa )\).
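Concretely, with \(\psi (t)=\sqrt{t}\) applied componentwise, the transformed centering condition and its Newton linearization take the following form; this is a routine componentwise computation sketched here for orientation, not a reproduction of the paper's full search-direction system:

\[
\sqrt{xs}=\sqrt{\mu e}\quad \Longrightarrow \quad \frac{s\,\Delta x+x\,\Delta s}{2\sqrt{xs}}=\sqrt{\mu e}-\sqrt{xs},
\]

so the centering part of the search direction satisfies \(s\,\Delta x+x\,\Delta s=2\sqrt{xs}\,(\sqrt{\mu e}-\sqrt{xs})\), coupled with the usual feasibility equations.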

9.
Recently, the convergence of the Douglas–Rachford splitting method (DRSM) was established for minimizing the sum of a nonsmooth strongly convex function and a nonsmooth hypoconvex function under the assumption that the strong convexity constant \(\beta \) is larger than the hypoconvexity constant \(\omega \). Such an assumption, implying the strong convexity of the objective function, precludes many interesting applications. In this paper, we prove the convergence of the DRSM for the case \(\beta =\omega \), under relatively mild assumptions compared with some existing work in the literature.
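For orientation, here is the DRSM iteration itself in its standard prox form, on a toy convex pair where both proximal maps are available in closed form. The paper's analysis concerns the harder case where the second function is hypoconvex; this sketch only fixes the iteration and notation.

```python
# Douglas-Rachford splitting for min_x f(x) + g(x), driven by prox operators.
# Toy pair: f(x) = (alpha/2) x^2 and g(x) = (beta/2)(x - 1)^2, whose sum is
# minimized at x* = beta / (alpha + beta).
alpha, beta, gamma = 2.0, 1.0, 0.5

prox_f = lambda v: v / (1.0 + gamma * alpha)
prox_g = lambda v: (v + gamma * beta) / (1.0 + gamma * beta)

z = 0.0
for _ in range(200):
    x = prox_f(z)            # prox step on f
    y = prox_g(2.0 * x - z)  # prox step on the reflection
    z = z + (y - x)          # update of the governing sequence
print(x, beta / (alpha + beta))  # both ~ 1/3
```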

10.
The affine rank minimization problem is to minimize the rank of a matrix under linear constraints. It has many applications in various areas such as statistics, control, system identification and machine learning. Unlike the literature, which uses the nuclear norm or the general Schatten \(q~ (0<q<1)\) quasi-norm to approximate the rank of a matrix, in this paper we use the Schatten 1/2 quasi-norm approximation, which is a better approximation than the nuclear norm but leads to a nonconvex, nonsmooth and non-Lipschitz optimization problem. Importantly, we give a global necessary optimality condition for the \(S_{1/2}\) regularization problem by virtue of the special objective function. This is very different from the local optimality conditions usually used for general \(S_q\) regularization problems. Explicitly, the global necessary optimality condition for the \(S_{1/2}\) regularization problem is a fixed point inclusion associated with the singular value half thresholding operator. Naturally, we propose a fixed point iterative scheme for the problem and provide the convergence analysis of this iteration. By discussing the location and setting of the optimal regularization parameter, as well as using an approximate singular value decomposition procedure, we obtain a very efficient algorithm for the \(S_{1/2}\) regularization problem: the half-norm fixed point algorithm with an approximate SVD (HFPA algorithm). Numerical experiments on randomly generated and real matrix completion problems are presented to demonstrate the effectiveness of the proposed algorithm.
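Schematically, the fixed point inclusion mentioned above has the form below, where \(\mathcal {A}\) is the linear measurement map, \(\mu >0\) a step size, and \(H_{\lambda \mu }\) denotes half thresholding applied to the singular values; the exact constants and the thresholding formula are developed in the paper:

\[
M^* \in H_{\lambda \mu }\big(M^*-\mu \,\mathcal {A}^*(\mathcal {A}(M^*)-b)\big),
\]

which naturally suggests the iteration \(M^{k+1}=H_{\lambda \mu }\big(M^k-\mu \,\mathcal {A}^*(\mathcal {A}(M^k)-b)\big)\).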

11.
We show existence of solutions to the least gradient problem on the plane for boundary data in \(BV(\partial \varOmega )\). We also provide an example of a function \(f \in L^1(\partial \varOmega ) \setminus (C(\partial \varOmega ) \cup BV(\partial \varOmega ))\) for which a solution exists. We also show non-uniqueness of solutions, even for smooth boundary data, in the anisotropic case for a nonsmooth anisotropy. We additionally prove a regularity result valid also in higher dimensions.

12.
For each \({\alpha\in[0,2)}\) we consider the eigenvalue problem \({-{\rm div}(|x|^\alpha \nabla u)=\lambda u}\) in a bounded domain \({\Omega\subset \mathbb{R}^N}\) (\({N\geq 2}\)) with smooth boundary and \({0\in \Omega}\), subject to the homogeneous Dirichlet boundary condition. Denote by \({\lambda_1(\alpha)}\) the first eigenvalue of this problem. Using \({\Gamma}\)-convergence arguments, we prove the continuity of the function \({\lambda_1}\) with respect to \({\alpha}\) on the interval \({[0,2)}\).

13.
The Belgian chocolate problem involves maximizing a parameter \(\delta \) over a non-convex region of polynomials. In this paper we detail a global optimization method for this problem that outperforms previous such methods by exploiting underlying algebraic structure. Previous work has focused on iterative methods that, due to the complicated non-convex feasible region, may require many iterations or result in non-optimal \(\delta \). By contrast, our method locates the largest known value of \(\delta \) in a non-iterative manner. We do this by using the algebraic structure to go directly to large limiting values, reducing the problem to a simpler combinatorial optimization problem. While these limiting values are not necessarily feasible, we give an explicit algorithm for arbitrarily approximating them by feasible \(\delta \). Using this approach, we find the largest known value of \(\delta \) to date, \(\delta = 0.9808348\). We also demonstrate that in low degree settings, our method recovers previously known upper bounds on \(\delta \) and that prior methods converge towards the \(\delta \) we find.
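For readers unfamiliar with it, the Belgian chocolate problem (posed by Blondel) is usually stated as follows: maximize \(\delta >0\) such that there exist stable polynomials \(x(s)\) and \(y(s)\) for which

\[
x(s)\,(s^2-2\delta s+1)+y(s)\,(s^2-1)
\]

is also stable. The non-convexity of the set of stable polynomials is what makes the feasible region complicated; this statement is the standard one from the simultaneous-stabilization literature, sketched here for context.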

14.
The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective function up to order p (for \(p\ge 1\)) and to assume Lipschitz continuity of the p-th derivative, then an \(\epsilon \)-approximate first-order critical point can be computed in at most \(O(\epsilon ^{-(p+1)/p})\) evaluations of the problem’s objective function and its derivatives. This generalizes and subsumes results known for \(p=1\) and \(p=2\).
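Instantiating the exponent recovers the two classical cases:

\[
p=1:\; O(\epsilon ^{-2}) \quad \text{(first-order methods under Lipschitz gradient)}, \qquad p=2:\; O(\epsilon ^{-3/2}) \quad \text{(cubic regularization).}
\]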

15.
This paper considers the problem of positive semidefinite factorization (PSD factorization), a generalization of exact nonnegative matrix factorization. Given an m-by-n nonnegative matrix X and an integer k, the PSD factorization problem consists in finding, if possible, symmetric k-by-k positive semidefinite matrices \(\{A^1,\ldots ,A^m\}\) and \(\{B^1,\ldots ,B^n\}\) such that \(X_{i,j}=\text {trace}(A^iB^j)\) for \(i=1,\ldots ,m\) and \(j=1,\ldots ,n\). PSD factorization is NP-hard. In this work, we introduce several local optimization schemes to tackle this problem: a fast projected gradient method and two algorithms based on the coordinate descent framework. The main application of PSD factorization is the computation of semidefinite extensions, that is, representations of polyhedra as projections of spectrahedra, for which the matrix to be factorized is the slack matrix of the polyhedron. We compare the performance of our algorithms on this class of problems. In particular, we compute PSD extensions of size \(k=1+ \lceil \log _2(n) \rceil \) for the regular n-gons with \(n=5\), 8 and 10. We also show how to generalize our algorithms to compute the square root rank (the size of the factors in a PSD factorization where all factor matrices \(A^i\) and \(B^j\) have rank one) and completely PSD factorizations (the special case where the input matrix is symmetric and the equality \(A^i=B^i\) is required for all i).
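A minimal projected-gradient sketch for the least-squares formulation \(\min \sum _{i,j}(X_{i,j}-\text {trace}(A^iB^j))^2\) over PSD factors; the step size, identity initialization, and alternating update order are illustrative choices, not the tuned scheme of the paper.

```python
import numpy as np

def psd_project(M):
    """Project a matrix onto the PSD cone: symmetrize, clip negative eigenvalues."""
    M = (M + M.T) / 2.0
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T

def psd_factorize(X, k, steps=3000, lr=5e-3):
    """Alternating projected gradient on F(A,B) = sum_ij (X_ij - tr(A_i B_j))^2.

    Identity initialization keeps the sketch deterministic; random PSD starts
    also work."""
    m, n = X.shape
    A = [np.eye(k) for _ in range(m)]
    B = [np.eye(k) for _ in range(n)]
    for _ in range(steps):
        R = np.array([[X[i, j] - np.trace(A[i] @ B[j]) for j in range(n)]
                      for i in range(m)])              # residual matrix
        # dF/dA_i = -2 * sum_j R_ij B_j  (B_j symmetric); step, then project back.
        A = [psd_project(A[i] + 2.0 * lr * sum(R[i, j] * B[j] for j in range(n)))
             for i in range(m)]
        B = [psd_project(B[j] + 2.0 * lr * sum(R[i, j] * A[i] for i in range(m)))
             for j in range(n)]
    return A, B, R

# A rank-one nonnegative X admits a PSD factorization already with k = 1.
X = np.outer([1.0, 2.0], [1.0, 3.0])
A, B, R = psd_factorize(X, k=1)
print(np.abs(R).max())  # max residual; decreases toward 0 for this rank-one X
```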

16.
In this paper, we propose several integer programming (IP) formulations to exactly solve the minimum-cost \(\lambda \)-edge-connected k-subgraph problem, or the \((k,\lambda )\)-subgraph problem, based on its graph properties. Special cases of this problem include the well-known k-minimum spanning tree problem (if \(\lambda =1\)), the \(\lambda \)-edge-connected spanning subgraph problem (if \(k=|V|\)) and the k-clique problem (if \(\lambda = k-1\) and there are exactly k vertices in the subgraph). As a generalization of the k-minimum spanning tree problem and a special case of the \((k,\lambda )\)-subgraph problem, the (k, 2)-subgraph problem is studied, and some special graph properties are proved in order to find stronger and more compact IP formulations. Additionally, we study valid inequalities for these IP formulations. Numerical experiments are performed to compare the proposed IP formulations and inequalities.
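As a flavor of such formulations, one natural cutset-style model (a sketch for intuition only; the paper develops stronger and more compact formulations) uses binary vertex variables \(y_v\) and edge variables \(x_e\). By Menger's theorem, requiring every cut that separates two selected vertices to carry at least \(\lambda \) selected edges enforces \(\lambda \)-edge-connectivity of the selected subgraph:

\[
\min \sum _{e\in E}c_ex_e\quad \text{s.t.}\quad \sum _{v\in V}y_v=k,\qquad x_{uv}\le y_u,\;\; x_{uv}\le y_v,\qquad \sum _{e\in \delta (S)}x_e\ge \lambda \,(y_u+y_v-1)\quad \forall S\subset V,\ u\in S,\ v\notin S,
\]

with \(x\) and \(y\) binary.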

17.
In this paper, we are concerned with a bilevel optimization problem \(P_{0}\), where the lower-level problem is a vector optimization problem. First, we give an equivalent one-level optimization problem for which the nonsmooth Mangasarian–Fromowitz constraint qualification can hold at a feasible solution. Using a special scalarization function, one then deduces necessary optimality conditions for the initial problem.

18.
In this paper, we consider a global optimization problem for a symmetric Lipschitz continuous function \(g:[a,b]^k\rightarrow {\mathbb {R}}\), whose domain \([a,b]^k\subset {\mathbb {R}}^k\) consists of k! hypertetrahedrons of the same size and shape, on which the function g attains equal values. A global minimum can therefore be searched for in one hypertetrahedron only, but then this becomes a global optimization problem with linear constraints. Apart from that, some known global optimization algorithms cannot be applied in standard form to solve the problem. In this paper, it is shown how this global optimization problem with linear constraints can easily be transformed into a global optimization problem on the hypercube \([0,1]^k\), to which the DIRECT algorithm in standard form can be applied. This approach is somewhat less efficient than known global optimization methods for symmetric Lipschitz continuous functions (such as SymDIRECT or DISIMPL), but, on the other hand, it allows the use of publicly available and well-developed computer codes for solving a global optimization problem on the hypercube \([0,1]^k\) (e.g. the DIRECT algorithm). The method is illustrated and tested on standard symmetric functions and on very demanding center-based clustering problems for data with only one feature. An application to the image segmentation problem is also shown.
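One concrete cube-to-hypertetrahedron map of this kind (a sketch of the idea, not necessarily the exact transformation of the paper) sends \(t\in [0,1]^k\) recursively to a monotone point of \([a,b]^k\), so a box-constrained solver such as DIRECT can run directly on the cube. The clustering-style objective below is illustrative.

```python
import numpy as np
from scipy.optimize import direct  # SciPy's DIRECT solver (SciPy >= 1.9)

def cube_to_simplex(t, a, b):
    """Map t in [0,1]^k onto the hypertetrahedron {a <= x_1 <= ... <= x_k <= b}."""
    x, lo = np.empty(len(t)), a
    for i, ti in enumerate(t):
        lo = lo + (b - lo) * ti  # each coordinate lies between its predecessor and b
        x[i] = lo
    return x

# Symmetric objective: 1-d center-based clustering of 5 points with 2 centers.
data = np.array([0.1, 0.15, 0.2, 0.8, 0.9])
g = lambda x: sum(min((d - xi)**2 for xi in x) for d in data)  # symmetric in x

h = lambda t: g(cube_to_simplex(t, 0.0, 1.0))  # objective on the cube [0,1]^2
res = direct(h, [(0.0, 1.0)] * 2)
print(cube_to_simplex(res.x, 0.0, 1.0))  # ~ [0.15, 0.85], the two cluster centers
```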

19.
This paper describes an algorithm for solving structured nonsmooth convex optimization problems using the optimal subgradient algorithm (OSGA), which is a first-order method with complexity \(\mathcal {O}(\varepsilon ^{-2})\) for Lipschitz continuous nonsmooth problems and \(\mathcal {O}(\varepsilon ^{-1/2})\) for smooth problems with Lipschitz continuous gradient. If the nonsmoothness of the problem is manifested in a structured way, we reformulate the problem so that it can be solved efficiently by a new setup of OSGA (called OSGA-V) with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\). Further, to solve the reformulated problem, we equip OSGA-V with an appropriate prox-function for which the OSGA-V subproblem can be solved either in closed form or by a simple iterative scheme, which decreases the computational cost of applying the algorithm to large-scale problems. We show that applying the new scheme is feasible for many problems arising in applications. Some numerical results are reported, confirming the theoretical foundations.

20.
For \(s<3/2\), it is shown that the Cauchy problem for the b-family of equations is ill-posed in Sobolev spaces \(H^s\) on both the torus and the line when \(b>1\). The proof of ill-posedness depends on the value of b, where the most interesting case arises for \(b=3\), the Degasperis–Procesi equation. Considering that the b-family of equations is locally well-posed in \(H^s\) for \(s>3/2\), this work establishes 3/2 as the critical index of well-posedness in Sobolev spaces for \(b>1\).
