Similar Articles
20 similar articles found (search time: 31 ms).
1.
It is shown that a locally Lipschitz function is approximately convex if, and only if, its Clarke subdifferential is a submonotone operator. Consequently, in finite dimensions, the class of locally Lipschitz approximately convex functions coincides with the class of lower-C¹ functions. Directional approximate convexity is introduced and shown to be a natural extension of the class of lower-C¹ functions in infinite dimensions. The following characterization is established: a multivalued operator is maximal cyclically submonotone if, and only if, it coincides with the Clarke subdifferential of a locally Lipschitz directionally approximately convex function, which is unique up to a constant. Furthermore, it is shown that in Asplund spaces, every regular function is generically approximately convex.
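For background (our addition, not part of the original abstract), the notion of approximate convexity usually meant in this literature is: f is approximately convex at x̄ if for every ε > 0 there is δ > 0 such that for all x, y in the ball B(x̄, δ) and all t ∈ [0, 1],

```latex
f\bigl(tx + (1-t)y\bigr) \;\le\; t f(x) + (1-t) f(y) + \varepsilon\, t(1-t)\,\lVert x - y \rVert .
```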

2.
We generalize the outer subdifferential construction suggested by Cánovas, Henrion, López and Parra for max-type functions to pointwise minima of regular Lipschitz functions. We also answer an open question, posed by Li, Meng and Yang, about the relation between the outer subdifferential of the support of a regular function and the end set of its subdifferential.

3.
Optimization, 2012, 61(7): 1057–1073
This article studies the generalization of some mixed-integer nonlinear programming algorithms to convex nonsmooth problems. In the extended cutting plane method, gradients are replaced by subgradients of the convex function, and the resulting algorithm is proved to converge to a global optimum. A counterexample shows that this type of generalization is insufficient for certain versions of the outer approximation algorithm. However, with some modifications to the outer approximation method, a special class of nonsmooth functions can be handled: those whose subdifferential at any point is a convex combination of a finite number of subgradients at that point. Numerical results with the extended cutting plane method are also reported.
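To make the cutting-plane idea concrete, here is a minimal sketch of a Kelley-style loop in which gradients are replaced by subgradients. Only the continuous (no integer variables) case is shown, and the piecewise-linear objective, its subgradient oracle, and the box constraints are illustrative, not taken from the article.

```python
# Minimal sketch (not the article's implementation): Kelley-style cutting
# planes with subgradients instead of gradients. Example data only.
import numpy as np
from scipy.optimize import linprog

def f(x):  # nonsmooth convex example: pointwise max of affine functions
    return max(x[0] + x[1] - 1.0, -2.0 * x[0] + x[1], 0.5 * x[1] - x[0])

def subgrad(x):  # a subgradient: the gradient of one active affine piece
    pieces = [(np.array([1.0, 1.0]), -1.0),
              (np.array([-2.0, 1.0]), 0.0),
              (np.array([-1.0, 0.5]), 0.0)]
    vals = [g @ x + b for g, b in pieces]
    return pieces[int(np.argmax(vals))][0]

def cutting_plane(x0, bounds, tol=1e-6, max_iter=100):
    x = np.asarray(x0, float)
    n = len(x)
    cuts, f_best, x_best = [], np.inf, x
    for _ in range(max_iter):
        fx, g = f(x), subgrad(x)
        if fx < f_best:
            f_best, x_best = fx, x
        cuts.append((g, fx - g @ x))             # cut: g^T x + c <= t
        A = np.array([np.append(gk, -1.0) for gk, _ in cuts])
        b = np.array([-ck for _, ck in cuts])
        # master LP in (x, t): minimize t subject to all cuts and the box
        res = linprog(np.append(np.zeros(n), 1.0), A_ub=A, b_ub=b,
                      bounds=list(bounds) + [(None, None)])
        x, t = res.x[:n], res.x[n]               # t is a lower bound on min f
        if f_best - t <= tol:                    # upper/lower bound gap small
            return x_best
    return x_best

print(cutting_plane([1.0, 1.0], bounds=[(-2.0, 2.0), (-2.0, 2.0)]))
```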

4.
Locating proximal points is a component of numerous minimization algorithms. This work focuses on developing a method to find the proximal point of a convex function at a point, given an inexact oracle. Our method assumes that exact function values are at hand, but exact subgradients are either not available or not useful. We use approximate subgradients to build a model of the objective function and prove that the method converges to the true prox-point within acceptable tolerance. The subgradient g_k used at each step k is such that the distance from g_k to the true subdifferential of the objective function at the current iteration point is bounded by some fixed ε > 0. The algorithm includes a novel tilt-correct step applied to the approximate subgradient.
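For context, the standard definition of the proximal point of a convex function f at x with parameter λ > 0 (background, not quoted from the paper) is

```latex
\operatorname{prox}_{\lambda f}(x) \;=\; \operatorname*{arg\,min}_{y} \Bigl\{\, f(y) + \tfrac{1}{2\lambda}\,\lVert y - x \rVert^{2} \Bigr\},
```

so the subproblem solved here is itself a convex minimization whose model must be built from subgradients that may each be off by up to ε.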

5.
This article introduces a new method for computing regression quantile functions. The method applies a finite smoothing algorithm based on smoothing the nondifferentiable quantile regression objective function ρ_τ. The smoothing can be done for all τ ∈ (0, 1), and the convergence is finite for any finite number of τ_i ∈ (0, 1), i = 1,…,N. Numerical comparison shows that the finite smoothing algorithm outperforms the simplex algorithm in computing speed. Compared with the powerful interior point algorithm, introduced in an earlier article, it is competitive overall; moreover, it is significantly faster than the interior point algorithm when the design matrix in quantile regression has a large number of covariates. Additionally, the new algorithm provides the same accuracy as the simplex algorithm. In contrast, the interior point algorithm gives only approximate solutions in theory, and rounding may be necessary to improve the accuracy of these solutions in practice.
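For readers unfamiliar with the notation, the quantile regression objective referred to here is built from the check function (standard background, not quoted from the article):

```latex
\rho_{\tau}(u) \;=\; u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr), \qquad
\hat{\beta}(\tau) \;\in\; \operatorname*{arg\,min}_{\beta} \sum_{i=1}^{n} \rho_{\tau}\bigl(y_i - x_i^{\top}\beta\bigr).
```

It is the kink of ρ_τ at zero that makes the objective nondifferentiable and motivates the smoothing.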

6.
The subdifferential of a function is a generalization of the concept of gradient to nonsmooth functions. It is frequently used in variational analysis, particularly in the context of nonsmooth optimization. The present work proposes algorithms to reconstruct a polyhedral subdifferential of a function from the computation of finitely many directional derivatives. We provide upper bounds on the required number of directional derivatives when the space is ℝ¹ and ℝ², as well as in ℝⁿ when the subdifferential is known to possess at most three vertices.
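The link between directional derivatives and the subdifferential that such a reconstruction can exploit is the support-function identity for a finite convex function (standard background, not a quotation from the article):

```latex
f'(x; d) \;=\; \lim_{t \downarrow 0} \frac{f(x + t d) - f(x)}{t} \;=\; \max_{g \in \partial f(x)} \langle g, d \rangle ,
```

so each computed directional derivative supplies one supporting value of the polyhedral set ∂f(x).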

7.
For shape optimization of fluid flows governed by the Navier–Stokes equation, we investigate the effectiveness of shape gradient algorithms by analyzing the convergence and accuracy of mixed finite element approximations to both the distributed and boundary types of shape gradients. We present a convergence analysis with a priori error estimates for the two approximate shape gradients. The theoretical analysis shows that the distributed formulation has a superconvergence property. Numerical results with comparisons are presented to verify the theory and show that the shape gradient algorithm based on the distributed formulation is highly effective and robust for shape optimization.

8.
Tabu search (TS) is a metaheuristic which has proved efficient for solving various combinatorial optimization problems. However, few works deal with its application to the global minimization of functions of continuous variables. To perform this task, we propose a hybrid method combining tabu search and simplex search (SS). TS covers the solution space widely, drives the search towards solutions far from the current solution, and avoids the risk of being trapped in a local minimum. SS is used to accelerate the convergence towards a minimum. The Nelder–Mead simplex algorithm is a classical, very powerful local descent algorithm that makes no use of the derivatives of the objective function. A "simplex" is a geometrical figure consisting, in n dimensions, of (n+1) points. If any point of a simplex is taken as the origin, the n other points define vector directions that span the n-dimensional vector space. Through a sequence of elementary geometric transformations (reflection, contraction and extension), the initial simplex moves, expands or contracts. To select the appropriate transformation, the method uses only the values of the function to be optimized at the vertices of the current simplex. After each transformation, the current worst vertex is replaced by a better one. Our algorithm, called continuous tabu simplex search (CTSS) and implemented in two different forms (CTSSsingle, CTSSmultiple), is made up of two steps: first, an adaptation of TS to continuous optimization problems, which localizes a "promising area"; then, intensification within this promising area using SS. The efficiency of CTSS is extensively tested on analytical test functions whose global and local minima are known. A comparison is proposed with several variants of tabu search, genetic algorithms and simulated annealing. CTSS is applied to the design of an eddy current sensor for non-destructive testing.
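A minimal sketch of the Nelder–Mead transformations described above (reflection, expansion, contraction, shrink); the coefficients and the Rosenbrock test function are conventional defaults, not values taken from the CTSS article.

```python
# Minimal Nelder-Mead sketch: not the CTSS code, just the local-descent
# simplex loop described in the abstract, with conventional coefficients.
import numpy as np

def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=2000,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    n = len(x0)
    # initial simplex: x0 plus n points offset along coordinate axes
    simplex = [np.asarray(x0, float)]
    for i in range(n):
        v = np.asarray(x0, float); v[i] += step
        simplex.append(v)
    for _ in range(max_iter):
        simplex.sort(key=f)                        # best first, worst last
        best, worst, second_worst = simplex[0], simplex[-1], simplex[-2]
        if abs(f(worst) - f(best)) < tol:
            return best
        centroid = np.mean(simplex[:-1], axis=0)   # centroid excluding worst
        xr = centroid + alpha * (centroid - worst)           # reflection
        if f(best) <= f(xr) < f(second_worst):
            simplex[-1] = xr
        elif f(xr) < f(best):                                # expansion
            xe = centroid + gamma * (xr - centroid)
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:                                                # contraction
            xc = centroid + rho * (worst - centroid)
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                            # shrink toward best
                simplex = [best + sigma * (v - best) for v in simplex]
    return min(simplex, key=f)

rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(nelder_mead(rosenbrock, [-1.2, 1.0]))   # should approach (1, 1)
```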

9.
Projected gradient methods for linearly constrained problems
The aim of this paper is to study the convergence properties of the gradient projection method and to apply these results to algorithms for linearly constrained problems. The main convergence result is obtained by defining a projected gradient and proving that the gradient projection method forces the sequence of projected gradients to zero. A consequence of this result is that if the gradient projection method converges to a nondegenerate point of a linearly constrained problem, then the active and binding constraints are identified in a finite number of iterations. As an application of our theory, we develop quadratic programming algorithms that iteratively explore a subspace defined by the active constraints. These algorithms are able to drop and add many constraints from the active set, and can either compute an accurate minimizer by a direct method or an approximate minimizer by an iterative method of the conjugate gradient type. Thus, these algorithms are attractive for large-scale problems. We show that it is possible to develop a finitely terminating quadratic programming algorithm without nondegeneracy assumptions. Work supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38.
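A minimal illustration of the gradient projection iteration for the simplest linearly constrained case, bound constraints; the quadratic objective and the box below are illustrative and not taken from the paper.

```python
# Sketch: gradient projection for bound constraints. The stopping test
# drives the "projected gradient" (x - x_new)/alpha to zero, mirroring
# the convergence mechanism described in the abstract. Example data only.
import numpy as np

def project_box(x, lo, hi):
    return np.minimum(np.maximum(x, lo), hi)

def projected_gradient(grad, x0, lo, hi, alpha=0.1, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        x_new = project_box(x - alpha * grad(x), lo, hi)
        if np.linalg.norm(x - x_new) / alpha < tol:
            return x_new
        x = x_new
    return x

# minimize 0.5*||x - c||^2 over the box [0, 1]^2 with c outside the box
c = np.array([1.5, -0.5])
sol = projected_gradient(lambda x: x - c, x0=[0.5, 0.5], lo=0.0, hi=1.0)
print(sol)   # expected: the projection of c onto the box, [1.0, 0.0]
```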

10.
The paper concerns first-order necessary optimality conditions for problems of minimizing nonsmooth functions under various constraints in infinite-dimensional spaces. Based on advanced tools of variational analysis and generalized differential calculus, we derive general results of two independent types, called lower subdifferential and upper subdifferential optimality conditions. The former involve basic/limiting subgradients of cost functions, while the latter are expressed via Fréchet/regular upper subgradients in fairly general settings. All the upper subdifferential and major lower subdifferential optimality conditions obtained in the paper are new even in finite dimensions. We give applications of the general optimality conditions to mathematical programs with equilibrium constraints, deriving new results for this important class of intrinsically nonsmooth optimization problems.
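As a point of orientation only (a classical finite-dimensional prototype, not the paper's general statement), a lower subdifferential necessary condition for x̄ to minimize f over a set Ω has the form

```latex
0 \;\in\; \partial f(\bar{x}) + N(\bar{x};\, \Omega),
```

where ∂f denotes a (limiting) subdifferential and N(·; Ω) the corresponding normal cone; the upper subdifferential conditions of the paper are instead expressed through Fréchet/regular upper subgradients of the cost function.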

11.
12.
Minimization of the sum of three linear fractional functions
In this paper, we propose an efficient and reliable heuristic algorithm for minimizing and maximizing the sum of three linear fractional functions over a polytope. These problems are typical nonconvex minimization problems of practical as well as theoretical importance. The algorithm uses a primal–dual parametric simplex algorithm to solve a subproblem in which the value of one linear function is fixed. A subdivision scheme is employed in the space of this linear function to obtain an approximate optimal solution of the original problem. It turns out that this algorithm is much more efficient and usually generates a better solution than existing algorithms. We also develop a similar algorithm for minimizing the product of three linear fractional functions.
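In symbols, the problem class treated here can be written as follows (an illustrative formulation consistent with the abstract, not copied from the paper), with the denominators assumed positive on the polytope:

```latex
\min_{x \in P} \;\; \sum_{i=1}^{3} \frac{c_i^{\top} x + \gamma_i}{d_i^{\top} x + \delta_i},
\qquad P = \{\, x \in \mathbb{R}^n : A x \le b \,\}.
```

Fixing the value of one of the linear functions then yields the parametric subproblems handled by the primal–dual parametric simplex algorithm.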

13.
In this paper, a notion of generalized gradient on Riemannian manifolds is considered, and a subdifferential calculus for this generalized gradient is presented. A characterization of the tangent cone to a nonempty subset S of a Riemannian manifold M at a point x is obtained. These results are then applied to characterize epi-Lipschitz subsets of complete Riemannian manifolds.

14.
The Kelley cutting plane method is one of the methods commonly used to optimize the dual function in the Lagrangian relaxation scheme. Usually the Kelley cutting plane method uses the simplex method as its optimization engine. It is well known that the simplex method leaves the current vertex, follows an ascending edge and stops at the nearest vertex. What would happen if one instead continued the line search up to the best point? As a possible answer, we propose the face simplex method, which freely explores the polyhedral surface by following Rosen's gradient projection combined with a global line search over the whole surface. Furthermore, to avoid the zig-zagging of the gradient projection, we propose a conjugate gradient version of the face simplex method. For our preliminary numerical tests we have implemented this method in Matlab. This implementation clearly outperforms basic Matlab implementations of the simplex method. Against state-of-the-art simplex implementations in C or similar languages, our Matlab implementation is competitive only in the case of many cutting planes.
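For context (standard Lagrangian relaxation background, not text from the paper), the concave dual function being maximized and its Kelley cutting-plane model built from sampled points x_j are

```latex
\theta(\lambda) \;=\; \min_{x \in X} \bigl\{ f(x) + \lambda^{\top} g(x) \bigr\},
\qquad
\hat{\theta}_k(\lambda) \;=\; \min_{j \le k} \bigl\{ f(x_j) + \lambda^{\top} g(x_j) \bigr\} \;\ge\; \theta(\lambda).
```

Maximizing the polyhedral model θ̂_k is a linear program, and it is this polyhedral surface that the face simplex method explores with Rosen's gradient projection and a global line search.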

15.
Projections in a foveal space at u approximate functions with a resolution that decreases proportionally to the distance from u. Such spaces are defined by dilating a finite family of foveal wavelets, which are not translated. Their general properties are studied and illustrated with spline functions. Orthogonal bases are constructed with foveal wavelets of compact support and high regularity. Foveal wavelet coefficients give a pointwise characterization of nonoscillatory singularities. An algorithm to detect singularities and choose foveal points is derived. Precise approximations of piecewise regular functions are obtained with foveal approximations centered at singularity locations.

16.
The paper gives an estimate for the Hilbert space distance from an ε-optimal point to the minimum point of a convex, closed function whose subdifferential is a strongly monotone operator on its domain of definition. The Hausdorff distance between the ε-optimal points of the Tikhonov functions in ill-posed problems of mathematical programming is also estimated.
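To illustrate the flavour of such estimates (a standard special case, not the paper's result): if the subdifferential of the convex closed function f is strongly monotone with modulus μ > 0, then f grows quadratically around its minimizer x*, so an ε-optimal point x_ε satisfies

```latex
\tfrac{\mu}{2}\,\lVert x_\varepsilon - x^{*} \rVert^{2}
\;\le\; f(x_\varepsilon) - f(x^{*}) \;\le\; \varepsilon
\quad\Longrightarrow\quad
\lVert x_\varepsilon - x^{*} \rVert \;\le\; \sqrt{2\varepsilon/\mu}.
```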

17.
Many optimization algorithms require gradients of the model functions, but computing accurate gradients can be computationally expensive. We study the implications of using inexact gradients in the context of the multilevel optimization algorithm MG/Opt. MG/Opt recursively uses (typically cheaper) coarse models to obtain search directions for finer-level models. However, MG/Opt requires the gradient on the fine level to define the recursion. Our primary focus here is the impact of the gradient errors on the multilevel recursion. We analyze, partly through model problems, how MG/Opt is affected under various assumptions about the source of the error in the gradients, and demonstrate that in many cases the effect of the errors is benign. Computational experiments are included.

18.
The design and implementation of FireμSat2, an algorithm to detect microsatellites (short approximate tandem repeats) in DNA, are discussed. The algorithm relies on deterministic finite automata. Its parameters are designed to support requirements expressed by molecular biologists in data exploration. By setting the parameters of FireμSat2 as liberally as possible, FireμSat2 is able to detect more microsatellites than all other software algorithms that we have encountered. Furthermore, FireμSat2 was found to be faster than all the other algorithms investigated for approximate tandem repeat detection. In addition to being fast and accurate, the FireμSat2 algorithm is robust and easy to use.
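To illustrate the kind of pattern being detected, here is a naive exact tandem-repeat finder for short motifs; it is only a toy, since FireμSat2 itself uses deterministic finite automata and tolerates approximate (mismatched) repeats, which this function does not.

```python
# Toy exact tandem-repeat finder (not FireMuSat2): scans for a short motif
# repeated back-to-back at least min_copies times.
def find_tandem_repeats(seq, motif_lens=(2, 3, 4, 5), min_copies=3):
    """Return (start, motif, copies) for exact tandem repeats found left to right."""
    seq = seq.upper()
    hits, n = [], len(seq)
    for k in motif_lens:
        i = 0
        while i + k <= n:
            motif, copies, j = seq[i:i + k], 1, i + k
            while seq[j:j + k] == motif:       # count consecutive exact copies
                copies += 1
                j += k
            if copies >= min_copies:
                hits.append((i, motif, copies))
                i = j                          # skip past this repeat
            else:
                i += 1
    return hits

print(find_tandem_repeats("GGCACACACACATTGATGATGATGCC"))
# -> [(2, 'CA', 5), (13, 'TGA', 3)] for this example sequence
```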

19.
Automatic numerical algorithms attempt to provide approximate solutions that differ from exact solutions by no more than a user-specified error tolerance. The computational cost is often determined adaptively by the algorithm based on the function values sampled. While adaptive, automatic algorithms are widely used in practice, most lack guarantees, i.e., conditions on input functions that ensure that the error tolerance is met.
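As a toy illustration of an adaptive, automatic algorithm of this kind (our sketch, unrelated to any specific method analysed in the article), here is adaptive Simpson quadrature, which refines only where a sampled error estimate exceeds the user's tolerance; as the abstract notes, such data-driven error estimates are heuristics, not guarantees.

```python
# Toy adaptive Simpson quadrature: the number of function evaluations is
# chosen adaptively from sampled values, aiming to keep the total error
# below a user-specified tolerance. Illustrative only.
import math

def simpson(f, a, b):
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b)), m

def adaptive_simpson(f, a, b, tol=1e-8):
    whole, m = simpson(f, a, b)
    left, _ = simpson(f, a, m)
    right, _ = simpson(f, m, b)
    # classical error estimate: |S_left + S_right - S_whole| / 15
    if abs(left + right - whole) <= 15.0 * tol:
        return left + right + (left + right - whole) / 15.0
    return (adaptive_simpson(f, a, m, tol / 2.0) +
            adaptive_simpson(f, m, b, tol / 2.0))

print(adaptive_simpson(math.sin, 0.0, math.pi))   # exact value is 2
```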

20.
We consider a class of smoothing methods for minimization problems where the feasible set is convex but the objective function is not convex, not differentiable and perhaps not even locally Lipschitz at the solutions. Such optimization problems arise in wide-ranging applications including image restoration, signal reconstruction, variable selection, optimal control, stochastic equilibrium and spherical approximations. In this paper, we focus on smoothing methods for solving such optimization problems, which exploit the structure of the minimization problems and the composition of smoothing functions for the plus function (x)_+. Many existing optimization algorithms and codes can be used in the inner iteration of the smoothing methods. We present properties of the smoothing functions and the gradient consistency of the subdifferential associated with a smoothing function. Moreover, we describe how to update the smoothing parameter in the outer iteration of the smoothing methods to guarantee convergence of the smoothing methods to a stationary point of the original minimization problem.
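Two smoothing functions commonly used for the plus function (standard examples from the smoothing literature, not necessarily the ones composed in this particular paper) are the softplus and the Chen–Harker–Kanzow–Smale functions; the sketch below shows how both approach (t)_+ as the parameter μ shrinks.

```python
# Two standard smoothings of the plus function (t)_+ = max(t, 0):
#   softplus:  s_mu(t) = mu * log(1 + exp(t / mu))
#   CHKS:      c_mu(t) = (t + sqrt(t**2 + 4 * mu**2)) / 2
# Both are smooth, dominate (t)_+, and converge to it as mu -> 0.
# Standard examples only; the paper's composite smoothing functions may differ.
import numpy as np

def softplus(t, mu):
    return mu * np.logaddexp(0.0, t / mu)    # numerically stable log(1 + e^x)

def chks(t, mu):
    return 0.5 * (t + np.sqrt(t * t + 4.0 * mu * mu))

t = np.linspace(-2.0, 2.0, 9)
for mu in (1.0, 0.1, 0.01):
    gap_soft = np.max(np.abs(softplus(t, mu) - np.maximum(t, 0.0)))
    gap_chks = np.max(np.abs(chks(t, mu) - np.maximum(t, 0.0)))
    print(f"mu={mu}: max gap softplus={gap_soft:.4f}, CHKS={gap_chks:.4f}")
```

The derivative of either smoothing lies in [0, 1] and, as μ → 0, recovers the subdifferential [0, 1] of (t)_+ at zero, which is the gradient-consistency property mentioned in the abstract.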
