Similar Literature
20 similar documents found.
1.
Global error bounds for possibly degenerate or nondegenerate monotone affine variational inequality problems are given. The error bounds are on an arbitrary point and are in terms of the distance between the given point and a solution to a convex quadratic program. For the monotone linear complementarity problem the convex program is that of minimizing a quadratic function on the nonnegative orthant. These bounds may form the basis of an iterative quadratic programming procedure for solving affine variational inequality problems. A strong upper semicontinuity result is also obtained which may be useful for finitely terminating any convergent algorithm by periodically solving a linear program. This material is based on research supported by Air Force Office of Scientific Research Grant AFOSR-89-0410 and National Science Foundation Grants CCR-9101801 and CCR-9157632.
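As a hedged illustration (the symbols M, q and the specific program below are generic choices, not taken from the paper): for a monotone linear complementarity problem, a classical convex quadratic program whose optimal value is zero exactly when the LCP is solvable is

\[
\begin{aligned}
&\text{LCP}(q,M):\qquad 0 \le x \;\perp\; Mx+q \ge 0,\\[2pt]
&\text{associated convex QP:}\qquad \min_{x}\; x^{\mathsf T}(Mx+q)\quad \text{s.t.}\quad Mx+q\ge 0,\ x\ge 0,
\end{aligned}
\]

where convexity of the objective follows from monotonicity (positive semidefiniteness of M). An error bound of the kind described above then takes the generic form \(\operatorname{dist}(x,\bar X)\le \tau\, r(x)\), with \(\bar X\) the solution set and \(r\) a residual built from such a quadratic program; the exact program used in the paper may differ.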

2.
W. Hare, Optimization Letters, 2017, 11(7): 1217–1227
Derivative-free optimization (DFO) is the mathematical study of optimization algorithms that do not use derivatives. One branch of DFO focuses on model-based DFO methods, where an approximation of the objective function is used to guide the optimization algorithm. Proving convergence of such methods often relies on the assumption that the approximations form fully linear models—an assumption that requires the true objective function to be smooth. However, some recent methods have loosened this assumption and instead work with functions that are compositions of smooth functions with simple convex functions (the max-function or the \(\ell_1\) norm). In this paper, we examine the error bounds resulting from the composition of a convex lower semi-continuous function with a smooth vector-valued function when it is possible to provide fully linear models for each component of the vector-valued function. We derive error bounds for the resulting function values and subgradient vectors.
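For context, a hedged sketch of the standard fully linear requirement used in model-based DFO (the constants \(\kappa_f,\kappa_g\) and the ball radius \(\Delta\) are generic notation, not taken from this paper): a model \(m\) is fully linear for \(f\) on \(B(x;\Delta)\) if, with constants independent of \(x\) and \(\Delta\),

\[
|f(y)-m(y)| \;\le\; \kappa_f\,\Delta^{2}
\qquad\text{and}\qquad
\|\nabla f(y)-\nabla m(y)\| \;\le\; \kappa_g\,\Delta
\qquad\text{for all } y\in B(x;\Delta).
\]

This requirement presupposes a smooth \(f\); the paper instead bounds the error of \(h\circ F\) when \(h\) is convex and lower semi-continuous and each component of the smooth vector-valued map \(F\) admits a fully linear model.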

3.
In this paper, we consider the computation of a rigorous lower error bound for the optimal value of convex optimization problems. A discussion of large-scale problems, degenerate problems, and quadratic programming problems is included. It is allowed that parameters, which define the convex constraints and the convex objective functions, may be uncertain and may vary between given lower and upper bounds. The error bound is verified for the family of convex optimization problems which correspond to these uncertainties. It can be used to perform a rigorous sensitivity analysis in convex programming, provided the width of the uncertainties is not too large. Branch and bound algorithms can be made reliable by using such rigorous lower bounds.
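A minimal sketch of how such a rigorous lower bound can be produced, assuming a Lagrangian-duality viewpoint (this is an illustration, not the paper's specific construction): for the convex problem and its dual function

\[
p^{*}=\min_{x}\ f(x)\ \ \text{s.t.}\ \ g_i(x)\le 0,\ i=1,\dots,m,
\qquad
q(\lambda)=\inf_{x}\Big(f(x)+\sum_{i=1}^{m}\lambda_i\, g_i(x)\Big),
\]

weak duality gives \(q(\lambda)\le p^{*}\) for every \(\lambda\ge 0\). Evaluating \(q\) at an approximate dual point with directed rounding, so that every rounding error can only decrease the computed value, yields a verified lower bound; evaluating over the parameter intervals extends the bound to the whole family of uncertain problems.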

4.
We resolve a conjecture of Kalai relating approximation theory of convex bodies by simplicial polytopes to the face numbers and primitive Betti numbers of these polytopes and their toric varieties. The proof uses higher notions of chordality. Further, for \(C^2\)-convex bodies, asymptotically tight lower bounds on the g-numbers of the approximating polytopes are given, in terms of their Hausdorff distance from the convex body.

5.
Error bounds, which refer to inequalities that bound the distance of vectors in a test set to a given set by a residual function, have proven to be extremely useful in analyzing the convergence rates of a host of iterative methods for solving optimization problems. In this paper, we present a new framework for establishing error bounds for a class of structured convex optimization problems, in which the objective function is the sum of a smooth convex function and a general closed proper convex function. Such a class encapsulates not only fairly general constrained minimization problems but also various regularized loss minimization formulations in machine learning, signal processing, and statistics. Using our framework, we show that a number of existing error bound results can be recovered in a unified and transparent manner. To further demonstrate the power of our framework, we apply it to a class of nuclear-norm regularized loss minimization problems and establish a new error bound for this class under a strict complementarity-type regularity condition. We then complement this result by constructing an example to show that the said error bound could fail to hold without the regularity condition. We believe that our approach will find further applications in the study of error bounds for structured convex optimization problems.
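As a hedged illustration of the type of error bound meant here (the proximal-gradient residual below is one common choice; the paper's framework is more general): for \(\min_x F(x)=f(x)+g(x)\) with \(f\) smooth convex and \(g\) closed proper convex, a natural residual and error bound on a test set \(T\) read

\[
\mathcal{R}(x)=x-\operatorname{prox}_{g}\!\big(x-\nabla f(x)\big),
\qquad
\operatorname{dist}\!\big(x,\mathcal{X}^{*}\big)\;\le\;\kappa\,\big\|\mathcal{R}(x)\big\|
\quad\text{for all }x\in T,
\]

where \(\mathcal{X}^{*}\) is the optimal solution set and \(\kappa>0\) depends on the problem data. Convergence rates of proximal-type methods are typically derived from exactly such inequalities.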

6.
This paper presents solutions for numerical computation on convex hulls; computational algorithms that ensure logical consistency and accuracy are proposed. A complete numerical error analysis is presented. It is shown that a global error bound for vertex-facet adjacency does not exist under logically consistent procedures. To cope with practical requirements, vertex-preconditioned polytope computations are introduced using point and hyperplane adjustments. A global bound on vertex-facet adjacency error is affected by the global bound on vertices; formulas are given for a conservative choice of global error bounds.
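For orientation, a hedged sketch of what a vertex-facet adjacency error refers to (generic notation, not the paper's): if a facet lies in the hyperplane \(\{x:\ a^{\mathsf T}x=b\}\) with \(\|a\|_2=1\) and \(v\) is a computed vertex, adjacency is decided from the residual

\[
r(v;a,b)\;=\;\big|\,a^{\mathsf T}v-b\,\big|,
\qquad
v \ \text{declared incident to the facet iff}\ \ r(v;a,b)\le\varepsilon .
\]

The negative result quoted above says that no single tolerance \(\varepsilon\) works globally for logically consistent procedures, which is what motivates the vertex preconditioning by point and hyperplane adjustments.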

7.
Computable lower and upper bounds on the optimal and dual optimal solutions of a nonlinear, convex separable program are obtained from its piecewise linear approximation. They provide traditional error and sensitivity measures and are shown to be attainable for some problems. In addition, the bounds on the solution can be used to develop an efficient solution approach for such programs, and the dual bounds enable us to determine a subdivision interval which ensures a prespecified level of objective function accuracy. A generalization of the bounds to certain separable, but nonconvex, programs is given and some numerical examples are included.
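A hedged sketch of why piecewise linear approximation gives two-sided bounds for a separable convex program (a generic argument, not necessarily the paper's construction): for each convex term \(\phi_j\) on a grid over \([a_j,b_j]\), the chordal interpolant \(\overline{\phi}_j\) overestimates \(\phi_j\) and the pointwise maximum of tangent lines \(\underline{\phi}_j\) underestimates it, so

\[
\underline{\phi}_j\le\phi_j\le\overline{\phi}_j
\quad\Longrightarrow\quad
\min_{x\in X}\sum_j\underline{\phi}_j(x_j)\;\le\;\min_{x\in X}\sum_j\phi_j(x_j)\;\le\;\min_{x\in X}\sum_j\overline{\phi}_j(x_j).
\]

Refining the grid narrows the gap between the two piecewise linear problems, which is the mechanism behind choosing a subdivision interval that guarantees a prescribed objective accuracy.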

8.
Let Γ be a convex co-compact Fuchsian group. We formulate a conjecture on the critical line, i.e. the largest half-plane containing only finitely many resonances for the Laplace operator on the infinite-area hyperbolic surface \(X = \Gamma\backslash\mathbb{H}^{2}\). An upper bound depending on the dimension δ of the limit set is proved, which is in favor of the conjecture for small values of δ and in the case when δ > 1/2 and Γ is a subgroup of an arithmetic group. New omega lower bounds for the error term in the hyperbolic lattice point counting problem are derived.

9.
Ruscheweyh and Sheil-Small proved the Pólya-Schoenberg conjecture that the class of convex analytic functions is closed under convolution or Hadamard product. They also showed that close-to-convexity is preserved under convolution with convex analytic functions. In this note, we investigate harmonic analogs. Beginning with convex analytic functions, we form certain harmonic functions which preserve close-to-convexity under convolution. An auxiliary function enables us to obtain necessary and sufficient convolution conditions for convex and starlike harmonic functions, which lead to sufficient coefficient bounds for inclusion in these classes.
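For reference, the convolution in question is the Hadamard product of power series (a standard definition, stated in generic notation):

\[
f(z)=\sum_{n=0}^{\infty}a_n z^{n},\quad
g(z)=\sum_{n=0}^{\infty}b_n z^{n}
\;\Longrightarrow\;
(f*g)(z)=\sum_{n=0}^{\infty}a_n b_n z^{n},\qquad |z|<1 .
\]

The Pólya-Schoenberg statement is that \(f*g\) is convex whenever \(f\) and \(g\) are convex analytic functions; for harmonic maps \(f=h+\overline{g}\) the convolution is commonly taken coefficientwise on the analytic and co-analytic parts, which is the setting of the harmonic analogs above.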

10.
We derive bounds on the expectation of a class of periodic functions using the total variations of higher-order derivatives of the underlying probability density function. These bounds are a strict improvement over those of Romeijnders et al. (Math Program 157:3–46, 2016b), and we use them to derive error bounds for convex approximations of simple integer recourse models. In fact, we obtain a hierarchy of error bounds that become tighter if the total variations of additional higher-order derivatives are taken into account. Moreover, each error bound decreases if these total variations become smaller. The improved bounds may be used to derive tighter error bounds for convex approximations of more general recourse models involving integer decision variables.

11.
We give analytical bounds on the Value-at-Risk and on convex risk measures for a portfolio of random variables with fixed marginal distributions under an additional positive dependence structure. We show that assuming positive dependence information in our model leads to reduced dependence uncertainty spreads compared to the case where only the marginal distributions are known. In more detail, we show that in our model the assumption of a positive dependence structure improves the best-possible lower estimate of a risk measure, while leaving unchanged its worst-possible upper risk bounds. In a similar way, we derive for convex risk measures that the assumption of a negative dependence structure leads to improved upper bounds for the risk while it does not help to increase the lower risk bounds in an essential way. As a result we find that additional assumptions on the dependence structure may result in essentially improved risk bounds.
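For context, a hedged formulation of the quantities being bounded (generic notation, not the paper's): with the marginal distributions \(F_1,\dots,F_n\) fixed, the Value-at-Risk at level \(\alpha\) and its dependence-uncertainty range are

\[
\mathrm{VaR}_{\alpha}(S)=\inf\{s\in\mathbb{R}:\ \mathbb{P}(S\le s)\ge\alpha\},
\qquad
\underline{\mathrm{VaR}}_{\alpha}=\inf_{X_i\sim F_i}\mathrm{VaR}_{\alpha}\Big(\textstyle\sum_{i=1}^{n}X_i\Big),
\quad
\overline{\mathrm{VaR}}_{\alpha}=\sup_{X_i\sim F_i}\mathrm{VaR}_{\alpha}\Big(\textstyle\sum_{i=1}^{n}X_i\Big).
\]

In these terms, the result described above says that restricting the admissible joint laws by a positive dependence assumption raises the attainable lower bound while the worst-case upper bound stays essentially unchanged.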

12.
On the basis of an existence theorem for solutions of nonlinear systems, a method is given for finding rigorous error bounds for computed eigenvalues and eigenvectors of real matrices. It does not require the usual assumption that the true eigenvectors span the whole space. Further, a priori error estimates for eigenpairs corrected by an iterative method are given. Finally the results are illustrated with numerical examples. Dedicated to Professor Yoshikazu Nakai on his sixtieth birthday.
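To illustrate the flavour of such rigorous bounds in the simplest special case (a classical fact for symmetric matrices, not the paper's more general result for arbitrary real matrices): if \(A=A^{\mathsf T}\) and \((\hat\lambda,\hat x)\), \(\hat x\neq 0\), is a computed eigenpair, then

\[
\min_{\lambda\in\sigma(A)}\big|\lambda-\hat\lambda\big|
\;\le\;
\frac{\|A\hat x-\hat\lambda\hat x\|_{2}}{\|\hat x\|_{2}},
\]

so a rigorously evaluated residual immediately encloses a true eigenvalue. The nonsymmetric case, and the enclosure of eigenvectors, require the existence-theorem machinery described in the abstract.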

13.
After a brief survey of condition numbers for linear systems of equalities, we analyse error bounds for convex functions and convex sets. The canonical representation of a convex set is defined. Other representations of a convex set by a convex function are compared with the canonical representation. Then, condition numbers are introduced for convex sets and their convex representations.
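A hedged sketch of the kind of error bound and condition number involved (generic notation): if a closed convex set is represented as \(S=\{x:\ f(x)\le 0\}\) for a convex function \(f\), an error bound for this representation is an inequality

\[
\operatorname{dist}(x,S)\;\le\;c\,\big[f(x)\big]_{+}
\qquad\text{for all } x \text{ in a prescribed region},
\]

and the smallest admissible constant \(c\) acts as a condition number of the representation. Different convex representations of the same set, including the canonical one defined in the paper, can then be compared through these constants.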

14.
The notion of weak sharp minima unifies a number of important ideas in optimization. Part I of this work provides the foundation for the theory of weak sharp minima in the infinite-dimensional setting. Part II discusses applications of these results to linear regularity and error bounds for nondifferentiable convex inequalities. This work applies the results of Part I to error bounds for differentiable convex inclusions. A number of standard constraint qualifications for such inclusions are also examined.
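For reference, the weak sharp minimum property has the following standard form (generic notation): \(\bar S\) is a set of weak sharp minima for \(f\) over \(S\) with modulus \(\alpha>0\) if

\[
f(x)\;\ge\;\inf_{S}f\;+\;\alpha\,\operatorname{dist}\big(x,\bar S\big)
\qquad\text{for all } x\in S .
\]

Error bounds for a convex inclusion \(F(x)\in C\) are then typically obtained by applying this property to a suitable residual, e.g. \(x\mapsto\operatorname{dist}(F(x),C)\), which indicates how the two topics connect.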

15.
The concepts of convex order and comonotonicity have become quite popular in risk theory, essentially since Kaas et al. [Kaas, R., Dhaene, J., Goovaerts, M.J., 2000. Upper and lower bounds for sums of random variables. Insurance: Math. Econ. 27, 151-168] constructed bounds in the convex order sense for a sum S of random variables without imposing any dependence structure upon it. Those bounds are especially helpful if the distribution of S cannot be calculated explicitly or is too cumbersome to work with. This will be the case for sums of lognormally distributed random variables, which frequently appear in the context of insurance and finance. In this article we quantify the maximal error in terms of truncated first moments when S is approximated by a lower or an upper convex order bound to it. We make use of geometrical arguments; of the unknown distribution of S, only its variance is involved in the computation of the error bounds. The results are illustrated by pricing an Asian option. It is shown that under certain circumstances our error bounds outperform other known error bounds, e.g. the bound proposed by Nielsen and Sandmann [Nielsen, J.A., Sandmann, K., 2003. Pricing bounds on Asian options. J. Financ. Quant. Anal. 38, 449-473].
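For context, the convex order bounds of Kaas et al. (2000) cited above take the following form (standard notation; the quantification of the approximation error in terms of truncated first moments is the contribution of this paper):

\[
S^{l}=\sum_{i=1}^{n}\mathbb{E}\!\left[X_i\,\middle|\,\Lambda\right]
\;\le_{\mathrm{cx}}\;
S=\sum_{i=1}^{n}X_i
\;\le_{\mathrm{cx}}\;
S^{c}=\sum_{i=1}^{n}F_{X_i}^{-1}(U),
\]

where \(U\) is uniform on \((0,1)\), \(\Lambda\) is a conditioning random variable, and \(\le_{\mathrm{cx}}\) denotes the convex order. The comonotonic upper bound \(S^{c}\) has an explicitly computable distribution, which is what makes it attractive for pricing Asian options on sums of lognormal variables.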

16.
We give lower bounds for the volume, the surface area, and the other quermassintegrals of centro-symmetric convex universal covers in n-dimensional Euclidean spaces. The estimates are sharp in the case n = 2. The given bounds are also bounds for the quermassintegrals of convex translation covers.

17.
In this paper, we consider error bounds for DC multifunctions (differences of two convex multifunctions) with or without set constraints. We give some Robinson-Ursescu type results in Banach spaces. Using techniques of convex analysis, we present some results on the existence of error bounds in terms of normal cones and coderivatives.
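A hedged sketch of what an error bound for a multifunction means in this Robinson-Ursescu setting (a generic formulation, not the paper's exact statement): for a multifunction \(F\) between Banach spaces and a target point \(\bar y\), an error bound is an estimate

\[
d\big(x,\ F^{-1}(\bar y)\big)\;\le\;\kappa\; d\big(\bar y,\ F(x)\big)
\qquad\text{for all } x \text{ in a prescribed set},
\]

and the paper gives existence criteria for such estimates, for DC multifunctions with or without set constraints, with the constants characterised through normal cones and coderivatives.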

18.
In this article, our main aim is to develop gap functions and error bounds for a (non-smooth) convex vector optimization problem. We show that by focusing on convexity we are able to compute the gap functions quite efficiently, and we try to gain insight into the structure of the set of weak Pareto minimizers by viewing its graph. We discuss several properties of gap functions and develop error bounds when the data are strongly convex. We also compare our results with some recent results on weak vector variational inequalities with set-valued maps, and argue as to why we focus on the convex case.
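As a hedged point of comparison, the classical Auslender gap function for a scalar variational inequality \(\mathrm{VI}(F,K)\), i.e. find \(x\in K\) such that \(\langle F(x),y-x\rangle\ge 0\) for all \(y\in K\), is

\[
g(x)\;=\;\sup_{y\in K}\ \langle F(x),\,x-y\rangle,
\qquad
g\ge 0 \ \text{on } K,
\qquad
g(x)=0 \iff x \ \text{solves } \mathrm{VI}(F,K).
\]

The gap functions developed in this article play the analogous role for the nonsmooth convex vector optimization problem, with error bounds becoming available once the data are strongly convex.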

19.
This paper is concerned with the semilocal convergence of a continuation method between two third-order iterative methods, namely Halley's method and the convex acceleration of Newton's method, also known as the Super-Halley method. The convergence analysis is carried out using the recurrence relations approach, which simplifies the analysis and leads to improved results. The analysis is established under the assumption that the second Fréchet derivative satisfies a Lipschitz continuity condition. An existence-uniqueness theorem is given. Also, a closed-form error bound is derived in terms of a real parameter α ∈ [0, 1]. Two numerical examples are worked out to demonstrate the efficacy of our approach. On comparing the existence and uniqueness region and error bounds for the solution obtained by our analysis with those obtained by using majorizing sequences [15], we observe that our analysis gives better results. Further, for particular values of α, our analysis reduces to that for Halley's method (α = 0) and the convex acceleration of Newton's method (α = 1), respectively, with improved results.
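For orientation, the two endpoint methods have well-known one-dimensional prototypes (given here only as classical formulas with \(L_f(x)=f(x)f''(x)/f'(x)^{2}\); the operator versions and the exact parametrisation of the continuation family are those of the paper):

\[
x_{k+1}=x_k-\frac{f(x_k)}{f'(x_k)}\cdot\frac{1}{1-\tfrac12 L_f(x_k)}
\quad\text{(Halley)},
\qquad
x_{k+1}=x_k-\frac{f(x_k)}{f'(x_k)}\left(1+\frac{L_f(x_k)}{2\,\big(1-L_f(x_k)\big)}\right)
\quad\text{(Super-Halley)} .
\]

The continuation analysed in the paper interpolates between these two third-order schemes through the parameter \(\alpha\in[0,1]\).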

20.
We study computability and applicability of error bounds for a given semidefinite programming problem under the assumption that the recession function associated with the constraint system satisfies the Slater condition. Specifically, we give computable error bounds for the distances between feasible sets, optimal objective values, and optimal solution sets in terms of an upper bound for the condition number of a constraint system, a Lipschitz constant of the objective function, and the size of perturbation. Moreover, we are able to obtain an exact penalty function for semidefinite programming along with a lower bound for penalty parameters. We also apply the results to a class of statistical problems.
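A heavily hedged sketch of the shape such an exact-penalty result takes, in a generic SDP format (this is an illustration only, not the paper's precise construction): for \(\min\{\langle C,X\rangle:\ \mathcal{A}(X)=b,\ X\succeq 0\}\) one considers

\[
P_{\rho}(X)\;=\;\langle C,X\rangle\;+\;\rho\,\Big(\big\|\mathcal{A}(X)-b\big\|\;+\;\big\|[X]_{-}\big\|\Big),
\]

where \([X]_{-}\) is the projection of \(X\) onto the negative semidefinite cone. Exactness means that for every penalty parameter \(\rho\) above a computable threshold the minimizers of \(P_{\rho}\) coincide with the SDP solutions; the condition number and Lipschitz constant mentioned in the abstract enter through that threshold and through the perturbation bounds.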
