Similar Documents
20 similar documents found (search time: 31 ms)
1.
Summary: An unconstrained nonlinear programming problem with nondifferentiabilities is considered. The nondifferentiabilities arise from terms of the form max[f_1(x), ..., f_n(x)], which may enter the objective function nonlinearly. Local convex polyhedral upper approximations to the objective function are introduced. These approximations are used in an iterative method for solving the problem. The algorithm proceeds by solving quadratic programming subproblems to generate search directions. Approximate line searches ensure global convergence of the method to stationary points. The algorithm is conceptually simple and easy to implement. It generalizes efficient variable metric methods for minimax calculations.
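As a rough illustration of minimizing max[f_1(x), ..., f_n(x)] (a plain subgradient sketch, not the paper's quadratic-subproblem method; the test functions and step rule are illustrative assumptions):

```python
def minimax_subgradient(fs, grads, x0, n_iter=2000):
    """Sketch for min_x max_i f_i(x): step along the gradient of the
    currently active (maximal) piece, with diminishing step sizes."""
    x = x0
    best_x, best_val = x0, max(f(x0) for f in fs)
    for k in range(1, n_iter + 1):
        vals = [f(x) for f in fs]
        i = vals.index(max(vals))          # active piece
        x = x - (1.0 / k) * grads[i](x)    # subgradient step
        v = max(f(x) for f in fs)
        if v < best_val:                   # track the best iterate seen
            best_x, best_val = x, v
    return best_x, best_val

# min_x max((x-2)^2, (x+2)^2): minimizer x = 0, optimal value 4
fs = [lambda x: (x - 2.0) ** 2, lambda x: (x + 2.0) ** 2]
grads = [lambda x: 2 * (x - 2.0), lambda x: 2 * (x + 2.0)]
x, v = minimax_subgradient(fs, grads, 5.0)
```

The subgradient iterates oscillate around the kink at x = 0 with shrinking amplitude, which is exactly the nonsmooth behavior the paper's polyhedral upper approximations are designed to handle more efficiently.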

2.
We develop a new simple iteration formula, which does not require any derivatives of f(x), for solving a nonlinear equation f(x) = 0. The convergence order of the new method is proved to be quadratic. Furthermore, the new method can approximate complex roots. Several numerical examples show that the presented method gives a desirable approximation to the root without requiring a particularly good initial approximation, and is efficient in all cases, regardless of the behavior of f(x).
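The abstract does not reproduce the iteration formula. Steffensen's classical scheme, which shares both advertised properties (derivative-free, quadratic convergence), gives a minimal illustration of the idea:

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Derivative-free root finding with quadratic convergence:
    x_{k+1} = x_k - f(x_k)^2 / (f(x_k + f(x_k)) - f(x_k))."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        denom = f(x + fx) - fx   # divided-difference surrogate for f'(x)
        if denom == 0.0:
            break
        x = x - fx * fx / denom
    return x

root = steffensen(lambda x: x * x - 2.0, 1.0)  # converges to sqrt(2)
```

Note the modest initial guess x0 = 1.0 suffices here, matching the abstract's claim that no particularly good starting point is needed for problems like this.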

3.
In geometric terms, the Ekeland variational principle says that a lower-bounded proper lower-semicontinuous function f defined on a Banach space X has a point (x₀, f(x₀)) in its graph that is maximal in the epigraph of f with respect to the cone order determined by the convex cone K_λ = {(x, α) ∈ X × ℝ : λ‖x‖ ≤ −α}, where λ is a fixed positive scalar. In this case, we write (x₀, f(x₀)) ∈ λ-ext f. Here, we investigate the following question: if (x₀, f(x₀)) ∈ λ-ext f, where f is a convex function, and if ⟨f_n⟩ is a sequence of convex functions convergent to f in some sense, can (x₀, f(x₀)) be recovered as a limit of a sequence of points taken from λ-ext f_n? The convergence notions that we consider are bounded Hausdorff convergence, Mosco convergence, and slice convergence, a new convergence notion that agrees with Mosco convergence in the reflexive setting but which, unlike Mosco convergence, behaves well without reflexivity.

4.
In this paper, we consider functions of the form φ(x, y) = f(x)g(y) over a box, where f(x), x ∈ ℝ, is a nonnegative monotone convex function with a power or an exponential form, and g(y), y ∈ ℝⁿ, is a component-wise concave function which changes sign over the vertices of its domain. We derive closed-form expressions for convex envelopes of various functions in this category. We demonstrate via numerical examples that the proposed envelopes are significantly tighter than popular factorable programming relaxations.

5.
The construction and convergence of an approximate solution to the initial value problem x′ = f(t, x), x(0) = x₀, defined on closed subsets of locally convex spaces are given. Sufficient conditions that guarantee the existence of an approximate solution are analyzed in relation to the Nagumo boundary condition used in the Banach space case. It is also shown that the Nagumo boundary condition does not guarantee the existence of an approximate solution. Applications to fixed point theorems for weakly inward mappings are given.

6.
A function f is said to be cone superadditive if there exists a partition of ℝⁿ into a family of polyhedral convex cones such that f(z + x) + f(z + y) ≤ f(z) + f(z + x + y) holds whenever x and y belong to the same cone in the family. This concept is useful in nonlinear integer programming in that, if the objective function is cone superadditive, global minimality can be characterized by a local optimality criterion involving Hilbert bases. This paper shows cone superadditivity of L-convex and M-convex functions with respect to conic partitions that are independent of particular functions. L-convex and M-convex functions in discrete variables (integer vectors) as well as in continuous variables (real vectors) are considered.
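A minimal numeric check of the defining inequality in one dimension, with f(z) = z² and the cone partition {z ≥ 0}, {z ≤ 0} (an illustration only, not the paper's L-convex/M-convex setting):

```python
def cone_superadditive_check(f, zs, same_cone_pairs):
    """Verify f(z+x) + f(z+y) <= f(z) + f(z+x+y) for all shifts z and
    all pairs (x, y) drawn from the same cone."""
    return all(f(z + x) + f(z + y) <= f(z) + f(z + x + y)
               for z in zs for (x, y) in same_cone_pairs)

f = lambda z: z * z  # a simple convex function on the integers
# pairs taken from the nonnegative cone {z >= 0}
pairs = [(x, y) for x in range(0, 5) for y in range(0, 5)]
ok = cone_superadditive_check(f, range(-10, 11), pairs)
```

For f(z) = z² the slack in the inequality is exactly 2xy, which is nonnegative precisely when x and y share a sign, i.e. lie in the same cone; this is why the check passes on same-cone pairs but would fail on mixed-sign ones.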

7.
In this paper we introduce the ‘rectification method’ for the construction of algorithms of pre-designed order r for the solution of nonlinear equations f(x) = 0. Our method is based upon the derivation of a rectified approximation g(x) to f(x), via Padé formulas, such that applying the Newton-Raphson iterations to g generates the desired rth-order algorithm. Various properties of g are explored, as are recursive relations among rectified approximations associated with successive orders of convergence. It is demonstrated that using g in place of f can relax the standard sufficient conditions assuring convergence of the iterations.

8.
This paper is concerned with sequential Monte Carlo methods for optimizing a system under constraints. We wish to minimize f(x), where qᵢ(x) ≤ 0 (i = 1, …, m) must hold. We can calculate the qᵢ(x), but f(x) can only be observed in the presence of noise. A general approach, based on adapting a version of stochastic approximation to the penalty function method, is discussed and a convergence theorem is proved.
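The paper's scheme is not reproduced in the abstract; the sketch below combines a quadratic penalty with Kiefer-Wolfowitz (finite-difference) stochastic approximation on a made-up test problem — noisy_f, q, and every constant here are illustrative assumptions, not the authors' algorithm:

```python
import random

def noisy_f(x):
    # hypothetical objective f(x) = (x - 3)^2, observed with additive noise
    return (x - 3.0) ** 2 + random.gauss(0.0, 0.01)

def q(x):
    # computable constraint q(x) <= 0, here encoding x <= 2
    return x - 2.0

def sa_penalty(x0, n_iter=4000, rho=50.0):
    """Kiefer-Wolfowitz stochastic approximation on the penalized
    objective E[f(x)] + rho * max(0, q(x))^2."""
    x = x0
    for k in range(1, n_iter + 1):
        a = 1.0 / (k + 100)       # diminishing step size
        c = 1.0 / k ** 0.25       # finite-difference half-width
        def pen(z):
            return noisy_f(z) + rho * max(0.0, q(z)) ** 2
        grad = (pen(x + c) - pen(x - c)) / (2.0 * c)  # noisy gradient estimate
        x -= a * grad
    return x

random.seed(0)
x_star = sa_penalty(0.0)  # drifts toward the constrained minimizer x = 2
```

With a finite penalty weight the iterate settles slightly outside the feasible set (near x ≈ 2.02 for rho = 50), the usual bias of exterior penalty methods.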

9.
In this paper, a generalization of the Farkas lemma is presented for nonlinear mappings which involve a convex process and a generalized convex function. Using this result, a complete characterization of optimality is obtained for the following nonsmooth programming problem: minimize f(x), subject to –H(x), where f is a locally Lipschitz function satisfying a generalized convexity hypothesis and H is a closed convex process. This work was partially written while the author was a PhD student under the supervision of Dr. B. D. Craven, University of Melbourne, whose helpful guidance is much appreciated.

10.
Convex programs with an additional reverse convex constraint
A method is presented for solving a class of global optimization problems of the form (P): minimize f(x), subject to x ∈ D, g(x) ≥ 0, where D is a closed convex subset of ℝⁿ and f, g are convex finite functions on ℝⁿ. Under suitable stability hypotheses, it is shown that a feasible point x̄ is optimal if and only if 0 = max{g(x) : x ∈ D, f(x) ≤ f(x̄)}. On the basis of this optimality criterion, the problem is reduced to a sequence of subproblems Q_k, k = 1, 2, ..., each of which consists in maximizing the convex function g(x) over some polyhedron S_k. The method is similar to the outer approximation method for maximizing a convex function over a compact convex set.

11.
《Optimization》2012,61(4):389-399
We study the stability of a Hummel–Seebeck-like method for solving variational inclusions of the form 0 ∈ f(x) + G(x), where f is a single-valued function while G stands for a set-valued mapping, both of them acting in Banach spaces. Then, we investigate a measure of conditioning of these inclusions under canonical perturbations.

12.
We say that f is reciprocally convex if x ↦ f(x) is concave and x ↦ f(1/x) is convex on (0, +∞). Reciprocally convex functions generate a sequence of quasi-arithmetic means, with the first one lying between the harmonic and arithmetic means and the others above the arithmetic mean. We present several examples related to the gamma function, and we show that if f is a Stieltjes transform, then −f is reciprocally convex. An application in probability is also presented.

13.
A mathematical programming problem is said to have separated nonconvex variables when the variables can be divided into two groups, x = (x_1, ..., x_n) and y = (y_1, ..., y_n), such that the objective function and each constraint function is a sum of a convex function of (x, y) jointly and a nonconvex function of x alone. A method is proposed for solving a class of such problems which includes Lipschitz optimization, reverse convex programming problems, and also more general nonconvex optimization problems.

14.
Let c(x), d(x) and h(x) be linear functions defined over a convex polyhedron X = {x | Ax = r and 0 ⩽ x ⩽ u}, where A is the node-edge incidence matrix of a given network. Let f(x) = c(x)d(x) + h(x) be a (particular) quadratic function which we intend to minimize over X. In this paper we use the theory of multiobjective programming to develop algorithms for the semidefinite positive case (f(x) = ϱ[d(x)]² + h(x), where ϱ is a strictly positive real constant) and the semidefinite negative case (f(x) = ϱ[d(x)]² + h(x), where ϱ is a strictly negative real constant). A special indefinite case (h(x) constant over X) is also studied.
In the proposed algorithms, basic solutions are used in all iterations, with a possible exception for the last one. Moreover, only the concepts of the simplex method are used; consequently, these algorithms can be applied even when A is not the node-edge incidence matrix of a network. However, they are particularly attractive for the network case: network basic solutions are manipulated efficiently when appropriate data structures are used in the computational implementation of an algorithm.

15.
We find conditions for a smooth nonlinear map f: U → V between open subsets of Hilbert or Banach spaces to be locally convex in the sense that for some c and each positive ε < c the image f(B_ε(x)) of each ε-ball B_ε(x) ⊂ U is convex. We give a lower bound on c via the second-order Lipschitz constant Lip₂(f), the Lipschitz-open constant Lip_o(f) of f, and the 2-convexity number conv₂(X) of the Banach space X.

16.
The aim of this paper is to investigate a method of approximating a solution of the operator equation of Hammerstein type x + KF(x) = f by solutions of similar finite-dimensional problems which contain operators with better properties than K and F. Conditions for convergence and the convergence rate are given, and an iteration method for solving the approximate equation is proposed and applied to a concrete example.

17.
In this paper, we present a combination of the inexact Newton method and the generalized Newton method for solving nonsmooth equations F(x) = 0, characterizing the local convergence in terms of the perturbations and residuals. We assume that both the iteration matrices taken from the B-differential and the vectors F(x^(k)) are perturbed at each step. Some results are motivated by the approach of Cătinaş regarding smooth equations. We study the conditions which determine the admissible magnitude of perturbations preserving the convergence of the method. Finally, the utility of these results is illustrated by a variant of the perturbed inexact generalized Newton method for solving some general optimization problems.

18.
We study the almost everywhere convergence to the initial data f(x) = u(x, 0) of the solution u(x, t) of the two-dimensional linear Schrödinger equation Δu = i∂_t u. The main result is that u(x, t) → f(x) almost everywhere for t → 0 if f ∈ H^p(ℝ²), where p may be chosen < 1/2. To get this result (improving on Vega's work, see [6]), we devise a strategy to capture certain cancellations, which we believe has other applications in related problems.

19.
In this paper, we develop a parameterized proximal point algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is proved to be globally convergent with a worst-case O(1/t) convergence rate, where t denotes the iteration number. By properly choosing the algorithm parameters, numerical experiments on a sparse optimization problem arising in statistical learning show that the P-PPA can perform significantly better than other state-of-the-art methods, such as the alternating direction method of multipliers and the relaxed proximal point algorithm.
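The P-PPA itself handles linearly constrained separable problems and is not reproduced here; the classical (unparameterized) proximal point iteration it builds on can be sketched as follows, using the ℓ₁ norm as an assumed example objective:

```python
import numpy as np

def prox_l1(v, t):
    # proximal operator of t * ||.||_1, i.e. component-wise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_point(prox, x0, lam=1.0, n_iter=50):
    """Plain proximal point iteration: x_{k+1} = prox_{lam * f}(x_k).
    For convex f the iterates converge to a minimizer of f."""
    x = x0
    for _ in range(n_iter):
        x = prox(x, lam)
    return x

x = proximal_point(prox_l1, np.array([5.0, -3.0, 0.5]))
# each step shrinks every component by lam toward 0, the minimizer of ||.||_1
```

Each iteration solves a strongly convex subproblem (here in closed form), which is the property the parameterized variant exploits to get the O(1/t) rate on structured problems.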

20.
This paper presents the essentials of a method designed to solve optimization problems whose objective functions are of the form g(x) + ψ(u(x)), where ψ is differentiable and either concave or convex. It is shown that solutions to such problems can be obtained through solutions of the Lagrangian problem whose objective function is of the form g(x) + λu(x).
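A toy instance of this reduction, with assumed g(x) = (x−1)², u(x) = x, and ψ(u) = u² (all hypothetical): for each multiplier λ on a grid, the Lagrangian problem min_x g(x) + λu(x) is solved in closed form, and the candidate is scored with the true composite objective.

```python
def solve_via_lagrangian(psi, lambdas):
    """For each lam, minimize the Lagrangian (x-1)^2 + lam*x in closed
    form (x = 1 - lam/2), then score with the true objective
    (x-1)^2 + psi(x) and keep the best candidate."""
    best_x, best_val = None, float("inf")
    for lam in lambdas:
        x = 1.0 - lam / 2.0               # argmin of (x-1)^2 + lam*x
        val = (x - 1.0) ** 2 + psi(x)     # true composite objective
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# psi(u) = u^2: true minimizer of (x-1)^2 + x^2 is x = 0.5, value 0.5,
# recovered at lam = psi'(u(x*)) = 1
x, val = solve_via_lagrangian(lambda u: u * u, [k / 10.0 for k in range(41)])
```

The grid scan stands in for whatever multiplier search the paper uses; the point it illustrates is only that the composite problem's solution appears among the Lagrangian problems' solutions.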


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.  京ICP备09084417号