Similar Articles
10 similar articles found (search time: 250 ms)
1.
Summary  A recent note of Ih-Ching Hsu poses an unsolved problem, to wit, the general solution of the functional equation $g(x_1, x_2) + g(\varphi_1(x_1), \varphi_2(x_2)) = g(x_1, \varphi_2(x_2)) + g(\varphi_1(x_1), x_2)$, where the $\varphi_i$ are given functions. This short paper obtains the general solution. It gives conditions which imply that any continuous solution has the form $g_1(x_1) + g_2(x_2)$.
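As a quick sanity check (not part of the paper), the following Python snippet verifies numerically that any additive function $g(x_1, x_2) = g_1(x_1) + g_2(x_2)$ satisfies the functional equation for arbitrary given functions; the concrete choices of $g_1$, $g_2$, $\varphi_1$, $\varphi_2$ below are placeholders.

```python
import numpy as np

# Additive candidate solution g(x1, x2) = g1(x1) + g2(x2)
g1 = np.sin                      # arbitrary choice
g2 = np.exp                      # arbitrary choice
g = lambda x1, x2: g1(x1) + g2(x2)

# Arbitrary stand-ins for the fixed "given" functions phi_1, phi_2
phi1 = lambda x: x**3 - 2.0
phi2 = np.cos

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=1000), rng.normal(size=1000)

lhs = g(x1, x2) + g(phi1(x1), phi2(x2))
rhs = g(x1, phi2(x2)) + g(phi1(x1), x2)
print(np.max(np.abs(lhs - rhs)))   # ~1e-16: the additive form always satisfies the equation
```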

2.
A nonnegative, infinitely differentiable function $\sigma$ defined on the real line is called a Friedrichs mollifier function if it has support in $[0, 1]$ and $\int_0^1 \sigma(t)\,dt = 1$. In this article, the following problem is considered. Determine $\mu_k = \inf \int_0^1 |\sigma^{(k)}(t)|\,dt$, $k = 1, 2, \ldots$, where $\sigma^{(k)}$ denotes the $k$th derivative of $\sigma$ and the infimum is taken over the set of all mollifier functions $\sigma$, which is a convex set. This problem has applications to monotone polynomial approximation, as shown by this author elsewhere. The problem is reducible to three equivalent problems: a nonlinear programming problem, a problem on functions of bounded variation, and an approximation problem involving Tchebycheff polynomials. One of the results of this article shows that $\mu_k = k!\,2^{2k-1}$, $k = 1, 2, \ldots$. The numerical values of the optimal solutions of the three problems are obtained as a function of $k$. Some inequalities of independent interest are also derived. This research was supported in part by the National Science Foundation, Grant No. GK-32712.
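As a rough numerical illustration (not the paper's method), the sketch below builds one admissible mollifier — the standard bump on $[0, 1]$, normalized to unit integral — and compares $\int_0^1 |\sigma^{(k)}|$ for $k = 1, 2$ against the stated infimum $k!\,2^{2k-1}$. The particular bump is an arbitrary choice, so it only produces values above the infimum.

```python
import math
import numpy as np

# A (non-optimal) Friedrichs mollifier: the standard bump on [0, 1], normalized to integrate to 1.
t = np.linspace(1e-6, 1.0 - 1e-6, 200001)
dt = t[1] - t[0]
bump = np.exp(-1.0 / (t * (1.0 - t)))
sigma = bump / (bump.sum() * dt)

d1 = np.gradient(sigma, t)      # sigma'
d2 = np.gradient(d1, t)         # sigma''

for k, dk in ((1, d1), (2, d2)):
    value = np.abs(dk).sum() * dt
    mu_k = math.factorial(k) * 2 ** (2 * k - 1)   # the paper's infimum k! * 2^(2k-1)
    print(f"k={k}: integral of |sigma^({k})| = {value:.2f}  >=  mu_k = {mu_k}")
```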

3.
A smooth method for the finite minimax problem
We consider unconstrained minimax problems where the objective function is the maximum of a finite number of smooth functions. We prove that, under usual assumptions, it is possible to construct a continuously differentiable function whose minimizers yield the minimizers of the max function and the corresponding minimum values. On this basis, we can define implementable algorithms for the solution of the minimax problem which are globally convergent at a superlinear convergence rate. Preliminary numerical results are reported. This research was partially supported by the National Research Program on "Metodi di ottimizzazione per le decisioni", Ministero dell'Università e della Ricerca Scientifica e Tecnologica, Italy.
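The paper's particular smoothing construction is not reproduced here; as a minimal sketch of the general idea, the snippet below smooths the max of a few toy functions with a log-sum-exp approximation and compares the resulting minimizer with a direct minimization of the nonsmooth max. The component functions, the smoothing parameter mu, and the solvers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Component functions f_i; the minimax objective is F(x) = max_i f_i(x).
def fs(x):
    return np.array([x[0]**2 + x[1]**2,
                     (x[0] - 2.0)**2 + x[1]**2,
                     x[0] + 3.0 * x[1]])

def smooth_max(x, mu=1e-2):
    # Log-sum-exp smoothing: a C-infinity function within mu*log(m) of max_i f_i(x).
    v = fs(x)
    return mu * np.logaddexp.reduce(v / mu)

res_smooth = minimize(smooth_max, x0=np.zeros(2), method="BFGS")
res_exact = minimize(lambda x: fs(x).max(), x0=np.zeros(2), method="Nelder-Mead")
print(res_smooth.x, res_exact.x)   # both ~ (1, 0): the two minimizers nearly coincide
```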

4.
Given a convex function $f: \mathbb{R}^p \times \mathbb{R}^q \to (-\infty, +\infty]$, the marginal function $\varphi$ is defined on $\mathbb{R}^p$ by $\varphi(x) = \inf\{f(x, y) \mid y \in \mathbb{R}^q\}$. Our purpose in this paper is to express the approximate first-order and second-order directional derivatives of $\varphi$ at $x_0$ in terms of those of $f$ at $(x_0, y_0)$, where $y_0$ is any element for which $\varphi(x_0) = f(x_0, y_0)$. The author is indebted to one referee for pointing out an inaccuracy in an earlier version of Theorem 4.1.
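A minimal numerical sketch of the first-order case on a toy function chosen here (not taken from the paper): the one-sided difference quotient of the marginal function $\varphi$ at $x_0$ matches the directional derivative of $f(\cdot, y_0)$ at $x_0$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy convex function f(x, y) = (x - y)^2 + y^2 on R x R (an illustrative choice)
f = lambda x, y: (x - y)**2 + y**2

def phi(x):
    # Marginal function phi(x) = inf_y f(x, y); here the infimum is attained at y = x/2.
    return minimize_scalar(lambda y: f(x, y)).fun

x0 = 1.3
y0 = minimize_scalar(lambda y: f(x0, y)).x     # minimizing y0, so phi(x0) = f(x0, y0)
d, t = 1.0, 1e-6

dd_phi = (phi(x0 + t * d) - phi(x0)) / t       # first-order directional derivative of phi
dd_f = (f(x0 + t * d, y0) - f(x0, y0)) / t     # directional derivative of f(., y0) at x0
print(dd_phi, dd_f)                            # both ~ x0 = 1.3
```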

5.
Some methods for computing the maximum of a quadratic form on the unit ball of the maximum norm
Summary  Some direct and indirect methods are studied for computing $\max\{x^{t}Ax : \|x\|_{\infty} \le 1\}$, where $A$ is symmetric positive definite. The direct methods are based on particular properties of the $\ell_1$, $\ell_2$, and $\ell_\infty$ norms; they are very simple but apply only to certain families of matrices. The indirect method is the autodual method introduced in [25, 26, 29] with $\rho = 1$. In this case, the choice of an initial vector ensuring convergence of the iterative sequence to an optimal solution is discussed at length.
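As a brute-force illustration (not one of the paper's direct or autodual methods): for symmetric positive definite $A$ the quadratic form is convex, so its maximum over the $\ell_\infty$ unit ball is attained at a $\pm 1$ vertex, which can be found by enumeration in small dimensions. The random matrix below is an arbitrary test instance.

```python
import itertools
import numpy as np

# For symmetric positive definite A, x -> x^T A x is convex, so its maximum over the
# box {x : ||x||_inf <= 1} is attained at a vertex, i.e. at a +/-1 sign vector.
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
A = M @ M.T + 4 * np.eye(4)          # a random symmetric positive definite matrix

best_val, best_x = -np.inf, None
for signs in itertools.product([-1.0, 1.0], repeat=A.shape[0]):
    x = np.array(signs)
    val = x @ A @ x
    if val > best_val:
        best_val, best_x = val, x

print(best_val, best_x)              # max of the quadratic form on the l_inf unit ball
```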

6.
An Iterative Approach to Quadratic Optimization
Assume that $C_1, \ldots, C_N$ are $N$ closed convex subsets of a real Hilbert space $H$ having a nonempty intersection $C$. Assume also that each $C_i$ is the fixed point set of a nonexpansive mapping $T_i$ of $H$. We devise an iterative algorithm which generates a sequence $(x_n)$ from an arbitrary initial point $x_0 \in H$. The sequence $(x_n)$ is shown to converge in norm to the unique solution of the quadratic minimization problem $\min_{x \in C} \tfrac{1}{2}\langle Ax, x\rangle - \langle x, u\rangle$, where $A$ is a bounded linear strongly positive operator on $H$ and $u$ is a given point in $H$. Quadratic–quadratic minimization problems are also discussed.
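A sketch of one common iteration of this type, a hybrid steepest-descent/Halpern-style scheme on a toy problem in $\mathbb{R}^2$; the sets, the operator $A$, the point $u$, and the step sizes are illustrative assumptions, and this is not claimed to be the paper's exact algorithm.

```python
import numpy as np

# Toy instance (an assumption, not the paper's general Hilbert-space setting):
# C1 = {x : x1 >= 0}, C2 = {x : x2 >= 0}, so C = C1 ∩ C2 is the nonnegative orthant,
# and each C_i is the fixed point set of the nonexpansive projection T_i = P_{C_i}.
P1 = lambda x: np.array([max(x[0], 0.0), x[1]])
P2 = lambda x: np.array([x[0], max(x[1], 0.0)])

A = np.eye(2)                 # a bounded, strongly positive operator (here the identity)
u = np.array([-1.0, 2.0])

# Hybrid steepest-descent-type iteration: x_{n+1} = T x_n - a_n (A T x_n - u),
# with T = P2 o P1 and diminishing steps a_n.
x = np.array([5.0, -5.0])
for n in range(1, 200001):
    a = 1.0 / n
    Tx = P2(P1(x))
    x = Tx - a * (A @ Tx - u)

print(x)   # ~ (0, 2): the minimizer of 0.5<Ax,x> - <x,u> over the orthant
```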

7.
This paper deals with polynomial approximations $\varphi(x)$ to the exponential function $\exp(x)$ related to numerical procedures for solving initial value problems. Motivated by stability requirements, we present a numerical study of the largest disk $D(\rho) = \{z \in \mathbb{C} : |z + \rho| \le \rho\}$ that is contained in the stability region $S(\varphi) = \{z \in \mathbb{C} : |\varphi(z)| \le 1\}$. The radius of this largest disk is denoted by $r(\varphi)$, the stability radius. On the basis of our numerical study, several conjectures are made concerning $r_{m,p} = \sup\{r(\varphi) : \varphi \in \pi_{m,p}\}$. Here $\pi_{m,p}$ ($1 \le p \le m$; $p, m$ integers) is the class of all polynomials $\varphi(x)$ with real coefficients and degree $\le m$ for which $\varphi(x) = \exp(x) + O(x^{p+1})$ (for $x \to 0$).
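A small numerical sketch (not the paper's study): estimating $r(\varphi)$ by bisection for the Taylor polynomials of $\exp$, testing disk containment by sampling the boundary circle. The sampling density and tolerance are ad-hoc choices.

```python
import math
import numpy as np

def contains_disk(coeffs, rho, ntheta=2000):
    # Does the disk D(rho) = {z : |z + rho| <= rho} lie inside S(phi) = {z : |phi(z)| <= 1}?
    # For a polynomial, |phi| attains its maximum over the closed disk on the boundary
    # circle (maximum modulus principle), so sampling the boundary suffices up to
    # discretization error.
    theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
    z = -rho + rho * np.exp(1j * theta)
    return bool(np.all(np.abs(np.polyval(coeffs, z)) <= 1.0 + 1e-12))

def stability_radius(coeffs, hi=50.0, iters=60):
    # Bisection for r(phi), the radius of the largest such disk.
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if contains_disk(coeffs, mid) else (lo, mid)
    return lo

# Taylor polynomials of exp(x) of degree m = p (numpy expects highest-degree coefficient first)
for m in (1, 2, 3):
    coeffs = [1.0 / math.factorial(k) for k in range(m, -1, -1)]
    print(f"m = p = {m}: r(phi) ~ {stability_radius(coeffs):.4f}")
```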

8.
Let $F_1$ and $F_2$ be normed linear spaces and $S: F_0 \to F_2$ a linear operator on a balanced subset $F_0$ of $F_1$. If $N$ denotes a finite-dimensional linear information operator on $F_0$, it is known that there need not be a linear algorithm $\varphi: N(F_0) \to F_2$ which is optimal in the sense that $\|\varphi(N(f)) - S(f)\|$ is minimized. We show that the linear problem defined by $S$ and $N$ can be regarded as having a linear optimal algorithm if we allow the range of $\varphi$ to be extended in a natural way. The result depends upon imbedding $F_2$ isometrically in the space of continuous functions on a compact Hausdorff space $X$. This is done by making use of a consequence of the classical Banach–Alaoglu theorem.

9.
We propose a solution strategy for fractional programming problems of the form $\max_{x \in X} g(x)/\varphi(u(x))$, where the function $\varphi$ satisfies certain convexity conditions. It is shown that, subject to these conditions, optimal solutions to this problem can be obtained from the solution of the problem $\max_{x \in X} g(x) + \lambda u(x)$, where $\lambda$ is an exogenous parameter. The proposed strategy combines fractional programming and c-programming techniques. A maximal mean–standard deviation ratio problem is solved to illustrate the strategy in action.
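As an illustration of the parametric idea on a toy finite problem (the paper's c-programming machinery and the exact role of $\varphi$ are not reproduced): a Dinkelbach-style update of $\lambda$ that repeatedly solves $\max_x g(x) + \lambda u(x)$, with $g$ the mean and $u$ minus the standard deviation, recovers the mean–standard deviation ratio maximizer.

```python
import numpy as np

# Toy data (an assumption): each candidate x has a mean m[x] and standard deviation s[x];
# we maximize the ratio m[x] / s[x] over the finite set X = {0, ..., n-1}.
rng = np.random.default_rng(2)
m = rng.uniform(0.5, 3.0, size=200)
s = rng.uniform(0.2, 2.0, size=200)

# Parametric reduction: repeatedly solve max_x g(x) + lam * u(x) with g = m and u = -s,
# updating lam Dinkelbach-style until it stabilizes.
lam = 0.0
for _ in range(20):
    x = np.argmax(m - lam * s)        # the parametric subproblem, solved by enumeration
    new_lam = m[x] / s[x]
    if np.isclose(new_lam, lam):
        break
    lam = new_lam

print(x, lam)                          # parametric solution and its ratio
print(np.argmax(m / s), (m / s).max()) # brute-force check: same maximizer and value
```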

10.
It is known that the problem of minimizing a convex function $f(x)$ over a compact subset $X$ of $\mathbb{R}^n$ can be expressed as minimizing $\max\{g(x, y) \mid y \in X\}$, where $g$ is a support function for $f$ [$f(x) \ge g(x, y)$ for all $y \in X$ and $f(x) = g(x, x)$]. Standard outer-approximation theory can then be employed to obtain outer-approximation algorithms with procedures for dropping previous cuts. It is shown here how this methodology can be extended to nonconvex nondifferentiable functions. This research was supported by the Science and Engineering Research Council, UK, and by the National Science Foundation under Grant No. ECS-79-13148.
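A sketch of the classical convex case only (the paper's extension to nonconvex nondifferentiable functions and its cut-dropping rules are not shown): Kelley-style cuts $g(x, y) = f(y) + f'(y)(x - y)$ satisfy the support conditions above, and the master problem is approximated here on a grid. The objective and interval are illustrative choices.

```python
import numpy as np

# Convex test instance: f(x) = (x - 1)^2 + exp(x) on X = [-2, 2], with subgradient cuts
# g(x, y) = f(y) + f'(y) (x - y), which satisfy f(x) >= g(x, y) and f(x) = g(x, x).
f = lambda x: (x - 1.0)**2 + np.exp(x)
fprime = lambda x: 2.0 * (x - 1.0) + np.exp(x)

grid = np.linspace(-2.0, 2.0, 4001)     # X, discretized for the master problem
cuts = [-2.0]                           # initial cut point y_0

for _ in range(15):
    # Master problem: minimize the current outer approximation max_y g(x, y) over X
    lower = np.max([f(y) + fprime(y) * (grid - y) for y in cuts], axis=0)
    x_k = grid[np.argmin(lower)]
    cuts.append(x_k)                    # add a new cut at the master minimizer

print(x_k, f(x_k))                      # ~ the minimizer of f on [-2, 2]
```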

