Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
We discuss local convergence of Newton’s method to a singular solution x* of the nonlinear equations F(x) = 0, for $F:{\mathbb{R}}^n \rightarrow {\mathbb{R}}^n$ . It is shown that an existing proof of Griewank, concerning linear convergence to a singular solution x* from a starlike domain around x* for F twice Lipschitz continuously differentiable and x* satisfying a particular regularity condition, can be adapted to the case in which F′ is only strongly semismooth at the solution. Further, Newton’s method can be accelerated to produce fast linear convergence to a singular solution by overrelaxing every second Newton step. These results are applied to a nonlinear-equations reformulation of the nonlinear complementarity problem (NCP) whose derivative is strongly semismooth when the function f arising in the NCP is sufficiently smooth. Conditions on f are derived that ensure that the appropriate regularity conditions are satisfied for the nonlinear-equations reformulation of the NCP at x*.
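The acceleration described in this abstract (overrelaxing every second Newton step near a singular root) can be sketched for a scalar equation; the function names, the relaxation factor `alpha`, and the stopping tolerance below are illustrative assumptions of this sketch, not the paper's notation:

```python
def newton_overrelaxed(F, dF, x0, iters=20, alpha=2.0, tol=1e-14):
    """Newton's method that overrelaxes every second step.

    Near a singular root, plain Newton converges only linearly;
    overrelaxing alternate steps (alpha around 2) accelerates it.
    """
    x = x0
    for k in range(iters):
        fx = F(x)
        if abs(fx) < tol:  # stop before dividing by a vanishing derivative
            break
        x -= (alpha if k % 2 == 1 else 1.0) * fx / dF(x)
    return x
```

For F(x) = x², whose root at 0 is singular, plain Newton only halves the error at each step, while the overrelaxed variant reaches the root after one accelerated step.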

2.
The following results are proven. All subsystems of a dissipative Kolmogorov vector field ẋ_i = x_i f_i(x) are robustly permanent if and only if the external Lyapunov exponents are positive for every ergodic probability measure μ with support in the boundary of the nonnegative orthant. If the vector field is also totally competitive, its carrying simplex is C¹. Applying these results to dissipative Lotka-Volterra systems, robust permanence of all subsystems is equivalent to every equilibrium x* satisfying f_i(x*) > 0 whenever x_i* = 0. If in addition the Lotka-Volterra system is totally competitive, then its carrying simplex is C¹.

3.
We study generalized equations of the following form:

0 ∈ f(x) + g(x) + F(x),   (*)

where f is Fréchet differentiable in a neighborhood of a solution x* of (*), g is Fréchet differentiable at x*, and F is a set-valued map acting in Banach spaces. We prove the existence of a sequence (x_k), generated by a Newton-type iteration, that converges superlinearly to a solution of (*). We also present other versions of this iterative procedure that have superlinear and quadratic convergence, respectively.

4.
A new semilocal convergence theorem for Newton's method is established for solving a nonlinear equation F(x) = 0 defined in Banach spaces. It is assumed that the operator F is twice Fréchet differentiable and that F″ satisfies a Lipschitz-type condition. Results on uniqueness of the solution and error estimates are also given. Finally, these results are compared with those that use Kantorovich conditions.

5.
Let f0(x) be a function of one variable with a simple zero at r0. An iteration scheme is said to be locally convergent if, for some initial approximations x1, …, xs near r0 and all functions f which are sufficiently close (in a certain sense) to f0, the scheme generates a sequence {xk} which lies near r0 and converges to a zero r of f. The order of convergence of the scheme is the infimum of the order of convergence of {xk} over all such functions f. We study iteration schemes which are locally convergent and use only evaluations of f, f′, …, f^(d) at x1, …, x(k−1) to determine xk, and we show that no such scheme has order greater than d+2. This bound is the best possible, for it is attained by certain schemes based on polynomial interpolation. This work was supported (in part) by the Office of Naval Research under contract numbers N0014-69-C-0023 and N0014-71-C-0112.

6.
We consider a complete metric space (X, d) and a countable family of contraction mappings on X, F = {F_i : i ∈ ℕ}. We show the existence of a smallest invariant set (with respect to inclusion) for F. If the maps F_i are of the form F_i(x) = r_i x + b_i on X = ℝ^d, we prove a converse of the classic result on contraction mappings; more precisely, there exists a unique bounded invariant set if and only if r = sup_i r_i is strictly smaller than 1. Further, if ρ = {ρ_k}, k ∈ ℕ, is a probability sequence, we show that if there exists an invariant measure for the system (F, ρ), then its support must be precisely this smallest invariant set. If in addition there exists any bounded invariant set, this invariant measure is unique, even though there may be more than one invariant set.
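For affine contractions F_i(x) = r_i x + b_i with sup r_i < 1, the smallest invariant set can be approximated by iterating randomly chosen maps (a "chaos game"); the map data below (the middle-thirds Cantor set) and the helper name are illustrative assumptions, not the paper's construction:

```python
import random

def invariant_set_sample(maps, x0=0.0, iters=2000, seed=0):
    # Forward orbit under randomly chosen contractions (r, b) -> r*x + b;
    # the orbit accumulates on the smallest bounded invariant set.
    rng = random.Random(seed)
    x, pts = x0, []
    for _ in range(iters):
        r, b = rng.choice(maps)
        x = r * x + b
        pts.append(x)
    return pts
```

With maps [(1/3, 0), (1/3, 2/3)] the orbit stays inside [0, 1] and accumulates on the Cantor set, the unique bounded invariant set since sup r_i = 1/3 < 1.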

7.
Under certain conditions, the contraction mapping fixed point theorem guarantees the convergence of the iteration x(i+1) = f(x_i) toward a fixed point of the function f: ℝⁿ → ℝⁿ. When an interval extension F of f is used in a similar iteration scheme to obtain a sequence of interval vectors, these conditions need not provide convergence to a degenerate interval vector representing the fixed point, even if the width of the initial interval vector is chosen arbitrarily small. We give a sufficient condition on the extension F under which convergence is guaranteed. The centered form of Moore satisfies this condition.
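The behavior the sufficient condition guarantees can be illustrated on a one-dimensional contraction; the concrete map f(x) = x/2 + 1 (fixed point 2) and the step count are assumptions made for this sketch:

```python
def centered_form_iterate(lo, hi, steps=40):
    """Interval iteration X -> f(c) + f'(X)*(X - c) with c = mid(X),
    for f(x) = x/2 + 1, whose derivative enclosure is exactly [1/2, 1/2].
    """
    for _ in range(steps):
        c = (lo + hi) / 2.0
        fc = c / 2.0 + 1.0  # f evaluated at the midpoint
        lo, hi = fc + 0.5 * (lo - c), fc + 0.5 * (hi - c)
    return lo, hi
```

Here the interval width halves at every step, so the iterates shrink to the degenerate interval [2, 2] representing the fixed point.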

8.
In this paper, we present the combination of the inexact Newton method and the generalized Newton method for solving nonsmooth equations F(x) = 0, characterizing the local convergence in terms of the perturbations and residuals. We assume that both the iteration matrices taken from the B-differential and the vectors F(x^(k)) are perturbed at each step. Some results are motivated by the approach of Cătinaş regarding smooth equations. We study the conditions which determine the admissible magnitude of perturbations that preserves the convergence of the method. Finally, the utility of these results is illustrated by a variant of the perturbed inexact generalized Newton method for solving some general optimization problems.

9.
Given r real functions F1(x), …, Fr(x) and an integer p between 1 and r, the Low Order-Value Optimization problem (LOVO) consists of minimizing the sum of the functions that take the p smallest values. If (y1, …, yr) is a vector of data and T(x, ti) is the predicted value of observation i with the parameters x, it is natural to define Fi(x) = (T(x, ti) − yi)² (the quadratic error in observation i under the parameters x). When p = r this LOVO problem coincides with the classical nonlinear least-squares problem. However, the interesting situation is when p is smaller than r. In that case, the solution of LOVO allows one to discard the influence of an estimated number of outliers. Thus, the LOVO problem is an interesting tool for robust estimation of parameters of nonlinear models. When p < r the LOVO problem may also be used to find hidden structures in data sets. One of the most successful applications is the Protein Alignment problem. Fully documented algorithms for this application are available at www.ime.unicamp.br/~martinez/lovoalign. In this paper optimality conditions are discussed, algorithms for solving the LOVO problem are introduced, and convergence theorems are proved. Finally, numerical experiments are presented.
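The LOVO objective itself is easy to state in code; the helper name, `model`, and the data names below are illustrative assumptions, not the authors' implementation:

```python
def lovo_objective(x, model, t, y, p):
    # Sum of the p smallest squared residuals F_i(x) = (T(x, t_i) - y_i)^2.
    residuals = sorted((model(x, ti) - yi) ** 2 for ti, yi in zip(t, y))
    return sum(residuals[:p])
```

With p = r this is the ordinary least-squares sum; with p = r − 1 a single gross outlier is discarded from the fit.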

10.
Let Fn be a Finsler space with metric function F(x, y). M. Matsumoto [6] has defined a modified Finsler space Fn* whose metric function F*(x, y) is given by F*² = F² + (Xi(x)y^i)², where Xi are the components of a covariant vector which is a function of the coordinates only. Since a concurrent vector is a function of the coordinates only, Matsumoto and Eguchi [9] have studied various properties of the modified Finsler space Fn* under the assumption that Xi are the components of a concurrent vector field in Fn. In this paper we introduce the concept of a semi-parallel vector field in Fn and study the properties of the modified Finsler space Fn*.

11.
Let F1(x, y), …, F(2h+1)(x, y) be representatives of the equivalence classes of positive definite binary quadratic forms of discriminant −q (q a prime with q ≡ 3 mod 4) with integer coefficients. Then the number of integer solutions of Fi(x, y) = n (i = 1, …, 2h+1) can be calculated for each natural number n using L-functions of the imaginary quadratic field Q(√−q).

12.
This article is concerned with a class of shape-preserving four-point subdivision schemes which are stationary and which interpolate nonuniform univariate data {(xi, fi)}. These data are functional data, i.e., xi ≠ xj if i ≠ j. Subdivision for the strictly monotone x-values is performed by a subdivision scheme that makes the grid locally uniform. This article is concerned with constructing suitable subdivision methods for the f-data which preserve convexity; i.e., the data at the kth level, {(xi^(k), fi^(k))}, form a convex data set for all k provided the initial data are convex. First, a sufficient condition for preservation of convexity is presented. Additional conditions on the subdivision methods for convergence to a C¹ limit function are given. This leads to explicit rational convexity-preserving subdivision schemes which generate continuously differentiable limit functions from initial convex data. The class of schemes is further restricted to schemes that reproduce quadratic polynomials. It is proved that these schemes are third-order accurate. In addition, nonuniform linear schemes are examined which extend the well-known linear four-point scheme to the case of nonuniform data. Smoothness of the limit functions generated by these linear schemes is proved by using the well-known smoothness criteria of the uniform linear four-point scheme.
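For reference, the uniform linear four-point scheme that the nonuniform schemes extend inserts each new value from four neighboring old values; the clamped boundary treatment in this sketch is an assumption, not part of the scheme's definition:

```python
def four_point_refine(f):
    # One refinement level of the uniform interpolatory four-point scheme
    # (tension weight w = 1/16): old values are kept, and each new value is
    # 9/16*(f[i] + f[i+1]) - 1/16*(f[i-1] + f[i+2]).
    n, out = len(f), []
    for i in range(n - 1):
        out.append(f[i])
        fm1, fp2 = f[max(i - 1, 0)], f[min(i + 2, n - 1)]  # clamp at ends
        out.append(9 / 16 * (f[i] + f[i + 1]) - 1 / 16 * (fm1 + fp2))
    out.append(f[-1])
    return out
```

Away from the boundary the scheme reproduces cubic polynomials on uniform grids, which is the source of its C¹ limit functions.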

13.
Summary. We study linear sequential (adaptive) information for approximating zeros of polynomials of unbounded degree and establish a theorem on constrained approximation of smooth functions by polynomials. For a positive ε we seek a point x* such that |x* − αp| ≤ ε, where αp is a zero of a real polynomial p in the interval [a, b]. We assume that p belongs to the class F1 of polynomials of bounded arbitrary seminorm having a root in [a, b], or to the class F2 of polynomials which are nonpositive at a, nonnegative at b, and have exactly one simple zero in [a, b]. The information on p consists of n sequential (adaptive) evaluations of arbitrary linear functionals. The point x* is constructed by means of an algorithm which is an arbitrary mapping depending on the information on p. We show that there exists no information and no algorithm for computing x* for every p from F1, no matter how large the value of n is. This is a stronger result than that obtained by us for smooth functions. For the class F2 we can find a point x* for arbitrary p and ε. An optimal algorithm, i.e., an algorithm with the smallest error, is the bisection of the smallest known interval containing the root of p. We also exhibit optimal information operators, i.e., the linear functionals for which the error of an optimal algorithm that uses them is minimal. It turns out that in the class of nonsequential (parallel) information, i.e., when the functionals are given simultaneously, optimal information consists of the evaluations of a polynomial at n equidistant points in [a, b]. In the class of sequential continuous information, optimal information consists of evaluations of a polynomial at n points generated by the bisection method. To prove this result we establish a theorem on constrained approximation of smooth functions by polynomials. More precisely, we prove that a smooth function can be arbitrarily well uniformly approximated by a polynomial which satisfies constraints given by n arbitrary continuous linear functionals. Our results indicate that the problem of finding an ε-approximation to a real zero of a real polynomial (of unknown degree) is essentially of the same difficulty as the problem of finding an ε-approximation to a zero of an infinitely differentiable function.
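The optimal algorithm for the class F2 described above, bisection of the smallest known interval containing the root, is a few lines of code; the sign convention (p nonpositive at a, nonnegative at b) follows the class definition, while the function and parameter names are illustrative:

```python
def bisect(p, a, b, eps):
    # Assumes p(a) <= 0 <= p(b) with exactly one simple zero in [a, b];
    # returns an eps-approximation to that zero.
    while b - a > 2 * eps:
        m = (a + b) / 2.0
        if p(m) <= 0:
            a = m
        else:
            b = m
    return (a + b) / 2.0
```

Each evaluation halves the interval of uncertainty, which is exactly why bisection points form the optimal sequential continuous information.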

14.
This paper studies a monotone empirical Bayes test δn* for testing H0: θ ≤ θ0 against H1: θ > θ0 for the positive exponential family f(x|θ) = c(θ)u(x)exp(−x/θ), x > 0, using a weighted quadratic error loss. We investigate the convergence rate of the empirical Bayes test δn*. It is shown that the regret of δn* converges to zero at a rate O(n^(−1)), where n is the number of past data available when the present testing problem is considered. Errors regarding the rate of convergence claimed in Gupta and Li (J. Stat. Plan. Inference, 129: 3–18, 2005) are addressed.

15.
For any normed space X, the unit ball of X is weak*-dense in the unit ball of X**. This says that for any ε > 0, for any F in the unit ball of X**, and for any f1, …, fn in X*, the system of inequalities |fi(x) − F(fi)| ≤ ε can be solved for some x in the unit ball of X. The author shows that the requirement that ε be strictly positive can be dropped only if X is reflexive.

16.
Let F be a finite field with q = p^f elements, where p is a prime. Let N be the number of solutions (x1, …, xn) of the equation c1·x1^(d1) + ··· + cn·xn^(dn) = c over the finite field, where di | q − 1, ci ∈ F* (i = 1, 2, …, n), and c ∈ F. In this paper, we prove that if b1 is the least integer such that b1 ≥ ∑(i=1 to n) (f/ri)(Di, p−1)/(p−1), then q^([b1/f]−1) | N, where ri is the least integer such that di | p^(ri) − 1, Di·di = p^(ri) − 1, (Di, p−1) denotes the greatest common divisor of Di and p−1, and [b1/f] denotes the integer part of b1/f. If di = d for all i, then this result is an improvement of the theorem that p^b | N, where b is an integer less than n/d, obtained by J. Ax (1964, Amer. J. Math. 86, 255–261) and D. Wan (1988, Proc. AMS 103, 1049–1052), under a certain natural restriction on d and n.

17.
A new algorithm is proposed which, under mild assumptions, generates a sequence {xi} that, starting at any point in ℝⁿ, will converge to a set X defined by a mixed system of equations and inequalities. Each iteration of the algorithm requires the solution of a linear programming problem with relatively few constraints. By only assuming that the functions involved are continuously differentiable, a superlinear rate of convergence is achieved. No convexity whatsoever is required by the algorithm.

18.
We consider iterative trust region algorithms for the unconstrained minimization of an objective function $F ( \underline{x})$ , $\underline{x}\in \mathcal{R}^{n}$ , when F is differentiable but no derivatives are available, and when each model of F is a linear or a quadratic polynomial. The models interpolate F at n+1 points, which defines them uniquely when they are linear polynomials. In the quadratic case, second derivatives of the models are derived from information from previous iterations, but there are so few data that typically only the magnitudes of the second derivative estimates are correct. Nevertheless, numerical results show that much faster convergence is achieved when quadratic models are employed instead of linear ones. Just one new value of F is calculated on each iteration. Changes to the variables are either trust region steps or are designed to maintain suitable volumes and diameters of the convex hulls of the interpolation points. It is proved that, if F is bounded below, if $\nabla^{2} F$ is also bounded, and if the number of iterations is infinite, then the sequence of gradients $\underline{\nabla}F ( \underline{x}_{\,k} )$ , k = 1, 2, 3, …, converges to zero, where $\underline{x}_{\,k}$ is the centre of the trust region of the k-th iteration.

19.
For x = (x1, x2, ···, xn) ∈ (0,1)^n and r ∈ {1, 2, ···, n}, define the symmetric function Fn(x, r) = Fn(x1, x2, ···, xn; r) = ∏(1≤i1<···<ir≤n) (∑(j=1 to r) (1 + x(ij)^(1/3))/(1 − x(ij)^(1/3)))^(1/r), where i1, i2, ···, ir are integers. This paper proves that Fn(x, r) is Schur convex, Schur multiplicatively convex, and Schur harmonically convex on (0,1)^n. As applications, several inequalities are established by means of the theory of majorization.

20.
In this paper, an algorithm for minimizing a convex functional F(x), defined as the maximum of convex smooth functionals Fi(x), is described. The sequence of values F(xn) is monotonically decreasing, the algorithm is convergent, and it attains quadratic convergence near the solution. This work was supported by the National Research Council of Italy.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号