Similar Literature
20 similar articles found (search time: 15 ms)
2.
Assume that we want to recover $f : \Omega \to {\bf C}$ in the $L_r$-quasi-norm ($0 < r \le \infty$) by a linear sampling method $$ S_n f = \sum_{j=1}^n f(x^j) h_j , $$ where $h_j \in L_r(\Omega )$, $x^j \in \Omega$, and $\Omega \subset {\bf R}^d$ is an arbitrary bounded Lipschitz domain. We assume that $f$ is from the unit ball of a Besov space $B^s_{pq} (\Omega)$ or of a Triebel--Lizorkin space $F^s_{pq} (\Omega)$ with parameters such that the space is compactly embedded into $C(\overline{\Omega})$. We prove that the optimal rate of convergence of linear sampling methods is $$ n^{ -{s}/{d} + ({1}/{p}-{1}/{r})_+} , $$ and that nonlinear methods do not yield a better rate. To prove this we use a result from Wendland (2001) as well as results concerning the spaces $B^s_{pq} (\Omega) $ and $F^s_{pq}(\Omega)$. Another aim of this paper is to complement the existing literature about the function spaces $B^s_{pq} (\Omega)$ and $F^s_{pq} (\Omega)$ for bounded Lipschitz domains $\Omega \subset {\bf R}^d$. In this sense, the paper is also a continuation of a paper by Triebel (2002).
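As an illustrative sketch of a linear sampling method $S_n f = \sum_j f(x^j) h_j$, the snippet below (not from the paper; names and parameters are assumptions) takes the $h_j$ to be piecewise-linear hat functions on equispaced nodes in $[0,1]$, so that $S_n f$ is just linear interpolation of the samples. For a smooth $f$ the sup-norm error then decays as $n$ grows, in line with the rates the abstract discusses:

```python
import numpy as np

def linear_sampling(f, n):
    """Build the sampling operator S_n f = sum_j f(x_j) h_j with
    equispaced nodes x_j in [0,1] and hat functions h_j; their sum
    is exactly piecewise-linear interpolation of the samples."""
    x = np.linspace(0.0, 1.0, n)
    fx = f(x)
    def S_n(t):
        return np.interp(t, x, fx)
    return S_n

f = lambda t: np.sin(2 * np.pi * t)
t = np.linspace(0.0, 1.0, 1001)
errs = [np.max(np.abs(f(t) - linear_sampling(f, n)(t))) for n in (8, 16, 32)]
print(errs)  # sup-norm error shrinks as the number of samples n grows
```

Doubling the number of nodes roughly quarters the error for this smooth integrand, the second-order rate expected for piecewise-linear interpolation.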

3.
The gradient sampling method is a recently developed tool for solving unconstrained nonsmooth optimization problems. Using just first-order information about the objective function, it generalizes the steepest descent method, one of the most classical methods for minimizing a smooth function. This study determines under which circumstances the gradient sampling algorithm inherits the local convergence result of the Cauchy (steepest descent) method, under the assumption that the problem is given by a finite max-function around the optimal point. Additionally, we show how to satisfy the required hypotheses in practice during the execution of the algorithm.
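A minimal sketch of the gradient sampling idea (illustrative only; not the paper's algorithm): sample gradients at random points in a small ball around the current iterate, and use minus the (approximate) minimum-norm element of their convex hull as a search direction. The min-norm subproblem is solved here with a few exponentiated-gradient steps on the simplex, a simplification of the quadratic program a full implementation would use:

```python
import numpy as np

def min_norm_direction(G, iters=200):
    """Approximate the min-norm element of conv{rows of G} by
    exponentiated-gradient steps on the probability simplex."""
    m = G.shape[0]
    lam = np.full(m, 1.0 / m)
    for _ in range(iters):
        g = 2.0 * G @ (G.T @ lam)      # gradient of ||G^T lam||^2 in lam
        lam *= np.exp(-0.5 * g)
        lam /= lam.sum()
    return G.T @ lam

def gradient_sampling_direction(grad, x, eps=1e-3, m=20, seed=0):
    """Sample gradients at x and at m points in an eps-ball around x,
    then return minus the approximate min-norm convex combination --
    the descent direction used by gradient sampling."""
    rng = np.random.default_rng(seed)
    pts = np.vstack([x, x + eps * rng.standard_normal((m, x.size))])
    G = np.array([grad(p) for p in pts])
    return -min_norm_direction(G)

# f(x) = |x_0| + 2|x_1| is nonsmooth at the axes; away from the kinks
# the sampled direction is an ordinary descent direction.
grad = lambda p: np.array([np.sign(p[0]), 2.0 * np.sign(p[1])])
x = np.array([0.5, -0.3])
d = gradient_sampling_direction(grad, x)
print(d)  # -> approximately [-1, 2]
```

At points of nondifferentiability the sampled gradients disagree, and the min-norm combination averages them into a usable direction; that is the feature that distinguishes gradient sampling from plain steepest descent.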

4.
A superlinearly convergent algorithm of generalized projection type
Using matrix factorization and generalized projection techniques, this paper presents a superlinearly convergent algorithm of generalized projection type for nonlinear programming with linear constraints. The algorithm requires neither δ-active constraints nor repeated computation of the projection matrix at each step, which avoids numerical instability; by using the recursive formula for the matrix inverse, the computation is simple. Since an inexact line search is adopted, the algorithm is practical. Convergence and a superlinear convergence rate of the algorithm are proved.

5.
To construct an effective minimization algorithm for locally Lipschitz functions, we show how to compute a descent direction satisfying Armijo's condition. We present a finitely terminating algorithm that constructs an approximating set for the Goldstein subdifferential, leading to the desired descent direction. Using this direction, we propose a minimization algorithm for locally Lipschitz functions and prove its convergence. Finally, we implement our algorithm in MATLAB and report our testing results. The comparative numerical results attest to the efficiency of the proposed algorithm.
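Once a descent direction is available, Armijo's condition picks the step length. A minimal backtracking routine implementing that condition (an illustrative sketch, not the paper's code; the `slope` argument is an estimate of the directional derivative) looks like:

```python
def armijo_step(f, x, d, slope, t=1.0, beta=0.5, c=1e-4, max_halvings=50):
    """Backtracking line search: shrink t until Armijo's sufficient
    decrease condition f(x + t d) <= f(x) + c * t * slope holds,
    where slope is (an estimate of) the directional derivative
    f'(x; d), assumed negative for a descent direction d."""
    fx = f(x)
    for _ in range(max_halvings):
        if f(x + t * d) <= fx + c * t * slope:
            return t
        t *= beta
    return t

# Nonsmooth locally Lipschitz example f(x) = |x| at x = 3, direction d = -6,
# with directional derivative f'(3; -6) = -6:
f = lambda x: abs(x)
t = armijo_step(f, 3.0, -6.0, slope=-6.0)
print(t, f(3.0 + t * (-6.0)))
```

Note that nothing in the condition requires differentiability of `f`, which is why it pairs naturally with subdifferential-based descent directions.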

6.
A result by Elton(6) states that an iterated function system of i.i.d. random Lipschitz maps F_1, F_2, ... on a locally compact, complete separable metric space converges weakly to its unique stationary distribution if the pertinent Liapunov exponent is a.s. negative and a suitable moment condition holds. Diaconis and Freedman(5) showed the convergence rate to be geometric in the Prokhorov metric under a moment condition of order p > 0 on L_1, where L_1 denotes the Lipschitz constant of F_1. The same and also polynomial rates have recently been obtained in Alsmeyer and Fuh(1) by different methods. In this article, necessary and sufficient conditions are given for the positive Harris recurrence of the chain (M_n)_{n>=0} on some absorbing subset. If the support of the stationary distribution has nonempty interior, we further show that the same respective moment conditions ensuring the weak convergence rate results mentioned above now lead to polynomial, respectively geometric, rate results for the convergence to the stationary distribution in total variation or in f-norm, with f(x) = 1 + d(x, x_0) raised to some exponent in (0, p]. The results are applied to various examples discussed in the literature, including the Beta walk, multivariate ARMA models, and matrix recursions.
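The simplest iterated function system of random Lipschitz maps is the affine recursion X_{n+1} = A_n X_n + B_n (an AR(1)-type chain); the sketch below (illustrative parameters, not from the paper) forward-iterates many independent copies and checks that they settle into the stationary law, whose variance solves Var = E[A^2] Var + Var(B):

```python
import numpy as np

def iterate_ifs(n_chains, n_steps, a=0.5, seed=3):
    """Forward-iterate the random affine map F(x) = A x + B with
    |A| <= a < 1, so the Liapunov exponent E log|A| is negative and
    the chain converges weakly to its stationary distribution."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_chains)
    for _ in range(n_steps):
        A = rng.uniform(-a, a, n_chains)   # random Lipschitz constants
        B = rng.normal(0.0, 1.0, n_chains)
        x = A * x + B
    return x

x = iterate_ifs(50_000, 200)
# Stationary variance solves v = E[A^2] v + 1 with E[A^2] = a^2/3 = 1/12,
# so v = 12/11 and the stationary standard deviation is sqrt(12/11).
print(x.std())
```

With E[A²] = 1/12 here, the empirical standard deviation should be close to √(12/11) ≈ 1.045, matching the fixed-point calculation in the comment.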

7.
Convergence of the BFGS algorithm for nonconvex optimization problems
The BFGS algorithm is one of the best-known numerical algorithms for unconstrained optimization. Whether the BFGS algorithm is globally convergent for nonconvex functions is an open problem. This paper considers the BFGS algorithm with a Wolfe line search applied to nonconvex objective functions and gives a sufficient condition under which the algorithm converges.
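For reference, a minimal BFGS sketch is given below (illustrative, not the paper's code): the abstract's open question concerns Wolfe line searches, while this sketch uses plain Armijo backtracking to stay short, and the curvature test `s·y > 0` stands in for what the Wolfe conditions would guarantee automatically:

```python
import numpy as np

def bfgs(f, grad, x0, iters=50):
    """Minimal BFGS with Armijo backtracking.  H approximates the
    inverse Hessian; the update is skipped when s.y <= 0 so that H
    stays positive definite (a Wolfe line search would enforce this)."""
    n = x0.size
    H = np.eye(n)
    x = x0.astype(float)
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-8:
            break
        d = -H @ g
        t = 1.0
        while t > 1e-10 and f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        s = t * d
        x_new = x + s
        y = grad(x_new) - g
        if s @ y > 1e-12:                      # curvature condition
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, grad(x_new)
    return x

# Convex quadratic test problem with minimiser [1, -2]:
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
x_star = bfgs(f, grad, np.array([5.0, 5.0]))
print(x_star)  # -> close to [1, -2]
```

On convex problems like this the convergence theory is settled; the nonconvex case treated by the paper is exactly where such sketches can fail without extra conditions.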

8.
Convergence of a generalized quasi-Newton algorithm for general objective functions
This paper proves global convergence of a generalized quasi-Newton algorithm for unconstrained optimization under the Goldstein inexact line search for general objective functions, and, under certain conditions, proves local superlinear convergence of the algorithm.

9.
For the unconstrained optimization problem min f(x), x ∈ R^n, this paper proposes a new restarted conjugate gradient algorithm. The algorithm adopts a class of generalized Curry line search rules; the parameter β may be chosen from a finite closed interval and is allowed to take negative values. Global convergence of the algorithm is proved under rather weak conditions.
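A restarted nonlinear conjugate gradient sketch is shown below (illustrative only): β is the Polak-Ribiere-Polyak value, which can go negative, echoing the abstract's point that the algorithm tolerates negative β; the generalized Curry line search of the paper is replaced here by simple Armijo backtracking:

```python
import numpy as np

def cg_restart(f, grad, x0, iters=100, restart=10):
    """Nonlinear CG with periodic restarts and a descent safeguard.
    beta is the PRP value and may be negative; the line search is
    Armijo backtracking, a stand-in for the paper's Curry-type rule."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for k in range(iters):
        if g @ g < 1e-16:
            break
        t = 1.0
        while t > 1e-10 and f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = g_new @ (g_new - g) / (g @ g)   # PRP beta, can be < 0
        g = g_new
        d = -g if (k + 1) % restart == 0 else -g + beta * d
        if g @ d >= 0:                          # not a descent direction: restart
            d = -g
    return x

# Strongly convex quadratic test problem:
f = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2 + 0.5 * x[0] * x[1]
grad = lambda x: np.array([2 * (x[0] - 3) + 0.5 * x[1],
                           2 * (x[1] + 1) + 0.5 * x[0]])
x_star = cg_restart(f, grad, np.zeros(2))
```

The periodic restart (resetting d to the steepest descent direction) is the standard device for recovering global convergence when β is unrestricted in sign.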

10.
Consider the constrained optimization problem min f(x), x ∈ Ω, where f: R^n → R is continuously differentiable and Ω is a closed convex set. This paper studies the gradient projection method for this problem, adopts a new strategy for choosing the step length, and proves global convergence of the gradient projection method under rather weak conditions.
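The gradient projection iteration is x_{k+1} = P_Ω(x_k - t ∇f(x_k)). The sketch below (illustrative; the paper's contribution is a new step-length rule, while a fixed step is used here) applies it with Ω a box, where the projection is simple clipping:

```python
import numpy as np

def projected_gradient(grad, proj, x0, step=0.1, iters=200):
    """Gradient projection: step along -grad f, then project back
    onto the closed convex set Omega via proj."""
    x = proj(x0.astype(float))
    for _ in range(iters):
        x = proj(x - step * grad(x))
    return x

# Omega = [0,1]^2, f(x) = ||x - (2, 0.5)||^2: the unconstrained minimiser
# (2, 0.5) lies outside the box, so the solution clips to (1, 0.5).
proj = lambda x: np.clip(x, 0.0, 1.0)
grad = lambda x: 2 * (x - np.array([2.0, 0.5]))
x_star = projected_gradient(grad, proj, np.zeros(2))
print(x_star)  # -> approximately [1.0, 0.5]
```

The example also illustrates why the step-length rule matters: with a fixed step the method is only guaranteed to converge when the step is small relative to the Lipschitz constant of the gradient, which is what adaptive strategies avoid.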

11.
This paper proposes a class of three-term hybrid conjugate gradient algorithms for solving unconstrained optimization problems. The new algorithm combines the Hestenes-Stiefel method with the Dai-Yuan method, and its convergence under the Wolfe line search is proved without requiring a descent condition. Numerical experiments also show the advantage of this hybrid conjugate gradient algorithm over the HS and PRP methods.

14.
In this paper, a new nonmonotone BFGS algorithm for unconstrained optimization is introduced. Under mild conditions, the global convergence of this new algorithm on convex functions is proved. Some numerical experiments show that this new nonmonotone BFGS algorithm is competitive with the BFGS algorithm.

15.
A convergence theorem and an approximation theorem for semigroups of Lipschitz operators are given. These theorems are applied to a semi-discrete approximation problem for a quasi-linear wave equation with damping and a finite difference method for a quasi-linear equation of Kirchhoff type.

17.
Stochastic differential equations are often simulated with the Monte Carlo Euler method. Convergence of this method is well understood in the case of globally Lipschitz continuous coefficients of the stochastic differential equation. However, the important case of superlinearly growing coefficients has remained an open question. The main difficulty is that numerically weak convergence fails to hold in many cases of superlinearly growing coefficients. In this paper we overcome this difficulty and establish convergence of the Monte Carlo Euler method for a large class of one-dimensional stochastic differential equations whose drift functions have at most polynomial growth.
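The Monte Carlo Euler method combines the Euler-Maruyama discretization of the SDE with averaging over simulated paths. The sketch below (illustrative; it uses an Ornstein-Uhlenbeck process, whose coefficients are globally Lipschitz, i.e. the classical regime, whereas the paper treats polynomially growing drifts) estimates E[X_T]:

```python
import numpy as np

def monte_carlo_euler(drift, sigma, x0, T=1.0, n_steps=100,
                      n_paths=20_000, seed=1):
    """Euler-Maruyama scheme for dX = drift(X) dt + sigma(X) dW,
    simulated for n_paths independent paths; returns the terminal
    values X_T, whose sample mean is the Monte Carlo estimate."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + drift(x) * dt + sigma(x) * dW
    return x

# Ornstein-Uhlenbeck process dX = -X dt + dW with X_0 = 1:
# the exact mean is E[X_T] = exp(-T).
xT = monte_carlo_euler(lambda x: -x, lambda x: np.ones_like(x), x0=1.0)
print(xT.mean())  # -> approximately exp(-1) ~ 0.37
```

For superlinear drifts, naively replacing `lambda x: -x` with, say, a cubic drift can make the Euler scheme's moments diverge, which is exactly the difficulty the paper addresses.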

18.
鲁其辉, 朱道立. 《应用数学》 2005, 18(2): 272-278
Using the Clarke subdifferential, this paper analyzes the properties of locally Lipschitz continuous pseudolinear functions defined on a Banach space, and considers properties of the solution set of pseudolinear programs.

19.
The problem of finding marginal distributions of multidimensional random quantities has many applications in probability and statistics. Many of the solutions currently in use are very computationally intensive. For example, in a Bayesian inference problem with a hierarchical prior distribution, one is often driven to multidimensional numerical integration to obtain marginal posterior distributions of the model parameters of interest. Recently, however, a group of Monte Carlo integration techniques that fall under the general banner of successive substitution sampling (SSS) have proven to be powerful tools for obtaining approximate answers in a very wide variety of Bayesian modeling situations. Answers may also be obtained at low cost, both in terms of computer power and user sophistication. Important special cases of SSS include the “Gibbs sampler” described by Gelfand and Smith and the “IP algorithm” described by Tanner and Wong. The major problem plaguing users of SSS is the difficulty in ascertaining when “convergence” of the algorithm has been obtained. This problem is compounded by the fact that what is produced by the sampler is not the functional form of the desired marginal posterior distribution, but a random sample from this distribution. This article gives a general proof of the convergence of SSS and the sufficient conditions for both strong and weak convergence, as well as a convergence rate. We explore the connection between higher-order eigenfunctions of the transition operator and accelerated convergence via good initial distributions. We also provide asymptotic results for the sampling component of the error in estimating the distributions of interest. Finally, we give two detailed examples from familiar exponential family settings to illustrate the theory.
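The Gibbs sampler, the best-known special case of SSS, alternately draws each coordinate from its full conditional distribution. The sketch below (a textbook toy example, not from the article) targets a bivariate normal with correlation ρ, where each full conditional is itself normal, and recovers the correlation from the sampled chain:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=20_000, burn=1_000, seed=2):
    """Gibbs sampler for a standard bivariate normal with correlation
    rho: the full conditionals are X | Y=y ~ N(rho*y, 1 - rho^2) and
    symmetrically for Y | X.  Early draws are discarded as burn-in."""
    rng = np.random.default_rng(seed)
    s = np.sqrt(1.0 - rho ** 2)
    x = y = 0.0
    out = np.empty((n_samples, 2))
    for i in range(burn + n_samples):
        x = rng.normal(rho * y, s)   # draw from X | Y = y
        y = rng.normal(rho * x, s)   # draw from Y | X = x
        if i >= burn:
            out[i - burn] = (x, y)
    return out

draws = gibbs_bivariate_normal(rho=0.8)
print(np.corrcoef(draws.T)[0, 1])  # -> approximately 0.8
```

The output is a correlated random sample rather than a closed-form density, which is precisely the convergence-diagnosis difficulty the article analyzes: one must judge from the draws alone when the chain has reached its stationary distribution.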
