Similar Articles
20 similar articles found.
1.
Using only a single P-wave record, this paper carries out a joint inversion of the velocity and the source function of the one-dimensional wave equation. Since the inverse problem for the wave equation is ill-posed, the source function and the wave velocity are solved for separately by a regularized, stepwise iterative scheme, which greatly reduces the computational cost of the inversion and improves its numerical stability, providing a practical method for processing real one-dimensional seismic data. Detailed formulas are given for simultaneously inverting multiple parameters from a single supplementary condition of the inverse problem, and the corresponding numerical examples are analyzed and compared.

2.
Inverse problems are currently an active topic in mathematical physics, and an essential difficulty in solving them is ill-posedness. The standard approach to an ill-posed problem is to approximate its solution by the solution of a "nearby" well-posed problem, a strategy known as regularization. Constructing effective regularization methods is therefore a central concern in the study of ill-posed inverse problems. The most popular methods at present are Tikhonov regularization, which is based on a variational principle, and its refinements; they are among the most effective techniques for ill-posed problems, are widely adopted across many classes of inverse problems, and continue to be studied in depth.
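As a concrete illustration of the Tikhonov idea described above, the following minimal sketch (made-up data, not the method of any particular paper in this list) stabilizes an ill-conditioned linear system A x = b by minimizing ||A x - b||^2 + alpha ||x||^2, whose normal equations are (A^T A + alpha I) x = A^T b:

```python
import numpy as np

def tikhonov(A, b, alpha):
    # Solve the penalized normal equations (A^T A + alpha I) x = A^T b.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Ill-conditioned example: an 8x8 Hilbert matrix.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy right-hand side

x_naive = np.linalg.solve(A, b)     # direct solve: tiny noise is hugely amplified
x_reg = tikhonov(A, b, alpha=1e-8)  # regularized solve: damps unstable components
```

The regularization parameter alpha trades noise amplification against bias; the a priori and a posteriori parameter-choice rules discussed in entries 6-8 of this list address exactly that choice.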

3.
A regularization-based method for sparse signal representation is proposed. It differs from classical sparse representation algorithms in two main respects: first, sparsity is measured directly by the ℓ0 norm rather than the widely used ℓ1 norm; second, the regularization term ensures that the signal representation produced by the model is the sparsest among all representations. Here the regularization term uses the frame potential to characterize the "optimality" of the sparse representation, and the ℓ0 norm is approximated by a twice-differentiable concave function; an approximate algorithm for solving the proposed regularized model is derived, together with a convergence analysis. Numerical experiments also demonstrate the advantages of the proposed model and algorithm over classical ones.

4.
Starting from the array acoustic-wave CT imaging problem, this paper explains the principle and procedure of CT imaging by regularization: a regularization factor is introduced, a functional is minimized, and a family of regularized approximations is constructed from which the best solution is selected. Numerical examples show that the regularization method performs well for CT imaging velocity inversion.

5.
We consider high-dimensional sparse precision matrices in which the number of predictor variables p exceeds the sample size n. Since estimation of high-dimensional sparse precision matrices has become increasingly popular in recent years, this paper focuses on computing the regularization path, i.e., solving the optimization problem over the entire range of the regularization parameter. A precision-matrix estimator is first defined by minimizing a lasso objective under a positive-definiteness constraint, and an alternating direction method of multipliers (ADMM) algorithm is then used to trace the regularization path of the sparse precision matrix for fast estimation…

6.
A new regularization method based on singular value decomposition (cited by 1; self-citations: 0, other citations: 1)
Based on the singular-system theory of compact operators, a regularizing filter function is introduced to construct a new regularization method for solving operator equations of the first kind with an approximately given right-hand side, and an error analysis of the regularized solution is provided. Under an a priori choice of the regularization parameter, the error of the regularized solution is shown to be of asymptotically optimal order.

7.
Optimal a posteriori choice of the regularization parameter in a new regularization method (cited by 1; self-citations: 0, other citations: 1)
李功胜, 王家军. 《数学杂志》, 2002, 22(1): 103-106.
Using the singular system of a compact operator together with the generalized Arcangeli method for the a posteriori choice of the regularization parameter, it is proved that the regularization method for first-kind operator equations established in [1] converges and that the regularized solution attains the optimal asymptotic order.

8.
A new regularization method for solving operator equations of the first kind (cited by 4; self-citations: 0, other citations: 4)
杨宏奇, 侯宗义. 《数学学报》, 1997, 40(3): 369-376.
A new regularization method is proposed for operator equations of the first kind in which both the operator and the right-hand side are only approximately given. The regularization parameter is chosen by the generalized Arcangeli method, and convergence of the regularized solution is established. Compared with the usual Tikhonov regularization, the new method improves the asymptotic-order estimate of the regularized solution.

9.
This paper studies ℓ1-norm regularization of the split feasibility problem. The problem is first transformed into an unconstrained optimization problem by means of ℓ1-norm regularization; several properties of the ℓ1-regularized solution are then discussed, and a proximal gradient algorithm for computing it is presented. Finally, numerical experiments verify the feasibility and effectiveness of the algorithm.
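The proximal gradient idea mentioned above can be sketched on a generic ℓ1-regularized least-squares problem, min 0.5 ||A x - b||^2 + λ ||x||_1 (an illustrative stand-in with synthetic data, not the split-feasibility formulation of the paper). The proximal operator of the ℓ1 term is entrywise soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Proximal gradient (ISTA): gradient step on the smooth part,
    # then the proximal step on the l1 term.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [3.0, -2.0]        # sparse ground truth
b = A @ x_true                      # noiseless observations
x_hat = ista(A, b, lam=0.1)         # recovers the sparse vector
```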

10.
徐会林. 《数学杂志》, 2015, 35(6): 1461-1468.
This paper studies the first-order numerical differentiation problem by reformulating it equivalently as the solution of an integral equation of the first kind and presents a local regularization method for this problem. Under certain assumptions on the exact derivative, an a priori strategy for choosing the regularization parameter and the corresponding error estimates for the approximate derivative are discussed. Numerical experiments show that, compared with classical regularization methods, the local regularization method suppresses noise effectively while keeping the approximate derivative close to the exact one, especially when the exact derivative has discontinuities or sharp variations.

11.
This paper studies Bayesian estimation of the parameters of a Dirichlet population and of other quantities of interest. Uniform priors are placed on practically meaningful functions of the parameters; Markov chain Monte Carlo posterior samples are then obtained by applying the Metropolis algorithm to suitably transformed parameters, from which Bayesian estimates of the parameters and of the other quantities of interest follow.

12.
An algorithm of a continuous state-space MCMC method for solving the algebraic equation f(x) = 0 is given. It is applicable when the sign of f(x) changes frequently or the derivative f'(x) does not exist in the neighborhood of the root, cases in which Newton's method is hard to apply. Let n be the number of random variables generated by the computer in the algorithm. Then after m = O(n) transitions from the initial value x_0, an x* can be obtained such that |f(x*)| < e^{-cm} |f(x_0)| for a suitably chosen positive constant c. An illustration is also given, with a discussion of convergence obtained by adjusting the parameters of the algorithm.

13.
The complexity of the Metropolis–Hastings (MH) algorithm arises from the requirement of a likelihood evaluation for the full dataset in each iteration. One solution has been proposed to speed up the algorithm by a delayed acceptance approach where the acceptance decision proceeds in two stages. In the first stage, an estimate of the likelihood based on a random subsample determines if it is likely that the draw will be accepted and, if so, the second stage uses the full data likelihood to decide upon final acceptance. Evaluating the full data likelihood is thus avoided for draws that are unlikely to be accepted. We propose a more precise likelihood estimator that incorporates auxiliary information about the full data likelihood while only operating on a sparse set of the data. We prove that the resulting delayed acceptance MH is more efficient. The caveat of this approach is that the full dataset needs to be evaluated in the second stage. We therefore propose to substitute this evaluation by an estimate and construct a state-dependent approximation thereof to use in the first stage. This results in an algorithm that (i) can use a smaller subsample m by leveraging recent advances in Pseudo-Marginal MH (PMMH) and (ii) is provably within O(m^{-2}) of the true posterior.
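A toy sketch of the two-stage delayed acceptance scheme on a Gaussian mean model (all data and names here are made up for illustration). The subsample is fixed in advance so that the stage-one surrogate is a deterministic function of the current and proposed states, which keeps plain delayed acceptance exactly invariant; the random-subsample estimators of the article instead require its pseudo-marginal analysis:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
data = rng.normal(1.0, 1.0, size=n)   # synthetic dataset, unit variance
m = 500
sub = data[:m]                        # fixed subsample -> deterministic surrogate

def loglik(theta, x):
    # Gaussian log-likelihood with unit variance (constants dropped).
    return -0.5 * np.sum((x - theta) ** 2)

def surrogate(theta):
    # Cheap scaled-subsample approximation of the full log-likelihood.
    return (n / m) * loglik(theta, sub)

def da_mh(n_steps, theta0, scale=0.02):
    theta, chain = theta0, []
    for _ in range(n_steps):
        prop = theta + scale * rng.standard_normal()
        log_r1 = surrogate(prop) - surrogate(theta)
        # Stage 1: screen with the surrogate; most poor proposals die here.
        if np.log(rng.random()) < log_r1:
            # Stage 2: full-data correction factor rho / rho_1.
            log_r = loglik(prop, data) - loglik(theta, data)
            if np.log(rng.random()) < log_r - log_r1:
                theta = prop
        chain.append(theta)
    return np.array(chain)

chain = da_mh(4000, theta0=data.mean())  # start near the MLE for a short run
```

With a flat prior the posterior mean is the sample mean and the posterior standard deviation is about 1/sqrt(n), which the chain should reproduce.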

14.
Dynamically rescaled Hamiltonian Monte Carlo is introduced as a computationally fast and easily implemented method for performing full Bayesian analysis in hierarchical statistical models. The method relies on introducing a modified parameterization so that the reparameterized target distribution has close to constant scaling properties, and thus is easily sampled using standard (Euclidian metric) Hamiltonian Monte Carlo. Provided that the parameterizations of the conditional distributions specifying the hierarchical model are "constant information parameterizations" (CIPs), the relation between the modified and original parameterizations is bijective, explicitly computed, and admits exploitation of sparsity in the numerical linear algebra involved. CIPs for a large catalogue of statistical models are presented, and from the catalogue, it is clear that many CIPs are currently routinely used in statistical computing. A relation between the proposed methodology and a class of explicitly integrated Riemann manifold Hamiltonian Monte Carlo methods is discussed. The methodology is illustrated on several example models, including a model for inflation rates with multiple levels of nonlinearly dependent latent variables. Supplementary materials for this article are available online.

15.
Hamiltonian Monte Carlo (HMC) has been progressively incorporated within the statistician's toolbox as an alternative sampling method in settings when standard Metropolis–Hastings is inefficient. HMC generates a Markov chain on an augmented state space with transitions based on a deterministic differential flow derived from Hamiltonian mechanics. In practice, the evolution of Hamiltonian systems cannot be solved analytically, requiring numerical integration schemes. Under numerical integration, the resulting approximate solution no longer preserves the measure of the target distribution, therefore an accept–reject step is used to correct the bias. For doubly intractable distributions—such as posterior distributions based on Gibbs random fields—HMC suffers from some computational difficulties: computation of gradients in the differential flow and computation of the accept–reject proposals pose difficulties. In this article, we study the behavior of HMC when these quantities are replaced by Monte Carlo estimates. Supplemental codes for implementing methods used in the article are available online.
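For reference, a textbook HMC transition on a standard normal target, showing the leapfrog integrator and the accept–reject correction described above (a generic sketch; in the doubly intractable setting of the article, the gradient and the acceptance ratio would themselves be replaced by Monte Carlo estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(q):
    # Log density of N(0, 1), up to an additive constant.
    return -0.5 * q ** 2

def grad_log_target(q):
    return -q

def hmc_step(q, step=0.2, n_leap=20):
    p = rng.standard_normal()          # resample auxiliary momentum
    q_new, p_new = q, p
    # Leapfrog: half momentum step, alternating full steps, half momentum step.
    p_new += 0.5 * step * grad_log_target(q_new)
    for _ in range(n_leap - 1):
        q_new += step * p_new
        p_new += step * grad_log_target(q_new)
    q_new += step * p_new
    p_new += 0.5 * step * grad_log_target(q_new)
    # Accept-reject corrects the discretization bias of the integrator.
    log_accept = (log_target(q_new) - 0.5 * p_new ** 2) \
               - (log_target(q) - 0.5 * p ** 2)
    return q_new if np.log(rng.random()) < log_accept else q

samples = []
q = 3.0                                # deliberately start in the tail
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples[500:])      # discard burn-in
```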

16.
This article proposes a new Bayesian approach to prediction on continuous covariates. The Bayesian partition model constructs arbitrarily complex regression and classification surfaces by splitting the covariate space into an unknown number of disjoint regions. Within each region the data are assumed to be exchangeable and come from some simple distribution. Using conjugate priors, the marginal likelihoods of the models can be obtained analytically for any proposed partitioning of the space where the number and location of the regions is assumed unknown a priori. Markov chain Monte Carlo simulation techniques are used to obtain predictive distributions at the design points by averaging across posterior samples of partitions.

17.
Expected gain in Shannon information is commonly suggested as a Bayesian design evaluation criterion. Because estimating expected information gains is computationally expensive, examples in which they have been successfully used in identifying Bayes optimal designs are both few and typically quite simplistic. This article discusses in general some properties of estimators of expected information gains based on Markov chain Monte Carlo (MCMC) and Laplacian approximations. We then investigate some issues that arise when applying these methods to the problem of experimental design in the (technically nontrivial) random fatigue-limit model of Pascual and Meeker. An example comparing follow-up designs for a laminate panel study is provided.

18.
19.
In this study, we consider the Bayesian estimation of the unknown parameters and reliability function of the generalized exponential distribution based on progressive type-I interval censoring. The Bayesian estimates of the parameters and reliability function cannot be obtained in explicit form under the squared error and Linex loss functions, respectively; thus, we apply Lindley's approximation to compute these estimates. The Bayesian estimates are then compared with the maximum likelihood estimates using Monte Carlo simulations.

20.
Most regression modeling is based on traditional mean regression, which yields non-robust estimates under non-normal errors. Compared to conventional mean regression, composite quantile regression (CQR) may produce more robust parameter estimates. Based on a composite asymmetric Laplace distribution (CALD), we build a Bayesian hierarchical model for the weighted CQR (WCQR). A Gibbs sampler algorithm for Bayesian WCQR is developed to implement posterior inference. Finally, the proposed method is illustrated by some simulation studies and a real data analysis.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号