Similar Documents
Found 20 similar documents (search time: 232 ms)
1.
(1. National Defense Key Discipline Laboratory of Nuclear Waste and Environmental Safety, Southwest University of Science and Technology, Mianyang, Sichuan 621010, China; 2. School of Chemical and Environmental Engineering, Sichuan University of Science & Engineering, Zigong, Sichuan 643000, China; 3. State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu 610059, China; 4. Institute of Nuclear Physics and Chemistry, China Academy of Engineering Physics, Mianyang, Sichuan 621900, China) Based on the formation mechanism of energy spectra, a peak-shape function is established, and a method based on moment estimation is proposed to determine the initial values of the function's fitting parameters. Grounded in mathematical statistics, the mean, variance, and third central moment are computed both from the peak-shape function and from the discrete spectral-peak data, and the resulting system of equations is solved to obtain the initial parameter values. The peak-shape model and the initial-value method were tested on radon-progeny energy spectra measured with a continuous aerosol monitor and on a 238Pu planar-source spectrum measured with a low-background spectrometer. The results show that the peak-shape model performs well from low to high peak overlap, and that the moment-based initial values allow the function to fit the spectral data well. The method is highly practical for automated, computer-based spectrum-fitting analysis.
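The moment-based initialization described in entry 1 can be sketched in a few lines. The following is a hypothetical illustration, not the paper's code: it computes the mean, standard deviation, and third central moment of a discrete spectral peak, which seed the centroid, width, and asymmetry parameters of a peak-shape fit.

```python
import math

def moment_initials(channels, counts):
    """Moment-based starting values for a peak-shape fit: centroid from the
    mean, width from the standard deviation, and asymmetry from the third
    central moment of the discrete (channel, counts) data."""
    total = sum(counts)
    mean = sum(c * n for c, n in zip(channels, counts)) / total
    var = sum(n * (c - mean) ** 2 for c, n in zip(channels, counts)) / total
    m3 = sum(n * (c - mean) ** 3 for c, n in zip(channels, counts)) / total
    return mean, math.sqrt(var), m3

# A symmetric five-channel peak: centroid at channel 102, zero asymmetry.
centroid, width, asym = moment_initials([100, 101, 102, 103, 104],
                                        [10, 40, 100, 40, 10])
```

For a symmetric peak the third central moment vanishes, which is exactly what makes it a useful asymmetry indicator when initializing a skewed peak-shape model.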

2.
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS), and their constrained counterparts) are established through their respective objective functions and the higher-order derivatives of these objective functions, i.e., the Hessian matrices. These theoretical connections provide new insights into designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel full Newton-type algorithms for NLS and CNLS estimation, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percentage of relative error in estimating the trace and a lower reduced χ² value than the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix; in other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion-weighted signals is orientation dependent when the signal-to-noise ratio (SNR) is low.
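The point that full Newton steps exploit the exact Hessian (rather than the Gauss-Newton approximation used by Levenberg-Marquardt) can be illustrated on a one-parameter toy problem. This is a sketch under assumed toy data, not the paper's tensor estimation: fit θ in the signal model s(b) = exp(-θ·b) by minimizing the NLS objective with its exact first and second derivatives.

```python
import math

def newton_nls(b, y, theta=0.3, iters=25):
    """Full Newton minimization of f(theta) = sum_i (y_i - exp(-theta*b_i))^2,
    using the exact gradient and the exact (full) second derivative."""
    for _ in range(iters):
        g = h = 0.0
        for bi, yi in zip(b, y):
            e = math.exp(-theta * bi)
            r = yi - e                                        # residual
            g += 2.0 * r * bi * e                             # f'(theta)
            h += 2.0 * (bi * e) ** 2 - 2.0 * r * bi * bi * e  # f''(theta)
        theta -= g / h                                        # Newton step
    return theta

b = [0.5, 1.0, 1.5, 2.0]                  # diffusion-weighting values (toy)
y = [math.exp(-0.5 * bi) for bi in b]     # noiseless data, true theta = 0.5
theta_hat = newton_nls(b, y)
```

The second term of the Hessian, proportional to the residuals, is exactly what the Gauss-Newton approximation drops; with noisy data that term is what makes the full Newton iteration behave differently.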

3.
刘玲  陈淼  邱健  彭力  骆开庆  韩鹏 《计算物理》2019,36(6):673-681
The application of a weighted Bayesian algorithm to measuring the particle size distribution of unimodal particle systems by multiangle dynamic light scattering is studied. Weighting coefficients, with the particle-size information distribution as the base and a tuning parameter as the exponent, are applied to the light-intensity autocorrelation function curves at each scattering angle, after which the conventional Bayesian algorithm performs the inversion. Simulation and experimental results show that the weighted Bayesian algorithm yields inversion results with smaller distribution errors, effectively suppresses the influence of data noise, and improves the accuracy of particle size distribution retrieval.

4.
Entropy measures the uncertainty associated with a random variable. It has important applications in cybernetics, probability theory, astrophysics, the life sciences, and other fields. Recently, many authors have focused on the estimation of entropy for different life distributions. However, entropy estimation for the generalized Bilal (GB) distribution has not yet been addressed. In this paper, we consider the estimation of the entropy and the parameters of the GB distribution based on adaptive Type-II progressive hybrid censored data. Maximum likelihood estimates of the entropy and the parameters are obtained using the Newton-Raphson iteration method. Bayesian estimates under different loss functions are provided with the help of Lindley's approximation. The approximate confidence intervals and the Bayesian credible intervals of the parameters and entropy are obtained using the delta and Markov chain Monte Carlo (MCMC) methods, respectively. Monte Carlo simulation studies are carried out to assess the performance of the different point and interval estimates. Finally, a real data set is analyzed for illustrative purposes.
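The invariance property that underlies maximum likelihood estimation of entropy can be sketched on a simpler lifetime model. The following uses the exponential distribution as a stand-in assumption (not the paper's GB distribution): the MLE of the entropy is obtained by plugging the MLE of the rate into the closed-form entropy H = 1 - ln(λ).

```python
import math
import random

def entropy_mle_exponential(sample):
    """MLE of the exponential rate is 1/mean; by invariance of the MLE,
    the entropy estimate is H(lambda_hat) = 1 - ln(lambda_hat)."""
    lam_hat = 1.0 / (sum(sample) / len(sample))
    return 1.0 - math.log(lam_hat)

random.seed(0)
data = [random.expovariate(2.0) for _ in range(50_000)]
h_hat = entropy_mle_exponential(data)     # true entropy: 1 - ln 2 ≈ 0.307
```

The same plug-in logic carries over to more elaborate lifetime models; what changes is only the form of the likelihood equations (here solved trivially, in the paper by Newton-Raphson iteration).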

5.
Liquid financial markets, such as the options market of the S&P 500 index, create vast amounts of data every day, so-called intraday data. However, this highly granular data is often reduced to a single observation time when used to estimate financial quantities, and this under-utilization of the data may reduce the quality of the estimates. In this paper, we study the impact on estimation quality of using intraday data to estimate dividends. The methodology is based on earlier linear regression (ordinary least squares) estimates, adapted here to intraday data. The method is also generalized in two aspects. First, the dividends are expressed as present values of future dividends rather than as dividend yields. Second, to account for heteroscedasticity, the estimation methodology is formulated as a weighted least squares problem, where the weights are determined from the market data. This method is compared with a traditional method on out-of-sample S&P 500 European options market data. The results show that estimates based on intraday data have, with statistical significance, higher quality than the corresponding single-time estimates. Additionally, the two generalizations of the methodology are shown to improve the estimation quality further.
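The weighted least squares formulation used to handle heteroscedasticity can be sketched for a simple linear model. This is a generic illustration with made-up data, not the paper's dividend estimator: solve the weighted 2×2 normal equations for y ≈ a + b·x, with larger weights on more trusted observations.

```python
def wls_fit(x, y, w):
    """Weighted least squares for y ~ a + b*x: minimizes
    sum_i w_i * (y_i - a - b*x_i)^2 via the 2x2 normal equations."""
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = S * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / det
    b = (S * Sxy - Sx * Sy) / det
    return a, b

# On noiseless data the exact line y = 1 + 2x is recovered for any
# positive weights; with heteroscedastic noise, the weights matter.
a, b = wls_fit([0, 1, 2, 3], [1, 3, 5, 7], [1.0, 2.0, 3.0, 4.0])
```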

6.
Background: For the kinetic models used in contrast-based medical imaging, specifying the arterial input function (AIF) is essential for estimating the physiological parameters of the tissue by solving an optimization problem. Objective: In the current study, we estimate the AIF based on a modified maximum entropy method. The effectiveness of several numerical methods for determining the kinetic parameters and the AIF is evaluated in situations where sufficient information about the AIF is not available; the purpose of this study is to identify an appropriate method for estimating this function. Materials and Methods: The modified algorithm combines the maximum entropy approach with an optimization method, the teaching-learning method. Here, we apply this algorithm in a Bayesian framework to estimate the kinetic parameters while specifying the unique form of the AIF by the maximum entropy method. We assess the proficiency of the proposed method for assigning the kinetic parameters in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), determining the AIF with several other parameter-estimation methods and with a standard fixed-AIF method. A previously analyzed dataset consisting of contrast agent concentrations in tissue and plasma was used. Results and Conclusions: We compared the accuracy of the estimated parameters obtained from the modified maximum entropy method (MMEM) with those of the empirical method, the maximum likelihood method, moment matching (the method of moments), the least squares method, the modified maximum likelihood approach, and our previous work. Since the current algorithm does not suffer from the starting-point problem in the parameter estimation phase, it could find the model closest to the empirical model of the data; the results indicated the Weibull distribution as an appropriate and robust AIF and illustrated the power and effectiveness of the proposed method for estimating the kinetic parameters.

7.
Thinning operators play an important role in the analysis of integer-valued autoregressive models, and the most widely used is binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator called extended binomial thinning is introduced, which generalizes binomial thinning. Compared with the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced, which can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. The asymptotic properties of the two-step CLS estimator are obtained. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.
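Binomial thinning, the building block that the paper's extended operator generalizes, can be sketched directly. This is a minimal illustration assuming Poisson innovations (an assumption of the sketch, not stated by the entry): α∘X counts the survivors of X independent Bernoulli(α) trials, and an INAR(1) path follows X_t = α∘X_{t-1} + ε_t.

```python
import math
import random

def thin(alpha, x, rng):
    """Binomial thinning: alpha ∘ x is distributed Binomial(x, alpha)."""
    return sum(1 for _ in range(x) if rng.random() < alpha)

def _poisson(lam, rng):
    """Knuth's algorithm for Poisson(lam) variates (stdlib has none)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def inar1_path(alpha, lam, n, seed=1):
    """INAR(1): X_t = alpha ∘ X_{t-1} + Poisson(lam) innovation;
    the stationary mean is lam / (1 - alpha)."""
    rng = random.Random(seed)
    x, path = 0, []
    for _ in range(n):
        x = thin(alpha, x, rng) + _poisson(lam, rng)
        path.append(x)
    return path

path = inar1_path(alpha=0.5, lam=1.0, n=20_000)
mean = sum(path) / len(path)      # close to 1 / (1 - 0.5) = 2
```

The extended operator of the paper replaces the single survival probability α with a two-parameter survival mechanism; the simulation skeleton stays the same.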

8.
We introduce a new distribution called the power-modified Kies-exponential (PMKE) distribution and derive some of its mathematical properties. Its hazard function can be bathtub-shaped, increasing, or decreasing. Its parameters are estimated by seven classical methods. Further, Bayesian estimation under squared-error, general entropy, and LINEX loss functions is adopted to estimate the parameters. Simulation results are provided to investigate the behavior of these estimators. The estimation methods are ranked, based on partial and overall ranks, to determine the best estimation approach for the model parameters. The proposed distribution is used to model a real-life turbocharger dataset, in comparison with 24 extensions of the exponential distribution.

9.
In this article, we propose the exponentiated sine-generated family of distributions. Some important properties are demonstrated, such as the series representation of the probability density function, the quantile function, moments, stress-strength reliability, and Rényi entropy. A particular member, called the exponentiated sine Weibull distribution, is highlighted; we analyze its skewness and kurtosis, moments, quantile function, residual mean and reversed mean residual life functions, order statistics, and extreme value distributions. Maximum likelihood estimation and Bayes estimation under the squared-error loss function are considered. Simulation studies are used to assess the techniques, whose performance gives satisfactory results as measured by the mean square error, confidence intervals, and coverage probabilities of the estimates. The stress-strength reliability parameter of the exponentiated sine Weibull model is derived and estimated by maximum likelihood. Nonparametric bootstrap techniques are also used to approximate the confidence interval of the reliability parameter, and a simulation examines its mean square error, standard deviation, confidence intervals, and coverage probabilities. Finally, three real applications of the exponentiated sine Weibull model are provided, one of which considers stress-strength data.
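A sketch of a particular member can make the generator construction concrete. Assuming the exponentiated sine-generated CDF takes the usual form F(x) = [sin((π/2)·G(x))]^α for a baseline CDF G (an assumption of this sketch, with a Weibull baseline), a valid CDF results: zero at the origin, nondecreasing, and tending to 1.

```python
import math

def es_weibull_cdf(x, alpha, k, lam):
    """Assumed exponentiated sine-generated CDF with a Weibull baseline
    G(x) = 1 - exp(-(x/lam)^k):  F(x) = sin((pi/2) * G(x)) ** alpha."""
    if x <= 0:
        return 0.0
    g = 1.0 - math.exp(-((x / lam) ** k))
    return math.sin(0.5 * math.pi * g) ** alpha

# Check CDF behaviour on a grid: monotone, bounded in [0, 1].
grid = [0.1 * i for i in range(1, 80)]
vals = [es_weibull_cdf(t, alpha=2.0, k=1.5, lam=1.0) for t in grid]
```

Since sin is increasing on [0, π/2] and G is a CDF, the composition is automatically monotone, which is the structural trick behind sine-generated families.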

10.
Formal Bayesian comparison of two competing models, based on the posterior odds ratio, amounts to estimation of the Bayes factor, which equals the ratio of the two marginal data density values. In models with a large number of parameters and/or latent variables, these are expressed by high-dimensional integrals that are often computationally infeasible, so other methods of evaluating the Bayes factor are needed. In this paper, a new method of estimating the Bayes factor is proposed. Simulation examples confirm the good performance of the proposed estimators. Finally, the new estimators are used to formally compare different hybrid Multivariate Stochastic Volatility-Multivariate Generalized Autoregressive Conditional Heteroskedasticity (MSV-MGARCH) models, which have a large number of latent variables. The empirical results show, among other things, that the validity of reducing the hybrid MSV-MGARCH model to the MGARCH specification depends on the analyzed data set as well as on the prior assumptions about model parameters.
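The Bayes factor as a ratio of marginal data densities can be sketched with a toy binomial example (not the paper's MSV-MGARCH setting): compare a point model fixing p = 0.5 against a model with a Uniform(0,1) prior on p, estimating the latter's marginal likelihood by plain Monte Carlo over prior draws.

```python
import math
import random

def lik(p, k, n):
    """Binomial likelihood kernel p^k (1-p)^(n-k); the binomial
    coefficient cancels in the Bayes factor and is omitted."""
    return p ** k * (1.0 - p) ** (n - k)

def marginal_mc(k, n, draws=200_000, seed=0):
    """Monte Carlo estimate of p(y | M2) = E_prior[lik(p)]
    with p ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    return sum(lik(rng.random(), k, n) for _ in range(draws)) / draws

k, n = 7, 10
m1 = lik(0.5, k, n)           # M1: p fixed at 0.5
m2 = marginal_mc(k, n)        # M2: exact value is B(8, 4) = 1/1320
bayes_factor = m1 / m2        # > 1 favors M1, < 1 favors M2
```

In the high-dimensional latent-variable models of the paper this naive prior-sampling estimator breaks down (almost all draws have negligible likelihood), which is exactly the motivation for more sophisticated Bayes factor estimators.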

11.
Classifying and recycling waste textiles according to their constituent fibers can save large quantities of textile raw materials. At present, waste textiles are usually sorted manually, which is costly and inefficient. Near-infrared (NIR) spectroscopy, one of the fastest-growing analytical techniques of the 21st century, can rapidly determine the components of a sample and their contents without destroying it. Applying this technique to waste textiles, to predetermine their components and contents, can support large-scale, fine-grained sorting and recycling. The multi-model method obtains its final prediction as a weighted average of the predictions of several sub-models, and NIR models built this way generally have good stability. Taking the polyamide (nylon) content of waste textile samples as an example, an NIR model of polyamide content was first built with the multi-model method, as follows: the reflectance vector was divided into 15 groups by wavelength; an NIR sub-model was built from each group; and the final polyamide-content prediction was the weighted average of the sub-model predictions. Then, building on the multi-model method and exploiting the approximately linear relationship between predicted and measured polyamide content, a new NIR model that is easier to optimize was derived by replacing constants with variables and standardizing those variables. Each optimized sub-model has six fewer parameters than before, which guards against overfitting. The two models were compared with a conventional model built by partial least squares (PLS). Cross-validation gives mean goodness-of-fit values of 0.8207 for the (optimized) new model, 0.7691 for the plain multi-model approach, and 0.7467 for the PLS model. The multi-model approach therefore predicts better than PLS, and the new model clearly outperforms both. The main contributions of this work are the construction and optimization of the new model, and the modeling approach is expected to be applicable to predicting the contents of other components of waste textile samples.

12.
In this paper, we construct an intermediate distribution linking the Gaussian and the Cauchy distribution. We provide the probability density function and the corresponding characteristic function of the intermediate distribution. Because many kinds of distributions have no moments, we introduce weighted moments. Specifically, we consider weighted moments under two types of weight functions: the cut-off function and the exponential function. Through these two types of weight functions, we can obtain weighted moments for almost all distributions. We consider an application of the probability density function of the intermediate distribution to spectral line broadening in laser theory. Moreover, we apply the intermediate distribution to the problem of stock market returns in quantitative finance.
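The idea of weighted moments can be made concrete numerically. As a generic sketch (not the paper's derivation): the Cauchy distribution has no second moment, but its second moment weighted by the exponential function e^{-|x|} is finite and easy to approximate by quadrature.

```python
import math

def cauchy_pdf(x):
    return 1.0 / (math.pi * (1.0 + x * x))

def weighted_second_moment(step=1e-3, cutoff=60.0):
    """Trapezoidal approximation of  integral x^2 e^{-|x|} f(x) dx  for the
    standard Cauchy density f; by symmetry, integrate over [0, cutoff] and
    double.  The unweighted second moment of the Cauchy diverges."""
    f = lambda t: t * t * math.exp(-t) * cauchy_pdf(t)
    total, x = 0.0, 0.0
    while x < cutoff:
        total += 0.5 * (f(x) + f(x + step)) * step
        x += step
    return 2.0 * total

m2w = weighted_second_moment()    # finite, roughly 0.24
```

The exponential weight tames the heavy x^{-2} tail of the Cauchy density, which is the mechanism that lets weighted moments exist for almost all distributions.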

13.
Maximum entropy network ensembles have been very successful in modelling sparse network topologies and in solving challenging inference problems. However, the sparse maximum entropy network models proposed so far have a fixed number of nodes and are typically not exchangeable. Here we consider hierarchical models for exchangeable networks in the sparse limit, i.e., with the total number of links scaling linearly with the total number of nodes. The approach is grand canonical, i.e., the number of nodes of the network is not fixed a priori: it is finite but can be arbitrarily large. In this way the grand canonical network ensembles circumvent the difficulties of treating infinite sparse exchangeable networks, which according to the Aldous-Hoover theorem must vanish. The approach can treat networks with a given degree distribution or with a given distribution of latent variables. When only a subgraph induced by a subset of nodes is known, this model allows Bayesian estimation of the network size and of the degree sequence (or the sequence of latent variables) of the entire network, which can be used for network reconstruction.

14.
L. Quan  E. Ferrero  F. Hu 《Physica A》2012,391(1-2):231-247
Fourth-order moments and their connection with other statistics, including second-order moments, skewness, and entropy, in stable boundary layers are investigated with a large eddy simulation (LES) model, wind tunnel experiment data (WT), and measurements from a meteorological tower in an urban area (MT). The relationship between skewness and kurtosis is studied through two formulae whose coefficients are determined for the three data sets. Shannon entropy is analysed as an index of turbulent flow organization, in order to further understand the possible reason for the failure of the quasi-normal (QN) hypothesis. To quantify the relationship between Shannon entropy and kurtosis, a power function is proposed.
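The moment statistics compared in this entry are straightforward to compute from data. As a generic sketch using the usual moment definitions (not the paper's boundary-layer datasets), sample skewness and kurtosis of a Gaussian sample should come out near 0 and 3, the quasi-normal reference values.

```python
import random

def skew_kurt(xs):
    """Sample skewness m3 / m2^1.5 and kurtosis m4 / m2^2,
    computed from the central moments of the sample."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

random.seed(2)
sample = [random.gauss(0.0, 1.0) for _ in range(50_000)]
skewness, kurtosis = skew_kurt(sample)    # near 0 and 3 for a Gaussian
```

Departures of the (skewness, kurtosis) pair from (0, 3) are exactly what the two fitted formulae in the entry parameterize.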

15.
A global motion estimation algorithm based on SIFT (scale-invariant feature transform) matching is proposed. Building on the initial SIFT matches, each originally matched feature point is re-matched after weighted averaging of the neighborhood gray-level information at its scale, thereby removing mismatches. The refined match set serves as the correspondence data for solving a global motion parameter model, whose parameters are computed by the least squares method. …
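For the final least squares step, the simplest global motion model (pure translation) has a closed-form solution. As a minimal sketch with made-up point pairs (not the paper's full parametric model): the least squares displacement is the mean of the refined matched-point offsets.

```python
def ls_translation(src, dst):
    """Least squares global translation from matched point pairs:
    minimizing sum ||q_i - (p_i + t)||^2 gives t = mean(q_i - p_i)."""
    n = len(src)
    dx = sum(q[0] - p[0] for p, q in zip(src, dst)) / n
    dy = sum(q[1] - p[1] for p, q in zip(src, dst)) / n
    return dx, dy

src = [(0.0, 0.0), (10.0, 5.0), (3.0, 8.0), (7.0, 2.0)]
dst = [(x + 4.0, y - 2.0) for x, y in src]   # frame shifted by (4, -2)
dx, dy = ls_translation(src, dst)
```

Because the mean is sensitive to outliers, refining the matches before this step (as the entry describes) matters more than the solver itself.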

16.
吴忠德  邓露 《应用声学》2016,24(6):286-288, 322
Large amounts of test data accumulate during product development. To use this prior data effectively and reduce the sample size required for testability verification tests, a testability verification test scheme based on the information entropy of prior tests is proposed. The scheme uses information entropy to measure the contribution that multiple prior tests from the development stage make to the verification test and, on the principle that the average mutual information entropy and the total information content are equal, converts the multiple prior test records into equivalent single pass/fail data. On this basis, the compatibility level between the prior data and the test data is determined by a consistency test; taking the Beta distribution as the prior, a mixed posterior distribution is built with weighted mixed Bayesian theory; and a test scheme satisfying both parties' risk requirements is then derived from Bayesian average-risk theory. Finally, a testability verification test on a radar transmitter unit is studied as an example, and the results confirm the effectiveness of the scheme.

17.
In this paper a model updating algorithm is presented to estimate structural parameters at the element level utilizing a frequency-domain representation of the strain data. Sensitivity equations for mass and stiffness parameter estimation are derived using the decomposed form of the strain-based transfer functions. The rates of change of the eigenvectors and a subset of measured natural frequencies are used to assemble the sensitivity equation of the strain-based transfer function. Solving the derived sensitivity equations by the least squares method results in a robust parameter estimation method. Numerical examples using simulated noise-polluted data of 2D truss and frame models confirm that the proposed method successfully updates structural models even in the presence of mass modeling errors.

18.
谢将剑  杨俊  邢照亮  张卓  陈新 《应用声学》2020,39(2):207-215
To address the problem that signals extracted from infrasound-station monitoring data by the short-term average/long-term average (STA/LTA) algorithm still contain noise, machine learning methods based on support vector machines and artificial neural networks were studied. Signals were reconstructed by wavelet packet decomposition, the energy of the reconstructed signal in each frequency band was extracted as a feature, and identification experiments were carried out on event signals and noise; approaches for improving the identification performance were also analyzed, providing a theoretical reference for engineering applications. The experimental results show that, even with a small training data set, optimizing the model structure can raise the identification performance of both methods to an acceptable level.

19.
Infrared small-target detection based on least absolute deviation estimation and a chaos genetic algorithm
A method for detecting small infrared targets via background prediction based on least absolute deviation (LAD) estimation and a chaos genetic algorithm is proposed. After a background prediction model under the LAD criterion is established, chaos is introduced into the genetic algorithm: exploiting the properties of LAD estimation and the intrinsic pseudo-randomness of chaotic sequences, the resulting chaos genetic optimization algorithm solves the extremum-selection problem in LAD estimation. The prediction residual image, obtained by subtracting the predicted image from the original, is then segmented with a fast threshold-selection algorithm based on two-dimensional exponential entropy. Experimental results and analysis are given, with comparisons against detection algorithms using genetic algorithm-based LAD prediction and least squares background prediction. The results show that the proposed algorithm achieves a higher detection probability and better detection results.
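The robustness of the least absolute deviation criterion that motivates this method shows up already in the simplest case. As a sketch with made-up pixel values (not the paper's predictor): for a constant background model, the L1 estimate is the median, which a bright target pixel barely moves, while the least squares estimate (the mean) is dragged toward it.

```python
import statistics

def lad_constant(values):
    """L1 (least absolute deviation) fit of a constant background:
    argmin_c sum |v - c| over the values is their median."""
    return statistics.median(values)

pixels = [20, 21, 19, 20, 22, 21, 20, 255]   # one bright target pixel
l1_bg = lad_constant(pixels)                 # robust background level
l2_bg = sum(pixels) / len(pixels)            # mean, pulled toward 255
```

In the full method the background model is not constant, so the L1 minimizer has no closed form; that is why a global optimizer such as the chaos genetic algorithm is needed.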

20.
A water quality analysis method for three-dimensional fluorescence spectra based on selective model combination
To improve the accuracy of water quality analysis from three-dimensional fluorescence spectra, a selective model combination method is proposed. The excitation wavelengths of the three-dimensional fluorescence spectra are selected by the correlation coefficient method, and the water quality analysis sub-models for the fluorescence emission spectra at the selected excitation wavelengths are combined by ridge regression, yielding a combined model for each water quality index. A set of 32 surface water and municipal wastewater samples, with total organic carbon (TOC) in the range 3.41-125.35 mg·L-1 and chemical oxygen demand (COD) in the range 22.80-330.60 mg·L-1, was studied. From the 10 excitation wavelengths in the 220-400 nm range of their three-dimensional fluorescence spectra, three wavelengths each were selected for the TOC and COD indices: 260, 280, and 400 nm, and 220, 280, and 400 nm, respectively. Fluorescence emission spectrum sub-models at these wavelengths were built by partial least squares, the combination coefficients of the sub-models were computed by ridge regression, and combined models for TOC and COD were obtained. The experimental results show that the root mean square error of prediction (RMSEP) of the combined models for TOC and COD is 15.4% and 17.5% lower, respectively, than that of the most accurate single emission-spectrum sub-model, and 6.1% and 10.9% lower than that of the combined model without model selection.
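The ridge regression combination step can be sketched for two sub-models. This is a generic illustration with made-up prediction vectors (not the paper's spectra): find weights minimizing ||y - w1·p1 - w2·p2||² + λ(w1² + w2²) from the 2×2 regularized normal equations.

```python
def ridge_combine2(p1, p2, y, lam=0.01):
    """Ridge combination of two sub-model prediction vectors: solves
    (P^T P + lam*I) w = P^T y for w = (w1, w2) in closed form."""
    a11 = sum(a * a for a in p1) + lam
    a22 = sum(b * b for b in p2) + lam
    a12 = sum(a * b for a, b in zip(p1, p2))
    b1 = sum(a * t for a, t in zip(p1, y))
    b2 = sum(b * t for b, t in zip(p2, y))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# y is the equal mix of the two sub-model predictions, so w1 ≈ w2 ≈ 0.5.
w1, w2 = ridge_combine2([1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0],
                        [2.5, 2.5, 2.5, 2.5])
```

The ridge penalty λ keeps the combination weights stable when sub-model predictions are highly correlated, which is typically the case for neighboring excitation wavelengths.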

