Similar documents
20 similar documents found (search time: 296 ms)
1.
To improve the prediction accuracy of semiparametric additive partial linear models (APLMs) and the coverage probability of confidence intervals for the parameters of interest, we explore a focused information criterion for model selection among APLMs after estimating the nonparametric functions by polynomial spline smoothing, and we introduce a general model average estimator. The major advantage of the proposed procedures is that iterative backfitting implementation is avoided, which thus results in gains in co...

2.
Sensitivity analysis in hidden Markov models (HMMs) is usually performed by means of a perturbation analysis where a small change is applied to the model parameters, upon which the output of interest is re-computed. Recently it was shown that a simple mathematical function describes the relation between HMM parameters and an output probability of interest; this result was established by representing the HMM as a (dynamic) Bayesian network. To determine this sensitivity function, it was suggested to employ existing Bayesian network algorithms. Up till now, however, no special purpose algorithms for establishing sensitivity functions for HMMs existed. In this paper we discuss the drawbacks of computing HMM sensitivity functions, building only upon existing algorithms. We then present a new and efficient algorithm, which is specially tailored for determining sensitivity functions in HMMs.
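The perturbation analysis described above can be sketched as follows: a toy two-state HMM (all parameter values illustrative) whose output probability is re-computed with the forward algorithm for several values of a single transition parameter, checked against brute-force enumeration of hidden state paths. This illustrates the baseline the paper improves upon, not the paper's tailored algorithm.

```python
import itertools

def forward_prob(pi, A, B, obs):
    """P(obs) via the standard forward algorithm for a discrete HMM."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * A[r][s] for r in range(len(pi))) * B[s][o]
                 for s in range(len(pi))]
    return sum(alpha)

def brute_force_prob(pi, A, B, obs):
    """Same quantity by summing over all hidden state paths (exponential cost)."""
    n, total = len(pi), 0.0
    for path in itertools.product(range(n), repeat=len(obs)):
        p = pi[path[0]] * B[path[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= A[path[t - 1]][path[t]] * B[path[t]][obs[t]]
        total += p
    return total

def hmm_for(theta):
    # theta parameterises one transition probability (its row stays normalised)
    pi = [0.6, 0.4]
    A = [[theta, 1 - theta], [0.3, 0.7]]
    B = [[0.9, 0.1], [0.2, 0.8]]
    return pi, A, B

obs = [0, 1, 1, 0]
# Perturbation analysis: re-compute the output probability per parameter value
probs = [forward_prob(*hmm_for(theta), obs) for theta in (0.1, 0.5, 0.9)]
```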

3.
The optimal control of stochastic processes through sensor estimation of probability density functions is given a geometric setting via information theory and the information metric. Information theory identifies the exponential distribution as the maximum entropy distribution if only the mean is known and the Γ distribution if also the mean logarithm is known. The surface representing Γ models has a natural Riemannian information metric. The exponential distributions form a one-dimensional subspace of the two-dimensional space of all Γ distributions, so we have an isometric embedding of the random model as a subspace of the Γ models. This geometry provides an appropriate structure on which to represent the dynamics of a process and algorithms to control it. This short paper presents a comparative study on the parameter estimation performance between the geodesic equation and the B-spline function approximations when they are used to optimize the parameters of the Γ family distributions. In this case, the B-spline functions are first used to approximate the Γ probability density function on a fixed length interval; then the coefficients of the approximation are related, through mean and variance calculations, to the two parameters (i.e. μ and β) in Γ distributions. A gradient based parameter tuning method has been used to produce the trajectories for (μ, β) when B-spline functions are used, and desired results have been obtained which are comparable to the trajectories obtained from the geodesic equation. This revised version was published online in August 2006 with corrections to the Cover Date.
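The abstract relates approximation coefficients to the two Γ parameters "through mean and variance calculations". A minimal moment-matching sketch of that step, using sample moments in place of B-spline coefficients (shape/scale parameterisation and all numerical values are illustrative, not the paper's):

```python
import random

random.seed(42)
k_true, theta_true = 3.0, 2.0  # true shape and scale of the Gamma
samples = [random.gammavariate(k_true, theta_true) for _ in range(200_000)]

n = len(samples)
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n

# For Gamma(k, theta): mean = k*theta and variance = k*theta**2,
# so the parameters are recovered as k = mean**2/var, theta = var/mean.
k_hat = mean ** 2 / var
theta_hat = var / mean
```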

4.
A key problem in financial and actuarial research, and particularly in the field of risk management, is the choice of models so as to avoid systematic biases in the measurement of risk. An alternative consists of relaxing the assumption that the probability distribution is completely known, leading to interval estimates instead of point estimates. In the present contribution, we show how this is possible for the Value at Risk, by fixing only a small number of parameters of the underlying probability distribution. We start by deriving bounds on tail probabilities, and we show how a conversion leads to bounds for the Value at Risk. It will turn out that with a maximum of three given parameters, the best estimates are always realized in the case of a unimodal random variable for which two moments and the mode are given. It will also be shown that a lognormal model results in estimates for the Value at Risk that are much closer to the upper bound than to the lower bound.
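For the two-moment case, the classical Cantelli (one-sided Chebyshev) inequality already converts a tail-probability bound into a distribution-free upper bound on the Value at Risk; the paper's three-parameter bounds refine this. A sketch, assuming only the mean and standard deviation of the loss are known:

```python
import math

def var_upper_bound(mu, sigma, p):
    """Distribution-free upper bound on the p-quantile (VaR) of a loss with
    mean mu and standard deviation sigma. Cantelli's inequality gives
    P(X >= mu + t) <= sigma**2 / (sigma**2 + t**2); setting the right-hand
    side equal to 1 - p and solving for t yields the bound below."""
    return mu + sigma * math.sqrt(p / (1.0 - p))

bound = var_upper_bound(0.0, 1.0, 0.95)  # sqrt(0.95/0.05) ~ 4.359
normal_var = 1.6449                      # exact 95% quantile of N(0, 1)
```

As the abstract notes for the lognormal case, the exact quantile of a concrete model can sit far below such a worst-case bound.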

5.
Classical information geometry has emerged from the study of geometrical aspects of statistical estimation. Cencov characterized the Fisher metric as a canonical metric on probability simplexes from a statistical point of view, and Campbell extended the characterization of the Fisher metric from probability simplexes to the positive cone. In quantum information geometry, quantum states, which are represented by positive Hermitian matrices with trace one, are regarded as an extension of probability distributions. A quantum version of the Fisher metric is introduced and is called a monotone metric. Petz characterized the monotone metrics on the space of all quantum states in terms of operator monotone functions. One purpose of the present paper is to extend the characterization of monotone metrics from the space of all states to the space of all positive Hermitian matrices on a finite-dimensional Hilbert space. This characterization corresponds to a quantum modification of Campbell's work.

6.
This paper prices defaultable bonds by incorporating inherent risks with the use of utility functions. By allowing risk preferences into the valuation of bonds, nonlinearity is introduced in their pricing. The utility-function approach affords the advantage of yielding exact solutions to the risky bond pricing equation when familiar stochastic models are used for interest rates. This can be achieved even when the default probability parameter is itself a stochastic variable. Valuations are found for the power-law and log utility functions under the interest-rate dynamics of the extended Vasicek and CIR models.
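For reference, the default-free baseline of the (standard, not extended) Vasicek model admits the familiar closed-form zero-coupon bond price on which such defaultable-bond valuations build. A sketch with illustrative parameter values; dr = a(b - r)dt + σ dW:

```python
import math

def vasicek_bond_price(r0, T, a, b, sigma):
    """Zero-coupon bond price P(0, T) in the standard Vasicek model
    dr = a*(b - r) dt + sigma dW, via the affine form P = A * exp(-B * r0)."""
    B = (1.0 - math.exp(-a * T)) / a
    lnA = ((B - T) * (a * a * b - sigma * sigma / 2.0) / (a * a)
           - sigma * sigma * B * B / (4.0 * a))
    return math.exp(lnA) * math.exp(-B * r0)
```

The utility-based defaultable prices in the paper reduce to expressions of this affine type in the zero-default limit.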

7.
This paper expands on the multigraph method for expressing moments of non-linear functions of Gaussian random variables. In particular, it includes a list of regular multigraphs that is needed for the computation of some of these moments. The multigraph method is then used to evaluate numerically the moments of non-Gaussian self-similar processes. These self-similar processes are of interest in various applications, and the numerical value of their marginal moments yields qualitative information about the behavior of the probability tails of their marginal distributions.
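The multigraph method organises the pairings that appear in the Isserlis (Wick) formula for Gaussian moments. A brute-force sketch of that underlying formula, enumerating all perfect matchings explicitly (feasible only for small moment orders, which is what motivates the multigraph bookkeeping):

```python
import math

def pairings(idx):
    """Yield all perfect matchings of a list of indices."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

def gaussian_moment(C, idx):
    """E[X_{i1} * ... * X_{ik}] for a zero-mean Gaussian vector with
    covariance matrix C, by the Isserlis/Wick pairing formula."""
    if len(idx) % 2:
        return 0.0  # odd moments of a centred Gaussian vanish
    return sum(math.prod(C[a][b] for a, b in m) for m in pairings(idx))
```

For example, with a single standard normal the formula recovers E[X⁴] = 3, and for correlation ρ it gives E[X²Y²] = 1 + 2ρ².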

8.
We consider a general convex stochastic control model. Our main interest concerns monotonicity results and bounds for the value functions and for optimal policies. In particular, we show how the value functions depend on the transition kernels, and we present conditions for a lower bound of an optimal policy. Our approach is based on convex stochastic orderings of probability measures. We derive several sufficient conditions for these ordering concepts, where we also make use of the Blackwell ordering. The structural results are illustrated by partially observed control models and Bayesian information models.

9.
In order to construct estimating functions in some parametric models, this paper introduces two classes of information matrices. Some necessary and sufficient conditions for the information matrices to achieve their upper bounds are given. For the problem of estimating the median, some optimum estimating functions based on the information matrices are obtained. Under some regularity conditions, an approach to selecting the best basis function is introduced. In nonlinear regression models, an optimum estimating function based on the information matrices is obtained. Some examples are given to illustrate the results. Finally, the concept of an optimum estimating function and the methods of constructing optimum estimating functions are developed in more general statistical models.

10.
The problem of optimality and performance evaluation for cluster analysis procedures is investigated. For situations where the classes are described by known or unknown prior probabilities and regular probability density functions with unknown parameters, asymptotic expansions of the classification error probability are constructed. The results are illustrated for the case of the well-known Fisher classification model. Copyright © 1999 John Wiley & Sons, Ltd.

11.
The paper offers a risk-theory perspective on the problem of adaptive asset-liability and solvency management. Two adaptive control strategies in the multiperiodic insurance risk model composed of chained classical risk models are introduced, and their performance in terms of the probability of ruin is examined. The analysis is based on an explicit expression for the probability of ruin within finite time in terms of Bessel functions. The dependence of that probability on the premium loading, either positive or negative, is a basic technical result of independent interest.
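The paper's finite-time ruin probabilities involve Bessel functions; as a simpler reference point, the infinite-horizon ruin probability in the classical Cramér-Lundberg model with exponential claims has a well-known closed form showing the same qualitative dependence on the premium loading. A sketch (this is the standard textbook formula, not the paper's finite-time expression):

```python
import math

def ruin_prob_exponential(u, theta, mu):
    """Infinite-horizon ruin probability in the classical risk model with
    Exp(1/mu) claim sizes and positive premium loading theta:
        psi(u) = (1/(1+theta)) * exp(-theta*u / ((1+theta)*mu)),
    where u is the initial capital."""
    return math.exp(-theta * u / ((1.0 + theta) * mu)) / (1.0 + theta)
```

The formula makes explicit that ruin becomes exponentially less likely as capital grows, and less likely as the loading increases.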

12.
When there is a complete sufficient statistic for the nuisance parameter that depends on the parameter of interest, there are locally optimal unbiased estimating functions, but generally there is no globally optimal estimating function. We consider conditioning on the minimal sufficient statistic for the nuisance parameter and find the conditional linear optimal unbiased estimating function. Since the nuisance parameter is totally eliminated in the conditional model, there is no intrinsic problem in setting up conditional tests of significance and confidence intervals. A compromise between conditional and unconditional optimum estimating functions is suggested. The techniques are illustrated on three examples, including the well-known common means problem. The proposed hypothesis testing and confidence interval procedures work reasonably well for the examples considered.

13.
We consider a multi-colony version of the Wright–Fisher model with seed-bank that was recently introduced by Blath et al. Individuals live in colonies and change type via resampling and mutation. Each colony contains a seed-bank that acts as a genetic reservoir. Individuals can enter the seed-bank and become dormant or can exit the seed-bank and become active. In each colony at each generation a fixed fraction of individuals swap state, drawn randomly from the active and the dormant population. While dormant, individuals suspend their resampling. While active, individuals resample from their own colony, but also from different colonies according to a random walk transition kernel representing migration. Both active and dormant individuals mutate. We are interested in the probability that two individuals drawn randomly from two given colonies are identical by descent, i.e., share a common ancestor. This probability, which depends on the locations of the two colonies, is a measure for the inbreeding coefficient of the population. We derive a formula for this probability that is valid when the colonies form a discrete torus. We consider the special case of a symmetric slow seed-bank, for which in each colony half of the individuals are in the seed-bank and at each generation the fraction of individuals that swap state is small. This leads to a simpler formula, from which we are able to deduce how the probability to be identical by descent depends on the distance between the two colonies and various relevant parameters. Through an analysis of random walk Green functions, we are able to derive explicit scaling expressions when mutation is slower than migration. We also compute the spatial second moment of the probability to be identical by descent for all parameters when the torus becomes large. For the special case of a symmetric slow seed-bank, we again obtain explicit scaling expressions.

14.
We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned probability zero (in other words: the probability functions are regular). We use a non-Archimedean field as the range of the probability function. As a result, the property of countable additivity in Kolmogorov's axiomatization of probability is replaced by a different type of infinite additivity.

15.
Geometric Method of Sequential Estimation Related to Multinomial Distribution Models. Wei Bocheng; Li Shouye (Department of Mathematics, South...

16.
We study partial linear single index models when the response and the covariates in the parametric part are measured with errors and distorted by unknown functions of commonly observable confounding variables, and propose a semiparametric covariate-adjusted estimation procedure. We apply the minimum average variance estimation method to estimate the parameters of interest. This is different from all existing covariate-adjusted methods in the literature. Asymptotic properties of the proposed estimators are established. Moreover, we also study variable selection by adopting the coordinate-independent sparse estimation to select all relevant but distorted covariates in the parametric part. We show that the resulting sparse estimators can exclude all irrelevant covariates with probability approaching one. A simulation study is conducted to evaluate the performance of the proposed methods and a real data set is analyzed for illustration.

17.
This article explores the possibility of modeling the wage distribution using a mixture of density functions. We have been dealing with this issue for a long time, and we build on our earlier work. Classical models use a single probability distribution such as the normal, lognormal, or Pareto, but the results have not been good in recent years. Changes in the parameters of a probability density over time have led to a degradation of such models, and it was necessary to choose a different probability distribution. In previous articles we used the idea of mixtures of distributions instead of a single classical density. In our models we tried a mixture of probability distributions (normal, lognormal, and a mixture of Johnson's distribution densities), and the results achieved were very good. We used data from the Czech Statistical Office covering wages over the last 18 years in the Czech Republic.
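A minimal sketch of fitting such a mixture by EM: two normal components on the log scale (equivalently, a mixture of lognormals on the wage scale). The data here are synthetic and all parameter values illustrative, not the Czech wage data used in the article:

```python
import math
import random

random.seed(7)
# Synthetic "log-wages": a two-component normal mixture on the log scale.
data = ([random.gauss(0.0, 0.5) for _ in range(500)] +
        [random.gauss(2.0, 0.5) for _ in range(500)])

def em_two_gaussians(xs, iters=60):
    """Plain EM for a 1-D two-component Gaussian mixture."""
    w, mu, var = [0.5, 0.5], [min(xs), max(xs)], [1.0, 1.0]
    for _ in range(iters):
        # E-step: per-point responsibilities of the two components
        resp = []
        for x in xs:
            dens = [w[j] / math.sqrt(2 * math.pi * var[j])
                    * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                    for j in range(2)]
            s = dens[0] + dens[1]
            resp.append((dens[0] / s, dens[1] / s))
        # M-step: re-estimate weights, means and variances
        for j in range(2):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(xs)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, xs)) / nj
    return w, mu, var

w, mu, var = em_two_gaussians(data)
```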

18.
We consider a stationary and isotropic bi-phasic (pore and solid) medium, draw many lines through it, and see each line as a one-dimensional level-cut process with value 0 or 1 according to whether a regular stationary process X is below or above a given level. The intervals corresponding to the points at which X is in a given phase are called chords. We are interested in obtaining information on the chord-length distribution functions. Working with the Palm probability measure and using level-crossing techniques, in particular Rice methods, we can obtain not only the exact analytical formula of the chord-length distribution function but also the joint distribution function of the lengths of two successive chords. Finally, we indicate some concrete applications for the computation of usual stereological parameters.

19.
An empirical method to evaluate pure endowment policies is proposed. The financial component of the policies is described using the time-dependent Black-Scholes model, making a suitable choice for its time-dependent parameter functions. Specifically, the integral of the time-dependent risk-free interest rate is modeled using an extension of the Nelson and Siegel yield curve (see Diebold and Li, 2006). The time-dependent volatility is expressed using two different models. One of these is based on an extension of the Nelson and Siegel model (Diebold and Li, 2006), while the other assumes that the volatility is a piecewise function of the time variable. The demographic component is modeled using a generalization of the geometric Brownian mean-reverting Gompertz model, and an asymptotic formula for the survival probability is derived when the mortality risk volatility is small. The method has been tested on two policies. In these, the risk-free interest rate parameters are calibrated using the one-month, three-month, six-month, one-year, three-year and five-year US Treasury constant maturity yields, and the volatility parameters are calibrated using the VSTOXX volatility indices. The choice of the data employed in the calibration depends on the policy to be evaluated. The performance of the method is established by comparing the observed values of the policies with the values obtained using this method.
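The Nelson and Siegel curve that underlies the risk-free component has a standard three-factor form (the Diebold-Li parameterisation): a level, a slope loading that decays from 1 to 0, and a curvature loading that vanishes at both ends of the maturity range. A sketch with illustrative parameter values, not those calibrated in the paper:

```python
import math

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau (Diebold-Li form):
    y(tau) = beta0 + beta1*(1-e^{-x})/x + beta2*((1-e^{-x})/x - e^{-x}),
    with x = lam * tau. beta0 is the long-run level; beta0 + beta1 is the
    instantaneous short rate; beta2 controls the mid-curve hump."""
    x = lam * tau
    loading = (1.0 - math.exp(-x)) / x
    return beta0 + beta1 * loading + beta2 * (loading - math.exp(-x))

# Short end tends to beta0 + beta1, long end to beta0
short = nelson_siegel(1e-9, 0.04, -0.02, 0.01, 0.6)
long_ = nelson_siegel(1e6, 0.04, -0.02, 0.01, 0.6)
```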

20.
The sample-based rule obtained from the Bayes classification rule by replacing the unknown parameters by ML estimates from a stratified training sample is used for the classification of a random observation X into one of L populations. The asymptotic expansions in terms of the inverses of the training sample sizes for cross-validation, apparent and plug-in error rates are found. These are used to compare estimation methods of the error rate for a wide range of regular distributions as probability models for the considered populations. The optimal training sample allocation minimizing the asymptotic expected error regret is found in the cases of widely applicable, positively skewed distributions (Rayleigh and Maxwell distributions). These probability models for populations are often met in ecology and biology. The results indicate that equal training sample sizes for each population are sometimes not optimal, even when the prior probabilities of the populations are equal.
