Similar literature
20 similar documents found (search time: 31 ms)
1.
This paper introduces a probabilistic framework for the joint survivorship of couples in the context of dynamic stochastic mortality models. The death of one member of a couple can have either deterministic or stochastic effects on the other; our new framework gives an intuitive and flexible pairwise cohort-based probabilistic mechanism that can account for both. It is sufficiently flexible to allow modelling of effects that are short-term (called the broken-heart effect) and/or long-term (named life-circumstances bereavement). In addition, it can account for the state of health of both the surviving and the dying spouse, and can allow for dynamic and asymmetric reactions of varying complexity. Finally, it can accommodate the pairwise dependence of mortality intensities before the first death. Analytical expressions for bivariate survivorship in representative models are given, and their sensitivity analysis is performed for benchmark cases of old and young couples. Simulation and estimation procedures are provided that are straightforward to implement and lead to consistent parameter estimation on a synthetic dataset of 10,000 pairs of death times for couples.
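The broken-heart mechanism described above can be sketched in a few lines. This is a minimal illustration only, not the paper's model: it assumes constant baseline mortality intensities for each spouse and a single multiplicative jump in the survivor's intensity after the first death (the rates `lam_h`, `lam_w` and the factor `boost` are illustrative choices).

```python
import math
import random

def simulate_couple(lam_h, lam_w, boost, rng):
    """Simulate a couple's death times with a simple broken-heart effect.

    Both spouses have constant baseline intensities (lam_h, lam_w).
    When the first spouse dies, the survivor's intensity is multiplied
    by `boost` (> 1 for a broken-heart effect), so the residual lifetime
    is exponential at the boosted rate.
    """
    t_h = rng.expovariate(lam_h)
    t_w = rng.expovariate(lam_w)
    first = min(t_h, t_w)
    if t_h < t_w:
        # husband dies first; widow's residual lifetime is redrawn at the boosted rate
        t_w = first + rng.expovariate(lam_w * boost)
    else:
        t_h = first + rng.expovariate(lam_h * boost)
    return t_h, t_w

def joint_survival(lam_h, lam_w, boost, t, n, seed=0):
    """Monte Carlo estimate of P(both spouses alive at time t)."""
    rng = random.Random(seed)
    alive = sum(1 for _ in range(n)
                if min(simulate_couple(lam_h, lam_w, boost, rng)) > t)
    return alive / n

# With independent exponential baselines, P(both alive at t) = exp(-(lam_h+lam_w)*t);
# the boost only affects the survivor, so joint survival is unchanged by it.
est = joint_survival(0.05, 0.04, 3.0, 10.0, 50_000)
exact = math.exp(-(0.05 + 0.04) * 10.0)
```

The boost instead shows up in the marginal lifetime of each spouse and in the dependence between the two death times, which is what the paper's dynamic framework generalizes.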

2.
We consider time series data modeled by ordinary differential equations (ODEs), widespread models in physics, chemistry, biology and science in general. The sensitivity analysis of such dynamical systems usually requires the calculation of various derivatives with respect to the model parameters. We employ the adjoint state method (ASM) for efficient computation of the first and second derivatives of likelihood functionals constrained by ODEs with respect to the parameters of the underlying ODE model. Essentially, the gradient can be computed at a cost (measured in model evaluations) that is independent of the number of ODE model parameters, and the Hessian at a cost that is linear, rather than quadratic, in the number of parameters. Sensitivity analysis thus becomes feasible even when the parameter space is high-dimensional. The main contributions are the derivation and rigorous analysis of the ASM in the statistical context, where discrete data are coupled with the continuous ODE model. Further, we present a highly optimized implementation of the results and benchmark it on a number of problems. The results are directly applicable to, e.g., maximum-likelihood estimation or Bayesian sampling of ODE-based statistical models, allowing for faster, more stable estimation of the parameters of the underlying ODE model.
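The adjoint idea can be made concrete on a toy problem. The sketch below (not the paper's implementation) takes the scalar ODE x' = -θx discretized by forward Euler, a terminal loss 0.5(x_N - y)², and computes the exact gradient of the *discrete* loss by one backward adjoint sweep, then checks it against a finite difference; all parameter values are illustrative.

```python
def forward(theta, x0, h, n):
    """Forward Euler for x' = -theta * x; returns the full trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] * (1.0 - h * theta))
    return xs

def loss_and_adjoint_grad(theta, x0, y, h, n):
    """Loss 0.5*(x_N - y)^2 and its exact discrete gradient via the adjoint.

    The adjoint lam_k = dL/dx_k satisfies lam_N = x_N - y and
    lam_k = lam_{k+1} * (1 - h*theta); the gradient accumulates the
    explicit theta-dependence of each Euler step, dx_{k+1}/dtheta = -h*x_k.
    """
    xs = forward(theta, x0, h, n)
    loss = 0.5 * (xs[-1] - y) ** 2
    lam = xs[-1] - y              # adjoint terminal condition
    grad = 0.0
    for k in range(n - 1, -1, -1):
        grad += lam * (-h * xs[k])    # explicit parameter dependence of step k
        lam *= (1.0 - h * theta)      # backward adjoint recursion
    return loss, grad

theta, x0, y, h, n = 0.8, 1.0, 0.3, 0.01, 100
loss, grad = loss_and_adjoint_grad(theta, x0, y, h, n)

# Central finite-difference check of the same discrete loss
eps = 1e-6
lp = 0.5 * (forward(theta + eps, x0, h, n)[-1] - y) ** 2
lm = 0.5 * (forward(theta - eps, x0, h, n)[-1] - y) ** 2
fd = (lp - lm) / (2 * eps)
```

One backward sweep prices the whole gradient regardless of how many parameters the ODE has, which is the cost property the abstract highlights.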

3.
This paper presents a new parameter and state estimation algorithm for single-input single-output systems based on canonical state space models from the given input–output data. The difficulties of identification for state space models lie in the unknown noise terms in the information vector and the unknown state variables. By means of the hierarchical identification principle, the noise terms in the information vector are replaced with the estimated residuals, a new least squares algorithm is proposed for parameter estimation, and the system states are computed using the estimated parameters. Finally, an illustrative example is provided.
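As a hedged sketch of the least-squares machinery such algorithms build on (this is plain recursive least squares on a known information vector, not the paper's hierarchical algorithm with residual substitution), consider estimating the parameters of y_t = a·y_{t-1} + b·u_t + e_t; the model coefficients 0.7 and 0.5 are illustrative.

```python
import random

def rls(data_pairs, dim, lam=1.0):
    """Recursive least squares: theta <- theta + gain * (y - phi^T theta)."""
    theta = [0.0] * dim
    # Large initial covariance encodes a diffuse prior on the parameters
    P = [[1e6 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for phi, y in data_pairs:
        Pphi = [sum(P[i][j] * phi[j] for j in range(dim)) for i in range(dim)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(dim))
        gain = [p / denom for p in Pphi]
        err = y - sum(phi[i] * theta[i] for i in range(dim))
        theta = [theta[i] + gain[i] * err for i in range(dim)]
        P = [[(P[i][j] - gain[i] * Pphi[j]) / lam for j in range(dim)]
             for i in range(dim)]
    return theta

# Simulate y_t = 0.7*y_{t-1} + 0.5*u_t + noise and recover [0.7, 0.5]
rng = random.Random(5)
y, pairs = 0.0, []
for _ in range(500):
    u = rng.gauss(0.0, 1.0)
    phi = [y, u]                       # information vector (y_{t-1}, u_t)
    y = 0.7 * y + 0.5 * u + rng.gauss(0.0, 0.1)
    pairs.append((phi, y))
theta = rls(pairs, 2)
```

The paper's hierarchical twist replaces the unknown noise and state entries of the information vector with estimated residuals and reconstructed states at each recursion, which the plain RLS above does not need to do.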

4.
The use of nonlinear state space models in the study and control of stochastic dynamic systems is growing steadily. With the new generation of particle filters, efficient filtering methods are now available for the identification of these models. However, their statistical selection is still an open problem, because the related likelihoods are frequently inaccessible and intricate to estimate. This rules out the usual model-comparison information criteria, such as Akaike's, and also disfavours the efficient methods relying on Bayes factor estimation by MCMC simulation. This Note shows how a convergent nonparametric Bayes factor estimator can be built and used advantageously, as a direct application of these new particle filters themselves. To cite this article: J.-P. Vila, I. Saley, C. R. Acad. Sci. Paris, Ser. I 347 (2009).
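A minimal sketch of the underlying idea (not the Note's estimator): a bootstrap particle filter yields an estimate of each model's marginal log-likelihood as a by-product of filtering, and the difference of two such estimates is a log Bayes factor under equal priors. The linear-Gaussian model and all parameter values below are illustrative.

```python
import math
import random

def pf_loglik(ys, phi, q, r, n_part=500, seed=1):
    """Bootstrap particle filter log-likelihood estimate for
    x_t = phi*x_{t-1} + N(0,q),  y_t = x_t + N(0,r)."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, math.sqrt(q)) for _ in range(n_part)]
    ll = 0.0
    for y in ys:
        # propagate through the state equation
        parts = [phi * x + rng.gauss(0.0, math.sqrt(q)) for x in parts]
        # weight by the observation density N(y; x, r)
        ws = [math.exp(-0.5 * (y - x) ** 2 / r) / math.sqrt(2 * math.pi * r)
              for x in parts]
        ll += math.log(sum(ws) / n_part)   # running marginal likelihood
        # multinomial resampling
        parts = rng.choices(parts, weights=ws, k=n_part)
    return ll

# Data simulated from model A (phi = 0.9); compare against model B (phi = 0.1)
rng = random.Random(0)
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    ys.append(x + rng.gauss(0.0, 0.5))
log_bf = pf_loglik(ys, 0.9, 1.0, 0.25) - pf_loglik(ys, 0.1, 1.0, 0.25)
```

A positive `log_bf` favours model A, which generated the data; the Note's contribution is a convergent nonparametric construction of such estimators, not this naive plug-in difference.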

5.
This paper reviews estimation problems with missing, or hidden, data. We formulate this problem in the context of Markov models and consider two interrelated issues, namely, the estimation of a state given measured data and model parameters, and the estimation of model parameters given the measured data alone. We also consider situations where the measured data are themselves incomplete in some sense. We deal with various combinations of discrete and continuous states and observations.
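For the first issue, state estimation given data and known parameters, the discrete-state, discrete-observation case is handled by the classical HMM forward algorithm. The sketch below is a standard textbook implementation, not taken from the paper; the two-state transition and emission matrices are illustrative.

```python
import math

def hmm_forward(obs, pi, A, B):
    """Forward algorithm for a discrete HMM.

    Returns the filtered state probabilities P(state_t | obs_1..t) at each
    step and the log-likelihood of the observation sequence.
    pi: initial distribution, A[s'][s]: transition, B[s][o]: emission.
    """
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    z = sum(alpha)
    ll, alpha = math.log(z), [a / z for a in alpha]
    filtered = [alpha[:]]
    for o in obs[1:]:
        # predict through A, then correct by the emission probability
        alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
                 for s in range(n)]
        z = sum(alpha)
        ll += math.log(z)
        alpha = [a / z for a in alpha]
        filtered.append(alpha[:])
    return filtered, ll

# Sticky two-state chain; each state prefers to emit its own symbol
A = [[0.9, 0.1], [0.1, 0.9]]
B = [[0.8, 0.2], [0.2, 0.8]]
filtered, ll = hmm_forward([0, 0, 0], [0.5, 0.5], A, B)
```

Running the second issue (parameter estimation from data alone) on top of this recursion is exactly where EM-type algorithms for hidden data enter.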

6.
We introduce polynomial processes in the context of stochastic portfolio theory to model simultaneously companies' market capitalizations and the corresponding market weights. These models substantially extend the volatility-stabilized market models considered in Fernholz and Karatzas (2005); in particular, they allow for correlation between the individual stocks. At the same time they remain remarkably tractable, which makes them applicable in practice, especially for estimation and calibration to high-dimensional equity index data. In the diffusion case we characterize the polynomial property of the market capitalizations and their weights, exploiting the fact that the transformation between absolute and relative quantities fits the structural properties of polynomial processes perfectly. Explicit parameter conditions assuring the existence of a local martingale deflator and of relative arbitrages with respect to the market portfolio are given, and the connection to non-attainment of the boundary of the unit simplex is discussed. We also consider extensions to models with jumps and the computation of optimal relative arbitrage strategies.

7.
The location of a rapid transit line (RTL) represents a very complex decision problem because of the large number of decision makers, unquantifiable criteria and uncertain data. In this context Operational Research can help in the design process by providing tools to generate and assess alternative solutions. For this purpose two bicriterion mathematical programming models, the Maximum Coverage Shortest Path model and the Median Shortest Path model, have been developed in the past. In this paper a new bicriterion model is introduced that can evaluate the attractiveness of an RTL more realistically. To estimate the non-inferior solution set of the problem, a procedure based on a k-shortest-path algorithm was developed. This approach was applied to a well-known sample problem, and the results are discussed and compared with those obtained using the Median Shortest Path model.

8.
Among the convolution particle filters for discrete-time dynamic systems defined by nonlinear state space models, the Resampled Convolution Filter is one of the most efficient, both in terms of estimation of the conditional probability density functions (pdf's) of the state variables and unknown parameters and in terms of implementation. This nonparametric filter is known for its almost sure L1-convergence property. But, in contrast to the other convolution filters, its almost sure pointwise convergence had not yet been established. This paper is devoted to the proof of this property.
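To fix ideas, one step of a convolution-type filter can be sketched as follows. This is a simplified illustration, not the Resampled Convolution Filter itself: particles are weighted through a Gaussian kernel centred at the observation and then resampled with a small kernel perturbation (regularized resampling); the scalar model, bandwidth and noise levels are all illustrative.

```python
import math
import random

def convolution_filter_step(parts, y, propagate, h, rng):
    """One kernel-regularized particle filter step (illustrative sketch)."""
    parts = [propagate(x, rng) for x in parts]
    # kernel weighting: Gaussian kernel K_h centred at the observation y
    ws = [math.exp(-0.5 * ((y - x) / h) ** 2) for x in parts]
    new = rng.choices(parts, weights=ws, k=len(parts))
    # convolution step: jitter the resampled particles with the same kernel
    return [x + rng.gauss(0.0, h) for x in new]

# Track an AR(1) state observed in noise
rng = random.Random(42)
x = 0.0
parts = [rng.gauss(0.0, 1.0) for _ in range(1000)]
for _ in range(50):
    x = 0.9 * x + rng.gauss(0.0, 0.3)
    y = x + rng.gauss(0.0, 0.3)
    parts = convolution_filter_step(
        parts, y, lambda s, r: 0.9 * s + r.gauss(0.0, 0.3), 0.3, rng)
est = sum(parts) / len(parts)
```

The kernel perturbation is what makes the particle cloud an absolutely continuous (density) estimate of the filtering distribution, the object whose pointwise convergence the paper establishes.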

9.
The decision problem considered in this paper is a hierarchical workforce scheduling problem in which a higher-qualified worker can substitute for a lower-qualified one, but not vice versa, labour requirements may vary, and each worker must receive n off-days a week. Within this context, five mathematical models are discussed. The first two of these models were previously published; both are for the case where the work is indivisible. The remaining three models are developed by the authors of this paper: one is for the case where the work is indivisible, and the other two are for the case where the work is divisible. The three new models are proposed with the purpose of removing the shortcomings of the two previously published models. All five models are applied to the same illustrative example. Additionally, a total of 108 test problems are solved within the context of two computational experiments.

10.
We introduce a flexible, open-source implementation that provides the optimal sensitivity of solutions of nonlinear programming (NLP) problems, and is adapted to a fast solver based on a barrier NLP method. The program, called sIPOPT, evaluates the sensitivity of the Karush-Kuhn-Tucker (KKT) system with respect to perturbation parameters. It is paired with the open-source IPOPT NLP solver and reuses matrix factorizations from the solver, so that sensitivities to parameters are determined with minimal computational cost. Aside from estimating sensitivities for parametric NLPs, the program provides approximate NLP solutions for nonlinear model predictive control and state estimation. These are enabled by pre-factored KKT matrices and a fix-relax strategy based on Schur complements. In addition, reduced Hessians are obtained at minimal cost, and these are particularly effective for approximating covariance matrices in parameter and state estimation problems. The sIPOPT program is demonstrated on four case studies to illustrate all of these features.

11.
The paper deals with recursive state estimation for hybrid systems. The unobservable state of such systems changes in both a continuous and a discrete way. Fast and efficient online estimation of the hybrid system state is desired in many application areas. This paper proposes to approach the problem via Bayesian filtering in factorized (decomposed) form. A general recursive solution is proposed in the form of a probability density function updated entry-wise. The paper summarizes the general factorized filter specialized for (i) normal state-space models; (ii) multinomial state-space models with discrete observations; and (iii) hybrid systems. Illustrative experiments and a comparison with one of the counterparts are provided.

12.
Semi-Parametric Probability-Weighted Moments Estimation Revisited
In this paper, for heavy-tailed models and through the use of probability weighted moments based on the largest observations, we deal essentially with the semi-parametric estimation of the Value-at-Risk at a level p, the size of the loss that occurs with a small probability p, as well as with the dual problem of estimating the probability of exceedance of a high level x. These estimation procedures depend crucially on the estimation of the extreme value index, the primary parameter in statistics of extremes, which is also done on the basis of the same weighted moments. Under regular variation conditions on the right tail of the underlying distribution function F, we prove the consistency and asymptotic normality of the estimators under consideration, through the usual link of their asymptotic behaviour to that of the extreme value index estimator they are based on. The finite-sample performance of these estimators is illustrated through Monte Carlo simulations. An adaptive choice of thresholds is put forward. Applications to a real data set from the field of insurance as well as to simulated data are also provided.
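To illustrate semi-parametric tail estimation from the k largest observations, here is a sketch using the classical Hill estimator of the extreme value index together with the Weissman-type high-quantile (Value-at-Risk) estimator built on it. These are different, well-known estimators, not the paper's probability-weighted-moment ones; the Pareto sample and the choices of k and p are illustrative.

```python
import math
import random

def hill_and_var(sample, k, p):
    """Hill estimator of the extreme value index gamma from the k largest
    order statistics, and the Weissman estimator of the VaR at level p
    (the quantile exceeded with probability p) built on it."""
    xs = sorted(sample, reverse=True)
    thr = xs[k]                                      # threshold X_{n-k:n}
    gamma = sum(math.log(xs[i] / thr) for i in range(k)) / k
    var_p = thr * (k / (len(sample) * p)) ** gamma   # extrapolate beyond the data
    return gamma, var_p

# Pareto sample: X = U**(-1/alpha) has extreme value index gamma = 1/alpha
rng = random.Random(7)
alpha = 2.0
sample = [rng.random() ** (-1.0 / alpha) for _ in range(20000)]
gamma_hat, var_hat = hill_and_var(sample, 1000, 0.001)
# true gamma = 0.5; true VaR at p = 0.001 is 0.001**(-0.5) ~ 31.6
```

As in the paper, everything hinges on the extreme value index estimate: an error in gamma is amplified exponentially in the extrapolation factor of the quantile estimator, which is why the choice of k (the threshold) matters so much.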

13.
14.
Artificial Neural Networks (ANNs) are well known for their ability to capture non-linear trends in scientific data. However, the heuristic nature of the estimation of parameters associated with ANNs has prevented their evolution into efficient surrogate models. Further, the dearth of optimal training-size estimation algorithms for the data-greedy ANNs has resulted in overfitting. Therefore, through this work, we contribute a novel ANN-building algorithm called TRANSFORM, aimed at the simultaneous and optimal estimation of ANN architecture, training size and transfer function. TRANSFORM is integrated with three standalone Sobol-sampling-based training-size determination algorithms, which incorporate the concepts of hypercube sampling and optimal space filling. TRANSFORM was used to construct ANN surrogates for a highly non-linear, industrially validated continuous casting model from a steel plant. Multiobjective optimization of the casting model to ensure maximum productivity, maximum energy saving and minimum operational cost was performed by the ANN-assisted Non-dominated Sorting Genetic Algorithm (NSGA-II). The surrogate-assisted optimization was found to be 13 times faster than conventional optimization, leading to its online implementation. Simple operator's rules were deciphered from the optimal solutions using Pareto-front characterization and K-means clustering for optimal functioning of the casting plant. Comprehensive studies on (a) computational-time comparisons between the proposed training-size estimation algorithms and (b) predictability comparisons between the constructed ANNs and state-of-the-art statistical models (Kriging interpolators) add to the other highlights of this work. TRANSFORM takes a physics-based model as its only input and provides parsimonious ANNs as outputs, making it generic across scientific domains.

15.
We provide in this paper asymptotic theory for the multivariate GARCH(p,q) process. Strong consistency of the quasi-maximum likelihood estimator (QMLE) is established by appealing to conditions given by Jeantheau (Econometric Theory 14 (1998), 70) in conjunction with a result given by Boussama (Ergodicity, mixing and estimation in GARCH models, Ph.D. Dissertation, University of Paris 7, 1998) concerning the existence of a stationary and ergodic solution to the multivariate GARCH(p,q) process. We prove asymptotic normality of the QMLE when the initial state is either stationary or fixed.
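The quasi-likelihood that the QMLE maximizes can be sketched in the simplest univariate case. The code below (an illustration, not the paper's multivariate setting) evaluates the Gaussian quasi-log-likelihood of a GARCH(1,1) recursion and checks that the true parameters score higher than a badly misspecified set on simulated data; all parameter values are illustrative.

```python
import math
import random

def garch_qll(returns, omega, alpha, beta):
    """Gaussian quasi-log-likelihood of a univariate GARCH(1,1):
    sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}."""
    sigma2 = omega / (1.0 - alpha - beta)   # start at the stationary variance
    ll = 0.0
    for r in returns:
        ll += -0.5 * (math.log(2 * math.pi * sigma2) + r * r / sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return ll

# Simulate returns from a GARCH(1,1) with (omega, alpha, beta) = (0.1, 0.1, 0.8)
rng = random.Random(3)
omega_t, alpha_t, beta_t = 0.1, 0.1, 0.8
sigma2 = omega_t / (1.0 - alpha_t - beta_t)
returns = []
for _ in range(2000):
    r = math.sqrt(sigma2) * rng.gauss(0.0, 1.0)
    returns.append(r)
    sigma2 = omega_t + alpha_t * r * r + beta_t * sigma2

ll_true = garch_qll(returns, 0.1, 0.1, 0.8)
ll_bad = garch_qll(returns, 0.01, 0.05, 0.5)   # grossly understates the variance
```

Maximizing this criterion over (omega, alpha, beta) gives the QMLE; the paper's results are about the consistency and asymptotic normality of that maximizer in the multivariate case.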

16.
The supervised classification of fuzzy data obtained from a random experiment is discussed. The data generation process is modelled through random fuzzy sets which, from a formal point of view, can be identified with certain function-valued random elements. First, one of the most versatile discriminant approaches in the context of functional data analysis is adapted to the specific case of interest: discriminant analysis based on nonparametric kernel density estimation. In general, this criterion is shown not to be optimal and to require large sample sizes. To avoid these inconveniences, a simpler approach is introduced that eludes density estimation by considering conditional probabilities on certain balls. The approaches are applied to two experiments: one concerning fuzzy perceptions and linguistic labels, and another concerning flood analysis. The methods are tested against linear discriminant analysis and random K-fold cross-validation.
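The kernel-density discriminant rule mentioned above assigns a point to the class maximizing prior times estimated density. The sketch below shows that rule for scalar features only (the paper works with function-valued fuzzy data, which requires a functional kernel); the training samples, priors and bandwidth are illustrative.

```python
import math

def kde(x, data, h):
    """Gaussian kernel density estimate at the point x."""
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / (
        len(data) * h * math.sqrt(2 * math.pi))

def classify(x, class_data, priors, h=0.5):
    """Nonparametric discriminant rule: pick the class c maximizing
    priors[c] * estimated density of class c at x."""
    scores = {c: priors[c] * kde(x, d, h) for c, d in class_data.items()}
    return max(scores, key=scores.get)

train = {"low": [0.1, 0.3, -0.2, 0.5, 0.0],
         "high": [2.9, 3.1, 3.4, 2.7, 3.0]}
priors = {"low": 0.5, "high": 0.5}
label = classify(3.2, train, priors)
```

The paper's simpler ball-based alternative replaces the density estimates with empirical conditional probabilities on balls around x, avoiding the bandwidth and sample-size issues of the density route.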

17.
Many numerical aspects are involved in the parameter estimation of stochastic volatility models. We investigate a model for stochastic volatility suggested by Hobson and Rogers [Complete models with stochastic volatility, Mathematical Finance 8 (1998) 27] and we focus on its calibration performance with respect to numerical methodology. In the recent financial literature there are many papers dealing with stochastic volatility models and their capability of capturing European option prices; in Figà-Talamanca and Guerra [Towards a coherent volatility pricing model: An empirical comparison, Financial Modelling, Physica-Verlag, 2000] a comparison between some of the most significant models is carried out. The model proposed by Hobson and Rogers seems to describe the dynamics of volatility quite well. In Figà-Talamanca and Guerra [Fitting the smile by a complete model, submitted] a deep investigation of the Hobson and Rogers model was put forward, introducing different ways of estimating the parameters. In this paper we test the robustness of the numerical procedures involved in calibration: the quadrature formula used to compute the integral in the definition of some state variables, called offsets, that represent the weight of the historical log-returns; the discretization schemes adopted to solve the stochastic differential equation for volatility; and the number of simulations in the Monte Carlo procedure introduced to obtain the option price.
The main results can be summarized as follows. The choice of a high-order convergence scheme is not fully justified, because the option prices computed via the calibration method are not sensitive to the use of a scheme with order of convergence 2.0 or greater. Refining the approximation rule for the integral, on the contrary, makes it possible to compute option prices that are often closer to market prices. In conclusion, 10,000 simulations seem sufficient to compute the option price, and a higher number only slows down the numerical procedure.
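The interplay of discretization scheme and number of simulations can be illustrated on a model where the answer is known in closed form. The sketch below (illustrative, not the Hobson-Rogers calibration) prices a European call by Monte Carlo with an Euler scheme for geometric Brownian motion, using the Black-Scholes formula as a benchmark; 10,000 paths match the scale the abstract reports as sufficient, and all market parameters are illustrative.

```python
import math
import random

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes closed-form European call price (the benchmark)."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    n = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * n(d1) - k * math.exp(-r * t) * n(d2)

def mc_call(s0, k, r, sigma, t, n_paths, n_steps=50, seed=0):
    """Monte Carlo price with an Euler scheme for dS = r*S dt + sigma*S dW."""
    rng = random.Random(seed)
    h = t / n_steps
    payoff = 0.0
    for _ in range(n_paths):
        s = s0
        for _ in range(n_steps):
            s += r * s * h + sigma * s * math.sqrt(h) * rng.gauss(0.0, 1.0)
        payoff += max(s - k, 0.0)
    return math.exp(-r * t) * payoff / n_paths

exact = bs_call(100, 100, 0.05, 0.2, 1.0)
approx = mc_call(100, 100, 0.05, 0.2, 1.0, 10_000)
```

With 10,000 paths the statistical error already dominates the Euler discretization bias, which mirrors the abstract's observation that higher-order schemes buy little while the sampling budget sets the accuracy.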

18.
The paper presents a unified approach to local likelihood estimation for a broad class of nonparametric models, including, e.g., the regression, density, Poisson and binary response models. The method extends the adaptive weights smoothing (AWS) procedure introduced in Polzehl and Spokoiny (2000) in the context of image denoising. The main idea of the method is to describe the largest possible local neighborhood of every design point Xi in which the local parametric assumption is justified by the data. The method is especially powerful for model functions having large homogeneous regions and sharp discontinuities. The performance of the proposed procedure is illustrated by numerical examples for density estimation and classification. We also establish some remarkable theoretical nonasymptotic results on properties of the new algorithm. These include the "propagation" property, which in particular yields the root-n consistency of the resulting estimate in the homogeneous case. We also state an "oracle" result, which implies rate optimality of the estimate under the usual smoothness conditions, and a "separation" result, which explains the sensitivity of the method to structural changes.

19.
A general class of discrete-time uncertain nonlinear stochastic systems corrupted by finite-energy disturbances is considered, together with estimation performance criteria. These criteria include guaranteed-cost suboptimal versions of estimation objectives such as H2, H∞, stochastic passivity, etc. Linear state estimators that satisfy these criteria are presented. A common matrix-inequality formulation is used to characterize the estimator design equations.

20.
In this paper we present the application of a method of adaptive estimation using an algebraic-geometric approach to the study of dynamic processes in the brain. It is assumed that brain dynamic processes can be described by nonlinear or bilinear lattice models. Our research focuses on the development of an estimation algorithm for a signal process in lattice models with background additive white noise, under different assumptions regarding the characteristics of the signal process. We analyze the estimation algorithm and implement it as a stochastic differential equation under the assumption that the Lie algebra associated with the signal process can be reduced to a finite-dimensional nilpotent algebra. A generalization is given for the case of lattice models that belong to a class of causal lattices with certain restrictions on input and output signals. The application of adaptive filters to state estimation of the CA3 region of the hippocampus (a common location of the epileptic focus) is discussed. Our application involves two problems: (1) adaptive estimation of the state variables of the hippocampal network, and (2) space identification of the coupled ordinary differential equation lattice model for the CA3 region.
