Similar Literature (20 results)
1.
The optimization of the output matrix for a discrete-time, single-output, linear stochastic system is approached from two different points of view. Firstly, we investigate the problem of minimizing the steady-state filter error variance with respect to a time-invariant output matrix subject to a norm constraint. Secondly, we propose a filter algorithm in which the output matrix at time k is chosen so as to maximize the difference at time k+1 between the variance of the prediction error and that of the a posteriori error. For this filter, boundedness of the covariance and asymptotic stability are investigated. Several numerical experiments are reported: they give information about the limiting behavior of the sequence of output matrices generated by the algorithm and the corresponding error covariance. They also enable us to make a comparison with the results obtained by solving the former problem. This work was supported by the Italian Ministry of Education (MPI 40%), Rome, Italy.
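As a rough illustration of the greedy rule in the second problem above, the following sketch chooses, at every step, a unit-norm output vector that maximizes the one-step gap between prediction-error and a posteriori-error variance for a two-state, single-output system. The system matrices, noise variance and grid search are all assumed, and the reading of the rule is ours rather than the paper's exact algorithm.

```python
import numpy as np

# Greedy output-vector selection sketch (assumed data, not the paper's algorithm):
# at each step pick the unit-norm c maximizing the trace reduction achieved by
# one scalar measurement y = c' x + v.
A = np.array([[0.9, 0.2],
              [0.0, 0.95]])      # state transition (assumed)
Q = 0.1 * np.eye(2)              # process noise covariance (assumed)
r = 0.5                          # measurement noise variance (assumed)
P = np.eye(2)                    # initial prediction-error covariance

def best_output_vector(P, r, n_grid=720):
    """Grid-search the unit circle for the c maximizing c' P^2 c / (c' P c + r),
    the trace reduction produced by one scalar measurement."""
    thetas = np.linspace(0.0, np.pi, n_grid)          # c and -c are equivalent
    best_c, best_gain = None, -np.inf
    for th in thetas:
        c = np.array([np.cos(th), np.sin(th)])
        gain = (c @ P @ P @ c) / (c @ P @ c + r)
        if gain > best_gain:
            best_c, best_gain = c, gain
    return best_c, best_gain

for k in range(10):
    c, gain = best_output_vector(P, r)
    # measurement update with the chosen output vector
    Pc = P @ c
    P_post = P - np.outer(Pc, Pc) / (c @ P @ c + r)
    # time update to obtain the next prediction-error covariance
    P = A @ P_post @ A.T + Q
    print(f"k={k}: c={np.round(c, 3)}, trace reduction={gain:.4f}, next trace(P)={np.trace(P):.4f}")
```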

2.
This paper discusses the estimation of a class of discrete-time linear stochastic systems with statistically-constrained unknown inputs (UI), which can represent an arbitrary combination of a class of un-modeled dynamics, random UI with unknown covariance matrix and deterministic UI. In filter design, an upper bound filter is explored to compute, recursively and adaptively, the upper bounds of covariance matrices of the state prediction error, innovation and state estimate error. Furthermore, the minimum upper bound filter (MUBF) is obtained via online scalar parameter convex optimization in pursuit of the minimum upper bounds. Two examples, a system with multiple piecewise UIs and a continuous stirred tank reactor (CSTR), are used to illustrate the proposed MUBF scheme and verify its performance.

3.
Important performance measures for many Markov renewal processes are the counts of the exits from each state. We present solutions for the conditional first, second, and covariance moments of the state exiting counting processes for a Markov renewal process, and solutions for the unconditional equilibrium versions of the moments. We demonstrate the relationship between the conditional first moments for the state exiting and the state entering counting processes. For analytical and illustrative purposes, we concentrate on the two-state case. Two asymptotic expansions for the moment functions are proposed and evaluated both analytically and empirically. The two approximations are shown to be competitive in terms of absolute relative error, but the second approximation has a simpler analytical form which is useful in analyzing more complex stochastic processes having an underlying MRP structure.
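The moments discussed above can be sanity-checked by simulation. The sketch below uses an entirely assumed two-state Markov renewal process (an assumed embedded jump chain, gamma sojourn times and a fixed horizon) and estimates the first and second moments of the state-exiting counts by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed two-state Markov renewal process: embedded jump chain, gamma sojourn
# times and observation horizon are illustrative choices.
P = np.array([[0.0, 1.0],
              [0.6, 0.4]])          # embedded jump chain
shape = np.array([2.0, 1.0])        # gamma sojourn-time shape per state
scale = np.array([1.0, 3.0])        # gamma sojourn-time scale per state
T = 50.0                            # observation horizon

def exit_counts(start=0):
    """Simulate one path; return the number of exits from each state by time T."""
    counts = np.zeros(2)
    t, state = 0.0, start
    while True:
        t += rng.gamma(shape[state], scale[state])
        if t > T:
            return counts
        counts[state] += 1                      # an exit from `state` occurred
        state = rng.choice(2, p=P[state])

samples = np.array([exit_counts() for _ in range(20000)])
print("mean exit counts:", samples.mean(axis=0))
print("covariance of exit counts:\n", np.cov(samples, rowvar=False))
```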

4.
《Applied Mathematical Modelling》2014,38(9-10):2422-2434
An exact, closed-form minimum variance filter is designed for a class of discrete time uncertain systems which allows for both multiplicative and additive noise sources. The multiplicative noise model includes a popular class of models (Cox-Ingersoll-Ross type models) in econometrics. The parameters of the system under consideration which describe the state transition are assumed to be subject to stochastic uncertainties. The problem addressed is the design of a filter that minimizes the trace of the estimation error variance. Sensitivity of the new filter to the size of parameter uncertainty, in terms of the variance of parameter perturbations, is also considered. We refer to the new filter as the ‘perturbed Kalman filter’ (PKF) since it reduces to the traditional (or unperturbed) Kalman filter as the size of stochastic perturbation approaches zero. We also consider a related approximate filtering heuristic for univariate time series and refer to the filter based on this heuristic as the approximate perturbed Kalman filter (APKF). We test the performance of our new filters on three simulated numerical examples and compare the results with the unperturbed Kalman filter that ignores the uncertainty in the transition equation. Through numerical examples, the PKF and APKF are shown to outperform the traditional (or unperturbed) Kalman filter in terms of the size of the estimation error when stochastic uncertainties are present, even when the size of stochastic uncertainty is inaccurately identified.
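A rough univariate sketch in the spirit of the perturbed filter described above follows; the model, parameter values and the particular variance-inflation step are assumptions for illustration, not the paper's exact recursions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Univariate sketch: x_{k+1} = (a + delta_k) x_k + w_k, y_k = x_k + v_k, where
# delta_k has variance s.  The "perturbed" filter inflates its prediction
# variance by s * E[x_k^2]; the "unperturbed" filter ignores s.  All numbers
# below are assumed.
a, q, r = 0.95, 0.2, 0.5     # nominal transition, process and measurement noise
s = 0.05                     # variance of the stochastic perturbation of `a`

def run_filter(y, use_perturbation):
    xhat, P = 0.0, 1.0
    estimates = []
    for yk in y:
        # measurement update
        K = P / (P + r)
        xhat = xhat + K * (yk - xhat)
        P = (1 - K) * P
        estimates.append(xhat)
        # time update; the perturbed filter adds s * E[x_k^2] = s * (xhat^2 + P)
        infl = s * (xhat**2 + P) if use_perturbation else 0.0
        xhat = a * xhat
        P = a**2 * P + infl + q
    return np.array(estimates)

# simulate data from the true perturbed model
n, x = 200, 0.0
xs, ys = [], []
for _ in range(n):
    ys.append(x + rng.normal(0, np.sqrt(r)))
    xs.append(x)
    x = (a + rng.normal(0, np.sqrt(s))) * x + rng.normal(0, np.sqrt(q))

for flag, name in [(True, "perturbed  "), (False, "unperturbed")]:
    est = run_filter(ys, flag)
    print(name, "RMSE:", np.sqrt(np.mean((est - np.array(xs))**2)))
```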

5.
The Wonham filter, which estimates a Markov chain observed in Brownian noise, is considered. However, the parameters of the observation process are not known. Maximizing the un-normalized probabilities of the Zakai equation over the parameters leads to a Nash equilibrium whose solution is discussed using the stochastic control results of Peng, and of Yong and Zhou.

6.
We consider a multiperiod mean-variance model where the model parameters change according to a stochastic market. The mean vector and covariance matrix of the random returns of risky assets all depend on the state of the market during any period, where the market process is assumed to follow a Markov chain. Dynamic programming is used to solve an auxiliary problem which, in turn, gives the efficient frontier of the mean-variance formulation. An explicit expression is obtained for the efficient frontier and an illustrative example is given to demonstrate the application of the procedure.
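The following Monte Carlo sketch only illustrates the market model (state-dependent mean vector and covariance matrix driven by a Markov chain); the transition matrix, per-state moments, the fixed portfolio rule and the horizon are assumed, and no attempt is made to reproduce the dynamic-programming solution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Regime-switching market model sketch with assumed data; illustrates how the
# return moments depend on the Markov market state, not the optimal policy.
trans = np.array([[0.8, 0.2],
                  [0.3, 0.7]])                            # market state transitions
mu = [np.array([1.08, 1.05]), np.array([1.01, 1.02])]     # gross mean returns per state
cov = [np.diag([0.04, 0.01]), np.diag([0.09, 0.02])]      # return covariances per state
w = np.array([0.5, 0.5])                                  # a fixed (non-optimal) portfolio
horizon, n_paths = 10, 20000

terminal = np.empty(n_paths)
for p in range(n_paths):
    state, wealth = 0, 1.0
    for _ in range(horizon):
        gross = rng.multivariate_normal(mu[state], cov[state])  # asset gross returns
        wealth *= w @ gross
        state = rng.choice(2, p=trans[state])
    terminal[p] = wealth

print("E[terminal wealth]  :", terminal.mean())
print("Var[terminal wealth]:", terminal.var())
```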

7.
We develop and implement a method for maximum likelihood estimation of a regime-switching stochastic volatility model. Our model uses a continuous time stochastic process for the stock dynamics with the instantaneous variance driven by a Cox–Ingersoll–Ross process and each parameter modulated by a hidden Markov chain. We propose an extension of the EM algorithm through the Baum–Welch implementation to estimate our model and filter the hidden state of the Markov chain while using the VIX index to invert the latent volatility state. Using Monte Carlo simulations, we test the convergence of our algorithm and compare it with an approximate likelihood procedure where the volatility state is replaced by the VIX index. We find that our method is more accurate than the approximate procedure. Then, we apply Fourier methods to derive a semi-analytical expression of S&P500 and VIX option prices, which we calibrate to market data. We show that the model is sufficiently rich to encapsulate important features of the joint dynamics of the stock and the volatility and to consistently fit option market prices.
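A minimal forward (filtering) pass for a hidden Markov regime, the building block of a Baum–Welch/EM scheme like the one described above, is sketched below; the Gaussian emission model on a volatility proxy, the transition matrix and all parameter values are assumptions for illustration.

```python
import numpy as np

# Forward (filtering) pass of a hidden Markov model with Gaussian emissions on
# an assumed volatility proxy; all parameters below are illustrative.
def forward_filter(obs, trans, means, stds, init):
    """Return p(state_t | obs_1..t) for each t and the total log-likelihood."""
    alphas = np.zeros((len(obs), len(init)))
    loglik, pred = 0.0, init
    for t, y in enumerate(obs):
        lik = np.exp(-0.5 * ((y - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
        post = pred * lik
        norm = post.sum()
        loglik += np.log(norm)
        alphas[t] = post / norm
        pred = alphas[t] @ trans          # one-step prediction for t+1
    return alphas, loglik

trans = np.array([[0.95, 0.05], [0.10, 0.90]])
means, stds = np.array([15.0, 30.0]), np.array([3.0, 8.0])   # calm vs stressed regime
rng = np.random.default_rng(3)

# synthetic "VIX-like" observations drawn from the assumed model
state, y = 0, []
for _ in range(300):
    state = rng.choice(2, p=trans[state])
    y.append(rng.normal(means[state], stds[state]))

probs, ll = forward_filter(np.array(y), trans, means, stds, np.array([0.5, 0.5]))
print("log-likelihood:", ll, " P(stressed regime) at final time:", probs[-1, 1])
```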

8.
In this paper, we study a reflected Markov-modulated Brownian motion with a two-sided reflection in which the drift, diffusion coefficient and the two boundaries are (jointly) modulated by a finite state space irreducible continuous time Markov chain. The goal is to compute the stationary distribution of this Markov process, which in addition to the complication of having a stochastic boundary can also include jumps at state change epochs of the underlying Markov chain because of the boundary changes. We give the general theory and then specialize to the case where the underlying Markov chain has two states.

9.

We show that for the binomial process (or Bernoulli random walk) the orthogonal functionals constructed for Markov chains in Kroeker, J.P. (1980) "Wiener analysis of functionals of a Markov chain: application to neural transformations of random signals", Biol. Cybernetics 36, 243-248, [14] can be expressed using the Krawtchouk polynomials and by iterated stochastic integrals. This allows us to construct a chaotic calculus based on gradient and divergence operators and structure equations, and to establish a Clark representation formula. As an application we obtain simple infinite-dimensional proofs of covariance identities on the discrete cube.
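As a quick numerical companion, the sketch below constructs the orthogonal polynomials of the Binomial(N, p) distribution by Gram–Schmidt on the monomials; these coincide with the Krawtchouk polynomials up to normalization, and N, p and the degree cutoff are assumed values.

```python
import numpy as np
from math import comb

# Orthogonal polynomials of Binomial(N, p) via Gram-Schmidt on monomials
# (proportional to Krawtchouk polynomials); N, p and the degree cutoff assumed.
N, p = 10, 0.3
x = np.arange(N + 1, dtype=float)
w = np.array([comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)])

def inner(f, g):
    """Inner product with respect to the binomial weight."""
    return np.sum(w * f * g)

basis = []
for n in range(4):                      # first few orthogonal polynomials
    v = x**n
    for q in basis:
        v = v - inner(v, q) / inner(q, q) * q
    basis.append(v)

# Verify orthogonality under the binomial weight: Gram matrix is diagonal.
gram = np.array([[inner(basis[i], basis[j]) for j in range(4)] for i in range(4)])
print(np.round(gram, 10))
```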

10.
For a class of stochastic hybrid systems whose switching signal is a finite homogeneous Markov chain δ(k), a sufficient condition for asymptotic stability of the overall stochastic hybrid system is first obtained by constructing a stochastic hybrid Lyapunov function. Then, notions such as adjustable transition probabilities are introduced, and state feedback control is realized by applying control both to the finite homogeneous Markov chain δ(k) and to each subsystem. A sufficient condition for asymptotic stability of the resulting stochastic hybrid closed-loop system is then obtained.
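A simulation sketch of such a Markov-switched closed-loop system is given below; the mode dynamics, input matrices, feedback gains and transition matrix are all assumed, and a decaying sample trajectory is only an empirical check, not a proof of the stability conditions above.

```python
import numpy as np

rng = np.random.default_rng(4)

# Discrete-time switched linear system whose switching signal delta(k) is a
# homogeneous Markov chain, with one state-feedback gain per mode.  All data
# below are assumed for illustration.
Pi = np.array([[0.7, 0.3],
               [0.4, 0.6]])                                # transition matrix of delta(k)
A = [np.array([[1.1, 0.3], [0.0, 0.9]]),
     np.array([[0.8, -0.2], [0.1, 1.05]])]                 # open-loop mode dynamics
B = [np.array([[1.0], [0.5]]), np.array([[0.2], [1.0]])]   # input matrices
K = [np.array([[-0.5, -0.3]]), np.array([[-0.1, -0.4]])]   # per-mode feedback gains

x = np.array([[2.0], [-1.0]])
mode = 0
for k in range(40):
    Acl = A[mode] + B[mode] @ K[mode]                      # active closed-loop matrix
    x = Acl @ x
    mode = rng.choice(2, p=Pi[mode])                       # next value of delta(k)
    if k % 10 == 9:
        print(f"k={k+1:2d}  ||x|| = {np.linalg.norm(x):.4f}")
```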

11.
We study a class of Gaussian random fields with negative correlations. These fields are easy to simulate. They are defined in a natural way from a Markov chain that has the index space of the Gaussian field as its state space. In parallel with Dynkin's investigation of Gaussian fields having covariance given by the Green's function of a Markov process, we develop connections between the occupation times of the Markov chain and the prediction properties of the Gaussian field. Our interest in such fields was initiated by their appearance in random matrix theory.

12.
何舒平  刘飞 《系统科学与数学》2009,29(12):1579-1592
For a class of uncertain singular Markov jump systems with time delays, robust control theory is applied to analyze a reconstructed augmented system and obtain a fault detection filter for the system. Using a constructed stochastic Lyapunov-Krasovskii functional and linear matrix inequalities, sufficient conditions for the existence of the fault detection filter and an optimized design method are proved and given, while the residual reflects both attenuation of disturbances and sensitivity to faults. The designed filter renders the system stochastically stable, provides strong disturbance rejection, and satisfies the prescribed norm index. A simulation example verifies the results: when a fault occurs, the fault detection filter detects it with high sensitivity and effectively keeps the influence of unknown disturbances on the residual within the given bound.

13.
This article presents a Markov chain Monte Carlo algorithm for both variable and covariance selection in the context of logistic mixed effects models. This algorithm allows us to sample solely from standard densities with no additional tuning. We apply a stochastic search variable selection approach to select explanatory variables as well as to determine the structure of the random effects covariance matrix.

Prior determination of explanatory variables and random effects is not a prerequisite because the final structure is chosen in a data-driven manner in the course of the modeling procedure. To illustrate the method, we give two bank data examples.

14.
We study the continuous-time limit of a class of Markov chains coming from the evolution of classical open systems undergoing repeated interactions. This repeated interaction model was initially developed for dissipative quantum systems in Attal and Pautrat (2006) and was recently set up for the first time in Deschamps (2012) for classical dynamics. It was particularly shown in the latter that this scheme furnishes a new kind of Markovian evolution based on Hamilton’s equations of motion. The system is also proved to evolve in the continuous-time limit with a stochastic differential equation. We here extend the convergence of the evolution of the system to more general dynamics, that is, to more general Hamiltonians and probability measures in the definition of the model. We also present a natural way to directly renormalize the initial Hamiltonian in order to obtain the relevant process in a study of the continuous-time limit. Then, even if Hamilton’s equations have no explicit solution in general, we obtain some bounds on the dynamics allowing us to prove the convergence in law of the Markov chain on the system to the solution of a stochastic differential equation, via the infinitesimal generators.

15.
Matrix-valued dynamical systems are an important class of systems that can describe processes such as covariance/second-order moment processes, or processes on manifolds and Lie Groups. We address here the case of processes that leave the cone of positive semidefinite matrices invariant, thereby including covariance and second-order moment processes. Both the continuous-time and the discrete-time cases are first considered. In the LTV case, the obtained stability and stabilization conditions are expressed as differential and difference Lyapunov conditions which are equivalent, in the LTI case, to some spectral conditions for the generators of the processes. Convex stabilization conditions are also obtained in both the continuous-time and the discrete-time setting. It is proven that systems with constant delays are stable provided that the systems with zero-delays are stable, which mirrors existing results for linear positive systems. The results are then extended and unified into an impulsive formulation for which similar results are obtained. The proposed framework is very general and can recover and/or extend many of the existing results in the literature on linear systems related to (mean-square) exponential (uniform) stability. Several examples are discussed to illustrate this claim by deriving stability conditions for stochastic systems driven by Brownian motion and Poissonian jumps, Markov jump systems, (stochastic) switched systems, (stochastic) impulsive systems, (stochastic) sampled-data systems, and all their possible combinations.
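For the discrete-time covariance process P_{k+1} = A P_k A' + Q, the sketch below (with assumed A and Q) checks the spectral condition on A, iterates the recursion inside the positive semidefinite cone, and compares the limit with the solution of the corresponding discrete Lyapunov equation.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Covariance process P_{k+1} = A P_k A' + Q with assumed A, Q: for Schur-stable
# A it converges to the solution of the discrete Lyapunov equation.
A = np.array([[0.6, 0.3],
              [-0.1, 0.7]])
Q = np.array([[0.5, 0.1],
              [0.1, 0.3]])

print("spectral radius of A:", max(abs(np.linalg.eigvals(A))))   # < 1 => stable

P = np.zeros((2, 2))
for _ in range(200):                      # iterate the covariance recursion
    P = A @ P @ A.T + Q

P_star = solve_discrete_lyapunov(A, Q)    # solves P = A P A' + Q directly
print("iterated P:\n", np.round(P, 6))
print("Lyapunov solution:\n", np.round(P_star, 6))
print("eigenvalues of P_star (nonnegative):", np.linalg.eigvals(P_star))
```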

16.
Stochastic reaction systems with discrete particle numbers are usually described by a continuous-time Markov process. Realizations of this process can be generated with the stochastic simulation algorithm, but simulating highly reactive systems is computationally costly because the computational work scales with the number of reaction events. We present a new approach which avoids this drawback and increases the efficiency considerably at the cost of a small approximation error. The approach is based on the fact that the time-dependent probability distribution associated to the Markov process is explicitly known for monomolecular, autocatalytic and certain catalytic reaction channels. More complicated reaction systems can often be decomposed into several parts, some of which can be treated analytically. These subsystems are propagated in an alternating fashion similar to a splitting method for ordinary differential equations. We illustrate this approach by numerical examples and prove an error bound for the splitting error.
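For a monomolecular decay channel A -> 0 with rate c, the time-dependent law is known exactly: N(t) ~ Binomial(n0, e^{-ct}). The sketch below (rate, initial copy number and horizon assumed) compares that exact law with the empirical distribution produced by the stochastic simulation algorithm, the kind of analytical subsystem that the splitting approach above exploits.

```python
import numpy as np

rng = np.random.default_rng(5)

# Monomolecular decay A -> 0 with rate c: each molecule survives to time t
# independently with probability exp(-c t), so N(t) ~ Binomial(n0, exp(-c t)).
# Rate, n0 and the horizon below are assumed.
c, n0, t_end = 0.5, 30, 2.0

def ssa_decay(n, t_end):
    """One Gillespie (SSA) realization of A -> 0; returns the copy number at t_end."""
    t = 0.0
    while n > 0:
        t += rng.exponential(1.0 / (c * n))   # waiting time to the next decay event
        if t > t_end:
            break
        n -= 1
    return n

samples = np.array([ssa_decay(n0, t_end) for _ in range(50000)])
p_surv = np.exp(-c * t_end)
print("SSA   mean/var:", samples.mean(), samples.var())
print("exact mean/var:", n0 * p_surv, n0 * p_surv * (1 - p_surv))
```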

17.
We consider a Bernoulli process where the success probability changes with respect to a Markov chain. Such a model represents an interesting application of stochastic processes where the parameters are not constants; rather, they are stochastic processes themselves due to their dependence on a randomly changing environment. The model operates in a random environment depicted by a Markov chain so that the probability of success at each trial depends on the state of the environment. We will concentrate, in particular, on applications in reliability theory to motivate our model. The analysis will focus on transient as well as long-term behaviour of various processes involved.
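A direct simulation of such a Markov-modulated Bernoulli process is straightforward; in the sketch below, the environment transition matrix and the per-state success probabilities are assumed, and the long-run success rate is compared with the stationary-average value.

```python
import numpy as np

rng = np.random.default_rng(6)

# Bernoulli trials in a Markovian random environment: the success probability
# at each trial depends on the current environment state.  Transition matrix
# and per-state success probabilities are assumed.
trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
p_success = np.array([0.7, 0.2])          # success probability in each state

def run(n_trials, start=0):
    state, successes = start, 0
    for _ in range(n_trials):
        successes += rng.random() < p_success[state]
        state = rng.choice(2, p=trans[state])
    return successes

n = 1000
runs = np.array([run(n) for _ in range(2000)])
# the long-run success rate approaches sum_i pi_i * p_i, with pi the
# stationary distribution of the environment chain (here pi = (2/3, 1/3))
pi = np.array([2/3, 1/3])
print("empirical rate:", runs.mean() / n, " stationary-average rate:", pi @ p_success)
```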

18.
莫晓云 《经济数学》2010,27(3):28-34
Building on a Markov chain model of customer relationship development, a stochastic process of customer returns to the firm is constructed. It is shown that, under suitable assumptions, the customer return process is a Markov chain, and in fact a time-homogeneous one. The transition probabilities of this chain are derived, and from them several formulas for the expected return a customer brings to the firm are obtained, providing an effective quantitative basis for the firm's choice of customer relationship development strategy.
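A minimal sketch of the kind of expected-return computation described above: with an assumed relationship chain (states, transition matrix, per-state rewards and discount factor), the expected discounted return solves a linear system in the transition probabilities.

```python
import numpy as np

# Expected discounted return per customer when the relationship state evolves
# as a time-homogeneous Markov chain; states, transition matrix, per-state
# rewards and the discount factor are assumed for illustration.
states = ["prospect", "active", "loyal", "churned"]
P = np.array([[0.5, 0.4, 0.0, 0.1],
              [0.0, 0.6, 0.3, 0.1],
              [0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.0, 1.0]])      # churned is absorbing
r = np.array([0.0, 50.0, 120.0, 0.0])     # expected per-period return per state
gamma = 0.9                               # discount factor

# v = r + gamma * P v  =>  v = (I - gamma P)^{-1} r
v = np.linalg.solve(np.eye(4) - gamma * P, r)
for s, val in zip(states, v):
    print(f"expected discounted return starting from {s:8s}: {val:8.2f}")
```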

19.
Focusing on stochastic dynamics that involve continuous states as well as discrete events, this article investigates a stochastic logistic model with regime switching modulated by a singular Markov chain involving a small parameter. This Markov chain undergoes weak and strong interactions, where the small parameter is used to reflect the rapid rate of regime switching among each state class. A two-time-scale formulation is used to reduce the complexity. We obtain weak convergence of the underlying system so that the limit has a much simpler structure. We then use the structure of the limit system as a bridge to investigate stochastic permanence of the original system driven by a singular Markov chain with a large number of states. Sufficient conditions for stochastic permanence are obtained. A couple of examples and numerical simulations are given to illustrate our results.
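An Euler–Maruyama sketch of a regime-switching stochastic logistic equation, dX = X(a(r) - b(r)X)dt + sigma(r)X dW with r(t) a continuous-time Markov chain, is given below; the generator, coefficients, step size and horizon are assumed, and the scheme is only meant to illustrate the switched diffusion, not the two-time-scale analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Euler-Maruyama simulation of a regime-switching stochastic logistic model;
# all parameters below are assumed for illustration.
Qgen = np.array([[-1.0, 1.0],
                 [ 2.0, -2.0]])           # generator of the switching chain
a = np.array([1.5, 0.8])                  # per-regime growth rates
b = np.array([0.5, 0.4])                  # per-regime crowding coefficients
sigma = np.array([0.2, 0.6])              # per-regime noise intensities

dt, T = 1e-3, 50.0
x, r = 0.5, 0
path_min, path_max = x, x
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt))
    x = x + x * (a[r] - b[r] * x) * dt + sigma[r] * x * dW
    x = max(x, 1e-12)                     # keep the numerical scheme positive
    if rng.random() < -Qgen[r, r] * dt:   # regime switch with probability q_r dt
        r = 1 - r
    path_min, path_max = min(path_min, x), max(path_max, x)

print(f"final X(T) = {x:.4f}, range over the path: [{path_min:.4f}, {path_max:.4f}]")
```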

20.
In this paper, the filtering problem is investigated for a class of nonlinear discrete-time stochastic systems with state delays. We aim at designing a full-order filter such that the dynamics of the estimation error is guaranteed to be stochastically exponentially ultimately bounded in the mean square, for all admissible nonlinearities and time delays. First, an algebraic matrix inequality approach is developed to deal with the filter analysis problem, and sufficient conditions are derived for the existence of the desired filters. Then, based on the generalized inverse theory, the filter design problem is tackled and a set of the desired filters is explicitly characterized. A simulation example is provided to demonstrate the usefulness of the proposed design method.
