Similar Articles
1.
To address the limitations of current methods for evaluating air-defense combat effectiveness, queueing theory is applied to analyze the penetration process and to compute the penetration probability for a warship formation's air-defense system, and a mathematical model for computing combat effectiveness is established. On this basis, the effects of missile kill probability and of fire-unit reaction time on the penetration probability are analyzed by simulation, providing important guidance for the optimal deployment of air-defense forces.
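A rough numerical illustration of the kind of trade-off the abstract describes can be sketched by treating the fire channels as an Erlang loss system; the parameters, the fixed guidance time, and the loss-system simplification are all assumptions made here for illustration, not the paper's model.

```python
import numpy as np

# Rough illustration (not the paper's model): approximate the defense as an
# Erlang loss system with c fire units, raid arrival rate lam, and an engagement
# time equal to reaction time plus a fixed guidance time. A missile penetrates
# if every fire unit is busy on arrival, or if its engagement fails.
def erlang_b(c, a):
    """Blocking probability of M/M/c/c with offered load a (standard recursion)."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

def penetration_probability(p_kill, reaction_time, c=4, lam=0.5, guidance_time=6.0):
    a = lam * (reaction_time + guidance_time)   # offered load in Erlangs
    p_block = erlang_b(c, a)                    # no free fire unit on arrival
    return p_block + (1.0 - p_block) * (1.0 - p_kill)

for rt in (2.0, 6.0, 10.0):
    for pk in (0.6, 0.8):
        print(f"reaction={rt:4.1f}s  p_kill={pk:.1f}  "
              f"P(penetrate)={penetration_probability(pk, rt):.3f}")
```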

2.
Stochastic games under imperfect information are typically computationally intractable even in the discrete-time/discrete-state case considered here. We consider a problem where one player has perfect information. A function of a conditional probability distribution is proposed as an information state. In the problem form here, the payoff is only a function of the terminal state of the system, and the initial information state is either linear or a sum of max-plus delta functions. When the initial information state belongs to these classes, its propagation is finite-dimensional. The state feedback value function is also finite-dimensional, and obtained via dynamic programming, but has a nonstandard form due to the necessity of an expanded state variable. Under a saddle point assumption, Certainty Equivalence is obtained and the proposed function is indeed an information state.

3.
A novel optimal preventive maintenance policy for a cold standby system consisting of two components and a repairman is described herein. The repairman is responsible for repairing either failed component and maintaining the working components under certain guidelines. To model the operational process of the system, some reasonable assumptions are made and all times involved in the assumptions are considered to be arbitrary and independent. Under these assumptions, all system states and the transition probabilities between them are analyzed based on semi-Markov theory and a regenerative point technique. Markov renewal equations are constructed with the convolution of the cumulative distribution function of the system time in each state and the corresponding transition probability. By using the Laplace transform to solve these equations, the mean time from the initial state to system failure is derived. The optimal preventive maintenance policy, which provides the optimal preventive maintenance cycle, is identified by maximizing the mean time from the initial state to system failure and is stated in the form of a theorem. Finally, a numerical example and simulation experiments are presented to validate the effectiveness of the policy.
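As a hedged sketch of the quantity being maximized, the exponential special case below computes the mean time to system failure from the transient block of a generator matrix; the paper itself allows arbitrary distributions and works through semi-Markov theory and Laplace transforms, and the rates used here are purely illustrative.

```python
import numpy as np

# Simplified exponential special case (not the paper's general distributions):
# a two-component cold-standby system with failure rate lam, repair rate mu,
# and a perfect switch. Transient states: 0 = one unit operating, one in cold
# standby; 1 = one operating, one under repair. The absorbing state is system
# failure. The mean time to failure solves  -Q_TT m = 1.
lam, mu = 0.01, 0.5                       # assumed illustrative rates
Q_TT = np.array([[-lam,        lam],
                 [  mu, -(lam + mu)]])    # generator restricted to transient states
mttf = np.linalg.solve(-Q_TT, np.ones(2))
print("MTTF from state 0:", mttf[0])      # closed form here: (2*lam + mu) / lam**2
```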

4.
This paper presents an integrated platform for multi-sensor equipment diagnosis and prognosis. This integrated framework is based on the hidden semi-Markov model (HSMM). Unlike a state in a standard hidden Markov model (HMM), a state in an HSMM generates a segment of observations, as opposed to a single observation in the HMM. Therefore, the HSMM structure has a temporal component that the HMM lacks. In this framework, the states of HSMMs are used to represent the health status of a component. The duration of a health state is modeled by an explicit Gaussian probability function. The model parameters (i.e., initial state distribution, state transition probability matrix, observation probability matrix, and health-state duration probability distribution) are estimated through a modified forward–backward training algorithm, and the re-estimation formulae for the model parameters are derived. The trained HSMMs can be used to diagnose the health status of a component. Through parameter estimation of the health-state duration probability distribution and the proposed backward recursive equations, one can predict the remaining useful life of the component. To determine the "value" of each sensor's information, discriminant function analysis is employed to adjust the weight or importance assigned to a sensor. Therefore, sensor fusion becomes possible in this HSMM-based framework.
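A minimal sketch of the remaining-life idea, assuming Gaussian health-state durations as in the abstract: the mean residual duration of the current health state plus the mean durations of the remaining degradation states. The duration parameters below are invented for illustration, and the paper's backward recursions and HSMM training are not reproduced.

```python
from scipy.stats import norm

# Assumed Gaussian duration parameters (mean, std) for ordered health states,
# from "good" to the last state before failure; all values are illustrative.
durations = [(200.0, 30.0), (120.0, 25.0), (60.0, 15.0)]

def remaining_useful_life(current_state, time_in_state):
    mu, sigma = durations[current_state]
    z = (time_in_state - mu) / sigma
    # Mean residual life of a normal duration given it has already lasted d:
    # E[D - d | D > d] = mu - d + sigma * pdf(z) / (1 - cdf(z)).
    residual = mu - time_in_state + sigma * norm.pdf(z) / norm.sf(z)
    future = sum(m for m, _ in durations[current_state + 1:])
    return residual + future

print(remaining_useful_life(current_state=1, time_in_state=100.0))
```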

5.
We consider convergence of Markov chains with uncertain parameters, known as imprecise Markov chains, which contain an absorbing state. We prove that under conditioning on non-absorption the imprecise conditional probabilities converge independently of the initial imprecise probability distribution if some regularity conditions are assumed. This is a generalisation of a known result from the classical theory of Markov chains by Darroch and Seneta [6].
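For intuition, the classical (precise) limit behind the cited Darroch and Seneta result can be sketched by iterating the sub-stochastic matrix over the non-absorbing states and renormalizing; the matrix below is illustrative, and the paper's imprecise-probability machinery is not reproduced.

```python
import numpy as np

# Classical (precise) illustration: condition a chain with an absorbing state
# on non-absorption and iterate. S is the transition matrix restricted to the
# non-absorbing states (rows sum to < 1 because of the leak into absorption);
# the entries are assumed for illustration.
S = np.array([[0.50, 0.30],
              [0.20, 0.60]])

def conditional_distribution(p0, n):
    p = p0.astype(float)
    for _ in range(n):
        p = p @ S
        p /= p.sum()          # renormalize: condition on not being absorbed yet
    return p

print(conditional_distribution(np.array([1.0, 0.0]), 200))
print(conditional_distribution(np.array([0.0, 1.0]), 200))   # same limit
```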

6.
The exact probability distribution functions (pdf's) of the sooner and later waiting time random variables (rv's) for the succession quota problem are derived presently in the case of Markov dependent trials. This is done by means of combinatorial arguments. The probability generating functions (pgf's) of these rv's are then obtained by means of enumerating generating functions (enumerators). Obvious modifications of the proofs provide analogous results for the occurrence of frequency quotas, and such a result is established regarding the pdf of a frequency and succession quotas rv. Longest success and failure runs are also considered and their joint cumulative distribution function (cdf) is obtained.
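A small sketch of one special case (the waiting time for a run of k consecutive successes in Markov-dependent trials), computed by chain embedding rather than the paper's combinatorial arguments; the transition probabilities below are assumed for illustration.

```python
import numpy as np

# Waiting-time distribution for a run of k consecutive successes in
# Markov-dependent binary trials (a special case of the succession quota),
# computed by chain embedding. p1 = P(success at trial 1); p_fs, p_ss are
# P(success | previous failure) and P(success | previous success).
k, p1, p_fs, p_ss = 3, 0.5, 0.4, 0.7

# States 0..k-1 = current success-run length (run 0 means the last trial failed);
# state k (quota reached) is absorbing.
P = np.zeros((k + 1, k + 1))
for j in range(k):
    ps = p_fs if j == 0 else p_ss
    P[j, j + 1] = ps
    P[j, 0] = 1.0 - ps
P[k, k] = 1.0

# Distribution over run lengths after the first trial, then iterate; since
# state k is absorbing, P(T = n) = CDF(n) - CDF(n - 1).
p = np.zeros(k + 1)
p[1], p[0] = p1, 1.0 - p1
pmf = [p[k]]
for _ in range(2, 41):
    p = p @ P
    pmf.append(p[k] - sum(pmf))
print([round(x, 4) for x in pmf[:10]])
```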

7.
Motivated by queueing systems playing a key role in the performance evaluation of telecommunication networks, we analyze in this paper the stationary behavior of a fluid queue, when the instantaneous input rate is driven by a continuous-time Markov chain with finite or infinite state space. In the case of an infinite state space and for particular classes of Markov chains with a countable state space, such as quasi birth and death processes or Markov chains of the G/M/1 type, we develop an algorithm to compute the stationary probability distribution function of the buffer level in the fluid queue. This algorithm relies on simple recurrence relations satisfied by key characteristics of an auxiliary queueing system with normalized input rates.

8.
The evolution of a system with phase transition is simulated by a Markov process whose transition probabilities depend on a parameter. The change of the stationary distribution of the Markov process with a change of this parameter is interpreted as a phase transition of the system from one thermodynamic equilibrium state to another. Calculations and computer experiments are performed for condensation of a vapor. The sample paths of the corresponding Markov process have parts where the radius of condensed drops is approximately constant. These parts are interpreted as metastable states. Two metastable states occur, initial (gaseous steam) and intermediate (fog). The probability distributions of the drop radii in the metastable states are estimated. Translated from Teoreticheskaya i Matematicheskaya Fizika, Vol. 123, No. 1, pp. 94–106, April, 2000.

9.
We consider a system in which customers join upon arrival the shortest of two single-server queues. The interarrival times between customers are Erlang distributed and the service times of both servers are exponentially distributed. Under these assumptions, this system gives rise to a Markov chain on a multi-layered quarter plane. For this Markov chain we derive the equilibrium distribution using the compensation approach. The expression for the equilibrium distribution matches and refines tail asymptotics obtained earlier in the literature.
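A hedged simulation sketch of the model (Erlang interarrival times, two exponential servers, join-the-shortest-queue) that estimates the joint equilibrium distribution empirically; it does not implement the compensation approach, and all parameters are illustrative.

```python
import numpy as np

# Simulation of the model: Erlang-k interarrival times, two exponential servers,
# arrivals join the shortest queue (ties go to queue 0). This only estimates the
# joint equilibrium distribution empirically; parameters are illustrative.
rng = np.random.default_rng(1)
k, lam_stage, mu = 2, 3.0, 1.0      # Erlang-k stages with rate lam_stage; service rate mu
T = 200_000.0                       # simulation horizon

t, q = 0.0, [0, 0]                  # time and queue lengths (customer in service included)
next_arrival = rng.gamma(k, 1.0 / lam_stage)
occupancy = {}                      # time-weighted occupancy of joint states (n1, n2)

while t < T:
    # Exponential services are memoryless, so residuals can be redrawn at each event.
    next_dep = [t + rng.exponential(1.0 / mu) if q[i] > 0 else np.inf for i in range(2)]
    t_next = min(next_arrival, next_dep[0], next_dep[1])
    occupancy[tuple(q)] = occupancy.get(tuple(q), 0.0) + (t_next - t)
    t = t_next
    if t == next_arrival:
        q[0 if q[0] <= q[1] else 1] += 1                  # join the shortest queue
        next_arrival = t + rng.gamma(k, 1.0 / lam_stage)
    else:
        q[0 if next_dep[0] <= next_dep[1] else 1] -= 1    # departure from the finishing server

total = sum(occupancy.values())
for state, w in sorted(occupancy.items(), key=lambda kv: -kv[1])[:5]:
    print(state, round(w / total, 4))
```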

10.

The problem of mean square exponential stability for a class of discrete-time linear stochastic systems subject to independent random perturbations and Markovian switching is investigated. The case of linear systems whose coefficients depend on both the present state and the previous state of the Markov chain is considered. Three different definitions of the concept of exponential stability in mean square are introduced, and it is shown that they are not always equivalent. One definition of mean square exponential stability is given in terms of the exponential stability of the evolution defined by a sequence of linear positive operators on an ordered Hilbert space. The other two definitions are given in terms of different types of exponential behavior of the trajectories of the considered system. In our approach the Markov chain is not fixed in advance. The only available information about the Markov chain is the sequence of transition probability matrices and the set of its states. In this way one obtains that, if the system is affected by Markovian jumps, the property of exponential stability is independent of the initial distribution of the Markov chain.

The definition expressed in terms of exponential stability of the evolution generated by a sequence of linear positive operators allows us to characterize mean square exponential stability based on the existence of some quadratic Lyapunov functions.

The results developed in this article may be used to derive some procedures for designing stabilizing controllers for the considered class of discrete-time linear stochastic systems in the presence of a delay in the transmission of the data.
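As a hedged illustration of a mean square exponential stability check, the sketch below treats the standard time-homogeneous case in which the coefficients depend only on the current mode and the transition matrix is constant, which is simpler than the setting of the article; stability is decided by the spectral radius of the second-moment operator, and all matrices are invented for illustration.

```python
import numpy as np

# Minimal stability check for a time-homogeneous discrete-time Markov jump
# linear system x_{k+1} = A[theta_k] x_k (simpler than the paper's setting).
# All numerical values are illustrative.
A = [np.array([[0.5, 0.2], [0.0, 0.7]]),
     np.array([[0.9, 0.1], [0.1, 0.8]])]
P = np.array([[0.7, 0.3],            # P[i, j] = prob of jumping from mode i to mode j
              [0.4, 0.6]])

N, n = len(A), A[0].shape[0]
# Second-moment recursion: Q_j(k+1) = sum_i P[i, j] * A_i Q_i(k) A_i^T.
# Stack vec(Q_1), ..., vec(Q_N); mean-square stability <=> spectral radius < 1.
L = np.zeros((N * n * n, N * n * n))
for i in range(N):
    for j in range(N):
        L[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = P[i, j] * np.kron(A[i], A[i])

rho = max(abs(np.linalg.eigvals(L)))
print("spectral radius of the second-moment operator:", rho)
print("mean-square exponentially stable" if rho < 1 else "not mean-square stable")
```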

11.
Fuzzy-process analysis of military conflict situations under incomplete information
Taking military conflict decision-making as the research background, this paper presents a prediction model for state transitions when, under incomplete information, the process by which a player's conflict situation changes is a Markov process on a fuzzy set, and further discusses a prediction model for the state transitions of a fuzzy Markov chain when the memoryless property of the state transitions is itself a fuzzy concept.

12.
A polling system with switchover times and state-dependent server routing is studied. Input flows are modulated by a random external environment. Input flows are ordinary Poisson flows in each state of the environment, with intensities determined by the environment state. Service and switchover durations are exponentially distributed. A continuous-time Markov chain is introduced to describe the dynamics of the server, the sizes of the queues and the states of the environment. By means of the iterative-dominating method a sufficient condition for ergodicity of the system is obtained for the continuous-time Markov chain. This condition also ensures the existence of a stationary probability distribution of the embedded Markov chain at instants of jumps. The customers' sojourn cost during the period of unloading the stable queueing system is chosen as a performance metric. A numerical study in the case of two input flows and a class of priority and threshold routing algorithms is conducted. It is demonstrated that in the case of light inputs a priority routing rule does not appear to be quasi-optimal.

13.
A stochastic chemical system with multiple types of molecules interacting through reaction channels can be modeled as a continuous-time Markov chain with a countably infinite multidimensional state space. Starting from an initial probability distribution, the time evolution of the probability distribution associated with this continuous-time Markov chain is described by a system of ordinary differential equations, known as the chemical master equation (CME). This paper shows how one can solve the CME using backward differentiation. In doing this, a novel approach to truncate the state space at each time step using a prediction vector is proposed. The infinitesimal generator matrix associated with the truncated state space is represented compactly, and exactly, using a sum of Kronecker products of matrices associated with molecules. This exact representation is already compact and does not require a low-rank approximation in the hierarchical Tucker decomposition (HTD) format. During transient analysis, compact solution vectors in HTD format are employed with the exact, compact, and truncated generator matrices in Kronecker form, and the linear systems are solved with the Jacobi method using fixed or adaptive rank control strategies on the compact vectors. Results of simulation on benchmark models are compared with those of the proposed solver and another version, which works with compact vectors and highly accurate low-rank approximations of the truncated generator matrices in quantized tensor train format and solves the linear systems with the density matrix renormalization group method. Results indicate that there is a reason to solve the CME numerically, and adaptive rank control strategies on compact vectors in HTD format improve time and memory requirements significantly.
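A minimal sketch of the core numerical step: first-order backward differentiation (backward Euler) applied to a CME on a fixed truncated state space for a single birth-death species. The paper's adaptive truncation, Kronecker representation, and HTD-compressed vectors are not reproduced here, and the rates are illustrative.

```python
import numpy as np

# Backward Euler for the CME of one birth-death species (production rate k,
# degradation rate gamma) on a fixed truncated state space. Values illustrative.
k, gamma = 10.0, 1.0
N = 60                                  # truncation level (assumed large enough)
Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = k                 # birth: i -> i + 1
    if i > 0:
        Q[i, i - 1] = gamma * i         # death: i -> i - 1
    Q[i, i] = -Q[i].sum()

A = Q.T                                 # dp/dt = Q^T p
p = np.zeros(N + 1); p[0] = 1.0         # start with zero molecules
h, T = 0.05, 5.0
M = np.eye(N + 1) - h * A
for _ in range(int(T / h)):
    p = np.linalg.solve(M, p)           # implicit (backward Euler) step

print("mean copy number at t=5:", p @ np.arange(N + 1))   # approaches k/gamma = 10
```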

14.
A stationary policy in an MDP (Markov decision process) induces a stationary probability distribution of the reward from each initial state. The problem analyzed here is maximization of the mean/standard deviation ratio of the stationary distribution. In the unichain case, a solution is obtained via parametric analysis of a linear program having the same number of variables and one more constraint than the formulation for gain-rate optimization. The same linear program suffices in the multichain case if the initial state is an element of choice. The easier problem of maximizing the mean/variance ratio is mentioned at the end of the paper.
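A small sketch of the objective itself: for one fixed stationary policy, so that the chain and rewards are given, the mean/standard deviation ratio of the stationary reward distribution can be computed directly. The paper's parametric linear program over policies is not reproduced, and the numbers below are illustrative.

```python
import numpy as np

# Mean/standard deviation ratio of the stationary reward distribution induced
# by one fixed stationary policy; transition matrix and rewards are illustrative.
P = np.array([[0.6, 0.4, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
r = np.array([1.0, 4.0, 2.0])

# stationary distribution: pi P = pi together with sum(pi) = 1
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]

mean = pi @ r
var = pi @ (r - mean) ** 2
print("mean/std ratio:", mean / np.sqrt(var))
```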

15.
Kim, Jisoo; Jun, Chi-Hyuck. Queueing Systems, 2002, 42(3): 221–237
We consider a discrete-time queueing system with a single deterministic server, heterogeneous Markovian arrivals and finite capacity. Most existing techniques model the queueing system using a direct bivariate Markov chain which requires a state space that grows rapidly as the number of customer types increases. In this paper, we define renewal cycles in terms of the input process and model the system occupancy level on each renewal cycle using a one-dimensional Markov chain. We derive the exact joint steady-state probability distribution of both states of input and system occupancy with a considerably reduced state space, which leads to the efficient calculation of overall/individual performance measures such as loss probability and average delay.

16.
We present and study an approximation scheme for the mean of a stochastic simulation that models a population subject to nonlinear birth and exogenous disturbances. We use the information from the probability distribution for the disturbance times to construct a method that improves upon the mean-field approximation. We show through two example systems the effectiveness of the Markov embedding approximation and discuss the contexts in which it is an appropriate method.

17.
Because the components of a standby system have different failure probabilities during storage, the individual states also affect system performance differently once the component states approach steady state. To identify the key components and the degree to which their states affect system performance, this paper takes the importance measure as the main index and applies Markov processes to study the performance-change pattern of standby systems in the steady state. First, the variation of system performance is studied on the basis of the integrated importance measure, and a method for computing the steady-state values of the Markov process is derived from the state-transition matrices of cold-standby and warm-standby systems. Second, the steady-state performance-change pattern of the system is obtained from the steady-state integrated importance measure. Finally, taking a dual-arm robot as an example, the ways in which components in different states affect system performance are analyzed, the changes in the integrated importance measures of different components are compared, and the effectiveness of the proposed method is verified.
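As a hedged sketch of one step named in the abstract, computing steady-state values from a transition (rate) matrix, the code below solves the stationary equations for a simple two-unit cold-standby model with repair; the rates are illustrative and the integrated importance measure itself is not computed.

```python
import numpy as np

# Steady-state probabilities of a simple two-unit cold-standby system with
# failure rate lam and repair rate mu (illustrative values only).
lam, mu = 0.02, 0.4
Q = np.array([[-lam,          lam,  0.0],   # 0: one operating, one in cold standby
              [  mu, -(lam + mu),  lam],    # 1: one operating, one under repair
              [ 0.0,           mu,  -mu]])  # 2: both units failed

n = Q.shape[0]
A = np.vstack([Q.T, np.ones(n)])            # pi Q = 0 together with sum(pi) = 1
b = np.append(np.zeros(n), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print("steady state:", pi, " availability:", pi[0] + pi[1])
```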

18.
Recursive equations are derived for the conditional distribution of the state of a Markov chain, given observations of a function of the state. Mainly continuous time chains are considered. The equations for the conditional distribution are given in matrix form and in differential equation form. The conditional distribution itself forms a Markov process. Special cases considered are doubly stochastic Poisson processes with a Markovian intensity, Markov chains with a random time, and Markovian approximations of semi-Markov processes. Further, the results are used to compute the Radon-Nikodym derivative for two probability measures for a Markov chain, when a function of the state is observed.
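A discrete-time analogue of these recursions can be sketched as follows, with the chain observed through a deterministic function of the state: predict with the transition matrix, then restrict to the states consistent with the observation and renormalize. The matrix, the function, and the observation sequence below are assumptions for illustration; the paper's continuous-time matrix and differential forms are not reproduced.

```python
import numpy as np

# Discrete-time filter for a Markov chain X observed through y = g(X):
# prediction with P, then restriction to the states consistent with y.
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
g = np.array([0, 0, 1])        # observed function of the state (states 0,1 -> 0; state 2 -> 1)

def filter_states(p0, observations):
    p = p0.astype(float)
    for y in observations:
        p = p @ P                      # prediction step
        p = np.where(g == y, p, 0.0)   # keep only states consistent with y
        p /= p.sum()                   # conditional distribution given y_1..y_k
    return p

print(filter_states(np.array([1/3, 1/3, 1/3]), [0, 0, 1, 0]))
```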

19.
Reliability analysis of a discrete-time two-unit parallel repairable system with a single repairman vacation
A discrete-time repairable system of two identical units in parallel with a single repairman vacation is studied by the method of discrete vector Markov processes. Under the assumptions that unit lifetimes follow a geometric distribution and that repair times and the repairman's vacation times follow general discrete probability distributions, the tail probabilities of the repair and vacation times are introduced, and reliability indices such as the steady-state availability, the steady-state failure frequency, the probability of waiting for repair, the probability that the repairman is idle, the probability that the repairman is on vacation, and the mean time to first failure are obtained. A concrete numerical example illustrates how the state-transition frequencies of the discrete vector Markov chain are computed.

20.
This paper gives an asymptotic estimate for the distribution of the minimum of the partial-sum sequence of a finite-state stationary ergodic Markov chain, and uses it to estimate the asymptotic behavior of the extinction probability of a class of Athreya–Karlin branching processes in random environments (BPRE).
