Similar Documents
20 similar documents found (search time: 15 ms)
1.
We consider an assembly system with exponential service times and derive bounds for its average throughput and inventories. We also present an easily computed approximation for the throughput and compare it with an existing approximation.

2.
We propose new, easily computable bounds for quantities that solve Markov renewal equations linked to a continuous-time semi-Markov process (SMP). The idea is to construct two new discrete-time SMPs that bound the initial SMP in a suitable sense. The solution of a Markov renewal equation linked to the initial SMP is then shown to be bounded by the solutions of Markov renewal equations linked to the two discrete-time SMPs. The bounds are also proved to converge. To illustrate the results, numerical bounds are provided for two quantities from the reliability field: mean sojourn times and transition probabilities.

3.
We consider Markov processes built by pasting together pieces of strong Markov processes which are killed at a position-dependent rate and connected via a transition kernel. We give necessary and sufficient conditions for local absolute continuity of probability laws for such processes on a suitable path space and derive an explicit formula for the corresponding likelihood ratio process. The main tool is the consideration of the process between successive jumps (what we call 'elementary experiments') and criteria for absolute continuity of laws of the process there. We apply our results to systems of branching diffusions with interactions and immigration. This revised version was published online in June 2006 with corrections to the Cover Date.

4.
We present a modelling method for the analysis of production lines with generally distributed processing times and finite buffers. We consider the complete modelling process, from data collection to performance evaluation. First, the data on the processing times are assumed to be collected in the form of histograms. Second, tractable discrete phase-type distributions are built from these histograms. Third, the evolution of the production line is described by a Markov chain, using a state model.

5.
We here propose new algorithms to compute bounds for (1) cumulative distribution functions of sums of i.i.d. nonnegative random variables, (2) renewal functions, and (3) cumulative distribution functions of geometric sums of i.i.d. nonnegative random variables. The idea is very basic and consists in bounding a general nonnegative random variable X by two discrete random variables with range in hℕ, both of which converge to X as h goes to 0. Numerical experiments are carried out, and the results given by the different algorithms are compared with theoretical results in the case of i.i.d. exponentially distributed random variables, and with other numerical methods in other cases.
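The bounding idea described here can be sketched concretely. The following is a minimal illustration of our own (not the authors' algorithms, and all names and parameters are ours): an Exp(1) variable X is sandwiched between h⌊X/h⌋ and h⌈X/h⌉, and convolving the resulting lattice distributions brackets the CDF of a sum of n i.i.d. copies.

```python
import math

def exp_grid_pmfs(h, K, lam=1.0):
    """PMFs (on the grid {0, h, 2h, ...}) of h*floor(X/h) and h*ceil(X/h)
    for X ~ Exp(lam), truncated after K atoms."""
    lower = [math.exp(-lam * k * h) * (1 - math.exp(-lam * h)) for k in range(K)]
    upper = [0.0] + lower[:-1]  # the ceiling shifts the same mass one step up
    return lower, upper

def convolve(p, q):
    """Convolution of two PMFs given as lists of atom weights on the grid."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def sum_cdf_bounds(n, t, h, K):
    """Bracket P(X_1 + ... + X_n <= t) for i.i.d. Exp(1) variables:
    since h*floor(X/h) <= X <= h*ceil(X/h), we get
    P(S+ <= t) <= P(S <= t) <= P(S- <= t)."""
    lo_pmf, up_pmf = exp_grid_pmfs(h, K)
    s_lo, s_up = lo_pmf, up_pmf
    for _ in range(n - 1):
        s_lo, s_up = convolve(s_lo, lo_pmf), convolve(s_up, up_pmf)
    m = int(t / h + 1e-9)  # number of grid atoms at or below t
    return sum(s_up[:m + 1]), sum(s_lo[:m + 1])
```

For n = 3 and t = 2 the exact value is the Gamma(3, 1) CDF, 1 − 5e⁻² ≈ 0.323, and the bracket tightens as h → 0.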

6.
Let M_n = X_1 + ⋯ + X_n be a sum of independent random variables such that X_k ≤ 1, EX_k = 0 and EX_k² = σ_k² for all k. Hoeffding [15, Theorem 3] proved that

P{M_n ≥ nx} ≤ Hⁿ(x, σ) for 0 ≤ x < 1,

with

H(x, σ) = (σ²/(σ² + x))^((σ²+x)/(1+σ²)) (1/(1 − x))^((1−x)/(1+σ²)),  σ² = (σ_1² + ⋯ + σ_n²)/n.

Bentkus [5] improved Hoeffding's inequalities using binomial tails as upper bounds. Let γ_k and ϰ_k stand for the skewness and kurtosis of X_k. In this paper we prove (improved) counterparts of the Hoeffding inequality, replacing σ² by certain functions of γ_1, ..., γ_n (respectively ϰ_1, ..., ϰ_n). Our bounds extend to a general setting where the X_k are martingale differences, and they can combine knowledge of the skewness and/or kurtosis and/or variances of the X_k. Up to factors bounded by e²/2 the bounds are final. All our results are new, since no inequalities incorporating skewness or kurtosis control were known before. The research was partially supported by the Lithuanian State Science and Studies Foundation, grant No. T-15/07.
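As a concrete illustration of the inequality being improved, the sketch below evaluates Hoeffding's bound Hⁿ(x, σ) and compares it with the exact tail of a sum of symmetric ±1 variables (a case with σ² = 1). This is our own illustration; the function names are assumptions, not from the paper.

```python
import math

def hoeffding_bound(n, x, sigma2):
    """Hoeffding's Theorem-3 bound on P(M_n >= n*x) for independent X_k <= 1
    with EX_k = 0 and average variance sigma2 (requires 0 <= x < 1)."""
    s = sigma2
    H = (s / (s + x)) ** ((s + x) / (1 + s)) \
        * (1 / (1 - x)) ** ((1 - x) / (1 + s))
    return H ** n

def rademacher_tail(n, x):
    """Exact P(M_n >= n*x) for i.i.d. symmetric +/-1 variables:
    M_n = 2k - n where k ~ Binomial(n, 1/2)."""
    k_min = math.ceil((n + n * x) / 2)
    return sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2 ** n
```

For n = 100 and x = 0.2 the exact Rademacher tail is about 0.028 while the Hoeffding bound is about 0.134, showing the gap that binomial-tail and skewness/kurtosis refinements close.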

7.
We consider Markov control processes with Borel state space and Feller transition probabilities, satisfying some generalized geometric ergodicity conditions. We provide a new theorem on the existence of a solution to the average cost optimality equation.

8.
We extend the numerical methods of Kushner and Dupuis [Numerical Methods for Stochastic Control Problems in Continuous Time, 2nd ed., Springer-Verlag, Berlin and New York, 2001], known as the Markov chain approximation methods, to controlled general nonlinear delayed reflected diffusion models. Both the path and the control can be delayed. For the no-delay case, the method covers virtually all models of current interest. The method is robust, the approximations have physical interpretations as control problems closely related to the original one, and there are many effective methods for obtaining the approximations and for solving the Bellman equation for low-dimensional problems. These advantages carry over to the delay problem. It is shown how to adapt the methods for obtaining the approximations, and the convergence proofs are outlined for the discounted cost function. Extensions to all of the cost functions of current interest, as well as to models with Poisson jump terms, are possible. The paper is particularly concerned with representations of the state and with algorithms that minimize the memory requirements.

9.
We consider a single-server loss system in which arrivals occur according to a doubly stochastic Poisson process with a stationary ergodic intensity function λ_t. The service times are independent, exponentially distributed random variables with mean μ⁻¹, and are independent of the arrivals. We obtain monotonicity results for loss probabilities under time scaling as well as under amplitude scaling of λ_t. Moreover, using these results, we obtain both lower and upper bounds for the loss probability.
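The constant-intensity special case of such a loss system is easy to simulate. The sketch below is our own illustration with λ_t ≡ λ deterministic (rather than the paper's random intensity); for λ = μ the loss probability equals the Erlang-B value a/(1 + a) = 1/2.

```python
import random

def mm1_loss(lam, mu, n_arrivals, seed=0):
    """Estimate the loss probability of an M/M/1/1 loss system by simulation:
    an arrival that finds the single server busy is lost."""
    rng = random.Random(seed)
    t, busy_until, lost = 0.0, 0.0, 0
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)            # next arrival epoch
        if t < busy_until:
            lost += 1                        # server occupied: customer lost
        else:
            busy_until = t + rng.expovariate(mu)  # start a new service
    return lost / n_arrivals
```

By PASTA, the estimate converges to the stationary busy probability ρ/(1 + ρ).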

10.
In 2004, Tong found bounds for the approximation quality of a regular continued fraction convergent to a rational number, expressed as bounds involving both the previous and the next approximation. The authors sharpen his results with a geometric method and give sharp upper and lower bounds. The asymptotic frequencies with which these bounds are attained are also calculated.
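For reference, the classical two-sided bounds that results of this type refine can be checked directly: 1/(q_k(q_k + q_{k+1})) ≤ |x − p_k/q_k| ≤ 1/(q_k q_{k+1}). A minimal sketch of our own (the sharper bounds of the paper are not reproduced here):

```python
from fractions import Fraction

def convergents(num, den):
    """Convergents p_k/q_k of the regular continued fraction of num/den,
    via the standard recurrence p_k = a_k p_{k-1} + p_{k-2}."""
    p0, q0, p1, q1 = 0, 1, 1, 0   # (p_{-2}, q_{-2}) and (p_{-1}, q_{-1})
    out = []
    while den:
        a, r = divmod(num, den)
        num, den = den, r
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append((p1, q1))
    return out
```

For 355/113 = [3; 7, 16] the convergents are 3/1, 22/7 and 355/113, and each satisfies the classical bracket above (with equality on the right at the last step, since the number is rational).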

11.
In this work the problem of obtaining an optimal maintenance policy for a single-machine, single-product workstation that deteriorates over time is addressed using Markov decision process (MDP) models. Two models are proposed. The decision criterion for the first model is based on the cost of performing maintenance, the cost of repairing a failed machine, and the cost of holding inventory while the machine is not available for production. In the second model the cost of holding inventory is replaced by the cost of not satisfying the demand. The processing times of jobs, the inter-arrival times of jobs or units of demand, and the failure times are assumed to be random. The results show that, in order to make better maintenance decisions, the interaction between the inventory (whether in process or final) and the number of shifts the machine has been working without restoration has to be taken into account. If this interaction is considered, the long-run operational costs are reduced significantly. Moreover, structural properties of the optimal policies of the models are obtained after imposing conditions on the parameters of the models and on the distribution of the lifetime of a recently restored machine.

12.
In many applications of Markov chains, and especially in Markov chain Monte Carlo algorithms, the rate of convergence of the chain is of critical importance. Most techniques to establish such rates require bounds on the distribution of the random regeneration time T that can be constructed, via splitting techniques, at times of return to a "small set" C satisfying a minorisation condition P(x,·) ≥ εν(·), x ∈ C. Typically, however, it is much easier to get bounds on the time τ_C of return to the small set itself, usually based on a geometric drift condition PV ≤ λV + b1_C, where V ≥ 1, λ < 1 and b < ∞. We develop a new relationship between T and τ_C, and this gives a bound on the tail of T, based on V, λ and b, which is a strict improvement on existing results. When evaluating rates of convergence we see that our bound usually gives considerable numerical improvement on previous expressions.
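To make the drift condition concrete, here is a small sketch of our own (not an example from the paper): for the reflected random walk that steps down with probability p and up with probability 1 − p, the function V(x) = z^x with z = √(p/(1−p)) satisfies PV ≤ λV + b·1_C with C = {0} and λ = 2√(p(1−p)) < 1.

```python
import math

def check_drift(p=0.7, xmax=50):
    """Verify the geometric drift condition PV <= lam*V + b*1_C, C = {0},
    for the walk X' = max(X-1, 0) w.p. p and X' = X+1 w.p. 1-p,
    using V(x) = z^x with z chosen to minimise p/z + (1-p)*z."""
    q = 1 - p
    z = math.sqrt(p / q)
    lam = p / z + q * z                      # drift rate away from C
    V = lambda x: z ** x
    PV = lambda x: p * V(max(x - 1, 0)) + q * V(x + 1)
    b = PV(0) - lam * V(0)                   # excess at the small set
    ok = all(PV(x) <= lam * V(x) + (b if x == 0 else 0.0) + 1e-9 * V(x)
             for x in range(xmax))
    return lam, b, ok
```

Off the small set the condition holds with equality here, since PV(x) = z^x (p/z + (1−p)z) = λV(x) for x ≥ 1; the tolerance only absorbs floating-point rounding.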

13.
We consider a discrete-time Markov decision process with a partially ordered state space and two feasible control actions in each state. Our goal is to find general conditions, satisfied in a broad class of applications to control of queues, under which an optimal control policy is monotonic. An advantage of our approach is that it extends easily to problems with both information and action delays, which are common in applications to high-speed communication networks, among others. The transition probabilities are assumed stochastically monotone and the one-stage reward submodular. We further assume that transitions from different states are coupled, in the sense that the state after a transition is distributed as a deterministic function of the current state and two random variables, one of which is controllable and the other uncontrollable. In addition, we make a monotonicity assumption about the sample-path effect of a pairwise switch of the actions in consecutive stages. Using induction on the horizon length, we demonstrate that optimal policies for the finite- and infinite-horizon discounted problems are monotonic. We apply these results to a single queueing facility with control of arrivals and/or services, under very general conditions. In this case, our results imply that an optimal control policy has threshold form. Finally, we show how monotonicity of an optimal policy extends in a natural way to problems with information and/or action delay, including delays of more than one time unit. Specifically, we show that, if a problem without delay satisfies our sufficient conditions for monotonicity of an optimal policy, then the same problem with information and/or action delay also has monotonic (e.g., threshold) optimal policies.
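The threshold form mentioned for queueing control can be illustrated with a toy value iteration. This is our own sketch with made-up parameters, not the paper's model: admission control of a uniformized M/M/1 queue, where accepting an arrival earns a reward R and each waiting job incurs a holding cost c per period; the computed accept/reject rule comes out monotone (accept below a threshold).

```python
def admission_policy(N=20, R=5.0, c=1.0, lam=0.4, mu=0.5, beta=0.95, iters=500):
    """Value iteration for admission control of a uniformized M/M/1 queue.
    State x = queue length; on an arrival (prob lam) we may accept
    (reward R, x -> x+1) or reject; a departure (prob mu) moves x -> x-1;
    holding cost c*x is paid each period. Returns the accept decision per state."""
    V = [0.0] * (N + 1)
    for _ in range(iters):
        W = [0.0] * (N + 1)
        for x in range(N + 1):
            arrive = max(R + V[x + 1], V[x]) if x < N else V[x]  # forced reject at N
            W[x] = -c * x + beta * (lam * arrive
                                    + mu * V[max(x - 1, 0)]
                                    + (1 - lam - mu) * V[x])
        V = W
    return [R + V[x + 1] > V[x] for x in range(N)]  # accept in state x?
```

Acceptance is optimal exactly when R exceeds the marginal cost V(x) − V(x+1) of one more job; since this difference grows with x (V is concave here), the policy is of threshold type.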

14.
15.
16.
Let {Y_n : n ≥ 0} be a sequence of independent and identically distributed random variables with a continuous distribution function, and let {N(t) : t ≥ 0} be a point process. In this paper, making use of strong invariance principles, we establish limit laws for the paced record process {X(t) : t ≥ 0} based on {Y_n : n ≥ 0} and {N(t) : t ≥ 0}. As applications of our main results, we consider the case of the classical and paced record models. We conclude with extensions of our theorems to non-homogeneous record processes.
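In the classical (unpaced) model, records of an i.i.d. sequence are easy to generate, and the expected number of records among n observations is the harmonic number H_n. A minimal sketch of our own:

```python
import random

def record_times(xs):
    """Indices at which the sequence sets a new upper record
    (the first observation is a record by convention)."""
    recs, best = [], float("-inf")
    for i, x in enumerate(xs):
        if x > best:
            recs.append(i)
            best = x
    return recs
```

Averaged over many independent runs, the number of records among 1000 uniforms is close to H_1000 ≈ 7.49, reflecting that the n-th observation is a record with probability 1/n.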

17.
We focus on stochastic systems arising in mean-field models; the systems under consideration belong to the class of switching diffusions, in which continuous dynamics and discrete events coexist and interact. The discrete events are modeled by a continuous-time Markov chain. Different from the usual switching diffusions, the systems include mean-field interactions. Our effort is devoted to obtaining laws of large numbers for the underlying systems. One of the distinct features of the paper is that the limit of the empirical measures is not deterministic but a random measure depending on the history of the Markovian switching process. A main difficulty is that the standard martingale approach cannot be used to characterize the limit because of the coupling due to the random switching process. In this paper, in contrast to the classical approach, the limit is characterized as the conditional distribution (given the history of the switching process) of the solution to a stochastic McKean-Vlasov differential equation with Markovian switching.

18.
A Moderate Deviation Principle is established for random processes arising as small random perturbations of one-dimensional dynamical systems of the form X_n = f(X_{n−1}). Unlike in large deviations theory, the resulting rate function is independent of the underlying noise distribution and is always quadratic. This allows one to obtain explicit formulae for the asymptotics of the probability that the process stays in a small tube around the deterministic system. Using these, explicit formulae for the asymptotics of exit times are obtained. The results are specified for the case when the dynamical system is periodic, and imply stability of such systems. Finally, the results are applied to the model of density-dependent branching processes.
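The "small tube" probabilities in question can be explored by direct simulation. The following sketch is our own toy example with f(x) = x/2 and Gaussian noise, not the paper's setting; it estimates the probability that the perturbed orbit stays in tubes of two radii, which is monotone in the radius by construction (both radii are checked on the same simulated paths).

```python
import random

def stay_prob(radii, eps=0.05, n_steps=50, n_paths=2000, seed=7):
    """Fraction of noisy orbits X_n = 0.5*X_{n-1} + eps*xi_n (X_0 = 0,
    xi_n standard normal) whose running maximum |X_n| stays within
    each tube radius over n_steps steps."""
    rng = random.Random(seed)
    stayed = {r: 0 for r in radii}
    for _ in range(n_paths):
        x, sup = 0.0, 0.0
        for _ in range(n_steps):
            x = 0.5 * x + eps * rng.gauss(0.0, 1.0)
            sup = max(sup, abs(x))
        for r in radii:
            stayed[r] += (sup <= r)
    return {r: stayed[r] / n_paths for r in radii}
```

With these parameters the stationary spread of the orbit is about eps/√(1 − 0.25) ≈ 0.058, so a tube of radius 0.3 is rarely left while a tube of radius 0.1 is exited frequently.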

19.
We present a method of determining upper and lower bounds for the length of a Steiner minimal tree in 3-space whose topology is a given full Steiner topology or a degenerate form of that full Steiner topology. The bounds are tight, in the sense that they are exactly attained for some configurations. This represents the first nontrivial lower bound to appear in the literature. The bounds are developed by first studying properties of Simpson lines in both two- and three-dimensional space, and then introducing a class of easily constructed trees, called midpoint trees, which provide the upper and lower bounds. These bounds can be constructed in quadratic time. Finally, we discuss strategies for improving the lower bound. Supported by a grant from the Australian Research Council.

20.
We consider triangular arrays of Markov chains that converge weakly to a diffusion process. Local limit theorems for transition densities are proved. Received: 28 August 1998 / Revised version: 6 September 1999 / Published online: 14 June 2000
