Similar Documents (20 results)
1.
Under the one-step look-ahead rule of dynamic programming, an explicit game value of Dynkin's stopping problem for a Markov chain is obtained by means of a potential operator. The condition on the one-step rule can be extended to the k-step and infinite-step rules. We also decompose the game value as the sum of two explicit functions under these rules.
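
The fixed-point iteration behind such a Dynkin game can be sketched for a finite chain. The following is only a minimal illustration, not the paper's explicit potential-operator formula; the transition matrix and the payoffs f ≤ g are hypothetical inputs.

```python
import numpy as np

def dynkin_game_value(P, f, g, tol=1e-10, max_iter=10_000):
    """Iterate V <- min(g, max(f, P @ V)) from V = f; with f <= g the
    iterates increase monotonically and converge to the game value."""
    V = f.copy()
    for _ in range(max_iter):
        V_new = np.minimum(g, np.maximum(f, P @ V))
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new

# Tiny example: random walk on {0, 1, 2} with reflecting ends.
P = np.array([[0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]])
f = np.array([0.0, 1.0, 0.0])   # payoff to the maximizing stopper
g = np.array([2.0, 2.0, 1.0])   # payoff charged to the minimizing stopper
print(dynkin_game_value(P, f, g))
```

The one-step look-ahead flavour shows up in the fixed point: each player compares stopping now against the one-step expectation P @ V.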

2.

This paper concerns the pricing of American options with stochastic stopping time constraints expressed in terms of the states of a Markov process. Following the ideas of Menaldi et al., we transform the constrained problem into an unconstrained optimal stopping problem. The transformation replaces the original payoff by the value of a generalized barrier option. We also provide a Monte Carlo method to numerically calculate the option value for multidimensional Markov processes. We adapt the Longstaff–Schwartz algorithm to solve the stochastic Cauchy–Dirichlet problem related to the valuation of the barrier option along a set of simulated trajectories of the underlying Markov process.
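
For reference, here is a minimal Longstaff–Schwartz sketch for a plain (unconstrained) American put under Black–Scholes dynamics; all parameters are hypothetical, and the paper's constraint/barrier transformation is not included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Black-Scholes parameters for a plain American put.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20_000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths on the time grid dt, ..., T.
Z = rng.standard_normal((n_paths, n_steps))
logret = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
S = S0 * np.exp(np.cumsum(logret, axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)

# Backward induction: regress discounted cash flows on a quadratic basis.
cash = payoff(S[:, -1])
for t in range(n_steps - 2, -1, -1):
    cash *= disc
    itm = payoff(S[:, t]) > 0                  # in-the-money paths only
    if itm.any():
        x = S[itm, t]
        A = np.column_stack([np.ones_like(x), x, x * x])
        beta, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
        exercise = payoff(x) > A @ beta        # exercise beats continuation
        idx = np.where(itm)[0][exercise]
        cash[idx] = payoff(S[idx, t])

print("American put estimate:", disc * cash.mean())
```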

3.
In this paper we are interested in the effect that dependencies in the arrival process to a queue have on queueing properties such as mean queue length and mean waiting time. We start with a review of the well-known relations used to compare random variables and random vectors, e.g., stochastic orderings, stochastic increasing convexity, and strong stochastic increasing concavity. These relations and others are used to compare interarrival times in Markov renewal processes, first in the case where the interarrival time distributions depend only on the current state in the underlying Markov chain and then in the general case where these interarrival times depend on both the current state and the next state in that chain. These results are used to study a problem previously considered by Patuwo et al. [14]. Then, in order to keep the marginal distributions of the interarrival times constant, we build a particular transition matrix for the underlying Markov chain depending on a single parameter, p. This Markov renewal process is used in the Patuwo et al. [14] problem so as to investigate the dependence of the mean queue length and mean waiting time on a correlation measure depending only on p. As constructed, the interarrival time distributions do not depend on p, so that the effects we find depend only on correlation in the arrival process. As a result of this latter construction, we find that the mean queue length is always larger in the case where correlations are non-zero than in the more usual case of renewal arrivals (i.e., where the correlations are zero). The implications of our results are clear.
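
A simulation in the spirit of this construction (all rates hypothetical): a two-state modulating chain with symmetric transition matrix [[p, 1−p], [1−p, p]] has stationary law (1/2, 1/2) for every p, so the marginal interarrival mixture is held constant while the serial correlation grows with p.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_wait(p, n=200_000, rates=(0.5, 2.0), mu=1.5):
    """Lindley recursion W <- max(0, W + S - A) for a single-server queue
    whose interarrival rate depends on the current state of a two-state
    chain with transition matrix [[p, 1-p], [1-p, p]]."""
    state, W, total = 0, 0.0, 0.0
    for _ in range(n):
        A = rng.exponential(1.0 / rates[state])   # interarrival time
        S = rng.exponential(1.0 / mu)             # service time
        W = max(0.0, W + S - A)
        total += W
        if rng.random() > p:                      # switch state w.p. 1 - p
            state = 1 - state
    return total / n

for p in (0.5, 0.9, 0.99):    # p = 0.5 gives renewal (i.i.d.) arrivals
    print(p, mean_wait(p))
```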

4.
5.
6.
This paper is dedicated to the investigation of a new numerical method to approximate the optimal stopping problem for a discrete-time, continuous state space Markov chain under partial observations. It relies on a two-step discretization procedure built on optimal quantization. First, we discretize the state space of the unobserved variable by quantizing an underlying reference measure. Then we jointly discretize the resulting approximate filter and the observation process. We obtain a fully computable approximation of the value function with explicit error bounds for its convergence towards the true value function.
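
The first discretization step, quantizing a reference measure, can be sketched with Lloyd's algorithm (a stand-in for the paper's optimal quantization; the normal reference measure and the grid size are hypothetical choices).

```python
import numpy as np

rng = np.random.default_rng(2)

def lloyd_quantizer(sample, m, n_iter=50):
    """m-point quantization grid (with weights) for the empirical measure
    of a 1-d sample, reducing quadratic distortion at each pass."""
    grid = rng.choice(sample, size=m, replace=False)
    for _ in range(n_iter):
        # nearest-neighbour assignment, then centroid update
        idx = np.argmin(np.abs(sample[:, None] - grid[None, :]), axis=1)
        for j in range(m):
            pts = sample[idx == j]
            if pts.size:
                grid[j] = pts.mean()
    idx = np.argmin(np.abs(sample[:, None] - grid[None, :]), axis=1)
    weights = np.bincount(idx, minlength=m) / sample.size
    order = np.argsort(grid)
    return grid[order], weights[order]

grid, w = lloyd_quantizer(rng.standard_normal(20_000), m=20)
```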

7.
We study the problem of sampling uniformly at random from the set of k-colorings of a graph with maximum degree Δ. We focus attention on the Markov chain Monte Carlo method, particularly on a popular Markov chain for this problem, the Wang–Swendsen–Kotecký (WSK) algorithm. The second author recently proved that the WSK algorithm quickly converges to the desired distribution when k ≥ 11Δ/6. We study how far these positive results can be extended in general. In this note we prove the first non-trivial results on when the WSK algorithm takes exponentially long to reach the stationary distribution and is thus called torpidly mixing. In particular, we show that the WSK algorithm is torpidly mixing on a family of bipartite graphs when 3 ≤ k < Δ/(20 log Δ), and on a family of planar graphs for any number of colors. We also give a family of graphs for which, despite their small chromatic number, the WSK algorithm is not ergodic when k ≤ Δ/2, provided k is larger than some absolute constant k0.
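
For concreteness, one WSK (Kempe chain) move on a properly coloured graph looks as follows; the 4-cycle input is a hypothetical example.

```python
import random

def wsk_step(colors, adj, k):
    """One Wang-Swendsen-Kotecky move: pick a vertex v and a colour c,
    then swap the two colours colors[v] and c on the Kempe chain through
    v (the connected component of v in the subgraph induced by those two
    colours). The move preserves properness of the colouring."""
    v = random.randrange(len(colors))
    c = random.randrange(k)
    a = colors[v]
    if a == c:
        return
    # depth-first search for the two-colour component containing v
    comp, frontier = {v}, [v]
    while frontier:
        u = frontier.pop()
        for w in adj[u]:
            if colors[w] in (a, c) and w not in comp:
                comp.add(w)
                frontier.append(w)
    for u in comp:
        colors[u] = c if colors[u] == a else a

# Example: a 4-cycle with 3 colours (adjacency lists).
adj = [[1, 3], [0, 2], [1, 3], [2, 0]]
colors = [0, 1, 0, 1]
for _ in range(1000):
    wsk_step(colors, adj, k=3)
```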

8.
We study a Markov chain on generating n-tuples of a fixed group which arises in algorithms for manipulating finite groups. The main tools are comparison of two Markov chains on different but related state spaces and combinatorics of random paths. The results involve group theoretical parameters such as the size of minimal generating sets, the number of distinct generating k-tuples for different k's and the maximal diameter of the group.
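
Chains of this kind are of product-replacement type; as an illustration (our choice of group, not necessarily the paper's setting), here is the walk on generating pairs of the cyclic group Z_N.

```python
import random

def product_replacement_step(tup, N):
    """Replace g_i by g_i +/- g_j (i != j) in a tuple of elements of the
    cyclic group Z_N, written additively. The subgroup generated by the
    tuple is unchanged, so generating tuples stay generating."""
    i, j = random.sample(range(len(tup)), 2)
    tup[i] = (tup[i] + random.choice((1, -1)) * tup[j]) % N

# Walk on generating pairs of Z_12; a pair generates iff
# gcd(g1, g2, 12) == 1.
tup = [1, 5]
for _ in range(100):
    product_replacement_step(tup, 12)
print(tup)
```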

9.
The connection between the optimal stopping problems for an inhomogeneous standard Markov process and the corresponding homogeneous Markov process constructed in the extended state space is established. An excessive characterization of the value function and the limit procedure for its construction in the problem of optimal stopping of an inhomogeneous standard Markov process are given. The form of ε-optimal (optimal) stopping times is also found.
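
The extended-state-space construction is the familiar space-time trick; a minimal sketch with a hypothetical interface:

```python
def extend_state_space(kernels):
    """Turn a time-inhomogeneous chain, given by kernels[t][x][y] = P_t(x, y),
    into a homogeneous kernel on pairs (t, x): mass moves from (t, x) to
    (t + 1, y) with probability P_t(x, y)."""
    def Q(t_x, s_y):
        (t, x), (s, y) = t_x, s_y
        if s != t + 1 or t >= len(kernels):
            return 0.0
        return kernels[t][x][y]
    return Q
```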

10.
In this paper we consider stopping problems for continuous-time Markov chains under a general risk-sensitive optimization criterion for problems with finite and infinite time horizon. More precisely, our aim is to maximize the certainty equivalent of the stopping reward minus the cost over the time horizon. We derive optimality equations for the value functions and prove the existence of optimal stopping times. The exponential utility is treated as a special case. In contrast to risk-neutral stopping problems, it may be optimal to stop between jumps of the Markov chain. We briefly discuss the influence of the risk sensitivity on the optimal stopping time and consider a special house-selling problem as an example.
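
A discrete-time analogue of the house-selling example (the paper works in continuous time with general certainty equivalents): with exponential utility U(x) = −exp(−γx), i.i.d. offers on a finite grid, and a linear waiting cost, backward induction reads as follows; all numbers are hypothetical.

```python
import numpy as np

gamma, c, T = 0.05, 0.5, 20                  # risk sensitivity, cost, horizon
offers = np.array([10.0, 20.0, 30.0])        # possible offers
probs = np.array([0.3, 0.5, 0.2])            # their probabilities

U = lambda x: -np.exp(-gamma * x)

V = U(offers - c * T)                        # forced to stop at the horizon
for t in range(T - 1, -1, -1):
    cont = probs @ V                         # expected utility of waiting
    V = np.maximum(U(offers - c * t), cont)  # stop if the offer beats it

ce = -np.log(-(probs @ V)) / gamma           # certainty equivalent at t = 0
print("certainty equivalent of optimal stopping:", ce)
```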

11.
We establish a stochastic extension of Ramsey's theorem. Any Markov chain generates a filtration relative to which one may define a notion of stopping times. A stochastic colouring is any k-valued (k < ∞) colour function defined on all pairs consisting of a bounded stopping time and a finite partial history of the chain truncated before this stopping time. For any bounded stopping time θ and any infinite history ω of the Markov chain, let ω|θ denote the finite partial history up to and including the time θ(ω). Given k = 2, for every ε > 0, we prove that there is an increasing sequence θ1 < θ2 < ⋯ of bounded stopping times having the property that, with probability greater than 1 − ε, the history ω is such that the values assigned to all pairs (ω|θi, θj), with i < j, are the same. Just as with the classical Ramsey theorem, we also obtain an analogous finitary stochastic Ramsey theorem. Furthermore, with appropriate finiteness assumptions, the time one must wait for the last stopping time (in the finitary case) is uniformly bounded, independently of the probability transitions. We generalise the results to any finite number k of colours.

12.
13.
The Markov chains with stationary transition probabilities have not proved satisfactory as a model of human mobility. A modification of this simple model is the ‘duration specific’ chain incorporating the axiom of cumulative inertia: the longer a person has been in a state, the less likely he is to leave it. Such a process is a Markov chain with a denumerably infinite number of states, specifying both location and duration of time in the location. Here we suggest that a finite upper bound be placed on duration, thus making the process into a finite state Markov chain. Analytic representations of the equilibrium distribution of the process are obtained under two conditions: (a) the maximum duration is an absorbing state, for all locations; and (b) the maximum duration is non-absorbing. In the former case the chain is absorbing; in the latter it is regular.
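
The construction admits a direct numerical check. A sketch with two locations, a hypothetical inertia curve, and the non-absorbing (regular) variant of the capped duration:

```python
import numpy as np

def stationary(P):
    """Left stationary vector of a regular stochastic matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])   # Perron eigenvector
    return v / v.sum()

# Probability of staying grows with duration d = 0..D-1 (cumulative
# inertia), capped at the maximum duration D.
D = 4
stay = lambda d: 0.5 + 0.4 * d / (D - 1)   # hypothetical inertia curve

n = 2 * D                                  # states are pairs (location, d)
P = np.zeros((n, n))
for loc in range(2):
    for d in range(D):
        s = loc * D + d
        p_stay = stay(d)
        P[s, loc * D + min(d + 1, D - 1)] = p_stay   # stay and age
        P[s, (1 - loc) * D + 0] = 1 - p_stay         # move, duration resets

print(stationary(P).reshape(2, D))         # equilibrium by (location, d)
```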

14.
15.
In this note, we discuss a Markov chain formulation of the k-SAT problem and the properties of the resulting transition matrix. The motivation behind this work is to relate the phase transition in the k-SAT problem to the phenomenon of “cut-off” in Markov chains. We use the idea of weak lumpability to reduce the dimension of our transition matrix to manageable proportions.
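
As an illustration of the reduction idea (using ordinary strong lumpability as a simpler stand-in for the weak lumpability actually used in the note):

```python
import numpy as np

def is_lumpable(P, partition):
    """Check ordinary (strong) lumpability of P with respect to a
    partition: within each block, all rows must aggregate identically
    into every block."""
    for block in partition:
        agg = np.array([[P[i, list(b)].sum() for b in partition]
                        for i in block])
        if not np.allclose(agg, agg[0]):
            return False
    return True

P = np.array([[0.5, 0.25, 0.25],
              [0.3, 0.35, 0.35],
              [0.3, 0.35, 0.35]])
print(is_lumpable(P, [[0], [1, 2]]))   # True: states 1 and 2 merge
```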

16.
Rabehasaina, Landy; Woo, Jae-Kyung. Queueing Systems 2020, 94(3-4): 393-420.

We consider a general k-dimensional discounted infinite server queueing process (alternatively, an incurred-but-not-reported claim process) where the multivariate inputs (claims) are given by a k-dimensional finite-state Markov chain and the arrivals follow a renewal process. After deriving a multidimensional integral equation for the moment-generating function of the process jointly with the state of the input at time t, given the initial state of the input at time 0, asymptotic results for the first and second (matrix) moments of the process are provided. In particular, when the interarrival or service times are exponentially distributed, transient expressions for the first two moments are obtained. Also, the moment-generating function for the process with deterministic interarrival times is considered to provide more explicit expressions. Finally, we demonstrate the potential of the present model by showing how it allows us to study semi-Markovian modulated infinite server queues where the customers' (claims') arrival and service (reporting delay) times depend on the state of the process immediately before and at the switching times.
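
A Monte Carlo sketch of the discounted incurred-but-not-reported quantity for a two-state modulated case; all rates are hypothetical, and the modulating chain here is allowed to move at arrival epochs only.

```python
import numpy as np

rng = np.random.default_rng(4)

def ibnr_sample(t_obs=50.0, lam=1.0, delta=0.05,
                claim_means=(1.0, 5.0), mu=0.2, p_switch=0.3):
    """One sample of the discounted IBNR amount at time t_obs: Poisson
    arrivals, claim sizes driven by a two-state modulating chain, and an
    exponential reporting delay with rate mu for each claim."""
    state, t, total = 0, 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)
        if t > t_obs:
            return total
        if t + rng.exponential(1.0 / mu) > t_obs:    # still unreported
            total += np.exp(-delta * t) * rng.exponential(claim_means[state])
        if rng.random() < p_switch:                  # chain may switch
            state = 1 - state

print(np.mean([ibnr_sample() for _ in range(2000)]))
```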


17.
Optimization 2012, 61(1-2): 173-190.
The paper deals with speculation strategies in a dynamic economy, where “speculation” means participating in a market with the intention of gaining a reward by first buying an item and thereafter selling it at a possibly higher price. By assuming that the states of the economy form a Markov chain, the problem is modeled as a discrete-time Markov decision process. The optimal strategies (which are pairs of stopping times) are identified. Under quite general conditions the optimal rule for the selling process turns out to be a control limit policy in both state of economy and time. Techniques for the computation of optimal strategies are presented; some numerical examples are also discussed. For a static economy, closed-form solutions are given.
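
A finite-horizon sketch of the two-phase (buy, then sell) stopping problem over a two-state economy; the transition matrix and prices are hypothetical. The control-limit structure shows up in where each maximum is attained.

```python
import numpy as np

P = np.array([[0.7, 0.3], [0.4, 0.6]])   # economy transition matrix
price = np.array([10.0, 14.0])           # item price in each economy state
T = 10

# Selling phase: best expected sale price from time t, state s.
V_sell = np.zeros((T + 1, 2))
V_sell[T] = price
for t in range(T - 1, -1, -1):
    V_sell[t] = np.maximum(price, P @ V_sell[t + 1])   # sell now vs. wait

# Buying phase: buying at (t, s) yields V_sell[t, s] - price[s];
# never buying yields 0.
V_buy = np.zeros((T + 1, 2))
V_buy[T] = np.maximum(V_sell[T] - price, 0.0)
for t in range(T - 1, -1, -1):
    V_buy[t] = np.maximum(V_sell[t] - price, P @ V_buy[t + 1])

print("value of the speculation:", V_buy[0])
```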

18.
It is common to subsample Markov chain output to reduce the storage burden. Geyer shows that discarding k − 1 out of every k observations will not improve statistical efficiency, as quantified through variance in a given computational budget. That observation is often taken to mean that thinning Markov chain Monte Carlo (MCMC) output cannot improve statistical efficiency. Here, we suppose that it costs one unit of time to advance a Markov chain and then θ > 0 units of time to compute a sampled quantity of interest. For a thinned process, that cost θ is incurred less often, so the chain can be advanced through more stages. Here, we provide examples to show that thinning will improve statistical efficiency if θ is large and the sample autocorrelations decay slowly enough. If the lag-ℓ autocorrelations of a scalar measurement satisfy ρ_ℓ > ρ_{ℓ+1} > 0, then there is always a θ < ∞ at which thinning becomes more efficient for averages of that scalar. Many sample autocorrelation functions resemble first-order AR(1) processes with ρ_ℓ = ρ^|ℓ| for some −1 < ρ < 1. For an AR(1) process, it is possible to compute the most efficient subsampling frequency k. The optimal k grows rapidly as ρ increases toward 1. The resulting efficiency gain depends primarily on θ, not ρ. Taking k = 1 (no thinning) is optimal when ρ ≤ 0. For ρ > 0, it is optimal if and only if θ ≤ (1 − ρ)²/(2ρ). The efficiency gain never exceeds 1 + θ. This article also gives efficiency bounds for autocorrelations bounded between those of two AR(1) processes. Supplementary materials for this article are available online.
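
The AR(1) computation can be reproduced in a few lines: if the kept series after thinning by k is AR(1) with correlation ρ^k, the asymptotic variance of the sample mean per unit of computing time is proportional to (k + θ)(1 + ρ^k)/(1 − ρ^k), and the optimal k minimizes it (constants that do not affect the argmin are dropped).

```python
import numpy as np

def cost_adjusted_variance(k, rho, theta):
    """(k + theta) * (1 + rho**k) / (1 - rho**k): advancing the chain
    costs 1 per step, evaluating the kept quantity costs theta, and the
    thinned series has AR(1) correlation rho**k."""
    r = rho ** k
    return (k + theta) * (1 + r) / (1 - r)

def best_k(rho, theta, k_max=10_000):
    ks = np.arange(1, k_max + 1)
    return int(ks[np.argmin(cost_adjusted_variance(ks, rho, theta))])

for rho in (0.5, 0.9, 0.99):
    print(rho, best_k(rho, theta=5.0))   # optimal k grows quickly with rho
```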

19.
We study an infinite horizon optimal stopping Markov problem which is either undiscounted (total reward) or with a general Markovian discount rate. Using ergodic properties of the underlying Markov process, we establish the feasibility of the stopping problem and prove the existence of optimal and ε-optimal stopping times. We show the continuity of the value function and its variational characterisation (in the viscosity sense) under different sets of assumptions satisfied by large classes of diffusion and jump–diffusion processes. In the case of a general discounted problem we relax a classical assumption that the discount rate is uniformly separated from zero.

20.
The problem of estimating a finite state Markov chain observed via a process on the same state space is discussed. Optimal solutions are given for both the “weak” and “strong” formulations of the problem. The “weak” formulation proceeds using a reference probability and a measure change for the Markov chain. The “strong” formulation considers an observation process related to perturbations of the counting processes associated with the Markov chain. In this case the “small noise” convergence is investigated.
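
A discrete-time sketch of the underlying filtering recursion (the “weak” formulation uses an unnormalized change-of-measure version of the same update; the matrices below are hypothetical placeholders):

```python
import numpy as np

def hmm_filter(P, B, pi0, obs):
    """Filter for a finite-state chain observed through noise on the same
    state space: pi_{t+1} is proportional to B[:, y] * (P.T @ pi_t).

    P   : (n, n) transition matrix of the hidden chain
    B   : (n, n) observation kernel, B[x, y] = Prob(observe y | state x)
    pi0 : initial distribution
    obs : iterable of observed states
    """
    pi, history = pi0.copy(), []
    for y in obs:
        pi = B[:, y] * (P.T @ pi)    # predict, then correct
        pi /= pi.sum()
        history.append(pi.copy())
    return np.array(history)

P = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
print(hmm_filter(P, B, np.array([0.5, 0.5]), obs=[0, 0, 1]))
```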
