Similar Articles
20 similar articles found
1.
Infinite-Horizon Partially Observable Markov Decision Programming  (total citations: 2; self-citations: 0; citations by others: 2)
This paper further discusses the infinite-horizon discounted-cost partially observable Markov decision programming considered in [1, 2]. It clarifies several vague concepts there and supplements or corrects omissions and errors; in particular, the class of finite instantaneous policies is enlarged while preserving the piecewise linearity of the cost function. Finally, several new results are given, and a correction and a convergence estimate are provided for the policy iteration algorithm of [1].

2.
Policy iteration is a well-studied algorithm for solving stationary Markov decision processes (MDPs). It has also been extended to robust stationary MDPs. For robust nonstationary MDPs, however, an “as is” execution of this algorithm is not possible because it would call for an infinite amount of computation in each iteration. We therefore present a policy iteration algorithm for robust nonstationary MDPs, which performs finitely implementable approximate variants of policy evaluation and policy improvement in each iteration. We prove that the sequence of cost-to-go functions produced by this algorithm monotonically converges pointwise to the optimal cost-to-go function; the policies generated converge subsequentially to an optimal policy.
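As background for the abstract above, here is a minimal sketch of classical policy iteration for a finite, stationary, discounted MDP; it is not the robust nonstationary variant the paper develops, and the names and array shapes (`P`, `R`, `gamma`) are illustrative assumptions.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95, max_iter=1000):
    """Classical policy iteration for a finite stationary discounted MDP.

    P: (A, S, S) array, P[a, s, s'] = transition probability under action a.
    R: (A, S) array, expected one-step reward of action a in state s.
    """
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)              # start from an arbitrary policy
    V = np.zeros(S)
    for _ in range(max_iter):
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = P[policy, np.arange(S)]           # (S, S) transitions under the policy
        R_pi = R[policy, np.arange(S)]           # (S,) rewards under the policy
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)
        # Policy improvement: greedy one-step lookahead with respect to V.
        Q = R + gamma * P @ V                    # (A, S) action values
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):   # a stable policy is optimal
            break
        policy = new_policy
    return policy, V
```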

3.
This paper deals with the bias optimality of multichain models for finite continuous-time Markov decision processes. Based on new performance difference formulas developed here, we prove the convergence of a so-called bias-optimal policy iteration algorithm, which can be used to obtain bias-optimal policies in a finite number of iterations.

4.
We propose a new approach to accelerate the convergence of the modified policy iteration method for Markov decision processes with the total expected discounted reward criterion. In the new policy iteration, an additional operator is applied to the iterate generated by the Markov operator, resulting in a larger improvement in each iteration.
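For comparison, a baseline sketch of modified policy iteration (partial policy evaluation via a fixed number of backups) is shown below; it does not implement the paper's additional accelerating operator, and all names and shapes are assumptions.

```python
import numpy as np

def modified_policy_iteration(P, R, gamma=0.95, m=5, tol=1e-8, max_iter=10_000):
    """Baseline modified policy iteration for a finite discounted MDP:
    each pass performs a greedy improvement followed by m partial
    evaluation backups instead of solving the evaluation equations exactly.

    P: (A, S, S) transition probabilities, R: (A, S) expected rewards.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        Q = R + gamma * P @ V                    # one-step lookahead values
        policy = Q.argmax(axis=0)                # greedy improvement step
        V_new = Q.max(axis=0)
        P_pi = P[policy, np.arange(S)]           # dynamics under the greedy policy
        R_pi = R[policy, np.arange(S)]
        for _ in range(m):                       # m partial evaluation backups
            V_new = R_pi + gamma * P_pi @ V_new
        if np.max(np.abs(V_new - V)) < tol:
            return policy, V_new
        V = V_new
    return policy, V
```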

5.
We develop an eigenfunction expansion based value iteration algorithm to solve discrete time infinite horizon optimal stopping problems for a rich class of Markov processes that are important in applications. We provide convergence analysis for the value function and the exercise boundary, and derive easily computable error bounds for value iterations. As an application we develop a fast and accurate algorithm for pricing callable perpetual bonds under the CIR short rate model.
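A plain value iteration sketch for a generic discrete-time optimal stopping problem on a finite state space is given below; it uses neither the eigenfunction expansion nor the CIR bond model of the paper, and `P`, `g`, `beta` are assumed inputs.

```python
import numpy as np

def stopping_value_iteration(P, g, beta=0.95, tol=1e-10, max_iter=100_000):
    """Value iteration for an infinite horizon optimal stopping problem:
        V(x) = max( g(x), beta * sum_y P[x, y] * V(y) ).

    P: (S, S) transition matrix of the underlying Markov chain.
    g: (S,) payoff received upon stopping in each state.
    """
    V = g.astype(float).copy()
    for _ in range(max_iter):
        V_new = np.maximum(g, beta * P @ V)      # stop now vs. continue
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    exercise_region = g >= beta * P @ V          # states where stopping is optimal
    return V, exercise_region
```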

6.
Stochastic scheduling problems are considered using discounted dynamic programming. Both maximizing pure rewards and minimizing linear holding costs are treated within one common Markov decision problem. A sufficient condition for the optimality of the myopic policy is given for finite and infinite horizons. For the infinite horizon case we show the optimality of an index policy and give a sufficient condition for the index policy to be myopic. Moreover, the relation between the two sufficient conditions is discussed.

7.
We consider an undiscounted semi-Markov decision process with a target set; our main concern is the problem of minimizing a threshold probability. We formulate the problem as an infinite horizon case with a recurrent class. We show that the optimal value function is the unique solution of an optimality equation and that a stationary optimal policy exists. Several value iteration methods and a policy improvement method are also given for our model. Furthermore, we investigate the relationship between threshold probabilities and expectations of total rewards.

8.
We study risk-sensitive control of continuous time Markov chains taking values in a discrete state space, treating both finite and infinite horizon problems. In the finite horizon problem we characterize the value function via the Hamilton-Jacobi-Bellman equation and obtain an optimal Markov control; we do the same for the infinite horizon discounted cost case. In the infinite horizon average cost case we establish the existence of an optimal stationary control under a certain Lyapunov condition. We also develop a policy iteration algorithm for finding an optimal control.

9.
This paper considers a multi-class batch service problem that involves a class-dependent waiting cost and a service cost in determining customer batch sizes. Unlike the fixed service cost used widely in standard models, the service cost considered in this work is incurred only if the total service time exceeds the capacity. We formulate this problem as an infinite horizon Markov decision process, and exploit its structural properties to establish theoretical results, including bounds on the optimal action space. We use the results to improve the value iteration procedure. Furthermore, we design heuristic algorithms for large problems. The numerical experiments demonstrate that the class-dependent waiting cost has a considerable influence on the optimal customer batch size. Finally, we evaluate the efficiency of the proposed value iteration procedure and the quality of the heuristic solutions.

10.
This paper deals with a new optimality criterion consisting of the usual three average criteria and the canonical triplet (together called the strong average-canonical optimality criterion) and introduces the concept of a strong average-canonical policy for nonstationary Markov decision processes, which extends the canonical policies of Hernández-Lerma and Lasserre [16, p. 77] for stationary controlled Markov processes. For the case of possibly non-uniformly bounded rewards and a denumerable state space, we first construct, under some conditions, a solution to the optimality equations (OEs), and then prove that the Markov policies obtained from the OEs are not only optimal for the three average criteria but also optimal for all finite horizon criteria with a sequence of additional functions as their terminal rewards (i.e. strong average-canonical optimal). Some properties of optimal policies and the convergence of optimal average values are also discussed. Moreover, an error bound on the average reward between a rolling horizon policy and a strong average-canonical optimal policy is provided, and a rolling horizon algorithm for computing strong average ε(>0)-optimal Markov policies is given.

11.
Planning horizon is a key issue in production planning. Unlike previous approaches based on Markov decision processes, we study the planning horizon of capacity planning problems within the framework of stochastic programming. We first consider an infinite horizon stochastic capacity planning model involving a single resource, a linear cost structure, and discrete distributions for general stochastic cost and demand data (non-Markovian and non-stationary). We give sufficient conditions for the existence of an optimal solution. Furthermore, we study the monotonicity property of the finite horizon approximation of the original problem. We show that the optimal objective value and solution of the finite horizon approximation problem converge to those of the infinite horizon problem as the time horizon goes to infinity. These convergence results, together with the integrality of the decision variables, imply the existence of a planning horizon. We also develop a useful formula to calculate an upper bound on the planning horizon. Then, by decomposition, we show the existence of a planning horizon for a class of very general stochastic capacity planning problems with a complicated decision structure.

12.
We introduce a class of models for multidimensional control problems that we call skip-free Markov decision processes on trees. We describe and analyse an algorithm applicable to Markov decision processes of this type that are skip-free in the negative direction. Starting with the finite average cost case, we show that the algorithm combines the advantages of both value iteration and policy iteration—it is guaranteed to converge to an optimal policy and optimal value function after a finite number of iterations but the computational effort required for each iteration step is comparable with that for value iteration. We show that the algorithm can also be used to solve discounted cost models and continuous-time models, and that a suitably modified algorithm can be used to solve communicating models.

13.
We study optimal control of Markov processes with age-dependent transition rates. The control policy is chosen continuously over time based on the state of the process and its age. We study infinite horizon discounted cost and infinite horizon average cost problems. Our approach is via the construction of an equivalent semi-Markov decision process. We characterise the value function and optimal controls for both discounted and average cost cases.

14.
In this paper, infinite horizon Markovian decision programming with recursive reward functions is discussed. We show that Bellman's principle of optimality applies to our model. Then, a necessary and sufficient condition for a policy to be optimal is given. For the stationary case, an iteration algorithm for finding a stationary optimal policy is designed; the algorithm is a generalization of Howard's [7] and Iwamoto's [3] algorithms. This research was supported by the National Natural Science Foundation of China.

15.
This paper is concerned with the adaptive control problem, over the infinite horizon, for partially observable Markov decision processes whose transition functions are parameterized by an unknown vector. We treat finite models and impose relatively mild assumptions on the transition function. Provided that a sequence of parameter estimates converging in probability to the true parameter value is available, we show that the certainty equivalence adaptive policy is optimal in the long-run average sense.

16.
We study infinite horizon control of continuous-time non-linear branching processes with almost sure extinction, for a general (positive or negative) discount. Our main goal is to study the link between infinite horizon control of these processes and an optimization problem involving their quasi-stationary distributions and the corresponding extinction rates. More precisely, we obtain an asymptotic equivalent of the value function when the discount parameter is close to the threshold where the value function becomes infinite, and we characterize the optimal Markov control in this limit. To achieve this, we present a new proof of the dynamic programming principle based upon a pseudo-Markov property for controlled jump processes. We also prove the convergence to a unique quasi-stationary distribution of non-linear branching processes controlled by a Markov control, conditioned on non-extinction.

17.
We study infinite horizon discounted-cost and ergodic-cost risk-sensitive zero-sum stochastic games for controlled continuous time Markov chains on a countable state space. For the discounted-cost game, we prove the existence of the value and of a saddle-point equilibrium in the class of Markov strategies under nominal conditions. For the ergodic-cost game, we prove the existence of the value and of a saddle-point equilibrium by studying the corresponding Hamilton-Jacobi-Isaacs equation under a certain Lyapunov condition.

18.
We present in this paper several asymptotic properties of constrained Markov decision processes (MDPs) with a countable state space. We treat both the discounted and the expected average cost, with unbounded cost. We are interested in (1) the convergence of finite horizon MDPs to the infinite horizon MDP, (2) the convergence of MDPs with a truncated state space to the problem with an infinite state space, and (3) the convergence of MDPs as the discount factor goes to a limit. In all these cases we establish the convergence of optimal values and policies. Moreover, based on the optimal policy for the limiting problem, we construct policies which are almost optimal for the other (approximating) problems. Based on the convergence of MDPs with a truncated state space to the problem with an infinite state space, we show that there exists an optimal stationary policy whose number of randomisations is less than or equal to the number of constraints plus one. We finally apply the results to a dynamic scheduling problem. This work was partially supported by the Chateaubriand fellowship from the French embassy in Israel and by the European Grant BRA-QMIPS of CEC DG XIII.

19.
In this paper, we study discounted Markov decision processes on an uncountable state space. We allow the utility (reward) function to be unbounded both from above and below. A new feature in our approach is an easily verifiable rate of growth condition introduced for the positive part of the utility function. This assumption, in turn, enables us to prove the convergence of a value iteration algorithm to a solution to the Bellman equation. Moreover, by virtue of the optimality equation we show the existence of an optimal stationary policy.
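For reference, value iteration of this kind targets the Bellman optimality equation of a discounted MDP, which in standard (illustrative) notation, with utility u, discount factor β and transition kernel q, reads

$$ V^{*}(x) \;=\; \sup_{a \in A(x)} \Big\{ u(x,a) + \beta \int_{X} V^{*}(y)\, q(dy \mid x,a) \Big\}, \qquad V_{n+1} = T V_{n}, $$

where T denotes the operator defined by the right-hand side; under suitable growth conditions the iterates V_n converge to the fixed point V*.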

20.
In this paper, we introduce a Markov decision model with absorbing states and a constraint on the asymptotic failure rate. The objective is to find a stationary policy which minimizes the infinite horizon expected average cost, given that the system never fails. Using the Perron-Frobenius theory of non-negative matrices and spectral analysis, we show that the problem can be reduced to a linear programming problem. Finally, we apply this method to a real problem for an aeronautical system.
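To illustrate the general idea of casting an MDP as a linear program, here is a sketch of the textbook LP for a finite discounted MDP; it is not the constrained average-cost, Perron-Frobenius based reduction of the paper, and all names and shapes are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def solve_discounted_mdp_lp(P, R, gamma=0.95):
    """Textbook LP formulation of a finite discounted MDP:
        minimize  sum_s V(s)
        subject to V(s) >= R[a, s] + gamma * sum_y P[a, s, y] * V(y)  for all (s, a).

    P: (A, S, S) transition probabilities, R: (A, S) expected rewards.
    """
    A, S, _ = P.shape
    # Rewrite each constraint as (gamma * P[a] - I) V <= -R[a].
    A_ub = np.concatenate([gamma * P[a] - np.eye(S) for a in range(A)], axis=0)
    b_ub = np.concatenate([-R[a] for a in range(A)])
    c = np.ones(S)                                   # objective: minimize sum of values
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * S)
    return res.x                                     # optimal value function V*
```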
