Similar Literature
20 similar documents retrieved.
1.
We introduce a class of models for multidimensional control problems that we call skip-free Markov decision processes on trees. We describe and analyse an algorithm applicable to Markov decision processes of this type that are skip-free in the negative direction. Starting with the finite average cost case, we show that the algorithm combines the advantages of both value iteration and policy iteration: it is guaranteed to converge to an optimal policy and optimal value function after a finite number of iterations, but the computational effort required for each iteration step is comparable with that for value iteration. We show that the algorithm can also be used to solve discounted cost models and continuous-time models, and that a suitably modified algorithm can be used to solve communicating models.

2.
Operations Research Letters, 2014, 42(6-7): 429-431
This note shows that the number of arithmetic operations required by any member of a broad class of optimistic policy iteration algorithms to solve a deterministic discounted dynamic programming problem with three states and four actions may grow arbitrarily. Therefore any such algorithm is not strongly polynomial. In particular, the modified policy iteration and λ-policy iteration algorithms are not strongly polynomial.
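For context, the sketch below shows the structure of modified (optimistic) policy iteration for a finite discounted MDP: a greedy improvement step followed by only m partial evaluation sweeps instead of an exact policy evaluation. The tiny two-state instance is illustrative and is not the three-state, four-action construction analysed in the note.

```python
import numpy as np

# Modified (optimistic) policy iteration, sketched for a finite discounted MDP.
# P[a] is the transition matrix and r[a] the reward vector of action a;
# m is the number of partial evaluation sweeps per improvement step.
def modified_policy_iteration(P, r, gamma, m=5, iters=100):
    A, n = r.shape
    V = np.zeros(n)
    for _ in range(iters):
        # policy improvement: greedy w.r.t. the current value estimate
        Q = np.array([r[a] + gamma * P[a] @ V for a in range(A)])
        policy = Q.argmax(axis=0)
        # partial evaluation: m sweeps of the fixed-policy Bellman operator
        for _ in range(m):
            V = np.array([r[policy[i], i] + gamma * P[policy[i], i] @ V
                          for i in range(n)])
    return policy, V

# tiny illustrative 2-state, 2-action instance
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])
print(modified_policy_iteration(P, r, gamma=0.9))
```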

3.
We consider the stochastic shortest path problem, a classical finite-state Markovian decision problem with a termination state, and we propose new convergent Q-learning algorithms that combine elements of policy iteration and classical Q-learning/value iteration. These algorithms are related to the ones introduced by the authors for discounted problems in Bertsekas and Yu (Math. Oper. Res. 37(1):66-94, 2012). The main difference from the standard policy iteration approach is in the policy evaluation phase: instead of solving a linear system of equations, our algorithm solves an optimal stopping problem inexactly with a finite number of value iterations. The main advantage over the standard Q-learning approach is lower overhead: most iterations do not require a minimization over all controls, in the spirit of modified policy iteration. We prove the convergence of asynchronous deterministic and stochastic lookup table implementations of our method for undiscounted, total cost stochastic shortest path problems. These implementations overcome some of the traditional convergence difficulties of asynchronous modified policy iteration, and provide policy iteration-like alternative Q-learning schemes with as reliable convergence as classical Q-learning. We also discuss methods that use basis function approximations of Q-factors and we give an associated error bound.
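As a rough illustration of the evaluation phase described above, the following lookup-table sketch replaces the exact linear system for a fixed policy with a few sweeps of an optimal-stopping-style update. The instance, the number of sweeps and the synchronous schedule are assumptions made for the example and do not reproduce the authors' asynchronous algorithms.

```python
import numpy as np

# Schematic synchronous sketch of policy-iteration-like Q-learning for a small
# stochastic shortest path problem.  P[u][i][j] are transition probabilities
# (the last state is the cost-free, absorbing termination state) and g[i][u]
# are expected one-stage costs.  The evaluation phase runs only m sweeps of an
# optimal-stopping-style update instead of solving a linear system.
def pi_like_q_learning(P, g, m=3, outer=50):
    A, n, _ = P.shape
    Q = np.zeros((n, A))
    J = np.zeros(n)
    mu = np.zeros(n, dtype=int)
    for _ in range(outer):
        for _ in range(m):                              # inexact policy evaluation
            stop = np.minimum(J, Q[np.arange(n), mu])   # optimal-stopping value
            Q = g + np.einsum('aij,j->ia', P, stop)
            Q[-1, :] = 0.0                              # termination state is cost-free
        J = Q.min(axis=1)                               # value update
        mu = Q.argmin(axis=1)                           # policy improvement
    return J, mu

# tiny 3-state instance; state 2 is the termination state
P = np.array([[[0.0, 0.8, 0.2], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],
              [[0.5, 0.0, 0.5], [0.3, 0.0, 0.7], [0.0, 0.0, 1.0]]])
g = np.array([[1.0, 4.0], [2.0, 1.5], [0.0, 0.0]])
print(pi_like_q_learning(P, g))
```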

4.
We introduce the concept of a Markov risk measure and we use it to formulate risk-averse control problems for two Markov decision models: a finite horizon model and a discounted infinite horizon model. For both models we derive risk-averse dynamic programming equations and a value iteration method. For the infinite horizon problem we develop a risk-averse policy iteration method and we prove its convergence. We also propose a version of the Newton method to solve a nonsmooth equation arising in the policy iteration method and we prove its global convergence. Finally, we discuss relations to min–max Markov decision models.
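To make the infinite horizon recursion concrete, here is a minimal value iteration sketch in which the Markov risk measure is taken to be mean upper semideviation, rho(Z) = E[Z] + kappa * E[(Z - E[Z])_+]; the choice of risk measure, the parameter kappa and the instance are assumptions for illustration only.

```python
import numpy as np

# Risk-averse value iteration for a finite discounted cost model, using mean
# upper semideviation rho(Z) = E[Z] + kappa * E[(Z - E[Z])_+] as the one-step
# Markov risk measure.  P[a][i][j] are transition probabilities, c[i][a] are
# one-step costs; kappa in [0, 1] keeps the measure coherent.
def mean_semideviation(p, z, kappa):
    m = p @ z
    return m + kappa * (p @ np.maximum(z - m, 0.0))

def risk_averse_value_iteration(P, c, gamma=0.9, kappa=0.5, iters=500):
    A, n, _ = P.shape
    V = np.zeros(n)
    for _ in range(iters):
        Q = np.array([[c[i, a] + gamma * mean_semideviation(P[a, i], V, kappa)
                       for a in range(A)] for i in range(n)])
        V = Q.min(axis=1)
    return V, Q.argmin(axis=1)

# tiny illustrative instance
P = np.array([[[0.9, 0.1], [0.3, 0.7]],
              [[0.4, 0.6], [0.8, 0.2]]])
c = np.array([[1.0, 2.0], [0.6, 0.4]])
print(risk_averse_value_iteration(P, c))
```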

5.
An efficient algorithm for solving discounted semi-Markov (Markov-renewal) problems is proposed. The value iteration method of dynamic programming is used in conjunction with a test for non-optimal actions. A non-optimality test for discounted semi-Markov processes, which extends the test of Hastings and van Nunen (1976) for undiscounted or discounted returns with an infinite or finite planning horizon, is used to identify actions which cannot be optimal at the current stage of a discounted semi-Markov process. The proposed test eliminates actions for one or more stages, after which they may re-enter the set of possibly optimal actions, but such re-entries cease as convergence proceeds.
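The sketch below shows the flavour of such an elimination test for a plain discounted MDP, using the classical MacQueen-type bound: an action is dropped once its optimistic bound falls below the pessimistic bound on the optimal value. Unlike the temporary elimination described above, this sketch drops actions permanently, and it does not cover the semi-Markov case treated in the paper.

```python
import numpy as np

# Value iteration with a MacQueen-style non-optimality test for a plain
# discounted MDP (maximisation).  P[a] is the transition matrix and r[a] the
# reward vector of action a; beta is the discount factor.
def vi_with_action_elimination(P, r, beta, tol=1e-8):
    A, n = r.shape
    V = np.zeros(n)
    active = [set(range(A)) for _ in range(n)]        # candidate actions per state
    while True:
        Q = np.full((n, A), -np.inf)
        for i in range(n):
            for a in active[i]:
                Q[i, a] = r[a, i] + beta * P[a, i] @ V
        V_new = Q.max(axis=1)
        d = V_new - V
        span = d.max() - d.min()
        for i in range(n):
            for a in list(active[i]):
                if Q[i, a] < V_new[i] - beta / (1 - beta) * span:
                    active[i].discard(a)              # cannot be optimal at state i
        V = V_new
        if beta / (1 - beta) * span < tol:            # bounds are tight enough
            return V, Q.argmax(axis=1)

# tiny illustrative instance
P = np.array([[[0.7, 0.3], [0.4, 0.6]],
              [[0.2, 0.8], [0.9, 0.1]]])
r = np.array([[1.0, 0.5], [0.8, 1.2]])
print(vi_with_action_elimination(P, r, beta=0.9))
```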

6.
Fitting the value function in a Markovian decision process by a linear superposition of M basis functions reduces the problem dimensionality from the number of states down to M, with good accuracy retained if the value function is a smooth function of its argument, the state vector. This paper provides, for both the discounted and undiscounted cases, three algorithms for computing the coefficients in the linear superposition: linear programming, policy iteration, and least squares.
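As a small illustration of this dimensionality reduction, the sketch below fits the value function of a 100-state discounted chain with M = 5 polynomial basis coefficients by ordinary least squares. The instance and basis are assumptions for the example, and the exact value function is computed here only to measure the fit; the paper's algorithms obtain the coefficients directly via linear programming, policy iteration or least squares.

```python
import numpy as np

# Represent the value function of a 100-state discounted chain with M = 5
# polynomial basis coefficients.  The exact value function is computed by
# plain value iteration only so that the quality of the fit can be reported.
rng = np.random.default_rng(0)
n, A, gamma, M = 100, 2, 0.95, 5
s = np.linspace(0.0, 1.0, n)                    # scalar "state vector"
P = rng.random((A, n, n)) * np.exp(-5 * np.abs(s[None, :, None] - s[None, None, :]))
P /= P.sum(axis=2, keepdims=True)               # local, hence smoothing, transitions
r = np.array([np.sin(2 * np.pi * s), np.cos(np.pi * s)])   # smooth rewards

V = np.zeros(n)                                 # exact value iteration
for _ in range(1000):
    V = np.max([r[a] + gamma * P[a] @ V for a in range(A)], axis=0)

Phi = np.vander(s, M, increasing=True)          # basis: 1, s, ..., s^(M-1)
w, *_ = np.linalg.lstsq(Phi, V, rcond=None)     # M coefficients instead of n values
print("sup-norm fit error:", float(np.abs(Phi @ w - V).max()), "with", M, "coefficients")
```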

7.
Negative dynamic programming for risk-sensitive control is studied. Under some compactness and semicontinuity assumptions the following results are proved: the convergence of the value iteration algorithm to the optimal expected total reward, the Borel measurability or upper semicontinuity of the optimal value functions, and the existence of an optimal stationary policy.

8.
For an infinite-horizon discounted Markov decision process with a finite number of states and actions, this note provides upper bounds on the number of operations required to compute an approximately optimal policy by value iteration, in terms of the discount factor, the spread of the reward function, and the desired closeness to optimality. One of the provided upper bounds on the number of iterations has the property that it is a non-decreasing function of the discount factor.
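One classical bound of this flavour, stated with the sup-norm of the rewards rather than the spread and therefore looser than the note's bounds: with rewards bounded by r_max in absolute value and value iteration started from the zero function, the policy greedy with respect to V_N is eps-optimal once N + 1 >= log(2 r_max / (eps (1 - gamma)^2)) / log(1 / gamma). The sketch below just evaluates this expression; note that it grows as gamma approaches 1, in line with the monotonicity remark above.

```python
import math

# Evaluate a classical sufficient iteration count for value iteration to yield
# an eps-optimal greedy policy (sup-norm version; the note's bounds use the
# spread of the rewards and are tighter).
def vi_iteration_bound(gamma, r_max, eps):
    return max(0, math.ceil(math.log(2 * r_max / (eps * (1 - gamma) ** 2))
                            / math.log(1 / gamma)))

for gamma in (0.5, 0.9, 0.99):
    print(gamma, vi_iteration_bound(gamma, r_max=1.0, eps=1e-3))
```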

9.
This paper deals with approximate value iteration (AVI) algorithms applied to discounted dynamic programming (DP) problems. For a fixed control policy, the span semi-norm of the so-called Bellman residual is shown to be convex in the Banach space of candidate solutions to the DP problem. This fact motivates the introduction of an AVI algorithm with local search that seeks to minimize the span semi-norm of the Bellman residual in a convex value function approximation space. The novelty here is that the optimality of a point in the approximation architecture is characterized by means of convex optimization concepts, and necessary and sufficient conditions for local optimality are derived. The procedure employs the classical AVI algorithm direction (the Bellman residual) combined with a set of independent search directions to improve the convergence rate. It has guaranteed convergence and satisfies, at least, the necessary optimality conditions over a prescribed set of directions. To illustrate the method, examples are presented that deal with a class of problems from the literature and with a large-state-space queueing problem.
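A minimal sketch of the underlying objective, assuming a linear architecture V = Phi @ w and a fixed policy: the span semi-norm of the Bellman residual is evaluated and reduced by a crude line search along the projected Bellman residual plus a few random directions. The architecture, the direction set and the step rule are assumptions for illustration and do not reproduce the paper's procedure or its optimality conditions.

```python
import numpy as np

# Span semi-norm of the Bellman residual for a fixed policy and a linear value
# function architecture, driven down by a simple multi-direction line search.
def span(x):
    return x.max() - x.min()

def bellman_residual(w, Phi, P_mu, r_mu, gamma):
    V = Phi @ w
    return r_mu + gamma * P_mu @ V - V

def local_search(Phi, P_mu, r_mu, gamma, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(Phi.shape[1])
    for _ in range(iters):
        res = bellman_residual(w, Phi, P_mu, r_mu, gamma)
        dirs = [np.linalg.lstsq(Phi, res, rcond=None)[0]]   # AVI (residual) direction
        dirs += [rng.standard_normal(Phi.shape[1]) for _ in range(3)]
        best_w, best_f = w, span(res)
        for d in dirs:
            for t in (1.0, 0.5, 0.1, -0.1):
                f = span(bellman_residual(w + t * d, Phi, P_mu, r_mu, gamma))
                if f < best_f:
                    best_w, best_f = w + t * d, f
        w = best_w
    return w, best_f

# illustrative fixed-policy instance
n = 30
rng = np.random.default_rng(1)
P_mu = rng.random((n, n)); P_mu /= P_mu.sum(axis=1, keepdims=True)
r_mu = rng.random(n)
Phi = np.vander(np.linspace(0.0, 1.0, n), 5, increasing=True)
print("span of Bellman residual after search:", local_search(Phi, P_mu, r_mu, 0.9)[1])
```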

10.
We consider continuous-time Markov decision processes in Polish spaces. The performance of a control policy is measured by the expected discounted reward criterion associated with state-dependent discount factors. All underlying Markov processes are determined by the given transition rates, which are allowed to be unbounded, and the reward rates may have neither upper nor lower bounds. By using the dynamic programming approach, we establish the discounted reward optimality equation (DROE) and the existence and uniqueness of its solutions. Under suitable conditions, we also obtain a discounted optimal stationary policy which is optimal in the class of all randomized stationary policies. Moreover, when the transition rates are uniformly bounded, we provide an algorithm to compute (or at least to approximate) the discounted reward optimal value function as well as a discounted optimal stationary policy. Finally, we use an example to illustrate our results. Specifically, we first derive an explicit and exact solution to the DROE and an explicit expression of a discounted optimal stationary policy for such an example.
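For the uniformly bounded case, the textbook device is uniformization, which converts the continuous-time problem into an equivalent discrete-time discounted MDP that can then be solved by value iteration. The sketch below shows this device for a constant discount rate alpha and a finite state space; the state-dependent discount factors, unbounded rates and general Polish spaces treated in the paper are beyond it.

```python
import numpy as np

# Classical uniformization of a finite continuous-time MDP with uniformly
# bounded transition rates and constant discount rate alpha.  Q[a][i][j] are
# transition rates (rows sum to zero), r[a][i] are reward rates.
def uniformize_and_solve(Q, r, alpha, iters=2000):
    A, n, _ = Q.shape
    Lam = float(np.max(-Q[:, np.arange(n), np.arange(n)]))  # uniformization rate
    P = Q / Lam + np.eye(n)[None, :, :]          # embedded discrete-time kernels
    beta = Lam / (alpha + Lam)                   # equivalent discount factor
    rr = r / (alpha + Lam)                       # equivalent per-period rewards
    V = np.zeros(n)
    for _ in range(iters):                       # ordinary value iteration
        V = np.max([rr[a] + beta * P[a] @ V for a in range(A)], axis=0)
    return V

# tiny illustrative instance
Q = np.array([[[-1.0, 1.0], [2.0, -2.0]],
              [[-3.0, 3.0], [0.5, -0.5]]])
r = np.array([[1.0, 0.0], [2.0, 0.5]])
print(uniformize_and_solve(Q, r, alpha=0.1))
```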

11.
Approximate value iteration is a simple algorithm that combats the curse of dimensionality in dynamic programs by approximating iterates of the classical value iteration algorithm in a spirit reminiscent of statistical regression. Each iteration of this algorithm can be viewed as an application of a modified dynamic programming operator to the current iterate. The hope is that the iterates converge to a fixed point of this operator, which will then serve as a useful approximation of the optimal value function. In this paper, we show that, in general, the modified dynamic programming operator need not possess a fixed point; therefore, approximate value iteration should not be expected to converge. We then propose a variant of approximate value iteration for which the associated operator is guaranteed to possess at least one fixed point. This variant is motivated by studies of temporal-difference (TD) learning, and the existence of fixed points here implies the existence of stationary points for the ordinary differential equation approximated by a version of TD that incorporates exploration.

12.
In this paper, we consider a mean–variance optimization problem for Markov decision processes (MDPs) over the set of (deterministic stationary) policies. Different from the usual formulation in MDPs, we aim to obtain the mean–variance optimal policy that minimizes the variance over the set of all policies attaining a given expected reward. For continuous-time MDPs with the discounted criterion and finite state and action spaces, we prove that the mean–variance optimization problem can be transformed into an equivalent discounted optimization problem using the conditional expectation and Markov properties. Then, we show that a mean–variance optimal policy and the efficient frontier can be obtained by policy iteration methods with a finite number of iterations. We also address related issues such as a mutual fund theorem and illustrate our results with an example.

13.
In this paper, we develop an exterior point algorithm for convex quadratic programming using a penalty function approach. Each iteration of the algorithm consists of a single Newton step followed by a reduction in the value of the penalty parameter. The points generated by the algorithm follow an exterior path that we define. Convergence of the algorithm is established. The proposed algorithm was motivated by the work of Al-Sultan and Murty on nearest point problems, a special class of quadratic programs. A preliminary implementation of the algorithm produced encouraging results. In particular, the algorithm requires a small and almost constant number of iterations to solve the small- to medium-size problems tested.
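A minimal sketch of an iteration of this type, assuming the convex QP minimize 0.5 x'Qx + c'x subject to Ax >= b and a quadratic penalty on constraint violations: each pass takes one Newton step on the penalized objective and then shrinks the penalty parameter. The penalty form, step rule and instance are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

# Exterior-point-style penalty iteration for the convex QP
#     minimize 0.5 x'Qx + c'x   subject to   A x >= b.
# One Newton step on the quadratic penalty, then the penalty parameter mu is
# reduced by the factor theta.
def exterior_point_qp(Q, c, A, b, mu=1.0, theta=0.5, iters=40):
    x = np.zeros(Q.shape[0])                     # starting point (infeasible here)
    for _ in range(iters):
        viol = np.maximum(b - A @ x, 0.0)        # constraint violations
        active = viol > 0
        grad = Q @ x + c - A.T @ (viol / mu)     # gradient of the penalized objective
        H = Q + A[active].T @ A[active] / mu     # Hessian on the current piece
        x = x - np.linalg.solve(H, grad)         # single Newton step
        mu *= theta                              # shrink the penalty parameter
    return x

# small example: minimize 0.5*||x||^2 + c'x subject to x >= 1
Q = np.eye(2); c = np.array([-0.5, -3.0])
A = np.eye(2); b = np.array([1.0, 1.0])
print(exterior_point_qp(Q, c, A, b))             # approaches [1, 3] from outside
```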

14.
The following optimality principle is established for finite undiscounted or discounted Markov decision processes: If a policy is (gain, bias, or discounted) optimal in one state, it is also optimal for all states reachable from this state using this policy. The optimality principle is used constructively to demonstrate the existence of a policy that is optimal in every state, and then to derive the coupled functional equations satisfied by the optimal return vectors. This reverses the usual sequence, where one first establishes (via policy iteration or linear programming) the solvability of the coupled functional equations, and then shows that the solution is indeed the optimal return vector and that the maximizing policy for the functional equations is optimal for every state.

15.
16.
The paper deals with a risk-averse dynamic programming problem with an infinite horizon. First, the assumptions required for the problem to be well defined are formulated. Then the Bellman equation is derived, which may also be seen as a standalone reinforcement learning problem. The Bellman operator is proved to be a contraction, guaranteeing the convergence of various solution algorithms used for dynamic programming as well as reinforcement learning problems, which we demonstrate on the value iteration and the policy iteration algorithms.
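The contraction property is easy to check numerically for a concrete risk mapping. The sketch below builds a risk-averse Bellman operator from a one-step CVaR of the cost distribution (an assumed choice; the paper works with a general risk-averse formulation) and verifies ||T v1 - T v2||_inf <= gamma ||v1 - v2||_inf on random value functions.

```python
import numpy as np

# Numerical check of the sup-norm contraction property for a risk-averse
# Bellman operator built from a one-step CVaR risk mapping (tail probability
# alpha).  Instance, risk measure and parameters are illustrative.
rng = np.random.default_rng(0)
n, A, gamma, alpha = 6, 3, 0.9, 0.3
P = rng.random((A, n, n)); P /= P.sum(axis=2, keepdims=True)
c = rng.random((A, n))

def cvar(p, z, alpha):
    # Rockafellar-Uryasev form; the minimum over t is attained at a support point
    return min(t + (p @ np.maximum(z - t, 0.0)) / alpha for t in z)

def T(v):
    return np.array([min(c[a, i] + gamma * cvar(P[a, i], v, alpha)
                         for a in range(A)) for i in range(n)])

for _ in range(5):
    v1, v2 = rng.normal(size=n), rng.normal(size=n)
    lhs = np.abs(T(v1) - T(v2)).max()
    rhs = gamma * np.abs(v1 - v2).max()
    print(f"{lhs:.4f} <= {rhs:.4f}: {bool(lhs <= rhs + 1e-12)}")
```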

17.
Zhang Peng. 《运筹学学报》 (Operations Research Transactions), 2012, 16(1): 97-105
A new algorithm, the discrete approximation iteration method, is proposed for solving one-dimensional continuous-state dynamic programming problems, and it is combined with a double-convergence method to solve multidimensional continuous-state dynamic programming problems. The basic idea of the algorithm is as follows: with the other state-variable sequences held fixed, a discrete approximation iteration is carried out on one state-variable sequence at a time to find the optimal sequence for that variable, until all state-variable sequences have been examined. Convergence of the algorithm is proved when the model is a non-convex, non-concave dynamic program, and linear convergence is proved when the model is a convex dynamic program. Finally, a numerical example verifies the effectiveness of the model and the algorithm.
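One plausible reading of the one-dimensional building block is a grid-based backward recursion that is repeatedly refined around the incumbent optimal state sequence. The sketch below implements that reading for an illustrative finite-horizon problem in which the next state is chosen directly; the stage cost, grid sizes and refinement rule are all assumptions and not the paper's exact method.

```python
import numpy as np

# Grid-refinement ("discrete approximation") scheme for a one-dimensional
# continuous-state, finite-horizon dynamic program.  The decision variable at
# each stage is the next state; costs and the refinement rule are illustrative.
T = 6                                   # number of stages

def cost(x, y):
    # stage cost of moving the state from x to y
    return (y - 0.5 * x) ** 2 + 0.1 * y ** 2

def dp_on_grids(grids):
    """Backward recursion with the state at stage t restricted to grids[t]."""
    V = np.zeros(len(grids[T]))
    arg = []
    for t in range(T - 1, -1, -1):
        Xt, Xn = grids[t], grids[t + 1]
        Q = cost(Xt[:, None], Xn[None, :]) + V[None, :]
        arg.insert(0, Q.argmin(axis=1))
        V = Q.min(axis=1)
    return V, arg

def recover_path(grids, arg, idx0):
    path, idx = [grids[0][idx0]], idx0
    for t in range(T):
        idx = arg[t][idx]
        path.append(grids[t + 1][idx])
    return path

grids = [np.linspace(-2.0, 2.0, 9) for _ in range(T + 1)]   # coarse initial grids
width = 4.0
for _ in range(8):
    V, arg = dp_on_grids(grids)
    start = int(np.abs(grids[0] - 1.0).argmin())            # initial state x0 = 1
    path = recover_path(grids, arg, start)
    width *= 0.5                                            # shrink the search boxes
    grids = [np.clip(np.linspace(p - width / 2, p + width / 2, 9), -2.0, 2.0)
             for p in path]

print("approximate optimal state sequence:", np.round(path, 4))
print("approximate optimal cost:", float(V[start]))
```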

18.
This paper deals with the design of linear-phase finite impulse response (FIR) digital filters using weighted peak-constrained least-squares (PCLS) optimization. The PCLS error design problem is formulated as a quadratically constrained quadratic semi-infinite programming problem. An exchange algorithm with a new exchange rule is proposed to solve the problem. The algorithm provides an approximate optimal solution after a finite number of iterations. In particular, the subproblem solved at each iteration is a quadratically constrained quadratic program. We can rewrite it as a conic optimization problem solvable in polynomial time. For illustration, numerical examples are solved using the proposed algorithm.
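For orientation, the sketch below solves only the unconstrained weighted least-squares core of such a linear-phase (Type I) design on a dense frequency grid; the peak constraints, the semi-infinite formulation and the exchange rule that are the paper's contribution are omitted, and the filter length, band edges and weights are illustrative.

```python
import numpy as np

# Weighted least-squares design of a linear-phase (Type I) FIR lowpass filter
# on a dense frequency grid -- only the LS part, without peak constraints.
N = 41                                   # odd filter length, symmetric impulse response
M = (N - 1) // 2
w = np.linspace(0.0, np.pi, 512)         # frequency grid
D = (w <= 0.4 * np.pi).astype(float)     # ideal lowpass amplitude response
W = np.where(w <= 0.4 * np.pi, 1.0, 10.0)           # pass/stopband weights
W[(w > 0.4 * np.pi) & (w < 0.5 * np.pi)] = 0.0      # transition band: don't care

# amplitude response A(w) = a0 + 2 * sum_k a_k cos(k w); weighted LS for a
C = np.hstack([np.ones((w.size, 1)), 2.0 * np.cos(np.outer(w, np.arange(1, M + 1)))])
a, *_ = np.linalg.lstsq(np.sqrt(W)[:, None] * C, np.sqrt(W) * D, rcond=None)
h = np.concatenate([a[:0:-1], a])        # symmetric impulse response of length N

A_resp = C @ a
print("first taps:", np.round(h[:4], 4))
print("peak stopband magnitude:", float(np.max(np.abs(A_resp[w >= 0.5 * np.pi]))))
```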

19.
This paper deals with the bias optimality of multichain models for finite continuous-time Markov decision processes. Based on new performance difference formulas developed here, we prove the convergence of a so-called bias-optimal policy iteration algorithm, which can be used to obtain bias-optimal policies in a finite number of iterations.

20.
To computationally solve an adaptive optimal control problem by means of conventional dynamic programming, a backward recursion must be used, with an optimal value and optimal control determined for all conceivable prior information patterns and prior control histories. Consequently, almost all problems are beyond the capability of even large computers. As an alternative, we develop in this paper a computational successive improvement scheme which involves choosing a nominal control policy and then improving it at each iteration. Each improvement involves considerable computation, but much less than the straightforward dynamic programming algorithm. As in any local-improvement procedure, the scheme may converge to something which is inferior to the absolutely optimal control. This paper has been supported by the National Science Foundation under Grant No. GP-25081.
