Continuous-Review Tracking Policies for Dynamic Control of Stochastic Networks
Authors: Constantinos Maglaras
Affiliation: Columbia Business School, 409 Uris Hall, 3022 Broadway, New York, NY 10027, USA
Abstract: This paper is concerned with the dynamic control of stochastic processing networks. Specifically, it follows the so-called heavy traffic approach, in which a Brownian approximating model is formulated, an associated Brownian optimal control problem is solved, and that solution is then used to define an implementable policy for the original system. A major challenge is the step of policy translation from the Brownian model back to the discrete network. This paper addresses this problem by defining a general and easily implementable family of continuous-review tracking policies. Each such policy has the following structure: at each point in time t, the controller observes the current vector of queue lengths q and chooses (i) a target position z(q) of where the system should be at some point in the near future, say at time t+l, and (ii) an allocation vector v(q) that describes how to split the server's processing capacity among job classes in order to steer the state from q to z(q). Implementation of such policies involves the enforcement of small safety stocks. In the context of the heavy traffic approach, the solution of the approximating Brownian control problem is used to select the target state z(q). The proposed tracking policy is shown to be asymptotically optimal in the heavy traffic limiting regime, where the Brownian model approximation becomes valid, for multiclass queueing networks that admit orthant Brownian optimal controls; this is a form of pathwise, or greedy, optimality. Several extensions are discussed.
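To make the policy structure described in the abstract concrete, the following is a minimal sketch (not taken from the paper) of one continuous-review step for a hypothetical single-server, two-class system. The names tracking_policy_step, target_fn, service_rates, horizon, and safety_stock are illustrative assumptions, and target_fn is a placeholder for the z(q) mapping that the paper obtains from the solution of the approximating Brownian control problem.

```python
import numpy as np

def tracking_policy_step(q, target_fn, service_rates, horizon, safety_stock):
    """One continuous-review step (illustrative sketch): observe the queue-length
    vector q, pick a target state z(q), and compute an allocation v(q) that steers
    the state toward z(q) over the planning horizon l = horizon."""
    q = np.asarray(q, dtype=float)
    # Target state; kept above the small safety stocks so classes are not fully drained.
    z = np.maximum(target_fn(q), safety_stock)
    # Desired depletion per class over the horizon (cannot deplete below zero).
    desired = np.maximum(q - z, 0.0)
    # Nominal fraction of the horizon each class needs at full server dedication.
    effort = desired / (service_rates * horizon)
    total = effort.sum()
    if total > 1.0:
        # Total server capacity is 1; scale back proportionally if over-allocated.
        effort = effort / total
    return z, effort  # effort = fraction of capacity allocated to each class

if __name__ == "__main__":
    # Hypothetical two-class example; the target function below is a stand-in
    # for the mapping derived from the Brownian optimal control.
    target = lambda q: 0.5 * q
    z, v = tracking_policy_step(q=[8.0, 3.0],
                                target_fn=target,
                                service_rates=np.array([2.0, 1.0]),
                                horizon=2.0,
                                safety_stock=np.array([1.0, 1.0]))
    print("target z(q):", z, "allocation v(q):", v)
```

In this sketch the allocation is simply rescaled when it exceeds the server's capacity; the paper's policies additionally rely on the small safety stocks noted above so that servers are not starved of work as the heavy traffic limit is approached.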
Keywords: continuous-review tracking policies; multiclass networks; fluid models; Brownian models; heavy-traffic asymptotic optimality
|