Similar Literature
20 similar results found.
1.
For optimal control problems satisfying convexity conditions in both the state and the velocity, the optimal value is studied as a function of the time horizon and other parameters. Conditions are identified under which this optimal value function is locally Lipschitz continuous and semidifferentiable, or even differentiable. The Hamilton–Jacobi theory for such control problems provides the framework in which the results are obtained.

2.
We study risk-sensitive control of continuous-time Markov chains taking values in a discrete state space, considering both finite and infinite horizon problems. In the finite horizon problem we characterize the value function via a Hamilton–Jacobi–Bellman equation and obtain an optimal Markov control; we do the same for the infinite horizon discounted cost case. In the infinite horizon average cost case we establish the existence of an optimal stationary control under a certain Lyapunov condition. We also develop a policy iteration algorithm for finding an optimal control.
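As an illustration of the kind of policy iteration the abstract refers to, here is a minimal sketch of plain policy iteration for a finite-state, finite-action discounted-cost MDP (such as one obtained by uniformizing a continuous-time chain). It is not the authors' risk-sensitive algorithm, and the transition and cost data below are hypothetical.

import numpy as np

def policy_iteration(P, c, beta, max_iter=1000):
    """P[a] is an SxS transition matrix, c[a] an S-vector of stage costs, beta in (0,1)."""
    n_actions, S, _ = P.shape
    policy = np.zeros(S, dtype=int)
    for _ in range(max_iter):
        # Policy evaluation: solve (I - beta * P_pi) V = c_pi for the current policy.
        P_pi = P[policy, np.arange(S), :]
        c_pi = c[policy, np.arange(S)]
        V = np.linalg.solve(np.eye(S) - beta * P_pi, c_pi)
        # Policy improvement: greedy one-step lookahead.
        Q = c + beta * np.einsum('aij,j->ai', P, V)
        new_policy = Q.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return policy, V

# Tiny hypothetical example: 3 states, 2 actions.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(2, 3))      # P[a, i, :] sums to 1
c = rng.uniform(0.0, 1.0, size=(2, 3))          # c[a, i]
pi_star, V_star = policy_iteration(P, c, beta=0.9)
print(pi_star, V_star)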

3.
This paper is concerned with optimal control problems for an age-structured population model controlled by fecundity and culling rates. First-order necessary optimality conditions are deduced for the following four problems by means of a Dubovitskii–Milyutin functional-analytic approach: a problem with fixed horizon and terminal constraint set, its free-horizon counterpart, a differential game, and a minimax problem with a non-differentiable objective functional.

4.
We study the existence of optimal solutions for a class of infinite horizon nonconvex autonomous discrete-time optimal control problems. This class contains optimal control problems without discounting, arising in economic dynamics, that describe a model with a nonconcave utility function.

5.
In this paper, a robust receding horizon control for multirate sampled-data nonlinear systems with bounded disturbances is presented. The proposed receding horizon control is based on the solution of Bolza-type optimal control problems for the approximate discrete-time model of the nominal system, and a low measurement rate is assumed. It is shown that the multistep receding horizon controller that stabilizes the nominal approximate discrete-time model also practically input-to-state stabilizes the exact discrete-time system with disturbances.
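For readers unfamiliar with receding horizon control, the following is a generic sketch of the basic loop: at each step a finite-horizon problem is solved for a discrete-time model and only the first control is applied. It does not implement the paper's multirate, disturbance-robust scheme; the model, costs, horizon, and disturbance level are hypothetical.

import numpy as np
from scipy.optimize import minimize

def f(x, u):
    # Hypothetical sampled-data model (Euler-discretized pendulum-like system).
    return np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * (-np.sin(x[0]) + u)])

def horizon_cost(u_seq, x0, N):
    x, J = x0, 0.0
    for k in range(N):
        J += x @ x + 0.1 * u_seq[k] ** 2   # stage cost
        x = f(x, u_seq[k])
    return J + 10.0 * (x @ x)              # terminal penalty

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
N = 10
for t in range(50):                        # closed-loop simulation
    res = minimize(horizon_cost, np.zeros(N), args=(x, N), method='L-BFGS-B')
    u0 = res.x[0]                          # apply only the first control, then re-solve
    x = f(x, u0) + rng.normal(0.0, 0.01, size=2)   # plant with a small disturbance
print(x)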

6.
We study optimal control of Markov processes with age-dependent transition rates. The control policy is chosen continuously over time based on the state of the process and its age. We study infinite horizon discounted cost and infinite horizon average cost problems. Our approach is via the construction of an equivalent semi-Markov decision process. We characterise the value function and optimal controls for both the discounted and average cost cases.

7.
Time-discrete systems with a finite set of states are considered, and discrete optimal control problems with infinite time horizon are formulated for them. We introduce a graph-theoretic structure to model the transitions of the dynamical system and present algorithms for finding the optimal stationary control parameters; furthermore, we determine the optimal mean cost cycles. This approach can be used as a decision support strategy for this class of problems; in particular, so-called multilayered decision problems arising in environmental emission trading procedures can be modelled in this way.
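The abstract mentions determining optimal mean cost cycles. A standard way to compute the minimum cycle mean of a weighted digraph is Karp's algorithm; the sketch below illustrates it on a hypothetical transition graph and is not taken from the paper.

import math

def min_mean_cycle(n, edges):
    """edges: list of (u, v, weight); vertices 0..n-1. Returns the minimum cycle mean."""
    INF = math.inf
    # d[k][v] = minimum weight of a walk with exactly k edges ending at v
    d = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        d[0][v] = 0.0                      # walks may start at any vertex
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = INF
    for v in range(n):
        if d[n][v] == INF:
            continue
        # Karp's characterization: min over v of max over k of (d_n(v) - d_k(v)) / (n - k)
        worst = max((d[n][v] - d[k][v]) / (n - k) for k in range(n) if d[k][v] < INF)
        best = min(best, worst)
    return best

# Hypothetical transition graph: edge weights are per-stage costs.
print(min_mean_cycle(3, [(0, 1, 2.0), (1, 2, 1.0), (2, 0, 3.0), (1, 0, 4.0)]))  # -> 2.0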

8.
Optimization, 2012, 61(1): 115–130
In this article, we establish the existence of optimal solutions for a large class of nonconvex infinite horizon discrete-time optimal control problems. This class contains optimal control problems arising in economic dynamics that describe a model with nonconcave utility functions representing the preferences of the planner.

9.
Finite and infinite planning horizon Markov decision problems are formulated for a class of jump processes with general state and action spaces, the controls being measurable functions on the time axis taking values in an appropriate metrizable vector space. For the finite horizon problem, the maximum expected reward exists, is the unique solution of a certain differential equation, and is a strongly continuous function in the space of upper semi-continuous functions. A necessary and sufficient condition is provided for an admissible control to be optimal, and a sufficient condition is provided for the existence of a measurable optimal policy. For the infinite horizon problem, the maximum expected total reward is the fixed point of a certain operator on the space of upper semi-continuous functions. A stationary policy is optimal over all measurable policies in the transient and discounted cases as well as, under certain added conditions, in the positive and negative cases.

10.
In this paper, problems of stability and optimal control for a class of stochastic singular systems are studied. First, under some appropriate assumptions, new results on mean-square admissibility are developed and a corresponding LMI sufficient condition is given. Second, finite-horizon and infinite-horizon linear quadratic (LQ) control problems for the stochastic singular system are investigated, in which the coefficients of the control input and of the quadratic criterion are allowed to be random. Results involving a new stochastic generalized Riccati equation are discussed as well. Finally, the proposed LQ control model for stochastic singular systems provides an appropriate and effective framework for studying the portfolio selection problem in light of recent developments on general stochastic LQ problems.
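As background for the stochastic generalized Riccati equation mentioned above, the following sketch shows the standard finite-horizon, discrete-time, deterministic LQ Riccati recursion that such equations generalize; it is not the paper's stochastic singular formulation, and the system matrices are hypothetical.

import numpy as np

def lq_riccati(A, B, Q, R, QT, N):
    """Backward Riccati recursion; returns the time-0 cost matrix and gains K_0..K_{N-1}."""
    P = QT
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain at this stage
        P = Q + A.T @ P @ (A - B @ K)                        # Riccati backward step
        gains.append(K)
    return P, gains[::-1]

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[0.5]]); QT = 10 * np.eye(2)
P0, K_seq = lq_riccati(A, B, Q, R, QT, N=20)
print(K_seq[0])      # time-0 feedback gain: u_0 = -K_seq[0] @ x_0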

11.
The infinite-dimensional version of the linear quadratic cost control problem was studied by Curtain and Pritchard [2] and by Gibson [5] using Riccati integral equations instead of differential equations. In the present paper the corresponding stochastic case over a finite horizon is considered. The stochastic perturbations are given by Hilbert-space-valued square integrable martingales, and it is shown that the deterministic optimal feedback control is also optimal in the stochastic case. Sufficient conditions are given for the convergence of approximate solutions of optimal control problems.

12.
This paper deals with Markov Decision Processes (MDPs) on Borel spaces with possibly unbounded costs. The criterion to be optimized is the expected total cost with a random horizon of infinite support. It is observed that this performance criterion is equivalent to the expected total discounted cost with an infinite horizon and a time-varying discount factor. The optimal value function and the optimal policy are then characterized through suitable versions of the Dynamic Programming Equation. Moreover, it is proved that the optimal value function of the optimal control problem with a random horizon can be bounded from above by the optimal value function of a discounted optimal control problem with a fixed discount factor, where the discount factor is defined in an adequate way by the parameters introduced for the study of the random-horizon problem. To illustrate the theory, a version of the linear-quadratic model with a random horizon and a logarithmic consumption–investment model are presented.
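A minimal worked special case (not the paper's general setting) shows how a random horizon induces discounting: if the horizon T is independent of the controlled process and geometric, with survival function P(T > t) = β^t for some β in (0, 1), and the interchange of expectation and summation is justified (e.g. for nonnegative costs), then

\mathbb{E}\Bigl[\textstyle\sum_{t=0}^{T-1} c(x_t,a_t)\Bigr]
  = \sum_{t=0}^{\infty} \mathbb{P}(T > t)\,\mathbb{E}\bigl[c(x_t,a_t)\bigr]
  = \mathbb{E}\Bigl[\textstyle\sum_{t=0}^{\infty} \beta^{t}\, c(x_t,a_t)\Bigr],

so a geometric random horizon is equivalent to a fixed discount factor β; this suggests how more general horizon distributions lead to the time-varying discount factor considered in the paper.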

13.
This work develops asymptotically optimal controls for discrete-time singularly perturbed Markov decision processes (MDPs) having weak and strong interactions. The focus is on finite-state-space MDP problems. The state space of the underlying Markov chain can be decomposed into a number of recurrent classes, or a number of recurrent classes together with a group of transient states. Using a hierarchical control approach, continuous-time limit problems that are much simpler to handle than the original ones are derived. Based on the optimal solutions of the limit problems, nearly optimal decisions for the original problems are obtained; the asymptotic optimality of such controls is proved and the rate of convergence is provided. Infinite horizon problems are considered, and both discounted costs and long-run average costs are examined.

14.
In this paper we consider a nonstationary periodic-review dynamic production–inventory model with uncertain production capacity and uncertain demand, where the maximum production capacity varies stochastically. It is known that order-up-to (base-stock, critical number) policies are optimal for both finite horizon and infinite horizon problems. We obtain upper and lower bounds on the optimal order-up-to levels and show that, for the infinite horizon problem, the upper and lower bounds of the optimal order-up-to levels of the finite horizon counterparts converge as the planning horizons grow longer. Furthermore, under mild conditions the differences between the upper and lower bounds converge exponentially to zero.
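To make the order-up-to idea concrete, here is a small sketch that sets a base-stock level for a stationary single-period approximation via the classical newsvendor critical fractile. It does not reproduce the paper's upper and lower bounds for the nonstationary, capacity-constrained model; the demand distribution and cost parameters are hypothetical.

import numpy as np

def base_stock_level(demand_samples, holding_cost, backlog_cost):
    """Order-up-to level = critical-fractile quantile of the demand distribution."""
    fractile = backlog_cost / (backlog_cost + holding_cost)
    return np.quantile(demand_samples, fractile)

rng = np.random.default_rng(1)
demand = rng.gamma(shape=4.0, scale=25.0, size=100_000)   # hypothetical demand samples
S = base_stock_level(demand, holding_cost=1.0, backlog_cost=9.0)
print(f"order-up-to level: {S:.1f}")   # order up to S whenever inventory falls below S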

15.
In this paper, we consider how to construct optimal solutions for undiscounted discrete-time infinite horizon optimization problems. We present conditions under which the limit of the solutions of the finite horizon problems is optimal among all attainable paths for the infinite horizon problem under two modified overtaking criteria, as well as conditions under which it is the unique optimum under the sum-of-utilities criterion. The results are applied to a parametric example of a simple one-sector growth model to examine the impact of discounting on the optimal path.

16.
Necessary conditions are proved for deterministic nonsmooth optimal control problems involving an infinite horizon and terminal conditions at infinity. The necessary conditions include a complete set of transversality conditions.

17.
We present a receding horizon algorithm that converges to the exact solution in polynomial time for a class of optimal impulse control problems with uniformly distributed impulse instants, governed by so-called reverse dwell time conditions. The cost has two separate terms, one depending on time and the other monotonically decreasing in the state norm. The results have both theoretical and practical relevance. From a theoretical perspective we prove certain geometrical properties of the discrete set of feasible solutions. From a practical standpoint, these properties reduce the computational burden and speed up the search for the optimum, making the algorithm suitable for on-line implementation in real-time problems. Our approach consists in approximating the optimal impulse control problem by a binary linear programming problem with a totally unimodular constraint matrix; hence, solving the binary linear programming problem is equivalent to solving its linear relaxation. Given the feasible solution from the linear relaxation, we then find the optimal solution via receding horizon and local search. A numerical illustration on a queueing system is provided.
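The key step in the abstract, solving the binary program through its linear relaxation, hinges on total unimodularity. The sketch below builds a toy impulse-scheduling LP whose constraint matrix has the consecutive-ones (interval) property, hence is totally unimodular, so the relaxation already returns a binary solution. The grid size, dwell window, and costs are hypothetical, and the formulation is not the paper's.

import numpy as np
from scipy.optimize import linprog

n, w, m = 12, 3, 3                      # grid points, dwell window, minimum impulses
rng = np.random.default_rng(2)
c = rng.uniform(1.0, 5.0, size=n)       # cost of firing an impulse at each instant

# "At most one impulse per window of w consecutive instants" (consecutive ones per row),
rows = [[1.0 if i <= j < i + w else 0.0 for j in range(n)] for i in range(n - w + 1)]
b = [1.0] * len(rows)
# plus "at least m impulses in total", written as -sum(x) <= -m (still an interval row).
rows.append([-1.0] * n)
b.append(-float(m))

res = linprog(c, A_ub=np.array(rows), b_ub=np.array(b), bounds=[(0, 1)] * n)
print(np.round(res.x, 6))               # integral even though only the relaxation is solved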

18.
We consider infinite horizon fractional variational problems, where the fractional derivative is defined in the sense of Caputo. Necessary optimality conditions for higher-order variational problems and optimal control problems are obtained. Transversality conditions are obtained in the case where the state functions are free at the initial time.

19.
A minimax optimal control problem with infinite horizon is studied. We analyze a relaxation of the controls, which allows us to consider a generalization of the original problem that not only admits an optimal control but also enables us to approximate the infinite-horizon problem with a sequence of finite-horizon problems. We give a set of conditions that are sufficient to solve the infinite-horizon problem directly, without relaxation, as the limit of finite-horizon problems.

20.
Value functions for convex optimal control problems on infinite time intervals are studied in the framework of duality. Hamilton-Jacobi characterizations and the conjugacy of primal and dual value functions are of main interest. Close ties are displayed between the uniqueness of convex solutions to a Hamilton-Jacobi equation, the uniqueness of such solutions to a dual Hamilton-Jacobi equation, and the conjugacy of primal and dual value functions. Simultaneous approximation of primal and dual infinite horizon problems with a pair of dual problems on a finite horizon, for which the value functions are conjugate, leads to sufficient conditions for the conjugacy of the infinite time horizon value functions. Consequently, uniqueness results for the Hamilton-Jacobi equation are established. Little regularity is assumed on the cost functions in the control problems; correspondingly, the Hamiltonians need not display any strict convexity and may have several saddle points.
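For reference, the conjugacy at the centre of such duality frameworks is the standard Legendre–Fenchel conjugacy: in the usual convex-control setting the conjugate of a value function V and the Hamiltonian generated by a Lagrangian L are

V^*(y) = \sup_{x \in \mathbb{R}^n} \bigl\{ \langle x, y \rangle - V(x) \bigr\},
\qquad
H(x, y) = \sup_{v \in \mathbb{R}^n} \bigl\{ \langle v, y \rangle - L(x, v) \bigr\},

and the primal and dual Hamilton-Jacobi equations are stated in terms of such Hamiltonians. This is a general reminder, not the paper's precise definitions of its primal and dual value functions.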


