Found 20 similar documents (search time: 15 ms)
1.
Y. Fujita 《Applied Mathematics and Optimization》2001,43(2):169-186
In this paper we consider the Bellman equation in a one-dimensional ergodic control. Our aim is to show the existence and
the uniqueness of its solution under general assumptions. For this purpose we introduce an auxiliary equation whose solution
gives the invariant measure of the diffusion corresponding to an optimal control. Using this solution, we construct a solution
to the Bellman equation. Our method of using this auxiliary equation has two advantages in the one-dimensional case. First,
we can solve the Bellman equation under general assumptions. Second, this auxiliary equation gives an optimal Markov control
explicitly in many examples.
Keywords: Bellman equation, Auxiliary equation, Ergodic control.
AMS Classification: 49L20, 35G20, 93E20.
Accepted 11 September 2000. Online publication 16 January 2001.
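Not from the paper itself, but as an illustrative sketch (generic notation, all symbols assumed): a one-dimensional ergodic Bellman equation of the kind discussed above typically takes the form

```latex
% Illustrative form only; notation is assumed, not taken from the paper.
\lambda \;=\; \min_{u \in U}\left\{ \tfrac{1}{2}\,\sigma^{2}(x)\, w''(x) \;+\; b(x,u)\, w'(x) \;+\; f(x,u) \right\},
\qquad x \in \mathbb{R},
```

where the unknowns are the optimal long-run average cost λ and the relative value function w; the auxiliary equation mentioned in the abstract determines the invariant density of the optimally controlled diffusion.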
2.
Impulsive control of continuous-time Markov processes with risk-sensitive long-run average cost is considered. The most
general impulsive control problem is studied under the restriction that impulses are in dyadic moments only. In a particular
case of additive cost for impulses, the impulsive control problem is solved without restrictions on the moments of impulses.
Accepted 30 April 2001. Online publication 29 August 2001.
3.
Fujita 《Applied Mathematics and Optimization》2003,47(2):143-149
Abstract. In this paper we give a new proof of the existence result of Bensoussan [1, Theorem II-6.1] for the Bellman equation of ergodic
control with periodic structure. This Bellman equation is a nonlinear PDE, and he constructed its solution by using the solution
of a nonlinear PDE. In contrast, our key idea is to solve two linear PDEs. Hence we propose a linear PDE approach to
this Bellman equation.
5.
Y. Fujita 《Applied Mathematics and Optimization》2000,41(1):1-7
We give a simple proof of the theorem concerning optimality in a one-dimensional ergodic control problem. We characterize
the optimal control within the class of all Markov controls. Our proof is probabilistic and does not require solving the corresponding
Bellman equation, which keeps the argument short.
Accepted 24 March 1998
6.
Using nonlinear programming theory in Banach spaces we derive a version of Pontryagin's maximum principle that can be applied
to distributed parameter systems under control and state constraints. The results are applied to fluid mechanics and combustion
problems.
Accepted 3 December 1996
7.
This paper is concerned with distributed and Dirichlet boundary controls of semilinear parabolic equations, in the presence
of pointwise state constraints. The paper is divided into two parts. In the first part we define solutions of the state equation
as the limit of a sequence of solutions for equations with Robin boundary conditions. We establish Taylor expansions for solutions
of the state equation with respect to perturbations of boundary control (Theorem 5.2). For problems with no state constraints,
we prove three decoupled Pontryagin's principles, one for the distributed control, one for the boundary control, and the last
one for the control in the initial condition (Theorem 2.1). Tools and results of Part 1 are used in the second part to derive
Pontryagin's principles for problems with pointwise state constraints.
Accepted 12 July 2001. Online publication 21 December 2001.
8.
In this paper a linearly perturbed version of the well-known matrix Riccati equations which arise in certain stochastic optimal control problems is studied. Via the
concepts of mean square stabilizability and mean square detectability we improve previous results on both the convergence properties of the linearly perturbed Riccati differential equation and the solutions of the linearly perturbed algebraic Riccati equation. Furthermore, our approach unifies, in some way, the study for this class of Riccati equations with the one for classical
theory, by eliminating a certain inconvenient assumption used in previous works (e.g., [10] and [26]). The results are derived
under relatively weaker assumptions and include, inter alia, the following: (a) An extension of Theorem 4.1 of [26] to handle systems not necessarily observable. (b) The existence of
a strong solution, subject only to the mean square stabilizability assumption. (c) Conditions for the existence and uniqueness of stabilizing
solutions for systems not necessarily detectable. (d) Conditions for the existence and uniqueness of mean square stabilizing
solutions instead of just stabilizing. (e) Relaxing the assumptions for convergence of the solution of the linearly perturbed
Riccati differential equation and deriving new convergence results for systems not necessarily observable.
Accepted 30 July 1996
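As a hedged illustration of the object studied above (generic symbols, not the paper's notation): a linearly perturbed Riccati differential equation can be written as

```latex
% Sketch only; A, B, Q, R and the perturbation operator \Pi are assumptions.
\dot P(t) \;=\; A^{\top} P(t) + P(t) A
\;-\; P(t)\, B R^{-1} B^{\top} P(t) \;+\; Q \;+\; \Pi\bigl(P(t)\bigr),
```

where Π(·) is a linear operator on symmetric matrices capturing the perturbation (for instance, terms induced by multiplicative noise); the associated algebraic equation is obtained by setting the left-hand side to zero.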
9.
This paper is the continuation of the paper "Dirichlet boundary control of semilinear parabolic equations. Part 1: Problems
with no state constraints." It is concerned with an optimal control problem with distributed and Dirichlet boundary controls
for semilinear parabolic equations, in the presence of pointwise state constraints. We first obtain approximate optimality
conditions for problems in which state constraints are penalized on subdomains. Next by using a decomposition theorem for
some additive measures (based on the Stone–Čech compactification), we pass to the limit and recover Pontryagin's principles
for the original problem.
Accepted 21 July 2001. Online publication 21 December 2001.
11.
An Obstacle Control Problem with a Source Term (Cited by 1: 0 self-citations, 1 by others)
Abstract. An optimal control problem for an elliptic variational inequality with a source term is considered. The obstacle is the control,
and the goal is to keep the solution of the variational inequality close to the desired profile while the H^1 norm of the obstacle
is not too large. The addition of the source term strongly affects the needed compactness result for the existence of a minimizer.
12.
The Bellman equation of the risk-sensitive control problem with full observation is considered. It appears as an example
of a quasi-linear parabolic equation in the whole space, and fairly general growth assumptions with respect to the space variable
x are permitted. The stochastic control problem is then solved, making use of the analytic results. The case of large deviation
with small noises is then treated, and the limit corresponds to a differential game.
Accepted 25 March 1996
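As a hedged sketch of the kind of quasi-linear equation described above (generic notation, all symbols assumed): a risk-sensitive Bellman equation can take the form

```latex
% Generic risk-sensitive HJB sketch; a = \sigma\sigma^{\top} and \theta > 0 are assumptions.
\partial_t v \;+\; \inf_{u}\Bigl\{ b(x,u)\cdot\nabla v + f(x,u) \Bigr\}
\;+\; \tfrac{1}{2}\,\mathrm{tr}\bigl(a(x)\, D^{2} v\bigr)
\;+\; \tfrac{\theta}{2}\,\bigl|\sigma^{\top}(x)\,\nabla v\bigr|^{2} \;=\; 0,
```

which is quasi-linear because of the squared-gradient term; in the small-noise limit this term survives as the payoff of an antagonistic player, which is why the limit corresponds to a differential game.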
13.
Vivek S. Borkar and Mrinal K. Ghosh 《Stochastics: An International Journal of Probability and Stochastic Processes》2013,85(4):221-231
The problem of ergodic control of a reflecting diffusion in a compact domain is analysed under the condition of partial degeneracy, i.e. when its transition kernel after some time is absolutely continuous with respect to the Lebesgue measure on a part of the state space. Existence of a value function and a "martingale dynamic programming principle" are established by mapping the problem to a discrete time control problem. Implications for existence of optimal controls are derived.
14.
This paper considers the problem of minimizing a quadratic cost subject to purely quadratic equality constraints. This problem
is tackled by first relating it to a standard semidefinite programming problem. The approach taken leads to a dynamical systems
analysis of semidefinite programming and the formulation of a gradient descent flow which can be used to solve semidefinite
programming problems. Though the reformulation of the initial problem as a semidefinite programming problem does not in
general lead directly to a solution of the original problem, the initial problem is solved by using a modified flow incorporating
a penalty function.
Accepted 10 March 1998
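The penalty-flow idea above can be sketched numerically. This is a minimal toy instance, not the paper's SDP formulation: it minimizes a quadratic cost under a single quadratic equality constraint by following the gradient flow of a penalized objective. All names, the instance (Q, A), and the penalty form are assumptions for illustration.

```python
import numpy as np

def penalized_flow(Q, A, x0, mu=10.0, step=1e-3, iters=20000):
    """Discretized gradient flow for  x^T Q x + mu * (x^T A x - 1)^2.

    A crude Euler discretization of  dx/dt = -grad of the penalized cost;
    mu, step and iters are illustrative choices, not tuned values.
    """
    x = x0.astype(float)
    for _ in range(iters):
        # Gradient of the quadratic cost plus the quadratic-penalty term.
        grad = 2.0 * (Q @ x) + 4.0 * mu * (x @ A @ x - 1.0) * (A @ x)
        x -= step * grad
    return x

# Toy instance: minimize 3*x1^2 + x2^2 subject to x1^2 + x2^2 = 1.
Q = np.diag([3.0, 1.0])   # quadratic cost
A = np.eye(2)             # constraint surface: the unit sphere
x = penalized_flow(Q, A, np.array([1.0, 1.0]))
# The flow settles near the cheapest constrained direction, here close to
# (0, 1); with a finite penalty mu the constraint is only approximately met.
```

Note the standard trade-off visible even in this sketch: a finite penalty weight leaves a small constraint violation, which is one reason the paper works with a modified flow rather than a plain penalty.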
15.
F. Da Lio 《Applied Mathematics and Optimization》2000,41(2):171-197
We study a class of infinite horizon control problems for nonlinear systems, which includes the Linear Quadratic (LQ) problem,
using the Dynamic Programming approach. Sufficient conditions for the regularity of the value function are given. The value
function is compared with sub- and supersolutions of the Bellman equation and a uniqueness theorem is proved for this equation
among locally Lipschitz functions bounded below. As an application it is shown that an optimal control for the LQ problem
is nearly optimal for a large class of small unbounded nonlinear and nonquadratic perturbations of the same problem.
Accepted 8 October 1998
16.
Stochastic Linear Quadratic Optimal Control Problems (Cited by 2: 0 self-citations, 2 by others)
This paper is concerned with the stochastic linear quadratic optimal control problem (LQ problem, for short) for which the
coefficients are allowed to be random and the cost functional is allowed to have a negative weight on the square of the control
variable. Some intrinsic relations among the LQ problem, the stochastic maximum principle, and the (linear) forward–backward
stochastic differential equations are established. Some results involving Riccati equation are discussed as well.
Accepted 15 May 2000. Online publication 1 December 2000
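As a hedged illustration of the indefinite feature mentioned above (generic constant coefficients, not the paper's random ones; all symbols assumed): for state dynamics dX = (AX + Bu) dt + (CX + Du) dW, the associated stochastic Riccati equation reads

```latex
% Generic stochastic Riccati sketch; all symbols are assumptions.
\dot P \;+\; P A + A^{\top} P + C^{\top} P C + Q
\;-\; \bigl(P B + C^{\top} P D\bigr)\bigl(R + D^{\top} P D\bigr)^{-1}\bigl(B^{\top} P + D^{\top} P C\bigr) \;=\; 0,
```

and the distinctive point of the indefinite case is that the control weight R may fail to be positive (even negative) provided R + DᵀPD remains positive definite along the solution.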
17.
Ergodic control of singularly perturbed Markov chains with general state and compact action spaces is considered. A new method
is given for characterization of the limit of invariant measures, for perturbed chains, when the perturbation parameter goes
to zero. It is also demonstrated that the limit control principle is satisfied under natural ergodicity assumptions about
controlled Markov chains. These assumptions allow for the presence of transient states, a situation that has not been considered
in the literature before in the context of control of singularly perturbed Markov processes with long-run-average cost functionals.
Accepted 3 December 1996
18.
We study the variational inequality associated with a bounded-velocity control problem when discretionary stopping is allowed.
We establish the existence of a strong solution by using the viscosity solution techniques. The optimal policy is shown to
exist from the optimality conditions in the variational inequality.
20.
In this paper we are concerned with the existence of optimal stationary policies for infinite-horizon risk-sensitive Markov
control processes with denumerable state space, unbounded cost function, and long-run average cost. Introducing a discounted
cost dynamic game, we prove that its value function satisfies an Isaacs equation, and its relationship with the risk-sensitive
control problem is studied. Using the vanishing discount approach, we prove that the risk-sensitive dynamic programming inequality
holds, and derive an optimal stationary policy.
Accepted 1 October 1997