20 similar records found (search time: 15 ms)
1.
Fujita, Applied Mathematics and Optimization 2008, 47(2): 143-149
Abstract. In this paper we give a new proof of the existence result of Bensoussan [1, Theorem II-6.1] for the Bellman equation of ergodic control with periodic structure. This Bellman equation is a nonlinear PDE, and Bensoussan constructed its solution by using the solution of another nonlinear PDE. In contrast, our key idea is to solve two linear PDEs. Hence, we propose a linear PDE approach to this Bellman equation.
2.
Y. Fujita, Applied Mathematics and Optimization 2001, 43(2): 169-186
In this paper we consider the Bellman equation in a one-dimensional ergodic control. Our aim is to show the existence and
the uniqueness of its solution under general assumptions. For this purpose we introduce an auxiliary equation whose solution
gives the invariant measure of the diffusion corresponding to an optimal control. Using this solution, we construct a solution
to the Bellman equation. Our method of using this auxiliary equation has two advantages in the one-dimensional case. First,
we can solve the Bellman equation under general assumptions. Second, this auxiliary equation gives an optimal Markov control
explicitly in many examples.
\keywords{Bellman equation, Auxiliary equation, Ergodic control.}
\amsclass{49L20, 35G20, 93E20.}
Accepted 11 September 2000. Online publication 16 January 2001.
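For orientation, the Bellman equation of one-dimensional ergodic control that entries 2 and 19 refer to typically takes the following generic form (the notation below is a standard one and is not quoted from the papers themselves):

```latex
% Generic one-dimensional ergodic Bellman equation (assumed notation):
% v is the relative value function, \lambda the optimal ergodic cost.
\min_{u \in U}\Big\{ \tfrac{1}{2}\,\sigma^{2}(x)\, v''(x)
  + b(x,u)\, v'(x) + f(x,u) \Big\} = \lambda ,
\qquad x \in \mathbb{R} .
```

A pair (v, λ) solving this equation yields an optimal Markov control by selecting, at each x, a minimizer u*(x) of the bracketed expression.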
3.
We consider the Bellman equation related to the quadratic ergodic control problem for stochastic differential systems with controller constraints. We solve this equation rigorously in the C^2 class, and give the minimal value and the optimal control.
Accepted 9 January 1997
4.
Impulsive control of continuous-time Markov processes with risk-sensitive long-run average cost is considered. The most general impulsive control problem is studied under the restriction that impulses occur only at dyadic moments. In the particular case of additive cost for impulses, the impulsive control problem is solved without restrictions on the moments of impulses.
Accepted 30 April 2001. Online publication 29 August 2001.
5.
The Bellman equation of the risk-sensitive control problem with full observation is considered. It appears as an example
of a quasi-linear parabolic equation in the whole space, and fairly general growth assumptions with respect to the space variable
x are permitted. The stochastic control problem is then solved, making use of the analytic results. The case of large deviation
with small noises is then treated, and the limit corresponds to a differential game.
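As a point of reference, the risk-sensitive Bellman equation with full observation is often written in the following generic form, where θ > 0 is the risk-sensitivity parameter (this notation is assumed here, not quoted from the paper):

```latex
% Generic risk-sensitive Bellman equation (assumed notation),
% with a(x) = \sigma(x)\sigma^{\top}(x):
\partial_t v + \tfrac{1}{2}\,\operatorname{tr}\!\big(a(x)\, D^{2} v\big)
  + H\big(x, D v\big) + \tfrac{\theta}{2}\,\big|\sigma^{\top}(x)\, D v\big|^{2} = 0 .
```

The quadratic gradient term is what makes the equation quasi-linear rather than fully nonlinear, and in the small-noise limit it gives rise to the maximizing player of the limiting differential game.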
Accepted 25 March 1996
6.
F. Da Lio, Applied Mathematics and Optimization 2000, 41(2): 171-197
We study a class of infinite horizon control problems for nonlinear systems, which includes the Linear Quadratic (LQ) problem,
using the Dynamic Programming approach. Sufficient conditions for the regularity of the value function are given. The value
function is compared with sub- and supersolutions of the Bellman equation and a uniqueness theorem is proved for this equation
among locally Lipschitz functions bounded below. As an application it is shown that an optimal control for the LQ problem
is nearly optimal for a large class of small unbounded nonlinear and nonquadratic perturbations of the same problem.
Accepted 8 October 1998
7.
In this paper a linearly perturbed version of the well-known matrix Riccati equations which arise in certain stochastic optimal control problems is studied. Via the
concepts of mean square stabilizability and mean square detectability we improve previous results on both the convergence properties of the linearly perturbed Riccati differential equation and the solutions of the linearly perturbed algebraic Riccati equation. Furthermore, our approach unifies, in some way, the study for this class of Riccati equations with the one for classical
theory, by eliminating a certain inconvenient assumption used in previous works (e.g., [10] and [26]). The results are derived
under relatively weaker assumptions and include, inter alia, the following: (a) An extension of Theorem 4.1 of [26] to handle systems not necessarily observable. (b) The existence of
a strong solution, subject only to the mean square stabilizability assumption. (c) Conditions for the existence and uniqueness of stabilizing
solutions for systems not necessarily detectable. (d) Conditions for the existence and uniqueness of mean square stabilizing
solutions instead of just stabilizing. (e) Relaxing the assumptions for convergence of the solution of the linearly perturbed
Riccati differential equation and deriving new convergence results for systems not necessarily observable.
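The convergence of the Riccati differential equation to a solution of the algebraic Riccati equation, which entry 7 generalizes to the linearly perturbed setting, can be illustrated in the simplest scalar, unperturbed case. The sketch below is illustrative only (function names and parameter values are mine, not from the paper): it Euler-integrates the scalar Riccati ODE and checks that the trajectory approaches the stabilizing root of the algebraic equation.

```python
import math

def care_scalar(a, b, q, r):
    """Stabilizing root of the scalar algebraic Riccati equation
    2*a*p - (b**2 / r) * p**2 + q = 0 (positive root of the quadratic)."""
    disc = (a * r) ** 2 + q * r * b ** 2
    return (a * r + math.sqrt(disc)) / b ** 2

def riccati_ode_limit(a, b, q, r, p0=0.0, dt=1e-3, steps=20_000):
    """Euler-integrate dp/dt = 2*a*p - (b**2 / r) * p**2 + q from p0.
    For q > 0 the trajectory converges to the stabilizing root."""
    p = p0
    for _ in range(steps):
        p += dt * (2 * a * p - (b ** 2 / r) * p ** 2 + q)
    return p

if __name__ == "__main__":
    p_star = care_scalar(1.0, 1.0, 1.0, 1.0)   # exact value: 1 + sqrt(2)
    p_lim = riccati_ode_limit(1.0, 1.0, 1.0, 1.0)
    print(p_star, p_lim)
```

With a = b = q = r = 1 the stabilizing root is 1 + √2 ≈ 2.4142, and the ODE trajectory started at p0 = 0 settles on the same value; the matrix-valued, linearly perturbed setting of the paper replaces this scalar quadratic with a matrix equation plus a perturbation term.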
Accepted 30 July 1996
8.
In this article we consider a polygonal approximation to the unnormalized conditional measure of a filtering problem, which
is the solution of the Zakai stochastic differential equation on measure space. An estimate of the convergence rate based
on a distance which is equivalent to the weak convergence topology is derived. We also study the density of the unnormalized
conditional measure, which is the solution of the Zakai stochastic partial differential equation. An estimate of the convergence
rate is also given in this case.
\amsclass{60F25, 60H10.}
Accepted 23 April 2001. Online publication 14 August 2001.
9.
In this work we study the existence and asymptotic behavior of overtaking optimal trajectories for linear control systems
with convex integrands. We extend the results obtained by Artstein and Leizarowitz for tracking periodic problems with quadratic
integrands [2] and establish the existence and uniqueness of optimal trajectories on an infinite horizon. The asymptotic dynamics
of finite time optimizers is examined.
Accepted 31 January 1996
10.
We consider a stochastic system whose uncontrolled state dynamics are modelled by a general one-dimensional Itô diffusion. The control effort that can be applied to this system takes the form that is associated with the so-called monotone follower problem of singular stochastic control. The control problem that we address aims at maximising a performance criterion that rewards high values of the utility derived from the system’s controlled state but penalises any expenditure of control effort. This problem has been motivated by applications such as the so-called goodwill problem in which the system’s state is used to represent the image that a product has in a market, while control expenditure is associated with raising the product’s image, e.g., through advertising. We obtain the solution to the optimisation problem that we consider in a closed analytic form under rather general assumptions. Also, our analysis establishes a number of results that are concerned with analytic as well as probabilistic expressions for the first derivative of the solution to a second-order linear non-homogeneous ordinary differential equation. These results have independent interest and can potentially be of use to the solution of other one-dimensional stochastic control problems.
11.
Stochastic Linear Quadratic Optimal Control Problems (total citations: 2; self-citations: 0; citations by others: 2)
This paper is concerned with the stochastic linear quadratic optimal control problem (LQ problem, for short) for which the
coefficients are allowed to be random and the cost functional is allowed to have a negative weight on the square of the control
variable. Some intrinsic relations among the LQ problem, the stochastic maximum principle, and the (linear) forward-backward stochastic differential equations are established. Some results involving the Riccati equation are discussed as well.
Accepted 15 May 2000. Online publication 1 December 2000.
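For comparison, in the classical deterministic LQ problem with dynamics ẋ = Ax + Bu and a quadratic running cost, the value function is x^T P(t) x with P solving the Riccati differential equation below; the paper's stochastic setting generalizes this, allowing random coefficients and a possibly negative control weight R (the notation here is the standard one, not quoted from the paper):

```latex
% Classical LQ Riccati differential equation (standard notation),
% with terminal weight G on the state:
-\dot{P}(t) = A^{\top} P(t) + P(t) A
  - P(t) B R^{-1} B^{\top} P(t) + Q ,
\qquad P(T) = G ,
```

with the associated optimal feedback u*(t) = -R^{-1} B^T P(t) x(t). When R may be indefinite, as allowed in the paper, the inverse above need not exist, which is one reason the stochastic maximum principle and forward-backward SDEs enter the analysis.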
12.
This paper is concerned with distributed and Dirichlet boundary controls of semilinear parabolic equations, in the presence
of pointwise state constraints. The paper is divided into two parts. In the first part we define solutions of the state equation
as the limit of a sequence of solutions for equations with Robin boundary conditions. We establish Taylor expansions for solutions
of the state equation with respect to perturbations of boundary control (Theorem 5.2). For problems with no state constraints,
we prove three decoupled Pontryagin's principles, one for the distributed control, one for the boundary control, and the last
one for the control in the initial condition (Theorem 2.1). Tools and results of Part 1 are used in the second part to derive
Pontryagin's principles for problems with pointwise state constraints.
Accepted 12 July 2001. Online publication 21 December 2001.
13.
Applied Mathematics and Optimization 2008, 45(3): 325-345
We consider the optimal control of harvesting the diffusive degenerate elliptic logistic equation. Under certain assumptions,
we prove the existence and uniqueness of an optimal control. Moreover, the optimality system and a characterization of the
optimal control are also derived. The sub-supersolution method, the singular eigenvalue problem and differentiability with
respect to the positive cone are the techniques used to obtain our results.
14.
15.
B. Øksendal, Applied Mathematics and Optimization 1999, 40(3): 355-375
We study an impulse control problem where the cost of interfering in a stochastic system with an impulse of size ζ ∈ R is given by c + λ|ζ|, where c and λ are positive constants. We call λ the proportional cost coefficient and c the intervention cost. We find the value/cost function V_c for this problem for each c > 0 and we show that lim_{c→0+} V_c = W, where W is the value function for the corresponding singular stochastic control problem. Our main result makes precise the sense in which the introduction of an intervention cost c > 0, however small, can have a big effect on the value function: the increase in the value function is in no proportion to the increase in c (from c = 0).
Accepted 23 April 1998
16.
17.
Filtering equations are derived for conditional probability density functions in the case of partially observable diffusion processes by using results and methods from the L^p-theory of SPDEs. The method of derivation is new and does not require any knowledge of filtering theory.
Accepted 31 July 2000. Online publication 13 November 2000.
18.
This paper concerns the filtering of an R^d-valued Markov pure jump process when only the total number of jumps is observed. Strong and weak uniqueness for the solutions of the filtering equations are discussed.
Accepted 12 November 1999
19.
Y. Fujita, Applied Mathematics and Optimization 2000, 41(1): 1-7
We give a simple proof of the theorem concerning optimality in a one-dimensional ergodic control problem. We characterize
the optimal control in the class of all Markov controls. Our proof is probabilistic and does not require solving the corresponding Bellman equation, which simplifies the argument.
Accepted 24 March 1998
20.
Sergey V. Lototsky, Applied Mathematics and Optimization 2008, 47(2): 167-194
Abstract. An approximation to the solution of a stochastic parabolic equation is constructed using the Galerkin approximation followed
by the Wiener chaos decomposition. The result is applied to the nonlinear filtering problem for the time-homogeneous diffusion
model with correlated noise. An algorithm is proposed for computing recursive approximations of the unnormalized filtering
density and filter, and the errors of the approximations are estimated. Unlike most existing algorithms for nonlinear filtering,
the real-time part of the algorithm does not require solving partial differential equations or evaluating integrals. The algorithm
can be used for both continuous and discrete time observations.