A total of 20 similar documents were found (search time: 15 ms).
1.
《Stochastic Processes and their Applications》2020,130(11):6733-6756
We study the ergodic control problem for a class of controlled jump diffusions driven by a compound Poisson process. This extends the results of Arapostathis et al. (2019) to running costs that are not near-monotone. This generality is needed in applications such as optimal scheduling of large-scale parallel server networks. We provide a full characterization of optimality via the Hamilton–Jacobi–Bellman (HJB) equation, for which we additionally exhibit regularity of solutions under mild hypotheses. In addition, we show that optimal stationary Markov controls are a.s. pathwise optimal. Lastly, we show that one can fix a stable control outside a compact set and obtain near-optimal solutions by solving the HJB on a sufficiently large bounded domain. This is useful for constructing asymptotically optimal scheduling policies for multiclass parallel server networks.
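As a point of reference for the HJB characterization mentioned here, the ergodic (average-cost) HJB equation for a controlled Markov process with extended generator $\mathcal{L}^{u}$ and running cost $c$ typically takes the form below; this is a generic sketch, and the notation is not taken from the paper.

\[
\min_{u\in\mathbb{U}}\bigl[\mathcal{L}^{u}V(x) + c(x,u)\bigr] \;=\; \varrho^{*}, \qquad x\in\mathbb{R}^{d},
\]

where $\varrho^{*}$ is the optimal long-run average cost and any measurable minimizing selector defines a candidate optimal stationary Markov control.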
2.
V. S. Borkar 《Journal of Optimization Theory and Applications》1995,86(1):251-261
For the ergodic control problem with degenerate diffusions, the existence of an optimal solution is established for various interesting classes of solutions. This research was supported by Grant No. 26/01/92-G from the Department of Atomic Energy, Government of India, Delhi, India.
3.
On optimal solutions of general continuous-singular stochastic control problem of McKean-Vlasov type
Lina Guenane Mokhtar Hafayed Shahlar Meherrem Syed Abbas 《Mathematical Methods in the Applied Sciences》2020,43(10):6498-6516
In this paper, we establish general necessary optimality conditions for stochastic continuous-singular control of McKean-Vlasov type equations. The coefficients of the state equation depend on the state of the solution process as well as on its probability law and the control variable. The coefficients of the system are nonlinear and depend explicitly on the absolutely continuous component of the control. The control domain under consideration is not assumed to be convex. The proof of our main result is based on first- and second-order derivatives with respect to the measure in the Wasserstein space of probability measures, together with a variational method.
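For orientation, a controlled McKean-Vlasov equation with a continuous-singular control pair is often written in the following generic form (a sketch only; the symbols are not those of the paper):

\[
dX_t = b\bigl(t, X_t, \mathbb{P}_{X_t}, u_t\bigr)\,dt
     + \sigma\bigl(t, X_t, \mathbb{P}_{X_t}, u_t\bigr)\,dW_t
     + G(t)\,d\xi_t,
\]

where $\mathbb{P}_{X_t}$ denotes the law of $X_t$, $u$ is the absolutely continuous control component, and $\xi$ is the singular (bounded-variation) component.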
4.
In a recent paper (Ref. 1), Cheng and Teo discussed some further extensions of a student-related optimal control problem which was originally proposed by Raggett et al. (Ref. 2) and later modified by Parlar (Ref. 3). In this paper, we treat further extensions of the problem. This paper is a modified and improved version of Ref. 4. It is based, in part, on research sponsored by NSF.
5.
M. Kohlmann 《Stochastic Processes and their Applications》1982,13(2):215-226
For a partially observed control problem, we prove the existence of an optimal control. Purely probabilistic means, such as Skorokhod embedding and convergence, are used to derive the result.
6.
Alain Bensoussan Metin Çakanyıldırım Suresh P. Sethi 《Comptes Rendus Mathematique》2005,341(7):419-426
This Note introduces recent developments in the analysis of inventory systems with partial observations. The states of these systems are typically conditional distributions, which evolve in infinite-dimensional spaces over time. Our analysis involves introducing unnormalized probabilities to transform nonlinear state transition equations into linear ones. With the linear equations, the existence of optimal feedback policies is proved for two models where demand and inventory are partially observed. In a third model, where the current inventory is not observed but a past inventory level is fully observed, a sufficient statistic is provided to serve as a state. The last model serves as an example where a partially observed model has a finite-dimensional state. In that model, we also establish the optimality of base-stock policies, hence generalizing the corresponding classical models with full information. To cite this article: A. Bensoussan et al., C. R. Acad. Sci. Paris, Ser. I 341 (2005).
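To illustrate the unnormalized-probability idea in a simple discrete setting (an illustration only, not the inventory models of the Note): for a hidden Markov chain with transition kernel $P(x,y)$ and observation likelihood $g(y,o)$, the normalized filter

\[
\pi_{n+1}(y) \;=\; \frac{g(y,o_{n+1})\sum_{x}\pi_n(x)P(x,y)}{\sum_{y'} g(y',o_{n+1})\sum_{x}\pi_n(x)P(x,y')}
\]

is a nonlinear recursion, whereas the unnormalized measure defined by $\rho_{n+1}(y) = g(y,o_{n+1})\sum_x \rho_n(x)P(x,y)$ evolves linearly, and the filter is recovered by normalization, $\pi_n = \rho_n/\rho_n(\mathsf{X})$.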
7.
Vivek S. Borkar 《Annals of Operations Research》1991,29(1):429-438
A new state variable is introduced for the problem of controlling a Markov chain under partial observations, which, under a suitably altered probability measure, has a simple evolution.
8.
In this paper, we present a new computational approach for solving an internal optimal control problem governed by a linear parabolic partial differential equation. Our approach is to approximate the PDE problem by a nonhomogeneous ordinary differential equation system in higher dimension. The homogeneous part of the ODE system is then solved using semigroup theory. Next, the convergence of this approach is verified by means of a Toeplitz matrix. In the remainder of the paper, the optimal control problem is solved by utilizing the solution of the homogeneous part. Finally, a numerical example is given.
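The reduction of a parabolic PDE to an ODE system and the semigroup solution of its homogeneous part can be sketched as follows; this is a minimal method-of-lines illustration under assumed data (the heat equation on (0, 1) with a hypothetical control shape b(x)), not the paper's specific scheme.

```python
# Minimal method-of-lines sketch: the heat equation y_t = y_xx + u(t) b(x)
# with Dirichlet boundary conditions is reduced to the linear ODE system
#   Y'(t) = A Y(t) + u(t) B,
# whose homogeneous part Y'(t) = A Y(t) is solved by the semigroup e^{tA}.
import numpy as np
from scipy.linalg import expm

n = 50                                   # number of interior grid points
h = 1.0 / (n + 1)                        # grid spacing on (0, 1)
x = np.linspace(h, 1.0 - h, n)

# Standard three-point finite-difference approximation of d^2/dx^2
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

B = np.exp(-50.0 * (x - 0.5) ** 2)       # hypothetical control shape b(x)
y0 = np.sin(np.pi * x)                   # initial condition y(x, 0)

t = 0.1
y_homogeneous = expm(t * A) @ y0         # semigroup (matrix exponential) solution
```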
9.
Emmanuel Fernández-Gaucherand Aristotle Arapostathis Steven I. Marcus 《Annals of Operations Research》1991,29(1):439-469
We consider partially observable Markov decision processes with finite or countably infinite (core) state and observation spaces and a finite action set. Following a standard approach, an equivalent completely observed problem is formulated, with the same finite action set but with an uncountable state space, namely the space of probability distributions on the original core state space. By developing a suitable theoretical framework, it is shown that some characteristics induced in the original problem by the countability of the spaces involved are reflected onto the equivalent problem. Sufficient conditions are then derived for solutions to the average cost optimality equation to exist. We illustrate these results in the context of machine replacement problems. Structural properties for average cost optimal policies are obtained for a two-state replacement problem; these are similar to results available for discount optimal policies. The set of assumptions used compares favorably to others currently available. This research was supported in part by the Advanced Technology Program of the State of Texas, in part by the Air Force Office of Scientific Research under Grant AFOSR-86-0029, in part by the National Science Foundation under Grant ECS-8617860, and in part by the Air Force Office of Scientific Research (AFSC) under Contract F49620-89-C-0044.
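For reference, the average cost optimality equation on the belief (probability-distribution) state space is commonly written in the generic form below; the notation is illustrative and not taken from the paper.

\[
\rho^{*} + h(\pi) \;=\; \min_{a\in A}\Bigl[ c(\pi,a) + \int h(\pi')\,Q(d\pi'\mid \pi,a) \Bigr],
\]

where $\pi$ is the belief state, $Q$ is the induced transition kernel on beliefs, $\rho^{*}$ is the optimal average cost, and $h$ is a relative value (bias) function.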
10.
《Optimization》2012,61(5):707-715
In this article, we investigate the optimal control problem governed by a parabolic inclusion. We describe the Galerkin approximation and demonstrate the existence of strong condensation points of the set of solutions of the approximate optimization problems. Each of these points is a solution of the initial optimization problem.
11.
A. V. Kamyad J. E. Rubio D. A. Wilson 《Journal of Optimization Theory and Applications》1992,75(1):101-132
The present paper is concerned with an optimal control problem for the n-dimensional diffusion equation with a sequence of Radon measures as generalized control variables. Suppose that a desired final state is not reachable. We enlarge the set of admissible controls and provide a solution to the corresponding moment problem for the diffusion equation, so that the previously chosen desired final state is actually reachable by the action of a generalized control. We then minimize an objective function in this extended space, which can be characterized as consisting of infinite sequences of Radon measures satisfying certain constraints. We then approximate the action of the optimal sequence by that of a control, and finally develop numerical methods to estimate these nearly optimal controls. Several numerical examples are presented to illustrate these ideas.
12.
We consider a broad class of singular stochastic control problems of spectrally negative jump diffusions in the presence of potentially nonlinear state-dependent exercise payoffs. We analyse these problems by relying on associated variational inequalities and state a set of sufficient conditions under which the value of the considered problems can be explicitly derived in terms of the increasing minimal r-harmonic map. We also present a set of inequalities bounding the value of the optimal policy and prove that increased policy flexibility increases both the value of the optimal strategy and the rate at which this value grows.
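In a generic one-dimensional singular control problem of this type (a sketch with illustrative notation, not the paper's formulation), the value function $V$ is associated with a variational inequality of the form

\[
\max\Bigl\{ (\mathcal{A} - r)V(x),\; g(x) - V'(x) \Bigr\} \;=\; 0,
\]

where $\mathcal{A}$ is the generator of the uncontrolled jump diffusion, $r$ is the discount rate, and $g$ is the marginal exercise payoff; $\mathcal{A}V = rV$ holds in the continuation region and $V' = g$ on the action region.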
13.
T. Tadumadze 《Nonlinear Analysis: Theory, Methods & Applications》2010,73(1):211-220
Existence theorems for the optimal element are proved for a nonlinear control problem with constant delay in the phase coordinates and with a general functional. Here, the element refers to the collection consisting of the delay parameter, the initial function, the initial moment and vector, the control, and the final moment.
14.
15.
We consider the infinite horizon risk-sensitive problem for nondegenerate diffusions with a compact action space, controlled through the drift. We impose only a structural assumption on the running cost function, namely near-monotonicity, and show that there always exists a solution to the risk-sensitive Hamilton–Jacobi–Bellman (HJB) equation, and that any minimizer in the Hamiltonian is optimal in the class of stationary Markov controls. Under the additional hypothesis that the coefficients of the diffusion are bounded and satisfy a condition that limits (even though it still allows) transient behavior, we show that any minimizer in the Hamiltonian is optimal in the class of all admissible controls. In addition, we present a sufficient condition under which the solution of the HJB equation is unique (up to a multiplicative constant), and establish the usual verification result. We also present some new results concerning the multiplicative Poisson equation for elliptic operators in $\mathbb{R}^d$.
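For orientation, the risk-sensitive HJB equation for controlled nondegenerate diffusions is often stated in the multiplicative (eigenvalue) form below; this is a generic sketch, with notation not drawn from the abstract.

\[
\min_{u\in\mathbb{U}}\bigl[\mathcal{L}^{u}\psi(x) + c(x,u)\,\psi(x)\bigr] \;=\; \lambda^{*}\,\psi(x),
\qquad \psi>0,
\]

where $\lambda^{*}$ is the optimal risk-sensitive average cost and $\mathcal{L}^{u}$ is the controlled generator; taking $V=\log\psi$ gives the equivalent additive form with a quadratic gradient term.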
16.
J. Casti 《Journal of Optimization Theory and Applications》1980,32(4):491-497
The general inverse problem of optimal control is considered from a dynamic programming point of view. Necessary and sufficient conditions are developed which two integral criteria must satisfy if they are to yield the same optimal feedback law, the dynamics being fixed. Specializing to the linear-quadratic case, it is shown how the general results given here recapture previously obtained results for quadratic criteria with linear dynamics. Dedicated to R. Bellman.
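As a concrete instance of the linear-quadratic specialization (standard LQR facts, stated here only for illustration): for dynamics $\dot x = Ax + Bu$ and cost $\int_0^\infty (x^{\top}Qx + u^{\top}Ru)\,dt$ with $R\succ 0$, the optimal feedback is $u = -Kx$ with

\[
K = R^{-1}B^{\top}P, \qquad A^{\top}P + PA - PBR^{-1}B^{\top}P + Q = 0,
\]

and the inverse problem asks which pairs $(Q,R)$ yield the same gain $K$ for the fixed $(A,B)$.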
17.
Pointwise control of the viscous Burgers equation in one spatial dimension is studied, with the objective of minimizing the distance between the final state and a target profile together with the energy of the control. An efficient computational method is proposed for solving such problems, based on special orthonormal functions that satisfy the associated boundary conditions. Employing these orthonormal functions as the basis of a modal expansion, the solution space is limited to the smallest subspace that is sufficient to describe the original problem. Consequently, the Burgers equation is reduced to a minimal set of nonlinear ordinary differential equations. Thus, by the modal expansion method, the optimal control of a distributed parameter system described by the Burgers equation is converted into the optimal control of finite-dimensional lumped parameter dynamical systems. The time-variant control is approximated by a finite Fourier series whose unknown coefficients and frequencies giving an optimal solution are sought, thereby converting the optimal control problem into a mathematical programming problem. The resulting problem is solved by control parameterization using the Runge–Kutta method. The efficiency of the proposed method is examined using a numerical example for various target functions.
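A generic statement of this type of problem (illustrative notation, not the paper's) is

\[
u_t + u\,u_x = \nu\,u_{xx} + \sum_{j=1}^{m} f_j(t)\,\delta(x - x_j),
\qquad
J = \tfrac12\,\lVert u(\cdot,T) - z \rVert_{L^2}^2 + \tfrac{\alpha}{2}\sum_{j=1}^{m}\int_0^T f_j(t)^2\,dt,
\]

where the $f_j$ are pointwise controls acting at locations $x_j$, $z$ is the target profile, and $\alpha>0$ weights the control energy.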
18.
S. Aiyappan A. K. Nandakumaran Abu Sufian 《Mathematical Methods in the Applied Sciences》2019,42(18):6407-6434
We consider an optimal control problem posed on a domain with a highly oscillating smooth boundary, where the controls are applied on the oscillating part of the boundary. There are many results on domains with oscillating boundaries where the oscillations are pillar-type (non-smooth), whereas the literature on smooth oscillating boundaries is sparse. In this article, we use appropriate scalings on the controls acting on the oscillating boundary, leading to different limit control problems, namely a boundary optimal control problem and an interior optimal control problem. In the last part of the article, we visualize the domain as a branched structure and introduce unfolding operators to obtain contributions from each level of every branch.
19.
QingXin Meng 《Science in China Series A: Mathematics》2009,52(7):1579-1588
The paper is concerned with a stochastic optimal control problem in which the controlled system is described by a fully coupled nonlinear forward-backward stochastic differential equation driven by a Brownian motion. It is required that all admissible control processes are adapted to a given subfiltration of the filtration generated by the underlying Brownian motion. For this type of partial information control, one sufficient condition (a verification theorem) and one necessary condition of optimality are proved. The control domain needs to be convex, and the forward diffusion coefficient of the system can contain the control variable.
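A generic fully coupled controlled FBSDE of the kind described here can be written as follows (a sketch with illustrative notation; it is not taken from the paper):

\[
\begin{aligned}
dX_t &= b(t, X_t, Y_t, Z_t, u_t)\,dt + \sigma(t, X_t, Y_t, Z_t, u_t)\,dW_t, \qquad X_0 = x_0,\\
dY_t &= -f(t, X_t, Y_t, Z_t, u_t)\,dt + Z_t\,dW_t, \qquad Y_T = g(X_T),
\end{aligned}
\]

with admissible controls $u$ required to be adapted to a subfiltration $\mathcal{G}_t \subseteq \mathcal{F}_t^{W}$ of the Brownian filtration.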
This work was partially supported by the Basic Research Program of China (Grant No. 2007CB814904), the National Natural Science Foundation of China (Grant No. 10325101), and the Natural Science Foundation of Zhejiang Province (Grant Nos. Y605478, Y606667).
20.
Łukasz Stettner 《Applied Mathematics and Optimization》1993,27(2):161-177
We control a discrete-time uniformly ergodic system which depends on an unknown parameter belonging to a compact set A. Our purpose is to minimize the long-run average cost functional. We estimate the unknown parameter using a biased maximum likelihood estimator and apply the control that is almost optimal for the estimated value. In this way we construct strategies such that the value of the cost functional can be arbitrarily close to the optimal value obtained for the true parameter.
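In generic notation (not the paper's), the long-run average cost of a strategy $\pi$ in such an adaptive problem is

\[
J(\pi) \;=\; \limsup_{n\to\infty}\frac{1}{n}\,E^{\pi}\sum_{k=0}^{n-1} c(x_k, a_k),
\]

and a certainty-equivalence scheme of the kind described applies, at each time, the control that would be optimal if the current parameter estimate were the true value; the biased estimator is designed so that the resulting average cost approaches the optimal value for the true parameter.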