Similar Literature
1.
We prove the existence of an optimal control for systems of stochastic differential equations without solving the Bellman dynamic programming equation. Instead, we use direct methods for solving extremal problems.

2.
We give existence theorems for stochastic control problems with a lower semicontinuous cost functional and governed by Ito equations. We prove that two formulations of the fundamental problem are equivalent, one involving nonanticipative controls and the other involving (measurable) feedback controls. We then use the concept of convergence in distribution to prove existence for the first problem, and hence for the second as well. While our work has certain similarities with a paper of Kushner, our techniques are different and lead to more general results.

3.
Let $x_t^u(\omega)$ be a stochastic control system on the probability space $(\Omega, \mathcal{F}, P)$ into $\mathbb{R}^n$. We say that the point $x \in \mathbb{R}^n$ is $(\epsilon, \delta)$-attainable at time $t$ if there exists an admissible control $u$ such that $P_{x_0}\{x_t^u(\omega) \in S_\epsilon(x)\} \geq \delta$, where $x_0(\omega) = x_0$, $\epsilon \geq 0$, $1 \geq \delta \geq 0$, and $S_\epsilon(x)$ is the closed Euclidean $\epsilon$-ball in $\mathbb{R}^n$ centered at $x$. We define the attainable set $A_{\epsilon\delta}(t)$ to be the set of all points $x \in \mathbb{R}^n$ which are $(\epsilon, \delta)$-attainable at time $t$. For a large class of stochastic control systems, it is shown that $A_{\epsilon\delta}(t)$ is compact for each $t$ and continuous as a function of $t$ in an appropriate metric. From this, the existence of stochastic time-optimal controls is established for a large class of nonlinear stochastic differential equations. This research was supported by the National Research Council of Canada, Grant No. A-9072.

4.
Let $x_t^u(\omega)$ be the solution process of the $n$-dimensional stochastic differential equation $dx_t^u = [A(t)x_t^u + B(t)u(t)]\,dt + C(t)\,dW_t$, where $A(t)$, $B(t)$, $C(t)$ are matrix functions, $W_t$ is an $n$-dimensional Brownian motion, and $u$ is an admissible control function. For fixed $\epsilon \geq 0$ and $1 \geq \delta \geq 0$, we say that $x \in \mathbb{R}^n$ is $(\epsilon, \delta)$-attainable if there exists an admissible control $u$ such that $P\{x_t^u \in S_\epsilon(x)\} \geq \delta$, where $S_\epsilon(x)$ is the closed $\epsilon$-ball in $\mathbb{R}^n$ centered at $x$. The set of all $(\epsilon, \delta)$-attainable points is denoted by $A(t)$. In this paper, we derive various properties of $A(t)$ in terms of $K(t)$, the attainable set of the deterministic control system $\dot{x} = A(t)x + B(t)u$. In addition, a stochastic bang-bang principle is established and three examples are presented.
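A probability of the form $P\{x_t^u \in S_\epsilon(x)\} \geq \delta$ is straightforward to probe by simulation. The sketch below estimates it by Euler-Maruyama Monte Carlo; the matrices, the open-loop control, the target point, and all tolerances are illustrative assumptions, not values from the paper.

```python
import numpy as np

def attainable_prob(A, B, C, u, x0, target, eps, T=1.0, steps=200, paths=5000, seed=0):
    """Estimate P{x_T in S_eps(target)} for dx = [A x + B u(t)] dt + C dW."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = np.tile(np.asarray(x0, dtype=float), (paths, 1))     # every path starts at x0
    for k in range(steps):
        dW = rng.normal(scale=np.sqrt(dt), size=(paths, C.shape[1]))
        x = x + (x @ A.T + u(k * dt) @ B.T) * dt + dW @ C.T  # Euler-Maruyama step
    return float(np.mean(np.linalg.norm(x - target, axis=1) <= eps))

# Illustrative data (assumptions): a damped oscillator, scalar control,
# small additive noise, and one fixed open-loop control.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = 0.1 * np.eye(2)
p = attainable_prob(A, B, C, u=lambda t: np.array([1.0]),
                    x0=[0.0, 0.0], target=np.array([0.4, 0.3]), eps=0.25)
print(f"estimated P(x_T in S_eps(x)) = {p:.3f}")  # (eps, delta)-attainable if p >= delta
```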

5.
Optimization, 2012, 61(4): 343-354
In this paper we treat discrete-time stochastic control systems. Using corresponding results for systems that are linear in the state variables, we derive, under convexity assumptions, optimality conditions in the form of maximum principles.

6.
A new control mode is proposed for networked control systems whose network-induced delay is longer than a sampling period. Under this control mode, a mathematical model of the networked control system is obtained. The Markov property of the network-induced delay is discussed. Based on Markov chain theory, the optimal controller is designed. An example is given for illustration.
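As a rough illustration of the setting (the plant, the delay transition matrix, and the gains below are assumptions, not the paper's design), one can simulate a delay that jumps between one and two sampling periods as a Markov chain and apply a delay-dependent state-feedback law:

```python
import numpy as np

rng = rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])     # discretized double integrator (illustrative)
B = np.array([[0.005], [0.1]])
P = np.array([[0.7, 0.3],                  # P[i, j] = Prob(next delay = j+1 | delay = i+1)
              [0.4, 0.6]])
K = {1: np.array([[-8.0, -4.0]]),          # feedback gain used when the delay is 1 period
     2: np.array([[-5.0, -2.5]])}          # a more cautious gain for a 2-period delay

x_hist = [np.array([1.0, 0.0])] * 3        # enough state history for the longest delay
delay = 1
for k in range(60):
    delay = rng.choice([1, 2], p=P[delay - 1])   # Markov jump of the network delay
    u = K[delay] @ x_hist[-1 - delay]            # control computed from the delayed state
    x_hist.append(A @ x_hist[-1] + (B @ u).ravel())
print("final state norm:", np.linalg.norm(x_hist[-1]))
```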

7.
This paper presents a method for discrete-time control and estimation of flexible structures in the presence of actuator and sensor noise. The approach consists of complete decoupling of the modal equations and estimator dynamics based on the independent modal-space control technique and modal spatial filtering of the system output. The solution for the Kalman filter gains reduces to that of independent second-order modal estimators, thus permitting real-time digital control of distributed-parameter systems in a noisy environment. The method can be used to control and estimate any number of modes without computational constraints and is theoretically free of observation spillover. Two examples, the first using nonlinear, quantized control and the second using linear state feedback control, are presented. This work was supported by the National Science Foundation, Grant No. PFR-80-20623.
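Because the modal equations and estimator dynamics decouple, each Kalman gain comes from a $2 \times 2$ Riccati equation solved one mode at a time. A minimal sketch, assuming illustrative modal frequencies, damping ratios, and noise intensities (none taken from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def modal_kalman_gain(omega, zeta, q, r):
    """Steady-state Kalman gain for one mode with state [displacement, velocity]."""
    A = np.array([[0.0, 1.0], [-omega**2, -2.0 * zeta * omega]])
    H = np.array([[1.0, 0.0]])                 # modal displacement is measured
    Q = q * np.eye(2)                          # process (actuator) noise intensity
    R = np.array([[r]])                        # sensor noise intensity
    P = solve_continuous_are(A.T, H.T, Q, R)   # filter Riccati equation, dual of control
    return P @ H.T @ np.linalg.inv(R)          # 2x1 estimator gain for this mode alone

for mode, omega in enumerate([1.0, 2.7, 5.1], start=1):
    L = modal_kalman_gain(omega, zeta=0.02, q=0.1, r=0.01)
    print(f"mode {mode}: gain = {L.ravel()}")  # modes decouple: one small filter each
```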

8.
A general model is available for the analysis of control systems involving stochastic time-varying parameters in the system to be controlled, by use of the "iterative" method of the authors or its more recent adaptations for stochastic operator equations. It is shown that the statistical separability which is achieved as a result of the method for stochastic operator equations is unaffected by the matrix multiplications in state-space equations; the method, therefore, is applicable to the control problem. Application is made to the state-space equation $\dot{x} = Ax + Bu + C$, where $A$, $B$, $C$ are stochastic matrices corresponding to stochastic operators, i.e., involving randomly time-varying elements, e.g., $a_{ij}(t, \omega) \in A$, $t \in T$, $\omega \in (\Omega, \mathcal{F}, \mu)$, a probability space. It is emphasized that the processes are arbitrary stochastic processes with known statistics. No assumption is made of Wiener or Markov behavior or of smallness of fluctuations, and no closure approximations are necessary. The method differs in interesting aspects from the Volterra series expansions used by Wiener and others and has advantages over the other methods. Because of recent progress in the solution of the nonlinear case, it may be possible to generalize the above results to the nonlinear case as well, but the linear case is sufficient to show the connections and the essential point of separability of ensemble averages.

9.
10.
In ergodic stochastic problems one studies the limit of the value function $V_\lambda$ of the associated discounted cost functional with infinite time horizon as the discount factor $\lambda$ tends to zero. These problems have been well studied in the literature, and the assumptions used there guarantee that $\lambda V_\lambda$ converges uniformly to a constant as $\lambda \to 0$. The objective of this work is to study these problems under the nonexpansivity assumption, under which the limit function is not necessarily constant. Our discussion goes beyond the case of the stochastic control problem with infinite time horizon and also covers $V_\lambda$ given by a second-order Hamilton–Jacobi–Bellman equation which is not necessarily associated with a stochastic control problem. On the other hand, the stochastic control case considerably generalizes earlier works by considering cost functionals defined through a backward stochastic differential equation with infinite time horizon, and we give an explicit representation formula for the limit of $\lambda V_\lambda$ as $\lambda \to 0$.
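The vanishing-discount limit is easy to observe numerically. Below is a minimal discrete sketch on an illustrative two-state MDP (an assumption, not from the paper), where $(1-\gamma)V_\gamma$ with $\gamma = 1/(1+\lambda)$ plays the role of $\lambda V_\lambda$ and approaches a constant as the discount vanishes:

```python
import numpy as np

P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),    # transition matrix under action 0
     1: np.array([[0.5, 0.5], [0.5, 0.5]])}    # transition matrix under action 1
c = {0: np.array([1.0, 0.0]),                  # per-state costs under each action
     1: np.array([0.6, 0.4])}

def discounted_value(gamma, iters=20000):
    V = np.zeros(2)
    for _ in range(iters):                     # value iteration for the min-cost problem
        V = np.minimum.reduce([c[a] + gamma * P[a] @ V for a in (0, 1)])
    return V

for lam in (0.1, 0.01, 0.001):
    gamma = 1.0 / (1.0 + lam)
    print(lam, (1 - gamma) * discounted_value(gamma))  # components equalize: a constant limit
```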

11.
The synthesis of optimal control for nonlinear stochastic systems described by the Itô equations is reduced to the solution of recurrence relations derived from the stochastic Bellman equation.

12.
In this paper, a theory of optimal control is developed for stochastic systems whose performance is measured by the exponential of an integral form. Such a formulation of the cost function is shown to be not only general and useful but also analytically tractable. Starting with very general classes of stochastic systems, optimality conditions are obtained which exploit the multiplicative decomposability of the exponential-of-integral form. Specializing to partially observed systems of stochastic differential equations with Brownian motion disturbances, optimality conditions are obtained which parallel those for systems with integral costs. Also treated are the special cases of linear systems with exponential-of-quadratic costs, for which explicit optimal controls are obtainable. In addition, several general results of independent interest concerning the optimality of stochastic systems are obtained.
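The multiplicative decomposability that these optimality conditions exploit is the elementary identity (written here for a generic running cost $c$, a notation assumed for illustration)

```latex
\[
  \exp\!\Big(\int_0^T c(x_s,u_s)\,ds\Big)
  = \exp\!\Big(\int_0^t c(x_s,u_s)\,ds\Big)\,
    \exp\!\Big(\int_t^T c(x_s,u_s)\,ds\Big),
  \qquad 0 \le t \le T,
\]
```

so conditioning at an intermediate time factors the remaining cost out multiplicatively, which is what yields dynamic-programming-style conditions for this criterion in parallel with the integral-cost case.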

13.
In this paper, the mean-square exponential stabilization of stochastic differential equations with Markovian switching is studied. Specifically, a new set of sufficient conditions is derived to obtain an aperiodically intermittent control design which exponentially stabilizes the addressed hybrid stochastic differential equations. Further, the stabilization problem under periodically intermittent control can be deduced as a special case of the developed results. As an application, we consider the Hopfield neural network model, with simulations to illustrate the effectiveness of the developed aperiodically intermittent control design.
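A minimal sketch of the intermittent idea (a toy scalar system with assumed parameters, not the paper's conditions): feedback is applied only on aperiodically drawn "work" intervals and switched off on "rest" intervals, and the closed loop still decays when the work intervals dominate.

```python
import numpy as np

rng = np.random.default_rng(2)
a, sigma, k = 0.5, 0.3, 3.0        # open loop is mean-square unstable: 2a + sigma^2 > 0
dt, T = 1e-3, 20.0
x, t = 1.0, 0.0
on, next_switch = False, 0.0
while t < T:
    if t >= next_switch:                           # aperiodic switching instants
        on = not on
        # work (controlled) intervals are drawn longer than rest intervals,
        # mimicking the kind of duty-ratio condition such designs typically need
        next_switch = t + (rng.uniform(0.4, 1.0) if on else rng.uniform(0.1, 0.3))
    drift = (a - (k if on else 0.0)) * x           # u = -k x only while "on"
    x += drift * dt + sigma * x * np.sqrt(dt) * rng.normal()
    t += dt
print("x(T) =", x)   # decays toward 0 for this choice of a, sigma, k, and duty ratio
```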

14.
This paper considers optimal feedback control policies for a class of discrete stochastic distributed-parameter systems. The class under consideration has the property that the random variable in the dynamic system depends only on time and possesses the Markov property with stationary transition probabilities. A necessary condition for optimality of a feedback control policy, which has a form similar to the Hamiltonian form of the deterministic case, is derived via a dynamic programming approach.

15.
An adaptive control problem for some linear stochastic evolution systems in Hilbert spaces is formulated and solved in this paper. The solution includes showing the strong consistency of a family of least squares estimates of the unknown parameters and the convergence of the average quadratic costs, with a control based on these estimates, to the optimal average cost. The unknown parameters in the model appear affinely in the infinitesimal generator of the $C_0$ semigroup that defines the evolution system. A recursive equation is given for a family of least squares estimates, and the bounded linear operator solution of the stationary Riccati equation is shown to be a continuous function of the unknown parameters in the uniform operator topology.

16.
17.
This paper is concerned with the control of linear, discrete-time, stochastic systems with unknown control gain parameters. Two suboptimal adaptive control schemes are derived: one based on underestimating future control and the other based on overestimating future control. Both schemes require little on-line computation and incorporate in their control laws some information on estimation errors. The performance of these laws is studied by Monte Carlo simulations on a computer. Two single-input, third-order systems are considered, one stable and the other unstable, and the performance of the two adaptive control schemes is compared with that of the scheme based on enforced certainty equivalence and the scheme where the control gain parameters are known.
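For contrast with the paper's two schemes, here is a minimal sketch of the enforced-certainty-equivalence baseline they are compared against, on an assumed scalar system with unknown gain $b$ (a toy stand-in for the paper's third-order examples): the gain is estimated by recursive least squares and then used as if it were the true value.

```python
import numpy as np

rng = np.random.default_rng(3)
a, b_true = 1.2, 0.8          # unstable open loop; the control gain b is unknown
b_hat, P = 0.1, 10.0          # RLS initial estimate and scalar covariance
x = 1.0
for k in range(100):
    u = -a * x / b_hat                        # certainty equivalence: deadbeat with b_hat
    x_next = a * x + b_true * u + 0.05 * rng.normal()
    # RLS update of b_hat from the regression x_next - a*x = b*u + w
    y, phi = x_next - a * x, u
    K = P * phi / (1.0 + phi * P * phi)
    b_hat += K * (y - phi * b_hat)
    P *= 1.0 - K * phi
    x = x_next
print(f"b_hat = {b_hat:.3f} (true {b_true}), |x| = {abs(x):.4f}")
```

This baseline ignores its own estimation error; the paper's under- and overestimating schemes differ precisely in feeding such error information back into the control law.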

18.
In this paper we deal with blow-up solutions to an elliptic equation with a nonlinear gradient term. The problem under consideration can be seen as the ergodic limit of a stochastic control problem with state constraints. It is well known that it has a solution only when a parameter which appears in the equation assumes a particular value, known as the ergodic constant. For such a constant, many properties similar to those of an eigenvalue hold true. We show that a Faber–Krahn inequality can be stated for the ergodic constant and that, for the corresponding solution, a comparison result in terms of the solution to a symmetrized problem can be proved.

19.
For the deterministic case, a linear controlled system is always $p$th order stable as long as we use the control obtained as the solution of the so-called LQ problem. For the stochastic case, however, a linear controlled system with multiplicative noise is not always $p$th mean stable for large $p$, even if we use the LQ-optimal control. Hence, it is meaningful to solve the LP-optimal control problem (i.e., linear system, $p$th order cost functional) for each $p$. In this paper, we define the LP-optimal control problem and completely solve it for the scalar case. For the multidimensional case, we obtain some results, but a general solution of this problem seems to be impossible. So, we consider the $p$th mean stabilization problem more intensively and give a sufficient condition for the existence of a $p$th mean stabilizing control by using the contraction mapping method in a Hilbert space. Some examples are also given. This research was conducted while the author was a visitor at the Forschungsschwerpunkt Dynamische Systeme, Universität Bremen, Bremen, West Germany. The author is grateful to Professor L. Arnold for providing interesting seminars and excellent working conditions during his stay. The financial assistance given by the Alexander von Humboldt Foundation during the author's stay is also gratefully acknowledged.
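The scalar mechanism behind the second sentence can be made explicit (a standard Itô computation, not quoted from the paper). For the closed-loop dynamics $dx_t = (a - bk)x_t\,dt + \sigma x_t\,dW_t$ under a linear feedback $u = -kx$,

```latex
\[
  \mathbb{E}\,|x_t|^p
  = |x_0|^p \exp\!\Big( p \Big[ (a - bk) + \tfrac{(p-1)\sigma^2}{2} \Big] t \Big),
\]
```

so $p$th mean stability requires $(a - bk) + (p-1)\sigma^2/2 < 0$; for $\sigma \neq 0$, any fixed gain $k$, including the LQ-optimal one, violates this once $p$ is large enough.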

20.
Let $X$ be a real Hilbert space with $\dim X \geq 2$ and let $Y$ be a real normed space which is strictly convex. In this paper, we generalize a theorem of Benz by proving that if a mapping $f$, from an open convex subset of $X$ into $Y$, has a contractive distance $\rho$ and an extensive distance $N\rho$ (where $N \geq 2$ is a fixed integer), then $f$ is an isometry.
