Similar articles

10 similar articles found.
1.
In this paper, the stochastic averaging method for quasi-non-integrable Hamiltonian systems is applied to the Duffing–van der Pol system to obtain partially averaged Itô stochastic differential equations. On the basis of the stochastic dynamical programming principle and the partially averaged Itô equations, dynamical programming equations for the reliability function and the mean first-passage time of the controlled system are established. A nonlinear stochastic optimal control strategy for the coupled Duffing–van der Pol system subject to Gaussian white noise excitation is then investigated for feedback minimization of first-passage failure. By averaging the terms involving the control forces and replacing the control forces with the optimal ones, the fully averaged Itô equation is derived, so the feedback minimizing first-passage failure of the controlled system is obtained by solving the final dynamical programming equations. Numerical results for the first-passage reliability function and the mean first-passage time of the controlled and uncontrolled systems are compared in illustrative figures to show the effectiveness and efficiency of the proposed method.
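The first-passage quantities in this abstract can also be estimated by brute-force Monte Carlo, which is useful as a check on the averaging-based solution. The sketch below is not the paper's method: it integrates a stochastic Duffing–van der Pol oscillator with the Euler–Maruyama scheme and estimates the reliability function, i.e. the probability that the displacement stays inside an assumed safe domain `|x| < x_crit` up to time `T`. All parameters (`eps`, `beta`, `D`, `x_crit`) are illustrative assumptions.

```python
import numpy as np

def simulate_first_passage(n_paths=2000, T=10.0, dt=1e-3,
                           eps=0.2, beta=1.0, D=0.05, x_crit=2.5, seed=0):
    """Monte Carlo estimate of the first-passage reliability
    P(|x(t)| < x_crit for all t <= T) for a Duffing-van der Pol
    oscillator under Gaussian white noise (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)                # displacement
    v = np.full(n_paths, 0.1)            # velocity (small initial energy)
    alive = np.ones(n_paths, dtype=bool) # paths with no failure so far
    for _ in range(int(T / dt)):
        # drift of x'' = eps*(1 - x^2)*x' - x - beta*x^3 + sqrt(2D)*xi(t)
        a = eps * (1.0 - x**2) * v - x - beta * x**3
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + v * dt
        v = v + a * dt + np.sqrt(2.0 * D) * dW
        alive &= np.abs(x) < x_crit      # record first-passage failures
    return alive.mean()                  # reliability estimate at time T

print(simulate_first_passage())
```

Recording the failure times instead of a survival flag gives the mean first-passage time by the same simulation; the paper obtains both quantities far more efficiently from the averaged dynamical programming equations.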

2.
A nonlinear stochastic optimal time-delay control strategy for quasi-integrable Hamiltonian systems is proposed. First, a stochastic optimal control problem for a quasi-integrable Hamiltonian system with time delay in the feedback control, subjected to Gaussian white noise, is formulated. The time-delayed feedback control forces are then approximated by control forces without time delay, converting the original problem into a stochastic optimal control problem without time delay, which is solved by applying the stochastic averaging method and the stochastic dynamical programming principle. As an example, the stochastic time-delay optimal control of two coupled van der Pol oscillators under stochastic excitation is worked out in detail to illustrate the procedure and effectiveness of the proposed control strategy.
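The key approximation step, expressing the delayed feedback state through the current state, is commonly done in this literature by rotating the current state backwards along the free oscillation: for an oscillator at frequency w, x(t-tau) ≈ x(t)cos(w tau) - (v(t)/w)sin(w tau) and v(t-tau) ≈ v(t)cos(w tau) + w x(t)sin(w tau). The check below (parameters assumed; the identity is exact only for undamped free motion, and approximate for the weakly perturbed systems the paper treats) verifies it numerically.

```python
import numpy as np

# Delayed-state approximation: rotate the current state (x, v) backwards
# along the free oscillation instead of storing the delayed history.
w, tau = 1.3, 0.4
t = np.linspace(0.0, 20.0, 2001)
x = np.cos(w * t)            # exact free response
v = -w * np.sin(w * t)       # its velocity

x_delay_true = np.cos(w * (t - tau))
x_delay_approx = x * np.cos(w * tau) - (v / w) * np.sin(w * tau)

# For pure free motion the rotation is exact, so the gap is round-off only.
print(np.max(np.abs(x_delay_true - x_delay_approx)))
```

For the weakly damped, weakly excited systems of the paper the same substitution is accurate to the order of the perturbation, which is what justifies converting the delayed problem into a delay-free one.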

3.
Treating the government's macro-regulation of the price system as an external control force, a controlled stochastic nonlinear price model is established. A nonlinear stochastic control strategy based on the stochastic averaging method for quasi-Hamiltonian systems and the stochastic dynamical programming principle is applied to control the system optimally, the control objective being to increase the stability of the system. The effectiveness of the control is demonstrated by comparing the Lyapunov exponent values before and after control.

4.
The paper is concerned with a stochastic optimal control problem in which the controlled system is described by a fully coupled nonlinear forward–backward stochastic differential equation driven by a Brownian motion. All admissible control processes are required to be adapted to a given subfiltration of the filtration generated by the underlying Brownian motion. For this type of partial-information control, a sufficient condition of optimality (a verification theorem) and a necessary condition are proved. The control domain needs to be convex, and the forward diffusion coefficient of the system may contain the control variable. This work was partially supported by the Basic Research Program of China (Grant No. 2007CB814904), the National Natural Science Foundation of China (Grant No. 10325101) and the Natural Science Foundation of Zhejiang Province (Grant Nos. Y605478, Y606667).

5.
Traditional approaches to solving stochastic optimal control problems involve dynamic programming and the solution of certain optimality equations. When such problems are recast as stochastic programming problems, structural aspects such as convexity are retained, and numerical solution procedures based on decomposition and duality may be exploited. This paper explores a class of stationary, infinite-horizon stochastic optimization problems with a discounted cost criterion. Constraints on both states and controls are permitted and are modeled in the objective function by allowing it to take infinite values. Approximating techniques are developed using variational analysis, and intuitive lower bounds are obtained by averaging the future. These bounds can be used in a finite-horizon stochastic programming setting to find solutions numerically. Research supported in part by a grant of the National Science Foundation. AMS Classification: 46N10, 49N15, 65K10, 90C15, 90C46.
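The idea that "averaging the future" yields lower bounds can be seen in a miniature convex example: when the cost is convex in the random parameter, replacing the randomness by its mean and then optimizing bounds the true optimal expected cost from below, by Jensen's inequality. The newsvendor-style sketch below uses assumed prices and an assumed demand distribution; it is an illustration of the bounding principle, not of the paper's infinite-horizon discounted setting.

```python
import numpy as np

# Newsvendor: choose an order quantity q to minimize expected cost
#   f(q, D) = c*q - p*min(q, D),  which is convex in the demand D.
# Jensen: E[f(q, D)] >= f(q, E[D]) for every q, so the "averaged future"
# (mean-demand) problem lower-bounds the stochastic optimum.
rng = np.random.default_rng(1)
demand = rng.uniform(50, 150, size=20000)   # assumed demand scenarios
c, p = 1.0, 3.0                             # assumed unit cost / revenue

def expected_cost(q, d):
    return np.mean(c * q - p * np.minimum(q, d))

qs = np.linspace(0, 200, 401)               # shared decision grid
stochastic_opt = min(expected_cost(q, demand) for q in qs)
mean_value_opt = min(expected_cost(q, demand.mean()) for q in qs)

print(mean_value_opt <= stochastic_opt)     # the averaged problem bounds below
```

In a finite-horizon stochastic programming computation, such a bound gives a cheap certificate on how far a candidate policy can be from optimal.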

6.
This paper uses the stochastic averaging method to design an optimal feedback control for nonlinear stochastic systems. Stochastic averaging is used to reduce the dimension of the state space and to derive the Itô stochastic differential equation for the response amplitude process. Two approaches to the optimization, one using the exact steady-state probability density function of the amplitude process and one using the Rayleigh approximation, are compared. The cost function is a steady-state response measure. Numerical examples are studied to demonstrate the performance of the control in both the transient and the steady state. The effect of the control on the system response and the control performance is studied, and the regions where the controls are conservative and unconservative are pointed out.
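The two densities being compared can be sketched for a representative averaged amplitude process. The snippet below uses an assumed stationary amplitude density of Duffing type (parameters `beta`, `D` are illustrative, not taken from the paper) and compares it against a Rayleigh density matched to the same mean-square amplitude.

```python
import numpy as np

# Assumed stationary amplitude density for a lightly damped Duffing oscillator
# under white noise:  p(a) ∝ a * exp(-(a^2/2 + beta*a^4/8) / D),
# versus a Rayleigh density with the same E[a^2] (= 2*sigma2).
beta, D = 1.0, 0.5
a = np.linspace(0.0, 5.0, 5001)
da = a[1] - a[0]

p_exact = a * np.exp(-(a**2 / 2 + beta * a**4 / 8) / D)
p_exact /= p_exact.sum() * da                  # normalize on the grid

sigma2 = (a**2 * p_exact).sum() * da / 2.0     # matched Rayleigh scale
p_rayleigh = (a / sigma2) * np.exp(-a**2 / (2 * sigma2))

l1_gap = np.abs(p_exact - p_rayleigh).sum() * da
print(l1_gap)   # total-variation-style gap the Rayleigh approximation incurs
```

The size of this gap is one way to anticipate where a control designed with the Rayleigh approximation will be conservative or unconservative relative to the exact-density design.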

7.
Using the decomposition of the solution of an SDE, we consider the stochastic optimal control problem with anticipative controls as a family of deterministic control problems parametrized by the paths of the driving Wiener process and of a newly introduced Lagrange multiplier stochastic process (a nonanticipativity equality constraint). It is shown that the value function of these problems is the unique global solution of a robust equation (a random partial differential equation) associated to a linear backward Hamilton–Jacobi–Bellman stochastic partial differential equation (HJB SPDE). This appears as the limiting SPDE for a sequence of random HJB PDEs when a linear interpolation approximation of the Wiener process is used. Our approach extends the Wong–Zakai type results [20] from SDEs to the stochastic dynamic programming equation by showing how it arises as the average of the limit of a sequence of deterministic dynamic programming equations. The stochastic characteristics method of Kunita [13] is used to represent the value function. By choosing the Lagrange multiplier equal to its nonanticipative constraint value, the usual stochastic (nonanticipative) optimal control and optimal cost are recovered. This suggests a method for solving anticipative control problems by almost sure deterministic optimal control. We obtain a PDE for the "cost of perfect information", the difference between the cost function of the nonanticipative control problem and the cost of the anticipative problem, which satisfies a nonlinear backward HJB SPDE. Poisson bracket conditions ensuring that this has a global solution are found. The cost of perfect information is shown to be zero when a Lagrangian submanifold is invariant for the stochastic characteristics. The LQG problem and a nonlinear anticipative control problem are considered as examples in this framework.

8.
Stochastic Analysis and Applications, 2013, 31(6): 1255–1282

The purpose of this paper is to give a systematic method for the global asymptotic stabilization in probability of nonlinear stochastic control differential systems whose unforced dynamics are Lyapunov stable in probability. The approach developed in this paper is based on the concept of passivity for nonaffine stochastic differential systems together with the theory of Lyapunov stability in probability for stochastic differential equations. In particular, we prove that, as in the case of stochastic differential systems affine in the control, a nonlinear stochastic differential system is asymptotically stabilizable in probability provided its unforced dynamics are Lyapunov stable in probability and certain rank conditions involving the affine part of the system coefficients are satisfied. Furthermore, for such systems, we show how a stabilizing smooth state feedback law can be designed explicitly. As an application of our analysis, we construct a dynamic state feedback compensator for a class of nonaffine stochastic differential systems.
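A minimal scalar illustration of the stabilization pattern (not the paper's passivity-based construction): the unforced dynamics dx = sigma*x dW are Lyapunov stable in probability at the origin but not asymptotically stable, and the smooth feedback u = -k*x makes the closed loop dx = -k*x dt + sigma*x dW almost surely exponentially stable, with top Lyapunov exponent -k - sigma^2/2 < 0. The parameters below are assumed for illustration.

```python
import numpy as np

# Closed loop under the smooth feedback u = -k*x:
#   dx = -k*x dt + sigma*x dW,  Lyapunov exponent -k - sigma^2/2 < 0,
# simulated by Euler-Maruyama over an ensemble of paths.
rng = np.random.default_rng(2)
k, sigma, dt, T, n = 1.0, 0.5, 1e-3, 5.0, 1000
x = np.ones(n)                       # all paths start at x(0) = 1
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), n)
    x = x + (-k * x) * dt + sigma * x * dW

# The median of |x(T)| tracks exp(-(k + sigma^2/2) * T), far below 1.
print(np.median(np.abs(x)))
```

With k = 0 the same simulation shows paths that wander without decaying in any moment sense, which is the "stable but not asymptotically stable" unforced behavior the feedback repairs.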

9.
We consider the problem of control for continuous-time stochastic hybrid systems on a finite time horizon. The systems considered are nonlinear: the state evolution is a nonlinear function of both the control and the state. The control parameters change at discrete times according to an underlying controlled Markov chain with finite state and action spaces. The objective is to design a controller that minimizes an expected nonlinear cost of the state trajectory. Using an averaging procedure, we show that this minimization problem can be approximated by the solution of a deterministic optimal control problem. This paper generalizes our previous results, obtained for systems whose state evolution is linear in the control. This work is supported by the Australian Research Council. All correspondence should be directed to the first author.

10.
A nonlinear stochastic dynamical model, an energy Logistic feedback-control model with noise, is established. The stochastic averaging method is applied to simplify the model, yielding a two-dimensional diffusion process that satisfies Itô stochastic differential equations. The stochastic bifurcation of the model is then studied using invariant measure theory. Finally, numerical experiments are given to verify the corresponding conclusions.
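A stochastic bifurcation of the kind studied through invariant measures can be illustrated on a bare stochastic Logistic model (without the paper's energy feedback terms; all parameters assumed): the shape of the stationary density changes qualitatively as the noise intensity crosses a threshold, a stochastic P-bifurcation.

```python
import numpy as np

# Stochastic logistic model  dx = r*x*(1 - x/K) dt + sigma*x dW  has the
# stationary density  p(x) ∝ x^(2r/sigma^2 - 2) * exp(-2*r*x/(sigma^2*K)).
# A P-bifurcation occurs at sigma^2 = r: below it the density has an
# interior peak, above it the mass piles up at the origin.
def stationary_density(x, r=1.0, K=1.0, sigma=0.5):
    p = x ** (2 * r / sigma**2 - 2) * np.exp(-2 * r * x / (sigma**2 * K))
    return p / (p.sum() * (x[1] - x[0]))   # normalize on the grid

x = np.linspace(1e-4, 3.0, 3000)
p_sub = stationary_density(x, sigma=0.5)   # sigma^2 < r: interior peak
p_sup = stationary_density(x, sigma=1.2)   # sigma^2 > r: peak at the origin

print(x[np.argmax(p_sub)], x[np.argmax(p_sup)])
```

Tracking how the peaks of such invariant densities appear and disappear as a parameter varies is exactly the kind of qualitative change the paper's numerical experiments verify for its two-dimensional averaged diffusion.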

