Similar Articles
20 similar articles found.
1.
A nonlinear stochastic optimal control strategy for minimizing the first-passage failure of quasi integrable Hamiltonian systems (multi-degree-of-freedom integrable Hamiltonian systems subject to light damping and weak random excitations) is proposed. The equations of motion for a controlled quasi integrable Hamiltonian system are reduced to a set of averaged Itô stochastic differential equations by using the stochastic averaging method. Then, the dynamical programming equations and their associated boundary and final time conditions for the control problems of maximization of reliability and of mean first-passage time are formulated. The optimal control law is derived from the dynamical programming equations and the control constraints. The final dynamical programming equations for these control problems are determined, and their relationships to the backward Kolmogorov equation governing the conditional reliability function and to the Pontryagin equation governing the mean first-passage time are separately established. The conditional reliability function and the mean first-passage time of the controlled system are obtained by solving the final dynamical programming equations or their equivalent Kolmogorov and Pontryagin equations. An example is presented to illustrate the application and effectiveness of the proposed control strategy.
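For reference, the two named equations can be sketched in a minimal one-dimensional form (an illustration in assumed notation, not taken from the paper): let the averaged energy process satisfy dH = m(H) dt + σ(H) dB(t) on the safe domain 0 ≤ H < H_c, where H_c is the critical (failure) level. Then the conditional reliability function R(t | H_0) and the mean first-passage time μ(H_0) satisfy

\[
\frac{\partial R}{\partial t} = m(H_0)\,\frac{\partial R}{\partial H_0} + \frac{1}{2}\,\sigma^{2}(H_0)\,\frac{\partial^{2} R}{\partial H_0^{2}},
\qquad R(0\,|\,H_0)=1,\quad R(t\,|\,H_c)=0,
\]
\[
\frac{1}{2}\,\sigma^{2}(H_0)\,\frac{\mathrm{d}^{2}\mu}{\mathrm{d}H_0^{2}} + m(H_0)\,\frac{\mathrm{d}\mu}{\mathrm{d}H_0} = -1,
\qquad \mu(H_c)=0,
\]

the first being the backward Kolmogorov equation and the second the Pontryagin equation; a finiteness (or zero-flux) condition at H_0 = 0 completes the boundary conditions, and in the quasi integrable case H is replaced by a vector of independent integrals of motion.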

2.
An n-degree-of-freedom Hamiltonian system with r (1 < r < n) independent first integrals which are in involution is called a partially integrable Hamiltonian system. A partially integrable Hamiltonian system subject to light damping and weak stochastic excitations is called a quasi-partially integrable Hamiltonian system. In the present paper, procedures for studying the first-passage failure of quasi-partially integrable Hamiltonian systems and its minimization by feedback control are proposed. First, the stochastic averaging method for quasi-partially integrable Hamiltonian systems is briefly reviewed. Then, based on the averaged Itô equations, a backward Kolmogorov equation governing the conditional reliability function and a set of generalized Pontryagin equations governing the conditional moments of first-passage time, together with their boundary and initial conditions, are established. After that, the dynamical programming equations and their associated boundary and final time conditions for the control problems of maximization of reliability and maximization of mean first-passage time are formulated. The relationship between the backward Kolmogorov equation and the dynamical programming equation for reliability maximization, and that between the Pontryagin equation and the dynamical programming equation for maximization of mean first-passage time, are discussed. Finally, an example is worked out to illustrate the proposed procedures and the effectiveness of feedback control in reducing first-passage failure.

3.
Zhu W. Q., Deng M. L., Huang Z. L. Nonlinear Dynamics, 2003, 33(2): 189-207
The optimal bounded control of quasi-integrable Hamiltonian systems with wide-band random excitation for minimizing their first-passage failure is investigated. First, a stochastic averaging method for multi-degree-of-freedom (MDOF) strongly nonlinear quasi-integrable Hamiltonian systems with wide-band stationary random excitations using generalized harmonic functions is proposed. Then, the dynamical programming equations and their associated boundary and final time conditions for the control problems of maximizing reliability and maximizing mean first-passage time are formulated based on the averaged Itô equations by applying the dynamical programming principle. The optimal control law is derived from the dynamical programming equations and control constraints. The relationship between the dynamical programming equations and the backward Kolmogorov equation for the conditional reliability function and the Pontryagin equation for the conditional mean first-passage time of the optimally controlled system is discussed. Finally, the conditional reliability function, the conditional probability density and the mean of first-passage time of an optimally controlled system are obtained by solving the backward Kolmogorov equation and the Pontryagin equation. The application of the proposed procedure and the effectiveness of the control strategy are illustrated with an example.
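As a hint of how a control law follows from the dynamical programming equations together with bound-type control constraints |u_i| ≤ b_i, the generic bang-bang form below is a sketch in assumed notation (value function V, averaged Hamiltonian H, generalized momenta p_i), not the specific law derived in the paper: minimizing the control-dependent term u_i (∂V/∂H)(∂H/∂p_i) of the dynamical programming equation over the admissible set yields

\[
u_i^{*} = -\,b_i\,\operatorname{sgn}\!\left(\frac{\partial V}{\partial H}\,\frac{\partial H}{\partial p_i}\right),
\qquad |u_i|\le b_i,
\]

with the sign reversed if the term is instead maximized; that is, each bounded actuator is saturated in the direction indicated by the sensitivity of the value function to the controlled energy.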

4.
A procedure for designing a feedback control to asymptotically stabilize in probability a quasi non-integrable Hamiltonian system is proposed. First, a one-dimensional averaged Itô stochastic differential equation for the controlled Hamiltonian is derived from the given equations of motion of the system by using the stochastic averaging method for quasi non-integrable Hamiltonian systems. Second, a dynamical programming equation for an ergodic control problem with undetermined cost function is established based on the stochastic dynamical programming principle and solved to yield the optimal control law. Third, the asymptotic stability in probability of the system is analysed by examining the sample behaviors of the completely averaged Itô differential equation at its two boundaries. Finally, the cost function and the optimal control forces are determined by the requirement of stabilizing the system. Two examples are given to illustrate the application of the proposed procedure and the effect of control on the stability of the system.

5.
A strategy is proposed based on the stochastic averaging method for quasi non-integrable Hamiltonian systems and the stochastic dynamical programming principle. The proposed strategy can be used to design nonlinear stochastic optimal controls that minimize the response of quasi non-integrable Hamiltonian systems subject to Gaussian white noise excitation. By using the stochastic averaging method for quasi non-integrable Hamiltonian systems, the equations of motion of a controlled quasi non-integrable Hamiltonian system are reduced to a one-dimensional averaged Itô stochastic differential equation. By using the stochastic dynamical programming principle, the dynamical programming equation for minimizing the response of the system is formulated. The optimal control law is derived from the dynamical programming equation and the bounded control constraints. The response of the optimally controlled system is predicted by solving the FPK equation associated with the Itô stochastic differential equation. An example is worked out in detail to illustrate the application of the proposed control strategy.

6.
Zhu W. Q., Deng M. L. Nonlinear Dynamics, 2004, 35(1): 81-100
A strategy for designing optimal bounded control to minimize the response of quasi non-integrable Hamiltonian systems is proposed based on the stochastic averaging method for quasi non-integrable Hamiltonian systems and the stochastic dynamical programming principle. The equations of motion of a controlled quasi non-integrable Hamiltonian system are first reduced to a one-dimensional averaged Itô stochastic differential equation for the Hamiltonian by using the stochastic averaging method for quasi non-integrable Hamiltonian systems. Then, the dynamical programming equation for the control problem of minimizing the response of the averaged system is formulated based on the dynamical programming principle. The optimal control law is derived from the dynamical programming equation and control constraints without solving the equation. The response of optimally controlled systems is predicted by solving the Fokker–Planck–Kolmogorov (FPK) equation associated with the completely averaged Itô equation. Finally, two examples are worked out in detail to illustrate the application and effectiveness of the proposed control strategy.
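To make the prediction step concrete, a minimal sketch in assumed notation (one-dimensional case) of the completely averaged Itô equation and the associated FPK equation reads

\[
\mathrm{d}H = m(H)\,\mathrm{d}t + \sigma(H)\,\mathrm{d}B(t),
\qquad
\frac{\partial p(H,t)}{\partial t} = -\frac{\partial}{\partial H}\bigl[m(H)\,p\bigr]
+ \frac{1}{2}\,\frac{\partial^{2}}{\partial H^{2}}\bigl[\sigma^{2}(H)\,p\bigr],
\]

whose stationary solution is

\[
p_{s}(H) = \frac{C}{\sigma^{2}(H)}\,
\exp\!\left(\int_{0}^{H}\frac{2\,m(s)}{\sigma^{2}(s)}\,\mathrm{d}s\right),
\]

with C a normalization constant; the effect of the optimal control enters through the drift m(H), and the response statistics of interest are then obtained as averages over p_s.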

7.
In this paper, two different control strategies designed to alleviate the response of quasi partially integrable Hamiltonian systems subjected to stochastic excitation are proposed. First, by using the stochastic averaging method for quasi partially integrable Hamiltonian systems, an n-DOF controlled quasi partially integrable Hamiltonian system with stochastic excitation is converted into a set of partially averaged Itô stochastic differential equations. Then, the dynamical programming equation associated with the partially averaged Itô equations is formulated by applying the stochastic dynamical programming principle. In the first control strategy, the optimal control law is derived from the dynamical programming equation and the control constraints without solving the dynamical programming equation. In the second control strategy, the optimal control law is obtained by solving the dynamical programming equation. Finally, the responses of both the controlled and uncontrolled systems are predicted by solving the Fokker-Planck-Kolmogorov equation associated with the fully averaged Itô equations. An example is worked out to illustrate the application and effectiveness of the two proposed control strategies.

8.
The optimal bounded control of stochastically excited systems with Duhem hysteretic components for maximizing system reliability is investigated. The Duhem hysteretic force is transformed into energy-dependent damping and stiffness by the energy dissipation balance technique, so that the controlled system is transformed into an equivalent non-hysteretic system. Stochastic averaging is then implemented to obtain the Itô stochastic equation associated with the total energy of the vibrating system, which is appropriate for evaluating system responses. The dynamical programming equations for maximizing system reliability are formulated by the dynamical programming principle. The optimal bounded control is derived from the maximization condition in the dynamical programming equation. Finally, the conditional reliability function and the mean time of first-passage failure of the optimally controlled Duhem systems are numerically solved from the Kolmogorov equations. The proposed procedure is illustrated with a representative example.

9.
A procedure for designing optimal bounded control to minimize the response of quasi-integrable Hamiltonian systems is proposed based on the stochastic averaging method for quasi-integrable Hamiltonian systems and the stochastic dynamical programming principle. The equations of motion of a controlled quasi-integrable Hamiltonian system are first reduced to a set of partially averaged Itô stochastic differential equations by using the stochastic averaging method for quasi-integrable Hamiltonian systems. Then, the dynamical programming equation for the control problem of minimizing the response of the averaged system is formulated based on the dynamical programming principle. The optimal control law is derived from the dynamical programming equation and control constraints without solving the dynamical programming equation. The response of optimally controlled systems is predicted by solving the Fokker-Planck-Kolmogorov equation associated with the completely averaged Itô equations. Finally, two examples are worked out in detail to illustrate the application and effectiveness of the proposed control strategy.

10.
The first-passage failure of quasi non-integrable generalized Hamiltonian systems is studied. First, generalized Hamiltonian systems are reviewed briefly. Then, the stochastic averaging method for quasi non-integrable generalized Hamiltonian systems is applied to obtain averaged Itô stochastic differential equations, from which the backward Kolmogorov equation governing the conditional reliability function and the Pontryagin equation governing the conditional mean of the first-passage time are established. The conditional reliability function and the conditional mean of first-passage time are obtained by solving these equations together with suitable initial and boundary conditions. Finally, an example of a power system under Gaussian white noise excitation is worked out in detail, and the analytical results are confirmed by Monte Carlo simulation of the original system.
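The Monte Carlo check mentioned at the end can be sketched as below. This is an illustration only, not the paper's power-system example: a Duffing oscillator under Gaussian white noise is used instead, and every function name and parameter value is an assumption made for the demo. The script estimates the conditional reliability function R(t | H_0) as the fraction of sample paths whose total energy has never reached the critical level H_c.

import numpy as np

# Monte Carlo estimate of the conditional reliability function R(t | H0):
# the probability that the total energy H of a randomly excited oscillator
# stays below a critical level Hc up to time t, given the initial energy H0.
# Illustration only: a Duffing oscillator stands in for the paper's example,
# and all parameter values below are assumptions made for this demo.

def reliability_mc(h0=0.05, hc=1.0, t_max=20.0, dt=1e-3,
                   n_samples=2000, omega0=1.0, alpha=0.5,
                   zeta=0.05, D=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    # Start each sample exactly at energy H0 (all of it kinetic: x = 0).
    x = np.zeros(n_samples)
    v = np.full(n_samples, np.sqrt(2.0 * h0))
    alive = np.ones(n_samples, dtype=bool)       # paths that have not reached Hc yet
    reliability = np.empty(n_steps)

    for k in range(n_steps):
        # Euler-Maruyama step for
        #   x'' + 2*zeta*x' + omega0^2 * x + alpha * x^3 = sqrt(2*D) * xi(t)
        dW = rng.normal(0.0, np.sqrt(dt), n_samples)
        a = -2.0 * zeta * v - omega0**2 * x - alpha * x**3
        x = x + v * dt
        v = v + a * dt + np.sqrt(2.0 * D) * dW
        # Hamiltonian (total energy) of each sample path.
        H = 0.5 * v**2 + 0.5 * omega0**2 * x**2 + 0.25 * alpha * x**4
        alive &= (H < hc)                        # once failed, a path stays failed
        reliability[k] = alive.mean()            # estimate of R(k*dt | H0)

    return np.arange(1, n_steps + 1) * dt, reliability

if __name__ == "__main__":
    t, R = reliability_mc()
    print(f"Estimated reliability at t = {t[-1]:.1f}: R = {R[-1]:.3f}")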

11.
A new procedure for designing the optimal control of quasi non-integrable Hamiltonian systems under stochastic excitations is proposed based on the stochastic averaging method for quasi non-integrable Hamiltonian systems and the stochastic maximum principle. First, the control problem, consisting of 2n-dimensional equations governing the controlled quasi non-integrable system and a performance index, is converted into a partially averaged one, consisting of a one-dimensional equation for the controlled system and a performance index, by using the stochastic averaging method. Then, the adjoint equation and the maximum condition of the partially averaged control problem are derived based on the stochastic maximum principle. The optimal control forces are determined from the maximum condition by solving the forward–backward stochastic differential equations (FBSDE). For infinite time-interval ergodic control, the adjoint variable is a stationary process and the FBSDE is reduced to a partial differential equation. Finally, the response statistics of the optimally controlled system are predicted by solving the Fokker–Planck equation (FPE) associated with the fully averaged Itô equation of the controlled system. An example of a two-degree-of-freedom (DOF) quasi non-integrable Hamiltonian system is worked out to illustrate the proposed procedure and its effectiveness.

12.
In this paper, the first-passage problem of a class of internally resonant quasi-integrable Hamiltonian systems under wide-band stochastic excitations is studied theoretically. By using the stochastic averaging method, the equations of motion of the original internally resonant Hamiltonian system are reduced to a set of averaged Itô stochastic differential equations. The backward Kolmogorov equation governing the conditional reliability function and the Pontryagin equation governing the mean first-passage time are established under appropriate boundary and (or) initial conditions. An example is given to show the accuracy of the theoretical method. Numerical solutions of the high-dimensional backward Kolmogorov and Pontryagin equations are obtained by the finite difference method. All theoretical results are verified by Monte Carlo simulation.
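As an indication of what a finite-difference solution of such equations looks like, the sketch below solves only the one-dimensional backward Kolmogorov equation with an explicit scheme (the paper treats higher-dimensional equations); the function names, the placeholder drift m(H) and diffusion σ(H), and all numerical parameters are assumptions made for illustration.

import numpy as np

# Explicit finite-difference solution of the one-dimensional backward
# Kolmogorov equation  dR/dt = m(H) dR/dH + (1/2) sigma(H)^2 d2R/dH2
# for the conditional reliability function R(t | H0), with an absorbing
# boundary at the critical energy Hc and a zero-derivative (reflecting-type)
# approximation at H0 = 0. A 1-D illustration only; drift and diffusion in
# the demo below are placeholder choices, not the paper's coefficients.

def solve_backward_kolmogorov(m, sigma, hc=1.0, n_h=200, t_max=20.0):
    h = np.linspace(0.0, hc, n_h + 1)
    dh = h[1] - h[0]
    sig2 = sigma(h) ** 2
    dt = 0.4 * dh ** 2 / max(sig2.max(), 1e-12)   # explicit-scheme stability limit
    n_t = int(t_max / dt)
    R = np.ones_like(h)                            # R(0 | H0) = 1 inside the safe domain
    R[-1] = 0.0                                    # absorbing boundary at Hc
    for _ in range(n_t):
        dR = np.gradient(R, dh)
        d2R = np.gradient(dR, dh)
        R[1:-1] += dt * (m(h[1:-1]) * dR[1:-1] + 0.5 * sig2[1:-1] * d2R[1:-1])
        R[0] = R[1]                                # zero-derivative condition at H0 = 0
        R[-1] = 0.0
        np.clip(R, 0.0, 1.0, out=R)
    return h, R

if __name__ == "__main__":
    # Placeholder averaged coefficients (assumed, for demonstration only).
    m = lambda H: -0.1 * H + 0.01
    sigma = lambda H: np.sqrt(0.02 * H + 1e-6)
    h, R = solve_backward_kolmogorov(m, sigma, t_max=10.0)
    print(f"R(t=10 | H0=0.1) is approximately {np.interp(0.1, h, R):.3f}")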

13.
Zhu W. Q. Nonlinear Dynamics, 2004, 36(2-4): 455-470
A procedure for designing a feedback control to asymptotically stabilize, with probability one, a quasi nonintegrable Hamiltonian system is proposed. First, the equations of motion of the system are reduced to a one-dimensional averaged Itô stochastic differential equation for the controlled Hamiltonian by using the stochastic averaging method for quasi nonintegrable Hamiltonian systems. Second, a dynamical programming equation for the ergodic control problem of the averaged system with undetermined cost function is established based on the dynamical programming principle. This equation is then solved to yield the optimal control law. Third, a formula for the Lyapunov exponent of the completely averaged Itô equation is derived by introducing a new norm, the square root of the Hamiltonian, into the definitions of stochastic stability and the Lyapunov exponent. The asymptotic stability with probability one of the originally controlled system is analysed approximately by using the Lyapunov exponent. Finally, the cost function is determined by the requirement of stabilizing the system. Two examples are given to illustrate the application of the proposed procedure and the effectiveness of the control in stabilizing the system.
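To indicate the kind of formula meant by the third step, here is a brief sketch in the same assumed notation as above (completely averaged equation dH = m(H) dt + σ(H) dB(t)); it illustrates the idea rather than reproducing the paper's exact expression. Applying Itô's formula to ln √H = (1/2) ln H gives

\[
\mathrm{d}\bigl(\ln\sqrt{H}\bigr)
= \frac{1}{2}\left[\frac{m(H)}{H} - \frac{\sigma^{2}(H)}{2H^{2}}\right]\mathrm{d}t
+ \frac{\sigma(H)}{2H}\,\mathrm{d}B(t),
\]

so the Lyapunov exponent based on the norm √H is

\[
\lambda = \lim_{t\to\infty}\frac{1}{t}\,\ln\sqrt{H(t)},
\]

evaluated from the behaviour of m(H) and σ²(H) near the trivial solution H = 0; λ < 0 indicates asymptotic stability with probability one.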

14.
A stochastic optimal control strategy for a slightly sagged cable using support motion in the cable axial direction is proposed. The nonlinear equation of in-plane cable motion is derived and reduced to the equations for the first two modes of cable vibration by using the Galerkin method. The partially averaged Itô equation for the controlled system energy is further derived by applying the stochastic averaging method for quasi-non-integrable Hamiltonian systems. The dynamical programming equation for the controlled system energy with a performance index is established by applying the stochastic dynamical programming principle, and a stochastic optimal control law is obtained by solving the dynamical programming equation. A bilinear controller designed by using the direct method of Lyapunov is introduced for comparison. The comparison between the two controllers shows that the proposed stochastic optimal control strategy is superior to the bilinear control strategy in terms of both control effectiveness and efficiency.

15.
A new bounded optimal control strategy for multi-degree-of-freedom (MDOF) quasi nonintegrable Hamiltonian systems with actuator saturation is proposed. First, an n-degree-of-freedom (n-DOF) controlled quasi nonintegrable Hamiltonian system is reduced to a partially averaged Itô stochastic differential equation by using the stochastic averaging method for quasi nonintegrable Hamiltonian systems. Then, a dynamical programming equation is established by using the stochastic dynamical programming principle, from which the optimal control law consisting of optimal unbounded control and bang–bang control is derived. Finally, the response of the optimally controlled system is predicted by solving the Fokker–Planck–Kolmogorov (FPK) equation associated with the fully averaged Itô equation. An example of two controlled nonlinearly coupled Duffing oscillators is worked out in detail. Numerical results show that the proposed control strategy has high control effectiveness and efficiency and that chattering is reduced significantly compared with the bang–bang control strategy.

16.
First-Passage Problem of Coupled Duffing-van der Pol Systems   (Cited 2 times: 0 self-citations, 2 by others)
Xu Wei, Li Wei, Jin Yanfei, Zhao Junfeng. 力学学报, 2005, 37(5): 620-626
The first-passage problem of coupled Duffing-van der Pol systems under Gaussian white noise excitation is studied by using the stochastic averaging method for quasi non-integrable Hamiltonian systems. First, the backward Kolmogorov equation governing the conditional reliability function and the generalized Pontryagin equations governing the conditional moments of first-passage time are given. Then, based on the boundary and initial conditions of these two classes of partial differential equations, the reliability and the moments of first-passage time of the system are analyzed in detail for the cases of combined external and parametric excitation and of purely external excitation. Finally, numerical results for the reliability function, the probability density of first-passage time and the mean first-passage time are presented in figures and tables.

17.
A procedure for studying the first-passage failure of strongly non-linear oscillators with time-delayed feedback control under combined harmonic and wide-band noise excitations is proposed. First, the time-delayed feedback control forces are expressed approximately in terms of the system state variables without time delay. Then, the averaged Itô stochastic differential equations for the system are derived by using the stochastic averaging method. A backward Kolmogorov equation governing the conditional reliability function and a set of generalized Pontryagin equations governing the conditional moments of first-passage time are established. Finally, the conditional reliability function, the conditional probability density and moments of first-passage time are obtained by solving the backward Kolmogorov equation and generalized Pontryagin equations with suitable initial and boundary conditions. An example is worked out in detail to illustrate the proposed procedure. The effects of time delay in feedback control forces on the conditional reliability function, conditional probability density and moments of first-passage time are analyzed. The validity of the proposed method is confirmed by digital simulation.
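The approximation referred to in the second sentence is commonly of the following form for an oscillator whose response is nearly harmonic with slowly varying amplitude and instantaneous frequency ω (a sketch under that assumption, not necessarily the paper's exact expressions):

\[
X(t-\tau) \approx X(t)\cos\omega\tau - \frac{\dot{X}(t)}{\omega}\sin\omega\tau,
\qquad
\dot{X}(t-\tau) \approx \dot{X}(t)\cos\omega\tau + \omega\,X(t)\sin\omega\tau,
\]

so that a delayed feedback force u(X(t-τ), Ẋ(t-τ)) becomes a function of the current state (X(t), Ẋ(t)), after which the stochastic averaging method can be applied as usual.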

18.
A stochastic fractional optimal control strategy for quasi-integrable Hamiltonian systems with fractional derivative damping is proposed. First, the equations of the controlled system are reduced to a set of partially averaged Itô stochastic differential equations for the energy processes by applying the stochastic averaging method for quasi-integrable Hamiltonian systems, and a stochastic fractional optimal control problem (FOCP) for the partially averaged system is formulated. Then, the dynamical programming equation for the ergodic control of the partially averaged system is established by using the stochastic dynamical programming principle and solved to yield the fractional optimal control law. Finally, an example is given to illustrate the application and effectiveness of the proposed control design procedure.

19.
An optimal vibration control strategy for partially observable nonlinear quasi Hamiltonian systems with actuator saturation is proposed. First, a controlled partially observable nonlinear system is converted into a completely observable linear control system of finite dimension based on the theorem due to Charalambous and Elliott. Then the partially averaged Itô stochastic differential equations and the dynamical programming equation associated with the completely observable linear system are derived by using the stochastic averaging method and the stochastic dynamical programming principle, respectively. The optimal control law is obtained by solving the final dynamical programming equation. The results show that the proposed control strategy has high control effectiveness and control efficiency.

20.
A New Stochastic Optimal Control Strategy for Hysteretic MR Dampers   (Cited 3 times: 0 self-citations, 3 by others)
I. INTRODUCTION  Magneto-rheological (MR) fluid as a smart material possesses fairly good essential characteristics such as reversible change between liquid and semi-solid in milliseconds with a controllable yield strength when exposed to a magnetic field. A…
