Similar Documents
20 similar documents found
1.
Michael Schacher 《PAMM》2008,8(1):10033-10036
In practice it is often not possible to specify exact model parameters, so a controller precomputed from parameter estimates can perform poorly. The aim of this presentation is to combine classical PID control theory with stochastic optimisation methods in order to obtain robust optimal feedback control. The method works with cost functions being minimized and takes stochastic parameter variations into account. After a Taylor expansion to compute the expected cost functions and a few transformations, an approximate deterministic substitute PID control problem follows. Retaining only linear terms, approximations of the expectations and variances of the cost functions can be calculated explicitly. Numerical approximations of the objective function and the differential equations are then obtained by means of splines. Using stochastic optimization methods, random parameter variations are incorporated into the optimal control process, and robust optimal feedback controls are obtained. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
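The expected-cost idea can be sketched in a few lines. This is an illustrative toy, not the paper's method: the Taylor-expansion substitute problem is replaced by plain Monte Carlo sampling, and the first-order plant, PI gains and cost weights are invented for the example.

```python
import random

def expected_cost(kp, ki, samples, dt=0.01, steps=500):
    """Monte Carlo estimate of the expected quadratic cost of a PI
    controller regulating x' = -a*x + u, with the plant pole a random."""
    total = 0.0
    for a in samples:
        x, integ, cost = 1.0, 0.0, 0.0     # drive x from 1 toward 0
        for _ in range(steps):
            u = -kp * x - ki * integ       # PI law (derivative term omitted)
            cost += (x * x + 0.1 * u * u) * dt
            integ += x * dt
            x += (-a * x + u) * dt         # explicit Euler step
        total += cost
    return total / len(samples)

random.seed(0)
# stochastic parameter variation: plant pole a ~ U(0.5, 1.5)
samples = [random.uniform(0.5, 1.5) for _ in range(200)]

# coarse search for the gains minimizing the *expected* cost
grid = [(kp, ki) for kp in (0.5, 1.0, 2.0, 4.0, 8.0)
                 for ki in (0.0, 0.5, 1.0, 2.0)]
best = min(grid, key=lambda g: expected_cost(g[0], g[1], samples))
print("robust PI gains:", best)
```

Because the expectation, not a single nominal cost, is minimized, the selected gains hedge against the whole range of plant poles rather than one estimate.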

2.
Michael Schacher 《PAMM》2010,10(1):541-542
The aim of this presentation is to construct an optimal open-loop feedback controller for robots that takes stochastic uncertainties into account. In this way, optimal regulators that are insensitive to random parameter variations can be obtained. Usually, a precomputed feedback control is based on exactly known or estimated model parameters; in practice, however, exact information about model parameters, e.g. the payload mass, is often not available. Supposing that the probability distribution of the random parameter variation is known, stochastic optimisation methods are applied in order to obtain a robust open-loop feedback control. Taking stochastic parameter variations into account, the method works with expected cost functions evaluating the primary control expenses and the tracking error; the expectation of the total costs is then minimized. As in Model Predictive Control (MPC), a sliding horizon is considered: instead of minimizing an integral from a starting time t0 to the final time tf, the future time range [t, t+T], with a sufficiently small positive time unit T, is taken into account. The resulting optimal regulator problem under stochastic uncertainty is solved by using the Hamiltonian of the problem. After the computation of an H-minimal control, the related stochastic two-point boundary value problem is solved in order to find a robust optimal open-loop feedback control. The performance of the method is demonstrated by a numerical example, the control of a robot under random variations of the payload mass. (© 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
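The sliding-horizon scheme can be illustrated with a minimal sketch. Assumptions: a scalar discrete-time plant with a random gain, and a brute-force search over constant controls in place of the Hamiltonian-based computation; all numbers are invented.

```python
import random

def horizon_cost(x0, u, a_samples, T=5, r=0.1):
    """Expected cost over the sliding horizon [t, t+T] when the constant
    control u is applied to x_{k+1} = a*x_k + u with random gain a."""
    total = 0.0
    for a in a_samples:
        x, cost = x0, 0.0
        for _ in range(T):
            cost += x * x + r * u * u
            x = a * x + u
        total += cost
    return total / len(a_samples)

random.seed(1)
a_samples = [random.uniform(0.9, 1.1) for _ in range(100)]
candidates = [i / 20 - 1.0 for i in range(41)]     # u in [-1, 1], step 0.05

# Open-loop feedback: re-minimize over [t, t+T] at every step, apply only
# the first control, then slide the horizon forward (as in MPC).
x, traj = 1.0, []
for _ in range(30):
    u = min(candidates, key=lambda v: horizon_cost(x, v, a_samples))
    x = random.uniform(0.9, 1.1) * x + u           # the true (random) plant
    traj.append(abs(x))
print("final |x|:", traj[-1])
```

Although each minimization produces an open-loop control, re-solving at every step from the measured state turns the scheme into a feedback law, which is what makes it robust against the random gain.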

3.
Michael Schacher 《PAMM》2007,7(1):1061801-1061802
The most important aspect in the optimal control and design of manipulators is the determination of the basic movement, i.e. the calculation of the optimal trajectory along which the robot has to move. Given an optimal reference trajectory and an optimal open-loop control, control corrections must still be applied through a feedback control, and several approaches exist for this. In this article a method based on classical control theory is presented, which works with cost functions being minimized. The aim is to take stochastic parameter variations into account in order to obtain robust optimal feedback controls. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

4.
Some problems of ergodic control and adaptive control are formulated and solved for stochastic differential delay systems. The existence and uniqueness of invariant measures that are solutions of the stochastic functional differential equations for these systems are verified. For an ergodic cost criterion, almost optimal controls are constructed. For an unknown system, the invariant measures and the optimal ergodic costs are shown to be continuous functions of the unknown parameters. Almost self-optimizing adaptive controls are constructed in a feasible way via an approximate certainty-equivalence principle. This research was partially supported by NSF Grants ECS-91-02714 and ECS-91-13029.

5.
《Optimization》2012,61(1-4):163-195
In order to reduce large online measurement and correction expenses, the a priori information on the random variations of the model parameters of a robot and its working environment is taken into account already at the planning stage. Thus, instead of solving a deterministic path planning problem with a fixed nominal parameter vector, the optimal velocity profile along a given trajectory in work space is determined by a stochastic optimization approach. In particular, the standard polygon of constrained motion, which depends on the nominal parameter vector, is replaced by a more general set of admissible motions determined by chance constraints or more general risk constraints. Robust values (with respect to stochastic parameter variations) of the maximum and minimum velocity, acceleration and deceleration can then be obtained by solving a univariate stochastic optimization problem. Considering the fields of extremal trajectories, the minimum-time path planning problem under stochastic uncertainty can then be solved by standard deterministic optimal path planning methods.
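A hedged sketch of the chance-constraint idea for a single motion bound: the random velocity limit c/m induced by an uncertain payload mass m, and all numbers, are invented for illustration. The robust bound is the empirical quantile at which the violation probability stays below a chosen level alpha.

```python
import random

def robust_limit(limit_samples, alpha=0.05):
    """Largest v such that the empirical probability of violating the
    random constraint v <= limit(omega) is at most alpha."""
    s = sorted(limit_samples)
    k = int(alpha * len(s))      # number of samples allowed to be violated
    return s[k]                  # the alpha-quantile of the random limit

random.seed(2)
# a random payload mass m ~ U(1, 2) induces a random velocity limit c/m
c = 3.0
limits = [c / random.uniform(1.0, 2.0) for _ in range(1000)]

v_nominal = c / 1.5              # limit at the nominal (mean) mass
v_robust = robust_limit(limits, alpha=0.05)
print(v_robust, "<", v_nominal)
```

The robust bound is tighter than the nominal one: the price of guaranteeing feasibility with probability 1 - alpha under parameter variations.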

6.
In the optimal control of industrial, field or service robots, the standard procedure is first to determine offline a reference trajectory and a feedforward control, based on selected nominal values of the unknown stochastic model parameters, and then to correct the inevitable and growing deviation of the state or performance of the robot from the prescribed state or performance by online measurement and control actions. Due to the stochastic variations of the model parameters, more and more measurement and correction actions are needed during the process. By optimal stochastic trajectory planning (OSTP), based on stochastic optimization methods, the available a priori and sample information about the robot and its working environment is incorporated into the control process. Consequently, more robust reference trajectories and feedforward controls are obtained, which require far fewer online control actions. In order to maintain a high quality of the guiding functions, the reference trajectory and the feedforward control can be updated at later time points when additional information about the control process becomes available. After the presentation of the adaptive optimal stochastic trajectory planning (AOSTP) procedure based on stochastic optimization methods, several numerical techniques for the computation of robust reference trajectories and feedforward controls under real-time conditions are presented. Numerical examples for a Manutec r3 industrial robot are discussed. The first demonstrates real-time solutions of (OSTP) based on a sensitivity analysis of a previously calculated reference trajectory. The second shows the differences between reference trajectories based on deterministic methods and on the stochastic methods introduced in this paper. Based on simulations of the robot's behavior, the increased robustness of stochastic reference trajectories is demonstrated.

7.
This paper presents an extension of earlier research on hierarchical control of stochastic manufacturing systems with linear production costs. A new method is introduced to construct asymptotically optimal open-loop and feedback controls for manufacturing systems in which the rates of machine breakdown and repair are much larger than the rate of fluctuation in demand and the discount rate of the cost. This approach allows us to carry out an asymptotic analysis of manufacturing systems with convex inventory/backlog and production costs, and to obtain error-bound estimates for the constructed open-loop controls. Under appropriate conditions, an asymptotically optimal Lipschitz feedback control law is obtained. This work was partly supported by the NSERC Grant A4619, URIF, General Motors of Canada, and the Manufacturing Research Corporation of Ontario.

8.
Problems from limit load or shakedown analysis are based on the convex, linear or linearized yield/strength condition and the linear equilibrium equation for the generic stress vector. Since, in practice, stochastic variations of the model parameters (e.g., yield stresses, plastic capacities) and of the external loadings must be taken into account, the basic stochastic plastic analysis problem must be replaced by an appropriate deterministic substitute problem. Instead of approximately calculating the probability of failure based on a certain choice of failure modes, a direct approach is presented here based on the costs for missing carrying capacity and the failure costs (e.g., costs for damage, repair, compensation for weakness within the structure, etc.). Based on the basic mechanical survival conditions, the failure costs may be represented by the minimum value of a convex, and often linear, program; several mathematical properties of this program are shown. Minimizing the total expected costs subject to the remaining (simple) deterministic constraints then yields a stochastic optimization problem which may be represented as a "Stochastic Convex Program (SCP) with recourse". Working with linearized yield/strength conditions, a "Stochastic Linear Program (SLP) with complete fixed recourse" is obtained. In the case of a discrete probability distribution, or after discretization of a more general probability distribution of the random structural parameters, loadings and certain random cost factors, one has a linear program (LP) with a so-called "dual decomposition" data structure. For stochastic programs of this type, many theoretical results and efficient numerical solution procedures (LP solvers) are available. The mathematical properties of these substitute problems are considered, and approximate analytical formulas for the limit load factor are given.
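The cost-based substitute problem can be illustrated by the simplest possible instance: a one-dimensional design variable, a discretely distributed load, and a recourse program so small that its minimum value reduces to a single max-term. All numbers are invented.

```python
# Discretely distributed random load: (value, probability) scenarios
scenarios = [(2.0, 0.3), (4.0, 0.4), (6.0, 0.2), (8.0, 0.1)]

def total_expected_cost(x, design_cost=1.0, failure_cost=5.0):
    """Design cost plus expected failure (recourse) cost for capacity x.
    The recourse term is the minimum value of the trivial one-variable
    linear program  min { failure_cost * y : y >= load - x, y >= 0 }."""
    recourse = sum(p * failure_cost * max(0.0, load - x)
                   for load, p in scenarios)
    return design_cost * x + recourse

# Solve the deterministic substitute problem over a capacity grid
grid = [i * 0.1 for i in range(101)]          # x in [0, 10]
x_star = min(grid, key=total_expected_cost)
print("optimal capacity:", x_star)
```

This is a "simple recourse" instance: the expected total cost is piecewise linear and convex in x, so its minimizer sits at the scenario load where the marginal design cost starts to exceed the marginal expected failure cost.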

9.
We consider a controlled system driven by a coupled forward–backward stochastic differential equation with a nondegenerate diffusion matrix. The cost functional is defined by the solution of the controlled backward stochastic differential equation at the initial time. Our goal is to find an optimal control which minimizes the cost functional. The method consists in constructing a sequence of approximating controlled systems for which we show the existence of a sequence of feedback optimal controls. By passing to the limit, we establish the existence of a relaxed optimal control for the initial problem. The existence of a strict optimal control follows from the Filippov convexity condition.

10.
We study the behavior of step approximations of solutions of stochastic evolution equations. The results obtained are applied to controlled stochastic equations. Under certain assumptions on the cost functional and the coefficients of the equation, the coincidence of optimal costs for the classes of feedback controls and generalized controls is proved. Translated from Teoriya Sluchainykh Protsessov, No. 16, pp. 28–33, 1988.

11.
A function-space asymptotic distribution of quadratic functionals induced from an unknown system is obtained in terms of a multi-dimensional Wiener process, where the control is a linear transformation of the state that depends smoothly on the unknown parameters. The result is easily specialized to the asymptotic distribution of the family of random variables formed as the upper limit of the integrals of the quadratic terms is varied. The result provides a measure of the dependence of such a quadratic functional on a family of strongly consistent estimates of the unknown parameters, and in some cases it provides an interesting contrast with the case where all parameters are known. In this paper, it is shown that, for some linear stochastic evolution systems, there are special feedback control laws for which the variance of the asymptotic normal distribution of the average costs is smaller for the control law based on the parameter estimates than for the control law based on the true parameter values. This phenomenon does not occur if the feedback control laws are optimal stationary controls. This research was supported by NSF Grants Nos. ECS-87-18026 and ECS-91-13029. The author thanks Professor Alain Bensoussan for his great hospitality at INRIA, where this paper was written, and Professors Tyrone Duncan, Pravin Varaiya, and the anonymous reviewer for their very useful comments.

12.
In this paper, the problems of stochastic stability and robust control for a class of uncertain sampled-data systems are studied. The systems contain random jumping parameters described by a finite-state semi-Markov process. Sufficient conditions for stochastic stability or exponential mean-square stability of the systems are presented. Conditions for the existence of a sampled-data feedback control and a multirate sampled-data optimal control for the continuous-time uncertain Markovian jump systems are also obtained. The design procedure for robust multirate sampled-data control is formulated in terms of linear matrix inequalities (LMIs), which can be solved efficiently by available software toolboxes. Finally, a numerical example is given to demonstrate the feasibility and effectiveness of the proposed techniques.
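Mean-square stability of a jump system can also be checked empirically with a small simulation. This is a sketch only: a scalar system and an ordinary Markov chain stand in for the semi-Markov case, and the Monte Carlo second moment stands in for the LMI test; all numbers are invented.

```python
import random

def mean_square_trajectory(a_modes, P, x0=1.0, steps=40, runs=2000):
    """Monte Carlo estimate of E[x_k^2] for the scalar jump system
    x_{k+1} = a(r_k) * x_k, where the mode r_k follows a Markov chain
    with transition matrix P."""
    second_moment = [0.0] * steps
    for _ in range(runs):
        x, mode = x0, 0
        for k in range(steps):
            second_moment[k] += x * x
            x = a_modes[mode] * x
            # sample the next mode from row `mode` of P
            u, acc = random.random(), 0.0
            for j, pj in enumerate(P[mode]):
                acc += pj
                if u < acc:
                    mode = j
                    break
    return [m / runs for m in second_moment]

random.seed(3)
# mode 0 is contracting, mode 1 is expanding; the chain spends most of
# its time in mode 0, so the system is mean-square stable overall
a_modes = [0.5, 1.2]
P = [[0.9, 0.1], [0.8, 0.2]]
m = mean_square_trajectory(a_modes, P)
print("E[x_0^2] =", m[0], " E[x_39^2] =", m[-1])
```

The point mirrors the abstract: stability is a property of the ensemble second moment over the random jumps, not of each individual mode (mode 1 alone is unstable).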

13.
We study a single-machine stochastic scheduling problem with n jobs, in which each job has a random processing time and a general stochastic cost function which may include a random due date and weight. The processing times are exponentially distributed, whereas the stochastic cost functions and the due dates may follow any distributions. The objective is to minimize the expected sum of the cost functions. We prove that a sequence ordered by the product of the processing-time rate and the expected cost function is optimal and that, under certain conditions, a sequence with the weighted shortest expected processing time first (WSEPT) structure is optimal. This generalizes previously known results to more general situations. Examples of applications to practical problems are also discussed. This work was partially supported by the Research Grants Council of Hong Kong under Earmarked Grants No. CUHK4418/99E and No. PolyU 5081/00E.
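The WSEPT rule is easy to state in code. The sketch below (with invented rates and weights) orders jobs by weight × processing rate, i.e. by w / E[p], and checks it against full enumeration for the expected weighted completion-time objective.

```python
from itertools import permutations

# (rate lambda, weight w) for each job; processing time ~ Exp(lambda)
jobs = [(2.0, 1.0), (1.0, 3.0), (4.0, 2.0), (0.5, 1.0)]

def expected_weighted_flowtime(order):
    """E[sum w_j C_j] on a single machine: completion times are sums of
    independent exponential processing times, so their means just add."""
    t, total = 0.0, 0.0
    for i in order:
        lam, w = jobs[i]
        t += 1.0 / lam          # E[processing time] of job i
        total += w * t
    return total

# WSEPT: sort by weight * rate (= w / E[processing time]), descending
wsept = sorted(range(len(jobs)), key=lambda i: -jobs[i][0] * jobs[i][1])
best = min(permutations(range(len(jobs))), key=expected_weighted_flowtime)
print(wsept, expected_weighted_flowtime(tuple(wsept)))
```

With distinct w·λ values the WSEPT order is the unique minimizer, which the brute-force enumeration over all 4! sequences confirms.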

14.
Using the decomposition of the solution of an SDE, we consider the stochastic optimal control problem with anticipative controls as a family of deterministic control problems parametrized by the paths of the driving Wiener process and of a newly introduced Lagrange multiplier stochastic process (nonanticipativity equality constraint). It is shown that the value function of these problems is the unique global solution of a robust equation (random partial differential equation) associated to a linear backward Hamilton-Jacobi-Bellman stochastic partial differential equation (HJB SPDE). This appears as the limiting SPDE for a sequence of random HJB PDEs when a linear interpolation approximation of the Wiener process is used. Our approach extends Wong-Zakai type results [20] from SDEs to the stochastic dynamic programming equation by showing how it arises as the average of the limit of a sequence of deterministic dynamic programming equations. The stochastic characteristics method of Kunita [13] is used to represent the value function. By choosing the Lagrange multiplier equal to its nonanticipative constraint value, the usual stochastic (nonanticipative) optimal control and optimal cost are recovered. This suggests a method for solving anticipative control problems by almost sure deterministic optimal control. We obtain a PDE for the "cost of perfect information", the difference between the cost function of the nonanticipative control problem and that of the anticipative problem, which satisfies a nonlinear backward HJB SPDE. Poisson bracket conditions are found ensuring that this has a global solution. The cost of perfect information is shown to be zero when a Lagrangian submanifold is invariant for the stochastic characteristics. The LQG problem and a nonlinear anticipative control problem are considered as examples in this framework.

15.
A problem of robust guaranteed cost control of stochastic discrete-time systems with parametric uncertainties under Markovian switching is considered. The control is applied simultaneously to both the random and the deterministic components of the system. The noise (random) term depends on both the states and the control input. The jump Markovian switching is modeled by a discrete-time Markov chain, and the stochastic environmental disturbance is modeled by a sequence of independent, identically distributed normal random variables. Using a linear matrix inequality (LMI) approach, robust quadratic stochastic stability is obtained. The proposed control law for this quadratic stochastic stabilization result depends on the mode of the system. This control law is developed such that the closed-loop system with a cost function has an upper bound under all admissible parameter uncertainties. The upper bound for the cost function is obtained by solving a minimization problem. Two numerical examples are given to demonstrate the potential of the proposed techniques and the obtained results.

16.
The guaranteed cost control (GCC) problem arising in decentralized robust control of a class of uncertain nonlinear large-scale stochastic systems with high-order interconnections is considered. After determining appropriate conditions for the stochastic GCC controller, a class of decentralized local state feedback controllers is derived using linear matrix inequalities (LMIs). The extension of the result to the static output feedback control problem is discussed by considering the Karush-Kuhn-Tucker (KKT) conditions. The efficiency of the proposed design method is demonstrated by simulation results.

17.
This paper deals with Markov Decision Processes (MDPs) on Borel spaces with possibly unbounded costs. The criterion to be optimized is the expected total cost with a random horizon of infinite support. It is observed that this performance criterion is equivalent to the expected total discounted cost with an infinite horizon and a time-varying discount factor. The optimal value function and the optimal policy are then characterized through suitable versions of the Dynamic Programming Equation. Moreover, it is proved that the optimal value function of the optimal control problem with a random horizon can be bounded from above by the optimal value function of a discounted optimal control problem with a fixed discount factor, where the discount factor is defined in an adequate way by the parameters introduced for the study of the random-horizon problem. To illustrate the theory developed, a version of the linear-quadratic model with a random horizon and a logarithmic consumption-investment model are presented.
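The equivalence between a random horizon and discounting can be checked numerically in the simplest case, a geometrically distributed horizon independent of the costs: a horizon with per-stage continuation probability 1-p corresponds to a fixed discount factor beta = 1-p. The cost sequence and parameters below are invented for illustration.

```python
import random

random.seed(4)
p = 0.2                                  # P(the horizon ends after any stage)
costs = [0.9 ** t for t in range(60)]    # truncated; (1-p)^60 is negligible

def mc_random_horizon_cost(runs=100_000):
    """Monte Carlo estimate of E[ sum_{t=0}^{tau-1} c_t ], where the random
    horizon tau ~ Geometric(p) on {1, 2, ...} is independent of the costs."""
    total = 0.0
    for _ in range(runs):
        for c in costs:
            total += c
            if random.random() < p:      # the process stops after this stage
                break
    return total / runs

estimate = mc_random_horizon_cost()

# Equivalent fixed-discount criterion: E[sum_{t<tau} c_t] = sum_t (1-p)^t c_t
beta = 1 - p
discounted = sum((beta ** t) * c for t, c in enumerate(costs))
print(estimate, "vs", discounted)
```

The identity E[sum_{t<tau} c_t] = sum_t P(tau > t) c_t is what turns the random stopping time into a per-stage discount, the one-line core of the equivalence the abstract describes.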

18.
This paper proposes an approach for the robust averaged control of random vibrations for the Bernoulli–Euler beam equation under uncertainty in the flexural stiffness and in the initial conditions. The problem is formulated in the framework of optimal control theory and provides a functional setting general enough to include different types of random variables and second-order random fields as sources of uncertainty. The second-order statistical moment of the random system response at the control time is incorporated in the cost functional as a measure of robustness. The numerical resolution method combines a classical descent method with an adaptive anisotropic stochastic collocation method for the numerical approximation of the statistics of interest. The direct and adjoint stochastic systems are uncoupled, which makes it possible to exploit parallel computing architectures to solve the set of deterministic problems that arise from the stochastic collocation method. As a result, problems with a relatively large number of random variables can be solved at a reasonable computational cost. Two numerical experiments illustrate both the performance of the proposed method and the significant differences that may occur when uncertainty is incorporated in this type of control problem.

19.
Robust solution of monotone stochastic linear complementarity problems
We consider the stochastic linear complementarity problem (SLCP) involving a random matrix whose expectation matrix is positive semi-definite. We show that the expected residual minimization (ERM) formulation of this problem has a nonempty and bounded solution set if the expected value (EV) formulation, which reduces to the LCP with the positive semi-definite expectation matrix, has a nonempty and bounded solution set. We give a new error bound for the monotone LCP and use it to show that solutions of the ERM formulation are robust in the sense that they have minimal sensitivity with respect to random parameter variations in the SLCP. Numerical examples, including a stochastic traffic equilibrium problem, are given to illustrate the characteristics of the solutions.
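A minimal sketch of the ERM formulation for a scalar SLCP (all distributions invented): the natural residual min(x, M(w)x + q(w)) of the complementarity condition is squared, averaged over samples, and minimized over x >= 0.

```python
import random

random.seed(5)
# Random scalar LCP data: M(w) > 0 with E[M] = 1, q(w) around -1
samples = [(random.uniform(0.5, 1.5), random.uniform(-1.2, -0.8))
           for _ in range(500)]

def expected_residual(x):
    """ERM objective: E[ min(x, M(w)*x + q(w))^2 ], using the natural
    residual of the condition x >= 0, Mx + q >= 0, x*(Mx + q) = 0."""
    return sum(min(x, m * x + q) ** 2 for m, q in samples) / len(samples)

# grid search over x >= 0 for the ERM solution
grid = [i * 0.01 for i in range(301)]
x_erm = min(grid, key=expected_residual)
print("ERM solution:", x_erm)
```

Note that the ERM minimizer need not coincide with the EV solution, which here is x = 1 for the expectation data E[M] = 1, E[q] = -1; the ERM point trades a small bias for lower sensitivity to the random data.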

20.
A practical industrial process is usually a dynamic process involving uncertainty. Stochastic constraints can be used in industrial process modeling when system state and/or control input constraints cannot be strictly satisfied. Optimal control of switched systems with stochastic constraints can thus address practical industrial process problems with different modes. Obtaining an analytical solution of this optimal control problem is usually very difficult due to the discrete nature of the switching law and the complexity of the stochastic constraints. To obtain a numerical solution, the problem is formulated as a constrained nonlinear parameter selection problem (CNPSP) based on a relaxation transformation (RT) technique, an adaptive sample approximation (ASA) method, a smooth approximation (SA) technique, and a control parameterization (CP) method. A penalty function-based random search (PFRS) algorithm is then designed for solving the CNPSP, based on a novel search rule-based penalty function (NSRPF) method and a novel random search (NRS) algorithm. The convergence results show that the proposed method is globally convergent. Finally, an optimal control problem in automobile test-driving with gear shifts (ATGS) is extended to illustrate the effectiveness of the proposed method by taking some stochastic constraints into account. Numerical results show that, compared with other typical methods, the proposed method is less conservative and obtains stable and robust performance under small perturbations in the initial system state. In addition, to balance the computational effort and the accuracy of the numerical solution, a tolerance setting method is provided by means of numerical analysis.
