Similar Articles
 20 similar articles found (search time: 31 ms)
1.
We consider a general nonlinear time-delay system with state-delays as control variables. The problem of determining optimal values for the state-delays to minimize overall system cost is a non-standard optimal control problem, called an optimal state-delay control problem, that cannot be solved using existing optimal control techniques. We show that this optimal control problem can be formulated as a nonlinear programming problem in which the cost function is an implicit function of the decision variables. We then develop an efficient numerical method for determining the cost function's gradient. This method, which involves integrating an auxiliary impulsive system backwards in time, can be combined with any standard gradient-based optimization method to solve the optimal state-delay control problem effectively. We conclude the paper by discussing applications of our approach to parameter identification and delayed feedback control.
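The nonlinear programming formulation above can be sketched on a toy problem: treat a single state-delay tau as the decision variable, evaluate the implicit cost by simulating the delay system, and hand the result to a standard optimizer. The scalar dynamics x'(t) = -x(t - tau), the quadratic cost, and the derivative-free scalar search below are illustrative assumptions, not the paper's model or its adjoint-based gradient method.

```python
# Hedged sketch: a state-delay tau as decision variable in a nonlinear
# programming problem whose cost is an implicit function of tau.
# Assumptions: toy dynamics x'(t) = -x(t - tau), constant unit history,
# quadratic running cost; finite differences replace the paper's
# auxiliary-impulsive-system gradient.
import numpy as np
from scipy.optimize import minimize_scalar

def cost(tau, T=10.0, dt=0.01):
    """Simulate x'(t) = -x(t - tau), x(t) = 1 for t <= 0, by forward Euler
    and return the integral of x(t)^2 (the implicit system cost)."""
    n = int(T / dt)
    lag = max(1, int(round(tau / dt)))
    x = np.ones(n + lag)            # history buffer holds x(t) = 1 for t <= 0
    for k in range(lag, n + lag - 1):
        x[k + 1] = x[k] - dt * x[k - lag]
    return dt * np.sum(x[lag:] ** 2)

# A standard bounded scalar optimiser stands in for the gradient method.
res = minimize_scalar(cost, bounds=(0.01, 2.0), method="bounded")
```

Here `minimize_scalar` stands in for the gradient-based method; the paper's auxiliary impulsive system would supply exact gradients instead of relying on a derivative-free search.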

2.
We consider the problem of determining an optimal driving strategy in a train control problem with a generalised equation of motion. We assume that the journey must be completed within a given time and seek a strategy that minimises fuel consumption. On the one hand we consider the case where continuous control can be used and on the other hand we consider the case where only discrete control is available. We pay particular attention to a unified development of the two cases. For the continuous control problem we use the Pontryagin principle to find necessary conditions on an optimal strategy and show that these conditions yield key equations that determine the optimal switching points. In the discrete control problem, which is the typical situation with diesel-electric locomotives, we show that for each fixed control sequence the cost of fuel can be minimised by finding the optimal switching times. The corresponding strategies are called strategies of optimal type and in this case we use the Kuhn–Tucker equations to find key equations that determine the optimal switching times. We note that the strategies of optimal type can be used to approximate as closely as we please the optimal strategy obtained using continuous control and we present two new derivations of the key equations. We illustrate our general remarks by reference to a typical train control problem.
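For the discrete-control case, the idea that a fixed control sequence leaves only the switching times to be determined can be sketched as follows. The point-mass model below (constant accelerations in power, coast, and brake phases) and its numbers are illustrative assumptions, not the paper's generalised equation of motion; with the journey time and distance fixed, the two switching times are pinned down by two key equations, solved here numerically.

```python
# Hedged sketch: switching times for the fixed control sequence
# power -> coast -> brake. Assumed toy model: constant accelerations
# a (power), -c (coast), -b (brake); journey time T and distance D given.
import numpy as np
from scipy.optimize import fsolve

a, c, b = 1.0, 0.1, 2.0      # phase accelerations (m/s^2), assumed values
T, D = 100.0, 1500.0         # journey time (s) and distance (m)

def residuals(switch):
    t1, t2 = switch           # power->coast and coast->brake times
    v1 = a * t1               # speed at end of power phase
    v2 = v1 - c * (t2 - t1)   # speed at start of braking
    # Key equation 1: the train stops exactly at time T.
    stop = v2 - b * (T - t2)
    # Key equation 2: the total distance covered equals D.
    dist = (0.5 * a * t1**2
            + v1 * (t2 - t1) - 0.5 * c * (t2 - t1)**2
            + v2 * (T - t2) - 0.5 * b * (T - t2)**2)
    return [stop, dist - D]

t1, t2 = fsolve(residuals, x0=[20.0, 80.0])
```

In the paper the analogous key equations come from the Kuhn–Tucker conditions; here they are simply the terminal-speed and distance conditions for this toy model.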

3.
This paper deals with optimal control problems for a non-stationary Stokes system. We study a simultaneous distributed-boundary optimal control problem with distributed observation. We prove the existence and uniqueness of a simultaneous optimal control and give the first-order optimality condition for this problem. We also consider a distributed optimal control problem and a boundary optimal control problem, and we obtain estimates between the simultaneous optimal control and the optimal controls of the latter two problems. Finally, some regularity results are presented.

4.
In this paper, we consider a class of optimal control problems involving impulsive systems in which some of the coefficients are subject to variation. We formulate this as a two-stage optimal control problem. We first formulate the optimal impulsive control problem with all coefficients assigned their nominal values. This is a standard optimal impulsive control problem that can be solved by many existing computational optimal control techniques, such as the control parameterization technique used in conjunction with the time scaling transform; the optimal control software package MISER 3.3 is applicable. We then formulate the second optimal impulsive control problem, in which the sensitivity to the variation of the coefficients is minimized subject to an additional constraint specifying the allowable degradation of the optimal cost. The gradient formulae of the cost functional for the second problem are obtained. On this basis, a gradient-based computational method is established, and MISER 3.3 can again be applied. For illustration, two numerical examples are solved using the proposed method.
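A scalar toy version of this two-stage scheme might look as follows; the quadratic cost J, the chosen sensitivity measure, and the use of scipy in place of MISER 3.3 are all illustrative assumptions. Stage 1 minimises the nominal cost; stage 2 minimises the sensitivity to a model coefficient p, subject to the cost degrading by no more than an allowance delta.

```python
# Hedged sketch of a two-stage scheme: stage 1 solves the nominal problem,
# stage 2 minimises sensitivity subject to an allowable cost degradation.
# Assumptions: scalar control u, toy cost J(u, p) = p*u^2 - 2u with nominal
# coefficient p0, and |dJ/dp| = u^2 as the sensitivity measure.
import numpy as np
from scipy.optimize import minimize

p0, delta = 1.0, 0.1         # nominal coefficient and allowed cost increase

def cost(u, p=p0):
    return p * u[0] ** 2 - 2.0 * u[0]

def sensitivity(u):
    return u[0] ** 2          # |dJ/dp| evaluated at the nominal coefficient

stage1 = minimize(cost, x0=[0.0])          # stage 1: nominal optimal control
J_star = stage1.fun

stage2 = minimize(                          # stage 2: least-sensitive control
    sensitivity, x0=stage1.x, method="SLSQP",
    constraints=[{"type": "ineq",
                  "fun": lambda u: J_star + delta - cost(u)}],
)
```

The stage-2 solution trades a cost increase of at most delta for a strictly smaller sensitivity than the nominal optimum.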

5.
In this paper, we first design a time optimal control problem for the heat equation with sampled-data controls, and then use it to approximate a time optimal control problem for the heat equation with distributed controls. The study of such a time optimal sampled-data control problem is not easy, because it may have infinitely many optimal controls. We find connections among this problem, a minimal norm sampled-data control problem and a minimization problem, and obtain some properties of these problems. Based on these, we not only build up error estimates for the optimal time and optimal controls between the time optimal sampled-data control problem and the time optimal distributed control problem, in terms of the sampling period, but also prove that these estimates are optimal in some sense.

6.
We consider bilevel optimization problems which can be interpreted as inverse optimal control problems. The lower-level problem is an optimal control problem with a parametrized objective function. The upper-level problem is used to identify the parameters of the lower-level problem. Our main focus is the derivation of first-order necessary optimality conditions. We prove C-stationarity of local solutions of the inverse optimal control problem and give a counterexample to show that strong stationarity might be violated at a local minimizer.

7.
In this paper we consider an optimal control system described by an n-dimensional heat equation with a thermal source. The problem is to find an optimal control which puts the system, in a finite time T, into a stationary regime and minimizes a general objective function. Here we assume there are no constraints on the control. This problem is reduced to a moment problem. We modify the moment problem into one consisting of the minimization of a positive linear functional over a set of Radon measures, and we show that there is an optimal measure corresponding to the optimal control. This optimal measure is approximated by a finite combination of atomic measures. This construction gives rise to a finite-dimensional linear programming problem, whose solution can be used to determine the optimal combination of atomic measures. Using the solution of this linear programming problem, we find a piecewise-constant optimal control function which is an approximate control for the original optimal control problem. Finally, we obtain piecewise-constant optimal controls for two examples of heat equations with a thermal source in one dimension.
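The final linear-programming step described above can be sketched concretely: approximate a positive measure by atomic masses w_i placed at grid points u_i, turn the moment conditions into linear equality constraints, and minimise a positive linear functional of the masses. The particular moments and cost below are illustrative assumptions, not those arising from the heat equation.

```python
# Hedged sketch: minimising a positive linear functional over atomic
# measures under moment constraints. Assumed toy data: unit total mass,
# a prescribed first moment, and the second moment as the cost.
import numpy as np
from scipy.optimize import linprog

u = np.linspace(-1.0, 1.0, 41)        # support grid for the atomic measures
T_total, first_moment = 1.0, 0.3      # assumed moment conditions

res = linprog(
    c=u**2,                            # positive linear functional to minimise
    A_eq=np.vstack([np.ones_like(u), u]),
    b_eq=[T_total, first_moment],
    bounds=[(0, None)] * len(u),       # atomic masses are non-negative
    method="highs",
)
weights = res.x                        # w_i: mass placed at grid point u_i
```

The optimal masses can then be read as durations spent at each control level, which is what yields a piecewise-constant approximate control.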

8.
We consider a continuous-time stochastic control problem with partial observations. Under some assumptions, we reduce the problem in successive approximation steps to a discrete-time, complete-observation stochastic control problem with a finite number of possible states and controls. For the latter problem an optimal control can always be explicitly computed. Convergence of the approximations is shown, which in turn implies that an optimal control for the last-stage approximating problem is ε-optimal for the original problem.

9.
We consider a linear dynamical system for which we need to reconstruct the control input on the basis of a noisy output. We form the corresponding family of parametric optimal control problems in which the performance criterion contains terms corresponding to the regularization of the problem and the removal of speckle noise from the output signal. The weight coefficient multiplying the noise-filtration term plays the role of a parameter in the family of problems. We prove a theorem describing the properties of solutions of the parametric problems in a neighborhood of a regular point, analyze the differential properties of these solutions, and derive formulas for computing the derivatives of the optimal trajectory and the optimal control with respect to the parameter. We suggest a simple method for constructing approximate solutions of perturbed optimal control problems. These results permit one to control the accuracy of the reconstruction of the control in the original identification problem. An illustrative example is considered.

10.
In this paper, the task of achieving the soft landing of a lunar module such that the fuel consumption and the flight time are minimized is formulated as an optimal control problem. The motion of the lunar module is described in a three-dimensional coordinate system. We obtain the form of the optimal closed-loop control law, in which a feedback gain matrix is involved. It is then shown that this feedback gain matrix satisfies a Riccati-like matrix differential equation. The optimal control problem is first solved as an open-loop optimal control problem by using a time scaling transform and the control parameterization method. Then, by virtue of the relationship between the optimal open-loop control and the optimal closed-loop control along the optimal trajectory, we present a practical method to calculate an approximate optimal feedback gain matrix, without having to solve an optimal control problem involving the complex Riccati-like matrix differential equation coupled with the original system dynamics. Simulation results show that the proposed approach is highly effective.
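The final step, recovering a feedback gain from the open-loop solution, can be sketched as a least-squares fit: along the optimal trajectory the open-loop control coincides with the closed-loop law u*(t) = -K x*(t), so sampled pairs (x*, u*) determine an approximate gain without touching the Riccati-like equation. The synthetic data below, generated from a known constant gain on a double integrator, is an illustrative assumption standing in for a computed open-loop optimal solution.

```python
# Hedged sketch: fit an approximate feedback gain K to sampled open-loop
# data via least squares, using u = -K x along the trajectory.
# Assumptions: double-integrator dynamics and a known constant gain
# K_true used only to generate synthetic "open-loop optimal" samples.
import numpy as np

K_true = np.array([[2.0, 3.0]])        # gain used to generate the data
A = np.array([[0.0, 1.0], [0.0, 0.0]]) # double-integrator dynamics
B = np.array([[0.0], [1.0]])
dt, x = 0.01, np.array([1.0, 0.0])

X, U = [], []
for _ in range(500):                   # record (x*, u*) along one trajectory
    u = -K_true @ x
    X.append(x.copy()); U.append(u.copy())
    x = x + dt * (A @ x + B @ u)       # forward Euler step

X, U = np.array(X), np.array(U)        # shapes (500, 2) and (500, 1)
K_fit = -np.linalg.lstsq(X, U, rcond=None)[0].T   # solve u ≈ -K x for K
```

Because the samples satisfy u = -K_true x exactly, the fitted gain recovers K_true; with genuine open-loop optimal data the fit would be approximate in the same way the paper's gain matrix is.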

11.
We consider integer-restricted optimal control of systems governed by abstract semilinear evolution equations. This includes the problem of optimal control design for certain distributed parameter systems endowed with multiple actuators, where the task is to minimize costs associated with the dynamics of the system by choosing, for each instant in time, one of the actuators together with ordinary controls. We consider relaxation techniques that are already used successfully for mixed-integer optimal control of ordinary differential equations. Our analysis yields sufficient conditions such that the optimal value and the optimal state of the relaxed problem can be approximated with arbitrary precision by a control satisfying the integer restrictions. The results are obtained by semigroup theory methods. The approach is constructive and gives rise to a numerical method. We supplement the analysis with numerical experiments.
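The rounding step behind such relaxation methods can be sketched with sum-up rounding: a relaxed (fractional) actuator selection alpha on a time grid is turned into an integer selection beta whose integrated deviation from alpha stays within a few mesh widths. Sum-up rounding is a standard mixed-integer optimal control tool and an assumption here, not necessarily the construction used in the paper.

```python
# Hedged sketch: sum-up rounding of a relaxed actuator selection.
# Assumption: alpha is given on a uniform time grid with step dt and each
# row lies on the unit simplex (one actuator must be active per interval).
import numpy as np

def sum_up_rounding(alpha, dt):
    """alpha: (N, M) relaxed controls whose rows lie on the unit simplex.
    Returns a binary (N, M) control selecting one actuator per interval."""
    N, M = alpha.shape
    beta = np.zeros_like(alpha)
    accum = np.zeros(M)                 # running integral of alpha - beta
    for k in range(N):
        accum += alpha[k] * dt
        j = int(np.argmax(accum))       # actuator with the largest deficit
        beta[k, j] = 1.0
        accum[j] -= dt
    return beta

# Two actuators with equal relaxed weights: the rounded control alternates.
alpha = np.full((10, 2), 0.5)
beta = sum_up_rounding(alpha, dt=0.1)
```

The bound checked below, that the running integral of alpha - beta stays within a few mesh widths, is what lets the relaxed optimal value be approached by integer-feasible controls as the grid is refined.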

12.
A minimax optimal control problem with infinite horizon is studied. We analyze a relaxation of the controls, which allows us to consider a generalization of the original problem that not only has existence of an optimal control but also enables us to approximate the infinite-horizon problem with a sequence of finite-horizon problems. We give a set of conditions that are sufficient to solve directly, without relaxation, the infinite-horizon problem as the limit of finite-horizon problems.

13.
We consider a relaxed optimal control problem for systems defined by nonlinear parabolic partial differential equations with distributed control. The problem is completely discretized by using a finite-element approximation scheme with piecewise linear states and piecewise constant controls. Existence of optimal controls and necessary conditions for optimality are derived for both the continuous and the discrete problem. We then prove that accumulation points of sequences of discrete optimal [resp. extremal] controls are optimal [resp. extremal] for the continuous problem.

14.
We define a new class of optimal control problems and show that it is the largest class of nonregular control problems in which every admissible process satisfying the Extended Pontryagin Maximum Principle is an optimal solution. In this class of problems the local and global minima coincide. A dual problem is also proposed, which may be seen as a generalization of the Mond–Weir-type dual problem, and it is shown that the 2-invexity notion is a necessary and sufficient condition for establishing weak, strong, and converse duality results between a nonregular optimal control problem and its dual problem. We also present an example to illustrate our results.

15.
We address the minimum-time guidance problem for the so-called isotropic rocket in the presence of wind, under an explicit constraint on the acceleration norm. We consider guidance both to a prescribed terminal position and to a circular target set, with free terminal velocity in each case. We employ standard techniques from optimal control theory to characterize the structure of the optimal guidance law as well as the corresponding minimum time-to-go function. It turns out that the complete characterization of the solution to the optimal control problem reduces to the solution of a system of nonlinear equations in triangular form. Numerical simulations illustrating the theoretical developments are presented.

16.
We consider the minimization problem for an integral functional whose integrand is not convex in the control, on solutions of a control system described by a fractional differential equation with mixed nonconvex constraints on the control. A relaxation problem is treated along with the original problem. It is proved that, under general assumptions, the relaxation problem has an optimal solution, and that for each optimal solution there is a minimizing sequence of the original problem that converges to the optimal solution with respect to the trajectory, the control, and the functional simultaneously, in appropriate topologies.

17.
We consider an optimal control problem for the time-dependent Schrödinger equation modeling molecular dynamics. The dynamics can be steered by interactions with a tuned laser field, and the problem of designing an optimal field can be posed as an optimal control problem. We reformulate the optimization problem by using a Fourier transform of the electric field and narrowing the frequency band. The resulting problem is less memory-intensive and can be solved with a superlinearly convergent quasi-Newton method. We show computational results for a Raman-transition example and give numerical evidence that our method can outperform the standard monotonically convergent algorithm.
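The structure of this reformulation can be sketched on a toy fitting problem: represent the field by a few Fourier coefficients restricted to a chosen frequency band, so the optimizer works on a small coefficient vector, and hand the reduced problem to a quasi-Newton method (L-BFGS here). The quadratic tracking objective is an illustrative stand-in for the quantum-dynamical control functional.

```python
# Hedged sketch: optimise a band-limited field through its Fourier
# coefficients with a quasi-Newton method. Assumptions: three retained
# frequencies, a cosine basis, and a simple field-tracking objective.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 20.0, 1000)
omegas = np.array([0.5, 1.0, 1.5])      # frequencies kept in the band
basis = np.cos(np.outer(t, omegas))     # (1000, 3) design matrix
target = np.cos(1.0 * t)                # assumed target field

def objective(c):
    r = basis @ c - target
    return 0.5 * r @ r

def gradient(c):
    return basis.T @ (basis @ c - target)

res = minimize(objective, x0=np.zeros(3), jac=gradient, method="L-BFGS-B")
```

Because the optimizer sees only the three band-limited coefficients rather than the full time-discretized field, the memory footprint stays small, which mirrors the motivation for the reformulation above.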

18.
This paper is concerned with the analysis of a control problem related to the optimal management of a bioreactor. This real-world problem is formulated as a state-control constrained optimal control problem. We analyze the state system (a complex system of partial differential equations modelling the eutrophication processes for non-smooth velocities), and we prove that the control problem admits at least one solution. Finally, we present a detailed derivation of a first-order optimality condition, involving a suitable adjoint system, in order to characterize these optimal solutions, together with some computational results.

19.
We consider the problem of optimal control over differential equations with interaction. It is shown that the optimal control satisfies the maximum principle and there exists a generalized optimal control. In the analyzed problem, we encounter certain new technical features as compared with the ordinary problem of optimal control. Translated from Ukrains'kyi Matematychnyi Zhurnal, Vol. 60, No. 8, pp. 1099–1109, August, 2008.

20.
In this paper, the optimal control problem is governed by weakly coupled parabolic PDEs and involves pointwise state and control constraints. We use a measure-theoretical method to solve this problem. In order to use the weak solution of the problem, the problem is first transformed into measure form. This problem is then reduced to a linear programming problem, from which we obtain an optimal measure that is approximated by a finite combination of atomic measures. We find piecewise-constant optimal control functions which are an approximate control for the original optimal control problem.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号