Similar Documents
A total of 20 similar documents were retrieved.
1.
In this paper, we present a new approach to solving a class of optimal discrete-valued control problems. A problem of this type is first transformed into an equivalent two-level optimization problem combining a discrete optimization problem with a standard optimal control problem. The standard optimal control problem can be solved by existing optimal control software packages such as MISER 3.2, while a discrete filled function method is developed to solve the discrete optimization problem. A numerical example is solved to illustrate the efficiency of our method.
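
To make the two-level structure concrete, a generic sketch (notation introduced here for illustration, not taken from the paper) is

\[
\min_{v \in V} J^{*}(v), \qquad
J^{*}(v) \;=\; \min_{u(\cdot)} \int_0^T L\big(x(t),u(t),v\big)\,dt
\quad \text{s.t.}\quad \dot{x}(t)=f\big(x(t),u(t),v\big),\ x(0)=x_0,
\]

where V is the finite set of admissible discrete control values. The inner problem, for each fixed v, is the standard optimal control problem handled by a package such as MISER 3.2, while the outer minimization over V is the discrete optimization problem addressed by the discrete filled function method.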

2.
The problem of mixed discrete-continuous task planning for mechanical systems, such as aerial drones or other autonomous units, can often be treated as a sequence of point-to-point trajectories. In this work, we consider the problem of optimal trajectory planning under a combined completion-time and quadratic energy criterion, for a straight point-to-point path of a second-order system subject to state (velocity) and control (acceleration) constraints. The solution is obtained and proved to be optimal using the Pontryagin Maximum Principle. Simulation results for different cases are presented and compared with a conventional numerical optimal control solver.
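
A minimal instance of this kind of time-energy trade-off along a straight line is the double-integrator formulation below (an illustrative sketch; the weight \kappa, the bounds and the boundary conditions are assumptions, not the paper's data):

\[
\min_{u(\cdot),\,T}\ \int_0^T \big(1+\kappa\,u(t)^2\big)\,dt
\quad\text{s.t.}\quad \dot{s}=v,\ \ \dot{v}=u,\ \ |v(t)|\le v_{\max},\ \ |u(t)|\le u_{\max},
\]
\[
s(0)=0,\quad s(T)=d,\quad v(0)=v(T)=0.
\]

For such a sketch, the Pontryagin Maximum Principle restricts the optimal control to saturated acceleration/braking arcs, possible cruise arcs on the velocity bound, and interior arcs on which u is proportional to the velocity costate.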

3.
In this paper we present an application of optimal control theory for partial differential equations, combined with multi-objective optimization techniques, to formulate and solve an economic-ecological problem related to the management of a wastewater treatment system. The problem is formulated as a parabolic multi-objective optimal control problem and is studied from a non-cooperative point of view (looking for a Nash equilibrium) and also from a cooperative point of view (looking for Pareto-optimal solutions “better” than the Nash equilibrium). In both cases we establish the existence of solutions, give a useful characterization of them, and propose a numerical algorithm to solve the problem. Finally, a numerical experiment for a real-world situation in the estuary of Vigo (NW Spain) is presented.

4.
This paper considers an optimal control problem for the dynamics of a predator-prey model. The predator population has to choose the predation intensity over time so as to maximize the present value of the utility stream derived from consuming prey. The utility function is assumed to be convex for small levels of consumption and concave otherwise. The problem is solved using the maximum principle, and different time patterns of the optimal solution are obtained for small, medium and high rates of time preference. The model has features of both convex and concave optimal control problems, and therefore phase-plane analysis has to be combined with the problem of synthesis of bang-bang, singular and chattering solution pieces.
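
A stripped-down version of such a model (the functional forms are illustrative assumptions, not the paper's specification) is

\[
\max_{c(\cdot)\ge 0}\ \int_0^{\infty} e^{-\rho t}\, U\big(c(t)\,x(t)\big)\,dt
\quad\text{s.t.}\quad \dot{x}(t)=g\big(x(t)\big)-c(t)\,x(t),\quad x(0)=x_0,
\]

where x is the prey stock, c the predation intensity, g the natural growth of the prey, \rho the rate of time preference, and U the convex-concave utility of consumption. It is the convex-concave shape of U that forces the combination of bang-bang, singular and chattering pieces described above.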

5.
We consider the problem of determining an optimal driving strategy in a train control problem with a generalised equation of motion. We assume that the journey must be completed within a given time and seek a strategy that minimises fuel consumption. On the one hand we consider the case where continuous control can be used, and on the other hand the case where only discrete control is available. We pay particular attention to a unified development of the two cases. For the continuous control problem we use the Pontryagin principle to find necessary conditions on an optimal strategy and show that these conditions yield key equations that determine the optimal switching points. In the discrete control problem, which is the typical situation with diesel-electric locomotives, we show that for each fixed control sequence the fuel cost can be minimised by finding the optimal switching times. The corresponding strategies are called strategies of optimal type, and in this case we use the Kuhn–Tucker equations to find key equations that determine the optimal switching times. We note that the strategies of optimal type can be used to approximate, as closely as we please, the optimal strategy obtained using continuous control, and we present two new derivations of the key equations. We illustrate our general remarks by reference to a typical train control problem.
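
For orientation, a generic form of the fixed-time, minimum-fuel train problem is (illustrative notation; the paper's generalised equation of motion is broader than this sketch):

\[
\min_{u(\cdot)}\ \int_0^T \varphi\big(u(t),v(t)\big)\,dt
\quad\text{s.t.}\quad \dot{x}=v,\ \ \dot{v}=u-r(v)-g(x),\ \ u(t)\in U,
\]
\[
x(0)=0,\quad x(T)=X,\quad v(0)=v(T)=0,\quad T \text{ given},
\]

where \varphi is the fuel-consumption rate, r(v) the resistance, g(x) the track-gradient force, and U is either an interval (continuous control) or a finite set of throttle settings (discrete control). In the discrete case only the switching times between settings remain free, which is why the Kuhn–Tucker conditions of a finite-dimensional problem deliver the key equations for strategies of optimal type.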

6.
We consider a stochastic control problem for a random evolution. We study the Bellman equation of the problem and we prove the existence of an optimal stochastic control which is Markovian. This problem enables us to approximate the general problem of the optimal control of solutions of stochastic differential equations.

7.
The classical finite-dimensional linear-quadratic optimal control problem is revisited. A new linear-quadratic control problem, with linear state penalty terms but without quadratic state penalty terms, is introduced. An optimal control exists and the closed-form optimal solution is given. Remarkably, feedback action plays no role and state information does not feature in the optimal control. The optimal cost function, rather than being quadratic, is linear in the initial state.
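
A sketch of the problem class in question (generic notation, not the paper's) is

\[
\min_{u(\cdot)}\ \int_0^T \Big( q(t)^{\top}x(t) + \tfrac12\,u(t)^{\top}R(t)\,u(t) \Big)\,dt
\quad\text{s.t.}\quad \dot{x}(t)=A(t)x(t)+B(t)u(t),\quad x(0)=x_0,
\]

with R(t) positive definite. Because the state enters the cost only linearly, the costate equation \dot{\lambda}=-A^{\top}\lambda-q does not involve the state, so the minimizing control u^{*}=-R^{-1}B^{\top}\lambda is a function of time alone; this is consistent with the remark above that feedback plays no role, and it makes the optimal cost affine in the initial state, with the x_0-dependent part linear.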

8.
A general deterministic time-inconsistent optimal control problem is formulated for ordinary differential equations. To find a time-consistent equilibrium value function and the corresponding time-consistent equilibrium control, a non-cooperative N-person differential game (but essentially cooperative in some sense) is introduced. Under certain conditions, it is proved that the open-loop Nash equilibrium value function of the N-person differential game converges to a time-consistent equilibrium value function of the original problem, which is the value function of a time-consistent optimal control problem. Moreover, it is proved that any optimal control of the time-consistent limit problem is a time-consistent equilibrium control of the original problem.

9.
In this paper, we consider a class of optimal control problems involving impulsive systems in which some of the coefficients are subject to variation. We formulate this as a two-stage optimal control problem. We first formulate the optimal impulsive control problem with all coefficients set to their nominal values. This is a standard optimal impulsive control problem that can be solved by many existing optimal control computational techniques, such as the control parameterization technique used in conjunction with the time scaling transform; the optimal control software package MISER 3.3 is applicable. We then formulate the second optimal impulsive control problem, in which the sensitivity to variation of the coefficients is minimized subject to an additional constraint specifying how far the cost may deviate from the optimal cost. The gradient formulae of the cost functional for the second optimal control problem are derived. On this basis, a gradient-based computational method is established, and the optimal control software MISER 3.3 can be applied. For illustration, two numerical examples are solved using the proposed method.
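
The two-stage structure can be sketched as follows, with \zeta denoting the vector of coefficients subject to variation, \zeta_0 its nominal value, and \varepsilon the tolerated loss of optimality (all three symbols introduced here for illustration):

\[
\text{Stage 1:}\quad J^{*}=\min_{u}\,J(u;\zeta_0),
\qquad
\text{Stage 2:}\quad \min_{u}\ \Big\|\,\nabla_{\zeta} J(u;\zeta)\big|_{\zeta=\zeta_0}\Big\|^{2}
\ \ \text{s.t.}\ \ J(u;\zeta_0)\le J^{*}+\varepsilon .
\]

The gradient formulae mentioned above are what allow Stage 2, like Stage 1, to be handled by gradient-based software such as MISER 3.3.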

10.
We present an iterative domain decomposition method for the optimal control of systems governed by linear partial differential equations. The equations can be of elliptic, parabolic, or hyperbolic type. The spatial region supporting the partial differential equations is decomposed, and the original global optimal control problem is reduced to a sequence of similar local optimal control problems set on the subdomains. The local problems communicate through transmission conditions, which take the form of carefully chosen boundary conditions on the interfaces between the subdomains. This domain decomposition method can be combined with any suitable numerical procedure to solve the local optimal control problems. We remark that it offers good potential for using feedback laws (synthesis) in the case of time-dependent partial differential equations. A test problem for the wave equation is solved using this combination of synthesis and domain decomposition methods. Numerical results are presented and discussed. Details on discretization and implementation can be found in Ref. 1.

11.
We describe a technique for a posteriori error estimation suitable for optimal control problems governed by evolution equations solved by the method of lines. It is applied to control problems governed by parabolic, convection-diffusion and hyperbolic equations. The error is measured in the L2-norm over the space-time cylinder combined with a special time-weighted energy norm.

12.
In this paper, we establish the existence of an optimal control for an optimal control problem in which the state of the system is defined by a variational inequality problem with monotone-type mappings. Moreover, as an application, we obtain several existence results for optimal controls of problems in which the system is defined by a quasilinear elliptic variational inequality problem with an obstacle.

13.
This paper considers the problem of optimizing the institutional advertising expenditure for a firm that produces two products. The problem is formulated as a minimum-time control problem for the dynamics of an extended Vidale-Wolfe advertising model, the optimal control being the rate of institutional advertising that minimizes the time to attain the specified target market shares for the two products. The attainable set and the optimal control are obtained by applying the recent theory developed by Hermes and Haynes, which extends the Green's theorem approach to higher dimensions. It is shown that the optimal control is a strict bang-bang control. An interesting side result is that the singular arc obtained from the Green's theorem application turns out to be a maximum-time solution over the set of all feasible controls. This result clarifies the connection between the Green's theorem approach and the maximum principle approach.
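
For reference, the classical single-product Vidale-Wolfe dynamics underlying the extended two-product model can be written in market-share form (parameter names are illustrative):

\[
\dot{x}(t) = r\,u(t)\,\big(1-x(t)\big) - \delta\,x(t), \qquad 0 \le u(t) \le \bar{u},
\]

where x is the market share, u the advertising rate, r the advertising effectiveness and \delta the decay rate; the minimum-time problem is to steer the shares of both products (one such state per product in the extended model) to their prescribed targets as quickly as possible.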

14.
This paper presents the application of the multiple shooting technique to minimax optimal control problems (optimal control problems with a Chebyshev performance index). A standard transformation is used to convert the minimax problem into an equivalent optimal control problem with state variable inequality constraints. Using this technique, the highly developed theory of necessary conditions for state-restricted optimal control problems can be applied advantageously. It is shown that, in general, these necessary conditions lead to a boundary-value problem with switching conditions, which can be treated numerically by a special version of the multiple shooting algorithm. The method is tested on the problem of the optimal heating and cooling of a house. This application shows some typical difficulties arising with minimax optimal control problems, i.e., the estimation of the switching structure, which depends on the parameters of the problem. This difficulty can be overcome by a careful application of a continuity method. Numerical solutions for the example are presented which demonstrate the efficiency of the proposed method.
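
The standard transformation referred to above replaces the Chebyshev index by an auxiliary bound that becomes a state constraint (generic notation):

\[
\min_{u(\cdot)}\ \max_{t\in[0,T]} g\big(x(t),t\big)
\quad\Longleftrightarrow\quad
\min_{u(\cdot),\,\gamma}\ \gamma
\quad\text{s.t.}\quad g\big(x(t),t\big)\le\gamma\ \ \text{for all } t\in[0,T],
\]

where \gamma can be adjoined as an additional constant state with \dot{\gamma}=0. The resulting state-restricted problem is the one whose necessary conditions produce the multipoint boundary-value problem with switching and junction conditions treated by multiple shooting.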

15.
We consider the limiting behavior of optimal bang-bang controls as a family of Sobolev equations formally converges to a wave equation. The weak-star limit of the sequence of bang-bang controls is an optimal control for the wave equation problem. The associated optimal states converge strongly and, for the optimal time problem, the optimal times converge to the optimal time for the wave equation. This work was supported in part by the National Science Foundation, Grant No. MCS-79-02037.

16.
In this paper, we identify a new class of stochastic linear-convex optimal control problems, whose solution can be obtained by solving appropriate equivalent deterministic optimal control problems. The term linear-convex is meant to imply that the dynamics is linear and the cost function is convex in the state variables, linear in the control variables, and separable. Moreover, some of the coefficients in the dynamics are allowed to be random and the expectations of the control variables are allowed to be constrained. For any stochastic linear-convex problem, the equivalent deterministic problem is obtained. Furthermore, it is shown that the optimal feedback policy of the stochastic problem is affine in its current state, where the affine transformation depends explicitly on the optimal solution of the equivalent deterministic problem in a simple way. The result is illustrated by its application to a simple stochastic inventory control problem. This research was supported in part by NSERC Grant A4617, by SSHRC Grant 410-83-0888, and by an INRIA Post-Doctoral Fellowship.

17.
This paper deals with a stochastic optimal control problem where the randomness is essentially concentrated in the stopping time terminating the process. If the stopping time is characterized by an intensity depending on the state and control variables, one can reformulate the problem equivalently as an infinite-horizon optimal control problem. Applying dynamic programming and minimum principle techniques to this associated deterministic control problem yields specific optimality conditions for the original stochastic control problem. It is also possible to characterize extremal steady states. The model is illustrated by an example related to the economics of technological innovation. This research has been supported by NSERC-Canada, Grants 36444 and A4952; by FCAR-Québec, Grant 88EQ3528, Actions Structurantes; and by MESS-Québec, Grant 6.1/7.4(28).

18.
R. Dehghan & M. Keyanpour, Optimization, 2017, 66(7): 1157-1176
This paper presents a numerical scheme for solving fractional optimal control problems in which the fractional derivative is understood in the Riemann–Liouville sense. The proposed method, based upon the method of moments, converts the fractional optimal control problem into a semidefinite optimization problem; that is, the nonlinear optimal control problem is converted into a convex optimization problem. The Grünwald–Letnikov formula is used as an approximation of the fractional derivative. The solution of the fractional optimal control problem is found by solving the semidefinite optimization problem. Finally, numerical examples are presented to show the performance of the method.
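
As a small illustration of the discretisation ingredient, the following Python sketch implements the Grünwald–Letnikov approximation of a fractional derivative on a uniform grid (a generic sketch only; it does not reproduce the paper's method-of-moments or semidefinite reformulation):

import numpy as np

def gl_weights(alpha, n):
    # Coefficients w_k = (-1)^k * binomial(alpha, k), computed recursively.
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f_vals, alpha, h):
    # Grunwald-Letnikov approximation of the order-alpha derivative on a uniform
    # grid with step h: D^alpha f(t_i) ~ h^(-alpha) * sum_{k=0}^{i} w_k f(t_{i-k}).
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    d = np.zeros(n + 1)
    for i in range(n + 1):
        d[i] = np.dot(w[: i + 1], f_vals[i::-1]) / h ** alpha
    return d

# Check against a known case: D^{1/2} t = 2*sqrt(t/pi), evaluated at t = 1.
h = 1.0e-3
t = np.arange(0.0, 1.0 + h, h)
print(gl_derivative(t, 0.5, h)[-1], 2.0 * np.sqrt(1.0 / np.pi))

Within a scheme of the kind described above, such an approximation turns the fractional dynamics into algebraic relations between grid values of the state, to which the optimization step can then be applied.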

19.
We consider the problem of minimizing an integral functional whose integrand is not convex in the control, over the solutions of a control system described by a fractional differential equation with mixed nonconvex constraints on the control. A relaxation problem is treated alongside the original problem. It is proved that, under general assumptions, the relaxation problem has an optimal solution, and that for each optimal solution there is a minimizing sequence of the original problem that converges to the optimal solution with respect to the trajectory, the control, and the functional simultaneously, in appropriate topologies.

20.
We consider a general nonlinear time-delay system with state-delays as control variables. The problem of determining optimal values for the state-delays to minimize overall system cost is a non-standard optimal control problem, called an optimal state-delay control problem, that cannot be solved using existing optimal control techniques. We show that this optimal control problem can be formulated as a nonlinear programming problem in which the cost function is an implicit function of the decision variables. We then develop an efficient numerical method for determining the cost function's gradient. This method, which involves integrating an auxiliary impulsive system backwards in time, can be combined with any standard gradient-based optimization method to solve the optimal state-delay control problem effectively. We conclude the paper by discussing applications of our approach to parameter identification and delayed feedback control.
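
A hedged sketch of the nonlinear-programming viewpoint described above: the delay is treated as a decision variable and the cost is evaluated by simulating the delay system. The scalar test system, cost and bounds below are illustrative assumptions, and a finite-difference gradient (supplied automatically by the solver) is used in place of the paper's auxiliary-system gradient formula:

import numpy as np
from scipy.optimize import minimize

def cost(tau_vec, a=-1.0, b=-0.8, T=10.0, h=0.01):
    # Simulate x'(t) = a*x(t) + b*x(t - tau) by explicit Euler with constant
    # pre-history x(t) = 1 for t <= 0, interpolating the delayed state linearly
    # so that the cost varies smoothly with tau; return the integral of x^2.
    tau = float(tau_vec[0])
    n = int(round(T / h))
    x = np.ones(n + 1)
    J = 0.0
    for i in range(n):
        s = i * h - tau
        if s <= 0.0:
            x_del = 1.0
        else:
            j = int(s / h)
            theta = s / h - j
            x_del = (1.0 - theta) * x[j] + theta * x[j + 1]
        x[i + 1] = x[i] + h * (a * x[i] + b * x_del)
        J += h * x[i] ** 2
    return J

# Treat the state-delay itself as the decision variable and hand the black-box
# cost to a standard gradient-based NLP solver.
res = minimize(cost, x0=[0.5], bounds=[(0.05, 2.0)], method="L-BFGS-B")
print("delay:", res.x[0], "cost:", res.fun)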
