20 similar documents were retrieved.
1.
In the context of ordinary differential equations, shooting techniques are a state-of-the-art solver component, whereas their application in the framework of partial differential equations (PDE) is still at an early stage. We present two multiple shooting approaches for optimal control problems (OCP) governed by parabolic PDE. Direct and indirect shooting for PDE optimal control stem from the same extended problem formulation. Our approach reveals that they are structurally similar but show major differences in their algorithmic realizations. In the presented numerical examples we cover a nonlinear parabolic optimal control problem with additional control constraints. (© 2015 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
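For orientation (a generic sketch in our notation, not taken from the paper), both shooting variants start from a partition of the time horizon with additional shooting variables s_j and matching conditions that enforce continuity of the state:

\[
0 = t_0 < t_1 < \dots < t_M = T, \qquad y(t_{j+1};\, s_j, u) = s_{j+1}, \quad j = 0, \dots, M-1,
\]

where \(y(\cdot\,; s_j, u)\) denotes the solution of the state equation on \([t_j, t_{j+1}]\) with initial value \(s_j\). Roughly speaking, direct shooting treats these matching conditions as constraints of a finite-dimensional nonlinear program after discretizing the control, whereas indirect shooting appends them to the boundary-value problem obtained from the optimality system.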
2.
Parametric nonlinear optimal control problems subject to control and state constraints are studied. Two discretization methods are discussed that transcribe optimal control problems into nonlinear programming problems for which SQP-methods provide efficient solution methods. It is shown that SQP-methods can be used also for a check of second-order sufficient conditions and for a postoptimal calculation of adjoint variables. In addition, SQP-methods lead to a robust computation of sensitivity differentials of optimal solutions with respect to perturbation parameters. Numerical sensitivity analysis is the basis for real-time control approximations of perturbed solutions which are obtained by evaluating a first-order Taylor expansion with respect to the parameter. The proposed numerical methods are illustrated by the optimal control of a low-thrust satellite transfer to geosynchronous orbit and a complex control problem from aquanautics. The examples illustrate the robustness, accuracy and efficiency of the proposed numerical algorithms.
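The real-time approximation mentioned above is, in essence, a first-order parametric update; in generic form (our notation, not the paper's) it reads

\[
u(t; p) \;\approx\; u(t; p_0) + \frac{\partial u}{\partial p}(t; p_0)\,(p - p_0),
\]

where \(p_0\) is the nominal parameter, \(u(\cdot\,; p_0)\) the precomputed nominal optimal control, and \(\partial u/\partial p\) the sensitivity differential delivered by the SQP-based sensitivity analysis. Evaluating the expansion online requires only a matrix-vector product, which is what makes the approach real-time capable.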
3.
William W. Hager, Numerische Mathematik 87(2):247-282 (2000)
Summary. The convergence rate is determined for Runge-Kutta discretizations of nonlinear control problems. The analysis utilizes a connection between the Kuhn-Tucker multipliers for the discrete problem and the adjoint variables associated with the continuous minimum principle. This connection can also be exploited in numerical solution techniques that require the gradient of the discrete cost function. Received January 11, 1999 / Revised version received October 11, 1999 / Published online July 12, 2000
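The multiplier-adjoint connection can be made explicit in the simplest case, the explicit Euler scheme (a generic illustration in our notation, not Hager's general Runge-Kutta analysis): for the discrete problem of minimizing \(\varphi(x_N)\) subject to \(x_{k+1} = x_k + h\, f(x_k, u_k)\), the Kuhn-Tucker multipliers satisfy

\[
\lambda_N = \nabla\varphi(x_N), \qquad
\lambda_k = \lambda_{k+1} + h\,\nabla_x f(x_k, u_k)^{\top}\lambda_{k+1},
\]

a consistent discretization of the continuous adjoint equation \(-\dot\lambda = \nabla_x f(x,u)^{\top}\lambda\); the gradient of the discrete cost with respect to \(u_k\) is then \(h\,\nabla_u f(x_k, u_k)^{\top}\lambda_{k+1}\), which is the quantity exploited in gradient-based solution techniques.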
4.
In this article, an optimal control problem subject to a semilinear elliptic equation and mixed control-state constraints is investigated. The problem data depend on certain parameters. Under an assumption of separation of the active sets and a second-order sufficient optimality condition, Bouligand-differentiability (B-differentiability) of the solutions with respect to the parameter is established. Furthermore, an adjoint update strategy is proposed which yields a better approximation of the optimal controls and multipliers than the classical Taylor expansion, with remainder terms vanishing in L∞.
5.
H. J. Oberle, Journal of Optimization Theory and Applications 50(2):331-357 (1986)
This paper presents the application of the multiple shooting technique to minimax optimal control problems (optimal control problems with Chebyshev performance index). A standard transformation is used to convert the minimax problem into an equivalent optimal control problem with state variable inequality constraints. Using this technique, the highly developed theory on the necessary conditions for state-restricted optimal control problems can be applied advantageously. It is shown that, in general, these necessary conditions lead to a boundary-value problem with switching conditions, which can be treated numerically by a special version of the multiple shooting algorithm. The method is tested on the problem of the optimal heating and cooling of a house. This application shows some typical difficulties arising with minimax optimal control problems, i.e., the estimation of the switching structure which is dependent on the parameters of the problem. This difficulty can be overcome by a careful application of a continuity method. Numerical solutions for the example are presented which demonstrate the efficiency of the method proposed.
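The standard transformation referred to above can be stated generically (our notation, not the paper's): the Chebyshev objective is replaced by an auxiliary scalar variable that bounds it,

\[
\min_{u}\ \max_{t\in[0,T]} g\bigl(x(t),t\bigr)
\quad\Longrightarrow\quad
\min_{u,\,\alpha}\ \alpha
\quad\text{subject to}\quad g\bigl(x(t),t\bigr) - \alpha \le 0, \ \ t\in[0,T],
\]

where \(\alpha\) can be adjoined as an additional state with \(\dot\alpha = 0\), so that the pointwise inequality becomes a state variable inequality constraint of exactly the type treated by the necessary conditions and the multiple shooting algorithm.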
6.
7.
A kind of direct method is presented for the solution of optimal control problems with state constraints. These methods are sequential quadratic programming methods. At every iteration, a quadratic programming problem, obtained by a quadratic approximation to the Lagrangian function and linear approximations to the constraints, is solved to get a search direction for a merit function. The merit function is formulated by augmenting the Lagrangian function with a penalty term. A line search is carried out along the search direction to determine a step length such that the merit function is decreased. The methods presented in this paper include continuous sequential quadratic programming methods and discrete sequential quadratic programming methods.
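In generic form (our notation, not the authors'), the quadratic subproblem solved at an iterate \(x_k\) reads

\[
\min_{d}\ \nabla f(x_k)^{\top} d + \tfrac12\, d^{\top} B_k\, d
\quad\text{s.t.}\quad
h(x_k) + \nabla h(x_k)^{\top} d = 0, \qquad
g(x_k) + \nabla g(x_k)^{\top} d \le 0,
\]

where \(B_k\) approximates the Hessian of the Lagrangian; the solution \(d\) is the search direction, and the step length is then determined by a line search on the merit function, here the Lagrangian augmented with a penalty term on the constraint violation.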
8.
Hélène Frankowska, Marco Mazzola, NoDEA: Nonlinear Differential Equations and Applications 20(2):361-383 (2013)
We consider an optimal control problem under state constraints and show that to every optimal solution corresponds an adjoint state satisfying the first order necessary optimality conditions in the form of a maximum principle and sensitivity relations involving the value function. Such sensitivity relations were recently investigated by P. Bettiol and R.B. Vinter for state constraints with smooth boundary. In contrast with their work, our setting concerns differential inclusions and nonsmooth state constraints. To obtain our result we derive neighboring feasible trajectory estimates using a novel generalization of the so-called inward pointing condition.
9.
B. Gollan, Journal of Optimization Theory and Applications 32(1):75-80 (1980)
This paper is concerned with necessary conditions for a general optimal control problem developed by Russak and Tan. It is shown that, in most cases, a further relation between the multipliers holds. This result is of interest in particular for the investigation of perturbations of the state constraint.
10.
Harold Stalford, Journal of Optimization Theory and Applications 7(2):118-135 (1971)
A monotonicity result is utilized to derive sufficient optimality conditions of considerable generality for an individual trajectory in control theory. The sufficiency theorem embodying these conditions generalizes those of Boltyanskii and Leitmann and is applied to a simple control system to which their sufficiency theorems are not applicable. Conditions on the state equations and state space are completely relaxed. The set of admissible controls is extended to the set of measurable controls and the integrand of the performance index has its membership extended to the class of bounded Borel-measurable functions. The decomposition of the state space is required to be only denumerable.
11.
The numerical solution of the Dirichlet boundary optimal control problem of the Navier-Stokes equations in presence of pointwise state constraints is investigated. A Moreau-Yosida regularization of the problem is proposed to obtain regular multipliers. Optimality conditions are derived and the convergence of the regularized solutions towards the original one is presented. The paper ends with a numerical experiment. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
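As a rough sketch (generic formulation, not necessarily the paper's exact setting), for a pointwise state constraint \(y \le \psi\) the Moreau-Yosida regularization replaces the constrained problem by a family of penalized problems

\[
\min_{u}\ J(y,u) + \frac{\gamma}{2}\,\bigl\|\max(0,\, y - \psi)\bigr\|_{L^2}^{2},
\qquad y = y(u),
\]

so that \(\mu_\gamma = \gamma \max(0,\, y_\gamma - \psi)\) acts as a regular (L^2) approximation of the state-constraint multiplier and convergence is studied as \(\gamma \to \infty\).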
12.
P. Deuflhard, Numerische Mathematik 33(2):115-146 (1979)
Summary. A numerically applicable stepsize control for discrete continuation methods of order p is derived on a theoretical basis. Both the theoretical results and the performance of the proposed algorithm are invariant under affine transformation of the nonlinear system to be solved. The efficiency and reliability of the method are demonstrated by solving three real-life two-point boundary value problems using multiple shooting techniques. In two of the examples bifurcations occur and are significantly marked by sharp changes in the stepsize estimates.
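Multiple shooting itself is easy to prototype; the following sketch (ours, illustrating only the basic method, not Deuflhard's affine-invariant continuation and stepsize control) solves a linear two-point boundary value problem by collecting boundary and matching conditions into one nonlinear system.

    # Generic multiple-shooting illustration for a two-point BVP (not the paper's algorithm)
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import fsolve

    # BVP: y'' = -y on [0, T], y(0) = 0, y(T) = 1, written as the first-order system z = (y, y')
    T = np.pi / 2.0
    def rhs(t, z):
        return [z[1], -z[0]]

    m = 4                                     # number of shooting intervals
    nodes = np.linspace(0.0, T, m + 1)

    def shoot(z0, t0, t1):                    # integrate one subinterval
        return solve_ivp(rhs, (t0, t1), z0, rtol=1e-10, atol=1e-12).y[:, -1]

    def residual(s_flat):
        s = s_flat.reshape(m, 2)              # unknown states at nodes t_0, ..., t_{m-1}
        ends = [shoot(s[j], nodes[j], nodes[j + 1]) for j in range(m)]
        res = [s[0, 0] - 0.0]                 # boundary condition y(0) = 0
        for j in range(m - 1):
            res.extend(ends[j] - s[j + 1])    # continuity (matching) conditions
        res.append(ends[-1][0] - 1.0)         # boundary condition y(T) = 1
        return np.array(res)

    guess = np.zeros(2 * m)
    guess[1::2] = 1.0                         # rough guess for the derivative components
    s_opt = fsolve(residual, guess)
    print("computed y'(0):", s_opt[1], " exact value: 1.0")   # exact solution is y = sin(t)

Solving the assembled system yields consistent initial values on every subinterval; the paper's contribution concerns the stepsize control of the continuation process in which such systems are embedded.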
13.
We study the approximation of control problems governed by elliptic partial differential equations with pointwise state constraints. For a finite dimensional approximation of the control set and for suitable perturbations of the state constraints, we prove that the corresponding sequence of discrete control problems converges to a relaxed problem. A similar analysis is carried out for problems in which the state equation is discretized by a finite element method.
14.
The numerical approximation to a parabolic control problem with control and state constraints is studied in this paper. We use standard piecewise linear and continuous finite elements for the space discretization of the state, while the dG(0) method is used for time discretization. A priori error estimates for control and state are obtained by an improved maximum error estimate for the corresponding discretized state equation. Numerical experiments are provided which support our theoretical results.
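For orientation (a generic statement of the time discretization, not tied to the paper's precise assumptions), dG(0) uses piecewise constant functions in time, and on a time slab \(I_n = (t_{n-1}, t_n]\) of length \(k_n\) it amounts to an implicit-Euler-type step

\[
(Y_n - Y_{n-1},\, v_h) + k_n\, a(Y_n, v_h) = \int_{I_n} (f(t),\, v_h)\,\mathrm{d}t
\qquad \text{for all } v_h \in V_h,
\]

where \(V_h\) is the space of continuous piecewise linear finite elements and \(a(\cdot,\cdot)\) the bilinear form of the elliptic operator in the parabolic equation.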
15.
We discuss the full discretization of an elliptic optimal control problem with pointwise control and state constraints. We provide the first reliable a posteriori error estimator that contains only computable quantities for this class of problems. Moreover, we show that the error estimator converges to zero if one has convergence of the discrete solutions to the solution of the original problem. The theory is illustrated by numerical tests.
16.
This paper presents some convex stochastic programming models for single and multi-period inventory control problems where the market demand is random and order quantities need to be decided before demand is realized. Both models minimize the expected losses subject to risk aversion constraints expressed through Value at Risk (VaR) and Conditional Value at Risk (CVaR) as risk measures. A sample average approximation method is proposed for solving the models and convergence analysis of optimal solutions of the sample average approximation problem is presented. Finally, some numerical examples are given to illustrate the convergence of the algorithm.
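A minimal single-period illustration of the sample average approximation (SAA) idea (our toy example with assumed cost parameters, not one of the paper's models): demand is sampled once, expected loss and CVaR are estimated from the sample for each candidate order quantity, and the cheapest order satisfying the CVaR constraint is selected.

    # Toy SAA for a single-period (newsvendor-type) model with a CVaR constraint -- illustrative only
    import numpy as np

    rng = np.random.default_rng(0)
    demand = rng.lognormal(mean=3.0, sigma=0.4, size=20_000)   # demand scenarios (SAA sample)

    h, b = 1.0, 4.0          # assumed unit overage (holding) and underage (shortage) costs
    alpha = 0.95             # CVaR confidence level
    cvar_cap = 60.0          # assumed risk-aversion bound on the CVaR of the loss

    def loss(q, d):
        return h * np.maximum(q - d, 0.0) + b * np.maximum(d - q, 0.0)

    def cvar(samples, level):
        # average of the worst (1 - level) fraction of the sampled losses
        tail = np.sort(samples)[int(np.ceil(level * samples.size)):]
        return tail.mean()

    best = None
    for q in np.linspace(0.0, 60.0, 601):      # grid of candidate order quantities
        ell = loss(q, demand)
        if cvar(ell, alpha) <= cvar_cap:       # risk-aversion constraint on the sample
            if best is None or ell.mean() < best[1]:
                best = (q, ell.mean())

    print("SAA order quantity:", best[0], "  estimated expected loss:", best[1])

As the sample size grows, the optimal values and solutions of such sample problems converge, under suitable conditions, to those of the true stochastic program, which is the kind of convergence analysis the paper carries out for its models.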
17.
A.M. Shmatkov, Journal of Applied Mathematics and Mechanics 74(1):122-125 (2010)
The problem of the optimal choice of the limits of a set of possible values of the control during motion, for the purpose of obtaining the required form of the attainability set of a linear dynamical system in a specified time interval, is considered. Using the method in which these sets are approximated by ellipsoids, the problem of controlling the parameters of the ellipsoid containing the control vector is solved in such a way that a functional depending on the matrix of the ellipsoid containing the phase vector reaches its maximum. The order in which the corresponding formulae are used is illustrated using the example of a simple mechanical system. The results obtained are suitable for systems in which, instead of the control vector, there is an interference vector with controllable boundaries of possible changes, and they can be extended to stochastic systems.
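A basic ingredient of such ellipsoidal techniques (a standard fact of the ellipsoidal calculus, quoted for orientation and not taken from the paper) is the parametrized outer bound for the sum of two ellipsoids,

\[
E(q_1, Q_1) + E(q_2, Q_2) \;\subseteq\; E\bigl(q_1 + q_2,\; (1 + p^{-1})\,Q_1 + (1 + p)\,Q_2\bigr), \qquad p > 0,
\]

where \(E(q, Q)\) denotes the ellipsoid with center \(q\) and shape matrix \(Q\); the free parameter \(p\) is chosen, for instance, to minimize the trace or determinant of the resulting matrix, and optimizing such parameters along the trajectory is what shapes the approximating attainability ellipsoids.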
18.
Necessary conditions of optimality are derived for optimal control problems with pathwise state constraints, in which the dynamic constraint is modelled as a differential inclusion. The novel feature of the conditions is the unrestrictive nature of the hypotheses under which they are shown to be valid. An Euler-Lagrange type condition is obtained for problems where the multifunction associated with the dynamic constraint has possibly unbounded, nonconvex values and satisfies a mild 'one-sided' Lipschitz continuity hypothesis. We recover as a special case the sharpest available necessary conditions for state-constraint-free problems proved in a recent paper by Ioffe. For problems where the multifunction is convex valued it is shown that the necessary conditions are still valid when the one-sided Lipschitz hypothesis is replaced by a milder, local hypothesis. A recent 'dualization' theorem permits us to infer a strengthened form of the Hamiltonian inclusion from the Euler-Lagrange condition. The necessary conditions for state constrained problems with convex valued multifunctions are derived under hypotheses on the dynamics which are significantly weaker than those invoked by Loewen and Rockafellar to achieve related necessary conditions for state constrained problems, and improve on available results in certain respects even when specialized to the state-constraint-free case.
Proofs make use of recent 'decoupling' ideas of the authors, which reduce the optimization problem to one to which Pontryagin's maximum principle is applicable, and a refined penalization technique to deal with the dynamic constraint.
19.
Optimization 61(5):595-607 (2012)
In this paper optimality conditions will be derived for elliptic optimal control problems with a restriction on the state or on the gradient of the state. Essential tools are the method of transposition as well as generalized trace theorems and Green's formulas from the theory of elliptic differential equations.
20.
We consider optimal control problems with constraints at intermediate points of the trajectory. A natural technique (propagation of phase and control variables) is applied to reduce these problems to a standard optimal control problem of Pontryagin type with equality and inequality constraints at the trajectory endpoints. In this way we derive necessary optimality conditions that generalize the classical Pontryagin maximum principle. The same technique is applied to so-called variable structure problems and to some hybrid problems. The new optimality conditions are compared with the results of other authors, and five examples illustrating their application are presented.
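The reduction can be sketched for a single intermediate time \(t_1 \in (0, T)\) (a generic description in our notation, not the paper's exact construction): each phase is rescaled to the reference interval \([0, 1]\) and the phases are stacked,

\[
y_1(s) = x(t_1 s), \quad y_2(s) = x\bigl(t_1 + (T - t_1)s\bigr), \qquad
\dot y_1 = t_1\, f(y_1, u_1), \quad \dot y_2 = (T - t_1)\, f(y_2, u_2), \quad s \in [0, 1],
\]

with the linking condition \(y_1(1) = y_2(0)\). The original constraint at \(t = t_1\) then acts on the endpoint value \(y_1(1)\) of the stacked system, so that classical Pontryagin-type conditions with endpoint constraints become applicable.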