Similar literature (20 results found)
1.
This paper is concerned with necessary conditions for a general optimal control problem developed by Russak and Tan. It is shown that, in most cases, a further relation between the multipliers holds. This result is of interest in particular for the investigation of perturbations of the state constraint.

2.
Optimization, 2012, 61(5): 595–607
In this paper, optimality conditions are derived for elliptic optimal control problems with a restriction on the state or on the gradient of the state. Essential tools are the method of transposition as well as generalized trace theorems and Green's formulas from the theory of elliptic differential equations.

3.
Necessary conditions of optimality are derived for optimal control problems with pathwise state constraints, in which the dynamic constraint is modelled as a differential inclusion. The novel feature of the conditions is the unrestrictive nature of the hypotheses under which they are shown to be valid. An Euler-Lagrange type condition is obtained for problems in which the multifunction associated with the dynamic constraint takes values that are possibly unbounded, nonconvex sets and satisfies a mild 'one-sided' Lipschitz continuity hypothesis. We recover as a special case the sharpest available necessary conditions for state-constraint-free problems proved in a recent paper by Ioffe. For problems where the multifunction is convex valued, it is shown that the necessary conditions remain valid when the one-sided Lipschitz hypothesis is replaced by a milder, local hypothesis. A recent 'dualization' theorem permits us to infer a strengthened form of the Hamiltonian inclusion from the Euler-Lagrange condition. The necessary conditions for state-constrained problems with convex-valued multifunctions are derived under hypotheses on the dynamics which are significantly weaker than those invoked by Loewen and Rockafellar to achieve related necessary conditions for state-constrained problems, and they improve on available results in certain respects even when specialized to the state-constraint-free case.

Proofs make use of recent 'decoupling' ideas of the authors, which reduce the optimization problem to one to which Pontryagin's maximum principle is applicable, and a refined penalization technique to deal with the dynamic constraint.
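For orientation, the Euler-Lagrange type condition referred to above has, in the state-constraint-free case, roughly the following schematic form; the notation is generic (not taken from the paper), and the state-constrained version carries additional measure terms.

```latex
% Schematic extended Euler-Lagrange condition for the inclusion \dot x \in F(t,x)
% (state-constraint-free form, generic notation; N_{\operatorname{gph}} denotes a
% limiting normal cone to the graph of F(t,\cdot)):
\dot p(t) \;\in\; \operatorname{co}\Bigl\{\, q \,:\, \bigl(q,\, p(t)\bigr) \in
   N_{\operatorname{gph} F(t,\cdot)}\bigl(\bar x(t), \dot{\bar x}(t)\bigr) \Bigr\}
   \quad \text{a.e.},
\qquad
p(t)\cdot \dot{\bar x}(t) \;=\; \max_{v \in F(t,\bar x(t))} \, p(t)\cdot v .
```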



4.
Parametric nonlinear optimal control problems subject to control and state constraints are studied. Two discretization methods are discussed that transcribe optimal control problems into nonlinear programming problems for which SQP-methods provide efficient solution methods. It is shown that SQP-methods can be used also for a check of second-order sufficient conditions and for a postoptimal calculation of adjoint variables. In addition, SQP-methods lead to a robust computation of sensitivity differentials of optimal solutions with respect to perturbation parameters. Numerical sensitivity analysis is the basis for real-time control approximations of perturbed solutions which are obtained by evaluating a first-order Taylor expansion with respect to the parameter. The proposed numerical methods are illustrated by the optimal control of a low-thrust satellite transfer to geosynchronous orbit and a complex control problem from aquanautics. The examples illustrate the robustness, accuracy and efficiency of the proposed numerical algorithms.
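As a rough illustration of the direct-transcription idea (a minimal sketch only, not the discretization or solver of the cited work), the following Python snippet discretizes a toy scalar control problem with the explicit Euler method and hands the resulting nonlinear program to an off-the-shelf SQP solver. The dynamics, cost, grid, and bounds are all invented for the example.

```python
# Minimal direct-transcription sketch (illustrative only): the toy problem,
# grid size, and bounds are assumptions, not taken from the cited paper.
import numpy as np
from scipy.optimize import minimize

N = 50                      # number of Euler time steps (assumed)
T = 1.0
h = T / N

def unpack(z):
    # decision vector z = (x_0, ..., x_N, u_0, ..., u_{N-1})
    return z[:N + 1], z[N + 1:]

def objective(z):
    x, u = unpack(z)
    return h * np.sum(x[:-1]**2) + h * np.sum(u**2)   # discretized integral cost

def dynamics_defects(z):
    x, u = unpack(z)
    # explicit Euler defects x_{k+1} - x_k - h * f(x_k, u_k) with f = u
    return x[1:] - x[:-1] - h * u

def boundary(z):
    x, _ = unpack(z)
    return np.array([x[0] - 1.0, x[-1]])              # x(0) = 1, x(T) = 0

z0 = np.zeros(2 * N + 1)
cons = [{'type': 'eq', 'fun': dynamics_defects},
        {'type': 'eq', 'fun': boundary}]
# simple pathwise bounds stand in for the state and control constraints
bnds = [(-0.1, None)] * (N + 1) + [(-5.0, 5.0)] * N

res = minimize(objective, z0, method='SLSQP', constraints=cons,
               bounds=bnds, options={'maxiter': 500, 'ftol': 1e-9})
x_opt, u_opt = unpack(res.x)
print(res.success, objective(res.x))
```

In this kind of transcription the multipliers of the discretized constraints play the role of approximate adjoint variables, which is the connection the abstract alludes to with the "postoptimal calculation of adjoint variables".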

5.
A. Leitão, PAMM, 2002, 1(1): 95–96
We consider optimal control problems of infinite horizon type whose control laws are given by L^1_loc functions and whose objective function has the meaning of a discounted utility. Our main objective is to verify that the value function is a viscosity solution of the Hamilton-Jacobi-Bellman (HJB) equation in this framework. The usual terminal condition for the HJB equation in the finite horizon case (V(T, x) = 0 or V(T, x) = g(x)) has to be replaced by a decay condition at infinity. Following the dynamic programming approach, we obtain Bellman's optimality principle and the dynamic programming equation (see (3)). We also prove a regularity result (local Lipschitz continuity) for the value function.
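For orientation only, a commonly used schematic form of such a discounted infinite-horizon problem and its stationary HJB equation is recalled below; the notation (running cost ℓ, dynamics f, discount rate λ > 0) is generic and not taken verbatim from the paper, in which the finite-horizon terminal condition is replaced by a decay condition on the value function at infinity.

```latex
% Schematic discounted infinite-horizon problem and stationary HJB equation
% (generic notation assumed for illustration; \lambda > 0 is the discount rate).
\begin{aligned}
V(x) &= \sup_{u(\cdot)} \int_0^{\infty} e^{-\lambda t}\,\ell\bigl(y(t),u(t)\bigr)\,dt,
  \qquad \dot y(t) = f\bigl(y(t),u(t)\bigr),\quad y(0) = x, \\
\lambda\,V(x) &= \sup_{u \in U}\Bigl\{ \ell(x,u) + \nabla V(x)\cdot f(x,u) \Bigr\}
  \qquad \text{(interpreted in the viscosity sense).}
\end{aligned}
```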

6.
We study the approximation of control problems governed by elliptic partial differential equations with pointwise state constraints. For a finite dimensional approximation of the control set and for suitable perturbations of the state constraints, we prove that the corresponding sequence of discrete control problems converges to a relaxed problem. A similar analysis is carried out for problems in which the state equation is discretized by a finite element method.

7.
We study the Birkhoff billiard in a convex domain with a smooth boundary γ. We show that if this dynamical system has an integral that is polynomial of degree 4 in the velocities and independent of the velocity norm, then γ is an ellipse.

8.
The numerical approximation to a parabolic control problem with control and state constraints is studied in this paper. We use standard piecewise linear and continuous finite elements for the space discretization of the state, while the dG(0) method is used for time discretization. A priori error estimates for control and state are obtained by an improved maximum error estimate for the corresponding discretized state equation. Numerical experiments are provided which support our theoretical results.
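For readers unfamiliar with the notation, the dG(0)-in-time, piecewise-linear-in-space discretization of the state equation amounts, schematically, to the following time stepping, with M and A the mass and stiffness matrices, B u_h the discretized control action, and k_n the time step; the exact formulation in the paper may differ.

```latex
% Schematic dG(0) time stepping for the discretized parabolic state equation;
% for piecewise constant-in-time data this is a variant of the implicit Euler scheme.
M\,\frac{y_h^{\,n} - y_h^{\,n-1}}{k_n} \;+\; A\,y_h^{\,n} \;=\; B\,u_h^{\,n},
\qquad n = 1,\dots,N .
```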

9.
10.
We discuss the full discretization of an elliptic optimal control problem with pointwise control and state constraints. We provide the first reliable a-posteriori error estimator that contains only computable quantities for this class of problems. Moreover, we show that the error estimator converges to zero if the discrete solutions converge to the solution of the original problem. The theory is illustrated by numerical tests.

11.
A method of region analysis is developed for solving a class of optimal control problems with one state and one control variable, including state and control constraints. The performance index is strictly convex with respect to the control variable, while this variable appears only linearly in the state equation. The convexity or linearity assumption on the performance index or the state equation with respect to the state variable is not required. The author would like to express his sincere gratitude to Prof. R. Klötzler, Prof. E. Zeidler, Prof. H. Schumann, Prof. J. Focke, and other colleagues of the Department of Mathematics, Karl Marx University, Leipzig, GDR, for their support during his stay in Leipzig.

12.
We consider state-constrained optimal control problems governed by elliptic equations. Under Slater-like assumptions, Lagrange multipliers are known to exist for such problems, and we propose a decoupled augmented Lagrangian method. We present the algorithm together with a simple example of a distributed control problem.
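To make the augmented Lagrangian idea concrete, here is a generic toy sketch (not the decoupled method of the cited paper): a discretized one-dimensional "elliptic" state equation with a pointwise upper bound on the state, an outer loop that updates the multiplier estimate for the state constraint, and an off-the-shelf solver for the inner minimization. All problem data below are invented for the example.

```python
# Generic augmented-Lagrangian sketch for a state-constrained toy problem
# (all data are assumptions for illustration; this is NOT the decoupled
# algorithm proposed in the cited paper).
import numpy as np
from scipy.optimize import minimize

n = 30                                   # interior grid points (assumed)
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n); off = -1.0 * np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2   # 1D Laplacian

x = np.linspace(h, 1 - h, n)
y_d = np.sin(np.pi * x)                  # desired state (assumed)
y_max = 0.6                              # pointwise state bound (assumed)
alpha = 1e-3                             # control cost weight (assumed)

def state(u):
    return np.linalg.solve(A, u)         # y = A^{-1} u (reduced formulation)

def J(u):
    y = state(u)
    return 0.5 * h * np.sum((y - y_d)**2) + 0.5 * alpha * h * np.sum(u**2)

def aug_lagrangian(u, mu, c):
    g = state(u) - y_max                 # pointwise constraint g <= 0
    return J(u) + (0.5 / c) * np.sum(np.maximum(0.0, mu + c * g)**2 - mu**2)

u = np.zeros(n)
mu = np.zeros(n)                         # multiplier estimate for y <= y_max
c = 10.0                                 # penalty parameter (assumed)

for k in range(20):                      # outer multiplier updates
    res = minimize(aug_lagrangian, u, args=(mu, c), method='L-BFGS-B')
    u = res.x
    g = state(u) - y_max
    mu = np.maximum(0.0, mu + c * g)     # first-order multiplier update
    if np.max(g) < 1e-6:                 # constraint (almost) satisfied
        break
    c *= 2.0                             # mild penalty increase

print("max state violation:", np.max(state(u) - y_max))
```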

13.
A kind of direct method is presented for the solution of optimal control problems with state constraints. These methods are sequential quadratic programming (SQP) methods. At every iteration a quadratic program, obtained by a quadratic approximation to the Lagrangian function and linear approximations to the constraints, is solved to obtain a search direction for a merit function. The merit function is formulated by augmenting the Lagrangian function with a penalty term. A line search is carried out along the search direction to determine a step length such that the merit function is decreased. The methods presented in this paper include continuous sequential quadratic programming methods and discrete sequential quadratic programming methods.
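A common textbook form of such an augmented-Lagrangian merit function and of the decrease requirement in the line search is recalled below; the notation is generic and the exact penalty term used in the paper may differ.

```latex
% Augmented-Lagrangian merit function (generic textbook form; f = objective,
% c = constraint residuals, \lambda = multiplier estimate, \rho > 0 penalty weight)
\phi_\rho(x,\lambda) \;=\; f(x) + \lambda^{\top} c(x) + \tfrac{\rho}{2}\,\lVert c(x)\rVert_2^{2},
\qquad
\phi_\rho\bigl(x_k + \alpha_k d_k,\ \lambda_k\bigr) \;<\; \phi_\rho\bigl(x_k,\ \lambda_k\bigr),
```

where d_k is the search direction obtained from the quadratic programming subproblem and α_k the accepted step length.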

14.
An optimal control problem with state constraints is considered. Some properties of extremals to the Pontryagin maximum principle are studied. It is shown that, from the conditions of the maximum principle, it follows that the extended Hamiltonian is a Lipschitz function along the extremal and its total time derivative coincides with its partial derivative with respect to time.
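Written out schematically (generic notation, restating the property claimed in the abstract), this reads:

```latex
% Along an extremal x(.) with control u(.) and adjoint \psi(.), the maximized
% (extended) Hamiltonian
\mathcal{H}(t) \;:=\; \max_{u \in U}\, H\bigl(t, x(t), u, \psi(t)\bigr)
% is Lipschitz continuous in t, and
\frac{d}{dt}\,\mathcal{H}(t) \;=\; \frac{\partial H}{\partial t}\bigl(t, x(t), u(t), \psi(t)\bigr)
\qquad \text{for a.e. } t .
```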

15.
This paper deals with the optimal control problem of an ordinary differential equation with several pure state constraints, of arbitrary orders, as well as mixed control-state constraints. We assume (i) the control to be continuous and the strengthened Legendre–Clebsch condition to hold, and (ii) a linear independence condition of the active constraints at their respective order to hold. We give a complete analysis of the smoothness and junction conditions of the control and of the constraints multipliers. This allows us to obtain, when there are finitely many nontangential junction points, a theory of no-gap second-order optimality conditions and a characterization of the well-posedness of the shooting algorithm. These results generalize those obtained in the case of a scalar-valued state constraint and a scalar-valued control.

16.
In this paper a class of semilinear elliptic optimal control problems with pointwise state and control constraints is studied. We show that sufficient second-order optimality conditions for regularized problems with small regularization parameter can be obtained from a second-order sufficient condition assumed for the unregularized problem. Moreover, error estimates with respect to the regularization parameter are derived.

17.
Ira Neitzel, Fredi Tröltzsch, PAMM, 2008, 8(1): 10865–10866
We consider Lavrentiev regularization for a class of semilinear parabolic optimal control problems with control constraints and pointwise state constraints and review convergence results for local solutions under Slater type assumptions as well as quadratic growth conditions. Moreover, we state a local uniqueness result for local optima under the assumptions of strict separability of the active sets as well as a second order sufficient condition for the regularized solution.
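Recall that Lavrentiev-type regularization replaces the pure pointwise state constraint by a mixed control-state constraint; schematically, for a lower state bound y_c and regularization parameter λ > 0 (generic notation, not the paper's exact setting):

```latex
% Lavrentiev-type regularization of a pointwise state constraint (schematic form,
% assumed for illustration; y_c is the state bound, \lambda > 0 the parameter):
y \;\ge\; y_c \ \text{ pointwise}
\qquad \longrightarrow \qquad
\lambda\, u + y \;\ge\; y_c \ \text{ pointwise}.
```

A standard motivation for this device is that the mixed constraint typically admits more regular Lagrange multipliers than the pure state constraint.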

18.
Fernando A. C. C. Fontes, Sofia O. Lopes, PAMM, 2007, 7(1): 1061701–1061702
For some optimal control problems with pathwise state constraints, the standard versions of the necessary conditions of optimality are unable to provide useful information to select minimizers. There is some literature on stronger forms of the maximum principle, the so-called nondegenerate necessary conditions, that can be informative for such problems. These conditions can be applied when certain constraint qualifications are satisfied. However, when the state constraints have higher index (i.e. their first derivative with respect to time does not depend on the control), these nondegenerate necessary conditions cannot be used, because the constraint qualification assumptions are never satisfied for higher index state constraints. We note that control problems with higher index state constraints arise frequently in practice. A typical example is a mechanical system with a constraint on the position (an obstacle in the path, for example) in which the control acts on the second derivative of the position (a force or acceleration). Here, we provide a nondegenerate form of the necessary conditions that can be applied to nonlinear problems with higher index state constraints. When addressing a problem with a state constraint of index k, the result described is applicable under a constraint qualification that involves the k-th derivative of the state constraint, i.e. the first derivative that depends explicitly on the control.

19.
In this paper a class of nondifferentiable optimal control problems governed by differential inclusions and subject to state variable inequality constraints is considered. Sufficient conditions using the concavity of the maximized Hamiltonian are given. Furthermore, a counterexample is presented that shows that in the nondifferentiable case the maximum principle does not form sufficient optimality conditions if the adjoint relation is formulated in terms of the ordinary Hamiltonian rather than the maximized one. Finally, it is shown that the sufficient conditions correspond to Clarke's necessary conditions with some additional assumptions such as concavity.

20.
Markus Glocker, PAMM, 2004, 4(1): 608–609
A large class of optimal control problems for hybrid dynamic systems can be formulated as mixed-integer optimal control problems (MIOCPs). A decomposition approach is suggested to solve a special subclass of MIOCPs with mixed integer inner point state constraints. It is the intrinsic combinatorial complexity of the discrete variables, in addition to the high nonlinearity of the continuous optimal control problem, that forms the challenge in the theoretical and numerical solution of MIOCPs. During the solution procedure the problem is decomposed at the inner time points into a multiphase problem with mixed integer boundary constraints and phase transitions at unknown switching points. Due to a discretization of the state space at the switching points, the problem can be decoupled into a family of continuous optimal control problems (OCPs) and a problem similar to the asymmetric group traveling salesman problem (AGTSP). The OCPs are transcribed by direct collocation into large-scale nonlinear programming problems, which are solved efficiently by an advanced SQP method. The results are used as weights for the edges of the graph of the corresponding TSP-like problem, which is solved by a Branch-and-Cut-and-Price (BCP) algorithm. The proposed approach is applied to a hybrid optimal control benchmark problem for a motorized traveling salesman.
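The decomposition can be illustrated with a drastically simplified stand-in (all numbers invented): solve a tiny "continuous" subproblem for every ordered pair of waypoints, use the optimal costs as edge weights, and then solve the resulting traveling-salesman problem, here by brute force instead of Branch-and-Cut-and-Price.

```python
# Toy stand-in for the decomposition: pairwise "OCP" costs feed a TSP solve.
# Everything here (waypoints, cost model, brute-force TSP) is an illustrative
# assumption, not the algorithm or benchmark of the cited paper.
import itertools
import numpy as np

rng = np.random.default_rng(0)
waypoints = rng.uniform(0.0, 10.0, size=(5, 2))     # 5 cities in the plane (assumed)

def edge_cost(a, b):
    # Stand-in for solving a continuous OCP between two waypoints:
    # minimum time for a double integrator covering the straight-line distance d
    # with |acceleration| <= a_max (bang-bang), t* = 2*sqrt(d / a_max).
    a_max = 1.0
    d = np.linalg.norm(b - a)
    return 2.0 * np.sqrt(d / a_max)

n = len(waypoints)
W = np.array([[0.0 if i == j else edge_cost(waypoints[i], waypoints[j])
               for j in range(n)] for i in range(n)])

# Brute-force tour search over all tours starting and ending at city 0
best_tour, best_cost = None, np.inf
for perm in itertools.permutations(range(1, n)):
    tour = (0,) + perm + (0,)
    cost = sum(W[tour[k], tour[k + 1]] for k in range(n))
    if cost < best_cost:
        best_tour, best_cost = tour, cost

print("best tour:", best_tour, "cost:", round(best_cost, 3))
```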
