Similar Documents
20 similar documents found.
1.
We consider a control system described by a nonlinear second order evolution equation defined on an evolution triple of Banach spaces (Gelfand triple) with a mixed multivalued control constraint whose values are nonconvex closed sets. Alongside the original system we consider a system with the following control constraints: a constraint whose values are the closed convex hull of the values of the original constraint and a constraint whose values are extreme points of the constraint which belong simultaneously to the original constraint. By a solution to the system we mean an admissible trajectory-control pair. In this part of the article we study existence questions for solutions to the control system with various constraints and density of the solution set with nonconvex constraints in the solution set with convexified constraints.

2.
We consider the optimal control of a semilinear parabolic equation with pointwise bound constraints on the control and finitely many integral constraints on the final state. Using the standard Robinson’s constraint qualification, we provide a second order necessary condition over a set of strictly critical directions. The main feature of this result is that the qualification condition needed for the second order analysis is the same as for classical finite-dimensional problems and does not imply the uniqueness of the Lagrange multiplier. We establish also a second order sufficient optimality condition which implies, for problems with a quadratic Hamiltonian, the equivalence between solutions satisfying the quadratic growth property in the \(L^{1}\) and \(L^{\infty}\) topologies.

3.
In Ref. 1, existence and optimality conditions were given for control systems whose dynamics are determined by a linear stochastic differential equation with linear feedback controls; moreover, the state variables satisfy probability constraints. Here, for the simplest case of such a model, the Ornstein-Uhlenbeck velocity process, we evaluate the necessary conditions derived in Ref. 1 and compute a time-optimal control such that a given positive threshold value is crossed with at least a prescribed probability.

This work was supported by the Sonderforschungsbereiche 21 and 72, University of Bonn, Bonn, West Germany.
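The threshold-crossing probability discussed above can be estimated numerically. Below is a minimal Monte Carlo sketch for an Ornstein-Uhlenbeck velocity process using an Euler-Maruyama discretisation; the parameter values (theta, sigma, b, T) are illustrative assumptions, not taken from Ref. 1:

```python
import math
import random

def crossing_probability(theta=1.0, sigma=1.0, b=1.0, T=5.0,
                         n_steps=1000, n_paths=2000, seed=0):
    """Monte Carlo estimate of the probability that an
    Ornstein-Uhlenbeck velocity process dv = -theta*v dt + sigma dW,
    started at v(0) = 0, crosses the threshold b before time T."""
    rng = random.Random(seed)
    dt = T / n_steps
    sqrt_dt = math.sqrt(dt)
    crossed = 0
    for _ in range(n_paths):
        v = 0.0
        for _ in range(n_steps):
            # Euler-Maruyama step for the OU dynamics
            v += -theta * v * dt + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
            if v >= b:
                crossed += 1
                break
    return crossed / n_paths

p = crossing_probability()
```

A time-optimal control in the sense of the abstract would then seek the smallest horizon \(T\) for which this probability reaches the prescribed level.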

4.
In this paper, we study the \(\ell_{1}\)-optimal control problem with additional constraints on the magnitude of the closed-loop frequency response. In particular, we study the case of magnitude constraints at fixed frequency points (a finite number of such constraints can be used to approximate an \(\mathcal{H}_{\infty}\)-norm constraint). In previous work, we have shown that the primal-dual formulation for this problem has no duality gap and both primal and dual problems are equivalent to convex, possibly infinite-dimensional, optimization problems with LMI constraints. Here, we study the effect of approximating the convex magnitude constraints with a finite number of linear constraints and provide a bound on the accuracy of the approximation. The resulting problems are linear programs. In the one-block case, both primal and dual programs are semi-infinite dimensional. The optimal cost can be approximated, arbitrarily well from above and within any predefined accuracy from below, by the solutions of finite-dimensional linear programs. In the multiblock case, the approximate LP problem (as well as the exact LMI problem) is infinite-dimensional in both the variables and the constraints. We show that the standard finite-dimensional approximation method, based on approximating the dual linear programming problem by sequences of finite-support problems, may fail to converge to the optimal cost of the infinite-dimensional problem.

5.
In this paper we study the optimal control of systems driven by nonlinear elliptic partial differential equations. First, with the aid of an appropriate convexity hypothesis, we establish the existence of optimal admissible pairs. Then we drop the convexity hypothesis and pass to the larger relaxed system. First we consider a relaxed system based on the Gamkrelidze-Warga approach, in which the controls are transition probabilities. We show that this relaxed problem always has a solution and that its value equals that of the original problem. We also introduce two alternative formulations of the relaxed problem (one of them control free), which we show are both equivalent to the first one. Then we compare these relaxed problems with that of Buttazzo, which is based on the \(\Gamma\)-regularization of the extended cost functional. Finally, using a powerful multiplier rule of Ioffe-Tichomirov, we derive necessary conditions for optimality in systems with inequality state constraints.

Research supported by NSF Grant DMS-8802688.

6.
This paper considers multidimensional control problems governed by a first-order PDE system and state constraints. After performing the standard Young measure relaxation, we are able to prove the Pontryagin principle by means of an \(\varepsilon\)-maximum principle. Generalizing the common setting of one-dimensional control theory, we model piecewise-continuous weak derivatives as functions of the first Baire class and obtain regular measures as corresponding multipliers. In a number of corollaries, we derive necessary optimality conditions for local minimizers of the state-constrained problem as well as for global and local minimizers of the unconstrained problem.

7.
The paper provides a sharpened proof of M. G. Khudai-Verenov's theorem on the density in \(\mathbb{C}^{2}\) of solutions to the equation \(dw/dz = P/Q\), on condition that this equation has two singular points at infinity whose characteristic numbers satisfy certain constraints of the incommensurability type.

Translated from Matematicheskie Zametki, Vol. 4, No. 6, pp. 741–750, December, 1968.

In conclusion, we wish to thank our scientific co-worker E. M. Landis for many discussions of this work, as well as Ya. G. Sinai and M. L. Gerver, whose comments made it possible to improve the first part.

8.
Summary We examine nonconvex problems of Bolza, in which the state is \(V\)-valued, with \(V^{*}\)-valued derivative, where \(V \subset H \subset V^{*}\), \(V\) is a Banach space, \(V^{*}\) is its dual space, and \(H\) is a Hilbert space. For these problems we prove some existence theorems for the minimum, when we consider state constraints and other constraints that are represented by a nonlinear differential equation relating the state and the control.

Work carried out within the Laboratorio per la Matematica Applicata of the C.N.R. at the Università di Genova.

9.
We consider an existence theorem for control systems whose state variables, for every \(t\), are in \(C\), the set of continuous functions varying over a given set \(I\). The dependence of the state variables upon \(a \in I\) is induced by their dependence upon the initial state and the state equation governing the system. In contrast, the control \(u = u(t)\) is taken as a measurable function of \(t\) alone. The usual space constraints and boundary conditions are also allowed to vary over \(a \in I\), and the cost functional is now taken to be a continuous functional over a suitable class of continuous functions. We also discuss an application of these results to control systems with stochastic boundary conditions.

This research was accomplished under Grant No. AF-AFOSR-942-65. The author is grateful to Dr. Lamberto Cesari for his suggestions and assistance in the preparation of this paper.

10.
In this paper, we consider an optimal control problem with state constraints, where the control system is described by a mean-field forward-backward stochastic differential equation (MFFBSDE, for short) and the admissible control is of mean-field type. Making full use of backward stochastic differential equation theory, we transform the original control system into an equivalent backward form, i.e., the equations in the control system are all backward. In addition, Ekeland’s variational principle helps us deal with the state constraints, so that we obtain a stochastic maximum principle characterizing the necessary condition for the optimal control. We also study a stochastic linear quadratic control problem with state constraints.

11.
We continue the research of the first part of the article. We mainly study codensity for the set of admissible trajectory-control pairs of a system with nonconvex constraints in the set of admissible trajectory-control pairs of the system with convexified constraints. We state necessary and sufficient conditions for the set of admissible trajectory-control pairs of a system with nonconvex constraints to be closed in the corresponding function spaces. Using an example of a control hyperbolic system, we give an interpretation of the abstract results obtained. As an application, we consider the minimization problem for an integral functional on solutions of a control system.

12.
In this article, we present an exact theoretical analysis of an \(M/M/1\) system, with arbitrary distribution of the relative deadline for the end of service, operated under the first-come first-served scheduling policy with exact admission control. We provide an explicit solution to the functional equation that must be satisfied by the workload distribution when the system reaches steady state. We use this solution to derive explicit expressions for the loss ratio and the sojourn time distribution. Finally, we compare this loss ratio with that of a similar system operating without admission control, in the cases of some common distributions of the relative deadline.
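The loss-ratio comparison described above can be mimicked with a short simulation. The sketch below is a simplified variant (our assumption: the relative deadline applies to the start of service, so an arrival is rejected iff the workload it finds exceeds its deadline; all parameter values are illustrative, not from the paper):

```python
import random

def loss_ratio(lam=0.9, mu=1.0, deadline=1.0, n_arrivals=50000, seed=1):
    """Estimate the loss ratio of an M/M/1 FCFS queue in which an
    arrival is admitted only if the workload it finds does not
    exceed its relative deadline."""
    rng = random.Random(seed)
    workload = 0.0  # unfinished work currently in the system
    lost = 0
    for _ in range(n_arrivals):
        # the server drains workload during the inter-arrival gap
        workload = max(0.0, workload - rng.expovariate(lam))
        if workload > deadline:
            lost += 1  # deadline would be missed: reject the arrival
        else:
            workload += rng.expovariate(mu)  # admit: add its service time
    return lost / n_arrivals

r = loss_ratio()
```

Setting a very large deadline recovers the system without admission control (loss ratio zero in this model), which is the comparison case mentioned in the abstract.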

13.
This paper combines the separate works of two authors. Tan proves a set of necessary conditions for a control problem with second-order state inequality constraints (see Ref. 1). Russak proves necessary conditions for an extended version of that problem. Specifically, the extended version augments the original problem by including state equality constraints, differential and isoperimetric equality and inequality constraints, and endpoint constraints. In addition, Russak (i) relaxes the solvability assumption on the state constraints, (ii) extends the maximum principle to a larger set, (iii) obtains modified forms of the relation \(\dot{H} = H_{t}\) and of the transversality relation usually obtained in problems of this type, and (iv) proves a condition concerning the derivative of the multiplier functions at the final time \(t_{1}\).

Russak's work was supported by an NPS Foundation Grant.

Tan is indebted to his thesis advisor, Professor M. R. Hestenes, for suggesting the topic and for his help and guidance in the development of his work. Tan's work was supported by the Army Research Office, Contract No. DA-ARO-D-31-124-71-G18.

14.
15.
In this paper, we study the conjugate gradient iterative method applied to a discrete Stokes problem obtained by adding a stabilising term, depending on a parameter, to the second equation. We also establish the convergence rate as a function of this parameter.
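The abstract does not reproduce the stabilised Stokes discretisation itself, but the conjugate gradient iteration it analyses can be sketched in a few lines (pure Python, for a symmetric positive definite operator supplied as a matvec callback; the small example system is ours, not from the paper):

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=200):
    """Plain conjugate gradient for an SPD linear system A x = b,
    where A is given only through the callback matvec(v) = A v."""
    x = [0.0] * len(b)
    r = list(b)               # residual b - A x, with x = 0 initially
    p = list(r)               # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:   # residual small enough: converged
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Example: the SPD system [[4, 1], [1, 3]] x = [1, 2], solution [1/11, 7/11]
x = conjugate_gradient(lambda v: [4 * v[0] + v[1], v[0] + 3 * v[1]],
                       [1.0, 2.0])
```

In the stabilised setting of the paper, the operator and hence the convergence rate depend on the stabilisation parameter, which is exactly the dependence the paper quantifies.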

16.
This paper is concerned with the qualitative properties of the ground state solutions of the Hénon equation. By studying a limiting equation on the upper half space, we investigate the asymptotic energy and the asymptotic profile of the ground states of the Hénon equation. The limiting problem is related to a weighted Sobolev-type inequality which we establish in this paper.

17.
18.
In this paper we study the existence of optimal trajectories associated with a generalized solution to the Hamilton-Jacobi-Bellman equation arising in optimal control. In general, we cannot expect such solutions to be differentiable. But, in a way analogous to the use of distributions in PDE, we replace the usual derivatives with contingent epiderivatives and the Hamilton-Jacobi equation by two contingent Hamilton-Jacobi inequalities. We show that the value function of an optimal control problem verifies these contingent inequalities.

Our approach allows the following three results: (a) the upper semicontinuous solutions to the contingent inequalities are monotone along the trajectories of the dynamical system; (b) with every continuous solution \(V\) of the contingent inequalities, we can associate an optimal trajectory along which \(V\) is constant; (c) for such solutions, we can construct optimal trajectories through the corresponding optimal feedback. They are also viscosity solutions of a Hamilton-Jacobi equation. Finally, we prove a relationship between the superdifferentials of solutions introduced by Crandall et al. [10] and the Pontryagin principle, and discuss the link of viscosity solutions with Clarke's approach to the Hamilton-Jacobi equation.

19.
In this note we study the control problem for the heat equation on \(\mathbb {R}^d\), \(d\ge 1\), with control set \(\omega \subset \mathbb {R}^d\). We provide a necessary and sufficient condition (called \((\gamma , a)\)-thickness) on \(\omega \) such that the heat equation is null-controllable in any positive time. We give an estimate of the control cost with explicit dependency on the characteristic geometric parameters of the control set. Finally, we derive a control cost estimate for the heat equation on cubes with periodic, Dirichlet, or Neumann boundary conditions, where the control sets are again assumed to be thick. We show that the control cost estimate is consistent with the \(\mathbb {R}^d\) case.
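For orientation, the thickness condition referred to above is usually stated as follows (a sketch of the standard definition from this line of work; the notation may differ slightly from the paper):

```latex
A measurable set $\omega \subset \mathbb{R}^d$ is called
$(\gamma, a)$-thick, with $\gamma \in (0,1]$ and
$a = (a_1, \dots, a_d) \in (0,\infty)^d$, if
\[
  \bigl| \omega \cap \bigl( x + [0,a_1] \times \dots \times [0,a_d] \bigr) \bigr|
  \;\ge\; \gamma \prod_{j=1}^{d} a_j
  \qquad \text{for every } x \in \mathbb{R}^d,
\]
where $|\cdot|$ denotes Lebesgue measure.
```

Informally: every translate of a fixed reference box must contain a fixed fraction of its volume inside the control set.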

20.
There are very few results about analytic solutions of problems of optimal control with minimal \(L^{\infty}\) norm. In this paper, we consider such a problem for the wave equation, where the derivative of the state is controlled at both boundaries. We start in the zero position and consider a problem of exact control, that is, we want to reach a given terminal state in a given finite time. Our aim is to find a control with minimal \(L^{\infty}\) norm that steers the system to the target.

We give the analytic solution for certain classes of target points, for example, target points that are given by constant functions. For such targets with zero velocity, the analytic solution has been given by Bennighof and Boucher in Ref. 1.
