Similar Documents
20 similar documents found (search time: 15 ms)
1.
In this paper we consider an optimal control system described by an n-dimensional heat equation with a thermal source. The problem is to find an optimal control which puts the system into a stationary regime in a finite time T and minimizes a general objective function; we assume there are no constraints on the control. This problem is reduced to a moment problem. We modify the moment problem into one consisting of the minimization of a positive linear functional over a set of Radon measures, and we show that there is an optimal measure corresponding to the optimal control. This optimal measure is approximated by a finite combination of atomic measures. The construction gives rise to a finite-dimensional linear programming problem whose solution can be used to determine the optimal combination of atomic measures. Using the solution of this linear programming problem we then find a piecewise-constant control function which is an approximate control for the original optimal control problem. Finally, we obtain piecewise-constant optimal controls for two examples of one-dimensional heat equations with a thermal source.
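To make the last step concrete, here is a minimal sketch (in Python, with an invented objective, invented moment functions, and an invented grid, none of which come from the paper) of how an atomic-measure approximation turns a measure-minimization problem into a finite-dimensional linear program, and how a piecewise-constant control can be read off from the optimal weights.

```python
# A minimal sketch (not the paper's actual formulation) of the atomic-measure step:
#     min  sum_i c(z_i) * alpha_i   s.t.  sum_i g_j(z_i) * alpha_i = b_j,  alpha_i >= 0
# is solved as a finite-dimensional LP, and a piecewise-constant control is read
# off from the atoms that receive positive weight.
import numpy as np
from scipy.optimize import linprog

# Hypothetical discretisation of the time-control product space [0, T] x U.
T, n_t, n_u = 1.0, 20, 15
t_grid = np.linspace(0.0, T, n_t, endpoint=False)        # left endpoints of time cells
u_grid = np.linspace(-1.0, 1.0, n_u)                      # admissible control values
tt, uu = np.meshgrid(t_grid, u_grid, indexing="ij")
atoms_t, atoms_u = tt.ravel(), uu.ravel()                 # candidate atom locations z_i = (t_i, u_i)

# Hypothetical objective integrand and moment functions (placeholders for the paper's data).
c = atoms_u**2                                            # running cost u^2 evaluated at the atoms
moments = np.vstack([
    np.ones_like(atoms_t),                                # total-mass constraint
    atoms_u * np.exp(-atoms_t),                           # one illustrative "moment" equality
])
b = np.array([T, 0.3])                                    # right-hand sides (illustrative)

res = linprog(c, A_eq=moments, b_eq=b, bounds=(0, None), method="highs")
alpha = res.x

# Piecewise-constant control: in each time cell, take the control value of the
# atom carrying the largest weight (a simple recovery heuristic).
alpha_grid = alpha.reshape(n_t, n_u)
u_pw = u_grid[np.argmax(alpha_grid, axis=1)]
print("piecewise-constant control values per time cell:", np.round(u_pw, 3))
```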

2.
A stochastic control problem whose dynamics are only partially observed is solved. In earlier literature it was conjectured that for such problems an optimal relaxed control exists. In this article we prove that for the problem under consideration the optimal relaxed control exists and is the weak limit of a minimizing sequence of ordinary controls. Making use of the special discrete nature of the observations and of the special form of the drift function, the existence of an optimal ordinary control is derived. The general partially observed control problem is then approximated by a sequence of problems of the above form, i.e., with discrete observations. In this way the existence of an ordinary optimal control is derived for the general problem. During part of his work on this topic the author was a guest of the SFB 72 of the Deutsche Forschungsgemeinschaft at the University of Bonn. The author's work was partially supported by the Deutsche Forschungsgemeinschaft within the SFB 72 of the University of Bonn.

3.
A theorem of Hardy, Littlewood, and Pólya is used, for the first time, to obtain the variational form of the well-known shortest path problem; as a consequence of that theorem, the shortest path problem can be solved via quadratic programming. In this paper, we use measure theory to solve this problem. The shortest path problem can be written as an optimal control problem. The resulting distributed control problem is then expressed in measure-theoretic form, in fact as an infinite-dimensional linear programming problem. The optimal measure representing the shortest path is approximated by the solution of a finite-dimensional linear programming problem.
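As a rough illustration of the quadratic-programming view (not the authors' formulation), the sketch below discretizes the variational problem of minimizing ∫ y′(x)² dx with fixed endpoints; the grid size and endpoint values are invented, and the resulting tridiagonal system recovers the straight line, i.e. the shortest path.

```python
# A minimal sketch: discretising  min ∫ y'(x)^2 dx  with fixed endpoints gives a QP
# in the interior nodes; its minimiser is the straight line joining the endpoints,
# which is also the shortest path.
import numpy as np

N = 50                       # number of subintervals (hypothetical)
y0, yN = 0.0, 2.0            # fixed endpoint values (illustrative)

# Quadratic objective: sum_i (y_{i+1} - y_i)^2.  Setting the gradient with respect
# to the interior nodes to zero yields the tridiagonal system 2*y_i = y_{i-1} + y_{i+1}.
A = np.zeros((N - 1, N - 1))
rhs = np.zeros(N - 1)
for i in range(N - 1):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < N - 2:
        A[i, i + 1] = -1.0
rhs[0] += y0
rhs[-1] += yN

y_interior = np.linalg.solve(A, rhs)
y = np.concatenate(([y0], y_interior, [yN]))
# The solution is (up to floating-point error) the straight line from y0 to yN.
print(np.allclose(y, np.linspace(y0, yN, N + 1)))   # True
```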

4.
In this paper we give sufficient conditions for the existence of solutions of a problem of parametric optimization. We use continuity, with respect to a functional parameter, of weak solutions of a variational problem in a Hilbert space. As an example, we consider an optimization problem with the control in the coefficients of a linear parabolic equation. Using results of Spagnolo, we characterize the closure of the reachable set. Finally, we construct an example of an optimization problem with the control in the coefficients of a parabolic equation which does not have an optimal solution.

5.
In this paper, we use measure theory to study the asymptotic stability of an autonomous system [1] of first-order nonlinear ordinary differential equations (ODEs). First, we define a nonlinear infinite-horizon optimal control problem related to the ODE. Then, by a suitable change of variable, we transform the problem into a finite-horizon nonlinear optimal control problem. The problem is next modified into one consisting of the minimization of a linear functional over a set of positive Radon measures. The optimal measure is approximated by a finite combination of atomic measures, and the problem is converted to a finite-dimensional linear programming problem. The solution to this linear programming problem is used to find a piecewise-constant control; from the approximate control signals we obtain the approximate trajectories and the associated error functional. Finally, the approximate trajectories and the error functional are used to study the asymptotic stability of the original problem.
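The change-of-variable step can be illustrated as follows; the map t = s/(1−s), the trajectory, and the discounted running cost in the sketch are all illustrative assumptions, not the paper's data.

```python
# A minimal sketch of a horizon-compressing change of variable that turns an
# infinite-horizon cost into a finite-horizon one.  The map t = s/(1 - s) sends
# s in [0, 1) to t in [0, ∞), with dt = ds/(1 - s)^2.  The specific substitution
# and cost below are illustrative; the paper's own transformation may differ.
import numpy as np
from scipy.integrate import quad

def running_cost(t, x):
    # Hypothetical discounted running cost along a known trajectory x(t).
    return np.exp(-t) * x(t) ** 2

x = lambda t: np.exp(-t)                  # illustrative trajectory

# Infinite-horizon value, computed directly (reference): ∫_0^∞ e^{-3t} dt = 1/3.
direct, _ = quad(lambda t: running_cost(t, x), 0.0, np.inf)

# The same value after mapping onto the finite horizon s in [0, 1).
def transformed(s):
    t = s / (1.0 - s)
    return running_cost(t, x) / (1.0 - s) ** 2

finite, _ = quad(transformed, 0.0, 1.0)

print(direct, finite)                     # both ≈ 1/3
```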

6.
In this paper, we study the optimal control problem for the viscous weakly dispersive Degasperis-Procesi equation. We deduce the existence and uniqueness of a weak solution to this equation in a short interval by using the Galerkin method. Then, according to optimal control theories and distributed parameter system control theories, the optimal control of the viscous weakly dispersive Degasperis-Procesi equation under boundary conditions is given and the existence of an optimal solution to the viscous weakly dispersive Degasperis-Procesi equation is proved.

7.
In this paper we use measure theory to solve a wide range of nonlinear programming problems. First, we transform a nonlinear programming problem into a classical optimal control problem with no restrictions on states and controls. The new problem is modified into one consisting of the minimization of a special linear functional over a set of Radon measures; we then obtain an optimal measure corresponding to the functional problem, which is approximated by a finite combination of atomic measures, so that the problem is converted approximately to a finite-dimensional linear programming problem. From the solution of this linear programming problem we obtain the approximate optimal control and, in turn, an approximate solution of the original problem. Furthermore, we obtain the path from the initial point to the admissible solution.

8.
We consider a frictionless contact problem with unilateral constraints for a 2D bar. We describe the problem, then we derive its weak formulation, which is in the form of an elliptic variational inequality of the first kind. Next, we establish the existence of a unique weak solution to the problem and prove its continuous dependence with respect to the applied tractions and constraints. We proceed with the study of an associated control problem for which we prove the existence of an optimal pair. Finally, we consider a perturbed optimal control problem for which we prove a convergence result.

9.
We define a new class of optimal control problems and show that it is the largest class of control problems in which every admissible process that satisfies the Extended Pontryagin Maximum Principle is an optimal solution of nonregular optimal control problems. In this class of problems the local and global minima coincide. A dual problem is also proposed, which may be seen as a generalization of the Mond–Weir-type dual problem, and it is shown that the 2-invexity notion is a necessary and sufficient condition for establishing weak, strong, and converse duality results between a nonregular optimal control problem and its dual problem. We also present an example to illustrate our results.

10.
In this paper, we study an optimal control problem for the mixed boundary value problem for an elastic body with quasistatic evolution of an internal damage variable. We suppose that the evolution of the microscopic cracks and cavities responsible for the damage is described by a nonlinear parabolic equation. A density of surface traction p acting on a part of the boundary of the elastic body Ω is taken as the boundary control. Because initial boundary value problems of this type can exhibit the Lavrentieff phenomenon and non-uniqueness of weak solutions, we deal with the solvability of this problem in the class of weak variational solutions. Using the convergence concept in variable spaces and following the direct method in the calculus of variations, we prove the existence of optimal and approximate solutions to the optimal control problem under rather general assumptions on the quasistatic evolution of damage.

11.
In this paper we study moving boundary problems and introduce an approach for solving a wide range of them using calculus of variations and optimization. First, we transform the problem equivalently into an optimal control problem by defining an objective function and artificial control functions. Using measure theory, the new problem is modified into one consisting of the minimization of a linear functional over a set of Radon measures; we then obtain an optimal measure, which is approximated by a finite combination of atomic measures, and the problem is converted to an infinite-dimensional linear programming problem. We approximate this infinite-dimensional linear program by a finite-dimensional one. From the solution of the latter problem we obtain an approximate solution for the moving boundary function at a specified time. Furthermore, we show the path of the moving boundary from the initial state to the final state.

12.
In this article, we consider a general bilevel programming problem in reflexive Banach spaces with a convex lower level problem. In order to derive necessary optimality conditions for the bilevel problem, it is transferred to a mathematical program with complementarity constraints (MPCC). We introduce a notion of weak stationarity and exploit the concept of strong stationarity for MPCCs in reflexive Banach spaces, recently developed by the second author, and we apply these concepts to the reformulated bilevel programming problem. Constraint qualifications are presented, which ensure that local optimal solutions satisfy the weak and strong stationarity conditions. Finally, we discuss a certain bilevel optimal control problem by means of the developed theory. Its weak and strong stationarity conditions of Pontryagin-type and some controllability assumptions ensuring strong stationarity of any local optimal solution are presented.

13.
Patrick Mehlitz, Optimization, 2017, 66(10): 1533–1562
We consider a bilevel programming problem in Banach spaces whose lower level solution is unique for any choice of the upper level variable. A condition is presented which ensures that the lower level solution mapping is directionally differentiable, and a formula is constructed which can be used to compute this directional derivative. Afterwards, we apply these results in order to obtain first-order necessary optimality conditions for the bilevel programming problem. It is shown that these optimality conditions imply that a certain mathematical program with complementarity constraints in Banach spaces has the optimal solution zero. We state the weak and strong stationarity conditions of this problem as well as corresponding constraint qualifications in order to derive applicable necessary optimality conditions for the original bilevel programming problem. Finally, we use the theory to state new necessary optimality conditions for certain classes of semidefinite bilevel programming problems and present an example in terms of bilevel optimal control.

14.
In this paper, we study the optimal control problem for the viscous generalized Camassa–Holm equation. We deduce the existence and uniqueness of a weak solution to the viscous generalized Camassa–Holm equation in a short interval by using the Galerkin method. Then, by using optimal control theories and distributed parameter system control theories, the optimal control of the viscous generalized Camassa–Holm equation under boundary conditions is given and the existence of an optimal solution to the viscous generalized Camassa–Holm equation is proved.

15.
We address a rate control problem associated with a single server Markovian queueing system with customer abandonment in heavy traffic. The controller can choose a buffer size for the queueing system and also can dynamically control the service rate (equivalently the arrival rate) depending on the current state of the system. An infinite horizon cost minimization problem is considered here. The cost function includes a penalty for each rejected customer, a control cost related to the adjustment of the service rate and a penalty for each abandoning customer. We obtain an explicit optimal strategy for the limiting diffusion control problem (the Brownian control problem or BCP) which consists of a threshold-type optimal rejection process and a feedback-type optimal drift control. This solution is then used to construct an asymptotically optimal control policy, i.e. an optimal buffer size and an optimal service rate for the queueing system in heavy traffic. The properties of generalized regulator maps and weak convergence techniques are employed to prove the asymptotic optimality of this policy. In addition, we identify the parameter regimes where the infinite buffer size is optimal.
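For intuition only, the sketch below evaluates a threshold-type admission policy for a single-server Markovian queue with abandonment by direct simulation; the rates, penalties, and buffer size are invented, and the paper's dynamic service-rate control and Brownian control analysis are not reproduced.

```python
# A purely illustrative Monte Carlo evaluation of a fixed buffer threshold for an
# M/M/1 queue with exponential abandonment.  Parameters, costs and the policy are
# assumptions, not the paper's data; the service-rate control cost is omitted.
import numpy as np

rng = np.random.default_rng(0)
lam, mu, theta = 1.0, 1.05, 0.2     # arrival, service and abandonment rates (assumed)
p_reject, p_abandon = 5.0, 3.0      # penalty per rejected / abandoned customer (assumed)
buffer_size = 8                     # threshold: arrivals finding a full buffer are rejected
T_sim = 10_000.0

t, n, cost = 0.0, 0, 0.0
while t < T_sim:
    rate = lam + (mu if n > 0 else 0.0) + theta * n
    t += rng.exponential(1.0 / rate)
    event = rng.random() * rate
    if event < lam:                               # arrival
        if n >= buffer_size:
            cost += p_reject                      # rejected at the threshold
        else:
            n += 1
    elif event < lam + (mu if n > 0 else 0.0):    # service completion
        n -= 1
    else:                                         # abandonment
        n -= 1
        cost += p_abandon

print("average cost per unit time ≈", cost / T_sim)
```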

16.
The paper is devoted to the development of the canonical theory of Hamilton–Jacobi optimality for nonlinear dynamical systems with controls of the vector measure type and with trajectories of bounded variation. Infinitesimal conditions for the strong and weak monotonicity of continuous Lyapunov-type functions with respect to the impulsive dynamical system are formulated. Necessary and sufficient conditions for global optimality in the problem of optimal impulsive control with general end restrictions are presented. The conditions involve sets of weakly and strongly monotone Lyapunov-type functions and are based on the reduction of the original optimal impulsive control problem to a finite-dimensional optimization problem on an estimated set of connectable points.

17.
We consider an abstract optimal control problem with additional constraints and nonsmooth terms, but without requiring that the state equation be solvable on the whole set of admissible controls or that the extremum problem be solvable. We use the approximate penalty method proposed here to find an approximate (in the weak sense) solution of the problem. As an example, we consider the optimal control problem for a singular nonlinear elliptic-type equation.
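As a finite-dimensional caricature of a penalty approach (the paper's approximate penalty method operates in function spaces for a singular elliptic state equation), the sketch below minimizes an invented objective subject to an invented equality constraint by letting the penalty parameter grow.

```python
# A minimal quadratic-penalty sketch: minimise J(x) subject to g(x) = 0 by
# minimising J(x) + (k/2) * g(x)^2 for increasing k.  Everything here is an
# illustrative stand-in, not the paper's method or data.
import numpy as np
from scipy.optimize import minimize

def J(x):                                 # illustrative objective
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):                                 # illustrative equality constraint
    return x[0] + x[1] - 1.0

x = np.zeros(2)
for k in [1.0, 10.0, 100.0, 1000.0]:
    res = minimize(lambda z: J(z) + 0.5 * k * g(z) ** 2, x, method="BFGS")
    x = res.x
    print(f"k = {k:7.1f}:  x ≈ {np.round(x, 4)},  constraint violation ≈ {g(x):.2e}")
# The penalised minimisers approach the constrained minimiser (1, 0) as k grows.
```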

18.
We consider the nonlinear optimal shape design problem, which consists in minimizing the amplitude of bang–bang type controls for the approximate controllability of a linear heat equation with a bounded potential. The design variable is the time-dependent support of the control. Precisely, we look for the best space–time shape and location of the support of the control among those which have the same Lebesgue measure. Since the admissibility set for the problem is not convex, we first obtain a well-posed relaxation of the original problem and then use it to derive a descent method for the numerical resolution of the problem. Numerical experiments in 2D suggest that, even for a regular initial datum, a true relaxation phenomenon occurs in this context. Also, we implement a simple algorithm for computing a quasi-optimal domain for the original problem from the optimal solution of its associated relaxed one.
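A heavily simplified illustration of the underlying question, with invented data: for a 1-D heat equation with a bang-bang source supported on an interval of fixed length, compare a few static support locations by the distance of the final state to a target. The paper optimizes over time-dependent supports via relaxation and a descent method; none of that is reproduced here.

```python
# Brute-force comparison of a few static control supports of equal length for a
# 1-D heat equation with homogeneous Dirichlet boundary conditions, solved by an
# explicit finite-difference scheme.  All data (initial datum, target, amplitude,
# support length) are illustrative assumptions.
import numpy as np

nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
T_final = 0.1
nt = int(np.ceil(T_final / (0.4 * dx ** 2)))   # dt <= 0.4*dx^2 < dx^2/2: explicit scheme stable
dt = T_final / nt

u0 = np.sin(np.pi * x)               # illustrative initial datum
target = np.zeros(nx)                # steer towards zero
amplitude, support_len = -5.0, 0.2   # bang-bang amplitude and support length (assumed)

def final_misfit(support_left):
    chi = ((x >= support_left) & (x <= support_left + support_len)).astype(float)
    u = u0.copy()
    for _ in range(nt):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
        u = u + dt * (lap + amplitude * chi)
        u[0] = u[-1] = 0.0           # homogeneous Dirichlet boundary
    return np.linalg.norm(u - target) * np.sqrt(dx)

for left in (0.1, 0.3, 0.4):
    print(f"support [{left:.1f}, {left + support_len:.1f}]: misfit {final_misfit(left):.4f}")
```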

19.
We study a problem of optimal investment/consumption over an infinite horizon in a market consisting of a liquid and an illiquid asset. The liquid asset is observed and can be traded continuously, while the illiquid one can only be traded and observed at discrete random times corresponding to the jumps of a Poisson process. The problem is a nonstandard mixed discrete/continuous optimal control problem, which we treat by the dynamic programming approach. The main goal of the paper is the characterization of the value function as the unique viscosity solution of an associated Hamilton–Jacobi–Bellman equation. We then use this result to build a numerical algorithm, allowing one to approximate the value function and so to measure the cost of illiquidity.
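As a generic stand-in for the kind of value-function approximation mentioned at the end of the abstract, here is a semi-Lagrangian fixed-point iteration for a simple one-dimensional discounted control problem; the dynamics, cost, and discount rate are invented and unrelated to the paper's investment/consumption model.

```python
# A minimal semi-Lagrangian scheme for V(x) = min_u { l(x,u) dt + e^{-rho dt} V(x + f(x,u) dt) }
# on a grid with linear interpolation; an illustrative stand-in for an HJB solver.
import numpy as np

rho, dt = 0.5, 0.05
x_grid = np.linspace(-2.0, 2.0, 201)
u_grid = np.linspace(-1.0, 1.0, 21)

def f(x, u):                 # illustrative controlled drift
    return -x + u

def l(x, u):                 # illustrative running cost
    return x ** 2 + 0.1 * u ** 2

V = np.zeros_like(x_grid)
for _ in range(500):         # fixed-point (value) iteration
    # For every (x, u) pair, take one explicit Euler step of the state ...
    x_next = x_grid[None, :] + f(x_grid[None, :], u_grid[:, None]) * dt
    x_next = np.clip(x_next, x_grid[0], x_grid[-1])
    # ... interpolate the current value there, add the running cost, minimise over u.
    V_next = np.interp(x_next.ravel(), x_grid, V).reshape(x_next.shape)
    Q = l(x_grid[None, :], u_grid[:, None]) * dt + np.exp(-rho * dt) * V_next
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

print("approximate value at x = 0:", V[np.abs(x_grid).argmin()])
```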

20.
We consider the problem of determining an optimal driving strategy in a train control problem with a generalised equation of motion. We assume that the journey must be completed within a given time and seek a strategy that minimises fuel consumption. On the one hand we consider the case where continuous control can be used and on the other hand we consider the case where only discrete control is available. We pay particular attention to a unified development of the two cases. For the continuous control problem we use the Pontryagin principle to find necessary conditions on an optimal strategy and show that these conditions yield key equations that determine the optimal switching points. In the discrete control problem, which is the typical situation with diesel-electric locomotives, we show that for each fixed control sequence the cost of fuel can be minimised by finding the optimal switching times. The corresponding strategies are called strategies of optimal type and in this case we use the Kuhn–Tucker equations to find key equations that determine the optimal switching times. We note that the strategies of optimal type can be used to approximate as closely as we please the optimal strategy obtained using continuous control and we present two new derivations of the key equations. We illustrate our general remarks by reference to a typical train control problem.
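The "fix the control sequence, optimize the switching time" idea can be illustrated with a deliberately simplified point-mass model (constant acceleration while powering, constant resistive deceleration while coasting, fuel proportional to powering time); the accelerations, journey length, and time limit below are invented, and the paper's generalised equation of motion and Kuhn–Tucker analysis are not used.

```python
# A minimal sketch: power phase with constant acceleration for t1 seconds, then
# coasting with constant deceleration.  Fuel is taken proportional to t1, so the
# least-fuel feasible strategy is the shortest power phase that still completes
# the journey of length D within the allowed time T.
import numpy as np

a, r = 0.5, 0.05          # power acceleration, coasting deceleration (m/s^2), illustrative
D, T = 5000.0, 400.0      # journey length (m) and allowed time (s), illustrative

def journey_time(t1):
    """Time needed to cover D when power is applied for t1 seconds, then coasting."""
    v1 = a * t1
    d1 = 0.5 * a * t1 ** 2
    if d1 >= D:                                   # journey finished during the power phase
        return np.sqrt(2.0 * D / a)
    rem = D - d1                                  # distance still to cover while coasting
    disc = v1 ** 2 - 2.0 * r * rem
    if disc < 0.0:                                # train stops before reaching D
        return np.inf
    return t1 + (v1 - np.sqrt(disc)) / r

t1_grid = np.linspace(1.0, 200.0, 2000)
feasible = [t1 for t1 in t1_grid if journey_time(t1) <= T]
best_t1 = min(feasible)                           # least fuel = shortest power phase
print(f"optimal switching time ≈ {best_t1:.1f} s, journey time ≈ {journey_time(best_t1):.1f} s")
```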

