Similar Documents (20 results)
1.
In this paper we study a general multidimensional diffusion-type stochastic control problem. Our model contains the usual regular control, singular control and impulse control problems as special cases. Using a unified dynamic programming treatment, we show that the value function of the problem is a viscosity solution of a certain Hamilton-Jacobi-Bellman (HJB) quasi-variational inequality, and we prove uniqueness of the viscosity solution of this quasi-variational inequality. Supported in part by USA Office of Naval Research grant #N00014-96-1-0262. Supported in part by the NSFC Grant #79790130, the National Distinguished Youth Science Foundation of China Grant #19725106 and the Chinese Education Ministry Science Foundation.

2.
In this paper we give semiconcavity results for the value function of some constrained optimal control problems with infinite horizon in a half-space. In particular, we assume that the control space is the ℓ¹-ball or the ℓ∞-ball in R^n.

3.
We study an optimal control problem for a hybrid system exhibiting several internal switching variables whose discrete evolutions are governed by some delayed thermostatic laws. By the dynamic programming technique we prove that the value function is the unique viscosity solution of a system of several Hamilton-Jacobi equations, suitably coupled. The method involves a contraction principle and some suitably adapted results for exit-time problems with discontinuous exit cost.

4.
We propose an alternative method for effectively computing the solution of non-linear, fixed-terminal-time optimal control problems given in Lagrange, Bolza or Mayer form. This method works well when the non-linearities in the control variable can be expressed as polynomials. The essence of the proposal is the transformation of a non-linear, non-convex optimal control problem into an equivalent optimal control problem with linear and convex structure. The method is based on the global optimization of polynomials by the method of moments. With this method we can determine either the existence or the absence of minimizers, and we can calculate generalized solutions when the original problem lacks minimizers. We also present numerical schemes for solving several examples arising in science and technology.
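The flavor of the moment approach can be seen on a toy one-variable problem. The sketch below is our illustration, not an example from the paper; it assumes cvxpy with an SDP-capable solver is installed, and the polynomial is an arbitrary choice. It minimizes a polynomial by optimizing over moment sequences of a probability measure rather than over x directly:

```python
# Minimize p(x) = x^4 - 3x^2 + 1 via its moments y_k = E[x^k]: the Hankel
# moment matrix X[i, j] = y_{i+j} must be positive semidefinite for the y_k
# to come from a measure. For univariate polynomials this relaxation is exact.
import cvxpy as cp

X = cp.Variable((3, 3), PSD=True)        # moment matrix, X[i, j] = y_{i+j}
constraints = [
    X[0, 0] == 1,                        # y_0 = 1: mu is a probability measure
    X[0, 2] == X[1, 1],                  # Hankel structure: both entries equal y_2
]
# E[p(x)] = y_4 - 3*y_2 + y_0 expressed through moment-matrix entries
objective = cp.Minimize(X[2, 2] - 3 * X[1, 1] + X[0, 0])
prob = cp.Problem(objective, constraints)
prob.solve()
print("lower bound on min p:", prob.value)   # -1.25, attained at x^2 = 3/2
```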

5.
Many practical optimal control problems include discrete decisions. These may be either time-independent parameters or time-dependent control functions, such as gears or valves, that can only take discrete values at any given time. While great progress has been achieved in the solution of optimization problems involving integer variables, in particular mixed-integer linear programs, as well as in continuous optimal control problems, the combination of the two is still an open field of research. We consider the question of lower bounds that can be obtained by a relaxation of the integer requirements. For general nonlinear mixed-integer programs such lower bounds typically suffer from a huge integer gap. We convexify (with respect to the binary controls) and relax the original problem and prove that the optimal solution of this continuous control problem yields the best lower bound for the nonlinear integer problem. Building on this theoretical result, we present a novel algorithm to solve mixed-integer optimal control problems, with a focus on discrete-valued control functions. Our algorithm is based on the direct multiple shooting method, an adaptive refinement of the underlying control discretization grid, and tailored heuristic integer methods. Its applicability is shown by a challenging application, the energy-optimal control of a subway train with discrete gears and velocity limits.
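As one concrete illustration of how a relaxed binary control can be turned back into an admissible discrete one, the following sketch implements a simple sum-up rounding heuristic on a uniform time grid. This is our own illustration of one rounding strategy used in this setting; the data and function names are illustrative choices, not taken from the subway application:

```python
# Round a relaxed binary control alpha(t) in [0,1] on a uniform grid into a
# binary control w(t) in {0,1} whose integral tracks that of alpha.
import numpy as np

def sum_up_rounding(alpha, dt):
    """Round alpha[i] in [0,1] to w[i] in {0,1} so that the accumulated
    difference sum((alpha - w) * dt) stays bounded by dt."""
    w = np.zeros_like(alpha)
    accumulated = 0.0
    for i in range(len(alpha)):
        accumulated += alpha[i] * dt
        if accumulated - 0.5 * dt >= 0.0:   # switch on when the deficit is large
            w[i] = 1.0
            accumulated -= dt
    return w

alpha = np.array([0.2, 0.7, 0.9, 0.4, 0.1, 0.6])
print(sum_up_rounding(alpha, dt=1.0))       # -> [0. 1. 1. 0. 0. 1.]
```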

6.
Naive implementations of Newton's method for unconstrained N-stage discrete-time optimal control problems with Bolza objective functions tend to increase in cost like N^3 as N increases. However, if the inherent recursive structure of the Bolza problem is properly exploited, the cost of computing a Newton step will increase only linearly with N. The efficient Newton implementation scheme proposed here is similar to Mayne's DDP (differential dynamic programming) method but produces the Newton step exactly, even when the dynamical equations are nonlinear. The proposed scheme is also related to a Riccati treatment of the linear two-point boundary-value problems that characterize optimal solutions. For discrete-time problems, the dynamic programming approach and the Riccati substitution differ in an interesting way; however, these differences essentially vanish in the continuous-time limit. This work was supported by the National Science Foundation, Grant No. DMS-85-03746.
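To see why the cost grows only linearly with N, consider the purely linear-quadratic case, where the Newton step is the exact solution. The sketch below is our own toy example, not taken from the paper: one backward Riccati sweep plus one forward rollout, each a single O(N) pass.

```python
# Exact Newton (here: optimal LQR) step for x_{k+1} = A x_k + B u_k with
# stage cost 0.5*(x'Qx + u'Ru), computed in O(N) work. Matrices and the
# horizon are illustrative choices.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.eye(1); N = 50

# Backward sweep: cost-to-go Hessians P_k and feedback gains K_k.
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                      # gains[k] applies at stage k

# Forward rollout from an initial state: one more O(N) pass.
x = np.array([1.0, 0.0])
for K in gains:
    u = -K @ x
    x = A @ x + B @ u
print("terminal state:", x)
```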

7.
Dynamic programming identifies the value function of continuous-time optimal control with a solution to the Hamilton-Jacobi equation, appropriately defined. This relationship in turn leads to sufficient conditions of global optimality, which have been widely used to confirm the optimality of putative minimisers. In continuous-time optimal control, the dynamic programming methodology has been used for problems whose state space is a vector space. However, there are many problems of interest in which it is necessary to regard the state space as a manifold. This paper extends dynamic programming to cover problems in which the state space is a general finite-dimensional C^∞ manifold. It shows that, also in a manifold setting, we can characterise the value function of a free-time optimal control problem as the unique lower semicontinuous, lower bounded, generalised solution of the Hamilton-Jacobi equation. The application of these results is illustrated by the investigation of minimum-time controllers for a rigid pendulum.

8.
The aim of this paper is to propose an algorithm, based on the optimal level solutions method, which solves a particular class of box-constrained quadratic problems. The objective function is given by the sum of a quadratic strictly convex separable function and the square of an affine function multiplied by a real parameter. The convexity or nonconvexity of the problem can be characterized by means of the value of this real parameter. Within the algorithm, some global optimality conditions are used as stopping criteria, even in the case of a nonconvex objective function. The results of an extensive computational test of the algorithm are also provided. This paper has been partially supported by M.I.U.R.

9.
Optimization (2012), 61(5), 595-607
In this paper, optimality conditions are derived for elliptic optimal control problems with a restriction on the state or on the gradient of the state. Essential tools are the method of transposition and generalized trace theorems and Green's formulas from the theory of elliptic differential equations.

10.
A class of direct methods is presented for the solution of optimal control problems with state constraints. These are sequential quadratic programming methods: at each iteration, a quadratic program, obtained by a quadratic approximation of the Lagrangian function and linear approximations of the constraints, is solved to obtain a search direction for a merit function. The merit function is formulated by augmenting the Lagrangian function with a penalty term. A line search is carried out along the search direction to determine a step length such that the merit function is decreased. The methods presented in this paper include both continuous and discrete sequential quadratic programming methods.
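The core of each iteration can be illustrated on a tiny equality-constrained problem. The sketch below is our illustrative example (in general the full step would be scaled by a merit-function line search); it forms and solves the KKT system of the QP subproblem:

```python
# One SQP step for min f(x) s.t. c(x) = 0: the QP subproblem (quadratic model
# of the Lagrangian, linearized constraints) reduces to a KKT linear system.
# Test problem: min x0^2 + x1^2 s.t. x0 + x1 = 1 (an illustrative choice).
import numpy as np

def sqp_step(x):
    g = 2.0 * x                       # gradient of f(x) = x0^2 + x1^2
    H = 2.0 * np.eye(2)               # Hessian of the Lagrangian
    J = np.array([[1.0, 1.0]])        # Jacobian of c(x) = x0 + x1 - 1
    c = np.array([x[0] + x[1] - 1.0])
    # KKT system: [H J'; J 0] [d; lam] = [-g; -c]
    KKT = np.block([[H, J.T], [J, np.zeros((1, 1))]])
    rhs = np.concatenate([-g, -c])
    return np.linalg.solve(KKT, rhs)[:2]   # search direction d

x = np.array([1.0, 0.0])
x = x + sqp_step(x)                   # full step; a line search on the merit
print(x)                              # function would set the step length
# -> [0.5 0.5], the exact minimizer of this quadratic test problem
```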

11.
We study a quasi-variational inequality system with unbounded solutions. It represents the Bellman equation associated with an optimal switching control problem with state constraints arising from production engineering. We show that the optimal cost is the unique viscosity solution of the system. This work was supported by the National Research Council of Argentina, Grant No. PID-BID 213.

12.
Optimal control problems in Hilbert spaces are considered in a measure-theoretical framework. Instead of minimizing a functional defined on a class of admissible trajectory-control pairs, we minimize one defined on a set of measures; this set is defined by the boundary conditions and the differential equation of the problem. The new problem is an infinite-dimensional linear programming problem; it is shown that it is possible to approximate its solution by that of a finite-dimensional linear program of sufficiently high dimension, while this solution itself can be approximated by a trajectory-control pair. This pair may not be strictly admissible; if the dimensionality of the finite-dimensional linear program and the accuracy of the computations are high enough, the conditions of admissibility can be said to be satisfied up to any given accuracy. The value given by this pair to the functional measuring the performance criterion can be about equal to the global infimum associated with the classical problem, or it may be less than this number. It appears that this method may become a useful technique for the computation of optimal controls, provided the approximations involved are acceptable.

13.
In this paper we present a predator-prey mathematical model for two biological populations which dislike crowding. The model consists of a system of two degenerate parabolic equations with nonlocal terms and drifts. We provide conditions on the system ensuring the periodic coexistence, namely the existence of two non-trivial non-negative periodic solutions representing the densities of the two populations. We assume that the predator species is harvested if its density exceeds a given threshold. A minimization problem for a cost functional associated with this process and with some other significant parameters of the model is also considered.

14.
Dynamic programming techniques have proven to be more successful than alternative nonlinear programming algorithms for solving many discrete-time optimal control problems. The reason is that, because of the stagewise decomposition which characterizes dynamic programming, the computational burden grows approximately linearly with the number n of decision times, whereas the burden for other methods tends to grow faster (e.g., n^3 for Newton's method). The idea motivating the present study is that the advantages of dynamic programming can be brought to bear on classical nonlinear programming problems if only they can somehow be rephrased as optimal control problems. As shown herein, it is indeed the case that many prominent problems in the nonlinear programming literature can be viewed as optimal control problems, and for these problems, modern dynamic programming methodology is competitive with respect to processing time. The mechanism behind this success is that such methodology achieves quadratic convergence without requiring the solution of large systems of linear equations.
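The stagewise decomposition can be seen in a minimal tabular backward recursion; the dynamics, costs, and grid sizes below are our illustrative choices, not taken from the paper. For a fixed state grid, the work grows linearly with the number of stages n:

```python
# Tabular backward dynamic programming for a scalar discrete-time problem on
# a discretized state grid: one sweep per stage, so O(n) sweeps in total.
import numpy as np

states = np.linspace(-2.0, 2.0, 41)
controls = np.linspace(-1.0, 1.0, 21)
n_stages = 30

def step(x, u):
    return 0.9 * x + u            # illustrative linear dynamics

V = states ** 2                   # terminal cost
for _ in range(n_stages):
    V_next = np.empty_like(V)
    for i, x in enumerate(states):
        xn = step(x, controls)    # all successor states at once
        # interpolate the cost-to-go and add the stage cost x^2 + u^2
        costs = x**2 + controls**2 + np.interp(xn, states, V)
        V_next[i] = costs.min()
    V = V_next
print("V(1.0) ~", np.interp(1.0, states, V))
```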

15.
Continuous-time optimal control problems can rarely be solved directly but have to be approximated by discrete analogues. Shorter time steps lead to more accurate approximations, but result in formulations that are often too big for computer memory. This paper presents a technique for decomposing the problem along the time axis and iterating toward a solution in a leader-follower framework. In the model, the leader controls a set of coordination parameters, which he passes to the followers, who then solve their individual subproblems. State and sensitivity information is returned to the leader, who attempts to minimize an unconstrained problem in the coordination space. Parameters are updated and the process continues until improvement ceases. Two advantages of this technique are that feasible solutions to the original problem are available at each iteration and that the optimal coordination parameters obtained provide some measure of feedback control. Computational results are presented for a comprehensive set of test problems. This work was supported by a grant from the Advanced Research Program of the Texas Higher Education Coordinating Board.

16.
An optimality system of equations for the optimal control problem governed by Helmholtz-type equations is derived. From the associated first-order necessary optimality condition, we obtain the conjugate gradient method (CGM) in the continuous case. Introducing the sequence of higher-order fundamental solutions, we propose an iterative algorithm based on the conjugate gradient-boundary element method using the multiple reciprocity method (CGM+MRBEM) for computing the discrete control input. This algorithm has an advantage over those in the existing literature because the main attribute of the boundary element method (the reduced dimensionality) is fully utilized. Finally, local error estimates for this scheme are obtained, and a test problem is given to illustrate the efficiency of the proposed method.
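The boundary-element and multiple-reciprocity machinery is beyond a short example, but the conjugate gradient iteration at the core of such schemes can be sketched generically. This is our illustration on an arbitrary symmetric positive definite system, not the discretized control problem itself:

```python
# Conjugate gradient iteration for A x = b with A symmetric positive definite.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # A-conjugate update of the direction
        rs = rs_new
    return x

M = np.random.rand(5, 5)
A = M @ M.T + 5 * np.eye(5)       # build an SPD test matrix
b = np.ones(5)
print(np.allclose(A @ conjugate_gradient(A, b), b))  # True
```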

17.
18.
The purpose of this paper is to draw a detailed comparison between Newton's method, as applied to discrete-time, unconstrained optimal control problems, and the second-order method known as differential dynamic programming (DDP). The main outcomes of the comparison are: (i) DDP does not coincide with Newton's method, but (ii) the methods are close enough that they have the same convergence rate, namely, quadratic. The comparison also reveals some other facts of theoretical and computational interest. For example, the methods differ only in that Newton's method operates on a linear approximation of the state at a certain point, whereas DDP operates on the exact value. This would suggest that DDP ought to be more accurate, an anticipation borne out in our computational example. Also, the positive definiteness of the Hessian of the objective function is easy to check within the framework of DDP. This enables one to propose a modification of DDP so that a descent direction is produced at each iteration, regardless of the Hessian. Efforts of the first author were partially supported by the South African Council for Scientific and Industrial Research, and those of the second author by NSF Grants Nos. CME-79-05010 and CEE-81-10778.
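A minimal sketch of the kind of modification mentioned here, in our interpretation (a Levenberg-style shift; the paper's actual modification may differ): within a DDP backward pass, check the control Hessian for positive definiteness and regularize it so the resulting step is a descent direction.

```python
# Q_uu below is a stand-in matrix; in DDP it comes from the backward recursion.
import numpy as np

def make_descent(Q_uu, mu0=1e-6):
    """Return a positive definite modification of Q_uu via a diagonal shift."""
    mu = 0.0
    while True:
        try:
            # Cholesky succeeds iff the shifted matrix is positive definite.
            L = np.linalg.cholesky(Q_uu + mu * np.eye(Q_uu.shape[0]))
            return L @ L.T             # equals Q_uu + mu*I, PD by construction
        except np.linalg.LinAlgError:
            mu = max(mu0, 10.0 * mu)   # increase the shift and retry

Q_uu = np.array([[1.0, 2.0], [2.0, 1.0]])      # indefinite: eigenvalues 3, -1
print(np.linalg.eigvalsh(make_descent(Q_uu)))  # all positive now
```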

19.
R. Dehghan & M. Keyanpour, Optimization (2017), 66(7), 1157-1176
This paper presents a numerical scheme for solving fractional optimal control problems in which the fractional derivative is understood in the Riemann-Liouville sense. The proposed method, based on the method of moments, converts the fractional optimal control problem into a semidefinite optimization problem; that is, the nonlinear optimal control problem is converted into a convex optimization problem. The Grünwald-Letnikov formula is used as an approximation of the fractional derivative. The solution of the fractional optimal control problem is then found by solving the semidefinite optimization problem. Finally, numerical examples are presented to show the performance of the method.
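The Grünwald-Letnikov approximation itself is easy to sketch. Below is our illustration (test function and order are our choices) on f(t) = t with alpha = 0.5, where the half-derivative is known in closed form:

```python
# Grünwald-Letnikov approximation of a fractional derivative of order alpha
# on a uniform grid with step h:
#   D^alpha f(t_n) ~ h^(-alpha) * sum_{k=0..n} c_k f(t_{n-k}),
# where c_0 = 1 and c_k = c_{k-1} * (1 - (alpha + 1)/k).
import numpy as np

def gl_derivative(f_vals, alpha, h):
    n = len(f_vals)
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k-1] * (1.0 - (alpha + 1.0) / k)  # binomial-coefficient recurrence
    # value at the last grid point
    return h ** (-alpha) * np.dot(c, f_vals[::-1])

h = 1e-3
t = np.arange(0.0, 1.0 + h, h)
approx = gl_derivative(t, alpha=0.5, h=h)           # f(t) = t
exact = 2.0 * np.sqrt(1.0) / np.sqrt(np.pi)         # D^0.5 t = 2*sqrt(t)/sqrt(pi) at t = 1
print(approx, exact)
```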

20.
Traditional approaches to solving stochastic optimal control problems involve dynamic programming and the solution of certain optimality equations. When such problems are recast as stochastic programming problems, structural aspects such as convexity are retained, and numerical solution procedures based on decomposition and duality may be exploited. This paper explores a class of stationary, infinite-horizon stochastic optimization problems with a discounted cost criterion. Constraints on both states and controls are permitted, and are modeled in the objective function by allowing it to take infinite values. Approximation techniques are developed using variational analysis, and intuitive lower bounds are obtained via averaging the future. These bounds could be used in a finite-time horizon stochastic programming setting to find solutions numerically. Research supported in part by a grant of the National Science Foundation. AMS Classification 46N10, 49N15, 65K10, 90C15, 90C46
