Similar articles: 20 results found (search time: 620 ms)
1.
2.
This paper studies the optimal control problem for point processes with Gaussian white-noised observations. A general maximum principle is proved for the partially observed optimal control of point processes, without using the associated filtering equation. Adjoint flows—the adjoint processes of the stochastic flows of the optimal system—are introduced, and their relations are established. Adjoint vector fields, which are observation-predictable, are introduced as the solutions of associated backward stochastic integral-partial differential equations driven by the observation process. In a heuristic way, their relations are explained, and the adjoint processes are expressed in terms of the adjoint vector fields, their gradients, and Hessians, along the optimal state process. In this way the adjoint processes are naturally connected to the adjoint equation of the associated filtering equation. This shows that the conditional expectation in the maximum condition is computable through filtering the optimal state, as usually expected. Some variants of the partially observed stochastic maximum principle are derived, and the corresponding maximum conditions are quite different from their counterparts in the diffusion case. Finally, as an example, a quadratic optimal control problem with a free Poisson process and a Gaussian white-noised observation is explicitly solved using the partially observed maximum principle. Accepted 8 August 2001. Online publication 17 December 2001.

3.
A maximum principle is developed for a class of problems involving the optimal control of a distributed-parameter system governed by a linear hyperbolic equation in one space dimension that is not necessarily separable. A convex index of performance is formulated, which consists of functionals of the state variable, its first- and second-order space derivatives, its first-order time derivative, and a penalty functional involving the open-loop control force. The solution of the optimal control problem is shown to be unique. The adjoint operator is determined, and a maximum principle relating the control function to the adjoint variable is stated. The proof of the maximum principle is given with the help of convexity arguments. The maximum principle can be used to compute the optimal control function and is particularly suitable for problems involving the active control of structural elements for vibration suppression.

4.
The paper deals with optimal control of heterogeneous systems, that is, families of controlled ODEs parameterized by a parameter running over a domain called the domain of heterogeneity. The main novelty in the paper is that the domain of heterogeneity is endogenous: it may depend on the control and on the state of the system. This extension is crucial for several economic applications and turns out to raise interesting mathematical problems. A necessary optimality condition is derived, where one of the adjoint variables satisfies a differential inclusion (instead of an equation) and the maximization of the Hamiltonian takes the form of a "min-max". As a consequence, a Pontryagin-type maximum principle is obtained under certain regularity conditions for the optimal control. A formula for the derivative of the objective function with respect to the control from L∞ is presented, together with a sufficient condition for its existence. A stylized economic example is investigated analytically and numerically.

5.
We consider a class of infinite-horizon optimal control problems that arise in studying models of optimal dynamic allocation of economic resources. In a typical problem of that kind the initial state is fixed, no constraints are imposed on the behavior of the admissible trajectories at infinity, and the objective functional is given by a discounted improper integral. Earlier, for such problems, S.M. Aseev and A.V. Kryazhimskiy in 2004–2007 and jointly with the author in 2012 developed a method of finite-horizon approximations and obtained variants of the Pontryagin maximum principle that guarantee normality of the problem and contain an explicit formula for the adjoint variable. In the present paper those results are extended to a more general situation where the instantaneous utility function need not be locally bounded from below. As an important illustrative example, we carry out a rigorous mathematical investigation of the transitional dynamics in the neoclassical model of optimal economic growth.
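For orientation, an explicit adjoint formula of the kind referred to above can be sketched as follows for the discounted problem of maximizing $\int_0^\infty e^{-\rho t} g(x,u)\,dt$ subject to $\dot{x} = f(x,u)$. The notation here is my own, not taken from the abstract:

```latex
\psi(t) \;=\; \int_t^{\infty} \bigl[\Phi(s,t)\bigr]^{\top}
              e^{-\rho s}\, g_x\!\bigl(x^{*}(s),u^{*}(s)\bigr)\,\mathrm{d}s ,
```

where $\Phi(s,t)$ is the state transition matrix of the system linearized along the optimal pair, $\dot{y} = f_x(x^{*}(t),u^{*}(t))\,y$. Differentiating under the integral shows that $\psi$ satisfies the standard adjoint equation $\dot{\psi} = -f_x^{\top}\psi - e^{-\rho t} g_x$ with the natural decay condition at infinity.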

6.
We consider an optimal control problem under state constraints and show that to every optimal solution corresponds an adjoint state satisfying the first-order necessary optimality conditions in the form of a maximum principle and sensitivity relations involving the value function. Such sensitivity relations were recently investigated by P. Bettiol and R.B. Vinter for state constraints with smooth boundary. In contrast with their work, our setting concerns differential inclusions and nonsmooth state constraints. To obtain our result we derive neighboring feasible trajectory estimates using a novel generalization of the so-called inward pointing condition.

7.
The purpose of this paper is to derive some pointwise second-order necessary conditions for stochastic optimal controls in the general case that the control variable enters into both the drift and the diffusion terms. When the control region is convex, a pointwise second-order necessary condition for stochastic singular optimal controls in the classical sense is established; while when the control region is allowed to be nonconvex, we obtain a pointwise second-order necessary condition for stochastic singular optimal controls in the sense of Pontryagin-type maximum principle. It is found that, quite different from the first-order necessary conditions, the correction part of the solution to the second-order adjoint equation appears in the pointwise second-order necessary conditions whenever the diffusion term depends on the control variable, even if the control region is convex.

8.
We derive nonlocal necessary optimality conditions, which efficiently strengthen the classical Pontryagin maximum principle and its modification obtained by B. Kaśkosz and S. Łojasiewicz as well as our previous result of a similar kind named the "feedback minimum principle." The strengthening of the feedback minimum principle (and, hence, of the Pontryagin principle) is owing to the employment of two types of feedback controls "compatible" with a reference trajectory (i.e., producing this trajectory as a Carathéodory solution). In each of the versions, the strengthened feedback minimum principle states that the optimality of a reference process implies the optimality of its trajectory in a certain family of variational problems generated by cotrajectories of the original and compatible controls. The basic construction of the feedback minimum principle—a perturbation of a solution to the adjoint system—is employed to prove an exact formula for the increment of the cost functional. We use this formula to obtain sufficient conditions for the strong and global minimum of Pontryagin's extremals. These conditions are much milder than their known analogs, which require the convexity in the state variable of the functional and of the lower Hamiltonian. Our study is focused on a nonlinear smooth Mayer problem with free terminal states. All assertions are illustrated by examples.

9.
The paper deals with first-order necessary optimality conditions for a class of infinite-horizon optimal control problems that arise in economic applications. Neither convergence of the integral utility functional nor local boundedness of the optimal control is assumed. Using the classical needle variations technique, we develop a normal-form version of the Pontryagin maximum principle with an explicitly specified adjoint variable under weak regularity assumptions. The result generalizes some previous results in this direction. An illustrative economic example is presented.

10.
Based on the maximum principle, a difference formula defined on a non-integral node is given to approximate the fractional Riemann-Liouville derivative, and a finite difference scheme for solving one-dimensional space-fractional diffusion equations (FDEs) with variable coefficients is presented. Furthermore, using the maximum principle, the scheme is proved unconditionally stable and second-order accurate in the spatial grid size. Several numerical examples are given to verify the efficiency of the scheme.
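The abstract does not spell out the scheme, but the shifted Grünwald-Letnikov difference is the standard building block for approximating a space-fractional Riemann-Liouville derivative of order α ∈ (1, 2). The Python sketch below illustrates that general construction under my own assumptions (function names, zero boundary values, and implicit-Euler time stepping are mine, not the paper's scheme, and this first-order variant does not reproduce the paper's second-order accuracy):

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights g_k = (-1)^k * binom(alpha, k), computed
    by the stable recurrence g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

def step_implicit(u, d, alpha, h, dt):
    """One implicit-Euler step for u_t = d(x) * D^alpha u on a uniform grid
    with zero boundary values, using the shifted Grunwald approximation
    D^alpha u(x_i) ~ h^{-alpha} * sum_k g_k * u_{i-k+1}."""
    n = len(u)
    g = gl_weights(alpha, n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        for k in range(i + 2):          # shifted sum: k = 0 .. i+1
            j = i - k + 1
            if 0 <= j < n:
                A[i, j] += d[i] * g[k] / h**alpha
    # Implicit Euler: (I - dt * A) u_new = u_old
    return np.linalg.solve(np.eye(n) - dt * A, u)
```

A quick sanity check on the weights: for α = 2 they reduce to the classical second-difference stencil 1, −2, 1, 0, 0, …, which is one way to see that the fractional operator interpolates the ordinary Laplacian.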

11.
A stochastic maximum principle for the risk-sensitive optimal control problem of jump diffusion processes with an exponential-of-integral cost functional is derived, assuming that the value function is smooth, where the diffusion and jump terms may both depend on the control. The form of the maximum principle is similar to its risk-neutral counterpart, but the adjoint equations and the maximum condition depend heavily on the risk-sensitive parameter. As an application, a linear-quadratic risk-sensitive control problem is solved using the derived maximum principle, and the explicit optimal control is obtained.
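An exponential-of-integral cost of the kind mentioned above is commonly written in the following form (the symbols $f$, $h$, and the risk-sensitive parameter $\theta$ are my notation, not taken from the abstract):

```latex
J(u(\cdot)) \;=\; \mathbb{E}\,
\exp\!\left\{ \theta \left[ \int_0^T f\bigl(t, x_t, u_t\bigr)\,\mathrm{d}t
\;+\; h(x_T) \right] \right\} ,
```

so that $\theta > 0$ penalizes the variability of the accumulated cost (risk-averse control), while letting $\theta \to 0$ formally recovers the risk-neutral expected-cost criterion.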

12.
Summary. We show that a gradient operator defined by perturbations of the Poisson process jump times can be used with its adjoint operator, instead of the annihilation and creation operators on the Poisson-Charlier chaotic decomposition, to represent the Poisson process. The quantum stochastic integration and the Itô formula are developed accordingly, leading to commutation relations which are different from the CCR. An analog of the Weyl representation is defined for a subgroup of SL(2, ℝ), showing that the exponential and geometric distributions are closely related in this approach.

13.
A maximum principle for the open-loop optimal control of a vibrating system relative to a given convex index of performance is investigated. Though maximum principles have been studied by many people (see, e.g., Refs. 1–5), the principle derived in this paper is of particular use for control problems involving mechanical structures. The state variable satisfies general initial conditions as well as a self-adjoint system of partial differential equations together with a homogeneous system of boundary conditions. The mass matrix is diagonal, constant, and singular, and the viscous damping matrix is diagonal. The maximum principle relates the optimal control with the solution of the homogeneous adjoint equation in which terminal conditions are prescribed in terms of the terminal values of the optimal state variable. An application of this theory to a structural vibrating system is given in a companion paper (Ref. 6).

14.
This paper investigates the relationship between the maximum principle with an infinite horizon and dynamic programming, and sheds new light on the role of the transversality condition at infinity as a necessary and sufficient condition for optimality, with or without convexity assumptions. We first derive the nonsmooth maximum principle and the adjoint inclusion for the value function as necessary conditions for optimality. We then present sufficiency theorems that are consistent with the strengthened maximum principle, employing the adjoint inequalities for the Hamiltonian and the value function. Synthesizing these results, necessary and sufficient conditions for optimality are provided for the convex case. In particular, the role of the transversality conditions at infinity is clarified.
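Two classical forms of the transversality condition at infinity are typically studied in this context. Writing $p$ for the adjoint variable and $x^{*}$ for the optimal trajectory (my notation, not the abstract's):

```latex
\lim_{T \to \infty} p(T) = 0
\qquad \text{and} \qquad
\lim_{T \to \infty} \bigl\langle p(T),\, x^{*}(T) \bigr\rangle = 0 ,
```

where the first requires the adjoint itself to vanish at infinity and the second, weaker condition only requires the "shadow value" of the state to vanish. Without convexity, neither condition is necessary in general, which is why clarifying their role is a genuine question.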

15.
We define a special multiplication of function series (skew multiplication) and a generalized Riemann-Stieltjes integral with function series as integration arguments. The generalized integrals and the skew multiplication are related by an integration by parts formula. The generalized integrals generate a family of linear generalized integral equations, which includes a family (represented in integral form via the Riemann-Stieltjes integral) of linear differential equations with several deviating arguments. A specific feature of these equations is that all deviating functions are defined on the same closed interval and map it into itself. This permits one to avoid specifying the initial functions and imposing any additional constraints on the deviating functions. We present a procedure for constructing the fundamental solution of a generalized integral equation. With respect to the skew multiplication, it is invertible and generates the product of the fundamental solution (a function of one variable) by its inverse function (a function of the second variable). Under certain conditions on the parameters of the equation, the product has all specific properties of the Cauchy function. We introduce the notion of adjoint generalized integral equation, obtain a representation of solutions of the original equation and the adjoint equation in generalized integral Cauchy form, and derive sufficient conditions for the convergence of solutions of a pair of adjoint equations.
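The classical Riemann-Stieltjes integration by parts formula, which the generalized formula above extends via the skew multiplication, reads:

```latex
\int_a^b f \,\mathrm{d}g \;=\; f(b)\,g(b) - f(a)\,g(a) - \int_a^b g \,\mathrm{d}f ,
```

valid whenever either of the two integrals exists; the existence of one then implies the existence of the other.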

16.
This paper is concerned with the stochastic optimal control problem of jump diffusions. The relationship between the stochastic maximum principle and the dynamic programming principle is discussed. Without involving any derivatives of the value function, relations among the adjoint processes, the generalized Hamiltonian, and the value function are investigated by employing the notion of semijets invoked in defining viscosity solutions. A stochastic verification theorem is also given to verify whether a given admissible control is optimal.

17.
This paper is concerned with partially-observed optimal control problems for fully-coupled forward-backward stochastic systems. The maximum principle is obtained on the assumption that the forward diffusion coefficient does not contain the control variable and the control domain is not necessarily convex. By a classical spike variational method and a filtering technique, the related adjoint processes are characterized as solutions to forward-backward stochastic differential equations in finite-dimensional spaces. Then, our theoretical result is applied to study a partially-observed linear-quadratic optimal control problem for a fully-coupled forward-backward stochastic system and an explicit observable control variable is given.

18.
We consider a nonautonomous optimal control problem on an infinite time horizon with an integral functional containing a positive discounting factor. In the case of a dominating discounting factor, we obtain a variant of the Pontryagin maximum principle that contains explicit expressions for the adjoint variable and the Hamiltonian of the problem.

19.
In this work, we consider standard optimal control problems for a class of neutral functional differential equations in Banach spaces. As the basis of a systematic theory of neutral models, the fundamental solution is constructed and a variation-of-constants formula for mild solutions is established. We introduce a class of neutral resolvents and show that the Laplace transform of the fundamental solution is its neutral resolvent operator. Necessary conditions, in terms of the solutions of neutral adjoint systems, are established for the fixed-time integral convex cost problem of optimality. Based on the optimality conditions, the maximum principle for a time-varying control domain is presented. Finally, the time optimal control problem to a target set is investigated.

20.
This paper deals with optimal control problems described by higher-index DAEs. We introduce a class of these problems which can be transformed to index-one control problems. For this class of higher-index DAEs, we derive first-order approximations and adjoint equations for the functionals defining the problem. These adjoint equations are then used to state, in the accompanying paper, the necessary optimality conditions in the form of a weak maximum principle. The constructive way used to prove these optimality conditions leads to globally convergent algorithms for control problems with state constraints and defined by higher-index DAEs.
