Similar Articles (20 results found)
1.

We consider a forward-backward system of stochastic evolution equations in a Hilbert space. Under nondegeneracy assumptions on the diffusion coefficient (which may be nonconstant) we prove an analogue of the well-known Bismut-Elworthy formula. Next, we consider a nonlinear version of the Kolmogorov equation, i.e. a deterministic quasilinear equation associated to the system according to Pardoux, E. and Peng, S. (1992). "Backward stochastic differential equations and quasilinear parabolic partial differential equations". In: Rozovskii, B.L., Sowers, R.B. (Eds.), Stochastic Partial Differential Equations and Their Applications, Lecture Notes in Control Inf. Sci., Vol. 176, pp. 200-217. Springer: Berlin. The Bismut-Elworthy formula is applied to prove a smoothing effect, i.e. to prove existence and uniqueness of a solution which is differentiable with respect to the space variable, even if the initial datum and (some) coefficients of the equation are not. The results are then applied to the Hamilton-Jacobi-Bellman equation of stochastic optimal control. In this way we are able to characterize optimal controls by feedback laws for a class of infinite-dimensional control systems, including in particular the stochastic heat equation with a state-dependent diffusion coefficient.

2.
In this paper we study a general multidimensional diffusion-type stochastic control problem. Our model contains the usual regular control problem, singular control problem and impulse control problem as special cases. Using a unified treatment of dynamic programming, we show that the value function of the problem is a viscosity solution of a certain Hamilton-Jacobi-Bellman (HJB) quasi-variational inequality. The uniqueness of the solution of such a quasi-variational inequality is proved. Supported in part by USA Office of Naval Research grant #N00014-96-1-0262. Supported in part by the NSFC Grant #79790130, the National Distinguished Youth Science Foundation of China Grant #19725106 and the Chinese Education Ministry Science Foundation.

3.
Dynamic programming techniques have proven to be more successful than alternative nonlinear programming algorithms for solving many discrete-time optimal control problems. The reason is that, because of the stagewise decomposition which characterizes dynamic programming, the computational burden grows approximately linearly with the number n of decision times, whereas the burden for other methods tends to grow faster (e.g., n^3 for Newton's method). The idea motivating the present study is that the advantages of dynamic programming can be brought to bear on classical nonlinear programming problems if only they can somehow be rephrased as optimal control problems. As shown herein, it is indeed the case that many prominent problems in the nonlinear programming literature can be viewed as optimal control problems, and for these problems, modern dynamic programming methodology is competitive with respect to processing time. The mechanism behind this success is that such methodology achieves quadratic convergence without requiring the solution of large systems of linear equations.
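The stagewise decomposition described above can be made concrete with a minimal backward-recursion sketch. The toy integrator dynamics, state grid, and quadratic cost below are illustrative assumptions, not taken from the paper:

```python
def solve_dp(n_stages, states, controls, step, stage_cost, terminal_cost):
    """One backward sweep per stage, so work grows linearly in n_stages."""
    V = {x: terminal_cost(x) for x in states}   # cost-to-go at the horizon
    policy = []
    for _ in range(n_stages):
        V_new, pi = {}, {}
        for x in states:
            best_cost, best_u = None, None
            for u in controls:
                x_next = step(x, u)
                if x_next not in V:
                    continue  # transition would leave the grid
                c = stage_cost(x, u) + V[x_next]
                if best_cost is None or c < best_cost:
                    best_cost, best_u = c, u
            V_new[x], pi[x] = best_cost, best_u
        V = V_new
        policy.insert(0, pi)          # decision rule for the earliest stage first
    return V, policy

# Toy problem: drive an integer state toward 0 with unit moves,
# paying x^2 + u^2 per stage and x^2 at the horizon.
states = list(range(-5, 6))
V, policy = solve_dp(
    n_stages=10,
    states=states,
    controls=(-1, 0, 1),
    step=lambda x, u: x + u,
    stage_cost=lambda x, u: x * x + u * u,
    terminal_cost=lambda x: x * x,
)
```

Each stage costs one sweep over the state-control grid, which is exactly the linear-in-n behavior the abstract contrasts with Newton-type methods.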

4.
《Optimization》2012,61(3-4):205-232
Various optimal control problems for linear parabolic systems with multiple constant time delays are considered. Necessary and sufficient conditions of optimality are derived for the Neumann problem. The optimal control is obtained in feedback form. Making use of results of Schwartz, the representation of the optimal feedback control is given. A simple example of application is also provided.

5.
For a controlled stochastic dynamic system with a set-valued drift coefficient and a terminal cost functional, we derive a necessary extremality condition in the form of a minimum principle.

6.
Decomposition has proved to be one of the more effective tools for the solution of large-scale problems, especially those arising in stochastic programming. A decomposition method with wide applicability is Benders' decomposition, which has been applied to both stochastic programming as well as integer programming problems. However, this method of decomposition relies on convexity of the value function of linear programming subproblems. This paper is devoted to a class of problems in which the second-stage subproblem(s) may impose integer restrictions on some variables. The value function of such integer subproblem(s) is not convex, and new approaches must be designed. In this paper, we discuss alternative decomposition methods in which the second-stage integer subproblems are solved using branch-and-cut methods. One of the main advantages of our decomposition scheme is that Stochastic Mixed-Integer Programming (SMIP) problems can be solved by dividing a large problem into smaller MIP subproblems that can be solved in parallel. This paper lays the foundation for such decomposition methods for two-stage stochastic mixed-integer programs.
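The scenario-separability such schemes exploit can be sketched minimally: given the first-stage decision, the second-stage integer subproblems below are independent, so they could be solved in parallel. Direct enumeration stands in for the branch-and-cut solves, and all problem data (costs, demands, probabilities) are assumed for illustration:

```python
def second_stage(x, demand):
    """Integer recourse: choose integer y >= 0 so that x + y covers demand.
    Solved by direct enumeration here, a stand-in for branch-and-cut."""
    best = None
    for y in range(demand + 1):
        if x + y >= demand:          # feasibility: cover the demand
            cost = 3 * y             # unit recourse cost (assumed)
            best = cost if best is None else min(best, cost)
    return best

def expected_cost(x, scenarios):
    # The scenario subproblems are independent: a parallel map would do.
    return 2 * x + sum(p * second_stage(x, d) for p, d in scenarios)

scenarios = [(0.5, 4), (0.5, 8)]     # (probability, demand) pairs
best_x = min(range(0, 9), key=lambda x: expected_cost(x, scenarios))
```

In a real two-stage SMIP the outer search over x would be a master problem refined by cuts rather than brute force; the sketch only shows how the second stage decomposes by scenario.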

7.
Stochastic linear programs become extremely large and complex as additional uncertainties and possible future outcomes are included in their formulation. Row and column aggregation can significantly reduce this complexity, but the solutions of the aggregated problem only provide an approximation of the true solution. In this paper, error bounds on the value of the optimal solution of the original problem are obtained from the solution of the aggregated problem. These bounds apply for aggregation of both random variables and time periods.
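One direction of such bounds can be illustrated under an assumption of convex recourse: evaluating the recourse at the aggregated (mean) scenario yields a Jensen lower bound on the true expected cost. The recourse function and numbers below are purely illustrative, not from the paper:

```python
def recourse(x, d):
    # Shortage cost, convex in the random demand d (assumed data).
    return max(0.0, d - x) * 3.0

scenarios = [(0.25, 2.0), (0.5, 5.0), (0.25, 10.0)]   # (probability, demand)
x = 4.0

true_cost = sum(p * recourse(x, d) for p, d in scenarios)
mean_d = sum(p * d for p, d in scenarios)    # aggregate the random variable
aggregated_cost = recourse(x, mean_d)        # Jensen: a lower bound
assert aggregated_cost <= true_cost
```

The gap between the two numbers is exactly the kind of aggregation error the paper's bounds control.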

8.
In this paper we discuss statistical properties and convergence of the Stochastic Dual Dynamic Programming (SDDP) method applied to multistage linear stochastic programming problems. We assume that the underlying data process is stagewise independent and consider the framework where at first a random sample from the original (true) distribution is generated and consequently the SDDP algorithm is applied to the constructed Sample Average Approximation (SAA) problem. Then we proceed to an analysis of the SDDP solutions of the SAA problem and their relations to solutions of the “true” problem. Finally we discuss an extension of the SDDP method to a risk-averse formulation of multistage stochastic programs. We argue that the computational complexity of the corresponding SDDP algorithm is almost the same as in the risk-neutral case.
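The SAA step can be sketched as follows: replace the true distribution by an empirical sample, then optimize the sample-average objective (SDDP itself would then operate on the sampled scenario tree). The newsvendor-style objective and all parameters are illustrative assumptions:

```python
import random

random.seed(0)
# SAA: an i.i.d. sample stands in for the true demand distribution.
sample = [random.uniform(0.0, 10.0) for _ in range(1000)]

def saa_cost(x, sample, c_order=1.0, c_short=3.0):
    """Sample-average of an order cost plus a shortage penalty."""
    return sum(c_order * x + c_short * max(0.0, d - x) for d in sample) / len(sample)

# Optimize the sample-average objective over a candidate grid.
candidates = [i * 0.5 for i in range(21)]
x_saa = min(candidates, key=lambda x: saa_cost(x, sample))
```

For this uniform(0, 10) demand the true optimizer is the 2/3 quantile, about 6.67, and with 1000 samples the SAA solution lands near it; the paper's analysis is about making such "SAA solution vs. true solution" statements precise in the multistage SDDP setting.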

9.
It is shown how a discrete Markov programming problem can be transformed, using a linear program, into an equivalent problem from which the optimal decision rule can be trivially deduced. This transformation is applied to problems which have either transient probabilities or discounted costs. This research was supported by the National Research Council of Canada, Grant A7751.
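For a flavor of the discounted case, here is a small discounted Markov decision problem solved by value iteration rather than the paper's linear-programming transformation; both reach the same fixed point, and the optimal decision rule is read off greedily at the end. All data are illustrative assumptions:

```python
def value_iteration(costs, trans, beta=0.9, tol=1e-10):
    """costs[s][a]: immediate cost; trans[s][a]: list of (next_state, prob)."""
    n = len(costs)
    V = [0.0] * n
    while True:
        V_new = [
            min(costs[s][a] + beta * sum(p * V[t] for t, p in trans[s][a])
                for a in range(len(costs[s])))
            for s in range(n)
        ]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            break
        V = V_new
    # The optimal decision rule is trivially deduced from the fixed point.
    policy = [
        min(range(len(costs[s])),
            key=lambda a: costs[s][a] + beta * sum(p * V[t] for t, p in trans[s][a]))
        for s in range(n)
    ]
    return V, policy

# Two states, two actions each: action 0 stays put, action 1 switches state.
costs = [[1.0, 2.0], [4.0, 0.5]]
trans = [
    [[(0, 1.0)], [(1, 1.0)]],   # state 0: stay / switch
    [[(1, 1.0)], [(0, 1.0)]],   # state 1: stay / switch
]
V, policy = value_iteration(costs, trans)
```

The paper's LP route would instead minimize the sum of the value variables subject to the Bellman inequalities; the greedy read-off of the policy at the end is the "trivially deduced" decision rule.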

10.
《Optimization》2012,61(4):343-354
In this paper we treat discrete-time stochastic control systems. Using corresponding results for systems that are linear with respect to the state variables, we derive, under convexity assumptions, optimality conditions in the form of maximum principles.

11.
《Optimization》2012,61(3-4):267-285
This paper provides a set of stochastic multistage programs where the evolution of uncertain factors is given by stochastic processes. We treat a practical problem statement within the field of managing fixed-income securities. Detailed information on the parameter values used in various interest rate models is given. Barycentric approximation is applied to obtain computational results; different measures of the achieved goodness of approximation are indicated.

12.
This paper deals with the mean-square asymptotic stability of stochastic Markovian jump systems with time-varying delay. Based on a new stochastic inequality and convex analysis property, some novel stability conditions are presented. In the derivation, the information of the time-varying delay is retained and the estimation of it by the worst-case enlargement is not involved. Some special cases of the systems under consideration are also investigated. Illustrative examples are given to show the effectiveness of the proposed approach.

13.
In this paper we consider some stochastic bottleneck linear programming problems. We review the solution methods in the literature. In the case when the coefficients of the objective functions are simple randomized, the minimum-risk approach is used for solving these problems. We prove that, under some positivity conditions, these stochastic problems reduce to certain deterministic bottleneck linear problems. An application of these problems to bottleneck spanning tree problems is given. Two simple numerical examples are presented. This paper was written when I.M. Stancu-Minasian was visiting the Instituto Complutense de Análisis Económico, in the Universidad Complutense de Madrid, from October 1, 1997 to November 15, 1997 and from October 24, 1998 to November 9, 1998, as an invited researcher. He is grateful to the Institution.

14.
We show that an undiscounted stochastic game possesses optimal stationary strategies if and only if a global minimum with objective value zero can be found for an appropriate nonlinear program with linear constraints. This nonlinear program arises as a method for solving a certain bilinear system, satisfaction of which is also equivalent to finding a stationary optimal solution for the game. The objective function of the program is a nonnegatively valued quadratic polynomial. This research was supported in part by the National Science Foundation under the grant #ECS-8503440. We wish to thank the referee for many helpful comments and for streamlining the presentation.

15.
In this paper, stability of the optimal solution of stochastic programs with recourse with respect to parameters of the given distribution of random coefficients is studied. Provided that the set of admissible solutions is defined by equality constraints only, asymptotic normality of the optimal solution follows by standard methods. If nonnegativity constraints are taken into account, the problem is solved under the assumption of strict complementarity known from the theory of nonlinear programming (Theorem 1). The general results are applied to the simple recourse problem with random right-hand sides under various assumptions on the underlying distribution (Theorems 2–4).

16.
《Optimization》2012,61(3-4):303-317
A star-shaped approximation of the probability function is suggested. Conditions for log-concavity and differentiability of the approximating function are obtained. A method for constructing stochastic estimates of the gradient of the approximating function and a stochastic quasi-gradient algorithm for probability function maximization are described.

17.
We study an infinite horizon optimal control problem for a system with two state variables. One of them has the evolution governed by a controlled ordinary differential equation and the other one is related to the latter by a hysteresis relation, represented here by either a play operator or a Prandtl-Ishlinskii operator. By dynamic programming, we derive the corresponding (discontinuous) first order Hamilton-Jacobi equation, which in the first case is of finite dimension and in the second case is of infinite dimension. In both cases we prove that the value function is the only bounded uniformly continuous viscosity solution of the equation.

18.
Traditional approaches to solving stochastic optimal control problems involve dynamic programming, and solving certain optimality equations. When recast as stochastic programming problems, structural aspects such as convexity are retained, and numerical solution procedures based on decomposition and duality may be exploited. This paper explores a class of stationary, infinite-horizon stochastic optimization problems with discounted cost criterion. Constraints on both states and controls are permitted, and modeled in the objective function by allowing it to take infinite values. Approximating techniques are developed using variational analysis, and intuitive lower bounds are obtained via averaging the future. These bounds could be used in a finite-time horizon stochastic programming setting to find solutions numerically. Research supported in part by a grant of the National Science Foundation. AMS Classification 46N10, 49N15, 65K10, 90C15, 90C46

19.

In this paper, we are concerned with optimal control problems where the system is driven by a stochastic differential equation of the Ito type. We study the relaxed model for which an optimal solution exists. This is an extension of the initial control problem, where admissible controls are measure valued processes. Using Ekeland's variational principle and some stability properties of the corresponding state equation and adjoint processes, we establish necessary conditions for optimality satisfied by an optimal relaxed control. This is the first version of the stochastic maximum principle that covers relaxed controls.

20.
It is a fact that feedback delay actually arises in digital control systems. It is therefore necessary to modify the structure of digital control systems and to develop new control algorithms, which is done in this paper. A large number of digital computer simulation experiments have shown the clear advantage of the new algorithms.
