Similar documents
20 similar documents found.
1.
The purpose of this paper is to study the problem of asymptotic stabilization in probability of nonlinear stochastic differential systems with unknown parameters. To this end, we introduce the concept of an adaptive control Lyapunov function for stochastic systems and use the stochastic version of Artstein's theorem to design an adaptive stabilizer. In this framework the problem of adaptive stabilization of a nonlinear stochastic system reduces to the problem of asymptotic stabilization in probability of a modified system. The design of an adaptive control Lyapunov function is illustrated by the example of stochastic differential systems that are adaptively quadratically stabilizable in probability. Accepted 9 December 1996.
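
As a hedged illustration of the central notion (our own generic formulation, not the paper's exact definitions), consider a controlled stochastic system dx = (f(x) + g(x)u) dt + h(x) dW_t with infinitesimal generator
\[
\mathcal{L}_u V(x) = \nabla V(x)^{\top}\bigl(f(x) + g(x)u\bigr) + \tfrac{1}{2}\,\operatorname{tr}\!\bigl(h(x)^{\top}\nabla^2 V(x)\,h(x)\bigr).
\]
A smooth, positive definite V is then a (stochastic) control Lyapunov function if \inf_u \mathcal{L}_u V(x) < 0 for all x \neq 0; in the adaptive setting the unknown parameters are augmented with update laws so that an analogous condition holds for the extended system.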

2.
Abstract. This work is concerned with Carleman inequalities and controllability properties for a stochastic linear heat equation with Dirichlet boundary conditions in a bounded domain D ⊂ R^d and multiplicative noise, and for the corresponding backward dual equation. We prove the null controllability of the backward equation and obtain partial results for the controllability of the forward equation.
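
The displayed equations are missing from this record; a representative forward equation of the type described, together with its backward dual, written here only as an assumed illustration, is
\[
dy = \bigl(\Delta y + \mathbf{1}_{D_0}\,u\bigr)\,dt + a\,y\,dW_t \ \text{in } D\times(0,T), \qquad y = 0 \ \text{on } \partial D, \qquad y(0)=y_0,
\]
\[
dp = -\bigl(\Delta p + a\,q\bigr)\,dt + q\,dW_t \ \text{in } D\times(0,T), \qquad p = 0 \ \text{on } \partial D, \qquad p(T)=p_T,
\]
where D_0 ⊂ D is a control region, u the control, and a the multiplicative-noise coefficient.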

3.
The purpose of this paper is to study the identification problem for a spatially varying discontinuous parameter in stochastic diffusion equations. The consistency of the maximum likelihood estimate (M.L.E.) and an algorithm for generating it are explored under the condition that the unknown parameter lies in a sufficiently regular space with respect to the spatial variables. In order to prove the consistency of the M.L.E. for a discontinuous diffusion coefficient, we use the method of sieves: first the admissible class of unknown parameters is projected onto a finite-dimensional space, and then the convergence of the resulting finite-dimensional M.L.E. to the infinite-dimensional M.L.E. is justified under some conditions. An iterative algorithm for generating the M.L.E. is also proposed, together with two numerical examples. Accepted 2 April 1996.
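
A compressed way to state the sieve idea (our paraphrase, with notation chosen here for illustration): given a log-likelihood \ell_T over an infinite-dimensional parameter set \Theta, fix an increasing family of finite-dimensional subsets \Theta_1 \subset \Theta_2 \subset \cdots whose union is dense in \Theta, compute
\[
\hat{\theta}_n = \arg\max_{\theta \in \Theta_n} \ell_T(\theta),
\]
and then show that \hat{\theta}_n converges to the infinite-dimensional maximizer as n \to \infty.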

4.
In this paper a linearly perturbed version of the well-known matrix Riccati equations that arise in certain stochastic optimal control problems is studied. Via the concepts of mean square stabilizability and mean square detectability we improve previous results on both the convergence properties of the linearly perturbed Riccati differential equation and the solutions of the linearly perturbed algebraic Riccati equation. Furthermore, our approach unifies, to some extent, the study of this class of Riccati equations with that of the classical theory, by eliminating a certain inconvenient assumption used in previous works (e.g., [10] and [26]). The results are derived under relatively weaker assumptions and include, inter alia, the following: (a) an extension of Theorem 4.1 of [26] to handle systems not necessarily observable; (b) the existence of a strong solution, subject only to the mean square stabilizability assumption; (c) conditions for the existence and uniqueness of stabilizing solutions for systems not necessarily detectable; (d) conditions for the existence and uniqueness of mean square stabilizing solutions rather than merely stabilizing ones; (e) relaxed assumptions for convergence of the solution of the linearly perturbed Riccati differential equation, and new convergence results for systems not necessarily observable. Accepted 30 July 1996.
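
For orientation, a generic linearly perturbed Riccati differential equation of the kind alluded to here (an assumed illustrative form, not the paper's exact operator) reads
\[
\dot{P}(t) = A^{\top}P(t) + P(t)A + \Pi\bigl(P(t)\bigr) + Q - P(t)BR^{-1}B^{\top}P(t),
\]
where \Pi(\cdot) is a positive linear map collecting the perturbation induced by the multiplicative noise; the associated algebraic equation is obtained by setting \dot{P} = 0.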

5.
Asset Pricing with Stochastic Volatility
In this paper we study the asset pricing problem when the volatility is random. First, we derive a PDE for the risk-minimizing price of any contingent claim. Secondly, we assume that the volatility process σ_t is observed through an observation process Y_t subject to random error. A price formula and a PDE are then derived, regarding the stock price S_t and the observation process Y_t as parameters. Finally, we assume that S_t itself is observed. In this case the market is complete and any contingent claim is priced by an arbitrage argument instead of by risk minimization. Accepted 15 August 2000. Online publication 8 December 2000.
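
A minimal model of the setting described above (our own hedged sketch, with dynamics chosen purely for illustration) is
\[
dS_t = \mu S_t\,dt + \sigma_t S_t\,dW_t, \qquad dY_t = \sigma_t\,dt + \varepsilon\,dB_t,
\]
where the volatility \sigma_t follows its own stochastic dynamics, W and B are Brownian motions, and Y_t is the noisy observation of the volatility; the pricing PDE is then written in the state variables (S_t, Y_t).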

6.
The Bellman equation of the risk-sensitive control problem with full observation is considered. It appears as an example of a quasi-linear parabolic equation in the whole space, and fairly general growth assumptions with respect to the space variable x are permitted. The stochastic control problem is then solved by making use of the analytic results. The large-deviation limit of small noise is then treated, and the limit corresponds to a differential game. Accepted 25 March 1996.
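
For context, the risk-sensitive criterion underlying such Bellman equations is usually of the form (a standard formulation, stated here as background rather than the paper's exact setup)
\[
J_\theta(u) = \frac{1}{\theta}\,\log \mathbb{E}\Bigl[\exp\Bigl(\theta\int_0^T \ell(x_t,u_t)\,dt + \theta\,\Phi(x_T)\Bigr)\Bigr],
\]
and in the small-noise limit the problem is connected with a deterministic differential game, as recalled in the abstract.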

7.
In [4] and [6] the authors presented a numerical method for the solution of complex minimax problems which implicitly solves discretized versions of the equivalent semi-infinite programming problem on increasingly fine grids. While that method only requires the most violated constraint at the current iterate over a finite subset of the infinitely many constraints of the problem, we consider here a related and more direct approach (applicable to general convex semi-infinite programming problems) which makes use of the globally most violated constraint. Numerical examples with up to 500 unknowns, partly originating from digital filter design problems, are discussed.
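
A minimal sketch of the kind of constraint-exchange scheme described above, applied to a toy convex semi-infinite program (the objective, constraint family, and tolerances below are illustrative assumptions, not the authors' algorithm or test problems):

import numpy as np
from scipy.optimize import minimize, minimize_scalar

def f(x):                        # convex objective (illustrative)
    return 0.5 * np.sum(x**2) - 2.0 * x[0] - x[1]

def g(x, t):                     # constraint family: g(x, t) <= 0 for all t in [0, 1]
    return x[0] * np.cos(np.pi * t) + x[1] * np.sin(np.pi * t) - 1.0

def most_violated(x):
    # globally most violated constraint over the index set [0, 1]
    res = minimize_scalar(lambda t: -g(x, t), bounds=(0.0, 1.0), method="bounded")
    return res.x, g(x, res.x)

def exchange_method(x0, tol=1e-8, max_outer=50):
    x, grid = np.asarray(x0, dtype=float), []
    for _ in range(max_outer):
        cons = [{"type": "ineq", "fun": (lambda x, t=t: -g(x, t))} for t in grid]
        x = minimize(f, x, constraints=cons, method="SLSQP").x
        t_star, viol = most_violated(x)
        if viol <= tol:            # all infinitely many constraints satisfied up to tol
            return x, grid
        grid.append(t_star)        # add the globally most violated index and re-solve
    return x, grid

x_opt, active_grid = exchange_method([0.0, 0.0])
print(x_opt, active_grid)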

8.
Abstract. This paper deals with an extension of Merton's optimal investment problem to a multidimensional model with stochastic volatility and portfolio constraints. The classical dynamic programming approach leads to a characterization of the value function as a viscosity solution of the highly nonlinear associated Bellman equation. A logarithmic transformation expresses the value function in terms of the solution to a semilinear parabolic equation with quadratic growth on the derivative term. Using a stochastic control representation and some approximations, we prove the existence of a smooth solution to this semilinear equation. An optimal portfolio is shown to exist, and is expressed in terms of the classical solution to this semilinear equation. This reduction is useful for studying numerical schemes for both the value function and the optimal portfolio. We illustrate our results with several examples of stochastic volatility models popular in the financial literature.

9.
Abstract. This paper deals with an extension of Merton's optimal investment problem to a multidimensional model with stochastic volatility and portfolio constraints. The classical dynamic programming approach leads to a characterization of the value function as a viscosity solution of the highly nonlinear associated Bellman equation. A logarithmic transformation expresses the value function in terms of the solution to a semilinear parabolic equation with quadratic growth on the derivative term. Using a stochastic control representation and some approximations, we prove the existence of a smooth solution to this semilinear equation. An optimal portfolio is shown to exist, and is expressed in terms of the classical solution to this semilinear equation. This reduction is useful for studying numerical schemes for both the value function and the optimal portfolio. We illustrate our results with several examples of stochastic volatility models popular in the financial literature.

10.
Stochastic 2-D Navier-Stokes Equation
Abstract. In this paper we prove the existence and uniqueness of strong solutions for the stochastic Navier-Stokes equation in bounded and unbounded domains. These solutions are stochastic analogs of the classical Lions-Prodi solutions to the deterministic Navier-Stokes equation. Local monotonicity of the nonlinearity is exploited to obtain the solutions in a given probability space, and this significantly improves the earlier techniques for obtaining strong solutions, which depended on pathwise solutions to the Navier-Stokes martingale problem where the probability space is also obtained as a part of the solution.
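
In its generic form (written here as background; the paper's precise noise structure and function spaces are not reproduced), the stochastic 2-D Navier-Stokes system on a domain D reads
\[
du = \bigl(\nu\,\Delta u - (u\cdot\nabla)u - \nabla p\bigr)\,dt + \sigma(t,u)\,dW_t, \qquad \nabla\cdot u = 0 \ \text{in } D, \qquad u = 0 \ \text{on } \partial D,
\]
with a (possibly cylindrical) Wiener process W and a noise coefficient \sigma satisfying suitable growth and Lipschitz-type conditions.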

11.
Using nonlinear programming theory in Banach spaces we derive a version of Pontryagin's maximum principle that can be applied to distributed parameter systems under control and state constraints. The results are applied to fluid mechanics and combustion problems. Accepted 3 December 1996.

12.
This paper studies the two-dimensional singular stochastic control problem over an infinite time interval that arises when the Central Bank tries to contain inflation by acting on the nominal interest rate. It is shown that this problem admits a variational formulation which can be differentiated (in some sense) to lead to a stochastic differential game with stopping times between the conservative and the expansionist tendencies of the Bank. Substantial regularity of the free boundary associated with the differential game is obtained. Existence of an optimal policy is established when the regularity of the free boundary is strengthened slightly, and it is shown that the optimal process is a diffusion reflected at the boundary. Accepted 22 May 1998.

13.
We consider a general model of singular stochastic control with infinite time horizon and prove a "verification theorem" under the assumption that the Hamilton-Jacobi-Bellman (HJB) equation has a C^2 solution. In the one-dimensional case, under the assumption that the HJB equation has a solution in W^{2,p}_{loc}(R) for suitable p, we prove a very general "verification theorem" by employing the generalized Meyer-Ito change-of-variables formula with local times. We then consider two special cases which we solve explicitly: the formal equivalent of the one-dimensional infinite-time-horizon LQG problem, and a simple example with radial symmetry in an arbitrary Euclidean space. The value function of each of these problems is C^2 and is expressed in terms of special functions, in particular the confluent hypergeometric function and the modified Bessel function of the first kind, respectively. Accepted 21 February 1997.
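
The Meyer-Ito formula invoked here is, in its standard form (quoted as background, not from the paper): for a function F that is the difference of two convex functions and a continuous semimartingale X,
\[
F(X_t) = F(X_0) + \int_0^t F'_-(X_s)\,dX_s + \tfrac{1}{2}\int_{\mathbb{R}} L_t^a\,\mu(da),
\]
where F'_- is the left derivative, \mu is the second-derivative measure of F, and L_t^a is the local time of X at level a; this is what allows verification arguments for value functions that are only W^{2,p}_{loc}.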

14.
The solvability of forward-backward stochastic differential equations (FBSDEs for short) has been studied extensively in recent years. To guarantee the existence and uniqueness of adapted solutions, many different conditions, some quite restrictive, have been imposed. In this paper we propose a new notion, the approximate solvability of FBSDEs, based on the method of optimal control introduced in our earlier work [15]. The approximate solvability of a class of FBSDEs is shown under mild conditions, and a general scheme for constructing approximate adapted solutions is proposed. Accepted 17 April 2001. Online publication 14 August 2001.
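
For reference, a generic FBSDE on [0, T] couples a forward and a backward equation (standard form, included here as background):
\[
X_t = x + \int_0^t b(s,X_s,Y_s,Z_s)\,ds + \int_0^t \sigma(s,X_s,Y_s,Z_s)\,dW_s,
\]
\[
Y_t = g(X_T) + \int_t^T h(s,X_s,Y_s,Z_s)\,ds - \int_t^T Z_s\,dW_s,
\]
and an adapted solution is a triple (X, Y, Z) of processes adapted to the filtration of W satisfying both equations.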

15.
We consider the long-time behavior of an infinite-dimensional stochastic evolution equation driven by a cylindrical Wiener process. New estimates on the disturbance operator related to the problem are proved using a "variation of constants"-type formula. Such estimates, under the natural assumption of mean-square stability for the linear part of the equation, lead directly to sufficient conditions for the exponential stability of the problem. In the last part of the paper we prove that, under suitable conditions, the equation admits a unique invariant measure that is strongly mixing. To complete the paper, we present examples of interesting situations where our construction applies. Accepted 28 February 2001. Online publication 9 August 2001.
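
The variation-of-constants (mild solution) formula referred to above takes, in the standard semilinear setting (stated here generically), the form
\[
X(t) = S(t)x + \int_0^t S(t-s)\,F(X(s))\,ds + \int_0^t S(t-s)\,B(X(s))\,dW(s),
\]
where S(t) is the semigroup generated by the linear part, F the nonlinear drift, and B the noise coefficient; stability estimates for X are obtained by estimating each term on the right-hand side.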

16.
This paper studies the optimal control problem for point processes with Gaussian white-noise observations. A general maximum principle is proved for the partially observed optimal control of point processes, without using the associated filtering equation. Adjoint flows, the adjoint processes of the stochastic flows of the optimal system, are introduced, and their relations are established. Adjoint vector fields, which are observation-predictable, are introduced as the solutions of associated backward stochastic integral-partial differential equations driven by the observation process. In a heuristic way, their relations are explained, and the adjoint processes are expressed in terms of the adjoint vector fields, their gradients, and Hessians, along the optimal state process. In this way the adjoint processes are naturally connected to the adjoint equation of the associated filtering equation. This shows that the conditional expectation in the maximum condition is computable through filtering the optimal state, as usually expected. Some variants of the partially observed stochastic maximum principle are derived, and the corresponding maximum conditions are quite different from their counterpart in the diffusion case. Finally, as an example, a quadratic optimal control problem with a free Poisson process and a Gaussian white-noise observation is explicitly solved using the partially observed maximum principle. Accepted 8 August 2001. Online publication 17 December 2001.

17.
The purpose of this paper is to study, under weak conditions of stabilizability and detectability, the asymptotic behavior of the matrix Riccati equation which arises in stochastic control and filtering with random stationary coefficients. We prove the existence of a stationary solution of this Riccati equation. This solution is attracting, in the sense that if P_t is another solution, then the difference between P_t and the stationary solution converges to 0 exponentially fast as t tends to +∞, at a rate given by the smallest positive Lyapunov exponent of the associated Hamiltonian matrices. Accepted 13 January 1998.

18.
The polar decomposition, a well-known algorithm for decomposing real matrices as the product of a positive semidefinite matrix and an orthogonal matrix, is intimately related to involutive automorphisms of Lie groups and the subspace decomposition they induce. Such generalized polar decompositions, depending on the choice of the involutive automorphism σ, always exist near the identity, although frequently they can be extended to larger portions of the underlying group. In this paper, first of all, we provide an alternative proof of the local existence and uniqueness result for the generalized polar decomposition. What is new in our approach is that we derive differential equations obeyed by the two factors and solve them analytically, thereby providing explicit Lie-algebra recurrence relations for the coefficients of the series expansion. Second, we discuss additional properties of the two factors. In particular, when σ is a Cartan involution, we prove that the subgroup factor obeys optimality properties similar to those of the orthogonal polar factor in the classical matrix setting, both locally and globally, under suitable assumptions on the Lie group G. September 12, 2000. Final version received: April 16, 2001.
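
A short illustration of the classical matrix case mentioned above (a hedged sketch using the SVD; the paper's Lie-group construction and recurrence relations are not reproduced here):

import numpy as np

def polar_decomposition(A):
    # Classical polar decomposition A = U @ P with U orthogonal and
    # P = (A^T A)^{1/2} symmetric positive semidefinite, computed via the SVD.
    W, s, Vt = np.linalg.svd(A)          # A = W @ diag(s) @ Vt
    U = W @ Vt                           # orthogonal factor
    P = Vt.T @ np.diag(s) @ Vt           # positive semidefinite factor
    return U, P

A = np.random.default_rng(0).standard_normal((4, 4))
U, P = polar_decomposition(A)
print(np.allclose(U @ P, A), np.allclose(U.T @ U, np.eye(4)))

The orthogonal factor U is also the closest orthogonal matrix to A in the Frobenius norm, which is the classical optimality property that the abstract extends to the subgroup factor when σ is a Cartan involution.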

19.
Our purpose is to study an ergodic linear equation associated with diffusion processes with jumps in the whole space. This integro-differential equation plays a fundamental role in ergodic control problems for second-order Markov processes. The key result is the existence and uniqueness of an invariant density function for a jump diffusion whose lower-order coefficients are only Borel measurable. Based on this invariant probability, existence and uniqueness (up to an additive constant) of solutions to the ergodic linear equation are established. Accepted 24 February 1998.
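
As background (a generic form, not the paper's exact assumptions), the integro-differential operator of a jump diffusion acts on smooth functions as
\[
\mathcal{L}\varphi(x) = \sum_{i,j} a_{ij}(x)\,\partial^2_{ij}\varphi(x) + \sum_i b_i(x)\,\partial_i\varphi(x) + \int_{\mathbb{R}^d\setminus\{0\}} \bigl[\varphi(x+z) - \varphi(x) - \mathbf{1}_{\{|z|\le 1\}}\,z\cdot\nabla\varphi(x)\bigr]\,\nu(x,dz),
\]
and the ergodic linear equation asks for a pair (u, \lambda) with \mathcal{L}u + f = \lambda, where \lambda turns out to be the average of f against the invariant probability and u is unique up to an additive constant.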

20.
R. Jordan, D. Kinderlehrer, and F. Otto proposed a discrete-time variational approximation of the Fokker-Planck equation, determined by the Wasserstein metric, an energy functional, and the Gibbs-Boltzmann entropy functional. In this paper we study the asymptotic behavior of the dynamical systems which describe their approximation of the Fokker-Planck equation and characterize the limit as a solution to a class of variational problems. Accepted 2 June 2000. Online publication 6 October 2000.
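
The Jordan-Kinderlehrer-Otto scheme referred to here is, in its standard form (recalled as background), the iteration
\[
\rho_{k+1} \in \arg\min_{\rho} \Bigl\{ \frac{1}{2h}\,W_2^2(\rho,\rho_k) + \int \Psi(x)\,\rho(x)\,dx + \beta^{-1}\!\int \rho(x)\log\rho(x)\,dx \Bigr\}
\]
over probability densities \rho, where W_2 is the Wasserstein distance, \Psi the potential of the energy functional, and h the time step; as h \to 0 the piecewise-constant interpolation of the iterates converges to the solution of the Fokker-Planck equation \partial_t\rho = \operatorname{div}(\rho\nabla\Psi) + \beta^{-1}\Delta\rho.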
