Similar Articles
 Found 20 similar articles (search time: 343 ms)
1.
We consider risk measurement in controlled partially observable Markov processes in discrete time. We introduce a new concept of conditional stochastic time consistency and derive the structure of risk measures enjoying this property. We prove that they can be represented by a collection of static law-invariant risk measures on the space of functions of the observable part of the state. We also derive the corresponding dynamic programming equations. Finally, we illustrate the results on a machine deterioration problem.

2.

We consider nonlinear multistage stochastic optimization problems in the spaces of integrable functions. We allow for nonlinear dynamics and general objective functionals, including dynamic risk measures. We study causal operators describing the dynamics of the system and derive the Clarke subdifferential for a penalty function involving such operators. Then we introduce the concept of subregular recourse in nonlinear multistage stochastic optimization and establish subregularity of the resulting systems in two formulations: with built-in nonanticipativity and with explicit nonanticipativity constraints. Finally, we derive optimality conditions for both formulations and study their relations.


3.
In this paper we consider the adjustable robust approach to multistage optimization, for which we derive dynamic programming equations. We also discuss this from the point of view of risk-averse stochastic programming. As an example, we consider a robust formulation of the classical inventory model and show that, as in the risk-neutral case, a base-stock policy is optimal.
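As a rough illustration of the base-stock (order-up-to) policy this abstract refers to, here is a minimal risk-neutral simulation sketch; the holding/backlog costs, the uniform demand distribution, and the zero lead time are all illustrative assumptions, not taken from the paper:

```python
import random

def basestock_policy(inventory, S):
    """Base-stock (order-up-to) policy: order whatever raises the
    inventory position to the target level S."""
    return max(0.0, S - inventory)

def average_cost(S, demands, h=1.0, b=4.0):
    """Average holding/backlog cost of the order-up-to-S policy along a
    demand path (zero lead time, unmet demand is backlogged)."""
    inv, total = 0.0, 0.0
    for d in demands:
        inv += basestock_policy(inv, S)   # position raised to S
        inv -= d                          # demand realized
        total += h * max(inv, 0.0) + b * max(-inv, 0.0)
    return total / len(demands)

random.seed(0)
demands = [random.uniform(0.0, 10.0) for _ in range(10_000)]
# With h = 1, b = 4 and Uniform(0, 10) demand, the cost-minimizing
# base-stock level is near the b/(b+h) = 0.8 demand quantile, i.e. S = 8.
costs = {S: average_cost(S, demands) for S in range(11)}
best = min(costs, key=costs.get)
print(best)
```

The grid search recovers the newsvendor-style critical quantile; under risk aversion or robustness the optimal level shifts, but the base-stock structure is what the paper shows persists.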

4.
We introduce a new preference relation in the space of random variables, which we call robust stochastic dominance. We consider stochastic optimization problems where risk aversion is expressed by a robust stochastic dominance constraint. These are composite semi-infinite optimization problems with constraints on compositions of measures of risk and utility functions. We develop necessary and sufficient conditions of optimality for such optimization problems in the convex case. In the nonconvex case, we derive necessary conditions of optimality under additional smoothness assumptions on some of the mappings involved in the problem.

5.
Stochastic programming approach to optimization under uncertainty
In this paper we discuss computational complexity and risk-averse approaches to two-stage and multistage stochastic programming problems. We argue that two-stage (say, linear) stochastic programming problems can be solved with reasonable accuracy by Monte Carlo sampling techniques, while there are indications that the complexity of multistage programs grows quickly as the number of stages increases. We discuss an extension of coherent risk measures to a multistage setting and, in particular, dynamic programming equations for such problems.
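The Monte Carlo sampling approach mentioned for two-stage problems can be sketched on a toy instance via sample average approximation (SAA); the linear shortfall recourse, the exponential demand, and the grid search below are illustrative assumptions, not the paper's setting:

```python
import random

def saa_minimizer(c, q, samples, grid):
    """Sample average approximation of the two-stage problem
        min_x  c*x + E[ q * max(D - x, 0) ],
    with the expectation replaced by a Monte Carlo average and the
    first-stage decision x restricted to a finite grid."""
    def saa_objective(x):
        return c * x + q * sum(max(d - x, 0.0) for d in samples) / len(samples)
    return min(grid, key=saa_objective)

random.seed(1)
samples = [random.expovariate(1 / 5) for _ in range(10_000)]  # D ~ Exp(mean 5)
grid = [i / 10 for i in range(151)]                           # x in [0, 15]
# The true optimizer is the (1 - c/q)-quantile of D: 5*ln(4) ≈ 6.93,
# which the sampled approximation should land close to.
x_hat = saa_minimizer(c=1.0, q=4.0, samples=samples, grid=grid)
print(round(x_hat, 1))
```

This is exactly the "reasonable accuracy from sampling" phenomenon for two-stage problems; the abstract's point is that no comparably cheap scheme is known once the number of stages grows.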

6.
In this paper, we study the conditional, non-homogeneous and doubly stochastic compound Poisson process with stochastic discounted claims. We derive the moment generating functions of these risk processes and find their inverses, numerically or analytically, by using their corresponding characteristic functions. We then compare their distributions and some risk measures, such as VaR and TVaR, and examine the case where there is a possible dependence between the occurrence time and the severity of the claim.
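For orientation, the VaR and TVaR comparisons mentioned here can be mimicked by brute-force simulation of a plain compound Poisson loss; this sketch assumes independent exponential severities and no discounting or occurrence/severity dependence, all of which the paper treats more generally:

```python
import math
import random

def poisson(lam):
    """Knuth's method for sampling a Poisson random variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def aggregate_claims(lam=3.0, mean_claim=2.0):
    """One period of a compound Poisson risk process: a Poisson number
    of claims with independent exponential severities."""
    return sum(random.expovariate(1 / mean_claim) for _ in range(poisson(lam)))

def var_tvar(losses, alpha=0.95):
    """Empirical VaR (alpha-quantile) and TVaR (mean loss at or beyond VaR)."""
    s = sorted(losses)
    k = math.ceil(alpha * len(s)) - 1
    tail = s[k:]
    return s[k], sum(tail) / len(tail)

random.seed(2)
losses = [aggregate_claims() for _ in range(20_000)]
var, tvar = var_tvar(losses)
print(var, tvar)
```

TVaR always dominates VaR at the same level, which is the basic ordering the paper's comparisons build on.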

7.
We study an insurance model where the risk can be controlled by reinsurance and investment in the financial market. We consider a finite planning horizon where the timing of the events, namely the arrivals of a claim and the changes of the price of the underlying asset(s), corresponds to a Poisson point process. The objective is the maximization of the expected total utility, and this leads to a nonstandard stochastic control problem with a possibly unbounded number of discrete random time points over the given finite planning horizon. Exploiting the contraction property of an appropriate dynamic programming operator, we obtain a value-iteration-type algorithm to compute the optimal value and strategy and derive its speed of convergence. Following Schäl (2004), we also consider the specific case of exponential utility functions in which negative values of the risk process are penalized, thus combining features of ruin minimization and utility maximization. For this case we are able to derive an explicit solution. Results of numerical computations are also reported.

8.
The classical definition of the action functional, for a dynamical system on curved manifolds, can be extended to the case of diffusion processes. For the stochastic action functional so obtained, we introduce variational principles of the type proposed by Morato. In order to generalize the class of process variations, from the flat case originally given by Morato to general curved manifolds, we introduce the notion of stochastic differential systems. These give a synthetic characterization of the process and its variations as a generalized controlled stochastic process on the tangent bundle of the manifold. The resulting programming equations are equivalent to the quantum Schrödinger equation, where the wave function is coupled to an additional vector potential, satisfying a plasma-like equation with a peculiar dissipative behavior.

9.

We derive equations that determine second moments of a random solution of a system of Itô linear differential equations with coefficients depending on a finite-valued random semi-Markov process. We obtain necessary and sufficient conditions for the asymptotic stability of solutions in the mean square with the use of moment equations and Lyapunov stochastic functions.


10.
Mei, Yu; Chen, Zhiping; Liu, Jia; Ji, Bingbing. Journal of Global Optimization (2022), 83(3): 585–613

We study the multi-stage portfolio selection problem where the utility function of an investor is ambiguous. The ambiguity is characterized by dynamic stochastic dominance constraints, which are able to capture the dynamics of the random return sequence during the investment process. We propose a multi-stage dynamic stochastic dominance constrained portfolio selection model, and use a mixed normal distribution with time-varying weights and the K-means clustering technique to generate a scenario tree for the transformation of the proposed model. Based on the scenario tree representation, we derive two linear programming approximation problems, using the sampling approach or the duality theory, which provide an upper bound approximation and a lower bound approximation for the original nonconvex problem. The upper bound is asymptotically tight with infinitely many samples. Numerical results illustrate the practicality and efficiency of the proposed new model and solution techniques.
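The K-means step used for scenario-tree generation can be sketched in one dimension: sampled returns are clustered, each centroid becomes a node value, and each cluster's relative size becomes its branch probability. The mixture parameters below are illustrative stand-ins for the paper's time-varying mixture model:

```python
import random

def kmeans_1d(data, k, iters=25):
    """Plain Lloyd k-means in one dimension. Returns (centers, probs):
    cluster centroids as scenario-node values and cluster frequencies
    as branch probabilities."""
    centers = sorted(random.sample(data, k))
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda i: (x - centers[i]) ** 2)
            clusters[j].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    probs = [len(c) / len(data) for c in clusters]
    return centers, probs

random.seed(3)
# Two-component normal mixture as a toy return model.
returns = [random.gauss(0.05, 0.02) if random.random() < 0.7
           else random.gauss(-0.03, 0.04) for _ in range(2000)]
nodes, probs = kmeans_1d(returns, k=3)
print(nodes, probs)
```

Repeating this stage by stage, conditionally on each parent node, yields the scenario tree on which the linear programming bounds in the paper are formulated.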


11.
Optimization (2012), 61(5): 649–671
Abstract

We show that many different concepts of robustness and of stochastic programming can be described as special cases of a general non-linear scalarization method by choosing the involved parameters and sets appropriately. This leads to a unifying concept which can be used to handle robust and stochastic optimization problems. Furthermore, we introduce multiple objective (deterministic) counterparts for uncertain optimization problems and discuss their relations to well-known scalar robust optimization problems by using the non-linear scalarization concept. Finally, we mention some relations between robustness and coherent risk measures.

12.
We introduce the concept of a Markov risk measure and we use it to formulate risk-averse control problems for two Markov decision models: a finite horizon model and a discounted infinite horizon model. For both models we derive risk-averse dynamic programming equations and a value iteration method. For the infinite horizon problem we develop a risk-averse policy iteration method and we prove its convergence. We also propose a version of the Newton method to solve a nonsmooth equation arising in the policy iteration method and we prove its global convergence. Finally, we discuss relations to min–max Markov decision models.
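To make the risk-averse value iteration concrete, here is a minimal sketch for a discounted cost MDP. Markov risk measures form a general class; the mean-upper-semideviation measure used below is one standard coherent example, and the two-state MDP data are made up for illustration:

```python
def semideviation_risk(values, probs, kappa=0.5):
    """One-step mean-upper-semideviation risk measure of a cost Z:
    E[Z] + kappa * E[(Z - E[Z])_+], coherent for 0 <= kappa <= 1."""
    mean = sum(p * v for p, v in zip(probs, values))
    dev = sum(p * max(v - mean, 0.0) for p, v in zip(probs, values))
    return mean + kappa * dev

def risk_value_iteration(P, cost, gamma=0.9, tol=1e-10):
    """Risk-averse value iteration: at each state, minimize over actions
    the immediate cost plus the discounted one-step risk of the
    continuation value (a nested, time-consistent formulation).
    The risk measure is nonexpansive, so the operator is a
    gamma-contraction and the iteration converges."""
    n = len(cost)
    v = [0.0] * n
    while True:
        v_new = [min(cost[s][a] + gamma * semideviation_risk(v, P[s][a])
                     for a in range(len(cost[s])))
                 for s in range(n)]
        if max(abs(a - b) for a, b in zip(v, v_new)) < tol:
            return v_new
        v = v_new

# Tiny two-state, two-action example (numbers are illustrative).
P = [[[0.8, 0.2], [0.3, 0.7]],   # P[s][a] = transition probabilities
     [[0.5, 0.5], [0.9, 0.1]]]
cost = [[1.0, 2.0], [0.5, 1.5]]  # cost[s][a]
v = risk_value_iteration(P, cost)
print(v)
```

With kappa = 0 the recursion reduces to ordinary expected-cost value iteration, which is a useful sanity check when experimenting with risk levels.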

13.
We introduce and study a new concept of a weak elliptic equation for measures on infinite dimensional spaces. This concept allows one to consider equations whose coefficients are not globally integrable. By using a suitably extended Lyapunov function technique, we derive a priori estimates for the solutions of such equations and prove new existence results. As an application, we consider stochastic Burgers, reaction-diffusion, and Navier-Stokes equations and investigate the elliptic equations for the corresponding invariant measures. Our general theorems yield a priori estimates and existence results for such elliptic equations. We also obtain moment estimates for Gibbs distributions and prove an existence result applicable to a wide class of models. Received: 23 January 2000 / Revised version: 4 October 2000 / Published online: 5 June 2001

14.
We establish a new type of backward stochastic differential equations (BSDEs) connected with stochastic differential games (SDGs), namely, BSDEs strongly coupled with the lower and the upper value functions of SDGs, where the lower and the upper value functions are defined through this BSDE. An existence and uniqueness theorem and a comparison theorem are proved for such equations with the help of an iteration method. We also show that the lower and the upper value functions satisfy the dynamic programming principle. Moreover, we study the associated Hamilton–Jacobi–Bellman–Isaacs (HJB–Isaacs) equations, which are nonlocal and strongly coupled with the lower and the upper value functions. Using a new method, we characterize the pair (W, U) consisting of the lower and the upper value functions as the unique viscosity solution of our nonlocal HJB–Isaacs equation. Furthermore, the game has a value under Isaacs' condition.

15.
We introduce new classes of stationary spatial processes with asymmetric, sub-Gaussian marginal distributions using the idea of expectiles. We derive theoretical properties of the proposed processes. Moreover, we use the proposed spatial processes to formulate a spatial regression model for point-referenced data where the spatially correlated errors have skewed marginal distribution. We introduce a Bayesian computational procedure for model fitting and inference for this class of spatial regression models. We compare the performance of the proposed method with the traditional Gaussian process-based spatial regression through simulation studies and by applying it to a dataset on air pollution in California.
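For readers unfamiliar with expectiles: the tau-expectile of a distribution is the asymmetric-least-squares analogue of a quantile, and a sample version can be computed by a simple fixed-point iteration on its first-order condition. The data below are made up for illustration:

```python
def expectile(sample, tau, iters=100):
    """tau-expectile of a sample: the unique m solving
        tau * E[(X - m)_+] = (1 - tau) * E[(m - X)_+],
    found by fixed-point iteration on the first-order condition."""
    m = sum(sample) / len(sample)          # start at the mean (tau = 0.5 case)
    for _ in range(iters):
        up = sum(x for x in sample if x > m)
        n_up = sum(1 for x in sample if x > m)
        down = sum(x for x in sample if x <= m)
        n_down = len(sample) - n_up
        # Weighted-average update from the normal equation.
        m = (tau * up + (1 - tau) * down) / (tau * n_up + (1 - tau) * n_down)
    return m

data = [1.0, 2.0, 3.0, 4.0, 10.0]
print(expectile(data, 0.5))   # → 4.0 (the 0.5-expectile is the sample mean)
```

Unlike quantiles, expectiles depend on the magnitude of all observations (note how the outlier 10.0 pulls them), which is what makes them useful for encoding asymmetry in the marginals of the proposed spatial processes.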

16.
Abstract

We study a zero-sum stochastic differential game with multiple modes. The state of the system is governed by "controlled switching" diffusion processes. Under certain conditions, we show that the value functions of this game are unique viscosity solutions of the appropriate Hamilton–Jacobi–Isaacs system of equations. We apply our results to the analysis of a portfolio optimization problem where the investor plays against the market and wishes to maximize his terminal utility. We show that the maximum terminal utility functions are unique viscosity solutions of the corresponding Hamilton–Jacobi–Isaacs system of equations.

17.
We prove the dynamic programming principle for uniformly nondegenerate stochastic differential games in the framework of time-homogeneous diffusion processes considered up to the first exit time from a domain. In contrast with previous results established for constant stopping times, we allow arbitrary stopping times and randomized ones as well. There is no assumption about solvability of the Isaacs equation in any sense (classical or viscosity). The zeroth-order "coefficient" and the "free" term are only assumed to be measurable in the space variable. We also prove that value functions are uniquely determined by the functions defining the corresponding Isaacs equations, and thus stochastic games with the same Isaacs equation have the same value functions.

18.
In this paper we study integral–partial differential equations of Isaacs' type via zero-sum two-player stochastic differential games (SDGs) with jump-diffusion. The results of Fleming and Souganidis (1989) [9] and those of Biswas (2009) [3] are extended: we investigate a controlled stochastic system with a Brownian motion and a Poisson random measure, and with nonlinear cost functionals defined by controlled backward stochastic differential equations (BSDEs). Furthermore, unlike the two papers cited above, the admissible control processes of the two players are allowed to depend on all events from the past. This quite natural generalization permits the players to use earlier information and makes it more convenient to establish the dynamic programming principle (DPP). However, the cost functionals are then no longer deterministic, and hence the upper and the lower value functions become a priori random fields. We use a new method to prove that the upper and the lower value functions are, in fact, deterministic. On the other hand, thanks to BSDE methods (Peng, 1997) [18], we can directly prove a DPP for the upper and the lower value functions, and also that both functions are the unique viscosity solutions of the upper and the lower integral–partial differential equations of Hamilton–Jacobi–Bellman–Isaacs type, respectively. Moreover, the existence of the value of the game is obtained in this more general setting under Isaacs' condition.

19.
Abstract

This article studies classes of random measures on topological spaces perturbed by stochastic processes (a.k.a. modulated random measures). We give a rigorous construction of the stochastic integral of functions of two variables and show that such an integral is a random measure. We establish a new Campbell-type formula that, along with a rigorous construction of modulation, leads to the intensity of a modulated random measure. The mathematical formalism of integral-driven random measures and their stochastic intensities finds numerous applications in stochastic models, physics, astrophysics, and finance, which we discuss throughout the article.

20.
《Optimization》2012,61(1):121-122
In this note we investigate a discrete-time stochastic dynamic programming problem with countable state and action spaces. We introduce an approximation procedure for a numerical solution by decomposition of the state space and also of the action space. The minimal value functions and the optimal policies of the Markov decision processes constructed by clustering of both spaces are calculated by dynamic programming. Bounds for the minimal value functions are obtained and convergence theorems are proved.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号