Similar Literature (20 records found)
1.
Asset allocation among diverse financial markets is essential for investors, especially in situations such as the financial crisis of 2008. Portfolio optimization is the most developed method for examining optimal asset-allocation decisions. We employ a hidden Markov model to identify regimes in varied financial markets; a regime-switching model yields multiple distributions, and this information converts the static mean–variance model into an optimization problem under uncertainty, which is the case when market regimes are unobservable. We construct a stochastic program to optimize portfolios under the regime-switching framework and use scenario generation to formulate the optimization problem mathematically. In addition, we build a simple example for a pension fund and examine the behavior of the optimal solution over time by using a rolling-horizon simulation. We conclude that the regime information helps portfolios avoid risk during left-tail events.
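As a rough illustration of the pipeline this abstract describes, the sketch below fits a two-regime hidden Markov model to toy return data, samples next-period return scenarios conditional on the identified current regime, and picks mean-variance weights over those scenarios. It is only a minimal sketch: the hmmlearn dependency, the two-asset toy data, and the risk-aversion value are assumptions for illustration, not the authors' implementation, and the rolling-horizon simulation is omitted.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Toy return history for two assets: a calm regime followed by a crisis regime.
calm = rng.multivariate_normal([0.006, 0.003], [[0.0004, 0.0001], [0.0001, 0.0002]], 200)
crisis = rng.multivariate_normal([-0.01, 0.001], [[0.002, -0.0003], [-0.0003, 0.0003]], 60)
returns = np.vstack([calm, crisis])

# Identify regimes with a hidden Markov model.
hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
hmm.fit(returns)
current_regime = hmm.predict(returns)[-1]

# Scenario generation: sample next-period returns from the regime mixture
# implied by the transition row of the current regime.
n_scen = 5000
next_regimes = rng.choice(2, size=n_scen, p=hmm.transmat_[current_regime])
scenarios = np.array([
    rng.multivariate_normal(hmm.means_[k], hmm.covars_[k]) for k in next_regimes
])

def neg_mean_variance(w, lam=4.0):
    # Mean-variance objective over the sampled scenarios (lam is an assumed risk aversion).
    port = scenarios @ w
    return -(port.mean() - lam * port.var())

res = minimize(neg_mean_variance, np.array([0.5, 0.5]), method="SLSQP",
               bounds=[(0.0, 1.0)] * 2,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("current regime:", current_regime, "weights:", res.x.round(3))
```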

2.
Motivated by multi-user optimization problems and non-cooperative Nash games in uncertain regimes, we consider stochastic Cartesian variational inequality problems where the set is given as the Cartesian product of a collection of component sets. First, we consider the case where the number of component sets is large and develop a randomized block stochastic mirror-prox algorithm, in which at each iteration only a randomly selected block coordinate of the solution vector is updated through two consecutive projection steps. We show that when the mapping is strictly pseudo-monotone, the algorithm generates a sequence of iterates that converges to the solution of the problem almost surely. When the mapping is strongly pseudo-monotone, we prove that the mean-squared error diminishes at the optimal rate. Second, we consider large-scale stochastic optimization problems with convex objectives and develop a new averaging scheme for the randomized block stochastic mirror-prox algorithm. We show that by using a different set of weights than those employed in classical stochastic mirror-prox methods, the objective values of the averaged sequence converge to the optimal value in the mean sense at an optimal rate. Third, we consider stochastic Cartesian variational inequality problems and develop a stochastic mirror-prox algorithm that employs the new weighted averaging scheme. We show that the expected value of a suitably defined gap function converges to zero at an optimal rate.
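The sketch below is a hedged, Euclidean-prox rendering of the randomized block idea described above: at each iteration one block is chosen at random and updated with two consecutive projected (mirror-prox / extragradient) steps using noisy evaluations of the map. The affine strongly monotone map, the box constraint, and the step-size rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_blocks = 12, 4
blocks = np.array_split(np.arange(d), n_blocks)

A = rng.standard_normal((d, d))
M = A @ A.T + 0.5 * np.eye(d)      # strongly monotone affine map (assumed test problem)
b = rng.standard_normal(d)
F = lambda x: M @ x + b            # deterministic part of the map
x_star = np.linalg.solve(M, -b)    # unconstrained solution, lying inside the box below

project = lambda v: np.clip(v, -20.0, 20.0)   # per-coordinate box projection

x = np.zeros(d)
for k in range(1, 20001):
    gamma = 1.0 / np.sqrt(k)                  # diminishing step size (illustrative)
    i = rng.integers(n_blocks)                # randomly selected block
    idx = blocks[i]
    noise1, noise2 = rng.normal(0, 0.1, size=(2, d))   # stochastic errors in the map
    y = x.copy()
    y[idx] = project(x[idx] - gamma * (F(x) + noise1)[idx])   # first projection step
    x[idx] = project(x[idx] - gamma * (F(y) + noise2)[idx])   # second projection step

print("distance to solution:", np.linalg.norm(x - x_star))
```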

3.
The conditions for solving some problems of plastic metal working are essentially stochastic. As a result, selecting optimal regimes for plastic metal-working processes becomes a stochastic optimization problem. A classification and mathematical statements of this problem are proposed. Computational results are presented for the stochastic optimization of various regimes in two processes: cyclic bending (leveling) of rail R-65 in the plane of maximum rigidity of a 6-roll leveler, and upsetting of a cylindrical billet. Proceedings of the XVII Seminar on Stability Problems for Stochastic Models, Kazan, Russia, 1995, Part III.

4.
We address a general optimal switching problem over a finite horizon for a stochastic system described by a differential equation driven by Brownian motion. The main novelty is that we allow for infinitely many modes (or regimes, i.e. the possible values of the piecewise-constant control process). We allow all the given coefficients in the model to be path-dependent, that is, their value at any time depends on the past trajectory of the controlled system. The main aim is to introduce a suitable (scalar) backward stochastic differential equation (BSDE), with a constraint on the martingale part, that provides a probabilistic representation of the value function of the given problem. This is achieved by randomization of control, i.e. by introducing an auxiliary optimization problem which has the same value as the original optimal switching problem and for which the desired BSDE representation is obtained. In contrast with the existing literature, we do not rely on a system of reflected BSDEs, nor can we use the associated Hamilton–Jacobi–Bellman equation in our non-Markovian framework.

5.
In this paper we study the single-machine stochastic JIT scheduling problem subject to machine breakdowns, under both preemptive-resume and preemptive-repeat disciplines. The objective function is the sum of squared deviations of the job expected completion times from the due date (SSDE). For preemptive-resume, we show that the optimal sequence of the SSDE problem is V-shaped with respect to the expected processing times, and we give a dynamic programming algorithm of pseudopolynomial time complexity. We discuss the difference between the SSDE problem and the ESSD problem and show that the optimal solution of the SSDE problem is a good approximation to the optimal solution of the ESSD problem, and that under certain conditions it is in fact optimal for the ESSD problem. For preemptive-repeat, the stochastic JIT scheduling problem had remained unsolved because the variances of the completion times cannot be computed; we therefore replace the ESSD problem by the SSDE problem. We show that the optimal sequence of the SSDE problem is V-shaped with respect to the expected occupying times, and we give a dynamic programming algorithm of pseudopolynomial time complexity. This offers a new line of attack on the preemptive-repeat stochastic JIT scheduling problem.
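To make the V-shaped structure concrete, the toy search below enumerates sequences whose expected processing times first decrease and then increase, and evaluates the SSDE objective for each. This brute-force enumeration stands in for the paper's pseudopolynomial dynamic program, and the processing times and due date are invented data.

```python
from itertools import combinations

def ssde(seq, due):
    # Sum of squared deviations of (expected) completion times from the due date.
    total, t = 0.0, 0.0
    for p in seq:
        t += p
        total += (t - due) ** 2
    return total

expected_p = [7.0, 3.0, 5.0, 2.0, 4.0]     # expected processing times (toy data)
due_date = 12.0

shortest = min(expected_p)                  # the shortest job sits at the bottom of the V
rest = sorted(expected_p)[1:]

best_seq, best_val = None, float("inf")
for r in range(len(rest) + 1):
    for left in combinations(rest, r):      # jobs placed on the decreasing branch
        right = list(rest)
        for p in left:
            right.remove(p)
        seq = sorted(left, reverse=True) + [shortest] + sorted(right)
        val = ssde(seq, due_date)
        if val < best_val:
            best_seq, best_val = seq, val

print("best V-shaped sequence:", best_seq, "SSDE:", round(best_val, 2))
```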

6.
We study an optimal control problem for forward–backward stochastic control systems driven jointly by Teugels martingales and an independent multi-dimensional Brownian motion, where the Teugels martingales are a family of pairwise strongly orthogonal normal martingales associated with a Lévy process (see Nualart and Schoutens, 2000). Under the assumption that the admissible controls take values in a nonempty closed convex set, we obtain sufficient and necessary conditions for the existence of an optimal control by means of convex variational methods and duality techniques. As an application, we study in detail the linear-quadratic optimal control problem for linear forward–backward stochastic systems (the FBLQ problem, for short) and characterize the optimal control by duality through the corresponding stochastic Hamiltonian system. Here the stochastic Hamiltonian system is a linear forward–backward stochastic differential equation driven jointly by Teugels martingales and a multi-dimensional Brownian motion, consisting of the state equation, the adjoint equation, and the dual representation of the optimal control.

7.
We consider a stochastic control problem for a random evolution. We study the Bellman equation of the problem and we prove the existence of an optimal stochastic control which is Markovian. This problem enables us to approximate the general problem of the optimal control of solutions of stochastic differential equations.

8.
In Gapeev and Kühn (2005) [8], the Dynkin game corresponding to perpetual convertible bonds was considered, driven by a Brownian motion and a compound Poisson process with exponential jumps. We consider the same stochastic game but driven by a spectrally positive Lévy process. We establish a complete solution to the game, indicating four principal parameter regimes as well as characterizing the occurrence of continuous and smooth fit. In Gapeev and Kühn (2005) [8], the method of proof was mainly based on solving a free boundary value problem. In this paper, we instead use fluctuation theory and an auxiliary optimal stopping problem to find a solution to the game.

9.
We study a population-growth parametric model described by a Cauchy problem for an ordinary differential equation with the right-hand side depending on the population size, time, and a stochastic parameter. For this problem, we consider an adaptive optimal control problem, the problem of optimal harvesting. For the case where the stochastic parameter is piecewise constant and changes at fixed moments, we construct a synthesis of the adaptive trajectory and the optimal control strategy. The results are illustrated with five simple population-growth models.
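A hedged illustration of this setting: the snippet below simulates logistic population growth whose intrinsic rate switches to a new random value at fixed moments, with an adaptive harvesting rate reset to the maximum-sustainable-yield level for the currently observed rate. The logistic growth law and the MSY feedback rule are assumptions chosen for illustration; they are not the five models studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
K, dt = 100.0, 0.01                 # carrying capacity, Euler step
n_steps, steps_per_switch = 3000, 500
x = 60.0                            # initial population
r = rng.uniform(0.2, 1.0)           # stochastic growth parameter, piecewise constant

history = []
for k in range(n_steps):
    if k > 0 and k % steps_per_switch == 0:          # parameter changes at fixed moments
        r = rng.uniform(0.2, 1.0)
    h = r * K / 4.0                                  # adaptive harvest: MSY level for current r
    x = max(x + dt * (r * x * (1.0 - x / K) - h), 0.0)   # Euler step of harvested logistic growth
    history.append((k * dt, x, h))

for t, pop, harvest in history[::500]:
    print(f"t={t:5.1f}  population={pop:6.2f}  harvest rate={harvest:5.2f}")
```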

10.
We discuss the stochastic linear-quadratic (LQ) optimal control problem with Poisson processes in the indefinite case. Based on the well-posedness of the LQ problem, the main idea is expressed through the definition of a relax compensator, which extends the stochastic Hamiltonian system and the stochastic Riccati equation with Poisson processes (SREP) from the positive definite case to the indefinite case. We mainly study the existence and uniqueness of the solution of the stochastic Hamiltonian system and obtain the optimal control in open-loop form. Then, we further investigate the existence and uniqueness of the solution of the SREP in a special case and obtain the optimal control in closed-loop form.

11.
A nonlinear stochastic optimal time-delay control strategy for quasi-integrable Hamiltonian systems is proposed. First, a stochastic optimal control problem of a quasi-integrable Hamiltonian system with time delay in the feedback control, subject to Gaussian white noise, is formulated. Then, the time-delayed feedback control forces are approximated by control forces without time delay, and the original problem is converted into a stochastic optimal control problem without time delay. The converted stochastic optimal control problem is then solved by applying the stochastic averaging method and the stochastic dynamical programming principle. As an example, the stochastic time-delay optimal control of two coupled van der Pol oscillators under stochastic excitation is worked out in detail to illustrate the procedure and effectiveness of the proposed control strategy.

12.
We consider a continuous-time stochastic control problem with partial observations. Given some assumptions, we reduce the problem in successive approximation steps to a discrete-time, complete-observation stochastic control problem with a finite number of possible states and controls. For the latter problem an optimal control can always be explicitly computed. Convergence of the approximations is shown, which in turn implies that an optimal control for the last-stage approximating problem is ε-optimal for the original problem.
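For the finite-state, finite-control, discrete-time problem that the approximation reduces to, an optimal control can indeed be computed explicitly by backward induction (dynamic programming). The sketch below shows such a computation with randomly generated, purely illustrative transition probabilities and costs.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states, n_controls, horizon = 5, 3, 10

# P[u, s, s'] = probability of moving from state s to s' under control u.
P = rng.random((n_controls, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
cost = rng.random((n_controls, n_states))          # stage cost c(u, s)
terminal = rng.random(n_states)                    # terminal cost

V = terminal.copy()
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    Q = cost + P @ V                               # Q[u, s] = c(u, s) + E[V(next state)]
    policy[t] = Q.argmin(axis=0)                   # optimal control at time t in each state
    V = Q.min(axis=0)

print("optimal expected cost from each initial state:", V.round(3))
```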

13.
We study optimal control for mean-field stochastic partial differential equations (stochastic evolution equations) driven by a Brownian motion and an independent Poisson random measure, in the case of partial information control. One important novelty of our problem is the introduction of general mean-field operators, acting on both the controlled state process and the control process. We first formulate a sufficient and a necessary maximum principle for this type of control. We then prove the existence and uniqueness of the solution of such general forward and backward mean-field stochastic partial differential equations. We apply our results to find the explicit optimal control for an optimal harvesting problem.

14.
This paper presents a noneconometric approach to estimating the short-run timber supply function based on optimal harvest decisions. Determination of optimal harvest levels and estimation of supply-function coefficients are integrated into one step by incorporating a parametric short-run timber supply function into the harvest decision model. In this manner we convert the original harvest decision model into a new optimization problem with the supply-function coefficients serving as "decision variables." The optimal solution to the new decision model gives the coefficients of the short-run supply function and, indirectly, the optimal harvest levels. This approach enables us to develop stochastic models of the timber market that are particularly useful for forest sector analysis involving comparison of alternative institutional regimes or policy proposals, and when the timber market is affected by stochastic variables. For demonstration purposes, we apply this method to compare the performance of two timber market regimes (perfect competition and monopoly) under demand uncertainty, using Swedish data. The results show that the expected timber price is 22 percent lower and the expected annual timber supply is 43 percent higher in the competitive market than in the monopoly market. This confirms the theoretical result that monopoly reduces supply and increases price. The expected social welfare gain from perfect competition over monopoly is about 24 percent.
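A toy version of the "coefficients as decision variables" device is sketched below: harvest is tied to price by a linear supply rule q = a + b·p, and (a, b) are chosen to maximize expected profit over random price scenarios. The functional forms, the quadratic harvesting cost, and the lognormal price scenarios are assumptions for illustration only; with this cost the pointwise-optimal rule is q = p/c, so the fitted coefficients should land near a = 0, b = 1/c.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
prices = rng.lognormal(mean=3.0, sigma=0.3, size=2000)   # stochastic price scenarios
c = 0.05                                                 # marginal-cost slope (assumed)

def neg_expected_profit(coef):
    a, b = coef
    q = np.maximum(a + b * prices, 0.0)                  # parametric supply rule as decision variables
    return -np.mean(prices * q - 0.5 * c * q ** 2)       # negative expected profit

res = minimize(neg_expected_profit, x0=np.array([1.0, 1.0]), method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-9, "fatol": 1e-9})
print("estimated supply coefficients (a, b):", res.x.round(3), " 1/c =", 1 / c)
```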

15.
In this paper, we study an inverse optimal problem in discrete-time stochastic control. We give necessary and sufficient conditions for a solution to a system of stochastic difference equations to be the solution of a certain optimal control problem. Our results extend to the stochastic case the work of Dechert. In particular, we present a stochastic version of an important principle in welfare economics.

16.
We study optimal stochastic control problems with jumps under model uncertainty. We rewrite such problems as stochastic differential games of forward–backward stochastic differential equations. We prove general stochastic maximum principles for such games, both in the zero-sum case (finding conditions for saddle points) and in the nonzero-sum case (finding conditions for Nash equilibria). We then apply these results to study robust optimal portfolio-consumption problems with penalty. We establish a connection between market viability under model uncertainty and equivalent martingale measures. In the case with entropic penalty, we prove a general reduction theorem, stating that an optimal portfolio-consumption problem under model uncertainty can be reduced to a classical portfolio-consumption problem under model certainty, with a change in the utility function, and we relate this to risk-sensitive control. In particular, this result shows that model uncertainty increases the Arrow–Pratt risk aversion index.

17.
The paper is concerned with optimal control of backward stochastic differential equations (BSDEs) driven by Teugels martingales and an independent multi-dimensional Brownian motion, where the Teugels martingales are a family of pairwise strongly orthonormal martingales associated with Lévy processes (see, e.g., Nualart and Schoutens, 2000). We derive necessary and sufficient conditions for the existence of the optimal control by means of convex variation methods and duality techniques. As an application, the optimal control problem of a linear backward stochastic differential equation with a quadratic cost criterion (the backward linear-quadratic, or BLQ, problem for short) is discussed and characterized by a stochastic Hamiltonian system.

18.
Planning horizon is a key issue in production planning. Unlike previous approaches based on Markov decision processes, we study the planning horizon of capacity planning problems within the framework of stochastic programming. We first consider an infinite-horizon stochastic capacity planning model involving a single resource, a linear cost structure, and discrete distributions for general stochastic cost and demand data (non-Markovian and non-stationary). We give sufficient conditions for the existence of an optimal solution. Furthermore, we study the monotonicity property of the finite-horizon approximation of the original problem. We show that the optimal objective value and solution of the finite-horizon approximation converge to those of the infinite-horizon problem as the time horizon goes to infinity. These convergence results, together with the integrality of the decision variables, imply the existence of a planning horizon. We also develop a useful formula to calculate an upper bound on the planning horizon. Then, by decomposition, we show the existence of a planning horizon for a class of very general stochastic capacity planning problems with complicated decision structures.

19.
In this paper, we present an optimal control problem for stochastic differential games under Markov regime-switching forward–backward stochastic differential equations with jumps. First, we prove a sufficient maximum principle for nonzero-sum stochastic differential game problems and obtain an equilibrium point for such games. Second, we prove an equivalent maximum principle for nonzero-sum stochastic differential games. The equivalent maximum principle for zero-sum stochastic differential games is then obtained as a corollary. We apply the results to study a problem of robust utility maximization under a relative entropy penalty and to find the optimal investment of an insurance firm under model uncertainty.

20.
In this paper, we consider an optimal control problem with state constraints, where the control system is described by a mean-field forward–backward stochastic differential equation (MFFBSDE, for short) and the admissible control is of mean-field type. Making full use of backward stochastic differential equation theory, we transform the original control system into an equivalent backward form, i.e., the equations in the control system are all backward. In addition, Ekeland's variational principle helps us deal with the state constraints, so that we obtain a stochastic maximum principle characterizing the necessary condition for the optimal control. We also study a stochastic linear-quadratic control problem with state constraints.
