1.
We present two applications of linearization techniques in stochastic optimal control. In the first part, we show how the assumption of stability under concatenation for control processes can be dropped in the study of asymptotic stability domains. Generalizing Zubov's method, we characterize the stability domain as a level set of a semicontinuous generalized viscosity solution of the associated Hamilton-Jacobi-Bellman equation. In the second part, we extend our study to unbounded coefficients and apply the method to obtain a linear formulation for control problems whenever the state equation is a stochastic variational inequality.
2.
We study two classes of stochastic control problems with semicontinuous cost: the Mayer problem and optimal stopping for controlled diffusions. The value functions are introduced via linear optimization problems on appropriate sets of probability measures. These constraint sets are described deterministically in terms of the coefficient functions. Both the lower and the upper semicontinuous cases are considered. The value function is shown to be a generalized viscosity solution of the associated HJB system and of a variational inequality, respectively. Dual formulations are given, together with the relations between the primal and dual value functions. Under classical convexity assumptions, we prove the equivalence between the linearized Mayer problem and the standard weak control formulation. Counter-examples are given for the general framework.
3.
Traditional means of studying environmental economics and management problems consist of optimal control and dynamic game models that are solved for optimal or equilibrium strategies. Notwithstanding the possibility of multiple equilibria, the models' users (managers or planners) will usually be provided with a single optimal or equilibrium strategy, no matter how reliable or unreliable the underlying models and their parameters are. In this paper we follow an alternative approach to policy making based on viability theory. It establishes "satisficing" (in the sense of Simon), or viable, policies that keep the dynamic system in a constraint set; such policies are, generically, multiple and amenable to each manager's own prioritisation. Moreover, they can depend on fewer parameters than the optimal or equilibrium strategies and hence be more robust. Computation of "viability kernels" is crucial for determining these viable policies. We introduce a MATLAB application, named VIKAASA, which computes approximations to viability kernels, and we discuss the two algorithms it implements. The first approximates the viability kernel by the locus of state-space positions from which an auxiliary cost-minimising optimal control problem admits a solution; the absence of a solution implies that the value function is infinite and indicates an evolution that leaves the constraint set in finite time, so the point from which the evolution originates belongs to the kernel's complement. The second accepts a point as viable if the system's dynamics can be stabilised from that point. We comment on the pros and cons of each algorithm. Finally, we apply viability theory and the VIKAASA software to a problem of by-catch fisheries exploited by one or two fleets, and provide rules concerning the proportion of fish biomass and the fishing effort that a sustainable fishery should follow.
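The stabilisation-based algorithm above is implemented in VIKAASA (MATLAB). As a rough illustration of the idea only, here is a minimal Python sketch: a grid point is accepted as viable if a greedy control rule keeps the Euler-discretised state inside the constraint set over a long horizon. The greedy rule (pick the control whose step stays closest to the origin) and the toy dynamics in the usage note are our own assumptions, not VIKAASA's.

```python
import numpy as np

def is_viable(x0, f, controls, in_K, dt=0.01, horizon=1000):
    """Greedy stabilisation test: from x0, repeatedly pick the admissible
    control whose Euler step stays closest to the origin, and declare x0
    viable if the state never leaves the constraint set K over the horizon.
    This gives an inner approximation: acceptance is conclusive, rejection
    may be an artefact of the crude greedy rule."""
    x = np.asarray(x0, dtype=float)
    for _ in range(horizon):
        if not in_K(x):
            return False
        # candidate Euler steps under each admissible control
        steps = [x + dt * f(x, u) for u in controls]
        # greedy choice: remain as deep inside K (close to 0) as possible
        x = min(steps, key=np.linalg.norm)
    return in_K(x)

def kernel_estimate(grid, f, controls, in_K, **kw):
    """Inner approximation of the viability kernel on a grid of states."""
    return [x for x in grid if is_viable(x, f, controls, in_K, **kw)]
```

For the scalar toy system dx/dt = x + u with u in {-0.5, 0.5} and K = [-1, 1], the true kernel is [-0.5, 0.5]: points with |x| > 0.5 drift out of K under every control, and the sketch rejects them once the horizon is long enough.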
4.
5.
We aim at characterizing domains of attraction for controlled piecewise deterministic Markov processes (PDMPs) using an occupational-measure formulation and Zubov's approach. Firstly, we provide linear programming (primal and dual) formulations of discounted, infinite-horizon control problems for PDMPs. These formulations involve an infinite-dimensional set of probability measures and are obtained using viscosity solutions theory. Secondly, these tools allow us to construct stabilizing measures and to avoid the assumption of stability under concatenation for controls. The domain of controllability is then characterized as a level set of a convenient solution of the associated Hamilton-Jacobi integro-differential equation. The theoretical results are applied to PDMPs associated with stochastic gene networks; explicit computations are given for Cook's model of gene expression.
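The infinite-dimensional linear program over occupation measures has a well-known finite-state analogue. As an illustration only (the three-state chain, costs, and discount factor below are our own assumptions, not taken from the paper), here is a sketch of the primal LP for a discounted Markov decision process: minimise expected discounted cost over occupation measures μ(s, a) subject to the flow-balance constraints.

```python
import numpy as np
from scipy.optimize import linprog

n_s, n_a, gamma = 3, 2, 0.9
# deterministic toy transitions: action 0 stays put, action 1 moves right (capped)
nxt = lambda s, a: s if a == 0 else min(s + 1, n_s - 1)
cost = np.array([[1.0, 1.0], [1.0, 1.0], [0.0, 0.0]])  # cheap only in state 2
nu0 = np.array([1.0, 0.0, 0.0])                         # initial distribution: state 0

# flow-balance: sum_a mu(s,a) - gamma * inflow(s) = nu0(s), one row per state
A = np.zeros((n_s, n_s * n_a))
for s in range(n_s):
    for a in range(n_a):
        j = s * n_a + a
        A[s, j] += 1.0            # outflow of occupation mass from (s, a)
        A[nxt(s, a), j] -= gamma  # discounted inflow into the successor state
res = linprog(cost.ravel(), A_eq=A, b_eq=nu0, bounds=(0, None))
mu = res.x.reshape(n_s, n_a)      # optimal occupation measure
```

The optimal value `res.fun` equals the discounted value function at the initial state (here 1.9: one unit of cost in state 0, 0.9 in state 1, then free), and the total occupation mass sums to 1/(1 - gamma). The dual of this LP recovers the Bellman inequalities, mirroring the primal-dual pair in the paper.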
6.
This paper concerns the existence of a value for a zero-sum two-player differential game with supremum cost of the form C_{t0,x0}(u,v) = sup_{τ ∈ [t0,T]} h(x(τ; t0, x0, u, v)) under Isaacs' condition. We characterize the value function as the unique solution, in a suitable sense, to a PDE, namely the Hamilton-Jacobi-Isaacs equation. As a byproduct, we obtain a PDE characterization of the value function for the control system.
7.
We study partial differential inequalities (PDI) of Hamilton-Jacobi type involving the normal cone N_K(·) to the set K. We prove the existence of a constant such that the PDI has a unique (global) Lipschitz viscosity solution, and we provide a formula to calculate this constant. Moreover, we define a subset of K such that any two solutions of the PDI which coincide on this subset coincide on all of K. Our paper generalizes results obtained by L.C. Evans and A. Fathi for convex Hamiltonians in the case without boundary conditions.
8.
The paper studies value functions associated with optimization problems and with Mayer-type control problems. Using methods belonging to proximal analysis and control theory, we establish new results for the primal-lower-nice (pln) property of the value functions for these problems.
9.
The aim of this paper is to study singularly perturbed control systems. Firstly, we provide a linearized formulation for computing the value function associated with the averaged dynamics. Secondly, we obtain necessary and sufficient conditions identifying the optimal trajectory of the averaged system.
10.
We consider a control problem with reflecting boundary and obtain necessary optimality conditions in the form of the Pontryagin maximum principle. To derive these results, we either transform the constrained problem into an unconstrained one or use penalization techniques of Moreau-Yosida type to approximate the original problem by a sequence of optimal control problems with Lipschitz dynamics. Nonsmooth analysis is then used to study the convergence of the penalization and obtain the optimality conditions.