Similar Literature
20 similar documents found (search time: 62 ms).
1.
We consider a multidimensional semi-Markov process of diffusion type. A stochastic integral with respect to the semi-Markov process is defined in terms of asymptotics related to the first exit time from a small neighborhood of the starting point of the process and, in particular, in terms of its characteristic operator. This integral is equal to the sum of two other integrals: the first is a curvilinear integral with respect to an additive functional defined in terms of the expected first exit time from a small neighborhood, and the second is a stochastic integral with respect to a martingale of a special kind. To prove the existence and derive the properties of the integral, both the method of deducing sequences and that of inscribed ellipsoids are used. For Markov processes of diffusion type, the new definition of the stochastic integral reduces to the standard one. Bibliography: 8 titles. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 328, 2005, pp. 251–276.

2.
《Optimization》2012,61(1-2):173-190
The paper deals with speculation strategies in a dynamic economy, where “speculation” means participating in a market with the intention to gain a reward by first buying an item and thereafter selling it at a possibly higher price. By assuming that the states of the economy form a Markov chain the problem is modeled as a discrete time Markov decision process. The optimal strategies (which are pairs of stopping times) are identified. Under quite general conditions the optimal rule for the selling process turns out to be a control limit policy in both state of economy and time. Techniques for the computation of optimal strategies are presented; some numerical examples are also discussed. For a static economy closed-form solutions are given  相似文献   

3.
4.
This paper studies the bailout optimal dividend problem with regime switching under the constraint that dividend payments can be made only at the arrival times of an independent Poisson process, while capital can be injected continuously in time. We show the optimality of the regime-modulated Parisian-classical reflection strategy when the underlying risk model follows a general spectrally negative Markov additive process. To verify the optimality, we first study an auxiliary problem driven by a single spectrally negative Lévy process with a final payoff at an exponential terminal time and characterize the optimal dividend strategy. We then use the dynamic programming principle to transform the global regime-switching problem into an equivalent local optimization problem with a final payoff up to the first regime-switching time. The optimality of the regime-modulated Parisian-classical barrier strategy can then be proven by using the results from the auxiliary problem and approximations via recursive iterations.

5.
We obtain a formula for the distribution of the first exit time of Brownian motion from a fundamental region associated with a finite reflection group. In the type A case it is closely related to a formula of de Bruijn, and the exit probability is expressed as a Pfaffian. Our formula yields a generalisation of de Bruijn’s. We derive large and small time asymptotics, and formulas for expected first exit times. The results extend to other Markov processes. By considering discrete random walks in the type A case we recover known formulas for the number of standard Young tableaux with bounded height. Mathematics Subject Classification (2000): 20F55, 60J65.
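A quick Monte Carlo sanity check of the type A setting: three independent Brownian coordinates are started from an ordered point and stopped the first time the ordering x1 > x2 > x3 is violated, i.e. when the process leaves the fundamental region (Weyl chamber). The starting point, step size and sample sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = np.array([2.0, 0.0, -2.0])              # ordered start inside the chamber x1 > x2 > x3
dt, n_steps, n_paths = 1e-3, 20_000, 10_000

x = np.tile(x0, (n_paths, 1))
alive = np.ones(n_paths, dtype=bool)
exit_time = np.full(n_paths, n_steps * dt)   # paths that never exit keep the time cap
for i in range(1, n_steps + 1):
    x[alive] += np.sqrt(dt) * rng.standard_normal((alive.sum(), 3))
    left = alive & ~((x[:, 0] > x[:, 1]) & (x[:, 1] > x[:, 2]))
    exit_time[left] = i * dt                 # record the first time the ordering breaks
    alive &= ~left
    if not alive.any():
        break

print("estimated mean first exit time:", exit_time.mean())
print("estimated P(exit time > 1):   ", (exit_time > 1.0).mean())
```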

6.
We obtain a formula for the distribution of the first exit time of Brownian motion from the alcove of an affine Weyl group. In most cases the formula is expressed compactly, in terms of Pfaffians. Expected exit times are derived in the type $\widetilde{A}$ case. The results extend to other Markov processes. We also give formulas for the real eigenfunctions of the Dirichlet and Neumann Laplacians on alcoves, observing that the ‘Hot Spots’ conjecture of J. Rauch is true for alcoves.

7.
8.
In this paper we are interested in the effect that dependencies in the arrival process to a queue have on queueing properties such as mean queue length and mean waiting time. We start with a review of the well-known relations used to compare random variables and random vectors, e.g., stochastic orderings, stochastic increasing convexity, and strong stochastic increasing concavity. These relations and others are used to compare interarrival times in Markov renewal processes, first in the case where the interarrival time distributions depend only on the current state in the underlying Markov chain, and then in the general case where these interarrival times depend on both the current state and the next state in that chain. These results are used to study a problem previously considered by Patuwo et al. [14]. Then, in order to keep the marginal distributions of the interarrival times constant, we build a particular transition matrix for the underlying Markov chain depending on a single parameter, p. This Markov renewal process is used in the Patuwo et al. [14] problem to investigate the behavior of the mean queue length and mean waiting time as a function of a correlation measure depending only on p. As constructed, the interarrival time distributions do not depend on p, so the effects we find depend only on correlation in the arrival process. As a result of this construction, we find that the mean queue length is always larger when correlations are non-zero than in the more usual case of renewal arrivals (i.e., where the correlations are zero). The implications of our results are clear.
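A toy version of this construction is easy to simulate: interarrival times are exponential with a rate chosen by a two-state phase chain whose symmetric transition matrix [[p, 1-p], [1-p, p]] leaves the marginal interarrival distribution unchanged for every p, while p controls the serial correlation (p = 0.5 gives renewal arrivals). Waiting times at a single FIFO server with deterministic service are computed with the Lindley recursion. The rates, service time and values of p are illustrative assumptions, not the construction of [14] or of this paper.

```python
import numpy as np

def mean_wait(p, n=200_000, seed=1):
    """Mean waiting time under phase-modulated arrivals with 'stay' probability p."""
    rng = np.random.default_rng(seed)
    rate = (2.0, 0.5)            # fast / slow arrival phases (same marginal mix for every p)
    service = 0.9                # deterministic service time
    state, w, total = 0, 0.0, 0.0
    for _ in range(n):
        a = rng.exponential(1.0 / rate[state])   # next interarrival time
        w = max(0.0, w + service - a)            # Lindley recursion for the FIFO waiting time
        total += w
        if rng.random() >= p:                    # symmetric phase chain: stay with prob. p
            state = 1 - state
    return total / n

for p in (0.5, 0.7, 0.9):        # p = 0.5 reproduces i.i.d. (renewal) arrivals
    print(f"p = {p}:  mean waiting time ~ {mean_wait(p):.2f}")
```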

9.
The optimal-stopping problem in a partially observable Markov chain is considered and formulated as a Markov decision process. We treat a multiple stopping problem in this paper. Unlike the classical stopping problem, the current state of the chain is not known directly; information about the current state is always available from an information process. Several properties of the value and the optimal policy are given. For example, if we add another stop action to the k-stop problem, the increment of the value is decreasing in k. The author wishes to thank Professor M. Sakaguchi of Osaka University for his encouragement and guidance. He also thanks the referees for their careful readings and helpful comments.
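The sketch below works out a one-stop version of the setting (the paper treats multiple stops) on a hypothetical two-state hidden chain: the belief about the current state is updated from the information process by Bayes' rule, and backward induction over a discretized belief grid produces a threshold rule. The chain, observation matrix, rewards and horizon are invented for illustration.

```python
import numpy as np

P = np.array([[0.9, 0.1],      # transition matrix of the hidden two-state chain
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],      # O[x, y] = probability of observing y in hidden state x
              [0.3, 0.7]])
r = np.array([0.0, 1.0])       # reward collected on stopping, per hidden state
T = 10                         # horizon

def bayes_update(b, y):
    """One step of the information process: predict with P, then update on observation y."""
    post = (b @ P) * O[:, y]
    return post / post.sum()

# Backward induction on a discretized belief grid (belief = probability of state 1).
grid = np.linspace(0.0, 1.0, 201)
V = grid * r[1] + (1.0 - grid) * r[0]          # at the horizon one must stop
for t in range(T - 1, -1, -1):
    stop = grid * r[1] + (1.0 - grid) * r[0]   # immediate stopping reward
    cont = np.zeros_like(grid)
    for i, g in enumerate(grid):
        b = np.array([1.0 - g, g])
        for y in (0, 1):
            p_y = (b @ P) @ O[:, y]            # predictive probability of observation y
            cont[i] += p_y * np.interp(bayes_update(b, y)[1], grid, V)
    V = np.maximum(stop, cont)

threshold = grid[np.argmax(stop >= cont)]
print("stop once the belief in state 1 reaches about", round(float(threshold), 2))
```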

10.
The optimal policy and the value function of a problem of optimal switching between a Wiener process and a deterministic motion on a segment are found in the present article. The deterministic motion has speed 1 and is directed toward the nearest end of the segment. A positive payment has to be made for every switch. The problem is to minimize the sum of the first exit time of the process and the total payment. It turns out that there exist four different optimal rules, depending on the length of the segment and the switching cost.

11.
Critical resources are often shared among different classes of customers. Capacity reservation allows each class of customers to better manage the priorities of its customers but might lead to unused capacity. Unused capacity can be avoided or reduced by advance cancelation. This paper addresses service capacity reservation for a given class of customers. The reservation process is characterized by contracted time slots (CTS) reserved for the class of customers, requests for lengthy regular time slots (RTS), and two advance cancelation modes allowing a CTS to be canceled one or two periods in advance. The optimal control under a given contract is formulated as an average-cost Markov Decision Process (MDP) in order to minimize customer waiting times, unused CTS, and CTS cancelation. Structural properties of optimal control policies are established via the corresponding discounted-cost MDP problem. Numerical results show that two-period advance CTS cancelation can significantly improve the contract-based solution.

12.
We consider the constrained optimization of a finite-state, finite-action Markov chain. In the adaptive problem, the transition probabilities are assumed to be unknown, and no prior distribution on their values is given. We consider constrained optimization problems in terms of several cost criteria which are asymptotic in nature. For these criteria we show that it is possible to achieve the same optimal cost as in the non-adaptive case. We first formulate a constrained optimization problem under each of the cost criteria and establish the existence of optimal stationary policies. Since the adaptive problem is inherently non-stationary, we suggest a class of Asymptotically Stationary (AS) policies and show that, under each of the cost criteria, the costs of an AS policy depend only on its limiting behavior. This property implies that there exist optimal AS policies. A method for generating adaptive policies is then suggested, which leads to strongly consistent estimators for the unknown transition probabilities. A way to guarantee that these policies are also optimal is to couple them with the adaptive algorithm of [3]. This leads to optimal policies for each of the adaptive constrained optimization problems under discussion. This work was supported in part through United States–Israel Binational Science Foundation Grant BSF 85-00306.
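The estimation ingredient can be illustrated in a few lines: under a policy that keeps trying every action (here a uniformly randomized one), the empirical transition frequencies converge to the unknown kernel, which is the kind of strong consistency on which the adaptive policies are built. The two-state, two-action kernel below is made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[[0.8, 0.2], [0.3, 0.7]],    # P[x, a, x'] = true (unknown) transition kernel
              [[0.5, 0.5], [0.1, 0.9]]])

counts = np.zeros_like(P)
x = 0
for _ in range(100_000):
    a = rng.integers(2)                    # exploring (uniformly randomized) policy
    x_next = rng.choice(2, p=P[x, a])
    counts[x, a, x_next] += 1
    x = x_next

P_hat = counts / counts.sum(axis=2, keepdims=True)   # empirical transition frequencies
print("max estimation error:", np.abs(P_hat - P).max())
```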

13.
For a wide class of local martingales $(M_t)$ there is a default function, which is not identically zero only when $(M_t)$ is strictly local, i.e. not a true martingale. This default in the martingale property allows us to characterize the integrability of functions of $\sup_{s\le t} M_s$ in terms of the integrability of the function itself. We describe some (paradoxical) mean-decreasing local sub-martingales, and the default functions for Bessel processes and radial Ornstein–Uhlenbeck processes in relation to their first hitting and last exit times. Received: 6 August 1996 / Revised version: 27 July 1998.
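A classical example closely related to the Bessel processes mentioned above is the inverse Bessel(3) process: if B is a three-dimensional Brownian motion started away from the origin, then 1/|B_t| is a local martingale but not a true one, and its expectation strictly decreases in t. The short Monte Carlo below (with an arbitrary starting point, step size and sample size) makes the decreasing mean visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, dt = 50_000, 0.01
checkpoints = [0.0, 0.5, 1.0, 2.0]

B = np.zeros((n_paths, 3))
B[:, 0] = 1.0                        # start the 3-d Brownian motion at distance 1 from the origin
t = 0.0
for T in checkpoints:
    for _ in range(int(round((T - t) / dt))):
        B += np.sqrt(dt) * rng.standard_normal((n_paths, 3))
    t = T
    mean_inv = np.mean(1.0 / np.linalg.norm(B, axis=1))
    print(f"t = {T}:  E[1/|B_t|] ~ {mean_inv:.3f}")      # strictly decreasing in t
```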

14.
Stochastic inventory control theory has focused on the order and/or pricing policy when the length of the selling period is known. In contrast to this focus, we examine the optimal length of the selling period—which we refer to as the market exit time—in the context of a novel inventory replenishment problem faced by a supplier of a new, trendy, and relatively expensive product with a short life cycle. An important characteristic of the problem is that the supplier applies a price-skimming strategy over time and the demand is modeled as a nonhomogeneous Poisson process with a time-dependent intensity. The supplier's problem of finding the optimal order quantity and market exit time, with the objective of maximizing expected profit, is studied. Procedures are proposed for joint optimization of the objective function with respect to the order quantity and the market exit time. Then, the effects of the order quantity and market exit time on the supplier's profitability are explored on the basis of a quantitative investigation. Copyright © 2014 John Wiley & Sons, Ltd.
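In the spirit of the model, the sketch below grids over the order quantity Q and the market exit time and estimates expected profit by simulating the time-varying Poisson demand via thinning. The decaying intensity, price-skimming curve, unit cost and the per-unit-time cost of staying in the market are assumptions added for illustration, not the paper's procedures.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = lambda t: 3.0 * np.exp(-0.3 * t)        # demand intensity (decays over time)
price = lambda t: 10.0 * np.exp(-0.2 * t)     # price-skimming trajectory
unit_cost, run_cost, lam_max = 4.0, 1.5, 3.0  # purchase cost, cost per unit time in the market

def expected_profit(Q, T_exit, n_rep=2000):
    profits = np.empty(n_rep)
    for r in range(n_rep):
        t, sold, revenue = 0.0, 0, 0.0
        while sold < Q:
            t += rng.exponential(1.0 / lam_max)       # candidate arrival of a rate-lam_max process
            if t >= T_exit:
                t = T_exit
                break
            if rng.random() < lam(t) / lam_max:       # thinning: accept with prob lam(t)/lam_max
                revenue += price(t)
                sold += 1
        profits[r] = revenue - unit_cost * Q - run_cost * t
    return profits.mean()

grid = [(Q, T) for Q in (5, 10, 15, 20) for T in (2.0, 4.0, 6.0, 8.0)]
best = max(grid, key=lambda qt: expected_profit(*qt))
print("best (order quantity, exit time) on the grid:", best)
```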

15.
We consider the following model: we inspect the motion of a Markov process with which an “evolution cost” is associated. We inspect the process at times $T_1, \ldots, T_n, \ldots$. If, when we inspect, its value is in a given set A, it continues its evolution; otherwise we kill it. With each inspection we associate an “inspection cost” and a “killing cost”. The problem consists of finding a sequence of optimal inspections. After the modeling, we construct the value function by an iterative procedure as in impulse control theory, using the theory of analytic functions and section theorems. Thanks to the optimality criteria, we obtain a sequence of optimal inspections under very general hypotheses.

16.
For a sequence of dynamic optimization problems, we aim at discussing a notion of consistency over time. This notion can be informally introduced as follows. At the very first time step $t_0$, the decision maker formulates an optimization problem that yields optimal decision rules for all the forthcoming time steps $t_0, t_1, \ldots, T$; at the next time step $t_1$, he is able to formulate a new optimization problem starting at time $t_1$ that yields a new sequence of optimal decision rules. This process can be continued until the final time $T$ is reached. A family of optimization problems formulated in this way is said to be dynamically consistent if the optimal strategies obtained when solving the original problem remain optimal for all subsequent problems. The notion of dynamic consistency, well-known in the field of economics, has been recently introduced in the context of risk measures, notably by Artzner et al. (Ann. Oper. Res. 152(1):5–22, 2007), and studied in the stochastic programming framework by Shapiro (Oper. Res. Lett. 37(3):143–147, 2009) and for Markov Decision Processes (MDP) by Ruszczynski (Math. Program. 125(2):235–261, 2010). We here link this notion with the concept of “state variable” in MDP, and show that a significant class of dynamic optimization problems are dynamically consistent, provided that an adequate state variable is chosen.

17.
We obtain upper and lower bounds on the exit times from balls of a jump-type symmetric Markov process. The two bounds are proved separately: the upper bounds are obtained by using the Lévy system corresponding to the process, while the precise expression of the ($L^2$-)generator of the Dirichlet form associated with the process is used to obtain the lower bounds.

18.
In this paper we consider the problem of scheduling n jobs on a single batch processing machine when the jobs are ordered by two customers. Jobs belonging to different customers are processed according to their individual criteria; the criteria considered are minimizing makespan and maximum lateness. A batching machine is able to process up to b jobs simultaneously. The processing time of each batch is equal to the longest processing time of the jobs in the batch; this kind of batch processing is called parallel batch processing. Optimal methods are developed for three cases: unbounded batch capacity (b > n) with compatible job groups, and bounded batch capacity (b ≤ n) with compatible and with incompatible job groups. Each job group represents a different class of customers, and the concept of being compatible means that jobs ordered by different customers are allowed to be processed in the same batch. We propose an optimal method for the problem with incompatible groups and unbounded batches. For the case of incompatible groups and bounded batches, our proposed method is optimal when the group with the maximum-lateness objective has identical processing times; when these processing times differ, however, we regard it as a heuristic. When groups are compatible and batches are bounded, we consider another problem by assuming the same processing times for the group with the maximum-lateness objective and propose an optimal method for this problem.
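For the makespan part of a single group, the classical full-batch longest-processing-time (FBLPT) rule is the usual building block: sort the jobs by nonincreasing processing time, fill batches of size b in that order, and charge each batch the processing time of its longest job. The sketch below, with made-up job data, shows only this standard rule, not the paper's combined method for two customer classes.

```python
def fblpt_batches(proc_times, b):
    """Full-batch longest-processing-time rule for a parallel-batching machine."""
    jobs = sorted(proc_times, reverse=True)
    return [jobs[i:i + b] for i in range(0, len(jobs), b)]

jobs = [7, 3, 9, 4, 6, 2, 8, 5]    # processing times of one customer's jobs (illustrative)
b = 3                              # batch capacity
batches = fblpt_batches(jobs, b)
makespan = sum(max(batch) for batch in batches)
print("batches:", batches)         # [[9, 8, 7], [6, 5, 4], [3, 2]]
print("makespan:", makespan)       # 9 + 6 + 3 = 18
```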

19.
We consider a problem of scheduling in a multi-class network of single-server queues in series, in which service times at the nodes are constant and equal. Such a model has potential application to automated manufacturing systems or packet-switched communication networks, where a message is divided into packets (or cells) of fixed length. The network is a series-type assembly or transfer line, with the exception that there is an additional class of jobs that requires processing only at the first node (class 0). There is a holding cost per unit time that is proportional to the total number of customers in the system. The objective is to minimize the (expected) total discounted holding cost over a finite or an infinite horizon. We show that an optimal policy gives priority to class-0 jobs at node 1 when at least one of a set of m−1 inequalities on partial sums of the components of the state vector is satisfied. We solve the problem by two methods. The first involves formulating the problem as a (discrete-time) Markov decision process and using induction on the horizon length. The second is a sample-path approach using an interchange argument to establish optimality. The research of this author was supported by the National Science Foundation under Grant No. DDM-8719825. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
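A discrete-time toy version of the model is easy to simulate: unit service times at every node, Bernoulli arrivals of the two classes, class-1 jobs visiting nodes 1, 2, 3 in series and class-0 jobs needing node 1 only, with the holding cost counted as the total number in system. The comparison below is only between two static priority rules at node 1 (always class 0 first versus always class 1 first), not the state-dependent optimal policy characterized in the paper; the arrival probabilities are arbitrary.

```python
import numpy as np

def avg_number_in_system(prefer_class0, p0=0.3, p1=0.4, n_slots=200_000, seed=2):
    rng = np.random.default_rng(seed)
    q0, q = 0, [0, 0, 0]              # class-0 queue at node 1; class-1 queues at nodes 1..3
    total = 0
    for _ in range(n_slots):
        q0 += rng.random() < p0       # Bernoulli arrivals of class 0 and class 1
        q[0] += rng.random() < p1
        if q[2]:                      # node 3 finishes one job (unit service time)
            q[2] -= 1
        if q[1]:                      # node 2 finishes one job and feeds node 3
            q[1] -= 1
            q[2] += 1
        # node 1 serves one job of the class chosen by the priority rule
        if q0 and (prefer_class0 or q[0] == 0):
            q0 -= 1
        elif q[0]:
            q[0] -= 1
            q[1] += 1
        total += q0 + sum(q)          # holding cost: total number of jobs in the system
    return total / n_slots

for rule in (True, False):
    print("class 0 has priority:", rule,
          "-> average number in system ~", round(avg_number_in_system(rule), 2))
```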

20.
This paper deals with the optimal stopping problem under partial observation for piecewise-deterministic Markov processes. We first obtain a recursive formulation of the optimal filter process and derive the dynamic programming equation of the partially observed optimal stopping problem. Then, we propose a numerical method, based on the quantization of the discrete-time filter process and the inter-jump times, to approximate the value function and to compute an ε-optimal stopping time. We prove the convergence of the algorithms and bound the rates of convergence.

