Similar Articles (20 results)
1.
When the underlying asset price depends on activities of traders, hedging errors include costs due to the illiquidity of the underlying asset and the size of this cost can be substantial. Cetin et al. (2004), Liquidity risk and arbitrage pricing theory, Finance and Stochastics, 8(3), 311-341, proposed a hedging strategy that approximates the classical Black–Scholes hedging strategy and produces zero liquidity costs. Here, we compute the rate of convergence of the final value of this hedging portfolio to the option payoff in case of a European call option; i.e. we see how fast its hedging error converges to zero. The hedging strategy studied here is meaningful due to its simple liquidity cost structure and its smoothness relative to the classical Black–Scholes delta.
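The rate at which a discrete hedging error vanishes can be illustrated numerically. The sketch below is a generic delta hedge under zero interest rates and no liquidity costs — not the specific strategy of Cetin et al., and all parameter values are illustrative assumptions. It measures the standard deviation of the terminal hedging error as the number of rebalancing dates grows:

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_price(S, K, sigma, tau):
    # Black-Scholes call price with zero interest rate
    d1 = (math.log(S / K) + 0.5 * sigma ** 2 * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * norm_cdf(d2)

def bs_delta(S, K, sigma, tau):
    # Classical Black-Scholes delta; at expiry it degenerates to an indicator
    if tau <= 0:
        return 1.0 if S > K else 0.0
    d1 = (math.log(S / K) + 0.5 * sigma ** 2 * tau) / (sigma * math.sqrt(tau))
    return norm_cdf(d1)

def hedge_error_std(n_steps, n_paths=2000, S0=100.0, K=100.0,
                    sigma=0.2, T=1.0, seed=7):
    # Delta-hedge a call at n_steps equally spaced dates along simulated
    # zero-drift GBM paths; return the std of the terminal hedging error.
    rng = random.Random(seed)
    dt = T / n_steps
    errors = []
    for _ in range(n_paths):
        S = S0
        delta = bs_delta(S, K, sigma, T)
        cash = bs_call_price(S0, K, sigma, T) - delta * S
        for i in range(1, n_steps + 1):
            z = rng.gauss(0.0, 1.0)
            S *= math.exp(-0.5 * sigma ** 2 * dt + sigma * math.sqrt(dt) * z)
            new_delta = bs_delta(S, K, sigma, T - i * dt)
            cash -= (new_delta - delta) * S   # self-financing rebalancing
            delta = new_delta
        errors.append(cash + delta * S - max(S - K, 0.0))
    mean = sum(errors) / len(errors)
    return math.sqrt(sum((e - mean) ** 2 for e in errors) / len(errors))
```

Doubling the rebalancing frequency should shrink the error standard deviation by roughly a factor of √2, the familiar N^(-1/2) rate for the classical delta hedge.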

2.
Truncations of completely alternating sequences are entirely characterized. The completely hyperexpansive completion problem is solved for finite sequences of (positive) numbers in terms of positivity of attached matrices. Solutions to the problem are written explicitly for sequences of two, three, four, five and six numbers. As an application, an explicit solution of the subnormal completion problem for five numbers is given.

3.
In this paper we study risk and liquidity management decisions within an insurance firm. Risk management corresponds to decisions regarding proportional reinsurance, whereas liquidity management has two components: distribution of dividends and costly equity issuance. Contingent on whether proportional or fixed costs of reinvestment are considered, singular stochastic control or stochastic impulse control techniques are used to seek strategies that maximize the firm value. We find that, in a proportional-costs setting, the optimal strategies are always mixed in terms of risk management and refinancing. In contrast, when fixed issuance costs are too high relative to the firm’s profitability, optimal management does not involve refinancing. We provide analytical specifications of the optimal strategies, as well as a qualitative analysis of the interaction between refinancing and risk management.

4.
We apply four alternative decision criteria, two old and two new, to the question of the appropriate level of greenhouse gas emission reduction. In all cases, we consider a uniform carbon tax that is applied to all emissions from all sectors and all countries, and that increases over time with the discount rate. For a pure rate of time preference of one per cent and a rate of risk aversion of one, the tax that maximises expected net present welfare equals $120/tC in 2010. However, we also find evidence that the uncertainty about welfare may well have fat tails, so that the sample mean exists only by virtue of the finite number of runs in our Monte Carlo analysis. This is consistent with Weitzman’s Dismal Theorem. We therefore consider minimax regret as a decision criterion. As regret is defined on the positive real line, we in fact consider large percentiles instead of the ill-defined maximum. Depending on the percentile used, the recommended tax lies between $100 and $170/tC. Regret is a measure of the slope of the welfare function, while we are in fact concerned about the level of welfare. We therefore minimise the tail risk, defined as the expected welfare below a percentile of the probability density function without climate policy. Depending on the percentile used, the recommended tax lies between $20 and $330/tC. We also minimise the fatness of the tails, as measured by the p-value of the test of the null hypothesis that recursive mean welfare is non-stationary in the number of Monte Carlo runs. We cannot reject the null hypothesis of non-stationarity at the 5% level, but come closest for an initial tax of $50/tC. All four alternative decision criteria rapidly improve as modest taxes are introduced, but gradually deteriorate if the tax is too high. That implies that the appropriate tax is an interior solution.
In stark contrast to some of the interpretations of the Dismal Theorem, we find that fat tails by no means justify arbitrarily large carbon taxes.
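The percentile-based minimax-regret criterion can be sketched on a toy problem. The example below is not the integrated assessment model of the paper: it assumes a hypothetical quadratic welfare function with a fat-tailed (lognormal) uncertain optimum, and scans a tax grid for the value minimizing the 95th percentile of regret:

```python
import random

def choose_tax_minimax_regret(taxes, states, welfare, pct=0.95):
    # Regret of a tax in a given state: shortfall from the best welfare
    # achievable in that state.  Pick the tax minimizing a high percentile
    # of regret across sampled states (a large percentile replaces the
    # ill-defined maximum, as in the abstract).
    best_w = [max(welfare(t, th) for t in taxes) for th in states]
    choice, choice_q = None, float("inf")
    for t in taxes:
        regrets = sorted(bw - welfare(t, th) for bw, th in zip(best_w, states))
        q = regrets[int(pct * (len(regrets) - 1))]
        if q < choice_q:
            choice, choice_q = t, q
    return choice

rng = random.Random(3)
# Toy welfare: quadratic loss around an uncertain "true" optimal tax drawn
# from a fat-tailed lognormal distribution.  All numbers are illustrative.
states = [rng.lognormvariate(4.0, 0.8) for _ in range(400)]
taxes = [10.0 * k for k in range(31)]         # $0 ... $300/tC grid
welfare = lambda t, th: -(t - th) ** 2
tax = choose_tax_minimax_regret(taxes, states, welfare)
```

On such toy problems the chosen tax is an interior point of the grid, mirroring the abstract's conclusion that the criteria deteriorate at both very low and very high taxes.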

5.
Summary. This paper considers a finite dam in continuous time fed by inputs, with a negative exponential distribution, whose arrival times form a Poisson process; there is a continuous release at unit rate, and overflow is allowed. Various results have been obtained by appropriate limiting methods from an analogous discrete-time process, for which it is possible to find some solutions directly by determinantal methods. First, the stationary dam content distribution is found. The distribution of the probability of first emptiness is obtained both when overflow is and is not allowed. This is followed by the probability of overflow before emptiness, which is then applied to determine the exact solution for an insurance risk problem with claims having a negative exponential distribution. The time-dependent content distribution is found, and the analogy with queueing theory is discussed.
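The probability of overflow before emptiness lends itself to a simple Monte Carlo check. The sketch below simulates the dam between input epochs (deterministic unit-rate drainage punctuated by exponential jumps); the rates, capacity, and run counts are illustrative assumptions rather than quantities from the paper:

```python
import random

def p_overflow_before_empty(x, c, lam=1.0, mu=1.0, n_runs=4000, seed=11):
    # Content starts at x and drains at unit rate; inputs arrive in a
    # Poisson(lam) stream with Exp(mu)-distributed sizes.  Estimate the
    # probability that the level reaches c (overflow) before hitting 0.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        level = x
        while True:
            gap = rng.expovariate(lam)        # time until the next input
            if level <= gap:                  # the dam drains to empty first
                break
            level += rng.expovariate(mu) - gap
            if level >= c:                    # overflow occurs
                hits += 1
                break
    return hits / n_runs
```

The estimate increases with the starting level x; the exact determinantal solutions discussed in the paper could be compared against such simulated values.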

6.
7.
Many dynamical phenomena display a cyclic behavior, in the sense that time can be partitioned into units within which distributional aspects of a process are homogeneous. In this paper, we introduce a class of models – called conjugate processes – allowing the sequence of marginal distributions of a cyclic, continuous-time process to evolve stochastically in time. The connection between the two processes is given by a fundamental compatibility equation. Key results include Laws of Large Numbers in the presented framework. We provide a constructive example which illustrates the theory, and give a statistical implementation to risk forecasting in financial data.

8.

Droughts pose a significant challenge to farmers, insurers, and governments around the world, and the situation is expected to worsen in the future due to climate change. We present a large-scale drought risk assessment approach that can be used for current and future risk management purposes. Our suggested methodology combines large-scale agricultural computational modelling, extreme-value theory, and a copula approach to upscale local crop yield risks to the national scale. We show that combining regional probabilistic estimates will significantly underestimate losses if the dependencies between regions during drought events are not explicitly taken into account. Among the many uses of these results, we show how they enable the assessment of current and future costs of subsidized drought insurance in Austria.

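The effect of ignoring inter-regional dependence can be illustrated with a two-region toy example (hypothetical lognormal losses, not the Austrian crop data): compare the 99% quantile of aggregate losses under independence with the comonotonic (perfectly dependent) coupling, in which a single shock drives both regions at once:

```python
import math
import random

def quantile(xs, p):
    xs = sorted(xs)
    return xs[int(p * (len(xs) - 1))]

rng = random.Random(5)
n = 20000
# Toy lognormal losses for two regions; all parameters are illustrative.
z1 = [rng.gauss(0.0, 1.0) for _ in range(n)]
z2 = [rng.gauss(0.0, 1.0) for _ in range(n)]
# Independence: each region has its own shock.
indep = [math.exp(a) + math.exp(b) for a, b in zip(z1, z2)]
# Comonotonic coupling: one shock drives both regions, as when a severe
# drought hits the whole country simultaneously.
comon = [2.0 * math.exp(a) for a in z1]
q_indep = quantile(indep, 0.99)
q_comon = quantile(comon, 0.99)
```

A copula model interpolates between these two extremes; the gap between q_indep and q_comon indicates how large the underestimation from an independence assumption can be.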

9.

In this study, we consider two classes of multicriteria two-stage stochastic programs in finite probability spaces with multivariate risk constraints. The first-stage problem features multivariate stochastic benchmarking constraints based on a vector-valued random variable representing multiple and possibly conflicting stochastic performance measures associated with the second-stage decisions. In particular, the aim is to ensure that the decision-based random outcome vector of interest is preferable to a specified benchmark with respect to the multivariate polyhedral conditional value-at-risk or a multivariate stochastic order relation. In this case, the classical decomposition methods cannot be used directly due to the complicating multivariate stochastic benchmarking constraints. We propose an exact unified decomposition framework for solving these two classes of optimization problems and show its finite convergence. We apply the proposed approach to a stochastic network design problem in the context of pre-disaster humanitarian logistics and conduct a computational study concerning the threat of hurricanes in the Southeastern part of the United States. The numerical results provide practical insights about our modeling approach and show that the proposed algorithm is computationally scalable.


10.
Financial economics literature indicates that estimates for securities' systematic risk, i.e. the beta coefficients, are highly affected by infrequent trading. This is an especially serious problem in small security markets. In this study, the applicability of an error-correction model is investigated for modeling the risk behavior of thinly traded securities. The empirical results from a small stock market, i.e. the Helsinki Stock Exchange, indicate the estimated error-correction term to be highly dependent on the underlying trading frequency of the stock, while the direct effect is dependent merely on the market value of the firm. The model thus appears to produce useful information about the risk characteristics of thinly traded stocks.

11.
This work addresses, within a multi-objective framework, an economic-ecological problem related to the optimal management of a wastewater treatment system consisting of several purifying plants. The problem is formulated as a multi-objective parabolic optimal control problem and is studied from a cooperative point of view, looking for Pareto-optimal solutions. The weighting method is used here to characterize the Pareto solutions of our problem. To obtain them, a numerical algorithm, based on a characteristics-Galerkin discretization, is proposed, and numerical results for a real-world situation in the estuary of Vigo (NW Spain) are also presented.

12.
In this paper, we generalize earlier work dealing with maxima of discrete random variables. We show that row-wise stationary block maxima of a triangular array of integer valued random variables converge to a Gumbel extreme value distribution if row-wise variances grow sufficiently fast as the row-size increases. As a by-product, we derive analytical expressions of normalising constants for most classical unbounded discrete distributions. A brief simulation illustrates our theoretical result. Also, we highlight its usefulness in practice with a real risk assessment problem, namely the evaluation of extreme avalanche occurrence numbers in the French Alps.

13.
14.
Summary. A possible way of parametrizing the solution path of the nonlinear system $H(u)=0$, $H:\mathbb{R}^{n+1}\to\mathbb{R}^{n}$, consists of using the secant length as parameter. This idea leads to a quadratic constraint by which the parameter is introduced. A Newton-like method for computing the solution for a given parameter is proposed, where the nonlinear system is linearized at each iterate but the quadratic parametrizing equation is exactly satisfied. The local Q-quadratic convergence of the method is proved and some hints for implementing the algorithm are given. Dedicated to Professor Lothar Collatz on the occasion of his 75th birthday.
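A minimal sketch of the secant-length idea for the smallest case n = 1 (a path in the plane): the Newton-like corrector linearizes H (here by finite differences) while the quadratic secant constraint enters through its exact gradient, so the constraint is satisfied to within tolerance at convergence. The circle test function and all tolerances are illustrative assumptions, not the paper's algorithm verbatim:

```python
import math

def secant_newton_step(H, u_prev, u_guess, s, tol=1e-10, max_iter=50):
    # Solve H(u) = 0 subject to |u - u_prev|^2 = s^2 for u in R^2,
    # H: R^2 -> R.  Each iteration solves a 2x2 linear system coupling a
    # finite-difference linearization of H with the exact gradient of the
    # quadratic secant constraint.
    u = list(u_guess)
    for _ in range(max_iter):
        g0 = H(u)
        g1 = (u[0] - u_prev[0]) ** 2 + (u[1] - u_prev[1]) ** 2 - s * s
        if max(abs(g0), abs(g1)) < tol:
            break
        h = 1e-7
        a = (H([u[0] + h, u[1]]) - g0) / h     # dH/du0 (finite difference)
        b = (H([u[0], u[1] + h]) - g0) / h     # dH/du1
        c = 2.0 * (u[0] - u_prev[0])           # exact constraint gradient
        d = 2.0 * (u[1] - u_prev[1])
        det = a * d - b * c
        u[0] += (-g0 * d + g1 * b) / det       # 2x2 Newton update
        u[1] += (c * g0 - a * g1) / det
    return u

H = lambda u: u[0] ** 2 + u[1] ** 2 - 1.0      # solution path: the unit circle
u1 = secant_newton_step(H, [1.0, 0.0], [1.0, 0.2], 0.2)
```

Starting from the known point (1, 0) with secant length 0.2, the corrector returns a nearby point on the circle at that chord distance.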

15.
Annals of Operations Research - One of the shortcomings in the standard data envelopment analysis (DEA) self-evaluation models is the flexibility of choosing favorable DEA weights on inputs and...

16.
We develop methods for solving nonlinear stochastic dynamic difference games using orthogonal polynomial collocation techniques. The methods are applied to models of world commodity markets in which governments compete against each other using storage as a strategy variable. The rational expectations equilibrium outcomes under four different game structures are derived numerically and compared using stochastic simulation techniques.
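The approximation ingredient, orthogonal polynomial collocation, can be sketched independently of the game structure: match a smooth function at Chebyshev nodes and evaluate the interpolant via the barycentric formula. The target function exp is an arbitrary stand-in for an equilibrium policy function; node counts are illustrative:

```python
import math

def cheb_nodes(n):
    # Chebyshev points of the first kind on [-1, 1]
    return [math.cos(math.pi * (2 * k + 1) / (2 * n)) for k in range(n)]

def bary_weights(n):
    # Barycentric weights associated with first-kind Chebyshev points
    return [(-1) ** k * math.sin(math.pi * (2 * k + 1) / (2 * n))
            for k in range(n)]

def interpolate(xs, ys, ws, x):
    # Barycentric Lagrange interpolation
    num = den = 0.0
    for xk, yk, wk in zip(xs, ys, ws):
        if x == xk:
            return yk
        t = wk / (x - xk)
        num += t * yk
        den += t
    return num / den

n = 16
xs, ws = cheb_nodes(n), bary_weights(n)
ys = [math.exp(x) for x in xs]        # collocation: match the function at nodes
max_err = max(abs(interpolate(xs, ys, ws, -0.8 + 0.01 * j)
                  - math.exp(-0.8 + 0.01 * j))
              for j in range(161))
```

With 16 nodes the maximum error on [-0.8, 0.8] is near machine precision, which is why collocation at orthogonal-polynomial roots is attractive for approximating smooth policy functions.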

17.
18.
Given a finite set F of estimators, the problem of aggregation is to construct a new estimator whose risk is as close as possible to the risk of the best estimator in F. It was conjectured that empirical minimization performed in the convex hull of F is an optimal aggregation method, but we show that this conjecture is false. Despite that, we prove that empirical minimization in the convex hull of a well chosen, empirically determined subset of F is an optimal aggregation method.
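Empirical minimization in the convex hull can be sketched for a set F of two estimators, where the hull is a segment scanned on a grid. The estimators and data below are toy assumptions, chosen only to show the mechanics:

```python
def aggregate_two(f1, f2, xs, ys, grid=101):
    # Empirical risk (mean squared error) minimization over the convex hull
    # {a*f1 + (1-a)*f2 : 0 <= a <= 1}, scanned on a grid of mixing weights.
    best_a, best_risk = 0.0, float("inf")
    for i in range(grid):
        a = i / (grid - 1)
        risk = sum((a * f1(x) + (1.0 - a) * f2(x) - y) ** 2
                   for x, y in zip(xs, ys)) / len(xs)
        if risk < best_risk:
            best_a, best_risk = a, risk
    return best_a, best_risk

# Two crude estimators of y = x^2 on [0, 1]; data are noiseless for clarity.
xs = [i / 20 for i in range(21)]
ys = [x ** 2 for x in xs]
f1 = lambda x: x      # overestimates (x >= x^2 on [0, 1])
f2 = lambda x: 0.0    # underestimates everywhere
a_best, risk_best = aggregate_two(f1, f2, xs, ys)
```

Because the endpoints a = 0 and a = 1 lie in the grid, the aggregated empirical risk can never exceed that of the best single estimator on the sample — the baseline any aggregation procedure must beat.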

19.
In this paper, the exponential risk measure of Damant and Satchell is used to formulate an investor's utility function, and the properties of this function are investigated. The utility function is calibrated for a typical UK investor who would hold different proportions of equity. It is found that, for plausible parameter values, a typical UK investor will hold more equity under the assumption of non-normality of returns if his utility function has the above formulation rather than the standard mean-variance utility function. Furthermore, our utility function is consistent with positive skewness affection and kurtosis aversion. Some aggregate estimates of risk parameters are calculated for the typical UK investor. These do not seem well determined, raising issues of the roles of aggregation and wealth in this model.

20.
Individual risk models need to capture possible correlations as failing to do so typically results in an underestimation of extreme quantiles of the aggregate loss. Such dependence modelling is particularly important for managing credit risk, for instance, where joint defaults are a major cause of concern. Often, the dependence between the individual loss occurrence indicators is driven by a small number of unobservable factors. Conditional loss probabilities are then expressed as monotone functions of linear combinations of these hidden factors. However, combining the factors in a linear way allows for some compensation between them. Such diversification effects are not always desirable and this is why the present work proposes a new model replacing linear combinations with maxima. These max-factor models give more insight into which of the factors is dominant.
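A toy simulation contrasts the linear-factor and max-factor mechanisms (hypothetical logistic link and parameters, not calibrated to any data): with the maximum, no compensation between factors is possible, so a single stressed factor alone pushes conditional default probabilities up, fattening the aggregate loss tail:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

rng = random.Random(9)
n_sims, n_obligors = 20000, 50
losses_linear, losses_max = [], []
for _ in range(n_sims):
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)   # hidden factors
    # Conditional default probability of each obligor, driven either by a
    # linear combination of the factors or by their maximum.  The two
    # calibrations are illustrative and not matched in mean.
    p_lin = sigmoid(-3.0 + 1.5 * (0.5 * z1 + 0.5 * z2))
    p_max = sigmoid(-3.0 + 1.5 * max(z1, z2))
    losses_linear.append(sum(rng.random() < p_lin for _ in range(n_obligors)))
    losses_max.append(sum(rng.random() < p_max for _ in range(n_obligors)))
q_lin = sorted(losses_linear)[int(0.99 * (n_sims - 1))]
q_max = sorted(losses_max)[int(0.99 * (n_sims - 1))]
```

Here the 99% quantile of the portfolio loss is markedly larger under the max-factor specification; since the two calibrations are not mean-matched, this illustrates the mechanism rather than a like-for-like comparison.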
