Similar Documents (20 results)
1.
In this paper we consider the optimal insurance problem when the insurer has a loss limit constraint. Under the assumptions that the insurance price depends only on the policy's actuarial value and that the insured seeks to maximize the expected utility of his terminal wealth, we show that coverage above a deductible up to a cap is the optimal contract, and that relaxing the insurer's loss limit increases the insured's expected utility. When the insurance price is given by the expected value principle, we show that a positive loading factor is a necessary and sufficient condition for the deductible to be positive. Moreover, with the expected value principle, we show that the optimal deductible derived in our model is not greater (not less) than that derived in Arrow's model if the insured's preference displays increasing (decreasing) absolute risk aversion. Therefore, when the insured has an IARA (DARA) utility function, compared to Arrow's model, the insurance policy derived in our model provides more (less) coverage for small losses, and less coverage for large losses. Furthermore, we prove that the optimal insurance derived in our model is an inferior (normal) good for an insured with a DARA (IARA) utility function, consistent with findings in the previous literature. Being inferior, the insurance can also be a Giffen good. Under the assumption that the insured's initial wealth is greater than a certain level, we show that the insurance is not a Giffen good if the coefficient of the insured's relative risk aversion is lower than 1.
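As a rough illustration of this kind of model (not the paper's derivation), one can maximize expected utility over the deductible numerically for a deductible-with-cap contract priced by the expected value principle. The loss distribution, utility, and all parameters below are assumptions of the example:

    import numpy as np
    from scipy import optimize, stats

    w0, theta, cap = 10.0, 0.2, 4.0          # wealth, loading factor, insurer's loss limit
    u = lambda w: np.log(w)                  # illustrative DARA (log) utility
    xs = np.linspace(0, 8, 4001)             # discretized loss support
    px = stats.gamma(a=2.0, scale=1.0).pdf(xs)
    px /= px.sum()                           # normalize to a probability grid

    def neg_eu(d):
        I = np.clip(xs - d, 0.0, cap - d)    # coverage above deductible d, up to the cap
        P = (1 + theta) * (I * px).sum()     # expected-value-principle premium
        return -(u(w0 - P - xs + I) * px).sum()

    res = optimize.minimize_scalar(neg_eu, bounds=(0.0, cap), method='bounded')
    print(f"optimal deductible d* = {res.x:.3f}")

With a positive loading (theta = 0.2), the maximizer comes out strictly positive, in line with the paper's necessary-and-sufficient condition.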

2.
The cointegration of major financial markets around the globe is well evidenced, with strong empirical support. This paper considers the continuous-time mean–variance (MV) asset–liability management (ALM) problem for an insurer investing in an incomplete financial market with cointegrated assets. The number of traded assets is allowed to be less than the number of Brownian motions spanning the market. The insurer also faces the risk of paying uncertain insurance claims during the investment period. We assume that the cointegrated market follows the diffusion limit of the error-correction model for cointegrated time series. Using the Markowitz (1952) MV portfolio criterion, we consider the insurer's problem of minimizing the variance of terminal wealth, given an expected terminal wealth, subject to interim random liability payments following a compound Poisson process. We generalize the technique developed by Lim (2005) to tackle this problem. The particular structure of cointegration enables us to solve the ALM problem completely, in the sense that the continuous-time portfolio policy and the efficient frontier are obtained as explicit, closed-form formulas.
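To see what the diffusion limit of an error-correction model looks like in practice, the sketch below simulates two cointegrated log-prices by Euler–Maruyama; all parameter values (error-correction speeds, cointegrating vector, diffusion loadings) are assumptions of the illustration, not the paper's calibration:

    import numpy as np

    rng = np.random.default_rng(0)
    dt, T = 1 / 252, 10.0
    n = int(T / dt)
    alpha = np.array([-0.5, 0.4])        # error-correction speeds (assumed)
    beta = np.array([1.0, -1.0])         # cointegrating vector: spread = y1 - y2
    mu_spread = 0.1                      # long-run mean of the spread
    sigma = np.array([[0.20, 0.00],
                      [0.05, 0.15]])     # loadings on the two Brownian motions
    y = np.zeros((n, 2)); y[0] = [4.0, 3.95]
    for t in range(1, n):
        z = beta @ y[t - 1] - mu_spread  # deviation from the equilibrium relation
        y[t] = (y[t - 1] + alpha * z * dt
                + sigma @ rng.normal(size=2) * np.sqrt(dt))
    print("spread mean ~", (y @ beta).mean())

Because beta @ alpha < 0, the spread mean-reverts while each individual log-price remains nonstationary, which is the hallmark of cointegration.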

3.
In this paper we extend some results in Cramér [7] by considering the expected discounted penalty function as a generalization of the infinite-time ruin probability. We consider his ruin theory model, which allows the claim sizes to take positive as well as negative values. Depending on the sign of these amounts, they are interpreted either as claims made by insureds or as income from deceased annuitants, respectively. We then demonstrate that when the events' arrival process is a renewal process, the Gerber–Shiu function satisfies a defective renewal equation. Subsequently, we consider some special cases, such as when claims have an exponential distribution or when the arrival process is a compound Poisson process and annuity-related income has an Erlang(n, β) distribution. We are then able to specify the parameter and the functions involved in the above-mentioned defective renewal equation.
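For context, a defective renewal equation for a function m on [0, ∞) has the generic form (the specific constant φ and functions F and v are what such papers pin down in their special cases):

    m(u) \;=\; \phi \int_0^u m(u-x)\, \mathrm{d}F(x) \;+\; v(u), \qquad 0 < \phi < 1,

where F is a proper distribution function and v is a known function; "defective" refers to the factor φ being strictly less than 1, which guarantees a unique solution by standard renewal arguments.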

4.
It has been exactly 100 years since Hess's historic discovery of the extraterrestrial origin of cosmic rays [1]. Galactic cosmic rays (GCR), being charged particles, penetrate the heliosphere and are modulated by the solar magnetic field. The propagation of cosmic rays is described by Parker's transport equation [2], a second-order parabolic partial differential equation. It is a time-dependent equation in four variables (r, θ, φ, R, meaning distance from the Sun, heliolatitude, heliolongitude, and particle rigidity, respectively) and is a well-known tool for studying problems connected with the solar modulation of cosmic rays. The transport equation contains all the fundamental processes taking place in the heliosphere: convection, diffusion, energy changes of the GCR particles owing to interaction with solar-wind inhomogeneities, and drift due to the gradient and curvature of the regular interplanetary magnetic field and along the heliospheric current sheet.
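For reference, a commonly quoted form of Parker's equation for the omnidirectional distribution function f(r, θ, φ, R, t) is sketched below; notation varies across the literature and the exact form used in the paper may differ:

    \frac{\partial f}{\partial t}
    \;=\; \nabla\cdot\!\left(K^{(S)}\cdot\nabla f\right)
    \;-\; \left(\mathbf{V}_{sw} + \langle \mathbf{v}_d \rangle\right)\cdot\nabla f
    \;+\; \frac{R}{3}\left(\nabla\cdot\mathbf{V}_{sw}\right)\frac{\partial f}{\partial R},

where K^{(S)} is the symmetric part of the diffusion tensor, V_sw is the solar wind velocity, ⟨v_d⟩ is the averaged drift velocity, and the last term describes adiabatic energy changes in the expanding solar wind.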

5.
6.
Functions operating on multivariate distribution and survival functions are characterized, based on a theorem of Morillas, for which a new proof is presented. These results are applied to determine those classical mean values on [0,1]^n which are distribution functions of probability measures on [0,1]^n. As it turns out, the arithmetic mean plays a universal role in the characterization of distribution as well as survival functions. Another consequence is a far-reaching generalization of Kimberling's theorem, tightly connected to Archimedean copulas.

7.
In the present analysis, the motion of an immersed plate in a Newtonian fluid, described by Torvik and Bagley's fractional differential equation [1], is considered. This Bagley–Torvik equation is solved by the operational matrix of the Haar wavelet method. The obtained result is compared with the analytical solution suggested by Podlubny [2]. The Haar wavelet method is used because its computation is simple, as it converts the problem into an algebraic matrix equation.
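As an independent cross-check on such solvers (an implicit Grünwald–Letnikov finite-difference scheme, deliberately a different method from the paper's Haar-wavelet approach), the sketch below integrates A y'' + B D^{3/2} y + C y = f(t). The test problem with f(t) = 1 + t and y(0) = y'(0) = 1 has the known exact solution y(t) = 1 + t, a classical example associated with Podlubny's book:

    import numpy as np

    # Bagley-Torvik: A*y'' + B*D^{3/2} y + C*y = f(t), y(0) = 1, y'(0) = 1.
    # Substitute z(t) = y(t) - 1 - t, so z(0) = z'(0) = 0 (Caputo = Riemann-Liouville)
    # and, in the Caputo sense, D^{3/2}(1 + t) = 0; the source becomes f(t) - C*(1 + t).
    A, B, C = 1.0, 1.0, 1.0
    f = lambda t: 1.0 + t
    T, h = 1.0, 1e-3
    n = int(T / h)
    t = np.linspace(0.0, T, n + 1)

    # Grunwald-Letnikov weights w_k = (-1)^k * binom(alpha, k), alpha = 3/2
    alpha = 1.5
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)

    z = np.zeros(n + 1)                      # z[0] = z[1] = 0 fits the zero initial data
    lhs = A / h**2 + B / h**alpha + C        # coefficient of z_j in the implicit step
    for j in range(2, n + 1):
        frac_hist = np.dot(w[1:j + 1], z[j - 1::-1])   # sum_{k>=1} w_k * z_{j-k}
        rhs = (f(t[j]) - C * (1.0 + t[j])
               - A * (-2.0 * z[j - 1] + z[j - 2]) / h**2
               - B * frac_hist / h**alpha)
        z[j] = rhs / lhs

    y = z + 1.0 + t
    print("max error vs exact 1+t:", np.abs(y - (1.0 + t)).max())

For this test problem the exact solution is linear, so the scheme reproduces it essentially exactly; on nontrivial right-hand sides the scheme is first-order accurate.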

8.
We consider the problem of adverse selection and optimal derivative design within a principal–agent framework. The principal's income is exposed to non-hedgeable risk factors arising, for instance, from weather or climate phenomena. She evaluates her risk using a coherent, law-invariant risk measure and tries to minimize her exposure by selling derivative securities on her income to individual agents. The agents have mean–variance preferences with heterogeneous risk-aversion coefficients. An agent's degree of risk aversion is private information, hidden from the principal, who knows only the overall distribution. We show that the principal's risk-minimization problem has a solution and illustrate the effects of risk transfer on her income by means of two specific examples. Our model extends earlier work of Barrieu and El Karoui (Financ Stochast 9, 269–298, 2005) and Carlier et al. (Math Financ Econ 1, 57–80, 2007). We thank Guillaume Carlier, Pierre-André Chiappori, Ivar Ekeland, Andreas Putz and seminar participants at various institutions for valuable comments and suggestions. Financial support through an NSERC individual discovery grant is gratefully acknowledged.

9.
Difficulties with the interpretation of the parameters of the beta distribution led Malcolm et al. (1959) to suggest, in the Program Evaluation and Review Technique (PERT), their now-classical expressions for the mean and variance of activity completion time for practical applications. In this note, we provide an alternative to the PERT variance expression, addressing a concern raised by Hahn (2008) regarding the constant PERT variance assumption given the range of an activity's duration, while retaining the original PERT mean expression. Moreover, our approach ensures that an activity's elicited most-likely value aligns with the beta distribution's mode. While this was the original intent of Malcolm et al. (1959), their method of selecting beta parameters via the PERT mean and variance is not consistent in this manner.
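For reference, the classical PERT expressions referred to here take an optimistic estimate a, a most likely value m, and a pessimistic estimate b, and set mean = (a + 4m + b)/6 and variance = ((b − a)/6)^2. The variance depends only on the range b − a, which is exactly the constancy Hahn (2008) questioned. A minimal sketch:

    def pert_mean_variance(a, m, b):
        """Classical PERT estimates (Malcolm et al., 1959) for an activity
        with optimistic a, most likely m, and pessimistic b duration."""
        mean = (a + 4 * m + b) / 6.0
        var = ((b - a) / 6.0) ** 2      # depends on the range only, not on m
        return mean, var

    # Two activities with the same range but different modes get the same variance:
    print(pert_mean_variance(2.0, 3.0, 8.0))   # (3.666..., 1.0)
    print(pert_mean_variance(2.0, 7.0, 8.0))   # (6.333..., 1.0)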

10.
Schröder's methods of the first and second kind for solving a nonlinear equation f(x) = 0, originally derived in 1870, are of great importance in the theory and practice of iteration processes. They were rediscovered several times and expressed in different forms during the last 130 years. It was proved in the paper of Petković and Herceg (1999) [7] that as many as seven families of iteration methods for solving nonlinear equations are mutually equivalent. In this paper we show that these families are also equivalent to another four families of iteration methods, and we find that all of them originate in Schröder's generalized method (of the second kind) presented in 1870. We then consider Smale's open problem from 1994 about a possible link between Schröder's methods of the first and second kind, and state the link in a simple way.
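For orientation, the lowest-order members of the two families are classical: Newton's method is the common second-order member, while the third-order members are usually identified with Chebyshev's method (first kind) and Halley's method (second kind). A short sketch under that standard identification (notation varies across the literature):

    def newton(f, df, x):                 # order-2 member of both families
        return x - f(x) / df(x)

    def chebyshev(f, df, ddf, x):         # Schroeder's first kind, order 3
        u = f(x) / df(x)
        return x - u * (1.0 + u * ddf(x) / (2.0 * df(x)))

    def halley(f, df, ddf, x):            # Schroeder's second kind, order 3
        u = f(x) / df(x)
        return x - u / (1.0 - u * ddf(x) / (2.0 * df(x)))

    # Example: sqrt(2) as the root of f(x) = x^2 - 2
    f, df, ddf = lambda x: x * x - 2.0, lambda x: 2.0 * x, lambda x: 2.0
    x = 1.5
    for _ in range(4):
        x = halley(f, df, ddf, x)
    print(x)   # ~1.414213562...

Note how the two order-3 steps differ only in where the correction term u·f''/(2f') enters: as a multiplicative factor (first kind) or in the denominator (second kind).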

11.
12.
The purpose of the present work is to introduce a framework which enables us to study nonlinear homogenization problems. The starting point is the theory of algebras with mean value. Very often in physics, from very simple experimental data one obtains complicated structural phenomena. These phenomena are represented by functions which are permanent in mean but complicated in detail. In addition, the functions must satisfy a functional equation which is in general nonlinear. The problem is therefore to give an interpretation of these phenomena using functions with the following qualitative properties: they represent a phenomenon on a large scale while varying irregularly, undergoing nonperiodic oscillations on a fine scale. In this work we study the qualitative properties of spaces of such functions, which we call generalized Besicovitch spaces, and we prove general compactness results related to these spaces. We then apply these results to study some new homogenization problems. One important achievement of this work is the resolution of the generalized weakly almost periodic homogenization problem for a nonlinear pseudo-monotone parabolic-type operator. We also answer the question raised by Frid and Silva [35] [H. Frid, J. Silva, Homogenization of nonlinear pde's in the Fourier–Stieltjes algebras, SIAM J. Math. Anal. 41 (4) (2009) 1589-1620] as to whether there exist ergodic algebras that are not subalgebras of the Fourier–Stieltjes algebra.

13.
This paper extends Eeckhoudt et al.'s (2012) results on precautionary effort to a bivariate utility framework. We establish an equivalence between the agent's precautionary-effort motive and the signs of successive cross-derivatives of the bivariate utility function. We show that the introduction (or deterioration) of an independent background risk induces more prevention to protect against wealth loss, provided the individual exhibits correlation aversion of some given order. Conditions on the individual's risk preferences are given that generate specific prevention behaviors in the univariate framework with multiplicative risks. Our conclusions also indicate that an increase in the correlation between wealth risk and background risk leads to a reduction in optimal prevention.

14.
Every semisimple Lie algebra defines a root system on the dual space of a Cartan subalgebra and a Cartan matrix, which expresses the dual of the Killing form on a root base. Serre's Theorem [J.-P. Serre, Complex Semisimple Lie Algebras (G.A. Jones, Trans.), Springer-Verlag, New York, 1987] then gives a presentation of the given Lie algebra in generators and relations in terms of the Cartan matrix. In this work, we generalize Serre's Theorem to give an explicit presentation in generators and relations for any simply laced semisimple Lie algebra in terms of a positive quasi-Cartan matrix. Such a quasi-Cartan matrix expresses the dual of the Killing form for a Z-base of roots. Here, by a Z-base of roots we mean a set of linearly independent roots which generate all roots as linear combinations with integral coefficients.
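For concreteness, the simplest simply laced example is type A_2 (the algebra sl_3), whose Cartan matrix with respect to a base of simple roots {α_1, α_2} is

    A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix},

with entries 2(α_i, α_j)/(α_i, α_i) (conventions vary). Roughly speaking, a quasi-Cartan matrix keeps the 2's on the diagonal and symmetrizability but relaxes the requirement that off-diagonal entries be nonpositive; this is a gloss for orientation, not the paper's precise definition.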

15.
In a financial market composed of n risky assets and a riskless asset, where short sales are allowed and mean–variance investors may be ambiguity-averse (i.e., diffident about mean-return estimates, with confidence represented by ellipsoidal uncertainty sets), we derive a closed-form portfolio rule based on a worst-case max–min criterion. Then, in a market where all investors are ambiguity-averse mean–variance investors with access to given mean-return and variance–covariance estimates, we investigate conditions for the existence of an equilibrium price system and give an explicit formula for the equilibrium prices. In addition to the usual equilibrium properties, which continue to hold in our case, we show that the diffidence of investors in a homogeneously diffident (with bounded diffidence) mean–variance investors' market has a deflationary effect on equilibrium prices relative to a pure mean–variance investors' market in equilibrium. Deflationary pressure on prices may also occur if one of the investors in an ambiguity-neutral market, holding no initial short position, decides to adopt an ambiguity-averse attitude. We also establish a CAPM-like property that reduces to the classical CAPM when all investors are ambiguity-neutral.
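To illustrate the flavor of such worst-case rules (a sketch of one standard robust mean–variance formulation, not necessarily the paper's exact model): with estimate μ̂, covariance Σ, risk aversion γ, and the ellipsoidal set {μ : ||Σ^{-1/2}(μ − μ̂)|| ≤ ε}, the inner minimization gives the concave objective w'μ̂ − ε·sqrt(w'Σw) − (γ/2)·w'Σw, whose maximizer is the shrunk Markowitz portfolio w* = (1/γ)·max(0, 1 − ε/||Σ^{-1/2}μ̂||)·Σ^{-1}μ̂. The snippet checks this numerically; all inputs are assumed:

    import numpy as np
    from scipy.optimize import minimize

    mu_hat = np.array([0.08, 0.05, 0.06])          # assumed mean-return estimates
    Sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.03, 0.01],
                      [0.00, 0.01, 0.05]])
    gamma, eps = 3.0, 0.5                          # risk aversion, ambiguity radius

    Si = np.linalg.inv(Sigma)
    s = np.sqrt(mu_hat @ Si @ mu_hat)              # ||Sigma^{-1/2} mu_hat||
    w_closed = max(0.0, 1.0 - eps / s) / gamma * (Si @ mu_hat)

    def neg_worst_case(w):                         # worst-case mean-variance objective
        return -(w @ mu_hat - eps * np.sqrt(w @ Sigma @ w)
                 - 0.5 * gamma * w @ Sigma @ w)

    w_num = minimize(neg_worst_case, 0.1 * np.ones(3)).x
    print(w_closed, w_num)                         # the two should agree closely

The ambiguity radius ε only scales the Markowitz weights down (never redirects them), and the portfolio collapses to zero risky holdings once ε exceeds ||Σ^{-1/2}μ̂||.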

16.
This paper presents a new way of measuring residual income, originally introduced by Magni (2000a,b,c, 2001a,b, 2003). In contrast to standard residual income, the capital charge is equal to the capital lost by investors multiplied by the cost of capital. The lost capital may be viewed as (a) the foregone capital, (b) the capital implicitly infused into the business, (c) the outstanding capital of a shadow project, or (d) the claimholders' credit. Relations of the lost capital with book values and market values are studied, as well as relations of the lost-capital residual income with the classical standard paradigm; many appealing properties are derived, among them an aggregation property. Different concepts and results, provided by different authors in fields as diverse as economic theory, management accounting and corporate finance, are considered: O'Hanlon and Peasnell's (2002) unrecovered capital and Excess Value Created; Ohlson's (2005) Abnormal Earnings Growth; O'Byrne's (1997) EVA improvement; Miller and Modigliani's (1961) investment-opportunities approach to valuation; Young and O'Byrne's (2001) Adjusted EVA; Keynes's (1936) user cost; Drukarczyk and Schueler's (2000) Net Economic Income; Fernández's (2002) Created Shareholder Value; and Anthony's (1975) profit. They are all conveniently reinterpreted within the theoretical domain of the lost-capital paradigm and conjoined in a unified view. These results make the new theoretical approach a good candidate for firm valuation, capital-budgeting decision making, managerial incentives, and control.

17.
In this paper, we consider Newton's method and Bernoulli's method for a quadratic matrix equation arising from an overdamped vibrating system. By introducing an M-matrix into this equation, we provide a sufficient condition for the existence of the primary solvent. Moreover, we show that Newton's method and Bernoulli's method with a zero initial matrix converge to the primary solvent under the proposed sufficient condition.
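A minimal sketch of Newton's method for a quadratic matrix equation Q(X) = AX² + BX + C = 0: each step solves the linearized (generalized Sylvester) equation A H X + (AX + B) H = −Q(X), here via Kronecker vectorization. The example matrices are assumptions of the illustration (C is constructed from a known solvent), and no M-matrix condition is checked:

    import numpy as np

    def Q(A, B, C, X):
        return A @ X @ X + B @ X + C

    def newton_qme(A, B, C, X0, tol=1e-12, maxit=50):
        n = A.shape[0]
        X = X0.copy()
        for _ in range(maxit):
            R = Q(A, B, C, X)
            if np.linalg.norm(R) < tol:
                break
            # Frechet derivative: L(H) = A H X + (A X + B) H.
            # Column-stacking vec: vec(A H X) = (X^T kron A) vec(H),
            #                      vec(M H)   = (I kron M) vec(H).
            M = np.kron(X.T, A) + np.kron(np.eye(n), A @ X + B)
            h = np.linalg.solve(M, -R.flatten(order='F'))
            X = X + h.reshape(n, n, order='F')
        return X

    A = np.eye(2)
    B = np.array([[20.0, 5.0], [5.0, 15.0]])        # heavy (overdamped-style) damping
    S = np.array([[-1.0, 0.0], [0.0, -2.0]])        # chosen solvent
    C = -(A @ S @ S + B @ S)                        # so that Q(S) = 0 by construction
    X = newton_qme(A, B, C, np.zeros((2, 2)))
    print(np.linalg.norm(Q(A, B, C, X)))            # residual ~ 0

Starting from the zero matrix mirrors the iteration the abstract analyzes; for large n the Kronecker solve (O(n^6)) would be replaced by a structured Sylvester-type solver.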

18.
We consider a model of a fishery in which the dynamics of the unharvested fish population are given by a stochastic logistic growth equation. As in the classical deterministic analogue, we assume that the fishery harvests the fish population following a constant-effort strategy. In the first step, we derive the effort level that leads to the maximum expected sustainable yield, understood as the expectation of the equilibrium distribution of the stochastic dynamics; this replaces the nonzero fixed point of the classical deterministic setup. In the second step, we assume that the fishery is risk-averse and that there is a tradeoff between expected sustainable yield and uncertainty, measured in terms of the variance of the equilibrium distribution. We derive the optimal constant-effort harvesting strategy for this problem. In the final step, we consider an approach that we call mean–variance analysis of sustainable fisheries. As in the now-classical mean–variance analysis in finance, going back to Markowitz [1952], we study the problem of maximizing expected sustainable yield under variance constraints and, with this, minimizing the variance (i.e., risk) under a guaranteed minimum expected sustainable yield. We derive explicit formulas for the optimal fishing effort in all four problems considered and study the effects of uncertainty, risk aversion, and mean-reversion speed on fishing effort.
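The display equation was lost in extraction; a common parameterization of stochastic logistic growth under constant-effort harvesting (assumed here, possibly differing from the paper's) is dX_t = X_t(r(1 − X_t/K) − qE) dt + σ X_t dW_t. The Euler–Maruyama sketch below estimates the mean and variance of the equilibrium yield qEX over a grid of effort levels:

    import numpy as np

    rng = np.random.default_rng(1)
    r, K, q, sigma = 1.0, 1.0, 1.0, 0.3          # assumed model parameters
    dt, burn, n = 0.01, 20_000, 200_000          # step, burn-in, retained samples

    def equilibrium_yield(E):
        x, ys = 0.5, np.empty(n)
        for i in range(burn + n):
            dw = rng.normal(0.0, np.sqrt(dt))
            x += x * (r * (1 - x / K) - q * E) * dt + sigma * x * dw
            x = max(x, 1e-12)                    # keep the state positive
            if i >= burn:
                ys[i - burn] = q * E * x         # instantaneous harvest rate
        return ys.mean(), ys.var()

    for E in (0.2, 0.4, 0.6):
        m, v = equilibrium_yield(E)
        print(f"E={E:.1f}: mean yield ~ {m:.3f}, variance ~ {v:.3f}")

Scanning E traces out the yield–variance tradeoff that the paper's mean–variance analysis treats in closed form.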

19.
Cohen and Sackrowitz [Characterization of Bayes procedures for multiple endpoint problems and inadmissibility of the step-up procedure, Ann. Statist. 33 (2005) 145-158] proved that the step-up multiple testing procedure is inadmissible for a multivariate normal model with unknown mean vector and known intraclass covariance matrix. The hypotheses tested are that each mean is zero versus each mean is positive. The risk function is a 2×1 vector in which one component is the average size and the other is one minus the average power. In this paper, we extend the inadmissibility result to several different models, to two-sided alternatives, and to other risk functions. The models include one-parameter exponential families, independent t-variables, independent χ²-variables, t-tests arising from the analysis of variance, and t-tests arising from testing treatments against a control. The additional risk functions are linear combinations in which one component is the false discovery rate (FDR).

20.
In the 1980s, Motorola, Inc. introduced its Six Sigma quality program to the world. Some quality practitioners questioned why Six Sigma advocates claim it is necessary to add a 1.5σ shift to the process mean when estimating process capability. Bothe [Bothe, D.R., 2002. Statistical reason for the 1.5σ shift. Quality Engineering 14 (3), 479–487] provided a statistical reason for considering such a shift in the process mean for normal processes. In this paper, we consider gamma processes, which cover a wide class of applications. For a fixed sample size n, the detection power of the control chart can be computed. Small shifts in the process mean lie beyond the control chart's detection power, which results in overestimating process capability. To resolve the problem, we first examine Bothe's approach and find that the detection power is less than 0.5 when data come from a gamma distribution, showing that Bothe's adjustments are inadequate for gamma processes. We then calculate adjustments under various sample sizes n and gamma parameters N, with the power fixed at 0.5. Finally, we adjust the formula for process capability to accommodate those shifts which cannot be detected. Consequently, our adjustments provide a much more accurate capability calculation for gamma processes. For illustration purposes, an application example is presented.
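A rough sketch of the detection-power comparison (the chart construction and the way the mean shift enters the gamma model are assumptions of this illustration, not necessarily the paper's): for a normal process with 3σ limits and subgroups of size n, the power to detect a k·σ mean shift is Φ(−3 − k√n) + 1 − Φ(3 − k√n), which is about 0.5 for k = 1.5 and n = 4 (Bothe's benchmark); redoing the computation with probability limits for a gamma process gives noticeably lower power:

    import numpy as np
    from scipy import stats

    def power_normal(k, n):
        # P(xbar chart signals) after a k-sigma mean shift, 3-sigma limits
        return stats.norm.sf(3 - k * np.sqrt(n)) + stats.norm.cdf(-3 - k * np.sqrt(n))

    def power_gamma(k, n, N, theta=1.0):
        # X ~ Gamma(shape=N, scale=theta); the subgroup mean of n observations
        # is Gamma(n*N, theta/n). Use 0.135%/99.865% probability control limits.
        xbar0 = stats.gamma(a=n * N, scale=theta / n)
        lcl, ucl = xbar0.ppf(0.00135), xbar0.ppf(0.99865)
        shift = k * np.sqrt(N) * theta       # k process standard deviations
        xbar1 = stats.gamma(a=n * N, scale=theta / n, loc=shift)  # location-shifted
        return xbar1.sf(ucl) + xbar1.cdf(lcl)

    print(power_normal(1.5, 4))              # ~0.50
    print(power_gamma(1.5, 4, N=1.0))        # well below 0.5 for a skewed gamma

The long right tail of the gamma distribution pushes the upper probability limit far out, so the same 1.5σ shift is flagged much less often than under normality, consistent with the abstract's finding that power falls below 0.5.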
