Similar Articles
20 similar articles found.
1.
Reject inference is a method for inferring how a rejected credit applicant would have behaved had credit been granted. Credit-quality data on rejected applicants are usually missing not at random (MNAR). To infer credit quality under MNAR, we propose a flexible method that generates the probability of missingness within a model-based bound-and-collapse Bayesian technique. We tested the method's performance against traditional reject-inference methods using real data. The results show that our method improves the classification power of credit scoring models under MNAR conditions.

2.
The idea of efficient hedging was introduced by Föllmer and Leukert. They defined the shortfall risk as the expectation of the shortfall weighted by a loss function, and looked for strategies that minimize the shortfall risk under a capital constraint. In this paper, to measure the shortfall risk, we use the coherent risk measures introduced by Artzner, Delbaen, Eber and Heath. We show that, for a given contingent claim H, the optimal strategy consists in hedging a modified claim φH for some randomized test φ. This is an analogue of the results of Föllmer and Leukert.

3.
Technology evaluation has become a critical part of technology investment, and accurate evaluation can channel more funds to companies with innovative technology. However, existing processes have a weakness: they consider only accepted applicants at the application stage. We analyse the effectiveness of a technology evaluation model that encompasses both accepted and rejected applicants and compare its performance with the original accept-only model. We also analyse a reject-inference technique, the bivariate probit model, to see whether it offers an improvement over the accept-only model. The results show that the accept-only model suffers from sample selection bias and that the reject-inference technique improves on it. However, the reject-inference technique does not completely resolve the problem of sample selection bias.

4.
Stein unbiased risk estimation is generalized twice: from the Gaussian shift model to nonparametric families of smooth densities, and from the quadratic risk to more general divergence-type distances. The development relies on a connection with local proper scoring rules.
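The classical starting point of this generalization can be made concrete: in the Gaussian shift model y ~ N(μ, Iₙ), SURE for the soft-thresholding estimator is an unbiased estimate of its quadratic risk. A minimal sketch (the choice of estimator, threshold and sparse mean are ours, for illustration only, not from the paper):

```python
import numpy as np

def soft_threshold(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft_threshold(y, t):
    """Stein's unbiased risk estimate for soft-thresholding y ~ N(mu, I_n):
    SURE(t; y) = n + sum_i min(y_i^2, t^2) - 2 * #{i : |y_i| <= t},
    an unbiased estimate of E ||mu_hat(y) - mu||^2."""
    n = len(y)
    return n + np.sum(np.minimum(y**2, t**2)) - 2 * np.sum(np.abs(y) <= t)

rng = np.random.default_rng(0)
mu = np.concatenate([np.full(20, 3.0), np.zeros(480)])  # sparse mean vector
t, reps = 1.5, 2000
sure_vals, losses = [], []
for _ in range(reps):
    y = mu + rng.standard_normal(mu.size)
    sure_vals.append(sure_soft_threshold(y, t))
    losses.append(np.sum((soft_threshold(y, t) - mu) ** 2))
print(np.mean(sure_vals), np.mean(losses))  # the two averages should agree
```

Averaging over replications, the SURE values and the actual squared-error losses estimate the same risk, which is what "unbiased risk estimation" means here.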

5.
Point estimators for the parameters of the component lifetime distribution in coherent systems are developed under the assumption of independently and identically Weibull distributed component lifetimes. We study both complete and incomplete information under continuous monitoring of the essential component lifetimes. First, we prove that the maximum likelihood estimator (MLE) under complete information based on progressively Type-II censored system lifetimes exists uniquely, and we present two approaches to compute the estimates. Furthermore, we consider an ad hoc estimator, a max-probability plan estimator and the MLE for the parameters under incomplete information. To compute the MLEs, we consider a direct maximization of the likelihood and an EM-algorithm-type approach, respectively. In all cases, we illustrate the results by simulations of the five-component bridge system and the 10-component parallel system.

6.
Many researchers see the need for reject inference in credit scoring models as arising from a sample selection problem, whereby a missing variable results in omitted variable bias. Alternatively, practitioners often see the problem as one of missing data, where the new model is biased because the behaviour of the omitted cases differs from that of the cases that make up the sample for the new model. To correct for this, differential weights are applied to the new cases. The aim of this paper is to see whether using both a Heckman-style sample selection model and sampling weights together improves predictive performance compared with either technique used alone. This paper uses a sample of applicants in which virtually every applicant was accepted, which allows us to compare the actual performance of each model with the performance of models based only on accepted cases.
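A Heckman-style two-step correction of the kind discussed can be sketched on synthetic data (variable names, coefficients and sample design are invented for illustration, not taken from the paper): a probit acceptance model yields an inverse Mills ratio, which is added as a regressor when the outcome model is fitted on accepted cases only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 20000
x = rng.standard_normal(n)                       # predictor used in both equations
z = rng.standard_normal(n)                       # affects acceptance only (exclusion restriction)
u = rng.standard_normal(n)
e = 0.8 * u + np.sqrt(1 - 0.8**2) * rng.standard_normal(n)  # corr(u, e) = 0.8
s = (0.5 + x + z + u > 0)                        # acceptance (selection) indicator
y = 1.0 + 2.0 * x + e                            # outcome, observed only if accepted

# Step 1: probit of acceptance on [1, x, z] by maximum likelihood
W = np.column_stack([np.ones(n), x, z])
def negll(b):
    p = np.clip(norm.cdf(W @ b), 1e-10, 1 - 1e-10)
    return -(s * np.log(p) + (~s) * np.log(1 - p)).sum()
bhat = minimize(negll, np.zeros(3), method="BFGS").x
imr = norm.pdf(W @ bhat) / norm.cdf(W @ bhat)    # inverse Mills ratio

# Step 2: OLS on accepted cases, with and without the correction term
Xs = np.column_stack([np.ones(s.sum()), x[s]])
b_naive = np.linalg.lstsq(Xs, y[s], rcond=None)[0]
b_heck = np.linalg.lstsq(np.column_stack([Xs, imr[s]]), y[s], rcond=None)[0]
print(b_naive[1], b_heck[1])  # slope estimates; the true slope is 2
```

The naive accept-only regression is biased because the error and the acceptance decision are correlated; the correction term absorbs that correlation.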

7.
Various concepts have appeared in the literature to evaluate the risk exposure of a financial or insurance firm, subsidiary or line of business due to the occurrence of extreme scenarios. Many of these concepts, such as Marginal Expected Shortfall or Tail Conditional Expectation, are simply conditional expectations that evaluate the risk in adverse scenarios; they are useful for signalling to a decision-maker the poor performance of its risk portfolio or for identifying which sub-portfolio is likely to exhibit massive downside risk. We investigate the latter risk under the assumption that it is measured via a coherent risk measure, which generalizes taking only the expectation of the downside risk. Multiple examples are given, and our numerical illustrations show how the asymptotic approximations can be used in the capital allocation exercise. We conclude that the expectation of the downside risk does not fairly account for the individual risk contributions when allocating the VaR-based regulatory capital, and thus more conservative risk measurements are recommended. Finally, we find that more conservative risk measurements do not improve the fairness of the cost-of-capital allocation when uncertainty in the parameter estimates is present, even at a very high level.
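As a concrete instance of such a conditional-expectation risk decomposition, the Tail Conditional Expectation of an aggregate loss splits exactly into per-sub-portfolio contributions E[Xᵢ | S > VaRα(S)]. A simulated sketch (the three-line portfolio below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, alpha = 100000, 3, 0.95
# hypothetical dependent losses: Gaussian copula, lognormal margins
cov = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.4], [0.2, 0.4, 1.0]])
Z = rng.multivariate_normal(np.zeros(d), cov, size=n)
X = np.exp(0.5 * Z)                    # losses of the three sub-portfolios
S = X.sum(axis=1)                      # aggregate loss
var = np.quantile(S, alpha)            # empirical VaR_alpha of the aggregate
tail = S > var
contrib = X[tail].mean(axis=0)         # contributions E[X_i | S > VaR_alpha(S)]
cte = S[tail].mean()                   # Tail Conditional Expectation of S
print(contrib, contrib.sum(), cte)     # contributions sum to the TCE exactly
```

The additivity holds by construction (summing conditional means over i gives the conditional mean of S), which is precisely why such conditional expectations are popular for capital allocation.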

8.
One of the basic problems of applied finance is the optimal selection of stocks, with the aim of maximizing future returns while constraining risk by an appropriate measure. Here, the problem is formulated as finding the portfolio that maximizes the expected return, with risk constrained by the worst conditional expectation. This model is a straightforward extension of the classic Markowitz mean–variance approach, where the original risk measure, variance, is replaced by the worst conditional expectation. The worst conditional expectation with threshold α of a risk X, in brief WCEα(X), belongs to the class of coherent risk measures. These measures satisfy a set of properties, such as subadditivity and monotonicity, introduced to prevent some of the drawbacks that affect other common measures. This paper shows that the optimal portfolio selection problem can be formulated as a linear programming instance, but with an exponential number of constraints. It can be solved efficiently by an appropriate constraint-generation subroutine, so that only a small number of inequalities are actually needed. The method is applied to the optimal selection of stocks in the Italian financial market, and the computational results suggest that the optimal portfolios are better than the market index.
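For finitely many equally likely scenarios, the closely related coherent measure Conditional Value-at-Risk admits a compact linear program (the Rockafellar–Uryasev formulation) with only linearly many constraints; the paper's WCE formulation instead requires constraint generation over exponentially many scenario subsets. A sketch of the CVaR-constrained variant on simulated return scenarios (all data and parameters below are invented):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, d = 500, 4                              # scenarios x assets, equally likely
mu_true = np.array([0.02, 0.05, 0.08, 0.11])
R = mu_true + rng.standard_normal((n, d)) * np.array([0.02, 0.06, 0.12, 0.20])
mu = R.mean(axis=0)
alpha, cvar_limit = 0.95, 0.05

# variables: [w_1..w_d, eta, u_1..u_n]; maximize mu'w  <=>  minimize -mu'w
c = np.concatenate([-mu, [0.0], np.zeros(n)])
# u_j >= loss_j - eta with loss_j = -R[j] @ w   ->   -R[j]@w - eta - u_j <= 0
A1 = np.hstack([-R, -np.ones((n, 1)), -np.eye(n)])
# CVaR constraint: eta + (1/((1-alpha) n)) sum_j u_j <= cvar_limit
A2 = np.concatenate([np.zeros(d), [1.0], np.full(n, 1.0 / ((1 - alpha) * n))])
A_ub = np.vstack([A1, A2])
b_ub = np.concatenate([np.zeros(n), [cvar_limit]])
A_eq = np.concatenate([np.ones(d), [0.0], np.zeros(n)])[None, :]   # sum w = 1
bounds = [(0, None)] * d + [(None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
w = res.x[:d]
print(w, mu @ w)                           # optimal weights and expected return
```

The auxiliary variable eta plays the role of VaR at the optimum, and the u_j capture the scenario losses exceeding it; only one risk constraint plus n linear inequalities are needed.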

9.
Sun, Jie, Liao, Li-Zhi and Rodrigues, Brian. Mathematical Programming (2018) 168(1-2): 599-613
Mathematical Programming - A new scheme to cope with two-stage stochastic optimization problems uses a risk measure as the objective function of the recourse action, where the risk measure is...

10.
We investigate an optimal portfolio and consumption choice problem with a defaultable security. Under the goal of maximizing the expected discounted utility of the average past consumption, a dynamic programming principle is applied to derive a pair of second-order parabolic Hamilton-Jacobi-Bellman (HJB) equations with gradient constraints. We explore these HJB equations by a viscosity solution approach and characterize the post-default and pre-default value functions as a unique pair of constrained viscosity solutions to the HJB equations.

11.
A risk-averse newsvendor with law invariant coherent measures of risk
For general law invariant coherent measures of risk, we derive an equivalent representation of a risk-averse newsvendor problem as a mean-risk model. We prove that the higher the weight of the risk functional, the smaller the order quantity. Our theoretical results are confirmed by sample-based optimization.
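The monotonicity result can be checked by sample-based optimization on a simple mean–CVaR instance (CVaR being one law invariant coherent risk measure; the prices, demand distribution and risk weights below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
price, cost, alpha = 10.0, 4.0, 0.95
demand = rng.lognormal(mean=3.0, sigma=0.5, size=20000)   # demand samples

def cvar(losses, a):
    """Empirical CVaR_a: mean loss in the worst (1-a) tail."""
    q = np.quantile(losses, a)
    return q + np.mean(np.maximum(losses - q, 0)) / (1 - a)

def best_order(kappa, grid):
    """Minimize (1-kappa)*E[loss] + kappa*CVaR(loss) over the order grid."""
    vals = []
    for q in grid:
        loss = cost * q - price * np.minimum(q, demand)   # negative profit
        vals.append((1 - kappa) * loss.mean() + kappa * cvar(loss, alpha))
    return grid[int(np.argmin(vals))]

grid = np.linspace(1, 80, 400)
orders = [best_order(k, grid) for k in (0.0, 0.3, 0.6, 0.9)]
print(orders)  # order quantity should not increase with the risk weight
```

As the weight kappa on the risk functional grows, the optimal order quantity shrinks, matching the theorem stated in the abstract.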

12.
In this paper, we derive new expectation representations of coherent multiperiod risk measures. A special feature of our representation is that it requires the use of randomized stopping times (introduced by Baxter and Chacon). Additionally, the results provide some insight into multiperiod risk measurement.

13.
Annals of the Institute of Statistical Mathematics - In this paper, we propose improved statistical inference and variable selection methods for generalized linear models based on empirical...

14.
Chen, Yanhong and Hu, Yijun. Positivity (2020) 24(3): 711-727

In this paper, we study the close relationship between multivariate coherent and convex risk measures. Namely, starting from a multivariate convex risk measure, we propose a family of multivariate coherent risk measures induced by it. In return, the convex risk measure can be represented by its induced coherent risk measures. The representation result for the induced coherent risk measures is given in terms of the minimal penalty function of the convex risk measure. Finally, an example is also given.


15.
We consider the problem of optimizing a portfolio of n assets whose returns are described by a joint discrete distribution. We formulate the mean–risk model, using as risk functionals the semideviation, deviation from quantile, and spectral risk measures. Using the modern theory of measures of risk, we derive an equivalent representation of the portfolio problem as a zero-sum matrix game, and we provide ways to solve it by convex optimization techniques. In this way, we obtain new probability measures which constitute part of the saddle point of the game. These risk-adjusted measures always exist, irrespective of the completeness of the market. We provide an illustrative example in which we derive these measures in a universe of 200 assets and use them to evaluate the market portfolio and optimal risk-averse portfolios.

16.
The paper by Huang [Fuzzy chance-constrained portfolio selection, Applied Mathematics and Computation 177 (2006) 500-507] proposes a fuzzy chance-constrained portfolio selection model and presents a numerical example to illustrate it. In this note, we show that Huang's model produces an optimal portfolio that invests in only one security when candidate security returns are independent of each other, no matter how many independent securities are in the market. The reason for this concentrated solution is that Huang's model does not consider investment risk. To avoid concentrated investment, a risk constraint is added to the fuzzy chance-constrained portfolio selection model. In addition, we point out that the result of the numerical example is inaccurate.

17.
In the field of portfolio selection, variance, semivariance and the probability of an adverse outcome are the three best-known mathematical definitions of risk, and many models have been built to minimize risk under these definitions. This paper gives a new definition of risk for portfolio selection and proposes a new type of model based on it. In addition, a hybrid intelligent algorithm is employed to solve the optimization problem in general cases. A numerical example is also presented for illustration.

18.
We assessed the ability of several penalized regression methods for linear and logistic models to identify outcome-associated predictors, and the impact of predictor selection on parameter inference, for practical sample sizes. We studied effect estimates obtained directly from penalized methods (Algorithm 1) or by refitting selected predictors with standard regression (Algorithm 2). For linear models, penalized linear regression, elastic net, smoothly clipped absolute deviation (SCAD), least angle regression and LASSO had low false negative (FN) predictor selection rates but false positive (FP) rates above 20% for all sample and effect sizes. Partial least squares regression had few FPs but many FNs. Only relaxo had low FP and FN rates. For logistic models, LASSO and penalized logistic regression had many FPs and few FNs for all sample and effect sizes. SCAD and adaptive logistic regression had low or moderate FP rates but many FNs. 95% confidence interval coverage of predictors with null effects was approximately 100% for Algorithm 1 for all methods, and 95% for Algorithm 2 for large sample and effect sizes. Coverage was low only for penalized partial least squares (linear regression). For outcome-associated predictors, coverage was close to 95% for Algorithm 2 for large sample and effect sizes for all methods except penalized partial least squares and penalized logistic regression. Coverage was sub-nominal for Algorithm 1. In conclusion, many methods performed comparably, and while Algorithm 2 is preferred to Algorithm 1 for estimation, it yields valid inference only for large effect and sample sizes.
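The two algorithms being compared can be sketched with LASSO on synthetic data (a minimal scikit-learn illustration; the design, effect sizes and penalty level are invented and far simpler than the study's settings):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(6)
n, p, k = 400, 50, 5                      # samples, predictors, true nonzero effects
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 1.0                            # strong effects on the first k predictors
y = X @ beta + rng.standard_normal(n)

# Algorithm 1: effect estimates taken straight from the penalized fit
lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
fp = int(np.sum(selected >= k))           # false positives (null predictors kept)
fn = int(k - np.sum(selected < k))        # false negatives (true predictors dropped)

# Algorithm 2: refit the selected predictors with ordinary least squares
refit = LinearRegression().fit(X[:, selected], y)
print(fn, fp, len(selected))
```

The refit removes the shrinkage bias of the penalized estimates for the selected predictors, which is why Algorithm 2 is preferred for estimation; the study's point is that its inference is only valid when selection is reliable.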

19.
We introduce a new aspect of a risk process: a macro approximation of the flow of a risk reserve. We assume that the underlying process consists of a Brownian motion plus negative jumps, and that the process is observed at discrete time points. In our setting, a jump of the process does not necessarily correspond to a single claim, so our risk process differs from the traditional one, and the individual jump sizes cannot be observed directly because of the discrete observations. Our goal is to estimate the adjustment coefficient of this risk process from discrete observations.
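In the classical Cramér-Lundberg model (no diffusion part, individually observed claims) the adjustment coefficient is the positive root of the Lundberg equation λ(M_X(r) - 1) = cr; the paper's contribution is to estimate the analogous coefficient when only discrete observations of a diffusion-perturbed reserve are available. A sketch of the classical computation for exponential claims, where a closed form is known (parameters invented):

```python
import numpy as np
from scipy.optimize import brentq

lam, mu, theta = 2.0, 1.5, 0.25          # claim rate, mean claim size, safety loading
c = (1 + theta) * lam * mu               # premium rate with loading theta

def lundberg(r):
    # lam * (M_X(r) - 1) - c * r, with exponential(mean mu) claims:
    # M_X(r) = 1 / (1 - mu * r) for r < 1/mu
    return lam * (1.0 / (1.0 - mu * r) - 1.0) - c * r

# the adjustment coefficient is the unique positive root below 1/mu
R = brentq(lundberg, 1e-9, 1.0 / mu - 1e-9)
R_closed = theta / (mu * (1 + theta))    # known closed form for exponential claims
print(R, R_closed)
```

The function is negative just above zero and diverges to +infinity as r approaches 1/mu, so the bracket for the root finder is guaranteed to contain the adjustment coefficient.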

20.
This paper develops a focused information criterion (FIC) model selection method and a smoothed-FIC model averaging estimation method for linear models with randomly right-censored responses, and proves the asymptotic normality of the FIC model-selection estimator and of the smoothed-FIC model-averaging estimator of the focus parameter. The finite-sample properties of the estimators are studied by simulation. In terms of mean squared error and the empirical coverage probability of confidence intervals at a given confidence level, the smoothed-FIC model-averaging estimator of the focus parameter outperforms model-selection estimators based on FIC, AIC (Akaike information criterion) and BIC (Bayesian information criterion), while the FIC model-selection estimator also shows some advantage over the AIC- and BIC-based estimators. An analysis of a primary biliary cirrhosis data set illustrates the application of the method in practice.
