Similar Literature
20 similar articles found.
1.
The evaluation of the performance of mutual funds (MFs) has been a very interesting research topic not only for researchers, but also for managers of financial, banking and investment institutions. In this paper, an integrated methodological framework for the evaluation of MF performance is proposed. The proposed methodology is based on the combination of discrete and continuous multicriteria decision aid (MCDA) methods for MF selection and portfolio composition. In the first stage of the analysis, the UTADIS MCDA method is employed to develop mutual fund performance models that support the selection of a small set of MFs to compose the final portfolios. In the second stage, a goal programming model is employed to determine the proportion of each selected MF in the final portfolios. The methodology is applied to data on Greek MFs over the period 1999–2001, with encouraging results.
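The second-stage allocation lends itself to a weighted goal-programming skeleton. The sketch below is generic rather than the authors' exact model: $x_j$ is the proportion of fund $j$, $g_i$ are goal functions (e.g., expected return or risk), $t_i$ the targets, and $d_i^{\pm}$ the deviation variables; the abstract does not specify the goals actually used.

$$
\begin{aligned}
\min\; & \sum_{i=1}^{m} w_i\,(d_i^{+}+d_i^{-}) \\
\text{s.t.}\; & g_i(x) + d_i^{-} - d_i^{+} = t_i, \qquad i=1,\dots,m,\\
& \sum_{j} x_j = 1, \qquad x_j \ge 0, \qquad d_i^{+},\,d_i^{-} \ge 0 .
\end{aligned}
$$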

2.
A scenario tree is an efficient way to represent a stochastic data process in decision problems under uncertainty. This paper addresses how to efficiently generate appropriate scenario trees. A knowledge-based scenario tree generation method is proposed; the new method is further improved by accounting for subjective judgements or expectations about the random future. Compared with existing approaches, complicated mathematical models and time-consuming estimation, simulation and optimization are avoided in our knowledge-based algorithms, and large-scale scenario trees can be generated quickly. To show the advantages of the new algorithms, a multiperiod portfolio selection problem is considered, and a dynamic risk measure is adopted to control the intermediate risk, which is superior to the single-period risk measure used in the existing literature. A series of numerical experiments is carried out using real trading data from the Shanghai stock market. The results show that the scenarios generated by our algorithms can properly represent the underlying distribution; our algorithms have high performance — for example, a scenario tree with up to 10,000 scenarios can be generated in less than half a minute. The applications to the multiperiod portfolio management problem demonstrate that our scenario tree generation methods are stable, and the optimal trading strategies obtained with the generated scenario tree are reasonable, efficient and robust.
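A minimal sketch of the underlying data structure, assuming a plain Monte Carlo sampler with uniform branching; the knowledge-based construction and its judgement-adjusted variant proposed in the paper are not reproduced here:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Node:
    value: np.ndarray            # sampled asset returns at this node
    prob: float                  # probability of reaching this node
    children: list = field(default_factory=list)

def build_tree(sampler, branching, depth, prob=1.0, value=None):
    """Build a uniform-branching scenario tree by recursive sampling."""
    node = Node(value=value, prob=prob)
    if depth > 0:
        for _ in range(branching):
            node.children.append(
                build_tree(sampler, branching, depth - 1,
                           prob / branching, sampler()))
    return node

rng = np.random.default_rng(0)
mu = np.array([0.05, 0.03])
cov = np.array([[0.04, 0.01], [0.01, 0.02]])
root = build_tree(lambda: rng.multivariate_normal(mu, cov),
                  branching=5, depth=3, value=np.zeros(2))
# 5**3 = 125 equally likely leaf-to-root scenario paths
```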

3.
Markowitz formulated the portfolio optimization problem through two criteria: the expected return and the risk, as a measure of the variability of the return. The classical Markowitz model uses the variance as the risk measure and is a quadratic programming problem. Many attempts have been made to linearize the portfolio optimization problem. Several different risk measures have been proposed which are computationally attractive because (for discrete random variables) they give rise to linear programming (LP) problems. About twenty years ago, the mean absolute deviation (MAD) model attracted considerable attention, prompting much research and accelerating the development of other LP models. Further, the LP models based on the conditional value at risk (CVaR) had a great impact on new developments in portfolio optimization during the first decade of the 21st century. LP solvability may become relevant for real-life decisions when portfolios have to meet side constraints and take into account transaction costs, or when large instances have to be solved. In this paper we review the variety of LP-solvable portfolio optimization models presented in the literature, the real features that have been modeled, and the solution approaches to the resulting models, in most cases mixed integer linear programming (MILP) models. We also discuss the impact of including the real features.
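For reference, the MAD model in its standard LP form, as given by Konno and Yamazaki (notation ours: $r_{jt}$ is the realized return of asset $j$ in period $t$, $\mu_j$ its mean, and $\rho$ the required portfolio return):

$$
\begin{aligned}
\min\; & \frac{1}{T}\sum_{t=1}^{T} d_t \\
\text{s.t.}\; & d_t \ge \sum_{j}\bigl(r_{jt}-\mu_j\bigr)x_j,\qquad d_t \ge -\sum_{j}\bigl(r_{jt}-\mu_j\bigr)x_j,\qquad t=1,\dots,T,\\
& \sum_{j}\mu_j x_j \ge \rho,\qquad \sum_{j} x_j = 1,\qquad x_j \ge 0 .
\end{aligned}
$$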

4.
Value at Risk (VaR) has been used as an important tool to measure market risk under normal market conditions. Usually the VaR of log returns is calculated by assuming a normal distribution. However, log returns are frequently found not to be normally distributed. This paper proposes an approach to VaR estimation using semiparametric support vector quantile regression (SSVQR) models, which are functions of the one-step-ahead volatility forecast and the length of the holding period, and which can be used regardless of the distribution. We find that the proposed models perform better overall than the variance-covariance and linear quantile regression approaches for return data on the S&P 500, Nikkei 225 and KOSPI 200 indices.
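The SSVQR estimator itself is beyond a short sketch, but the normal-distribution benchmark the paper improves on is easy to state. A minimal illustration contrasting variance-covariance VaR with a distribution-free empirical quantile, on fabricated heavy-tailed data:

```python
import numpy as np
from scipy import stats

def var_normal(returns, alpha=0.01, horizon=1):
    """Variance-covariance VaR: assumes i.i.d. normal log returns."""
    mu = returns.mean() * horizon
    sigma = returns.std(ddof=1) * np.sqrt(horizon)
    return -stats.norm.ppf(alpha, loc=mu, scale=sigma)   # VaR as a positive loss

def var_empirical(returns, alpha=0.01):
    """Distribution-free alternative: empirical lower quantile."""
    return -np.quantile(returns, alpha)

rng = np.random.default_rng(1)
r = 0.01 * rng.standard_t(df=4, size=2500)   # fabricated heavy-tailed returns
print(var_normal(r), var_empirical(r))       # the normal VaR understates the tail
```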

5.
6.
Measuring Extreme Risk in the Oil Market with VaR
Using daily domestic crude oil price data from November 2001 to June 2005, this paper applies a GARCH model with GED-distributed innovations to measure the VaR of the domestic oil market under extreme upward and extreme downward price movements. Two conclusions are drawn. First, an ARCH-in-Mean effect exists in the domestic oil market, indicating that returns and risk are positively correlated; this also implies that the market violates the efficient market hypothesis, and further analysis shows that the domestic crude oil pricing mechanism and distribution system are the main causes of this inefficiency. Second, the average level of upside risk exceeds that of downside risk, which is determined by the asymmetric market positions of supply and demand in the oil market: oil producers can exploit market power and vertically integrated organization to shift part of the downside risk onto oil consumers, while consumers lack effective measures to cope with rising prices.
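A sketch of the GARCH-GED VaR pipeline described above, assuming the `arch` package and synthetic placeholder returns; the 5% tail level and the plain GARCH(1,1) specification are our choices (the ARCH-in-Mean effect reported by the authors is not modeled here):

```python
import numpy as np
from scipy.stats import gennorm
from scipy.special import gamma
from arch import arch_model

rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=1500)        # placeholder for percent oil returns

# GARCH(1,1) with GED innovations ('ged' is a supported distribution in arch)
res = arch_model(r, vol='GARCH', p=1, q=1, dist='ged').fit(disp='off')

nu = res.params['nu']                              # GED shape parameter
scale = np.sqrt(gamma(1 / nu) / gamma(3 / nu))     # rescale the GED to unit variance
sigma = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
mu = res.params['mu']

var_down = -(mu + sigma * gennorm.ppf(0.05, nu, scale=scale))  # downside VaR
var_up = mu + sigma * gennorm.ppf(0.95, nu, scale=scale)       # upside VaR
```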

7.
Recent developments in the actuarial literature have shown that credibility theory can serve as an effective tool in mortality modelling, leading to accurate forecasts when applied to single- or multi-population datasets. This paper presents a crossed classification credibility formulation of the Lee–Carter method particularly designed for multi-population mortality modelling. Unlike the standard Lee–Carter methodology, where the time index is assumed to follow an appropriate time series process, here future mortality dynamics are estimated under a crossed classification credibility framework, which models the interactions between various risk factors (e.g. genders, countries). The forecasting performance of the proposed model is compared with that of the original Lee–Carter model and two multi-population Lee–Carter extensions, for both genders of multiple countries. Numerical results indicate that the proposed model produces more accurate forecasts than the Lee–Carter type models, as evaluated by the mean absolute percentage forecast error. Applications with life insurance and annuity products are also provided, and a stochastic version of the proposed model is presented.
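For reference, the standard Lee–Carter specification whose time-index dynamics the credibility framework replaces (our notation; $m_{x,t}$ is the central death rate at age $x$ in year $t$):

$$
\ln m_{x,t} = a_x + b_x\,\kappa_t + \varepsilon_{x,t},
$$

where, in the original method, the period index $\kappa_t$ is modeled as a time series (typically a random walk with drift) and forecast forward.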

8.
Long-Horizon Event Study Methodology: A Review
This paper provides a comprehensive review of the international literature on long-horizon event study methodology, focusing on the choice of expected return models (or return benchmarks), the measurement of abnormal returns, and the specification and power of test statistics. It finds that no consensus has been reached on the choice among expected return models, on the choice between the two abnormal return measures — AAR (or CAR) and BHAR — or on whether the various test statistics are misspecified; the debate continues.
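The two abnormal-return measures the review contrasts, AAR/CAR and BHAR, in their textbook form; the array conventions below are ours:

```python
import numpy as np

def aar_car(ar):
    """ar: (firms x periods) abnormal returns. Average across firms (AAR),
    then cumulate over event time (CAR)."""
    return ar.mean(axis=0).cumsum()

def bhar(r_firm, r_bench):
    """Buy-and-hold abnormal return: compounded firm return minus the
    compounded benchmark return over the same window."""
    return np.prod(1 + r_firm, axis=1) - np.prod(1 + r_bench, axis=1)
```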

9.
Credit risk optimization with Conditional Value-at-Risk criterion
This paper examines a new approach for credit risk optimization. The model is based on the Conditional Value-at-Risk (CVaR) risk measure, the expected loss exceeding Value-at-Risk. CVaR is also known as Mean Excess, Mean Shortfall, or Tail VaR. This model can simultaneously adjust all positions in a portfolio of financial instruments in order to minimize CVaR subject to trading and return constraints. The credit risk distribution is generated by Monte Carlo simulations and the optimization problem is solved effectively by linear programming. The algorithm is very efficient; it can handle hundreds of instruments and thousands of scenarios in reasonable computer time. The approach is demonstrated with a portfolio of emerging market bonds.
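A compact sketch of the Rockafellar–Uryasev LP that underlies this approach, using scipy; the scenario matrix, the 95% confidence level and the return target below are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(R, beta=0.95, target=0.001):
    """Minimize CVaR_beta of portfolio loss over equally likely scenarios.
    Decision vector: (x_1..x_n, alpha, u_1..u_T); R is (T x n) scenario returns."""
    T, n = R.shape
    c = np.r_[np.zeros(n), 1.0, np.ones(T) / ((1 - beta) * T)]
    # u_t >= -R_t @ x - alpha   <=>   -R_t @ x - alpha - u_t <= 0
    A_ub = np.c_[-R, -np.ones((T, 1)), -np.eye(T)]
    b_ub = np.zeros(T)
    # expected return at least `target`
    A_ub = np.vstack([A_ub, np.r_[-R.mean(axis=0), 0.0, np.zeros(T)]])
    b_ub = np.r_[b_ub, -target]
    A_eq = np.r_[np.ones(n), 0.0, np.zeros(T)].reshape(1, -1)   # weights sum to 1
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * T
    res = linprog(c, A_ub, b_ub, A_eq, [1.0], bounds=bounds)
    return res.x[:n], res.fun          # optimal weights and minimized CVaR

R = np.random.default_rng(2).normal(0.001, 0.02, size=(1000, 5))
w, cvar95 = min_cvar_portfolio(R)
```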

10.
The popularity of downside risk among investors is growing, and mean return–downside risk portfolio selection models seem to be displacing the familiar mean–variance approach. The reason for the success of the former models is that they separate return fluctuations into downside risk and upside potential. This is especially relevant for asymmetrical return distributions, for which mean–variance models punish the upside potential in the same fashion as the downside risk. The paper focuses on the differences and similarities between using variance or a downside risk measure, both from a theoretical and an empirical point of view. We first discuss the theoretical properties of different downside risk measures and the corresponding mean–downside risk models. Against common beliefs, we show that of the large family of downside risk measures, only a few possess better theoretical properties within a return–risk framework than the variance. On the empirical side, we analyze the differences between some US asset allocation portfolios based on variances and downside risk measures. Among other things, we find that the downside risk approach tends to produce – on average – slightly higher bond allocations than the mean–variance approach. Furthermore, we take a closer look at estimation risk, viz. the effect of sampling error in expected returns and risk measures on portfolio composition. On the basis of simulation analyses, we find that there are marked differences in the degree of estimation accuracy, which calls for further research.
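One member of the downside-risk family analyzed here is the (root of the) second-order lower partial moment; a minimal sketch with an illustrative target and sample:

```python
import numpy as np

def downside_deviation(r, target=0.0):
    """Root of the second-order lower partial moment: only returns below
    `target` contribute, so upside potential is not penalized."""
    shortfall = np.minimum(r - target, 0.0)
    return np.sqrt(np.mean(shortfall ** 2))

r = np.array([0.08, -0.02, 0.15, -0.06, 0.01])   # illustrative returns
print(np.std(r), downside_deviation(r))          # symmetric vs downside risk
```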

11.
刘忠 (Liu Zhong), 《应用概率统计》 (Applied Probability and Statistics), 2000, 16(4): 365–372
This paper uses an SV (stochastic variance) model to give a statistical description of the return process of an option's underlying asset. After deriving both option prices and market risk measures, it provides estimates of confidence intervals for the pricing and for the risk. The SV model is analyzed and compared with alternatives, a simple adaptive-filtering method is given for model building and parameter estimation, and finally a simulation analysis of the SV model is carried out together with a worked example of option pricing and risk measurement.
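A minimal simulation sketch of the canonical log-variance SV process, with illustrative parameter values of our choosing; the paper's adaptive-filtering estimation and option-pricing steps are not reproduced:

```python
import numpy as np

def simulate_sv(n, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    """Simulate a basic log-variance SV process:
    h_t = mu + phi*(h_{t-1} - mu) + eta_t,   r_t = exp(h_t / 2) * eps_t."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    h[0] = mu
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
    return np.exp(h / 2) * rng.standard_normal(n)

r = simulate_sv(1000)   # placeholder return path for the underlying asset
```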

12.
Although data envelopment analysis (DEA) has been extensively used to assess the performance of mutual funds (MFs), most of the approaches overestimate the risk associated with the endogenous benchmark portfolio. This is because in the conventional DEA technology the risk of the target portfolio is computed as a linear combination of the risk of the assessed MFs, which neglects the important effects of portfolio diversification. Other approaches, based on mean–variance or mean–variance–skewness, are non-linear. We propose to combine DEA with stochastic dominance criteria. Thus, in this paper, six distinct DEA-like linear programming (LP) models are proposed for computing relative efficiency scores consistent (in the sense of necessity) with second-order stochastic dominance (SSD). The aim is that, being SSD efficient, the obtained target portfolio should be an optimal benchmark for any rational risk-averse investor. The proposed models are compared with several related approaches from the literature.
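For orientation, the conventional DEA technology mentioned here rests on envelopment LPs such as the input-oriented CCR model; this is the textbook statement (our notation: $x_{ij}$, $y_{rj}$ are the inputs and outputs of fund $j$, and $o$ indexes the fund under assessment), not one of the six SSD-consistent models the paper proposes:

$$
\begin{aligned}
\min_{\theta,\,\lambda}\; & \theta \\
\text{s.t.}\; & \sum_{j}\lambda_j x_{ij} \le \theta\,x_{io}, \qquad i=1,\dots,m,\\
& \sum_{j}\lambda_j y_{rj} \ge y_{ro}, \qquad r=1,\dots,s,\\
& \lambda_j \ge 0 .
\end{aligned}
$$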

13.
Recent extreme economic developments nearing a worst-case scenario motivate further examination of minimax linear programming approaches to portfolio optimization. Risk is measured as the worst-case return, and a portfolio is constructed by maximizing returns subject to a risk threshold. Minimax model properties are developed, and parametric analysis of the risk threshold connects this model to expected value along a continuum, revealing an efficient frontier that segments investors by risk preference. Divergence of the minimax model results from expected value is quantified, and a set of possible prior distributions expressing a degree of Knightian uncertainty corresponding to risk preference is determined. The minimax model maximizes return with respect to one of these prior distributions, providing valuable insight into an investor's risk attitude and decision behavior. Linear programming models for financial firms to assist individual investors in hedging against losses by buying insurance, and a model for designing variable annuities, are proposed.
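A standard minimax LP of the type examined, in our notation ($H$ is the risk threshold on the worst-case portfolio return over the $T$ observed periods):

$$
\begin{aligned}
\max_{x}\; & \sum_{j}\mu_j x_j \\
\text{s.t.}\; & \sum_{j} r_{jt}\,x_j \ge H, \qquad t=1,\dots,T,\\
& \sum_{j} x_j = 1, \qquad x_j \ge 0 .
\end{aligned}
$$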

14.
Credit scoring is a risk evaluation task considered critical for financial institutions, which must avoid wrong decisions that may result in huge losses. Classification models are one of the most widely used groups of data mining approaches and greatly help decision makers and managers reduce the credit risk of granting credit to customers, rather than relying on intuitive experience or portfolio management alone. Accuracy is one of the most important criteria when choosing a credit-scoring model; hence, research aimed at improving the effectiveness of credit-scoring models has never stopped. In this article, a hybrid binary classification model, namely FMLP, is proposed for credit scoring, based on the basic concepts of fuzzy logic and artificial neural networks (ANNs). In the proposed model, instead of the crisp weights and biases used in traditional multilayer perceptrons (MLPs), fuzzy numbers are used in order to better model the uncertainties and complexities of financial data sets. Empirical results on three well-known benchmark credit data sets indicate that the proposed hybrid model outperforms its components as well as other classification models such as support vector machines (SVMs), K-nearest neighbor (KNN), quadratic discriminant analysis (QDA), and linear discriminant analysis (LDA). It can therefore be concluded that the proposed model is an appropriate alternative tool for financial binary classification problems, especially under high uncertainty.

15.
While traditional data envelopment analysis (DEA) models assess the relative efficiency of similar, independent decision making units (DMUs), centralized DEA models aim at reallocating inputs and outputs among the units, setting new input and output targets for each one. This system point of view is appropriate when the DMUs belong to a common organization that allocates their inputs and appropriates their outputs. This intraorganizational perspective opens up the possibility that greater technical efficiency for the organization as a whole might be achieved by closing down some of the existing DMUs. In this paper, we present three centralized DEA models that take advantage of this possibility. Although these models involve some binary variables, we present efficient solution approaches based on linear programming. We also present some numerical results of the proposed models for a small problem from the literature.

16.

A measure for portfolio risk management is proposed by extending the Markowitz mean-variance approach to include the left-hand tail effects of asset returns. Two risk dimensions are captured: asset covariance risk along with risk in left-hand tail similarity and volatility. The key ingredient is an informative set on the left-hand tail distributions of asset returns obtained by an adaptive clustering procedure. This set allows a left tail similarity and left tail volatility to be defined, thereby providing a definition for the left-tail-covariance-like matrix. The convex combination of the two covariance matrices generates a “two-dimensional” risk that, when applied to portfolio selection, provides a measure of the portfolio's systemic vulnerability due to asset centrality. This is done by simply associating a suitable node-weighted network with the portfolio. Higher values of this risk indicate an asset allocation suffering from too much exposure to volatile assets whose return dynamics behave too similarly in left-hand tail distributions and/or co-movements, as well as being too connected to each other. Minimizing these combined risks reduces losses and increases profits, with low variability in the profit and loss distribution. The portfolio selection compares favorably with some competing approaches. An empirical analysis is made using exchange traded fund prices over the period January 2006–February 2018.
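The combined risk is a convex combination of the two matrices; a minimal sketch assuming both matrices are given (the clustering-based construction of the tail matrix is not reproduced):

```python
import numpy as np

def two_dimensional_risk(x, cov, cov_tail, lam=0.5):
    """Portfolio risk under a convex combination of the usual covariance
    matrix and a left-tail-covariance-like matrix, with lam in [0, 1]."""
    sigma = lam * cov + (1.0 - lam) * cov_tail
    return float(x @ sigma @ x)
```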


17.
This paper deals with fuzzy optimization schemes for managing a portfolio in the framework of risk–return trade-off. Different models coexist to select the best portfolio according to their respective objective functions and many of them are linearly constrained. We are concerned with the infeasible instances of such models. This infeasibility, usually provoked by the conflict between the desired return and the diversification requirements proposed by the investor, can be satisfactorily avoided by using fuzzy linear programming techniques. We propose an algorithm to repair infeasibility and we illustrate its performance on a numerical example.

18.
With the increasing use of external and internal credit ratings under bank regulation, credit risk concentration has become one of the leading topics in modern finance. To measure single-name and sectoral concentration risk separately, the literature proposes specific concentration indexes and models, which we review in this paper. Following the guidelines proposed by Basel II on risk integration, we believe that standard approaches could be improved by studying a new measure of risk that integrates single-name and sectoral credit risk concentration in a coherent way. The main objective of this paper is to propose a novel index for measuring credit risk concentration that integrates the single-name and sectoral components. From a theoretical point of view, our measure of risk has interesting mathematical properties; empirical evidence is given on the basis of a data set. Finally, we compare the results achieved with our proposal against the common procedures proposed in the literature.
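A minimal sketch of one standard single-name index reviewed in this literature, the Herfindahl–Hirschman index on exposure shares (the paper's integrated single-name/sectoral measure is not reproduced here):

```python
import numpy as np

def hhi(exposures):
    """Herfindahl-Hirschman index on exposure shares: 1/n for an
    equally weighted book, 1 for a single exposure."""
    w = np.asarray(exposures, dtype=float)
    w /= w.sum()
    return float(np.sum(w ** 2))

print(hhi([100, 100, 100, 100]))   # 0.25 -- well diversified
print(hhi([400, 50, 30, 20]))      # ~0.66 -- concentrated book
```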

19.
To facilitate applications in general insurance, some extensions of cluster-weighted models (CWMs) are proposed. First, we extend CWMs to generalized cluster-weighted models (GCWMs) by allowing non-Gaussian distributions for the continuous covariates, as these frequently occur in insurance practice. Secondly, we introduce a zero-inflated extension of the GCWM (ZI-GCWM) for modelling insurance claims data with excess zeros coming from heterogeneous sources. Additionally, we give two expectation–maximization (EM) algorithms for parameter estimation in the proposed models. A simulation study shows that, for various settings and in contrast to existing mixture-based approaches, both extended models perform well. Finally, a real data set based on French automobile policies is used to illustrate the application of the proposed extensions.

20.
This paper presents an analysis of a portfolio model which can be used to assist a property-liability insurance company in determining the optimal composition of its insurance and investment portfolios. By introducing the insurer's threshold risk and relaxing some unrealistic assumptions made in traditional chance-constrained insurance and investment portfolio models, we propose a method for an insurer to maximize its return threshold for a given threshold risk level. The proposed model can be used to optimize the composition of the underwriting and investment portfolios with respect to the insurer's threshold risk level, as well as to generate the efficient frontier by adjusting the insurer's threshold risk levels. A numerical example is given based on the industry's aggregated data for a sixteen-year period.
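For context, the traditional chance-constrained models that the paper relaxes typically reduce a return guarantee to a deterministic constraint under a normality assumption (our notation; $z_{1-\alpha}$ is the standard normal quantile):

$$
\Pr\bigl(R_p \ge R_0\bigr) \ge 1-\alpha
\;\Longleftrightarrow\;
\mu^{\top}x \;-\; z_{1-\alpha}\,\sqrt{x^{\top}\Sigma\,x} \;\ge\; R_0 .
$$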
