Similar documents
20 similar documents found (search time: 755 ms)
1.
Obtaining reliable estimates of the parameters of a probabilistic classification model is often a challenging problem because the amount of available training data is limited. In this paper, we present a classification approach based on belief functions that makes the uncertainty resulting from limited amounts of training data explicit and thereby improves classification performance. In addition, we model classification as an active information acquisition problem where features are sequentially selected by maximizing the expected information gain with respect to the current belief distribution, thus reducing uncertainty as quickly as possible. For this, we consider different measures of uncertainty for belief functions and provide efficient algorithms for computing them. As a result, only a small subset of features needs to be extracted without negatively impacting the recognition rate. We evaluate our approach on an object recognition task where we compare different evidential and Bayesian methods for obtaining likelihoods from training data, and we investigate the influence of different uncertainty measures on the feature selection process.
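As a rough illustration of the sequential-selection idea, the sketch below greedily picks the feature whose observation maximizes the expected information gain over the current class distribution. It is a Bayesian analogue using Shannon entropy rather than the paper's belief-function uncertainty measures, and the feature names are invented:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0.0)

def expected_info_gain(prior, lik):
    """Expected entropy reduction from observing one feature.
    lik[c][v] = P(feature value v | class c)."""
    h0 = entropy(prior)
    expected_h = 0.0
    for v in range(len(lik[0])):
        pv = sum(prior[c] * lik[c][v] for c in range(len(prior)))
        posterior = [prior[c] * lik[c][v] / pv for c in range(len(prior))]
        expected_h += pv * entropy(posterior)
    return h0 - expected_h

def select_feature(prior, features):
    """Greedily pick the feature expected to reduce class uncertainty most."""
    return max(features, key=lambda name: expected_info_gain(prior, features[name]))
```

A discriminative feature (likelihoods far from uniform) scores a large gain, while an uninformative one scores zero, so the greedy rule extracts only the features that actually reduce uncertainty.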

2.
In typical robust portfolio selection problems, one finds portfolios with the worst-case return over a given uncertainty set in which asset returns can be realized. An overly large uncertainty set leads to an overly conservative robust portfolio; however, if the uncertainty set is not large enough, the realized returns of the resulting portfolios will fall outside it when an extreme event, such as a market crash or a large shock to asset returns, occurs. The goal of this paper is to propose robust portfolio selection models under a so-called "marginal + joint" ellipsoidal uncertainty set and to test their performance. A robust portfolio selection model under this uncertainty set is proposed first. The model combines the advantages of models under the separable uncertainty set and the joint ellipsoidal uncertainty set, and relaxes the requirements on the uncertainty set. A second robust portfolio selection model, with option protection, is then presented by incorporating options into the first model. Convex programming approximations with second-order cone and linear matrix inequality constraints are derived for both models. The model with options can hedge risk and generates robust portfolios with a good wealth growth rate when an extreme event occurs. Tests on real data from the Chinese stock market and on simulated options confirm the properties of both models. The results show that (1) under the "marginal + joint" uncertainty set, the wealth growth rate and diversification of the robust portfolios generated by the first proposed model (without options) exceed those generated by Goldfarb and Iyengar's model, and (2) the model with options outperforms the model without options when an extreme event occurs.
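The worst-case return that such models optimize has a closed form for a single joint ellipsoid, which is a simplification of the paper's "marginal + joint" set: if the mean return vector ranges over { mu0 + A u : ||u|| <= 1 }, the worst-case return of portfolio x is mu0'x - ||A'x||. A minimal sketch under that assumption:

```python
import math

def worst_case_return(x, mu0, A):
    """Worst-case portfolio return when the mean return vector ranges over
    the ellipsoid { mu0 + A u : ||u|| <= 1 }; equals mu0'x - ||A^T x||."""
    nominal = sum(m * xi for m, xi in zip(mu0, x))
    n, k = len(A), len(A[0])
    At_x = [sum(A[i][j] * x[i] for i in range(n)) for j in range(k)]
    return nominal - math.sqrt(sum(v * v for v in At_x))
```

The subtracted norm is the price of robustness: enlarging the ellipsoid (scaling up A) lowers the guaranteed return, which is exactly the conservatism trade-off the abstract describes.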

3.
This paper develops a λ mean-hybrid entropy model to deal with the portfolio selection problem under both random uncertainty and fuzzy uncertainty. Solving this model provides the investor with a tradeoff frontier between security return and risk. We model the security return as a triangular fuzzy random variable, where the investor's individual preference is reflected by the pessimistic-optimistic parameter λ, and we measure security risk using hybrid entropy. An algorithm is developed to solve this bi-objective portfolio selection model, and a numerical example is presented to illustrate the approach.

4.
In model-based analysis for comparative evaluation of strategies for disease treatment and management, the model of the disease is arguably the most critical element. A fundamental challenge in identifying model parameters arises from the limitations of available data, which challenges the ability to uniquely link model parameters to calibration targets. Consequently, the calibration of disease models leads to the discovery of multiple models that are similarly consistent with available data. This phenomenon is known as calibration uncertainty and its effect is transferred to the results of the analysis. Insufficient examination of the breadth of potential model parameters can create a false sense of confidence in the model recommendation, and ultimately cast doubt on the value of the analysis. This paper introduces a systematic approach to the examination of calibration uncertainty and its impact. We begin with a model of the calibration process as a constrained optimization problem and introduce the notion of plausible models which define the uncertainty region for model parameters. We illustrate the approach using a fictitious disease, and explore various methods for interpreting the outputs obtained.
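The notion of plausible models can be sketched as follows: sample candidate parameter vectors, keep those whose calibration error against the targets is within a tolerance, and inspect the spread of the survivors. The toy two-parameter disease model below (infection rate times detection rate, so only their product is observable) is invented for illustration; its non-identifiability is precisely what produces calibration uncertainty:

```python
import random

def calibration_error(params, targets, model):
    """Worst absolute deviation of model outputs from calibration targets."""
    out = model(params)
    return max(abs(out[k] - targets[k]) for k in targets)

def plausible_set(candidates, targets, model, tol):
    """All parameter vectors whose calibration error is within tol: the
    'plausible models' that are similarly consistent with the data."""
    return [p for p in candidates if calibration_error(p, targets, model) <= tol]

def toy_model(params):
    # Hypothetical disease model: only the product of the two rates is
    # observable, so the parameters are not uniquely identified.
    infection_rate, detection_rate = params
    return {"observed_prevalence": infection_rate * detection_rate}

random.seed(0)
candidates = [(random.random(), random.random()) for _ in range(2000)]
targets = {"observed_prevalence": 0.12}
plausible = plausible_set(candidates, targets, toy_model, tol=0.01)
# Many very different infection rates survive calibration: the data alone
# cannot pin the model down, so analyses should span the whole set.
```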

5.
Population projection, as a basis for regional planning and policy decisions, has important theoretical and practical value for sustainable regional economic and social development. Many researchers have used time-series models for population projection, but studies that consider model selection from the perspectives of forecast accuracy, bias, and uncertainty are almost nonexistent. Based on ARIMA population projections for several representative Chinese provinces, this paper explores general rules for selecting the optimal time-series projection model under different base periods, cutoff years, and forecast horizons. The study finds that some ARIMA models provide relatively accurate results while others do not; linear and nonlinear models differ considerably in forecast accuracy; the length of the historical data may lead to different model choices; and model selection under different accuracy perspectives is largely consistent, though with some degree of uncertainty.

6.
Dokka  Trivikram  Goerigk  Marc  Roy  Rahul 《Optimization Letters》2020,14(6):1323-1337

In robust optimization, the uncertainty set is used to model all possible outcomes of uncertain parameters. In the classic setting, one assumes that this set is provided by the decision maker based on the data available to her. Only recently has it been recognized that the process of building useful uncertainty sets is in itself a challenging task that requires mathematical support. In this paper, we propose an approach to go beyond the classic setting, by assuming multiple uncertainty sets to be prepared, each with a weight showing the degree of belief that the set is a "true" model of uncertainty. We consider theoretical aspects of this approach and show that it is as easy to model as the classic setting. In an extensive computational study using a shortest path problem based on real-world data, we auto-tune uncertainty sets to the available data, and show that with regard to out-of-sample performance, the combination of multiple sets can give better results than each set on its own.
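A minimal sketch of the weighted-sets idea, under the simplifying assumptions that each uncertainty set is a finite list of return scenarios, the decision is a scalar position, and the objective is a belief-weighted average of per-set worst cases (the paper's actual formulation is more general):

```python
def scenario_worst_case(x, scenarios):
    """Worst-case outcome of a scalar position x over a finite scenario set."""
    return min(r * x for r in scenarios)

def weighted_robust_value(x, uncertainty_sets, weights):
    """Belief-weighted combination of per-set worst-case values: each set's
    worst case counts in proportion to the decision maker's degree of belief
    that this set is the 'true' model of uncertainty."""
    z = sum(weights)
    return sum(w * scenario_worst_case(x, u)
               for w, u in zip(weights, uncertainty_sets)) / z
```

Because the weighted objective is just a sum of worst-case terms, optimizing it is no harder than the single-set classic setting, which mirrors the tractability claim in the abstract.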


7.
In mean-risk portfolio optimization, it is typically assumed that the assets follow a known distribution P0, which is estimated from observed data. Aiming at an investment strategy which is robust against possible misspecification of P0, the portfolio selection problem is solved with respect to the worst-case distribution within a Wasserstein neighborhood of P0. We review tractable formulations of the portfolio selection problem under model ambiguity, as it is called in the literature. For instance, it is known that high model ambiguity leads to equally-weighted portfolio diversification. However, it often happens that the marginal distributions of the assets can be estimated with high accuracy, whereas the dependence structure between the assets remains ambiguous. This leads to the problem of portfolio selection under dependence uncertainty. We show that in this case portfolio concentration becomes optimal as the uncertainty with respect to the estimated dependence structure increases. Hence, distributionally robust portfolio optimization can have two very distinct implications: diversification on the one hand and concentration on the other.

8.
We present a new approach that enables investors to seek a reasonably robust policy for portfolio selection in the presence of rare but high-impact realization of moment uncertainty. In practice, portfolio managers face difficulty in seeking a balance between relying on their knowledge of a reference financial model and taking into account possible ambiguity of the model. Based on the concept of Distributionally Robust Optimization (DRO), we introduce a new penalty framework that provides investors flexibility to define prior reference models using the distributional information of the first two moments and accounts for model ambiguity in terms of extreme moment uncertainty. We show that in our approach a globally-optimal portfolio can in general be obtained in a computationally tractable manner. We also show that for a wide range of specifications our proposed model can be recast as semidefinite programs. Computational experiments show that our penalized moment-based approach outperforms classical DRO approaches in terms of both average and downside-risk performance using historical data.

9.
A current challenge for many Bayesian analyses is determining when to terminate high-dimensional Markov chain Monte Carlo simulations. To this end, we propose using an automated sequential stopping procedure that terminates the simulation when the computational uncertainty is small relative to the posterior uncertainty. Further, we show this stopping rule is equivalent to stopping when the effective sample size is sufficiently large. Such a stopping rule has previously been shown to work well in settings with posteriors of moderate dimension. In this article, we illustrate its utility in high-dimensional simulations while overcoming some current computational issues. As examples, we consider two complex Bayesian analyses on spatially and temporally correlated datasets. The first involves a dynamic space-time model on weather station data and the second a spatial variable selection model on fMRI brain imaging data. Our results show the sequential stopping rule is easy to implement, provides uncertainty estimates, and performs well in high-dimensional settings. Supplementary materials for this article are available online.
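A toy version of the stopping rule, using a crude initial-sequence effective-sample-size estimator on a one-dimensional AR(1)-style chain (the article's estimator and diagnostics are more sophisticated): extend the chain in batches and stop once the ESS exceeds a target.

```python
import math
import random

def effective_sample_size(chain):
    """Crude initial-sequence ESS: sum positive autocorrelations until they
    fall below a small threshold, then apply n / (1 + 2 * sum(rho))."""
    n = len(chain)
    mean = sum(chain) / n
    var = sum((x - mean) ** 2 for x in chain) / n
    if var == 0.0:
        return float(n)
    rho_sum = 0.0
    for lag in range(1, n):
        rho = sum((chain[i] - mean) * (chain[i + lag] - mean)
                  for i in range(n - lag)) / (n * var)
        if rho <= 0.05:  # truncate once autocorrelation is negligible
            break
        rho_sum += rho
    return n / (1.0 + 2.0 * rho_sum)

def run_until_ess(step, x0, target_ess=400, batch=500, max_len=20000):
    """Sequential stopping rule: grow the chain in batches and terminate
    once the effective sample size is sufficiently large."""
    chain = [x0]
    while len(chain) < max_len:
        for _ in range(batch):
            chain.append(step(chain[-1]))
        if effective_sample_size(chain) >= target_ess:
            break
    return chain
```

For an autocorrelated chain the rule automatically runs longer than a fixed-length heuristic would for independent draws, which is the point of stopping on ESS rather than raw iteration count.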

10.
We develop a multi-stage stochastic programming model for international portfolio management in a dynamic setting. We model uncertainty in asset prices and exchange rates in terms of scenario trees that reflect the empirical distributions implied by market data. The model takes a holistic view of the problem. It considers portfolio rebalancing decisions over multiple periods in accordance with the contingencies of the scenario tree. The solution jointly determines capital allocations to international markets, the selection of assets within each market, and appropriate currency hedging levels. We investigate the performance of alternative hedging strategies through extensive numerical tests with real market data. We show that appropriate selection of currency forward contracts materially reduces risk in international portfolios. We further find that multi-stage models consistently outperform single-stage models. Our results demonstrate that the stochastic programming framework provides a flexible and effective decision support tool for international portfolio management.

11.
In this paper, we present a duality theory for fractional programming problems in the face of data uncertainty via robust optimization. By employing conjugate analysis, we establish robust strong duality for an uncertain fractional programming problem and its uncertain Wolfe dual programming problem by showing strong duality between the deterministic counterparts: robust counterpart of the primal model and the optimistic counterpart of its dual problem. We show that our results encompass as special cases some programming problems considered in the recent literature. Moreover, we also show that robust strong duality always holds for linear fractional programming problems under scenario data uncertainty or constraint-wise interval uncertainty, and that the optimistic counterpart of the dual is tractable computationally.

12.
This article suggests a method for variable and transformation selection based on posterior probabilities. Our approach allows for consideration of all possible combinations of untransformed and transformed predictors along with transformed and untransformed versions of the response. To transform the predictors in the model, we use a change-point model, or "change-point transformation," which can yield more interpretable models and transformations than the standard Box-Tidwell approach. We also address the problem of model uncertainty in the selection of models. By averaging over models, we account for the uncertainty inherent in inference based on a single model chosen from the set of models under consideration. We use a Markov chain Monte Carlo model composition (MC3) method, which allows us to average over linear regression models when the space of models under consideration is very large, so that variables and transformations are selected at the same time. In an example, we show that model averaging improves predictive performance as compared with any single model that might reasonably be selected, both in terms of overall predictive score and of the coverage of prediction intervals. Software to apply the proposed methodology is available via StatLib.
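The averaging step itself is simple to sketch: weight each model's prediction by its posterior model probability, and note how disagreement between models inflates the predictive variance, which is what widens prediction intervals toward honest coverage. A minimal sketch with hypothetical model names:

```python
def model_average(predictions, posteriors):
    """Model-averaged point prediction: each model's prediction weighted by
    its (normalized) posterior model probability."""
    z = sum(posteriors.values())
    return sum(predictions[m] * posteriors[m] / z for m in predictions)

def model_average_variance(predictions, variances, posteriors):
    """Model-averaged predictive variance: within-model variance plus the
    spread of model-specific predictions around the averaged prediction."""
    z = sum(posteriors.values())
    mean = model_average(predictions, posteriors)
    return sum((variances[m] + (predictions[m] - mean) ** 2) * posteriors[m] / z
               for m in predictions)
```

The second term in the variance is the model-uncertainty contribution: it vanishes only when all models agree, so intervals built from a single selected model are systematically too narrow.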

13.
In this paper we study the problem of optimal portfolio selection with transaction costs for a decision-maker who is faced with Knightian uncertainty. The decision-maker's portfolio consists of one risky and one risk-free asset, and we assume that the transaction costs are proportional to the traded volume of the risky asset. The attitude to uncertainty is modeled by the Choquet expected utility. We derive optimal strategies and bounds of the no-transaction region for both optimistic and pessimistic decision-makers. The no-transaction region of a pessimistic investor is narrower and its bounds lie closer to the origin than those of an optimistic trader. Moreover, under the Choquet expected utility the structure of the no-transaction region is not necessarily a closed interval as it is under the standard expected utility model.

14.
The Markowitz Mean Variance model (MMV) and its variants are widely used for portfolio selection. The mean and covariance matrix used in the model originate from probability distributions that need to be determined empirically. It is well known that these parameters are notoriously difficult to estimate. In addition, the model is very sensitive to these parameter estimates. As a result, the performance and composition of MMV portfolios can vary significantly with the specification of the mean and covariance matrix. In order to address this issue we propose a one-period mean-variance model, where the mean and covariance matrix are only assumed to belong to an exogenously specified uncertainty set. The robust mean-variance portfolio selection problem is then written as a conic program that can be solved efficiently with standard solvers. Both second order cone program (SOCP) and semidefinite program (SDP) formulations are discussed. Using numerical experiments with real data we show that the portfolios generated by the proposed robust mean-variance model can be computed efficiently and are not as sensitive to input errors as the classical MMV's portfolios.
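The sensitivity the authors address is easy to reproduce in two dimensions: unconstrained mean-variance weights are proportional to the inverse covariance matrix applied to the mean vector, and a small perturbation of one expected return can shift the allocation substantially. A minimal sketch of the classical (non-robust) computation, not the paper's robust formulation:

```python
def mv_weights(mu, cov):
    """Unconstrained two-asset mean-variance weights, proportional to
    inverse(cov) @ mu and normalized to sum to one."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    raw = [(d * mu[0] - b * mu[1]) / det,
           (-c * mu[0] + a * mu[1]) / det]
    total = raw[0] + raw[1]
    return [w / total for w in raw]
```

With correlated assets, nudging one expected return by a single percentage point reallocates a large fraction of the portfolio, which is the instability that motivates replacing point estimates with an uncertainty set.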

15.
Bayesian approaches to prediction and the assessment of predictive uncertainty in generalized linear models are often based on averaging predictions over different models, which requires methods for accounting for model uncertainty. When there are linear dependencies among potential predictor variables, existing Markov chain Monte Carlo algorithms for sampling from the posterior distribution over the model and parameter space in Bayesian variable selection problems may not work well. This article describes a sampling algorithm based on the Swendsen-Wang algorithm for the Ising model that works well when the predictors are far from orthogonality. In variable selection for generalized linear models, different models can be indexed by a binary parameter vector whose entries indicate whether each predictor is included in the model. The posterior distribution over models is then a distribution on this collection of binary strings; by viewing it as a binary spatial field, we apply a sampling scheme inspired by the Swendsen-Wang algorithm to sample from the model posterior. The algorithm extends a similar algorithm for variable selection problems in linear models. Its benefits are demonstrated for both real and simulated data.

16.
This work aims to determine the factors affecting economic output in developed countries. The definition of development, however, depends on the criteria applied: different principles yield different criteria for the level of development. There is therefore uncertainty in the choice of a sample of genuinely developed countries, and if the selected sample is not representative of the underlying population of developed countries, the ordinary least squares coefficients may be biased. This paper examines the determinants of economic output in panel data for 22 developed countries from 1996 to 2008, using econometric techniques that take into account the selective nature of the sample. In general, there are two approaches to estimating the sample selection model: the maximum likelihood method and the method proposed by Heckman (1979) [21]. Both require the joint distribution to be known, and a multivariate normal distribution is usually assumed. This assumption can often be excessively restrictive, which leads to uncertainty about the structure of the joint distribution. Smith (2003) [37] suggests applying the copula approach, especially Archimedean copulas, to the sample selection model, and shows that the copula approach is well suited to models with biased sample selection using cross-sectional data. In our work, we employ the copula approach to construct the sample selection model in the case of panel data, resulting in the identification of significant factors affecting economic output.

17.
We discuss an optimal portfolio selection problem of an insurer who faces model uncertainty in a jump-diffusion risk model using a game theoretic approach. In particular, the optimal portfolio selection problem is formulated as a two-person, zero-sum, stochastic differential game between the insurer and the market. There are two leader-follower games embedded in the game problem: (i) The insurer is the leader of the game and aims to select an optimal portfolio strategy by maximizing the expected utility of the terminal surplus in the "worst-case" scenario; (ii) The market acts as the leader of the game and aims to choose an optimal probability scenario to minimize the maximal expected utility of the terminal surplus. Using techniques of stochastic linear-quadratic control, we obtain closed-form solutions to the game problems in both the jump-diffusion risk process and its diffusion approximation for the case of an exponential utility.

18.
A flexible Bayesian periodic autoregressive model is used for the prediction of quarterly and monthly time series data. Since the unknown autoregressive lag order, the occurrence of structural breaks, and the respective break dates are common sources of uncertainty, they are treated as random quantities within the Bayesian framework. Because no analytical expressions for the corresponding marginal posterior predictive distributions exist, a Markov chain Monte Carlo approach based on data augmentation is proposed, and its performance is demonstrated in Monte Carlo experiments. Instead of resorting to model selection by choosing a particular candidate model for prediction, a forecasting approach based on Bayesian model averaging is used in order to account for model uncertainty and to improve forecasting accuracy. For model diagnosis, a Bayesian sign test is introduced to compare the predictive accuracy of different forecasting models in terms of statistical significance. In an empirical application using monthly unemployment rates of Germany, the performance of the model averaging prediction approach is compared to those of model-selected Bayesian and classical (non)periodic time series models.

19.
Decision-making problems such as location selection often involve a complex decision-making process in which multiple requirements and uncertain conditions have to be taken into consideration simultaneously. In evaluating the suitability of alternatives, quantitative and qualitative assessments are often required to deal with uncertainty, subjectiveness and imprecise data, which are best represented with fuzzy data. This paper presents a new multicriteria analysis method based on an efficient fuzzy model incorporating the concepts of positive-ideal and negative-ideal points to solve decision-making problems with multiple judges and multiple criteria in real-life situations. As a result, effective decisions can be made on the basis of consistent evaluation results. Finally, this paper uses a numerical example of location selection to demonstrate the applicability of this method, with its simplicity in both concept and computation. The results show that this method can be implemented as an effective decision aid for location selection and similar decision-making problems.
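The ideal-point mechanics resemble classical TOPSIS. A crisp (non-fuzzy) sketch, which treats all criteria as benefit criteria and omits the paper's fuzzy assessments: normalize and weight the decision matrix, form the positive and negative ideal points from column extremes, and rank alternatives by relative closeness to the positive ideal.

```python
import math

def topsis(matrix, weights):
    """Closeness scores for alternatives (rows) over criteria (columns),
    all criteria treated as benefit criteria. Higher score = better."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    r = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Positive and negative ideal points from column-wise extremes.
    pos = [max(r[i][j] for i in range(m)) for j in range(n)]
    neg = [min(r[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((r[i][j] - pos[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((r[i][j] - neg[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

A dominant alternative scores 1 (it coincides with the positive ideal) and a dominated one scores 0, so the ranking follows directly from the scores.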

20.
This article considers Markov chain computational methods for incorporating uncertainty about the dimension of a parameter when performing inference within a Bayesian setting. A general class of methods is proposed for performing such computations, based upon a product space representation of the problem which is similar to that of Carlin and Chib. It is shown that all of the existing algorithms for incorporation of model uncertainty into Markov chain Monte Carlo (MCMC) can be derived as special cases of this general class of methods. In particular, we show that the popular reversible jump method is obtained when a special form of Metropolis–Hastings (M–H) algorithm is applied to the product space. Furthermore, the Gibbs sampling method and the variable selection method are shown to derive straightforwardly from the general framework. We believe that these new relationships between methods, which were until now seen as diverse procedures, are an important aid to the understanding of MCMC model selection procedures and may assist in the future development of improved procedures. Our discussion also sheds some light upon the important issues of "pseudo-prior" selection in the case of the Carlin and Chib sampler and choice of proposal distribution in the case of reversible jump. Finally, we propose efficient reversible jump proposal schemes that take advantage of any analytic structure that may be present in the model. These proposal schemes are compared with a standard reversible jump scheme for the problem of model order uncertainty in autoregressive time series, demonstrating the improvements which can be achieved through careful choice of proposals.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号