Similar literature
20 similar documents found.
1.
We present an approach for pricing and hedging in incomplete markets, which encompasses other recently introduced approaches for the same purpose. In a discrete time, finite space probability framework conducive to numerical computation, we introduce a gain–loss-ratio-based restriction controlled by a loss aversion parameter, and characterize portfolio values which can be traded in discrete time to acceptability. The new risk measure specializes to a well-known risk measure (the Carr–Geman–Madan risk measure) for a specific choice of the risk aversion parameter, and to a robust version of the gain–loss measure (the Bernardo–Ledoit proposal) for a specific choice of thresholds. The result implies potentially tighter price bounds for contingent claims than the no-arbitrage price bounds. We illustrate the price bounds through numerical examples from option pricing.
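A minimal one-period sketch of the kind of computation the abstract describes, under simplifying assumptions that are not in the paper (a single trading date, a constant benchmark pricing kernel, and hypothetical state prices): price bounds for a claim are obtained by optimizing its expected discounted payoff over pricing kernels whose maximum-to-minimum ratio is capped by a loss-aversion threshold L, which tightens the usual no-arbitrage bounds.

```python
import numpy as np
from itertools import permutations
from scipy.optimize import linprog

# Hypothetical one-period market with 5 states (illustrative data only).
p = np.full(5, 0.2)                        # physical state probabilities
S0, r = 100.0, 0.0                         # underlying spot price, risk-free rate
ST = np.array([80., 90., 100., 110., 120.])
X = np.maximum(ST - 100.0, 0.0)            # European call struck at 100
L = 3.0                                    # gain-loss / loss-aversion threshold

n = len(p)
A_eq = np.vstack([p, p * ST])              # E[m] = 1/(1+r) and E[m S_T] = S0
b_eq = np.array([1.0 / (1.0 + r), S0])
A_ub = np.zeros((n * (n - 1), n))          # ratio restriction: m_i <= L * m_j for i != j
for row, (i, j) in enumerate(permutations(range(n), 2)):
    A_ub[row, i], A_ub[row, j] = 1.0, -L
b_ub = np.zeros(n * (n - 1))

price_bounds = []
for sign in (+1.0, -1.0):                  # +1 gives the lower bound, -1 the upper bound
    res = linprog(sign * p * X, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    price_bounds.append(sign * res.fun)    # price = E[m X] at the optimum
print(f"price bounds for the call with L={L}: "
      f"[{price_bounds[0]:.3f}, {price_bounds[1]:.3f}]")
```

In this sketch, taking L very large recovers the ordinary no-arbitrage bounds, while L close to 1 forces an essentially constant kernel and collapses the interval toward a single price.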

2.
This paper extends possibilities for analyzing incomplete ordinal information about the parameters of an additive value function. Such information is modeled through preference statements which associate sets of alternatives or attributes with corresponding sets of rankings. These preference statements can be particularly helpful in developing a joint preference representation for a group of decision-makers who may find it difficult to agree on numerical parameter values. Because these statements can lead to a non-convex set of feasible parameters, a mixed integer linear formulation is developed to establish a linear model for the computation of decision recommendations. This makes it possible to complete incomplete ordinal information with other forms of incomplete information.

3.
A novel interval set approach is proposed in this paper to induce classification rules from an incomplete information table, in which an interval-set-based model for representing uncertain concepts is presented. The extensions of the concepts in an incomplete information table are represented by interval sets, which specify the upper and lower bounds of the uncertain concepts. Interval set operations are discussed, and the connectives of concepts are represented by operations on interval sets. Certain inclusion, possible inclusion, and weak inclusion relations between interval sets are presented and are used to induce strong rules and weak rules from the incomplete information table. The related properties of the inclusion relations are proved. It is concluded that the strong rules are always true whatever the missing values may be, while the weak rules may be true when the missing values are replaced by certain known values. Moreover, a confidence function is defined to evaluate the weak rules. The proposed approach presents a new view on rule induction from incomplete data based on interval sets.
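A small Python sketch of the interval-set idea, using assumed (but natural) definitions rather than the paper's exact ones: an interval set is a pair of lower and upper bound sets, component-wise operations give the connectives, certain inclusion holds when every extension of one concept is contained in every extension of the other, and possible inclusion when at least one pair of extensions is nested. The class and object names are illustrative only.

```python
class IntervalSet:
    """An interval set [L, U] collects every extension X with L <= X <= U."""

    def __init__(self, lower, upper):
        self.lower, self.upper = frozenset(lower), frozenset(upper)
        assert self.lower <= self.upper          # lower bound must be inside upper bound

    def intersect(self, other):                  # component-wise bound operations
        return IntervalSet(self.lower & other.lower, self.upper & other.upper)

    def union(self, other):
        return IntervalSet(self.lower | other.lower, self.upper | other.upper)

    def certainly_included_in(self, other):
        # every extension of self is a subset of every extension of other
        return self.upper <= other.lower

    def possibly_included_in(self, other):
        # at least one pair of extensions is in the subset relation
        return self.lower <= other.upper


# Toy example: extensions of two uncertain concepts over objects o1..o3.
A = IntervalSet({"o1"}, {"o1", "o2"})
B = IntervalSet({"o1", "o2"}, {"o1", "o2", "o3"})
print(A.certainly_included_in(B), A.possibly_included_in(B))   # True True
```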

4.
The log-normal distribution is a common choice for modeling positively skewed data arising from many practical applications. This article introduces a new method of constructing confidence intervals for a common mean shared by several log-normal populations through confidence distributions, which combine all information from independent sources. We develop a non-trivial weighting approach that takes account of the sample variances of related quantities to enhance efficiency. Combined confidence distributions are used to construct confidence intervals for the common mean, and a simplified version of one existing method is also proposed. We conduct simulation studies to evaluate the performance of the proposed methods in comparison with existing methods. Our simulation results show that the weighting approach yields shorter interval lengths than the non-weighting approach. The newly proposed confidence intervals perform very well in terms of empirical coverage probability and average interval length. Finally, the application of the proposed methodology is illustrated through three real data examples.
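A simplified Python sketch of the weighting idea; this is a standard inverse-variance combination under a large-sample normal approximation, not the paper's confidence-distribution construction. Each population contributes an estimate of theta = mu + sigma^2/2 (the log of the log-normal mean) with a Cox-type variance approximation, the estimates are weighted by their inverse variances, and the interval endpoints are exponentiated back to the original scale.

```python
import numpy as np
from scipy import stats

def lognormal_common_mean_ci(samples, alpha=0.05):
    """Approximate CI for a common log-normal mean shared by several samples."""
    thetas, variances = [], []
    for x in samples:
        y = np.log(np.asarray(x))
        n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
        thetas.append(ybar + s2 / 2.0)                       # estimate of mu + sigma^2/2
        variances.append(s2 / n + s2 ** 2 / (2.0 * (n - 1)))  # Cox-type variance approximation
    w = 1.0 / np.asarray(variances)                          # inverse-variance weights
    theta_c = np.sum(w * np.asarray(thetas)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    return np.exp(theta_c - z * se), np.exp(theta_c + z * se)

rng = np.random.default_rng(0)
# Three populations sharing the same mean exp(mu + sigma^2/2) = exp(1.5).
samples = [rng.lognormal(mean=1.5 - s ** 2 / 2, sigma=s, size=50) for s in (0.5, 1.0, 1.5)]
print(lognormal_common_mean_ci(samples))
```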

5.
For decision problems involving incomplete complementary judgment matrices that satisfy multiplicative consistency, a decision-making method is proposed. The definition of multiplicative consistency for complementary judgment matrices is first simplified, yielding several alternative characterizations of multiplicative consistency. It is then shown how the missing entries of an incomplete complementary judgment matrix can be completed when n-1 particular entries are known. Conditions under which an incomplete complementary judgment matrix is acceptable are given, together with a consistency test and an adjustment procedure for the matrix. On this basis, the following decision steps are provided: the consistency test and adjustment of the incomplete complementary judgment matrix, an iterative procedure for completing the missing entries, and the selection of the best alternative. Finally, a numerical example is given; its computation, together with a comparison of the proposed method with existing methods, shows that the proposed method is simple and effective.
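A brief Python sketch of the completion step under the commonly used multiplicative-consistency composition for complementary (fuzzy preference) matrices. The assumption that the n-1 known entries lie on the super-diagonal, and the helper name, are illustrative only and need not match the paper's construction.

```python
import numpy as np

def complete_from_superdiagonal(super_diag):
    """Complete a complementary judgment matrix (r_ij + r_ji = 1, r_ii = 0.5)
    from the n-1 super-diagonal entries r_{i,i+1} using the composition
    r_ik = r_ij * r_jk / (r_ij * r_jk + (1 - r_ij) * (1 - r_jk))."""
    n = len(super_diag) + 1
    R = np.full((n, n), 0.5)
    for i in range(n - 1):
        R[i, i + 1], R[i + 1, i] = super_diag[i], 1.0 - super_diag[i]
    for gap in range(2, n):                  # fill longer chains from shorter ones
        for i in range(n - gap):
            k, j = i + gap, i + gap - 1      # compose r_ij (known) with r_jk (known)
            a, b = R[i, j], R[j, k]
            R[i, k] = a * b / (a * b + (1 - a) * (1 - b))
            R[k, i] = 1.0 - R[i, k]
    return R

# Example with four alternatives and known entries r_12, r_23, r_34.
print(np.round(complete_from_superdiagonal([0.6, 0.7, 0.55]), 3))
```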

6.
This paper develops an interactive three-stage systems approach for the calibration of the structural parameters and missing data within a deterministic, dynamic non-linear simultaneous equations model under arbitrary configurations of incomplete data. In Stage One, we minimize a quadratic loss function in the differences between the actual endogenous variables and the predicted solution values, relative to any feasible choice of the structural parameters. Missing exogenous variables and initial endogenous variables are treated as additional parameters to be calibrated, whereas missing current endogenous variables are treated by the missing data updating condition, in which the current solution values iteratively and sequentially replace those absent. Stage One may or may not lead to unique calibrations of the structural parameters, a fact that can be monitored a posteriori using singular value decompositions of the relevant Jacobian matrix. If not, there is an equivalence class of parameter values, all of which result in the same loss function value. If Stage Two is necessary, we attempt to exploit the non-linearity and simultaneity of the structural system to extract further information about the parameters from the same database, by minimizing the distance between the restricted and unrestricted reduced forms, while constraining the parameters also to lie within the Stage One equivalence class. This requires the use of higher-order numerical derivatives, and probably restricts its use in all but the simplest of cases to the next generation of supercomputers with massive numbers of parallel processors and much larger word sizes. In Stage Three, various methods by which the original structural model can be simplified, given a non-unique Stage One calibration, are entertained.

7.
Since the Age of Enlightenment, most philosophers have associated reasoning with the rules of probability and logic. This association has been enhanced over the years and now incorporates the theory of fuzzy logic as a complement to probability theory, leading to the concept of fuzzy probability. Our insight here is to integrate the concept of validity into the notion of fuzzy probability within an extended fuzzy logic (FLe) framework, in keeping with the notion of collective intelligence. In this regard, we propose a novel framework of possibility–probability–validity distribution (PPVD). The proposed distribution is applied to a real-world setting of actual judicial cases to examine the role of validity measures in automated judicial decision-making within a fuzzy probabilistic framework. We compute valid fuzzy probabilities of conviction and acquittal based on different factors. This determines a possible overall hypothesis for the decision of a case, which is valid only to a degree. Validity is computed by aggregating the validities of all the involved factors, which are obtained from a factor vocabulary based on empirical data. We then map the combined validity, based on the Jaccard similarity measure, into linguistic forms so that a human can understand the results. The PPVDs obtained from the relevant factors in a given case then yield the final valid fuzzy probabilities for conviction and acquittal. Finally, since the judge has to make a decision, we provide a numerical measure. Our approach supports the proposed hypothesis within the three-dimensional context of probability, possibility, and validity, improving the ability to solve problems with incomplete, unreliable, or ambiguous information and to deliver a more reliable decision.

8.
The discrete Wasserstein barycenter problem is a minimum-cost mass transport problem for a set of discrete probability measures. Although an exact barycenter is computable through linear programming, the underlying linear program can be extremely large. For worst-case input, the best known linear programming formulation is exponential in the number of variables but has a low number of constraints, making it an interesting candidate for column generation. In this paper, we devise and study two column generation strategies: a natural one based on a simplified computation of reduced costs, and one through a Dantzig–Wolfe decomposition. For the latter, we produce efficiently solvable subproblems, namely, a pricing problem in the form of a classical transportation problem. The two strategies begin with an efficient computation of an initial feasible solution. While the structure of the constraints leads to the computation of the reduced costs of all remaining variables during setup, both approaches may outperform a computation using the full program in speed, and dramatically so in memory requirement. In our computational experiments, we show that, depending on the input, either strategy can become the best choice.
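For intuition, here is a tiny full linear program for a discrete barycenter on a shared one-dimensional grid, solved directly with scipy. This is the straightforward LP rather than the column generation or Dantzig–Wolfe schemes studied in the paper, and the instance data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Two measures on a shared 1-D grid of m points; barycenter supported on the same grid.
m, N = 5, 2
grid = np.linspace(0.0, 1.0, m)
C = np.abs(grid[:, None] - grid[None, :]) ** 2          # squared-distance cost
mus = np.array([[0.1, 0.2, 0.4, 0.2, 0.1],
                [0.3, 0.3, 0.2, 0.1, 0.1]])
lam = np.full(N, 1.0 / N)                               # barycenter weights

# Variable layout: [vec(T^1), ..., vec(T^N), z]; T^k is m x m, z is the barycenter mass.
nT = N * m * m
c = np.concatenate([lam[k] * C.ravel() for k in range(N)] + [np.zeros(m)])

A_eq, b_eq = [], []
for k in range(N):
    off = k * m * m
    for j in range(m):                                  # second marginal of T^k equals mu_k
        row = np.zeros(nT + m)
        row[off + j: off + m * m: m] = 1.0
        A_eq.append(row); b_eq.append(mus[k, j])
    for i in range(m):                                  # first marginal of T^k equals z
        row = np.zeros(nT + m)
        row[off + i * m: off + (i + 1) * m] = 1.0
        row[nT + i] = -1.0
        A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * (nT + m), method="highs")
z = res.x[nT:]
print("barycenter weights:", np.round(z, 3), " objective:", round(res.fun, 4))
```

Even this toy instance has N·m² + m = 55 variables, which illustrates why the paper turns to column generation for larger inputs.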

9.
This paper addresses the use of incomplete information on both multi-criteria alternative values and importance weights in evaluating decision alternatives. Incomplete information frequently takes the form of strict inequalities, such as strict orders and strict bounds. En route to prioritizing alternatives, the majority of previous studies have replaced these strict inequalities with weak inequalities by employing a small positive number. As this replacement closes the feasible region of decision parameters, it circumvents certain troubling questions that arise when utilizing a mathematical programming approach to evaluate alternatives. However, there are no hard and fast rules for selecting this small value and, even if the choice is possible, the resultant prioritizations depend profoundly on that choice. The method developed herein addresses and overcomes this drawback, and allows for dominance and potential optimality among alternatives without selecting any small value for the strict preference information. Given strict information on criterion weights alone, we form a linear program and solve it via a two-stage method. When both alternative values and weights are provided in the form of strict inequalities, we first construct a nonlinear program, transform it into a linear programming equivalent, and finally solve this linear program via the same two-stage method. One application of this methodology to a market entry decision, a salient subject in the area of international marketing, is demonstrated in detail herein.

10.
This paper computes the Harsanyi–Selten solution for a family of two-person bargaining games with incomplete information where one player has two possible types while the other player has only one possible type. The actual computation procedure is also outlined.

11.
Incomplete fuzzy preference relations, incomplete multiplicative preference relations, and incomplete linguistic preference relations are very useful to express decision makers’ incomplete preferences over attributes or alternatives in the process of decision making under fuzzy environments. The aim of this paper is to investigate fuzzy multiple attribute group decision making problems where the attribute values are represented in intuitionistic fuzzy numbers and the information on attribute weights is provided by decision makers by means of one or some of the different preference structures, including weak ranking, strict ranking, difference ranking, multiple ranking, interval numbers, incomplete fuzzy preference relations, incomplete multiplicative preference relations, and incomplete linguistic preference relations. We transform all individual intuitionistic fuzzy decision matrices into the interval decision matrices and construct their expected decision matrices, and then aggregate all these expected decision matrices into a collective one. We establish an integrated model by unifying the collective decision matrix and all the given different structures of incomplete weight preference information, and develop an integrated model-based approach to interacting with the decision makers so as to adjust all the inconsistent incomplete fuzzy preference relations, inconsistent incomplete linguistic preference relations and inconsistent incomplete multiplicative preference relations into the ones with acceptable consistency. The developed approach can derive the attribute weights and the ranking of the alternatives directly from the integrated model, and thus it has the following prominent characteristics: (1) it does not need to construct the complete fuzzy preference relations, complete linguistic preference relations and complete multiplicative preference relations from the incomplete fuzzy preference relations, incomplete linguistic preference relations and incomplete multiplicative preference relations, respectively; (2) it does not need to unify the different structures of incomplete preferences, and thus can simplify the calculation and avoid distorting the given preference information; and (3) it can sufficiently reflect and adjust the subjective desirability of decision makers in the process of interaction. A practical example is also provided to illustrate the developed approach.

12.
The sensitivity of histogram computation to the choice of a reference interval and number of bins can be attenuated by replacing the crisp partition on which the histogram is built by a fuzzy partition. This involves replacing the crisp counting process by a distributed (weighted) voting process. The counterpart to this low sensitivity is some confusion in the count values: a value of 10 in the accumulator associated with a bin can mean 10 observations in the bin or 40 observations near the bin. This confusion can bias the statistical decision process based on such a histogram. In a recent paper, we proposed a method that links the probability measure associated with any subset of the reference interval with the accumulator values of a fuzzy partition-based histogram. The method consists of transferring counts associated with each bin proportionally to its interaction with the considered subset. Two methods have been proposed, called precise and imprecise pignistic transfer. Imprecise pignistic transfer accounts for the interactivity of two consecutive cells in order to propagate, in the estimated probability measure, counting confusion due to fuzzy granulation. Imprecise pignistic transfer has been conjectured to include precise pignistic transfer. The present article proposes a proof of this conjecture.
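A minimal Python sketch of the distributed (weighted) counting that a fuzzy partition induces: each observation splits a unit vote between the two neighbouring triangular cells instead of falling into a single crisp bin. This only illustrates the fuzzy-partition histogram itself, not the paper's precise or imprecise pignistic transfer; the function and parameter names are illustrative.

```python
import numpy as np

def fuzzy_histogram(data, lo, hi, n_bins):
    """Accumulate a fuzzy-partition histogram with uniform triangular cells."""
    centers = np.linspace(lo, hi, n_bins)          # cell centres of the fuzzy partition
    width = centers[1] - centers[0]
    acc = np.zeros(n_bins)
    for x in np.clip(data, lo, hi):
        i = min(int((x - lo) / width), n_bins - 2)  # left neighbouring centre
        t = (x - centers[i]) / width                # membership split between cells i, i+1
        acc[i] += 1.0 - t
        acc[i + 1] += t
    return centers, acc

rng = np.random.default_rng(1)
centers, acc = fuzzy_histogram(rng.normal(0, 1, 1000), -4, 4, 9)
print(np.round(acc, 1), "total vote mass:", acc.sum())   # total mass equals the sample size
```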

13.
Exploring incomplete data using visualization techniques
Visualization of incomplete data makes it possible to simultaneously explore the data and the structure of the missing values. This is helpful for learning about the distribution of the incomplete information in the data, and for identifying possible structures of the missing values and their relation to the available information. The main goal of this contribution is to stress the importance of exploring missing values using visualization methods and to present a collection of such visualization techniques for incomplete data, all of which are implemented in the R package VIM. Because this functionality is provided for a widely used statistical environment, visualization of missing values, imputation, and data analysis can all be done from within R without the need for additional software.
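A small Python sketch of the same idea outside R (the VIM package itself is an R package and is not used here): display the missingness pattern of a data matrix and the per-variable missing counts, using hypothetical data.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))
X[rng.random(X.shape) < 0.15] = np.nan            # inject roughly 15% missing values

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
# Left: missingness matrix (dark cells are missing entries).
ax1.imshow(np.isnan(X), aspect="auto", cmap="gray_r", interpolation="nearest")
ax1.set(title="missingness matrix", xlabel="variable", ylabel="observation")
# Right: number of missing values per variable.
ax2.bar(range(X.shape[1]), np.isnan(X).sum(axis=0))
ax2.set(title="missing values per variable", xlabel="variable", ylabel="count")
plt.tight_layout()
plt.show()
```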

14.
Variable elimination (VE) and join tree propagation (JTP) are two alternatives for inference in Bayesian networks (BNs). VE, which can be viewed as one-way propagation in a join tree, answers each query against the BN, meaning that computation can be repeated across queries. On the other hand, answering a single query with JTP involves two-way propagation, of which some computation may remain unused. In this paper, we propose marginal tree inference (MTI) as a new approach to exact inference in discrete BNs. MTI seeks to avoid recomputation, while at the same time ensuring that no constructed probability information remains unused. Thereby, MTI stakes out middle ground between VE and JTP. The usefulness of MTI is demonstrated in multiple probabilistic reasoning sessions.
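A compact Python sketch of plain variable elimination on a toy chain network A → B → C with hypothetical CPTs, to make the one-query-at-a-time flavour of VE concrete; this is not the paper's marginal tree inference.

```python
import numpy as np

# Hypothetical CPTs for the chain A -> B -> C (all variables binary).
P_A = np.array([0.6, 0.4])                       # P(A)
P_B_given_A = np.array([[0.7, 0.3],              # P(B | A), rows indexed by A
                        [0.2, 0.8]])
P_C_given_B = np.array([[0.9, 0.1],              # P(C | B), rows indexed by B
                        [0.4, 0.6]])

# Query P(C): eliminate A, then B, by summing each variable out of the product
# of the factors that mention it.
P_B = np.einsum("a,ab->b", P_A, P_B_given_A)     # sum_a P(A) P(B|A)
P_C = np.einsum("b,bc->c", P_B, P_C_given_B)     # sum_b P(B) P(C|B)
print("P(C) =", P_C)
```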

15.
We address the problem of assigning probabilities at discrete time instants for routing toll-free calls to a given set of call centers so as to minimize a weighted sum of transmission costs and load variability at the call centers during the next time interval. We model the problem as a tripartite graph and decompose the finding of an optimal probability assignment in the graph into the following problems: (i) estimating the true arrival rates at the nodes for the last time period; (ii) computing routing probabilities assuming that the estimates are correct. We use a simple approach for arrival rate estimation and solve the routing probability assignment by formulating it as a convex quadratic program and using the affine scaling algorithm to obtain an optimal solution. We further address a practical variant of the problem that involves changing the routing probabilities associated with k nodes in the graph, where k is a prespecified number, to minimize the objective function. This involves deciding which k nodes to select for changing probabilities and determining the optimal values of the probabilities. We solve this problem using a heuristic that ranks all subsets of k nodes using gradient information around a given probability assignment. The routing model and the heuristic are evaluated for speed of computation of optimal probabilities and load balancing performance using a Monte Carlo simulation. Empirical results for load balancing are presented for a tripartite graph with 99 nodes and 17 call center gates.
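A rough Python sketch of the routing-probability subproblem under assumed data; it uses a generic SLSQP solver rather than the affine scaling algorithm of the paper. Each source's routing probabilities must form a probability vector, and the objective trades off load variability at the centres against transmission cost.

```python
import numpy as np
from scipy.optimize import minimize

n_src, n_ctr = 4, 3
rng = np.random.default_rng(3)
arrivals = rng.uniform(50, 150, n_src)              # estimated arrival rates per source
cost = rng.uniform(1.0, 3.0, (n_src, n_ctr))        # per-call transmission cost
target = np.full(n_ctr, arrivals.sum() / n_ctr)     # perfectly balanced load per centre
w = 0.05                                            # weight on the transmission-cost term

def objective(p_flat):
    P = p_flat.reshape(n_src, n_ctr)                # P[s, c] = routing probability
    load = arrivals @ P                             # expected load at each centre
    return np.sum((load - target) ** 2) + w * np.sum(arrivals[:, None] * cost * P)

# Each source's row of P must sum to one; entries stay in [0, 1].
constraints = [{"type": "eq",
                "fun": lambda p, i=i: p.reshape(n_src, n_ctr)[i].sum() - 1.0}
               for i in range(n_src)]
x0 = np.full(n_src * n_ctr, 1.0 / n_ctr)
res = minimize(objective, x0, bounds=[(0, 1)] * (n_src * n_ctr),
               constraints=constraints, method="SLSQP")
print(np.round(res.x.reshape(n_src, n_ctr), 3))
```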

16.
Point estimators for the parameters of the component lifetime distribution in coherent systems are developed under the assumption that the component lifetimes are independent and identically Weibull distributed. We study both complete and incomplete information under continuous monitoring of the essential component lifetimes. First, we prove that the maximum likelihood estimator (MLE) under complete information based on progressively Type-II censored system lifetimes uniquely exists, and we present two approaches to compute the estimates. Furthermore, we consider an ad hoc estimator, a max-probability plan estimator, and the MLE for the parameters under incomplete information. In order to compute the MLEs, we consider a direct maximization of the likelihood and an EM-algorithm-type approach, respectively. In all cases, we illustrate the results by simulations of the five-component bridge system and the 10-component parallel system, respectively.
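A minimal Python sketch for the complete-information, uncensored i.i.d. case only; the paper's progressively Type-II censored system-lifetime setting is considerably more involved. It fits a two-parameter Weibull model to simulated component lifetimes by maximum likelihood.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_shape, true_scale = 2.5, 100.0
# Simulated i.i.d. component lifetimes from a Weibull(shape, scale) model.
lifetimes = true_scale * rng.weibull(true_shape, size=200)

# Fixing the location at zero gives the usual two-parameter Weibull MLE.
shape, loc, scale = stats.weibull_min.fit(lifetimes, floc=0)
print(f"estimated shape ~ {shape:.2f}, estimated scale ~ {scale:.1f}")
```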

17.
A key issue in applying multi-attribute project portfolio models is specifying the baseline value – a parameter which defines how valuable not implementing a project is relative to the range of possible project values. In this paper we present novel baseline value specification techniques which admit incomplete preference statements and, unlike existing techniques, make it possible to model problems where the decision maker would prefer to implement a project with the least preferred performance level in each attribute. Furthermore, we develop computational methods for identifying the optimal portfolios and the value-to-cost-based project rankings for all baseline values. We also show how these results can be used to (i) analyze how sensitive project and portfolio decision recommendations are to variations in the baseline value and (ii) provide project decision recommendations in a situation where only incomplete information about the baseline value is available.

18.
I discuss some aspects of the distinction between ontic and epistemic views of sets as representations of imprecise or incomplete information. In particular, I consider its implications for imprecise probability representations: credal sets and sets of desirable gambles. It is emphasized that the interpretation of the same mathematical object can be different depending on the point of view from which this element is considered. In the case of fuzzy information on a random variable, it is possible to define a possibility distribution on the simplex of probability distributions. I add some comments about the properties of this possibility distribution.

19.
The main purpose of this article is to study how incomplete information coming from different sources can sometimes be put together. After interrogating the qth source, a variable Xq is defined whose values are possible states. It is shown that {Xq} forms a Markov chain. It is shown that the probability of not getting additional information at the qth source is the averaged commonality induced by Xq. Other related questions are looked at. Then, strategies are defined as to what to do when the probabilities of the different states are not known. A possible way to proceed is to view the possible states as environments in which decisions must be made. Different strategies, under different degrees of reliability of the sources, are sketched. The belief function is used to build different payoffs, and fuzzy versions of the problems are studied.

20.
Using classical probability models, computer simulation, simplification by congruence modulo 2 over the integers, and the theory of linear dependence in vector spaces, we analyze, compute, and prove results on the probability that a determinant whose entries are all integers takes an odd value. The different approaches lead to the same consistent conclusion: the probability that the value of an n-th order integer determinant is odd is ■.
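A short Monte Carlo sketch of the kind of computer simulation the abstract mentions: reduce each random integer matrix modulo 2 and check invertibility over GF(2), since the determinant is odd exactly when the reduced matrix is nonsingular. The entry range and matrix size are arbitrary illustrative choices.

```python
import numpy as np

def is_invertible_mod2(M):
    """Gaussian elimination over GF(2); True iff det(M) is odd."""
    A = (M % 2).astype(np.uint8)
    n = A.shape[0]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r, col]), None)
        if pivot is None:
            return False                       # no pivot: determinant is even
        A[[col, pivot]] = A[[pivot, col]]      # swap the pivot row into place
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]                 # eliminate the column elsewhere
    return True                                # full rank mod 2: determinant is odd

rng = np.random.default_rng(5)
n, trials = 4, 20000
hits = sum(is_invertible_mod2(rng.integers(0, 100, (n, n))) for _ in range(trials))
print(f"estimated P(det odd) for n = {n}: {hits / trials:.3f}")
```

Under the assumption that the entries are uniformly distributed modulo 2 (which holds for entries drawn uniformly from 0 to 99), the estimate for n = 4 should be close to (1 − 1/2)(1 − 1/4)(1 − 1/8)(1 − 1/16) ≈ 0.308.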
