Similar Documents
20 similar documents found (search time: 46 ms)
1.
This paper is concerned with the derivation of guides to decision making in a set of circumstances intermediate between the classic extremes of pure risk and pure uncertainty. It is assumed that the decision maker can specify a subjective strict ranking of the probabilities of states of nature, in which case, given knowledge of the payoffs of a strategy, it is possible to find, in a straightforward fashion, maximum and minimum expected payoffs for the strategy. If the information contained in the strict ranking is available to the decision maker, then failing to take it into account must adversely affect the quality of decision making. The paper discusses both the interpretation and the possible practical uses of extreme expected values in decision making and includes a numerical example, based on an investment decision problem, to demonstrate the ease of use of the results obtained.
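The extreme expected payoffs described above can be computed directly: over the set of probability vectors satisfying the ranking p1 ≥ p2 ≥ … ≥ pn, the extreme points put probability 1/k on the first k states, so the maximum and minimum expected payoffs are just the best and worst prefix means. A minimal sketch (names and numbers are illustrative, not the paper's):

```python
# Sketch: max/min expected payoff of a strategy when the state probabilities
# satisfy the ranking p1 >= p2 >= ... >= pn. The extreme points of that
# constraint set are the distributions putting mass 1/k on the first k states,
# so scanning prefix means of the payoffs suffices.

def extreme_expected_payoffs(payoffs):
    """payoffs[i] is the payoff in state i, with states listed in ranked order."""
    prefix_means = []
    total = 0.0
    for k, x in enumerate(payoffs, start=1):
        total += x
        prefix_means.append(total / k)
    return max(prefix_means), min(prefix_means)

hi, lo = extreme_expected_payoffs([10.0, -2.0, 5.0])
# prefix means are 10, 4, 13/3, so hi = 10.0 and lo = 4.0
```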

2.
《Optimization》2012,61(4):463-476
We consider the problem of expected utility maximization for the two-agent case in a general semimartingale model. For this case a cooperative investment game is posed as follows: first, both agents' capital is pooled at the initial time and invested in a single trading strategy. Then at some time T0 one agent quits the cooperation and terminates the investment; the wealth is divided and each agent receives a share. During the time interval [T0, T], the other agent invests her share in a new trading strategy of her own. By stochastic optimization methods, with the help of the theory of backward stochastic differential equations, we give a characterization of Pareto optimal cooperative strategies and a characterization of the situations in which cooperation strictly Pareto dominates non-cooperation in our model.

3.
4.
The existence of an optimal strategy in robust utility maximization is addressed when the utility function is finite on the entire real line. A delicate problem in this case is to find a "good definition" of admissible strategies that admits an optimizer. Under certain assumptions, especially a kind of time-consistency property of the set ${\mathcal{P}}$ of probabilities which describes the model uncertainty, we show that an optimal strategy is obtained in the class of those whose wealths are supermartingales under all local martingale measures having a finite generalized entropy with respect to some ${P\in\mathcal{P}}$.

5.
The R&D manager is commonly faced with the problem of deciding which projects to fund to meet overall corporate and technical goals. Because outcomes can rarely be predicted with certainty, decisions aimed at striking a balance between cost and risk are likely to involve some amount of redundancy at the project level. The intent of this paper is to examine the difficulties that arise when trying to pursue a parallel strategy in the presence of multiple objectives. The basic elements of the problem include a set of projects, a set of objectives, the associated probability measures relating effort to success, budgetary and performance constraints, and a utility function defined on the range of outcomes. In the model it will be assumed that each project contributes to one or more objectives, and that the selection criterion is based on expected utility maximization. With this in mind, the problem is formulated as a probabilistic goal programme and solved with a heuristic that computes the K best funding schemes. Results are presented for a case involving the development of a non-petroleum-powered vehicle that demonstrate the robustness of the algorithm and the implications of the underlying decision rules.
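To make the "K best funding schemes" idea concrete, here is a toy brute-force version, not the paper's heuristic: the cost figures, success probabilities, and the scoring rule (expected number of objectives met, assuming independent project outcomes) are illustrative assumptions.

```python
from itertools import combinations

# Toy enumeration of the K best funding schemes within a budget.
# Score = expected number of objectives achieved by at least one funded
# project, assuming independent project successes (an illustrative rule).

def k_best_schemes(projects, budget, K):
    """projects: list of (cost, success_prob, objectives) tuples."""
    objectives = set()
    for _, _, objs in projects:
        objectives |= set(objs)
    scored = []
    for r in range(len(projects) + 1):
        for subset in combinations(range(len(projects)), r):
            cost = sum(projects[i][0] for i in subset)
            if cost > budget:
                continue
            score = 0.0
            for obj in objectives:
                p_all_fail = 1.0
                for i in subset:
                    if obj in projects[i][2]:
                        p_all_fail *= 1.0 - projects[i][1]
                score += 1.0 - p_all_fail
            scored.append((score, subset))
    scored.sort(key=lambda t: (-t[0], t[1]))
    return scored[:K]

projects = [(4, 0.5, {"A"}), (3, 0.6, {"A", "B"}), (5, 0.7, {"B"})]
best = k_best_schemes(projects, budget=8, K=3)
# best funding scheme here is projects {1, 2} with expected score 1.48
```

The exhaustive loop is exponential in the number of projects, which is exactly why the paper resorts to a heuristic at realistic problem sizes.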

6.
Stochastic random phenomena considered in von Neumann–Morgenstern utility theory constitute only a part of all possible random phenomena (Kolmogorov, 1986). We show that any sequence of observed consequences generates a corresponding sequence of frequency distributions, which in general does not have a single limit point but rather a non-empty closed limit set in the space of finitely additive probabilities. This approach to randomness makes it possible to generalize expected utility theory to cover decision problems under nonstochastic random events. We derive the maxmin expected utility representation for preferences over closed sets of probability measures. The derivation is based on the axiom of preference for stochastic risk, i.e. the decision maker wishes to reduce a set of probability distributions to a single one. This complements Gilboa and Schmeidler's (1989) treatment of the maxmin expected utility rule with an objective interpretation of multiple priors.
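The maxmin expected utility rule itself is simple to state operationally: evaluate each act by its worst-case expected utility over the set of priors and pick the best act. A minimal sketch with illustrative numbers (not the paper's construction):

```python
# Maxmin expected utility: each act is evaluated by the minimum of its
# expected utility over a set of priors; the chosen act maximizes that minimum.

def maxmin_eu(utilities, priors):
    """utilities[act][state]; priors: list of probability vectors over states."""
    def worst_case(u):
        return min(sum(p * x for p, x in zip(prior, u)) for prior in priors)
    return max(range(len(utilities)), key=lambda a: worst_case(utilities[a]))

acts = [[10.0, 0.0],   # risky: great in state 0, nothing in state 1
        [4.0, 4.0]]    # safe: constant payoff
priors = [[0.5, 0.5], [0.2, 0.8]]
choice = maxmin_eu(acts, priors)
# worst cases: risky -> 2.0 (under the second prior), safe -> 4.0, so choice = 1
```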

7.
A discrete search model for one of many objects hidden in two boxes is studied. The number of objects is assumed to be a random variable with a known prior distribution. When box i is searched, a cost c_i > 0 is paid, and the conditional probability of finding a specific object, given that it was hidden there, is α_i. We are interested in determining a search strategy which finds at least one object with minimum expected cost. Zones of the state space for which Blackwell's rule [3] is optimal are characterized. Based on these results, an algorithm for constructing an optimal search sequence is suggested and demonstrated in the case where the number of hidden objects has a geometric distribution.
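For orientation, a myopic index rule of the Blackwell type can be sketched for the simpler single-object case (a simplification of the paper's many-object model; all numbers are illustrative): always search the box with the highest detection-probability-to-cost index, updating the location posterior by Bayes' rule after each failed search.

```python
# Myopic Blackwell-type rule for one object hidden in two boxes (a toy
# simplification of the model above): search the box maximizing
# alpha_i * p_i / c_i, then update the posterior after an unsuccessful search.

def search_sequence(pi, alpha, cost, n_steps):
    """pi[i]: prior prob the object is in box i; alpha[i]: detection prob; cost[i] > 0."""
    seq = []
    p = list(pi)
    for _ in range(n_steps):
        idx = max((0, 1), key=lambda i: alpha[i] * p[i] / cost[i])
        seq.append(idx)
        # Bayes update given an unsuccessful search of box idx.
        miss = 1.0 - alpha[idx] * p[idx]
        p[idx] = p[idx] * (1.0 - alpha[idx]) / miss
        p[1 - idx] = p[1 - idx] / miss
    return seq

seq = search_sequence(pi=[0.7, 0.3], alpha=[0.5, 0.9], cost=[1.0, 1.0], n_steps=3)
# with equal costs the rule alternates as the posterior shifts: [0, 1, 0]
```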

8.
In a standard single-period model under risk, we formalize and discuss an intuitive criterion for the binary comparison of financial investments. Two investments – x and y – are compared by calculating the present value of x’s payoffs using the state dependent returns of y as discount factors. The induced preference is asymmetric but exhibits intransitive indifference. If the feasible set is convex, then the criterion selects a unique maximum element. Interestingly, it can be shown that the induced preference can be represented by a one-way expected utility representation employing logarithmic utility. Besides giving a relevant and illustrative example for a one-way utility representation, this result provides a new interpretation of using logarithmic utility for expected utility based decision-making.
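One plausible reading of the comparison, purely for illustration and not the paper's exact formalization: discount x's state payoffs by y's gross state returns; with both investments priced at 1, x is preferred to y when the resulting present value exceeds 1.

```python
# Illustrative sketch of the binary criterion: present value of x's payoffs,
# discounted state by state with y's gross returns, compared against a price
# of 1 (the normalization here is an assumption, not from the paper).

def prefers(x_returns, y_returns, probs):
    pv = sum(p * x / y for p, x, y in zip(probs, x_returns, y_returns))
    return pv > 1.0

probs = [0.5, 0.5]
x = [1.30, 0.90]   # gross returns of investment x by state
y = [1.05, 1.05]   # a riskless alternative
result = prefers(x, y, probs)
# pv = 2.2 / 2.1 > 1, so x is preferred
```

Note that this comparison is inherently pairwise, which is consistent with the asymmetry and intransitive indifference the abstract highlights.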

9.
An equity-indexed annuity (EIA) is a hybrid between a variable and a fixed annuity that allows the investor to participate in the stock market, and earn at least a minimum interest rate. The investor sacrifices some of the upside potential for the downside protection of the minimum guarantee. Because EIAs allow investors to participate in equity growth without the downside risk, their popularity has grown rapidly.

An optimistic EIA owner might consider surrendering an EIA contract, paying a surrender charge, and investing the proceeds directly in the index to earn the full (versus reduced) index growth, while using a risk-free account for downside protection. Because of the popularity of these products, it is important for individuals and insurers to understand the optimal policyholder behavior.

We consider an EIA investor who seeks the surrender strategy and post-surrender asset allocation strategy that maximizes the expected discounted utility of bequest. We formulate a variational inequality and a Hamilton-Jacobi-Bellman equation that govern the optimal surrender strategy and post-surrender asset allocation strategy, respectively. We examine the optimal strategies and how they are affected by the product features, model parameters, and mortality assumptions. We observe that in many cases, the “no-surrender” region is an interval (w_l, w_u); i.e., that there are two free boundaries. In these cases, the investor surrenders the EIA contract if the fund value becomes too high or too low. In other cases, there is only one free boundary; the lower (or upper) surrender threshold vanishes. In these cases, the investor holds the EIA, regardless of how low (or high) the fund value goes. For a special case, we prove a succinct and intuitive condition on the model parameters that dictates whether one or two free boundaries exist.

10.
Tail-risk measurement is a key issue in risk management. Using the buffered probability of exceedance (bPOE) model, this paper quantifies the risk probability distribution associated with different expected losses and constructs a hedging strategy that minimizes the probability of "fat-tail events" under a Conditional Value-at-Risk constraint, thereby extending the existing literature to the dual dimensions of expected loss and risk probability. Using both empirical data statistics and parametrically fitted distributions, we provide stable solution sets of buffered exceedance probabilities for different risk thresholds. The results show that, whether expected losses follow a fat-tailed or a normal distribution, the buffered probability of exceedance model significantly reduces market risk and the probability of potential "fat-tail events", and yields hedge ratios that are more stable than those of the minimum-variance approach.
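The bPOE of a loss sample at a threshold can be computed via its standard one-dimensional reduction, bPOE(x) = min over a ≥ 0 of E[max(a(X − x) + 1, 0)]. A minimal empirical sketch using grid search (not the paper's estimation procedure; sample and grid are illustrative):

```python
# Empirical buffered probability of exceedance (bPOE) at a loss threshold,
# via the reduction bPOE(x) = min_{a >= 0} E[max(a*(X - x) + 1, 0)],
# approximated by a grid search over a.

def bpoe(sample, threshold, grid_size=2000, a_max=50.0):
    best = 1.0  # a = 0 gives E[1] = 1
    n = len(sample)
    for k in range(1, grid_size + 1):
        a = a_max * k / grid_size
        val = sum(max(a * (x - threshold) + 1.0, 0.0) for x in sample) / n
        best = min(best, val)
    return best

losses = [0.0, 1.0, 2.0, 3.0]
p = bpoe(losses, threshold=3.0)
# at the sample maximum, bPOE reduces to P(X = max) = 0.25 here
```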

11.
Several criteria, such as CV, C_p, AIC, CAIC, and MAIC, are used for selecting variables in linear regression models. It may be noted that C_p was proposed as an estimator of the expected standardized prediction error, whereas the target risk function of CV might be regarded as the expected prediction error R_PE. On the other hand, the target risk function of AIC, CAIC, and MAIC is the expected log-predictive likelihood. In this paper, we propose a prediction error criterion, PE, which is an estimator of the expected prediction error R_PE; consequently, it is also a competitor of CV. We show that PE is an unbiased estimator when the true model is contained in the full model, a property established without the assumption of normality. In fact, PE is demonstrated to be more faithful to its risk function than CV. The prediction error criterion PE is extended to the multivariate case. Furthermore, using simulations, we examine some peculiarities of all these criteria.
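For reference, the C_p criterion mentioned above has a standard textbook form, with the error variance estimated from the full model. A sketch on synthetic data (the data-generating setup is an illustration, not the paper's simulation design):

```python
import numpy as np

# Mallows' C_p for a candidate subset of predictors: C_p = RSS_p / s^2 - n + 2p,
# with s^2 the residual variance estimate from the full model. For a correct
# submodel, C_p should be close to p.

def mallows_cp(X_full, X_sub, y):
    n = len(y)
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r)
    sigma2 = rss(X_full) / (n - X_full.shape[1])
    p = X_sub.shape[1]
    return rss(X_sub) / sigma2 - n + 2 * p

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                 # irrelevant predictor
y = 1.0 + 2.0 * x1 + rng.normal(size=n)
ones = np.ones(n)
X_full = np.column_stack([ones, x1, x2])
X_true = np.column_stack([ones, x1])
cp_true = mallows_cp(X_full, X_true, y)  # should be small, near p = 2
```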

12.
This paper analyzes the F-policy M/M/1/K queueing system with working vacation and an exponential startup time. The F-policy deals with the issue of controlling arrivals to a queueing system, and the server requires a startup time before allowing customers to enter the system. For queueing systems with working vacation, the server can still provide service to customers, rather than completely stopping service, during a vacation period. The matrix-analytic method is applied to develop the steady-state probabilities and then obtain several system characteristics. We construct the expected cost function and formulate an optimization problem to find the minimum cost. The direct search method and the Quasi-Newton method are implemented to determine the optimal system capacity K, the optimal threshold F, and the optimal service rates (μ_B, μ_V) at the minimum cost. A sensitivity analysis is conducted to investigate the effect of changes in the system parameters on the expected cost function. Finally, numerical examples are provided for illustration purposes.
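The direct search step over the integer decision variables (K, F) can be sketched as a plain exhaustive scan. The cost function below is a hypothetical convex stand-in, not the paper's steady-state expected cost, which depends on the matrix-analytic probabilities:

```python
# Direct (exhaustive) search over integer capacity K and threshold F < K,
# minimizing a cost function. The toy cost here is an illustrative assumption:
# capacity cost grows in K, blocking cost falls in K, and F trades off
# startup cost against waiting cost.

def optimize_capacity(cost, K_max):
    best = None
    for K in range(1, K_max + 1):
        for F in range(0, K):          # the threshold F must lie below K
            c = cost(K, F)
            if best is None or c < best[0]:
                best = (c, K, F)
    return best

toy_cost = lambda K, F: 2.0 * K + 50.0 / K + (F - K / 2.0) ** 2
best_cost, K_opt, F_opt = optimize_capacity(toy_cost, K_max=20)
# for this toy cost the optimum is K = 5, F = 2, cost 20.25
```

In practice the continuous service rates (μ_B, μ_V) would then be refined by a Quasi-Newton step at each candidate (K, F), as the paper describes.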

13.
The European option with transaction costs is studied. The cost of making a transaction is taken to be proportional, by a factor λ, to the value (in dollars) of stock traded. When there are no transaction costs (i.e. when λ = 0), the well-known Black-Scholes strategy tells how to hedge the option. Since no non-trivial perfect hedging strategy exists when λ > 0 (see (Ann. Appl. Probab. 5(2) (1995) 327)), we instead try to maximize the expected utility attainable. We seek to understand the effect transaction costs have on the maximum attainable expected utility over all strategies, when λ is small but non-zero. It turns out that transaction costs diminish the expected utility by an amount which has the order of magnitude λ^{2/3}. We will compute that correction explicitly modulo an error which is small compared to λ^{2/3}. We will exhibit an explicit strategy whose expected utility differs from the maximum attainable expected utility by an error small in comparison to λ^{2/3}.

14.
In 1934, Whitney raised the question of how to recognize whether a function f defined on a closed subset X of ℝ^n is the restriction of a function of class 𝒞^p. A necessary and sufficient criterion was given in the case n = 1 by Whitney, using limits of finite differences, and in the case p = 1 by Glaeser (1958), using limits of secants. We introduce a necessary geometric criterion, for general n and p, involving limits of finite differences, that we conjecture is sufficient at least if X has a "tame topology". We prove that, if X is a compact subanalytic set, then there exists q = q_X(p) such that the criterion of order q implies that f is 𝒞^p. The result gives a new approach to higher-order tangent bundles (or bundles of differential operators) on singular spaces. Oblatum 21-XI-2001 & 3-VII-2002. Published online: 8 November 2002. Research partially supported by the following grants: E.B. – NSERC OGP0009070; P.M. – NSERC OGP0008949 and the Killam Foundation; W.P. – KBN 5 PO3A 005 21.
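For orientation, the classical n = 1 criterion mentioned above can be paraphrased as follows (our sketch of the standard statement, not the paper's formulation): f on a closed X ⊂ ℝ is the restriction of a 𝒞^p function if and only if, for every a ∈ X, the p-th divided differences converge as their argument points in X coalesce at a, uniformly on compact subsets, the limit then playing the role of f^(p)(a)/p!:

```latex
% Classical n = 1 criterion (paraphrase): f extends to a C^p function iff
% the p-th divided differences admit limits as the points coalesce.
\[
  f[x_0,\dots,x_p] \;=\; \sum_{i=0}^{p} \frac{f(x_i)}{\prod_{j \neq i} (x_i - x_j)}
  \;\longrightarrow\; \frac{f^{(p)}(a)}{p!}
  \qquad \text{as distinct } x_0,\dots,x_p \in X \text{ tend to } a .
\]
```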

15.
The alternating step generator is a well-known keystream generator consisting of two stop/go clocked LFSRs, LFSR1 and LFSR2, whose clocks are controlled by another LFSR, LFSR3, which is clocked regularly. A probabilistic analysis of this generator is conducted which shows that the posterior probabilities of individual bits of the first derivatives of the regularly clocked LFSR1 and LFSR2 sequences, when conditioned on a given segment of the first derivative of the keystream sequence, can be computed efficiently in a number of probabilistic models of interest. The expected values of these probabilities, for a random keystream sequence, are derived by an approximate theoretical analysis and are also verified by systematic computer experiments. It is pointed out that these posterior probabilities can be enhanced in a resynchronization scenario and thus used for a low-complexity fast correlation attack on the two LFSRs. More generally, it is argued that even without resynchronization these probabilities may differ from one half sufficiently for fast correlation attacks based on iterative decoding algorithms to succeed, although with increased complexity. A related method for computing the posterior probabilities of individual bits of the LFSR3 sequence, when conditioned on both the keystream sequence and the LFSR1 and LFSR2 sequences, is also developed. As these posterior probabilities are much more different from one half, they can be used for a low-complexity fast correlation attack on LFSR3, provided that the initial states of LFSR1 and LFSR2 are previously reconstructed.
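To fix ideas, the generator itself can be sketched in a few lines, assuming the standard construction: LFSR3 is clocked every step, its output bit selects which of LFSR1/LFSR2 is stepped, and the keystream bit is the XOR of their current outputs. The taps and seeds below are small illustrative choices, far shorter than anything cryptographically meaningful:

```python
# Minimal alternating step generator (standard construction; toy parameters).

class LFSR:
    def __init__(self, taps, state):
        self.taps = taps          # feedback tap positions into the state list
        self.state = list(state)  # bit list; state[0] is the output end

    def step(self):
        fb = 0
        for t in self.taps:
            fb ^= self.state[t]
        out = self.state[0]
        self.state = self.state[1:] + [fb]
        return out

    @property
    def out(self):
        return self.state[0]

def asg_keystream(r1, r2, r3, n):
    bits = []
    for _ in range(n):
        if r3.step():   # LFSR3 output selects which register is clocked
            r1.step()
        else:
            r2.step()
        bits.append(r1.out ^ r2.out)
    return bits

ks = asg_keystream(LFSR([0, 1], [1, 0, 0]),
                   LFSR([0, 3], [1, 1, 0, 1]),
                   LFSR([0, 2], [0, 1, 1]), 16)
```

The stop/go clocking is precisely what the paper's analysis of first derivatives (XORs of consecutive bits) exploits: each output register advances only on roughly half of the steps.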

16.
Mean–variance portfolio choice is often criticized as sub-optimal in the more general expected utility framework: the expected utility framework takes into account higher moments that mean–variance analysis ignores. A body of research suggests that mean–variance choice, though arguably sub-optimal, yields portfolios, and expected utilities, very close to those of expected utility maximization, basing this evaluation on in-sample analysis, where mean–variance choice is sub-optimal by definition. To clarify this existing research, this study provides a framework for comparing the in-sample and out-of-sample performance of mean–variance portfolios against expected utility maximizing portfolios. Our in-sample results confirm those of earlier studies. Our out-of-sample results, however, show that the expected utility model performs worse, and its inferiority is more pronounced for the preferences and constraints under which in-sample mean–variance approximations are weakest. We argue that, in addition to its elegance and simplicity, the mean–variance model extracts more information from sample data because it uses the covariance matrix of returns; the expected utility model may reach its optimal solution without using information from the covariance matrix.
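The mean–variance portfolio referred to here comes from the quadratic program max_w w'μ − (γ/2) w'Σw, whose unconstrained solution is w* = Σ⁻¹μ/γ, making the covariance matrix Σ an explicit input. A minimal sketch with illustrative inputs (the study's empirical setup and constraints are not reproduced):

```python
import numpy as np

# Unconstrained mean-variance weights: w* = Sigma^{-1} mu / gamma,
# from maximizing w'mu - (gamma/2) w'Sigma w.

def mv_weights(mu, Sigma, gamma):
    return np.linalg.solve(Sigma, mu) / gamma

mu = np.array([0.08, 0.05])                 # expected excess returns (illustrative)
Sigma = np.array([[0.04, 0.00],
                  [0.00, 0.01]])            # return covariance matrix (illustrative)
w = mv_weights(mu, Sigma, gamma=4.0)
# with this diagonal Sigma: w = [0.08/0.04, 0.05/0.01] / 4 = [0.5, 1.25]
```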

17.
We list and prove a family of binomial identities by calculating in two ways the probabilities of approximate saddlepoints occurring in random m×n matrices. The identities are easily seen to be equivalent to the evaluation of a family of Gauss 2F1 polynomials according to a formula of Vandermonde. We also consider some implications concerning the number of approximate pure strategy Nash equilibria we can expect in large matrix zero-sum and team games.
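A quick Monte Carlo experiment illustrates the kind of probability the identities concern, using the classical closed form for exact (rather than approximate) saddlepoints: for an m×n matrix with iid continuous entries, a pure saddlepoint exists with probability m!n!/(m+n−1)!, which is 2/3 for 2×2.

```python
import random

# Monte Carlo check of the classical saddlepoint probability m!n!/(m+n-1)!
# for random matrices with iid continuous entries (2/3 in the 2x2 case).

def has_saddlepoint(mat):
    # A pure saddlepoint exists iff maximin equals minimax.
    row_mins = [min(row) for row in mat]
    col_maxs = [max(col) for col in zip(*mat)]
    return max(row_mins) == min(col_maxs)

def estimate(m, n, trials, seed=1):
    rng = random.Random(seed)
    hits = sum(
        has_saddlepoint([[rng.random() for _ in range(n)] for _ in range(m)])
        for _ in range(trials)
    )
    return hits / trials

est = estimate(2, 2, trials=20000)   # theoretical value: 2!*2!/3! = 2/3
```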

18.
We give a construction of the maximum and the minimum of the set of nondecreasing approximants in the discrete case, where is a positive convex function. A characterization of that set is also obtained.

19.
In this paper we deal with the set of k-additive belief functions dominating a given capacity. We follow the line introduced by Chateauneuf and Jaffray for dominating probabilities and continued by Grabisch for general k-additive measures. First, we show that the conditions for the general k-additive case lead to a very wide class of functions, and consequently the properties obtained for probabilities are no longer valid. On the other hand, we show that these conditions cannot be improved. We resolve this situation by imposing additional constraints on the dominating functions. Then, we consider the more restrictive case of k-additive belief functions, for which a similar result with stronger conditions is proved. Although better, this result is still not completely satisfactory and, as before, the conditions cannot be strengthened. However, when the initial capacity is a belief function, we find a subfamily of the set of dominating k-additive belief functions from which any other dominating k-additive belief function can be derived, and for which the conditions are even more restrictive, obtaining the natural extension of the result for probabilities. Finally, we apply these results in the fields of Social Welfare Theory and Decision under Risk.

20.
We review de Finetti’s two coherence criteria for determinate probabilities: coherence_1, defined in terms of previsions for a set of events that are undominated by the status quo (previsions immune to a sure loss), and coherence_2, defined in terms of forecasts for events undominated in Brier score by a rival forecast. We propose a criterion of IP-coherence_2 based on a generalization of the Brier score for IP-forecasts that uses one-sided, lower and upper, probability forecasts. However, whereas the Brier score is a strictly proper scoring rule for eliciting determinate probabilities, we show that there is no real-valued strictly proper IP-score. Nonetheless, with respect to either of two decision rules – Γ-maximin or (Levi’s) E-admissibility + Γ-maximin – we give a lexicographic strictly proper IP-scoring rule that is based on the Brier score.
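The Brier score underlying both criteria is just the mean squared difference between probability forecasts and 0/1 outcomes; its strict propriety for determinate probabilities is what the abstract contrasts with the IP case. A minimal sketch with illustrative numbers:

```python
# Brier score of a sequence of probability forecasts against 0/1 outcomes.
# It is strictly proper: the expected score is uniquely minimized by
# forecasting the true probability.

def brier(forecasts, outcomes):
    n = len(forecasts)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / n

score = brier([0.8, 0.3, 0.9], [1, 0, 1])
# (0.04 + 0.09 + 0.01) / 3 = 0.14 / 3
```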
