Similar Documents

20 similar documents found.
1.
We analyze the concept of credibility in claim frequency in two generalized count models, the Mittag-Leffler and Weibull count models, which can handle both underdispersion and overdispersion in count data and nest the commonly used Poisson model as a special case. Using data from a Danish insurance company, we find evidence that the simple Poisson model can set the credibility weight to one even when only three years of individual experience data are available, because of the large heterogeneity among policyholders, thereby breaking down the credibility model. The generalized count models, on the other hand, allow the weight to adjust according to the number of years of experience available. We propose parametric estimators for the structural parameters in the credibility formula, using the mean and variance of the assumed distributions and maximum likelihood estimation over collective data. As an example, we show that the proposed parameters from the Mittag-Leffler model provide weights that are consistent with the idea of credibility. A simulation study investigates the stability of the maximum likelihood estimates from the Weibull count model. Finally, we extend the analysis to multiple lines of business and explain how our approach can be used to select profitable customers in cross-selling; customers can now be selected by estimating a function of their unknown risk profiles, namely the mean of the assumed distribution of their number of claims.
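For context, a minimal sketch (my own, not the paper's) of the classical Bühlmann credibility weight that such structural parameters feed into; the names `within_var` and `between_var` are hypothetical stand-ins for the expected process variance and the variance of hypothetical means:

```python
def credibility_weight(n_years: int, within_var: float, between_var: float) -> float:
    """Buhlmann credibility weight Z = n / (n + k),
    with k = E[process variance] / Var[hypothetical means]."""
    k = within_var / between_var
    return n_years / (n_years + k)

# Large heterogeneity among policyholders (between_var >> within_var) pushes k
# toward zero, so Z is close to 1 even with only three years of data -- the
# Poisson breakdown described in the abstract.
print(credibility_weight(3, within_var=0.2, between_var=5.0))  # ~0.99
```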

2.
The aim of this paper is to analyze the effects of uncontrolled co-production on the production planning and lot scheduling of multiple products. Co-production occurs when a proportion of a production run comes out as another product. This is typical in the process industry, where quality and process specifications can lead to diversified products. We assume that there is no demand substitution and each product has its own market. Furthermore, we assume that co-production cannot be controlled, due to technical and/or cost considerations. We introduce two models that extend the common-cycle economic lot scheduling problem (ELSP) setting to include uncontrolled co-production. In the first model we do not allow shortages and derive the optimal cycle time expression. In the second model we allow planned backorders and characterize the optimal solution in closed form. We provide a numerical study to gain insight into co-production. The results suggest that the cycle time increases with the co-production rate and the utilization of the system, while the effect of co-production on the long-run average cost does not exhibit a consistent pattern.
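As background, a minimal sketch of the optimal common cycle time in the textbook common-cycle ELSP without co-production or shortages, i.e., the baseline the paper extends; the formula and symbols are the standard textbook ones, not the paper's extended expression:

```python
import math

def common_cycle_time(setup_costs, holding_costs, demand_rates, production_rates):
    """Textbook common-cycle ELSP:
    T* = sqrt(2 * sum(K_i) / sum(h_i * d_i * (1 - d_i / p_i)))."""
    numerator = 2.0 * sum(setup_costs)
    denominator = sum(h * d * (1.0 - d / p)
                      for h, d, p in zip(holding_costs, demand_rates, production_rates))
    return math.sqrt(numerator / denominator)

# Two products: setup costs K, holding costs h, demand rates d, production rates p.
print(common_cycle_time([100.0, 80.0], [2.0, 1.5], [40.0, 30.0], [200.0, 150.0]))
```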

3.
The classical Lawler's Algorithm provides an optimal solution to the single-machine scheduling problem of minimizing maximum cost, given general non-decreasing, job-dependent cost functions and general precedence constraints. First, we extend this algorithm to allow job rejection, where the scheduler may decide to process only a subset of the jobs. Then, we further extend the model to a setting of two competing agents sharing the same processor. Both extensions are shown to be solvable in polynomial time.
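A minimal sketch of the classical algorithm being extended (standard backward scheduling for 1|prec|f_max; the job-rejection and two-agent extensions are not shown):

```python
def lawler(processing, cost_funcs, succs):
    """Lawler's algorithm: build the schedule back to front, always placing last
    an eligible job (all successors already scheduled) whose cost at the current
    completion time is smallest.

    processing: {job: processing time}
    cost_funcs: {job: non-decreasing cost function of completion time}
    succs:      {job: set of successor jobs}
    """
    jobs = set(processing)
    t = sum(processing.values())          # completion time of the last slot
    reverse_schedule = []
    while jobs:
        scheduled = set(reverse_schedule)
        eligible = [j for j in jobs if succs.get(j, set()) <= scheduled]
        best = min(eligible, key=lambda j: cost_funcs[j](t))
        reverse_schedule.append(best)
        jobs.remove(best)
        t -= processing[best]
    return reverse_schedule[::-1]

# Two jobs, job 1 must precede job 2; costs are (weighted) completion times.
print(lawler({1: 3, 2: 2}, {1: lambda c: c, 2: lambda c: 2 * c}, {1: {2}}))  # [1, 2]
```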

4.
We study microeconomic foundations of diffusion processes as models of stock price dynamics. To this end, we develop a microscopic model of a stock market with finitely many heterogeneous economic agents who trade in continuous time, giving rise to an endogenous pure-jump process describing the evolution of stock prices over time. When the number of agents in the market is large, we show that the price process can be approximated by a diffusion, with price-dependent drift and volatility coefficients that are determined by small excess demands and trading volume in the microscopic model. We extend the microscopic model further by allowing for non-market interactions between agents, to model herd behavior in the market. In this case, price dynamics can be approximated by a process with stochastic volatility. Finally, we demonstrate how heavy-tailed stock returns emerge when agents have a strong tendency towards herd behavior.
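For intuition, the diffusion limit described here has the familiar form (notation mine, not the paper's):

$$ dS_t = \mu(S_t)\,dt + \sigma(S_t)\,dW_t, $$

with drift $\mu$ and volatility $\sigma$ determined by the small excess demands and trading volume of the microscopic model; under herding, $\sigma$ itself becomes a stochastic process.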

5.
王灿杰  邓雪 《运筹与管理》2019,28(2):154-159
Recognizing that investors in securities markets face both random and fuzzy uncertainty, this paper treats security returns as triangular fuzzy variables in a fuzzy random environment and, on the basis of credibility theory, builds a mean-entropy-skewness tri-objective portfolio selection model with financing constraints, extending the literature on credibility-based portfolio decision models. By improving key operators such as the constraint-handling method and the external-archive maintenance method, we also propose a new constrained multi-objective particle swarm optimization algorithm. We solve the model with this algorithm and compare the resulting optimal solutions with those of the traditional multi-objective particle swarm algorithm; the results show that the new algorithm's solutions are of significantly higher quality, verifying its effectiveness and accuracy. The algorithm produces a Pareto-optimal surface in three-dimensional space with good spread and convergence, accommodating investors' differing emphasis on the three objectives and providing reasonable portfolio decisions.
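As a small illustration of the credibility-theoretic ingredients such models use, here is the expected value of a triangular fuzzy return under Liu's credibility measure (an assumption on my part; the paper's exact entropy and skewness operators are not reproduced here):

```python
def triangular_expected_value(a: float, b: float, c: float) -> float:
    """Credibilistic expected value of a triangular fuzzy variable (a, b, c):
    E[xi] = (a + 2b + c) / 4 under Liu's credibility measure."""
    return (a + 2.0 * b + c) / 4.0

# A security whose fuzzy return is "about 1%, between -2% and 5%".
print(triangular_expected_value(-0.02, 0.01, 0.05))  # 0.0125
```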

6.
We extend agency theory to incorporate bounded rationality of both principals and agents. We define a simple version of the principal-agent game and examine it using object-oriented computer simulation, with player learning simulated by a genetic algorithm. Our results show that players of incentive games in highly uncertain environments may adopt defensive strategies, which lead to equilibria inferior to Nash equilibria. If agents are risk averse, the principal may not be able to provide enough monetary compensation to encourage them to take risks, but principals may be able to improve system performance by identifying good performers and facilitating information exchange among agents.

7.
We provide solution techniques for the analysis of multiplexers with periodic arrival streams, which accurately account for the effects of active and idle periods and of gradual arrivals. In the models considered in this paper, each source is assumed to alternate (periodically) between active and idle periods of fixed durations. Incoming packets are transmitted on the network link, and excess information is stored in the multiplexing buffer when the aggregate input rate exceeds the capacity of the link. We are interested in the probability distribution of the buffer content for a given network link speed as a function of the number of sources and their characteristics, i.e., rate and duration of idle and active periods. We derive this distribution from two models: a discrete-time and a continuous-time system. Discrete-time systems operate in a slotted fashion, with a slot defining the base unit for data generation and transmission: in each slot the link can transmit one data unit and, correspondingly, an active source generates one data unit per slot. The continuous-time model of the paper falls in the category of fluid models. Compared to previous works, we allow a more general model for the periodic packet arrival process of each source. In discrete time, this means that the active period of a source can extend over several consecutive slots instead of a single slot as in previous models. In continuous time, packet arrivals are not required to be instantaneous; rather, the data generation process can take place over the entire duration of the active period. In both cases, these generalizations allow us to account for the progressive arrival of source data as a function of both the source speed and the amount of data it generates in an active period.
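A minimal sketch (my notation) of the slotted dynamics underlying the discrete-time model: with a link capacity of one data unit per slot, the buffer content obeys a Lindley-type recursion.

```python
def buffer_trajectory(arrivals, capacity=1):
    """Buffer content in a slotted multiplexer: Q_{t+1} = max(0, Q_t + A_t - C).

    arrivals: aggregate data units generated per slot by all active sources
    capacity: data units the link transmits per slot (1 in the base model)
    """
    q, path = 0, []
    for a in arrivals:
        q = max(0, q + a - capacity)
        path.append(q)
    return path

# Two synchronized on/off sources, each active for 2 slots then idle for 2.
print(buffer_trajectory([2, 2, 0, 0, 2, 2, 0, 0]))  # [1, 2, 1, 0, 1, 2, 1, 0]
```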

8.
Designing systems with human agents is difficult because it often requires models that characterize agents' responses to changes in the system's states and inputs. An example of this scenario occurs when designing treatments for obesity. While weight-loss interventions that increase physical activity and modify diet have succeeded in reducing individuals' weight, such programs are difficult to maintain over long periods of time owing to lack of patient adherence. A promising approach to increase adherence is to personalize treatments for each patient. In this paper, we make a contribution toward treatment personalization by developing a framework for predictive modeling using utility functions that depend upon both time-varying system states and motivational states evolving according to a modeled process corresponding to qualitative social-science models of behavior change. Computing the predictive model requires solving a bilevel program, which we reformulate as a mixed-integer linear program (MILP). This reformulation provides the first (to our knowledge) formulation for Bayesian inference that uses empirical histograms as prior distributions. We study the predictive ability of our framework using a data set from a weight-loss intervention, and our predictive model is validated by comparison to standard machine learning approaches. We conclude by describing how our predictive model could be used for optimization, unlike standard machine learning approaches, which cannot.

9.

In this paper, we present a network manipulation algorithm based on an alternating minimization scheme from Nesterov (Soft Comput 1–12, 2020). In our context, the alternating process mimics the natural behavior of agents and organizations operating on a network. By selecting starting distributions, the organizations determine the short-term dynamics of the network. While choosing an organization in accordance with their manipulation goals, agents are prone to errors; this rationally inattentive behavior leads to discrete choice probabilities. We extend the analysis of our algorithm to the inexact case, where the corresponding subproblems can only be solved with numerical inaccuracies. The parameters reflecting the imperfect behavior of agents and the credibility of organizations, as well as the condition number of the network transition matrix, have a significant impact on the convergence of our algorithm: they turn out not only to improve the rate of convergence, but also to reduce the accumulated errors. From the mathematical perspective, this is due to the induced strong convexity of an appropriate potential function.


10.
Analysis of variance is a standard statistical modeling approach for comparing populations. The functional analysis-of-variance setting envisions mean functions associated with the populations, customarily modeled using basis representations, and seeks to compare them. Here, we adopt the modeling approach of functions as realizations of stochastic processes. We extend the Gaussian process version to allow nonparametric specifications using Dirichlet process mixing. Several metrics are introduced for comparison of populations. Then we introduce a hierarchical Dirichlet process model which enables comparison of the population distributions, either directly or through functionals of interest, using the foregoing metrics. The modeling is extended to allow us to switch the sampling scheme: there are still population-level distributions, but now we sample at levels of the functions, obtaining observations from potentially different individuals at different levels. We illustrate with both simulated data and a dataset of temperature-versus-depth measurements at different locations in the Atlantic Ocean.

11.
One approach to representing knowledge or belief of agents, used by economists and computer scientists, involves an infinite hierarchy of beliefs. Such a hierarchy consists of an agent's beliefs about the state of the world, his beliefs about other agents' beliefs about the world, his beliefs about other agents' beliefs about other agents' beliefs about the world, and so on. (Economists have typically modeled belief in terms of a probability distribution on the uncertainty space. In contrast, computer scientists have modeled belief in terms of a set of worlds, intuitively, the ones the agent considers possible.) We consider the question of when a countably infinite hierarchy completely describes the uncertainty of the agents. We provide various necessary and sufficient conditions for this property. It turns out that the probability-based approach can be viewed as satisfying one of these conditions, which explains why a countable hierarchy suffices in this case. These conditions also show that whether a countable hierarchy suffices may depend on the “richness” of the states in the underlying state space. We also consider the question of whether a countable hierarchy suffices for “interesting” sets of events, and show that the answer depends on the definition of “interesting”.

12.
We extend the definition of functional data registration to encompass a larger class of registration models. In contrast to traditional registration models, we allow for registered functions that have more than one primary direction of variation. The proposed Bayesian hierarchical model simultaneously registers the observed functions and estimates the two primary factors that characterize variation in the registered functions. Each registered function is assumed to be predominantly composed of a linear combination of these two primary factors, and the function-specific weights for each observation are estimated within the registration model. We show how these estimated weights can easily be used to classify functions after registration using both simulated data and a juggling dataset. Supplementary materials for this article are available online.

13.
We consider the problem of deciding the best action time when observations are made sequentially. Specifically, we address a special type of optimal stopping problem in which observations are drawn from state-contingent distributions and there is uncertainty about the state. The decision-maker's belief about the state is revised sequentially based on previous observations. Using the independence of observations from a given distribution, the sequential Bayesian belief-revision process is represented in a simple recursive form. The methodology developed in this paper provides a new theoretical framework for addressing state uncertainty in the action-timing context. Through a simulation analysis, we demonstrate the value of the Bayesian strategy that uses the sequential belief-revision process. In addition, we evaluate the value of perfect information to gain further insight into the effects of using the Bayesian strategy.
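A minimal sketch (names and setup mine, not the paper's) of one step of the recursive belief revision: with independent observations, the posterior over states is the prior reweighted by a single likelihood factor and renormalized.

```python
def revise_belief(prior, likelihoods, observation):
    """One step of sequential Bayesian revision over a finite state set.

    prior:       {state: current belief}
    likelihoods: {state: pmf/density f_s(x) of an observation under that state}
    """
    posterior = {s: prior[s] * likelihoods[s](observation) for s in prior}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# Two states with different Bernoulli observation distributions.
belief = {"good": 0.5, "bad": 0.5}
lik = {"good": lambda x: 0.8 if x == 1 else 0.2,
       "bad": lambda x: 0.3 if x == 1 else 0.7}
for x in [1, 1, 0]:
    belief = revise_belief(belief, lik, x)
print(belief)  # belief in "good" rises with successes, dips after the failure
```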

14.
We introduce a new class of risk measures called generalized entropic risk measures (GERMS) that allow economic agents to have different attitudes towards different sources of risk. We formulate the problem of optimal risk transfer in terms of these risk measures and characterize the optimal transfer contract. The optimal contract involves what we call intertemporal source-dependent quotient sharing, where agents linearly share changes in the aggregate risk reserve that occur in response to shocks to the system over time, with scaling coefficients that depend on the attitudes of each agent towards the source of risk causing the shock. Generalized entropic risk measures are not dilations of a common base risk measure, so our results extend the class of risk measures for which explicit characterizations of the optimal transfer contract can be found.
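For reference, a minimal sketch of the classical (single-source) entropic risk measure that GERMS generalize; the source-dependent version from the paper is not reproduced here:

```python
import math

def entropic_risk(outcomes, probs, theta):
    """Classical entropic risk: rho(X) = (1/theta) * log E[exp(-theta * X)].

    theta > 0 is the agent's risk aversion; GERMS allow it to differ
    across sources of risk.
    """
    expectation = sum(p * math.exp(-theta * x) for x, p in zip(outcomes, probs))
    return math.log(expectation) / theta

# A 50/50 gamble between gaining 100 and losing 50, mild risk aversion.
print(entropic_risk([100.0, -50.0], [0.5, 0.5], theta=0.01))
```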

15.
We study an agency model, in a dynamic setting, in which the principal has only incomplete information about the agent's preferences. Through repeated interaction with the agent, the principal learns about the agent's preferences and can thus adjust the incentive system. In a dynamic computational model, we compare different learning strategies of the principal when facing different types of agents. The results indicate that better learning of preferences can improve the situation of both parties, but the learning process is rather sensitive to random disturbances.

16.
This paper extends previous work on the development of graph-theoretic heuristics for facilities layout. A number of such methods have been shown to be useful for deciding which pairs of facilities should be adjacent. These heuristics were designed for a model which assumes that trips are saved if facilities are located adjacently, but gives no credit for nearly adjacent location. We extend the model to relax this assumption and show how two existing heuristics for the former model may be modified to embody this extension. Computational experience is reported and is encouraging.

17.
This paper proposes a prior near-ignorance model for regression based on a set of Gaussian processes (GPs). GPs are natural prior distributions for Bayesian regression: they offer great modeling flexibility and have found widespread application in many regression problems. However, a GP requires the prior elicitation of its mean function, which represents our prior belief about the shape of the regression function, and of the covariance between any two function values. In the absence of prior information, it may be difficult to fully specify these infinite-dimensional parameters. In this work, by modeling the prior mean of the GP as a linear combination of a set of basis functions, and by assuming as prior for the combination coefficients a set of conjugate distributions obtained as limits of truncated exponential priors, we are able to model prior ignorance about the mean of the GP. The resulting model satisfies translation invariance, learning and, under some constraints, convergence, which are desirable properties for a prior near-ignorance model. Moreover, we show how this model can be extended to allow a weaker specification of the GP covariance between function values, by letting each basis function vary within a set of functions. Application to hypothesis testing shows how the model can automatically detect when a reliable decision cannot be made from the available data.

18.
Fuzzy portfolio selection has been widely studied within the framework of credibility theory. However, existing models provide only concentrated investment solutions, which contradicts the risk-diversification concept of classical portfolio selection theory. In this paper, we propose an expected regret minimization model, which minimizes the expected value of the distance between the maximum return and the obtained return of each portfolio. We prove that our model is advantageous for obtaining diversified investments and reducing investor regret. The effectiveness of the model is demonstrated on a portfolio selection problem comprising ten securities from the Shanghai Stock Exchange 180 Index.
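A minimal sketch of the regret objective in a scenario (probabilistic) setting, for intuition only; the paper itself works with fuzzy returns under credibility theory, so this is an analogy rather than the paper's model:

```python
def expected_regret(scenario_returns, probs, weights):
    """Expected regret of a portfolio: E[max_i r_i - w.r] over return scenarios."""
    regret = 0.0
    for scenario, p in zip(scenario_returns, probs):
        best = max(scenario)                                   # best single security
        portfolio = sum(w * r for w, r in zip(weights, scenario))
        regret += p * (best - portfolio)
    return regret

# Two equally likely scenarios over three securities, equal-weight portfolio.
print(expected_regret([[0.05, 0.01, -0.02], [0.00, 0.03, 0.04]],
                      [0.5, 0.5], [1 / 3, 1 / 3, 1 / 3]))
```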

19.
In this paper we explore the effect that random social interactions have on the emergence and evolution of social norms in a simulated population of agents. In our model, agents observe the behaviour of others and update their norms based on these observations. An agent's norm is influenced both by their own fixed social network and by a second random network composed of a subset of the remaining population. Random interactions are based on a weighted selection algorithm that uses an individual's path distance on the network to determine their chance of meeting a stranger, so friends-of-friends are more likely to randomly interact with one another than agents with a higher degree of separation. We then contrast the case where agents rationally adopt the norm of highest utility with one where a Markov decision process associates a weight with the best choice. Finally, we examine the effect that these random interactions have on the evolution of a more complex social norm as it propagates throughout the population. We discover that increasing the frequency and weighting of random interactions results in higher levels of norm convergence, reached more quickly, when agents choose between two competing alternatives; this can be attributed to more information passing through the population. When the norm is allowed to evolve, we observe both global consensus formation and group splintering, depending on the cognitive agent model used.
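A minimal sketch (mine, not the paper's code; assumes the networkx package) of the weighted selection idea: a stranger's chance of being picked decays with shortest-path distance, so friends-of-friends are the likeliest random contacts.

```python
import random
import networkx as nx

def pick_stranger(graph, agent):
    """Pick a non-neighbour with probability inversely proportional to its
    shortest-path distance from the agent (distance >= 2 excludes self and friends)."""
    dist = nx.single_source_shortest_path_length(graph, agent)
    strangers = [v for v, d in dist.items() if d >= 2]
    weights = [1.0 / dist[v] for v in strangers]
    return random.choices(strangers, weights=weights, k=1)[0]

g = nx.cycle_graph(8)   # agents on a ring; nodes 2 and 6 are friends-of-friends of 0
print(pick_stranger(g, 0))
```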

20.
The relationship between organizational learning and organizational design is explored. In particular, we examine the information-processing aspects of organizational learning as they apply to a two-valued decision-making task, and the relation of such aspects to organizational structure. Our primary contribution is to extend Carley's (1992) model of this process. The original model assumes that all data input into the decision-making process are of equal importance, or weight, in determining the correct overall organizational decision. The extension described here allows for the more natural situation of non-uniform weights of evidence. Further extensions to the model are also discussed. Such organizational-learning performance measures provide an interesting framework for analyzing the recent trend towards flatter organizational structures. This research suggests that flatter structures are not always better; rather, the data environment, ultimate performance goals, and relative need for speed in learning can form a contingency model for choosing organizational structure.
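A minimal sketch (hypothetical, following the abstract's description rather than Carley's original code) of the extension: a two-valued organizational decision aggregated from non-uniformly weighted evidence.

```python
def weighted_decision(evidence, weights):
    """Two-valued organizational decision from weighted evidence.

    evidence: list of +1/-1 signals
    weights:  importance of each signal (uniform weights recover the original model)
    """
    score = sum(e * w for e, w in zip(evidence, weights))
    return 1 if score >= 0 else -1

# One heavily weighted signal outvotes two lighter ones.
print(weighted_decision([1, 1, -1], [0.2, 0.3, 0.9]))  # -1
```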
