Similar Documents
 20 similar documents found (search time: 421 ms)
1.
In this paper, we report the results of a series of experiments on a version of the centipede game in which the total payoff to the two players is constant. Standard backward induction arguments lead to a unique Nash equilibrium outcome prediction, which is the same as the prediction made by theories of “fair” or “focal” outcomes. We find that subjects frequently fail to select the unique Nash outcome prediction. While this behavior was also observed in McKelvey and Palfrey (1992) in the “growing pie” version of the game they studied, the Nash outcome was not “fair”, and there was the possibility of Pareto improvement by deviating from Nash play. Their findings could therefore be explained by small amounts of altruistic behavior. There are no Pareto improvements available in the constant-sum games we examine. Hence, explanations based on altruism cannot account for these new data. We examine and compare two classes of models to explain these data. The first class consists of non-equilibrium modifications of the standard “Always Take” model. The other class we investigate, the Quantal Response Equilibrium model, describes an equilibrium in which subjects make mistakes in implementing their best replies and assume other players do so as well. One specification of this model fits the experimental data best, among the models we test, and is able to account for all the main features we observe in the data.
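The logit specification commonly used in quantal response models can be sketched as follows; the payoff numbers are illustrative, not taken from the experiments:

```python
import math

def logit_choice_probs(payoffs, lam):
    """Logit choice rule: each action is chosen with probability
    proportional to exp(lambda * expected payoff). lam = 0 gives
    uniform random play; lam -> infinity recovers best response."""
    weights = [math.exp(lam * u) for u in payoffs]
    total = sum(weights)
    return [w / total for w in weights]

# "Take" vs "Pass" at a centipede node (hypothetical payoffs)
probs = logit_choice_probs([4.0, 3.0], lam=1.0)
print(probs)
```

With a finite precision parameter the better action is chosen more often, but not always, which is how the model accommodates the systematic deviations from the "Always Take" prediction.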

2.
Digital games (e.g., video games or computer games) have been reported as an effective educational method that can improve students' motivation and performance in mathematics education. This meta‐analysis study (a) investigates the current trend of digital game‐based learning (DGBL) by reviewing the research studies on the use of DGBL for mathematics learning, (b) examines the overall effect size of DGBL on K‐12 students' achievement in mathematics learning, and (c) discusses future directions for DGBL research in the context of mathematics learning. In total, 296 studies were collected for the review, but of those studies, only 33 research studies were identified as empirical studies and systematically analyzed to investigate the current research trends. In addition, due to insufficient statistical data, only 17 out of the 33 studies were analyzed to calculate the overall effect size of digital games on mathematics education. This study will contribute to the research community by analyzing recent trends in significant DGBL research, especially for those who are interested in using DGBL for mathematics education.

3.
This paper is about experiments on two versions of ultimatum games with incomplete information, called the offer game and the demand game. We apply the strategy method, that is, each subject had to design a complete strategy in advance instead of reacting spontaneously to a situation which occurs in the game. Game theory predicts very similar outcomes for the offer and the demand games. Our experiments, however, show significant differences in behavior between the two games. Using the strategy method allows us to explore the motivations leading to those differences. Since each subject played the same version of the game for eight rounds against changing anonymous opponents, we can also study subjects' learning behavior. We propose a theory of boundedly rational behavior, called the “anticipation philosophy”, which is well supported by the experimental data.

4.
One of the major challenges associated with the measurement of customer lifetime value is selecting an appropriate model for predicting customer future transactions. Among such models, the Pareto/negative binomial distribution (Pareto/NBD) is the most prevalent in noncontractual relationships characterized by latent customer defections; i.e., defections are not observed by the firm when they happen. However, this model and its applications have some shortcomings. Firstly, a methodological shortcoming is that the Pareto/NBD, like all lifetime transaction models based on statistical distributions, assumes that the number of transactions by a customer follows a Poisson distribution. However, many applications have an empirical distribution that does not fit a Poisson model. Secondly, a computational concern is that the implementation of the Pareto/NBD model presents some estimation challenges specifically related to the numerous evaluations of the Gaussian hypergeometric function. Finally, the model provides four parameters as output, which is insufficient to link individual purchasing behavior to socio‐demographic information and to predict the behavior of new customers. In this paper, we model a customer's lifetime transactions using the Conway‐Maxwell‐Poisson distribution, which is a generalization of the Poisson distribution, offering more flexibility and a better fit to real‐world discrete data. To estimate parameters, we propose a Markov chain Monte Carlo algorithm, which is easy to implement. Use of this Bayesian paradigm provides individual customer estimates, which help link purchase behavior to socio‐demographic characteristics and an opportunity to target individual customers.
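A minimal sketch of the Conway-Maxwell-Poisson pmf, the distribution the abstract proposes; truncating the normalizing constant at a finite number of terms is a simplifying assumption for illustration:

```python
import math

def cmp_pmf(k, lam, nu, terms=100):
    """Conway-Maxwell-Poisson pmf: P(X = k) is proportional to
    lam**k / (k!)**nu. nu = 1 recovers the Poisson distribution;
    nu > 1 gives underdispersion, nu < 1 overdispersion. The
    normalizing constant is approximated by a truncated series."""
    z = sum(lam**j / math.factorial(j)**nu for j in range(terms))
    return lam**k / math.factorial(k)**nu / z

# With nu = 1 the CMP pmf coincides with the Poisson pmf
poisson = math.exp(-2.0) * 2.0**3 / math.factorial(3)
print(abs(cmp_pmf(3, 2.0, 1.0) - poisson) < 1e-9)
```

The extra dispersion parameter `nu` is what gives the model the flexibility over the plain Poisson that the abstract emphasizes.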

5.
Competing populations of finite automata co‐evolve in an evolutionary algorithm to play two player games. Populations endowed with greater complexity do better against their less complex opponents in a strictly competitive constant sum game. In contrast, complexity determines efficiency levels, but not relative earnings, in a Prisoner's Dilemma game; greater levels of complexity result in mutually higher earnings. With reporting noise, advantages to complexity are lost and efficiency levels are reduced as relatively less complex strategies are selected. © 2004 Wiley Periodicals, Inc. Complexity 9: 71–78, 2004

6.
This article aims to clarify the properties of learning models used in simulation studies and to compare preceding learning models. Learning models are often used in simulation studies, but there is no uniform rule of learning. We introduce three technical properties (monotonicity, condition of probability, neutrality) and three rational properties (rationality in fixed situations, rationality under first-order stochastic dominance, rationality with risk preference in stochastic situations). We examine Michael Macy's model, the Erev & Roth model, and some others, and find that these models have different properties. Although learning is treated as one of the solutions to the social dilemma on the basis of Macy's results (Kollock, 1998), Macy's model is a peculiar learning model: learning is not always a solution to the social dilemma. Comparing learning models from a uniform point of view clarifies the properties of each model and helps to assess the conformity between a learning model and human behavior.

7.
A new regression model which minimizes the sum of squares of relative residuals for data with errors in both fit variables is presented for linear fits. Expressions are derived for the slope, the intercept and their respective errors. A detailed comparison is made between the new improved relative least squares (IRLS) model and other linear regression models, using three sets of data points. It is shown that IRLS provides the best compromise between quality of fit and a realistic representation of the physical situation of errors in both fit variables.
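As a simplified sketch, minimizing the sum of squared relative residuals in y alone reduces to weighted least squares with weights 1/y²; note this is an assumption-laden simplification, since the paper's IRLS model also treats errors in x, which is not reproduced here:

```python
def relative_least_squares(xs, ys):
    """Linear fit minimizing sum(((y - a - b*x) / y)**2), i.e.
    weighted least squares with weights 1/y**2 (requires y != 0)."""
    w = [1.0 / y**2 for y in ys]
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, xs))
    swy = sum(wi * y for wi, y in zip(w, ys))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    # Solve the weighted normal equations for slope b and intercept a
    b = (sw * swxy - swx * swy) / (sw * swxx - swx**2)
    a = (swy - b * swx) / sw
    return a, b

# Data lying exactly on y = 2x + 1 recovers the line regardless of weights
a, b = relative_least_squares([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)
```

The 1/y² weighting is what makes the fit penalize a given absolute error more when the measured value is small, which is the defining property of a relative-residual criterion.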

8.
We employ an agent‐based model to show that memory and the absence of an a priori best strategy are sufficient for self‐segregation and clustering to emerge in a complex adaptive system with discrete agents that do not compete over a limited resource nor contend in a winner‐take‐all scenario. An agent starts from a corner of a two‐dimensional lattice and aims to reach a randomly selected site on the opposite side within the shortest possible time. The agent is isolated during the course of its journey and does not interact with other agents. Time‐bound obstacles appear at random lattice locations and the agent must decide whether to challenge or evade any obstacle blocking its path. The agent is capable of adapting a strategy in dealing with an obstacle. We analyze the dependence of strategy‐retention time on strategy for both memory‐based and memory‐less agents. We derive the equality spectrum to establish the environmental conditions that favor the existence of an a priori best strategy. We find that memory‐less agents do not polarize into two opposite strategy‐retention time distributions nor cluster toward a center distribution. © 2004 Wiley Periodicals, Inc. Complexity 9: 41–46, 2004

9.
The purpose of this paper is to extend the model of the negative binomial distribution used in consumer purchasing models so as to incorporate the consumer's learning and departure behaviours. The regularity of interpurchase time and its unobserved heterogeneity are also included. Due to these extensions, this model can be used to determine how many purchases are made during a given period by an experienced or an inexperienced customer. This model also allows the determination of the probability that a customer with a given pattern of purchasing behaviour still remains, or has departed, at any time after k≥1 purchases are made. An illustration of the approach is conducted using consumer purchase data for tea. As assessed by comparing results with Theil's U, the integrated model developed gives the best results and shows that learning and departure are important factors which influence consumers' purchase behaviour, especially when evaluating the behaviour of inexperienced customers.

10.
Standard binary models have been developed to describe the behavior of consumers when they are faced with two choices. The classical logit model presents the feature of the symmetric link function. However, symmetric links do not provide good fits for data where one response is much more frequent than the other (as it happens in the insurance fraud context). In this paper, we use an asymmetric or skewed logit link, proposed by Chen et al. [Chen, M., Dey, D., Shao, Q., 1999. A new skewed link model for dichotomous quantal response data. J. Amer. Statist. Assoc. 94 (448), 1172-1186], to fit a fraud database from the Spanish insurance market. Bayesian analysis of this model is developed by using data augmentation and Gibbs sampling. The results show that the use of an asymmetric link notably improves the percentage of cases that are correctly classified after the model estimation.
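The symmetry property at issue can be illustrated by contrasting the logit link with an asymmetric one; the complementary log-log link below is used only to demonstrate asymmetry and is not Chen et al.'s skewed logit, which adds a skewness parameter via a latent-variable construction:

```python
import math

def logit_cdf(x):
    """Symmetric link: satisfies F(-x) == 1 - F(x)."""
    return 1.0 / (1.0 + math.exp(-x))

def cloglog_cdf(x):
    """Complementary log-log: a standard asymmetric link, shown here
    only to illustrate what asymmetry means for a link function."""
    return 1.0 - math.exp(-math.exp(x))

print(logit_cdf(1.0) + logit_cdf(-1.0))      # symmetric: sums to exactly 1
print(cloglog_cdf(1.0) + cloglog_cdf(-1.0))  # asymmetric: does not sum to 1
```

An asymmetric link lets the probability approach 0 and 1 at different rates, which is why it can fit rare-event data such as fraud indicators better than the symmetric logit.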

11.
What strategy should a football (soccer, in American parlance) club adopt when deciding whether to sack its manager? This paper introduces a simple model assuming that a club's objective is to maximize the number of league points that it scores per season. The club's strategy consists of three choices: the length of the honeymoon period during which it will not consider sacking a new manager, the level of the performance trapdoor below which the manager gets the sack, and the weight that it will give to more recent games compared to earlier ones. Some data from the last six seasons of the English Premiership are used to calibrate the model. At this early stage of the research, the best strategy appears to be a short honeymoon period of only eight games (much less than the actual shortest period of 12 games), a trapdoor set at 0.74 points per game, and 47% of the weight placed on the last five games. A club adopting this strategy would obtain on average 56.8 points per season, compared to a Premiership average of 51.8 points.
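The calibrated rule can be sketched roughly as follows; the abstract does not spell out the exact weighting scheme, so splitting the average into a recent window and the remaining games is an assumption for illustration:

```python
def sack_decision(points_per_game, honeymoon=8, trapdoor=0.74,
                  recent_weight=0.47, recent_window=5):
    """Hypothetical sketch of the sacking rule: never sack during the
    honeymoon; afterwards, sack when a weighted points-per-game
    average (47% weight on the last 5 games) falls below the trapdoor."""
    if len(points_per_game) < honeymoon:
        return False
    recent = points_per_game[-recent_window:]
    earlier = points_per_game[:-recent_window]
    weighted = (recent_weight * sum(recent) / len(recent)
                + (1 - recent_weight) * sum(earlier) / len(earlier))
    return weighted < trapdoor

# Ten games of consistently weak form: the weighted average
# falls below the 0.74 trapdoor, so the manager is sacked
weak_run = [1, 0, 1, 0, 1, 0, 0, 0, 1, 0]
print(sack_decision(weak_run))
```

The heavier weight on recent games means a late collapse can trigger the trapdoor even after a decent start, matching the intuition behind the calibrated parameters.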

12.
The process of learning scientific knowledge from the dynamic systems viewpoint is studied in terms of a probabilistic learning model (PLM), where learning accrues from foraging in the epistemic landscape. The PLM leads to the formation of attractor‐type regions of preferred models in an epistemic landscape. The attractor‐type states correspond to robust learning outcomes which are more probable than others. These can be assigned either to high confidence in model selection or to the dynamic evolution of a learner's proficiency, which depends on the learning history. The results suggest that robust learning states are essentially context dependent, and that learning is a continuous development between these context dependent states. © 2016 Wiley Periodicals, Inc. Complexity 21: 259–267, 2016

13.
This paper discusses the relationships between learning processes based on the iterated elimination of strictly dominated strategies and their myopic and more naive counterparts. The concept of a monotone game, of which games with strategic complementarities are a subclass, is introduced. Then it is shown that convergence under best reply dynamics and dominance solvability are equivalent for all two-player (and some many-player) games in this class.
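The iterated elimination of strictly dominated strategies referred to here can be sketched for a finite two-player game as:

```python
def iterated_elimination(payoffs):
    """Iterated elimination of strictly dominated strategies.
    payoffs[(i, j)] = (u1, u2) for row strategy i vs column strategy j.
    Returns the surviving row and column strategy sets."""
    rows = sorted({i for i, _ in payoffs})
    cols = sorted({j for _, j in payoffs})
    changed = True
    while changed:
        changed = False
        # Remove any row strategy strictly dominated by another row
        for s in list(rows):
            if any(all(payoffs[(t, j)][0] > payoffs[(s, j)][0] for j in cols)
                   for t in rows if t != s):
                rows.remove(s)
                changed = True
        # Remove any column strategy strictly dominated by another column
        for s in list(cols):
            if any(all(payoffs[(i, t)][1] > payoffs[(i, s)][1] for i in rows)
                   for t in cols if t != s):
                cols.remove(s)
                changed = True
    return rows, cols

# Prisoner's Dilemma: "defect" (D) strictly dominates "cooperate" (C)
pd_game = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
print(iterated_elimination(pd_game))
```

A game is dominance solvable when this process leaves a single strategy profile, as it does for the Prisoner's Dilemma above, where only (D, D) survives.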

14.
The CPR (“cumulative proportional reinforcement”) learning rule stipulates that an agent chooses a move with a probability proportional to the cumulative payoff she obtained in the past with that move. Previously considered for strategies in normal form games (Laslier, Topol and Walliser, Games and Econ. Behav., 2001), the CPR rule is here adapted for actions in perfect information extensive form games. The paper shows that the action-based CPR process converges with probability one to the (unique) subgame perfect equilibrium. Received: October 2004
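The choice step of the CPR rule is straightforward to sketch, assuming strictly positive cumulative payoffs so that the proportional probabilities are well defined:

```python
import random

def cpr_choose(cumulative, rng=random):
    """CPR rule: pick an action with probability proportional to its
    cumulative past payoff. Assumes all cumulative payoffs are > 0."""
    actions = list(cumulative)
    total = sum(cumulative[a] for a in actions)
    r = rng.random() * total
    acc = 0.0
    for a in actions:
        acc += cumulative[a]
        if r <= acc:
            return a
    return actions[-1]

# An action that has accumulated 90% of past payoff is
# chosen roughly 90% of the time
random.seed(0)
cum = {'left': 9.0, 'right': 1.0}
picks = [cpr_choose(cum) for _ in range(10000)]
print(picks.count('left') / len(picks))  # close to 0.9
```

After each play the realized payoff is added to the chosen action's cumulative total, so successful actions are reinforced and chosen increasingly often.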

15.
Digital soil mapping (DSM) increasingly makes use of machine learning algorithms to identify relationships between soil properties and multiple covariates that can be detected across landscapes. Selecting the appropriate algorithm for model building is critical for optimizing results in the context of the available data. Over the past decade, many studies have tested different machine learning (ML) approaches on a variety of soil data sets. Here, we review the application of some of the most popular ML algorithms for digital soil mapping. Specifically, we compare the strengths and weaknesses of multiple linear regression (MLR), k-nearest neighbors (KNN), support vector regression (SVR), Cubist, random forest (RF), and artificial neural networks (ANN) for DSM. These algorithms were compared on the basis of five factors: (1) quantity of hyperparameters, (2) sample size, (3) covariate selection, (4) learning time, and (5) interpretability of the resulting model. If training time is a limitation, then algorithms that have fewer model parameters and hyperparameters should be considered, e.g., MLR, KNN, SVR, and Cubist. If the data set is large (thousands of samples) and computation time is not an issue, ANN would likely produce the best results. If the data set is small (<100), then Cubist, KNN, RF, and SVR are likely to perform better than ANN and MLR. The uncertainty in predictions produced by Cubist, KNN, RF, and SVR may not decrease with large data sets. When interpretability of the resulting model is important to the user, Cubist, MLR, and RF are more appropriate algorithms as they do not function as “black boxes.” There is no one correct approach to produce models for predicting the spatial distribution of soil properties. Nonetheless, some algorithms are more appropriate than others considering the nature of the data and purpose of the mapping activity.

16.
Our paper presents an empirical analysis of the association between firm attributes in electronic retailing and the adoption of information initiatives in mobile retailing. In our attempt to analyze the collected data, we find that the count of information initiatives exhibits underdispersion. Also, zero‐truncation arises from our study design. To tackle the two issues, we test four zero‐truncated (ZT) count data models—binomial, Poisson, Conway–Maxwell–Poisson, and Consul's generalized Poisson. We observe that the ZT Poisson model has a much inferior fit when compared with the other three models. Interestingly, even though the ZT binomial distribution is the only model that explicitly takes into account the finite range of our count variable, it is still outperformed by the other two Poisson mixtures that turn out to be good approximations. Further, despite the rising popularity of the Conway–Maxwell–Poisson distribution in recent literature, the ZT Consul's generalized Poisson distribution shows the best fit among all candidate models and suggests support for one hypothesis. Because underdispersion is rarely addressed in IT and electronic commerce research, our study aims to encourage empirical researchers to adopt a flexible regression model in order to make a robust assessment on the impact of explanatory variables. Copyright © 2014 John Wiley & Sons, Ltd.
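The simplest of the candidate models, the zero-truncated Poisson, can be sketched as follows; the ZT binomial, Conway-Maxwell-Poisson, and Consul variants apply the same renormalization idea to their respective base distributions:

```python
import math

def poisson_pmf(k, lam):
    """Ordinary Poisson pmf."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def zt_poisson_pmf(k, lam):
    """Zero-truncated Poisson: the Poisson pmf renormalized over k >= 1,
    P(X = k | X > 0) = pois(k) / (1 - pois(0))."""
    if k < 1:
        return 0.0
    return poisson_pmf(k, lam) / (1.0 - math.exp(-lam))

# The truncated probabilities over k = 1, 2, ... sum to one
total = sum(zt_poisson_pmf(k, 1.5) for k in range(1, 100))
print(round(total, 9))
```

Zero-truncation matters here because the study design can never observe a zero count, so fitting an untruncated model would bias the rate parameter downward.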

17.
18.
This article is based on an experiment using the game ‘Caminhando e Calculando’ (Moving and Calculating) in order to analyse the potential of the game as an educational resource for the teaching and learning of mathematics in Portuguese middle schools, where most students are 10 or 11 years old. Students' data obtained during the games will be used to analyse the different options used for solving the game, identifying its potential and its weaknesses. We start with a theoretical analysis of games as an inherent element of human culture. Combining our innate desire for fun with different teaching and learning styles allows fun and knowledge to merge into more efficient and meaningful learning. Playing games is a primordial aspect of what it means to be a child, and games unfold within a motivating environment; therefore, not to take advantage of games as a learning resource would be to neglect an important asset. With regard to mathematics, emphasis will be given to the advantages that this teaching and learning tool provides for certain mathematical processes, such as problem-solving.

19.
We consider the Bayes optimal strategy for repeated two player games where moves are made simultaneously. In these games we look at models where one player assumes that the other player is employing a strategy depending only on the previous m move pairs (as discussed in Wilson, 1986). We show that, under very unrestrictive conditions, such an assumption is not consistent with the assumption of rationality of one's opponent. Indeed, we show that by employing such a model a player is implicitly assuming that his opponent is not playing rationally, with probability one. We argue that, in the context of experimental games, these m-step-back models must be inferior to models which are consistent with the assumption that an opponent can be rational.

20.
We develop deep learning models to learn the hedge ratio for S&P500 index options from options data. We compare different combinations of features and show that with sufficient training data, a feedforward neural network model with time to maturity, the Black-Scholes delta and market sentiment as inputs performs the best in the out-of-sample test under daily hedging. This model significantly outperforms delta hedging and a data-driven hedging model. Our results also demonstrate the importance of market sentiment for hedging.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号