Similar Documents
20 similar documents found
1.
In the context of multiple attribute decision making, preference models making use of reference points in an ordinal way have recently been introduced in the literature. This paper proposes an axiomatic analysis of such models, with particular emphasis on the case in which there is only one reference point. Our analysis uses a general conjoint measurement model resting on the study of traces induced on the attributes by the preference relation, together with conditions guaranteeing that these traces are complete. Models using reference points are shown to be a particular case of this general model. The number of reference points is linked to the number of equivalence classes distinguished by the traces. When there is only one reference point, the induced traces are quite rough, distinguishing at most two distinct equivalence classes. We study the relation between the model using a single reference point and other preference models proposed in the literature, most notably models based on concordance and models based on a discrete Sugeno integral.
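A minimal sketch of the single-reference-point idea, assuming a purely illustrative counting rule for the importance of attribute coalitions (the data, the reference point, and the rule are all invented, not the authors' axiomatization):

```python
REFERENCE = (5, 5, 5)  # one reference point: one required level per attribute

def above(alternative, reference=REFERENCE):
    """Indices of the attributes on which the alternative meets the reference."""
    return {i for i, (a, r) in enumerate(zip(alternative, reference)) if a >= r}

def weakly_preferred(x, y):
    """x is weakly preferred to y iff its 'good' coalition is at least as important.

    Importance is modeled here by simple counting, one admissible special case.
    """
    return len(above(x)) >= len(above(y))

print(weakly_preferred((6, 4, 7), (4, 6, 4)))  # True: x meets the reference on 2 attributes, y on 1
```

Note that such a rule induces at most two equivalence classes per attribute (above or below the reference point), in line with the roughness of the traces described above.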

2.
In modeling marked point processes, it is convenient to assume a separable or multiplicative form for the conditional intensity, as this assumption typically allows one to estimate each component of the model individually. Tests have been proposed in the simple marked point process case to investigate whether the mark distribution is separable from the spatial–temporal characteristics of the point process. Here, we extend these tests to the case of a marked point process with covariates, where one is interested in testing the separability of each of the covariates as well as of the mark and the coordinates of the point process. The extension is far from trivial, and covariates must be treated in a fundamentally different way than the marks and coordinates of the process, especially when the covariates are not uniformly distributed. An application is given to point process models for forecasting wildfire hazard in Los Angeles County, California, and solutions are proposed to the problem of how to proceed when the separability hypothesis is rejected.
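As a rough illustration of what separability means here, the following sketch (synthetic data, invented diagnostic) compares the joint empirical distribution of a covariate and the marks with the product of their marginals; under separability the two should agree up to sampling noise. The tests in the paper are considerably more refined than this.

```python
import numpy as np

rng = np.random.default_rng(0)
covariate = rng.uniform(size=500)   # e.g., a weather covariate at each event
marks = rng.exponential(size=500)   # e.g., burned area as the mark

def joint_vs_product(u, m, grid=20):
    """Max discrepancy between the joint empirical CDF and the product of marginals."""
    us = np.quantile(u, np.linspace(0.05, 0.95, grid))
    ms = np.quantile(m, np.linspace(0.05, 0.95, grid))
    worst = 0.0
    for a in us:
        for b in ms:
            joint = np.mean((u <= a) & (m <= b))
            prod = np.mean(u <= a) * np.mean(m <= b)
            worst = max(worst, abs(joint - prod))
    return worst

print(joint_vs_product(covariate, marks))  # close to 0 for these independent data
```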

3.
We present an agent-based market model in which social emulation by consumers and the adaptation of producers to demand play a significant role. Our theoretical approach considers boundedly rational agents, heterogeneity of agents and product characteristics, and the co-evolution of consumers’ desires and firms’ adaptation efforts. The model reproduces, and allows us to interpret, statistical regularities that have been observed in the evolution of industrial sectors and that also appear significant in the case of discretionary consumption activities. We thus suggest new consumer-side determinants and explanations for these stylized facts, and we obtain new theoretical patterns that may help in better understanding the dynamics of discretionary goods markets. The model and results may help guide future research on consumer markets.
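A toy sketch of the two mechanisms the abstract names, social emulation and producer adaptation, with all parameters invented; it illustrates the model class, not the authors' specification:

```python
import random

random.seed(1)
N_CONSUMERS, N_FIRMS, STEPS = 200, 5, 50
EMULATION = 0.3    # probability that a consumer copies a random peer's choice
ADAPTATION = 0.2   # fraction by which a firm moves capacity toward realized demand

choices = [random.randrange(N_FIRMS) for _ in range(N_CONSUMERS)]
capacity = [N_CONSUMERS / N_FIRMS] * N_FIRMS

for _ in range(STEPS):
    for i in range(N_CONSUMERS):
        if random.random() < EMULATION:                        # social emulation
            choices[i] = choices[random.randrange(N_CONSUMERS)]
    demand = [choices.count(f) for f in range(N_FIRMS)]
    capacity = [c + ADAPTATION * (d - c) for c, d in zip(capacity, demand)]

print(sorted(demand, reverse=True))  # emulation tends to concentrate demand on few firms
```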

4.
Funding small and medium-sized enterprises (SMEs) to support technological innovation is critical for national competitiveness. Technology credit scoring models are required to select appropriate funding beneficiaries. Typically, a technology credit scoring model consists of several attributes, and a new model must be derived every time these attributes are updated. However, it is not feasible to develop a new model until sufficient historical evaluation data based on the new attributes has accumulated. To resolve this limitation, we suggest a framework for updating the technology credit scoring model. The framework constructs a new scoring model by comparing alternative scenarios for the various relationships between existing and new attributes, based on exploratory factor analysis, analysis of variance, and logistic regression. Our approach can help find the optimal scenario for updating a scoring model.
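A minimal sketch of the scoring step on synthetic data: a logistic model maps evaluation attributes to a default score, and a scenario is one hypothetical mapping between existing and new attributes whose fit can be compared. The attribute remapping below is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_old = rng.normal(size=(300, 4))                            # existing attributes
y = (X_old @ [0.8, -0.5, 0.3, 0.1] + rng.normal(size=300) > 0).astype(int)

# one candidate scenario: two old attributes kept, two merged into a new one
X_new = np.c_[X_old[:, :2], X_old[:, 2] + X_old[:, 3]]

for name, X in [("existing attributes", X_old), ("scenario attributes", X_new)]:
    model = LogisticRegression().fit(X, y)
    print(name, "in-sample accuracy:", round(model.score(X, y), 3))
```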

5.
Single-factor interest rate models with constant coefficients are not consistent with arbitrary initial term structures. An extension which allows both an arbitrary initial term structure and analytical tractability has been provided only in the Gaussian case. In this paper, within the HJM methodology, an extension of the CIR model is provided which admits an arbitrary initial term structure. It is shown how to calculate bond prices via a perturbative approach, and closed formulas are provided at every order. Since the parameter selected for the expansion is typically estimated to be small, the perturbative approach turns out to be adequate for our purpose. Using results on affine models, the extended CIR model is estimated via maximum likelihood on a time series of daily interest rate yields. The results show that the CIR model is rejected in favor of the proposed extension, and it is pointed out that the extended CIR model provides a more flexible characterization of the link between risk-neutral and natural probability.
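For context, the underlying CIR dynamics dr = k(θ − r)dt + σ√r dW can be simulated and used to price a zero-coupon bond by Monte Carlo, as in the sketch below (parameter values invented; the paper's extension adds a time-dependent shift fitted to the initial term structure, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)
k, theta, sigma, r0 = 0.5, 0.04, 0.1, 0.03   # illustrative parameters
T, n_steps, n_paths = 1.0, 250, 10_000
dt = T / n_steps

r = np.full(n_paths, r0)
integral = np.zeros(n_paths)        # accumulates \int_0^T r_t dt for discounting
for _ in range(n_steps):
    integral += r * dt
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
    r_pos = np.maximum(r, 0.0)      # full-truncation Euler keeps the sqrt real
    r = r + k * (theta - r_pos) * dt + sigma * np.sqrt(r_pos) * dW

print("Monte Carlo zero-coupon bond price P(0, 1y):", np.mean(np.exp(-integral)))
```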

6.
Discretionary models of data envelopment analysis (DEA) assume that all inputs and outputs can be varied at the discretion of management or other users. In any realistic situation, however, there may exist “exogenously fixed” or non-discretionary factors that are beyond the control of a decision making unit's (DMU's) management and that also need to be considered. This paper discusses and reviews the use of the super-efficiency approach in DEA sensitivity analyses when some inputs are exogenously fixed. A super-efficiency DEA model is obtained when the DMU under evaluation is excluded from the reference set. By means of a modified Banker and Morey (BM hereafter) model [R.D. Banker, R. Morey, Efficiency analysis for exogenously fixed inputs and outputs, Operations Research 34 (1986) 513–521], in which the test DMU is excluded from the reference set, we are able to determine what perturbations of the discretionary data can be tolerated before frontier DMUs become non-frontier.
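A sketch of an input-oriented super-efficiency model with one exogenously fixed input, on invented data: only the discretionary input is scaled by θ, the fixed input enters as a plain constraint in the spirit of Banker and Morey, and the evaluated DMU is excluded from the reference set.

```python
import numpy as np
from scipy.optimize import linprog

X_d = np.array([2.0, 3.0, 4.0, 5.0])   # discretionary input, one value per DMU
X_f = np.array([1.0, 1.0, 2.0, 2.0])   # exogenously fixed input
Y   = np.array([1.0, 2.0, 3.0, 3.5])   # single output

def super_efficiency(k):
    n = len(Y)
    others = [j for j in range(n) if j != k]       # DMU k excluded from reference set
    c = np.r_[1.0, np.zeros(n - 1)]                # variables: theta, then lambdas
    A_ub = np.vstack([
        np.r_[-X_d[k], X_d[others]],               # sum(lam * x_d) <= theta * x_d_k
        np.r_[0.0,     X_f[others]],               # sum(lam * x_f) <= x_f_k (no theta)
        np.r_[0.0,    -Y[others]],                 # sum(lam * y)   >= y_k
    ])
    b_ub = np.array([0.0, X_f[k], -Y[k]])
    bounds = [(None, None)] + [(0.0, None)] * (n - 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    # super-efficiency programs can be infeasible once the DMU is excluded
    return res.fun if res.status == 0 else float("inf")

for k in range(4):
    print(f"DMU {k}: super-efficiency = {super_efficiency(k):.3f}")
```

A score above 1 gauges how far a frontier DMU's discretionary data can be perturbed before it drops off the frontier, which is what the sensitivity analysis exploits.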

7.
The problem of determining dry friction forces in the case of the motion of a rigid body with a plane base over a rough surface is discussed. In view of the dependence of the friction forces on the normal load, the solution of this problem involves constructing a model of the contact stresses. The contact conditions impose three independent constraints on the kinematic characteristics, and the model must therefore include three free parameters, which are determined from these conditions at each instant. When the body is supported at three points, these parameters (for which the normal stresses can be taken) completely determine the model, while indeterminacy arises for a larger number of contact points, and certain physical hypotheses must be accepted in order to remove it. It is shown that contact models consistent with the dynamics possess new qualitative properties compared with the traditional quasi-static models, in which the type of motion of the body is not taken into account. In particular, the principal vector and principal moment of the friction forces may depend on the direction of sliding or pivoting of the body, as well as on the magnitude of the angular velocity.
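In a standard distributed dry-friction formulation (notation ours, shown only to make the stated dependence concrete), the principal vector and moment are integrals of the unknown contact stress σ(r) against the local sliding direction, which combines translation and pivoting:

```latex
\begin{aligned}
  \mathbf{v}(\mathbf{r}) &= \mathbf{v}_0 + \boldsymbol{\omega}\times\mathbf{r},\\
  \mathbf{F} &= -\,\mu \int_S \sigma(\mathbf{r})\,
      \frac{\mathbf{v}(\mathbf{r})}{\lVert \mathbf{v}(\mathbf{r})\rVert}\, dS, \qquad
  \mathbf{M} = -\,\mu \int_S \sigma(\mathbf{r})\,
      \mathbf{r}\times\frac{\mathbf{v}(\mathbf{r})}{\lVert \mathbf{v}(\mathbf{r})\rVert}\, dS .
\end{aligned}
```

Because v(r) enters nonlinearly, F and M depend on the direction of sliding or pivoting and on the magnitude of the angular velocity, which is exactly the qualitative effect described above.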

8.
Phase-field models have become popular for simulating cohesive failure problems because of their ability to predict crack initiation and propagation without additional criteria. In this paper, a new phase-field damage model coupled with general softening laws for cohesive fracture is proposed based on the unified phase-field theory. The commonly used quadratic geometric function of the classical phase-field model is retained in the proposed model. A modified degradation function related to the failure strength and the length scale is used to obtain a length-scale-insensitive model. Based on the analytical solution of a 1-D case, general softening laws from cohesive zone models can be accommodated. Parameters in the degradation function can be calibrated according to different softening curves and material properties. Numerical examples show that the results obtained by the proposed model agree well with experimental results and that the length scale has a negligible influence on the load-displacement curves in most cases, which is not true of the classical phase-field model.
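One common concrete choice of the ingredients named in the abstract, following the unified phase-field theory (shown for orientation; the paper's exact calibration may differ), is the quadratic geometric function together with a rational degradation function:

```latex
\begin{aligned}
  \alpha(d) &= d^{2}, \\
  g(d) &= \frac{(1-d)^{p}}{(1-d)^{p} + a_{1} d + a_{1} a_{2} d^{2} + a_{1} a_{2} a_{3} d^{3}} ,
\end{aligned}
```

where p and the a_i are fixed from the failure strength, fracture energy, and length scale so that the 1-D analytical solution reproduces a prescribed cohesive softening law, which is how length-scale insensitivity is achieved.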

9.
Multiple recurrent outbreak cycles have commonly been observed in infectious diseases such as measles and chicken pox. Such complex outbreak dynamics are rarely captured by deterministic epidemiological models. In this paper, we investigate a simple 2-dimensional SI epidemiological model and propose that the coexistence of multiple attractors accounts for the complex outbreak patterns. We first determine the conditions on the parameters for the existence of an isolated center, then properly perturb the model to generate a Hopf bifurcation and obtain limit cycles around the center. We further prove analytically that the maximum number of coexisting limit cycles is three, and we provide a corresponding set of parameters for which the three limit cycles exist. Simulation results demonstrate the case with the maximum number of coexisting attractors, which contains one stable disease-free equilibrium and two stable endemic periodic solutions separated by one unstable periodic solution. Different disease outcomes can therefore be predicted by a single nonlinear deterministic model, depending on the initial data.
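A scaffold for exploring the dependence on initial data in a 2-D SI system with nonlinear incidence; the parameter values below are placeholders, not the bistable sets derived in the paper, with which trajectories started from different initial data would settle on different attractors.

```python
import numpy as np
from scipy.integrate import solve_ivp

A, d, beta, a, mu = 1.0, 0.1, 0.5, 1.0, 0.3   # placeholder parameters

def si(t, y):
    S, I = y
    incidence = beta * S * I / (1 + a * I**2)  # saturated (nonlinear) incidence
    return [A - d * S - incidence, incidence - (d + mu) * I]

for S0, I0 in [(9.0, 0.01), (5.0, 2.0), (2.0, 5.0)]:
    sol = solve_ivp(si, (0, 500), [S0, I0], rtol=1e-8, atol=1e-10)
    print(f"start=({S0}, {I0}) -> end=({sol.y[0, -1]:.3f}, {sol.y[1, -1]:.3f})")
```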

10.
Tversky and Kahneman have worked out an appealing model of decision making under uncertainty involving rank- and sign-dependent utilities. This model, cumulative prospect theory (CPT), as well as related models proposed by other authors, has received wide acclaim. Available information and the psychological attitude toward ambiguity jointly determine the subjective likelihood values the decision maker attributes to events, expressed by one of two capacities depending on whether gains or losses are in prospect; unfortunately, neither the interpretation of these capacities nor the prediction of how they are linked is straightforward. An insight into these issues is given by studying the consistency of CPT with certain generalized expected utility models when faced with objective data described by lower–upper probability intervals. Means of testing the existence of subjectively lower–upper probabilized events are obtained, as well as means of evaluating ambiguity aversion.
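For readers unfamiliar with the capacities involved, the sketch below evaluates a simple prospect under CPT with the Tversky-Kahneman (1992) functional forms; the parameter values are their published point estimates and are used purely for illustration.

```python
ALPHA, LAMBDA_ = 0.88, 2.25           # value-function curvature, loss aversion
GAMMA_GAIN, GAMMA_LOSS = 0.61, 0.69   # weighting parameters for gains / losses

def value(x):
    return x**ALPHA if x >= 0 else -LAMBDA_ * (-x)**ALPHA

def w(p, gamma):
    """Inverse-S probability weighting (one capacity per sign)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt(prospect):
    """prospect: list of (outcome, probability); weights applied cumulatively."""
    gains = sorted([op for op in prospect if op[0] >= 0], key=lambda op: op[0])
    losses = sorted([op for op in prospect if op[0] < 0], key=lambda op: -op[0])
    total = 0.0
    for side, gamma in [(gains, GAMMA_GAIN), (losses, GAMMA_LOSS)]:
        tail = 0.0                    # probability of strictly more extreme outcomes
        for x, p in reversed(side):   # from the most extreme outcome inwards
            total += (w(tail + p, gamma) - w(tail, gamma)) * value(x)
            tail += p
    return total

print(cpt([(100, 0.5), (-100, 0.5)]))  # negative: loss aversion rejects a fair coin flip
```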

11.
This paper considers the utility of a stochastic model of carcinogenesis, proposed earlier by Yakovlev and Polig, for the quantitative analysis of the incidence of radiation-induced osteosarcomas in beagles injected with various amounts of 239Pu. The original version of the model failed to provide a good fit to our experimental data. The model has been generalized by incorporating a simple mechanism of lesion elimination, which is likely to be mediated by the immune system. Two versions of the model were developed: the first (Model 1) assumed malignant cells to be the target of the immune attack, while in Model 2 initiated cells were assumed to be the target. Model 2 was rejected by the likelihood ratio test, indicating that the competing model provides a more plausible explanation of the experimental data. Since in experiments with incorporated radionuclides the dose rate varies with time, dose-rate effects cannot be observed directly, and one must rely on mathematical models. The results of our numerical experiments show that, depending on the time of observation, both the direct and the inverse dose-rate effects may manifest themselves even at a fixed total dose level.

12.
Given their importance in determining the outcome of many economic interactions, different models have been proposed to determine how social networks form and which structures are stable. Bala and Goyal (Econometrica 68, 1181–1229, 2000) considered a one-sided link formation model based on a noncooperative game of network formation. They found that the empty network, the wheel in the one-way flow of benefits case, and the center-sponsored star in the two-way flow case play a fundamental role, since they are strict Nash equilibria of the corresponding games for a certain class of payoff functions. In this paper, we first prove that all these network structures are in weakly dominated strategies whenever there are no strict Nash equilibria. Then, we exhibit a more accurate selection device between these network architectures by considering “altruistic behavior” refinements. The refinements that we investigate here, in the framework of games with finite strategy sets, were introduced by the authors in previous papers.
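The equilibrium claim can be checked by brute force for a small example. The sketch below verifies that the wheel is a strict Nash equilibrium of the one-way flow game for n = 4 agents under a simple linear payoff (agents observed minus c times links sponsored, with c = 0.5 chosen arbitrarily within the admissible range):

```python
from itertools import combinations

N, COST = 4, 0.5
wheel = {i: frozenset({(i + 1) % N}) for i in range(N)}   # agent i sponsors a link to i+1

def observed(links, i):
    """Agents whose information i accesses along sponsored links (one-way flow)."""
    seen, frontier = {i}, {i}
    while frontier:
        frontier = {k for j in frontier for k in links[j]} - seen
        seen |= frontier
    return seen

def payoff(links, i):
    return len(observed(links, i)) - COST * len(links[i])

strict = True
for i in range(N):
    base = payoff(wheel, i)
    others = [j for j in range(N) if j != i]
    for r in range(N):                                    # every alternative strategy of i
        for dev in combinations(others, r):
            if frozenset(dev) != wheel[i]:
                trial = dict(wheel)
                trial[i] = frozenset(dev)
                if payoff(trial, i) >= base:
                    strict = False
print("wheel is a strict Nash equilibrium:", strict)      # True for these payoffs
```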

13.
Since Box and Meyer first raised the problem of identifying and estimating dispersion effects in unreplicated factorial experiments, a variety of estimation methods, both iterative and non-iterative, have been proposed. In particular, Brenneman and Nair gave a review of these methods and showed that the modified Harvey (MH) method outperforms the others. In this paper, for the log-linear dispersion model, we propose a non-iterative dispersion-effect estimator based on averaging the residuals of multiple location models at the model-selection stage. In most of the simulated models, the proposed method achieves a smaller mean squared error than the MH method, and it remains applicable in cases of zero or small absolute residuals, where the MH method fails. We also investigate the theoretical properties of the estimator and illustrate it with a real-data analysis.
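A sketch of the log-linear dispersion setting on a simulated 2^3 design: fit a location model by least squares, then regress the log of the squared residuals on the design columns. The averaging over several candidate location models that the paper proposes is omitted for brevity, and with only eight runs the estimates are necessarily noisy.

```python
import numpy as np

rng = np.random.default_rng(0)
levels = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
X = np.c_[np.ones(8), levels]                  # intercept plus factors A, B, C
sigma = np.exp(0.5 * levels[:, 0])             # true dispersion effect on factor A
y = X @ [10.0, 2.0, 0.0, 0.0] + rng.normal(scale=sigma)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # location model
resid = y - X @ beta
delta, *_ = np.linalg.lstsq(X, np.log(resid**2 + 1e-12), rcond=None)
print("estimated dispersion effects (A, B, C):", np.round(delta[1:], 2))
```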

14.
Manufacturing plays an increasingly important role in determining the competitiveness of the firm. However, corporate strategy is often formulated with little regard for how strategic decisions affect operations within the manufacturing system. Detailed models provide a necessary link between manufacturing performance and the functional policies followed by the firm, so that the strengths of the manufacturing system can be consistently reflected in strategic decisions.

This paper presents a scheduling model that relates the strategic decisions determining the type of work that must ultimately be processed by the manufacturing system to the detailed decisions determining how this work should be scheduled. The model accounts for varying processing time, delay penalty, and revenue characteristics among the jobs available for processing by a single facility, with jobs partitioned into multiple classes such that a setup is incurred each time two jobs of different classes are processed in succession. Given limited processing capacity, the objective is to simultaneously determine the subset of jobs to accept for processing and the order in which accepted jobs should be sequenced so as to maximize the total profit realized by the facility. Problem formulations and dynamic programming algorithms are presented both for the special case in which all available work is from a single job class and for the more general case involving multiple job classes. The insight these models provide into the operational implications of strategic decisions is illustrated through a series of example problems, first focusing on the coordination of marketing and manufacturing policy, and finally considering important issues related to manufacturing focus.
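A compact sketch of the single-class case under an assumed linear delay cost (data invented): job j yields revenue r[j] minus a penalty w[j] per unit of completion time; accepted jobs can then be sequenced in WSPT order (descending w/p), and a dynamic program over that order, with the current makespan as its state, jointly selects and sequences the accepted jobs.

```python
from functools import lru_cache

p = [3, 1, 4, 2]       # processing times
r = [20, 8, 30, 12]    # revenues
w = [2, 1, 3, 1]       # delay penalty per unit of completion time

order = sorted(range(len(p)), key=lambda j: w[j] / p[j], reverse=True)  # WSPT

@lru_cache(maxsize=None)
def best(i, t):
    """Maximum profit from jobs order[i:] given current completion time t."""
    if i == len(order):
        return 0.0
    j = order[i]
    reject = best(i + 1, t)
    accept = r[j] - w[j] * (t + p[j]) + best(i + 1, t + p[j])
    return max(reject, accept)

print("maximum profit:", best(0, 0))
```

The multi-class case in the paper would additionally carry the class of the last processed job in the state so that setups between classes can be charged; that extension is not shown here.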

15.
Input and output data under uncertainty must be taken into account as an essential part of data envelopment analysis (DEA) models in practice. Many researchers have dealt with this kind of problem using fuzzy approaches, DEA models with interval data, or probabilistic models. This paper presents an approach to scenario-based robust optimization for conventional DEA models. To account for uncertainty, different scenarios are formulated, each with a specified probability for the input and output data, instead of using point estimates. The proposed robust DEA model is aimed at ranking decision-making units (DMUs) on the basis of their sensitivity analysis within the given set of scenarios, considering both feasibility and optimality factors in the objective function. The model is based on the technique proposed by Mulvey et al. (1995) for solving stochastic optimization problems. The effect of each DMU on the production possibility set is calculated using the Monte Carlo method in order to extract weights for the feasibility and optimality factors in the goal programming model. The proposed approach is illustrated and verified by a case study of an engineering company.
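A toy numeric illustration of the scenario idea (data and probabilities invented): with one input and one output, CCR efficiency reduces to the output/input ratio scaled by the best ratio, so each scenario yields an efficiency vector and DMUs can be ranked by a probability-weighted aggregate. The paper's model does considerably more, weighing feasibility against optimality in a goal program.

```python
import numpy as np

scenarios = {                       # probability: (inputs, outputs), one entry per DMU
    0.5: (np.array([2.0, 3.0, 4.0]), np.array([1.0, 2.0, 2.5])),
    0.3: (np.array([2.2, 2.8, 4.5]), np.array([0.9, 2.1, 2.4])),
    0.2: (np.array([1.8, 3.1, 3.9]), np.array([1.1, 1.9, 2.7])),
}

aggregate = np.zeros(3)
for prob, (x, y) in scenarios.items():
    ratio = y / x
    aggregate += prob * ratio / ratio.max()   # scenario efficiency, probability-weighted
print("scenario-weighted efficiencies:", np.round(aggregate, 3))
```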

16.
This paper deals with a special class of fisheries models referred to as endogenous optimization models. The distinctive feature of these models is that the behaviour of the agents in the model is not predetermined by exogenous behavioural rules. In endogenous optimization models, the model agents are merely furnished with objectives such as profit or utility maximization. Given these objectives and the various constraints determined by the state of the model at each point in time, the agents solve their maximization problem; the corresponding values of their control variables then constitute their behaviour.

Once individual behaviour has been generated by endogenous optimization, summing over agents yields aggregate behaviour. Aggregate behaviour must conform with the overall constraints of the model, be they physical or otherwise. Within the market system, individual behaviour, or rather individual plans, are made compatible via changes in relative prices. Outside equilibrium, therefore, behavioural plans must be repeatedly modified to become mutually compatible, which implies iteratively solving the maximization problems of a number of different agents. Endogenous optimization models therefore tend to be computationally very demanding.

Clearly, the basic principles of endogenous optimization are just as applicable to any other model involving maximizing agents.
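A stylized sketch of that iteration (all functional forms and numbers invented): each agent solves its own maximization problem given the current shadow price of an aggregate constraint, and the price is adjusted until the individual plans are mutually compatible with the overall quota.

```python
P = 5.0                        # exogenous fish price
QUOTA = 8.0                    # aggregate harvest constraint
COSTS = [0.5, 0.8, 1.2, 2.0]   # agent-specific quadratic cost coefficients

def plan(q, c):
    """Agent's problem: max (P - q) * h - c * h**2, solved by h = (P - q) / (2c)."""
    return max(P - q, 0.0) / (2.0 * c)

q = 0.0                        # shadow price of the quota
for _ in range(500):
    excess = sum(plan(q, c) for c in COSTS) - QUOTA
    q += 0.01 * excess         # raise the price while plans exceed the quota

print(f"shadow price {q:.3f}, plans {[round(plan(q, c), 2) for c in COSTS]}")
```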

17.
It is very common to assume deterministic demand in the literature on integrated targeting-inventory models. However, if variability in demand is high, there may be significant disruptions from using the deterministic solution in a probabilistic environment; the model would then not be applicable to real-world situations, and adjustments must be made. The purpose of this paper is to develop a model for the integrated targeting-inventory problem when demand is a random variable. In particular, the proposed model jointly determines the optimal process mean, lot size, and reorder point in a (Q, R) continuous review model. In order to investigate the effect of uncertainty in demand, the proposed model is compared with three baseline cases. The first considers a hierarchical model in which the producer determines the process mean and lot-sizing decisions separately; this hierarchical model is used to show the effect of integrating process targeting with production/inventory decisions. The second baseline case is the deterministic demand case, which is used to show the effect of demand variation on the optimal solution. The last baseline case is the situation in which the variation in the filling amount is negligible; this case demonstrates the sensitivity of the total cost with respect to variation in the process output. A procedure is also developed to determine the optimal solution for the proposed models. Empirical results show that ignoring randomness in the demand pattern leads to underestimating the expected total cost. Moreover, the results indicate that the performance of a process can be improved significantly by reducing its variation.
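For orientation, the sketch below evaluates the expected total cost of a textbook (Q, R) model with normally distributed lead-time demand (all numbers invented); the paper's model adds the process-mean decision on top of a structure like this.

```python
from math import erf, exp, pi, sqrt

D, K, h, b = 1200.0, 50.0, 2.0, 25.0   # annual demand, setup, holding, shortage cost
mu_L, sigma_L = 100.0, 30.0            # lead-time demand mean and std deviation

def phi(z):
    return exp(-z * z / 2) / sqrt(2 * pi)

def Phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def expected_shortage(R):
    z = (R - mu_L) / sigma_L
    return sigma_L * (phi(z) - z * (1 - Phi(z)))   # standard normal loss function

def total_cost(Q, R):
    return (D / Q) * K + h * (Q / 2 + R - mu_L) + (D / Q) * b * expected_shortage(R)

print(f"expected cost at Q=200, R=130: {total_cost(200, 130):.1f}")
```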

18.
19.
We consider a transfer line consisting of a series of machines separated by buffers of finite capacity. The processing of a part on a machine requires a fixed amount of time. In these systems, blocking and starvation, which occur as a consequence of machine failures, are important phenomena. Except for transfer lines without buffer storage, no exact models of such systems have been reported in the literature, even in the case of two-machine lines. Two approximate models have been proposed: the discrete-time model and the continuous-flow model. Exact solutions of these models have been reported in the case of two-machine lines, and approximations have been proposed for longer lines. The purpose of this paper is to provide properties of the continuous-flow model. The main result of the paper is to show that lower and upper bounds on the exact production rate of the transfer line can be obtained using the continuous-flow model. These bounds are obtained by making an appropriate choice of the buffer capacities.
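A crude Monte Carlo discretization of a two-machine continuous-flow line (rates and capacity invented), which can be used to check such bounds numerically; pass-through behaviour when the buffer is full or empty is only approximated by the small time step.

```python
import random

random.seed(0)
RATE = 1.0                  # processing speed of both machines
CAP = 5.0                   # buffer capacity
FAIL, REPAIR = 0.01, 0.1    # failure and repair rates per unit time
DT, T = 0.05, 20_000.0

up = [True, True]
buffer_level, produced, t = 0.0, 0.0, 0.0
while t < T:
    for i in (0, 1):        # machine failures and repairs
        if random.random() < (FAIL if up[i] else REPAIR) * DT:
            up[i] = not up[i]
    flow_in = RATE * DT if up[0] and buffer_level < CAP else 0.0    # blocking
    flow_out = RATE * DT if up[1] and buffer_level > 0.0 else 0.0   # starvation
    buffer_level = min(CAP, max(0.0, buffer_level + flow_in - flow_out))
    produced += flow_out
    t += DT

print("estimated production rate:", produced / T)
```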

20.