Similar Documents
 20 similar documents found; search time: 672 ms
1.
The field of direct marketing is constantly searching for new data mining techniques to analyze the ever-increasing amount of available data. Self-organizing maps (SOM) have been widely applied and discussed in the literature, since they make it possible to reduce the complexity of a high-dimensional attribute space while providing a powerful visual exploration facility. Combined with clustering techniques and the extraction of so-called salient dimensions, a direct marketer can gain high-level insight into a dataset of prospects. In this paper, a SOM-based profile generator is presented, consisting of a generic method that yields value-adding, business-oriented profiles for targeting individuals with predefined characteristics. The proposed method is then applied in detail to a concrete case study from the concert industry. The performance of the method is illustrated and discussed, and possible future research tracks are outlined.
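The SOM idea behind this abstract can be sketched in a few lines of numpy. This is a minimal, generic SOM training loop on made-up "prospect" data, not the paper's profile generator; the grid size, decay schedules, and the toy dataset are all assumptions for illustration.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal rectangular SOM; returns the codebook weights."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))              # codebook vectors
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)   # node grid positions
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):                      # shuffle rows
            lr = lr0 * np.exp(-step / n_steps)               # decaying learning rate
            sigma = sigma0 * np.exp(-step / n_steps)         # shrinking neighbourhood
            # best-matching unit: the node whose weight vector is closest to x
            bmu = np.unravel_index(
                np.argmin(((w - x) ** 2).sum(axis=2)), (rows, cols))
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2) # grid distance to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))[..., None]    # neighbourhood kernel
            w += lr * h * (x - w)                            # pull nodes toward x
            step += 1
    return w

# Toy "prospect" data: two clusters in a 3-attribute space
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.05, (50, 3)),
                  rng.normal(0.8, 0.05, (50, 3))])
weights = train_som(data)
print(weights.shape)  # (5, 5, 3)
```

After training, clustering the 5x5 codebook (and inspecting which attributes vary most across clusters, the "salient dimensions") is what turns the map into a profiling tool.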

2.
The general aim of this study is to guide the future marketing decisions of a firm, using a model that predicts customer lifetime values. The proposed framework aims to eliminate the limitations and drawbacks of the majority of models in the literature through a simple, industry-specific model with easily measurable, objective indicators. In addition, this model predicts the potential value of current customers rather than measuring their current value, as has been done in most previous studies. This study contributes to the literature by supporting future marketing decisions via Markov decision processes for a company that offers several types of products. Another contribution is that the states of the Markov decision processes are generated from the predicted customer lifetime values, where the prediction is realized by a regression-based model. Finally, a real-world application of the proposed model in the banking sector demonstrates its empirical validity. We therefore believe that the proposed framework and the developed model can guide both practitioners and researchers.
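The core Markov-chain CLV calculation underlying such frameworks can be sketched directly. The states, transition probabilities, per-period margins, and discount rate below are all hypothetical; the paper's actual states come from its regression-based predictions.

```python
import numpy as np

# Hypothetical 3-state customer model: Active, At-risk, Lost (absorbing).
P = np.array([[0.70, 0.25, 0.05],     # row i: transition probs out of state i
              [0.30, 0.40, 0.30],
              [0.00, 0.00, 1.00]])
margin = np.array([120.0, 40.0, 0.0])  # expected per-period profit by state
d = 1 / 1.10                           # one-period discount factor (10% rate)

# Expected discounted lifetime value per starting state:
#   CLV = sum_t d^t P^t m = (I - d P)^{-1} m
clv = np.linalg.solve(np.eye(3) - d * P, margin)
print(np.round(clv, 2))  # Active ≈ 504.39, At-risk ≈ 279.02, Lost = 0
```

The closed form avoids simulating trajectories: solving one linear system gives the infinite-horizon discounted value from every state at once.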

3.
Metamodels are used in many disciplines to replace simulation models of complex multivariate systems. To assess a metamodel's quality of fit, simple summaries returned by average-based statistics, such as the root-mean-square error (RMSE), are often used. The sample of points used in computing these averages is restricted in size, especially for simulation models of complex multivariate systems. Decisions based on average values can be misleading when the sample size is inadequate, so the contribution of each individual data point in such samples needs to be examined. This paper presents methods for assessing metamodel quality of fit graphically by means of two-dimensional plots. Three plot types are presented: circle plots, marksman plots, and ordinal plots. Such plots facilitate visual inspection of the effect of each individual point in the validation sample on metamodel accuracy. The proposed methods can complement quantitative validation statistics, in particular when there is not enough validation data or the validation data are too expensive to generate.

4.
Operations research models are used in many business and non-business entities to support a variety of decision-making activities, primarily well-defined operational decisions. This is due to the traditional emphasis of these models on optimal solutions to pre-specified problems. Some attempts have been made to use OR models in support of more complex, strategic decision making. Traditionally, these models have been developed without explicit consideration of the information-processing abilities and limitations of the decision makers who interact with, provide input to, and receive output from them. Research in judgement and decision making shows that human decisions are influenced by a number of factors including, but not limited to, information presentation modes; information content modes (e.g., quantitative versus qualitative); order effects such as primacy and recency; and simultaneous versus sequential presentation of data. This article presents empirical research findings involving executive business decision makers and their preferences for information in decision-making scenarios. These preference functions were evaluated using OR techniques. The results indicate that decision makers view information in different ways: some prefer qualitative, narrative, social information, whereas others prefer quantitative, numerical, firm-specific information. Results also show that decision-making tasks influence the preference structure of decision makers, but that, in general, the preferences are relatively stable across tasks. The results imply that for OR models to be more useful in support of non-routine decision making, attention needs to be focused on the information content and presentation effects of model inputs and outputs.

5.
Logit models have been widely used in marketing to predict brand choice and to make inference about the impact of marketing mix variables on these choices. Most researchers have followed the pioneering example of Guadagni and Little, building choice models and drawing inference conditional on the assumption that the logit model is the correct specification for household purchase behaviour. To the extent that logit models fail to adequately describe household purchase behaviour, statistical inferences from them may be flawed. More importantly, marketing decisions based on these models may be incorrect. This research applies White's robust inference method to logit brand choice models. The method does not impose the restrictive assumption that the assumed logit specification is true: a sandwich estimator of the covariance, 'corrected' for possible mis-specification, is the basis for inference about the logit model parameters. An important feature of this method is that it yields correct standard errors for the marketing mix parameter estimates even if the assumed logit specification is not correct. Empirical examples use household panel data sets from three different product categories to estimate logit models of brand choice. The standard errors obtained using traditional methods are compared with those obtained by White's robust method. The findings illustrate that incorrectly assuming the logit model to be true typically yields standard errors that are biased downward by 10-40 per cent. Conditions under which the bias is particularly severe are explored; under these conditions, the robust approach is recommended. Copyright © 2000 John Wiley & Sons, Ltd.
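The sandwich estimator the abstract refers to can be sketched for a simple binary logit. The data below is simulated and, for simplicity, correctly specified, so the model-based and robust standard errors should roughly agree; under misspecification they diverge and the sandwich version is the one to trust. All parameter values are made up.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Newton-Raphson maximum-likelihood fit of a binary logit model."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        H = X.T @ ((p * (1 - p))[:, None] * X)   # information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

def model_cov(X, beta):
    """Model-based covariance (inverse information); valid only if the
    logit specification is correct."""
    p = 1 / (1 + np.exp(-X @ beta))
    return np.linalg.inv(X.T @ ((p * (1 - p))[:, None] * X))

def sandwich_cov(X, y, beta):
    """White's robust 'sandwich' covariance A^{-1} B A^{-1}."""
    p = 1 / (1 + np.exp(-X @ beta))
    A = X.T @ ((p * (1 - p))[:, None] * X)       # bread: minus the Hessian
    s = (y - p)[:, None] * X                     # per-observation scores
    B = s.T @ s                                  # meat: score outer products
    Ainv = np.linalg.inv(A)
    return Ainv @ B @ Ainv

# Simulated panel of 2000 choices with intercept and one covariate
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ np.array([-0.5, 1.0]))))).astype(float)

beta = fit_logit(X, y)
se_model = np.sqrt(np.diag(model_cov(X, beta)))
se_robust = np.sqrt(np.diag(sandwich_cov(X, y, beta)))
print(np.round(se_model, 4), np.round(se_robust, 4))
```

The bread-and-meat structure is exactly the "correction": only the meat changes when the assumed likelihood is wrong, which is why the robust errors remain valid.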

6.
Verification, validation and testing (VVT) of large systems is an important but complex process. The decisions involved must weigh, on one hand, the controllable variables associated with investments in appraisal and prevention activities and, on the other, the outcomes of these decisions, which are associated with risk impacts and system failures. Typically, quantitative models of such large systems use simulation to generate distributions of possible costs and risk outcomes. Here, by assuming independence of risk impacts, we decompose the decision process into separate decisions for each VVT activity and supersede the simulation technique with simple analytical models. We explore various optimization objectives for VVT strategies, such as minimum total expected cost, minimum uncertainty, and a generalized objective expressing Taguchi's expected loss function, and provide explicit solutions. A numerical example based on simplified data from a case study demonstrates the proposed VVT optimization procedure.

7.
8.
Based on the transition probability matrix of the Markov chain describing customer relationship development, a general model of customer relationship development is established that encompasses many special models. Several models studied by previous researchers turn out to be special cases of this general model. The general model not only characterizes the various forms of customer relationship development but also lays a foundation for firms to analyze and manage customer development quantitatively.
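One standard quantitative question such a transition-matrix model answers is the long-run mix of customers across relationship stages. The stage names and probabilities below are hypothetical, not taken from the paper; the computation is the stationary distribution pi = pi P.

```python
import numpy as np

# Hypothetical transition matrix over relationship stages:
# Prospect -> First-time -> Repeat -> Loyal (rows sum to 1)
P = np.array([[0.60, 0.40, 0.00, 0.00],
              [0.20, 0.30, 0.50, 0.00],
              [0.05, 0.15, 0.50, 0.30],
              [0.02, 0.03, 0.15, 0.80]])

# Long-run share of customers in each stage: the left eigenvector of P
# for eigenvalue 1, i.e. pi = pi P, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
print(np.round(pi, 3))
```

Because the chain here is irreducible, the stationary distribution is unique; an absorbing "lost customer" state would instead call for absorption-probability analysis.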

9.
This paper extends the classical cost efficiency (CE) models to include data uncertainty. We believe that many research situations are best described by the intermediate case, where some uncertain input and output data are available. In such cases, the classical cost efficiency models cannot be used, because input and output data appear in the form of ranges. When the data are imprecise in this way, the cost efficiency measure calculated from them is uncertain as well. In the current paper, we therefore develop a method for estimating upper and lower bounds on the cost efficiency measure in situations of uncertain input and output data. We also extend the theory of efficiency measurement to accommodate incomplete price information by deriving upper and lower bounds for the cost efficiency measure. The practical application of these bounds is illustrated by a numerical example.

10.
One of the most important research fields in marketing science is the analysis of time-series data. This article develops a new method for modeling multivariate time series. The proposed method makes it possible to measure simultaneously the effectiveness of marketing activities, the baseline sales, and the effects of controllable and uncontrollable business factors. A critical issue in the model-construction process is how to evaluate the usefulness of the predictive models. This problem is investigated from a statistical point of view, and use of the Bayesian predictive information criterion is considered. The proposed method is applied to sales data on incense products and successfully extracted useful information that may enable managers to plan their marketing strategies more effectively.

11.
Quantitative forecasting techniques are little used in organizations; instead, organizations rely on the judgement of managers working close to the product market. Increasingly, however, developments at the interface between marketing and operations require more accurate forecasting, and quantitative marketing models have that potential. Drawing on theories from the 'diffusion of innovation' literature and results on 'the barriers to effective implementation', this paper first considers the factors that should be included in any complete evaluation of market forecasting. Using this framework, and based on detailed survey work in a multi-divisional organization, the paper then describes how this company produces its market forecasts and how its managers perceive the inadequacies of the procedures. Reasons are proposed for why quantitative forecasting techniques are not effectively used. The paper concludes with a discussion of the causes behind the organization's mismanagement of its forecasting activity and how these activities might best be improved.

12.
In this paper, an indirect identification scheme is proposed for identifying the parameters of the continuous-time first-order plus time delay (FOPTD) model and the second-order plus time delay (SOPTD) model from step responses. Unlike the existing direct identification scheme, which identifies the parameters of the continuous-time FOPTD and SOPTD models directly from the continuous-time step response data, the proposed indirect scheme pre-identifies discrete-time FOPTD and SOPTD models from the discretized step response input-output data, then converts the obtained discrete-time models to the desired continuous-time models. The proposed method is then extended to identify the aforementioned models from the step responses of systems contaminated with input noise and constant output disturbance. The proposed simple alternative method exhibits good estimation performance in both the time domain and the frequency domain. Illustrative examples demonstrate the effectiveness of the proposed scheme.
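To make the FOPTD setup concrete, here is a small baseline sketch that fits the FOPTD step-response formula K(1 - e^{-(t-theta)/tau}) to simulated noisy data by grid search. Note this is a *direct* fit for illustration only, not the paper's indirect discrete-time-first scheme, and the "true" plant parameters (K=2, tau=5, theta=3) and noise level are assumptions.

```python
import numpy as np

def foptd_step(t, K, tau, theta):
    """Analytic step response of the FOPTD model K e^{-theta s}/(tau s + 1)."""
    return K * (1 - np.exp(-np.maximum(t - theta, 0.0) / tau))

# Simulated noisy step-response data from a hypothetical "true" plant
t = np.linspace(0, 30, 301)
rng = np.random.default_rng(0)
y = foptd_step(t, 2.0, 5.0, 3.0) + rng.normal(0, 0.02, t.size)

# Grid search over (tau, theta); for each pair the optimal gain K is a
# one-dimensional least-squares projection onto the unit-gain response.
best = None
for tau in np.arange(0.5, 10.01, 0.1):
    for theta in np.arange(0.0, 6.01, 0.1):
        phi = 1 - np.exp(-np.maximum(t - theta, 0.0) / tau)  # unit-gain response
        K = (phi @ y) / (phi @ phi)
        sse = ((y - K * phi) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, K, tau, theta)
print(np.round(best[1:], 2))  # close to the true (2.0, 5.0, 3.0)
```

Exploiting the linearity in K keeps the nonlinear search two-dimensional, which is what makes the naive grid feasible.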

13.
Consider discrete storage processes that are modulated by environmental processes. Environmental processes cause interruptions in the input and/or output processes of the discrete storage processes. Due to the difficulties encountered in the exact analysis of such discrete storage systems, Poisson flow and/or fluid flow models with the same modulating environmental processes are often proposed as approximations, since their analysis is much easier than that of the discrete storage processes. In this paper we give sufficient conditions under which the content of the discrete storage processes can be bounded by the Poisson flow and fluid flow models. For example, we show that Poisson flow models and the fluid flow models developed by Kosten (and by Anick, Mitra and Sondhi) can be used to bound the performance of infinite (finite) source packetized voice/data communication systems. We also show that a Poisson flow model and the fluid flow model developed by Mitra can be used to bound the buffer content of a two-stage automatic transfer line. The potential use of the bounding techniques presented in this paper, of course, extends well beyond these examples. Supported in part by NSF grant DMS-9308149.

14.
We consider the problem of restoring services provided by infrastructure systems after an extreme event disrupts them. This research proposes a novel integrated network design and scheduling problem that models these restoration efforts. In this problem, work groups must be allocated to build nodes and arcs into a network in order to maximize the cumulative weighted flow in the network over a horizon. We develop a novel heuristic dispatching rule that selects the next set of tasks to be processed by the work groups. We further propose families of valid inequalities for an integer programming formulation of the problem, one of which specifically links the network design and scheduling decisions. Our methods are tested on realistic data sets representing the infrastructure systems of New Hanover County, North Carolina in the United States and lower Manhattan in New York City. These results indicate that our methods can be used in both real-time restoration activities and long-term scenario planning activities. Our models are also applied to explore the effects on the restoration activities of aligning them with the goals of an emergency manager and to benchmark existing restoration procedures.

15.
16.
Structural equation models are widely applied in sociology, education, medicine, marketing, and the behavioural sciences. Missing data are common in these fields, and many researchers have proposed and studied structural equation models with missing data. In applications of this class of models, model selection is very important. This paper applies a Bayesian-criterion-based statistic, called the L_v measure, to model selection for such models. Finally, a simulation study and a real-data example illustrate the effectiveness and application of the L_v measure; in the real-data analysis, model selection results based on Bayes factors are also reported to further demonstrate the measure's effectiveness.

17.
Computer traffic simulation models are valuable tools for the design and deployment of Intelligent Transportation Systems (ITS). Simulations of traffic flow can be used for the analysis and assessment of potential ITS technologies. Using simulations, alternative systems can be tested under identical conditions so the effects of oversaturated conditions, spillback, queuing, and overlapping bottlenecks can be measured. The Federal Highway Administration (FHWA) microscopic traffic simulation models, NETSIM, FRESIM, and CORSIM, are regarded as highly comprehensive but somewhat difficult to use. A graphics processor, TRAFVU, has recently been developed for analyzing the output of these microscopic models. TRAFVU was designed to support direct comparison of alternatives to facilitate design and evaluation. Applications of the CORSIM traffic simulation model and the TRAFVU graphics processor to interchange design and developing incident management strategies are presented.

18.
Calibration refers to the adjustment of the posterior probabilities output by a classification algorithm towards the true prior probability distribution of the target classes. This adjustment is necessary to account for the difference in prior distributions between the training set and the test set. This article proposes a new calibration method, called the probability-mapping approach, with two variants: linear and non-linear probability mapping. These new calibration techniques are applied to 9 real-life direct marketing datasets and compared with the original, non-calibrated posterior probabilities and with the adjusted posterior probabilities obtained using the rescaling algorithm of Saerens et al. (2002). The results indicate that marketing researchers should calibrate the posterior probabilities obtained from the classifier. Moreover, a 'simple' rescaling algorithm proves insufficient: the results suggest applying the newly proposed non-linear probability-mapping approach for the best calibration performance.
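The prior-shift correction that rescaling methods build on can be shown in a few lines. This sketch assumes the deployment (test) priors are known; the full Saerens et al. (2002) algorithm instead estimates them from unlabelled test data by EM. The posteriors and priors below are made-up numbers.

```python
import numpy as np

def rescale_posteriors(p, train_priors, test_priors):
    """Prior-shift correction: reweight each posterior by the ratio of
    test to training class priors, then renormalise each row."""
    w = np.asarray(test_priors) / np.asarray(train_priors)
    q = p * w
    return q / q.sum(axis=1, keepdims=True)

# Posteriors from a classifier trained on a 50/50 balanced sample,
# deployed where only ~10% of prospects actually respond.
p = np.array([[0.80, 0.20],
              [0.40, 0.60],
              [0.10, 0.90]])           # columns: [no-response, response]
adj = rescale_posteriors(p, [0.5, 0.5], [0.9, 0.1])
print(np.round(adj, 3))
```

Note how an apparently confident 0.90 response probability drops to 0.5 once the rare true response rate is accounted for; uncorrected posteriors would badly overstate campaign yields.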

19.
Feedback linearization is a well-known technique in nonlinear control in which known system nonlinearities are canceled by the control input, leaving a linear control problem. Feedback linearization requires an exact model of the system. Fundamental and advanced developments in neuro-fuzzy synergy for modeling and control are used to apply the feedback linearization control law to second-order plants. In the models used, the nonlinear plant is decomposed into six fuzzy systems needed to compute the control signal that makes the output follow a reference value. A practical application to a wastewater plant is also presented. The method can be extended to multiple-input multiple-output (MIMO) plants based on input-output data pairs collected directly from the plant.
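The feedback linearization law itself is simple to demonstrate on a hypothetical second-order plant x'' = f(x, x') + g(x, x')u with f and g assumed exactly known (the idealized case; the paper's point is approximating them with fuzzy systems). The pendulum-like f, the gains, and the reference are illustrative choices.

```python
import numpy as np

# Hypothetical known-model plant: x'' = f(x, x') + g(x, x') * u
f = lambda x, xd: -9.81 * np.sin(x) - 0.5 * xd   # gravity + damping terms
g = lambda x, xd: 1.0                            # input gain

kp, kd = 25.0, 10.0      # chosen so the error obeys e'' + 10 e' + 25 e = 0
x_ref = 1.0              # constant reference to track
x, xd, dt = 0.0, 0.0, 1e-3
for _ in range(int(5.0 / dt)):            # 5 s of Euler integration
    e, ed = x_ref - x, -xd
    v = kp * e + kd * ed                  # desired linear acceleration
    u = (v - f(x, xd)) / g(x, xd)         # cancel the nonlinearity exactly
    xdd = f(x, xd) + g(x, xd) * u         # equals v, since the model is exact
    x, xd = x + dt * xd, xd + dt * xdd
print(round(x, 3))  # settles at the reference, ≈ 1.0
```

With exact cancellation the closed loop is critically damped and settles cleanly; modeling error in f or g (the case the fuzzy approximators address) would leave an uncancelled residual in the error dynamics.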

20.
Simultaneous estimation in nonlinear multivariate regression contexts is a complex inference problem. In this paper, we compare the methodology suggested in the literature for an unknown covariance matrix among response components, due to Beauchamp and Cornell (B&C), with the standard nonlinear least squares approach (NLS). In the first part of the paper, we contrast B&C and standard NLS, pointing out from a theoretical point of view how a model specification error could affect the estimation. A comprehensive simulation study is also performed to evaluate the effectiveness of B&C versus standard NLS under both correctly specified and misspecified models. Several alternative models are considered to highlight the consequences of different types of specification error. An application to a real dataset in the context of quantitative marketing is presented.
