Similar Documents
20 similar documents found.
1.
A framework for modelling the safety of an engineering system using a fuzzy rule-based evidential reasoning (FURBER) approach has recently been proposed, in which a fuzzy rule base designed on the basis of a belief structure (called a belief rule base) forms the core of the FURBER inference mechanism. However, it is difficult to determine the parameters of a fuzzy belief rule base (FBRB) accurately on an entirely subjective basis, particularly for complex systems. There is therefore a need for a supporting mechanism that can train, in a locally optimal way, an FBRB initially built from expert knowledge. In this paper, methods for self-tuning an FBRB for engineering system safety analysis are investigated on the basis of a previous study. The approach consists of a number of single- and multiple-objective nonlinear optimization models. The framework is applied to model the safety of a marine engineering system, and the case study demonstrates how the methods can be implemented.

2.
In a very recent note by Gao and Ni [B. Gao, M.F. Ni, A note on article “The evidential reasoning approach for multiple attribute decision analysis using interval belief degrees”, European Journal of Operational Research, in press, doi:10.1016/j.ejor.2007.10.0381], they argued that Yen’s combination rule [J. Yen, Generalizing the Dempster–Shafer theory to fuzzy sets, IEEE Transactions on Systems, Man and Cybernetics 20 (1990) 559–570], which normalizes the combination of multiple pieces of evidence at the end of the combination process, was incorrect. If this were the case, the nonlinear programming models we proposed in [Y.M. Wang, J.B. Yang, D.L. Xu, K.S. Chin, The evidential reasoning approach for multiple attribute decision analysis using interval belief degrees, European Journal of Operational Research 175 (2006) 35–66] would also be incorrect. In this reply to Gao and Ni, we re-examine their numerical illustrations and reconsider their analysis of Yen’s combination rule. We conclude that Yen’s combination rule is correct and our nonlinear programming models are valid.
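Editorial note on the point at issue, not drawn from the reply itself: in the crisp (non-fuzzy) special case of Dempster–Shafer theory, normalizing once at the end of a multi-evidence combination gives the same result as normalizing after every pairwise step, because normalization only rescales by a constant. A minimal Python sketch with hypothetical mass functions over a three-element frame illustrates this:

```python
from itertools import product

def combine(m1, m2, normalize=True):
    """Dempster's rule for mass functions given as {frozenset: mass} dicts."""
    out = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        c = a & b
        if c:  # discard mass assigned to the empty set (conflict)
            out[c] = out.get(c, 0.0) + ma * mb
    if normalize:
        total = sum(out.values())
        out = {k: v / total for k, v in out.items()}
    return out

# Hypothetical mass functions over the frame {a, b, c}.
frame = frozenset({"a", "b", "c"})
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.3, frame: 0.1}
m2 = {frozenset({"b"}): 0.5, frozenset({"a", "c"}): 0.3, frame: 0.2}
m3 = {frozenset({"a"}): 0.4, frame: 0.6}

# Normalize after every pairwise step ...
step = combine(combine(m1, m2, normalize=True), m3, normalize=True)
# ... versus combine everything unnormalized and normalize once at the end.
raw = combine(combine(m1, m2, normalize=False), m3, normalize=False)
total = sum(raw.values())
end = {k: v / total for k, v in raw.items()}

for k in sorted(step, key=lambda s: sorted(s)):
    assert abs(step[k] - end[k]) < 1e-12   # the two orders of normalization agree
print(end)
```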

3.
The wide availability of computer technology and large electronic storage media has led to an enormous proliferation of databases in almost every area of human endeavour. This naturally creates an intense demand for powerful methods and tools for data analysis. Current methods and tools are primarily oriented toward extracting numerical and statistical data characteristics. While such characteristics are very important and useful, they are often insufficient. A decision maker typically needs an interpretation of these findings, and this has to be done by a data analyst. With the growth in the amount and complexity of the data, making such interpretations is an increasingly difficult problem. As a potential solution, this paper advocates the development of methods for conceptual data analysis. Such methods aim at semi-automating the processes of determining high-level data interpretations, and discovering qualitative patterns in data. It is argued that these methods could be built on the basis of algorithms developed in the area of machine learning. An exemplary system utilizing such algorithms, INLEN, is discussed. The system integrates machine learning and statistical analysis techniques with database and expert system technologies. Selected capabilities of the system are illustrated by examples from implemented modules.

4.
There are various methods in knowledge space theory for building knowledge structures or surmise relations from data. Few of them have been thoroughly analyzed, making it difficult to decide which of these methods provides good results and when to apply each of the methods. In this paper, we investigate the method known as inductive item tree analysis and discuss the advantages and disadvantages of this algorithm. In particular, we introduce some corrections and improvements to it, resulting in two newly proposed algorithms. These algorithms and the original inductive item tree analysis procedure are compared in a simulation study and with empirical data.

5.
Fuzzy BCC Model for Data Envelopment Analysis
Fuzzy Data Envelopment Analysis (FDEA) is a tool for comparing the performance of a set of activities or organizations under an uncertain environment. Imprecise data in FDEA models are represented by fuzzy sets, and FDEA models take the form of fuzzy linear programming models. Previous research focused on solving the FDEA model of the CCR (named after Charnes, Cooper, and Rhodes) type (FCCR). In this paper, the FDEA model of the BCC (named after Banker, Charnes, and Cooper) type (FBCC) is studied. Possibility and credibility approaches are provided and compared with an α-level based approach for solving the FDEA models. Using the possibility approach, the relationship between the primal and dual forms of FBCC models is revealed and fuzzy efficiency can be constructed. Using the credibility approach, an efficiency value for each DMU (Decision Making Unit) is obtained as a representative of its possible range. A numerical example is given to illustrate the proposed approaches, and the results are compared with those obtained with the α-level based approach.
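For context, the crisp building block behind the fuzzy models is the BCC envelopment program. The following is a minimal sketch, not taken from the paper, using hypothetical input/output data and scipy to compute input-oriented BCC efficiencies; the fuzzy variants replace the crisp data with fuzzy numbers and rework this program via possibility, credibility, or α-level arguments:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: n DMUs, m inputs, s outputs (columns = DMUs).
X = np.array([[2.0, 3.0, 4.0, 5.0],    # input 1
              [3.0, 2.0, 6.0, 4.0]])   # input 2
Y = np.array([[1.0, 2.0, 3.0, 2.5]])   # output 1
n = X.shape[1]

def bcc_efficiency(o):
    """Input-oriented BCC score of DMU o: min theta s.t.
       X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  sum(lam) = 1, lam >= 0."""
    # Decision variables z = [theta, lam_1, ..., lam_n].
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.vstack([
        np.hstack([-X[:, [o]], X]),                   # X @ lam - theta * x_o <= 0
        np.hstack([np.zeros((Y.shape[0], 1)), -Y]),   # -Y @ lam <= -y_o
    ])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)      # convexity: sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for o in range(n):
    print(f"DMU {o}: BCC efficiency = {bcc_efficiency(o):.3f}")
```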

6.
The risk-triplet approach pioneered by Kaplan and Garrick is the keystone of operational risk analysis. We perform a sharp embedding of the elements of this framework into the one of formal decision theory, which is mainly concerned with the methodological and modeling issues of decision making. The aim of this exercise is twofold: on the one hand, it gives operational risk analysis a direct access to the rich toolbox that decision theory has developed, in the last decades, in order to deal with complex layers of uncertainty; on the other, it exposes decision theory to the challenges of operational risk analysis, thus providing it with broader scope and new stimuli.

7.
In this study, we develop comprehensive symbolic interval-valued time-series models, including interval-valued moving average, auto-interval-regressive moving average, and heteroscedastic volatility models. These models can be flexibly combined to adapt more effectively to various situations. To make inferences regarding these models, likelihood functions were derived, and maximum likelihood estimators were obtained. To evaluate the performance of our methods empirically, Monte Carlo simulations and real data analyses were conducted using the S&P 500 index and PM2.5 levels of 15 stations in southern Taiwan. In the former case, it was found that the proposed model outperforms all other existing methods, whereas in the latter case, the residuals deduced from the proposed models provide more intuitively appealing results compared to the conventional vector autoregressive models. Overall, our findings strongly confirm the adequacy of the proposed model.

8.
This paper treats the organization as a distributed decision network. It proposes an approach based on the application and extension of information theory concepts in order to analyze the informational complexity of a decision network arising from interdependence between decision centers. Based on this approach, new quantitative concepts and definitions are proposed to measure the information in a decision center, based on Shannon entropy and its complement in possibility theory, the U-uncertainty. The approach also measures the quantity of interdependence between decision centers and the informational complexity of decision networks. The paper presents an agent-based model of the organization as a graph composed of decision centers. The proposed approach is applied to analyzing and assessing a measure of organizational structure efficiency from an informational communication point of view. Structure improvement, analysis of information flow in the organization, and grouping algorithms are also investigated. The results obtained from this model in different systems, such as distributed decision networks, clarify the importance of the effects of structure and the distribution of information sources on network efficiency.
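As a hedged illustration of the two measures named above (not the paper's code), the sketch below computes Shannon entropy for a probabilistic description of a decision centre and the Higashi–Klir U-uncertainty, one standard formulation of the possibilistic counterpart, for hypothetical distributions over four alternatives:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a probability distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def u_uncertainty(r):
    """Higashi-Klir U-uncertainty (bits) of a normalized possibility
    distribution r (max(r) == 1): sum_i (r_i - r_{i+1}) * log2(i),
    with r sorted in descending order and r_{n+1} = 0."""
    r = np.sort(np.asarray(r, dtype=float))[::-1]
    r = np.append(r, 0.0)
    ranks = np.arange(1, len(r))
    return float(np.sum((r[:-1] - r[1:]) * np.log2(ranks)))

# Hypothetical decision centre choosing among four alternatives.
p = [0.4, 0.3, 0.2, 0.1]   # probabilistic description
r = [1.0, 0.8, 0.5, 0.2]   # possibilistic description
print(shannon_entropy(p))   # ~1.846 bits
print(u_uncertainty(r))     # possibilistic nonspecificity in bits
```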

9.
A new method for estimating alternatives’ probabilities under a deficiency of expert numeric information (obtained from different sources) is proposed. The method is based on the Bayesian model of uncertainty randomization. Additional non-numeric, non-exact, and non-complete expert knowledge (NNN-knowledge, NNN-information) is used for the final estimation of the alternatives’ probabilities. An illustrative example demonstrates the application of the proposed method to forecasting oil share prices using NNN-information obtained from different experts (investment firms).

10.
Traditional interval data envelopment analysis (DEA) methods suffer from inconsistent evaluation scales and complex computation when determining the upper and lower bounds of each decision-making unit's interval efficiency. To address these problems, this paper proposes a common-weight interval DEA model that simultaneously maximizes the upper and lower efficiency bounds of all decision-making units, and presents a possibility-degree ranking method that incorporates the decision maker's preference information to obtain a complete ranking of the interval efficiencies. Finally, a case study measuring the industrial production efficiency of 11 coastal provinces in mainland China illustrates the effectiveness and practicality of the proposed method.

11.
12.
In this paper, we propose a dominance-based fuzzy rough set approach for the decision analysis of a preference-ordered uncertain or possibilistic data table, which comprises a finite set of objects described by a finite set of criteria. The domains of the criteria may have ordinal properties that express preference scales. In the proposed approach, we first compute the degree of dominance between any two objects based on their imprecise evaluations with respect to each criterion. This results in a valued dominance relation on the universe. Then, we define the degree of adherence to the dominance principle by every pair of objects and the degree of consistency of each object. The consistency degrees of all objects are aggregated to derive the quality of the classification, which we use to define the reducts of a data table. In addition, the upward and downward unions of decision classes are fuzzy subsets of the universe. Thus, the lower and upper approximations of the decision classes based on the valued dominance relation are fuzzy rough sets. By using the lower approximations of the decision classes, we can derive two types of decision rules that can be applied to new decision cases.

13.
A type-2 fuzzy variable is a map from a fuzzy possibility space to the real number space; it is an appropriate tool for describing type-2 fuzziness. This paper first presents three kinds of critical values (CVs) for a regular fuzzy variable (RFV), and proposes three novel methods of reduction for a type-2 fuzzy variable. Secondly, this paper applies the reduction methods to data envelopment analysis (DEA) models with type-2 fuzzy inputs and outputs, and develops a new class of generalized credibility DEA models. According to the properties of generalized credibility, when the inputs and outputs are mutually independent type-2 triangular fuzzy variables, we can turn the proposed fuzzy DEA model into its equivalent parametric programming problem, in which the parameters can be used to characterize the degree of uncertainty about type-2 fuzziness. For any given parameters, the parametric programming model becomes a linear programming one that can be solved using standard optimization solvers. Finally, one numerical example is provided to illustrate the modeling idea and the efficiency of the proposed DEA model.

14.
Simon French, TOP, 2003, 11(2): 229-251
Sensitivity analysis, robustness studies and uncertainty analyses are key stages in the modelling, inference and evaluation used in operational research, decision analytic and risk management studies. However, sensitivity methods (or others so similar technically that they are difficult to distinguish from sensitivity methods) are used in many different circumstances for many different purposes, and the manner of their use in one context may be inappropriate in another. In this paper, I therefore categorise and explore the use of sensitivity analysis and its parallels, and in doing so I hope to provide a guide and typology for a large and growing literature.
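A minimal sketch of the simplest member of this family, a one-way sensitivity sweep of a criterion weight in an additive value model; the weights, scores and alternatives are hypothetical and the sketch is illustrative only, not the typology developed in the paper:

```python
import numpy as np

# Hypothetical scores: rows = alternatives, columns = criteria (already on 0-1 scales).
scores = np.array([[0.9, 0.3, 0.6],
                   [0.5, 0.8, 0.7],
                   [0.6, 0.6, 0.4]])
base_weights = np.array([0.5, 0.3, 0.2])

def best_alternative(w1):
    """Vary the weight of criterion 1; rescale the other weights proportionally."""
    rest = base_weights[1:] / base_weights[1:].sum() * (1.0 - w1)
    w = np.r_[w1, rest]
    return int(np.argmax(scores @ w))

# Sweep the first weight and watch where the preferred alternative switches.
for w1 in np.linspace(0.0, 1.0, 11):
    print(f"w1 = {w1:.1f} -> preferred alternative: {best_alternative(w1)}")
```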

15.
The discrete Fourier transform and the FFT algorithm are extended from the circle to continuous graphs with equal edge lengths.
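For reference, the classical circle case being generalized is the ordinary discrete Fourier transform, which the FFT computes; a minimal numpy check of the FFT against the defining sum (illustrative only, not from the paper):

```python
import numpy as np

def dft(x):
    """Direct O(n^2) evaluation of X_k = sum_j x_j * exp(-2*pi*i*j*k / n)."""
    n = len(x)
    idx = np.arange(n)
    return np.array([np.sum(x * np.exp(-2j * np.pi * idx * k / n)) for k in range(n)])

x = np.random.default_rng(0).normal(size=16)
assert np.allclose(dft(x), np.fft.fft(x))  # the FFT matches the defining sum
print(np.fft.fft(x)[:3])
```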

16.
A critical review of recent models of data envelopment analysis (DEA) is attempted here. Three new lines of approach, involving dynamic changes in parameters, error correction models and a stochastic sensitivity analysis, are discussed in some detail. On the applications side, two new formulations are presented and discussed, e.g. a model of technical change and a cost frontier for testing economies of scale and adjustment due to risk factors. This critical review of recent DEA models of productivity measurement thus provides new insight into the frontier of research in this field.

17.
This paper deals with the application of autoregressive (AR) modelling to the analysis of biological data, including clinical laboratory data. In the first part of the paper, we discuss the necessity of feedback analysis in the field of biochemistry. To enable this, relative power contribution analysis is introduced. Next, we utilize two types of impulse response curves, for the open- and closed-loop systems, to elucidate the structure of the metabolic networks under study. Time series data obtained from 31 chronic hemodialysis patients observed for periods of 3 to 7 years were analyzed by these procedures. The results of the analysis were rather uniform among the patients and suggested the consistency of this approach in identifying the dynamical system of individual patients. An example data set is included in the paper.
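A minimal sketch of the basic building block, not the authors' procedure: fitting an AR model to a simulated series with hypothetical parameters and tracing an open-loop impulse response from the estimated coefficients:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Simulate a hypothetical AR(2) series standing in for a laboratory time series.
rng = np.random.default_rng(1)
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

fit = AutoReg(y, lags=2, trend="c").fit()
const, a1, a2 = fit.params          # intercept and the two AR coefficients

# Open-loop impulse response: propagate a unit shock through the fitted dynamics.
h = np.zeros(20)
h[0] = 1.0
for t in range(1, 20):
    h[t] = a1 * h[t - 1] + (a2 * h[t - 2] if t >= 2 else 0.0)
print(np.round(h[:10], 3))
```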

18.
A transient solution is obtained analytically using continued fractions for the system size in an M/M/1 queueing system with catastrophes, server failures and non-zero repair time. The steady-state probability of the system size is presented. Some key performance measures, namely throughput, loss probability and response time, are investigated for the system under consideration. Further, the reliability and availability of the system are analysed. Finally, numerical illustrations are used to discuss the system performance measures.
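As a numerical companion, not the paper's analytical method (which uses continued fractions and also covers server failures and repair): the simpler M/M/1 queue with catastrophes only can be checked by truncating the state space, solving the balance equations of the generator matrix, and reading off performance measures; the rates below are hypothetical:

```python
import numpy as np

lam, mu, gamma = 0.8, 1.0, 0.05   # arrival, service, catastrophe rates (hypothetical)
N = 200                            # truncation level for the queue length

# Generator of the CTMC: births at rate lam, deaths at rate mu, catastrophes reset to 0.
Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = lam
    if i > 0:
        Q[i, i - 1] = mu
        Q[i, 0] += gamma          # a catastrophe empties the system
    Q[i, i] = -Q[i].sum()

# Stationary distribution: pi Q = 0 with sum(pi) = 1 (replace one balance equation).
A = np.vstack([Q.T[:-1], np.ones(N + 1)])
b = np.r_[np.zeros(N), 1.0]
pi = np.linalg.lstsq(A, b, rcond=None)[0]

mean_n = np.sum(np.arange(N + 1) * pi)   # mean number in system
throughput = mu * (1 - pi[0])            # departure rate of served customers
print(f"P(empty) = {pi[0]:.4f}, E[N] = {mean_n:.3f}, throughput = {throughput:.3f}")
```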

19.
We study a stratified multisite cluster-sampling panel time series approach in order to analyse and evaluate the quality and reliability of produced items, motivated by the problem of sampling and analysing multisite outdoor measurements from photovoltaic systems. The specific stratified sampling in spatial clusters reduces sampling costs and allows for heterogeneity as well as for the analysis of spatial correlations due to defects and damages that tend to occur in clusters. The analysis is based on weighted least squares using data-dependent weights. We show that, under general conditions, this does not affect the consistency and asymptotic normality of the least squares estimator under the proposed sampling design. The estimation of the relevant variance-covariance matrices is discussed in detail for various models, including nested designs and random effects. The strata corresponding to damages or manufacturers are modelled via a quality feature by means of a threshold approach. The analysis of outdoor electroluminescence images shows that spatial correlations and local clusters may arise in such photovoltaic data. Further, relevant statistics such as the mean pixel intensity cannot be assumed to follow a Gaussian law. We investigate the proposed inferential tools in detail by simulations in order to assess the influence of spatial cluster correlations and serial correlations on the test's size and power.
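A minimal sketch of the weighted least squares estimator underlying the analysis, with hypothetical data and known precision weights; the paper's data-dependent weights and cluster structure are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # design with intercept
beta_true = np.array([1.0, 0.5])
sigma = 0.5 + np.abs(X[:, 1])                            # heteroscedastic noise scale
y = X @ beta_true + rng.normal(scale=sigma)

w = 1.0 / sigma**2                                       # (here: known) precision weights
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)     # (X'WX)^{-1} X'Wy
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)             # unweighted comparison
print("WLS:", np.round(beta_wls, 3), " OLS:", np.round(beta_ols, 3))
```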

20.
The goal of this paper is to promote computational thinking among mathematics, engineering, science and technology students through hands-on computer experiments. These activities have the potential to empower students to learn, create and invent with technology, and they engage computational thinking through simulations, visualizations and data analysis. We present nine computer experiments, and suggest a few more, with applications to calculus, probability and data analysis. We use the free (open-source) statistical programming language R. Our goal is to give a taste of what R offers rather than to present a comprehensive tutorial on the R language. In our experience, these kinds of interactive computer activities can be easily integrated into a smart classroom. Furthermore, they tend to keep students motivated and actively engaged in the process of learning, problem solving and developing a better intuition for understanding complex mathematical concepts.
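In the spirit of those activities (the paper's experiments use R; this analogous sketch uses Python), a minimal Monte Carlo simulation comparing an estimated probability with its exact value:

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 100_000

# Birthday problem: probability that at least two of 23 people share a birthday.
birthdays = rng.integers(0, 365, size=(trials, 23))
shared = np.array([len(set(row)) < 23 for row in birthdays])
estimate = shared.mean()

# Exact value: 1 - P(all 23 birthdays distinct).
exact = 1.0 - np.prod([(365 - k) / 365 for k in range(23)])
print(f"simulated: {estimate:.4f}, exact: {exact:.4f}")   # both close to 0.507
```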
