Similar Literature
20 similar documents found.
1.
This paper proposes a two-step algorithm for solving a large-scale semi-definite logit model, which is regarded as a powerful model in failure discriminant analysis. The problem has previously been solved by a cutting-plane (outer approximation) algorithm, but that approach requires far more computation time than the corresponding linear logit model. The two-step algorithm proposed in this paper reduces the computation time by eliminating a portion of the data based on information obtained from solving an associated linear logit model. It is shown that the algorithm generates a solution of almost the same quality as that obtained by solving the original large-scale semi-definite model, within a fraction of the computation time.
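A minimal sketch of the data-elimination idea behind such a two-step scheme, assuming the cheap first step is an ordinary linear logistic regression and that only observations near its decision boundary are kept for the expensive second step; the semi-definite logit model itself is not implemented, and the function name and keep_fraction parameter are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def screen_with_linear_logit(X, y, keep_fraction=0.3):
    """Step 1 (hypothetical): fit a cheap linear logit model and keep only
    the observations closest to its decision boundary; the expensive model
    would then be fit on this reduced subset in step 2."""
    linear = LogisticRegression(max_iter=1000).fit(X, y)
    # |decision_function| is small for points near the boundary,
    # i.e. the observations that actually shape the discriminant.
    margin = np.abs(linear.decision_function(X))
    n_keep = max(2, int(keep_fraction * len(y)))
    keep = np.argsort(margin)[:n_keep]
    return X[keep], y[keep], linear

# Usage sketch on synthetic data; step 2 would fit the large-scale
# semi-definite logit model on X_small, y_small instead of the full set.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=1000) > 0).astype(int)
X_small, y_small, linear_model = screen_with_linear_logit(X, y)
print(X_small.shape)
```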

2.
This paper presents an approximation method for performing efficient reliability analysis with complex computer models. The computational cost of industrial-scale models can cause problems when performing sampling-based reliability analysis, because the failure modes of the system typically occupy a small region of the performance space and thus require relatively large sample sizes to estimate their characteristics accurately. The sequential sampling method proposed in this article combines Gaussian process-based optimisation and subset simulation. Gaussian process emulators construct a statistical approximation to the output of the original code, which is both affordable to use and has its own measure of predictive uncertainty. Subset simulation is used as an integral part of the algorithm to efficiently populate those regions of the surrogate which are likely to lead to the performance function exceeding a predefined critical threshold. The emulator itself is used to inform decisions about efficiently using the original code to augment its predictions. The iterative nature of the method ensures that an arbitrarily accurate approximation of the failure region is developed at a reasonable computational cost. The presented method is applied to an industrial model of a biodiesel filter.
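A rough sketch of the general surrogate-assisted idea, assuming a Gaussian process emulator refined at candidate points where the sign of the performance function relative to the threshold is most uncertain; for brevity the candidates come from plain Monte Carlo rather than subset simulation, and expensive_model, the threshold and all settings are placeholders rather than the paper's algorithm:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):             # stand-in for the industrial code
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

threshold = 0.0                     # failure event: g(x) <= threshold
rng = np.random.default_rng(1)
X_train = rng.uniform(-3, 3, size=(10, 2))
y_train = expensive_model(X_train)

candidates = rng.uniform(-3, 3, size=(5000, 2))   # plain MC, not subset simulation
for _ in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X_train, y_train)
    mean, std = gp.predict(candidates, return_std=True)
    # learning function: small where the sign of g - threshold is uncertain
    u = np.abs(mean - threshold) / np.maximum(std, 1e-12)
    x_new = candidates[np.argmin(u)][None, :]
    X_train = np.vstack([X_train, x_new])          # run the original code once more
    y_train = np.append(y_train, expensive_model(x_new))

p_fail = np.mean(gp.predict(candidates) <= threshold)
print("estimated failure probability:", p_fail)
```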

3.
Automatic construction of decision trees for classification
An algorithm for learning decision trees for classification and prediction is described which converts real-valued attributes into intervals using statistical considerations. The trees are automatically pruned with the help of a threshold for the estimated class probabilities in an interval. By means of this threshold the user can control the complexity of the tree, i.e. the degree of approximation of class regions in feature space. Costs can be included in the learning phase if a cost matrix is given; in this case class-dependent thresholds are used. Some applications are described, in particular the task of predicting the high water level in a mountain river.

4.
Soltysik and Yarnold propose, as a method for two-group multivariate optimal discriminant analysis (MultiODA), selecting a linear discriminant function based on an algorithm by Warmack and Gonzalez. An important assumption underlying the Warmack–Gonzalez algorithm is likely to be violated when the data in the discriminant training samples are discrete, and in particular when they are nominal, causing the algorithm to fail. We offer modest changes to the algorithm that overcome this limitation.

5.
Closed multiclass separable queueing networks can in principle be analyzed using exact computational algorithms. This, however, may not be feasible in the case of large networks. As a result, much work has been devoted to developing approximation techniques, most of them heuristic extensions of the mean value analysis (MVA) algorithm. In this paper, we propose an alternative approximation method for analyzing large separable networks, based on an approximation method for non-separable networks recently proposed by Baynat and Dallery. We show how this method can be used efficiently to analyze large separable networks; it is especially of interest when dealing with networks that have multiple-server stations. Numerical results show that the method has good accuracy.
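For reference, a small sketch of the classical Schweitzer-Bard approximate MVA iteration that the abstract cites as the usual baseline (not the Baynat-Dallery-based method proposed in the paper), for a single-class closed network with given service demands:

```python
def approximate_mva(demands, n_customers, think_time=0.0, tol=1e-8, max_iter=10000):
    """Schweitzer-Bard approximate MVA for a single-class closed network
    of queueing stations with per-visit service demands `demands`."""
    m = len(demands)
    q = [n_customers / m] * m          # initial guess for mean queue lengths
    for _ in range(max_iter):
        # arrival theorem approximation: an arriving customer sees
        # (N-1)/N of the time-averaged queue at each station
        r = [d * (1.0 + (n_customers - 1) / n_customers * qi)
             for d, qi in zip(demands, q)]
        x = n_customers / (think_time + sum(r))      # system throughput
        q_new = [x * ri for ri in r]                 # Little's law per station
        if max(abs(a - b) for a, b in zip(q, q_new)) < tol:
            q = q_new
            break
        q = q_new
    return x, q

throughput, queue_lengths = approximate_mva([0.2, 0.4, 0.1], n_customers=25)
print(throughput, queue_lengths)
```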

6.
A hybrid genetic model for the prediction of corporate failure
This study examines the potential of a neural network (NN) model, whose inputs and structure are automatically selected by means of a genetic algorithm (GA), for the prediction of corporate failure using information drawn from financial statements. The results of this model are compared with those of a linear discriminant analysis (LDA) model. Data from a matched sample of 178 publicly quoted failed and non-failed US firms, drawn from the period 1991 to 2000, are used to train and test the models. The best evolved neural network correctly classified 86.7 (76.6)% of the firms in the training set, one (three) year(s) prior to failure, and 80.7 (66.0)% in the out-of-sample validation set. The LDA model correctly categorised 81.7 (75.0)% and 76.0 (64.7)% respectively. The results provide support for the hypothesis that corporate failure can be anticipated, and that a hybrid GA/NN model can outperform an LDA model in this domain. MSC codes: 62M45, 68W10, 90B50, 91C20

7.
This paper considers an aging multi-state system whose failure rate varies with time. After any failure, maintenance is performed by an external repair team. The repair rate and the cost of each repair are determined by the corresponding corrective maintenance contract with the repair team. The service market can provide different kinds of maintenance contracts to the system owner, and the contract can be changed after each specified time period. The owner of the system would like to determine a series of repair contracts over the system life cycle that minimizes the total expected cost while satisfying a system availability requirement. Operating cost, repair cost and penalty cost for system failures are taken into account. The paper proposes a method for determining such an optimal series of maintenance contracts. The method is based on a piecewise constant approximation of the increasing failure rate function, which allows lower and upper bounds on the total expected cost and system availability to be assessed using Markov models. A genetic algorithm is used as the optimization technique, and a numerical example is presented to illustrate the approach.

8.
Starobinski, David; Sidi, Moshe. Queueing Systems, 2000, 36(1–3): 243–267
We propose a new methodology for modeling and analyzing power-tail distributions, such as the Pareto distribution, in communication networks. The basis of our approach is a fitting algorithm which approximates a power-tail distribution by a hyperexponential distribution. This algorithm possesses several key properties. First, the approximation can be achieved within any desired degree of accuracy. Second, the fitted hyperexponential distribution depends only on a few parameters. Third, only a small number of exponentials is required to obtain an accurate approximation over many time scales. Once equipped with a fitted hyperexponential distribution, we have an integrated framework for analyzing queueing systems with power-tail distributions. We consider the GI/G/1 queue with Pareto-distributed service time and show how our approach allows us to derive both quantitative numerical results and asymptotic closed-form results. This derivation shows that classical teletraffic methods can be employed for the analysis of power-tail distributions.
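A simple illustration, not the authors' fitting algorithm: the Pareto complementary CDF can be matched over several time scales by a small nonnegative mixture of exponentials via least squares; the grids and rates below are arbitrary choices:

```python
import numpy as np
from scipy.optimize import nnls

alpha, xm = 1.5, 1.0                       # Pareto tail index and scale
t = np.logspace(0, 4, 60)                  # match the tail over many time scales
pareto_ccdf = (xm / t) ** alpha            # P(X > t) for t >= xm

rates = np.logspace(-4, 0, 8)              # one exponential rate per time scale
basis = np.exp(-np.outer(t, rates))        # columns: exp(-lambda_j * t)
weights, _ = nnls(basis, pareto_ccdf)      # nonnegative mixture weights

fitted = basis @ weights                   # hyperexponential ccdf on the grid
print("weights sum to", weights.sum())
print("max relative error over grid:",
      np.max(np.abs(fitted - pareto_ccdf) / pareto_ccdf))
```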

9.
We present an approximation algorithm for solving large 0–1 integer programming problems where A is 0–1 and where b is integer. The method can be viewed as a dual coordinate search for solving the LP-relaxation, reformulated as an unconstrained nonlinear problem, together with an approximation scheme working alongside this method. The approximation scheme works by adjusting the costs as little as possible so that the new problem has an integer solution. The degree of approximation is determined by a parameter, and for different levels of approximation the resulting algorithm can be interpreted in terms of linear programming, dynamic programming, and as a greedy algorithm. The algorithm is used in the CARMEN system for airline crew scheduling used by several major airlines, and we show that it performs well for large set covering problems, in comparison to the CPLEX system, in terms of both time and quality. We also present results on some well-known difficult set covering problems that have appeared in the literature.
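As one point of reference for the greedy interpretation mentioned above, a textbook greedy heuristic for weighted set covering (this is not the dual coordinate search and cost-adjustment algorithm used in CARMEN):

```python
def greedy_set_cover(universe, sets, costs):
    """Classical greedy heuristic for weighted set covering: repeatedly
    pick the set with the best cost per newly covered element."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (i for i, s in enumerate(sets) if s & uncovered),
            key=lambda i: costs[i] / len(sets[i] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sets = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5}]
costs = [3.0, 1.0, 2.5, 1.0]
print(greedy_set_cover({1, 2, 3, 4, 5}, sets, costs))
```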

10.
J. Mosler, L. Stanković. PAMM, 2005, 5(1): 347–348
In this paper, a geometrically nonlinear finite element approximation for highly localized deformation in structures undergoing material failure in the form of strain softening is developed. The basis for its numerical implementation in this class of problems is defined through an elaboration of the fundamentals of the Strong Discontinuity Approach. The proposed numerical model uses an Enhanced Assumed Strain concept for the additive decomposition of the displacement gradient into a conforming and an enhanced part. The discontinuous component of the displacement field, which is associated with failure in the modeled structure, is isolated in the enhanced part of the deformation gradient. In contrast to previous works, this part of the deformation mapping is condensed out at the material level, without applying a static condensation technique. The resulting set of constitutive equations is formally identical to that of standard plasticity and can therefore be solved using the return-mapping algorithm. No assumptions are made regarding the interface law connecting the displacement discontinuity with the conjugate traction vector. As a result, the proposed numerical approach can be applied to a broad range of mechanical problems, including mode-I fracture in brittle materials and the analysis of shear bands.

11.
A new algorithm is proposed for generating min-transitive approximations of a given similarity matrix (i.e. a symmetric matrix with elements in the unit interval and diagonal elements equal to one). Different approximations are generated depending on the choice of an aggregation operator that plays a central role in the algorithm. If the maximum operator is chosen, the approximation coincides with the min-transitive closure of the given similarity matrix. In the case of the arithmetic mean, a transitive approximation is generated which is, on average, as close to the given similarity matrix as the approximation generated by the UPGMA hierarchical clustering algorithm. The new algorithm also allows approximations to be generated in a purely ordinal setting. As this new approach is weight-driven, the partition tree associated with the corresponding min-transitive approximation can be built layer by layer. Numerical tests carried out on synthetic data are used to compare different approximations generated by the new algorithm with certain approximations obtained by classical methods.
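A small sketch of the special case mentioned above in which the maximum is used as aggregation operator, i.e. the min-transitive closure obtained by iterating the max-min composition to a fixed point:

```python
import numpy as np

def min_transitive_closure(S, tol=1e-12):
    """Smallest min-transitive matrix above the similarity matrix S,
    obtained by iterating the max-min composition to a fixed point."""
    R = np.array(S, dtype=float)
    while True:
        # (R o R)[i, j] = max_k min(R[i, k], R[k, j])
        comp = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        R_new = np.maximum(R, comp)
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new

S = np.array([[1.0, 0.8, 0.2],
              [0.8, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
print(min_transitive_closure(S))
```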

12.
Although rough sets and intuitionistic fuzzy sets both capture the same notion, imprecision, studies on the combination of these two theories are rare. Rule extraction is an important task in a type of decision system in which the condition attributes take intuitionistic fuzzy values and the decision attribute takes crisp values. To address this issue, this paper makes contributions in the following respects. First, a ranking method is introduced to construct the neighborhood of every object, determined by the intuitionistic fuzzy values of the condition attributes; in addition, an original notion, the dominance intuitionistic fuzzy decision table (DIFDT), is proposed. Second, lower and upper approximation sets of the objects and of the crisp classes confirmed by the decision attribute are ascertained by comparing the relation between them. Third, using the discernibility matrix and discernibility function, a lower and upper approximation reduction and rule extraction algorithm is devised to acquire knowledge from existing dominance intuitionistic fuzzy decision tables. Finally, the presented model and algorithms are applied to information system security auditing risk judgement for CISA, candidate global supplier selection in a manufacturing company, and car classification.
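For orientation, a generic sketch of rough-set lower and upper approximations given some neighborhood (or dominance-class) function; the construction of these neighborhoods from intuitionistic fuzzy values and the paper's ranking method are not reproduced here:

```python
def approximations(objects, decision_class, related):
    """Lower and upper approximation of `decision_class` with respect to
    a binary relation: related(x) returns the set of objects that are
    related to (e.g. indiscernible from, or dominating) x."""
    lower = {x for x in objects if related(x) <= decision_class}   # subset test
    upper = {x for x in objects if related(x) & decision_class}    # non-empty overlap
    return lower, upper

# toy example: objects grouped by an equivalence relation
blocks = {1: {1, 2}, 2: {1, 2}, 3: {3}, 4: {4, 5}, 5: {4, 5}}
lower, upper = approximations({1, 2, 3, 4, 5}, {2, 3, 4},
                              related=lambda x: blocks[x])
print(lower, upper)   # {3} and {1, 2, 3, 4, 5}
```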

13.
We construct wavelets associated with a mesh involving local refinements. The first stage consists in the definition of a multiresolution analysis adapted to our non-translation-invariant situation. Then we show that, modulo an optimal condition on the geometry of the mesh, the orthogonal complement of the approximation spaces can be constructed by hand and exhibits the homogeneous behaviour we were looking for. Finally, we propose a fast algorithm on irregular meshes, using a second wavelet basis constructed for this purpose. The stability of the synthesis algorithm is proved, whereas the stability of the analysis algorithm remains an open problem.

14.
For the dispersion layout problem in a circular region, this paper presents a constrained nonlinear programming model. When the number of layout points is small, the model is converted into an unconstrained optimization problem and solved with a gradient method; when the number of layout points is large, a polynomial-time approximation algorithm with a bound of 1/2 is proposed, and numerical examples are analyzed to verify the reasonableness of the solutions produced by the algorithm. The conclusions and methods of this study enrich and refine the theory of dispersion layout in circular regions to a certain extent.
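The paper's own 1/2-bounded algorithm cannot be reconstructed from the abstract alone; as an illustration of the same max-min dispersion objective, here is the standard greedy farthest-point heuristic over a finite set of candidate locations in the unit disk, a classical factor-1/2 greedy for metric max-min dispersion:

```python
import numpy as np

def greedy_dispersion(candidates, k):
    """Greedy max-min dispersion over a finite candidate set: start with
    the farthest pair, then repeatedly add the point whose distance to
    the already chosen set is largest."""
    pts = np.asarray(candidates, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)   # farthest pair
    chosen = [i, j]
    while len(chosen) < k:
        dist_to_chosen = d[:, chosen].min(axis=1)
        dist_to_chosen[chosen] = -1.0                # do not pick a point twice
        chosen.append(int(np.argmax(dist_to_chosen)))
    return pts[chosen]

# candidate locations sampled inside the unit disk (illustrative only)
rng = np.random.default_rng(2)
raw = rng.uniform(-1, 1, size=(500, 2))
disk = raw[np.linalg.norm(raw, axis=1) <= 1.0]
print(greedy_dispersion(disk, k=6))
```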

15.
Using the discriminant sequence for the positive real roots of polynomials with parametric coefficients, an explicit criterion is given for deciding whether a multivariate quintic symmetric form is non-negative on R^n. Based on this criterion, an effective algorithm is derived that allows the decision to be made automatically by computer even when the number of variables is large.

16.
This paper studies a two-machine cross-docking flow shop scheduling problem in which a job at the second machine can be processed only after the processing of some jobs at the first machine has been completed. The objective is to minimize the makespan. We first show that the problem is strongly NP-hard. Some polynomially solvable special cases are provided. We then develop a polynomial approximation algorithm with an error-bound analysis. A branch-and-bound algorithm is also constructed. Computational results show that the branch-and-bound algorithm can optimally solve problems with up to 60 jobs within a reasonable amount of time.

17.
We consider bin-packing variations related to the well-studied problem of maximizing the total number of pieces packed into a fixed set of bins. We show that, when the objective is to minimize the total number of pieces packed subject to the constraint that no unpacked piece will fit, no polynomial-time relative approximation algorithm exists (unless, of course, P=NP). That is, we prove that it is NP-hard to guarantee packing no more than a constant multiple of the optimal number of pieces, for any constant. This appears to be the first bin-packing problem for which this property has been demonstrated. Furthermore, this result also holds for the allied packing variant which seeks to minimize the maximum number of pieces packed in any single bin. We find the situation to be markedly better for the problem of maximizing the minimum number of pieces in any bin. If all bins possess the same capacity, we prove that the familiar SPF rule is an absolute approximation algorithm with additive constant 1, and can therefore be regarded as a best possible heuristic. For the more general and difficult case in which bin capacities may differ, it turns out that SPF fails to qualify as even a relative approximation algorithm. However, we devise an implementation of the well-known FFD heuristic, which we show to be a relative approximation algorithm, yielding a worst-case performance ratio of 1/2 with additive constant 0. Moreover, we prove that (unless P=NP) no polynomial-time algorithm can guarantee a higher ratio without sacrificing the additive constant. This author's research is supported in part by the National Science Foundation under grants ECS-8403859 and MIP-8603879.
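For reference, a plain first-fit-decreasing routine for bins of possibly different capacities; this is only the generic FFD rule, not the particular implementation analysed in the paper for the max-min piece-count objective:

```python
def first_fit_decreasing(pieces, capacities):
    """Plain FFD: sort pieces in decreasing size and put each into the
    first bin with enough remaining capacity; pieces that fit nowhere
    are returned separately."""
    remaining = list(capacities)
    bins = [[] for _ in capacities]
    unpacked = []
    for piece in sorted(pieces, reverse=True):
        for b, room in enumerate(remaining):
            if piece <= room:
                bins[b].append(piece)
                remaining[b] -= piece
                break
        else:
            unpacked.append(piece)
    return bins, unpacked

bins, unpacked = first_fit_decreasing([0.6, 0.5, 0.4, 0.3, 0.3, 0.2, 0.1],
                                      capacities=[1.0, 1.0, 0.7])
print(bins, unpacked)
```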

18.
We introduce a multigrid algorithm for the solution of a second order elliptic equation in three dimensions. For the approximation of the solution we use a partially ordered hierarchy of finite-volume discretisations. We show that there is a relation with semicoarsening and with approximation by higher-dimensional Haar wavelets. By taking a proper subset of all possible meshes in the hierarchy, a sparse grid finite-volume discretisation can be constructed. The multigrid algorithm consists of a simple damped point-Jacobi relaxation as the smoothing procedure, while the coarse grid correction is made by interpolation from several coarser grid levels. The combination of sparse grids and multigrid with semicoarsening leads to a relatively small number of degrees of freedom, N, needed to obtain an accurate approximation, together with an O(N) method for the solution. The algorithm is symmetric with respect to the three coordinate directions and is well suited for combination with adaptive techniques. To analyse the convergence of the multigrid algorithm we develop the necessary Fourier analysis tools. All techniques, designed for 3D problems, can also be applied in the 2D case, and, for simplicity, we apply the tools to study the convergence behaviour for the anisotropic Poisson equation in this 2D case.
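A minimal sketch of the damped point-Jacobi smoother mentioned above, written for the 2D anisotropic Poisson equation used in the convergence study; the grid, damping factor and anisotropy parameter are illustrative:

```python
import numpy as np

def damped_jacobi(u, f, h, eps=1.0, omega=0.7, sweeps=5):
    """Damped point-Jacobi sweeps for the anisotropic Poisson equation
    -eps*u_xx - u_yy = f on a uniform grid with spacing h and fixed
    (homogeneous Dirichlet) boundary values."""
    diag = 2.0 * (eps + 1.0) / h**2
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1, 1:-1] = (1 - omega) * u[1:-1, 1:-1] + omega / diag * (
            f[1:-1, 1:-1]
            + eps * (u[2:, 1:-1] + u[:-2, 1:-1]) / h**2
            + (u[1:-1, 2:] + u[1:-1, :-2]) / h**2
        )
        u = u_new
    return u

n = 33
h = 1.0 / (n - 1)
u0 = np.zeros((n, n))
f = np.ones((n, n))
print(np.linalg.norm(damped_jacobi(u0, f, h, eps=0.1) - u0))
```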

19.
The discretization of continuous attributes for rough sets is studied on the basis of shadowed set theory, and a new algorithm for extracting candidate cut-point sets based on shadowed sets is proposed. According to the distribution of the instances on each single attribute, the algorithm classifies the data samples, uses shadowed sets to compute the lower and upper approximations of each class, and finally extracts the candidate cut-point set. The performance of the algorithm is tested on several UCI data sets and compared with other candidate cut-point extraction algorithms. The experimental results show that the algorithm effectively reduces the number of candidate cut points in a data set and improves both the running speed and the recognition rate of the discretization algorithm.

20.
The lower-approximation reduction of inconsistent decision tables under dominance relations is studied. A new definition of lower-approximation reduction is introduced and is proved to be equivalent to the lower-approximation reduction defined in reference [7]. A judgement theorem and a discernibility matrix for the new lower-approximation reduction are given; compared with the discernibility matrix of reference [7], the discernibility matrix for the new lower-approximation reduction can be computed with lower time complexity. The new discernibility matrix can therefore be used to compute the lower-approximation reductions of a decision table.
