Similar Documents
20 similar documents found (search time: 171 ms).
1.
This paper addresses the independent multi-plant, multi-period, and multi-item capacitated lot sizing problem where transfers between the plants are allowed. This is an NP-hard combinatorial optimization problem, and few solution methods have been proposed to solve it. We develop a GRASP (Greedy Randomized Adaptive Search Procedure) heuristic as well as a path-relinking intensification procedure to find cost-effective solutions for this problem. In addition, the proposed heuristic is used to solve some instances of the capacitated lot sizing problem with parallel machines. The results of the computational tests show that the proposed heuristics outperform other heuristics previously described in the literature. The results are confirmed by statistical tests.
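The paper's heuristic targets the full multi-plant problem with transfers; as a minimal sketch of the GRASP idea alone (greedy randomized construction plus local search, without path-relinking), the following applies it to a toy single-item uncapacitated lot-sizing instance. The cost function, RCL parameter `alpha`, and the flip-based local search are illustrative assumptions, not the authors' formulation.

```python
import random

def lot_cost(setups, demand, K, h):
    """Total setup + holding cost; production in each setup period covers
    demand until the next setup (a Wagner-Whitin style policy)."""
    total = K * len(setups)
    last = None
    for t, d in enumerate(demand):
        if t in setups:
            last = t
        total += (t - last) * h * d
    return total

def grasp(demand, K, h, iters=200, alpha=0.3, seed=1):
    rng = random.Random(seed)
    T = len(demand)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        # Greedy randomized construction: keep adding setups while some
        # addition reduces cost, choosing randomly from a restricted
        # candidate list (RCL) of near-best candidates.
        sol = {0}
        while True:
            base = lot_cost(sol, demand, K, h)
            cand = sorted((lot_cost(sol | {t}, demand, K, h), t)
                          for t in range(1, T) if t not in sol)
            cand = [c for c in cand if c[0] < base]
            if not cand:
                break
            lo, hi = cand[0][0], cand[-1][0]
            rcl = [t for c, t in cand if c <= lo + alpha * (hi - lo)]
            sol.add(rng.choice(rcl))
        # Local search: single add/drop moves until no improvement.
        improved = True
        while improved:
            improved = False
            cur = lot_cost(sol, demand, K, h)
            for t in range(1, T):
                trial = sol ^ {t}
                if lot_cost(trial, demand, K, h) < cur:
                    sol, improved = trial, True
                    break
        c = lot_cost(sol, demand, K, h)
        if c < best_cost:
            best, best_cost = sol, c
    return best, best_cost
```

The RCL is what distinguishes GRASP from plain greedy: instead of always taking the single best move, each construction samples among the near-best moves, so repeated iterations explore different starting solutions for the local search.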

2.
Discrete support vector machines (DSVM), originally proposed for binary classification problems, have been shown to outperform other competing approaches on well-known benchmark datasets. Here we address their extension to multicategory classification, by developing three different methods. Two of them are based respectively on one-against-all and round-robin classification schemes, in which a number of binary discrimination problems are solved by means of a variant of DSVM. The third method directly addresses the multicategory classification task, by building a decision tree in which an optimal split to separate classes is derived at each node by a new extended formulation of DSVM. Computational tests on publicly available datasets are then conducted to compare the three multicategory classifiers based on DSVM with other methods, indicating that the proposed techniques achieve significantly higher accuracies. This research was partially supported by PRIN grant 2004132117.
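The binary DSVM subproblem requires a mixed-integer solver, so the sketch below shows only the one-against-all reduction scheme itself, with a trivial nearest-centroid scorer standing in for the binary DSVM. The stand-in classifier and the toy data are assumptions for illustration; the scheme (one binary problem per class, predict by the highest score) is the part the abstract describes.

```python
def centroid_score(X, y_bin):
    """Train a stand-in binary scorer: difference of squared distances to
    the negative/positive class means (replaces the binary DSVM here)."""
    pos = [x for x, t in zip(X, y_bin) if t == 1]
    neg = [x for x, t in zip(X, y_bin) if t == 0]
    mp = [sum(c) / len(pos) for c in zip(*pos)]
    mn = [sum(c) / len(neg) for c in zip(*neg)]
    def score(x):
        dp = sum((a - b) ** 2 for a, b in zip(x, mp))
        dn = sum((a - b) ** 2 for a, b in zip(x, mn))
        return dn - dp          # larger = more positive-like
    return score

def one_vs_all_fit(X, y):
    """One binary discrimination problem per class: class k vs. the rest."""
    return {k: centroid_score(X, [1 if t == k else 0 for t in y])
            for k in set(y)}

def one_vs_all_predict(models, x):
    """Assign x to the class whose binary scorer is most confident."""
    return max(models, key=lambda k: models[k](x))
```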

3.
Incorporating statistical multiple comparisons techniques with credit risk measurement, a new methodology is proposed to construct exact confidence sets and exact confidence bands for a beta distribution. This involves simultaneous inference on the two parameters of the beta distribution, based upon the inversion of Kolmogorov tests. Some monotonicity properties of the distribution function of the beta distribution are established which enable the derivation of an efficient algorithm for the implementation of the procedure. The methodology has important applications to financial risk management. Specifically, loss given default (LGD) data are often modeled with a beta distribution. This new approach properly addresses the model risk caused by inadequate LGD sample sizes, and can be used in conjunction with the standard recommendations provided by regulators to provide enhanced and more informative analyses.

4.
Data clustering, also called unsupervised learning, is a fundamental data mining task: understanding the structure of an untagged collection of data by partitioning it into groups based on similarity. Recent studies have shown that clustering techniques optimizing a single objective may not provide satisfactory results, because no single validity measure works well on all kinds of data sets. Moreover, the performance of clustering algorithms degrades as the overlap among clusters in a data set increases. These facts motivated the development of a fuzzy multi-objective particle swarm optimization framework for data clustering, termed FMOPSO, which delivers more effective results than state-of-the-art clustering algorithms. The key challenge in designing the FMOPSO framework is resolving cluster-assignment confusion for points that have significant belongingness to more than one cluster. The proposed framework addresses this problem by identifying points with significant membership in multiple classes, excluding them, and then re-classifying them into single-class assignments. To ascertain the superiority of the proposed algorithm, statistical tests have been performed on a variety of numerical and categorical real-life data sets. The empirical study shows that the proposed framework significantly outperforms state-of-the-art data clustering algorithms in terms of both efficiency and effectiveness.
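The identification step, finding points with significant membership in multiple clusters, can be sketched with the standard fuzzy c-means membership formula; the fuzzifier m = 2 and the 0.6 threshold are illustrative assumptions, not values from the paper, and the full FMOPSO swarm is omitted.

```python
def fuzzy_memberships(x, centers, m=2.0):
    """Standard fuzzy c-means membership of point x in each cluster:
    u_k = 1 / sum_j (d_k / d_j)^(2/(m-1))."""
    d = [max(sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5, 1e-12)
         for c in centers]
    return [1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1)) for j in range(len(d)))
            for k in range(len(d))]

def ambiguous(points, centers, threshold=0.6):
    """Points whose largest membership falls below the threshold, i.e. points
    with significant belongingness to more than one cluster."""
    return [x for x in points if max(fuzzy_memberships(x, centers)) < threshold]
```

A point near one center gets a membership close to 1 for that cluster; a point midway between two centers gets about 0.5 for each and is flagged for the exclude-and-reclassify step.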

5.
As natural engineering materials containing initial defects, rocks exhibit distinctly nonlinear and anisotropic mechanical behavior, which rock damage mechanics can describe well. However, owing to the complexity of the mechanical properties of rock, a mature and applicable model describing the rock failure process, and a method for determining the maximum damage value, have not yet been well established. To address this, a new damage evolution model for rock materials is proposed, based on the least energy consumption principle for describing the fracture process of materials. The model is verified using experimental data from a granite sample under uniaxial compression and the results of numerical tests under uniaxial tension and uniaxial compression. Moreover, the results of the new model are compared with the results of the tests (numerical and physical) and with a traditional damage model. The comparison shows that the new model is more accurate and better reflects the fracture process of the granite sample. Furthermore, the damage energy released in the new model differs from that of the Mazars model, being slightly smaller.

6.
This paper addresses the development of a new algorithm for parameter estimation of ordinary differential equations. Here, we show that (1) the simultaneous approach combined with orthogonal cyclic reduction can be used to reduce the estimation problem to an optimization problem subject to a fixed number of equality constraints, without the need for structural information to devise a stable embedding in the case of non-trivial dichotomy, and (2) the Newton approximation of the Hessian information of the Lagrangian function of the estimation problem should be used in cases where hypothesized models are incorrect or only a limited amount of sample data is available. A new algorithm is proposed which includes the use of the sequential quadratic programming (SQP) Gauss–Newton approximation but also encompasses the SQP Newton approximation, along with tests of when to use this approximation. This composite approach relaxes the restrictions on the SQP Gauss–Newton approximation that the hypothesized model should be correct and the sample data set large enough. This new algorithm has been tested on two standard problems.  
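The Gauss-Newton approximation the paper builds on can be illustrated on the simplest ODE estimation problem: fitting the decay rate k in y' = -k y, where the closed-form solution y(t) = y0 e^(-kt) gives the model response. This unconstrained one-parameter sketch omits the simultaneous/SQP machinery entirely; the data and starting value are illustrative.

```python
from math import exp

def gauss_newton_decay(times, obs, y0, k0=0.5, iters=20):
    """Gauss-Newton least-squares fit of the decay rate k in y' = -k*y,
    using the closed-form solution y(t) = y0*exp(-k*t) as the model."""
    k = k0
    for _ in range(iters):
        r = [y0 * exp(-k * t) - d for t, d in zip(times, obs)]   # residuals
        J = [-t * y0 * exp(-k * t) for t in times]               # dr/dk
        jtj = sum(j * j for j in J)
        jtr = sum(j * ri for j, ri in zip(J, r))
        k -= jtr / jtj   # Gauss-Newton step: (J^T J)^{-1} J^T r
    return k
```

Gauss-Newton drops the second-derivative term of the Hessian, which is exactly why it works well only when the model is correct (residuals near zero at the optimum); the paper's point is to fall back on the full Newton approximation otherwise.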

7.
A large number of statistical procedures have been proposed in the literature to explicitly utilize available information about the ordering of treatment effects at increasing treatment levels. These procedures are generally more efficient than those ignoring the order information. However, when the assumed order information is incorrect, order-restricted procedures are inferior and, strictly speaking, invalid. Just as any statistical model needs to be validated by data, order information to be used in a statistical analysis should also first be justified by data. A common statistical format for checking the validity of order information is to test the null hypothesis of the ordering representing the order information. Parametric tests for ordered null hypotheses have been extensively studied in the literature, but these tests are not suitable for data with nonnormal or unknown underlying distributions. The objective of this study is to develop a general distribution-free testing theory for ordered null hypotheses based on rank order statistics and score generating functions. Necessary and sufficient conditions for the consistency of the proposed general tests are rigorously established.
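The paper develops a general score-based theory; as a concrete classical instance of a distribution-free statistic for ordered treatment effects, the Jonckheere-Terpstra statistic counts concordant pairs across increasing treatment levels (this is a standard example, not the paper's general construction).

```python
def jonckheere_statistic(groups):
    """Jonckheere-Terpstra statistic: over all pairs (x, y) with x from an
    earlier group and y from a later one, count x < y (ties count 1/2)."""
    J = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                for y in groups[j]:
                    J += 1.0 if x < y else (0.5 if x == y else 0.0)
    return J

def jonckheere_null_mean(groups):
    """Expected value of J when all groups share one distribution:
    (N^2 - sum n_i^2) / 4."""
    n = [len(g) for g in groups]
    N = sum(n)
    return (N * N - sum(k * k for k in n)) / 4.0
```

Large values of J relative to the null mean support the ordered alternative of increasing treatment effects.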

8.
We propose different nonparametric tests for multivariate data and derive their asymptotic distribution for unbalanced designs in which the number of factor levels tends to infinity (large a, small ni case). Almost for free, some new parametric multivariate tests suitable for the large-a asymptotic case are also obtained. Finite-sample performances are investigated and compared in a simulation study. The nonparametric tests are based on separate rankings for the different variables. In the presence of outliers, the proposed nonparametric methods have better power than their parametric counterparts. Application of the new tests is demonstrated using data from plant pathology.
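The "separate rankings" ingredient means each variable (column) is rank-transformed independently, with midranks for ties, before any test statistic is formed. A minimal sketch of just that transformation, with the tests themselves omitted:

```python
def midranks(values):
    """Midranks (average ranks for ties), 1-based."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0   # average of the 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def separate_rankings(data):
    """Rank each variable (column) separately; rows keep their alignment."""
    cols = list(zip(*data))
    ranked_cols = [midranks(list(c)) for c in cols]
    return [list(r) for r in zip(*ranked_cols)]
```

Because each variable is ranked on its own scale, a gross outlier in one variable only shifts ranks by one position, which is the source of the robustness the abstract reports.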

9.
This paper addresses a major drawback of the substitution-box on highly auto-correlated data and proposes a novel chaotic substitution technique for encryption algorithms to solve the problem. Simulation results reveal that the overall strength of the proposed encryption technique is much greater than that of most existing encryption techniques. Furthermore, several statistical security analyses have been performed to show the strength of the proposed algorithm.

10.
Mood’s median test for testing the equality of medians is a nonparametric approach that has been widely used for uncensored data in practice. For survival data, many nonparametric methods have been proposed to test for the equality of survival curves. However, if the survival medians, rather than the curves, are compared, those methods are not applicable. Some approaches have been developed to fill this gap. Unfortunately, in general those tests have inflated type I error rates, which makes them inapplicable to survival data with small sample sizes. In this paper, Mood’s median test for uncensored data is extended to survival data. The results from a comprehensive simulation study show that the proposed test outperforms existing methods in terms of both type I error control and power.
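The classic uncensored Mood's median test that the paper extends reduces to a chi-square statistic on counts above and at-or-below the grand median in each group; a minimal sketch (the survival-data extension with censoring is not shown):

```python
def moods_median_statistic(groups):
    """Mood's median test for uncensored data: chi-square statistic from
    counts above / at-or-below the grand median in each group."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    med = (pooled[(n - 1) // 2] + pooled[n // 2]) / 2.0
    above = [sum(1 for x in g if x > med) for g in groups]
    sizes = [len(g) for g in groups]
    p_above = sum(above) / n
    chi2 = 0.0
    for a, m in zip(above, sizes):
        for obs, p in ((a, p_above), (m - a, 1 - p_above)):
            expct = m * p
            chi2 += (obs - expct) ** 2 / expct
    return chi2
```

Under equal medians the counts match their expectations and the statistic is near zero; it is referred to a chi-square distribution with (number of groups - 1) degrees of freedom.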

11.
The Analytic Hierarchy Process developed by T.L. Saaty has received widespread attention in the literature. However, it has not been without criticism, including problems with the meaning of consistency and with large data requirements. This paper addresses these two problems and presents a new method of calculating preference vectors. To the user the procedure is essentially the same, but the data requirements can be made less onerous, and at the same time feedback is provided, permitting a greater understanding of the data inputs. Preliminary results indicate the acceptability of the proposed methodology.
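For context, the standard AHP preference vector that the paper's new method replaces is the principal eigenvector of the pairwise comparison matrix, obtainable by power iteration; this sketch shows Saaty's baseline derivation, not the paper's proposal.

```python
def ahp_priorities(A, iters=100):
    """Priorities as the normalized principal eigenvector of the pairwise
    comparison matrix A, computed by power iteration."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]          # renormalize to sum to 1
    return w
```

For a perfectly consistent matrix (A[i][j] = w_i / w_j) the iteration recovers the underlying weights exactly; inconsistency shows up as the principal eigenvalue exceeding n.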

12.
This research addresses the scheduling problem of multimedia object requests, which is an important problem in information systems, in particular for World Wide Web applications. The performance measure considered is the variance of response time, which is crucial because end users expect fair treatment of their service requests. This problem is known to be NP-hard. The literature survey indicates that two heuristics have been proposed to solve the problem. In this paper, we present a new hybrid evolutionary heuristic, which is shown to perform much better than the two existing ones: the overall average errors of the existing heuristics are 1.012 and 2.042, while that of the proposed hybrid evolutionary heuristic is 0.154.

13.
Powerful k-sample tests based on the likelihood ratio are proposed for comparing the equality of the underlying distributions of right-censored data. Their statistical power is studied and compared with that of commonly used tests by Monte Carlo simulation. A real data analysis is also presented. It is observed that the new likelihood-ratio-based tests are clearly more powerful than the traditional ones when no uniform dominance exists among the distributions involved, and that the new tests are as powerful as the best classical test otherwise.

14.
This paper develops a new radial super-efficiency data envelopment analysis (DEA) model, which allows input–output variables to take both negative and positive values. Compared with existing DEA models capable of dealing with negative data, the proposed model can rank the efficient DMUs and is feasible regardless of whether the input–output data are non-negative. It successfully addresses the infeasibility issue of both the conventional radial super-efficiency DEA model and the Nerlove–Luenberger super-efficiency DEA model under the assumption of variable returns to scale. Moreover, it projects each DMU onto the super-efficiency frontier along a suitable direction and never leads to worse target inputs or outputs than the original ones for inefficient DMUs. Additional advantages of the proposed model include monotonicity, units invariance, and output translation invariance. Two numerical examples demonstrate the practicality and superiority of the new model.

15.
Start-up demonstration tests and various extensions have been discussed in the literature, in which a unit under test is accepted or rejected according to some criteria. CSTF, CSCF, TSCF, and TSTF are the best-known start-up demonstration tests. In this paper, two kinds of more general start-up demonstration tests are introduced, of which CSTF, TSTF, TSCF, and CSCF are all special cases. For the new generalized start-up demonstration tests, under the assumption of independent and identically distributed trials for each test, analytic expressions are given for the expectation, the probability mass function, and the distribution of the test length, as well as for the probability of acceptance or rejection of the unit. All the analyses are based on the finite Markov chain imbedding approach, which avoids the complexities of the probability generating function approach and makes the results readily understood and easily extended to the non-i.i.d. cases. Furthermore, an optimal model for generalized start-up demonstration tests is proposed. Finally, a numerical example is presented to make the results more transparent and to demonstrate the advantages of the new tests.
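The Markov chain imbedding idea can be sketched on the CSTF special case (accept on k Consecutive Successes, reject on f Total Failures): track the probability mass over states (consecutive successes, total failures) and accumulate the mass absorbed into acceptance. This is a minimal illustration of the imbedding approach under i.i.d. trials, not the paper's generalized tests.

```python
def cstf_acceptance(p, k, f):
    """Acceptance probability of a CSTF start-up demonstration test:
    accept on k consecutive successes, reject on f total failures.
    State = (current consecutive successes, total failures so far)."""
    prob = {(0, 0): 1.0}          # mass currently in each transient state
    accept = 0.0
    while prob:
        nxt = {}
        for (c, t), pr in prob.items():
            s = pr * p            # success branch
            if c + 1 == k:
                accept += s       # absorbed: unit accepted
            elif s > 0.0:
                nxt[(c + 1, t)] = nxt.get((c + 1, t), 0.0) + s
            q = pr * (1 - p)      # failure branch (resets the success run)
            if t + 1 < f and q > 0.0:
                nxt[(0, t + 1)] = nxt.get((0, t + 1), 0.0) + q
            # t + 1 == f: absorbed into rejection, mass simply dropped
        prob = nxt
    return accept
```

With f = 1 any failure rejects, so acceptance requires k straight successes and the probability collapses to p^k, a handy sanity check on the chain.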

16.
This paper considers the problem of clustering the vertices of a complete edge-weighted graph. The objective is to maximize the sum of the edge weights within the clusters (also called cliques). This so-called Clique Partitioning Problem (CPP) is NP-complete and has several real-life applications, such as groupings in flexible manufacturing systems, in biology, and in flight gate assignment. Numerous heuristics and exact approaches, as well as benchmark tests, have been presented in the literature. Most exact methods use branch and bound with branching over edges. We present tighter upper bounds for each search tree node than those known from the literature, improve the constraint propagation techniques for fixing edges in each node, and present a new branching scheme. The theoretical improvements are reflected by computational tests with real-life data. Although a standard solver delivers the best results on randomly generated data, the runtime of the proposed algorithm is very low when applied to object clustering instances.
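To make the CPP objective concrete, here is a simple greedy construction (each vertex joins the cluster with the best positive weight gain, or starts a new singleton) plus the objective evaluation; this is a baseline heuristic for illustration, not the paper's branch-and-bound algorithm, and the sparse dict representation (missing pairs weigh 0) is an assumption.

```python
def greedy_clique_partition(n, w):
    """Greedy CPP heuristic: vertices 0..n-1 join the cluster maximizing the
    gain in within-cluster edge weight; w maps frozenset pairs to weights."""
    clusters = []
    for v in range(n):
        gains = [sum(w.get(frozenset((v, u)), 0) for u in c) for c in clusters]
        best = max(range(len(clusters)), key=lambda i: gains[i], default=None)
        if best is not None and gains[best] > 0:
            clusters[best].append(v)
        else:
            clusters.append([v])   # opening a singleton cluster has gain 0
    return clusters

def partition_value(clusters, w):
    """CPP objective: total edge weight inside the clusters."""
    return sum(w.get(frozenset((u, v)), 0)
               for c in clusters for i, u in enumerate(c) for v in c[i + 1:])
```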

17.
The continuing growth of next-generation sequencing data, together with the limitation that genome-wide association studies consider only the effects of common variants on phenotypes, has led researchers to examine the influence of rare variants on phenotype expression. In recent years, statistical methods for rare-variant association analysis have become an active research area, and a large number of statistical tests have been proposed. However, most of these methods have not been compared and analyzed comprehensively, so evaluations of existing methods and guidance on how to use them are lacking. This paper provides a comprehensive review of representative methods in this field and introduces some of the latest research advances.
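Among the methods such a review covers, the simplest family is the burden test: collapse the rare variants in a region into one per-individual score and associate that score with the phenotype. The sketch below uses an unweighted sum of minor-allele counts and a covariance-based statistic; both choices are illustrative simplifications of the many published variants.

```python
def burden_scores(genotypes):
    """Collapse rare variants per individual into a single burden score:
    the sum of minor-allele counts across the rare variant sites."""
    return [sum(g) for g in genotypes]

def burden_association(genotypes, phenotypes):
    """Score-type association statistic: covariance between burden score and
    phenotype, standardized by the burden's standard deviation."""
    b = burden_scores(genotypes)
    n = len(b)
    mb = sum(b) / n
    mp = sum(phenotypes) / n
    cov = sum((x - mb) * (y - mp) for x, y in zip(b, phenotypes)) / n
    var = sum((x - mb) ** 2 for x in b) / n
    return cov / var ** 0.5 if var > 0 else 0.0
```

Burden tests gain power when most rare variants affect the phenotype in the same direction, which is exactly the assumption that variance-component tests in the same literature relax.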

18.

This paper addresses the integration of the lot-sizing problem and the one-dimensional cutting stock problem with usable leftovers (LSP-CSPUL). This integration aims to minimize the cost of cutting items from objects available in stock, allowing items with known demands in a future planning horizon to be produced in advance. The generation of leftovers, to be used for cutting future items, is also allowed, and these leftovers are not considered waste in the current period. Inventory costs for items and leftovers are also considered. A mathematical model for the LSP-CSPUL is proposed, and an approach using the simplex method with column generation is proposed to solve the linear relaxation of this model. A heuristic procedure, based on a relax-and-fix strategy, is also proposed to find integer solutions. Computational tests were performed, and the results show the contributions of the proposed mathematical model as well as the quality of the solutions obtained with the proposed method.


19.
A new class of power-transformed threshold ARCH models is proposed as a threshold-asymmetric generalization of the nonlinear ARCH considered by Higgins and Bera [Internat. Econom. Rev. 33 (1992) 137]. This class is rich enough to include the diverse nonlinear and nonsymmetric ARCH models that have been spelled out in the literature. Geometric ergodicity of the model and the existence of stationary moments are studied. The model facilitates discussing ARCH structures, and hence large-sample tests for ARCH structures are investigated via the local asymptotic normality approach. Semiparametric tests are also discussed for the case when the error density is unknown.
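A first-order member of such a class can be written as sigma_t^delta = a0 + a_pos (e_{t-1}^+)^delta + a_neg (e_{t-1}^-)^delta with e_t = sigma_t z_t; the simulation below assumes this specific parametrization and illustrative coefficient values (chosen small enough to be stationary), which need not match the paper's exact formulation.

```python
import random

def simulate_pt_tarch(n, delta=1.5, a0=0.1, a_pos=0.2, a_neg=0.4, seed=7):
    """Simulate a power-transformed threshold ARCH(1) path:
    sigma_t^delta = a0 + a_pos*(e_{t-1}^+)^delta + a_neg*(e_{t-1}^-)^delta,
    e_t = sigma_t * z_t with standard normal z_t. Taking a_neg > a_pos makes
    volatility respond more strongly to negative shocks (the threshold
    asymmetry); delta is the power transform."""
    rng = random.Random(seed)
    e, path = 0.0, []
    for _ in range(n):
        s_delta = (a0 + a_pos * max(e, 0.0) ** delta
                      + a_neg * max(-e, 0.0) ** delta)
        sigma = s_delta ** (1.0 / delta)
        e = sigma * rng.gauss(0.0, 1.0)
        path.append(e)
    return path
```

Setting delta = 2 and a_pos = a_neg recovers the classical symmetric ARCH(1), which is how the class nests the models spelled out in the literature.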

20.
