Similar Documents
20 similar documents found; search time 11 ms
1.
The curse of dimensionality is based on the fact that high dimensional data is often difficult to work with. A large number of features can increase the noise of the data and thus the error of a learning algorithm. Feature selection is a solution for such problems where there is a need to reduce the data dimensionality. Different feature selection algorithms may yield feature subsets that can be considered local optima in the space of feature subsets. Ensemble feature selection combines independent feature subsets and might give a better approximation to the optimal subset of features. We propose an ensemble feature selection approach based on feature selectors’ reliability assessment. It aims at providing a unique and stable feature selection without ignoring the predictive accuracy aspect. A classification algorithm is used as an evaluator to assign a confidence to features selected by ensemble members based on their associated classification performance. We compare our proposed approach to several existing techniques and to individual feature selection algorithms. Results show that our approach often improves classification performance and feature selection stability for high dimensional data sets.
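As an illustration of the general idea (not the authors' exact reliability-assessment scheme), the sketch below aggregates the choices of two univariate selectors and weights each selector by the cross-validated accuracy a classifier reaches on its selected subset; the selectors, classifier, and data are illustrative assumptions.

# Hedged sketch of accuracy-weighted ensemble feature selection.
# Not the paper's algorithm; selector choices and weighting are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=100, n_informative=10, random_state=0)
k = 10
selectors = [SelectKBest(f_classif, k=k), SelectKBest(mutual_info_classif, k=k)]

votes = np.zeros(X.shape[1])
for sel in selectors:
    sel.fit(X, y)
    mask = sel.get_support()                      # boolean mask of selected features
    # "Reliability": CV accuracy of a classifier trained on this selector's subset.
    acc = cross_val_score(LogisticRegression(max_iter=1000), X[:, mask], y, cv=5).mean()
    votes[mask] += acc                            # accuracy-weighted vote for each feature

ensemble_subset = np.argsort(votes)[::-1][:k]     # top-k features by weighted votes
print("ensemble-selected features:", sorted(ensemble_subset))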

2.
We investigate constrained first order techniques for training support vector machines (SVM) for online classification tasks. The methods exploit the structure of the SVM training problem and combine ideas of incremental gradient technique, gradient acceleration and successive simple calculations of Lagrange multipliers. Both primal and dual formulations are studied and compared. Experiments show that the constrained incremental algorithms working in the dual space achieve the best trade-off between prediction accuracy and training time. We perform comparisons with an unconstrained large scale learning algorithm (Pegasos stochastic gradient) to emphasize that our choice can remain competitive for large scale learning due to the very special structure of the training problem.
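For context, here is a minimal sketch of the unconstrained baseline named in the abstract, a Pegasos-style stochastic sub-gradient step for a linear SVM in the primal; the constrained incremental dual methods studied in the paper are not reproduced, and the data are illustrative.

# Pegasos-style stochastic sub-gradient for a linear SVM (primal, hinge loss).
# This is the baseline mentioned in the abstract, not the paper's method.
import numpy as np

def pegasos(X, y, lam=0.01, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                 # decreasing step size
            margin = y[i] * (X[i] @ w)
            w *= (1 - eta * lam)                  # gradient of the regularizer
            if margin < 1:                        # hinge-loss sub-gradient is active
                w += eta * y[i] * X[i]
    return w

# Toy usage with labels in {-1, +1}.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=500))
w = pegasos(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))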

3.
This paper provides simulation comparisons among the performance of 11 possible prediction intervals for the geometric mean of a Pareto distribution with parameters (α, β). Six different procedures were used to obtain these intervals, namely: the true interval, the pivotal interval, the maximum likelihood estimation interval, the central limit theorem interval, the variance stabilizing interval, and a mixture of the above intervals. Some of these intervals are valid if the observed sample size n is large; others are valid if both n and the future sample size m are large. Some of these intervals require knowledge of α or β, while others do not. The simulation validation and efficiency study shows that intervals depending on the MLEs are the best. The second best intervals are those obtained through pivotal methods or the variance stabilizing transformation. The third group of intervals is that which depends on the central limit theorem when λ is known. Two intervals proved to be unacceptable under any criterion.
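For reference, under the standard Pareto(α, β) parameterization (an assumption, since the abstract's parameter notation is partly garbled), the geometric mean that the intervals target has a simple closed form:

% Geometric mean of a Pareto(\alpha,\beta) sample, assuming the standard
% parameterization f(x) = \alpha \beta^{\alpha} x^{-(\alpha+1)}, x \ge \beta.
\[
  \mathrm{GM} = \Bigl(\prod_{i=1}^{n} X_i\Bigr)^{1/n},
  \qquad
  \mathbb{E}[\ln X] = \ln\beta + \frac{1}{\alpha}
  \;\Longrightarrow\;
  \text{population geometric mean} = \beta\, e^{1/\alpha}.
\]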

4.
This paper develops a new tree method for pricing financial derivatives in a regime-switching mean-reverting model. The tree achieves full node recombination and grows linearly as the number of time steps increases. Conditions for non-negative branch probabilities are presented. The weak convergence of the discrete tree approximations to the continuous regime-switching mean-reverting process is established. To illustrate the application in mathematical finance, the recombining tree is used to price commodity options and zero-coupon bonds. Numerical results are provided and compared.
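As a hedged illustration of the kind of matching conditions involved, consider a standard single-regime trinomial lattice (not the paper's regime-switching construction) for the mean-reverting diffusion dX = κ(θ - X) dt + σ dW. Matching the first two conditional moments on a grid of spacing Δx gives the branch probabilities and a non-negativity condition:

% Standard single-regime trinomial lattice for a mean-reverting process
% (illustrative only; not the construction proposed in the paper).
\[
  \Delta x = \sigma\sqrt{3\,\Delta t}, \qquad
  \eta_j = \frac{\kappa(\theta - x_j)\,\Delta t}{\Delta x},
\]
\[
  p_u = \tfrac{1}{6} + \tfrac{\eta_j^2 + \eta_j}{2}, \qquad
  p_m = \tfrac{2}{3} - \eta_j^2, \qquad
  p_d = \tfrac{1}{6} + \tfrac{\eta_j^2 - \eta_j}{2},
\]
\[
  p_u,\, p_m,\, p_d \ge 0 \iff |\eta_j| \le \sqrt{2/3}.
\]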

5.
A Langevin piezoelectric transducer is used as a physical element for transmitting and receiving sound waves. The operating frequency of a transducer determines the distance that the sound wave can travel, so it is important to measure it. Because the structure of a transducer is quite complicated, it is difficult to estimate precise physical parameters for the simulation model. It therefore takes a long time to measure the resonance frequency in the laboratory and to fix the parameters by trial and error. This study applies learning methods to estimate the transducer frequency instead of relying on trial-and-error experiments. The learning methods applied and compared include artificial neural networks, support vector machines, C4.5, neuro-fuzzy, and mega-fuzzification. Compared with the theoretical one-dimensional model (a simple lumped-element model), the results indicate that a learning method is an efficient way to estimate the piezoelectric transducer resonance frequency. The mega-fuzzification method performs best among the methods compared in this study.
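As a hedged sketch of the data-driven idea (not the paper's mega-fuzzification model), the snippet below fits a support vector regressor mapping hypothetical transducer design parameters to resonance frequency; the feature names and data are purely illustrative.

# Hedged sketch: learning a map from transducer design parameters to resonance
# frequency. Features and data are hypothetical; the paper compares ANN, SVM,
# C4.5, neuro-fuzzy and mega-fuzzification models on laboratory measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical design parameters: ceramic length, horn length, back-mass length (mm).
X = rng.uniform([5, 20, 10], [20, 80, 40], size=(300, 3))
freq = 1.0e6 / X.sum(axis=1) + rng.normal(0, 50, size=300)   # synthetic resonance frequency (Hz)

X_tr, X_te, y_tr, y_te = train_test_split(X, freq, random_state=0)
model = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=1.0))
model.fit(X_tr, y_tr)
print("R^2 on held-out designs:", round(model.score(X_te, y_te), 3))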

6.
This paper is devoted to the numerical comparison of methods applied to solve the generalized Ito system. Four numerical methods are compared against the exact solution, namely, the Laplace decomposition method (LDM), the variational iteration method (VIM), the homotopy perturbation method (HPM) and the Laplace decomposition method with the Padé approximant (LD–PA).

7.
This study attempts to show how a Kohonen map can be used to improve the temporal stability of the accuracy of a financial failure model. Most models lose a significant part of their ability to generalize when data used for estimation and prediction purposes are collected over different time periods. As their lifespan is fairly short, it becomes a real problem if a model is still in use when re-estimation appears to be necessary. To overcome this drawback, we introduce a new way of using a Kohonen map as a prediction model. The results of our experiments show that the generalization error achieved with a map remains more stable over time than that achieved with conventional methods used to design failure models (discriminant analysis, logistic regression, Cox’s method, and neural networks). They also show that type-I error, the economically costliest error, is the greatest beneficiary of this gain in stability.
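To make the mechanism concrete, here is a minimal self-organizing (Kohonen) map written from scratch in NumPy and used as a failure-prediction device by labelling each map unit with the majority class of the training firms it attracts; this is only a sketch of the general idea, not the authors' model, and the financial-ratio data are synthetic.

# Minimal Kohonen map used as a classifier: train on financial ratios, label
# each unit by the majority class of the training samples it wins, and predict
# a new firm with the label of its best-matching unit. Sketch only.
import numpy as np

def train_som(X, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.normal(size=(rows, cols, X.shape[1]))             # codebook vectors
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (rows, cols))
        lr = lr0 * np.exp(-t / iters)                          # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)                    # shrinking neighbourhood
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]       # neighbourhood function
        W += lr * h * (x - W)
    return W

def unit_labels(W, X, y):
    # Majority vote of training samples per unit; empty units get the global majority.
    rows, cols, _ = W.shape
    votes = np.zeros((rows, cols, 2))
    for x, label in zip(X, y):
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (rows, cols))
        votes[bmu][int(label)] += 1
    labels = votes.argmax(-1)
    labels[votes.sum(-1) == 0] = int(np.bincount(y).argmax())
    return labels

def predict(W, labels, X):
    out = []
    for x in X:
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), W.shape[:2])
        out.append(labels[bmu])
    return np.array(out)

# Toy usage with two synthetic financial ratios and failure labels in {0, 1}.
rng = np.random.default_rng(1)
X0 = rng.normal([0.2, 1.5], 0.1, size=(100, 2))
X1 = rng.normal([-0.1, 0.8], 0.1, size=(100, 2))
X, y = np.vstack([X0, X1]), np.array([0] * 100 + [1] * 100)
W = train_som(X)
labels = unit_labels(W, X, y)
print("training accuracy:", np.mean(predict(W, labels, X) == y))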

8.
In this note, we compare several reconstruction methods to solve a linear ill-conditioned problem, a finite Hausdorff moment problem. The methods are the usual L2-regularization method, the linear programming method, and two maxentropic reconstruction methods. The balance seems to tip toward the maxentropic reconstruction methods.
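As a hedged illustration of the first method mentioned (classical L2/Tikhonov regularization for the finite Hausdorff moment problem, not the maxentropic schemes), one can discretize the moment operator on [0, 1] and solve the damped normal equations; the discretization and regularization parameter below are illustrative choices.

# L2 (Tikhonov) regularization for a finite Hausdorff moment problem:
# recover f on [0,1] from a few noisy moments mu_k = int_0^1 x^k f(x) dx.
import numpy as np

m, n = 8, 200                                  # number of moments, grid size
x = (np.arange(n) + 0.5) / n                   # midpoint grid on [0, 1]
A = np.vstack([x ** k / n for k in range(m)])  # quadrature of the moment operator

f_true = np.exp(-10 * (x - 0.4) ** 2)          # hypothetical profile to recover
mu = A @ f_true + 1e-6 * np.random.default_rng(0).normal(size=m)   # noisy moments

lam = 1e-6                                     # regularization parameter
f_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ mu)
print("relative L2 error:", np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))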

9.
Although the classic exponential-smoothing models and grey prediction models have been widely used in time series forecasting, this paper shows that they are susceptible to fluctuations in samples. A new fractional bidirectional weakening buffer operator for time series prediction is proposed in this paper. This new operator can effectively reduce the negative impact of unavoidable sample fluctuations. It overcomes limitations of existing weakening buffer operators and permits better control of fluctuations over the entire sample period. Because it improves the stability and smoothness of the series, the new operator can better capture the real developing trend in raw data and improve forecast accuracy. The paper then proposes a novel methodology that combines the new bidirectional weakening buffer operator and the classic grey prediction model. Through a number of case studies, this method is compared with several classic models, such as the exponential smoothing model and the autoregressive integrated moving average model. Values of three error measures show that the new method outperforms other methods, especially when there are data fluctuations near the forecasting horizon. The relative advantages of the new method on small sample predictions are further investigated. Results demonstrate that the model based on the proposed fractional bidirectional weakening buffer operator has higher forecasting accuracy.
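For orientation, the classic GM(1,1) grey model that the proposed operator is combined with can be written in a few lines; the sketch below is the textbook version without any buffer operator, and the sample data are made up.

# Textbook GM(1,1) grey prediction model (no buffer operator applied).
import numpy as np

def gm11_forecast(x0, horizon=3):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                    # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                         # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]      # development coefficient, grey input
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a     # time response of the accumulated series
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])   # inverse accumulation
    return x0_hat[len(x0):]                               # out-of-sample forecasts

print(gm11_forecast([2.87, 3.28, 3.34, 3.62, 3.81], horizon=3))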

10.
The primary objectives of this paper are: (1) to present an improved formulation of the out-of-kilter algorithm; (2) to give the results of an extensive computational comparison of a code based on this formulation with three widely-used out-of-kilter production codes; (3) to study the possible sensitivity of these programs to the type of problem being solved; and (4) to investigate the effect of advanced dual start procedures on overall solution time. The study discloses that the new formulation does indeed provide the most efficient solution procedure of those tested. This streamlined version of out-of-kilter was found to be faster than the other out-of-kilter codes tested (SHARE, BSRL and Texas Water Development Board codes) by a factor of 2–5 on small and medium size problems and by a factor of 4–15 on large problems. The streamlined method's median solution time for 1500 node networks on a CDC 6600 computer is 33 seconds with a range of 33 to 35 seconds.

11.
In the analytic hierarchy process (AHP), a decision maker first gives linguistic pairwise comparisons, then obtains numerical pairwise comparisons by selecting a certain numerical scale to quantify them, and finally derives a priority vector from the numerical pairwise comparisons. In particular, the validity of this decision-making tool relies on the choice of numerical scale and the design of the prioritization method. By introducing a set of concepts regarding the linguistic variables and linguistic pairwise comparison matrices (LPCMs), and by defining the deviation measures of LPCMs, we present two performance measure algorithms to evaluate the numerical scales and the prioritization methods. Using these performance measure algorithms, we compare the most common numerical scales (the Saaty scale, the geometrical scale, the Ma–Zheng scale and the Salo–Hämäläinen scale) and the prioritization methods (the eigenvalue method and the logarithmic least squares method). In addition, we also discuss the parameter of the geometrical scale, develop a new prioritization method, and construct an optimization model to select the appropriate numerical scales for the AHP decision makers. The findings in this paper can help the AHP decision makers select suitable numerical scales and prioritization methods.
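As a concrete reference point for the prioritization step, the sketch below derives a priority vector from a numerical pairwise comparison matrix with the eigenvalue method and reports Saaty's consistency ratio; the matrix is illustrative and this is not the evaluation algorithm proposed in the paper.

# Eigenvalue (principal eigenvector) prioritization for an AHP pairwise
# comparison matrix, with Saaty's consistency ratio. Matrix is illustrative.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])                 # reciprocal comparison matrix (Saaty scale)

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                                    # normalized priority vector

n = A.shape[0]
lambda_max = eigvals.real[i]
ci = (lambda_max - n) / (n - 1)                 # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]             # Saaty's random index (assumed table values)
print("priorities:", np.round(w, 3), "CR =", round(ci / ri, 3))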

12.
To solve nonlinear complementarity problems, the inexact logarithmic-quadratic proximal (LQP) method solves a system of nonlinear equations (the LQP system) approximately at each iteration. Therefore, the efficiency of inexact-type LQP methods depends greatly on the inexact criteria used to solve the LQP systems. This paper relaxes the inexact criteria of existing inexact-type LQP methods and thus makes it easier to solve the LQP system approximately. Based on the approximate solutions of the LQP systems, a descent method and a prediction–correction method are presented. Convergence of the new methods is proved under mild assumptions. Numerical experiments for solving traffic equilibrium problems demonstrate that the new methods are more efficient than some existing methods and thus verify that the new inexact criterion is attractive in practice.

13.
Methods for forecasting intermittent demand are compared using a large data set from the UK Royal Air Force. Several important results are found. First, we show that the traditional per period forecast error measures are not appropriate for intermittent demand, even though they are consistently used in the literature. Second, by comparing the ability to approximate target service levels and stock holding implications, we show that Croston's method (and a variant) and Bootstrapping clearly outperform Moving Average and Single Exponential Smoothing. Third, we show that the performance of Croston and Bootstrapping can be significantly improved by taking into account that an order in a period is triggered by a demand in that period.
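To make the comparison concrete, here is a plain implementation of Croston's method (exponential smoothing applied separately to nonzero demand sizes and to inter-demand intervals); the demand series and smoothing constant are illustrative, and the demand-triggered refinement described in the paper is not included.

# Croston's method for intermittent demand: smooth nonzero demand sizes and
# inter-demand intervals separately; the per-period forecast is size / interval.
import numpy as np

def croston(demand, alpha=0.1):
    demand = np.asarray(demand, dtype=float)
    z = p = None                     # smoothed demand size and inter-demand interval
    q = 1                            # periods since the last nonzero demand
    forecast = np.full(len(demand) + 1, np.nan)
    for t, d in enumerate(demand):
        if d > 0:
            if z is None:            # initialize on the first nonzero demand
                z, p = d, q
            else:
                z = z + alpha * (d - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
        forecast[t + 1] = np.nan if z is None else z / p
    return forecast                  # forecast[t] is made with data up to period t-1

demand = [0, 0, 4, 0, 0, 0, 6, 0, 3, 0, 0, 5]
print(np.round(croston(demand), 3))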

14.
The accuracy of diagnostic tests is usually compared through the corresponding receiver operating characteristic (ROC) curves. This matter has been approached from different perspectives. Usually, ROC curves are compared through their respective areas under the curve, but in cases where there is no uniform dominance between the involved curves, other procedures are preferred. Although the asymptotic distributions of the statistics behind these methods are, in general, known, resampling plans are considered. With the purpose of comparing the performance of different approaches, with different ways of calibrating the distribution of the tests, a simulation study is carried out in order to investigate the statistical power and the nominal level of each methodology.
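A minimal sketch of one resampling plan of the kind discussed, a paired bootstrap for the difference between two AUCs measured on the same subjects, is given below; the data and the number of resamples are illustrative assumptions.

# Paired bootstrap comparison of two diagnostic tests via their AUCs.
# Subjects are resampled jointly so the correlation between tests is preserved.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 2, size=n)                        # disease status
score_a = y + rng.normal(0, 1.0, size=n)              # test A (more informative)
score_b = y + rng.normal(0, 1.5, size=n)              # test B (noisier)

observed = roc_auc_score(y, score_a) - roc_auc_score(y, score_b)
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)                  # resample subjects with replacement
    if len(np.unique(y[idx])) < 2:                    # need both classes for an AUC
        continue
    diffs.append(roc_auc_score(y[idx], score_a[idx]) - roc_auc_score(y[idx], score_b[idx]))

ci = np.percentile(np.array(diffs), [2.5, 97.5])      # percentile bootstrap CI for the difference
print(f"AUC difference {observed:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")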

15.
16.
We consider a two-dimensional homogeneous elastic state in the arch-like region a ≤ r ≤ b, 0 ≤ θ ≤ α, where (r, θ) denotes plane polar coordinates. We assume that three of the edges are traction-free, while the fourth edge is subjected to an (in-plane) self-equilibrated load. The Airy stress function φ satisfies a fourth-order differential equation in the plane polar coordinates with appropriate boundary conditions. We develop a method which allows us to treat in a unitary way the two problems corresponding to self-equilibrated loads distributed on the straight and curved edges of the region. In fact, we introduce an appropriate change of the variable r and of the Airy stress function to reduce the corresponding boundary value problem to a simpler one, which allows us to indicate an appropriate measure of the solution useful for both types of boundary value problems. In terms of such measures we are able to establish spatial estimates describing the spatial behavior of the Airy stress function. In particular, our spatial decay estimates show a clear relationship with Saint-Venant's principle on such regions.
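For completeness, the fourth-order equation in question is the biharmonic equation satisfied by the Airy stress function in plane polar coordinates, together with the usual relations giving the stresses in terms of the stress function (standard plane-elasticity facts, not results specific to this paper):

% Biharmonic equation for the Airy stress function \varphi in plane polar
% coordinates, and the associated stress components (standard relations).
\[
  \nabla^4 \varphi
  = \Bigl(\frac{\partial^2}{\partial r^2}
    + \frac{1}{r}\frac{\partial}{\partial r}
    + \frac{1}{r^2}\frac{\partial^2}{\partial \theta^2}\Bigr)^{\!2}\varphi = 0,
\]
\[
  \sigma_{rr} = \frac{1}{r}\frac{\partial \varphi}{\partial r}
              + \frac{1}{r^2}\frac{\partial^2 \varphi}{\partial \theta^2},
  \qquad
  \sigma_{\theta\theta} = \frac{\partial^2 \varphi}{\partial r^2},
  \qquad
  \sigma_{r\theta} = -\frac{\partial}{\partial r}
      \Bigl(\frac{1}{r}\frac{\partial \varphi}{\partial \theta}\Bigr).
\]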

17.
Interest in the design of efficient meta-heuristics for application to combinatorial optimization problems is growing rapidly. The optimal design of water distribution networks is an important optimization problem which consists of finding the best way of conveying water from the sources to the users while satisfying their requirements. The efficient design of looped networks is a much more complex problem than the design of branched ones, but their greater reliability can compensate for the increase in cost when closing some loops. Mathematically, this is a non-linear optimization problem constrained to a combinatorial space, since the diameters are discrete, and it has a very large number of local optima. Many works have dealt with the minimization of the cost of the network, but few have considered cost and reliability simultaneously. The aim of this paper is to evaluate the performance of an implementation of Scatter Search in a multi-objective formulation of this problem. Results obtained on three benchmark networks show that the method proposed here performs well in comparison with other multi-objective approaches that were also implemented.

18.
The goal of this paper is to build an operational model for evaluating the financial viability of local municipalities in Greece. For this purpose, a multicriteria methodology is implemented, combining a simulation analysis approach (stochastic multicriteria acceptability analysis) with a disaggregation technique. In particular, an evaluation model is developed on the basis of accrual financial data from 360 Greek municipalities for 2007. A set of financial ratios customized to the local-government context is defined to rate municipalities and distinguish those in good financial condition from those experiencing financial problems. The model's results are analyzed on the 2007 data as well as on a subsample of 100 local governments in 2009. The model succeeded in correctly classifying distressed municipalities according to a benchmark set by the central government in 2010. Such a model and methodology could be particularly useful for performance assessment in the context of several European Union countries that have a local government framework similar to the Greek one and apply accrual accounting techniques.

19.
20.
We devise a new embedding technique, which we call measured descent, based on decomposing a metric space locally, at varying speeds, according to the density of some probability measure. This provides a refined and unified framework for the two primary methods of constructing Fréchet embeddings for finite metrics, due to Bourgain (1985) and Rao (1999). We prove that any n-point metric space (X, d) embeds in Hilbert space with distortion O(√(α_X · log n)), where α_X is a geometric estimate on the decomposability of X. As an immediate corollary, we obtain an O(√(log λ_X · log n))-distortion embedding, where λ_X is the doubling constant of X. Since λ_X ≤ n, this result recovers Bourgain’s theorem, but when the metric X is, in a sense, “low-dimensional,” improved bounds are achieved. Our embeddings are volume-respecting for subsets of arbitrary size. One consequence is the existence of (k, O(log n)) volume-respecting embeddings for all 1 ≤ k ≤ n, which is the best possible, and answers positively a question posed by U. Feige. Our techniques are also used to answer positively a question of Y. Rabinovich, showing that any weighted n-point planar graph embeds in ℓ_∞^{O(log n)} with O(1) distortion. The O(log n) bound on the dimension is optimal, and improves upon the previously known bound of O((log n)^2). Received: April 2004. Accepted: August 2004. Revision: December 2004. J.R.L. supported by NSF grant CCR-0121555 and an NSF Graduate Research Fellowship.
