Similar literature
20 similar documents found (search time: 31 ms)
1.
This paper considers image classification based on a Markov random field (MRF), where the random field proposed here adopts the Jeffreys divergence between category-specific probability densities. The classification method based on the proposed MRF is shown to be an extension of Switzer's smoothing method, which is applied in the remote sensing and geospatial communities. Furthermore, the exact error rates of the proposed and Switzer's methods are obtained under a simple setup, and several properties are derived. Our method is applied to a benchmark image-classification data set and exhibits good performance in comparison with conventional methods.
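The Jeffreys divergence used above is the symmetrised Kullback–Leibler divergence. As an illustrative sketch (not the paper's MRF construction), it has a simple closed form when the category-specific densities are univariate Gaussians:

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    """KL( N(mu1, s1^2) || N(mu2, s2^2) ) for univariate Gaussians."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def jeffreys_gauss(mu1, s1, mu2, s2):
    """Jeffreys divergence: KL(p||q) + KL(q||p), hence symmetric in p, q."""
    return kl_gauss(mu1, s1, mu2, s2) + kl_gauss(mu2, s2, mu1, s1)
```

By construction the divergence is symmetric and vanishes exactly when the two densities coincide, which is what makes it usable as a dissimilarity between category-specific densities.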

2.
A crossing between the asymptotic expansion of an oscillatory integral and Filon-type methods is obtained by applying a Filon-type method to the error term in the asymptotic expansion, which is in itself an oscillatory integral. The efficiency of the approach is investigated through analysis and numerical experiments, revealing a method which in many cases performs better than the Filon-type method. It is shown that considerable savings in terms of the required number of potentially expensive moments can be expected. The case of multivariate oscillatory integrals is discussed briefly. AMS subject classification (2000)  65D30
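As a minimal sketch of the Filon idea (linear interpolation per panel, not the authors' higher-order scheme): the oscillatory factor e^{iωx} is integrated exactly against the interpolant via moment formulas, so the node count does not need to grow with ω:

```python
import cmath

def filon_linear(f, a, b, omega, panels=8):
    """Composite Filon-type rule for  ∫_a^b f(x) e^{i*omega*x} dx  (omega != 0):
    on each panel, f is replaced by its linear interpolant, and the moments
    ∫ x^k e^{i*omega*x} dx (k = 0, 1) are integrated exactly."""
    total = 0j
    h = (b - a) / panels
    iw = 1j * omega
    for k in range(panels):
        x0, x1 = a + k * h, a + (k + 1) * h
        # exact moments of e^{i*omega*x} on [x0, x1]
        m0 = (cmath.exp(iw * x1) - cmath.exp(iw * x0)) / iw
        m1 = (x1 * cmath.exp(iw * x1) - x0 * cmath.exp(iw * x0)) / iw - m0 / iw
        # linear interpolant f(x) ≈ c0 + c1*x on the panel
        c1 = (f(x1) - f(x0)) / (x1 - x0)
        c0 = f(x0) - c1 * x0
        total += c0 * m0 + c1 * m1
    return total
```

The rule is exact whenever f itself is linear, regardless of how large ω is; higher-order Filon-type methods replace the linear interpolant by one matching more data.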

3.
4.
Jiang and Tanner (2008) consider a method of classification using the Gibbs posterior which is directly constructed from the empirical classification errors. They propose an algorithm to sample from the Gibbs posterior which utilizes a smoothed approximation of the empirical classification error, via a Gibbs sampler with augmented latent variables. In this paper, we note some drawbacks of this algorithm and propose an alternative method for sampling from the Gibbs posterior, based on the Metropolis algorithm. The numerical performance of the algorithms is examined and compared via simulated data. We find that the Metropolis algorithm produces good classification results at an improved speed of computation.
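A minimal random-walk Metropolis sketch of this idea, assuming a toy Gibbs posterior exp(−λ·n·err(w)) with a standard-normal prior over the weights of a linear classifier (the names lam and scale and this exact target are illustrative, not the authors' setup):

```python
import math, random

def empirical_error(w, X, y):
    """Fraction of points misclassified by sign(w · x)."""
    wrong = 0
    for xi, yi in zip(X, y):
        s = sum(wj * xj for wj, xj in zip(w, xi))
        if (1 if s >= 0 else -1) != yi:
            wrong += 1
    return wrong / len(y)

def metropolis_gibbs_posterior(X, y, lam=5.0, steps=2000, scale=0.5, seed=0):
    """Random-walk Metropolis targeting the unnormalised Gibbs posterior
    exp(-lam * n * err(w)) * N(w; 0, I).  No smoothing of the 0-1 loss is
    needed: Metropolis only evaluates the target, never differentiates it."""
    rng = random.Random(seed)
    d, n = len(X[0]), len(y)
    w = [0.0] * d

    def log_target(w):
        return (-lam * n * empirical_error(w, X, y)
                - 0.5 * sum(wj * wj for wj in w))

    lp = log_target(w)
    samples = []
    for _ in range(steps):
        prop = [wj + rng.gauss(0, scale) for wj in w]
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject step
            w, lp = prop, lp_prop
        samples.append(list(w))
    return samples
```

This avoids the latent-variable augmentation entirely, which is the source of the speed-up reported in the abstract.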

5.
We introduce a method to minimize the mean square error (MSE) of an estimator which is derived from a classification. The method chooses an optimal discrimination threshold in the outcome of a classification algorithm and deals with the problem of unequal and unknown misclassification costs and class imbalance. The approach is applied to data from the MAGIC experiment in astronomy for choosing an optimal threshold for signal–background separation. In this application one is interested in estimating the number of signal events in a dataset with a very unfavorable signal-to-background ratio. Minimizing the MSE of the estimate is a rather general approach which can be adapted to various other applications in which one wants to derive an estimator from a classification. If the classification depends on parameters other than, or in addition to, the discrimination threshold, MSE minimization can be used to optimize these parameters as well. We illustrate this by optimizing the parameters of logistic regression, leading to relevant improvements over the current approach used in the MAGIC experiment.
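The threshold choice can be sketched as follows, assuming a toy estimator (counts above threshold, corrected by the background expectation and signal efficiency taken from labeled simulation scores); mse_for_threshold and best_threshold are hypothetical names, not the MAGIC analysis code:

```python
import random

def mse_for_threshold(t, sig_scores, bkg_scores, n_sig, n_bkg,
                      trials=200, seed=1):
    """Monte-Carlo MSE of the estimator
        N_hat = (#events above t  -  expected background above t) / p_s,
    where the efficiencies p_s, p_b come from labeled simulation scores."""
    rng = random.Random(seed)
    p_b = sum(s > t for s in bkg_scores) / len(bkg_scores)  # background eff.
    p_s = sum(s > t for s in sig_scores) / len(sig_scores)  # signal eff.
    errs = []
    for _ in range(trials):
        # simulate one dataset: each event passes the cut independently
        above = (sum(rng.random() < p_s for _ in range(n_sig))
                 + sum(rng.random() < p_b for _ in range(n_bkg)))
        n_hat = (above - n_bkg * p_b) / p_s if p_s > 0 else 0.0
        errs.append((n_hat - n_sig) ** 2)
    return sum(errs) / trials

def best_threshold(thresholds, *args):
    """Pick the candidate threshold with minimal estimated MSE."""
    return min(thresholds, key=lambda t: mse_for_threshold(t, *args))
```

A looser cut admits more background and inflates the variance of the estimate; a harsher cut loses signal efficiency. Minimizing the MSE trades these off automatically, without needing explicit misclassification costs.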

6.
A non-monotone FEM discretization of a singularly perturbed one-dimensional reaction-diffusion problem whose solution exhibits strong layers is considered. The method is shown to be maximum-norm stable although it is not inverse monotone. Both a priori and a posteriori error bounds in the maximum norm are derived. The a priori result can be used to deduce uniform convergence on various layer-adapted meshes proposed in the literature. Numerical experiments complement the theoretical results. AMS subject classification (2000)  65L10, 65L50, 65L60

7.
Recent developments in actuarial literature have shown that credibility theory can serve as an effective tool in mortality modelling, leading to accurate forecasts when applied to single or multi-population datasets. This paper presents a crossed classification credibility formulation of the Lee–Carter method particularly designed for multi-population mortality modelling. Differently from the standard Lee–Carter methodology, where the time index is assumed to follow an appropriate time series process, herein, future mortality dynamics are estimated under a crossed classification credibility framework, which models the interactions between various risk factors (e.g. genders, countries). The forecasting performances between the proposed model, the original Lee–Carter model and two multi-population Lee–Carter extensions are compared for both genders of multiple countries. Numerical results indicate that the proposed model produces more accurate forecasts than the Lee–Carter type models, as evaluated by the mean absolute percentage forecast error measure. Applications with life insurance and annuity products are also provided and a stochastic version of the proposed model is presented.

8.
The use of boxes for pattern classification has been widespread and is a fairly natural way in which to partition data into different classes or categories. In this paper we consider multi-category classifiers which are based on unions of boxes. The classification method studied may be described as follows: find boxes such that all points in the region enclosed by each box are assumed to belong to the same category, and then classify the remaining points by considering their distances to these boxes, assigning to a point the category of the nearest box. This extends the simple method of classifying by unions of boxes by incorporating a natural, proximity-based way of classifying points outside the boxes. We analyze the generalization accuracy of such classifiers and obtain generalization error bounds that depend on a measure of how definitive the classification of the training points is.
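A minimal sketch of the classification rule described above, with boxes given as (low, high, label) triples and assignment by Euclidean distance to the nearest axis-aligned box (a point inside a box has distance zero, so it automatically receives that box's label):

```python
def box_distance(point, low, high):
    """Euclidean distance from a point to the axis-aligned box [low, high];
    zero if the point lies inside the box."""
    d2 = 0.0
    for p, lo, hi in zip(point, low, high):
        if p < lo:
            d2 += (lo - p) ** 2
        elif p > hi:
            d2 += (p - hi) ** 2
    return d2 ** 0.5

def classify(point, boxes):
    """boxes: list of (low, high, label) triples.
    Returns the label of the nearest box."""
    return min(boxes, key=lambda b: box_distance(point, b[0], b[1]))[2]
```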

9.
Bias–variance analysis is a method used in model selection to balance how well a model explains the available sample against how accurately it predicts unseen samples, the goal being to make the test error of the selected model as small as possible. In classification or regression, effective variable selection can yield a more accurate model representation, but it also introduces a certain amount of error. We propose the concept of "selection error" to characterize the error induced by a particular variable-selection method in classification problems with variable selection. The error of a classification problem is decomposed into bias, variance and selection error, and the influence of each of these three components on the total error of the classification problem is examined.

10.
This paper discusses a problem of group decision making called ideal point estimation. This problem is equivalent to all individuals of the group, or a representative committee, attempting to estimate the maximum point for an unknown group multiattribute value function. The paper concentrates on a type of systematic error which is possible in such processes and proposes a method of estimation, the minimum variance of utility method, which is capable of debiasing such estimates in certain cases. A model of group error called the dome error process is shown to provide a concrete instance of this type of systematic bias. The method is illustrated numerically and some applications are discussed.

11.
In this work, we present an adaptive Newton-type method to solve nonlinear constrained optimization problems, in which the constraint is a system of partial differential equations discretized by the finite element method. The adaptive strategy is based on a goal-oriented a posteriori error estimation for the discretization and for the iteration error. The iteration error stems from an inexact solution of the nonlinear system of first-order optimality conditions by the Newton-type method. This strategy allows one to balance the two errors and to derive effective stopping criteria for the Newton iterations. The algorithm proceeds with the search of the optimal point on coarse grids, which are refined only if the discretization error becomes dominant. Using computable error indicators, the mesh is refined locally leading to a highly efficient solution process. The performance of the algorithm is shown with several examples and in particular with an application in the neurosciences: the optimal electrode design for the study of neuronal networks.

12.
Harmonic balance is a very popular semi-analytic method in nonlinear dynamics. It is easy to apply and is known to produce good results for numerous examples. Adding an error criterion that takes the neglected terms into account allows an evaluation of the results. Examining this error for increasing ansatz orders makes it possible to judge whether a solution really exists or is an artifact. For the low-error solutions a stability analysis is additionally performed, which allows the solutions to be classified into three types: large-error solutions, low-error stable solutions and low-error unstable solutions. The examples considered in this paper are the classical Duffing oscillator and an extended Duffing oscillator with nonlinear damping and excitation. Compared to numerical integration, the proposed procedure offers a faster calculation of existing multiple solutions and their character.
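A one-term harmonic-balance sketch for the undamped Duffing oscillator, with the neglected third harmonic used as the error criterion (a toy version of the procedure; the paper also treats damping, excitation and stability):

```python
def hb_one_term(alpha, beta, F, omega):
    """One-term harmonic balance x(t) = A cos(omega*t) for
        x'' + alpha*x + beta*x**3 = F cos(omega*t).
    Using cos^3 = (3/4) cos + (1/4) cos(3·), balancing the cos(omega*t)
    terms gives  (alpha - omega**2)*A + 0.75*beta*A**3 = F,
    while the neglected third harmonic (beta*A**3/4) cos(3*omega*t)
    serves as an error measure for the one-term ansatz."""
    g = lambda A: (alpha - omega**2) * A + 0.75 * beta * A**3 - F
    # bisection for g(A) = 0; assumes a sign change on [0, 10]
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    A = 0.5 * (lo + hi)
    neglected = abs(beta * A**3 / 4)
    return A, neglected / abs(F)   # amplitude, relative error measure
```

A large relative error signals that the one-term solution may be an artifact and a higher ansatz order is needed, which is exactly the kind of check the abstract describes.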

13.
In the literature, there are basically two kinds of resampling methods for least squares estimation in linear models: the E-type (the efficient ones, like the classical bootstrap), which is more efficient when the error variables are homogeneous, and the R-type (the robust ones, like the jackknife), which is more robust under heterogeneous errors. However, for M-estimation of a linear model we find a counterexample showing that a usually E-type method is less efficient than an R-type method when the error variables are homogeneous. In this paper, we give sufficient conditions under which the classification of the two types of resampling methods remains valid.

14.
Summary. Using a method based on quadratic nodal spline interpolation, we define a quadrature rule with respect to arbitrary nodes, and which in the case of uniformly spaced nodes corresponds to the Gregory rule of order two, i.e. the Lacroix rule, which is an important example of a trapezoidal rule with endpoint corrections. The resulting weights are explicitly calculated, and Peano kernel techniques are then employed to establish error bounds in which the associated error constants are shown to grow at most linearly with respect to the mesh ratio parameter. Specializing these error estimates to the case of uniform nodes, we deduce non-optimal order error constants for the Lacroix rule, which are significantly smaller than those calculated by cruder methods in previous work, and which are shown here to compare favourably with the corresponding error constants for the Simpson rule. Received July 27, 1998 / Revised version received February 22, 1999 / Published online January 27, 2000
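On uniform nodes, the rule discussed above reduces to the Lacroix (second-order Gregory) rule: the trapezoidal rule with endpoint corrections, giving end weights 3/8, 7/6, 23/24 and weight 1 in the interior. A minimal sketch (the weight values follow from the standard Gregory corrections; the nodal-spline derivation of the paper is not reproduced here):

```python
def lacroix(f, a, b, n):
    """Gregory rule of order two (Lacroix rule) on n+1 uniform nodes,
    n >= 5: trapezoidal rule with second-order endpoint corrections,
    i.e. weights h * [3/8, 7/6, 23/24, 1, ..., 1, 23/24, 7/6, 3/8].
    Exact for polynomials up to degree two."""
    h = (b - a) / n
    w = [1.0] * (n + 1)
    w[0] = w[n] = 3.0 / 8.0
    w[1] = w[n - 1] = 7.0 / 6.0
    w[2] = w[n - 2] = 23.0 / 24.0
    return h * sum(wi * f(a + i * h) for i, wi in enumerate(w))
```

Unlike Simpson's rule, the rule keeps the trapezoidal structure in the interior and only modifies the weights near the endpoints, which is what makes endpoint-corrected trapezoidal rules attractive on long uniform grids.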

15.
To address the difficulty of choosing the parameter C in the Single Sphere Pattern Classification (SSPC) method, an improved classification method, ν-SSPC, is proposed. It introduces a parameter ν with a clear physical meaning: ν is an upper bound on the fraction of margin-error samples among all sample points and a lower bound on the fraction of support vectors. The parameter can therefore be chosen flexibly according to the accuracy requirements of the problem at hand, which allows the most effective parameter to be selected quickly and improves the accuracy of classification prediction.

16.
Ensemble classification techniques such as bagging (Breiman, 1996a), boosting (Freund & Schapire, 1997) and arcing algorithms (Breiman, 1997) have received much attention in the recent literature. Such techniques have been shown to lead to reduced classification error on unseen cases. Even when the ensemble is trained well beyond zero training-set error, it continues to exhibit improved classification error on unseen cases. Despite many studies and conjectures, the reasons behind this improved performance and an understanding of the underlying probabilistic structures remain open and challenging problems. More recently, diagnostics such as the edge and the margin (Breiman, 1997; Freund & Schapire, 1997; Schapire et al., 1998) have been used to explain the improvements made when ensemble classifiers are built. This paper presents some interesting results from an empirical study performed on a set of representative datasets using the decision tree learner C4.5 (Quinlan, 1993). An exponential-like decay in the variance of the edge is observed as the number of boosting trials is increased; that is, boosting appears to “homogenise” the edge. Some initial theory is presented which indicates that a lack of correlation between the errors of individual classifiers is a key factor in this variance reduction.

17.
Two-phase miscible flow in porous media is governed by a system of nonlinear partial differential equations. In this paper, the upwind-mixed method on dynamically changing meshes is presented for the problem in two dimensions. The pressure is approximated by a mixed finite element method and the concentration by a method which upwinds the convection and incorporates diffusion using an expanded mixed finite element method. The method developed is shown to obtain an almost optimal-rate error estimate. When the method is modified, we obtain the optimal-rate error estimate that is well known for static meshes. The modification of the scheme is the construction of a linear approximation to the solution, which is used in projecting the solution from one mesh to another. Finally, numerical experiments are given.

18.
The widely used Support Vector Machine (SVM) method has been shown to yield good results in supervised classification problems. When interpretability is an important issue, classification methods such as Classification and Regression Trees (CART) may be more attractive, since they are designed to detect the important predictor variables and, for each predictor variable, the critical values which are most relevant for classification. However, when interactions between variables strongly affect class membership, CART may yield misleading information. Extending previous work of the authors, in this paper an SVM-based method is introduced. The numerical experiments reported show that our method is competitive with SVM and CART in terms of misclassification rates and, at the same time, is able to detect the critical values and variable interactions which are relevant for classification.

19.
The Neyman–Pearson (NP) criterion is one of the most important criteria in hypothesis testing and is also a criterion for classification. This paper addresses the problem of bounding the estimation error of NP classification in terms of Rademacher averages. We investigate the behavior of the global and local Rademacher averages, present new NP classification error bounds based on the localized averages, and indicate how the estimation error can be estimated without a priori knowledge of the class at hand.
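As a toy illustration of the quantity being bounded (not the localized-average machinery of the paper), the empirical Rademacher average of a finite hypothesis class can be estimated by Monte Carlo:

```python
import random

def empirical_rademacher(predictions, n_draws=2000, seed=0):
    """Monte-Carlo estimate of the empirical Rademacher average
        R_hat = E_sigma [ max_h (1/n) * sum_i sigma_i * h(x_i) ]
    of a finite class, given each hypothesis' +/-1 predictions
    on the fixed sample (one inner list per hypothesis)."""
    rng = random.Random(seed)
    n = len(predictions[0])
    total = 0.0
    for _ in range(n_draws):
        sigma = [rng.choice((-1, 1)) for _ in range(n)]  # random signs
        total += max(sum(s * p for s, p in zip(sigma, h)) / n
                     for h in predictions)
    return total / n_draws
```

A richer class can correlate better with random signs, so its Rademacher average is larger; this is the mechanism by which such averages control the estimation error of empirical risk minimizers.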

20.
Linear dimension reduction plays an important role in classification problems, and a variety of techniques have been developed for linear dimension reduction applied prior to classification. However, there is no single definitive method that works best under all circumstances; rather, the best method depends on various data characteristics. We develop a two-step adaptive procedure in which the best dimension reduction method is first selected based on the data characteristics and then applied to the data at hand. It is shown using both simulated and real-life data that such a procedure can significantly reduce the misclassification rate.

