Similar Documents
20 similar documents found (search time: 15 ms)
1.
For multi-indicator group decision-making problems with interval numbers, an improved grey target decision method based on a set-valued statistical model is proposed. First, the set-valued statistical model is used to aggregate the interval evaluations of multiple experts, yielding a decision-indicator sample matrix that meets the required confidence level. Then a grey target decision method based on the weighted generalized Mahalanobis distance is used to rank the alternatives, giving a grey target decision model whose decision samples take the form of an interval-number group decision matrix. Finally, a concrete numerical example walks through the decision procedure. The method avoids the case in which the Mahalanobis distance does not exist, and it removes the influence that correlation among decision indicators, differences in their importance, and differing measurement units would otherwise have on the decision process and its results; its feasibility and effectiveness are thereby verified.
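The weighted generalized Mahalanobis distance at the heart of such a grey target method can be sketched in a few lines. The decision matrix, the equal weights, and the simple "best value per indicator" bull's-eye below are hypothetical illustrations, not data from the paper (which works with interval-number group evaluations aggregated by a set-valued statistical model):

```python
import math

def mat_inv(m):
    # Gauss-Jordan inverse of a small square matrix
    n = len(m)
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

def covariance(X):
    n, p = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(p)]
    return [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in X) / (n - 1)
             for j in range(p)] for i in range(p)]

def weighted_mahalanobis(x, target, S_inv, w):
    # distance of a weighted deviation vector from the bull's-eye
    d = [wi * (xi - ti) for wi, xi, ti in zip(w, x, target)]
    return math.sqrt(sum(d[i] * S_inv[i][j] * d[j]
                         for i in range(len(d)) for j in range(len(d))))

# hypothetical decision matrix: rows = alternatives, cols = benefit indicators
X = [[0.6, 0.7], [0.9, 0.8], [0.5, 0.9], [0.7, 0.6]]
target = [max(row[j] for row in X) for j in range(2)]  # bull's-eye per indicator
S_inv = mat_inv(covariance(X))
w = [0.5, 0.5]  # indicator weights (assumed equal here)
dists = [weighted_mahalanobis(x, target, S_inv, w) for x in X]
ranking = sorted(range(len(X)), key=lambda i: dists[i])  # smaller = better
```

Using the inverse covariance matrix is what makes the ranking invariant to indicator correlation and scale, at the cost of requiring the covariance matrix to be non-singular.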

2.
Sufficient Dimension Reduction (SDR) in regression comprises the estimation of the dimension of the smallest (central) dimension reduction subspace and its basis elements. For SDR methods based on a kernel matrix, such as SIR and SAVE, dimension estimation is equivalent to estimating the rank of a random matrix, the sample-based estimate of the kernel. A test for the rank of a random matrix amounts to testing how many of its eigenvalues or singular values are equal to zero. We propose two tests based on the smallest eigenvalues or singular values of the estimated matrix: an asymptotic weighted chi-square test and a Wald-type asymptotic chi-square test. We also provide an asymptotic chi-square test for assessing whether elements of the left singular vectors of the random matrix are zero. Together these methods constitute a unified approach, for all SDR methods based on a kernel matrix, that covers estimation of the central subspace and its dimension, as well as assessment of variable contributions to the lower-dimensional predictor projections, with variable selection as a special case. A small power simulation study shows that the proposed and existing tests, specific to each SDR method, perform similarly with respect to power and achievement of the nominal level. The importance of the number of slices as a tuning parameter is also further exhibited.

3.
For the problem of classifying multiple-observation samples, a model based on Kernel Discriminant Canonical Correlation (KDCC) is proposed. The algorithm first maps the original samples nonlinearly into a high-dimensional feature space and obtains kernel subspaces via KPCA. In the feature space it then defines a KDCC matrix that maximizes the correlation between kernel subspaces of the same class while minimizing the correlation between kernel subspaces of different classes; the optimal KDCC matrix is trained iteratively. Projecting each kernel subspace onto the KDCC matrix yields transformed kernel subspaces, the canonical correlation between which serves as the similarity measure, and the nearest-neighbour rule is used as the classification decision for multiple-observation samples. A series of experiments on three databases shows that the proposed method is feasible and effective for multiple-observation sample classification.

4.
The concept of a quadratic subspace is introduced as a helpful tool for dimension reduction in quadratic discriminant analysis (QDA). It is argued that an adequate representation of the quadratic subspace may lead to better methods for both data representation and classification. Several theoretical results describe the structure of the quadratic subspace, which is shown to contain some of the subspaces previously proposed in the literature for finding differences between the class means and covariances. A suitable assumption of orthogonality between location and dispersion subspaces allows us to derive a convenient reduced version of the full QDA rule. The behavior of these ideas in practice is illustrated with three real data examples.

5.
A new Gaussian graphical modeling approach that is robustified against possible outliers is proposed. The likelihood function is weighted according to how much each observation deviates, where the deviation is measured by the observation's likelihood. Test statistics associated with the robustified estimators are developed, including statistics for the goodness of fit of a model. An outlying score, similar to but more robust than the Mahalanobis distance, is also proposed; the new scores make it easier to identify outlying observations. A Monte Carlo simulation and an analysis of a real data set show that the proposed method works better than ordinary Gaussian graphical modeling and some other robustified multivariate estimators.

6.
Summary  Several approaches to robust canonical correlation analysis are presented and discussed. A first method is based on the definition of canonical correlation analysis as the search for linear combinations of two sets of variables having maximal (robust) correlation. A second method is based on alternating robust regressions. These methods are discussed in detail and compared with the more traditional approach to robust canonical correlation via covariance matrix estimates. A simulation study compares the performance of the different estimators under several kinds of sampling schemes. Robustness is also studied by means of breakdown plots.

7.
In this paper, we propose a new estimate for dimension reduction, called the weighted variance estimate (WVE), which includes the Sliced Average Variance Estimate (SAVE) as a special case. A bootstrap method is used to select the best estimate from the WVE family and to estimate the structural dimension, and the selected estimate usually performs better than existing methods such as Sliced Inverse Regression (SIR) and SAVE. Methods such as SIR and SAVE place the same weight on every observation when estimating the central subspace (CS). By introducing a weight function, WVE places different weights on different observations according to their distance from the CS. This gives WVE very good performance in general and in complicated situations, for example when the distribution of the regressors deviates severely from the elliptical distribution on which many methods, such as SIR, rely; compared with many existing methods, WVE is insensitive to the distribution of the regressors. The consistency of the WVE is established. Simulations comparing WVE with existing methods confirm its advantage. This work was supported by the National Natural Science Foundation of China (Grant No. 10771015).

8.
Recent sufficient dimension reduction methodologies in multivariate regression do not have direct application to a categorical predictor. For this, we define the multivariate central partial mean subspace and propose two methodologies to estimate it. The first method uses the ordinary least squares. Chi-squared distributed statistics for dimension tests are constructed, and an estimate of the target subspace is consistent and efficient. Moreover, the effects of continuous predictors can be tested without assuming any model. The second method extends Iterative Hessian Transformation to this context. For dimension estimation, permutation tests are used. Simulated and real data examples for illustrating various properties of the proposed methods are presented.

9.
We consider the performance of Local Tangent Space Alignment (Zhang & Zha [1]), one of several manifold learning algorithms, which have been proposed as a dimension reduction method. Matrix perturbation theory is applied to obtain a worst-case upper bound on the angle between the computed linear invariant subspace and the linear invariant subspace that is associated with the embedded intrinsic parametrization. Our result is the first performance bound that has been derived.

10.
王正新 《经济数学》2012,29(2):17-20
To address correlation among decision indicators, the Mahalanobis distance is introduced into the traditional TOPSIS method, yielding a Mahalanobis-distance-based TOPSIS. The properties of the resulting closeness coefficient are then analyzed, and the method is illustrated with an investment decision example. The results show that Mahalanobis-distance-based TOPSIS is invariant under non-singular linear transformations of the decision data. Because the covariance matrix captures the correlation among decision indicators, the method effectively shields the decision outcome from the effects of that correlation.
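A minimal sketch of TOPSIS with the usual Euclidean distance replaced by the Mahalanobis distance. The decision matrix, the benefit-only indicator assumption, and the two-indicator setting are hypothetical simplifications, not the paper's investment example:

```python
import math

def inv2(S):
    # explicit inverse of a 2x2 covariance matrix
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]

def mahalanobis(x, y, S_inv):
    d = [xi - yi for xi, yi in zip(x, y)]
    return math.sqrt(sum(d[i] * S_inv[i][j] * d[j]
                         for i in range(2) for j in range(2)))

# hypothetical decision matrix: rows = alternatives, cols = benefit indicators
X = [[0.6, 0.7], [0.9, 0.8], [0.5, 0.9], [0.7, 0.6]]
n = len(X)
mean = [sum(r[j] for r in X) / n for j in range(2)]
S = [[sum((r[i] - mean[i]) * (r[j] - mean[j]) for r in X) / (n - 1)
      for j in range(2)] for i in range(2)]
S_inv = inv2(S)
pos = [max(r[j] for r in X) for j in range(2)]  # positive ideal solution
neg = [min(r[j] for r in X) for j in range(2)]  # negative ideal solution
closeness = []
for x in X:
    dp, dn = mahalanobis(x, pos, S_inv), mahalanobis(x, neg, S_inv)
    closeness.append(dn / (dp + dn))  # relative closeness, larger = better
best = max(range(n), key=lambda i: closeness[i])
```

Since the closeness coefficient is built entirely from Mahalanobis distances, applying any non-singular linear map to the rows of `X` leaves the ranking unchanged, which is the invariance property the abstract highlights.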

11.
In this study, we present an approach based on neural networks, as an alternative to the ordinary least squares method, for describing the relation between dependent and independent variables. Based on real data, a new model containing the month and the number of payments is proposed for determining total claim amounts in insurance, as an alternative, from an insurer's point of view, to the model suggested by Rousseeuw et al. (1984) [Rousseeuw, P., Daniels, B., Leroy, A., 1984. Applying robust regression to insurance. Insurance: Math. Econom. 3, 67–72].

12.
In this paper, we consider a semiparametric modeling with multi-indices when neither the response nor the predictors can be directly observed and there are distortions from some multiplicative factors. In contrast to the existing methods in which the response distortion deteriorates estimation efficacy even for a simple linear model, the dimension reduction technique presented in this paper interestingly does not have to account for distortion of the response variable. The observed response can be used directly whether distortion is present or not. The resulting dimension reduction estimators are shown to be consistent and asymptotically normal. The results can be employed to test whether the central dimension reduction subspace has been estimated appropriately and whether the components in the basis directions in the space are significant. Thus, the method provides an alternative for determining the structural dimension of the subspace and for variable selection. A simulation study is carried out to assess the performance of the proposed method. The analysis of a real dataset demonstrates the potential usefulness of distortion removal.

13.
We consider the GMRES(m,k) method for the solution of linear systems Ax=b, i.e. the restarted GMRES with restart m in which a further subspace of dimension k is added to the standard Krylov subspace of dimension m, resulting in an augmented Krylov subspace. This additional subspace usually approximates an A-invariant subspace. The eigenspaces associated with the eigenvalues closest to zero are commonly used, as those are thought to hinder convergence the most. The behaviour of residual bounds is described for various situations which can arise during the GMRES(m,k) process. The obtained estimates for the norm of the residual vector suggest sufficient conditions for convergence of GMRES(m,k) and illustrate that these augmentation techniques can remove stagnation of GMRES(m) in many cases. All estimates are independent of the choice of an initial approximation. Conclusions and remarks assessing numerically the quality of the proposed bounds conclude the paper. Copyright © 2004 John Wiley & Sons, Ltd.

14.
Position estimation is an important technique for location-based services. Many services and applications, such as navigation assistance, surveillance of patients and social networking, have been developed based on users' positions. Although GPS plays an important role in positioning systems, its signal strength is extremely weak inside buildings, so other sensing devices are necessary to improve the accuracy of indoor localisation. In the past decade, researchers have developed a series of indoor positioning technologies based on the received signal strength (RSS) of WiFi, ZigBee or Bluetooth devices under a wireless sensor network infrastructure. The distance between devices can be computed from their RSS, but the result is unreliable because radio signal interference is considerable and indoor radio propagation is too complicated to model. Using a location fingerprint to estimate a target position is a feasible strategy, because the fingerprint records the characteristics of the signals and signal strength is related to spatial position. This type of algorithm estimates the location of a target by matching online measurements with the closest a priori location fingerprints, so the matching or classification algorithm is a key determinant of accuracy. In this paper, we propose an effective location fingerprinting algorithm based on the general and weighted k-nearest neighbour algorithms to estimate the position of the target node. The grid points are trained with an interval of 2 m, and the estimated position error is about 1.8 m. The proposed method thus has low computational cost and acceptable accuracy.
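The weighted k-nearest-neighbour matching step can be sketched as follows. The fingerprint database, the number of access points, and the RSS values below are synthetic; the paper's 2 m training grid is mimicked only loosely, and the inverse-distance weighting is one common choice rather than necessarily the authors' exact scheme:

```python
import math

def wknn_locate(fingerprints, rss, k=3):
    """Estimate a position by weighted k-NN matching in RSS space.

    fingerprints: list of ((x, y), [rss_1, ..., rss_m]) reference points.
    rss: online RSS measurement from the same m access points.
    """
    scored = []
    for pos, ref in fingerprints:
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, rss)))
        scored.append((d, pos))
    scored.sort(key=lambda t: t[0])
    nearest = scored[:k]
    # inverse-distance weights; epsilon guards against an exact RSS match
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    total = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, nearest)) / total
    y = sum(w * p[1] for w, (_, p) in zip(weights, nearest)) / total
    return x, y

# synthetic reference grid (2 m spacing, three access points, dBm values)
db = [((0, 0), [-40, -70, -80]), ((2, 0), [-50, -60, -78]),
      ((0, 2), [-45, -72, -70]), ((2, 2), [-55, -62, -68])]
est = wknn_locate(db, [-49, -61, -77], k=3)
```

Because the online measurement is compared against recorded signatures rather than a propagation model, the approach sidesteps the difficulty of modelling indoor radio propagation that the abstract describes.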

15.
We consider a multivariate response regression analysis with a vector of predictors. In this article, we develop a modification of principal Hessian directions based on principal components for estimating the central mean subspace without requiring a prespecified parametric model. We use a permutation test suggested by Cook and Yin (Aust New Z J Stat 43:147–199, 2001) for inference about the dimension. Simulation results and one real data example are reported, and comparisons are made with four methods: most predictable variates, k-means inverse regression, the optimal method of Yoo and Cook (Biometrika 94:231–242, 2007) and the canonical correlation approach.

16.
We consider sequential, i.e., Gauss–Seidel type, subspace correction methods for the iterative solution of symmetric positive definite variational problems, where the order of subspace correction steps is not deterministically fixed as in standard multiplicative Schwarz methods. Here, we greedily choose the subspace with the largest (or at least a relatively large) residual norm for the next update step, which is also known as the Gauss–Southwell method. We prove exponential convergence in the energy norm, with a reduction factor per iteration step directly related to the spectral properties, e.g., the condition number, of the underlying space splitting. To avoid the additional computational cost associated with the greedy pick, we alternatively consider choosing the next subspace randomly, and show similar estimates for the expected error reduction. We give some numerical examples, in particular applications to a Toeplitz system and to multilevel discretizations of an elliptic boundary value problem, which illustrate the theoretical estimates.
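The greedy pick can be illustrated with the simplest possible splitting: one-dimensional coordinate subspaces, in which case the Gauss–Southwell method reduces to greedy coordinate relaxation. The matrix and right-hand side below are arbitrary SPD test data, not from the paper:

```python
def gauss_southwell(A, b, steps=50):
    """Greedy one-dimensional subspace correction for an SPD system Ax = b."""
    n = len(b)
    x = [0.0] * n
    for _ in range(steps):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        i = max(range(n), key=lambda k: abs(r[k]))  # largest residual component
        if abs(r[i]) < 1e-12:
            break
        x[i] += r[i] / A[i][i]  # exact correction within span(e_i)
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]  # symmetric positive definite
b = [1.0, 2.0, 3.0]
x = gauss_southwell(A, b)
```

Each step zeroes the currently largest residual component, which is exactly the greedy rule the abstract analyzes; replacing the `max` pick with a random index gives the randomized variant it also considers.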

17.
For a subspace arrangement over a finite field we study the evaluation code defined on the arrangement's set of points. The length of this code is given by the subspace arrangement's characteristic polynomial. For coordinate subspace arrangements the dimension is bounded below by the face vector of the corresponding simplicial complex. The minimum distance is determined for coordinate subspace arrangements in which the simplicial complex is a skeleton. A few examples with high minimum distance and dimension are presented.

18.
A useful application for copula functions is modeling the dynamics in the conditional moments of a time series. Using copulas, one can go beyond the traditional linear ARMA (p,q) modeling, which is solely based on the behavior of the autocorrelation function, and capture the entire dependence structure linking consecutive observations. This type of serial dependence is best represented by a canonical vine decomposition, and we illustrate this idea in the context of emerging stock markets, modeling linear and nonlinear temporal dependences of Brazilian series of realized volatilities. However, the analysis of intraday data collected from e-markets poses some specific challenges. The large amount of real-time information calls for heavy data manipulation, which may result in gross errors. Atypical points in high-frequency intraday transaction prices may contaminate the series of daily realized volatilities, thus affecting classical statistical inference and leading to poor predictions. Therefore, in this paper, we propose to robustly estimate pair-copula models using the weighted minimum distance and the weighted maximum likelihood estimates (WMLE). The excellent performance of these robust estimates for pair-copula models is assessed through a comprehensive set of simulations, from which the WMLE emerged as the best option for members of the elliptical copula family. We evaluate and compare alternative volatility forecasts and show that the robustly estimated canonical vine-based forecasts outperform the competitors. Copyright © 2013 John Wiley & Sons, Ltd.

19.
The method of generalized estimating equations (GEE) introduced by K. Y. Liang and S. L. Zeger has been widely used to analyze longitudinal data. Recently, this method has been criticized for a failure to protect against misspecification of working correlation models, which in some cases leads to loss of efficiency or infeasibility of solutions. In this paper, we present a new method, named 'weighted estimating equations (WEE)', for estimating the correlation parameters; the new estimates are obtained as the solutions of these weighted estimating equations. For some commonly assumed correlation structures, we show that there exists a unique feasible solution to the weighted estimating equations regardless of whether the correlation structure is correctly specified. The new feasible estimates of the correlation parameters are consistent when the working correlation structure is correctly specified. Simulation results suggest that the new method works well in finite samples.
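For context, the conventional moment estimator of an exchangeable working-correlation parameter, the kind of estimate whose possible infeasibility motivates the WEE approach, can be sketched as follows. The residual clusters are synthetic, and this is the classical Liang–Zeger style estimator shown as a baseline, not the paper's weighted version:

```python
def exchangeable_alpha(residuals):
    """Moment estimate of the exchangeable working-correlation parameter.

    residuals: list of clusters, each a list of standardized residuals.
    Averages all within-cluster pairwise products; unlike WEE, nothing
    constrains the result to a feasible correlation value.
    """
    num, count = 0.0, 0
    for cluster in residuals:
        m = len(cluster)
        for i in range(m):
            for j in range(i + 1, m):
                num += cluster[i] * cluster[j]
                count += 1
    return num / count

# synthetic standardized residuals for three clusters of size 3
clusters = [[0.5, 0.6, 0.4], [-0.8, -0.7, -0.9], [0.1, 0.2, 0.0]]
alpha = exchangeable_alpha(clusters)
```

With strongly negative within-cluster products this plain average can fall outside the feasible range for a valid exchangeable correlation matrix, which is the failure mode the weighted estimating equations are designed to rule out.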

20.
M. Meyer  H.G. Matthies 《PAMM》2002,1(1):77-78
In the simulation of fatigue loading of large wind turbines, model reduction, and thus reduction of computing time, is essential to make Monte-Carlo simulations in turbulent wind feasible. We describe the application of two recently proposed methods to increase the accuracy of the reduced model. In most cases only a particular functional of the solution is of interest to the engineer. To select the basis vectors spanning the subspace of the reduced model according to this functional of interest, the dual-weighted-residual method is employed. During the simulation, the neglected basis vectors are used to increase the accuracy of the solution, based on the idea of the nonlinear and post-processed Galerkin methods.

