Sort by: 24 results found (search time: 742 ms)
1.
Principal components analysis is an important and well-studied subject in statistics and signal processing. Several algorithms exist for solving this problem, and most can be grouped into one of three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (such as reconstruction error or output variance), and fixed-point update rules with deflation. In this study, we propose an alternative approach that avoids deflation and gradient-search techniques. The proposed method is an online procedure that recursively updates the eigenvector and eigenvalue matrices with every new sample, so that the estimates approximately track the values that would be calculated analytically from the current sample estimate of the data covariance matrix. The perturbation technique is shown theoretically to be applicable to recursive canonical correlation analysis as well. The performance of this algorithm is compared with that of a structurally similar matrix-perturbation-based method and with traditional methods such as Sanger's rule and APEX.
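The recursive idea can be sketched with a first-order eigen-perturbation update for the rank-one covariance change introduced by each new sample. This is a minimal illustration of the general approach, not the paper's exact recursion; the forgetting factor `alpha` and the QR re-orthonormalization step are assumptions.

```python
import numpy as np

def perturbation_pca_step(V, d, x, alpha=0.01):
    """One recursive PCA step. Columns of V and entries of d hold the current
    eigenvector and eigenvalue estimates of the data covariance; x is the new
    sample. The rank-one update C <- (1 - alpha) C + alpha x x^T is absorbed
    via first-order matrix perturbation theory instead of a full
    re-eigendecomposition."""
    y = V.T @ x                                   # sample in the current eigenbasis
    d_new = (1 - alpha) * d + alpha * y**2        # first-order eigenvalue update
    V_new = V.copy()
    for i in range(len(d)):
        for j in range(len(d)):
            if i != j:                            # first-order eigenvector correction
                V_new[:, i] += (alpha * y[i] * y[j] / (d[i] - d[j])) * V[:, j]
    Q, _ = np.linalg.qr(V_new)                    # re-orthonormalize to control drift
    return Q, d_new
```

Starting from the eigendecomposition of a small initial batch, repeated application of this step tracks the leading eigenstructure of the exponentially weighted sample covariance.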
2.
In pattern recognition, a suitable criterion for feature selection is the mutual information (MI) between feature vectors and class labels. Estimating MI in high-dimensional feature spaces is problematic in terms of both computational load and accuracy. We propose an independent component analysis based MI estimation (ICA-MI) methodology for feature selection, which reduces the high-dimensional MI estimation problem to multiple one-dimensional MI estimation problems. The nonlinear ICA transformation is achieved using piecewise local linear approximation on partitions of the feature space, which allows the exploitation of the additivity property of entropy and the simplicity of linear ICA algorithms. The number of partitions controls the trade-off between a more accurate approximation of the nonlinear data topology and small-sample statistical variation in estimation. We test the ICA-MI feature selection framework on synthetic, UCI repository, and EEG activity classification problems. The experiments demonstrate, as expected, that the choice of the number of partitions for local linear ICA is highly problem dependent and must be made through cross-validation. When this is done properly, the proposed ICA-MI framework yields feature rankings comparable to the optimal probability-of-error-based feature ranking and selection strategy, at a much lower computational load.
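After the (local) linear ICA rotation, the high-dimensional estimate reduces to a sum of one-dimensional feature–label MI terms. A minimal histogram-based 1-D MI estimator, as a hedged sketch (the quantile binning scheme is an assumption, not the paper's exact estimator):

```python
import numpy as np

def mi_feature_label(f, labels, bins=10):
    """Plug-in histogram estimate of I(feature; label) for one scalar feature f."""
    edges = np.quantile(f, np.linspace(0, 1, bins + 1))      # equal-mass bins
    idx = np.clip(np.searchsorted(edges, f, side="right") - 1, 0, bins - 1)
    mi = 0.0
    for c in np.unique(labels):
        for b in range(bins):
            p_xy = np.mean((labels == c) & (idx == b))       # joint cell probability
            if p_xy > 0:
                p_x = np.mean(idx == b)
                p_y = np.mean(labels == c)
                mi += p_xy * np.log(p_xy / (p_x * p_y))      # KL term, in nats
    return mi
```

Features would then be ranked by their estimated MI with the class label, summed over the local linear ICA components.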
3.
This paper investigates the application of error-entropy minimization algorithms to digital communications channel equalization. The pdf of the error between the training sequence and the output of the equalizer is estimated using Parzen windowing with a Gaussian kernel, and Renyi's quadratic entropy is then minimized using a gradient descent algorithm. By estimating Renyi's entropy over a short sliding window, an online training algorithm is also introduced. Moreover, for a linear equalizer, an orthogonality condition for the minimum-entropy solution is derived that leads to an alternative fixed-point iterative minimization method. The performance of linear and nonlinear equalizers trained with entropy and with mean square error (MSE) is compared. As expected, the results of training a linear equalizer are very similar for both criteria since, even if the input noise is non-Gaussian, the output filtered noise tends to be Gaussian. For nonlinear channels, however, with a multilayer perceptron (MLP) as the equalizer, differences between the two criteria appear: the additional information used by the entropy criterion yields faster convergence than the MSE.
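The quantity being minimized has a closed pairwise form: with a Gaussian kernel of width sigma, the Parzen estimate of Renyi's quadratic entropy of the error samples is H2 = -log((1/N²) Σᵢⱼ G(eᵢ - eⱼ; 2σ²)). A minimal sketch (the kernel width value is an assumption):

```python
import numpy as np

def renyi_quadratic_entropy(e, sigma=0.5):
    """Parzen/Gaussian estimate of Renyi's quadratic entropy of error samples e.
    Uses the identity that the integral of the squared Parzen density is the
    mean of pairwise Gaussian kernels with doubled variance 2*sigma^2."""
    d = e[:, None] - e[None, :]                  # all pairwise error differences
    two_s2 = 2.0 * sigma**2                      # variance of the doubled kernel
    ip = np.mean(np.exp(-d**2 / (2 * two_s2)) / np.sqrt(2 * np.pi * two_s2))
    return -np.log(ip)                           # H2 = -log(information potential)
```

Training then amounts to descending the gradient of this quantity with respect to the equalizer weights; sharply concentrated errors give low entropy.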
4.
Multivariate density estimation is an important problem that is frequently encountered in statistical learning and signal processing. One of the most popular techniques is Parzen windowing, also referred to as kernel density estimation. Gaussianization is a procedure that allows one to estimate multivariate densities efficiently from the marginal densities of the individual random variables. In this paper, we present an optimal density estimation scheme that combines the desirable properties of Parzen windowing and Gaussianization, using minimum Kullback–Leibler divergence as the optimality criterion for selecting the kernel size in the Parzen windowing step. The utility of the estimate is illustrated in classifier design, independent components analysis, and Price's theorem.
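Minimizing KL divergence to the underlying density is, up to a constant, equivalent to maximizing the expected log-likelihood of the model, and a standard sample surrogate for that is the leave-one-out log-likelihood. A minimal 1-D sketch of this surrogate for kernel-size selection (a common stand-in, not necessarily the paper's exact procedure):

```python
import numpy as np

def loo_log_likelihood(x, sigma):
    """Leave-one-out Parzen log-likelihood of 1-D samples x for kernel size sigma."""
    n = len(x)
    d2 = (x[:, None] - x[None, :]) ** 2
    K = np.exp(-d2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    np.fill_diagonal(K, 0.0)             # leave each point out of its own estimate
    p_loo = K.sum(axis=1) / (n - 1)
    return np.log(p_loo + 1e-300).sum()  # tiny epsilon guards against underflow

# pick the kernel size maximizing the surrogate on a grid
x = np.random.default_rng(0).normal(size=400)
sigmas = np.linspace(0.05, 2.0, 40)
best_sigma = sigmas[np.argmax([loo_log_likelihood(x, s) for s in sigmas])]
```

For 400 standard-normal samples the selected bandwidth lands near the classical rule-of-thumb value of roughly 0.3.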
5.
Estimation of the mixture coefficients of protein conformations in solution finds applications in understanding protein behavior. We describe a method for maximum a posteriori (MAP) estimation of the mixture coefficients of an ensemble of conformations in a protein mixture solution using measured small-angle X-ray scattering (SAXS) intensities. The proposed method builds upon a measurement model for crystallographically determined conformations. Assuming that prior information on the protein mixture is available and follows a Dirichlet distribution, we develop a MAP estimator of the relative abundances. The Dirichlet distribution depends on concentration parameters that may not be known in practice and thus need to be estimated; for these unknown concentration parameters we develop an expectation-maximization (EM) method. Adenylate kinase (ADK) was selected as the test bed because its conformations are known (Beckstein et al., Journal of Molecular Biology, 394(1), 160). The known conformations are assumed to form a full basis that spans the measurement space. In Monte Carlo simulations, the mixture-coefficient estimation performance of the MAP estimator is compared with that of the maximum likelihood (ML) estimator, which corresponds to a uniform prior on the mixture coefficients. MAP estimators using known and unknown concentration parameters are also compared. The results show that prior knowledge improves estimation accuracy, but performance is sensitive to perturbations in the Dirichlet distribution's concentration parameters. Moreover, the EM-based estimation method yields results comparable to those obtained with approximately known prior parameters.
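For two conformations the idea reduces to a one-dimensional posterior that can be maximized on a grid. A hedged sketch under an assumed linear model y = t·a1 + (1-t)·a2 + Gaussian noise with a Dirichlet(alpha) prior on (t, 1-t); the vectors `a1` and `a2` stand in for the SAXS intensity profiles of the two known conformations:

```python
import numpy as np

def map_mixture_2(y, a1, a2, alpha=(2.0, 2.0), noise_var=1.0, grid=1001):
    """MAP estimate of the mixture weight t in y = t*a1 + (1-t)*a2 + noise,
    with a Dirichlet(alpha) prior on the weight vector (t, 1-t)."""
    t = np.linspace(1e-6, 1 - 1e-6, grid)        # stay inside the open simplex
    log_post = np.empty(grid)
    for i, ti in enumerate(t):
        r = y - (ti * a1 + (1 - ti) * a2)        # measurement residual
        log_lik = -0.5 * np.dot(r, r) / noise_var
        log_prior = (alpha[0] - 1) * np.log(ti) + (alpha[1] - 1) * np.log(1 - ti)
        log_post[i] = log_lik + log_prior
    return t[np.argmax(log_post)]
```

With more conformations the same log-posterior is maximized over the full simplex instead of a 1-D grid, and the EM step of the paper would additionally update the unknown concentration parameters.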
6.
We propose the information regularization principle for fusing information from sets of identical sensors observing a target phenomenon. The principle amounts to an importance-weighting scheme for each sensor measurement, based on a pairwise statistical similarity matrix between sensors computed from mutual information. The principle is applied to maximum likelihood estimation and to particle-filter-based state estimation, and is demonstrated in centralized data fusion of dense motion-detector networks for target tracking. Simulations confirm that information regularization significantly improves the localization accuracy of both the maximum likelihood and particle filter approaches over their baseline implementations. Outlier detection and sensor-failure detection capabilities, as well as possible extensions of the principle to decentralized sensor fusion under communication constraints, are briefly discussed.
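A hypothetical reading of the weighting scheme in the simplest scalar setting: build a pairwise similarity matrix between sensor readings (here a Gaussian affinity stands in for the MI-based similarity of the paper) and weight each sensor by its normalized row sum, so sensors that agree with many others dominate the fused estimate.

```python
import numpy as np

def info_weighted_estimate(z, tau=1.0):
    """Fuse scalar sensor readings z: weight each sensor by how similar it is
    to the others (normalized row sums of a pairwise affinity matrix)."""
    S = np.exp(-(z[:, None] - z[None, :]) ** 2 / (2 * tau**2))  # pairwise affinity
    w = S.sum(axis=1)                                           # per-sensor importance
    return np.dot(w / w.sum(), z)                               # weighted fusion
```

An outlying or failed sensor disagrees with the rest, receives a near-zero row sum, and is effectively excluded, which is the outlier/failure-detection behavior mentioned above.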
Umut Ozertem
7.
Error whitening criterion for adaptive filtering: theory and algorithms (cited 3 times: 0 self-citations, 3 by others)
Mean squared error (MSE) has been the dominant criterion in adaptive filter theory. A major drawback of the MSE criterion in linear filter adaptation is the parameter bias in the Wiener solution when the input data are contaminated with noise. We propose and analyze a new augmented MSE criterion, the error whitening criterion (EWC), which eliminates this bias when the noise is white. We determine the analytical solution of the EWC, discuss some of its interesting properties, and develop stochastic-gradient and other fast algorithms that compute the EWC solution online. The stochastic algorithms are locally computable and have structures and complexities similar to their MSE-based counterparts (LMS and NLMS). Convergence of the stochastic gradient algorithm is established under mild assumptions, and upper bounds on the step sizes are deduced for guaranteed convergence. We also briefly discuss an RLS-like recursive error whitening (REW) algorithm and a minor-components-analysis (MCA) based EWC total-least-squares (TLS) algorithm, and draw parallels between the REW algorithm and the instrumental variables (IV) method for system identification. Finally, we demonstrate the noise-rejection capability of the EWC by comparing its performance with the MSE criterion and TLS.
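The bias that motivates the criterion is easy to exhibit numerically: when the regressors are observed in white noise, the MSE (Wiener) solution is systematically shrunk toward zero, here by the factor 1/(1 + σ_n²) because the clean input is white. This sketch illustrates the problem EWC addresses, not the EWC algorithm itself; the system coefficients and noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
w_true = np.array([1.0, -0.5, 0.25, 2.0])      # true system parameters
X = rng.normal(size=(n, 4))                    # clean (unobserved) white input
d = X @ w_true                                 # desired signal
Xn = X + rng.normal(scale=0.7, size=X.shape)   # input contaminated with white noise
w_mse = np.linalg.lstsq(Xn, d, rcond=None)[0]  # MSE/Wiener solution from noisy input
# Expected value of w_mse is w_true / (1 + 0.7**2) = w_true / 1.49: biased toward zero.
```

With clean inputs the same least-squares fit recovers `w_true` exactly; the shrinkage appears only because the input, not the desired signal, is noisy.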
8.
Recent advances in computing capabilities, together with interest in new, challenging signal processing problems that cannot be solved successfully with traditional techniques, have sparked interest in information-theoretic signal processing. Adaptive nonlinear filters that process signals based on their information content have become a major focus of interest. This paper demonstrates the design and analysis of such nonlinear information processing systems: the necessary information-theoretic background is provided, nonparametric sample estimators for these quantities are derived and discussed, and the use of these estimators is illustrated on various statistical signal processing problems, including data density modeling, system identification, blind source separation, dimensionality reduction, image registration, and data clustering.
9.
An important problem in the field of blind source separation (BSS) of real convolutive mixtures is determining the extent to which the demixing filter structure and the criterion/optimization method limit separation performance. Addressing this issue requires knowing the optimal performance for a given structure, which is unknown for real mixtures. Here, the authors introduce an experimental upper bound on separation performance for a class of convolutive BSS structures, which can be used to approximate the optimal performance. As opposed to a theoretical upper bound, the experimental upper bound produces an estimate of the optimal separating parameters for each dataset in addition to specifying an upper bound on separation performance. Estimating the bound involves applying a supervised learning method to the observations obtained by recording the sources one at a time. Using the upper bound, it is demonstrated that structures other than the finite-impulse-response (FIR) structure should be considered for real (convolutive) mixtures, that there is still much room for improvement in current convolutive BSS algorithms, and that the separation performance of these algorithms is not necessarily limited by local minima.
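The estimation step behind the bound can be sketched: when the sources are available one at a time, a supervised FIR demixer can be fitted to the observed mixtures by least squares, and its signal-to-residual ratio is the best the chosen FIR structure can do on that data. A minimal synthetic sketch (the 2×2 FIR channel taps and the demixer length are assumptions):

```python
import numpy as np

def lagged(X, L):
    """Stack L delays of every observation channel as least-squares regressors."""
    n, ch = X.shape
    Z = np.stack([np.roll(X[:, c], k) for c in range(ch) for k in range(L)], axis=1)
    Z[:L, :] = 0.0                      # remove wrap-around introduced by np.roll
    return Z

rng = np.random.default_rng(1)
n = 5000
s = rng.laplace(size=(n, 2))            # two independent sources
h = {(0, 0): [1.0, 0.5], (0, 1): [0.6, -0.3],   # hypothetical 2x2 FIR channel
     (1, 0): [0.4, 0.2], (1, 1): [1.0, -0.4]}
x = np.zeros((n, 2))
for (i, j), taps in h.items():
    x[:, i] += np.convolve(s[:, j], taps)[:n]
# supervised "upper bound" fit: best FIR demixer for source 0, trained on the known source
Z = lagged(x, L=8)
w, *_ = np.linalg.lstsq(Z, s[:, 0], rcond=None)
y = Z @ w
sir_db = 10 * np.log10(np.sum(s[:, 0] ** 2) / np.sum((y - s[:, 0]) ** 2))
```

Any blind algorithm constrained to the same length-8 FIR structure cannot exceed `sir_db` on this data, which is the sense in which the bound is experimental rather than theoretical.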
10.

Machine learning research related to the derivatives of the kernel density estimator has received limited attention compared to the density estimator itself, despite the general consensus that most of the important features of a data distribution, such as modes, curvature, and even cluster structure, are characterized by its derivatives. In this paper we present a computationally efficient algorithm to calculate kernel density estimates and their derivatives for linearly separable kernels, with significant savings especially for high-dimensional data and higher-order derivatives. It significantly reduces the number of operations (multiplications and derivative evaluations) needed to calculate the estimates, while keeping the results exact (no approximations are involved). The main idea is that the calculation of multivariate separable kernels and their derivatives, such as the gradient vector and the Hessian matrix, involves a significant number of redundant operations that can be eliminated using the chain rule. A tree-based algorithm that calculates the exact kernel density estimate and its derivatives in the most efficient fashion is presented, with particular focus on optimizing kernel evaluations for individual data pairs. In contrast, most approaches in the literature resort to function approximations or downsampling. The overall computational savings of the presented method could be increased further by incorporating such approximations, which aim to reduce the number of data pairs considered. The theoretical computational complexities of the tree-based method and of direct methods that perform all multiplications are compared. The experiments consider calculating separable kernels and their derivatives, as well as a measure, employing first and second derivatives, of how close a point is to the principal curve of a density. The results indicate considerable improvement in computational complexity, and hence runtime, over the direct approach.
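The chain-rule saving can be sketched for a separable Gaussian kernel: once the joint kernel value for a data pair is known, every partial derivative reuses it, costing one extra multiply per dimension instead of a fresh kernel evaluation. A minimal dense (non-tree) sketch of this reuse:

```python
import numpy as np

def kde_and_gradient(x, data, sigma=0.5):
    """KDE value and gradient at point x for a separable Gaussian kernel,
    reusing each joint kernel evaluation for all partial derivatives."""
    u = (x[None, :] - data) / sigma                           # (N, D) scaled differences
    k1 = np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * sigma)   # per-dimension kernels
    K = k1.prod(axis=1)                                       # joint kernel per data point
    p = K.mean()                                              # density estimate
    # chain rule: d/dx_d prod_j k(u_j) = K * (-u_d / sigma), so K is reused D times
    grad = (K[:, None] * (-u / sigma)).mean(axis=0)
    return p, grad
```

The tree-based algorithm of the paper organizes these pairwise evaluations to eliminate further redundancy; the reuse of `K` across dimensions shown here is the core chain-rule idea.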

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号