Similar Documents
20 similar documents found (search time: 375 ms)
1.
Finding predictive gene groups from microarray data (Cited by: 1; self-citations: 0; other citations: 1)
Microarray experiments generate large datasets with expression values for thousands of genes, but no more than a few dozen samples. A challenging task with these data is to reveal groups of genes which act together and whose collective expression is strongly associated with an outcome variable of interest. To find these groups, we suggest the use of supervised algorithms: these are procedures which use external information about the response variable for grouping the genes. We present Pelora, an algorithm based on penalized logistic regression analysis, that combines gene selection, gene grouping and sample classification in a supervised, simultaneous way. With an empirical study on six different microarray datasets, we show that Pelora identifies gene groups whose expression centroids have very good predictive potential and yield results that can keep up with state-of-the-art classification methods based on single genes. Thus, our gene groups can be beneficial in medical diagnostics and prognostics, but they may also provide more biological insights into gene function and regulation.
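The centroid idea behind this kind of supervised grouping can be illustrated with a toy sketch. This is a generic greedy stand-in, not the actual Pelora algorithm: the simulated data, the group-growing rule and all parameters below are invented for illustration.

```python
# Toy sketch of supervised gene grouping: classify samples from the
# centroid (mean expression) of a candidate gene group with penalized
# logistic regression, growing the group greedily.  NOT the actual
# Pelora algorithm -- just an illustration of the centroid idea.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 60, 100                        # few samples, many genes
y = rng.integers(0, 2, n)             # binary clinical outcome
X = rng.normal(size=(n, p))
X[:, :5] += 1.5 * y[:, None]          # genes 0..4 carry the signal

def cv_score(genes):
    """Cross-validated accuracy of the group's expression centroid."""
    centroid = X[:, genes].mean(axis=1, keepdims=True)
    return cross_val_score(LogisticRegression(C=1.0), centroid, y, cv=5).mean()

# Seed with the gene most correlated with the outcome, then add genes
# only while the penalized fit of the centroid improves.
group = [int(np.argmax(np.abs(np.corrcoef(X.T, y)[-1, :-1])))]
for _ in range(3):
    best = max((g for g in range(p) if g not in group),
               key=lambda g: cv_score(group + [g]))
    if cv_score(group + [best]) <= cv_score(group):
        break
    group.append(best)
print(group)
```

With data simulated this way, the seed gene and its greedily added partners tend to come from the informative block, and the group centroid predicts the outcome well despite p being larger than n.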

2.
In this work, we assess the suitability of cluster analysis for the gene grouping problem confronted with microarray data. Gene clustering is the exercise of grouping genes based on attributes, which are generally the expression levels over a number of conditions or subpopulations. The hope is that similarity with respect to expression is often indicative of similarity with respect to much more fundamental and elusive qualities, such as function. By formally defining the true gene-specific attributes as parameters, such as expected expression across the conditions, we obtain a well-defined gene clustering parameter of interest, which greatly facilitates the statistical treatment of gene clustering. We point out that genome-wide collections of expression trajectories often lack natural clustering structure, prior to ad hoc gene filtering. The gene filters in common use induce a certain circularity to most gene cluster analyses: genes are points in the attribute space, a filter is applied to depopulate certain areas of the space, and then clusters are sought (and often found!) in the “cleaned” attribute space. As a result, statistical investigations of cluster number and clustering strength are just as much a study of the stringency and nature of the filter as they are of any biological gene clusters. In the absence of natural clusters, gene clustering may still be a worthwhile exercise in data segmentation. In this context, partitions can be fruitfully encoded in adjacency matrices and the sampling distribution of such matrices can be studied with a variety of bootstrapping techniques.
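The adjacency-matrix encoding mentioned at the end can be sketched as follows. This is a generic co-membership/bootstrap illustration on simulated data, not the authors' specific methodology; the clustering method, sample sizes and agreement measure are arbitrary choices.

```python
# Encode a partition as a co-membership (adjacency) matrix and use a
# simple bootstrap to study its stability.  Illustrative sketch only.
import numpy as np
from sklearn.cluster import KMeans

def comembership(labels):
    """A[i, j] = 1 iff items i and j fall in the same cluster."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(int)

rng = np.random.default_rng(1)
# Two well-separated "expression" groups of 30 genes each.
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(4, 1, (30, 5))])
base = comembership(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))

# Resample genes, recluster, and measure agreement of the resampled
# co-membership matrix with the original one on the drawn pairs.
agree = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))
    lab = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[idx])
    agree.append(float((comembership(lab) == base[np.ix_(idx, idx)]).mean()))
print(round(sum(agree) / len(agree), 3))
```

For well-separated groups like these, the bootstrap co-membership matrices agree almost perfectly with the original partition; for data without natural clusters, the agreement distribution spreads out, which is exactly what such a stability analysis is meant to reveal.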

3.
Clustering is one of the most widely used procedures in the analysis of microarray data, for example with the goal of discovering cancer subtypes based on observed heterogeneity of genetic marks between different tissues. It is well known that in such high-dimensional settings, the existence of many noise variables can overwhelm the few signals embedded in the high-dimensional space. We propose a novel Bayesian approach based on a Dirichlet process with a sparsity prior that simultaneously performs variable selection and clustering, and also discovers variables that distinguish only a subset of the cluster components. Unlike previous Bayesian formulations, we use the Dirichlet process (DP) both for clustering of samples and for regularizing the high-dimensional mean/variance structure. To address the computational challenge brought by this double use of the DP, we propose a sequential sampling scheme embedded within Markov chain Monte Carlo (MCMC) updates to improve on the naive implementation of existing algorithms for DP mixture models. Our method is demonstrated on a simulation study and illustrated with the leukemia gene expression dataset.
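The DP-clustering component can be loosely illustrated with scikit-learn's truncated Dirichlet-process mixture. This is an off-the-shelf stand-in, not the authors' sparsity-prior model or their sequential MCMC scheme; the data and the 0.05 weight threshold are invented for illustration.

```python
# Rough illustration of DP-based clustering: a truncated stick-breaking
# (Dirichlet process) Gaussian mixture prunes unneeded components, so
# the number of clusters need not be fixed in advance.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])

bgm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
    max_iter=500,
).fit(X)
used = int(np.sum(bgm.weights_ > 0.05))                # occupied clusters
print(used)
```

Although ten components are allowed, the stick-breaking prior drives the weights of unneeded components toward zero, leaving roughly the two that the data actually support.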

4.
5.
We discuss the theoretical structure and constructive methodology for large-scale graphical models, motivated by their potential in evaluating and aiding the exploration of patterns of association in gene expression data. The theoretical discussion covers basic ideas and connections between Gaussian graphical models, dependency networks and specific classes of directed acyclic graphs we refer to as compositional networks. We describe a constructive approach to generating interesting graphical models for very high-dimensional distributions that builds on the relationships between these various stylized graphical representations. Issues of consistency of models and priors across dimension are key. The resulting methods are of value in evaluating patterns of association in large-scale gene expression data with a view to generating biological insights about genes related to a known molecular pathway or set of specified genes. Some initial examples relate to the estrogen receptor pathway in breast cancer, and the Rb-E2F cell proliferation control pathway.

6.
Hierarchical and empirical Bayes approaches to inference are attractive for data arising from microarray gene expression studies because of their ability to borrow strength across genes in making inferences. Here we focus on the simplest case, where we have data from replicated two-colour arrays comparing two samples and wish to decide which genes are differentially expressed while obtaining estimates of operating characteristics such as false discovery rates. The purpose of this paper is to examine the frequentist performance of Bayesian variable selection approaches to this problem for different prior specifications, and to examine the effect on inference of commonly used empirical Bayes approximations to hierarchical Bayes procedures. The paper makes three main contributions. First, we describe how the log odds of differential expression can usually be computed analytically when a double-tailed exponential prior is used for gene effects rather than a normal prior, giving an alternative to the commonly used B-statistic for ranking genes in simple comparative experiments. Second, we compare empirical Bayes procedures for detecting differential expression with hierarchical Bayes methods that account for uncertainty in prior hyperparameters, to examine how much is lost in using the commonly employed empirical Bayes approximations. Third, we describe an efficient MCMC scheme for carrying out the computations required for the hierarchical Bayes procedures. Comparisons are made via simulation studies in which the simulated data are obtained by fitting models to some real microarray data sets.
The results have implications for analysis of microarray data using parametric hierarchical and empirical Bayes methods for more complex experimental designs: generally we find that the empirical Bayes methods work well, which supports their use in the analysis of more complex experiments when a full hierarchical Bayes analysis would impose heavy computational demands.

7.
We formulate a discrete optimization problem that leads to a simple and informative derivation of a widely used class of spectral clustering algorithms. Regarding the algorithms as attempting to bi-partition a weighted graph with N vertices, our derivation indicates that they are inherently tuned to tolerate all partitions into two non-empty sets, independently of the cardinality of the two sets. This approach also helps to explain the difference in behaviour observed between methods based on the unnormalized and normalized graph Laplacian. We also give a direct explanation of why Laplacian eigenvectors beyond the Fiedler vector may contain fine-detail information of relevance to clustering. We show numerical results on synthetic data to support the analysis. Further, we provide examples where normalized and unnormalized spectral clustering is applied to microarray data; here the graph summarizes similarity of gene activity across different tissue samples, and accurate clustering of samples is a key task in bioinformatics.
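The bi-partitioning step the abstract analyses can be sketched in a few lines. A minimal, generic illustration (the Gaussian edge weights and toy data are invented, not taken from the paper):

```python
# Spectral bi-partitioning: form the unnormalized graph Laplacian
# L = D - W and split on the sign of the Fiedler vector, i.e. the
# eigenvector belonging to the second-smallest eigenvalue.
import numpy as np

rng = np.random.default_rng(3)
# Two well-separated 1-d blobs; edge weights decay with distance.
x = np.concatenate([rng.normal(0, 0.3, 10), rng.normal(5, 0.3, 10)])
W = np.exp(-(x[:, None] - x[None, :]) ** 2)
np.fill_diagonal(W, 0)

L = np.diag(W.sum(axis=1)) - W        # unnormalized Laplacian
_, eigvecs = np.linalg.eigh(L)        # eigenvalues in ascending order
fiedler = eigvecs[:, 1]
labels = (fiedler > 0).astype(int)
print(labels)
```

Because cross-blob weights are essentially zero, the Fiedler vector is nearly piecewise constant on the two blobs, and its sign recovers the partition; eigenvectors beyond the Fiedler vector would encode finer sub-structure.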

8.
Statistical modeling is an important area of biomarker research for identifying important genes for new drug targets, drug candidate validation, disease diagnosis, personalized treatment, and prediction of the clinical outcome of a treatment. A widely adopted technology is the use of microarray data, which are typically very high dimensional. After screening chromosomes for relevant genes using methods such as quantitative trait locus mapping, there may still be a few thousand genes related to the clinical outcome of interest. On the other hand, the sample size (the number of subjects) in a clinical study is typically much smaller. Under the assumption that only a few important genes are actually related to the clinical outcome, we propose a variable screening procedure to eliminate genes having negligible effects on the clinical outcome. Once the dimension of the microarray data is reduced to a manageable number relative to the sample size, one can select a final set of genes via a well-known variable selection method such as cross-validation. We establish the asymptotic consistency of the proposed variable screening procedure. Some simulation results are also presented.
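A generic sketch of the screening idea (sure-independence-style marginal screening, not necessarily the authors' exact procedure): rank genes by absolute marginal correlation with the outcome and keep the top d, reducing p to a manageable size before any final variable selection. The simulated data and the screening size d = n / log n are illustrative conventions, not the paper's.

```python
# Marginal variable screening: keep the d genes whose expression is
# most correlated with the clinical outcome, discarding the rest.
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 1000
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 3.0                          # 5 truly relevant genes
y = X @ beta + rng.normal(size=n)

# Absolute sample correlation of each gene with the outcome.
corr = np.abs((X - X.mean(0)).T @ (y - y.mean())) / (n * X.std(0) * y.std())
d = int(n / np.log(n))                  # a common screening size
kept = np.argsort(corr)[::-1][:d]
print(len(kept), sorted(int(g) for g in kept if g < 5))
```

After screening, p = 1000 candidate genes shrink to d ≈ 21, and a standard selection method can be run on the survivors.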

9.
Clustering of securities investment funds based on performance persistence: an empirical study (Cited by: 1; self-citations: 0; other citations: 1)
Most current fund classification methods are subjective ones based on the nature and characteristics of the funds, and do not reflect how the fund assets actually perform. Taking a clustering approach grounded in fund performance, this paper proposes a new classification method from the perspective of performance persistence: fund clustering based on performance persistence. Through a study of fund performance persistence, a performance-persistence index is constructed and used to cluster the sample funds. Empirical results show that the proposed classification method is feasible and effective.

10.
In this paper we consider some iterative estimation algorithms for the analysis of variance of data that may be either non-grouped or grouped with different classification intervals. This situation arises, for instance, when data are collected from different sources and the grouping intervals differ from one source to another. The analysis of variance is carried out by means of general linear models, whose error terms may be general. The paper opens with an initial procedure in the spirit of the EM algorithm, although not necessarily identical to it, which gives rise to a simplified version that avoids the double iteration implicit in the EM and in the initial procedure. The asymptotic stochastic properties of the resulting estimates are investigated in depth and used to test ANOVA hypotheses.

11.
Clustering is often useful for analyzing and summarizing information within large datasets. Model-based clustering methods have been found to be effective for determining the number of clusters, dealing with outliers, and selecting the best clustering method in datasets that are small to moderate in size. For large datasets, current model-based clustering methods tend to be limited by memory and time requirements and the increasing difficulty of maximum likelihood estimation. They may fit too many clusters in some portions of the data and/or miss clusters containing relatively few observations. We propose an incremental approach for data that can be processed as a whole in memory, which is relatively efficient computationally and has the ability to find small clusters in large datasets. The method starts by drawing a random sample of the data, selecting and fitting a clustering model to the sample, and extending the model to the full dataset by additional EM iterations. New clusters are then added incrementally, initialized with the observations that are poorly fit by the current model. We demonstrate the effectiveness of this method by applying it to simulated data, and to image data where its performance can be assessed visually.

12.
Deviations from theoretical assumptions, together with the presence of a certain amount of outlying observations, are common in many practical statistical applications. This is also the case when applying Cluster Analysis methods, where these problems can lead to unsatisfactory clustering results. Robust Clustering methods aim to avoid such unsatisfactory results. Moreover, there exist certain connections between robust procedures and Cluster Analysis that make Robust Clustering an appealing unifying framework. A review of different robust clustering approaches in the literature is presented. Special attention is paid to methods based on trimming, which try to discard the most outlying data when carrying out the clustering process.
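The trimming idea the review emphasizes can be sketched in toy form (a generic trimmed k-means, with invented data and an arbitrary trimming fraction alpha; real implementations such as tclust are considerably more careful):

```python
# Minimal trimmed k-means sketch: at each iteration, discard the
# fraction alpha of points farthest from their nearest centre before
# updating the centres, so gross outliers cannot drag the centres.
import numpy as np

def trimmed_kmeans(X, k, alpha=0.1, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    keep_n = int(round((1 - alpha) * len(X)))
    keep = np.arange(len(X))
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        nearest, dist = d.argmin(axis=1), d.min(axis=1)
        keep = np.argsort(dist)[:keep_n]          # trim the outliers
        for j in range(k):
            pts = X[keep][nearest[keep] == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, keep

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.5, (40, 2)),
               rng.normal(5, 0.5, (40, 2)),
               rng.uniform(-20, 20, (8, 2))])     # gross outliers
centers, keep = trimmed_kmeans(X, k=2, alpha=0.1)
print(np.round(centers, 1))
```

Because the most distant observations are excluded from each centre update, the outliers land in the trimmed set rather than distorting the cluster centres, which is the robustness behaviour trimming-based methods are designed for.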

13.
Two-population classification of gene expression data based on Bayesian statistical methods (Cited by: 1; self-citations: 0; other citations: 1)
In disease diagnosis, accurate classification of the disease is a crucial step toward improving diagnostic accuracy and cure rates, and DNA microarray technology lets us obtain, at the microscopic level, gene-function information closely related to disease classification and diagnosis. However, the gene expression data produced by DNA microarrays have many variables but small sample sizes, which makes classification highly unstable. We therefore first screen out the genes whose expression patterns change significantly, forming a feature gene set that reduces the number of variables, and then build a classifier on this feature set to classify the samples. In this paper we use a likelihood ratio test to select feature genes, build a statistical classification model based on Bayesian methods, and compute the posterior probabilities of sample class membership by Markov chain Monte Carlo (MCMC) sampling. Finally, we apply the model to two real DNA microarray datasets and successfully classify the samples.

14.
In data science, data are often represented using an undirected graph in which vertices represent objects and edges describe relationships between pairs of objects. In many applications there can be many relations, arising from different sources and/or different types of models, so clustering of multiple undirected graphs over the same set of vertices is of interest. Existing clustering methods for multiple graphs involve costly optimization and/or tensor computation. In this paper, we study block spectral clustering methods for these multiple graphs. The main contribution of this paper is to propose and construct block Laplacian matrices for clustering of multiple graphs. We present a novel variant of the Laplacian matrix called the block intra-normalized Laplacian and prove the conditions required for zero eigenvalues in this variant. We also show that eigenvectors of the constructed block Laplacian matrix are solutions of the relaxation of multiple-graph cut problems, and we establish lower and upper bounds on the optimal solutions of these cut problems. Experimental results demonstrate that the clustering accuracy and the computational time of the proposed method are better than those of the tested clustering methods for multiple graphs.

15.
Clustering is a popular data analysis and data mining technique. Since the clustering problem is NP-complete, the larger the size of the problem, the harder it is to find the optimal solution and the longer it takes to reach reasonable results. A popular clustering technique is based on K-means, in which the data are partitioned into K clusters. In this method, the number of clusters is predefined, and the technique is highly dependent on the initial identification of elements that represent the clusters well. A large body of research in clustering has focused on improving the clustering process so that the clusters are not dependent on the initial identification of cluster representatives. Another problem in clustering is getting trapped in local minima. Although methods such as K-Harmonic means clustering solve the initialization problem, trapping in local minima remains an issue. In this paper we develop a new algorithm to address this problem based on a tabu search technique: Tabu K-Harmonic means (TabuKHM). Experimental results on the Iris and other well-known datasets illustrate the robustness of the TabuKHM clustering algorithm.

16.
Two robustness criteria are presented that are applicable to general clustering methods. Robustness and stability in cluster analysis are not only data dependent, but even cluster dependent. Robustness is defined in the present paper as a property not only of the clustering method, but also of every individual cluster in a data set. The main principles are: (a) dissimilarity measurement of an original cluster with the most similar cluster in the induced clustering obtained by adding data points; (b) the dissolution point, an adaptation of the breakdown point concept to single clusters; (c) isolation robustness: given a clustering method, is it possible to join arbitrarily well separated clusters by the addition of g points? Results are derived for k-means, k-medoids (k estimated by average silhouette width), trimmed k-means, mixture models (with and without a noise component, with and without estimation of the number of clusters by BIC), and single and complete linkage.

17.
For qualitative data models, the Gini-Simpson index and Shannon entropy are commonly used for statistical analysis. In the context of high-dimensional low-sample size (HDLSS) categorical models, abundant in genomics and bioinformatics, the Gini-Simpson index, as extended to the Hamming distance in a pseudo-marginal setup, facilitates drawing suitable statistical conclusions. Under Lorenz ordering it is shown that Shannon entropy and the multivariate analogues proposed here are more informative than the Gini-Simpson index. The nested-subset monotonicity and the subgroup decomposability of some of the proposed measures are exploited. The usual jackknifing (or bootstrapping) methods may not work well for HDLSS constrained models, so we consider a permutation method incorporating the union-intersection (UI) principle and the Chen-Stein theorem to formulate suitable statistical hypothesis tests for gene classification. Some applications are included as illustration.
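The two diversity measures being compared are simple to compute for a categorical distribution p: the Gini-Simpson index 1 − Σ p_i² and the Shannon entropy −Σ p_i log p_i, both maximized by the uniform distribution. A small self-contained illustration (the example distributions are invented):

```python
# Gini-Simpson index and Shannon entropy of a categorical distribution.
import numpy as np

def gini_simpson(p):
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def shannon(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0 * log 0 = 0 by convention
    return -np.sum(p * np.log(p))

uniform = np.full(4, 0.25)
skewed = np.array([0.85, 0.05, 0.05, 0.05])
print(gini_simpson(uniform), shannon(uniform))
print(gini_simpson(skewed), shannon(skewed))
```

Both measures drop as the distribution concentrates on one category, but they discount concentration at different rates, which is the kind of difference the Lorenz-ordering comparison in the abstract formalizes.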

18.
We consider the problem of setting bootstrap confidence regions for multivariate parameters based on data depth functions. We prove, under mild regularity conditions, that depth-based bootstrap confidence regions are second-order accurate in the sense that their coverage error is of order n^{-1}, given a random sample of size n. The results hold in general for depth functions of types A and D, which cover as special cases the Tukey depth, the majority depth, and the simplicial depth. A simulation study is also provided to investigate empirically the bootstrap confidence regions constructed using these three depth functions.

19.
A new classification method (Cited by: 5; self-citations: 0; other citations: 5)
Building on attribute clustering networks, this paper proposes a heap nearest-neighbour classification method. By adding supervised information to unsupervised attribute clustering, the number of heaps can be chosen adaptively. The number of neighbours examined for a sample depends on the size of the heap it belongs to, so the number of neighbours examined is not the same for every sample. The method is applicable to high-dimensional, small-sample classification problems. We apply it to cancer identification from gene expression profiles, and the results show a considerable improvement in classification performance.

20.
The recently developed SAGE technology enables us to simultaneously quantify the expression levels of thousands of genes in a population of cells. SAGE data are helpful in the classification of different types of cancers. However, one main challenge in this task is the availability of a small number of samples compared to the huge number of genes, many of which are irrelevant for classification. Another main challenge is the lack of appropriate statistical methods that account for the specific properties of SAGE data. We propose an efficient solution: selecting relevant genes by information gain and building a multinomial event model for SAGE data. Promising results, in terms of accuracy, were obtained for the proposed model.
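The described pipeline can be sketched generically: score tags by information gain (estimated here by mutual information, a stand-in for the paper's exact criterion), keep the top k, and fit a multinomial event model (naive Bayes). The simulated Poisson counts and all parameters below are invented for illustration.

```python
# Feature selection by mutual information + multinomial naive Bayes,
# a rough stand-in for the information-gain / multinomial-event-model
# pipeline described for SAGE count data.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
n, p = 80, 300
y = rng.integers(0, 2, n)
X = rng.poisson(5, size=(n, p)).astype(float)
X[:, :10] += 10 * y[:, None]           # 10 class-informative tags

model = make_pipeline(
    SelectKBest(lambda A, b: mutual_info_classif(A, b, random_state=0), k=20),
    MultinomialNB(),
)
acc = float(cross_val_score(model, X, y, cv=5).mean())
print(round(acc, 3))
```

Selecting inside the pipeline keeps the information-gain step within each cross-validation fold, so the reported accuracy is not inflated by selection bias.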


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号