1.
Accurately and reliably identifying the actual number of clusters present within a dataset of gene expression profiles, when no additional information on cluster structure is available, is a problem addressed by few algorithms. GeneMCL transforms microarray analysis data into a graph consisting of nodes connected by edges, where the nodes represent genes and the edges represent the similarity in expression of those genes, as given by a proximity measurement. This measurement is taken to be the Pearson correlation coefficient combined with a local non-linear rescaling step. The resulting graph is input to the Markov Cluster (MCL) algorithm, an elegant, deterministic, non-specific and scalable method that models stochastic flow through the graph. The algorithm is inherently affected by any cluster structure present, and rapidly decomposes a graph into cohesive clusters. The potential of the GeneMCL algorithm is demonstrated with a 5,730 gene subset (IGS) of the Van't Veer breast cancer database, for which the clusterings are shown to reflect underlying biological mechanisms.
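The expansion-inflation loop at the heart of MCL can be sketched as follows. This is a minimal NumPy illustration of the general algorithm, not the GeneMCL pipeline (which adds a local non-linear rescaling step); the expression profiles are invented, and a simple clip-negatives Pearson similarity stands in for the paper's proximity measure.

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=50):
    """Markov Cluster sketch: alternate expansion (matrix power) and
    inflation (elementwise power + column renormalisation) on a
    column-stochastic matrix; flow concentrates inside clusters."""
    m = adj + np.eye(adj.shape[0])      # add self-loops
    m = m / m.sum(axis=0)               # column-normalise -> stochastic
    for _ in range(iters):
        m = np.linalg.matrix_power(m, expansion)  # expansion spreads flow
        m = m ** inflation                        # inflation favours strong flow
        m = m / m.sum(axis=0)
    clusters = []
    for i in np.where(m.diagonal() > 1e-6)[0]:    # "attractor" rows
        members = tuple(np.where(m[i] > 1e-6)[0])
        if members not in clusters:
            clusters.append(members)
    return clusters

# hypothetical expression profiles: genes 0-2 rise, genes 3-5 fall
X = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
              [1.1, 2.0, 3.1, 4.0, 5.1],
              [0.9, 2.1, 2.9, 4.1, 4.9],
              [5.0, 4.0, 3.0, 2.0, 1.0],
              [5.1, 4.0, 3.1, 2.0, 1.1],
              [4.9, 4.1, 2.9, 2.1, 0.9]])
adj = np.clip(np.corrcoef(X), 0, None)  # Pearson similarity, negatives dropped
np.fill_diagonal(adj, 0)
clusters = mcl(adj)                     # two clusters: genes 0-2 and 3-5
```

Because there are no positive-similarity edges between the two groups, flow cannot cross between them and MCL recovers exactly the two blocks.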
2.
Microarrays are becoming a ubiquitous tool of research in the life sciences. However, the working principles of microarray-based methodologies are often misunderstood, or apparently ignored, by the researchers who actually perform and interpret experiments. This in turn seems to lead to a common over-expectation regarding the explanatory and/or knowledge-generating power of microarray analyses. In this note we explain the basic principles of five major groups of analytical techniques used in studies of microarray data and their interpretation: principal component analysis (PCA), independent component analysis (ICA), the t-test, analysis of variance (ANOVA), and self-organizing maps (SOM). We discuss answers to selected practical questions related to the analysis of microarray data. We also take a closer look at the experimental setup and the rules that have to be observed in order to exploit microarrays efficiently. Finally, we discuss in detail the scope and limitations of microarray-based methods. We emphasize that no amount of statistical analysis can compensate for (or replace) a well-thought-through experimental setup. We conclude that microarrays are indeed useful tools in the life sciences, but by no means should they be expected to generate complete answers to complex biological questions. We argue that even well-posed questions, formulated within a microarray-specific terminology, cannot be completely answered with the use of microarray analyses alone.
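As an illustration of the simplest of the five techniques, the per-gene t-test, the sketch below runs SciPy's two-sample t-test row-by-row over a hypothetical expression matrix with a Bonferroni correction; the data, sample sizes, and effect size are all invented for the example.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# hypothetical log-expression matrix: 100 genes x 5 samples per condition
control = rng.normal(5.0, 0.5, size=(100, 5))
treated = rng.normal(5.0, 0.5, size=(100, 5))
treated[0] += 3.0                            # inject a genuine effect in gene 0 only

t, p = ttest_ind(control, treated, axis=1)   # one test per gene (per row)
significant = np.where(p < 0.05 / 100)[0]    # Bonferroni over 100 genes
```

With only five replicates per condition, only a large, consistent shift like the one injected in gene 0 survives the multiple-testing correction, which is exactly the over-expectation issue the note warns about.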
3.
Peptide nucleic acids (PNAs) have been used to encode a combinatorial library whereby each compound is labeled with a PNA tag that reflects its synthetic history and localizes the compound upon hybridization to an oligonucleotide array. We report herein the full synthetic details for a 4000-member PNA-encoded library targeted towards cysteine proteases.
4.
Conventional methods for detecting single-nucleotide polymorphisms (SNPs), the most common form of genetic variation in human beings, are mostly limited by their analysis time and throughput. In contrast, advances in microfabrication technology have led to the development of miniaturized platforms that can potentially provide rapid, high-throughput analysis at small sample volumes. This review highlights some of the recent developments in the miniaturization of SNP detection platforms, including microarray-based, bead-based microfluidic, and microelectrophoresis-based platforms. Particular attention is paid to their ease of fabrication, analysis time, and level of throughput.
5.
High-dimensional data are prevalent across many application areas, and generate an ever-increasing demand for statistical methods of dimension reduction, such as cluster and significance analysis. One application area that has recently received much interest is the analysis of microarray gene expression data.

The results of cluster analysis are open to subjective interpretation. To facilitate the objective inference of such analyses, we use flexible parameterizations of the cluster means, paired with model selection, to generate sparse and easy-to-interpret representations of each cluster. Model selection in cluster analysis is combinatorial in the numbers of clusters and data dimensions, and thus presents a computationally challenging task.

In this article we introduce a model selection method based on rate-distortion theory, which allows us to turn the combinatorial model selection problem into a fast and simultaneous selection across clusters. The method is also applicable to model selection in significance analysis.

We show that simultaneous model selection for cluster analysis generates objectively interpretable cluster models, and that the selection performance is competitive with a combinatorial search, at a fraction of the computational cost. Moreover, we show that the rate-distortion based significance analysis substantially increases the power compared with standard methods.

This article has supplementary material online.
6.
DNA Microarrays
The complete set of human genes (ca. 100 000), as well as the whole spectrum of biological diversity, should soon be analyzable simultaneously by means of DNA microarrays, thanks to the fast technical advances occurring in this area. The particular strength of array analysis, typically based on the hybridization of nucleic acid probes attached to microchips with labeled RNA or DNA samples, results from the highly redundant measurement of many parallel hybridization events (see picture), which leads to an extraordinary level of assay validation.
7.
Interest in extracellular vesicles (EVs) has grown exponentially over the last few years; involved in intercellular communication and serving as reservoirs of tumor biomarkers, they hold great potential for liquid-biopsy development, possibly replacing many costly and invasive tissue biopsies.
8.
Microarrays are used to simultaneously determine the expression of thousands of genes. An important application of microarrays is the classification of samples into classes of interest (e.g. healthy cells versus tumour cells). Discriminant partial least squares (DPLS) has often been used for this purpose. In this paper, we describe an improvement to DPLS that uses kernel-based probability density functions and the Bayes rule to classify samples, while keeping the option of not classifying a sample if this cannot be done with sufficient confidence. With this approach, samples outside the boundaries of the known classes or from the ambiguity region between classes are rejected, and only samples with a high probability of being correctly classified are classified. The optimal model is found by simultaneously minimizing the misclassification and rejection costs. The method (p-DPLS with reject option) was tested on two datasets. For the human cancers dataset, the accuracy (obtained by leave-one-out cross-validation) improved from 97% to 99% compared with p-DPLS without the reject option. For the breast cancer dataset, p-DPLS with reject option rejected 100% of the test samples that did not belong to any of the modelled classes; these samples would have been misclassified had the reject option not been considered.
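The density-plus-Bayes classification step with a reject option can be sketched as follows. This is a generic illustration using SciPy's `gaussian_kde` on hypothetical 2-D scores, not the paper's p-DPLS pipeline; the 0.9 posterior threshold is an arbitrary choice rather than the cost-optimized one the authors describe.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_classes(X, y):
    """One Gaussian kernel density estimate per class, plus class priors."""
    classes = np.unique(y)
    kdes = {c: gaussian_kde(X[y == c].T) for c in classes}
    priors = {c: float(np.mean(y == c)) for c in classes}
    return classes, kdes, priors

def predict_with_reject(x, classes, kdes, priors, threshold=0.9):
    """Bayes-rule posterior over classes; return None (reject) when no
    class reaches the confidence threshold or every density vanishes."""
    dens = np.array([kdes[c](x)[0] * priors[c] for c in classes])
    if dens.sum() == 0:                  # far outside every known class
        return None
    post = dens / dens.sum()
    best = int(np.argmax(post))
    return classes[best] if post[best] >= threshold else None

# hypothetical 2-D scores for two well-separated classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 0.5, size=(50, 2)),
               rng.normal([4, 4], 0.5, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)
classes, kdes, priors = fit_classes(X, y)
```

A sample near a class centre is classified, while a point far outside both classes (where every density underflows to zero) is rejected, mirroring the idea of classifying only when it can be done with sufficient confidence.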
9.
We report a flexible method for selective capture of sequence fragments from complex eukaryotic genome libraries for next-generation sequencing, based on hybridization to DNA microarrays. Using microfluidic array architecture and integrated hardware, the process is amenable to complete automation and does not introduce amplification steps into the standard library preparation workflow, thereby avoiding bias in sequence distribution and fragment lengths. We captured a discontiguous human genomic target region of 185 kb using a tiling design with 50mer probes. Analysis by high-throughput sequencing on an Illumina/Solexa 1G Genome Analyzer revealed 2150-fold enrichment, with mean per-base coverage between 4.6- and 107.5-fold for the individual target regions. This method represents a flexible and cost-effective approach for large-scale resequencing of complex genomes. Electronic supplementary material: the online version of this article (doi:) contains supplementary material, which is available to authorized users. Stephan Bau and Nadine Schracke contributed equally to this work.
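A fold-enrichment figure of this kind is conventionally the on-target fraction of sequenced bases divided by the target's fraction of the genome. The sketch below uses that standard definition; the sequencing yield is invented for illustration (only the 185 kb target size comes from the abstract, and 3.1 Gb is a rough human genome size).

```python
def fold_enrichment(on_target_bases, total_bases, target_size, genome_size):
    """On-target fraction of sequenced bases divided by the fraction
    of the genome that the target region occupies."""
    return (on_target_bases / total_bases) / (target_size / genome_size)

# hypothetical yield: 12.8 Mb of 100 Mb sequenced maps to the 185 kb target
e = fold_enrichment(on_target_bases=12.8e6, total_bases=100e6,
                    target_size=185e3, genome_size=3.1e9)
```

With these invented numbers the enrichment comes out in the low thousands, the same order of magnitude as the abstract reports.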
10.
Revealing biological networks is a key objective in systems biology. With microarrays, researchers now routinely measure expression profiles at the genome level under various conditions, and such data may be used to statistically infer gene regulation networks. Gaussian graphical models (GGMs) have proven useful for this purpose by modeling the Markovian dependence among genes. However, a single GGM may not be adequate to describe the potentially differing networks across various conditions, and hence it is more natural to infer multiple GGMs from such data. In this article we propose a class of nonconvex penalty functions aimed at the estimation of multiple GGMs with a flexible joint sparsity constraint. We illustrate the properties of our proposed nonconvex penalty functions in a simulation study. We then apply the method to a gene expression dataset from the GenCord Project, and show that our method can identify prominent pathways across different conditions. Supplementary materials for this article are available online.
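As a baseline for what a single-condition GGM estimate looks like, the sketch below uses scikit-learn's `GraphicalLasso` (a convex l1-penalized estimator, not the authors' joint nonconvex-penalty method) on a simulated three-gene chain; nonzero off-diagonal entries of the estimated precision matrix are read as network edges.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n, p = 500, 5
# simulated chain network gene0 -> gene1 -> gene2; genes 3 and 4 independent
X = rng.normal(size=(n, p))
X[:, 1] += 0.8 * X[:, 0]
X[:, 2] += 0.8 * X[:, 1]

model = GraphicalLasso(alpha=0.1).fit(X)
prec = model.precision_
# nonzero off-diagonal precision entries = edges of the inferred network
edges = {(i, j) for i in range(p) for j in range(i + 1, p)
         if abs(prec[i, j]) > 1e-3}
```

The chain structure shows why precision (rather than correlation) matrices encode the network: genes 0 and 2 are strongly correlated, yet conditionally independent given gene 1, so no (0, 2) edge appears.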