Similar Literature
20 similar documents found.
1.
2.
The human body consists of some 37 trillion cells, organized into more than 50 organs, that together make us who we are; yet we still understand very little about the basic units of our body: which cell types and states make up our organs, both compositionally and spatially. Earlier efforts to profile a wide range of human cell types were made by the FANTOM and GTEx consortia. Now, with advances in genomic technologies, profiling the human body at single-cell resolution is possible and will generate an unprecedented wealth of data that will accelerate basic and clinical research, with tangible applications to future medicine. To date, several major organs have been profiled, but the challenge lies in integrating single-cell genomics data in a meaningful way. In recent years, several consortia have begun to introduce harmonization and equity into data collection and analysis. Herein, we introduce existing and nascent single-cell genomics consortia, and present the benefits of establishing single-cell genomics consortia in regional environments as a route to a universal human cell reference dataset. Subject terms: Data integration, Genetics research

3.
Workflow technology is a generic mechanism for integrating diverse types of available resources (databases, servers, software applications and other services), facilitating knowledge exchange among traditionally divergent fields such as molecular biology, clinical research, computational science, physics, chemistry and statistics. Researchers can easily incorporate and access diverse, distributed tools and data to develop their own protocols for scientific analysis. Applications of workflow technology have been reported in areas such as drug discovery, genomics, large-scale gene expression analysis, proteomics, and systems biology. In this article, we discuss existing workflow systems and trends in the application of workflow-based systems.

4.
DNA microarray technology has become an important research tool for biotechnology and microbiology. It is now possible to characterize genetic diversity and gene expression in a genome-wide manner. DNA microarrays have been applied extensively to study the biology of many bacteria, including Escherichia coli, but only recently have they been developed for the Gram-positive Corynebacterium glutamicum. Both bacteria are widely used for biotechnological amino acid production. In this article, in addition to the design and generation of microarrays, their use in hybridization experiments and the subsequent data analysis, we describe recent applications of DNA microarray technology to amino acid production in C. glutamicum and E. coli. We also discuss the impact of functional genomics studies on fundamental as well as applied aspects of amino acid production with C. glutamicum and E. coli.

5.
The literature on XPS analysis of food products (mainly spray-dried powders, which reveal a considerable surface enrichment in lipids) and of microorganisms and related systems (extracellular polymeric substances and biofilms) is surveyed. This survey serves as a background for discussions and recommendations regarding methodology. Sample preparation methods reviewed are freeze drying, analysis of frozen hydrated specimens, and adsorption of surface-active biocompounds on model substrates. Peak decomposition is a way to increase the wealth of information provided by XPS spectra; it should be performed only after checking that sample charge stabilization is satisfactory. Moreover, ensuring the precision needed to make comparisons within sets of samples may involve a trade-off between imposing constraints and generating information. Examining correlations between spectral data in the light of chemical guidelines is useful to validate or improve peak decomposition and component assignment, and may also upgrade the chemical information regarding speciation. Further upgrading may be achieved by expressing marker XPS data in terms of concentrations of compounds of interest. Different methods of computation are discussed, providing a composition in terms of ingredients, classes of biochemical compounds, or various organic and inorganic compounds. As an alternative or complement to this deterministic approach, multivariate analysis of suitable spectral windows provides spectral components which may be used for comparing samples, and which may have a direct chemical relevance or be used to identify features of interest. Copyright © 2011 John Wiley & Sons, Ltd.
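To make the peak-decomposition step concrete, the sketch below fits a sum of three Gaussian components to a synthetic C 1s window with scipy, using bounds on the peak positions in place of the chemical constraints discussed above. The data, component positions and widths are illustrative assumptions, not values from the reviewed studies; tightening or relaxing the bounds reproduces in miniature the trade-off between imposing constraints and generating information.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, center, width):
        # Single Gaussian component of the decomposition.
        return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

    def c1s_model(x, a1, c1, w1, a2, c2, w2, a3, c3, w3):
        # Sum of three components: C-C/C-H, C-O and O-C=O carbon.
        return (gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)
                + gaussian(x, a3, c3, w3))

    # Synthetic C 1s window (binding energy in eV) standing in for a spectrum.
    x = np.linspace(282, 292, 500)
    rng = np.random.default_rng(0)
    y = c1s_model(x, 100, 285.0, 0.6, 40, 286.5, 0.6, 20, 288.5, 0.6)
    y += rng.normal(0, 1.0, x.size)

    # Bounds confine each peak position to a narrow window around typical
    # literature values; this is where constraints trade against information.
    p0 = [90, 285.0, 0.5, 30, 286.5, 0.5, 15, 288.5, 0.5]
    lo = [0, 284.6, 0.3, 0, 286.1, 0.3, 0, 288.1, 0.3]
    hi = [np.inf, 285.4, 1.0, np.inf, 286.9, 1.0, np.inf, 288.9, 1.0]
    popt, _ = curve_fit(c1s_model, x, y, p0=p0, bounds=(lo, hi))

    # Relative component areas (proportional to amplitude x width).
    areas = [popt[i] * popt[i + 2] for i in (0, 3, 6)]
    print("component fractions:", (np.array(areas) / sum(areas)).round(3))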

6.
Interest in proteomics has increased sharply in recent years, and proteomic methods have been applied widely to problems in cell biology. If the 1990s were the decade of genomics, the opening years of the new century can fairly be called the decade of proteomics. The rapid evolution of proteomics has continued through these years, with a series of innovations in separation techniques and in the core technologies of two-dimensional gel electrophoresis and MS. Both technologies are fueled by automation and high-throughput computation for profiling proteins from biological systems. As Patterson once observed, 'data analysis is the Achilles heel of proteomics and our ability to generate data now outstrips our ability to analyze it'. The development of automated, high-throughput technologies for rapid protein identification is essential for large-scale proteome projects, and automatic protein identification and characterization is essential for high-throughput proteomics. This review provides a snapshot of the tools and applications available for mass spectrometric high-throughput biocomputation. It starts with a brief introduction to proteomics and MS. Computational tools that can be employed at various stages of analysis are then presented, including those for data processing, identification, quantification, and the understanding of the biological functions of individual proteins and their dynamic interactions. The challenges of computational software development and its future trends in MS-based proteomics are also discussed. Copyright © 2014 John Wiley & Sons, Ltd.

7.
Proteomics: capacity versus utility
Until recently, scientists studied genes or proteins one at a time. With improvements in technology, new tools have become available to study the complex interactions that occur in biological systems. Global studies are required for this, involving both genomic and proteomic approaches. High-throughput methods are necessary in each case because the number of genes and proteins in even the simplest of organisms is immense. In the developmental phase of genomics, the emphasis was on the generation and assembly of large amounts of nucleic acid sequence data. Proteomics is currently in a phase of technological development and establishment, and demonstrating the capacity for high throughput is a major challenge. However, funding bodies (both public and private) are increasingly focused on the usefulness of this capacity. Here we review the current state of proteome research in terms of capacity and utility.

8.
9.
The impact of bacterial genomics on natural product research
"There's life in the old dog yet!" This adage also holds true for natural product research. After the era of natural products was declared to be over following the introduction of combinatorial synthesis techniques, natural product research has taken a surprising turn back towards being a major field of pharmaceutical research. Current challenges, such as emerging multidrug-resistant bacteria, might be overcome by developments that combine genomic knowledge with applied biology and chemistry to identify, produce, and alter the structure of new lead compounds. Significant biological activity is reported much less frequently for synthetic compounds, a fact reflected in the large proportion of natural products and their derivatives in clinical use. This Review describes the impact of microbial genomics on natural product research, in particular the search for new lead structures and their optimization. The limitations of this research are also discussed, allowing a look into future developments.

10.
With the accelerated accumulation of genomic sequence data, there is a pressing need to develop computational methods and advanced bioinformatics infrastructure for reliable and large-scale protein annotation and biological knowledge discovery. The Protein Information Resource (PIR) provides an integrated public resource of protein informatics to support genomic and proteomic research. PIR produces the Protein Sequence Database of functionally annotated protein sequences. The annotation problems are addressed by a classification-driven and rule-based method with evidence attribution, coupled with an integrated knowledge base system being developed. The approach allows sensitive identification, consistent and rich annotation, and systematic detection of annotation errors, as well as distinction of experimentally verified and computationally predicted features. The knowledge base consists of two new databases, sequence analysis tools, and graphical interfaces. PIR-NREF, a non-redundant reference database, provides a timely and comprehensive collection of all protein sequences, totaling more than 1,000,000 entries. iProClass, an integrated database of protein family, function, and structure information, provides extensive value-added features for about 830,000 proteins with rich links to over 50 molecular databases. This paper describes our approach to protein functional annotation with case studies and examines common identification errors. It also illustrates that data integration in PIR supports exploration of protein relationships and may reveal protein functional associations beyond sequence homology.
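As a rough illustration of what classification-driven, rule-based annotation with evidence attribution means in practice, the sketch below applies two hypothetical sequence-motif rules (invented for this example, not drawn from PIR's actual rule base) and records, for every annotation produced, which rule fired, so that computationally predicted features remain distinguishable from experimentally verified ones.

    import re
    from dataclasses import dataclass

    @dataclass
    class Annotation:
        feature: str
        value: str
        evidence: str  # provenance: experimental vs. rule-predicted

    # Hypothetical rules: when a family signature matches, propagate the
    # family's curated annotation to the entry, tagged with the rule ID.
    RULES = [
        ("RULE-0001", "GxGxxG", "binding site", "NAD(P)-binding motif"),
        ("RULE-0002", "HExxH", "active site", "zinc metalloprotease motif"),
    ]

    def annotate(sequence):
        hits = []
        for rule_id, pattern, feature, value in RULES:
            regex = pattern.replace("x", "[A-Z]")  # 'x' = any residue
            if re.search(regex, sequence):
                hits.append(Annotation(feature, value,
                                       f"predicted (by {rule_id})"))
        return hits

    for ann in annotate("MKHEAVHGAGLLGKC"):
        print(ann)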

11.
Efficient target selection methods are an important prerequisite for increasing the success rate and reducing the cost of high-throughput structural genomics efforts. There is a high demand for sequence-based methods capable of predicting experimentally tractable proteins and filtering out potentially difficult targets at different stages of the structural genomics pipeline. Simple empirical rules based on anecdotal evidence are being increasingly superseded by rigorous machine-learning algorithms. Although the simplicity of less advanced methods makes them more human-understandable, more sophisticated formalized algorithms possess superior classification power. The quickly growing corpus of experimental success and failure data gathered by structural genomics consortia creates a unique opportunity for retrospective data mining using machine learning techniques and results in classifiers of increasing quality. For example, current solubility prediction methods reach accuracies of over 70%. Furthermore, automated feature selection leads to better insight into the nature of the correlation between amino acid sequence and experimental outcome. In this review we summarize methods for predicting experimental success in cloning, expression, soluble expression, purification and crystallization of proteins, with a special focus on publicly available resources. We also describe experimental data repositories and the machine learning techniques used for classification and feature selection.
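A minimal sketch (synthetic data, toy features) of the sequence-based classification idea: simple composition features feed a logistic-regression classifier evaluated by cross-validation. Real structural genomics classifiers use far richer feature sets and the experimental repositories mentioned above.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    AA = "ACDEFGHIKLMNPQRSTVWY"
    HYDROPHOBIC, CHARGED = set("AILMFWV"), set("DEKR")

    def features(seq):
        # Sequence-derived features of the kind used by solubility
        # predictors: length, hydrophobic fraction, charged fraction.
        n = len(seq)
        return [n,
                sum(c in HYDROPHOBIC for c in seq) / n,
                sum(c in CHARGED for c in seq) / n]

    rng = np.random.default_rng(1)

    def random_seq(hydro_bias):
        # Synthetic stand-in: bias the composition towards hydrophobic
        # residues to mimic aggregation-prone (insoluble) proteins.
        p = np.array([hydro_bias if a in HYDROPHOBIC else 1.0 for a in AA])
        return "".join(rng.choice(list(AA), size=200, p=p / p.sum()))

    X = np.array([features(random_seq(3.0)) for _ in range(100)]
                 + [features(random_seq(0.5)) for _ in range(100)])
    y = np.array([0] * 100 + [1] * 100)  # 0 = insoluble, 1 = soluble

    clf = LogisticRegression(max_iter=1000)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))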

12.
Many laboratories identify proteins by searching tandem mass spectrometry data against genomic or protein sequence databases. These database searches typically use the measured peptide masses or the derived peptide sequence; in this paper we focus on the latter. We study the minimum peptide sequence data requirements for definitive protein identification from protein sequence databases. Accurate mass measurements are not needed for definitive protein identification, even when only a limited amount of sequence data is available for searching. This has implications for mass spectrometer performance (and cost), database search strategies and proteomics research.
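The question the paper examines can be illustrated with a toy search: how long must a peptide sequence tag be before it matches a single database entry? The three-protein "database" below is invented for illustration; real searches run against full proteome databases.

    # Toy database: accession -> protein sequence (invented examples).
    DB = {
        "P1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
        "P2": "MSLNFLDFEQPIAELEAKIDSLTAVSRQDEKLD",
        "P3": "MKTAYIAKQRDWENPGVTQLNRLAAHPPFASWR",
    }

    def matches(tag):
        # Accessions whose sequence contains the peptide tag.
        return [acc for acc, seq in DB.items() if tag in seq]

    for tag in ("MKTAY", "MKTAYIAKQRQ"):
        hits = matches(tag)
        print(tag, "->", hits, "(unique)" if len(hits) == 1 else "(ambiguous)")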

13.
High-throughput technologies have the potential to affect all aspects of drug discovery. Considerable attention is paid to high-throughput screening (HTS) for small-molecule lead compounds. The identification of the targets that enter those HTS campaigns had been driven by basic research until the advent of genomics-level data acquisition such as sequencing and gene expression microarrays. Large-scale profiling approaches (e.g., microarrays, protein analysis by mass spectrometry, and metabolite profiling) can yield vast quantities of data and important information. However, these approaches usually require painstaking in silico analysis and low-throughput basic wet-lab research to identify the function of a gene and validate the gene product as a potential therapeutic drug target. Functional genomic screening offers the promise of direct identification of genes involved in phenotypes of interest. In this review, RNA interference (RNAi)-mediated loss-of-function screens are discussed, together with their utility in target identification. Some of the genes identified in these screens should produce similar phenotypes if their gene products are antagonized with drugs. With a carefully chosen phenotype, an understanding of the biology of RNAi and an appreciation of the limitations of RNAi screening, there is great potential for the discovery of new drug targets.

14.
The Interval Correlation Optimised Shifting algorithm (icoshift) was recently introduced for the alignment of nuclear magnetic resonance spectra. The method is based on an insertion/deletion model to shift intervals of spectra or chromatograms, and relies on an efficient Fast Fourier Transform-based computation core that allows the alignment of large data sets in a few seconds on a standard personal computer. The potential of this programme for the alignment of chromatographic data is outlined, with a focus on the model used for the correction function. The efficacy of the algorithm is demonstrated on a chromatographic data set of 45 chromatograms of 64,000 data points each. Computation time is significantly reduced compared with the Correlation Optimised Warping (COW) algorithm, which is widely used for the alignment of chromatographic signals. Moreover, icoshift proved to perform better than COW in terms of alignment quality (viz. simplicity and peak factor), without the need for the computationally expensive optimisation of warping meta-parameters that COW requires. Principal component analysis (PCA) is used to show how a significant reduction in data complexity was achieved, improving the ability to highlight chemical differences among the samples.
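A simplified sketch of the FFT-based core idea (not the published icoshift implementation, which aligns interval-by-interval and handles missing values): the best shift is the lag maximising the circular cross-correlation, computed in O(n log n) via the convolution theorem, and is applied with edge padding rather than wrap-around, mimicking the insertion/deletion model.

    import numpy as np

    def fft_shift_align(target, signal, max_shift):
        # Circular cross-correlation via the FFT; xcorr[k] is large when
        # `signal` shifted by lag k lines up with `target`.
        n = len(target)
        xcorr = np.fft.ifft(np.fft.fft(target) * np.conj(np.fft.fft(signal))).real
        lags = np.fft.fftfreq(n, d=1.0 / n).astype(int)  # 0..n/2-1, -n/2..-1
        ok = np.abs(lags) <= max_shift
        best = lags[ok][np.argmax(xcorr[ok])]
        # Insertion/deletion-style shift: pad with edge values, no wrap-around.
        if best > 0:
            return np.concatenate([np.full(best, signal[0]), signal[:-best]])
        if best < 0:
            return np.concatenate([signal[-best:], np.full(-best, signal[-1])])
        return signal

    # Toy demonstration: a Gaussian peak displaced by 7 points.
    x = np.arange(512)
    target = np.exp(-0.5 * ((x - 250) / 5.0) ** 2)
    signal = np.exp(-0.5 * ((x - 257) / 5.0) ** 2)
    aligned = fft_shift_align(target, signal, max_shift=20)
    print("peak before:", signal.argmax(), "after:", aligned.argmax())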

15.
An approach is described for genomic database searching based on experimentally observed proteolytic fragments, e.g., isolated from 1D or 2D gels or analyzed directly, that can be applied to unfinished prokaryotic genomic data in the absence of annotations or previously assigned open reading frames (ORFs). This variation on the database search is in contrast to the more familiar use of peptide mass spectral fragmentation data to search fully annotated inferred protein databases, e.g., OWL or SWISS-PROT. We compared the SEQUEST search results from a six-reading-frame translation of the Porphyromonas gingivalis genome DNA sequence with those from computationally derived ORFs created using publicly available genomics software tools. The ORF approach eliminated many of the artifacts present in output from the six-reading-frame search. The method was applied to uninterpreted tandem mass spectrometric data derived from proteins secreted by the periodontal pathogen Porphyromonas gingivalis in response to the gingival epithelial cell environment, a model system for the study of host-pathogen interactions relevant to human periodontal disease.
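The six-reading-frame translation underlying this strategy is straightforward to sketch; the version below uses Biopython and a made-up DNA string rather than P. gingivalis data. ORF-finding tools then retain only sufficiently long stop-free stretches from such frames.

    from Bio.Seq import Seq

    def six_frame_translate(dna):
        # Translate the three forward and three reverse-complement frames;
        # '*' marks stop codons in the output.
        seq = Seq(dna)
        frames = {}
        for name, strand in (("+", seq), ("-", seq.reverse_complement())):
            for offset in range(3):
                sub = strand[offset:]
                sub = sub[:len(sub) - len(sub) % 3]  # trim to whole codons
                frames[f"{name}{offset + 1}"] = str(sub.translate())
        return frames

    dna = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"  # toy sequence
    for frame, protein in six_frame_translate(dna).items():
        print(frame, protein)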

16.
Optical labelling reagents (dyes and fluorophores) are an essential component of probe-based biomolecule detection, an approach widely employed in areas including environmental analysis, disease diagnostics, pharmaceutical screening, and proteomic and genomic studies. Recently, functional nanomaterials have been applied to molecular detection as a new generation of high-value optical labels. The great potential of these new labels has paved the way for biomolecule assays with unprecedented analytical performance in terms of sensitivity, multiplexing capability, sample throughput, cost-effectiveness and ease of use. This review provides an overview of recent advances in the use of different nanoparticles (such as quantum dots, rare-earth-doped nanoparticles and gold nanoparticles) for analytical genomics and proteomics, with particular emphasis on strategies for using nanoparticles in bioimaging and quantitative bioanalysis, and on the possibilities and limitations of nanoparticles in this growing field.

17.
In this review, research papers on the application of CEC published between May 2003 and May 2005 are summarized. First, a short overview is given of trends and developments in CEC that may increase the applicability of the separation technique. Next, application-oriented research using CEC is described in biochemical studies, including proteomics and genomics, in the analysis of food and natural products, and in pharmaceutical, industrial, and environmental analysis.

18.
Dai Xin. 《高分子学报》 2021, 53(4): 104-117
Existing research on information privacy norms has concentrated on regulatory requirements governing the collection and disclosure of information. But privacy norms contain another kind of content: they require those who have obtained, or have even disclosed, another person's private information to invest effort, within certain limits, in concealing their possession or use of that information. This requirement, which may be called "seeing through but not speaking of it" (看破不说破), is a foundational privacy norm; the various social values at stake in information privacy institutions can be realized only with, and indeed depend on, such a norm. The norm's actual behavioral force and scope of application nevertheless have limits and boundaries. In important contemporary topics of information law, including consumer protection and data surveillance, "seeing through but not speaking of it" is already reflected in institutions and practice, and it offers important insights for thinking about the evolution, direction and reconstruction of information privacy regimes.

19.
Recent theoretical developments in the understanding of weakly chaotic transients in ferroelectric liquid crystals (FLCs) induced by an electric field are studied in terms of the interaction with a magnetic field. Our research concerns the nonlinear dynamical system represented by a thin film of surface-stabilized FLC in the smectic C* phase subjected to a swinging magnetic field. Computation of the Lyapunov exponents from the dynamic equation for the director field reveals that the director dynamics exhibits limit-cycle, hyperchaotic-attractor and strange-attractor behaviour in this dissipative nonlinear medium. The transients between the director's phase-space trajectories can be controlled by the magnetic field parameters. A fundamental understanding of the director dynamics may make a valuable contribution to applications of thin liquid crystal films.
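Because the abstract does not reproduce the director equation itself, the sketch below estimates the largest Lyapunov exponent for a stand-in chaotic system, a periodically driven damped pendulum, using the standard two-trajectory renormalisation (Benettin) approach; a positive exponent signals the kind of chaotic regime reported above.

    import numpy as np

    def largest_lyapunov(f, x0, dt=0.01, steps=100_000, d0=1e-8):
        # Evolve two nearby trajectories, rescale their separation to d0
        # after every step, and average the logarithmic stretching rate.
        rng = np.random.default_rng(2)
        x = np.asarray(x0, dtype=float)
        delta = rng.normal(size=x.size)
        y = x + d0 * delta / np.linalg.norm(delta)
        total, t = 0.0, 0.0
        for _ in range(steps):
            # Explicit Euler steps keep the sketch short; use RK4 for
            # quantitative work.
            x = x + dt * f(x, t)
            y = y + dt * f(y, t)
            t += dt
            d = np.linalg.norm(y - x)
            total += np.log(d / d0)
            y = x + d0 * (y - x) / d  # renormalise the separation
        return total / (steps * dt)

    # Stand-in dynamics, chaotic for these parameters -- NOT the FLC
    # director equation from the paper.
    def driven_pendulum(state, t, gamma=0.5, amp=1.2, omega=2.0 / 3.0):
        theta, v = state
        return np.array([v, -gamma * v - np.sin(theta) + amp * np.cos(omega * t)])

    print("lambda_max ~", round(largest_lyapunov(driven_pendulum, [0.1, 0.0]), 4))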

20.
Measuring the metabolome: current analytical technologies
Dunn WB, Bailey NJ, Johnson HE. The Analyst, 2005, 130(5): 606-625
The post-genomics era has brought ever-increasing demands to observe and characterise variation within biological systems. This variation has been studied at the genomic (gene function), proteomic (protein regulation) and metabolomic (small-molecular-weight metabolite) levels. Whilst genomics and proteomics are generally studied using microarrays (genomics) and 2D gels or mass spectrometry (proteomics), the technique of choice is less obvious in the area of metabolomics. Much work has been published employing mass spectrometry, NMR spectroscopy and vibrational spectroscopic techniques, amongst others, for the study of variation within the metabolome in many animal, plant and microbial systems. This review discusses the advantages and disadvantages of each technique, putting the current status of the field of metabolomics in context and providing examples of applications of each technique.
