Similar Literature
A total of 20 similar documents were found.
1.
Natural products are often attractive and challenging targets for synthetic chemists, and many have interesting biological activities. However, synthetic chemists need to be more than simply suppliers of compounds to biologists. Therefore, we have been seeking ways to actively apply organic synthetic methods to chemical biology studies of natural products and their activities. In this personal review, I would like to introduce our work on the development of new biologically active compounds inspired by, or extracted from, the structures of natural products, focusing on enhancement of functional activity and specificity and overcoming various drawbacks of the parent natural products.

2.
High-throughput screening (HTS) of chemical libraries is often used for the unbiased identification of compounds interacting with G protein-coupled receptors (GPCRs), the largest family of therapeutic targets. However, current HTS methods require removing GPCRs from their native environment, which modifies their pharmacodynamic properties and biases the screen toward false positive hits. Here, we developed and validated a molecular imaging (MI) agent, NIR-mbc94, which emits near infrared (NIR) light and selectively binds to endogenously expressed cannabinoid CB2 receptors, a recognized target for treating autoimmune diseases, chronic pain and cancer. The precision and ease of this assay allows for the HTS of compounds interacting with CB2 receptors expressed in their native environment.

3.
Non-specific chemical modification of protein thiol groups continues to be a significant source of false positive hits from high-throughput screening campaigns and can even plague certain protein targets and chemical series well into lead optimization. While experimental tools exist to assess the risk and promiscuity associated with the chemical reactivity of existing compounds, computational tools are desired that can reliably identify substructures that are associated with chemical reactivity to aid in triage of HTS hit lists, external compound purchases, and library design. Here we describe a Bayesian classification model derived from more than 8,800 compounds that have been experimentally assessed for their potential to covalently modify protein targets. The resulting model can be implemented in the large-scale assessment of compound libraries for purchase or design. In addition, the individual substructures identified as highly reactive in the model can be used as look-up tables to guide chemists during hit-to-lead and lead optimization campaigns.
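As a rough illustration of the kind of model this abstract describes, the sketch below trains a naive Bayes classifier on substructure fingerprints of compounds labeled as reactive or non-reactive. It assumes an RDKit/scikit-learn stack and a hypothetical input file thiol_reactivity.csv with smiles and reactive columns; it is not the authors' model or data.

```python
import numpy as np
import pandas as pd
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

N_BITS = 2048

def morgan_bits(smiles, radius=2, n_bits=N_BITS):
    """Return a Morgan (ECFP-like) bit vector as a numpy array, or None if unparsable."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

data = pd.read_csv("thiol_reactivity.csv")        # assumed columns: smiles, reactive (0/1)
fps = [morgan_bits(s) for s in data["smiles"]]
mask = [f is not None for f in fps]
X = np.vstack([f for f in fps if f is not None])
y = data.loc[mask, "reactive"].to_numpy()

model = BernoulliNB()                             # naive Bayes over substructure bits
print("5-fold ROC AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
model.fit(X, y)                                   # reuse to score purchase lists or new designs
```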

4.
High-throughput screening (HTS) campaigns in pharmaceutical companies have accumulated a large amount of data for several million compounds over a couple of hundred assays. Despite the general awareness that rich information is hidden inside the vast amount of data, little has been reported on a systematic data mining method that can reliably extract relevant knowledge of interest for chemists and biologists. We developed a data mining approach based on an algorithm called ontology-based pattern identification (OPI) and applied it to our in-house HTS database. We identified nearly 1500 scaffold families with statistically significant structure-HTS activity profile relationships. Among them, dozens of scaffolds were characterized as leading to artifactual results stemming from the screening technology employed, such as assay format and/or readout. Four types of compound scaffolds can be characterized based on this data mining effort: tumor cytotoxic, general toxic, potential reporter gene assay artifact, and target family specific. The OPI-based data mining approach can reliably identify compounds that are not only structurally similar but also share statistically significant biological activity profiles. Statistical tests such as the Kruskal-Wallis test and analysis of variance (ANOVA) can then be applied to the discovered scaffolds for effective assignment of relevant biological information. The scaffolds identified by our HTS data mining efforts are an invaluable resource for designing SAR-robust diversity libraries, generating in silico biological annotations of compounds on a scaffold basis, and providing novel target family specific scaffolds for focused compound library design.
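The statistical step mentioned at the end of the abstract can be illustrated with a short sketch: for each scaffold family, compare the activity distribution of its members against the rest of the deck with a Kruskal-Wallis test. The file name, column names, minimum family size, and p-value cutoff are all assumptions for illustration; this is not the OPI algorithm itself.

```python
import pandas as pd
from scipy.stats import kruskal

hts = pd.read_csv("hts_results.csv")              # assumed columns: compound_id, scaffold, activity

flagged = []
for scaffold, members in hts.groupby("scaffold"):
    if len(members) < 5:                          # skip tiny scaffold families
        continue
    background = hts.loc[hts["scaffold"] != scaffold, "activity"]
    stat, p = kruskal(members["activity"], background)
    if p < 1e-4:                                  # crude multiple-testing guard
        flagged.append((scaffold, p))

for scaffold, p in sorted(flagged, key=lambda t: t[1])[:20]:
    print(f"{scaffold}\tp = {p:.2e}")
```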

5.
High throughput screening (HTS) data is often noisy, containing both false positives and negatives. Thus, careful triaging and prioritization of the primary hit list can save time and money by identifying potential false positives before incurring the expense of follow-up. Of particular concern are cell-based reporter gene assays (RGAs), where the number of hits may be prohibitively high to be scrutinized manually for weeding out erroneous data. Based on statistical models built from chemical structures of 650,000 compounds tested in RGAs, we created "frequent hitter" models that make it possible to prioritize potential false positives. Furthermore, we followed up the frequent hitter evaluation with chemical structure based in silico target predictions to hypothesize a mechanism for the observed "off target" response. It was observed that the predicted cellular targets for the frequent hitters were known to be associated with undesirable effects such as cytotoxicity. More specifically, the most frequently predicted targets relate to apoptosis and cell differentiation, including kinases, topoisomerases, and protein phosphatases. The mechanism-based frequent hitter hypothesis was tested using 160 additional druglike compounds predicted by the model to be nonspecific actives in RGAs. This validation was successful (showing a 50% hit rate compared to a normal hit rate as low as 2%), and it demonstrates the power of computational models toward understanding complex relations between chemical structure and biological function.
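A minimal sketch of the bookkeeping behind a "frequent hitter" flag is shown below, assuming a long-format table of historical per-assay hit calls. The file name, column names, and thresholds are arbitrary illustrations; the published models are structure-based statistical models, not simple counts.

```python
import pandas as pd

calls = pd.read_csv("rga_hit_calls.csv")          # assumed columns: compound_id, assay_id, is_hit (0/1)

summary = calls.groupby("compound_id")["is_hit"].agg(times_tested="count", times_hit="sum")
summary["hit_rate"] = summary["times_hit"] / summary["times_tested"]

# Arbitrary illustrative thresholds: tested in at least 10 RGAs, active in >= 30% of them
frequent = summary[(summary["times_tested"] >= 10) & (summary["hit_rate"] >= 0.3)]
print(frequent.sort_values("hit_rate", ascending=False).head(20))
```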

6.
Fragment-based screening is an emerging technology used as an alternative to, and often in parallel with, high-throughput screening (HTS). Fragment screening focuses on very small compounds. Because of their small size and simplicity, fragments exhibit a low to medium binding affinity (mM to μM) and must therefore be screened at high concentration in order to detect binding events. Since high-concentration screening raises issues in biochemical assays, biophysical methods are generally employed in fragment screening campaigns. Moreover, these techniques are very sensitive, and some of them can give precise information about the binding mode of fragments, which facilitates the mandatory hit-to-lead optimization. One of the main advantages of fragment-based screening is that fragment hits generally exhibit strong binding relative to their size, and their subsequent optimization should lead to compounds with better pharmacokinetic properties compared to molecules evolved from HTS hits. In other words, fragments are interesting starting points for drug discovery projects. Besides, the chemical space of low-complexity compounds is very limited in comparison to that of drug-like molecules, and thus easier to explore with a screening library of limited size. Furthermore, the "combinatorial explosion" effect ensures that the resulting combinations of interlinked binding fragments may cover a significant part of "drug-like" chemical space. In parallel to experimental screening, virtual screening techniques, dedicated to fragments or to larger compounds, are gaining momentum as a way to further reduce the number of compounds to test. This article reviews recent developments in both experimental and in silico screening in the fragment-based discovery field. Given the specificity of this journal, special attention will be given to fragment library design.

7.
The identification of promising hits and the generation of high quality leads are crucial steps in the early stages of drug discovery projects. The definition and assessment of both chemical and biological space have revitalized the screening process model and emphasized the importance of exploring the intrinsic complementary nature of classical and modern methods in drug research. In this context, the widespread use of combinatorial chemistry and sophisticated screening methods for the discovery of lead compounds has created a large demand for small organic molecules that act on specific drug targets. Modern drug discovery involves the employment of a wide variety of technologies and expertise in multidisciplinary research teams. The synergistic effects between experimental and computational approaches on the selection and optimization of bioactive compounds emphasize the importance of the integration of advanced technologies in drug discovery programs. These technologies (VS, HTS, SBDD, LBDD, QSAR, and so on) are complementary in the sense that they share common goals, and thus the combination of empirical and in silico efforts is feasible at many different levels of lead optimization and new chemical entity (NCE) discovery. This paper provides a brief perspective on the evolution and use of key drug design technologies, highlighting opportunities and challenges.

8.
Most of the recent published works in the field of docking and scoring protein/ligand complexes have focused on ranking true positives resulting from a Virtual Library Screening (VLS) through the use of a specified or consensus linear scoring function. In this work, we present a methodology to speed up the high-throughput screening (HTS) process, allowing focused screens or hit-list triaging when a prohibitively large number of hits is identified in the primary screen, in which we have extended the principle of consensus scoring in a nonlinear, neural-network manner. This led us to introduce a nonlinear Generalist scoring Function, GFscore, which was trained to discriminate true positives from false positives in a data set of diverse chemical compounds. This original Generalist scoring Function is a combination of the five scoring functions found in the CScore package from Tripos Inc. GFscore eliminates up to 75% of molecules with a confidence rate of 90%. The final result is a hit enrichment in the list of molecules to investigate during a research campaign for biologically active compounds, where the remaining 25% of molecules would be sent to in vitro screening experiments. GFscore is therefore a powerful tool for the biologist, saving both time and money.
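A hedged sketch of the general idea, nonlinear consensus scoring, is shown below: a small neural network is trained on the outputs of several scoring functions to separate true from false positives. The five input column names echo the CScore components but are placeholders, and the file is hypothetical; this is not the published GFscore model.

```python
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

scores = pd.read_csv("docking_scores.csv")        # assumed: five score columns + true_positive label
X = scores[["fscore", "gscore", "pmf_score", "dscore", "chemscore"]].to_numpy()
y = scores["true_positive"].to_numpy()

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
print("5-fold ROC AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```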

9.
Molecular similarity methods for ligand-based virtual screening (VS) generally do not take compound potency as a variable or search parameter into account. We have incorporated a logarithmic potency scaling function into two conceptually distinct VS algorithms to account for relative compound potency during search calculations. A high-throughput screening (HTS) data set containing cathepsin B inhibitors was analyzed to evaluate the effects of potency scaling. Sets of template compounds were randomly selected from the HTS data and used to search for hits having varying potency levels in the presence or absence of potency scaling. Enrichment of potent compounds in small subsets of the HTS data set was observed as a consequence of potency scaling. In part, observed enrichments could be rationalized as a result of recentering chemical reference space on a subspace populated by potent compounds. Our findings suggest that VS calculations using multiple reference compounds can be directed toward the preferential detection of potent database hits by scaling compound contributions according to potency differences.
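The core idea can be sketched as follows: when fusing similarities against multiple reference actives, each reference's contribution is weighted by the logarithm of its potency (here pIC50). The reference structures, potencies, and exact weighting scheme are illustrative assumptions, not the authors' scaling function.

```python
import math
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

# Hypothetical reference actives as (SMILES, IC50 in mol/L)
references = [("CCOC(=O)c1ccc(N)cc1", 2e-7), ("CC(=O)Nc1ccc(O)cc1", 5e-6)]
ref_fps = [(fingerprint(s), -math.log10(ic50)) for s, ic50 in references]   # weight = pIC50

def potency_scaled_score(smiles):
    """Potency-weighted average Tanimoto similarity to the reference set."""
    fp = fingerprint(smiles)
    weighted = [DataStructs.TanimotoSimilarity(fp, rfp) * w for rfp, w in ref_fps]
    return sum(weighted) / sum(w for _, w in ref_fps)

print(potency_scaled_score("CCOC(=O)c1ccc(NC(C)=O)cc1"))
```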

10.
A key challenge in many drug discovery programs is to accurately assess the potential value of screening hits. This is particularly true in fragment-based drug design (FBDD), where the hits often bind relatively weakly, but are correspondingly small. Ligand efficiency (LE) considers both the potency and the size of the molecule, and enables us to estimate whether or not an initial hit is likely to be optimisable to a potent, druglike lead. While size is a key property that needs to be controlled in a small molecule drug, there are a number of additional properties that should also be considered. Lipophilicity is amongst the most important of these additional properties, and here we present a new efficiency index (LLEAT) that combines lipophilicity, size and potency. The index is intuitively defined, and has been designed to have the same target value and dynamic range as LE, making it easily interpretable by medicinal chemists. Monitoring both LE and LLEAT should help both in the selection of more promising fragment hits and in controlling molecular weight and lipophilicity during optimisation.
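For orientation, the sketch below computes LE using the standard approximation ΔG ≈ −1.37·log10(Kd) kcal/mol near room temperature, together with a lipophilicity-corrected index in the commonly quoted LLEAT form. The 0.111 offset and the use of pIC50 − clogP are assumptions here and should be checked against the original definition.

```python
def ligand_efficiency(pic50, heavy_atoms):
    """LE in kcal/mol per heavy atom; values around 0.3 or above are often sought for hits."""
    return 1.37 * pic50 / heavy_atoms

def lle_at(pic50, clogp, heavy_atoms):
    """Lipophilicity-corrected efficiency scaled to the same target range as LE (assumed form)."""
    lle = pic50 - clogp                            # lipophilic ligand efficiency
    return 0.111 + 1.37 * lle / heavy_atoms

# Example: a 20-heavy-atom hit with IC50 = 1 uM (pIC50 = 6.0) and clogP = 2.5
print(ligand_efficiency(6.0, 20))                  # ~0.41
print(lle_at(6.0, 2.5, 20))                        # ~0.35
```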

11.
NMR-based screening has become a powerful method for the identification and analysis of low-molecular weight organic compounds that bind to protein targets and can be utilized in drug discovery programs. In particular, heteronuclear NMR-based screening can yield information about both the affinity and binding location of potential lead compounds. In addition, heteronuclear NMR-based screening has wide applications in complementing and facilitating conventional high-throughput screening programs. This article will describe several strategies for the integration of NMR-based screening and high-throughput screening. The marriage of these two techniques promises to be of tremendous benefit in the triage of hits that come from HTS, and can aid the medicinal chemist in the identification of quality leads that have high potential for further optimization.

12.
High-throughput screening (HTS) of large compound collections typically results in numerous small molecule hits that must be carefully evaluated to identify valid drug leads. Although several filtering mechanisms and other tools exist that can assist the chemist in this process, it is often the case that costly synthetic resources are expended in pursuing false positives. We report here a rapid and reliable NMR-based method for identifying reactive false positives including those that oxidize or alkylate a protein target. Importantly, the reactive species need not be the parent compound, as both reactive impurities and breakdown products can be detected. The assay is called ALARM NMR (a La assay to detect reactive molecules by nuclear magnetic resonance) and is based on monitoring DTT-dependent 13C chemical shift changes of the human La antigen in the presence of a test compound or mixture. Extensive validation has been performed to demonstrate the reliability and utility of using ALARM NMR to assess thiol reactivity. This included comparing ALARM NMR to a glutathione-based fluorescence assay, as well as testing a collection of more than 3500 compounds containing HTS hits from 23 drug targets. The data show that current in silico filtering tools fail to identify more than half of the compounds that can act via reactive mechanisms. Significantly, we show how ALARM NMR data has been critical in identifying reactive compounds that would otherwise have been prioritized for lead optimization. In addition, a new filtering tool has been developed on the basis of the ALARM NMR data that can augment current in silico programs for identifying nuisance compounds and improving the process of hit triage.

13.
The completion of the human genome project has opened novel scientific avenues in functional genomics, structural genomics and proteomics. These areas have a common goal: the identification of all the proteins acting and cross-talking in a single cell at a defined moment of its lifecycle. The expansion of these areas in bioscience has been facilitated by the rapid development of high throughput screening (HTS) methods, which has, in turn, attracted the business community to make investments in this novel business segment of biotechnology. By using these HTS methods, the hope is that novel targets will be validated much more rapidly, speeding up the development of novel drugs. Numerous techniques and tools have emerged over the past decade for the identification of small target-specific molecular ligands that exploit a common feature: the exploration of molecular diversity using combinatorial methods. While chemists developed new methods for rapidly and efficiently synthesising and screening large collections of small molecules, biologists used recombinant DNA techniques for selecting displayed repertoires. To this end, the discovery of new low molecular weight peptides is becoming increasingly important, not only as molecular tools for the understanding of protein-protein interactions but also for the generation of lead compounds.

14.
While many large publicly accessible databases provide excellent annotation for biological macromolecules, the same is not true for small chemical compounds. Commercial data sources also fail to encompass an annotation interface for large numbers of compounds and tend to be too costly to be widely available to biomedical researchers. Therefore, using annotation information for the selection of lead compounds from a modern-day high-throughput screening (HTS) campaign presently occurs only on a very limited scale. The recent rapid expansion of the NIH PubChem database provides an opportunity to link existing biological databases with compound catalogs and provides relevant information that potentially could improve the information garnered from large-scale screening efforts. Using the 2.5 million compound collection at the Genomics Institute of the Novartis Research Foundation (GNF) as a model, we determined that approximately 4% of the library contained compounds with potential annotation in such databases as PubChem and the World Drug Index (WDI) as well as related databases such as the Kyoto Encyclopedia of Genes and Genomes (KEGG) and ChemIDplus. Furthermore, the exact structure match analysis showed that 32% of GNF compounds can be linked to third party databases via PubChem. We also showed that annotations such as MeSH (Medical Subject Headings) terms can be applied to in-house HTS databases in identifying signature biological inhibition profiles of interest as well as expediting the assay validation process. The automated annotation of thousands of screening hits in batch is becoming feasible and has the potential to play an essential role in the hit-to-lead decision making process.
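A minimal sketch of the exact-structure-match step is given below, assuming structures are linked through InChIKeys computed with RDKit and a hypothetical pre-extracted annotation table (the paper links through PubChem; file and column names here are illustrative only).

```python
import pandas as pd
from rdkit import Chem

def inchikey(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToInchiKey(mol) if mol is not None else None

library = pd.read_csv("inhouse_library.csv")       # assumed columns: compound_id, smiles
annotations = pd.read_csv("annotations.csv")       # assumed columns: inchikey, mesh_terms, source

library["inchikey"] = library["smiles"].map(inchikey)
linked = library.merge(annotations, on="inchikey", how="inner")
print(f"{len(linked)} of {len(library)} library compounds matched an annotated structure")
```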

15.
Integration of flexible data-analysis tools with cheminformatics methods is a prerequisite for successful identification and validation of “hits” in high-throughput screening (HTS) campaigns. We have designed, developed, and implemented a suite of robust yet flexible cheminformatics tools to support HTS activities at the Broad Institute, three of which are described herein. The “hit-calling” tool allows a researcher to set a hit threshold that can be varied during downstream analysis. The results from the hit-calling exercise are reported to a database for record keeping and further data analysis. The “cherry-picking” tool enables creation of an optimized list of hits for confirmatory and follow-up assays from an HTS hit list. This tool allows filtering by computed chemical property and by substructure. In addition, similarity searches can be performed on hits of interest and sets of related compounds can be selected. The third tool, an “S/SAR viewer,” has been designed specifically for the Broad Institute’s diversity-oriented synthesis (DOS) collection. The compounds in this collection are rich in chiral centers and the full complement of all possible stereoisomers of a given compound are present in the collection. The S/SAR viewer allows rapid identification of both structure/activity relationships and stereo-structure/activity relationships present in HTS data from the DOS collection. Together, these tools enable the prioritization and analysis of hits from diverse compound collections, and enable informed decisions for follow-up biology and chemistry efforts.
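As a rough sketch of the first tool, hit calling with a threshold that can be revisited during later analysis might look like this (column names and cutoffs are assumptions, not the Broad Institute's implementation):

```python
import pandas as pd

def call_hits(plate_data: pd.DataFrame, threshold: float = 50.0) -> pd.DataFrame:
    """Flag wells whose percent inhibition meets or exceeds the chosen threshold."""
    out = plate_data.copy()
    out["is_hit"] = out["pct_inhibition"] >= threshold
    return out

primary = pd.read_csv("primary_screen.csv")        # assumed columns: compound_id, pct_inhibition
for cutoff in (30, 50, 70):                        # see how the hit list shrinks as the bar is raised
    n_hits = int(call_hits(primary, cutoff)["is_hit"].sum())
    print(f"threshold {cutoff}%: {n_hits} hits")
```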

16.
The time-limiting step in HTS is often the development of an appropriate assay. In addition, hits from HTS fairly often turn out to be false positives and generally display unfavorable properties for further development. Here we describe an alternative process for hit generation, applied to the human adipocyte fatty acid binding protein FABP4. A small molecular ligand for FABP4 that blocks the binding of endogenous ligands may be developed into a drug for the treatment of type-2 diabetes. Using NMR spectroscopy, we screened FABP4 for low-affinity binders in a diversity library consisting of small soluble scaffolds, which yielded 52 initial hits in total. The potencies of these hits were ranked, and crystal structures of FABP4 complexes for two of the hits were obtained. The structural data were subsequently used to direct similarity searches for available analogues, as well as chemical synthesis of 12 novel analogues. In this way, a series of three selective FABP4 ligands with attractive pharmacochemical profiles and potencies of 10 μM or better was obtained.

17.
A methodology is introduced to assign energy-based scores to two-dimensional (2D) structural features based on three-dimensional (3D) ligand-target interaction information and utilize interaction-annotated features in virtual screening. Database molecules containing such fragments are assigned cumulative scores that serve as a measure of similarity to active reference compounds. The Interaction Annotated Structural Features (IASF) method is applied to mine five high-throughput screening (HTS) data sets and often identifies more hits than conventional fragment-based similarity searching or ligand-protein docking.
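The cumulative scoring idea can be sketched as follows: each database molecule sums the scores of the annotated 2D fragments it contains, and the totals are used for ranking against the actives. The SMARTS patterns and score values below are placeholders, not the published interaction-annotated feature set.

```python
from rdkit import Chem

# fragment SMARTS -> interaction-derived score (placeholder values)
annotated_fragments = {
    "c1ccccc1O": 1.8,        # phenol-like feature
    "C(=O)N": 1.2,           # amide
    "S(=O)(=O)N": 2.1,       # sulfonamide
}
patterns = {Chem.MolFromSmarts(s): w for s, w in annotated_fragments.items()}

def cumulative_fragment_score(smiles):
    """Sum the scores of all annotated fragments present in the molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return 0.0
    return sum(w for patt, w in patterns.items() if mol.HasSubstructMatch(patt))

for smi in ("CC(=O)Nc1ccc(O)cc1", "CCS(=O)(=O)Nc1ccccc1"):
    print(smi, cumulative_fragment_score(smi))
```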

18.
High throughput screening (HTS) has emerged as an important technique for allowing researchers to rapidly profile very large numbers of chemicals against drug targets. As recent and future advances make HTS cheaper to perform on even larger scales, the amount of data that has to be processed, analyzed, and searched will only grow larger in size and harder for researchers to sift through manually. It is therefore an unavoidable requirement that institutions utilizing HTS technology will need to begin looking for effective solutions in the maturing area of laboratory information management systems (LIMS), as many other types of labs have already done. K-Screen is one such solution. Our initial goal with K-Screen was to have an integrated application environment that supported data analysis, management, and presentation, so that we could efficiently perform client-requested screens and searches and generate detailed reports on the results. Previously, we had attempted but failed to locate an existing software suite that sufficiently addressed all our requirements. K-Screen is a web-accessible application that can host a large chemical structure library; process and store single-dose (primary) and dose-response (secondary) screening data; and perform searches based on screening results, plate coordinates, structure, substructure, and structure similarity. It uses heat maps and histograms to visualize screen- or plate-level statistics. Interfaces to external searches against the PubChem and ZINC databases are also provided. We feel that these features make K-Screen a practical and effective alternative to other commercial or academic HTS LIMS systems.

19.
A process for objective identification and filtering of undesirable compounds that contribute to high-throughput screening (HTS) deck promiscuity is described. Two methods of mapping hit promiscuity have been developed linking SMARTS-based structural queries with historical primary HTS data. The first compares an expected assay hit rate to actual hit rates. The second examines the propensity of an individual compound to hit multiple assays. Statistical evaluation of the data indicates a correlation between the resultant functional group filters and compound promiscuity. These data corroborate a number of commonly applied filters as well as producing some unexpected results. Application of these models to HTS collection triage reduced the number of in-house compounds considered for screening by 12%. The implications of these findings are further discussed in the context of the HTS screening set and combinatorial library design as well as compound acquisition.
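The first promiscuity-mapping method (expected versus actual hit rate per structural query) can be sketched roughly as below; the SMARTS filters, file layout, and choice of a one-sided binomial test are illustrative assumptions rather than the authors' exact procedure.

```python
import pandas as pd
from rdkit import Chem
from scipy.stats import binomtest

deck = pd.read_csv("hts_history.csv")              # assumed columns: smiles, is_hit (0/1)
expected_rate = deck["is_hit"].mean()              # deck-wide ("expected") primary hit rate

filters = {"quinone": "O=C1C=CC(=O)C=C1", "aldehyde": "[CX3H1]=O"}
for name, smarts in filters.items():
    patt = Chem.MolFromSmarts(smarts)

    def matches(s, patt=patt):
        mol = Chem.MolFromSmiles(s)
        return mol is not None and mol.HasSubstructMatch(patt)

    subset = deck.loc[deck["smiles"].map(matches), "is_hit"]
    if len(subset) == 0:
        continue
    test = binomtest(int(subset.sum()), len(subset), expected_rate, alternative="greater")
    print(f"{name}: actual {subset.mean():.3f} vs expected {expected_rate:.3f} (p={test.pvalue:.2e})")
```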

20.
High throughput screening (HTS) campaigns, where laboratory automation is used to expose biological targets to large numbers of materials from corporate compound collections, have become commonplace within the lead generation phase of pharmaceutical discovery. Advances in genomics and related fields have afforded a wealth of targets such that screening facilities at larger organizations routinely execute over 100 hit-finding campaigns per year. Often, 10^5 or 10^6 molecules will be tested within a campaign/cycle to locate a large number of actives requiring follow-up investigation. Due to resource constraints at every organization, traditional chemistry methods for validating hits and developing structure activity relationships (SAR) become untenable when challenged with hundreds of hits in multiple chemical families per target. To compound the issue, comparison and prioritization of hits across multiple screens, or against physicochemical property criteria, is made more complex by the informatics issues associated with handling large data sets. This article describes a collaborative research project designed to simultaneously leverage the medicinal chemistry and drug development expertise of the Novartis Institutes for Biomedical Research Inc. (NIBRI) and ArQule Inc.'s high throughput library design, synthesis and purification capabilities. The work processes developed by the team to efficiently design, prepare, purify, assess and prioritize multiple chemical classes that were identified during high throughput screening, cheminformatics and molecular modeling activities will be detailed.

