Similar literature
20 similar documents were retrieved (search time: 62 ms).
1.
A tandem mass spectral database system consists of a library of reference spectra and a search program. State‐of‐the‐art search programs show a high tolerance for variability in compound‐specific fragmentation patterns produced by collision‐induced decomposition and enable sensitive and specific ‘identity search’. In this communication, performance characteristics of two search algorithms combined with the ‘Wiley Registry of Tandem Mass Spectral Data, MSforID’ (Wiley Registry MSMS, John Wiley and Sons, Hoboken, NJ, USA) were evaluated. The search algorithms tested were the MSMS search algorithm implemented in the NIST MS Search program 2.0g (NIST, Gaithersburg, MD, USA) and the MSforID algorithm (John Wiley and Sons, Hoboken, NJ, USA). Sample spectra were acquired on different instruments and, thus, covered a broad range of possible experimental conditions or were generated in silico. For each algorithm, more than 30 000 matches were performed. Statistical evaluation of the library search results revealed that principally both search algorithms can be combined with the Wiley Registry MSMS to create a reliable identification tool. It appears, however, that a higher degree of spectral similarity is necessary to obtain a correct match with the NIST MS Search program. This characteristic of the NIST MS Search program has a positive effect on specificity as it helps to avoid false positive matches (type I errors), but reduces sensitivity. Thus, particularly with sample spectra acquired on instruments differing in their setup from tandem‐in‐space type fragmentation, a comparably higher number of false negative matches (type II errors) were observed by searching the Wiley Registry MSMS. Copyright © 2013 John Wiley & Sons, Ltd.
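Neither the NIST MSMS search scoring nor the MSforID match algorithm is described in detail in this abstract, so the following Python sketch only illustrates the general idea of an 'identity search' against a spectral library using a binned cosine (dot-product) similarity; the peak lists, bin width, and acceptance threshold are illustrative assumptions, not values from either program.

```python
import math

def binned_vector(peaks, bin_width=1.0):
    """Bin (m/z, intensity) pairs into a sparse vector keyed by bin index."""
    vec = {}
    for mz, inten in peaks:
        idx = int(mz / bin_width)
        vec[idx] = vec.get(idx, 0.0) + inten
    return vec

def cosine_similarity(a, b):
    """Cosine score between two sparse intensity vectors (0 = no overlap, 1 = identical)."""
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def identity_search(query_peaks, library, min_score=0.6):
    """Rank library entries by spectral similarity to the query spectrum.

    `library` maps compound names to peak lists; the 0.6 acceptance
    threshold is an arbitrary illustrative choice, not a value from
    either search program evaluated in the study.
    """
    q = binned_vector(query_peaks)
    scores = [(name, cosine_similarity(q, binned_vector(peaks)))
              for name, peaks in library.items()]
    scores.sort(key=lambda t: t[1], reverse=True)
    return [(name, s) for name, s in scores if s >= min_score]

# Toy usage with made-up spectra.
library = {
    "compound A": [(91.05, 100.0), (119.05, 45.0), (162.10, 20.0)],
    "compound B": [(72.08, 100.0), (105.07, 60.0)],
}
query = [(91.06, 95.0), (119.04, 50.0), (162.11, 15.0)]
print(identity_search(query, library))
```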

2.
The search for the global minimum associated with molecular electronic structure and chemical bonding has received wide attention through theoretical calculations at various levels of theory. The particle swarm optimization (PSO) algorithm and modified PSO have been used to predict the energetically stable and metastable states associated with a given chemical composition. Among techniques such as genetic algorithms, basin hopping, simulated annealing, and PSO, PSO is considered one of the most suitable methods owing to its advantages over the others. We use a swarm-intelligence-based parallel code to improve a PSO algorithm in a multidimensional search space, augmented by quantum chemical calculations on gas-phase structures at 0 K without any symmetry constraint, to obtain an optimal solution. The code is interfaced with the Gaussian software for single-point energy calculations and is shown to be efficient. A small population in the multidimensional space is often sufficient to obtain better results at lower computational cost than a typically larger population, although larger systems and larger numbers of particles can also be handled. We have also analyzed how arbitrary, random, and local-minimum-energy structures gravitate toward the target global-minimum structure, and we compare our results with those obtained from other evolutionary techniques.
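The modifications to PSO made in this work are not spelled out in the abstract; the sketch below is only a minimal, generic global-best PSO over a continuous search space, with a Rastrigin test function standing in for the quantum chemical single-point energy call. The inertia weight, acceleration coefficients, swarm size, and bounds are assumed values, not parameters from the paper.

```python
import random
import math

def rastrigin(x):
    """Stand-in objective; in the paper's workflow this would be a single-point energy."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def pso(objective, dim, n_particles=20, iters=200, bounds=(-5.12, 5.12),
        w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO; returns the best position and objective value found."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

print(pso(rastrigin, dim=3))
```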

3.
In this work, the Simulated Annealing (SA) and Particle Swarm Optimization (PSO) algorithms were employed to model liquid–liquid equilibrium (LLE) data. For this purpose, several strategies for stochastic algorithms were first investigated on common test functions and then used in the LLE parameter estimation procedure. The flash calculation strategy was based on the isoactivity criterion combined with a phase stability test and an interpolation function for the initial estimate, to improve the reliability of the phase equilibrium calculations. It is shown that both the SA and PSO algorithms were capable of estimating the parameters of models describing the liquid–liquid phase behavior of binary and multicomponent systems with a good representation of the experimental data.

4.
An evolutionary algorithm (EA) using a graph-based data structure to explore the molecular constitution space is presented. The EA implementation proves to be a promising alternative to deterministic approaches to the problem of computer-assisted structure elucidation (CASE). While not relying on any external database, the EA-guided CASE program SENECA is able to find correct solutions within calculation times comparable to those of other CASE expert systems. The implementation presented here significantly expands the size limit of constitutional optimization problems treatable with evolutionary algorithms by introducing novel, efficient graph-based genetic operators. The new EA-based search strategy is discussed, including the underlying data structures, component design, parameter optimization, and evolution process control. Typical structure elucidation examples are given to demonstrate the algorithm's performance.

5.
Type II diabetes was diagnosed by Fourier transform mid-infrared (FTMIR) attenuated total reflection (ATR) spectroscopy in combination with a support vector machine (SVM). Spectra of serum samples from 65 patients with clinically confirmed type II diabetes mellitus and 55 healthy volunteers were acquired by ATR-FTMIR and pretreated with three methods (Savitzky–Golay smoothing, multiple scattering correction, and wavelet transform algorithms) to reduce interfering information before establishing the SVM models. The SVM parameters (penalty factor C and kernel function parameter gamma) were optimized to improve the generalization ability of the models; a grid search (GS), a genetic algorithm (GA), and a particle swarm optimization (PSO) algorithm were used to find the optimal parameter values. The maximum accuracies were 95.74, 97.87, and 89.36% for the GS-, GA-, and PSO-optimized models, respectively; the maximum sensitivities were 96, 100, and 92%, and the maximum specificities were 95.45, 95.45, and 86.36%. The results indicated that the accuracy of type II diabetes diagnosis was improved by using the GS, GA, and PSO algorithms to optimize the SVM parameters, with the GA found to be slightly better than the GS and PSO. The experiment confirmed that the combination of ATR-FTMIR spectroscopy and SVM can rapidly and accurately diagnose type II diabetes without reagents.
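As a hedged illustration of the grid-search (GS) step described above (the GA and PSO variants are not shown), the snippet below tunes the SVM penalty factor C and RBF kernel gamma with scikit-learn; the synthetic data, parameter grid, and cross-validation settings are assumptions rather than the values used in the study, where preprocessed ATR-FTMIR spectra were the inputs.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the spectral dataset (120 samples, 50 features).
X, y = make_classification(n_samples=120, n_features=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Assumed search grid over the penalty factor C and RBF kernel gamma.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)

print("best (C, gamma):", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_test, y_test))
```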

6.
Optimizing the size and configuration of combinatorial libraries
This paper addresses a major issue in library design, namely how to efficiently optimize the library size (number of products) and configuration (number of reagents at each position) simultaneously with other properties such as diversity, cost, and drug-like physicochemical property profiles. These objectives are often in competition, for example, minimizing the number of reactants while simultaneously maximizing diversity, and thus present difficulties for traditional optimization methods such as genetic algorithms and simulated annealing. Here, a multiobjective genetic algorithm (MOGA) is used to vary library size and configuration simultaneously with other library properties. The result is a family of solutions that explores the tradeoffs in the objectives. This is achieved without the need to assign relative weights to the objectives. The user is then able to make an informed choice on an appropriate compromise solution. The method has been applied to two different virtual libraries: a two-component aminothiazole library and a four-component benzodiazepine library.
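The MOGA itself is not detailed in the abstract; the sketch below only illustrates the Pareto-dominance bookkeeping that allows a multiobjective optimizer to return a family of trade-off solutions without assigning relative weights to the objectives. The candidate "designs" and their (negative diversity, cost) objective values are hypothetical.

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of (label, objectives) pairs."""
    front = []
    for label, obj in solutions:
        if not any(dominates(other, obj) for _, other in solutions if other != obj):
            front.append((label, obj))
    return front

# Hypothetical library designs scored on (-diversity, cost); both are minimized.
candidates = [
    ("design 1", (-0.80, 120.0)),
    ("design 2", (-0.75, 90.0)),
    ("design 3", (-0.60, 150.0)),   # dominated by designs 1 and 2
    ("design 4", (-0.85, 200.0)),
]
print(pareto_front(candidates))
```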

7.
A novel method for predicting RNA secondary structure is proposed based on particle swarm optimization (PSO). PSO is known to be effective in solving many different types of optimization problems and for being able to approximate globally optimal results in the solution space. We designed an efficient objective function based on the minimum free energy, the number of selected stems, and the average length of the selected stems. We calculated how many legal stems there were in the sequence,...
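The paper's actual objective function and weights are not given in this (truncated) abstract, so the function below is only a hypothetical sketch of a PSO objective combining total free energy, the number of selected stems, and their average length; the field names and weights are placeholders.

```python
def stem_objective(stems, w_energy=1.0, w_count=0.5, w_length=0.5):
    """Score a candidate set of stems for PSO minimization.

    Each stem is a dict with 'energy' (free-energy contribution in kcal/mol,
    negative is stabilizing) and 'length' (number of base pairs). Lower
    scores are better; the weights are illustrative placeholders, not the
    values used in the cited work.
    """
    if not stems:
        return float("inf")
    total_energy = sum(s["energy"] for s in stems)
    avg_length = sum(s["length"] for s in stems) / len(stems)
    # Reward low free energy, more stems, and longer average stem length.
    return w_energy * total_energy - w_count * len(stems) - w_length * avg_length

candidate = [{"energy": -5.2, "length": 6}, {"energy": -3.1, "length": 4}]
print(stem_objective(candidate))
```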

8.
Hamilton paths, or Hamiltonian paths, are walks on a lattice which visit each site exactly once. They have been proposed as models of globular proteins and of compact polymers. A previously published algorithm [Mansfield, Macromolecules 27, 5924 (1994)] for sampling Hamilton paths on simple square and simple cubic lattices is tested for bias and for efficiency. Because the algorithm is a Metropolis Monte Carlo technique obviously satisfying detailed balance, we need only demonstrate ergodicity to ensure unbiased sampling. Two different tests for ergodicity (exact enumeration on small lattices, nonexhaustive enumeration on larger lattices) demonstrate ergodicity unequivocally for small lattices and provide strong support for ergodicity on larger lattices. Two other sampling algorithms [Ramakrishnan et al., J. Chem. Phys. 103, 7592 (1995); Lua et al., Polymer 45, 717 (2004)] are both known to produce biases on both 2×2×2 and 3×3×3 lattices, but it is shown here that the current algorithm gives unbiased sampling on these same lattices. Successive Hamilton paths are strongly correlated, so that many iterations are required between statistically independent samples. Rules for estimating the number of iterations needed to dissipate these correlations are given. However, the iteration time is so fast that the efficiency is still very good except on extremely large lattices. For example, even on lattices of total size 10×10×10 we are able to generate tens of thousands of uncorrelated Hamilton paths per hour of CPU time.

9.
This work reveals a computational framework for parallel electrophoretic separation of complex biological macromolecules and model urinary metabolites. More specifically, the implementation of a particle swarm optimization (PSO) algorithm on a neural network platform for multiparameter optimization of multiplexed 24-capillary electrophoresis technology with UV detection is highlighted. Two experimental systems were examined: (1) separation of purified rabbit metallothioneins and (2) separation of model toluene urinary metabolites and selected organic acids. Results proved superior to the use of neural networks employing standard back propagation when examining training error, fitting response, and predictive abilities. Simulation runs were obtained as a result of metaheuristic examination of the global search space, with experimental responses in good agreement with predicted values. Full separation of selected analytes was realized after employing optimal model conditions. This framework provides guidance for the application of metaheuristic computational tools to aid in future studies involving parallel chemical separation and screening. Adaptable pseudo-code is provided to enable users of varied software packages and modeling frameworks to implement the PSO algorithm for their desired use.

10.
We present a novel computer algorithm, called GLARE (Global Library Assessment of REagents), that addresses the issue of optimal reagent selection in combinatorial library design. This program reduces or eliminates the time a medicinal chemist spends examining reagents which a priori cannot be part of a "good" library. Our approach takes the large reagent sets returned by standard chemical database queries and produces often considerably reduced reagent sets that are well-behaved with respect to a specific template. The pruning enforces "goodness" constraints such as the Lipinski rule of five on the product properties, such that any reagent selection from the resulting sets produces only "good" products. The algorithm we implemented has three important features: (i) as opposed to genetic algorithms or other stochastic algorithms, GLARE uses a deterministic greedy procedure that smoothly filters out nonviable reagents; (ii) the pruning method can be biased to produce reagent sets with a balanced size, conserving proportionally more reagents in smaller sets; (iii) for very large combinatorial libraries, a partitioning scheme allows libraries as large as 10^12 to be evaluated in 0.25 s on an IBM AMD Opteron processor. The algorithm is validated on a diverse set of 12 libraries. The results we obtained show excellent compliance with the product property requirements and very fast timings.
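GLARE's partitioning scheme and balanced pruning are not reproduced here; the snippet below is a much simplified, hypothetical sketch of the core greedy idea only: iteratively discard the worst-scoring reagent until every remaining reagent yields a high fraction of "good" products, using an additive molecular-weight bound as a stand-in for the Lipinski-style product constraints. Reagent names, weights, cutoff, and threshold are all illustrative.

```python
def good_fraction(reagent, partners, mw_cutoff=500.0):
    """Fraction of products formed with `reagent` that stay under the MW cutoff.

    Product MW is approximated as the sum of the two reagent contributions;
    the cutoff mimics one Lipinski-style property bound.
    """
    ok = sum(1 for p in partners if reagent["mw"] + p["mw"] <= mw_cutoff)
    return ok / len(partners) if partners else 0.0

def greedy_prune(set_a, set_b, threshold=0.9, mw_cutoff=500.0):
    """Repeatedly drop the worst-scoring reagent until every remaining reagent
    yields at least `threshold` good products (a simplified GLARE-like loop)."""
    a, b = list(set_a), list(set_b)
    while a and b:
        scored = ([("a", r, good_fraction(r, b, mw_cutoff)) for r in a] +
                  [("b", r, good_fraction(r, a, mw_cutoff)) for r in b])
        side, worst, score = min(scored, key=lambda t: t[2])
        if score >= threshold:
            break  # every remaining reagent is acceptable
        (a if side == "a" else b).remove(worst)
    return a, b

# Hypothetical reagent pools with molecular-weight contributions.
acids = [{"name": "acid1", "mw": 150}, {"name": "acid2", "mw": 320}, {"name": "acid3", "mw": 180}]
amines = [{"name": "amine1", "mw": 120}, {"name": "amine2", "mw": 260}, {"name": "amine3", "mw": 140}]
print(greedy_prune(acids, amines))
```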

11.
This article presents the results obtained using an unbiased Population Based Search (PBS) for optimizing Lennard-Jones clusters. PBS is able to repeatedly obtain all putative global minima, for Lennard-Jones clusters in the range 2 ≤ N ≤ 372, as reported in the Cambridge Cluster Database. The PBS algorithm incorporates and extends key techniques that have been developed in other Lennard-Jones optimization algorithms over the last decade. Of particular importance are the use of cut-and-paste operators, structure niching (using the cluster strain energy as a metric), two-phase local search, and a new operator, Directed Optimization, which extends the previous concept of directed mutation. In addition, PBS is able to operate in a parallel mode for optimizing larger clusters.

12.
Tauler R, Analytica Chimica Acta, 2007, 595(1-2): 289-298
Although alternating least squares algorithms have proven extremely useful and flexible for solving multivariate curve resolution problems, other approaches based on non-linear optimization algorithms using non-linear constraints are possible. Once the subspaces defined by the PCA solutions are identified, appropriate rotation and perturbation of these solutions can produce solutions fulfilling the constraints imposed by the physical nature of the investigated systems. An optimization algorithm based on the fulfillment of such constraints is proposed to perform this rotation, and some examples of its application in chemistry and environmental chemistry are given. It is shown that the solutions obtained by alternating least squares and by the newly proposed algorithm are rather similar, and that both lie within the boundaries of the band of feasible solutions obtained by an algorithm previously developed to estimate them.

13.
14.
Parameter estimation for vapor–liquid equilibrium (VLE) data modeling plays an important role in the design, optimization, and control of separation units. This optimization problem is very challenging due to the high non-linearity of thermodynamic models. Recently, several stochastic optimization methods such as Differential Evolution with Tabu List (DETL) and Particle Swarm Optimization (PSO) have evolved as alternative and reliable strategies for solving global optimization problems, including parameter estimation in thermodynamic models. However, these methods have not been applied and compared with other stochastic strategies such as Simulated Annealing (SA), Differential Evolution (DE), and the Genetic Algorithm (GA) in the context of parameter estimation for VLE data modeling. Therefore, in this study several stochastic optimization methods are applied to solve parameter estimation problems for VLE modeling using both the classical least squares and maximum likelihood approaches. Specifically, we have tested and compared the reliability and efficiency of SA, GA, DE, DETL, and PSO for modeling several binary VLE data sets using local composition models. These methods were also tested on benchmark problems for global optimization. Our results show that the effectiveness of these stochastic methods varies significantly between the different tested problems and also depends on the stopping criterion, especially for SA, GA, and PSO. Overall, DE and DETL have better performance for solving the parameter estimation problems in VLE data modeling.
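As a hedged sketch of the least-squares route with a stochastic optimizer (not the authors' actual thermodynamic models or data), the example below fits the two parameters of a binary Margules activity-coefficient model to synthetic "experimental" activity coefficients using SciPy's differential evolution; the model choice, noise level, and parameter bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def margules_gamma(x1, a12, a21):
    """Two-parameter Margules activity coefficients for a binary mixture."""
    x2 = 1.0 - x1
    ln_g1 = x2**2 * (a12 + 2.0 * (a21 - a12) * x1)
    ln_g2 = x1**2 * (a21 + 2.0 * (a12 - a21) * x2)
    return np.exp(ln_g1), np.exp(ln_g2)

# Synthetic "experimental" data generated with A12 = 1.2, A21 = 0.8 plus 1% noise.
rng = np.random.default_rng(0)
x1_exp = np.linspace(0.1, 0.9, 9)
g1_true, g2_true = margules_gamma(x1_exp, 1.2, 0.8)
g1_exp = g1_true * (1.0 + 0.01 * rng.standard_normal(x1_exp.size))
g2_exp = g2_true * (1.0 + 0.01 * rng.standard_normal(x1_exp.size))

def least_squares_objective(params):
    """Sum of squared relative deviations between measured and calculated gammas."""
    g1_calc, g2_calc = margules_gamma(x1_exp, *params)
    return np.sum(((g1_exp - g1_calc) / g1_exp) ** 2 +
                  ((g2_exp - g2_calc) / g2_exp) ** 2)

result = differential_evolution(least_squares_objective, bounds=[(-2, 2), (-2, 2)], seed=1)
print("fitted (A12, A21):", result.x, "objective:", result.fun)
```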

15.
The pharmacophore concept is of central importance in computer-aided drug design (CADD), mainly because of its successful application in medicinal chemistry and, in particular, high-throughput virtual screening (HTVS). The simplicity of the pharmacophore definition enables the complexity of molecular interactions between ligand and receptor to be reduced to a handful of features. With many pharmacophore screening software packages available, it is of the utmost interest to explore the behavior of these tools when applied to different biological systems. In this work, we present a comparative analysis of eight pharmacophore screening algorithms (Catalyst, Unity, LigandScout, Phase, Pharao, MOE, Pharmer, and POT) for their use in typical HTVS campaigns against four different biological targets using default settings. The results presented here show how the performance of each pharmacophore screening tool can be related to factors such as the characteristics of the binding pocket, the use of specific pharmacophore features, and the use of these techniques in specific steps or contexts of the drug discovery pipeline. Algorithms with rmsd-based scoring functions are able to predict more compound poses correctly than algorithms with overlay-based scoring functions. However, the ratio of correctly to incorrectly predicted compound poses is better for overlay-based scoring functions, which also ensure better performance in compound library enrichment. While these observations can be used to choose the most appropriate class of algorithm for a specific virtual screening project, we noted that pharmacophore algorithms are often equally good, and in this respect we also analyzed how pharmacophore algorithms can be combined to increase the success of hit compound identification. This study provides a valuable benchmark set for further developments in the field of pharmacophore search algorithms, e.g., by using pose predictions and compound library enrichment criteria.

16.
The adaptation of novel techniques developed in the field of computational chemistry to solve problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the nearest local minimum, is reported. Its efficiency as a metaheuristic approach is compared with Gradient Tabu Search and with the Gravitational Search, Cuckoo Search, and Backtracking Search algorithms for global optimization. Moreover, the GGS approach has been applied to computational chemistry problems, namely finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models at an efficient computational cost. © 2015 Wiley Periodicals, Inc.

17.
Journal of the Indian Chemical Society, 2021, 98(12): 100241
Process optimization in a mixer-settler is of great importance. Particle swarm optimization (PSO) is an evolutionary algorithm used in many fields to solve optimization problems. In this study, the optimal condition in terms of the number of baffles, inlet fluid velocity, and impeller speed is determined for a mixer-settler of specific dimensions, which can be extended to industrial dimensions, by coupling the PSO algorithm with a finite volume solution of the Navier-Stokes equations and the standard k-ε turbulence model.

18.
The voluminous utilization of plate and frame heat exchangers (PFHE) in many industries has pushed both consumers and designers to optimize the exchanger's total cost. Over the last few years, several old- and new-generation algorithms have been employed to optimize PFHE cost. This study explores the application and performance of three new-generation algorithms, Big Bang-Big Crunch (BBBC), Grey Wolf Optimizer (GWO), and Water Evaporation Optimization (WEO), in the optimal design of PFHE. It also compares the performance of three well-established older algorithms, namely the genetic algorithm (genetics and natural selection), particle swarm optimization (animal behaviour), and differential evolution (population-based), with the three new algorithms for PFHE optimization. Seven design factors are chosen for PFHE optimization: exchanger length on the hot and cold sides, fin height and thickness, fin-strip length, fin frequency, and the number of hot-side layers. The applicability of the suggested algorithms is assessed using a case study based on published research. Although DE performs best in this design optimization study with respect to total cost and computational time, the three new-generation meta-heuristic algorithms BBBC, GWO, and WEO also offer novel scope for application in heat exchanger design optimization and successfully find the cost of the heat exchanger. According to this study, capital costs increase by 19.5% for BBBC, 24% for GWO, and 7.6% for WEO, whereas operational costs fall by 9.5% for BBBC and GWO compared with the best-performing algorithm (DE); WEO, on the other hand, shows a 32.6% increase in operational costs. A full analysis of the computing time of each algorithm is also provided: DE has the quickest run time at 0.09 s, while PSO takes the longest at 33.97 s, with the remaining algorithms showing nearly identical values. As a result, a useful comparison is established, offering a platform for designers and customers to make selections. Additionally, the three new-generation algorithms considered here had not previously been used for PFHE optimization, and the comparative study illustrates that each of them possesses great potential for cost optimization and for solving other complex problems.

19.
Protein-ligand docking can be formulated as a parameter optimization problem associated with an accurate scoring function, which aims to identify the translation, orientation, and conformation of a docked ligand with the lowest energy. The parameter optimization problem for highly flexible ligands with many rotatable bonds is more difficult than that for less flexible ligands using genetic algorithm (GA)-based approaches, due to the large number of parameters and the high correlations among them. This investigation presents a novel optimization algorithm, SODOCK, based on particle swarm optimization (PSO) for solving flexible protein-ligand docking problems. To improve the efficiency and robustness of PSO, an efficient local search strategy is incorporated into SODOCK. The implementation of SODOCK adopts the environment and energy function of AutoDock 3.05. Computer simulation results reveal that SODOCK is superior to the Lamarckian genetic algorithm (LGA) of AutoDock in terms of convergence performance, robustness, and obtained energy, especially for highly flexible ligands. The results also reveal that PSO is more suitable than the conventional GA for dealing with flexible docking problems with high correlations among parameters. This investigation also compared SODOCK with four state-of-the-art docking methods, namely GOLD 1.2, DOCK 4.0, FlexX 1.8, and the LGA of AutoDock 3.05. SODOCK obtained the smallest RMSD in 19 of 37 cases. The average of the 37 RMSD values for SODOCK, 2.29 Å, was better than those of the other docking programs, which were all above 3.0 Å.

20.