Full-text access type | Articles |
Paid full text | 1627 |
Free | 63 |
Free (domestic) | 27 |
Subject classification | Articles |
Chemistry | 342 |
Crystallography | 2 |
Mechanics | 68 |
General / interdisciplinary | 13 |
Mathematics | 792 |
Physics | 500 |
Publication year | Articles |
2024 | 13 |
2023 | 22 |
2022 | 27 |
2021 | 33 |
2020 | 29 |
2019 | 32 |
2018 | 35 |
2017 | 57 |
2016 | 47 |
2015 | 47 |
2014 | 103 |
2013 | 168 |
2012 | 99 |
2011 | 108 |
2010 | 96 |
2009 | 138 |
2008 | 87 |
2007 | 112 |
2006 | 75 |
2005 | 52 |
2004 | 49 |
2003 | 30 |
2002 | 31 |
2001 | 27 |
2000 | 16 |
1999 | 10 |
1998 | 19 |
1997 | 34 |
1996 | 20 |
1995 | 15 |
1994 | 8 |
1993 | 6 |
1992 | 9 |
1991 | 11 |
1990 | 6 |
1989 | 5 |
1988 | 4 |
1987 | 5 |
1986 | 6 |
1985 | 6 |
1983 | 2 |
1982 | 3 |
1980 | 2 |
1979 | 3 |
1978 | 2 |
1977 | 1 |
1974 | 1 |
1973 | 1 |
1969 | 1 |
1966 | 1 |
A total of 1,717 results found (search time: 0 ms)
111.
For screening purposes, two-level screening designs, such as fractional factorial (FF) and Plackett–Burman (PB) designs, are usually applied. These designs allow at most N−1 factors to be examined in N experiments. However, when many factors need to be examined, the number of experiments still becomes unfeasibly large. To reduce time and cost, a given number of factors can occasionally be examined in fewer experiments than the above screening designs require by using supersaturated designs, which examine more than N_SS−1 factors in N_SS experiments. In this review, different set-ups for constructing supersaturated designs are explained and discussed, followed by several possible ways of interpreting supersaturated-design results. Finally, some analytical applications of supersaturated designs are given.
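As a rough illustration of the idea (not taken from the review itself), one classical construction, often credited to Lin, takes a half-fraction of a Plackett–Burman design via a "branching" column, so that 10 candidate factors are screened in only 6 runs. The sketch below assumes this construction and uses NumPy only; the generator row is the standard one for the 12-run PB design.

```python
import numpy as np

# Standard generator row for the 12-run Plackett-Burman design.
generator = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

# 11 cyclic shifts of the generator plus a final row of -1's give the
# full 12-run, 11-factor Plackett-Burman design.
pb = np.array([np.roll(generator, shift) for shift in range(11)]
              + [-np.ones(11, dtype=int)])

# Choose one column as the branching column and keep only the runs where it
# equals +1; the remaining 10 columns form a supersaturated design with
# N_SS = 6 runs for 10 factors (more than N_SS - 1).
branch = 10
half = pb[pb[:, branch] == +1]
ssd = np.delete(half, branch, axis=1)

print(ssd.shape)   # (6, 10): 10 candidate factors screened in only 6 runs
```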
112.
Małgorzata Jakubowska, Electroanalysis, 2011, 23(3): 553-572
113.
The Interval Correlation Optimised Shifting algorithm (icoshift) has recently been introduced for the alignment of nuclear magnetic resonance spectra. The method is based on an insertion/deletion model to shift intervals of spectra/chromatograms and relies on an efficient Fast Fourier Transform based computation core that allows the alignment of large data sets in a few seconds on a standard personal computer. The potential of this programme for the alignment of chromatographic data is outlined with a focus on the model used for the correction function. The efficacy of the algorithm is demonstrated on a chromatographic data set of 45 chromatograms of 64,000 data points each. Computation time is significantly reduced compared to the Correlation Optimised Warping (COW) algorithm, which is widely used for the alignment of chromatographic signals. Moreover, icoshift proved to perform better than COW in terms of alignment quality (viz. simplicity and peak factor), but without the need for the computationally expensive optimisation of warping meta-parameters required by COW. Principal component analysis (PCA) is used to show how a significant reduction in data complexity was achieved, improving the ability to highlight chemical differences amongst the samples.
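The FFT-based correlation core that such alignment methods rely on can be sketched as follows. This is an illustrative re-implementation of the general idea (find the lag that maximises the cross-correlation of an interval against a reference, then shift), not the published icoshift code, and the edge-value padding is a simplified stand-in for its insertion/deletion model.

```python
import numpy as np

def best_shift(reference, segment, max_shift):
    """Return the integer shift (within +/- max_shift) that maximises the
    cross-correlation between a signal segment and the reference,
    computed in the frequency domain via the FFT."""
    n = len(reference)
    # Circular cross-correlation through the frequency domain.
    xcorr = np.fft.ifft(np.fft.fft(reference) * np.conj(np.fft.fft(segment, n))).real
    lags = np.concatenate([np.arange(0, max_shift + 1), np.arange(-max_shift, 0)])
    idx = np.concatenate([np.arange(0, max_shift + 1), np.arange(n - max_shift, n)])
    return lags[np.argmax(xcorr[idx])]

def align_interval(reference, segment, max_shift):
    """Shift the segment by the best lag, padding the exposed ends with the
    edge value (a simple stand-in for an insertion/deletion model)."""
    lag = best_shift(reference, segment, max_shift)
    shifted = np.roll(segment, lag)
    if lag > 0:
        shifted[:lag] = segment[0]
    elif lag < 0:
        shifted[lag:] = segment[-1]
    return shifted

# Toy usage: a Gaussian peak shifted by 7 points is realigned to the reference.
x = np.arange(200)
ref = np.exp(-0.5 * ((x - 100) / 5.0) ** 2)
seg = np.exp(-0.5 * ((x - 107) / 5.0) ** 2)
print(best_shift(ref, seg, max_shift=20))   # -> -7 (segment must move left by 7)
```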
114.
Emily Grace Armitage, Joanna Godzien, Vanesa Alonso-Herranz, Ángeles López-Gonzálvez, Coral Barbas, Electrophoresis, 2015, 36(24): 3050-3060
Missing values can arise from different origins, and depending on those origins they should be considered and handled in different ways. In this research, four imputation methods have been compared with respect to their effects on the normality and variance of the data, on statistical significance, and on the approximation of a suitable threshold for accepting missing data as truly missing. Additionally, different strategies for controlling the familywise error rate or false discovery rate have been evaluated, together with how they interact with the different missing-value imputation strategies. Missing values were found to affect the normality and variance of the data, and k-means nearest neighbour imputation was the best method tested for restoring them. Bonferroni correction was the best method for maximizing true positives and minimizing false positives, and it was observed that as low as 40% missing data could be truly missing. The range between 40 and 70% missing values was defined as a "gray area", and a strategy has therefore been proposed that balances the optimal imputation strategy (k-means nearest neighbour) against the best approximation for positioning real zeros.
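A minimal sketch of this kind of workflow is given below. It assumes scikit-learn's KNNImputer as a stand-in for the nearest-neighbour imputation discussed in the paper and a Bonferroni-corrected t-test for familywise error control; the data matrix and group labels are synthetic placeholders.

```python
import numpy as np
from sklearn.impute import KNNImputer
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical metabolomics-style matrix: 20 samples x 50 features, two groups
# of 10, with values knocked out at random to mimic missingness.
X = rng.lognormal(mean=2.0, sigma=0.3, size=(20, 50))
X[rng.random(X.shape) < 0.2] = np.nan
groups = np.array([0] * 10 + [1] * 10)

# Nearest-neighbour imputation: each missing entry is replaced by the mean of
# that feature over the k most similar samples.
X_imp = KNNImputer(n_neighbors=5).fit_transform(X)

# Feature-wise t-tests between the two groups, with Bonferroni control of the
# familywise error rate (alpha divided by the number of tests).
_, pvals = stats.ttest_ind(X_imp[groups == 0], X_imp[groups == 1], axis=0)
alpha = 0.05
significant = pvals < alpha / len(pvals)
print(f"{significant.sum()} features pass the Bonferroni threshold")
```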
115.
B.T. Luke, SAR and QSAR in Environmental Research, 2013, 24(1): 41-57
While quantitative structure–activity relationships attempt to predict the numerical value of the activity, statistically good predictors do not always do a good job of determining the activity qualitatively. This study shows how fuzzy classifiers can be used to generate fuzzy structure–activity relationships that more accurately determine whether a compound is highly inactive, moderately inactive or active, or highly active. Four examples of these classifiers are presented and applied to a well-studied activity dataset.
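As a generic illustration of fuzzy class membership over an activity scale (not the classifiers used in the study), the sketch below assigns each predicted activity a degree of membership in three hypothetical classes via trapezoidal membership functions; the cut points are invented for illustration.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12),
                              (d - x) / (d - c + 1e-12)), 0.0, 1.0)

# Hypothetical fuzzy classes over a normalised activity scale
# (0 = inactive, 1 = highly active); the cut points are illustrative only.
classes = {
    "highly inactive":            lambda x: trapezoid(x, -0.1, 0.0, 0.2, 0.4),
    "moderately inactive/active": lambda x: trapezoid(x,  0.2, 0.4, 0.6, 0.8),
    "highly active":              lambda x: trapezoid(x,  0.6, 0.8, 1.0, 1.1),
}

def classify(activity):
    """Return the fuzzy membership of a predicted activity in each class."""
    return {name: float(mu(activity)) for name, mu in classes.items()}

print(classify(0.7))   # partial membership in the middle and 'highly active' classes
```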
116.
117.
Combined neural network and reduced FRF techniques for slight damage detection using measured response data (total citations: 2; self-citations: 0; citations by others: 2)
This paper deals with structural damage detection using measured frequency response functions (FRFs) as input data to artificial neural networks (ANNs). A major obstacle, the impracticality of using full-size FRF data with ANNs, was circumvented by applying a data-reduction technique based on principal component analysis (PCA). The compressed FRFs, represented by their projection onto the most significant principal components, were used as the ANN input variables instead of the raw FRF data. The output is a prediction of the actual state of the specimen, i.e. healthy or damaged. A further advantage of this particular approach is its ability to deal with relatively high measurement noise, which is a common occurrence when dealing with industrial structures. The methodology was applied to detect three different states of a space antenna: reference, slight mass damage and slight stiffness damage. About 600 FRF measurements, each with 1024 spectral points, were included in the analysis. Six two-hidden-layer networks, each with an individually optimised architecture for a specific FRF reduction level, were used for damage detection. The results showed that it was possible to distinguish between the three states of the antenna with good accuracy, provided an adequate number of principal components was used together with a suitable neural network configuration. It was also found that the quality of the raw FRF data remained a major consideration, though the method was able to filter out some of the measurement noise. The convergence and detection properties of the networks were improved significantly by removing those FRFs associated with measurement errors. Received 9 March 2000; accepted for publication 12 December 2000
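The overall pipeline (PCA compression of FRFs followed by a small neural network classifier) can be sketched with scikit-learn. The data below are synthetic placeholders for the measured FRFs, and the network architecture is illustrative rather than the individually optimised configurations described in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Stand-in for the measured FRFs: 600 spectra x 1024 points, belonging to three
# structural states (0 = reference, 1 = mass damage, 2 = stiffness damage).
frf = rng.normal(size=(600, 1024))
state = rng.integers(0, 3, size=600)
frf += state[:, None] * 0.05          # small, state-dependent offset for the toy data

# PCA compresses each 1024-point FRF to a handful of principal-component scores,
# which then feed a small two-hidden-layer network, mirroring the structure
# (though not the tuning) described above.
model = make_pipeline(
    PCA(n_components=20),
    MLPClassifier(hidden_layer_sizes=(30, 15), max_iter=2000, random_state=0),
)
model.fit(frf[:450], state[:450])
print("hold-out accuracy:", model.score(frf[450:], state[450:]))
```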
118.
Tran Thu Ha, Nguyen Hong Phong, François-Xavier Le Dimet, Hong Son Hoang, Comptes Rendus Mécanique, 2019, 347(5): 423-444
This article presents a correction method for better estimation and prediction of pollution governed by Burgers' equations. The originality of the method lies in introducing an error function into the system's state equations to model uncertainty in the model. The initial conditions and diffusion coefficients present in the pollution and concentration equations, as well as those in the model-error equations, are estimated by solving a data assimilation problem. The efficiency of the correction method is compared with that of the traditional method without an error function. Three test cases are presented in order to compare the performance of the proposed methods: in the first two tests the reference is the analytical solution, and the last test is formulated as a "twin experiment". The numerical results confirm the important role of the model-error equation in improving the prediction capability of the system, in terms of both accuracy and speed of convergence.
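For orientation only, a toy forward solver for the 1-D viscous Burgers' equation is sketched below; it does not include the paper's model-error term or the data-assimilation step, and the discretisation (first-order upwind advection plus explicit diffusion) is chosen for brevity rather than accuracy.

```python
import numpy as np

# Minimal forward model: 1-D viscous Burgers' equation
#   u_t + u * u_x = nu * u_xx
# on a periodic grid, with first-order upwind advection and explicit diffusion.
def burgers_step(u, dx, dt, nu):
    um = np.roll(u, 1)      # u[i-1]
    up = np.roll(u, -1)     # u[i+1]
    adv = np.where(u > 0, u * (u - um) / dx, u * (up - u) / dx)
    diff = nu * (up - 2 * u + um) / dx**2
    return u + dt * (diff - adv)

nx, nu = 200, 0.01
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.5                                       # smooth initial condition
dt = 0.4 * min(dx / np.abs(u).max(), dx**2 / (2 * nu))    # conservative stability limit

for _ in range(500):
    u = burgers_step(u, dx, dt, nu)
print("mean, max after 500 steps:", u.mean(), u.max())
```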
119.
This work honors the 75th birthday of Professor Ionel Michael Navon by presenting original results highlighting the computational efficiency of the adjoint sensitivity analysis methodology for function-valued operator responses by means of an illustrative paradigm dissolver model. The dissolver model analyzed in this work has been selected because of its applicability to material separations and its potential role in diversion activities associated with proliferation and international safeguards. This dissolver model comprises eight active compartments in which the 16 time-dependent nonlinear differential equations modeling the physical and chemical processes involve 619 scalar and time-dependent model parameters related to the model's equation of state and inflow conditions. The most important response for the dissolver model is the time-dependent nitric acid concentration in the compartment furthest away from the inlet, where measurements are available at 307 time instances over the transient's duration of 10.5 h. The sensitivities of the acid concentrations at each of these time instances to all model parameters are computed efficiently by applying the adjoint sensitivity analysis methodology for operator-valued responses. The uncertainties in the model parameters are propagated using these sensitivities to compute the uncertainties in the computed responses. A predictive modeling formalism is subsequently used to combine the computational results with the experimental information measured in the compartment furthest from the inlet and then predict optimal values and uncertainties throughout the dissolver. This predictive modeling methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution for the a priori known mean values and uncertainties characterizing the model parameters and the computed and experimentally measured model responses. This approximate a priori distribution is subsequently combined, using Bayes' theorem, with the "likelihood" provided by the multi-physics computational models. Finally, the posterior distribution is evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values of the parameters and responses of both multi-physics models, along with correspondingly reduced uncertainties. This work shows that even though the experimental data pertain solely to the compartment furthest from the inlet (where the data were measured), the predictive modeling procedure used herein actually improves the predictions and reduces the predicted uncertainties for the entire dissolver, including the compartment furthest from the measurements, because this predictive modeling methodology combines and transmits information simultaneously over the entire phase-space, comprising all time steps and spatial locations.
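The adjoint principle invoked here can be illustrated on a single scalar ODE, far simpler than the 16-equation dissolver model: one forward solve plus one backward adjoint solve yields the sensitivity of a terminal response to a parameter, which can be checked against the analytical value. The equations, discretisation and parameter values below are chosen purely for illustration.

```python
import numpy as np

# Scalar illustration of adjoint sensitivity analysis: for du/dt = -k*u with
# response R = u(T), the adjoint satisfies dlam/dt = k*lam with lam(T) = 1,
# and dR/dk = integral over [0, T] of lam * (df/dk) = integral of -lam*u.
k, u0, T, n = 0.8, 2.0, 5.0, 20000
t = np.linspace(0.0, T, n + 1)
dt = t[1] - t[0]

# Forward sweep (explicit Euler), storing the full trajectory.
u = np.empty(n + 1)
u[0] = u0
for i in range(n):
    u[i + 1] = u[i] + dt * (-k * u[i])

# Backward (adjoint) sweep, terminal condition lam(T) = dR/du(T) = 1.
lam = np.empty(n + 1)
lam[-1] = 1.0
for i in range(n, 0, -1):
    lam[i - 1] = lam[i] - dt * (k * lam[i])

# Sensitivity dR/dk accumulated along the trajectory (simple quadrature);
# the exact value is -T * u0 * exp(-k*T).
dR_dk = dt * np.sum(lam * (-u))
print(dR_dk, -T * u0 * np.exp(-k * T))
```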
120.
Application of contour-plot analysis to the processing of hyphenated GC/FTIR data (total citations: 1; self-citations: 0; citations by others: 1)
Zhong Shan (钟山), Spectroscopy and Spectral Analysis (光谱学与光谱分析), 1995, 15(6): 41-44, 50
Based on data obtained from pyrolysis wide-bore capillary gas chromatography/Fourier-transform infrared spectroscopy (Py-WBCGC/FTIR) analysis of three polymers, this paper describes the use of contour-plot analysis to resolve overlapping chromatographic peaks. Guided by the information provided by this analysis, infrared spectral co-addition and subtraction were applied to obtain gas-phase infrared spectra of the individual pure components within the overlapping peaks with good signal-to-noise ratio, allowing their qualitative identification. The results highlight the advantages of contour-plot analysis for hyphenated data and fully exploit the spectral resolving power unique to GC/FTIR.
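A simulated example of the contour-plot view of hyphenated GC/FTIR data (retention time versus wavenumber) is sketched below; the two co-eluting components and their band positions are invented for illustration only.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated GC/FTIR data matrix (retention time x wavenumber) with two
# co-eluting components that absorb at different wavenumbers. A contour plot
# separates in the spectral dimension what the total-absorbance chromatogram
# alone cannot resolve.
rt = np.linspace(0, 10, 300)        # retention time / min
wn = np.linspace(600, 1800, 400)    # wavenumber / cm-1

def peak(rt0, width, band, bandwidth):
    elution = np.exp(-0.5 * ((rt - rt0) / width) ** 2)
    spectrum = np.exp(-0.5 * ((wn - band) / bandwidth) ** 2)
    return np.outer(elution, spectrum)

data = peak(5.0, 0.3, 1100, 30) + 0.8 * peak(5.3, 0.3, 1700, 25)   # overlapping peaks
data += np.random.default_rng(0).normal(scale=0.01, size=data.shape)

plt.contour(wn, rt, data, levels=12)
plt.xlabel("wavenumber / cm$^{-1}$")
plt.ylabel("retention time / min")
plt.title("Contour view of a simulated GC/FTIR run")
plt.show()
```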