991.
Joseph B. Kadane 《Journal of Chemometrics》2016,30(3):93-98
This paper analyzes data from experiments on simple polymer chains. It measures the extent to which a particular monomer prefers to link with another of the same type. To analyze the data, it derives the likelihood function for a two‐state Markov model in which only the number in each state, but not the order, is observed. This technology is applied to a data set on which experimenters mixed lactic‐glycolic monomers with a known proportion of a contaminant consisting of an extra lactic acid. The resulting copolymers were subjected to matrix‐assisted laser desorption ionization mass spectrometry. This records the number of copolymers at each atomic weight, which can be associated with a given length of copolymer and number of contaminant monomers. Analysis of the data shows that the proportion of contaminant monomers exceeded the proportion of experimentally induced contaminant. Maximum likelihood estimates using the data show that lactic‐glycolic monomers show a positive affinity for the contaminant. Copyright © 2015 John Wiley & Sons, Ltd.
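The likelihood described above, for a two-state Markov chain observed only through the count of monomers in each state, can be sketched by brute-force enumeration over orderings for short chains (a hedged illustration only; the paper derives the likelihood analytically, and the transition parameters `p`, `q` and the uniform start distribution here are hypothetical):

```python
from itertools import permutations

def composition_likelihood(n_a, n_b, p, q, start=(0.5, 0.5)):
    """P(observe n_a A-monomers and n_b B-monomers, order unknown)
    for a two-state Markov chain with P(A->A)=p and P(B->B)=q.
    Brute-force sum over all distinct orderings; short chains only."""
    trans = {('A', 'A'): p, ('A', 'B'): 1 - p,
             ('B', 'B'): q, ('B', 'A'): 1 - q}
    total = 0.0
    for seq in set(permutations('A' * n_a + 'B' * n_b)):
        prob = start[0] if seq[0] == 'A' else start[1]
        for x, y in zip(seq, seq[1:]):
            prob *= trans[(x, y)]
        total += prob
    return total
```

Summing over all compositions of a fixed chain length recovers 1, a useful sanity check; a same-type affinity of the kind estimated in the paper would show up as `p` and `q` above 0.5.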
992.
We propose a form of random forests that is especially suited for functional covariates. The method is based on partitioning the functions' domain in intervals and using the functions' mean values across those intervals as predictors in regression or classification trees. This approach appears to be more intuitive to applied researchers than usual methods for functional data, while also performing very well in terms of prediction accuracy. The intervals are obtained from randomly drawn, exponentially distributed waiting times. We apply our method to data from Raman spectra on boar meat as well as near‐infrared absorption spectra. The predictive performance of the proposed functional random forests is compared with commonly used parametric and nonparametric functional methods and with a nonfunctional random forest using the single measurements of the curve as covariates. Further, we present a functional variable importance measure, yielding information about the relevance of the different parts of the predictor curves. Our variable importance curve is much smoother and hence easier to interpret than the one obtained from nonfunctional random forests.
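The interval-construction step can be sketched as follows (a minimal sketch with hypothetical function names; cut points come from exponentially distributed waiting times, and the interval means of each curve become ordinary predictors for a standard tree ensemble):

```python
import numpy as np

def random_cut_points(domain_len, rate, rng):
    """Cut points accumulated from exponential waiting times."""
    cuts, t = [0.0], 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t >= domain_len:
            break
        cuts.append(t)
    cuts.append(domain_len)
    return np.array(cuts)

def interval_mean_features(curves, grid, cuts):
    """Mean of each curve over each non-empty interval -> feature matrix."""
    cols = []
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        mask = (grid >= lo) & (grid < hi)
        if mask.any():                    # skip intervals with no grid points
            cols.append(curves[:, mask].mean(axis=1))
    return np.column_stack(cols)
```

The resulting feature matrix can be fed to any off-the-shelf regression or classification tree; drawing fresh cut points for every tree yields the functional random forest described above.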
993.
A Parabola-Hyperbola (P-H) kinetic model for NR sulphur vulcanization is presented. The idea originates from the composite Parabola-Parabola-Hyperbola (P-P-H) fitting function used by the authors in [1,2] to approximate experimental rheometer curves from a few key parameters of vulcanization, such as the scorch point, the initial vulcanization rate, 90% vulcanization, the maximum point and the reversion percentage. After proper normalization of the experimental data (i.e. excluding induction and normalizing against maximum torque), the P-P-H model reduces to the P-H composite function discussed here, which is linked to the kinetic scheme originally proposed by Han and co-workers [3]. That scheme is characterized by three kinetic constants: classically, the first two describe incipient curing and stable/unstable crosslinks, while the third reproduces reversion.

The strength of the proposed approach lies in the very small number of input parameters required to accurately fit normalized experimental data (i.e. the rate of vulcanization at scorch, vulcanization at 90%, the maximum point and the reversion percentage), and in the translation of a mere geometric data fit into a kinetic model. Knowing the kinetic constants from simple geometric fitting allows rubber curing to be characterized at temperatures other than those tested experimentally. The P-H model can also be applied in the so-called backward direction, i.e. assuming Han's kinetic constants known from other models and deriving the geometric fitting parameters as a result.

Available experimental data, consisting of rheometer curves measured at 5 different temperatures on the same rubber blend, are used to benchmark the proposed P-H kinetic approach in both the backward and forward directions. Very good agreement with previously presented kinetic approaches and with the experimental data is observed.
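As a purely illustrative sketch of the geometry involved (not the authors' exact functional forms), a normalized cure curve can be pieced together from a parabola that peaks at the maximum-torque time and a hyperbola that decays toward a reversion plateau; the parameters `t_max`, `b`, `r`, and `d` below are hypothetical:

```python
import numpy as np

def ph_curve(t, t_max, b, r, d):
    """Illustrative parabola-hyperbola composite for a normalized cure
    curve: parabolic rise to 1 at t_max, then hyperbolic reversion
    decaying toward the plateau r (0 < r < 1)."""
    t = np.asarray(t, dtype=float)
    parabola = 1.0 - b * (t - t_max) ** 2
    hyperbola = r + (1.0 - r) / (1.0 + d * (t - t_max))
    return np.where(t <= t_max, parabola, hyperbola)
```

Both branches equal 1 at `t = t_max`, so the composite is continuous, and the hyperbola tends to `r` at long times, playing the role of the reversion percentage; with a routine such as `scipy.optimize.curve_fit`, the four parameters could be recovered from a measured, normalized rheometer trace.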
994.
995.
When fusing multi-channel positioning data, H∞ filtering can effectively handle the filtering and state-estimation problem that arises when the error models of the individual sensors are uncertain. However, because the sampling instants and accuracies of the channels differ, multi-channel data should be fused with variable weights to improve the reliability of the fused result. This paper therefore presents a variable-weight data-fusion algorithm based on H∞ filtering; simulations on measured real-time data show that the method is highly practical.
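The variable-weight fusion step can be sketched independently of the H∞ filter itself: each channel's filtered estimate is weighted inversely to its current error measure, so channels with stale samples or poor accuracy count less (a minimal sketch with hypothetical names; in the paper the error measures would come from the per-channel H∞ filters):

```python
import numpy as np

def variable_weight_fuse(estimates, error_measures):
    """Fuse per-channel estimates with weights proportional to
    1/error; weights are renormalized to sum to 1."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(error_measures, dtype=float)
    w /= w.sum()
    return w @ est, w
```

With equal error measures this reduces to a plain average; as one channel's error measure grows, its weight, and hence its influence on the fused estimate, shrinks accordingly.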
996.
997.
The performance of Partial Least Squares regression (PLS) in predicting the output with multivariate cross‐ and autocorrelated data is studied. With many correlated predictors of varying importance, PLS does not always predict well, and we propose a modified algorithm, Partitioned Partial Least Squares (PPLS). In PPLS the predictors are partitioned into smaller subgroups and the important subgroups with high prediction power are identified. Finally, regular PLS analysis using only those subgroups is performed. The proposed PPLS algorithm is used in the analysis of data from a real pharmaceutical batch fermentation process for which the process variables follow certain profiles during a specific fermentation period. We observed that PPLS leads to a more accurate prediction of the yield of the fermentation process and an easier interpretation, since fewer predictors are used in the final PLS prediction. In the application, important issues such as alignment of the profiles from one batch to another and standardization of the predictors are also addressed. For instance, in PPLS noise magnification due to standardization does not seem to create problems as it might in regular PLS. Finally, PPLS is compared to several recently proposed functional PLS and PCR methods and a genetic algorithm for variable selection. More specifically, for a couple of publicly available data sets with near‐infrared spectra it is shown that overall PPLS has lower cross‐validated error than PLS, PCR and the functional modifications hereof, and is similar in performance to a more complex genetic algorithm. Copyright © 2011 John Wiley & Sons, Ltd.
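The partition-then-select idea can be sketched with a minimal NIPALS PLS1 and consecutive predictor groups (a hedged sketch: the group layout and the training-residual scoring below are illustrative stand-ins; the actual PPLS ranks subgroups by cross-validated prediction error):

```python
import numpy as np

def pls1_coef(X, y, ncomp=2):
    """Minimal NIPALS PLS1; returns regression coefficients for centered data."""
    Xr = X - X.mean(axis=0)
    yr = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(ncomp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w
        tt = t @ t
        p = Xr.T @ t / tt
        q = (yr @ t) / tt
        Xr = Xr - np.outer(t, p)          # deflate X
        yr = yr - q * t                   # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

def group_scores(X, y, groups, ncomp=2):
    """Residual sum of squares of a PLS1 fit per predictor group;
    lower score = more predictive group."""
    scores = []
    for g in groups:
        B = pls1_coef(X[:, g], y, ncomp)
        Xc = X[:, g] - X[:, g].mean(axis=0)
        resid = (y - y.mean()) - Xc @ B
        scores.append(resid @ resid)
    return np.array(scores)
```

Groups scoring close to the best would then be retained, and a final PLS fitted on the union of their columns, which is the "regular PLS analysis using only those subgroups" of the abstract.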
998.
Hai‐Yan Fu, Hai‐Long Wu, Yong‐Jie Yu, Li‐Li Yu, Shu‐Rong Zhang, Jin‐Fang Nie, Shu‐Fang Li, Ru‐Qin Yu 《Journal of Chemometrics》2011,25(8):408-429
A novel third‐order calibration algorithm, alternating weighted residue constraint quadrilinear decomposition (AWRCQLD), based on pseudo‐fully stretched matrix forms of the quadrilinear model, was developed for the quantitative analysis of four‐way data arrays. The AWRCQLD algorithm introduces four unique constraint terms to improve on the four‐way PARAFAC algorithm. The test results demonstrated that, compared with four‐way PARAFAC, AWRCQLD has the advantages of a faster convergence rate and insensitivity to an excess number of components adopted in the model. Moreover, simulated data and real experimental data were analyzed to explore the third‐order advantage over the second‐order counterpart. The results showed that third‐order calibration methods possess third‐order advantages that allow more inherent information to be obtained from four‐way data, improving resolution and quantitation relative to second‐order calibration, especially in highly collinear systems. Copyright © 2011 John Wiley & Sons, Ltd.
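For orientation, the unconstrained quadrilinear (four-way PARAFAC) baseline that AWRCQLD improves upon can be sketched as alternating least squares over the four factor matrices (a plain ALS sketch, not the AWRCQLD algorithm or its constraint terms):

```python
import numpy as np

def kr(U, V):
    """Column-wise Khatri-Rao (column-matching Kronecker) product."""
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def quadrilinear_als(X, rank, n_iter=500, seed=0):
    """Fit X[i,j,k,l] ~ sum_f A[i,f] B[j,f] C[k,f] D[l,f] by ALS."""
    rng = np.random.default_rng(seed)
    F = [rng.standard_normal((d, rank)) for d in X.shape]
    for _ in range(n_iter):
        for n in range(4):
            # Khatri-Rao of the other three factors, in mode order
            others = [F[m] for m in range(4) if m != n]
            Z = kr(others[0], kr(others[1], others[2]))
            # mode-n unfolding of X, then least-squares update of F[n]
            Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
            F[n] = Xn @ np.linalg.pinv(Z.T)
    return F
```

Each inner update solves an ordinary least-squares problem for one factor with the other three fixed; AWRCQLD augments these subproblems with weighted residue constraints, which is what buys the faster convergence and rank insensitivity reported above.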
999.
In spite of its simplicity and well-defined theoretical basis, the Flory–Guggenheim approach is conventionally regarded as inapplicable to off-lattice systems, since its insertion probability does not account for the excluded region present in an off-lattice system. In this work, we propose an insertion probability that accounts for the excluded region of off-lattice fluids and derive a new equation of state (EOS) for hard-sphere chains based on the Flory–Guggenheim approach. To investigate the behavior of the excluded regions, Monte Carlo sampling was performed for hard disks, and the various excluded regions were found to have different density dependences. On the basis of the simulation results, we formulated the insertion probability for hard spheres and for hard-sphere chains, the latter accounting for the effect of chain connectivity on monomer insertion. The proposed insertion probability was found to correctly predict the simulation data for monomers and to correctly correlate the simulation data for chain fluids. The resulting EOS meets the closed-packed limit and predicts the simulated compressibility factors for monomers and chains with a reasonable degree of accuracy; compared with other off-lattice-based EOS, it shows comparable or better results. For the second virial coefficient of chain molecules, the model was found to reasonably predict the simulation data.
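The Monte Carlo estimate of the insertion probability mentioned above can be sketched for hard disks as Widom-style trial insertions in a periodic box (a minimal sketch; the disk configuration and parameters are hypothetical, not those of the paper):

```python
import numpy as np

def insertion_probability(positions, sigma, box, n_trials, rng):
    """Fraction of random trial insertions of a disk (diameter sigma)
    that overlap no existing disk, with minimum-image periodic boundaries."""
    positions = np.asarray(positions, dtype=float).reshape(-1, 2)
    trials = rng.uniform(0.0, box, size=(n_trials, 2))
    accepted = 0
    for trial in trials:
        d = positions - trial
        d -= box * np.round(d / box)        # minimum-image convention
        if np.all((d * d).sum(axis=1) >= sigma ** 2):
            accepted += 1
    return accepted / n_trials
```

In an empty box every trial succeeds; with a single disk present, the estimate approaches one minus the excluded-area fraction, and its density dependence for many disks is exactly the quantity the paper's analytical insertion probability is built to reproduce.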
1000.
Proper permutation of the rows and columns of a data matrix may produce plots showing striking information about the objects and variables under investigation. To control the permutation, a diagonal matrix measure D was first defined, expressing the size relations of the matrix elements: D is essentially the absolute norm of the matrix with each element weighted by its distance to the matrix diagonal. Changing the order of rows and columns increases or decreases D. A Monte Carlo technique was used to maximize D for the object distance matrix, or to minimize D for the variable correlation matrix, so that similar objects or variables are brought close together. Secondly, a local distance matrix was defined, whose elements reflect the distances of neighboring objects in a limited subspace of the variables. By maximizing D of the local distance matrix through row and column changes of the original data matrix, similar objects are arranged close to each other while, simultaneously, the variables responsible for their similarity are collected close to the diagonal part defined by these objects. This combination of the diagonal measure and the local distance matrix appears to be an efficient tool for exploring hidden similarities in a data matrix. Copyright © 2009 John Wiley & Sons, Ltd.
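The diagonal measure and its Monte Carlo optimization can be sketched directly (the distance weighting below is one plausible choice; the paper's exact weight function is not reproduced here):

```python
import numpy as np

def diag_measure(A):
    """Absolute norm of A with each element weighted by its distance
    to the diagonal: D = sum_ij |a_ij| * |i - j|."""
    i, j = np.indices(A.shape)
    return float(np.sum(np.abs(A) * np.abs(i - j)))

def mc_permute(A, minimize=True, n_steps=3000, seed=0):
    """Monte Carlo search: randomly swap two rows or two columns and
    keep the swap only if it moves D in the requested direction."""
    rng = np.random.default_rng(seed)
    A = np.array(A, dtype=float)
    best = diag_measure(A)
    for _ in range(n_steps):
        B = A.copy()
        if rng.integers(2) == 0:
            a, b = rng.choice(B.shape[0], size=2, replace=False)
            B[[a, b], :] = B[[b, a], :]
        else:
            a, b = rng.choice(B.shape[1], size=2, replace=False)
            B[:, [a, b]] = B[:, [b, a]]
        d = diag_measure(B)
        if (d < best) if minimize else (d > best):
            A, best = B, d
    return A, best
```

Minimizing D on a distance-like matrix pulls small (similar) entries toward the diagonal; maximizing it pushes them away, matching the two directions used for the object distance matrix and the variable correlation matrix above.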