31.
Managerial efficiency in performing arts programming can be understood as the technical efficiency with which cultural managers transform the resources available to them into a given cultural output. Different conceptions of the finished performance product lead us to select two different output variables (number of performances and number of attendances), and three models are considered according to these conceptual points of view. Data on the Circuït Teatral Valencià, a Spanish regional theatre network, are used to develop the concept of managerial efficiency empirically and to set up a framework that allows us to monitor it.
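As a toy illustration of the underlying idea (not the paper's model, which is developed on the Circuït Teatral Valencià data), single-output technical efficiency can be sketched as each theatre's output-to-input ratio relative to the best observed ratio; the theatre names and figures below are hypothetical:

```python
# Hypothetical illustration: single-input, single-output technical efficiency,
# computed relative to the best observed output-per-input ratio.
theatres = {
    # name: (resource budget, performances staged) -- made-up figures
    "A": (100.0, 40),
    "B": (150.0, 45),
    "C": (80.0, 40),
}

# Output per unit of input for each theatre.
ratios = {name: out / inp for name, (inp, out) in theatres.items()}
best = max(ratios.values())

# Efficiency score in (0, 1]: 1.0 means the theatre sits on the frontier.
efficiency = {name: r / best for name, r in ratios.items()}

for name, score in sorted(efficiency.items()):
    print(f"{name}: {score:.2f}")
```

A full efficiency analysis would use multiple inputs and outputs via DEA-style linear programs; this ratio form only conveys the "resources in, cultural output out" intuition.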
32.
Functional central limit theorems for triangular arrays of rowwise independent stochastic processes are established by a method that replaces tail probabilities by expectations throughout. The main tool is a maximal inequality based on a preliminary version proved by P. Gaenssler and Th. Schlumprecht. Its essential refinement used here is achieved by an additional inequality due to M. Ledoux and M. Talagrand. The entropy condition emerging in our theorems was introduced by K. S. Alexander, whose functional central limit theorem for so-called measure-like processes will also be regained. Applications concern, in particular, so-called random measure processes, which include function-indexed empirical processes and partial-sum processes (with random or fixed locations). In this context, we obtain generalizations of results due to K. S. Alexander, M. A. Arcones, P. Gaenssler, and K. Ziegler. Further examples include nonparametric regression and intensity estimation for spatial Poisson processes.
33.
Orthogonal WAVElet correction (OWAVEC) is a pre-processing method aimed at simultaneously accomplishing two essential needs in multivariate calibration, signal correction and data compression, by combining the application of an orthogonal signal correction algorithm, which removes information unrelated to a given response, with the great potential that wavelet analysis has shown for signal processing. In the previous version of the OWAVEC method, once the wavelet coefficients matrix had been computed from NIR spectra and deflated of irrelevant information in the orthogonalization step, effective data compression was achieved by selecting the wavelet coefficients with the largest correlation/variance, which then served as the basis for the development of a reliable regression model. This paper presents an evolution of the OWAVEC method that keeps the first two stages of the procedure (wavelet signal decomposition and direct orthogonalization) intact but incorporates genetic algorithms as the wavelet coefficient selection method, both to perform data compression and to improve the quality of the regression models developed later. Several applications dealing with diverse NIR regression problems are analyzed to evaluate the actual performance of the new OWAVEC method. Results provided by OWAVEC are also compared with those obtained with the original data and with other orthogonal signal correction methods.
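A minimal sketch of the compression idea, under simplifying assumptions: a Haar transform stands in for the paper's wavelet decomposition, the orthogonalization step is omitted, and plain correlation ranking stands in for the genetic-algorithm coefficient selection; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_dwt(x):
    """Full orthonormal Haar wavelet decomposition of a length-2^k signal."""
    coeffs = []
    approx = x.astype(float)
    while approx.size > 1:
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2.0))  # detail coefficients
        approx = (even + odd) / np.sqrt(2.0)        # approximation coefficients
    coeffs.append(approx)
    return np.concatenate(coeffs[::-1])

# Hypothetical "spectra": 8 samples x 16 wavelengths; the response y is
# driven by a single spectral variable plus noise.
X = rng.normal(size=(8, 16))
y = 2.0 * X[:, 3] + rng.normal(scale=0.1, size=8)

# Wavelet-transform each row, then rank the coefficients by absolute
# correlation with the response and keep the top few as the compressed
# representation on which a regression model would be built.
W = np.apply_along_axis(haar_dwt, 1, X)
corr = np.array([abs(np.corrcoef(W[:, j], y)[0, 1]) for j in range(W.shape[1])])
top = np.argsort(corr)[::-1][:4]
```

Because the Haar transform is orthonormal, each row's energy is preserved, so discarding low-correlation coefficients compresses the data without distorting what remains.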
34.
Non-negative matrix factorization (NMF) is a technique for dimensionality reduction that places non-negativity constraints on the factor matrices. Based on the PARAFAC model, NMF is extended here to three-way data decomposition. The three-dimensional non-negative matrix factorization (NMF3) algorithm, which is concise and easy to implement, is presented in this paper. The NMF3 implementation is element-wise rather than vector-wise, so it can decompose a data array directly without unfolding, unlike traditional algorithms. It has been applied to the decomposition of a simulated data array and gave reasonable results, showing that NMF3 can be introduced for curve resolution in chemometrics.
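For orientation, a three-way non-negative PARAFAC factorization can be sketched with multiplicative updates on the mode unfoldings. Note that this is not the paper's NMF3 algorithm, which works element-wise and avoids unfolding; the rank, iteration count, and synthetic data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product: rows indexed by (j, k) -> j*K + k."""
    J, R = B.shape
    K = C.shape[0]
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def nmf3_sketch(T, rank, n_iter=500, eps=1e-9):
    """Three-way non-negative PARAFAC by multiplicative updates:
    T[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r], all factors >= 0."""
    I, J, K = T.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    T1 = T.reshape(I, J * K)                       # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)    # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)    # mode-3 unfolding
    for _ in range(n_iter):
        KR = khatri_rao(B, C)
        A *= (T1 @ KR) / (A @ (KR.T @ KR) + eps)
        KR = khatri_rao(A, C)
        B *= (T2 @ KR) / (B @ (KR.T @ KR) + eps)
        KR = khatri_rao(A, B)
        C *= (T3 @ KR) / (C @ (KR.T @ KR) + eps)
    return A, B, C

# Recover a synthetic rank-2 non-negative data array.
A0, B0, C0 = rng.random((4, 2)), rng.random((5, 2)), rng.random((3, 2))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = nmf3_sketch(T, rank=2, n_iter=2000)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
```

The multiplicative form keeps every factor entry non-negative, which is what makes the recovered profiles interpretable as curves in chemometric resolution.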
35.
Formylation is a recently discovered post-translational modification of lysine residues and is associated with several diseases. In this work, a novel predictor, named predForm-Site, has been developed to predict formylation sites with higher accuracy. We have integrated multiple sequence features to develop a more informative representation of formylation sites. Moreover, the decision function of the underlying classifier has been optimized on the skewed formylation dataset during prediction model training to improve prediction quality. On the dataset used by the LFPred and Formator predictors, predForm-Site achieved 99.5% sensitivity, 99.8% specificity and 99.8% overall accuracy with an AUC of 0.999 in the jackknife test. In the independent test, it also achieved more than 97% sensitivity and 99% specificity. Similarly, in benchmarking against the recent method CKSAAP_FormSite, the proposed predictor significantly outperformed it on all measures, particularly sensitivity by around 20%, specificity by nearly 30% and overall accuracy by more than 22%. These experimental results show that predForm-Site can be used as a complementary tool for the fast exploration of formylation sites. For the convenience of the scientific community, predForm-Site has been deployed as an online tool, accessible at http://103.99.176.239:8080/predForm-Site.
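As an illustration of the general feature-extraction step (not predForm-Site's actual feature set), one common sequence encoding for PTM-site prediction is the amino-acid composition of a fixed window centred on the candidate lysine; the sequence and window size below are hypothetical:

```python
# Hypothetical sketch: window-based amino-acid composition around a
# candidate lysine site, a typical input feature for PTM-site classifiers.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def window_composition(sequence, site, half_width=7):
    """Fraction of each amino acid in a (2*half_width+1)-residue window
    around `site` (0-based index of the candidate lysine)."""
    lo = max(0, site - half_width)
    hi = min(len(sequence), site + half_width + 1)
    window = sequence[lo:hi]
    return [window.count(aa) / len(window) for aa in AMINO_ACIDS]

# Example: a made-up protein fragment with a lysine at index 7.
features = window_composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 7)
```

A real predictor would concatenate several such encodings (composition, position-specific pairs, physicochemical properties) and feed them to a classifier whose decision threshold is tuned for the class imbalance.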
36.
This paper develops a method for analyzing the time-of-flight spectra of secondary dissociation products in molecular-beam photodissociation experiments, improving on the original treatment of Kroger and Riley. It is shown that much important information can be extracted from highly averaged experimental data, including the average translational energy distribution of the secondary dissociation products, the spatial anisotropy parameter, and the branching ratio between parallel competing channels. The simulation results reproduce the main features of secondary dissociation reactions.
37.
The performances of some numerical methods to improve the signal-to-noise ratio are compared and applied to enhance noisy signals obtained in gas chromatography with capillary columns and a flame ionization detector. Several methods have been considered: cutoffs in the Fourier transform of the recorded signal; real-time numerical filtering; theoretical model curve fitting; and the correlation of a chromatogram recorded from a pseudorandomly injected sample with the pseudorandom injection function. Numerical real-time filtering is shown to be the most convenient method when the main periodic component of the noise has been determined by Fourier analysis.
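The Fourier-cutoff approach can be sketched as follows; the synthetic chromatogram, noise level, and cutoff index are assumptions for illustration (in practice the cutoff would be chosen from the Fourier analysis of the noise):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical chromatogram: one Gaussian peak plus white noise.
t = np.linspace(0.0, 10.0, 512)
clean = np.exp(-0.5 * ((t - 5.0) / 0.3) ** 2)
noisy = clean + rng.normal(scale=0.2, size=t.size)

# Fourier cutoff: zero all frequency components above an assumed threshold.
spectrum = np.fft.rfft(noisy)
cutoff = 20
spectrum[cutoff:] = 0.0
smoothed = np.fft.irfft(spectrum, n=t.size)

# Compare the error against the known clean signal.
rms_before = np.sqrt(np.mean((noisy - clean) ** 2))
rms_after = np.sqrt(np.mean((smoothed - clean) ** 2))
```

The cutoff must sit above the signal's own bandwidth; too low a cutoff broadens the chromatographic peaks, which is why the paper favours real-time filtering tuned to the noise's dominant frequency.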
38.
In recent years, the important role of stochastic complementarity problems in economics and management has been increasingly recognized. Some authors have extended stochastic complementarity problems from matrices to tensors, proposing the tensor stochastic complementarity problem. By introducing a class of smoothing functions, this paper proposes a smoothing Newton algorithm for solving the tensor stochastic complementarity problem, proves the global and local convergence of the algorithm, and finally verifies its effectiveness through numerical experiments.
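For intuition, a smoothing Newton method for an ordinary (deterministic, matrix) complementarity problem can be sketched with a smoothed Fischer-Burmeister reformulation; this is not the paper's tensor algorithm, and the test problem and mu-reduction schedule are assumptions:

```python
import numpy as np

def fb_smooth(a, b, mu):
    """Smoothed Fischer-Burmeister function: zero (as mu -> 0) iff
    a >= 0, b >= 0 and a*b = 0."""
    return a + b - np.sqrt(a * a + b * b + 2.0 * mu * mu)

def smoothing_newton(F, x0, mu=1.0, tol=1e-10, max_iter=100):
    """Smoothing Newton sketch for x >= 0, F(x) >= 0, x.F(x) = 0,
    with a finite-difference Jacobian and a simple mu-halving schedule."""
    x = x0.astype(float)
    for _ in range(max_iter):
        Phi = fb_smooth(x, F(x), mu)
        if np.linalg.norm(Phi) < tol and mu < tol:
            break
        n = x.size
        J = np.empty((n, n))
        h = 1e-7
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (fb_smooth(x + e, F(x + e), mu) - Phi) / h
        x = x - np.linalg.solve(J, Phi)   # Newton step on the smoothed system
        mu *= 0.5                          # drive the smoothing parameter to 0
    return x

# Linear complementarity example F(x) = M x + q; the solution is x = (1.5, 0).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, 1.0])
x = smoothing_newton(lambda v: M @ v + q, np.ones(2))
```

The tensor version replaces the linear map by a homogeneous polynomial map defined by a tensor and uses an analytic Jacobian, but the smoothing-and-Newton structure is the same.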
39.
The computation of L1 smoothing splines on large data sets is often desirable, but computationally infeasible. A locally weighted, LAD smoothing spline based smoother is suggested, and preliminary results are discussed. Specifically, one can seek smoothing splines in the spaces W^m(D), with [0,1]^n ⊆ D. We assume data of the form y_i = f(t_i) + ε_i, i = 1, ..., N, with {t_i}_{i=1}^N ⊂ D, where the ε_i are errors with E(ε_i) = 0, and f is assumed to be in W^m. An LAD smoothing spline is the solution, s*, of the optimization problem

min_{s ∈ W^m} (1/N) Σ_{i=1}^N |y_i − s(t_i)| + λ J(s),

where J is a roughness penalty on W^m and λ ≥ 0 is a smoothing parameter.
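A toy stand-in for the LAD objective, assuming a polynomial basis in place of W^m splines and omitting the roughness penalty: the sum of absolute deviations is minimized by iteratively reweighted least squares, which resists the heavy-tailed noise in the synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)

def lad_polyfit(t, y, degree=3, n_iter=50, delta=1e-6):
    """Least-absolute-deviations polynomial fit by iteratively reweighted
    least squares: |r| is approximated by w * r^2 with w = 1 / max(|r|, delta)."""
    V = np.vander(t, degree + 1)
    coef = np.linalg.lstsq(V, y, rcond=None)[0]   # start from the L2 fit
    for _ in range(n_iter):
        r = np.abs(y - V @ coef)
        w = 1.0 / np.maximum(r, delta)
        Vw = V * w[:, None]
        coef = np.linalg.solve(V.T @ Vw, Vw.T @ y)
    return coef

# Smooth cubic signal with heavy-tailed (Cauchy-like) noise.
t = np.linspace(-1.0, 1.0, 200)
f = t ** 3 - t
y = f + 0.05 * rng.standard_t(df=1, size=t.size)
coef = lad_polyfit(t, y)
fit = np.vander(t, 4) @ coef
```

An L2 (least-squares) fit on the same data would be dragged toward the Cauchy outliers; the absolute-deviation criterion is what makes the LAD smoothing spline attractive for noisy large data sets.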
40.
Using statistically designed experiments, 12,500 observations are generated from a 4-pieced Cobb-Douglas function exhibiting increasing and decreasing returns to scale in its different pieces. Performances of DEA and frontier regressions represented by COLS (Corrected Ordinary Least Squares) are compared at sample sizes of n = 50, 100, 150 and 200. Statistical consistency is exhibited, with performances improving as sample sizes increase. Both DEA and COLS generally give good results at all sample sizes. In evaluating efficiency, DEA generally shows superior performance, with BCC models being best (except at corner points), followed by the CCR model and then by COLS, with log-linear regressions performing better than their translog counterparts at almost all sample sizes. Because of the need to consider locally varying behavior, only the CCR and translog models are used for returns to scale, with CCR being the better performer. An additional set of 7,500 observations were generated under conditions that made it possible to compare efficiency evaluations in the presence of collinearity and with model misspecification in the form of added and omitted variables. Results were similar to the larger experiment: the BCC model is the best performer. However, COLS exhibited surprisingly good performances, which suggests that COLS may have previously unidentified robustness properties, while the CCR model is the poorest performer when one of the variables used to generate the observations is omitted.
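The COLS benchmark can be sketched as follows, under assumed (hypothetical) Cobb-Douglas parameters: fit OLS in logs, then shift the intercept by the maximum residual so the estimated frontier envelops all observations:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical simulated data: y = A * x1^b1 * x2^b2 * exp(-u),
# where u >= 0 is an inefficiency term (parameters are assumptions).
n = 200
x = rng.uniform(1.0, 10.0, size=(n, 2))
u = rng.exponential(scale=0.2, size=n)
log_y = 0.5 + 0.4 * np.log(x[:, 0]) + 0.3 * np.log(x[:, 1]) - u

# Step 1: OLS on the log-linear Cobb-Douglas form.
X = np.column_stack([np.ones(n), np.log(x[:, 0]), np.log(x[:, 1])])
beta = np.linalg.lstsq(X, log_y, rcond=None)[0]
resid = log_y - X @ beta

# Step 2: "correct" the intercept by the largest residual so that no
# observation lies above the frontier.
beta_cols = beta.copy()
beta_cols[0] += resid.max()

# Technical efficiency estimate per observation, in (0, 1].
efficiency = np.exp(log_y - X @ beta_cols)
```

DEA (BCC/CCR) would instead envelop the data with piecewise-linear frontiers solved by linear programs, which is why it adapts better to the 4-pieced function in the experiment; COLS imposes a single parametric frontier.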