131.
This paper presents a theoretical analysis of an adaptive multiuser RAKE receiver scheme for a direct-sequence code division multiple access (DS-CDMA) system operating over a frequency-selective fading channel. The least mean square (LMS) algorithm is used to estimate the channel coefficients. Chaotic sequences are used as spreading sequences, and the corresponding bit error rate (BER) is derived in closed form under imperfect channel estimation. The performance of the chaotic sequences is compared with that of pseudorandom noise (PN) sequences. Under the assumption of perfect synchronization, simulation results are presented to investigate the performance of the proposed system.
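A minimal sketch of how an LMS channel estimator of the kind mentioned above might track the taps of a frequency-selective channel from known pilot symbols; the filter length, step size, and signal model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lms_channel_estimate(rx, tx_pilot, num_taps=4, mu=0.01):
    """Estimate frequency-selective channel taps with the LMS algorithm.

    rx       : received samples (after despreading), shape (N,)
    tx_pilot : known transmitted pilot symbols, shape (N,)
    """
    w = np.zeros(num_taps, dtype=complex)      # channel tap estimates
    x = np.zeros(num_taps, dtype=complex)      # regressor (pilot history)
    for n in range(len(rx)):
        x = np.roll(x, 1)
        x[0] = tx_pilot[n]
        y = np.dot(w, x)                       # predicted received sample
        e = rx[n] - y                          # estimation error
        w += mu * e * np.conj(x)               # LMS update
    return w

# Illustrative run: 2-tap channel, BPSK pilots
rng = np.random.default_rng(0)
h_true = np.array([0.9 + 0.1j, 0.4 - 0.2j])
pilot = rng.choice([-1.0, 1.0], size=5000)
rx = np.convolve(pilot, h_true)[:len(pilot)] + 0.01 * rng.standard_normal(5000)
print(lms_channel_estimate(rx, pilot, num_taps=2, mu=0.05))   # close to h_true
```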
132.
To estimate the motion vectors of moving objects in video, a motion estimation system in the multi-dimensional vector matrix transform domain was built, and the multi-dimensional energy-concentration plane formed by a moving object in the transform domain was studied. First, multi-dimensional vector matrix theory, the associated transform theory, and the derivation showing that a moving object forms an energy-concentration plane in the transform domain are introduced. Then, a plane-fitting method is used to obtain the magnitude of the motion vector. Finally, the accuracy and iteration speed of several methods are analysed and compared. Experimental results show that the plane-fitting method in the multi-dimensional vector matrix transform domain estimates motion vectors with an accuracy of 10^-2 pixel, providing a high-precision approach to transform-domain motion vector estimation.
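A minimal sketch of the plane-fitting step described above: given transform-domain coordinates where the energy concentrates, a least-squares plane fit yields slope coefficients that stand in for the motion-vector components. The data layout and the sign convention mapping slopes to (vx, vy) are illustrative assumptions.

```python
import numpy as np

def fit_energy_plane(fx, fy, ft):
    """Least-squares fit of the plane ft = a*fx + b*fy to energy-peak coordinates.

    For a purely translating object the temporal frequency satisfies
    ft = -(vx*fx + vy*fy), so the fitted slopes give the motion vector.
    """
    A = np.column_stack([fx, fy])
    coeffs, *_ = np.linalg.lstsq(A, ft, rcond=None)
    a, b = coeffs
    return -a, -b        # (vx, vy) under the assumed sign convention

# Illustrative run: synthesize peak coordinates for vx=1.25, vy=-0.50 pixel/frame
rng = np.random.default_rng(1)
fx = rng.uniform(-0.5, 0.5, 200)
fy = rng.uniform(-0.5, 0.5, 200)
ft = -(1.25 * fx - 0.50 * fy) + 1e-3 * rng.standard_normal(200)
print(fit_energy_plane(fx, fy, ft))   # ~ (1.25, -0.50)
```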
133.
In a quantum key distribution (QKD) system, the error rate needs to be estimated in order to determine the joint probability distribution between the legitimate parties and to improve the performance of key reconciliation. We propose an efficient error estimation scheme for QKD, called the parity comparison method (PCM). In the proposed method, the parities of groups of sifted keys are analysed to estimate the quantum bit error rate, instead of relying on traditional key sampling. Simulation results show that the proposed method clearly improves estimation accuracy and reveals less information in most realistic application scenarios.
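A minimal sketch of the parity-comparison idea: two parties compare parities of blocks of sifted key, and the fraction of disagreeing parities is inverted to an error-rate estimate under an independent-error assumption. The block size and the inversion formula are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def estimate_qber_by_parity(alice_key, bob_key, block_size=8):
    """Estimate the quantum bit error rate from block-parity mismatches.

    With independent errors of rate p, a block of k bits has a parity mismatch
    with probability (1 - (1 - 2p)**k) / 2; inverting that relation turns the
    observed mismatch fraction into an estimate of p.
    """
    n = (len(alice_key) // block_size) * block_size
    a = np.asarray(alice_key[:n]).reshape(-1, block_size)
    b = np.asarray(bob_key[:n]).reshape(-1, block_size)
    mismatch = (a.sum(axis=1) % 2) != (b.sum(axis=1) % 2)
    f = min(mismatch.mean(), 0.5 - 1e-12)     # keep the inversion well defined
    return (1.0 - (1.0 - 2.0 * f) ** (1.0 / block_size)) / 2.0

# Illustrative run: 3% true error rate
rng = np.random.default_rng(2)
alice = rng.integers(0, 2, 100_000)
errors = rng.random(100_000) < 0.03
bob = alice ^ errors.astype(int)
print(estimate_qber_by_parity(alice, bob))    # ~ 0.03
```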
134.
Nylon 6 and nylon 6,6 literature data are collected over a wide range of water concentrations and temperatures (0 ≤ [W]0 ≤ 40.8 wt%, 200 ≤ T ≤ 300 °C) and used to fit parameters in an updated batch reactor model. The resulting copolymerization model uses side reactions to account for the complex influence of water on kinetics and reaction equilibria. The proposed parameter estimates give a significantly improved fit to the data, corresponding to a 73% reduction in the weighted-least-squares objective function compared with the parameters of Arai et al. Copolymerization simulations are conducted at industrially relevant conditions, shedding light on the complex influence of water and on the potential to include waste nylon 6 cyclic dimer in the feedstock. The model and parameter estimates will be helpful in future models of nylon 6/6,6 copolymerization in continuous reactor systems.
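A minimal sketch of the weighted-least-squares fitting strategy referred to above, using a deliberately simplified first-order rate expression in place of the paper's full copolymerization model; the rate law, weights, and synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(k, t, c0=1.0):
    """Toy stand-in for a batch-reactor model: first-order consumption."""
    return c0 * np.exp(-k * t)

def weighted_residuals(params, t, y_obs, weights):
    """Residuals scaled by weights, as in a weighted-least-squares objective."""
    (k,) = params
    return weights * (simulate(k, t) - y_obs)

# Illustrative data: noisy measurements around k = 0.35 1/h
rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 25)
y = simulate(0.35, t) + 0.01 * rng.standard_normal(t.size)
w = 1.0 / 0.01                     # weight = 1 / measurement standard deviation
fit = least_squares(weighted_residuals, x0=[0.1], args=(t, y, w))
print(fit.x, 0.5 * np.sum(fit.fun ** 2))   # estimate and WLS objective value
```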
135.
An approach for the analysis of large experimental datasets in electrochemical impedance spectroscopy (EIS) has been developed. The approach uses the idea of successive Bayesian estimation and splits the multidimensional EIS datasets into parts with reduced dimensionality. Afterwards, estimation of the parameters of the EIS models is performed successively, from one part to another, using the complex nonlinear least squares (CNLS) method. The results obtained in the previous step are used as a priori values (in the Bayesian form) for the analysis of the next part. To provide high stability of the sequential CNLS minimisation procedure, a new hybrid algorithm has been developed. This algorithm fits the datasets of reduced dimensionality to the selected EIS models, provides high stability of the fitting and allows semi-automatic data analysis on a reasonable timescale. The hybrid algorithm consists of two stages in which different zero-order optimisation strategies are used, reducing both the computational time and the probability of overlooking the global optimum. The performance of the developed approach has been evaluated using (i) a large simulated EIS dataset representing a possible output of scanning electrochemical impedance microscopy experiments, and (ii) an experimental dataset in which EIS spectra were acquired as a function of the electrode potential and time. The developed data analysis strategy showed promise and can be further extended to other electroanalytical EIS applications that require multidimensional data analysis.
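A minimal sketch of the successive-Bayesian idea in CNLS form: each data slice is fitted by complex nonlinear least squares, and the previous slice's estimates enter the next fit as a Gaussian prior penalty. The equivalent circuit (Rs in series with a parallel RC), the prior weighting, and the crude posterior update are illustrative assumptions, not the paper's hybrid algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(params, freq):
    """Simple equivalent circuit used only for illustration: Rs + (R || C)."""
    rs, r, c = params
    w = 2.0 * np.pi * freq
    return rs + r / (1.0 + 1j * w * r * c)

def cnls_residuals(params, freq, z_obs, prior_mean, prior_sigma):
    """Complex nonlinear least-squares residuals plus a Gaussian prior term.

    The prior term carries the previous slice's estimates forward,
    in the spirit of successive Bayesian estimation.
    """
    z = z_model(params, freq)
    data_res = np.concatenate([(z - z_obs).real, (z - z_obs).imag])
    prior_res = (np.asarray(params) - prior_mean) / prior_sigma
    return np.concatenate([data_res, prior_res])

# Illustrative dataset split into two "slices" of a larger experiment
rng = np.random.default_rng(4)
freq = np.logspace(-1, 5, 40)
true = np.array([10.0, 100.0, 1e-6])
prior_mean, prior_sigma = np.array([5.0, 50.0, 1e-5]), np.array([50.0, 500.0, 1e-4])
for _ in range(2):                                  # slice-by-slice estimation
    z_obs = z_model(true, freq) + 0.05 * (rng.standard_normal(40)
                                          + 1j * rng.standard_normal(40))
    fit = least_squares(cnls_residuals, x0=prior_mean,
                        args=(freq, z_obs, prior_mean, prior_sigma),
                        x_scale=[10.0, 100.0, 1e-6])
    # crude stand-in for the posterior: estimates feed the next slice as priors
    prior_mean, prior_sigma = fit.x, prior_sigma * 0.5
print(prior_mean)        # approaches [10, 100, 1e-6]
```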
136.
In this paper, we address the accuracy of the results for the overdetermined full-rank linear least-squares problem. We recall theoretical results obtained in (SIAM J. Matrix Anal. Appl. 2007; 29 (2):413–433) on the conditioning of the least-squares solution and of its components when the matrix perturbations are measured in the Frobenius or spectral norm. We then define computable estimates for these condition numbers and interpret them in terms of statistical quantities when the regression matrix and the right-hand side are perturbed. In particular, we show that in the classical linear statistical model, the ratio of the variance of one component of the solution to the variance of the right-hand side is exactly the condition number of this solution component when only perturbations of the right-hand side are considered. We explain how to compute the variance–covariance matrix and the least-squares conditioning using the libraries LAPACK (LAPACK Users' Guide (3rd edn). SIAM: Philadelphia, 1999) and ScaLAPACK (ScaLAPACK Users' Guide. SIAM: Philadelphia, 1997), and we give the corresponding computational cost. Finally, we present a small historical numerical example that was used by Laplace (Théorie Analytique des Probabilités. Mme Ve Courcier, 1820; 497–530) for computing the mass of Jupiter, and a physical application in the area of space geodesy. Copyright © 2008 John Wiley & Sons, Ltd.
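A minimal sketch of the statistical reading given above: in the model b = Ax + e with Var(e) = sigma^2 I, the variance–covariance matrix of the least-squares solution is sigma^2 (A^T A)^-1, so each component's variance can be read off a diagonal entry and checked by perturbing only the right-hand side. The matrix below is random and illustrative, not Laplace's Jupiter data.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 3))          # overdetermined full-rank regression matrix
x_true = np.array([1.0, -2.0, 0.5])
sigma = 0.1                               # standard deviation of the right-hand-side noise

# Variance-covariance matrix of the least-squares solution: sigma^2 * (A^T A)^{-1}
cov = sigma**2 * np.linalg.inv(A.T @ A)
print("predicted component variances:", np.diag(cov))

# Monte-Carlo check: perturb only the right-hand side and look at the solution spread
xs = []
for _ in range(5000):
    b = A @ x_true + sigma * rng.standard_normal(50)
    xs.append(np.linalg.lstsq(A, b, rcond=None)[0])
print("empirical component variances:", np.var(np.array(xs), axis=0))
```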
137.
With its three-dimensional symmetry and wide range of potential applications, spherical array signal processing has been an active research area for years. This paper addresses direction-of-arrival (DOA) estimation for spherical arrays. Based on the orthogonality associated with the sensors' locations, a MUSIC algorithm in the spherical harmonics domain is proposed, named SH-MUSIC. As in beamspace MUSIC, a spherical harmonics transformation is applied before the MUSIC algorithm, and better performance is obtained because SH-MUSIC exploits the orthogonality of the array configuration. Owing to the orthogonality of the transformation matrix, the spherical harmonics transformation can also be applied first, without loss, in other improved MUSIC algorithms, as demonstrated for beamspace MUSIC. In addition, because the error between the steering vectors and the high-order spherical harmonics is tiny, spherical array data models for both open-sphere and rigid-sphere configurations are constructed. Simulations show SH-MUSIC to be effective. Moreover, experimental data from a rigid-sphere microphone array are processed with SH-MUSIC, and the DOAs are estimated accurately.
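A minimal sketch of the MUSIC step once the data are already in the spherical-harmonic domain; the microphone-to-harmonic transform and the open/rigid-sphere mode strengths mentioned above are abstracted away, and the source direction, SH order, grid, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, n, azimuth, colatitude)

def sh_steering(order, az, col):
    """Stack conj(Y_nm(direction)) for n = 0..order, m = -n..n."""
    vec = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            vec.append(np.conj(sph_harm(m, n, az, col)))
    return np.array(vec)

def sh_music_spectrum(R, order, az_grid, col_grid, n_src):
    """MUSIC pseudospectrum evaluated in the spherical-harmonic domain."""
    eigval, eigvec = np.linalg.eigh(R)
    En = eigvec[:, : R.shape[0] - n_src]            # noise subspace (smallest eigenvalues)
    P = np.zeros((len(col_grid), len(az_grid)))
    for i, col in enumerate(col_grid):
        for j, az in enumerate(az_grid):
            a = sh_steering(order, az, col)
            P[i, j] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return P

# Illustrative run: one source at azimuth 1.0 rad, colatitude 1.2 rad, order-3 SH domain
rng = np.random.default_rng(6)
order, n_snap = 3, 200
a_src = sh_steering(order, 1.0, 1.2)
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.05 * (rng.standard_normal((16, n_snap)) + 1j * rng.standard_normal((16, n_snap)))
X = np.outer(a_src, s) + noise
R = X @ X.conj().T / n_snap
az_grid = np.linspace(0, 2 * np.pi, 73)
col_grid = np.linspace(0.01, np.pi - 0.01, 37)
P = sh_music_spectrum(R, order, az_grid, col_grid, n_src=1)
i, j = np.unravel_index(np.argmax(P), P.shape)
print("estimated DOA:", az_grid[j], col_grid[i])    # close to (1.0, 1.2)
```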
138.
The relationship of the phase morphology of polypropylene/poly(ethylene terephthalate) (PP/PET) blends and their corresponding compatibilized blends with composition was investigated using digital image analysis. A diameter, d_g, was defined and calculated to characterize the phase morphology of this polymer blend system. A figure-estimation method was introduced to determine the width of the distribution of d_g. Using this method, it is shown that the distribution of d_g obeys a log-normal distribution, and the distribution width, σ, was calculated accordingly. Furthermore, a fractal dimension, D_f, was introduced to describe the distribution of the main particle sizes of the dispersed phase. The results showed that, while d_g increased with the concentration of the dispersed phase, σ and D_f show different dependences on composition: σ increases monotonically, but D_f shows a maximum at a PET content of 30%, indicating that, even though the overall size distribution becomes much broader, the distribution of the main body of sizes becomes more uniform when the PET content is less than 30%.
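A minimal sketch of fitting a log-normal distribution to measured dispersed-phase diameters and reading off the geometric-mean diameter and a width parameter σ; the synthetic diameters and the moment-based fit are illustrative assumptions, not the paper's image-analysis pipeline.

```python
import numpy as np

def lognormal_width(diameters):
    """Fit a log-normal by taking moments of log(d): returns (d_g, sigma).

    d_g is the geometric-mean diameter; sigma is the log-normal width parameter.
    """
    logs = np.log(diameters)
    return float(np.exp(logs.mean())), float(logs.std(ddof=1))

# Illustrative run: synthetic dispersed-phase diameters in micrometres
rng = np.random.default_rng(7)
d = rng.lognormal(mean=np.log(1.5), sigma=0.4, size=500)
print(lognormal_width(d))        # ~ (1.5, 0.4)
```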
139.
Modern radiometric analysis demands a comprehensive consideration of both nuclear and electron-shell processes if more ambitious aims are to be pursued. As an example, the small variation of the decay rates of radionuclides offers a possible source of information on the chemical environment of the decaying atoms. In principle this phenomenon has been known for many years, but the situation has now advanced to the point where, e.g. for internal conversion in 99mTc, full agreement between the difficult experiments and the corresponding theory has been established. The secondary emission of X-rays as a consequence of high excitation of the electron shells in combination with nuclear transitions provides another example of methodical progress in radiometry. Investigations on 51Cr as an electron-capture nuclide have shown that chemically induced variations of the Kα to Kβ X-ray intensity ratio are at least qualitatively understood.
140.
This paper provides simulation comparisons of the performance of 11 possible prediction intervals for the geometric mean of a Pareto distribution with parameters (α, B). Six different procedures were used to obtain these intervals, namely: the true interval, the pivotal interval, the maximum likelihood estimation interval, the central limit theorem interval, the variance stabilizing interval, and a mixture of the above intervals. Some of these intervals are valid if the observed sample size n is large, while others are valid if both n and the future sample size m are large. Some of these intervals require knowledge of α or B, while others do not. The simulation validation and efficiency study shows that the intervals based on the MLEs are the best. The second best intervals are those obtained through pivotal methods or a variance stabilizing transformation. The third group consists of the intervals that depend on the central limit theorem when λ is known. Two intervals proved to be unacceptable under any criterion.
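A minimal sketch in the spirit of the MLE-based approach favoured above: estimate the Pareto parameters by maximum likelihood, then form a plug-in prediction interval for the geometric mean of a future sample using the fact that log(X/B) is exponentially distributed for a Pareto variable. This plug-in construction is an illustrative stand-in, not one of the paper's eleven intervals.

```python
import numpy as np
from scipy.stats import gamma

def pareto_mle(x):
    """Maximum likelihood estimates of the Pareto scale B and shape alpha."""
    b_hat = x.min()
    alpha_hat = len(x) / np.sum(np.log(x / b_hat))
    return alpha_hat, b_hat

def plugin_gm_interval(x, m, level=0.95):
    """Plug-in prediction interval for the geometric mean of a future sample of size m.

    log(X/B) ~ Exponential(alpha), so the sum of m future logs is
    Gamma(m, scale=1/alpha); the MLEs are substituted for alpha and B.
    """
    alpha_hat, b_hat = pareto_mle(x)
    lo, hi = gamma.ppf([(1 - level) / 2, (1 + level) / 2], a=m, scale=1.0 / alpha_hat)
    return b_hat * np.exp(lo / m), b_hat * np.exp(hi / m)

# Illustrative run: observed sample of size 50, future sample of size 20
rng = np.random.default_rng(8)
x = 2.0 * (1.0 + rng.pareto(3.0, size=50))      # Pareto with alpha = 3, B = 2
print(plugin_gm_interval(x, m=20))
```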