1.
Based on a new Kronecker-type substitution, this paper gives two new deterministic interpolation algorithms for sparse polynomials represented by black boxes. Let f ∈ R[x1, …, xn] be a sparse black-box polynomial with degree bound D. When R is C or a finite field, the new algorithms have better computational complexity than existing algorithms, or lower complexity with respect to D. In particular, in the general black-box model D is the dominant factor in the complexity, and among all deterministic algorithms the second algorithm of this paper has the lowest complexity with respect to D.
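For orientation, the classical Kronecker substitution that such algorithms build on encodes the exponent vector of each term in base D, reducing the multivariate problem to a univariate one. The following is a minimal sketch of the classical substitution only, not the paper's new variant; the polynomial and bound are invented for illustration.

```python
# Classical Kronecker substitution: a sparse multivariate polynomial stored as
# {exponent tuple: coefficient} is mapped to a univariate one, assuming every
# per-variable degree is strictly less than the bound D.

def kronecker_substitute(poly, n, D):
    """Encode exponent vector (e1,...,en) as e1 + e2*D + ... + en*D^(n-1)."""
    uni = {}
    for exps, coeff in poly.items():
        e = sum(ei * D**i for i, ei in enumerate(exps))
        uni[e] = uni.get(e, 0) + coeff
    return uni

def kronecker_invert(uni, n, D):
    """Recover the multivariate exponents by base-D digit extraction."""
    poly = {}
    for e, coeff in uni.items():
        exps, r = [], e
        for _ in range(n):
            exps.append(r % D)
            r //= D
        poly[tuple(exps)] = coeff
    return poly

# f = 3*x1^2*x2 + 5*x2^3, with D = 4 (larger than every per-variable degree)
f = {(2, 1): 3, (0, 3): 5}
u = kronecker_substitute(f, 2, 4)   # {6: 3, 12: 5}
assert kronecker_invert(u, 2, 4) == f
```

Because the encoding is injective for exponents below D, interpolating the univariate image recovers the multivariate terms exactly.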
2.
The aim of this paper is to present a new idea for constructing nonlinear fractal interpolation functions, in which we exploit the Matkowski and Rakotch fixed point theorems. Our technique is different from the methods presented in the previous literature.
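For context, a classical (Barnsley-style) affine fractal interpolation function can be sampled by the chaos game as sketched below. This sketch lives in the ordinary Banach contraction setting, whereas the paper's contribution is precisely to relax that via the Matkowski and Rakotch theorems; the interpolation data and vertical scaling factor d here are invented.

```python
# Chaos-game sampling of the attractor of the affine IFS
#   w_i(x, y) = (a_i x + e_i, c_i x + d*y + f_i)
# whose attractor is the graph of a function interpolating `data`.
import random

def fif_points(data, d, n_iter=20000, seed=0):
    """Sample points on the fractal interpolation function through `data`."""
    (x0, y0), (xN, yN) = data[0], data[-1]
    maps = []
    for (xl, yl), (xr, yr) in zip(data, data[1:]):
        a = (xr - xl) / (xN - x0)
        e = (xN * xl - x0 * xr) / (xN - x0)
        c = (yr - yl - d * (yN - y0)) / (xN - x0)
        f = (xN * yl - x0 * yr - d * (xN * y0 - x0 * yN)) / (xN - x0)
        maps.append((a, e, c, f))
    rng = random.Random(seed)
    x, y, pts = x0, y0, []
    for _ in range(n_iter):
        a, e, c, f = rng.choice(maps)
        x, y = a * x + e, c * x + d * y + f
        pts.append((x, y))
    return pts

pts = fif_points([(0, 0), (0.5, 1), (1, 0)], d=0.3)
```

The coefficients are fixed by requiring each map to send the endpoints (x0, y0), (xN, yN) to consecutive data points; |d| < 1 makes each map a contraction in the vertical direction.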
4.
We investigate cosmological dark energy models where the accelerated expansion of the universe is driven by a dark energy field in an anisotropic universe. The constraints on the parameters are obtained by maximum likelihood analysis using observational data of 194 Type Ia supernovae (SNIa) and the most recent joint light-curve analysis (JLA) sample. In particular, we reconstruct the dark energy equation of state parameter w(z) and the deceleration parameter q(z). We find that the best fit dynamical w(z) obtained from the 194 SNIa dataset does not cross the phantom divide line w(z) = -1 and remains above and close to w(z) ≈ -0.92 over the whole redshift range 0 ≤ z ≤ 1.75, showing no evidence for phantom behavior. By applying the anisotropy effect to the ΛCDM model, the joint analysis indicates Ω_(σ0) = 0.0163 ± 0.03 with the 194 SNIa, Ω_(σ0) = -0.0032 ± 0.032 with the 238 SNIa of the SiFTO sample of JLA, and Ω_(σ0) = 0.011 ± 0.0117 with the 1048 SNIa of the SALT2 sample of Pantheon, at the 1σ confidence level. The analysis shows that considering the anisotropy leads to better best-fit parameters in all models with the JLA SNe datasets. Furthermore, we use two statistical tests, the usual χ²_min/dof and the p-test, to compare the two dark energy models with the ΛCDM model. Finally, we show that the presence of anisotropy is confirmed in the mentioned models via the SNIa dataset.
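The kind of likelihood analysis described can be illustrated by a toy χ² fit of a constant equation of state w to supernova distance moduli. Everything below is assumed for illustration (flat wCDM, Ω_m = 0.3, H0 = 70, synthetic data generated at the abstract's best-fit w = -0.92); it is not the paper's anisotropic model or dataset.

```python
# Toy chi-square fit of a constant dark-energy equation of state w to
# synthetic Type Ia supernova distance moduli in a flat wCDM cosmology.
import math

OM, H0, C = 0.3, 70.0, 299792.458  # Omega_m, H0 [km/s/Mpc], c [km/s] (assumed)

def mu_model(z, w, steps=200):
    """Distance modulus mu(z) = 5 log10(d_L / 10 pc), trapezoid integration."""
    E = lambda zz: math.sqrt(OM * (1 + zz)**3 + (1 - OM) * (1 + zz)**(3 * (1 + w)))
    h = z / steps
    integral = sum(h * 0.5 * (1 / E(i * h) + 1 / E((i + 1) * h)) for i in range(steps))
    d_l = (1 + z) * (C / H0) * integral  # luminosity distance in Mpc
    return 5 * math.log10(d_l * 1e5)     # 10 pc = 1e-5 Mpc

# synthetic "observations" generated at w = -0.92
zs = [0.1, 0.3, 0.5, 0.8, 1.2, 1.75]
obs = [mu_model(z, -0.92) for z in zs]
sigma = 0.15

def chi2(w):
    return sum(((o - mu_model(z, w)) / sigma)**2 for z, o in zip(zs, obs))

best = min((chi2(w / 100), w / 100) for w in range(-130, -60))
print(best)  # minimum lands at w = -0.92, where the data were generated
```

A real analysis would marginalize over nuisance parameters and use the full covariance of the SNIa compilation; the grid search here only shows the χ²_min machinery.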
5.
An efficient edge-based data structure has been developed in order to implement an unstructured vertex-based finite volume algorithm for the Reynolds-averaged Navier–Stokes equations on hybrid meshes. In the present approach, the data structure is tailored to meet the requirements of the vertex-based algorithm by considering data access patterns and cache efficiency. The required data are packed and allocated so that they lie close to each other in physical memory. The proposed data structure therefore increases cache performance and improves computation time; as a result, the explicit flow solver shows a significant speed-up in CPU time compared to other open-source solvers. A fully implicit version has also been implemented based on the PETSc library in order to improve the robustness of the algorithm. The algebraic equations resulting from the compressible Navier–Stokes and the one-equation Spalart–Allmaras turbulence equations are solved in a monolithic manner using the restricted additive Schwarz preconditioner combined with the FGMRES Krylov subspace algorithm. To further improve the computational accuracy, the multiscale metric-based anisotropic mesh refinement library PyAMG is used for mesh adaptation. The numerical algorithm is validated against classical benchmark problems such as the transonic turbulent flow around a supercritical RAE2822 airfoil and the DLR-F6 wing-body-nacelle-pylon configuration. The efficiency of the data structure is demonstrated by achieving up to an order of magnitude speed-up in CPU times.
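The gather-compute-scatter pattern behind an edge-based residual loop can be sketched as follows. The contiguous typed arrays mimic the cache-friendly packing the abstract describes, but the field names and the toy diffusive flux are illustrative, not the paper's scheme.

```python
# Edge-based finite-volume sweep: endpoint indices and edge weights are stored
# contiguously (structure of arrays), so one linear pass over the edges
# accumulates the residual at both endpoint vertices.
from array import array

def assemble_residual(n_nodes, edge_a, edge_b, edge_w, u):
    """One gather-compute-scatter sweep over the packed edge arrays."""
    res = array('d', [0.0] * n_nodes)
    for i in range(len(edge_a)):
        a, b = edge_a[i], edge_b[i]
        flux = edge_w[i] * (u[b] - u[a])   # toy diffusive flux across the edge
        res[a] += flux                      # scatter with opposite signs:
        res[b] -= flux                      # conservation by construction
    return res

# 3-node chain: edges (0,1) and (1,2) with unit weights
edge_a = array('i', [0, 1])
edge_b = array('i', [1, 2])
edge_w = array('d', [1.0, 1.0])
u = array('d', [0.0, 1.0, 3.0])
r = assemble_residual(3, edge_a, edge_b, edge_w, u)
print(list(r))  # [1.0, 1.0, -2.0]
```

Because each flux is computed once per edge and scattered to both endpoints, the residual sums to zero over the mesh, and the linear memory layout keeps the sweep cache-friendly.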
6.
A quick and effective workflow based on ultra-performance liquid chromatography coupled with electrospray ionization and LTQ-Orbitrap mass spectrometry (UPLC-LTQ-Orbitrap MS) was established for compositional analysis and screening of the characteristic compounds of three species of Atractylodes rhizome for quality evaluation. This technique was employed to determine the seven main components in Atractylodes rhizome samples. Ultimately, 78 constituents were identified; of these, seven characteristic compounds were selected for species discrimination, comprising atractylodin (63), atractylenolide I (43), atractylenolide II (49), atractylenolide III (53), atractylon (69), methyl-atractylenolide II (54) and (4E,6E,12E)-tetradecadecatriene-8,10-diyne-1,3-diacetate (59). The seven main compounds, including six characteristic compounds, were simultaneously determined in 29 batches of Atractylodes rhizome samples, and the method validation showed acceptable results. Quantitative analysis showed significantly different contents of the seven main components among the three species of Atractylodes rhizome, which indicates possible distinctions in their pharmacological effects. The established method can simultaneously provide qualitative and quantitative results for compositional characterization of Atractylodes rhizomes and for quality control.
7.
The traditional way to enhance the signal-to-noise ratio (SNR) of nuclear magnetic resonance (NMR) signals is to increase the number of scans. However, this procedure increases the measuring time, which can be prohibitive for some applications. Therefore, we have tested the use of several post-acquisition digital filters to enhance SNR by up to one order of magnitude in time domain NMR (TD-NMR) relaxation measurements. The procedures were studied using continuous wave free precession (CWFP-T1) signals, acquired with very low flip angles, that contain six times more noise than the Carr–Purcell–Meiboom–Gill (CPMG) signal of the same sample and experimental time. Linear (LI) and logarithmic (LO) data compression, low-pass infinite impulse response (LP), Savitzky–Golay (SG), and wavelet transform (WA) post-acquisition filters enhanced the SNR of the CWFP-T1 signals by at least six times. The best filters were LO, SG, and WA, which achieved high SNR enhancement without significant distortions in the ILT relaxation distribution data. It was therefore demonstrated that these post-acquisition digital filters can be a useful way to denoise CWFP-T1, as well as noisy CPMG signals, and consequently to reduce the experimental time. It was also demonstrated that the filtered CWFP-T1 method has the potential to be a rapid and nondestructive method to measure fat content in beef, and certainly in other meat samples.
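One of the filters named above, the Savitzky–Golay smoother, can be sketched with the fixed tabulated weights of the 5-point quadratic case, (-3, 12, 17, 12, -3)/35, applied to a noisy exponential decay of the shape of a relaxation signal. The signal parameters and noise level are invented; a real TD-NMR pipeline would tune window length and polynomial order.

```python
# 5-point quadratic Savitzky-Golay smoothing of a synthetic noisy decay.
import math, random

def savgol5(y):
    """Smooth interior points with the classic 5-point quadratic SG weights;
    the two samples at each end are left unfiltered for simplicity."""
    w = (-3, 12, 17, 12, -3)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(wk * y[i + k - 2] for k, wk in enumerate(w)) / 35.0
    return out

rng = random.Random(1)
t = [i * 0.01 for i in range(200)]
clean = [math.exp(-ti / 0.5) for ti in t]                 # ideal decay
noisy = [c + rng.gauss(0, 0.05) for c in clean]           # added Gaussian noise
smooth = savgol5(noisy)

rms = lambda a: math.sqrt(sum((x - c)**2 for x, c in zip(a, clean)) / len(a))
print(rms(noisy) > rms(smooth))  # smoothing reduces the error vs. the clean decay
```

The SG filter fits a local polynomial, so it suppresses noise while tracking the curvature of the decay far better than a plain moving average of the same width.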
8.
The aim of this paper is to present a new classification and regression algorithm based on Artificial Intelligence. The main feature of this algorithm, which will be called Code2Vect, is the nature of the data it treats: qualitative or quantitative, and continuous or discrete. Contrary to other artificial intelligence techniques based on "Big Data," this new approach enables working with a reduced amount of data, within the so-called "Smart Data" paradigm. Moreover, the main purpose of this algorithm is to enable the representation of high-dimensional data and, more specifically, the grouping and visualization of this data according to a given target. For that purpose, the data are projected into a vector space equipped with an appropriate metric, able to group data according to their affinity with respect to a given output of interest. Furthermore, another application of this algorithm lies in its prediction capability: as with most common data-mining techniques such as regression trees, given an input the output will be inferred, in this case considering the nature of the data described above. To illustrate its potential, two different applications are addressed, one concerning the representation of high-dimensional and categorical data and another featuring the prediction capabilities of the algorithm.
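The inference step described, embedding mixed data as vectors, equipping the space with a metric, and inferring a new point's output from its neighbors, can be loosely sketched as generic weighted nearest-neighbor regression. This is not the authors' actual Code2Vect construction (which learns the metric from the target); the data, weights, and encoding below are invented.

```python
# Nearest-neighbor inference in a vector space with a weighted metric,
# over mixed continuous + one-hot-encoded categorical features.
import math

def metric(x, y, w):
    """Weighted Euclidean distance; w encodes per-feature relevance."""
    return math.sqrt(sum(wi * (xi - yi)**2 for xi, yi, wi in zip(x, y, w)))

def predict(x, data, targets, w, k=3):
    """Average the targets of the k nearest points under the metric."""
    nearest = sorted(range(len(data)), key=lambda i: metric(x, data[i], w))[:k]
    return sum(targets[i] for i in nearest) / k

# toy mixed data: (continuous feature, one-hot category A, one-hot category B)
data = [(0.1, 1, 0), (0.2, 1, 0), (0.9, 0, 1),
        (1.0, 0, 1), (0.15, 1, 0), (0.95, 0, 1)]
targets = [1.0, 1.1, 4.0, 4.2, 0.9, 4.1]
w = (1.0, 0.5, 0.5)  # assumed feature weights

print(predict((0.12, 1, 0), data, targets, w))  # lands in the ~1.0 group
```

The point of a learned metric, as opposed to the fixed weights here, is that distances then directly reflect affinity in the output of interest, which is what makes the grouped visualization meaningful.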
10.
Taking Shanxi Province as the study area, and using NPP-VIIRS (Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite) nighttime light data and GDP statistics as data sources, a GDP spatialization model was constructed and a GDP density map of Shanxi Province was produced, from which the spatial heterogeneity of the province's economy was studied. The NPP-VIIRS nighttime light data were preprocessed to extract a light index, which was regressed against GDP to establish the best-fitting model and obtain a fitted GDP density map; county-level GDP data were then used for a linear correction to improve the simulation accuracy. The results show that: (1) the NPP-VIIRS nighttime light data correlate strongly with GDP and can be used to simulate the GDP of Shanxi Province; (2) modeling the province as a whole achieves higher accuracy than partitioned GDP modeling; (3) the spatial distribution of GDP in Shanxi Province generally radiates outward from city centers, forming GDP transition zones.
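The regression step described, fitting GDP against a nighttime light index by least squares and then applying the model pixel by pixel, can be sketched as follows. The county-level numbers are invented for illustration and are not Shanxi data.

```python
# Ordinary least squares fit of GDP against a nighttime light index,
# then application of the fitted model to a new light value.

def linfit(x, y):
    """OLS for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx)**2 for xi in x)
    return a, my - a * mx

# hypothetical county-level light index vs. GDP (billion yuan)
light = [12.0, 30.5, 55.1, 80.2, 110.7]
gdp   = [15.0, 40.0, 70.0, 105.0, 150.0]
a, b = linfit(light, gdp)

# apply the fitted model to a pixel-level light value; in the abstract's
# workflow, pixel estimates are then linearly corrected so that county
# totals match the statistical GDP
estimate = a * 20.0 + b
print(round(a, 3), round(estimate, 1))
```

The final linear correction (rescaling pixel estimates so each county sums to its reported GDP) is what turns the regression output into a consistent GDP density surface.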