  Paid full text: 520
  Free: 41
  Free (domestic): 17
Chemistry: 307
Mechanics: 13
Multidisciplinary: 2
Mathematics: 23
Physics: 124
Radio & Electronics: 109
  2024: 1
  2023: 2
  2022: 5
  2021: 7
  2020: 3
  2019: 8
  2018: 7
  2017: 19
  2016: 18
  2015: 14
  2014: 28
  2013: 23
  2012: 18
  2011: 70
  2010: 44
  2009: 45
  2008: 18
  2007: 31
  2006: 23
  2005: 20
  2004: 18
  2003: 19
  2002: 10
  2001: 5
  2000: 6
  1999: 12
  1998: 11
  1997: 6
  1996: 6
  1995: 3
  1994: 8
  1993: 8
  1992: 4
  1991: 6
  1990: 6
  1989: 6
  1988: 5
  1987: 8
  1986: 4
  1985: 1
  1984: 8
  1983: 1
  1982: 5
  1981: 2
  1980: 1
  1979: 4
  1978: 1
578 results in total (search time: 0 ms)
51.
汪永明  杨子晨  黄载禄 《电子学报》1998,26(10):28-32,95
A burst-level loss system fed by bursty sources with identical peak rates is an important queueing model in the study of resource allocation and call admission control for ATM networks. This paper presents a fast algorithm for computing the time congestion probability and the cell loss ratio of such a queueing system; the algorithm is recursive in nature. Numerical results show that the algorithm not only runs fast enough for real-time use but is also accurate enough to meet the needs of practical applications.
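The paper's own recursion is not reproduced in the abstract. As a point of reference only, the sketch below implements the classic Erlang-B recursion for the time-congestion (blocking) probability of a simple loss system with c servers and offered load a, which is the best-known member of the same family of recursive loss-system calculations; it is an illustrative stand-in, not the algorithm described above, and the example numbers are invented.

```python
def erlang_b(c: int, a: float) -> float:
    """Blocking (time-congestion) probability of a loss system with c servers
    and offered load a, via the standard recursion
    B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, c + 1):
        b = a * b / (n + a * b)
    return b

# Example (hypothetical): a link carrying at most 20 simultaneous bursts,
# offered 15 Erlangs of burst traffic.
print(f"time congestion ≈ {erlang_b(20, 15.0):.4f}")
```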
52.
An algorithm for identifying peaks in infrared spectra based on multi-scale wavelet transform   (cited 1 time in total: 0 self-citations, 1 citation by others)
蔡涛  王先培  杜双育  阳婕 《分析化学》2011,39(6):911-914
Traditional peak detection methods generally consist of three steps: spectral smoothing, baseline correction and peak identification. Existing wavelet-based peak detection methods already merge baseline correction and peak identification into a single step. Building on this, the present work also incorporates spectral smoothing into the wavelet-based peak detection algorithm, so that the entire peak detection procedure becomes one integrated whole. During peak extraction the raw spectrum is processed directly, rather than a pre-processed spectrum, so that the peak detection results... (abstract truncated)
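The abstract is truncated, so the exact algorithm cannot be reconstructed here. The sketch below only illustrates the general idea of wavelet-based peak picking applied directly to a raw (unsmoothed, baseline-affected) spectrum, using SciPy's continuous-wavelet-transform peak finder; the synthetic spectrum, peak shapes and the `widths` search range are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic "raw" IR-like spectrum: two Gaussian peaks + drifting baseline + noise.
x = np.linspace(0, 100, 2000)
spectrum = (np.exp(-((x - 30) ** 2) / 4)
            + 0.6 * np.exp(-((x - 62) ** 2) / 9)
            + 0.002 * x                              # slowly varying baseline
            + np.random.normal(0, 0.02, x.size))     # measurement noise

# CWT-based peak picking works on the raw signal: the multi-scale transform
# implicitly handles smoothing and the slowly varying baseline.
peak_idx = find_peaks_cwt(spectrum, widths=np.arange(5, 60))
print("detected peak positions:", x[peak_idx])
```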
53.
The problem of determining the chemical composition of monazite grains through electron probe microanalysis is studied, using a scanning electron microscope with a wavelength dispersive spectrometer. A careful qualitative analysis is performed to determine all the elements present in the samples, the lines to be used in the quantification (chosen so as to minimize interferences), the angular positions and acquisition times for measuring peak and background intensities, and the crystals to be used. Particular emphasis is devoted to the analysis of Th, U and Pb, which are used to determine the age of the rock by means of the U-Th-Pb method commonly used in geochronology. Quantitative determinations of the chemical composition of the monazite grains are performed, optimizing the experimental conditions on the basis of the qualitative analysis. The determinations were made under two different criteria for the quantification of oxygen, and the dissimilar results obtained are discussed.
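For context, a chemical (CHIME-type) U-Th-total-Pb age of the kind mentioned above is obtained by solving a single closed-form equation relating the measured Th, U and Pb concentrations to the age t. The sketch below solves that generic equation numerically: the decay constants and isotopic abundances are standard values, while the sample concentrations are made-up illustrative numbers, and this is not the authors' own data-reduction procedure.

```python
import numpy as np
from scipy.optimize import brentq

# Standard decay constants (per year): 238U, 235U, 232Th
L238, L235, L232 = 1.55125e-10, 9.8485e-10, 4.9475e-11

def pb_from_age(t, th_ppm, u_ppm):
    """Radiogenic Pb (ppm) produced after t years from the measured Th and U,
    assuming all Pb is radiogenic (the usual chemical-age assumption)."""
    pb_th = (th_ppm / 232.04) * (np.exp(L232 * t) - 1.0) * 207.97            # 208Pb from 232Th
    pb_u8 = (u_ppm / 238.03) * 0.99275 * (np.exp(L238 * t) - 1.0) * 205.97   # 206Pb from 238U
    pb_u5 = (u_ppm / 238.03) * 0.00720 * (np.exp(L235 * t) - 1.0) * 206.98   # 207Pb from 235U
    return pb_th + pb_u8 + pb_u5

def chemical_age(th_ppm, u_ppm, pb_ppm):
    """Solve pb_from_age(t) = measured Pb for t, returned in Ma."""
    t = brentq(lambda t: pb_from_age(t, th_ppm, u_ppm) - pb_ppm, 1.0, 4.5e9)
    return t / 1e6

# Hypothetical monazite composition, for illustration only:
print(f"chemical age ≈ {chemical_age(th_ppm=60000, u_ppm=3000, pb_ppm=1500):.0f} Ma")
```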
54.
Peak profiling and high-performance columns containing immobilized human serum albumin (HSA) were used to study the interaction kinetics of chiral solutes with this protein. This approach was tested using the phenytoin metabolites 5-(3-hydroxyphenyl)-5-phenylhydantoin (m-HPPH) and 5-(4-hydroxyphenyl)-5-phenylhydantoin (p-HPPH) as model analytes. HSA columns provided some resolution of the enantiomers for each phenytoin metabolite, which made it possible to simultaneously conduct kinetic studies on each chiral form. The dissociation rate constants for these interactions were determined by using both the single flow rate and multiple flow rate peak profiling methods. Corrections for non-specific interactions with the support were also considered. The final estimates obtained at pH 7.4 and 37°C for the dissociation rate constants of these interactions were 8.2–9.6 s⁻¹ for the two enantiomers of m-HPPH and 3.2–4.1 s⁻¹ for the enantiomers of p-HPPH. These rate constants agreed with previous values that have been reported for other drugs and solutes that have similar affinities and binding regions on HSA. The approach used in this report was not limited to phenytoin metabolites or HSA but could be applied to a variety of other chiral solutes and proteins. This method could also be adopted for use in the rapid screening of drug-protein interactions.
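As a rough illustration of the multiple-flow-rate peak profiling idea, the sketch below fits the commonly cited plate-height form of the method, in which the difference between the plate heights measured on the protein column (H_R) and on a control column (H_M) is assumed to follow H_R − H_M = 2·u·k / (k_d·(1+k)²), so a plot of (H_R − H_M) versus u·k/(1+k)² has slope 2/k_d. Both this working equation and the data are assumptions: the numbers are invented (merely chosen to give a k_d of the same order as the values reported above), and the cited study's exact equations may differ.

```python
import numpy as np

# Hypothetical measurements at several linear velocities u (cm/s):
u   = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
H_R = np.array([0.0046, 0.0073, 0.0099, 0.0124, 0.0150])  # plate height, analyte column (cm)
H_M = np.array([0.0020, 0.0021, 0.0022, 0.0023, 0.0024])  # plate height, control column (cm)
k   = 2.5                                                  # retention factor of the solute

# Assumed working equation: H_R - H_M = (2*u*k) / (k_d*(1+k)^2)
x = u * k / (1.0 + k) ** 2
slope, intercept = np.polyfit(x, H_R - H_M, 1)
k_d = 2.0 / slope
print(f"estimated dissociation rate constant k_d ≈ {k_d:.1f} s^-1")
```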
55.
Samples with a large number of compounds, or with similarities in their structure and polarity, may yield insufficient chromatographic resolution. In such cases, however, finding conditions where the largest number of compounds appears sufficiently resolved can still be worthwhile. A strategy is reported here that optimises the resolution level of chromatograms in cases where conventional global criteria, such as the worst resolved peak pair or the product of elementary resolutions, are not able to detect any separation, even when most peaks are baseline resolved. The strategy applies a function based on the number of "well resolved" peaks, which are those that exceed a given threshold of peak purity. It is, therefore, oriented to quantify the success of the separation, and not the failure, as the conventional criteria do. Conditions that resolve the same number of peaks are discriminated by either quantifying the partial resolution of those peaks that exceed the established threshold, or by improving the separation of peaks below it. The proposed approach is illustrated by the reversed-phase liquid chromatographic separation of a mixture of 30 ionisable and neutral compounds, using the acetonitrile content and pH as factors.
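A minimal scoring sketch in this spirit is given below: it counts the peaks whose purity exceeds a threshold and, to break ties between conditions that resolve the same number of peaks, adds a fractional bonus from the purity of the remaining peaks. The function name, threshold value and tie-breaking rule are illustrative assumptions, not the authors' exact criterion.

```python
def separation_score(peak_purities, threshold=0.9):
    """Score a chromatogram by the number of 'well resolved' peaks
    (purity >= threshold); ties between conditions are broken by how close
    the remaining peaks get to the threshold (a bonus strictly below 1,
    so it can never outweigh one fully resolved peak)."""
    well_resolved = sum(1 for p in peak_purities if p >= threshold)
    bonus = sum(p / threshold for p in peak_purities if p < threshold)
    return well_resolved + bonus / (len(peak_purities) + 1)

# Two candidate conditions that resolve the same number of peaks:
cond_a = [0.99, 0.95, 0.92, 0.60, 0.40]
cond_b = [0.99, 0.95, 0.92, 0.85, 0.20]
print(separation_score(cond_a), separation_score(cond_b))  # cond_b ranks higher
```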
56.
The chromatographic dimensionality was recently proposed as a measure of retention time spacing based on a power law (fractal) distribution. Using this model, a statistical overlap theory (SOT) for chromatographic peaks is developed that estimates the number of peak maxima as a function of the chromatographic dimension, saturation and scale. Power law models exhibit a threshold region whereby, below a critical saturation value, no loss of peak maxima due to peak fusion occurs as saturation increases. At moderate saturation, the behavior is similar to that of the random (Poisson) peak model. At still higher saturation, the power law model shows a loss of peaks that is nearly independent of the scale and dimension of the model. The physicochemical meaning of the power law scale parameter is discussed and shown to be equal to the Boltzmann-weighted free energy of transfer over the scale limits. The role of the scale is also discussed: a small scale range (small β) is shown to generate more uniform chromatograms, whereas large scale range chromatograms (large β) give occasional large excursions of retention times; this is a property of power laws, where "wild" behavior occasionally occurs. Both cases are shown to be useful depending on the chromatographic saturation. A scale-invariant form of the SOT shows very simple relationships between the fraction of peak maxima and the saturation, peak width and number of theoretical plates. These equations provide much insight into separations which follow power law statistics.
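To make the "fraction of peak maxima versus saturation" idea concrete, the sketch below runs a small Monte Carlo experiment: retention times are drawn with heavy-tailed (Pareto-distributed) spacings as a stand-in for a power-law spacing model, unit Gaussian peaks are summed, and the surviving maxima are counted as the number of components grows at fixed peak width and span. The spacing distribution, peak width and maxima detection are illustrative choices, not the paper's analytical SOT expressions.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)

def fraction_of_maxima(m_peaks, sigma, span=100.0, pareto_shape=2.0):
    """Place m_peaks Gaussian peaks with heavy-tailed spacings on [0, span]
    and return (observed maxima) / (true number of peaks)."""
    gaps = rng.pareto(pareto_shape, m_peaks) + 1.0       # heavy-tailed spacings
    centers = span * np.cumsum(gaps) / gaps.sum()        # rescale to the span
    t = np.linspace(0.0, span, 20000)
    signal = sum(np.exp(-(t - c) ** 2 / (2 * sigma ** 2)) for c in centers)
    maxima, _ = find_peaks(signal)
    return len(maxima) / m_peaks

# Saturation grows with the number of components at fixed span and peak width.
for m in (20, 50, 100, 200):
    print(m, round(fraction_of_maxima(m, sigma=0.25), 2))
```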
57.
The difference in B-term diffusion between fully porous and porous-shell particles is investigated using the physically sound diffusion equations originating from the Effective Medium Theory (EMT). Experimental B-term diffusion data obtained via peak-parking measurements on six different commercial particle types have been analyzed (three fully porous and three porous-shell types). All particles were investigated using the same experimental design and test analytes, over a very broad range of retention factor values. First, the B-term-reducing effect of the solid core (which induces an additional obstruction compared to fully porous particles) has been quantified using the Hashin-Shtrikman expression, showing that the presence of a solid core can account for a reduction of about 11% when the core diameter makes up 63% of the total particle diameter (Halo and Poroshell particles) and a reduction of 16% when the core diameter makes up 73% (Kinetex). The remaining differences can be attributed to differences in the microscopic structure of the meso-porous material (meso-pore diameter, internal porosity or relative void volume). The much lower B-term diffusion of Halo and Kinetex particles compared to the fully porous Acquity particles (some 20-40% difference, of which about 10-15% can be attributed to the presence of the solid core) can hence largely be attributed to the much smaller internal porosity and the smaller pore size of the meso-porous material making up the shell of these particles.
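As background to the Hashin-Shtrikman step mentioned above, the sketch below evaluates the standard Maxwell/Hashin-Shtrikman composite-sphere expression for the equivalent diffusivity of a sphere made of an impermeable core inside a diffusive shell, as a function of the core-to-particle diameter ratio ρ. Note the hedge: this only quantifies the obstruction inside a single particle; the ~11%/16% figures quoted above refer to the B-term of the whole packed bed within the full EMT framework, which is not reproduced here.

```python
def core_shell_obstruction(rho):
    """Equivalent diffusivity of a core-shell sphere relative to the shell
    material, for a fully impermeable core (Maxwell / Hashin-Shtrikman
    composite-sphere result): D_eq / D_shell = 2(1 - f) / (2 + f),
    where f = rho**3 is the core volume fraction."""
    f = rho ** 3
    return 2.0 * (1.0 - f) / (2.0 + f)

for rho in (0.63, 0.73):   # Halo/Poroshell-like and Kinetex-like ratios
    print(f"rho = {rho}: intraparticle diffusivity reduced to "
          f"{core_shell_obstruction(rho):.2f} of the fully porous value")
```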
58.
The true efficiency of a column is derived from the differences between the variances of the peak profiles of the same compound recorded in the presence and the absence of the chromatographic column. These variances are usually derived using one of three methods: (1) the retention time of the peak apex and its half-height width; (2) the moments of the best fit between the experimental data and a hybrid response function, e.g., an exponentially convoluted Gaussian; or (3) the exact moments of the experimental band profiles. Comparisons of the results of these methods show that the first method is always inaccurate because all the band profiles recorded are strongly tailing. The peak-fit method is accurate only for 4.6 mm I.D. columns operated with instruments having low extra-column volume, but fails for short narrow-bore columns due to the severe tailing of peaks passing through the complex channels of the extra-column volumes and to inaccuracies in the fit of the experimental data to the selected function. Although far better, the moment method may be inaccurate when the zero-dead-volume union used to measure the extra-column peak variances has a higher permeability than the column, causing the upstream part of the instrument to operate under comparatively low pressures.
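A minimal sketch of the moment-based calculation (method 3) is given below: the first moment and second central moment of the recorded profiles are computed by direct summation, the extra-column contribution measured without the column is subtracted, and the true plate count follows from the corrected retention time and variance. The synthetic tailing profiles are placeholders for real detector traces.

```python
import numpy as np

def moments(t, y):
    """First moment (centroid) and second central moment (variance) of a peak,
    assuming a uniformly sampled time axis."""
    area = y.sum()
    mu1 = (t * y).sum() / area
    var = ((t - mu1) ** 2 * y).sum() / area
    return mu1, var

def true_plate_count(t, y_with_column, y_without_column):
    mu_tot, var_tot = moments(t, y_with_column)      # column + extra-column
    mu_ext, var_ext = moments(t, y_without_column)   # extra-column only (zero-dead-volume union)
    t_col = mu_tot - mu_ext
    var_col = var_tot - var_ext
    return t_col ** 2 / var_col

# Synthetic tailing profiles built as exponentially modified Gaussians.
t = np.linspace(0, 10, 5000)
def emg(t, t0, sigma, tau):
    g = np.exp(-(t - t0) ** 2 / (2 * sigma ** 2))
    return np.convolve(g, np.exp(-t / tau), mode="full")[: t.size]

y_ext = emg(t, 0.3, 0.02, 0.05)      # instrument-only trace
y_tot = emg(t, 5.0, 0.05, 0.08)      # trace with the column in place
print(f"true plate count ≈ {true_plate_count(t, y_tot, y_ext):.0f}")
```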
59.
We report on a general theoretical assessment of the potential kinetic advantages of running LC gradient elution separations in the constant-pressure mode instead of in the customarily used constant-flow-rate mode. Analytical calculations as well as numerical simulation results are presented. It is shown that, provided both modes are run with the same volume-based gradient program, the constant-pressure mode can potentially offer an identical separation selectivity (apart from some small differences induced by the difference in pressure and viscous-heating trajectory), but in a significantly shorter time. For a gradient running between 5 and 95% of organic modifier, the decrease in analysis time can be expected to be of the order of some 20% for both water–methanol and water–acetonitrile gradients, and only weakly dependent on the value of V_G/V_0 (or, equivalently, t_G/t_0). Obviously, the gain will be smaller when the start and end compositions lie closer to the viscosity maximum of the considered water-organic modifier system. The assumptions underlying the obtained results (no effects of pressure and temperature on the viscosity or retention coefficient) are critically reviewed, and can be inferred to have only a small effect on the general conclusions. It is also shown that, under the adopted assumptions, the kinetic plot theory also holds for operations where the flow rate varies with time, as is the case for constant-pressure operation. Comparing both operation modes in a kinetic plot representing the maximal peak capacity versus time, it is theoretically predicted here that both modes can be expected to perform equally well in the fully C-term-dominated regime (where H varies linearly with the flow rate), while the constant-pressure mode is advantageous at all lower flow rates. Near the optimal flow rate, and for linear gradients running from 5 to 95% organic modifier, time gains of the order of some 20% can be expected (or 25–30% when accounting for the fact that the constant-pressure mode can be run without having to leave a pressure safety margin of 5–10%, as is needed in the constant-flow-rate mode).
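The origin of a gain of this order can be illustrated with a short numerical estimate: under constant pressure the flow rate scales as 1/η(φ) along the gradient, whereas under constant flow the pump must be set for the worst-case (maximum) viscosity, so the ratio of gradient times for the same delivered volume is roughly mean(η)/max(η) over the programmed composition range. The viscosity curve below is an invented smooth stand-in for a water-organic mixture (coefficients are not measured data), so the printed ratio is only of the same order as, not identical to, the ~20% quoted above.

```python
import numpy as np

def eta(phi):
    """Illustrative viscosity (mPa*s) of a water/organic mixture versus organic
    volume fraction phi: a made-up blend of water-like and organic-like values
    plus an excess term that puts the maximum near 20-25% organic."""
    return 0.89 * (1.0 - phi) + 0.37 * phi + 1.0 * phi * (1.0 - phi)

phi = np.linspace(0.05, 0.95, 1000)   # 5% -> 95% organic, linear volume-based gradient

# Constant flow: the flow rate is fixed by the viscosity maximum (pressure limit
# reached there), so the gradient time scales with max(eta).
# Constant pressure: F(t) ~ 1/eta, so the gradient time scales with mean(eta).
t_ratio = eta(phi).mean() / eta(phi).max()
print(f"t(constant pressure) / t(constant flow) ≈ {t_ratio:.2f}")
```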
60.
Ming Yin  Wei Liu  Xia Zhao  Qing-Wei Guo  Rui-Feng Bai 《Optik》2013,124(24):6896-6904
Image denoising is a basic problem in image processing, and the main challenge is how to effectively remove the noise while preserving detailed information. This paper presents a new image denoising algorithm based on the combination of a trivariate prior model in the nonsubsampled dual-tree complex contourlet transform (NSDTCT) domain with a non-local means filter (NLMF) in the spatial domain. Firstly, the NSDTCT is constructed by combining the dual-tree complex wavelet transform (DTCWT) and nonsubsampled directional filter banks (NSDFB), and the noisy image is decomposed using the NSDTCT. Secondly, based on the interscale and intrascale dependencies of the NSDTCT coefficients, the distribution of the high-frequency coefficients is modeled with a trivariate non-Gaussian distribution model. A nonlinear trivariate shrinkage function is derived in the framework of Bayesian theory; the denoised coefficients are obtained and the inverse NSDTCT is performed to get the initial denoised image. Finally, the NLMF is used to smooth the initial denoised image. Simulation experiments show that the algorithm outperforms several state-of-the-art denoising algorithms in terms of peak signal-to-noise ratio (PSNR), mean structural similarity (MSSIM) and visual quality.
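The NSDTCT construction and trivariate shrinkage are beyond a short sketch, but the overall two-stage structure (transform-domain shrinkage followed by non-local means smoothing, scored by PSNR/SSIM) can be mimicked with off-the-shelf scikit-image tools, as below. Plain wavelet shrinkage stands in for the trivariate NSDTCT shrinkage, and all parameter values (noise level, h, patch sizes) are illustrative assumptions.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.restoration import denoise_wavelet, denoise_nl_means, estimate_sigma

# Noisy test image (additive Gaussian noise on a standard grayscale image).
clean = img_as_float(data.camera())
rng = np.random.default_rng(0)
noisy = np.clip(clean + rng.normal(0, 0.08, clean.shape), 0, 1)

# Stage 1: transform-domain shrinkage (plain wavelet shrinkage here, as a
# stand-in for the trivariate shrinkage in the NSDTCT domain).
sigma_est = estimate_sigma(noisy)
stage1 = denoise_wavelet(noisy, sigma=sigma_est, mode="soft", rescale_sigma=True)

# Stage 2: non-local means smoothing of the initial denoised image.
stage2 = denoise_nl_means(stage1, h=0.6 * sigma_est, patch_size=5,
                          patch_distance=6, fast_mode=True)

for name, img in (("noisy", noisy), ("stage 1", stage1), ("stage 1 + NLM", stage2)):
    psnr = peak_signal_noise_ratio(clean, img, data_range=1.0)
    ssim = structural_similarity(clean, img, data_range=1.0)
    print(f"{name:13s}  PSNR = {psnr:5.2f} dB   SSIM = {ssim:.3f}")
```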