Full-text access type
Paid full text | 4,883 articles |
Free | 208 articles |
Free in China | 77 articles |
Subject classification
Chemistry | 1,099 articles |
Crystallography | 6 articles |
Mechanics | 119 articles |
General | 53 articles |
Mathematics | 678 articles |
Physics | 803 articles |
Radio/electronics | 2,410 articles |
Publication year
2024 | 8 articles |
2023 | 90 articles |
2022 | 61 articles |
2021 | 71 articles |
2020 | 88 articles |
2019 | 68 articles |
2018 | 79 articles |
2017 | 116 articles |
2016 | 147 articles |
2015 | 138 articles |
2014 | 250 articles |
2013 | 435 articles |
2012 | 409 articles |
2011 | 287 articles |
2010 | 230 articles |
2009 | 291 articles |
2008 | 249 articles |
2007 | 289 articles |
2006 | 273 articles |
2005 | 245 articles |
2004 | 181 articles |
2003 | 152 articles |
2002 | 139 articles |
2001 | 84 articles |
2000 | 102 articles |
1999 | 93 articles |
1998 | 70 articles |
1997 | 73 articles |
1996 | 74 articles |
1995 | 71 articles |
1994 | 37 articles |
1993 | 44 articles |
1992 | 27 articles |
1991 | 35 articles |
1990 | 19 articles |
1989 | 17 articles |
1988 | 20 articles |
1987 | 19 articles |
1986 | 9 articles |
1985 | 8 articles |
1984 | 13 articles |
1983 | 7 articles |
1982 | 12 articles |
1981 | 11 articles |
1980 | 11 articles |
1979 | 9 articles |
1978 | 2 articles |
1977 | 2 articles |
1974 | 1 article |
1959 | 1 article |
Sort order: 5,168 results found (search time: 15 ms)
61.
The dynamic mechanical properties of PET plain-weave fabric samples heat-set at different temperatures, and of their warp and weft yarns, were measured with a Rheovibron DDV-II-EA dynamic viscoelastic spectrometer. The dynamic mechanical-temperature spectra of the fabric and its yarns were found to differ markedly from those of the original fiber. A peak appears on the E-T curve; its maximum value E_max decreases exponentially with increasing heat-setting temperature, and the peak position shifts toward higher temperature. At the same time, double loss-modulus peaks appear on the E″-T curve, and the activation energies of the corresponding relaxation transitions differ by half an order of magnitude. A preliminary analysis attributes these effects to the weaving process and the subsequent heat-setting of the fabric.
62.
A high-performance thin-layer chromatographic (HPTLC) method is described for the determination of tributyltin compounds (bis(tri-n-butyltin) oxide, TBTO, and tri-n-butyltin naphthenate, TBTN) and their degradation products (dibutyltin and monobutyltin compounds). The organotin compounds are extracted from wood with ethanol containing 0.5% (v/v) of hydrochloric acid, and the separation of the different kinds of organotin compounds is achieved by thin-layer chromatography. The sample spots are measured using a scanning densitometer after decomposing the organotin compounds to inorganic tin by ultraviolet irradiation and visualization of the spots with pyrocatechol violet. Applications of the method to detection and quantification of organotin compounds in preservative solutions, in recently impregnated wood, and in wood samples from five-year-old window frames are described.
65.
I. Kuselman 《Accreditation and quality assurance》1998,3(3):131-133
It is argued that results of uncertainty calculations in chemical analysis should be taken into consideration with some caution owing to their limited generality. The issue of the uncertainty in uncertainty estimation is discussed in two aspects. The first is due to the differences between procedure-oriented and result-oriented uncertainty assessments, and the second is due to the differences between the theoretical calculation of uncertainty and its quantification using the validation (experimental) data. It is shown that the uncertainty calculation for instrumental analytical methods using a regression calibration curve is result-oriented and meaningful only until the next calibration. A scheme for evaluation of the uncertainty in uncertainty calculation by statistical analysis of experimental data is given and illustrated with examples from the author's practice. Some recommendations for the design of corresponding experiments are formulated.
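The regression-calibration case mentioned in the abstract can be illustrated with a standard textbook calculation: the standard uncertainty of a concentration predicted from a linear calibration line. This is a generic sketch with made-up calibration data, not the author's scheme; the function name and all numbers are illustrative.

```python
import numpy as np

def calibration_uncertainty(x, y, y0, m=1):
    """Standard uncertainty of a concentration predicted from a linear
    calibration curve (result-oriented: valid only for this calibration).

    x, y : calibration standards (concentration, signal)
    y0   : mean signal of the unknown, averaged over m replicates
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    b, a = np.polyfit(x, y, 1)                    # slope b, intercept a
    resid = y - (a + b * x)
    s_yx = np.sqrt(np.sum(resid**2) / (n - 2))    # residual std. deviation
    x0 = (y0 - a) / b                             # predicted concentration
    s_x0 = (s_yx / abs(b)) * np.sqrt(
        1/m + 1/n + (y0 - y.mean())**2 / (b**2 * np.sum((x - x.mean())**2)))
    return x0, s_x0

# Hypothetical five-point calibration and an unknown measured in triplicate
x = [0, 1, 2, 3, 4]
y = [0.02, 1.01, 1.98, 3.05, 3.99]
conc, u = calibration_uncertainty(x, y, y0=2.50, m=3)
```

A new calibration run gives new values of a, b and s_yx, and hence a different u, which is exactly the "meaningful only until the next calibration" point made above.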
66.
Zusammenfassung (translated from the German) This paper describes a method for determining retention indices that starts from a cubic relationship between the gross retention-time differences of the reference homologues and the carbon number. The net retention times follow directly from this, so the error of dead-time determination is eliminated. With the net retention times thus obtained, the retention indices are calculated via a cubic relationship lg ts = f(C). Extrapolations and interpolations are possible over 300 retention-index units with a mean error of ±0.02 retention-index units. The procedure lends itself to automatic calculation of the I values by on-line data processing.
Cubic calculation of retention indices without determining the dead-time tm
Summary The method for the calculation of retention indices described here is based on a third-order relationship between the logarithm of differences of unadjusted retention times of homologues and the carbon number. From this, adjusted retention times are calculated directly. A determination of the dead-time is not necessary, thus avoiding the errors connected with this factor. A cubic equation for the logarithm of the adjusted retention time lg ts as a function of carbon number Cn is used for the retention index calculation. Extrapolations and interpolations can be done over a range of 300 index units with an average deviation of ±0.02 i.u. The method offers the possibility of an automated on-line calculation of retention indices by computer merely on the basis of unadjusted retention times.
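The inversion step of such a cubic calibration, from a fitted lg ts = f(Cn) back to a retention index, can be sketched as follows. The alkane retention times are hypothetical, and the bisection-based inversion is one possible implementation, not necessarily the authors' procedure.

```python
import numpy as np

# Adjusted retention times (s) of n-alkane reference homologues
# (hypothetical values for illustration)
carbons = np.array([8, 9, 10, 11, 12])
t_adj = np.array([45.0, 78.0, 136.0, 240.0, 425.0])

# Cubic least-squares fit of lg(t_s) as a function of carbon number Cn
coef = np.polyfit(carbons, np.log10(t_adj), 3)

def retention_index(t_s, lo=7.0, hi=13.0):
    """Retention index: 100 x the (generally non-integer) carbon number
    whose fitted lg(t_s) matches the observed adjusted retention time."""
    target = np.log10(t_s)
    f = lambda c: np.polyval(coef, c) - target
    # simple bisection; the fitted curve is monotone over this range
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 100.0 * 0.5 * (lo + hi)
```

A compound eluting exactly at a reference alkane's adjusted time then gets an index of 100 times that alkane's carbon number, as expected.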
67.
In this paper we present and study a new algorithm for the Maximum Satisfiability (Max Sat) problem. The algorithm is based on the Method of Conditional Expectations (MOCE, also known as Johnson's Algorithm) and applies a greedy variable ordering to MOCE. Thus, we name it Greedy Order MOCE (GO-MOCE). We also suggest a combination of GO-MOCE with CCLS, a state-of-the-art solver. We refer to this combined solver as GO-MOCE-CCLS. We conduct a comprehensive comparative evaluation of GO-MOCE versus MOCE on random instances and on public competition benchmark instances. We show that GO-MOCE reduces the number of unsatisfied clauses by tens of percent, while keeping the runtime almost the same. The worst-case time complexity of GO-MOCE is linear. We also show that GO-MOCE-CCLS improves on CCLS consistently by up to about 80%. We study the asymptotic performance of GO-MOCE. To this end, we introduce three measures for evaluating the asymptotic performance of algorithms for Max Sat. We point out further possible improvements of GO-MOCE, based on an empirical study of the main quantities managed by GO-MOCE during its execution.
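The underlying MOCE step (without the greedy ordering that GO-MOCE contributes) can be sketched as follows: each variable in turn is fixed to the value that maximizes the conditional expected number of satisfied clauses, with the remaining variables treated as uniform random. The clause representation (DIMACS-style signed integers) and function names are illustrative, not taken from the paper.

```python
def expected_sat(clauses, assign):
    """Expected number of satisfied clauses when every unassigned
    variable is set uniformly at random: a clause with k free literals
    and no true literal is satisfied with probability 1 - 2^-k."""
    exp = 0.0
    for clause in clauses:
        free = 0
        satisfied = False
        for lit in clause:
            v = assign.get(abs(lit))
            if v is None:
                free += 1
            elif (lit > 0) == v:
                satisfied = True
                break
        if satisfied:
            exp += 1.0
        elif free:
            exp += 1.0 - 0.5 ** free
    return exp

def moce(clauses, n_vars):
    """Method of Conditional Expectations (Johnson's Algorithm):
    fix variables one by one, in the fixed order 1..n, to the value
    that maximizes the conditional expectation. GO-MOCE instead picks,
    at each step, the not-yet-assigned variable with the largest gain."""
    assign = {}
    for var in range(1, n_vars + 1):
        best = max((True, False),
                   key=lambda v: expected_sat(clauses, {**assign, var: v}))
        assign[var] = best
    return assign

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
a = moce(clauses, 3)
n_sat = sum(any((l > 0) == a[abs(l)] for l in c) for c in clauses)
```

The conditional expectation never decreases as variables are fixed, which is why MOCE always satisfies at least the expected number of clauses of a uniform random assignment.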
68.
《Digital Communications & Networks》2022,8(5):843-852
Heterogeneous Networks (HetNets) and cell densification represent promising solutions for the surging data traffic demand in wireless networks. In dense HetNets, user traffic is steered toward the Low-Power Node (LPN) when possible to enhance the user throughput and system capacity by increasing the area spectral efficiency. However, because of the transmit power differences in different tiers of HetNets and irregular service demand, a load imbalance typically exists among different serving nodes. To offload more traffic to LPNs and coordinate the Inter-Cell Interference (ICI), Third-Generation Partnership Project (3GPP) has facilitated the development of the Cell Range Expansion (CRE), enhanced Inter-Cell Interference Coordination (eICIC) and Further enhanced ICIC (FeICIC). In this paper, we develop a cell clustering-based load-aware offsetting and an adaptive Low-Power Subframe (LPS) approach. Our solution allows the separation of User Association (UA) functions at the User Equipment (UE) and network server such that users can make a simple cell-selection decision similar to that in the maximum Received Signal Strength (max-RSS) based UA scheme, where the network server computes the load-aware offsetting and required LPS periods based on the load conditions of the system. The proposed solution is evaluated using system-level simulations wherein the results correspond to performance changes in different service regions. Results show that our method effectively solves the offloading and interference coordination problems in dense HetNets.
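The biased cell-selection rule at the heart of CRE can be sketched in a few lines: the UE adds a per-cell offset to the measured signal strength and picks the largest biased value. The cell names, signal values and offset below are hypothetical, and this shows only the UE-side decision, not the paper's server-side load-aware offset computation.

```python
def select_cell(rss_dbm, bias_db):
    """Biased cell selection (CRE): pick the cell with the largest
    RSS plus its range-expansion offset. With all offsets zero this
    reduces to plain max-RSS association."""
    return max(rss_dbm, key=lambda cell: rss_dbm[cell] + bias_db.get(cell, 0.0))

# Hypothetical measurements: the macro cell is 9 dB stronger than the pico
rss = {"macro": -80.0, "pico": -89.0}

unbiased = select_cell(rss, {})                 # max-RSS: macro wins
expanded = select_cell(rss, {"pico": 12.0})     # 12 dB CRE offset offloads the UE
```

A UE offloaded this way sits in the pico's expanded region and suffers strong macro interference, which is why CRE is paired with eICIC/FeICIC low-power or almost-blank subframes.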
69.
Starting from the practical hands-on teaching needs of the building automation systems engineering course in the building intelligence program, an intelligent lighting monitoring training system was developed, with a simulated smart-building site as its setting. The system uses Honeywell WEBs as its software platform and a Honeywell Spyder DDC controller (PUB6438S) as its control core, performing real-time control based on PIR occupancy sensing, timed schedules, and illuminance sensing. With this training system, students can gain a deeper understanding of the full life cycle of building-automation hardware and software development, and systematically study and practice applied skills such as project design, programming, and commissioning, thereby improving the engineering practice abilities of vocational-college students.
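The control logic described (occupancy, schedule and illuminance combined) might be sketched as a simple decision function. The threshold and schedule values are invented for illustration; this is not the actual Spyder DDC program running on the training system.

```python
def lights_on(occupied, hour, lux, lux_threshold=300.0, schedule=(7, 22)):
    """Hypothetical lighting decision combining the three inputs the
    training system uses: PIR occupancy, a time schedule, and an
    illuminance threshold. Lights switch on only when the zone is
    occupied, the time is within scheduled hours, and daylight is
    insufficient."""
    in_schedule = schedule[0] <= hour < schedule[1]
    return bool(occupied and in_schedule and lux < lux_threshold)
```

In a real DDC program each of these three conditions would be a separate function block wired to a physical input, but the combined decision is the same AND of the three.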
70.
Using statistically designed experiments, 12,500 observations are generated from a 4-pieced Cobb-Douglas function exhibiting increasing and decreasing returns to scale in its different pieces. Performances of DEA and frontier regressions represented by COLS (Corrected Ordinary Least Squares) are compared at sample sizes of n = 50, 100, 150 and 200. Statistical consistency is exhibited, with performances improving as sample sizes increase. Both DEA and COLS generally give good results at all sample sizes. In evaluating efficiency, DEA generally shows superior performance, with BCC models being best (except at corner points), followed by the CCR model and then by COLS, with log-linear regressions performing better than their translog counterparts at almost all sample sizes. Because of the need to consider locally varying behavior, only the CCR and translog models are used for returns to scale, with CCR being the better performer. An additional set of 7,500 observations was generated under conditions that made it possible to compare efficiency evaluations in the presence of collinearity and with model misspecification in the form of added and omitted variables. Results were similar to the larger experiment: the BCC model is the best performer. However, COLS exhibited surprisingly good performances, which suggests that COLS may have previously unidentified robustness properties, while the CCR model is the poorest performer when one of the variables used to generate the observations is omitted.
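The COLS estimator compared in the study can be sketched in a few lines: fit an OLS regression in logs, then shift the intercept by the largest residual so the fitted frontier envelops all observations. The data-generating numbers below are illustrative, not those of the paper's experiments.

```python
import numpy as np

def cols_efficiency(log_x, log_y):
    """Corrected OLS (COLS): OLS fit of log output on log input(s),
    intercept shifted up by the maximum residual. Efficiency of
    observation i is exp(e_i - max_j e_j), so the best-practice
    unit scores exactly 1 and all others score below 1."""
    X = np.column_stack([np.ones(len(log_y)), log_x])
    beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
    resid = log_y - X @ beta
    return np.exp(resid - resid.max())

# Hypothetical log-linear (single-input Cobb-Douglas) data with
# one-sided inefficiency drawn from an exponential distribution
rng = np.random.default_rng(0)
log_x = rng.uniform(0.0, 2.0, size=50)
log_y = 0.5 + 0.7 * log_x - rng.exponential(0.2, size=50)
eff = cols_efficiency(log_x, log_y)
```

Because the whole frontier is shifted by a single constant, COLS imposes the OLS shape on the frontier, which is one reason the piecewise-linear DEA envelopes can track the 4-pieced technology above more closely.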