Full-text access type
Paid full text | 348 articles |
Free | 17 articles |
Free in China | 41 articles |
Subject category
Chemistry | 169 articles |
Crystallography | 1 article |
Mechanics | 16 articles |
General | 4 articles |
Mathematics | 116 articles |
Physics | 100 articles |
Publication year
2023 | 19 articles |
2022 | 5 articles |
2021 | 9 articles |
2020 | 13 articles |
2019 | 14 articles |
2018 | 5 articles |
2017 | 10 articles |
2016 | 14 articles |
2015 | 10 articles |
2014 | 22 articles |
2013 | 35 articles |
2012 | 25 articles |
2011 | 14 articles |
2010 | 20 articles |
2009 | 17 articles |
2008 | 19 articles |
2007 | 23 articles |
2006 | 12 articles |
2005 | 11 articles |
2004 | 16 articles |
2003 | 9 articles |
2002 | 13 articles |
2001 | 7 articles |
2000 | 6 articles |
1999 | 7 articles |
1998 | 4 articles |
1997 | 5 articles |
1996 | 3 articles |
1995 | 4 articles |
1994 | 7 articles |
1993 | 5 articles |
1991 | 1 article |
1990 | 2 articles |
1989 | 5 articles |
1987 | 1 article |
1985 | 5 articles |
1984 | 3 articles |
1982 | 1 article |
1981 | 2 articles |
1980 | 3 articles |
Sorted by: 406 results found, search time 0 ms
41.
The network loading problem (NLP) is a specialized capacitated network design problem in which prescribed point-to-point demand between various pairs of nodes of a network must be met by installing (loading) a capacitated facility. We can load any number of units of the facility on each of the arcs at a specified arc-dependent cost. The problem is to determine the number of facilities to be loaded on the arcs that will satisfy the given demand at minimum cost. This paper studies two core subproblems of the NLP. The first problem, motivated by a Lagrangian relaxation approach for solving the problem, considers a multiple-commodity, single-arc capacitated network design problem. The second problem is a three-node network; this specialized network arises in larger networks if we aggregate nodes. In both cases, we develop families of facets and completely characterize the convex hull of feasible solutions to the integer programming formulation of the problems. These results in turn strengthen the formulation of the NLP. Research of this author was supported in part by a Faculty Grant from the Katz Graduate School of Business, University of Pittsburgh.
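As a toy illustration of the single-arc subproblem's cost structure (our sketch; the `capacity` and `unit_cost` parameters are hypothetical names, not taken from the paper): all commodities routed across one arc share the installed facility units, so the cheapest feasible loading is the ceiling of total demand over facility capacity.

```python
import math

def single_arc_loading_cost(demands, capacity, unit_cost):
    """Cheapest loading of a single arc: install just enough integer
    facility units to cover the total demand of all commodities.
    (Illustrative sketch only; real NLP instances couple many arcs.)"""
    units = math.ceil(sum(demands) / capacity)
    return units * unit_cost

# Three commodities with demands 3, 5 and 4 sharing facilities of
# capacity 4 need ceil(12 / 4) = 3 units.
print(single_arc_loading_cost([3, 5, 4], capacity=4, unit_cost=10))
```

The integrality of `units` is exactly what makes the full problem hard; the paper's facets tighten the LP relaxation of this rounding step.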
42.
J. Sánchez-Marín I. Nebot-Gil D. Maynau J. P. Malrieu 《Theoretical chemistry accounts》1995,92(4):241-252
Summary A previously proposed procedure that includes the linked and unlinked contributions due to triple and quadruple excitations in a size-consistent SDCI-like model has been applied to the single-bond systems HF and F2. The procedure is a non-iterative approximation to the more general total dressing model, which is based on intermediate Hamiltonian theory. Three basis sets have been employed: the correlation-consistent cc-pVTZ basis, a similar one including 3d1f polarization functions, and another including one set of g polarization functions. Excellent agreement with experiment and with high-quality calculations is obtained for both equilibrium distances and spectroscopic constants. The possibilities of the method in treating single-bond breaking are also demonstrated. Finally, the linked and non-linked contributions from triple and quadruple excitations are analysed separately, and it is suggested that adding the linked triples to the size-consistent SDCI is sufficient to bring the spectroscopic properties from the size-consistent SDCI level to nearly experimental values.
43.
44.
Peter Köhler 《Journal of Computational and Applied Mathematics》1994,50(1-3):349-360
Let R[f] be the remainder of some approximation method, having estimates of the form |R[f]| ≤ ρ_i ‖f^(i)‖ for i = 0,…, r. In many cases, ρ_0 and ρ_r are known, but not the intermediate error constants ρ_1,…,ρ_{r−1}. For periodic functions, Ligun (1973) obtained an estimate for these intermediate error constants in terms of ρ_0 and ρ_r. In this paper, we show that this holds in the nonperiodic case, too. For instance, the estimates obtained can be applied to the error of polynomial or spline approximation and interpolation, or to numerical integration and differentiation.
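Estimates of this kind typically take the multiplicative Landau–Kolmogorov form; the following is a hedged sketch of what such a bound looks like (our paraphrase — the constant K and the exact exponents are illustrative, not quoted from Ligun (1973) or from this paper):

```latex
% Intermediate error constants bounded by the two known extreme ones,
% interpolating geometrically between \rho_0 and \rho_r:
\rho_i \;\le\; K \, \rho_0^{\,1 - i/r} \, \rho_r^{\,i/r},
\qquad i = 1, \dots, r-1 .
```

The point of the paper is that a bound of this flavour, previously available for periodic functions, survives in the nonperiodic setting.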
45.
Jafar Fathali Hossein T. Kakhki Rainer E. Burkard 《Central European Journal of Operations Research》2006,14(3):229-246
Let a graph G = (V, E) with vertex set V and edge set E be given. The classical graph version of the p-median problem asks for a subset X ⊆ V of cardinality p, so that the (weighted) sum of the minimum distances from X to all other vertices in V is minimized. We consider the semi-obnoxious case, where every vertex has either a positive or a negative weight. This gives rise to two different objective functions, namely the weighted sum of the minimum distances from X to the vertices in V\X and, differently, the sum over the minimum weighted distances from X to V\X. In this paper an Ant Colony algorithm with a tabu restriction is designed for both problems. Computational results show its superiority with respect to a previously investigated variable neighborhood search and a tabu search heuristic. This research has been partially supported by the Spezialforschungsbereich F 003 “Optimierung und Kontrolle”, Projektbereich Diskrete Optimierung.
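The difference between the two semi-obnoxious objectives is easy to see on a toy instance; a minimal sketch (function and variable names are ours, not from the paper), where `dist` is a full distance matrix over vertices 0..n−1 and weights may be negative:

```python
def objectives(dist, weights, X):
    """The two semi-obnoxious p-median objectives for a median set X.
    obj1: weighted sum of minimum distances,  sum_v w_v * min_{x in X} d(v, x)
    obj2: sum of minimum weighted distances,  sum_v min_{x in X} w_v * d(v, x)
    (Illustrative sketch; dist is a full distance matrix, weights may be
    negative for obnoxious vertices.)"""
    others = [v for v in range(len(dist)) if v not in X]
    obj1 = sum(weights[v] * min(dist[v][x] for x in X) for v in others)
    obj2 = sum(min(weights[v] * dist[v][x] for x in X) for v in others)
    return obj1, obj2

# Toy instance: vertex 2 has a negative weight, vertex 3 a positive one.
dist = [[0, 3, 1, 2], [3, 0, 4, 5], [1, 4, 0, 9], [2, 5, 9, 0]]
weights = [0, 0, -1, 2]
print(objectives(dist, weights, {0, 1}))  # the two values differ
```

With only positive weights the two objectives coincide (the weight factors out of the minimum); a negative weight lets the inner minimum of the second objective prefer a farther median, which is why the two problems need separate treatment.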
46.
Amit Verma 《Physics letters. A》2010,374(8):1009-1020
A generalized notion of higher order nonclassicality (in terms of higher order moments) is introduced. Under this generalized framework, conditions for higher order squeezing and higher order sub-Poissonian photon statistics are derived. A simpler form of the Hong–Mandel higher order squeezing criterion is derived within this framework by using an operator ordering theorem introduced by us in [A. Pathak, J. Phys. A 33 (2000) 5607], and it is also generalized to the multi-photon Bose operators of Brandt and Greenberg. Similarly, the condition for higher order sub-Poissonian photon statistics is derived by normal ordering of higher powers of the number operator. Further, with the help of simple density matrices, it is shown that higher order antibunching (HOA) and higher order sub-Poissonian photon statistics (HOSPS) are not manifestations of the same phenomenon, and consequently it is incorrect to use the condition of HOA as a test of HOSPS. It is also shown that HOA and HOSPS may exist even in the absence of the corresponding lower order phenomenon. The binomial state, the nonlinear first order excited squeezed state (NLESS), and the nonlinear vacuum squeezed state (NLVSS) are used as example quantum states, and it is shown that these states may exhibit higher order nonclassical characteristics. It is observed that the binomial state, which is always antibunched, is not always higher order squeezed, and that the NLVSS, which shows higher order squeezing, does not show HOSPS or HOA. The opposite is observed for the NLESS, and consequently it is established that HOSPS and HOS are two independent signatures of higher order nonclassicality.
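The sub-Poissonian side of this abstract can be illustrated numerically with factorial moments of the photon-number distribution. The sketch below uses a simplified order-k criterion, ⟨n(n−1)⋯(n−k+1)⟩ < ⟨n⟩^k (our paraphrase of this family of criteria, not the paper's exact operator-ordered definition; all function names are ours):

```python
import math

def factorial_moment(pmf, k, nmax=100):
    """k-th factorial moment <n(n-1)...(n-k+1)> of a photon-number
    distribution given as a function pmf(n)."""
    return sum(math.perm(n, k) * pmf(n) for n in range(nmax + 1))

def sub_poissonian_order_k(pmf, k, nmax=100):
    """Simplified order-k sub-Poissonian test: k-th factorial moment
    strictly below <n>**k. Coherent (Poissonian) light sits exactly on
    this boundary; its k-th factorial moment equals <n>**k."""
    mean = factorial_moment(pmf, 1, nmax)
    return factorial_moment(pmf, k, nmax) < mean ** k

# Binomial-state photon statistics B(M, p): sub-Poissonian at every order k >= 2.
M, p = 10, 0.3
binomial_state = lambda n: math.comb(M, n) * p**n * (1 - p)**(M - n) if n <= M else 0.0

# Poissonian reference with mean photon number lam (the boundary case):
lam = 2.0
poisson = lambda n: math.exp(-lam) * lam**n / math.factorial(n)
print(sub_poissonian_order_k(binomial_state, 2))  # True
```

The binomial state stays below the boundary at every order, matching the abstract's remark that it is always antibunched; whether it is also higher order squeezed is a separate question, which is precisely the independence the paper establishes.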
47.
The randomly driven Cohen–Kulsrud–Burgers equation is used to study the influence of viscous intermediate shocks (IS) on Alfvénic turbulence. Some of these structures are unstable and undergo gradient collapse, leading, as the viscosity is reduced, to increasingly intermittent dissipation bursts. The slow decay at intermediate scales of stable IS prevents the existence of a usual inertial range. Furthermore, the dissipation is unable to adiabatically compensate for the energy injection, making the total energy sensitive to the viscosity parameter. Turbulence thus loses its universal character. Preliminary simulations extend these conclusions to magnetohydrodynamic equations with anisotropic viscosity, typical of strongly magnetized plasmas.
48.
In this paper, a directional distance approach is proposed to deal with network DEA problems in which the processes may generate not only desirable final outputs but also undesirable outputs. The proposed approach is applied to the problem of modelling and benchmarking airport operations. The corresponding network DEA model considers two processes (Aircraft Movement and Aircraft Loading) with two final outputs (Annual Passenger Movement and Annual Cargo Handled), one intermediate product (Aircraft Traffic Movements) and two undesirable outputs (Number of Delayed Flights and Accumulated Flight Delays). The approach has been applied to Spanish airport data for the year 2008, comparing the computed directional distance efficiency scores with those obtained using a conventional, single-process directional distance function approach. From this comparison, it can be concluded that the proposed network DEA approach has more discriminatory power than its single-process counterpart, uncovering more inefficiencies and providing more valid results.
49.
Network DEA pitfalls: Divisional efficiency and frontier projection under general network structures
Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs). Recently, network DEA models have been developed to examine the efficiency of DMUs with internal structures. The internal network structures range from a simple two-stage process to a complex system where multiple divisions are linked together by intermediate measures. In general, there are two types of network DEA models: one is developed from the standard multiplier DEA models based upon the DEA ratio efficiency, and the other from the envelopment DEA models based upon production possibility sets. While the multiplier and envelopment DEA models are dual and equivalent under standard DEA, this is not necessarily true for the two types of network DEA models. Pitfalls in network DEA are discussed with respect to the determination of divisional efficiency, frontier type, and projections. We point out that the envelopment-based network DEA model should be used for determining the frontier projection for inefficient DMUs, while the multiplier-based network DEA model should be used for determining divisional efficiency. Finally, we demonstrate that under general network structures the multiplier and envelopment network DEA models are two different approaches: the divisional efficiency obtained from the multiplier network DEA model can be infeasible in the envelopment network DEA model, which indicates that the two types of models use different concepts of efficiency. We further demonstrate that the envelopment model’s divisional efficiency may actually be the overall efficiency.
50.
M. Grace Russell Timothy F. Jamison 《Angewandte Chemie (Weinheim an der Bergstrasse, Germany)》2019,131(23):7760-7763
Herein, the blockbuster antibacterial drug linezolid is synthesized from simple starting materials by a convergent continuous flow sequence involving seven chemical transformations. This is the highest total number of distinct reaction steps ever performed in continuous flow without conducting solvent exchanges or intermediate purifications. Linezolid was obtained in 73 % isolated yield with a total residence time of 27 minutes, corresponding to a throughput of 816 mg h−1.