101.
For many industries (e.g., apparel retailing), managing demand through price adjustments is often the only tool left to companies once replenishment decisions are made. A significant amount of uncertainty about the magnitude and price sensitivity of demand can be resolved using early sales information. In this study, a Bayesian model is developed to summarize sales information and pricing history efficiently. This model is incorporated into a periodic pricing model to optimize revenue for a given stock of items over a finite horizon. A computational study is carried out to identify the circumstances under which learning is most beneficial. The model is extended to allow replenishments within the season, in order to shed light on the global sourcing decisions made by apparel retailers. Some of the findings are empirically validated using data from the U.S. apparel industry.
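The abstract does not specify the Bayesian model; a common conjugate choice for this setting is a Gamma prior on a Poisson demand rate, updated from early sales, then fed into a simple revenue-maximizing price search. The demand curve, elasticity, and all numbers below are illustrative assumptions, not the paper's model:

```python
def update_gamma_poisson(alpha, beta, sales, periods):
    """Conjugate update of a Gamma(alpha, beta) prior on the Poisson
    demand rate after observing `sales` units over `periods` periods."""
    return alpha + sales, beta + periods

def posterior_mean(alpha, beta):
    return alpha / beta

def best_price(alpha, beta, prices, stock, elasticity=1.5, ref_price=50.0):
    """Grid search for the price maximizing expected one-period revenue,
    capping expected sales at the remaining stock. The constant-elasticity
    demand curve is an illustrative assumption."""
    lam = posterior_mean(alpha, beta)
    def revenue(p):
        rate = lam * (p / ref_price) ** (-elasticity)
        return p * min(rate, stock)
    return max(prices, key=revenue)

# Prior belief: mean rate 10 units/period; early sales: 36 units in 2 periods.
a, b = update_gamma_poisson(10.0, 1.0, sales=36, periods=2)
print(posterior_mean(a, b))                      # belief shifts toward the data
print(best_price(a, b, range(40, 101, 5), stock=8))
```

With scarce stock the search trades off margin against the chance of selling out, which is the tension the early-sales learning is meant to resolve.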
102.
Sampling errors can be divided into two classes: incorrect sampling errors and correct sampling errors. Incorrect sampling errors arise from incorrectly designed sampling equipment or procedures. Correct sampling errors are due to the heterogeneity of the material in sampling targets. Excluding the incorrect sampling errors, which can all be eliminated in practice although informed and diligent work is often needed, five factors dominate sampling variance: two factors related to material heterogeneity (analyte concentration; distributional heterogeneity) and three related to the sampling process itself (sample type, sample size, sampling mode). Due to highly significant interactions, a comprehensive appreciation of their combined effects is far from trivial and has in fact never been illustrated in detail. Heterogeneous materials can be well characterized by the first two factors, while all essential sampling process characteristics can be summarized by combinations of the latter three. We present simulations based on an experimental design that varies all five factors. Within the framework of the Theory of Sampling, the empirical Total Sampling Error is a function of the fundamental sampling error and the grouping and segregation error interacting with a specific sampling process. We illustrate the absolute and relative sampling variance levels resulting from a wide array of simulated repeated samplings and express the effects through pertinent lot mean estimates and the associated Root Mean Squared Errors/sampling variances, covering specific combinations of material heterogeneity and typical sampling procedures used in current science, technology and industry.
Factors, levels and interactions are varied within limits selected to match realistic materials and sampling situations, mimicking, e.g., sampling for genetically modified organisms; sampling of geological drill cores; sampling during off-loading of three-dimensional lots (shiploads, railroad cars, truckloads, etc.); and scenarios representing a range of industrial manufacturing and production processes. A new simulation facility, "SIMSAMP", is presented with selected results chosen to show its wider applicability. This contribution provides a general exposé of all essential effects in the regime covered by "correct sampling errors", valid for all types of materials in which unbiased sampling can be achieved.
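The SIMSAMP facility itself is not reproduced here, but the core mechanism — sampling variance driven by distributional heterogeneity and reduced by composite sampling — can be sketched with a toy simulation. The lot model (a linear concentration trend plus unit noise) and all parameters are illustrative assumptions:

```python
import random
import statistics

def simulate_sampling(lot, sample_size, increments, n_reps, rng):
    """Estimate the lot mean repeatedly; each sample is composed of
    `increments` randomly placed contiguous increments totalling
    `sample_size` units. increments=1 is a single grab sample."""
    estimates = []
    inc_len = sample_size // increments
    for _ in range(n_reps):
        units = []
        for _ in range(increments):
            start = rng.randrange(len(lot) - inc_len + 1)
            units.extend(lot[start:start + inc_len])
        estimates.append(statistics.mean(units))
    return estimates

rng = random.Random(42)
# A segregated lot: concentration trends along the lot
# (distributional heterogeneity), plus unit-to-unit noise.
lot = [i / 1000 + rng.gauss(0, 0.05) for i in range(1000)]
true_mean = statistics.mean(lot)

grab = simulate_sampling(lot, 40, 1, 500, rng)
composite = simulate_sampling(lot, 40, 8, 500, rng)
rmse = lambda est: statistics.mean([(e - true_mean) ** 2 for e in est]) ** 0.5
print(rmse(grab), rmse(composite))  # composite sampling cuts the error sharply
```

On a segregated lot the grouping and segregation error dominates a grab sample; spreading the same sample mass over more increments averages the trend out, which is the kind of interaction the five-factor design quantifies systematically.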
103.
104.
Mahdi Zarghami, Ferenc Szidarovszky, Reza Ardakanian. Fuzzy Optimization and Decision Making, 2008, 7(1): 1-15
All realistic Multi-Criteria Decision Making (MCDM) problems face various kinds of uncertainty. Since the evaluations of alternatives with respect to the criteria are uncertain, they will be assumed to have a stochastic nature. To obtain the uncertain optimism degree of the decision maker, fuzzy linguistic quantifiers will be used. Then a new approach for fuzzy-stochastic modeling of MCDM problems will be introduced by merging the stochastic and fuzzy approaches into the OWA operator. The results of the new approach, entitled FSOWA, give the expected value and the variance of the combined goodness measure for each alternative. A robust decision depends on the combined goodness measures of the alternatives and also on the variation of these measures under uncertainty. In order to combine these two characteristics, a composite goodness measure will be defined. The theoretical results will be illustrated in a watershed management problem. Using this measure will give decisions that are more sensitive to stakeholders whose optimism degrees differ from that of the decision maker. FSOWA can be used for robust decision making on competing alternatives under uncertainty.
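One illustrative reading of this combination (not the paper's exact derivation): derive OWA weights from a RIM quantifier parameterized by the optimism degree, apply them to the ordered criterion means, and propagate the criterion variances through the squared weights. The numbers and the variance-penalizing composite measure below are assumptions for the sketch:

```python
def owa_weights(n, alpha):
    """Weights from the RIM quantifier Q(r) = r**alpha; alpha encodes
    the decision maker's optimism degree (alpha < 1 is optimistic)."""
    return [(i / n) ** alpha - ((i - 1) / n) ** alpha for i in range(1, n + 1)]

def fsowa(means, variances, alpha):
    """Expected value and variance of the OWA-combined goodness when the
    criterion evaluations are independent random variables."""
    order = sorted(range(len(means)), key=lambda i: -means[i])
    w = owa_weights(len(means), alpha)
    mu = sum(wi * means[i] for wi, i in zip(w, order))
    var = sum(wi ** 2 * variances[i] for wi, i in zip(w, order))
    return mu, var

# Two alternatives over three criteria (means, variances), pessimistic DM:
a = fsowa([0.9, 0.6, 0.4], [0.01, 0.02, 0.01], alpha=2.0)
b = fsowa([0.7, 0.7, 0.6], [0.00, 0.01, 0.00], alpha=2.0)

# Composite goodness penalizes spread to favor the robust alternative:
composite = lambda mv, k=1.0: mv[0] - k * mv[1] ** 0.5
print(composite(a), composite(b))
```

Here the steadier alternative `b` wins under a pessimistic quantifier despite its lower peak score, which is exactly the robustness trade-off the composite measure is meant to capture.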
105.
V. V. Podinovski. Computational Mathematics and Mathematical Physics, 2008, 48(11): 1981-1998
Multicriteria decision-making problems with continuous or discrete criteria bounded above, below, or on both sides are considered. Methods are proposed for comparing solution variants using information accumulated in the form of interval estimates of the rates at which the values of some criteria can be replaced by the values of others (such replacements are called tradeoffs; in other words, a deterioration in some criteria is compensated by an improvement in the values of others), along with simple consistency conditions for such information. The construction of the set of nondominated variants is also discussed.
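The paper refines dominance using interval tradeoff information; the baseline it builds on, filtering a discrete set of variants down to the Pareto-nondominated ones, can be sketched as follows (the variant data are made up for illustration):

```python
def dominates(x, y):
    """x dominates y if it is at least as good on every criterion
    (higher is better here) and strictly better on at least one."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def nondominated(variants):
    """Keep only variants not dominated by any other variant."""
    return [v for v in variants
            if not any(dominates(u, v) for u in variants if u is not v)]

variants = [(3, 5), (4, 4), (2, 6), (3, 3), (1, 1)]
print(nondominated(variants))  # (3, 3) and (1, 1) are dominated
```

Interval tradeoff estimates shrink this set further: a variant may be discarded when every feasible exchange rate in the interval makes some other variant preferable, even without componentwise dominance.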
106.
We study inventory systems with two demand classes (critical and non-critical), Poisson demand and backordering. We analyze dynamic rationing strategies in which the number of items reserved for critical demand depends on the remaining time until the next order arrives. In contrast to results in the literature, we do not discretize demand but derive a set of formulae that determine the optimal rationing level for any value of the remaining time. Moreover, we show that the cost parameters can be captured in a single relevant dimension, which allows us to present the optimal rationing levels in charts and lookup tables that are easy to implement. Numerical examples illustrate that the optimal dynamic rationing strategy outperforms all static strategies with fixed rationing levels.
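The paper's closed-form optimality conditions are not reproduced in the abstract; a simple service-level heuristic conveys the key qualitative behavior, a reserve for critical demand that shrinks as the next replenishment approaches. The Poisson rate and the service target below are illustrative assumptions:

```python
import math

def poisson_sf(k, mu):
    """P(X > k) for X ~ Poisson(mu)."""
    return 1.0 - sum(math.exp(-mu) * mu ** i / math.factorial(i)
                     for i in range(k + 1))

def rationing_level(t, crit_rate, beta):
    """Smallest reserve k such that critical demand during the remaining
    time t exceeds k with probability at most 1 - beta. This is a
    service-level heuristic, not the paper's optimality condition."""
    k = 0
    while poisson_sf(k, crit_rate * t) > 1.0 - beta:
        k += 1
    return k

# The reserve shrinks monotonically as the replenishment gets closer:
levels = [rationing_level(t, crit_rate=2.0, beta=0.95) for t in (4, 2, 1, 0.25)]
print(levels)
```

Because the level is a function of continuous remaining time, no demand discretization is needed, mirroring the continuous-time treatment the abstract emphasizes.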
107.
In this paper, we examine a joint lot-sizing and process investment problem with random yield and backorders. We allow for inspection and develop stochastic models that provide the optimal inspection and lot-sizing policy, as well as the optimal process investment for variance reduction. The process quality loss profile around the target is captured via a modification of the Reflected Normal loss function. We conduct numerical experiments assuming that the proportion of defectives follows a Uniform distribution while the process quality characteristic follows either a Normal or a Uniform distribution. We also develop closed-form solutions that depend on at most the first two moments of any general probability distribution of defective units, which allow us to examine the nature of the optimal policies. We demonstrate via numerical experiments the value of our integrated approach for jointly determining optimal inventory, inspection, and investment policies. Overall, our models and analyses provide interesting insights into this reasonably complex inventory-quality problem and open up several avenues for future work in this area.
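The paper uses a modification of the Reflected Normal loss; the standard form (Spiring's), sketched below with made-up numbers, is bounded by a maximum loss K, near-quadratic close to the target, and flattening for large deviations — which is what makes it suitable for quality loss, unlike an unbounded quadratic:

```python
import math

def reflected_normal_loss(y, target, gamma, K=1.0):
    """Reflected Normal loss: K * (1 - exp(-(y - T)^2 / (2 * gamma^2))).
    gamma controls how quickly the loss approaches its ceiling K."""
    return K * (1.0 - math.exp(-(y - target) ** 2 / (2.0 * gamma ** 2)))

# Zero at the target, saturating at K far from it:
print(reflected_normal_loss(10.0, 10.0, 2.0),   # exactly 0 at the target
      reflected_normal_loss(12.0, 10.0, 2.0),
      reflected_normal_loss(30.0, 10.0, 2.0))   # approximately K
```

In the joint model, expected loss under the quality characteristic's distribution enters the objective alongside inventory and inspection costs, so reducing process variance directly lowers the expected loss term.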
108.
We consider the evolution of a black hole involving an f(R) global monopole based on the Extended Uncertainty Principle (EUP). The evolution refers to instability due to Parikh-Kraus-Wilczek tunneling radiation or fragmentation. It is found that the EUP corrections enlarge the entropy difference, encouraging the black hole to radiate more significantly. We also show that the EUP effects lead to the black hole's fragmentation. The global monopole and the f(R) modification of general relativity can also adjust the black hole's evolution, but they cannot change the final result: because of the EUP effects, the black hole will not be stable.
109.
Probabilistic predictions with machine learning are important in many applications and are commonly produced with Bayesian learning algorithms. However, Bayesian learning methods are computationally expensive compared with non-Bayesian methods. Furthermore, the data used to train these algorithms are often distributed over a large group of end devices. Federated learning can be applied in this setting in a communication-efficient and privacy-preserving manner, but it does not include predictive uncertainty. To represent predictive uncertainty in federated learning, we suggest introducing uncertainty in the aggregation step of the algorithm by treating the set of local weights as a posterior distribution for the weights of the global model. We compare our approach to state-of-the-art Bayesian and non-Bayesian probabilistic learning algorithms. By applying proper scoring rules to evaluate the predictive distributions, we show that our approach can achieve performance similar to what the benchmarks would achieve in a non-distributed setting.
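A hedged reading of the aggregation idea: instead of averaging client weights into a single point estimate, keep their per-coordinate mean and variance as a diagonal Gaussian posterior and form an ensemble prediction by sampling from it. The linear model, client weights, and all numbers are made up for illustration:

```python
import random
import statistics

def aggregate(client_weights):
    """Treat the set of local weight vectors as samples from a posterior
    over the global weights: keep per-coordinate mean and variance
    instead of collapsing them into a single average."""
    means = [statistics.mean(ws) for ws in zip(*client_weights)]
    variances = [statistics.pvariance(ws) for ws in zip(*client_weights)]
    return means, variances

def predictive_samples(means, variances, x, n, rng):
    """Ensemble predictions from weights drawn from the diagonal Gaussian
    posterior; a linear model keeps the sketch minimal."""
    preds = []
    for _ in range(n):
        w = [rng.gauss(m, v ** 0.5) for m, v in zip(means, variances)]
        preds.append(sum(wi * xi for wi, xi in zip(w, x)))
    return preds

# Three clients trained a 2-weight linear model locally:
clients = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]]
means, variances = aggregate(clients)
rng = random.Random(0)
preds = predictive_samples(means, variances, x=[1.0, 1.0], n=2000, rng=rng)
print(statistics.mean(preds), statistics.stdev(preds))  # prediction with spread
```

Disagreement between clients shows up directly as predictive spread, which is the quantity a proper scoring rule (e.g. the log score or CRPS) then rewards for being well calibrated.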
110.
Christina Petschnigg, Markus Spitzner, Lucas Weitzendorf, Jürgen Pilz. Entropy (Basel, Switzerland), 2021, 23(3)
The 3D modelling of indoor environments and the generation of process simulations play an important role in factory and assembly planning. In brownfield planning cases, existing data are often outdated and incomplete, especially for older plants, which were mostly planned in 2D. Thus, current environment models cannot be generated directly from existing data, and a holistic approach to building such a factory model in a highly automated fashion is largely non-existent. Major steps in generating an environment model of a production plant include data collection, data pre-processing, and object identification as well as pose estimation. In this work, we elaborate a methodical modelling approach that starts with the digitalization of large-scale indoor environments and ends with the generation of a static environment or simulation model. The object identification step is realized using a Bayesian neural network capable of point cloud segmentation. We elaborate on the impact of the uncertainty information estimated by a Bayesian segmentation framework on the accuracy of the generated environment model. The steps of data collection and point cloud segmentation, as well as the resulting model accuracy, are evaluated on a real-world data set collected at the assembly line of a large-scale automotive production plant. The Bayesian segmentation network clearly surpasses the performance of the frequentist baseline and allows us to considerably increase the accuracy of model placement in a simulation scene.
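How uncertainty information can improve model placement is not detailed in the abstract; one standard mechanism, sketched here with made-up probabilities, is to score each point by the predictive entropy of its mean class distribution over stochastic forward passes and discard high-entropy points before pose estimation:

```python
import math
import statistics

def predictive_entropy(prob_samples):
    """Entropy of the mean class distribution over several stochastic
    forward passes; high entropy flags points the Bayesian segmentation
    network is unsure about."""
    n_classes = len(prob_samples[0])
    mean_p = [statistics.mean(s[c] for s in prob_samples) for c in range(n_classes)]
    return -sum(p * math.log(p) for p in mean_p if p > 0.0)

# Two points, each with three MC forward passes over 2 classes:
confident = [[0.95, 0.05], [0.97, 0.03], [0.93, 0.07]]
ambiguous = [[0.60, 0.40], [0.35, 0.65], [0.55, 0.45]]
print(predictive_entropy(confident), predictive_entropy(ambiguous))

# Keep only low-uncertainty points before estimating object poses:
threshold = 0.3
keep_confident = predictive_entropy(confident) < threshold  # True
```

Filtering out ambiguous points means the downstream pose fit is driven by reliable segment labels, which is consistent with the reported accuracy gain over the frequentist baseline.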