By access: subscription full text (660), free (3), domestic free (2)
By subject: Chemistry (39), Mechanics (2), General (1), Mathematics (563), Physics (60)
By year: 2023 (2), 2022 (7), 2021 (8), 2020 (7), 2019 (11), 2018 (10), 2017 (2), 2016 (12), 2015 (11), 2014 (48), 2013 (43), 2012 (43), 2011 (41), 2010 (46), 2009 (67), 2008 (53), 2007 (71), 2006 (22), 2005 (16), 2004 (10), 2003 (10), 2002 (9), 2001 (15), 2000 (9), 1999 (11), 1998 (5), 1997 (4), 1996 (10), 1995 (5), 1994 (2), 1993 (3), 1992 (3), 1990 (2), 1989 (1), 1988 (3), 1987 (2), 1985 (10), 1984 (7), 1983 (3), 1982 (2), 1981 (4), 1980 (2), 1979 (7), 1978 (2), 1977 (1), 1976 (2), 1969 (1)
665 results found (search time: 15 ms)
1.
A generalization of a class of classical "secretary problems"   Cited: 2 (self-citations: 0, citations by others: 2)
The secretary problem has played an important role in the development of optimal stopping theory. A class of problems encountered in practice resembles the secretary problem but is more complex. This paper generalizes the classical secretary problem, builds a class of models of greater practical relevance than the classical one, and gives the solution of this class of models.
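For orientation, the classical secretary problem that this paper generalizes has a well-known optimal stopping rule: observe roughly n/e candidates without committing, then accept the first candidate who beats everyone seen so far. The sketch below is only a simulation of that classical baseline under this standard rule; it does not implement the paper's generalized model, and the trial counts and n are illustrative choices.

```python
import math
import random

def classical_secretary_trial(n, rng):
    """One trial: candidates arrive in random order and only relative ranks are observed.
    Skip the first n/e candidates, then accept the first one better than all seen so far."""
    ranks = list(range(n))             # rank 0 is the overall best candidate
    rng.shuffle(ranks)
    cutoff = int(n / math.e)
    best_seen = min(ranks[:cutoff], default=n)
    for r in ranks[cutoff:]:
        if r < best_seen:
            return r == 0              # accepted this candidate; success iff overall best
    return ranks[-1] == 0              # never accepted anyone: forced to take the last candidate

def success_rate(n=100, trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(classical_secretary_trial(n, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(f"empirical success probability: {success_rate():.3f}")
```

For n = 100 the empirical success probability comes out close to the classical optimum 1/e ≈ 0.368.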
2.
SPA-based both-branch fuzzy decision analysis   Cited: 3 (self-citations: 0, citations by others: 3)
Combining set pair analysis (SPA) with both-branch fuzzy decision theory, this paper studies both-branch fuzzy decision making by means of set pair analysis. Set pair analyses are carried out on the factor domain of both-branch fuzzy decisions and on their upper branch, lower branch, and combined branches, and properties of SPA-based both-branch fuzzy decision making are given. Methods for the dynamic analysis of both-branch fuzzy decisions and for predicting the strength trend of the both-branch fuzzy decision degree are also discussed. This provides a new perspective and new methods for research on both-branch fuzzy decision theory, and more convenient tools and stronger theoretical support for its applications.
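As background only, the core object of set pair analysis is the connection degree mu = a + b*i + c*j (identity a, discrepancy b, contrary c, with a + b + c = 1, j = -1 and i in [-1, 1]). The sketch below computes such a connection degree for ranking alternatives against an ideal; the tolerance thresholds and the comparison rule are illustrative assumptions and do not reproduce the paper's both-branch (upper/lower branch) construction.

```python
from dataclasses import dataclass

@dataclass
class ConnectionDegree:
    """SPA connection degree mu = a + b*i + c*j with a + b + c = 1."""
    a: float   # identity degree
    b: float   # discrepancy (uncertain) degree
    c: float   # contrary degree

    def value(self, i=0.0, j=-1.0):
        # j is conventionally -1; i in [-1, 1] expresses how the discrepant part is resolved.
        return self.a + self.b * i + self.c * j

def connection_degree(assessment, ideal, tol=0.1):
    """Compare normalized attribute scores against an ideal: within tol counts as identical,
    deviating by more than 3*tol as contrary, the rest as discrepant."""
    same = sum(abs(x - y) <= tol for x, y in zip(assessment, ideal))
    opposite = sum(abs(x - y) > 3 * tol for x, y in zip(assessment, ideal))
    n = len(assessment)
    return ConnectionDegree(same / n, (n - same - opposite) / n, opposite / n)

if __name__ == "__main__":
    ideal = [1.0, 1.0, 1.0, 1.0]
    for name, alt in {"A": [0.95, 0.8, 1.0, 0.5], "B": [0.6, 0.55, 0.7, 0.9]}.items():
        mu = connection_degree(alt, ideal)
        print(name, mu, "score:", round(mu.value(i=0.0), 3))
```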
3.
Chen Shan-Tai, Lin Shun-Shii, Huang Li-Te, Wei Chun-Jen. Journal of Heuristics, 2004, 10(3): 337-355
Binary Decision Diagrams (BDDs) are the state-of-the-art data structure for representation and manipulation of Boolean functions. In general, exact BDD minimization is NP-complete. For BDD-based technology, a small improvement in the number of nodes often simplifies the follow-up problem tremendously. This paper proposes an elitism-based evolutionary algorithm (EBEA) for BDD minimization. It can efficiently find the optimal orderings of variables for all LGSynth91 benchmark circuits with a known minimum size. Moreover, we develop a distributed model of EBEA, DEBEA, which obtains the best-ever variable orders for almost all benchmarks in the LGSynth91. Experimental results show that DEBEA is able to achieve super-linear performance compared to EBEA for some hard benchmarks.
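The paper's EBEA/DEBEA algorithms are not reproduced here, but a generic elitism-based evolutionary search over variable orderings might be sketched as follows. The bdd_size objective would, in practice, come from a BDD package that rebuilds the diagram under each candidate ordering; the surrogate_size function in the demo is a stand-in invented for illustration, as are all parameter values.

```python
import random

def ea_minimize_ordering(num_vars, bdd_size, generations=200, pop_size=30,
                         elite=4, immigrant_rate=0.1, seed=0):
    """Elitism-based evolutionary search over variable orderings (permutations).
    bdd_size(order) must return the node count of the BDD built under that order."""
    rng = random.Random(seed)
    pop = [rng.sample(range(num_vars), num_vars) for _ in range(pop_size)]

    def mutate(order):
        child = order[:]
        i, j = rng.sample(range(num_vars), 2)
        child[i], child[j] = child[j], child[i]          # swap two variables
        return child

    for _ in range(generations):
        pop.sort(key=bdd_size)
        elites = pop[:elite]                              # elitism: the best survive unchanged
        children = []
        while len(children) < pop_size - elite:
            if rng.random() < immigrant_rate:             # occasional random restart for diversity
                children.append(rng.sample(range(num_vars), num_vars))
            else:
                children.append(mutate(rng.choice(elites)))
        pop = elites + children
    return min(pop, key=bdd_size)

if __name__ == "__main__":
    def surrogate_size(order):
        # Stand-in objective: prefer orderings that keep "related" variables v and v+1 close.
        pos = {v: k for k, v in enumerate(order)}
        return sum(abs(pos[v] - pos[v + 1]) for v in range(len(order) - 1))

    best = ea_minimize_ordering(12, surrogate_size)
    print("best ordering:", best, "surrogate cost:", surrogate_size(best))
```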
4.
We consider bounded distance list decoding, such that the decoder calculates the list of all codewords within a sphere around the received vector. We analyze the performance and the complexity of this suboptimum list decoding scheme for the binary symmetric channel. The reliability function of the list decoding scheme is equivalent to the sphere-packing bound, where the decoding complexity is asymptotically bounded by 2^(nR(1-R)). Furthermore, we investigate a decision feedback strategy that is based on bounded distance list decoding. Here, any output with zero or many codewords will call for a repeated transmission. In this case the decoding complexity will be of the order 2^(nR(1-C)), where C denotes the channel capacity. The reliability function is close to Forney's feedback exponent.
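As a concrete (non-asymptotic) illustration of bounded distance list decoding, the sketch below enumerates all codewords of a small binary code and returns those within a chosen Hamming radius of the received word. The (7,4) Hamming code in one common systematic form, the radius values, and the crossover probability are illustrative choices; brute-force enumeration obviously does not reflect the complexity analysis in the abstract.

```python
import itertools
import numpy as np

# Generator matrix of the (7,4) Hamming code (one common systematic form, minimum distance 3).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def all_codewords(G):
    k, _ = G.shape
    msgs = np.array(list(itertools.product([0, 1], repeat=k)), dtype=int)
    return (msgs @ G) % 2

def bounded_distance_list_decode(received, codewords, radius):
    """Return every codeword within Hamming distance `radius` of `received`."""
    dists = np.sum(codewords != received, axis=1)
    return codewords[dists <= radius]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    codewords = all_codewords(G)
    sent = codewords[rng.integers(len(codewords))]
    received = (sent + (rng.random(sent.shape) < 0.1)).astype(int) % 2   # BSC with p = 0.1
    for t in (1, 2):
        lst = bounded_distance_list_decode(received, codewords, t)
        print(f"radius {t}: list size {len(lst)}, contains sent word:",
              any((c == sent).all() for c in lst))
```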
5.
Nearly four hundred non-routine organizational decisions were investigated to discover search approaches, determining the frequency of use and the success of each search approach uncovered. A "search approach" is made up of a direction and a means to uncover solution ideas. Direction indicates desired results and can be either implicit or explicit, with an explicit direction offering either a problem or a goal-like target. Solutions can be uncovered by opportunity, bargaining, and chance as well as by rational approaches. Defining a search approach as a direction coupled with a means of search, search approaches were linked with indicators of success, measured by the decision's adoption, value, and timeliness, noting frequency. A rational, goal-directed search approach was more apt to produce successful outcomes. Bargaining with stakeholders to uncover solutions was always combined with some of the search approaches in this study, and this merger improved the prospects of success. Searches with opportunistic or chance (emergent opportunity) features and rational searches with a problem target were more apt to produce unsuccessful outcomes. The means used to come up with a solution had less bearing on success than did the type of direction, with goal-directed searches leading to the best outcomes. Each search approach is discussed to reveal best practices and to offer suggestions to improve practice.
6.
An important aspect of learning is the ability to transfer knowledge to new contexts. However, in dynamic decision tasks, such as bargaining, firefighting, and process control, where decision makers must make repeated decisions under time pressure and outcome feedback may relate to any of a number of decisions, such transfer has proven elusive. This paper proposes a two-stage connectionist model which hypothesizes that decision makers learn to identify categories of evidence requiring similar decisions as they perform in dynamic environments. The model suggests conditions under which decision makers will be able to use this ability to help them in novel situations. These predictions are compared against those of a one-stage decision model that does not learn evidence categories, as is common in many current theories of repeated decision making. Both models' predictions are then tested against the performance of decision makers in an Internet bargaining task. Both models correctly predict aspects of decision makers' learning under different interventions. The two-stage model provides closer fits to decision maker performance in a new, related bargaining task and accounts for important features of higher-performing decision makers' learning. Although frequently omitted in recent accounts of repeated decision making, the processes of evidence category formation described by the two-stage model appear critical in understanding the extent to which decision makers learn from feedback in dynamic tasks. Faison (Bud) Gibson is an Assistant Professor at the College of Business, Eastern Michigan University. He has extensive experience developing and empirically testing models of decision behavior in dynamic decision environments.
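A highly simplified reading of the two-stage idea (evidence, then a learned category, then a per-category decision value updated from feedback) might look like the sketch below. The prototype-based categorizer, thresholds, learning rates, and the toy feedback rule are assumptions made for illustration only, not the authors' connectionist implementation.

```python
import numpy as np

class TwoStageLearner:
    """Toy two-stage learner: stage 1 groups evidence vectors into prototype categories;
    stage 2 learns an action value per category from outcome feedback."""

    def __init__(self, n_actions, new_category_threshold=1.0, lr=0.2):
        self.prototypes = []        # one prototype vector per learned category
        self.q = []                 # per-category action values
        self.n_actions = n_actions
        self.threshold = new_category_threshold
        self.lr = lr

    def _categorize(self, evidence):
        # Stage 1: assign evidence to the nearest prototype, or create a new category.
        if self.prototypes:
            dists = [np.linalg.norm(evidence - p) for p in self.prototypes]
            k = int(np.argmin(dists))
            if dists[k] <= self.threshold:
                self.prototypes[k] += self.lr * (evidence - self.prototypes[k])
                return k
        self.prototypes.append(np.asarray(evidence, dtype=float).copy())
        self.q.append(np.zeros(self.n_actions))
        return len(self.prototypes) - 1

    def act(self, evidence, explore=0.1, rng=None):
        rng = rng or np.random.default_rng()
        k = self._categorize(evidence)
        a = rng.integers(self.n_actions) if rng.random() < explore else int(np.argmax(self.q[k]))
        return k, a

    def learn(self, category, action, payoff):
        # Stage 2: feedback updates the value of the action for that category.
        self.q[category][action] += self.lr * (payoff - self.q[category][action])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    agent = TwoStageLearner(n_actions=3)
    for _ in range(200):
        evidence = rng.choice([np.array([0.0, 0.0]), np.array([3.0, 3.0])]) + rng.normal(0, 0.2, 2)
        k, a = agent.act(evidence, rng=rng)
        payoff = 1.0 if (evidence.sum() > 3.0) == (a == 0) else 0.0   # toy feedback rule
        agent.learn(k, a, payoff)
    print("learned categories:", len(agent.prototypes))
```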
7.
We present in this paper several asymptotic properties of constrained Markov Decision Processes (MDPs) with a countable state space. We treat both the discounted and the expected average cost, with unbounded cost. We are interested in (1) the convergence of finite horizon MDPs to the infinite horizon MDP, (2) convergence of MDPs with a truncated state space to the problem with infinite state space, (3) convergence of MDPs as the discount factor goes to a limit. In all these cases we establish the convergence of optimal values and policies. Moreover, based on the optimal policy for the limiting problem, we construct policies which are almost optimal for the other (approximating) problems. Based on the convergence of MDPs with a truncated state space to the problem with infinite state space, we show that an optimal stationary policy exists such that the number of randomisations it uses is less than or equal to the number of constraints plus one. We finally apply the results to a dynamic scheduling problem. This work was partially supported by the Chateaubriand fellowship from the French embassy in Israel and by the European Grant BRA-QMIPS of CEC DG XIII.
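For a finite-state analogue of the problems studied here, a constrained discounted MDP can be solved through the standard occupation-measure linear program, whose basic optimal solutions are what limit how much randomisation the optimal stationary policy needs. The sketch below uses scipy.optimize.linprog; the random instance, the budget, and the discount factor are illustrative assumptions, and the finite LP only mirrors, not reproduces, the countable-state analysis of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def solve_constrained_mdp(P, c, d, budget, gamma=0.95, mu0=None):
    """Occupation-measure LP for a finite discounted constrained MDP:
    minimise expected discounted cost c subject to expected discounted cost d <= budget.
    P[a][s, s'] are transition probabilities; c, d are (S, A) cost matrices."""
    A, S = len(P), P[0].shape[0]
    mu0 = np.full(S, 1.0 / S) if mu0 is None else mu0
    nvar = S * A                                   # rho(s, a) >= 0, flattened as s * A + a

    # Balance constraints: sum_a rho(s',a) - gamma * sum_{s,a} P(s'|s,a) rho(s,a) = (1-gamma) mu0(s')
    A_eq = np.zeros((S, nvar))
    for sp in range(S):
        for a in range(A):
            A_eq[sp, sp * A + a] += 1.0
        for s in range(S):
            for a in range(A):
                A_eq[sp, s * A + a] -= gamma * P[a][s, sp]
    b_eq = (1 - gamma) * mu0

    # Secondary-cost constraint: sum rho(s,a) d(s,a) / (1-gamma) <= budget
    A_ub = d.reshape(1, nvar) / (1 - gamma)
    b_ub = np.array([budget])

    res = linprog(c.reshape(nvar) / (1 - gamma), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * nvar, method="highs")
    if not res.success:
        raise RuntimeError("LP infeasible for this budget/instance")
    rho = res.x.reshape(S, A)
    policy = rho / rho.sum(axis=1, keepdims=True)   # randomised stationary policy
    return res.fun, policy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A = 4, 2
    P = [rng.dirichlet(np.ones(S), size=S) for _ in range(A)]
    c = rng.random((S, A))                          # cost to minimise
    d = rng.random((S, A))                          # constrained cost
    # A budget that is always achievable: pick argmin_a d(s, a) in every state.
    budget = d.min(axis=1).max() / (1 - 0.95)
    value, policy = solve_constrained_mdp(P, c, d, budget)
    print("optimal value:", round(value, 3))
    print("policy (rows = states):\n", np.round(policy, 3))
```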
8.
The Commission Decision of August 12, 2002 on the performance of analytical methods and the interpretation of results was applied to the HPLC method for the analysis of parabens, 2-phenoxyethanol and 1-phenoxypropan-2-ol in cosmetic products. This method is published in the seventh Directive 96/45/EC of the European Commission. Non-compliant concentrations, taking into account the data distribution (CCα) and the probability of false negative values (CCβ), were determined. The repeatability and reproducibility amount to <4% and <7%, respectively. These values were obtained with blank samples that were fortified in the laboratory. Calibration linearity was confirmed by the absence of lack of fit for all seven preservatives. Matrix effects on the determinations of the preservatives in body milk or shampoo are negligible.
9.
Analyses of dioxins in food have become increasingly important since the European Commission has enforced maximal toxic equivalent concentration (TEQ) levels in various food and feed products. Screening methodologies are usually used to exempt those samples that are below the maximum permitted limit and that can, therefore, be released to the market. In addition, one needs to select those samples that require confirmation of their dioxin TEQ level. When bioassays are used as screening tools, the interpretation of the obtained results should consider the higher variability and uncertainty associated with them. This paper explores the use of CALUX data as quantitative screening results. The validation of the method for the polychlorinated dibenzo-p-dioxin/furan (PCDD/F) TEQ determination in milk samples is described, with emphasis on the decision limit (CCα) and the precision of the method. The decision limit amounts to 4.53 pg TEQ/g fat. Repeatability and within-lab reproducibility coefficients of variation are below 30%. The newly introduced parameter CC* of 1.47 pg TEQ/g fat delimits, together with CCα, a range of suspicious results. These data are not significantly different from the maximum limit of 3 pg TEQ/g fat and should be confirmed by a confirmatory analytical method such as HRGC-HRMS.
10.
We consider supplier development decisions for prime manufacturers with extensive supply bases producing complex, highly engineered products. We propose a novel modelling approach to support supply chain managers in deciding the optimal level of investment to improve quality performance under uncertainty. We develop a Poisson-Gamma model within a Bayesian framework, representing both the epistemic and aleatory uncertainties in non-conformance rates. Estimates are obtained to value a supplier quality improvement activity and to assess whether it is worth gaining more information to reduce epistemic uncertainty. The theoretical properties of our model provide new insights about the relationship between the degree of epistemic uncertainty, the effectiveness of development programmes, and the levels of investment. We find that the optimal level of investment does not have a monotonic relationship with the rate of effectiveness. If investment is deferred until epistemic uncertainty is removed, then the expected optimal investment monotonically decreases as prior variance increases, but only if the prior mean is above a critical threshold. We develop methods to facilitate practical application of the model to industrial decisions by (a) enabling use of the model with typical data available to major companies and (b) developing computationally efficient approximations that can be implemented easily. Application to a real industry context illustrates the use of the model to support practical planning decisions to learn more about supplier quality and to invest in improving supplier capability.
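As background, a Gamma prior on a Poisson non-conformance rate updates in closed form, which is the conjugacy a Poisson-Gamma model of this kind builds on. The sketch below shows that update plus a crude expected-cost comparison for an improvement decision; the prior parameters, costs, and the effectiveness fraction are invented for illustration and do not reproduce the paper's valuation or value-of-information analysis.

```python
import numpy as np

def gamma_poisson_update(alpha, beta, counts, exposures):
    """Conjugate update: rate lambda ~ Gamma(alpha, beta) (rate parameterisation),
    counts[i] ~ Poisson(lambda * exposures[i]).
    Posterior is Gamma(alpha + sum(counts), beta + sum(exposures))."""
    return alpha + np.sum(counts), beta + np.sum(exposures)

def rate_summary(alpha, beta):
    mean = alpha / beta                 # expected non-conformance rate
    var = alpha / beta**2               # epistemic uncertainty about the rate
    return mean, var

def expected_cost(alpha, beta, exposure, cost_per_nc, investment=0.0, effectiveness=0.0):
    """Expected cost of non-conformances over a future exposure, optionally after an
    improvement programme assumed to cut the rate by `effectiveness` (a fraction)."""
    mean_rate, _ = rate_summary(alpha, beta)
    return investment + cost_per_nc * exposure * mean_rate * (1.0 - effectiveness)

if __name__ == "__main__":
    # Prior belief about a supplier's non-conformance rate, then update with inspection data.
    alpha0, beta0 = 2.0, 1000.0                 # prior mean 0.002 non-conformances per unit
    alpha1, beta1 = gamma_poisson_update(alpha0, beta0,
                                         counts=[3, 1, 4], exposures=[1500, 900, 2100])
    print("posterior mean/variance of rate:", rate_summary(alpha1, beta1))
    # Compare doing nothing against an illustrative development programme.
    base = expected_cost(alpha1, beta1, exposure=10_000, cost_per_nc=250.0)
    with_dev = expected_cost(alpha1, beta1, exposure=10_000, cost_per_nc=250.0,
                             investment=2_000.0, effectiveness=0.4)
    print(f"expected cost: do nothing {base:.0f}, with development {with_dev:.0f}")
```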