141.
For a number of situations, a Bayesian network can be split into a core network, consisting of a set of latent variables describing the status of a system, and a set of fragments relating the status variables to observable evidence that could be collected about the system state. This situation arises frequently in educational testing, where the status variables represent student proficiency and the evidence models (graph fragments linking competency variables to observable outcomes) correspond to assessment tasks that can be used to assess that proficiency. The traditional approach to knowledge engineering in this situation is to maintain a library of fragments, where the graphical structure is specified using a graphical editor and the probabilities are then entered using a separate spreadsheet for each node. If many evidence model fragments employ the same design pattern, a lot of repetitive data entry is required. Because the parameter values that determine the strength of the evidence can be buried on interior screens of an interface, it can be difficult for a design team to get an impression of the total evidence provided by a collection of evidence models for the system variables, and to identify holes in the data collection scheme. A Q-matrix (an incidence matrix whose rows represent observable outcomes from assessment tasks and whose columns represent competency variables) provides the graphical structure of the evidence models. The Q-matrix can be augmented to provide details of relationship strengths and a high-level overview of the kind of evidence available. The relationships among the status variables can be represented with an inverse covariance matrix; this is particularly useful for models from the social sciences, as domain experts' knowledge about the system states often comes from factor analyses and similar procedures that naturally produce covariance matrices.
The representation of the model using matrices means that the bulk of the specification work can be done in a desktop spreadsheet program without specialized software, facilitating collaboration with external experts. The design idea is illustrated with examples from prior assessment design projects.
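As a sketch of the Q-matrix idea, the incidence matrix below encodes the graphical structure of the evidence models: each 1 induces an edge from a proficiency variable to an observable outcome, and column sums give the kind of high-level evidence overview the abstract mentions. The tasks and proficiencies are hypothetical, not taken from the article.

```python
import numpy as np

# Hypothetical example: three observable task outcomes, two proficiencies.
observables = ["task1_obs", "task2_obs", "task3_obs"]
proficiencies = ["algebra", "geometry"]

# Q-matrix: rows are observable outcomes, columns are competency
# variables; a 1 means the outcome bears evidence on that proficiency.
Q = np.array([
    [1, 0],   # task 1 measures algebra only
    [0, 1],   # task 2 measures geometry only
    [1, 1],   # task 3 measures both
])

# Graphical structure of the evidence models: an edge from each
# proficiency to each observable whose Q-matrix cell is 1.
edges = [(proficiencies[j], observables[i])
         for i in range(Q.shape[0])
         for j in range(Q.shape[1])
         if Q[i, j]]

# Column sums give a quick overview of how much evidence each
# proficiency receives -- one way to spot holes in the design.
evidence_count = Q.sum(axis=0)
```

Because the matrix is the specification, this is exactly the sort of object a design team could maintain in an ordinary spreadsheet and import.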
142.
Geospatial reasoning has been an essential aspect of military planning since the invention of cartography. Although maps have always been a focal point for developing situational awareness, the dawning era of network-centric operations brings the promise of unprecedented battlefield advantage due to improved geospatial situational awareness. Geographic information systems (GIS) and GIS-based decision support systems are ubiquitous within current military forces, as well as civil and humanitarian organizations. Understanding the quality of geospatial data is essential to using it intelligently. A systematic approach to data quality requires: estimating and describing the quality of data as they are collected; recording the data quality as metadata; propagating uncertainty through models for data processing; exploiting uncertainty appropriately in decision support tools; and communicating to the user the uncertainty in the final product. There are shortcomings in the state-of-the-practice in GIS applications in dealing with uncertainty. No single point solution can fully address the problem. Rather, a system-wide approach is necessary. Bayesian reasoning provides a principled and coherent framework for representing knowledge about data quality, drawing inferences from data of varying quality, and assessing the impact of data quality on modeled effects. Use of a Bayesian approach also drives a requirement for appropriate probabilistic information in geospatial data quality metadata. This paper describes our research on data quality for military applications of geospatial reasoning, and describes model views appropriate for model builders, analysts, and end users.
143.
We consider a situation where two agents each try to solve their own task in a common environment. In particular, we study simple sequential Bayesian games with an unlimited time horizon in which two players share a visible scene, but the tasks (termed assignments) of the players are private information. We present an influence diagram framework for representing a simple type of game where each player holds private information. The framework is used to model the analysis depth and time horizon of the opponent and to determine an optimal policy under various assumptions about the opponent's analysis depth. Not surprisingly, the framework turns out to have severe complexity problems even in simple scenarios, due to the size of the relevant past. We propose two approaches to approximation. One is to use limited memory influence diagrams (LIMIDs), in which we convert the influence diagram into a set of Bayesian networks and perform single policy update. The other is information enhancement, where it is assumed that the opponent will, within a few moves, come to know your assignment. Empirical results are presented using a simple board game.
When dealing with risk models, the typical assumption of independence among claim size distributions is not always satisfied. Here we consider the case where the claim sizes are exchangeable and study the implications of constructing aggregated claims through compound Poisson-type processes. In particular, exchangeability is achieved through conditional independence, using parametric and nonparametric measures for the conditioning distribution. Bayes' theorem is employed to ensure an arbitrary but fixed marginal distribution for the claim sizes. A full Bayesian analysis of the proposed model is illustrated with a panel-type data set from the Medical Expenditure Panel Survey (MEPS). Copyright © 2009 John Wiley & Sons, Ltd.
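The construction described above can be sketched in a few lines. Exchangeability via conditional independence means a latent parameter is drawn once per path and claim sizes are i.i.d. given it; the Gamma mixing distribution and exponential claim sizes below are hypothetical stand-ins for the paper's parametric and nonparametric conditioning measures.

```python
import math
import random

def poisson_draw(lam, rng):
    """Poisson sample via Knuth's multiplication method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def aggregate_claim(lam, rng):
    """One path of a compound Poisson-type total with exchangeable
    claim sizes: a latent rate theta is drawn once per path, and
    claim sizes are i.i.d. exponential given theta (illustrative
    distributional choices, not the paper's)."""
    theta = rng.gammavariate(3.0, 1.0)   # latent mixing parameter
    n = poisson_draw(lam, rng)           # number of claims
    return sum(rng.expovariate(theta) for _ in range(n))

rng = random.Random(7)
totals = [aggregate_claim(3.0, rng) for _ in range(5000)]
mean_total = sum(totals) / len(totals)
# Under these choices E[total] = lam * E[1/theta] = 3 * 1/2 = 1.5.
```

Marginally the claims are not independent: a large theta drags every claim on that path down together, which is the dependence structure the abstract is after.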
146.
We use the statistical model of bandit processes to formulate and solve two kinds of optimal investment and consumption problems. The payoffs from the investment are dividend payments with fixed return rates, but the payment frequency is stochastic, following a Poisson distribution. The financial market consists of assets that follow Poisson distributions with known or unknown intensity rates. Two kinds of consumption patterns are defined, and the optimality of the myopic strategy, the Gittins index strategy, and the play-the-winner strategy is discussed. Copyright © 2009 John Wiley & Sons, Ltd.
147.
A multi-armed bandit is an experiment with the goal of accumulating rewards from a payoff distribution with unknown parameters that are to be learned sequentially. This article describes a heuristic for managing multi-armed bandits called randomized probability matching, which randomly allocates observations to arms according to the Bayesian posterior probability that each arm is optimal. Advances in Bayesian computation have made randomized probability matching easy to apply to virtually any payoff distribution. This flexibility frees the experimenter to work with payoff distributions that correspond to certain classical experimental designs with the potential to outperform methods that are 'optimal' in simpler contexts. I summarize the relationships between randomized probability matching and several related heuristics that have been used in the reinforcement learning literature. Copyright © 2010 John Wiley & Sons, Ltd.
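Randomized probability matching is easy to sketch for Bernoulli arms with Beta priors: one posterior draw per arm, play the largest, so each arm is chosen with exactly its posterior probability of being optimal. The Beta(1, 1) priors, arm count, and success rates below are illustrative, not from the article.

```python
import random

def choose_arm(successes, failures, rng):
    """Randomized probability matching for Bernoulli arms with
    Beta(1, 1) priors: draw once from each arm's posterior and
    play the arm with the largest draw."""
    draws = [rng.betavariate(1 + s, 1 + f)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=lambda i: draws[i])

# Toy run with two arms whose (hypothetical) true success rates differ.
rng = random.Random(0)
true_rates = [0.3, 0.6]
s, f = [0, 0], [0, 0]
for _ in range(2000):
    arm = choose_arm(s, f, rng)
    if rng.random() < true_rates[arm]:
        s[arm] += 1
    else:
        f[arm] += 1
# Play concentrates on the better arm as its posterior sharpens.
```

Because allocation is a posterior draw rather than a fixed rule, the method keeps exploring exactly in proportion to remaining uncertainty, which is the flexibility the abstract highlights.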
148.
The intention of this paper is to estimate a Bayesian distribution-free chain ladder (DFCL) model using approximate Bayesian computation (ABC) methodology. We demonstrate how to estimate quantities of interest in claims reserving and compare the estimates to those obtained from classical and credibility approaches. In this context, a novel numerical procedure combining a Markov chain Monte Carlo (MCMC) technique, ABC and a Bayesian bootstrap procedure was developed in a truly distribution-free setting. The ABC methodology arises because we work in a distribution-free setting in which we make no parametric assumptions, meaning we cannot evaluate the likelihood point-wise or, in this case, simulate directly from the likelihood model. The bootstrap procedure allows us to generate samples from the intractable likelihood without distributional assumptions, which is crucial to the ABC framework. The developed methodology is used to obtain the empirical distribution of the DFCL model parameters and the predictive distribution of the outstanding loss liabilities conditional on the observed claims. We then compute predictive Bayesian capital estimates, the value at risk (VaR) and the mean square error of prediction (MSEP), the latter of which is compared with the classical bootstrap and credibility methods.
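The core ABC idea, accepting parameter draws whose simulated summary statistic lands close to the observed one, can be shown on a toy problem. The normal-mean example below is purely illustrative and unrelated to the DFCL model itself; in the paper this rejection step is coupled with MCMC and a Bayesian bootstrap.

```python
import random

def abc_rejection(observed_stat, simulate, prior_draw, eps, n_prop, rng):
    """Plain ABC rejection: keep prior draws whose simulated summary
    statistic falls within eps of the observed statistic."""
    accepted = []
    for _ in range(n_prop):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed_stat) < eps:
            accepted.append(theta)
    return accepted

# Toy problem (illustrative only): infer a normal mean from its sample
# mean while pretending the likelihood cannot be evaluated.
rng = random.Random(42)
n = 50
data = [rng.gauss(2.0, 1.0) for _ in range(n)]
obs = sum(data) / n

post = abc_rejection(
    observed_stat=obs,
    simulate=lambda th, r: sum(r.gauss(th, 1.0) for _ in range(n)) / n,
    prior_draw=lambda r: r.uniform(-5.0, 5.0),
    eps=0.2,
    n_prop=5000,
    rng=rng,
)
post_mean = sum(post) / len(post)   # approximates the posterior mean
```

The only thing the sampler needs is the ability to simulate forward, which is exactly what the bootstrap supplies in the distribution-free setting of the paper.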
149.
A Bayesian-classification-based service awareness mechanism for 10G-EPON in the smart grid   Cited: 2 times (self-citations: 0; by others: 2)
With the development of the smart grid and the emergence of its diverse information services, 10G-EPON has become an increasingly important supporting access technology; however, service diversification poses a major challenge to the multi-service capability of 10G-EPON. To meet the needs of the many different service types in power systems, this paper analyzes the characteristics of smart-grid information services and proposes a Bayesian-classification-based service awareness mechanism for 10G-EPON. Exploiting the master-slave architecture formed by the OLT and the ONUs in 10G-EPON, a master-slave implementation of service awareness is also proposed. The mechanism uses a Bayesian network to analyze packet features and thereby identify the type of the service to be transmitted. On top of the Bayesian service classification, resource allocation and transmission policies are decided through interaction between the OLT and the ONUs. To verify the effectiveness of the new mechanism, system simulations were carried out for both delay and packet-loss rate. The results show that the proposed mechanism offers significant advantages in delay and packet loss, achieving an efficient match between services and 10G-EPON and improving the differentiated multi-service support of 10G-EPON in smart-grid applications.
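The classification step can be illustrated with a minimal Gaussian naive Bayes model over packet features; this is a simplified stand-in for the paper's Bayesian network, and the features (packet length, inter-arrival gap), service classes, and training data are all hypothetical.

```python
import math

def fit(samples):
    """Gaussian naive Bayes: per-class feature means/variances plus
    class priors.  samples maps class name -> list of feature tuples."""
    total = sum(len(rows) for rows in samples.values())
    model = {}
    for cls, rows in samples.items():
        stats = []
        for col in zip(*rows):
            mu = sum(col) / len(col)
            var = sum((x - mu) ** 2 for x in col) / len(col) + 1e-6
            stats.append((mu, var))
        model[cls] = (len(rows) / total, stats)
    return model

def classify(model, features):
    """Return the class with the largest log-posterior for features."""
    def log_posterior(prior, stats):
        lp = math.log(prior)
        for x, (mu, var) in zip(features, stats):
            lp -= 0.5 * math.log(2 * math.pi * var) + (x - mu) ** 2 / (2 * var)
        return lp
    return max(model, key=lambda cls: log_posterior(*model[cls]))

# Hypothetical training data: (packet length in bytes, inter-arrival gap).
# Control traffic is short and regular; video traffic long and slow.
train = {
    "control": [(64, 1.0), (70, 1.1), (60, 0.9)],
    "video":   [(1200, 5.0), (1400, 6.0), (1300, 5.5)],
}
model = fit(train)
print(classify(model, (66, 1.0)))   # -> control
```

In the paper's architecture, the resulting class label would then drive the OLT-ONU negotiation of resource allocation and transmission policy.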
150.
This paper proposes a sequential test method based on the Bayesian posterior probability, establishes the decision criterion for the test, and gives a method for computing the criterion's critical values. For a given truncation number of trials, a truncation scheme is proposed and a truncation decision rule is established. The method is applied to the design of microwave-effect experiments that verify whether the failure probability of a target object under microwave irradiation reaches a given level, and the corresponding experimental scheme and an estimate of the required sample size are given. Finally, an example illustrates the application of the method, which is then analyzed and compared with existing methods.
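A hedged sketch of the idea: sequentially update a posterior on the failure probability and stop when the posterior probability of meeting the target crosses a threshold, with a simple rule at the truncation point. The Beta(1, 1) prior, thresholds, and truncation decision below are illustrative, not the calibrated critical values derived in the paper.

```python
import math

def beta_cdf(x, a, b, steps=2000):
    """Beta(a, b) CDF at x via the trapezoid rule (assumes a, b >= 1)."""
    c = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
    h = x / steps
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    area = h * (0.5 * (f(0.0) + f(x)) + sum(f(i * h) for i in range(1, steps)))
    return c * area

def sequential_test(outcomes, p0=0.1, hi=0.95, lo=0.05, n_max=30):
    """Sequentially update a Beta(1, 1) posterior on the failure
    probability p; stop when P(p <= p0 | data) crosses a threshold,
    or decide by majority side at the truncation point n_max."""
    a, b = 1.0, 1.0
    for n, failed in enumerate(outcomes, start=1):
        if failed:
            a += 1.0
        else:
            b += 1.0
        post = beta_cdf(p0, a, b)      # P(p <= p0 | data so far)
        if post >= hi:
            return "accept", n
        if post <= lo:
            return "reject", n
        if n >= n_max:                 # truncation rule
            return ("accept" if post >= 0.5 else "reject"), n
    return "continue", len(outcomes)
```

With these illustrative settings, an unbroken run of successes accepts before the truncation point, while an early failure forces an immediate rejection, showing how the posterior drives the stopping decision.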