267 query results in total (search time: 187 ms)
91.
In practice, managers often wish to ascertain that a particular engineering design of a production system meets their requirements. The future environment of this design is likely to differ from the environment assumed during the design. Therefore it is crucial to find out which variations in that environment may make this design unacceptable (infeasible). This article proposes a methodology for estimating which uncertain environmental parameters are important (so managers can become proactive) and which combinations of parameter values (scenarios) make the design unacceptable. The proposed methodology combines simulation, bootstrapping, design of experiments, and linear regression metamodeling. It is illustrated through a simulated manufacturing system with fourteen uncertain parameters of the input distributions for the various arrival and service times. These parameters are investigated through the simulation of sixteen scenarios, selected through a two-level fractional factorial statistical design. The resulting simulation input/output (I/O) data are analyzed through a first-order polynomial metamodel and bootstrapping. A second experiment with other scenarios gives some outputs that turn out to be unacceptable. In general, polynomials fitted to the simulation's I/O data can estimate the border line (frontier) between acceptable and unacceptable environments.
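As a rough illustration of the design-of-experiments step described above, the sketch below builds a 16-run two-level fractional factorial design for 14 coded factors, fits a first-order polynomial metamodel by least squares, and bootstraps the residuals. The simulation response `simulate` and its active factors are invented for illustration; they are not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# 16-run two-level design for up to 15 factors, built from the 4 base
# columns of a 2^4 full factorial and all their interaction products
# (a standard resolution-III construction).
base = np.array([[1 if (run >> bit) & 1 else -1 for bit in range(4)]
                 for run in range(16)])
cols = []
for mask in range(1, 16):                 # every non-empty subset of base columns
    idx = [b for b in range(4) if (mask >> b) & 1]
    cols.append(np.prod(base[:, idx], axis=1))
design = np.column_stack(cols)[:, :14]    # 16 runs x 14 coded factors

# Hypothetical simulation response: a few active factors plus noise.
def simulate(x):
    return 10 + 3.0 * x[0] - 2.0 * x[3] + 0.5 * x[7] + rng.normal(0, 0.3)

y = np.array([simulate(run) for run in design])

# First-order polynomial metamodel fitted by least squares.
X = np.column_stack([np.ones(16), design])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Residual bootstrap for the coefficients' sampling variability.
resid = y - X @ beta
boot = np.array([np.linalg.lstsq(X, X @ beta + rng.choice(resid, 16),
                                 rcond=None)[0] for _ in range(500)])
se = boot.std(axis=0)
```

Because the 15 design columns are mutually orthogonal, each first-order effect is estimated independently of the others.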
92.
Stochastic Analysis and Applications, 2013, 31(4): 853–869
Abstract

For bootstrap sample means resulting from a sequence {X_n, n ≥ 1} of random variables, very general weak laws of large numbers are established. The random variables {X_n, n ≥ 1} need not be independent or identically distributed, nor follow any particular dependence structure. In general, no moment conditions are imposed on the {X_n, n ≥ 1}. Examples are provided that illustrate the sharpness of the main results.
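The basic object of study, the bootstrap sample mean, can be sketched as follows. The sample here is hypothetical, and the snippet only illustrates the resampling mechanics, not the general dependence structures the theorems allow.

```python
import random
import statistics

random.seed(42)

# Hypothetical sample; the results discussed above require neither
# independence nor identical distributions nor moment conditions.
sample = [random.gauss(5.0, 2.0) for _ in range(200)]

def bootstrap_mean(data):
    """One bootstrap replicate: resample n points with replacement
    and return the resample's mean."""
    return statistics.fmean(random.choices(data, k=len(data)))

# The distribution of bootstrap sample means concentrates around the
# observed mean as n grows (the weak-law behaviour in question).
reps = [bootstrap_mean(sample) for _ in range(2000)]
center = statistics.fmean(reps)
```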
93.
Building on the work of Noam Chomsky (1963), this paper presents a hierarchy of grammars and associated computational automata in order to inform social theory construction and method. A detailed exposition of linguistic forms within the grammar hierarchy reveals clear analogues with common social scientific paradigms. Two of these paradigms (which are termed structural and process approaches) are already being widely exploited by formal methodological techniques. A third paradigm, which is rooted in a tradition of interpretive sociology, has been more resistant to formalization. Using arguments from theoretical computer science, the paper suggests that existing quantitative methodologies can be extended to accommodate qualitative arguments which subsume empirical domains as diverse as natural language and structurational phenomena.
94.
Abstract

Empirical likelihood methods are developed for constructing confidence bands in problems of nonparametric density estimation. These techniques have an advantage over more conventional methods in that the shape of the bands is determined solely by the data. We show how to construct an empirical likelihood functional, rather than a function, and contour it to produce the confidence bands. Analogs of Wilks's theorem are established in this infinite-parameter setting and may be used to select the appropriate contour. An alternative calibration, based on the bootstrap, is also suggested. Large-sample theory is developed to show that the bands have asymptotically correct coverage, and a numerical example is presented to demonstrate the technique. Comparisons are made with the use of bootstrap replications to choose both the shape and size of the bands.
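A minimal sketch of the bootstrap-based alternative mentioned in the abstract (not the empirical likelihood construction itself): pointwise confidence bands for a Gaussian kernel density estimate obtained from bootstrap quantiles. The data, evaluation grid, and bandwidth rule are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=300)
grid = np.linspace(-3, 3, 61)
h = 1.06 * data.std() * len(data) ** (-1 / 5)   # Silverman-style bandwidth

def kde(sample, pts, bw):
    # Gaussian kernel density estimate evaluated at pts.
    z = (pts[:, None] - sample[None, :]) / bw
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(sample) * bw * np.sqrt(2 * np.pi))

f_hat = kde(data, grid, h)

# Pointwise bootstrap band: resample, re-estimate, take quantiles.
boot = np.array([kde(rng.choice(data, data.size, replace=True), grid, h)
                 for _ in range(300)])
lower, upper = np.quantile(boot, [0.025, 0.975], axis=0)
```

Note that unlike the empirical likelihood bands described above, the shape of this band is fixed by the kernel estimator rather than determined by the data.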
95.
Statistical inference based on a selected model can be overly optimistic and even misleading because of the uncertainty in the model selection procedure, especially in high-dimensional data analysis. In this article, we propose a bootstrap-based tilted correlation screening learning (TCSL) algorithm to alleviate this uncertainty. The algorithm is inspired by a recently proposed variable selection method, the TCS algorithm, which screens variables via tilted correlation. Our algorithm can reduce the prediction error and make the interpretation more reliable. A further gain is the reduced computational cost compared with the TCS algorithm when the dimension is large. Extensive simulation examples and the analysis of one real dataset demonstrate the good performance of our algorithm. Supplementary materials for this article are available online.
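A toy sketch of the general idea, using plain (untilted) marginal correlation screening as a stand-in for the TCS step; the bootstrap layer records how often each predictor survives screening across resamples. All data, dimensions, and thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy high-dimensional data: n = 80 observations, p = 200 predictors,
# only the first three truly drive the response.
n, p = 80, 200
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(size=n)

def screen(Xm, yv, k=10):
    """Rank predictors by absolute marginal correlation and keep the top k
    (plain screening; the tilted version would adjust each correlation
    for the other screened variables)."""
    Xc = (Xm - Xm.mean(0)) / Xm.std(0)
    yc = (yv - yv.mean()) / yv.std()
    corr = np.abs(Xc.T @ yc) / len(yv)
    return set(np.argsort(corr)[-k:])

# Bootstrap aggregation: how often is each predictor screened in?
counts = np.zeros(p)
for _ in range(100):
    idx = rng.integers(0, n, n)
    for j in screen(X[idx], y[idx]):
        counts[j] += 1
stable = np.flatnonzero(counts >= 80)   # kept in at least 80% of resamples
```

Averaging selection over resamples is what makes the final model less sensitive to the luck of a single screening pass.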
96.
B.A. Desmarais, S.J. Cranmer, Physica A, 2012, 391(4): 1865–1876
Exponential random graph models (ERGMs) are powerful tools for formulating theoretical models of network generation or learning the properties of empirical networks. They can be used to construct models that exactly reproduce network properties of interest. However, tuning these models correctly requires computationally intractable maximization of the probability of a network of interest—maximum likelihood estimation (MLE). We discuss methods of approximate MLE and show that, though promising, simulation based methods pose difficulties in application because it is not known how much simulation is required. An alternative to simulation methods, maximum pseudolikelihood estimation (MPLE), is deterministic and has known asymptotic properties, but standard methods of assessing uncertainty with MPLE perform poorly. We introduce a resampling method that greatly outperforms the standard approach to characterizing uncertainty with MPLE. We also introduce ERGMs for dynamic networks—temporal ERGM (TERGM). In an application to modeling cosponsorship networks in the United States Senate, we show how recently proposed methods for dynamic network modeling can be integrated into the TERGM framework, and how our resampling method can be used to characterize uncertainty about network dynamics.
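To make the MPLE idea concrete, here is a sketch for the simplest possible case, an edges-only ERGM, where every dyad's change statistic is 1 and the pseudolikelihood reduces to an intercept-only logit; a dyad-resampling bootstrap then stands in, crudely, for the resampling approach the paper develops. The network is simulated, and real ERGMs involve the dependent statistics (e.g. triangles) that this toy omits.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy undirected network as an adjacency matrix.
n = 40
A = (rng.random((n, n)) < 0.15).astype(int)
A = np.triu(A, 1); A = A + A.T

# Dyad outcomes: for an edges-only ERGM the change statistic of every
# dyad is 1, so the MPLE equals logit(observed density).
iu = np.triu_indices(n, 1)
ydyad = A[iu].astype(float)

def mple_edges(yv, iters=25):
    """1-D Newton ascent of the Bernoulli log-likelihood in theta."""
    theta = 0.0
    for _ in range(iters):
        pr = 1.0 / (1.0 + np.exp(-theta))
        grad = np.sum(yv - pr)
        hess = -len(yv) * pr * (1 - pr)
        theta -= grad / hess
    return theta

theta_hat = mple_edges(ydyad)

# Dyad-resampling bootstrap for the uncertainty of theta_hat (the paper's
# point being that naive MPLE standard errors are unreliable).
boots = np.array([mple_edges(rng.choice(ydyad, ydyad.size, replace=True))
                  for _ in range(200)])
se_boot = boots.std()
```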
97.
We examine the hierarchical structure of correlation networks among the currencies used in Turkey's exports and imports over the 1996–2010 period, using the concepts of the minimal spanning tree (MST) and hierarchical tree (HT), which rest on ultrametricity. These trees are useful tools for understanding and detecting the global structure, taxonomy, and hierarchy of financial markets. We derive a hierarchical organization and build the MSTs and HTs for the 1996–2001 and 2002–2010 sub-periods. We study these two sub-periods separately because the Euro (EUR) came into use in 2001 and some countries have traded with Turkey via the EUR since 2002; the split also lets us test different time windows and observe temporal evolution. We carry out a bootstrap analysis to attach a measure of statistical reliability to the links of the MSTs and HTs, and we use average linkage cluster analysis (ALCA) to observe the cluster structure more clearly. Moreover, since economic trade is a bidimensional problem, we also obtain the bidimensional minimal spanning tree (BMST). From the topologies of these trees we identify different clusters of currencies according to their proximity and economic ties. Our results show that some currencies are more important within the network because of their tighter connections with other currencies, that these currencies play a key role in Turkey's exports and imports, and that the findings have implications for the design of portfolio and investment strategies.
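The MST construction can be sketched as follows, using the standard correlation-to-distance transform d_ij = sqrt(2(1 − ρ_ij)) and Prim's algorithm; the currency return series are simulated and the labels are placeholders, not the article's data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy daily log-return series for a few currencies (labels hypothetical).
labels = ["USD", "EUR", "GBP", "JPY", "CHF", "TRY"]
common = rng.normal(size=500)
R = np.array([0.7 * common + 0.7 * rng.normal(size=500) for _ in labels])

# Correlation-based ultrametric-compatible distance.
rho = np.corrcoef(R)
D = np.sqrt(2.0 * (1.0 - rho))

def prim_mst(dist):
    """Minimal spanning tree by Prim's algorithm; returns an edge list."""
    m = dist.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < m:
        _, i, j = min((dist[i, j], i, j)
                      for i in in_tree for j in range(m) if j not in in_tree)
        edges.append((i, j))
        in_tree.add(j)
    return edges

mst_edges = prim_mst(D)
```

A bootstrap reliability value for each link, as in the abstract, would be obtained by resampling the return dates, rebuilding the MST, and counting how often each edge reappears.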
98.
In productivity and efficiency analysis, the technical efficiency of a production unit is measured through its distance to the efficient frontier of the production set. The most familiar non-parametric methods use Farrell–Debreu, Shephard, or hyperbolic radial measures. These approaches require that inputs and outputs be non-negative, which can be problematic when using financial data. Recently, Chambers et al. (1998) have introduced directional distance functions, which can be viewed as additive (rather than multiplicative) measures of efficiency. Directional distance functions are not restricted to non-negative input and output quantities; in addition, the traditional input- and output-oriented measures are nested as special cases of directional distance functions. Consequently, directional distances provide greater flexibility. However, until now, only free disposal hull (FDH) estimators of directional distances (and their conditional and robust extensions) have had known statistical properties (Simar and Vanhems, 2012). This paper develops the statistical properties of directional distance estimators, which are especially useful when the production set is assumed convex. We first establish that the directional Data Envelopment Analysis (DEA) estimators share the known properties of the traditional radial DEA estimators. We then use these properties to develop consistent bootstrap procedures for statistical inference about directional distance, estimation of confidence intervals, and bias correction. The methods are illustrated in some empirical examples.
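As a concrete, deliberately simplified illustration, the sketch below computes directional distances under free disposal only, i.e. the FDH case mentioned in the abstract; the convex DEA estimator the paper studies would instead solve a linear program per unit. The data and direction vector are invented.

```python
import numpy as np

# Toy data: one input, one output per decision-making unit (DMU).
X = np.array([2.0, 4.0, 6.0, 5.0])   # inputs
Y = np.array([3.0, 6.0, 7.0, 4.0])   # outputs
gx, gy = 1.0, 1.0                     # direction: shrink input, grow output

def directional_fdh(x0, y0, X, Y, gx, gy):
    """Directional distance under free disposal (FDH): the largest beta
    such that some observed unit uses at most x0 - beta*gx input and
    produces at least y0 + beta*gy output."""
    betas = np.minimum((x0 - X) / gx, (Y - y0) / gy)
    return betas.max()

d = [directional_fdh(X[i], Y[i], X, Y, gx, gy) for i in range(len(X))]
```

Units on the FDH frontier get distance 0 (each unit dominates itself), while a dominated unit gets a strictly positive distance; note that, unlike radial measures, nothing here requires the data to be non-negative.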
99.
In actuarial practice, outstanding claims reserves are usually evaluated with deterministic methods such as the chain ladder. These methods have two shortcomings: they cannot jointly exploit the information contained in an insurer's historical paid-claims and incurred (reported) claims data, and they yield only a point estimate of the outstanding claims reserve with no measure of its uncertainty. To overcome these shortcomings, this paper combines the Mack model assumptions with nonparametric bootstrap resampling to propose a stochastic Munich chain ladder method for evaluating outstanding claims reserves, and presents a numerical analysis of a practical actuarial example using R.
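A crude sketch of the deterministic chain ladder baseline plus a link-ratio bootstrap, as a toy stand-in for the Mack-assumption bootstrap the paper actually develops; the run-off triangle is invented.

```python
import random
import statistics

random.seed(0)

# Cumulative paid-claims run-off triangle (rows = accident years,
# columns = development years); figures are made up for illustration.
triangle = [
    [100, 150, 175, 180],
    [110, 168, 196],
    [120, 180],
    [130],
]

def chain_ladder_reserve(tri):
    """Classical (deterministic) chain ladder: volume-weighted
    development factors, then project each row to ultimate."""
    n = len(tri)
    factors = []
    for k in range(n - 1):
        num = sum(row[k + 1] for row in tri if len(row) > k + 1)
        den = sum(row[k] for row in tri if len(row) > k + 1)
        factors.append(num / den)
    reserve = 0.0
    for row in tri:
        ult = row[-1]
        for k in range(len(row) - 1, n - 1):
            ult *= factors[k]
        reserve += ult - row[-1]
    return reserve

point = chain_ladder_reserve(triangle)

# Crude link-ratio bootstrap: resample the individual development factors
# within each column to obtain a reserve distribution, hence a measure of
# uncertainty that the deterministic method lacks.
def bootstrap_reserves(tri, B=1000):
    n = len(tri)
    ratios = [[row[k + 1] / row[k] for row in tri if len(row) > k + 1]
              for k in range(n - 1)]
    out = []
    for _ in range(B):
        fs = [statistics.fmean(random.choices(r, k=len(r))) for r in ratios]
        reserve = 0.0
        for row in tri:
            ult = row[-1]
            for k in range(len(row) - 1, n - 1):
                ult *= fs[k]
            reserve += ult - row[-1]
        out.append(reserve)
    return out

dist = bootstrap_reserves(triangle)
```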
100.