71.
Positive matrix factorization (PMF) was used to deduce the aerosol sources at a rural site on the Mediterranean coast of Turkey, using samples collected between February 1992 and December 1993. Approximately 600 daily aerosol samples were collected, and 40 elements and compounds were analyzed by atomic absorption spectrometry, instrumental neutron activation analysis, ion chromatography, and colorimetry. Seven factors were identified with PMF: local dust, Saharan dust, sea salt, long-range transport, smelter, arsenic, and fertilizer. The non-parametric bootstrapped potential source contribution function (PSCF) was then used to help identify the likely locations of the regional pollution sources. In addition, explained variance, enrichment factors, the seasonal variation of G-score values, and back trajectories were used to define the source regions of the factors. The results demonstrated major potential source areas, for the pollution-derived component of the aerosol mass, on the Aegean coast, in northwest Turkey, the Balkan countries, Ukraine, and the regions north of Ukraine.
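The factorization step can be sketched in a few lines. The sketch below uses plain non-negative matrix factorization via Lee-Seung multiplicative updates on synthetic data of the same shape as the study (600 samples x 40 species, 7 factors); true PMF additionally weights each entry by its measurement uncertainty, which is omitted here, and all data values are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 600 daily samples x 40 measured species,
# generated from 7 non-negative source profiles plus noise (sizes mirror
# the study; the values are synthetic).
G_true = rng.random((600, 7))
F_true = rng.random((7, 40))
X = G_true @ F_true + 0.01 * rng.random((600, 40))

def pmf_like(X, k, n_iter=500, eps=1e-9):
    """Unweighted non-negative factorization X ~ G @ F via Lee-Seung
    multiplicative updates; true PMF also weights each entry by its
    measurement uncertainty."""
    G = rng.random((X.shape[0], k))
    F = rng.random((k, X.shape[1]))
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

G, F = pmf_like(X, k=7)
err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)  # relative fit error
```

The rows of `F` play the role of source profiles and the columns of `G` the daily source contributions; both stay non-negative by construction of the updates.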
72.
73.
An extended version of the dynamic parametric model of Hatzopoulos and Haberman (2009) is proposed for analyzing mortality structures, incorporating the cohort effect. A one-factor parameterized exponential polynomial in age effects is used within the generalized linear models (GLM) framework. Sparse principal component analysis (SPCA) is then applied to the time-dependent GLM parameter estimates and provides (marginal) estimates for a two-factor principal component (PC) structure. Modeling the two-factor residuals in the same way, in age-cohort effects, provides estimates for the (conditional) three-factor age-period-cohort model. The age-time and cohort-related components are extrapolated using dynamic linear regression (DLR) models. An application is presented for England & Wales males (1841-2006).
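The component-extraction step can be illustrated on a synthetic stand-in for the time-dependent parameter estimates. The sketch below uses plain PCA via the SVD; the paper uses *sparse* PCA, which adds an l1-type penalty on the loadings, and all dimensions and values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for time-dependent GLM parameter estimates:
# rows = calendar years, columns = exponential-polynomial age
# coefficients, built here as a noisy rank-2 signal.
years, n_par = 80, 6
t = np.linspace(0.0, 1.0, years)
B = (np.outer(t, rng.normal(size=n_par))
     + 0.3 * np.outer(np.sin(2 * np.pi * t), rng.normal(size=n_par))
     + 0.01 * rng.normal(size=(years, n_par)))

# Plain PCA via the SVD (sparse PCA would penalize the loadings).
Bc = B - B.mean(axis=0)
U, s, Vt = np.linalg.svd(Bc, full_matrices=False)
scores = U[:, :2] * s[:2]      # two period factors
loadings = Vt[:2]              # their parameter loadings
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

The two `scores` series are what would then be extrapolated with dynamic linear regression to project mortality forward.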
74.
In practice, managers often wish to ascertain that a particular engineering design of a production system meets their requirements. The future environment of this design is likely to differ from the environment assumed during the design, so it is crucial to find out which variations in that environment may make the design unacceptable (infeasible). This article proposes a methodology for estimating which uncertain environmental parameters are important (so managers can become proactive) and which combinations of parameter values (scenarios) make the design unacceptable. The proposed methodology combines simulation, bootstrapping, design of experiments, and linear regression metamodeling. It is illustrated through a simulated manufacturing system with fourteen uncertain parameters of the input distributions for the various arrival and service times. These parameters are investigated through the simulation of sixteen scenarios, selected through a two-level fractional factorial statistical design. The resulting simulation input/output (I/O) data are analyzed through a first-order polynomial metamodel and bootstrapping. A second experiment with other scenarios gives some outputs that turn out to be unacceptable. In general, polynomials fitted to the simulation's I/O data can estimate the border line (frontier) between acceptable and unacceptable environments.
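The design-plus-metamodel step can be sketched as follows. A toy linear "simulation" stands in for the manufacturing model (the study used 14 parameters; k = 5 keeps the sketch small), a two-level fractional factorial supplies 16 scenarios, and a first-order polynomial is fitted by least squares; everything here is a hypothetical illustration, not the paper's model.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Toy stand-in for the simulation model: output is a linear function of
# k uncertain environment parameters plus noise.
k = 5
beta_true = np.array([2.0, -1.5, 0.0, 0.5, 3.0])

def simulate(x):
    return 10.0 + x @ beta_true + rng.normal(scale=0.1)

# A 2^(5-1) fractional factorial: columns 1-4 form a full 2^4 design and
# column 5 is their product, giving 16 two-level scenarios.
base = np.array(list(product([-1, 1], repeat=4)))
design = np.column_stack([base, base.prod(axis=1)])

y = np.array([simulate(x) for x in design])

# First-order polynomial metamodel fitted by least squares.
Xmat = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
```

Because the design columns are orthogonal, each fitted coefficient estimates one main effect; large coefficients flag the environmental parameters managers should watch.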
75.
Stochastic Analysis and Applications, 2013, 31(4): 853-869
Abstract

For bootstrap sample means resulting from a sequence {X_n, n ≥ 1} of random variables, very general weak laws of large numbers are established. The random variables {X_n, n ≥ 1} need not be independent or identically distributed, nor have any particular dependence structure. In general, no moment conditions are imposed on the {X_n, n ≥ 1}. Examples are provided that illustrate the sharpness of the main results.
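The object the laws concern is easy to construct. This minimal sketch draws one heavy-tailed sequence (infinite variance, finite mean, chosen purely for illustration; the paper requires no such assumptions) and forms its bootstrap sample means.

```python
import numpy as np

rng = np.random.default_rng(3)

# One observed sequence X_1, ..., X_n.  Heavy-tailed iid draws are used
# here only as an example; the paper's laws need neither independence
# nor moment conditions.
n = 2000
x = rng.standard_t(df=2, size=n)

# Bootstrap sample means: resample n values with replacement from the
# observed sequence and average, B times.
B = 500
boot_means = np.array([rng.choice(x, size=n, replace=True).mean()
                       for _ in range(B)])
```

The weak laws describe when such `boot_means` concentrate around the sample mean of the observed sequence as n grows.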
76.
Abstract

Empirical likelihood methods are developed for constructing confidence bands in problems of nonparametric density estimation. These techniques have an advantage over more conventional methods in that the shape of the bands is determined solely by the data. We show how to construct an empirical likelihood functional, rather than a function, and contour it to produce the confidence bands. Analogs of Wilks's theorem are established in this infinite-parameter setting and may be used to select the appropriate contour. An alternative calibration, based on the bootstrap, is also suggested. Large-sample theory is developed to show that the bands have asymptotically correct coverage, and a numerical example is presented to demonstrate the technique. Comparisons are made with the use of bootstrap replications to choose both the shape and size of the bands.
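The bootstrap comparison mentioned at the end can be sketched directly: resample the data, re-estimate the density, and take pointwise quantiles of the replicates as a band. This is the conventional bootstrap band the paper compares against, not the empirical likelihood construction itself; sample, grid, and bandwidth below are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)

def kde(x_eval, data, h):
    """Gaussian kernel density estimate on a grid."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

data = rng.normal(size=300)       # hypothetical sample
grid = np.linspace(-3.0, 3.0, 61)
h = 0.4                           # bandwidth, chosen arbitrarily here
fhat = kde(grid, data, h)

# Bootstrap band: resample, re-estimate, take pointwise quantiles.
B = 200
boot = np.array([kde(grid, rng.choice(data, size=len(data)), h)
                 for _ in range(B)])
lo, hi = np.quantile(boot, [0.025, 0.975], axis=0)
```

The empirical likelihood bands differ in that their shape at each point is driven by the likelihood functional rather than by these symmetric resampling quantiles.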
77.
Statistical inference based on a selected model can be overoptimistic and even misleading because of the uncertainty of the model selection procedure, especially in high-dimensional data analysis. In this article, we propose a bootstrap-based tilted correlation screening learning (TCSL) algorithm to alleviate this uncertainty. The algorithm is inspired by a recently proposed variable selection method, the TCS algorithm, which screens variables via tilted correlation. Our algorithm can reduce the prediction error and make the interpretation more reliable. Another gain of our algorithm is its reduced computational cost compared with the TCS algorithm when the dimension is large. Extensive simulation examples and the analysis of a real dataset exhibit the good performance of our algorithm. Supplementary materials for this article are available online.
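The "bootstrap the screening step" idea can be sketched with plain marginal correlation in place of the tilted correlation (the tilting, which adjusts for correlated predictors, is the TCS algorithm's contribution and is not reproduced here); data and dimensions are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy high-dimensional setup: n = 100 observations, p = 200 predictors,
# only the first three truly drive the response.
n, p = 100, 200
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.8 * X[:, 1] - 0.8 * X[:, 2] + 0.5 * rng.normal(size=n)

def screen(X, y, top_k=10):
    """Rank predictors by absolute marginal correlation with y (the TCS
    algorithm instead 'tilts' the correlation; plain correlation keeps
    the sketch short)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    cors = np.abs(Xs.T @ ys) / len(y)
    return np.argsort(cors)[::-1][:top_k]

# Bootstrap the screening step and keep the variables selected most often.
B = 100
counts = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, size=n)
    counts[screen(X[idx], y[idx])] += 1
stable = np.argsort(counts)[::-1][:3]
```

Aggregating selections over bootstrap replicates is what damps the model-selection uncertainty: variables that survive most resamples are the ones worth interpreting.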
78.
B.A. Desmarais, S.J. Cranmer, Physica A, 2012, 391(4): 1865-1876
Exponential random graph models (ERGMs) are powerful tools for formulating theoretical models of network generation or learning the properties of empirical networks. They can be used to construct models that exactly reproduce network properties of interest. However, tuning these models correctly requires computationally intractable maximization of the probability of the network of interest, i.e., maximum likelihood estimation (MLE). We discuss methods of approximate MLE and show that, though promising, simulation-based methods pose difficulties in application because it is not known how much simulation is required. An alternative to simulation methods, maximum pseudolikelihood estimation (MPLE), is deterministic and has known asymptotic properties, but standard methods of assessing uncertainty with MPLE perform poorly. We introduce a resampling method that greatly outperforms the standard approach to characterizing uncertainty with MPLE. We also introduce ERGMs for dynamic networks: the temporal ERGM (TERGM). In an application to modeling cosponsorship networks in the United States Senate, we show how recently proposed methods for dynamic network modeling can be integrated into the TERGM framework, and how our resampling method can be used to characterize uncertainty about network dynamics.
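For the simplest ERGM, the edges-only model, the MPLE has a closed form, which makes the "resample to assess uncertainty" idea easy to sketch. The naive dyad resampling below is a hypothetical stand-in for the paper's scheme (which must respect dependence between dyads sharing a node), and the network is synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy undirected network: Bernoulli(0.12) edges among 40 nodes, stored
# as a vector of dyad indicators.
n = 40
i_upper, j_upper = np.triu_indices(n, k=1)
edges = rng.random(i_upper.size) < 0.12

def mple(e):
    """MPLE for the edges-only ERGM: the pseudolikelihood is a logistic
    regression of each dyad on its change statistic (a constant here),
    so the estimate reduces to the logit of the observed density."""
    d = e.mean()
    return np.log(d / (1.0 - d))

theta_hat = mple(edges)

# Naive dyad resampling to characterize uncertainty about theta_hat.
B = 500
boot = np.array([mple(rng.choice(edges, size=edges.size, replace=True))
                 for _ in range(B)])
ci = np.quantile(boot, [0.025, 0.975])
```

With richer statistics (triangles, stars, temporal terms in a TERGM) the change statistics vary by dyad and the logistic regression is fitted numerically, but the resampling logic is the same.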
79.
We have examined the hierarchical structure of the correlation networks among Turkey's exports and imports by currency for the 1996–2010 period, using the concepts of the minimal spanning tree (MST) and the hierarchical tree (HT), which depend on the concept of ultrametricity. These trees are useful tools for understanding and detecting the global structure, taxonomy, and hierarchy of financial markets. We have derived a hierarchical organization and built the MSTs and HTs for the 1996–2001 and 2002–2010 periods. The reason for studying these two sub-periods is that the Euro (EUR) came into use in 2001 and some countries have conducted their exports and imports with Turkey via the EUR since 2002; the split also allows us to test various time windows and observe temporal evolution. We have carried out a bootstrap analysis to associate a value of statistical reliability with the links of the MSTs and HTs, and have used average linkage cluster analysis (ALCA) to observe the cluster structure more clearly. Moreover, we have obtained the bidimensional minimal spanning tree (BMST), since economic trade is a bidimensional problem. From the structural topologies of these trees, we have identified different clusters of currencies according to their proximity and economic ties. Our results show that some currencies are more important within the network, owing to tighter connections with other currencies, and that these currencies play a key role in Turkey's exports and imports, with important implications for the design of portfolio and investment strategies.
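The MST construction can be sketched end to end: convert the correlation matrix to the standard metric distance d = sqrt(2(1 - rho)) and run Prim's algorithm. The six "currency" series below are synthetic stand-ins, with the first two built to be highly correlated so they end up adjacent in the tree.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-in for currency return series: 6 series, the first
# two highly correlated (an "EUR-like" pair), the rest independent.
T = 500
base = rng.normal(size=T)
cols = [base + 0.3 * rng.normal(size=T) for _ in range(2)]
cols += [rng.normal(size=T) for _ in range(4)]
R = np.column_stack(cols)

# Correlation -> metric distance d_ij = sqrt(2 * (1 - rho_ij)).
corr = np.corrcoef(R, rowvar=False)
D = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))

def prim_mst(D):
    """Minimal spanning tree by Prim's algorithm; returns n-1 edges."""
    n = D.shape[0]
    in_tree, tree_edges = {0}, []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or D[i, j] < best[2]):
                    best = (i, j, D[i, j])
        tree_edges.append((best[0], best[1]))
        in_tree.add(best[1])
    return tree_edges

tree_edges = prim_mst(D)
```

The hierarchical tree follows from the same distances via single or average linkage clustering, and the bootstrap reliability of each link is obtained by rebuilding the tree on resampled time windows.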
80.
In productivity and efficiency analysis, the technical efficiency of a production unit is measured through its distance to the efficient frontier of the production set. The most familiar non-parametric methods use Farrell–Debreu, Shephard, or hyperbolic radial measures. These approaches require that inputs and outputs be non-negative, which can be problematic when using financial data. More recently, Chambers et al. (1998) introduced directional distance functions, which can be viewed as additive (rather than multiplicative) measures of efficiency. Directional distance functions are not restricted to non-negative input and output quantities; in addition, the traditional input- and output-oriented measures are nested as special cases of directional distance functions. Consequently, directional distances provide greater flexibility. However, until now, only free disposal hull (FDH) estimators of directional distances (and their conditional and robust extensions) have had known statistical properties (Simar and Vanhems, 2012). This paper develops the statistical properties of directional DEA estimators, which are especially useful when the production set is assumed convex. We first establish that the directional Data Envelopment Analysis (DEA) estimators share the known properties of the traditional radial DEA estimators. We then use these properties to develop consistent bootstrap procedures for statistical inference about directional distances, estimation of confidence intervals, and bias correction. The methods are illustrated in some empirical examples.
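The FDH version of the directional distance, the estimator whose properties were previously known, has a simple closed form that makes the concept concrete: a unit's score is the largest step beta it can take in the direction (-g_x, +g_y) while remaining dominated by some observed unit. The data below are synthetic, and the paper's convex DEA version would replace this maximin with a linear program.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy data: 8 production units, 2 inputs, 1 output (all synthetic).
X = rng.uniform(1.0, 5.0, size=(8, 2))
Y = (X.sum(axis=1) * rng.uniform(0.5, 1.0, size=8))[:, None]

def fdh_directional(x0, y0, X, Y, gx, gy):
    """Directional distance to the free disposal hull (FDH) frontier:
    the largest beta such that some observed unit j dominates the point
    (x0 - beta*gx, y0 + beta*gy)."""
    betas = []
    for j in range(len(X)):
        b_in = np.min((x0 - X[j]) / gx)     # input room in direction gx
        b_out = np.min((Y[j] - y0) / gy)    # output room in direction gy
        betas.append(min(b_in, b_out))
    return max(betas)

gx, gy = np.ones(2), np.ones(1)
scores = np.array([fdh_directional(X[i], Y[i], X, Y, gx, gy)
                   for i in range(len(X))])
```

A score of zero marks an FDH-efficient unit; positive scores measure, additively and in the units of the direction vector, how far a unit sits from the frontier. The bootstrap procedures developed in the paper resample such scores to build confidence intervals and correct bias.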