1.
Several methods exist for imputing the number of responders from summary continuous outcome data in randomized controlled trials. A method by Furukawa and others has been used, in the quite common case that only summary continuous outcome measures, and not the actual numbers of responders, are reported, to estimate response rates (probabilities) for different treatments and response ratios between treatments in such trials. The authors give some empirical justification but encourage the search for theoretical support and further empirical exploration. In particular, one problem that needs to be addressed is that randomness in the baseline score is not taken into consideration; this is done in the present paper. Assuming a binormal model for the data, we theoretically compare the true response rate for a single treatment arm to the response rate underlying two versions of the suggested imputation method. We also assess the performance of the method numerically for selected model parameters. We show that the method works satisfactorily in some cases but can be seriously biased in others. Moreover, assessing the uncertainty of the estimates is problematic. We suggest an alternative Bayesian estimation procedure, based directly on the normal model, which avoids these problems and provides more precise estimates when applied to simulated data sets.
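The imputation idea described above can be illustrated with a minimal sketch. Assuming (as one hypothetical reading of such imputation methods, not the authors' exact procedure) that endpoint scores are normally distributed and that "response" means scoring below a cutoff, a response rate can be imputed from the reported summary mean and SD alone:

```python
from statistics import NormalDist

def imputed_response_rate(mean, sd, cutoff):
    """Impute the probability that a patient's continuous outcome
    falls below a response cutoff, assuming endpoint scores are
    normally distributed (lower score = better, as on many scales)."""
    return NormalDist(mean, sd).cdf(cutoff)

# Hypothetical summary data: mean endpoint score 12.0, SD 8.0,
# with response defined as a score below 16.
rate = imputed_response_rate(12.0, 8.0, 16.0)  # ≈ 0.69 for these made-up numbers
```

Note that this treats the reported mean and SD as fixed; as the abstract points out, ignoring randomness in the baseline score is exactly the kind of simplification that can bias such estimates.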
2.
We present spectra of charged hadrons from Au+Au and d+Au collisions at √s_NN = 200 GeV measured with the BRAHMS experiment at RHIC. The spectra for different collision centralities are compared to spectra from p+p̄ collisions at the same energy, scaled by the number of binary collisions. The resulting ratios (nuclear modification factors) for central Au+Au collisions at η = 0 and η = 2.2 show a strong suppression in the high-p_T region (> 2 GeV/c). In contrast, the d+Au nuclear modification factor (at η = 0) exhibits an enhancement of the high-p_T yields. These measurements indicate a large energy loss of high-p_T particles in the medium created in central Au+Au collisions. The lack of suppression in d+Au collisions makes it unlikely that initial-state effects can explain the suppression observed in central Au+Au collisions.
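The nuclear modification factor used above is, schematically, the per-event yield in the nuclear collision divided by the binary-collision-scaled elementary-collision yield. A sketch with entirely made-up numbers (these are not BRAHMS measurements) shows how suppression and enhancement read off from the ratio:

```python
def nuclear_modification_factor(yield_nuclear, n_coll, yield_elementary):
    """R = (per-event yield in A+A or d+A) / (N_coll * per-event yield in p+p).
    R < 1 at high p_T signals suppression; R > 1 signals enhancement."""
    return yield_nuclear / (n_coll * yield_elementary)

# Hypothetical yields in one high-p_T bin (illustrative values only):
r_central_auau = nuclear_modification_factor(0.4, 1000, 0.001)  # ≈ 0.4: suppressed
r_dau = nuclear_modification_factor(0.012, 10, 0.001)           # ≈ 1.2: enhanced
```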
3.
In this paper we apply the theory of Bayesian forecasting and dynamic linear models, as presented in West and Harrison (1997), to monthly data from an insurance company. The total number of reported compensation claims is chosen as the primary time series of interest. The model is decomposed into a trend block, a seasonal-effects block and a regression block with a transformed number of policies as regressor. An essential part of the West and Harrison (1997) approach is to find optimal discount factors for each block, thereby avoiding the specification of the variance matrices of the error terms in the system equations. The BATS package of Pole et al. (1994) is applied in the analysis. We compare predictions based on this analytical approach with predictions based on a standard simulation approach using the BUGS package of Spiegelhalter et al. (1995). The motivation for this comparison is to learn about the quality of predictions based on more or less standard simulation techniques in other applications where an analytical approach is impossible. The predicted values of the two approaches are very similar. The uncertainties in the predictions based on the simulation approach, however, are far larger, especially two or more months ahead. This partly reflects the advantages of applying optimal discount factors and partly the disadvantages of at least a standard simulation approach for long-term predictions.
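The discount-factor device from West and Harrison can be sketched in the simplest setting, a local-level DLM, where a discount factor δ replaces the system variance W by inflating the prior variance at each step. This is an illustrative sketch under that assumption, not the BATS implementation, and the data are made up:

```python
def dlm_filter(ys, m0=0.0, C0=1e6, V=1.0, delta=0.95):
    """Filter a local-level DLM y_t = theta_t + v_t, theta_t = theta_{t-1} + w_t,
    using a discount factor: R_t = C_{t-1}/delta instead of R_t = C_{t-1} + W."""
    m, C = m0, C0
    forecasts = []
    for y in ys:
        R = C / delta            # discounted prior variance
        f, Q = m, R + V          # one-step forecast mean and variance
        forecasts.append((f, Q))
        A = R / Q                # adaptive coefficient (Kalman gain)
        m = m + A * (y - f)      # posterior mean
        C = A * V                # posterior variance (equals R - A**2 * Q)
    return m, C, forecasts

# Hypothetical monthly claim counts (scaled, made-up data):
ys = [10.2, 9.8, 10.5, 10.1, 9.9]
m, C, forecasts = dlm_filter(ys)
```

Smaller δ discounts old information faster, widening forecast intervals; choosing δ per block is the tuning step the abstract refers to as finding optimal discount factors.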
4.
Stochastic earthquake models are often based on a marked point process approach, as for instance presented in Vere-Jones (Int. J. Forecast., 11:503–538, 1995). This gives a fine resolution in both space and time, making it possible to represent each earthquake. It is not obvious, however, that this approach is advantageous when aiming at earthquake predictions. In the present paper we take a coarser point of view, considering grid cells of 0.5° × 0.5°, or about 50 × 50 km, and time periods of 4 months, which seems suitable for predictions. More specifically, we discuss different alternatives of a Bayesian hierarchical space-time model in the spirit of Wikle et al. (Environ. Ecol. Stat., 5:117–154, 1998). For each time period the observations are the magnitudes of the largest observed earthquake within each grid cell. As data we use parts of an earthquake catalogue provided by the Northern California Earthquake Data Center, limiting ourselves to the area 32–37° N, 115–120° W for the period January 1981 through December 1999, which contains the Landers and Hector Mine earthquakes of magnitudes 7.3 and 7.1 on the Richter scale, respectively. Based on the space-time model alternatives, one-step earthquake predictions for the time periods containing these two events are produced for all grid cells. The model alternatives are implemented within an MCMC framework in Matlab. The model alternative that gives the overall best predictions under a standard loss is claimed to give new knowledge on the spatial and temporal dependencies between earthquakes. Considering also a specially designed loss that uses spatial averages of the 90th percentiles of the distribution of predicted values in each cell, it is clear that the best model predicts the high-risk areas rather well. We believe these percentiles provide a valuable tool for defining high- and low-risk areas in a region for short-term predictions.
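The specially designed loss builds on per-cell 90th percentiles of the predicted-magnitude distribution. A hypothetical sketch (made-up cell labels and posterior samples, not the paper's catalogue or its MCMC output) of flagging high-risk grid cells from such percentiles:

```python
def percentile_90(samples):
    """90th percentile of posterior predictive samples for one grid cell
    (simple order-statistic estimate)."""
    s = sorted(samples)
    return s[int(0.9 * (len(s) - 1))]

def high_risk_cells(cell_samples, threshold):
    """Flag cells whose 90th-percentile predicted magnitude reaches a threshold."""
    risky = {}
    for cell, samples in cell_samples.items():
        p90 = percentile_90(samples)
        if p90 >= threshold:
            risky[cell] = p90
    return risky

# Hypothetical posterior predictive magnitude samples for two cells:
cells = {
    "(34.0N,116.5W)": [3.1, 3.4, 5.9, 6.2, 7.0, 2.8, 4.0, 3.3, 3.5, 3.6, 6.8],
    "(33.0N,118.0W)": [2.0, 2.2, 2.1, 2.5, 2.4, 2.3, 2.6, 2.0, 2.2, 2.1, 2.7],
}
risky = high_risk_cells(cells, threshold=5.0)  # only the first cell is flagged
```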