51 results found (search time: 31 ms)
1.
The generalized Kullback-Leibler divergence (K-Ld) in Tsallis statistics, constrained by the additive duality of generalized statistics (the dual generalized K-Ld), is reconciled here with the theory of Bregman divergences for expectations defined by normal averages, within a measure-theoretic framework. Specifically, the dual generalized K-Ld is shown to be a scaled Bregman divergence. The Pythagorean theorem is derived from the minimum discrimination information principle using the dual generalized K-Ld as the measure of uncertainty, with constraints defined by normal averages. The minimization of the dual generalized K-Ld under normal-averages constraints is shown to exhibit distinctly unique features.
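The Tsallis-statistics machinery above is not spelled out in the abstract, but the ordinary (q = 1) special case of its central claim is easy to verify numerically: the Kullback-Leibler divergence between probability vectors is exactly the Bregman divergence generated by the negative Shannon entropy. A minimal sketch (all names here are illustrative, not from the paper):

```python
import numpy as np

def bregman(phi, grad_phi, p, q):
    """Bregman divergence B_phi(p, q) = phi(p) - phi(q) - <grad phi(q), p - q>."""
    return phi(p) - phi(q) - np.dot(grad_phi(q), p - q)

def neg_entropy(p):
    # Negative Shannon entropy, the generator that yields the KL divergence.
    return np.sum(p * np.log(p))

def grad_neg_entropy(q):
    return np.log(q) + 1.0

def kl(p, q):
    return np.sum(p * np.log(p / q))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
# For probability vectors the two quantities coincide.
assert np.isclose(bregman(neg_entropy, grad_neg_entropy, p, q), kl(p, q))
```

The paper's result generalizes this identity: the dual generalized (Tsallis) K-Ld is a *scaled* Bregman divergence rather than a plain one.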
2.
Building on the Kullback-Leibler distance, we propose an improved, new Kullback-Leibler distance and discuss its properties. We compute the new Kullback-Leibler distance between two different generalized gamma distributions, and derive the new Kullback-Leibler distances for the gamma, Weibull, Rayleigh, normal, and exponential distributions. In addition, under the new Kullback-Leibler distance, we show that the digamma function Ψ(x) = Γ′(x)/Γ(x) is monotonically increasing.
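The paper's modified distance is not given in the abstract, but the classical closed form it builds on is standard. A minimal sketch of the ordinary KL divergence between two gamma distributions (rate parametrization), with the digamma function approximated numerically from `lgamma`:

```python
from math import lgamma, log

def digamma(x, h=1e-5):
    # Numerical digamma psi(x) = d/dx ln Gamma(x), via central difference.
    return (lgamma(x + h) - lgamma(x - h)) / (2.0 * h)

def kl_gamma(a1, b1, a2, b2):
    """Classical KL divergence between Gamma(a1, rate b1) and Gamma(a2, rate b2)."""
    return ((a1 - a2) * digamma(a1)
            - lgamma(a1) + lgamma(a2)
            + a2 * (log(b1) - log(b2))
            + a1 * (b2 - b1) / b1)

# Identical distributions have zero divergence; distinct ones a positive value.
assert abs(kl_gamma(2.0, 1.5, 2.0, 1.5)) < 1e-9
assert kl_gamma(2.0, 1.5, 3.0, 0.5) > 0
# The abstract's monotonicity claim for psi is visible numerically.
assert digamma(2.0) > digamma(1.0)
```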
3.
One feasible approach to aggregating uncertainty judgments in risk assessments is to use calibration variables (or seed questions) and the Kullback-Leibler (K-L) distance to evaluate experts' substantive or normative expertise and assign weights based on the corresponding scores. However, the reliability of this aggregation model, and the effect of the number of seed questions or experts on the stability of the aggregated results, remain open questions. To assess the stability of the aggregation model, this study applies the jackknife re-sampling technique to a large data set of real-world expert opinions. We also use a nonlinear regression model to analyze and interpret the resulting jackknife estimates. Our statistical model indicates that the stability of Cooke's classical model, in which the components of the scoring rule are determined by the K-L distance, increases exponentially as the number of seed questions increases. Considering the difficulty and importance of creating and choosing appropriate seed variables, the results of this study justify the use of the K-L distance to determine and aggregate better probability interval or distribution estimates.
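The jackknife procedure the study relies on is generic: recompute the statistic on every leave-one-out subsample and use the spread of those estimates as a stability measure. A minimal sketch (the `scores` data here are hypothetical stand-ins for expert calibration scores, not the paper's data set):

```python
import statistics

def jackknife(values, stat):
    """Leave-one-out jackknife: recompute `stat` on each (n-1)-sized subsample."""
    n = len(values)
    estimates = [stat(values[:i] + values[i + 1:]) for i in range(n)]
    # Jackknife standard error of the statistic.
    mean_est = sum(estimates) / n
    var = (n - 1) / n * sum((e - mean_est) ** 2 for e in estimates)
    return estimates, var ** 0.5

scores = [0.62, 0.71, 0.58, 0.66, 0.74, 0.69, 0.61, 0.70]
loo_estimates, se = jackknife(scores, statistics.mean)
assert len(loo_estimates) == len(scores)
assert se > 0
```

In the study, a statistic of the aggregated expert distribution takes the place of `statistics.mean`, and the jackknife standard error quantifies how sensitive the aggregation is to dropping individual seed questions.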
4.
We propose a sequential optimizing betting strategy in the multi-dimensional bounded forecasting game, in the framework of the game-theoretic probability of Shafer and Vovk (2001) [10]. By studying the asymptotic behavior of its capital process, we prove a generalization of the strong law of large numbers in which the convergence rate of the sample mean vector depends on the growth rate of the quadratic variation process. The growth rate of the quadratic variation process may be slower than the number of rounds, or may even be zero. We also introduce an information criterion for selecting efficient betting items. These results are then applied to multiple asset trading strategies in discrete-time and continuous-time games. In the case of a continuous-time game we present a measure of the jaggedness of a vector-valued continuous process. We illustrate our results with several numerical examples.
5.
Summary: It is shown that a normal probability density can be characterized as a limit of conditional probability densities of i.i.d. uniform random variables.
6.
The estimation problem in multivariate linear calibration with elliptical errors is considered under a loss function derived from the Kullback-Leibler distance. First, we treat the problem under normal errors and derive an unbiased estimate of the risk of an alternative estimator by means of the Stein and Stein-Haff identities for the multivariate normal distribution. From this unbiased risk estimate, it is shown that a shrinkage estimator improves on the classical estimator under the loss function. Furthermore, using extended Stein and Stein-Haff identities for elliptically contoured distributions, the result under normal errors is extended to the estimation problem under elliptical errors. We show that the shrinkage estimator obtained under normal models remains better than the classical estimator under elliptical errors with the above loss function, thereby establishing the robustness of the shrinkage estimator.
7.
Summary: The problem is to estimate the mean of a normal distribution when there is vague prior information that the mean might be equal to zero. A minimax property of the preliminary-test estimator obtained by the AIC (Akaike Information Criterion) procedure is proved under a loss function based on the Kullback-Leibler information measure.
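For i.i.d. normal data with known variance σ², the AIC comparison between the null model (mean 0, zero parameters) and the full model (mean x̄, one parameter) reduces to a simple threshold: keep 0 whenever 2(logL₁ − logL₀) = n x̄²/σ² ≤ 2. A minimal sketch of the resulting preliminary-test estimator (the known-variance setup is an assumption for illustration):

```python
def aic_pretest_mean(xs, sigma=1.0):
    """Preliminary-test estimator of a normal mean via AIC.

    AIC_null = -2*logL0, AIC_full = -2*logL1 + 2 (one-parameter penalty),
    so the null model wins exactly when n*xbar^2/sigma^2 <= 2.
    """
    n = len(xs)
    xbar = sum(xs) / n
    return 0.0 if n * xbar ** 2 / sigma ** 2 <= 2.0 else xbar

# Data near zero: the pretest estimator shrinks all the way to 0.
assert aic_pretest_mean([0.1, -0.2, 0.05, 0.0]) == 0.0
# Data clearly away from zero: it returns the sample mean.
assert abs(aic_pretest_mean([2.0, 2.2, 1.9, 2.1]) - 2.05) < 1e-9
```

The paper's contribution is proving a minimax property of exactly this kind of two-stage estimator under Kullback-Leibler loss.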
8.
A measure of discrepancy between two residual-life distributions is proposed on the basis of Kullback-Leibler discrimination information. Properties of this measure are studied, and the minimum discrimination principle is applied to obtain the proportional hazards model.
9.
In this paper, we cast the problem of income redistribution in two different ways: as a nonlinear goal programming model and as a game-theoretic model. These two approaches provide characterizations of the probabilistic approach suggested by Intriligator for this problem. All three approaches reinforce the linear income redistribution plan as a desirable mechanism of income redistribution. This research was partly supported by ONR Contract No. N00014-82-K-0295 with the Center for Cybernetic Studies, The University of Texas, Austin, Texas.
10.
Asymptotic distances between probability distributions arising in πps sampling theory are studied. The designs considered are Poisson, Conditional Poisson (CP), Sampford, Pareto, Adjusted CP, and Adjusted Pareto sampling. Starting from the Kullback-Leibler divergence and the Hellinger distance, we derive a simpler distance measure using a second-order Taylor expansion. This measure is evaluated first theoretically and then numerically on small populations. The numerical examples are also illustrated using a multidimensional scaling technique called principal coordinate analysis (PCO). It turns out that Adjusted CP, Sampford, and Adjusted Pareto are quite close to each other; Pareto is somewhat further away, then comes CP, and finally Poisson, which is rather far from all the others.
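Both starting distances are standard for discrete distributions over possible samples. A minimal sketch of the two, applied to toy "sampling designs" (the probabilities below are hypothetical, merely standing in for e.g. Sampford vs. Pareto sample distributions):

```python
from math import sqrt, log

def kl_divergence(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def hellinger(p, q):
    """Hellinger distance: (1/sqrt(2)) * ||sqrt(p) - sqrt(q)||_2."""
    return sqrt(sum((sqrt(pi) - sqrt(qi)) ** 2 for pi, qi in zip(p, q)) / 2.0)

# Three toy designs over the same three possible samples.
design_a = [0.5, 0.3, 0.2]
design_b = [0.45, 0.35, 0.2]
design_c = [0.1, 0.1, 0.8]

# Designs a and b are "close"; c is far from both, in either metric.
assert hellinger(design_a, design_b) < hellinger(design_a, design_c)
assert kl_divergence(design_a, design_b) < kl_divergence(design_a, design_c)
```

The paper's simpler second-order measure approximates both of these near p = q, which is what makes the closeness rankings among the six designs comparable across metrics.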

Copyright©北京勤云科技发展有限公司  京ICP备09084417号