Similar Documents
 20 similar documents found; search time 640 ms
1.
We consider k populations with continuous distributions F_i = F((x − μ_i)/σ), i = 1, …, k, where the μ_i and σ are all unknown. To identify the population with the largest mean we sample in two stages: in the first stage, populations that appear unlikely to be the best are eliminated on the basis of an initial sample; in the second stage, samples are drawn from the remaining populations and the one with the largest sample mean is selected. Using asymptotic-distribution methods we obtain a lower bound on the probability of correct selection under the stated rule. For logistic populations in particular, we compute the relevant numerical values and the relative efficiency with respect to single-stage sampling.
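The screen-then-select scheme can be illustrated with a small pure-Python simulation; the screening margin c and the sample sizes below are illustrative choices, not the rule analyzed in the paper.

```python
import random

def two_stage_select(populations, n1=50, n2=50, c=0.5):
    """Two-stage selection of the population with the largest mean.

    Stage 1: sample n1 observations from each population and drop any
    population whose sample mean falls more than c below the best stage-1
    mean.  Stage 2: sample n2 more observations from each survivor and
    select the survivor with the largest combined sample mean.
    `populations` is a list of zero-argument callables returning one draw.
    """
    stage1 = [[pop() for _ in range(n1)] for pop in populations]
    means1 = [sum(s) / n1 for s in stage1]
    best1 = max(means1)
    survivors = [i for i, m in enumerate(means1) if m >= best1 - c]
    combined = {}
    for i in survivors:
        extra = [populations[i]() for _ in range(n2)]
        combined[i] = sum(stage1[i] + extra) / (n1 + n2)
    return max(combined, key=combined.get)

rng = random.Random(7)
pops = [lambda mu=mu: rng.gauss(mu, 1.0) for mu in (0.0, 0.2, 3.0)]
selected = two_stage_select(pops)
```

With well-separated means the procedure almost always discards the inferior populations at stage 1 and picks the best one at stage 2.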

2.
The empirical likelihood is a general nonparametric inference procedure with many desirable properties. Recently, theoretical results for empirical likelihood with certain censored/truncated data have been developed. However, the computation of empirical likelihood ratios with censored/truncated data is often nontrivial. This article proposes a modified self-consistent/EM algorithm to compute a class of empirical likelihood ratios for arbitrarily censored/truncated data with a mean type constraint. Simulations show that the chi-square approximations of the log-empirical likelihood ratio perform well. Examples and simulations are given in the following cases: (1) right-censored data with a mean parameter; and (2) left-truncated and right-censored data with a mean type parameter.
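For intuition, in the uncensored case the mean-constraint empirical likelihood ratio needs no EM step and can be computed directly from the Lagrange multiplier; this is the standard construction that the paper's censored/truncated algorithm generalizes, not the paper's algorithm itself.

```python
import math

def el_log_ratio(data, mu, tol=1e-12):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (uncensored data).

    Maximizes sum(log(n*p_i)) subject to sum(p_i) = 1 and
    sum(p_i * (x_i - mu)) = 0.  The multiplier lam solves
    g(lam) = sum(d_i / (1 + lam*d_i)) = 0 with d_i = x_i - mu,
    and g is strictly decreasing, so bisection applies.
    """
    d = [x - mu for x in data]
    lo_d, hi_d = min(d), max(d)
    if lo_d >= 0 or hi_d <= 0:
        return float('inf')            # mu outside the convex hull of the data
    # feasibility (p_i > 0) requires 1 + lam*d_i > 0 for every i
    lo = -1.0 / hi_d + tol
    hi = -1.0 / lo_d - tol
    g = lambda lam: sum(di / (1.0 + lam * di) for di in d)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)
```

At mu equal to the sample mean the multiplier is zero and the statistic vanishes; away from the mean the statistic grows, and outside the data's convex hull the constrained problem is infeasible.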

3.
Summary The concept of ignorance prior distribution is extended to the case of a hyperparameter. This leads to a procedure of formulating the partial ignorance of the original parameter. Its application to the estimation of the mean of a multivariate normal distribution with a particular hyperparameterized prior distribution of the mean leads to an improper prior distribution with the corresponding posterior mean very close to the James-Stein estimate. The Institute of Statistical Mathematics
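For reference, the James-Stein estimate that the posterior mean comes close to has a simple closed form; the sketch below shows the positive-part variant with known noise variance.

```python
def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein shrinkage of x toward the origin.

    For x ~ N(theta, sigma2 * I) in p >= 3 dimensions, the estimate is
    (1 - (p - 2) * sigma2 / ||x||^2)_+ * x, which dominates the
    maximum-likelihood estimate x under squared error loss.
    """
    p = len(x)
    if p < 3:
        raise ValueError("James-Stein requires p >= 3")
    norm2 = sum(xi * xi for xi in x)
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / norm2)
    return [shrink * xi for xi in x]
```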

4.
This paper presents a method of estimation of an “optimal” smoothing parameter (window width) in kernel estimators for a probability density. The obtained estimator is calculated directly from observations. By “optimal” smoothing parameters we mean those parameters which minimize the mean integral square error (MISE) or the integral square error (ISE) of approximation of an unknown density by the kernel estimator. It is shown that the asymptotic “optimality” properties of the proposed estimator correspond (with respect to the order) to those of the well-known cross-validation procedure [1, 2]. Translated from Statisticheskie Metody Otsenivaniya i Proverki Gipotez, pp. 67–80, Perm, 1990.
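The paper's direct plug-in estimator is not reproduced here, but the cross-validation benchmark it is compared against [1, 2] has a short direct form for a Gaussian kernel; the grid search over candidate bandwidths below is an illustrative choice.

```python
import math

def lscv_score(data, h):
    """Least-squares cross-validation score for a Gaussian-kernel KDE.

    CV(h) = integral of fhat^2 - (2/n) * sum_i fhat_{-i}(x_i);
    the h minimizing CV(h) estimates the ISE/MISE-optimal bandwidth.
    Both terms have closed forms for the Gaussian kernel.
    """
    n = len(data)
    int_f2 = 0.0   # sum over all ordered pairs, including i == j
    loo = 0.0      # sum over ordered pairs with i != j (leave-one-out term)
    for i, xi in enumerate(data):
        for j, xj in enumerate(data):
            d2 = (xi - xj) ** 2
            int_f2 += math.exp(-d2 / (4 * h * h))
            if i != j:
                loo += math.exp(-d2 / (2 * h * h))
    int_f2 /= n * n * h * 2 * math.sqrt(math.pi)
    loo /= n * (n - 1) * h * math.sqrt(2 * math.pi)
    return int_f2 - 2 * loo

def select_bandwidth(data, grid):
    """Pick the bandwidth in `grid` minimizing the LSCV score."""
    return min(grid, key=lambda h: lscv_score(data, h))

data = [0.1, 0.4, 0.5, 0.9, 1.2, 1.5, 2.0, 2.3, 2.4, 3.0]
grid = [0.2, 0.3, 0.5, 0.8, 1.2]
h_cv = select_bandwidth(data, grid)
```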

5.
In this paper we present the asymptotic analysis of the linear Boltzmann equation for neutrons with a small positive parameter ε related to the mean free path, based upon the Chapman–Enskog procedure of the kinetic theory. We prove that if proper initial conditions derived by considering initial layer solutions are used, the diffusion equation gives the uniform approximation to the neutron density function with the O(ε²) accuracy.

6.
Summary. It has been shown that local linear smoothing possesses a variety of very attractive properties, not least being its mean square performance. However, such results typically refer only to asymptotic mean squared error, meaning the mean squared error of the asymptotic distribution, and in fact, the actual mean squared error is often infinite. See Seifert and Gasser (1996). This difficulty may be overcome by shrinking the local linear estimator towards another estimator with bounded mean square. However, that approach requires information about the size of the shrinkage parameter. From at least a theoretical viewpoint, very little is known about the effects of shrinkage. In particular, it is not clear how small the shrinkage parameter may be chosen without affecting first-order properties, or whether infinitely supported kernels such as the Gaussian require shrinkage in order to achieve first-order optimal performance. In the present paper we provide concise and definitive answers to such questions, in the context of general ridged and shrunken local linear estimators. We produce necessary and sufficient conditions on the size of the shrinkage parameter that ensure the traditional mean squared error formula. We show that a wide variety of infinitely-supported kernels, with tails even lighter than those of the Gaussian kernel, do not require any shrinkage at all in order to achieve traditional first-order optimal mean square performance. Received: 22 May 1995 / In revised form: 23 January 1997
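A sketch of a ridged local linear estimator with a Gaussian kernel: here the ridge is simply added to the weighted second moment in the normal equations, one simple variant of the shrinkage discussed above, not necessarily the exact ridging scheme of the paper.

```python
import math

def local_linear(x0, xs, ys, h, ridge=0.0):
    """Local linear regression estimate at x0 with a Gaussian kernel.

    Solves the kernel-weighted least squares fit of a line around x0 and
    returns the fitted intercept.  With ridge=0 the estimator reproduces
    linear functions exactly; ridge > 0 stabilizes the denominator in
    sparse-design regions at the cost of a small bias.
    """
    w = [math.exp(-((xi - x0) / h) ** 2 / 2) for xi in xs]
    s0 = sum(w)
    s1 = sum(wi * (xi - x0) for wi, xi in zip(w, xs))
    s2 = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, xs)) + ridge
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * (xi - x0) * yi for wi, xi, yi in zip(w, xs, ys))
    return (s2 * t0 - s1 * t1) / (s2 * s0 - s1 * s1)
```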

7.
The paper documents an investigation into some methods for fitting surfaces to scattered data. The fitting function is a multiquadric function, with the criterion for the fit being the least mean squared residual over the data points. The principal problem is the selection of knot points (or base points for the multiquadric basis functions), although the selection of the multiquadric parameter also plays a nontrivial role in the process. We first describe a greedy algorithm for knot selection, and this procedure is used as an initial step in what follows. The minimization including knot locations and the multiquadric parameter is explored, with some unexpected results in terms of “near repeated” knots. This phenomenon is explored and leads us to consider variable parameter values for the basis functions. Examples and results are given throughout.
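With the knots held fixed, the inner least-squares step has a simple closed form. The pure-Python sketch below works in one dimension via the normal equations; the greedy knot search and the optimization over knot locations and the multiquadric parameter from the paper are not reproduced, and the knot locations and parameter c are illustrative.

```python
import math

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def multiquadric_fit(xs, ys, knots, c=1.0):
    """Least-squares multiquadric fit: find coefficients a_j minimizing the
    squared residuals of y_i ~ sum_j a_j * sqrt((x_i - t_j)^2 + c^2)."""
    phi = lambda x, t: math.sqrt((x - t) ** 2 + c * c)
    A = [[phi(x, t) for t in knots] for x in xs]
    m, n = len(knots), len(xs)
    G = [[sum(A[i][j] * A[i][k] for i in range(n)) for k in range(m)]
         for j in range(m)]
    rhs = [sum(A[i][j] * ys[i] for i in range(n)) for j in range(m)]
    return gauss_solve(G, rhs)

knots = [2.0, 5.0, 8.0]
true_coef = [1.0, -0.5, 2.0]
xs = [0.5 * i for i in range(21)]
ys = [sum(a * math.sqrt((x - t) ** 2 + 1.0) for a, t in zip(true_coef, knots))
      for x in xs]
coef = multiquadric_fit(xs, ys, knots, c=1.0)
```

When the data actually come from a multiquadric combination on the chosen knots, least squares recovers the coefficients exactly (up to rounding).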

8.
We consider the empirical Bayes decision problem where the component problem is the sequential estimation of the mean of a one-parameter exponential family of distributions, with squared error loss for the estimation error and a cost c>0 for each observation. The present paper studies the untruncated sequential component case. In particular, an untruncated asymptotically pointwise optimal sequential procedure is employed as the component. With sequential components, an empirical Bayes decision procedure selects both a stopping time and a terminal decision rule for use in the component problem. The goodness of the empirical Bayes sequential procedure is measured by comparing the asymptotic behavior of its Bayes risk with that of the component procedure as the number of past data increases to infinity. Asymptotic risk equivalence of the proposed empirical Bayes sequential procedure to the component procedure is demonstrated. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada under grant GP7987.

9.
This paper presents a homotopy procedure which improves the solvability of mathematical programming problems arising from total variational methods for image denoising. The homotopy on the regularization parameter involves solving a sequence of equality-constrained optimization problems where the positive regularization parameter in each optimization problem is initially large and is reduced to zero. Newton’s method is used to solve the optimization problems and numerical results are presented.
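The paper's problems are equality-constrained and solved by Newton's method; as an illustration of the continuation idea only, the sketch below runs the homotopy on a quadratic (ridge-type) smoothing surrogate in 1-D, where each subproblem reduces to a tridiagonal solve (Thomas algorithm). The operator, boundary handling, and parameter schedule are illustrative assumptions, not the paper's.

```python
def smooth_homotopy(f, lambdas):
    """Continuation on the regularization parameter for the quadratic
    smoothing surrogate min_u ||u - f||^2 + lam * ||D u||^2, i.e.
    (I + lam * L) u = f with L the 1-D discrete Laplacian (Neumann ends),
    solved for each lam in the decreasing schedule `lambdas`."""
    n = len(f)
    u = list(f)
    for lam in lambdas:
        # diagonals of I + lam * L
        main = [1.0 + lam * (1.0 if i in (0, n - 1) else 2.0) for i in range(n)]
        off = [-lam] * (n - 1)
        # Thomas algorithm: forward sweep ...
        cp = [0.0] * (n - 1)
        dp = [0.0] * n
        cp[0] = off[0] / main[0]
        dp[0] = f[0] / main[0]
        for i in range(1, n):
            m = main[i] - off[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = off[i] / m
            dp[i] = (f[i] - off[i - 1] * dp[i - 1]) / m
        # ... and back substitution
        u = [0.0] * n
        u[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            u[i] = dp[i] - cp[i] * u[i + 1]
    return u
```

For this quadratic surrogate each solve is exact, so warm starting is unnecessary; in the paper's constrained TV setting, the decreasing-parameter sequence is what supplies good starting points for Newton's method.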

10.
Studies of near periodic patterns in many self-organizing physical and biological systems give rise to a nonlocal geometric problem in the entire space involving the mean curvature and the Newtonian potential. One looks for a set of prescribed volume such that, on the boundary of the set, the sum of the mean curvature of the boundary and the Newtonian potential of the set, multiplied by a parameter, is constant. Despite its simple form, the problem has a rich set of solutions and its corresponding energy functional has a complex landscape. When the parameter is sufficiently large, there exists a solution that consists of two tori: a larger torus and a smaller torus. Due to the axisymmetry, the problem is formulated on a half plane. A variant of the Lyapunov–Schmidt procedure is developed to reduce the problem to minimizing the energy of the set of two exact tori, as an approximate solution, with respect to their radii. A re-parameterization argument shows that the double tori so obtained indeed solve the equation of mean curvature and Newtonian potential. One also obtains asymptotic formulae for the radii of the tori in terms of the parameter. This double tori set is the first known disconnected solution.

11.
We consider in the present paper the analysis of parameter designs in off‐line quality control. The main objective is to seek levels of the production factors that would minimize the expected loss. Unlike classical analyses which focus on the analysis of the mean and variance in minimizing a quadratic loss function, the proposed method is applicable to a general loss function. An appropriate transformation is first sought to eliminate the dependency of the variance on the mean (to achieve ‘separation’ in the terminology of Box). This is accomplished through a preliminary analysis using a recently proposed parametric heteroscedastic regression model. With the dependency of the variance on the mean eliminated, methods with established properties can be applied to estimate simultaneously the mean and the variance functions in the new metric. The expected loss function is then estimated and minimized based on a distribution-free procedure using the empirical distribution of the standardized residuals. This alleviates the need for a full parametric model, which, if incorrectly specified, may lead to biased results. Although a transformation is employed as an intermediate step of analysis, the loss function is minimized in its original metric. Copyright © 2002 John Wiley & Sons, Ltd.

12.
Summary A Bayesian approach to nonstationary process analysis is proposed. Given a set of data, it is divided into several blocks of the same length, and in each block an autoregressive model is fitted to the data. A constraint on the autoregressive coefficients of the successive blocks is considered. This constraint controls the smoothness of the temporal change of spectrum as shown in Section 2. A smoothness parameter, called a hyperparameter in this article, is determined with the aid of the minimum ABIC (Akaike Bayesian Information Criterion) procedure. Numerical examples of our procedure are also given. The Institute of Statistical Mathematics
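The block-fitting step can be sketched as follows, with AR(1) models fitted by the lag-1 Yule-Walker estimate and the smoothness constraint expressed as a penalty on successive coefficients; the actual procedure fits higher-order models and chooses the hyperparameter by minimum ABIC, which is not reproduced here.

```python
import random

def blockwise_ar1(series, block_len):
    """Fit an AR(1) coefficient in each consecutive block of the series
    via the lag-1 Yule-Walker estimate (lag-1 autocovariance / variance)."""
    coefs = []
    for start in range(0, len(series) - block_len + 1, block_len):
        block = series[start:start + block_len]
        mean = sum(block) / len(block)
        cov0 = sum((x - mean) ** 2 for x in block)
        cov1 = sum((block[i] - mean) * (block[i + 1] - mean)
                   for i in range(len(block) - 1))
        coefs.append(cov1 / cov0)
    return coefs

def roughness(coefs, lam):
    """Smoothness penalty on the temporal change of the AR coefficients."""
    return lam * sum((b - a) ** 2 for a, b in zip(coefs, coefs[1:]))

rng = random.Random(0)
series, x = [], 0.0
for _ in range(1500):
    x = 0.8 * x + rng.gauss(0.0, 1.0)   # stationary AR(1), phi = 0.8
    series.append(x)
coefs = blockwise_ar1(series, 500)
```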

13.
This paper considers an integrated formulation in selecting the best normal mean in the case of unequal and unknown variances. The formulation separates the parameter space into two disjoint parts, the preference zone (PZ) and the indifference zone (IZ). In the PZ we insist on selecting the best for a correct selection (CS1) but in the IZ we define any selected subset to be correct (CS2) if it contains the best population. We find the least favorable configuration (LFC) and the worst configuration (WC) respectively in PZ and IZ. We derive formulas for P(CS1|LFC), P(CS2|WC) and the bounds for the expected sample size E(N). We also give tables for the procedure parameters to implement the proposed procedure. An example is given to illustrate how to apply the procedure and how to use the table.

14.
Summary This paper examines a simple transformation which enables the use of product method in place of ratio method. The convenience with the former, proposed by Murthy [3], is that expressions for bias and mean square error (mse) can be exactly evaluated. The optimum situation in the minimum mse sense and allowable departures from this optimum are indicated. The procedure requires a good guess of a certain parameter, which does not seem very restrictive for practice. Two methods for dealing with the bias of the estimator are mentioned. An extension to use multiauxiliary information is outlined.
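The product estimator referred to above (attributed to Murthy [3]) is itself one line, with X̄ the known population mean of the auxiliary variable:

```python
def product_estimate(y_sample, x_sample, x_pop_mean):
    """Product estimator of the population mean of y:
        yhat_P = ybar * xbar / Xbar,
    where Xbar is the known population mean of the auxiliary variable x.
    It is typically preferred over the ratio estimator when y and x are
    negatively correlated."""
    y_bar = sum(y_sample) / len(y_sample)
    x_bar = sum(x_sample) / len(x_sample)
    return y_bar * x_bar / x_pop_mean
```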

15.
In this paper we consider kernel estimation of a density when the data are contaminated by random noise. More specifically we deal with the problem of how to choose the bandwidth parameter in practice. A theoretical optimal bandwidth is defined as the minimizer of the mean integrated squared error. We propose a bootstrap procedure to estimate this optimal bandwidth, and show its consistency. These results remain valid for the case of no measurement error, and hence also summarize part of the theory of bootstrap bandwidth selection in ordinary kernel density estimation. The finite sample performance of the proposed bootstrap selection procedure is demonstrated with a simulation study. An application to a real data example illustrates the use of the method. This research was supported by ‘Projet d’Actions de Recherche Concertées’ (No. 98/03-217) from the Belgian government. Financial support from the IAP research network nr P5/24 of the Belgian State (Federal Office for Scientific, Technical and Cultural Affairs) is also gratefully acknowledged.

16.
A formal parameter estimation procedure for the two-parameter M-Wright distribution is proposed. Such a procedure is necessary to make the model useful for real-world applications; its generalization of the Gaussian density makes the M-Wright distribution appealing to practitioners. Closed-form estimators are derived from the moments of the log-transformed M-Wright distributed random variable, and are shown to be asymptotically normal. Tests using simulated data indicate favorable results for our estimation procedure.

17.
We consider a method to efficiently evaluate, in a real-time context, an output based on the numerical solution of a partial differential equation depending on a large number of parameters. We state a result that improves the computational performance of a three-step RB–ANOVA–RB method. This is a combination of the reduced basis (RB) method and the analysis of variance (ANOVA) expansion, aiming at compressing the parameter space without affecting the accuracy of the output. The idea of this method is to compute a first (coarse) RB approximation of the output of interest involving all the parameter components, but with a large tolerance on the a posteriori error estimate; then, we evaluate the ANOVA expansion of the output and freeze the least important parameter components; finally, considering a restricted model involving just the retained parameter components, we compute a second (fine) RB approximation with a smaller tolerance on the a posteriori error estimate. The fine RB approximation entails lower computational costs than the coarse one, because of the reduction of parameter dimensionality. Our result provides a criterion to avoid the computation of those terms in the ANOVA expansion that are related to the interaction between parameters in the bilinear form, thus making the RB–ANOVA–RB procedure computationally more feasible.

18.
This paper contains some alternative methods for estimating the parameters in the beta binomial and truncated beta binomial models. These methods are compared with maximum likelihood on the basis of Asymptotic Relative Efficiency (ARE). For the beta binomial distribution a simple estimator based on moments or ratios of factorial moments has high ARE for most of the parameter space and it is an attractive and viable alternative to computing the maximum likelihood estimator. It is also simpler to compute than an estimator based on the mean and zeros, proposed by Chatfield and Goodhart (1970, Appl. Statist., 19, 240–250), and has much higher ARE for most of the parameter space. For the truncated beta binomial, the simple estimator based on two moment relations does not behave quite as well as for the beta binomial distribution, but a simple estimator based on two linear relations involving the first three moments and the frequency of ones has extremely high ARE. Some examples are provided to illustrate the procedure for the two models.
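For the (untruncated) beta binomial, a moment-based estimator has a closed form. The sketch below uses the standard method-of-moments expressions in terms of the first two raw moments; the paper's own estimator (moments or ratios of factorial moments) may differ in detail.

```python
def beta_binomial_mom(m1, m2, n):
    """Method-of-moments estimates (alpha, beta) for the beta-binomial
    distribution with index n, from the first two raw moments
    m1 = E[X] and m2 = E[X^2] (sample moments in practice)."""
    denom = n * (m2 / m1 - m1 - 1.0) + m1
    alpha = (n * m1 - m2) / denom
    beta = (n - m1) * (n - m2 / m1) / denom
    return alpha, beta
```

Feeding in the exact population moments recovers the parameters exactly; for n = 10, alpha = 2, beta = 3 the moments are m1 = 4 and m2 = 22.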

19.
It is a common belief that the Tikhonov scheme with the standard quadratic penalty fails to reconstruct a sparse structure with respect to a given system {ϕ_i}. However, in this paper we present a procedure for sparse recovery which is based entirely on the standard Tikhonov method. The procedure consists of two steps. First, the Tikhonov scheme is used as a sieve to find the coefficients near the ϕ_i that are suspected to be non-zero; within this step the performance of the standard Tikhonov method is controlled in a sparsity-promoting space rather than in the original Hilbert one. In the second step, the coefficients with indices selected in the first step are estimated by means of the data functional strategy. The choice of the regularization parameter is a crucial issue for both steps. We show that a recently developed parameter choice rule, the balancing principle, can be effectively used here. We also present results of computational experiments giving evidence of the reliability of our approach.
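A heavily simplified illustration of the two-step idea, with an identity forward operator and a plain threshold standing in for the sparsity-promoting control and the data functional strategy; both simplifications are assumptions of this sketch, not the paper's actual steps.

```python
def two_step_sparse(y, lam, tau):
    """Two-step sparse recovery sketch with an identity forward operator.

    Step 1 (sieve): the Tikhonov/ridge solution x = y / (1 + lam) is
    thresholded at tau to flag the coefficients suspected to be non-zero.
    Step 2 (refit): coefficients on the selected support are re-estimated
    without the ridge penalty (here simply y_i); the rest are set to zero.
    """
    ridge = [yi / (1.0 + lam) for yi in y]
    support = [i for i, xi in enumerate(ridge) if abs(xi) > tau]
    refit = [y[i] if i in support else 0.0 for i in range(len(y))]
    return refit, support

y = [0.02, 3.01, -0.05, 0.00, -2.20, 0.03]   # sparse signal + small noise
x_hat, support = two_step_sparse(y, lam=0.5, tau=0.5)
```

The point of the refit is that the selection step may shrink heavily, while the second step estimates the surviving coefficients without that bias.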

20.
This paper deals with the problem of estimating the minimum lifetime (guarantee time) of the two parameter exponential distribution through a three-stage sampling procedure. Several forms of loss functions are considered. The regret associated with each loss function is determined. The results in this paper generalize the basic results of Hall (1981, Ann. Statist., 9, 1229–1238).


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号