Similar Documents
20 similar documents found.
1.
M. Argáez  H. Klie  C. Quintero  L. Velázquez  M. Wheeler 《PAMM》2007,7(1):1062507-1062508
We present a hybrid optimization approach for solving automated parameter estimation models. The hybrid approach couples the Simultaneous Perturbation Stochastic Approximation (SPSA) method [1] and a Newton-Krylov Interior-Point method (NKIP) [2] via a surrogate model. The global method SPSA performs a stochastic search to find target regions with low function values. We then generate a surrogate model from the points in these regions, to which the local NKIP method is applied to find an optimal solution. We illustrate the behavior of the hybrid optimization algorithm on one test case. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
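A minimal sketch of the coupling described above, assuming a generic smooth objective: SPSA explores globally, a least-squares quadratic surrogate is fitted to the lowest-value points found, and a local solver refines on the surrogate. BFGS from scipy stands in for the paper's NKIP solver; the gain constants and the quadratic surrogate form are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.optimize import minimize

def spsa_search(f, theta0, n_iter=300, a=0.1, c=0.1, seed=0):
    """Global phase: SPSA stochastic search, recording the visited points."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    visited = []
    for k in range(n_iter):
        ak, ck = a / (k + 1) ** 0.602, c / (k + 1) ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.size)
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
        visited.append((theta.copy(), f(theta)))
    return visited

def quad_features(X):
    # design matrix [1, x, x_i * x_j] for a least-squares quadratic surrogate
    n, d = X.shape
    cross = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack([np.ones(n), X] + cross)

def hybrid_optimize(f, theta0, n_keep=30):
    visited = spsa_search(f, theta0)
    best = sorted(visited, key=lambda p: p[1])[:n_keep]  # low-value region
    X = np.array([p[0] for p in best])
    y = np.array([p[1] for p in best])
    coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    surrogate = lambda x: (quad_features(x[None, :]) @ coef)[0]
    # local phase: BFGS on the surrogate stands in for the NKIP solver;
    # in practice the surrogate step would be safeguarded (trust region)
    return minimize(surrogate, X[0], method="BFGS").x

# usage: a multimodal test function
f = lambda x: np.sum(x ** 2) + 2.0 * np.sum(np.sin(5.0 * x))
print(hybrid_optimize(f, np.array([3.0, -2.0])))
```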

2.
L. Gerencsér  Zs. Vágó  Stacy D. Hill 《PAMM》2007,7(1):1062501-1062502
The basics of SPSA (simultaneous perturbation stochastic approximation), initiated and developed in [1], are described. We point out the advantages of SPSA over the finite difference stochastic approximation (FDSA), or Kiefer-Wolfowitz (KW), method. Its applicability to noise-free optimization and to discrete optimization is also briefly described. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
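The key advantage over FDSA/KW is that the SPSA gradient estimate needs only two function evaluations per iteration, regardless of the dimension p, whereas FDSA needs 2p. A minimal sketch; the gain sequences and the decay exponents 0.602 and 0.101 are commonly cited defaults, assumed here rather than taken from [1]:

```python
import numpy as np

def spsa_minimize(f, theta0, n_iter=1000, a=0.2, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """SPSA: each iteration uses only TWO evaluations of f, independent of
    the dimension p; FDSA/Kiefer-Wolfowitz would use 2p evaluations."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(n_iter):
        ak = a / (k + 1) ** alpha   # step-size gain
        ck = c / (k + 1) ** gamma   # perturbation gain
        delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher vector
        # same scalar difference divided componentwise by the perturbation
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g_hat
    return theta

# usage: 5-dimensional quadratic with minimum at (1, ..., 1)
print(spsa_minimize(lambda x: np.sum((x - 1.0) ** 2), np.zeros(5)))
```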

3.
Patrik Lambert  Rafael E. Banchs 《PAMM》2007,7(1):1062503-1062504
Most statistical machine translation systems are combinations of various models, and tuning the scaling factors is an important step. However, this optimisation problem is hard because the objective function has many local minima and the available algorithms cannot achieve a global optimum. Consequently, optimisations starting from different initial settings can converge to fairly different solutions. We present tuning experiments with the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm and compare them with the widely used downhill simplex method. On IWSLT 2005 Chinese-English data, both methods showed similar performance, but SPSA was more robust to the choice of initial settings. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

4.
The Fisher information matrix summarizes the amount of information in the data relative to the quantities of interest. There are many applications of the information matrix in modeling, systems analysis, and estimation, including confidence region calculation, input design, prediction bounds, and “noninformative” priors for Bayesian analysis. This article reviews some basic principles associated with the information matrix, presents a resampling-based method for computing the information matrix together with some new theory related to efficient implementation, and presents some numerical results. The resampling-based method relies on an efficient technique for estimating the Hessian matrix, introduced as part of the adaptive (“second-order”) form of the simultaneous perturbation stochastic approximation (SPSA) optimization algorithm.
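A hedged sketch of the simultaneous-perturbation Hessian estimate that the resampling-based method builds on: each perturbation yields a symmetrized rank-style estimate, and averaging many of them recovers the Hessian; negating a Hessian estimate of the averaged log-likelihood then gives a Fisher-information estimate. The sketch assumes the score (gradient of the log-likelihood) is computable; when only log-likelihood values are available, a nested SP gradient estimate replaces `score`.

```python
import numpy as np

def sp_hessian_estimate(score, theta, c=1e-4, n_rep=2000, seed=0):
    """Average simultaneous-perturbation Hessian estimates of the function
    whose gradient is `score`. If `score` is the (averaged) model score,
    the negative of the result estimates the Fisher information matrix."""
    rng = np.random.default_rng(seed)
    p = theta.size
    H = np.zeros((p, p))
    for _ in range(n_rep):
        delta = rng.choice([-1.0, 1.0], size=p)      # Rademacher perturbation
        dg = score(theta + c * delta) - score(theta - c * delta)
        G = np.outer(dg / (2 * c), 1.0 / delta)      # one-sample estimate
        H += 0.5 * (G + G.T)                         # symmetrize each sample
    return H / n_rep

# usage sketch: quadratic test function with known Hessian diag(2, 6)
score = lambda th: np.array([2.0 * th[0], 6.0 * th[1]])
print(sp_hessian_estimate(score, np.zeros(2)))
```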

5.
An important model for handling multivariate data is the partially linear single-index regression model with a very flexible distribution, the beta distribution, which is commonly used to model data restricted to open intervals of the line. In this paper, the score test is extended to the partially linear single-index beta regression model. A penalized likelihood estimation based on P-splines is proposed. Based on this estimation, the score test statistic for a varying dispersion parameter is given and its asymptotic property is investigated. Simulated examples are used to illustrate the proposed methods.

6.
7.
The paper deals with the estimation of rare event probabilities in stochastic networks. The well-known variance reduction technique called Importance Sampling (IS) is an effective tool for this. The main idea of IS is to simulate the random system under a modified set of parameters so as to make the occurrence of the rare event more likely. The major problem of the IS technique is that the optimal modified parameters, called reference parameters, are usually very difficult to obtain. Rubinstein (Eur J Oper Res 99:89–112, 1997) developed the Cross Entropy (CE) method to solve this problem, and he and his collaborators then applied it to the estimation of rare event probabilities in stochastic networks with exponential distributions [see De Boer et al. (Ann Oper Res 134:19–67, 2005)]. In this paper, we test this simulation technique on medium-sized stochastic networks and compare its effectiveness with simple crude Monte Carlo (CMC) simulation. The effectiveness of a variance reduction simulation algorithm is measured as follows: we calculate the product of the necessary CPU time and the estimated variance of the estimator, and compare this product with the same quantity for crude Monte Carlo simulation. This measure was originally used for comparing different variance reduction techniques by Hammersley and Handscomb (Monte Carlo Methods. Methuen & Co Ltd, London, 1967). The main result of the paper is the extension of the CE method to the estimation of rare event probabilities in stochastic networks with beta distributions. In this case the calculation of the reference parameters of the importance sampling distribution requires the numerical solution of a nonlinear equation system, which is done by applying a Newton–Raphson iteration scheme; the CPU time spent on calculating the reference parameter values therefore cannot be neglected. Numerical results are also presented. This work was supported by a grant from the Hungarian National Scientific Research Fund, OTKA T047340.
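A minimal sketch of the CE/IS idea for the exponential case, where the reference-parameter update has a closed form (the beta case treated in the paper requires a Newton-Raphson solve instead). The classic five-edge bridge network, the nominal means `u`, and the threshold `gamma` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
u = np.array([0.25, 0.4, 0.1, 0.3, 0.2])  # nominal mean edge lengths (example)
gamma = 2.0                               # rare-event threshold for the path length
N, rho = 10_000, 0.1                      # sample size and elite fraction

def path_len(X):
    # shortest path of the classic 5-edge bridge network
    return np.minimum.reduce([X[:, 0] + X[:, 3], X[:, 0] + X[:, 2] + X[:, 4],
                              X[:, 1] + X[:, 4], X[:, 1] + X[:, 2] + X[:, 3]])

def lik_ratio(X, u, v):
    # f_u(x) / f_v(x) for independent exponentials with means u and v
    return np.prod(v / u * np.exp(-X * (1.0 / u - 1.0 / v)), axis=1)

v = u.copy()
for _ in range(20):
    X = rng.exponential(v, size=(N, 5))
    S = path_len(X)
    gt = min(np.quantile(S, 1 - rho), gamma)   # multilevel threshold update
    W = lik_ratio(X, u, v)
    elite = S >= gt
    v = (W * elite) @ X / (W * elite).sum()    # closed-form CE update
    if gt >= gamma:
        break

# final importance-sampling estimate of the rare-event probability
X = rng.exponential(v, size=(N, 5))
print(np.mean(lik_ratio(X, u, v) * (path_len(X) >= gamma)))
```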

8.
Optimal design of arch dams, including dam-water-foundation rock interaction, is achieved using soft computing techniques. First, the linear dynamic behavior of the arch dam-water-foundation rock system subjected to earthquake ground motion is simulated using the finite element method; then, to reduce the computational cost of the optimization process, a wavelet back-propagation neural network (WBPNN) is designed to predict the arch dam response instead of evaluating it directly with a time-consuming finite element analysis (FEA). To enhance the generality of the neural network, a dam grading technique (DGT) is also introduced. To assess the computational efficiency of the proposed methodology for arch dam optimization, an actual arch dam is considered, and the optimization is implemented via the simultaneous perturbation stochastic approximation (SPSA) algorithm under various conditions of the interaction problem. Numerical results show the merits of the suggested techniques; it is also found that considering the dam-water-foundation rock interaction plays an important role in the safe design of an arch dam.

9.
Using histograms and Q-Q plots, this paper shows visually that the log returns of the Shanghai Composite Index and the Shenzhen Component Index are leptokurtic, heavy-tailed, and skewed. Shapiro-Wilk and Kolmogorov-Smirnov normality tests confirm that the distribution of the log returns differs significantly from the normal distribution, while the hypothesis that the log returns follow a skewed Logistic distribution is accepted at a relatively high probability level. A VaR risk estimate based on the skewed Logistic distribution is then given; the results show that the risk of the Shanghai Composite Index is lower than that of the Shenzhen Component Index.
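A hedged sketch of the workflow: test the log returns for normality, fit a skewed logistic family, and read VaR off a lower quantile. The Student-t sample is a synthetic stand-in for actual index log returns, and scipy's `genlogistic` is one skewed-logistic parametrization, not necessarily the one used in the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r = rng.standard_t(df=4, size=1500) * 0.01   # stand-in for index log returns

# normality checks, as in the paper (K-S against fitted-parameter normal
# is approximate; a Lilliefors correction would be more rigorous)
print("Shapiro-Wilk:", stats.shapiro(r))
print("K-S vs normal:", stats.kstest(r, "norm", args=(r.mean(), r.std())))

# fit a skewed logistic family and report VaR as a positive loss quantile
params = stats.genlogistic.fit(r)
alpha = 0.01
var_99 = -stats.genlogistic.ppf(alpha, *params)
print("99% VaR:", var_99)
```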

10.
Clustering methods have led to a number of important discoveries in bioinformatics and beyond. A major challenge in their use is determining which clusters represent important underlying structure, as opposed to spurious sampling artifacts. This challenge is especially serious, and very few methods are available, when the data are very high in dimension. Statistical significance of clustering (SigClust) is a recently developed cluster evaluation tool for high-dimensional, low-sample-size (HDLSS) data. An important component of the SigClust approach is the very definition of a single cluster as a subset of data sampled from a multivariate Gaussian distribution. The implementation of SigClust requires the estimation of the eigenvalues of the covariance matrix of the null multivariate Gaussian distribution. We show that the original eigenvalue estimation can lead to a test that suffers from severe inflation of Type I error in the important case where there are a few very large eigenvalues. This article addresses this critical challenge using a novel likelihood-based soft thresholding approach to estimate these eigenvalues, which leads to a much improved SigClust. Major improvements in SigClust performance are shown both by mathematical analysis, based on the new notion of the theoretical cluster index (TCI), and by extensive simulation studies. Applications to cancer genomic data further demonstrate the usefulness of these improvements.
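A rough sketch of the SigClust mechanics, assuming sklearn's KMeans for the 2-means cluster index. The eigenvalue regularization below is a crude hard floor at the median eigenvalue, a simplified stand-in for the paper's likelihood-based soft thresholding, which this sketch does not reproduce:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_index(X):
    """2-means cluster index: within-cluster SS over total SS about the mean."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    total = np.sum((X - X.mean(axis=0)) ** 2)
    return km.inertia_ / total

def sigclust_pvalue(X, n_sim=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    lam = np.linalg.eigvalsh(np.cov(X.T))[::-1]   # sample eigenvalues
    sigma2 = np.median(lam)                       # crude background-noise level
    lam_reg = np.maximum(lam, sigma2)             # crude floor, NOT the paper's
                                                  # likelihood-based soft threshold
    ci_obs = cluster_index(X)
    # null: a single Gaussian cluster with the regularized eigenvalues
    ci_null = [cluster_index(rng.normal(0.0, np.sqrt(lam_reg), size=(n, d)))
               for _ in range(n_sim)]
    return np.mean(np.array(ci_null) <= ci_obs)   # smaller CI = stronger clustering

# usage: two well-separated Gaussian groups should give a small p-value
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 10)), rng.normal(4, 1, (40, 10))])
print(sigclust_pvalue(X))
```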

11.
Let (X,Y) denote a random vector with decomposition Y = f(X) + ε, where f(x) = E[Y | X = x] is the regression of Y on X. In this paper we propose a test for the hypothesis that f is a linear combination of given linearly independent regression functions g1,…,gd. The test is based on an estimator of the minimal L2-distance between f and the subspace spanned by the regression functions. More precisely, the method is based on the estimation of certain integrals of the regression function and therefore does not require an explicit estimate of the regression. For this reason the test proposed in this paper does not depend on the subjective choice of a smoothing parameter. Differences between the problem of regression diagnostics in the nonrandom and random design cases are also discussed.
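A sketch of the projection identity behind the "no smoothing parameter" claim, written with respect to the design distribution μ of X; the notation is reconstructed here, not taken from the paper:

```latex
% Minimal L2-distance between f and span{g_1, ..., g_d}:
M^2 \;=\; \min_{\theta \in \mathbb{R}^d}
      \int \Bigl( f(x) - \sum_{j=1}^{d} \theta_j g_j(x) \Bigr)^2 \mu(dx)
\;=\; \int f^2 \, d\mu \;-\; \gamma^{\top} A^{-1} \gamma ,
\qquad A_{ij} = \int g_i g_j \, d\mu , \quad \gamma_j = \int f g_j \, d\mu .
% Since E[\varepsilon \mid X] = 0, each \gamma_j = E[Y\, g_j(X)] and each
% A_{ij} = E[g_i(X) g_j(X)] can be estimated by plain sample means, so no
% nonparametric estimate of f (and hence no smoothing parameter) is needed
% for these terms.
```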

12.
Empirical Bayes test for scale exponential family
In this paper, we consider the empirical Bayes (EB) test problem for the scale parameters in the scale exponential family under a weighted linear loss function. The EB test rules are constructed by the kernel estimation method, and their asymptotic optimality and convergence rates are obtained. The main results are illustrated by applying the proposed test to type II censored data from the exponential distribution and to the test problem for the dispersion parameter in the linear regression model. Translated from Journal of University of Science and Technology of China, 2004, 34(1): 1–10.

13.
The optimization of three problems with high dimensionality and many local minima is investigated under five different optimization algorithms: DIRECT, simulated annealing, Spall's SPSA algorithm, the KNITRO package, and QNSTOP, a new algorithm developed at Indiana University.

14.
Central limit theorem of linear regression model under right censorship
In this paper, the estimation of the joint distribution F(y,z) of (Y,Z) and the estimation in the linear regression model Y = b′Z + ε for complete data are extended to right censored data. The regression parameter estimate of b and the variance estimate of ε are weighted least squares estimates with random weights. Central limit theorems for the estimators are obtained under very weak conditions, and the derived asymptotic variance has a very simple form.

15.
The paper presents smooth estimation of densities utilizing penalized splines. The idea is to represent the unknown density as a convex mixture of basis densities, where the weights are estimated in a penalized form. The proposed method extends the work of Komárek and Lesaffre (Comput Stat Data Anal 52(7):3441–3458, 2008) and allows for general density estimation. Simulations show convincing performance in comparison to existing density estimation routines. The idea is extended to allow the density to depend on a (factorial) covariate. Assuming a binary group indicator, for instance, we can test for equality of the densities in the groups. This provides a smooth alternative to the classical Kolmogorov-Smirnov test or an analysis of variance, and it shows stable and powerful behaviour.
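A minimal sketch of the mixture idea, assuming fixed Gaussian basis densities on a grid, softmax-parametrized convex weights, and a second-order difference penalty on the weight coefficients; the basis count, bandwidth, and penalty weight are ad hoc choices, not the authors':

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def penalized_mixture_density(y, K=25, lam=10.0):
    """Fit a convex mixture of fixed Gaussian basis densities; the penalty on
    second differences of the softmax coefficients enforces smooth weights."""
    centers = np.linspace(y.min(), y.max(), K)
    h = 2 * (centers[1] - centers[0])                    # ad hoc basis bandwidth
    B = stats.norm.pdf(y[:, None], centers[None, :], h)  # n x K basis matrix
    D2 = np.diff(np.eye(K), n=2, axis=0)                 # second-difference operator

    def objective(a):
        w = np.exp(a - a.max()); w /= w.sum()            # softmax -> convex weights
        return -np.sum(np.log(B @ w + 1e-300)) + lam * np.sum((D2 @ a) ** 2)

    a_hat = minimize(objective, np.zeros(K), method="BFGS").x
    w = np.exp(a_hat - a_hat.max()); w /= w.sum()
    return lambda x: stats.norm.pdf(x[:, None], centers[None, :], h) @ w

# usage on a bimodal sample
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
dens = penalized_mixture_density(y)
print(dens(np.array([-2.0, 0.0, 3.0])))
```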

16.
Generalised varying-coefficient models (GVC) are very important models, and a considerable body of literature addresses them. However, most of the existing literature is devoted to estimation procedures. In this paper, we systematically investigate statistical inference for GVC, including confidence bands as well as hypothesis tests. We establish the asymptotic distribution of the maximum discrepancy between the estimated functional coefficient and the true functional coefficient, and we compare different approaches to the construction of confidence bands and hypothesis tests. Finally, the proposed statistical inference methods are used to analyse data on contraceptive use in China, which leads to some interesting findings.

17.
In order to achieve accurate estimation of the state of charge (SOC) of the battery in a hybrid electric vehicle (HEV), this paper proposes a new estimation model based on the classification and regression tree (CART), a kind of decision tree. The basic principle and the modelling process of the CART decision tree are introduced in detail, and the voltage, current, and temperature of the battery in an HEV are used to estimate the value of SOC over the driving cycle, taking the energy feedback of the HEV under regenerative braking into account. Simulation data and experimental data are used to test the effectiveness of the CART estimation model; the results indicate that the proposed model has high accuracy, with a relative error within 0.035 in simulation and less than 0.05 in experiment.
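A hedged sketch of the CART idea using sklearn's DecisionTreeRegressor; the synthetic voltage/current/temperature-to-SOC relation below is invented purely for illustration and does not reflect the paper's battery data or driving cycle:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

# synthetic stand-in for logged HEV data: (voltage, current, temperature) -> SOC
rng = np.random.default_rng(0)
n = 5000
V = rng.uniform(3.0, 4.2, n)        # cell voltage [V]
I = rng.uniform(-50.0, 50.0, n)     # current [A]; negative = regenerative braking
T = rng.uniform(0.0, 45.0, n)       # temperature [deg C]
soc = np.clip(0.9 * (V - 3.0) / 1.2 - 0.001 * I + 0.002 * (T - 25.0)
              + rng.normal(0, 0.01, n), 0.0, 1.0)  # invented SOC relation

X = np.column_stack([V, I, T])
X_tr, X_te, y_tr, y_te = train_test_split(X, soc, test_size=0.2, random_state=0)
cart = DecisionTreeRegressor(max_depth=8, min_samples_leaf=20).fit(X_tr, y_tr)
print("max abs SOC error:", np.abs(cart.predict(X_te) - y_te).max())
```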

18.
This paper is devoted to the estimation of parameters for a noisy sum of two real exponential functions. Singular Spectrum Analysis is used to extract the signal subspace, and the ESPRIT method, which exploits signal-subspace features, is then applied to obtain estimates of the desired exponential rates. The dependence of estimation quality on the signal eigenvalues is investigated, and a special design to test this relation is elaborated. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
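A minimal sketch of the SSA-then-ESPRIT pipeline for two real exponentials: build the trajectory (Hankel) matrix, take its rank-2 singular subspace, and use the shift invariance of that subspace to recover the poles. The series length, window length, rates, and noise level are illustrative assumptions:

```python
import numpy as np

# signal: a noisy sum of two real exponentials
rng = np.random.default_rng(0)
n, L = 200, 100                       # series length and SSA window length
t = np.arange(n)
x = np.exp(-0.02 * t) + 0.8 * np.exp(-0.05 * t) + rng.normal(0, 1e-4, n)

# SSA step: trajectory (Hankel) matrix and its leading singular subspace
H = np.lib.stride_tricks.sliding_window_view(x, L).T   # L x (n - L + 1)
U, s, _ = np.linalg.svd(H, full_matrices=False)
Us = U[:, :2]                         # signal subspace, rank = number of terms

# ESPRIT step: shift invariance of the signal subspace yields the poles
Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]
poles = np.linalg.eigvals(Phi)
print("estimated rates:", np.sort(np.log(np.abs(poles))))   # ~ [-0.05, -0.02]
```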

19.
A permutation (or randomization) test is a nonparametric test in which the null distribution of the test statistic, i.e. its distribution under the null hypothesis of no relationship or no effect, is obtained by calculating the values of the test statistic over all permutations of the observed dataset (or over a large number of random permutations). The power of a permutation test evaluated on the observed dataset is called the conditional power. In this paper, the conditional power of permutation tests is reviewed and the use of the conditional power function for sample size estimation is investigated. Moreover, reproducibility and generalizability probabilities are defined, and their use for sample size adjustment is shown. Finally, an illustrative example is given.
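A minimal sketch of a two-sample permutation test for a difference in means; the null distribution is built from random relabelings of the pooled data, i.e. a random subset of all permutations:

```python
import numpy as np

def perm_test(x, y, n_perm=10_000, seed=0):
    """Two-sample permutation test: the null distribution of the mean
    difference comes from randomly re-assigning group labels."""
    rng = np.random.default_rng(seed)
    obs = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    null = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(pooled)
        null[b] = perm[:x.size].mean() - perm[x.size:].mean()
    return np.mean(np.abs(null) >= abs(obs))   # two-sided p-value

# usage: a modest true shift between the groups
rng = np.random.default_rng(1)
print(perm_test(rng.normal(0.5, 1, 30), rng.normal(0.0, 1, 30)))
```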

20.
We explore the performance of sample average approximation in comparison with several other methods for stochastic optimization: (a) bagging; (b) kernel density estimation; (c) maximum likelihood estimation; and (d) a Bayesian approach. We use two test sets: first, a set of quadratic objective functions allowing different types of interaction between the random component and the univariate decision variable; and second, a set of portfolio optimization problems. We make recommendations for effective approaches.
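A minimal sketch of sample average approximation: the expectation over the random component is replaced by an average over a fixed Monte Carlo sample, and the resulting deterministic problem is optimized. The quadratic objective with a univariate decision variable and the scenario model are illustrative assumptions in the spirit of the first test set:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
xi = rng.normal(1.0, 0.5, size=1000)   # fixed scenario sample (assumed model)

# objective with interaction between the random component and the decision x
F = lambda x, xi: (x - xi) ** 2 + 0.1 * xi * x

# SAA: optimize the sample average instead of the true expectation
saa_objective = lambda x: np.mean(F(x, xi))
print("SAA solution:", minimize_scalar(saa_objective).x)
```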
