Similar Literature
20 similar documents found (search time: 62 ms)
1.
Wang Zhaojun. 《经济数学》 (Mathematics in Economics), 2001, 18(2): 23-31
This paper first proposes generalized uniform design sampling and gives a method based on it for finding the extrema of a function over a semi-continuous, semi-discrete region. The method is then applied to the Hong Kong stock market to obtain the optimal parameter combination for the Relative Strength Index (RSI), a technical-analysis indicator.
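The abstract does not restate the indicator's definition, but the Relative Strength Index it optimizes is a standard formula. A minimal sketch (using simple averages over the first n price changes, rather than Wilder's exponential smoothing):

```python
def rsi(prices, n=14):
    """Relative Strength Index over the first n price changes.

    RSI = 100 - 100 / (1 + RS), where RS = average gain / average loss.
    """
    gains, losses = [], []
    for prev, cur in zip(prices, prices[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:n]) / n
    avg_loss = sum(losses[:n]) / n
    if avg_loss == 0:
        return 100.0  # all moves were gains
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

In a study like this one, the tuned "parameter combination" would be the lookback n together with the overbought/oversold thresholds used to generate trading signals.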

2.
This paper gives a method for constructing low-discrepancy OALH designs from uniform designs and orthogonal arrays. The resulting designs combine good uniformity with the balance of orthogonal designs; a further important advantage is that design point sets of large sample size can be constructed. Uniform design tables for certain parameters are also given; these designs improve on existing uniform designs and are of practical value.

3.
On the uniformity of U* uniform designs   Cited by: 4 (self-citations: 0, other citations: 4)
Based on the equivalence criteria for the uniformity of uniform designs in the geometric and physical senses, this paper studies the uniformity of U* uniform designs. Using these results, the computational cost of constructing the usage table of a U* uniform design can be reduced to s/2^(s-1) of the original (where s is the number of factors). Moreover, when the number of factors is φ(n+1)/2 or φ(n+1)/2 − 1, the usage table can be given directly without any evaluation of uniformity. The paper also shows, from a geometric viewpoint, that a U* uniform design can accommodate at most φ(n+1)/2 factors.

4.
This paper first uses the discrepancy formula of Bundschuh and Zhu Yaochen (1993) to construct certain non-balanced uniform designs and the corresponding uniform design tables. It then combines randomized uniform designs with moving averages to obtain the optimal parameter combination for the moving average line.

5.
Optimal parameter combinations for the moving average line   Cited by: 3 (self-citations: 0, other citations: 3)
This paper first simplifies the discrepancy formula of [1] and uses the simplified formula to give some new uniform design tables and some non-balanced uniform design tables. It then proposes generalized uniform design sampling. Finally, randomized uniform designs and generalized uniform design sampling are applied to the moving average line, yielding its optimal parameter combination as well as the optimal parameter combination for an improved moving average line.
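For context, the "moving average line" strategy being tuned in these papers trades on crossings of a short and a long simple moving average; the window pair (short, long) is the parameter combination searched over. A minimal sketch (plain SMAs; the window lengths in the usage below are illustrative, not the papers' optimal values):

```python
def sma(prices, w):
    """Simple moving average with window w (one value per full window)."""
    return [sum(prices[i - w + 1:i + 1]) / w for i in range(w - 1, len(prices))]

def crossover_signals(prices, short, long):
    """+1 when the short MA crosses above the long MA, -1 on a cross below."""
    s, l = sma(prices, short), sma(prices, long)
    s = s[len(s) - len(l):]  # align the two series on their common tail
    signals = []
    for i in range(1, len(l)):
        if s[i - 1] <= l[i - 1] and s[i] > l[i]:
            signals.append((i, +1))
        elif s[i - 1] >= l[i - 1] and s[i] < l[i]:
            signals.append((i, -1))
    return signals
```

For example, on a price series that rises then falls, a (2, 3) window pair emits a single sell signal near the peak.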

6.
As is well known, the genetic algorithm is a directed random search technique whose guiding principle is to steer the search toward the "family" descended from high-fitness schemata. Starting from this observation, the crossover operation of the genetic algorithm is redesigned using the theory and methods of uniform design sampling, yielding a new GA called the uniform design sampling genetic algorithm. Finally, the new algorithm is applied to the knapsack problem and compared with the simple genetic algorithm…

7.
A randomized uniform design genetic algorithm   Cited by: 1 (self-citations: 0, other citations: 1)
As is well known, the genetic algorithm is a directed random search technique whose guiding principle is to steer the search toward the "family" descended from high-fitness schemata. Starting from this observation, the crossover operation of the genetic algorithm is redesigned using the theory and methods of randomized uniform design, yielding a new GA called the randomized uniform design genetic algorithm. Finally, the new algorithm is applied to function optimization problems and compared with the simple genetic algorithm and the good-point-set genetic algorithm. The simulation comparisons show that the new algorithm not only improves speed and accuracy but also avoids the premature convergence common to other methods.
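The abstracts above do not spell out the crossover construction. One common way to realize a uniform-design crossover is to place offspring at the points of a good-lattice-point design inside the hyper-rectangle spanned by two parents; the sketch below follows that idea, with a simple generating-vector heuristic that is ours, not the papers':

```python
import math

def good_lattice_points(n, s):
    """U-type design table via the good-lattice-point method:
    u_ij = (i * h_j) mod n + 1, with each h_j coprime to n."""
    coprime = [h for h in range(1, n) if math.gcd(h, n) == 1]
    h = coprime[:s]  # heuristic choice of generating vector
    return [[(i * hj) % n + 1 for hj in h] for i in range(1, n + 1)]

def ud_crossover(p, q, n):
    """Spread n offspring uniformly over the box spanned by parents p, q."""
    s = len(p)
    lo = [min(a, b) for a, b in zip(p, q)]
    hi = [max(a, b) for a, b in zip(p, q)]
    kids = []
    for row in good_lattice_points(n, s):
        # map level u in {1..n} to the interior of [lo, hi]
        kids.append([lo[j] + (row[j] - 0.5) / n * (hi[j] - lo[j])
                     for j in range(s)])
    return kids
```

The fittest of the n offspring would then replace the parents, which is what gives the search its "directed" character.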

8.
Uniform design and its applications (II)   Cited by: 3 (self-citations: 0, other citations: 3)
Uniform Design and Its Applications (II), by Fang Kaitai. Chapter 2: Applications of uniform design. Because uniform design requires few experimental runs, the main effects and interactions of the factors cannot be estimated directly; instead, a regression model is built to relate the response (Y) to the factors, and this relationship is used to search for optimal process conditions or an optimal formulation. Experimental design usually has two main purposes, …

9.
1. Introduction. Parameter design is an optimization method established by Dr. Genichi Taguchi of Japan. Its main content is to find the best combination of parameters so as to improve the stability of product performance. Uniform design was jointly proposed by Professors Wang Yuan and Fang Kaitai of China. Its key idea is to apply number-theoretic methods to experimental design: taking uniform scattering as the criterion, the experimental points are spread uniformly over the experimental region so that each point is well representative. As a result, the points of a uniform design are more evenly distributed than those of an orthogonal design, and because "neat comparability" is no longer required, the number of runs, and hence the cost of experimentation, is greatly reduced.
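The "uniform scattering" criterion mentioned above is usually made precise by a discrepancy measure. A sketch of the centered L2-discrepancy (Hickernell's closed-form expression) for design points scaled into [0, 1]^s:

```python
def centered_l2_discrepancy(X):
    """Centered L2-discrepancy of points X (list of s-dim points in [0,1]^s).

    CD^2 = (13/12)^s
           - (2/n) * sum_k prod_j [1 + |x_kj-1/2|/2 - |x_kj-1/2|^2/2]
           + (1/n^2) * sum_k sum_l prod_j
               [1 + |x_kj-1/2|/2 + |x_lj-1/2|/2 - |x_kj-x_lj|/2]
    Lower values mean a more uniformly scattered point set.
    """
    n, s = len(X), len(X[0])
    term1 = (13.0 / 12.0) ** s
    term2 = 0.0
    for x in X:
        prod = 1.0
        for xj in x:
            d = abs(xj - 0.5)
            prod *= 1.0 + 0.5 * d - 0.5 * d * d
        term2 += prod
    term2 *= 2.0 / n
    term3 = 0.0
    for x in X:
        for y in X:
            prod = 1.0
            for xj, yj in zip(x, y):
                prod *= (1.0 + 0.5 * abs(xj - 0.5) + 0.5 * abs(yj - 0.5)
                         - 0.5 * abs(xj - yj))
            term3 += prod
    term3 /= n * n
    return (term1 - term2 + term3) ** 0.5
```

Evenly spread points score lower than clumped ones, which is exactly the sense in which uniform-design points "represent" the experimental region better.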

10.
Construction of mixed-level uniform designs   Cited by: 2 (self-citations: 0, other citations: 2)
Qin Hong. 《应用数学学报》 (Acta Mathematicae Applicatae Sinica), 2005, 28(4): 704-712
We use the discrete discrepancy to measure the uniformity of fractional factorial designs. The purpose of this paper is to find methods for constructing mixed-level uniform designs that are simpler and computationally cheaper than those in the literature. We obtain a lower bound for the discrete discrepancy; if the discrete discrepancy of a U-type design attains this bound, the design is a uniform design. We establish a connection between uniform designs and the uniformly resolvable designs of combinatorial design theory, propose new construction methods for uniform designs via uniformly resolvable designs, and give many infinite classes in which uniform designs exist.

11.
We present a new parametric model for the angular measure of a multivariate extreme value distribution. Unlike many parametric models that are limited to the bivariate case, the flexible model can describe the extremes of random vectors of dimension greater than two. The novel construction method relies on a geometric interpretation of the requirements of a valid angular measure. An advantage of this model is that its parameters directly affect the level of dependence between each pair of components of the random vector, and as such the parameters of the model are more interpretable than those of earlier parametric models for multivariate extremes. The model is applied to air quality data and simulated spatial data.

12.
Summary  Let X ∼ N_p(μ, σ²I_p) and let s/σ² ∼ χ²_n, independent of X, where μ and σ² are unknown. This paper considers the estimation of μ (by δ) relative to the convex loss function (δ−μ)′[(1−α)I_p/σ² + αQ](δ−μ)/[(1−α)p/σ² + α tr(Q)], where Q is a known p×p diagonal matrix and 0 ≤ α ≤ 1. Two classes of minimax estimators are obtained for μ when p ≥ 3; the first is a new result and the second is a generalization of a result of Strawderman (1973, Ann. Statist., 1, 1189-1194). A proper Bayes estimator is also obtained and is shown to satisfy the conditions of the second class of minimax estimators. The paper concludes by discussing the estimation of μ relative to another convex loss function. This work was supported by the Army, Navy and Air Force under Office of Naval Research Contract No. N00014-80-C-0093. Reproduction in whole or in part is permitted for any purpose of the United States Government.

13.
We propose a unified strategy for estimator construction, selection, and performance assessment in the presence of censoring. This approach is entirely driven by the choice of a loss function for the full (uncensored) data structure and can be stated in terms of the following three main steps. (1) First, define the parameter of interest as the minimizer of the expected loss, or risk, for a full data loss function chosen to represent the desired measure of performance. Map the full data loss function into an observed (censored) data loss function having the same expected value and leading to an efficient estimator of this risk. (2) Next, construct candidate estimators based on the loss function for the observed data. (3) Then, apply cross-validation to estimate risk based on the observed data loss function and to select an optimal estimator among the candidates. A number of common estimation procedures follow this approach in the full data situation, but depart from it when faced with the obstacle of evaluating the loss function for censored observations. Here, we argue that one can, and should, also adhere to this estimation road map in censored data situations.Tree-based methods, where the candidate estimators in Step 2 are generated by recursive binary partitioning of a suitably defined covariate space, provide a striking example of the chasm between estimation procedures for full data and censored data (e.g., regression trees as in CART for uncensored data and adaptations to censored data). Common approaches for regression trees bypass the risk estimation problem for censored outcomes by altering the node splitting and tree pruning criteria in manners that are specific to right-censored data. This article describes an application of our unified methodology to tree-based estimation with censored data. 
The approach encompasses univariate outcome prediction, multivariate outcome prediction, and density estimation, simply by defining a suitable loss function for each of these problems. The proposed method for tree-based estimation with censoring is evaluated using a simulation study and the analysis of CGH copy number and survival data from breast cancer patients.
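The "observed-data loss function having the same expected value" in Step (1) is often realized via inverse-probability-of-censoring weighting (IPCW). A minimal sketch for squared-error loss, assuming the censoring survival function G is supplied by the caller (in practice it would be pre-estimated, e.g., by a Kaplan-Meier fit to the censoring times):

```python
def ipcw_risk(times, events, preds, G):
    """IPCW estimate of the full-data squared-error risk.

    times  -- observed (possibly censored) outcomes
    events -- 1 if the outcome is uncensored, 0 if censored
    preds  -- model predictions, one per observation
    G      -- censoring survival function, G(t) = P(C > t)

    Only uncensored observations contribute, each reweighted by 1/G(t);
    this makes the observed-data loss unbiased for the full-data risk.
    """
    n = len(times)
    total = 0.0
    for t, d, p in zip(times, events, preds):
        if d:
            total += (t - p) ** 2 / G(t)
    return total / n
```

With no censoring (all events observed, G ≡ 1) this reduces to the ordinary mean squared error, so the same cross-validation machinery of Step (3) applies unchanged to censored and uncensored data.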

14.
The multivariate probit model is very useful for analyzing correlated multivariate dichotomous data. Recently, this model has been generalized with a confirmatory factor analysis structure for accommodating more general covariance structure, and it is called the MPCFA model. The main purpose of this paper is to consider local influence analysis, which is a well-recognized important step of data analysis beyond the maximum likelihood estimation, of the MPCFA model. As the observed-data likelihood associated with the MPCFA model is intractable, the famous Cook's approach cannot be applied to achieve local influence measures. Hence, the local influence measures are developed via Zhu and Lee's [Local influence for incomplete data model, J. Roy. Statist. Soc. Ser. B 63 (2001) 111-126.] approach that is closely related to the EM algorithm. The diagnostic measures are derived from the conformal normal curvature of an appropriate function. The building blocks are computed via a sufficiently large random sample of the latent response strengths and latent variables that are generated by the Gibbs sampler. Some useful perturbation schemes are discussed. Results that are obtained from analyses of an artificial example and a real example are presented to illustrate the newly developed methodology.

15.
Logic Regression is an adaptive regression methodology mainly developed to explore high-order interactions in genomic data. Logic Regression is intended for situations where most of the covariates in the data to be analyzed are binary. The goal of Logic Regression is to find predictors that are Boolean (logical) combinations of the original predictors. In this article, we give an overview of the methodology and discuss some applications. We also describe the software for Logic Regression, which is available as an R and S-Plus package.
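As a toy illustration of the idea (not the actual LogicReg search, which uses simulated annealing over logic trees), one can exhaustively score all pairwise AND/OR combinations of binary covariates against a binary response:

```python
from itertools import combinations

def best_pairwise_logic(X, y):
    """Among all AND/OR combinations of pairs of binary covariates,
    return the one that best agrees with the binary response y.

    X -- list of rows of 0/1 covariates; y -- list of 0/1 responses.
    Returns (agreement_rate, operator_name, i, j).
    """
    n, p = len(X), len(X[0])
    best = None
    for i, j in combinations(range(p), 2):
        for name, op in (("AND", lambda a, b: a & b),
                         ("OR", lambda a, b: a | b)):
            z = [op(row[i], row[j]) for row in X]
            score = sum(zi == yi for zi, yi in zip(z, y)) / n
            cand = (score, name, i, j)
            best = cand if best is None else max(best, cand)
    return best
```

Real Logic Regression searches a much richer space of logic trees of arbitrary depth, but the scoring principle, fit a model to a Boolean combination and compare fits, is the same.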

16.
Summary  Formulae for the variance of the difference between two estimated responses are derived for m-grouped first-order and 2-grouped second- and third-order cylindrically rotatable designs of type 3.

17.
Summary  Some new third-order rotatable designs in three dimensions are derived from some of the available third-order rotatable designs in two dimensions. When these designs are used, the results of experiments performed according to the two-dimensional designs need not be discarded. Some of these designs may be performed sequentially in all three factors, starting with a one-dimensional design. Furthermore, these third-order rotatable designs require fewer points than most of the available three-dimensional third-order rotatable designs.

18.
The known mathematical properties of the multivariate t distribution are reviewed. We believe that this review will serve as an important reference and encourage further research activities in the area.

19.
The problem of estimating linear functionals based on Gaussian observations is considered. Probabilistic error is used as a measure of accuracy and attention is focused on the construction of adaptive estimators which are simultaneously near optimal under probabilistic error over a collection of convex parameter spaces. In contrast to mean squared error it is shown that fully rate optimal adaptive estimators can be constructed for probabilistic error. A general construction of such estimators is provided and examples are given to illustrate the general theory.

20.
In this article we investigate the nonparametric estimation of the jump density of a compound Poisson process from discrete observation of one trajectory over [0, T]. We consider the case where the sampling rate Δ = Δ_T → 0 as T → ∞. We propose an adaptive wavelet threshold density estimator and study its performance for L^p losses, p ≥ 1, over Besov spaces. The main novelty is that we achieve minimax rates of convergence for sampling rates Δ_T that vanish slowly. The estimation procedure is based on explicit inversion of the operator giving the law of the increments as a nonlinear transformation of the jump density.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号