Similar Literature
20 similar records found
1.
In the general problem of Parametric Point Estimation, the Mean Squared Error often appears as a useful measure of the goodness, or closeness, of estimates. Nevertheless, only in very rare cases does an estimator with smallest Mean Squared Error exist; Statistical Inference nonetheless provides a variety of methods to find estimates that are usually characterized by a small Mean Squared Error. When the observation of outcomes from the probabilistic information system or experiment concerning the estimation problem involves fuzzy imprecision, so that the observable events are described by means of fuzzy events on the sample space, the use of Zadeh's probabilistic definition allows us to extend the Mean Squared Error immediately. In the present paper we verify that the presence of fuzziness in experimental data entails a variation in that measure of goodness of estimation. On the basis of this assertion, the problem of selecting a suitable sample size, in order to remove the variation in the Mean Squared Error due to fuzziness or to estimate the parameter with a specified degree of precision, is then discussed.
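The sample-size logic sketched in this abstract can be illustrated in the crisp (non-fuzzy) baseline case. The sketch below does not reproduce the paper's Zadeh-style fuzzy extension; it simply computes the MSE of the sample mean and the smallest sample size meeting a target MSE, with the variance and target values chosen purely for illustration.

```python
import numpy as np

def mse_of_sample_mean(sigma2: float, n: int) -> float:
    """MSE of the sample mean of n i.i.d. observations with variance sigma2.
    (The sample mean is unbiased, so MSE = variance = sigma2 / n.)"""
    return sigma2 / n

def sample_size_for_target_mse(sigma2: float, target_mse: float) -> int:
    """Smallest n such that the MSE of the sample mean <= target_mse."""
    return int(np.ceil(sigma2 / target_mse))

# Example: population variance 4.0, desired precision (MSE) 0.05
n_req = sample_size_for_target_mse(4.0, 0.05)
print(n_req)  # 80
```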

2.
This paper establishes asymptotic lower bounds which specify, in a variety of contexts, how well (in terms of relative rate of convergence) one may select the bandwidth of a kernel density estimator. These results provide important new insights concerning how the bandwidth selection problem should be considered. In particular, it is shown that if the error criterion is Integrated Squared Error (ISE) then, even under very strong assumptions on the underlying density, the relative error of the selected bandwidth cannot be reduced below order n^{-1/10} (as the sample size grows). This very large error indicates that any technique which aims specifically to minimize ISE will be subject to serious practical difficulties arising from sampling fluctuations. Cross-validation exhibits this very slow convergence rate, and does suffer from unacceptably large sampling variation. On the other hand, if the error criterion is Mean Integrated Squared Error (MISE), then the relative error of bandwidth selection can be reduced to order n^{-1/2} when enough smoothness is assumed. Therefore bandwidth selection techniques which aim to minimize MISE can be much more stable, and less sensitive to small sampling fluctuations, than those which try to minimize ISE. We feel this indicates that performance in minimizing MISE, rather than ISE, should become the benchmark for measuring performance of bandwidth selection methods. [Research partially supported by National Science Foundation Grants DMS-8701201 and DMS-8902973. Research of the first author was done while on leave from the Australian National University.]
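Cross-validation, mentioned above as an ISE-driven selector, can be sketched concretely. The following is a minimal least-squares cross-validation (LSCV) bandwidth search for a Gaussian-kernel density estimator, using a simple grid search; it illustrates the kind of selector whose slow relative convergence the paper analyzes, not the paper's own derivations.

```python
import numpy as np

def lscv_score(h, x):
    """Least-squares cross-validation criterion for a Gaussian-kernel KDE.
    Estimates ISE(h) up to a constant: the integral of fhat^2 minus
    (2/n) * sum of leave-one-out density estimates at the data points."""
    n = len(x)
    d = x[:, None] - x[None, :]                      # pairwise differences
    # closed-form integral of fhat^2 for the Gaussian kernel
    int_f2 = np.exp(-d**2 / (4 * h**2)).sum() / (2 * np.sqrt(np.pi) * n**2 * h)
    # leave-one-out term: exclude the diagonal (j == i)
    k = np.exp(-d**2 / (2 * h**2)) / np.sqrt(2 * np.pi)
    loo = (k.sum() - k.trace()) / ((n - 1) * n * h)
    return int_f2 - 2 * loo

rng = np.random.default_rng(0)
x = rng.normal(size=200)
grid = np.linspace(0.05, 1.0, 60)
h_star = grid[np.argmin([lscv_score(h, x) for h in grid])]
print(h_star)
```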

3.
We consider the fixed design regression model Y_i = g(t_i) + ξ_i, i = 1, …, n, where the ξ_i are (not necessarily i.i.d.) random variables, the t_i constitute the design points at which nonrepeatable measurements are to be taken, and the Y_i are the observations from which g and its derivatives are to be estimated. The dependence of the Integrated Mean Squared Error of two different types of kernel estimates on the design {t_1, …, t_n} is established. This allows the derivation of asymptotically optimal designs.
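As an illustration of a kernel estimate in this fixed design model (the abstract does not specify which two estimator types the paper compares), here is a minimal Priestley-Chao-type sketch in which the design spacings act as quadrature weights; the Gaussian kernel and the bandwidth are assumptions.

```python
import numpy as np

def priestley_chao(t_grid, t, y, h):
    """Priestley-Chao kernel estimate of g at the points in t_grid,
    for a fixed design t[0] < t[1] < ... < t[n-1] with responses y.
    Gaussian kernel; spacings (t_i - t_{i-1}) act as quadrature weights."""
    t, y = np.asarray(t), np.asarray(y)
    w = np.diff(t, prepend=t[0] - (t[1] - t[0]))     # design spacings
    u = (t_grid[:, None] - t[None, :]) / h
    kern = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return (kern * (w * y)[None, :]).sum(axis=1) / h

n = 100
t = np.linspace(0.0, 1.0, n)
rng = np.random.default_rng(1)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.2, size=n)
g_hat = priestley_chao(np.linspace(0.1, 0.9, 9), t, y, h=0.05)
print(g_hat)
```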

4.
We study two estimators of the long-range parameter of a covariance stationary linear process. We show that one of the estimators achieves the optimal semiparametric rate of convergence, whereas the other has a rate of convergence as close as desired to the optimal rate. Moreover, we show that the estimators are asymptotically normal with a variance that does not depend on any unknown parameter and is smaller than that of other estimators suggested in the literature. Finally, a small Monte Carlo study is included to illustrate the finite sample relative performance of our estimators compared to other suggested semiparametric estimators. More specifically, the Monte Carlo experiment shows the superiority of the proposed estimators in terms of the Mean Squared Error. [The first author's research was funded by the Economic and Social Research Council (ESRC), reference number R000238212. The second author's research was funded by the Ministry of Education, Culture, Sports, Science and Technology of Japan, reference numbers 09CE2002 and B(2)10202202.]
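The abstract does not specify the two estimators, so as a hedged illustration of semiparametric estimation of the long-range parameter, here is the classical GPH log-periodogram regression; the bandwidth choice m = sqrt(n) is a common but assumed default.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the long-memory parameter d
    of a covariance stationary series x, using the first m Fourier
    frequencies. A classical illustration, not the paper's estimators."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                      # common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n  # low Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    periodogram = np.abs(dft) ** 2 / (2 * np.pi * n)
    regressor = -2 * np.log(2 * np.sin(lam / 2))
    # OLS of log I(lam_j) on the regressor; the slope estimates d
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    return beta[1]

rng = np.random.default_rng(2)
print(gph_estimate(rng.normal(size=2048)))  # near 0 for white noise
```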

5.
Continuous- and discrete-time Linear Quadratic Regulator (LQR) theory is used in this paper for the design of optimal analog and discrete PID controllers, respectively. The PID controller gains are formulated as the optimal state-feedback gains corresponding to the standard quadratic cost function involving the state variables and the controller effort. A real-coded Genetic Algorithm (GA) is then used to find the optimal weighting matrices associated with the respective optimal state-feedback regulator design, while minimizing another time-domain integral performance index comprising a weighted sum of the Integral of Time multiplied Squared Error (ITSE) and the controller effort. The proposed methodology is extended to a new kind of fractional order (FO) integral performance index. The impact of a fractional order (i.e., any arbitrary real order) cost function on the LQR-tuned PID control loops is highlighted in the present work, along with the achievable cost of control. Guidelines for the choice of the integral order of the performance index are given, depending on the characteristics of the process to be controlled.
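The core LQR step described above, computing optimal state-feedback gains from given weighting matrices, can be sketched as follows. The plant and the weights are illustrative assumptions; the paper's GA search over the weighting matrices and the mapping of state feedback to PID gains are not reproduced.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator plant (illustrative, not the paper's process model)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting matrix (a GA would tune these)
R = np.array([[0.1]])      # control-effort weighting

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -K x
print(K)
```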

6.
The utilization of multiple-fidelity simulators for the design and analysis of computer experiments has received increased attention in recent years. In this paper, we study the contour estimation problem for complex systems by considering two fidelity simulators. Our goal is to design a methodology for choosing the best-suited simulator and input location for each simulation trial, so that the overall estimation of the desired contour can be as good as possible under limited simulation resources. The proposed methodology is sequential and based on the construction of a Gaussian process surrogate for the output measure of interest. We illustrate the methodology on a canonical queueing system and evaluate its efficiency via a simulation study.
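A single-fidelity sketch of sequential contour estimation with a Gaussian process surrogate is given below, using the well-known "straddle" score to pick the next input; the paper's rule for additionally choosing between the two fidelity levels is not reproduced, and the toy simulator, kernel and contour level are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                        # stand-in for an expensive simulator
    return np.sin(3 * x) + 0.5 * x

T = 0.5                          # contour level of interest
rng = np.random.default_rng(3)
X = rng.uniform(0, 2, size=(5, 1))
y = f(X).ravel()

for _ in range(15):              # sequential design loop
    gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-6).fit(X, y)
    cand = np.linspace(0, 2, 200)[:, None]
    mu, sd = gp.predict(cand, return_std=True)
    # "straddle" score: large near the contour and where uncertainty is high
    score = 1.96 * sd - np.abs(mu - T)
    x_new = cand[np.argmax(score)]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new))
```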

7.
Many properties of nanostructures depend on the atomic configuration at the surface. One common technique used for determining this surface structure is based on the low energy electron diffraction (LEED) method, which uses a high-fidelity physics model to compare experimental results with spectra computed via a computer simulation. While this approach is highly effective, the computational cost of the simulations can be prohibitive for large systems. In this work, we propose the use of a direct search method in conjunction with an additive surrogate. This surrogate is constructed from a combination of a simplified physics model and an interpolation based on the differences between the simplified physics model and the full-fidelity model.
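A minimal version of the additive surrogate described above, with toy stand-ins for the LEED simulation, might look as follows: the discrepancy between the two models at a few sample points is interpolated (here with radial basis functions, an assumed choice) and added to the cheap model elsewhere.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def hi_fi(x):                         # expensive full-physics model (stand-in)
    return np.sin(5 * x) + x**2

def lo_fi(x):                         # cheap simplified-physics model
    return x**2

# Evaluate both models at a small set of sample points
xs = np.linspace(0, 1, 8)[:, None]
diff = hi_fi(xs).ravel() - lo_fi(xs).ravel()
correction = RBFInterpolator(xs, diff)   # interpolate the discrepancy

def surrogate(x):
    """Additive surrogate: simplified physics + interpolated discrepancy."""
    x = np.atleast_2d(x)
    return lo_fi(x).ravel() + correction(x)

print(surrogate([[0.37]]), hi_fi(np.array([[0.37]])).ravel())
```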

8.
This paper deals with optimal experimental design criteria and neural networks, with the aim of building experimental designs from observational data. It addresses the following three main issues: (i) the introduction of two radically different approaches, namely T-optimal designs extended to Generalized Linear Models, and Evolutionary Neural Networks Design; (ii) the proposal of two algorithms, based on model selection procedures, to exploit the information in already collected data; and (iii) the comparison of the suggested methods and corresponding algorithms by means of a simulated case study in the technological field. Results are compared by considering elements of the proposed algorithms, in terms of models and experimental design strategies. In particular, we highlight the algorithmic features, the performance of the approaches, the optimal solutions and the optimal levels of the variables involved in a simulated foaming process. The optimal solutions obtained by the two proposed algorithms are very similar; nevertheless, the differences between the paths followed by the two algorithms to reach the optimal values are substantial, as detailed step by step in the discussion.

9.
We consider, in the setting of censored data, a kernel estimator of the hazard rate. We give the asymptotic expression of the Integrated Square Error (ISE) and propose a method to select the asymptotically optimal bandwidth.
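A hedged sketch of such a kernel hazard estimator, smoothing Nelson-Aalen increments with a Gaussian kernel, is shown below; the paper's asymptotic ISE expression and bandwidth selection method are not reproduced, and the bandwidth used is an arbitrary assumption.

```python
import numpy as np

def kernel_hazard(t_grid, times, events, b):
    """Kernel hazard-rate estimate by smoothing Nelson-Aalen increments:
    h(t) ~ sum_i K_b(t - T_(i)) * delta_(i) / (number at risk at T_(i)).
    times: observed (possibly censored) times; events: 1 = event, 0 = censored."""
    order = np.argsort(times)
    T, d = np.asarray(times)[order], np.asarray(events)[order]
    n = len(T)
    at_risk = n - np.arange(n)                 # risk set size at each T_(i)
    increments = d / at_risk                   # Nelson-Aalen jump sizes
    u = (t_grid[:, None] - T[None, :]) / b
    kern = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return (kern * increments[None, :]).sum(axis=1) / b

rng = np.random.default_rng(4)
t_true = rng.exponential(1.0, 300)             # true hazard rate = 1
c = rng.exponential(2.0, 300)                  # independent censoring
times, events = np.minimum(t_true, c), (t_true <= c).astype(int)
print(kernel_hazard(np.array([0.5, 1.0, 1.5]), times, events, b=0.3))
```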

10.
This paper considers a general family of Stein-rule estimators for the coefficient vector of a linear regression model with nonspherical disturbances, and derives estimators of the Mean Squared Error (MSE) matrix and of the risk under quadratic loss for this family of estimators. Confidence ellipsoids for the coefficient vector based on this family of estimators are proposed, and the performance of the confidence ellipsoids under the criteria of coverage probability and expected volume is investigated. The results of a numerical simulation are presented to illustrate the theoretical findings, which could be applicable in the area of economic growth modeling.
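The abstract concerns a general family; as a concrete baseline, the classical Stein-rule shrinkage of the OLS coefficient vector can be sketched as follows. The shrinkage constant shown is one standard choice, and spherical disturbances are assumed for simplicity, so this is narrower than the paper's setting.

```python
import numpy as np

def stein_rule(X, y, a=None):
    """Classical Stein-rule shrinkage of the OLS coefficients:
    b_S = (1 - a * e'e / (b' X'X b)) * b, with e the OLS residuals.
    A textbook baseline (spherical errors); the paper's family is broader."""
    n, k = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    if a is None:
        a = (k - 2) / (n - k + 2)       # a standard choice, valid for k > 2
    shrink = 1.0 - a * (e @ e) / (b @ (X.T @ X) @ b)
    return shrink * b

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, 0.5, 0.0, -0.5]) + rng.normal(size=100)
print(stein_rule(X, y))
```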

11.
This paper proposes an online surrogate model-assisted multiobjective optimization framework to identify optimal remediation strategies for groundwater contaminated with dense non-aqueous phase liquids. The optimization involves three objectives: minimizing the remediation cost and duration, and maximizing the contamination removal rate. The proposed framework adopts a multiobjective feasibility-enhanced particle swarm optimization algorithm to solve the optimization model, and uses an online surrogate model as a substitute for the time-consuming multiphase flow model when calculating contamination removal rates during the optimization process. The resulting approach allows decision makers to find a balance among the remediation cost, remediation duration and contamination removal rate when remediating contaminated groundwater. The new algorithm is compared with the nondominated sorting genetic algorithm II (NSGA-II), an extensively applied and well-known algorithm. The results show that the Pareto solutions obtained by the new algorithm have greater diversity and stability than those obtained by NSGA-II, indicating that the new algorithm is better suited to optimizing remediation strategies for contaminated groundwater. Additionally, the surrogate model and Pareto optimal set obtained by the proposed framework are compared with those of an offline surrogate model-assisted multiobjective optimization framework. The results indicate that the surrogate model accuracy and Pareto front achieved by the proposed framework outperform those of the offline framework. We therefore conclude that the proposed framework can effectively enhance surrogate model accuracy and further improve the overall quality of the Pareto solutions.

12.
Recently, the use of Bayesian optimal designs for discrete choice experiments, also called stated choice experiments or conjoint choice experiments, has gained much attention, stimulating the development of Bayesian choice design algorithms. Characteristic of the Bayesian design strategy is that it incorporates the available information about people's preferences for various product attributes into the choice design. This is in contrast with the linear design methodology, which is also used in discrete choice design and which depends, for any claims of optimality, on the unrealistic assumption that people have no preference for any of the attribute levels. Although linear design principles have often been used to construct discrete choice experiments, we show, using an extensive case study, that the resulting utility-neutral optimal designs are not competitive with Bayesian optimal designs for estimation purposes.

13.
An efficient methodology is presented to achieve optimal design of structures for earthquake loading. In this methodology, a combination of wavelet transforms, neural networks and evolutionary algorithms is employed. The stochastic nature of evolutionary algorithms leads to slow convergence, especially when earthquake-induced loads are taken into account. To reduce the computational burden, a discrete wavelet transform is used, by means of which the number of points in the earthquake record is decreased. Then, using a surrogate model, the dynamic responses of the structures are predicted. In order to investigate the efficiency of the proposed methodology, two structures are designed for optimal weight. The numerical results demonstrate the computational advantages of the proposed hybrid methodology for the optimal dynamic design of structures.
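The wavelet reduction step can be sketched with PyWavelets: a multilevel DWT is applied to the record and only the coarse approximation coefficients are retained, shortening the signal roughly by a factor of 2^level. The db4 wavelet, the level and the synthetic record are assumptions; the paper's choices are not stated in the abstract.

```python
import numpy as np
import pywt

# Synthetic stand-in for a recorded ground-acceleration history
rng = np.random.default_rng(6)
n = 4096
record = np.cumsum(rng.normal(size=n)) * 0.01

# Multilevel discrete wavelet decomposition; keeping only the
# approximation coefficients shortens the record ~2**level times
level = 3
coeffs = pywt.wavedec(record, "db4", level=level)
reduced = coeffs[0]                      # coarse approximation
print(len(record), "->", len(reduced))   # 4096 -> ~518

# The reduced signal can drive the (surrogate) dynamic analysis; a full
# reconstruction is available when needed:
restored = pywt.waverec(coeffs, "db4")
```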

14.
In this paper we study the problem of designing periodic orbits for a special class of hybrid systems, namely mechanical systems with underactuated continuous dynamics and impulse events. We approach the problem by means of optimal control. Specifically, we design an optimal control based strategy that combines trajectory optimization, dynamics embedding, optimal control relaxation and root finding techniques. The proposed strategy allows us to design, in a numerically stable manner, trajectories that optimize a desired cost and satisfy boundary state constraints consistent with a periodic orbit. To show the effectiveness of the proposed strategy, we perform numerical computations on a compass biped model with torso.

15.
In this work, a flat pressure bulkhead reinforced by an array of beams is designed using a suite of heuristic optimization methods (Ant Colony Optimization, Genetic Algorithms, Particle Swarm Optimization and LifeCycle Optimization) and the Nelder-Mead simplex direct search method. The compromise between numerical performance and computational cost is addressed, calling for inexpensive yet accurate analysis procedures; variable fidelity is proposed as a trade-off solution. The difference between the low-fidelity and high-fidelity models at several points is used to fit a surrogate that corrects the low-fidelity model at other points. This allows faster linear analyses during the optimization, while a reduced set of expensive non-linear analyses is run "off-line," enhancing the linear results according to the physics of the structure. Numerical results report the success of the proposed methodology when applied to aircraft structural components. The main conclusions of the work are that (i) the variable fidelity approach enabled the use of computationally intensive heuristic optimization techniques; and (ii) this framework succeeded in exploring the design space, providing good initial designs for classical optimization techniques. The final design is obtained by validating the candidate solutions issued from both heuristic and classical optimization; the best design can then be chosen by direct comparison of the high-fidelity responses.

16.
We consider forecasting in systems whose underlying laws are uncertain, while contextual information suggests that future system properties will differ from the past. We consider linear discrete-time systems, and use a non-probabilistic info-gap model to represent uncertainty in the future transition matrix. The forecaster desires the average forecast of a specific state variable to be within a specified interval around the correct value. Traditionally, forecasting uses a model with optimal fidelity to historical data. However, since structural changes are anticipated, this is a poor strategy. Our first theorem asserts the existence, and indicates the construction, of forecasting models with sub-optimal fidelity to historical data that are more robust to model error than the historically optimal model. Our second theorem identifies conditions in which the probability of forecast success increases with increasing robustness to model error. The proposed methodology identifies reliable forecasting models for systems whose trajectories evolve over time with Knightian uncertainty about structural change. We consider various examples, including forecasting European Central Bank interest rates following 9/11.

17.
Artificial Neural Networks (ANNs) are well known for their ability to capture non-linear trends in scientific data. However, the heuristic nature of the estimation of parameters associated with ANNs has prevented their evolution into efficient surrogate models, and the dearth of optimal training-size estimation algorithms for these data-greedy ANNs has resulted in overfitting. Through this work, we therefore contribute a novel ANN-building algorithm, called TRANSFORM, aimed at the simultaneous and optimal estimation of ANN architecture, training size and transfer function. TRANSFORM is integrated with three standalone Sobol-sampling-based training-size determination algorithms, which incorporate the concepts of hypercube sampling and optimal space filling. TRANSFORM was used to construct ANN surrogates for a highly non-linear, industrially validated continuous casting model from a steel plant. Multiobjective optimization of the casting model, to ensure maximum productivity, maximum energy saving and minimum operational cost, was performed by an ANN-assisted Non-dominated Sorting Genetic Algorithm (NSGA-II). The surrogate-assisted optimization was found to be 13 times faster than conventional optimization, leading to its online implementation. Simple operator's rules for the optimal functioning of the casting plant were deciphered from the optimal solutions using Pareto front characterization and K-means clustering. Comprehensive studies on (a) computational time comparisons between the proposed training-size estimation algorithms and (b) predictability comparisons between the constructed ANNs and state-of-the-art statistical models (Kriging interpolators) add to the other highlights of this work. TRANSFORM takes a physics-based model as its only input and provides parsimonious ANNs as outputs, making it generic across scientific domains.
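The Sobol-sampling ingredient can be illustrated with SciPy's quasi-Monte Carlo module; the sketch below generates a scrambled space-filling design scaled to hypothetical casting-variable bounds. The bounds, dimension and sample count are assumptions, and TRANSFORM's own training-size logic is not reproduced.

```python
from scipy.stats import qmc

# Space-filling Sobol design for ANN training data, scaled to the
# ranges of (hypothetical) casting variables, e.g. speed and temperature
sampler = qmc.Sobol(d=2, scramble=True, seed=7)
unit_points = sampler.random_base2(m=7)          # 2**7 = 128 points
lower, upper = [0.5, 1500.0], [2.0, 1600.0]      # assumed variable bounds
design = qmc.scale(unit_points, lower, upper)
print(design.shape)                               # (128, 2)
```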

18.
Distance predicting functions may be used in a variety of applications for estimating travel distances between points. To evaluate the accuracy of a distance predicting function and to determine its parameters, a goodness-of-fit criterion is employed. AD (Absolute Deviations), SD (Squared Deviations) and NAD (Normalized Absolute Deviations) are the three criteria most often employed in practice. In the literature, some assumptions have been made about the properties of each criterion. In this paper, we present statistical analyses performed to compare the three criteria from different perspectives. For this purpose, we employ the l_{kpθ}-norm as the distance predicting function, and statistically compare the three criteria using normalized absolute prediction error distributions in seventeen geographical regions. We find that there are no significant differences between the criteria. However, since the criterion SD has desirable properties in terms of distance modelling procedures, we suggest its use in practice.
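A hedged sketch of the fitting procedure: assuming the l_{kpθ} distance function has the weighted form k(|Δx|^p + |Δy|^p)^{1/θ}, its parameters can be estimated by minimizing the SD criterion over observed coordinate differences and measured distances. The functional form, the bounds and the toy data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def predicted(params, dx, dy):
    """Assumed l_kptheta-type function: k * (|dx|^p + |dy|^p)^(1/theta)."""
    k, p, theta = params
    return k * (np.abs(dx) ** p + np.abs(dy) ** p) ** (1.0 / theta)

def sd_criterion(params, dx, dy, actual):
    """SD goodness-of-fit: sum of squared deviations from actual distances."""
    return np.sum((predicted(params, dx, dy) - actual) ** 2)

# Toy data: coordinate differences and measured travel distances
rng = np.random.default_rng(8)
dx, dy = rng.uniform(-10, 10, 50), rng.uniform(-10, 10, 50)
actual = 1.2 * (np.abs(dx) ** 1.7 + np.abs(dy) ** 1.7) ** (1 / 1.7)
actual *= rng.uniform(0.95, 1.05, 50)            # measurement noise

res = minimize(sd_criterion, x0=[1.0, 2.0, 2.0], args=(dx, dy, actual),
               bounds=[(0.1, 10), (1.0, 3.0), (1.0, 3.0)])
print(res.x)   # fitted (k, p, theta)
```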

19.
Lotfi Tadj, Gautam Choudhury. TOP, 2005, 13(2): 359-412.
We have divided this review into two parts. The first part is concerned with the optimal design of queueing systems, and the second part deals with the optimal control of queueing systems. The second part, which has the lion's share of the review since it has received the most attention, focuses mainly on the modelling aspects of the problem and describes the different kinds of threshold (control) policy models available in the literature. To limit the scope of this survey, we restrict ourselves to papers dealing with the three policies (N, T, and D), where a cost function is designed specifically and optimal thresholds that yield minimum cost are sought.
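As a minimal illustration of the design/control interplay for one of the three policies, the sketch below computes the cost-minimizing threshold of an N-policy M/M/1 queue using standard textbook formulas (mean number in system ρ/(1−ρ) + (N−1)/2 and mean cycle length N/(λ(1−ρ))); the cost parameters are assumptions, and the familiar square-root approximation is printed for comparison.

```python
import math

def n_policy_cost(N, lam, mu, h, K):
    """Average cost per unit time of an M/M/1 queue under an N-policy:
    holding cost h per customer per unit time plus setup cost K per cycle.
    Standard textbook formulas, used here only to illustrate the idea."""
    rho = lam / mu
    L = rho / (1 - rho) + (N - 1) / 2.0        # mean number in system
    cycle = N / (lam * (1 - rho))              # mean cycle length
    return h * L + K / cycle

lam, mu, h, K = 2.0, 3.0, 1.0, 30.0
best_N = min(range(1, 200), key=lambda N: n_policy_cost(N, lam, mu, h, K))
print(best_N, math.sqrt(2 * K * lam * (1 - lam / mu) / h))  # square-root rule
```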
