Similar Documents
20 similar documents found.
1.
This work represents the first step towards a Dynamic Data-Driven Application System (DDDAS) for wildland fire prediction. Our main efforts focus on exploiting the computing power provided by High Performance Computing systems and on proposing computational data-driven steering strategies to overcome input data uncertainty, which can enhance prediction quality significantly. These proposals also reduce the execution time of the overall prediction process so that it remains useful during a real-time crisis. In particular, this work describes a Dynamic Data-Driven Genetic Algorithm (DDDGA) used as a steering strategy to automatically adjust the highly dynamic input data values of forest fire simulators, taking into account the underlying propagation model and real fire behaviour.
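A minimal sketch of the data-driven calibration idea, assuming a hypothetical `simulate_fire` function that maps a parameter vector (e.g. wind speed, fuel moisture) to a set of burned cells, and using the symmetric difference with the observed perimeter as the error measure; the paper's actual DDDGA runs inside an HPC prediction loop and is more elaborate than this.

```python
import random

def calibrate(simulate_fire, observed_cells, param_bounds,
              pop_size=20, generations=10, mutation_rate=0.2):
    """Genetic search over simulator input parameters so that the
    simulated fire spread matches the observed spread."""
    def random_individual():
        return [random.uniform(lo, hi) for lo, hi in param_bounds]

    def error(individual):
        # Fitness: mismatch between simulated and observed burned cells.
        predicted = simulate_fire(individual)
        return len(predicted ^ observed_cells)  # symmetric difference

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=error)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if random.random() < mutation_rate:
                i = random.randrange(len(child))
                lo, hi = param_bounds[i]
                child[i] = random.uniform(lo, hi)
            children.append(child)
        population = survivors + children
    return min(population, key=error)
```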

2.
Prediction of damage to orbiting spacecraft due to collisions with hypervelocity space debris is an important issue in the design of Space Station Freedom. Space station wall structures are designed to absorb impact energy during a collision. A proposed wall structure consists of a multilayer insulation (MLI) directly covering the pressure wall, and a bumper layer placed 100 mm from the pressure wall. In experiments at the Marshall Space Flight Center, 2.5–12.7 mm projectiles have been fired at this wall structure at speeds of 2–8 km/s. In this paper, three-layer backpropagation networks are trained with two sets of impact damage data. The input parameters for training are pressure wall thickness, bumper plate thickness, projectile diameter, impact angle, and projectile velocity. Output from the network consists of hole dimensions for the bumper and the pressure wall in the minor and major axis directions, and damage to the MLI. To evaluate network generalization, the networks are tested with experimental data points that are not used for training. Network performance is compared with that of other damage prediction methods. Network determination of qualitative damage estimation is suggested as a new direction for research. Preliminary testing of qualitative prediction of pressure wall damage is presented. The results are promising and suggest several areas for further study.
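A minimal sketch of such a three-layer (input–hidden–output) regression network using scikit-learn; the file names, hidden-layer size, and column layout are placeholders, not the paper's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns of X (assumed layout): pressure wall thickness, bumper plate
# thickness, projectile diameter, impact angle, projectile velocity.
# Columns of y (assumed layout): bumper hole major/minor axis, pressure
# wall hole major/minor axis, MLI damage measure.
X_train = np.loadtxt("impact_train_inputs.csv", delimiter=",")   # placeholder files
y_train = np.loadtxt("impact_train_damage.csv", delimiter=",")

# One hidden layer gives the "three-layer" input-hidden-output topology.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)

# Generalization check on experiments held out from training.
X_test = np.loadtxt("impact_test_inputs.csv", delimiter=",")
print(model.predict(X_test))
```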

3.
In the Capacitated Clustering Problem (CCP), a given set of n weighted points is to be partitioned into p clusters such that the total weight of the points in each cluster does not exceed a given cluster capacity. The objective is to find a set of p centers that minimises the total scatter of the points allocated to them. In this paper a new constructive method, a general framework to improve the performance of greedy constructive heuristics, and a problem-space search procedure for the CCP are proposed. The constructive heuristic finds patterns of natural subgrouping in the input data using the concept of point density. Elements of adaptive computation and periodic construction–deconstruction concepts are implemented within the constructive heuristic to develop a general framework for building efficient heuristics. The problem-space search procedure is based on perturbations of the input data, for which a controlled perturbation strategy and intensification and diversification strategies are developed. The implemented algorithms are compared with existing methods on a standard set of benchmarks and on new sets of large-sized instances. The results illustrate the strengths of our algorithms in terms of solution quality and computational efficiency.
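A minimal sketch of a density-driven greedy constructive step, assuming the density of a point is the number of unassigned points within a given radius; the paper's heuristic additionally uses adaptive computation and periodic construction–deconstruction, which are omitted here.

```python
import numpy as np

def density_constructive(points, weights, p, capacity, radius):
    """Greedy constructive heuristic for the CCP: repeatedly open a centre
    at the densest unassigned point, then fill its cluster with the nearest
    unassigned points while the capacity allows."""
    n = len(points)
    unassigned = set(range(n))
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    clusters = []
    for _ in range(p):
        # Density of a point = number of unassigned points within `radius`.
        idx = list(unassigned)
        density = [(dist[i, idx] <= radius).sum() for i in idx]
        centre = idx[int(np.argmax(density))]
        members, load = [], 0.0
        for j in sorted(unassigned, key=lambda j: dist[centre, j]):
            if load + weights[j] <= capacity:
                members.append(j)
                load += weights[j]
        unassigned -= set(members)
        clusters.append((centre, members))
    return clusters
```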

4.
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship between the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
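A minimal sketch of the underlying idea of ranking candidate lags by an information measure rather than by linear cross-correlation, using scikit-learn's mutual information estimator as a stand-in for the paper's XEF criterion.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_lags(x, y, max_lag, top_k=3):
    """Rank candidate lags of input signal x by an information measure
    with the target y, instead of by linear cross-correlation."""
    # Column k-1 holds x[t - k] aligned with y[t] for t = max_lag .. len(x) - 1.
    lagged = np.column_stack(
        [x[max_lag - k : len(x) - k] for k in range(1, max_lag + 1)]
    )
    target = y[max_lag:]
    mi = mutual_info_regression(lagged, target, random_state=0)
    order = np.argsort(mi)[::-1]
    return [(int(k) + 1, float(mi[k])) for k in order[:top_k]]
```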

5.
The goal of this paper is to address the problem of evaluating the performance of a system running under unknown values of its stochastic parameters. A new approach called LAD for Simulation, based on simulation and classification software, is presented. It uses a number of simulations with very few replications and records the mean values of directly measurable quantities (called observables). These observables are used as input to a classification model that produces a prediction for the performance of the system. An application to an assemble-to-order system from the literature is described, and detailed results illustrate the strength of the method.

6.
Prediction models are traditionally optimized independently of decision-based optimization. Conversely, a 'smart predict then optimize' (SPO) framework optimizes prediction models to minimize downstream decision regret. In this paper we present dboost, the first general purpose implementation of smart gradient boosting for 'predict, then optimize' problems. The framework supports convex quadratic cone programming, and gradient boosting is performed by implicit differentiation of a custom fixed-point mapping. Experiments comparing dboost with state-of-the-art SPO methods show that it can further reduce out-of-sample decision regret.
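A minimal sketch of the decision-regret quantity that SPO-style methods target, assuming a hypothetical `solve` oracle that maps a cost vector to an optimal decision; the toy oracle below simply selects the single cheapest item.

```python
import numpy as np

def decision_regret(c_true, c_pred, solve):
    """Out-of-sample decision regret of a 'predict, then optimize' pipeline:
    true cost of the decision taken under the predicted costs minus the true
    cost of the decision taken with perfect information."""
    w_pred = solve(c_pred)   # decision induced by the prediction
    w_star = solve(c_true)   # oracle decision under the true costs
    return float(c_true @ w_pred - c_true @ w_star)

# Toy example: pick the single cheapest item (a trivial linear programme).
solve_cheapest = lambda c: np.eye(len(c))[np.argmin(c)]
print(decision_regret(np.array([3.0, 1.0, 2.0]),
                      np.array([1.5, 2.0, 2.5]), solve_cheapest))  # regret = 2.0
```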

7.
A new correlation method, based on CFD analysis, for determining the aerodynamic service loads of a rigid wing is presented. All flight conditions can be handled by the proposed method. The correlation equations are derived by considering a training fighter aircraft as a prototype. Each wing of the aircraft is divided into thirty-three parts in the spanwise direction. Extensive numerical solutions have been obtained by varying a number of parameters that directly affect the wing's aerodynamic loads, such as Mach number, angle of attack, and control surface deflections. For each set of input parameters, the corresponding aerodynamic loads applied to the different wing parts are calculated. The resulting loads and the corresponding input parameters are incorporated into a linear regression method in order to develop the appropriate correlation equations. The outputs of the developed equations are the aerodynamic loads at each part of the wing based on the independent variables, which are the above-mentioned input parameters. The validity of the developed equations is shown by comparing the loads obtained from these equations with the corresponding ones calculated through numerical analysis for different flight conditions. The correlation equations can now be used to calculate the aerodynamic loads at each part for any set of arbitrary values assigned to the input parameters.
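A minimal sketch of the regression step, assuming hypothetical files holding the CFD flight-condition parameters and the part-wise loads; scikit-learn's multi-output LinearRegression stands in for the paper's regression procedure, and the four-column layout is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows of X: one CFD run each. Assumed columns: Mach number, angle of attack,
# aileron deflection, elevator deflection. `loads` has one column per wing
# part (thirty-three parts per wing in the paper).
X = np.load("cfd_flight_conditions.npy")      # placeholder file names
loads = np.load("cfd_partwise_loads.npy")

# One multivariate linear fit yields a correlation equation per wing part.
correlation = LinearRegression().fit(X, loads)

# Loads at every wing part for an arbitrary new flight condition.
new_condition = np.array([[0.8, 4.0, -2.0, 1.0]])
print(correlation.predict(new_condition))
```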

8.
Artificial neural networks (ANN) have been widely used for both classification and prediction. This paper is focused on the prediction problem, in which an unknown function is approximated. ANNs can be viewed as models of real systems, built by tuning parameters known as weights. In training the net, the problem is to find the weights that optimize its performance (i.e., that minimize the error over the training set). Although the most popular method for training these networks is backpropagation, other optimization methods such as tabu search or scatter search have been successfully applied to solve this problem. In this paper we propose a path relinking implementation to solve the neural network training problem. Our method uses GRG, a gradient-based local NLP solver, as an improvement phase, while previous approaches used simpler local optimizers. The experimentation shows that the proposed procedure can compete with the best-known algorithms in terms of solution quality, while consuming a reasonable computational effort.

9.
The Newton-Raphson method has long remained the most widely used method for finding simple and multiple roots of nonlinear equations. In past years, many new methods have been introduced for finding multiple zeros that involve the use of a weight function in the second step, thereby increasing the order of convergence and giving the flexibility to generate a family of methods satisfying some underlying conditions. However, in almost all the schemes developed in the past, the usual way is to use a Newton-type method in the first step. In this paper, we present a new two-step optimal fourth-order family of methods for multiple roots (m > 1). The proposed iterative family has flexibility of choice at both steps. The development of the scheme is based on weight functions. The first step can not only recapture Newton's method for multiple roots as a special case but is also capable of defining new choices of first step. A stability analysis of some particular cases is also given to explain the dynamical behavior of the new methods around the multiple roots and to decide the best values of the free parameters involved. Finally, we compare our methods with existing schemes of the same order on a real-life application as well as on standard test problems. From the numerical results, we find that our methods can be considered a better alternative to the existing procedures of the same order.
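For context, a minimal sketch of the classical modified Newton step for a root of known multiplicity m, which the first step of such families generalises; this is the textbook second-order scheme, not the paper's fourth-order family.

```python
import math

def modified_newton(f, df, x0, m, tol=1e-12, max_iter=100):
    """Modified Newton iteration x_{k+1} = x_k - m*f(x_k)/f'(x_k) for a root
    of known multiplicity m; plain Newton (m = 1) only converges linearly
    at a multiple root, while this step restores quadratic convergence."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:            # landed exactly on the root
            return x
        step = m * fx / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: x = 1 is a root of multiplicity 2 of f(x) = (x - 1)^2 * exp(x).
f = lambda x: (x - 1.0) ** 2 * math.exp(x)
df = lambda x: (x - 1.0) * (x + 1.0) * math.exp(x)
print(modified_newton(f, df, x0=2.0, m=2))
```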

10.
We consider optimal decision-making problems in an uncertain environment. In particular, we consider the case in which the distribution of the input is unknown, yet there is some historical data drawn from the distribution. In this paper, we propose a new type of distributionally robust optimization model, called the likelihood robust optimization (LRO) model, for this class of problems. In contrast to previous work on distributionally robust optimization that focuses on certain parameters (e.g., mean, variance, etc.) of the input distribution, we exploit the historical data and define the accessible distribution set to contain only those distributions that make the observed data achieve a certain level of likelihood. Then we formulate the target problem as one of optimizing the expected value of the objective function under the worst-case distribution in that set. Our model avoids the over-conservativeness of some prior robust approaches by ruling out unrealistic distributions while maintaining robustness of the solution for any statistically likely outcomes. We present statistical analyses of our model using Bayesian statistics and empirical likelihood theory. Specifically, we prove the asymptotic behavior of our distribution set and establish the relationship between our model and other distributionally robust models. To test the performance of our model, we apply it to the newsvendor problem and the portfolio selection problem. The test results show that the solutions of our model indeed have desirable performance.

11.
Integration of renewable generation, such as wind and photovoltaic, into electrical power systems is growing rapidly throughout the world. The stochastic and variable nature of these resources poses operational challenges to power systems. The most effective way to tackle these challenges is short-term prediction of their available power. Despite the various methods developed to forecast the generation of renewable resources, they still have large errors, which may lead to under/over-commitment of conventional generators in power systems. Prediction of net demand (ND), defined as electrical load minus renewable generation, can provide useful information for accurate scheduling of conventional generators. In this article, the characteristics of the time series of electric load, renewable generation and ND are analyzed, and a new hybrid prediction strategy is presented for direct prediction of ND. The training mechanism of the proposed forecasting engine is composed of a new stochastic search method and the Levenberg–Marquardt learning algorithm, based on an iterative procedure and greedy search. The suggested prediction strategy is tested on different real-world power systems and its results are compared with the results of several other forecast methods and with figures published in the literature. These comparisons confirm the validity of the developed forecasting strategy. © 2016 Wiley Periodicals, Inc. Complexity 21: 296–308, 2016

12.
We propose an extension of the FlowSort sorting method to the case where there is imprecision in the input data. Within multicriteria decision aid, a lot of attention has been paid to sorting problems in which a set of actions has to be assigned to completely ordered categories. However, few methods are suitable when the data or the parameters of the model are not precisely defined. In this paper, instead of reducing the imprecise data to single values, we consider that the sorting parameters or the data are defined by intervals. We analyse the properties usually required of a sorting method and illustrate this extension on a practical example.

13.
14.
Euclidean distance-based classification rules are derived within a certain nonclassical linear model approach and applied to elliptically contoured samples having a density generating function g. Then a geometric measure-theoretical method to evaluate exact probabilities of correct classification for multivariate uncorrelated feature vectors is developed. In doing so, one has to measure suitably defined sets with certain standardized measures. The geometric key point is that the intersection percentage functions of the areas under investigation coincide with those of certain parabolic cylinder type sets. The intersection percentage functions of the latter sets can be described as threefold integrals. It turns out that these intersection percentage functions simultaneously yield geometric representation formulae for the doubly noncentral g-generalized F-distributions. Hence, beyond new formulae for evaluating probabilities of correct classification, we obtain new geometric representation formulae for the doubly noncentral g-generalized F-distributions. A numerical study concerning several aspects of evaluating both probabilities of correct classification and values of the doubly noncentral g-generalized F-distributions demonstrates the advantageous computational properties of the present new approach. This impression is supported by comparison with the literature. It is shown that probabilities of correct classification depend on the parameters of the underlying sample distribution through a certain well-defined set of secondary parameters. If the underlying parameters are unknown, we propose to estimate the probabilities of correct classification.
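A minimal sketch of a Euclidean distance-based rule in its simplest form: assign an observation to the class whose sample mean vector is closest. The synthetic two-class data below are purely illustrative and unrelated to the paper's elliptically contoured setting.

```python
import numpy as np

def euclidean_rule(train_groups, x):
    """Minimum Euclidean distance rule: assign x to the class whose
    training-sample mean vector is closest."""
    means = {label: np.mean(sample, axis=0) for label, sample in train_groups.items()}
    return min(means, key=lambda label: np.linalg.norm(x - means[label]))

# Toy two-class example with uncorrelated features.
rng = np.random.default_rng(0)
groups = {
    "A": rng.normal(loc=[0.0, 0.0], scale=1.0, size=(50, 2)),
    "B": rng.normal(loc=[3.0, 1.0], scale=1.0, size=(50, 2)),
}
print(euclidean_rule(groups, np.array([2.5, 0.8])))
```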

15.
Many applications aim to learn a high-dimensional parameter of a data generating distribution based on a sample of independent and identically distributed observations. For example, the goal might be to estimate the conditional mean of an outcome given a list of input variables. In this prediction context, bootstrap aggregating (bagging) has been introduced as a method to reduce the variance of a given estimator at little cost in bias. Bagging involves applying an estimator to multiple bootstrap samples and averaging the result across bootstrap samples. In order to address the curse of dimensionality, a common practice has been to apply bagging to estimators which themselves use cross-validation, thereby using cross-validation within a bootstrap sample to select fine-tuning parameters trading off the bias and variance of the bootstrap sample-specific candidate estimators. In this article we point out that in order to achieve the correct bias-variance trade-off for the parameter of interest, one should apply the cross-validation selector externally to candidate bagged estimators indexed by these fine-tuning parameters. We use three simulations to compare the new cross-validated bagging method with bagging of cross-validated estimators and bagging of non-cross-validated estimators.
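A minimal sketch of the recommended ordering, assuming a bagged regression-tree estimator indexed by tree depth: cross-validation is applied externally, choosing among whole bagged candidates rather than being nested inside each bootstrap sample.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

def cross_validated_bagging(X, y, depths=(2, 4, 8), n_bootstrap=50, cv=5):
    """Each candidate is a bagged estimator indexed by a fine-tuning
    parameter (tree depth here); the external CV selector then picks
    among the bagged candidates."""
    candidates = {
        d: BaggingRegressor(DecisionTreeRegressor(max_depth=d),
                            n_estimators=n_bootstrap, random_state=0)
        for d in depths
    }
    scores = {d: cross_val_score(est, X, y, cv=cv,
                                 scoring="neg_mean_squared_error").mean()
              for d, est in candidates.items()}
    best_depth = max(scores, key=scores.get)
    return candidates[best_depth].fit(X, y), best_depth
```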

16.
For models with dependent input variables, sensitivity analysis is often troublesome and only a few methods are available. Mara and Tarantola, in their paper "Variance-based sensitivity indices for models with dependent inputs", defined a set of variance-based sensitivity indices for models with dependent inputs. In this paper we propose a method based on moving least squares approximation to calculate these sensitivity indices. The proposed method is adaptable to both linear and nonlinear models, since the moving least squares approximation can capture sharp changes in scattered data. Both linear and nonlinear numerical examples are employed in this paper to demonstrate the ability of the proposed method. The new sensitivity analysis method is then applied to a cantilever beam structure; from the results, the most efficient way to decrease the variance of the model output can be determined, and this efficiency is demonstrated by exploring the dependence of the output variance on the variation coefficients of the input variables. Finally, we apply the new method to a headless rivet model, calculate the sensitivity indices of all inputs, and draw some significant conclusions from the results.
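A minimal sketch of a one-dimensional moving least squares approximation with Gaussian weights, illustrating how a locally weighted fit can follow sharp changes in scattered data; the bandwidth and test function are arbitrary illustrative choices, not the paper's.

```python
import numpy as np

def mls_predict(x_query, x_data, y_data, bandwidth=0.3):
    """Moving least squares: at each query point, fit a locally weighted
    linear polynomial with Gaussian weights centred on the query."""
    y_hat = []
    for xq in np.atleast_1d(x_query):
        w = np.exp(-((x_data - xq) ** 2) / (2.0 * bandwidth ** 2))
        A = np.column_stack([np.ones_like(x_data), x_data - xq])
        # Weighted normal equations: (A^T W A) beta = A^T W y
        beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y_data))
        y_hat.append(beta[0])          # local fit evaluated at the query point
    return np.array(y_hat)

# Scattered data with a sharp local change that a single global fit misses.
x = np.linspace(0.0, 1.0, 60)
y = np.where(x < 0.5, x, 2.0 * x - 0.5)
print(mls_predict([0.25, 0.75], x, y))
```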

17.
Incomplete information is notoriously common when planning soil and groundwater remediation. Groundwater flow and transport models are commonly used for making decisions. However, uncertainty in prediction arises due to imprecise information on flow and transport parameters such as saturated/unsaturated hydraulic conductivity, water retention curve parameters, precipitation and evapotranspiration rates, as well as factors governing the fate of the pollutant in soil such as dispersion, diffusion, degradation and chemical transformation. Different methods exist for quantifying uncertainty, e.g., first- and second-order Taylor series and the Monte Carlo method. In this paper, a methodology based on fuzzy set theory is presented to express the imprecision of input data in terms of fuzzy numbers and to quantify the uncertainty in prediction. The application of fuzzy set theory is demonstrated through pesticide (endosulfan) transport in an unsaturated layered soil profile. The governing partial differential equation, together with the fuzzy inputs, results in a non-linear optimization problem. The solution gives complete membership functions for flow (suction head) and pesticide concentration in the soil column.
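A minimal sketch of alpha-cut propagation of fuzzy inputs through a model, using the vertex method (evaluating the model at the corners of each alpha-cut box) as a simple stand-in for the paper's non-linear optimization; the concentration function and triangular fuzzy parameters below are hypothetical, and the vertex bounds are exact only for monotone models.

```python
import itertools
import numpy as np

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
    low, mode, high = tri
    return low + alpha * (mode - low), high - alpha * (high - mode)

def fuzzy_output(model, fuzzy_params, alphas=np.linspace(0.0, 1.0, 5)):
    """Vertex-method stand-in for the alpha-cut optimization: evaluate the
    model at all corners of the alpha-cut box to bound the output interval."""
    membership = []
    for a in alphas:
        intervals = [alpha_cut(p, a) for p in fuzzy_params]
        values = [model(c) for c in itertools.product(*intervals)]
        membership.append((a, min(values), max(values)))
    return membership

# Hypothetical stand-in for the transport model: concentration as a simple
# monotone function of hydraulic conductivity K and degradation rate k.
concentration = lambda p: 10.0 * p[0] * np.exp(-p[1])
params = [(0.5, 1.0, 1.5),   # fuzzy K
          (0.1, 0.2, 0.4)]   # fuzzy k
print(fuzzy_output(concentration, params))
```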

18.
The problem of local parameter identifiability of an input–output system is considered. A set LI of systems is studied for which the property of local parameter identifiability holds for almost all values of the input signals and parameters, in both the topological and the metric sense. Sufficient conditions are pointed out under which the set LI contains a prevalent subset. The proof is based on the prevalent transversality theorem proved by Kaloshin. Systems are considered that are characterized by a family of (structural) parameters a and a control block. It is shown that if the dimension of the set of parameters a is large enough (the structure of the system is rich enough), then, generically, a system f_a belongs to the class LI for a set of parameters a having full measure.

19.
Classical biplot methods allow for the simultaneous representation of individuals (rows) and variables (columns) of a data matrix. For binary data, logistic biplots have recently been developed. When data are nominal, both classical and binary logistic biplots are not adequate, and techniques such as multiple correspondence analysis (MCA), latent trait analysis (LTA) or item response theory (IRT) for nominal items should be used instead. In this paper we extend the binary logistic biplot to nominal data. The resulting method is termed the "nominal logistic biplot" (NLB), although the variables are represented as convex prediction regions rather than vectors. Using methods from computational geometry, the set of prediction regions is converted to a set of points in such a way that the prediction for each individual is established by its closest "category point". Interpretation is then based on distances rather than on projections. We study the geometry of such a representation and construct computational algorithms for the estimation of parameters and the calculation of prediction regions. Nominal logistic biplots extend both MCA and LTA in the sense that they give a graphical representation for LTA similar to the one obtained in MCA.

20.
Generalized canonical correlation analysis is a versatile technique that allows the joint analysis of several sets of data matrices. The generalized canonical correlation analysis solution can be obtained through an eigenequation, and distributional assumptions are not required. When dealing with multiple-set data, it frequently occurs that some values are missing. In this paper, two new methods for dealing with missing values in generalized canonical correlation analysis are introduced. The first approach, which does not require iterations, is a generalization of the Test Equating method available for principal component analysis. In the second approach, missing values are imputed in such a way that the generalized canonical correlation analysis objective function does not increase in subsequent steps. Convergence is achieved when the value of the objective function remains constant. By means of a simulation study, we assess the performance of the new methods. We compare the results with those of two available methods: the missing-data passive method, introduced in Gifi's homogeneity analysis framework, and the GENCOM algorithm developed by Green and Carroll. An application using World Bank data is used to illustrate the proposed methods.
