Similar Documents
20 similar documents found.
1.
In this paper, a computational algorithm named RST2ANU has been developed for solving integer and mixed integer global optimization problems. The algorithm, which is primarily based on the original controlled random search approach of Price [22], incorporates a simulated annealing-type acceptance criterion, so that not only downhill moves but also occasional uphill moves can be accepted. It also employs a special truncation procedure which not only ensures that the integer restrictions imposed on the decision variables are satisfied, but also creates greater possibilities for the search to reach a global optimal solution. The reliability and efficiency of the proposed RST2ANU algorithm have been demonstrated on thirty integer and mixed integer optimization problems taken from the literature. The performance of the algorithm has been compared with that of the corresponding purely controlled random search based algorithm as well as the standard simulated annealing algorithm. The performance of the method on mathematical models of three realistic problems has also been demonstrated.
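The two ingredients highlighted in the abstract, a simulated annealing-type acceptance rule and a truncation step that keeps integer variables feasible, can be sketched as follows. This is a minimal Python illustration, not the authors' RST2ANU implementation: the toy objective, the stochastic rounding used in `truncate`, the cooling schedule, and all parameter values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncate(x, integer_mask):
    """Round the coordinates flagged as integer; round up or down at random
    (proportionally to the fractional part) so the truncation itself does not
    bias the search -- an illustrative choice, not the paper's exact rule."""
    x = x.copy()
    frac = x[integer_mask] - np.floor(x[integer_mask])
    x[integer_mask] = np.floor(x[integer_mask]) + (rng.random(frac.shape) < frac)
    return x

def sa_accept(f_new, f_old, T):
    """Simulated-annealing-type acceptance: always accept downhill moves,
    accept uphill moves with probability exp(-(f_new - f_old) / T)."""
    return f_new <= f_old or rng.random() < np.exp(-(f_new - f_old) / T)

# toy mixed-integer objective: x[0] integer, x[1] continuous
f = lambda x: (x[0] - 3) ** 2 + (x[1] - 0.7) ** 2
integer_mask = np.array([True, False])

x, T = truncate(np.array([0.0, 0.0]), integer_mask), 1.0
for _ in range(2000):
    cand = truncate(x + rng.normal(scale=0.5, size=2), integer_mask)
    if sa_accept(f(cand), f(x), T):
        x = cand
    T *= 0.999            # geometric cooling (illustrative schedule)
print(x, f(x))
```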

2.
This paper presents a heuristic approach for multivariate random number generation. Our aim is to generate multivariate samples with specified marginal distributions and correlation matrix, which can be incorporated into risk analysis models to conduct simulation studies. The proposed sampling approach involves two distinct steps: first a univariate random sample from each specified probability distribution is generated; then a heuristic combinatorial optimization procedure is used to rearrange the generated univariate samples, in order to obtain the desired correlations between them. The combinatorial optimization step is performed with a simulated annealing algorithm, which changes only the positions and not the values of the numbers generated in the first step. The proposed multivariate sampling approach can be used with any type of marginal distributions: continuous or discrete, parametric or non-parametric, etc.
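A minimal sketch of the two-step idea, assuming the Pearson correlation matrix as the target and simple swap moves within columns; the function name `rearrange_to_target`, the error metric, and all annealing parameters are illustrative choices rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def rearrange_to_target(samples, target_corr, n_iter=20000, T0=0.1, cooling=0.9995):
    """Second step of the approach (sketch): keep the marginal values fixed and
    only permute positions within columns, using simulated annealing on swap
    moves, until the sample correlation matrix approaches `target_corr`."""
    X = samples.copy()
    err = lambda M: np.abs(np.corrcoef(M, rowvar=False) - target_corr).sum()
    e, T = err(X), T0
    for _ in range(n_iter):
        j = rng.integers(1, X.shape[1])          # column to permute (column 0 stays fixed)
        i1, i2 = rng.integers(0, X.shape[0], 2)  # two positions to swap
        X[[i1, i2], j] = X[[i2, i1], j]
        e_new = err(X)
        if e_new <= e or rng.random() < np.exp(-(e_new - e) / T):
            e = e_new                            # accept the swap
        else:
            X[[i1, i2], j] = X[[i2, i1], j]      # undo the swap
        T *= cooling
    return X

# step 1: independent univariate samples with arbitrary marginals
n = 500
samples = np.column_stack([rng.lognormal(size=n),     # continuous marginal
                           rng.poisson(4, size=n)])   # discrete marginal
target = np.array([[1.0, 0.6],
                   [0.6, 1.0]])
X = rearrange_to_target(samples.astype(float), target)
print(np.corrcoef(X, rowvar=False))
```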

3.
The problem of dependency between two random variables has been studied thoroughly in the literature. Many dependency measures have been proposed according to concepts such as concordance, quadrant dependency, etc. More recently, the development of the theory of copulas has had a great impact on the study of dependence between random variables, especially continuous random variables. In the multivariate setting, the study of strong mixing conditions has led to interesting results that extend results such as the central limit theorem to the case of dependent random variables. In this paper, we study the behavior of a multidimensional extension of two well-known dependency measures, establishing their basic properties and providing several examples. The main difference between these measures and others previously proposed is that they are based on the definition of independence among n random elements or variables; they therefore provide a natural way to measure dependency. The main purpose of this paper is to present a sample version of one of these measures, establish its properties, and, based on this sample version, propose a test of independence for multivariate observations. We include several references to applications in Statistics.

4.
We propose a probability model for random partitions in the presence of covariates. In other words, we develop a model-based clustering algorithm that exploits available covariates. The motivating application is predicting time to progression for patients in a breast cancer trial. We proceed by reporting a weighted average of the responses of clusters of earlier patients. The weights should be determined by the similarity of the new patient's covariates with the covariates of patients in each cluster. We achieve the desired inference by defining a random partition model that includes a regression on covariates. Patients with similar covariates are a priori more likely to be clustered together. Posterior predictive inference in this model formalizes the desired prediction.

We build on product partition models (PPM). We define an extension of the PPM to include a regression on covariates by including in the cohesion function a new factor that increases the probability of experimental units with similar covariates being included in the same cluster. We discuss implementations suitable for any combination of continuous, categorical, count, and ordinal covariates.

An implementation of the proposed model as an R package is available for download.

5.
System reliability analysis involving correlated random variables is challenging because the failure probability cannot be uniquely determined under the given probability information. This paper proposes a system reliability evaluation method based on non-parametric copulas. The approximated joint probability distribution satisfying the constraints specified by the correlations has the maximal relative entropy with respect to the joint probability distribution of independent random variables, so the reliability evaluation is unbiased from the perspective of information theory. The estimation of the non-parametric copula parameters from the Pearson linear correlation, the Spearman rank correlation, and the Kendall rank correlation is provided. The approximated maximum entropy distribution is then integrated with the first and second order system reliability methods. Four examples are adopted to illustrate the accuracy and efficiency of the proposed method. It is found that the traditional system reliability method encodes excessive dependence information for correlated random variables, so that the estimated failure probability can be significantly biased.

6.
This paper proposes a new methodology to model the uncertainties associated with functional random variables. The methodology deals simultaneously with several dependent functional variables and addresses the specific case where these variables are linked to a vectorial variable, called the covariate. In this case, the proposed uncertainty modelling methodology has two objectives: to retain the most important features of the functional variables and the features that are most correlated with the covariate. The methodology is composed of two steps. First, the functional variables are decomposed on a functional basis; to handle several dependent functional variables simultaneously, a Simultaneous Partial Least Squares algorithm is proposed to estimate this basis. Second, the joint probability density function of the coefficients selected in the decomposition is modelled by a Gaussian mixture model. A new sparse method based on a Lasso penalization algorithm is proposed to estimate the Gaussian mixture model parameters and reduce their number. Several criteria are introduced to assess the performance of the methodology: its ability to approximate the probability distribution of the functional variables, their dependence structure, and the features that explain the covariate. Finally, the whole methodology is applied to a simulated example and to a nuclear reliability test case.

7.
One of the benefits of modular design is ease of service. While modular design helps simplify field maintenance, extensive depot maintenance and spare modules are required to support the field maintenance. This study develops a dynamic approach for scheduling preventive maintenance at a depot under limited availability of spare modules and other constraints. A backward allocation algorithm is proposed and applied to scheduling the preventive maintenance of an engine module installed in T-59 advanced jet trainers in the Republic of Korea Air Force. The algorithm developed by this study can be used to solve similar problems for various systems such as aerospace vehicles, heavy machinery, and medical equipment. The contributions of this study include the uniqueness of the algorithm, the flexibility to deal with variables changing over time, and the ability to incorporate additional variables to handle complex situations.

8.
A modification of the standard algorithm for the simulation of order statistics of a uniform distribution is proposed that uses confidence intervals. It is found that one of the applications of algorithms for the simulation of order statistics (namely, simulation of the beta distribution with integer parameters) yields methods for the simulation of order statistics that are more efficient than the algorithm based on confidence intervals. It is shown that the resulting algorithm can be used for the efficient simulation of random variables with polynomial density and of beta-distributed random variables with large non-integer parameters.
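The connection the abstract relies on is the classical identity between uniform order statistics and the beta distribution: the a-th smallest of a+b-1 i.i.d. U(0,1) variables follows Beta(a, b). A minimal Python sketch of simulating a beta variable with integer parameters this way (the textbook identity only, not the paper's modified algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

def beta_via_order_statistics(a, b, size):
    """Simulate Beta(a, b) for integer a, b using the classical identity:
    the a-th smallest of (a + b - 1) i.i.d. U(0,1) variables is Beta(a, b)."""
    u = rng.random((size, a + b - 1))
    return np.sort(u, axis=1)[:, a - 1]

x = beta_via_order_statistics(3, 5, 100000)
print(x.mean(), 3 / (3 + 5))   # sample mean vs. theoretical mean a/(a+b)
```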

9.
A general methodology to optimize the weight of power transmission structures is presented in this article. The methodology is based on the simulated annealing algorithm defined by Kirkpatrick in the early '80s. This algorithm is a stochastic approach that explores and analyzes solutions that do not improve the objective function, in order to achieve a better exploration of the design region and to reach the global optimum. The proposed algorithm handles both the discrete behavior of the sectional variables of each element and the continuous behavior of the general geometry variables. Thus, an optimization methodology is developed that can deal with a mixed optimization problem including both continuous and discrete design variables, and it does not require studying all possible design combinations defined by the discrete design variables. In practical applications the proposed algorithm usually requires a large number of simulations (structural analyses in this case); the authors have therefore developed first-order Taylor expansions and the associated first-order sensitivity analysis to reduce the CPU time required. Exterior penalty functions have also been included to deal with the design constraints. Thus, the general methodology proposed makes it possible to optimize real power transmission structures in acceptable CPU time.
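Two of the building blocks mentioned, an exterior penalty on the design constraints and a neighborhood move that mixes discrete sectional choices with continuous geometry perturbations, can be illustrated as follows. This is a toy Python sketch, not the authors' structural optimization code: the objective, the constraint, the catalog of cross sections, and every parameter value are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

def exterior_penalty(f, g_list, x, r):
    """Exterior penalty: add r * sum(max(0, g_i(x))^2) for constraints g_i(x) <= 0.
    Infeasible points are penalized but still visitable, which suits annealing."""
    return f(x) + r * sum(max(0.0, g(x)) ** 2 for g in g_list)

def mixed_neighbor(x_cont, x_disc, catalog, step=0.1):
    """Perturb continuous geometry variables with a small Gaussian step and
    re-draw one discrete sectional variable from its catalog of allowed sections."""
    x_cont = x_cont + rng.normal(scale=step, size=x_cont.shape)
    x_disc = x_disc.copy()
    k = rng.integers(len(x_disc))
    x_disc[k] = catalog[rng.integers(len(catalog))]
    return x_cont, x_disc

# toy example: minimize a weight-like objective subject to one constraint
catalog = np.array([1.0, 2.0, 4.0, 8.0])                   # available cross sections
f = lambda xc, xd: xd.sum() * (1.0 + xc[0] ** 2)           # "weight"
g = lambda xc, xd: 10.0 - xd.sum() * (1.0 + abs(xc[0]))    # "stress" constraint g <= 0

xc, xd = np.array([1.0]), np.array([8.0, 8.0])
best = exterior_penalty(lambda x: f(*x), [lambda x: g(*x)], (xc, xd), r=100.0)
T = 1.0
for _ in range(5000):
    cand = mixed_neighbor(xc, xd, catalog)
    val = exterior_penalty(lambda x: f(*x), [lambda x: g(*x)], cand, r=100.0)
    if val <= best or rng.random() < np.exp(-(val - best) / T):
        (xc, xd), best = cand, val
    T *= 0.999
print(xc, xd, best)
```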

10.
张璐  孔令臣  陈黄岳 《计算数学》2019,41(3):320-334
With the arrival of the big data era, massive and structurally complex data have emerged in many fields, with variables differing in dimension, scale, and so on. In practice the relationships between variables are often uncertain: the classical Pearson correlation coefficient only reflects the linear correlation between two variables of the same dimension and cannot fully characterize the dependence between variables. The distance correlation coefficient proposed by Székely et al. in 2007, by contrast, can describe nonlinear relationships between variables of different dimensions. To explore the intrinsic information between variables, this paper proposes a maximum distance correlation method for variable clustering based on the distance correlation coefficient; the method possesses the ultrametric and space-contraction properties. To exploit the advantages of distance correlation more fully, the above method is refined into a cluster-as-a-whole distance correlation method: when characterizing the similarity between two clusters, all variables in each cluster are merged into a single entity, and the distance correlation between the two resulting entities of different dimensions is then computed. Finally, the cluster-as-a-whole distance correlation method is applied to several practical problems, verifying the effectiveness of the algorithm.
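A minimal numpy sketch of the sample distance correlation of Székely et al. that the clustering methods above build on (the clustering procedure itself is not reproduced here); the function name and the test data are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def distance_correlation(x, y):
    """Sample distance correlation between samples x (n, p) and y (n, q);
    p and q may differ, which is the point of the measure."""
    a = cdist(x, x)                    # pairwise Euclidean distances within x
    b = cdist(y, y)
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(4)
x = rng.normal(size=(300, 1))
y = x ** 2 + 0.1 * rng.normal(size=(300, 1))   # nonlinear dependence, near-zero Pearson
print(distance_correlation(x, y))
```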

11.
In this paper, mathematical methods for fuzzy stochastic analysis in engineering applications are presented. Fuzzy stochastic analysis maps uncertain input data in the form of fuzzy random variables onto fuzzy random result variables. The operator of the mapping can be any desired deterministic algorithm, e.g. the dynamic analysis of structures. Two different approaches for processing the fuzzy random input data are discussed. For these purposes two types of fuzzy probability distribution functions for describing fuzzy random variables are introduced. On the basis of these two types of fuzzy probability distribution functions, two appropriate algorithms for fuzzy stochastic analysis are developed. Both algorithms are demonstrated and compared by way of an example.

12.
Importance analysis aims at finding the contributions of the inputs to the output uncertainty. For structural models involving correlated input variables, the variance contribution of an individual input variable is decomposed into a correlated contribution and an uncorrelated contribution in this study. Based on point estimates, this work proposes a new algorithm to conduct variance-based importance analysis for correlated input variables. The transformation of the input variables from the correlated space to an independent space, and the computation of the conditional distributions in the process, ensure that the correlation information is inherited correctly. Different point estimate methods can be employed in the proposed algorithm, so the algorithm is adaptable and evolvable. The proposed algorithm is also applicable to uncertain systems with multiple modes. It avoids the sampling procedure, which usually incurs a heavy computational cost. Results of several examples show that the proposed algorithm is an effective tool for uncertainty analysis involving correlated inputs.

13.
In this paper, we present an interactive algorithm (ISTMO) for stochastic multiobjective problems with continuous random variables. The method combines the concept of probability efficiency for stochastic problems with the reference point philosophy for deterministic multiobjective problems. The decision maker expresses her/his preferences by dividing the variation range of each objective into intervals, and by setting the desired probability for each objective to achieve values belonging to each interval. These intervals may also be redefined during the process. This interactive procedure helps the decision maker to understand the stochastic nature of the problem, to discover the risk level (s)he is willing to assume for each objective, and to learn about the trade-offs among the objectives.

14.
A simple measure of similarity for the construction of the market graph is proposed. The measure is based on the probability of coincidence of the signs of the stock returns. It is robust, has a simple interpretation, is easy to calculate, and can be used as a measure of similarity between any number of random variables. For the case of pairwise similarity, the connection of this measure with the sign correlation of Fechner is noted. The properties of the proposed measure of pairwise similarity are studied in comparison with the classic Pearson correlation. The simple measure of pairwise similarity is applied (in parallel with the classic correlation) to the study of the Russian and Swedish market graphs. The new measure of similarity for more than two random variables is then introduced and applied to a deeper analysis of the Russian and Swedish markets. Some interesting phenomena for the cliques and independent sets of the obtained market graphs are observed.
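A minimal Python sketch of the pairwise version of such a sign-based measure; the simulated "returns", the common-factor construction, and the function name are assumptions made for the illustration, and the link to Fechner's sign correlation (r_F = 2p - 1, with signs taken around zero) is noted in a comment.

```python
import numpy as np

def sign_similarity(x, y):
    """Probability that two return series move in the same direction:
    the fraction of periods on which sign(x_t) == sign(y_t).
    For a pair of variables this relates to Fechner's sign correlation,
    r_F = 2 * p - 1, when the signs are taken around zero."""
    return np.mean(np.sign(x) == np.sign(y))

rng = np.random.default_rng(5)
m = rng.normal(size=1000)                  # common market factor (simulated)
a = 0.7 * m + 0.3 * rng.normal(size=1000)  # stock A daily returns (simulated)
b = 0.6 * m + 0.4 * rng.normal(size=1000)  # stock B daily returns (simulated)
p = sign_similarity(a, b)
print(p, 2 * p - 1, np.corrcoef(a, b)[0, 1])
```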

15.
Molecular similarity indices measure the similarity between two molecules. Computing the optimal similarity index is a hard global optimization problem. Since the objective function value is very hard to compute and its gradient vector is usually not available, previous research has been based on non-gradient algorithms such as random search and the simplex method. In a recent paper, McMahon and King introduced a Gaussian approximation so that both the function value and the gradient vector can be computed analytically. They then proposed a steepest descent algorithm for computing the optimal similarity index of small molecules. In this paper, we consider a similar problem. Instead of computing atom-based derivatives, we directly compute the derivatives with respect to the six free variables describing the relative positions of the two molecules. We show that both the function value and the gradient vector can be computed analytically, and we apply the more advanced BFGS method in addition to the steepest descent algorithm. The algorithms are applied to compute the similarities among the 20 amino acids and biomolecules such as proteins. Our computational results show that our algorithm achieves higher accuracy than previous methods and has a 6-fold speedup over the steepest descent method.
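The six-variable rigid-body parameterization (three rotation angles plus a translation vector) can be sketched with a toy Gaussian-overlap (Carbo-type) similarity optimized by BFGS. This is not the authors' analytic-gradient formulation: scipy's finite-difference BFGS is used instead of analytic derivatives, and the Gaussian exponent, the coordinates, and the Carbo-style normalization are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def overlap(A, B, alpha=0.3):
    """Gaussian overlap between two point sets: each atom is a spherical Gaussian
    of exponent alpha; pairwise overlaps follow the Gaussian product rule."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return ((np.pi / (2 * alpha)) ** 1.5 * np.exp(-0.5 * alpha * d2)).sum()

def similarity(v, A, B):
    """Carbo-like index after placing B with the six free variables
    v = (3 Euler angles, 3 translation components)."""
    Bt = Rotation.from_euler("xyz", v[:3]).apply(B) + v[3:]
    return overlap(A, Bt) / np.sqrt(overlap(A, A) * overlap(B, B))

rng = np.random.default_rng(6)
A = rng.normal(size=(20, 3))                                     # toy "molecule" A
B = Rotation.from_euler("xyz", [0.4, -0.2, 0.7]).apply(A) + 1.0  # A, rotated and shifted

res = minimize(lambda v: -similarity(v, A, B), np.zeros(6), method="BFGS")
print(res.x, -res.fun)   # best pose found and the corresponding similarity index
```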

16.
The Biogeography-Based Optimization algorithm and its variants have been used widely for optimization problems. To obtain better performance, a novel Biogeography-Based Optimization algorithm with hybrid migration and global-best Gaussian mutation is proposed in this paper. Firstly, a linearly dynamic random heuristic crossover strategy and an exponentially dynamic random differential mutation strategy are combined to form a hybrid migration operator; the former strengthens the local search ability and the latter the global search ability. Secondly, a new global-best Gaussian mutation operator is put forward to better balance exploration and exploitation. Finally, a random opposition learning strategy is incorporated to avoid getting stuck in local optima. Experiments on the classical benchmark functions and the complex functions from the CEC-2013 and CEC-2017 test sets, together with the Wilcoxon, Bonferroni-Holm, and Friedman statistical tests, are used to evaluate our algorithm. The results show that our algorithm obtains better performance and faster running speed than quite a few state-of-the-art competitive algorithms. In addition, experimental results on Minimum Spanning Tree and K-means clustering optimization show that our algorithm copes with these two problems better than the comparison algorithms.

17.
Variable-selection control charts are an important tool for high-dimensional statistical process monitoring. Traditional variable-selection control charts rarely account for the spatial correlation of high-dimensional processes, which lowers monitoring efficiency. To address this, a Fused-LASSO-based monitoring model for high-dimensional spatially correlated processes is proposed. First, the likelihood-ratio test is improved with the Fused LASSO algorithm; then a monitoring statistic based on the penalized likelihood ratio is derived; finally, the performance of the proposed monitoring model is analyzed through simulation studies and a real case. Both the simulations and the real case show that, in high-dimensional spatially correlated processes, when adjacent monitored variables shift simultaneously, the proposed method can accurately identify the potentially faulty variables and achieves good monitoring performance.

18.
The high dimensionality and strong correlation of current listed-company credit-risk data severely affect the accuracy of credit-risk models. To address this, a new non-parametric variable selection method is designed by combining existing algorithms with the characteristics of credit-risk models. Screening the credit-risk-related variables of listed companies with this method removes the noise variables and the linearly correlated variables contained in the data set. An algorithm for finding the optimal solution under high variable dimensionality is also designed for the method. Taking the Logistic model as an example, an empirical analysis of listed-company credit risk shows that, compared with previous variable selection methods, the proposed method effectively reduces the data dimensionality, eliminates correlation among the variables, and at the same time improves the reliability and prediction accuracy of the model.

19.
An approximation to the least-squares filter is proposed for discrete-time signals whose evolution is governed by nonlinear functions, when the estimation is based on nonlinear observations with additive noise that may, at any instant, consist of noise only; this uncertainty in the observation process is modelled by Bernoulli random variables which are correlated at consecutive time instants and are otherwise independent. The proposed recursive approximation is based on the unscented principle; successive applications of the unscented transformation to a suitably augmented state vector enable us to approximate the one-stage state and observation predictors from the state filter at the previous time instant. The performance of the proposed algorithm is compared with that of an extended algorithm in a numerical simulation example.
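The core device mentioned in the abstract is the unscented transformation. A minimal Python sketch of the standard transform (sigma points, weights, propagated mean and covariance) is given below; the scaling parameters and the toy observation function are illustrative, and the Bernoulli-correlated missing-observation model of the paper is not reproduced.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Standard unscented transform: propagate a Gaussian (mean, cov) through a
    nonlinear function f using 2n+1 sigma points and return the transformed
    mean and covariance. Parameter names follow the usual UT conventions."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    Y = np.array([f(s) for s in sigmas])             # propagate sigma points
    y_mean = wm @ Y
    diff = Y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# toy use: push a Gaussian state through a nonlinear observation function
m, P = np.array([1.0, 0.5]), np.diag([0.04, 0.09])
h = lambda x: np.array([np.sin(x[0]) * x[1], x[0] ** 2])
print(unscented_transform(m, P, h))
```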

20.
This paper presents a model designed to decide the number of advertisements in different advertising media and the optimal allocation of the budget assigned to the different media. The main objective is to maximize the reach to the desired audience across the different media within the maximum allowable budget, without violating the goals on the maximum and minimum numbers of advertisements. The media considered are different newspapers and different television channels. The model has been formulated so that the advertisements reach those sections of the audience that are suitable for the product rather than those that are not. A chance-constrained goal programming model has been designed, treating the reach parameters of the different media as random variables; these random variables are assumed to have known means and standard deviations. A case of an upcoming institution interested in advertising its two-year Post Graduate Diploma in Management (PGDM) programme in different newspapers and television channels is used to illustrate the solution methodology.
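A minimal sketch of how such a chance constraint on total reach is commonly converted to a deterministic equivalent, assuming independent normally distributed reach coefficients; the media, the numbers, and the function name are invented for the illustration and are not taken from the paper's case study.

```python
import numpy as np
from scipy.stats import norm

def reach_chance_constraint(x, mu, sigma, goal, prob):
    """Deterministic equivalent of P(sum_i r_i * x_i >= goal) >= prob when the
    reach coefficients r_i are independent N(mu_i, sigma_i^2):
        sum_i mu_i x_i - z_prob * sqrt(sum_i sigma_i^2 x_i^2) >= goal."""
    z = norm.ppf(prob)
    lhs = mu @ x - z * np.sqrt((sigma ** 2) @ (x ** 2))
    return lhs, lhs >= goal

# illustrative data: two newspapers and one TV channel
x = np.array([4.0, 2.0, 3.0])               # number of advertisements per medium
mu = np.array([12000.0, 8000.0, 30000.0])   # mean reach per advertisement
sigma = np.array([2000.0, 1500.0, 6000.0])  # std. dev. of reach per advertisement
print(reach_chance_constraint(x, mu, sigma, goal=150000.0, prob=0.95))
```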
