Similar Documents
20 similar documents retrieved.
1.
This paper proposes a method that combines a projection-outline-based active learning strategy with a Kriging metamodel for reliability analysis of structures with mixed random and convex variables. The method recognizes that the failure-probability estimate depends on how accurately the projection outlines on the limit-state surface are approximated, rather than on the whole limit-state surface. To efficiently improve the approximation accuracy of the projection outlines, a new projection-outline-based active learning strategy is developed to sequentially obtain update points located around the projection outlines. To account for the influence of metamodel uncertainty on the estimated failure probability, a quantification function of metamodel uncertainty is developed and introduced into the stopping condition of the Kriging metamodel update. Finally, Monte Carlo simulation is employed to calculate the failure probability based on the refined Kriging metamodel. Four examples, including the Burro Creek Bridge and a piezoelectric energy harvester, are tested to validate the performance of the proposed method. The results indicate that the proposed method is accurate and efficient for reliability analysis of structures with mixed random and convex variables.
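The following is a minimal, generic sketch of the active-learning Kriging + Monte Carlo workflow the abstract builds on (an AK-MCS-style loop with the classical U learning function), not the paper's projection-outline strategy or its metamodel-uncertainty stopping condition. The limit-state function g, sample sizes and U-threshold are illustrative assumptions, and scikit-learn's GaussianProcessRegressor stands in for the Kriging metamodel.

```python
# Generic active-learning Kriging + MCS sketch (AK-MCS style); all settings are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def g(x):  # toy limit-state function: failure when g(x) < 0
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

X_mc = rng.normal(size=(20000, 2))            # Monte Carlo population (standard normals)
idx = rng.choice(len(X_mc), 12, replace=False)
X_doe, y_doe = X_mc[idx], g(X_mc[idx])        # initial design of experiments

for _ in range(50):                            # active-learning loop
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X_doe, y_doe)
    mu, sigma = gp.predict(X_mc, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)  # U learning function
    if U.min() >= 2.0:                         # common stopping threshold
        break
    x_new = X_mc[np.argmin(U)]                 # point most likely to be misclassified
    X_doe = np.vstack([X_doe, x_new])
    y_doe = np.append(y_doe, g(x_new[None, :]))

pf = np.mean(mu < 0.0)                         # failure probability from the metamodel
print(f"estimated Pf = {pf:.4f} with {len(X_doe)} limit-state evaluations")
```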

2.
This study proposes a new super-parametric convex model with a formal mathematical definition, in which an effective minimum-volume method is constructed to reasonably envelop limited experimental samples by selecting a proper super parameter. Two novel reliability calculation algorithms, the nominal value method and the advanced nominal value method, are proposed to evaluate the non-probabilistic reliability index. To investigate the influence of the non-probabilistic convex model type on non-probabilistic reliability-based design optimization, an effective approach based on the advanced nominal value method is further developed. Four examples, including two numerical examples and two engineering applications, are tested to demonstrate the superiority of the proposed non-probabilistic reliability analysis and optimization technique.

3.
The response surface method (RSM), a simple and effective approximation technique, is widely used for reliability analysis in civil engineering. However, the traditional RSM needs a considerable number of samples and is computationally intensive and time-consuming for practical engineering problems with many variables. To overcome these problems, this study proposes a new approach that samples experimental points based on the difference between the last two trial design points. The response surface is constructed with a support vector machine (SVM), which can capture complex, nonlinear relations between the random variables and approximate the performance function from fewer experimental points, thereby improving the efficiency and accuracy of reliability analysis. The advantages of the proposed method were verified using four examples involving random variables with different distributions and correlation structures. The results show that this approach obtains the design point and reliability index with fewer experimental points and better accuracy. The proposed method was also employed to assess the reliability of a numerically modeled tunnel. The results indicate that this new method is applicable to practical, complex engineering problems such as rock engineering problems.
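As a rough illustration of the SVM-based response surface idea (not the paper's sampling scheme based on the difference between successive trial design points), the sketch below fits scikit-learn's SVR to a small set of experimental points for a toy performance function and then estimates the failure probability by Monte Carlo simulation on the surrogate; the performance function, distributions and SVR hyperparameters are assumptions.

```python
# SVR response surface + plain MCS sketch; toy performance function and settings assumed.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

def perf(x):  # toy nonlinear performance function, failure when perf(x) < 0
    return x[:, 0] ** 3 + x[:, 0] ** 2 * x[:, 1] + 20.0 - x[:, 1]

X_train = rng.normal(loc=[3.0, 2.0], scale=[1.0, 1.0], size=(40, 2))  # experimental points
y_train = perf(X_train)

svr = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X_train, y_train)

X_mc = rng.normal(loc=[3.0, 2.0], scale=[1.0, 1.0], size=(200000, 2))
pf_surrogate = np.mean(svr.predict(X_mc) < 0.0)
pf_reference = np.mean(perf(X_mc) < 0.0)   # available here only because perf is cheap
print("Pf (surrogate):", pf_surrogate, "Pf (direct MCS):", pf_reference)
```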

4.
In practical engineering and scientific research, all analysis and design problems involve uncertainties to various degrees. Dynamic loads acting on a structure are usually uncertain in nature because their magnitudes are difficult to predict. In this paper, a non-probabilistic, set-theoretical model, the interval analysis method, is developed to predict the transient vibrations of cross-ply plates with uncertain excitations. The dynamic loads involve deterministic and uncertain components of the force function and the initial conditions. The uncertainties in these functions are required to be bounded in the L2 norm and are expressed by finitely many eigenmodes. A numerical example shows that the bounds on the critical buckling loads calculated by the interval analysis method are tighter than those obtained by convex models. Moreover, the interval analysis has a lower computational cost than convex models. For specific cases, the effects of various parameters and of the level of uncertainty on the response of the cross-ply plates are examined and found to differ.

5.
In this paper, new concepts of balanced systems are proposed based on real engineering problems. The system under study consists of l groups, each with n functional sectors. The concept of balance difference is proposed for the first time. It is assumed that unbalanced systems are rebalanced either by forcing some working units into standby or by resuming the operation of some standby units. In addition, the case in which the forced-down units are subject to failure during standby is studied. Based on different balance cases and system failure criteria, two reliability models for balanced systems are developed. The proposed systems have widespread applications in the aerospace and military industries, such as airplane wing systems and unmanned aerial vehicles with balanced engine systems. A Markov process imbedding method is used to analyze the number of working units in each sector for each model. The finite Markov chain imbedding approach and the universal generating function technique are used to obtain the system reliability for the different models. Several case studies are finally presented to demonstrate the new models.

6.
We introduce a novel approach for analyzing the worst-case performance of first-order black-box optimization methods. We focus on smooth unconstrained convex minimization over the Euclidean space. Our approach relies on the observation that by definition, the worst-case behavior of a black-box optimization method is by itself an optimization problem, which we call the performance estimation problem (PEP). We formulate and analyze the PEP for two classes of first-order algorithms. We first apply this approach on the classical gradient method and derive a new and tight analytical bound on its performance. We then consider a broader class of first-order black-box methods, which among others, include the so-called heavy-ball method and the fast gradient schemes. We show that for this broader class, it is possible to derive new bounds on the performance of these methods by solving an adequately relaxed convex semidefinite PEP. Finally, we show an efficient procedure for finding optimal step sizes which results in a first-order black-box method that achieves best worst-case performance.
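The sketch below is not the performance estimation problem (PEP) itself, which the paper formulates as a semidefinite program; it only runs the fixed-step gradient method on an arbitrary smooth convex quadratic and compares the observed accuracy f(x_k) - f* with the classical bound L*||x0 - x*||^2 / (2k), the kind of worst-case guarantee that the PEP approach tightens. The quadratic, the starting point and the iteration counts are arbitrary assumptions.

```python
# Numeric illustration only (not the PEP SDP): gradient descent with step 1/L on a
# convex quadratic, compared against the classical O(1/k) bound.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 10))
Q = A.T @ A                        # positive definite Hessian (almost surely)
b = rng.normal(size=10)
L = np.linalg.eigvalsh(Q).max()    # smoothness constant

f = lambda x: 0.5 * x @ Q @ x - b @ x
x_star = np.linalg.solve(Q, b)
f_star = f(x_star)

x = np.zeros(10)
R2 = np.linalg.norm(x - x_star) ** 2
for k in range(1, 101):
    x = x - (1.0 / L) * (Q @ x - b)            # gradient step with h = 1/L
    gap, bound = f(x) - f_star, L * R2 / (2 * k)
    if k in (1, 10, 100):
        print(f"k={k:3d}  f(x_k)-f* = {gap:.3e}  classical bound = {bound:.3e}")
```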

7.
This paper is a follow-up to the author's previous paper on convex optimization. That paper began the process of adapting greedy-type algorithms from nonlinear approximation to find sparse solutions of convex optimization problems, modifying the three most popular greedy algorithms in nonlinear approximation in Banach spaces (the Weak Chebyshev Greedy Algorithm, the Weak Greedy Algorithm with Free Relaxation, and the Weak Relaxed Greedy Algorithm) for solving convex optimization problems. We continue to study sparse approximate solutions to convex optimization problems. In many engineering applications, researchers seek an approximate solution of an optimization problem as a linear combination of elements from a given system of elements, and there is increasing interest in building such sparse approximate solutions with different greedy-type algorithms. In this paper we concentrate on greedy algorithms that provide expansions, meaning that the approximant at the mth iteration equals the approximant from the previous, (m-1)th, iteration plus one element from the dictionary with an appropriate coefficient. The problem of greedy expansions of elements of a Banach space is well studied in nonlinear approximation theory. At first glance, the problem of expanding a given element and the problem of expansion within an optimization problem look very different; it turns out, however, that the same technique can be used for both. We show how the greedy-expansions technique developed in nonlinear approximation theory can be adjusted to find a sparse solution of an optimization problem, given as an expansion with respect to a given dictionary.
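A minimal sketch of a greedy expansion in the sense described above: at each iteration the approximant is the previous approximant plus one dictionary element with a line-search coefficient. This is a generic matching-pursuit-style expansion for a least-squares objective, not the specific weak greedy algorithms of the paper; the matrix, target and coordinate dictionary are toy assumptions.

```python
# Generic greedy expansion for a convex least-squares objective; toy data and dictionary.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100)
x_true[[5, 17, 42]] = [2.0, -1.5, 1.0]         # sparse target
b = A @ x_true

D = np.eye(100)                                # dictionary: the coordinate vectors
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2

x = np.zeros(100)
for m in range(25):
    grad = A.T @ (A @ x - b)
    j = int(np.argmax(np.abs(D.T @ grad)))     # dictionary element most aligned with -grad
    g = D[:, j]
    Ag = A @ g
    c = Ag @ (b - A @ x) / (Ag @ Ag)           # exact line search along g (least squares)
    x = x + c * g                              # expansion: previous approximant + c * g

print("objective after 25 expansions:", f(x))
print("largest coefficients at indices:", np.argsort(-np.abs(x))[:3])
```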

8.
An inexact Newton method for nonconvex equality constrained optimization
We present a matrix-free line search algorithm for large-scale equality constrained optimization that allows for inexact step computations. For strictly convex problems, the method reduces to the inexact sequential quadratic programming approach proposed by Byrd et al. [SIAM J. Optim. 19(1) 351–369, 2008]. For nonconvex problems, the methodology developed in this paper allows for the presence of negative curvature without requiring information about the inertia of the primal–dual iteration matrix. Negative curvature may arise from second-order information of the problem functions, but in fact exact second derivatives are not required in the approach. The complete algorithm is characterized by its emphasis on sufficient reductions in a model of an exact penalty function. We analyze the global behavior of the algorithm and present numerical results on a collection of test problems.
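For orientation only, the sketch below performs plain exact-step Newton iterations on the primal-dual (KKT) system of a tiny equality-constrained problem; this is the linear system that the paper's matrix-free method solves inexactly, but the sketch omits the line search, the exact penalty model and the negative-curvature handling, and the problem data are illustrative.

```python
# Exact-step Newton-KKT (SQP) iteration on a tiny equality-constrained toy problem.
import numpy as np

f      = lambda x: np.exp(x[0]) + x[1] ** 2
grad_f = lambda x: np.array([np.exp(x[0]), 2.0 * x[1]])
hess_L = lambda x, lam: np.diag([np.exp(x[0]), 2.0])   # constraint is linear, so no lam term
c      = lambda x: np.array([x[0] + x[1] - 1.0])
A      = lambda x: np.array([[1.0, 1.0]])              # constraint Jacobian

x, lam = np.zeros(2), np.zeros(1)
for it in range(20):
    H, J = hess_L(x, lam), A(x)
    kkt = np.block([[H, J.T], [J, np.zeros((1, 1))]])
    rhs = -np.concatenate([grad_f(x) + J.T @ lam, c(x)])
    step = np.linalg.solve(kkt, rhs)                   # exact step; the paper allows inexact solves
    x, lam = x + step[:2], lam + step[2:]
    if np.linalg.norm(rhs) < 1e-10:
        break

print("x approx", x, "| lambda approx", lam, "| feasibility:", c(x))
```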

9.
In this paper, a novel method for non-probabilistic convex modelling is developed whose bounds precisely encircle all the data on uncertain parameters extracted from practical engineering. The method is based on the traditional statistical method and a correlation analysis technique. Mean values and correlation coefficients of the uncertain parameters are first calculated using the information from all the given data. A simple yet effective optimization procedure is then introduced into the mathematical modelling of the uncertain parameters to obtain precise bounds; it minimizes the area of the convex model while still covering all the given data. The effective mathematical expressions of the convex models are thereby formulated. To test the prediction capability and generalization ability of the proposed convex modelling method, evaluation criteria, i.e. the volume ratio, the standard volume ratio, and the prediction accuracy, are established. The performance of the proposed method is systematically studied and compared with other existing competitive methods through these test standards. The results demonstrate the effectiveness and efficiency of the present method.
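A minimal sketch of the "smallest convex model that covers all samples" idea, using the standard minimum-volume (here minimum-area) enclosing ellipsoid formulation solved with CVXPY rather than the paper's correlation-based parameterization; the two-dimensional sample data are synthetic.

```python
# Minimum-area enclosing ellipse {x : ||A x + b|| <= 1} via CVXPY (classical formulation).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
pts = rng.multivariate_normal([10.0, 5.0], [[1.0, 0.6], [0.6, 0.8]], size=50)

A = cp.Variable((2, 2), PSD=True)           # ellipse shape matrix
b = cp.Variable(2)                          # ellipse center offset
constraints = [cp.norm(A @ p + b) <= 1 for p in pts]   # cover every sample
prob = cp.Problem(cp.Maximize(cp.log_det(A)), constraints)
prob.solve()

# area of {x : ||Ax + b|| <= 1} is pi / det(A); center is -A^{-1} b
A_val = A.value
print("ellipse area:", np.pi / np.linalg.det(A_val))
print("center:", -np.linalg.solve(A_val, b.value))
```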

10.
We present the AQUARS (A QUAsi-multistart Response Surface) framework for finding the global minimum of a computationally expensive black-box function subject to bound constraints. In a traditional multistart approach, the local search method is blind to the trajectories of the previous local searches. Hence, the algorithm might find the same local minima even if the searches are initiated from points that are far apart. In contrast, AQUARS is a novel approach that locates the promising local minima of the objective function by performing local searches near the local minima of a response surface (RS) model of the objective function. It ignores neighborhoods of fully explored local minima of the RS model and it bounces between the best partially explored local minimum and the least explored local minimum of the RS model. We implement two AQUARS algorithms that use a radial basis function model and compare them with alternative global optimization methods on an 8-dimensional watershed model calibration problem and on 18 test problems. The alternatives include EGO, GLOBALm, MLMSRBF (Regis and Shoemaker in INFORMS J Comput 19(4):497–509, 2007), CGRBF-Restart (Regis and Shoemaker in J Global Optim 37(1):113–135, 2007), and multi-level single linkage (MLSL) coupled with two types of local solvers: SQP and Mesh Adaptive Direct Search (MADS) combined with kriging. The results show that the AQUARS methods generally use fewer function evaluations to identify the global minimum or to reach a target value compared to the alternatives. In particular, they are much better than EGO and MLSL coupled to MADS with kriging on the watershed calibration problem and on 15 of the test problems.
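The sketch below is a bare-bones surrogate-assisted loop rather than AQUARS itself: it fits a radial basis function interpolant to the points evaluated so far, runs cheap local searches on the surrogate from a few starts, and spends a true function evaluation only on the best surrogate candidate. The objective (Rastrigin as a stand-in for an expensive black box), bounds and budgets are illustrative assumptions.

```python
# Bare-bones RBF surrogate loop (NOT the AQUARS algorithm); toy objective and budgets.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(5)
lo, hi = -5.0, 5.0

def expensive_f(x):                      # stand-in for a costly black-box function (Rastrigin)
    return np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x)) + 10.0 * x.size

X = rng.uniform(lo, hi, size=(15, 2))    # initial sample
y = np.array([expensive_f(x) for x in X])

for _ in range(30):
    rbf = RBFInterpolator(X, y, kernel="thin_plate_spline")
    surrogate = lambda x: float(rbf(np.atleast_2d(x))[0])
    starts = np.vstack([X[np.argmin(y)], rng.uniform(lo, hi, size=(5, 2))])
    cands = [minimize(surrogate, s, method="L-BFGS-B", bounds=[(lo, hi)] * 2) for s in starts]
    x_new = min(cands, key=lambda r: r.fun).x
    if np.min(np.linalg.norm(X - x_new, axis=1)) < 1e-6:
        x_new = rng.uniform(lo, hi, size=2)          # avoid duplicating a sample point
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_f(x_new))             # one true evaluation per cycle

print("best point:", X[np.argmin(y)], "best value:", y.min())
```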

11.
In practice, the performance of many engineering problems is defined by a complex implicit limit state function, and accurately approximating the failure probability with Monte Carlo simulation (MCS) is time-consuming and inefficient for such complex performance functions. The M5 model tree (M5Tree) is a robust approach for simulation and prediction that can handle complex implicit problems by dividing them into smaller sub-problems. To improve the efficiency of reliability analysis while keeping the approximated failure probability accurate, an efficient reliability method combining MCS and M5Tree is proposed to calibrate the performance function and estimate the failure probability. The simplicity and accuracy of the M5Tree metamodel in evaluating the actual performance function are investigated through five nonlinear, complex mathematical and structural reliability problems. The proposed MCS- and M5Tree-based reliability method reduces the computational effort of evaluating the performance function in reliability analysis, and the M5Tree significantly increases the efficiency of the reliability analysis while yielding accurate failure probabilities.

12.
Product design and selection using fuzzy QFD and fuzzy MCDM approaches
Quality function deployment (QFD) is a useful analysis tool in product design and development. To handle the uncertainty and imprecision in QFD, numerous researchers have applied fuzzy set theory to QFD and developed various fuzzy QFD models. Examining these models raises three issues. First, the existing studies focus on identifying important engineering characteristics and seldom explore the subsequent prototype-product selection. Secondly, previous studies usually use fuzzy-number algebraic operations to calculate the fuzzy sets in QFD, which can make the result deviate greatly from the correct value. Thirdly, few studies pay attention to competitive analysis in QFD, even though it can provide product developers with a large amount of valuable information. Aimed at these three issues, this study integrates fuzzy QFD and a prototype-product selection model into a product design and selection (PDS) approach. In the fuzzy QFD part, the α-cut operation is adopted to calculate the fuzzy set of each component, and competitive analysis and the correlations among engineering characteristics are also considered. In prototype-product selection, the engineering characteristics and the factors involved in product development are considered, and a fuzzy multi-criteria decision making (MCDM) approach is proposed to select the best prototype product. A case study illustrates the research steps of the proposed PDS method. The proposed method provides product developers with more useful information and more precise analysis results, so the PDS method can serve as a helpful decision aid in product design.
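To make the α-cut point concrete, here is a tiny sketch of α-cut interval arithmetic on triangular fuzzy numbers, the kind of operation the abstract contrasts with direct fuzzy-number algebraic operations in QFD; the fuzzy weights and relationship ratings are made up for illustration.

```python
# Alpha-cut interval arithmetic on triangular fuzzy numbers; made-up QFD-style weights/ratings.
import numpy as np

def alpha_cut(tri, alpha):
    """Interval [lower, upper] of a triangular fuzzy number (a, b, c) at level alpha."""
    a, b, c = tri
    return np.array([a + alpha * (b - a), c - alpha * (c - b)])

def interval_mul(i1, i2):
    prods = np.array([i1[0] * i2[0], i1[0] * i2[1], i1[1] * i2[0], i1[1] * i2[1]])
    return np.array([prods.min(), prods.max()])

# fuzzy customer-requirement weights and fuzzy relationship ratings (triangular)
weights = [(0.5, 0.7, 0.9), (0.1, 0.3, 0.5)]
ratings = [(3.0, 5.0, 7.0), (5.0, 7.0, 9.0)]

for alpha in (0.0, 0.5, 1.0):
    total = np.zeros(2)
    for w, r in zip(weights, ratings):
        total = total + interval_mul(alpha_cut(w, alpha), alpha_cut(r, alpha))
    print(f"alpha={alpha:.1f}: importance interval = [{total[0]:.2f}, {total[1]:.2f}]")
```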

13.
The paper presents a novel evolutionary technique constructed as an alternative to the standard support vector machine architecture. The approach adopts the learning strategy of the latter but aims to simplify and generalize its training by offering a transparent substitute for the initial black box. Contrary to the canonical technique, the evolutionary approach can always acquire the coefficients of the decision function explicitly, without any further constraints. Moreover, in order to converge, the evolutionary method does not require kernels to be positive (semi-)definite for nonlinear learning. Several potential structures, enhancements and additions are proposed, tested and confirmed using available benchmark test problems. Computational results show the validity of the new approach in terms of runtime, prediction accuracy and flexibility.

14.
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming.
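A minimal sketch of the framework's best-known special case: taking the atomic set to be the signed unit coordinate vectors makes the atomic norm the l1 norm, so recovering a sparse vector from a few generic linear measurements becomes an l1-minimization problem, solved here with CVXPY; the problem sizes and sparsity level are arbitrary assumptions.

```python
# Sparse recovery via l1 (atomic norm for the coordinate-vector atomic set); toy sizes.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
n, m, k = 100, 35, 4                         # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)     # generic (Gaussian) measurement map
b = A @ x_true

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)),  # atomic norm for the sparse-vector atomic set
                  [A @ x == b])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```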

15.
A small polygon is a convex polygon of unit diameter. We are interested in small polygons which have the largest area for a given number of vertices n. Many instances are already solved in the literature, namely for all odd n, and for n = 4, 6 and 8. Thus, for even n ≥ 10, instances of this problem remain open. Finding those largest small polygons can be formulated as nonconvex quadratic programming problems which can challenge state-of-the-art global optimization algorithms. We show that a recently developed technique for global polynomial optimization, based on a semidefinite programming approach to the generalized problem of moments and implemented in the public-domain Matlab package GloptiPoly, can successfully find largest small polygons for n = 10 and n = 12. Therefore this significantly improves existing results in the domain. When coupled with accurate convex conic solvers, GloptiPoly can provide numerical guarantees of global optimality, as well as rigorous guarantees relying on interval arithmetic.

16.
Generalized geometric programming problems with free variables (FGGP) arise widely in practical settings such as securities investment and engineering design. Using equivalent transformations together with convex underestimators of the objective and constraint functions, a convex relaxation method is proposed for finding global solutions of (FGGP) problems. Compared with existing methods, the proposed method can handle (FGGP) problems whose signomial terms contain more variables, and the resulting convex relaxation problem contains fewer variables and constraints, making it easier to implement computationally. Finally, numerical experiments show that the method is feasible and effective.

17.
Latent or unobserved phenomena pose a significant difficulty in data analysis as they induce complicated and confounding dependencies among a collection of observed variables. Factor analysis is a prominent multivariate statistical modeling approach that addresses this challenge by identifying the effects of (a small number of) latent variables on a set of observed variables. However, the latent variables in a factor model are purely mathematical objects that are derived from the observed phenomena, and they do not have any interpretation associated to them. A natural approach for attributing semantic information to the latent variables in a factor model is to obtain measurements of some additional plausibly useful covariates that may be related to the original set of observed variables, and to associate these auxiliary covariates to the latent variables. In this paper, we describe a systematic approach for identifying such associations. Our method is based on solving computationally tractable convex optimization problems, and it can be viewed as a generalization of the minimum-trace factor analysis procedure for fitting factor models via convex optimization. We analyze the theoretical consistency of our approach in a high-dimensional setting as well as its utility in practice via experimental demonstrations with real data.
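A minimal sketch of minimum-trace factor analysis, the convex program that the paper's approach generalizes: the sample covariance is split into a positive semidefinite low-rank part (latent-variable effects) and a nonnegative diagonal part by minimizing the trace of the low-rank term, here with CVXPY on synthetic data (two latent factors driving eight observed variables); all sizes and the rank-detection threshold are assumptions.

```python
# Minimum-trace factor analysis via convex optimization; synthetic 2-factor data.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(7)
p, k, n = 8, 2, 2000
loadings = rng.normal(size=(p, k))
factors = rng.normal(size=(n, k))
X = factors @ loadings.T + 0.3 * rng.normal(size=(n, p))   # observed data
S = np.cov(X, rowvar=False)                                 # sample covariance

L = cp.Variable((p, p), PSD=True)        # low-rank (latent) component
d = cp.Variable(p, nonneg=True)          # diagonal (idiosyncratic) variances
prob = cp.Problem(cp.Minimize(cp.trace(L)),
                  [L + cp.diag(d) == S])
prob.solve()

eigs = np.linalg.eigvalsh(L.value)
print("estimated number of factors (rough eigenvalue count):",
      int(np.sum(eigs > 0.1 * eigs.max())))
```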

18.
Fractional calculus has been used to model physical and engineering processes that are best described by fractional differential equations, so reliable and efficient techniques for solving fractional differential equations are needed. Here we construct the operational matrix of the fractional derivative of order α in the Caputo sense using linear B-spline functions. The main advantage of this technique is that it reduces such problems to systems of algebraic equations, which can then be solved directly. The method is applied to two types of fractional differential equations, linear and nonlinear. Illustrative examples are included to demonstrate the validity and applicability of the technique presented in this paper.

19.
The paper shows that the global resolution of a general convex quadratic program with complementarity constraints (QPCC), possibly infeasible or unbounded, can be accomplished in finite time. The method constructs a minmax mixed integer formulation by introducing finitely many binary variables, one for each complementarity constraint. Based on the primal-dual relationship of a pair of convex quadratic programs and on a logical Benders scheme, an extreme ray/point generation procedure is developed, which relies on valid satisfiability constraints for the integer program. To improve this scheme, we propose a two-stage approach wherein the first stage solves the mixed integer quadratic program with pre-set upper bounds on the complementarity variables, and the second stage solves the program outside this bounded region by the Benders scheme. We report computational results with our method. We also investigate the addition of a penalty term y^T D w to the objective function, where y and w are the complementary variables and D is a nonnegative diagonal matrix. The matrix D can be chosen effectively by solving a semidefinite program, ensuring that the objective function remains convex. The addition of the penalty term can often reduce the overall runtime by at least 50%. We report preliminary computational testing on a QP relaxation method which can be used to obtain better lower bounds from infeasible points; this method could be incorporated into a branching scheme. By combining the penalty method and the QP relaxation method, more than 90% of the gap can be closed for some QPCC problems.
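As a toy illustration of why finitely many binary choices resolve a QPCC, the sketch below enumerates all 2^m complementarity patterns of a tiny convex QPCC, fixing one side of each pair to zero, solving each resulting convex QP with CVXPY and keeping the best feasible value; this brute force stands in for, and is not, the paper's logical Benders scheme, and the problem data are made up.

```python
# Brute-force QPCC: enumerate complementarity patterns and solve one convex QP per pattern.
import itertools
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(8)
n, m = 3, 2                                   # continuous vars x, complementarity pairs (y, w)
Q = np.eye(n)                                 # convex objective 0.5 x'Qx + c'x + q'y
c = np.array([-2.0, 1.0, -1.0])
q = np.array([1.0, -0.5])
M = rng.uniform(0.5, 1.5, size=(m, n))        # w = M x + N y + r defines the paired variable
N = np.eye(m)
r = np.array([-1.0, 0.5])

best_val, best_sol = np.inf, None
for pattern in itertools.product([0, 1], repeat=m):      # which side of each pair is zero
    x, y = cp.Variable(n), cp.Variable(m, nonneg=True)
    w = M @ x + N @ y + r
    cons = [w >= 0]
    cons += [y[i] == 0 if side == 0 else w[i] == 0 for i, side in enumerate(pattern)]
    prob = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, Q) + c @ x + q @ y), cons)
    prob.solve()
    if prob.status == "optimal" and prob.value < best_val:
        best_val, best_sol = prob.value, (x.value, y.value)

print("global value over all complementarity pieces:", best_val)
```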

20.
The anti-optimization technique, on the one hand, represents an alternative and complement to traditional probabilistic methods and, on the other hand, is a generalization of the mathematical theory of interval analysis. In this study, in terms of interval analysis (interval mathematics), the arithmetic operations and the partial order relation of the anti-optimization technique are defined, and the convex-model variables and the convex-model extension function of convex models are introduced. A comparison of the Lagrange multiplier method with the convex-model extension method for evaluating the region of static displacements of structures with uncertain-but-bounded parameters shows that the bounds on the static displacement yielded by the Lagrange multiplier method of convex models are tighter than those produced by the convex-model extension.
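A minimal interval-arithmetic sketch of the kind of extension discussed above: uncertain-but-bounded parameters are propagated through a simple response function by natural interval extension, and the result is compared with brute-force sampling of the parameter box; the response function and bounds are illustrative, not the structural example of the paper.

```python
# Natural interval extension of a toy response function vs. brute-force box sampling.
import numpy as np

class Interval:
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __repr__(self): return f"[{self.lo:.4f}, {self.hi:.4f}]"

def displacement(k1, k2, f):          # toy response; works for plain numbers or Intervals
    return f * (k1 + k2)

# uncertain-but-bounded parameters
k1, k2, f = Interval(0.9, 1.1), Interval(1.8, 2.2), Interval(0.95, 1.05)
print("interval extension:", displacement(k1, k2, f))

# brute-force check by sampling the box (an inner estimate of the true range)
rng = np.random.default_rng(9)
samples = [displacement(a, b, c)
           for a, b, c in rng.uniform([0.9, 1.8, 0.95], [1.1, 2.2, 1.05], size=(20000, 3))]
print("sampled range:      [%.4f, %.4f]" % (min(samples), max(samples)))
```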
