Similar Literature
A total of 20 similar documents were retrieved.
1.
The falsification of a hybrid system aims at finding trajectories that violate a given safety property. This is a challenging problem, and the practical applicability of current falsification algorithms still suffers from their high time complexity. In contrast to falsification, verification algorithms aim at providing guarantees that no such trajectories exist. Recent symbolic reachability techniques are capable of efficiently computing linear constraints that enclose all trajectories of the system with reasonable precision. In this paper, we leverage the power of symbolic reachability algorithms to improve the scalability of falsification techniques. Recent approaches to falsification reduce the problem to a nonlinear optimization problem. We propose to reduce the search space of the optimization problem by adding linear state constraints obtained with a reachability algorithm. An empirical study of how varying abstractions during symbolic reachability analysis affect the performance of solving a falsification problem is presented. In addition, for solving a falsification problem, we propose an alternating minimization algorithm that solves a linear programming problem and a non-linear programming problem in alternation finitely many times. We showcase the efficacy of our algorithms on a number of standard hybrid systems benchmarks, demonstrating the performance increase and the number of falsifiable instances.
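The reduction described above can be pictured as a small constrained optimization problem: a robustness function is minimized subject to linear state constraints of the kind a reachability tool might export. The sketch below is illustrative only; the robustness function rho, the constraint matrix A and the vector b are invented stand-ins, not the paper's benchmarks or tooling.

```python
# Hedged sketch: falsification as constrained nonlinear optimization.
import numpy as np
from scipy.optimize import minimize

def rho(x):
    """Hypothetical robustness of the safety property for parameter vector x.
    A negative value would indicate a falsifying trajectory."""
    return 0.5 - np.sin(x[0]) * np.exp(-x[1] ** 2)

# Linear state constraints A x <= b, as they might be exported from a
# symbolic reachability tool to prune the search space.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([3.0, 2.0, 1.0])

constraints = [{"type": "ineq", "fun": lambda x: b - A @ x}]  # b - A x >= 0
result = minimize(rho, x0=np.array([0.5, 0.5]), method="SLSQP",
                  constraints=constraints)
print(result.x, result.fun)  # fun < 0 would suggest a counterexample
```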

2.
Clustering is an important problem in data mining. It can be formulated as a nonsmooth, nonconvex optimization problem. For most global optimization techniques this problem is challenging even in medium-sized data sets. In this paper, we propose an approach that allows one to apply local methods of smooth optimization to solve clustering problems. We apply an incremental approach to generate starting points for cluster centers, which enables us to deal with the nonconvexity of the problem. The hyperbolic smoothing technique is applied to handle the nonsmoothness of the clustering problems and to make it possible to apply smooth optimization algorithms to solve them. Results of numerical experiments with eleven real-world data sets and a comparison with state-of-the-art incremental clustering algorithms demonstrate that smooth optimization algorithms combined with the incremental approach are a powerful alternative to existing clustering algorithms.
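As a rough illustration of replacing the nonsmooth min in the clustering objective with a smooth surrogate so that a standard smooth optimizer applies: the sketch below uses a generic log-sum-exp soft-min in place of the paper's hyperbolic smoothing formula, and reduces the incremental initialization to simply seeding with a previously found center plus one new point; the data and parameter choices are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def smoothed_objective(centers_flat, data, k, tau=1.0):
    """Smooth surrogate of (1/n) * sum_i min_j ||a_i - x_j||^2."""
    centers = centers_flat.reshape(k, -1)
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (n, k)
    soft_min = -tau * logsumexp(-d2 / tau, axis=1)                    # smooth stand-in for min over j
    return soft_min.mean()

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

# simplified incremental seeding: keep the 1-cluster solution, add one new center
init = np.vstack([data.mean(axis=0), data[rng.integers(len(data))]])
res = minimize(smoothed_objective, init.ravel(), args=(data, 2), method="L-BFGS-B")
print(res.x.reshape(2, -1))       # two centers, near (0, 0) and (3, 3)
```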

3.
Branch-and-bound algorithms based on interval arithmetic make it possible to solve exactly continuous (as well as mixed) non-linear and non-convex global optimization problems. However, their intrinsic exponential time complexity prevents them from solving some quite large problems. The idea proposed in this paper is to limit the memory available during the computations of such a global optimization code in order to find efficient feasible solutions. In this way, we introduce a metaheuristic framework for developing new heuristic global optimization algorithms based on an exact code. We show in this paper, under a small assumption about breadth-first sorting of the elements in the data structure, that the time complexity of such metaheuristic algorithms becomes polynomial instead of exponential as for the exact code. In order to validate our metaheuristic approach, constrained global optimization problems from the COCONUT library were solved using a heuristic that certifies an enclosure of the global minimum value. The objective is not to solve the problem completely or to find a better solution, but to determine the highest precision that can be guaranteed reliably with the available memory.
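A toy rendering of the memory-limited branch-and-bound idea is sketched below: boxes that no longer fit in a fixed-capacity work list are discarded, but their lower bounds are remembered, so the run still reports a (possibly wide) enclosure of the global minimum value. The one-dimensional objective and the hand-written interval bounds are placeholders for a proper interval-arithmetic library and for problems such as those in the COCONUT library.

```python
import heapq

def bounds(lo, hi):
    """Interval enclosure of f(x) = x**4 - 3*x**2 over [lo, hi] (toy example)."""
    def sq(a, b):
        m = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
        return m, max(a * a, b * b)
    s_lo, s_hi = sq(lo, hi)        # enclosure of x**2
    q_lo, q_hi = sq(s_lo, s_hi)    # enclosure of x**4
    return q_lo - 3.0 * s_hi, q_hi - 3.0 * s_lo

def limited_memory_bnb(lo, hi, capacity=32, iters=200):
    best_ub = min(bounds(lo, lo)[1], bounds(hi, hi)[1])   # upper bound from point evaluations
    dropped_lb = float("inf")                              # bounds of boxes we had to discard
    heap = [(bounds(lo, hi)[0], lo, hi)]
    for _ in range(iters):
        if not heap:
            break
        lb, a, b = heapq.heappop(heap)
        if lb >= best_ub:
            continue                                       # box cannot improve on the incumbent
        mid = 0.5 * (a + b)
        best_ub = min(best_ub, bounds(mid, mid)[1])
        for c, d in ((a, mid), (mid, b)):
            sub_lb = bounds(c, d)[0]
            if sub_lb < best_ub:
                heapq.heappush(heap, (sub_lb, c, d))
        if len(heap) > capacity:                           # memory limit reached
            heap.sort()                                    # a sorted list is a valid min-heap
            dropped_lb = min([dropped_lb] + [x[0] for x in heap[capacity:]])
            heap = heap[:capacity]
    lower = min([best_ub, dropped_lb] + [x[0] for x in heap])
    return lower, best_ub                                  # enclosure of the global minimum value

print(limited_memory_bnb(-2.0, 2.0))                       # true minimum is -2.25
```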

4.
In this paper we consider a portfolio optimization problem where the underlying asset returns are distributed as a mixture of two multivariate Gaussians; these two Gaussians may be associated with “distressed” and “tranquil” market regimes. In this context, the Sharpe ratio needs to be replaced by other non-linear objective functions which, in the case of many underlying assets, lead to optimization problems which cannot be easily solved with standard techniques. We obtain a geometric characterization of efficient portfolios, which reduces the complexity of the portfolio optimization problem.

5.
Optimization over geodesics for exact principal geodesic analysis
In fields ranging from computer vision to signal processing and statistics, increasing computational power allows a move from classical linear models to models that incorporate non-linear phenomena. This shift has created interest in computational aspects of differential geometry, and solving optimization problems that incorporate non-linear geometry constitutes an important computational task. In this paper, we develop methods for numerically solving optimization problems over spaces of geodesics using numerical integration of Jacobi fields and second order derivatives of geodesic families. As an important application of this optimization strategy, we compute exact Principal Geodesic Analysis (PGA), a non-linear version of the PCA dimensionality reduction procedure. By applying the exact PGA algorithm to synthetic data, we exemplify the differences between the linearized and exact algorithms caused by the non-linear geometry. In addition, we use the numerically integrated Jacobi fields to determine sectional curvatures and provide upper bounds for injectivity radii.

6.
This paper presents a dynamic genetic algorithm based on a continuous neural network, which is intrinsically the steepest descent method for constrained optimization problems. The proposed algorithm combines the local searching ability of steepest descent methods with the global searching ability of genetic algorithms. Genetic algorithms are used to choose each initial point of the steepest descent method so that all the initial points can be searched intelligently. The steepest descent method is employed to determine the fitness of the genetic algorithm so that good initial points can be selected. The proposed algorithm is motivated theoretically and biologically. It can be used to solve non-convex optimization problems that are quadratic or even more strongly non-linear. Compared with standard genetic algorithms, it improves the precision of the solution while decreasing the search scale. In contrast to the ordinary steepest descent method, it can obtain a globally sub-optimal solution while lessening the complexity of calculation.
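The division of labour described here can be sketched in a few lines: a genetic algorithm proposes starting points, a handful of steepest-descent steps refine each of them, and the refined objective value serves as the fitness. The multimodal test function, the descent step size and the GA operators below are illustrative choices rather than the paper's neural-network formulation.

```python
import numpy as np

def f(x):                       # multimodal, non-convex test objective
    return np.sum(x ** 2 - 2.0 * np.cos(3.0 * x))

def grad_f(x):
    return 2.0 * x + 6.0 * np.sin(3.0 * x)

def descend(x, steps=50, lr=0.02):
    for _ in range(steps):
        x = x - lr * grad_f(x)  # steepest-descent refinement of a start point
    return x

rng = np.random.default_rng(1)
pop = rng.uniform(-4, 4, size=(30, 2))            # GA population of start points
for generation in range(40):
    refined = np.array([descend(x) for x in pop])
    fitness = np.array([f(x) for x in refined])   # local search defines the fitness
    parents = pop[np.argsort(fitness)[:10]]       # selection of the best start points
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        children.append(0.5 * (a + b) + rng.normal(0, 0.3, size=2))  # crossover + mutation
    pop = np.array(children)

best = min((descend(x) for x in pop), key=f)
print(best, f(best))            # expected near the global minimum at the origin
```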

7.
The L1-median is a robust estimator of multivariate location with good statistical properties. Several algorithms for computing the L1-median are available. Problem-specific algorithms can be used, but also general optimization routines. The aim is to compare different algorithms with respect to their precision and runtime. This is possible because all considered algorithms have been implemented in a standardized manner in the open source environment R. In most situations, the algorithm based on the optimization routine NLM (non-linear minimization) clearly outperforms other approaches. Its low computation time makes applications for large and high-dimensional data feasible.
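As a minimal sketch of the general-optimization-routine approach (with scipy's BFGS standing in for R's nlm, which the comparison favours): the L1-median is simply the point minimizing the sum of Euclidean distances to the observations.

```python
import numpy as np
from scipy.optimize import minimize

def l1_median(X):
    """Point minimizing the sum of Euclidean distances to the rows of X."""
    objective = lambda m: np.sqrt(((X - m) ** 2).sum(axis=1)).sum()
    res = minimize(objective, X.mean(axis=0), method="BFGS")
    return res.x

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(8, 1, (20, 3))])  # ~10% outliers
print("L1-median:", l1_median(X))   # stays near the bulk of the data
print("mean     :", X.mean(axis=0)) # pulled towards the outliers
```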

8.
O. Avci, W. Ehlers, PAMM 2006, 6(1): 351-352
The simulation of the deformation process of landsliding requires knowledge of the very complex behaviour of granular materials, e.g. sand. Triaxial experiments on sand show highly non-linear elasto-plastic material behaviour. It is therefore necessary to use a yield criterion, e.g. a single-surface yield criterion with isotropic hardening and a non-associated plastic potential, which adequately captures the material properties. This kind of material behaviour can be described by an elasto-plastic material law within the framework of the Theory of Porous Media, which is implemented in the FE tool PANDAS. By means of the data of Hostun sand, the material parameters of the single-surface yield criterion are determined using an optimization algorithm, namely Sequential Quadratic Programming (SQP), a gradient-based optimization method, which is coupled with PANDAS. Using these optimized material parameters, a simulation of an initial boundary-value problem of landsliding is presented. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
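The parameter-identification step can be pictured as follows: a gradient-based SQP solver minimizes the misfit between a forward model and triaxial measurements. In the sketch, scipy's SLSQP stands in for the SQP code coupled with PANDAS, and the exponential hardening law, its parameters and the synthetic "measurements" are invented placeholders.

```python
import numpy as np
from scipy.optimize import minimize

strain = np.linspace(0.0, 0.1, 20)
measured_stress = (150.0 * (1.0 - np.exp(-40.0 * strain))
                   + np.random.default_rng(2).normal(0, 2, 20))   # synthetic triaxial data

def forward_model(params, eps):
    q_max, kappa = params          # hypothetical hardening parameters
    return q_max * (1.0 - np.exp(-kappa * eps))

def misfit(params):
    return np.sum((forward_model(params, strain) - measured_stress) ** 2)

res = minimize(misfit, x0=[100.0, 10.0], method="SLSQP",
               bounds=[(1.0, 500.0), (1.0, 200.0)])
print("identified parameters:", res.x)   # expected near (150, 40)
```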

9.
We consider the challenge of numerically comparing optimization algorithms that employ random-restarts under the assumption that only limited test data is available. We develop a bootstrapping technique to estimate the incumbent solution of the optimization problem over time as a stochastic process. The asymptotic properties of the estimator are examined and the approach is validated by an out-of-sample test. Finally, three methods for comparing the performance of different algorithms based on the estimator are proposed and demonstrated with data from a real-world optimization problem.
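A stripped-down version of the bootstrap idea is sketched below: the recorded objective values of a limited number of restarts are resampled with replacement to estimate how the incumbent (best value found so far) evolves as more restarts are performed. Equal restart runtimes and the log-normal test data are simplifying assumptions not made in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
observed = rng.lognormal(mean=1.0, sigma=0.4, size=25)   # objective values of 25 recorded restarts

def bootstrap_incumbent(observed, horizon=100, n_boot=2000):
    paths = np.empty((n_boot, horizon))
    for i in range(n_boot):
        draws = rng.choice(observed, size=horizon, replace=True)
        paths[i] = np.minimum.accumulate(draws)          # incumbent after each restart
    return paths.mean(axis=0), np.percentile(paths, [5, 95], axis=0)

mean_curve, (lo, hi) = bootstrap_incumbent(observed)
print("expected incumbent after 10 restarts:", mean_curve[9])
print("90% band:", lo[9], hi[9])
```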

10.
Portfolio optimization is an important aspect of decision support in investment management. Realistic portfolio optimization, in contrast to simplistic mean-variance optimization, is a challenging problem, because it requires determining a set of optimal solutions with respect to multiple objectives, where the objective functions are often multimodal and non-smooth. Moreover, the objectives are subject to various constraints, many of which are typically non-linear and discontinuous. Conventional optimization methods, such as quadratic programming, cannot cope with these realistic problem properties. A valuable alternative is stochastic search heuristics, such as simulated annealing or evolutionary algorithms. We propose a new multiobjective evolutionary algorithm for portfolio optimization, which we call DEMPO (Differential Evolution for Multiobjective Portfolio Optimization). In our experimentation, we compare DEMPO with quadratic programming and another well-known evolutionary algorithm for multiobjective optimization called NSGA-II. The main advantage of DEMPO is its ability to tackle a portfolio optimization task without simplifications, while obtaining very satisfying results in reasonable runtime.

11.
This paper introduces Empirically Adjusted Greedy Heuristics (EAGH), a procedure for designing greedy algorithms for a given combinatorial optimization problem, and illustrates how EAGH works with an application to minimizing the makespan in the permutation flow-shop problem. The basic idea behind EAGH is that a greedy heuristic can be seen as a member of an infinite set of heuristics, this set being defined by a function that depends on several parameters. Each set of parameter values corresponds to a specific greedy heuristic. The best element of the set, for a training set of instances of the problem, is then found by applying a non-linear optimization algorithm to a function that measures the quality of the solutions obtained for the instances of the training set, and which depends on the parameters that characterize each specific algorithm. EAGH makes it possible to improve known heuristics or to find good new ones.
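The EAGH idea can be illustrated with a tiny parameterized dispatching rule for the permutation flow shop: the rule's weights are tuned by a derivative-free optimizer so as to minimize the average makespan over a training set of random instances. The particular priority function, instance sizes and optimizer below are illustrative assumptions, not the heuristic family studied in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def makespan(p, order):
    """p[j, m]: processing time of job j on machine m; order: job permutation."""
    completion = np.zeros(p.shape[1])
    for j in order:
        for m in range(p.shape[1]):
            start = max(completion[m], completion[m - 1] if m > 0 else 0.0)
            completion[m] = start + p[j, m]
    return completion[-1]

def greedy_sequence(p, theta):
    # priority of each job: weighted mix of total work and first-machine work
    priority = theta[0] * p.sum(axis=1) + theta[1] * p[:, 0]
    return np.argsort(-priority)            # schedule high-priority jobs first

rng = np.random.default_rng(4)
training = [rng.integers(1, 100, size=(20, 5)) for _ in range(30)]   # 30 training instances

def avg_makespan(theta):
    return np.mean([makespan(p, greedy_sequence(p, theta)) for p in training])

res = minimize(avg_makespan, x0=[1.0, 0.0], method="Nelder-Mead")
print("tuned parameters:", res.x, "average makespan:", res.fun)
```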

12.
Mixed-model U-shaped assembly line balancing based on GA-SA
In a JIT production system, the mixed-model U-shaped assembly line is a flexible system capable of meeting diversified market demands. This paper jointly considers two factors, the assignment of work elements and the launch sequencing of products, establishes a balancing model for the mixed-model U-shaped assembly line, presents a balancing method based on artificial-intelligence algorithms, and studies the balancing problem of the mixed-model U-shaped assembly line from the perspective of global optimization.

13.
A non-linear area traffic control system with limited capacity is considered in this paper. Optimal signal settings and link capacity expansions can be determined while trip distribution and network flow are in equilibrium. This problem can be formulated as a non-linear mathematical program with equilibrium constraints. For the objective function, a non-linear constrained optimization program for signal settings and link capacity expansion is determined. For the constraint set, the elastic user-equilibrium traffic assignment obeying Wardrop’s first principle can be formulated as a variational inequality. Since the constrained optimization problem is non-convex, only local optima can be obtained. In this paper, a novel algorithm using a non-smooth trust region approach is proposed. Numerical tests are performed using a real city network and various example test networks, in which the effectiveness and robustness of the proposed method are confirmed in comparison with other well-known solution methods.

14.
This paper presents a multiobjective model for crop planning in agriculture. The approach is based on portfolio theory. The model takes into account weather risks, market risks and environmental risks. Input data include historical land productivity data for various crops, soil types and yield response to fertilizer/pesticide application. Several environmental levels for the application of fertilizers/pesticides, and the monetary penalties for exceeding these levels, are also considered. Starting from the multiobjective model we formulate several single-objective optimization problems: the minimum environmental risk problem, the maximum expected return problem and the minimum financial risk problem. We prove that the minimum environmental risk problem is equivalent to a mixed integer problem with a linear objective function. Two numerical results for the minimum environmental risk problem are presented.

15.
This paper presents a new procedure that extends genetic algorithms from their traditional domain of optimization to a fuzzy ranking strategy for selecting efficient portfolios of restricted cardinality. The uncertainty of the returns on a given portfolio is modeled using fuzzy quantities, and a downside risk function is used to describe the investor's aversion to risk. The fitness functions are based both on the value and on the ambiguity of the trapezoidal fuzzy number which represents the uncertainty of the return. The soft-computing approach allows us to consider uncertainty and vagueness in databases and also to incorporate subjective characteristics into the portfolio selection problem. We use a data set from the Spanish stock market to illustrate the performance of our approach to the portfolio selection problem.

16.
A scenario tree is an efficient way to represent a stochastic data process in decision problems under uncertainty. This paper addresses how to efficiently generate appropriate scenario trees. A knowledge-based scenario tree generation method is proposed; the new method is further improved by accounting for subjective judgements or expectations about the random future. Compared with existing approaches, complicated mathematical models and time-consuming estimation, simulation and optimization problem solution are avoided in our knowledge-based algorithms, and large-scale scenario trees can be quickly generated. To show the advantages of the new algorithms, a multiperiod portfolio selection problem is considered, and a dynamic risk measure is adopted to control the intermediate risk, which is superior to the single-period risk measure used in the existing literature. A series of numerical experiments are carried out using real trading data from the Shanghai stock market. The results show that the scenarios generated by our algorithms properly represent the underlying distribution; our algorithms have high performance, for example, a scenario tree with up to 10,000 scenarios can be generated in less than half a minute. The applications in the multiperiod portfolio management problem demonstrate that our scenario tree generation methods are stable, and the optimal trading strategies obtained with the generated scenario tree are reasonable, efficient and robust. Copyright © 2013 John Wiley & Sons, Ltd.

17.
The k-means problem is one of the classical problems in computer science and combinatorial optimization. As the most widely studied and most easily understood clustering method, k-means clustering is popular in data mining. The k-means problem can be described as follows: given an observation set of n elements, where each observation is a d-dimensional real vector, the goal is to partition the n observations into k (≤ n) sets so as to minimize the sum of squared distances from all points to the centers of their corresponding clusters, where the center of a cluster is the mean of all observations in that set. The k-means problem is NP-hard in theory, but efficient heuristic algorithms exist, and it is widely applied in practical settings such as market segmentation, machine vision, geostatistics, astronomy and agriculture. As the k-means problems encountered in practice become more complex and the data volumes larger, further research is still needed. This paper surveys classical algorithms for the k-means problem and many of its variants and generalizations, and summarizes several open questions in k-means research.
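A short sketch of the problem statement and of the standard Lloyd heuristic (the simple local-search method commonly used in practice) is given below; the synthetic data are for illustration only.

```python
import numpy as np

def assign(X, centers):
    """Index of the nearest center for each observation."""
    return np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)

def lloyd(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = assign(X, centers)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    labels = assign(X, centers)
    cost = ((X - centers[labels]) ** 2).sum()   # sum of squared distances to cluster means
    return centers, labels, cost

X = np.vstack([np.random.default_rng(1).normal(c, 0.5, (100, 2)) for c in (0, 5, 10)])
centers, labels, cost = lloyd(X, k=3)
print(centers, cost)
```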

18.
By optimizing some model parameters, a reduced-form model of the Atlantic thermohaline circulation is fitted to data provided by a comprehensive climate model. Different techniques to compute stationary states of the reduced-form model are discussed. The fitting problem is formulated as a weighted least-squares optimization problem with non-linear constraints that enforce a proper representation of the present climate. Possible formulations of the optimization problem are presented and compared with respect to their numerical treatment. The technique of algorithmic or automatic differentiation (AD) is used to provide gradient information that can be used in the optimization. The application of the AD software is described in detail and numerical results are given.

19.
Combined heat and power (CHP) production is an important energy production technology that can yield much higher total energy efficiency than separate heat and power generation. In CHP production, the heat and power production follows a joint characteristic, which means that the production planning must be done in coordination. Cost-efficient operation of a CHP system can be planned by using an optimization model. A long-term planning model decomposes into thousands of hourly models. Earlier, in the regulated electric power market, the planning problem was symmetrically driven by heat and power demand. The liberalization of the power market has created an asymmetrical planning problem, where heat production responds to the demand and power production to the volatile market price. In this paper, we utilize this asymmetry to develop novel envelope-based dual algorithms for solving the hourly CHP models efficiently. The basic idea is to transform the three-dimensional characteristic operating region for heat and power production of each CHP plant into a two-dimensional envelope by taking the power price as a parameter. Then the envelopes of each plant are used for looking up the optimal solution rapidly. We propose two versions of the algorithm: the on-line envelope construction algorithm (ECON) where the envelopes are constructed for each hour based on the power price and the off-line envelope construction algorithm (ECOFF) where envelopes are pre-computed for all different power price ranges. We derive the theoretical time complexity of the two algorithms and compare their performance empirically with realistic test models against the ILOG CPLEX solver and the Power Simplex (PS) algorithm. PS is an extremely efficient specialized primal algorithm developed for the symmetrical CHP planning problem under the regulated market. On average, when reusing previous basic solutions, ECON is 603 times faster than CPLEX and 1.3 times faster than PS. ECOFF is 1860 times faster than CPLEX and four times faster than PS.
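The envelope reduction can be illustrated on a single plant: for a given power price, each extreme point (heat, power, cost) of the characteristic operating region is projected to (heat, cost - price * power), and the lower convex envelope of these two-dimensional points gives the cheapest attainable net cost at every heat level. The plant data below are invented, and the ECON/ECOFF bookkeeping (reusing envelopes across hours and price ranges) is omitted.

```python
import numpy as np

# hypothetical extreme points of one plant's operating region: (heat, power, cost)
plant_points = np.array([
    [  0.0,   0.0,    0.0],
    [  0.0,  60.0,  900.0],
    [ 50.0,  55.0,  700.0],
    [100.0,  40.0,  800.0],
    [100.0,   0.0,  500.0],
])

def envelope(points, power_price):
    """Lower convex envelope of (heat, cost - price * power) over the heat axis."""
    best = {}                                      # cheapest net cost per heat level
    for heat, power, cost in points:
        net = cost - power_price * power
        best[heat] = min(net, best.get(heat, float("inf")))
    pts = sorted(best.items())
    hull = []
    for h, c in pts:                               # monotone-chain lower hull
        while len(hull) >= 2:
            (h1, c1), (h2, c2) = hull[-2], hull[-1]
            if (c2 - c1) * (h - h1) >= (c - c1) * (h2 - h1):
                hull.pop()                         # middle point lies above the chord
            else:
                break
        hull.append((h, c))
    return hull

def cheapest_net_cost(env, heat_demand):
    heats, costs = zip(*env)
    return float(np.interp(heat_demand, heats, costs))

for price in (5.0, 20.0):                          # the envelope changes with the power price
    env = envelope(plant_points, price)
    print(price, env, cheapest_net_cost(env, 70.0))
```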

20.
Artificial Neural Networks (ANNs) are well known for their ability to capture non-linear trends in scientific data. However, the heuristic nature of estimating the parameters associated with ANNs has prevented their evolution into efficient surrogate models. Further, the dearth of optimal training-size estimation algorithms for the data-greedy ANNs has resulted in their overfitting. Therefore, through this work, we aim to contribute a novel ANN building algorithm called TRANSFORM, aimed at simultaneous and optimal estimation of ANN architecture, training size and transfer function. TRANSFORM is integrated with three standalone Sobol-sampling-based training-size determination algorithms which incorporate the concepts of hypercube sampling and optimal space filling. TRANSFORM was used to construct ANN surrogates for a highly non-linear, industrially validated continuous casting model from a steel plant. Multiobjective optimization of the casting model to ensure maximum productivity, maximum energy saving and minimum operational cost was performed by the ANN-assisted Non-dominated Sorting Genetic Algorithm (NSGA-II). The surrogate-assisted optimization was found to be 13 times faster than conventional optimization, leading to its online implementation. Simple operator rules were deciphered from the optimal solutions using Pareto front characterization and K-means clustering for optimal functioning of the casting plant. Comprehensive studies on (a) computational time comparisons between the proposed training-size estimation algorithms and (b) predictability comparisons between the constructed ANNs and state-of-the-art statistical models (Kriging interpolators) add to the other highlights of this work. TRANSFORM takes a physics-based model as the only input and provides parsimonious ANNs as outputs, making it generic across all scientific domains.
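One ingredient of the approach can be sketched as follows: Sobol (quasi-random) sampling of the physics model's input space, with the training size grown until the surrogate's held-out error stops improving. The toy "physics model", the network configuration and the stopping rule below are placeholders, not TRANSFORM itself.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor

def physics_model(X):                      # placeholder for the casting model
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 0] * X[:, 2]

lower, upper = [0.0, 0.0, 0.0], [1.0, 2.0, 5.0]
X_test = qmc.scale(qmc.Sobol(d=3, scramble=True, seed=0).random(256), lower, upper)
y_test = physics_model(X_test)

prev_err = np.inf
for m in range(5, 11):                     # training sizes 32, 64, ..., 1024
    sampler = qmc.Sobol(d=3, scramble=True, seed=1)
    X_train = qmc.scale(sampler.random_base2(m), lower, upper)
    y_train = physics_model(X_train)
    ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    err = np.mean((ann.fit(X_train, y_train).predict(X_test) - y_test) ** 2)
    print(2 ** m, "training points -> test MSE", err)
    if err > 0.95 * prev_err:              # little improvement: stop growing the set
        break
    prev_err = err
```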
