Similar Documents
20 similar documents found (search time: 15 ms)
1.
Finding the set of nearest neighbors for a query point of interest appears in a variety of algorithms for machine learning and pattern recognition. Examples include k-nearest-neighbor classification, information retrieval, case-based reasoning, manifold learning, and nonlinear dimensionality reduction. In this work, we propose a new approach for determining a distance metric from the data for finding such neighboring points. For a query point of interest, our approach learns a generalized quadratic distance (GQD) metric based on the statistical properties of a “small” neighborhood around the point of interest. The locally learned GQD metric captures information such as the density, curvature, and intrinsic dimensionality of the points falling in this particular neighborhood. Unfortunately, learning the GQD parameters under such a local learning mechanism is a challenging problem with a high computational overhead. To address these challenges, we estimate the GQD parameters using the minimum volume covering ellipsoid (MVCE) for a set of points. The advantage of the MVCE is twofold. First, the MVCE together with the local learning approach approximates the functionality of a well-known robust estimator for covariance matrices. Second, computing the MVCE is a convex optimization problem which, in addition to having a unique global solution, can be efficiently solved using a first-order optimization algorithm. We validate our metric learning approach on a large variety of datasets and show that the proposed metric yields promising results when compared with five supervised metric learning algorithms from the literature.
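The idea of a locally learned quadratic metric can be sketched with a plain covariance estimate standing in for the MVCE (the paper fits the minimum volume covering ellipsoid instead; the 2-D toy points and the helper name below are illustrative assumptions, not the paper's implementation):

```python
def local_quadratic_metric(neighbors):
    """Hedged stand-in: build a quadratic (Mahalanobis-style) metric from
    the covariance of a small 2-D neighbourhood. The paper estimates the
    ellipsoid via the MVCE, a robust convex alternative to this."""
    n = len(neighbors)
    mx = sum(x for x, _ in neighbors) / n
    my = sum(y for _, y in neighbors) / n
    sxx = sum((x - mx) ** 2 for x, _ in neighbors) / n
    syy = sum((y - my) ** 2 for _, y in neighbors) / n
    sxy = sum((x - mx) * (y - my) for x, y in neighbors) / n
    det = sxx * syy - sxy * sxy
    # The inverse covariance defines d(u, v) = (u - v)^T M (u - v).
    m11, m12, m22 = syy / det, -sxy / det, sxx / det
    def dist(u, v):
        dx, dy = u[0] - v[0], u[1] - v[1]
        return m11 * dx * dx + 2 * m12 * dx * dy + m22 * dy * dy
    return dist

# Neighbourhood stretched along x: unit steps along x count for less
# than unit steps along y under the learned metric.
dist = local_quadratic_metric([(-2, 0), (0, 1), (0, -1), (2, 0)])
```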

2.
In general, classical iterative algorithms for optimization, such as Newton-type methods, perform only a local search around a given starting point. This feature impedes the direct use of these methods for global optimization problems when good starting points are not available. To overcome this problem, in this work we equip a Newton-type method with the topographical global initialization strategy, employed together with a new formula for its key parameter. The local search algorithm used is a quasi-Newton method with backtracking. In this approach, users provide initial sets instead of starting points. Then, using points sampled in these initial sets (merely boxes in \({\mathbb {R}}^{n}\)), the topographical method selects appropriate initial guesses for global optimization tasks. Computational experiments were performed using 33 test problems available in the literature. Comparisons against three specialized methods (DIRECT, MCS and GLODS) show that the present methodology is a powerful tool for unconstrained global optimization.
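A minimal one-dimensional sketch of topographical initialization, assuming the usual selection rule (a sampled point becomes a starting point when its objective value beats those of its k nearest sampled neighbours); the function name and the toy two-basin objective are illustrative:

```python
def topographical_starts(f, samples, k):
    """Topographical initialization (sketch): return the sampled points
    whose objective value is lower than that of each of their k nearest
    sampled neighbours -- the 'minima' of the k-topograph."""
    vals = [f(x) for x in samples]
    starts = []
    for i, x in enumerate(samples):
        # Nearest neighbours of x among the samples (index 0 is x itself).
        order = sorted(range(len(samples)), key=lambda j: abs(samples[j] - x))
        if all(vals[i] < vals[j] for j in order[1:k + 1]):
            starts.append(x)
    return starts

# Two-basin toy objective with minima at x = -2 and x = 2.
f = lambda x: min((x + 2) ** 2, (x - 2) ** 2)
samples = [i * 0.5 - 4 for i in range(17)]   # grid on [-4, 4]
starts = topographical_starts(f, samples, k=2)
```

Each selected point would then seed one quasi-Newton local search.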

3.
Any global minimization algorithm consists of several local searches performed sequentially. In the classical multistart algorithm, the starting point for each new local search is selected uniformly at random in the region of interest. In the tunneling algorithm, such a starting point is required to have the same function value as that obtained by the last local minimization. We introduce the class of acceptance-rejection based algorithms in order to investigate intermediate procedures. A particular instance is to choose the new point at random, approximately according to a Boltzmann distribution whose temperature T is updated during the algorithm. As T → 0, this distribution peaks around the global minima of the cost function, producing a kind of random tunneling effect. The motivation for such an approach comes from recent works on the simulated annealing approach in global optimization. The resulting algorithm has been tested on several examples proposed in the literature.
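The acceptance-rejection step can be sketched in one dimension (the function names, the uniform proposal and the fixed temperature are illustrative assumptions, not the paper's exact scheme):

```python
import math
import random

def boltzmann_start(f, sample, T, f_ref, max_tries=1000, rng=random):
    """Acceptance-rejection sketch: draw uniform candidates and accept x
    with probability exp(-(f(x) - f_ref) / T) (capped at 1), so that for
    small T accepted starting points concentrate near low-cost regions."""
    for _ in range(max_tries):
        x = sample()
        if rng.random() < math.exp(-max(f(x) - f_ref, 0.0) / T):
            return x
    return x  # fall back to the last candidate

# Toy cost with its global minimum at x = 0; at a low temperature the
# accepted starting points cluster tightly around that minimum.
random.seed(0)
f = lambda x: x * x
starts = [boltzmann_start(f, lambda: random.uniform(-5, 5), T=0.1, f_ref=0.0)
          for _ in range(50)]
```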

4.
We present a new strategy for the constrained global optimization of expensive black-box functions using response surface models. A response surface model is simply a multivariate approximation of a continuous black-box function, used as a surrogate model for optimization in situations where function evaluations are computationally expensive. Prior global optimization methods that utilize response surface models were limited to box-constrained problems, but the new method can easily incorporate general nonlinear constraints. In the proposed method, which we refer to as the Constrained Optimization using Response Surfaces (CORS) method, the next point for costly function evaluation is chosen to minimize the current response surface model subject to the given constraints and to additional constraints requiring the point to be at least some distance from previously evaluated points. The distance requirement is allowed to cycle, starting from a high value (global search) and ending with a low value (local search). The purpose of this constraint is to drive the method towards unexplored regions of the domain and to prevent premature convergence to a point which may not even be a local minimizer of the black-box function. The new method can be shown to converge to the global minimizer of any continuous function on a compact set regardless of the response surface model used. Finally, we consider two particular implementations of the CORS method that utilize a radial basis function model (CORS-RBF) and apply them to the box-constrained Dixon–Szegö test functions and to a simple nonlinearly constrained test function. The results indicate that the CORS-RBF algorithms are competitive with existing global optimization algorithms for costly functions on the box-constrained test problems. The results also show that the CORS-RBF algorithms outperform other constrained global optimization algorithms on the nonlinearly constrained test problem.
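The distance-constrained surrogate-minimization step can be sketched in one dimension; here an inverse-distance-weighted model stands in for the paper's RBF surrogate, and the candidate-grid search stands in for a real constrained solve (all names and the distance normalization are illustrative assumptions):

```python
def idw_surrogate(c, evaluated, eps=1e-12):
    # Inverse-distance-weighted model standing in for the RBF surrogate.
    w = [(1.0 / (abs(c - x) + eps), fx) for x, fx in evaluated]
    return sum(wi * fx for wi, fx in w) / sum(wi for wi, _ in w)

def next_cors_point(evaluated, candidates, beta, surrogate):
    """CORS step (sketch): among candidates, minimize the surrogate
    subject to being at least beta * (max pairwise spread) away from
    every evaluated point; beta cycles high -> low (global -> local)."""
    dmax = max(abs(a - b) for a, _ in evaluated for b, _ in evaluated) or 1.0
    best, best_val = None, float("inf")
    for c in candidates:
        if min(abs(c - x) for x, _ in evaluated) < beta * dmax:
            continue  # too close to an already-evaluated point
        v = surrogate(c, evaluated)
        if v < best_val:
            best, best_val = c, v
    return best

# Evaluations of f(x) = x^2; the next point hugs the surrogate minimum
# near 0 while honouring the minimum-distance requirement.
evaluated = [(-2.0, 4.0), (0.0, 0.0), (3.0, 9.0)]
candidates = [i * 0.25 - 3.0 for i in range(25)]
x_next = next_cors_point(evaluated, candidates, beta=0.1,
                         surrogate=idw_surrogate)
```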

5.
Regularized minimization problems with nonconvex, nonsmooth, even non-Lipschitz penalty functions have attracted much attention in recent years, owing to their wide applications in statistics, control, system identification and machine learning. In this paper, the non-Lipschitz ℓp (0 < p < 1) regularized matrix minimization problem is studied. A global necessary optimality condition for this non-Lipschitz optimization problem is first obtained: specifically, the global optimal solutions of the problem are fixed points of the so-called p-thresholding operator, which is matrix-valued and set-valued. A fixed point iterative scheme for the non-Lipschitz model is then proposed, and its convergence analysis is addressed in detail. Moreover, some acceleration techniques are adopted to improve the performance of the algorithm. The effectiveness of the proposed p-thresholding fixed point continuation (p-FPC) algorithm is demonstrated by numerical experiments on randomly generated and real matrix completion problems.
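The shape of such a thresholding fixed-point scheme can be illustrated on a scalar problem, with p = 1 soft thresholding swapped in as a placeholder for the paper's set-valued p-thresholding operator (which acts on singular values of a matrix and has no simple closed form for general p):

```python
def soft_threshold(t, tau):
    # p = 1 stand-in for the (set-valued) p-thresholding operator.
    return max(abs(t) - tau, 0.0) * (1 if t > 0 else -1)

def fixed_point_iterate(grad_f, x0, lam, mu=0.5, iters=200):
    """Thresholding fixed-point scheme (sketch): alternate a gradient
    step on the smooth part with a thresholding step on the penalty."""
    x = x0
    for _ in range(iters):
        x = soft_threshold(x - mu * grad_f(x), mu * lam)
    return x

# f(x) = 0.5*(x - 3)^2 with lam = 1: the minimizer of f(x) + lam*|x|
# is x = 2, and the iteration converges to it geometrically.
x_star = fixed_point_iterate(lambda x: x - 3.0, x0=0.0, lam=1.0)
```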

6.
The class of Mesh Adaptive Direct Search (MADS) algorithms is designed for the optimization of constrained black-box problems. The purpose of this paper is to compare instantiations of MADS under different strategies to handle constraints. Intensive numerical tests are conducted from feasible and/or infeasible starting points on three real engineering applications.

7.
This paper considers online stochastic combinatorial optimization problems where uncertainties, i.e., which requests come and when, are characterized by distributions that can be sampled, and where time constraints severely limit the number of offline optimizations which can be performed at decision time and/or in between decisions. It proposes online stochastic algorithms that combine the frameworks of online and stochastic optimization. Online stochastic algorithms differ from traditional a priori methods such as stochastic programming and Markov Decision Processes by focusing on the instance data that is revealed over time. The paper proposes three main algorithms: expectation E, consensus C, and regret R. They all make online decisions by approximating, for each decision, the solution to a multi-stage stochastic program using an exterior sampling method and a polynomial number of samples. The algorithms were evaluated experimentally and theoretically. The experimental results were obtained on three applications of different nature: packet scheduling, multiple vehicle routing with time windows, and multiple vehicle dispatching. The theoretical results show that, under assumptions which seem to hold in these and other applications, algorithm E has an expected constant loss compared to the offline optimal solution. Algorithm R reduces the number of optimizations by a factor of |R|, where R is the set of requests, and has an expected ρ(1 + o(1)) loss when the regret gives a ρ-approximation to the offline problem.
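The expectation algorithm E can be sketched as "score each candidate decision by its average value over sampled scenarios, then commit" (the toy request model, function names and payoff are illustrative assumptions, not the paper's applications):

```python
import random

def expectation_decision(decisions, sample_scenario, value, n_samples, rng):
    """Algorithm E (sketch): rank each candidate decision by its total
    value over the sampled future scenarios and commit to the best one."""
    scenarios = [sample_scenario(rng) for _ in range(n_samples)]
    return max(decisions, key=lambda d: sum(value(d, s) for s in scenarios))

# Toy setting: serve a request of type 0 or 1; type 1 arrives 80% of the
# time, and a decision pays off only when it matches the arriving request.
rng = random.Random(0)
choice = expectation_decision(
    decisions=[0, 1],
    sample_scenario=lambda r: 1 if r.random() < 0.8 else 0,
    value=lambda d, s: 1.0 if d == s else 0.0,
    n_samples=200,
    rng=rng,
)
```

Consensus C and regret R refine this template mainly in how each sampled scenario contributes to the score.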

8.
A new method for unconstrained global function optimization, acronymed TRUST, is introduced. This method formulates optimization as the solution of a deterministic dynamical system incorporating terminal repellers and a novel subenergy tunneling function. Benchmark tests comparing this method to other global optimization procedures are presented, and the TRUST algorithm is shown to be substantially faster. The TRUST formulation leads to a simple stopping criterion. In addition, the structure of the equations enables an implementation of the algorithm in analog VLSI hardware, in the vein of artificial neural networks, for further substantial speed enhancement. This work was supported by the Department of Energy, Office of Basic Energy Sciences, Grant No. DE-A105-89-ER14086.

9.
Clusterwise regression consists of finding a number of regression functions, each approximating a subset of the data. In this paper, a new approach for solving clusterwise linear regression problems is proposed, based on a nonsmooth nonconvex formulation. We present an algorithm for minimizing this nonsmooth nonconvex function. This algorithm incrementally divides the whole data set into groups which can each be easily approximated by one linear regression function. A special procedure is introduced to generate a good starting point for solving the global optimization problems arising at each iteration of the incremental algorithm. Such an approach allows one to find a global or near-global solution to the problem when the data sets are sufficiently dense. The algorithm is compared with the multistart Späth algorithm on several publicly available data sets for regression analysis.
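The local alternating scheme underlying clusterwise linear regression (the Späth-style assign-then-refit loop, which the paper wraps in an incremental global strategy for generating starting points) can be sketched as:

```python
def fit_line(pts):
    # Ordinary least squares for y = a*x + b on the given points.
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    d = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / d
    return a, (sy - a * sx) / n

def clusterwise_fit(pts, lines, iters=20):
    """Alternate: assign each point to its best-fitting line, then refit
    each line on its assigned points. With a good starting point this
    local scheme recovers the underlying piecewise-linear structure."""
    for _ in range(iters):
        groups = [[] for _ in lines]
        for x, y in pts:
            errs = [(y - (a * x + b)) ** 2 for a, b in lines]
            groups[errs.index(min(errs))].append((x, y))
        lines = [fit_line(g) if len(g) > 1 else l
                 for g, l in zip(groups, lines)]
    return lines

# Points drawn from y = 2x and y = -x + 3; a nearby starting point
# lets the alternating scheme recover both lines exactly.
pts = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0),
       (0.0, 3.0), (2.0, 1.0), (3.0, 0.0)]
lines = clusterwise_fit(pts, [(2.2, -0.1), (-1.2, 3.2)])
```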

10.
Here we propose a global optimization method for general, i.e. indefinite, quadratic problems, which consist of maximizing a non-concave quadratic function over a polyhedron in n-dimensional Euclidean space. This algorithm is shown to be finite and exact in non-degenerate situations. The key procedure uses copositivity arguments to ensure escape from inefficient local solutions. A similar approach is used to generate an improving feasible point if the starting point is not the global solution, irrespective of whether or not it is a local solution. Moreover, definiteness properties of the quadratic objective function are irrelevant to this procedure. To increase the efficiency of these methods, we employ pseudoconvexity arguments. Pseudoconvexity is related to copositivity in a way which might be helpful for checking this property efficiently, even beyond the scope of the cases considered here.

11.
We present an interior-point trust-funnel algorithm for solving large-scale nonlinear optimization problems. The method is based on an approach proposed by Gould and Toint (Math Prog 122(1):155–196, 2010) that focused on solving equality constrained problems. Our method is similar in that it achieves global convergence guarantees by combining a trust-region methodology with a funnel mechanism, but has the additional capability of being able to solve problems with both equality and inequality constraints. The prominent features of our algorithm are that (i) the subproblems that define each search direction may be solved with matrix-free methods so that derivative matrices need not be formed or factorized so long as matrix-vector products with them can be performed; (ii) the subproblems may be solved approximately in all iterations; (iii) in certain situations, the computed search directions represent inexact sequential quadratic optimization steps, which may be desirable for fast local convergence; (iv) criticality measures for feasibility and optimality aid in determining whether only a subset of computations need to be performed during a given iteration; and (v) no merit function or filter is needed to ensure global convergence.

12.
Global optimization problems with a few variables and constraints arise in numerous applications but are seldom solved exactly. Most often only a local optimum is found, or if a global optimum is detected, no proof is provided that it is one. We study here the extent to which such global optimization problems can be solved exactly using analytical methods. To this effect, we propose a series of tests, similar to those of combinatorial optimization, organized in a branch-and-bound framework. The first complete solution of two difficult test problems illustrates the efficiency of the resulting algorithm. Computational experience with the program BAGOP, which uses the computer algebra system MACSYMA, is reported. Many test problems from the compendiums of Hock and Schittkowski and other sources have been solved. The research of the first and third authors has been supported by AFOSR grants #0271 and #0066 to Rutgers University. Research of the second author has been supported by NSERC grant #GP0036426 and FCAR grants #89EQ4144 and #90NC0305.

14.
The affine rank minimization problem is to minimize the rank of a matrix under linear constraints. It has many applications in various areas such as statistics, control, system identification and machine learning. Unlike the literature, which uses the nuclear norm or the general Schatten \(q~ (0<q<1)\) quasi-norm to approximate the rank of a matrix, in this paper we use the Schatten 1/2 quasi-norm approximation, which is a better approximation than the nuclear norm but leads to a nonconvex, nonsmooth and non-Lipschitz optimization problem. Importantly, we give a global necessary optimality condition for the \(S_{1/2}\) regularization problem by virtue of its special objective function. This is very different from the local optimality conditions usually used for general \(S_q\) regularization problems. Explicitly, the global necessary optimality condition for the \(S_{1/2}\) regularization problem is a fixed point inclusion associated with the singular value half thresholding operator. Naturally, we propose a fixed point iterative scheme for the problem and provide the convergence analysis of this iteration. By discussing the location and setting of the optimal regularization parameter, and by using an approximate singular value decomposition procedure, we obtain a very efficient algorithm, the half norm fixed point algorithm with an approximate SVD (HFPA algorithm), for the \(S_{1/2}\) regularization problem. Numerical experiments on randomly generated and real matrix completion problems are presented to demonstrate the effectiveness of the proposed algorithm.

15.
We consider a continuous location problem in which a firm wants to set up two or more new facilities in a competitive environment. Both the locations and the qualities of the new facilities are to be found so as to maximize the profit obtained by the firm. This hard-to-solve global optimization problem has been addressed in Redondo et al. (Evol. Comput. 17(1), 21–53, 2009) using several heuristic approaches. Through a comprehensive computational study, it was shown that the evolutionary algorithm UEGO is the heuristic which provides the best solutions. In this work, UEGO is parallelized in order to reduce the computational time of the sequential version while preserving its capability of finding the optimal solutions. The parallelization follows a coarse-grain model, where each processing element executes the UEGO algorithm independently of the others during most of the time. Nevertheless, some genetic information can occasionally migrate from one processor to another, according to a migratory policy. Two migration processes, named Ring-Opt and Ring-Fusion2, have been adapted to cope with the multiple facility location problem, and a superlinear speedup has been obtained.
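A ring-topology migration step can be sketched with islands represented as lists of fitness values (lower is better); the replace-worst policy and the names here are illustrative assumptions, since the paper's Ring-Opt and Ring-Fusion2 operate on UEGO species rather than raw fitness lists:

```python
def ring_migrate(populations):
    """Ring-style migration (sketch): each island passes a copy of its
    best (lowest-fitness) individual to the next island in the ring,
    where it replaces the worst local individual."""
    bests = [min(pop) for pop in populations]   # snapshot before mutation
    k = len(populations)
    for i, pop in enumerate(populations):
        incoming = bests[(i - 1) % k]           # migrant from the left
        pop[pop.index(max(pop))] = incoming     # replace the worst
    return populations

# Three islands of two individuals each; after one migration round every
# island holds a copy of its left neighbour's champion.
pops = ring_migrate([[5, 9], [7, 8], [1, 6]])
```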

16.
Summary. It often happens that a stochastic process may be approximated by a sum of a large number of independent components, no one of which contributes a significant proportion of the whole. For example, the depth of water in a lake with many small rivers flowing into it from distant sources, or the point process of calls entering a telephone exchange (considered as the sum of a number of point processes of calls made by individual subscribers), may approximately fulfil these hypotheses. In the present work we formulate and solve the problem of characterizing stochastic processes all of whose finite-dimensional distributions are infinitely divisible. Although some of our results could be derived from known theorems on probabilities on general algebraic structures, many could not, and it seems preferable to take the vector-valued infinitely divisible laws as our starting point. We emphasize that an infinitely divisible process (in our sense) on the real line is not necessarily a decomposable process in the sense of Lévy (cf. § 4), though decomposable processes are particular cases. In § 1 a representation theorem for non-negative (and possibly infinite) stochastic processes is derived, while an analogous theorem in the real-valued case is to be found in § 2. § 3 is devoted to the statement of a central limit theorem and the investigation of some of the properties of infinitely divisible processes. The investigation is continued in § 4 by an examination of processes on the real line giving, for example, a representation theorem under weak conditions for infinitely divisible processes which are a.s. sample continuous. Finally in § 5 a study is made of infinitely divisible point processes and random measures. The author is indebted to Professor J. F. C. Kingman for advice and encouragement.

17.
We describe the optimization of the Voith-Schneider-Propeller (VSP), an industrial propulsion and steering system of a ship combined in one module. The goal is to optimize the efficiency of the VSP with respect to different design variables. In order to determine the efficiency, we have to use numerical simulations of the complex flow around the VSP. Such computations are performed with standard (partly commercial) flow solvers. For the numerical optimization, one would like to use gradient-based methods, which require derivatives of the flow variables with respect to the design parameters. In this paper, we investigate whether Automatic Differentiation (AD) offers a method to compute the required derivatives in the described framework. As a proof of concept, we realize AD for the 2D code Caffa and the 3D code Comet, for the simplified model of optimizing efficiency with respect to the angle of attack of one single blade (like an airfoil). We show that AD gives smooth derivatives, whereas finite differences show oscillations. This regularization effect is even more pronounced in the 3D case. Numerical optimization by AD and Newton's method shows almost optimal convergence rates.
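The core idea behind AD, carrying exact derivatives alongside values instead of differencing perturbed evaluations, can be shown with a minimal forward-mode sketch (supporting only + and *; a toy illustration, not the source-transformation AD applied to the Caffa and Comet codes):

```python
class Dual:
    """Minimal forward-mode AD: each number carries (value, derivative),
    and arithmetic propagates both exactly -- no step-size noise."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'.
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the derivative slot with 1.0 and read off f'(x).
    return f(Dual(x, 1.0)).d

# d/dx (3x^2 + 2x) at x = 2 is exactly 14 -- no finite-difference error.
slope = derivative(lambda x: 3 * x * x + 2 * x, 2.0)
```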

18.
19.
A new approach is proposed for finding all real solutions of systems of nonlinear equations with bound constraints. The zero-finding problem is converted to a global optimization problem whose global minima with zero objective value, if any, correspond to all solutions of the original problem. A branch-and-bound algorithm is used with McCormick's nonsmooth convex relaxations to generate lower bounds. An inclusion relation between the solution set of the relaxed problem and that of the original nonconvex problem is established, which motivates a method to automatically generate starting points for a local Newton-type method. A damped Newton method with natural level functions employing the restrictive monotonicity test is used to find solutions robustly and rapidly. Due to the special structure of the objective function, the solution of the convex lower bounding problem yields a nonsmooth root exclusion test which is found to perform better than earlier interval-analysis based exclusion tests. Both the componentwise Krawczyk operator and the interval-Newton operator with Gauss-Seidel based root inclusion and exclusion tests are also embedded in the proposed algorithm to refine the variable bounds for efficient fathoming of the search space. The performance of the algorithm on a variety of test problems from the literature is presented; for most of them, the first solution is found at the first iteration of the algorithm due to the good starting point generation.
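The flavour of a root-exclusion test can be shown with a natural interval extension on the toy equation f(x) = x² − 2 (the paper's test instead comes from the nonsmooth convex lower bounding problem; this interval-arithmetic analogue and its function names are illustrative):

```python
def interval_pow2(lo, hi):
    # Tight interval extension of x^2 over [lo, hi].
    cands = [lo * lo, hi * hi]
    return (0.0 if lo <= 0.0 <= hi else min(cands), max(cands))

def exclusion_test(lo, hi):
    """Root-exclusion test (sketch): if the interval extension of
    f(x) = x^2 - 2 over the box excludes 0, the box contains no root
    and can be fathomed by the branch-and-bound search."""
    flo, fhi = interval_pow2(lo, hi)
    flo, fhi = flo - 2.0, fhi - 2.0
    return not (flo <= 0.0 <= fhi)   # True => box can be discarded

# [1, 2] brackets the root sqrt(2) and survives; [2, 3] and [-1, 0.5]
# contain no root and are fathomed.
```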

20.
The minimization of the potential energy function of Lennard-Jones atomic clusters has attracted much theoretical as well as computational research in recent years. One reason for this is the practical importance of discovering low-energy configurations of clusters of atoms, in view of applications and extensions to molecular conformation research; another reason for the success of Lennard-Jones minimization in the global optimization literature is the fact that this is an extremely easy-to-state problem which nevertheless poses enormous difficulties for any unbiased global optimization algorithm. In this paper we propose a computational strategy which allowed us to rediscover most putative global optima known in the literature for clusters of up to 80 atoms and for other larger clusters, including the most difficult cluster conformations. The main feature of the proposed approach is the definition of a special-purpose local optimization procedure aimed at enlarging the region of attraction of the best atomic configurations. This effect is attained by first optimizing a modified potential function and using the resulting local optimum as a starting point for local optimization of the Lennard-Jones potential. Extensive numerical experimentation is presented and discussed, from which it can be immediately inferred that the approach presented in this paper is extremely efficient when applied to the most challenging cluster conformations. Some attempts have also been carried out on larger clusters, which resulted in the discovery of the difficult optimum for the 102-atom cluster and of the very recently discovered new putative optimum for the 98-atom cluster.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号