Similar Documents
20 similar documents found (search time: 484 ms)
1.
We present a general diagrammatic approach to the construction of efficient algorithms for computing the Fourier transform of a function on a finite group. By extending work which connects Bratteli diagrams to the construction of Fast Fourier Transform algorithms, we make explicit use of the path algebra connection to the construction of Gel’fand–Tsetlin bases and work in the setting of quivers. We relate this framework to the construction of a configuration space derived from a Bratteli diagram. In this setting the complexity of an algorithm for computing a Fourier transform reduces to the calculation of the dimension of the associated configuration space. Our methods give improved upper bounds for computing the Fourier transform for the general linear groups over finite fields, the classical Weyl groups, and homogeneous spaces of finite groups, while also recovering the best known algorithms for the symmetric group.

2.
On the basis of a unified approach to pivotal algorithms and a generalization of Scarf's concept of primitive sets, we show that Scarf's algorithm for finding fixed points can be embedded into a class of more flexible and more efficient algorithms that allow restarts. An example of this new restart method is described. The class of equilibrium problems solvable by this method is also discussed.

3.
Chow, Mallet-Paret, and Yorke have recently proposed in abstract terms an algorithm for computing Brouwer fixed points of \(C^2\) maps. A numerical implementation of that algorithm is presented here. Careful attention has been paid to computational efficiency, accuracy, and robustness. Some convergence theorems and results of numerical tests are also included.
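As a rough illustration of the underlying homotopy idea (not the Chow, Mallet-Paret, and Yorke implementation itself), the sketch below deforms the trivial problem x = x0 into the fixed point problem x = f(x) and follows the solution as the homotopy parameter t moves from 0 to 1; the map f, starting point, and step counts are placeholders.

```python
import numpy as np

def homotopy_fixed_point(f, x0, t_steps=100, inner_iters=20):
    # Zero curve of H(x, t) = x - (1 - t) * x0 - t * f(x):
    # at t = 0 the solution is x0; at t = 1 it is a fixed point of f.
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    for t in np.linspace(0.0, 1.0, t_steps):
        for _ in range(inner_iters):
            # H(x, t) = 0  <=>  x = (1 - t) * x0 + t * f(x); iterate
            # that map, warm-started from the previous t (a crude
            # corrector step, adequate when f is well behaved).
            x = (1.0 - t) * x0 + t * f(x)
    return x

# Example: a contraction on R^2, so the inner iteration converges.
if __name__ == "__main__":
    f = lambda x: 0.5 * np.cos(x)
    print(homotopy_fixed_point(f, np.zeros(2)))
```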

4.
Fixed point theorems are powerful tools with many applications in mathematical and economic analysis. In this paper, the well-known Brouwer and Kakutani fixed point theorems are generalized to a class of nonconvex sets, and a globally convergent homotopy method is developed for computing fixed points on this class of nonconvex sets.

5.
The efficient global optimization (EGO) algorithm is known for its high efficiency in solving computationally expensive optimization problems. However, the expected improvement (EI) criterion used for picking candidate points in the EGO process produces only one design point per optimization cycle, which wastes time when parallel computing is available. In this work, a new criterion called pseudo expected improvement (PEI) is proposed for developing parallel EGO algorithms. In each cycle, the first updating point is selected by the initial EI function. After that, the PEI function is built to approximate the real updated EI function by multiplying the initial EI function by an influence function of the updating point. The influence function is designed to simulate the impact that the updating point will have on the EI function, and depends only on the position of the updating point (not on its function value). Therefore, the next updating point can be identified by maximizing the PEI function without evaluating the first updating point. As the sequential process goes on, a desired number of updating points can be selected by the PEI criterion within one optimization cycle. The efficiency of the proposed PEI criterion is validated on six benchmarks with dimensions from 2 to 6. The results show that the proposed PEI algorithm performs significantly better than the standard EGO algorithm, and outperforms a state-of-the-art parallel EGO algorithm on five of the six test problems. Furthermore, additional experiments show that failing to find the global maximum of the PEI function significantly degrades the convergence of the proposed algorithm; it is therefore recommended to use as many evaluations as one can afford to find the global maximum of the PEI function.
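A minimal sketch of the batch-selection loop described above, assuming a Gaussian-correlation-style influence function and a finite candidate set standing in for a true global maximization of the PEI surface; the names ei, influence, and select_batch are illustrative, not the paper's code.

```python
import numpy as np

def influence(x, x_new, theta=1.0):
    # Influence of an already-selected updating point: 0 at the point
    # itself, approaching 1 far away, so nearby EI is suppressed.
    return 1.0 - np.exp(-theta * np.sum((x - x_new) ** 2))

def select_batch(ei, candidates, q):
    # ei(x): the initial expected-improvement criterion.
    # candidates: (n, d) array searched in place of a real optimizer.
    pei = np.array([ei(x) for x in candidates])
    batch = []
    for _ in range(q):
        best = candidates[np.argmax(pei)]
        batch.append(best)
        # Multiply by the influence of the new point: no expensive
        # objective evaluation is needed before picking the next one.
        pei *= np.array([influence(x, best) for x in candidates])
    return np.array(batch)
```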

6.
Greedy algorithms which use only function evaluations are applied to convex optimization in a general Banach space \(X\). Along with algorithms that use exact evaluations, algorithms with approximate evaluations are treated. A priori upper bounds for the convergence rate of the proposed algorithms are given. These bounds depend on the smoothness of the objective function and the sparsity or compressibility (with respect to a given dictionary) of a point in \(X\) where the minimum is attained.

7.
Many algorithms introduced in the literature to find the global minimum of a real function rely on repeated local minimum searches. Multistart, tunneling, and some versions of simulated annealing are well-known procedures of this kind. A crucial point of these algorithms is deciding whether or not to perform a new local search. In this paper we look for the optimal probability value to be set at each iteration so that the average number of function evaluations, evals, needed to move from a local minimum to a new one is minimal. We find that this probability has to be 0 or 1, depending on the number of function evaluations required by the local search and on the size of the level set at the current point. An implementation based on this result is introduced. The values required to calculate evals are estimated from the history of the algorithm at run time. The algorithm has been tested both on sample problems constructed by the GKLS package and on problems often used in the literature, and the outcome is compared with recent results.
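The bang-bang (0 or 1) character of the optimal probability can be illustrated with a toy decision rule. The cost and level-set estimates below are placeholders for the history-based run-time estimates the paper maintains, and the threshold form is an illustration of the idea, not the paper's exact formula.

```python
def restart_probability(ls_cost_est, level_set_fraction_est):
    # Toy version of the 0/1 result: starting a local search is
    # worthwhile exactly when its estimated evaluation cost is below
    # the expected number of uniform samples needed to land in the
    # current level set by chance; otherwise keep sampling.
    expected_samples = 1.0 / max(level_set_fraction_est, 1e-12)
    return 1.0 if ls_cost_est < expected_samples else 0.0
```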

8.
This paper considers complexity bounds for the problem of approximating the global minimum of a univariate function when the function evaluations are corrupted by random noise. We take an average-case point of view, where the objective function is taken to be a sample function of a Wiener process and the noise is independent Gaussian. Previous papers have bounded the convergence rates of some nonadaptive algorithms. We establish a lower bound on the convergence rate of any nonadaptive algorithm.

9.
The Chow–Yorke algorithm is a nonsimplicial homotopy-type method for computing Brouwer fixed points that is globally convergent. It is efficient and accurate for fixed point problems. L.T. Watson, T.Y. Li, and C.Y. Wang have adapted the method to zero finding problems, the nonlinear complementarity problem, and nonlinear two-point boundary value problems. Here theoretical justification is given for applying the method to some mathematical programming problems, and computational results are presented. This work was partially supported by NSF Grant MCS 7821337.

10.
This paper proposes a variable-dimension restart algorithm for computing fixed points of continuous self-maps on the standard simplex \(S^n\), and proves the feasibility and finite-step convergence of the algorithm. Numerical test results show that the new fixed point algorithm is competitive with the sandwich algorithm.

11.
This note gives a proof of the Brouwer fixed point theorem by appeal to mechanical intuition. A sequence of machines is described, one for each real dimension greater than one. As a sufficiently large force is applied to such a machine, the coordinates of certain components can be made arbitrarily close to positions which are analogs of a fixed point.

12.
DNA codes are sets of words of fixed length n over the alphabet {A,C,G,T} which satisfy a number of combinatorial conditions. They have applications in DNA computing, in DNA microarray technologies, and as molecular bar codes. The combinatorial conditions considered are (i) minimum Hamming distance d, (ii) fixed GC content and, in some cases, (iii) minimum distance d between any codeword and the reverse Watson-Crick complement of any codeword. The problem is to find DNA codes with the maximum number of codewords. In this paper the construction of DNA codes is studied from an algorithmic perspective. Four local search algorithms are developed and combined in a variable neighbourhood search framework. The algorithm has been run extensively: over the 254 problems considered, it matched or improved the best known lower bounds in 180 cases, yielding 52 new best values.
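A sketch of the feasibility test underlying such a local search, directly encoding conditions (i)-(iii) above; the helper names are illustrative and the search strategy itself is not reproduced.

```python
def hamming(u, v):
    # Condition (i): Hamming distance between equal-length words.
    return sum(a != b for a, b in zip(u, v))

def rc_complement(w):
    # Reverse Watson-Crick complement used in condition (iii).
    pair = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(pair[c] for c in reversed(w))

def can_add(code, w, d, gc_count):
    # Word w (a string) may join the code (a list of strings) if it
    # has the fixed GC content (ii) and keeps distance >= d both to
    # every codeword (i) and to every codeword's reverse complement,
    # including its own (iii).
    if sum(c in "GC" for c in w) != gc_count:
        return False
    return all(hamming(w, v) >= d and hamming(w, rc_complement(v)) >= d
               for v in code + [w])
```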

13.
In k-means clustering we are given a set of n data points in d-dimensional space and an integer k, and the problem is to determine a set of k points in \(\mathbb{R}^d\), called centers, to minimize the mean squared distance from each data point to its nearest center. No exact polynomial-time algorithms are known for this problem. Although asymptotically efficient approximation algorithms exist, these algorithms are not practical due to the very high constant factors involved. There are many heuristics that are used in practice, but we know of no bounds on their performance.

We consider the question of whether there exists a simple and practical approximation algorithm for k-means clustering. We present a local improvement heuristic based on swapping centers in and out. We prove that this yields a \((9+\epsilon)\)-approximation algorithm. We present an example showing that any approach based on performing a fixed number of swaps achieves an approximation factor of at least \((9-\epsilon)\) in all sufficiently high dimensions. Thus, our approximation factor is almost tight for algorithms based on performing a fixed number of swaps. To establish the practical value of the heuristic, we present an empirical study showing that, when combined with Lloyd's algorithm, this heuristic performs quite well in practice.
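A compact sketch of a single-swap pass. The paper analyzes swaps in general and combines them with Lloyd's algorithm; here, for illustration only, candidate centers are restricted to the data points themselves.

```python
import numpy as np

def kmeans_cost(points, centers):
    # Mean squared distance from each point to its nearest center.
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

def single_swap(points, centers):
    # Repeatedly try swapping one current center for one data point,
    # keeping any swap that lowers the cost, until no swap helps.
    centers = centers.copy()
    best = kmeans_cost(points, centers)
    improved = True
    while improved:
        improved = False
        for i in range(len(centers)):
            for p in points:
                trial = centers.copy()
                trial[i] = p
                c = kmeans_cost(points, trial)
                if c < best:
                    centers, best, improved = trial, c, True
    return centers
```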


14.
Quadratically convergent algorithms and one-dimensional search schemes
In this paper, the performances of three quadratically convergent algorithms coupled with four one-dimensional search schemes are studied through several nonquadratic examples. The algorithms are the rank-one algorithm (Algorithm I), the projection algorithm (Algorithm II), and the Fletcher-Reeves algorithm (Algorithm III). The search schemes are the exact quadratic search (EQS), the exact cubic search (ECS), the approximate quadratic search (AQS), and the approximate cubic search (ACS). The performances are analyzed in terms of the number of iterations and the number of equivalent function evaluations for convergence. From the numerical experiments, the following conclusions are drawn: (a) while the number of iterations generally increases when the search stopping condition is relaxed, the number of equivalent function evaluations decreases; therefore, approximate searches should be preferred to exact searches; (b) the numbers of iterations for ACS, ECS, and EQS are about the same; therefore, the use of more sophisticated, higher order search schemes is not called for; the present ACS scheme, modified so that only the function, instead of the gradient, is used in bracketing the minimal point, could prove most desirable in terms of the number of equivalent function evaluations; (c) for Algorithm I, ACS and AQS yield almost identical results; it is believed that further improvements in efficiency are possible if one uses a fixed stepsize approach, thus bypassing the one-dimensional search completely; (d) the combination of Algorithm II and ACS exhibits high efficiency in treating functions whose order is higher than two and whose Hessian at the minimal point is singular; and (e) Algorithm III, even with the best search scheme, is inefficient in treating functions with flat bottoms; it is doubtful that the simplicity of its update will compensate for its inefficiency in such pathological cases. This research was supported by the National Science Foundation, Grant No. 32453.
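For concreteness, here is a textbook version of Algorithm III (Fletcher-Reeves) with a simple backtracking rule standing in for the approximate searches compared in the abstract; the bracketing details of AQS/ACS are not reproduced, and the tolerances are illustrative.

```python
import numpy as np

def fletcher_reeves(f, grad, x0, max_iters=200, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iters):
        # Crude approximate line search: backtrack until a sufficient
        # decrease condition holds (stand-in for AQS/ACS).
        a = 1.0
        while f(x + a * d) > f(x) + 1e-4 * a * g.dot(d) and a > 1e-12:
            a *= 0.5
        x_new = x + a * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = g_new.dot(g_new) / g.dot(g)  # the Fletcher-Reeves update
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```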

15.
Among penalty-based approaches for constrained optimization, augmented Lagrangian (AL) methods are better in at least three ways: (i) they have theoretical convergence properties, (ii) they distort the original objective function minimally, thereby providing a better function landscape for search, and (iii) they can compute the optimal Lagrange multiplier for each constraint as a by-product. Instead of keeping a constant penalty parameter throughout the optimization process, these algorithms update the parameters (called multipliers) adaptively, so that the corresponding penalized function dynamically shifts its optimum from the unconstrained minimum point to the constrained minimum point over the iterations. The flip side is that the overall algorithm requires a serial application of a number of unconstrained optimization tasks, a process that is usually time-consuming and tends to be computationally expensive. In this paper, we devise a genetic algorithm (GA) based parameter update strategy for a particular AL method. The proposed strategy updates critical parameters adaptively based on population statistics. Occasionally, a classical optimization method is used to improve the GA-obtained solution, thereby providing the resulting hybrid procedure its theoretical convergence property. The GAAL method is applied to a number of constrained test problems taken from the evolutionary algorithms (EAs) literature. The number of function evaluations required by GAAL in most problems is found to be smaller than that needed by a number of existing evolutionary constraint handling methods. The GAAL method is found to be accurate, computationally fast, and reliable over multiple runs. Besides solving the problems, the proposed GAAL method is also able to find the optimal Lagrange multiplier associated with each constraint as an added benefit, a matter that is important for a sensitivity analysis of the obtained optimized solution but has not received adequate attention in past evolutionary constrained optimization studies.
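A minimal sketch of the AL machinery this approach builds on, using one standard form of the penalized function and multiplier update for inequality constraints \(g_j(x) \le 0\); the paper's exact variant and its GA-driven update may differ.

```python
def al_objective(f, gs, sigma, R):
    # Penalized function P(x) = f(x)
    #   + R * sum_j( max(0, g_j(x) + sigma_j)^2 - sigma_j^2 ).
    # Its unconstrained minimum moves toward the constrained minimum
    # as the multipliers sigma_j are updated between spells.
    def P(x):
        return f(x) + R * sum(max(0.0, g(x) + s) ** 2 - s ** 2
                              for g, s in zip(gs, sigma))
    return P

def update_multipliers(gs, sigma, x_star):
    # Classical adaptive update after each optimization spell (here,
    # after a GA run): sigma_j <- max(0, g_j(x*) + sigma_j). At the
    # constrained optimum, R * sigma_j relates to the Lagrange
    # multiplier of constraint j, the by-product noted above.
    return [max(0.0, g(x_star) + s) for g, s in zip(gs, sigma)]
```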

16.
In a recent paper, Ström analyzed a simple extrapolation algorithm for numerical differentiation and derived certain properties of the kernel function in the integral representation of the remainder term. These properties are useful for placing bounds on the error in cases when specified higher order derivatives are known not to change sign. The algorithm involves a separate Romberg table for each derivative and is rather inconvenient from the point of view of economizing on the number of function values required. In this paper we generalize Ström's results in two stages. First we show that they are valid for a very wide choice of definitions of the initial column of each Romberg table. Then we show that one such choice, making full use of the computed function values, gives results identical to those obtained using an algorithm suggested by Lyness and Moler with a particular choice of sequence of function evaluations. The effect of round-off error is not discussed in detail.
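A compact sketch of the kind of extrapolation table involved, using central differences as the initial column (the paper's point is precisely that other choices of initial column are possible); the function name and defaults are illustrative.

```python
def romberg_derivative(f, x, h0=0.5, n=5):
    # Column 0: central differences with successively halved steps.
    # The truncation error is an even power series in h, so each
    # Richardson column cancels one more h^2 term.
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        h = h0 / 2 ** i
        R[i][0] = (f(x + h) - f(x - h)) / (2 * h)
        for j in range(1, i + 1):
            R[i][j] = (4 ** j * R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[n - 1][n - 1]

# Example: d/dx sin(x) at x = 1 should be cos(1) ~ 0.5403023.
if __name__ == "__main__":
    import math
    print(romberg_derivative(math.sin, 1.0))
```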

17.
The aim of this work is to study a decoupled fixed point algorithm for solving a finite element (FE) approximation of viscoelastic fluid flow obeying an Oldroyd-B differential model. The interest of this algorithm lies in its applications to numerical simulation and in its computational cost; furthermore, it is easy to implement. The unknowns are the viscoelastic part of the extra stress tensor, the velocity, and the pressure. We suppose that the solution is sufficiently smooth and small. The stress, velocity, and pressure are approximated by discontinuous, continuous, and continuous finite elements, respectively. The upwinding needed for the convection of the stress is done by discontinuous FE. The method consists in alternately solving a transport equation for the stress and a Stokes-like problem for the velocity and pressure. Previously, existence results for the approximate problem and error bounds had been obtained using fixed point techniques with a coupled algorithm. In this paper we show that the mapping of the decoupled fixed point algorithm is locally contracting (in a neighbourhood of the solution), and we obtain existence and local uniqueness of the solution of the approximate problem, together with error bounds.

18.
Optimization, 2012, 61(3): 357-372
A modification is proposed for Scarf's algorithm for computing fixed points which allows restarting, hence overcoming a serious difficulty of this well-known algorithm. A particular feature of the new algorithm is that it restarts using an idea quite different from the sandwiching procedure of Merrill-Mackinnon, and it operates with a nonregular grid, in contrast with most other algorithms, which use regular grids. From the computational point of view, it is very simple and seems to be quite competitive with other existing methods.

19.
Implementing Pure Adaptive Search with Grover's Quantum Algorithm
Pure adaptive search (PAS) is an idealized stochastic algorithm for unconstrained global optimization. The number of PAS iterations required to solve a problem increases only linearly in the domain dimension. However, each iteration requires the generation of a random domain point uniformly distributed in the current improving region. If no regularity conditions are known to hold for the objective function, this task requires a number of classical function evaluations varying inversely with the proportion of the domain constituted by the improving region, entirely counteracting the apparent PAS speedup. The Grover quantum computational search algorithm provides a way to generate the PAS iterates. We show that the resulting implementation, which we call Grover adaptive search (GAS), realizes PAS for functions satisfying certain conditions, and we believe that, once quantum computers become available, GAS will be a practical algorithm.
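A classical mock-up of the PAS/GAS loop: each round draws a uniform point of the current improving region, which is exactly the step Grover search accelerates on a quantum machine. The uniform draw below is a stand-in for the Grover call, and the finite domain is an illustrative simplification.

```python
import random

def grover_adaptive_search(f, domain, rounds=50):
    # domain: a finite list of candidate points (stand-in for the
    # quantum search space). On a quantum machine each improving-
    # region draw would cost roughly O(sqrt(N/M)) Grover oracle calls
    # instead of the O(N/M) classical evaluations noted above.
    x_best = random.choice(domain)
    y_best = f(x_best)
    for _ in range(rounds):
        improving = [x for x in domain if f(x) < y_best]
        if not improving:
            break  # x_best is the global minimum over the domain
        x_best = random.choice(improving)  # stand-in for Grover search
        y_best = f(x_best)
    return x_best, y_best
```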

20.
We study the numerical integration of functions depending on an infinite number of variables. We provide lower error bounds for general deterministic algorithms and provide matching upper error bounds with the help of suitable multilevel algorithms and changing-dimension algorithms. More precisely, the spaces of integrands we consider are weighted, reproducing kernel Hilbert spaces with norms induced by an underlying anchored function space decomposition. Here the weights model the relative importance of different groups of variables. The error criterion used is the deterministic worst-case error. We study two cost models for function evaluations that depend on the number of active variables of the chosen sample points, and we study two classes of weights, namely product and order-dependent weights and the newly introduced finite projective dimension weights. We show for these classes of weights that multilevel algorithms achieve the optimal rate of convergence in the first cost model while changing-dimension algorithms achieve the optimal convergence rate in the second model. As an illustrative example, we discuss the anchored Sobolev space with smoothness parameter \(\alpha\) and provide new optimal quasi-Monte Carlo multilevel algorithms and quasi-Monte Carlo changing-dimension algorithms based on higher-order polynomial lattice rules.
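The multilevel idea can be written as a telescoping sum: with \(A_\ell\) a quadrature using only the first \(s_\ell\) variables, the target is approximated by \(A_0 + \sum_{\ell=1}^{L} (A_\ell - A_{\ell-1})\), so most sample points activate few variables and are cheap under the stated cost models. Below is a toy Monte Carlo version, with plain random points standing in for the QMC rules and a hypothetical f_level interface.

```python
import random

def multilevel_estimate(f_level, dims, n_per_level):
    # f_level(l, u): level-l truncation of the integrand at point u,
    # a list of dims[l] coordinates (hypothetical interface; dims is
    # assumed nondecreasing). Level l >= 1 estimates the correction
    # E[f_l - f_{l-1}] with its own sample size; corrections shrink,
    # so deep (expensive, high-dimensional) levels need few points.
    total = 0.0
    for l, n in enumerate(n_per_level):
        s = 0.0
        for _ in range(n):
            u = [random.random() for _ in range(dims[l])]
            prev = f_level(l - 1, u[:dims[l - 1]]) if l > 0 else 0.0
            s += f_level(l, u) - prev
        total += s / n
    return total
```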
