Similar Documents

20 similar documents found.
1.
The maximum flow problem is distinguished by the long line of successive contributions researchers have made in obtaining algorithms with incrementally better worst-case complexity. Some, but not all, of these theoretical improvements have produced improvements in practice. The purpose of this paper is to test some of the major algorithmic ideas developed in recent years and to assess their utility in practice. However, our study differs from previous studies in several ways. Whereas previous studies focus primarily on CPU time analysis, our analysis goes further and provides detailed insight into algorithmic behavior: it not only observes how algorithms behave but also tries to explain why they behave that way. We have limited our study to the best previous maximum flow algorithms and some recent algorithms that are likely to be efficient in practice. Our study encompasses ten maximum flow algorithms and five classes of networks. The augmenting path algorithms tested are Dinic's algorithm, the shortest augmenting path algorithm, and the capacity-scaling algorithm. The preflow-push algorithms tested are Karzanov's algorithm, three implementations of Goldberg and Tarjan's algorithm, and three versions of Ahuja, Orlin, and Tarjan's excess-scaling algorithms. Among many findings, our study concludes that the preflow-push algorithms are substantially faster than the other classes of algorithms, and that the highest-label preflow-push algorithm is the fastest maximum flow algorithm, with a growth rate in computational time of O(n^1.5) on four of our five problem classes. Further, in contrast to the results of the worst-case analysis of maximum flow algorithms, our study finds that performing relabel operations (or constructing the layered networks) takes at least as much computation time as the augmentations and/or pushes.
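The augmenting-path family discussed above can be illustrated with a minimal shortest-augmenting-path sketch (a toy adjacency-matrix version for intuition, not one of the tuned implementations from the study):

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Shortest-augmenting-path maximum flow: repeatedly find a shortest
    s-t path in the residual network by BFS and saturate it.
    edges: list of (u, v, capacity); nodes are 0..n-1."""
    cap = [[0] * n for _ in range(n)]          # residual capacities
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual network
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow                        # no augmenting path: done
        # Find the bottleneck capacity, then push flow along the path
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
```

Preflow-push algorithms, which the study found faster, instead maintain node excesses and push flow locally guided by distance labels rather than along full paths.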

2.
Selected algorithms of three famous mathematicians are expressed as recursive computer programmes in a programming language (APL). Specifically: (a) Euclid's greatest common divisor algorithm; (b) Fibonacci's 'golden' number series; (c) Pascal's triangle.

The programmes are designed to be viewed directly by students in order to elucidate the algorithms. Such programmes can be stimulants for learning mathematics.
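The article's programmes are in APL; a Python transcription of the same three recursive algorithms might read:

```python
def gcd(a, b):
    # Euclid's algorithm: gcd(a, b) = gcd(b, a mod b)
    return a if b == 0 else gcd(b, a % b)

def fib(n):
    # Fibonacci's 'golden' series: F(1) = F(2) = 1
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

def pascal_row(n):
    # Row n of Pascal's triangle; each inner entry is the sum of its two parents
    if n == 0:
        return [1]
    prev = pascal_row(n - 1)
    return [1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1]
```

As in the APL originals, the definitions mirror the mathematical recurrences directly, which is what makes them useful as teaching objects.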

3.
Toyota's goal in sequencing mixed models on an assembly line is to keep the usage rate of every part used on the line constant. This paper deals with Toyota's goal of sequencing mixed models on an assembly line with multiple workstations. A sequencing problem with Toyota's goal is formulated, and two algorithms based on different mechanisms, a modified goal chasing method and simulated annealing, are developed for solving it. A number of numerical experiments are carried out to evaluate the efficiency of the algorithms. Computational results show that one of the algorithms generates good sub-optimal solutions with very short CPU times, while the other can reach optimal solutions with longer, but acceptable, CPU times.
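The goal-chasing idea can be sketched greedily: at each stage, pick the model that keeps cumulative part usage closest to the ideal constant rate. This single-workstation simplification and the function name are ours, not the paper's modified algorithm:

```python
def goal_chasing(demand, usage):
    """Greedy goal-chasing sketch. demand = {model: units to build};
    usage = {model: {part: qty per unit}}. At stage k, choose the model
    minimizing the squared deviation of cumulative part usage from the
    ideal rate k * (total demand of part) / (total units)."""
    total_units = sum(demand.values())
    parts = {p for u in usage.values() for p in u}
    part_total = {p: sum(demand[m] * usage[m].get(p, 0) for m in demand)
                  for p in parts}
    remaining = dict(demand)
    cum = {p: 0 for p in parts}                # cumulative usage so far
    seq = []
    for k in range(1, total_units + 1):
        best, best_dev = None, None
        for m in remaining:
            if remaining[m] == 0:
                continue
            dev = sum((cum[p] + usage[m].get(p, 0)
                       - k * part_total[p] / total_units) ** 2
                      for p in parts)
            if best is None or dev < best_dev:
                best, best_dev = m, dev
        seq.append(best)
        remaining[best] -= 1
        for p in parts:
            cum[p] += usage[best].get(p, 0)
    return seq
```

With two models, one of which uses a part the other does not, the greedy rule interleaves them so the part is consumed at a steady rate.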

4.
Magnetic resonance imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured k-space lines. In contrast to well-known algorithms such as SENSE and GRAPPA and their variants, we consider the problem as a non-linear inverse problem. Fast computation algorithms for the necessary Fréchet derivative and reconstruction algorithms are given. Copyright © 2007 John Wiley & Sons, Ltd.

5.
Abstract

Modifications of Prony's classical technique for estimating rate constants in exponential fitting problems have many contemporary applications. In this article the consistency of Prony's method and of related algorithms based on maximum likelihood is discussed as the number of observations n → ∞ by considering the simplest possible models for fitting sums of exponentials to observed data. Two sampling regimes are relevant, corresponding to transient problems and problems of frequency estimation, each of which is associated with rather different kinds of behavior. The general pattern is that the stronger results are obtained for the frequency estimation problem. However, the algorithms considered are all scaling dependent and consistency is not automatic. A new feature that emerges is the importance of an appropriate choice of scale in order to ensure consistency of the estimates in certain cases. The tentative conclusion is that algorithms referred to as Objective function Reweighting Algorithms (ORA's) are superior to their exact maximum likelihood counterparts, referred to as Gradient condition Reweighting Algorithms (GRA's), especially in the frequency estimation problem. This conclusion does not extend to fitting other families of functions such as rational functions.
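Prony's classical technique can be shown in its simplest setting: two exponentials, four noise-free samples. The rates come from a linear recurrence whose characteristic roots are the sample ratios. This minimal formulation is ours, for illustration; it is not one of the reweighting algorithms compared in the article:

```python
import math

def prony_two_terms(y, dt=1.0):
    """Prony's method for y_k = c1*exp(r1*k*dt) + c2*exp(r2*k*dt),
    given exactly four noise-free samples y = [y0, y1, y2, y3].
    The samples satisfy y_{k+2} = a1*y_{k+1} + a0*y_k, and the roots mu
    of x^2 - a1*x - a0 = 0 give the rates r = ln(mu)/dt (real positive
    roots assumed here)."""
    y0, y1, y2, y3 = y
    # Solve the 2x2 linear-prediction system for (a1, a0) by Cramer's rule
    det = y1 * y1 - y0 * y2
    a1 = (y2 * y1 - y0 * y3) / det
    a0 = (y1 * y3 - y2 * y2) / det
    # Characteristic roots via the quadratic formula
    disc = math.sqrt(a1 * a1 + 4 * a0)
    mu1, mu2 = (a1 + disc) / 2, (a1 - disc) / 2
    return sorted(math.log(m) / dt for m in (mu1, mu2))
```

The scaling dependence discussed in the article enters precisely here: the linear-prediction system is solved in a data-dependent norm, and rescaling the observations changes the estimates in the noisy case.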

6.
Algorithms and implementations for computing the sign function of a triangular matrix are fundamental building blocks for computing the sign of arbitrary square real or complex matrices. We present novel recursive and cache-efficient algorithms based on Higham's stabilized specialization of Parlett's substitution algorithm for computing the sign of a triangular matrix. We show that the new recursive algorithms are asymptotically optimal in terms of the number of cache misses they generate. One algorithm that we present performs more arithmetic than the nonrecursive version, but this allows it to benefit from calling highly optimized matrix multiplication routines; the other performs the same number of operations as the nonrecursive version, using custom computational kernels instead. We present implementations of both, as well as a cache-efficient implementation of a block version of Parlett's algorithm. Our experiments demonstrate that the blocked and recursive versions are much faster than the previous algorithms and that the inertia strongly influences their relative performance, as predicted by our analysis.
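The substitution recurrence these algorithms build on can be sketched for a real upper triangular matrix. This is the plain Parlett recurrence, assuming nonzero, pairwise distinct diagonal entries; Higham's stabilization and the cache-efficient blocking are not reproduced here:

```python
def sign_triangular(T):
    """Sign of a real upper triangular matrix T (list of row lists) via
    Parlett's substitution: diagonal entries map to +/-1, then each
    superdiagonal entry is solved from the commutation relation
    S[i][j]*(T[i][i]-T[j][j]) = T[i][j]*(S[i][i]-S[j][j])
                                + sum_k (S[i][k]*T[k][j] - T[i][k]*S[k][j])."""
    n = len(T)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        S[i][i] = 1.0 if T[i][i] > 0 else -1.0   # sign of each eigenvalue
    for d in range(1, n):                        # superdiagonals, nearest first
        for i in range(n - d):
            j = i + d
            acc = T[i][j] * (S[i][i] - S[j][j])
            for k in range(i + 1, j):
                acc += S[i][k] * T[k][j] - T[i][k] * S[k][j]
            S[i][j] = acc / (T[i][i] - T[j][j])
    return S
```

The result S is again upper triangular and satisfies S² = I, the defining property of the matrix sign; the division by T[i][i] − T[j][j] is where numerical trouble arises when diagonal entries of opposite sign are close, motivating the stabilized and blocked variants.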

7.
We introduce a new family of N log N best-basis search algorithms for functions of more than one variable. These algorithms search the collection of anisotropic wavelet packet and cosine packet bases and output a minimum-entropy basis for a given function. The algorithms are constructed after treating the model problem of computing best Walsh packet bases. Several intermediate algorithms for conducting mixed isotropic/anisotropic best-basis searches in the function's various coordinate directions are also presented.

8.
9.
In this work, Solodov–Svaiter's hybrid projection-proximal and extragradient-proximal methods [16,17] are used to derive two algorithms to find a Karush–Kuhn–Tucker pair of a convex programming problem. These algorithms are variations of the proximal augmented Lagrangian. As a main feature, both algorithms allow for a fixed relative accuracy of the solution of the unconstrained subproblems. We also show that the convergence is Q-linear under strong second order assumptions. Preliminary computational experiments are also presented.

10.
Kinematic equations and algorithms for the operation of strapdown inertial navigation systems intended for the high-accuracy determination of the inertial orientation parameters (the Euler (Rodrigues–Hamilton) parameters) of a moving object are considered. Together with classical orientation equations, Hamilton's quaternions and new kinematic differential equations in four-dimensional (quaternion) skew-symmetric operators are used, matched with the classical rotation quaternion and the quaternion rotation matrix via Cayley's formulae. New methods for solving the synthesized kinematic equations are considered: a one-step quaternion orientation algorithm of third-order accuracy and two-step algorithms of third- and fourth-order accuracy in four-dimensional skew-symmetric operators for calculating the parameters of the spatial position of an object. The algorithms were constructed using the Picard method of successive approximations, take primary integral information from measurements of the object's absolute angular velocity as input, and have advantages in accuracy and simplicity over existing algorithms of a similar order.
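The basic structure of such orientation algorithms, consuming integrated angular-rate increments, can be sketched with a first-order quaternion update. This illustrates only the common skeleton; the paper's third- and fourth-order one-step and two-step algorithms add higher-order correction terms:

```python
import math

def quat_mult(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def quat_update(q, theta):
    """One attitude-update step: rotate q by the angular increment vector
    theta (the integral of the measured angular rate over the step),
    using the exact exponential of a constant-rate rotation."""
    a = math.sqrt(theta[0]**2 + theta[1]**2 + theta[2]**2)
    if a == 0.0:
        return q
    s = math.sin(a / 2) / a
    dq = (math.cos(a / 2), theta[0]*s, theta[1]*s, theta[2]*s)
    return quat_mult(q, dq)
```

Two successive 90-degree increments about the x axis compose into a 180-degree rotation, exactly as quaternion multiplication prescribes.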

11.
In this paper, by introducing a definition of the parameterized comparison matrix of a given complex square matrix, the solvability of a parameterized class of complex nonsymmetric algebraic Riccati equations (NAREs) is discussed. The existence and uniqueness of the extremal solutions of the NAREs is proved. Some classical numerical methods can be applied to compute the extremal solutions of the NAREs, including the Schur method, basic fixed-point iterative methods, Newton's method, and the doubling algorithms. Furthermore, the linear convergence of the basic fixed-point iterative methods and the quadratic convergence of Newton's method and the doubling algorithms are also shown. Moreover, some concrete parameter selection strategies in the complex number field for the doubling algorithms are given. Numerical experiments demonstrate that our numerical methods are effective.

12.
A piecewise homogeneous spherical medium is excited by an external or internal electric dipole with arbitrary location and polarization. The dyadic Green's function of the medium is determined analytically. Then, the vector electric fields and far-field patterns are obtained. Low-frequency approximations of the far-field patterns are subsequently derived, which encode the dipole's location coordinates and polarization components in the different orders of the associated expansions. This fact enables the establishment of far-field inverse scattering algorithms for the electromagnetic interior or exterior excitation of a small sphere by an arbitrary dipole. Inverse medium and inverse source problems are considered, concerning, respectively, the determination of the scatterer's material parameters and the dipole's characteristics. The developed inverse algorithms determine exactly the unknown parameters of problems fulfilling the low-frequency assumption, which is indeed the case in most relevant applications, e.g., in biomedical imaging.

13.
Basic graph structures such as maximal independent sets (MIS's) have spurred much theoretical research in randomized and distributed algorithms, and have several applications in networking and distributed computing as well. However, the extant (distributed) algorithms for these problems do not necessarily guarantee fault-tolerance or load-balance properties. We propose and study "low-average degree" or "sparse" versions of such structures. Interestingly, in sharp contrast to, say, MIS's, it can be shown that checking whether a structure is sparse will take substantial time. Nevertheless, we are able to develop good sequential/distributed (randomized) algorithms for such sparse versions. We also complement our algorithms with several lower bounds. Randomization plays a key role in our upper and lower bound results. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 322–344, 2016

14.
《Optimization》2012,61(1-2):121-153
In this paper, we present a family of secant algorithms combined with a nonmonotone trust-region strategy for nonlinear equality constrained optimization problems. The proposed algorithms are globally convergent while retaining a local superlinear convergence rate, achieved by introducing Fletcher's penalty function as the merit function.

15.
SHIVA (Simulated Highways for Intelligent Vehicle Algorithms) is a kinematic simulation of vehicles moving and interacting on a user-defined stretch of roadway. The vehicles can be equipped with simulated human drivers as well as sensors and algorithms for automated control. These algorithms influence the vehicles' motion through simulated commands to the accelerator, brake, and steering wheel. SHIVA's user interface provides facilities for visualizing and influencing the interactions between vehicles. SHIVA not only models the elements of the domain most useful to tactical driving research but also provides tools to rapidly prototype and test algorithms in challenging traffic situations. Additionally, the scenario control tools allow repeatable testing of different reasoning systems on a consistent set of traffic situations. These features are vital in the development and evaluation of intelligent vehicle technology for ITS applications.

16.
Piecewise affine inverse problems form a general class of nonlinear inverse problems. In particular inverse problems obeying certain variational structures, such as Fermat's principle in travel time tomography, are of this type. In a piecewise affine inverse problem a parameter is to be reconstructed when its mapping through a piecewise affine operator is observed, possibly with errors. A piecewise affine operator is defined by partitioning the parameter space and assigning a specific affine operator to each part. A Bayesian approach with a Gaussian random field prior on the parameter space is used. Both problems with a discrete finite partition and a continuous partition of the parameter space are considered.

The main result is that the posterior distribution is decomposed into a mixture of truncated Gaussian distributions, and the expression for the mixing distribution is partially analytically tractable. The general framework has, to the authors' knowledge, not previously been published, although the result for the finite partition is generally known.

Inverse problems are currently of large interest in many fields. The Bayesian approach is popular and most often highly computer intensive. The posterior distribution is frequently concentrated close to high-dimensional nonlinear spaces, resulting in slow mixing for generic sampling algorithms. Inverse problems are, however, often highly structured. In order to develop efficient sampling algorithms for a problem at hand, the problem structure must be exploited.

The decomposition of the posterior distribution that is derived in the current work can be used to develop specialized sampling algorithms. The article contains examples of such sampling algorithms. The proposed algorithms are applicable also for problems with exact observations. This is a case for which generic sampling algorithms tend to fail.

17.
CGS (conjugate Gram-Schmidt) algorithms are given for computing extreme points of a real-valued function f(x) of n real variables subject to m constraining equations g(x) = 0, m < n. The method of approach is to solve the system $$\begin{gathered} f'(x) + g'(x)*\lambda = 0 \hfill \\ g(x) = 0 \hfill \\ \end{gathered}$$ where λ is the Lagrange multiplier vector, by means of CGS algorithms for systems of nonlinear equations. Results of the algorithms applied to test problems are given, including several problems having inequality constraints treated by adjoining slack variables.

18.
Multiplication algorithms in primary school are still frequently introduced with little attention to meaning. We present a case study focusing on a third grade class that engaged in comparing two algorithms and discussing "why they both work". The objectives of the didactical intervention were to foster students' development of mathematical meanings concerning multiplication algorithms, and of an attitude towards judging and comparing the value and efficiency of different algorithms. The underlying hypotheses were that it is possible to promote the simultaneous unfolding of the semiotic potential of the two algorithms, considered as cultural artifacts, with respect to the objectives of the didactical intervention, and to establish a fruitful synergy between them. As a result, this study sheds light on the new theoretical construct of "bridging sign", illuminating students' meaning-making processes involving more than one artifact, and provides important insight into the actual unfolding of the hypothesized potential of the algorithms.

19.
Gilmore and Gomory's algorithm is one of the best currently known exact algorithms for solving unconstrained guillotine two-dimensional cutting problems. Herz's algorithm is more effective, but only for the unweighted case. We propose a new exact algorithm, suited to both the weighted and unweighted cases, which is more powerful than both. The algorithm uses dynamic programming procedures and a one-dimensional knapsack problem to obtain efficient lower and upper bounds and important optimality criteria, which permit significant branching cuts in a recursive tree-search procedure. Recursivity, computational power, suitability for parallel implementation, and generalization to constrained two-dimensional cutting problems are some important features of the new algorithm.
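The dynamic-programming core of Gilmore-Gomory-style algorithms for the unconstrained guillotine problem can be sketched directly: every guillotine pattern is either a single piece (with waste) or splits by one edge-to-edge cut. This plain recursion omits the knapsack-derived bounds and tree-search pruning of the proposed algorithm:

```python
from functools import lru_cache

def guillotine_value(W, H, pieces):
    """Maximum value cuttable from a W x H plate with guillotine cuts.
    pieces: list of (w, h, value); each piece type may be used any
    number of times (the unconstrained problem); integer dimensions."""
    @lru_cache(maxsize=None)
    def best(w, h):
        # Base case: place one piece in the rectangle (rest is waste)
        v = max((val for pw, ph, val in pieces if pw <= w and ph <= h),
                default=0)
        # Or split the rectangle by a guillotine cut and solve both halves
        for x in range(1, w // 2 + 1):          # vertical cuts
            v = max(v, best(x, h) + best(w - x, h))
        for y in range(1, h // 2 + 1):          # horizontal cuts
            v = max(v, best(w, y) + best(w, h - y))
        return v
    return best(W, H)
```

Restricting cut positions to piece-dimension combinations (discretization points) and bounding subproblems with a one-dimensional knapsack are the standard refinements that make such recursions practical on larger plates.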

20.
All methods for solving least-squares problems involve orthogonalization in one way or another. Certain fundamental estimation and prediction problems of signal processing and time-series analysis can be formulated as least-squares problems. In these problems, the sequence that is to be orthogonalized is generated by an underlying unitary operator. A prime example of an efficient orthogonalization procedure for this class of problems is Gragg's isometric Arnoldi process, which is the abstract encapsulation of a number of concrete algorithms. In this paper, we discuss a two-sided orthogonalization process that is equivalent to Gragg's process but has certain conceptual strengths that warrant its introduction. The connections with classical algorithms of signal processing are discussed.
