Similar Documents
20 similar documents found.
1.
The cellular automaton model of computation has drawn the interest of researchers from different disciplines including computer science, biology, mathematics, economics, biochemistry and philosophy. Although a cellular automaton is based on a set of simple rules, complex patterns may evolve over time. We present computational methods for implementing and optimizing a well-known two-state cellular automaton, Conway's Game of Life, on a 16-core Intel Xeon. The evaluation is based on three multicore algorithms. The first algorithm is coherent and utilizes shared memory and barrier synchronization. The remaining two algorithms are distributed and utilize private memories and explicit core-to-core message passing. We provide a link to our open source simulation software.
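The update rule the abstract refers to can be sketched in a few lines. This is a minimal sequential reference step on a toroidal grid, not the authors' multicore implementation; the parallel variants they evaluate partition exactly this work across cores.

```python
# Minimal sequential reference step for Conway's Game of Life on a
# toroidal (wrap-around) grid of 0/1 cells.
def life_step(grid):
    """Return the next generation of a 2D list of 0/1 cells."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight toroidal neighbours.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # Standard B3/S23 rules: birth on 3, survival on 2 or 3.
            nxt[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return nxt
```

A period-2 "blinker" oscillator is a convenient sanity check: applying the step twice must reproduce the original grid.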

2.
The memory in modern computer systems has a highly complex hierarchy: the farther the memory is from the processor, the larger but also the slower it is. Each computer has its own architecture and its own cache memory, and it is not easy to write an algorithm that will run with equal efficiency on all computers. In this article we consider the simplest model of a two-level memory, for which two cache-oblivious algorithms are proposed: multiplication of full matrices and multiplication of a sparse matrix by a block vector.
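The standard way to make dense matrix multiplication insensitive to cache parameters is recursive blocking: splitting the matrices in half along every dimension keeps each level of an unknown memory hierarchy well used without tuning a block size to the machine. The sketch below (a hypothetical function, not the article's code) assumes square matrices whose order is a power of two.

```python
# Cache-oblivious dense matrix multiplication by recursive splitting.
# Assumes square matrices (lists of lists) with n a power of two.
def matmul_oblivious(A, B, n, threshold=16):
    """Return C = A @ B, recursing on half-size submatrices."""
    C = [[0.0] * n for _ in range(n)]

    def rec(ri, rk, rj, size):
        if size <= threshold:
            # Base case: plain triple loop on a small, cache-resident block.
            for i in range(ri, ri + size):
                for k in range(rk, rk + size):
                    a = A[i][k]
                    for j in range(rj, rj + size):
                        C[i][j] += a * B[k][j]
        else:
            # Split each index range in half: 8 recursive sub-products.
            h = size // 2
            for di in (0, h):
                for dk in (0, h):
                    for dj in (0, h):
                        rec(ri + di, rk + dk, rj + dj, h)

    rec(0, 0, 0, n)
    return C
```

Because the recursion never consults a cache size, the same code adapts to every level of the hierarchy at once, which is the defining property of a cache-oblivious algorithm.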

3.
We describe how nondeterministic dynamic programming (DP) algorithms can be designed for a new class of parallel coprocessing systems using “functional memory”, an architecture based upon dataflow computer principles. We also show how Petri nets can be used to model and express such parallel DP algorithms. Finally, we discuss architectural improvements that would facilitate the processing of Petri net models of nondeterministic DP algorithms on functional memory computers (FMC).

4.
A useful application of computer algebra systems is the generation of algorithms for numerical computations. We have shown in Gander and Gruntz (SIAM Rev., 1999) how computer algebra can be used in teaching to derive numerical methods. In this paper we extend this work, using essentially the capability of a computer algebra system to construct and manipulate the interpolating polynomial and to compute a series expansion of a function. We automatically generate formulas for integration and differentiation with error terms, and also generate multistep methods for integrating differential equations. In memory of Germund Dahlquist (1925–2005). AMS subject classification (2000): 65D25, 65D30, 65D32, 65L06
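The flavor of such formula generation can be shown with exact rational arithmetic alone: fix quadrature nodes, demand exactness on monomials, and solve for the weights, much as a computer algebra system would. This stdlib-only sketch (a hypothetical `newton_cotes_weights`, not the paper's code) derives Newton-Cotes weights on [0, 1].

```python
# Derive quadrature weights on [0, 1] by enforcing exactness on
# monomials x**k, solved exactly over the rationals.
from fractions import Fraction

def newton_cotes_weights(nodes):
    """Weights w_i with sum_i w_i * x_i**k == integral of x**k on [0, 1]."""
    n = len(nodes)
    # Vandermonde system: row k enforces exactness for x**k.
    M = [[Fraction(x) ** k for x in nodes] for k in range(n)]
    rhs = [Fraction(1, k + 1) for k in range(n)]  # integral of x**k on [0, 1]
    # Gauss-Jordan elimination over the rationals.
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
                rhs[r] -= f * rhs[col]
    return [rhs[i] / M[i][i] for i in range(n)]
```

With nodes 0, 1/2, 1 this recovers the Simpson weights 1/6, 2/3, 1/6 exactly; a full computer algebra derivation would additionally produce the error term symbolically.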

5.
6.
Partial eigenvalue decomposition (PEVD) and partial singular value decomposition (PSVD) of large sparse matrices are of fundamental importance in a wide range of applications, including latent semantic indexing, spectral clustering, and kernel methods for machine learning. The more challenging problems arise when a large number of eigenpairs or singular triplets needs to be computed. We develop practical and efficient algorithms for these challenging problems. Our algorithms are based on a filter-accelerated block Davidson method. Two types of filters are utilized: Chebyshev polynomial filtering and rational-function filtering via the solution of linear equations. The former exploits the fastest growth of the Chebyshev polynomial among polynomials of the same degree; the latter employs the traditional shift-invert idea, for which we address the important issue of automatically choosing shifts and propose a practical method for solving the shifted linear equations inside the block Davidson method. Both filters efficiently generate high-quality basis vectors to augment the projection subspace at each Davidson iteration step, which allows a restart scheme using an active projection subspace of small dimension. This makes our algorithms memory-economical and thus practical for large PEVD/PSVD calculations. We compare our algorithms with representative methods, including ARPACK, PROPACK, the randomized SVD method, and the limited memory SVD method. Extensive numerical tests on representative datasets demonstrate that, in general, our methods have similar or faster convergence speed in terms of CPU time while requiring much less memory, which makes them more practical for large-scale PEVD/PSVD computations.

7.
We study the implementation of two fundamentally different algorithms for solving the maximum flow problem: Dinic's method and the network simplex method. For the former, we present the design of a storage-efficient implementation. For the latter, we develop a "steepest-edge" pivot selection criterion that is easy to include in an existing network simplex implementation. We compare the computational efficiency of these two methods on a personal computer with a set of generated problems of up to 4,600 nodes and 27,000 arcs. This research was supported in part by the National Science Foundation under Grant Nos. MCS-8113503 and DMS-8512277.
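For reference, the first of the two methods can be sketched compactly: Dinic's method alternates a BFS that builds a level graph with blocking-flow DFS passes over it. This is an illustrative textbook version, not the authors' storage-efficient implementation.

```python
# A compact sketch of Dinic's maximum-flow method:
# BFS builds a level graph, DFS sends blocking flow along it.
from collections import deque

class Dinic:
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]  # edge: [to, capacity, index of reverse edge]

    def add_edge(self, u, v, cap):
        self.adj[u].append([v, cap, len(self.adj[v])])
        self.adj[v].append([u, 0, len(self.adj[u]) - 1])  # residual edge

    def _bfs(self, s, t):
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.adj[u]:
                if cap > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] >= 0

    def _dfs(self, u, t, f):
        if u == t:
            return f
        while self.it[u] < len(self.adj[u]):
            e = self.adj[u][self.it[u]]
            v, cap, rev = e
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(f, cap))
                if d > 0:
                    e[1] -= d                      # push flow forward
                    self.adj[v][rev][1] += d       # and onto the residual edge
                    return d
            self.it[u] += 1
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):                     # rebuild level graph
            self.it = [0] * self.n                 # per-node edge cursors
            while True:
                f = self._dfs(s, t, float('inf'))
                if f == 0:
                    break
                flow += f
        return flow
```

The per-node edge cursors (`self.it`) are what make each blocking-flow phase run in O(VE) time, since saturated edges are never rescanned within a phase.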

8.
In this paper we explore the influence of adaptive memory on the performance of heuristic methods when solving a hard combinatorial optimization problem. Specifically, we tackle the adaptation of tabu search and scatter search to the bandwidth minimization problem, which consists of finding a permutation of the rows and columns of a given matrix that keeps the non-zero elements in a band as close as possible to the main diagonal. This is a classic problem, introduced in the late sixties, that also has a well-known formulation in terms of graphs. Different exact and heuristic approaches have been proposed for the bandwidth problem. Our contribution consists of two new algorithms, one based on the tabu search methodology and the other on the scatter search framework. We also present a hybrid method combining both for improved outcomes. Extensive computational testing shows the influence of the different elements in heuristic search, such as neighborhood definition, local search, combination methods and the use of memory. We compare our proposals with the most recent and advanced methods for this problem, concluding that our new methods can compete with them in solution quality and running time.
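Using the graph formulation, the objective these heuristics minimize is easy to state in code: the bandwidth of a labeling is the largest label distance across any edge. The sketch below (hypothetical helper names, not the authors' search code) shows the evaluator plus a brute-force optimum usable only for tiny instances, e.g. as a test oracle.

```python
# Bandwidth of a graph under a vertex labeling, plus a brute-force
# optimum for tiny graphs (the quantity tabu/scatter search minimizes).
from itertools import permutations

def bandwidth(edges, perm):
    """Max |perm[u] - perm[v]| over edges; perm[i] is vertex i's new position."""
    return max(abs(perm[u] - perm[v]) for u, v in edges)

def min_bandwidth_brute(n, edges):
    """Exact minimum over all n! labelings; feasible only for very small n."""
    return min(bandwidth(edges, p) for p in permutations(range(n)))
```

A path graph labeled in order has bandwidth 1, while a star on four vertices cannot do better than 2, whichever labeling is chosen; heuristics like those in the paper search the permutation space instead of enumerating it.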

9.
We present two graph-based algorithms for multiclass segmentation of high-dimensional data, motivated by the binary diffuse interface model. One algorithm generalizes Ginzburg–Landau (GL) functional minimization on graphs to the Gibbs simplex. The other algorithm uses a reduction of GL minimization, based on the Merriman–Bence–Osher scheme for motion by mean curvature. These yield accurate and efficient algorithms for semi-supervised learning. Our algorithms outperform existing methods, including supervised learning approaches, on the benchmark datasets that we used. We refer to Garcia-Cardona (2014) for a more detailed illustration of the methods, as well as different experimental examples.

10.
The defining characteristic of fixed interval scheduling problems is that each job has a finite number of fixed processing intervals. A job can be processed only in one of its intervals on one of the available machines, or is not processed at all. A decision has to be made about a subset of the jobs to be processed and their assignment to the processing intervals such that the intervals on the same machine do not intersect. These problems arise naturally in different real-life operations planning situations, including the assignment of transports to loading/unloading terminals, work planning for personnel, computer wiring, bandwidth allocation of communication channels, printed circuit board manufacturing, gene identification and examining computer memory structures. We present a general formulation of the interval scheduling problem, show its relations to cognate problems in graph theory, and survey existing models, results on computational complexity and solution algorithms.
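The simplest special case of this family, one machine and one fixed interval per job with the number of processed jobs to be maximized, has a classical greedy solution: sort by finishing time and take every compatible interval. A minimal sketch (not taken from the survey itself):

```python
# Greedy solution to the simplest fixed interval scheduling case:
# one machine, one interval per job, maximize the number of jobs.
def max_jobs(intervals):
    """intervals: list of (start, finish); returns a maximum non-overlapping subset."""
    chosen, last_finish = [], float('-inf')
    for s, f in sorted(intervals, key=lambda iv: iv[1]):  # earliest finish first
        if s >= last_finish:  # compatible with everything chosen so far
            chosen.append((s, f))
            last_finish = f
    return chosen
```

The general problems surveyed in the paper, with multiple intervals per job and multiple machines, are much harder; several variants are NP-hard, which is why the survey covers complexity results alongside algorithms.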

11.
Many of the datasets encountered in statistics are two-dimensional in nature and can be represented by a matrix. Classical clustering procedures seek to construct separately an optimal partition of rows or, sometimes, of columns. In contrast, co-clustering methods cluster the rows and the columns simultaneously and organize the data into homogeneous blocks (after suitable permutations). Methods of this kind have practical importance in a wide variety of applications such as document clustering, where data are typically organized in two-way contingency tables. Our goal is to offer coherent frameworks for understanding some existing criteria and algorithms for co-clustering contingency tables, and to propose new ones. We look at two different frameworks for the problem of co-clustering. The first involves minimizing an objective function based on measures of association and in particular on phi-squared and mutual information. The second uses a model-based co-clustering approach, and we consider two models: the block model and the latent block model. We establish connections between different approaches, criteria and algorithms, and we highlight a number of implicit assumptions in some commonly used algorithms. Our contribution is illustrated by numerical experiments on simulated and real-case datasets that show the relevance of the presented methods in the document clustering field.

12.
We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives.

13.
With the latest developments in advanced computer architectures, we are already seeing large-scale machines at the petascale level and are faced with the exascale computing challenge. All of these require scalability at the system, algorithmic and mathematical-model levels. In particular, efficient scalable algorithms are required to bridge the performance gap. Being able to predict the behaviour, performance and scalability of currently used software on new supercomputers of different architectures, varying sizes, and distinct ways of intercommunication can be of great benefit to researchers as well as application developers. This paper is concerned with the scaling characteristics of Monte Carlo based algorithms for matrix inversion. The algorithmic behaviour on both a shared-memory system and a large-scale cluster system is predicted with the help of an extreme-scale high-performance computing (HPC) simulator.

14.
We analyze a complex queueing system with a single server that operates in three different modes and, depending on circumstances, services two different queues simultaneously. Different switching policies specify when the server takes one queue or both. The main techniques are based on fluctuation analysis. One of the objectives is to model processes that occur in software, computer, and electrical engineering, and to argue that methods of fluctuation theory produce closed-form functionals.

15.
The Lotka–McKendrick model is a well-known model describing the evolution in time of the age structure of a population. In this paper we consider this linear model and discuss a range of methods for its numerical solution. We take advantage of different analytical approaches to the system to design different numerical methods and compare them with existing algorithms. In particular, we set up some algorithms inspired by the approach based on Volterra integral equations, and we also consider a direct approach based on the nonlinear system that describes the evolution of the age profile of the population.

16.
Markov chain Monte Carlo (MCMC) methods for Bayesian computation are mostly used when the dominating measure is the Lebesgue measure, the counting measure, or a product of these. Many Bayesian problems give rise to distributions that are not dominated by the Lebesgue measure or the counting measure alone. In this article we introduce a simple framework for using MCMC algorithms in Bayesian computation with mixtures of mutually singular distributions. The idea is to find a common dominating measure that allows the use of traditional Metropolis-Hastings algorithms. In particular, using our formulation, the Gibbs sampler can be used whenever the full conditionals are available. We compare our formulation with the reversible jump approach and show that the two are closely related. We give results for three examples, involving testing a normal mean, variable selection in regression, and hypothesis testing for differential gene expression under multiple conditions. This allows us to compare the three methods considered: Metropolis-Hastings with mutually singular distributions, Gibbs sampler with mutually singular distributions, and reversible jump. In our examples, we found the Gibbs sampler to be more precise and to need considerably less computer time than the other methods. In addition, the full conditionals used in the Gibbs sampler can be used to further improve the estimates of the model posterior probabilities via Rao-Blackwellization, at no extra cost.

17.
MRI Slice Imaging   (cited by 1: 0 self-citations, 1 other)
To generate a slice image at an arbitrary position and orientation from three-dimensional MRI sample data, we establish two coordinate systems, one in object space and one on the computer screen, and introduce six parameters to describe the cutting plane. We derive the mapping formulas from screen coordinates to object-space coordinates and design six density-estimation algorithms (trilinear interpolation, nearest-neighbor, median, control-force, gradient, and a combined GNP method) to estimate the density at an arbitrary position in space from the given data; each algorithm shows its advantages in certain situations. We build a three-dimensional head model consisting of 10 ellipsoids of different sizes, orientations and densities, and generate the dataset by uniform sampling in object space. Several sets of parameters are used to test the imaging capability of the model and the algorithms. After subjective and objective comparisons of the results, we summarize the strengths and weaknesses of these algorithms. For general applications we recommend the gradient method and the combined GNP method, which in most cases produce smooth and distinct boundaries. The algorithms were tested and compared using a Windows 95 based program that we developed ourselves.
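The first density estimator named in the abstract, trilinear interpolation, blends the eight voxel values surrounding a query point. A minimal sketch for a regular unit-spacing grid (interior points only; boundary handling and the paper's other five estimators are omitted):

```python
# Trilinear interpolation of a 3D volume vol[i][j][k] at an interior
# point (x, y, z), assuming unit voxel spacing.
def trilinear(vol, x, y, z):
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k   # fractional offsets inside the voxel

    def v(di, dj, dk):
        return vol[i + di][j + dj][k + dk]

    # Interpolate along x, then y, then z.
    c00 = v(0, 0, 0) * (1 - fx) + v(1, 0, 0) * fx
    c10 = v(0, 1, 0) * (1 - fx) + v(1, 1, 0) * fx
    c01 = v(0, 0, 1) * (1 - fx) + v(1, 0, 1) * fx
    c11 = v(0, 1, 1) * (1 - fx) + v(1, 1, 1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

Trilinear interpolation reproduces any function that is linear in each coordinate exactly, which makes it a convenient correctness check.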

18.
Binomial coefficients are used in many fields such as computational and applied mathematics, statistics and probability, and theoretical physics and chemistry. For accurate numerical results, the correct calculation of these coefficients is very important. We present some new recurrence relationships and numerical methods for the evaluation of binomial coefficients for negative integers. For this purpose, we compare the outputs of different computer programming languages in the case of negative integers, and we also present two new algorithms for the computations.
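One standard way to extend binomial coefficients to a negative upper index is the identity C(-n, k) = (-1)^k C(n + k - 1, k), which follows from the falling-factorial definition. The sketch below uses this well-known identity for illustration; the paper's own recurrences may differ.

```python
# Binomial coefficient for any integer upper index, via the identity
# C(-n, k) = (-1)**k * C(n + k - 1, k) for n > 0, k >= 0.
from math import comb

def binom(n, k):
    """Binomial coefficient C(n, k) for any integer n and integer k."""
    if k < 0:
        return 0
    if n >= 0:
        return comb(n, k)
    return (-1) ** k * comb(-n + k - 1, k)
```

A useful check is that Pascal's rule C(n, k) = C(n-1, k-1) + C(n-1, k) continues to hold for negative upper indices, which is exactly what makes recurrence-based evaluation possible.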

19.
We propose new iterated improvement neighborhood search algorithms for metaheuristic optimization by exploiting notions of conditional influence within a strategic oscillation framework. These approaches, which are unified within a class of methods called multi-wave algorithms, offer further refinements through memory-based strategies that draw on the concept of persistent attractiveness. Our algorithms provide new forms of both neighborhood search methods and multi-start methods, and are readily embodied within evolutionary algorithms and memetic algorithms by solution combination mechanisms derived from path relinking. These methods can also be used to enhance branching strategies for mixed integer programming.

20.
We present two algorithms for multivariate numerical integration of smooth periodic functions. The cubature rules on which these algorithms are based use fractional parts of multiples of irrationals in combination with certain weights. Previous work led to algorithms with quadratic and cubic error convergence. We generalize these algorithms so that one can use them to obtain general higher order error convergence. The algorithms are open in the sense that extra steps can easily be taken in order to improve the result. They are also linear in the number of steps and their memory cost is low.
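The unweighted ancestor of such rules is easy to sketch: average a smooth 1-periodic function over the points whose coordinates are fractional parts of multiples of irrationals (a Kronecker sequence). The hypothetical `kronecker_rule` below shows only this equal-weight base case; the weighted rules in the paper are what raise the error order beyond it.

```python
# Equal-weight cubature over a Kronecker sequence: the point k is
# mapped to ({k*a_1}, ..., {k*a_d}) for irrational a_j.
import math

def kronecker_rule(f, n, alphas):
    """Average f over ({k*a} for a in alphas) for k = 0..n-1."""
    total = 0.0
    for k in range(n):
        point = [math.modf(k * a)[0] for a in alphas]  # fractional parts
        total += f(point)
    return total / n
```

With a badly approximable irrational such as the golden ratio, the equal-weight rule already converges quickly for smooth periodic integrands; adding weights, as in the paper, improves the order further while keeping the open, extendable character of the sequence.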


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号