Similar documents
 20 similar documents found (search time: 46 ms)
1.
Interior Point algorithms have become a very successful tool for solving large-scale linear programming problems. The Dual Affine algorithm is one of the Interior Point algorithms implemented in the computer program OB1. It is a good candidate for implementation on a parallel computer because it is very computing-intensive. A parallel Dual Affine algorithm is presented which is suitable for a parallel computer with distributed memory. The algorithm obtains its speedup from parallel sparse linear algebra computations such as Cholesky factorisation, matrix multiplication, and triangular system solving, which form the bulk of the computing work. Efficient algorithms based on the grid distribution of matrices are presented for each of these computations. The algorithm is implemented in occam 2 on a square mesh of transputers. The resulting parallel program is connected to the sequential Fortran 77 program OB1, which performs the preprocessing and the postprocessing. Experimental results on a mesh of 400 transputers are given for a test set of seven realistic planning and scheduling problems from Shell and seven problems from the NETLIB LP collection; the results show a speedup of 88 for the largest problem.
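The grid distribution of matrices mentioned above can be illustrated with a toy sketch (a hypothetical cyclic mapping in Python; the paper's occam 2 implementation on a transputer mesh is of course far more involved):

```python
# Hypothetical sketch of a square-grid (cyclic) matrix distribution:
# entry (i, j) of an n x n matrix is owned by processor (i mod q, j mod q)
# on a q x q mesh, so rows and columns are spread evenly across the mesh.

def owner(i, j, q):
    """Mesh coordinates of the processor owning matrix entry (i, j)."""
    return (i % q, j % q)

def local_entries(p_row, p_col, n, q):
    """Entries of an n x n matrix stored on processor (p_row, p_col)."""
    return [(i, j)
            for i in range(p_row, n, q)
            for j in range(p_col, n, q)]
```

With this mapping every processor holds the same number of entries when q divides n, which is what makes the parallel Cholesky and matrix-multiplication steps load-balanced.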

2.
In this paper, a processing element (PE) is characterized by its computation bandwidth, I/O bandwidth, and the size of its local memory. In carrying out a computation, a PE is said to be balanced if the computing time equals the I/O time. Consider a balanced PE for some computation. Suppose that the computation bandwidth of the PE is increased by a factor of α relative to its I/O bandwidth. Then when carrying out the same computation the PE will be imbalanced; i.e., it will have to wait for I/O. A standard method of avoiding this I/O bottleneck is to reduce the overall I/O requirement of the PE by increasing the size of its local memory. This paper addresses the question of by how much the PE's local memory must be enlarged in order to restore balance. The following results are shown: For matrix computations such as matrix multiplication and Gaussian elimination, the size of the local memory must be increased by a factor of α2. For computations such as relaxation on a k-dimensional grid, the local memory must be enlarged by a factor of αk. For some other computations such as the FFT and sorting, the increase is exponential; i.e., the size of the new memory must be the size of the original memory to the αth power. All these results indicate that to design a balanced PE, the size of its local memory must be increased much more rapidly than its computation bandwidth. This phenomenon seems to be common for many computations where an output may depend on a large subset of the inputs. Implications of these results for some parallel computer architectures are also discussed. One particular result is that to balance an array of p linearly connected PEs for performing matrix computations such as matrix multiplication and matrix triangularization, the size of each PE's local memory must grow linearly with p. Thus, the larger the array is, the larger each PE's local memory must be.
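The stated growth factors can be made concrete with a small illustration (Python; an assumed reading of the results, with m the original local memory size):

```python
# Required local memory after the computation bandwidth grows by a
# factor of alpha relative to I/O bandwidth, per the three cases above.

def matrix_memory(m, alpha):
    """Matrix multiply, Gaussian elimination: memory grows by alpha**2."""
    return m * alpha ** 2

def grid_memory(m, alpha, k):
    """Relaxation on a k-dimensional grid: memory grows by alpha**k."""
    return m * alpha ** k

def fft_memory(m, alpha):
    """FFT and sorting: new memory is the old memory to the alpha-th power."""
    return m ** alpha
```

For example, doubling the compute bandwidth (α = 2) quadruples the memory needed for matrix computations, but squares it outright for the FFT: a 1 KiB memory becomes 1 MiB.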

3.
We checked the 107-dimensional alternative algebra constructed by E. Kleinfeld (On centers of alternative algebras, Comm. Algebra 8(3), 1980, 289–297) and verified that it is alternative. Checking the alternative law directly for all basis elements proved too time-consuming for the computer. We developed a new algorithm which is much faster for sparse multiplication tables and requires much less memory. Instead of testing the identities for all possible choices of basis elements, our algorithm is based on "factoring" the basis elements in all possible ways. The factorization algorithm took 5 minutes of CPU time; it is estimated that direct substitution with no consideration for sparseness would take 600 hours. Even when the substitution technique was improved to account for sparseness, it is estimated it would take 12 hours of CPU time. Substitution also requires 100 times as much memory space as factorization.
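As a rough illustration of the "direct substitution" approach the paper improves upon, one can check the linearized alternative laws, (a,b,c)+(b,a,c)=0 and (a,b,c)+(a,c,b)=0, on a multiplication table of structure constants (over characteristic ≠ 2 these imply alternativity). This is a hypothetical toy sketch, nowhere near the 107-dimensional case:

```python
# table[(i, j)][k] is the coefficient of basis element e_k in e_i * e_j.

def mul(table, n, x, y):
    """Multiply coefficient vectors x, y using the structure constants."""
    out = [0] * n
    for i in range(n):
        if x[i] == 0:
            continue
        for j in range(n):
            if y[j] == 0:
                continue
            for k, c in enumerate(table[(i, j)]):
                out[k] += x[i] * y[j] * c
    return out

def associator(table, n, a, b, c):
    """(a, b, c) = (ab)c - a(bc)."""
    ab = mul(table, n, a, b)
    bc = mul(table, n, b, c)
    return [p - q for p, q in zip(mul(table, n, ab, c),
                                  mul(table, n, a, bc))]

def is_alternative(table, n):
    """Check the linearized alternative laws over all basis triples."""
    basis = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for a in basis:
        for b in basis:
            for c in basis:
                abc = associator(table, n, a, b, c)
                bac = associator(table, n, b, a, c)
                acb = associator(table, n, a, c, b)
                if any(x + y != 0 for x, y in zip(abc, bac)):
                    return False
                if any(x + y != 0 for x, y in zip(abc, acb)):
                    return False
    return True
```

For the complex numbers with basis {1, i} the check passes (any associative algebra is alternative); a table with e0·e1 = e0, e1·e0 = e1 and all other products zero fails it.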

4.
Majid M. Ali, Communications in Algebra, 2013, 41(12): 4620–4642
All rings are commutative with identity, and all modules are unital. The purpose of this article is to investigate multiplication von Neumann regular modules. For this reason we introduce the concept of nilpotent submodules generalizing nilpotent ideals and then prove that a faithful multiplication module is von Neumann regular if and only if it has no nonzero nilpotent elements and its Krull dimension is zero. We also give a new characterization for the radical of a submodule of a multiplication module and show in particular that the radical of any submodule of a Noetherian multiplication module is a finite intersection of prime submodules.

5.
The need for effective utilization of computer facilities has resulted in a great interest in parallel processing of several independent programs on a computer. The introductory parts of this paper deal with multiprogramming both for batch and real-time computer systems. Some important hardware features such as automatic program interrupts, memory lock-outs and autonomous facilities are also discussed. In the following chapter elementary program mechanics are introduced and attention is paid to priority assignment to different programs. Some main functions of a Master Control Program are then discussed, and finally some aspects of multiprogram scheduling of computer runs are considered. Multiprogramming is still at an early stage of its development, and this paper therefore makes no claim to completeness; rather, it attempts to convey some important principles of these new techniques.

6.
We explore the possibility of storage and retrieval of ultrametrically organized patterns in the hippocampus, the part of the brain devoted to memory processes. The ultrametric structure has been chosen as a good representation of the categories of memory. The storage and retrieval process is the one typical of the hippocampus and is based on the dynamics of the CA1 neurons under input from the neurons of the entorhinal cortex and the CA3 system. We explore whether this real system of neurons exhibits the associative memory property introduced long ago in artificial neural networks. We study how the performance depends on the deviation of the system of patterns from ultrametricity. The evolution of the system is simulated by means of a parallel computer, and the statistics of storage and retrieval are investigated.

7.
The Schur complement method, also known as the substructuring technique, was widely used in structural mechanics to solve large-scale systems on limited-memory computers for more than three decades [J.S. Przemieniecki, AIAA J. 1 (1963) 138]. Currently, due to developments in computer technology, the available on-board memory has increased considerably. Despite the existence of these high-memory systems, the Schur complement method still finds applications in structural mechanics through parallel computing. When developing a computer program, the Schur method carries a significant book-keeping load in comparison to other parallel algorithms, e.g., the Schwarz alternating domain decomposition method [H.A. Schwarz, Gesammelte Mathematische Abhandlungen, vol. 2, Springer, Berlin, 1890, p. 133], and this book-keeping increases memory usage. Although parallel systems are used, global coefficient matrices require a large amount of memory, so significant memory must be reserved for the solution of large-scale systems. In this paper, we present an efficient algorithm for the assemblage and solution of interface equations which facilitates the solution of large-scale systems via the Schur complement method on multiple instruction multiple data (MIMD) distributed-memory architectures. In this method, we assemble the subdomain and interface coefficient matrices in such a manner that the memory requirements decrease significantly, resulting in the solution of large-scale systems with reasonable memory usage. The computer program is tested on distributed-memory architectures with the UNIX, WINDOWS NT, and LINUX operating systems.
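The block elimination behind the Schur complement method can be sketched on a toy two-subdomain system (plain Python with fixed 2×2 blocks for brevity; the paper's parallel sparse implementation is not reproduced here):

```python
# Solve [[A, B], [C, D]] [x1; x2] = [f; g] by eliminating x1:
#   S = D - C A^{-1} B            (the Schur complement / interface matrix)
#   S x2 = g - C A^{-1} f,  then  A x1 = f - B x2.

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(M, v):
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def schur_solve(A, B, C, D, f, g):
    Ainv = inv2(A)
    CAinv = mat_mul(C, Ainv)
    CAB = mat_mul(CAinv, B)
    S = [[D[i][j] - CAB[i][j] for j in range(2)] for i in range(2)]
    rhs = [g[i] - mat_vec(CAinv, f)[i] for i in range(2)]
    x2 = mat_vec(inv2(S), rhs)        # interface unknowns first
    Bx2 = mat_vec(B, x2)
    x1 = mat_vec(Ainv, [f[i] - Bx2[i] for i in range(2)])
    return x1, x2
```

Only the (small) interface system S couples the subdomains, which is why each subdomain block can be assembled and factored independently on its own processor.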

8.
Finding global optimal solutions with a golden-section method in the plane
A method for the unconstrained global optimization problem is given. It extends the 0.618 (golden section) method for one-dimensional line search to the plane, and improves the original method, which applies only to local search over unimodal functions, into one that can search for the global optimum of multimodal functions. A convergence proof is given. The outstanding advantages of the method are its wide applicability, its simple algorithm, its ability to find the optimum to any prescribed precision, and that it overcomes the large memory requirement shared by earlier direct methods. Simulation results show that the algorithm is effective.
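For reference, the classical one-dimensional 0.618 method that the paper generalizes can be sketched as follows (this is only the starting point, not the paper's planar algorithm):

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by golden-section search."""
    r = (math.sqrt(5) - 1) / 2          # 0.618..., the golden ratio minus 1
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                     # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                           # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return (a + b) / 2
```

Each iteration shrinks the bracket by the factor 0.618 while reusing one of the two previous function evaluations, which is exactly the economy the golden ratio buys.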

9.
The product of two fuzzy numbers
The product of two fuzzy numbers is defined by the extension principle. Based on the extension principle, this paper gives four methods for computing the product of two fuzzy numbers: a nonlinear-programming method, an analytic method, a computer graphing method, and a computer simulation method. Each of the four methods has its own advantages and disadvantages, and each is illustrated with an example.
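The extension-principle product, μ_{AB}(z) = sup{ min(μ_A(x), μ_B(y)) : x·y = z }, can be approximated on finite supports in the spirit of the paper's computer-based methods (a hypothetical discretized sketch, not any of the four methods verbatim):

```python
# A and B are fuzzy numbers given on finite supports as
# dicts mapping value -> membership degree.

def fuzzy_product(A, B):
    """Max-min composition over all factorizations z = x * y."""
    out = {}
    for x, ma in A.items():
        for y, mb in B.items():
            z = x * y
            out[z] = max(out.get(z, 0.0), min(ma, mb))
    return out
```

For a triangular-ish A = {1: 0.5, 2: 1.0, 3: 0.5} and a crisp B = {2: 1.0}, the product concentrates at {2: 0.5, 4: 1.0, 6: 0.5}, as the extension principle predicts.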

10.
We present a global optimization algorithm of the interval type that does not require a lot of memory and treats standard constraints. The algorithm is shown to be able to find one globally optimal solution under certain conditions. It has been tested on many examples of varying complexity, with dimensions ranging from 1 to 2,000, on nothing more than a basic personal computer. The extensive numerical experiments indicate that the algorithm has a good chance of successfully finding a good approximation of a globally optimal solution. More importantly, it finds such a solution much more quickly, and using much less memory, than a conventional interval method. The new algorithm is also compared with several noninterval global optimization methods in our numerical experiments, again showing its clear superiority in most cases.
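The interval arithmetic underlying such interval-type methods can be sketched minimally (illustrative only; the paper's algorithm adds branching, pruning, and constraint handling on top of enclosures like these):

```python
# Intervals are (lo, hi) tuples.  An interval evaluation of f encloses
# the true range of f over the box, which is what lets an interval
# method discard subregions that cannot contain the global optimum.

def imul(a, b):
    """Interval multiplication: min/max of the four endpoint products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def isub(a, s):
    """Subtract a scalar from an interval."""
    return (a[0] - s, a[1] - s)

def f_range(x):
    """Interval enclosure of f(x) = x * (x - 2) over the interval x."""
    return imul(x, isub(x, 2))
```

Note the enclosure may overestimate: on [0, 1] the true range of x² − 2x is [−1, 0], but the naive evaluation returns [−2, 0]; it is still a valid bound.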

11.
Since the middle of the 1960s, computer science has been practised in Denmark under Peter Naur's term datalogy, the science of data processes. Starting at Regnecentralen and the University of Copenhagen, the Copenhagen Tradition of Computer Science has developed its own special characteristics through a close connection with applications and other fields of knowledge. The tradition is not least visible in the area of education. Comprehensive project activity is an integral part of the curriculum, thus presenting theory as an aspect of realistic solutions known primarily through actual experience. Peter Naur recognized early the particular educational challenges presented by computer science. His innovations have shown their quality and vitality at other universities as well. There is a close connection between computer science training as it has been formed at Copenhagen University and the view of computer science which has characterized Peter Naur's research. We illustrate how the study of programming and system development, conceived as a human activity, has been an all-pervasive theme in Naur's work. This approach has set the scene for central research issues in software development which today seem more topical than ever. Dedicated to Peter Naur on the occasion of his 60th birthday

12.
Piet Verstappen, ZDM, 1999, 31(5): 158–165
In education, multiplying is usually viewed as repeated joining together, and dividing as repeated taking away or, what comes to the same thing, as equal distribution. This presentation springs from Antiquity, when thought was mostly concrete. In modern mathematics we have relation-numbers instead of image-numbers, and likewise multiplying is a facet of relational thinking. The view that children can learn only through the concrete is often understood, in a biased way, in the sense that the concrete has to be abstracted, which characterizes substantial thinking. However, in the case of relational thinking, learning through the concrete means that, to achieve insight, the mathematical activities have to be applied to reality. This is a crucial point, for most people have difficulties with applying multiplication, much more than with the inherent algorithms. It appears that they do not really know what multiplication is, particularly not its space structure. The more general the structure, the more numerous and wider the applications. This implies that multiplying as a multiple of classes is much less useful than multiplying as a space form. Questing for the essence of multiplication is the major topic of this paper. What changes has its structure undergone, and how can education deal with them? At the end it is illustratively explained why probability, based on the established multiplication, is usually such a tough domain.

13.
Svetoslav Markov, PAMM, 2006, 6(1): 685–686
In this paper intervals are viewed as approximate real numbers. A revised formula for interval multiplication of generalized intervals is given. This formula will be useful for further axiomatization of interval arithmetic and relevant implementations within computer algebra systems. Relations between multiplication of numbers and multiplication of errors are discussed. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

14.
Configurational comparative methods have gained in popularity among sociologists and political scientists. In particular, Qualitative Comparative Analysis (QCA) has attracted considerable attention in recent years. The process of Boolean minimization by means of the Quine-McCluskey algorithm (QMC) is the central procedure in QCA, but QMC's exactitude renders it memory intensive and slow in processing complex output functions. In this article, we introduce the enhanced QMC algorithm (eQMC) to alleviate these problems. eQMC is equally exact but, unlike QMC, capable of processing multivalent condition and outcome factors. Instead of replacing QMC, however, eQMC acts as an optimizing complement in contexts of limited empirical diversity. We demonstrate its speed and computer memory performance through simulations.
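The core combining step of QMC for the binary case can be sketched compactly (eQMC's multivalent factors and optimizations are not reproduced here): implicants that differ in exactly one bit are repeatedly merged, and whatever is never merged is a prime implicant.

```python
# Implicants are bit strings with "-" marking an eliminated variable.

def combine(a, b):
    """Merge two implicants differing in exactly one bit, else None."""
    if any((x == "-") != (y == "-") for x, y in zip(a, b)):
        return None                     # dashes must be in the same places
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1:
        i = diff[0]
        return a[:i] + "-" + a[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    """All prime implicants of the function with the given minterms."""
    terms = {format(m, "0{}b".format(nbits)) for m in minterms}
    primes = set()
    while terms:
        nxt, used = set(), set()
        for a in terms:
            for b in terms:
                c = combine(a, b)
                if c is not None:
                    nxt.add(c)
                    used.update((a, b))
        primes |= terms - used          # uncombined terms are prime
        terms = nxt
    return primes
```

For f(a, b) = a OR b (minterms 1, 2, 3) this yields the primes "1-" and "-1"; the exhaustive pairing over all terms in every round is also where QMC's memory and time costs come from.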

15.
In this paper, we deduce general multiplication formulas for hypercomplex monogenic and polymonogenic generalizations of Eisenstein series that are related to translation groups. In particular, a criterion for paravector multiplication of arbitrary finite-dimensional lattices to be integral is developed. Under these number-theoretic conditions it is then possible to transfer the concept of complex multiplication of the ℘-function to the framework of its hypercomplex higher-dimensional analogues within the Clifford analysis setting. We derive explicit formulas for the hypercomplex division values of the hypercomplex monogenic and polymonogenic Eisenstein series for lattices with hypercomplex multiplication. We also provide applications to function theory. In particular, it will be shown that a non-constant polymonogenic function that satisfies one of the specific integer multiplication formulas of the Clifford-analytic Eisenstein series must have singularities. Furthermore, it coincides with one of those polymonogenic Eisenstein series whenever all the singularities are distributed in the form of a lattice and all have the same order and principal parts.

16.
We define a special multiplication of function series (skew multiplication) and a generalized Riemann-Stieltjes integral with function series as integration arguments. The generalized integrals and the skew multiplication are related by an integration by parts formula. The generalized integrals generate a family of linear generalized integral equations, which includes a family (represented in integral form via the Riemann-Stieltjes integral) of linear differential equations with several deviating arguments. A specific feature of these equations is that all deviating functions are defined on the same closed interval and map it into itself. This permits one to avoid specifying the initial functions and imposing any additional constraints on the deviating functions. We present a procedure for constructing the fundamental solution of a generalized integral equation. With respect to the skew multiplication, it is invertible and generates the product of the fundamental solution (a function of one variable) by its inverse function (a function of the second variable). Under certain conditions on the parameters of the equation, the product has all specific properties of the Cauchy function. We introduce the notion of adjoint generalized integral equation, obtain a representation of solutions of the original equation and the adjoint equation in generalized integral Cauchy form, and derive sufficient conditions for the convergence of solutions of a pair of adjoint equations.

17.
The defining characteristic of fixed interval scheduling problems is that each job has a finite number of fixed processing intervals. A job can be processed only in one of its intervals on one of the available machines, or is not processed at all. A decision has to be made about a subset of the jobs to be processed and their assignment to the processing intervals such that the intervals on the same machine do not intersect. These problems arise naturally in different real-life operations planning situations, including the assignment of transports to loading/unloading terminals, work planning for personnel, computer wiring, bandwidth allocation of communication channels, printed circuit board manufacturing, gene identification and examining computer memory structures. We present a general formulation of the interval scheduling problem, show its relations to cognate problems in graph theory, and survey existing models, results on computational complexity and solution algorithms.
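The simplest special case (one machine, one fixed interval per job, maximize the number of jobs processed) admits a classical greedy sketch; the general problem surveyed here, with several candidate intervals per job and multiple machines, is harder:

```python
def max_jobs(intervals):
    """Pick a maximum set of pairwise non-intersecting (start, end) intervals
    by always taking the earliest-finishing compatible interval."""
    chosen, free_at = [], float("-inf")
    for s, e in sorted(intervals, key=lambda iv: iv[1]):
        if s >= free_at:                # does not intersect the last choice
            chosen.append((s, e))
            free_at = e
    return chosen
```

The earliest-finish rule is optimal for this special case because finishing early can only leave more room for the remaining jobs.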

18.
In this work, we propose an efficient implementation of a finite-difference method employed to approximate the solutions of a system of partial differential equations that appears in the investigation of the growth of biological films. The associated homogeneous Dirichlet problem is discretized using a linear approach. This discretization yields a positivity- and boundedness-preserving implicit technique which is represented in vector form through the multiplication by a sparse matrix. A straightforward implementation of this methodology would require a substantial amount of computer memory and time, but the problem is conveniently coded using a continual reduction of the zero sub-matrices of the representing matrix. In addition to the conditions that guarantee the positivity and the boundedness of the numerical approximations, we establish some parametric constraints that assure that the same properties for the discrete total mass at each point of the mesh-grid and each discrete time are actually satisfied. Some simulations are provided in order to illustrate both the performance of the implementation, and the preservation of the positivity and the boundedness of the numerical approximations.
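The memory-saving idea of storing only the nonzero entries of the representing matrix can be illustrated with a compressed sparse row (CSR) sketch (illustrative; the paper instead continually reduces the zero sub-matrices of its particular matrix):

```python
# CSR stores a sparse matrix as three flat arrays: the nonzero values,
# their column indices, and row pointers delimiting each row's slice.

def to_csr(dense):
    data, cols, rowptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                cols.append(j)
        rowptr.append(len(data))
    return data, cols, rowptr

def csr_matvec(csr, x):
    """y = M x touching only the stored nonzeros."""
    data, cols, rowptr = csr
    return [sum(data[k] * x[cols[k]] for k in range(rowptr[i], rowptr[i + 1]))
            for i in range(len(rowptr) - 1)]
```

For the banded matrices produced by finite differences, this keeps both the storage and each matrix-vector product proportional to the number of nonzeros rather than to n².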

19.
When a dynamical system with multiple point attractors is released from an arbitrary initial condition, it will relax into a configuration that locally resolves the constraints or opposing forces between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimizes these constraints by this method is unlikely or may take many attempts. Here, we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower energy configurations, more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimize total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely “recalling” low energy states that have been previously visited but “predicting” their location by generalizing over local attractor states that have already been visited. This “self‐modeling” framework, i.e., a system that augments its behavior with an associative memory of its own attractors, helps us better understand the conditions under which a simple locally mediated mechanism of self‐organization can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph coloring and distributed task allocation problems. © 2010 Wiley Periodicals, Inc. Complexity 16: 17–26, 2011
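A toy version of this "self-modeling" loop can be sketched with a Hopfield-style network (an assumed simplification of the paper's setup: deterministic relaxation plus a Hebbian update that deepens visited attractors):

```python
# States are lists of +/-1 spins; W is a symmetric weight matrix.

def relax(W, s, sweeps=10):
    """Deterministic asynchronous relaxation to a local attractor."""
    s = list(s)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n) if j != i)
            s[i] = 1 if h >= 0 else -1
    return s

def hebbian_update(W, s, rate=0.1):
    """Strengthen the visited configuration s in the weights."""
    n = len(s)
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i][j] += rate * s[i] * s[j]

def energy(W, s):
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n) if i != j)
```

Repeating relax-then-reinforce lowers the energy of the visited attractor, so the system returns to it from a wider range of initial conditions, the amplification effect the paper analyzes.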

20.
Computers, and computer‐related thinking structures, are only gradually influencing mathematics education. On the one hand, there is a discrepancy between involved teachers who already have changed their own classroom teaching to a great extent, and a majority of mathematics teachers who have not yet taken notice of the computer for teaching purposes. On the other hand, knowledge of the computer and of algorithms is frequently merely added to the mathematical subject matter. In contrast, the authors argue that it is necessary to genuinely integrate such subject matter, and to include general topics such as social impact and changed attitudes toward application. With regard to implementation, they develop concrete ideas which are aligned in a differentiated manner to the specific situation and the opportunities offered in the Federal Republic of Germany. The rationale for that is that only such reference to a specific situation will provide an opportunity for readers abroad to usefully apply approaches and ideas to the situation given in their own cultural environment.

An abbreviated version of this paper for cursory reading or other purposes has been marked by bold lines on the margin.

