Found 20 similar documents (search time: 0 ms)
1.
Cartesian product (CP) files have been shown to be very effective for partial match retrieval. Further, the Disk Modulo (DM) allocation method has been shown to be a simple and effective method for allocating CP files onto multiple independently accessible disks to further facilitate partial match retrieval. In this paper, a useful criterion for the DM method to be the best allocation method for a given CP file is presented. The presented criterion is much more general than that suggested previously, which makes the DM allocation method much more applicable for real applications.
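The DM method itself is simple to state: each bucket of a CP file, identified by its coordinate vector, is assigned to the disk whose number is the sum of the coordinates modulo the number of disks. A minimal sketch of that rule (function name and example coordinates are illustrative, not from the paper):

```python
def disk_modulo(coords, num_disks):
    """Disk Modulo allocation: map a CP-file bucket, identified by its
    coordinate vector (c1, ..., ck), to disk (c1 + ... + ck) mod num_disks."""
    return sum(coords) % num_disks

# A 3x3 Cartesian-product file spread over 3 disks: every disk is used,
# so a partial-match query fixing one attribute touches buckets on
# different disks and can be served in parallel.
buckets = [(i, j) for i in range(3) for j in range(3)]
assert {disk_modulo(b, 3) for b in buckets} == {0, 1, 2}
```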
2.
Jeffrey H. Kingston 《BIT Numerical Mathematics》1986,26(2):156-163
Henriksen's algorithm is a priority queue implementation that has been proposed for the event list found in discrete event simulations. It offers several practical advantages over heaps and tree structures. Although individual insertions of O(n) complexity can easily be demonstrated to exist, the self-adjusting nature of the data structure seems to ensure that these will be rare. For this reason, a better measure of total running time is the amortized complexity: the worst case over a sequence of operations, rather than for a single operation. We show that Henriksen's algorithm has an amortized complexity of Θ(n^(1/2)) per insertion, O(1) per extract_min operation, and O(log n) for isolated deletions.
3.
In the present report, Interpolation search, Fast search and the Pegasus method are compared with respect to their performance in searching ordered disk files for several key distributions. The aim is to study the effect of page capacity on searching performance. The cost metric is the number of page accesses, not key comparisons. Numerical results are illustrated, and a new approximate formula is derived giving an estimate of the number of page accesses for the Interpolation algorithm under uniform distributions.
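For reference, the core probing rule of Interpolation search estimates where a key "ought" to sit under a uniform-distribution assumption. This sketch operates on an in-memory sorted list rather than disk pages, so each probe stands in for a page access; names are illustrative:

```python
def interpolation_search(keys, target):
    """Search a sorted list of numeric keys by probing at the position
    predicted by linear interpolation between the endpoint keys."""
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        if keys[hi] == keys[lo]:
            pos = lo  # all keys in range are equal; probe anywhere
        else:
            # Interpolate: fraction of the key range covered by target.
            pos = lo + (hi - lo) * (target - keys[lo]) // (keys[hi] - keys[lo])
        if keys[pos] == target:
            return pos
        if keys[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1  # not found

# Even keys 0..98: the first probe for 42 lands exactly on it.
assert interpolation_search(list(range(0, 100, 2)), 42) == 21
```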
4.
This paper examines a partial match retrieval scheme which supports range queries for highly dynamic databases. The scheme relies on order-preserving multi-attribute hashing. In general, designing optimal indexes is NP-hard. Greedy algorithms used to determine the optimal indexes for simple partial match queries are not directly applicable, because there is a larger number of queries to consider in determining the optimal indexes. In this paper we present heuristic algorithms which provide near-optimal solutions. The optimisation scheme we propose can be used to design other dynamic file structures, such as the grid file, BANG file and multilevel grid file, to further enhance their retrieval performance by taking the query distribution into consideration.
5.
The average cost for answering partial-match queries can be dramatically reduced by storing multiple copies of the data, each with a different clustering. We analyse the cost benefits (in terms of page accesses) of this arrangement and present heuristic algorithms for determining a near-minimal-cost file organisation for a given probability distribution of queries. We also show how constraining the range of values for specific attributes affects the usefulness of maintaining multiple copies.
6.
The problem examined in this report is the calculation of the average wasted space at the end of a block when variable-length records are inserted into a file. Previous efforts are based on approximations. Here, a detailed analysis based on Markov chains gives the exact solution. A framework is presented which shows the relations between the previous approaches. The proposed model includes the previous models as special limiting cases. Simulation results close to the analytic results are also presented. This research was sponsored partially by the National Science Foundation under grants DCR-86-16833, IRI-8719458 and IRI-8958546, and by the Air Force Office of Scientific Research under grant AFOSR-89-0303.
7.
Lars-erik Thorelli 《BIT Numerical Mathematics》1985,25(2):358-378
A variant of the static part of Milner's language CCS is presented. It can be used for describing the construction of systems from modules, where the interconnection between modules is either export-import relationships of, for instance, procedures and types, or alternatively many-to-one communication channels. The semantics of the language is specified on two levels, the exterior level and the interior level. The exterior-level semantics determines the legality of expressions, given externally visible properties of their constituent modules, while the interior-level semantics associates a certain class of labeled directed graphs with expressions of the language.
8.
Lars Lundberg 《BIT Numerical Mathematics》1993,33(2):190-213
In multiprocessors with static allocation of processes to processors, scheduling can be done locally for each processor. The scheduling strategy may have a dramatic effect on the execution time of a parallel program. Finding an optimal schedule is an NP-hard problem, and very little is known of how close the heuristic solutions get. The major result here is a theorem stating that if certain program parameters, which can be obtained from an execution of the program on a single processor, are known, the execution time of the optimal schedule can be calculated within a factor equal to the largest number of border processes on one processor. Border processes are processes which communicate with other processors. The program parameters are obtained using a previously developed tool. Due to the generality of this theorem, the proof is rather complex because it has to cover a large range of situations. The theorem itself, however, is easy to apply, making it possible to compare the performance of different scheduling strategies with the optimal case. The proof also gives important hints on how to design efficient scheduling algorithms for statically allocated programs.
9.
S. D. Lang 《BIT Numerical Mathematics》1990,30(1):42-50
For an ordered file of records with uniformly distributed key values, we examine an existing batched searching algorithm based on recursive use of interpolation searches. The algorithm, called Recursive Batched Interpolation Search (RBIS) in this paper, uses a divide-and-conquer technique for batched searching. The expected-case complexity of the algorithm is shown to be O(m log log(2n/m) + m), where n is the size of the file and m is the size of the query batch. Simulations are performed to demonstrate the savings of batched searching using RBIS. Also, simulations are performed to compare alternative batched searching algorithms which are based on either interpolation search or binary search. When the file's key values are uniformly distributed, the simulation results confirm that interpolation-search based algorithms are superior to binary-search based algorithms. However, when the file's key values are not uniformly distributed, a straightforward batched interpolation search deteriorates quickly as the batch size increases, but algorithm RBIS still outperforms binary-search based algorithms once the batch size passes a threshold value.
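The divide-and-conquer idea behind RBIS can be sketched as follows: resolve the middle query of the sorted batch first, then recurse on each half of the batch against the matching sub-file, so later searches scan ever-smaller ranges. For brevity this sketch uses a binary probe (`bisect`) in place of the interpolation probe; function names are illustrative, not from the paper:

```python
import bisect

def batched_search(keys, batch):
    """Divide-and-conquer batched search in the spirit of RBIS: locate
    the middle query of the sorted batch, then recurse on the two halves
    of the batch restricted to the corresponding halves of the file.
    Returns {query: index or -1}."""
    results = {}

    def go(lo, hi, queries):
        if not queries:
            return
        mid = len(queries) // 2
        q = queries[mid]
        pos = bisect.bisect_left(keys, q, lo, hi)
        results[q] = pos if pos < hi and keys[pos] == q else -1
        go(lo, pos, queries[:mid])       # smaller queries: left sub-file
        go(pos, hi, queries[mid + 1:])   # larger queries: right sub-file

    go(0, len(keys), sorted(batch))
    return results

# File of keys 0, 5, ..., 45; 27 is absent.
assert batched_search(list(range(0, 50, 5)), [10, 27, 45]) == {10: 2, 27: -1, 45: 9}
```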
10.
Sigurd Meldal 《BIT Numerical Mathematics》1986,26(2):164-174
We give transformation rules for the concurrent parts of Hoare's language CSP, transforming concurrent CSP programs into nondeterministic, sequential programs. On the basis of these transformations we define an axiomatic semantics for CSP with nested concurrency. This axiomatic system includes a rule for binary, associative process composition, enabling modular verification dealing with parts of concurrent systems as well as full programs. The proof system is fully abstract, in the sense that the internal structure of processes is irrelevant to the specification insofar as it is not externally observable. An outline of a verification of a recursive, concurrent sorter is given as an example.
11.
The dynamic external hashing proposed in this paper allocates records according to the spiral storage technique. Separators derived from the signature technique are used for distinguishing primary from overflow records and for subdividing overflow chains into segments allocated into the primary file. Single access retrieval is obtained by means of a main memory index with an entry per bucket and containing separators and pointers. While this method uses a larger index than other recent proposals, it is much more convenient regarding load factor and insertion cost. Furthermore, file expansion is directed by various control parameters, thus allowing the user to choose the most suitable policy for his application.
12.
Amortized analysis is a useful tool for analyzing the time complexity of performing a sequence of operations. The disk scheduling problem in general involves a sequence of requests. In this paper, the performances of representative disk scheduling algorithms, SSTF, SCAN, and N-Step SCAN, are analyzed in the amortized sense. A lower bound on the amortized complexity for the disk scheduling problem is also derived. According to our analysis, SCAN is not only better than SSTF and N-Step SCAN, but is also an optimal algorithm. Various authors have studied the disk scheduling problem based on probability models and concluded that the most acceptable performance is obtained from SCAN. Our result therefore supports their conclusion. This research was supported by the National Science Council, Taiwan, R.O.C., under contract NSC80-0408-E009-11.
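As a point of reference, SCAN (the elevator policy) services all pending requests in one sweep direction before reversing, which is what bounds its total head movement over a sequence of requests. A minimal single-batch sketch (names and the one-sweep simplification are ours, not from the paper):

```python
def scan_order(requests, head, direction=1):
    """SCAN (elevator) service order: sweep from the current head
    position toward increasing cylinders (direction=1), then reverse
    and serve the remaining requests; direction=-1 sweeps down first."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == 1 else down + up

# Head at cylinder 53, sweeping up: classic textbook-style request set.
assert scan_order([98, 183, 37, 122, 14, 124, 65, 67], 53) == \
    [65, 67, 98, 122, 124, 183, 37, 14]
```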
13.
Christos Faloutsos 《BIT Numerical Mathematics》1988,28(4):736-754
We design and analyze integrated ways of applying the signature file approach to text and attributes simultaneously. In traditional signature file methods, the records are stored sequentially in the main file; for every record, a hash-coded abstraction of it (the record signature) is created and stored in the signature file (usually sequentially). To resolve a query, the signature file is scanned; the signatures retrieved correspond to all the qualifying records, plus some false drops. Here, we extend some signature file methods, namely superimposed coding and disjoint coding, to handle text and attributes. We develop a mathematical model and derive formulas for the optimal choice of parameters. The proposed methods achieve significant performance improvements because they can take advantage of the skewed distribution of the queries. Depending on the query frequencies, the false drop probability can be reduced 40–45 times (97% savings) for the same overhead. Also with the University of Maryland Institute for Advanced Computer Studies (U.M.I.A.C.S.). This research was sponsored partially by the National Science Foundation under grant DCR-86-16833.
14.
The problem of file organization which we consider involves altering the placement of records on pages of a secondary storage device. In addition, we want this reorganization to be done in place, i.e., using the file's original storage space for the newly reorganized file. The motivation for such a physical change is to improve the database system's performance. For example, by placing frequently and jointly accessed records on the same page or pages, we can try to minimize the number of page accesses made in answering a set of queries. The optimal assignment (or reassignment) of records to clusters is exactly what record clustering algorithms attempt to do. However, record clustering algorithms usually do not solve the entire problem, i.e., they do not specify how to efficiently reorganize the file to reflect the clustering assignment which they determine. Our algorithm is a companion to general record clustering algorithms, since it actually transforms the file. The problem of optimal file reorganization is NP-hard. Consequently, our reorganization algorithm is based on heuristics. The algorithm's time and space requirements are reasonable and its solution is near optimal. In addition, the reorganization problem which we consider in this paper is similar to the problem of join processing when indexes are used. The research of this author was partially supported by the National Science Foundation under grant IST-8696157.
15.
In this paper, we present a method, called the partition data allocation method, to allocate a set of data into a multi-disk system. Using the partition data allocation method, we shall show that, in a two-disk system, the performance is influenced by how we disperse similar data onto different disks.
16.
C. C. Chang 《BIT Numerical Mathematics》1988,28(2):205-214
This paper is concerned with the allocation of multi-attribute records on several disks so as to achieve a high degree of concurrency of disk access when responding to partial match queries. An algorithm to distribute a set of multi-attribute records onto different disks is presented. Since our allocation method uses principal component analysis, this concept is first introduced. We then use it to generate a set of real numbers which are the projections on the first principal component direction and can be viewed as hashing addresses. Then we propose an algorithm based upon these hashing addresses to allocate multi-attribute records onto different disks. Experimental results show that our method can indeed be used to solve the multi-disk data allocation problem for concurrent accessing.
17.
In this paper, two different structures for inverted files, called shared and separate inverted files, are analyzed and compared when the relational join operation is taken into account. Results of some experiments are given which show that the shared inverted organization is always advantageous when the inverted files are not sorted, and almost always advantageous when the files are sorted.
18.
Jorma Tarhio 《BIT Numerical Mathematics》1990,30(3):437-449
Evaluation of inherited attributes is a problem in conjunction with LR parsing, because the derivation tree is incomplete during parsing. An evaluation scheme for inherited attributes is presented based on a restricted grammar class, uncle-attributed grammars. A transformation to the uncle-attributed form is described for L-attributed grammars.
19.
Chang and Wu have proposed a letter-oriented perfect hashing scheme based on sparse matrix compression. We present a refinement of the Chang-Wu scheme. Experimental evaluation shows that our refinement achieves more efficient storage utilization than the Chang-Wu method, making it valuable in practical implementations of hashing for large sets of keys.
20.
Matti O. Jokinen 《BIT Numerical Mathematics》1989,29(2):227-238
In older languages, lists, trees and graphs are represented with sets of arrays where indices of elements correspond to pointers to the nodes of the data structure. We present an algorithm that replaces such arrays with objects allocated dynamically from the heap, and indices with true pointers. Generated pointers are strongly typed, and elements of logically related arrays are combined into records. The algorithm is potentially useful, especially in automatic translation between high-level programming languages.