Similar documents
 20 similar documents found (search time: 484 ms)
1.
2.
This paper presents the results of extensive computational testing of the modified damped Newton algorithm for solving variational inequality problems presented in Part I [8].

3.
A technique is given for permuting the rows and columns of a general square matrix M into a fully reduced matrix C. The corresponding algorithm is described, its validity is proved, and it is compared with Harary's algorithm [5].
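Fully reducing M amounts to finding the strongly connected components of the digraph whose arcs are the nonzero off-diagonal entries of M; listing the components in reverse topological order yields a symmetric permutation to block lower-triangular form. A minimal Python sketch of that graph step (Kosaraju's algorithm, not the paper's own permutation procedure):

```python
def scc_blocks(M):
    """Strongly connected components of the digraph with an arc i -> j
    whenever M[i][j] != 0 (i != j), returned in reverse topological
    order of the condensation.  Permuting rows and columns of M so each
    component's vertices are contiguous, in this order, gives a block
    lower-triangular ("fully reduced") form.  Illustrative sketch only."""
    n = len(M)
    adj = [[j for j in range(n) if j != i and M[i][j]] for i in range(n)]
    radj = [[] for _ in range(n)]
    for i in range(n):
        for j in adj[i]:
            radj[j].append(i)          # reversed graph for the second pass
    order, seen = [], [False] * n
    def dfs1(u):
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)                # post-order finishing times
    for u in range(n):
        if not seen[u]:
            dfs1(u)
    comp = [-1] * n
    def dfs2(u, c):
        comp[u] = c
        for v in radj[u]:
            if comp[v] == -1:
                dfs2(v, c)
    c = 0
    for u in reversed(order):          # Kosaraju's second pass
        if comp[u] == -1:
            dfs2(u, c)
            c += 1
    blocks = [[] for _ in range(c)]
    for u in range(n):
        blocks[comp[u]].append(u)
    return blocks
```

For example, a matrix whose digraph has the two-cycle {0, 1} and a sink vertex 2 splits into the blocks [[0, 1], [2]].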

4.
In this paper, a new fast algorithm for the computation of the distance of a stable matrix to the unstable matrices is provided. The method uses Newton’s method to find a two-dimensional Jordan block corresponding to a pure imaginary eigenvalue in a certain two-parameter Hamiltonian eigenvalue problem introduced by Byers [R. Byers, A bisection method for measuring the distance of a stable matrix to the unstable matrices, SIAM J. Sci. Statist. Comput. 9 (1988) 875-881]. This local method is augmented by a test step, previously used by other authors, to produce a global method. Numerical results are presented for several examples and comparison is made with the methods of Boyd and Balakrishnan [S. Boyd, V. Balakrishnan, A regularity result for the singular values of a transfer matrix and a quadratically convergent algorithm for computing its L∞-norm, Systems Control Lett. 15 (1990) 1-7] and He and Watson [C. He, G.A. Watson, An algorithm for computing the distance to instability, SIAM J. Matrix Anal. Appl. 20 (1999) 101-116].
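For context, Byers' bisection baseline that the paper accelerates can be sketched directly: σ exceeds the distance to instability exactly when the Hamiltonian matrix H(σ) = [[A, −σI], [σI, −A*]] has a purely imaginary eigenvalue. A hedged NumPy sketch (the tolerances and the use of the 2-norm of A as an upper bound are our assumptions, and this is the bisection method, not the paper's Newton-based one):

```python
import numpy as np

def has_imaginary_eig(A, sigma, tol=1e-8):
    """Byers' test: H(sigma) has a purely imaginary eigenvalue
    iff sigma >= beta(A), the distance to instability."""
    n = A.shape[0]
    H = np.block([[A, -sigma * np.eye(n)],
                  [sigma * np.eye(n), -A.conj().T]])
    return bool(np.any(np.abs(np.linalg.eigvals(H).real) < tol))

def distance_to_instability(A, tol=1e-6):
    """Bisection on sigma (Byers 1988); an illustrative sketch.
    beta(A) <= ||A||_2 gives the initial upper bound."""
    lo, hi = 0.0, np.linalg.norm(A, 2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if has_imaginary_eig(A, mid):
            hi = mid                 # sigma >= beta: shrink from above
        else:
            lo = mid                 # sigma < beta: shrink from below
    return 0.5 * (lo + hi)
```

For a normal stable matrix the distance to instability equals the distance of the spectrum to the imaginary axis, which gives a simple sanity check.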

5.
This paper studies the problem of finding efficient deletion algorithms for the coalesced hashing method, in which a portion of memory (called the address region) serves as the range of the hash function while the rest of memory (called the cellar) is devoted solely to storing records that collide when inserted. First we present a deletion algorithm, which solves the open problem described in [6, Sect. 6.4–23]. The main result of this paper, Theorem 3, shows that the deletion algorithm preserves randomness for the special case of standard coalesced hashing (when there is no cellar), in that deleting a record is in some sense like never having inserted it. This means that the formulas for the search times (which are analyzed in [8, 9]) are still valid after deletions. There is as yet no known deletion algorithm that preserves randomness for the general case (when there is a cellar). We give some reasons why and then discuss some heuristics that seem to make deletions practical anyway.
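For readers unfamiliar with the data structure, insertion and search in standard coalesced hashing (the no-cellar case to which Theorem 3 applies) can be sketched as follows; the deletion algorithm that is the paper's subject is deliberately omitted, since it is the subtle part:

```python
class CoalescedHashTable:
    """Standard coalesced hashing (no cellar): a colliding record goes
    into the highest-numbered empty slot and is linked onto the chain
    it collided with, so chains from different home slots may coalesce.
    Illustrative sketch of insert/search only."""

    def __init__(self, size=11):
        self.size = size
        self.keys = [None] * size
        self.next = [-1] * size    # -1 terminates a collision chain

    def _hash(self, key):
        return hash(key) % self.size

    def insert(self, key):
        i = self._hash(key)
        if self.keys[i] is None:
            self.keys[i] = key
            return
        # walk the (possibly coalesced) chain to its end
        while self.keys[i] != key and self.next[i] != -1:
            i = self.next[i]
        if self.keys[i] == key:
            return                 # already present
        # place the record in the highest-numbered empty slot
        j = self.size - 1
        while j >= 0 and self.keys[j] is not None:
            j -= 1
        if j < 0:
            raise RuntimeError("table full")
        self.keys[j] = key
        self.next[i] = j           # coalesce: link new slot into chain

    def contains(self, key):
        i = self._hash(key)
        while i != -1:
            if self.keys[i] == key:
                return True
            i = self.next[i]
        return False
```

With table size 7, the keys 0, 7, 14 all hash to slot 0 and end up chained through slots 6 and 5.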

6.
A hypergraph is linear if any two distinct hyperedges have at most one common vertex. The existence of a polynomial algorithm is shown for deciding whether a graph of minimum degree δ ≥ 19 is the intersection graph of a linear 3-uniform hypergraph. This result improves a corollary of the finite forbidden subgraph characterization proved for δ ≥ 69 by Naik et al. in [8]. Essentially the same methods yield a polynomial recognition algorithm for the intersection graph of a linear r-uniform hypergraph, r ≥ 3, provided the minimum edge-degree of the graphs is at least 2r² − 3r + 1. This improves on the cubic bound that follows from the corresponding finite characterization result in [8].

7.
Recently the authors have defined a coherent prohomotopy category of topological spaces CPHTop [5]. In the present paper, which is a sequel to Part I [6], the authors define a strong homology functor H_s: CPHTop → Ab. The results of this paper are essential for the construction of a Steenrod-Sitnikov homology theory for arbitrary spaces.

8.
In this paper, we prove strong convergence theorems by the hybrid method for a family of hemi-relatively nonexpansive mappings in a Banach space. Our results improve and extend the corresponding results given by Qin et al. [Xiaolong Qin, Yeol Je Cho, Shin Min Kang, Haiyun Zhou, Convergence of a modified Halpern-type iteration algorithm for quasi-φ-nonexpansive mappings, Appl. Math. Lett. 22 (2009) 1051-1055], and at the same time, our iteration algorithm is different from the Kimura and Takahashi algorithm, which is a modified Mann-type iteration algorithm [Yasunori Kimura, Wataru Takahashi, On a hybrid method for a family of relatively nonexpansive mappings in a Banach space, J. Math. Anal. Appl. 357 (2009) 356-363]. In addition, we succeed in applying our algorithm to systems of equilibrium problems which contain a family of equilibrium problems.

9.
Summary: In Part II of this paper we present a rigorous analysis of the Iterated Defect Correction, applied to two-point boundary value problems, which was introduced in Part I of this paper [1]. A complete proof of Theorem 5.1 of [1] is given.

10.
In this paper we improve Fujimoto’s sufficient condition for a finite set to be a Unique Range Set under a relaxed sharing hypothesis. We introduce a new sharing notion that directly improves one result of the paper [3]. In particular, we rectify the application part of [3], and extend and improve all results of [3].

11.
The purpose of this paper is to present an algorithm for matrix multiplication based on a formula discovered by Pan [7]. For matrices of order up to 10 000, the nearly optimum tuning of the algorithm results in a rather clear non-recursive one- or two-level structure with the operation count comparable to that of the Strassen algorithm [9]. The algorithm takes less workspace and has better numerical stability as compared to the Strassen algorithm, especially in Winograd's modification [2]. Moreover, its clearer and more flexible structure is potentially more suitable for efficient implementation on modern supercomputers. Copyright © 1999 John Wiley & Sons, Ltd.
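As a point of reference, the Strassen algorithm used for comparison above can be sketched recursively; this is the classical seven-product scheme, not Pan's formula, and the leaf size at which it falls back to ordinary multiplication is an illustrative choice:

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Strassen multiplication for n x n matrices, n a power of two.
    Classical seven-product recursion; a sketch, not a tuned kernel."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B                      # ordinary multiplication at the leaves
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # the seven Strassen products
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty_like(A)
    C[:m, :m] = M1 + M4 - M5 + M7
    C[:m, m:] = M3 + M5
    C[m:, :m] = M2 + M4
    C[m:, m:] = M1 - M2 + M3 + M6
    return C
```

The recursion trades one multiplication for extra additions at each level, which is the source of both the lower operation count and the weaker numerical stability mentioned above.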

12.
13.
This paper continues the study of generalized amalgamation properties begun in [1], [2], [3], [5] and [6]. Part of the paper provides a finer analysis of the groupoids that arise from failure of 3-uniqueness in a stable theory. We show that such groupoids must be abelian and we link the binding group of the groupoids to a certain automorphism group of the monster model, showing that the group must be abelian as well. We also study connections between n-existence and n-uniqueness properties for various “dimensions” n in the wider context of simple theories. We introduce a family of weaker existence and uniqueness properties. Many of these properties did appear in the literature before; we give a category-theoretic formulation and study them systematically. Finally, we give examples of first-order simple unstable theories showing, in particular, that there is no straightforward generalization of the groupoid construction in an unstable context.

14.
Recently two shifting algorithms were designed for two optimum tree partitioning problems: the problem of max-min q-partition [4] and the problem of min-max q-partition [1]. In this work we investigate the applicability of these two algorithms to max-min and min-max partitioning of a tree for various different weighting functions. We define the families of basic and invariant weighting functions. It is shown that the first shifting algorithm yields a max-min q-partition for every basic weighting function. The second shifting algorithm yields a min-max q-partition for every invariant weighting function. In addition, a modification of the second algorithm yields a min-max q-partition for the noninvariant diameter weighting function.

15.
This paper analyzes and evaluates an efficient n-dimensional global optimization algorithm. It is a natural n-dimensional extension of the algorithm of Casado et al. [1]. This algorithm takes advantage of all available information to estimate better bounds of the function. Numerical comparison made on a wide set of multiextremal test functions has shown that on average the new algorithm works faster than a traditional interval analysis global optimization method.

16.
This paper presents an efficient heuristic algorithm for the one-dimensional loading problem in which the goal is to pack homogeneous blocks of given length and weight in a container in such a way that the center of gravity of the packed blocks is as close to a target point as possible. The proposed algorithm is based on the approximation of this problem as a knapsack problem. This algorithm has the same computational complexity but a better worst-case performance than the algorithm recently proposed by Amiouny et al. [Oper. Res. 40 (1992) 238]. Moreover, the computational results also show that, in general, it performs better on randomly generated problems.
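The knapsack subproblem to which the loading problem is approximated can be illustrated with the textbook 0/1 dynamic program; the reduction itself (how block positions around the target point map onto knapsack items) is the paper's contribution and is not reproduced here:

```python
def knapsack_max_value(items, capacity):
    """0/1 knapsack by dynamic programming over capacities.
    items: list of (weight, value) pairs with integer weights.
    Illustrates only the generic subproblem, not the paper's reduction."""
    best = [0] * (capacity + 1)
    for w, v in items:
        # iterate capacities in reverse so each item is used at most once
        for c in range(capacity, w - 1, -1):
            if best[c - w] + v > best[c]:
                best[c] = best[c - w] + v
    return best[capacity]
```

For instance, with items (3, 4), (4, 5), (2, 3) and capacity 6, the optimum picks the last two items for total value 8.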

17.
The recursive method for computing the generalized LM-inverse of a constant rectangular matrix augmented by a column vector is proposed in Udwadia and Phohomsiri (2007) [16] and [17]. The corresponding algorithm for the sequential determination of the generalized LM-inverse is established in the present paper. We prove that the introduced algorithm for computing the generalized LM-inverse and the algorithm for the computation of the weighted Moore-Penrose inverse developed by Wang and Chen (1986) in [23] are equivalent algorithms. Both of the algorithms are implemented in the present paper using the package MATHEMATICA. Several rational test matrices and randomly generated constant matrices are tested and the CPU time is compared and discussed.

18.
19.
A single machine scheduling problem with controllable processing times and compression costs is considered. The objective is to find an optimal sequence that minimizes the cost of completion times plus the cost of compression. The complexity of this problem is still unknown. In Part I of this paper, the authors considered a special case where the compression times and the compression costs are equal among all jobs; that case proved polynomially solvable by an O(n^2) algorithm. In this part (Part II), the general case where the controllable processing times and the compression costs are not equal is discussed. The authors propose two heuristics, the first based on some previous work and the second based on the algorithm developed in Part I. Computational results are presented to show the efficiency and the robustness of these heuristics.

20.
Journal of Complexity, 1994, 10(2): 199-215
We consider two hybrid algorithms for finding an ϵ-approximation of a root of a convex real function that is twice differentiable and satisfies a certain growth condition on the interval [0, R]. The first algorithm combines a binary search procedure with Newton's method. The binary search produces an interval contained in the region of quadratic convergence of Newton's method. The computational cost of the binary search, as well as that of Newton's method, is of order O(log log(R/ϵ)). The second algorithm combines a binary search with the secant method in a similar fashion. This results in a lower overall computational cost when the cost of evaluating the derivative is more than 0.44042 times the cost of evaluating the function. Our results generalize some recent results of Ye.
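The first hybrid can be sketched as follows for a convex increasing function on [0, R]; the fixed number of bisection steps used here as a switch-over rule is a simplification of the paper's criterion for reaching the region of quadratic convergence:

```python
def hybrid_root(f, fprime, R, eps=1e-12, bisect_steps=20):
    """Hedged sketch of the binary-search-plus-Newton hybrid for a
    convex increasing f with f(0) < 0 < f(R).  Binary search localizes
    the root; Newton from the right endpoint then converges
    monotonically for a convex function."""
    lo, hi = 0.0, R
    for _ in range(bisect_steps):      # localize the root
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    x = hi                             # start Newton from the right
    while abs(f(x)) > eps:
        x = x - f(x) / fprime(x)
    return x
```

Applied to f(x) = x² − 2 on [0, 2], the bisection phase pins the root down to about 2·10⁻⁶ and one or two Newton steps finish the job.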


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号