Similar Documents
20 similar documents found (search time: 31 ms)
1.
We use the Euler, Jacobi, Poincaré, and Brun matrix algorithms as well as two new algorithms to evaluate the continued fraction expansions of two vectors L related to two Davenport cubic forms g_1 and g_2. The Klein polyhedra of g_1 and g_2 were calculated in another paper. Here the integer convergents P_k given by the cited algorithms are considered with respect to the Klein polyhedra. We also study the periods of these expansions. It turns out that only the Jacobi and Bryuno algorithms can be regarded as satisfactory. Translated from Matematicheskie Zametki, Vol. 61, No. 3, pp. 339–348, March, 1997. Translated by V. E. Nazaikinskii.

2.
In this paper a new hybrid conjugate gradient algorithm is proposed and analyzed. The parameter β_k is computed as a convex combination of the Polak-Ribière-Polyak and the Dai-Yuan conjugate gradient parameters, i.e. β_k^N = (1 − θ_k) β_k^PRP + θ_k β_k^DY. The parameter θ_k in the convex combination is computed in such a way that the conjugacy condition is satisfied, independently of the line search. The line search uses the standard Wolfe conditions. The algorithm generates descent directions, and when the iterates jam the directions satisfy the sufficient descent condition. Numerical comparisons with conjugate gradient algorithms using a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that this hybrid computational scheme outperforms the known hybrid conjugate gradient algorithms. N. Andrei is a member of the Academy of Romanian Scientists, Splaiul Independenţei nr. 54, Sector 5, Bucharest, Romania.
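For concreteness, the convex combination of the two classical parameters can be sketched as follows (a minimal illustration only; the paper's θ_k comes from the conjugacy condition, whereas here θ_k is simply passed in, and all names are hypothetical):

```python
import numpy as np

def hybrid_beta(g_new, g_old, d_old, theta):
    """beta_N = (1 - theta) * beta_PRP + theta * beta_DY, with theta in [0, 1]."""
    y = g_new - g_old                                    # gradient difference y_k
    beta_prp = g_new.dot(y) / g_old.dot(g_old)           # Polak-Ribiere-Polyak
    beta_dy = g_new.dot(g_new) / d_old.dot(y)            # Dai-Yuan
    return (1.0 - theta) * beta_prp + theta * beta_dy

def next_direction(g_new, g_old, d_old, theta):
    """New search direction d_{k+1} = -g_{k+1} + beta_N * d_k."""
    return -g_new + hybrid_beta(g_new, g_old, d_old, theta) * d_old
```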

3.
Optimization, 2012, 61(2): 401–421
Abstract

We study the efficient set X_E for a multiple objective linear program by using its projection into the linear space L spanned by the independent criteria. We show that in the orthogonally complementary space of L the efficient points form a polyhedron, while in L an efficiency-equivalent polyhedron for the projection P(X_E) of X_E can be constructed by algorithms of outer and inner approximation types. These algorithms can also be used for generating all extreme points of P(X_E). Application to optimization over the efficient set of a multiple objective linear program is considered.

4.
In this article, we propose and analyze several numerical methods for the nonlinear delay reaction–diffusion system with smooth and nonsmooth solutions, using Quasi-Wilson nonconforming finite element methods in space and finite difference methods (including uniform and nonuniform L1 and L2-1σ schemes) in time. Optimal convergence results in the L2-norm and broken H1-norm, as well as H1-norm superclose results, are derived by virtue of two types of fractional Grönwall inequalities. Then the interpolation postprocessing technique is used to establish superconvergence results. Moreover, to improve computational efficiency, fast algorithms based on the sum-of-exponentials technique are built for the proposed numerical schemes. Finally, we present numerical experiments to confirm the theoretical results and show the effectiveness of the fast algorithms.
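For orientation, the uniform L1 discretization named above is, in its generic textbook form for a Caputo derivative of order α ∈ (0, 1), easy to state in code (this is not the paper's specific delay scheme or its nonuniform and L2-1σ variants; names are illustrative):

```python
import numpy as np
from math import gamma

def l1_caputo(u, tau, alpha):
    """Uniform L1 approximation of the Caputo derivative D_t^alpha u at t_n.

    u: values u^0, ..., u^n on a uniform grid with step tau; alpha in (0, 1).
    D_t^alpha u(t_n) ~ tau^(-alpha) / Gamma(2 - alpha)
                       * sum_{k=0}^{n-1} a_k (u^{n-k} - u^{n-k-1}),
    where a_k = (k+1)^(1-alpha) - k^(1-alpha).
    """
    n = len(u) - 1
    k = np.arange(n)
    a = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)   # L1 weights a_0, ..., a_{n-1}
    diffs = u[1:][::-1] - u[:-1][::-1]                    # u^{n-k} - u^{n-k-1}, k = 0, ..., n-1
    return tau ** (-alpha) / gamma(2.0 - alpha) * np.dot(a, diffs)

# For u(t) = t the L1 formula is exact: D_t^alpha t = t^(1-alpha) / Gamma(2-alpha).
t = np.linspace(0.0, 1.0, 11)
print(l1_caputo(t, 0.1, 0.5), 1.0 / gamma(1.5))
```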

5.
This paper is devoted to studying the problem of optimal recovery for the Sobolev function classes W_2^r(ℝ) in L_2(ℝ) by using the information of function values on denumerable points. For r ≥ 1, we determine that the optimal sampling points are arranged equidistantly in a suitable collection of sets of sampling points and find two kinds of cardinal spline interpolation as optimal algorithms. This author is supported by the National Natural Science Foundation of China.

6.
We study approximation of multivariate functions from a general separable reproducing kernel Hilbert space in the randomized setting with the error measured in the L norm. We consider algorithms that use standard information, consisting of function values, or general linear information, consisting of arbitrary linear functionals. The power of standard or linear information is defined as, roughly speaking, the optimal rate of convergence of algorithms using n function values or linear functionals. We prove under certain assumptions that the power of standard information in the randomized setting is at least equal to the power of linear information in the worst case setting, and that the powers of linear and standard information in the randomized setting differ by at most 1/2. These assumptions are satisfied for spaces with weighted Korobov and Wiener reproducing kernels. For the Wiener case, the parameters in these assumptions are prohibitively large, and therefore we also present less restrictive assumptions and obtain other bounds on the power of standard information. Finally, we study tractability, which means that we want to guarantee that the errors depend at most polynomially on the number of variables and tend to zero polynomially in n^(-1) when n function values are used.

7.
We consider domain subdivision algorithms for computing isotopic approximations of a nonsingular algebraic curve. The curve is given by a polynomial equation f(X,Y) = 0. Two algorithms in this area are from Snyder (1992), SIGGRAPH Comput. Graphics, 26(2), 121, and Plantinga and Vegter (2004), in Proc. Eurographics Symposium on Geometry Processing, pp. 245–254. We introduce a new algorithm that combines the advantages of these two algorithms: like Snyder, we use the parameterizability criterion for subdivision, and like Plantinga and Vegter, we exploit nonlocal isotopy. We further extend our algorithm in two important and practical directions: first, we allow subdivision cells to be rectangles with arbitrary but bounded aspect ratios; second, we extend the input domains to regions R_0 with arbitrary geometry which might not be simply connected. Our algorithm halts as long as the curve has no singularities in the region and intersects the boundary of R_0 transversally. Our algorithm is practical and easy to implement exactly. We report some very encouraging experimental results, showing that our algorithms can be much more efficient than the algorithms of Plantinga–Vegter and Snyder.
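To give a flavor of the subdivision idea (only the basic interval-arithmetic exclusion test, not the parameterizability criterion or the isotopy bookkeeping of the algorithms above; names and the example curve are illustrative):

```python
def iv_add(a, b):                       # interval addition
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):                       # interval multiplication
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def f_box(X, Y):
    """A (possibly loose) interval enclosure of f(x, y) = x^2 + y^2 - 1 over the box X x Y."""
    return iv_add(iv_add(iv_mul(X, X), iv_mul(Y, Y)), (-1.0, -1.0))

def subdivide(X, Y, depth, out):
    """Discard boxes certified not to meet the curve f = 0; keep the rest."""
    lo, hi = f_box(X, Y)
    if lo > 0.0 or hi < 0.0:            # 0 not in the enclosure: box cannot contain the curve
        return
    if depth == 0:                      # small enough: record as a candidate box
        out.append((X, Y))
        return
    mx, my = (X[0] + X[1]) / 2, (Y[0] + Y[1]) / 2
    for XX in ((X[0], mx), (mx, X[1])):
        for YY in ((Y[0], my), (my, Y[1])):
            subdivide(XX, YY, depth - 1, out)

boxes = []
subdivide((-2.0, 2.0), (-2.0, 2.0), 5, boxes)
print(len(boxes), "candidate boxes around the unit circle")
```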

8.
By combining the prototype theory and random set theory interpretations of vague concepts, a novel structure named information cell and a combined structure named information cell mixture model are proposed to represent the semantics of vague concepts. An information cell L_i on the domain Ω has a transparent cognitive structure 'L_i = about P_i', which is mathematically formalized by a 3-tuple ⟨P_i, d_i, δ_i⟩ comprising a prototype set P_i (⊆ Ω), a distance function d_i on Ω, and a density function δ_i on [0, +∞). An information cell mixture model on the domain Ω is actually a set of weighted information cells L_i. A positive neighborhood function of the information cell mixture model is introduced in this paper to reflect the belief distribution of positive neighbors of the underlying concept. An information cellularization algorithm is also proposed to learn the information cell mixture model from a training data set; it is a direct application of the k-means and EM algorithms. Information cell mixture models provide tools for information coarsening and concept modelling, and have potential applications in uncertain reasoning and classification.
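As a rough data-structure sketch (our own illustrative reading of the 3-tuple ⟨P_i, d_i, δ_i⟩, not the paper's exact semantics: here a point is scored as a positive neighbor by the density mass lying beyond its distance to the prototypes; all names are hypothetical):

```python
import numpy as np

class InformationCell:
    """Illustrative information cell <P, d, delta>: prototypes, distance, density on [0, inf)."""

    def __init__(self, prototypes, dist, density, eps_grid):
        self.prototypes = [np.asarray(p, dtype=float) for p in prototypes]  # prototype set P
        self.dist = dist                 # distance function d on the domain
        self.density = density           # density delta over distance thresholds
        self.eps_grid = eps_grid         # grid for numerical integration

    def neighborhood(self, x):
        """Score of x as a positive neighbor of 'about P': density mass beyond d(x, P)."""
        d_xP = min(self.dist(x, p) for p in self.prototypes)
        w = self.density(self.eps_grid)
        mask = self.eps_grid >= d_xP
        return float(np.trapz(w[mask], self.eps_grid[mask]))

# Hypothetical usage: one prototype, Euclidean distance, unit-mass exponential density.
eps = np.linspace(0.0, 10.0, 2001)
cell = InformationCell(
    prototypes=[[0.0, 0.0]],
    dist=lambda x, p: float(np.linalg.norm(np.asarray(x, dtype=float) - p)),
    density=lambda e: np.exp(-e),
    eps_grid=eps,
)
print(cell.neighborhood([1.0, 0.0]))     # roughly exp(-1) ~ 0.37
```

A weighted collection of such cells, with weights summing to one, would then mirror the information cell mixture model described above.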

9.
There are several different models of computation on which to base evaluations of VLSI sorting algorithms, and there are different measures of complexity. This paper revises complexity results under the linear model that were obtained under the constant model. This approach is motivated by expected technological development (see Mangir, 1983; Thompson and Raghavan, 1984; Vitanyi, 1984a and Vitanyi, 1984b). For the constant model we know that for medium-sized keys there are AT^2- and AP^2-optimal sorting algorithms with T ranging from ω(log n) to O(√nk) and P ranging from Ω(1) to O(√nk) (Bilardi, 1984). The main results of the asymptotic analysis of sorting algorithms under the linear model are that the lower bounds allow AT^2-optimal sorting algorithms only for T = Θ(√nk) but allow AP^2-optimal algorithms in the same range as under the constant model. Furthermore, the sorting algorithms presented in this paper meet these lower bounds. This proves that these bounds cannot be improved for k = Θ(log n). The building block for the realization of these sorting algorithms is a comparison-exchange module that compares r × s bit matrices in time T_C = Θ(r + s) on an area A_C = Θ(r^2) (not including the storage area for the keys). For problem sizes that exceed realistic chip capacities, chip-external sorting algorithms can be used. In this paper two different chip-external sorting algorithms, BBB(S) and TWB(S), are presented. They are designed to be implemented on a single board. They use a sorting chip S to perform the sort-split operation on blocks of data. BBB(S) and TWB(S) are systolic algorithms using local communication only, so that their evaluation does not depend on whether the constant or the linear model is used. Furthermore, it seems obvious that their design is technically feasible whenever the sorting chip S is technically feasible. TWB has optimal asymptotic time complexity, so its existence proves that under the linear model external sorting can be done asymptotically as fast as under the constant model. The time complexity of TWB(S) depends linearly on the speed g_S = n_S/t_S. It is shown that the speed, viewed as a function of the chip capacity C, is asymptotically maximal for AT^2-optimal sorting algorithms. Thus S should be a sorting algorithm similar to the M-M-sorter presented in this paper. A major disadvantage of TWB(S) is that it cannot exploit the maximal throughput d_S = n_S/p_S of a systolic sorting algorithm S. Therefore algorithm BBB(S) is introduced. The time complexity of BBB(S) depends linearly on d_S. It is shown that the throughput is maximal for AP^2-optimal algorithms. There is a wide range of such sorting algorithms, including algorithms that can be realized in a way that is independent of the length of the keys. For example, BBB(S) with S being a highly parallel version of odd-even transposition sort has this kind of flexibility. A disadvantage of BBB(S) is that it is asymptotically slower than TWB(S).

10.
The k-partitioning problem
The k-partitioning problem is defined as follows: given a set of items {I_1, I_2, ..., I_n} where item I_j is of weight w_j ≥ 0, find a partition S_1, S_2, ..., S_m of this set with |S_i| = k such that the maximum weight of all subsets S_i is minimal. k-partitioning is strongly related to the classical multiprocessor scheduling problem of minimizing the makespan on identical machines. This paper provides suitable tools for the construction of algorithms which solve the problem exactly. Several approximation algorithms are presented for this NP-hard problem. The worst-case behavior of the algorithms is analyzed. The best of these algorithms achieves a performance bound of 4/3.
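For intuition, a simple greedy heuristic for the problem reads as follows (not one of the paper's algorithms and not guaranteed to meet the 4/3 bound; names are illustrative): sort items by decreasing weight and always give the next item to the lightest subset that still has fewer than k items.

```python
import heapq

def k_partition_greedy(weights, k):
    """Split n = m*k items into m subsets of exactly k items each, trying to
    keep the maximum subset weight small (longest-processing-time style greedy)."""
    n = len(weights)
    assert n % k == 0, "n must be a multiple of k"
    m = n // k
    heap = [(0.0, i) for i in range(m)]           # (current load, subset index)
    heapq.heapify(heap)
    subsets = [[] for _ in range(m)]
    for w in sorted(weights, reverse=True):
        popped = []
        load, i = heapq.heappop(heap)
        while len(subsets[i]) >= k:               # skip subsets that are already full
            popped.append((load, i))
            load, i = heapq.heappop(heap)
        subsets[i].append(w)
        heapq.heappush(heap, (load + w, i))
        for item in popped:
            heapq.heappush(heap, item)
    return subsets, max(sum(s) for s in subsets)

# Hypothetical example: nine items, subsets of size k = 3 (the optimum here is 15).
parts, value = k_partition_greedy([9, 8, 7, 6, 5, 4, 3, 2, 1], 3)
print(parts, value)
```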

11.
An Interior-Point Method for Approximate Positive Semidefinite Completions
Given a nonnegative, symmetric matrix of weights, H, we study the problem of finding an Hermitian, positive semidefinite matrix which is closest to a given Hermitian matrix, A, with respect to the weighting H. This extends the notion of exact matrix completion problems in that H_ij = 0 corresponds to the element A_ij being unspecified (free), while H_ij large in absolute value corresponds to the element A_ij being approximately specified (fixed). We present optimality conditions, duality theory, and two primal-dual interior-point algorithms. Because of sparsity considerations, the dual-step-first algorithm is more efficient for a large number of free elements, while the primal-step-first algorithm is more efficient for a large number of fixed elements. Included are numerical tests that illustrate the efficiency and robustness of the algorithms.
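For the unweighted special case (all entries of H equal and positive), the nearest positive semidefinite matrix in the Frobenius norm is obtained simply by clipping the negative eigenvalues; this baseline, sketched below, is not the paper's weighted interior-point method (names are illustrative).

```python
import numpy as np

def nearest_psd_unweighted(A):
    """Closest positive semidefinite matrix to a symmetric A in the Frobenius norm
    (uniform weighting): set the negative eigenvalues to zero."""
    A = np.asarray(A, dtype=float)
    A_sym = (A + A.T) / 2.0                        # symmetrize, just in case
    vals, vecs = np.linalg.eigh(A_sym)
    vals = np.clip(vals, 0.0, None)                # drop the negative part of the spectrum
    return (vecs * vals) @ vecs.T

A = np.array([[2.0, -3.0], [-3.0, 2.0]])           # indefinite example (eigenvalues 5 and -1)
X = nearest_psd_unweighted(A)
print(np.round(X, 3), np.linalg.eigvalsh(X))       # eigenvalues of X are now >= 0
```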

12.
The problem of arranging N unit-length weights on a line segment of length N, given a target center of gravity on this line segment, is examined under the assumption that the only information we have about the weights is their order, i.e., a_1 ≤ a_2 ≤ ... ≤ a_N. Lower bounds on the worst-case performance of algorithms for this problem are developed, and algorithms (permutations) which come close to achieving these bounds are provided. This work was partially supported under Natural Sciences and Engineering Research Council (NSERC) of Canada research grant number OGP0002507.

13.
The classical occupancy problem is extended to the case where two types of balls are thrown. In particular, the probability that no urn contains both types of balls is studied. This is a birthday problem for two groups, boys and girls, concerning the coincidence of a boy's and a girl's birthday. Let N_1 and N_2 denote the numbers of balls of each type thrown one by one when the first collision between the two types occurs in one of m urns. Then N_1·N_2/m is asymptotically exponentially distributed as m tends to infinity. This problem is related to the security evaluation of authentication procedures in electronic message communication.
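A quick Monte Carlo illustration of the limit claim (our reading of the normalization, under which the limit is Exp(1) so the mean of N_1·N_2/m should be near 1; the paper's exact statement may differ, and all names are illustrative):

```python
import random

def first_collision_product(m, rng):
    """Throw type-1 and type-2 balls alternately into m urns until some urn
    holds both types; return N1 * N2 / m at that moment."""
    urns1, urns2 = set(), set()
    n1 = n2 = 0
    while True:
        u = rng.randrange(m); n1 += 1             # throw a type-1 ball
        urns1.add(u)
        if u in urns2:
            return n1 * n2 / m
        u = rng.randrange(m); n2 += 1             # throw a type-2 ball
        urns2.add(u)
        if u in urns1:
            return n1 * n2 / m

rng = random.Random(0)
m, trials = 10_000, 2_000
samples = [first_collision_product(m, rng) for _ in range(trials)]
print(sum(samples) / trials)                      # close to 1 for large m
```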

14.
Journal of Complexity, 2000, 16(1): 333–362
We use an information-based complexity approach to study the complexity of approximation of linear operators. We assume that the values of a linear operator may be elements of an infinite-dimensional space G. It seems reasonable to look for an approximation as a linear combination of some elements g_i from the space G and to compute only the coefficients c_i of this linear combination. We study the case when the elements g_i are fixed and compare it with the case where the g_i can be chosen arbitrarily. We show examples of linear problems where a significant output data compression is possible by the use of a nonlinear algorithm, and this nonlinear algorithm is much better than all linear algorithms. We also provide an example of a linear problem for which one piece of information is enough, whereas an optimal (minimal cost) algorithm must use information of much higher cardinality.

15.
Let M_1 = (E, ℐ_1), M_2 = (E, ℐ_2) be two matroids over the same set of elements E, with families of independent sets ℐ_1, ℐ_2. A set I ∈ ℐ_1 ∩ ℐ_2 is said to be an intersection of the matroids M_1, M_2. An important problem of combinatorial optimization is that of finding an optimal intersection of M_1, M_2. In this paper three matroid intersection algorithms are presented. One algorithm computes an intersection containing a maximum number of elements. The other two algorithms compute intersections which are of maximum total weight, for a given weighting of the elements in E. One of these algorithms is primal-dual, being based on duality considerations of linear programming, and the other is primal. All three algorithms are based on the computation of an augmenting sequence of elements, a generalization of the notion of an augmenting path from network flow theory and matching theory. The running time of each algorithm is polynomial in m, the number of elements in E, and in the running times of subroutines for independence testing in M_1, M_2. The algorithms provide constructive proofs of various important theorems of matroid theory, such as the Matroid Intersection Duality Theorem and Edmonds' Matroid Polyhedral Intersection Theorem. Research sponsored by the Air Force Office of Scientific Research Grant 71-2076.
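A compact sketch of the maximum-cardinality version via augmenting sequences (written from the standard exchange-graph formulation with independence oracles, not from this paper's specific primal or primal-dual algorithms; all names are illustrative):

```python
from collections import deque

def max_cardinality_intersection(ground, indep1, indep2):
    """Largest common independent set of two matroids on `ground`, given
    independence oracles indep1/indep2 that map a set of elements to True/False."""
    I = set()
    while True:
        out = [e for e in ground if e not in I]
        X1 = {e for e in out if indep1(I | {e})}          # addable w.r.t. M1
        X2 = {e for e in out if indep2(I | {e})}          # addable w.r.t. M2
        if X1 & X2:                                       # length-0 augmenting sequence
            I.add(next(iter(X1 & X2)))
            continue
        # exchange graph: x in I -> y not in I if I - x + y is independent in M1,
        #                 y not in I -> x in I if I - x + y is independent in M2
        succ = {e: [] for e in ground}
        for x in I:
            for y in out:
                if indep1((I - {x}) | {y}):
                    succ[x].append(y)
                if indep2((I - {x}) | {y}):
                    succ[y].append(x)
        parent = {s: None for s in X1}                    # BFS for a shortest X1 -> X2 path
        queue = deque(X1)
        end = None
        while queue:
            v = queue.popleft()
            if v in X2:
                end = v
                break
            for w in succ[v]:
                if w not in parent:
                    parent[w] = v
                    queue.append(w)
        if end is None:
            return I                                      # no augmenting sequence: I is maximum
        while end is not None:                            # augment along the path
            I ^= {end}
            end = parent[end]

# Hypothetical example: two partition matroids on {0, ..., 5}.
cols = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
rows = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2}
indep1 = lambda S: len({cols[e] for e in S}) == len(S)    # at most one per column class
indep2 = lambda S: len({rows[e] for e in S}) == len(S)    # at most one per row class
print(max_cardinality_intersection(range(6), indep1, indep2))
```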

16.
The Huber criterion for data fitting is a combination of the l_1 and the l_2 criteria which is robust in the sense that the influence of wild data points can be reduced. We present a trust region and a Marquardt algorithm for Huber estimation in the case where the functions used in the fit are non-linear. It is demonstrated that the algorithms converge under the usual conditions.
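The Huber criterion itself is easy to state in code (the loss only, not the trust-region or Marquardt iterations of the paper; names are illustrative): residuals below a threshold are penalized quadratically, as in l_2 fitting, while larger ones are penalized only linearly, as in l_1 fitting, which limits the influence of wild points.

```python
import numpy as np

def huber_criterion(residuals, gamma):
    """Sum of Huber losses: r^2/2 for |r| <= gamma, gamma*|r| - gamma^2/2 otherwise."""
    r = np.abs(np.asarray(residuals, dtype=float))
    quadratic = r <= gamma
    return float(np.sum(np.where(quadratic, 0.5 * r**2, gamma * r - 0.5 * gamma**2)))

# The outlier 5.0 contributes 4.5 instead of 12.5; the small residuals are unchanged.
print(huber_criterion([0.1, -0.2, 5.0], gamma=1.0))       # 0.005 + 0.02 + 4.5 = 4.525
```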

17.
In this paper we consider classical shop problems: n jobs have to be processed on m machines. The processing time p_{i,j} of job i on machine j is given for all operations (i, j). Each machine can process at most one job at a time, and each job can be processed on at most one machine at a given time. The machine orders are fixed (job-shop) or arbitrary (open-shop). We have to determine a feasible combination of machine and job orders, a so-called sequence, which minimizes the makespan. We introduce a partial order on the set of sequences with the property that there exists at least one optimal sequence in the set of minimal elements of this partial order, independent of the given processing times. The set of minimal elements (the set of irreducible sequences) can be described in detail in the case of the two-machine open-shop problem. Its cardinality is calculated. We show which sequences are generated by the well-known polynomial algorithms for the construction of optimal schedules. Furthermore, we investigate the problem O∥C_max on an operation set with spanning tree structure. Supported by Deutsche Forschungsgemeinschaft, Project ScheMA.

18.
LM-g splines     
As an extension of the notion of an L-g spline, three mathematical structures called LM-g splines of types I, II, and III are introduced. Each is defined in terms of two differential operators, L and M, whose coefficients a_j, j = 0, …, n − 1, and b_i, i = 0, …, m, are sufficiently smooth, with b_m bounded away from zero on [0, T]. Each of the above types of splines is the solution of an optimization problem more general than the one used in the definition of the L-g spline, and hence it is recognized as an entity which is distinct from and more general mathematically than the L-g spline. The LM-g splines introduced here reduce to an L-g spline in the special case in which m = 0 and b_0 = constant ≠ 0. After the existence and uniqueness conditions, characterization, and best approximation properties for the proposed splines are obtained in an appropriate reproducing kernel Hilbert space framework, their usefulness in extending the range of applicability of spline theory to problems in estimation, optimal control, and digital signal processing is indicated. Also, as an extension of recent results in the generalized spline literature, state-variable models for the LM-g splines introduced here are exhibited, based on which existing least-squares algorithms can be used for the recursive calculation of these splines from the data.

19.
Another hybrid conjugate gradient algorithm is subject to analysis. The parameter β_k is computed as a convex combination of the Hestenes-Stiefel and Dai-Yuan choices, i.e. β_k = (1 − θ_k) β_k^HS + θ_k β_k^DY. The parameter θ_k in the convex combination is computed in such a way that the direction corresponding to the conjugate gradient algorithm is the Newton direction and the pair (s_k, y_k) satisfies the quasi-Newton equation ∇²f(x_{k+1}) s_k = y_k, where s_k = x_{k+1} − x_k and y_k = g_{k+1} − g_k. The algorithm uses the standard Wolfe line search conditions. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms the Hestenes-Stiefel and the Dai-Yuan conjugate gradient algorithms as well as the hybrid conjugate gradient algorithms of Dai and Yuan. A set of 750 unconstrained optimization problems is used, some of them from the CUTE library.

20.
In this paper we compute explicit formulas for the commutation relations between any two quantum minors in the quantum matrix bialgebra O_q(M_n(k)). The product of any two minors is expressed as a linear combination of products of minors strictly less in certain orderings.

