Similar Articles
20 similar articles found (search time: 171 ms)
1.
We discuss the linear independence of systems of m vectors in n-dimensional complex vector spaces, where the m vectors are time-frequency shifts of one generating vector. Such systems are called Gabor systems. When n is prime, we show that there exists an open, dense subset of full measure of such generating vectors with the property that any subset of n vectors in the corresponding full Gabor system of n^2 vectors is linearly independent. We derive consequences relevant to coding, operator identification, and time-frequency analysis in general.
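The Gabor system of the abstract above is easy to construct explicitly: the n^2 vectors are all cyclic time shifts of a generating vector g, each multiplied by one of the n discrete modulations. The NumPy sketch below builds this system and numerically spot-checks the full-spark property for a random generator in prime dimension; the random generator and the rank test are illustrative, not the paper's proof technique.

```python
import numpy as np

def gabor_system(g):
    """All n^2 time-frequency shifts of a generating vector g of length n.

    The time shift T^k cyclically rotates g; the frequency shift M^l
    multiplies pointwise by exp(2*pi*i*l*t/n).  The columns of the
    returned n x n^2 matrix are the vectors M^l T^k g.
    """
    n = len(g)
    cols = []
    for k in range(n):                       # time shifts
        shifted = np.roll(g, k)
        for l in range(n):                   # frequency shifts
            mod = np.exp(2j * np.pi * l * np.arange(n) / n)
            cols.append(mod * shifted)
    return np.column_stack(cols)

# For prime n and a generic generator, any n of the n^2 vectors should
# be linearly independent; we spot-check random n-subsets via the rank.
rng = np.random.default_rng(0)
n = 5                                        # a prime dimension
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)
G = gabor_system(g)                          # shape (n, n^2)
for _ in range(20):
    idx = rng.choice(n * n, size=n, replace=False)
    assert np.linalg.matrix_rank(G[:, idx]) == n
```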

2.
We present an algorithm for generating a subset of the non-dominated vectors of a multiple-objective mixed integer linear program. Starting from an initial non-dominated vector, the procedure finds at each iteration a new one that maximizes the infinity-norm distance from the set dominated by the previously found solutions. When all variables are integer, it can generate the whole set of non-dominated vectors.
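The dominance relation that the procedure above builds on can be stated in a few lines. This sketch only implements the basic non-dominated filter (here with a maximization convention), not the paper's infinity-norm search step:

```python
def dominates(u, v):
    # u dominates v (maximization): u >= v componentwise and u != v
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def nondominated(vectors):
    """Keep only the vectors not dominated by any other vector."""
    return [v for v in vectors if not any(dominates(u, v) for u in vectors)]

pts = [(3, 1), (1, 3), (2, 2), (1, 1), (2, 1)]
print(nondominated(pts))   # (1, 1) and (2, 1) are dominated and drop out
```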

3.
We propose an algorithm for nonparametric estimation for finite mixtures of multivariate random vectors that strongly resembles a true EM algorithm. The vectors are assumed to have independent coordinates conditional upon knowing from which mixture component they come, but otherwise their density functions are completely unspecified. Sometimes, the density functions may be partially specified by Euclidean parameters, a case we call semiparametric. Our algorithm is much more flexible and easily applicable than existing algorithms in the literature; it can be extended to any number of mixture components and any number of vector coordinates of the multivariate observations. Thus it may be applied even in situations where the model is not identifiable, so care is called for when using it in situations for which identifiability is difficult to establish conclusively. Our algorithm yields much smaller mean integrated squared errors than an alternative algorithm in a simulation study. In another example using a real dataset, it provides new insights that extend previous analyses. Finally, we present two different variations of our algorithm, one stochastic and one deterministic, and find anecdotal evidence that there is not a great deal of difference between the performance of these two variants. The computer code and data used in this article are available online.

4.
We approximate d-variate functions from weighted Korobov spaces with the error of approximation defined in the L∞ sense. We study lattice algorithms and consider the worst-case setting, in which the error is defined by its worst-case behavior over the unit ball of the space of functions. A lattice algorithm is specified by a generating (integer) vector. We propose three choices of such vectors, each corresponding to a different search criterion in the component-by-component construction. We present worst-case error bounds that go to zero polynomially with n^{-1}, where n is the number of function values used by the lattice algorithm. Under some assumptions on the weights of the function space, the worst-case error bounds are also polynomial in d, in which case we have (polynomial) tractability, or even independent of d, in which case we have strong (polynomial) tractability. We discuss the exponents of n^{-1} and stress that we do not know whether these exponents can be improved.

5.
Principal component analysis (PCA) of an objects × variables data matrix is used for obtaining a low-dimensional biplot configuration, where data are approximated by the inner products of the vectors corresponding to objects and variables. Borg and Groenen (Modern multidimensional scaling. Springer, New York, 1997) have suggested another biplot procedure which uses a technique for approximating data by projections of object vectors on variable vectors. This technique is formulated as constraining the variable vectors in PCA to be of unit length and can be called unit-length vector analysis (UVA). However, an algorithm for UVA has not yet been developed. In this paper, we present such an algorithm, discuss the properties of UVA solutions, and demonstrate the advantage of UVA in biplots for standardized data with homogeneous variances among variables. The advantage of UVA-based biplots is that the projections of object vectors onto variable vectors express the approximation of data in an easy way, while in PCA-based biplots we must consider not only the projections, but also the lengths of variable vectors in order to visualize approximations.

6.
The nonsymmetric Lanczos algorithm reduces a general matrix to tridiagonal form by generating two sequences of vectors which satisfy a mutual bi-orthogonality property. The process can proceed as long as the two vectors generated at each stage are not mutually orthogonal; otherwise the process breaks down. In this paper, we propose a variant that does not break down by grouping the vectors into clusters and enforcing the bi-orthogonality property only between different clusters, while relaxing the property within clusters. We show how this variant of the matrix Lanczos algorithm applies directly to a problem of computing a set of orthogonal polynomials and associated indefinite weights with respect to an indefinite inner product, given the associated moments. We discuss the close relationship between the modified Lanczos algorithm and the modified Chebyshev algorithm. We further show the connection between this last problem and checksum-based error correction schemes for fault-tolerant computing. The research reported by the first author was supported in part by NSF grant CCR-8813493. The research reported by the second author was supported in part by ARO grant DAAL03-90-G-0105 and in part by NSF grant DCR-8412314.

7.
In this paper, we propose a new hybrid algorithm for the Hamiltonian cycle problem by synthesizing the Cross Entropy method and Markov decision processes. In particular, this new algorithm assigns a random length to each arc and thereby converts the Hamiltonian cycle problem into the travelling salesman problem. Thus, there is now a probability corresponding to each arc that denotes the probability of the event "this arc is located on the shortest tour." Those probabilities are then updated as in the Cross Entropy method and used to set up a suitable linear programming model. If the solution of the latter yields any tour, the graph is Hamiltonian. Numerical results reveal that when the size of the graph is small, say fewer than 50 nodes, there is a high chance the algorithm will terminate in its Cross Entropy component by simply generating a Hamiltonian cycle at random. However, for larger graphs, in most of the tests the algorithm terminated in its optimization component (by solving the proposed linear program).

8.
Two probabilistic hit-and-run algorithms are presented to detect nonredundant constraints in a full dimensional system of linear inequalities. The algorithms proceed by generating a random sequence of interior points whose limiting distribution is uniform, and by searching for a nonredundant constraint in the direction of a random vector from each point in the sequence. In the hypersphere directions algorithm the direction vector is drawn from a uniform distribution on a hypersphere. In the computationally superior coordinate directions algorithm a search is carried out along one of the coordinate vectors. The algorithms are terminated through the use of a Bayesian stopping rule. Computational experience with the algorithms and the stopping rule will be reported.
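The coordinate-directions variant described above is simple to sketch: from the current interior point, pick a random coordinate axis, compute the feasible chord of the polytope along that axis, and jump to a uniform point on it. The constraint that bounds the chord is certainly nonredundant, which is the detection idea. This is only a caricature of the method (no Bayesian stopping rule, and only the constraint hit in the positive direction is recorded); the polytope and constants are invented for illustration.

```python
import numpy as np

def coordinate_hit_and_run(A, b, x0, steps, rng):
    """Hit-and-run in {x : A x <= b} along random coordinate directions.

    Returns the final point and the set of constraint indices observed
    as the binding constraint in the +direction; those are nonredundant.
    """
    x = x0.copy()
    hit = set()
    for _ in range(steps):
        d = np.zeros(len(x))
        d[rng.integers(len(x))] = 1.0        # random coordinate direction
        Ad = A @ d
        slack = b - A @ x                    # positive at an interior point
        with np.errstate(divide="ignore"):
            t = slack / Ad                   # step lengths to each hyperplane
        t_max = t[Ad > 0].min()              # chord endpoint in +d direction
        t_min = t[Ad < 0].max()              # chord endpoint in -d direction
        hit.add(int(np.where(t == t_max)[0][0]))
        x = x + rng.uniform(t_min, t_max) * d
    return x, hit

# Unit square plus a redundant constraint x + y <= 5 (index 4): it can
# never be the binding one, so it never enters `hit`.
A = np.array([[1., 0], [0, 1], [-1, 0], [0, -1], [1, 1]])
b = np.array([1., 1, 0, 0, 5])
rng = np.random.default_rng(1)
x, hit = coordinate_hit_and_run(A, b, np.array([0.5, 0.5]), 200, rng)
assert 4 not in hit
```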

9.
Finite tight frames are widely used for many applications. An important problem is to construct finite tight frames with a prescribed norm for each vector in the frame. In this paper we provide a fast and simple algorithm for such a purpose. Our algorithm employs Householder transformations. For a finite tight frame consisting of m vectors in ℝ^n or ℂ^n, only O(nm) operations are needed. In addition, we also study the following question: given a set of vectors in ℝ^n or ℂ^n, how many additional vectors, possibly with constraints, does one need to add in order to obtain a tight frame?
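The tight-frame condition itself is quick to verify numerically: m vectors f_1, …, f_m in ℂ^n form a tight frame when the frame operator Σ_j f_j f_j^* is a multiple of the identity. The example below uses the classic harmonic frame (n rows of the m × m DFT matrix), not the paper's Householder construction, purely to illustrate the condition:

```python
import numpy as np

def harmonic_frame(n, m):
    """n x m matrix whose m columns form a unit-norm tight frame in C^n.

    Keeping n rows of the m x m DFT matrix and scaling by 1/sqrt(n)
    gives columns f_j with frame operator sum_j f_j f_j^* = (m/n) I.
    """
    F = np.fft.fft(np.eye(m)) / np.sqrt(n)   # full DFT matrix, scaled
    return F[:n, :]

n, m = 3, 7
F = harmonic_frame(n, m)
S = F @ F.conj().T                           # frame operator
assert np.allclose(S, (m / n) * np.eye(n))   # tight: S is (m/n) * identity
```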

10.
Abstract  We describe an algorithm for generating the symbolic sequences which code the orbits of points under an interval exchange transformation on three intervals. The algorithm has two components. The first is an arithmetic division algorithm applied to the lengths of the intervals. This arithmetic construction was originally introduced by the authors in an earlier paper and may be viewed as a two-dimensional generalization of the regular continued fraction. The second component is a combinatorial algorithm which generates the bispecial factors of the associated symbolic subshift as a function of the arithmetic expansion. As a consequence, we obtain a complete characterization of those sequences of block complexity 2n+1 which are natural codings of orbits of three-interval exchange transformations, thereby answering an old question of Rauzy. Partially supported by NSF grant INT-9726708.

11.
The CMRH method [H. Sadok, Méthodes de projections pour les systèmes linéaires et non linéaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303–321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to the standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix–vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that CMRH is the only long-term recurrence method that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.

12.
There are many computational tasks in which it is necessary to sample a given probability density function (or pdf for short), i.e., to use a computer to construct a sequence of independent random vectors x_i (i = 1, 2, …) whose histogram converges to the given pdf. This can be difficult because the sample space can be huge, and more importantly, because the portion of the space where the density is significant can be very small, so that one may miss it with an ill-designed sampling scheme. Indeed, Markov chain Monte Carlo, the most widely used sampling scheme, can be thought of as a search algorithm, where one starts at an arbitrary point and advances step-by-step towards the high probability region of the space. This can be expensive, in particular because one is typically interested in independent samples, while the chain has a memory. The authors present an alternative, in which samples are found by solving an algebraic equation with a random right-hand side rather than by following a chain; each sample is independent of the previous samples. The construction in the context of numerical integration is explained, and then it is applied to data assimilation.
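A one-dimensional caricature of "sampling by solving an equation with a random right-hand side" is inverse-CDF sampling: draw u uniformly in (0, 1) and solve F(x) = u for x. Each sample is independent of the previous ones, with no chain and no burn-in. The sketch below is only this simplest case (with the Exp(1) distribution and a plain bisection solver), not the multivariate construction of the paper:

```python
import numpy as np

def sample_by_equation(F, u, lo=0.0, hi=50.0, iters=60):
    """Solve F(x) = u by bisection, assuming F is increasing on [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

F = lambda x: 1.0 - np.exp(-x)             # CDF of the Exp(1) distribution
rng = np.random.default_rng(0)
# Each draw solves one equation with an independent random right-hand side.
xs = [sample_by_equation(F, rng.uniform()) for _ in range(20000)]
assert abs(np.mean(xs) - 1.0) < 0.05       # Exp(1) has mean 1
```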

13.
Finding Pareto-minimum vectors among r given vectors, each of dimension m, is a fundamental problem in multiobjective optimization problems or multiple-criteria decision-making problems. Corley and Moon (Ref. 1) have given an algorithm for finding all the Pareto-minimum paths of a multiobjective network optimization problem from the initial node to any other node. It uses another algorithm by Corley and Moon, which actually computes the Pareto-minimum vectors. We observed that the latter algorithm is incorrect. In this note, we correct the algorithm for computing Pareto-minimum vectors and present a modified algorithm.

14.
Densest translational lattice packing of non-convex polygons   (cited by 4)
A translational lattice packing of k polygons P1, P2, P3, …, Pk is a (non-overlapping) packing of the k polygons which is replicated without overlap at each point of a lattice i0·v0 + i1·v1, where v0 and v1 are vectors generating the lattice and i0 and i1 range over all integers. A densest translational lattice packing is one which minimizes the area |v0 × v1| of the fundamental parallelogram. An algorithm and implementation are given for densest translational lattice packing. This algorithm has useful applications in industry, particularly clothing manufacture.

15.
Extensible (polynomial) lattice rules have the property that the number N of points in the node set may be increased while retaining the existing points. It was shown by Hickernell and Niederreiter in a nonconstructive manner that there exist generating vectors for extensible integration lattices of excellent quality for N = b, b^2, b^3, …, where b is a given integer greater than 1. Similar results were proved by Niederreiter for polynomial lattices. In this paper we provide construction algorithms for good extensible lattice rules. We treat the classical as well as the polynomial case.
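The extensibility property is easy to see on the node sets themselves: a rank-1 lattice rule with generating vector z uses the points {frac(j·z/N) : j = 0, …, N−1}, and for a fixed integer z the node set for N = b^k is contained in the one for N = b^(k+1). The generating vector below is arbitrary, not one produced by the constructions in the paper:

```python
def lattice_points(z, N):
    """Node set of the rank-1 lattice rule with generating vector z."""
    d = len(z)
    return [tuple((j * z[k] / N) % 1.0 for k in range(d)) for j in range(N)]

b, z = 2, (1, 5)                     # base and an illustrative vector
P1 = set(lattice_points(z, b**2))    # N = 4 points
P2 = set(lattice_points(z, b**3))    # N = 8 points
# Every N = 4 point j*z/4 reappears as the N = 8 point (2j)*z/8,
# so increasing N retains the existing points.
assert P1 <= P2
```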

16.
A (3,4)-biregular bigraph G is a bipartite graph where all vertices in one part have degree 3 and all vertices in the other part have degree 4. A path factor of G is a spanning subgraph whose components are nontrivial paths. We prove that a simple (3,4)-biregular bigraph always has a path factor such that the endpoints of each path have degree three. Moreover, we suggest a polynomial algorithm for the construction of such a path factor.

17.
Analyzing the shortcomings of prediction methods that model high-dimensional data with a single global expression, as well as of interval-partition prediction algorithms, we propose an optimal prediction algorithm based on vector-valued rational interpolation. The similarity between vectors is obtained from the rational vector interpolation function and the error bounds of its components, which avoids the prediction bias that many other algorithms incur by representing the vectors with a global expression. In addition, the optimal predicted value corresponding to a test sample vector is determined from the error bounds of the vectors, the vector-valued rational interpolation function obtained from the training samples, and an iterative simulation procedure. Examples show that the predictions of the algorithm are more accurate than those of other algorithms, and the influence of the error bound and the iteration step size on its performance is analyzed.

18.
In this paper, we study the problem of quadratic programming with M-matrices. We describe (1) an effective algorithm for the case where the variables are subject to a lower-bound constraint, and (2) an analogous algorithm for the case where the variables are subject to lower- and upper-bound constraints. We demonstrate the special monotone behavior of the iterate and gradient vectors. The result on the gradient vector is new. It leads us to consider a simple updating procedure which preserves the monotonicity of both vectors. The procedure uses the fact that an M-matrix has a nonnegative inverse. Two new algorithms are then constructed by incorporating this updating procedure into the two given algorithms. We give numerical examples which show that the new methods can be more efficient than the original ones.
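The M-matrix fact the updating procedure relies on, that a nonsingular M-matrix has an entrywise nonnegative inverse, can be illustrated numerically with a classic example, the 1-D discrete Laplacian (this example is ours, not taken from the paper):

```python
import numpy as np

# tridiag(-1, 2, -1) is a nonsingular M-matrix: nonpositive off-diagonal
# entries and all eigenvalues in the right half plane.
n = 6
M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Minv = np.linalg.inv(M)
# The M-matrix property guarantees the inverse is entrywise nonnegative
# (here in fact strictly positive, up to floating-point roundoff).
assert (Minv >= -1e-12).all()
```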

19.
Operator geometric stable laws are the weak limits of operator normed and centered geometric random sums of independent, identically distributed random vectors. They generalize operator stable laws and geometric stable laws. In this work we characterize operator geometric stable distributions, their divisibility and domains of attraction, and present their application to finance. Operator geometric stable laws are useful for modeling financial portfolios where the cumulative price change vectors are sums of a random number of small random shocks with heavy tails, and each component has a different tail index.

20.
Korsh, James F.; LaFollette, Paul S. Order 2002, 19(2): 115–126
Canfield and Williamson gave the first loopless algorithm for generating all linear extensions of a poset. It elegantly generates all signed extensions, so that each extension appears somewhere with each sign, but it retains only every other one, independent of its sign. It uses an array for the extension. In this paper we give another loopless algorithm for generating all the linear extensions. It generates each extension only once and uses a list for the extensions.
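For reference, the object being generated is simple to enumerate naively: a linear extension is any total order obtained by repeatedly removing a minimal element of the poset. The recursion below is NOT a loopless algorithm (loopless means O(1) work between successive extensions, which is the point of both papers above); it only shows what is being counted:

```python
def linear_extensions(elements, less_than):
    """Yield every linear extension of the poset on `elements`.

    `less_than` is a set of pairs (a, b) meaning a < b in the poset.
    """
    if not elements:
        yield []
        return
    for x in elements:
        # x is minimal if no remaining element lies below it
        if not any((y, x) in less_than for y in elements if y != x):
            rest = [e for e in elements if e != x]
            for ext in linear_extensions(rest, less_than):
                yield [x] + ext

# Poset on {1, 2, 3} with the single relation 1 < 3: three extensions.
exts = list(linear_extensions([1, 2, 3], {(1, 3)}))
print(exts)   # [[1, 2, 3], [1, 3, 2], [2, 1, 3]]
```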
