Similar documents
20 similar documents found (search time: 15 ms)
1.
We present a class of bounded control inputs that permits one to solve target control synthesis problems for linear systems with geometric (“instantaneous”) constraints on the perturbations by reduction to simpler programmed control problems.

2.
Operator scaling Gaussian random fields, as anisotropic generalizations of self-similar fields, have attracted increasing interest in the theoretical literature. Up to now, however, they were defined only through stochastic integrals, without explicit covariance functions. In this paper we exhibit explicit covariance functions, as anisotropic generalizations of those of fractional Brownian fields, and define the corresponding operator scaling Gaussian random fields. This allows us to propose a fast and exact simulation method in dimension 2 based on the circulant embedding matrix method, following ideas of Stein [34] for fractional Brownian surface synthesis. This is a first step toward popularizing these models in anisotropic spatial data modeling.
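In dimension 1, the circulant embedding idea behind such exact syntheses can be sketched as follows. This is a minimal illustration for fractional Gaussian noise, not the operator scaling fields of the paper itself; the function name and the normalization are our own:

```python
import numpy as np

def fgn_circulant(n, H, rng=None):
    """Draw one exact sample of fractional Gaussian noise of length n
    with Hurst index H, via circulant embedding of the covariance."""
    rng = np.random.default_rng(rng)
    # autocovariance of fGn: g(k) = (|k+1|^2H - 2|k|^2H + |k-1|^2H)/2
    k = np.arange(n)
    g = 0.5 * (np.abs(k + 1.0) ** (2 * H) - 2.0 * k ** (2 * H)
               + np.abs(k - 1.0) ** (2 * H))
    # embed the n covariances into a circulant of size m = 2(n - 1)
    row = np.concatenate([g, g[-2:0:-1]])
    lam = np.fft.fft(row).real              # circulant eigenvalues
    if lam.min() < -1e-9 * abs(lam.max()):
        raise ValueError("embedding not nonnegative definite")
    m = row.size
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    y = np.fft.fft(np.sqrt(np.maximum(lam, 0.0) / m) * z)
    return y.real[:n]                       # imag part: independent sample
```

The cost is a few FFTs of length 2(n − 1), and the sample is exact whenever the embedded circulant is nonnegative definite, which holds for fractional Gaussian noise in practice.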

3.
Many interesting and fundamentally practical optimization problems, ranging from optics, to signal processing, to radar and acoustics, involve constraints on the Fourier transform of a function. It is well known that the fast Fourier transform (FFT) is a recursive algorithm that can dramatically improve the efficiency of computing the discrete Fourier transform. However, because it is recursive, it is difficult to embed into a linear optimization problem. In this paper, we explain the main idea behind the fast Fourier transform and show how to adapt it so that it can be encoded as constraints in an optimization problem. We demonstrate a real-world problem from the field of high-contrast imaging. On this problem, the dramatic efficiency improvements translate into an ability to solve problems with a much finer grid of discretized points. As we shall show, in general, the “fast Fourier” version of the optimization constraints produces a larger but sparser constraint matrix, so one can think of the fast Fourier transform as a method of sparsifying the constraints in an optimization problem, which is usually a good thing.
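The adaptation rests on the fact that the FFT recursion is equivalent to factoring the DFT matrix into O(log n) sparse stages, each of which is a set of linear (hence optimization-friendly) equations. A sketch of that factorization, with our own naming, checked against the dense transform:

```python
import numpy as np
from functools import reduce

def fft_factorization(n):
    """Factor the n-point DFT matrix (n a power of two) into sparse
    stages: F_n = B_1 B_2 ... B_L P_L ... P_2 P_1, where each
    butterfly stage B has only two nonzeros per row and each P is an
    even/odd permutation."""
    assert n >= 2 and n & (n - 1) == 0
    butterflies, perms = [], []
    size, copies = n, 1
    while size >= 2:
        half = size // 2
        w = np.exp(-2j * np.pi * np.arange(half) / size)
        B = np.block([[np.eye(half), np.diag(w)],
                      [np.eye(half), -np.diag(w)]])
        butterflies.append(np.kron(np.eye(copies), B))
        p = np.r_[np.arange(0, size, 2), np.arange(1, size, 2)]
        perms.append(np.kron(np.eye(copies), np.eye(size)[p]))
        size, copies = half, 2 * copies
    return butterflies + perms[::-1]

# the product of the sparse factors reproduces the dense DFT matrix
F = reduce(np.matmul, fft_factorization(8))
```

Introducing one auxiliary vector per stage turns y = F_n x into roughly n log n equality constraints with two nonzeros each, rather than n dense rows: the “larger but sparser” constraint matrix the abstract describes.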

4.
Data-extrapolating (extension) techniques have important applications in image processing on implicit surfaces and in level set methods. The existing data-extrapolating techniques are inefficient because they are designed without exploiting the special structure of the extrapolating equations. Moreover, little work exists on locating the narrow band after data extrapolation, a very important problem in narrow band level set methods. In this paper, we put forward the general Huygens’ principle, and based on this principle we present two efficient data-extrapolating algorithms. The algorithms can easily locate the narrow band during data extrapolation. Furthermore, we propose a prediction–correction version of the data-extrapolating algorithms, together with a corresponding band locating method, for a special case where the direct band locating method is hard to apply. Experiments demonstrate the efficiency of our algorithms and the convenience of the band locating method.

5.
6.
7.
8.
This paper studies the asymptotic structure of convection in an infinite Prandtl number fluid with strongly temperature-dependent viscosity, in the limit where the dimensionless activation energy 1/ε is large, and the Rayleigh number R, defined (essentially) with the basal viscosity and the prescribed temperature drop, is also large. We find that the Nusselt number N is given by N ~ CεR^(1/5), where C depends on the aspect ratio a. The relative error in this result is O(R^(−1/10)ε^(−1/4), ε^(1/2), R^(−2/5)ε^(−2), R^(−2/20)ε^(−1/24)), so that we cannot hope to find accurate confirmation of this result at moderate Rayleigh numbers, though it should serve as a useful indicator of the relative importance of R and ε. For the above result to be valid, we require R ≫ 1/ε^5 ≫ 1. More important, however, is the asymptotic structure of the flow: there is a cold (hence rigid) lid with sloping base, beneath which a rapid, essentially isoviscous, convection takes place. This convection is driven by plumes at the sides, which generate vorticity due to thermal buoyancy, as in the constant viscosity case (Roberts, 1979). However, the slope of the lid base is sufficient to cause a large shear stress to be generated in the thermal boundary layer which joins the lid to the isoviscous region underneath (though a large velocity is not generated); consequently, the layer does not “see” the shear stress exerted by the interior flow (at leading order), and therefore the thermal boundary layer structure is totally self-determined: it even has a similarity structure (as a consequence). This fact makes it easy to analyse the problem, since the boundary layer uncouples from the rest of the flow. In addition, we find an alternative scaling (in which the lid base is “almost” flat), but it seems that the resulting boundary layer equations have no solution, though this is certainly open to debate: the results quoted above are not for this case.
When a free slip boundary condition is applied at the top surface, one finds that there exists a thin “skin” at the top of the lid which is a stress boundary layer. The shear stress changes rapidly to zero, and there exists a huge longitudinal stress (compressive/tensile) in this skin. For earthlike parameters, this stress far exceeds the fracture strength of silicate rocks.

9.
Matching two-sided estimates are given for the minimal degree of polynomials P satisfying P(0) = 1 and |P(x)| ≤ exp(−φ(|x|)), x ∈ [−1, 1], where φ is an arbitrary function increasing on [0, 1]. Besides these fast decreasing polynomials we also consider bell-shaped polynomials and polynomials approximating the signum function well. Communicated by Edward B. Saff.

10.
11.
12.
The efficient evaluation of multivariate polynomials at many points is an important operation for polynomial system solving. Kedlaya and Umans have recently devised a theoretically efficient algorithm for this task when the coefficients are integers or when they lie in a finite field. In this paper, we assume that the set of points where we need to evaluate is fixed and “sufficiently generic”. Under these restrictions, we present a quasi-optimal algorithm for multi-point evaluation over general fields. We also present a quasi-optimal algorithm for the opposite interpolation task.
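For comparison, the classical quasi-fast method (not the algorithm of this paper, which additionally exploits the genericity of the fixed points) evaluates at n points by remaindering down a subproduct tree. A sketch with naive polynomial arithmetic; replacing the multiplications and remainders by fast ones gives the usual O(M(n) log n) bound:

```python
def poly_mul(a, b):
    """Product of polynomials given as low-to-high coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_rem(f, g):
    """Remainder of f modulo a monic g, low-to-high coefficients."""
    f = f[:]
    while len(f) >= len(g):
        c, shift = f[-1], len(f) - len(g)
        for i, gi in enumerate(g):
            f[shift + i] -= c * gi
        f.pop()
    return f

def multipoint_eval(f, points):
    """Evaluate f at all points by reducing it modulo the subproduct
    polynomials of each half, recursively."""
    if len(points) == 1:
        return [poly_rem(f, [-points[0], 1])[0]]
    mid = len(points) // 2
    out = []
    for part in (points[:mid], points[mid:]):
        m = [1]
        for p in part:                    # subproduct (x - p1)...(x - pk)
            m = poly_mul(m, [-p, 1])
        out += multipoint_eval(poly_rem(f, m), part)
    return out
```

For example, evaluating x³ + 2x + 1 (coefficients [1, 2, 0, 1]) at 0, 1, 2, 3 reduces f modulo x² − x and x² − 5x + 6 before recursing, so each branch works with a polynomial of half the degree.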

13.
Necessary and sufficient conditions are obtained for the existence of sequences of rational functions of the form r_n(x) = p_n(x)/p_n(−x), with p_n a polynomial of degree n, that decrease geometrically on (0, 1] in accordance with a specified rate function. The technique of proof involves minimum energy problems for Green potentials in the presence of an external field. Applications are given for the construction of rational approximations of |x| and sgn(x) on [−1, 1] having geometric rates of convergence for x ≠ 0. The research of this author was supported, in part, by National Science Foundation grant DMS-9501130.

14.
Summary. Algorithms are presented which compute the QR factorization of a block-Toeplitz matrix in O(n^2) block-operations, where n is the block-order of the matrix and a block-operation is a multiplication, inversion or a set of Householder operations involving one or two blocks. The algorithms are in general analogous to those presented in the scalar Toeplitz case in a previous paper, but the basic operation is the Householder transform rather than the Givens transform, and the computation of the Householder coefficients and other working matrices requires special treatment. Two algorithms are presented: the first computes only R explicitly and the second computes both Q and R.

15.
We present the penalized fast subset scan (PFSS), a new and general framework for scalable and accurate pattern detection. PFSS enables exact and efficient identification of the most anomalous subsets of the data, as measured by a likelihood ratio scan statistic. However, PFSS also allows incorporation of prior information about each data element’s probability of inclusion, which was not previously possible within the subset scan framework. PFSS builds on two main results: first, we prove that a large class of likelihood ratio statistics satisfy a property that allows additional, element-specific penalty terms to be included while maintaining efficient computation. Second, we prove that the penalized statistic can be maximized exactly by evaluating only O(N) subsets. As a concrete example of the PFSS framework, we incorporate “soft” constraints on spatial proximity into the spatial event detection task, enabling more accurate detection of irregularly shaped spatial clusters of varying sparsity. To do so, we develop a distance-based penalty function that rewards spatial compactness and penalizes spatially dispersed clusters. This approach was evaluated on the task of detecting simulated anthrax bio-attacks, using real-world Emergency Department data from a major U.S. city. PFSS demonstrated increased detection power and spatial accuracy as compared to competing methods while maintaining efficient computation.
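The O(N)-subsets result can be illustrated with the unpenalized special case: for statistics with the linear-time subset scanning property, the maximizing subset is always one of the N prefixes of the elements sorted by priority c_i/b_i. A sketch with an expectation-based Poisson statistic (the score function and the data are our own example; the penalized version adds element-specific terms to each prefix evaluation while preserving this structure):

```python
import math

def poisson_score(C, B):
    """Expectation-based Poisson log-likelihood ratio statistic for a
    subset with observed count C and expected count B."""
    return C * math.log(C / B) + B - C if C > B > 0 else 0.0

def best_subset(counts, baselines):
    """Linear-time subset scanning: the optimum over all 2^N subsets
    is attained at one of the N prefixes of the elements sorted by
    priority c_i / b_i, so N evaluations suffice."""
    order = sorted(range(len(counts)),
                   key=lambda i: counts[i] / baselines[i], reverse=True)
    best, best_ix, C, B = 0.0, [], 0.0, 0.0
    for j, i in enumerate(order, start=1):
        C, B = C + counts[i], B + baselines[i]
        score = poisson_score(C, B)
        if score > best:
            best, best_ix = score, sorted(order[:j])
    return best, best_ix
```

On counts [10, 2, 1, 8] with baselines [2, 2, 1, 2], the scan inspects only the four priority prefixes and picks the subset {0, 3} of the two over-expectation elements.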

16.
Interval allocation has been suggested as a possible formalization for the PRAM of the (vaguely defined) processor allocation problem, which is of fundamental importance in parallel computing. The interval allocation problem is, given n nonnegative integers x_1, …, x_n, to allocate n nonoverlapping subarrays of sizes x_1, …, x_n from within a base array of O(∑_{j=1}^{n} x_j) cells. We show that interval allocation problems of size n can be solved in O((log log n)^3) time with optimal speedup on a deterministic CRCW PRAM. In addition to a general solution to the processor allocation problem, this implies an improved deterministic algorithm for the problem of approximate summation. For both interval allocation and approximate summation, the fastest previous deterministic algorithms have running times of Θ(log n/log log n). We describe an application to the problem of computing the connected components of an undirected graph. Finally we show that there is a nonuniform deterministic algorithm that solves interval allocation problems of size n in O(log log n) time with optimal speedup.
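The specification being computed is just an exclusive prefix sum of the sizes; the paper's contribution is meeting it, up to a constant-factor-larger base array, in O((log log n)^3) parallel time. A sequential reference sketch:

```python
from itertools import accumulate

def allocate_intervals(sizes):
    """Exclusive prefix sum of the sizes: subarray i occupies cells
    [offsets[i], offsets[i] + sizes[i]) of a base array of sum(sizes)
    cells, so the subarrays are nonoverlapping by construction."""
    return [0] + list(accumulate(sizes))[:-1]
```

For sizes [3, 1, 4] the offsets are [0, 3, 4], packing the subarrays tightly into 8 cells; the PRAM algorithm accepts O(∑ x_j) cells rather than exactly ∑ x_j, which is what makes the sublogarithmic running time possible.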

17.
This paper presents a new method for the computation of the truncated singular value decomposition (SVD) of an arbitrary matrix. The method can be qualified as deterministic because it does not use randomized schemes. The number of operations required is asymptotically lower than that of conventional methods for nonsymmetric matrices and is on a par with the best existing deterministic methods for unstructured symmetric ones. It slightly exceeds the asymptotic computational cost of SVD methods based on randomization; however, the error estimate for such methods is significantly higher than for the presented one. The method is one-pass, that is, each value of the matrix is used just once. It is also readily parallelizable. In the case of full SVD, it is exact. In addition, it can be modified for the case when data are obtained sequentially rather than being available all at once. Numerical simulations confirm the accuracy of the method.

18.
Arithmetic in large ring and field extensions is an important problem of symbolic computation, and it consists essentially of the combination of one multiplication and one division in the underlying ring. Methods are known for replacing one division by two short multiplications in the underlying ring, which can be performed essentially by using convolutions.

However, while using school-book multiplication, modular multiplication may be grouped into operations (where denotes the number of operations of one multiplication in the underlying ring), the short multiplication problem is an important obstruction to convolution. It raises the costs in that case to . In this paper we give a method for understanding and bypassing this problem, thus reducing the costs of ring arithmetic to roughly when also using fast convolutions. The algorithms have been implemented, with results that fit the theoretical prediction well and that will be presented in a separate paper.


19.
In this paper, we consider solving non-convolution type integral equations by the preconditioned conjugate gradient method. The fast dense matrix method is a fast multiplication scheme that provides a dense discretization matrix A approximating a given integral equation. The dense matrix A can be constructed in O(n) operations and requires only O(n) storage, where n is the size of the matrix. Moreover, the matrix-vector multiplication Ax can be done in O(n log n) operations. Thus if the conjugate gradient method is used to solve the discretized system, the cost per iteration is O(n log n) operations. However, for some integral equations, such as the Fredholm integral equations of the first kind, the system will be ill-conditioned and therefore the convergence rate of the method will be slow. In these cases, preconditioning is required to speed up the convergence rate of the method. A good choice of preconditioner is the optimal circulant preconditioner, the minimizer of ‖C − A‖_F in the Frobenius norm over all circulant matrices C. It can be obtained by taking arithmetic averages of all the entries of A, and therefore the cost of constructing the preconditioner is O(n^2) operations for general dense matrices. In this paper, we develop an O(n log n) method of constructing the preconditioner for dense matrices A obtained from the fast dense matrix method. Application of these ideas to boundary integral equations from potential theory will be given. These equations are ill-conditioned whereas their optimal circulant preconditioned equations will be well-conditioned. The accuracy of the approximation A, the fast construction of the preconditioner and the fast convergence of the preconditioned systems will be illustrated by numerical examples.
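For a general dense matrix, the O(n^2) construction averages the entries of A along its wrapped diagonals; a sketch of this baseline (T. Chan's optimal circulant preconditioner; the paper's contribution is the O(n log n) construction for matrices from the fast dense matrix method):

```python
import numpy as np

def optimal_circulant(A):
    """First column c of the circulant minimizer of ||C - A||_F:
    c_j is the arithmetic average of the entries A[k, l] with
    (k - l) mod n == j, i.e. of the j-th wrapped diagonal of A."""
    n = A.shape[0]
    c = np.zeros(n, dtype=complex)
    for k in range(n):
        for l in range(n):
            c[(k - l) % n] += A[k, l]
    return c / n            # O(n^2) for a general dense matrix
```

As a sanity check, if A is itself circulant, every wrapped diagonal is constant and the construction returns A's own first column.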

20.
In this paper an efficient algorithm to generate regular graphs with small vertex valency is presented. The running times of a program based on this algorithm and designed to generate cubic graphs are below two natural benchmarks: (a) If N(n) denotes the number of pairwise non-isomorphic cubic graphs with n vertices and T(n) the time needed for generating the list of all these graphs, then T(n)/N(n) decreases gradually for the observed values of n. This suggests that T(n)/N(n) might be uniformly bounded for all n, ignoring the time to write the outputs, but we are unable to prove this and in fact are not confident about it. (b) For programs that generate lists of non-isomorphic objects, but cannot a priori make sure to avoid the generation of isomorphic copies, the time needed to check a randomly ordered list of these objects for being non-isomorphic is a natural benchmark. Since for large lists (n ≥ 22, girth 3) existing graph isomorphism programs take longer to canonically label all of the N(n) graphs than our algorithm takes to generate them, our algorithm is probably faster than any method which does one or more isomorphism tests for every graph. © 1996 John Wiley & Sons, Inc.
