Similar Documents
 Found 20 similar documents (search time: 531 ms)
1.
In this paper we study the general reconstruction of a compactly supported function from its Fourier coefficients using compactly supported shearlet systems. We assume that only finitely many Fourier samples of the function are accessible, and based on this finite collection of measurements an approximation is sought in a finite-dimensional shearlet reconstruction space. We analyze this sampling and reconstruction process by a recently introduced method called generalized sampling. In particular, by studying the stable sampling rate of generalized sampling, we show that stable recovery of the signal is possible at an almost linear rate. Furthermore, we compare this result to the rates previously obtained for wavelets.
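The sampling-and-reconstruction setup above can be illustrated in a toy one-dimensional setting. This is our own sketch: an indicator-function (piecewise-constant) reconstruction space stands in for the shearlet system, and the finitely many Fourier samples are matched by least squares, which is the core of generalized sampling.

```python
import numpy as np

def fourier_of_indicator(a, b, m):
    """Fourier coefficient of the indicator of [a, b): integral of e^{-2*pi*i*m*x} over [a, b)."""
    if m == 0:
        return b - a
    return (np.exp(-2j * np.pi * m * a) - np.exp(-2j * np.pi * m * b)) / (2j * np.pi * m)

def generalized_sampling_matrix(n_cells, max_freq):
    """Rows: Fourier modes -max_freq..max_freq; columns: indicator basis functions."""
    freqs = range(-max_freq, max_freq + 1)
    edges = np.linspace(0.0, 1.0, n_cells + 1)
    return np.array([[fourier_of_indicator(edges[j], edges[j + 1], m)
                      for j in range(n_cells)] for m in freqs])

n_cells, max_freq = 4, 8                    # more samples than unknowns -> stable recovery
coeffs = np.array([1.0, -2.0, 0.5, 3.0])    # true coefficients of f in the reconstruction space
A = generalized_sampling_matrix(n_cells, max_freq)
samples = A @ coeffs                        # the finitely many accessible Fourier samples
recovered, *_ = np.linalg.lstsq(A, samples, rcond=None)
```

Since the toy signal lies exactly in the reconstruction space and the measurement matrix has full column rank, the least-squares fit recovers the coefficients exactly.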

2.
We propose an iterative algorithm for the minimization of an ℓ1-norm penalized least squares functional under additional linear constraints. The algorithm is fully explicit: it uses only matrix multiplications with the three matrices present in the problem (in the linear constraint, in the data-misfit part and in the penalty term of the functional). None of the three matrices needs to be invertible. Convergence is proven in a finite-dimensional setting. We apply the algorithm to a synthetic problem in magneto-encephalography, where it is used for the reconstruction of divergence-free current densities subject to a sparsity-promoting penalty on the wavelet coefficients of the current densities. We discuss the effects of imposing zero divergence and of imposing joint sparsity (of the vector components of the current density) on the current density reconstruction.
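The unconstrained core of such a minimization is classical iterative soft thresholding (ISTA). The sketch below is our simplification: it solves min_x 0.5·||Kx − y||² + λ·||x||₁ only, omitting the additional linear constraints that the paper's algorithm enforces.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 penalty: componentwise shrinkage toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(K, y, lam, n_iter=2000):
    """Iterative soft thresholding for l1-penalized least squares (no constraints)."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2   # step <= 1/||K||^2 guarantees convergence
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * K.T @ (y - K @ x), step * lam)
    return x

rng = np.random.default_rng(0)
K = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [3.0, -2.0]                 # sparse ground truth
y = K @ x_true                               # noiseless synthetic data
x_hat = ista(K, y, lam=0.01)
```

With noiseless data and a small penalty weight, the iterate converges to a vector close to the sparse ground truth.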

3.
In 1961, at the A.M.S. Symposium on Convexity, P.C. Hammer proposed the following problem: how many X-ray pictures of a convex planar domain D must be taken to permit its exact reconstruction? Richard Gardner writes in his fundamental 2006 book [4] that X-rays in four different directions would do the job. The present paper points to the possibility that, in a certain asymptotic sense, X-rays in only three different directions can be enough for approximate reconstruction of centrally symmetric convex domains. The accuracy of reconstruction would tend to become perfect in the limit as the directions of the three X-rays change, all three converging to some given direction. The analysis leading to that conclusion is based on two lemmas in Section 1 and a Pleijel-type identity for parallel X-rays derived in Sections 2 and 3. Together these tools supply a system of two differential equations in two unknown functions that describe the two branches of the boundary of the domain D. The system is easily resolved. The solution, intended to provide a complete tomographic reconstruction of D, turns out, however, to depend on a two-dimensional parameter whose “real value” remains unknown. So tomographic reconstruction of D becomes possible if a satisfactory approximation to that unknown “real value” can be found. In the last section a test procedure for individual candidates for the “approximate real value” of the parameter is described. A uniqueness theorem concerning tomography of circular discs is proved.

4.
Shidong Li, Dunyan Yan. Acta Appl Math 2009, 107(1-3): 91-103
We demonstrate that for all linear devices and/or sensors, signal acquisition and reconstruction is naturally a mathematical frame expansion and reconstruction issue, where the measurement is carried out via a sequence generated by the exact physical response function (PRF) of the device, termed the sensory frame {h_n}. The signal reconstruction, on the other hand, is carried out using the dual frame $\{\tilde{h}^{a}_{n}\}$ of an estimated sensory frame {h_n^a}. This results in a one-sided perturbation to a frame expansion, which resides in each and every signal and image reconstruction problem. We show that such a one-sided frame perturbation is stable. Examples of image reconstruction in de-blurring are demonstrated.
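In finite dimensions this one-sided perturbation is easy to make concrete. The sketch below is our own toy setup, not the paper's PRF model: measurements are taken with the true sensory frame, but reconstruction uses the canonical dual (pseudoinverse) of a slightly perturbed, estimated frame.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 7
H = rng.standard_normal((n, d))       # rows are the true sensory frame vectors h_n
f = rng.standard_normal(d)            # the signal
c = H @ f                             # exact frame coefficients <f, h_n>

H_est = H + 1e-6 * rng.standard_normal((n, d))   # estimated sensory frame {h_n^a}
dual = np.linalg.pinv(H_est)          # canonical dual frame of the *estimate*
f_rec = dual @ c                      # one-sided perturbation: exact data, inexact dual
```

Because only one side of the expansion is perturbed, the reconstruction error stays of the order of the frame-estimation error.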

5.
We investigate the potential of sparsity constraints in the electrical impedance tomography (EIT) inverse problem of inferring the distributed conductivity based on boundary potential measurements. In sparsity reconstruction, inhomogeneities of the conductivity are a priori assumed to be sparse with respect to a certain basis. This prior information is incorporated into a Tikhonov-type functional by including a sparsity-promoting ℓ1-penalty term. The functional is minimized with an iterative soft-shrinkage-type algorithm. In this paper, the feasibility of the sparsity reconstruction approach is evaluated on experimental data from water tank measurements. The reconstructions are computed both with sparsity constraints and with a more conventional smoothness regularization approach. The results verify that the adoption of ℓ1-type constraints can enhance the quality of EIT reconstructions: in most of the test cases the reconstructions with sparsity constraints are both qualitatively and quantitatively better than those with the smoothness constraint.

6.
We study the duality of reconstruction systems, which are g-frames in a finite-dimensional setting. These systems allow redundant linear encoding-decoding schemes implemented by the so-called dual reconstruction systems. We are particularly interested in the projective reconstruction systems, which are the analogue of fusion frames in this context. Thus, we focus on dual systems of a fixed projective system that are optimal with respect to erasures of the reconstruction system coefficients involved in the decoding process. We consider two different measures of the reconstruction error in a blind reconstruction algorithm. We also study the projective reconstruction system that best approximates an arbitrary reconstruction system, based on some well-known results in matrix theory. Finally, we present a family of examples in which the problem of existence of a dual projective system of a reconstruction system of this type is considered.

7.
In this paper we introduce the q-potential as an extension of the Benedetto–Fickus frame potential, defined on general reconstruction systems, and show that protocols are the minimizers of this potential under certain restrictions. We extend recent results of B.G. Bodmann on the structure of optimal protocols with respect to 1 and 2 lost packets, where the worst (normalized) reconstruction error is computed with respect to a compatible unitarily invariant norm. Finally, we describe necessary and sufficient (spectral) conditions, which we call q-fundamental inequalities, for the existence of protocols with prescribed properties, by relating this problem to Klyachko's and Fulton's theory on sums of Hermitian operators.

8.
A standard reconstruction problem is how to discover a compact set from a noisy point cloud that approximates it; a finite point cloud is itself a compact set. This paper proves a reconstruction theorem which gives a sufficient condition, as a bound on the Hausdorff distance between two compact sets, for certain offsets of these two sets to be homotopy equivalent, in terms of the absence of μ-critical points in an annular region. We reduce the problem of reconstructing a subset from a point cloud to the existence of a deformation retraction from the offset of the subset to the subset itself. The ambient space can be any Riemannian manifold, but we focus on ambient manifolds with nowhere negative curvature (this includes Euclidean space). We obtain an improvement on previous bounds for the case of Euclidean ambient space whenever μ ≤ 0.945 (by definition, μ ∈ (0,1)). In the process, we prove stability theorems for μ-critical points when the ambient space is a manifold.
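The sufficient condition above is stated as a bound on the Hausdorff distance; for finite point sets that distance can be computed directly. A small sketch of our own, with a sampled circle standing in for the compact set:

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets (rows are points)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())       # max of the two directed distances

# A dense sample of the unit circle, and a noisy point cloud approximating it.
ts = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(ts), np.sin(ts)])
rng = np.random.default_rng(2)
cloud = circle + 0.01 * rng.standard_normal(circle.shape)
eps = hausdorff(circle, cloud)   # small when the cloud approximates the set well
```

Here `eps` bounds how far the noisy cloud is from the sampled set, which is exactly the quantity the reconstruction theorem asks to control.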

9.
Comptes Rendus Mathematique 2008, 346(3-4): 239-242
Measurements of the density, pressure and temperature fields of an atmospheric column are often scarce and sometimes inaccurate, which makes a reconstruction necessary. An original reconstruction technique with a storage footprint reduced to only 6 parameters is proposed and validated, using a mathematical model with an altitude-dependent gravity and a discretization by an original well-balanced scheme. To cite this article: A.-Y. LeRoux, L. Vignon, C. R. Acad. Sci. Paris, Ser. I 346 (2008).

10.
Most of the standard papers about WENO schemes consider their implementation on uniform meshes only. In that case the WENO reconstruction is performed efficiently by using algebraic expressions for evaluating the reconstruction values and the smoothness indicators from cell averages. The coefficients appearing in these expressions are constant, dependent only on the scheme order, not on the mesh size or the reconstruction function values, and can be found, for example, in Jiang and Shu (J Comp Phys 126:202–228, 1996). In problems where geometrical properties must be taken into account or the solution has localized fine-scale structure that must be resolved, it is computationally efficient to do local grid refinement. Therefore, it is also desirable to have numerical schemes that can be applied to nonuniform meshes. Finite volume WENO schemes extend naturally to nonuniform meshes, although the reconstruction becomes quite complicated, depending on the complexity of the grid structure. In this paper we propose an efficient implementation of finite volume WENO schemes on nonuniform meshes. In order to save computational cost in the nonuniform case, we suggest a way to precompute the coefficients and linear weights for different orders of WENO schemes. Furthermore, for the smoothness indicators, which are defined in integral form, we present the corresponding algebraic expressions, in which the coefficients arise as linear combinations of divided differences. In order to validate the new implementation, the resulting schemes are applied to different test examples.
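On a nonuniform mesh the reconstruction coefficients are no longer the constant Jiang–Shu values and must be precomputed per stencil. The sketch below is our illustration of that precomputation for one linear stencil (not the full WENO scheme): find weights that combine cell averages into a point value, exactly for all polynomials of degree less than the stencil size.

```python
import numpy as np

def stencil_coefficients(edges, x_target):
    """Weights c with sum_j c[j] * (cell average of x^p over cell j) = x_target^p, p = 0..k-1."""
    k = len(edges) - 1
    # M[p, j] = exact average of the monomial x^p over cell j
    M = np.array([[(edges[j + 1] ** (p + 1) - edges[j] ** (p + 1))
                   / ((p + 1) * (edges[j + 1] - edges[j]))
                   for j in range(k)] for p in range(k)])
    return np.linalg.solve(M, np.array([x_target ** p for p in range(k)], dtype=float))

edges = np.array([0.0, 0.3, 0.7, 1.5])       # three nonuniform cells
c = stencil_coefficients(edges, edges[-1])   # reconstruct at the right interface

def cell_average(a, b):
    """Exact average of u(x) = 2x^2 - x + 1 over [a, b]."""
    return ((2 / 3) * (b ** 3 - a ** 3) - 0.5 * (b ** 2 - a ** 2) + (b - a)) / (b - a)

avgs = np.array([cell_average(edges[j], edges[j + 1]) for j in range(3)])
val = c @ avgs    # exact for quadratics: u(1.5) = 4.0
```

Since the weights only depend on the mesh geometry, they can be computed once per cell and reused at every time step, which is the point of the precomputation.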

11.
In this paper, a processing element (PE) is characterized by its computation bandwidth, I/O bandwidth, and the size of its local memory. In carrying out a computation, a PE is said to be balanced if the computing time equals the I/O time. Consider a balanced PE for some computation, and suppose that the computation bandwidth of the PE is increased by a factor of α relative to its I/O bandwidth. Then, when carrying out the same computation, the PE will be imbalanced; i.e., it will have to wait for I/O. A standard method of avoiding this I/O bottleneck is to reduce the overall I/O requirement of the PE by increasing the size of its local memory. This paper addresses the question of by how much the PE's local memory must be enlarged in order to restore balance.

The following results are shown: for matrix computations such as matrix multiplication and Gaussian elimination, the size of the local memory must be increased by a factor of α². For computations such as relaxation on a k-dimensional grid, the local memory must be enlarged by a factor of α^k. For some other computations, such as the FFT and sorting, the increase is exponential; i.e., the size of the new memory must be the size of the original memory raised to the αth power. All these results indicate that to design a balanced PE, the size of its local memory must grow much more rapidly than its computation bandwidth. This phenomenon seems to be common for many computations in which an output may depend on a large subset of the inputs.

Implications of these results for some parallel computer architectures are also discussed. One particular result is that to balance an array of p linearly connected PEs for matrix computations such as matrix multiplication and matrix triangularization, the size of each PE's local memory must grow linearly with p. Thus, the larger the array, the larger each PE's local memory must be.
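The memory-growth rules above can be collected into a small lookup. The exponents are the ones stated in the abstract; the helper function itself and its names are our illustration.

```python
def balanced_memory(original_mem, alpha, computation):
    """Local memory needed to rebalance a PE whose compute/IO bandwidth ratio grew by alpha."""
    if computation == "matrix":            # matrix multiplication, Gaussian elimination
        return original_mem * alpha ** 2
    if computation.startswith("grid"):     # relaxation on a k-dimensional grid, e.g. "grid3"
        k = int(computation[4:])
        return original_mem * alpha ** k
    if computation in ("fft", "sort"):     # exponential blow-up: memory to the alpha-th power
        return original_mem ** alpha
    raise ValueError(computation)

m = balanced_memory(1024, 2, "matrix")   # quadratic growth
g = balanced_memory(1024, 2, "grid3")    # alpha^k growth for a 3-D grid
f = balanced_memory(1024, 2, "fft")      # exponential growth
```

Even a factor-2 compute speedup forces very different memory enlargements depending on the computation class, which is the paper's central observation.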

12.
In this paper we examine the role of the β-space property (equivalently of the MCM-property) in generalized ordered (GO-)spaces and, more generally, in monotonically normal spaces. We show that a GO-space is metrizable iff it is a β-space with a Gδ-diagonal and iff it is a quasi-developable β-space. That last assertion is a corollary of a general theorem that any β-space with a σ-point-finite base must be developable. We use a theorem of Balogh and Rudin to show that any monotonically normal space that is hereditarily monotonically countably metacompact (equivalently, hereditarily a β-space) must be hereditarily paracompact, and that any generalized ordered space that is perfect and hereditarily a β-space must be metrizable. We include an appendix on non-Archimedean spaces in which we prove various results announced without proof by Nyikos.

13.
Which collections of mn minors of an m-by-n matrix uniquely determine the matrix, given some regularity conditions? For m = n = 3, the 585 such collections that are distinct up to symmetry are determined. For general m, n, a necessary and a sufficient condition for reconstruction are given in terms of matchings in a bipartite graph. Among other particular results, those collections of entries for which there are minors that permit reconstruction one entry at a time are characterized.
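The "one entry at a time" step is elementary to demonstrate: if all but one entry of a minor's submatrix are known, the minor's value pins down the missing entry whenever its cofactor is nonzero. A 2-by-2 sketch of our own:

```python
def solve_entry_from_2x2_minor(a, b, c, minor):
    """Given minor = a*d - b*c with a, b, c known and a != 0, recover the entry d."""
    if a == 0:
        raise ValueError("zero cofactor: this minor cannot determine the entry")
    return (minor + b * c) / a

# Submatrix [[2, 3], [4, d]] with known minor value 2*5 - 3*4 = -2 recovers d = 5.
d = solve_entry_from_2x2_minor(2.0, 3.0, 4.0, minor=-2.0)
```

Chaining such steps, one minor at a time, is what the characterized collections of entries make possible.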

14.
Steiner trees for (finite) subsets D of metric spaces S are discussed. For a given (abstract) tree topology over D, Steiner interpretations in S are defined and their properties are studied. An algorithm to obtain Steiner interpretations for a given tree topology is given, which is efficient if S is the (L1-)product of small metric spaces, e.g., if S is the sequence space A^l over an alphabet A of small cardinality. A variant of the same algorithm can be used to minimize efficiently and exactly spin glass Hamiltonians of k-meshed graphs. The interpretation algorithm is used as an ingredient of a variant of the stochastic search algorithm called “simulated annealing,” which is used to find Steiner trees for various given data sets D in various sequence spaces S = A^l. For all data sets analyzed so far, the trees obtained this way are shorter than or at least as short as the best ones derived using other tree construction methods. Two main features can be observed:
  • (1) Very often the shape of Steiner trees constructed this way is more or less chain-like. The trees are “long and slim.”
  • (2) Generally, the method allows one to find many different Steiner trees.
As a consequence, one may conclude that tree reconstruction programs should be executed in an interactive fashion, so that additional biological knowledge not explicitly represented in the data set can be introduced at various stages of the reconstruction algorithm to reduce the number of possible solutions. Moreover, as the “simulated annealing” search procedure is universally applicable, one may also use this algorithm during such an interactive reconstruction program to optimize any other of the known tree reconstruction minimality principles.
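The efficiency of the interpretation algorithm rests on S being an (L1-)product of small metric spaces, so the optimization decomposes coordinate by coordinate. A minimal sketch of our own for the simplest topology, a single internal Steiner node joined to all leaves, in sequence space with Hamming distance: the optimal interpretation is the coordinatewise majority letter.

```python
from collections import Counter

def star_steiner_interpretation(sequences):
    """Coordinatewise mode minimizes the total Hamming distance to the leaves."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*sequences))

def tree_length(center, leaves):
    """Total Hamming distance from the internal node to all leaves."""
    return sum(sum(a != b for a, b in zip(center, leaf)) for leaf in leaves)

leaves = ["ACGT", "ACGA", "TCGA"]
center = star_steiner_interpretation(leaves)   # coordinatewise majority
length = tree_length(center, leaves)
```

For larger topologies, the same coordinatewise decomposition applies but each coordinate requires a small dynamic program over the tree rather than a simple majority vote.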

15.
We present two algorithms for reconstruction of the shape of convex bodies in the two-dimensional Euclidean space. The first reconstruction algorithm requires knowledge of the exact surface tensors of a convex body up to rank s for some natural number s. When only measurements of surface tensors subject to noise are available for reconstruction, we recommend using certain values of the surface tensors, namely harmonic intrinsic volumes, instead of the surface tensors evaluated at the standard basis. The second algorithm we present is based on harmonic intrinsic volumes and allows for noisy measurements. From a generalized version of Wirtinger's inequality, we derive stability results that are utilized to ensure consistency of both reconstruction procedures. Consistency of the reconstruction procedure based on measurements subject to noise is established under certain assumptions on the noise variables.

16.
In this paper we show that if X is an infinite compactum cleavable over an ordinal, then X must be homeomorphic to an ordinal, and hence X must also be a LOTS. This answers two fundamental questions in the area of cleavability. We leave open the question whether cleavability of an infinite compactum X over an ordinal λ implies that X is embeddable into λ.

17.
Given a partition λ of n, a k-minor of λ is a partition of n − k whose Young diagram fits inside that of λ. We find an explicit function g(n) such that any partition of n can be reconstructed from its set of k-minors if and only if k ≤ g(n). In particular, partitions of n ≥ k² + 2k are uniquely determined by their sets of k-minors. This result completely solves the partition reconstruction problem and also a special case of the character reconstruction problem for finite groups.
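The definition is easy to check computationally. The helper code below is our own: it enumerates the k-minors of a partition (partitions of n − k fitting inside λ's Young diagram) and verifies on a small case that distinct partitions have distinct sets of 1-minors.

```python
def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def fits_inside(mu, lam):
    """Young diagram of mu fits inside that of lam."""
    return len(mu) <= len(lam) and all(m <= l for m, l in zip(mu, lam))

def k_minors(lam, k):
    return frozenset(mu for mu in partitions(sum(lam) - k) if fits_inside(mu, lam))

minors_31 = k_minors((3, 1), 1)  # 1-minors of the partition (3, 1)
# Sanity check of the reconstruction claim for n = 4, k = 1 (here n >= k^2 + 2k):
minor_sets = [k_minors(lam, 1) for lam in partitions(4)]
all_distinct = len(set(minor_sets)) == len(minor_sets)
```

Distinct 1-minor sets for all partitions of 4 mean each such partition is reconstructible from its minors, consistent with the bound n ≥ k² + 2k.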

18.
The basic reconstruction problem deals with the general task of retrieving a scenery from observations made by a random walker. A critical aspect of the problem is reconstructing the scenery in polynomial time. In this article, we propose a novel technique, based on modern DNA sequencing methods, for reconstructing a 3-color scenery of length n. The idea is first to reconstruct small pieces of length of order log n and then assemble them to form the required piece. We show that this reconstruction and assembly of a finite piece of a 3-color scenery takes a polynomial amount of time.
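The assemble-small-pieces step is in the spirit of DNA sequence assembly. A toy greedy sketch of our own (not the paper's algorithm): repeatedly merge the pair of pieces with the largest suffix/prefix overlap.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(pieces):
    """Repeatedly merge the most-overlapping pair until one string remains."""
    pieces = list(pieces)
    while len(pieces) > 1:
        _, i, j = max((overlap(a, b), i, j)
                      for i, a in enumerate(pieces)
                      for j, b in enumerate(pieces) if i != j)
        k = overlap(pieces[i], pieces[j])
        merged = pieces[i] + pieces[j][k:]
        pieces = [p for idx, p in enumerate(pieces) if idx not in (i, j)] + [merged]
    return pieces[0]

scenery = "000111222012"                               # a 3-color scenery
pieces = [scenery[0:6], scenery[4:10], scenery[7:12]]  # overlapping reconstructed windows
result = greedy_assemble(pieces)
```

Greedy assembly can of course fail on adversarial overlaps; the paper's contribution is showing that pieces of length of order log n can be reconstructed and assembled correctly in polynomial time.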

19.
Positional DNA sequencing by hybridization (PSBH) is a recently proposed enhancement of DNA sequencing by hybridization (SBH), itself a potentially powerful alternative to DNA sequencing by gel electrophoresis. It has been discussed in many papers and applied to large-scale sequencing by hybridization. However, the computational part of PSBH reconstruction is a difficult problem, especially in the presence of hybridization errors, and so far it has not been solved well. Treating PSBH as a combinatorial optimization problem, a novel reconstruction approach is presented in this paper. The proposed approach accommodates both negative and positive errors and can greatly reduce ambiguities in PSBH reconstruction. Computational experiments show that our algorithm works satisfactorily and correctly on the test data, especially in the presence of positive errors and k-tuple repetitions.
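The positional information is what distinguishes PSBH from classical SBH. A toy sketch of our own for the error-free case: each k-tuple carries its (approximate) position, so sorting by position and chaining successive tuples through their (k−1)-letter overlaps removes the ambiguity that repetitions cause in plain SBH.

```python
def psbh_reconstruct(spectrum):
    """spectrum: iterable of (position, k_tuple) pairs, assumed error-free and complete."""
    tuples = [t for _, t in sorted(spectrum)]
    sequence = tuples[0]
    for t in tuples[1:]:
        # consecutive k-tuples of the target overlap in k-1 letters
        assert sequence.endswith(t[:-1])
        sequence += t[-1]
    return sequence

dna, k = "ACGTGCA", 3
spectrum = [(i, dna[i:i + k]) for i in range(len(dna) - k + 1)]
result = psbh_reconstruct(spectrum)
```

The paper's actual contribution is handling the hard case this sketch assumes away: noisy positions plus negative and positive hybridization errors, cast as a combinatorial optimization problem.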

20.
Magnetic resonance electrical impedance tomography (MREIT, for short) is a new medical imaging technique developed recently to visualize the cross-sectional conductivity of biologic tissues. A new MREIT image reconstruction method called the harmonic Bz algorithm was proposed in 2002; it uses measurements of Bz, a single component of the magnetic flux density induced by an injection current. The key idea is to solve a nonlinear integral equation by some iteration process. This paper deals with the convergence analysis as well as the error estimate for noisy input data Bz, which is the practical situation in MREIT. By rigorously analyzing the iteration process, which contains the Laplacian operation on the input magnetic field, the authors give the error estimate for the iterative solution in terms of the noise level δ and the regularizing scheme for determining ΔBz approximately from the noisy input data. The regularizing scheme for computing the Laplacian from noisy input data is proposed with error analysis. Our results provide both the theoretical basis and an implementable scheme for evaluating the reconstruction accuracy of the harmonic Bz algorithm with practical measurement data containing noise.
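Why the Laplacian of noisy data needs regularization is easy to see in one dimension. The sketch below is our toy illustration, not the paper's scheme: presmoothing the noisy samples with a moving average before applying the second-difference operator trades a small bias for a large variance reduction.

```python
import numpy as np

n, delta = 100, 1e-3
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
rng = np.random.default_rng(3)
bz = np.sin(x) + delta * rng.standard_normal(n)   # noisy input data standing in for B_z
true_lap = -np.sin(x)                             # the exact Laplacian of the clean signal

def second_difference(u, h):
    """Periodic three-point second difference; amplifies noise by ~1/h^2."""
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / h ** 2

def smooth(u, m):
    """Periodic moving average of odd width m."""
    padded = np.concatenate([u[-m:], u, u[:m]])
    return np.convolve(padded, np.ones(m) / m, "same")[m:-m]

raw_err = np.abs(second_difference(bz, h) - true_lap).max()          # noise-dominated
reg_err = np.abs(second_difference(smooth(bz, 11), h) - true_lap).max()  # regularized
```

Choosing the smoothing width as a function of the noise level δ is exactly the kind of trade-off the paper's error analysis quantifies.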


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号