Similar Documents
20 similar documents found.
1.
This work addresses the problem of approximating the normals of offsets of general compact sets in Euclidean spaces. It is proven that, under general sampling conditions, it is possible to approximate the gradient vector field of the distance function to a general compact set. These conditions involve the μ-reach of the compact set, a recently introduced notion of feature size. As a consequence, we provide a sampling condition that is sufficient to ensure the correctness, up to isotopy, of a reconstruction given by an offset of the sampling. We also provide a notion of normal cone to general compact sets that is stable under perturbation.

2.
In this paper, the problem of reconstructing a surface from a set of scattered data points is addressed. First, a precise formulation of the reconstruction problem is proposed. The solution is mathematically defined as a particular mesh of the surface called the normalized mesh, which has the property of being contained in the Delaunay graph. A criterion to detect faces of the normalized mesh inside the Delaunay graph is proposed. This criterion is proved to provide the exact solution in 2D for points sampling an r-regular shape with a sampling path smaller than sin(π/8)·r. In 3D, this result cannot be extended and the criterion cannot retrieve every face, so a heuristic is proposed to complete the surface.

3.
Feature selection consists of choosing a subset of available features that captures the relevant properties of the data. In supervised pattern classification, a good choice of features is fundamental for building compact and accurate classifiers. In this paper, we develop an efficient feature selection method using the zero-norm ℓ0 in the context of support vector machines (SVMs). The discontinuity of ℓ0 at the origin makes the corresponding optimization problem difficult to solve. To overcome this drawback, we use a robust DC (difference of convex functions) programming approach, a general framework for non-convex continuous optimization. We consider an appropriate continuous approximation to ℓ0 such that the resulting problem can be formulated as a DC program. Our DC algorithm (DCA) converges after finitely many iterations and requires solving one linear program per iteration. Computational experiments on standard datasets, including the challenging feature-selection problems of the NIPS 2003 feature selection challenge and gene selection for cancer classification, show that the proposed method is promising: it suppresses more than 99% of the features while still providing good classification. Moreover, the comparative results illustrate the superiority of the proposed approach over standard methods such as classical SVMs and the feature selection concave (FSV) approach.
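As a rough sketch of the kind of computation involved (not the authors' implementation; the exponential concave surrogate and all parameter values are illustrative choices), each DCA step can be written as one linear program for a reweighted ℓ1 SVM:

# Illustrative DCA-style sketch: the l0 term is replaced by the concave surrogate
# sum_j (1 - exp(-alpha * |beta_j|)); each iteration solves one linear program
# (a reweighted l1-norm SVM) and then updates the weights from the concave part.
import numpy as np
from scipy.optimize import linprog

def dca_l0_svm(X, y, C=1.0, alpha=5.0, iters=10):
    n, p = X.shape                       # n samples, p features
    w = np.full(p, alpha)                # initial weights on |beta_j|
    for _ in range(iters):
        # variables: beta (p), b (1), xi (n), v (p), with |beta_j| <= v_j
        c = np.concatenate([np.zeros(p + 1), C * np.ones(n), w])
        A1 = np.hstack([-y[:, None] * X, -y[:, None],          # -y_i(x_i.beta + b)
                        -np.eye(n), np.zeros((n, p))])          #   - xi_i <= -1
        A2 = np.hstack([np.eye(p), np.zeros((p, n + 1)), -np.eye(p)])   #  beta_j - v_j <= 0
        A3 = np.hstack([-np.eye(p), np.zeros((p, n + 1)), -np.eye(p)])  # -beta_j - v_j <= 0
        res = linprog(c, A_ub=np.vstack([A1, A2, A3]),
                      b_ub=np.concatenate([-np.ones(n), np.zeros(2 * p)]),
                      bounds=[(None, None)] * (p + 1) + [(0, None)] * (n + p),
                      method="highs")
        beta, v = res.x[:p], res.x[p + 1 + n:]
        w = alpha * np.exp(-alpha * v)   # DCA update: gradient of the concave part
    return beta

rng = np.random.default_rng(0)           # toy data: 2 informative features out of 20
X = rng.standard_normal((60, 20))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(60))
print(np.flatnonzero(np.abs(dca_l0_svm(X, y)) > 1e-4))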

4.
Local reconstruction from samples is one of the most desirable properties for many applications in signal processing. Local sampling is practically useful since we need only consider a signal on a bounded interval, and a computer can only process finitely many samples. However, the local sampling and reconstruction problem has not received as much attention: most known results concern global sampling and reconstruction, and only a few address local sampling and reconstruction in spline subspaces. In this article, we study local sampling and reconstruction in general shift-invariant spaces and in multiply generated shift-invariant spaces with compactly supported generators. We then give several applications in spline subspaces and multiply generated spline subspaces.
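For orientation, a singly generated shift-invariant space with a compactly supported generator φ has the standard form below (the paper also treats the multiply generated case); the notation is generic rather than the paper's:

$$
V(\varphi)=\Big\{\,f=\sum_{k\in\mathbb{Z}}c_k\,\varphi(\cdot-k)\;:\;(c_k)\in\ell^2(\mathbb{Z})\,\Big\},
$$

and the local problem is to recover f on a bounded interval [M, N] from finitely many samples f(x_j) with x_j in a slightly enlarged window [M-R, N+R], where the overhang R is governed by the support length of φ.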

5.
As a special class of shift-invariant spaces, spline subspaces offer many advantages, which makes them useful in many practical applications in signal and image processing. In this paper, we consider the sampling and reconstruction problem in spline subspaces. We improve the lower bound appearing in the sampling-set conditions for these subspaces. Based on the improved explicit lower bound, an improved explicit convergence ratio of the reconstruction algorithm is obtained; the new ratio yields a faster convergence rate than the old one. Finally, some numerical examples are given to validate our results.
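The convergence ratio in question is, for the usual iterative reconstruction scheme in such spaces, the contraction factor of the iteration sketched below in generic form; the precise operator and constants in the paper may differ:

$$
f_0=A_X f,\qquad f_{n+1}=f_n+A_X\,(f-f_n),\qquad
\|f-f_n\|\le \gamma^{\,n+1}\,\|f\|\quad\text{with}\quad \gamma=\|I-A_X\|<1,
$$

where A_X is an approximation operator built only from the samples {f(x_j) : x_j in X}; a smaller explicit bound on γ is exactly what yields a faster guaranteed convergence rate.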

6.
The recent development of compressed sensing seeks to extract information from as few samples as possible. In such applications, since the number of samples is restricted, the sampling points should be deployed wisely. We are thus motivated to study the optimal distribution of finite sampling points in reproducing kernel Hilbert spaces, the natural background function spaces for sampling. Formulation under the framework of optimal reconstruction yields a minimization problem. In the discrete measure case, we estimate the distance between the optimal subspace resulting from a general Karhunen–Loève transform and the kernel space, and obtain another algorithm that is computationally favorable. Numerical experiments are then presented to illustrate the effectiveness of the algorithms for finding optimal sampling points.
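The paper's algorithm is derived from a Karhunen–Loève transform; purely to illustrate what searching for good sampling points in an RKHS can look like computationally, the following greedy alternative places points by minimizing the worst-case interpolation error (the power function) of a Gaussian kernel. Kernel, grid, and parameters are invented for the example.

# Greedy placement of sampling points in a Gaussian RKHS on [0, 1]: at each step
# pick the location where the squared power function (worst-case interpolation
# error over the unit ball of the RKHS) is largest.
import numpy as np

def gauss_kernel(x, y, s=0.2):
    return np.exp(-(x[:, None] - y[None, :])**2 / (2 * s**2))

grid = np.linspace(0.0, 1.0, 401)          # candidate sampling locations
chosen = [200]                             # start from the midpoint
for _ in range(9):                         # select 10 points in total
    Xs = grid[chosen]
    K = gauss_kernel(Xs, Xs)               # kernel matrix of chosen points
    k = gauss_kernel(grid, Xs)             # cross-kernel to all candidates
    alpha = np.linalg.solve(K, k.T)        # interpolation weights
    power2 = 1.0 - np.sum(k * alpha.T, axis=1)   # k(x, x) = 1 for the Gaussian kernel
    chosen.append(int(np.argmax(power2)))
print(np.sort(grid[chosen]))               # roughly well-spread points in [0, 1]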

7.
Proper orthogonal decomposition (POD) finds an orthonormal basis yielding an optimal reconstruction of a given dataset. We consider an optimal data reconstruction problem for two general datasets related to balanced POD, an algorithm for balanced-truncation model reduction of linear systems. We consider balanced POD outside the linear-systems framework and prove that it solves the optimal data reconstruction problem. The theoretical result is illustrated with an example.
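For readers unfamiliar with ordinary POD, the optimality statement underlying it is the standard SVD one; the following minimal sketch (plain POD on synthetic snapshots, not the balanced POD of the paper) illustrates the optimal-reconstruction property:

# Standard POD via the SVD: the leading r left singular vectors of the snapshot
# matrix form the orthonormal basis minimizing the reconstruction error in the
# Frobenius norm.  Data sizes here are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
snapshots = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 500))  # rank-8 data

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 8
modes = U[:, :r]                               # POD modes (orthonormal basis)
recon = modes @ (modes.T @ snapshots)          # optimal rank-r reconstruction
print(np.linalg.norm(snapshots - recon))       # ~0, since the data have rank 8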

8.
Hebert Montegranario, PAMM 2007, 7(1): 2140017-2140018
3D reconstruction is a branch of computer vision with a broad range of applications such as computer-aided design, animation, and medicine. In this talk we use continuous linear functionals to characterize different kinds of 3D data. These problems can be tackled in the framework of reproducing kernel Hilbert spaces and regularization of inverse problems. 3D reconstruction can be viewed as the general problem of estimating or finding functional dependencies from a three-dimensional data set. Such data originate from a wide range of sources, including tomography, surface reconstruction from point clouds, and image and signal processing. The reconstruction problem is usually closely related to scientific visualization and computer graphics.
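As a toy instance of the RKHS-plus-regularization viewpoint (a Gaussian kernel and a ridge penalty chosen purely for illustration, not necessarily the formulation used in the talk), a height field can be reconstructed from scattered noisy 3D samples as follows:

# Kernel ridge regression of a surface z = f(x, y) from scattered noisy samples:
# the reconstruction minimizes sum_i (f(p_i) - z_i)^2 + lam * ||f||_K^2
# in the RKHS of the chosen kernel.
import numpy as np

rng = np.random.default_rng(3)
P = rng.uniform(-1, 1, (200, 2))                         # scattered (x, y) sites
z = np.sin(3 * P[:, 0]) * np.cos(3 * P[:, 1]) + 0.05 * rng.standard_normal(200)

def K(A, B, s=0.3):                                      # Gaussian kernel matrix
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

lam = 1e-3                                               # regularization weight
coef = np.linalg.solve(K(P, P) + lam * np.eye(len(P)), z)

q = np.array([[0.2, -0.4]])                              # evaluate the surface anywhere
print((K(q, P) @ coef).item(), np.sin(0.6) * np.cos(-1.2))  # reconstruction vs. truth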

9.
In this paper, we address the problem of curve and surface reconstruction from sets of points. We introduce regular interpolants, which are polygonal approximations of curves and surfaces satisfying a new regularity condition. This condition, an extension of the popular notion of r-sampling to the practical case of discrete shapes, seems much more realistic than previously proposed conditions based on properties of the underlying continuous shapes. Indeed, contrary to previous sampling criteria, our regularity condition can be checked on the basis of the samples alone and can be turned into a provably correct curve and surface reconstruction algorithm. Our reconstruction methods can also be applied to non-regular and unorganized point sets, revealing a larger part of the inner structure of such point sets than past approaches. Several real-size reconstruction examples validate the new method.

10.
One of the most striking features of the continuous shearlet transform is its ability to precisely characterize the set of singularities of multivariable functions through its decay at fine scales. In dimension n = 2, it was previously shown that the continuous shearlet transform provides a precise geometrical characterization of the boundary curves of very general planar regions, and this property lays the groundwork for several successful image processing applications. The generalization of this result to dimension n = 3 is highly nontrivial, and so far it has been known only for the special case of 3D bounded regions whose boundary is a smooth 2-dimensional manifold with everywhere positive Gaussian curvature. In this paper, we extend this result to the general case of 3D bounded regions with piecewise-smooth boundaries, and show that in this general situation too the continuous shearlet transform precisely characterizes the geometry of the boundary set.
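For reference, the two-dimensional continuous shearlet transform whose three-dimensional analogue is studied here has the standard form (conventions vary slightly between papers):

$$
\mathcal{SH}_\psi f(a,s,t)=\langle f,\psi_{a,s,t}\rangle,\qquad
\psi_{a,s,t}(x)=|\det M_{a,s}|^{-1/2}\,\psi\!\big(M_{a,s}^{-1}(x-t)\big),\qquad
M_{a,s}=\begin{pmatrix} a & \sqrt{a}\,s\\ 0 & \sqrt{a}\end{pmatrix},
$$

with scale a > 0, shear s, and location t; the boundary of a region is read off from the decay of |SH_ψ f(a,s,t)| as a → 0, which is slow only at boundary points t and for shear parameters s matching the normal direction there.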

11.
Using the techniques of approximation and factorization of convolution operators, we study the problem of irregular sampling of band-limited functions on a locally compact abelian group G. The results of this paper relate to earlier work by Feichtinger and Gröchenig much as Kluvánek's work of 1969 relates to the classical Shannon sampling theorem. Generally speaking, we claim that reconstruction is possible as long as the sampling density is sufficiently high. Moreover, the iterative reconstruction algorithms apply simultaneously to families of Banach spaces.

12.
Given n points in 3D sampled (with errors) from k original planes, a new probabilistic method is introduced that detects coplanar subsets of points in O(k^6) steps. The planes are reconstructed with small probability of error. The algorithm reduces the reconstruction problem to a clustering problem in R^3 and thereby produces effective results. In most cases it is significantly faster than other known algorithms.
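A rough illustration of the reduction to clustering in R^3 described above; the triple-sampling scheme, the parametrization of a plane as the point d·n, and the crude mode-seeking step are illustrative simplifications, not the paper's algorithm.

# Random point triples vote for candidate planes; each plane {x : n.x = d} with
# unit normal n and offset d >= 0 is mapped to the point d*n in R^3, so coplanar
# subsets show up as dense clusters of votes in R^3.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
pts = np.vstack([                                   # two noisy planes, 100 points each
    np.c_[rng.uniform(-1, 1, (100, 2)), np.full(100, 0.5)],   # the plane z = 0.5
    np.c_[np.full(100, 1.0), rng.uniform(-1, 1, (100, 2))],   # the plane x = 1.0
]) + 0.01 * rng.standard_normal((200, 3))

votes = []
for _ in range(2000):
    a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
    n = np.cross(b - a, c - a)
    if np.linalg.norm(n) < 1e-8:
        continue                                    # skip (near-)collinear triples
    n /= np.linalg.norm(n)
    d = n @ a
    if d < 0:
        n, d = -n, -d                               # fix orientation so (n, d) is unique
    votes.append(d * n)
votes = np.array(votes)

# crude mode seeking in R^3: repeatedly take the vote with the most neighbours
near = cdist(votes, votes) < 0.05
remaining = np.ones(len(votes), dtype=bool)
for _ in range(2):                                  # we expect k = 2 planes
    best = ((near & remaining).sum(1) * remaining).argmax()
    cluster = near[best] & remaining
    print(np.round(votes[cluster].mean(0), 2))      # ~ (0, 0, 0.5) and (1, 0, 0)
    remaining &= ~cluster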

13.
The generalized assignment problem is a classical combinatorial optimization problem known to be NP-hard. It can model a variety of real-world applications in location, allocation, machine assignment, and supply chains. The problem has been studied since the late 1960s, and computer codes for practical applications emerged in the early 1970s. We propose a new algorithm for this problem that proves to be more effective than previously existing methods. The algorithm features a path relinking approach, a mechanism for generating new solutions by combining two or more reference solutions. It also features an ejection chain approach, embedded in a neighborhood construction to create more complex and powerful moves. Computational comparisons on benchmark instances show that the method is not only effective in general, but especially effective on type D and type E instances, which are known to be very difficult.
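A minimal sketch of the path-relinking component alone (the ejection-chain neighborhood is not reproduced, and the instance data, capacities, and move-selection rule are invented for the toy example):

# Path relinking for GAP: walk from an initiating solution towards a guiding
# solution one job reassignment at a time, keeping only capacity-feasible moves
# and recording the best intermediate solution.
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 12                                  # agents, jobs
cost = rng.integers(1, 20, (m, n))            # cost[i, j]: cost of job j on agent i
need = rng.integers(1, 8, (m, n))             # resource job j consumes on agent i
cap = np.full(m, need.sum() // m)             # deliberately loose capacities

def feasible(assign):                         # assign[j] = agent handling job j
    load = np.zeros(m)
    np.add.at(load, assign, need[assign, np.arange(n)])
    return bool(np.all(load <= cap))

def total_cost(assign):
    return int(cost[assign, np.arange(n)].sum())

def path_relink(start, guide):
    cur, best = start.copy(), start.copy()
    while True:
        diff = np.flatnonzero(cur != guide)
        if diff.size == 0:
            return best
        moves = []                            # feasible one-job steps toward the guide
        for j in diff:
            trial = cur.copy()
            trial[j] = guide[j]
            if feasible(trial):
                moves.append((total_cost(trial), j))
        if not moves:
            return best                       # stuck: no feasible step remains
        _, j = min(moves)                     # take the cheapest feasible step
        cur[j] = guide[j]
        if total_cost(cur) < total_cost(best):
            best = cur.copy()

def random_feasible():
    while True:
        s = rng.integers(0, m, n)
        if feasible(s):
            return s

s1, s2 = random_feasible(), random_feasible() # two reference solutions
print(total_cost(s1), total_cost(s2), total_cost(path_relink(s1, s2)))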

14.
A hierarchical classification of different concepts of shape of compact connected sets in R^n (topological, Lipschitz, homotopic, Borsuk and homological shapes) is given. The most general among them is the homological shape. There are only countably many homological shapes for connected compact sets in R^n; in the case n = 2 even the number of different Borsuk shapes for connected compact sets is countable. Given a probability distribution on shapes, we can define a shape entropy, a mean shape, and shape fluctuations. This enables a formulation of an information thermodynamics of shape and its application to different fields (physics of small systems, chemistry, biophysics, pattern recognition). The paper does not yet develop these applications; its aim is to clarify the basic notions.
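Presumably (the abstract does not spell it out) the shape entropy is the Shannon entropy of the given distribution over the countably many shape classes:

$$
H=-\sum_{i} p_i\,\log p_i ,\qquad \sum_i p_i=1,\ \ p_i\ge 0,
$$

where p_i is the probability assigned to the i-th (e.g. homological) shape class.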

15.
A standard reconstruction problem is to recover a compact set from a noisy point cloud that approximates it; a finite point cloud is itself a compact set. This paper proves a reconstruction theorem which gives a sufficient condition, as a bound on the Hausdorff distance between two compact sets, for certain offsets of these two sets to be homotopy equivalent, expressed in terms of the absence of μ-critical points in an annular region. We reduce the problem of reconstructing a subset from a point cloud to the existence of a deformation retraction from an offset of the subset to the subset itself. The ambient space can be any Riemannian manifold, but we focus on ambient manifolds with nowhere negative curvature (this includes Euclidean space). We improve on previous bounds for the case of a Euclidean ambient space whenever μ ≤ 0.945 (by definition, μ ∈ (0,1)). In the process, we prove stability theorems for μ-critical points when the ambient space is a manifold.

16.
This paper introduces a new decomposition of the 3D X-ray transform based on the shearlet representation, a multiscale directional representation that is optimally efficient in handling 3D data containing edge singularities. Using this decomposition, we derive a highly effective reconstruction algorithm yielding a near-optimal rate of convergence for estimating piecewise-smooth objects from 3D X-ray tomographic data corrupted by white Gaussian noise. The algorithm applies a thresholding scheme to the 3D shearlet transform coefficients of the noisy data which, for a given noise level ε, can be tuned so that the estimator attains the essentially optimal mean square error rate O(log(ε^{-1}) ε^{2/3}) as ε → 0. This is the first published result to achieve this type of error estimate, outperforming methods based on the wavelet-vaguelette decomposition and on the SVD, which can only achieve MSE rates of O(ε^{1/2}) and O(ε^{1/3}), respectively.
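The estimator of the paper thresholds 3D shearlet coefficients of tomographic data; purely to show the shape of such a coefficient-thresholding scheme, here is an analogous, much cruder construction with a separable 2D wavelet transform from PyWavelets. The transform, the threshold level, and the test image are stand-ins, and no near-optimal rate is claimed for them.

# Hard thresholding of transform coefficients of noisy data: decompose, kill the
# small (noise-dominated) coefficients, and reconstruct.
import numpy as np
import pywt

rng = np.random.default_rng(5)
img = np.zeros((128, 128)); img[32:96, 32:96] = 1.0       # piecewise-constant object
noisy = img + 0.1 * rng.standard_normal(img.shape)        # white Gaussian noise

coeffs = pywt.wavedec2(noisy, "db2", level=4)
thr = 0.3                                                  # roughly 3x the noise level
denoised = [coeffs[0]] + [
    tuple(pywt.threshold(c, thr, mode="hard") for c in detail)
    for detail in coeffs[1:]
]
est = pywt.waverec2(denoised, "db2")
print(np.mean((noisy - img) ** 2), np.mean((est - img) ** 2))   # MSE before vs. after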

17.
Given a simple undirected graph, the problem of finding a maximum subset of vertices satisfying a nontrivial, interesting property Π that is hereditary on induced subgraphs is known to be NP-hard. Many well-known graph properties meet these conditions, making the problem widely applicable. This paper proposes a general-purpose exact algorithmic framework to solve this problem and investigates key algorithm design and implementation issues that help tailor the general framework to specific graph properties. The performance of the algorithms so derived for the maximum s-plex and maximum s-defective clique problems, which arise in network-based data mining applications, is assessed through a computational study.

18.
How to represent a continuous signal in terms of a discrete sequence is a fundamental problem in sampling theory. Most known results concern global sampling in shift-invariant signal spaces. In fact, however, local reconstruction from local samples is one of the most desirable properties for many applications in signal processing, e.g. for implementing real-time reconstruction numerically; yet the local reconstruction problem has not been given much attention. In this article, we find conditions on a finite sampling set X such that, at least in principle, a continuous signal on a finite interval is uniquely and stably determined by its samples on X in shift-invariant signal spaces.

19.
We consider Shannon sampling theory for sampling sets which are unions of shifted lattices; these sets are not necessarily periodic. A function f can be reconstructed from its samples provided the sampling set and the support of the Fourier transform of f satisfy certain compatibility conditions. An explicit reconstruction formula is given for sampling sets which are unions of two shifted lattices. While explicit formulas for unions of more than two lattices are possible, it is more convenient to use a recursive algorithm. The analysis is presented in the general framework of locally compact abelian groups, but several specific examples are given, including a numerical example implemented in MATLAB. Our methods also provide a new tool for designing sampling sets of minimal density.
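The single-lattice special case to which such reconstruction formulas reduce is the classical one, stated here on the real line for orientation (the paper works with unions of shifted lattices in a general locally compact abelian group):

$$
f(t)=\sum_{k\in\mathbb{Z}} f(kT)\,\operatorname{sinc}\!\Big(\frac{t-kT}{T}\Big),
\qquad \operatorname{sinc}(x)=\frac{\sin(\pi x)}{\pi x},
\qquad \operatorname{supp}\hat f\subseteq\Big[-\tfrac{1}{2T},\,\tfrac{1}{2T}\Big].
$$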

20.
We present a randomized algorithm, called the cloning algorithm, for approximating the solutions of quite general NP-hard combinatorial optimization problems, counting, rare-event estimation, and uniform sampling on complex regions. Like the algorithms of Diaconis–Holmes–Ross and Botev–Kroese, the cloning algorithm is based on the MCMC (Gibbs) sampler equipped with an importance sampling pdf and, as is usual for randomized algorithms, it uses a sequential sampling plan to decompose a “difficult” problem into a sequence of “easy” ones. The cloning algorithm combines the best features of the Diaconis–Holmes–Ross and Botev–Kroese algorithms. In addition to other enhancements, it has a special mechanism, called the “cloning” device, which makes the cloning algorithm (also called the Gibbs cloner) fast and accurate. We believe it is the fastest and most accurate randomized counting algorithm known so far. It is also well suited for solving problems associated with the Boltzmann distribution, such as estimating partition functions in an Ising model. We also present a combined version of the cloning and cross-entropy (CE) algorithms, and we prove polynomial complexity of a particular version of the Gibbs cloner for counting. Finally, we present efficient numerical results with the Gibbs cloner and the combined version on quite general integer and combinatorial optimization problems as well as counting problems such as SAT.
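The counting estimators produced by such sequential sampling plans share a common product form; in generic notation (not taken from the paper), write the difficult set as the last member of a nested sequence X_0 ⊇ X_1 ⊇ … ⊇ X_T = X*, where X_0 is easy to count and to sample from. Then

$$
|\mathcal{X}^*| \;=\; |\mathcal{X}_0|\prod_{t=1}^{T}\frac{|\mathcal{X}_t|}{|\mathcal{X}_{t-1}|}
\;\approx\; |\mathcal{X}_0|\prod_{t=1}^{T}\widehat{c}_t,
\qquad
\widehat{c}_t=\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}\{X_i^{(t)}\in\mathcal{X}_t\},
$$

where X_1^{(t)}, …, X_N^{(t)} are (approximately) uniform samples from X_{t-1} produced by the Gibbs sampler, and the cloning device replicates the samples that reach X_t before the next level is processed.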

