Similar Literature
20 similar records found (search time: 15 ms)
1.
We propose and discuss a new Lepp-surface method able to produce a small triangular approximation of huge sets of terrain grid data by using a two-goal strategy that assures both small approximation error and well-shaped 3D triangles. This is a refinement method which starts with a coarse initial triangulation of the input data, and incrementally selects and adds data points into the mesh as follows: for the edge e having the highest error in the mesh, one or two points close to (one or two) terminal edges associated with e are inserted in the mesh. The edge error is computed by adding the triangle approximation errors of the two triangles that share e, while each L2-norm triangle error is computed by using a curvature tensor (a good approximation of the surface) at a representative point associated with both triangles. The method produces triangular approximations that capture well the relevant features of the terrain surface by naturally producing well-shaped triangles. We compare our method with a pure L2-norm optimization method.
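A minimal sketch of the refinement-driving step described above, assuming per-triangle L2 approximation errors have already been computed (for example from a curvature-tensor estimate): the error of an edge is the sum of the errors of the triangles sharing it, and the edge with the highest error is selected next. The data layout and function names are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: edge error = sum of the approximation errors of the (one or two)
# triangles sharing the edge; the worst edge drives the next refinement step.

def edge_error(edge, edge_to_triangles, triangle_error):
    """Sum the errors of the triangles adjacent to `edge`."""
    return sum(triangle_error[t] for t in edge_to_triangles[edge])

def worst_edge(edges, edge_to_triangles, triangle_error):
    """Return the edge with the highest accumulated error."""
    return max(edges, key=lambda e: edge_error(e, edge_to_triangles, triangle_error))

# Toy example: triangles 0 and 1 share edge (1, 2).
edge_to_triangles = {(0, 1): [0], (1, 2): [0, 1], (0, 2): [0], (1, 3): [1], (2, 3): [1]}
triangle_error = {0: 0.4, 1: 0.7}   # e.g. L2-norm errors from a curvature-tensor estimate
print(worst_edge(edge_to_triangles.keys(), edge_to_triangles, triangle_error))  # (1, 2)
```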

2.
We discuss adaptive sparse grid algorithms for stochastic differential equations with a particular focus on applications to electromagnetic scattering by structures with holes of uncertain size, location, and quantity. Stochastic collocation (SC) methods are used in combination with an adaptive sparse grid approach based on nested Gauss-Patterson grids. As an error estimator we demonstrate how the nested structure allows an effective error estimation through Richardson extrapolation. This is shown to allow excellent error estimation and it also provides an efficient means by which to estimate the solution at the next level of the refinement. We introduce an adaptive approach for the computation of problems with discrete random variables and demonstrate its efficiency for scattering problems with a random number of holes. The results are compared with results based on Monte Carlo methods and with Stroud based integration, confirming the accuracy and efficiency of the proposed techniques.
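As a hedged illustration of the error-estimation idea, the sketch below applies Richardson extrapolation to a quantity computed on two nested refinement levels; the convergence order, the level ratio, and the trapezoid-rule example are assumptions of the sketch and are not taken from the paper.

```python
# Minimal sketch of error estimation by Richardson extrapolation between two nested
# refinement levels. Q_coarse and Q_fine are the quantities of interest on consecutive
# levels; the order p and ratio r are assumptions of this sketch.
import numpy as np

def richardson(Q_coarse, Q_fine, p=2.0, r=2.0):
    """Estimate the error of Q_fine and an extrapolated value, assuming
    Q_exact - Q(h) ~ C * h**p and a refinement ratio r between the levels."""
    err_fine = (Q_fine - Q_coarse) / (r**p - 1.0)   # estimated error of the fine level
    Q_extrap = Q_fine + err_fine                    # extrapolated next-level estimate
    return err_fine, Q_extrap

# Example: integrating a smooth function with the trapezoid rule (p = 2) on nested grids.
f = np.cos
Q_coarse = np.trapz(f(np.linspace(0, 1, 9)),  np.linspace(0, 1, 9))
Q_fine   = np.trapz(f(np.linspace(0, 1, 17)), np.linspace(0, 1, 17))
err, Q_next = richardson(Q_coarse, Q_fine)
print(err, Q_next, np.sin(1.0))   # estimated error, extrapolated value, exact value
```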

3.
4.
An adaptive grid refinement procedure allows accurate solutions to advection-dominated, time-dependent flows using finite-element collocation. The technique relies on a data structure that is readily amenable to parallel computing. The paper discusses computational aspects of the method.

5.
Data reduction is an important issue in the field of data mining. The goal of data reduction techniques is to extract a subset of data from a massive dataset while maintaining the properties and characteristics of the original data in the reduced set. This allows an otherwise difficult or impossible data mining task to be carried out efficiently and effectively. This paper describes a new method for selecting a subset of data that closely represents the original data in terms of its joint and univariate distributions. A pair of distance criteria, motivated by the χ²-statistic, are used for measuring the goodness-of-fit between the distributions of the reduced and full datasets. Under these criteria, the data reduction problem can be formulated as a bi-objective quadratic program. A genetic algorithm technique is used in the search/optimization process. Experiments conducted on several real-world data sets demonstrate the effectiveness of the proposed method.
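The sketch below illustrates, under simplifying assumptions, a χ²-style goodness-of-fit distance between the univariate distribution of a candidate subset and that of the full data set; the binning, the sample data, and the function name are illustrative and are not the paper's exact pair of criteria or its bi-objective formulation.

```python
# Illustrative chi-square-style distance between the univariate distribution of a reduced
# subset and that of the full data set, using a shared binning (not the paper's criteria).
import numpy as np

def chi2_distance(full, subset, bins=10):
    edges = np.histogram_bin_edges(full, bins=bins)
    p, _ = np.histogram(full,   bins=edges)
    q, _ = np.histogram(subset, bins=edges)
    p = p / p.sum()                      # relative frequencies of the full data
    q = q / q.sum()                      # relative frequencies of the reduced data
    mask = p > 0
    return np.sum((q[mask] - p[mask]) ** 2 / p[mask])

rng = np.random.default_rng(0)
full = rng.normal(size=10_000)
subset = rng.choice(full, size=500, replace=False)
print(chi2_distance(full, subset))       # small value -> subset mimics the full distribution
```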

6.
Suppose $A_1, \dots, A_s$ are $(1, -1)$ matrices of order $m$ satisfying
  1. $A_iA_j = J$, $i, j \in \{1, \dots, s\}$;
  2. $A_i^TA_j = A_j^TA_i = J$, $i \ne j$, $i, j \in \{1, \dots, s\}$;
  3. $\sum_{i=1}^{s}(A_iA_i^T + A_i^TA_i) = 2smI_m$;
  4. $JA_i = A_iJ = aJ$, $i \in \{1, \dots, s\}$, $a$ a constant.
Call $A_1, \dots, A_s$ a regular $s$-set of matrices of order $m$ if conditions 1–3 are satisfied, and a regular $s$-set of regular matrices if condition 4 is also satisfied. These matrices were first discovered by J. Seberry and A.L. Whiteman in “New Hadamard matrices and conference matrices obtained via Mathon’s construction”, Graphs and Combinatorics, 4 (1988), 355–377. In this paper, we prove that
  1. if there exist a regular $s$-set of order $m$ and a regular $t$-set of order $n$, there exists a regular $s$-set of order $mn$ when $t = sm$;
  2. if there exist a regular $s$-set of order $m$ and a regular $t$-set of order $n$, there exists a regular $s$-set of order $mn$ when $2t = sm$ ($m$ odd);
  3. if there exist a regular $s$-set of order $m$ and a regular $t$-set of order $n$, there exists a regular $2s$-set of order $mn$ when $t = 2sm$.
As applications, we prove that if there exists a regular $s$-set of order $m$, then there exists
  4. an Hadamard matrix of order $4hm$ whenever there exists an Hadamard matrix of order $4h$ and $s = 2h$;
  5. Williamson type matrices of order $nm$ whenever there exist Williamson type matrices of order $n$ and $s = 2n$;
  6. an $OD(4mp; ms_1, \dots, ms_u)$ whenever an $OD(4p; s_1, \dots, s_u)$ exists and $s = 2p$;
  7. a complex Hadamard matrix of order $2cm$ whenever there exists a complex Hadamard matrix of order $2c$ and $s = 2c$.
This paper extends and improves results of Seberry and Whiteman, giving new classes of Hadamard matrices, Williamson type matrices, orthogonal designs and complex Hadamard matrices.

7.
Characterizing sets of permutations whose associated quasisymmetric function is symmetric and Schur-positive is a long-standing problem in algebraic combinatorics. In this paper, we present a general method to construct Schur-positive sets and multisets, based on geometric grid classes and the product operation. Our approach produces many new instances of Schur-positive sets and provides a broad framework that explains the existence of known such sets that until now were sporadic cases.

8.
The hard-core model has received much attention in the past couple of decades as a lattice gas model with hard constraints in statistical physics, a multicast model of calls in communication networks, and as a weighted independent set problem in combinatorics, probability and theoretical computer science. In this model, each independent set I in a graph G is weighted proportionally to $\lambda^{|I|}$, for a positive real parameter λ. For large λ, computing the partition function (namely, the normalizing constant which makes the weighting a probability distribution on a finite graph) on graphs of maximum degree Δ ≥ 3 is a well-known computationally challenging problem. More concretely, let $\lambda_c(\mathbb{T}_\Delta)$ denote the critical value for the so-called uniqueness threshold of the hard-core model on the infinite Δ-regular tree; recent breakthrough results of Weitz (Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC), pp. 140–149, 2006) and Sly (Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 287–296, 2010) have identified $\lambda_c(\mathbb{T}_\Delta)$ as a threshold where the hardness of estimating the above partition function undergoes a computational transition. We focus on the well-studied particular case of the square lattice $\mathbb{Z}^2$, and provide a new lower bound for the uniqueness threshold, in particular taking it well above $\lambda_c(\mathbb{T}_4)$. Our technique refines and builds on the tree of self-avoiding walks approach of Weitz, resulting in a new technical sufficient criterion (of wider applicability) for establishing strong spatial mixing (and hence uniqueness) for the hard-core model. Our new criterion achieves better bounds on strong spatial mixing when the graph has extra structure, improving upon what can be achieved by just using the maximum degree. Applying our technique to $\mathbb{Z}^2$ we prove that strong spatial mixing holds for all λ < 2.3882, improving upon the work of Weitz that held for λ < 27/16 = 1.6875. Our results imply a fully-polynomial deterministic approximation algorithm for estimating the partition function, as well as rapid mixing of the associated Glauber dynamics to sample from the hard-core distribution.
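To make the weighting concrete, the following brute-force sketch enumerates the independent sets of a tiny graph and sums their weights $\lambda^{|I|}$ to obtain the partition function; this is only a definition-level illustration, not the approximation algorithm or the spatial-mixing analysis developed in the paper.

```python
# Brute-force illustration of the hard-core weighting: each independent set I of a small
# graph receives weight lam**|I|, and the partition function Z is the sum of these weights.
from itertools import combinations

def partition_function(vertices, edges, lam):
    edge_set = {frozenset(e) for e in edges}
    Z = 0.0
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            # keep only independent sets: no edge joins two chosen vertices
            if all(frozenset(pair) not in edge_set for pair in combinations(subset, 2)):
                Z += lam ** k
    return Z

# 4-cycle (a tiny patch of Z^2): independent sets of sizes 0,1,2 -> Z = 1 + 4*lam + 2*lam**2
print(partition_function(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)], 1.0))  # 7.0
```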

9.
10.
Continuation methods compute paths of solutions of nonlinear equations that depend on a parameter. This paper examines some aspects of the multicomputer implementation of such methods. The computations are done on a mesh-connected multicomputer with 64 nodes. One of the main issues in the development of concurrent programs is load balancing, achieved here by using appropriate data distributions. In the continuation process, many linear systems have to be solved. For nearby points along the solution path, the corresponding system matrices are closely related to each other. Therefore, pivots which are good for the LU-decomposition of one matrix are likely to be acceptable for a whole segment of the solution path. This suggests choosing certain data distributions that achieve good load balancing. In addition, if these distributions are used, the resulting code is easily vectorized. To test this technique, the invariant manifold of a system of two identical nonlinear oscillators is computed as a function of the coupling between them. This invariant manifold is determined by the solution of a system of nonlinear partial differential equations that depends on the coupling parameter. A symmetry in the problem reduces this system to one single equation, which is discretized by finite differences. The solution of the discrete nonlinear system is followed as the coupling parameter is changed. This material is based upon work supported by the NSF under Cooperative Agreement No. CCR-8809615. The government has certain rights in this material.
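A minimal serial sketch of natural-parameter continuation is given below: the parameter is advanced in small steps and the nonlinear equation is re-solved by Newton's method with the previous solution as the starting guess. The scalar test problem is an assumption of the sketch; the paper's multicomputer data distributions and pivot reuse are not reflected here.

```python
# Minimal serial sketch of natural-parameter continuation: march the parameter and re-solve
# the nonlinear equation at each step, warm-starting Newton from the previous solution.
import numpy as np

def continuation(F, dFdx, x0, params):
    """Follow x(p) with F(x, p) = 0 along the given parameter values."""
    path, x = [], x0
    for p in params:
        for _ in range(20):                       # Newton corrector at fixed p
            x = x - F(x, p) / dFdx(x, p)
        path.append((p, x))
    return path

# Example: follow the positive root of x**2 - p = 0, i.e. x(p) = sqrt(p).
F    = lambda x, p: x**2 - p
dFdx = lambda x, p: 2.0 * x
for p, x in continuation(F, dFdx, 1.0, np.linspace(1.0, 4.0, 7)):
    print(f"p = {p:.2f}  x = {x:.6f}")            # converges to sqrt(p) at each step
```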

11.
This paper studies adaptive thinning strategies for approximating a large set of scattered data by piecewise linear functions over triangulated subsets. Our strategies depend on both the locations of the data points in the plane, and the values of the sampled function at these points—adaptive thinning. All our thinning strategies remove data points one by one, so as to minimize an estimate of the error that results by the removal of a point from the current set of points (this estimate is termed “anticipated error”). The thinning process generates subsets of “most significant” points, such that the piecewise linear interpolants over the Delaunay triangulations of these subsets approximate progressively the function values sampled at the original scattered points, and such that the approximation errors are small relative to the number of points in the subsets. We design various methods for computing the anticipated error at reasonable cost, and compare and test the performance of the methods. It is proved that for data sampled from a convex function, with the strategy of convex triangulation, the actual error is minimized by minimizing the best performing measure of anticipated error. It is also shown that for data sampled from certain quadratic polynomials, adaptive thinning is equivalent to thinning which depends only on the locations of the data points—nonadaptive thinning. Based on our numerical tests and comparisons, two practical adaptive thinning algorithms are proposed for thinning large data sets, one which is more accurate and another which is faster.
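The following is a deliberately simplified one-dimensional analogue of the thinning loop described above: the point whose removal causes the smallest "anticipated error" (here, the deviation from the linear interpolant through its surviving neighbours) is removed repeatedly. The paper's setting is two-dimensional with Delaunay triangulations, so this sketch only conveys the greedy structure.

```python
# One-dimensional analogue (illustration only) of adaptive thinning: repeatedly remove the
# interior point whose removal introduces the smallest anticipated error.
import numpy as np

def thin(x, y, keep):
    idx = list(range(len(x)))                     # indices of surviving points
    while len(idx) > keep:
        best_j, best_err = None, np.inf
        for j in range(1, len(idx) - 1):          # never remove the endpoints
            i0, i1, i2 = idx[j - 1], idx[j], idx[j + 1]
            t = (x[i1] - x[i0]) / (x[i2] - x[i0])
            interp = (1 - t) * y[i0] + t * y[i2]  # linear interpolant without point i1
            err = abs(y[i1] - interp)             # anticipated error of removing i1
            if err < best_err:
                best_j, best_err = j, err
        idx.pop(best_j)
    return idx

x = np.linspace(0, 1, 50)
y = np.sin(6 * x) + 0.2 * x
print(thin(x, y, keep=12))                        # indices of the 12 "most significant" points
```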

12.
In this paper we introduce an enhanced notion of extremal systems for sets in locally convex topological vector spaces and obtain efficient conditions for set extremality in the convex case. Then we apply this machinery to deriving new calculus results on intersection rules for normal cones to convex sets and on infimal convolutions of support functions.

13.
A minimum volume (MV) set, at level α, is a set having minimum volume among all those sets containing at least α probability mass. MV sets provide a natural notion of the ‘central mass’ of a distribution and, as such, have recently become popular as a tool for the detection of anomalies in multivariate data. Motivated by the fact that anomaly detection problems frequently arise in settings with temporally indexed measurements, we propose here a new method for the estimation of MV sets from dependent data. Our method is based on the concept of complexity-penalized estimation, extending recent work of Scott and Nowak for the case of independent and identically distributed measurements, and has both desirable theoretical properties and a practical implementation. Of particular note is the fact that, for a large class of stochastic processes, choice of an appropriate complexity penalty reduces to the selection of a single tuning parameter, which represents the data dependency of the underlying stochastic process. While in reality the dependence structure is unknown, we offer a data-dependent method for selecting this parameter, based on subsampling principles. Our work is motivated by and illustrated through an application to the detection of anomalous traffic levels in Internet traffic time series.
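As a definition-level illustration only, the sketch below estimates a one-dimensional minimum-volume set at level α from a histogram by greedily collecting the densest bins until they hold at least α of the empirical mass; the complexity-penalized, dependence-aware estimator proposed in the paper is not reproduced, and the bin count and data are arbitrary.

```python
# Toy histogram-based MV-set illustration: with equal-size bins, add the densest bins until
# they carry at least alpha of the empirical mass (not the paper's estimator).
import numpy as np

def mv_set(data, alpha=0.9, bins=30):
    counts, edges = np.histogram(data, bins=bins)
    order = np.argsort(counts)[::-1]              # densest bins first
    mass, chosen = 0, []
    for b in order:
        chosen.append(b)
        mass += counts[b]
        if mass >= alpha * counts.sum():
            break
    return [(edges[b], edges[b + 1]) for b in sorted(chosen)]   # union of intervals

rng = np.random.default_rng(1)
data = rng.normal(size=5_000)
print(mv_set(data, alpha=0.9))                    # central intervals covering ~90% of the mass
```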

14.
15.
In this paper we first give a priori estimates on asymptotic polynomials of solutions to elliptic equations at nodal points. This leads to a pointwise version of Schauder estimates. As an application we discuss the structure of nodal sets of solutions to elliptic equations with nonsmooth coefficients.

16.
Let $X_1, \dots, X_n$ be $n$ disjoint sets. For $1 \le i \le n$ and $1 \le j \le h$ let $A_{ij}$ and $B_{ij}$ be subsets of $X_i$ that satisfy $|A_{ij}| \le r_i$ and $|B_{ij}| \le s_i$ for $1 \le i \le n$, $1 \le j \le h$; $(\bigcup_i A_{ij}) \cap (\bigcup_i B_{ij}) = \emptyset$ for $1 \le j \le h$; and $(\bigcup_i A_{ij}) \cap (\bigcup_i B_{il}) \ne \emptyset$ for $1 \le j < l \le h$. We prove that $h \le \prod_{i=1}^{n} \binom{r_i + s_i}{r_i}$. This result is best possible and has some interesting consequences. Its proof uses multilinear techniques (exterior algebra).

17.
In this work, considering the information carried by the membership degree and the non-membership degree in Atanassov’s intuitionistic fuzzy sets (IFSs) as a two-element vector representation, a cosine similarity measure and a weighted cosine similarity measure between IFSs are proposed based on the concept of the cosine similarity measure for fuzzy sets. To demonstrate the efficiency of the proposed cosine similarity measures, the existing similarity measures between IFSs are compared with the cosine similarity measure between IFSs by numerical examples. Finally, the cosine similarity measures are applied to pattern recognition and medical diagnosis.
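One natural reading of the construction is sketched below: each element of the universe contributes the cosine of the angle between its (membership, non-membership) vectors in the two IFSs, and the measure averages (or weights) these contributions. Treat the function name and the equal-weight default as assumptions rather than the authors' exact formulation.

```python
# Sketch of a cosine similarity between intuitionistic fuzzy sets: per element, the cosine
# of the angle between the (mu, nu) vectors, averaged (or weighted) over the universe.
import math

def cosine_similarity_ifs(A, B, weights=None):
    """A, B: lists of (mu, nu) pairs over the same universe; weights sum to 1 if given."""
    n = len(A)
    w = weights if weights is not None else [1.0 / n] * n
    total = 0.0
    for (mu_a, nu_a), (mu_b, nu_b), wi in zip(A, B, w):
        dot = mu_a * mu_b + nu_a * nu_b
        norm = math.hypot(mu_a, nu_a) * math.hypot(mu_b, nu_b)
        total += wi * (dot / norm if norm > 0 else 1.0)
    return total

A = [(0.7, 0.2), (0.4, 0.5), (0.9, 0.0)]
B = [(0.6, 0.3), (0.5, 0.4), (0.8, 0.1)]
print(cosine_similarity_ifs(A, B))               # close to 1 for similar patterns
```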

18.
We have applied adaptive grid refinement to solve a two-dimensional Schrödinger equation in order to study the feasibility of a quantum computer based on extremely cold neutral alkali-metal atoms. Qubits are implemented as motional states of an atom trapped in a single well of an optical lattice of counter-propagating laser beams. Quantum gates are constructed by bringing two atoms together in a single well leaving the interaction between the atoms to cause entanglement. For special geometries of the optical lattices and thus shape of the wells, quantifying the entanglement reduces to solving for selected eigenfunctions of a Schrödinger equation that contains a two-dimensional Laplacian, a trapping potential that describes the optical well, and a short-ranged interaction potential. The desired eigenfunctions correspond to eigenvalues that are deep in the interior of the spectrum where the trapping potential becomes significant. The spatial range of the interaction potential is three orders of magnitude smaller than the spatial range of the trapping potential, necessitating the use of adaptive grid refinement.
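Purely as an illustration of why local refinement is needed when a short-ranged interaction sits inside a broad trapping potential, the sketch below flags grid cells whose potential jump is large relative to the total variation; the potentials, threshold, and criterion are placeholders and are not the authors' refinement scheme.

```python
# Illustrative refinement criterion (not the authors' scheme): flag cells where the
# potential varies rapidly, e.g. a narrow interaction well inside a broad trap.
import numpy as np

def refine_flags(x, V, rel_tol=0.05):
    """Flag intervals whose potential jump exceeds rel_tol times the total variation."""
    jumps = np.abs(np.diff(V))
    return jumps > rel_tol * (V.max() - V.min())

x = np.linspace(-1.0, 1.0, 101)
V = 0.5 * x**2 - 5.0 * np.exp(-(x / 0.01) ** 2)   # broad trap + narrow interaction well
flags = refine_flags(x, V)
print(np.count_nonzero(flags), "of", flags.size, "cells flagged for refinement")
```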

19.
20.
The main objective of this paper is to study the reduction rate of 2D DEM (digital elevation model) profile data after reduction by the Douglas–Peucker (DP) linear simplification method, and the subsequent reconstruction of the original terrain by fractal interpolation. Two-dimensional data from measured geographic profiles are taken as the study object and reduced with the DP method and an improved Douglas–Peucker (IDP) method, so as to retain the linear spatial characteristics and variations of the profile. The reduced data points are then used as base points for random fractal interpolation, which adds points until the count matches the original data, thereby reconstructing the terrain; the results are compared with the random point extraction method reported in related literature. Tolerance calibration is used to generate different reduction rates, and four types of evaluation factors (statistical measurement, image measurement, spectral analysis, and the elevation cumulative probability distribution graph) are used to analyse the profile variation quantitatively. The results indicate that real profile elevation data, reduced with the different approaches and then reconstructed by fractal interpolation, can yield data points at a higher resolution than originally observed, so that the reconstructed profile shows more natural and realistic detail.
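For concreteness, a standard recursive Douglas–Peucker simplification of a 2D profile is sketched below and its reduction rate reported; the improved DP (IDP) variant and the fractal-interpolation reconstruction studied in the paper are not reproduced, and the test profile and tolerance are arbitrary.

```python
# Standard recursive Douglas-Peucker simplification of a 2D profile (illustration only).
import numpy as np

def douglas_peucker(points, tol):
    """points: (n, 2) array of a polyline; returns indices of the retained points."""
    start, end = points[0], points[-1]
    chord = end - start
    chord_len = np.hypot(*chord)
    if chord_len == 0:
        dists = np.hypot(*(points - start).T)
    else:
        # perpendicular distance of every point to the chord start-end
        dists = np.abs(np.cross(chord, points - start)) / chord_len
    k = int(np.argmax(dists))
    if dists[k] <= tol:
        return [0, len(points) - 1]
    left  = douglas_peucker(points[:k + 1], tol)
    right = douglas_peucker(points[k:], tol)
    return left[:-1] + [i + k for i in right]

x = np.linspace(0, 10, 200)
profile = np.column_stack([x, np.sin(x) + 0.1 * x])
kept = douglas_peucker(profile, tol=0.05)
print(f"reduction rate: {1 - len(kept) / len(profile):.1%}")   # fraction of points removed
```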

