Similar documents
20 similar documents found (search time: 62 ms)
1.
We propose and discuss a new Lepp-surface method able to produce a small triangular approximation of huge sets of terrain grid data by using a two-goal strategy that assures both small approximation error and well-shaped 3D triangles. This is a refinement method that starts with a coarse initial triangulation of the input data and incrementally selects and adds data points into the mesh as follows: for the edge e having the highest error in the mesh, one or two points close to (one or two) terminal edges associated with e are inserted into the mesh. The edge error is computed by adding the triangle approximation errors of the two triangles that share e, while each L2-norm triangle error is computed by using a curvature tensor (a good approximation of the surface) at a representative point associated with both triangles. The method produces triangular approximations that capture the relevant features of the terrain surface well while naturally producing well-shaped triangles. We compare our method with a pure L2-norm optimization method.
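The edge-error computation described above can be sketched as follows. This is a simplified illustration only: the triangle centroid stands in for the paper's curvature-tensor-based representative point, and the names `plane_z`, `edge_error`, and `sample` are hypothetical.

```python
def plane_z(tri, x, y):
    # z-value at (x, y) of the plane through the triangle's three 3D vertices.
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    # Normal via cross product of two edge vectors (triangle assumed non-degenerate).
    ux, uy, uz = x2 - x1, y2 - y1, z2 - z1
    vx, vy, vz = x3 - x1, y3 - y1, z3 - z1
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    # Plane equation: nx*(x-x1) + ny*(y-y1) + nz*(z-z1) = 0, solved for z.
    return z1 - (nx * (x - x1) + ny * (y - y1)) / nz

def edge_error(t1, t2, sample):
    # Edge error = sum of the approximation errors of the two adjacent
    # triangles, each measured at the triangle centroid against the true
    # terrain height function `sample(x, y)`.
    err = 0.0
    for tri in (t1, t2):
        cx = sum(p[0] for p in tri) / 3.0
        cy = sum(p[1] for p in tri) / 3.0
        err += (sample(cx, cy) - plane_z(tri, cx, cy)) ** 2
    return err
```

In the full method this error would drive point insertion near the terminal edges of the worst edge's Lepp.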

2.
A Delaunay Refinement Algorithm for Quality 2-Dimensional Mesh Generation (cited 1 time: 0 self-citations, 1 by others)
We present a simple new algorithm for triangulating polygons and planar straight-line graphs. It provides "shape" and "size" guarantees:
• All triangles have a bounded aspect ratio.
• The number of triangles is within a constant factor of optimal.
Such "quality" triangulations are desirable as meshes for the finite element method, in which the running time generally increases with the number of triangles, and where convergence and stability may be hurt by very skinny triangles. The technique we use, successive refinement of a Delaunay triangulation, extends a mesh generation technique of Chew by allowing triangles of varying sizes. Compared with previous quadtree-based algorithms for quality mesh generation, the Delaunay refinement approach is much simpler and generally produces meshes with fewer triangles. We also discuss an implementation of the algorithm and evaluate its performance on a variety of inputs.
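Two primitives underpin Delaunay refinement of this kind: detecting a "skinny" triangle and locating its circumcenter (the candidate insertion point). A minimal sketch, with a hypothetical quality bound of 20 degrees; the surrounding Delaunay update machinery is omitted:

```python
import math

def circumcenter(a, b, c):
    # Circumcenter of triangle abc in the plane (standard closed-form formula).
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

def min_angle(a, b, c):
    # Smallest interior angle of the triangle, in degrees.
    def ang(p, q, r):
        v1 = (q[0] - p[0], q[1] - p[1]); v2 = (r[0] - p[0], r[1] - p[1])
        cosang = ((v1[0] * v2[0] + v1[1] * v2[1])
                  / (math.hypot(*v1) * math.hypot(*v2)))
        return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    return min(ang(a, b, c), ang(b, c, a), ang(c, a, b))

def is_skinny(a, b, c, bound_deg=20.0):
    # A triangle is a refinement candidate when its minimum angle
    # falls below the quality bound (equivalently, bounded aspect ratio).
    return min_angle(a, b, c) < bound_deg
```

A refinement loop would repeatedly insert the circumcenter of a skinny triangle into the Delaunay triangulation until no skinny triangle remains.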

3.
We present a refinement and coarsening algorithm for the adaptive representation of Right-Triangulated Irregular Network (RTIN) meshes. The refinement algorithm is very simple and proceeds uniformly or locally in triangle meshes. The coarsening algorithm decreases mesh complexity by reducing unnecessary data points in the mesh after a given error criterion is applied. We describe the most important features of the algorithms and give a brief numerical study on the propagation associated with the adaptive scheme used for the refinement algorithm. We also present a comparison with a commercial tool for mesh simplification, Rational Reducer, showing that our coarsening algorithm offers better results in terms of accuracy of the generated meshes.
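The core RTIN refinement step is bisection of a right triangle at the midpoint of its hypotenuse. A minimal sketch, assuming the common RTIN convention of storing a triangle as (apex, left, right) with the right angle at the apex, so the hypotenuse runs from left to right:

```python
def bisect(tri):
    # RTIN-style refinement: split a right triangle at the midpoint of its
    # hypotenuse (the edge opposite the right-angle vertex `apex`).
    apex, left, right = tri
    mid = ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)
    # Two children, each again right-angled at the new midpoint vertex,
    # so the scheme can be applied recursively.
    return (mid, apex, left), (mid, right, apex)
```

Coarsening is the inverse operation: merging two such siblings when the error criterion allows removing the midpoint vertex.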

4.
Longest edge (nested) algorithms for triangulation refinement in two dimensions are able to produce hierarchies of quality and nested irregular triangulations as needed both for adaptive finite element methods and for multigrid methods. They can be formulated in terms of the longest edge propagation path (Lepp) and terminal edge concepts, to refine the target triangles and some related neighbors. We discuss a parallel multithread algorithm, where every thread is in charge of refining a triangle t and its associated Lepp neighbors. The thread manages a changing Lepp(t) (an ordered set of triangles of increasing size) both to find a last longest (terminal) edge and to refine the pair of triangles sharing this edge. The process is repeated until triangle t is destroyed. We discuss the algorithm, related synchronization issues, and the properties inherited from the serial algorithm. We present an empirical study showing that a reasonably efficient parallel method with good scalability was obtained.
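The Lepp(t) traversal each thread performs can be sketched serially as follows. This is an illustrative simplification, assuming a mesh stored as triangle-id to vertex-tuple and a neighbor table mapping each triangle's edge index to the triangle across that edge (`None`/absent for boundary edges):

```python
import math

def longest_edge(tri):
    # Index of the longest edge; edge i joins vertices i and (i+1) mod 3.
    best, best_len = 0, -1.0
    for i in range(3):
        length = math.dist(tri[i], tri[(i + 1) % 3])
        if length > best_len:
            best, best_len = i, length
    return best

def lepp(mesh, neighbors, t):
    # Longest edge propagation path of triangle t: repeatedly step to the
    # neighbor across the current triangle's longest edge until that edge
    # is terminal (shared as longest edge by both triangles, or on the boundary).
    path = [t]
    while True:
        e = longest_edge(mesh[t])
        nb = neighbors[t].get(e)        # neighbor across edge e, None if boundary
        if nb is None or nb in path:    # terminal edge reached
            return path
        path.append(nb)
        t = nb
```

In the parallel algorithm the pair of triangles sharing the terminal edge would then be refined, and the (shrinking) path recomputed until the original target triangle is destroyed.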

5.
In this paper, we analyze the convergence of the adaptive conforming and nonconforming $P_1$ finite element methods with red–green refinement based on the standard Dörfler marking strategy. Since the refined mesh is not nested in the previous one, the usual Galerkin orthogonality or quasi-orthogonality for newest vertex bisection does not hold in this case. To overcome this difficulty, we instead develop new quasi-orthogonality results under a certain condition on the initial mesh (Condition A). Consequently, we show convergence of the adaptive methods by establishing the reduction of certain total errors. To weaken the condition on the initial mesh, we propose a modified red–green refinement and prove the convergence of the associated adaptive methods under a much weaker condition on the initial mesh (Condition B). Furthermore, we also develop an initial mesh generator which guarantees that all interior triangles are equilateral (satisfying Condition A) and that all other triangles, each containing at least one vertex on the boundary, satisfy Condition B.
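The "red" half of red–green refinement splits a triangle into four congruent children by connecting its edge midpoints; a minimal sketch (the "green" closure step, which bisects neighbors to remove hanging nodes, is omitted here):

```python
def red_refine(tri):
    # 'Red' refinement: connect the three edge midpoints; the parent is
    # replaced by four congruent children, each similar to the parent.
    a, b, c = tri
    mid = lambda p, q: ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    mab, mbc, mca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, mab, mca), (mab, b, mbc), (mca, mbc, c), (mab, mbc, mca)]

def area(tri):
    # Unsigned triangle area via the cross product.
    (ax, ay), (bx, by), (cx, cy) = tri
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0
```

Because the children are similar to the parent, red refinement never degrades triangle shape; the non-nestedness the paper analyzes arises from the interaction with green closures.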

6.
Delaunay refinement algorithms for triangular mesh generation (cited 36 times: 0 self-citations, 36 by others)
Delaunay refinement is a technique for generating unstructured meshes of triangles for use in interpolation, the finite element method, and the finite volume method. In theory and practice, meshes produced by Delaunay refinement satisfy guaranteed bounds on angles, edge lengths, the number of triangles, and the grading of triangles from small to large sizes. This article presents an intuitive framework for analyzing Delaunay refinement algorithms that unifies the pioneering mesh generation algorithms of L. Paul Chew and Jim Ruppert, improves the algorithms in several minor ways, and most importantly, helps to solve the difficult problem of meshing nonmanifold domains with small angles.

Although small angles inherent in the input geometry cannot be removed, one would like to triangulate a domain without creating any new small angles. Unfortunately, this problem is not always soluble. A compromise is necessary. A Delaunay refinement algorithm is presented that can create a mesh in which most angles are 30° or greater and no angle is smaller than arcsin[2^(−3/2) sin(φ/2)], where φ ≤ 60° is the smallest angle separating two segments of the input domain. New angles smaller than 30° appear only near input angles smaller than 60°. In practice, the algorithm's performance is better than these bounds suggest.

Another new result is that Ruppert's analysis technique can be used to reanalyze one of Chew's algorithms. Chew proved that his algorithm produces no angle smaller than 30° (barring small input angles), but without any guarantees on grading or number of triangles. He conjectured that his algorithm offers such guarantees. His conjecture is conditionally confirmed here: if the angle bound is relaxed to less than 26.5°, Chew's algorithm produces meshes (of domains without small input angles) that are nicely graded and size-optimal.
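As a numeric illustration of the angle guarantee, the arcsin-type bound can be evaluated directly. The exact constant 2^(−3/2) is an assumption on our part (the formula in this copy of the abstract is damaged), so treat the sketch as indicative only:

```python
import math

def min_angle_bound(phi_deg):
    # Assumed bound: arcsin(2**-1.5 * sin(phi/2)), where phi <= 60 degrees
    # is the smallest angle separating two input segments. The constant
    # 2**-1.5 is reconstructed, not taken verbatim from this abstract.
    phi = math.radians(phi_deg)
    return math.degrees(math.asin(2.0 ** -1.5 * math.sin(phi / 2.0)))
```

For a benign input (phi = 60°) the guaranteed minimum angle is a little over 10°, consistent with the statement that new angles below 30° appear only near small input angles.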


7.
Four different automatic mesh generators capable of generating either triangular meshes or hybrid meshes of mixed element types have been used in the mesh generation process. The performance of these mesh generators was tested by applying them to the adaptive finite element refinement procedure. It is found that by carefully controlling the quality and grading of the quadrilateral elements, an increase in efficiency over pure triangular meshes can be achieved. Furthermore, if linear elements are employed, an optimal hybrid mesh can be obtained most economically by a combined use of the mesh coring technique suggested by Lo and Lau and a selective removal of diagonals from the triangular element mesh. On the other hand, if quadratic elements are used, it is preferable to generate a pure triangular mesh first, and then obtain a hybrid mesh by merging triangles.

8.
It is proved that any triangulation of a flat polygonal region can be refined by using repeated subdivisions of an edge so that: (1) the maximum diameter of the triangles would be less than any pre-assigned positive number, and (2) the minimum interior angle of the triangles of the triangulation obtained would be not less than the minimum interior angle of the triangles of the original triangulation divided by 9. The required triangulation refinement is constructed in two steps: first, the triangulation is refined so that the triangles of the triangulation obtained can be combined into pairs, and only boundary triangles may be left unpaired; at this step each triangle is split into at most 4 parts. Then the triangulation obtained is refined once again so that the diameter of each triangle is less than a prescribed ε. At each of the steps, the minimum interior angle of the triangles is reduced by a factor of at most 3. This is guaranteed by the lemma saying that the interior angles of the triangles into which the original triangle is divided by a median are at least as great as one-third of the minimum interior angle of the original triangle.
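The median lemma at the end of the abstract is easy to spot-check numerically. A minimal sketch that splits a triangle by one median and compares all resulting angles against one-third of the original minimum angle (one example, not a proof):

```python
import math

def angles(tri):
    # Interior angles (degrees) of a triangle given as three 2D points.
    out = []
    for i in range(3):
        p, q, r = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        v1 = (q[0] - p[0], q[1] - p[1]); v2 = (r[0] - p[0], r[1] - p[1])
        cosang = ((v1[0] * v2[0] + v1[1] * v2[1])
                  / (math.hypot(*v1) * math.hypot(*v2)))
        out.append(math.degrees(math.acos(max(-1.0, min(1.0, cosang)))))
    return out

def median_split(tri):
    # Split triangle (a, b, c) by the median from vertex a to the midpoint of bc.
    a, b, c = tri
    m = ((b[0] + c[0]) / 2.0, (b[1] + c[1]) / 2.0)
    return (a, b, m), (a, m, c)

# Check the lemma on one example: every angle in the two halves should be
# at least one third of the original minimum interior angle.
tri = ((0.0, 0.0), (4.0, 0.0), (1.0, 2.0))
lower = min(angles(tri)) / 3.0
ok = all(ang >= lower for half in median_split(tri) for ang in angles(half))
```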

9.
In this paper, we investigate the adaptive Runge-Kutta discontinuous Galerkin (RKDG) methods, used together with the modified ghost fluid method (MGFM) and with adaptive RKDG methods for solving the level set function, to simulate compressible two-medium flow in one and two dimensions. A shock detection technique (the KXRCF method) is adopted as an indicator to identify troubled cells, which then undergo a numerical limiting procedure that uses a modified TVB limiter to reconstruct the degrees of freedom, as well as an adaptive mesh refinement procedure. We describe in detail how to decide whether the computational mesh should be refined or coarsened, how to handle the hanging nodes, and how to redefine the numerical solutions of the two-medium flow and the level set function on such adaptive meshes. Extensive numerical tests illustrate that the proposed adaptive methods are capable of enhancing the resolution near discontinuities inside the single-medium flow region and near material interfaces of the two-medium flow region.

10.
In this paper, three-dimensional automatic adaptive mesh refinement is presented for modeling crack propagation based on the modified superconvergent patch recovery (MSPR) technique. The technique is developed for mixed-mode fracture analysis of different fracture specimens. The stress intensity factors are calculated at the crack tip region, and the crack propagation is determined by applying a proper crack growth criterion. Automatic adaptive mesh refinement based on the MSPR technique is employed to simulate crack growth by applying the asymptotic crack tip solution and using collapsed quarter-point singular tetrahedral elements at the crack tip region. An a posteriori error estimator based on the Zienkiewicz–Zhu method is used to estimate the error of the fracture parameters and predict the crack path pattern. Finally, the efficiency and accuracy of the proposed computational algorithm are demonstrated by several numerical examples.

11.
We show that two desirable properties for planar mesh refinement techniques are incompatible. Mesh refinement is a common technique for adaptive error control in generating unstructured planar triangular meshes for piecewise polynomial representations of data. Local refinements are modifications of the mesh that involve a fixed maximum amount of computation, independent of the number of triangles in the mesh. Regular meshes are meshes for which every interior vertex has degree 6. At least for some simple model meshing problems, optimal meshes are known to be regular, hence it would be desirable to have a refinement technique that, if applied to a regular mesh, produced a larger regular mesh. We call such a technique a regular refinement. In this paper, we prove that no refinement technique can be both local and regular. Our results also have implications for non-local refinement techniques such as Delaunay insertion or Rivara's refinement. Received August 1, 1996 / Revised version received February 28, 1997

12.
In this paper we analyse a method for triangulating the sphere originally proposed by Baumgardner and Frederickson in 1985. The method is essentially a refinement procedure for arbitrary spherical triangles that fit into a hemisphere. Refinement is carried out by dividing each triangle into four by introducing the midpoints of the edges as new vertices and connecting them in the usual ‘red’ way. We show that this process can be described by a sequence of piecewise smooth mappings from a reference triangle onto the spherical triangle. We then prove that the whole sequence of mappings is uniformly bi-Lipschitz and converges uniformly to a non-smooth parameterization of the spherical triangle, recovering the Baumgardner and Frederickson spherical barycentric coordinates. We also prove that the sequence of triangulations is quasi-uniform, that is, areas of triangles and lengths of the edges are roughly the same at each refinement level. Some numerical experiments confirm the theoretical results.
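The refinement step above (projected edge midpoints, 'red' connection) is easy to sketch, and the quasi-uniformity claim can be spot-checked by comparing child areas. A minimal illustration on the unit sphere; the area formula used here is the Van Oosterom–Strackee solid-angle formula, not necessarily the tool used in the paper:

```python
import math

def dot(p, q):
    return sum(a * b for a, b in zip(p, q))

def normalize(p):
    n = math.sqrt(dot(p, p))
    return tuple(x / n for x in p)

def midpoint(p, q):
    # Spherical midpoint: project the chordal midpoint back onto the sphere.
    return normalize(tuple((a + b) / 2.0 for a, b in zip(p, q)))

def refine(tri):
    # One 'red' step: the three projected edge midpoints split the
    # spherical triangle into four children.
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def spherical_area(tri):
    # Solid angle of a spherical triangle (Van Oosterom-Strackee formula).
    a, b, c = tri
    cross = (b[1] * c[2] - b[2] * c[1],
             b[2] * c[0] - b[0] * c[2],
             b[0] * c[1] - b[1] * c[0])
    return 2.0 * math.atan2(abs(dot(a, cross)),
                            1.0 + dot(a, b) + dot(b, c) + dot(c, a))
```

On the octant triangle the four children's areas sum exactly to the parent's, and their max/min area ratio stays modest, in line with quasi-uniformity.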

13.
In order to solve the topology optimization problems of fluid flow and obtain higher resolution of the interface with a minimum of additional expense, an automatic local adaptive mesh refinement method is proposed. The optimization problem is solved by a simple but robust optimality criteria (OC) algorithm. A material distribution information based adaptive mesh method is adopted during the optimization process. The optimization procedure is provided and verified with several benchmark examples.

14.
The basic theory of the strengthened Cauchy–Buniakowskii–Schwarz (C.B.S.) inequality is the main tool in the convergence analysis of the recently proposed algebraic multilevel iterative methods. An upper bound of the constant γ in the strengthened C.B.S. inequality for the case of the finite element solution of 2D elasticity problems is obtained. It is assumed that linear triangular finite elements are used, the initial mesh consisting of right isosceles triangles and the mesh refinement procedure being uniform. For the resulting linear algebraic systems we have proved that γ² < 0.75 uniformly in the mesh parameter and in Poisson's ratio ν ∈ (0, 1/2). Furthermore, the presented numerical tests show that the same relation holds for arbitrary initial right triangulations, even in the case of degenerate triangles. The theoretical results obtained are practically important for successful implementation of the finite element method in large-scale modeling of complicated structures. They allow us to construct optimal-order algebraic multilevel iterative solvers for a wide class of real-life elasticity problems.

15.
The hexaparagon     
A hexagon with each pair of opposite sides parallel to a side of a triangle will be called a hexaparagon for that triangle. One way to construct a hexaparagon for a given triangle ABC is to use as vertices the centroids P, Q, R, S, T, and U of the six non-overlapping sub-triangles formed by the three medians of triangle ABC. The perimeter of this hexaparagon is half the perimeter of triangle ABC. The ratio of the areas of triangle ABC to this hexaparagon is 36 to 13 and the lengths of the parallel sides are in the ratio 6 to 2 to 1. The vertices of this hexaparagon lie on an ellipse and, with a second type of hexaparagon introduced later, hexaparagons tile the plane.
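The construction and the stated ratios can be verified exactly with rational arithmetic. A sketch for one concrete triangle (the choice of A, B, C is ours; the construction follows the abstract):

```python
import math
from fractions import Fraction as F

def centroid(*pts):
    # Exact centroid of any number of points with Fraction coordinates
    # (with two points this is just the midpoint).
    n = len(pts)
    return tuple(sum(coords) / n for coords in zip(*pts))

def area(poly):
    # Shoelace formula, exact over Fractions.
    s = 0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

def perim(poly):
    # Perimeter (floating point, since edge lengths involve square roots).
    return sum(math.hypot(float(p[0] - q[0]), float(p[1] - q[1]))
               for p, q in zip(poly, poly[1:] + poly[:1]))

A, B, C = (F(0), F(0)), (F(6), F(0)), (F(0), F(6))
G = centroid(A, B, C)                                   # medians meet here
Mab, Mbc, Mca = centroid(A, B), centroid(B, C), centroid(C, A)
# The six non-overlapping sub-triangles cut out by the three medians:
subtris = [(A, Mab, G), (Mab, B, G), (B, Mbc, G),
           (Mbc, C, G), (C, Mca, G), (Mca, A, G)]
# Their centroids, in order, are the hexaparagon vertices P..U.
hexaparagon = [centroid(*t) for t in subtris]
```

The area ratio comes out exactly 36/13 and the perimeter ratio exactly 1/2, as the abstract states.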

16.
We develop a method for adaptive mesh refinement for steady state problems that arise in the numerical solution of Cahn–Hilliard equations with an obstacle free energy. The problem is discretized in time by the backward-Euler method and in space by linear finite elements. The adaptive mesh refinement is performed using residual based a posteriori estimates; the time step is adapted using a heuristic criterion. We describe the space–time adaptive algorithm and present numerical experiments in two and three space dimensions that demonstrate the usefulness of our approach.

17.
Two methods for calculating the volume and surface area of the intersection between a triangle mesh and a rectangular hexahedron are presented. The main result is an exact method that calculates the polyhedron of intersection and thereafter the volume and surface area of the fraction of the hexahedral cell inside the mesh. The second method is approximate, and estimates the intersection by a least squares plane. While most previous publications focus on non-degenerate triangle meshes, we here extend the methods to handle geometric degeneracies. In particular, we focus on large-scale triangle overlaps, or double surfaces. It is a geometric degeneracy that can be hard to solve with existing mesh repair algorithms. There could also be situations in which it is desirable to keep the original triangle mesh unmodified. Alternative methods that solve the problem without altering the mesh are therefore presented. This is a step towards a method that calculates the solid area and volume fractions of a degenerate triangle mesh including overlapping triangles, overlapping meshes, hanging nodes, and gaps. Such triangle meshes are common in industrial applications. The methods are validated against three industrial test cases. The validation shows that the exact method handles all addressed geometric degeneracies, including double surfaces, small self-intersections, and split hexahedra.
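The approximate method hinges on fitting a least-squares plane. A minimal sketch of such a fit via the plain normal equations (the paper's actual solver and point-sampling strategy are not specified in the abstract, so this is illustrative only):

```python
def solve(M, v):
    # Solve M x = v by Gaussian elimination with partial pivoting.
    n = len(v)
    A = [row[:] + [rhs] for row, rhs in zip(M, v)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n + 1):
                A[r][c] -= f * A[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (A[i][n] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

def fit_plane(points):
    # Least-squares plane z = a*x + b*y + c through 3D points,
    # via the normal equations of the linear regression.
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] ** 2 for p in points); syy = sum(p[1] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    return solve(M, [sxz, syz, sz])  # coefficients [a, b, c]
```

With the plane in hand, the cell's volume fraction below/above it can be computed by standard plane-box clipping, which the exact method generalizes to full polyhedra of intersection.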

18.
An adaptive mixed finite element method (AMFEM) is designed to guarantee an error reduction, also known as the saturation property: after each refinement step, the error for the fine mesh is strictly smaller than the error for the coarse mesh, up to oscillation terms. This error reduction property is established here for the Raviart-Thomas finite element method with a uniform reduction factor for the norm of the flux errors. Our result allows for linear convergence of a proper adaptive mixed finite element algorithm with respect to the number of refinement levels. The adaptive algorithm surprisingly does not require any particular mesh design, unlike the conforming finite element method. The new arguments are a discrete local efficiency and a quasi-orthogonality estimate. The proof does not rely on duality or on regularity.


19.
A unified and robust mathematical model for compressible and incompressible linear elasticity can be obtained by rephrasing the Herrmann formulation within the Hellinger-Reissner principle. This quasi-optimally converging extension of PEERS (Plane Elasticity Element with Reduced Symmetry) is called Dual-Mixed Hybrid formulation (DMH). Explicit residual-based a posteriori error estimates for DMH are introduced and are mathematically shown to be locking-free, reliable, and efficient. The estimator serves as a refinement indicator in an adaptive algorithm for effective automatic mesh generation. Numerical evidence supports that the adaptive scheme leads to optimal convergence for Lamé and Stokes benchmark problems with singularities.

20.
Local mesh refinement is one of the key steps in the implementations of adaptive finite element methods. This paper presents a parallel algorithm for distributed memory parallel computers for adaptive local refinement of tetrahedral meshes using bisection. This algorithm is used in PHG, Parallel Hierarchical Grid (http://lsec.cc.ac.cn/phg/), a toolbox under active development for parallel adaptive finite element solutions of partial differential equations. The algorithm proposed is characterized by allowing simultaneous refinement of submeshes to arbitrary levels before synchronization between submeshes, without the need for a central coordinator process for managing new vertices. Using the concept of canonical refinement, a simple proof of the independence of the resulting mesh from the mesh partitioning is given, which is useful in better understanding the behaviour of the bisection refinement procedure.

AMS subject classifications: 65Y05, 65N50


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号