Similar Literature
20 similar records found.
1.
Longest-edge (nested) algorithms for triangulation refinement in two dimensions are able to produce hierarchies of quality, nested irregular triangulations as needed both for adaptive finite element methods and for multigrid methods. They can be formulated in terms of the longest edge propagation path (Lepp) and terminal edge concepts, to refine the target triangles and some related neighbors. We discuss a parallel multithread algorithm, where every thread is in charge of refining a triangle t and its associated Lepp neighbors. The thread manages a changing Lepp(t) (an ordered set of triangles of increasing size) both to find the last longest (terminal) edge and to refine the pair of triangles sharing this edge. The process is repeated until triangle t is destroyed. We discuss the algorithm, related synchronization issues, and the properties inherited from the serial algorithm. We present an empirical study showing that a reasonably efficient parallel method with good scalability was obtained.
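
To make the Lepp concept concrete, here is a minimal sketch, assuming a toy mesh given as a point list plus vertex-index triples; the helper names and mesh representation are illustrative choices, not the paper's API:

```python
from itertools import combinations
import math

def edge_len(pts, e):
    (x1, y1), (x2, y2) = pts[e[0]], pts[e[1]]
    return math.hypot(x2 - x1, y2 - y1)

def longest_edge(pts, tri):
    return max((tuple(sorted(e)) for e in combinations(tri, 2)),
               key=lambda e: edge_len(pts, e))

def neighbor_across(tris, t, e):
    # The triangle other than t containing both endpoints of e, if any.
    for s in tris:
        if s != t and set(e) <= set(s):
            return s
    return None

def lepp(pts, tris, t):
    """Follow the neighbor across the current longest edge until that edge
    is terminal: shared as the longest edge of both triangles, or on the
    boundary. Returns the path and the terminal edge."""
    path = [t]
    while True:
        e = longest_edge(pts, path[-1])
        nbr = neighbor_across(tris, path[-1], e)
        if nbr is None or longest_edge(pts, nbr) == e:
            return path, e
        path.append(nbr)

# Two triangles sharing edge (1, 2), which is the longest edge of both.
pts = [(0, 0), (2, 0), (0, 2), (2.5, 2.5)]
tris = [(0, 1, 2), (1, 2, 3)]
print(lepp(pts, tris, (0, 1, 2)))   # ([(0, 1, 2)], (1, 2))
```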

2.
We examine a generalized conforming bisection (GCB-)algorithm which allows both global and local nested refinements of the triangulations without generating hanging nodes. It is based on the notion of a mesh density function which prescribes where and how much to refine the mesh. Some regularity properties of generated sequences of refined triangulations are proved. Several numerical tests demonstrate the efficiency of the proposed bisection algorithm. It is also shown how to modify the GCB-algorithm in order to generate anisotropic meshes with high aspect ratios.

3.
J. Mosler, M. Ortiz, PAMM 6(1): 247–248 (2006)
A novel h-adaptive finite element strategy for standard dissipative media at finite strains based on energy minimization is presented. The method can be applied to any (incremental) minimization problem to be analyzed by finite elements. Similarly to an error estimator by Babuška & Rheinboldt, the proposed error indicator is based on solving a local Dirichlet-type problem. However, in contrast to the original work, a different error indicator is considered. Provided the underlying physical problem is governed by a minimization problem, the difference between the energy of the elements defining the local problem computed from the initial finite element interpolation and that associated with the local Dirichlet-type problem is used as an indicator. If this difference exceeds a certain threshold, the elements defining the local problem are refined by applying a modified longest edge bisection according to Rivara. Since this re-meshing strategy leads to a nested family of triangulations, the transfer of history variables necessary to describe dissipative materials is relatively inexpensive. The presented h-adaptation is driven solely by energy minimization. As a consequence, anisotropic meshes may evolve if they are energetically favorable. The versatility and rate of convergence of the resulting approach are illustrated by means of selected numerical examples.
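
As a rough illustration of the indicator itself (not the paper's implementation), the sketch below assumes two hypothetical callables: global_energy evaluates the incremental energy of a patch under the current finite element solution, and local_dirichlet_energy re-solves the patch with Dirichlet data taken from that solution:

```python
def mark_for_refinement(patches, global_energy, local_dirichlet_energy, tol):
    """Mark a patch when re-solving it locally lowers the incremental
    energy by more than tol; for a minimization problem this difference
    is nonnegative. Both energy callables are hypothetical stand-ins."""
    return [p for p in patches
            if global_energy(p) - local_dirichlet_energy(p) > tol]
```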

4.
Local mesh refinement is one of the key steps in the implementation of adaptive finite element methods. This paper presents a parallel algorithm for distributed-memory parallel computers for adaptive local refinement of tetrahedral meshes using bisection. This algorithm is used in PHG, Parallel Hierarchical Grid (http://lsec.cc.ac.cn/phg/), a toolbox under active development for parallel adaptive finite element solutions of partial differential equations. The proposed algorithm is characterized by allowing simultaneous refinement of submeshes to arbitrary levels before synchronization between submeshes, without the need for a central coordinator process to manage new vertices. Using the concept of canonical refinement, a simple proof is given that the resulting mesh is independent of the mesh partitioning, which is useful in better understanding the behaviour of the bisection refinement procedure.
AMS subject classifications: 65Y05, 65N50

5.
We examine the longest-edge bisection algorithm that chooses for bisection the longest edge in a given face-to-face simplicial partition of a bounded polytopic domain in ℝ^d. Dividing this edge at its midpoint, we define a locally refined partition of all simplices that surround this edge. Repeating this process, we obtain a family ℱ = {ℐ_h}_{h→0} of nested face-to-face partitions ℐ_h. For d = 2, we prove that this family is strongly regular; i.e., there exists a constant C > 0 such that meas T ≥ C h² for all triangles T ∈ ℐ_h and all triangulations ℐ_h ∈ ℱ. In particular, the well-known minimum angle condition is valid.

6.
The Longest-Edge (LE) bisection of a triangle is obtained by joining the midpoint of its longest edge with the opposite vertex. Here two properties of the longest-edge bisection scheme for triangles are proved. For any triangle, the number of distinct triangles (up to similarity) generated by longest-edge bisection is finite. In addition, if LE-bisection is iteratively applied to an initial triangle, then the minimum angle of the resulting triangles is greater than or equal to half of the minimum angle of the initial triangle. The novelty of the proofs is the use of a hyperbolic metric in a shape space for triangles.
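
A minimal sketch of LE-bisection, together with a numerical check of the half-angle property on one iterated example (the starting triangle and iteration count are arbitrary choices):

```python
import math

def longest_edge_bisect(tri):
    """Bisect a triangle (three 2D points) at the midpoint of its
    longest edge, returning the two child triangles."""
    a, b, c = tri
    for _ in range(3):              # rotate so (a, b) is the longest edge
        if math.dist(a, b) >= max(math.dist(b, c), math.dist(c, a)):
            break
        a, b, c = b, c, a
    m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return (a, m, c), (m, b, c)

def min_angle(tri):
    """Smallest interior angle in degrees, via the law of cosines."""
    a, b, c = (math.dist(tri[1], tri[2]), math.dist(tri[2], tri[0]),
               math.dist(tri[0], tri[1]))
    return min(math.degrees(math.acos((y*y + z*z - x*x) / (2*y*z)))
               for x, y, z in ((a, b, c), (b, c, a), (c, a, b)))

# Iterate LE-bisection; the observed minimum angle never drops below
# half of the initial minimum angle, as the second property asserts.
tris = [((0.0, 0.0), (4.0, 0.0), (0.5, 1.0))]
half_initial = min_angle(tris[0]) / 2
for _ in range(6):
    tris = [child for t in tris for child in longest_edge_bisect(t)]
print(half_initial, min(min_angle(t) for t in tris))
```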

7.
We describe a distributed-memory parallel Delaunay refinement algorithm for simple polyhedral domains whose constituent bounding edges and surfaces are separated by angles between 90° and 270°, inclusive. With these constraints, our algorithm can generate meshes containing tetrahedra with circumradius-to-shortest-edge ratio less than 2, and can tolerate more than 80% of the communication latency caused by unpredictable and variable remote gather operations.

Our experiments show that the algorithm is efficient in practice, even for certain domains whose boundaries do not conform to the theoretical limits imposed by the algorithm. The algorithm we describe is the first step in the development of much more sophisticated guaranteed-quality parallel mesh generation algorithms.


8.
For hydrologic applications, terrain models should have few local minima, and drainage lines should coincide with edges. We show that triangulating a set of points with elevations such that the number of local minima of the resulting terrain is minimized is NP-hard for degenerate point sets. The same result holds for higher-order Delaunay triangulations, even when there are no degeneracies. Two heuristics are presented to reduce the number of local minima for higher-order Delaunay triangulations; both start out with the Delaunay triangulation. We give efficient algorithms for their implementation and test how well they perform on real-world data. We also study another desirable drainage characteristic, few valley components, and how to obtain it for higher-order Delaunay triangulations; this gives rise to a third heuristic. Tables and visualizations show how the heuristics perform with respect to the drainage characteristics on real-world data.
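
Local minima of a triangulated terrain can be counted directly from the mesh connectivity; a minimal sketch, ignoring the special treatment hydrologists often give boundary vertices:

```python
from collections import defaultdict

def local_minima(heights, triangles):
    """Vertices whose elevation is strictly below every mesh neighbor's.
    (Boundary vertices are treated like interior ones in this sketch.)"""
    nbrs = defaultdict(set)
    for i, j, k in triangles:
        for u, v in ((i, j), (j, k), (k, i)):
            nbrs[u].add(v)
            nbrs[v].add(u)
    return [v for v in range(len(heights))
            if nbrs[v] and all(heights[v] < heights[w] for w in nbrs[v])]

# Toy terrain: vertex 0 sits below all of its neighbors, so it is a sink.
print(local_minima([0.0, 2.0, 2.0, 3.0], [(0, 1, 2), (1, 3, 2)]))  # [0]
```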

9.
Computers with multiple processor cores using shared memory are now ubiquitous. In this paper, we present several parallel geometric algorithms that specifically target this environment, with the goal of exploiting the additional computing power. The algorithms we describe are (a) 2-/3-dimensional spatial sorting of points, as is typically used for preprocessing before running incremental algorithms, (b) d-dimensional axis-aligned box intersection computation, and finally (c) 3D bulk insertion of points into Delaunay triangulations, which can be used for mesh generation algorithms or simply for constructing 3D Delaunay triangulations. For the latter, we introduce as a foundational element the design of a container data structure that both provides concurrent addition and removal operations and is compact in memory. This makes it especially well suited for storing large dynamic graphs such as Delaunay triangulations. We show experimental results for these algorithms, using our implementations based on the Computational Geometry Algorithms Library (CGAL). This work is a step towards what we hope will become a parallel mode for CGAL, where algorithms automatically use the available parallel resources without requiring significant user intervention.
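
For item (b), the per-pair predicate is simple: two axis-aligned boxes intersect iff their intervals overlap on every axis. A minimal sketch with boxes given as (lo, hi) pairs per axis (the data layout is an assumption; CGAL's parallel algorithm adds a streamed segment-tree search on top of this test rather than naive all-pairs checking):

```python
def boxes_intersect(a, b):
    """d-dimensional axis-aligned boxes intersect iff their intervals
    overlap on every axis; a and b are sequences of (lo, hi) per axis."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

box1 = ((0, 2), (0, 2), (0, 2))
box2 = ((1, 3), (1, 3), (1.5, 4))
print(boxes_intersect(box1, box2))   # True
```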

10.
A Delaunay Refinement Algorithm for Quality 2-Dimensional Mesh Generation
We present a simple new algorithm for triangulating polygons and planar straight-line graphs. It provides "shape" and "size" guarantees:
• All triangles have a bounded aspect ratio.
• The number of triangles is within a constant factor of optimal.
Such "quality" triangulations are desirable as meshes for the finite element method, in which the running time generally increases with the number of triangles, and where the convergence and stability may be hurt by very skinny triangles. The technique we use-successive refinement of a Delaunay triangulation-extends a mesh generation technique of Chew by allowing triangles of varying sizes. Compared with previous quadtree-based algorithms for quality mesh generation, the Delaunay refinement approach is much simpler and generally produces meshes with fewer triangles. We also discuss an implementation of the algorithm and evaluate its performance on a variety of inputs.  相似文献   

11.
In this paper, we are concerned with mortar edge element methods for solving three-dimensional Maxwell's equations. A new type of Lagrange multiplier space is introduced to impose the weak continuity of the tangential components of the edge element solutions across the interfaces between neighboring subdomains. The mortar edge element method is shown to have nearly optimal convergence under some natural regularity assumptions when nested triangulations are assumed on the interfaces. A generalized edge element interpolation is introduced which plays a crucial role in establishing the nearly optimal convergence. The theoretically predicted convergence is confirmed by numerical experiments.



12.
Optimal convergence rates of adaptive finite element methods are well understood in terms of the axioms of adaptivity. One key ingredient is the discrete reliability of a residual-based a posteriori error estimator, which controls the error of two discrete finite element solutions based on two nested triangulations. In the error analysis of nonconforming finite element methods, like the Crouzeix-Raviart or Morley finite element schemes, the difference of the piecewise derivatives of discontinuous approximations to the distributional gradients of global Sobolev functions plays a dominant role and is the object of this paper. The nonconforming interpolation operator, which comes naturally with the definition of the aforementioned nonconforming finite elements in the sense of Ciarlet, allows for stability and approximation properties that enable direct proofs of the reliability for the residual that monitors the equilibrium condition. The novel approach of this paper is the suggestion of a right-inverse of this interpolation operator in conforming piecewise polynomials to design a nonconforming approximation of a given coarse-grid approximation on a refined triangulation. The results of this paper allow for simple proofs of the discrete reliability in any space dimension and on multiply connected domains on general shape-regular triangulations beyond newest-vertex bisection of simplices. Particular attention is paid to optimal constants in some standard discrete estimates listed in the appendices.

13.
Summary. In this paper we consider a quadrature method for the solution of the double layer potential equation corresponding to Laplace's equation in a three-dimensional polyhedron. We prove the stability of our method in the case of special triangulations over the boundary of the polyhedron. The assumptions imposed on the triangulations are analogous to those appearing in the one-dimensional case. Finally, we establish the rates of convergence and discuss the effect of mesh refinement near the corners and edges of the polyhedron.

14.
15.
We show that two desirable properties for planar mesh refinement techniques are incompatible. Mesh refinement is a common technique for adaptive error control in generating unstructured planar triangular meshes for piecewise polynomial representations of data. Local refinements are modifications of the mesh that involve a fixed maximum amount of computation, independent of the number of triangles in the mesh. Regular meshes are meshes in which every interior vertex has degree 6. At least for some simple model meshing problems, optimal meshes are known to be regular; hence it would be desirable to have a refinement technique that, if applied to a regular mesh, produced a larger regular mesh. We call such a technique a regular refinement. In this paper, we prove that no refinement technique can be both local and regular. Our results also have implications for non-local refinement techniques such as Delaunay insertion or Rivara's refinement.

16.
Recently, there has been much interest in the solution of differential equations on surfaces and manifolds, driven by many applications whose dynamics take place on such domains. Although increasingly powerful algorithms have been developed in this field, many straightforward questions remain, particularly in the area of coupling advanced discretizations with efficient linear solvers. In this paper, we develop a structured refinement algorithm for octahedral triangulations of the surface of the sphere. We explain the composite-grid finite-element discretization of the Laplace-Beltrami operator on such triangulations and extend the fast adaptive composite-grid scheme to provide an efficient solution of the resulting linear system. Supporting numerical examples are presented, including the recovery of second-order accuracy in the case of a nonsmooth solution.

17.
We discuss adaptive sparse grid algorithms for stochastic differential equations, with a particular focus on applications to electromagnetic scattering by structures with holes of uncertain size, location, and quantity. Stochastic collocation (SC) methods are used in combination with an adaptive sparse grid approach based on nested Gauss-Patterson grids. As an error estimator, we demonstrate how the nested structure allows effective error estimation through Richardson extrapolation. This is shown to provide excellent error estimates, and it also yields an efficient means of estimating the solution at the next refinement level. We introduce an adaptive approach for the computation of problems with discrete random variables and demonstrate its efficiency for scattering problems with a random number of holes. The results are compared with results based on Monte Carlo methods and with Stroud-based integration, confirming the accuracy and efficiency of the proposed techniques.
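
The mechanics of level-difference error estimation and Richardson extrapolation can be shown on a toy 1-D quadrature; nested trapezoid levels stand in here for the paper's Gauss-Patterson grids, and the h² error model behind the extrapolation step is an assumption of the sketch:

```python
import numpy as np

def trapezoid_level(f, a, b, level):
    """Composite trapezoid rule on 2**level subintervals (nested levels)."""
    x = np.linspace(a, b, 2**level + 1)
    y = f(x)
    h = (b - a) / 2**level
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

f = np.exp                      # integral over [0, 1] is e - 1
Q = [trapezoid_level(f, 0.0, 1.0, k) for k in range(3, 7)]
for k in range(1, len(Q)):
    est_err = abs(Q[k] - Q[k - 1])           # surrogate for the true error
    extrap = Q[k] + (Q[k] - Q[k - 1]) / 3    # Richardson step for an h^2 rule
    print(f"level {k + 3}: Q = {Q[k]:.8f}, est_err = {est_err:.2e}, "
          f"extrapolated = {extrap:.8f}")
```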

18.
Stefan A. Funken, Anja Schmidt, Numerical Algorithms 87(3): 1147–1176 (2021)

Adaptive meshing is a fundamental component of adaptive finite element methods. This includes refining and coarsening meshes locally. In this work, we are concerned with the red-green-blue refinement strategy in two dimensions and its counterpart, coarsening. In general, coarsening algorithms are mostly based on an explicitly given refinement history. In this work, we present a coarsening algorithm for adaptive red-green-blue meshes in two dimensions without explicit knowledge of the refinement history. To this end, we examine the local structure of these meshes, find an easy-to-verify criterion to adaptively coarsen red-green-blue meshes, and prove that this criterion generates meshes with the desired properties. We present a MATLAB implementation built on the red-green-blue refinement routine of the ameshref-package (Funken and Schmidt 2018, 2019).


19.
Delaunay refinement algorithms for triangular mesh generation
Delaunay refinement is a technique for generating unstructured meshes of triangles for use in interpolation, the finite element method, and the finite volume method. In theory and practice, meshes produced by Delaunay refinement satisfy guaranteed bounds on angles, edge lengths, the number of triangles, and the grading of triangles from small to large sizes. This article presents an intuitive framework for analyzing Delaunay refinement algorithms that unifies the pioneering mesh generation algorithms of L. Paul Chew and Jim Ruppert, improves the algorithms in several minor ways, and most importantly, helps to solve the difficult problem of meshing nonmanifold domains with small angles.

Although small angles inherent in the input geometry cannot be removed, one would like to triangulate a domain without creating any new small angles. Unfortunately, this problem is not always soluble. A compromise is necessary. A Delaunay refinement algorithm is presented that can create a mesh in which most angles are 30° or greater and no angle is smaller than arcsin[…], where φ ≤ 60° is the smallest angle separating two segments of the input domain. New angles smaller than 30° appear only near input angles smaller than 60°. In practice, the algorithm's performance is better than these bounds suggest.

Another new result is that Ruppert's analysis technique can be used to reanalyze one of Chew's algorithms. Chew proved that his algorithm produces no angle smaller than 30° (barring small input angles), but without any guarantees on grading or number of triangles. He conjectures that his algorithm offers such guarantees. His conjecture is conditionally confirmed here: if the angle bound is relaxed to less than 26.5°, Chew's algorithm produces meshes (of domains without small input angles) that are nicely graded and size-optimal.
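
A bare-bones sketch of the refinement loop common to Chew's and Ruppert's algorithms: insert the circumcenter of the worst skinny triangle and retriangulate. It recomputes the triangulation from scratch each round and ignores segment encroachment and domain boundaries entirely, so it is a toy illustration, not either author's algorithm; the 25° threshold is an arbitrary choice:

```python
import numpy as np
from scipy.spatial import Delaunay

def min_angle(p):
    """Smallest interior angle (degrees) of a triangle given as a 3x2 array."""
    a, b, c = (np.linalg.norm(p[1] - p[2]), np.linalg.norm(p[2] - p[0]),
               np.linalg.norm(p[0] - p[1]))
    cos_max = max((y*y + z*z - x*x) / (2*y*z)
                  for x, y, z in ((a, b, c), (b, c, a), (c, a, b)))
    return np.degrees(np.arccos(np.clip(cos_max, -1.0, 1.0)))

def circumcenter(p):
    (ax, ay), (bx, by), (cx, cy) = p
    d = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return np.array([ux, uy])

# Unit square with one badly placed interior point.
points = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.02]], float)
for _ in range(100):
    dt = Delaunay(points)
    skinny = [s for s in dt.simplices if min_angle(points[s]) < 25.0]
    if not skinny:
        break
    worst = min(skinny, key=lambda s: min_angle(points[s]))
    cc = circumcenter(points[worst])
    if dt.find_simplex(cc[None, :])[0] < 0:   # outside the hull: give up
        break
    points = np.vstack([points, cc])

dt = Delaunay(points)
print(len(points), "points; min angle %.1f deg" %
      min(min_angle(points[s]) for s in dt.simplices))
```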


20.