# Workshop on Mesh Generation

Mesh generation is a very active, interdisciplinary research field that has received important contributions from the computational geometry research community over the past 30 years. In particular, the computational geometry community is responsible for the large majority of the results concerning theoretical properties of planar triangle and quadrilateral meshes, as well as triangle surface meshes and volumetric tetrahedral and hexahedral meshes. These properties enabled us to understand what a "good quality" mesh is, and they also made possible the development of algorithms with provable guarantees on the quality of the output mesh. The Workshop on Mesh Generation (WMG) consists of several invited talks and short presentations by world-renowned authorities in the mesh generation field, as well as young scientists who have applied mesh generation methods and techniques to industrial problems.

## Organizer

Marcelo Siqueira (UFRN - Brazil)

## Invited Speakers

Session 1 *(Monday, June 17th: 5:00pm-6:30pm, CCET Auditorium)*

**Theoretical advances in hexahedral mesh generation**

Jeff Erickson (UIUC - USA)

**CGALmesh: a Generic Framework for Delaunay Mesh Generation**

Clément Jamin (Inria, Sophia-Antipolis, France)

Session 2 *(Tuesday, June 18th: 5:00pm-6:30pm, CCET Auditorium)*

**Interactive Rendering of Massive Geometric Models with Transparency Effects**

João Comba (UFRGS - Brazil)

**The GEM Data Structure for d-Dimensional Colored Triangulations**

Jorge Stolfi (UNICAMP - Brazil)

**Hierarchical Template-Based Hexahedral Mesh Generation**

Antônio Miranda (UnB - Brazil)

Session 3 *(Thursday, June 20th: 2:30pm-4:00pm, CCET Auditorium)*

**Delaunay Mesh Generation**

Tamal Dey (OSU - USA)

**Optimal Meshing**

Don Sheehy (INRIA - France)

Session 4 *(Thursday, June 20th: 4:30pm-6:00pm, CCET Auditorium)*

**Dynamically Adapted Stellar Meshes**

Luiz Velho (IMPA - Brazil)

**Lepp-Based Algorithms for Triangulation Refinement: Theoretical Issues and Parallelization**

Maria-Cecilia Rivara (UChile - Chile)

**Multi-Domain Restricted Triangulations Using Off-the-shelf Libraries**

Paulo Cavalcanti (UFRJ - Brazil)

## Abstracts

**Theoretical advances in hexahedral mesh generation** (Jeff Erickson)

Abstract: The goal of the hexahedral mesh generation problem is to decompose a complex geometric domain into distorted cubes that meet face-to-face. This talk will be a brief survey of recent theoretical results in hexahedral mesh generation and some related topics in combinatorial topology and geometry.

**CGALmesh: a Generic Framework for Delaunay Mesh Generation** (Clément Jamin)

Abstract: CGALmesh is the mesh generation software package of the Computational Geometry Algorithms Library (CGAL). It generates isotropic simplicial meshes -- surface triangular meshes or volume tetrahedral meshes -- from input surfaces, 3D domains as well as 3D multi-domains, with or without sharp features. The underlying meshing algorithm relies on restricted Delaunay triangulations to approximate domains and surfaces, and on Delaunay refinement to ensure both approximation accuracy and mesh quality. CGALmesh provides guarantees on approximation quality, as well as on the size and shape of the mesh elements. It provides four optional mesh optimization algorithms to further improve the mesh quality. A distinctive property of CGALmesh is its high flexibility with respect to the input domain representation.

**Interactive Rendering of Massive Geometric Models with Transparency Effects** (João Comba)

Abstract: In this talk I will survey our recent efforts on producing interactive renderings of massive geometric models including transparency effects. To properly handle transparent models, color compositing must be performed in sorted order with respect to the viewer. Our proposal lies in the class of algorithms called Order-Independent Transparency (OIT), which do not require preliminary object sorting, but instead defer sorting to image space using the graphics hardware. I will demonstrate two algorithms, presented at SIBGRAPI 2012 and ACM SIGGRAPH I3D 2013, that can handle a model with more than 10 million triangles and up to 588 transparency layers at interactive rates. This is joint work with Marilena Maule (UFRGS), Rafael Torchelsen (UFFS) and Rui Bastos (NVIDIA).
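The order dependence mentioned above comes from the non-commutative "over" compositing operator. The sketch below (a minimal illustration, not the authors' GPU algorithms) sorts fragments by depth before blending front-to-back, which is precisely the step that OIT methods defer to image space; all names and values are illustrative.

```python
# Sketch of the "over" compositing operator that order-independent
# transparency (OIT) methods must evaluate in depth-sorted order.
# Fragments are (color, alpha, depth) tuples; grayscale for brevity.

def composite(fragments):
    """Blend fragments front-to-back after sorting by depth."""
    ordered = sorted(fragments, key=lambda f: f[2])  # nearest first
    color, alpha = 0.0, 0.0  # accumulated color and coverage
    for c, a, _ in ordered:
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
    return color, alpha

# Two layers: a half-transparent white surface in front of an opaque
# gray one. Sorting makes the input order irrelevant.
front = (1.0, 0.5, 1.0)   # color=1.0, alpha=0.5, depth=1.0
back = (0.5, 1.0, 2.0)    # color=0.5, alpha=1.0, depth=2.0
print(composite([back, front]))  # -> (0.75, 1.0)
```

Without the sort, compositing `[back, front]` and `[front, back]` would give different results, which is why transparency traditionally forces expensive object-level sorting.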

**The GEM Data Structure for d-Dimensional Colored Triangulations** (Jorge Stolfi)

Abstract: The gem data structure is a simple and uniform representation for $d$-dimensional triangulations. In an arbitrary $d$-dimensional triangulation, there are $d!$ ways in which a specific facet of a simplex can be glued to a specific facet of another simplex. Therefore, in data structures for general $d$-dimensional triangulations, that information must be encoded using $\lceil \log_2(d!) \rceil$ bits for each adjacent pair of simplices; or, alternatively, it must be recovered on the fly by cumbersome vertex identity tests. The gem data structure does away with that information by assuming that the vertices are labeled with $d+1$ colors, which allows extremely simple algorithms for traversal and splicing in any dimension. Even though it applies to a very restricted subclass of all triangulations, the gem structure can be useful for geometric modeling, since there are efficient algorithms for subdividing an arbitrary triangulation to obtain a colored one, and for locally refining a colored triangulation.
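As a small illustration of the storage argument above, the sketch below computes the $\lceil \log_2(d!) \rceil$ bits a general representation needs per adjacency, and shows how vertex colors alone pin down a facet gluing; the color-to-vertex dictionaries are an ad hoc stand-in for a real gem structure.

```python
from math import factorial, log2, ceil

# In a general d-dimensional triangulation, gluing a facet of one
# simplex to a facet of another admits d! vertex correspondences,
# so storing the chosen one costs ceil(log2(d!)) bits per adjacency.
for d in (2, 3, 4, 10):
    print(d, ceil(log2(factorial(d))))

# With (d+1)-colored vertices, the correspondence is forced: matched
# vertices must have equal colors, so the pairing follows from colors.
facet_a = {"red": 0, "green": 3}   # color -> vertex id (illustrative)
facet_b = {"green": 7, "red": 5}
gluing = {facet_a[c]: facet_b[c] for c in facet_a}
print(gluing)  # -> {0: 5, 3: 7}
```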

**Hierarchical Template-Based Hexahedral Mesh Generation** (Antônio Miranda)

Abstract: This work describes a hexahedral mesh generation algorithm ideally suited for transition subdomain meshes in the context of any domain decomposition meshing strategy. The algorithm is based on an automatic hierarchical region decomposition in which, in the last level, it is possible to generate hexahedral elements with a conventional mapping strategy. In two dimensions, a subdomain is usually a triangle or a rectangle. In three dimensions, a subdomain is usually a box. Templates impose restrictions on meshes on boundary surfaces of a subdomain to be meshed. The proposed hierarchical template scheme eliminates many of these restrictions, creating more options in surface meshes as input boundary data. Other algorithms in the literature present similar characteristics. However, the implementation of the hierarchical decomposition and its templates presented here is quite simple compared to other approaches. Examples demonstrate that this simple idea may result in structured meshes.

**Delaunay Mesh Generation** (Tamal Dey)

Abstract: The technique of Delaunay refinement has been recognized as a versatile tool for generating Delaunay meshes for a variety of geometries. The sampling theory for surfaces coupled with the Delaunay refinement paradigm has led to effective algorithms for meshing smooth surfaces and volumes which have been collated in a recent book on Delaunay Mesh Generation. The first part of this talk focuses on these developments.

The Delaunay refinement paradigm, despite its usefulness, suffers from one deficiency that limits its application. It does not scale well with the mesh size. As the sample point set grows, the Delaunay triangulation starts stressing the available memory space which ultimately stalls any effective progress. To this end, we developed a localized version of Delaunay refinement which still retains theoretical guarantees. The second part of this talk focuses on this work.
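The shape guarantee in Delaunay refinement is commonly stated via the radius-edge ratio (circumradius over shortest edge): refinement inserts points until every element's ratio falls below a bound. A minimal 2D sketch of the measure, with illustrative names, not the actual code from the book or any particular package:

```python
from math import dist, sqrt

# Radius-edge ratio of a 2D triangle: circumradius divided by the
# shortest edge length. Delaunay refinement drives this below a bound.

def radius_edge_ratio(a, b, c):
    la, lb, lc = dist(b, c), dist(a, c), dist(a, b)
    area = abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2
    circumradius = la * lb * lc / (4 * area)  # standard R = abc / 4A
    return circumradius / min(la, lb, lc)

# An equilateral triangle achieves the optimum, 1/sqrt(3) ~ 0.577.
print(radius_edge_ratio((0, 0), (1, 0), (0.5, sqrt(3)/2)))
# A skinny triangle has a much larger ratio and would be refined.
print(radius_edge_ratio((0, 0), (1, 0), (0.5, 0.01)))
```

A refinement loop would insert the circumcenter of any triangle whose ratio exceeds the bound and retriangulate, which is exactly the step that stresses memory at scale and motivates the localized variant above.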

**Optimal Meshing** (Don Sheehy)

Abstract: The word optimal is used in different ways in mesh generation. It could mean that the output is, in some sense, "the best mesh" or that the algorithm is, by some measure, "the best algorithm". One might hope that the best algorithm also produces the best mesh, but maybe some tradeoffs are necessary. In this talk, I will survey several different notions of optimality in mesh generation and explore the different tradeoffs between them. The bias will be towards Delaunay/Voronoi methods.

**Dynamically Adapted Stellar Meshes** (Luiz Velho)

Abstract: This talk presents a framework based on stellar operators that maintains a conforming triangulation of deformable surfaces. The mesh representation uses a compact topological data structure and allows computationally efficient mechanisms. Furthermore, it supports a dynamic adaptation process for controlling the mesh characteristics under very general rules.
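As an illustration of the kind of local operator involved (not the talk's actual data structure, which is compact and topological), here is a deliberately naive sketch of one stellar operator: the face split, which inserts a vertex at a triangle's barycenter and replaces the triangle by three.

```python
# Naive mesh: a vertex coordinate list and a list of index triples.
# The face split is a stellar operator: it stars a new vertex into
# a face, keeping the triangulation conforming.

def face_split(vertices, triangles, t):
    """Split triangle t at its barycenter; return the new vertex id."""
    a, b, c = triangles[t]
    ax, ay = vertices[a]; bx, by = vertices[b]; cx, cy = vertices[c]
    vertices.append(((ax + bx + cx) / 3, (ay + by + cy) / 3))
    v = len(vertices) - 1
    # Replace one triangle by three sharing the new vertex.
    triangles[t:t+1] = [(a, b, v), (b, c, v), (c, a, v)]
    return v

verts = [(0, 0), (1, 0), (0, 1)]
tris = [(0, 1, 2)]
face_split(verts, tris, 0)
print(len(tris), verts[-1])  # 3 triangles, new vertex at the barycenter
```

Each stellar operator has an inverse (here, a weld that removes the vertex and restores the single face), which is what makes such frameworks suitable for dynamic adaptation.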

**Lepp-Based Algorithms for Triangulation Refinement: Theoretical Issues and Parallelization** (Maria-Cecilia Rivara)

Abstract: Longest edge refinement algorithms were designed to deal with the iterative refinement of triangulations for adaptive finite element applications. Here a set $S$ of triangles in a quality triangulation $T$ needs to be refined to equilibrate the finite element error, and so on iteratively. Longest edge algorithms refine the target triangles and a set of neighbor triangles to produce a valid mesh, bisecting each triangle by the segment joining the midpoint of its longest edge and the opposite vertex. The mathematical properties of the longest edge bisection of triangles guarantee that the iterative use of the refinement algorithm produces sequences of nested triangulations of quality analogous to the initial triangulation.
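The basic operation can be sketched as follows: longest-edge bisection of a single triangle (the full algorithm also refines neighbors to keep the mesh conforming). Coordinates and names are illustrative.

```python
from math import dist

# Longest-edge bisection: split a triangle by the segment joining the
# midpoint of its longest edge to the opposite vertex. Triangles are
# tuples of 2D points; ties in edge length are not handled here.

def longest_edge_bisect(tri):
    a, b, c = tri
    # Reorder so that the first two vertices span the longest edge.
    edges = sorted([(a, b, c), (b, c, a), (a, c, b)],
                   key=lambda e: dist(e[0], e[1]), reverse=True)
    p, q, opp = edges[0]
    m = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)  # midpoint of longest edge
    return (p, m, opp), (m, q, opp)

t1, t2 = longest_edge_bisect(((0, 0), (4, 0), (1, 2)))
print(t1, t2)  # both children share the midpoint (2.0, 0.0)
```

The quality guarantee mentioned above rests on the fact that iterating this bisection produces only finitely many triangle shapes, none much worse than the original.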

The Lepp-bisection algorithm is an efficient reformulation of the longest edge algorithm [1]. For each target triangle $t$ to be refined, a longest edge propagating path ($Lepp(t)$) is followed until a local largest edge $E$ in the mesh (a terminal edge), shared by two terminal triangles (one for a boundary terminal edge), is found. The terminal triangles are then longest-edge bisected, which is a very local refinement operation. The procedure is repeated until the target triangle is refined.
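The path-following idea can be sketched as follows, assuming a toy triangle-soup mesh with no tied edge lengths; this is an illustration of $Lepp(t)$, not the papers' implementation.

```python
from math import dist

# Lepp(t): from triangle t, repeatedly step to the neighbor across the
# current triangle's longest edge, until that edge is also the
# neighbor's longest edge (a terminal edge) or lies on the boundary.
# Triangles are tuples of 2D points; adjacency is found by brute force.

def longest_edge(tri):
    a, b, c = tri
    return max([frozenset((a, b)), frozenset((b, c)), frozenset((a, c))],
               key=lambda e: dist(*e))

def lepp(triangles, t):
    """Return the list of triangle indices along Lepp(t)."""
    path = [t]
    while True:
        e = longest_edge(triangles[path[-1]])
        nbr = next((i for i, tri in enumerate(triangles)
                    if i != path[-1] and e <= frozenset(tri)), None)
        if nbr is None or longest_edge(triangles[nbr]) == e:
            if nbr is not None:
                path.append(nbr)  # second terminal triangle
            return path  # terminal edge reached
        path.append(nbr)

tris = [((0, 0), (2, 0), (0, 1)),          # 0: longest edge shared with 1
        ((2, 0), (0, 1), (2.5, 2)),        # 1: its longest edge leads to 2
        ((0, 1), (2.5, 2), (1, 2.5))]      # 2: terminal triangle
print(lepp(tris, 0))  # -> [0, 1, 2]
```

Refinement then bisects only the two terminal triangles and repeats, which is what makes each step of Lepp-bisection a local operation.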

The Lepp-Delaunay algorithm extends these ideas to deal with the quality triangulation of PSLG geometries (general polygons) [2]. It works over a constrained Delaunay triangulation of the initial data and processes the bad-quality triangles in any order. For each bad-quality triangle $t$, $Lepp(t)$ and the terminal edge $E$ are found; then the midpoint of $E$ is Delaunay inserted in the mesh. The procedure is repeated until the target triangle is destroyed.

We discuss recent results showing that optimal-size triangulations are obtained with the Lepp-bisection algorithm [3,4]. To this end we use a revisited study on the iterative longest edge bisection of triangles [5], and new mathematical properties of the evolution of the percentage of terminal edges over the refined triangulations. At present we are also working to prove that Lepp-Delaunay is a provably good algorithm that produces size-optimal meshes [6]. We also discuss the parallelization of the Lepp-bisection and Lepp-Delaunay algorithms, which takes advantage of these properties [7]. We present empirical results which illustrate the algorithms' behavior and the practical evolution of the percentage of terminal edges throughout the refined triangulations.

1. M.C. Rivara, Lepp-bisection algorithms, applications and mathematical properties, Applied Numerical Mathematics 59 (2009), 2218-2235.
2. M.C. Rivara, C. Calderón, Lepp terminal centroid method for quality triangulation, Computer-Aided Design 42 (2010), 58-66.
3. C. Bedregal, M.C. Rivara, A study on size-optimal longest edge refinement algorithms, 21st International Meshing Roundtable, California, October 7-10, 2012.
4. C. Bedregal, M.C. Rivara, Longest-edge algorithms for size-optimal refinement of triangulations, unpublished manuscript, 2013.
5. C. Gutierrez, F. Gutierrez, M.C. Rivara, Complexity of the bisection method, Theoretical Computer Science 382 (2007), 131-138.
6. C. Bedregal, M.C. Rivara, Longest edge algorithms for quality Delaunay refinement, Computational Geometry: Young Researchers Forum (CG:YRF), Rio de Janeiro, Brazil, June 17-20, 2013.
7. M.C. Rivara, P. Rodriguez, R. Montenegro, G. Jorquera, Multithread parallelization of Lepp-bisection algorithms, Applied Numerical Mathematics 62 (2012), 473-488.

**Multi-Domain Restricted Triangulations Using Off-the-shelf Libraries** (Paulo Cavalcanti)

Abstract: The problem of building unstructured 3D meshes for use in numerical simulations has been intensively studied in the last three decades. Several aspects of the process have received attention, such as

- how to subdivide domains into collections of tetrahedra while avoiding degenerate elements,
- how to use finer spatial resolutions in order to favour certain regions of the domains, and
- how to maintain the integrity of the mesh as parts of the domains change in time.
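On the first point, avoiding degenerate elements, one common diagnostic is a scale-invariant quality measure comparing a tetrahedron's volume to the cube of its RMS edge length; slivers score near zero. A sketch of one such measure (several variants are used in practice; this is not any particular library's definition):

```python
from itertools import combinations
from math import dist

# Scale-invariant tetrahedron quality: volume / (RMS edge length)^3.
# Well-shaped elements keep this bounded away from zero; degenerate
# ("sliver") elements score near zero regardless of size.

def tet_quality(p0, p1, p2, p3):
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    w = [p3[i] - p0[i] for i in range(3)]
    det = (u[0] * (v[1]*w[2] - v[2]*w[1])
           - u[1] * (v[0]*w[2] - v[2]*w[0])
           + u[2] * (v[0]*w[1] - v[1]*w[0]))
    volume = abs(det) / 6  # tetrahedron volume from the triple product
    rms = (sum(dist(a, b)**2 for a, b in
               combinations((p0, p1, p2, p3), 2)) / 6) ** 0.5
    return volume / rms**3

# A regular tetrahedron (edge 1) scores well; a flat one near zero.
good = tet_quality((0, 0, 0), (1, 0, 0), (0.5, 3**0.5/2, 0),
                   (0.5, 3**0.5/6, (2/3)**0.5))
flat = tet_quality((0, 0, 0), (1, 0, 0), (0, 1, 0), (0.5, 0.5, 1e-6))
print(good, flat)
```

A mesh generator or mesh-maintenance routine can reject or repair any element whose score falls below a chosen threshold.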