GroundGrid: LiDAR Point Cloud Ground Segmentation and Terrain Estimation (2024)

©2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

The final published version is available on IEEE Xplore: https://ieeexplore.ieee.org/document/10319084
DOI: 10.1109/LRA.2023.3333233

Nicolai Steinke, Daniel Goehring, and Raúl Rojas

Manuscript received: June 15, 2023; Revised: October 4, 2023; Accepted: October 28, 2023. This paper was recommended for publication by Editor Pauline Pounds upon evaluation of the Associate Editor and Reviewers' comments. This work was supported by the German Federal Ministry for Digital and Transport under grant number 45AVF3001F.

The authors are with the Dahlem Center for Machine Learning and Robotics (DCMLR), Department of Mathematics and Computer Science, Freie Universität Berlin, Germany, {nicolai.steinke | daniel.goehring | raul.rojas}@fu-berlin.de

Digital Object Identifier (DOI): see top of this page.

Abstract

Precise point cloud ground segmentation is a crucial prerequisite for virtually all LiDAR-based perception tasks in autonomous vehicles. In particular, the clustering and extraction of objects from a point cloud usually relies on the accurate removal of ground points. Correct estimation of the surrounding terrain is important for assessing surface drivability, for path planning, and for obstacle prediction. In this article, we propose GroundGrid, a system that relies on 2D elevation maps to solve the terrain estimation and point cloud ground segmentation problems. We evaluate the ground segmentation and terrain estimation performance of GroundGrid and compare it to other state-of-the-art methods using the SemanticKITTI dataset and a novel evaluation method relying on airborne LiDAR scanning. The results show that GroundGrid outperforms other state-of-the-art systems with an average IoU of 94.78% while maintaining a high run-time performance of 171 Hz. The source code is available at https://github.com/dcmlr/groundgrid

Index Terms:

Range Sensing; Mapping; Field Robots

I INTRODUCTION

LiDAR sensors are widely used in the field of autonomous vehicles. Many applications for LiDAR sensors in this field require the removal of ground points. This applies especially to object detection and classification algorithms, where ground segmentation is often the first processing step preceding the higher perception functions[1]. With technological advances, the point density of the sensors rises continuously, which makes scalable solutions necessary. Terrain estimation, a problem closely related to ground segmentation, is the task of estimating the ground topology around the vehicle. Knowledge about the surrounding ground topology can be used to segment point clouds into ground and non-ground, infer drivable terrain, improve object movement prediction, and filter outlier points of the LiDAR sensor. In this article, we present GroundGrid, a LiDAR ground segmentation and terrain estimation system that outperforms other state-of-the-art methods while maintaining online performance. The method is sensor independent, deterministic, and easy to understand, since it relies only on the vertical point height variance and outlier detection to detect the ground surface. We evaluate the ground segmentation performance on the well-known SemanticKITTI dataset[2] and compare it to other state-of-the-art methods. Additionally, we evaluate the terrain estimation performance using georeferenced Airborne LiDAR Scanning (ALS) data as ground truth and compare the results with another method.

II RELATED WORK

Since the ground segmentation of point cloud data is a fundamental task for LiDAR perception systems, especially for those used on autonomous robots, research on this topic has been ongoing for many years[3]. The biggest challenges for accurate ground segmentation methods are the sparsity of the generated data, the fact that the point clouds are generally not spatially structured, and tight performance constraints to run online on the vehicle's on-board systems. To overcome these challenges, many approaches rely on a projection of the LiDAR point cloud to reduce its dimensionality. Often the data is projected as a bird's eye view image onto a plane. Rasterizing this plane into a 2D grid structure and assigning the point height averages as cell values yields an elevation map. Algorithms relying on elevation maps were successfully used by many teams of the DARPA Urban Challenge in 2007[4][5][6]. Elevation maps offer significant performance advantages because computations only have to be carried out for each cell of the map and not for each point in the point cloud. This fixed structure also enables the use of image processing algorithms and machine learning methods based on convolutional neural networks[7][8]. Another popular way of projecting the data is the range image, which corresponds to a cylindrical projection for rotating laser scanners[9][10]. Using this image representation allows classical image processing algorithms to be applied, e. g., 2D line regression algorithms[9] or methods relying on 2D image convolutions[10]. One disadvantage of these projection methods is the possible loss of information due to quantization errors[11]. Other works exploit the decline of the point density with the distance to the sensor. Himmelsbach et al.[12] discretized the point cloud angle-wise so that each bin covers more area far away from the car than close to it. Others used polar elevation maps, which also divide the space by angle so that cells cover more area the farther they are from the sensor[13][14]. In this way, quantization errors can be minimized and the possible information loss reduced, but this comes at the expense of a higher error in distant cells, which can also contain important information.

Concerning the ground surface estimation, many methods rely on the assumption that the ground surface is relatively flat and continuous. Especially methods that use linear regression to fit lines or planes to the point cloud need these assumptions to hold[12][15]. In urban scenarios, the ground is often flat but not continuous, since the ground level can change abruptly at road borders and pedestrian areas. In scenarios outside of the urban structure, the ground is often continuous but not flat, since there are fewer human-made structures. In these cases, a single plane cannot accurately model the ground surface. Lim et al.[14] and the follow-up publication by Lee et al.[16] compensated for these problems by fitting multiple smaller planes. Himmelsbach et al.[12] and Cheng et al.[13] split the area around the sensor into angular segments. All of these methods have in common that the fitted planes grow with the distance to the sensor. Since most sensors record fewer points per area at larger distances, the error with respect to the recorded points remains constant, but the error with respect to the true ground surface rises with the size of the planes. Since these algorithms are mostly evaluated on a per-point basis, the larger ground approximation error at distance does not impact the results of these evaluations.
Recently, methods utilizing artificial neural networks have received more attention from researchers: Paigwar et al.[8] presented GndNet, a convolutional neural network that is trained on elevation maps and infers the ground surface in order to segment point clouds. He et al.[7] also split the area into segments of varying size, similarly to Lim et al.[14] and Lee et al.[16]. These segments are processed with a PointNet[17] feature encoding block and subsequently fed into a classification module. With the classified sectors, the point cloud is segmented.

[Fig. 1: Outlier points (cyan) lying approx. 1.5 m below the true ground surface (red) due to laser beams reflected off a car body.]

A disadvantage of artificial neural networks is the need for large amounts of labeled training data, which often has to be generated manually. Paigwar et al.[8] and He et al.[7] trained their networks on the SemanticKITTI dataset, which consists of 11 publicly available manually labeled point cloud sequences. Since all the data was collected using the same sensor platform in similar locations around Karlsruhe, Germany, the dataset might be biased and the generalization capability of the resulting models hampered.
Outlier removal in point clouds is rarely mentioned explicitly, even though outliers are a common phenomenon, especially in urban scenarios where car chassis often reflect laser beams, leading to incorrect measurements. Sequence 00 of the SemanticKITTI dataset is an example of an outlier-prone scenario. Fig. 1 visualizes common outlier points. The outlier points (cyan) are well below (approx. 1.5 m) the true ground surface (red points), since two lasers have been reflected from the lower car body onto the ground and back to the LiDAR sensor. The sensor uses only the time of flight to determine the traveled distance of the light beam and does not seem to be able to detect the reflections of the beams. Hence, the points end up significantly lower than their true positions. Spatial errors of this magnitude as well as the number of erroneous points pose a major challenge for ground estimation and segmentation systems. Lee et al.[16] used two manually tuned thresholds to identify this kind of outlier point, but these must be tuned for each sensor and mounting position individually, and therefore this method does not provide a general solution to the problem.

III GROUND SEGMENTATION AND TERRAIN ESTIMATION

In this section, we describe GroundGrid in detail. Fig. 2 shows the general structure of the approach. The input and output point clouds are depicted in red and green, respectively, and the intermediate results are shown in blue. The algorithm reuses the elevation maps of previous iterations; if no previous result is available, the elevation map is initialized with the vehicle's elevation. First, outlier filtering is performed, followed by the rasterization of the point cloud into a grid map. Subsequently, the grid map cells containing only ground points are classified and the ground level is calculated. Then the elevation map is generated and missing data is interpolated. Finally, the point cloud is segmented into ground and non-ground points using the estimated elevation map.

[Fig. 2: Overview of the GroundGrid processing pipeline.]

III-A Outlier Filtering

Reflected laser beams can yield points that lie well below the true ground level (see Fig.1). These outliers lead to problems for the ground-level estimation. To remove outlier points, the intersection between the line connecting the sensor position with the point in question and the currently known elevation map is calculated. If the intersection shows that there is no line of sight between the sensor and the point, the point is classified as an outlier.
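
As an illustration of this test, the following sketch samples the ray between the sensor and the point against a 2D elevation grid. The dense NumPy array, the world-to-grid index mapping, and the function and parameter names (including the occlusion tolerance) are assumptions made for this example and not the authors' ROS/Grid Map implementation.

```python
import numpy as np

def is_outlier(point, sensor, elevation, resolution, origin, tolerance=0.1):
    """Return True if the straight line from the sensor to the point passes
    below the currently known terrain, i.e. there is no line of sight."""
    p, s = np.asarray(point, float), np.asarray(sensor, float)
    planar_dist = np.linalg.norm(p[:2] - s[:2])
    n_samples = max(int(planar_dist / resolution), 1)
    for t in np.linspace(0.0, 1.0, n_samples, endpoint=False)[1:]:
        x, y, z_line = s + t * (p - s)               # sample position on the ray
        col = int((x - origin[0]) / resolution)      # hypothetical world-to-grid mapping
        row = int((y - origin[1]) / resolution)
        if not (0 <= row < elevation.shape[0] and 0 <= col < elevation.shape[1]):
            continue
        ground = elevation[row, col]
        # occluded by more than the tolerance -> the point cannot be seen directly
        if np.isfinite(ground) and z_line < ground - tolerance:
            return True
    return False
```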

III-B Point Cloud Rasterization

The remaining point cloud is rasterized into the grid map. During the rasterization process, several grid map layers are calculated: first, the variance layer, containing the variance of the points' z-coordinates (heights) for each cell. The other layers are the minimum, maximum, and average z-coordinate as well as the point count for each cell. The variance is calculated using Welford's online algorithm[18] to avoid a second pass over the data.
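
A minimal single-pass sketch of this rasterization step, assuming the cloud is given as (x, y, z) tuples and the cell statistics are kept in a plain dictionary rather than actual grid map layers; names are illustrative.

```python
import numpy as np
from collections import defaultdict

def rasterize(points, resolution, origin):
    """One pass over the cloud accumulating per-cell count, mean, min, max and
    variance of the point heights via Welford's online update."""
    cells = defaultdict(lambda: {"n": 0, "mean": 0.0, "m2": 0.0,
                                 "min": np.inf, "max": -np.inf})
    for x, y, z in points:
        key = (int((x - origin[0]) / resolution),
               int((y - origin[1]) / resolution))
        c = cells[key]
        c["n"] += 1
        delta = z - c["mean"]
        c["mean"] += delta / c["n"]            # running mean of the heights
        c["m2"] += delta * (z - c["mean"])     # running sum of squared deviations
        c["min"] = min(c["min"], z)
        c["max"] = max(c["max"], z)
    for c in cells.values():
        c["var"] = c["m2"] / c["n"] if c["n"] > 1 else 0.0
    return cells
```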

III-C Classification of Ground Cells

Cells containing exclusively ground points can be identified using the variance information calculated in the previous step. The assumption is that traversable ground is flatter than obstacles and thus has a lower point height variance. In order to accommodate the sparsity of the point cloud data, the ground cell classification takes a cell's neighborhood into account. It is performed for patches of 9 ($3\times 3$) to 25 ($5\times 5$) cells depending on the distance to the sensor. If a cell does not contain enough points to calculate the variance, the average variance value of the patch is used. A cell in the center of a patch is classified as ground if the variance of the cell is below the variance threshold $t_v$. This threshold is scaled with the distance to the sensor to take the reduced accuracy of distant measurements into account:

$t_v = d_{sf} \cdot d(p, o)$  (1)

where the distance scaling factor $d_{sf}$ is multiplied by the Euclidean planar distance $d$ between the point $p$ and the sensor origin $o$. The parameter $d_{sf}$ scales the distance to a variance threshold and is the same for all cells. For a cell to be classified as ground, it must also contain a sufficient number of points. Assuming the absence of obstacles, the LiDAR sensor used in the SemanticKITTI dataset produces a circular pattern on the ground for each laser. We approximate the expected point count per cell by calculating the angular distance covered by each cell on such a circle and dividing it by the sensor's angular point distance:

$n = \frac{\tan^{-1}(\frac{R}{d})}{d_{pv}}$  (2)

where $R$ is the cell resolution, $d$ is the distance to the LiDAR sensor, and $d_{pv}$ is the angular distance between two points of this sensor. Cells that contain less than a predefined fraction (we used 0.25) of the expected points are skipped.
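
A vectorized sketch of this ground cell test following Eqs. (1) and (2), assuming per-cell arrays of height variance, point count, and planar distance to the sensor; the patch-wise neighborhood handling described above is omitted for brevity, and the default values follow Sec. IV-A.

```python
import numpy as np

def expected_points(distance, resolution=0.33, angular_dist_deg=0.4):
    """Eq. (2): number of points one laser circle is expected to leave in a cell."""
    return np.degrees(np.arctan(resolution / distance)) / angular_dist_deg

def classify_ground_cells(variance, counts, distances,
                          d_sf=1e-5, t_minv=5e-5, min_frac=0.25):
    """Per-cell ground test: variance below the distance-scaled threshold of
    Eq. (1) and enough points relative to the expectation of Eq. (2)."""
    t_v = np.maximum(d_sf * distances, t_minv)        # Eq. (1), with a lower bound
    enough = counts >= min_frac * expected_points(distances)
    return (variance < t_v) & enough
```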

III-D Ground Elevation Calculation

In this step, the ground elevation and a corresponding confidence value are calculated. It is again performed for patches of 9 to 25 cells (see Sec. III-C). The ground elevation $h_{i,j}$ for the cell $i,j$ is calculated as follows:

$h_{i,j} = \frac{\sum (P_{i,j} \circ M_{i,j})}{\sum P_{i,j}}$  (3)

where $P_{i,j}$ denotes the point count matrix at cell $i,j$ and $M_{i,j}$ the matrix containing the minimum height values. The $\circ$ operator denotes element-wise multiplication. Using point-count-weighted minimum height values as the ground elevation minimizes the risk of a too-high ground estimate, which would result in under-segmentation. A ground cell's elevation estimation confidence $q_{i,j}$ at index $i,j$ is defined as

$q_{i,j} = \frac{\sum P_{i,j}}{s}$  (4)

where $P_{i,j}$ is the point count matrix and $s$ the confidence point count scaling parameter. The resulting $q_{i,j}$ is clamped to $[0,1]$. Considering only ground cells, the elevation and confidence values are integrated with the existing values from previous observations as follows: the new elevation estimate $g_{i,j}$ is given by the confidence-weighted sum of the current and previous updates:

$g_{i,j} \leftarrow \frac{q_{i,j} \cdot h_{i,j} + c_{i,j} \cdot g_{i,j}}{q_{i,j} + c_{i,j}}$  (5)

where $c_{i,j}$ and $g_{i,j}$ are values from previous updates, while $h_{i,j}$ and $q_{i,j}$ are calculated according to (3) and (4). The confidence is then updated as

$c_{i,j} \leftarrow \frac{1}{2}\left(\frac{q_{i,j}}{2} + c_{i,j}\right)$  (6)

where $c_{i,j}$ denotes the existing confidence from previous steps for the cell $i,j$ and $q_{i,j}$ the confidence value for the current update. To give repeated ground detections a higher weight than a single detection with a large number of points, the weight of $q_{i,j}$ is halved in this equation. We also evaluate (3) for non-ground cells (variance $> t_v$). If $h_{i,j}$ is below the known ground level ($h_{i,j} < g_{i,j}$), we update it, $g_{i,j} \leftarrow h_{i,j}$, and increase the confidence $c_{i,j}$ by a small amount, $c_{i,j} \leftarrow \min(c_{i,j} + 0.1, 0.5)$, to reflect the addition of new information. In this way, under-segmentation is avoided for cells covered by obstacles.
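
The per-cell update of Eqs. (3)-(6) can be sketched as follows for a single ground cell, assuming the patch point-count and minimum-height matrices are available as NumPy arrays; the handling of non-ground cells described above is left out, and the function name is hypothetical.

```python
import numpy as np

def update_ground_cell(P, M, g_prev, c_prev, s=20.0):
    """Eqs. (3)-(6) for one ground cell: P and M are the patch point-count and
    minimum-height matrices, g_prev/c_prev the previous elevation and confidence."""
    h = np.sum(P * M) / np.sum(P)                            # Eq. (3): count-weighted minimum height
    q = np.clip(np.sum(P) / s, 0.0, 1.0)                     # Eq. (4): confidence of this observation
    g = (q * h + c_prev * g_prev) / max(q + c_prev, 1e-9)    # Eq. (5): confidence-weighted fusion
    c = 0.5 * (q / 2.0 + c_prev)                             # Eq. (6): new observation enters with half weight
    return g, c
```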

III-E Terrain Interpolation

The ground elevation is interpolated for cells in which no ground points were detected. The interpolation is performed spirally outwards from the sensor origin. The elevation of each cell is set to the confidence-weighted sum of the cell's estimated ground elevation and the average of its $3\times 3$ neighborhood:

$g_{i,j} \leftarrow (1 - c_{i,j}) \cdot \frac{\sum (C_{i,j} \circ G_{i,j})}{\sum C_{i,j}} + c_{i,j} \cdot g_{i,j}$  (7)

where $C_{i,j}$ is the $3\times 3$ confidence matrix and $G_{i,j}$ the $3\times 3$ ground elevation matrix centered at $i,j$. The $\circ$ symbol again denotes element-wise multiplication. The confidence values are updated as follows:

$c_{i,j} \leftarrow c_{i,j} - \frac{c_{i,j}}{\theta}$  (8)

where $c_{i,j}$ is the confidence value at cell index $i,j$. The parameter $\theta$ is a constant decay factor that reduces the ground confidence over time.
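
A sketch of the interpolation update of Eqs. (7) and (8) for one cell, assuming NumPy grids for elevation and confidence; the spiral processing order and border handling are omitted, and the function name is illustrative.

```python
import numpy as np

def interpolate_cell(G, C, i, j, theta=5.0):
    """Eqs. (7)-(8) for one cell without ground points; G and C are the
    elevation and confidence grids and are modified in place."""
    nG = G[i - 1:i + 2, j - 1:j + 2]               # 3x3 elevation neighborhood
    nC = C[i - 1:i + 2, j - 1:j + 2]               # 3x3 confidence neighborhood
    neighbor_avg = np.sum(nC * nG) / max(np.sum(nC), 1e-9)
    G[i, j] = (1.0 - C[i, j]) * neighbor_avg + C[i, j] * G[i, j]   # Eq. (7)
    C[i, j] -= C[i, j] / theta                                     # Eq. (8): confidence decay
```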

III-F Point Cloud Segmentation

With the terrain elevation map, the point cloud can be segmented. For the segmentation, we compare each point's height above the corresponding cell's elevation with one of two threshold parameters, $h_g$ and $h_o$: $h_g$ is used in areas classified as ground, while $h_o$ is used in areas classified as obstacles. We set $h_o < h_g$ because the aim is to segment all obstacle points correctly while avoiding over-segmentation in sloped ground areas.
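
A compact sketch of this thresholding, where `terrain_height_at` and `is_ground_cell_at` are hypothetical lookups into the estimated elevation map and the ground/obstacle classification:

```python
def segment_points(points, terrain_height_at, is_ground_cell_at, h_g=0.3, h_o=0.1):
    """Label each (x, y, z) point by its height above the estimated terrain,
    using the larger threshold h_g in ground cells and h_o in obstacle cells."""
    labels = []
    for x, y, z in points:
        threshold = h_g if is_ground_cell_at(x, y) else h_o
        labels.append("ground" if z - terrain_height_at(x, y) <= threshold else "non-ground")
    return labels
```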

IV EXPERIMENTS AND EVALUATION

[Fig. 3: Qualitative ground segmentation results on the SemanticKITTI dataset for three scenarios a), b), and c). Rows, from top to bottom: ground truth[2], Patchwork++[16], GndNet[8], JPC[10], and ours.]

In this section, we present a qualitative and quantitative evaluation of the ground segmentation and terrain estimation results of the proposed system. We compare the ground segmentation results on the SemanticKITTI dataset to the state-of-the-art methods Patchwork++[16], Jump Point Convolution (JPC)[10], and GndNet[8], leveraging open-source implementations (https://github.com/url-kaist/patchwork-plusplus, https://github.com/anshulpaigwar/GndNet, https://github.com/wangx1996/Fast-Ground-Segmentation-Based-on-JPC). We evaluate the terrain estimation capabilities using self-acquired data and a public georeferenced Airborne LiDAR Scanning (ALS) dataset[20].

IV-A Parameters

TABLE I: Ground segmentation results on the SemanticKITTI dataset (values in %).

Sequence      00     01     02     03     04     05     06     07     08     09     10     Avg.
Precision
Patchwork++  94.99  98.27  95.96  96.81  98.18  92.65  97.86  93.29  96.97  96.06  92.81  95.80
GndNet       92.40  96.54  93.74  95.60  97.30  89.58  96.15  90.09  95.09  93.81  88.34  93.51
JPC          96.78  97.97  97.50  98.09  99.01  94.03  97.96  95.65  97.97  97.64  95.27  97.08
Ours         96.05  98.01  97.36  97.96  99.08  95.19  97.82  95.31  97.50  97.25  95.38  96.99
Recall
Patchwork++  98.67  96.52  97.20  98.17  97.21  98.13  97.39  98.42  97.41  96.45  95.93  97.41
GndNet       99.50  96.91  96.94  96.68  99.06  98.69  99.00  99.44  98.74  96.14  93.60  97.70
JPC          97.20  95.46  93.72  94.86  96.91  95.64  96.23  96.53  95.13  92.66  88.47  94.97
Ours         98.70  96.17  97.71  97.95  97.85  98.13  98.38  98.72  97.79  96.91  95.90  97.65
F1
Patchwork++  96.80  97.39  96.58  97.49  97.69  95.31  97.63  95.79  97.19  96.25  94.35  96.59
GndNet       95.82  96.72  95.31  96.14  98.17  93.91  97.55  94.53  96.88  94.96  90.89  95.53
JPC          96.99  96.70  95.57  96.45  97.95  94.83  97.09  96.09  96.53  95.09  91.74  95.91
Ours         97.35  97.08  97.54  97.96  98.46  96.64  98.10  96.99  97.64  97.08  95.64  97.32
Accuracy
Patchwork++  96.64  95.96  95.08  96.08  96.39  94.79  96.63  95.88  96.37  94.90  93.75  95.68
GndNet       95.53  94.88  93.20  93.99  97.10  93.10  96.46  94.52  95.91  93.08  89.81  94.33
JPC          96.89  94.93  93.80  94.59  96.81  94.37  95.89  96.26  95.60  93.50  91.35  94.91
Ours         97.24  95.50  96.48  96.84  97.60  96.32  97.29  97.08  96.97  96.05  95.25  96.60
IoU
Patchwork++  93.79  94.90  93.38  95.09  95.49  91.04  95.36  91.91  94.53  92.78  89.30  93.41
GndNet       91.97  93.65  91.04  92.55  96.41  88.52  95.22  89.63  93.94  90.41  83.30  91.51
JPC          94.15  93.61  91.52  93.14  95.98  90.16  94.34  92.47  93.29  90.64  84.75  92.19
Ours         94.84  94.33  95.19  96.00  96.97  93.49  96.27  94.15  95.40  94.33  91.64  94.78

In all experiments, the parameters of our method were set as follows: We chose a grid map resolution of $R = 0.33\,m$, which offers a reasonable compromise between accuracy and computational performance. The variance threshold distance scaling factor was set to $d_{sf} = 10^{-5}$ (Eq. (1)) and the minimum variance threshold to $t_{minv} = 5 \cdot d_{sf}$. The confidence decay was $\theta = 5$ (Eq. (8)). The minimum point count relative to the expected point count per cell (Eq. (2)) to perform the ground detection was $g_{minp} = 0.25$; cells with fewer points are skipped. The minimal ground confidence for the outlier detection (sum of a $5\times 5$ matrix) was $o_{minc} = 1.25$, to avoid using interpolated cells for the outlier detection. The two thresholds for the point cloud segmentation were set to $h_g = 0.3\,m$ for cells classified as ground and $h_o = 0.1\,m$ for obstacle cells (see Sec. III-F). The confidence point count scaling factor was $s = 20$, so that a ground cell must contain at least 20 points to reach a perfect confidence rating of 1 (see Eq. (4)). The outlier tolerance was set to $o_t = 0.1\,m$, so points that were occluded by less than this height are still considered valid. The distance at which the patch size changes from $3\times 3$ to $5\times 5$ was $d_{ps} = 20\,m$. The expected angular point distance was $d_{pv} = 0.4°$ (see Eq. (2)), which is based on the pessimistic assumption that at least half of the beams of the HDL-64e produce a return at 10 Hz. The point count threshold above which a cell's own variance is used instead of the patch's was $v_{np} = 10$. These values are collected in the illustrative snippet below. Our implementation uses the Robot Operating System (ROS) and the Grid Map library[19]. We evaluated GroundGrid's ground segmentation performance on the SemanticKITTI dataset. We define the labels lane-marking, other-ground, parking, road, sidewalk, and terrain as ground. The labels unlabeled and outlier are ignored in the error metrics since they do not belong to either class. Points labeled vegetation are also ignored in the error metrics: due to the way the vegetation class is labeled in the SemanticKITTI dataset, it is impossible to say whether the majority of points belonging to this class lie on the ground surface or not. Hence, we decided to follow the example of Lee et al.[16] and exclude this label from the error metrics (the points are still included in the point clouds). All other labels are evaluated as non-ground.
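
For illustration, the parameter values listed above can be gathered in a single configuration structure; the key names are hypothetical and only mirror the symbols used in the text.

```python
# Illustrative collection of the parameters from Sec. IV-A (key names are hypothetical).
PARAMS = {
    "resolution": 0.33,  # grid map cell size R in m
    "d_sf": 1e-5,        # variance threshold distance scaling factor (Eq. (1))
    "t_minv": 5e-5,      # minimum variance threshold (5 * d_sf)
    "theta": 5.0,        # confidence decay factor (Eq. (8))
    "g_minp": 0.25,      # minimum fraction of the expected point count (Eq. (2))
    "o_minc": 1.25,      # minimum summed 5x5 confidence for the outlier check
    "h_g": 0.3,          # segmentation threshold in ground cells in m
    "h_o": 0.1,          # segmentation threshold in obstacle cells in m
    "s": 20,             # confidence point count scaling factor (Eq. (4))
    "o_t": 0.1,          # outlier occlusion tolerance in m
    "d_ps": 20.0,        # distance at which the patch size switches from 3x3 to 5x5 in m
    "d_pv": 0.4,         # expected angular point distance in degrees (Eq. (2))
    "v_np": 10,          # minimum points to use a cell's own variance
}
```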

IV-B Ground Segmentation Evaluation

Fig. 3 shows qualitative results on the SemanticKITTI dataset. From top to bottom, we compare the provided ground truth with the results of Patchwork++[16], GndNet[8], JPC[10], and GroundGrid (ours). The first scenario a) is cloud 40 of sequence 01. One notable difficulty in this scenario is the ditch located in the right part of the image. GroundGrid is the only method that segments the points located in and around this ditch correctly, while the other methods over-segment the outer slope. The same applies to the points around the curbs, where the other methods display over-segmentation. Some methods fail to correctly segment the points on the opposite lane to the left of the vehicle because most laser beams are blocked by the guardrail. The distant guardrail in the lower right part is correctly segmented by all methods except JPC, which slightly over-segments it, and ours, which partly under-segments it.

[Fig. 4: Terrain estimation results for the urban (top) and hill (bottom) scenarios. Columns, from left to right: ALS ground truth[20], GndNet[8], and ours.]

In b) (seq. 09, cloud 1590), the methods have to deal with significant elevation differences and a steep slope in the driving direction. All methods over-segment parts of the road surface in this scenario. Patchwork++ performs well on the lower streets towards the bottom but struggles with the more elevated parts in the upper half of the image. GndNet struggles with the smaller streets to the sides. JPC over-segments the lower small street and the area around the curbs. GroundGrid manages to correctly segment almost all of the street surface, but some over-segmentation can be found at the very top, where a part of the curb is wrongly classified. Scenario c) is a detailed view of a difficult segmentation situation (seq. 10, cloud 164). The discontinuous ground elevation changes combined with vegetation and tilted surfaces make the ground segmentation task especially challenging. Patchwork++ tends to over-segment, which is visible in the sloped area to the center right and in the area in the back. There is also under-segmentation present at the wall to the center back and right. GndNet severely struggles with over-segmentation in the sloped area to the center right while heavily under-segmenting the cars and walls in the center back. JPC over-segments the area to the center right as well as the ground surface in the center back. GroundGrid also suffers from over-segmentation in this scenario, albeit less strongly than the other methods: there is some over-segmentation in the center right area around the vegetation and near the wall that separates the area from the street level. This is also true for the walls in the back part, but the elevated ground surface is still mostly correctly segmented.

For the quantitative evaluation, we chose the following error metrics:

$\text{Precision} = \frac{TP}{TP+FP} \qquad \text{Recall} = \frac{TP}{TP+FN}$  (9)
$F_1 = \frac{2 \cdot TP}{2 \cdot TP + FP + FN} \qquad IoU = \frac{TP}{TP+FP+FN}$  (10)
$\text{Accuracy} = \frac{TP+TN}{TP+TN+FP+FN}$  (11)

where TP and FP denote the counts of true positive and false positive ground points and TN and FN the counts of true negative and false negative ground points. Table I shows the results for all methods on the SemanticKITTI dataset. GroundGrid exhibits the best performance in most metrics except recall and precision, where Patchwork++ and GndNet produce higher averages. The high recall of Patchwork++ comes with a lower precision compared to the other methods, which suggests a tendency towards over-segmentation. This is contrasted by GndNet, which scores highest in the recall metric but suffers from severe under-segmentation. Note that GndNet was trained on this very dataset, so possible overfitting could influence the results.
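
For completeness, a trivial sketch computing the metrics of Eqs. (9)-(11) from the point-wise confusion counts:

```python
def metrics(tp, fp, tn, fn):
    """Point-wise ground segmentation metrics of Eqs. (9)-(11)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f1, iou, accuracy
```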

IV-C Terrain Estimation Evaluation

For the terrain estimation evaluation, we used the publicly available Airborne LiDAR Scanning (ALS) data published by the city of Berlin[20]. The maximum absolute height error is verified to be smaller than 0.05 m, validated with 30 known reference points[20]. We preprocessed the point clouds by filtering non-ground points and rasterizing them onto a grid with 0.5 m cell size to create rasterized elevation maps, which we used to evaluate the elevation maps produced by the algorithms (see ground truth in Fig. 4). We evaluate two experiments in different settings: the first in an urban road setting with mostly flat terrain, and the second in a forest area driving downhill in sloped terrain. We generated the data with our autonomous test vehicle equipped with a Velodyne HDL-64e 64-beam LiDAR mounted on the car's roof and an Applanix POS-LV GNSS for localization.

Fig. 4 shows visualizations (created using[21]) of some of the results of the terrain estimation experiments. This data was collected by averaging the terrain estimation output of the methods. Only pixels with a point density of at least 27 points/m² were considered. This equals on average 3 points per grid cell of 0.33 m resolution, which we define as a lower threshold for a meaningful variance calculation. For comparison purposes, we display the ALS ground truth alongside the results. The visualization of the urban scenario shows a T-type crossing where three streets meet. Our vehicle traveled straight, hence the third street is not covered by the terrain estimation methods. The lanes are divided by artificially raised ground covered with trees and lower vegetation, except for the center part of the crossing where all lanes are connected. Trees can also be found at the sides of the streets, which raises the difficulty for the terrain estimation. The dividing elevated area between the lanes can be seen in the ground truth and in GroundGrid's results, whereas GndNet's results only faintly show the outline of this area. However, GroundGrid struggles to correctly interpolate the area under the trees at the side of the road and on the dividing area. This is represented by lower circular areas in the ground estimation, visible as darker spots in the visualization. GndNet does not display these spots but seems to smooth the elevation map too much, leading to a loss of detail. The results of the hill scenario show the performance in the presence of a significant elevation difference. The pictured area covers approx. 250 m of the car's trajectory with an elevation difference of 23 m between the top and bottom points, which translates into an average inclination of 9.2%. Due to the dense tree cover at the roadsides, the ground visibility was low except on the street. GndNet struggled in this challenging environment, which is shown by the high amount of noise in the resulting terrain estimation. The sides of this narrow street are covered with trees and the terrain is rough and uneven. Since GndNet was trained on the SemanticKITTI dataset, it has never seen this type of terrain before and struggles to generalize its learned segmentation to this scenario.
This is contrasted by the result of GroundGrid, which is able to correctly extract the street's surface from the point cloud data. Only in the part after the turn is some noise visible at the side of the road. Since the parameters of our method were the same in all experiments, this shows that GroundGrid generalizes well across various scenarios.

TABLE II: Root mean square error of the terrain estimation against the ALS ground truth (in m).

          urban   hill
GndNet    0.297   1.581
Ours      0.196   0.488

The root mean square error measured against the ground truth data is shown in Table II. GroundGrid achieves the best performance in both scenarios. The results show the high difficulty of the hill scenario; in particular, GndNet is not able to provide an accurate terrain estimation. The underlying network does not seem to be able to generalize to this type of terrain due to the lack of similar training data. However, the performance of our method is also considerably lower in the hill scenario, which seems to be caused by the occlusion due to the forest area.
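
The RMSE values can be reproduced along these lines, assuming the averaged terrain estimate, the rasterized ALS reference, and a point density grid are aligned arrays; the function name and masking detail are assumptions of this sketch.

```python
import numpy as np

def terrain_rmse(estimated, reference, density, min_density=27.0):
    """RMSE between an averaged terrain estimate and the rasterized ALS ground
    truth, restricted to cells with sufficient point density (values in m)."""
    mask = np.isfinite(estimated) & np.isfinite(reference) & (density >= min_density)
    diff = estimated[mask] - reference[mask]
    return float(np.sqrt(np.mean(diff ** 2)))
```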

IV-D Run-Time

GroundGrid achieves a high run-time performance of 171 Hz (5.85 ms per point cloud), measured on an Intel i5-13600K desktop system. The implementation of the point cloud rasterization, outlier detection, and ground cell classification is multi-threaded, using up to eight threads in our evaluation. The ground interpolation and point cloud segmentation are single-threaded.

V CONCLUSION

In this article, we presented a method for point cloud ground segmentation and terrain estimation consisting of outlier filtering, grid map rasterization, variance-based ground detection, and the retention of information from previous frames. The experimental results demonstrate the capabilities of the algorithm, which outperforms current state-of-the-art methods in point cloud ground segmentation and terrain estimation tasks. This was demonstrated quantitatively and qualitatively using the SemanticKITTI dataset as well as self-acquired data from additional scenarios. The run-time performance is very high, which enables online operation in mobile robots. The code of our implementation is available as open source, and we invite the reader to reproduce our results.

References

  • [1] Li, Y. and Ibanez-Guzman, J., “Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems”, in IEEE Sig. Process., vol. 37, no. 4, pp. 50-61, 2020.
  • [2] Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., and Gall, J., “SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences”, in Proc. IEEE/CVF Int. Conf. Comp. Vis., pp. 9297–9307, 2019.
  • [3] Gomes, T. and Matias, D. and Campos, A. and Cunha, L. and Roriz, R., “A Survey on Ground Segmentation Methods for Automotive LiDAR Sensors”, Sensors, no. 2, vol. 23, p. 601, 2023.
  • [4] S. Kammel, J. Ziegler, B. Pitzer, M. Werling, T. Gindele, D. Jagzent, J. Schröder, M. Thuy, M. Goebl, F. von Hundelshausen, O. Pink, C. Frese, and C. Stiller, “Team AnnieWAY’s autonomous system for the DARPA Urban Challenge 2007”, in Journ. Field Robot. Spec. Issue 2007 DARPA Urban Chall., Part II, vol. 25, pp. 615-639, 2008.
  • [5] M. Montemerlo, J. Becker, S. Bhat, H. Dahlkamp, D. Dolgov, S. Ettinger, D. Haehnel, T. Hilden, G. Hoffmann, B. Huhnke, et al., “Junior: The Stanford entry in the Urban Challenge”, in Journ. Field Robot. Spec. Issue 2007 DARPA Urban Chall., Part II, vol. 25, pp. 569-597, 2008.
  • [6] C. Urmson et al., “Autonomous Driving in Urban Environments: Boss and the Urban Challenge”, in Journ. Field Robot. Spec. Issue 2007 DARPA Urban Chall., Part I, vol. 25, pp. 425–466, 2008.
  • [7] He, D., Abid, F., Kim, Y., Kim, J., “SectorGSnet: Sector Learning for Efficient Ground Segmentation of Outdoor LiDAR Point Clouds”, IEEE Access, vol. 10, pp. 11938-11946, 2022.
  • [8] Paigwar, A. and Erkent, O. and Sierra-Gonzalez, D. and Laugier, C., “GndNet: Fast Ground Plane Estimation and Point Cloud Segmentation for Autonomous Vehicles”, in Proc. IEEE/RSJ Int. Conf. on Intell. Robots Syst., pp. 2150-2156, 2020.
  • [9] Bogoslavskyi, I., Stachniss, C., “Fast range image-based segmentation of sparse 3D laser scans for online operation”. in Proc. IEEE/RSJ Int. Conf. on Intell. Robots Syst., pp. 163–169, 2016.
  • [10] Shen, Z. and Liang, H. and Lin, L. and Wang, Z. and Huang, W. and Yu, J., “Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process”, Remote Sensing, vol. 13, no. 16, p. 3239, 2021.
  • [11] T. Wu, H. Fu, B. Liu, H. Xue, R. Ren, and Z. Tu, “Detailed Analysis on Generating the Range Image for LiDAR Point Cloud Processing,” Electronics, vol. 10, no. 11, p. 1224, 2021.
  • [12] Himmelsbach, M. and Hundelshausen, Felix v. and Wuensche, H.-J., “Fast Segmentation of 3D Point Clouds for Ground Vehicles”, in Proc. IEEE Intell. Veh. Symp., pp. 560-565, 2010.
  • [13] Cheng, J. and He, D. and Lee, C., “A simple ground segmentation method for LiDAR 3D point clouds”, on Proc. 2nd Int. Conf. Adv. Comp. Tech., Info. Sc. Comm., pp. 171-175, 2020.
  • [14] Lim, H. and Oh, M. and Myung, H., “Patchwork: Concentric Zone-based Region-wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor”, IEEE Robot. Automat. Lett., vol. 6, 2021.
  • [15] Douillard, B. and Underwood, J. and Kuntz, N. and Vlaskine, V. and Quadros, A. and Morton, P. and Frenkel, A., “On the segmentation of 3D LIDAR point clouds”, in Proc. IEEE Int. Conf. Robot. Automat., pp. 2798-2805, 2011.
  • [16] Lee, S. and Lim, H. and Myung, H., “Patchwork++: Fast and Robust Ground Segmentation Solving Partial Under-Segmentation Using 3D Point Cloud”, in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 13276-13283, 2022.
  • [17] Qi, C. R., Su, H., Mo, K., Guibas, L. J., “PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation”, in Proc. IEEE Conf. Comp. Vis. Pattern Recognit., pp. 77-85, 2017.
  • [18] Welford, B. P., “Note on a Method for Calculating Corrected Sums of Squares and Products”, in Technometrics, vol. 4, no. 3, pp. 419-420, 1962.
  • [19] Fankhauser, P., Hutter, M., “A Universal Grid Map Library: Implementation and Use Case for Rough Terrain Navigation”, In Robot Operating System (ROS) – The Complete Reference, vol. 1, chap. 5, 2016.
  • [20] Senatsverwaltung für Stadtentwicklung und Wohnen, Geoportal Berlin / “Airborne Laserscanning (ALS) - Primäre 3D Laserscan-Daten”, online: https://www.stadtentwicklung.berlin.de/geoinformation/, licence: https://www.govdata.de/dl-de/by-2-0.
  • [21] Morgan-Wall, T., “rayshader: Create Maps and Visualize Data in 2D and 3D”, online, https://www.rayshader.com, 2023.