Point Cloud Alignment
A complete alignment pipeline: creating test data, comparing alignment methods, and refining with ICP.
Source: alignment.cpp
Overview
This example demonstrates the full point cloud alignment workflow:
- Test Data Generation — Create realistic test cases using Laplacian smoothing
- With Correspondences — Compare rigid and OBB alignment when point pairs are known
- Without Correspondences — Show why rigid fails and how OBB with tree disambiguation works
- ICP Refinement — Refine alignment using `fit_icp_alignment`
- Different Resolutions — Align meshes with different vertex counts using Chamfer error
Key insights:
- Rigid/similarity alignment requires known correspondences
- OBB alignment works without correspondences but has 180° ambiguity
- Tree-based disambiguation resolves OBB ambiguity
- `fit_icp_alignment` refines any reasonable initialization to optimal alignment
- Point-to-plane ICP (target with normals) converges ~2-3x faster than point-to-point
- Chamfer error measures alignment quality without correspondences
Creating Test Data
Instead of random noise, use Laplacian smoothing to create a realistic "similar but different" mesh:
// Build vertex connectivity
tf::face_membership<int> fm(mesh.polygons());
tf::vertex_link<int> vlink;
vlink.build(mesh.polygons(), fm);
// Heavily smooth the mesh (200 iterations, lambda=0.9)
auto smoothed = tf::laplacian_smoothed(
mesh.points() | tf::tag(vlink), 200, 0.9f);
This creates a mesh that is geometrically similar (vertices displaced by ~0.7% of the diagonal) yet has genuinely different vertex positions.
Part 1: With Correspondences
When `source[i]` corresponds to `target[i]`, rigid alignment is optimal:
// Transform smoothed mesh far away with random rotation
auto T1 = tf::transformed(
tf::random_transformation_at(centroid),
tf::make_transformation_from_translation(far_translation));
tf::points_buffer<float, 3> source;
source.allocate(smoothed.size());
tf::parallel_for_each(tf::zip(smoothed.points(), source.points()),
[&](auto tup) {
auto [src, dst] = tup;
dst = tf::transformed(src, T1);
});
Compare alignment methods:
// Rigid alignment - optimal with correspondences
auto T_rigid = tf::fit_rigid_alignment(source.points(), mesh.points());
// OBB without tree - may get wrong 180° orientation
auto T_obb_no_tree = tf::fit_obb_alignment(source.points(), mesh.points());
// OBB with tree - disambiguates orientation
tf::aabb_tree<int, float, 3> tree(mesh.points(), tf::config_tree(4, 4));
auto target_with_tree = mesh.points() | tf::tag(tree);
auto T_obb_tree = tf::fit_obb_alignment(source.points(), target_with_tree);
Results:
| Method | RMS Error |
|---|---|
| Ground truth | 0.00187 |
| Rigid | 0.00187 ✓ |
| OBB (no tree) | 0.136 ✗ |
| OBB (with tree) | 0.00339 |
Rigid alignment recovers the exact transformation. OBB without tree gets the wrong 180° rotation.
Part 2: Without Correspondences
Shuffle point indices to break correspondences:
tf::buffer<int> shuffle_ids;
shuffle_ids.allocate(source.size());
tf::parallel_iota(shuffle_ids, 0);
std::shuffle(shuffle_ids.begin(), shuffle_ids.end(),
std::mt19937{std::random_device{}()});
// Build shuffled source
tf::points_buffer<float, 3> source_shuffled;
source_shuffled.allocate(source.size());
tf::parallel_copy(tf::make_indirect_range(shuffle_ids, source.points()),
source_shuffled.points());
Results:
| Method | RMS Error |
|---|---|
| Ground truth | 0.00187 |
| Rigid | 0.105 ✗ (FAILS) |
| OBB (no tree) | 0.136 ✗ |
| OBB (with tree) | 0.00339 ✓ |
Without correspondences, rigid alignment fails completely. OBB with tree disambiguation still works.
Part 3: ICP Refinement
Refine OBB initialization using fit_icp_alignment. ICP returns a DELTA transformation (source_world -> target_world), which we compose with the initial transform. We compare point-to-point and point-to-plane ICP.
// Configure ICP
tf::icp_config icp_cfg;
icp_cfg.max_iterations = 50;
icp_cfg.n_samples = 1000;
icp_cfg.k = 1;
icp_cfg.min_relative_improvement = 0.001f;
// Point-to-Point ICP (returns delta, compose with initial transform)
tf::tick();
auto T_p2p_delta = tf::fit_icp_alignment(
source.points() | tf::tag(T_obb_tree),
target_with_tree, icp_cfg);
auto T_p2p = tf::transformed(T_obb_tree, T_p2p_delta); // total = delta @ init
auto p2p_time = tf::tock();
// Compute point normals for point-to-plane
auto fm = tf::make_face_membership(mesh.polygons());
auto normals = tf::compute_point_normals(mesh.polygons() | tf::tag(fm));
// Tag tree and normals onto target
auto target_with_normals = mesh.points()
| tf::tag(tree)
| tf::tag_normals(normals.unit_vectors());
// Point-to-Plane ICP (returns delta, compose with initial transform)
tf::tick();
auto T_p2l_delta = tf::fit_icp_alignment(
source.points() | tf::tag(T_obb_tree),
target_with_normals, icp_cfg);
auto T_p2l = tf::transformed(T_obb_tree, T_p2l_delta); // total = delta @ init
auto p2l_time = tf::tock();
Timing Comparison
Point-to-plane converges significantly faster:
| Method | Time | Chamfer Error |
|---|---|---|
| Point-to-Point | 5.7 ms | 0.000251 |
| Point-to-Plane | 2.2 ms | 0.000255 |
Point-to-plane was 2.6x faster while achieving the same accuracy.
Part 4: Different Mesh Resolutions
When meshes have different vertex counts, correspondences are impossible. Use Chamfer error to measure quality:
// Load meshes with different resolutions
auto mesh_high = tf::read_stl("dragon-500k.stl"); // 240k vertices
auto mesh_low = tf::read_stl("dragon-50k.stl"); // 29k vertices
// Use high-res as target, build tree for queries
tf::aabb_tree<int, float, 3> tree(mesh_high.points(), tf::config_tree(4, 4));
auto target_with_tree = mesh_high.points() | tf::tag(tree);
// Baseline: Chamfer error between aligned meshes (best possible)
float baseline = tf::chamfer_error(mesh_low.points(), target_with_tree);
As in Part 1, transform the low-res mesh far away with a random rotation. The alignment pipeline works identically:
// OBB initialization
auto T_obb = tf::fit_obb_alignment(source_low.points(), target_with_tree);
// For point-to-plane, tag normals onto target
auto target_with_normals = mesh_high.points()
| tf::tag(tree)
| tf::tag_normals(normals.unit_vectors());
// ICP refinement (returns delta, compose with OBB transform)
tf::icp_config icp_cfg;
icp_cfg.max_iterations = 50;
icp_cfg.n_samples = 1000;
icp_cfg.k = 1;
auto T_delta = tf::fit_icp_alignment(
source_low.points() | tf::tag(T_obb),
target_with_normals, icp_cfg);
auto T_p2l = tf::transformed(T_obb, T_delta); // total = delta @ init
Results:
| Stage | Chamfer Error |
|---|---|
| Baseline (best possible) | 0.000251 |
| Initial (far apart) | 0.862 |
| OBB (no tree) | 0.00886 |
| OBB (with tree) | 0.000918 |
| After ICP | 0.000252 ✓ |
Summary
| Method | Needs Correspondences | Handles Scale | Notes |
|---|---|---|---|
| `fit_rigid_alignment` | Yes | No | Optimal with correspondences |
| `fit_similarity_alignment` | Yes | Yes | For uniform scale differences |
| `fit_obb_alignment` | No | No | Coarse alignment; use tree for disambiguation |
| `fit_icp_alignment` | No (finds them) | No | Full ICP loop with convergence |
| `fit_knn_alignment` | No (finds them) | No | Single ICP step for custom pipelines |
Point-to-Point vs Point-to-Plane:
Tag normals on the target to enable point-to-plane ICP, which converges ~2-3x faster:
// Point-to-point (tree only)
auto target_p2p = points | tf::tag(tree);
// Point-to-plane (tree + normals) - 2-3x faster convergence
auto target_p2l = points | tf::tag(tree) | tf::tag_normals(normals);
Recommended pipeline for unknown correspondences:
- Build tree on target: `target | tf::tag(tree)`
- Compute normals: `tf::compute_point_normals(polygons | tf::tag(fm))`
- Tag normals: `target | tf::tag(tree) | tf::tag_normals(normals)`
- OBB initialization: `fit_obb_alignment(source, target_with_tree)`
- ICP refinement: `fit_icp_alignment(source | tf::tag(T_obb), target_with_normals)`
- Measure quality: `chamfer_error(aligned_source, target_with_tree)`
