Examples | C++

Point Cloud Alignment

Aligning point clouds with and without correspondences.

A complete alignment pipeline: creating test data, comparing alignment methods, and refining with ICP.

Source: alignment.cpp

Overview

This example demonstrates the full point cloud alignment workflow:

  1. Test Data Generation — Create realistic test cases using Laplacian smoothing
  2. With Correspondences — Compare rigid and OBB alignment when point pairs are known
  3. Without Correspondences — Show why rigid fails and how OBB with tree disambiguation works
  4. ICP Refinement — Refine alignment using fit_icp_alignment
  5. Different Resolutions — Align meshes with different vertex counts using Chamfer error

Key insights:

  • Rigid/similarity alignment requires known correspondences
  • OBB alignment works without correspondences but has 180° ambiguity
  • Tree-based disambiguation resolves OBB ambiguity
  • fit_icp_alignment refines any reasonable initialization to optimal alignment
  • Point-to-plane ICP (target with normals) converges ~2-3x faster than point-to-point
  • Chamfer error measures alignment quality without correspondences

Creating Test Data

Instead of random noise, use Laplacian smoothing to create a realistic "similar but different" mesh:

// Build vertex connectivity
tf::face_membership<int> fm(mesh.polygons());
tf::vertex_link<int> vlink;
vlink.build(mesh.polygons(), fm);

// Heavily smooth the mesh (200 iterations, lambda=0.9)
auto smoothed = tf::laplacian_smoothed(
    mesh.points() | tf::tag(vlink), 200, 0.9f);

This creates a mesh that is geometrically similar (vertices displaced by ~0.7% of the bounding-box diagonal) but whose vertex positions all differ from the original.

Part 1: With Correspondences

When source[i] corresponds to target[i], rigid alignment is optimal:

// Transform smoothed mesh far away with random rotation
auto T1 = tf::transformed(
    tf::random_transformation_at(centroid),
    tf::make_transformation_from_translation(far_translation));

tf::points_buffer<float, 3> source;
source.allocate(smoothed.size());
tf::parallel_for_each(tf::zip(smoothed.points(), source.points()),
    [&](auto tup) {
      auto [src, dst] = tup;
      dst = tf::transformed(src, T1);
    });

Compare alignment methods:

// Rigid alignment - optimal with correspondences
auto T_rigid = tf::fit_rigid_alignment(source.points(), mesh.points());

// OBB without tree - may get wrong 180° orientation
auto T_obb_no_tree = tf::fit_obb_alignment(source.points(), mesh.points());

// OBB with tree - disambiguates orientation
tf::aabb_tree<int, float, 3> tree(mesh.points(), tf::config_tree(4, 4));
auto target_with_tree = mesh.points() | tf::tag(tree);
auto T_obb_tree = tf::fit_obb_alignment(source.points(), target_with_tree);

Results:

Method            RMS Error
Ground truth      0.00187
Rigid             0.00187 ✓
OBB (no tree)     0.136 ✗
OBB (with tree)   0.00339

Rigid alignment recovers the exact transformation. OBB without a tree picks the wrong 180° orientation.
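For reference, the RMS error reported in these tables is assumed here to be the root-mean-square of per-correspondence distances; a brute-force sketch (plain C++, not the library's implementation):

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// RMS alignment error with known correspondences:
// sqrt( mean over i of |a[i] - b[i]|^2 ).
float rms_error(const std::vector<std::array<float, 3>>& a,
                const std::vector<std::array<float, 3>>& b) {
  double sum = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) {
    for (int c = 0; c < 3; ++c) {
      double d = a[i][c] - b[i][c];
      sum += d * d;
    }
  }
  return static_cast<float>(std::sqrt(sum / a.size()));
}
```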

Part 2: Without Correspondences

Shuffle point indices to break correspondences:

tf::buffer<int> shuffle_ids;
shuffle_ids.allocate(source.size());
tf::parallel_iota(shuffle_ids, 0);
std::shuffle(shuffle_ids.begin(), shuffle_ids.end(),
             std::mt19937{std::random_device{}()});

// Build shuffled source
tf::points_buffer<float, 3> source_shuffled;
source_shuffled.allocate(source.size());
tf::parallel_copy(tf::make_indirect_range(shuffle_ids, source.points()),
                  source_shuffled.points());

Results:

Method            RMS Error
Ground truth      0.00187
Rigid             0.105 ✗ (FAILS)
OBB (no tree)     0.136 ✗
OBB (with tree)   0.00339 ✓

Without correspondences, rigid alignment fails completely. OBB with tree disambiguation still works.
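Why rigid alignment breaks without correspondences can be seen in a tiny 2D analogue. In 2D, the least-squares rotation aligning paired points has a closed form: atan2 of the summed cross and dot products after centering. The solver recovers the true angle when pairs match up, and returns a meaningless angle when the pairing is scrambled. This is an illustrative sketch, not the tf API:

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

using Pt = std::array<double, 2>;

// Least-squares rotation angle mapping src[i] onto dst[i] (2D analogue of
// rigid/Kabsch alignment). Both sets are centered first, so only the
// rotational part remains.
double fit_rotation_2d(const std::vector<Pt>& src, const std::vector<Pt>& dst) {
  Pt cs{0, 0}, cd{0, 0};
  for (std::size_t i = 0; i < src.size(); ++i)
    for (int c = 0; c < 2; ++c) { cs[c] += src[i][c]; cd[c] += dst[i][c]; }
  for (int c = 0; c < 2; ++c) { cs[c] /= src.size(); cd[c] /= dst.size(); }
  double sin_sum = 0.0, cos_sum = 0.0;
  for (std::size_t i = 0; i < src.size(); ++i) {
    double sx = src[i][0] - cs[0], sy = src[i][1] - cs[1];
    double dx = dst[i][0] - cd[0], dy = dst[i][1] - cd[1];
    cos_sum += sx * dx + sy * dy;  // dot: agreement between pairs
    sin_sum += sx * dy - sy * dx;  // cross: signed misalignment
  }
  return std::atan2(sin_sum, cos_sum);
}
```

With the true pairing the angle comes back exactly; permute `dst` and the cross/dot sums mix unrelated points, so the recovered angle drifts arbitrarily far from the truth. The 3D rigid fit fails for the same reason.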

Part 3: ICP Refinement

Refine OBB initialization using fit_icp_alignment. ICP returns a DELTA transformation (source_world -> target_world), which we compose with the initial transform. We compare point-to-point and point-to-plane ICP.

// Configure ICP
tf::icp_config icp_cfg;
icp_cfg.max_iterations = 50;
icp_cfg.n_samples = 1000;
icp_cfg.k = 1;
icp_cfg.min_relative_improvement = 0.001f;

// Point-to-Point ICP (returns delta, compose with initial transform)
tf::tick();
auto T_p2p_delta = tf::fit_icp_alignment(
    source.points() | tf::tag(T_obb_tree),
    target_with_tree, icp_cfg);
auto T_p2p = tf::transformed(T_obb_tree, T_p2p_delta);  // total = delta @ init
auto p2p_time = tf::tock();

// Compute point normals for point-to-plane
auto fm = tf::make_face_membership(mesh.polygons());
auto normals = tf::compute_point_normals(mesh.polygons() | tf::tag(fm));

// Tag tree and normals onto target
auto target_with_normals = mesh.points()
    | tf::tag(tree)
    | tf::tag_normals(normals.unit_vectors());

// Point-to-Plane ICP (returns delta, compose with initial transform)
tf::tick();
auto T_p2l_delta = tf::fit_icp_alignment(
    source.points() | tf::tag(T_obb_tree),
    target_with_normals, icp_cfg);
auto T_p2l = tf::transformed(T_obb_tree, T_p2l_delta);  // total = delta @ init
auto p2l_time = tf::tock();

Timing Comparison

Point-to-plane converges significantly faster:

Method           Time     Chamfer Error
Point-to-Point   5.7 ms   0.000251
Point-to-Plane   2.2 ms   0.000255

Point-to-plane was 2.6x faster while achieving essentially the same accuracy.

Part 4: Different Mesh Resolutions

When meshes have different vertex counts, correspondences are impossible. Use Chamfer error to measure quality:

// Load meshes with different resolutions
auto mesh_high = tf::read_stl("dragon-500k.stl");  // 240k vertices
auto mesh_low = tf::read_stl("dragon-50k.stl");    // 29k vertices

// Use high-res as target, build tree for queries
tf::aabb_tree<int, float, 3> tree(mesh_high.points(), tf::config_tree(4, 4));
auto target_with_tree = mesh_high.points() | tf::tag(tree);

// Baseline: Chamfer error between aligned meshes (best possible)
float baseline = tf::chamfer_error(mesh_low.points(), target_with_tree);
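Definitions of Chamfer error vary between libraries; the sketch below assumes the common one-directional form, the mean distance from each source point to its nearest target point. It is a brute-force O(n·m) reference, where the tree-tagged target in the library presumably replaces the inner loop with an accelerated nearest-neighbor query:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>
#include <vector>

// Mean nearest-neighbor distance from each point of `from` to the set `to`.
float chamfer_brute_force(const std::vector<std::array<float, 3>>& from,
                          const std::vector<std::array<float, 3>>& to) {
  double sum = 0.0;
  for (const auto& p : from) {
    double best = std::numeric_limits<double>::max();
    for (const auto& q : to) {
      double d2 = 0.0;
      for (int c = 0; c < 3; ++c) {
        double d = p[c] - q[c];
        d2 += d * d;
      }
      best = std::min(best, d2);
    }
    sum += std::sqrt(best);
  }
  return static_cast<float>(sum / from.size());
}
```

Because it needs no correspondences, this metric works even when source and target have different vertex counts, which is exactly the situation in this part.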

As in Part 1, transform the low-res mesh far away with a random rotation. The alignment pipeline works identically:

// OBB initialization
auto T_obb = tf::fit_obb_alignment(source_low.points(), target_with_tree);

// For point-to-plane, tag normals onto target
auto target_with_normals = mesh_high.points()
    | tf::tag(tree)
    | tf::tag_normals(normals.unit_vectors());

// ICP refinement (returns delta, compose with OBB transform)
tf::icp_config icp_cfg;
icp_cfg.max_iterations = 50;
icp_cfg.n_samples = 1000;
icp_cfg.k = 1;

auto T_delta = tf::fit_icp_alignment(
    source_low.points() | tf::tag(T_obb),
    target_with_normals, icp_cfg);
auto T_p2l = tf::transformed(T_obb, T_delta);  // total = delta @ init

Results:

Stage                      Chamfer Error
Baseline (best possible)   0.000251
Initial (far apart)        0.862
OBB (no tree)              0.00886
OBB (with tree)            0.000918
After ICP                  0.000252 ✓

Summary

Method                     Needs Correspondences   Handles Scale   Notes
fit_rigid_alignment        Yes                     No              Optimal with correspondences
fit_similarity_alignment   Yes                     Yes             For uniform scale differences
fit_obb_alignment          No                      No              Coarse alignment; use tree for disambiguation
fit_icp_alignment          No (finds them)         No              Full ICP loop with convergence
fit_knn_alignment          No (finds them)         No              Single ICP step for custom pipelines

Point-to-Point vs Point-to-Plane:

Tag normals on the target to enable point-to-plane ICP, which converges ~2-3x faster:

// Point-to-point (tree only)
auto target_p2p = points | tf::tag(tree);

// Point-to-plane (tree + normals) - 2-3x faster convergence
auto target_p2l = points | tf::tag(tree) | tf::tag_normals(normals);
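The speed difference comes from the residual each objective minimizes. Point-to-point penalizes the full distance to the matched point; point-to-plane projects the offset onto the target's normal, so sliding tangentially along the surface costs nothing and each iteration can move further. A sketch of the two residuals (illustrative, not the library internals):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Point-to-point residual: Euclidean distance to the matched target point.
float p2p_residual(const Vec3& s, const Vec3& q) {
  float d2 = 0.f;
  for (int c = 0; c < 3; ++c) d2 += (s[c] - q[c]) * (s[c] - q[c]);
  return std::sqrt(d2);
}

// Point-to-plane residual: the offset projected onto the target's unit
// normal n. Tangential offsets project to zero, so they are not penalized.
float p2l_residual(const Vec3& s, const Vec3& q, const Vec3& n) {
  float d = 0.f;
  for (int c = 0; c < 3; ++c) d += (s[c] - q[c]) * n[c];
  return std::fabs(d);
}
```

For a source point offset both along and across the surface normal, the point-to-plane residual is strictly smaller, which is the geometric reason it tolerates imperfect nearest-neighbor matches better during early iterations.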

Recommended pipeline for unknown correspondences:

  1. Build tree on target: target | tf::tag(tree)
  2. Compute normals: tf::compute_point_normals(polygons | tf::tag(fm))
  3. Tag normals: target | tf::tag(tree) | tf::tag_normals(normals)
  4. OBB initialization: fit_obb_alignment(source, target_with_tree)
  5. ICP refinement: fit_icp_alignment(source | tf::tag(T_obb), target_with_normals)
  6. Measure quality: chamfer_error(aligned_source, target_with_tree)