A complete alignment pipeline: creating test data, comparing alignment methods, and refining with ICP.
Source: `alignment.cpp`
This example demonstrates the full point cloud alignment workflow: generating test data, comparing alignment methods with and without correspondences, and refining with ICP.
Instead of random noise, use Laplacian smoothing to create a realistic "similar but different" mesh:
```cpp
// Build vertex connectivity
tf::face_membership<int> fm(mesh.polygons());
tf::vertex_link<int> vlink;
vlink.build(mesh.polygons(), fm);

// Heavily smooth the mesh (200 iterations, lambda = 0.9)
auto smoothed = tf::laplacian_smoothed(
    mesh.points() | tf::tag(vlink), 200, 0.9f);
```
This creates a mesh that is geometrically similar (vertices displaced by roughly 0.7% of the bounding-box diagonal) but whose vertex positions all differ from the original.
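The smoothing step itself is easy to sketch without the library. Below is a minimal, illustrative version in plain C++ (the function name `laplacian_smooth` is ours, and a closed ring of scalar values stands in for the mesh connectivity): each iteration moves every vertex toward the average of its linked neighbors by a factor `lambda`, exactly the update `tf::laplacian_smoothed` iterates.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative Laplacian smoothing on a closed ring: vertex i is
// linked to i-1 and i+1, and each iteration moves it toward the
// mean of its neighbors by a factor lambda.
std::vector<double> laplacian_smooth(std::vector<double> pts,
                                     std::size_t iterations, double lambda) {
  const std::size_t n = pts.size();
  std::vector<double> next(n);
  for (std::size_t it = 0; it < iterations; ++it) {
    for (std::size_t i = 0; i < n; ++i) {
      double avg = 0.5 * (pts[(i + n - 1) % n] + pts[(i + 1) % n]);
      next[i] = pts[i] + lambda * (avg - pts[i]);  // move toward neighbor mean
    }
    pts.swap(next);
  }
  return pts;
}
```

With 200 iterations and lambda = 0.9, high-frequency detail is almost entirely removed while the overall shape (here, the mean) is preserved, which is why the smoothed dragon stays "similar but different".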
When `source[i]` corresponds to `target[i]`, rigid alignment is optimal:
```cpp
// Transform the smoothed mesh far away with a random rotation
auto T1 = tf::transformed(
    tf::random_transformation_at(centroid),
    tf::make_transformation_from_translation(far_translation));

tf::points_buffer<float, 3> source;
source.allocate(smoothed.size());
tf::parallel_for_each(tf::zip(smoothed.points(), source.points()),
                      [&](auto tup) {
                        auto [src, dst] = tup;
                        dst = tf::transformed(src, T1);
                      });
```
Compare alignment methods:
```cpp
// Rigid alignment - optimal with correspondences
auto T_rigid = tf::fit_rigid_alignment(source.points(), mesh.points());

// OBB without tree - may get the wrong 180° orientation
auto T_obb_no_tree = tf::fit_obb_alignment(source.points(), mesh.points());

// OBB with tree - disambiguates orientation
tf::aabb_tree<int, float, 3> tree(mesh.points(), tf::config_tree(4, 4));
auto target_with_tree = mesh.points() | tf::tag(tree);
auto T_obb_tree = tf::fit_obb_alignment(source.points(), target_with_tree);
```
Results:
| Method | RMS Error |
|---|---|
| Ground truth | 0.00187 |
| Rigid | 0.00187 ✓ |
| OBB (no tree) | 0.136 ✗ |
| OBB (with tree) | 0.00339 |
Rigid alignment recovers the exact transformation. OBB without tree gets the wrong 180° rotation.
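Rigid alignment is exact here because, with known correspondences, the problem has a closed-form least-squares solution (the Kabsch/Procrustes fit). A plain-C++ 2D sketch of that idea, independent of the library (`fit_rigid_2d` and `Pt` are our own illustrative names): center both point sets, then the optimal rotation angle falls out of the summed cross and dot products of the centered pairs.

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// Closed-form 2D rigid fit with known correspondences: returns the
// rotation angle and translation mapping src onto dst in least squares.
std::pair<double, Pt> fit_rigid_2d(const std::vector<Pt>& src,
                                   const std::vector<Pt>& dst) {
  const double n = static_cast<double>(src.size());
  Pt cs{0, 0}, cd{0, 0};
  for (std::size_t i = 0; i < src.size(); ++i) {
    cs.x += src[i].x; cs.y += src[i].y;
    cd.x += dst[i].x; cd.y += dst[i].y;
  }
  cs.x /= n; cs.y /= n; cd.x /= n; cd.y /= n;
  double sdot = 0, scross = 0;
  for (std::size_t i = 0; i < src.size(); ++i) {
    double ax = src[i].x - cs.x, ay = src[i].y - cs.y;
    double bx = dst[i].x - cd.x, by = dst[i].y - cd.y;
    sdot += ax * bx + ay * by;    // agreement of directions
    scross += ax * by - ay * bx;  // signed rotation between them
  }
  double theta = std::atan2(scross, sdot);  // optimal rotation angle
  // Translation maps the rotated source centroid onto the target centroid.
  Pt t{cd.x - (cs.x * std::cos(theta) - cs.y * std::sin(theta)),
       cd.y - (cs.x * std::sin(theta) + cs.y * std::cos(theta))};
  return {theta, t};
}
```

In 2D the rotation reduces to a single `atan2`; in 3D the analogous step is an SVD of the cross-covariance matrix, which is presumably what `fit_rigid_alignment` solves internally.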
Shuffle point indices to break correspondences:
```cpp
tf::buffer<int> shuffle_ids;
shuffle_ids.allocate(source.size());
tf::parallel_iota(shuffle_ids, 0);
std::shuffle(shuffle_ids.begin(), shuffle_ids.end(),
             std::mt19937{std::random_device{}()});

// Build the shuffled source
tf::points_buffer<float, 3> source_shuffled;
source_shuffled.allocate(source.size());
tf::parallel_copy(tf::make_indirect_range(shuffle_ids, source.points()),
                  source_shuffled.points());
```
Results:
| Method | RMS Error |
|---|---|
| Ground truth | 0.00187 |
| Rigid | 0.105 ✗ (FAILS) |
| OBB (no tree) | 0.136 ✗ |
| OBB (with tree) | 0.00339 ✓ |
Without correspondences, rigid alignment fails completely. OBB with tree disambiguation still works.
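The 180° failure mode exists because principal axes are only defined up to sign. A 2D sketch of the disambiguation idea in plain C++ (all names are ours, and brute-force nearest-neighbor search stands in for the library's AABB tree): compute each set's principal axis in closed form, then test both flips of the aligning rotation and keep whichever gives the lower nearest-neighbor error.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct P2 { double x, y; };

// Principal-axis angle of a 2D point set (closed form for the 2x2
// covariance). The axis is only defined up to a 180° flip.
double principal_angle(const std::vector<P2>& pts) {
  double mx = 0, my = 0;
  for (const auto& p : pts) { mx += p.x; my += p.y; }
  mx /= pts.size(); my /= pts.size();
  double cxx = 0, cyy = 0, cxy = 0;
  for (const auto& p : pts) {
    double dx = p.x - mx, dy = p.y - my;
    cxx += dx * dx; cyy += dy * dy; cxy += dx * dy;
  }
  return 0.5 * std::atan2(2 * cxy, cxx - cyy);
}

// One-sided mean nearest-neighbor distance (brute force).
double nn_error(const std::vector<P2>& src, const std::vector<P2>& dst) {
  double total = 0;
  for (const auto& s : src) {
    double best = std::numeric_limits<double>::max();
    for (const auto& d : dst)
      best = std::min(best, std::hypot(s.x - d.x, s.y - d.y));
    total += best;
  }
  return total / src.size();
}

// Try the axis-aligning rotation and its 180° flip; keep the one
// whose rotated source lands closer to the target.
double disambiguated_rotation(const std::vector<P2>& src,
                              const std::vector<P2>& dst) {
  double base = principal_angle(dst) - principal_angle(src);
  double best_angle = base, best_err = std::numeric_limits<double>::max();
  for (double cand : {base, base + 3.14159265358979}) {
    std::vector<P2> rotated;
    for (const auto& s : src)
      rotated.push_back({s.x * std::cos(cand) - s.y * std::sin(cand),
                         s.x * std::sin(cand) + s.y * std::cos(cand)});
    double err = nn_error(rotated, dst);
    if (err < best_err) { best_err = err; best_angle = cand; }
  }
  return best_angle;
}
```

This is why passing `target_with_tree` matters: without a fast nearest-neighbor structure, evaluating the candidate flips against the full target would be prohibitively expensive.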
Refine OBB initialization using iterative closest point:
```cpp
auto T_accum = T_obb_tree;          // Start from the OBB estimate
constexpr std::size_t n_samples = 1000;
constexpr std::size_t k = 5;        // k-NN for soft correspondences

for (std::size_t iter = 0; iter < max_iters; ++iter) {
  // Subsample the source points
  auto ids = make_ids(source.size(), offset, stride, n_samples);
  auto subsample = tf::make_points(
      tf::make_indirect_range(ids, source.points()));

  // Fit a transform using k-nearest neighbors
  auto subsample_with_frame = subsample | tf::tag(T_accum);
  auto T_iter = tf::fit_knn_alignment(
      subsample_with_frame, target_with_tree, k);
  T_accum = tf::transformed(T_accum, T_iter);
}
```
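The loop's essential structure (match, fit, apply, repeat) can be mimicked on a toy 2D problem in plain C++. This sketch uses our own names, k = 1 instead of soft k-NN correspondences, no subsampling, and brute-force matching instead of a tree, but it is the same iteration:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Q2 { double x, y; };

// One ICP step: match each source point to its nearest target point,
// then apply the closed-form rigid fit for those matches in place.
static void icp_step(std::vector<Q2>& src, const std::vector<Q2>& dst) {
  std::vector<std::size_t> match(src.size());
  for (std::size_t i = 0; i < src.size(); ++i) {
    double best = std::numeric_limits<double>::max();
    for (std::size_t j = 0; j < dst.size(); ++j) {
      double d = std::hypot(src[i].x - dst[j].x, src[i].y - dst[j].y);
      if (d < best) { best = d; match[i] = j; }
    }
  }
  Q2 cs{0, 0}, cd{0, 0};
  for (std::size_t i = 0; i < src.size(); ++i) {
    cs.x += src[i].x; cs.y += src[i].y;
    cd.x += dst[match[i]].x; cd.y += dst[match[i]].y;
  }
  const double n = static_cast<double>(src.size());
  cs.x /= n; cs.y /= n; cd.x /= n; cd.y /= n;
  double sdot = 0, scross = 0;
  for (std::size_t i = 0; i < src.size(); ++i) {
    double ax = src[i].x - cs.x, ay = src[i].y - cs.y;
    double bx = dst[match[i]].x - cd.x, by = dst[match[i]].y - cd.y;
    sdot += ax * bx + ay * by;
    scross += ax * by - ay * bx;
  }
  const double th = std::atan2(scross, sdot);
  for (auto& p : src) {  // apply the incremental rigid transform
    double ax = p.x - cs.x, ay = p.y - cs.y;
    p = {ax * std::cos(th) - ay * std::sin(th) + cd.x,
         ax * std::sin(th) + ay * std::cos(th) + cd.y};
  }
}

// Run ICP and report the final RMS distance to the nearest target point.
double icp_rms(std::vector<Q2> src, const std::vector<Q2>& dst,
               std::size_t iterations) {
  for (std::size_t it = 0; it < iterations; ++it) icp_step(src, dst);
  double sum = 0;
  for (const auto& s : src) {
    double best = std::numeric_limits<double>::max();
    for (const auto& d : dst)
      best = std::min(best, std::hypot(s.x - d.x, s.y - d.y));
    sum += best * best;
  }
  return std::sqrt(sum / src.size());
}
```

Like real ICP, this converges only from a good initialization: nearest-neighbor matches are trusted as correspondences, which is exactly why the pipeline seeds the loop with the OBB estimate rather than starting cold.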
Use exponential moving average (EMA) to handle sampling noise:
```cpp
constexpr float alpha = 0.3f;     // EMA smoothing factor
constexpr float rel_tol = 0.001f; // 0.1% improvement threshold

float ema = 0.0f, ema_prev = 0.0f;
for (std::size_t iter = 0; iter < max_iters; ++iter) {
  // ... ICP iteration ...

  // Evaluate on a DIFFERENT random subset (avoids overfitting)
  float chamfer = tf::chamfer_error(
      eval_sample | tf::tag(T_accum), target_with_tree);

  if (iter == 0) { ema = chamfer; continue; }  // seed with first measurement
  ema_prev = ema;
  ema = alpha * chamfer + (1.0f - alpha) * ema;

  float rel_change = (ema_prev - ema) / ema;
  if (rel_change < rel_tol) break;  // Converged
}
```
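The stopping rule is easy to isolate and test on its own. A small sketch (our own `ema_stop` helper, not library code) that returns the iteration at which the smoothed error stops improving by more than `rel_tol`:

```cpp
#include <cstddef>
#include <vector>

// EMA-based stopping rule: smooth the noisy per-iteration errors,
// stop when the smoothed value improves by less than rel_tol.
// Returns the iteration index at which we would stop.
std::size_t ema_stop(const std::vector<float>& errors,
                     float alpha, float rel_tol) {
  float ema = errors[0];  // seed with the first measurement
  for (std::size_t i = 1; i < errors.size(); ++i) {
    float prev = ema;
    ema = alpha * errors[i] + (1.0f - alpha) * ema;
    if ((prev - ema) / prev < rel_tol) return i;  // converged
  }
  return errors.size();  // never converged within the budget
}
```

The point of the EMA is that a single noisy Chamfer measurement can dip or spike; smoothing prevents both premature stops and endless iteration on sampling noise.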
ICP converges from 0.00339 to 0.00187 (ground truth) in ~15 iterations.
When meshes have different vertex counts, correspondences are impossible. Use Chamfer error to measure quality:
```cpp
// Load meshes with different resolutions
auto mesh_high = tf::read_stl("dragon-500k.stl"); // 240k vertices
auto mesh_low = tf::read_stl("dragon-50k.stl");   // 29k vertices

// Use the high-res mesh as target; build a tree for queries
tf::aabb_tree<int, float, 3> tree(mesh_high.points(), tf::config_tree(4, 4));
auto target_with_tree = mesh_high.points() | tf::tag(tree);

// Baseline: Chamfer error between the already-aligned meshes (best possible)
float baseline = tf::chamfer_error(mesh_low.points(), target_with_tree);
```
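One common definition of the Chamfer error, sketched brute force in plain C++ (the library's exact formula may differ, and its AABB tree makes the nearest-neighbor lookups fast; all names here are ours): average the nearest-neighbor distance in both directions so that neither extra points in one set nor holes in the other go unnoticed.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct V2 { double x, y; };

// Directed term: mean distance from each query point to its nearest
// point in the other set (O(n*m) brute force).
double mean_nn_dist(const std::vector<V2>& from, const std::vector<V2>& to) {
  double total = 0;
  for (const auto& f : from) {
    double best = std::numeric_limits<double>::max();
    for (const auto& t : to)
      best = std::min(best, std::hypot(f.x - t.x, f.y - t.y));
    total += best;
  }
  return total / from.size();
}

// Symmetric Chamfer error: average of the two directed terms.
double chamfer_error(const std::vector<V2>& a, const std::vector<V2>& b) {
  return 0.5 * (mean_nn_dist(a, b) + mean_nn_dist(b, a));
}
```

Because it needs no correspondences, this metric works across meshes of different vertex counts, which is exactly why it serves as both the convergence signal and the final quality measure here.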
As in Part 1, transform the low-res mesh far away with a random rotation. The alignment pipeline works identically:
```cpp
// OBB initialization
auto T_obb = tf::fit_obb_alignment(source_low.points(), target_with_tree);

// ICP refinement
for (std::size_t iter = 0; iter < max_iters; ++iter) {
  auto T_iter = tf::fit_knn_alignment(
      subsample | tf::tag(T_accum), target_with_tree, ...);
  T_accum = tf::transformed(T_accum, T_iter);
}
```
Results:
| Stage | Chamfer Error |
|---|---|
| Baseline (best possible) | 0.000251 |
| Initial (far apart) | 0.862 |
| OBB (no tree) | 0.00886 |
| OBB (with tree) | 0.000918 |
| After ICP | 0.000252 ✓ |
ICP converges to within 0.4% of the theoretical optimum.
| Method | Needs Correspondences | Handles Scale | Notes |
|---|---|---|---|
| `fit_rigid_alignment` | Yes | No | Optimal with correspondences |
| `fit_similarity_alignment` | Yes | Yes | For uniform scale differences |
| `fit_obb_alignment` | No | No | Use a tree for disambiguation |
| `fit_knn_alignment` | No (finds them) | No | ICP iteration step |
Recommended pipeline for unknown correspondences:
1. Build a tree over the target: `target | tf::tag(tree)`
2. Coarse alignment: `fit_obb_alignment(source, target_with_tree)`
3. Refine: iterate `fit_knn_alignment` until the EMA converges
4. Verify: `chamfer_error(aligned_source, target_with_tree)`