Reassembling Thin Surface Geometry

Geoffrey Oxholm and Ko Nishino
Drexel University

[Figure: reassemble header image]
We present a novel 3D reassembly method for fragmented, thin objects with unknown geometry. Unlike past methods, we make no restrictive assumptions about the overall shape of the object or its painted texture. Our key observation is that regardless of the object’s shape, matching fragments will have similar geometry and photometry along and across their adjoining regions. We begin by encoding the scale variability of each fragment’s boundary contour in a multi-channel, 2D image representation. Using this representation, we identify matching sub-contours via 2D partial image registration. We then align the fragments by minimizing the distance between their adjoining regions while simultaneously ensuring geometric continuity across them. The configuration of the fragments as they are incrementally matched and aligned forms a graph structure that we use to improve subsequent matches. By detecting cycles in this graph, we identify subsets of fragments with interdependent alignments. We then minimize the error within these subsets to achieve a globally optimal alignment. We leverage user feedback to cull the otherwise exponential search space: after each new match is found and aligned, it is presented to the user for confirmation or rejection. Using ceramic pottery as the driving example, we demonstrate the accuracy and efficiency of our method on six real-world datasets.
  • A Flexible Approach to Reassembling Thin Objects of Unknown Geometry
    G. Oxholm and K. Nishino,
    Journal of Cultural Heritage, vol. 14, no. 1, pp. 51–61, Jan.–Feb. 2013.
    [ DOI ][ paper ][ project ]

  • Reassembling Thin Artifacts of Unknown Geometry
    G. Oxholm and K. Nishino,
    in Proc. of International Symposium on Virtual Reality, Archaeology and Cultural Heritage, 2011.
    [ paper ][ video ][ project ]


[Figure: reassemble overview image]
We derive a three-step method that reassembles objects using only the fragments’ boundary contours with minimal user interaction. Our method exploits the key observation that regardless of the object’s shape or painted texture, the matching boundary regions of adjoining fragments will be similar in both geometry and photometry. This figure outlines our approach. (a) First, we preprocess each fragment to encode the scale variability of its boundary contour as a multi-channel 2D image. (b) We then identify matching sub-contours using a novel image registration method based on these scale-space boundary contour representations. (c) Next, we estimate the transformation that aligns the fragments using a least-squares formulation that minimizes the distance between the adjoining regions while simultaneously maximizing the resulting geometric continuity across them. (d) The configuration of the fragments as they are incrementally matched and aligned forms a graph structure. By identifying cycles in this graph, we detect subsets of fragments whose alignments depend on one another. When a cycle is formed, we jointly re-optimize the alignments of the constituent fragments to ensure a globally optimal configuration and improve subsequent matches.
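The core of the pairwise alignment in step (c) can be sketched with the standard least-squares rigid registration (Kabsch) solution for corresponding points along the matched sub-contours. This is a minimal illustration only: the paper’s full objective additionally rewards geometric continuity across the joint, which is not reproduced here, and the function name is ours.

```python
import numpy as np

def align_fragments(src, dst):
    """Least-squares rigid transform (R, t) mapping points src onto dst.

    src, dst: (n, 3) arrays of corresponding boundary points.
    Standard Kabsch solution: center both sets, take the SVD of the
    cross-covariance, and correct the sign to avoid reflections.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Applying the returned transform as `src @ R.T + t` reproduces `dst` exactly when the correspondence is noise-free; with noisy boundary samples it gives the least-squares best rigid fit.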
[Figure: reassemble scalespace image]
The first step in our method is to identify which fragments are most likely to align, and where their boundaries match. To do so quickly and accurately, we leverage the scale variability of each fragment’s boundary contour. A coarse-scale representation, which is robust to noise and subtle detail, can be used to quickly estimate potential matches, while finer-scale detail can be used to verify and fine-tune them. This graduated relationship naturally lends itself to a hierarchical encoding. To that end, we build a multi-channel image representation that encodes the shape and color of each fragment’s boundary contour under various scales. The first two channels (curvature and torsion) encode the shape, and the next two (red and green chromaticity) encode the color. Shown at the bottom right is a compact visualization: the red channel is curvature, green is torsion, and blue is the intensity of each contour point.
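The geometric channels of this representation can be sketched as follows: smooth the closed 3D boundary contour with Gaussians of increasing width, and at each scale compute curvature and torsion at every contour point, stacking the results into scale-by-point images. This is an illustrative sketch under our own choices (FFT-based circular smoothing, central differences); it covers only the two shape channels, not the chromaticity channels the paper builds the same way.

```python
import numpy as np

def scale_space_channels(points, sigmas):
    """Curvature and torsion of a closed 3D contour across smoothing scales.

    points: (n, 3) array of boundary points (assumed closed and ordered).
    Returns two (len(sigmas) x n) images: rows index scale, columns index
    contour points.
    """
    n = len(points)
    idx = np.arange(n)
    curv = np.zeros((len(sigmas), n))
    tors = np.zeros((len(sigmas), n))
    # Central differences with wrap-around, since the contour is closed.
    cdiff = lambda a: (np.roll(a, -1, axis=0) - np.roll(a, 1, axis=0)) / 2.0
    for row, sigma in enumerate(sigmas):
        # Periodic Gaussian kernel centered at index 0 (circular distance).
        d = np.minimum(idx, n - idx).astype(float)
        g = np.exp(-d**2 / (2.0 * sigma**2))
        g /= g.sum()
        # Circular convolution via FFT smooths the closed contour.
        sm = np.real(np.fft.ifft(np.fft.fft(points, axis=0)
                                 * np.fft.fft(g)[:, None], axis=0))
        d1 = cdiff(sm)
        d2 = cdiff(d1)
        d3 = cdiff(d2)
        cross = np.cross(d1, d2)
        cn = np.linalg.norm(cross, axis=1)
        speed = np.linalg.norm(d1, axis=1)
        # Parameterization-invariant curvature and torsion of a space curve.
        curv[row] = cn / np.maximum(speed**3, 1e-12)
        tors[row] = np.einsum('ij,ij->i', cross, d3) / np.maximum(cn**2, 1e-12)
    return curv, tors
```

On a planar circle of radius r, curvature comes out near 1/r at every scale and torsion is zero, which is a useful sanity check for any implementation of these channels.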
[Figure: reassemble global image]
As matching contours are found, edges are added to the assembly graph. When cycles are detected, as in (b), edges may be jointly re-optimized to incorporate the additional information. Note how the gap between pieces B and C has been closed with the addition of the yellow edge.
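The bookkeeping behind cycle detection can be sketched with a union-find structure over fragments: a new match whose two fragments are already connected closes a cycle. The class and method names here are ours; the paper does not prescribe an implementation.

```python
class AssemblyGraph:
    """Track fragment matches; report when a new match closes a cycle.

    Union-find over fragment IDs. add_match returns True when the match
    connects two already-connected fragments, i.e. closes a cycle and
    signals that the alignments on that loop should be jointly re-optimized.
    """
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            # Path halving keeps the trees shallow.
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def add_match(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return True   # cycle closed: trigger joint re-optimization
        self.parent[ra] = rb
        return False
```

In the figure's example, matching B to C after both already connect through other edges would return True, prompting the joint re-optimization that closes the gap.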


[Figure: reassemble result1 image]
A six-piece, store-bought vase is reassembled using our system (green box) and by hand (orange box). At each step, the green contour indicates the proposed addition, and purple indicates alignments that have consequently been adjusted.
[Figure: reassemble result2 image]
Steps taken to reassemble a 21-piece artifact. Note how the primary cluster is abandoned after the eighth piece; at that point, the strongest matches were part of the rim. After completing this four-piece section, a connection was made between the two components, enabling the completion of the rest of the vase. The final result (green) is compared to the hand reassembly (orange). A few steps have been omitted for compactness.
[Figure: reassemble result3 image]
A 10-piece vessel is reassembled using our system (green box) and by hand (orange box). Note the large gap that is closed in the last step. Shown in purple, nearly every alignment is adjusted to correct this accumulated error.