Ring 3: The Great Beyond
Adapting experimental and modeling workflows to create 3D visualizations of artist and conservation materials on polychrome objects
By Roxanne Radpour, Charles E. Culpeper Fellow at the National Gallery of Art
For the final talk of the Washington Conservation Guild’s annual Three-Ring Circus, Roxanne Radpour discussed her work using imaging technologies and 3D modeling software to develop accessible workflows for creating digital models of 3D polychrome objects with pigment maps. Beginning with a summary of common imaging techniques currently used throughout the field of conservation to analyze 2D surfaces and map materials, Radpour then detailed how these same techniques can be applied to 3D polychrome objects, with special attention to visible-induced visible and NIR luminescence.

To demonstrate the workflow she developed for creating 3D models featuring the luminescence of pigments and conservation materials, Radpour walked through the development of one such model for a terracotta funerary Canosa Head Vase (81.AE.157) from the J. Paul Getty Museum Collection, which she examined while a student at UCLA. Her methodology followed three steps: image capture, image pre-processing, and 3D model building.

Image capture used analytical imaging techniques common in conservation and cultural heritage laboratories. The head vase was placed on a rotating surface with a stationary camera, so that different angles of the object were captured as it turned. Four sets of images were collected: color, visible-induced visible luminescence, visible-induced near-infrared luminescence, and UV-induced visible luminescence.

Pre-processing was done in Photoshop, integrating captures from the different image sets into more comprehensive composite images. Because the luminescence of a single pigment often gives a very incomplete picture of the object, too sparse on its own to build a 3D model from, Radpour used grayscale images of the object as a base layer over which the pigment luminescence could be overlaid. Finally, Agisoft Photoscan (now Metashape) was used to turn the captured images into 3D models.
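The compositing step can also be reproduced outside Photoshop. Below is a minimal sketch in Python using the Pillow library, assuming one grayscale base image and one luminescence capture taken from the same viewing angle; the file names and the choice of a screen blend are illustrative assumptions, not details from the talk.

```python
from PIL import Image, ImageChops

# Hypothetical file names; the actual capture sets and naming scheme
# from the talk are not specified.
BASE = "visible_012_gray.tif"    # grayscale image of the object (base layer)
LUM = "uv_luminescence_012.tif"  # UV-induced visible luminescence capture

# Load the grayscale base and the luminescence capture for one view angle,
# in the same mode so they can be blended directly.
base = Image.open(BASE).convert("RGB")
lum = Image.open(LUM).convert("RGB")

# Screen-blend the luminescence over the base: dark (non-luminescing)
# regions leave the base intact, while luminescing pigment shows through.
composite = ImageChops.screen(base, lum)
composite.save("composite_012.tif")
```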
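The model-building step can likewise be scripted. The sketch below runs a standard photogrammetry pipeline over the composite images using the Metashape Python API; the talk did not describe scripting (the work predates Metashape’s current tooling), so this is an illustrative assumption, and the file names are hypothetical.

```python
import Metashape  # Agisoft Metashape Professional scripting API

# New project with one chunk holding the turntable capture set.
doc = Metashape.Document()
chunk = doc.addChunk()

# Add the pre-processed composites (hypothetical names; in a turntable
# setup the static background is typically masked out before alignment).
chunk.addPhotos([f"composite_{i:03d}.tif" for i in range(36)])
doc.save("head_vase_luminescence.psx")

# Standard pipeline: align cameras, build geometry, then texture the mesh
# so the luminescence composites map onto the model surface.
chunk.matchPhotos()
chunk.alignCameras()
chunk.buildDepthMaps()
chunk.buildModel(source_data=Metashape.DepthMapsData)
chunk.buildUV()
chunk.buildTexture()

doc.save()
```

Because photogrammetric alignment only needs consistent features across views, the same pipeline can in principle be run directly on luminescence captures, which is essentially the experiment Radpour described next.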
Of special interest to Radpour were the madder lake and Egyptian blue pigments on the surface, both often found in ancient polychromy. A color model and an integrated pigment luminescence model were made, but as an experiment, one of her 3D models was built solely from the UV-induced luminescence captures, without overlay onto the grayscale images. While she acknowledged that the technique might not be effective for all objects, the luminescence-only model gave a reasonably complete reconstruction of the object.
After leaving UCLA, Radpour continued her work at the Metropolitan Museum of Art, imaging another terracotta figurine (11.212.16). Testing new modeling software, RealityCapture, she again successfully constructed a UV-induced luminescence 3D model, this time without image pre-processing. Granted access to a multispectral camera with video-rate acquisition, she was also able to examine the same object’s Egyptian blue application across the surface by exciting and recording its luminescence in real time.
As Radpour continues her imaging work, she strives to share her knowledge and workflows with the field so that 3D image capture and reconstruction, with artist and conservation materials mapped across an object’s surface, can become common practice in museums. Beyond the immediate analytical benefits, the 3D capture can serve as an interactive record of an object in its current state.
Summarized by Madison Whitesell
Virtual Attendance: 69 participants