By Danny Freedman
When art researchers peek beneath the paint of the world’s masterpieces, they sometimes find a treasure trove of information: earlier drafts, doodles, notes and would-be works of art.
Now a GW lab, in collaboration with the National Gallery of Art in Washington, D.C., is developing software that would make the process much easier and far more precise, and is attracting the interest of other premier art museums.
Researchers digitally peel back the layers of a painting by viewing it through parts of the electromagnetic spectrum that are beyond our vision (like X-rays and infrared), since the various bands can reveal different secrets. To then find small differences between the images, researchers align them by picking reference points common to each image—ideally a deep physical crack, or, failing that, any feature visible in both.
Historically, this process (called “registration”) has been manual, labor-intensive and limited in accuracy. Typically it’s done by sight—setting images side by side and eyeballing the changes—or with commercial photo software, but even then “you could only do that so well, to a certain level of precision,” says John Delaney, an imaging scientist at the National Gallery of Art who has been collaborating with GW doctoral student Damon Conover and his mentor, engineering professor Murray Loew.
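In software terms, that manual step boils down to estimating a geometric transform from the hand-picked reference points and resampling one image into the other’s coordinate frame. The sketch below shows the idea with OpenCV’s affine fit; the file names and point coordinates are purely illustrative and are not taken from the project.

```python
# A minimal sketch of manual, point-based registration: two images of the same
# painting (say, a color photo and an X-ray) and a handful of hand-picked
# corresponding points. File names and coordinates are hypothetical.
import cv2
import numpy as np

color = cv2.imread("painting_color.png", cv2.IMREAD_GRAYSCALE)
xray = cv2.imread("painting_xray.png", cv2.IMREAD_GRAYSCALE)

# (x, y) locations of the same features (a crack, a nail hole, a corner of the
# panel) clicked by hand in each image.
pts_xray = np.float32([[120, 85], [410, 92], [398, 505], [131, 512]])
pts_color = np.float32([[118, 80], [415, 90], [402, 510], [128, 515]])

# Least-squares affine transform mapping X-ray coordinates onto the photo.
matrix, _ = cv2.estimateAffine2D(pts_xray, pts_color)

# Resample the X-ray into the photo's coordinate frame so the two can be
# compared point by point.
aligned_xray = cv2.warpAffine(xray, matrix, (color.shape[1], color.shape[0]))
cv2.imwrite("xray_registered.png", aligned_xray)
```

The quality of the result depends entirely on how well those few points were chosen and clicked, which is exactly the limitation the quote above describes.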
The new software being developed by Mr. Conover, who is pursuing a PhD in the electrical and computer engineering department, uses a mix of new and existing algorithms to automate the registration process, offering art researchers a powerful and precise electronic eye to do the heavy lifting.
Images made at these different wavelengths are difficult to align, especially since they may show entirely different pictures and vary in size.
The software picks several thousand points that seem promising at various scales—basically, at different levels of zooming-in and zooming-out—and then pares down the points to a few very good ones. These act as guideposts for laying one type of digital image over another and, perhaps by the end of the summer, even a third. So a color photo of a painting could be analyzed directly against, for example, an X-ray and an infrared image.
And the goal is to align the images so precisely that every pixel matches up.
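The article doesn’t name the specific algorithms behind that pipeline, so the sketch below stands in for it with off-the-shelf OpenCV pieces: SIFT keypoints, which are detected across multiple scales; a ratio test that pares thousands of candidate matches down to the strongest ones; and a RANSAC-fitted homography that lays one image over the other. The file names are hypothetical, and the GW software’s own detectors and transforms may well differ.

```python
# An illustrative stand-in for the automated pipeline described above, built
# from standard OpenCV pieces (SIFT keypoints, ratio test, RANSAC homography).
import cv2
import numpy as np

photo = cv2.imread("painting_color.png", cv2.IMREAD_GRAYSCALE)       # hypothetical files
infrared = cv2.imread("painting_infrared.png", cv2.IMREAD_GRAYSCALE)

# Detect several thousand candidate keypoints across multiple scales.
sift = cv2.SIFT_create(nfeatures=5000)
kp_photo, des_photo = sift.detectAndCompute(photo, None)
kp_ir, des_ir = sift.detectAndCompute(infrared, None)

# Match descriptors between the two images, then pare the candidates down to
# the strongest matches with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(des_ir, des_photo, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

# Fit a homography with RANSAC so a few bad matches can't skew the result,
# then warp the infrared image onto the color photo's pixel grid.
src = np.float32([kp_ir[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_photo[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned_ir = cv2.warpPerspective(infrared, H, (photo.shape[1], photo.shape[0]))
cv2.imwrite("infrared_registered.png", aligned_ir)
```

In practice, registering an X-ray against a color photo is harder than this sketch suggests, because the two may share few visible features; that difficulty is part of what the GW software is built to handle.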
(In the photos above, for example, the image on the left shows a full-color portion of the painting “Death and the Miser” that the scientists registered with an X-ray image using the software. In the middle, the color photograph is rendered in grayscale for easier comparison against the X-ray, on the right. The green dots are the refined reference points selected by the software.)
The automated process could take anywhere from a few minutes to a few hours, depending on the size of the art being analyzed. But since the software’s doing the work “it’s something you could start and go off and do something else,” says Mr. Conover.
Finding underlying sketches and the subtle shifts from earlier drafts helps inform scholars’ understanding of an artist’s intent. “There are a lot of good clues in there for conservators and art historians,” says Dr. Delaney. (In the case of Pablo Picasso’s “The Tragedy,” Dr. Delaney’s lab used types of X-ray and infrared imaging to find a host of sketches underneath the paint; they suggest Picasso may have used that wood panel several times over the course of four years until the final product emerged.)
This entrée into the world of art has been a surprising tangent for Mr. Conover and Dr. Loew, who heads GW’s biomedical engineering program. Their current work stems from registration techniques they used for bringing together different medical images, like an MRI and a CT scan.
Dr. Loew was discussing the work a few years ago with Dr. Delaney, of the National Gallery of Art, who suggested testing the technique on a painting.
Mr. Conover and Dr. Loew were able to register a portion of the 15th-century piece “Death and the Miser,” by Hieronymus Bosch, comparing an X-ray image with a color photograph of the painting. That showed a few known differences, such as a shift of the miser’s hand between the draft and final versions, but also a few unknown subtleties, like the build-up of white paint used in forming the miser’s face.
The conservator “really got a kick out of that,” says Dr. Delaney.
Mr. Conover—who says the notion of working in the art world had “seemed pretty fanciful”—now finds himself in a summer research position at the National Gallery of Art. By summer’s end their goal is to be able to register three images together, and Mr. Conover is hoping to have built a user interface so that anyone could make use of the software.
That advance couldn’t come soon enough: Dr. Delaney says there already has been interest in the software from London’s National Gallery and other art museums.
Photos on the far left and far right above are courtesy of the Samuel H. Kress Collection, National Gallery of Art, Washington, D.C. Center photo is courtesy of Ms. Lorene Emerson.