By the end of 2013, the J. Paul Getty Museum – a Los Angeles art museum more commonly known as the Getty – had converted approximately 10,000 works of art into high-resolution images, available for public use without fees or restrictions.
Roughly 4,600 of those images came from the museum itself in August, while the other 5,400 were released by the Getty Research Institute – a program funded by the J. Paul Getty Trust – in October, as part of the Getty’s Open Content Program.
The image releases and new open content policy are, in part, in keeping with the Getty’s founding purpose, according to Getty President and CEO James Cuno. But they are also a matter of practicality and inevitability, he admitted.
“The Web is such that people will get the images and do with them what they wish, and it’s impractical to police the Internet,” Cuno said. “So we wanted to recognize that and be certain that we had the best quality images available and with the most accurate information attached to them.”
But what may be more interesting than the “why” of the mass image release is the “how.”
To start, the Getty previously earned revenue from licensing the images, revenue it gave up by instituting its Open Content Program. But Cuno said that practice was really a double-edged sword.
“There’s revenue and there’s costs associated with it,” Cuno said of licensing. “With the elimination of the fees, we’ve also eliminated the costs. We can have individuals doing other things, more important things than filling forms and policing the follow-up on the use of the images. The revenue wasn’t sufficiently large to discourage us from doing it.”
In addition to paving a new trail by abandoning old practices, the Open Content Program also meant converting 10,000 pieces of art into high-resolution digital images, having the infrastructure to distribute them, and tracking which works are in the public domain and which the Getty owns the rights to.
The Getty isn’t the only museum bringing its artwork into the digital realm. In fact, last year, Fujifilm Europe announced a project called Relievo, in which the company’s Belgium NV division uses its technology to make high-quality reproductions of original paintings for third parties. Among those third parties is the Van Gogh Museum, which has asked Fujifilm to recreate Van Gogh’s oil paintings.
“What is very different from 2D image reproductions is that we are able to match a color-managed 2D hi-res image with the actual structure of the original painting,” said Daniela Levy, who heads up communications for Fujifilm. “So especially with impasto painting techniques, our Relievo technology can reproduce the exact brush strokes of the original and reproduce this 3D result in exact registration with the 2D image data.”
But Levy admits the company is rather “conservative” about how much it is willing to share about the process, which she would only describe as a combination of hardware, software, and some manual handling by a team of operators.
“We have put seven years of research and development in the whole process to achieve the results we have now,” Levy said. “So unfortunately, we cannot share much more.”
But Stanley Smith, the Getty’s head of collection information, isn’t so coy about his organization’s process. He said it often starts with high-end camera backs that capture 80-megapixel images and cost upwards of $50,000.
“They have much bigger sensors and therefore a much higher quality of image, a lot better tonal range, much better color accuracy,” Smith said.
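To put those captures in perspective, a back-of-the-envelope calculation – using assumed figures, since the Getty hasn’t published its file sizes – shows the kind of data an 80-megapixel back produces:

```python
# Rough, illustrative math for an 80-megapixel capture. The bit depth and
# totals are assumptions for scale, not figures published by the Getty.
pixels = 80_000_000        # 80-megapixel sensor
channels = 3               # RGB
bit_depth = 16             # high-end backs typically record 14-16 bits per channel

bytes_per_image = pixels * channels * (bit_depth / 8)
print(f"One capture: ~{bytes_per_image / 1e9:.2f} GB uncompressed")   # ~0.48 GB

works = 10_000             # the roughly 10,000 open-content works
print(f"Whole release: ~{works * bytes_per_image / 1e12:.1f} TB")    # ~4.8 TB
```

At roughly half a gigabyte per uncompressed capture, even the initial 10,000-image release represents terabytes of data before compression.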
The museum also uses specialized scanners and Better Light digital scanning camera backs, which are designed for large-format, high-resolution photography.
“We [also] have a device called a Cruse scanner, which is a very high-end scanner that can scan large pieces of artwork in one pass,” Smith said.
While Smith noted many museums get by simply using high-end Canon or Nikon equipment – achieving what he calls “very, very good” results – the Getty uses its more specialized equipment in pursuit of the best quality possible.
“We go through a very rigorous calibration process to make sure when we do capture an image of a piece of artwork it’s really as close to the original as it possibly can be,” Smith said, noting many museums have collaborated through a Listserv in trying to find the best methods of converting physical artwork into digital images. “That goal, while it might seem kind of easy, really is difficult to capture exactly the colors and tonality of art. Our eyes see artwork very different than a camera does. The idea is to bring those together and optimize that process as best you can.”
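The calibration Smith describes is commonly done by photographing a reference target with patches of known color and measuring how far the captured values drift from those references. The sketch below illustrates that general idea with the simple CIE76 color-difference formula; it is a minimal illustration with hypothetical patch values, not the Getty’s actual workflow:

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colors (the CIE76 formula)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical reference values for two target patches (L*, a*, b*) and the
# values measured from a calibration shot -- illustrative numbers only.
reference = {"dark skin": (37.99, 13.56, 14.06), "blue": (28.78, 14.18, -50.30)}
measured  = {"dark skin": (38.45, 13.10, 14.80), "blue": (29.10, 15.02, -49.55)}

for patch, ref_lab in reference.items():
    de = delta_e_cie76(ref_lab, measured[patch])
    # A common rule of thumb: a delta E under ~2 is barely perceptible.
    print(f"{patch}: delta E = {de:.2f} -> {'ok' if de < 2 else 'recalibrate'}")
```

In practice, imaging labs use more sophisticated difference formulas (such as CIEDE2000) and dozens of patches, but the principle is the same: quantify the gap between what the camera recorded and what the art actually looks like, then adjust until that gap is negligible.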
On the back end, the Getty has simply hosted the images in the cloud for its first two releases, but that could change, Smith said.
“We have a very robust IT infrastructure here,” Smith said. “The way we launched this program, the back end is probably going to change a bit. But for the initial launch, we took advantage of the robustness of a file server to make sure that we had enough bandwidth and could accommodate the downloads if they got voluminous, which they did.”
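In essence, what Smith describes is static file serving with enough headroom for download spikes. Purely as a toy illustration – the Getty’s actual infrastructure isn’t public – serving a directory of images over HTTP can be sketched with Python’s built-in server:

```python
# Toy sketch of the "file server" approach: serve a directory of
# high-resolution images over HTTP. Python's built-in server is fine for a
# demo; a production setup would sit behind hardened web servers and a CDN
# to absorb traffic spikes like the one the Getty saw.
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# The directory name here is invented for illustration.
handler = partial(SimpleHTTPRequestHandler, directory="open-content-images")
ThreadingHTTPServer(("0.0.0.0", 8080), handler).serve_forever()
```

The point is simply that the first releases leaned on straightforward, well-provisioned file serving rather than a purpose-built delivery system.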
How voluminous? After the release of the Getty’s initial batch, the museum claimed that traffic to the Getty Search Gateway – the tool used to access the Open Content Program – went from an average of 200 visits per day to a peak of 22,000. The first two months alone saw more than 100,000 downloads of Open Content images, compared to an average of only 121 images per month under the old system.
The first two batches of images were the result of the work of nearly 10 people at the Getty, who spent roughly a month and a half making it happen. The bulk of that time was not spent creating images, though, as thousands were already in the Getty’s database following the organization’s involvement in the Google Cultural Institute’s Art Project.
“Really, the main constraint for us was to identify the images that were public domain and didn’t have any copyright issues,” Smith said. “We went a bit deeper from every curatorial department to get more representative images and ended up with that group.”
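That winnowing step amounts to filtering the collection’s metadata by rights status. A hypothetical sketch of such a filter – the field names and records are invented, since the Getty’s internal schema isn’t public – might look like this:

```python
# Hypothetical sketch of a rights-clearance filter like the one Smith
# describes. Records and field names are invented for illustration.
records = [
    {"id": "obj-001", "title": "Landscape study", "rights": "public domain"},
    {"id": "obj-002", "title": "Portrait drawing", "rights": "getty-owned"},
    {"id": "obj-003", "title": "Press photo, 1968", "rights": "third-party"},
]

# Only works in the public domain, or whose rights the Getty itself holds,
# are candidates for open-content release.
RELEASABLE = {"public domain", "getty-owned"}
candidates = [r for r in records if r["rights"] in RELEASABLE]

for r in candidates:
    print(f'{r["id"]}: {r["title"]} -> eligible for release')
```

The hard part, of course, is not the filter itself but populating those rights fields accurately in the first place.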
Smith called what was released in the first two batches “fairly low-hanging fruit” and noted there will be more to come.
“Our goal is just to continually add more,” Smith said, noting photography is the most underrepresented group at the moment. “As you might suspect, compared to the other curatorial departments, photography is very young media. So there are often more copyright restraints on that material. It’s a little more complicated. There are some things we have that are in the public domain, or there’s a time at some point they lapse into the public domain, but there could be personalities in the images, and as I’m sure you know there’s prohibitions on that, as well. We’re taking a little more time to flesh out the photography section, but I think we’ll fairly soon be putting a lot more photography up there.”