3D Conversions ‘Can Be Very Good,’ Post-Production Guru Says
LONDON -- Converting 2D content to 3D “can never be as good as original 3D, but it can be very good,” Martin Brennand of specialist post-production company Imagineer Systems told the British Kinematograph Sound and TV Society at a Monday briefing. “There is a huge back catalog of material that is good for conversion and the value of sales outweighs the cost of the work,” Brennand said.
Alice in Wonderland was a hybrid 2D/3D production because director Tim Burton “wanted to use standard cameras,” Brennand said. “But he shot with 3D in mind. James Cameron wants to convert Titanic to 3D, the Harry Potter series is already being converted, the 1922 version of Nosferatu has now been converted and work is under way on Metropolis. The Star Wars movies are being converted now, as is Beauty and the Beast. Some of these conversions will go straight to Blu-ray 3D.” Toy Story and Toy Story 2 are now being “re-rendered in 3D,” Brennand said. “It’s worth noting that when these movies were first made, it took over an hour to render each frame and now it takes less than a minute. But conversion is still very time-consuming.”
In basic 2D-to-3D conversion, the original “flat plate” image is used for the left eye, while the right-eye image is created by analyzing each flat frame and looking for depth clues, Brennand said. Key objects in the scene are then isolated and physically moved slightly sideways to create a right-eye image, he said. Isolating the key objects is done by “rotoscoping,” manually drawing a mask around the object using computer graphics software, he said. Blank or “occluded” areas left in the picture when objects have been moved must then be filled in, he said. “Much of the rotoscoping work is now being farmed out to India,” he said. “Practical difficulties” abound with rotoscoping, Brennand said. For example, individual hairs often “have to be painted in,” he said. “Reflections in car windows are also hard to deal with.”
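The shift-and-fill step Brennand describes can be sketched as a per-pixel horizontal displacement driven by a depth map. This is a minimal illustration of the idea only, not Imagineer's actual pipeline; the function name `shift_right_eye`, the depth map, and the `max_shift` parameter are all assumptions made for the example:

```python
import numpy as np

def shift_right_eye(flat, depth, max_shift=8):
    """Create a right-eye view from the flat (left-eye) plate by
    shifting each pixel sideways in proportion to its estimated
    depth (0.0 = far, 1.0 = near). Gaps left behind are the
    'occluded' areas that must later be filled in."""
    h, w = flat.shape[:2]
    right = np.zeros_like(flat)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            shift = int(depth[y, x] * max_shift)  # nearer = bigger shift
            nx = x + shift
            if 0 <= nx < w:
                right[y, nx] = flat[y, x]
                filled[y, nx] = True
    return right, ~filled  # new view plus mask of occluded holes
```

The returned hole mask corresponds to the blank areas Brennand mentions: where nothing landed after the shift, an artist (or an inpainting pass) has to paint the background back in.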
Brennand then demonstrated how a computer model of a towering building can be constructed from a mesh frame, similar to a computer-aided design model, with the image then projected onto the frame so the detail wraps around it. This is the modern equivalent of projecting movie film of a talking head onto a dummy head, he said. “Making models is more difficult with organic objects than buildings,” Brennand said. “For Alice, they built a physical computer model of her head and projected her face onto it. Even then they had to clean up the hair.”
Another trick Brennand demonstrated is to manually distort a face by making the nose look brighter to pull it out from other features of the head. “Sometimes we can make use of the old Pulfrich effect, where visual lag between the left and right eyes creates depth from 2D,” he said. “This used to be done by slightly dimming the image to one eye, with a neutral filter. We now do it by offsetting the right eye image by one frame. But it only works on panning shots.”
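The one-frame offset Brennand describes is simple to express: the right-eye stream is the same footage delayed by a single frame. A minimal sketch (the function name `pulfrich_pairs` is invented for the example):

```python
def pulfrich_pairs(frames):
    """Pair each frame (shown to the left eye) with the previous
    frame (shown to the right eye). On a steady horizontal pan the
    one-frame temporal offset reads as horizontal parallax, so the
    brain perceives depth; on a static shot both eyes see identical
    images and the effect vanishes -- hence 'only works on panning
    shots'."""
    return [(frames[i], frames[i - 1]) for i in range(1, len(frames))]
```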
Ironically, anaglyph, an old and discredited 3D technique, is proving invaluable as a tool for 3D conversion, Brennand said. “We find the easiest way to check the depth effect in the studio without having to wear glasses or use lenticular 3D screens is to work with anaglyph 3D,” he said. “The left and right images show red and green fringing where there is depth -- and the bigger the fringe, the greater the depth.”
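A red/green anaglyph check image of the kind Brennand describes can be composed by taking the red channel from the left eye and the green channel from the right. This is a simplified sketch, not Imagineer's tooling; zeroing the blue channel is an assumption made here to keep the check image purely red/green:

```python
import numpy as np

def red_green_anaglyph(left, right):
    """Compose a red/green anaglyph from left/right RGB frames.
    Where the two views disagree horizontally (i.e. where there is
    depth), red and green fringes appear -- the wider the fringe,
    the greater the depth."""
    out = np.zeros_like(left)
    out[..., 0] = left[..., 0]   # red channel from the left eye
    out[..., 1] = right[..., 1]  # green channel from the right eye
    return out                   # blue left at zero for a pure R/G check
```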
Imagineer has found that conversion pitfalls most apt to cause eyestrain include “retinal rivalry.” That’s when there are brightness or color grading differences between the left and right eye images, Brennand said. “You really do need to see the results in a cinema on a large screen, to check for depth continuity and jarring caused by scene changes. Telephoto shots can look very cardboardy. And the brain treats all distant objects over 50 meters away as 2D anyway. All the time, we have to remember that 55 percent of the population cannot see 3D properly.”
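The brightness and color-grading mismatches behind retinal rivalry can be screened for automatically before the cinema check. A rough sketch (the function name `rivalry_risk` and the 5 percent tolerance are assumptions, not a published threshold):

```python
import numpy as np

def rivalry_risk(left, right, tol=0.05):
    """Flag potential retinal rivalry by comparing per-channel mean
    levels of the left- and right-eye frames (8-bit RGB assumed).
    Grading differences above `tol` (as a fraction of full scale)
    are worth checking on a large screen."""
    diff = np.abs(left.mean(axis=(0, 1)) - right.mean(axis=(0, 1)))
    return diff / 255.0 > tol  # one flag per color channel
```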