Holograms that offer a 3D view of objects provide a level of detail unattainable with typical 2D images. Due to their ability to provide a realistic and immersive experience of 3D objects, holograms hold enormous potential for use in various fields, including medical imaging, manufacturing, gaming, entertainment and virtual reality. They have long held the promise of immersive 3D experiences, but generating them remains so difficult that they have yet to become commercially viable.

Researchers from Chiba University in Japan are now turning to deep learning to generate three-dimensional holograms from coloured two-dimensional images. The team uses neural networks to transform ordinary 2D colour images into 3D holograms, greatly simplifying hologram creation.

Details

Holograms are traditionally constructed by recording the three-dimensional data of an object and the interactions of light with it. However, this technique is highly computationally intensive and requires special cameras to capture the 3D images.
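To see why the conventional route is computationally intensive, consider the standard point-cloud method of computer-generated holography, in which the hologram is built by superposing a spherical wave from every object point at every hologram pixel. The sketch below is illustrative only: the resolution, wavelength and scene are toy values, and this is a generic textbook formulation, not the specific pipeline used by the Chiba team.

```python
import numpy as np

def point_cloud_hologram(points, amplitudes, wavelength=633e-9,
                         pixel_pitch=8e-6, res=(64, 64)):
    """Compute a complex hologram by superposing spherical waves
    emitted from each 3D object point (conventional CGH approach).
    Cost scales as O(num_pixels * num_points), which is what makes
    the traditional method expensive at display resolutions."""
    k = 2 * np.pi / wavelength  # wavenumber of the illumination
    ys, xs = np.indices(res)
    # physical coordinates of each hologram pixel, centred on the plane
    x = (xs - res[1] / 2) * pixel_pitch
    y = (ys - res[0] / 2) * pixel_pitch
    field = np.zeros(res, dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        # distance from this object point to every hologram pixel
        r = np.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
        field += (a / r) * np.exp(1j * k * r)  # spherical wavefront
    return field

# A tiny scene: two object points at slightly different depths
pts = [(0.0, 0.0, 0.05), (1e-4, -1e-4, 0.06)]
holo = point_cloud_hologram(pts, [1.0, 0.8])
print(holo.shape)
```

Even this toy example evaluates a square root and a complex exponential for every pixel-point pair; a full-resolution display with millions of pixels and a dense point cloud multiplies that cost enormously, which is the bottleneck the deep-learning approach sidesteps.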

Deep-learning methods can instead create holograms directly from 3D data gathered with RGB-D cameras, which capture an object's colour and depth information. This approach circumvents many of the computational challenges of the conventional method and makes hologram generation considerably easier.

Professor Tomoyoshi Shimobaba's team at Chiba University's Graduate School of Engineering streamlines hologram generation even further by producing holograms directly from regular 2D colour images captured with ordinary cameras.

“There are several problems in realising holographic displays, including the acquisition of 3D data, the computational cost of holograms, and the transformation of hologram images to match the characteristics of a holographic display device. We undertook this study because we believe that deep learning has developed rapidly in recent years and has the potential to solve these problems,” said Shimobaba.

Neural-network-based approach

The proposed approach uses three deep neural networks (DNNs) to transform a regular 2D colour image into data that can be used to display a 3D scene or object as a hologram. The first DNN takes a colour image captured with a regular camera as input and predicts the associated depth map, providing information about the 3D structure of the image. Both the original RGB image and the depth map created by the first DNN are then fed to the second DNN to generate a hologram. Finally, the third DNN refines the hologram generated by the second DNN, making it suitable for display on different devices.
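The three-stage pipeline described above can be sketched as follows. The placeholder functions below merely stand in for the trained networks — the actual architectures, weights and device-refinement step are not public details here — so each "DNN" is replaced by a trivial hypothetical computation that preserves the data flow and shapes:

```python
import numpy as np

def predict_depth(rgb):
    """Stage 1 stand-in: estimate a depth map from an RGB image.
    (A dummy luminance average, NOT the trained depth-prediction DNN.)"""
    return rgb.mean(axis=-1)  # (H, W) pseudo-depth map

def generate_hologram(rgb, depth):
    """Stage 2 stand-in: map RGB + depth to a complex hologram.
    (Encodes depth as phase and brightness as amplitude, for shape only.)"""
    phase = 2 * np.pi * depth / (depth.max() + 1e-8)
    amplitude = rgb.mean(axis=-1)
    return amplitude * np.exp(1j * phase)

def refine_hologram(holo):
    """Stage 3 stand-in: adapt the hologram to a display device, e.g.
    reduce it to phase only, as a phase-type modulator would require."""
    return np.angle(holo)

rgb = np.random.rand(64, 64, 3)        # a stand-in 2D colour image
depth = predict_depth(rgb)             # DNN 1: image -> depth map
holo = generate_hologram(rgb, depth)   # DNN 2: image + depth -> hologram
display_holo = refine_hologram(holo)   # DNN 3: hologram -> display-ready form
```

The key point the sketch illustrates is the chaining: no 3D capture device appears anywhere in the pipeline, since the depth information is inferred from the ordinary colour image in the first stage.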

In addition to working with images from ordinary cameras, the researchers found that their approach processes data and generates holograms much faster than running the conventional computation on a state-of-the-art graphics processing unit.

“Another noteworthy benefit of our approach is that the reproduced image of the final hologram can represent a natural 3D reproduced image. Moreover, since depth information is not used during hologram generation, this approach is inexpensive and does not require 3D imaging devices such as RGB-D cameras after training,” said Shimobaba.

The approach could find applications in head-up and head-mounted displays for generating high-resolution 3D images. Likewise, it could revolutionise in-vehicle holographic displays that present information to drivers in 3D. The proposed approach is thus expected to pave the way for ubiquitous holographic technology.