By Chris Chinnock, Insight Media
Many, myself included, think light field acquisition, processing and display is the future of natural 3D visualisations. One method for light field image acquisition is to place a micro-lens array in front of an image sensor. This is the approach used by Lytro in their consumer and cinema-grade cameras. The trade-off for acquiring all of these different viewpoints from the micro-lens array is a big reduction in image resolution. Now a new company, Wooptix, has a different approach to light field acquisition that avoids this resolution penalty. They will be at CES 2020 if you want to meet with them to learn more.
So what is their secret? Instead of a micro-lens array, they place a liquid lens in the optical chain in front of the sensor. By rapidly changing the lens's focal power, they can step the plane of focus through the scene and capture multiple image planes at 30 frames per second, each at the sensor's full resolution.
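To get a feel for what such a sweep implies, here is a rough Python sketch of a sequential focal-stack capture loop. The `LiquidLens` and `Sensor` objects and their methods are purely illustrative assumptions on my part; Wooptix has not published an API, and the timing arithmetic simply assumes the planes are captured back-to-back.

```python
# Hypothetical focal-sweep capture loop. The lens/sensor objects and their
# methods are illustrative only -- Wooptix has not published an API.

N_PLANES = 6      # focal planes per light field stack (per the prototype)
STACK_RATE = 30   # complete stacks per second

# If the planes are captured sequentially, the sensor must run at
# N_PLANES * STACK_RATE = 180 frames/s, leaving roughly 5.5 ms per plane
# for the liquid lens to settle and the sensor to expose.

def capture_focal_stack(lens, sensor, focal_powers):
    """Sweep the liquid lens through a list of optical powers and grab
    one full-resolution frame at each resulting focal plane."""
    stack = []
    for power in focal_powers:
        lens.set_optical_power(power)   # liquid lenses settle in milliseconds
        stack.append(sensor.grab_frame())
    return stack  # list of N_PLANES full-resolution images
```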
The founders of the company come from an astrophysics background, where liquid lenses are used to compensate for atmospheric distortions in the optical path of telescopes looking at distant space objects. Now they are applying the same technology to create a more elegant light field capture solution.
However, success here will require more than a hardware solution. The real IP is in processing these multi-planar images to create depth maps and enable traditional light field functionality such as changing the depth of focus, depth plane, point of view, field of view, and more.
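To illustrate the kind of processing involved, here is a minimal depth-from-focus sketch in Python (NumPy/SciPy). It assumes a registered grayscale focal stack and uses a simple Laplacian-energy sharpness measure; Wooptix's actual algorithms are proprietary and no doubt far more sophisticated.

```python
import numpy as np
from scipy import ndimage

def depth_from_focus(stack):
    """Estimate a per-pixel depth index from a registered focal stack.

    stack: float array of shape (n_planes, H, W), grayscale images taken
    at successively farther focal planes.
    Returns (depth_index, all_in_focus): the per-pixel index of the
    sharpest plane, and a synthetic all-in-focus image assembled from it.
    """
    # Sharpness measure: local energy of the Laplacian. In-focus regions
    # have strong high-frequency content, so a larger response means the
    # pixel lies near that plane's focal distance.
    sharpness = np.stack([
        ndimage.uniform_filter(ndimage.laplace(plane) ** 2, size=9)
        for plane in stack
    ])
    depth_index = np.argmax(sharpness, axis=0)  # (H, W) plane indices
    # Pick each pixel from its sharpest plane: refocus "everywhere at once".
    all_in_focus = np.take_along_axis(
        stack, depth_index[None, ...], axis=0)[0]
    return depth_index, all_in_focus
```

Mapping each plane index to its known focal distance turns the index map into a metric depth map, and the same stack supports after-the-fact refocusing to any single plane.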
So far, the company has built two prototype systems that can capture six planes of image data at FHD resolution and 30 fps. Processing on an accompanying computer allows real-time manipulation of the images. The picture below shows the camera along with a depth map of the objects in the background.
The applications for this are obvious. One is the mobile phone market, where multiple cameras are the current trend; maybe these can be replaced with a single-camera solution that offers nearly the same functionality. How about medical endoscopes or automotive cameras? There are already several automotive HUD companies that can display information in multiple depth planes (as will be showcased at CES as well), so this could be a very nice complementary capture solution. And several AR/VR headsets also use the multiple-depth-plane approach for visualisation, so that is potentially another good fit for this capture data.