The Light Field Camera ~ Shoot Now, Focus Later

What you are looking at is not four photographs taken with different focus settings. It is a single shot of the light field, which can be refocused however you want (photos Lytro).

A new camera that is set to appear on the market by the end of the year is going to “revolutionize photography.” This claim, by the young US company responsible for developing the camera, may sound a little over-ambitious but, after years of research, the prototype already exists.

“Our mission is to change photography forever,” says Ren Ng, the camera’s inventor and the founder and CEO of the Silicon Valley-based company Lytro. “Ordinary cameras will become a thing of the past.”

If Lytro is to be believed, users of this new camera will no longer have to worry about adjusting exposure time, aperture or focus, nor will lighting conditions present any problem. And if you think automation will do all the hard work before you shoot, you’re quite mistaken: this camera simply does not have these settings!

The Lytro camera is ready to capture any scene less than a second after being switched on. There’s no delay between sleep mode and the moment of shooting, no need to click through confusing menus, no shutter lag and no need to half-press the button to set the autofocus. The Lytro simply does away with all of that. Just shoot now; the miracles happen later!

The camera’s advanced technology is little short of magic: each picture takes up a lot less memory capacity than on a traditional camera, and it allows you to adjust the focus on any particular point of interest in the picture, after the event.

Using the file you’ve obtained after shooting, the special software lets you alter the depth of field and focus anywhere in the frame, as you wish. Simply click on the desired object in the foreground, the middle distance or the background and it comes into sharp focus, and objects in the other planes become blurred.

What’s more, this blurring is not the artificial kind you get from image-editing software such as Photoshop. The images that emerge into sharpness and drift back into the fog are all real. Everything works as if you had shot several dozen, or even hundreds, of pictures in the same split second, each focused at a different distance.

Check out the Lytro picture gallery and see for yourself how the refocusing happens.

The secret of the technology lies in the way the new camera captures the entire light field of the scene being photographed. In simplified terms, the light field determines the look of the picture. Imagine it as a series of rays passing from all points of the environment in all directions.

A simplified representation of the light field (Figure Lytro).

The Lytro camera is able to store information separately about all the rays of light entering the lens, from different distances and angles.
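Storing rays "from different distances and angles" corresponds to what researchers call the two-plane parameterization of a light field. Here is a minimal sketch of that idea in Python; the array sizes and names (`N_U`, `light_field`, and so on) are illustrative toys, not Lytro's actual data format:

```python
# Two-plane parameterization of a light field (illustrative sketch).
# A ray is indexed by where it crosses the lens plane (u, v) and
# where it lands on the sensor plane (s, t); the camera records one
# intensity per (u, v, s, t) combination instead of summing them.

N_U, N_V = 3, 3   # angular samples: positions across the lens aperture
N_S, N_T = 4, 4   # spatial samples: positions on the sensor

# A 4D table of ray intensities, initialized to zero.
light_field = [[[[0.0 for _ in range(N_T)]
                 for _ in range(N_S)]
                for _ in range(N_V)]
               for _ in range(N_U)]

def conventional_pixel(lf, s, t):
    """What an ordinary sensor would record at (s, t): the sum over
    all lens positions, discarding the directional information."""
    return sum(lf[u][v][s][t] for u in range(N_U) for v in range(N_V))
```

Keeping the angular dimensions (u, v) instead of summing them away is precisely the extra information that makes after-the-fact refocusing possible.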

As a result, data captured by the CCD matrix can be processed by the accompanying software, enabling you not only to alter the final picture’s depth of field but also to shift the perspective, to a certain extent, and even move seamlessly from a flat 2D image to one that is virtually 3D. It sounds incredible, doesn’t it? But this is what Lytro is promising to bring to the market.
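The refocusing step of such processing is usually described as "shift-and-add": each sub-aperture view (the image seen through one point on the lens) is shifted in proportion to its offset from the lens center, and all the views are averaged. Below is a minimal sketch under that assumption, using a toy 4D list `light_field[u][v][s][t]` and deliberately crude wrap-around border handling; none of these names come from Lytro's actual software:

```python
def refocus(light_field, alpha):
    """Synthesize a photo focused at a depth controlled by alpha.

    Each sub-aperture image (fixed u, v) is shifted in proportion
    to its offset from the lens center, then all views are averaged.
    alpha = 0 reproduces the original focal plane; other values
    refocus nearer or farther.
    """
    n_u = len(light_field)            # angular samples across the lens
    n_v = len(light_field[0])
    n_s = len(light_field[0][0])      # spatial samples on the sensor
    n_t = len(light_field[0][0][0])
    photo = [[0.0] * n_t for _ in range(n_s)]
    for u in range(n_u):
        for v in range(n_v):
            # integer shift proportional to this view's offset
            du = round(alpha * (u - n_u // 2))
            dv = round(alpha * (v - n_v // 2))
            for s in range(n_s):
                for t in range(n_t):
                    src_s = (s + du) % n_s   # wrap at borders for brevity
                    src_t = (t + dv) % n_t
                    photo[s][t] += light_field[u][v][src_s][src_t]
    scale = n_u * n_v
    return [[p / scale for p in row] for row in photo]
```

Points whose shifted views line up reinforce each other and come out sharp; points at other depths are averaged across misaligned positions, which is exactly the natural-looking blur described above.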

Ren Ng – a Stanford University computer science graduate and now CEO of Lytro. Ng is holding the standard digital camera that his team customized, using a mix of ready-made and homemade parts, to create the Lytro prototype (photo from

If you dig a little deeper, there are some interesting details hidden within the technology. For example, Ng didn’t invent his light field sensor from scratch. In his work he followed the principles of the so-called plenoptic camera.

The scientific community has been well aware of this type of camera for many years. It has roamed research departments – in experimental form only – from one institute to the next, never quite making it onto the consumer market.

How a plenoptic camera would capture the scene (Figure Ren Ng).

By taking the original pixels and solving some fairly complex equations, it is possible to reconstruct the scene at a higher resolution than any single fragment of the mosaic provides.

The distance between the lens and the subject being photographed is of no importance. Knowing the laws of the propagation of light, we can choose to process the image so that every object is rendered in sharp focus.

Part of an image obtained with the array of microlenses lying directly in front of the matrix (with a precisely determined gap). Top right – fragments at even greater enlargement (outlined with squares in the main shot); bottom – the resulting synthesized image (Figure Ren Ng).

However, plenoptic cameras have their problems. The size of the microlenses, their alignment with the pixels of the matrix beneath them, and the distance between the microlenses and the matrix must all be balanced, and this makes it difficult to get well-focused, high-resolution pictures under every microlens at the same time. Some things come out well, but others get lost.

With this in mind, it is important that the set of such miniature images conveys information not only about the brightness and color of different points, but also about the distance between the camera and each part of the scene. What happens if all this knowledge is combined? Put simply, the question the inventors of the technology asked themselves was: would it be possible, from the raw and crude images under the microlenses, to compute the whole scene in all its detail?

To answer this question, Ng used the concept of the plenoptic camera to build a whole theory of the different ways of representing light fields, along with mathematical methods for transforming them, and as a result he was able to design his own device (with its accompanying software), capable of performing all the tricks mentioned above.

(You can discover more about the scientific side of the project in the PDF document, where you’ll also find the history of Lytro’s emergence and the work at Stanford that preceded it, in the blog of the venture company K9, which helped to make Ng’s dream a reality.)

The micro-grid of square lenses used in the experimental prototype. Dimensions: 296 × 296 lenses, covering almost 100% of the area. The grid was placed over a CCD matrix with a resolution of 4096 × 4096 pixels. The assembly of these units is shown below (pictures Ren Ng).
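The figures quoted for the prototype make the spatial-versus-angular trade-off described earlier easy to see with some back-of-the-envelope arithmetic. Note that the one-output-pixel-per-microlens assumption below is the usual behaviour of this plenoptic design, not a number stated in the text:

```python
# Back-of-the-envelope arithmetic on the prototype's stated numbers:
# a 4096 x 4096 CCD behind a grid of 296 x 296 microlenses.

sensor_px = 4096
lenses_per_side = 296

# Sensor pixels sitting under each microlens, per side.
pixels_per_lens_side = sensor_px / lenses_per_side      # about 13.8

# Each of those pixels records a different ray direction, so each
# microlens captures roughly this many directional samples:
angular_samples = pixels_per_lens_side ** 2             # about 191

# If every microlens contributes one pixel to the synthesized photo
# (the usual plenoptic trade-off), the final image is only:
final_pixels = lenses_per_side * lenses_per_side        # 87,616 (~0.09 MP)

print(round(pixels_per_lens_side, 1), round(angular_samples), final_pixels)
```

In other words, the sensor's 16 megapixels are spent buying directional information at the cost of spatial resolution, which is exactly the compromise the article describes.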

The Lytro camera enables very low-light photography without a flash, and – with just one lens – it can produce 3D pictures, according to company insiders quoted recently in PC World.

And on the subject of three-dimensionality, to be honest, this area is somewhat unclear. To achieve a 3D effect from a 2D photograph, you would normally need to shoot your subject from at least two points of view. Most likely, the very same grid of microlenses makes it possible to shoot as if there were two lenses.

The distance between them, in this case, would be the diameter of the main lens. Lytro, however, has so far provided no details, and it has yet to show us any three-dimensional “Lytro-shots”.
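One plausible reading of this "two virtual lenses" idea, and it is only a guess given that Lytro has published no details, is to average the sub-aperture views from the left and right halves of the aperture separately, producing two viewpoints whose baseline cannot exceed the main lens diameter. A hypothetical sketch, reusing the toy `light_field[u][v][s][t]` layout (a 4D list of ray intensities indexed by lens position and sensor position):

```python
def average_views(light_field, u_range):
    """Average the sub-aperture images whose lens coordinate u lies
    in u_range. Fixing u picks one viewpoint on the lens, so
    averaging one half of the aperture yields a photo seen from
    that side of the lens."""
    n_v = len(light_field[0])
    n_s = len(light_field[0][0])
    n_t = len(light_field[0][0][0])
    out = [[0.0] * n_t for _ in range(n_s)]
    count = 0
    for u in u_range:
        for v in range(n_v):
            count += 1
            for s in range(n_s):
                for t in range(n_t):
                    out[s][t] += light_field[u][v][s][t]
    return [[value / count for value in row] for row in out]

def stereo_pair(light_field):
    """Split the aperture down the middle: left-half views form the
    left-eye image, right-half views the right-eye image. The stereo
    baseline is bounded by the main lens diameter."""
    n_u = len(light_field)
    left = average_views(light_field, range(0, n_u // 2))
    right = average_views(light_field, range(n_u - n_u // 2, n_u))
    return left, right
```

Whether Lytro actually synthesizes its 3D output this way is unknown; the sketch only shows why a single light-field lens could, in principle, stand in for a stereo pair.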

Alas, many aspects of the camera’s performance and software remain unclear, and most importantly, one question has gone unanswered: what kind of end product will the Americans be bringing to market?

The exact date of Lytro’s appearance in stores, the price, and even the shape (whether it’ll be a slender “bar of soap” or a weighty “SLR”) all remain to be specified. So those eager to join the photographic revolution will just have to wait patiently.



Co-founder and CEO of Daminion Software. I like traveling, swimming, cycling – all kinds of activities that make me happy.