USER REPORT: The Light L16 Camera, Week One (and a bit)

By Harry Teasley

I wanted to follow up my Day One review of the L16 camera from Light with a more in-depth look, now that I’ve had more time with the camera. It’s a very interesting photographic tool, without a doubt. It does some things surprisingly well, but also, less startlingly, has some quirks. So I hope to demonstrate the camera’s capabilities and weaknesses more fully than before. And because the camera is so heavily computational, this review will also talk a fair bit about Lumen, the desktop app that is absolutely necessary for working with the L16.

The L16 can shoot very capable wide-angle, bright-light images, but saying that is almost damning with faint praise: it doesn’t take $1200 to get a camera that can do that. Granted, there aren’t a lot of cheap cameras that can produce a 10432 x 7824 image with unique post-processing capabilities, so there you go. But let’s look at what that means in practice.

Seattle autumn is a well-kept secret: a beautiful mix of evergreens with deciduous trees, and there’s a gorgeous couple of weeks where the trees are turning but the weather hasn’t gone dismal yet. It’s a great time to be a photographer. The L16 captures these colors beautifully.

Below is a reduction of an 8192 x 6192 image, with no post-processing adjustments of any kind (save the reduction to 1200 x 907).

 

The detail is incredible: here’s a center crop from the original at 100%.

These are cellphone sensors, in a thick, cellphone-like body, and this is the detail they pull from a hundred or so yards away. That tells me this is a powerful path for digital imaging to pursue. This is Gen 1 image quality in this format, so I see a lot of potential.

But this is still Gen 1, and there are issues. Let me preface this by noting that the images I use to point out these issues are the highest-resolution versions the L16 produces: no software depth-of-field change has been applied, so nothing has been obscured by post-processing that might otherwise be recovered. With that said, it looks to me as if there are areas of the image where the captures from different sensors stitch together well, and other areas where they stitch together less well, with the result that there are mushy patches of lower detail.

Worse, there are often patches of smeary detail side by side with areas of high detail, and these differences do not map to scene depth: it feels as if these are regions of one sensor’s output sitting next to a blurry merged region. In a one-lens camera, that sort of rendering is the result of shallow depth of field, giving good foreground/background separation; but when it’s scattered randomly around an image, it’s jarring and distracting. Below are some crops from the full-sized image showing these sorts of artifacts.

Foreground elements in focus, middle-ground elements blurred, then distant elements back in focus… elements clearly next to one another with drastically different levels of detail… it’s unappealing. The L16 also has an unfortunate disdain for edges: where contrast is low, it blurs detail and color across the boundary very easily, so a sharp edge dissolves into soft focus quite noticeably. Lumen, the software package that translates the L16’s files into standard images, has some ability to unify depth detail, but not unless you have already edited the depth quality of the image: if you leave the file at the default f/15.2 setting, the tools for editing depth information are greyed out. There’s nothing you can do to add localized depth information and fix these incongruities.

Oh, and the full-res DNG of this tree image is 369 MB in size. If you pick the right zoom level, you can end up with a 10k x 7.5k image whose full-res DNG is half a gigabyte.

Playing with depth of field in software is also hit or miss. The adjustment uses actual scene depth well, sometimes. And no matter what, all portions of the image will lose detail: nothing escapes some degree of blurring. It also appears that the modification is based on distance from the camera, as opposed to distance from the point of focus: in all the photos I’ve taken, closer objects receive less depth blurring, and distant objects receive more.
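To make that distinction concrete, here is a minimal sketch, my own illustration and nothing from Lumen, of the two ways a synthetic depth-of-field tool could map a per-pixel depth value to a blur radius; the function names, units, and constants are all assumptions for illustration.

```python
import numpy as np

def blur_radius_from_camera(depth_m, max_radius_px=8.0, max_depth_m=50.0):
    """Blur grows with absolute distance from the camera: near objects
    stay sharp, far objects get the most blur. This matches the
    behaviour I see in the L16's adjustment."""
    return max_radius_px * np.clip(depth_m / max_depth_m, 0.0, 1.0)

def blur_radius_from_focus(depth_m, focus_m, max_radius_px=8.0, falloff_m=5.0):
    """Blur grows with distance from the focal plane, the way a real lens
    behaves: objects nearer OR farther than the focus distance both get
    progressively blurrier."""
    return max_radius_px * np.clip(np.abs(depth_m - focus_m) / falloff_m, 0.0, 1.0)

# Toy depth values: 1 m foreground, 3 m subject (the focus point), 20 m background.
depth = np.array([1.0, 3.0, 20.0])
print(blur_radius_from_camera(depth))              # [0.16 0.48 3.2 ]  foreground barely touched
print(blur_radius_from_focus(depth, focus_m=3.0))  # [3.2  0.   8.  ]  subject sharp, both sides blurred
```

The L16’s adjustment behaves like the first function; a real lens behaves like the second.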

Here’s our puppy Edgar at f/15.2.

And here’s the same photo, adjusted to approximate an f/4 lens. At a reduced resolution, the preservation of close detail looks very good, and the depth effect does improve this sort of composition.

But here’s a split-screen difference of preserved detail in the full-resolution images:

The left half is at f/15.2, the right half is at f/4. This is the close detail on the chair, which exhibits the least lossy change when altering depth of field. Now, Lumen does allow you to paint different depth-of-field values back in, so it’s possible to rescue some of the lost information, but the feature becomes less appealing if it means a lot of fiddling with different areas of the image.

On other images, the depth tool throws a wobbler and punts on good decisions. Here’s a split image of a close-up shot, where choosing to add a depth of field effect is choosing to ruin the image.

So don’t add fake depth of field if there are no significant changes in the depth data in the source file, I suppose. It will just blur everything.

Here are more images from the L16.

The L16 can reproduce colors beautifully in its insanely high-resolution files. Detail retention can be spectacular, but the idiosyncrasies of stitching multiple sensors’ images together can often detract from your captures. Many of those flaws are partially or entirely cured by down-sampling the images, though, so I wouldn’t put as much emphasis on them as one might be tempted to. But then again, if I’m down-sampling everything, it raises the question of whether a cheaper, traditional camera that captures at the lower resolution would do just as well.
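For what it’s worth, the down-sampling I’m talking about is nothing exotic; here is a minimal sketch using Pillow, with placeholder filenames, and the quarter-size reduction is just an example.

```python
from PIL import Image

# Reduce a full-resolution L16 export to a quarter of its linear size.
# Lanczos resampling averages neighbouring pixels, which is what hides
# the patchy, sensor-boundary softness described above.
img = Image.open("l16_full_res.jpg")  # placeholder filename
small = img.resize((img.width // 4, img.height // 4), Image.LANCZOS)
small.save("l16_downsampled.jpg", quality=92)
```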

Overall, my impression is much the same as before: there’s a ton of potential, but there are hard problems to solve. There are also a lot of interesting opportunities that I haven’t seen Light pursue: why not use depth information to influence exposure, or hue? I imagine you could create a convincing atmospheric perspective in images using this data. So far there’s no sign of that sort of thinking going into the tools, but who knows? They are a small, new company with a lot to prove with their vision of imaging.
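Just to illustrate the kind of thing I mean, here is a rough sketch of a depth-driven haze adjustment; this is purely speculative on my part, not anything Light ships, and it assumes you already have a normalized per-pixel depth map aligned with the RGB image.

```python
import numpy as np

def atmospheric_perspective(rgb, depth, haze=(0.72, 0.78, 0.85), strength=0.5):
    """Blend distant pixels toward a pale haze colour, which lightens and
    desaturates them the farther away they are, mimicking atmospheric
    perspective.

    rgb   : float array of shape (H, W, 3), values in [0, 1]
    depth : float array of shape (H, W), 0 = nearest, 1 = farthest
    """
    haze = np.asarray(haze, dtype=rgb.dtype)
    mix = (strength * depth)[..., None]      # haze amount grows with distance
    out = (1.0 - mix) * rgb + mix * haze     # linear blend toward the haze colour
    return np.clip(out, 0.0, 1.0)

# Toy 1x2 image: left pixel is near, right pixel is far.
rgb = np.full((1, 2, 3), 0.30)
depth = np.array([[0.0, 1.0]])
print(atmospheric_perspective(rgb, depth))
# the near pixel stays at 0.30; the far pixel drifts toward the pale haze colour
```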

Some of the preceding can be taken as bad news. Now for the really bad news: Lumen. Lumen is both necessary, as the way to interpret the L16’s native files, and, at present, a software failure. As I discovered on day one, I could connect my camera to a Windows machine successfully only once; the software never saw the connected camera again. I could use the Open File menu within Lumen to find the files on the camera, as if it were an external hard drive, but I could not use the software’s transfer function, which is how you import more than one photo at a time. So that was tedious, especially since there is no way to tell which picture is which in the dialog: there are no thumbnail images for the LRI files.

Light Support has been helpfully communicative and has reached out again and again, but the result hasn’t been a happy one. They first said they had discovered the same bug themselves and were working on it, then offered some suggested workarounds (which didn’t work), before finally getting in touch to say that they’re unsatisfied with the current Windows version of Lumen and are starting a significant rewrite of the tool. That’s not going to arrive tomorrow, that’s for certain, so Windows users are going to be unsatisfied for a while.

Mac users aren’t that much happier, though. I use Windows primarily, but we do have an iMac in the house, so after receiving the bad news about the Windows client, I downloaded the OS X version. It connects and reconnects to the camera just fine, which is a big plus. The Transfer dialog popped up as soon as I plugged the L16 in, and it took several minutes to copy over the hundreds of pictures I had accumulated over the week.

Lumen is frustratingly slow. Image redraws are glacial and buggy: the screen doesn’t refresh reliably, sometimes requiring a pan or a zoom change to force it to rethink how to display things correctly. The performance of the OS X version on the iMac (3.4 GHz i7) is similar to what I see under Windows on a fast gaming rig: it’s too frustrating to use seriously. All I want to do is spit out full-sized DNGs so I can stop using Lumen as quickly as possible.

Since software processing is at the heart of the L16 imaging pipeline, Lumen needs serious improvement. Light needs a tool that is robust and fast, one that makes a photographer want to avail themselves of the capabilities of the LRI files. As it stands, the tool is slow and its display scaling of images is poor, so it makes me not want to use it at all. If I wanted to paint in depth-of-field detail, I think I would just spit out an f/4 image and an f/15.2 image, then layer and mask them in Photoshop. I could probably produce an image every bit as good as what Lumen could do (probably better), and in about the same amount of time. So I’m looking forward to them overcoming these initial stumbles on the software side.
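That layer-and-mask workaround doesn’t even need Photoshop; here is a minimal sketch of the same idea with numpy and Pillow, where the two exported files and the hand-painted mask are placeholders of my own invention.

```python
import numpy as np
from PIL import Image

# Hypothetical workflow: export the same frame from Lumen twice
# (once at f/15.2, once at f/4), then blend them with a hand-painted
# greyscale mask where white = keep the sharp f/15.2 pixels.
sharp   = np.asarray(Image.open("edgar_f15_2.tif"), dtype=np.float32)
shallow = np.asarray(Image.open("edgar_f4.tif"), dtype=np.float32)
mask    = np.asarray(Image.open("mask.png").convert("L"), dtype=np.float32)[..., None] / 255.0

blended = mask * sharp + (1.0 - mask) * shallow   # per-pixel linear blend
Image.fromarray(np.clip(blended, 0, 255).astype(np.uint8)).save("edgar_composite.tif")
```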

Thanks for reading. My flickr is at https://www.flickr.com/photos/harryteasley/ where I will post an album of L16 images.

 

8 Comments

  1. It’s a little disappointing to see that the L16 has such significant problems after having been under development for so long. I’ve been on the initial waiting list for one of these for over two years now after having attended their launch party in San Francisco back in October 2015. I realize such a new innovative product is bound to have lots of problems. Since so much of the product is dependent on software and processing, hopefully they’ll be able to solve many of these problems with future software or firmware updates. At their launch party, I recall them saying that their business model was to eventually license their technology to smartphone manufacturers, so the L16 is probably meant to be more of a proof-of-concept than an actual product they would build a product line around.

  2. You say, about halfway down, photos of Puppy Edgar: “But here’s a split-screen difference of preserved detail in the full-resolution images: The left half is at f/15.2, the right half is at f/4.”

    Surely it’s the other way round: left half – with blurry chair back – at f4 and right half at f15.2 ..no?

    And why would you choose “..to add a depth of field effect..” to a close-up shot, in which everything’s the same distance from the camera/lenses/sensors?

    Apart from those couple of quibbles, it’s a very well-balanced review, thanks! Mine’s been on order for more than a year, and I’m still waiting.. waiting.. waiting..

  3. It’s interesting how certain cameras promise much, and what they deliver doesn’t quite match what is expected. I think you’re right that eventually this technology will sort itself out, one way or another. Most of the early DSLRs were awful (and some new ones are as well!), but we didn’t know it because we were so smitten with the convenience. Now we’re a bit inoculated against novelty.

    I think that the Lytro Illum is arguably the better type of modern digital camera. It’s lacking in outright resolution but the basic system works very well already.

    The L16, I think, could be replaced by a simpler system of between 2 and 4 sensors. I don’t think that you need 16 sensors. Light is making this too difficult for themselves. The sensors are too small – you look at the output of even the newest iPhones and it’s not good enough for big enlargements.

  4. I believe there’s so much potential in this technology.
    Thanks for sharing. I’ve been curious about this L16 camera.

  5. The photos are interesting if not intriguing, as is the technology. For me however, a big part of the enjoyment I get from photography comes from holding a proper camera. So unless a version of this gets integrated into my next iPhone, I doubt I’ll ever have it. Thank you for sharing.
