
Algorithms will be king in Digital Photography


chkphoto



from Gadget Lab by Priya Ganapati

 

Video cameras on your cellphone could soon be good enough to record a jazz concert, a nighttime street scene, or a candlelit dinner. A Swedish start-up has created an algorithm, inspired by dung beetles, that can be integrated into camera modules to offer high-quality video in extremely low light situations.

 

“We are talking about shooting video in situations that seem almost pitch black,” Benjamin Page, business development manager for Nocturnal Vision told Wired.com. “We can offer an unbelievable amount of noise reduction and contrast enhancement at the same time.” Nocturnal Vision presented its technology at the ISE 2010 imaging conference in London Thursday.

 

Toyota, which financed a significant portion of the research and development, has secured exclusive rights to use the technology in night-vision systems for cars.

 

Nocturnal Vision says it is now working with mobile phone companies such as Sony Ericsson to test its technology and find a way to integrate it into phones.

 

As more consumers use the cameras on their cellphones for video and photographs, companies are looking for ways to improve the quality of the camera modules. Earlier this week, Palo Alto startup InVisage Technologies said it has developed a new technology using a nanomaterial called quantum dots that would offer four times the light-gathering performance of current silicon-based sensors.

 

Nocturnal Vision says its software can be complementary to hardware-based improvements.

 

The company’s algorithm is based on research by Lund University zoologist Eric Warrant on dung beetles, bees and other nocturnal insects. Dung beetles are remarkable for their ability to see enough detail at night to find food and escape predators.

 

Their night-vision capability is the result of their ability to “sum the visual signal locally in space and time,” says Henrik Malm, one of the creators of the algorithm, in his research paper. It’s known as adaptive spatio-temporal smoothing. That means the brain analyzes what’s going on across each frame of an image and what’s going on from one frame to another. (See Malm’s research paper on noise reduction and image enhancement in low-light video.)

 

In most digital cameras today, the short, one-time exposure (usually a fraction of a second) and imaging sensors that have uniform sensitivity across their area combine to produce pictures that have underexposed dark areas. Amplifying the dark areas uniformly means the low signal-to-noise ratio becomes pronounced, writes Malm. Instead, adaptive spatio-temporal intensity smoothing can even out the noise, while reducing motion blur.
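
(As a rough aside: Malm's point about uniform amplification is easy to see with a toy calculation. The sketch below is not the company's algorithm; the frame data, gain, and noise level are invented purely to show that a flat gain scales noise and signal together.)

```python
import numpy as np

# Hypothetical dark patch: a weak, constant signal buried in sensor noise.
rng = np.random.default_rng(0)
signal = np.full((100, 100), 5.0)                     # true scene brightness (arbitrary units)
frame = signal + rng.normal(0.0, 2.0, signal.shape)   # add noise with std dev 2

gain = 20.0                                           # uniform "brighten the shadows" gain
amplified = frame * gain

# A flat gain scales signal and noise alike, so the signal-to-noise ratio
# does not improve; the noise simply becomes bright enough to see.
print(signal.mean() / frame.std())                    # SNR before
print((signal.mean() * gain) / amplified.std())       # SNR after (essentially the same)
```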

 

To do this, Nocturnal Vision’s algorithm pools information from about seven frames before and after a shot to brighten, reduce noise and sharpen the video stream, says Page. The technology can work in real time as scenes are shot, or can be applied to video in post-processing. However, because it requires multiple frames, it won’t work with single-exposure still images.

 

For instance, a video on the company’s website shows a clip of a man walking at night. The algorithm first enhances the darker pixels in the frame more than the lighter ones to reveal additional detail. But that also introduces noise into the frame. The algorithm then pools brightness information from adjacent frames to correct for the noise.
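
(Again as a sketch only: a naive version of those two steps might look like the snippet below, using plain gamma brightening and a fixed ±7-frame average in place of Nocturnal Vision's adaptive, motion-aware smoothing. The window size and gamma value are made-up parameters.)

```python
import numpy as np

def naive_low_light_enhance(frames, gamma=0.4, radius=7):
    """Toy stand-in for adaptive spatio-temporal smoothing.

    frames: sequence of grayscale frames with float values in [0, 1].
    Step 1: brighten dark pixels more than bright ones (gamma < 1).
    Step 2: average each frame with up to `radius` frames before and after it,
            to suppress the noise that step 1 made visible.
    A real implementation would weight that average adaptively so moving
    objects are not smeared; this fixed average will blur motion.
    """
    brightened = [np.clip(f, 0.0, 1.0) ** gamma for f in frames]
    out = []
    for i in range(len(brightened)):
        lo, hi = max(0, i - radius), min(len(brightened), i + radius + 1)
        out.append(np.mean(brightened[lo:hi], axis=0))
    return out

# Example: 30 synthetic, noisy, very dark frames of a static scene.
rng = np.random.default_rng(1)
frames = [np.clip(0.05 + rng.normal(0, 0.02, (120, 160)), 0, 1) for _ in range(30)]
enhanced = naive_low_light_enhance(frames)
```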

 

The challenge for Nocturnal Vision is that the algorithm sucks up processing power. Most smartphones today, including those featuring the 1-GHz Qualcomm Snapdragon processor, don’t have enough muscle to run the software.

 

“Currently, we are running it on test devices via GPU computation power,” says Page. “For a standard video with resolution of 640 x 480 it requires approximately 14 billion calculations per frame.”
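
(For scale: 640 x 480 is roughly 307,000 pixels, so 14 billion operations per frame works out to roughly 45,000 operations per pixel; at, say, 30 frames per second that is on the order of 400 billion operations per second, which is why it currently runs on GPUs rather than phone processors.)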

 

Nocturnal Vision’s technology works best on uncompressed images. Since most camera phones compress photos as soon as they are taken, that means Nocturnal Vision’s technology would need to be integrated into a phone’s firmware — or directly into a new line of chips. The company says it is looking for chip makers to do just this.

 

Page says Nocturnal Vision hopes to see its software in the hands of consumers within the next two years. “If we can work with the chip makers, we could be in millions of smartphones,” he says.

 

And your next nighttime videos might not be quite so dark.


Interesting article, but nothing really too exciting, and it doesn't seem like it will ever be that practical. Not only is the "high-quality" term quite a stretch (look at their samples), it only works for video, and because it's using time-sampling, it ends up reducing the actual resolution in the process. Never mind the fact that it will only work in video mode....

 

The reality is that low light is low light, period. Even if you can get to 100% quantum efficiency, there's just not a lot of light there, and there are a lot of other good technologies entrenched out there to deal with situations like this. I don't see this being of much practical use in the photography world.


Interesting article, but nothing really too exciting, and it doesn't seem like it will ever be that practical. Not only is the "high-quality" term quite a stretch (look at their samples), it only works for video, and because it's using time-sampling, it ends up reducing the actual resolution in the process. Never mind the fact that it will only work in video mode....

 

The reality is that low light is low light, period. Even if you can get to 100% quantum efficiency, there's just not a lot of light there, and there are a lot of other good technologies entrenched out there to deal with situations like this. I don't see this being of much practical use in the photography world.

 

Agreed. But I do think that, as time goes on, the algorithm will be the principal thing that really improves how well a lens and sensor capture the scene in front of them. And future still cameras that also shoot video will benefit. A lot of what's in that article will creep into camera firmware over time, I reckon.

 

I'm working with a government agency that has a camera that shoots in total darkness by mapping a subject with a laser and reproducing that subject in minute detail, with the data interpreted by algorithms. Amazing technology.

 

All this is just interesting and food for thought.


Imho physics will be king in any sort of photography. :) The problem I have with this theory is that you cannot make a silk purse out of a sow's ear.

 

Low light -> amplification -> noise -> noise reduction -> loss of data -> creation of pseudo-data.

 

At some point it ceases to be photography. :(

 

Pete.


At some point it ceases to be photography. :(

 

Pete.

 

Exactly. In-camera HDR; cameras that capture smiles and remember faces; ISO 64,000. So to rephrase your statement as a question: "When does photography stop being photography?"

 

Obviously, the ability to "see" and compose, use light, and click at the right moment is still what makes an image outstanding. But...


Imho physics will be king in any sort of photography. :) The problem I have with this theory is that you cannot make a silk purse out of a sow's ear.

 

Low light -> amplification -> noise -> noise reduction -> loss of data -> creation of pseudo-data.

 

At some point it ceases to be photography. :(

 

Pete.

 

And... barring infrared, you can't see what you can't see. LOL - When I shoot night racing, since I'm not a fan of flash photography, I basically pack it in once I can't tell what car I'm aiming at. I figure that's a fairly reasonable cutoff. :)


Sounds like frame stacking, which has been in use in astrophotography for the last 10-15 years.

 

In essence, yes, but this is being applied to moving objects, so it's much more sophisticated than simple frame stacking--which has been used for nearly 40 years when you remember it's something that started with professional astronomers.

 

Jeff


In essence, yes, but this is being applied to moving objects, so it's much more sophisticated than simple frame stacking--which has been used for nearly 40 years when you remember it's something that started with professional astronomers.

 

Jeff

 

 

What's the difference between moving objects on the ground and stars? Everything moves, even the stars, and even after removing the effect of Earth's rotation with a star tracker, you still need to compensate for atmospheric distortion and tracking error.

 

If you have bright objects (relative to stars), you can use a relatively high frame rate (1-20 fps versus 10-minute exposures). All this accounts for the faster movement of the object of interest, giving you finer temporal resolution. Otherwise, you'll still have blurred portions of an image where an object moves rapidly across the scene over several frames.

 

“Sum the visual signal locally in space and time” sounds all fancy and complicated, but this is already done in video compression. Ever notice that when you're watching a video over a low-bandwidth connection, if the image doesn't change much you get higher resolution, whereas if the scene is changing rapidly (e.g. moving water or fast panning), your resolution decreases? It's the same idea.

 

In the end, all the techno-talk is exactly what the article calls it: spatio-temporal smoothing. By its very nature, a smoothing (low-pass) filter results in a loss of information, so instead of getting noisy blobs in your image, you'll have less noisy (smoother) blobs. Not exactly revolutionary, nor is it the panacea for noise and sharpness that it claims to be. You simply cannot gain information by smoothing.
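
To make that concrete, here's a quick toy example (not their filter, just a plain moving average on an invented noisy edge): the noise drops, but the edge gets spread out by roughly the width of the smoothing window.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D "scanline": a sharp edge with noise on top.
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + rng.normal(0, 0.3, clean.shape)

# Simple moving-average (box / low-pass) smoothing.
window = 9
smoothed = np.convolve(noisy, np.ones(window) / window, mode="same")

print("noise std before:", noisy[:40].std())      # ~0.3
print("noise std after: ", smoothed[:40].std())   # roughly a third of that
# ...but the once-sharp edge is now spread over ~`window` samples.
```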


What's the difference between moving objects on the ground and stars? Everything moves, even the stars, and even after removing the effect of Earth's rotation with a star tracker, you still need to compensate for atmospheric distortion and tracking error.

 

If you have bright objects (relative to stars), you can use a relatively high frame rate (1-20 fps versus 10-minute exposures). All this accounts for the faster movement of the object of interest, giving you finer temporal resolution. Otherwise, you'll still have blurred portions of an image where an object moves rapidly across the scene over several frames.

 

Actually, there's a pretty big difference. Modern tracking systems are such that the error is on the order of a few pixels over a long exposure. On my telescope systems, I could get just a few pixels of drift across ten minutes, and that included atmospheric turbulence. This can also be corrected by adaptive optics systems. With autoguiding and autocorrecting systems, the drift can be rendered nearly non-existent.

 

The system in the article is doing a lot of reconstruction of objects through interpolation from a series of exposures. In the case of astrophotography you are not interpolating any signal; you are using multiple exposures to sum signal and decrease noise. Because the signal adds linearly while uncorrelated noise only adds in quadrature, the noise falls relative to the signal as more frames are combined. My understanding of what this software system is doing is that it is combining this approach with an interpolation of the signal from a series of exposures. So they are related, but not entirely identical. With astronomical summing, there is no smearing of detail, as there is no actual reduction or interpolation of the signal, just a reduction in noise. In fact, the longer the exposures, and the more of them that are summed, the more detail you gain with astronomical image summing (within limits, of course).
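
For what it's worth, that noise behaviour is easy to demonstrate: averaging N registered frames of a static scene leaves the signal unchanged while uncorrelated noise falls roughly as 1/sqrt(N). A small simulation (invented numbers, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
true_scene = 10.0      # constant scene brightness, arbitrary units
noise_std = 3.0        # per-frame noise

for n_frames in (1, 4, 16, 64):
    frames = true_scene + rng.normal(0, noise_std, size=(n_frames, 100, 100))
    stacked = frames.mean(axis=0)   # signal stays at ~10, noise shrinks
    # residual noise should track noise_std / sqrt(n_frames)
    print(n_frames, round(stacked.std(), 2), round(noise_std / np.sqrt(n_frames), 2))
```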

 

“Sum the visual signal locally in space and time” sounds all fancy and complicated, but this is already done in video compression. Ever notice that when you're watching a video over a low-bandwidth connection, if the image doesn't change much you get higher resolution, whereas if the scene is changing rapidly (e.g. moving water or fast panning), your resolution decreases? It's the same idea.

 

In the end, all the techno-talk is exactly what the article calls it: spatio-temporal smoothing. By its very nature, a smoothing (low-pass) filter results in a loss of information, so instead of getting noisy blobs in your image, you'll have less noisy (smoother) blobs. Not exactly revolutionary, nor is it the panacea for noise and sharpness that it claims to be. You simply cannot gain information by smoothing.

 

Again, this is not the same thing as resolution-changing video compression. There is no summing of signal in compression algorithms; the loss of resolution is simply the result of a trade-off between frame rate and resolution, and frame rate is set to win in order to preserve the perception of smooth motion. Totally different from what is described in the article.

 

I totally agree with your last paragraph. They clearly have a good PR firm that helped them hype this up. What the technology does appear to be is a novel application of some well-known technologies and methodologies to a new situation. Based on the deal with Toyota, it's clearly not intended to be a source of high resolution, but simply a method of improving signal quality in moving images in low-light situations. Even then, I have to wonder whether this provides any real advantage over existing low-light imaging systems. The market will decide whether this is a real success or not.

 

Jeff

