Canon "turbulence remover"
#1
http://www.mirrorlessrumors.com/look-at-...00mm-lens/

 

It's a short demo of the capabilities of the 250MP prototype with a very long lens. Apart from a lingering doubt about that very large number of photosites (and the ability of current lenses to resolve them), it's an interesting testbed: landscape details from very long distances (about 20km). You need crystal-clear air, but then you have turbulence. Now, there's a mention of a "turbulence remover" facility made by Canon.

 

I'm very curious about that. I know that existing software, used by specialised astrophotographers, can get rid of atmospheric artefacts: typically it takes multiple frames, evaluates their quality in some way, auto-aligns them and averages them to improve the result. It's the first time, AFAIK, that such software has been advertised for general-purpose photography. A few things are puzzling, though: AFAIK this post-processing is extremely CPU-intensive and doesn't run in real time (unless there's ad hoc hardware); yet the final video of the woman waving her hands from the Eiffel Tower shows so little turbulence that it must have been processed in some way...
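Out of curiosity, here is a minimal sketch of how that kind of astro pipeline typically works - pick the sharpest frames, align them, average them (so-called "lucky imaging"). It assumes grayscale float frames as numpy arrays; the quality metric and the 25% keep fraction are illustrative assumptions of mine, not anything Canon has described:

```python
import numpy as np

def sharpness(frame):
    # Variance of a discrete Laplacian: a simple frame-quality metric.
    lap = (-4 * frame
           + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1))
    return lap.var()

def shift_between(ref, frame):
    # Integer-pixel shift of `frame` relative to `ref` by FFT phase
    # correlation; rolling `frame` by the result aligns it to `ref`.
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

def lucky_stack(frames, keep=0.25):
    # Keep the sharpest fraction of frames, align them to the best
    # one, and average to beat down noise and residual distortion.
    ranked = sorted(frames, key=sharpness, reverse=True)
    best = ranked[:max(1, int(len(ranked) * keep))]
    ref = best[0]
    aligned = [np.roll(f, shift_between(ref, f), axis=(0, 1)) for f in best]
    return np.mean(aligned, axis=0)
```

The CPU cost is easy to see here: several full-frame FFTs per frame, which is why this kind of processing is usually done offline rather than in real time.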

 

What do you think?

 

 

stoppingdown.net

 

Sony a6300, Sony a6000, Sony NEX-6, Sony E 10-18mm F4 OSS, Sony Zeiss Vario-Tessar T* E 16-70mm F4 ZA OSS, Sony FE 70-200mm F4 G OSS, Sigma 150-600mm ƒ/5-6.3 DG OS HSM Contemporary, Samyang 12mm ƒ/2, Sigma 30mm F2.8 DN | A, Meyer Gorlitz Trioplan 100mm ƒ/2.8, Samyang 8mm ƒ/3.5 fish-eye II | Zenit Helios 44-2 58mm ƒ/2
Plus some legacy Nikkor lenses.
#2
Maybe I missed it, but I didn't see any claim that it was real-time capture. If you had 250MP video, what exactly would you do with it in real time? So I think it's likely they post-process it.

Also, the processing used for astro is optimised very differently from what video needs, so a direct comparison is difficult. Astro corrections tend to be optimised to produce a single detailed, low-noise image. Features generally don't change much over short periods, with the possible exception of Jupiter: due to its fast rotation, rotation effects become significant in as little as minutes, and there is software to de-rotate it too!

For video, you are not trying to produce one low-noise image. You would be looking for feature trends and stabilising against them. Look at it as local IS as opposed to global IS.
dA (http://snowporing.deviantart.com/) | Canon 7D2, 7D, 5D2, 600D, 450D, 300D IR modified, 1D, EF-S 10-18, 15-85, EF 35/2, 85/1.8, 135/2, 70-300L, 100-400L, MP-E65, Zeiss 2/50, Sigma 150 macro, 120-300/2.8, Samyang 8mm fisheye, Olympus E-P1, Panasonic 20/1.7, Sony HX9V, Fuji X100.
#3
I agree with some of your points; that's why I'm asking. In particular, I'd like to understand whether that post is the first one mentioning this "turbulence remover" from Canon, or whether there is other information about the tool.

 

True, there were no explicit claims that it was real time - but neither do they say that the whole 250MP frame is processed; honestly, that wouldn't make any sense for video. The feature they were demonstrating fits a comment I read when the 250MP camera was announced, about the possibility of using it as a super electronic zoom, capable of delivering 4K from small portions of the sensor. Such a thing _must_ IMHO be designed with a processing pipeline capable of working only on selected 4K portions. Don't you think?
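To illustrate what I mean by a pipeline that touches only a 4K window: the sketch below crops a UHD region of interest out of the full readout, so only ~8MP per frame travel down the rest of the chain. The sensor dimensions are placeholders of mine - I don't know the prototype's actual readout geometry:

```python
import numpy as np

# Placeholder geometry (~250MP); the real readout size is an assumption here.
SENSOR_W, SENSOR_H = 19200, 13000
OUT_W, OUT_H = 3840, 2160          # UHD 4K output window

def crop_4k(sensor_frame, cx, cy):
    # Extract a 4K window centred on (cx, cy), clamped to the sensor.
    # Everything outside this window can be skipped by the pipeline,
    # which is what would make a "super electronic zoom" feasible.
    x0 = int(np.clip(cx - OUT_W // 2, 0, SENSOR_W - OUT_W))
    y0 = int(np.clip(cy - OUT_H // 2, 0, SENSOR_H - OUT_H))
    return sensor_frame[y0:y0 + OUT_H, x0:x0 + OUT_W]
```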

#4
PS I'm not an expert in astro matters, but I'm aware of the points you mentioned, as a close friend of mine explained them to me. But - for instance - some large reflector telescopes can change the geometry of the mirror to correct for atmospheric blur. Sure, it's a multi-million-dollar thing :-) but many things that we routinely do today were supercomputer stuff in the past.

 

Also, I think a 250MP thing is meant for highly professional usage, and it would be consistent to hear that there is very expensive hw/sw supporting it. I can't imagine any other use for being able to spot a person 20km from the camera than security surveillance.
#5
There are two parts to this. The sensor is designed to read out as well as it can; then you have to decide what to do with that output. Spying may be one application, if you need to cover a large field of view in one shot at high resolution.

Adaptive optics is used for astro, but we might be waiting a long time for it to come to a consumer application. Generally it needs a correction signal to know how to adapt - fine if you have a reference target near what you are imaging to look at in parallel, not so great in a busy daytime scene. As for costs going down, there is an affordable "adaptive optics" add-on you can buy today. It doesn't use mirror deformation, but deflection, like lens IS systems; the difference is that instead of correcting for human rates of movement, it is tuned to much faster fluctuations.
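For what it's worth, the correction signal itself is cheap to compute when you do have a guide target. A minimal sketch, assuming a small grayscale float patch containing one bright reference point: the centroid offset from the patch centre is the kind of tip/tilt error such a deflection element is driven by.

```python
import numpy as np

def tip_tilt_error(patch):
    # Intensity-weighted centroid of a small patch containing one
    # bright guide target; assumes the patch is not all zeros.
    h, w = patch.shape
    total = patch.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    cy = (ys * patch).sum() / total
    cx = (xs * patch).sum() / total
    # Offset from the patch centre = the tip/tilt error to correct.
    return cy - (h - 1) / 2, cx - (w - 1) / 2
```

In a busy daytime scene the hard part is not this arithmetic but finding a reliable reference to measure against.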

I don't think video turbulence correction need be that computationally intensive. For example, look at existing digital stabilisation for video: it crops the image a bit and aligns the scene globally. The difference for correcting turbulence would be to alter the algorithms to work on small areas. More work, but I don't think it would be disproportionately more costly to implement.
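A sketch of that difference, assuming grayscale float frames as numpy arrays: estimate a shift per tile (here by FFT phase correlation, one common choice) against a reference frame, and shift each tile back. Integer-pixel shifts and hard tile borders are simplifications; a real system would use sub-pixel shifts and blend tile edges:

```python
import numpy as np

def tile_shift(ref_tile, tile):
    # Integer-pixel displacement of `tile` vs `ref_tile` via FFT phase
    # correlation -- the same alignment digital IS does, but per tile.
    corr = np.fft.ifft2(np.fft.fft2(ref_tile) * np.conj(np.fft.fft2(tile)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    h, w = ref_tile.shape
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

def local_stabilise(ref, frame, tile=64):
    # "Local IS": shift every tile of `frame` back onto `ref` instead
    # of applying one global shift. Edge remainders are left untouched.
    out = frame.copy()
    h, w = frame.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            r = ref[y:y + tile, x:x + tile]
            t = frame[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = np.roll(
                t, tile_shift(r, t), axis=(0, 1))
    return out
```

Each tile's FFT is small, so all the tiles together cost roughly as much as one global alignment pass - which supports the point that it needn't be disproportionately more expensive.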
#6
I was certainly not thinking of adaptive optics for a consumer camera ;-). I was pointing out that algorithms for detecting turbulence in real time already exist. They are used to feed adaptive optics; perhaps they could feed an image processor instead, as you said, e.g. shifting small areas of pixels to counter the blur.


