[Elphel-support] Global Shutter

Abe Bachrach abachrac at mit.edu
Tue Nov 30 10:51:40 PST 2010


I'll respond inline....

Also coming back to our phone conversation - there are no perfect high
> resolution global shutter sensors that I know of. The high resolution
> global shutter CCDs suffer from a limited shutter ratio, so some light gets
> into the CCD registers during readout and the image snapshot is combined
> with a vertically integrated signal (each pixel charge picks up a few extra
> photoelectrons while travelling to the output shift registers). With CMOS
> sensors it might be better, but there are still problems related to the use
> of the analog memory.
>

As I tried to say on the phone, for most realtime machine vision
applications, a global shutter is more important than "high resolution"...
640x480 is probably the most commonly used resolution, although some cameras
go up to ~1.3 Mpix. The Google situation was probably fairly unique in
requiring both high resolution and a global shutter for optical character
recognition, since they were most likely collecting the imagery and
processing each image separately on a cluster of computers.


The advantage of the current ERS sensors is that there is no analog memory
> involved in the readout process and so there are no problems with undesired
> light getting to the pixels in the wrong time.
>
> The only real solution I can see to the rolling effect would be to
> combine memory (i.e. SDRAM) on the same chip with the sensor with a wide
> parallel input from the sensor part, each line digitized and stored in the
> memory in parallel (should be easy when they are on the same chip). Until
> high performance sensors with such technology become available, I would
> rather try to use the ERS sensors perfected by the huge cellphone camera
> market and try to deal with undesired effects by other means.
>

Any sensor will have to make tradeoffs. The important factors for cellphone
cameras (Cost, Size, Power, #of Mpixels for marketing) are VERY different
from the needs of machine vision (and probably also surveillance).

The sensor I mentioned earlier reflects Aptina's view of the best tradeoff
it can make for the automotive/industrial machine vision use cases:
http://www.aptina.com/products/image_sensors/mt9v022ia7atc/#overview

Aptina makes their CMOS sensors with a global shutter, using their
"TrueSNAP" technology, which is a CMOS sensor with analog memory:
http://ericfossum.com/Articles/Cumulative%20Articles%20about%20EF/truesnaparticle.pdf


I don't know enough about the subject to know why they would use analog
memory instead of digital memory as you propose, but my guess is that they
considered that option and decided to go with the analog memory.


With the ERS sensor each line is exposed at a different, but precisely
> known, time. I was thinking of using the additional sensor (attached to the
> 10359
> multiplexor board with the FPGA) to run a smaller rectangular area (i.e.
> 16-64 pixels high by full width - it will be 500-2000fps) and using the
> opposite square areas (sensor ROI has to be a single rectangle) for the
> "optical mouse" correlation algorithm running in the FPGA - that could
> provide additional information to find out the orientation of the camera
> with high temporal precision. This orientation information can be applied
> to the
> full resolution ERS image, and used to compensate the distortions in many
> cases.
>

That might help, but would be very complex. Even with high frame rates,
detecting the camera motion in unconstrained environments is a VERY
difficult problem. If you know the motion is in 2D, as with an optical
mouse, simple correlation works, but in general the camera will have 6
degrees of freedom, which is MUCH harder. If you only care about
orientation, an Inertial Measurement Unit (IMU) would make a lot more
sense, but that is not sufficient. That being said, integrating an IMU in
the Elphel camera would be very cool!
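To make the 2D case concrete, here is a brute-force toy in Python of the
kind of correlation that works for an optical mouse: slide one small strip
over another and pick the (dy, dx) offset with the highest overlap score.
This is purely illustrative (the function name and parameters are my own);
an FPGA implementation would use a far more efficient pipelined scheme.

```python
def estimate_shift(ref, cur, max_shift=4):
    """Estimate the (dy, dx) translation of `cur` relative to `ref`
    by exhaustive cross-correlation. Both are 2-D lists of ints."""
    h, w = len(ref), len(ref[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Correlate over the region where the shifted images overlap.
            score = 0
            for y in range(max(0, dy), min(h, h + dy)):
                for x in range(max(0, dx), min(w, w + dx)):
                    score += ref[y - dy][x - dx] * cur[y][x]
            if best is None or score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```

This only recovers a 2-D translation; it says nothing about rotation or the
other degrees of freedom, which is exactly why the general 6-DOF problem is
so much harder.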

Furthermore, while robotics applications care about the case where the
camera is moving, the more common case for machine vision applications is
one where the camera is stationary and the objects are moving quickly. Your
solution would not do anything to help with imaging fast-moving objects.
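For what the compensation step itself might look like in the simplest case:
if the camera motion really were known (say, a constant horizontal pan),
each ERS row is read out a fixed interval after the previous one and is
therefore displaced proportionally to its row index, so the distortion can
be undone with a per-row shift. The sketch below assumes that idealized
model (the function and parameter names are hypothetical, not Elphel's
actual pipeline), and of course does nothing for independently moving
objects:

```python
def unskew_rows(img, px_per_row):
    """Undo rolling-shutter skew for a constant horizontal pan.
    Row r is read out r line-times later than row 0, so it is
    displaced by r * px_per_row pixels; sample it back into place,
    filling pixels that fall outside the frame with 0."""
    h, w = len(img), len(img[0])
    out = []
    for r, row in enumerate(img):
        s = round(r * px_per_row)  # this row's displacement, in pixels
        out.append([row[x + s] if 0 <= x + s < w else 0 for x in range(w)])
    return out
```

A vertical edge that the rolling shutter has sheared into a diagonal comes
back vertical after this correction, but any object moving on its own
during the readout stays distorted.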


thanks!
-=Abe
