[Elphel-support] Problems to detect timestamps

support-list support-list at support.elphel.com
Sun Jan 10 13:25:39 PST 2016


Jennifer, how do you currently synchronize the cameras? Do these cameras have 10369 (http://wiki.elphel.com/index.php?title=10369) interface boards?

In any case I would use wget (or just a function in Python, PHP, ...) to repeatedly read images from imgsrv (as explained here: http://wiki.elphel.com/index.php?title=Imgsrv#imgsrv_usage) - each image will have a timestamp with microsecond resolution that you can extract from the Exif header.
As long as the average image size multiplied by the frame rate does not exceed the network bandwidth (if it does, you may need to decrease image quality or frame rate), the cameras will tolerate some delays on the computer side, as long as the internal 19 MB camera buffer is not overrun.
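
For example, something along these lines (a rough, untested sketch in Python - the camera IP is hypothetical, the towp/wait/img/next command chain follows the Imgsrv wiki page above so please verify it against your firmware, and it assumes the third-party exifread package):

import io
import urllib.request

import exifread  # pip install exifread

CAMERA = "http://192.168.0.9:8081"  # hypothetical camera IP, default imgsrv port

def grab_frame():
    # towp: move the read pointer to the latest frame, wait: block until it
    # is ready, img: serve the JPEG, next: advance the pointer
    jpeg = urllib.request.urlopen(CAMERA + "/towp/wait/img/next").read()
    tags = exifread.process_file(io.BytesIO(jpeg), details=False)
    ts = "%s.%s" % (tags["EXIF DateTimeOriginal"],
                    tags["EXIF SubSecTimeOriginal"])  # microsecond fraction
    return jpeg, ts

jpeg, ts = grab_frame()
print(ts, len(jpeg))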

Imgsrv will wait (as explained on our wiki) for the next image to be ready, so the host computer does not need to do anything for synchronization but read files and later match the files that have the same timestamps (the values should be exactly equal).
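
The matching itself can be as simple as grouping frames by the timestamp string, e.g. (a sketch - add_frame() would be fed by the hypothetical grab_frame() helper above, one reader per camera):

from collections import defaultdict

NUM_CAMERAS = 16
pending = defaultdict(dict)  # timestamp -> {camera_index: jpeg}

def add_frame(cam_idx, timestamp, jpeg):
    # Frames from different cameras carrying the same timestamp belong to
    # the same synchronized set; emit the set once every camera reported.
    pending[timestamp][cam_idx] = jpeg
    if len(pending[timestamp]) == NUM_CAMERAS:
        frame_set = pending.pop(timestamp)
        print("complete set for", timestamp, "-", len(frame_set), "frames")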

Alternatively you may use camogm over NFS (the camera will "see" the host computer's HDD/SSD as its own).

If the cameras do not have 10369 I/O boards, the timestamps will be based on each camera's clock, which is derived from a crystal oscillator with +/-50 ppm accuracy. You can increase precision by comparing the timestamps at the beginning and end of a longer interval and calculating a correction for each clock.
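
For example (a sketch with hypothetical numbers - a clock that is fast by 50 ppm gains about 0.18 s per hour, so two reference points an hour apart expose the drift):

def make_corrector(host_t0, cam_t0, host_t1, cam_t1):
    """Return a function mapping camera timestamps to host time."""
    # Log a (host_time, camera_timestamp) pair at the start and end of a
    # long interval; the ratio of elapsed times is the clock rate error.
    rate = (host_t1 - host_t0) / (cam_t1 - cam_t0)  # ~1.0, off by <= 50 ppm
    return lambda cam_t: host_t0 + (cam_t - cam_t0) * rate

# Camera gained 0.18 s over one hour (3600 s) of host time:
to_host = make_corrector(1452460000.0, 1452460000.0,
                         1452463600.0, 1452463600.18)
print(to_host(1452461800.09))  # mid-interval camera time -> host time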

With 10369 boards it is much easier - you can connect all the cameras to one "master" camera using telephone cables/connectors, wired as shown here: http://wiki.elphel.com/index.php?title=10369#J15_-_SYNC_.28external.29
The cameras should be placed in external trigger mode (this may limit the maximal frame rate if there is insufficient light, as in triggered mode exposure and readout are sequential, while in free-running rolling-shutter mode they overlap), and the "master" will set the pace for all the cameras (itself included). The synchronization cable both provides the trigger signal and carries a 52-bit (32+20) timestamp to all the cameras, so each of them is triggered simultaneously and gets exactly the same timestamp value in its Exif header, regardless of the individual camera clock rate.
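
The parameters can be set over HTTP; a rough, untested sketch (TRIG and TRIG_PERIOD are the parameter names used on our wiki - verify them against your firmware; the TRIG_OUT/TRIG_CONDITION values depend on which sync connector is wired, so they are omitted here; IP addresses are hypothetical):

import urllib.request

CAMERAS = ["192.168.0.%d" % i for i in range(9, 25)]  # 16 cameras
MASTER = CAMERAS[0]

def set_params(ip, **params):
    # parsedit.php with "immediate" applies the parameters right away
    query = "&".join("%s=%d" % kv for kv in sorted(params.items()))
    urllib.request.urlopen("http://%s/parsedit.php?immediate&%s" % (ip, query))

for ip in CAMERAS:
    set_params(ip, TRIG=4)  # 4 = external trigger mode (per the wiki)

# The master generates the trigger; TRIG_PERIOD is in 96 MHz pixel-clock
# cycles, so 96000000 // 25 gives a 25 fps trigger (hypothetical rate).
set_params(MASTER, TRIG_PERIOD=96000000 // 25)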

And if you use the data for 3D reconstruction I would recommend the JP4 format (not JPEG) and using the green pixels for processing. You may use color too, of course, but raw Bayer data will be more convenient than JPEG YCbCr, which is subject to artefacts introduced by the simple de-mosaic algorithm in the camera.
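
Once the JP4 data is rearranged back into a Bayer mosaic (the block shuffle is described on the wiki and not shown here), picking out the greens is straightforward; a sketch assuming an RGGB pattern (check your sensor's actual layout) and a numpy array:

import numpy as np

bayer = np.zeros((1200, 1600), dtype=np.uint16)  # placeholder mosaic

g1 = bayer[0::2, 1::2]  # green pixels sharing rows with red
g2 = bayer[1::2, 0::2]  # green pixels sharing rows with blue
green = (g1.astype(np.float32) + g2) / 2.0  # half-resolution green plane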

Andrey


---- On Sun, 10 Jan 2016 11:12:57 -0800 Jennifer Valle <jvs1192 at gmail.com> wrote ---- 

Hello,


Thank you very much for the information and quick response. We will study what you proposed in detail.


I'm going to explain a little about what our project will be.
Our goal is synchronized multi-view capture for 3D analysis with 16 Elphel cameras (an example of this has already been made at this university).
These analyses can be of recordings stored on disk or in real time, so the timestamps are very important in our project.
Millisecond resolution would be almost enough for what we want to do, but microsecond resolution would be better.

We do not want to do the real-time processing in the camera; we want to do it on remote machines (inside a LAN) with GPUs, etc.




What would be your recommendation for this particular project?


Kind regards,


Jennifer Valle



