[Elphel-support] Eyesis 4pi

support-list support-list at support.elphel.com
Mon Jun 17 11:55:14 PDT 2013


Dear Jan,
please see the answers below.

 > 1. How much does the quality of the image change with the speed of capture - meaning how different is an image captured at 2.5 fps from one captured at 5 fps?

5 fps is the maximal rate the hardware supports when it is not limited by the recording speed, which with the current camera is 16MB/s. Good quality images (good for the purpose of aberration correction, not just for viewing the JPEGs) are about 2MB per 5MPix frame. Each SSD records 3 channels (15MPix combined) - that gives 2.5 fps. Higher frame rates will be possible with the camera electronics based on the new system boards that we are now working on.
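
As a back-of-the-envelope check of these numbers (a rough sketch only - the 2MB/frame figure is an average for good-quality JPEGs, not a fixed size), in Python:

# Rough frame-rate estimate from the figures above (averages, not guarantees)
SSD_WRITE_RATE_MB_S = 16.0  # sustained recording rate of the current camera
FRAME_SIZE_MB = 2.0         # ~2 MB per 5 MPix frame at aberration-correction quality
CHANNELS_PER_SSD = 3        # each SSD records 3 sensor channels (15 MPix combined)

data_per_capture_mb = FRAME_SIZE_MB * CHANNELS_PER_SSD       # ~6 MB per capture per SSD
max_fps = SSD_WRITE_RATE_MB_S / data_per_capture_mb          # ~2.7 fps, in practice ~2.5 fps
print("approximate sustained frame rate: %.2f fps" % max_fps)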

 > 2. Since we already own a mobile mapping system and are not happy with the camera, are you able to deliver just the imaging part of your system? Only the set of cameras capturing the sphere.

What we actually sell is the image capturing system; all the software we use is Free software available for download under the GNU GPLv3.0 license. The SSDs installed in the camera are part of the system. You can also transfer images over the network and record them on a host PC, but that reduces the maximal recording rate from 16MB/s to 10MB/s (and so the maximal frame rate at the same good image quality).
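
To estimate the effect on the frame rate you can simply scale the numbers above (again only a rough sketch):

# Scaling the SSD-based estimate to the network recording rate
SSD_RATE_MB_S = 16.0
NETWORK_RATE_MB_S = 10.0
SSD_FPS = 2.5               # practical rate when recording to the internal SSDs

network_fps = SSD_FPS * NETWORK_RATE_MB_S / SSD_RATE_MB_S    # ~1.6 fps at the same image quality
print("approximate frame rate over the network: %.2f fps" % network_fps)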

 > 3. On your web site you say it takes 30 minutes to create a panorama. Is this value correct, and does it apply to every panorama, or just to the first one, with the subsequent panos taking much less time? When capturing tens of thousands of images per day, the 30 minutes seem just impossible.

Most of the image processing time is spent on lens aberration correction. Every lens has some aberrations, especially far from the center. That is not a big problem for most cameras, but it is really important when you tile multiple images. Even though we use the best lenses we could find, they still have lower performance than the sensor resolution (especially in the corners of the individual tiles), so we use processing to do the aberration correction. Of course you may bypass that step and use any other software to combine the acquired images - we are focusing on the camera hardware development. Our lens distortion calibration provides a precise mapping of the acquired pixels to the corresponding rays in space, and you may use it to create panoramas with other software at a much higher rate.
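
As an illustration of how such a pixel-to-ray mapping could be used with other software, here is a minimal Python/NumPy sketch that converts per-pixel unit ray directions into equirectangular panorama coordinates. The ray map itself (an H x W x 3 array) and the equirectangular convention are assumptions made for the example, not a description of our calibration file format:

import numpy as np

def rays_to_equirectangular(rays, pano_width, pano_height):
    # rays: H x W x 3 array of unit ray directions from the calibration data (assumed layout)
    x, y, z = rays[..., 0], rays[..., 1], rays[..., 2]
    lon = np.arctan2(x, z)                  # longitude, -pi..pi
    lat = np.arcsin(np.clip(y, -1.0, 1.0))  # latitude, -pi/2..pi/2
    u = (lon / (2.0 * np.pi) + 0.5) * (pano_width - 1)
    v = (0.5 - lat / np.pi) * (pano_height - 1)
    return u, v                             # panorama coordinates for each source pixel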

 > 4. The SSDs you use - is there a chance to install bigger disks and increase operational time?

The SSDs we use are installed inside the camera; they are 1.8" SATA and we can install different capacities. It is also possible to use mSATA format SSDs with adapters, and in the future we'll use just the mSATA devices.

 > 5. Have you tested the positional accuracy of your system in the challenging conditions of urban canyons/tunnels? If so, what were the results? We plan to get as many daily kms as possible (up to 400) and then use the data for survey purposes - this requires very good stitching of panoramas as well as precise position and attitude. From the MEMS IMU you use, I don't have much confidence in the final accuracy, but I might be wrong.

Currently we do not have any software to process the IMU data and test positional accuracy. The hardware acquires high-resolution IMU data at a rate of 2.5 ksamples per second and records it, combined with the image acquisition timestamps, GPS measurements and optionally pulses from an odometer, to the same log with microsecond-resolution timestamps. We also do the same when calibrating cameras - during that process the rotational accuracy is a fraction of an angular minute and the positional accuracy is better than a millimeter - and this data can be used for the IMU calibration. But as I wrote above, currently we only have the hardware recording that data, not the software to process it.
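
As a minimal sketch of how that log could be used once the timestamps are parsed into arrays (the on-disk log format itself is not described here), nearest-sample matching of IMU data to image timestamps in Python/NumPy could look like this:

import numpy as np

def nearest_imu_samples(imu_ts_us, image_ts_us):
    # Both arguments are sorted microsecond timestamps parsed from the combined log
    imu_ts_us = np.asarray(imu_ts_us, dtype=np.int64)
    image_ts_us = np.asarray(image_ts_us, dtype=np.int64)
    idx = np.searchsorted(imu_ts_us, image_ts_us)        # first IMU sample at/after each image
    idx = np.clip(idx, 1, len(imu_ts_us) - 1)
    left_closer = (image_ts_us - imu_ts_us[idx - 1]) < (imu_ts_us[idx] - image_ts_us)
    return np.where(left_closer, idx - 1, idx)           # index of the closest IMU sample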

Andrey




