[Elphel-support] Cameras and processing modules for Self Driving Cars Vision System

Andrey Filippov andrey at elphel.com
Fri Jun 30 08:14:38 PDT 2017


Hello Marius,

Throughout their history, Elphel cameras have used sensors from multiple manufacturers, and the cameras are designed to be flexible enough to accommodate new sensors as well. Currently, off the shelf, we have a 5 MPix sensor that we consider a good balance between sensitivity and resolution - the OnSemi/Aptina MT9P006 with 2.2 um x 2.2 um pixels - as well as a smaller-pixel 14 MPix sensor (still only partially supported by the software drivers). For the book scanning project we used large 35 mm format Kodak CCDs connected to the same system boards. At that time our system board was the 10313, but all the later ones (10333, 10353 and the current 10393) are backward compatible - just some software for the older sensors is missing.

All our code is under GNU GPLv3+, excluding the Linux kernel drivers, which have to be GNU GPLv2(+); hardware designs are under the CERN OHL. Mechanical parts are documented on Elphel Wiki, and most parts are also viewable in X3D using our FreeCAD-based converter from STEP file libraries into X3D assemblies, like here: https://www.elphel.com/wiki/Elphel_camera_assemblies . All FPGA code is GPLv3.

We support the FSF and Richard Stallman in not using the term "IP", so we do not have or develop any of it. But we do have plenty of Verilog code in our Git repository (all our code is there, without any exclusions); the main one is https://git.elphel.com/Elphel/x393/tree/dct (there is a block diagram if you scroll down the page). This code is really free software: it does not rely on any "black boxes" or "IP" and can be completely simulated with Icarus Verilog (GNU GPLv2).
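Since the repository linked above contains a DCT block, a software reference model is often handy when checking such FPGA code in simulation. Below is a minimal sketch of a naive 8x8 2-D DCT-II with orthonormal scaling in Python - purely an illustration of the transform, not the actual Verilog implementation or its fixed-point scaling:

```python
import math

def dct2_8x8(block):
    """Naive 2-D DCT-II of an 8x8 block (orthonormal scaling).

    block: 8x8 list of lists of floats. Returns an 8x8 list of lists
    of DCT coefficients, out[u][v].
    """
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            # Orthonormal scale factors: sqrt(1/N) for DC, sqrt(2/N) otherwise.
            cu = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
            cv = math.sqrt(1.0 / N) if v == 0 else math.sqrt(2.0 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

# Sanity check: a constant block should produce only a DC coefficient.
flat = [[1.0] * 8 for _ in range(8)]
coeffs = dct2_8x8(flat)
print(coeffs[0][0])  # → 8.0 (all other coefficients ~0)
```

A model like this can serve as the golden reference that a simulation testbench compares the hardware output against, coefficient by coefficient.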

For FPGA development we also provide advanced tools based on the Eclipse IDE that support Xilinx Vivado (and the older ISE) and Intel/Altera Quartus, and can accommodate other architectures. It integrates a modified version of VEditor, Icarus Verilog and Cocotb (for simulation with Python - https://blog.elphel.com/2016/07/i-will-not-have-to-learn-systemverilog/ ) and interacts with the Xilinx proprietary tools over ssh/rsync:
https://git.elphel.com/Elphel/vdt-plugin
https://blog.elphel.com/2016/05/tutorial-02-eclipse-based-fpga-development-environment-for-elphel-cameras/
https://www.youtube.com/watch?v=9g8udf_nhZE

3D camera - no, it is not https://www3.elphel.com/stereo_setup . It is something much more advanced, based on both new hardware and new software, and it is targeted to do most of the job directly in the camera FPGA. I hope to post some demo links to intermediate results soon.

And self-driving cars are one of the target applications for this project. Mars rovers can probably benefit from it too - long range passive 3D reconstruction requires less of the valuable power budget than lasers do.
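To give a sense of the geometry behind passive ranging: for a rectified stereo pair, depth follows Z = f * B / d, where f is the focal length in pixels, B the baseline and d the disparity. A minimal sketch with made-up numbers (the focal length and baseline below are illustrative, not any actual Elphel configuration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in meters from stereo disparity: Z = f * B / d.

    focal_px: focal length in pixels, baseline_m: camera separation in
    meters, disparity_px: horizontal pixel shift between rectified views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With a 2200-pixel focal length and a 0.25 m baseline (illustrative
# values), a single pixel of disparity already corresponds to 550 m:
print(depth_from_disparity(2200, 0.25, 1.0))  # → 550.0
```

Because range grows as disparity shrinks, sub-pixel disparity resolution and wider baselines are what push passive 3D reconstruction out to long ranges.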

Andrey

  Hi Andrey,
 Thank you for getting back to me!
 I'm trying to figure out which FPGAs and components I should get to build smart cameras that support 1-4 x 1080p 60 FPS (or better) low-light (2+ um pixel) sensors, for open source robotics and self-driving car development.
 
 I understand these solutions may not be cheap or easy to approach; I'm trying to see what can best be used as a relatively future-proof starting point for open source development in this area. OSSDC being a global community-driven project, I'm also looking at affordable solutions that I can promote for development and testing of SDC algorithms all around the world, as the OSSDC target is global full autonomy (not autonomy only applicable in certain areas).
 It would be great if you could give me some ideas about which configurations I should try (my budget is very limited for now), and which IPs may help me get started faster.
 I'll soon post more details about the OSSDC platform architecture and Smart Camera. Here are some things we have collected; more discussions happened on the OSSDC Slack (ossdc.org):
     https://github.com/OSSDC/OSSDC-SmartCamera/issues/1
     https://medium.com/@mslavescu/what-about-putting-a-computer-vision-processor-on-the-camera-or-sensor-platform-itself-d0622b24f5c
 I'm also looking into HDMI capture solutions (for both cameras and gaming consoles/PCs):
     https://medium.com/@mslavescu/hdmi-capture-and-analysis-on-fpga-for-just-150-8b34652b0b0c
     https://medium.com/@mslavescu/get-ready-to-race-ai-with-us-at-ossdc-org-b741e266e362
 
 I assume this is the 3D camera you mention. Would it be possible to use a different sensor to increase the frame rate? And is it possible to get uncompressed video to the PC (to do further processing on the raw data using Nvidia GPUs)?
 
     https://www3.elphel.com/stereo_setup
 A stereo (multi) camera with dense depth maps would be perfect for SDC; that is what I think is necessary to achieve full autonomy at relatively low cost.
 
 Thank you,
 Marius
 




