[Elphel-support] FPGA CCD Camera Video Compression to send Windows App using Altera Cyclone IV

Andrey Filippov support-list at support.elphel.com
Mon May 9 14:05:19 PDT 2011


On Mon, May 9, 2011 at 2:54 AM, Adam Gibson <c3008353 at uon.edu.au> wrote:

> Hi Andrey,
>
>
>
> I appreciate the reply.
>
> I’ve found this article:
>
>
> http://www.linuxfordevices.com/c/a/Linux-For-Devices-Articles/Elphel-camera-under-the-hood-from-Verilog-to-PHP/
>
>
>
> 1.       Is this article still accurate?
>
Adam, yes, it is mostly accurate. Of course, there have been more additions to
the code since then, but they still follow the ideas described there.


> 2.       Is your MJPEG encoder fully created in hardware?
>
Yes, it is implemented completely in the FPGA, and it uses only about 25% of the
resources - many other functions are implemented in the FPGA as well.


> I’m sifting through the Verilog files for 353 version 8.09 and I’m finding
> it difficult to identify the compression parts and match them to the block
> diagram.
>
>
Most of the compressor parts are under the "i_compressor" instance, but it is
also heavily interdependent with the memory controller (mcontr).
The block diagram that you attached is for the Ogg Theora encoder.


> 3.       Is the compression code something I can dissect out? If so what
> files exactly construct it? I’ve noticed some of the Verilog files are
> labelled 333.
>
You can probably take some of the code, but as I wrote above you need both the
compressor and the memory controller (it reads the data in macroblock order as
overlapping 20x20 pixel tiles, with the overlap intended for the de-mosaic
step). And did you make sure that your project is compatible with the GNU
GPLv3?
 The modules that have "333" in their names are just those that originally
appeared in the previous model 333, but most of them have been modified since
(you can find the revisions in the CVS).

> 4.       What was the reason that your new models changed to MJPEG? Do these
> still achieve 1280x1024 at 27 fps with MJPEG?
>
The main reason is that we are trying to preserve most of the sensor "raw" data
for post-processing while still having reasonable compression that does not
sacrifice image data. Video formats are generally designed for video
distribution and do not preserve the sensor data we need for high-quality
images.



>
>
> Note regarding my system. The only things set are:
>
> ·         It’s on Altera.
>
I believe the differences between FPGA devices are small compared to the
complexity of the project itself, so I would disagree that it is a major
factor. I did not use Altera because of their licensing policy, not because of
the technical specs or my familiarity with it. If they change that policy in
the future, I would be happy to reconsider using their FPGAs in our products.



> ·         The camera gives the Bayer pattern row by row (i.e. R-G rows and B-G
> rows).
>
In Elphel cameras the sensor data (after some pre-processing) is stored in the
SDRAM row by row. Later the data is read out to the compressor as overlapping
20x20 tiles, and de-mosaic is performed before compression. In our preferred
format (JP4) there is no de-mosaic at all.
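To give an idea of what JP4 does instead of de-mosaic (again just a simplified
Python sketch, not the actual implementation, and the exact Bayer
position-to-color mapping is an assumption): each 16x16 Bayer macroblock is
split into four 8x8 single-color blocks that go straight to the DCT and
quantizer, so the original sensor samples can be recovered in post-processing.

import numpy as np

def jp4_rearrange(mb16):
    """Split a 16x16 Bayer macroblock into four 8x8 single-color blocks."""
    assert mb16.shape == (16, 16)
    return [
        mb16[0::2, 0::2],   # Bayer position (0,0), e.g. R
        mb16[0::2, 1::2],   # position (0,1), e.g. Gr
        mb16[1::2, 0::2],   # position (1,0), e.g. Gb
        mb16[1::2, 1::2],   # position (1,1), e.g. B
    ]

# each returned 8x8 block would then be transformed and quantized like a
# regular JPEG block; no color interpolation happens in the camera
mb = np.arange(256, dtype=np.uint16).reshape(16, 16)
blocks = jp4_rearrange(mb)
assert all(b.shape == (8, 8) for b in blocks)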


> ·         I’ve got a very basic Ethernet going, but it will need
> adjustments depending on what data I get.
>
We never did anything special to get "Ethernet going" in our cameras - it is
handled by the operating system.

Andrey



> ·         Everything else requires me to solve the video compression and
> construct the hardware/software around it.
>
>
>
> Again thank you for your help Andrey.
>
>
>
> Adam Gibson
>
>
>
>