[Elphel-support] the discussion on the Heptaclops camera

Biel Bestué de Luna 7318.tk at gmail.com
Sun Oct 28 20:12:43 PDT 2012


   1.   Biel Bestué says:
    October 27, 2012 at 4:12 AM <http://blog.elphel.com/2012/10/heptaclops-camera-and-the-393/comment-page-1/#comment-58504>

   Andrey, I understand this, but if you want to edit the colour heavily in
   footage filmed with the camera, the extra bits per pixel are a really
   welcome feature, so even if it doesn’t make any sense otherwise, would it
   be technically possible to download at least 25 12-bit frames per second?

   about compression – the 4:0:0 scheme from JP4 – is it competitive with
   current monochrome codecs? recently some GPL monochrome codecs have
   surfaced (mostly for game alpha channels, within the newest OpenGL
   libraries). I haven’t made any significant tests, so these are just
   guesses, but wouldn’t those modern codecs outperform JPEG when compressing
   just one channel?

   I did some tests with a poorly compressing codec, PNG. I don’t have the
   numbers here, but it was mostly this:

   1 RGB 3-channel JPEG at 100% compression was the lightest of the
   results coming from the camera

   1 BW 3-channel JPEG @ 100% compr. had the worst weight

   1 JP46 3-channel JPEG @ 100% compr. weighed more than the RGB but
   significantly less than the BW

   I didn’t test JP4…

   1 JP46 1-channel PNG @ 100% compr., re-encoded from the earlier
   JP46 sample, weighed a third less than the JP46 (!)

   so erasing 2 of the three channels gained a lot of space – why not then
   use a new BW, 1-channel codec? maybe this way we could get 12 bits at 25
   FPS, or more resolution at 25 FPS, or let the user decide the compromise
   he wants to take.
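
   The size comparison above can be sketched (not reproduced – the real test
   used actual camera frames and JPEG/PNG encoders) with a quick DEFLATE
   experiment, since PNG compression is DEFLATE-based. The scene data here is
   synthetic, so only the direction of the result matters:

```python
import random
import zlib

random.seed(42)

# Simulate an 8-bit Bayer-like mosaic: a smooth gradient plus per-pixel noise.
W, H = 256, 256
mono = bytes(
    min(255, max(0, (x + y) // 4 + random.randint(-8, 8)))
    for y in range(H) for x in range(W)
)

# "BW 3-channel" version: the same grey value copied into R, G and B.
rgb = bytes(b for v in mono for b in (v, v, v))

mono_z = len(zlib.compress(mono, 9))
rgb_z = len(zlib.compress(rgb, 9))
print(f"1-channel: {mono_z} bytes compressed, 3-channel: {rgb_z} bytes compressed")
```

   Even though the tripled channels are redundant and compress partially, the
   1-channel stream still comes out smaller – the same direction as the PNG
   result above.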
     2.   Biel Bestué says:
    October 27, 2012 at 4:22 AM <http://blog.elphel.com/2012/10/heptaclops-camera-and-the-393/comment-page-1/#comment-58507>

   wait! I made an error in my numbers in the last comment: where I
   said "the PNG weighed a third less", I actually meant that the PNG, at its
   worst compression setting, weighed a little less than the JP46 at 100%
   while using only 1 channel instead of three – that is, only a third of the
   original information.

   so using a 1-channel PNG with lossless compression gave a better result
   than the JP46 at 100% (!)

   excuse my dumb error in my last post -_-’
     3.   andrey says:
    October 27, 2012 at 11:27 AM <http://blog.elphel.com/2012/10/heptaclops-camera-and-the-393/comment-page-1/#comment-58543>

   Biel,
   I just do not understand your "if you want to edit heavily the colour
   from footage filmed with the camera…" statement. The sensor provides data
   where each pixel has some signal value and noise. The simplest source of
   the noise (shot noise) comes from the discrete number of electrons it
   stores. Why, for "heavy colour editing", do you need to know exactly how
   the dice fell for each particular pixel? When you do "heavy editing" that
   involves multiple transformations and save/load operations – then yes, you
   need a lossless format to prevent errors from accumulating. But in a single
   process of acquisition it is enough to keep artifacts reasonably smaller
   than the natural noise.
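
   The shot-noise behaviour described above can be illustrated with a small
   simulation. This is a sketch only – it approximates Poisson photon
   statistics with a binomial draw, and the 400 e- mean is an arbitrary
   illustrative figure:

```python
import math
import random

random.seed(1)

# Each "exposure" counts electrons collected independently with small
# probability p, which approximates Poisson statistics for small p.
def expose(mean_electrons, n_photons=10000):
    p = mean_electrons / n_photons
    return sum(1 for _ in range(n_photons) if random.random() < p)

samples = [expose(400) for _ in range(500)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

# The noise tracks sqrt(signal): ~20 e- of spread on a 400 e- signal.
print(f"mean = {mean:.1f} e-, noise = {math.sqrt(var):.1f} e-, "
      f"sqrt(mean) = {math.sqrt(mean):.1f}")
```

   Two exposures of the same scene under identical light differ by about
   sqrt(N) electrons per pixel – which is why storing the exact dice roll of
   each pixel below that level buys nothing.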

   As you know we use aberration correction, and that amplifies any
   artifacts significantly, so we need higher compression quality and more
   precise reproduction of the pixel data than even "colour editing" does –
   the artifacts that come out during deconvolution (it has to amplify high
   spatial frequency components – those that are sacrificed first by JPEG) are
   not visible by just viewing the footage.

   Using color JPEG is not an option for us – it involves color conversion
   that would ruin our post-processing.

   Comparison of different compression methods should not just compare the
   file size; it should include analysis of the difference between the
   pixel output and the one restored after decompressing, and a comparison of
   that difference signal with the noise sources. Normally compression is
   designed for image/video distribution and is intended to reduce the
   bandwidth while preserving the visible quality. In the camera our goal is
   different – limit bandwidth while preserving the useful sensor data –
   whatever it can provide (sacrificing what is anyway under the noise floor).

   The difference between JP46 and JP4 is just that the first is slightly
   larger – to be compatible with the normal color JPEG format (4:2:0) it
   sends two zero color blocks, but they are nicely compressed because they
   are true zeros. But still – a little more data, and a lower FPS in the 353.
   In JP4 the pixel rate is 160MHz/2 = 80Mpix/sec; in JP46 (and JPEG) it is
   only 160MHz/3 ~= 53.3Mpix/sec. 80Mpix/sec is above the Aptina
   MT9P001/31/06 output (~75Mpix/sec average – the 96Mpix/sec peak does not
   count here), but 53.3 is below.
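
   The rate figures above check out with a couple of lines (the 160MHz
   compressor clock and ~75Mpix/sec sensor average are the numbers quoted in
   the comment):

```python
# Pixel-rate check for the Elphel 353 figures quoted above.
clock = 160e6       # compressor clock, Hz
jp4_rate = clock / 2   # JP4: 2 clocks per pixel
jp46_rate = clock / 3  # JP46 (and JPEG): 3 clocks per pixel
sensor_avg = 75e6      # Aptina MT9P001/31/06 average output, pix/sec

print(f"JP4:  {jp4_rate / 1e6:.1f} Mpix/s (keeps up: {jp4_rate >= sensor_avg})")
print(f"JP46: {jp46_rate / 1e6:.1f} Mpix/s (keeps up: {jp46_rate >= sensor_avg})")
```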

   Andrey
     4.   Biel Bestué says:
    October 27, 2012 at 6:03 PM <http://blog.elphel.com/2012/10/heptaclops-camera-and-the-393/comment-page-1/#comment-58578>

   I’m sorry – I said "editing heavily the colour" when I meant colour
   correction and VFX work, which means working heavily with the footage,
   with several filters and several colour corrections, so having 4096 shades
   of grey per channel would be great. indeed, if the process only needs image
   acquisition and no other filtering or corrections, then 8 bits is ok.

   about the codecs I mentioned: since they are dedicated to BW images, they
   are dedicated to preserving the maximum amount of detail per image. they
   are not video codecs but picture codecs – what I mean is that they are
   "intra" codecs, not "inter" codecs, so they serve the purpose you
   mentioned ("limit bandwidth while preserving the useful sensor data")
   because they are both intra and designed around BW images like RAW images.
     5.   andrey says:
    October 27, 2012 at 7:34 PM <http://blog.elphel.com/2012/10/heptaclops-camera-and-the-393/comment-page-1/#comment-58587>

   Biel,

   We are now saving all the "shades of grey" the sensor provides; what is
   discarded is just noise, not the useful signal. So you could just add some
   noise to the decoded signal – the result would be the same as if you had
   preserved the shot noise from the real pixels. Well, the particular values
   will be different, but if you acquire the same image with exactly the same
   light, the result from the real pixels would also be different.

   As for codecs – what we use is, of course, "intra", and JP4 is not
   dedicated to BW images – it is dedicated to compressing the Bayer mosaic
   from the sensor.

   And please – do not confuse ADC bits and "shades of grey". We use all
   12 bits of the sensor output – if you disable even the LSB (making it
   11), you’ll be able to see the difference. So yes, that is correct –
   the sensor output is 12 bits and all are needed. But the sensor does not
   provide 4096 "shades of grey" – that is a lot even for the 35mm sensor we
   used. The Aptina sensor in the 353 camera has a full well capacity of
   about 8500 e-, so there are fewer than 200 (closer to 100) distinct levels
   you can resolve from individual pixels (even though the readout noise is
   very small). And 8 bits is quite enough to encode the distinct levels of
   grey that the sensor provides.
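
   The "fewer than 200 distinct levels" figure can be reproduced by stepping
   through the pixel’s range one shot-noise sigma at a time. A sketch only –
   it treats shot noise as the only noise source and uses the 8500 e- full
   well quoted above:

```python
import math

full_well = 8500    # e-, full well capacity quoted for the Aptina sensor
levels, n = 0, 1.0  # start near the dark end

# Two signal levels are only distinguishable if they differ by about one
# noise sigma; for shot noise that sigma is sqrt(n) electrons.
while n < full_well:
    n += math.sqrt(n)
    levels += 1

print(levels)  # comes out near 2 * sqrt(full_well), well under 256
```

   The count lands in the low 180s – under 200, and comfortably inside what
   8 encoded bits can represent.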

   Andrey
     6.   Biel Bestué says:
    October 28, 2012 at 6:29 AM <http://blog.elphel.com/2012/10/heptaclops-camera-and-the-393/comment-page-1/#comment-58638>

   Andrey, I’m curious – how do those cameras that deliver more than 8
   bits produce those quantities? do they use more than 12 bits in the A/D
   conversion, like 14 bits and higher? if the current sensor/camera output
   12 bits from the current hardware, how would this "noise" show up? maybe
   in the colour resolution?
   is the ADC process in the imager, between the imager and the FPGA, or
   inside the FPGA? is the imager/sensor independent of the ADC process? I
   mean, can you extract as many bits as you want for the images you extract
   from the imager, or is it limited somehow?
     7.   andrey says:
    October 28, 2012 at 12:08 PM <http://blog.elphel.com/2012/10/heptaclops-camera-and-the-393/comment-page-1/#comment-58666>

   Biel,

   It depends. It is much easier to make a linear ADC, so it is common to
   have a linear ADC with the quantization step matching the sensor readout
   noise (~= noise in the darks) and with as many steps as needed to cover
   the full range. This can be 12 bits for small sensors, 14 or even more
   for cooled scientific-grade sensors (mostly with long readout times – not
   suitable for video). In most of the sensor output range (excluding the
   darkest areas) the predominant noise is shot noise (~= square root of the
   number of electrons in a pixel), and so most cameras use gamma conversion.

   Such non-linear conversion originated from the nonlinearities of the CRTs
   used for viewing, but now it serves a different purpose even while being
   called the same. It is the same as what we do – match the "levels of grey"
   in the sensor (non-linear steps) with the output code. So yes, a camera
   can provide raw 12 or 14 bits, and it will have somewhat smaller noise
   than the encoded output – but the difference is small for the expense of
   much more data. If you make the quantization step half of the sensor
   noise, it will add just 25% to the total shot+quantization noise. And if a
   camera provides more than 8 bits, it normally means linear bits, not 2^12
   or 2^14 levels of grey.
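
   The noise-matched encoding described above can be sketched as a
   square-root "gamma". This is a toy mapping, not Elphel’s actual gamma
   table; the full well figure is the one from the earlier comment:

```python
import math

full_well = 8500  # e-, from the earlier comment

# Map linear electron counts to codes whose step tracks the shot noise,
# so each code step carries about one distinguishable grey level.
def encode(n):
    return round(2 * math.sqrt(n))

def decode(code):
    return (code / 2) ** 2

max_code = encode(full_well)
print(f"codes needed: {max_code} (fits in 8 bits: {max_code < 256})")

# The quantization error stays below the shot noise across the range:
for n in (100, 1000, 8000):
    err = abs(decode(encode(n)) - n)
    print(f"N={n}: quantization error {err:.1f} e-, shot noise {math.sqrt(n):.1f} e-")
```

   So the non-linear code never rounds away more than the noise already
   hides, while an 8-bit code word covers the whole 12-bit linear range.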

   The ADC is always outside of a CCD sensor (CCD technology is incompatible
   with the technologies suitable for ADCs), and it is usually (but not
   always) on the same chip with CMOS sensors. With CMOS you can put
   thousands of ADCs on the same chip.

   In a CMOS sensor the ADC is a given (you can not change it), and it
   usually matches the performance of the pixels. Sometimes not completely –
   as in the sensor we use – this is why they add a programmable-gain analog
   stage between the sensor pixels and the ADC ("ISO settings"). I would
   rather have a 14-bit output from the same sensor and no additional analog
   gains.

   The number of ADC bits required is what I wrote above. The full output
   range of the sensor (determined by the pixel full well capacity – the
   maximal number of electrons it can store) divided by the pixel readout
   noise (>= 1 electron) gives you the number of (linear) ADC levels. The
   number of required encoded (non-linear) bits is different – that is what
   I tried to explain earlier.
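
   Putting those two counts together (full well from the earlier comment;
   the readout noise here is an assumed figure for illustration – the text
   only bounds it at >= 1 e-):

```python
import math

full_well = 8500   # e-, from the earlier comment
read_noise = 2.5   # e-, ASSUMED value for illustration

# Linear ADC: one step per readout-noise sigma over the full range.
linear_levels = full_well / read_noise
linear_bits = math.ceil(math.log2(linear_levels))

# Encoded (non-linear): one step per shot-noise-limited grey level.
encoded_levels = 2 * math.sqrt(full_well)
encoded_bits = math.ceil(math.log2(encoded_levels))

print(f"linear ADC: ~{linear_levels:.0f} levels -> {linear_bits} bits")
print(f"encoded:    ~{encoded_levels:.0f} levels -> {encoded_bits} bits")
```

   With these numbers the linear side needs 12 bits, while the noise-matched
   encoding fits in 8 – the gap the whole thread is about.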
     8.   Biel Bestué says:
    October 28, 2012 at 5:31 PM <http://blog.elphel.com/2012/10/heptaclops-camera-and-the-393/comment-page-1/#comment-58693>

   damn, that is a lot of info. :D

   are the "GAINR, GAING, GAINB and GAINGB" values in the Elphel 353 this
   analog gain in the CMOS?

   what do you mean by "linear bits" versus "2^12, 2^14"? by "linear
   bits" do you mean the colour gradient from black to white resulting from
   using a gamma value of 1, versus the same colour gradient but with gamma
   0.46?

   shot noise is the noise in the "well exposed" areas, isn’t it? the
   minimum "vibration" of the colours in the well-exposed part of the range?

   what part does "black level" play in this? I’ve seen that black level
   changes where the gamma curve starts to affect the picture. black level
   is applied after the acquisition, as gamma seems to be, but before the
   picture compression? I’ve seen that there are colour deviations closer to
   the dark tones, like colour getting too green in the darks – is it
   possible that the green black level should be shifted further, so the
   lowered green could deliver a much cleaner dark colour representation?

