
Re: [Discuss-gnuradio] viterbi.c, integer overflow at cumulative metric

From: Jan Krämer
Subject: Re: [Discuss-gnuradio] viterbi.c, integer overflow at cumulative metric
Date: Mon, 14 Jul 2014 09:19:45 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0


Hey Vesa,

the cc_decoder_impl.cc and its corresponding VOLK kernel
"volk_8u_x4_conv_k7_r2_8u.h" in gr-fec already avoid this problem by
normalizing the path metrics after each step in the trellis. I'm not
sure if they are on the master branch yet, but you can check out
Tom Rondeau's gnuradio repo (www.github.com/trondeau).

- -Jan

On 14.07.2014 08:54, Vaskelainen Vesa wrote:
> Hi all,
> I noticed that in viterbi.c, when the input symbol pair quality is
> good, the maximum branch metric in 'mets[ ]' is 512. With errorless
> input symbols, the cumulative metric of the best path therefore
> increases by 512 for each decoded bit. In the BUTTERFLY macro, m0
> and m1 are of type int, i.e. they can hold integers less than 2^31.
> Since the metric increases by 8 x 2^9 = 2^12 for each output byte,
> an integer overflow occurs after the 2^31 / 2^12 = 2^19:th byte and
> the decoder output gets messy.
> This is simple to verify: connect a constant source of -1's to the
> ccsds27decoder, write the output to a file, and run hexdump on the
> file.
> I corrected this by changing the type of m0, m1 and metric to long
> long, so that the integer overflow cannot occur before the 2^51:th
> byte. However, periodically reinitializing the cumulative metric
> would avoid the overflow entirely.
> Best regards, Vesa Vaskelainen
> _______________________________________________ Discuss-gnuradio
> mailing list address@hidden 
> https://lists.gnu.org/mailman/listinfo/discuss-gnuradio

