
[Discuss-gnuradio] SIGSEGV on benchmark_rx.py


From: Serdar KOYLU
Subject: [Discuss-gnuradio] SIGSEGV on benchmark_rx.py
Date: Sat, 26 Mar 2016 15:03:08 +0000 (UTC)

I recently got two USRP N210 devices and two desktop computers, and built GNU Radio and the related software.

Most of the examples, such as uhd_fft, run fine. But benchmark_rx.py, for both OFDM and narrowband, gets a SIGSEGV and halts.

My hardware, identical on both machines: Core i7-4790K at 4 GHz, 16 GB RAM, H97 chipset.

From /proc/cpuinfo:
flags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt

My software versions:

address@hidden centos]# gnuradio-config-info -v
3.7.9.1

address@hidden centos]# uhd_config_info
linux; GNU C++ version 4.8.5 20150623 (Red Hat 4.8.5-4); Boost_105300; UHD_003.010.git-119-g42a3eeb6

address@hidden centos]# uname -a
Linux localhost.localdomain 3.10.0-327.4.5.el7.x86_64 #1 SMP Mon Jan 25 22:07:14 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

As a basic debugging test, here is the OFDM benchmark_rx.py under gdb (only the output of the "bt" command is shown):

run ./benchmark_rx.py -f 1299250000 -a addr=192.168.10.2

Using Volk machine: avx2_64_mmx
[New Thread 0x7fffdd0d6700 (LWP 30663)]
[New Thread 0x7fffcffff700 (LWP 30664)]
[New Thread 0x7fffcf7fe700 (LWP 30665)]
[New Thread 0x7fffceffd700 (LWP 30666)]
[New Thread 0x7fffce7fc700 (LWP 30667)]
[New Thread 0x7fffcdffb700 (LWP 30668)]
[New Thread 0x7fffcd7fa700 (LWP 30669)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffcf7fe700 (LWP 30665)]
volk_32fc_x2_multiply_32fc_a_avx2_fma (cVector=0x31a5f70, aVector=0x318bf40, bVector=0x31aa4e0, num_points=512) at /home/serdar/gnuradio/volk/kernels/volk/volk_32fc_x2_multiply_32fc.h:254
254        _mm256_store_ps((float*)c,z); // Store the results back into the C container
(gdb)
(gdb) bt
#0  volk_32fc_x2_multiply_32fc_a_avx2_fma (cVector=0x31a5f70, aVector=0x318bf40, bVector=0x31aa4e0, num_points=512) at /home/serdar/gnuradio/volk/kernels/volk/volk_32fc_x2_multiply_32fc.h:254
#1  0x00007fffe6065200 in gr::filter::kernel::fft_filter_ccc::filter (this=0x318a600, address@hidden, input=0x7ffff7fd0000, output=0x7ffff7f22000)
    at /home/serdar/gnuradio/gr-filter/lib/fft_filter.cc:323
#2  0x00007fffe6092f46 in gr::filter::fft_filter_ccc_impl::work (this=0x3189270, noutput_items=358, input_items=..., output_items=...) at /home/serdar/gnuradio/gr-filter/lib/fft_filter_ccc_impl.cc:112
#3  0x00007fffeff6cd67 in gr::sync_decimator::general_work (this=0x31892a0, noutput_items=<optimized out>, ninput_items=..., input_items=..., output_items=...)
    at /home/serdar/gnuradio/gnuradio-runtime/lib/sync_decimator.cc:66
#4  0x00007fffeff3a467 in gr::block_executor::run_one_iteration (address@hidden) at /home/serdar/gnuradio/gnuradio-runtime/lib/block_executor.cc:451
#5  0x00007fffeff75116 in gr::tpb_thread_body::tpb_thread_body (this=0x7fffcf7fddb0, block=..., max_noutput_items=<optimized out>) at /home/serdar/gnuradio/gnuradio-runtime/lib/tpb_thread_body.cc:122
#6  0x00007fffeff6a7e1 in operator() (this=0x31f5030) at /home/serdar/gnuradio/gnuradio-runtime/lib/scheduler_tpb.cc:44
#7  operator() (this=0x31f5030) at /home/serdar/gnuradio/gnuradio-runtime/include/gnuradio/thread/thread_body_wrapper.h:51
#8  boost::detail::function::void_function_obj_invoker0<gr::thread::thread_body_wrapper<gr::tpb_container>, void>::invoke (function_obj_ptr=...) at /usr/include/boost/function/function_template.hpp:153
#9  0x00007fffeff22240 in operator() (this=<optimized out>) at /usr/include/boost/function/function_template.hpp:767
#10 boost::detail::thread_data<boost::function0<void> >::run (this=<optimized out>) at /usr/include/boost/thread/detail/thread.hpp:117
#11 0x00007fffef43b24a in thread_proxy () from /lib64/libboost_thread-mt.so.1.53.0
#12 0x00007ffff7800dc5 in start_thread () from /lib64/libpthread.so.0
#13 0x00007ffff6e2521d in clone () from /lib64/libc.so.6
(gdb)
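
Since the crashing line is an aligned AVX store, and _mm256_store_ps requires a 32-byte aligned address, the pointers themselves seem worth checking. Unless I am misreading it, the cVector shown in frame #0 (0x31a5f70) ends in 0x70, so it is 16-byte aligned but not 32-byte aligned. Something like this in the same gdb session should confirm it (my guess at the commands):

(gdb) frame 0
(gdb) print cVector
(gdb) print (unsigned long) cVector % 32
(gdb) print (unsigned long) aVector % 32
(gdb) print (unsigned long) bVector % 32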

At this point gdb cannot show the value of "z" (it is <optimized out>). The code in question is this, I think:

static inline void volk_32fc_x2_multiply_32fc_a_avx2_fma(lv_32fc_t* cVector, const lv_32fc_t* aVector, const lv_32fc_t* bVector, unsigned int num_points){
  unsigned int number = 0;
  const unsigned int quarterPoints = num_points / 4;
   
  lv_32fc_t* c = cVector;
  const lv_32fc_t* a = aVector;
  const lv_32fc_t* b = bVector;
   
  for(;number < quarterPoints; number++){
    const __m256 x = _mm256_load_ps((float*)a); // Load the ar + ai, br + bi as ar,ai,br,bi
    const __m256 y = _mm256_load_ps((float*)b); // Load the cr + ci, dr + di as cr,ci,dr,di
    const __m256 yl = _mm256_moveldup_ps(y); // Load yl with cr,cr,dr,dr
    const __m256 yh = _mm256_movehdup_ps(y); // Load yh with ci,ci,di,di
    const __m256 tmp2x = _mm256_permute_ps(x,0xB1); // Re-arrange x to be ai,ar,bi,br
    const __m256 tmp2 = _mm256_mul_ps(tmp2x, yh); // tmp2 = ai*ci,ar*ci,bi*di,br*di
    const __m256 z = _mm256_fmaddsub_ps(x, yl, tmp2); // ar*cr-ai*ci, ai*cr+ar*ci, br*dr-bi*di, bi*dr+br*di

// ------------------------------------------------------------------
// At this point the SIGSEGV occurs, and gdb reports:
// c = cVector = 0x31a5f70
// number = 0
// z = unknown (<optimized out>) :(

    _mm256_store_ps((float*)c,z); // Store the results back into the C container
   
    a += 4;
    b += 4;
    c += 4;
  }
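  // ... (rest of the function omitted)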

----------------------------------------------------------------------------------------------------------------
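
If I understand it correctly, the whole loop is just an element-wise complex multiply of the two input buffers. In plain C it would be something like this (my own scalar rewrite for reference, not from the VOLK source):

/* Scalar equivalent of the AVX2/FMA loop above: lv_32fc_t is a complex
   float, so "*" already does the complex multiplication. */
for (unsigned int i = 0; i < num_points; i++) {
    cVector[i] = aVector[i] * bVector[i];
}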

I am quite a novice at DSP and SIMD coding, so I cannot dig much deeper on my own :(
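
The only other thing I could think of trying is a small standalone program that calls the same kernel through the VOLK dispatcher, but with buffers allocated by volk_malloc() so they are guaranteed to be aligned. This is just my own sketch (not part of the benchmark code, and I have not verified it beyond writing it down):

#include <stdio.h>
#include <volk/volk.h>

int main(void)
{
    const unsigned int num_points = 512;
    const size_t alignment = volk_get_alignment();  /* should be 32 on an AVX2 machine */

    lv_32fc_t *a = (lv_32fc_t *)volk_malloc(num_points * sizeof(lv_32fc_t), alignment);
    lv_32fc_t *b = (lv_32fc_t *)volk_malloc(num_points * sizeof(lv_32fc_t), alignment);
    lv_32fc_t *c = (lv_32fc_t *)volk_malloc(num_points * sizeof(lv_32fc_t), alignment);

    for (unsigned int i = 0; i < num_points; i++) {
        a[i] = lv_cmake(1.0f, 2.0f);
        b[i] = lv_cmake(3.0f, -1.0f);
    }

    /* The dispatcher should pick the aligned AVX2/FMA kernel for these buffers. */
    volk_32fc_x2_multiply_32fc(c, a, b, num_points);

    printf("c[0] = %f%+fi (expected 5+5i)\n", lv_creal(c[0]), lv_cimag(c[0]));

    volk_free(a);
    volk_free(b);
    volk_free(c);
    return 0;
}

(I would build it with something like "gcc volk_test.c $(pkg-config --cflags --libs volk) -o volk_test", assuming the volk pkg-config file is installed.)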

Please excuse my bad English.

Best regards.










