
Re: [Discuss-gnuradio] [USRP-users] Receiving 200 Msps with an X300

From: Peter Witkowski
Subject: Re: [Discuss-gnuradio] [USRP-users] Receiving 200 Msps with an X300
Date: Wed, 4 Mar 2015 16:14:51 -0500

Hi Steve,

On my machine, I had to do two additional steps.

1.  Set the PCIe transfer size to 4096.  See method #3 here: http://dak1n1.com/blog/7-performance-tuning-intel-10gbe.
2.  Set the number of descriptors in the NIC to its maximum of 4096.  The command for this is ethtool -G ethX rx 4096 tx 4096 (where ethX is your 10GigE port).
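For reference, step 2 (plus a quick check that it took effect) might look like the following; eth2 here is only a placeholder for your 10GigE interface name, and the commands need root:

```shell
# Show the NIC's preset maximums and current ring sizes first:
ethtool -g eth2
# Raise the RX/TX descriptor rings to the maximum (4096 on this card):
ethtool -G eth2 rx 4096 tx 4096
```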

Also, you can distinguish a packet dropped by the NIC from an overflow in the device's internal buffers by running ethtool -S ethX and looking at rx_missed_errors.  If that counter is increasing, your NIC cannot keep up and is dropping packets.
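A quick way to watch that counter while streaming (eth2 again being a placeholder for your interface name):

```shell
# Refresh rx_missed_errors once a second; a climbing count means the
# NIC is dropping packets, as opposed to the device overflowing.
watch -n 1 'ethtool -S eth2 | grep rx_missed_errors'
```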

One more note: when I added buffering, I left dirty_background_ratio alone.

Good luck capturing data.

On Wed, Mar 4, 2015 at 2:25 PM, Steve Taylor via USRP-users <address@hidden> wrote:
Hello all,

I am trying to use the USRP X300 to record 200 Msps (2x 100 Msps streams) to disk. I am converting the signed 16-bit over-the-wire (otw) format to 4-bit samples, reducing the data rate to something less demanding on the drives (200 MB/s).

For the most part everything works great; however, I occasionally see a "D", indicating a dropped sample. I will outline below what I have tried so far. My question is: what is going on, and how can I prevent the dropped samples?

My current platform:
i7-4790K @ 4.0 GHz
4x 8GB DDR3 1333 MHz
ASUS Maximus VI Hero
4x WD black drives in RAID 0 (via mdadm)
10 Gb Intel card sold by Ettus
USRP X300 with 2x WBX 120 MHz
Ubuntu 14.04
Linux 3.13.0-45-generic
UHD git branch master as of 3/3/2015

As recommended in the manual, I have set net.core.rmem_max to 33554432, set rtprio to 99 for the current user's group, and set the MTU to 9000. I have also set vm.dirty_background_ratio to 0 (to discourage Linux from buffering the data before writing to disk, since I need long, sustained writes, not bursts).
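For anyone reproducing this setup, those settings amount to roughly the following (eth2 is a placeholder interface name; the commands need root, and the sysctl changes do not persist across reboots unless added to /etc/sysctl.conf):

```shell
# Socket receive buffer size, per the UHD manual:
sysctl -w net.core.rmem_max=33554432
# Jumbo frames on the 10GigE link:
ip link set eth2 mtu 9000
# Flush dirty pages immediately rather than in bursts:
sysctl -w vm.dirty_background_ratio=0
# rtprio is granted in /etc/security/limits.conf, e.g. for group "usrp":
#   @usrp  -  rtprio  99
```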

In my call to recv(), I pass in a buffer that holds 1 ms of data. With this buffer size, I see 3-4 dropped packets in about 15 minutes of recording. Increasing the buffer size to 10 ms decreases the number of drops, but each drop then costs more data because of the longer buffer. Decreasing the buffer size to 0.1 ms results in many more dropped-packet indicators.

To rule out the RAID array as the issue, I have also tested writing to /dev/null, as well as removing the call to std::ofstream::write() entirely. I still get several dropped packets in 15 minutes, even when I call only recv() in a while loop without doing anything with the data.

I have also switched to UHD's built-in sc16 -> sc16 conversion to eliminate my own converter implementation as a possible cause. For this test I also removed the calls to ofstream::write(). Again, I saw drops.

My application creates a multi_usrp which uses both frontends of the X300. I have also seen the drops in rx_sample_to_file when streaming with only one frontend at 200 Msps.

Please let me know if there is more information I should provide.

Thank you for your consideration,



Peter Witkowski
