
Re: [Discuss-gnuradio] Re: UHD Announcement - February 25th 2011


From: Feng Andrew Ge
Subject: Re: [Discuss-gnuradio] Re: UHD Announcement - February 25th 2011
Date: Tue, 01 Mar 2011 15:52:59 -0500
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101208 Thunderbird/3.1.7

Josh,

That's great, thanks.

When using UHD in GNU Radio, I observed a large time overhead: for example, with the raw-Ethernet code at 500 kb/s, tunnel.py shows only about 8 ms ping RTT between two nodes; with UHD, I see 17 ms on average. As I increase the ping payload, the overhead (excluding the extra data-transfer time) grows accordingly.  Since the USRP2 by default sends data samples immediately and the RTT between UHD and the USRP2 is negligible, I suspect that the interface between UHD and GNU Radio is introducing the overhead.  Do you have any thoughts on this?

Would you tell me what threads are running in UHD when uhd_single_usrp_sink and uhd_single_usrp_source are called? It seems that at least two threads are spawned for each.

Is it right that the maximum amount of data that each socket.send() or socket.recv() can operate on is dynamically determined by noutput_items/ninput_items from the general work function in uhd_single_usrp_*?  Originally I thought that num_recv_frames controlled this, but I noticed that the UDP-transport documentation has been updated: "Note1: num_recv_frames and num_send_frames do not affect performance."


Andrew


On 03/01/2011 02:45 PM, Josh Blum wrote:

Thanks a lot for the explanation.

To explain your observations for the curious:

Prior to the fix, a recv buffer would be lost to the transport layer on
each timeout (thanks to an optimization I made earlier).

So, for every 100 ms window (the default timeout) that did not have at
least 90 packets transmitted, a receive buffer was lost. After 32
timeouts, there were no more available buffers and the flow control
throttled back.

-Josh

answers below:

When you say 90 packets, I assume you mean UDP packets (which
contain samples). Given the default MTU payload of (1500-8-20) B, 2 samples per
symbol, and 4 B per sample, for BPSK or GMSK, 90 packets of samples
correspond to 90*1472/(2*4*8) = 2070 B of user data. If I use 1500 B per
user packet, that's less than 2 packets. For 700 UDP packets, that's
about 10 user packets. This actually explains what I observed: after
about 10 user packets, my transmission stopped. According to you, the
host blocked first. However, it seemed that the USRP didn't send back update
packets for some reason, which is unusual, so it's likely the timeout was
triggered.  To help me understand what caused the above behavior, would
you please spend a little time answering the following questions?

(1) Which parameter (ups_per_fifo or ups_per_sec) corresponds to the
control parameters here (90 transmitted packets and a 700-packet
update window)? (2) How is the update packet generated on the USRP?  (3) In normal
cases, when the host transmits a packet, does it specify a transmission
time for the USRP? If so, it must get the USRP's clock first and then leave
some margin, which introduces some time overhead; if not, does the USRP
send whatever it receives immediately? (4) What is the content of the
short update packet?

1) ups_per_fifo

2) it counts the number of transmitted packets, and sends an update
packet every nth packet (default n = 90)

3) a transmission time is optional, when not specified the send is immediate

4) the sequence of the last transmitted packet

Andrew

On 02/28/2011 05:58 PM, Josh Blum wrote:
A brief description on the flow control implementation:

The flow control is only used on the transmit side to throttle the host
calls to send(). Update packets are received by the host every 90
transmitted packets. If the host does not get an update packet after
about 700 packets, the calls to send() will block until an update
packet arrives or a timeout occurs.

This does not incur any overhead on receive. Update packets are small,
have their own dedicated UDP port, and arrive infrequently. The overhead
on the transmit side is a per-packet check of the flow-control
condition, which looks like this:
return seq_type(_last_seq_out - _last_seq_ack) < _max_seqs_out;

-josh

On 02/28/2011 02:42 PM, Feng Andrew Ge wrote:
Marc,

Unfortunately I don't have much experience with Ethernet pause-frame flow
control.  For my applications, sending is not an issue since we don't
send at high data rates; we are more concerned about the receiver side,
particularly its latency (which is related to CPU consumption too).

Andrew


On 02/28/2011 05:19 PM, Marc Epard wrote:
On Feb 28, 2011, at 3:54 PM, Feng Andrew Ge wrote:

Josh,

I haven't found time to try what you suggested yet; however, it would
be nice if the user had the option to choose host-side flow
control.  In principle, I don't fully trust host-side flow control
simply because there is too much uncertainty in a general-purpose
processor (determined by both its hardware architecture and OS).

Andrew
(We should probably move this to one of the public lists.)

Andrew,

How much experience do you have with Ethernet pause-frame flow
control? It caused us a bunch of problems and judging by the lists
over the last 8 months or so, it creates huge customer support
problems.

-Marc



