On 05/05/2017 10:27 AM, Jason Wang wrote:
> On 2017-05-04 18:58, Wang, Wei W wrote:
>> Hi,
>>
>> I want to re-open a discussion left off a long time ago:
>> https://lists.gnu.org/archive/html/qemu-devel/2015-11/msg06194.html
>> and discuss the possibility of changing the hardcoded (256) TX queue
>> size to be configurable between 256 and 1024.
>
> Yes, I think we probably need this.

That's great, thanks.
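
For concreteness, the knob could be exposed as a virtio-net device
property, similar to the existing rx_queue_size one; e.g. something
like (the property name here is just a suggestion):

    -device virtio-net-pci,tx_queue_size=1024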

>> The reason to propose this request is that a severe issue of packet
>> drops in the TX direction was observed with the existing hardcoded
>> 256 queue size, which causes performance problems for
>> packet-drop-sensitive guest applications that cannot use indirect
>> descriptor tables. The issue goes away with a 1K queue size.
>
> Do we need even more? What if we find 1K is still not sufficient in
> the future? Modern NICs have ring sizes up to ~8192.

Yes. We could probably raise the RX queue size to 8192 (currently it's
1K) as well.
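
To put rough numbers on the drop issue above: without indirect
descriptors, each packet can occupy up to MAX_SKB_FRAGS + 2 chained
descriptors, so the ring size directly caps the number of in-flight
packets. A quick back-of-the-envelope sketch (MAX_SKB_FRAGS is 17 on
a 4 KiB-page build, so 19 descriptors worst case):

    #include <stdio.h>

    /* Worst-case chained descriptors per skb without indirect
     * descriptor tables: one per page frag, plus the linear part
     * and the vnet header. */
    #define DESC_PER_SKB (17 + 2)

    int main(void)
    {
        int rings[] = { 256, 1024, 8192 };
        for (int i = 0; i < 3; i++)
            printf("ring %4d -> at most %3d worst-case packets in flight\n",
                   rings[i], rings[i] / DESC_PER_SKB);
        return 0;
    }

A 256-entry ring holds only ~13 such packets, which is easy to
overflow in a burst; 1024 raises that to ~53.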

>> The concern mentioned in the previous discussion (please check the
>> link above) is that the number of chained descriptors would exceed
>> UIO_MAXIOV (1024), the limit supported by Linux.
>
> We could try to address this limitation, but that would probably
> need a new feature bit to allow more than UIO_MAXIOV sgs.

I think we should first discuss whether it would actually be an issue;
see below.
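
For reference, here is where that limit bites on the backend side: the
descriptor chain is flattened into one iovec per descriptor and handed
to a single writev() on the tap fd, and Linux rejects iov counts above
UIO_MAXIOV. A minimal sketch (simplified; not the actual QEMU/vhost
code):

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/uio.h>      /* writev(), UIO_MAXIOV (1024) */

    /* One TX descriptor chain -> one writev() on the tap fd,
     * with one iovec per chained descriptor. */
    ssize_t tx_chain(int tap_fd, struct iovec *iov, int chain_len)
    {
        if (chain_len > UIO_MAXIOV) {  /* the kernel would reject this */
            errno = EINVAL;
            return -1;
        }
        return writev(tap_fd, iov, chain_len);
    }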

>> From the code, I think the number of chained descriptors is limited
>> to MAX_SKB_FRAGS + 2 (~18), which is much less than UIO_MAXIOV.
>
> This is the limit on the number of page frags for an skb, not the
> iov limit.

I think the page frags are filled into the same number of descriptors
by the virtio-net driver (e.g. 10 descriptors for 10 page frags). On
the other side, the virtio-net backend uses the same number of iovs
for those descriptors. Since the number of page frags is limited to
18, I think there wouldn't be more than 18 iovs passed to writev,
right?
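
To make that argument concrete, here is a userspace model (names are
illustrative, not the real kernel structures; the driver's scatterlist
in drivers/net/virtio_net.c is in fact sized MAX_SKB_FRAGS + 2):

    #include <assert.h>
    #include <sys/uio.h>   /* UIO_MAXIOV */

    #define MAX_SKB_FRAGS 17   /* with 4 KiB pages */

    /* vnet header + linear part + one descriptor per page frag */
    static int descs_for_skb(int nr_frags)
    {
        return nr_frags + 2;
    }

    int main(void)
    {
        /* The chain length depends only on the frag count, never on
         * the ring size, and stays far below UIO_MAXIOV (1024). */
        for (int frags = 0; frags <= MAX_SKB_FRAGS; frags++)
            assert(descs_for_skb(frags) <= UIO_MAXIOV);
        return 0;
    }

So growing the ring to 1024 (or beyond) shouldn't by itself push the
per-packet iov count anywhere near UIO_MAXIOV.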