
Re: [RFC net-next 00/18] virtio_net XDP offload


From: Jason Wang
Subject: Re: [RFC net-next 00/18] virtio_net XDP offload
Date: Thu, 28 Nov 2019 12:18:15 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0


On 2019/11/28 11:32 AM, Alexei Starovoitov wrote:
> On Tue, Nov 26, 2019 at 12:35:14PM -0800, Jakub Kicinski wrote:
>> I'd appreciate if others could chime in.
>
> The performance improvements are quite appealing. In general, offloading from
> higher layers into lower layers is necessary long term.
>
> But the approach taken by patches 15 and 17 is a dead end. I don't see how it
> can ever catch up with the pace of bpf development.


This applies to any hardware offloading feature, doesn't it?


> As presented, this approach works for the most basic programs and simple maps.
> No line info, no BTF, no debuggability. There are no tail_calls either.


If I understand correctly, none of the above were implemented in NFP. We can collaborate to find a solution for all of those.


> I don't think I've seen a single production XDP program that doesn't use
> tail calls.


It looks to me that we can manage to add this support.
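For context, this is the kind of construct the offload path would have to translate: a minimal, purely illustrative XDP program doing a tail call through a BPF_MAP_TYPE_PROG_ARRAY (names like jmp_table are made up, not from the series).

/* Illustrative only: a minimal XDP program using a tail call, the kind of
 * construct an offload path would need to handle. Assumes libbpf-style
 * SEC() annotations; jmp_table and parse_ipv4 are hypothetical names. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 8);
	__type(key, __u32);
	__type(value, __u32);
} jmp_table SEC(".maps");

SEC("xdp")
int parse_ipv4(struct xdp_md *ctx)
{
	return XDP_PASS;
}

SEC("xdp")
int xdp_dispatch(struct xdp_md *ctx)
{
	/* Jump to program slot 0; if the slot is empty, execution falls through. */
	bpf_tail_call(ctx, &jmp_table, 0);
	return XDP_DROP;
}

char _license[] SEC("license") = "GPL";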


> Static and dynamic linking is coming. Wrapping one bpf feature at a time with
> a virtio api is never going to be complete.


It's a common problem for any hardware that wants to implement eBPF offloading, not a virtio-specific one.


> How are FDs going to be passed back? OBJ_GET_INFO_BY_FD? OBJ_PIN/GET?
> Where is bpffs going to live?


If we want pinning to work in the virtualization case, it should probably live in both the host and the guest.
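For reference, pinning from userspace today goes through BPF_OBJ_PIN/BPF_OBJ_GET against a bpffs path. A small sketch using the standard libbpf calls is below (the path is just an example); with forwarding, the open question is which side's bpffs such a path would resolve against.

/* Sketch of bpffs pinning from userspace via libbpf; the path is a
 * hypothetical example. With guest->host forwarding it is unclear whether
 * this pin lands in the guest's bpffs, the host's, or both. */
#include <bpf/bpf.h>

int pin_and_reopen(int map_fd)
{
	const char *path = "/sys/fs/bpf/offloaded_map";	/* example path */

	if (bpf_obj_pin(map_fd, path))				/* BPF_OBJ_PIN */
		return -1;
	return bpf_obj_get(path);				/* BPF_OBJ_GET */
}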


> Any realistic XDP application will be using a lot more than a single
> self-contained XDP prog with hash and array maps.


It's possible if we want to use XDP offloading to accelerate VNFs, which often have simple logic.
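As an illustration of the kind of simple VNF logic I mean (a sketch, not code from this series): a hash-map lookup on the IPv4 source address that drops matching packets.

/* Sketch of simple VNF-style logic: drop packets whose IPv4 source address is
 * in a hash map. Names (drop_list, xdp_blocklist) are illustrative. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u32);	/* IPv4 source address */
	__type(value, __u8);
} drop_list SEC(".maps");

SEC("xdp")
int xdp_blocklist(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	struct iphdr *iph = data + sizeof(*eth);

	/* Bounds check covers both the Ethernet and IP headers. */
	if ((void *)(iph + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;
	if (bpf_map_lookup_elem(&drop_list, &iph->saddr))
		return XDP_DROP;
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";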


> It feels like the whole sys_bpf needs to be forwarded from guest into host. In
> the case of true hw offload the host is managing HW, so it doesn't forward
> syscalls into the driver. The offload from guest into host is different. BPF
> can be seen as a resource that the host provides, and the guest kernel plus
> qemu would be forwarding requests between guest user space and the host
> kernel. For example, sys_bpf(BPF_MAP_CREATE) could pass through into the host
> directly. The FD that the host sees would need a corresponding mirror FD in
> the guest. There are still questions about bpffs paths, but the main issue of
> one-feature-at-a-time would be addressed by such an approach.


We are trying to follow what NFP did by starting with a fraction of the full eBPF feature set; it would be very hard to have all eBPF features implemented from the start. It would be helpful if you could clarify the minimal set of features you want to see from the start.
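Just to make the passthrough idea concrete, here is a very rough sketch of what a guest-side forwarding stub could look like. Every structure and function name below (virtio_bpf_req, virtio_bpf_transact, virtio_bpf_install_mirror_fd) is hypothetical; nothing like this exists in the current patches, and the actual transport is an open question.

/* Purely hypothetical sketch of guest->host sys_bpf forwarding. The idea: the
 * guest ships the bpf command plus attributes to the host over a virtio queue,
 * the host runs the real sys_bpf(), and the returned host FD is mirrored by a
 * guest-side FD. None of these structures or helpers exist today. */
#include <linux/bpf.h>
#include <string.h>

struct virtio_bpf_req {			/* hypothetical wire format */
	__u32 cmd;			/* e.g. BPF_MAP_CREATE, BPF_PROG_LOAD */
	__u32 attr_size;
	union bpf_attr attr;
};

struct virtio_bpf_resp {
	__s32 ret;			/* host sys_bpf() return value */
	__s32 host_fd;			/* valid when the command returns an FD */
};

/* Hypothetical transport and FD helpers, declared only for the sketch. */
int virtio_bpf_transact(struct virtio_bpf_req *req, struct virtio_bpf_resp *resp);
int virtio_bpf_install_mirror_fd(int host_fd);

/* Guest-side stub: forward one bpf command and install a mirror FD. */
static int virtio_bpf_forward(int cmd, union bpf_attr *attr, unsigned int size)
{
	struct virtio_bpf_req req = { .cmd = cmd, .attr_size = size };
	struct virtio_bpf_resp resp;

	memcpy(&req.attr, attr, size);
	virtio_bpf_transact(&req, &resp);	/* hypothetical virtqueue round trip */

	if (resp.ret < 0)
		return resp.ret;
	return virtio_bpf_install_mirror_fd(resp.host_fd);
}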


> There could be other solutions, of course.



Suggestions are welcome.

Thanks



