qemu-devel

Re: [RFC net-next 00/18] virtio_net XDP offload


From: Jason Wang
Subject: Re: [RFC net-next 00/18] virtio_net XDP offload
Date: Wed, 27 Nov 2019 10:59:37 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0

Hi Jakub:

On 2019/11/27 4:35 AM, Jakub Kicinski wrote:
On Tue, 26 Nov 2019 19:07:26 +0900, Prashant Bhole wrote:
Note: This RFC has been sent to netdev as well as qemu-devel lists

This series introduces XDP offloading from virtio_net. It is based on
the following work by Jason Wang:
https://netdevconf.info/0x13/session.html?xdp-offload-with-virtio-net

Current XDP performance in virtio-net is far from what we can achieve
on host. Several major factors cause the difference:
- Cost of virtualization
- Cost of virtio (populating virtqueue and context switching)
- Cost of vhost, it needs more optimization
- Cost of data copy
Because of the above reasons there is a need to offload the XDP program to
the host. This set is an attempt to implement XDP offload from the guest.
This turns the guest kernel into a uAPI proxy.

BPF uAPI calls related to the "offloaded" BPF objects are forwarded
to the hypervisor; they pop up in QEMU, which makes the requested call
to the hypervisor kernel. Today it's the Linux kernel; tomorrow it may
be someone's proprietary "SmartNIC" implementation.
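
For context, below is a minimal sketch of what such a uAPI call looks like from an application today: an offloaded XDP program load is requested through the bpf() syscall by setting prog_ifindex to the offloading netdev (error handling and a real instruction stream are omitted, and the helper name is just for illustration). Under this proposal the guest kernel would forward a call like this instead of handling it itself.

/* Minimal sketch: requesting an offloaded XDP program load via the
 * bpf() syscall.  A non-zero prog_ifindex asks the kernel to offload
 * the program to that device instead of running it on the host CPU.
 */
#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int xdp_prog_load_offload(const struct bpf_insn *insns,
                                 unsigned int insn_cnt, int ifindex)
{
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.prog_type    = BPF_PROG_TYPE_XDP;
        attr.insns        = (__u64)(unsigned long)insns;
        attr.insn_cnt     = insn_cnt;
        attr.license      = (__u64)(unsigned long)"GPL";
        attr.prog_ifindex = ifindex;   /* request offload to this netdev */

        return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
}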

Why can't those calls be forwarded at the higher layer? Why do they
have to go through the guest kernel?


I think doing the forwarding at a higher layer has the following issues:

- It needs a dedicated library (probably libbpf), but an application may choose to make eBPF syscalls directly
- It depends on a guest agent to work
- It can't work for virtio-net hardware, since a hardware interface is still required for carrying the offloading information
- Implementing it at the kernel level may help with future extensions like BPF object pinning, eBPF helpers, etc.

Basically, this series is trying to provide an implementation of transporting eBPF through virtio, so it's not necessarily guest-to-host but driver-to-device. The device could be either a virtual one (as done in QEMU) or real hardware.
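
To make that a bit more concrete, one can imagine the transport as a control-queue style command that carries the program from driver to device. The sketch below is purely illustrative: the structure, field names and command numbers are hypothetical and not taken from the actual patches.

/* Hypothetical illustration only: a virtio-net control command that
 * carries an eBPF program from driver to device.  The real layout is
 * defined by the RFC patches; this just shows the shape of the idea.
 */
#include <stdint.h>

#define VIRTIO_NET_CTRL_EBPF            6    /* hypothetical class */
#define VIRTIO_NET_CTRL_EBPF_PROG_LOAD  0    /* hypothetical command */

struct virtio_net_ctrl_ebpf_prog {
        uint32_t prog_type;       /* e.g. XDP */
        uint32_t insn_cnt;        /* number of 8-byte eBPF instructions */
        uint32_t gpl_compatible;  /* license information */
        uint8_t  insns[];         /* insn_cnt * 8 bytes of eBPF bytecode */
};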



If the kernel performs no significant work (or "adds value", pardon the
expression), and the problem can easily be solved otherwise, we shouldn't
do the work of maintaining the mechanism.


My understanding is that it should not be much different from other offloading technologies.



The approach of the kernel generating actual machine code, which is then
loaded into a sandbox on the hypervisor/SmartNIC, is another story.


We've considered such an approach, but actual machine code is not as portable as eBPF bytecode, considering that we may want to:

- Support migration
- Further offload the program to a smart NIC (e.g. through macvtap passthrough mode, etc.).
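
For reference, eBPF bytecode is a fixed 8-byte, architecture-independent instruction encoding (struct bpf_insn in the Linux UAPI), which is what makes keeping the program in bytecode form attractive for migration and re-offload, unlike machine code generated for one particular host:

/* eBPF instruction encoding, as defined in include/uapi/linux/bpf.h.
 * The format is independent of the host architecture, so the same
 * program image can be migrated or offloaded again later.
 */
struct bpf_insn {
        __u8    code;        /* opcode */
        __u8    dst_reg:4;   /* destination register */
        __u8    src_reg:4;   /* source register */
        __s16   off;         /* signed offset */
        __s32   imm;         /* signed immediate constant */
};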

Thanks


I'd appreciate it if others could chime in.




