I was in a discussion with the AGL folks today talking about approaches
to achieving zero-copy when running VirGL virtio guests. AIUI (which is
probably not very much) the reasons for copies can come down to a number
of things:
- the GPA not being mapped to a HPA that is accessible to the final HW
- the guest allocation of a buffer not meeting stride/alignment requirements
- data needing to be transformed for consumption by the real hardware?
any others? Is there an impedance mismatch between the different buffer
resource allocators in the guest and the host? Is that just a problem
for non-FLOSS blob drivers in the kernel?
I'm curious if it's possible to measure the effect of these extra copies
and where they occur.
Making a good benchmark is going to be difficult. Copying has big impacts on:
- L3 pressure (pure cost of evictions and loss of "sticky" cache-line benefits)
- Memory request queue and prefetching
- TLB pressure
Conversely, as we are in VM environments, the more pressure other VMs
put on those resources, the more the jitter of the bounce copies will
grow (a lesson learnt from high-speed - > 100 Gbps - user-space
networking). All this to say that a unit test may wrongly give the
impression that copying is not that costly.
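To illustrate why a naive microbenchmark can mislead, here is a minimal sketch (plain Python, all names mine, nothing from any VirGL code) that times a full buffer copy at a size that fits comfortably in cache and at one that almost certainly exceeds L3. On most machines the per-byte cost is noticeably worse once the working set falls out of cache, and even this still misses the cross-VM contention and jitter described above:

```python
import time

def copy_throughput(size, iters=20):
    """Time copying a bytearray of `size` bytes; return GB/s (best of `iters`)."""
    src = bytearray(size)
    best = float("inf")
    for _ in range(iters):
        t0 = time.perf_counter()
        dst = bytes(src)  # one full copy of the buffer
        best = min(best, time.perf_counter() - t0)
    assert len(dst) == size
    return size / best / 1e9

# Small buffer (cache-resident) vs large buffer (likely exceeds L3).
for size in (64 * 1024, 256 * 1024 * 1024):
    print(f"{size:>12} bytes: {copy_throughput(size):6.2f} GB/s")
```

The absolute numbers are machine-dependent and say nothing about the virtio path itself; the point is only that a single in-cache copy measurement will overstate how cheap the bounce copies are.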
Do all resources get copied from the guest buffer to the host, or does
this only occur when there is a mismatch in the buffer requirements?
Are there any functions where I could add trace points to measure this?
If this occurs in the kernel I wonder if I could use an eBPF probe to
count the number of bytes copied?
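As a sketch of the eBPF idea (assuming the copies go through an identifiable, non-inlined kernel function - the symbol here is a placeholder I have not verified against the virtio-gpu/virgl path), a bpftrace one-liner could sum the length argument at a kprobe:

```
# Hypothetical: attach to whichever copy helper the path actually uses;
# for memcpy_toio(to, from, count), arg2 is the byte count.
bpftrace -e 'kprobe:memcpy_toio { @bytes = sum(arg2); @calls = count(); }'
```

Finding the right probe point (and whether it survives inlining) is the real work; once identified, per-call byte counts and call frequency fall out of the maps for free.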
Apologies for the wall of questions, I'm still very new to the 3D side
of things.