
Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device


From: Avi Kivity
Subject: Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device
Date: Wed, 10 Mar 2010 19:30:58 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.8) Gecko/20100301 Fedora/3.0.3-1.fc12 Thunderbird/3.0.3

On 03/10/2010 07:13 PM, Anthony Liguori wrote:
> On 03/10/2010 03:25 AM, Avi Kivity wrote:
>> On 03/09/2010 11:44 PM, Anthony Liguori wrote:
>>>> Ah yes. For cross tcg environments you can map the memory using mmio callbacks instead of directly, and issue the appropriate barriers there.
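
(A minimal sketch of what such a barrier-issuing callback pair could look like; the struct and function names here are made up, not the patch's actual API:)

#include <stdint.h>

typedef struct ShmemDev {
    uint8_t *shm;    /* host mapping of the shared object */
} ShmemDev;

static void shmem_mmio_writel(void *opaque, uint64_t addr, uint32_t val)
{
    ShmemDev *s = opaque;

    __sync_synchronize();                         /* barrier before the store */
    *(volatile uint32_t *)(s->shm + addr) = val;
    __sync_synchronize();                         /* and after it */
}

static uint32_t shmem_mmio_readl(void *opaque, uint64_t addr)
{
    ShmemDev *s = opaque;
    uint32_t val;

    __sync_synchronize();
    val = *(volatile uint32_t *)(s->shm + addr);
    __sync_synchronize();
    return val;
}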


>>> Not good enough unless you want to severely restrict the use of shared memory within the guest.

>>> For instance, it's going to be useful to assume that your atomic instructions remain atomic. Crossing architecture boundaries here makes these assumptions invalid. A barrier is not enough.
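
(Concretely: emulation can turn what the guest believes is one indivisible instruction into a plain load/modify/store, and a peer VM can interleave between the pieces; barriers constrain ordering, not atomicity. Illustrative C only:)

#include <stdint.h>

/* What the guest programmer wrote: one indivisible increment
 * (e.g. lock incl on x86). */
void guest_view(volatile uint32_t *p)
{
    __sync_fetch_and_add(p, 1);
}

/* What a cross-architecture emulator may actually execute. */
void emulated_view(volatile uint32_t *p)
{
    uint32_t tmp = *p;   /* load   */
    tmp += 1;            /* modify */
    *p = tmp;            /* store: a peer VM can update *p between the
                            load and the store, and its update is lost;
                            no barrier closes that window */
}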

>> You could make the mmio callbacks flow to the shared memory server over the unix-domain socket, which would then serialize them. Still need to keep RMWs as single operations. When the host supports it, implement the operation locally (you can't render cmpxchg16b on i386, for example).
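
(A sketch of that idea with a made-up wire format; the request struct and server function are illustrative, nothing from the patch:)

#include <stdint.h>
#include <pthread.h>

/* Hypothetical request sent over the unix-domain socket for each
 * RMW the device model cannot perform natively. */
struct shm_rmw_req {
    uint64_t offset;     /* offset into the shared object */
    uint64_t expected;   /* cmpxchg: value we expect to find */
    uint64_t desired;    /* cmpxchg: value to install */
};

/* Server side: one lock serializes RMWs from all peers, so every
 * operation is observed as a single step. */
static pthread_mutex_t rmw_lock = PTHREAD_MUTEX_INITIALIZER;

uint64_t serve_cmpxchg(uint8_t *shm, const struct shm_rmw_req *req)
{
    uint64_t *p = (uint64_t *)(shm + req->offset);
    uint64_t old;

    pthread_mutex_lock(&rmw_lock);
    old = *p;
    if (old == req->expected)
        *p = req->desired;
    pthread_mutex_unlock(&rmw_lock);

    return old;          /* reply goes back over the socket */
}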

> But now you have a requirement that the shmem server runs in lock-step with the guest VCPU, which has to happen for every single word of data transferred.


Alternative implementation: expose a futex in a shared memory object and use that to serialize access. Now all accesses happen from vcpu context, and as long as there is no contention it should be fast, at least relative to tcg.
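
(Roughly: the futex word lives inside the shared object, each peer takes the lock before touching the data, and the uncontended path never leaves vcpu context. A minimal sketch, not robust locking (no waiter counting or owner-death handling):)

#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

static long futex(uint32_t *uaddr, int op, uint32_t val)
{
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

/* f points into the shared memory object: 0 = free, 1 = held.
 * Must be a shared (non-private) futex, since the waiters live in
 * different processes. */
void shm_lock(uint32_t *f)
{
    while (!__sync_bool_compare_and_swap(f, 0, 1))
        futex(f, FUTEX_WAIT, 1);    /* sleep only while the word is 1 */
}

void shm_unlock(uint32_t *f)
{
    __sync_lock_release(f);         /* release-store of 0 */
    futex(f, FUTEX_WAKE, 1);        /* wake one waiter, if any */
}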

> You're much better off using a bulk-data transfer API that relaxes coherency requirements. IOW, shared memory doesn't make sense for TCG :-)

Rather, tcg doesn't make sense for shared memory smp. But we knew that already.

--
error compiling committee.c: too many arguments to function




