From: Mike Lovell
Subject: Re: [Qemu-devel] [PER] Re: socket, mcast looping back frames -> IPv6 broken
Date: Mon, 01 Apr 2013 00:35:03 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130308 Thunderbird/17.0.4

On 03/08/2013 05:47 AM, Samuel Thibault wrote:
> Samuel Thibault, on Fri 08 Mar 2013 10:08:55 +0100, wrote:
> > There does exist some unique address, which is returned by recvfrom,
> > I'll have a look at how to get access to it.
> Ah, no, it's not unique... It's just the host IP address and the same
> port as the multicast address, so it'll be the same for all qemus on the
> same host.  I've checked how Linux bounces the datagram, it's through
> the loopback interface, and thus dispatched over all listeners without
> distinction. I don't see any way to get the information that the packet
> comes from us, except using the ethernet content.

This is actually a problem I dealt with when I was building the switched multicast backend last year ( http://lists.nongnu.org/archive/html/qemu-devel/2012-06/msg04082.html ).

One solution is to use two sockets: one bound to the multicast address, which receives the multicast packets, and another bound to an ephemeral UDP port, which is used for sending. When a packet is to be sent out, call sendto on the ephemeral socket with the multicast address as the destination. Then, using recvfrom on the multicast socket, the source address of each received packet can be compared to the local ephemeral socket's address; if they match, the packet came from us and can simply be dropped. No inspection of the packet contents is needed in this case. A rough sketch of the idea is below.
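
For reference, here is a minimal standalone sketch of that idea (not QEMU code; the group 230.0.0.1 and port 1234 are placeholders, and for brevity it only compares the source port, so a complete implementation would also check that the source address is one of the local host's addresses):

/* Minimal sketch of the two-socket approach. Error handling is mostly
 * omitted; the multicast group and port below are just placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    const char *group = "230.0.0.1";   /* placeholder multicast group */
    const uint16_t port = 1234;        /* placeholder port */

    /* rx socket: bound to the multicast port, joined to the group. */
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int one = 1;
    setsockopt(rx, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    struct sockaddr_in any = { 0 };
    any.sin_family = AF_INET;
    any.sin_addr.s_addr = htonl(INADDR_ANY);
    any.sin_port = htons(port);
    bind(rx, (struct sockaddr *)&any, sizeof(any));

    struct ip_mreq mreq = { 0 };
    inet_pton(AF_INET, group, &mreq.imr_multiaddr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(rx, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    /* tx socket: bound to an ephemeral port.  The looped-back copies of
     * our own packets will carry this port as their source port. */
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in local = { 0 };
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = 0;                /* let the kernel pick the port */
    bind(tx, (struct sockaddr *)&local, sizeof(local));

    struct sockaddr_in self = { 0 };
    socklen_t self_len = sizeof(self);
    getsockname(tx, (struct sockaddr *)&self, &self_len);

    /* Sending always goes through tx, addressed to the multicast group. */
    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    inet_pton(AF_INET, group, &dst.sin_addr);
    dst.sin_port = htons(port);
    sendto(tx, "hello", 5, 0, (struct sockaddr *)&dst, sizeof(dst));

    /* Receiving: drop datagrams whose source matches our tx socket.
     * A real implementation should also compare the source address
     * against the host's own addresses, not just the port. */
    char buf[2048];
    struct sockaddr_in src = { 0 };
    socklen_t src_len = sizeof(src);
    ssize_t n = recvfrom(rx, buf, sizeof(buf), 0,
                         (struct sockaddr *)&src, &src_len);
    if (n >= 0 && src.sin_port == self.sin_port) {
        /* looped-back copy of our own packet: ignore it */
    } else if (n >= 0) {
        printf("got %zd bytes from another peer\n", n);
    }

    close(rx);
    close(tx);
    return 0;
}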

If the group is interested in a solution like this, I can probably make some time over the next couple of days to cook up a patch. Thoughts?

mike


