From: Markus Armbruster
Subject: Re: [Qemu-devel] Re: [PATCH RFC 0/4] Dumping traffic when using netdev devices
Date: Fri, 16 Jul 2010 17:41:39 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.1 (gnu/linux)

Anthony Liguori <address@hidden> writes:

> On 07/15/2010 03:22 PM, Miguel Di Ciurcio Filho wrote:
>> Hello,
>>
>> This is a prototype suggestion. I mostly copied and pasted the code from
>> net/dump.c into net.c and made some adjustments. There is no command line
>> parsing involved yet, just the internals and small changes in net/tap.c and
>> net/slirp.c to make the thing work.
>>
>> In my tests, using tap as the backend, e1000 as the guest device and
>> running iperf from guest to host, the overhead of dumping the traffic
>> caused a performance loss of around 30%.
>>
>> I opened the dumped files in Wireshark and they looked fine. When using
>> slirp, all requests were dumped fine too.
>>    
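For reference, the dump files use the classic libpcap layout, which is
small enough to show whole.  A self-contained sketch below; the frame
bytes and file name are made-up demo values, not from the patch:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    struct pcap_hdr {            /* global header, once per file */
        uint32_t magic;          /* 0xa1b2c3d4, written in native order */
        uint16_t version_major;  /* 2 */
        uint16_t version_minor;  /* 4 */
        int32_t  thiszone;       /* GMT offset; 0 */
        uint32_t sigfigs;        /* timestamp accuracy; 0 */
        uint32_t snaplen;        /* max bytes captured per frame */
        uint32_t network;        /* 1 = Ethernet */
    };

    struct pcaprec_hdr {         /* one per captured frame */
        uint32_t ts_sec;
        uint32_t ts_usec;
        uint32_t incl_len;       /* bytes saved in the file */
        uint32_t orig_len;       /* bytes on the wire */
    };

    int main(void)
    {
        static const uint8_t frame[60] = {
            0xff, 0xff, 0xff, 0xff, 0xff, 0xff,  /* dst: broadcast */
            0x52, 0x54, 0x00, 0x12, 0x34, 0x56,  /* src: QEMU-style MAC */
            0x08, 0x06                           /* ethertype: ARP */
        };                                       /* rest zero-filled */
        struct pcap_hdr gh = { 0xa1b2c3d4, 2, 4, 0, 0, 65535, 1 };
        struct pcaprec_hdr rh = { (uint32_t)time(NULL), 0,
                                  sizeof(frame), sizeof(frame) };
        FILE *f = fopen("demo.pcap", "wb");

        if (!f) {
            return 1;
        }
        fwrite(&gh, sizeof(gh), 1, f);
        fwrite(&rh, sizeof(rh), 1, f);
        fwrite(frame, sizeof(frame), 1, f);
        fclose(f);               /* demo.pcap now opens in Wireshark */
        return 0;
    }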
>
> A less invasive way to do this would be to chain netdev devices.
>
> Basically:
>
> -netdev tap,fd=X,id=foo
> -netdev dump,file=foo.pcap,netdev=foo,id=bar
> -net nic,model=virtio,netdev=bar
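For concreteness, such a chained dump endpoint would essentially be a
pass-through client that records each frame before handing it on.  A
minimal sketch against the current net API (VLANClientState,
NetClientInfo, DO_UPCAST(), qemu_send_packet()); the names
DumpChainState, dump_chain_receive and dump_write_packet are made up
for illustration:

    typedef struct DumpChainState {
        VLANClientState nc;      /* must stay first for DO_UPCAST() */
        int fd;                  /* pcap output file descriptor */
        int pcap_caplen;         /* snapshot length per frame */
    } DumpChainState;

    static ssize_t dump_chain_receive(VLANClientState *nc,
                                      const uint8_t *buf, size_t size)
    {
        DumpChainState *s = DO_UPCAST(DumpChainState, nc, nc);

        dump_write_packet(s, buf, size);  /* append one pcap record */

        /* forward unmodified; qemu_send_packet() delivers the frame
         * to this client's peer */
        return qemu_send_packet(&s->nc, buf, size);
    }

The catch: frames flow in both directions, so the dump device needs a
link toward the NIC *and* one toward the tap backend, which is exactly
the 1:1 peer problem discussed next.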

Is this really less invasive?  It breaks the simple 1:1 relationship
between NIC and network backend.  All the code dealing with
VLANClientState member peer needs to be touched.  For instance, this is
the code to connect peers, in qemu_new_net_client():

        if (peer) {
            assert(!peer->peer);
            vc->peer = peer;
            peer->peer = vc;
        }
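Walked through concretely (hypothetical calls, other arguments
abbreviated), Anthony's chain trips exactly that assertion:

    tap  = qemu_new_net_client(&tap_info,  NULL, NULL, "tap",  "foo");
    dump = qemu_new_net_client(&dump_info, NULL, tap,  "dump", "bar");
        /* ok: dump->peer = tap, tap->peer = dump */
    nic  = qemu_new_net_client(&nic_info,  NULL, dump, "nic",  "nic0");
        /* assert(!peer->peer) fires: dump is already paired with tap */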

Possibly worth it if we had a number of different things we wanted to
insert between the endpoints, but I don't see that right now.

> I think this has some clear architectural advantages.  From a user
> perspective, the only loss is that you have to add the dump device at
> startup (you can still enable/disable capture dynamically).

I don't like this restriction at all.


