Re: [Qemu-devel] [PATCH 17/21] virtio-channel: parse qga stream for VMDUMP_INFO event


From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH 17/21] virtio-channel: parse qga stream for VMDUMP_INFO event
Date: Wed, 5 Apr 2017 18:39:55 +0100
User-agent: Mutt/1.7.1 (2016-10-04)

On Wed, Apr 05, 2017 at 12:06:56PM -0500, Eric Blake wrote:
> On 04/05/2017 11:12 AM, Daniel P. Berrange wrote:
> > On Sat, Mar 11, 2017 at 05:22:52PM +0400, Marc-André Lureau wrote:
> >> On virtio channel "org.qemu.guest_agent.0", parse the JSON stream until
> >> the VMDUMP_INFO event is received, and retrieve the dump details.
> >>
> 
> > 
> > so we just continually feed data into the JSON parser until we see the
> > event we care about...
> > 
> > What kind of denial of service protection does our JSON parser have? Now
> > that QEMU is directly parsing JSON from the QEMU guest agent, it is exposed
> > to malicious attacks by the guest agent.
> 
> Our JSON parser rejects input that exceeds various limits:
> 
> json-lexer.c:
> #define MAX_TOKEN_SIZE (64ULL << 20)
> 
> json-streamer.c:
> #define MAX_TOKEN_SIZE (64ULL << 20)
> #define MAX_TOKEN_COUNT (2ULL << 20)
> #define MAX_NESTING (1ULL << 10)
> 
> > 
> > e.g. what happens if the 'vmcoreinfo' string in the JSON doc received from
> > the guest ends up being 10GB in size? Is that going to cause our JSON
> > parser to allocate a QString which is 10GB in size, which we'll then also
> > try to strdup just below?
> 
> The parser will have rejected the guest data long before the 10GB mark.
> But our error recovery from that rejection may not be ideal...

Ok, good, we should be pretty much ok then.
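
For illustration, a rough sketch of the kind of size-capped feed loop being
discussed follows. The 64 MiB cap mirrors the MAX_TOKEN_SIZE value quoted
above, but the type and helper names are invented for this example and are
not QEMU's json-streamer API; a real parser would hand complete JSON objects
to an emit callback rather than substring-searching the buffer.

/*
 * Illustrative sketch only: a bounded accumulator that stops reading once
 * the guest has sent more than MAX_BUFFERED bytes without producing the
 * VMDUMP_INFO event we are waiting for.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_BUFFERED (64ULL << 20)   /* same order as MAX_TOKEN_SIZE above */

typedef struct {
    char *buf;
    size_t len;
    bool overflow;
    bool found;
} QgaScanner;

/* Feed one chunk read from the virtio channel.  Returns false once the
 * guest has exceeded the cap, so the caller can stop reading. */
static bool qga_scanner_feed(QgaScanner *s, const char *data, size_t size)
{
    char *nbuf;

    if (s->overflow || s->found) {
        return !s->overflow;
    }
    if (s->len + size > MAX_BUFFERED) {
        s->overflow = true;          /* reject, do not keep growing */
        return false;
    }
    nbuf = realloc(s->buf, s->len + size + 1);
    if (!nbuf) {
        s->overflow = true;
        return false;
    }
    s->buf = nbuf;
    memcpy(s->buf + s->len, data, size);
    s->len += size;
    s->buf[s->len] = '\0';
    /* Stand-in for the real JSON emit callback. */
    if (strstr(s->buf, "\"VMDUMP_INFO\"")) {
        s->found = true;
    }
    return true;
}

int main(void)
{
    QgaScanner s = { 0 };
    const char *chunk =
        "{\"event\": \"VMDUMP_INFO\", \"data\": {\"format\": \"elf\"}}\n";

    if (qga_scanner_feed(&s, chunk, strlen(chunk)) && s.found) {
        printf("got VMDUMP_INFO after %zu bytes\n", s.len);
    }
    free(s.buf);
    return 0;
}

The point is just that the guest can only make the host buffer a bounded
amount of data before the stream is rejected, which is what keeps the 10GB
string scenario from turning into a 10GB allocation on the QEMU side.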


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


