Re: [PATCH 2/2] 9pfs: fix init_in_iov_from_pdu truncating size


From: Christian Schoenebeck
Subject: Re: [PATCH 2/2] 9pfs: fix init_in_iov_from_pdu truncating size
Date: Wed, 13 May 2020 15:11:52 +0200

On Wednesday, 13 May 2020 01:24:16 CEST, Stefano Stabellini wrote:
> Let me premise that this patch fixes an important bug, so I am not
> asking you to do any more work to get this patch committed right now :-)

Actually, I have realized that this overall issue is much more delicate to 
handle than I thought. My patch would still miss other important things; for 
example, it does not roll back the read / readdir streaming positions of the 
respective FID on the 9P server side. -> data loss / undefined behaviour on 
subsequent read / readdir

Also, the 'count' parameter in the Rread response would need to be updated as 
well. -> undefined behaviour

For these and other reasons already described, I am starting to think that it 
is actually not a good idea to truncate the response at all, at least not at 
that final transport stage (as both my patch and yours do).

Question: is there really no way to handle this overall issue exclusively on 
the Xen side, i.e. delaying response delivery until a large enough transport 
buffer becomes available, or acquiring the required buffer size on demand? Or 
by looking ahead at msize in advance (see below)?

From what I see now, I would recommend completely reverting response 
truncation on the virtio transport side at least. So far I have not seen a 
single case with virtio where the transport buffer was too small for 
delivering any response. And as you can see, response truncation at that late 
transport level opens up all kinds of severe issues on other ends. It is not 
easy to do and requires a lot of code!

> But I think it would be good to test what would happen if the client did
> a read on a directory with hundreds of entries, such as a maildir
> directory. There has to be point where the number of directory entries
> is larger than the shared buffer. What happens then?

I'm talking about virtio only now: that scenario does not happen. When the 
client opens a 9P session, it transmits 'msize' as a session parameter, which 
is the "maximum message size" ever to be assumed on both sides for client 
requests and server responses. By default msize is 4 kiB (4096 bytes) with 
Linux clients, and our server immediately errors out if any client tries to 
open a session with 'msize' < 4 kiB (for reasons similar to those already 
discussed; some 9P response types cannot be truncated at all):

https://github.com/qemu/qemu/commit/e16453a31a00c1c0a199cab0617e8aa888f6419a#diff-f3e98ed0a65e27743b14785fa85b7d79
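
For reference, the kind of check that commit introduces looks roughly like the 
following sketch (constant name, helper name and placement are illustrative, 
not verbatim from the tree):

    #include <errno.h>
    #include <stdint.h>

    /* Sketch of the msize floor enforced while negotiating the session
     * (Tversion). 4096 is the minimum the server is willing to accept. */
    #define P9_MIN_MSIZE 4096u

    static int check_client_msize(uint32_t msize)
    {
        if (msize < P9_MIN_MSIZE) {
            /* Refuse the session: some responses (e.g. Rversion, Rlerror)
             * cannot be truncated, so a smaller msize is not workable. */
            return -EMSGSIZE;
        }
        return 0;
    }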

The Rread and Rreaddir responses are already truncated (in a sane and correct 
way) in 9p.c by the server's request handler in case the response would 
otherwise exceed 'msize'. For that reason it is actually unexpected that the 
transport would not be able to provide the required buffer size for any 
response.
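
Conceptually that clamp happens in the request handler, before the transport 
is involved at all; a minimal illustrative sketch (the helper name and the 
spelled-out 11-byte header constant are mine, not the actual 9p.c code):

    #include <stdint.h>

    /* Rread header: size[4] Rread[1] tag[2] count[4] = 11 bytes. */
    #define P9_RREAD_HDR 11u

    /* Clamp the Rread payload so that header + data always fits into the
     * negotiated msize. Illustrative helper, not the actual 9p.c code. */
    static uint32_t clamp_read_count(uint32_t msize, uint32_t requested)
    {
        uint32_t max_payload = msize - P9_RREAD_HDR;
        return requested < max_payload ? requested : max_payload;
    }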

> I am guessing that we'll have to implement the "truncate at the
> boundaries of individual entries" to get it right in all cases.
> 
> Given that it doesn't look like truncation would work right with
> Rreaddir anyway today, I think it would be OK to fix it in a separate
> patch.

If there is really no way with Xen to ensure there is always a buffer size 
according to msize, then I fear this would need to be addressed in a 
completely different way: by changing the transport interface, asking the 
transport early what the biggest buffer size would be, and then passing that 
info early over to the 9P request handler (9p.c), such that it would already 
be able to assemble the individual responses *correctly* by using

        max_response_size = MIN(msize, client_requested_size, transport_size)
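
A rough sketch of what such an interface change could look like; the callback 
name, the struct and the opaque argument below are hypothetical, the MIN() 
logic is the point:

    #include <stddef.h>

    /* Hypothetical extension of the transport interface: let the transport
     * report the biggest in-buffer it can currently provide for a response. */
    typedef struct V9fsTransportSketch {
        size_t (*get_max_in_size)(void *opaque);   /* hypothetical callback */
    } V9fsTransportSketch;

    static size_t max_response_size(size_t msize, size_t client_requested_size,
                                    const V9fsTransportSketch *t, void *opaque)
    {
        size_t transport_size = t->get_max_in_size(opaque);
        size_t max = msize;

        if (client_requested_size < max) {
            max = client_requested_size;
        }
        if (transport_size < max) {
            max = transport_size;
        }
        /* = MIN(msize, client_requested_size, transport_size) */
        return max;
    }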

> > And that's it, for all other 9P types we can't truncate response at all.
> > For those types we could probably just return EAGAIN, but would it help?
> > Probably would require changes on client side as well then to handle this
> > correctly.
> I think the critical ones are Rread and Rreaddir. I wonder if we could
> even reproduce a bug with any of the other 9p types. Probably not. So
> returning an error might be good enough. Returning an error is also way
> better than calling virtio_error or xen_9pfs_disconnect, more on this
> below.

Xen *MUST* ensure that it always delivers a transport buffer of at least 
4 kiB for 9P responses. See my commit log message in the commit linked above 
for a detailed explanation of why.

And IMO Xen should always be able to provide a transport buffer that is at 
least msize large. If that is the case, then we don't have to worry about 
anything else and can drop this transport truncation code.

> > > > Fixes: 16724a173049ac29c7b5ade741da93a0f46edff7
> > > > Fixes: https://bugs.launchpad.net/bugs/1877688
> > > > Signed-off-by: Christian Schoenebeck <address@hidden>
> > > > ---
> > > > 
> > > >  hw/9pfs/virtio-9p-device.c | 35 +++++++++++++++++++++++++++--------
> > > >  hw/9pfs/xen-9p-backend.c   | 38
> > > >  +++++++++++++++++++++++++++++---------
> > > >  2 files changed, 56 insertions(+), 17 deletions(-)
> > > > 
> > > > diff --git a/hw/9pfs/virtio-9p-device.c b/hw/9pfs/virtio-9p-device.c
> > > > index 536447a355..57e4d92ecb 100644
> > > > --- a/hw/9pfs/virtio-9p-device.c
> > > > +++ b/hw/9pfs/virtio-9p-device.c
> > > > @@ -154,15 +154,34 @@ static void virtio_init_in_iov_from_pdu(V9fsPDU
> > > > *pdu, struct iovec **piov,>
> > > > 
> > > >      VirtQueueElement *elem = v->elems[pdu->idx];
> > > >      size_t buf_size = iov_size(elem->in_sg, elem->in_num);
> > > > 
> > > > -    if (buf_size < P9_IOHDRSZ) {
> > > > -        VirtIODevice *vdev = VIRTIO_DEVICE(v);
> > > > +    if (pdu->id + 1 == P9_RREAD) {
> > > > +        /* size[4] Rread tag[2] count[4] data[count] */
> > > 
> > > 4+2+4 = 10
> > 
> > That's 4+1+2+4 = 11
> > 
> > You were missing "Rread" here which identifies the (numeric) 9P response
> > type and which is always 1 byte in size.
> 
> I thought so. Could you please update the comment in the code? As is it
> is a bit confusing.

I just copy-pasted the 9P spec here, but no problem, that's the smallest 
issue to address.
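
Something along these lines would probably make it unambiguous (just a 
suggestion for the wording):

    /* Rread layout: size[4] Rread[1] tag[2] count[4] data[count]
     * => fixed header is 4 + 1 + 2 + 4 = 11 bytes (the 1-byte message
     *    type is what the spec's short notation leaves implicit). */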

> I think we have to return here because the connection gets closed by
> xen_9pfs_disconnect. I wonder if we should return -EAGAIN like you
> suggested instead of calling xen_9pfs_disconnect() which is
> irrecoverable. But that could be done later in a separate patch.

Even -EAGAIN would require a bunch of code, since the precise error response 
type depends a) on the protocol dialect (9p2000.L vs. 9p2000.u) and b) on the 
precise request type. For some 9P request types it is not even allowed to 
return an error response at all.
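
To illustrate why: 9P2000.L reports errors with Rlerror carrying a numeric 
errno only, while 9P2000.u uses Rerror with an error string plus errno, so 
even the reply type differs per dialect (a schematic sketch; the enum and 
helper are illustrative):

    #include <stdint.h>

    /* 9P reply type numbers as defined by the protocol. */
    enum { P9_RLERROR = 7, P9_RERROR = 107 };

    enum p9_dialect { DIALECT_9P2000_L, DIALECT_9P2000_U };

    /* Schematic only: which reply type an EAGAIN-style error would need. */
    static uint8_t error_reply_type(enum p9_dialect d)
    {
        /* 9p2000.L: Rlerror = size[4] Rlerror[1] tag[2] ecode[4]
         * 9p2000.u: Rerror  = size[4] Rerror[1] tag[2] ename[s] errno[4] */
        return d == DIALECT_9P2000_L ? P9_RLERROR : P9_RERROR;
    }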

As you can see, it all boils down to one thing: better to look for a solution 
that lets Xen deliver the required buffer size, or in the worst case to yield 
the coroutine, i.e. delay delivery of the response until Xen can assure that.
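
A minimal sketch of that last idea, assuming the request runs in a QEMU 
coroutine and the Xen transport re-enters it from its event handler once the 
frontend has consumed earlier responses (the ring struct and the free-space 
helper are hypothetical):

    #include "qemu/osdep.h"
    #include "qemu/coroutine.h"

    typedef struct XenRingSketch {          /* hypothetical */
        Coroutine *waiting;                 /* coroutine to re-enter later */
        /* ... actual ring state ... */
    } XenRingSketch;

    size_t ring_free_in_bytes(XenRingSketch *ring);   /* hypothetical */

    /* Delay response delivery until the ring can hold `needed` bytes.
     * The transport's event handler would qemu_coroutine_enter() the
     * waiting coroutine whenever the frontend frees up ring space. */
    static void coroutine_fn wait_for_ring_space(XenRingSketch *ring,
                                                 size_t needed)
    {
        while (ring_free_in_bytes(ring) < needed) {
            ring->waiting = qemu_coroutine_self();
            qemu_coroutine_yield();
        }
        ring->waiting = NULL;
    }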

Best regards,
Christian Schoenebeck




