From: Klaus Jensen
Subject: Re: [PATCH v8 2/5] hw/block/nvme: pull aio error handling
Date: Tue, 17 Nov 2020 08:16:33 +0100

On Nov 16 19:18, Klaus Jensen wrote:
> On Nov 16 09:57, Keith Busch wrote:
> > On Thu, Nov 12, 2020 at 08:59:42PM +0100, Klaus Jensen wrote:
> > > +static void nvme_aio_err(NvmeRequest *req, int ret)
> > > +{
> > > +    uint16_t status = NVME_SUCCESS;
> > > +    Error *local_err = NULL;
> > > +
> > > +    switch (req->cmd.opcode) {
> > > +    case NVME_CMD_READ:
> > > +        status = NVME_UNRECOVERED_READ;
> > > +        break;
> > > +    case NVME_CMD_FLUSH:
> > > +    case NVME_CMD_WRITE:
> > > +    case NVME_CMD_WRITE_ZEROES:
> > > +        status = NVME_WRITE_FAULT;
> > > +        break;
> > > +    default:
> > > +        status = NVME_INTERNAL_DEV_ERROR;
> > > +        break;
> > > +    }
> > 
> > Just curious, is there potentially a more appropriate way to set an nvme
> > status based on the value of 'ret'? What is 'ret' representing anyway?
> > Are these errno values?
> > 
> 
> Yes, it's errno values from down below.
> 
> But looking at this more closely, it actually looks like this is where
> we should behave as dictated by the rerror and werror drive options.
> 
> I'll do a follow up patch to fix that.

So, following up on this after looking more into it.

Currently, the device is basically behaving as if werror and rerror were
both set to "report", that is, reporting the error to the guest.

Since we currently do not support werror and rerror, I think it is fine
to behave as if they were "report" and set a meaningful status code that
fits the command that failed (if we can).

But I'll start working on a patch to support rerror/werror, since it
would be nice to have.


