From: ronnie sahlberg
Subject: Re: [Qemu-devel] [PATCH 1/2] iscsi: add support for bdrv_co_is_allocated()
Date: Fri, 21 Jun 2013 10:13:49 -0700

On Fri, Jun 21, 2013 at 10:06 AM, Peter Lieven <address@hidden> wrote:
> On 21.06.2013 18:31, Paolo Bonzini wrote:
>> On 21/06/2013 13:07, Kevin Wolf wrote:
>>>>>>> Note that you're blocking here. The preferred way would be something
>>>>>>> involving a yield from the coroutine and a reenter as soon as all
>>>>>>> requests are done. Maybe a CoRwLock does what you need?
>>>>> Is there a document how to use it? Or can you help here?
>>> The idea would be to take a read lock while any request is in flight
>>> (i.e. qemu_co_rwlock_rdlock() before it's started and
>>> qemu_co_rwlock_unlock() when it completes), and to take a write lock
>>> (qemu_co_rwlock_wrlock) for the part of iscsi_co_is_allocated() that
>>> requires that no other request runs in parallel.
>>>
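(For reference, a minimal sketch of the CoRwLock discipline Kevin describes
above, not the code that was eventually merged: ordinary requests hold the
lock shared for as long as they are in flight, and the is_allocated path
takes it exclusive, so it only runs once every other in-flight request has
completed. Only the qemu_co_rwlock_* calls and coroutine_fn are real QEMU
API; the struct, the function names and the elided request logic are
illustrative.)

/* Header path varies by QEMU version: block/coroutine.h in this era,
 * qemu/coroutine.h in current trees. */
#include "block/coroutine.h"

typedef struct IscsiLunLocked {
    CoRwlock rwlock;    /* hypothetical field, set up once with
                         * qemu_co_rwlock_init() when the LUN is opened */
    /* ... the existing IscsiLun fields ... */
} IscsiLunLocked;

static int coroutine_fn iscsi_co_do_request(IscsiLunLocked *lun)
{
    /* Shared lock held for the whole lifetime of a normal request. */
    qemu_co_rwlock_rdlock(&lun->rwlock);
    /* ... issue the iSCSI command and yield until its callback
     *     re-enters this coroutine ... */
    qemu_co_rwlock_unlock(&lun->rwlock);
    return 0;
}

static int coroutine_fn iscsi_co_is_allocated_excl(IscsiLunLocked *lun)
{
    /* Yields until all readers above have unlocked, i.e. until no other
     * request is in flight; the main loop keeps running meanwhile. */
    qemu_co_rwlock_wrlock(&lun->rwlock);
    /* ... the part of iscsi_co_is_allocated() that must not run in
     *     parallel with other requests ... */
    qemu_co_rwlock_unlock(&lun->rwlock);
    return 0;
}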
>> You can just send the SCSI command asynchronously and wait for the
>> result.  There is an example in block/qed.c, the same would apply for iscsi.
>
> Thanks for the pointer Paolo, this was what I was looking for. This seems to
> work:
>
> static void
> iscsi_co_is_allocated_cb(struct iscsi_context *iscsi, int status,
>                         void *command_data, void *opaque)
> {
>     struct IscsiTask *iTask = opaque;
>     struct scsi_task *task = command_data;
>     struct scsi_get_lba_status *lbas = NULL;
>
>     iTask->complete = 1;
>
>     if (status != 0) {
>         error_report("iSCSI: Failed to get_lba_status on iSCSI lun. %s",
>                      iscsi_get_error(iscsi));
>         iTask->status   = 1;
>         goto out;
>     }
>
>     lbas = scsi_datain_unmarshall(task);
>     if (lbas == NULL) {
>         iTask->status   = 1;
>         goto out;
>     }
>
>     memcpy(&iTask->lbasd, &lbas->descriptors[0],
>            sizeof(struct scsi_lba_status_descriptor));

Only the first descriptor?
sector_num -> sector_num + nb_sectors could be partially allocated, in
which case you get multiple descriptors.
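
(An untested sketch of how the callback could walk all returned descriptors
instead of copying only descriptors[0]: it measures the contiguous run that
keeps the provisioning status of the first descriptor. It assumes the
num_descriptors/descriptors layout that libiscsi's scsi_datain_unmarshall()
produces for GET LBA STATUS; the helper name is made up.)

#include <stdint.h>
#include <iscsi/scsi-lowlevel.h>

/* Length, in LUN blocks, of the run starting at descriptors[0] that keeps
 * the same provisioning status across adjacent descriptors. */
static uint64_t lba_status_run_length(const struct scsi_get_lba_status *lbas)
{
    const struct scsi_lba_status_descriptor *first = &lbas->descriptors[0];
    uint64_t run = first->num_blocks;
    uint64_t next_lba = first->lba + first->num_blocks;
    uint32_t i;

    for (i = 1; i < lbas->num_descriptors; i++) {
        const struct scsi_lba_status_descriptor *d = &lbas->descriptors[i];

        if (d->lba != next_lba || d->provisioning != first->provisioning) {
            break;    /* a gap or a status change ends the run */
        }
        run += d->num_blocks;
        next_lba += d->num_blocks;
    }
    return run;
}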


>
>     iTask->status   = 0;
>
> out:
>     scsi_free_scsi_task(task);
>
>     if (iTask->co) {
>         qemu_coroutine_enter(iTask->co, NULL);
>     }
> }
>
> static int coroutine_fn iscsi_co_is_allocated(BlockDriverState *bs,
>                                               int64_t sector_num,
>                                               int nb_sectors, int *pnum)
> {
>     IscsiLun *iscsilun = bs->opaque;
>     struct IscsiTask iTask;
>
>     *pnum = nb_sectors;
>
>     if (iscsilun->lbpme == 0) {
>         return 1;
>     }
>
>     iTask.iscsilun = iscsilun;
>     iTask.status = 0;
>     iTask.complete = 0;
>     iTask.bs = bs;
>
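>     /* allocation length: 8 byte header + one 16 byte LBA status descriptor */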
>     if (iscsi_get_lba_status_task(iscsilun->iscsi, iscsilun->lun,
>                                   sector_qemu2lun(sector_num, iscsilun),
>                                   8 + 16, iscsi_co_is_allocated_cb,
>                                   &iTask) == NULL) {
>         *pnum = 0;
>         return 0;
>     }
>
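>     /* yield to the main loop until the callback re-enters this coroutine */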
>     while (!iTask.complete) {
>         iscsi_set_events(iscsilun);
>         iTask.co = qemu_coroutine_self();
>         qemu_coroutine_yield();
>     }
>
>     if (iTask.status != 0) {
>         /* in case the get_lba_status_callout fails (i.e.
>          * because the device is busy or the cmd is not
>          * supported) we pretend all blocks are allocated
>          * for backwards compatibility */
>         return 1;
>     }
>
>     if (sector_qemu2lun(sector_num, iscsilun) != iTask.lbasd.lba) {
>         *pnum = 0;
>         return 0;
>     }
>
>     *pnum = iTask.lbasd.num_blocks *
>             (iscsilun->block_size / BDRV_SECTOR_SIZE);
>     if (*pnum > nb_sectors) {
>         *pnum = nb_sectors;
>     }
>
>     return (iTask.lbasd.provisioning == SCSI_PROVISIONING_TYPE_MAPPED)
>             ? 1 : 0;
> }


