From: ashish mittal
Subject: Re: [Qemu-devel] [PATCH v3 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support
Date: Tue, 23 Aug 2016 15:57:05 -0700

Hi Daniel,

In V4 of the patch:

On Mon, Aug 15, 2016 at 3:20 AM, Daniel P. Berrange <address@hidden> wrote:

>> + * Convert the json formatted command line into qapi.
>> +*/
>> +
>> +static int vxhs_parse_json(BlockdevOptionsVxHS *conf,
>> +                                  QDict *options, Error **errp)
>> +{

>> +}
>
> Ewww this is really horrible code. It is open-coding a special purpose
> conversion of QemuOpts -> QDict -> QAPI scheme. We should really put
> my qdict_crumple() API impl as a pre-requisite of this, so you can then
> use qdict_crumple + qmp_input_visitor to do this conversion in a generic
> manner removing all this code.
>
>   https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg03118.html

Changed to not do the manual conversion anymore.
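
For reference, the generic conversion takes roughly this shape. This is only a sketch: the helper name vxhs_fill_conf is made up here, and the qdict_crumple()/qobject_input_visitor_new() signatures follow the API as it later stabilised upstream, which may differ from the series Daniel linked (qobject_decref() was the refcount-drop helper of this era).

/* Sketch: build the QAPI struct generically instead of open-coding it. */
static BlockdevOptionsVxHS *vxhs_fill_conf(QDict *options, Error **errp)
{
    BlockdevOptionsVxHS *conf = NULL;
    QObject *crumpled;
    Visitor *v;

    /* Re-nest the flat keys (server.0.host, ...) into QDicts/QLists */
    crumpled = qdict_crumple(options, errp);
    if (!crumpled) {
        return NULL;
    }

    /* Let the generated QAPI visitor populate BlockdevOptionsVxHS */
    v = qobject_input_visitor_new(crumpled);
    visit_type_BlockdevOptionsVxHS(v, NULL, &conf, errp);
    visit_free(v);
    qobject_decref(crumpled);

    return conf;
}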

>
>> +/*
>> + * vxhs_parse_uri: Parse the incoming URI and populate *conf with the
>> + * vdisk_id, and all the host(s) information. Host at index 0 is local storage
>> + * agent and the rest, reflection target storage agents. The local storage
>> + * agent ip is the efficient internal address in the uri, e.g. 192.168.0.2.
>> + * The local storage agent address is stored at index 0. The reflection target
>> + * ips, are the E-W data network addresses of the reflection node agents, also
>> + * extracted from the uri.
>> + */
>> +static int vxhs_parse_uri(BlockdevOptionsVxHS *conf,
>> +                               const char *filename)
>
> Delete this method entirely. We should not be adding URI syntax for any new
> block driver. The QAPI schema syntax is all we need.
>

Changed per suggestion from Kevin.


>> +    of_vsa_addr = g_new0(char, OF_MAX_SERVER_ADDR);
>> +    file_name = g_new0(char, OF_MAX_FILE_LEN);
>> +    snprintf(file_name, OF_MAX_FILE_LEN, "%s%s", vdisk_prefix, s->vdisk_guid);
>> +    snprintf(of_vsa_addr, OF_MAX_SERVER_ADDR, "of://%s:%d",
>> +             s->vdisk_hostinfo[s->vdisk_cur_host_idx].hostip,
>> +             s->vdisk_hostinfo[s->vdisk_cur_host_idx].port);
>
> *Never* use  g_new + snprintf, particularly not with a fixed length
> buffer. g_strdup_printf() is the right way.
>

Fixed all such places.
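
For the record, the g_strdup_printf() form looks roughly like this (sketch only; the variable and field names are taken from the quoted context):

/* g_strdup_printf() allocates a right-sized buffer, replacing g_new0 + snprintf */
char *file_name = g_strdup_printf("%s%s", vdisk_prefix, s->vdisk_guid);
char *of_vsa_addr = g_strdup_printf("of://%s:%d",
                                    s->vdisk_hostinfo[s->vdisk_cur_host_idx].hostip,
                                    s->vdisk_hostinfo[s->vdisk_cur_host_idx].port);

/* ... use the strings ... */

g_free(of_vsa_addr);
g_free(file_name);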


>> +out:
>> +    if (file_name != NULL) {
>> +        g_free(file_name);
>> +    }
>> +    if (of_vsa_addr != NULL) {
>> +        g_free(of_vsa_addr);
>> +    }
>
> Useless if-before-free - g_free happily accepts NULL so don't check
> for it yourself.
>

Removed all if-before-free checks; g_free() is now called unconditionally.
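
i.e. the cleanup path simplifies to (sketch):

out:
    g_free(file_name);      /* g_free(NULL) is a no-op */
    g_free(of_vsa_addr);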

>
>> +
>> +/*
>> + * This is called by QEMU when a flush gets triggered from within
>> + * a guest at the block layer, either for IDE or SCSI disks.
>> + */
>> +int vxhs_co_flush(BlockDriverState *bs)
>> +{
>> +    BDRVVXHSState *s = bs->opaque;
>> +    uint64_t size = 0;
>> +    int ret = 0;
>> +    uint32_t iocount = 0;
>> +
>> +    ret = qemu_iio_ioctl(s->qnio_ctx,
>> +            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
>> +            VDISK_AIO_FLUSH, &size, NULL, IIO_FLAG_SYNC);
>> +
>> +    if (ret < 0) {
>> +        /*
>> +         * Currently not handling the flush ioctl
>> +         * failure because of network connection
>> +         * disconnect. Since all the writes are
>> +         * commited into persistent storage hence
>> +         * this flush call is noop and we can safely
>> +         * return success status to the caller.
>> +         *
>> +         * If any write failure occurs for inflight
>> +         * write AIO because of network disconnect
>> +         * then anyway IO failover will be triggered.
>> +         */
>> +        trace_vxhs_co_flush(s->vdisk_guid, ret, errno);
>> +        ret = 0;
>> +    }
>> +
>> +    iocount = vxhs_get_vdisk_iocount(s);
>> +    if (iocount > 0) {
>> +        trace_vxhs_co_flush_iocnt(iocount);
>> +    }
>
> You're not doing anything with the iocount value here. Either
> delete this, or do something useful with it.
>

Deleted.
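
So the flush callback now ends up roughly as below (a sketch: the quoted code with the unused iocount query dropped, nothing else changed):

int vxhs_co_flush(BlockDriverState *bs)
{
    BDRVVXHSState *s = bs->opaque;
    uint64_t size = 0;
    int ret;

    ret = qemu_iio_ioctl(s->qnio_ctx,
            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
            VDISK_AIO_FLUSH, &size, NULL, IIO_FLAG_SYNC);
    if (ret < 0) {
        /*
         * Writes are already committed to persistent storage on the
         * server side, so a flush failure (e.g. on network disconnect)
         * is treated as a no-op; failed in-flight writes trigger
         * failover separately.
         */
        trace_vxhs_co_flush(s->vdisk_guid, ret, errno);
        ret = 0;
    }

    return ret;
}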


>> +unsigned long vxhs_get_vdisk_stat(BDRVVXHSState *s)
>> +{
>> +    void *ctx = NULL;
>> +    int flags = 0;
>> +    unsigned long vdisk_size = 0;
>
> Is this meansured in bytes ? If so 'unsigned long' is a rather
> limited choice for 32-bit builds. long long  / int64_t
> would be better.
>

You are right. This will need a change in the qnio library as well. I hope
to fix this later!

>> +    int ret = 0;
>> +
>> +    ret = qemu_iio_ioctl(s->qnio_ctx,
>> +            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
>> +            VDISK_STAT, &vdisk_size, ctx, flags);
>> +
>> +    if (ret < 0) {
>> +        trace_vxhs_get_vdisk_stat_err(s->vdisk_guid, ret, errno);
>> +    }
>> +
>> +    trace_vxhs_get_vdisk_stat(s->vdisk_guid, vdisk_size);
>> +    return vdisk_size;
>
> Presumably vdisk_size is garbage when ret < 0, so you really
> need to propagate the error better.
>

Fixed.
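
For illustration, the error-propagating shape could look like this. Sketch only: the int64_t return is the widening discussed above and still depends on the qnio side, and -EIO is just a placeholder error value, not necessarily what V4 uses.

int64_t vxhs_get_vdisk_stat(BDRVVXHSState *s)
{
    int64_t vdisk_size = 0;
    int ret;

    ret = qemu_iio_ioctl(s->qnio_ctx,
            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
            VDISK_STAT, &vdisk_size, NULL, 0);
    if (ret < 0) {
        trace_vxhs_get_vdisk_stat_err(s->vdisk_guid, ret, errno);
        return -EIO;    /* propagate failure instead of a bogus size */
    }

    trace_vxhs_get_vdisk_stat(s->vdisk_guid, vdisk_size);
    return vdisk_size;
}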


>> +    /*
>> +     * build storage agent address and vdisk device name strings
>> +     */
>> +    of_vsa_addr = g_new0(char, OF_MAX_SERVER_ADDR);
>> +    file_name = g_new0(char, OF_MAX_FILE_LEN);
>> +    snprintf(file_name, OF_MAX_FILE_LEN, "%s%s", vdisk_prefix, s->vdisk_guid);
>> +    snprintf(of_vsa_addr, OF_MAX_SERVER_ADDR, "of://%s:%d",
>> +             s->vdisk_hostinfo[index].hostip, s->vdisk_hostinfo[index].port);
>
> Again no snprintf please.

Fixed.


>> +out:
>> +    if (of_vsa_addr) {
>> +        g_free(of_vsa_addr);
>> +    }
>> +    if (file_name) {
>> +        g_free(file_name);
>> +    }
>
> More useless-if-before-free
>

Fixed.


>> +
>> +/*
>> + * The line below is how our drivier is initialized.
>> + * DO NOT TOUCH IT
>> + */
>
> This is a rather pointless comment - the function name itself makes
> it obvious what this is doing.
>

Removed.


>
>> +#define vxhsErr(fmt, ...) { \
>> +    time_t t = time(0); \
>> +    char buf[9] = {0}; \
>> +    strftime(buf, 9, "%H:%M:%S", localtime(&t)); \
>> +    fprintf(stderr, "[%s: %lu] %d: %s():\t", buf, pthread_self(), \
>> +                __LINE__, __func__); \
>> +    fprintf(stderr, fmt, ## __VA_ARGS__); \
>> +}
>
> This appears unused now, please delete it.
>

Removed.

>> diff --git a/configure b/configure
>> index 8d84919..fc09dc6 100755
>> --- a/configure
>> +++ b/configure
>> @@ -320,6 +320,7 @@ vhdx=""
>>  numa=""
>>  tcmalloc="no"
>>  jemalloc="no"
>> +vxhs="yes"
>
> You should set this to "", to indicate that configure should
> auto-probe for support. Setting it to 'yes' indicates a mandatory
> requirement which we don't want for an optional library like yours.
>
> This would fix the automated build failure report this patch got.
>

Fixed.

>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>> index 5e2d7d7..064d620 100644
>> --- a/qapi/block-core.json
>> +++ b/qapi/block-core.json
>> @@ -1693,12 +1693,13 @@
>>  #
>>  # @host_device, @host_cdrom: Since 2.1
>>  # @gluster: Since 2.7
>> +# @vxhs: Since 2.7
>>  #
>>  # Since: 2.0
>>  ##
>>  { 'enum': 'BlockdevDriver',
>>    'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
>> -            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
>> +            'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom', 'vxhs',
>>              'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
>>              'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
>>              'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
>> @@ -2150,6 +2151,22 @@
>>              'server': ['GlusterServer'],
>>              '*debug-level': 'int' } }
>>
>> +# @BlockdevOptionsVxHS
>> +#
>> +# Driver specific block device options for VxHS
>> +#
>> +# @vdisk_id:    UUID of VxHS volume
>> +#
>> +# @server:      list of vxhs server IP, port
>> +#
>> +# Since: 2.7
>> +##
>> +{ 'struct': 'BlockdevOptionsVxHS',
>> +  'data': { 'vdisk_id': 'str',
>> +            'server': ['InetSocketAddress'] } }
>
> Is there any chance that you are going to want to support UNIX domain socket
> connections in the future ? If so, perhaps we could use SocketAddress instead
> of InetSocketAddress to allow more flexibility in future.
>

We don't see the need for UNIX sockets at present.

Regards,
Ashish


