Re: [Qemu-devel] [v3 3/5] Qemu-Xen-vTPM: Register Xen stubdom vTPM front
From: Xu, Quan
Subject: Re: [Qemu-devel] [v3 3/5] Qemu-Xen-vTPM: Register Xen stubdom vTPM frontend driver
Date: Wed, 28 Jan 2015 05:34:19 +0000
> -----Original Message-----
> From: Stefano Stabellini [mailto:address@hidden
> Sent: Tuesday, January 20, 2015 1:19 AM
> To: Xu, Quan
> Cc: address@hidden; address@hidden;
> address@hidden
> Subject: Re: [v3 3/5] Qemu-Xen-vTPM: Register Xen stubdom vTPM frontend
> driver
>
> On Tue, 30 Dec 2014, Quan Xu wrote:
> > +int vtpm_recv(struct XenDevice *xendev, uint8_t *buf, size_t *count)
> > +{
> > +    struct xen_vtpm_dev *vtpmdev = container_of(xendev, struct xen_vtpm_dev,
> > +                                                xendev);
> > +    struct tpmif_shared_page *shr = vtpmdev->shr;
> > +    unsigned int offset;
> > +
> > +    if (shr->state == TPMIF_STATE_IDLE) {
> > +        return -ECANCELED;
> > +    }
> > +
> > +    while (vtpm_status(vtpmdev) != VTPM_STATUS_IDLE) {
> > +        vtpm_aio_wait(vtpm_aio_ctx);
> > +    }
>
> Is it really necessary to write this as a busy loop?
> I think you should write it as a proper aio callback for efficiency:
> QEMU is going to burn 100% of the CPU polling and not doing anything else!
After further checking and testing, the busy loop in vtpm_recv is indeed
unnecessary; I can remove it in v4.
This is similar to the Linux PV frontend driver: in theory, the aio at the
end of vtpm_send makes the busy loop unnecessary.
    -------------
    |           |
    |   vtpm    |
    |  Domain   |
    |           |
    -------------
       ^     |
(Send) |     | (Recv)
       |     v
    -------------
    |           |
    |   QEMU    |
    |   vtpm    |
    |  frontend |
    |           |
    -------------
The vTPM aio is there to keep Send and Recv ordered.
-Quan
>
>
> > +    offset = sizeof(*shr) + 4 * shr->nr_extra_pages;
> > +    memcpy(buf, offset + (uint8_t *)shr, shr->length);
> > +    *count = shr->length;
> > +
> > +    return 0;
> > +}