
From: Stefan Berger
Subject: Re: [Qemu-devel] [PATCH 42/42] WIP: add TPM CRB device
Date: Mon, 6 Nov 2017 12:49:49 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

On 10/09/2017 06:56 PM, Marc-André Lureau wrote:
+
+#define CRB_INTF_TYPE_CRB_ACTIVE 0b1
+#define CRB_INTF_VERSION_CRB 0b1
+#define CRB_INTF_CAP_LOCALITY_0_ONLY 0b0
+#define CRB_INTF_CAP_IDLE_FAST 0b0
+#define CRB_INTF_CAP_XFER_SIZE_64 0b11
+#define CRB_INTF_CAP_FIFO_NOT_SUPPORTED 0b0
+#define CRB_INTF_CAP_CRB_SUPPORTED 0b1
+#define CRB_INTF_IF_SELECTOR_CRB 0b1
+#define CRB_INTF_IF_SELECTOR_UNLOCKED 0b0
+
+#define CRB_CTRL_CMD_SIZE (TPM_CRB_ADDR_SIZE - sizeof(struct crb_regs))

So this here seems to declare the size of the command buffer, which is now a bit less than 4 KiB. In theory this could cause problems on the backend side, where the TPM may assume that a full 4 KiB buffer is available on the interface side. The buffer sizes can therefore mismatch if, for example, one uses CRB with passthrough and the host has a TIS with a TPM that provides the full 4 KiB buffer.

I went through the exercise of extending libtpms and swtpm with commands to get and set the buffer size that 'swtpm' is supposed to use. TPM_GetCapability()/TPM2_GetCapability() can be used to read the buffer size from the device, and it now returns the possibly adjusted buffer size. I also modified QEMU to read the buffer size from the backend and configure the frontend's buffer to match. It is currently hard-coded to 4096, which is correct for the TIS.

https://github.com/stefanberger/qemu-tpm/commits/tpm_backend_buffer_size

The problem may only be theoretical, since I am not sure whether any TPM 2 command (where it is relevant for CRB) would return a full 4 KiB buffer that the CRB wouldn't be able to pass on to the OS. Still, we now have the possibility to match up the buffer sizes.

    Stefan




