
Re: [Xen-devel] [PATCH v5.1 01/12] mini-os/tpm{back, front}: Change shared page ABI



On 04/11/2013 10:27 AM, Ian Campbell wrote:
> On Fri, 2013-03-22 at 22:30 +0000, Daniel De Graaf wrote:

>> +struct vtpm_shared_page {
>> +    uint32_t length;         /* request/response length in bytes */
>
> The data is inline immediately after this struct? How does it interact
> with the extra_pages stuff?

The data follows immediately after the extra_pages array, whose size is
visible in nr_extra_pages. The current code only supports skipping the
right number of bytes if this array is filled in; it doesn't actually
read or write the grant IDs.
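
To make the layout concrete, here is a sketch of mine (not text from the
patch) showing the full structure and a hypothetical helper that locates
the packet data, which begins right after the grant ID array:

  struct vtpm_shared_page {
      uint32_t length;         /* request/response length in bytes */
      uint8_t state;           /* enum vtpm_state */
      uint8_t locality;        /* for the current request */
      uint8_t pad;             /* should be zero */
      uint8_t nr_extra_pages;  /* extra pages for long packets; may be zero */
      uint32_t extra_pages[0]; /* grant IDs; length is nr_extra_pages */
  };

  /* Hypothetical helper: skip the 8-byte fixed header plus one 32-bit
   * grant ID per extra page to reach the packet data. */
  static inline uint8_t *vtpm_shared_data(struct vtpm_shared_page *shr)
  {
      return (uint8_t *)shr + sizeof(*shr)
             + sizeof(uint32_t) * shr->nr_extra_pages;
  }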

>> +
>> +    uint8_t state;           /* enum vtpm_state */
>> +    uint8_t locality;        /* for the current request */
>
> I've had a look at 7/12 and 10/12 and I'm still not sure how this
> byte is used -- it's looked up in the XSM label as a string but how does
> it become a uint8_t agreed by both the front and backend?

The frontend can set this byte (the existing Linux patch does not do so,
but v2 includes a sysfs attribute that allows you to set the locality
for a given request). In the hardware TPM 1.2 interface, the TPM exposes
a distinct MMIO page for each locality, and the chipset provides limits
on when writes to locality 4 (and possibly 3) are allowed.

> Could we perhaps get a few more words on the protocol in general? Or
> have I missed some existing doc?

This protocol emulates the request/response behavior of a TPM using a Xen
shared memory interface. All interaction with the TPM is at the direction
of the frontend, since a TPM (hardware or virtual) is a passive device -
the backend only processes commands as requested by the frontend.

The frontend sends a request to the TPM by populating the shared page with
the request packet, changing the state to VTPM_STATE_SUBMIT, and sending
an event channel notification. When the backend is finished, it will set
the state to VTPM_STATE_FINISH and send an event channel notification.
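
Roughly, using mini-os-style event channel calls and the vtpm_shared_data()
helper sketched earlier (again my sketch, not code from the patch; the
exact enum values are whatever the patch defines), the frontend's submit
path looks like:

  enum vtpm_state {
      VTPM_STATE_IDLE,    /* no request in flight */
      VTPM_STATE_SUBMIT,  /* request ready; backend is working */
      VTPM_STATE_FINISH,  /* response ready for the frontend */
      VTPM_STATE_CANCEL,  /* frontend asked to cancel the request */
  };

  void vtpm_send(struct vtpm_shared_page *shr, const void *pkt,
                 uint32_t len, evtchn_port_t port)
  {
      memcpy(vtpm_shared_data(shr), pkt, len);
      shr->length = len;
      wmb();                           /* publish the packet before the state */
      shr->state = VTPM_STATE_SUBMIT;
      notify_remote_via_evtchn(port);  /* kick the backend */
      /* ...then block until an event arrives and the state reads
       * VTPM_STATE_FINISH, at which point shr->length gives the
       * response size. */
  }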

In order to allow long-running commands to be canceled, the frontend can
at any time change the state to VTPM_STATE_CANCEL and send a notification.
The TPM can either finish the command (changing state to VTPM_STATE_FINISH)
or can cancel the command and change the state to VTPM_STATE_IDLE. The TPM
can also change the state to VTPM_STATE_IDLE instead of VTPM_STATE_FINISH
if the command must be abandoned for another reason - for example, a
physical TPM may cancel a command if the interface is seized by another
locality.
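
The frontend's side of cancellation, in the same hypothetical style:

  void vtpm_cancel(struct vtpm_shared_page *shr, evtchn_port_t port)
  {
      shr->state = VTPM_STATE_CANCEL;
      notify_remote_via_evtchn(port);
      /* Block on the event channel (a spin loop here only for brevity)
       * and accept either outcome from the backend. */
      while (shr->state != VTPM_STATE_FINISH &&
             shr->state != VTPM_STATE_IDLE)
          ;
  }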


If you would like this description in the Xen tree, where is the best place
to locate it? The existing docs/misc/vtpm.txt is more focused on the use of
the TPM domains, not the protocol, but an additional section could be added
for documenting the Xen protocol.

> What is the format of the payload? Is it defined by some independent TPM
> standard?

The payload is a TPM packet as defined by the TPM specification:
http://www.trustedcomputinggroup.org/resources/tpm_main_specification
(part 3 defines the packet format).
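
For reference (this comes from the spec, not from the patch), every TPM 1.2
packet begins with a common big-endian header, so the length field in the
shared page should match paramSize here:

  struct tpm_header {
      uint16_t tag;        /* e.g. TPM_TAG_RQU_COMMAND (0x00C1) */
      uint32_t paramSize;  /* total packet length, header included */
      uint32_t ordinal;    /* command code (returnCode in a response) */
  } __attribute__((packed));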

>> +    uint8_t pad;             /* should be zero */
>> +
>> +    uint8_t nr_extra_pages;  /* extra pages for long packets; may be zero */
>> +    uint32_t extra_pages[0]; /* grant IDs; length is actually nr_extra_pages */
>
> Not actually used AFAICT? Future expansion I presume?

Yes. While normally TPM packets are all under 4000 bytes, the specification
allows larger packets (the TPM itself defines the maximum), and future
versions of the TPM specification that do not limit the TPM to 2048-bit RSA
keys may require using larger packets. Some commands (such as GetRandom)
can produce packets of arbitrary size, although it is reasonable for an
implementation to limit what it returns so this is not a problem for the
vTPM.
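
As a back-of-the-envelope check (my arithmetic, assuming a 4096-byte shared
page): the header is 8 bytes plus 4 bytes per grant ID, so with
nr_extra_pages == 0 up to 4096 - 8 = 4088 bytes of packet fit inline,
comfortably covering the usual sub-4000-byte packets.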

--
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
