
Re: [Xen-devel] [PATCH v2 1/5] vTPM: event channel bind interdomain with para/hvm virtual machine



On 01/06/2015 11:46 AM, Xu, Quan wrote:
-----Original Message-----
From: Daniel De Graaf [mailto:dgdegra@xxxxxxxxxxxxx]
On 12/30/2014 11:44 PM, Quan Xu wrote:[...]
diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
[...]
+   domid = (domtype == T_DOMAIN_TYPE_HVM) ? 0 : tpmif->domid;

Unless I'm missing something, this still assumes that the HVM device model
is located in domain 0, and so it will not work if a stub domain is used for
qemu.


QEMU is running in Dom0 as usual, so the domid is 0.
Similar to the Linux PV frontend driver, this frontend driver is enabled in QEMU.

This is a valid configuration of Xen and these patches do suffice to
make it work.  I am trying to ensure that an additional type of guest
setup will also work with these patches.

A useful feature of Xen is the ability to execute the QEMU device model
in a domain instead of a process in dom0.  When combined with driver
domains for devices, this can significantly reduce both the attack
surface of and amount of trust required of domain 0.

If you have any doubts, feel free to contact me; I will do my best to explain.
I think your suggestions in the previous email (Oct. 31st, 2014,
'Re: FW: [PATCH 1/6] vTPM: event channel bind interdomain with para/hvm
virtual machine') were very helpful.
Maybe this is still a vague description :(

This is accurate but possibly incomplete.

This is my current understanding of the communications paths and support
for vTPMs in Xen:

  Physical TPM (1.2; with new patches, may also be 2.0)
        |
 [MMIO pass-through]
        |
  vtpmmgr domain
        |
 [minios tpmback/front] ----- ((other domains' vTPMs))
        |
   vTPM domain (currently always emulates a TPM v1.2)
        |
 [minios tpmback]+----[Linux tpmfront]-- PV Linux domain (fully working)
        |         \
        |          +--[Linux tpmfront]-- HVM Linux with optional PV drivers
        |           \
 [QEMU XenDevOps]  [minios or Linux tpmfront]
        |                  |
 QEMU dom0 process   QEMU stub-domain
        |                  |
 [MMIO emulation]   [MMIO emulation]
        |                  |
   Any HVM guest      Any HVM guest


The series you are sending will enable QEMU to talk to tpmback directly.
This is the best solution when QEMU is running inside domain 0, because
it is not currently a good idea to use Linux's tpmfront driver to talk to
each guest's vTPM domain.

When QEMU is run inside a stub domain, there are a few more things to consider:

 * The device model will not be running in domain 0, so the vTPM must bind
   its event channel to the stub domain's ID rather than to 0.
 * It is possible to use the native TPM driver in the stub domain (which may
   run either Linux or mini-os) because there is no conflict with a real TPM
   software stack running inside domain 0.

Supporting this feature requires more granularity in the TPM backend changes.
The vTPM domain's backend must be able to handle:

 (1) guest domains which talk directly to the vTPM on their own behalf
 (2) QEMU processes in domain 0
 (3) QEMU domains which talk directly to the vTPM on behalf of a guest

Cases (1) and (3) are already handled by the existing tpmback if the proper
domain ID is used.

Your patch set currently breaks cases (1) and (3) for HVM guests while
enabling case (2).  An alternate solution that does not break these cases
while enabling case (2) is preferable.
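As a minimal sketch of that alternate approach (illustrative names, not the
actual mini-os tpmback code): the domid to bind against can be taken from the
backend's own xenstore path, which already names the frontend domain, instead
of being inferred from the guest type.

```c
#include <assert.h>
#include <stdio.h>

/* Illustrative sketch, not the real tpmback code: the peer domid for the
 * event-channel bind is whatever frontend domain the backend xenstore
 * entry names.  For case (1) that is the guest itself; for case (3) it
 * is the QEMU stub domain.  No PV/HVM check is needed.
 *
 * Backend entries look like /local/domain/<be>/backend/vtpm/<fe>/<handle>.
 */
static int peer_domid(const char *backend_path, unsigned *fe_domid)
{
    unsigned handle;

    if (sscanf(backend_path, "/local/domain/%*u/backend/vtpm/%u/%u",
               fe_domid, &handle) != 2)
        return -1;
    return 0;
}
```

With the example below, "/local/domain/2/backend/vtpm/3/0" yields peer 3
(guest A on its own behalf), and "/local/domain/4/backend/vtpm/5/0" yields
peer 5 (guest B's stub domain).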

My thoughts on extending the xenstore interface via an example:

Domain 0: runs QEMU for guest A
Domain 1: vtpmmgr
Domain 2: vTPM for guest A
Domain 3: HVM guest A

Domain 4: vTPM for guest B
Domain 5: QEMU stubdom for guest B
Domain 6: HVM guest B

/local/domain/2/backend/vtpm/3/0/*: backend A-PV
/local/domain/3/device/vtpm/0/*: frontend A-PV

/local/domain/2/backend/vtpm/0/3/*: backend A-QEMU
/local/domain/0/qemu-device/vtpm/3/*: frontend A-QEMU  (uses XenDevOps)

/local/domain/4/backend/vtpm/5/0/*: backend B-QEMU
/local/domain/5/device/vtpm/0/*: frontend B-QEMU

/local/domain/4/backend/vtpm/6/0/*: backend B-PV
/local/domain/6/device/vtpm/0/*: frontend B-PV

Connections A-PV, B-PV, and B-QEMU would be created in the same manner as
the existing "xl vtpm-attach" command does now.  If the HVM guest is not
running Linux with the Xen tpmfront.ko loaded, the A-PV and B-PV devices
will remain unconnected; this is fine.
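To make the layout above concrete, here is a small sketch of the two frontend
path styles (the "qemu-device" component is the hypothetical modified frontend
path from this proposal, not an existing Xen convention):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the two frontend path styles from the example above.  The
 * component after the domid ("device" vs. the proposed "qemu-device") is
 * the only structural difference; in the QEMU style the owning domain is
 * the one running QEMU and the index names the guest it serves. */
static void vtpm_frontend_path(char *buf, size_t len, unsigned owner_domid,
                               unsigned idx, int qemu_style)
{
    snprintf(buf, len, "/local/domain/%u/%s/vtpm/%u",
             owner_domid, qemu_style ? "qemu-device" : "device", idx);
}
```

So frontend A-PV is built from (3, 0, plain) and frontend A-QEMU from
(0, 3, qemu-style), matching the listing above.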

Connection A-QEMU has a modified frontend state path to prevent Linux from
attaching its own TPM driver to the guest's TPM.  This requires a few changes:
libxl must support changing the frontend path; this is similar to how the disk
backend supports both qdisk and vbd (and others), but instead changes the path
for the frontend.  The minios backend also needs to change the sscanf in
parse_eventstr to something like "/local/domain/%u/%*[^/]/vtpm/%u/%40s".
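To check that this format string behaves as intended, a standalone
demonstration (only the format string comes from the suggestion above; the
surrounding code is illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Parse a watch path with the suggested format: the %*[^/] conversion
 * skips the variable path component ("device", "qemu-device", ...)
 * without assigning it, so both frontend styles match the same pattern.
 * Returns the number of assigned conversions (3 on a full match). */
static int parse_vtpm_path(const char *path, unsigned *domid,
                           unsigned *idx, char *rest)
{
    /* rest must point to at least 41 bytes for the %40s conversion. */
    return sscanf(path, "/local/domain/%u/%*[^/]/vtpm/%u/%40s",
                  domid, idx, rest);
}
```

Both "/local/domain/3/device/vtpm/0/state" and
"/local/domain/0/qemu-device/vtpm/3/state" parse with this single pattern.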

In any case, the vTPM does not need to know if the guest is PV, HVM, or PVH.

BTW, Professor J. Wang (Wuhan University, China) and I have enabled a TPM 2.0
simulator for Linux. Maybe we will try to integrate it with the vTPM domain to
provide TPM 2.0 vTPM functionality for virtual machines in Q2 or later.

This would be quite useful as it would allow people to use TPM 2.0 features
in guests, which may be expected when TPM 2.0 becomes more prevalent.  It
may also aid in the adoption of TPM 2.0 features because it enables those who
only have a 1.2 hardware TPM to still write and use software that interfaces
with a TPM 2.0.

--
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

