
RE: [Xen-devel] [PATCH] [Linux] Transfer TPM locality info in the ring structure

I think you have lost some of the characteristics of locality in this mechanism, and while I'm not sure what the precise ramifications of this are, I am sure that redefining the characteristics of part of the TPM access control mechanism shouldn't be done without careful analysis first.
1) Some localities are protected by the chipset, not by the software running on the machine. Locality 4 should only be accessible by the Dynamic Root of Trust for Measurement (DRTM). We currently have no virtual DRTM, but if we did, it would need to be outside of the VM's OS in order to satisfy even the loosest interpretation of the TCG DRTM definitions. With the driver specifying the locality, I'm not sure how you will be able to limit access to locality 4 to only this "external" DRTM. Locality 3 also has special considerations.
2) TPM localities are independent from each other. By putting each locality on its own page, standard memory protection mechanisms can enable different execution contexts to access the appropriate locality and no other. In your mechanism, any driver that can access the shared page can set any locality it wants. This forces us down a different usage model of having a single trusted driver that sets the locality based on the caller. This leads to a whole different set of questions. How will the trusted driver identify which locality is appropriate based on the caller? An ioctl won't give you this. A locality should be assignable to any arbitrary execution context. What does this all mean for applications that expect the traditional TPM model for localities?
I think in order to keep the characteristics of the TPM locality model, we'd need to have 4 shared pages. The Linux driver only needs to support 1 locality, but the flexibility to point it at any of localities 0-2 at initialization may be valuable. If a virtual machine wants to use multiple localities, it should have multiple TPM drivers (one per locality), just like TCG forces for physical machines. A more privileged piece of code like a VMM or a trusted reference monitor would use memory protection mechanisms to ensure each driver can only access the correct locality page. How the software running in the VM chooses to create these guarantees is up to it, just like on a physical machine. The locality 4 page would always be inaccessible to code running in the VM. Only some external DRTM code invoked by a hypercall or something would be able to access the locality 4 page.
-Vinnie Scarlata

From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Stefan Berger
Sent: Thursday, January 03, 2008 6:26 PM
To: Cihula, Joseph
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; keir@xxxxxxxxxxxxx
Subject: RE: [Xen-devel] [PATCH] [Linux] Transfer TPM locality info in the ring structure

"Cihula, Joseph" <joseph.cihula@xxxxxxxxx> wrote on 01/03/2008 08:48:41 PM:

> On Wednesday, January 02, 2008 11:27 AM, Stefan Berger wrote:
> > Transfer TPM locality information in the ring structure. Add a version
> > identifier to the ring structure for possible future extensions.
> >
> > Signed-off-by: Stefan Berger <stefanb@xxxxxxxxxx>
> Stefan,
> How do you expect to use the locality value and how would it get set (to
> a non-zero value)?

The TIS interface offers a different address range for each locality. A TIS driver can make the localities available through an ioctl(). A similar ioctl() could exist for the Xen driver, allowing the client to choose which locality to use.

> Since the locality value is provided by the originating domain, it can't
> really be "trusted" by the backend without some other type of
> validation.

Except for maybe checking that the locality value is not out of range, I don't see what else would need to be checked. Or is there some restriction that keeps an OS from letting applications use any locality other than locality '0'?


> Joe


