
Re: [Xen-devel] [PATCH 2/4] Qemu-Xen-vTPM: Register Xen stubdom vTPM frontend driver




> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@xxxxxxxxxxxxx]
> Sent: Monday, November 03, 2014 7:54 PM
> To: Xu, Quan
> Cc: qemu-devel@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxx;
> stefano.stabellini@xxxxxxxxxxxxx
> Subject: Re: [PATCH 2/4] Qemu-Xen-vTPM: Register Xen stubdom vTPM
> frontend driver
> 
> On Sun, 2 Nov 2014, Quan Xu wrote:
> > This drvier transfers any request/repond between TPM xenstubdoms
> > driver and Xen vTPM stubdom, and facilitates communications between
> > Xen vTPM stubdom domain and vTPM xenstubdoms driver
> >
> > Signed-off-by: Quan Xu <quan.xu@xxxxxxxxx>
> 
> Please describe what changes did make to xen_backend.c and why.
> The commit message should contains info on all the changes made by the
> patch below.
> 
        
Thanks Stefano.
Within one more day I will explain in detail what changes were made to
xen_backend.c and why.
The following two sections give an introduction and the architecture.

> Please also describe what is the "Xen vTPM stubdom", what is the
> "vTPM xenstubdoms driver" and how the communicate with each others.
> 


I have added two sections with detailed descriptions below: an introduction
and the architecture.

*INTRODUCTION*

The goal of the virtual Trusted Platform Module (vTPM) is to provide TPM
functionality to virtual machines (Fedora, Ubuntu, Red Hat, Windows, etc.).
This allows programs to interact with a TPM in a virtual machine the same
way they interact with a TPM on the physical system. Each virtual machine
gets its own unique, emulated, software TPM. Each major component of vTPM
is implemented as a stubdom, providing secure separation guaranteed by the
hypervisor.
The vTPM stubdom is a Xen mini-OS domain that emulates a TPM for the
virtual machine to use. It is a small wrapper around the Berlios TPM
emulator, and TPM commands are passed to it from the mini-os TPM backend
driver.
This patch series is only the Qemu part needed to enable Xen stubdom vTPM
for an HVM virtual machine.
===========

*ARCHITECTURE*

The architecture of stubdom vTPM for an HVM virtual machine:

            +--------------------+
            | Windows/Linux DomU | ...
            |        |  ^        |
            |        v  |        |
            |  Qemu tpm1.2 Tis   |
            |        |  ^        |
            |        v  |        |
            |        vTPM        |
            | XenStubdoms driver |  (new ..)
            +--------------------+
                     |  ^
                     v  |
            +--------------------+
            |  xen_vtpmdev_ops   |  (new ..)
            +--------------------+
                     |  ^
                     v  |
            +--------------------+
            |  mini-os/tpmback   |
            |        |  ^        |
            |        v  |        |
            |   vTPM stubdom     | ...
            |        |  ^        |
            |        v  |        |
            |  mini-os/tpmfront  |
            +--------------------+
                     |  ^
                     v  |
            +--------------------+
            |  mini-os/tpmback   |
            |        |  ^        |
            |        v  |        |
            |  vtpmmgr stubdom   |
            |        |  ^        |
            |        v  |        |
            |  mini-os/tpm_tis   |
            +--------------------+
                     |  ^
                     v  |
            +--------------------+
            |    Hardware TPM    |
            +--------------------+


 * Windows/Linux DomU:
    The HVM based guest that wants to use a vTPM. There may be
    more than one of these.

 * Qemu tpm1.2 Tis:
    Implementation of the TPM 1.2 TIS interface for HVM virtual
    machines. It is a Qemu emulated device.

 * vTPM xenstubdoms driver:
    Similar to a TPM passthrough backend driver, it is a new TPM
    backend for the emulated TPM TIS interface. This driver handles
    vTPM initialization and sends data and commands to a Xen vTPM
    stubdom (see the sketch after this list).

 * xen_vtpmdev_ops:
    Registers the Xen stubdom vTPM backend and transfers requests and
    responses between the TPM xenstubdoms driver and the Xen vTPM
    stubdom, facilitating communication between the two.

 * mini-os/tpmback:
    Mini-os TPM backend driver. The Linux frontend driver connects
    to this backend driver to facilitate communications between the
    Linux DomU and its vTPM. This driver is also used by the vtpmmgr
    stubdom to communicate with the vTPM stubdom.

 * vTPM stubdom:
    A mini-os stub domain that implements a vTPM. There is a
    one to one mapping between running vTPM stubdom instances and
    logical vTPMs on the system. The vTPM Platform Configuration
    Registers (PCRs) are all initialized to zero.

 * mini-os/tpmfront:
    Mini-os TPM frontend driver. The vTPM stubdom uses this driver to
    communicate with the vtpmmgr stubdom. This driver can also be used
    separately to implement a mini-os domain that wishes to use a vTPM
    of its own.

 * vtpmmgr stubdom:
    A mini-os domain that implements the vTPM manager. There is only
    one vTPM manager, and it should be running during the entire
    lifetime of the machine. The vtpmmgr domain securely stores
    encryption keys for each of the vTPMs and accesses the hardware
    TPM to get the root of trust for the entire system.

 * mini-os/tpm_tis:
    Mini-os TPM version 1.2 TPM Interface Specification (TIS) driver.
    This driver is used by the vtpmmgr stubdom to talk directly to the
    hardware TPM. Communication is facilitated by mapping hardware
    memory pages into the vtpmmgr stubdom.

 * Hardware TPM:
    The physical TPM 1.2 that is soldered onto the motherboard.

===========
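
To make the data path above concrete, below is a minimal sketch of how a
Qemu TPM backend (the "vTPM xenstubdoms driver" added later in this series)
could hand a command to the Xen vTPM stubdom through the helpers this patch
adds to xen_backend.c. Only xen_vtpm_send(), xen_vtpm_recv() and the header
path come from the patch below; the xenstubdoms_transfer() name, its
signature and the error handling are illustrative assumptions, not code
from the series.

#include "hw/xen/xen_backend.h"

/*
 * Illustrative sketch only: forward one TPM command to the vTPM stubdom
 * and collect its response.  xen_vtpm_send()/xen_vtpm_recv() (added by
 * this patch) look up the "vtpm" XenDevice of the current domain and call
 * the send/recv hooks of xen_vtpmdev_ops, which move the data over the
 * shared page and event channel set up in vtpm_initialise().
 */
static int xenstubdoms_transfer(unsigned char *cmd, size_t cmd_len,
                                unsigned char *resp, size_t *resp_len)
{
    /* Copies cmd into the shared page, sets TPMIF_STATE_SUBMIT and
     * notifies the stubdom; returns -1 if no vtpm device is registered. */
    if (xen_vtpm_send(cmd, cmd_len) < 0) {
        return -1;
    }

    /* Waits until the stubdom marks the request finished, then copies
     * the response out of the shared page into resp and *resp_len. */
    xen_vtpm_recv(resp, resp_len);

    return 0;
}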



> Where does the vTPM backend lives?
The vTPM backend lives in the Xen vTPM stubdom (a Xen mini-os domain).

> 
> 
> >  hw/xen/Makefile.objs         |   1 +
> >  hw/xen/xen_backend.c         | 182 ++++++++++++++++++++++-
> >  hw/xen/xen_stubdom_vtpm.c    | 333
> +++++++++++++++++++++++++++++++++++++++++++
> >  include/hw/xen/xen_backend.h |  11 ++
> >  include/hw/xen/xen_common.h  |   6 +
> >  xen-hvm.c                    |  13 ++
> >  6 files changed, 544 insertions(+), 2 deletions(-)
> >  create mode 100644 hw/xen/xen_stubdom_vtpm.c
> >
> > diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
> > index a0ca0aa..724df8d 100644
> > --- a/hw/xen/Makefile.objs
> > +++ b/hw/xen/Makefile.objs
> > @@ -1,5 +1,6 @@
> >  # xen backend driver support
> >  common-obj-$(CONFIG_XEN_BACKEND) += xen_backend.o
> xen_devconfig.o
> > +common-obj-$(CONFIG_TPM_XENSTUBDOMS) += xen_stubdom_vtpm.o
> >
> >  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
> >  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o
> xen_pt_config_init.o xen_pt_msi.o
> > diff --git a/hw/xen/xen_backend.c b/hw/xen/xen_backend.c
> > index b2cb22b..45a5778 100644
> > --- a/hw/xen/xen_backend.c
> > +++ b/hw/xen/xen_backend.c
> > @@ -194,6 +194,32 @@ int xen_be_set_state(struct XenDevice *xendev,
> enum xenbus_state state)
> >      return 0;
> >  }
> >
> > +/*get stubdom backend*/
> > +static char *xen_stubdom_be(const char *type, int dom, int dev)
> > +{
> > +    char *val, *domu;
> > +    char path[XEN_BUFSIZE];
> > +    unsigned int len, ival;
> > +
> > +    /*front domu*/
> > +    domu = xs_get_domain_path(xenstore, dom);
> > +    snprintf(path, sizeof(path), "%s/device/%s/%d/backend-id",
> > +             domu, type, dev);
> > +    g_free(domu);
> > +
> > +    val = xs_read(xenstore, 0, path, &len);
> > +    if (!val || 1 != sscanf(val, "%d", &ival)) {
> > +        g_free(val);
> > +        return NULL;
> > +    }
> > +    g_free(val);
> > +
> > +    /*backend domu*/
> > +    domu = xs_get_domain_path(xenstore, ival);
> > +
> > +    return domu;
> > +}
> > +
> >  /* ------------------------------------------------------------- */
> >
> >  struct XenDevice *xen_be_find_xendev(const char *type, int dom, int
> dev)
> > @@ -222,6 +248,7 @@ static struct XenDevice *xen_be_get_xendev(const
> char *type, int dom, int dev,
> >                                             struct XenDevOps *ops)
> >  {
> >      struct XenDevice *xendev;
> > +    char *stub;
> >
> >      xendev = xen_be_find_xendev(type, dom, dev);
> >      if (xendev) {
> > @@ -235,8 +262,15 @@ static struct XenDevice
> *xen_be_get_xendev(const char *type, int dom, int dev,
> >      xendev->dev   = dev;
> >      xendev->ops   = ops;
> >
> > -    snprintf(xendev->be, sizeof(xendev->be), "backend/%s/%d/%d",
> > -             xendev->type, xendev->dom, xendev->dev);
> > +    if (ops->flags & DEVOPS_FLAG_STUBDOM_BE) {
> > +        stub = xen_stubdom_be(xendev->type, xendev->dom,
> xendev->dev);
> > +        snprintf(xendev->be, sizeof(xendev->be),
> "%s/backend/%s/%d/%d",
> > +                 stub, xendev->type, xendev->dom, xendev->dev);
> > +        g_free(stub);
> > +    } else {
> > +        snprintf(xendev->be, sizeof(xendev->be), "backend/%s/%d/%d",
> > +                 xendev->type, xendev->dom, xendev->dev);
> > +    }
> >      snprintf(xendev->name, sizeof(xendev->name), "%s-%d",
> >               xendev->type, xendev->dev);
> >
> > @@ -611,6 +645,47 @@ static int xenstore_scan(const char *type, int
> dom, struct XenDevOps *ops)
> >      return 0;
> >  }
> >
> > +static void stubdom_update_be(char *watch, char *type, int dom,
> > +                              struct XenDevOps *ops)
> > +{
> > +    struct XenDevice *xendev;
> > +    char path[XEN_BUFSIZE];
> > +    char *ptr, *bepath;
> > +    unsigned int len, dev;
> > +
> > +    if (!(ops->flags & DEVOPS_FLAG_STUBDOM_BE)) {
> > +        return;
> > +    }
> > +
> > +    len = snprintf(path, sizeof(path), "backend/%s/%d", type, dom);
> > +    ptr = strstr(watch, path);
> > +    if (ptr == NULL) {
> > +        return;
> > +    }
> > +
> > +    if (sscanf(ptr+len, "/%u/%255s", &dev, path) != 2) {
> > +        strcpy(path, "");
> > +        if (sscanf(ptr+len, "/%u", &dev) != 1) {
> > +            dev = -1;
> > +        }
> > +    }
> > +
> > +    if (dev == -1) {
> > +        return;
> > +    }
> > +
> > +    xendev = xen_be_get_xendev(type, dom, dev, ops);
> > +    if (xendev != NULL) {
> > +        bepath = xs_read(xenstore, 0, xendev->be, &len);
> > +        if (bepath == NULL) {
> > +            xen_be_del_xendev(dom, dev);
> > +        } else {
> > +            free(bepath);
> > +            xen_be_backend_changed(xendev, path);
> > +        }
> > +    }
> > +}
> > +
> >  static void xenstore_update_be(char *watch, char *type, int dom,
> >                                 struct XenDevOps *ops)
> >  {
> > @@ -681,6 +756,10 @@ static void xenstore_update(void *unused)
> >      if (sscanf(vec[XS_WATCH_TOKEN], "fe:%" PRIxPTR, &ptr) == 1) {
> >          xenstore_update_fe(vec[XS_WATCH_PATH], (void*)ptr);
> >      }
> > +    if (sscanf(vec[XS_WATCH_TOKEN], "stub:%" PRIxPTR ":%d:%"
> PRIxPTR,
> > +               &type, &dom, &ops) == 3) {
> > +        stubdom_update_be(vec[XS_WATCH_PATH], (void *)type, dom,
> (void *)ops);
> > +    }
> >
> >  cleanup:
> >      free(vec);
> > @@ -732,11 +811,74 @@ err:
> >      return -1;
> >  }
> >
> > +int xen_vtpm_register(struct XenDevOps *ops)
> > +{
> > +    struct XenDevice *xendev;
> > +    char path[XEN_BUFSIZE], token[XEN_BUFSIZE];
> > +    char *domu;
> > +    unsigned int cdev, j, rc;
> > +    const char *type = "vtpm";
> > +    char **dev = NULL;
> > +
> > +    /*front domu*/
> > +    domu = xs_get_domain_path(xenstore, xen_domid);
> > +    snprintf(path, sizeof(path), "%s/device/%s",
> > +             domu, type);
> > +    free(domu);
> > +    dev = xs_directory(xenstore, 0, path, &cdev);
> > +    if (dev == NULL) {
> > +        return 0;
> > +    }
> > +
> > +    for (j = 0; j < cdev; j++) {
> > +        xendev = xen_be_get_xendev(type, xen_domid, atoi(dev[j]),
> ops);
> > +        if (xendev == NULL) {
> > +            xen_be_printf(xendev, 0, "xen_vtpm_register xendev is
> NULL.\n");
> > +            continue;
> > +        }
> > +
> > +        if (xendev->ops->initialise) {
> > +            rc = xendev->ops->initialise(xendev);
> > +
> > +            /*if initialise failed, delete it*/
> > +            if (rc != 0) {
> > +                xen_be_del_xendev(xen_domid, atoi(dev[j]));
> > +                continue;
> > +            }
> > +        }
> > +
> > +        /*setup watch*/
> > +        snprintf(token, sizeof(token), "stub:%p:%d:%p",
> > +                 type, xen_domid, xendev->ops);
> > +        if (!xs_watch(xenstore, xendev->be, token)) {
> > +            xen_be_printf(xendev, 0, "xen_vtpm_register xs_watch
> failed.\n");
> > +            return -1;
> > +        }
> > +    }
> > +
> > +    free(dev);
> > +    return 0;
> > +}
> 
> What does this function do? I sholdn't need  to guess from the code, I
> should be able to tell from the patch description.
> 
> 
> >  int xen_be_register(const char *type, struct XenDevOps *ops)
> >  {
> >      return xenstore_scan(type, xen_domid, ops);
> >  }
> >
> > +int xen_be_alloc_unbound(struct XenDevice *xendev, int dom, int
> remote_dom)
> > +{
> > +    xendev->local_port =
> xc_evtchn_bind_unbound_port(xendev->evtchndev,
> > +
> remote_dom);
> > +    if (xendev->local_port == -1) {
> > +        xen_be_printf(xendev, 0, "xc_evtchn_alloc_unbound failed\n");
> > +        return -1;
> > +    }
> > +    xen_be_printf(xendev, 2, "bind evtchn port %d\n",
> xendev->local_port);
> > +    qemu_set_fd_handler(xc_evtchn_fd(xendev->evtchndev),
> > +                        xen_be_evtchn_event, NULL, xendev);
> > +    return 0;
> > +}
> > +
> >  int xen_be_bind_evtchn(struct XenDevice *xendev)
> >  {
> >      if (xendev->local_port != -1) {
> > @@ -770,6 +912,42 @@ int xen_be_send_notify(struct XenDevice
> *xendev)
> >      return xc_evtchn_notify(xendev->evtchndev, xendev->local_port);
> >  }
> >
> > +int xen_vtpm_send(unsigned char *buf, size_t count)
> > +{
> > +    struct XenDevice *xendev;
> > +    int rc = -1;
> > +
> > +    xendev = xen_be_find_xendev("vtpm", xen_domid, 0);
> > +    if (xendev == NULL) {
> > +        xen_be_printf(xendev, 0, "Con not find vtpm device\n");
> > +        return -1;
> > +    }
> > +
> > +    if (xendev->ops->send) {
> > +        rc = xendev->ops->send(xendev, buf, count);
> > +    }
> > +
> > +    return rc;
> > +}
> > +
> > +int xen_vtpm_recv(unsigned char *buf, size_t *count)
> > +{
> > +    struct XenDevice *xendev;
> > +    int rc = -1;
> > +
> > +    xendev = xen_be_find_xendev("vtpm", xen_domid, 0);
> > +    if (xendev == NULL) {
> > +        xen_be_printf(xendev, 0, "Con not find vtpm device\n");
> > +        return -1;
> > +    }
> > +
> > +    if (xendev->ops->recv) {
> > +        xendev->ops->recv(xendev, buf, count);
> > +    }
> > +
> > +    return rc;
> > +}
> 
> xen_backend.c is supposed to be generic, so stubdom functions might be
> OK but vtpm specific functions should not be here.
> 
> 
> >  /*
> >   * msg_level:
> >   *  0 == errors (stderr + logfile).
> > diff --git a/hw/xen/xen_stubdom_vtpm.c b/hw/xen/xen_stubdom_vtpm.c
> > new file mode 100644
> > index 0000000..0d740c1
> > --- /dev/null
> > +++ b/hw/xen/xen_stubdom_vtpm.c
> > @@ -0,0 +1,333 @@
> > +/*
> > + * Connect to Xen vTPM stubdom domain
> > + *
> > + *  Copyright (c) 2014 Intel Corporation
> > + *  Authors:
> > + *    Quan Xu <quan.xu@xxxxxxxxx>
> > + *
> > + * This library is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU Lesser General Public
> > + * License as published by the Free Software Foundation; either
> > + * version 2 of the License, or (at your option) any later version.
> > + *
> > + * This library is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> GNU
> > + * Lesser General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU Lesser General Public
> > + * License along with this library; if not, see
> <http://www.gnu.org/licenses/>
> > + */
> > +
> > +#include <stdio.h>
> > +#include <stdlib.h>
> > +#include <stdarg.h>
> > +#include <string.h>
> > +#include <unistd.h>
> > +#include <signal.h>
> > +#include <inttypes.h>
> > +#include <time.h>
> > +#include <fcntl.h>
> > +#include <errno.h>
> > +#include <sys/ioctl.h>
> > +#include <sys/types.h>
> > +#include <sys/stat.h>
> > +#include <sys/mman.h>
> > +#include <sys/uio.h>
> > +
> > +#include "hw/hw.h"
> > +#include "block/aio.h"
> > +#include "hw/xen/xen_backend.h"
> > +
> > +enum tpmif_state {
> > +    TPMIF_STATE_IDLE,        /* no contents / vTPM idle / cancel
> complete */
> > +    TPMIF_STATE_SUBMIT,      /* request ready / vTPM working */
> > +    TPMIF_STATE_FINISH,      /* response ready / vTPM idle */
> > +    TPMIF_STATE_CANCEL,      /* cancel requested / vTPM working */
> > +};
> > +
> > +static AioContext *vtpm_aio_ctx;
> > +
> > +enum status_bits {
> > +    VTPM_STATUS_RUNNING  = 0x1,
> > +    VTPM_STATUS_IDLE     = 0x2,
> > +    VTPM_STATUS_RESULT   = 0x4,
> > +    VTPM_STATUS_CANCELED = 0x8,
> > +};
> > +
> > +struct tpmif_shared_page {
> > +    uint32_t length;         /* request/response length in bytes */
> > +
> > +    uint8_t  state;           /* enum tpmif_state */
> > +    uint8_t  locality;        /* for the current request */
> > +    uint8_t  pad;             /* should be zero */
> > +
> > +    uint8_t  nr_extra_pages;  /* extra pages for long packets; may be
> zero */
> > +    uint32_t extra_pages[0]; /* grant IDs; length is actually
> nr_extra_pages */
> > +};
> > +
> > +struct QEMUBH {
> > +    AioContext *ctx;
> > +    QEMUBHFunc *cb;
> > +    void *opaque;
> > +    QEMUBH *next;
> > +    bool scheduled;
> > +    bool idle;
> > +    bool deleted;
> > +};
> > +
> > +struct XenVtpmDev {
> > +    struct XenDevice xendev;  /* must be first */
> > +    struct           tpmif_shared_page *shr;
> > +    xc_gntshr        *xen_xcs;
> > +    int              ring_ref;
> > +    int              bedomid;
> > +    QEMUBH           *sr_bh;
> > +};
> > +
> > +static uint8_t vtpm_status(struct XenVtpmDev *vtpmdev)
> > +{
> > +    switch (vtpmdev->shr->state) {
> > +    case TPMIF_STATE_IDLE:
> > +    case TPMIF_STATE_FINISH:
> > +        return VTPM_STATUS_IDLE;
> > +    case TPMIF_STATE_SUBMIT:
> > +    case TPMIF_STATE_CANCEL:
> > +        return VTPM_STATUS_RUNNING;
> > +    default:
> > +        return 0;
> > +    }
> > +}
> > +
> > +static int xenbus_switch_state(struct XenDevice *xendev, enum
> xenbus_state xbus)
> > +{
> > +    xs_transaction_t xbt = XBT_NULL;
> > +
> > +    if (xendev->fe_state == xbus) {
> > +        return 0;
> > +    }
> > +
> > +    xendev->fe_state = xbus;
> > +
> > +retry_transaction:
> > +    xbt = xs_transaction_start(xenstore);
> > +    if (xbt == XBT_NULL) {
> > +        goto abort_transaction;
> > +    }
> > +
> > +    if (xenstore_write_int(xendev->fe, "state", xbus)) {
> > +        goto abort_transaction;
> > +    }
> > +
> > +    if (!xs_transaction_end(xenstore, xbt, 0)) {
> > +        if (errno == EAGAIN) {
> > +            goto retry_transaction;
> > +        }
> > +    }
> > +
> > +    return 0;
> > +
> > +abort_transaction:
> > +    xs_transaction_end(xenstore, xbt, 1);
> > +    return -1;
> > +}
> > +
> > +static int vtpm_aio_wait(QEMUBH *qbh)
> > +{
> > +    return aio_poll(qbh->ctx, true);
> > +}
> > +
> > +static void sr_bh_handler(void *opaque)
> > +{
> > +}
> > +
> > +static int vtpm_recv(struct XenDevice *xendev, uint8_t* buf, size_t
> *count)
> > +{
> > +    struct XenVtpmDev *vtpmdev = container_of(xendev, struct
> XenVtpmDev,
> > +                                              xendev);
> > +    struct tpmif_shared_page *shr = vtpmdev->shr;
> > +    unsigned int offset;
> > +
> > +    if (shr->state == TPMIF_STATE_IDLE) {
> > +        return -ECANCELED;
> > +    }
> > +
> > +    while (vtpm_status(vtpmdev) != VTPM_STATUS_IDLE) {
> > +        vtpm_aio_wait(vtpmdev->sr_bh);
> > +    }
> > +
> > +    offset = sizeof(*shr) + 4*shr->nr_extra_pages;
> > +    memcpy(buf, offset + (uint8_t *)shr, shr->length);
> > +    *count = shr->length;
> > +
> > +    return 0;
> > +}
> > +
> > +static int vtpm_send(struct XenDevice *xendev, uint8_t* buf, size_t
> count)
> > +{
> > +    struct XenVtpmDev *vtpmdev = container_of(xendev, struct
> XenVtpmDev,
> > +                                              xendev);
> > +    struct tpmif_shared_page *shr = vtpmdev->shr;
> > +    unsigned int offset = sizeof(*shr) + 4*shr->nr_extra_pages;
> > +
> > +    while (vtpm_status(vtpmdev) != VTPM_STATUS_IDLE) {
> > +        vtpm_aio_wait(vtpmdev->sr_bh);
> > +    }
> > +
> > +    memcpy(offset + (uint8_t *)shr, buf, count);
> > +    shr->length = count;
> > +    barrier();
> > +    shr->state = TPMIF_STATE_SUBMIT;
> > +    xen_wmb();
> > +    xen_be_send_notify(&vtpmdev->xendev);
> > +
> > +    while (vtpm_status(vtpmdev) != VTPM_STATUS_IDLE) {
> > +        vtpm_aio_wait(vtpmdev->sr_bh);
> > +    }
> > +
> > +    return count;
> > +}
> > +
> > +static int vtpm_initialise(struct XenDevice *xendev)
> > +{
> > +    struct XenVtpmDev *vtpmdev = container_of(xendev, struct
> XenVtpmDev,
> > +                                              xendev);
> > +    xs_transaction_t xbt = XBT_NULL;
> > +    unsigned int ring_ref;
> > +
> > +    vtpmdev->xendev.fe = xenstore_read_be_str(&vtpmdev->xendev,
> "frontend");
> > +    if (vtpmdev->xendev.fe == NULL) {
> > +        return -1;
> > +    }
> > +
> > +    /* Get backend domid */
> > +    if (xenstore_read_fe_int(&vtpmdev->xendev, "backend-id",
> > +                             &vtpmdev->bedomid)) {
> > +        return -1;
> > +    }
> > +
> > +    /*alloc share page*/
> > +    vtpmdev->shr = xc_gntshr_share_pages(vtpmdev->xen_xcs,
> vtpmdev->bedomid, 1,
> > +                                         &ring_ref,
> PROT_READ|PROT_WRITE);
> > +    vtpmdev->ring_ref = ring_ref;
> > +    if (vtpmdev->shr == NULL) {
> > +        return -1;
> > +    }
> > +
> > +    /*Create event channel */
> > +    if (xen_be_alloc_unbound(&vtpmdev->xendev, 0,
> vtpmdev->bedomid)) {
> > +        xc_gntshr_munmap(vtpmdev->xen_xcs, vtpmdev->shr, 1);
> > +        return -1;
> > +    }
> > +
> > +    xc_evtchn_unmask(vtpmdev->xendev.evtchndev,
> > +                     vtpmdev->xendev.local_port);
> > +
> > +again:
> > +    xbt = xs_transaction_start(xenstore);
> > +    if (xbt == XBT_NULL) {
> > +        goto abort_transaction;
> > +    }
> > +
> > +    if (xenstore_write_int(vtpmdev->xendev.fe, "ring-ref",
> > +                           vtpmdev->ring_ref)) {
> > +        goto abort_transaction;
> > +    }
> > +
> > +    if (xenstore_write_int(vtpmdev->xendev.fe, "event-channel",
> > +                           vtpmdev->xendev.local_port)) {
> > +        goto abort_transaction;
> > +    }
> > +
> > +    /* Publish protocol v2 feature */
> > +    if (xenstore_write_int(vtpmdev->xendev.fe, "feature-protocol-v2", 1))
> {
> > +        goto abort_transaction;
> > +    }
> > +
> > +    if (!xs_transaction_end(xenstore, xbt, 0)) {
> > +        if (errno == EAGAIN) {
> > +            goto again;
> > +        }
> > +    }
> > +    /* Tell vtpm backend that we are ready */
> > +    xenbus_switch_state(&vtpmdev->xendev, XenbusStateInitialised);
> > +
> > +    return 0;
> > +
> > +abort_transaction:
> > +    xc_gntshr_munmap(vtpmdev->xen_xcs, vtpmdev->shr, 1);
> > +    xs_transaction_end(xenstore, xbt, 1);
> > +    return -1;
> > +}
> > +
> > +static void vtpm_backend_changed(struct XenDevice *xendev, const char
> *node)
> > +{
> > +    struct XenVtpmDev *vtpmdev = container_of(xendev, struct
> XenVtpmDev,
> > +                                              xendev);
> > +    int be_state;
> > +
> > +    if (strcmp(node, "state") == 0) {
> > +        xenstore_read_be_int(&vtpmdev->xendev, node, &be_state);
> > +        switch (be_state) {
> > +        case XenbusStateConnected:
> > +            /*TODO*/
> > +            break;
> > +        case XenbusStateClosing:
> > +        case XenbusStateClosed:
> > +            xenbus_switch_state(&vtpmdev->xendev,
> XenbusStateClosing);
> > +            break;
> > +        default:
> > +            break;
> > +        }
> > +    }
> > +}
> > +
> > +static int vtpm_free(struct XenDevice *xendev)
> > +{
> > +    struct XenVtpmDev *vtpmdev = container_of(xendev, struct
> XenVtpmDev,
> > +                                              xendev);
> > +    QEMUBH *qbh = vtpmdev->sr_bh;
> > +
> > +    aio_poll(qbh->ctx, false);
> > +    qemu_bh_delete(vtpmdev->sr_bh);
> > +    if (vtpmdev->shr) {
> > +        xc_gntshr_munmap(vtpmdev->xen_xcs, vtpmdev->shr, 1);
> > +    }
> > +    xc_interface_close(vtpmdev->xen_xcs);
> > +    return 0;
> > +}
> > +
> > +static void vtpm_alloc(struct XenDevice *xendev)
> > +{
> > +    struct XenVtpmDev *vtpmdev = container_of(xendev, struct
> XenVtpmDev,
> > +                                              xendev);
> > +
> > +    vtpm_aio_ctx = aio_context_new(NULL);
> > +    if (vtpm_aio_ctx == NULL) {
> > +        return;
> > +    }
> > +    vtpmdev->sr_bh = aio_bh_new(vtpm_aio_ctx, sr_bh_handler,
> vtpmdev);
> > +    qemu_bh_schedule(vtpmdev->sr_bh);
> > +    vtpmdev->xen_xcs = xen_xc_gntshr_open(0, 0);
> > +}
> > +
> > +static void vtpm_event(struct XenDevice *xendev)
> > +{
> > +    struct XenVtpmDev *vtpmdev = container_of(xendev, struct
> XenVtpmDev,
> > +                                              xendev);
> > +
> > +    qemu_bh_schedule(vtpmdev->sr_bh);
> > +}
> > +
> > +struct XenDevOps xen_vtpmdev_ops = {
> > +    .size             = sizeof(struct XenVtpmDev),
> > +    .flags            = DEVOPS_FLAG_IGNORE_STATE |
> > +                        DEVOPS_FLAG_STUBDOM_BE,
> > +    .event            = vtpm_event,
> > +    .free             = vtpm_free,
> > +    .alloc            = vtpm_alloc,
> > +    .initialise       = vtpm_initialise,
> > +    .backend_changed  = vtpm_backend_changed,
> > +    .recv             = vtpm_recv,
> > +    .send             = vtpm_send,
> > +};
> 
> Is this the frontend, like the subject line would seem to imply?
> If so, XenDevOps are made for backends, while this is a frontend. In
> fact this is the first PV frontend in QEMU. We need to introduce
> something generic and similar to struct XenDevOps and xen_backend.c but
> for frontends.
> 
> 
> > diff --git a/include/hw/xen/xen_backend.h
> b/include/hw/xen/xen_backend.h
> > index 3b4125e..45fd6d3 100644
> > --- a/include/hw/xen/xen_backend.h
> > +++ b/include/hw/xen/xen_backend.h
> > @@ -15,6 +15,8 @@ struct XenDevice;
> >  #define DEVOPS_FLAG_NEED_GNTDEV   1
> >  /* don't expect frontend doing correct state transitions (aka console
> quirk) */
> >  #define DEVOPS_FLAG_IGNORE_STATE  2
> > +/*dev backend is stubdom*/
> > +#define DEVOPS_FLAG_STUBDOM_BE    4
> >
> >  struct XenDevOps {
> >      size_t    size;
> > @@ -26,6 +28,8 @@ struct XenDevOps {
> >      void      (*event)(struct XenDevice *xendev);
> >      void      (*disconnect)(struct XenDevice *xendev);
> >      int       (*free)(struct XenDevice *xendev);
> > +    int       (*send)(struct XenDevice *xendev, uint8_t* buf, size_t
> count);
> > +    int       (*recv)(struct XenDevice *xendev, uint8_t* buf, size_t
> *count);
> >      void      (*backend_changed)(struct XenDevice *xendev, const
> char *node);
> >      void      (*frontend_changed)(struct XenDevice *xendev, const
> char *node);
> >  };
> > @@ -91,12 +95,19 @@ int xen_be_send_notify(struct XenDevice
> *xendev);
> >  void xen_be_printf(struct XenDevice *xendev, int msg_level, const char
> *fmt, ...)
> >      GCC_FMT_ATTR(3, 4);
> >
> > +/*Xen stubdom vtpm*/
> > +int xen_vtpm_register(struct XenDevOps *ops);
> > +int xen_be_alloc_unbound(struct XenDevice *xendev, int dom, int
> remote_dom);
> > +int xen_vtpm_send(unsigned char *buf, size_t count);
> > +int xen_vtpm_recv(unsigned char *buf, size_t *count);
> > +
> >  /* actual backend drivers */
> >  extern struct XenDevOps xen_console_ops;      /* xen_console.c
> */
> >  extern struct XenDevOps xen_kbdmouse_ops;     /* xen_framebuffer.c
> */
> >  extern struct XenDevOps xen_framebuffer_ops;  /* xen_framebuffer.c */
> >  extern struct XenDevOps xen_blkdev_ops;       /* xen_disk.c
> */
> >  extern struct XenDevOps xen_netdev_ops;       /* xen_nic.c
> */
> > +extern struct XenDevOps xen_vtpmdev_ops;      /*
> xen_stubdom_vtpm.c*/
> >
> >  void xen_init_display(int domid);
> >
> > diff --git a/include/hw/xen/xen_common.h
> b/include/hw/xen/xen_common.h
> > index 95612a4..fb43084 100644
> > --- a/include/hw/xen/xen_common.h
> > +++ b/include/hw/xen/xen_common.h
> > @@ -130,6 +130,12 @@ static inline XenXC xen_xc_interface_open(void
> *logger, void *dombuild_logger,
> >      return xc_interface_open(logger, dombuild_logger, open_flags);
> >  }
> >
> > +static inline xc_gntshr *xen_xc_gntshr_open(void *logger,
> > +                                           unsigned int open_flags)
> > +{
> > +    return xc_gntshr_open(logger, open_flags);
> > +}
> > +
> >  /* FIXME There is now way to have the xen fd */
> >  static inline int xc_fd(xc_interface *xen_xc)
> >  {
> > diff --git a/xen-hvm.c b/xen-hvm.c
> > index 21f1cbb..c99ace8 100644
> > --- a/xen-hvm.c
> > +++ b/xen-hvm.c
> > @@ -1067,6 +1067,11 @@ int xen_hvm_init(ram_addr_t
> *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
> >      int i, rc;
> >      unsigned long ioreq_pfn;
> >      unsigned long bufioreq_evtchn;
> > +
> > +#ifdef CONFIG_TPM_XENSTUBDOMS
> > +    unsigned long stubdom_vtpm = 0;
> > +#endif
> > +
> >      XenIOState *state;
> >
> >      state = g_malloc0(sizeof (XenIOState));
> > @@ -1169,6 +1174,14 @@ int xen_hvm_init(ram_addr_t
> *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
> >          fprintf(stderr, "%s: xen backend core setup failed\n",
> __FUNCTION__);
> >          return -1;
> >      }
> > +
> > +#ifdef CONFIG_TPM_XENSTUBDOMS
> > +    xc_get_hvm_param(xen_xc, xen_domid,
> HVM_PARAM_STUBDOM_VTPM, &stubdom_vtpm);
> > +    if (stubdom_vtpm) {
> > +        xen_vtpm_register(&xen_vtpmdev_ops);
> > +    }
> > +#endif
> 
> Given that vtpm is just a PV frontend, can't you just detect whether is
> present on xenstore and initialize it based on that? Like all the
> backend below?

I will also explain this in my next email.


> 
> 
> >      xen_be_register("console", &xen_console_ops);
> >      xen_be_register("vkbd", &xen_kbdmouse_ops);
> >      xen_be_register("qdisk", &xen_blkdev_ops);
> > --
> > 1.8.3.2
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

