
Re: [Xen-devel] vTPM detaching issue



On June 13, 2016 11:11 PM, Andrea Genuise <and.genuise@xxxxxxxxx> wrote:
> I'm not sure if this is a bug or my fault, but when I create a domain
> with a vTPM attached, detaching it sometimes causes the following error
> to be thrown (I post the command sequence):
>

I am afraid this is not a bug. 'xl vtpm-detach' destroys a domain's virtual
TPM device.
Based on your record, ...

> [root@localhost ~]# xl create /etc/xen/vtpmmgr-stubdom
> Parsing config from /etc/xen/vtpmmgr-stubdom
> [root@localhost ~]# xl create /etc/xen/vtpm1
> Parsing config from /etc/xen/vtpm1
> [root@localhost ~]# xl create /etc/xen/dom1_ima
> Parsing config from /etc/xen/dom1_ima
> [root@localhost ~]# xl vtpm-detach dom1 vtpm1

... here, you detached the vTPM successfully.

> [root@localhost ~]# xl destroy dom1
> [root@localhost ~]# xl vtpm-detach vtpm1 vtpmmgr

IMO, vtpm-detach does not support detaching a vTPM stubdom from the vtpmmgr
stubdom; it only detaches a vTPM device from a guest domain.
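For reference, a teardown sequence that avoids running vtpm-detach between
the two stubdoms might look like the following. This is only a sketch based
on the domain names in your report (dom1, vtpm1, vtpmmgr); the stubdoms are
destroyed directly with 'xl destroy' rather than detached from each other:

[root@localhost ~]# xl vtpm-detach dom1 vtpm1   # detach the guest's vTPM device
[root@localhost ~]# xl destroy dom1             # tear down the guest
[root@localhost ~]# xl destroy vtpm1            # destroy the vTPM stubdom outright
[root@localhost ~]# xl destroy vtpmmgr          # finally, the manager stubdom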

Thanks
Quan Xu

> libxl: error: libxl_device.c:952:device_backend_callback: unable to
> remove device with path /local/domain/18/backend/vtpm/19/0
> libxl: error: libxl.c:1995:device_addrm_aocomplete: unable to remove
> vtpm with id 0
> libxl_device_vtpm_remove failed.
>
> Sometimes the error is raised while detaching vtpmmgr from vtpm (as
> reported), other times while detaching vtpm from domain. I think
> this could be a synchronization problem.
>
> I report some info:
>
> [root@localhost ~]# xl info
> host                   : localhost.localdomain
> release                : 3.18.25-19.without_tpm.el7.centos.x86_64
> version                : #1 SMP Sun Apr 10 18:10:14 CEST 2016
> machine                : x86_64
> nr_cpus                : 2
> max_cpu_id             : 3
> nr_nodes               : 1
> cores_per_socket       : 2
> threads_per_core       : 1
> cpu_mhz                : 2394
> hw_caps                : 
> bfebfbff:20100800:00000000:00000900:0408e3fd:00000000:00000001:00000000
> virt_caps              : hvm hvm_directio
> total_memory           : 3996
> free_memory            : 2888
> sharing_freed_memory   : 0
> sharing_used_memory    : 0
> outstanding_claims     : 0
> free_cpus              : 0
> xen_major              : 4
> xen_minor              : 6
> xen_extra              : .1-5.el7
> xen_version            : 4.6.1-5.el7
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 
> hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xffff800000000000
> xen_changeset          : Tue Mar 29 11:02:43 2016 +0100 git:8210a62-dirty
> xen_commandline        : placeholder dom0_mem=1024M,max:1024M cpuinfo 
> com1=115200,8n1 console=com1,tty loglvl=all guest_loglvl=all
> cc_compiler            : gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4)
> cc_compile_by          : mockbuild
> cc_compile_domain      : centos.org
> cc_compile_date        : Tue Mar 29 12:15:50 UTC 2016
> xend_config_format     : 4
>
> [root@localhost ~]# cat /etc/xen/vtpmmgr-stubdom
> name = "vtpmmgr"
> kernel="/usr/lib/xen/boot/vtpmmgr-stubdom.gz"
> memory=16
> disk=["file:/srv/xen/vtpmmgr-stubdom.img,hda,w"]
> iomem=["fed40,5"]
>
> [root@localhost ~]# cat /etc/xen/vtpm1
> name="vtpm1"
> kernel="/usr/lib/xen/boot/vtpm-stubdom.gz"
> memory=8
> disk=["file:/srv/xen/vtpm1.img,hda,w"]
> vtpm=["backend=vtpmmgr,uuid=8aca22b3-768a-41e7-b2cb-123d23901996"]
>
> [root@localhost ~]# cat /etc/xen/dom1_ima
> kernel = "/srv/xen/vmlinuz-xen"
> ramdisk = "/srv/xen/initrd-xen"
> name = "dom1"
> memory = "512"
> disk = [ 'tap:aio:/srv/xen/dom1.img,xvda1,w' ]
> vcpus=1
> root = '/dev/xvda1 ro'
> extra = 'ima_tcb'
> vtpm=['backend=vtpm1']
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel