Re: [Xen-devel] [PATCH v2 0/*] xen: xen-domid-restrict improvements
Ross Lagerwall writes ("Re: [PATCH v2 0/*] xen: xen-domid-restrict improvements"):
> On 10/04/2017 05:18 PM, Ian Jackson wrote:
> > However, there are changes to qemu needed.  In particular
> >
> >  * The -xen-domid-restrict option does not work properly right now.
> >    It only restricts a small subset of the descriptors qemu has open.
> >    I am introducing a new library call in the Xen libraries for this,
> >    xentoolcore_restrict_all.
...
> I'm testing your QEMU and Xen patch series and found that after being
> restricted, QEMU fails to set up the VGA memory properly, which causes
> a complete stall with stdvga.  With cirrus it mostly works, although
> it seems to have reduced performance.

Thanks for your testing.  I admit that I didn't look at the VGA
console of my guest.  I'm using cirrus, but my guest isn't using it
very much.  I use the "serial" console instead.

> I think it happens when the VM sets up the BAR some time after
> xen_restrict() has been called.  The failure comes from QEMU calling
> xc_domain_add_to_physmap(), which calls do_memory_op() and finally
> xencall2().  But the underlying xencall fd has been replaced with
> /dev/null.

I think that to fix this properly we will need to add a dmop version
of XENMEM_add_to_physmap.  I don't propose to try to do that for Xen
4.10.

In the meantime I think this is good enough for "tech preview", and
provides a base to work on.

> There is a caveat when using -xen-domid-restrict and -chroot at the
> same time.  The restriction happens after chrooting, so the chroot
> directory has to contain a valid /dev/null.  This is a bit annoying
> and prevents the chroot being on a "nodev" mount.

How annoying.  I will fix the relevant qemu patch to do the Xen
restrict before os_setup_post, so that the restriction happens while
/dev/null is still reachable (sketched below).

Ian.
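A minimal sketch of how the new restriction call is meant to be used,
and of why a later privileged hypercall then fails.  Only
xentoolcore_restrict_all() and the call chain are from the discussion
above; the wrapper function and error handling are illustrative
assumptions:

    #include <stdio.h>
    #include <stdlib.h>
    #include <xentoolcore.h>

    /* After this returns, the Xen control fds qemu holds (xencall,
     * gnttab, evtchn, ...) may only affect the given domain; in this
     * series, fds that cannot be meaningfully restricted are reopened
     * onto /dev/null.  Any later privileged path, e.g.
     *   xc_domain_add_to_physmap() -> do_memory_op() -> xencall2(),
     * then goes to /dev/null and fails, which is what breaks the
     * stdvga BAR setup reported above. */
    static void xen_restrict_example(domid_t domid)
    {
        if (xentoolcore_restrict_all(domid) < 0) {
            perror("xentoolcore_restrict_all");
            exit(1);
        }
    }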
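And a sketch of the ordering fix: restrict first, from the normal
root, then chroot.  The helper below is illustrative and stands in
for QEMU's actual xen_restrict()/os_setup_post() sequence:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <xentoolcore.h>

    /* Restricting first means /dev/null is opened from the real
     * filesystem, so the chroot directory need not contain a device
     * node and may live on a "nodev" mount (the caveat reported
     * above).  With the opposite order, restriction would have to
     * find /dev/null inside the chroot. */
    static void restrict_then_chroot(domid_t domid, const char *dir)
    {
        if (xentoolcore_restrict_all(domid) < 0) {
            perror("xentoolcore_restrict_all");
            exit(1);
        }
        if (chroot(dir) < 0 || chdir("/") < 0) {
            perror("chroot");
            exit(1);
        }
    }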