Re: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for multiple servers
> -----Original Message-----
> From: Tim Deegan [mailto:tim@xxxxxxx]
> Sent: 20 March 2014 11:12
> To: Paul Durrant
> Cc: Ian Campbell; xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v3 5/6] ioreq-server: add support for
> multiple servers
>
> At 14:52 +0000 on 17 Mar (1395064322), Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Ian Campbell
> > > I don't suppose there is any way to pull these pages out of the guest
> > > pfn space while still accounting them to the guest. Or if there is, it
> > > would probably be a whole other kettle of fish than this series.
> > >
> >
> > The closest analogy I can think of accounting-wise would be shadow
> > pages. I'll have a look at how they are handled.
>
> It should be much simpler than shadow pages _provided_ that nobody
> adds a XENMAPSPACE namespace that uses real MFNs. Shadow pages have
> to be accounted weirdly because they can't be owned by the guest, or PV
> guests could just map them.
>
> As long as we stick to the rule that all HVM-guest operations have to
> go through the p2m, and we don't let the guest map pages by MFN, then
> removing a page from the p2m is enough to stop the guest accessing it.
>

My plan is to use alloc_domheap_pages() for secondary emulators, so that
the pages are accounted to the guest, but never to add those pages to the
p2m (a rough sketch of this scheme follows at the end of this message).
For the default emulator I'll use the existing specials, for
compatibility. My concern is that this will limit secondary emulators to
running in dom0, since they'll need to use DOMID_XEN to map the pages.

  Paul

> Of course, for security purposes we ought to treat the device-model
> process/stubdom as being under guest control anyway - hence the
> stubdom qemu.
>
> Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
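[A minimal sketch of the allocation scheme Paul describes, assuming the
alloc_domheap_pages(d, order, memflags) interface from xen/common/page_alloc.c;
the hvm_alloc_ioreq_page() wrapper and its naming are illustrative only, not
code from this series.]

#include <xen/errno.h>
#include <xen/mm.h>
#include <xen/sched.h>

/*
 * Hypothetical helper: allocate one ioreq page charged to domain d's
 * memory allocation, without ever giving it a GFN in d's p2m.
 */
static int hvm_alloc_ioreq_page(struct domain *d, struct page_info **pg)
{
    struct page_info *page;

    /* Order-0 allocation; passing d accounts the page to the guest. */
    page = alloc_domheap_pages(d, 0, 0);
    if ( page == NULL )
        return -ENOMEM;

    /*
     * Deliberately no guest_physmap_add_page() here: the page has an
     * owner (d) for accounting purposes, but no GFN, so guest-visible
     * operations - which all go through the p2m - cannot reach it.
     */
    *pg = page;
    return 0;
}

[Because the page never appears in the p2m, only something able to map by
MFN can see it, which is the source of the dom0/DOMID_XEN concern above.]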