
Re: [Xen-devel] [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.





On 6/22/2016 2:39 PM, Jan Beulich wrote:
On 21.06.16 at 16:38, <george.dunlap@xxxxxxxxxx> wrote:
On 21/06/16 10:47, Jan Beulich wrote:
And then - didn't we mean to disable that part of XenGT during
migration, i.e. temporarily accept the higher performance
overhead without the p2m_ioreq_server entries? In which case
flipping everything back to p2m_ram_rw after (completed or
canceled) migration would be exactly what we want. The (new
or previous) ioreq server should attach only afterwards, and
can then freely re-establish any p2m_ioreq_server entries it
deems necessary.

Well, I agree this part of XenGT should be disabled during migration.
But in such a case I think it's the device model's job to trigger the
p2m type flipping (i.e. by calling HVMOP_set_mem_type).
I agree - this would seem to be the simpler model here, despite (as
George validly says) the more consistent model would be for the
hypervisor to do the cleanup. Such cleanup would imo be reasonable
only if there was an easy way for the hypervisor to enumerate all
p2m_ioreq_server pages.
Well, for me, the "easy way" means we should avoid traversing the whole EPT
paging structure all at once, right?
Yes.
Does calling p2m_change_entry_type_global() not satisfy this requirement?
Not really - that addresses the "low overhead" aspect, but not the
"enumerate all such entries" one.

I have not figured out any clean solution on the hypervisor side; that's
one reason I'd like to leave this job to the device model side (another
reason is that I do think the device model should take this
responsibility).
Let's see if we can get George to agree.
Well I had in principle already agreed to letting this be the interface
on the previous round of patches; we're having this discussion because
you (Jan) asked about what happens if an ioreq server is de-registered
while there are still outstanding p2m types. :-)
Indeed. Yet so far I understood you didn't like de-registration both
not doing the cleanup itself and failing if there are outstanding
entries.

I do think having Xen change the type makes the most sense, but if
you're happy to leave that up to the ioreq server, I'm OK with things
being done that way as well.  I think we can probably change it later if
we want.
Yes, since ioreq server interfaces will all be unstable ones, that
shouldn't be a problem. Albeit that's only the theory. With the call
coming from the device model, we'd need to make sure to put all
the logic (if any) to deal with the hypervisor implementation details
into libxc, so the caller of the libxc interface won't need to change.
I've learned while putting together the hvmctl series that this
wasn't done cleanly enough for one of the existing interfaces (see
patch 10 of that series).

Thanks, Jan & George. So I take it you both accept that we can leave the
cleanup to the device model side, right?

B.R.
Yu


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

