
Re: [Xen-devel] [PATCH v2 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server

On 4/19/2016 6:05 PM, Paul Durrant wrote:
-----Original Message-----
From: Yu, Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
Sent: 19 April 2016 10:44
To: Paul Durrant; George Dunlap; xen-devel@xxxxxxxxxxxxx
Cc: Kevin Tian; Jan Beulich; Andrew Cooper; Tim (Xen.org); Lv, Zhiyuan;
Subject: Re: [Xen-devel] [PATCH v2 3/3] x86/ioreq server: Add HVMOP to
map guest ram with p2m_ioreq_server to an ioreq server

On 4/19/2016 5:21 PM, Paul Durrant wrote:
-----Original Message-----
Do any other maintainers have any suggestions?

Note that it is a requirement that an ioreq server be disabled before VM
suspend. That means ioreq server pages essentially have to go back to
ram_rw semantics.


OK. So it should be the hypervisor's responsibility to do the resetting.
Now we probably have two choices:
1> we reset the p2m type synchronously when the ioreq server unmapping
happens, instead of deferring it to the misconfiguration handling path.
This means a performance impact, since we have to traverse the p2m table.
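To make the cost of choice 1> concrete, here is a minimal toy model (the
enum values, array layout, and function name are invented for illustration;
this is not Xen's actual p2m code): the synchronous reset has to sweep every
entry at unmap time.

```c
#include <stddef.h>

/* Toy p2m: one type per guest page (hypothetical flat layout). */
enum p2m_type { p2m_ram_rw, p2m_ioreq_server, p2m_mmio_dm };

/* Synchronous reset at ioreq server unmap time: walk every entry and
 * flip p2m_ioreq_server back to p2m_ram_rw.  Cost is O(nr_pages) even
 * when only a handful of pages carry the type, which is the performance
 * concern with this choice.  Returns how many entries were changed. */
static size_t reset_ioreq_entries(enum p2m_type *p2m, size_t nr_pages)
{
    size_t changed = 0;

    for ( size_t gfn = 0; gfn < nr_pages; gfn++ )
        if ( p2m[gfn] == p2m_ioreq_server )
        {
            p2m[gfn] = p2m_ram_rw;
            changed++;
        }

    return changed;
}
```

Deferring to misconfig handling instead would spread this work out lazily,
at the price of leaving stale-typed entries around until they are touched.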

Do we need to reset at all? The p2m type does need to be transferred; it
will just be unclaimed on the far end (i.e. the pages are treated as r/w ram)
until the emulator starts up there. If that cannot be done without creating
yet another p2m type to handle logdirty (which seems a suboptimal way of
dealing with it), then I think migration needs to be disallowed on any domain
that contains any ioreq_server type pages at this stage.


Yes, we do. Either the device model or the hypervisor should guarantee
that there are no p2m_ioreq_server pages left after an ioreq server is
unmapped from this type (which is write-protected in such a scenario);
otherwise accesses to those pages might be forwarded to some other,
unexpected device model that claims p2m_ioreq_server later.

That should be for the device model to guarantee IMO. If the 'wrong' emulator 
claims the ioreq server type then I don't think that's Xen's problem.

Thanks, Paul.

So what about the VM suspend case you mentioned above? Will that trigger
the unmapping of the ioreq server? Could the device model also take the
role of changing the p2m type back in such a case?

It would be much simpler if the hypervisor side did not need to provide
the p2m resetting logic, and we could then support live migration at the
same time. :)


So I guess approach 2> is your suggestion now.

Besides, Jan previously also questioned the necessity of resetting the
p2m type when an ioreq server is mapped to p2m_ioreq_server. His
argument is that we should only allow such a p2m transition after an
ioreq server has already been mapped to this p2m_ioreq_server type. I
think his point sounds reasonable as well.

I was kind of hoping to avoid that ordering dependency but if it makes things 
simpler then so be it.



2> we just disallow live migration when p2m->ioreq.server is not NULL.
This is not quite accurate, because having p2m->ioreq.server mapped
to p2m_ioreq_server does not necessarily mean there are any such
outstanding entries. To be more accurate, we could add some other rough
checks, e.g. check both whether p2m->ioreq.server is non-NULL and whether
hvmop_set_mem_type has ever been triggered for the
p2m_ioreq_server type.
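The rough precheck in choice 2> could be sketched as below (the struct and
field names here are invented for illustration and do not match Xen's actual
data structures): migration is refused while an emulator is mapped, or while
any page may ever have been set to p2m_ioreq_server.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-domain p2m state for this sketch only. */
struct toy_p2m {
    void *ioreq_server;       /* non-NULL while an emulator is mapped   */
    size_t ioreq_set_count;   /* incremented whenever hvmop_set_mem_type
                               * has set a page to p2m_ioreq_server     */
};

/* Conservative precheck: disallow live migration if an ioreq server is
 * still mapped, or if the type was ever handed out.  This is a rough
 * over-approximation, since such entries might have been changed back
 * to ram_rw already. */
static bool migration_allowed(const struct toy_p2m *p2m)
{
    return p2m->ioreq_server == NULL && p2m->ioreq_set_count == 0;
}
```

The over-approximation is the "not quite accurate" part: a domain whose
entries were all reset would still be refused unless the counter is also
maintained on type changes away from p2m_ioreq_server.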

Both choices seem suboptimal to me, and I wonder if we have any
better solutions?


Thanks in advance! :)
If the answer is, "everything just works", that's perfect.

If the answer is, "Before logdirty mode is set, the ioreq server has
opportunity to detach, removing the p2m_ioreq_server entries, and
operating without that functionality", that's good too.

If the answer is, "the live migration request fails and the guest
continues to run", that's also acceptable.  If you want this series to
be checked in today (the last day for 4.7), this is probably your best
bet.


Xen-devel mailing list



