
Re: [Xen-devel] [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.



>>> On 20.06.16 at 13:06, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:

> 
> On 6/20/2016 6:45 PM, Jan Beulich wrote:
>>>>> On 20.06.16 at 12:30, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>> On 6/20/2016 6:10 PM, George Dunlap wrote:
>>>> On 20/06/16 10:03, Yu Zhang wrote:
>>>>> So one solution is to disallow the log-dirty feature in XenGT, i.e.
>>>>> just return failure when enable_logdirty() is called from the
>>>>> toolstack. But I'm afraid this will restrict XenGT's future live
>>>>> migration feature.
>>>> I don't understand this -- for the time being you can return -EBUSY
>>>> if live migration is attempted while there are outstanding
>>>> ioreq_server entries, and at some point in the future, when this
>>>> actually works, you can return success.
>>>>
>>> Well, the problem is we cannot easily tell whether there are any
>>> outstanding p2m_ioreq_server entries.
>> That's easy to address: Keep a running count.
> 
> Oh, sorry, let me try to clarify: by "outstanding p2m_ioreq_server
> entries" I mean entries of type p2m_ioreq_server that have not been
> set back to p2m_ram_rw by the device model when the ioreq server
> detaches. But with asynchronous resetting, we cannot differentiate
> these entries from the normal write-protected ones, which also have
> the p2m_ioreq_server type set.

I guess I'm missing something here, because I can't see why we
can't distinguish them (nor why, failing that, we couldn't arrange
to be able to).

Jan
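
For reference, a minimal sketch of the running-count idea discussed
above, assuming simplified stand-ins for Xen's p2m types; the field and
function names (ioreq_entry_count, account_ioreq_entry,
enable_log_dirty) are hypothetical illustrations, not actual Xen
interfaces:

#include <errno.h>

/* Simplified stand-ins for Xen's p2m types -- illustration only. */
typedef enum {
    p2m_ram_rw,
    p2m_ioreq_server,
    /* ... */
} p2m_type_t;

struct p2m_domain {
    /* Running count of p2m entries currently typed p2m_ioreq_server
     * (hypothetical field). */
    unsigned long ioreq_entry_count;
};

/* Call wherever an entry's type changes, so the count stays exact. */
static void account_ioreq_entry(struct p2m_domain *p2m,
                                p2m_type_t ot, p2m_type_t nt)
{
    if ( ot == p2m_ioreq_server )
        p2m->ioreq_entry_count--;
    if ( nt == p2m_ioreq_server )
        p2m->ioreq_entry_count++;
}

/* George's suggestion: refuse log-dirty (and hence live migration)
 * while any p2m_ioreq_server entries remain outstanding. */
static int enable_log_dirty(struct p2m_domain *p2m)
{
    if ( p2m->ioreq_entry_count != 0 )
        return -EBUSY;
    /* ... proceed with enabling log-dirty mode ... */
    return 0;
}

With such a count, the enable path can fail cleanly today and the
check can simply be dropped once migration with outstanding entries
actually works.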

