
Re: [Xen-devel] [PATCH v4 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.



On 22/06/16 11:10, Jan Beulich wrote:
>>>> On 22.06.16 at 11:47, <george.dunlap@xxxxxxxxxx> wrote:
>> On 22/06/16 10:29, Jan Beulich wrote:
>>>>>> On 22.06.16 at 11:16, <george.dunlap@xxxxxxxxxx> wrote:
>>>> On 22/06/16 07:39, Jan Beulich wrote:
>>>>>>>> On 21.06.16 at 16:38, <george.dunlap@xxxxxxxxxx> wrote:
>>>>>> On 21/06/16 10:47, Jan Beulich wrote:
>>>>>>>>>>> And then - didn't we mean to disable that part of XenGT during
>>>>>>>>>>> migration, i.e. temporarily accept the higher performance
>>>>>>>>>>> overhead without the p2m_ioreq_server entries? In which case
>>>>>>>>>>> flipping everything back to p2m_ram_rw after (completed or
>>>>>>>>>>> canceled) migration would be exactly what we want. The (new
>>>>>>>>>>> or previous) ioreq server should attach only afterwards, and
>>>>>>>>>>> can then freely re-establish any p2m_ioreq_server entries it
>>>>>>>>>>> deems necessary.
>>>>>>>>>>>
>>>>>>>>>> Well, I agree this part of XenGT should be disabled during
>>>>>>>>>> migration.  But in that case I think it's the device model's
>>>>>>>>>> job to trigger the p2m type flipping (i.e. by calling
>>>>>>>>>> HVMOP_set_mem_type).
>>>>>>>>> I agree - this would seem to be the simpler model here, even
>>>>>>>>> though (as George rightly says) the more consistent model would
>>>>>>>>> be for the hypervisor to do the cleanup.  Such cleanup would
>>>>>>>>> imo be reasonable only if there were an easy way for the
>>>>>>>>> hypervisor to enumerate all p2m_ioreq_server pages.
>>>>>>>>
>>>>>>>> Well, for me, the "easy way" means we should avoid traversing
>>>>>>>> the whole EPT paging structure all at once, right?
>>>>>>>
>>>>>>> Yes.
>>>>>>
>>>>>> Does calling p2m_change_entry_type_global() not satisfy this requirement?
>>>>>
>>>>> Not really - that addresses the "low overhead" aspect, but not the
>>>>> "enumerate all such entries" one.
>>>>
>>>> I'm sorry, I think I'm missing something here.  What do we need the
>>>> enumeration for?
>>>
>>> We'd need that if we were to do the cleanup in the hypervisor (as
>>> we can't rely on all p2m entry re-calculation to have happened by
>>> the time a new ioreq server registers for the type).
>>
>> So you're afraid of this sequence of events?
>> 1) Server A is de-registered, triggering an ioreq_server -> ram_rw type change
>> 2) gfn N is marked as misconfigured
>> 3) Server B registers and marks gfn N as ioreq_server
>> 4) When N is accessed, the misconfiguration is resolved incorrectly to
>> ram_rw
>>
>> But that can't happen, because misconfigured entries are resolved before
>> setting a p2m entry; so at step 3, gfn N will first be set to
>> (non-misconfigured) ram_rw, then changed to (non-misconfigured)
>> ioreq_server.
>>
>> Or is there another sequence of events that I'm missing?
> 
> 1) Server A marks GFN Y as ioreq_server
> 2) Server A is de-registered, triggering an ioreq_server -> ram_rw
>    type change
> 3) Server B registers, and gfn Y still hasn't become ram_rw again (as
>    the misconfiguration hasn't trickled down the tree far enough)

There are some missing steps here.  Gfn Y is still misconfigured,
right?  What will happen when the misconfiguration is resolved?  Will
it not become ram_rw?  If not, what would it change to and why?
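
For reference, the invariant I'm relying on, in terms of the same toy
model as above (again illustrative only; the real-code counterpart is
the resolution of misconfigured entries that happens before a p2m
entry is set, as mentioned above):

    /*
     * Setting a p2m entry always resolves any pending recalculation
     * for that gfn first, and only then writes the new type.
     */
    static void set_p2m_type(struct toy_p2m *p, unsigned int gfn,
                             p2m_type t)
    {
        resolve_misconfig(p, gfn);      /* gfn reaches ram_rw first... */
        p->leaf[gfn / 4][gfn % 4] = t;  /* ...then gets its new type */
    }

So whatever server B eventually writes, it writes over a fully
resolved entry, never over a stale one.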

 -George

