
Re: [Xen-devel] [PATCH v9 4/5] x86/ioreq server: Asynchronously reset outstanding p2m_ioreq_server entries.



>>> On 24.03.17 at 10:05, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:

> 
> On 3/23/2017 5:00 PM, Jan Beulich wrote:
>>>>> On 23.03.17 at 04:23, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>> On 3/22/2017 10:29 PM, Jan Beulich wrote:
>>>>>>> On 21.03.17 at 03:52, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>>>> --- a/xen/arch/x86/hvm/ioreq.c
>>>>> +++ b/xen/arch/x86/hvm/ioreq.c
>>>>> @@ -949,6 +949,14 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>>>>>    
>>>>>        spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
>>>>>    
>>>>> +    if ( rc == 0 && flags == 0 )
>>>>> +    {
>>>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>>>> +
>>>>> +        if ( read_atomic(&p2m->ioreq.entry_count) )
>>>>> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>>>> +    }
>>>> If you do this after dropping the lock, don't you risk a race with
>>>> another server mapping the type to itself?
>>> I believe it's OK. The remaining p2m_ioreq_server entries still need to
>>> be cleaned up anyway.
>> Are you refusing to let a new server map the type before the cleanup
>> is done?
> 
> No. I meant that even if a new server is mapped, we can still sweep the
> p2m table later asynchronously.
> But this reminds me of another point - will a dm op be interrupted by
> another one, or should it be?

Interrupted? Two of them may run in parallel on different CPUs,
against the same target domain.

> Since we have patch 5/5, which sweeps the p2m table right after the unmap
> happens, maybe we should refuse any mapping request while there are
> remaining p2m_ioreq_server entries.

That's what I've tried to hint at with my question.

Jan
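
For illustration, a rough sketch of the approach hinted at above - refusing a
new mapping of p2m_ioreq_server while entries from a previous mapping are
still outstanding - might look like the following. Everything beyond the
identifiers visible in the quoted hunk (the -EBUSY error code, the elided
validation, and the exact placement of the entry_count check) is an
assumption of the sketch, not the actual patch:

/*
 * Hypothetical sketch, not the actual patch: refuse to map the
 * p2m_ioreq_server type for a new server while entries left behind by a
 * previous mapping are still awaiting the lazy reset, so the asynchronous
 * type change cannot race with a new server claiming the type.
 */
int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
                                     uint32_t type, uint32_t flags)
{
    struct p2m_domain *p2m = p2m_get_hostp2m(d);
    int rc;

    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);

    /* ... existing validation of id, type and flags elided ... */

    /*
     * Refuse to map the type for a new server while entries left behind
     * by the previous server still await the lazy reset to p2m_ram_rw.
     */
    rc = -EBUSY;
    if ( flags != 0 && read_atomic(&p2m->ioreq.entry_count) )
        goto out;

    rc = 0; /* ... perform the actual (un)mapping here ... */

 out:
    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);

    /* On a successful unmap, kick off the asynchronous global reset. */
    if ( rc == 0 && flags == 0 && read_atomic(&p2m->ioreq.entry_count) )
        p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);

    return rc;
}

With such a check in place, a second ioreq server asking for the type while
the sweep from a previous unmap is still pending would simply get -EBUSY and
have to retry, which matches the suggestion made earlier in the thread.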


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
