
Re: [Xen-devel] [PATCH v9 4/5] x86/ioreq server: Asynchronously reset outstanding p2m_ioreq_server entries.



>>> On 23.03.17 at 04:23, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> On 3/22/2017 10:29 PM, Jan Beulich wrote:
>>>>> On 21.03.17 at 03:52, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>> --- a/xen/arch/x86/hvm/ioreq.c
>>> +++ b/xen/arch/x86/hvm/ioreq.c
>>> @@ -949,6 +949,14 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>>>   
>>>       spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
>>>   
>>> +    if ( rc == 0 && flags == 0 )
>>> +    {
>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> +
>>> +        if ( read_atomic(&p2m->ioreq.entry_count) )
>>> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
>>> +    }
>> If you do this after dropping the lock, don't you risk a race with
>> another server mapping the type to itself?
> 
> I believe it's OK. The remaining p2m_ioreq_server entries still need to 
> be cleaned up anyway.

Are you refusing to let a new server map the type before the
cleanup is done?

>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>> @@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
>>>                       e.ipat = ipat;
>>>                       if ( e.recalc && p2m_is_changeable(e.sa_p2mt) )
>>>                       {
>>> +                         if ( e.sa_p2mt == p2m_ioreq_server )
>>> +                         {
>>> +                             p2m->ioreq.entry_count--;
>>> +                             ASSERT(p2m->ioreq.entry_count >= 0);
>> If you did the ASSERT() first (using > 0), you wouldn't need the
>> type to be a signed one, doubling the valid value range (even if
>> right now the full 64 bits can't be used anyway, it would be
>> one less thing to worry about once we get 6-level page tables).
> 
> Well, entry_count only counts 4K pages, so even if the guest physical
> address width is extended to 64 bits in the future, entry_count will not
> exceed 2^52 (2^64 / 2^12).

Oh, true. Still, I'd prefer that you use an unsigned type for a count
whenever that's easily possible.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

