
Re: [Xen-devel] [PATCH v11 2/3] Differentiate IO/mem resources tracked by ioreq server



>>> On 26.01.16 at 08:59, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:

> 
> On 1/22/2016 7:43 PM, Jan Beulich wrote:
>>>>> On 22.01.16 at 04:20, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>> @@ -2601,6 +2605,16 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>>>           type = (p->type == IOREQ_TYPE_PIO) ?
>>>                   HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
>>>           addr = p->addr;
>>> +        if ( type == HVMOP_IO_RANGE_MEMORY )
>>> +        {
>>> +             ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
>>> +                                          &p2mt, P2M_UNSHARE);
>>
>> It seems to me that I asked this before: why P2M_UNSHARE instead
>> of just P2M_QUERY? (This could surely be fixed up while committing,
>> especially as I've already done some cleanup here, but I'd like to
>> understand this before it goes in.)
>>
> Hah, sorry for my bad memory. :)
> I did not find P2M_QUERY; only P2M_UNSHARE and P2M_ALLOC are
> defined. But after reading the code in ept_get_entry(), I guess
> P2M_UNSHARE is not accurate; maybe I should pass 0 for the
> p2m_query_t parameter of get_page_from_gfn()?

Ah, sorry for the misnamed suggestion. I'm not sure whether using
zero here actually matches your needs; P2M_UNSHARE though
seems odd in any case, so at least switching to P2M_ALLOC (to
populate PoD pages) would seem to be necessary.
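
For reference, the hunk above with that suggestion applied might look
as follows (just a sketch; the surrounding v11 context, including the
ram_page and p2mt declarations, is assumed unchanged):

        if ( type == HVMOP_IO_RANGE_MEMORY )
        {
            /*
             * P2M_ALLOC populates populate-on-demand entries; unlike
             * P2M_UNSHARE it does not also break CoW sharing, which
             * this lookup does not need.
             */
            ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
                                         &p2mt, P2M_ALLOC);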

>>> @@ -2642,6 +2656,11 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>>>               }
>>>
>>>               break;
>>> +        case HVMOP_IO_RANGE_WP_MEM:
>>> +            if ( rangeset_contains_singleton(r, PFN_DOWN(addr)) )
>>> +                return s;
>>
>> Considering you've got p2m_mmio_write_dm above - can this
>> validly return false here?
> 
> Well, if we have multiple ioreq servers defined, it will...

Ah, right. That's fine then.
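
To illustrate (a sketch, not the actual patch; the loop and field
names below follow the Xen 4.6-era hvm_select_ioreq_server() and are
assumptions): each ioreq server carries its own rangesets, so the
singleton test only matches the server that registered this
write-protected gfn, and the loop simply moves on otherwise:

    list_for_each_entry ( s, &d->arch.hvm_domain.ioreq_server.list,
                          list_entry )
    {
        struct rangeset *r = s->range[HVMOP_IO_RANGE_WP_MEM];

        /*
         * Only the server that put PFN_DOWN(addr) into its
         * write-protected-memory rangeset claims this access.
         */
        if ( rangeset_contains_singleton(r, PFN_DOWN(addr)) )
            return s;
        /* Otherwise fall through to the next ioreq server. */
    }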

Jan

