
Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server



>>> On 31.12.15 at 10:33, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> On 12/21/2015 10:45 PM, Jan Beulich wrote:
>>>>> On 15.12.15 at 03:05, <shuai.ruan@xxxxxxxxxxxxxxx> wrote:
>>> @@ -2593,6 +2597,16 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>>>           type = (p->type == IOREQ_TYPE_PIO) ?
>>>                   HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
>>>           addr = p->addr;
>>> +        if ( type == HVMOP_IO_RANGE_MEMORY )
>>> +        {
>>> +             ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
>>> +                                          &p2mt, P2M_UNSHARE);
>>> +             if ( p2mt == p2m_mmio_write_dm )
>>> +                 type = HVMOP_IO_RANGE_WP_MEM;
>>> +
>>> +             if ( ram_page )
>>> +                 put_page(ram_page);
>>> +        }
>>
>> You evaluate the page's current type here - what if it subsequently
>> changes? I don't think it is appropriate to leave the hypervisor at
>> the mercy of the device model here.
> 
> Well, I do not quite understand your concern. :)
> Here, get_page_from_gfn() is used to determine whether the address is
> MMIO or write-protected RAM. If this p2m type changes, the change must
> have been triggered by the guest or the device model, e.g. because this
> RAM is no longer supposed to be used as a graphics translation table,
> and that should be fine.
> But I also wonder: is there another routine more appropriate for
> getting a p2m type from a gfn?

No, the question isn't the choice of method to retrieve the
current type, but the lack of measures against the retrieved
type becoming stale by the time you actually use it.
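
To make the window concrete, here is a minimal sketch reusing the names
from the quoted hunk (the closing remark about locking is one possible
mitigation, an assumption rather than something the patch does):

    /* Time of check: sample the gfn's current p2m type. */
    ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
                                 &p2mt, P2M_UNSHARE);
    if ( p2mt == p2m_mmio_write_dm )
        type = HVMOP_IO_RANGE_WP_MEM;
    if ( ram_page )
        put_page(ram_page);

    /* Time of use: 'type' is consumed only later, to pick a range set.
     * Nothing above prevents the device model from changing the gfn's
     * p2m type in between, so the classification may be stale by then.
     * Pinning it would require e.g. holding the p2m lock, or
     * re-validating the type, across both the check and the use. */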

>>> --- a/xen/include/asm-x86/hvm/domain.h
>>> +++ b/xen/include/asm-x86/hvm/domain.h
>>> @@ -48,8 +48,8 @@ struct hvm_ioreq_vcpu {
>>>       bool_t           pending;
>>>   };
>>>
>>> -#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
>>> -#define MAX_NR_IO_RANGES  256
>>> +#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_WP_MEM + 1)
>>> +#define MAX_NR_IO_RANGES  8192
>>
>> I'm sure I've objected before to this universal bumping of the limit:
>> Even if I were to withdraw my objection to the higher limit on the
>> new kind of tracked resource, I would continue to object to all
>> other resources getting their limits bumped too.
>>
> 
> Hah. So how about we keep MAX_NR_IO_RANGES at 256, and introduce a new
> value, say MAX_NR_WR_MEM_RANGES, set to 8192, in this patch? :)
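
For concreteness, that split would look roughly like this
(MAX_NR_WR_MEM_RANGES being the proposed new name; how the per-type
limit gets wired into the range set checks is omitted):

    #define NR_IO_RANGE_TYPES     (HVMOP_IO_RANGE_WP_MEM + 1)
    #define MAX_NR_IO_RANGES      256   /* unchanged for existing types */
    #define MAX_NR_WR_MEM_RANGES  8192  /* only for the new wp-mem type */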

That would at least limit the damage to the newly introduced type.
But I suppose you realize it would still be a resource consumption
concern. In order for this not to become a security issue, you
might e.g. stay with the conservative old limit and allow a command
line or, even better, guest config file override of it (effectively
having the admin state his consent to the higher resource use).
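
A rough sketch of the command line variant, using Xen's integer_param()
machinery (the parameter name and default are hypothetical):

    /* Stay with the conservative default; an admin opts in to more,
     * e.g. via "max_wp_ram_ranges=8192" on the hypervisor command line. */
    static unsigned int __read_mostly max_wp_ram_ranges = 256;
    integer_param("max_wp_ram_ranges", max_wp_ram_ranges);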

Jan

