
Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.

>>> On 02.02.16 at 15:01, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> On 2/2/2016 7:12 PM, Jan Beulich wrote:
>>>>> On 02.02.16 at 11:56, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>> I understand your concern, and to be honest, I do not think
>>> this is an optimal solution. But I also have no better idea
>>> in mind.  :(
>>> Another option may be: instead of exposing this parameter to
>>> the tool stack, we could use a XenGT flag which sets the rangeset
>>> limit to a default value. But as I said, this default value
>>> may not always work on future XenGT platforms.
>> Assuming that you think of something set e.g. by hypervisor
>> command line option: How would that work? I.e. how would
>> that limit the resource use for all VMs not using XenGT? Or if
>> you mean a flag settable in the domain config - how would you
>> prevent a malicious admin from setting this flag for all the VMs
>> created in the partition of the system he controls?
> Well, I am not satisfied with this new parameter, because:
> 1> exposing an option like max_wp_ram_ranges to the user seems too
> low-level a detail;
> 2> but if not, using a XenGT flag means it would be hard for the
> hypervisor to find a default value that works in all situations in
> theory, although in practice 8K is already big enough.
> However, I cannot fully follow the security concern you raised. :)
> E.g. I believe a malicious admin could breach the system even
> without this patch. This argument may not be convincing to you,
> but in this specific case, even if an admin sets the XenGT flag for
> all VMs, what harm will that do? It only means each ioreq server
> can allocate at most 8K ranges; will that consume all of the Xen
> heap, especially on 64-bit Xen?

First of all, so far you meant to allow a limit of up to 4G ranges,
which, if fully used by even a handful of domains, would take a
mid-size host out of memory. And then you have to consider the bad
effects of Xen itself not normally having much memory left
(especially when "dom0_mem=" is not forcing most of the memory
into Xen's hands), which may mean that one domain exhausting
Xen's memory affects another domain if Xen can't allocate the
memory it needs to support that other domain, in the worst case
leading to a domain crash. And all of this still leaves aside
Xen's own health...
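
To make that worst case concrete, a hedged sketch; both the per-range
size and the number of domains here are assumptions chosen for
illustration only:

```python
BYTES_PER_RANGE = 40   # assumed per-range overhead on 64-bit Xen
MAX_RANGES = 1 << 32   # the 4G upper bound under discussion
DOMAINS = 4            # "a handful of domains"

total_bytes = DOMAINS * MAX_RANGES * BYTES_PER_RANGE
print(total_bytes // 2**30, "GiB")  # 640 GiB
```

Even with a far smaller per-range overhead than assumed here, a few
fully populated domains would dwarf the memory Xen typically keeps
for itself.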

