
Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.



>>> On 03.02.16 at 13:50, <Paul.Durrant@xxxxxxxxxx> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 03 February 2016 12:36
>> To: Paul Durrant
>> Cc: Andrew Cooper; Ian Campbell; Ian Jackson; Stefano Stabellini; Wei Liu;
>> Kevin Tian; zhiyuan.lv@xxxxxxxxx; Zhang Yu; xen-devel@xxxxxxxxxxxxx; Keir
>> (Xen.org)
>> Subject: RE: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
>> max_wp_ram_ranges.
>> 
>> >>> On 03.02.16 at 13:20, <Paul.Durrant@xxxxxxxxxx> wrote:
>> >>  -----Original Message-----
>> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >> Sent: 03 February 2016 08:33
>> >> To: Zhang Yu
>> >> Cc: Andrew Cooper; Ian Campbell; Paul Durrant; Wei Liu; Ian Jackson;
>> Stefano
>> >> Stabellini; Kevin Tian; zhiyuan.lv@xxxxxxxxx; xen-devel@xxxxxxxxxxxxx; 
>> >> Keir
>> >> (Xen.org)
>> >> Subject: Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
>> >> max_wp_ram_ranges.
>> >>
>> >> >>> On 03.02.16 at 08:10, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>> >> > On 2/2/2016 11:21 PM, Jan Beulich wrote:
>> >> >>>>> On 02.02.16 at 16:00, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>> >> >>> The 4G limit is to avoid losing data in the uint64-to-uint32
>> >> >>> assignment. And I can accept the 8K limit for XenGT in practice.
>> >> >>> After all, it is vGPU page tables we are trying to trap and emulate,
>> >> >>> not normal page frames.
>> >> >>>
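
For concreteness, the truncation the 4G limit guards against is simply
this (a minimal standalone sketch, not the actual toolstack code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t nr = 1ULL << 32; /* 4G: one past UINT32_MAX */
    uint32_t field = nr;      /* high 32 bits silently dropped */
    printf("%u\n", field);    /* prints 0, not 4294967296 */
    return 0;
}
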
>> >> > And I guess the reason that one domain exhausting Xen's memory can
>> >> > affect another domain is that rangeset uses the Xen heap instead of
>> >> > per-domain memory. So what about using an 8K limit for XenGT for now;
>> >> > in the future, once a per-domain memory allocation solution for
>> >> > rangeset is ready, we would no longer need to limit the rangeset
>> >> > size. Does this sound more acceptable?
>> >> >>
>> >> >> The lower the limit the better (but no matter how low the limit,
>> >> >> it won't make this a pretty thing). Anyway, I'd still like to wait
>> >> >> for what Ian may further say on this.
>> >> >>
>> >> > Hi Jan, I just had a discussion with my colleague. We believe 8K can
>> >> > serve as the upper limit for the write-protected ram ranges. If, in
>> >> > the future, the number of vGPU page tables exceeds this limit, we
>> >> > will modify our back-end device model to find a trade-off, instead of
>> >> > extending this limit. If you can accept this value as the upper bound
>> >> > of the rangeset, maybe we do not need to add any toolstack parameter,
>> >> > but can instead define a MAX_NR_WR_RAM_RANGES for the write-protected
>> >> > ram rangeset. As to the other rangesets, we keep their limit at 256.
>> >> > Does this sound OK? :)
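
A minimal sketch of what that constant-based cap could look like (the
helper and the wp-ram flag below are illustrative assumptions, not
existing Xen code):

#define MAX_NR_RANGES          256  /* existing default cap per rangeset */
#define MAX_NR_WR_RAM_RANGES  8192  /* proposed cap for the wp-ram rangeset */

/* Hypothetical check before inserting a new range. */
static int range_add_allowed(unsigned int nr_ranges, int is_wp_ram)
{
    unsigned int limit = is_wp_ram ? MAX_NR_WR_RAM_RANGES : MAX_NR_RANGES;
    return nr_ranges < limit;
}
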
>> >>
>> >> I'm getting the impression that we're moving in circles. A blanket
>> >> limit above the 256 one for all domains is _not_ going to be
>> >> acceptable; going to 8k will still need host admin consent. With
>> >> your rangeset performance improvement patch, each range is
>> >> going to be tracked by a 40-byte structure (up from 32), which
>> >> already means an overhead increase for all the other ranges. 8k
>> >> of wp ranges implies an overhead beyond 448k (including the
>> >> xmalloc() overhead), which is not _that_ much, but also not
>> >> negligible.
>> >>
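
For reference, the arithmetic behind that estimate. The range structure
in xen/common/rangeset.c is currently:

struct range {
    struct list_head list; /* 2 pointers = 16 bytes on x86-64 */
    unsigned long s, e;    /* inclusive start/end = 16 bytes */
};                         /* 32 bytes total */

With the rb-tree patch the list head presumably gives way to a struct
rb_node (three pointer-sized words), i.e. 40 bytes per range. 8192 * 40
bytes is 320k of payload; assuming xmalloc's per-allocation header and
alignment padding round each range up to 56 bytes, 8192 * 56 bytes comes
to exactly 448k.
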
>> >
>> > ... which means we are still going to need a toolstack parameter to
>> > set the limit. We already have a parameter for VRAM size, so is having
>> > a parameter for max. GTT shadow ranges such a bad thing?
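
For illustration, the proposed parameter would sit next to the existing
VRAM one in an xl domain config roughly like this (the value is just the
8k figure discussed above):

builder = "hvm"
videoram = 16             # existing parameter: VRAM size in MB
max_wp_ram_ranges = 8192  # proposed: cap on write-protected ram ranges
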
>> 
>> It's workable, but not nice (see also Ian's earlier response).
>> 
>> > Is the fact that the memory comes
>> > from xenheap rather than domheap the real problem?
>> 
>> Not the primary one, since except on huge-memory machines both
>> heaps are identical. To me the primary one is the much more
>> significant resource consumption in the first place (I'm not going
>> to repeat what I've already written in far too many replies).
> 
> OK. Well, the only way around tracking specific ranges for emulation (and
> consequently suffering the overhead) is tracking by type. For XenGT I guess
> it would be possible to live with a situation where a single ioreq server
> registers all wp mem emulations for a given VM. I can't say I particularly
> like that way of doing things, but if it's the only way forward then I
> guess we may have to live with it.
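
Roughly: instead of today's per-range xc_hvm_map_io_range_to_ioreq_server()
calls, a type-based scheme would claim a whole class of pages once. A
hypothetical interface (not an existing libxenctrl call) might look like:

/* Hypothetical: route all emulation for pages of a given type
 * (e.g. write-protected ram) to a single ioreq server, with no
 * per-range tracking in Xen. */
int xc_hvm_map_mem_type_to_ioreq_server(xc_interface *xch,
                                        domid_t domid,
                                        ioservid_t id,
                                        uint16_t mem_type);
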

Well, subject to Ian not objecting (I'm still awaiting some follow-up from
him), I didn't mean to say that doing it the proposed way is a no-go. All
I really insist on is that this larger resource consumption must not happen
without some form of host admin consent.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
