
RE: eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)


  • To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>, <eak@xxxxxxxxxx>
  • From: "Krysan, Susan" <KRYSANS@xxxxxxxxxx>
  • Date: Fri, 7 Dec 2007 07:20:59 -0600
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 07 Dec 2007 05:23:00 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acg4DY4czItYcaQAEdymVQAX8io7RQAxebeQ
  • Thread-topic: eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)

I tested this changeset on Unisys ES7000 with 256G RAM and 64 processors
and it works:

xentop - 06:30:59   Xen 3.2-unstable
1 domains: 1 running, 0 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 268172340k total, 7669456k used, 260502884k free    CPUs: 64 @ 3400MHz

[For reference: 268172340 KiB / 2^20 ≈ 255.75 GiB, so Xen now sees the
machine's full 256G, comfortably past the old 166G ceiling.]

I will be running our full test suite on this configuration today.

Thanks,
Sue Krysan
Linux Systems Group
Unisys Corporation
 

-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Keir Fraser
Sent: Thursday, December 06, 2007 8:40 AM
To: eak@xxxxxxxxxx
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: eliminating 166G limit (was Re: [Xen-devel] Problem with
nr_nodes on large memory NUMA machine)

Try xen-unstable changeset 16548.

 -- Keir

On 3/12/07 19:49, "beth kon" <eak@xxxxxxxxxx> wrote:

> Has there been any more thought on this subject? The discussion seems
> to have stalled, and we're hoping to find a way past this 166G limit...
>
> Jan Beulich wrote:
>
>>>>> Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> 27.11.07 10:21 >>>
>>> On 27/11/07 09:00, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>>>
>>>>> I don't get how your netback approach works. The pages we transfer
>>>>> do not originate from netback, so it has little control over them.
>>>>> And, even if it did, when we allocate pages for network receive we
>>>>> do not know which domain's packet will end up in each buffer.
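
[To make the allocation-ordering problem above concrete: a minimal toy
sketch in C, with made-up types and names; it is not the real netback or
Xen code.]

    /*
     * Receive buffers are allocated before any packet arrives, so the
     * allocator cannot know which domain's address-width constraint to
     * honour.  All names here are hypothetical.
     */
    #include <stddef.h>

    struct page   { unsigned long mfn; };
    struct domain { unsigned long max_mfn; };

    /* Step 1: buffers come from a shared pool; the eventual owner is
     * unknown, so no per-domain mfn limit can be applied here. */
    struct page *alloc_rx_buffer(void)
    {
        return NULL;   /* stands in for a global-pool page allocation */
    }

    /* Step 2: only after the packet is demultiplexed do we learn the
     * destination; by then buf->mfn is fixed and may already lie
     * above dest->max_mfn. */
    void deliver_to_guest(struct page *buf, struct domain *dest)
    {
        (void)buf;
        (void)dest;
    }
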
>>>> Oh, right, I mixed up old_mfn and new_mfn in netbk_gop_frag().
>>>> Nevertheless netback could take care of this by doing the copying
>>>> there, as at that point it already knows the destination domain.
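
[A rough sketch of the suggestion above; alloc_page_below() and the
other names are hypothetical stand-ins, not the actual netbk_gop_frag()
code.]

    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE 4096UL

    struct domain { unsigned long max_mfn; };

    /* Assumed allocator: returns a page whose mfn is below 'limit',
     * or NULL if none is available. */
    extern void *alloc_page_below(unsigned long limit);

    /*
     * At this point netback knows the destination domain, so it can copy
     * the received frag into a page that domain can address, and hand
     * that page over instead of the original receive buffer.
     */
    void *copy_for_transfer(struct domain *dest, const void *frag, size_t len)
    {
        void *page = alloc_page_below(dest->max_mfn);
        if (page == NULL)
            return NULL;                /* caller falls back or drops */
        memcpy(page, frag, len < PAGE_SIZE ? len : PAGE_SIZE);
        return page;
    }
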
>>> You may not know constraints on that domain's max_mfn though. We
>>> could add an interface to Xen to interrogate that, but generally it's
>>> not something we probably want to expose outside of Xen and the guest
>>> itself.
>>
>> What constraints other than the guest's address size influence its
>> max_mfn? Of course, if there's anything beyond the address size, then
>> having a way to obtain the constraint explicitly would be desirable.
>> But otherwise (and as fallback) using 37 bits (128G) seems quite
>> reasonable.
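
[The fallback above is simple arithmetic: 2^37 bytes = 128G, so with 4K
pages the limit is 2^(37-12) = 2^25 frames. A minimal sketch, assuming a
hypothetical addr_bits query that yields 0 when the constraint is
unknown:]

    #define PAGE_SHIFT 12   /* 4K pages */

    /*
     * Derive a conservative mfn limit from the guest's address size,
     * falling back to 37 bits (128G) when no explicit constraint is
     * available; addr_bits == 0 means "unknown" here.
     */
    static unsigned long guest_mfn_limit(unsigned int addr_bits)
    {
        if (addr_bits == 0)
            addr_bits = 37;                     /* fallback: 128G */
        return 1UL << (addr_bits - PAGE_SHIFT); /* 37 bits -> 2^25 frames */
    }
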
>>>>> Personally I think doing it in Xen is perfectly good enough for
>>>>> supporting this very out-of-date network receive mechanism.
>>>> I'm not just concerned about netback here. The interface exists,
>>>> and other users might show up and/or exist already. Whether it would
>>>> be acceptable for them to do allocation and copying is unknown. You'd
>>>> therefore either need a way to prevent future users of the transfer
>>>> mechanism, or set proper requirements on its use. I think that
>>>> placing extra requirements on the user of the interface is better
>>>> than introducing extra (possibly hard to reproduce/recognize/debug)
>>>> possibilities of failure.
>>> The interface is obsolete.
>> 
>> Then it should be clearly indicated as such, e.g. by a mechanism
>> similar to deprecated_irq_flag() in Linux 2.6.22. And as a result, its
>> use in netback should then probably be conditional upon an extra
>> config option, which could at once be used to provide a note to Xen
>> that the feature isn't being used so that the function could return
>> -ENOSYS and the clipping could be avoided/reverted.
>>
>> Jan
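
[What Jan outlines might look roughly like this; CONFIG_XEN_GNTTAB_TRANSFER
is an invented option name and the function is a stand-in for the real
transfer path, not actual Xen code.]

    #include <errno.h>

    /*
     * If a build opts out of the (obsolete) transfer mechanism, the
     * corresponding hypercall path can refuse outright with -ENOSYS,
     * and the address clipping behind the 166G limit can be skipped.
     */
    long gnttab_transfer_stub(void)
    {
    #ifndef CONFIG_XEN_GNTTAB_TRANSFER
        return -ENOSYS;   /* feature compiled out */
    #else
        /* ... real transfer handling would go here ... */
        return 0;
    #endif
    }
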



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
