
Re: eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)


  • To: <eak@xxxxxxxxxx>
  • From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
  • Date: Mon, 03 Dec 2007 19:53:46 +0000
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 03 Dec 2007 12:02:55 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acg15jbodagKQqHZEdyVcgAX8io7RQ==
  • Thread-topic: eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)

I'll get something in for 3.2.0.

 -- Keir


On 3/12/07 19:49, "beth kon" <eak@xxxxxxxxxx> wrote:

> Has there been any more thought on this subject? The discussion seems to
> have stalled, and we're hoping to find a way past this 166G limit...
> 
> Jan Beulich wrote:
> 
>>>>> Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> 27.11.07 10:21 >>>
>>>>>        
>>>>> 
>>> On 27/11/07 09:00, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>>> 
>>>    
>>> 
>>>>> I don't get how your netback approach works. The pages we transfer do not
>>>>> originate from netback, so it has little control over them. And, even if
>>>>> it did, when we allocate pages for network receive we do not know which
>>>>> domain's packet will end up in each buffer.
>>>>>        
>>>>> 
>>>> Oh, right, I mixed up old_mfn and new_mfn in netbk_gop_frag(). Nevertheless,
>>>> netback could take care of this by doing the copying there, as at that
>>>> point it already knows the destination domain.
>>>>      
>>>> 
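For illustration, the copy-in-netback idea described above could look roughly
like the sketch below. This is only a sketch: the helper name, its arguments,
and how the allocation would be constrained to the destination domain's
reachable MFNs are assumptions, not the actual netbk_gop_frag() code.

#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/string.h>
#include <linux/errno.h>

/*
 * Hypothetical helper: once the destination domain is known, copy the
 * received fragment into a freshly allocated page instead of transferring
 * the original (possibly high) page to the guest.
 */
static int netbk_copy_frag(struct page *src_page, unsigned int offset,
                           unsigned int len, struct page **dst_pagep)
{
    struct page *dst_page;
    char *src, *dst;

    /* A real implementation would have to honour the destination
     * domain's MFN limit when allocating this page. */
    dst_page = alloc_page(GFP_ATOMIC);
    if (dst_page == NULL)
        return -ENOMEM;

    src = kmap_atomic(src_page);
    dst = kmap_atomic(dst_page);
    memcpy(dst + offset, src + offset, len);
    kunmap_atomic(dst);
    kunmap_atomic(src);

    /* Hand the copy, not the original page, to the destination domain. */
    *dst_pagep = dst_page;
    return 0;
}
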
>>> You may not know constraints on that domain's max_mfn though. We could add
>>> an interface to Xen to interrogate that, but generally it's not something we
>>> probably want to expose outside of Xen and the guest itself.
>>>    
>>> 
>> 
>> What constraints other than the guest's address size influence its max_mfn?
>> Of course, if there's anything beyond the address size, then having a way to
>> obtain the constraint explicitly would be desirable. But otherwise (and as
>> fallback) using 37 bits (128G) seems quite reasonable.
>> 
>>  
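As a concrete reading of the fallback suggested above, the ceiling could be
computed along these lines; query_guest_max_mfn() stands in for an interface
Xen does not currently offer and is purely hypothetical:

#define FALLBACK_ADDR_BITS 37   /* 2^37 bytes = 128G, per the suggestion */

static unsigned long dest_max_mfn(domid_t domid)
{
    /* Hypothetical query of the guest's MFN limit; no such interface
     * exists today. */
    long limit = query_guest_max_mfn(domid);

    if (limit > 0)
        return (unsigned long)limit;

    /* Fallback: assume the guest can address 37 bits (128G). */
    return 1UL << (FALLBACK_ADDR_BITS - PAGE_SHIFT);
}
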
>> 
>>>>> Personally I think doing it in Xen is perfectly good enough for supporting
>>>>> this very out-of-date network receive mechanism.
>>>>>        
>>>>> 
>>>> I'm not just concerned about netback here. The interface exists, and other
>>>> users might show up and/or exist already. Whether it would be acceptable
>>>> for them to do allocation and copying is unknown. You'd therefore either
>>>> need a way to prevent future users of the transfer mechanism, or set proper
>>>> requirements on its use. I think that placing extra requirements on the
>>>> user of the interface is better than introducing extra (possibly hard to
>>>> reproduce/recognize/debug) possibilities of failure.
>>>>      
>>>> 
>>> The interface is obsolete.
>>>    
>>> 
>> 
>> Then it should be clearly indicated as such, e.g. by a mechanism similar to
>> deprecated_irq_flag() in Linux 2.6.22. And as a result, its use in netback
>> should then probably be conditional upon an extra config option, which could
>> at the same time be used to tell Xen that the feature isn't being used, so
>> that the function could return -ENOSYS and the clipping could be
>> avoided/reverted.
>> 
>> Jan
>> 
>> 
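What Jan sketches above could, very roughly, take a shape like the following;
the Kconfig symbol and the Xen-side flag are invented names, used purely to
illustrate the idea:

/* Guest side (netback): make the legacy page-transfer receive path a
 * build-time option, so a kernel built without it never issues
 * GNTTABOP_transfer.  CONFIG_XEN_NETBK_PAGE_TRANSFER is a made-up symbol. */
#ifdef CONFIG_XEN_NETBK_PAGE_TRANSFER
#define netbk_use_transfer() 1
#else
#define netbk_use_transfer() 0      /* always use the copying path */
#endif

/* Xen side: if the administrator/toolstack has indicated that page
 * transfer is unused, refuse it outright, so the memory clipping that
 * only exists to keep transfers below the guest-reachable limit can be
 * skipped.  'opt_page_transfer' is an illustrative flag, not an existing
 * Xen option. */
static int opt_page_transfer;       /* would be set from a boot option */

static long gnttab_transfer_check(void)
{
    if (!opt_page_transfer)
        return -ENOSYS;             /* feature declared unused/deprecated */
    return 0;
}
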
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

