
Re: [Xen-devel] hypervisor memory usage



Well, indeed. Your best bet is to give dom0 only the memory it needs, via
dom0_mem. If you want to give it all memory then you need to specify
something like dom0_mem=64G. If that's failing to boot for you, you may
need swiotlb=off on dom0's command line (otherwise dom0 will fail to
allocate memory for the swiotlb, and hence crash, since all memory was
already allocated to dom0!).
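
For example, a RHEL-5-style /boot/grub/grub.conf entry might look
something like this (a sketch only -- the kernel versions and the root
path are illustrative). Note that dom0_mem is a hypervisor option and
goes on the xen.gz line, while swiotlb=off goes on dom0's kernel (first
module) line:

    title Xen (all memory to dom0)
            root (hd0,0)
            kernel /xen.gz dom0_mem=64G
            module /vmlinuz-2.6.18-128.el5xen ro root=/dev/VolGroup00/LogVol00 swiotlb=off
            module /initrd-2.6.18-128.el5xen.img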

 -- Keir

On 29/10/2009 11:19, "Vladimir Zidar" <mr_w@xxxxxxxxxxxxx> wrote:

> And this is the RHEL patch that caused it.
> 
> Now, does it really solve anything in the long term? What if the onboard
> graphics uses 512M?
> What are your thoughts on it?
> 
> 
> Kind Regards,
> Vladimir
> 
> 
> -- patch follows --
> From: Rik van Riel <riel@xxxxxxxxxx>
> Date: Fri, 21 Nov 2008 14:32:20 -0500
> Subject: [xen] increase maximum DMA buffer size
> Message-id: 20081121143220.08a94702@xxxxxxxxxxxxxxxxxxx
> O-Subject: [RHEL5.3 PATCH 3/3] xen: increase maximum DMA buffer size
> Bugzilla: 412691
> RH-Acked-by: Don Dutile <ddutile@xxxxxxxxxx>
> RH-Acked-by: Bill Burns <bburns@xxxxxxxxxx>
> RH-Acked-by: Glauber Costa <glommer@xxxxxxxxxx>
> 
> After more investigation, we have found the reason for the panic. Currently
> Xen reserves at most a 128M DMA buffer, while the on-board graphics card
> requires 256M of memory. With the following patch + xen patch + your patch
> in comments 30+31, everything works quite well.
> 
> Fixes bug 412691
> 
> Signed-off-by: Jiang, Yunhong <yunhong.jiang@xxxxxxxxx>
> Signed-off-by: Rik van Riel <riel@xxxxxxxxxx>
> 
> diff --git a/arch/x86/domain_build.c b/arch/x86/domain_build.c
> index c72c300..8dcf816 100644
> --- a/arch/x86/domain_build.c
> +++ b/arch/x86/domain_build.c
> @@ -138,12 +138,12 @@ static unsigned long __init compute_dom0_nr_pages(void)
>      /*
>       * If domain 0 allocation isn't specified, reserve 1/16th of available
>       * memory for things like DMA buffers. This reservation is clamped to
> -     * a maximum of 128MB.
> +     * a maximum of 384MB.
>       */
>      if ( dom0_nrpages == 0 )
>      {
>          dom0_nrpages = avail;
> -        dom0_nrpages = min(dom0_nrpages / 16, 128L << (20 - PAGE_SHIFT));
> +        dom0_nrpages = min(dom0_nrpages / 8, 384L << (20 - PAGE_SHIFT));
>          dom0_nrpages = -dom0_nrpages;
>      } else {
>          /* User specified a dom0_size.  Do not clamp the maximum. */
> 
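> For concreteness, here is a small standalone sketch (not part of the
> patch) that evaluates both the old and the new formula, assuming x86's
> 4K pages (PAGE_SHIFT == 12), so 128L << (20 - PAGE_SHIFT) is just
> 128MB expressed in pages:
> 
>     #include <stdio.h>
> 
>     #define PAGE_SHIFT 12
>     /* convert megabytes to 4K pages: 1MB == 256 pages */
>     #define MB_TO_PAGES(mb) ((mb) << (20 - PAGE_SHIFT))
> 
>     static long min_l(long a, long b) { return a < b ? a : b; }
> 
>     int main(void)
>     {
>         long avail = MB_TO_PAGES(8192L);   /* e.g. an 8GB machine */
> 
>         /* old default: 1/16th of memory, clamped to 128MB */
>         long old_rsv = min_l(avail / 16, MB_TO_PAGES(128L));
>         /* new default: 1/8th of memory, clamped to 384MB */
>         long new_rsv = min_l(avail / 8,  MB_TO_PAGES(384L));
> 
>         printf("old: %ld pages (%ldMB)\n", old_rsv, old_rsv / 256);
>         printf("new: %ld pages (%ldMB)\n", new_rsv, new_rsv / 256);
>         return 0;
>     }
> 
> On an 8GB box this prints 32768 pages (128MB) for the old formula and
> 98304 pages (384MB) for the new one, i.e. 65536 extra pages withheld
> from dom0 -- which accounts for most of the ~80000 missing pages
> reported elsewhere in the thread.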
> 
> 
> 
> Vladimir Zidar wrote:
>> I have nailed the problem down to the RHEL version of the
>> compute_dom0_nr_pages() function.
>> 
>> Vanilla Xen uses something like this to reserve up to 128MB of RAM for
>> DMA etc. The same algorithm is used in RHEL <= 5.2 and also in official
>> Xen 3.4.1:
>> 
>>    if ( dom0_nrpages == 0 )
>>    {
>>        dom0_nrpages = avail;
>>        dom0_nrpages = min(dom0_nrpages / 16, 128L << (20 - PAGE_SHIFT));
>>        dom0_nrpages = -dom0_nrpages;
>>    }
>> 
>> However, RHEL >= 5.3 uses this:
>> 
>>    /*
>>     * If domain 0 allocation isn't specified, reserve 1/16th of available
>>     * memory for things like DMA buffers. This reservation is clamped to
>>     * a maximum of 384MB.
>>     */
>>    if ( dom0_nrpages == 0 )
>>    {
>>        dom0_nrpages = avail;
>>        dom0_nrpages = min(dom0_nrpages / 8, 384L << (20 - PAGE_SHIFT));
>>        dom0_nrpages = -dom0_nrpages;
>>    } else {
>>        /* User specified a dom0_size.  Do not clamp the maximum. */
>>        dom0_max_nrpages = LONG_MAX;
>>    }
>> 
>> I do understand that they like the idea of reserving more RAM, but on
>> top of that, the /8 makes it 1/8th of memory instead of the 1/16th that
>> the comment still claims, doesn't it?
>> 
>> So this might be intended behavior, just not advertised anywhere; as a
>> kind of side effect, specifying dom0_mem skips this funny allocation
>> scheme altogether - at least in theory. I have just set dom0_mem=64G
>> (but I only have 8G), the machine is not coming up, and I will not be
>> able to see the console for at least the next couple of hours.
>> 
>> 
>> Vladimir Zidar wrote:
>>> Chris,
>>> 
>>> good that you pointed to 5.2 vs 5.3 vs 5.4,
>>> the difference in number of pages is noticed between these:
>>> 
>>>       xen.gz-2.6.18-92.1.22.el5 - last 5.2 update - all pages are OK,
>>>       xen.gz-2.6.18-128.el5 - first 5.3 release - ~80000 pages
>>> (roughly 312MB) missing on an 8GB RAM setup.
>>> 
>>> Chris Lalancette wrote:
>>>> Vladimir Zidar wrote:
>>>>> Sounds possible. However, it would be great if there were a switch to
>>>>> disable that feature in case the hardware is not capable of VT-d, as
>>>>> I'd rather use those 300MB than have software support for something
>>>>> that I can't actually use.
>>>> 
>>>> In point of fact, VT-d is disabled by default; you need to explicitly
>>>> enable it for it to use memory.  However, it's possible that there's a
>>>> bug, or that some other change caused the memory difference, so it's
>>>> worth trying to track it down a little better.  In particular, you
>>>> jumped from the 5.2 kernel to the 5.4 one, so it would be worthwhile
>>>> to try the 5.3 kernel and see what you get.
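>>>>
>>>> One way to compare (a sketch, assuming the xm toolstack) is to boot
>>>> each kernel in turn and record the hypervisor's view of memory, e.g.:
>>>>
>>>>     xm info | grep -i memory
>>>>     xm dmesg | grep -i ram
>>>>
>>>> and then diff the total_memory/free_memory figures between kernels.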
>>>> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

