[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] PVH dom0 creation fails - the system freezes


  • To: bercarug@xxxxxxxxxx, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Thu, 26 Jul 2018 10:31:21 +0200
  • Autocrypt: addr=jgross@xxxxxxxx; prefer-encrypt=mutual
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, David Woodhouse <dwmw2@xxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, abelgun@xxxxxxxxxx
  • Delivery-date: Thu, 26 Jul 2018 08:31:37 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 26/07/18 10:15, bercarug@xxxxxxxxxx wrote:
> On 07/25/2018 07:12 PM, Roger Pau Monné wrote:
>> On Wed, Jul 25, 2018 at 05:05:35PM +0300, bercarug@xxxxxxxxxx wrote:
>>> On 07/25/2018 05:02 PM, Wei Liu wrote:
>>>> On Wed, Jul 25, 2018 at 03:41:11PM +0200, Juergen Gross wrote:
>>>>> On 25/07/18 15:35, Roger Pau Monné wrote:
>>>>>>> What could be causing the available memory loss problem?
>>>>>> That seems to be Linux aggressively ballooning out memory: you go
>>>>>> from 7129M of total memory down to 246M. Are you creating a lot of
>>>>>> domains?
>>>>> This might be related to the tools thinking dom0 is a PV domain.
>>>> Good point.
>>>>
>>>> In that case, xenstore-ls -fp would also be useful. The output should
>>>> show the balloon target for Dom0.
>>>>
>>>> You can also try setting autoballoon to off in /etc/xen/xl.cfg to
>>>> see if it makes any difference.
>>>>
>>>> Wei.
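
(For reference, the knob Wei mentions lives in the global xl configuration
file; a minimal sketch, values per xl.cfg(5):)

```
# /etc/xen/xl.cfg -- global settings for the xl toolstack
# Stop xl from automatically ballooning dom0 down when creating guests.
# Accepted values: "auto" (default), "on", "off".
autoballoon="off"
```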
>>> Also tried setting autoballooning off, but it had no effect.
>> This is a Linux/libxl issue, and I'm not sure of the best way to
>> solve it. Linux has the following 'workaround' in the balloon driver:
>>
>> err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
>>                    &static_max);
>> if (err != 1)
>>     static_max = new_target;
>> else
>>     static_max >>= PAGE_SHIFT - 10;
>> target_diff = xen_pv_domain() ? 0
>>               : static_max - balloon_stats.target_pages;
>>
>> I suppose this is used to cope with the memory reporting mismatch
>> usually seen on HVM guests. However, it interacts quite badly with a
>> PVH Dom0 that has, for example:
>>
>> /local/domain/0/memory/target = "8391840"   (n0)
>> /local/domain/0/memory/static-max = "17179869180"   (n0)
>>
>> One way to solve this is to set target and static-max to the same
>> value initially, so that target_diff on Linux is 0. Another option
>> would be to force target_diff = 0 for Dom0.
>>
>> I'm attaching a patch for libxl that should solve this, could you
>> please give it a try and report back?
>>
>> I'm still unsure however about the best way to fix this, need to think
>> about it.
>>
>> Roger.
>> ---8<---
>> diff --git a/tools/libxl/libxl_mem.c b/tools/libxl/libxl_mem.c
>> index e551e09fed..2c984993d8 100644
>> --- a/tools/libxl/libxl_mem.c
>> +++ b/tools/libxl/libxl_mem.c
>> @@ -151,7 +151,9 @@ retry_transaction:
>>           *target_memkb = info.current_memkb;
>>       }
>>       if (staticmax == NULL) {
>> -        libxl__xs_printf(gc, t, max_path, "%"PRIu64, info.max_memkb);
>> +        libxl__xs_printf(gc, t, max_path, "%"PRIu64,
>> +                         libxl__domain_type(gc, 0) == LIBXL_DOMAIN_TYPE_PV ?
>> +                         info.max_memkb : info.current_memkb);
>>           *max_memkb = info.max_memkb;
>>       }
>>  
>>
> I have tried Roger's patch and it fixed the memory decrease problem.
> "xl list -l" no longer causes any issue.
> 
> The output of "xenstore-ls -fp" shows that both target and static-max
> are now set to the same value.

Right.

This also means it will be impossible to add memory to the PVH dom0
later, e.g. after memory hotplug.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

