
Re: [RFC PATCH 0/2] Introduce reserved Xenheap


  • To: Julien Grall <julien@xxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Tue, 1 Mar 2022 13:39:47 +0000
  • Accept-language: en-GB, en-US
  • Cc: Henry Wang <Henry.Wang@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>, Penny Zheng <Penny.Zheng@xxxxxxx>
  • Delivery-date: Tue, 01 Mar 2022 13:40:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [RFC PATCH 0/2] Introduce reserved Xenheap

Hi,

> On 28 Feb 2022, at 18:51, Julien Grall <julien@xxxxxxx> wrote:
> 
> 
> 
> On 28/02/2022 07:12, Henry Wang wrote:
>> Hi Julien,
> 
> Hi Henry,
> 
>>> -----Original Message-----
>>> From: Julien Grall <julien@xxxxxxx>
>>> Sent: Saturday, February 26, 2022 4:09 AM
>>> To: Henry Wang <Henry.Wang@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx;
>>> sstabellini@xxxxxxxxxx
>>> Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>; Wei Chen
>>> <Wei.Chen@xxxxxxx>; Penny Zheng <Penny.Zheng@xxxxxxx>
>>> Subject: Re: [RFC PATCH 0/2] Introduce reserved Xenheap
>>> 
>>> Hi Henry,
>>> 
>>> On 24/02/2022 01:30, Henry Wang wrote:
>>>> The reserved Xenheap, or statically configured Xenheap, refers to parts
>>>> of RAM reserved at boot for the Xenheap. Like static memory
>>>> allocation, such reserved Xenheap regions are specified by configuration
>>>> in the device tree using physical address ranges.
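
In other words, each such region boils down to a physical start address
plus a size. On the Xen side that is essentially something like the
following (names purely illustrative, not the structure from these
patches):

    /* One reserved Xenheap region taken from the device tree. */
    struct reserved_heap_region {
        paddr_t start;   /* physical base address */
        paddr_t size;    /* length in bytes */
    };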
>>> 
>>> In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
>>> they would be the same. But for Arm32, they would be different: xenheap
>>> is always mapped whereas domheap is separate.
>>> 
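
Side note for anyone less familiar with the split: the two heaps are
served by different allocator entry points in Xen. Roughly, and only to
illustrate the distinction (this is not code from the series):

    /* xenheap: always mapped (on Arm32), the caller directly gets a
     * usable virtual address. */
    void *p = alloc_xenheap_pages(0, 0);

    /* domheap: not necessarily mapped; the caller gets a struct
     * page_info and has to map it (e.g. with map_domain_page())
     * before touching the memory. 'd' is the owning struct domain. */
    struct page_info *pg = alloc_domheap_pages(d, 0, 0);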
>>> Skimming through the series, I think you want to use the region for both
>>> domheap and xenheap. Is that correct?
>> Yes, I think that would be correct. For Arm32, instead of using the full
>> `ram_pages` as the initial value of `heap_pages`, we want to use the
>> region specified in the device tree. But we are not sure whether this is
>> the correct (or preferred) way for Arm32, so in this series we only
>> implemented the reserved heap for Arm64.
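
To make the intent concrete, the Arm32 change would be something along
these lines, purely as a sketch (reserved_heap_pages is a made-up name
here, not the binding from the series):

    /* Sketch: fall back to today's behaviour when no reserved heap
     * region was provided in the device tree. */
    if ( reserved_heap_pages )          /* hypothetical, filled from DT */
        heap_pages = reserved_heap_pages;
    else
        heap_pages = ram_pages;         /* current default */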
> 
> That's an interesting point. When I skimmed through the series on Friday, my 
> first thought was that for arm32 it would be only xenheap (so
> all the rest of memory is domheap).
> 
> However, Xen can allocate memory from domheap for its own purposes (e.g.
> when it doesn't need contiguous memory, or for page-tables).
> 
> In a fully static environment, the domheap and xenheap are both going to be
> quite small. It would also be somewhat difficult for a user to size them. So
> I think it would be easier to use the region you introduce for both domheap
> and xenheap.
> 
> Stefano, Bertrand, any opinions?

A single region is easier to configure, and I think in this case it will also
prevent a lot of over-allocation.
So in a fully static case, having only one heap is a good strategy for now.

There might be cases where someone would want to fully control the memory
allocated by Xen per domain, and in that case be able to size it for each guest
(to make sure one guest cannot be impacted by another at all).
But this is definitely something that could be done later, if needed.

Cheers
Bertrand

> 
> On a separate topic, I think we need some documentation explaining how a
> user can size the xenheap. How did you figure it out for your setup?
> 
>>> 
>>> Furthermore, now that we are introducing more static regions, it will
>>> get easier to overlap the regions by mistake. I think we want to have
>>> some logic in Xen (or outside) to ensure that none of them overlaps. Do
>>> you have any plans for that?
>> Totally agree with this idea, but before we actually implement the code,
>> we would like to first share our thoughts on this: one option could be to
>> add data structures to note down these static memory regions when the
>> device tree is parsed, and then we can check whether they overlap.
> 
> This should work.
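
For what it is worth, once the regions are recorded at parse time the
check itself can stay very simple. A minimal standalone sketch (names
invented for the example, not code from the series):

    #include <stdbool.h>
    #include <stdint.h>

    struct mem_range {
        uint64_t start;
        uint64_t size;
    };

    /* Two half-open ranges [a, a+sa) and [b, b+sb) overlap iff each
     * one starts before the other one ends. */
    static bool ranges_overlap(const struct mem_range *a,
                               const struct mem_range *b)
    {
        return a->start < b->start + b->size &&
               b->start < a->start + a->size;
    }

    /* Pairwise check over all static regions collected while parsing
     * the device tree; reports a conflict as soon as one is found. */
    static bool any_overlap(const struct mem_range *r, unsigned int nr)
    {
        for ( unsigned int i = 0; i < nr; i++ )
            for ( unsigned int j = i + 1; j < nr; j++ )
                if ( ranges_overlap(&r[i], &r[j]) )
                    return true;
        return false;
    }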
> 
>> Over the long term (and this long-term option is currently not in our
>> plan), maybe we could add something to the Xen toolstack for this purpose?
> 
> When I read "Xen toolstack", I think of the tools that will run in dom0. Is
> that what you meant?
> 
>> Also, I am wondering whether the overlap check logic should be introduced
>> in this series. WDYT?
> 
> I would do that in a separate series.
> 
> Cheers,
> 
> -- 
> Julien Grall




 

