RE: [RFC PATCH 0/2] Introduce reserved Xenheap
Hi Julien,

> -----Original Message-----
> From: Henry Wang <Henry.Wang@xxxxxxx>
> Sent: March 1, 2022 10:11
> To: Julien Grall <julien@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; sstabellini@xxxxxxxxxx
> Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>; Wei Chen <Wei.Chen@xxxxxxx>; Penny Zheng <Penny.Zheng@xxxxxxx>
> Subject: RE: [RFC PATCH 0/2] Introduce reserved Xenheap
>
> Hi Julien,
>
> > -----Original Message-----
> > From: Julien Grall <julien@xxxxxxx>
> > On 28/02/2022 07:12, Henry Wang wrote:
> > > Hi Julien,
> >
> > Hi Henry,
> >
> > >> -----Original Message-----
> > >> From: Julien Grall <julien@xxxxxxx>
> > >> Hi Henry,
> > >>
> > >> On 24/02/2022 01:30, Henry Wang wrote:
> > >>> The reserved Xenheap, or statically configured Xenheap, refers to parts
> > >>> of RAM reserved at boot for the Xenheap. Like static memory
> > >>> allocation, such reserved Xenheap regions are specified by configuration
> > >>> in the device tree using physical address ranges.
> > >>
> > >> In Xen, we have the concept of domheap and xenheap. For Arm64 and x86
> > >> they would be the same. But for Arm32, they would be different: xenheap
> > >> is always mapped whereas domheap is separate.
> > >>
> > >> Skimming through the series, I think you want to use the region for both
> > >> domheap and xenheap. Is that correct?
> > >
> > > Yes, I think that is correct. For Arm32, instead of using the full
> > > `ram_pages` as the initial value of `heap_pages`, we want to use the
> > > region specified in the device tree. But we were unsure whether this is the
> > > correct (or preferred) way for Arm32, so in this series we only
> > > implemented the reserved heap for Arm64.
> >
> > That's an interesting point. When I skimmed through the series on
> > Friday, my first thought was that for arm32 it would be only xenheap (so
> > all the rest of memory is domheap).
> >
> > However, Xen can allocate memory from domheap for its own purposes (e.g.
> > when we don't need contiguous memory, or for page tables).
> >
> > In a fully static environment, the domheap and xenheap are both going to
> > be quite small. It would also be somewhat difficult for a user to size
> > them. So I think it would be easier to use the region you introduce for
> > both domheap and xenheap.
> >
> > Stefano, Bertrand, any opinions?
> >
> > On a separate topic, I think we need some documentation explaining how a
> > user can size the xenheap. How did you figure it out for your setup?
>
> I am not sure I fully understand the question, so I will explain in two parts. I tested
> this series on a dom0less (static mem) system on FVP_Base.
> (1) For configuring the system, I followed the documentation I added in the
> first patch of this series (docs/misc/arm/device-tree/booting.txt). The idea is
> to add some static mem regions under the chosen node:
>
> chosen {
> +    #xen,static-mem-address-cells = <0x2>;
> +    #xen,static-mem-size-cells = <0x2>;
> +    xen,static-mem = <0x8 0x80000000 0x0 0x00100000 0x8 0x90000000 0x0 0x08000000>;
>     [...]
> }
>
> (2) For verifying this series, what I did was basically playing with the region size
> and the number of regions, adding printks, and checking whether the guests boot
> as expected when I change the xenheap size.
>
> >
> > >>
> > >> Furthermore, now that we are introducing more static regions, it will get
> > >> easier to overlap the regions by mistake. I think we want to have some
> > >> logic in Xen (or outside) to ensure that none of them overlaps. Do you
> > >> have any plan for that?
> > >
> > > Totally agree with this idea, but before we actually implement the code,
> > > we would like to first share our thoughts on this: one option could be to
> > > add data structures to note down these static memory regions when the
> > > device tree is parsed, and then check whether they overlap.
> >
> > This should work.
>
> Ack.
>
> >
> > > Over the long term (and this long-term option is currently not in our plan),
> > > maybe we can add something in the Xen toolstack for this usage?
> >
> > When I read "Xen toolstack", I read the tools that will run in dom0. Is
> > that what you meant?
>
> No, sorry for being misleading. I mean a build-time tool that can run
> on the host (build machine) to generate/configure the Xen DTS for statically
> allocated memory. But maybe this tool can be placed in the Xen tools, or it can
> be a separate tool that is out of Xen's scope.
>
> Anyway, this is just an idea, as we find it is not easy for users to configure
> so many static items manually.

Not only for this one. As the v8R64 support code also includes a lot of statically
allocated items, it will run into the same user configuration issue, so this would be
a long-term consideration. We can discuss this topic after the Xen v8R64 support
upstream work is done. And this tool does not necessarily need to be provided by
the community; vendors that want to use Xen can also do it. IMO, it would be
better if the community could provide it. Anyway, let's defer this topic :)

Thanks,
Wei Chen

> >
> > >
> > > Also, I am wondering whether the overlapping check logic should be introduced
> > > in this series. WDYT?
> >
> > I would do that in a separate series.
>
> Ack.
>
> Kind regards,
>
> Henry
>
> >
> > Cheers,
> >
> > --
> > Julien Grall
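As a rough illustration of the overlap check discussed above (note down every
statically configured region while the device tree is parsed, then reject any
overlapping pair), a minimal standalone C sketch could look like the one below.
The names static_region, regions_overlap and validate_static_regions are
hypothetical and not part of Xen; the example ranges reuse the two regions from
the xen,static-mem snippet earlier in the thread.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical record of one statically configured range. */
    struct static_region {
        uint64_t start;
        uint64_t size;
    };

    /* Two half-open ranges [start, start + size) overlap iff each one
     * starts before the other one ends. */
    static bool regions_overlap(const struct static_region *a,
                                const struct static_region *b)
    {
        return a->start < b->start + b->size &&
               b->start < a->start + a->size;
    }

    /* Pairwise check over all recorded regions; returns true when no
     * two regions overlap. */
    static bool validate_static_regions(const struct static_region *regs,
                                        size_t nr)
    {
        for (size_t i = 0; i < nr; i++)
            for (size_t j = i + 1; j < nr; j++)
                if (regions_overlap(&regs[i], &regs[j]))
                    return false;
        return true;
    }

    int main(void)
    {
        /* The two ranges from the xen,static-mem example above
         * (#address-cells = 2, #size-cells = 2):
         * 0x880000000 + 0x00100000 and 0x890000000 + 0x08000000. */
        struct static_region regs[] = {
            { 0x880000000ULL, 0x00100000ULL },
            { 0x890000000ULL, 0x08000000ULL },
        };

        printf("overlap-free: %s\n",
               validate_static_regions(regs, sizeof(regs) / sizeof(regs[0]))
               ? "yes" : "no");
        return 0;
    }

In Xen itself this would need to use Xen's own types and hook into the early
device-tree parsing, but the core of the check is just this pairwise range
comparison.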