Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
Hi Jan,

On 19/05/2021 10:49, Jan Beulich wrote:
> On 19.05.2021 05:16, Penny Zheng wrote:
>> From: Julien Grall <julien@xxxxxxx>
>> Sent: Tuesday, May 18, 2021 5:46 PM
>>
>>> On 18/05/2021 06:21, Penny Zheng wrote:
>>>> --- a/xen/include/asm-arm/mm.h
>>>> +++ b/xen/include/asm-arm/mm.h
>>>> @@ -88,7 +88,15 @@ struct page_info
>>>>           */
>>>>          u32 tlbflush_timestamp;
>>>>      };
>>>> -    u64 pad;
>>>> +
>>>> +    /* Page is reserved. */
>>>> +    struct {
>>>> +        /*
>>>> +         * Reserved Owner of this page,
>>>> +         * if this page is reserved to a specific domain.
>>>> +         */
>>>> +        struct domain *domain;
>>>> +    } reserved;
>>>
>>> The space in page_info is quite tight, so I would like to avoid
>>> introducing new fields unless we can't get away from it.
>>>
>>> In this case, it is not clear why we need to differentiate the
>>> "Owner" vs the "Reserved Owner". It might be clearer if this change
>>> is folded into the first user of the field.
>>>
>>> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>>
>> Yeah, I may delete this change. I introduced it with the
>> functionality of rebooting a domain on static allocation in mind.
>>
>> A little more discussion on rebooting a domain on static allocation:
>> the major use case for domains on static allocation is a system with
>> fully pre-defined, static behavior all the time. There is no domain
>> allocation at runtime, but domain rebooting still exists. And when
>> rebooting a domain on static allocation, all these reserved pages
>> cannot go back to the heap when they are freed. So I am considering
>> using one global `struct page_info*[DOMID]` value to store them.
>
> Except such a separate array will consume quite a bit of space for no
> real gain: v.free has 32 bits of padding space right now on Arm64, so
> there's room for a domid_t there already. Even on Arm32 this could be
> arranged for, as I doubt "order" needs to be 32 bits wide.

I agree we shouldn't need 32 bits to cover the "order". Although, I
would like to see a user actually reading the field before it is
introduced.

Cheers,

--
Julien Grall
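For illustration: Julien's padding remark stems from `struct domain *`
being only 4 bytes on 32-bit Arm, so the posted `reserved` struct would
come out 4 bytes smaller than the `u64 pad` it replaces. A minimal
sketch of one way to keep the layout stable, assuming Xen's
CONFIG_ARM_32 build option; the explicit pad member is not part of the
posted patch:

    /* Page is reserved. */
    struct {
        /*
         * Reserved owner of this page,
         * if this page is reserved to a specific domain.
         */
        struct domain *domain;
#ifdef CONFIG_ARM_32
        /* Pointers are 4 bytes on Arm32; pad back to the 8-byte u64. */
        uint32_t pad;
#endif
    } reserved;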
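Penny's alternative of tracking reserved pages outside struct page_info
might look like the sketch below; the array name is an assumption, and
DOMID_FIRST_RESERVED is Xen's bound on ordinary domain IDs:

    /*
     * Sketch: one reserved-page list head per possible domain ID, so
     * that pages freed on reboot of a statically allocated domain can
     * be found again instead of being returned to the heap.
     */
    static struct page_info *resv_page_list[DOMID_FIRST_RESERVED];

With DOMID_FIRST_RESERVED being 0x7FF0, that is roughly 256KiB of
8-byte pointers on Arm64, which is the static-space cost Jan objects
to.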
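Jan's counter-proposal works because v.free shares a union with the
pointer-sized v.inuse.domain while holding only a 32-bit `order`, so on
Arm64 it carries 32 bits of padding today. A hedged sketch, with the
`reserved_owner` name purely illustrative:

    union {
        /* Page is in use: owner of this page. */
        struct {
            struct domain *domain;
        } inuse;

        /* Page is on a free list. */
        struct {
            /* Order-size of the free chunk this page is the head of. */
            unsigned int order;
            /*
             * Sketch: on Arm64 these 32 bits are currently padding, so
             * a domid_t recording which domain a freed static page is
             * reserved for fits without growing struct page_info.
             */
            domid_t reserved_owner;
        } free;
    } v;

On Arm32 the union is only 4 bytes wide, which is why Jan suggests
narrowing `order`: a 16-bit domid_t could then fit alongside it there
as well.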