Re: [PATCH v2 10/11] xen/riscv: add definition of guest RAM banks
On 31.03.2026 18:14, Oleksii Kurochko wrote:
> On 3/30/26 5:51 PM, Jan Beulich wrote:
>> On 23.03.2026 17:29, Oleksii Kurochko wrote:
>>> The dom0less solution uses defined RAM banks as compile-time constants,
>>> so introduce macros to describe guest RAM banks.
>>>
>>> The reason for 2 banks is that there is typically always a use case for
>>> low memory under 4 GB, but the bank under 4 GB ends up being small because
>>> there are other things under 4 GB it can conflict with (interrupt
>>> controller, PCI BARs, etc.).
>>
>> Fixed layouts like the one you suggest come with (potentially severe)
>> downsides. For example, what if more than 2GB of MMIO space is needed
>> for non-64-bit BARs?
>
> That is where RAM on RISC-V boards usually starts, so I expect that the 2GB
> below the RAM start is enough for MMIO space.
Likely in the common case. Board designers aren't constrained by this,
though (aiui). Whereas you set in stone a single, fixed layout.
Arm maintainers - since a similar fixed layout is used there iirc,
could you chime in here, please?
> Answering your question it will be an issue or it will also use some
> space before banks, no?
I fear I don't understand what you're trying to tell me.
>> Further, assuming that the space 4G...8G is what
>> you expect 64-bit BARs to be put into, what if there's a device with a
>> 4G BAR? It'll eat up that entire space, requiring everything else to
>> fit in the 2G you reserve below 4G.
>
> I assume that such big devices could use high memory without any issue.
Well, I could go (almost) arbitrarily low with individual BAR size,
merely increasing the number of BARs accordingly. Assuming 2G BARs are
64-bit capable is likely fine. Maybe the same is true for 1G and 512M
ones as well. Yet at some size the assumption will break.
IMO RAM layout wants establishing dynamically based on the MMIO needs
of a guest.
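(Purely to illustrate that suggestion, a minimal sketch in C: the helper
guest_mmio_below_4g() and everything else below is hypothetical, not taken
from the patch or from Xen.)

#include <stdint.h>

#define GiB(x) ((uint64_t)(x) << 30)

struct ram_bank {
    uint64_t base;
    uint64_t size;
};

/* Hypothetical helper: how much 32-bit MMIO space the guest needs below 4GB. */
extern uint64_t guest_mmio_below_4g(void);

/* Size the low bank around the guest's 32-bit MMIO needs; put the rest high. */
static void compute_ram_banks(struct ram_bank bank[2], uint64_t ram_size)
{
    uint64_t mmio_end = guest_mmio_below_4g();
    uint64_t low_base = mmio_end > GiB(2) ? mmio_end : GiB(2);
    uint64_t low_size = low_base < GiB(4) ? GiB(4) - low_base : 0;

    if ( low_size > ram_size )
        low_size = ram_size;

    bank[0].base = low_base;
    bank[0].size = low_size;

    /* Whatever does not fit below 4GB goes into the high bank at 8GB. */
    bank[1].base = GiB(8);
    bank[1].size = ram_size - low_size;
}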
>>> --- a/xen/include/public/arch-riscv.h
>>> +++ b/xen/include/public/arch-riscv.h
>>> @@ -50,6 +50,22 @@ typedef uint64_t xen_ulong_t;
>>>
>>> #if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>
>>> +#define GUEST_RAM_BANKS 2
>>> +
>>> +/*
>>> + * The way to find the extended regions (to be exposed to the guest as unused
>>> + * address space) relies on the fact that the regions reserved for the RAM
>>> + * below are big enough to also accommodate such regions.
>>> + */
>>> +#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>> +#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
>>
>> Connecting this with my comment on the earlier patch regarding kernel, initrd,
>> and DTB fitting in bank 0: How's that going to work with a huge kernel and/or
>> initrd (I expect DTBs can't grow very large)?
>
> The short answer is that it won't, but is the initrd usually that big?
Not usually, but nothing keeps it from being arbitrary size.
> DTB is limited to 2MB, IIRC. So it isn't expected to grow too much...
>
> As I mentioned in the reply to the earlier patch, I agree that we could
> leave bank0 for the kernel and put everything else in bank1.
Kernels can also be arbitrarily large.
> Moreover, I can try to put the kernel in bank1, as I don't see any place at
> the moment where it would be a problem for the RISC-V Linux kernel to be in
> high memory.
Yes, the fewer restrictions from the beginning, the fewer worries later.
>>> +#define GUEST_RAM1_BASE xen_mk_ullong(0x0200000000) /* 1016 GB of RAM @ 8GB */
>>> +#define GUEST_RAM1_SIZE xen_mk_ullong(0xFE00000000)
>>> +
>>> +#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE, GUEST_RAM1_BASE }
>>> +#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE, GUEST_RAM1_SIZE }
>>
>> Why's this needed in the public header?
>
> The xl toolstack could use them, so I expected that what the toolstack uses
> should live in this header.
But these last two #define-s are merely convenience definitions. They
even prescribe a certain data layout in order to be usable. I don't
think anything like this should be put in the public headers.
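(To make "prescribe a certain data layout" concrete, a small sketch: the macro
values are the ones from the patch, but the consuming code below is hypothetical
and only serves to show that the two array macros are usable solely as
initializers for parallel arrays of exactly GUEST_RAM_BANKS entries.)

#include <stdint.h>

#define xen_mk_ullong(x) x##ULL /* stand-in for the public header's definition */

#define GUEST_RAM_BANKS 2
#define GUEST_RAM0_BASE xen_mk_ullong(0x80000000)   /* 2GB of low RAM @ 2GB */
#define GUEST_RAM0_SIZE xen_mk_ullong(0x80000000)
#define GUEST_RAM1_BASE xen_mk_ullong(0x0200000000) /* 1016 GB of RAM @ 8GB */
#define GUEST_RAM1_SIZE xen_mk_ullong(0xFE00000000)
#define GUEST_RAM_BANK_BASES { GUEST_RAM0_BASE, GUEST_RAM1_BASE }
#define GUEST_RAM_BANK_SIZES { GUEST_RAM0_SIZE, GUEST_RAM1_SIZE }

/* The macros fix the consumer's data layout: two parallel uint64_t arrays. */
static const uint64_t ram_bases[GUEST_RAM_BANKS] = GUEST_RAM_BANK_BASES;
static const uint64_t ram_sizes[GUEST_RAM_BANKS] = GUEST_RAM_BANK_SIZES;

/*
 * Hypothetical consumer: spread "size" bytes across the banks, low bank
 * first, and report the highest guest-physical address that ends up used.
 */
static uint64_t fill_banks(uint64_t size)
{
    uint64_t top = 0;
    unsigned int i;

    for ( i = 0; i < GUEST_RAM_BANKS && size; i++ )
    {
        uint64_t chunk = size < ram_sizes[i] ? size : ram_sizes[i];

        top = ram_bases[i] + chunk;
        size -= chunk;
    }

    return top;
}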
Jan