
Re: [PATCH v2 10/11] xen/riscv: add definition of guest RAM banks


  • To: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 1 Apr 2026 08:17:43 +0200
  • Cc: Romain Caritey <Romain.Caritey@xxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 01 Apr 2026 06:17:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 31.03.2026 18:14, Oleksii Kurochko wrote:
> On 3/30/26 5:51 PM, Jan Beulich wrote:
>> On 23.03.2026 17:29, Oleksii Kurochko wrote:
>>> The dom0less solution uses defined RAM banks as compile-time constants,
>>> so introduce macros to describe guest RAM banks.
>>>
>>> The reason for 2 banks is that there is typically always a use case for
>>> low memory under 4 GB, but the bank under 4 GB ends up being small because
>>> there are other things under 4 GB it can conflict with (interrupt
>>> controller, PCI BARs, etc.).
>>
>> Fixed layouts like the one you suggest come with (potentially severe)
>> downsides. For example, what if more than 2Gb of MMIO space are needed
>> for non-64-bit BARs? 
> 
> That looks to be where RAM on RISC-V boards usually starts, so I expect
> that 2 GB before the RAM start is enough for MMIO space.

Likely in the common case. Board designers aren't constrained by this,
though (aiui). Whereas you set in stone a single, fixed layout.

Arm maintainers - since a similar fixed layout is used there iirc,
could you chime in here, please?

> Answering your question it will be an issue or it will also use some 
> space before banks, no?

I fear I don't understand what you're trying to tell me.

>> Further, assuming that the space 4G...8G is what
>> you expect 64-bit BARs to be put into, what if there's a device with a
>> 4G BAR? It'll eat up that entire space, requiring everything else to
>> fit in the 2G you reserve below 4G.
> 
> I assume that such big devices could use high memory without any issue.

Well, I could go (almost) arbitrarily low with individual BAR size,
merely increasing the number of BARs accordingly. Assuming 2G BARs are
64-bit capable is likely fine. Maybe the same is true for 1G and 512M
ones as well. Yet at some size the assumption will break.

IMO RAM layout wants establishing dynamically based on the MMIO needs
of a guest.

>>> --- a/xen/include/public/arch-riscv.h
>>> +++ b/xen/include/public/arch-riscv.h
>>> @@ -50,6 +50,22 @@ typedef uint64_t xen_ulong_t;
>>>   
>>>   #if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>   
>>> +#define GUEST_RAM_BANKS   2
>>> +
>>> +/*
>>> + * The way to find the extended regions (to be exposed to the guest as unused
>>> + * address space) relies on the fact that the regions reserved for the RAM
>>> + * below are big enough to also accommodate such regions.
>>> + */
>>> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
>>
>> Connecting this with my comment on the earlier patch regarding kernel, initrd,
>> and DTB fitting in bank 0: How's that going to work with a huge kernel and/or
>> initrd (I expect DTBs can't grow very large)?
> 
> The short answer is it won't, but is the initrd usually so big?

Not usually, but nothing keeps it from being arbitrary size.

> DTB is limited to 2MB, IIRC. So it isn't expected to grow too much...
> 
> As I mentioned in the reply to earlier patch, I agree that we could 
> leave bank0 for kernel and all other put to bank1.

Kernels can also be arbitrarily large.

> Even more, I can try to put the kernel in bank1, as I don't see any place
> at the moment where it would be a problem for the RISC-V Linux kernel to
> be in high memory.

Yes, the fewer restrictions from the beginning, the fewer worries later.

>>> +#define GUEST_RAM1_BASE   xen_mk_ullong(0x0200000000) /* 1016 GB of RAM @ 8GB */
>>> +#define GUEST_RAM1_SIZE   xen_mk_ullong(0xFE00000000)
>>> +
>>> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE, GUEST_RAM1_BASE }
>>> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE, GUEST_RAM1_SIZE }
>>
>> Why's this needed in the public header?
> 
> The xl toolstack could use them, so I expected that what the toolstack
> will use should live in this header.

But these last two #define-s are merely convenience definitions. They
even prescribe a certain data layout in order to be usable. I don't
think anything like this should be put in the public headers.

Jan



 

