
Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header



On Wed, 18 Feb 2026, Jan Beulich wrote:
> On 18.02.2026 15:38, Oleksii Kurochko wrote:
> > On 2/18/26 2:12 PM, Jan Beulich wrote:
> >> On 18.02.2026 13:58, Oleksii Kurochko wrote:
> >>> On 2/17/26 8:34 AM, Jan Beulich wrote:
> >>>> On 16.02.2026 19:42, Stefano Stabellini wrote:
> >>>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
> >>>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
> >>>>>>> domain_use_host_layout() is generic enough to be moved to the
> >>>>>>> common header xen/domain.h.
> >>>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, 
> >>>>>> ...
> >>>>>>
> >>>>>>> --- a/xen/include/xen/domain.h
> >>>>>>> +++ b/xen/include/xen/domain.h
> >>>>>>> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
> >>>>>>>    #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
> >>>>>>>    #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
> >>>>>>>    
> >>>>>>> +/*
> >>>>>>> + * Is the domain using the host memory layout?
> >>>>>>> + *
> >>>>>>> + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
> >>>>>>> + * To avoid any trouble finding space, it is easier to force using the
> >>>>>>> + * host memory layout.
> >>>>>>> + *
> >>>>>>> + * The hardware domain will use the host layout regardless of
> >>>>>>> + * direct-mapped because some OS may rely on a specific address ranges
> >>>>>>> + * for the devices.
> >>>>>>> + */
> >>>>>>> +#ifndef domain_use_host_layout
> >>>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
> >>>>>>> +                                    is_hardware_domain(d))
> >>>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
> >>>>>> proliferate in common (non-DT) code.
> >>>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
> >>>>> domain) on x86 as well. In fact, we already have a working prototype,
> >>>>> although it is not suitable for upstream yet.
> >>>>>
> >>>>> In addition to the PSP use case that we discussed a few months ago,
> >>>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
> >>>>> must be 1:1 mapped, we also have a new use case. We are running the full
> >>>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
> >>>>> vmexit) is available, but an IOMMU is not present. All virtual machines
> >>>>> are configured as PVH.
> >>>> Hmm. Then adjustments need making, for commentary and macro to be correct
> >>>> on x86. First and foremost none of what is there is true for PV.
> >>> As is_domain_direct_mapped() returns always false for x86, so
> >>> domain_use_host_layout macro will return incorrect value for non-hardware
> >>> domains (dom0?). And as PV domains are not auto_translated domains so are
> >>> always direct-mapped, so technically is_domain_direct_mapped() (or
> >>> domain_use_host_layout()) should return true in such case.
> >> Hmm? PV domains aren't direct-mapped. Direct-map was introduced by Arm for
> >> some special purpose (absence of IOMMU iirc).
> > 
> > I made such conclusion because of the comments in the code mentioned below:
> >   - https://elixir.bootlin.com/xen/v4.21.0/source/tools/libs/guest/xg_dom_x86.c#L1880
> >   - https://elixir.bootlin.com/xen/v4.21.0/source/xen/include/public/features.h#L107
> > 
> > Also, in the comment where it is introduced (d66bf122c0a "xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped")
> > is mentioned that:
> >    XENFEAT_direct_mapped is always set for not auto-translated guests.
> 
> Hmm, this you're right with, and XENVER_get_features handling indeed has
> 
>             if ( !paging_mode_translate(d) || is_domain_direct_mapped(d) )
>                 fi.submap |= (1U << XENFEAT_direct_mapped);
> 
> Which now I have a vague recollection of not having been happy with back at
> the time. Based solely on the GFN == MFN statement this may be correct, but
> "GFN" is a questionable term for PV in the first place. See how e.g.
> common/memory.c resorts to using GPFN and GMFN, in line with commentary in
> public/memory.h.
> 
> What the above demonstrates quite well though is that there's no direct
> relationship between XENFEAT_direct_mapped and is_domain_direct_mapped().

Let's start from the easy case: domain_use_host_layout.

domain_use_host_layout is meant to indicate whether the domain memory
map (e.g. the address of the interrupt controller, the start of RAM,
etc.) matches the host memory map or not.

It is implemented as:

#define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
                                   is_hardware_domain(d))

Because on ARM there are two cases:
1) the hardware domain always uses the host layout
2) a non-hardware domain only uses the host layout when it is directly
mapped (more on this later)


I think this can be generalized and made arch-neutral with the caveat
that it should return False for PV guests, as Jan mentioned. After all,
the virtual interrupt controller in a PV domain doesn't start at the
same guest physical address as the real interrupt controller. The
comment can be improved, but let's get to that after we talk about
is_domain_direct_mapped.


is_domain_direct_mapped is meant to indicate that a domain's memory is
allocated 1:1 such that GFN == MFN. It is easily applicable as-is to
PVH and HVM guests, where there are two stages of translation.

What about PV guests? One could take the stance that, given that there
is no real GFN space, GFN is always the same as MFN. But this is more
philosophical than practical.

Practically, is_domain_direct_mapped() triggers a different code path
in xen/common/memory.c:populate_physmap for contiguous 1:1 memory
allocations, which is probably undesirable for PV guests.

Practically, there is a related flag exposed to Linux:
XENFEAT_direct_mapped. For HVM/PVH guests it makes sense for it to be
one and the same as is_domain_direct_mapped(). This flag is used by
Linux to know whether it can use swiotlb-xen or not. Specifically,
swiotlb-xen is only usable when XENFEAT_direct_mapped is enabled for ARM
guests, and the same principle could apply to HVM/PVH guests too. What
about PV guests? They also make use of swiotlb-xen, and
XENFEAT_direct_mapped is set to True for PV guests today.


In conclusion, is_domain_direct_mapped() was born for autotranslated
guests and is meant to trigger large contiguous memory allocations in
Xen and permit the usage of swiotlb-xen in Linux. For PV guests, while
we want swiotlb-xen and the XENFEAT_direct_mapped flag is already set to
True, we don't want to change the memory allocation scheme.

So I think is_domain_direct_mapped() should always be False on x86:
- for PV guests it should always be False
- for PVH/HVM guests it could be True, but that is currently
  unimplemented (AMD is working on an implementation)

For compatibility and functionality, XENFEAT_direct_mapped should be
left as is.

The implementation of domain_use_host_layout() can be moved to common
code with a change:


/*
 * Is the auto-translated domain using the host memory layout?
 *
 * domain_use_host_layout() is always False for PV guests.
 *
 * Direct-mapped domains (autotranslated domains with memory allocated
 * contiguously and mapped 1:1 so that GFN == MFN) always use the
 * host memory layout to avoid address clashes.
 *
 * The hardware domain will use the host layout (regardless of
 * direct-mapped) because some OSes may rely on specific address ranges
 * for the devices. PV Dom0, like any other PV guest, has
 * domain_use_host_layout() returning False.
 */
#define domain_use_host_layout(d) (is_domain_direct_mapped(d) ||    \
                                   (paging_mode_translate(d) &&     \
                                    is_hardware_domain(d)))




 

