
Re: Aligning Xen to physical memory maps on embedded systems



Hi,

On 22/02/2021 13:37, Levenglick Dov wrote:
(+ Stefano)

On 21/02/2021 16:30, Levenglick Dov wrote:
Hi,

Hi,

I am booting True Dom0-less on Xilinx ZynqMP UltraScale+ using Xen
4.11, taken from https://github.com/Xilinx/xen.

This tree is not an official Xen Project tree. I can provide feedback based on
how Xen upstream works, but I don't know for sure if this will apply to the
Xilinx tree.

For any support, I would recommend contacting Xilinx directly.

I will approach their representatives. Can you comment on the approach
that I outline in the rest of the mail as though it were referring to
upstream Xen?

The system has 2GB of RAM (0x00000000 - 0x80000000), of which Xen and
the DomUs have an allocation of 1.25GB, per this memory map:
1. DomU1: 0x60000000 - 0x80000000
2. DomU2: 0x40000000 - 0x60000000
3. Xen: 0x30000000 - 0x40000000
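
For context: with the upstream dom0less binding, a domain's RAM is
specified only as an amount (a 64-bit integer in kilobytes), so a
placement like the above cannot be expressed directly. A minimal
sketch of that existing form, with hypothetical node names:

    chosen {
        domU1 {
            compatible = "xen,domain";
            cpus = <0x1>;
            /* Existing binding: a size only (512MB = 524288KB,
             * as a 64-bit value split over two cells), with no
             * physical placement. */
            memory = <0x0 0x80000>;
        };
    };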

How did you tell Xen which regions are assigned to which guests? Are
your domains mapped 1:1 (i.e. guest physical address == host physical
address)?

I am working on a solution where, if the "xen,domain" memory property
contains #size-cells cells, the content is backward compatible; but if
it contains (#address-cells + #size-cells) cells, the address cells
should be treated as the physical start address. During the mapping of
the entire address space in setup_mm(), the carved-out addresses would
be added to the reserved memory address space. When the DomU is to be
created, this physical space would be mapped to it. The virtual
addresses are less of an issue and needn't be mapped 1:1 (although
they could be).
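
For illustration, a "xen,domain" node using the proposed extended form
could look like the sketch below. This is only a sketch of the
proposal, not an existing binding; the cell counts are assumptions and
the size cell is assumed to keep the existing kilobyte unit:

    chosen {
        #address-cells = <0x1>;
        #size-cells = <0x1>;

        domU1 {
            compatible = "xen,domain";
            cpus = <0x1>;
            /* Proposed form: (#address-cells + #size-cells)
             * cells, i.e. a physical start address followed by
             * a size. Base 0x60000000, 512MB, matching the
             * DomU1 region in the memory map above. */
            memory = <0x60000000 0x80000>;
        };

        domU2 {
            compatible = "xen,domain";
            cpus = <0x1>;
            /* Base 0x40000000, 512MB. */
            memory = <0x40000000 0x80000>;
        };
    };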



I am able to support True Dom0-less by means of the patch/hack
demonstrated by Stefano Stabellini at
https://youtu.be/UfiP9eAV0WA?t=1746.

I was able to force the Xen binary into the address range immediately
below 0x40000000 by modifying get_xen_paddr() - in itself an ugly hack.

My questions are:
1. Since Xen performs runtime allocations from its heap, it is
   allocating downwards from 0x80000000 - thereby "stealing" memory
   from DomU1.

In theory, any memory reserved for domains should have been carved out
from the heap allocator. This would be sufficient to prevent Xen from
allocating memory from the ranges you described above.

Therefore, to me this looks like a bug in the tree you are using.

This would be a better approach, but because Xen performs allocations
from its heap prior to allocating memory to the DomUs - and since it
allocates from the top of the heap - it is basically taking memory
that I wanted to set aside for the DomUs. This is why I am thinking of
reserving the memory.

That's correct. We want to carve out memory from the heap allocator so
it can't be used by Xen. I would recommend reading [1], where we
discussed the issue of reserving memory at greater length.
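
One way to express such a carve-out in the host device tree is a
standard /reserved-memory node; a minimal sketch, assuming the tree in
use honours /reserved-memory when populating the heap (worth verifying
on a 4.11-based tree):

    reserved-memory {
        #address-cells = <0x1>;
        #size-cells = <0x1>;
        ranges;

        /* Keep the DomU RAM out of Xen's heap allocator;
         * the ranges match the memory map discussed above. */
        domu1-ram@60000000 {
            reg = <0x60000000 0x20000000>;
            no-map;
        };

        domu2-ram@40000000 {
            reg = <0x40000000 0x20000000>;
            no-map;
        };
    };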

Cheers,

[1] https://lore.kernel.org/xen-devel/a316ed70-da35-8be0-a092-d992e56563d2@xxxxxxx/

--
Julien Grall



 

