
Re: [Xen-devel] [PATCH for-4.12 v2 2/2] xen/arm: Stop relocating Xen



Hello Julien,

Let me speculate a bit about the topic.

On 14.12.18 13:44, Julien Grall wrote:
> At the moment, Xen is relocated towards the end of the memory.
This statement is not really true. Some time ago, Xen was relocated toward the end of low memory (under 4 GB). Currently, on my board, I see some kind of mess:

    (XEN) RAM: 0000000048000000 - 00000000bfffffff
    (XEN) RAM: 0000000500000000 - 000000057fffffff
    (XEN) RAM: 0000000600000000 - 000000067fffffff
    (XEN) RAM: 0000000700000000 - 000000077fffffff
    (XEN)
    (XEN) MODULE[0]: 0000000048000000 - 0000000048013000 Device Tree
    (XEN) MODULE[1]: 000000007a000000 - 000000007c000000 Kernel
    (XEN) MODULE[2]: 000000007c000000 - 000000007c010000 XSM
    (XEN)  RESVD[0]: 0000000048000000 - 0000000048013000
    (XEN)
    (XEN)
    (XEN) Command line: dom0_mem=3G console=dtuart dtuart=serial0 dom0_max_vcpus=2 bootscrub=0 loglvl=all cpufreq=none tbuf_size=8192 loglvl=all/none guest_loglvl=all/none
    (XEN) parameter "cpufreq" unknown!
    (XEN) Placing Xen at 0x000000077fe00000-0x0000000780000000
    (XEN) Update BOOTMOD_XEN from 0000000078080000-0000000078188d81 => 000000077fe00000-000000077ff08d81

As you can see, Xen is moved towards the end of the first GB of low memory instead of towards the end of RAM under 4 GB.
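
For reference, the 2 MB slot in the "Placing Xen at" log line can be reproduced with simple alignment arithmetic. A minimal sketch (my own illustration, not Xen's actual placement code):

    #include <stdio.h>
    #include <stdint.h>

    #define SZ_2M 0x200000ULL

    /* Hypothetical helper: take a 2 MB, 2 MB-aligned slot from the top
     * of a RAM bank, mirroring the numbers seen in the log above. */
    static uint64_t place_xen(uint64_t bank_start, uint64_t bank_end)
    {
        uint64_t paddr = (bank_end - SZ_2M) & ~(SZ_2M - 1);
        return paddr >= bank_start ? paddr : 0;
    }

    int main(void)
    {
        /* Last RAM bank from the log: 0000000700000000 - 000000077fffffff. */
        uint64_t paddr = place_xen(0x700000000ULL, 0x780000000ULL);

        /* Prints 0x77fe00000-0x780000000, matching the log line. */
        printf("Placing Xen at %#llx-%#llx\n",
               (unsigned long long)paddr,
               (unsigned long long)(paddr + SZ_2M));
        return 0;
    }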


> While this has the advantage to free space in low memory, the code is not
> compliant with the break-before-make because it requires to switch
> between two sets of page-table. This is not entirely trivial to fix as
> it would require us to go through an identity mapping and disabling MMU.

I understand this motivation, though.

> Furthermore, it looks like that some platform (such as the Hikey960)
> may not be able to bring-up secondary CPUs if the entry is too high.

Just a reminder that, a long time ago, we implemented moving Xen toward the real end of memory, above 4 GB. Since the CPUs were not able to start executing code placed above 4 GB, we had the secondary CPUs brought up into a Xen instance below 4 GB, which then jumped to the copy above 4 GB, following CPU0.

> I don't believe the low memory is an issue because Xen is quite tiny
> (< 2MB).
It is really tiny, but the problem is that the start and end of Dom0's low-memory (below 4 GB) RAM banks are aligned to 128 MB. So the mere presence of a single ~1 MB Xen image cuts 128 MB of low memory out of Dom0. On my current setup, two 128 MB chunks are stolen: one by the relocated Xen together with the kernel and XSM modules, and another by the device tree. So Dom0 gets 1664 MB of low RAM instead of the physically available 1920 MB:

    (XEN) Loading Domd0 kernel from boot module @ 000000007a000000
    (XEN) Allocating 1:1 mappings totalling 3072MB for dom0:
    (XEN) BANK[0] 0x00000050000000-0x00000078000000 (640MB)
    (XEN) BANK[1] 0x00000080000000-0x000000c0000000 (1024MB)
    (XEN) BANK[2] 0x00000540000000-0x00000580000000 (1024MB)
    (XEN) BANK[3] 0x00000748000000-0x00000760000000 (384MB)
    (XEN) Grant table range: 0x0000077fe00000-0x0000077fe40000
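
For what it's worth, the BANK[0] figure can be reproduced from the module addresses alone. A minimal sketch of that accounting (hypothetical helpers, not Xen's actual allocator), assuming the 128 MB bank alignment described above:

    #include <stdio.h>
    #include <stdint.h>

    /* 128 MB alignment of Dom0 1:1 bank boundaries, as described above. */
    #define BANK_ALIGN 0x8000000ULL

    static uint64_t align_up(uint64_t x)
    {
        return (x + BANK_ALIGN - 1) & ~(BANK_ALIGN - 1);
    }

    static uint64_t align_down(uint64_t x)
    {
        return x & ~(BANK_ALIGN - 1);
    }

    int main(void)
    {
        /* Free low RAM between the device tree (ends at 0x48013000) and
         * the kernel module (starts at 0x7a000000); numbers from the logs. */
        uint64_t start = align_up(0x48013000ULL);   /* -> 0x50000000 */
        uint64_t end   = align_down(0x7a000000ULL); /* -> 0x78000000 */

        printf("BANK[0]: %#llx-%#llx (%llu MB)\n",
               (unsigned long long)start, (unsigned long long)end,
               (unsigned long long)((end - start) >> 20));
        return 0;
    }

This prints "BANK[0]: 0x50000000-0x78000000 (640 MB)": roughly 33 MB of boot modules end up costing Dom0 two whole 128 MB chunks of low RAM (1920 MB physically available, 1664 MB given to Dom0).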

Such a loss might be painful for those targeting use cases hungry for low memory (e.g. multimedia) on a SoC lacking an IOMMU.

> So the best solution is to stop relocating Xen.
And those who care about Xen placement should configure their bootloader to put Xen (and the other boot modules) in a proper place right away.

> This has the advantage to simplify the code and should speed-up the boot
> as relocation is not necessary anymore.
Boot time improvements always make me glad :)

Please also note that all of the above are rather generic considerations. They are not tied to our target setup: we do not care about Dom0 and its 1:1-mapped memory, and, after all, we have an IOMMU.

--
Sincerely,
Andrii Anisov.
