
Re: [Xen-devel] HVM support for e820_host (Was: Bug: Limitation of <=2GB RAM in domU persists with 4.3.0)

On Fri, 6 Sep 2013 09:04:35 -0400, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
On Thu, Sep 05, 2013 at 11:33:18PM +0100, Gordan Bobic wrote:
On 09/05/2013 10:13 PM, Gordan Bobic wrote:

>I seem to be getting two different E820 table dumps with e820_host=1:
>(XEN) HVM1: BIOS map:
>(XEN) HVM1:  f0000-fffff: Main BIOS
>(XEN) HVM1: build_e820_table:91 got 8 op.nr_entries
>(XEN) HVM1: E820 table:
>(XEN) HVM1:  [00]: 00000000:00000000 - 00000000:3f790000: RAM
>(XEN) HVM1:  [01]: 00000000:3f790000 - 00000000:3f79e000: ACPI
>(XEN) HVM1:  [02]: 00000000:3f79e000 - 00000000:3f7d0000: NVS
>(XEN) HVM1:  [03]: 00000000:3f7d0000 - 00000000:3f7e0000: RESERVED
>(XEN) HVM1:  HOLE: 00000000:3f7e0000 - 00000000:3f7e7000
>(XEN) HVM1:  [04]: 00000000:3f7e7000 - 00000000:40000000: RESERVED
>(XEN) HVM1:  HOLE: 00000000:40000000 - 00000000:fee00000
>(XEN) HVM1:  [05]: 00000000:fee00000 - 00000000:fee01000: RESERVED
>(XEN) HVM1:  HOLE: 00000000:fee01000 - 00000000:ffc00000
>(XEN) HVM1:  [06]: 00000000:ffc00000 - 00000001:00000000: RESERVED
>(XEN) HVM1:  [07]: 00000001:00000000 - 00000001:68870000: RAM

I get it - this is the host e820 map. In dom0, dmesg shows:

e820: BIOS-provided physical RAM map:
Xen: [mem 0x0000000000000000-0x000000000009cfff] usable
Xen: [mem 0x000000000009d000-0x00000000000fffff] reserved
Xen: [mem 0x0000000000100000-0x000000003f78ffff] usable
Xen: [mem 0x000000003f790000-0x000000003f79dfff] ACPI data
Xen: [mem 0x000000003f79e000-0x000000003f7cffff] ACPI NVS
Xen: [mem 0x000000003f7d0000-0x000000003f7dffff] reserved
Xen: [mem 0x000000003f7e7000-0x000000003fffffff] reserved
Xen: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Xen: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Xen: [mem 0x0000000100000000-0x0000000cbfffffff] usable

That tallies up with the above map exactly. So far so good. Not sure
if the following is relevant, but here it is anyway just in case:

e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
e820: remove [mem 0x000a0000-0x000fffff] usable
e820: last_pfn = 0xcc0000 max_arch_pfn = 0x400000000
e820: last_pfn = 0x3f790 max_arch_pfn = 0x400000000
Zone ranges:
  DMA      [mem 0x00001000-0x00ffffff]
  DMA32    [mem 0x01000000-0xffffffff]
  Normal   [mem 0x100000000-0xcbfffffff]
e820: [mem 0x40000000-0xfedfffff] available for PCI devices

>(XEN) HVM1: E820 table:
>(XEN) HVM1:  [00]: 00000000:00000000 - 00000000:0009e000: RAM
>(XEN) HVM1:  [01]: 00000000:0009e000 - 00000000:000a0000: RESERVED
>(XEN) HVM1:  HOLE: 00000000:000a0000 - 00000000:000e0000
>(XEN) HVM1:  [02]: 00000000:000e0000 - 00000000:00100000: RESERVED
>(XEN) HVM1:  [03]: 00000000:00100000 - 00000000:a7800000: RAM
>(XEN) HVM1:  HOLE: 00000000:a7800000 - 00000000:fc000000
>(XEN) HVM1:  [04]: 00000000:fc000000 - 00000001:00000000: RESERVED
>(XEN) HVM1: Invoking ROMBIOS ...

Comparing this to the above, it seems that 9d000-9e000 is marked as
reserved in dom0, but RAM in domU. Am I right in thinking that
dom0(usable) == domU(RAM) in terms of meaning?

What does "HOLE" actually mean in domU? Does it mean this space is
OK to map domU IOMEM into? Or something else? Either way, here is the
full possible clash summary:

dom0: reserved  9d000-9e000
domU: RAM       9d000-9e000

dom0: reserved  a0000-dffff
domU: HOLE      a0000-dffff

dom0: ACPI data 3f790000-3f79dfff
dom0: ACPI NVS  3f79e000-3f7cffff
dom0: reserved  3f7d0000-3f7dffff
dom0: reserved

.. you are missing a range here.

It wasn't meant as an exhaustive list, I was only looking at the
interesting/overlapping areas.

domU: RAM       00100000-a7800000

Then there seems to be a hole in dom0:
40000000-fedfffff, which tallies with the dom0 dmesg output above
about it being for the PCI devices, i.e. that's the IOMEM region
(from 1GB to a little under 4GB).

But in domU, the 40000000-a77fffff is available as RAM.

OK, so that is the goal - make hvmloader construct the E820 memory
layout and all of its pieces to fit that layout.

I am actually leaning toward only copying the holes from the
host E820. The domU already seems to be successfully using various
memory ranges that correspond to reserved and acpi ranges, so
it doesn't look like these are a problem.

On the face of it, that's actually fine - my PCI IOMEM mappings show
the lowest mapping (according to lspci -vvv) starts at a8000000,


Indeed - on the host, the hole is 1GB-4GB, but there is no IOMEM
mapped between 1024MB and 2688MB, which is why I can get away with
a domU memory allocation up to 2688MB.

which falls into the domU area marked as "HOLE" (a7800000-fc000000).
And this does in fact appear to be where domU maps the GPU in both
of my VMs:


and this doesn't overlap with any mapped PCI IOMEM according to lspci.

If we assume that anything below a8000000 doesn't actually matter in
this case (since if I give up to a8000000 of memory to a domU,
everything works absolutely fine indefinitely), I am at a loss to

Just to make sure I am not leading you astray. You are getting _no_ crashes
when you have a guest with 1GB?

I haven't tried limiting a guest to 1GB recently. My PCI passthrough
domUs all have 2688MB assigned, and this works fine. More than that
and they crash eventually. Does that answer your question? Or were
you after something very specific to the 1GB domU case?

explain what is actually going wrong and why the crash is still
occurring - unless some other piece of hardware is having its domU
IOMEM mapped somewhere in the range f3df4000-fec8b000 and that is
causing a memory overwrite.

I am just not seeing any obvious memory stomp at the moment...

Neither am I.

I may have pasted the wrong domU e820. I have a sneaking suspicion
that the map above was from a domU with 2688MB of RAM assigned,
which is why there is no domU RAM above a7800000 in that map. I'll
re-check when I'm in front of that machine again.

Are you OK with the plan to _only_ copy the holes from host E820
to the hvmloader E820? I think this would be sufficient and not
cause any undue problems. The only things that would need to
change are:
1) Enlarge the domU hole
2) Do something with the top reserved block, starting at
RESERVED_MEMBASE=0xFC000000. What is this actually for? It
overlaps with the host memory hole which extends all the way up
to 0xfee00000. If it must be where it is, this could be
problematic. What to do in this case?

This also brings up another question - is there any point
in bothering with matching the host holes? I would hazard a
guess that no physical hardware is likely to have a memory
hole bigger than 3GB under the 4GB limit.

So would it perhaps be neater, easier, more consistent and
more debuggable to just make the hvmloader put in a hole
between 0x40000000-0xffffffff (the whole 3GB) by default?
Or is that deemed to be too crippling for 32-bit non-PAE
domUs (and are there enough of these around to matter?)?

Caveat - this alone wouldn't cover any other weirdness such as
the odd memory hole 0x3f7e0000-0x3f7e7000 on my hardware. Was
this what you were thinking about when asking whether my domUs
work OK with 1GB of RAM, since that hole sits just under the 1GB
boundary?

To clarify, I am not suggesting just hard coding a 3GB memory
hole - I am suggesting defaulting to at least that and then
mapping in any additional memory holes as well. My reasoning
behind this suggestion is that it would make things more
consistent between different (possibly dissimilar) hosts.


Xen-devel mailing list


