
Re: [Xen-devel] Limitation in HVM physmap

On 01/11/13 12:21, Wei Liu wrote:
On Fri, Oct 18, 2013 at 03:20:12PM +0100, Wei Liu wrote:
Hi Jan, Tim and Keir

I have currently run into a limitation of the HVM physmap: one MFN can only
be mapped into one guest physical frame at a time. Why is it designed like that?

The scenario is this: when QEMU boots with OVMF (UEFI firmware), OVMF
first maps the framebuffer to 0x80000000, resulting in the framebuffer
MFNs being added to the corresponding slots in the physmap. A few moments
later, when the Linux kernel loads, it tries to map the framebuffer MFNs
to 0xf00000000, which fails because those MFNs have already been mapped
at other locations. Is there a way to fix this?

FWIW I tested this on real hardware, a Dell R710 server.

[   39.394807] efifb: probing for efifb
[   39.437552] efifb: framebuffer at 0xd5800000, mapped to 0xffffc90013f00000, using 1216k, total 1216k
[   39.546549] efifb: mode is 640x480x32, linelength=2560, pages=1
[   39.617140] efifb: scrolling: redraw
[   39.659709] efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0

lspci -vvv:
0b:03.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200eW WPCM450 (rev 0a) (prog-if 00 [VGA controller])
         Subsystem: Dell PowerEdge R710 MGA G200eW WPCM450
         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
         Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
         Latency: 64 (4000ns min, 8000ns max), Cache Line Size: 64 bytes
         Interrupt: pin A routed to IRQ 10
         Region 0: Memory at d5800000 (32-bit, prefetchable) [size=8M]
         Region 1: Memory at de7fc000 (32-bit, non-prefetchable) [size=16K]
         Region 2: Memory at de800000 (32-bit, non-prefetchable) [size=8M]
         [virtual] Expansion ROM at de000000 [disabled] [size=64K]
         Capabilities: [dc] Power Management version 1
                 Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA 
                 Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-

So I think the behaviour of OVMF is consistent with real hardware.

Good to know - thanks for testing this. So it looks like we'll need to support the same functionality for HVM guests one way or another.

