
Re: [Xen-devel] [Bug] Intel RMRR support with upstream Qemu


  • To: Alexey G <x1917x@xxxxxxxxx>
  • From: "Zhang, Xiong Y" <xiong.y.zhang@xxxxxxxxx>
  • Date: Mon, 24 Jul 2017 08:07:02 +0000
  • Accept-language: en-US
  • Cc: "Zhang, Xiong Y" <xiong.y.zhang@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Mon, 24 Jul 2017 08:07:25 +0000
  • Dlp-product: dlpe-windows
  • Dlp-reaction: no-action
  • Dlp-version: 10.0.102.7
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AdMCDFliQB9vwJzuRPKDdhnf3NXUCv//q40AgAAIAwD/+yrscA==
  • Thread-topic: [Xen-devel] [Bug] Intel RMRR support with upstream Qemu

> > On Fri, 21 Jul 2017 10:57:55 +0000
> > "Zhang, Xiong Y" <xiong.y.zhang@xxxxxxxxx> wrote:
> >
> > > On an Intel Skylake machine with upstream QEMU, if I add
> > > rdm="strategy=host,policy=strict" to hvm.cfg, a Windows 8.1 DomU cannot
> > > boot up and reboots continuously.
> > >
> > > Steps to reproduce this issue:
> > >
> > > 1) Boot Xen with iommu=1 to enable the IOMMU
> > > 2) hvm.cfg contains:
> > >
> > > builder="hvm"
> > >
> > > memory=xxxx
> > >
> > > disk=['win8.1 img']
> > >
> > > device_model_override='qemu-system-i386'
> > >
> > > device_model_version='qemu-xen'
> > >
> > > rdm="strategy=host,policy=strict"
> > >
> > > 3) xl create hvm.cfg
> > >
> > > Conditions to reproduce this issue:
> > >
> > > 1) DomU memory size > the top address of the RMRR. Otherwise, the
> > > issue disappears.
> > > 2) rdm="strategy=host,policy=strict" must be present.
> > > 3) Windows DomU. A Linux DomU doesn't have this issue.
> > > 4) Upstream QEMU. Traditional QEMU doesn't have this issue.
> > >
> > > In this situation, hvmloader relocates some guest RAM from below the RMRR
> > > to high memory, and it seems the Windows guest then accesses an invalid
> > > address. Could someone give me some suggestions on how to debug this?
> >
> > You likely have RMRR range(s) below the 2GB boundary.
> >
> > You may try the following:
> >
> > 1. Specify a large 'mmio_hole' value in your domain configuration file,
> > e.g. mmio_hole=2560
> > 2. If that doesn't help, the 'xl dmesg' output might be useful
> >
> > Right now upstream QEMU still doesn't support relocating parts
> > of guest RAM above the 4GB boundary if they are overlapped by MMIO ranges.
> > AFAIR forcing allow_memory_relocate to 1 for hvmloader didn't bring
> > anything good for HVM guests.
> >
> > Setting the mmio_hole size manually lets you create a "predefined"
> > memory/MMIO hole layout for both QEMU (via 'max-ram-below-4g') and
> > hvmloader (via a XenStore param), effectively avoiding MMIO/RMRR overlaps
> > or RAM relocation in hvmloader, so this might help.
> 
> Wrote too soon: "policy=strict" means that you wouldn't be able to create the
> DomU at all if the RMRR were below 2G... so it actually should be above 2GB.
> Anyway, try setting the mmio_hole size.
[Zhang, Xiong Y] Thanks for your suggestion.
Indeed, if I set mmio_hole >= 4G - RMRR_Base, this fixes my issue.
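(For reference, a minimal sketch of the line I add to hvm.cfg, assuming a
hypothetical RMRR base at 0x80000000: 4G - RMRR_Base = 2 GiB = 2048 MB, so

    mmio_hole=2048

with the value in MB, like the mmio_hole=2560 example above. If I read libxl
correctly, this also ends up as ',max-ram-below-4g=...' on qemu's -machine
argument, but I haven't double-checked that path.)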
I still have two questions about this; could you help me?
1) If hvmloader does low-memory relocation, hvmloader and qemu will see
different guest memory layouts. So qemu RAM may overlap with MMIO; does Xen
have a plan to fix this?

2) Just now, I did an experiment: in hvmloader, I set HVM_BELOW_4G_RAM_END to
3G and reserved an area such as 0xF0000000 ~ 0xFC000000 for qemu's RAM
allocations; in qemu, I modified xen_ram_alloc() to make sure it only
allocates gfns in 0xF0000000 ~ 0xFC000000. In this case qemu RAM won't overlap
with MMIO, but this workaround couldn't fix my issue.
It seems qemu still has another interface besides xen_ram_alloc() for
allocating gfns; do you know which interface that is?
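
Roughly, my xen_ram_alloc() change restricted allocations as in the standalone
sketch below; the window constants and the simple bump allocator are
illustrative only, and IIRC the real code still hands the resulting pfn list
to xc_domain_populate_physmap_exact():

    #include <stdint.h>

    /* Hypothetical window reserved in hvmloader for qemu's RAM allocations,
     * matching the experiment above: 0xF0000000 ~ 0xFC000000. */
    #define RESERVED_BASE  0xF0000000ULL
    #define RESERVED_END   0xFC000000ULL
    #define PAGE_SHIFT     12

    static uint64_t next_free = RESERVED_BASE;

    /* Hand out gfns only from the reserved window: round the request up to
     * whole pages, fail if the window is exhausted, otherwise bump the
     * cursor and return the first gfn of the new range. */
    static int alloc_gfns_in_window(uint64_t size, uint64_t *first_gfn)
    {
        uint64_t bytes = (size + (1ULL << PAGE_SHIFT) - 1) &
                         ~((1ULL << PAGE_SHIFT) - 1);

        if (next_free + bytes > RESERVED_END)
            return -1;                  /* window exhausted */

        *first_gfn = next_free >> PAGE_SHIFT;
        next_free += bytes;
        return 0;
    }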

thanks

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

