
Re: [Xen-devel] [Bug] Intel RMRR support with upstream Qemu


  • To: Alexey G <x1917x@xxxxxxxxx>
  • From: "Zhang, Xiong Y" <xiong.y.zhang@xxxxxxxxx>
  • Date: Tue, 25 Jul 2017 02:52:15 +0000
  • Accept-language: en-US
  • Cc: "Zhang, Xiong Y" <xiong.y.zhang@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Tue, 25 Jul 2017 02:52:58 +0000
  • Dlp-product: dlpe-windows
  • Dlp-reaction: no-action
  • Dlp-version: 10.0.102.7
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AdMCDFliQB9vwJzuRPKDdhnf3NXUCv//q40AgAAIAwD/+yrscIAJtWiA//7Nd1A=
  • Thread-topic: [Xen-devel] [Bug] Intel RMRR support with upstream Qemu

> On Mon, 24 Jul 2017 08:07:02 +0000
> "Zhang, Xiong Y" <xiong.y.zhang@xxxxxxxxx> wrote:
> 
> > [Zhang, Xiong Y] Thanks for your suggestion.
> > Indeed, if I set mmio_hole >= 4G - RMRR_Base, this fixes my issue.
> > I still have two questions about this, could you help me?
> > 1) If hvmloader does low memory relocation, hvmloader and qemu will see
> > different guest memory layouts, so qemu ram may overlap with mmio. Does
> > xen have a plan to fix this?
> >
> > 2) Just now, I did an experiment: in hvmloader, I set
> > HVM_BELOW_4G_RAM_END to 3G and reserved one area for qemu_ram_allocate,
> > 0xF0000000 ~ 0xFC000000; in Qemu, I modified xen_ram_alloc() to make
> > sure it only allocates gfns in 0xF0000000 ~ 0xFC000000. In this case
> > qemu_ram won't overlap with mmio, but this workaround couldn't fix my
> > issue. It seems qemu has some interface other than xen_ram_alloc() for
> > allocating gfns; do you know which interface that is?
> 
> Please share your 'xl dmesg' output, to have a look at your guest's MMIO
> map and which RMRRs and PCI MBARs are present there.
[Zhang, Xiong Y] Thanks a lot for your help.
The attached 'xl dmesg' output shows these RMRR regions:
RMRR region: base_addr 3a271000 end_addr 3a290fff
RMRR region: base_addr 3b800000 end_addr 3fffffff
Because they are below 2G, I set rdm_mem_boundary=700 to avoid guest creation 
failure.
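For reference, the setup above corresponds to an xl guest configuration along these lines (a sketch only; `rdm` and `rdm_mem_boundary` are the xl.cfg option names, and the values just mirror the numbers reported here):

```
# Guest config sketch: 1G of RAM, host RMRRs exposed to the guest, and
# RAM above 700MB kept clear so it cannot collide with the sub-2G RMRRs.
memory = 1024
rdm = "strategy=host,policy=relaxed"   # expose host RMRRs to the guest
rdm_mem_boundary = 700                 # MB; avoids the creation failure
```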

Guest RAM is 1G.
The guest's MMIO BARs are:
(d47) pci dev 03:0 bar 10 size 002000000: 0e0000008
(d47) pci dev 02:0 bar 14 size 001000000: 0e2000008
(d47) pci dev 04:0 bar 30 size 000040000: 0e3000000
(d47) pci dev 03:0 bar 30 size 000010000: 0e3040000
(d47) pci dev 03:0 bar 14 size 000001000: 0e3050000
(d47) pci dev 02:0 bar 10 size 000000100: 00000c001
(d47) pci dev 04:0 bar 10 size 000000100: 00000c101
(d47) pci dev 04:0 bar 14 size 000000100: 0e3051000
(d47) pci dev 01:2 bar 20 size 000000020: 00000c201
(d47) pci dev 01:1 bar 20 size 000000010: 00000c221
Gfns f0000000 ~ fc000000 are reserved for xen_ram_alloc().
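For context, the constraint described in question 2) above amounts to a bump allocator over the reserved window. A minimal sketch (hypothetical helper `qemu_ram_alloc_gfn`; not QEMU's real xen_ram_alloc(), whose signature differs) could be:

```c
#include <stdint.h>

/* Hypothetical bump allocator mirroring the experiment's constraint:
 * hand out guest physical addresses only inside the window hvmloader
 * reserved for QEMU (0xF0000000 ~ 0xFC000000 here).  Returns the start
 * address of the allocation, or 0 when the window is exhausted. */

#define QEMU_RAM_START 0xF0000000ULL
#define QEMU_RAM_END   0xFC000000ULL
#define PAGE_SIZE      0x1000ULL

static uint64_t next_free = QEMU_RAM_START;

static uint64_t qemu_ram_alloc_gfn(uint64_t size)
{
    /* Round the request up to a page boundary. */
    uint64_t aligned = (size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);

    if (next_free + aligned > QEMU_RAM_END)
        return 0;               /* window exhausted */

    uint64_t addr = next_free;
    next_free += aligned;
    return addr;
}
```

Any allocation path that bypasses such a helper would still receive gfns outside the window, which would be consistent with the observation that some other qemu interface is also populating guest memory.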
> 
> If the RMRR range happens to overlap some of the guest's RAM below
> pci_start (dictated by the lack of relocation support and the
> low_mem_pgend value), I think your problem might be solved by
> sacrificing the part of guest RAM overlapped by the RMRR -- by
> changing the E820 map in hvmloader.
[Zhang, Xiong Y] yes, this is my case, and your suggestion could fix it.
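The suggested fix amounts to punching an E820_RESERVED hole through any RAM entry that an RMRR overlaps. A rough sketch under those assumptions (hypothetical `e820_reserve_rmrr`; not hvmloader's actual E820 builder):

```c
#include <stdint.h>
#include <string.h>

#define E820_RAM      1
#define E820_RESERVED 2
#define E820_MAX      16

struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

/* Split any RAM entry overlapping [rmrr_base, rmrr_end] and mark the
 * overlap E820_RESERVED, so the guest OS never places its own data
 * where the device expects the reserved region.  Returns the new
 * entry count. */
static int e820_reserve_rmrr(struct e820entry *map, int nr,
                             uint64_t rmrr_base, uint64_t rmrr_end)
{
    for (int i = 0; i < nr; i++) {
        uint64_t start = map[i].addr;
        uint64_t end   = map[i].addr + map[i].size - 1;

        if (map[i].type != E820_RAM || end < rmrr_base || start > rmrr_end)
            continue;

        uint64_t lo = start > rmrr_base ? start : rmrr_base;
        uint64_t hi = end   < rmrr_end  ? end   : rmrr_end;

        /* Tail RAM piece left after the hole. */
        if (hi < end && nr < E820_MAX) {
            memmove(&map[i + 2], &map[i + 1], (nr - i - 1) * sizeof(*map));
            map[i + 1].addr = hi + 1;
            map[i + 1].size = end - hi;
            map[i + 1].type = E820_RAM;
            nr++;
        }

        if (lo > start) {
            /* Head RAM piece stays; insert the reserved hole after it. */
            map[i].size = lo - start;
            if (nr < E820_MAX) {
                memmove(&map[i + 2], &map[i + 1],
                        (nr - i - 1) * sizeof(*map));
                map[i + 1].addr = lo;
                map[i + 1].size = hi - lo + 1;
                map[i + 1].type = E820_RESERVED;
                nr++;
                i++;            /* skip the entry we just inserted */
            }
        } else {
            /* Hole starts at the entry: convert the overlap in place. */
            map[i].addr = lo;
            map[i].size = hi - lo + 1;
            map[i].type = E820_RESERVED;
        }
    }
    return nr;
}
```

Applied to the map in this report, a single 1G RAM entry would end up split around both RMRR ranges, with the 3b800000 ~ 3fffffff and 3a271000 ~ 3a290fff windows marked reserved.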
> 

Attachment: xl_dmesg
Description: xl_dmesg

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

