
Re: [Xen-devel] [RFC][PATCH 4/5] tools:firmware:hvmloader: reserve RMRR mappings in e820



> From: Chen, Tiejun
> Sent: Tuesday, August 12, 2014 5:57 PM
> 
> On 2014/8/12 20:25, Jan Beulich wrote:
> >>>> On 12.08.14 at 12:59, <tiejun.chen@xxxxxxxxx> wrote:
> >> On 2014/8/12 0:00, Tian, Kevin wrote:
> >>>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >>>> Sent: Sunday, August 10, 2014 11:53 PM
> >>>>>>> On 08.08.14 at 23:47, <kevin.tian@xxxxxxxxx> wrote:
> >>>>> strictly speaking, besides reserving in e820 you should also poke later
> >>>>> MMIO BAR allocations to avoid conflicts too. Currently allocation is relative
> >>>>> to low_mem_pgend, which is likely to differ from the host layout,
> >>>>> so it's still possible to see a virtual MMIO BAR base conflicting with the
> >>>>> RMRR ranges, which are supposed to be sparse.
> >>>>
> >>>> Correct. And what's worse: Possible collisions between RMRRs and
> >>>> the BIOS we place into the VM need to be taken care of, which may
> >>>> turn out rather tricky.
> >>>>
> >>>
> >>> right, that becomes tricky. We can provide another hypercall to allow a
> >>> VM to tell Xen which RMRRs can't be assigned due to conflicts with the guest
> >>> BIOS or other hvmloader allocations (if the conflict can't be resolved).
> >>>
> >>> If Xen detects that a device owning an RMRR is already assigned to the VM,
> >>> the hypercall fails and hvmloader just panics with information
> >>> indicating the conflict.
> >>>
> >>> Otherwise Xen records the information, and future dynamic device
> >>> assignment (e.g. hotplug) will fail if the associated RMRR is in
> >>> the conflict list.
> >>
> >>   From my point of view it's becoming overcomplicated.
> >>
> >> In the HVM case, theoretically any device involving an RMRR may be assigned
> >> to any given VM. So it may not be necessary to introduce such a complex
> >> mechanism. Therefore, I think we can simply reserve all RMRR mappings in
> >> the e820, and check whether MMIO overlaps an RMRR for every VM. That should
> >> be acceptable.
> >
> > Then you didn't understand what Kevin and I said above. Just
> 
> I have to admit my knowledge in this area is limited.
> 
> > keep in mind that the RMRRs can conflict not just with MMIO
> > ranges inside the guest, but also RAM ranges (which include, as
> > mentioned above, the range where the BIOS for the guest gets
> > put).
> >
> > Jan
> >
> 
> So just to clarify, as a summary there are four ranges we should
> address:
> 
> #1 MMIO in guest
> 
> In my patch [RFC][v2][PATCH 5/6] tools:libxc: check if mmio BAR is out
> of RMRR mappings,
> 
> I check whether this overlaps.

hvmloader controls the actual MMIO BAR allocation, so it's important to have
the check there. Also, your patch treats the whole MMIO hole as one big region
when checking for overlap with RMRRs, which is too coarse-grained. Better to
check for overlap every time an allocation, whether of memory ranges or
MMIO ranges, actually happens.
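To illustrate, a per-allocation check could be as simple as the sketch below. (The RMRR descriptor layout and the function name here are hypothetical, not the actual hvmloader code; in practice the ranges would have to be obtained from Xen.)

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical RMRR descriptor; the real list would come from Xen. */
struct rmrr_range {
    uint64_t base;  /* inclusive */
    uint64_t end;   /* exclusive */
};

/*
 * Return 1 if [addr, addr + size) overlaps any RMRR range.  The point is
 * to call this for every individual allocation (each memory range and each
 * MMIO BAR), rather than once against the whole MMIO hole.
 */
static int conflicts_with_rmrr(const struct rmrr_range *rmrr,
                               unsigned int nr_rmrr,
                               uint64_t addr, uint64_t size)
{
    unsigned int i;

    for ( i = 0; i < nr_rmrr; i++ )
        if ( addr < rmrr[i].end && rmrr[i].base < addr + size )
            return 1;

    return 0;
}
```

The half-open interval test above makes adjacent ranges (an allocation ending exactly where an RMRR begins) count as non-conflicting.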

> 
> #2 RAM in guest
> 
> tools/firmware/hvmloader/e820.c:
>      e820[nr].addr = 0x100000;
>      e820[nr].size = (hvm_info->low_mem_pgend << PAGE_SHIFT) -
> e820[nr].addr;

Note that memory allocation actually happens in libxc: when you see a
populate_physmap call, it means a real allocation in the guest physical
address space. That's where you want to detect and avoid overlaps in the
first place. hvmloader then builds the e820 assuming the same policy as
libxc, so it's the second place to check. (This split between hvmloader
and libxc is not a good design; ideally the memory e820 ranges should be
passed in from libxc.)
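As a rough sketch of what that policy amounts to: when emitting a RAM range into the e820, punch out any RMRR sub-ranges so they can be marked reserved instead. The types and the function name below are illustrative stand-ins, not the actual hvmloader/libxc code.

```c
#include <assert.h>
#include <stdint.h>

#define E820_RAM 1

struct e820entry_sketch {   /* simplified stand-in for hvmloader's e820entry */
    uint64_t addr, size;
    uint32_t type;
};

struct range { uint64_t base, end; };   /* [base, end), e.g. an RMRR */

/*
 * Emit RAM e820 entries covering [start, end) while skipping the
 * sub-ranges covered by the (sorted, non-overlapping) hole list.
 * Returns the number of entries written.
 */
static unsigned int add_ram_with_holes(struct e820entry_sketch *e820,
                                       uint64_t start, uint64_t end,
                                       const struct range *hole,
                                       unsigned int nr_holes)
{
    unsigned int i, nr = 0;

    for ( i = 0; i < nr_holes; i++ )
    {
        if ( hole[i].end <= start || hole[i].base >= end )
            continue;                    /* hole outside this RAM range */
        if ( hole[i].base > start )
        {
            e820[nr].addr = start;       /* RAM up to the hole */
            e820[nr].size = hole[i].base - start;
            e820[nr].type = E820_RAM;
            nr++;
        }
        start = hole[i].end;             /* resume after the hole */
    }
    if ( start < end )
    {
        e820[nr].addr = start;           /* trailing RAM piece */
        e820[nr].size = end - start;
        e820[nr].type = E820_RAM;
        nr++;
    }
    return nr;
}
```

The same hole list would have to be honored by libxc when populating the physmap, which is exactly why passing one set of ranges down from libxc would be cleaner.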

> 
> #3 Guest BIOS itself
> 
> tools/firmware/hvmloader/e820.c:
>      e820[nr].addr = bios_image_base;
> 
> For #2 and #3, in my patch [RFC][v2][PATCH 3/6] tools:firmware:hvmloader:
> reserve RMRR mappings in e820, we check whether the RMRRs overlap
> these ranges.
> 
> #4 Machine RAM range for a given guest
> 
> On this point I think the RMRRs are already reserved in the host e820, so
> it's not possible to allocate any RMRR range as physical RAM to a VM.

Yes, this is not a concern for this topic.
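For completeness, the reservation step itself (the subject of the patch above) boils down to appending one reserved entry per RMRR when the e820 is built. A minimal sketch with simplified, illustrative types, not the real patch:

```c
#include <assert.h>
#include <stdint.h>

#define E820_RESERVED 2

struct e820entry_sketch { uint64_t addr, size; uint32_t type; };
struct range { uint64_t base, end; };   /* [base, end) */

/* Append one E820_RESERVED entry per RMRR; returns the new entry count. */
static unsigned int reserve_rmrrs(struct e820entry_sketch *e820,
                                  unsigned int nr,
                                  const struct range *rmrr,
                                  unsigned int nr_rmrr)
{
    unsigned int i;

    for ( i = 0; i < nr_rmrr; i++ )
    {
        e820[nr].addr = rmrr[i].base;
        e820[nr].size = rmrr[i].end - rmrr[i].base;
        e820[nr].type = E820_RESERVED;
        nr++;
    }
    return nr;
}
```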

> 
> If I'm still misunderstanding please correct me.
> 
> Thanks
> Tiejun

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

