
Re: [Xen-devel] (v2) Design proposal for RMRR fix



On Mon, Jan 12, 2015 at 1:56 PM, Pasi Kärkkäinen <pasik@xxxxxx> wrote:
> On Mon, Jan 12, 2015 at 11:25:56AM +0000, George Dunlap wrote:
>> So qemu-traditional didn't particularly expect to know the guest
>> memory layout.  qemu-upstream does; it expects to know what areas of
>> memory are guest memory and what areas of memory are unmapped.  If a
>> read or write happens to a gpfn which *xen* knows is valid, but which
>> *qemu-upstream* thinks is unmapped, then qemu-upstream will crash.
>>
>> The problem though is that the guest's memory map is not actually
>> communicated to qemu-upstream in any way.  Originally, qemu-upstream
>> was only told how much memory the guest had, and it just "happens" to
>> choose the same guest memory layout as the libxc domain builder does.
>> This works, but it is bad design: if libxc were to change its layout
>> for some reason, the only thing keeping the two in sync would be
>> someone remembering to update the qemu-upstream layout as well.
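>>
>> To make the implicit contract concrete: as I understand it, both sides
>> simply hard-code matching constants rather than negotiating anything.
>> Roughly (recalled from libxc's HVM builder and QEMU's Xen machine code,
>> so treat the exact names and values as assumptions):
>>
>>     /* Carried independently in both the libxc domain builder and in
>>      * QEMU's Xen support code -- nothing ties the two copies together. */
>>     #define HVM_BELOW_4G_RAM_END     0xF0000000ULL
>>     #define HVM_BELOW_4G_MMIO_START  HVM_BELOW_4G_RAM_END
>>     #define HVM_BELOW_4G_MMIO_LENGTH ((1ULL << 32) - HVM_BELOW_4G_MMIO_START)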
>>
>> Where this really bites us is in PCI pass-through.  The default <4G
>> MMIO hole is very small; and hvmloader naturally expects to be able to
>> make this area larger by relocating memory from below 4G to above 4G.
>> It moves the memory in Xen's p2m, but it has no way of communicating
>> this to qemu-upstream.  So when the guest does an MMIO instruction
>> that causes qemu-upstream to access that memory, qemu-upstream crashes
>> (taking the guest with it).
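>>
>> The relocation itself is just a memory-op hypercall issued from inside
>> the guest, so only Xen's p2m is updated and nobody else hears about it.
>> A simplified sketch of what hvmloader's PCI setup does (paraphrased
>> from memory; the variable names are illustrative):
>>
>>     /* Move one page of guest RAM from below the MMIO hole to above
>>      * 4GiB.  Xen updates its p2m; qemu-upstream is never told. */
>>     struct xen_add_to_physmap xatp = {
>>         .domid = DOMID_SELF,
>>         .space = XENMAPSPACE_gmfn,
>>         .idx   = low_pfn,   /* current pfn, below the hole */
>>         .gpfn  = high_pfn,  /* new pfn, above 4GiB */
>>     };
>>     if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
>>         BUG();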
>>
>> There are two work-arounds at the moment:
>> 1. A flag which tells hvmloader not to relocate memory
>> 2. The option to tell qemu-upstream to make the memory hole larger.
>>
>> Both are just work-arounds though; a "proper fix" would be to allow
>> hvmloader some way of telling qemu that the memory has moved, so it
>> can update its memory map.
>>
>> This will (I'm pretty sure) have an effect on RMRR regions as well,
>> for the reasons I've mentioned above: whether we make the "holes" for
>> the RMRRs in libxc or in hvmloader, if we *move* that memory up to the
>> top of the address space (rather than, say, just not giving that RAM
>> to the guest), then qemu-upstream's idea of the guest memory map will
>> be wrong, and qemu-upstream will probably crash at some point.
>>
>> Having the ability for hvmloader to populate and/or move the memory
>> around, and then tell qemu-upstream what the resulting map looked
>> like, would fix both the MMIO-resize issue and the RMRR problem, wrt
>> qemu-upstream.
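>>
>> Nothing like that interface exists today; but purely as a hypothetical
>> illustration, what hvmloader would need to publish once it has finished
>> populating and relocating memory is essentially its final e820-style
>> view of guest RAM, something like:
>>
>>     /* Hypothetical layout-handoff structure -- not an existing ABI. */
>>     struct guest_ram_region {
>>         uint64_t start;   /* guest physical address */
>>         uint64_t size;    /* bytes of RAM at that address */
>>     };
>>
>>     struct guest_memory_map {
>>         uint32_t nr_regions;
>>         struct guest_ram_region regions[32];  /* arbitrary bound for the sketch */
>>     };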
>>
>
> Hmm, wasn't this changed slightly during Xen 4.5 development by Don Slutz?
>
> You can now specify the mmio_hole size for HVM guests when using 
> qemu-upstream:
> http://wiki.xenproject.org/wiki/Xen_Project_4.5_Feature_List
>
>
> "Bigger PCI MMIO hole in QEMU via the mmio_hole parameter in guest config, 
> which allows configuring the MMIO size below 4GB. "
>
> "Backport pc & q35: Add new machine opt max-ram-below-4g":
> http://xenbits.xen.org/gitweb/?p=qemu-upstream-unstable.git;a=commit;h=ffdacad07002e14a8072ae28086a57452e48d458
>
> "x86: hvm: Allow configuration of the size of the mmio_hole.":
> http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=2d927fc41b8e130b3b8910e4442d4691111d2ac7

Yes -- that's workaround #2 above ("tell qemu-upstream to make the
memory hole larger").  But it's still a work-around, because it
requires the admin to figure out how big a memory hole he needs.  With
qemu-traditional, he could just assign whatever devices he wanted, and
hvmloader would make it the right size automatically.  Ideally that's
how it would work for qemu-upstream as well.
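
For what it's worth, the 4.5 interface boils down to a single per-domain
knob in libxl's build info, which libxl then turns into QEMU's
max-ram-below-4g machine option.  A minimal sketch (the field name is
recalled from memory of that series, so treat it as an assumption; the
value is in KiB):

    #include <libxl.h>

    /* Ask for a 2GiB MMIO hole below 4GiB for an HVM guest -- which is
     * exactly the number the admin currently has to guess at. */
    static void request_big_mmio_hole(libxl_domain_config *d_config)
    {
        d_config->b_info.u.hvm.mmio_hole_memkb = 2ULL * 1024 * 1024;
    }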

 -George


 

