
Re: [PATCH] hvmloader: probe memory below 4G before allocation for OVMF



On 03/04/2020 16:05, Jan Beulich wrote:
> On 03.04.2020 16:47, Igor Druzhinin wrote:
>> On 03/04/2020 15:39, Andrew Cooper wrote:
>>> On 03/04/2020 14:53, Jan Beulich wrote:
>>>> On 02.04.2020 18:18, Igor Druzhinin wrote:
>>>>> The area just below 4G where the OVMF image is originally relocated is not
>>>>> necessarily a hole - it might contain pages preallocated by the device model
>>>>> or the toolstack. By unconditionally populating on top of this memory,
>>>>> the original pages get lost while still potentially being foreign-mapped
>>>>> in Dom0.
>>>> When there are pre-allocated pages - have they been orphaned? If
>>>> so, shouldn't whoever populated them unpopulate rather than
>>>> orphaning them? Or if not - how is the re-use you do safe?
>>>
>>> So this is a mess.
>>>
>>> OVMF is linked to run at a fixed address suitable for native hardware,
>>> which is in the SPI ROM immediately below the 4G boundary (this is
>>> correct).  We also put the framebuffer there (this is not correct).
>>>
>>> This was fine for RomBIOS which is located under the 1M boundary.
>>>
>>> It is also fine for a fully-emulated VGA device in Qemu, because the
>>> framebuffer is moved (actually re-set up) when the virtual BAR is moved,
>>> but with a real GPU (SR-IOV in this case), there is no logic to play such games.
> 
> So are you saying that in the OVMF case, OVMF starts out appearing to
> run in VRAM, until the frame buffer gets moved? If so,
> with the logic added by this patch, how would both places (the
> old VRAM address, where OVMF lives, and the new VRAM address) end up
> backed by distinct pages? Wouldn't the avoided re-populate
> result in the same page having two uses? Or alternatively in there
> being a hole in OVMF space, which would be a problem if this
> was backing runtime memory?

In the normal (non-SR-IOV) case, VRAM gets evacuated (by the PCI logic)
before hvmloader overwrites it, so the issue is avoided. But with SR-IOV
the VRAM stays in place, so the VRAM area is temporarily used to hold the
OVMF image until decompression is complete. With this patch the VRAM pages
would be reused for that purpose instead of populating new ones.

>>> The problem is entirely caused by the framebuffer in Xen not being like
>>> any real system.  The framebuffer isn't actually in a BAR, and also
>>> doesn't manifest itself in the way that graphics-stolen-ram normally
>>> does, either.
>>
>> Adding to what Andrew said:
>>
>> There are multiple technical complications that caused this mess.
>> One of them is that there is unfortunately no better place for the
>> framebuffer to be located initially. Second, the SR-IOV device
>> is real, and adding a virtual BAR to it is also complicated (for
>> compatibility reasons), so NVIDIA decided to avoid that.
> 
> In which case I wonder - aren't you ending up with the MMIO case
> that I had mentioned, and that you said is difficult to deal with?

No, it's the VRAM area (normal RAM pages) - not MMIO.

Igor



 

