
Re: [XEN PATCH] hvmloader: Enable MMIO and I/O decode, after all resource allocation

On 07.04.2020 10:14, Paul Durrant wrote:
>> -----Original Message-----
>> From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>> Sent: 07 April 2020 09:07
>> To: paul@xxxxxxx
>> Cc: 'Harsha Shamsundara Havanur' <havanur@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; 'Wei Liu' <wl@xxxxxxx>; 'Andrew Cooper' <andrew.cooper3@xxxxxxxxxx>; 'Ian Jackson' <ian.jackson@xxxxxxxxxxxxx>; 'Jan Beulich' <jbeulich@xxxxxxxx>
>> Subject: Re: [XEN PATCH] hvmloader: Enable MMIO and I/O decode, after all resource allocation
>>
>> On Tue, Apr 07, 2020 at 08:48:42AM +0100, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of Harsha Shamsundara Havanur
>>>> Sent: 06 April 2020 18:47
>>>> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
>>>> Cc: Wei Liu <wl@xxxxxxx>; Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Ian Jackson <ian.jackson@xxxxxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Harsha Shamsundara Havanur <havanur@xxxxxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>>> Subject: [XEN PATCH] hvmloader: Enable MMIO and I/O decode, after all resource allocation
>>>>
>>>> It was observed that PCI MMIO and/or I/O BARs were programmed with
>>>> bus master, memory and I/O decode (bits 0, 1 and 2 of the PCI COMMAND
>>>> register) enabled during the PCI setup phase. This resulted in
>>>> spurious and premature bus transactions as soon as the lower half of
>>>> a 64-bit BAR was programmed. It is highly recommended that BARs be
>>>> programmed with the decode bits cleared, to avoid spurious bus
>>>> transactions.
>>> It's not so much spurious transactions that are the issue. I think
>>> "spurious and premature bus transactions" should be replaced with
>>> "incorrect mappings being created".
>>> I believe the PCI spec says all three bits should be clear after reset
>>> anyway, and BAR programming whilst decodes are enabled causes problems
>>> for emulators such as QEMU, which need to create and destroy mappings
>>> between the gaddr being programmed into the virtual BAR and the maddr
>>> programmed into the physical BAR.
>>> Specifically, the case we see is that a 64-bit memory BAR is programmed
>>> by writing the lower half and then the upper half. After the first
>>> write, the BAR is mapped to an address under 4G that happens to contain
>>> RAM, which is displaced by the mapping. After the second write, the BAR
>>> is re-mapped to the intended location, but the RAM displaced by the
>>> first mapping is not restored. The OS then continues to boot and
>>> function until at some point it happens to try to use that RAM, at
>>> which point it suffers a page fault and crashes. It was only by
>>> noticing that the faulting address lay within the transient BAR mapping
>>> that we figured out what was happening.
>> In order to fix this, isn't it enough to just disable memory and I/O
>> decodes, leaving bus mastering as it is?
>> I assume there is (or was) some reason why bus master is enabled by
>> hvmloader in the first place?
> I admit it has been a while since I went mining for the reason BME
> was being set, but IIRC the commit that added the original code
> did not state why it was done.

I can second this observation, but it is not a reason to drop the
enabling again without suitable justification. If there are babbling
devices, perhaps they should be the ones quirked, rather than (as you
suggested in reply to Andrew) the devices which require bus mastering
enabled? (If such a requirement indeed exists, I assume those devices
would be handed to hvmloader with the bit already set, in which case a
middle ground may be to simply leave the bit alone rather than
force-enabling or force-disabling it. But again, the commit that added
the enabling would stand against this, because it was likely done for a
reason, even if that reason is not stated in the commit.)
