Re: [Xen-devel] [PATCH for-4.13] AMD/IOMMU: honour IR setting while pre-filling DTEs
On 26.11.2019 15:34, Andrew Cooper wrote:
> On 26/11/2019 14:14, Jan Beulich wrote:
>> On 26.11.2019 13:25, Andrew Cooper wrote:
>>> On 26/11/2019 08:42, Jan Beulich wrote:
>>>> On 25.11.2019 22:05, Igor Druzhinin wrote:
>>>>> --- a/xen/drivers/passthrough/amd/iommu_init.c
>>>>> +++ b/xen/drivers/passthrough/amd/iommu_init.c
>>>>> @@ -1279,7 +1279,7 @@ static int __init amd_iommu_setup_device_table(
>>>>> for ( bdf = 0, size /= sizeof(*dt); bdf < size; ++bdf )
>>>>> dt[bdf] = (struct amd_iommu_dte){
>>>>> .v = true,
>>>>> - .iv = true,
>>>>> + .iv = iommu_intremap,
>>>> This was very intentionally "true", and ignoring "iommu_intremap":
>>> Deliberate or not, it is a regression from 4.12.
>> I accept it's a regression (which wants fixing), but I don't think
>> this is the way to address it. I could be convinced by good
>> arguments, though.
>>
>>> Booting with iommu=no-intremap is a common debugging technique, and that
>>> means no interrupt remapping anywhere in the system, even for
>>> supposedly-unused DTEs.
>> Whether IV=1 or IV=0, there's no interrupt _remapping_ with this
>> option specified. There's some interrupt _blocking_, yes. It's
>> not immediately clear to me whether this is a good or a bad thing.
>
> You're attempting to argue semantics. Blocking is a special case of remapping.
>
> "iommu=no-intremap" (for better or worse, naming wise) refers to the
> interrupt mediation functionality in the IOMMU, and means "don't use any
> of it". Any other behaviour is a regression.
I can accept this point of view. Nevertheless I'd like to first see
whether we can't address the issue at hand with a less big-hammer
solution. We can then always decide to still put in this change.
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel