Re: [PATCH v2 6/8] x86/iommu: call pi_update_irte through an hvm_function callback
On 13.01.2023 08:44, Xenia Ragiadakou wrote:
>
> On 1/12/23 14:37, Jan Beulich wrote:
>> On 12.01.2023 13:16, Jan Beulich wrote:
>>> On 04.01.2023 09:45, Xenia Ragiadakou wrote:
>>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>>> @@ -2143,6 +2143,14 @@ static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
>>>>      return pi_test_pir(vec, &v->arch.hvm.vmx.pi_desc);
>>>>  }
>>>>
>>>> +static int cf_check vmx_pi_update_irte(const struct vcpu *v,
>>>> +                                       const struct pirq *pirq, uint8_t gvec)
>>>> +{
>>>> +    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
>>>> +
>>>> +    return pi_update_irte(pi_desc, pirq, gvec);
>>>> +}
>>>
>>> This being the only caller of pi_update_irte(), I don't see the point in
>>> having the extra wrapper. Adjust pi_update_irte() such that it can be
>>> used as the intended hook directly. Plus perhaps prefix it with vtd_.
>>
>> Plus move it to vtd/x86/hvm.c (!HVM builds shouldn't need it), albeit I
>> realize this could be done independently of your work. In principle the
>> function shouldn't be VT-d specific (and could hence live in x86/hvm.c),
>> as msi_msg_write_remap_rte() is already available as IOMMU hook anyway,
>> provided struct pi_desc turns out compatible with what's going to be
>> needed for AMD.
>
> Since the posted interrupt descriptor is vmx specific while
> msi_msg_write_remap_rte is iommu specific, can I propose the following:
>
> - Keep the name as is (i.e vmx_pi_update_irte) and keep its definition
> in xen/arch/x86/hvm/vmx/vmx.c
>
> - Open code pi_update_irte() inside the body of vmx_pi_update_irte() but
> replace intel-specific msi_msg_write_remap_rte() with generic
> iommu_update_ire_from_msi().
>
> Does this approach make sense?
Why not - that decouples one place from the implicit "CPU vendor" == "IOMMU vendor" assumption.
Jan
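
For reference, a minimal sketch of what the open-coded variant could look like - assuming the pirq lookup and locking currently done inside pi_update_irte() carry over unchanged, and that iommu_update_ire_from_msi() keeps its msi_desc/msi_msg signature (names and layout are illustrative, not the final patch):

static int cf_check vmx_pi_update_irte(const struct vcpu *v,
                                       const struct pirq *pirq, uint8_t gvec)
{
    const struct pi_desc *pi_desc = v ? &v->arch.hvm.vmx.pi_desc : NULL;
    struct irq_desc *desc;
    struct msi_desc *msi_desc;
    int rc;

    desc = pirq_spin_lock_irq_desc(pirq, NULL);
    if ( !desc )
        return -EINVAL;

    msi_desc = desc->msi_desc;
    if ( !msi_desc )
    {
        rc = -ENODEV;
        goto unlock_out;
    }

    /* Stash the PI descriptor and guest vector for the IRTE update below. */
    msi_desc->pi_desc = pi_desc;
    msi_desc->gvec = gvec;

    spin_unlock_irq(&desc->lock);

    ASSERT(pcidevs_locked());

    /* Generic IOMMU hook rather than the VT-d-only msi_msg_write_remap_rte(). */
    return iommu_update_ire_from_msi(msi_desc, &msi_desc->msg);

 unlock_out:
    spin_unlock_irq(&desc->lock);

    return rc;
}

The hvm_function_table entry added by the patch (presumably .pi_update_irte = vmx_pi_update_irte) would then point at this version, leaving no direct VT-d call in vmx.c.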