
Re: [XEN PATCH v14 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 4 Sep 2024 08:04:14 +0200
  • Cc: "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <gwd@xxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Anthony PERARD <anthony@xxxxxxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, "Daniel P . Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, "Hildebrand, Stewart" <Stewart.Hildebrand@xxxxxxx>, "Huang, Ray" <Ray.Huang@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 04 Sep 2024 06:04:24 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 04.09.2024 03:43, Stefano Stabellini wrote:
> On Tue, 3 Sep 2024, Jan Beulich wrote:
>> On 03.09.2024 12:53, Chen, Jiqian wrote:
>>> On 2024/9/3 17:25, Jan Beulich wrote:
>>>> On 03.09.2024 09:58, Chen, Jiqian wrote:
>>>>> On 2024/9/3 15:42, Jan Beulich wrote:
>>>>>> On 03.09.2024 09:04, Jiqian Chen wrote:
>>>>>>> When dom0 is PVH and a device is passed through to an HVM domU, the
>>>>>>> Qemu code path xen_pt_realize->xc_physdev_map_pirq and the libxl code
>>>>>>> path pci_add_dm_done->xc_physdev_map_pirq map a pirq for the
>>>>>>> passthrough device. In the xc_physdev_map_pirq call stack, the
>>>>>>> function hvm_physdev_op checks has_pirq(currd), but currd is the PVH
>>>>>>> dom0, which lacks the X86_EMU_USE_PIRQ flag, so the check fails:
>>>>>>> PHYSDEVOP_map_pirq is not allowed for a PVH dom0 in the current code.
>>>>>>>
>>>>>>> But it is fine to map interrupts through pirq to an HVM domain that
>>>>>>> does not have XENFEAT_hvm_pirqs enabled: the pirq field is merely a
>>>>>>> way to reference interrupts, i.e. the way for the device model to
>>>>>>> identify which interrupt should be mapped to which domain, whereas
>>>>>>> has_pirq() only checks whether an HVM domain routes interrupts from
>>>>>>> devices (emulated or passthrough) through event channels. Hence the
>>>>>>> has_pirq() check should not be applied to a PHYSDEVOP_map_pirq issued
>>>>>>> by dom0.
>>>>>>>
>>>>>>> So, allow PHYSDEVOP_map_pirq when dom0 is PVH, and also allow
>>>>>>> PHYSDEVOP_unmap_pirq for the device removal path. With that, the
>>>>>>> interrupt of a passthrough device can successfully be mapped to a
>>>>>>> pirq for the domU.
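For reference, the device-model side of the two paths named above boils
down to roughly the following. This is only a sketch (error handling
trimmed, the helper name is invented), but xc_physdev_map_pirq() is the
actual libxenctrl entry point both paths end up in:

    #include <xenctrl.h>

    /* Sketch: map a passthrough device's interrupt (its GSI, here) to a
     * pirq for the guest "domid", as pci_add_dm_done / xen_pt_realize do. */
    static int map_pirq_for_guest(uint32_t domid, int gsi)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        int pirq = gsi; /* in: requested pirq; out: pirq actually granted */
        int rc;

        if ( !xch )
            return -1;

        /* Issues PHYSDEVOP_map_pirq -- the very op which, coming from a
         * PVH dom0, currently fails the has_pirq(currd) check. */
        rc = xc_physdev_map_pirq(xch, domid, gsi, &pirq);

        xc_interface_close(xch);
        return rc ? rc : pirq;
    }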
>>>>>>
>>>>>> As before: When you talk about just Dom0, ...
>>>>>>
>>>>>>> --- a/xen/arch/x86/hvm/hypercall.c
>>>>>>> +++ b/xen/arch/x86/hvm/hypercall.c
>>>>>>> @@ -73,6 +73,8 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>>      {
>>>>>>>      case PHYSDEVOP_map_pirq:
>>>>>>>      case PHYSDEVOP_unmap_pirq:
>>>>>>> +        break;
>>>>>>> +
>>>>>>>      case PHYSDEVOP_eoi:
>>>>>>>      case PHYSDEVOP_irq_status_query:
>>>>>>>      case PHYSDEVOP_get_free_pirq:
>>>>>>
>>>>>> ... that ought to match the code. IOW you've again lost why it is
>>>>>> okay(ish) (or even necessary) to also permit the op for non-Dom0
>>>>>> (e.g. a PVH stubdom).
>>>>>> Similarly imo Dom0 using this on itself wants discussing.
>>>>> Do you mean I need to explain why this op is permitted for all HVM
>>>>> domains,
>>>>
>>>> You don't need to invent reasons, but it needs making clear that
>>>> wider-than-necessary (for your purpose) exposure is at least not going
>>>> to be a problem.
>>>>
>>>>> and where a PVH domain can be prevented from calling this for
>>>>> self-mapping, not only dom0?
>>>>
>>>> Aiui use on itself is limited to Dom0, so only that would need clarifying
>>>> (along the lines of the above, i.e. that/why it is not a problem). For
>>>> has_pirq() domains, use on oneself was already permitted before.
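To make the net effect concrete: with the hunk above applied, the gating
switch in hvm_physdev_op() ends up along these lines (surrounding context
paraphrased, so treat it as a sketch rather than the literal source):

    long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
    {
        struct domain *currd = current->domain;

        switch ( cmd )
        {
        case PHYSDEVOP_map_pirq:
        case PHYSDEVOP_unmap_pirq:
            /* No longer gated on has_pirq(): reachable by a PVH Dom0
             * (un)mapping pirqs on behalf of an HVM domU, and - subject
             * to the XSM check further down the call chain - by other
             * callers too. */
            break;

        case PHYSDEVOP_eoi:
        case PHYSDEVOP_irq_status_query:
        case PHYSDEVOP_get_free_pirq:
            /* Still limited to domains routing interrupts through event
             * channels (X86_EMU_USE_PIRQ). */
            if ( !has_pirq(currd) )
                return -ENOSYS;
            break;

        /* ... remaining cases unchanged ... */
        }

        return do_physdev_op(cmd, arg);
    }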
>>>
>>> I have changed the commit message to the below. Please check; it would
>>> be greatly helpful if you could show me how to modify it.
>>> {
>>> x86/pvh: Allow (un)map_pirq when dom0 is PVH
>>>
>>> Problem: when dom0 is PVH and a device is passed through to an HVM
>>> domU, the Qemu code path xen_pt_realize->xc_physdev_map_pirq and the
>>> libxl code path pci_add_dm_done->xc_physdev_map_pirq map a pirq for the
>>> passthrough device. In the xc_physdev_map_pirq call stack, the function
>>> hvm_physdev_op checks has_pirq(currd), but currd is the PVH dom0, which
>>> lacks the X86_EMU_USE_PIRQ flag, so the check fails: PHYSDEVOP_map_pirq
>>> is not allowed for a PVH dom0 in the current code.
>>>
>>> To solve the above problem, the has_pirq() check needs to be removed
>>> for that situation (PHYSDEVOP_map_pirq issued by dom0 on behalf of a
>>> domU). But without adding other restrictions, PHYSDEVOP_map_pirq would
>>> be allowed more widely than the problem requires. So, clarify the wider
>>> exposure below:
>>>
>>> For HVM domUs that do not have XENFEAT_hvm_pirqs enabled, it is fine
>>> to map interrupts through pirq for them: the pirq field is merely a way
>>> to reference interrupts, i.e. the way for the device model to identify
>>> which interrupt should be mapped to which domain, whereas has_pirq()
>>> only checks whether an HVM domain routes interrupts from devices
>>> (emulated or passthrough) through event channels. So removing the
>>> has_pirq() check has no impact on HVM domUs.
>>>
>>> A PVH domU that performs such an operation on itself will fail the
>>> xsm_map_domain_pirq() check in physdev_map_pirq().
>>>
>>> PVH dom0 uses vPCI and does not route interrupts through event
>>> channels, so, as discussed above, it is likewise not impacted.
>>> }
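As to the DomU side mentioned above: with the default (dummy) XSM policy,
xsm_map_domain_pirq() is an XSM_DM_PRIV check, which amounts to roughly
the following (a simplified sketch with abbreviated names, not the literal
source). A plain PVH domU invoking the op on itself therefore gets -EPERM,
while a stubdom or de-privileged device model serving the target domain
passes:

    /* Simplified sketch of the default-policy XSM_DM_PRIV action behind
     * xsm_map_domain_pirq(). */
    static int dm_priv_action(struct domain *src, struct domain *target)
    {
        /* Device model (stubdom, de-priv qemu) of the target domain. */
        if ( target && src->target == target )
            return 0;

        /* The control domain (Dom0) is always allowed. */
        if ( is_control_domain(src) )
            return 0;

        /* Anything else, including a domU acting on itself, is refused. */
        return -EPERM;
    }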
>>
>> This is better than what you had before, and I don't really fancy re-
>> writing the description effectively from scratch. So let's just go from
>> there. As to the "excess" permission for the sub-ops: The main thing I'm
>> after is that it be clarified that we're not going to introduce any
>> security issues here. That requires auditing the code, and merely saying
>> "also has no impact" is a little too little for my taste. For Dom0 an
>> argument may be that it is overly powerful already anyway, even if for
>> PVH we're a little more strict than for PV (I think).
> 
> Hi Jan, for clarity and to make this actionable, are you suggesting to
> clarify the commit message by adding wording along the lines of "Dom0 is
> overly powerful already anyway, so this is OK"?

Yes, perhaps with "deemed" added. And text for DomU-s similarly extended,
as the pointing at the XSM check is presently incomplete (stubdom-s can
pass that check, after all, as can de-priv qemu running in Dom0).

Jan
