
Re: [RFC XEN PATCH 3/6] x86/pvh: shouldn't check pirq flag when map pirq in PVH


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Huang Rui <ray.huang@xxxxxxx>
  • Date: Tue, 21 Mar 2023 18:09:48 +0800
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Deucher, Alexander" <Alexander.Deucher@xxxxxxx>, "Koenig, Christian" <Christian.Koenig@xxxxxxx>, "Hildebrand, Stewart" <Stewart.Hildebrand@xxxxxxx>, Xenia Ragiadakou <burzalodowa@xxxxxxxxx>, "Huang, Honglei1" <Honglei1.Huang@xxxxxxx>, "Zhang, Julia" <Julia.Zhang@xxxxxxx>, "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Delivery-date: Tue, 21 Mar 2023 10:10:42 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Mar 15, 2023 at 11:57:45PM +0800, Roger Pau Monné wrote:
> On Sun, Mar 12, 2023 at 03:54:52PM +0800, Huang Rui wrote:
> > From: Chen Jiqian <Jiqian.Chen@xxxxxxx>
> > 
> > PVH is also an HVM-type domain, but PVH does not have the
> > X86_EMU_USE_PIRQ flag. So, when dom0 is PVH and calls
> > PHYSDEVOP_map_pirq, it fails the has_pirq() check.
> > 
> > Signed-off-by: Chen Jiqian <Jiqian.Chen@xxxxxxx>
> > Signed-off-by: Huang Rui <ray.huang@xxxxxxx>
> > ---
> >  xen/arch/x86/hvm/hypercall.c | 2 --
> >  1 file changed, 2 deletions(-)
> > 
> > diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
> > index 405d0a95af..16a2f5c0b3 100644
> > --- a/xen/arch/x86/hvm/hypercall.c
> > +++ b/xen/arch/x86/hvm/hypercall.c
> > @@ -89,8 +89,6 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> >      case PHYSDEVOP_eoi:
> >      case PHYSDEVOP_irq_status_query:
> >      case PHYSDEVOP_get_free_pirq:
> > -        if ( !has_pirq(currd) )
> > -            return -ENOSYS;
> 
> Since I've taken a look at the Linux side of this, it seems like you
> need PHYSDEVOP_map_pirq and PHYSDEVOP_setup_gsi, but the latter is not
> in this list because it has never been available to HVM-type guests.

Do you mean HVM guests only support MSI(-X)?

> 
> I would like to better understand the usage by PVH dom0 for GSI
> passthrough before deciding on what to do here.  IIRC QEMU also uses
> PHYSDEVOP_{un,}map_pirq in order to allocate MSI(-X) interrupts.
> 

The MSI(-X) interrupts don't work for the passthrough device in domU
even when dom0 is a PV domain. It seems to be a common problem; I remember
Christian encountered a similar issue as well. So we fell back to using
GSI interrupts instead.

Thanks,
Ray


