
Re: PVH Dom0 related UART failure


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 23 May 2023 12:59:49 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, andrew.cooper3@xxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx, marmarek@xxxxxxxxxxxxxxxxxxxxxx, xenia.ragiadakou@xxxxxxx
  • Delivery-date: Tue, 23 May 2023 11:00:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, May 23, 2023 at 08:44:48AM +0200, Jan Beulich wrote:
> On 23.05.2023 00:20, Stefano Stabellini wrote:
> > On Sat, 20 May 2023, Roger Pau Monné wrote:
> >> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
> >> index ec2e978a4e6b..0ff8e940fa8d 100644
> >> --- a/xen/drivers/vpci/header.c
> >> +++ b/xen/drivers/vpci/header.c
> >> @@ -289,6 +289,13 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
> >>       */
> >>      for_each_pdev ( pdev->domain, tmp )
> >>      {
> >> +        if ( !tmp->vpci )
> >> +        {
> >> +            printk(XENLOG_G_WARNING "%pp: not handled by vPCI for %pd\n",
> >> +                   &tmp->sbdf, pdev->domain);
> >> +            continue;
> >> +        }
> >> +
> >>          if ( tmp == pdev )
> >>          {
> >>              /*
> >> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
> >> index 652807a4a454..0baef3a8d3a1 100644
> >> --- a/xen/drivers/vpci/vpci.c
> >> +++ b/xen/drivers/vpci/vpci.c
> >> @@ -72,7 +72,12 @@ int vpci_add_handlers(struct pci_dev *pdev)
> >>      unsigned int i;
> >>      int rc = 0;
> >>  
> >> -    if ( !has_vpci(pdev->domain) )
> >> +    if ( !has_vpci(pdev->domain) ||
> >> +         /*
> >> +          * Ignore RO and hidden devices, those are in use by Xen and vPCI
> >> +          * won't work on them.
> >> +          */
> >> +         pci_get_pdev(dom_xen, pdev->sbdf) )
> >>          return 0;
> >>  
> >>      /* We should not get here twice for the same device. */
> > 
> > 
> > Now this patch works! Thank you!! :-)
> > 
> > You can check the full logs here
> > https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/4329259080
> > 
> > Is the patch ready to be upstreamed aside from the commit message?
> 
> I don't think so. vPCI ought to work on "r/o" devices. Out of curiosity,
> have you also tried my (hackish and hence RFC) patch [1]?

For r/o devices there should be no need for vPCI handlers; reading the
config space of such devices can be done directly.

There's some work to be done for hidden devices, as for those dom0 has
write access to the config space and thus needs vPCI to be set up
properly.
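
To make that concrete: both pci_ro_device() and pci_hide_device()
assign the device to dom_xen, so a single ownership lookup covers both
classes, which is what the vpci_add_handlers() hunk above relies on.
A minimal sketch (the helper itself is hypothetical and not in the
tree; pci_get_pdev(), dom_xen and sbdf are the real names):

    /*
     * Hypothetical helper, not in the tree: a device owned by dom_xen
     * is either r/o (pci_ro_device()) or hidden (pci_hide_device()),
     * and in both cases Xen itself is using it.
     */
    static bool pdev_owned_by_xen(const struct pci_dev *pdev)
    {
        return pci_get_pdev(dom_xen, pdev->sbdf) != NULL;
    }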

The change to modify_bars() to handle devices without vpci populated
is a bugfix: it's already possible to have devices assigned to a
domain that don't have vpci set up, if the call to
vpci_add_handlers() in setup_one_hwdom_device() fails.  That one
could go in separately from the rest of the work to enable support
for hidden devices.
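
For reference, that failure path looks roughly like the below
(paraphrased from memory, not verbatim; the per-devfn handler calls
are elided): on error setup_one_hwdom_device() only logs, so the
device stays assigned to dom0 with pdev->vpci left NULL, which is the
situation the modify_bars() hunk has to cope with:

    /* Rough paraphrase of setup_one_hwdom_device(), not verbatim: */
    static void __hwdom_init setup_one_hwdom_device(
        const struct setup_hwdom *ctxt, struct pci_dev *pdev)
    {
        int err;

        /* ... ctxt->handler() invocations for each devfn ... */

        err = vpci_add_handlers(pdev);
        if ( err )
            /* Only logged: pdev stays assigned, pdev->vpci == NULL. */
            printk(XENLOG_ERR "setup of vPCI for %pd failed: %d\n",
                   ctxt->d, err);
    }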

Roger.
