[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v3] vpci/msix: handle accesses adjacent to the MSI-X table


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 22 Mar 2023 18:05:55 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 22 Mar 2023 17:06:31 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Mar 22, 2023 at 04:14:54PM +0100, Jan Beulich wrote:
> On 22.03.2023 15:30, Roger Pau Monne wrote:
> > Changes since v2:
> >  - Slightly adjust VMSIX_ADDR_SAME_PAGE().
> >  - Use IS_ALIGNED and unlikely for the non-aligned access checking.
> >  - Move the check for the page mapped before the aligned one.
> >  - Remove cast of data to uint8_t and instead use a mask in order to
> >    avoid undefined behaviour when shifting.
> >  - Remove Xen maps of the MSIX related regions when memory decoding
> >    for the device is enabled by dom0, in order to purge stale maps.
> 
> I'm glad you thought of this. The new code has issues, though:
> 
> > @@ -182,93 +187,201 @@ static struct vpci_msix_entry *get_entry(struct vpci_msix *msix,
> >      return &msix->entries[(addr - start) / PCI_MSIX_ENTRY_SIZE];
> >  }
> >  
> > -static void __iomem *get_pba(struct vpci *vpci)
> > +static void __iomem *get_table(struct vpci *vpci, unsigned int slot)
> >  {
> >      struct vpci_msix *msix = vpci->msix;
> >      /*
> > -     * PBA will only be unmapped when the device is deassigned, so access it
> > -     * without holding the vpci lock.
> > +     * Regions will only be unmapped when the device is deassigned, so access
> > +     * them without holding the vpci lock.
> 
> The first part of the sentence is now stale, and the second part is in
> conflict ...
> 
> > @@ -482,6 +641,26 @@ int vpci_make_msix_hole(const struct pci_dev *pdev)
> >          }
> >      }
> >  
> > +    if ( is_hardware_domain(d) )
> > +    {
> > +        unsigned int i;
> > +
> > +        /*
> > +         * For the hardware domain only remove any hypervisor mappings of the
> > +         * MSIX or PBA related areas, as dom0 is capable of moving the position
> > +         * of the BARs in the host address space.
> > +         *
> > +         * We rely on being called with the vPCI lock held in order to not race
> > +         * with get_table().
> 
> ... with what you say (and utilize) here. Furthermore this comment also wants
> clarifying that apply_map() -> modify_decoding() not (afaics) holding the lock
> when calling here is not a problem, as no mapping can exist yet that may need
> tearing down. (I first wondered whether you wouldn't want to assert that the
> lock is being held. You actually could, but only after finding a non-NULL
> table entry.)

Oh, yes, sorry, I should update those comments.  vpci_make_msix_hole()
gets called before bars[].enabled gets set, so there should be no users
of the mappings at that point, because we don't handle accesses while
the BAR is not enabled.

I'm not sure whether we should consider the case where an access that
started while the BAR was enabled is still in progress on one thread
while another thread disables and re-enables the BAR (and thus removes
the mapping).  It's a theoretical race, so I guess I will look into
making sure we cannot hit it.

Thanks, Roger.
