
Re: [PATCH v4 11/11] xen/arm: translate virtual PCI bus topology for guests


  • To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 8 Nov 2021 15:23:48 +0100
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, "andrew.cooper3@xxxxxxxxxx" <andrew.cooper3@xxxxxxxxxx>, "george.dunlap@xxxxxxxxxx" <george.dunlap@xxxxxxxxxx>, "paul@xxxxxxx" <paul@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 08 Nov 2021 14:24:17 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Nov 08, 2021 at 11:16:42AM +0000, Oleksandr Andrushchenko wrote:
> 
> 
> On 08.11.21 13:10, Jan Beulich wrote:
> > On 05.11.2021 07:56, Oleksandr Andrushchenko wrote:
> >> --- a/xen/arch/arm/vpci.c
> >> +++ b/xen/arch/arm/vpci.c
> >> @@ -41,6 +41,15 @@ static int vpci_mmio_read(struct vcpu *v, mmio_info_t 
> >> *info,
> >>       /* data is needed to prevent a pointer cast on 32bit */
> >>       unsigned long data;
> >>   
> >> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
> >> +    /*
> >> +     * For the passed through devices we need to map their virtual SBDF
> >> +     * to the physical PCI device being passed through.
> >> +     */
> >> +    if ( !bridge && !vpci_translate_virtual_device(v->domain, &sbdf) )
> >> +            return 1;
> > Nit: Indentation.
> Ouch, sure
> >
> >> @@ -59,6 +68,15 @@ static int vpci_mmio_write(struct vcpu *v, mmio_info_t 
> >> *info,
> >>       struct pci_host_bridge *bridge = p;
> >>       pci_sbdf_t sbdf = vpci_sbdf_from_gpa(bridge, info->gpa);
> >>   
> >> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
> >> +    /*
> >> +     * For the passed through devices we need to map their virtual SBDF
> >> +     * to the physical PCI device being passed through.
> >> +     */
> >> +    if ( !bridge && !vpci_translate_virtual_device(v->domain, &sbdf) )
> >> +            return 1;
> > Again.
> Will fix
> >
> >> @@ -172,10 +175,37 @@ REGISTER_VPCI_INIT(vpci_add_virtual_device, 
> >> VPCI_PRIORITY_MIDDLE);
> >>   static void vpci_remove_virtual_device(struct domain *d,
> >>                                          const struct pci_dev *pdev)
> >>   {
> >> +    ASSERT(pcidevs_locked());
> >> +
> >>       clear_bit(pdev->vpci->guest_sbdf.dev, &d->vpci_dev_assigned_map);
> >>       pdev->vpci->guest_sbdf.sbdf = ~0;
> >>   }
> >>   
> >> +/*
> >> + * Find the physical device which is mapped to the virtual device
> >> + * and translate virtual SBDF to the physical one.
> >> + */
> >> +bool vpci_translate_virtual_device(struct domain *d, pci_sbdf_t *sbdf)
> > const struct domain *d ?
> Will change
> >
> >> +{
> >> +    const struct pci_dev *pdev;
> >> +    bool found = false;
> >> +
> >> +    pcidevs_lock();
> >> +    for_each_pdev( d, pdev )
> >> +    {
> >> +        if ( pdev->vpci->guest_sbdf.sbdf == sbdf->sbdf )
> >> +        {
> >> +            /* Replace virtual SBDF with the physical one. */
> >> +            *sbdf = pdev->sbdf;
> >> +            found = true;
> >> +            break;
> >> +        }
> >> +    }
> >> +    pcidevs_unlock();
> > I think the description wants to at least mention that in principle
> > this is too coarse grained a lock, providing justification for why
> > it is deemed good enough nevertheless. (Personally, as expressed
> > before, I don't think the lock should be used here, but as long as
> > Roger agrees with you, you're fine.)
> Yes, makes sense

Seeing as we don't take the lock in vpci_{read,write} I'm not sure we
need it here either then.

Since on Arm you will add devices to the guest at runtime (ie: while
there could already be PCI accesses), have you seen issues with not
taking the lock here?

I think the whole pcidevs locking needs to be clarified, as it's
currently a mess. If you want to take it here that's fine, but overall
there are issues in other places that would make removing a device at
runtime not reliable.

Thanks, Roger.



 

