
Re: [XEN v4 04/11] xen/drivers: ns16550: Use paddr_t for io_base/io_size


  • To: Ayan Kumar Halder <ayankuma@xxxxxxx>, Ayan Kumar Halder <ayan.kumar.halder@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 30 Mar 2023 08:55:15 +0200
  • Cc: sstabellini@xxxxxxxxxx, stefano.stabellini@xxxxxxx, julien@xxxxxxx, Volodymyr_Babchuk@xxxxxxxx, bertrand.marquis@xxxxxxx, andrew.cooper3@xxxxxxxxxx, george.dunlap@xxxxxxxxxx, wl@xxxxxxx, rahul.singh@xxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 30 Mar 2023 06:55:38 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 29.03.2023 16:35, Ayan Kumar Halder wrote:
> Please let me know if the below patch looks fine.

Apart from the comments below there may be formatting issues, which
I can't sensibly comment on when the patch was mangled by your mailer
anyway. (Which in turn is why it is generally better to properly send
a new version, rather than replying with kind-of-a-new-version on an
earlier thread.)

Additionally, up front: I'm sorry for the extra requests, but I'm
afraid that, to sensibly make the changes you want to make, some things
need sorting first, to avoid extending pre-existing clumsiness. This is
irrespective of the present state of things clearly not being your
fault.

> @@ -1235,6 +1235,8 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>                   /* MMIO based */
>                   if ( param->mmio && !(bar & PCI_BASE_ADDRESS_SPACE_IO) )
>                   {
> +                    uint64_t pci_uart_io_base;
> +
>                       pci_conf_write32(PCI_SBDF(0, b, d, f),
>                                        PCI_BASE_ADDRESS_0 + bar_idx*4, ~0u);
>                       len = pci_conf_read32(PCI_SBDF(0, b, d, f),
> @@ -1259,8 +1261,17 @@ pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>                       else
>                           size = len & PCI_BASE_ADDRESS_MEM_MASK;
> 
> -                    uart->io_base = ((u64)bar_64 << 32) |
> -                                    (bar & PCI_BASE_ADDRESS_MEM_MASK);
> +                    pci_uart_io_base = ((uint64_t)bar_64 << 32) |
> +                                        (bar & PCI_BASE_ADDRESS_MEM_MASK);
> +
> +                    /* Truncation detected while converting to paddr_t */
> +                    if ( pci_uart_io_base != (paddr_t)pci_uart_io_base )
> +                    {
> +                        printk("ERROR: Truncation detected for io_base address");
> +                        return -EINVAL;
> +                    }

Further down the function returns -1, so here you assume EINVAL != 1.
Such assumptions (and such mixing of value spaces) are generally not a
idea. Since there are other issues (see below), maybe you really want
to add a prereq patch addressing those? That would include changing the
"return -1" to either "return 1" or making it use some sensible and
properly distinguishable errno value.

> @@ -1519,20 +1530,40 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
>   #ifdef CONFIG_HAS_PCI
>           if ( strncmp(conf, "pci", 3) == 0 )
>           {
> -            if ( pci_uart_config(uart, 1/* skip AMT */, uart - ns16550_com) )
> +            int ret;
> +
> +            ret = pci_uart_config(uart, 1/* skip AMT */, uart - ns16550_com);
> +
> +            if ( ret == -EINVAL )
> +                return false;
> +            else if ( ret )
>                   return true;

With skip_amt != 0 the function presently can only return 0. You're
therefore converting pre-existing dead code to another form of dead
code. Otoh (and as, I think, previously indicated) ...

> +
>               conf += 3;
>           }
>           else if ( strncmp(conf, "amt", 3) == 0 )
>           {
> -            if ( pci_uart_config(uart, 0, uart - ns16550_com) )
> +            int ret = pci_uart_config(uart, 0, uart - ns16550_com);
> +
> +            if ( ret == -EINVAL )
> +                return false;
> +            else if ( ret )
>                   return true;

... the equivalent of this in parse_namevalue_pairs() not checking
the return value is bogus. It is further bogus that the case
where 1 was passed for skip_amt sets dev_set to true
unconditionally, i.e. even when no device was found. IOW I also
question the correctness of the final "return 0" in pci_uart_config().
It looks to me as if this wants to be a skip_amt-independent
"return -ENODEV". skip_amt would only control whether uart->io_base is
restored before returning (leaving aside the question of why that is).

Jan
