
Re: [PATCH] gpu/xen: Fix a use after free in xen_drm_drv_init


  • To: Lv Yunlong <lyl2019@xxxxxxxxxxxxxxxx>, "airlied@xxxxxxxx" <airlied@xxxxxxxx>, "daniel@xxxxxxxx" <daniel@xxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Thu, 25 Mar 2021 06:53:46 +0000
  • Accept-language: en-US
  • Cc: "dri-devel@xxxxxxxxxxxxxxxxxxxxx" <dri-devel@xxxxxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 25 Mar 2021 06:53:53 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHXH4ZwjUja6WTelkeHEbGl7WGsSaqUSCcA
  • Thread-topic: [PATCH] gpu/xen: Fix a use after free in xen_drm_drv_init

Hi,

Good catch!

On 3/23/21 3:46 AM, Lv Yunlong wrote:
> In displback_changed(), the call chain is
> displback_connect(front_info)->xen_drm_drv_init(front_info).
> In xen_drm_drv_init(), drm_info is assigned to front_info->drm_info,
> and drm_info is freed on the failure path.
>
> Later, displback_disconnect(front_info) is called, and it calls
> xen_drm_drv_fini(front_info), which causes a use after free via
> the drm_info = front_info->drm_info statement.
>
> This patch does two things. First, it reworks the fail label so
> that the path where kzalloc() failed no longer frees drm_info.
> Second, it sets front_info->drm_info to NULL to avoid the use
> after free.
>
> Signed-off-by: Lv Yunlong <lyl2019@xxxxxxxxxxxxxxxx>

Thank you for the patch,

Reviewed-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>

Will apply to drm-misc-next-fixes

Thank you,

Oleksandr

> ---
>   drivers/gpu/drm/xen/xen_drm_front.c | 6 ++++--
>   1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
> index 30d9adf31c84..9f14d99c763c 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
> @@ -521,7 +521,7 @@ static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
>       drm_dev = drm_dev_alloc(&xen_drm_driver, dev);
>       if (IS_ERR(drm_dev)) {
>               ret = PTR_ERR(drm_dev);
> -             goto fail;
> +             goto fail_dev;
>       }
>   
>       drm_info->drm_dev = drm_dev;
> @@ -551,8 +551,10 @@ static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
>       drm_kms_helper_poll_fini(drm_dev);
>       drm_mode_config_cleanup(drm_dev);
>       drm_dev_put(drm_dev);
> -fail:
> +fail_dev:
>       kfree(drm_info);
> +     front_info->drm_info = NULL;
> +fail:
>       return ret;
>   }
>   

 

