[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v8 7/9] xen/arm: create shared memory nodes in guest device tree


  • To: Luca Fancellu <Luca.Fancellu@xxxxxxx>, Julien Grall <julien@xxxxxxx>
  • From: Michal Orzel <michal.orzel@xxxxxxx>
  • Date: Fri, 9 Sep 2022 15:35:40 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Penny Zheng <Penny.Zheng@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>, "Volodymyr Babchuk" <Volodymyr_Babchuk@xxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Delivery-date: Fri, 09 Sep 2022 13:35:56 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Luca,

On 09/09/2022 14:35, Luca Fancellu wrote:
> 
>> On 9 Sep 2022, at 10:40, Bertrand Marquis <Bertrand.Marquis@xxxxxxx> wrote:
>>
>> Hi Julien,
>>
>>> On 9 Sep 2022, at 10:27, Julien Grall <julien@xxxxxxx> wrote:
>>>
>>> Hi,
>>>
>>> On 09/09/2022 08:45, Bertrand Marquis wrote:
>>>>>
>>>>> It should be:
>>>>>
>>>>> /*
>>>>> * TODO:
>>>>> *
>>>>>
>>>>> I think this is good to go. The two minor style issues could be fixed on
>>>>> commit. I haven't committed to give Julien & Bertrand another chance to
>>>>> have a look.
>>>> I think that it is ok to fix those on commit and I am ok with the rest so:
>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
>>>
>>> This series doesn't build with !CONFIG_STATIC_SHM:
>>>
>>> UPD     include/xen/compile.h
>>> Xen 4.17-unstable
>>> make[1]: Nothing to be done for `include'.
>>> make[1]: `arch/arm/include/asm/asm-offsets.h' is up to date.
>>> CC      common/version.o
>>> LD      common/built_in.o
>>> CC      arch/arm/domain_build.o
>>> arch/arm/domain_build.c: In function ‘make_shm_memory_node’:
>>> arch/arm/domain_build.c:1445:1: error: no return statement in function 
>>> returning non-void [-Werror=return-type]
>>> }
>>> ^
>>> cc1: all warnings being treated as errors
>>> make[2]: *** [arch/arm/domain_build.o] Error 1
>>> make[1]: *** [arch/arm] Error 2
>>> make: *** [xen] Error 2
>>>
>>> This is because...
>>>
>>>>>> +         * - xen,offset: (borrower VMs only)
>>>>>> +         *   64 bit integer offset within the owner virtual machine's 
>>>>>> shared
>>>>>> +         *   memory region used for the mapping in the borrower VM
>>>>>> +         */
>>>>>> +        res = fdt_property_u64(fdt, "xen,offset", 0);
>>>>>> +        if ( res )
>>>>>> +            return res;
>>>>>> +
>>>>>> +        res = fdt_end_node(fdt);
>>>>>> +        if ( res )
>>>>>> +            return res;
>>>>>> +    }
>>>>>> +
>>>>>> +    return res;
>>>>>> +}
>>>>>> +#else
>>>>>> +static int __init make_shm_memory_node(const struct domain *d,
>>>>>> +                                       void *fdt,
>>>>>> +                                       int addrcells, int sizecells,
>>>>>> +                                       const struct meminfo *mem)
>>>>>> +{
>>>>>> +    ASSERT_UNREACHABLE();
>>>
>>> ... there is a missing 'return -ENOTSUPP' here. While this is simple enough 
>>> to fix, this indicates to me that this version was not tested with 
>>> !CONFIG_STATIC_SHM.
>>>
>>> As this is the default option, I will not commit until I get confirmation 
>>> that some smoke was done.
>>
>> This is a case our internal CI should have gone through.
>> Let me check and come back to you.
>>
> 
> Hi Julien,
> 
> Thanks for catching it. I can confirm the problem: we were building with 
> CONFIG_DEBUG enabled. I don't know why GCC doesn't complain when 
> __builtin_unreachable() is used in a non-void function without any return 
> statement; it doesn't even throw a warning. Could this be considered a GCC 
> bug?

This is not a bug. The documentation explains its purpose, even for functions 
returning a value:
https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html

To sum up, __builtin_unreachable() tells the compiler that control never 
reaches that point, so GCC neither warns about the missing return statement 
nor generates code to return.

> 
> Building Xen without CONFIG_DEBUG instead reproduces the error you found.
> 
> The following change fixes the problem; do you agree with it?
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 8c77c764bcf2..c5d66f18bd49 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1439,6 +1439,8 @@ static int __init make_shm_memory_node(const struct 
> domain *d,
>                                         const struct meminfo *mem)
>  {
>      ASSERT_UNREACHABLE();
> +
> +    return -EOPNOTSUPP;
>  }
>  #endif
> 
> Is it something that can be addressed on commit?
> 
> Cheers,
> Luca
> 
> 
>> Regards
>> Bertrand
>>
>>>
>>> Cheers,
>>>
>>> --
>>> Julien Grall
> 

~Michal
