
Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the first access


  • To: Henry Wang <Henry.Wang@xxxxxxx>, Julien Grall <julien@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Michal Orzel <michal.orzel@xxxxxxx>
  • Date: Fri, 27 Jan 2023 12:52:52 +0100
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Fri, 27 Jan 2023 11:53:20 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Henry,

On 27/01/2023 12:39, Henry Wang wrote:
> 
> 
> Hi Julien,
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xxxxxxx>
>> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
>> the first access
>>>>>>> @@ -153,6 +153,8 @@ struct vgic_dist {
>>>>>>>        /* Base address for guest GIC */
>>>>>>>        paddr_t dbase; /* Distributor base address */
>>>>>>>        paddr_t cbase; /* CPU interface base address */
>>>>>>> +    paddr_t csize; /* CPU interface size */
>>>>>>> +    paddr_t vbase; /* virtual CPU interface base address */
>>>>>> Could you swap them so that base address variables are grouped?
>>>>>
>>> Sure, my original thought was grouping the CPU-interface-related fields,
>>> but since you prefer grouping the base addresses, I will follow your
>>> suggestion.
>>>>
>>>> I would actually prefer your approach because it is easier to associate
>>>> the size with the base.
>>>>
>>>> An alternative would be to use a structure combining the base/size, so
>>>> that the association is even clearer.
>>>>
>>>> I don't have a strong opinion on either of the two approaches I suggested.
>>>
>>> Maybe we can do something like this:
>>> ```
>>> paddr_t dbase; /* Distributor base address */
>>> paddr_t vbase; /* virtual CPU interface base address */
>>> paddr_t cbase; /* CPU interface base address */
>>> paddr_t csize; /* CPU interface size */
>>> ```
>>>
>>> So we can ensure both "base address variables are grouped" and
>>> "CPU interface variables are grouped".
>>>
>>> If you don't like this, I would prefer to keep it the way I am currently
>>> doing it, as personally I think an extra structure would be slightly
>>> overkill :)
>>
>> This is really a matter of taste here.
> 
> Indeed,
> 
>> My preference is your initial
>> approach, because I find it strange to have the virtual CPU interface
>> information mixed in with the physical one.
> 
> then I will keep it as it is if there is no strong objection from Michal.
There is none. It was just a suggestion.

~Michal



 

