
RE: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until the first access


  • To: Julien Grall <julien@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Henry Wang <Henry.Wang@xxxxxxx>
  • Date: Fri, 27 Jan 2023 11:30:50 +0000
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Fri, 27 Jan 2023 11:31:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xxxxxxx>
> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
> the first access
> 
> Hi,
> 
> >>> @@ -153,6 +153,8 @@ struct vgic_dist {
> >>>       /* Base address for guest GIC */
> >>>       paddr_t dbase; /* Distributor base address */
> >>>       paddr_t cbase; /* CPU interface base address */
> >>> +    paddr_t csize; /* CPU interface size */
> >>> +    paddr_t vbase; /* virtual CPU interface base address */
> >> Could you swap them so that base address variables are grouped?
> >
> > Sure, my original thought was grouping the CPU interface related fields but
> > since you prefer grouping the base address, I will follow your suggestion.
> 
> I would actually prefer your approach because it is easier to associate
> the size with the base.
> 
> An alternative would be to use a structure to combine the base/size, so
> that the association is even clearer.
> 
> I don't have a strong opinion on either of the two approaches I suggested.

Maybe we can do something like this:
```
paddr_t dbase; /* Distributor base address */
paddr_t vbase; /* virtual CPU interface base address */
paddr_t cbase; /* CPU interface base address */
paddr_t csize; /* CPU interface size */
```

This way we can ensure both that the "base address variables are grouped" and
that the "CPU interface variables are grouped".

If you don't like this, I would prefer to keep what I am currently doing, as
personally I think an extra structure would be slight overkill :)
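
For reference, a minimal sketch of what the structure-based alternative could
look like (the names `vgic_region`/`vgic_dist_regions` and the `paddr_t`
typedef are illustrative assumptions for a self-contained example, not the
actual Xen code):
```
#include <stdint.h>

/*
 * Illustrative sketch only: combine each base address with its size in a
 * small structure, so the association between base and size is explicit.
 * Xen defines paddr_t itself; the typedef here is just to keep the example
 * self-contained.
 */
typedef uint64_t paddr_t;

struct vgic_region {
    paddr_t base;   /* Region base address */
    paddr_t size;   /* Region size */
};

struct vgic_dist_regions {
    struct vgic_region dist;  /* Distributor */
    struct vgic_region cpu;   /* CPU interface (cbase/csize today) */
    struct vgic_region vcpu;  /* Virtual CPU interface (vbase) */
};
```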

Thanks.

Kind regards,
Henry

> 
> Cheers,
> 
> --
> Julien Grall

 

