
Re: [Xen-devel] [PATCH 6/8] arm: vgic: Split vgic_domain_init() functionality into two functions



Hi Julien,

On 06/21/2016 09:48 AM, Julien Grall wrote:


On 21/06/16 15:36, Shanker Donthineni wrote:


On 06/21/2016 05:49 AM, Julien Grall wrote:
Hello Shanker,

On 19/06/16 00:45, Shanker Donthineni wrote:
Split the code that installs MMIO handlers for the GICD and re-distributor
regions into a new function. The intention of this separation is to defer
the steps that register the vgic_v2/v3 MMIO handlers.

Looking at this patch and the follow-up ones, I don't think this is
the right way to go. You defer the registration of the IO handlers
just because you are unable to find the size of the handlers array.

Is there any better approach?

Possibly using a different data structure.

I am wondering whether an array is the best data structure for the
handlers here. Alternatively, it would be possible to find the maximum
number of handlers beforehand.

The purpose of this change is to keep the size of 'struct domain' under
PAGE_SIZE. A second approach I can think of is to split vgic_init() into
two stages: one for vgic registration and the second for vgic_init()
proper. This would also require a few lines of code changes to
vgic_v2/v3_init() and vgic_init().

I am fine as long as vgic_register_ does no more than count the number of IO handlers. You could re-use vgic_init_v{2,3} for this purpose.

The way we do the vgic_init() initialization has to be cleaned up, rearranging a few lines of code to retrieve the number of MMIO handlers required for dom0/domU domains.

Regards,


--
Shanker Donthineni
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
Foundation Collaborative Project


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

