Re: [Xen-devel] [PATCH v3 07/24] xen/arm: Introduce xen,passthrough property
On 23/02/15 15:15, Ian Campbell wrote:
> On Fri, 2015-02-20 at 17:03 +0000, Julien Grall wrote:
>> On 20/02/15 15:42, Ian Campbell wrote:
>>> On Tue, 2015-01-13 at 14:25 +0000, Julien Grall wrote:
>>>> @@ -919,8 +943,14 @@ static int make_timer_node(const struct domain *d, void *fdt,
>>>> return res;
>>>> }
>>>>
>>>> -/* Map the device in the domain */
>>>> -static int map_device(struct domain *d, struct dt_device_node *dev)
>>>> +/* For a given device node:
>>>
>>> Strictly speaking should be:
>>> /*
>>> * For a given...
>>>
>>> (I don't care all that much, but since I'm commenting elsewhere)
>>
>> Hmmm right. I will change it.
>
> FWIW I noticed this pattern a lot in this series.
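For reference, the multi-line comment style being asked for follows the usual Xen/Linux convention; a minimal sketch (illustrative only, not lines taken from the patch):

/* A single-line comment stays on a single line like this. */

/*
 * A multi-line comment opens the comment marker on a line of its own,
 * prefixes each following line with a lone asterisk, and closes the
 * marker on a line of its own as well.
 */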
>
>>>> @@ -947,7 +979,7 @@ static int map_device(struct domain *d, struct dt_device_node *dev)
>>>> }
>>>> }
>>>>
>>>> - /* Map IRQs */
>>>> + /* Give permission and map IRQs */
>>>
>>> Another Nit: "  " -> " ".
>>>
>>>> + if ( need_mapping )
>>>> + {
>>>> + /*
>>>> + * Checking the return of vgic_reserve_virq is not
>>>> + * necessary. It should not fail except when we try to map
>>>> + * twice the IRQ. This can happen if the IRQ is shared
>>>
>>> "when we try to map the IRQ twice"
>>>
>>> Other than those nits the code itself looks good, will ack once we've
>>> agreed on the bindings wording.
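To make the reasoning in the quoted comment concrete, here is a small self-contained sketch of the argument (the helper, table size and IRQ number below are illustrative stand-ins, not the Xen API): a reservation can only fail when the same virq has already been reserved, i.e. the shared-IRQ case, so the call site may safely ignore the return value.

#include <stdbool.h>
#include <stdio.h>

#define NR_VIRQS 256
static bool reserved[NR_VIRQS];

/* Mirrors the contract described in the quoted comment: reservation
 * only fails when the virq has already been reserved, e.g. a shared
 * IRQ encountered twice while walking the device tree. */
static bool reserve_virq(unsigned int virq)
{
    if ( reserved[virq] )
        return false;
    reserved[virq] = true;
    return true;
}

int main(void)
{
    unsigned int shared_irq = 42;

    /* First device using the IRQ: the reservation succeeds. */
    reserve_virq(shared_irq);

    /* A second device sharing the same IRQ: the reservation "fails",
     * which is expected and harmless, hence the return value can be
     * ignored at the call site. */
    if ( !reserve_virq(shared_irq) )
        printf("virq %u already reserved (shared IRQ)\n", shared_irq);

    return 0;
}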
>>
>> BTW, should we upstream the bindings to device tree git?
>
> Arguably we should upstream all of our bindings (e.g.
> docs/misc/arm/device-tree/*, admittedly a single file right now) but
> doing just one/some seems worse than keeping them in tree.
>
> IOW it should be all or nothing, and I have no problem with you deciding
> that nothing is easier for you here...
For now, I will stick with nothing :).
Regards,
--
Julien Grall