Re: [Xen-devel] [PATCH v3 22/24] tools/libxl: arm: Use an higher value for the GIC phandle
On Thu, 2015-01-29 at 13:48 +0000, Julien Grall wrote:
> On 29/01/15 12:28, Stefano Stabellini wrote:
> > On Thu, 29 Jan 2015, Julien Grall wrote:
> >> On 29/01/15 11:07, Stefano Stabellini wrote:
> >>> On Tue, 13 Jan 2015, Julien Grall wrote:
> >>>> The partial device tree may contain phandles. The Device Tree Compiler
> >>>> tends to allocate phandles starting from 1.
> >>>>
> >>>> Reserve the ID 65000 for the GIC phandle. I think we can safely assume
> >>>> that the partial device tree will never contain such an ID.
> >>>>
> >>>> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
> >>>> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> >>>> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> >>>>
> >>>
> >>> Shouldn't we at least check that the partial device tree doesn't contain
> >>> a conflicting phandle?
> >>
> >> I don't think so. This is unlikely to happen, and if it does happen the
> >> guest will crash with an obvious error.
> >
> > It is good that the error is obvious.
> >
> > But how expensive is it to check for?
>
> I would have to check the validity of the properties (name + value
> size). At least the properties "linux,phandle" and "phandle" should be
> checked.
>
> Though I could do it in copy_properties, I find that hackish.

Can't you just track the largest phandle ever seen during copy_properties
and then use N+1 for the GIC?

> > Think about the poor user that ends up in this situation: the fact that
> > it is unlikely only makes it harder for a user to figure out what to do
> > to fix it.
>
> The poor user would have to write his device tree by hand to hit this
> error ;).

Or use a new version of dtc which does things differently for some reason.

> So using the right phandle is not a huge drawback.
>
> Regards,
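The suggestion above (track the largest phandle seen in the partial device
tree and hand N+1 to the GIC) can be sketched with libfdt. This is a
minimal illustration under stated assumptions, not the actual libxl patch:
the helper name next_free_phandle is made up, and it walks the whole blob
with fdt_get_phandle() (which looks at both "phandle" and "linux,phandle")
rather than hooking into copy_properties.

/*
 * Minimal sketch (not the actual libxl code): compute the next free
 * phandle in a partial device tree blob with libfdt, so the GIC could
 * use max+1 instead of a fixed reserved value such as 65000.
 */
#include <stdint.h>
#include <libfdt.h>

static uint32_t next_free_phandle(const void *partial_fdt)
{
    uint32_t max = 0;
    int offset, depth = 0;

    /* Walk every node of the blob. */
    for (offset = fdt_next_node(partial_fdt, -1, &depth);
         offset >= 0;
         offset = fdt_next_node(partial_fdt, offset, &depth)) {
        /*
         * fdt_get_phandle() honours both "phandle" and
         * "linux,phandle", the two properties the thread says
         * would need to be checked.
         */
        uint32_t phandle = fdt_get_phandle(partial_fdt, offset);

        if (phandle != 0 && phandle != (uint32_t)-1 && phandle > max)
            max = phandle;
    }

    return max + 1; /* first value guaranteed not to clash */
}

With something along these lines, the GIC phandle could be chosen after
parsing the user's partial DTB rather than hard-coded, making a clash
impossible rather than merely unlikely.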