
Re: [Xen-devel] [patch] architecture-specific ELF header checking



On Tue, 2006-06-13 at 10:15 +0100, Keir Fraser wrote:
> On 12 Jun 2006, at 21:12, Hollis Blanchard wrote:
> 
> > This patch has only been compile-tested on x86, but it should be pretty
> > straightforward. It could break IA64 since it adds checks they weren't
> > doing before, but I would expect their ELF binaries are labeled
> > properly.
> 
> I am not keen on adding loads of -D CFLAGS options for very 
> localised/specific macros.

I agree; a per-arch header file would be ideal.

> They could go in a per-arch header file, but 
> I think in this case just having ifdef's in xc_elf.h is clean enough.

I would like to minimize the amount of existing code a new architecture
has to modify. I think this is a worthy goal because it avoids patch
conflicts and reduces the chance of accidentally breaking other
architectures. (I guess this is true of all modular code, really; it
would be nice if one could add a new scheduler just by adding a new
source file, without needing to modify other code.)

In general we can use the build system to give us indirection, instead
of using conditionals in the code. For example, consider
xen/include/public/arch-*.h: just like the "asm" symlink, the build
system could create a symlink to the appropriate architecture's header
for us.
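To make that concrete, here is a minimal sketch of the C side of that
indirection (the file names, the XC_ELF_* macro, and the ln rule are
hypothetical, not taken from the current tree):

    /* Common ELF-parsing code always includes one fixed name; the build
     * system points that name at the right per-arch file, e.g.
     *     ln -sf xc_elf_$(XEN_TARGET_ARCH).h xc_elf_arch.h
     * so a new port only adds its own header and never edits common code.
     */
    #include "xc_elf_arch.h"

    static int elf_machine_ok(unsigned int e_machine)
    {
        return e_machine == XC_ELF_MACHINE;  /* defined by the arch header */
    }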

If we had a per-arch header file, I could put the PPC ELF definitions
there. If we keep everything in e.g. xc_elf.h, then I need to modify
that shared file instead.
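For example (purely illustrative names, using the standard <elf.h>
constants rather than whatever the final header ends up defining), a
PowerPC version of such a header might contain:

    /* xc_elf_powerpc.h -- hypothetical per-arch ELF expectations */
    #include <elf.h>

    #define XC_ELF_MACHINE   EM_PPC       /* e_machine for 32-bit PowerPC */
    #define XC_ELF_CLASS     ELFCLASS32   /* expected e_ident[EI_CLASS] */
    #define XC_ELF_DATA      ELFDATA2MSB  /* expected e_ident[EI_DATA] */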

> Nothing outside the ELF-parsing code should be looking at these values 
> so keeping them private is sensible. Apart from that, the general idea 
> is fine so I'll modify and apply.

Thanks; I will add ifdefs to xc_elf.h when I see your commit.
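I imagine something along these lines (illustrative only -- I'll match
whatever names and values your commit actually uses):

    /* In xc_elf.h: per-arch expected values selected by compiler ifdefs. */
    #if defined(__i386__)
    #define XC_ELF_MACHINE   EM_386
    #define XC_ELF_CLASS     ELFCLASS32
    #elif defined(__x86_64__)
    #define XC_ELF_MACHINE   EM_X86_64
    #define XC_ELF_CLASS     ELFCLASS64
    #elif defined(__powerpc__)
    #define XC_ELF_MACHINE   EM_PPC
    #define XC_ELF_CLASS     ELFCLASS32
    #endif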

-- 
Hollis Blanchard
IBM Linux Technology Center


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

