
Re: [Xen-devel] [PATCH 00/04] Kexec / Kdump: Release 20061122 (xen-unstable-12502)



On Wed, 2006-11-29 at 20:13 +0900, Magnus Damm wrote: 
> On 11/29/06, Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx> wrote:
> > On Wed, 2006-11-29 at 17:17 +0900, Magnus Damm wrote:
> > >
> > > The kexec tool creates (at load time) one PT_NOTE program header per
> > > note exported through /proc/iomem. The number of PT_NOTE program headers
> > > is the same as the NR_CPUS constant in the hypervisor.
> >
> > The guest kernel creates entries in /proc/iomem by calling
> > kexec_get_cpu(cpu) until it returns EINVAL. This currently happens when
> > cpu>NR_CPUS.
> >
> > I think this function should return EINVAL for cpu>num_present_cpus()
> > instead. Xen doesn't currently do PCPU hotplug and this wouldn't be the
> > only thing that would need fixing if it ever does (percpu data would be
> > another one I think ;-)).
> >
> > This would cause the tools to create notes only for CPUs which really
> > exist. That would make the loop in machine_crash_kexec() unnecessary.
> 
> I feel that using bss instead of per-cpu data is more robust and will
> make future cpu hotplug support a breeze to implement - at least in
> the case of kexec. Using bss also makes the loop in
> machine_crash_kexec() unnecessary.
> 
> Using num_present_cpus() will of course work as well, but I'd like to
> avoid adding code that will likely need to be rewritten in the near
> future. But you know the future better than I do, so what do you
> think? =)
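
To make concrete what I suggested above, the kernel-side change is
roughly the following. A sketch only, untested, with the body of the
lookup elided; the point is simply that the bound moves from NR_CPUS
to the set of CPUs that actually exist:

	#include <linux/cpumask.h>
	#include <linux/errno.h>

	static int kexec_get_cpu(int cpu)
	{
		/*
		 * Refuse CPUs beyond those actually present rather than
		 * beyond NR_CPUS.  The guest stops creating /proc/iomem
		 * entries at the first -EINVAL, so the tools end up with
		 * one PT_NOTE program header per real CPU and the loop
		 * in machine_crash_kexec() becomes unnecessary.
		 */
		if (cpu < 0 || cpu >= num_present_cpus())
			return -EINVAL;

		/* ... look up the crash note range for this cpu as before ... */
		return 0;
	}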

I don't think anyone is planning PCPU hotplug any time soon (I've not
even heard rumours about the distant future ;-)). I do think that using
the existing infrastructure is the right way to go, though, rather than
open-coding a different per-cpu mechanism to solve a problem which
doesn't currently exist. If someone implements PCPU hotplug they will no
doubt need to update the per-cpu infrastructure, and if kdump is using it
then it can be taken into consideration at that time.
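
To illustrate the contrast, the two shapes under discussion look
roughly like this. Again a sketch only: crash_note_t and
CRASH_NOTE_BYTES are placeholder names for illustration, not the real
kexec definitions:

	#include <linux/percpu.h>
	#include <linux/types.h>

	#define CRASH_NOTE_BYTES 1024		/* placeholder size */
	typedef u32 crash_note_t[CRASH_NOTE_BYTES / 4];

	/* Open-coded: a bss array sized for the worst case, so a slot
	 * exists for every CPU that could ever be hotplugged. */
	static crash_note_t crash_notes_bss[NR_CPUS];

	/* Existing infrastructure: one buffer per CPU via the per-cpu
	 * machinery, accessed as per_cpu(crash_notes, cpu).  Any future
	 * PCPU hotplug work has to teach this machinery about new CPUs
	 * anyway, at which point kdump comes along for free. */
	static DEFINE_PER_CPU(crash_note_t, crash_notes);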

Cheers,
Ian.





 

