
[Xen-devel] RE: c/s 18470


  • To: "Jan Beulich" <jbeulich@xxxxxxxxxx>
  • From: "Liu, Jinsong" <jinsong.liu@xxxxxxxxx>
  • Date: Thu, 18 Sep 2008 14:37:36 +0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 17 Sep 2008 23:38:10 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AckYmPt5dt1sDLo5R5u2wrOZYnM1cgAufSxg
  • Thread-topic: c/s 18470

Jan,

For the 1st issue, I noticed your patch (c/s 18435) and agree it is
better not to limit the domain map by NR_CPUS. However, your change
presupposes that the max domain number is already known, so that the
domain map can be allocated to that size. That worked fine with our old
cpufreq version, where the hypervisor initialized the cpufreq domains
AFTER all the px info had been received from dom0, and the max domain
number was therefore known.
Recently, however, we reworked the hypervisor cpufreq logic
substantially and changed the init process to per-CPU (the old version
was per-domain). With per-CPU init the max domain number is not known
up front and xmalloc_array() cannot be used, so we temporarily fell
back to a domain array bounded by NR_CPUS and marked it as a TODO, to
be reworked later (e.g. with a linked list).
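
To make that TODO concrete, here is a rough sketch of the linked-list
approach we have in mind. This is illustrative only, not committed
code: it assumes Xen's xmalloc() and list helpers, and the names
cpufreq_dom/get_cpufreq_dom() are placeholders.

/* Illustrative sketch only -- not committed code. */
struct cpufreq_dom {
    unsigned int      dom;    /* _PSD domain id */
    cpumask_t         map;    /* CPUs known to share this domain */
    struct list_head  entry;
};

static LIST_HEAD(cpufreq_dom_list);

/*
 * Look up a domain by id, allocating it on first sight.  The per-CPU
 * init path then needs neither an NR_CPUS-bounded array nor advance
 * knowledge of the max domain number.
 */
static struct cpufreq_dom *get_cpufreq_dom(unsigned int dom)
{
    struct cpufreq_dom *d;

    list_for_each_entry(d, &cpufreq_dom_list, entry)
        if ( d->dom == dom )
            return d;

    d = xmalloc(struct cpufreq_dom);
    if ( d == NULL )
        return NULL;

    d->dom = dom;
    cpus_clear(d->map);
    list_add(&d->entry, &cpufreq_dom_list);

    return d;
}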

For the 2nd issue, our idea is to use flags to separate the px init
process from runtime dynamic px handling (such as _PPC changes).
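
As a sketch of what we are considering (the flag names and the helpers
handle_ppc_change()/cpufreq_cpu_init() are placeholders, not a final
ABI): dom0 may upload the _PCT/_PSS/_PPC/_PSD pieces in separate
hypercalls and in any order; cpufreq init for a CPU fires exactly once,
when the last missing piece arrives, while a later _PPC-only upload is
treated as a runtime limit change.

/* Illustrative flag scheme -- placeholder names, not a final ABI. */
#define XEN_PX_PCT   (1U << 0)    /* _PCT registers uploaded */
#define XEN_PX_PSS   (1U << 1)    /* _PSS state table uploaded */
#define XEN_PX_PPC   (1U << 2)    /* _PPC limit uploaded */
#define XEN_PX_PSD   (1U << 3)    /* _PSD domain info uploaded */
#define XEN_PX_INIT  (XEN_PX_PCT | XEN_PX_PSS | XEN_PX_PPC | XEN_PX_PSD)

int handle_ppc_change(unsigned int cpu);    /* placeholder */
int cpufreq_cpu_init(unsigned int cpu);     /* placeholder */

static uint32_t px_seen[NR_CPUS];    /* pieces received so far, per CPU */

int set_px_info(unsigned int cpu, uint32_t flags /*, payload ... */)
{
    int was_complete = (px_seen[cpu] == XEN_PX_INIT);

    px_seen[cpu] |= flags;

    /* After init, a _PPC upload is a runtime limit change. */
    if ( was_complete && (flags & XEN_PX_PPC) )
        return handle_ppc_change(cpu);

    /* Init exactly once, when the last missing piece arrives. */
    if ( !was_complete && (px_seen[cpu] == XEN_PX_INIT) )
        return cpufreq_cpu_init(cpu);

    return 0;
}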

Thanks,
Jinsong

-----Original Message-----
From: Jan Beulich [mailto:jbeulich@xxxxxxxxxx] 
Sent: Wednesday, September 17, 2008 3:43 PM
To: Liu, Jinsong
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: c/s 18470

This changeset reverts two previous corrections, for reasons that escape
me.

First, the domain map is again being confined to NR_CPUS, which I had
submitted a patch to fix recently (yes, I realize the code has a TODO in
there, but those really get forgotten about far too often).

Second, the platform hypercall was reverted to requiring all
information to be passed to Xen in one chunk, whereas I recall that even
Intel folks (not sure if it was you) agreed that allowing incremental
information collection was more appropriate.

Could you clarify why these changes were necessary and if/when you
plan to address the resulting issues?

Thanks, Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
