Re: [Xen-devel] [PATCH 02/12] early PV on HVM
On Tue, Jun 08, 2010 at 05:25:52PM +0100, Stefano Stabellini wrote:
> On Tue, 8 Jun 2010, Konrad Rzeszutek Wilk wrote:
> > On Tue, Jun 08, 2010 at 04:55:33PM +0100, Stefano Stabellini wrote:
> > > On Tue, 8 Jun 2010, Konrad Rzeszutek Wilk wrote:
> > > > > > > + HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
> > > > > > > +
> > > > > > > + /* Don't do the full vcpu_info placement stuff until we have a
> > > > > > > + possible map and a non-dummy shared_info. */
> > > > > >
> > > > > > Might want to mention where the full vcpu placement is done.
> > > > >
> > > > > The comment is not accurate, we actually don't do any vcpu_info
> > > > > placement on hvm because it is not very useful there.
> > > > > Better just to remove the comment (I have done so in my tree).
> > > > >
> > > > > > > + per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
> > > > > >
> > > > > > So.. what is the purpose of the per_cpu(xen_vcpu, 0) then?
> > >
> > > the vcpu info placement memory area is stored in per_cpu(xen_vcpu_info,
> > > cpu);
> > > per_cpu(xen_vcpu, cpu) is just a pointer to that area if it is
> > > available, otherwise it points to the vcpu_info struct in the shared
> > > info page.
> >
> > I was just wondering why are we doing this when you say:
> > " don't do any vcpu_info placement on hvm because it is not very useful
> > there."
> >
> > So if it is not useful, why do it?
>
> I think Jeremy replied to your question better than me: we still need
> the vcpu_info stuff for the timer and event channels, but we don't need
> it to be at a specific address in kernel memory.

Ok, can you add that comment for the usage of the per_cpu(xen_vcpu,0)
and mention that this is bootstrap code - hence only starting at CPU 0.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
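For readers following along, here is a minimal sketch of the kind of bootstrap path and comment being discussed above. The function name, headers, and overall layout are assumptions pieced together from the quoted patch fragments, not the final upstream code; only the two quoted assignments come from the patch itself.

/*
 * Sketch of the early PV-on-HVM shared_info setup (names assumed,
 * not the final upstream code).
 */
#include <linux/init.h>
#include <linux/percpu.h>
#include <asm/page.h>               /* PAGE_SIZE, PAGE_SHIFT, __pa() */
#include <asm/setup.h>              /* extend_brk() */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_memory_op() */
#include <asm/xen/hypervisor.h>     /* HYPERVISOR_shared_info */
#include <xen/interface/xen.h>      /* struct shared_info, DOMID_SELF */
#include <xen/interface/memory.h>   /* XENMEM_add_to_physmap */

/* Defined elsewhere in the Xen enlighten code. */
DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);

static void __init xen_hvm_init_shared_info(void)
{
	struct xen_add_to_physmap xatp;
	struct shared_info *shared_info_page;

	/* Reserve a page and ask Xen to back it with the shared_info frame. */
	shared_info_page = extend_brk(PAGE_SIZE, PAGE_SIZE);
	xatp.domid = DOMID_SELF;
	xatp.idx = 0;
	xatp.space = XENMAPSPACE_shared_info;
	xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
	if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
		BUG();

	HYPERVISOR_shared_info = shared_info_page;

	/*
	 * xen_vcpu is still needed on HVM for the timer and event channel
	 * code, but no vcpu_info placement is done: the pointer simply
	 * refers to the vcpu_info slot inside the shared_info page.  This
	 * is bootstrap code, so only CPU 0 is online at this point, hence
	 * the explicit index 0.
	 */
	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
}

The block comment above the per_cpu(xen_vcpu, 0) assignment is one way to capture the two points requested in the mail: why the pointer is still set up even though no vcpu_info placement is done on HVM, and why only CPU 0 is handled here.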