
Re: [Xen-devel] [PATCH v1 02/12] xen/hvmlite: Factor out common kernel init code



On Fri, Jan 22, 2016 at 06:12:47PM -0500, Boris Ostrovsky wrote:
> On 01/22/2016 06:01 PM, Luis R. Rodriguez wrote:
> >On Fri, Jan 22, 2016 at 04:35:48PM -0500, Boris Ostrovsky wrote:
> >>HVMlite guests (to be introduced in subsequent patches) share most
> >>of the kernel initialization code with PV(H).
> >>
> >>Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> >>---
> >>  arch/x86/xen/enlighten.c | 225 ++++++++++++++++++++++++----------------------
> >>  1 files changed, 119 insertions(+), 106 deletions(-)
> >>
> >>diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> >>index d09e4c9..2cf446a 100644
> >>--- a/arch/x86/xen/enlighten.c
> >>+++ b/arch/x86/xen/enlighten.c
> >Whoa, I'm lost; it's hard for me to tell what exactly stayed and what
> >got pulled into a helper, etc. Is there a possibility to split this
> >patch in two somehow to make the actual functional changes easier to
> >read? There are too many changes here and I just can't tell easily
> >what's going on.
> 
> 
> The only real change this patch introduces is that it reorders some
> of the operations that used to be in xen_start_kernel(). This is
> done so that in the next patch, when we add HVMlite, we can easily
> put the ones specific to PV(H) inside 'if (!xen_hvm_domain())'. I
> probably should have said so in the commit message.

Ah, I see thanks.
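
So if I follow, the end state after the HVMlite patch would look
roughly like the sketch below. To be clear, this is hypothetical and
not the actual diff: xen_hvm_domain() is the existing predicate, but
the two helpers are made-up names just to show the grouping:

    asmlinkage __visible void __init xen_start_kernel(void)
    {
            /* Init steps shared by PV(H) and HVMlite guests. */
            xen_common_init();              /* hypothetical helper */

            /*
             * Operations reordered so everything PV(H)-specific is
             * contiguous and can sit behind a single guard.
             */
            if (!xen_hvm_domain())
                    xen_pv_specific_init(); /* hypothetical helper */

            /* Remaining shared init continues here. */
    }

If the series ends up structured that way, a pure code-move patch
followed by the patch adding the guard would be much easier to eyeball.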

> It is indeed difficult to review but I don't see how I can split
> this. Even if I just moved it (without reordering) it would still be
> hard to read.

A pure code-move patch with no functional changes, as you did in some
other patches, might help here if that's possible. But sure, it's fine
if you can just state in the commit message that this part is
non-functional, or if it really can't be split up.

  Luis
