
Re: [Xen-devel] [PATCH 00/18] x86: more bool_t to bool cleanup



On Mon, Jul 03, 2017 at 02:10:03AM -0600, Jan Beulich wrote:
> >>> On 30.06.17 at 19:01, <wei.liu2@xxxxxxxxxx> wrote:
> > Seeing that bool_t keeps creeping back in new patches I think the only 
> > solution
> > is to get rid of bool_t once and for all, as soon as possible.
> 
> I don't fully agree, and considering the flood of patches you're
> submitting in this area I think I finally need to voice my opinion
> here (I had really meant to only do this in Budapest, on a BoF
> I mean to propose): I appreciate you having and taking the
> time to do this cleanup. Nevertheless I'm not overly happy with
> it. For one, it requires time (even if not very much) to review.
> And as you likely know, patch volume and review bandwidth
> don't really fit together very well. (I had managed to deal with
> all my old, non-RFC reviews during the last week, only to find
> I'm at almost 50 again this morning, and I haven't finished
> working through all the xen-devel traffic that came in over the
> weekend. This is simply frustrating.)

I can see two aspects in your email. I want to step back and deconstruct
it a bit.

1. The cost / benefit of doing cleanup at once vs gradually (this
   series only)

You seem to think that doing tree-wide cleanup in one go is a bad thing
because it will make backporting harder (per your point about backports below).

I disagree. Changes are just changes. You and Andrew have made a lot of
changes in recent releases. Those will also make backporting harder, yet
you clearly think they are worth the effort.

It all comes down to how one weighs cost against benefit. I prefer to do
things all at once so that we don't confuse future contributors, and so
that we don't have to point out the same issues over and over again. I
value consistency a lot.

The Linux kernel has done this sort of tree-wide cleanup, and the Xen
toolstack has done it too (either by me or by external contributors). I
think we can do the same thing for the hypervisor.
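
(For anyone skimming the thread who hasn't looked at the series: the
kind of change involved is mechanical. A minimal, illustrative sketch --
the struct and function names below are made up, not taken from an
actual patch:)

    /* bool_t is the historical Xen typedef (a plain char, IIRC); the
     * cleanup switches users to the C99 bool and to true/false
     * instead of 1/0.
     *
     * Before:
     *
     *     static bool_t foo_busy(const struct foo *f)
     *     {
     *         return f->refcnt ? 1 : 0;
     *     }
     *
     * After: */
    #include <stdbool.h>

    struct foo {
        unsigned int refcnt;
    };

    static bool foo_busy(const struct foo *f)
    {
        return f->refcnt > 0;
    }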

Obviously, if you don't agree with this approach and my value
proposition, there is no point in me pursuing this further. I will let
you and Andrew figure out what is best. I just need a clear answer from
you two.

2. The limited bandwidth of reviewers

You seem to think a higher patch volume is a bad thing because you don't
have enough time to review everything. I don't think there is an easy
solution, and I share your frustration. This is better discussed during
the summit.

> 
> It would perhaps be okay if no comments were needed at all,

This is just impossible: 1. I can't read your and Andrew's minds; 2. I
have my own opinions in certain areas - I will try to convince you or be
convinced, but that requires discussion.

> but I think in all of the series you sent to this effect there were
> further corrections necessary (leaving aside merely desirable
> ones). Especially bulk cleanup work like this should introduce as
> little overhead to others as possible. Hence the comments here
> also apply to the PV code splitting work you've apparently
> invested quite a bit of time into.
> 

I do try to be as careful as possible with the code -- I don't think I
have ever broken the hypervisor too badly, if at all, in my recent work.
By now I've mostly figured out what you and Andrew like patch-wise. If
you think of anything that can be done better, do let me know.

And frankly, I didn't intend or want to do the cleanup in the first
place -- I wanted to do something else: PV in PVH. But the code as-is is
simply not in the right shape to work with. As I went along, the cleanup
gradually grew into a useful project in its own right. To be clear, this
is not to blame anyone involved, past or present; the constraints then
were different from the ones we have now. I've foolishly signed myself
up for this big project because I think it is worth it. :-)

> Furthermore there's the issue of backports: If cleanups like
> these are being done over time (as code is being touched
> anyway), backports (security and non-security ones) generally
> go more smoothly.
> 
> But as said, I mean to bring the overall situation up in
> Budapest, so I'm not sure how much of this should / needs
> to be discussed up front via mail.
> 

I looked at the proposed session. It only covers the second aspect, so I
wrote this reply to at least get my first point across.

> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

