
Re: [Xen-devel] [PATCH 00/17] blktap2 related bugfix patches

On Wed, Oct 15, 2014 at 2:05 AM, Wen Congyang <wency@xxxxxxxxxxxxxx> wrote:
> On 10/14/2014 11:48 PM, Ian Jackson wrote:
>> Wen Congyang writes ("[PATCH 00/17] blktap2 related bugfix patches"):
>>> These bugs are found when we implement COLO, or rebase
>>> COLO to upstream xen. They are independent patches, so
>>> post them in separate series.
>> blktap2 is unmaintained AFAICT.
>> In the last year there has been only one commit which shows evidence
>> of someone caring even slightly about tools/blktap2/.  The last
>> substantial attention was in March 2013.
>> (I'm disregarding commits which touch tools/blktap2/ to fix up compile
>> problems with new compilers, sort out build system and file
>> rearrangements, etc.)
>> The file you are touching in your 01/17 was last edited (by anyone, at
>> all) in January 2010.
>> Under the circumstances, we should probably take all these changes
>> without looking for anyone to ack them.
>> Perhaps you would like to become the maintainers of blktap2 ? :-)
> Hmm, I don't have any knowledge about disk formats, but blktap2 has
> such code (for example: block-vhd.c, block-qcow.c...). I think I can
> maintain the rest of the code.

Congyang, were you aware that XenServer has a fork of blktap that is
actually still under active development and maintainership outside of
the main Xen tree?
Both CentOS and Fedora are actually using snapshots of the "blktap2"
branch in that tree for their Xen RPMs.  (I'm sure CentOS is, I
believe Fedora is.)  It's not unlikely that the bugs you're fixing
here have already been fixed in the XenServer fork.

I think we could consider taking these patches for the 4.5 release, as
it's obviously too late to do anything more drastic at this point.
But I think long-term we need to sort out a better solution.  I'll
write up an e-mail here to talk about a longer-term plan shortly...

