
Re: [Xen-devel] [PATCH v4] OSSTEST: introduce a raisin build test



On Mon, 2015-05-18 at 11:54 +0100, George Dunlap wrote:
> On 05/18/2015 11:33 AM, Ian Campbell wrote:
> > On Mon, 2015-05-18 at 11:08 +0100, George Dunlap wrote:
> >> On Wed, May 13, 2015 at 12:48 PM, Stefano Stabellini
> >> <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> >>> On Wed, 13 May 2015, Ian Campbell wrote:
> >>>> On Tue, 2015-05-12 at 12:46 +0100, Stefano Stabellini wrote:
> >>>>>> Would a separate clone of the same raisin version with some sort of
> >>>>>> "dist" directory transported over be sufficient and supportable? Or are
> >>>>>> raisin's outputs not in one place and easily transportable?
> >>>>>>
> >>>>>> i.e. today build-$ARCH-libvirt picks up the dist.tar.gz files from the
> >>>>>> corresponding build-$ARCH, unpacks them and asks libvirt to build
> >>>>>> against that tree.
> >>>>>
> >>>>> Moving the dist directory over should work, although I have never tested
> >>>>> this configuration.
> >>>>
> >>>> Would you be willing to support this as a requirement going forward?
> >>>
> >>> Yeah, I think it is OK
> >>>
> >>>> I assume that it is not also necessary to reclone all the trees for the
> >>>> preexisting components, just the new ones?
> >>>
> >>> Only if the user asks for a component to be built is the
> >>> corresponding tree cloned.
> >>
> >> Won't the problem here be disentangling the stuff installed in dist/
> >> (or whatever it's called) from the things we want to rebuild vs the
> >> things we want to change?
> > 
> > From the osstest PoV at least the proposal here only involves building
> > additional things, not rebuilding anything which came from a previous
> > build.
> > 
> > e.g. given a build of xen.git now do a build of libvirt.git using those
> > previously built Xen libs.
> 
> Sure; but what I'm saying is if you do xen-full-build, you'll have a
> dist/ which contains:
>  * qemut
>  * qemuu
>  * seabios
>  * xen
>  * libvirt
>  * (&c)
> 
> But when you re-build just libvirt, what you want is a dist/ that contains:
>  * qemut
>  * qemuu
>  * seabios
>  * xen
> 
> Specifically, you want it *not* to contain anything from the previous
> libvirt builds.  That's what I'm talking about.

That's not what I was talking about ;-).

WRT the osstest usage the first build wouldn't be a full build; in
particular it would exclude libvirt.

I appreciate there may be reasons to care about the scenario you
presented, but right now I'm trying to figure out how we can best
integrate raisin into osstest, and whether it can somehow be made
suitable for producing actual build artefacts for osstest to use in
further testing, as opposed to existing merely as a test case for the
sole purpose of checking that raisin works.

If in solving that we also address your scenario, then great.

> > Per component dist dirs is similarly surely possible but perhaps not
> > something raisin wants.
> 
> You could in theory have per-component "output" directories, and then a
> global "input" directory which was blown away at the beginning of every
> raisin build and re-constructed as needed.  That would be the sort of
> equivalent of the mock-style RPM build (where the chroot represents the
> global "input").
> 
> Not sure how well that would work, though.

In essence everything builds into dist.$component and then at the end of
each component raisin automatically takes that and overlays whatever it
contains over some central dist.all which subsequent components actually
build against? Perhaps with a mode to seed dist.all from dist.* iff
dist.all doesn't exist.

I can see how that might work in general, and I think it would solve at
least osstest's desired use case, but I don't know enough about raisin
internals to know whether it will actually fit in. Let's see what
Stefano says.
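
Concretely, the seed-then-overlay step I have in mind could look
something like the sketch below. This is only an illustration of the
idea, not raisin's actual layout: the directory names (dist.xen,
dist.libvirt, dist.all) and file names are made up for the example.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Simulate two per-component build outputs, as if raisin had just
# finished building the xen and libvirt components.
mkdir -p dist.xen/usr/lib dist.libvirt/usr/sbin
echo xenlib   > dist.xen/usr/lib/libxenctrl.so
echo libvirtd > dist.libvirt/usr/sbin/libvirtd

# Seed dist.all from the first component iff it doesn't exist yet...
[ -d dist.all ] || cp -a dist.xen dist.all

# ...then overlay each subsequent component's output on top, so that
# later components build against the accumulated dist.all.
cp -a dist.libvirt/. dist.all/
```

After the overlay, dist.all holds the union of both components'
outputs, which is what a subsequent component (or osstest) would
build against.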

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
