Re: [Xen-devel] [PATCH] OSSTEST: introduce a raisin build test
Stefano Stabellini writes ("Re: [PATCH] OSSTEST: introduce a raisin build test"):
> That's fine as there is no hidden git cloning with raisin. All the trees
> are specified explicitly in the config file.
Is this a fundamental design principle?
The rump kernel build system uses git submodules, which are (very
annoying and) a kind of hidden git cloning, and it also has a
pseudo-submodule a bit like xen.git wrt qemu et al.
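For instance, a build wrapper can at least surface those hidden clones
before the build runs, so that the harness can pin or mirror them
explicitly. A minimal Python sketch (untested; not actual osstest
code):

import subprocess

def list_submodules(tree):
    # Return [(name, url), ...] declared in .gitmodules, or [].
    try:
        out = subprocess.check_output(
            ["git", "config", "-f", ".gitmodules", "--get-regexp",
             r"^submodule\..*\.url$"],
            cwd=tree, universal_newlines=True)
    except subprocess.CalledProcessError:
        return []  # no .gitmodules (or no submodules): nothing hidden
    subs = []
    for line in out.splitlines():
        key, url = line.split(None, 1)  # "submodule.<name>.url <url>"
        subs.append((key[len("submodule."):-len(".url")], url))
    return subs

for name, url in list_submodules("."):
    print("hidden clone: %s <- %s" % (name, url))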
> > Lastly you will (eventually) need to divide the output into one or more
> > component subtrees (e.g. ts-xen-build splits the hypervisor from the
> > tools in order to support 32-on-64 configs) and call built_stash_file on
> > them. Those then produce the outputs which other jobs can consume.
>
> Raisin has the capability of installing and configuring stuff on the
> host. I guess osstest wouldn't want to reuse that?
Probably not.
> Also how is the separation supposed to be done? Given that osstest
> requested raisin to build a certain number of components together,
> raisin would put them all in the same deb package. From what you wrote I
> take it that ts-raisin-build should operate differently, but how?
Your ts-raisin-build could request building components separately, of
course, but I don't think that's sufficient unless your notion of a
`component' separates the Xen tools from the Xen hypervisor.
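Concretely I would expect ts-raisin-build to post-process each build's
install tree along these lines (an untested Python sketch; the
directory layout, the assumption that the hypervisor is whatever lands
under boot/, and the output tarball names are all made up for
illustration):

import os, shutil, tarfile

def split_and_stash(distdir, outdir):
    # Split one build's install tree into hv and tools outputs.
    hv = os.path.join(outdir, "hv")        # hypervisor-only subtree
    tools = os.path.join(outdir, "tools")  # everything else
    shutil.copytree(distdir, tools)
    os.makedirs(hv)
    bootdir = os.path.join(tools, "boot")
    if os.path.isdir(bootdir):             # i386 may have no hypervisor
        shutil.move(bootdir, os.path.join(hv, "boot"))
    for name, tree in (("xenhvdist.tar.gz", hv),
                       ("xentoolsdist.tar.gz", tools)):
        with tarfile.open(os.path.join(outdir, name), "w:gz") as t:
            t.add(tree, arcname=".")  # the output other jobs consume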
Here is an example use case, as done by osstest:
 - build Xen on amd64
 - split the hypervisor from the tools,
   producing an amd64 hv and amd64 tools
 - build Xen on i386
 - split the hypervisor (if any) from the tools,
   producing an i386 hv (in applicable Xen versions) and i386 tools
 - install a fresh i386 box
 - put the amd64 hv and the i386 tools on it
 - boot the result, producing a 32-on-64 dom0
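The final composition step might then look like this (again only a
sketch, reusing the made-up tarball names from above):

import os, tarfile

def compose_32_on_64(amd64_out, i386_out, target_root):
    # Lay the amd64 hypervisor and the i386 tools onto the new box.
    for tarball in (os.path.join(amd64_out, "xenhvdist.tar.gz"),
                    os.path.join(i386_out, "xentoolsdist.tar.gz")):
        with tarfile.open(tarball) as t:
            t.extractall(target_root)
    # Booting the amd64 xen.gz with this 32-bit userspace gives the
    # 32-on-64 dom0.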
Ian.