
Re: [Xen-devel] [OSSTEST PATCH] README.hardware-acquisition [and 1 more messages] [and 2 more messages]



Hi all, 

adding Wei because of  ...

User facing part: https://gitlab.com/xen-project/xen/pipelines
Back-end: https://gitlab.com/xen-project/xen-gitlab-ci
There are also some related scripts in
http://xenbits.xen.org/gitweb/?p=xen.git;a=tree;f=automation;hb=HEAD

On 01/11/2018, 18:12, "Stefano Stabellini" <sstabellini@xxxxxxxxxx> wrote:

    Hi Ian,
    
    Thank you for the detailed answer and the willingness to see OSSTest
    changed in this respect.
    
    Let me say upfront that, as much as I would like this to be done, I had
    a look at my schedule and, realistically, I can only volunteer very
    little time for this. As for the two Xilinx boards, it looks like
    we'll just have to wait for Debian.
    
    For the sake of this discussion and of brainstorming solutions, below
    are a couple of questions and answers on how to support different
    kernels with Debian.
    
    
    On Thu, 1 Nov 2018, Ian Jackson wrote:
    > > Yes, we should discuss the technical details on how to use our own
    > > quasi-vanilla Linux branch together with the Debian installer. That's
    > > all we need AFAICT.
    > 
    > OK.  So:
    > 
    > 
    > I see two possible approaches:
    > 
    > Firstly, chicken-and-egg: Use osstest's `anointed job' mechanism to
    > chain one Xen ARM kernel build from the next.  (The anointed job
    > feature in osstest allows a certain build to be declared generally
    > good for use by other jobs.  The anointment typically takes place at
    > the end of a push gate flight, when the build job that is being
    > anointed has been shown to work properly.)
    > 
    > Secondly, cross-compilation on x86.
    > 
    > I think cross-compilation on x86 is probably going to be easier
    > because it is conceptually simpler.  It also avoids difficulties if
    > the anointed build should turn out to be broken on some hosts (this
    > ought to be detected by the push gate system, but...).  And, frankly,
    > our x86 hardware is a lot faster.
    > 
    > So, assuming the plan is to do cross-compilation on x86.
    > 
    > The prerequisite is obviously an appropriate cross-compiler.  Will the
    > Debian cross-compilers do ?
    
    Probably it would work, but I don't know for sure. Most people use the
    Linaro compiler and toolchain:
    
    
https://releases.linaro.org/components/toolchain/binaries/latest-7/aarch64-linux-gnu/
    https://releases.linaro.org/components/toolchain/gcc-linaro/latest-7/
    
    Testing the Debian cross-compiler would be very easy.
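
For what it's worth, that smoke test could be as simple as the following
(the package name is Debian's; the kernel tree and targets are just an
example):

    # Install Debian's aarch64 cross toolchain
    apt-get install gcc-aarch64-linux-gnu

    # Try a cross-build of an arm64 kernel with it
    cd linux
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j$(nproc) Image modules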
    
I was wondering whether we could use the images in
https://gitlab.com/xen-project/xen/container_registry as a baseline for
OSSTEST in these instances.
We may be close to solving the build issues in the GitLab CI (via
WorksOnArm), and it should be possible to create some infrastructure to
build custom images, push them to
https://gitlab.com/xen-project/xen/container_registry, and pull them from
there.
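
A rough sketch of that flow, assuming the registry above and a purely
hypothetical dom0-arm64 image name:

    # Build a custom Dom0 baseline image and push it to the project registry
    docker build -t registry.gitlab.com/xen-project/xen/dom0-arm64:latest .
    docker login registry.gitlab.com
    docker push registry.gitlab.com/xen-project/xen/dom0-arm64:latest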

I don't know whether that solves the full problem or how easy it would be:
e.g. would we still need the cross-compiler for Xen? But we could at least
separate the Dom0 kernel / distro build from OSSTEST.
    
    > If not then maybe this is not the best
    > approach because otherwise it's not clear where we'll get a suitable
    > compiler.
    > 
    > If the Debian cross compilers are OK, then I think the necessary
    > changes to osstest are:
    > 
    > 1. Introduce a distinction between the host (GCC terminology: build)
    >    and target (GCC terminology: host) architectures, in ts-xen-build.
    >    This includes adding a call to target_install_packages to install
    >    the cross compiler, and appropriately amending the configure and
    >    make runes.  Perhaps some of this will want to be in
    >    Osstest/BuildSupport.pm.  The runvars for build jobs will need to
    >    be reviewed to decide whether a new runvar is needed or whether
    >    cross-compilation can be inferred from a currently-unsupported
    >    combination of runvars (particularly, arch vs. hostflags).
    > 
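
To make (1) concrete, the amended runes would presumably end up along these
lines (the package is Debian's; the make rune is illustrative and only covers
the hypervisor, the tools build would need the same treatment):

    # On the x86 build host: install the cross toolchain
    apt-get install gcc-aarch64-linux-gnu

    # Cross-build the hypervisor for arm64
    make xen XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
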
    > 2. Maybe change ts-kernel-build to be able to additionally produce a
    >    .deb, or a cpio full of modules, for use by step 5.  (This should be
    >    optional, controlled by a runvar, since it probably doubles the
    >    size of the build output...)
    > 
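
For (2), either of these might do (paths are illustrative; bindeb-pkg is the
upstream kernel's own packaging target):

    # Option A: have the kernel build emit .debs directly
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- bindeb-pkg

    # Option B: install the modules into a scratch dir and cpio them up
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
         INSTALL_MOD_PATH=$PWD/modroot modules_install
    ( cd modroot && find . | cpio -o -H newc | gzip ) > modules.cpio.gz
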
    > 3. Change make*flight and mfi-* to, on ARM, run the existing kernel
    >    build job on x86 by setting the job runvars appropriately.
    > 
    > 4a. Teach the debian-installer driver in Debian.pm how to pick up a
    >    kernel image from another job.  It would look at a runvar
    >    dikernelbuildjob or something I guess.
    > 
    > 4b. Teach it to pick up kernel modules from another job and stuff
    >    them into its installer cpio before use.
    > 
    > 4c. Teach it to put the kernel and modules onto the being-installed
    >    system.
    > 
    >    This would be a variant of, or amendment to, or alternative to,
    >    Osstest/Debian.pm:di_special_kernel or its call site.  The kernel's
    >    ability to handle concatenated cpio images may be useful.
    > 
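
The concatenated-cpio trick mentioned above is just this (filenames
illustrative):

    # The kernel accepts an initramfs made of several concatenated,
    # individually compressed cpio archives, so extra modules can be
    # appended to d-i's initrd without unpacking it
    cat initrd.gz modules.cpio.gz > initrd-with-modules.gz
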
    >    We will want to refactor into a utility library (probably a file
    >    of shell functions) at least some of the code in
    >    mg-debian-installer-update for unpicking a kernel .deb (usually
    >    from -backports) and fishing out the kernel image and the modules,
    >    and stuffing the modules into an existing installer cpio archive.
    > 
    >    Whatever approach is taken, the modules in the installer must be a
    >    subset because the whole set of modules is very large and may make
    >    the initramfs too big to be booted.  See the list of module paths
    >    in mg-debian-installer-update.
    > 
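
The unpicking-and-subsetting that mg-debian-installer-update does is roughly
this (the module paths and filenames here are illustrative, not the actual
list):

    # Unpack the kernel .deb and keep only a subset of the modules
    dpkg-deb -x linux-image-arm64.deb kdeb
    ( cd kdeb && find lib/modules \( -path '*/kernel/drivers/net/*' \
          -o -name 'modules.*' \) | cpio -o -H newc | gzip ) \
        > modules.cpio.gz
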
    >    NB overall there are four aspects to (4): (i) arranging to boot the
    >    right kernel; (ii) getting the modules into the installer
    >    environment; and getting both (iii) kernel and (iv) modules into
    >    the being-installed system.
    > 
    > 5. Change make*flight and mfi-* on ARM to add the new runvar so that
    >    ARM flights use our own kernels rather than Debian's.
    > 
    > 6. Review the arrangements for reuse of existing build jobs, to maybe
    >    reuse ARM kernel builds more often.  Search cr-daily-branch for
    >    mg-adjust-flight-makexrefs.  Probably, an additional call should be
    >    added with some appropriate conditions.
    
    I thought that we could provide a deb repository with alternative
    kernels for OSSTest to use. We would have scripts to generate those deb
    packages from the Xen ARM Linux tree in a repository on xenbits, but we
    wouldn't necessarily have OSSTest run the script. Initially, we could
    run the scripts by hand; later, we could run them automatically in
    OSSTest or elsewhere. Is that a possibility? I already have Dockerfiles
    (AKA bash scripts) to build an ARM kernel on a few distros, which is
    something I could make available.
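
If the packages come out of the kernel's own deb-pkg/bindeb-pkg targets,
publishing them could be fairly simple; reprepro is one option (the
repository path and suite name below are made up):

    # Add the built kernel packages to a plain apt repository on xenbits
    reprepro -b /srv/kernel-repo includedeb stretch ../linux-image-*.deb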
    
    This morning Julien had one more suggestion: building the kernel with
    OSSTest on SoftIron, which we know works; it would be a native
    compilation. Then we could use the built kernel together with the
    Debian installer on the other boards (Xilinx, Renesas, etc.).
    
    Either way, the kernel to be used with the embedded boards doesn't need
    to be rebuilt often, only once a month or so.
    
That would fit the https://gitlab.com/xen-project/xen/container_registry
model, where we store Dom0 baselines as containers for builds via the
GitLab CI.

This may be a stupid idea, but I wanted to make sure that we consider all
options.

Regards
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

