
Re: [PATCH v2 0/3] Yocto Gitlab CI


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Michal Orzel <michal.orzel@xxxxxxx>
  • Date: Wed, 19 Oct 2022 11:06:35 +0200
  • Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Doug Goldstein" <cardoe@xxxxxxxxxx>
  • Delivery-date: Wed, 19 Oct 2022 09:07:14 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Stefano,

On 19/10/2022 02:02, Stefano Stabellini wrote:
> 
> 
> On Mon, 17 Oct 2022, Stefano Stabellini wrote:
>> It should be
>>
>> BB_NUMBER_THREADS="2"
>>
>> but that worked! Let me run a couple more tests.
> 
> I could successfully run a Yocto build test with qemuarm64 as the target in
> gitlab-ci, hurray! No size issues, no build-time issues, everything was
> fine. See:
> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193051236
> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193083119
> 
> I made the appended changes on top of this series.
> 
> - I pushed registry.gitlab.com/xen-project/xen/yocto:kirkstone and
>   registry.gitlab.com/xen-project/xen/yocto:kirkstone-qemuarm64
> - for the gitlab-ci runs, we need to run build-yocto.sh from the copy in
>   xen.git, not from a copy stored inside a container
> - when building the kirkstone-qemuarm64 container the first time
>   (outside of gitlab-ci) I used COPY and took the script from the local
>   xen.git tree
> - after a number of tests, I settled on BB_NUMBER_THREADS="8"; anything more
>   than that breaks on some workstations, so please add it
> - I am running the yocto build on arm64 so that we can use the arm64
>   hardware to do it in gitlab-ci
> 
> Please feel free to incorporate these changes in your series, and add
> corresponding changes for the qemuarm32 and qemux86 targets.
> 
> I am looking forward to it! Almost there!
> 
> Cheers,
> 
> Stefano
> 
> 
> diff --git a/automation/build/yocto/build-yocto.sh b/automation/build/yocto/build-yocto.sh
> index 0d31dad607..16f1dcc0a5 100755
> --- a/automation/build/yocto/build-yocto.sh
> +++ b/automation/build/yocto/build-yocto.sh
> @@ -107,6 +107,9 @@ IMAGE_INSTALL:append:pn-xen-image-minimal = " ssh-pregen-hostkeys"
>  # Save some disk space
>  INHERIT += "rm_work"
> 
> +# Reduce number of jobs
> +BB_NUMBER_THREADS="8"
> +
>  EOF
> 
>      if [ "${do_localsrc}" = "y" ]; then
> diff --git a/automation/build/yocto/kirkstone-qemuarm64.dockerfile b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> index f279a7af92..aea3fc1f3e 100644
> --- a/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> +++ b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> @@ -16,7 +16,8 @@ ARG target=qemuarm64
> 
>  # This step can take one to several hours depending on your download bandwith
>  # and the speed of your computer
> -RUN /home/$USER_NAME/bin/build-yocto.sh --dump-log $target
> +COPY ./build-yocto.sh /
> +RUN /build-yocto.sh --dump-log $target
> 
>  FROM $from_image
> 
> diff --git a/automation/build/yocto/kirkstone.dockerfile b/automation/build/yocto/kirkstone.dockerfile
> index 367a7863b6..ffbd91aa90 100644
> --- a/automation/build/yocto/kirkstone.dockerfile
> +++ b/automation/build/yocto/kirkstone.dockerfile
> @@ -84,9 +84,6 @@ RUN mkdir -p /home/$USER_NAME/yocto-layers \
>               /home/$USER_NAME/xen && \
>      chown $USER_NAME.$USER_NAME /home/$USER_NAME/*
> 
> -# Copy the build script
> -COPY build-yocto.sh /home/$USER_NAME/bin/
> -
>  # clone yocto repositories we need
>  ARG yocto_version="kirkstone"
>  RUN for rep in \
> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> index ddc2234faf..4b8bcde252 100644
> --- a/automation/gitlab-ci/build.yaml
> +++ b/automation/gitlab-ci/build.yaml
> @@ -584,6 +584,22 @@ alpine-3.12-gcc-arm64-boot-cpupools:
>      EXTRA_XEN_CONFIG: |
>        CONFIG_BOOT_TIME_CPUPOOLS=y
> 
> +yocto-kirkstone-qemuarm64:
> +  stage: build
> +  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
> +  script:
> +    - ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64
> +  variables:
> +    CONTAINER: yocto:kirkstone-qemuarm64
> +  artifacts:
> +    paths:
> +      - '*.log'
> +      - '*/*.log'
The two path entries above are not needed, as the 'logs/*' entry below already
covers them (logs are only ever stored in logs/); see the trimmed sketch after
the quoted hunk.

> +      - 'logs/*'
> +    when: always
> +  tags:
> +    - arm64
> +
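For illustration, the artifacts section of the job above could then shrink to
something like this (just a sketch of the same hunk with the redundant paths
dropped):

  artifacts:
    paths:
      - 'logs/*'
    when: always
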
build-yocto.sh performs both the build and the run actions, so as it stands this
job would be better placed in test.yaml. Ideally we would create one build job
(passing --no-run) in build.yaml and one test job (passing --no-build) in
test.yaml. That would probably require marking the path
build/tmp/deploy/***/qemuarm64 as a build artifact, and the open question is
whether having that path alone would be enough for runqemu (Bertrand's opinion
needed).
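
A rough sketch of such a split (untested; the job names, the needs: wiring and
the exact deploy path to publish are my assumptions, not something we have
settled on):

# build.yaml (sketch)
yocto-qemuarm64-build:
  stage: build
  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
  variables:
    CONTAINER: yocto:kirkstone-qemuarm64
  script:
    # --no-run: build only, leave running the image to the test job
    - ./automation/build/yocto/build-yocto.sh -v --no-run --log-dir=./logs --xen-dir=`pwd` qemuarm64
  artifacts:
    paths:
      - 'logs/*'
      # exact subpath under build/tmp/deploy/ needed by runqemu to be confirmed
      - 'build/tmp/deploy/'
    when: always
  tags:
    - arm64

# test.yaml (sketch)
yocto-qemuarm64-test:
  stage: test
  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
  variables:
    CONTAINER: yocto:kirkstone-qemuarm64
  needs:
    - yocto-qemuarm64-build
  script:
    # --no-build: reuse the deploy tree published by the build job
    - ./automation/build/yocto/build-yocto.sh -v --no-build --log-dir=./logs --xen-dir=`pwd` qemuarm64
  artifacts:
    paths:
      - 'logs/*'
    when: always
  tags:
    - arm64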

Apart from that, there is the question of Yocto releases and container/test
names. Yocto needs to stay up to date in order to properly build Xen and the
tools, which basically means we will need to update the containers once per
Yocto release. The old containers would still have to be kept in our CI
container registry so that we can keep running CI for older versions of Xen.
However, updating the containers would also require modifying the existing
tests: right now we have e.g. yocto-kirkstone-qemuarm64, but in a month we
would have to rename it to yocto-langdale-qemuarm64. In a few years' time this
would leave us with several CI jobs that are identical except for the
name/container. I would therefore suggest naming the CI jobs yocto-qemuarm64
(without the Yocto release name) and defining a top-level YOCTO_CONTAINER
variable to hold the current release container. That would solve the issue
described above.
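
In gitlab-ci terms that could look roughly like this (a sketch; the variable
name and whether the -qemuarm64 suffix stays in the image tag or moves into
the variable are open to discussion):

# Top-level variable holding the current Yocto release container;
# only this value changes when we move to a new Yocto release.
variables:
  YOCTO_CONTAINER: yocto:kirkstone

# Job name no longer encodes the Yocto release.
yocto-qemuarm64:
  stage: build
  image: registry.gitlab.com/xen-project/xen/${YOCTO_CONTAINER}-qemuarm64
  script:
    - ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64
  artifacts:
    paths:
      - 'logs/*'
    when: always
  tags:
    - arm64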


~Michal



 

