
Re: xen cache colors in ARM



Hello Michal,

Yes, I use yocto.

I spent all of yesterday trying to follow your suggestions and ran into a problem.
I manually pasted the following strings into the Xen build config file:

CONFIG_EARLY_PRINTK
CONFIG_EARLY_PRINTK_ZYNQMP
CONFIG_EARLY_UART_CHOICE_CADENCE

The host hangs at build time.
Maybe I did not set something in the build config file?
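
For reference, those names look like Xen Kconfig symbols (e.g. from xen/arch/arm/Kconfig.debug); a minimal
sketch of how they would normally appear in the generated .config, assuming they are boolean options
(my assumption, not verified against this tree), is:

    # enable early serial output through the ZynqMP Cadence UART
    CONFIG_EARLY_PRINTK=y
    CONFIG_EARLY_PRINTK_ZYNQMP=y
    CONFIG_EARLY_UART_CHOICE_CADENCE=y

Note that .config entries normally take the CONFIG_FOO=y form rather than the bare symbol name.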

Regards,
Oleg

On Thu, 20 Apr 2023 at 11:57, Oleg Nikitenko <oleshiiwood@xxxxxxxxx> wrote:
Thanks Michal,

You gave me an idea.
I am going to try it today.

Regards,
O.

On Thu, 20 Apr 2023 at 11:56, Oleg Nikitenko <oleshiiwood@xxxxxxxxx> wrote:
Thanks Stefano.

I am going to do it today.

Regards,
O.

On Wed, 19 Apr 2023 at 23:05, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
On Wed, 19 Apr 2023, Oleg Nikitenko wrote:
> Hi Michal,
>
> I corrected xen's command line.
> Now it is
> xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null
> timer_slop=0 way_size=65536 xen_colors=0-3 dom0_colors=4-7";

Four colors is way too many for Xen; just use xen_colors=0-0. There is no
advantage in using more than one color for Xen.

Four colors is too few for Dom0 if you are giving 1600M of memory to Dom0.
Each color is 256M, so for 1600M you need at least 7 colors (1600M / 256M = 6.25, rounded up). Try:

xen_colors=0-0 dom0_colors=1-8
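
Combined with the rest of the command line quoted above, a sketch of the resulting chosen-node
property (assuming the other options stay as they were) would be:

    xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native sched=null timer_slop=0 way_size=65536 xen_colors=0-0 dom0_colors=1-8";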



> Unfortunately the result was the same.
>
> (XEN)  - Dom0 mode: Relaxed
> (XEN) P2M: 40-bit IPA with 40-bit PA and 8-bit VMID
> (XEN) P2M: 3 levels with order-1 root, VTCR 0x0000000080023558
> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> (XEN) Coloring general information
> (XEN) Way size: 64kB
> (XEN) Max. number of colors available: 16
> (XEN) Xen color(s): [ 0 ]
> (XEN) alternatives: Patching with alt table 00000000002cc690 -> 00000000002ccc0c
> (XEN) Color array allocation failed for dom0
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Error creating domain 0
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
>
> I am going to find out how command line arguments passed and parsed.
>
> Regards,
> Oleg
>
> On Wed, 19 Apr 2023 at 11:25, Oleg Nikitenko <oleshiiwood@xxxxxxxxx> wrote:
>       Hi Michal,
>
> You pointed me right at the problem. Thank you.
> I am going to follow your suggestion.
> Let's see what happens.
>
> Regards,
> Oleg
>
>
> On Wed, 19 Apr 2023 at 10:37, Michal Orzel <michal.orzel@xxxxxxx> wrote:
>       Hi Oleg,
>
>       On 19/04/2023 09:03, Oleg Nikitenko wrote:
>       >       
>       >
>       >
>       > Hello Stefano,
>       >
>       > Thanks for the clarification.
>       > My company uses yocto for image generation.
>       > What kind of information do you need in order to advise me in this case?
>       >
>       > Maybe the module sizes/addresses which were mentioned by @Julien Grall <julien@xxxxxxx>?
>
>       Sorry for jumping into the discussion, but FWICS the Xen command line you provided does not seem to be the one
>       Xen booted with. The error you are observing is most likely due to the dom0 colors configuration not being
>       specified (i.e. a missing dom0_colors=<> parameter). Although this parameter is set in the command line you
>       provided, I strongly doubt that this is the actual command line in use.
>
>       You wrote:
>       xen,xen-bootargs = "console=dtuart dtuart=serial0 dom0_mem=1600M dom0_max_vcpus=2 dom0_vcpus_pin bootscrub=0 vwfi=native
>       sched=null timer_slop=0 way_szize=65536 xen_colors=0-3 dom0_colors=4-7";
>
>       but:
>       1) way_szize has a typo (it should be way_size)
>       2) you specified 4 colors (0-3) for Xen, but the boot log says that Xen has only one:
>       (XEN) Xen color(s): [ 0 ]
>
>       This makes me believe that no colors configuration actually ends up in the command line that Xen booted with.
>       A single color for Xen is the "default if not specified", and the way size was probably calculated by querying the HW.
>
>       So I would suggest first cross-checking the command line in use.
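>
>       One way to do that (a sketch, assuming U-Boot with a serial console; ${fdt_addr} is just a placeholder
>       for wherever your device tree is loaded) is to dump the /chosen node from the U-Boot prompt right before
>       booting, and to compare it with the "Command line:" line that Xen prints early in its boot log:
>
>           fdt addr ${fdt_addr}    # point the fdt command at the DTB that will be handed to Xen
>           fdt print /chosen       # xen,xen-bootargs should contain the coloring parameters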
>
>       ~Michal
>
>
>       >
>       > Regards,
>       > Oleg
>       >
>       > On Tue, 18 Apr 2023 at 20:44, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
>       >
>       >     On Tue, 18 Apr 2023, Oleg Nikitenko wrote:
>       >     > Hi Julien,
>       >     >
>       >     > >> This feature has not been merged in Xen upstream yet
>       >     >
>       >     > > would assume that upstream + the series on the ML [1] work
>       >     >
>       >     > Please clarify this point.
>       >     > Because the two thoughts are controversial.
>       >
>       >     Hi Oleg,
>       >
>       >     As Julien wrote, there is nothing controversial. As you are aware,
>       >     Xilinx maintains a separate Xen tree specific for Xilinx here:
>       >     https://github.com/xilinx/xen
>       >
>       >     and the branch you are using (xlnx_rebase_4.16) comes from there.
>       >
>       >
>       >     Instead, the upstream Xen tree lives here:
>       >     https://xenbits.xen.org/gitweb/?p=xen.git;a=summary
>       >
>       >
>       >     The Cache Coloring feature that you are trying to configure is present
>       >     in xlnx_rebase_4.16, but not yet present upstream (there is an
>       >     outstanding patch series to add cache coloring to Xen upstream but it
>       >     hasn't been merged yet.)
>       >
>       >
>       >     Anyway, if you are using xlnx_rebase_4.16 it doesn't matter too much for
>       >     you as you already have Cache Coloring as a feature there.
>       >
>       >
>       >     I take it you are using ImageBuilder to generate the boot configuration? If
>       >     so, please post the ImageBuilder config file that you are using.
>       >
>       >     But from the boot message, it looks like the colors configuration for
>       >     Dom0 is incorrect.
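>       >
>       >     For reference, a minimal sketch of how the coloring arguments could be expressed in an
>       >     ImageBuilder config (assuming the XEN_CMD variable carries the Xen command line; the other
>       >     entries are illustrative placeholders, not values for this board):
>       >
>       >         XEN="xen"
>       >         XEN_CMD="console=dtuart dtuart=serial0 dom0_mem=1600M way_size=65536 xen_colors=0-0 dom0_colors=1-8"
>       >         DOM0_KERNEL="Image"
>       >         DOM0_RAMDISK="dom0-rootfs.cpio.gz"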
>       >
>
>
>

 

