
Re: [Xen-devel] [PATCH for-4.12 5/8] pvh/dom0: warn when dom0_mem is not set to a fixed value



>>> On 07.02.19 at 16:39, <roger.pau@xxxxxxxxxx> wrote:
> On Wed, Feb 06, 2019 at 06:54:23AM -0700, Jan Beulich wrote:
>> >>> On 30.01.19 at 11:36, <roger.pau@xxxxxxxxxx> wrote:
>> > There have been several reports of the dom0 builder running out of
>> > memory when buildign a PVH dom0 without havingf specified a dom0_mem
>> 
>> "building" and "having"
>> 
>> > value. Print a warning message if dom0_mem is not set to a fixed value
>> > when booting in PVH mode.
>> 
>> Why does it need to be a fixed value? I.e. why can't you simply
>> put this warning next to where the default gets established,
>> when nr_pages is zero?
> 
> Ack, but I guess you likely also want to change the printed warning so
> it does say "fixed"?

Did you mean '... so it doesn't say "fixed"'? If so - sure, the message
of course should reflect what is happening.

>> > --- a/xen/arch/x86/dom0_build.c
>> > +++ b/xen/arch/x86/dom0_build.c
>> > @@ -344,6 +344,10 @@ unsigned long __init dom0_compute_nr_pages(
>> >      if ( !dom0_mem_set && CONFIG_DOM0_MEM[0] )
>> >          parse_dom0_mem(CONFIG_DOM0_MEM);
>> >  
>> > +    if ( is_hvm_domain(d) && !dom0_size.nr_pages )
>> > +        printk(
>> > +"WARNING: consider setting dom0_mem to a fixed value when using PVH mode\n");
>> 
>> Pretty unusual indentation. Is there any reason for you doing so?
> 
> Did it that way to avoid splitting and to attempt to keep the line as
> short as possible. Would you prefer me to split the message?

Well, splitting after WARNING: seems reasonable and unlikely to get
in the way of grep-ing for the message. But if you think a split there
is undesirable, then put it all on one line.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel