
Re: Xen on Zen 3


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Matyáš Kroupa <kroupa.matyas@xxxxxxxxx>
  • Date: Tue, 07 Apr 2026 12:14:55 +0200
  • Authentication-results: eu.smtp.expurgate.cloud; dkim=pass header.s=20251104 header.d=gmail.com header.i="@gmail.com" header.h="Content-Transfer-Encoding:MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From"
  • Cc: xen-devel@xxxxxxxxxxxxx
  • Delivery-date: Tue, 07 Apr 2026 10:15:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tuesday, April 7, 2026, 11:05:40 CEST, Jan Beulich
wrote:
> Not exactly, there is an earlier exit from the function when num_roots is 0.
> If that line is the problem one, then presumably num_roots < num_nodes, thus
> yielding roots_per_node as 0. Sadly you didn't enable enough verbosity for
> 
>       pr_debug("Found %d AMD root devices\n", num_roots);
> 
> to actually leave a trace in the log. I'd guess the value to be 1, but there
> being multiple nodes at the same time. You may want to instrument the
> function a little more to be certain.

I booted with either loglevel=8 or ignore_loglevel, but could not get the 
pr_debug to output anything. It did, however, print a lot of PCI and other 
debug messages, as expected.
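
(A likely explanation, for what it's worth: pr_debug() call sites are compiled 
out entirely unless DEBUG is defined for that file or CONFIG_DYNAMIC_DEBUG is 
enabled, so loglevel=8 and ignore_loglevel only affect messages that are 
actually emitted. With dynamic debug built in, something along these lines on 
the kernel command line should enable the call sites; the file name here is an 
assumption about where the function lives:)

```
dyndbg="file amd_nb.c +p"
```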
 
> For your immediate purpose you may want to change the "!num_roots" check
> into a "num_roots < num_nodes" one (on the assumption that num_nodes
> can't be 0). Whether that's acceptable upstream I don't know, of course.

I'll try that later.

Matyáš Kroupa
