
Re: [Xen-devel] Xen hiding thermal capabilities from Dom0



On Thu, Nov 21, 2019 at 04:46:21PM +0100, Jürgen Groß wrote:
> On 21.11.19 16:36, Jan Beulich wrote:
> > On 21.11.2019 15:24, Jürgen Groß wrote:
> >> So: no, just giving dom0 access to the management hardware isn't going
> >> to fly. You need to have a proper virtualization layer for that purpose.
> > 
> > Or, like I had done in our XenoLinux forward port, you need to
> > go through hoops to make the coretemp driver actually understand
> > the environment it's running in.
> 
> This will still not guarantee you'll be able to reach all physical
> cpus. IIRC you pinned the vcpu to the respective physical cpu for
> performing its duty, but with cpupools this might not be possible for
> all physical cpus in the system.

Similar to the issue of MCE support, might it instead be better to have
*less* virtualization here instead of more?  The original idea behind Xen
was to leave the hard-to-virtualize bits visible and work with Domain 0.

Might it be better to expose this functionality to Domain 0, then
intercept the kernel calls?  It would only need one vcpu which can be
scheduled on any processor and moved around to retrieve the data.  This
way Xen wouldn't need a proper driver for the management hardware.
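For reference, the per-core readout the approach above ultimately depends on
is what coretemp does: read IA32_THERM_STATUS (MSR 0x19C) on each physical
cpu and decode it.  Below is a minimal decoding sketch (Python, for
illustration only); the raw value would really come from /dev/cpu/N/msr
after pinning the reading thread to cpu N, which is exactly the step that
breaks when the vcpu cannot reach every physical cpu.  The TjMax default of
100C is an assumption; the real value comes from MSR_TEMPERATURE_TARGET
(0x1A2) and varies by part.

```python
# Decode IA32_THERM_STATUS (MSR 0x19C) the way coretemp does.
# Bit 31: "reading valid"; bits 22:16: digital readout, in degrees
# below TjMax.

READING_VALID = 1 << 31

def core_temp_celsius(msr_value, tjmax=100):
    """Return the core temperature in C, or None if the readout is
    invalid.  tjmax=100 is only a common default, not universal."""
    if not (msr_value & READING_VALID):
        return None
    readout = (msr_value >> 16) & 0x7F   # degrees below TjMax
    return tjmax - readout

# Example: valid bit set, digital readout of 35 -> 100 - 35 = 65 C
raw = READING_VALID | (35 << 16)
print(core_temp_celsius(raw))  # -> 65
print(core_temp_celsius(0))    # -> None (valid bit clear)
```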


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@xxxxxxx  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
