
Re: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps for Allwinner H6 SoC



On Tue, Jul 28, 2020 at 11:16 AM Stefano Stabellini
<sstabellini@xxxxxxxxxx> wrote:
>
> On Tue, 28 Jul 2020, André Przywara wrote:
> > On 28/07/2020 11:39, Alejandro wrote:
> > > Hello,
> > >
> > > On Sun, Jul 26, 2020 at 10:25 PM, André Przywara
> > > (<andre.przywara@xxxxxxx>) wrote:
> > >> So this was actually my first thought: The firmware (U-Boot SPL) sets up
> > >> some basic CPU frequency (888 MHz for H6 [1]), which is known to never
> > >> overheat the chip, even under full load. So any concern from your side
> > >> about the board or SoC overheating could be dismissed, with the current
> > >> mainline code, at least. However you lose the full speed, by quite a
> > >> margin on the H6 (on the A64 it's only 816 vs 1200(ish) MHz).
> > >> However, without the clock entries in the CPU node, the frequency would
> > >> never be changed by Dom0 anyway (nor by Xen, which doesn't even know how
> > >> to do this).
> > >> So from a practical point of view: unless you hack Xen to pass on more
> > >> cpu node properties, you are stuck at 888 MHz anyway, and don't need to
> > >> worry about overheating.
> > > Thank you. Knowing that at least it won't overheat is a relief. But
> > > the performance definitely suffers from the current situation, and
> > > quite a bit. I'm thinking about using KVM instead: even if it does
> > > less paravirtualization of guests,
> >
> > What is this statement based on? I think on ARM this never really
> > applied, and in general whether you do virtio or xen front-end/back-end
> > does not really matter.

When you say "in general" here, it reads as a very broad claim that
virtio and Xen front-end/back-ends are equivalent and interchangeable,
which could mislead a newcomer.

There are important differences between the isolation properties of
classic virtio and Xen's front-end/back-ends -- and also the Argo
transport. This matters particularly for Xen, because Xen has
prioritized strong isolation between execution environments to a
greater extent than some other hypervisors, and that isolation is a
critical differentiator for it. The importance of isolation is why Xen
4.14's headline feature was support for Linux stubdomains, upstreamed
to Xen after years of work by the Qubes OS and OpenXT communities.

> > IMHO any reasoning about performance just based
> > on software architecture is mostly flawed (because it's complex and
> > reality might have missed some memos ;-)

That's another pretty strong statement. Measurement is essential, but
performance analysis that is informed and directed by an understanding
of the architecture under test is usually more rigorous and persuasive
than measurement done without it.

> > So just measure your particular use case, then you know.

Hmm.
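To make that measurement concrete: below is a minimal, hedged sketch of
a CPU-bound micro-benchmark that could be run unchanged inside a Xen
domU and a KVM guest on the same board, to get a first-order feel for
the effect of the 888 MHz cap. The workload, function name, and sizes
here are arbitrary illustrative choices, not anything from this thread:

```python
# Illustrative micro-benchmark: time a fixed CPU-bound workload
# (SHA-256 over a buffer) and compare the wall-clock result between
# environments. All names and sizes are arbitrary for this sketch.
import hashlib
import time


def cpu_benchmark(megabytes: int = 64) -> float:
    """Return seconds taken to SHA-256 hash `megabytes` MiB of zeros."""
    block = b"\x00" * (1024 * 1024)  # 1 MiB of data per update
    h = hashlib.sha256()
    start = time.perf_counter()
    for _ in range(megabytes):
        h.update(block)
    return time.perf_counter() - start


if __name__ == "__main__":
    elapsed = cpu_benchmark()
    print(f"hashed 64 MiB in {elapsed:.3f}s")
```

Running the same script in both guests (and ideally an I/O-bound
equivalent, since disk and network paths differ most between virtio and
Xen PV drivers) would answer the question for this particular board far
better than reasoning alone.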

> > > I'm sure that the ability to use
> > > the maximum frequency of the CPU would offset the additional overhead,
> > > and in general offer better performance. But with KVM I lose the
> > > ability to have individual domU's dedicated to some device driver,
> > > which is a nice thing to have from a security standpoint.
> >
> > I understand the theoretical merits, but a) does this really work on
> > your board and b) is this really more secure? What do you want to
> > protect against?
>
> For "does it work on your board", the main obstacle is typically IOMMU
> support to be able to do device assignment properly. That's definitely
> something to check. If it doesn't work nowadays you can try to
> workaround it by using direct 1:1 memory mappings [1].  However, for
> security then you have to configure a MPU. I wonder if H6 has a MPU and
> how it can be configured. In any case, something to keep in mind in case
> the default IOMMU-based setup doesn't work for some reason for the
> device you care about.
>
> For "is this really more secure?", yes it is more secure as you are
> running larger portions of the codebase in unprivileged mode and isolated
> from each other with IOMMU (or MPU) protection. See what the OpenXT and
> Qubes OS guys have been doing.

Yes. Both projects have done quite a lot of work to enable and
maintain driver domains.
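For anyone new to the idea, the general shape of a driver domain on an
Arm board looks like the xl domain configuration sketched below. This
is purely illustrative: the device-tree path, IRQ number, and MMIO
range are placeholders, not values for the H6, and device-tree
passthrough via "dtdev" requires working IOMMU/SMMU support:

```text
# Hypothetical xl config for a network driver domain on an Arm board.
# All paths, addresses, and IRQ numbers below are illustrative only.
name = "net-driver-domain"
kernel = "/root/Image"
memory = 256
vcpus = 1
# Pass a device-tree node through to the guest (needs IOMMU support):
dtdev = [ "/soc/ethernet@1c30000" ]
irqs = [ 46 ]
iomem = [ "0x1c30,1" ]
```

The backend for the device then runs in this unprivileged domain rather
than in dom0, which is the isolation benefit being discussed above.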

thanks,

Christopher

>
>
> [1] https://marc.info/?l=xen-devel&m=158691258712815
