
Re: [Xen-devel] [RFC PATCH v2 3/3] tools, libxl: handle the iomem parameter with the memory_mapping hcall



On Fri, 2014-03-14 at 12:49 +0000, Ian Campbell wrote:
> On Fri, 2014-03-14 at 13:15 +0100, Dario Faggioli wrote:
> > The fact that, if what you say below is true, "iomem" does not work at
> > all, and no one complained from the Linux world so far, seems to me to 
> 
> iomem *does* work just fine for x86 PV guests, which is what it was
> added for. Apparently it was never extended to HVM guests.
> 
Ok, understood.

> > For one, the "Allow guest to access" there leaves a lot of room for
> > interpretation, I think.
> 
> Not if you think about it in terms of being a PV guest option, where the
> mapping just happens when the guest makes a PTE to map it.
> 
I see it now.

> Probably this option is currently misplaced in the man page, it should
> be PV specific.
> 
That seems a good thing to do.

> > To me, a legitimate use case is this: I want to run version X of my non
> > DT capable OS on version Z of Xen, on release K of board B. In such a
> > configuration, the GPIO controller is at MFN 0x000abcd, and I want only
> > VM V to have direct access to it (board B does not have an IOMMU).
> > 
> > I would also assume that one is in full control of the guest address
> 
> If "one" here is the user then I don't think so.
> 
Well, the fact is that the "user", in this context, is whoever puts the
embedded system together into a product, rather than the people actually
using that product, so I don't find that scenario all that unlikely.

> A given version of Xen will provide a particular memory layout to the
> guest. If you want to run non-single image OSes (i.e. things without
> device tree) on Xen then you will need to build a specific kernel binary
> for that version of Xen hardcoding the particular layout of that version
> of Xen. If you upgrade Xen then you will need to rebuild your guest to
> use the correct address layout.
> 
Which sounds really bad, but at the same time that is exactly the case in
most of the embedded scenarios I've been in touch with.

> If the user doesn't want that, i.e. they want a single binary to run on
> multiple versions of Xen, then they had better implement device tree
> support in their kernel.
> 
I totally agree. :-)

> > I certainly don't claim to have the right answer but, in the described
> > scenario, either:
> >  1) the combination of "iomem=[ MFN,NR@PFN ]", defaulting to 1:1 if
> >     "@PFN" is missing, and e820_host
> >  2) calling (the equivalent of) XEN_DOMCTL_memory_map from the guest 
> >     kernel
> > 
> > would be good solutions, to the point that I think we could even support
> > both. The main point being that, I think, even in the worst case, any
> > erroneous usage of either would "just" destroy the guest, and that's
> > acceptable.
> 
> I don't think we want both and I'm leaning towards #1 right now, but
> with the e820_host thing being unnecessary in the first instance.
> 
Well, perfect then: that's what I have been arguing for since the beginning
of this thread. :-)
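
To make option #1 concrete, a guest config could then look something like
the sketch below (addresses made up purely for illustration; only the @PFN
part is new with respect to the existing iomem syntax, and the e820_host
part is left out for now, as you suggest):

  # one page of the GPIO controller at MFN 0xabcd, mapped 1:1 (no @PFN),
  # and two pages at MFN 0xd0000 remapped to guest frame 0x100000
  iomem = [ "abcd,1", "d0000,2@100000" ]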

> > If going for _only_ 2), then "iomem=[]" would just be there to ensure
> > that the future mapping operation will be successful, i.e., for granting
> > mapping rights, as it is doing right now. It would be up to the guest
> > kernel to make sure the MFNs it is trying to map are consistent with what
> > was specified in "iomem=[]". Given the use case we're talking about, I
> > don't think this is an unreasonable request, as long as we make the iomem
> > man entry state this more clearly.
> 
> My worry with this one is that it might make it harder to DTRT in
> the future, e.g. by adding device tree nodes to represent things mapped
> with iomem=[], by committing us to a world where the guest makes these
> mappings and not the tools.
> 
Again, I completely agree.

> > As I was saying above, I think there is room for both, but I don't mind
> > picking just one. However, if we want to fix iomem=[] and go as far as
> > having it do the mapping, then I think we all agree we need the
> > DOMCTL.
> > 
> > So, looks like the discussion resolves to something like:
> >  - do we need the DOMCTL for other purposes than iomem=[] ?
> >  - if no, what do we want to do with iomem=[] ?
> 
> Please come up with a design which answers this; I've given my opinions
> above, but if you think some other design is better then argue for it.
> 
Not at all, I concur with you. I like it because a guest kernel which is
compiled to find some device registers at a certain address can just go
ahead and use them without any further modification. In fact, specifying
what these addresses are is usually quite simple: it requires rebuilding,
but there are config/board files, etc., and a few projects even have nice
graphical frontends for this (I think ERIKA Enterprise has one too). Having
to issue the physmap call would not be terrible in this case, as we're
rebuilding anyway, but it is certainly one more modification.
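
Just to show the shape I have in mind for the toolstack side, here is a
rough sketch of the idea (not the actual patch: the iomem_range struct
below is made up, while xc_domain_iomem_permission() and
xc_domain_memory_mapping() are the existing libxc wrappers for the
respective DOMCTLs):

  #include <stdint.h>
  #include <xenctrl.h>

  /* Illustrative only: have the tools both grant access to and map each
   * iomem range for the guest, defaulting to a 1:1 layout when no explicit
   * guest frame was given in the config. */
  struct iomem_range {
      uint64_t mfn;  /* first machine frame of the region */
      uint64_t nr;   /* number of frames */
      int64_t  gfn;  /* guest frame from "@PFN", or -1 if not specified */
  };

  static int map_iomem_ranges(xc_interface *xch, uint32_t domid,
                              const struct iomem_range *r, unsigned int n)
  {
      unsigned int i;

      for (i = 0; i < n; i++) {
          uint64_t gfn = (r[i].gfn < 0) ? r[i].mfn : (uint64_t)r[i].gfn;
          int ret;

          /* grant the domain permission to access the machine frames... */
          ret = xc_domain_iomem_permission(xch, domid, r[i].mfn, r[i].nr, 1);
          if (ret < 0)
              return ret;

          /* ...and establish the p2m mapping on its behalf */
          ret = xc_domain_memory_mapping(xch, domid, gfn, r[i].mfn,
                                         r[i].nr, 1);
          if (ret < 0)
              return ret;
      }

      return 0;
  }

This way the guest kernel really does not need any further Xen specific
code for this: it just finds the registers where it expects them.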

I also agree with you when you say that this leaves us in a better
position for future decisions.

Finally, it looks to me like a more consistent extension of iomem's current
behavior in the pure x86 PV case.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
