
Re: [Xen-devel] [RFC PATCH v2 3/3] tools, libxl: handle the iomem parameter with the memory_mapping hcall



On Thu, 2014-03-13 at 15:34 +0000, Julien Grall wrote:
> Hi Ian,
> 
> On 03/13/2014 03:27 PM, Ian Campbell wrote:
> > On Mon, 2014-03-10 at 09:25 +0100, Arianna Avanzini wrote:
> >> Currently, the configuration-parsing code concerning the handling of the
> >> iomem parameter only invokes the XEN_DOMCTL_iomem_permission hypercall.
> >> This commit lets the XEN_DOMCTL_memory_mapping hypercall be invoked
> >> after XEN_DOMCTL_iomem_permission when the iomem parameter is parsed
> >> from a domU configuration file, so that the address range can be mapped
> >> to the address space of the domU.
> >> NOTE: the added code is still common to both x86 and ARM; it also
> >>       implements a simple 1:1 mapping that could clash with the domU's
> >>       existing memory layout if the range is already in use in the
> >>       guest's address space.
> > 
> > In that case you need to CC the x86 maintainers (Jan, Keir, Tim) here.
> > It doesn't seem to me that this is going to be the correct thing to do
> > for either x86 PV or x86 HVM guests.
> 
> As I said in my reply on V1, the XEN_DOMCTL_memory_mapping hypercall is
> called by QEMU for HVM guests. I was unable to find where this call is
> made for PV.

It probably isn't. A PV guest will simply write entries into its page
tables mapping these machine addresses, and Xen will use the caps
arrays to decide whether that is allowed.

IOW XEN_DOMCTL_memory_mapping is only useful for externally translated
guests (i.e. x86 HVM and ARM).
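
To make that concrete, here is a minimal sketch of the toolstack side
(assuming the existing libxc wrappers xc_domain_iomem_permission() and
xc_domain_memory_mapping(); the guest-type guard and error handling are
purely illustrative, not the proposed patch):

    /* Always grant the domain permission over the machine frames. For a
     * PV guest this is sufficient: Xen consults the iomem caps when the
     * guest later maps the frames through its own page tables. */
    ret = xc_domain_iomem_permission(CTX->xch, domid,
                                     io->start, io->number, 1);
    if (ret < 0)
        goto out;

    /* Only externally translated guests (x86 HVM, ARM) need Xen to insert
     * the mapping into the p2m on their behalf; here 1:1, gfn == mfn,
     * as in the patch under discussion. */
    if (guest_is_translated /* illustrative predicate */) {
        ret = xc_domain_memory_mapping(CTX->xch, domid,
                                       io->start /* first gfn */,
                                       io->start /* first mfn */,
                                       io->number, 1 /* add mapping */);
    }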

Ian.
> 
> >> Signed-off-by: Arianna Avanzini <avanzini.arianna@xxxxxxxxx>
> >> Cc: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> >> Cc: Paolo Valente <paolo.valente@xxxxxxxxxx>
> >> Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> >> Cc: Julien Grall <julien.grall@xxxxxxxxxx>
> >> Cc: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
> >> Cc: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
> >> Cc: Eric Trudeau <etrudeau@xxxxxxxxxxxx>
> >> Cc: Viktor Kleinik <viktor.kleinik@xxxxxxxxxxxxxxx>
> >> ---
> >>  tools/libxl/libxl_create.c | 17 +++++++++++++++++
> >>  1 file changed, 17 insertions(+)
> >>
> >> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> >> index a604cd8..6c206c3 100644
> >> --- a/tools/libxl/libxl_create.c
> >> +++ b/tools/libxl/libxl_create.c
> >> @@ -1099,6 +1099,23 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
> >>                   "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
> >>                   domid, io->start, io->start + io->number - 1);
> >>              ret = ERROR_FAIL;
> > 
> > Please add a continue here and drop the else, bringing the remainder of
> > the added code out by an indentation level.
> > 
> > The existing error handling in this function seems very sketchy to me;
> > there are a bunch of places where we set ret but then carry on
> > regardless, more than likely overwriting ret again. It's possible that a
> > bunch of goto error_out's should be added.
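
Concretely, something along these lines inside the existing loop over
the iomem ranges (a sketch only; I'm reproducing the log message from
the quoted hunk, the rest is illustrative):

    ret = xc_domain_iomem_permission(CTX->xch, domid,
                                     io->start, io->number, 1);
    if (ret < 0) {
        LOGE(ERROR,
             "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
             domid, io->start, io->start + io->number - 1);
        ret = ERROR_FAIL;
        continue;   /* or goto error_out, per the comment above */
    }

    /* ...the new mapping code goes here, one indentation level
     * shallower and with no surrounding "else"... */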
> > 
> >> +        } else {
> >> +            /*
> >> +             * NOTE: the following code is still common to both x86
> >> +             *       and ARM; it also implements a simple 1:1 mapping
> >> +             *       that could clash with the domU's existing memory
> >> +             *       layout if the range is already in use in the
> >> +             *       guest's address space.
> > 
> > The right thing to do here is to add a guest address field to
> > libxl_iomem_range and to extend the xl parse to accept
> >     iomem = [ "MFN,NR@PFN" ] syntax
> > where @PFN is optional and the default is 1:1.
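
For the xl parsing side I'm thinking of something roughly like this (a
sketch only; the sentinel, the error handling and where the values end
up in libxl_iomem_range are all just for illustration):

    /* Parse "MFN,NR[@PFN]" (hex); needs <inttypes.h> for SCNx64. */
    uint64_t start, number, gfn = UINT64_MAX;  /* UINT64_MAX: no @PFN */

    if (sscanf(buf, "%" SCNx64 ",%" SCNx64 "@%" SCNx64,
               &start, &number, &gfn) < 2) {
        fprintf(stderr, "xl: invalid iomem spec: \"%s\"\n", buf);
        exit(1);
    }
    if (gfn == UINT64_MAX)
        gfn = start;   /* @PFN omitted: default to a 1:1 mapping */
    /* start, number and gfn would then be stored in libxl_iomem_range,
     * with a new guest-address field added to carry gfn. */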
> 
> I disagree with this solution: the user doesn't know the guest layout.
> How is it possible to let them choose the PFN?
> 
> I think, for now, the best solution is to expose the same memory layout
> as the host when iomem is set.
> 
> Regards,
> 


